Stainless steel

Stainless steel, also known as inox, corrosion-resistant steel (CRES), and rustless steel, is an iron-based alloy containing a minimum level of chromium that makes it resistant to rusting and corrosion. Stainless steel's resistance to corrosion results from its chromium content of 10.5% or more, which forms a passive film that protects the material and self-heals in the presence of oxygen. It can also be alloyed with other elements such as molybdenum, carbon, nickel, and nitrogen to develop a range of different properties depending on its specific use.
The alloy's properties, such as luster and resistance to corrosion, are useful in many applications. Stainless steel can be rolled into sheets, plates, bars, wire, and tubing. These can be used in cookware, cutlery, surgical instruments, major appliances, vehicles, construction material in large buildings, industrial equipment (e.g., in paper mills, chemical plants, water treatment), and storage tanks and tankers for chemicals and food products. Some grades are also suitable for forging and casting.
The biological cleanability of stainless steel is superior to both aluminium and copper, and comparable to glass. Its cleanability, strength, and corrosion resistance have prompted the use of stainless steel in pharmaceutical and food processing plants.
Different types of stainless steel are labeled with an AISI three-digit number. The ISO 15510 standard lists the chemical compositions of stainless steels of the specifications in existing ISO, ASTM, EN, JIS, and GB standards in a useful interchange table.
Properties
Corrosion resistance
Although stainless steel does rust, this only affects the outer few layers of atoms, its chromium content shielding deeper layers from oxidation.
The addition of nitrogen also improves resistance to pitting corrosion and increases mechanical strength. Thus, there are numerous grades of stainless steel with varying chromium and molybdenum contents to suit the environment the alloy must endure. Corrosion resistance can be increased further by the following means:
increasing chromium content to more than 11%
adding nickel to at least 8%
adding molybdenum (which also improves resistance to pitting corrosion)
Strength
The most common type of stainless steel, 304, has a tensile yield strength of roughly 210 MPa in the annealed condition. It can be strengthened by cold working to a yield strength of roughly 1,000 MPa in the full-hard condition.
The strongest commonly available stainless steels are precipitation hardening alloys such as 17-4 PH and Custom 465. These can be heat treated to tensile yield strengths of up to roughly 1,700 MPa.
Melting point
The melting point of stainless steel is near that of ordinary steel, and much higher than the melting points of aluminium or copper.
As with most alloys, the melting point of stainless steel is expressed as a range of temperatures rather than a single temperature, roughly 1,400 to 1,530 °C depending on the specific composition of the alloy in question.
Conductivity
Like steel, stainless steels are relatively poor conductors of electricity, with significantly lower electrical conductivities than copper. In particular, the electrical contact resistance (ECR) of stainless steel arises as a result of the dense protective oxide layer and limits its functionality in applications as electrical connectors. Copper alloys and nickel-coated connectors tend to exhibit lower ECR values and are preferred materials for such applications. Nevertheless, stainless steel connectors are employed in situations where ECR is a less critical design criterion and corrosion resistance is required, for example at high temperatures and in oxidizing environments.
Magnetism
Martensitic, duplex and ferritic stainless steels are magnetic, while austenitic stainless steel is usually non-magnetic. Ferritic steel owes its magnetism to its body-centered cubic crystal structure, in which iron atoms are arranged in cubes (with one iron atom at each corner) and an additional iron atom in the center. This central iron atom is responsible for ferritic steel's magnetic properties. This arrangement also limits the amount of carbon the steel can absorb to around 0.025%. Grades with low coercive field have been developed for electro-valves used in household appliances and for injection systems in internal combustion engines. Some applications, such as magnetic resonance imaging, require non-magnetic materials. Austenitic stainless steels, which are usually non-magnetic, can be made slightly magnetic through work hardening. Sometimes, if austenitic steel is bent or cut, magnetism occurs along the edge of the stainless steel because the crystal structure rearranges itself.
Wear
Galling, sometimes called cold welding, is a form of severe adhesive wear, which can occur when two metal surfaces are in relative motion to each other and under heavy pressure. Austenitic stainless steel fasteners are particularly susceptible to thread galling, though other alloys that self-generate a protective oxide surface film, such as aluminum and titanium, are also susceptible. Under high contact-force sliding, this oxide can be deformed, broken, and removed from parts of the component, exposing the bare reactive metal. When the two surfaces are of the same material, these exposed surfaces can easily fuse. Separation of the two surfaces can result in surface tearing and even complete seizure of metal components or fasteners. Galling can be mitigated by the use of dissimilar materials (bronze against stainless steel) or using different stainless steels (martensitic against austenitic). Additionally, threaded joints may be lubricated to provide a film between the two parts and prevent galling. Nitronic 60, made by selective alloying with manganese, silicon, and nitrogen, has demonstrated a reduced tendency to gall.
Density
The density of stainless steel ranges from approximately 7.5 to 8.0 g/cm3 (7,500 to 8,000 kg/m3), depending on the alloy.
History
The invention of stainless steel followed a series of scientific developments, starting in 1798 when chromium was first shown to the French Academy by Louis Vauquelin. In the early 1800s, British scientists James Stoddart, Michael Faraday, and Robert Mallet observed the resistance of chromium-iron alloys ("chromium steels") to oxidizing agents. Robert Bunsen discovered chromium's resistance to strong acids. The corrosion resistance of iron-chromium alloys may have been first recognized in 1821 by Pierre Berthier, who noted their resistance against attack by some acids and suggested their use in cutlery.
In the 1840s, both Britain's Sheffield steelmakers and then Krupp of Germany were producing chromium steel with the latter employing it for cannons in the 1850s. In 1861, Robert Forester Mushet took out a patent on chromium steel in Britain.
These events led to the first American production of chromium-containing steel by J. Baur of the Chrome Steel Works of Brooklyn for the construction of bridges. A US patent for the product was issued in 1869. This was followed by recognition of the corrosion resistance of chromium alloys by Englishmen John T. Woods and John Clark, who noted ranges of chromium from 5–30%, with added tungsten and "medium carbon". They pursued the commercial value of the innovation via a British patent for "Weather-Resistant Alloys".
Scientists researching steel corrosion in the second half of the 19th century paid little attention to the carbon content of the alloyed steels they were testing until 1898, when Adolphe Carnot and E. Goutal noted that chromium steels resist oxidation by acids better the less carbon they contain.
Also in the late 1890s, German chemist Hans Goldschmidt developed an aluminothermic (thermite) process for producing carbon-free chromium. Between 1904 and 1911, several researchers, particularly Leon Guillet of France, prepared alloys that would be considered stainless steel today.
In 1908, the Essen firm Friedrich Krupp Germaniawerft built the 366-ton sailing yacht Germania featuring a chrome-nickel steel hull, in Germany. In 1911, Philip Monnartz reported on the relationship between chromium content and corrosion resistance. On 17 October 1912, Krupp engineers Benno Strauss and Eduard Maurer patented as Nirosta the austenitic stainless steel known today as 18/8 or AISI type 304.
Similar developments were taking place in the United States, where Christian Dantsizen of General Electric and Frederick Becket (1875–1942) at Union Carbide were industrializing ferritic stainless steel. In 1912, Elwood Haynes applied for a US patent on a martensitic stainless steel alloy, which was not granted until 1919.
Harry Brearley
While seeking a corrosion-resistant alloy for gun barrels in 1913, Harry Brearley of the Brown-Firth research laboratory in Sheffield, England, discovered and subsequently industrialized a martensitic stainless steel alloy, today known as AISI type 420. The discovery was announced two years later in a January 1915 newspaper article in The New York Times.
The metal was later marketed under the "Staybrite" brand by Firth Vickers in England and was used for the new entrance canopy for the Savoy Hotel in London in 1929. Brearley applied for a US patent during 1915 only to find that Haynes had already registered one. Brearley and Haynes pooled their funding and, with a group of investors, formed the American Stainless Steel Corporation, with headquarters in Pittsburgh, Pennsylvania.
Rustless steel
Brearley initially called his new alloy "rustless steel". The alloy was sold in the US under different brand names like "Allegheny metal" and "Nirosta steel". Even within the metallurgy industry, the name remained unsettled; in 1921, one trade journal called it "unstainable steel". Brearley worked with a local cutlery manufacturer, who gave it the name "stainless steel". As late as 1932, Ford Motor Company continued calling the alloy "rustless steel" in automobile promotional materials.
In 1929, before the Great Depression, over 25,000 tons of stainless steel were manufactured and sold in the US annually.
Major technological advances in the 1950s and 1960s allowed the production of large tonnages at an affordable cost:
AOD process (argon oxygen decarburization), for the removal of carbon and sulfur
Continuous casting and hot strip rolling
The Z-Mill, or Sendzimir cold rolling mill
The Creusot-Loire Uddeholm (CLU) and related processes which use steam instead of some or all of the argon
Families
Stainless steel is classified into five different "families" of alloys, each having a distinct set of attributes. Four of the families are defined by their predominant crystalline structure: austenitic, ferritic, martensitic, and duplex. The fifth family, precipitation hardening, is defined by the type of heat treatment used to develop its properties.
Austenitic
Austenitic stainless steel is the largest family of stainless steels, making up about two-thirds of all stainless steel production. They have a face-centered cubic crystal structure. This microstructure is achieved by alloying steel with sufficient nickel, manganese, or nitrogen to maintain an austenitic microstructure at all temperatures, ranging from the cryogenic region to the melting point. Thus, austenitic stainless steels are not hardenable by heat treatment since they possess the same microstructure at all temperatures.
Austenitic stainless steels consist of two subfamilies:
200 series are chromium-manganese-nickel alloys that maximize the use of manganese and nitrogen to minimize the use of nickel. Due to their nitrogen addition, they possess approximately 50% higher yield strength than 300-series stainless steels. Representative alloys include Type 201 and Type 202.
300 series are chromium-nickel alloys that achieve their austenitic microstructure almost exclusively by nickel alloying; some very highly alloyed grades include some nitrogen to reduce nickel requirements. 300 series is the largest group and the most widely used. Representative alloys include Type 304 and Type 316.
Ferritic
Ferritic stainless steels have a body-centered cubic crystal structure, are magnetic, and are hardenable by cold working, but not by heat treating. They contain between 10.5% and 27% chromium with very little or no nickel. Due to the near-absence of nickel, they are less expensive than austenitic stainless steels. Representative alloys include Type 409, Type 429, Type 430, and Type 446. Ferritic stainless steels are present in many products, which include:
Automobile exhaust pipes
Architectural and structural applications
Building components, such as slate hooks, roofing, and chimney ducts
Power plates in solid oxide fuel cells operating at temperatures around 700 °C
Martensitic
Martensitic stainless steels have a body-centered tetragonal crystal structure, are magnetic, and are hardenable by heat treating and by cold working. They offer a wide range of properties and are used as stainless engineering steels, stainless tool steels, and creep-resistant steels. They are not as corrosion-resistant as ferritic and austenitic stainless steels due to their low chromium content. They fall into four categories (with some overlap):
Fe-Cr-C grades. These were the first grades used and are still widely used in engineering and wear-resistant applications. Representative grades include Type 410, Type 420, and Type 440C.
Fe-Cr-Ni-C grades. Some carbon is replaced by nickel. They offer higher toughness and higher corrosion resistance. Representative grades include Type 431.
Martensitic precipitation hardening grades. 17-4 PH (UNS S17400), the best-known grade, combines martensitic hardening and precipitation hardening to increase strength and toughness.
Creep-resisting grades. Small additions of niobium, vanadium, boron, and cobalt increase the strength and creep resistance up to about 650 °C.
Martensitic stainless steels can be heat treated to provide better mechanical properties. The heat treatment typically involves three steps:
Austenitizing, in which the steel is heated to a temperature in the range of roughly 980 to 1,050 °C, depending on grade. The resulting austenite has a face-centered cubic crystal structure.
Quenching. The austenite is transformed into martensite, a hard body-centered tetragonal crystal structure. The quenched martensite is very hard and too brittle for most applications. Some residual austenite may remain.
Tempering. The martensite is reheated to the tempering temperature appropriate for the grade, held at temperature, then air-cooled. Higher tempering temperatures decrease yield strength and ultimate tensile strength but increase elongation and impact resistance.
Duplex
Duplex stainless steels have a mixed microstructure of austenite and ferrite, the ideal ratio being a 50:50 mix, though commercial alloys may have ratios of 40:60. They are characterized by higher chromium (19–32%) and molybdenum (up to 5%) and lower nickel contents than austenitic stainless steels. Duplex stainless steels have roughly twice the yield strength of austenitic stainless steel. Their mixed microstructure provides improved resistance to chloride stress corrosion cracking in comparison to austenitic stainless steel types 304 and 316. Duplex grades are usually divided into three sub-groups based on their corrosion resistance: lean duplex, standard duplex, and super duplex. The properties of duplex stainless steels are achieved with an overall lower alloy content than similar-performing super-austenitic grades, making their use cost-effective for many applications. The pulp and paper industry was one of the first to extensively use duplex stainless steel. Today, the oil and gas industry is the largest user and has pushed for more corrosion resistant grades, leading to the development of super duplex and hyper duplex grades. More recently, the less expensive (and slightly less corrosion-resistant) lean duplex has been developed, chiefly for structural applications in building and construction (concrete reinforcing bars, plates for bridges, coastal works) and in the water industry.
Precipitation hardening
Precipitation hardening stainless steels are characterized by the ability to be precipitation hardened to higher strength. There are three types of precipitation hardening stainless steels, classified according to their crystalline structure:
Martensitic precipitation hardenable stainless steels are martensitic at room temperature in both the solution annealed and precipitation hardened conditions. Representative alloys include 17-4 PH (UNS S17400), 15-5 PH (UNS S15500), Custom 450 (UNS S45000) and Custom 465 (UNS S46500).
Semi-austenitic precipitation hardenable stainless steels are initially austenitic in the solution annealed condition for ease of fabrication, but are subsequently transformed to martensite to provide higher strength and to be precipitation hardened. Representative alloys include 17-7 PH (UNS S17700), 15-7 PH (UNS S15700), AM-350 (UNS S35000), and AM-355 (UNS S35500).
Austenitic precipitation hardenable stainless steels are austenitic at room temperature in both the solution annealed and precipitation hardened conditions. Representative alloys include A-286 (UNS S66286) and Discalloy (UNS S66220).
Classification systems
Several different classification systems have been developed for designating stainless steels. The main system used in the United States has been the SAE steel grades numbering system. The SAE numbering system designates stainless steels by "Type" followed by a three-digit number and sometimes a letter suffix. A newer system that was jointly developed by ASTM and SAE in 1974 is the Unified Numbering System for Metals and Alloys (UNS). The Unified Numbering System classifies stainless steels using an alpha-numeric identifier consisting of "S" followed by five digits, although some austenitic stainless steels with high nickel content may fall into the nickel-base designation, which uses "N" as the alpha identifier. The UNS designations incorporate previously used designations, whether from the SAE numbering system or proprietary alloy designations. Europe has adopted EN 10088 for classification of stainless steels.
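To make the two numbering schemes concrete, the short sketch below maps a few common SAE "Type" grades to their UNS designations and checks the usual "S"-plus-five-digits pattern. It is illustrative only: the grade-to-UNS table is a small hand-picked sample and the helper names are hypothetical, not part of any standards-body tool.

```python
import re

# Small illustrative mapping between SAE "Type" grades and their UNS designations.
# The grades listed are common examples; the table is deliberately not exhaustive.
SAE_TO_UNS = {
    "304": "S30400",
    "316": "S31600",
    "410": "S41000",
    "430": "S43000",
    "17-4 PH": "S17400",
}

def is_valid_stainless_uns(code: str) -> bool:
    """Check the usual stainless steel UNS pattern: 'S' followed by five digits.
    (Some high-nickel austenitic grades instead carry 'N' numbers.)"""
    return re.fullmatch(r"[SN]\d{5}", code) is not None

if __name__ == "__main__":
    for sae, uns in SAE_TO_UNS.items():
        print(f"Type {sae:>8} -> UNS {uns} (valid format: {is_valid_stainless_uns(uns)})")
```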
Corrosion resistance
Unlike carbon steel, stainless steels do not suffer uniform corrosion when exposed to wet environments. Unprotected carbon steel rusts readily when exposed to a combination of air and moisture. The resulting iron oxide surface layer is porous and fragile. In addition, as iron oxide occupies a larger volume than the original steel, this layer expands and tends to flake and fall away, exposing the underlying steel to further attack. In comparison, stainless steels contain sufficient chromium to undergo passivation, spontaneously forming a microscopically thin inert surface film of chromium oxide by reaction with the oxygen in the air and even the small amount of dissolved oxygen in the water. This passive film prevents further corrosion by blocking oxygen diffusion to the steel surface and thus prevents corrosion from spreading into the bulk of the metal. This film is self-repairing, even when scratched or temporarily disturbed by conditions that exceed the inherent corrosion resistance of that grade.
The resistance of this film to corrosion depends upon the chemical composition of the stainless steel, chiefly the chromium content. It is customary to distinguish between four forms of corrosion: uniform, localized (pitting), galvanic, and SCC (stress corrosion cracking). Any of these forms of corrosion can occur when the grade of stainless steel is not suited for the working environment.
Uniform
Uniform corrosion takes place in very aggressive environments, typically where chemicals are produced or heavily used, such as in the pulp and paper industries. The entire surface of the steel is attacked, and the corrosion is expressed as corrosion rate in mm/year (usually less than 0.1 mm/year is acceptable for such cases). Corrosion tables provide guidelines.
This is typically the case when stainless steels are exposed to acidic or basic solutions. Whether stainless steel corrodes depends on the kind and concentration of acid or base and the solution temperature. Uniform corrosion is typically easy to avoid because of extensive published corrosion data or easily performed laboratory corrosion testing.
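As a minimal illustration of how a quoted uniform corrosion rate feeds into design, the sketch below converts a rate in mm/year into total metal loss over a planned service life, which is how a corrosion allowance is typically sized. The numbers are arbitrary example values, not figures from any corrosion table.

```python
def metal_loss_mm(rate_mm_per_year: float, service_life_years: float) -> float:
    """Total thickness lost to uniform corrosion, assuming a constant rate."""
    return rate_mm_per_year * service_life_years

if __name__ == "__main__":
    # Example values only: a rate at the commonly quoted 0.1 mm/year limit,
    # over an assumed 25-year design life.
    rate = 0.1   # mm/year
    life = 25    # years
    print(f"Expected loss: {metal_loss_mm(rate, life):.1f} mm over {life} years")
```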
Acidic solutions can be put into two general categories: reducing acids, such as hydrochloric acid and dilute sulfuric acid, and oxidizing acids, such as nitric acid and concentrated sulfuric acid. Increasing chromium and molybdenum content provides increased resistance to reducing acids, while increasing chromium and silicon content provides increased resistance to oxidizing acids. Sulfuric acid is one of the most-produced industrial chemicals. Type 304 stainless steel is only resistant to 3% acid at room temperature, while type 316 is resistant to 3% acid up to about 50 °C and to 20% acid at room temperature. Thus type 304 is rarely used in contact with sulfuric acid. Type 904L and Alloy 20 are resistant to sulfuric acid at even higher concentrations above room temperature. Concentrated sulfuric acid possesses oxidizing characteristics like nitric acid, and thus silicon-bearing stainless steels are also useful. Hydrochloric acid damages any kind of stainless steel and should be avoided. All types of stainless steel resist attack from phosphoric acid and nitric acid at room temperature. At high concentrations and elevated temperatures, attack will occur, and higher-alloy stainless steels are required. In general, organic acids are less corrosive than mineral acids such as hydrochloric and sulfuric acid.
Type 304 and type 316 stainless steels are unaffected by weak bases such as ammonium hydroxide, even in high concentrations and at high temperatures. The same grades exposed to stronger bases such as sodium hydroxide at high concentrations and high temperatures will likely experience some etching and cracking. Increasing chromium and nickel contents provide increased resistance.
All grades resist damage from aldehydes and amines, though in the latter case type 316 is preferable to type 304; cellulose acetate damages type 304 unless the temperature is kept low. Fats and fatty acids only affect type 304 at elevated temperatures, and type 316 only at still higher temperatures, while type 317 is unaffected at all temperatures. Type 316L is required for the processing of urea.
Localized
Localized corrosion can occur in several ways, e.g. pitting corrosion and crevice corrosion. These localized attacks are most common in the presence of chloride ions. Higher chloride levels require more highly alloyed stainless steels.
Localized corrosion can be difficult to predict because it is dependent on many factors, including:
Chloride ion concentration. Even when chloride solution concentration is known, it is still possible for localized corrosion to occur unexpectedly. Chloride ions can become unevenly concentrated in certain areas, such as in crevices (e.g. under gaskets) or on surfaces in vapor spaces due to evaporation and condensation.
Temperature: increasing temperature increases susceptibility.
Acidity: increasing acidity increases susceptibility.
Stagnation: stagnant conditions increase susceptibility.
Oxidizing species: the presence of oxidizing species, such as ferric and cupric ions, increases susceptibility.
Pitting corrosion is considered the most common form of localized corrosion. The corrosion resistance of stainless steels to pitting corrosion is often expressed by the pitting resistance equivalent number (PREN), obtained through the formula:

PREN = %Cr + 3.3 × %Mo + 16 × %N,

where the terms correspond to the proportions by mass of chromium, molybdenum, and nitrogen in the steel. For example, if the steel contains 15% chromium, %Cr is equal to 15.
The higher the PREN, the higher the pitting corrosion resistance. Thus, increasing chromium, molybdenum, and nitrogen contents provide better resistance to pitting corrosion.
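A minimal sketch of the PREN calculation follows, using the common form PREN = %Cr + 3.3 × %Mo + 16 × %N. The grade compositions below are approximate nominal values chosen for illustration, not specified minima, and the nitrogen coefficient is an assumption (some duplex formulations weight nitrogen more heavily).

```python
def pren(cr: float, mo: float, n: float, n_factor: float = 16.0) -> float:
    """Pitting resistance equivalent number: %Cr + 3.3*%Mo + n_factor*%N.
    A factor of 16 for nitrogen is the common choice; some duplex formulations use up to 30."""
    return cr + 3.3 * mo + n_factor * n

if __name__ == "__main__":
    # Nominal compositions in weight percent (approximate, for illustration only).
    grades = {
        "Type 304":    (18.0, 0.0, 0.05),
        "Type 316":    (17.0, 2.5, 0.05),
        "Duplex 2205": (22.0, 3.0, 0.17),
    }
    for name, (cr, mo, n) in grades.items():
        print(f"{name:>12}: PREN ~ {pren(cr, mo, n):.1f}")
```

As expected from the formula, the molybdenum- and nitrogen-bearing grades score markedly higher than plain 18% chromium steel.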
Though the PREN of a given steel may be theoretically sufficient to resist pitting corrosion, crevice corrosion can still occur when poor design has created confined areas (overlapping plates, washer-plate interfaces, etc.) or when deposits form on the material. In such areas, the PREN may not be high enough for the service conditions. Good design and fabrication techniques, appropriate alloy selection, and proper operating conditions (based on the concentration of active compounds in the corrosive solution, its pH, etc.) can prevent such corrosion.
Stress
Stress corrosion cracking (SCC) is caused by combination of tensile stress and a corrosive environment and can lead to unexpected and sudden failure of a stainless steel component. It may occur when three conditions are met:
The part contains either applied or residual tensile stresses.
The part is in a corrosive environment.
The stainless steel is susceptible to SCC.
SCC can be prevented by eliminating one of these three conditions.
The SCC mechanism results from the following sequence of events:
Pitting occurs.
Cracks start from a pit initiation site.
Cracks then propagate through the metal in a transgranular or intergranular mode.
Failure occurs.
Galvanic
Galvanic corrosion (also called "dissimilar-metal corrosion") refers to corrosion damage induced when two dissimilar materials are coupled in a corrosive electrolyte. The most common electrolyte is water, ranging from freshwater to seawater. When a galvanic couple forms, one of the metals in the couple becomes the anode and corrodes faster than it would alone, while the other becomes the cathode and corrodes slower than it would alone. Stainless steel, due to having a more positive electrode potential than for example carbon steel and aluminium, becomes the cathode, accelerating the corrosion of the anodic metal. An example is the corrosion of aluminium rivets fastening stainless steel sheets in contact with water. The relative surface areas of the anode and the cathode are important in determining the rate of corrosion. In the above example, the surface area of the rivets is small compared to that of the stainless steel sheet, resulting in rapid corrosion. However, if stainless steel fasteners are used to assemble aluminium sheets, galvanic corrosion will be much slower because the galvanic current density on the aluminium surface will be many orders of magnitude smaller. A frequent mistake is to assemble stainless steel plates with carbon steel fasteners; whereas using stainless steel to fasten carbon-steel plates is usually acceptable, the reverse is not. Providing electrical insulation between the dissimilar metals, where possible, is effective at preventing this type of corrosion.
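The area effect described above can be sketched with a simple charge-conservation argument: the total galvanic current leaving the anode must equal the current collected by the cathode, so the anodic current density (and hence the anode's corrosion rate) scales with the cathode-to-anode area ratio. The snippet below is a simplified illustration under that assumption, with an arbitrary cathodic current density; it is not a full electrochemical model.

```python
def anodic_current_density(cathodic_current_density: float,
                           cathode_area: float,
                           anode_area: float) -> float:
    """Charge conservation: i_anode * A_anode = i_cathode * A_cathode.
    Returns the anodic current density, which drives the anode's corrosion rate."""
    return cathodic_current_density * cathode_area / anode_area

if __name__ == "__main__":
    i_cathode = 1.0  # arbitrary units; assumed constant for illustration
    # Small aluminium rivets (anode) in a large stainless sheet (cathode):
    print("Al rivets in SS sheet :", anodic_current_density(i_cathode, cathode_area=100.0, anode_area=1.0))
    # Stainless fasteners (cathode) in large aluminium sheets (anode):
    print("SS bolts in Al sheets :", anodic_current_density(i_cathode, cathode_area=1.0, anode_area=100.0))
```

The four-orders-of-magnitude difference between the two cases mirrors why small stainless fasteners in aluminium are usually tolerable while small aluminium fasteners in stainless are not.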
High-temperature
At elevated temperatures, all metals react with hot gases. The most common high-temperature gaseous mixture is air, of which oxygen is the most reactive component. To avoid corrosion in air, carbon steel is limited to approximately 480 °C. Oxidation resistance in stainless steels increases with additions of chromium, silicon, and aluminium. Small additions of cerium and yttrium increase the adhesion of the oxide layer on the surface. The addition of chromium remains the most common method to increase high-temperature corrosion resistance in stainless steels; chromium reacts with oxygen to form a chromium oxide scale, which reduces oxygen diffusion into the material. The minimum 10.5% chromium in stainless steels provides resistance to approximately 700 °C, and higher chromium contents extend this further; type 304, the most common grade of stainless steel with 18% chromium, is resistant to approximately 870 °C. Other gases, such as sulfur dioxide, hydrogen sulfide, carbon monoxide, and chlorine, also attack stainless steel. Resistance to other gases depends on the type of gas, the temperature, and the alloying content of the stainless steel. With the addition of up to 5% aluminium, ferritic Fe-Cr-Al grades are designed for electrical resistance and oxidation resistance at elevated temperatures. Such alloys include Kanthal, produced in the form of wire or ribbons.
Standard finishes
Standard mill finishes can be applied to flat rolled stainless steel directly by the rollers and by mechanical abrasives. Steel is first rolled to size and thickness and then annealed to change the properties of the final material. Any oxidation that forms on the surface (mill scale) is removed by pickling, and a passivation layer is created on the surface. A final finish can then be applied to achieve the desired aesthetic appearance.
The following designations are used in the U.S. to describe stainless steel finishes by ASTM A480/A480M-18 (DIN):
No. 0: Hot-rolled, annealed, thicker plates
No. 1 (1D): Hot-rolled, annealed and passivated
No. 2D (2D): Cold rolled, annealed, pickled and passivated
No. 2B (2B): Same as above with additional pass through highly polished rollers
No. 2BA (2R): Bright annealed (BA or 2R) same as above then bright annealed under oxygen-free atmospheric condition
No. 3 (G-2G): Coarse abrasive finish applied mechanically
No. 4 (1J-2J): Brushed finish
No. 5: Satin finish
No. 6 (1K-2K): Matte finish (brushed but smoother than #4)
No. 7 (1P-2P): Reflective finish
No. 8: Mirror finish
No. 9: Bead blast finish
No. 10: Heat colored finish – offering a wide range of electropolished and heat colored surfaces
Joining
A wide range of joining processes are available for stainless steels, though welding is by far the most common.
The ease of welding largely depends on the type of stainless steel used. Austenitic stainless steels are the easiest to weld by electric arc, with weld properties similar to those of the base metal (not cold-worked). Martensitic stainless steels can also be welded by electric-arc but, as the heat-affected zone (HAZ) and the fusion zone (FZ) form martensite upon cooling, precautions must be taken to avoid cracking of the weld. Improper welding practices can additionally cause sugaring (oxide scaling) and heat tint on the backside of the weld. This can be prevented with the use of back-purging gases, backing plates, and fluxes. Post-weld heat treatment is almost always required while preheating before welding is also necessary in some cases. Electric arc welding of type 430 ferritic stainless steel results in grain growth in the HAZ, which leads to brittleness. This has largely been overcome with stabilized ferritic grades, where niobium, titanium, and zirconium form precipitates that prevent grain growth. Duplex stainless steel welding by electric arc is a common practice but requires careful control of the process parameters. Otherwise, the precipitation of unwanted intermetallic phases occurs, which reduces the toughness of the welds.
Electric arc welding processes include:
Gas metal arc welding, also known as MIG/MAG welding
Gas tungsten arc welding, also known as tungsten inert gas (TIG) welding
Plasma arc welding
Flux-cored arc welding
Shielded metal arc welding (covered electrode)
Submerged arc welding
MIG, MAG and TIG welding are the most common methods.
Other welding processes include:
Stud welding
Resistance spot welding
Resistance seam welding
Flash welding
Laser beam welding
Oxy-acetylene welding
Stainless steel may be bonded with adhesives such as silicone, silyl modified polymers, and epoxies. Acrylic and polyurethane adhesives are also used in some situations.
Production
Most of the world's stainless steel is produced by the following processes:
Electric arc furnace (EAF): stainless steel scrap, other ferrous scrap, and ferroalloys (FeCr, FeNi, FeMo, FeSi) are melted together. The molten metal is then poured into a ladle and transferred to the AOD process (see below).
Argon oxygen decarburization (AOD): carbon in the molten steel is removed (by turning it into carbon monoxide gas) and other compositional adjustments are made to achieve the desired chemical composition.
Continuous casting (CC): the molten metal is solidified into slabs for flat products (a typical section is roughly 20 cm thick and 2 m wide) or blooms (sections vary widely, but roughly 25 cm × 25 cm is typical).
Hot rolling (HR): slabs and blooms are reheated in a furnace and hot-rolled. Hot rolling reduces the thickness of the slabs to produce coils a few millimetres thick. Blooms, on the other hand, are hot-rolled into bars, which are cut into lengths at the exit of the rolling mill, or wire rod, which is coiled.
Cold finishing (CF) depends on the type of product being finished:
Hot-rolled coils are pickled in acid solutions to remove the oxide scale on the surface, then subsequently cold rolled in Sendzimir rolling mills and annealed in a protective atmosphere until the desired thickness and surface finish is obtained. Further operations such as slitting and tube forming can be performed in downstream facilities.
Hot-rolled bars are straightened, then machined to the required tolerance and finish.
Wire rod coils are subsequently processed to produce cold-finished bars on drawing benches, fasteners on boltmaking machines, and wire on single or multipass drawing machines.
World stainless steel production figures are published yearly by the International Stainless Steel Forum. Among EU producers, Italy, Belgium, and Spain were notable, while Canada and Mexico produced none. China, Japan, South Korea, Taiwan, India, the US, and Indonesia were large producers, while Russia reported little production.
Breakdown of production by stainless steels families in 2017:
Austenitic stainless steels Cr-Ni (also called 300-series; see the "Families" section above): 54%
Austenitic stainless steels Cr-Mn (also called 200-series): 21%
Ferritic and martensitic stainless steels (also called 400-series): 23%
Applications
Stainless steel is used in a multitude of fields including architecture, art, chemical engineering, food and beverage manufacture, vehicles, medicine, energy and firearms.
Life cycle cost
Life cycle cost (LCC) calculations are used to select the design and the materials that will lead to the lowest cost over the whole life of a project, such as a building or a bridge.
The formula, in a simple form, is the following:

LCC = AC + IC + Σ (OC + LP + RC) / (1 + i)^n, with the sum taken over the years n = 1 to N,

where LCC is the overall life cycle cost, AC is the acquisition cost, IC the installation cost, OC the operating and maintenance costs, LP the cost of lost production due to downtime, and RC the replacement materials cost.
In addition, N is the planned life of the project, i the interest rate, and n the year in which a particular OC or LP or RC takes place. The interest rate (i) is used to convert expenses from different years to their present value (a method widely used by banks and insurance companies) so that they can be added and compared fairly. The usage of the sum formula (Σ) captures the fact that expenses over the lifetime of a project must be cumulated after they are corrected for interest rate.
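A minimal sketch of this discounted-sum calculation is given below. The project figures are arbitrary illustrative values, not data from a real LCC study, and the function name is a placeholder rather than any standard tool.

```python
def life_cycle_cost(acquisition: float,
                    installation: float,
                    yearly_oc: list[float],
                    yearly_lp: list[float],
                    yearly_rc: list[float],
                    interest_rate: float) -> float:
    """LCC = AC + IC + sum over years n of (OC + LP + RC) / (1 + i)**n."""
    lcc = acquisition + installation
    for n, (oc, lp, rc) in enumerate(zip(yearly_oc, yearly_lp, yearly_rc), start=1):
        lcc += (oc + lp + rc) / (1.0 + interest_rate) ** n
    return lcc

if __name__ == "__main__":
    # Illustrative numbers only (arbitrary currency units): a 5-year project with
    # constant maintenance and a single replacement expense in year 3.
    total = life_cycle_cost(
        acquisition=100.0,
        installation=20.0,
        yearly_oc=[5.0] * 5,
        yearly_lp=[0.0] * 5,
        yearly_rc=[0.0, 0.0, 15.0, 0.0, 0.0],
        interest_rate=0.05,
    )
    print(f"LCC = {total:.2f}")
```

The discounting step is what allows a material with higher acquisition cost but lower yearly costs, such as stainless steel, to come out ahead over the project life.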
Application of LCC in materials selection
Stainless steel used in projects often results in lower LCC values compared to other materials. The higher acquisition cost (AC) of stainless steel components is often offset by improvements in operating and maintenance costs, reduced loss of production (LP) costs, and the higher resale value of stainless steel components.
LCC calculations are usually limited to the project itself. However, there may be other costs that a project stakeholder may wish to consider:
Utilities, such as power plants, water supply & wastewater treatment, and hospitals, cannot be shut down. Any maintenance will require extra costs associated with continuing service.
Indirect societal costs (with possible political fallout) may be incurred in some situations such as closing or reducing traffic on bridges, creating queues, delays, loss of working hours to the people, and increased pollution by idling vehicles.
Sustainability – recycling and reuse
The average carbon footprint of stainless steel (all grades, all countries) is estimated to be 2.90 kg of CO2 per kg of stainless steel produced, of which 1.92 kg are emissions from raw materials (Cr, Ni, Mo); 0.54 kg from electricity and steam, and 0.44 kg are direct emissions (i.e., by the stainless steel plant). Note that stainless steel produced in countries that use cleaner sources of electricity (such as France, which uses nuclear energy) will have a lower carbon footprint. Ferritics without Ni will have a lower CO2 footprint than austenitics with 8% Ni or more. Carbon footprint must not be the only sustainability-related factor for deciding the choice of materials:
Over any product life, maintenance, repairs or early end of life (planned obsolescence) can increase its overall footprint far beyond initial material differences. In addition, loss of service (typically for bridges) may induce large hidden costs, such as queues, wasted fuel, and loss of man-hours.
How much material is used to provide a given service varies with the performance, particularly the strength level, which allows lighter structures and components.
Stainless steel is 100% recyclable. An average stainless steel object is composed of about 60% recycled material, of which approximately 40% originates from end-of-life products, while the remaining 60% comes from manufacturing processes. What prevents a higher recycled content is the limited availability of stainless steel scrap, in spite of a very high recycling rate. According to the International Resource Panel's Metal Stocks in Society report, the per capita stock of stainless steel in use in society is substantially higher in more developed countries than in less-developed countries. There is a secondary market that recycles usable scrap for many stainless steel markets. The product is mostly coil, sheet, and blanks. This material is purchased at a less-than-prime price and sold to commercial quality stampers and sheet metal houses. The material may have scratches, pits, and dents but is made to the current specifications.
The stainless steel cycle starts with carbon steel scrap, primary metals, and slag. The next step is the production of hot-rolled and cold-finished steel products in steel mills. Some scrap is produced, which is directly reused in the melting shop. The manufacturing of components is the third step. Some scrap is produced and enters the recycling loop. Assembly of final goods and their use does not generate any material loss. The fourth step is the collection of stainless steel for recycling at the end of life of the goods (such as kitchenware, pulp and paper plants, or automotive parts). This is where it is most difficult to get stainless steel to enter the recycling loop.
Nanoscale stainless steel
Stainless steel nanoparticles have been produced in the laboratory. These may have applications as additives for high-performance applications. For example, sulfurization, phosphorization, and nitridation treatments to produce nanoscale stainless steel based catalysts could enhance the electrocatalytic performance of stainless steel for water splitting.
Health effects
There is extensive research indicating some probable increased risk of cancer (particularly lung cancer) from inhaling fumes while welding stainless steel. Stainless steel welding is suspected of producing carcinogenic fumes from cadmium oxides, nickel, and chromium. According to Cancer Council Australia, "In 2017, all types of welding fumes were classified as a Group 1 carcinogen."
Stainless steel is generally considered to be biologically inert. However, during cooking, small amounts of nickel and chromium leach out of new stainless steel cookware into highly acidic food. Nickel can contribute to cancer risks—particularly lung cancer and nasal cancer. However, no connection between stainless steel cookware and cancer has been established.
See also
Cobalt-chrome
Corrosion engineering
Corrugated stainless steel tubing
List of blade materials
List of steel producers
Metallic fiber
Pilling–Bedworth ratio
Rouging
Weathering steel
Notes
References
Further reading
International Standard ISO15510:2014
External links
1916 introductions
Biomaterials
Building materials
Chromium alloys
English inventions
Roofing materials
Silicon

Silicon is a chemical element; it has symbol Si and atomic number 14. It is a hard, brittle crystalline solid with a blue-grey metallic lustre, and is a tetravalent metalloid and semiconductor. It is a member of group 14 in the periodic table: carbon is above it; and germanium, tin, lead, and flerovium are below it. It is relatively unreactive. Silicon is a significant element that is essential for several physiological and metabolic processes in plants. Silicon is widely regarded as the predominant semiconductor material due to its versatile applications in various electrical devices such as transistors, solar cells, integrated circuits, and others. This may be attributed to its significant band gap, wide optical transmission range, broad absorption spectrum, and its suitability for surface roughening and effective anti-reflection coatings.
Because of its high chemical affinity for oxygen, it was not until 1823 that Jöns Jakob Berzelius was first able to prepare it and characterize it in pure form. Its oxides form a family of anions known as silicates. Its melting and boiling points of 1414 °C and 3265 °C, respectively, are the second highest among all the metalloids and nonmetals, being surpassed only by boron.
Silicon is the eighth most common element in the universe by mass, but very rarely occurs in its pure form in the Earth's crust. It is widely distributed throughout space in cosmic dusts, planetoids, and planets as various forms of silicon dioxide (silica) or silicates. More than 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the Earth's crust (about 28% by mass), after oxygen.
Most silicon is used commercially without being separated, often with very little processing of the natural minerals. Such use includes industrial construction with clays, silica sand, and stone. Silicates are used in Portland cement for mortar and stucco, and mixed with silica sand and gravel to make concrete for walkways, foundations, and roads. They are also used in whiteware ceramics such as porcelain, and in traditional silicate-based soda–lime glass and many other specialty glasses. Silicon compounds such as silicon carbide are used as abrasives and components of high-strength ceramics. Silicon is the basis of the widely used synthetic polymers called silicones.
The late 20th century to early 21st century has been described as the Silicon Age (also known as the Digital Age or Information Age) because of the large impact that elemental silicon has on the modern world economy. The small portion of very highly purified elemental silicon used in semiconductor electronics (<15%) is essential to the transistors and integrated circuit chips used in most modern technology such as smartphones and other computers. In 2019, 32.4% of the semiconductor market segment was for networks and communications devices, and the semiconductors industry is projected to reach $726.73 billion by 2027.
Silicon is an essential element in biology. Only traces are required by most animals, but some sea sponges and microorganisms, such as diatoms and radiolaria, secrete skeletal structures made of silica. Silica is deposited in many plant tissues.
History
Owing to the abundance of silicon in the Earth's crust, natural silicon-based materials have been used for thousands of years. Silicon rock crystals were familiar to various ancient civilizations, such as the predynastic Egyptians who used it for beads and small vases, as well as the ancient Chinese. Glass containing silica was manufactured by the Egyptians since at least 1500 BC, as well as by the ancient Phoenicians. Natural silicate compounds were also used in various types of mortar for construction of early human dwellings.
Discovery
In 1787, Antoine Lavoisier suspected that silica might be an oxide of a fundamental chemical element, but the chemical affinity of silicon for oxygen is high enough that he had no means to reduce the oxide and isolate the element. After an attempt to isolate silicon in 1808, Sir Humphry Davy proposed the name "silicium" for silicon, from the Latin silex, silicis for flint, and adding the "-ium" ending because he believed it to be a metal. Most other languages use transliterated forms of Davy's name, sometimes adapted to local phonology (e.g. German Silizium, Turkish silisyum, Catalan silici, Armenian silitzioum). A few others instead use a calque of the Latin root (e.g. Russian кремний, from кремень "flint"; Greek πυρίτιο from πυρ "fire"; Finnish pii from piikivi "flint"; Czech křemík from křemen "quartz", "flint").
Gay-Lussac and Thénard are thought to have prepared impure amorphous silicon in 1811, through the heating of recently isolated potassium metal with silicon tetrafluoride, but they did not purify and characterize the product, nor identify it as a new element. Silicon was given its present name in 1817 by Scottish chemist Thomas Thomson. He retained part of Davy's name but added "-on" because he believed that silicon was a nonmetal similar to boron and carbon. In 1824, Jöns Jacob Berzelius prepared amorphous silicon using approximately the same method as Gay-Lussac (reducing potassium fluorosilicate with molten potassium metal), but purifying the product to a brown powder by repeatedly washing it. As a result, he is usually given credit for the element's discovery. The same year, Berzelius became the first to prepare silicon tetrachloride; silicon tetrafluoride had already been prepared long before in 1771 by Carl Wilhelm Scheele by dissolving silica in hydrofluoric acid. In 1846, Ebelmen synthesized tetraethyl orthosilicate (Si(OC2H5)4).
Silicon in its more common crystalline form was not prepared until 31 years later, by Deville. By electrolyzing a mixture of sodium chloride and aluminium chloride containing approximately 10% silicon, he was able to obtain a slightly impure allotrope of silicon in 1854. Later, more cost-effective methods have been developed to isolate several allotrope forms, the most recent being silicene in 2010. Meanwhile, research on the chemistry of silicon continued; Friedrich Wöhler discovered the first volatile hydrides of silicon, synthesising trichlorosilane in 1857 and silane itself in 1858, but a detailed investigation of the silanes was only carried out in the early 20th century by Alfred Stock, despite early speculation on the matter dating as far back as the beginnings of synthetic organic chemistry in the 1830s. Similarly, the first organosilicon compound, tetraethylsilane, was synthesised by Charles Friedel and James Crafts in 1863, but detailed characterisation of organosilicon chemistry was only done in the early 20th century by Frederic Kipping.
Starting in the 1920s, the work of William Lawrence Bragg on X-ray crystallography elucidated the compositions of the silicates, which had previously been known from analytical chemistry but had not yet been understood, together with Linus Pauling's development of crystal chemistry and Victor Goldschmidt's development of geochemistry. The middle of the 20th century saw the development of the chemistry and industrial use of siloxanes and the growing use of silicone polymers, elastomers, and resins. In the late 20th century, the complexity of the crystal chemistry of silicides was mapped, along with the solid-state physics of doped semiconductors.
Silicon semiconductors
The first semiconductor devices did not use silicon, but used galena, including German physicist Ferdinand Braun's crystal detector in 1874 and Indian physicist Jagadish Chandra Bose's radio crystal detector in 1901. The first silicon semiconductor device was a silicon radio crystal detector, developed by American engineer Greenleaf Whittier Pickard in 1906.
In 1940, Russell Ohl discovered the p–n junction and photovoltaic effects in silicon. In 1941, techniques for producing high-purity germanium and silicon crystals were developed for radar microwave detector crystals during World War II. In 1947, physicist William Shockley theorized a field-effect amplifier made from germanium and silicon, but he failed to build a working device and eventually worked with germanium instead. The first working transistor was a point-contact transistor built by John Bardeen and Walter Brattain later that year while working under Shockley. In 1954, physical chemist Morris Tanenbaum fabricated the first silicon junction transistor at Bell Labs. In 1955, Carl Frosch and Lincoln Derick at Bell Labs accidentally discovered that silicon dioxide (SiO2) could be grown on silicon. By 1957, Frosch and Derick had published their work on the first manufactured semiconductor oxide transistors: the first planar transistors, in which drain and source were adjacent at the same surface.
Silicon Age
The "Silicon Age" refers to the late 20th century to early 21st century. This is due to silicon being the dominant material used in electronics and information technology (also known as the Digital Age or Information Age), similar to how the Stone Age, Bronze Age and Iron Age were defined by the dominant materials during their respective ages of civilization.
Because silicon is an important element in high-technology semiconductor devices, many places in the world bear its name. For example, the Santa Clara Valley in California acquired the nickname Silicon Valley, as the element is the base material in the semiconductor industry there. Since then, many other places have been similarly dubbed, including Silicon Wadi in Israel; Silicon Forest in Oregon; Silicon Hills in Austin, Texas; Silicon Slopes in Salt Lake City, Utah; Silicon Saxony in Germany; Silicon Valley in India; Silicon Border in Mexicali, Mexico; Silicon Fen in Cambridge, England; Silicon Roundabout in London; Silicon Glen in Scotland; Silicon Gorge in Bristol, England; Silicon Alley in New York City; and Silicon Beach in Los Angeles.
Characteristics
Physical and atomic
A silicon atom has fourteen electrons. In the ground state, they are arranged in the electron configuration [Ne]3s23p2. Of these, four are valence electrons, occupying the 3s orbital and two of the 3p orbitals. Like the other members of its group, the lighter carbon and the heavier germanium, tin, and lead, it has the same number of valence electrons as valence orbitals: hence, it can complete its octet and obtain the stable noble gas configuration of argon by forming sp3 hybrid orbitals, forming tetrahedral derivatives where the central silicon atom shares an electron pair with each of the four atoms it is bonded to. The first four ionisation energies of silicon are 786.3, 1576.5, 3228.3, and 4354.4 kJ/mol respectively; these figures are high enough to preclude the possibility of simple cationic chemistry for the element. Following periodic trends, its single-bond covalent radius of 117.6 pm is intermediate between those of carbon (77.2 pm) and germanium (122.3 pm). The hexacoordinate ionic radius of silicon may be considered to be 40 pm, although this must be taken as a purely notional figure given the lack of a simple cation in reality.
Electrical
At standard temperature and pressure, silicon is a shiny semiconductor with a bluish-grey metallic lustre; as typical for semiconductors, its resistivity drops as temperature rises. This arises because silicon has a small energy gap (band gap) between its highest occupied energy levels (the valence band) and the lowest unoccupied ones (the conduction band). The Fermi level is about halfway between the valence and conduction bands and is the energy at which a state is as likely to be occupied by an electron as not. Hence pure silicon is effectively an insulator at room temperature. However, doping silicon with a pnictogen such as phosphorus, arsenic, or antimony introduces one extra electron per dopant, and these may then be excited into the conduction band either thermally or photolytically, creating an n-type semiconductor. Similarly, doping silicon with a group 13 element such as boron, aluminium, or gallium results in the introduction of acceptor levels that trap electrons that may be excited from the filled valence band, creating a p-type semiconductor. Joining n-type silicon to p-type silicon creates a p–n junction with a common Fermi level; electrons flow from n to p, while holes flow from p to n, creating a voltage drop. This p–n junction thus acts as a diode that can rectify alternating current, allowing current to pass more easily in one direction than the other. A transistor is an n–p–n junction, with a thin layer of weakly p-type silicon between two n-type regions. Biasing the emitter through a small forward voltage and the collector through a large reverse voltage allows the transistor to act as a triode amplifier.
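A rough way to see why resistivity falls as temperature rises is through the intrinsic carrier concentration, which in the simple textbook model grows as T^(3/2) · exp(−Eg / 2kT), with Eg ≈ 1.12 eV for silicon near room temperature. The sketch below uses that simplified model, with an arbitrary prefactor and a band gap treated as constant, to compare carrier populations at two temperatures; it is an order-of-magnitude illustration, not a device-grade calculation.

```python
import math

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K
E_GAP_SI = 1.12     # approximate band gap of silicon near room temperature, eV

def relative_intrinsic_carriers(temperature_k: float) -> float:
    """Intrinsic carrier concentration up to a constant prefactor:
    n_i ~ T**1.5 * exp(-Eg / (2 k T)). Simplified model with Eg held constant."""
    return temperature_k ** 1.5 * math.exp(-E_GAP_SI / (2 * K_B_EV * temperature_k))

if __name__ == "__main__":
    n_300 = relative_intrinsic_carriers(300.0)
    n_400 = relative_intrinsic_carriers(400.0)
    print(f"Carrier population at 400 K is ~{n_400 / n_300:.0f}x that at 300 K")
```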
Crystal structure
Silicon crystallises in a giant covalent structure at standard conditions, specifically in a diamond cubic crystal lattice (space group 227). It thus has a high melting point of 1414 °C, as a lot of energy is required to break the strong covalent bonds and melt the solid. Upon melting silicon contracts as the long-range tetrahedral network of bonds breaks up and the voids in that network are filled in, similar to water ice when hydrogen bonds are broken upon melting. It does not have any thermodynamically stable allotropes at standard pressure, but several other crystal structures are known at higher pressures. The general trend is one of increasing coordination number with pressure, culminating in a hexagonal close-packed allotrope at about 40 gigapascals known as Si–VII (the standard modification being Si–I). An allotrope called BC8 (or bc8), having a body-centred cubic lattice with eight atoms per primitive unit cell (space group 206), can be created at high pressure and remains metastable at low pressure. Its properties have been studied in detail.
Silicon boils at 3265 °C: this, while high, is still lower than the temperature at which its lighter congener carbon sublimes (3642 °C) and silicon similarly has a lower heat of vaporisation than carbon, consistent with the fact that the Si–Si bond is weaker than the C–C bond.
It is also possible to construct silicene layers analogous to graphene.
Isotopes
Naturally occurring silicon is composed of three stable isotopes, 28Si (92.23%), 29Si (4.67%), and 30Si (3.10%). Out of these, only 29Si is of use in NMR and EPR spectroscopy, as it is the only one with a nuclear spin (I = 1/2). All three are produced in Type Ia supernovae through the oxygen-burning process, with 28Si being made as part of the alpha process and hence the most abundant. The fusion of 28Si with alpha particles by photodisintegration rearrangement in stars is known as the silicon-burning process; it is the last stage of stellar nucleosynthesis before the rapid collapse and violent explosion of the star in question in a type II supernova.
Twenty-two radioisotopes have been characterized, the two stablest being 32Si with a half-life of about 150 years, and 31Si with a half-life of 2.62 hours. All the remaining radioactive isotopes have half-lives that are less than seven seconds, and the majority of these have half-lives that are less than one-tenth of a second. Silicon has one known nuclear isomer, 34mSi, with a half-life less than 210 nanoseconds. 32Si undergoes low-energy beta decay to 32P and then stable 32S. 31Si may be produced by the neutron activation of natural silicon and is thus useful for quantitative analysis; it can be easily detected by its characteristic beta decay to stable 31P, in which the emitted electron carries up to 1.48 MeV of energy.
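For a concrete sense of the half-lives quoted above, the sketch below applies the standard decay relation N(t) = N0 · 2^(−t / t½) to 32Si, taking the roughly 150-year half-life from the text; the time points are arbitrary examples.

```python
def remaining_fraction(elapsed: float, half_life: float) -> float:
    """Fraction of a radioisotope remaining after `elapsed` time,
    given its half-life (any consistent time unit)."""
    return 0.5 ** (elapsed / half_life)

if __name__ == "__main__":
    half_life_si32 = 150.0  # years, approximate value quoted in the text
    for years in (150, 300, 1000):
        frac = remaining_fraction(years, half_life_si32)
        print(f"after {years:>4} years: {frac:.3f} of the 32Si remains")
```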
The known isotopes of silicon range in mass number from 22 to 46. The most common decay mode of the isotopes with mass numbers lower than the three stable isotopes is inverse beta decay, primarily forming aluminium isotopes (13 protons) as decay products. The most common decay mode for the heavier unstable isotopes is beta decay, primarily forming phosphorus isotopes (15 protons) as decay products.
Silicon can enter the oceans through groundwater and riverine transport. Large fluxes of groundwater input have an isotopic composition which is distinct from riverine silicon inputs. Isotopic variations in groundwater and riverine transports contribute to variations in oceanic 30Si values. Currently, there are substantial differences in the isotopic values of deep water in the world's ocean basins. Between the Atlantic and Pacific oceans, there is a deep water 30Si gradient of greater than 0.3 parts per thousand. 30Si is most commonly associated with productivity in the oceans.
Chemistry and compounds
Crystalline bulk silicon is rather inert, but becomes more reactive at high temperatures. Like its neighbour aluminium, silicon forms a thin, continuous surface layer of silicon dioxide (SiO2) that protects the material beneath from oxidation. Because of this, silicon does not measurably react with the air below 900 °C. Between 950 °C and 1160 °C, the formation rate of the vitreous dioxide rapidly increases, and when 1400 °C is reached, atmospheric nitrogen also reacts to give the nitrides SiN and Si3N4. Silicon reacts with gaseous sulfur at 600 °C and gaseous phosphorus at 1000 °C. This oxide layer nevertheless does not prevent reaction with the halogens; fluorine attacks silicon vigorously at room temperature, chlorine does so at about 300 °C, and bromine and iodine at about 500 °C. Silicon does not react with most aqueous acids, but is oxidised and complexed by hydrofluoric acid mixtures containing either chlorine or nitric acid to form hexafluorosilicates. It readily dissolves in hot aqueous alkali to form silicates. At high temperatures, silicon also reacts with alkyl halides; this reaction may be catalysed by copper to directly synthesise organosilicon chlorides as precursors to silicone polymers. Upon melting, silicon becomes extremely reactive, alloying with most metals to form silicides, and reducing most metal oxides because the heat of formation of silicon dioxide is so large. In fact, molten silicon reacts with virtually every known kind of crucible material (except its own oxide, SiO2). This happens due to silicon's high binding forces for the light elements and to its high dissolving power for most elements. As a result, containers for liquid silicon must be made of refractory, unreactive materials such as zirconium dioxide or group 4, 5, and 6 borides.
Tetrahedral coordination is a major structural motif in silicon chemistry just as it is for carbon chemistry. However, the 3p subshell is rather more diffuse than the 2p subshell and does not hybridise so well with the 3s subshell. As a result, the chemistry of silicon and its heavier congeners shows significant differences from that of carbon, and thus octahedral coordination is also significant. For example, the electronegativity of silicon (1.90) is much less than that of carbon (2.55), because the valence electrons of silicon are further from the nucleus than those of carbon and hence experience smaller electrostatic forces of attraction from the nucleus. The poor overlap of 3p orbitals also results in a much lower tendency toward catenation (formation of Si–Si bonds) for silicon than for carbon, due to the concomitant weakening of the Si–Si bond compared to the C–C bond: the average Si–Si bond energy is approximately 226 kJ/mol, compared to a value of 356 kJ/mol for the C–C bond. This results in multiply bonded silicon compounds generally being much less stable than their carbon counterparts, an example of the double bond rule. On the other hand, the presence of radial nodes in the 3p orbitals of silicon suggests the possibility of hypervalence, as seen in five and six-coordinate derivatives of silicon such as [SiF5]− and [SiF6]2−. Lastly, because of the increasing energy gap between the valence s and p orbitals as the group is descended, the divalent state grows in importance from carbon to lead, so that a few unstable divalent compounds are known for silicon; this lowering of the main oxidation state, in tandem with increasing atomic radii, results in an increase of metallic character down the group. Silicon already shows some incipient metallic behavior, particularly in the behavior of its oxide compounds and its reaction with acids as well as bases (though this takes some effort), and is hence often referred to as a metalloid rather than a nonmetal. Germanium shows more, and tin is generally considered a metal.
Silicon shows clear differences from carbon. For example, organic chemistry has very few analogies with silicon chemistry, while silicate minerals have a structural complexity unseen in oxocarbons. Silicon tends to resemble germanium far more than it does carbon, and this resemblance is enhanced by the d-block contraction, resulting in the size of the germanium atom being much closer to that of the silicon atom than periodic trends would predict. Nevertheless, there are still some differences because of the growing importance of the divalent state in germanium compared to silicon. Additionally, the lower Ge–O bond strength compared to the Si–O bond strength results in the absence of "germanone" polymers that would be analogous to silicone polymers.
Occurrence
Silicon is the eighth most abundant element in the universe, coming after hydrogen, helium, carbon, nitrogen, oxygen, iron, and neon. These abundances are not replicated well on Earth due to substantial separation of the elements taking place during the formation of the Solar System. Silicon makes up 27.2% of the Earth's crust by weight, second only to oxygen at 45.5%, with which it always is associated in nature. Further fractionation took place in the formation of the Earth by planetary differentiation: Earth's core, which makes up 31.5% of the mass of the Earth, has a composition dominated by iron and nickel; the mantle makes up 68.1% of the Earth's mass and is composed mostly of denser oxides and silicates, an example being olivine, (Mg,Fe)2SiO4; while the lighter siliceous minerals such as aluminosilicates rise to the surface and form the crust, making up 0.4% of the Earth's mass.
The crystallisation of igneous rocks from magma depends on a number of factors; among them are the chemical composition of the magma, the cooling rate, and some properties of the individual minerals to be formed, such as lattice energy, melting point, and complexity of their crystal structure. As magma is cooled, olivine appears first, followed by pyroxene, amphibole, biotite mica, orthoclase feldspar, muscovite mica, quartz, zeolites, and finally, hydrothermal minerals. This sequence shows a trend toward increasingly complex silicate units with cooling, and the introduction of hydroxide and fluoride anions in addition to oxides. Many metals may substitute for silicon. After these igneous rocks undergo weathering, transport, and deposition, sedimentary rocks like clay, shale, and sandstone are formed. Metamorphism also may occur at high temperatures and pressures, creating an even vaster variety of minerals.
There are four sources for silicon fluxes into the ocean: chemical weathering of continental rocks, river transport, dissolution of continental terrigenous silicates, and the reaction between submarine basalts and hydrothermal fluid which release dissolved silicon. All four of these fluxes are interconnected in the ocean's biogeochemical cycle as they all were initially formed from the weathering of Earth's crust.
Approximately 300–900 megatonnes of Aeolian dust is deposited into the world's oceans each year. Of that value, 80–240 megatonnes are in the form of particulate silicon. The total amount of particulate silicon deposition into the ocean is still less than the amount of silicon influx into the ocean via riverine transportation. Aeolian inputs of particulate lithogenic silicon into the North Atlantic and Western North Pacific oceans are the result of dust settling on the oceans from the Sahara and Gobi Desert, respectively. Riverine transports are the major source of silicon influx into the ocean in coastal regions, while silicon deposition in the open ocean is greatly influenced by the settling of Aeolian dust.
Production
Silicon of 96–99% purity is made by carbothermically reducing quartzite or sand with highly pure coke. The reduction is carried out in an electric arc furnace, with an excess of SiO2 used to stop silicon carbide (SiC) from accumulating:
SiO2 + 2 C → Si + 2 CO
2 SiC + SiO2 → 3 Si + 2 CO
This reaction, known as carbothermal reduction of silicon dioxide, is usually conducted in the presence of scrap iron with low amounts of phosphorus and sulfur, producing ferrosilicon. Ferrosilicon, an iron–silicon alloy that contains varying ratios of elemental silicon and iron, accounts for about 80% of the world's production of elemental silicon. China, the leading supplier, provides 4.6 million tonnes (about two-thirds of world output) of silicon, most of it in the form of ferrosilicon. It is followed by Russia (610,000 t), Norway (330,000 t), Brazil (240,000 t), and the United States (170,000 t). Ferrosilicon is used mainly by the iron and steel industry (see below) as an alloying addition in iron or steel and for the de-oxidation of steel in integrated steel plants.
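For a rough sense of the raw-material quantities implied by the carbothermal route, the stoichiometry of SiO2 + 2 C → Si + 2 CO fixes the ideal feed ratios; the Python sketch below uses rounded molar masses and ignores the excess SiO2 and process losses of a real furnace.

# Ideal feedstock per tonne of silicon from SiO2 + 2 C -> Si + 2 CO.
M_SI, M_O, M_C = 28.09, 16.00, 12.01      # g/mol, rounded molar masses
M_SIO2 = M_SI + 2 * M_O                   # about 60.09 g/mol

tonnes_si = 1.0
tonnes_quartz = tonnes_si * M_SIO2 / M_SI     # roughly 2.14 t of SiO2
tonnes_carbon = tonnes_si * 2 * M_C / M_SI    # roughly 0.86 t of carbon (coke)

print(f"per {tonnes_si} t Si: {tonnes_quartz:.2f} t SiO2, {tonnes_carbon:.2f} t C")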
Another reaction, sometimes used, is aluminothermal reduction of silicon dioxide, as follows:
3 SiO2 + 4 Al → 3 Si + 2 Al2O3
Leaching powdered 96–97% pure silicon with water results in ~98.5% pure silicon, which is used in the chemical industry. However, even greater purity is needed for semiconductor applications, and this is produced from the reduction of tetrachlorosilane (silicon tetrachloride) or trichlorosilane. The former is made by chlorinating scrap silicon and the latter is a byproduct of silicone production. These compounds are volatile and hence can be purified by repeated fractional distillation, followed by reduction to elemental silicon with very pure zinc metal as the reducing agent. The spongy pieces of silicon thus produced are melted and then grown to form cylindrical single crystals, before being purified by zone refining. Other routes use the thermal decomposition of silane or tetraiodosilane (SiI4). Another process used is the reduction of sodium hexafluorosilicate, a common waste product of the phosphate fertilizer industry, by metallic sodium: this is highly exothermic and hence requires no outside energy source. Hyperfine silicon is made at a higher purity than almost any other material: transistor production requires impurity levels in silicon crystals less than 1 part per 1010, and in special cases impurity levels below 1 part per 1012 are needed and attained.
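To put such purity figures in perspective, an atomic impurity fraction can be converted into impurity atoms per cubic centimetre; the sketch below assumes an approximate atomic density of crystalline silicon of about 5 × 10^22 atoms/cm3, an assumed round number rather than a value from the text.

# Impurity atoms per cm^3 for a given atomic impurity fraction in silicon.
SI_ATOMS_PER_CM3 = 5.0e22   # approximate atomic density of crystalline silicon

for fraction in (1e-10, 1e-12):
    impurities = fraction * SI_ATOMS_PER_CM3
    print(f"1 part per {1/fraction:.0e}: ~{impurities:.0e} impurity atoms per cm^3")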
Silicon nanostructures can be produced directly from silica sand using conventional metallothermic processes or the combustion synthesis approach. Such nanostructured silicon materials can be used in various functional applications, including the anodes of lithium-ion batteries (LIBs) and other ion batteries, future computing devices such as memristors, and photocatalytic applications.
Applications
Compounds
Most silicon is used industrially without being purified, often with comparatively little processing from its natural form. More than 90% of the Earth's crust is composed of silicate minerals, which are compounds of silicon and oxygen, often together with metal cations that balance the charge of the negatively charged silicate anions. Many of these have direct commercial uses, such as clays, silica sand, and most kinds of building stone. Thus, the vast majority of uses for silicon are as structural compounds, either as the silicate minerals or as silica (crude silicon dioxide). Silicates are used in making Portland cement (made mostly of calcium silicates), which is used in building mortar and modern stucco, but more importantly, combined with silica sand and gravel (usually containing silicate minerals such as granite), to make the concrete that is the basis of most of the very largest industrial building projects of the modern world.
Silica is used to make fire brick, a type of ceramic. Silicate minerals are also in whiteware ceramics, an important class of products usually containing various types of fired clay minerals (natural aluminium phyllosilicates). An example is porcelain, which is based on the silicate mineral kaolinite. Traditional glass (silica-based soda–lime glass) also functions in many of the same ways, and also is used for windows and containers. In addition, specialty silica based glass fibers are used for optical fiber, as well as to produce fiberglass for structural support and glass wool for thermal insulation.
Silicones often are used in waterproofing treatments, molding compounds, mold-release agents, mechanical seals, high temperature greases and waxes, and caulking compounds. Silicone is also sometimes used in breast implants, contact lenses, explosives and pyrotechnics. Silly Putty was originally made by adding boric acid to silicone oil. Other silicon compounds function as high-technology abrasives and new high-strength ceramics based upon silicon carbide. Silicon is a component of some superalloys.
Alloys
Elemental silicon is added to molten cast iron as ferrosilicon or silicocalcium alloys to improve performance in casting thin sections and to prevent the formation of cementite where exposed to outside air. The presence of elemental silicon in molten iron acts as a sink for oxygen, so that the steel carbon content, which must be kept within narrow limits for each type of steel, can be more closely controlled. Ferrosilicon production and use is a monitor of the steel industry, and although this form of elemental silicon is grossly impure, it accounts for 80% of the world's use of free silicon. Silicon is an important constituent of transformer steel, modifying its resistivity and ferromagnetic properties.
The properties of silicon may be used to modify alloys with metals other than iron. "Metallurgical grade" silicon is silicon of 95–99% purity. About 55% of the world consumption of metallurgical purity silicon goes for production of aluminium-silicon alloys (silumin alloys) for aluminium part casts, mainly for use in the automotive industry. Silicon's importance in aluminium casting is that a significantly high amount (12%) of silicon in aluminium forms a eutectic mixture which solidifies with very little thermal contraction. This greatly reduces tearing and cracks formed from stress as casting alloys cool to solidity. Silicon also significantly improves the hardness and thus wear-resistance of aluminium.
Electronics
Most elemental silicon produced remains as a ferrosilicon alloy, and only approximately 20% is refined to metallurgical grade purity (a total of 1.3–1.5 million metric tons/year). An estimated 15% of the world production of metallurgical grade silicon is further refined to semiconductor purity. This typically is the "nine-9" or 99.9999999% purity, nearly defect-free single crystalline material.
Monocrystalline silicon of such purity is usually produced by the Czochralski process, and is used to produce silicon wafers used in the semiconductor industry, in electronics, and in some high-cost and high-efficiency photovoltaic applications. Pure silicon is an intrinsic semiconductor, which means that unlike metals, it conducts electron holes and electrons released from atoms by heat; silicon's electrical conductivity increases with higher temperatures. Pure silicon has too low a conductivity (i.e., too high a resistivity) to be used as a circuit element in electronics. In practice, pure silicon is doped with small concentrations of certain other elements, which greatly increase its conductivity and adjust its electrical response by controlling the number and charge (positive or negative) of activated carriers. Such control is necessary for transistors, solar cells, semiconductor detectors, and other semiconductor devices used in the computer industry and other technical applications. In silicon photonics, silicon may be used as a continuous wave Raman laser medium to produce coherent light.
In common integrated circuits, a wafer of monocrystalline silicon serves as a mechanical support for the circuits, which are created by doping and insulated from each other by thin layers of silicon oxide, an insulator that is easily produced on Si surfaces by processes of thermal oxidation or local oxidation (LOCOS), which involve exposing the element to oxygen under the proper conditions that can be predicted by the Deal–Grove model. Silicon has become the most popular material for both high power semiconductors and integrated circuits because it can withstand the highest temperatures and greatest electrical activity without suffering avalanche breakdown (an electron avalanche is created when heat produces free electrons and holes, which in turn pass more current, which produces more heat). In addition, the insulating oxide of silicon is not soluble in water, which gives it an advantage over germanium (an element with similar properties which can also be used in semiconductor devices) in certain fabrication techniques.
Monocrystalline silicon is expensive to produce, and is usually justified only in the production of integrated circuits, where tiny crystal imperfections can interfere with tiny circuit paths. For other uses, other types of pure silicon may be employed. These include hydrogenated amorphous silicon and upgraded metallurgical-grade silicon (UMG-Si) used in the production of low-cost, large-area electronics in applications such as liquid crystal displays and of large-area, low-cost, thin-film solar cells. Such semiconductor grades of silicon are either slightly less pure or polycrystalline rather than monocrystalline, and are produced in quantities comparable to those of monocrystalline silicon: 75,000 to 150,000 metric tons per year. The market for the lesser grade is growing more quickly than for monocrystalline silicon. By 2013, polycrystalline silicon production, used mostly in solar cells, was projected to reach 200,000 metric tons per year, while monocrystalline semiconductor grade silicon was expected to remain less than 50,000 tons per year.
Quantum dots
Silicon quantum dots are created through the thermal processing of hydrogen silsesquioxane into nanocrystals ranging from a few nanometers to a few microns, displaying size dependent luminescent properties. The nanocrystals display large Stokes shifts converting photons in the ultraviolet range to photons in the visible or infrared, depending on the particle size, allowing for applications in quantum dot displays and luminescent solar concentrators due to their limited self absorption. A benefit of using silicon based quantum dots over cadmium or indium is the non-toxic, metal-free nature of silicon.
Another application of silicon quantum dots is for sensing of hazardous materials. The sensors take advantage of the luminescent properties of the quantum dots through quenching of the photoluminescence in the presence of the hazardous substance. There are many methods used for hazardous chemical sensing, including electron transfer, fluorescence resonance energy transfer, and photocurrent generation. Electron transfer quenching occurs when the lowest unoccupied molecular orbital (LUMO) is slightly lower in energy than the conduction band of the quantum dot, allowing for the transfer of electrons between the two, preventing recombination of the holes and electrons within the nanocrystals. The effect can also be achieved in reverse with a donor molecule having its highest occupied molecular orbital (HOMO) slightly higher than a valence band edge of the quantum dot, allowing electrons to transfer between them, filling the holes and preventing recombination. Fluorescence resonance energy transfer occurs when a complex forms between the quantum dot and a quencher molecule. The complex will continue to absorb light but when the energy is converted to the ground state it does not release a photon, quenching the material. The third method uses a different approach, measuring the photocurrent emitted by the quantum dots instead of monitoring the photoluminescent display. If the concentration of the desired chemical increases then the photocurrent given off by the nanocrystals will change in response.
Thermal energy storage
Biological role
Although silicon is readily available in the form of silicates, very few organisms use it directly. Diatoms, radiolaria, and siliceous sponges use biogenic silica as a structural material for their skeletons. Some plants accumulate silica in their tissues and require silicon for their growth, for example rice. Silicon may be taken up by plants as orthosilicic acid (also known as monosilicic acid) and transported through the xylem, where it forms amorphous complexes with components of the cell wall. This has been shown to improve cell wall strength and structural integrity in some plants, thereby reducing insect herbivory and pathogenic infections. In certain plants, silicon may also upregulate the production of volatile organic compounds and phytohormones which play a significant role in plant defense mechanisms. In more advanced plants, the silica phytoliths (opal phytoliths) are rigid microscopic bodies occurring in the cell.
Several horticultural crops are known to protect themselves against fungal plant pathogens with silica, to such a degree that fungicide application may fail unless accompanied by sufficient silicon nutrition. Silicaceous plant defense molecules activate some phytoalexins, meaning some of them are signalling substances producing acquired immunity. When deprived, some plants will substitute with increased production of other defensive substances.
Life on Earth is largely composed of carbon, but astrobiology considers that extraterrestrial life may have other hypothetical types of biochemistry. Silicon is considered an alternative to carbon, as it can create complex and stable molecules with four covalent bonds, required for a DNA-analog, and it is available in large quantities.
Marine microbial influences
Diatoms use silicon in the biogenic silica (bSi) form, which is taken up by the silicon transport protein (SIT) to be predominantly used in the cell wall structure as frustules. Silicon enters the ocean in a dissolved form such as silicic acid or silicate. Since diatoms are one of the main users of these forms of silicon, they contribute greatly to the concentration of silicon throughout the ocean. Silicon forms a nutrient-like profile in the ocean due to the diatom productivity in shallow depths. Therefore, concentration of silicon is lower in the shallow ocean and higher in the deep ocean.
Diatom productivity in the upper ocean contributes to the amount of silicon exported to the lower ocean. When diatom cells are lysed in the upper ocean, their nutrients, such as iron, zinc, and silicon, are brought to the lower ocean through a process called marine snow. Marine snow involves the downward transfer of particulate organic matter by vertical mixing of dissolved organic matter. It has been suggested that silicon is crucial to diatom productivity and that, as long as silicic acid is available for diatoms to use, the diatoms can contribute to other important nutrient concentrations in the deep ocean as well.
In coastal zones, diatoms serve as the major phytoplanktonic organisms and greatly contribute to biogenic silica production. In the open ocean, however, diatoms have a reduced role in global annual silica production. Diatoms in North Atlantic and North Pacific subtropical gyres only contribute about 5–7% of global annual marine silica production. The Southern Ocean produces about one-third of global marine biogenic silica. The Southern Ocean is referred to as having a "biogeochemical divide" since only minuscule amounts of silicon are transported out of this region.
Human nutrition
There is some evidence that silicon is important to human health for nail, hair, bone, and skin tissues; for example, studies demonstrate that premenopausal women with higher dietary silicon intake have higher bone density, and that silicon supplementation can increase bone volume and density in patients with osteoporosis. Silicon is needed for synthesis of elastin and collagen, of which the aorta contains the greatest quantity in the human body, and has been considered an essential element; nevertheless, it is difficult to prove its essentiality, because silicon is very common, and hence, deficiency symptoms are difficult to reproduce.
Silicon is currently under consideration for elevation to the status of a "plant beneficial substance" by the Association of American Plant Food Control Officials (AAPFCO).
Safety
People may be exposed to elemental silicon in the workplace by breathing it in, swallowing it, or having contact with the skin or eye. In the latter two cases, silicon poses a slight hazard as an irritant. It is hazardous if inhaled. The Occupational Safety and Health Administration (OSHA) has set the legal limit for silicon exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an eight-hour workday. Inhalation of crystalline silica dust may lead to silicosis, an occupational lung disease marked by inflammation and scarring in the form of nodular lesions in the upper lobes of the lungs.
See also
Amorphous silicon
Black silicon
Covalent superconductors
List of countries by silicon production
List of silicon producers
Monocrystalline silicon
Polycrystalline silicon
Printed silicon electronics
Silicene
Silicon nanowire
Silicon tombac
Silicon Valley
Transistor
Notes
References
Bibliography
External links
Chemical elements
Metalloids
Group IV semiconductors
Pyrotechnic fuels
Dietary minerals
Reducing agents
Native element minerals
Chemical elements with diamond cubic structure
Crystals in space group 227
Crystals in space group 206
Materials that expand upon freezing | Silicon | [
"Physics",
"Chemistry"
] | 8,872 | [
"Physical phenomena",
"Phase transitions",
"Chemical elements",
"Redox",
"Semiconductor materials",
"Reducing agents",
"Group IV semiconductors",
"Materials",
"Materials that expand upon freezing",
"Atoms",
"Matter"
] |
27,573 | https://en.wikipedia.org/wiki/Superfluid%20helium-4 | Superfluid helium-4 (helium II or He-II) is the superfluid form of helium-4, an isotope of the element helium. A superfluid is a state of matter in which matter behaves like a fluid with zero viscosity. The substance, which resembles other liquids such as helium I (conventional, non-superfluid liquid helium), flows without friction past any surface, which allows it to continue to circulate over obstructions and through pores in containers which hold it, subject only to its own inertia.
The formation of the superfluid is a manifestation of the formation of a Bose–Einstein condensate of helium atoms. This condensation occurs in liquid helium-4 at a far higher temperature (2.17 K) than it does in helium-3 (2.5 mK) because each atom of helium-4 is a boson particle, by virtue of its zero spin. Helium-3, however, is a fermion particle, which can form bosons only by pairing with itself at much lower temperatures, in a weaker process that is similar to the electron pairing in superconductivity.
History
Known as a major facet in the study of quantum hydrodynamics and macroscopic quantum phenomena, the superfluidity effect was discovered by Pyotr Kapitsa and, independently, by John F. Allen and Don Misener in 1937. Heike Kamerlingh Onnes possibly observed the superfluid phase transition on August 2, 1911, the same day that he observed superconductivity in mercury. It has since been described through phenomenological and microscopic theories.
In the 1950s, Hall and Vinen performed experiments establishing the existence of quantized vortex lines in superfluid helium. In the 1960s, Rayfield and Reif established the existence of quantized vortex rings. Packard has observed the intersection of vortex lines with the free surface of the fluid, and Avenel and Varoquaux have studied the Josephson effect in superfluid helium-4. In 2006, a group at the University of Maryland visualized quantized vortices by using small tracer particles of solid hydrogen.
In the early 2000s, physicists created a Fermionic condensate from pairs of ultra-cold fermionic atoms. Under certain conditions, fermion pairs form diatomic molecules and undergo Bose–Einstein condensation. At the other limit, the fermions (most notably superconducting electrons) form Cooper pairs which also exhibit superfluidity. This work with ultra-cold atomic gases has allowed scientists to study the region in between these two extremes, known as the BEC-BCS crossover.
Supersolids may also have been discovered in 2004 by physicists at Penn State University. When helium-4 is cooled below about 200 mK under high pressures, a fraction (≈1%) of the solid appears to become superfluid. By quench cooling or lengthening the annealing time, thus increasing or decreasing the defect density respectively, it was shown, via torsional oscillator experiment, that the supersolid fraction could be made to range from 20% to completely non-existent. This suggested that the supersolid nature of helium-4 is not intrinsic to helium-4 but a property of helium-4 and disorder. Some emerging theories posit that the supersolid signal observed in helium-4 was actually an observation of either a superglass state or intrinsically superfluid grain boundaries in the helium-4 crystal.
Applications
Recently in the field of chemistry, superfluid helium-4 has been successfully used in spectroscopic techniques as a quantum solvent. Referred to as superfluid helium droplet spectroscopy (SHeDS), it is of great interest in studies of gas molecules, as a single molecule solvated in a superfluid medium allows a molecule to have effective rotational freedom, allowing it to behave similarly to how it would in the "gas" phase. Droplets of superfluid helium also have a characteristic temperature of about 0.4 K which cools the solvated molecule(s) to its ground or nearly ground rovibronic state.
Superfluids are also used in high-precision devices such as gyroscopes, which allow the measurement of some theoretically predicted gravitational effects (for an example, see Gravity Probe B).
The Infrared Astronomical Satellite IRAS, launched in January 1983 to gather infrared data was cooled by 73 kilograms of superfluid helium, maintaining a temperature of . When used in conjunction with helium-3, temperatures as low as 40 mK are routinely achieved in extreme low temperature experiments. The helium-3, in liquid state at 3.2 K, can be evaporated into the superfluid helium-4, where it acts as a gas due to the latter's properties as a Bose–Einstein condensate. This evaporation pulls energy from the overall system, which can be pumped out in a way completely analogous to normal refrigeration techniques. (See dilution refrigerator)
Superfluid-helium technology is used to extend the temperature range of cryocoolers to lower temperatures. So far the limit is 1.19 K, but there is a potential to reach 0.7 K.
Properties
Superfluids, such as helium-4 below the lambda point (known, for simplicity, as helium II), exhibit many unusual properties. A superfluid acts as if it were a mixture of a normal component, with all the properties of a normal fluid, and a superfluid component. The superfluid component has zero viscosity and zero entropy. Application of heat to a spot in superfluid helium results in a flow of the normal component which takes care of the heat transport at relatively high velocity (up to 20 cm/s) which leads to a very high effective thermal conductivity.
Film flow
Many ordinary liquids, like alcohol or petroleum, creep up solid walls, driven by their surface tension. Liquid helium also has this property, but, in the case of He-II, the flow of the liquid in the layer is not restricted by its viscosity but by a critical velocity which is about 20 cm/s. This is a fairly high velocity so superfluid helium can flow relatively easily up the wall of containers, over the top, and down to the same level as the surface of the liquid inside the container, in a siphon effect. It was, however, observed that the flow through a nanoporous membrane becomes restricted if the pore diameter is less than 0.7 nm (i.e. roughly three times the classical diameter of a helium atom), suggesting that the unusual hydrodynamic properties of helium arise at a larger scale than in classical liquid helium.
Rotation
Another fundamental property becomes visible if a superfluid is placed in a rotating container. Instead of rotating uniformly with the container, the rotating state consists of quantized vortices. That is, when the container is rotated at speeds below the first critical angular velocity, the liquid remains perfectly stationary. Once the first critical angular velocity is reached, the superfluid will form a vortex. The vortex strength is quantized, that is, a superfluid can only spin at certain "allowed" values. Rotation in a normal fluid, like water, is not quantized. If the rotation speed is increased more and more quantized vortices will be formed which arrange in nice patterns similar to the Abrikosov lattice in a superconductor.
Comparison with helium-3
Although the phenomenologies of the superfluid states of helium-4 and helium-3 are very similar, the microscopic details of the transitions are very different. Helium-4 atoms are bosons, and their superfluidity can be understood in terms of the Bose–Einstein statistics that they obey. Specifically, the superfluidity of helium-4 can be regarded as a consequence of Bose–Einstein condensation in an interacting system. On the other hand, helium-3 atoms are fermions, and the superfluid transition in this system is described by a generalization of the BCS theory of superconductivity. In it, Cooper pairing takes place between atoms rather than electrons, and the attractive interaction between them is mediated by spin fluctuations rather than phonons. (See fermion condensate.) A unified description of superconductivity and superfluidity is possible in terms of gauge symmetry breaking.
Macroscopic theory
Thermodynamics
Figure 1 is the phase diagram of 4He. It is a pressure-temperature (p-T) diagram indicating the solid and liquid regions separated by the melting curve (between the liquid and solid state) and the liquid and gas region, separated by the vapor-pressure line. This latter ends in the critical point where the difference between gas and liquid disappears. The diagram shows the remarkable property that 4He is liquid even at absolute zero. 4He is only solid at pressures above 25 bar.
Figure 1 also shows the λ-line. This is the line that separates two fluid regions in the phase diagram indicated by He-I and He-II. In the He-I region the helium behaves like a normal fluid; in the He-II region the helium is superfluid.
The name lambda-line comes from the specific heat – temperature plot which has the shape of the Greek letter λ. See figure 2, which shows a peak at 2.172 K, the so-called λ-point of 4He.
Below the lambda line the liquid can be described by the so-called two-fluid model. It behaves as if it consists of two components: a normal component, which behaves like a normal fluid, and a superfluid component with zero viscosity and zero entropy. The ratios of the respective densities ρn/ρ and ρs/ρ, with ρn (ρs) the density of the normal (superfluid) component, and ρ (the total density), depend on temperature and are represented in figure 3. By lowering the temperature, the fraction of the superfluid density increases from zero at Tλ to one at zero kelvin. Below 1 K the helium is almost completely superfluid.
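A commonly used empirical approximation for the normal-fluid fraction between about 1 K and the lambda point is ρn/ρ ≈ (T/Tλ)^5.6; the Python sketch below uses that assumed fit (it is an empirical approximation, not an exact result of the two-fluid model) to tabulate the two fractions.

# Rough sketch of the two-fluid densities versus temperature, using an assumed empirical fit.
T_LAMBDA = 2.172  # K, lambda point of helium-4

def normal_fraction(T_kelvin):
    """Approximate rho_n/rho from the empirical fit (T/T_lambda)**5.6.

    The exponent 5.6 is an assumed empirical value, reasonable only for
    roughly 1 K < T < T_lambda; it is not exact theory.
    """
    if not 0.0 <= T_kelvin <= T_LAMBDA:
        raise ValueError("fit only meaningful below the lambda point")
    return (T_kelvin / T_LAMBDA) ** 5.6

for T in (2.17, 2.0, 1.5, 1.0):
    rho_n = normal_fraction(T)
    print(f"T = {T:4.2f} K: rho_n/rho ~ {rho_n:.3f}, rho_s/rho ~ {1 - rho_n:.3f}")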
It is possible to create density waves of the normal component (and hence of the superfluid component since ρn + ρs = constant) which are similar to ordinary sound waves. This effect is called second sound. Due to the temperature dependence of ρn (figure 3) these waves in ρn are also temperature waves.
Superfluid hydrodynamics
The equation of motion for the superfluid component, in a somewhat simplified form, is given by Newton's law

F = M dvs/dt.     (1)

The mass M is the molar mass of 4He, and vs is the velocity of the superfluid component. The time derivative is the so-called hydrodynamic derivative, i.e. the rate of increase of the velocity when moving with the fluid. In the case of superfluid 4He in the gravitational field the force is given by

F = −∇(μ + Mgz).     (2)

In this expression μ is the molar chemical potential, g the gravitational acceleration, and z the vertical coordinate. Combining the two relations gives

M dvs/dt = −∇(μ + Mgz).     (3)
Eq. (3) only holds if vs is below a certain critical value, which usually is determined by the diameter of the flow channel.
In classical mechanics the force is often the gradient of a potential energy. Eq. (3) shows that, in the case of the superfluid component, the force contains a term due to the gradient of the chemical potential. This is the origin of the remarkable properties of He-II such as the fountain effect.
Fountain pressure
In order to rewrite Eq. (3) in a more familiar form we use the general formula

dμ = Vm dp − Sm dT.     (4)

Here Sm is the molar entropy and Vm the molar volume. With Eq. (4), μ(p,T) can be found by a line integration in the p–T plane. First we integrate from the origin (0,0) to (p,0), so at T = 0. Next we integrate from (p,0) to (p,T), so with constant pressure (see figure 6). In the first integral dT = 0 and in the second dp = 0. With Eq. (4) we obtain

μ(p,T) = μ(0,0) + ∫0→p Vm dp′ − ∫0→T Sm dT′.     (5)

We are interested only in cases where p is small so that Vm is practically constant. So

μ(p,T) = μ(0,0) + Vm0 p − ∫0→T Sm dT′,     (6)

where Vm0 is the molar volume of the liquid at T = 0 and p = 0. The other term in Eq. (6) is also written as a product of Vm0 and a quantity pf which has the dimension of pressure

∫0→T Sm dT′ = Vm0 pf.     (7)
The pressure pf is called the fountain pressure. It can be calculated from the entropy of 4He which, in turn, can be calculated from the heat capacity. For T = Tλ the fountain pressure is equal to 0.692 bar. With a density of liquid helium of 125 kg/m3 and g = 9.8 m/s2 this corresponds with a liquid-helium column of 56 meter height. So, in many experiments, the fountain pressure has a bigger effect on the motion of the superfluid helium than gravity.
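The 56 m figure quoted above follows directly from h = pf/(ρg); the short Python sketch below simply checks that arithmetic with the values given in the text.

# Convert the fountain pressure at the lambda point into an equivalent
# column of liquid helium, using h = p / (rho * g) with values from the text.
p_fountain = 0.692e5   # Pa (0.692 bar)
rho_helium = 125.0     # kg/m^3, density of liquid helium
g = 9.8                # m/s^2

height = p_fountain / (rho_helium * g)
print(f"equivalent liquid-helium column: {height:.0f} m")  # about 56 m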
With Eqs. (6) and (7), Eq. (3) obtains the form

M dvs/dt = −∇(μ(0,0) + Vm0(p − pf) + Mgz).     (8)

Since μ(0,0) is a constant, dividing by the molar volume Vm0 gives

ρ0 dvs/dt = −∇(p − pf) − ρ0 g ∇z,     (9)

with ρ0 = M/Vm0 the density of liquid 4He at zero pressure and temperature.
Eq. (9) shows that the superfluid component is accelerated by gradients in the pressure and in the gravitational field, as usual, but also by a gradient in the fountain pressure.
So far the fountain pressure pf has only mathematical meaning, but in special experimental arrangements it can show up as a real pressure. Figure 7 shows two vessels both containing He-II. The left vessel is supposed to be at zero kelvin (Tl = 0) and zero pressure (pl = 0). The vessels are connected by a so-called superleak. This is a tube, filled with a very fine powder, so the flow of the normal component is blocked. However, the superfluid component can flow through this superleak without any problem (below a critical velocity of about 20 cm/s). In the steady state dvs/dt = 0, so Eq. (8) implies

μ(0,0) + Vm0(pl − pfl) + Mgzl = μ(0,0) + Vm0(pr − pfr) + Mgzr,     (10)

where the indexes l and r apply to the left and right side of the superleak respectively. In this particular case pl = 0, zl = zr, and pfl = 0 (since Tl = 0). Consequently,

pr = pfr.

This means that the pressure in the right vessel is equal to the fountain pressure at Tr.
In an experiment, arranged as in figure 8, a fountain can be created. The fountain effect is used to drive the circulation of 3He in dilution refrigerators.
Heat transport
Figure 9 depicts a heat-conduction experiment between two temperatures TH and TL connected by a tube filled with He-II. When heat is applied to the hot end a pressure builds up at the hot end according to Eq. (7). This pressure drives the normal component from the hot end to the cold end according to

Δp = ηn Zn V̇n.     (11)

Here ηn is the viscosity of the normal component, Zn some geometrical factor, and V̇n the volume flow. The normal flow is balanced by a flow of the superfluid component from the cold to the hot end. At the end sections a normal to superfluid conversion takes place and vice versa. So heat is transported, not by heat conduction, but by convection. This kind of heat transport is very effective, so the thermal conductivity of He-II is very much better than the best materials. The situation is comparable with heat pipes where heat is transported via gas–liquid conversion. The high thermal conductivity of He-II is applied for stabilizing superconducting magnets such as in the Large Hadron Collider at CERN.
Microscopic theory
Landau two-fluid approach
L. D. Landau's phenomenological and semi-microscopic theory of superfluidity of helium-4 earned him the Nobel Prize in Physics in 1962. Assuming that sound waves are the most important excitations in helium-4 at low temperatures, he showed that helium-4 flowing past a wall would not spontaneously create excitations if the flow velocity was less than the sound velocity. In this model, the sound velocity is the "critical velocity" above which superfluidity is destroyed. (The actual critical velocity of helium-4 is lower than the sound velocity, but this model is useful to illustrate the concept.) Landau also showed that the sound wave and other excitations could equilibrate with one another and flow separately from the rest of the helium-4, which is known as the "condensate".
From the momentum and flow velocity of the excitations he could then define a "normal fluid" density, which is zero at zero temperature and increases with temperature. At the so-called Lambda temperature, where the normal fluid density equals the total density, the helium-4 is no longer superfluid.
To explain the early specific heat data on superfluid helium-4, Landau posited the existence of a type of excitation he called a "roton", but as better data became available he considered that the "roton" was the same as a high momentum version of sound.
The Landau theory does not elaborate on the microscopic structure of the superfluid component of liquid helium. The first attempts to create a microscopic theory of the superfluid component itself were done by London and subsequently, Tisza.
Other microscopical models have been proposed by different authors. Their main objective is to derive the form of the inter-particle potential between helium atoms in superfluid state from first principles of quantum mechanics.
To date, a number of models of this kind have been proposed, including: models with vortex rings, hard-sphere models, and Gaussian cluster theories.
Vortex ring model
Landau thought that vorticity entered superfluid helium-4 by vortex sheets, but such sheets have since been shown to be unstable.
Lars Onsager and, later independently, Feynman showed that vorticity enters by quantized vortex lines. They also developed the idea of quantum vortex rings.
Arie Bijl in the 1940s, and Richard Feynman around 1955, developed microscopic theories for the roton, which was shortly afterwards observed in inelastic neutron experiments by Palevsky. Later on, Feynman admitted that his model gives only qualitative agreement with experiment.
Hard-sphere models
The models are based on the simplified form of the inter-particle potential between helium-4 atoms in the superfluid phase. Namely, the potential is assumed to be of the hard-sphere type.
In these models the famous Landau (roton) spectrum of excitations is qualitatively reproduced.
Gaussian cluster approach
This is a two-scale approach which describes the superfluid component of liquid helium-4. It consists of two nested models linked via parametric space. The short-wavelength part describes the interior structure of the fluid element using a non-perturbative approach based on the logarithmic Schrödinger equation; it suggests the Gaussian-like behaviour of the element's interior density and interparticle interaction potential. The long-wavelength part is the quantum many-body theory of such elements which deals with their dynamics and interactions. The approach provides a unified description of the phonon, maxon and roton excitations, and has noteworthy agreement with experiment: with one essential parameter to fit one reproduces at high accuracy the Landau roton spectrum, sound velocity and structure factor of superfluid helium-4. This model utilizes the general theory of quantum Bose liquids with logarithmic nonlinearities which is based on introducing a dissipative-type contribution to energy related to the quantum Everett–Hirschman entropy function.
See also
Douglas D. Osheroff
Large Hadron Collider
London moment
Polariton superfluid
Quantum acoustics
Quantum gyroscope
Superdiamagnetism
Superfluid film
Timeline of low-temperature technology
References
Further reading
Antony M. Guénault: Basic superfluids. Taylor & Francis, London 2003,
D.R. Tilley and J. Tilley, Superfluidity and Superconductivity, (IOP Publishing Ltd., Bristol, 1990)
Department of Energy Office of Science: Superfluidity
Hagen Kleinert, Gauge Fields in Condensed Matter, Vol. I, "SUPERFLOW AND VORTEX LINES", pp. 1–742, World Scientific (Singapore, 1989); Paperback (also available online)
James F. Annett: Superconductivity, superfluids, and condensates. Oxford Univ. Press, Oxford 2005,
London, F. Superfluids (Wiley, New York, 1950)
Philippe Lebrun & Laurent Tavian: The technology of superfluid helium
External links
Helium-4 Interactive Properties
http://web.mit.edu/newsoffice/2005/matter.html
Liquid Helium II, Superfluid: demonstrations of Lambda point transition/viscosity paradox /two fluid model/fountain effect/creeping film/ second sound.
Physics Today February 2001
superfluid hydrodynamics
Superfluid phases of helium
The Hindu article on superfluid states
Video including superfluid helium's strange behavior
Liquid helium
Bose–Einstein condensates
Fluid dynamics
Superfluidity | Superfluid helium-4 | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,311 | [
"Physical phenomena",
"Phase transitions",
"Bose–Einstein condensates",
"Chemical engineering",
"Phases of matter",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Piping",
"Matter",
"Fluid dynamics"
] |
27,709 | https://en.wikipedia.org/wiki/Semiconductor | A semiconductor is a material that is between the conductor and insulator in ability to conduct electrical current. In many cases their conducting properties may be altered in useful ways by introducing impurities ("doping") into the crystal structure. When two differently doped regions exist in the same crystal, a semiconductor junction is created. The behavior of charge carriers, which include electrons, ions, and electron holes, at these junctions is the basis of diodes, transistors, and most modern electronics. Some examples of semiconductors are silicon, germanium, gallium arsenide, and elements near the so-called "metalloid staircase" on the periodic table. After silicon, gallium arsenide is the second-most common semiconductor and is used in laser diodes, solar cells, microwave-frequency integrated circuits, and others. Silicon is a critical element for fabricating most electronic circuits.
Semiconductor devices can display a range of different useful properties, such as passing current more easily in one direction than the other, showing variable resistance, and having sensitivity to light or heat. Because the electrical properties of a semiconductor material can be modified by doping and by the application of electrical fields or light, devices made from semiconductors can be used for amplification, switching, and energy conversion. The term semiconductor is also used to describe materials used in high capacity, medium- to high-voltage cables as part of their insulation, and these materials are often plastic XLPE (Cross-linked polyethylene) with carbon black.
The conductivity of silicon is increased by adding a small amount (of the order of 1 in 108) of pentavalent (antimony, phosphorus, or arsenic) or trivalent (boron, gallium, indium) atoms. This process is known as doping, and the resulting semiconductors are known as doped or extrinsic semiconductors. Apart from doping, the conductivity of a semiconductor can be improved by increasing its temperature. This is contrary to the behavior of a metal, in which conductivity decreases with an increase in temperature.
The modern understanding of the properties of a semiconductor relies on quantum physics to explain the movement of charge carriers in a crystal lattice. Doping greatly increases the number of charge carriers within the crystal. When a semiconductor is doped by Group V elements, they will behave like donors creating free electrons, known as "n-type" doping. When a semiconductor is doped by Group III elements, they will behave like acceptors creating free holes, known as "p-type" doping. The semiconductor materials used in electronic devices are doped under precise conditions to control the concentration and regions of p- and n-type dopants. A single semiconductor device crystal can have many p- and n-type regions; the p–n junctions between these regions are responsible for the useful electronic behavior. Using a hot-point probe, one can determine quickly whether a semiconductor sample is p- or n-type.
A few of the properties of semiconductor materials were observed throughout the mid-19th and first decades of the 20th century. The first practical application of semiconductors in electronics was the 1904 development of the cat's-whisker detector, a primitive semiconductor diode used in early radio receivers. Developments in quantum physics led in turn to the invention of the transistor in 1947 and the integrated circuit in 1958.
Properties
Variable electrical conductivity
Semiconductors in their natural state are poor conductors because a current requires the flow of electrons, and semiconductors have their valence bands filled, preventing the flow of new electrons. Several developed techniques allow semiconducting materials to behave like conducting materials, such as doping or gating. These modifications have two outcomes: n-type and p-type. These refer to the excess or shortage of electrons, respectively. A balanced number of electrons would cause a current to flow throughout the material.
Homojunctions
Homojunctions occur when two differently doped semiconducting materials are joined. For example, a configuration could consist of p-doped and n-doped germanium. This results in an exchange of electrons and holes between the differently doped semiconducting materials. The n-doped germanium would have an excess of electrons, and the p-doped germanium would have an excess of holes. The transfer occurs until an equilibrium is reached by a process called recombination, which causes the migrating electrons from the n-type to come in contact with the migrating holes from the p-type. The result of this process is a narrow strip of immobile ions, which causes an electric field across the junction.
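The equilibrium described here is often summarised by the built-in potential of the junction, Vbi = (kT/q) ln(NA ND / ni²); the Python sketch below evaluates that standard expression with illustrative doping levels and an approximate intrinsic carrier density for silicon (the example in the text uses germanium, but the same formula applies; all numbers here are assumptions for illustration).

import math

# Built-in potential of a p-n homojunction, V_bi = (kT/q) * ln(Na * Nd / ni^2).
KT_OVER_Q = 0.02585      # thermal voltage at about 300 K, volts
NI_SILICON = 1.0e10      # assumed intrinsic carrier density of silicon, cm^-3 (approximate)

def built_in_potential(acceptors_cm3, donors_cm3):
    return KT_OVER_Q * math.log(acceptors_cm3 * donors_cm3 / NI_SILICON**2)

print(f"V_bi ~ {built_in_potential(1e17, 1e17):.2f} V")  # roughly 0.8 V for these doping levels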
Excited electrons
A difference in electric potential on a semiconducting material would cause it to leave thermal equilibrium and create a non-equilibrium situation. This introduces electrons and holes to the system, which interact via a process called ambipolar diffusion. Whenever thermal equilibrium is disturbed in a semiconducting material, the number of holes and electrons changes. Such disruptions can occur as a result of a temperature difference or photons, which can enter the system and create electrons and holes. The processes that create or annihilate electrons and holes are called generation and recombination, respectively.
Light emission
In certain semiconductors, excited electrons can relax by emitting light instead of producing heat. Controlling the semiconductor composition and electrical current allows for the manipulation of the emitted light's properties. These semiconductors are used in the construction of light-emitting diodes and fluorescent quantum dots.
High thermal conductivity
Semiconductors with high thermal conductivity can be used for heat dissipation and improving thermal management of electronics. They play a crucial role in electric vehicles, high-brightness LEDs and power modules, among other applications.
Thermal energy conversion
Semiconductors have large thermoelectric power factors making them useful in thermoelectric generators, as well as high thermoelectric figures of merit making them useful in thermoelectric coolers.
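The figures of merit mentioned here are defined as the power factor S²σ and the dimensionless zT = S²σT/κ; the Python sketch below evaluates both for made-up, order-of-magnitude values rather than data for any particular material.

# Thermoelectric power factor and dimensionless figure of merit, zT = S^2 * sigma * T / kappa.
def power_factor(seebeck_v_per_k, sigma_s_per_m):
    return seebeck_v_per_k ** 2 * sigma_s_per_m          # W/(m*K^2)

def figure_of_merit(seebeck, sigma, kappa_w_per_mk, temperature_k):
    return power_factor(seebeck, sigma) * temperature_k / kappa_w_per_mk

# Illustrative (hypothetical) numbers of the right order for a doped semiconductor.
S, sigma, kappa, T = 200e-6, 1.0e5, 1.5, 300.0
print(f"power factor = {power_factor(S, sigma):.2e} W/(m*K^2)")
print(f"zT = {figure_of_merit(S, sigma, kappa, T):.2f}")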
Materials
A large number of elements and compounds have semiconducting properties, including:
Certain pure elements are found in group 14 of the periodic table; the most commercially important of these elements are silicon and germanium. Silicon and germanium are used effectively because they have four valence electrons in their outermost shell, which allows them to gain or lose electrons equally.
Binary compounds, particularly between elements in groups 13 and 15, such as gallium arsenide, groups 12 and 16, groups 14 and 16, and between different group-14 elements, e.g. silicon carbide.
Certain ternary compounds, oxides, and alloys.
Organic semiconductors, made of organic compounds.
Semiconducting metal–organic frameworks.
The most common semiconducting materials are crystalline solids, but amorphous and liquid semiconductors are also known. These include hydrogenated amorphous silicon and mixtures of arsenic, selenium, and tellurium in a variety of proportions. These compounds share with better-known semiconductors the properties of intermediate conductivity and a rapid variation of conductivity with temperature, as well as occasional negative resistance. Such disordered materials lack the rigid crystalline structure of conventional semiconductors such as silicon. They are generally used in thin film structures, which do not require material of higher electronic quality, being relatively insensitive to impurities and radiation damage.
Preparation of semiconductor materials
Almost all of today's electronic technology involves the use of semiconductors, with the most important aspect being the integrated circuit (IC), which are found in desktops, laptops, scanners, cell-phones, and other electronic devices. Semiconductors for ICs are mass-produced. To create an ideal semiconducting material, chemical purity is paramount. Any small imperfection can have a drastic effect on how the semiconducting material behaves due to the scale at which the materials are used.
A high degree of crystalline perfection is also required, since faults in the crystal structure (such as dislocations, twins, and stacking faults) interfere with the semiconducting properties of the material. Crystalline faults are a major cause of defective semiconductor devices. The larger the crystal, the more difficult it is to achieve the necessary perfection. Current mass production processes use crystal ingots between 100 mm and 300 mm in diameter, grown as cylinders and sliced into wafers. The round shape characteristic of these wafers comes from single-crystal ingots usually produced using the Czochralski method. Silicon wafers were first introduced in the 1940s.
A combination of processes is used to prepare semiconducting materials for ICs. One process is called thermal oxidation, which forms silicon dioxide on the surface of the silicon. This is used as a gate insulator and field oxide. Other steps involve photomasks and photolithography, which create the patterns on the circuit in the integrated circuit. Ultraviolet light is used along with a photoresist layer to create a chemical change that generates the patterns for the circuit.
Etching is the next required process. The part of the silicon that was not covered by the photoresist layer from the previous step can now be etched. The main process typically used today is called plasma etching. Plasma etching usually involves an etch gas pumped into a low-pressure chamber to create plasma. A common etch gas is a chlorofluorocarbon, more commonly known as Freon. A high radio-frequency voltage between the cathode and anode is what creates the plasma in the chamber. The silicon wafer is located on the cathode, which causes it to be hit by the positively charged ions that are released from the plasma. The result is silicon that is etched anisotropically.
The last process is called diffusion. This is the process that gives the semiconducting material its desired semiconducting properties. It is also known as doping. The process introduces impurity atoms into the system, which creates the p–n junctions. To embed the impurity atoms in the silicon wafer, the wafer is first placed in a chamber at 1,100 °C. The atoms are injected and eventually diffuse into the silicon. After the process is completed and the silicon has cooled to room temperature, the doping is done and the semiconducting wafer is almost ready.
Physics of semiconductors
Energy bands and electrical conduction
Semiconductors are defined by their unique electric conductive behavior, somewhere between that of a conductor and an insulator. The differences between these materials can be understood in terms of the quantum states for electrons, each of which may contain zero or one electron (by the Pauli exclusion principle). These states are associated with the electronic band structure of the material. Electrical conductivity arises due to the presence of electrons in states that are delocalized (extending through the material); however, in order to transport electrons, a state must be partially filled, containing an electron only part of the time. If the state is always occupied with an electron, then it is inert, blocking the passage of other electrons via that state. The energies of these quantum states are critical since a state is partially filled only if its energy is near the Fermi level (see Fermi–Dirac statistics).
High conductivity in a material comes from it having many partially filled states and much state delocalization.
Metals are good electrical conductors and have many partially filled states with energies near their Fermi level.
Insulators, by contrast, have few partially filled states: their Fermi levels sit within band gaps, with few energy states available to occupy. Importantly, an insulator can be made to conduct by increasing its temperature: heating provides energy to promote some electrons across the band gap, inducing partially filled states in both the band of states beneath the band gap (valence band) and the band of states above the band gap (conduction band). An (intrinsic) semiconductor has a band gap that is smaller than that of an insulator and at room temperature, significant numbers of electrons can be excited to cross the band gap.
A pure semiconductor, however, is not very useful, as it is neither a very good insulator nor a very good conductor.
However, one important feature of semiconductors (and some insulators, known as semi-insulators) is that their conductivity can be increased and controlled by doping with impurities and gating with electric fields. Doping and gating move either the conduction or valence band much closer to the Fermi level and greatly increase the number of partially filled states.
Some wider-bandgap semiconductor materials are sometimes referred to as semi-insulators. When undoped, these have electrical conductivity nearer to that of electrical insulators; however, they can be doped (making them as useful as semiconductors). Semi-insulators find niche applications in micro-electronics, such as substrates for HEMT. An example of a common semi-insulator is gallium arsenide. Some materials, such as titanium dioxide, can even be used as insulating materials for some applications, while being treated as wide-gap semiconductors for other applications.
Charge carriers (electrons and holes)
The partial filling of the states at the bottom of the conduction band can be understood as adding electrons to that band. The electrons do not stay indefinitely (due to the natural thermal recombination) but they can move around for some time. The actual concentration of electrons is typically very dilute, and so (unlike in metals) it is possible to think of the electrons in the conduction band of a semiconductor as a sort of classical ideal gas, where the electrons fly around freely without being subject to the Pauli exclusion principle. In most semiconductors, the conduction bands have a parabolic dispersion relation, and so these electrons respond to forces (electric field, magnetic field, etc.) much as they would in a vacuum, though with a different effective mass. Because the electrons behave like an ideal gas, one may also think about conduction in very simplistic terms such as the Drude model, and introduce concepts such as electron mobility.
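As a rough illustration of the Drude picture mentioned above, the sketch below (my addition, with illustrative numbers) estimates an electron mobility from an assumed effective mass and scattering time, then the drift velocity in a small applied field. The effective mass and scattering time are placeholder values, not measured constants for any particular material.

```python
ELECTRON_CHARGE = 1.602e-19   # C
ELECTRON_MASS = 9.109e-31     # kg

def drude_mobility(scattering_time_s, effective_mass_kg):
    """Drude-model mobility: mu = q * tau / m*, in m^2 / (V s)."""
    return ELECTRON_CHARGE * scattering_time_s / effective_mass_kg

# Illustrative (assumed) values: tau = 0.2 ps, m* = 0.26 electron masses.
tau = 0.2e-12
m_eff = 0.26 * ELECTRON_MASS

mu = drude_mobility(tau, m_eff)   # m^2 / (V s)
drift_velocity = mu * 1000.0      # drift speed in a field of 1000 V/m

print(f"mobility       ≈ {mu * 1e4:.0f} cm^2/(V s)")
print(f"drift velocity ≈ {drift_velocity:.1f} m/s at 1000 V/m")
```

The point of the sketch is only that a single scattering time and effective mass are enough to define a mobility, which is how the ideal-gas-like picture above is usually quantified.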
For partial filling at the top of the valence band, it is helpful to introduce the concept of an electron hole. Although the electrons in the valence band are always moving around, a completely full valence band is inert, not conducting any current. If an electron is taken out of the valence band, then the trajectory that the electron would normally have taken is now missing its charge. For the purposes of electric current, this combination of the full valence band, minus the electron, can be converted into a picture of a completely empty band containing a positively charged particle that moves in the same way as the electron. Combined with the negative effective mass of the electrons at the top of the valence band, we arrive at a picture of a positively charged particle that responds to electric and magnetic fields just as a normal positively charged particle would do in a vacuum, again with some positive effective mass. This particle is called a hole, and the collection of holes in the valence band can again be understood in simple classical terms (as with the electrons in the conduction band).
Carrier generation and recombination
When ionizing radiation strikes a semiconductor, it may excite an electron out of its energy level and consequently leave a hole. This process is known as electron-hole pair generation. Electron-hole pairs are constantly generated from thermal energy as well, in the absence of any external energy source.
Electron-hole pairs are also apt to recombine. Conservation of energy demands that these recombination events, in which an electron loses an amount of energy larger than the band gap, be accompanied by the emission of thermal energy (in the form of phonons) or radiation (in the form of photons).
In the steady state, the generation and recombination of electron–hole pairs are in equipoise. The number of electron-hole pairs in the steady state at a given temperature is determined by quantum statistical mechanics. The precise quantum mechanical mechanisms of generation and recombination are governed by the conservation of energy and conservation of momentum.
As the probability that electrons and holes meet together is proportional to the product of their numbers, the product is in the steady state nearly constant at a given temperature, provided that there is no significant electric field (which might "flush" carriers of both types, or move them from neighboring regions containing more of them to meet together) or externally driven pair generation. The product is a function of the temperature, as the probability of getting enough thermal energy to produce a pair increases with temperature, being approximately exp(−EG/kT), where k is the Boltzmann constant, T is the absolute temperature and EG is the band gap.
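The approximate exponential dependence of the electron–hole product on temperature can be illustrated numerically. The short sketch below is an added example, not from the article; it uses an assumed band gap of 1.1 eV (roughly that of silicon) and compares the factor exp(−EG/kT) at two temperatures.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def pair_factor(band_gap_ev, temperature_k):
    """Approximate thermal factor governing the electron-hole pair product."""
    return math.exp(-band_gap_ev / (K_B * temperature_k))

E_G = 1.1  # assumed band gap in eV (close to that of silicon)

f_300 = pair_factor(E_G, 300.0)
f_350 = pair_factor(E_G, 350.0)

print(f"exp(-EG/kT) at 300 K: {f_300:.3e}")
print(f"exp(-EG/kT) at 350 K: {f_350:.3e}")
print(f"increase from 300 K to 350 K: x{f_350 / f_300:.0f}")
```

Even a modest temperature rise increases the pair-generation factor by orders of magnitude, which is why carrier concentrations in intrinsic semiconductors depend so strongly on temperature.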
The probability of meeting is increased by carrier traps – impurities or dislocations which can trap an electron or hole and hold it until a pair is completed. Such carrier traps are sometimes purposely added to reduce the time needed to reach the steady-state.
Doping
The conductivity of semiconductors may easily be modified by introducing impurities into their crystal lattice. The process of adding controlled impurities to a semiconductor is known as doping. The amount of impurity, or dopant, added to an intrinsic (pure) semiconductor varies its level of conductivity. Doped semiconductors are referred to as extrinsic. By adding impurity to the pure semiconductors, the electrical conductivity may be varied by factors of thousands or millions.
A 1 cm3 specimen of a metal or semiconductor has on the order of 10^22 atoms. In a metal, every atom donates at least one free electron for conduction, thus 1 cm3 of metal contains on the order of 10^22 free electrons, whereas a 1 cm3 sample of pure germanium at 20 °C contains about 4.2×10^22 atoms, but only about 2.5×10^13 free electrons and 2.5×10^13 holes. The addition of 0.001% of arsenic (an impurity) donates an extra 10^17 free electrons in the same volume and the electrical conductivity is increased by a factor of 10,000.
The materials chosen as suitable dopants depend on the atomic properties of both the dopant and the material to be doped. In general, dopants that produce the desired controlled changes are classified as either electron acceptors or donors. Semiconductors doped with donor impurities are called n-type, while those doped with acceptor impurities are known as p-type. The n and p type designations indicate which charge carrier acts as the material's majority carrier. The opposite carrier is called the minority carrier, which exists due to thermal excitation at a much lower concentration compared to the majority carrier.
For example, the pure semiconductor silicon has four valence electrons that bond each silicon atom to its neighbors. In silicon, the most common dopants are group III and group V elements. Group III elements all contain three valence electrons, causing them to function as acceptors when used to dope silicon. When an acceptor atom replaces a silicon atom in the crystal, a vacant state (an electron "hole") is created, which can move around the lattice and function as a charge carrier. Group V elements have five valence electrons, which allows them to act as a donor; substitution of these atoms for silicon creates an extra free electron. Therefore, a silicon crystal doped with boron creates a p-type semiconductor whereas one doped with phosphorus results in an n-type material.
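A small numerical sketch (added here; not from the article) shows how donor doping sets the majority- and minority-carrier concentrations. It assumes the commonly quoted room-temperature intrinsic carrier concentration of silicon, roughly 10^10 per cm^3, and an illustrative donor concentration; both numbers are assumptions for the example. It uses the mass-action relation n·p ≈ ni^2 together with charge neutrality.

```python
# Assumed values for illustration only.
n_i = 1.0e10   # intrinsic carrier concentration of silicon near 300 K, per cm^3 (approximate)
N_D = 1.0e16   # donor (e.g. phosphorus) concentration, per cm^3 (illustrative)

# For N_D >> n_i, essentially every donor contributes one conduction electron,
# and the mass-action law n * p = n_i**2 fixes the hole concentration.
n = N_D                # majority carriers (electrons) per cm^3
p = n_i**2 / n         # minority carriers (holes) per cm^3

print(f"electrons (majority): {n:.1e} cm^-3")
print(f"holes (minority):     {p:.1e} cm^-3")
print(f"majority/minority ratio: {n / p:.1e}")
```

The minority-carrier concentration drops by the same factor by which the majority concentration rises, which is the quantitative content of the majority/minority distinction described above.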
During manufacture, dopants can be diffused into the semiconductor body by contact with gaseous compounds of the desired element, or ion implantation can be used to accurately position the doped regions.
Amorphous semiconductors
Some materials, when rapidly cooled to a glassy amorphous state, have semiconducting properties. These include B, Si, Ge, Se, and Te, and there are multiple theories to explain them.
Early history of semiconductors
The history of the understanding of semiconductors begins with experiments on the electrical properties of materials. The properties of the time-temperature coefficient of resistance, rectification, and light-sensitivity were observed starting in the early 19th century.
In 1821, Thomas Johann Seebeck was the first to notice that semiconductors exhibit a special behavior: the effect now known as the Seebeck effect appeared much more strongly in experiments using semiconducting materials. In 1833, Michael Faraday reported that the resistance of specimens of silver sulfide decreases when they are heated. This is contrary to the behavior of metallic substances such as copper. In 1839, Alexandre Edmond Becquerel reported observation of a voltage between a solid and a liquid electrolyte, when struck by light, the photovoltaic effect. In 1873, Willoughby Smith observed that selenium resistors exhibit decreasing resistance when light falls on them. In 1874, Karl Ferdinand Braun observed conduction and rectification in metallic sulfides, although this effect had been discovered earlier by Peter Munck af Rosenschöld writing for the Annalen der Physik und Chemie in 1835; Rosenschöld's findings were ignored. Simon Sze stated that Braun's research was the earliest systematic study of semiconductor devices. Also in 1874, Arthur Schuster found that a copper oxide layer on wires had rectification properties that ceased when the wires are cleaned. William Grylls Adams and Richard Evans Day observed the photovoltaic effect in selenium in 1876.
A unified explanation of these phenomena required a theory of solid-state physics, which developed greatly in the first half of the 20th century. In 1878 Edwin Herbert Hall demonstrated the deflection of flowing charge carriers by an applied magnetic field, the Hall effect. The discovery of the electron by J.J. Thomson in 1897 prompted theories of electron-based conduction in solids. Karl Baedeker, by observing a Hall effect with the reverse sign to that in metals, theorized that copper iodide had positive charge carriers. Solid materials were classified into metals, insulators, and "variable conductors" in 1914, although the term Halbleiter (a semiconductor in the modern meaning) had already been introduced by Josef Weiss in his Ph.D. thesis in 1910. Felix Bloch published a theory of the movement of electrons through atomic lattices in 1928. In 1930, it was proposed that conductivity in semiconductors was due to minor concentrations of impurities. By 1931, the band theory of conduction had been established by Alan Herries Wilson and the concept of band gaps had been developed. Walter H. Schottky and Nevill Francis Mott developed models of the potential barrier and of the characteristics of a metal–semiconductor junction. By 1938, Boris Davydov had developed a theory of the copper-oxide rectifier, identifying the effect of the p–n junction and the importance of minority carriers and surface states.
Agreement between theoretical predictions (based on developing quantum mechanics) and experimental results was sometimes poor. This was later explained by John Bardeen as due to the extreme "structure sensitive" behavior of semiconductors, whose properties change dramatically based on tiny amounts of impurities. Commercially pure materials of the 1920s containing varying proportions of trace contaminants produced differing experimental results. This spurred the development of improved material refining techniques, culminating in modern semiconductor refineries producing materials with parts-per-trillion purity.
Devices using semiconductors were at first constructed based on empirical knowledge before semiconductor theory provided a guide to the construction of more capable and reliable devices.
Alexander Graham Bell used the light-sensitive property of selenium to transmit sound over a beam of light in 1880. A working solar cell, of low efficiency, was constructed by Charles Fritts in 1883, using a metal plate coated with selenium and a thin layer of gold; the device became commercially useful in photographic light meters in the 1930s. Point-contact microwave detector rectifiers made of lead sulfide were used by Jagadish Chandra Bose in 1904; the cat's-whisker detector using natural galena or other materials became a common device in the development of radio. However, it was somewhat unpredictable in operation and required manual adjustment for best performance. In 1906, H.J. Round observed light emission when electric current passed through silicon carbide crystals, the principle behind the light-emitting diode. Oleg Losev observed similar light emission in 1922, but at the time the effect had no practical use. Power rectifiers, using copper oxide and selenium, were developed in the 1920s and became commercially important as an alternative to vacuum tube rectifiers.
The first semiconductor devices used galena, including German physicist Ferdinand Braun's crystal detector in 1874 and Indian physicist Jagadish Chandra Bose's radio crystal detector in 1901.
In the years preceding World War II, infrared detection and communications devices prompted research into lead-sulfide and lead-selenide materials. These devices were used for detecting ships and aircraft, for infrared rangefinders, and for voice communication systems. The point-contact crystal detector became vital for microwave radio systems since available vacuum tube devices could not serve as detectors above about 4000 MHz; advanced radar systems relied on the fast response of crystal detectors. Considerable research and development of silicon materials occurred during the war to develop detectors of consistent quality.
Early transistors
Detector and power rectifiers could not amplify a signal. Many efforts were made to develop a solid-state amplifier, eventually succeeding with a device called the point-contact transistor, which could amplify 20 dB or more. In 1922, Oleg Losev developed two-terminal, negative-resistance amplifiers for radio, but he died in the Siege of Leningrad after their successful completion. In 1926, Julius Edgar Lilienfeld patented a device resembling a field-effect transistor, but it was not practical. A solid-state amplifier using a structure resembling the control grid of a vacuum tube was demonstrated in 1938; although the device displayed power gain, it had a cut-off frequency of one cycle per second, too low for any practical applications, but an effective application of the available theory. At Bell Labs, William Shockley and A. Holden started investigating solid-state amplifiers in 1938. The first p–n junction in silicon was observed by Russell Ohl about 1941 when a specimen was found to be light-sensitive, with a sharp boundary between p-type impurity at one end and n-type at the other. A slice cut from the specimen at the p–n boundary developed a voltage when exposed to light.
The first working transistor was a point-contact transistor invented by John Bardeen, Walter Houser Brattain, and William Shockley at Bell Labs in 1947. Shockley had earlier theorized a field-effect amplifier made from germanium and silicon, but he failed to build such a working device, before eventually using germanium to invent the point-contact transistor. In France, during the war, Herbert Mataré had observed amplification between adjacent point contacts on a germanium base. After the war, Mataré's group announced their "Transistron" amplifier only shortly after Bell Labs announced the "transistor".
In 1954, physical chemist Morris Tanenbaum fabricated the first silicon junction transistor at Bell Labs. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.
See also
Deathnium
Semiconductor device fabrication
Semiconductor industry
Semiconductor characterization techniques
Transistor count
References
Further reading
G. B. Abdullayev, T. D. Dzhafarov, S. Torstveit (Translator), Atomic Diffusion in Semiconductor Structures, Gordon & Breach Science Pub., 1987
Feynman's lecture on Semiconductors
External links
Semiconductors | Semiconductor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 5,688 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
27,712 | https://en.wikipedia.org/wiki/Sugar | Sugar is the generic name for sweet-tasting, soluble carbohydrates, many of which are used in food. Simple sugars, also called monosaccharides, include glucose, fructose, and galactose. Compound sugars, also called disaccharides or double sugars, are molecules made of two bonded monosaccharides; common examples are sucrose (glucose + fructose), lactose (glucose + galactose), and maltose (two molecules of glucose). White sugar is a refined form of sucrose. In the body, compound sugars are hydrolysed into simple sugars.
Longer chains of monosaccharides (>2) are not regarded as sugars and are called oligosaccharides or polysaccharides. Starch is a glucose polymer found in plants, the most abundant source of energy in human food. Some other chemical substances, such as ethylene glycol, glycerol and sugar alcohols, may have a sweet taste but are not classified as sugar.
Sugars are found in the tissues of most plants. Honey and fruits are abundant natural sources of simple sugars. Sucrose is especially concentrated in sugarcane and sugar beet, making them ideal for efficient commercial extraction to make refined sugar. In 2016, the combined world production of those two crops was about two billion tonnes. Maltose may be produced by malting grain. Lactose is the only sugar that cannot be extracted from plants. It can only be found in milk, including human breast milk, and in some dairy products. A cheap source of sugar is corn syrup, industrially produced by converting corn starch into sugars, such as maltose, fructose and glucose.
Sucrose is used in prepared foods (e.g., cookies and cakes), is sometimes added to commercially available ultra-processed food and beverages, and is sometimes used as a sweetener for foods (e.g., toast and cereal) and beverages (e.g., coffee and tea). The average person consumes about 24 kilograms of sugar each year. North and South Americans consume up to 50 kilograms, and Africans consume under 20 kilograms.
As free sugar consumption grew in the latter part of the 20th century, researchers began to examine whether a diet high in free sugar, especially refined sugar, was damaging to human health. In 2015, the World Health Organization strongly recommended that adults and children reduce their intake of free sugars to less than 10% of their total energy intake and encouraged a reduction to below 5%. In general, high sugar consumption damages human health more than it provides nutritional benefit and is associated with a risk of cardiometabolic and other health detriments.
Etymology
The etymology of sugar reflects the commodity's spread. From Sanskrit śarkarā, meaning "ground or candied sugar", came Persian shakar and Arabic sukkar. The Arabic word was borrowed in Medieval Latin as succarum, whence came the 12th century French sucre and the English sugar. Sugar was introduced into Europe by the Arabs in Sicily and Spain.
The English word jaggery, a coarse brown sugar made from date palm sap or sugarcane juice, has a similar etymological origin: it came into English through Portuguese from a Malayalam word that derives in turn from the same Sanskrit root.
History
Ancient world to Renaissance
Asia
Sugar has been produced in the Indian subcontinent for thousands of years. Sugarcane cultivation spread from there into China via the Khyber Pass and caravan routes. It was not plentiful or cheap in early times, and in most parts of the world, honey was more often used for sweetening. Originally, people chewed raw sugarcane to extract its sweetness. Even after refined sugarcane became more widely available during the European colonial era, palm sugar was preferred in Java and other sugar producing parts of southeast Asia, and along with coconut sugar, is still used locally to make desserts today.
Sugarcane is native to tropical areas such as the Indian subcontinent (South Asia) and Southeast Asia. Different species seem to have originated from different locations; Saccharum barberi originated in India, and S. edule and S. officinarum came from New Guinea. One of the earliest historical references to sugarcane is in Chinese manuscripts dating to the 8th century BCE, which state that the use of sugarcane originated in India.
In the tradition of Indian medicine (āyurveda), sugarcane is known by the name Ikṣu, and sugarcane juice is known as Phāṇita. Its varieties, synonyms and characteristics are defined in nighaṇṭus such as the Bhāvaprakāśa (1.6.23, group of sugarcanes).
Sugar remained relatively unimportant until the Indians discovered methods of turning sugarcane juice into granulated crystals that were easier to store and transport. The Greek physician Pedanius Dioscorides attested to the method in his 1st century CE medical treatise De Materia Medica.
In the local Indian language, these crystals were called khanda (Devanagari: खण्ड), which is the source of the word candy. Indian sailors, who carried clarified butter and sugar as supplies, introduced knowledge of sugar along the various trade routes they travelled. Traveling Buddhist monks took sugar crystallization methods to China. During the reign of Harsha (r. 606–647) in North India, Indian envoys in Tang China taught methods of cultivating sugarcane after Emperor Taizong of Tang (r. 626–649) made known his interest in sugar. China established its first sugarcane plantations in the seventh century. Chinese documents confirm at least two missions to India, initiated in 647 CE, to obtain technology for sugar refining.
Europe
Nearchus, admiral of Alexander the Great, knew of sugar during the year 325 BC because of his participation in the campaign of India led by Alexander (Arrian, Anabasis). In addition to the Greek physician Pedanius Dioscorides, the Roman Pliny the Elder also described sugar in his 1st century CE Natural History: "Sugar is made in Arabia as well, but Indian sugar is better. It is a kind of honey found in cane, white as gum, and it crunches between the teeth. It comes in lumps the size of a hazelnut. Sugar is used only for medical purposes." Crusaders brought sugar back to Europe after their campaigns in the Holy Land, where they encountered caravans carrying "sweet salt". Early in the 12th century, the Republic of Venice acquired some villages near Tyre and set up estates to produce sugar for export to Europe. It supplemented the use of honey, which had previously been the only available sweetener. Crusade chronicler William of Tyre, writing in the late 12th century, described sugar as "very necessary for the use and health of mankind". In the 15th century, Venice was the chief sugar refining and distribution center in Europe.
There was a drastic change in the mid-15th century, when Madeira and the Canary Islands were settled from Europe and sugar introduced there. After this an "all-consuming passion for sugar ... swept through society" as it became far more easily available, though initially still very expensive. By 1492, Madeira was producing over 1,400 tonnes of sugar annually. Genoa, one of the centers of distribution, became known for candied fruit, while Venice specialized in pastries, sweets (candies), and sugar sculptures. Sugar was considered to have "valuable medicinal properties" as a "warm" food under prevailing categories, being "helpful to the stomach, to cure cold diseases, and sooth lung complaints".
A feast given in Tours in 1457 by Gaston de Foix, which is "probably the best and most complete account we have of a late medieval banquet" includes the first mention of sugar sculptures, as the final food brought in was "a heraldic menagerie sculpted in sugar: lions, stags, monkeys ... each holding in paw or beak the arms of the Hungarian king". Other recorded grand feasts in the decades following included similar pieces. Originally the sculptures seem to have been eaten in the meal, but later they become merely table decorations, the most elaborate called trionfi. Several significant sculptors are known to have produced them; in some cases their preliminary drawings survive. Early ones were in brown sugar, partly cast in molds, with the final touches carved. They continued to be used until at least the Coronation Banquet for Edward VII of the United Kingdom in 1903; among other sculptures every guest was given a sugar crown to take away.
Modern history
In August 1492, Christopher Columbus collected sugar cane samples in La Gomera in the Canary Islands, and introduced it to the New World. The cuttings were planted and the first sugar-cane harvest in Hispaniola took place in 1501. Many sugar mills had been constructed in Cuba and Jamaica by the 1520s. The Portuguese took sugar cane to Brazil. By 1540, there were 800 cane-sugar mills in Santa Catarina Island and another 2,000 on the north coast of Brazil, Demarara, and Surinam. It took until 1600 for Brazilian sugar production to exceed that of São Tomé, which was the main center of sugar production in the sixteenth century.
Sugar was a luxury in Europe until the early 19th century, when it became more widely available, due to the rise of beet sugar in Prussia, and later in France under Napoleon. Beet sugar was a German invention, since, in 1747, Andreas Sigismund Marggraf announced the discovery of sugar in beets and devised a method using alcohol to extract it. Marggraf's student, Franz Karl Achard, devised an economical industrial method to extract the sugar in its pure form in the late 18th century. Achard first produced beet sugar in 1783 in Kaulsdorf, and in 1801, the world's first beet sugar production facility was established in Cunern, Silesia (then part of Prussia, now Poland). The works of Marggraf and Achard were the starting point for the sugar industry in Europe, and for the modern sugar industry in general, since sugar was no longer a luxury product, nor one produced almost exclusively in warmer climates.
Sugar became highly popular and by the 19th century, was found in every household. This evolution of taste and demand for sugar as an essential food ingredient resulted in major economic and social changes. Demand drove, in part, the colonization of tropical islands and areas where labor-intensive sugarcane plantations and sugar manufacturing facilities could be successful. World consumption increased more than 100 times from 1850 to 2000, led by Britain, where it increased from about 2 pounds per head per year in 1650 to 90 pounds by the early 20th century. In the late 18th century Britain consumed about half the sugar which reached Europe.
After slavery was abolished, the demand for workers in European colonies in the Caribbean was filled by indentured laborers from the Indian subcontinent. Millions of enslaved or indentured laborers were brought to various European colonies in the Americas, Africa and Asia (as a result of demand in Europe for among other commodities, sugar), influencing the ethnic mixture of numerous nations around the globe.
Sugar also led to some industrialization of areas where sugar cane was grown. For example, in the 1790s Lieutenant J. Paterson of the Bengal Presidency promoted to the British Parliament the idea that sugar cane could be grown in British India, where it had originated, with many advantages and at less expense than in the West Indies. As a result, sugar factories were established in Bihar in eastern India. During the Napoleonic Wars, sugar-beet production increased in continental Europe because of the difficulty of importing sugar when shipping was subject to blockade. By 1880 the sugar beet was the main source of sugar in Europe. It was also cultivated in Lincolnshire and other parts of England, although the United Kingdom continued to import the main part of its sugar from its colonies.
Until the late nineteenth century, sugar was purchased in loaves, which had to be cut using implements called sugar nips. In later years, granulated sugar was more usually sold in bags. Sugar cubes were produced in the nineteenth century. The first inventor of a process to produce sugar in cube form was Jakob Christof Rad, director of a sugar refinery in Dačice. In 1841, he produced the first sugar cube in the world. He began sugar-cube production after being granted a five-year patent for the process on 23 January 1843. Henry Tate of Tate & Lyle was another early manufacturer of sugar cubes at his refineries in Liverpool and London. Tate purchased a patent for sugar-cube manufacture from German Eugen Langen, who in 1872 had invented a different method of processing of sugar cubes.
Sugar was rationed during World War I, though it was said that "No previous war in history has been fought so largely on sugar and so little on alcohol", and more sharply during World War II. Rationing led to the development and use of various artificial sweeteners.
Chemistry
Scientifically, sugar loosely refers to a number of carbohydrates, such as monosaccharides, disaccharides, or oligosaccharides. Monosaccharides are also called "simple sugars", the most important being glucose. Most monosaccharides have a formula that conforms to CnH2nOn with n between 3 and 7 (deoxyribose being an exception). Glucose has the molecular formula C6H12O6. The names of typical sugars end with -ose, as in "glucose" and "fructose". Sometimes such words may also refer to any types of carbohydrates soluble in water. The acyclic mono- and disaccharides contain either aldehyde groups or ketone groups. These carbon-oxygen double bonds (C=O) are the reactive centers. All saccharides with more than one ring in their structure result from two or more monosaccharides joined by glycosidic bonds with the resultant loss of a molecule of water (H2O) per bond.
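As a small worked check of the formulas just given (added for illustration, using standard atomic weights), the snippet below computes molar masses and verifies that a disaccharide such as sucrose corresponds to two C6H12O6 units minus one molecule of water.

```python
# Approximate standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(counts):
    """Molar mass of a formula given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())

glucose = {"C": 6, "H": 12, "O": 6}     # C6H12O6
water = {"H": 2, "O": 1}                # H2O
sucrose = {"C": 12, "H": 22, "O": 11}   # C12H22O11

m_glucose = molar_mass(glucose)
m_sucrose = molar_mass(sucrose)
m_water = molar_mass(water)

print(f"glucose : {m_glucose:.2f} g/mol")
print(f"sucrose : {m_sucrose:.2f} g/mol")
# Two C6H12O6 units joined by one glycosidic bond lose one H2O:
print(f"2 x glucose - H2O = {2 * m_glucose - m_water:.2f} g/mol")
```

The last two printed values agree, which is the "loss of a molecule of water per bond" rule stated above in numerical form.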
Monosaccharides in a closed-chain form can form glycosidic bonds with other monosaccharides, creating disaccharides (such as sucrose) and polysaccharides (such as starch or cellulose). Enzymes must hydrolyze or otherwise break these glycosidic bonds before such compounds become metabolized. After digestion and absorption the principal monosaccharides present in the blood and internal tissues include glucose, fructose, and galactose. Many pentoses and hexoses can form ring structures. In these closed-chain forms, the aldehyde or ketone group remains non-free, so many of the reactions typical of these groups cannot occur. Glucose in solution exists mostly in the ring form at equilibrium, with less than 0.1% of the molecules in the open-chain form.
Natural polymers
Biopolymers of sugars are common in nature. Through photosynthesis, plants produce glyceraldehyde-3-phosphate (G3P), a phosphated 3-carbon sugar that is used by the cell to make monosaccharides such as glucose (C6H12O6) or (as in cane and beet) sucrose (C12H22O11). Monosaccharides may be further converted into structural polysaccharides such as cellulose and pectin for cell wall construction or into energy reserves in the form of storage polysaccharides such as starch or inulin. Starch, consisting of two different polymers of glucose, is a readily degradable form of chemical energy stored by cells, and can be converted to other types of energy. Another polymer of glucose is cellulose, which is a linear chain composed of several hundred or thousand glucose units. It is used by plants as a structural component in their cell walls. Humans can digest cellulose only to a very limited extent, though ruminants can do so with the help of symbiotic bacteria in their gut. DNA and RNA are built up of the monosaccharides deoxyribose and ribose, respectively. Deoxyribose has the formula C5H10O4 and ribose the formula C5H10O5.
Flammability and heat response
Because sugars burn easily when exposed to flame, the handling of sugars risks dust explosion. The risk of explosion is higher when the sugar has been milled to superfine texture, such as for use in chewing gum. The 2008 Georgia sugar refinery explosion, which killed 14 people and injured 36, and destroyed most of the refinery, was caused by the ignition of sugar dust.
In its culinary use, exposing sugar to heat causes caramelization. As the process occurs, volatile chemicals such as diacetyl are released, producing the characteristic caramel flavor.
Types
Monosaccharides
Fructose, galactose, and glucose are all simple sugars, monosaccharides, with the general formula C6H12O6. They have five hydroxyl groups (−OH) and a carbonyl group (C=O) and are cyclic when dissolved in water. They each exist as several isomers with dextro- and laevo-rotatory forms that cause polarized light to diverge to the right or the left.
Fructose, or fruit sugar, occurs naturally in fruits, some root vegetables, cane sugar and honey and is the sweetest of the sugars. It is one of the components of sucrose or table sugar. It is used as a high-fructose syrup, which is manufactured from hydrolyzed corn starch that has been processed to yield corn syrup, with enzymes then added to convert part of the glucose into fructose.
Galactose generally does not occur in the free state but is a constituent with glucose of the disaccharide lactose or milk sugar. It is less sweet than glucose. It is a component of the antigens found on the surface of red blood cells that determine blood groups.
Glucose occurs naturally in fruits and plant juices and is the primary product of photosynthesis. Starch is converted into glucose during digestion, and glucose is the form of sugar that is transported around the bodies of animals in the bloodstream. Although in principle there are two enantiomers of glucose (mirror images one of the other), naturally occurring glucose is D-glucose. This is also called dextrose, or grape sugar because drying grape juice produces crystals of dextrose that can be sieved from the other components. Glucose syrup is a liquid form of glucose that is widely used in the manufacture of foodstuffs. It can be manufactured from starch by enzymatic hydrolysis. For example, corn syrup, which is produced commercially by breaking down maize starch, is one common source of purified dextrose. However, dextrose is naturally present in many unprocessed, whole foods, including honey and fruits such as grapes.
Disaccharides
Lactose, maltose, and sucrose are all compound sugars, disaccharides, with the general formula C12H22O11. They are formed by the combination of two monosaccharide molecules with the exclusion of a molecule of water.
Lactose is the naturally occurring sugar found in milk. A molecule of lactose is formed by the combination of a molecule of galactose with a molecule of glucose. It is broken down when consumed into its constituent parts by the enzyme lactase during digestion. Children have this enzyme but some adults no longer form it and they are unable to digest lactose.
Maltose is formed during the germination of certain grains, the most notable being barley, which is converted into malt, the source of the sugar's name. A molecule of maltose is formed by the combination of two molecules of glucose. It is less sweet than glucose, fructose or sucrose. It is formed in the body during the digestion of starch by the enzyme amylase and is itself broken down during digestion by the enzyme maltase.
Sucrose is found in the stems of sugarcane and roots of sugar beet. It also occurs naturally alongside fructose and glucose in other plants, in particular fruits and some roots such as carrots. The different proportions of sugars found in these foods determines the range of sweetness experienced when eating them. A molecule of sucrose is formed by the combination of a molecule of glucose with a molecule of fructose. After being eaten, sucrose is split into its constituent parts during digestion by a number of enzymes known as sucrases.
Sources
The sugar contents of common fruits and vegetables are presented in Table 1.
The carbohydrate figure is calculated in the USDA database and does not always correspond to the sum of the sugars, the starch, and the dietary fiber.
The fructose to fructose plus glucose ratio is calculated by including the fructose and glucose coming from the sucrose.
Production
Due to rising demand, sugar production in general increased some 14% over the period 2009 to 2018. The largest importers were China, Indonesia, and the United States.
Sugar
In 2022–2023 world production of sugar was 186 million tonnes, and in 2023–2024 an estimated 194 million tonnes — a surplus of 5 million tonnes, according to Ragus.
Sugarcane
Sugar cane accounted for around 21% of the global crop production over the 2000–2021 period. The Americas was the leading region in the production of sugar cane (52% of the world total).
Global production of sugarcane in 2020 was 1.9 billion tonnes, with Brazil producing 40% of the world total and India 20% (table).
Sugarcane is any of several species, or their hybrids, of giant grasses in the genus Saccharum in the family Poaceae. They have been cultivated in tropical climates in the Indian subcontinent and Southeast Asia over centuries for the sucrose found in their stems.
Sugar cane requires a frost-free climate with sufficient rainfall during the growing season to make full use of the plant's substantial growth potential. The crop is harvested mechanically or by hand, chopped into lengths and conveyed rapidly to the processing plant (commonly known as a sugar mill) where it is either milled and the juice extracted with water or extracted by diffusion. The juice is clarified with lime and heated to destroy enzymes. The resulting thin syrup is concentrated in a series of evaporators, after which further water is removed. The resulting supersaturated solution is seeded with sugar crystals, facilitating crystal formation and drying. Molasses is a by-product of the process and the fiber from the stems, known as bagasse, is burned to provide energy for the sugar extraction process. The crystals of raw sugar have a sticky brown coating and either can be used as they are, can be bleached by sulfur dioxide, or can be treated in a carbonatation process to produce a whiter product. Large quantities of irrigation water are needed for sugar production.
Sugar beet
In 2020, global production of sugar beets was 253 million tonnes, led by Russia with 13% of the world total (table).
Sugar beet became a major source of sugar in the 19th century when methods for extracting the sugar became available. It is a biennial plant, a cultivated variety of Beta vulgaris in the family Amaranthaceae, the tuberous root of which contains a high proportion of sucrose. It is cultivated as a root crop in temperate regions with adequate rainfall and requires a fertile soil. The crop is harvested mechanically in the autumn and the crown of leaves and excess soil removed. The roots do not deteriorate rapidly and may be left in the field for some weeks before being transported to the processing plant where the crop is washed and sliced, and the sugar extracted by diffusion. Milk of lime is added to the raw juice with calcium carbonate. After water is evaporated by boiling the syrup under a vacuum, the syrup is cooled and seeded with sugar crystals. The white sugar that crystallizes can be separated in a centrifuge and dried, requiring no further refining.
Refining
Refined sugar is made from raw sugar that has undergone a refining process to remove the molasses. Raw sugar is sucrose which is extracted from sugarcane or sugar beet. While raw sugar can be consumed, the refining process removes unwanted tastes and results in refined sugar or white sugar.
The sugar may be transported in bulk to the country where it will be used and the refining process often takes place there. The first stage is known as affination and involves immersing the sugar crystals in a concentrated syrup that softens and removes the sticky brown coating without dissolving them. The crystals are then separated from the liquor and dissolved in water. The resulting syrup is treated either by a carbonatation or by a phosphatation process. Both involve the precipitation of a fine solid in the syrup and when this is filtered out, many of the impurities are removed at the same time. Removal of color is achieved by using either a granular activated carbon or an ion-exchange resin. The sugar syrup is concentrated by boiling and then cooled and seeded with sugar crystals, causing the sugar to crystallize out. The liquor is spun off in a centrifuge and the white crystals are dried in hot air and ready to be packaged or used. The surplus liquor is made into refiners' molasses.
The International Commission for Uniform Methods of Sugar Analysis sets standards for the measurement of the purity of refined sugar, known as ICUMSA numbers; lower numbers indicate a higher level of purity in the refined sugar.
Refined sugar is widely used for industrial needs where higher quality is required. Refined sugar is purer (ICUMSA below 300) than raw sugar (ICUMSA over 1,500). The level of purity is associated with the color of the sugar and expressed by the ICUMSA standard number; smaller ICUMSA numbers indicate higher purity.
Forms and uses
Crystal size
Coarse-grain sugar, also known as sanding sugar, is composed of reflective crystals with a grain size of about 1 to 3 mm, similar to kitchen salt. Used atop baked products and candies, it will not dissolve when subjected to heat and moisture.
Granulated sugar (about 0.6 mm crystals), also known as table sugar or regular sugar, is used at the table, to sprinkle on foods and to sweeten hot drinks (coffee and tea), and in home baking to add sweetness and texture to baked products (cookies and cakes) and desserts (pudding and ice cream). It is also used as a preservative to prevent micro-organisms from growing and perishable food from spoiling, as in candied fruits, jams, and marmalades.
Milled sugars such as powdered sugar (icing sugar) are ground to a fine powder. They are used for dusting foods and in baking and confectionery.
Screened sugars such as caster sugar are crystalline products separated according to the size of the grains. They are used for decorative table sugars, for blending in dry mixes and in baking and confectionery.
Shapes
Cube sugar (sometimes called sugar lumps) are white or brown granulated sugars lightly steamed and pressed together in block shape. They are used to sweeten drinks.
Sugarloaf was the usual cone-form in which refined sugar was produced and sold until the late 19th century.
Brown sugars
Brown sugars are granulated sugars, either containing residual molasses, or with the grains deliberately coated with molasses to produce a light- or dark-colored sugar such as muscovado and turbinado. They are used in baked goods, confectionery, and toffees. Their darkness is due to the amount of molasses they contain. They may be classified based on their darkness or country of origin.
Liquid sugars
Syrups are thick, viscous liquids consisting primarily of a solution of sugar in water. They are used in the food processing of a wide range of products including beverages, hard candy, ice cream, and jams.
Inverted sugar syrup, commonly known as invert syrup or invert sugar, is a mixture of two simple sugars—glucose and fructose—that is made by heating granulated sugar in water. It is used in breads, cakes, and beverages for adjusting sweetness, aiding moisture retention and avoiding crystallization of sugars.
Molasses and treacle are obtained by removing sugar from sugarcane or sugar beet juice, as a byproduct of sugar production. They may be blended with the above-mentioned syrups to enhance sweetness and used in a range of baked goods and confectionery including toffees and licorice.
In winemaking, fruit sugars are converted into alcohol by a fermentation process. If the must formed by pressing the fruit has a low sugar content, additional sugar may be added to raise the alcohol content of the wine in a process called chaptalization. In the production of sweet wines, fermentation may be halted before it has run its full course, leaving behind some residual sugar that gives the wine its sweet taste.
Other sweeteners
Low-calorie sweeteners are often made of maltodextrin with added sweeteners. Maltodextrin is an easily digestible synthetic polysaccharide consisting of short chains of three or more glucose molecules and is made by the partial hydrolysis of starch. Strictly, maltodextrin is not classified as sugar as it contains more than two glucose molecules, although its structure is similar to maltose, a molecule composed of two joined glucose molecules.
Polyols are sugar alcohols and are used in chewing gums where a sweet flavor is required that lasts for a prolonged time in the mouth.
Consumption
Worldwide, sugar provides 10% of daily calories (based on a 2000 kcal diet). In 1750, the average Briton got 72 calories a day from sugar. In 1913, this had risen to 395. In 2015, sugar still provided around 14% of the calories in British diets. According to one source, per capita consumption of sugar in 2016 was highest in the United States, followed by Germany and the Netherlands.
Nutrition and flavor
Brown and white granulated sugar are 97% to nearly 100% carbohydrates, respectively, with less than 2% water, and no dietary fiber, protein or fat (table). Brown sugar contains a moderate amount of iron (15% of the Reference Daily Intake in a 100 gram amount, see table), but a typical serving of 4 grams (one teaspoon), would provide 15 calories and a negligible amount of iron or any other nutrient. Because brown sugar contains 5–10% molasses reintroduced during processing, its value to some consumers is a richer flavor than white sugar.
Health effects
General
High sugar consumption damages human health more than it provides nutritional benefit, and in particular is associated with a risk of cardiometabolic health detriments.
Sugar industry funding and health information
Sugar refiners and manufacturers of sugary foods and drinks have sought to influence medical research and public health recommendations, with substantial and largely clandestine spending documented from the 1960s to 2016. The results of research on the health effects of sugary food and drink differ significantly, depending on whether the researcher has financial ties to the food and drink industry. A 2013 medical review concluded that "unhealthy commodity industries should have no role in the formation of national or international NCD [non-communicable disease] policy". Similar efforts to steer coverage of sugar-related health information have been made in popular media, including news media and social media.
Obesity and metabolic syndrome
A 2003 technical report by the World Health Organization (WHO) provides evidence that high intake of sugary drinks (including fruit juice) increases the risk of obesity by adding to overall energy intake. By itself, sugar is not clearly a factor causing obesity and metabolic syndrome. Meta-analysis showed that excessive consumption of sugar-sweetened beverages increased the risk of developing type 2 diabetes and metabolic syndrome – including weight gain and obesity – in adults and children.
Cancer
Sugar consumption does not directly cause cancer. Cancer Council Australia have stated that "there is no evidence that consuming sugar makes cancer cells grow faster or cause cancer". There is an indirect relationship between sugar consumption and obesity-related cancers through increased risk of excess body weight.
The American Institute for Cancer Research and World Cancer Research Fund recommend that people limit sugar consumption.
There is a popular misconception that cancer can be treated by reducing sugar and carbohydrate intake to supposedly "starve" tumours. In reality, the health of people with cancer is best served by maintaining a healthy diet.
Cognition
Despite some studies suggesting that sugar consumption causes hyperactivity, the quality of evidence is low and it is generally accepted within the scientific community that the notion of children's 'sugar rush' is a myth. A 2019 meta-analysis found that sugar consumption does not improve mood, but can lower alertness and increase fatigue within an hour of consumption. One review of low-quality studies of children consuming high amounts of energy drinks showed association with higher rates of unhealthy behaviors, including smoking and excessive alcohol use, and with hyperactivity and insomnia, although such effects could not be specifically attributed to sugar over other components of those drinks such as caffeine.
Tooth decay
The WHO, Action on Sugar and the Scientific Advisory Committee on Nutrition (SACN) consider free sugars an essential dietary factor in the development of dental caries. WHO have stated that "dental caries can be prevented by avoiding dietary free sugars".
A review of human studies showed that the incidence of caries is lower when sugar intake is less than 10% of total energy consumed. Sugar-sweetened beverage consumption is associated with an increased risk of tooth decay.
Nutritional displacement
The "empty calories" argument states that a diet high in added (or 'free') sugars will reduce consumption of foods that contain essential nutrients. This nutrient displacement occurs if sugar makes up more than 25% of daily energy intake, a proportion associated with poor diet quality and risk of obesity. Displacement may occur at lower levels of consumption.
Recommended dietary intake
The WHO recommends that both adults and children reduce the intake of free sugars to less than 10% of total energy intake, and suggests a reduction to below 5%. "Free sugars" include monosaccharides and disaccharides added to foods, and sugars found in fruit juice and concentrates, as well as in honey and syrups. According to the WHO, "[t]hese recommendations were based on the totality of available evidence reviewed regarding the relationship between free sugars intake and body weight (low and moderate quality evidence) and dental caries (very low and moderate quality evidence)."
On 20 May 2016, the U.S. Food and Drug Administration announced changes to the Nutrition Facts panel displayed on all foods, to be effective by July 2018. New to the panel is a requirement to list "added sugars" by weight and as a percent of Daily Value (DV). For vitamins and minerals, the intent of DVs is to indicate how much should be consumed. For added sugars, the guidance is that 100% DV should not be exceeded. 100% DV is defined as 50 grams. For a person consuming 2000 calories a day, 50 grams is equal to 200 calories and thus 10% of total calories—the same guidance as the WHO. To put this in context, most cans of soda contain 39 grams of sugar. In the United States, a government survey on food consumption in 2013–2014 reported that, for men and women aged 20 and older, the average total sugar intakes—naturally occurring in foods and added—were, respectively, 125 and 99 g/day.
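The relationship between the 50-gram Daily Value and the 10%-of-calories guidance can be verified with a couple of lines of arithmetic. This is an added illustration; the 4 kcal-per-gram figure for carbohydrates is a standard nutrition convention assumed here rather than taken from the article.

```python
KCAL_PER_GRAM_SUGAR = 4        # standard energy value for carbohydrates (assumed)
daily_value_grams = 50         # added-sugar Daily Value from the FDA label rules
reference_diet_kcal = 2000     # reference daily energy intake

calories_from_dv = daily_value_grams * KCAL_PER_GRAM_SUGAR
share_of_diet = calories_from_dv / reference_diet_kcal

print(f"{daily_value_grams} g of added sugar ≈ {calories_from_dv} kcal "
      f"= {share_of_diet:.0%} of a {reference_diet_kcal} kcal diet")

# A typical 39 g can of soda already uses most of the Daily Value:
print(f"39 g soda can ≈ {39 / daily_value_grams:.0%} of the Daily Value")
```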
Measurements
Various culinary sugars have different densities due to differences in particle size and inclusion of moisture. The "Engineering Resources – Bulk Density Chart" published in Powder and Bulk gives values for bulk densities; a unit-conversion sketch follows the list:
Beet sugar 0.80 g/mL
Dextrose sugar 0.62 g/mL ( = 620 kg/m^3)
Granulated sugar 0.70 g/mL
Powdered sugar 0.56 g/mL
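The short conversion sketch below (added; not part of the source chart) turns the bulk densities listed above from g/mL into kg/m^3 and estimates the mass of a 250 mL measuring cup of each sugar. The cup volume is an arbitrary illustrative choice.

```python
BULK_DENSITY_G_PER_ML = {      # values from the list above
    "beet sugar": 0.80,
    "dextrose sugar": 0.62,
    "granulated sugar": 0.70,
    "powdered sugar": 0.56,
}

CUP_VOLUME_ML = 250  # illustrative cup size

for name, density in BULK_DENSITY_G_PER_ML.items():
    kg_per_m3 = density * 1000          # 1 g/mL = 1000 kg/m^3
    grams_per_cup = density * CUP_VOLUME_ML
    print(f"{name:16s} {kg_per_m3:4.0f} kg/m^3, ~{grams_per_cup:.0f} g per {CUP_VOLUME_ML} mL")
```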
Society and culture
Manufacturers of sugary products, such as soft drinks and candy, and the Sugar Research Foundation have been accused of trying to influence consumers and medical associations in the 1960s and 1970s by creating doubt about the potential health hazards of sucrose overconsumption, while promoting saturated fat as the main dietary risk factor in cardiovascular diseases. In 2016, the criticism led to recommendations that diet policymakers emphasize the need for high-quality research that accounts for multiple biomarkers on development of cardiovascular diseases.
Gallery
See also
Barley sugar
Holing cane
List of unrefined sweeteners
Rare sugar
Carbonated drinks
Sugar plantations in the Caribbean
Glycomics
References
Sources
Further reading
Frankopan, Peter, The Silk Roads: A New History of the World, Bloomsbury, 2016
Strong, Roy, Feast: A History of Grand Eating, Jonathan Cape, 2002
External links
Sugar at the National Health Service
Carbohydrates
Excipients
Indian inventions | Sugar | [
"Chemistry"
] | 7,661 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Organic compounds",
"Sugar",
"Carbohydrate chemistry"
] |
27,725 | https://en.wikipedia.org/wiki/Surface%20area | The surface area (symbol A) of a solid object is a measure of the total area that the surface of the object occupies. The mathematical definition of surface area in the presence of curved surfaces is considerably more involved than the definition of arc length of one-dimensional curves, or of the surface area for polyhedra (i.e., objects with flat polygonal faces), for which the surface area is the sum of the areas of its faces. Smooth surfaces, such as a sphere, are assigned surface area using their representation as parametric surfaces. This definition of surface area is based on methods of infinitesimal calculus and involves partial derivatives and double integration.
A general definition of surface area was sought by Henri Lebesgue and Hermann Minkowski at the turn of the twentieth century. Their work led to the development of geometric measure theory, which studies various notions of surface area for irregular objects of any dimension. An important example is the Minkowski content of a surface.
Definition
While the areas of many simple surfaces have been known since antiquity, a rigorous mathematical definition of area requires a great deal of care.
This should provide a function
S ↦ A(S)
which assigns a positive real number to a certain class of surfaces that satisfies several natural requirements. The most fundamental property of the surface area is its additivity: the area of the whole is the sum of the areas of the parts. More rigorously, if a surface S is a union of finitely many pieces S1, …, Sr which do not overlap except at their boundaries, then
A(S) = A(S1) + A(S2) + ⋯ + A(Sr).
Surface areas of flat polygonal shapes must agree with their geometrically defined area. Since surface area is a geometric notion, areas of congruent surfaces must be the same and the area must depend only on the shape of the surface, but not on its position and orientation in space. This means that surface area is invariant under the group of Euclidean motions. These properties uniquely characterize surface area for a wide class of geometric surfaces called piecewise smooth. Such surfaces consist of finitely many pieces that can be represented in the parametric form
SD = {r(u, v) : (u, v) in D}
with a continuously differentiable function r. The area of an individual piece is defined by the formula
A(SD) = ∫∫D |∂r/∂u × ∂r/∂v| du dv.
Thus the area of SD is obtained by integrating the length of the normal vector ∂r/∂u × ∂r/∂v to the surface over the appropriate region D in the parametric uv plane. The area of the whole surface is then obtained by adding together the areas of the pieces, using additivity of surface area. The main formula can be specialized to different classes of surfaces, giving, in particular, formulas for areas of graphs z = f(x,y) and surfaces of revolution.
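The parametric-area formula can be checked numerically. The sketch below is an added illustration, not from the article: it approximates the integral of |∂r/∂u × ∂r/∂v| over a parameter rectangle for a unit sphere using a simple midpoint rule and finite differences, and compares the result with the exact value 4π.

```python
import math

def sphere_patch(u, v, radius=1.0):
    """Standard spherical parametrization r(u, v); u is the polar angle, v the azimuth."""
    return (radius * math.sin(u) * math.cos(v),
            radius * math.sin(u) * math.sin(v),
            radius * math.cos(u))

def normal_length(r, u, v, h=1e-5):
    """|r_u x r_v| estimated with central finite differences."""
    ru = [(a - b) / (2 * h) for a, b in zip(r(u + h, v), r(u - h, v))]
    rv = [(a - b) / (2 * h) for a, b in zip(r(u, v + h), r(u, v - h))]
    cross = (ru[1] * rv[2] - ru[2] * rv[1],
             ru[2] * rv[0] - ru[0] * rv[2],
             ru[0] * rv[1] - ru[1] * rv[0])
    return math.sqrt(sum(c * c for c in cross))

def surface_area(r, u_range, v_range, n=200):
    """Midpoint-rule approximation of the double integral of |r_u x r_v| over D."""
    (u0, u1), (v0, v1) = u_range, v_range
    du, dv = (u1 - u0) / n, (v1 - v0) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            u = u0 + (i + 0.5) * du
            v = v0 + (j + 0.5) * dv
            total += normal_length(r, u, v) * du * dv
    return total

area = surface_area(sphere_patch, (0.0, math.pi), (0.0, 2 * math.pi))
print(f"numerical area ≈ {area:.4f}, exact 4*pi ≈ {4 * math.pi:.4f}")
```

For the sphere the integrand reduces to sin(u), so the numerical result converges to 4π, in line with the common formula used in the next subsection.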
One of the subtleties of surface area, as compared to arc length of curves, is that surface area cannot be defined simply as the limit of areas of polyhedral shapes approximating a given smooth surface. It was demonstrated by Hermann Schwarz that already for the cylinder, different choices of approximating flat surfaces can lead to different limiting values of the area; this example is known as the Schwarz lantern.
Various approaches to a general definition of surface area were developed in the late nineteenth and the early twentieth century by Henri Lebesgue and Hermann Minkowski. While for piecewise smooth surfaces there is a unique natural notion of surface area, if a surface is very irregular, or rough, then it may not be possible to assign an area to it at all. A typical example is given by a surface with spikes spread throughout in a dense fashion. Many surfaces of this type occur in the study of fractals. Extensions of the notion of area which partially fulfill its function and may be defined even for very badly irregular surfaces are studied in geometric measure theory. A specific example of such an extension is the Minkowski content of the surface.
Common formulas
Ratio of surface areas of a sphere and cylinder of the same radius and height
The formulas below can be used to show that the surface area of a sphere and a cylinder of the same radius and height are in the ratio 2 : 3, as follows.
Let the radius be r and the height be h (which is 2r for the sphere). The surface area of the sphere is 4πr², while the total surface area of the cylinder (including both ends) is 2πrh + 2πr² = 6πr², giving the ratio 4πr² : 6πr² = 2 : 3.
The discovery of this ratio is credited to Archimedes.
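A short sanity check of the 2 : 3 ratio, written here as an illustrative Python snippet rather than anything from the source:

import math

r = 1.0                 # any radius works; the ratio is scale-invariant
h = 2 * r               # cylinder height equal to the sphere's diameter
sphere_area = 4 * math.pi * r ** 2
cylinder_area = 2 * math.pi * r * h + 2 * math.pi * r ** 2   # side plus two caps
print(sphere_area / cylinder_area)   # 0.666..., i.e. the 2 : 3 ratio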
In chemistry
Surface area is important in chemical kinetics. Increasing the surface area of a substance generally increases the rate of a chemical reaction. For example, iron in a fine powder will combust, while in solid blocks it is stable enough to use in structures. For different applications a minimal or maximal surface area may be desired.
In biology
The surface area of an organism is important in several considerations, such as regulation of body temperature and digestion. Animals use their teeth to grind food down into smaller particles, increasing the surface area available for digestion. The epithelial tissue lining the digestive tract contains microvilli, greatly increasing the area available for absorption. Elephants have large ears, allowing them to regulate their own body temperature. In other instances, animals will need to minimize surface area; for example, people will fold their arms over their chest when cold to minimize heat loss.
The surface area to volume ratio (SA:V) of a cell imposes upper limits on size, as the volume increases much faster than does the surface area, thus limiting the rate at which substances diffuse from the interior across the cell membrane to interstitial spaces or to other cells. Indeed, representing a cell as an idealized sphere of radius r, the volume and surface area are, respectively, V = (4/3)πr³ and SA = 4πr². The resulting surface area to volume ratio is therefore 3/r. Thus, if a cell has a radius of 1 μm, the SA:V ratio is 3; whereas if the radius of the cell is instead 10 μm, then the SA:V ratio becomes 0.3. With a cell radius of 100 μm, the SA:V ratio is 0.03. Thus, the surface area to volume ratio falls off steeply with increasing radius.
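The radius dependence quoted above can be reproduced with a small, purely illustrative calculation (assuming the idealized spherical cell; the function name is arbitrary):

import math

def sa_to_v(radius_um):
    # SA:V ratio for a spherical cell of the given radius (in micrometres);
    # the expression simplifies to 3 / radius_um.
    surface_area = 4 * math.pi * radius_um ** 2
    volume = (4.0 / 3.0) * math.pi * radius_um ** 3
    return surface_area / volume

for r in (1, 10, 100):
    print(r, sa_to_v(r))   # 3.0, 0.3, 0.03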
See also
Perimeter length
Projected area
BET theory, technique for the measurement of the specific surface area of materials
Spherical area
Surface integral
References
External links
Surface Area Video at Thinkwell
Area | Surface area | [
"Physics",
"Mathematics"
] | 1,212 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Wikipedia categories named after physical quantities",
"Area"
] |
27,745 | https://en.wikipedia.org/wiki/Standard%20temperature%20and%20pressure | Standard temperature and pressure (STP) or standard conditions for temperature and pressure are various standard sets of conditions for experimental measurements used to allow comparisons to be made between different sets of data. The most used standards are those of the International Union of Pure and Applied Chemistry (IUPAC) and the National Institute of Standards and Technology (NIST), although these are not universally accepted. Other organizations have established a variety of other definitions.
In industry and commerce, the standard conditions for temperature and pressure are often necessary for expressing the volumes of gases and liquids and related quantities such as the rate of volumetric flow (the volumes of gases vary significantly with temperature and pressure): standard cubic meters per second (Sm3/s), and normal cubic meters per second (Nm3/s).
Many technical publications (books, journals, advertisements for equipment and machinery) simply state "standard conditions" without specifying them, often substituting the term with the older "normal conditions", or "NC". In special cases this can lead to confusion and errors. Good practice is always to state the reference conditions of temperature and pressure. If they are not stated, room-environment conditions close to 1 atm pressure and 0% humidity are usually assumed.
Definitions
In chemistry, IUPAC changed its definition of standard temperature and pressure in 1982:
Until 1982, STP was defined as a temperature of 273.15 K (0 °C, 32 °F) and an absolute pressure of exactly 1 atm (101.325 kPa).
Since 1982, STP has been defined as a temperature of 273.15 K (0 °C, 32 °F) and an absolute pressure of exactly 1 bar (100 kPa, 10⁵ Pa).
NIST uses a temperature of 20 °C (293.15 K, 68 °F) and an absolute pressure of 1 atm (14.696 psi, 101.325 kPa). This standard is also called normal temperature and pressure (abbreviated as NTP). However, a common temperature and pressure in use by NIST for thermodynamic experiments is 298.15 K (25 °C, 77 °F) and 1 bar (14.5038 psi, 100 kPa). NIST also uses 15 °C (288.15 K, 59 °F) for the temperature compensation of refined petroleum products, despite noting that these two values are not exactly consistent with each other.
The ISO 13443 standard reference conditions for natural gas and similar fluids are 15 °C (288.15 K) and 101.325 kPa;
by contrast, the American Petroleum Institute adopts 60 °F (15.56 °C) and 14.696 psi.
Past uses
Before 1918, many professionals and scientists using the metric system of units defined the standard reference conditions of temperature and pressure for expressing gas volumes as being 15 °C (288.15 K) and 101.325 kPa (1 atm). During those same years, the most commonly used standard reference conditions for people using the imperial or U.S. customary systems were 60 °F (15.56 °C) and 14.696 psi (1 atm), because these values were almost universally used by the oil and gas industries worldwide. The above definitions are no longer the most commonly used in either system of units.
Current use
Many different definitions of standard reference conditions are currently being used by organizations all over the world. The table below lists a few of them, but there are more. Some of these organizations used other standards in the past. For example, IUPAC has, since 1982, defined standard reference conditions as being 0 °C and 100 kPa (1 bar), in contrast to its old standard of 0 °C and 101.325 kPa (1 atm). The new value is the mean atmospheric pressure at an altitude of about 112 metres, which is closer to the worldwide median altitude of human habitation (194 m).
Natural gas companies in Europe, Australia, and South America have adopted 15 °C (59 °F) and 101.325 kPa (14.696 psi) as their standard gas volume reference conditions, used as the base values for defining the standard cubic meter. Also, the International Organization for Standardization (ISO), the United States Environmental Protection Agency (EPA) and National Institute of Standards and Technology (NIST) each have more than one definition of standard reference conditions in their various standards and regulations.
Abbreviations:
EGIA: Electricity and Gas Inspection Act (of Canada)
SATP: Standard Ambient Temperature and Pressure
SCF: Standard Cubic Foot
International Standard Atmosphere
In aeronautics and fluid dynamics the "International Standard Atmosphere" (ISA) is a specification of pressure, temperature, density, and speed of sound at each altitude. At standard mean sea level it specifies a temperature of 15 °C (59 °F), a pressure of 101.325 kPa (1 atm), and a density of 1.225 kg/m3. It also specifies a temperature lapse rate of −6.5 °C (−11.7 °F) per km (approximately −2 °C (−3.6 °F) per 1,000 ft).
The International Standard Atmosphere is representative of atmospheric conditions at mid latitudes. In the US this information is specified in the U.S. Standard Atmosphere, which is identical to the "International Standard Atmosphere" at all altitudes up to 65,000 feet above sea level.
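As a rough illustration of the lapse-rate specification above, the following is a sketch only, valid for the troposphere up to about 11 km, and not an exact reproduction of the ISA tables:

def isa_temperature_c(altitude_m):
    # ISA troposphere model: 15 degC at mean sea level, falling 6.5 degC per km
    return 15.0 - 6.5 * (altitude_m / 1000.0)

print(isa_temperature_c(0))       # 15.0 degC at sea level
print(isa_temperature_c(11000))   # about -56.5 degC near the tropopause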
Standard laboratory conditions
Because many definitions of standard temperature and pressure differ in temperature significantly from standard laboratory temperatures (e.g. 0 °C vs. ~28 °C), reference is often made to "standard laboratory conditions" (a term deliberately chosen to be different from the term "standard conditions for temperature and pressure", despite its semantic near identity when interpreted literally). However, what is a "standard" laboratory temperature and pressure is inevitably geography-bound, given that different parts of the world differ in climate, altitude and the degree of use of heat/cooling in the workplace. For example, schools in New South Wales, Australia use 25 °C at 100 kPa for standard laboratory conditions.
ASTM International has published Standard ASTM E41, Terminology Relating to Conditioning, and hundreds of special conditions for particular materials and test methods. Other standards organizations also have specialized standard test conditions.
Molar volume of a gas
It is as important to indicate the applicable reference conditions of temperature and pressure when stating the molar volume of a gas as it is when expressing a gas volume or volumetric flow rate. Stating the molar volume of a gas without indicating the reference conditions of temperature and pressure has very little meaning and can cause confusion.
The molar volume of gases around STP and at atmospheric pressure can be calculated with an accuracy that is usually sufficient by using the ideal gas law. The molar volume of any ideal gas may be calculated at various standard reference conditions as shown below:
Vm = 8.3145 × 273.15 / 101.325 = 22.414 dm3/mol at 0 °C and 101.325 kPa
Vm = 8.3145 × 273.15 / 100.000 = 22.711 dm3/mol at 0 °C and 100 kPa
Vm = 8.3145 × 288.15 / 101.325 = 23.645 dm3/mol at 15 °C and 101.325 kPa
Vm = 8.3145 × 298.15 / 101.325 = 24.466 dm3/mol at 25 °C and 101.325 kPa
Vm = 8.3145 × 298.15 / 100.000 = 24.790 dm3/mol at 25 °C and 100 kPa
Vm = 10.7316 × 519.67 / 14.696 = 379.48 ft3/lbmol at 60 °F and 14.696 psi (or about 0.8366 ft3/gram mole)
Vm = 10.7316 × 519.67 / 14.730 = 378.61 ft3/lbmol at 60 °F and 14.73 psi
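The SI entries in the list above follow directly from the ideal gas law Vm = RT/P; a minimal script reproducing a few of them (variable names are illustrative):

R = 8.3145   # ideal gas constant, J/(mol*K), i.e. m3*Pa/(mol*K)

def molar_volume_dm3(temp_k, pressure_kpa):
    # With P in kPa and R in J/(mol*K), R*T/P comes out in dm3/mol (litres per mole)
    return R * temp_k / pressure_kpa

print(molar_volume_dm3(273.15, 101.325))   # ~22.414 (pre-1982 IUPAC STP)
print(molar_volume_dm3(273.15, 100.0))     # ~22.711 (current IUPAC STP)
print(molar_volume_dm3(298.15, 100.0))     # ~24.790 (SATP)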
Technical literature can be confusing because many authors fail to explain whether they are using the ideal gas constant R, or the specific gas constant Rs. The relationship between the two constants is Rs = R / m, where m is the molecular mass of the gas.
The US Standard Atmosphere (USSA) uses 8.31432 m3·Pa/(mol·K) as the value of R. However, the USSA in 1976 does recognize that this value is not consistent with the values of the Avogadro constant and the Boltzmann constant.
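For example, using the relationship above with an assumed dry-air molar mass of about 0.029 kg/mol gives the familiar specific gas constant for air (an illustrative calculation, not from the source):

R = 8.3145                   # ideal gas constant, J/(mol*K)
molar_mass_air = 0.0289647   # kg/mol, assumed representative value for dry air
Rs = R / molar_mass_air      # specific gas constant, J/(kg*K)
print(Rs)                    # ~287, the value commonly quoted for dry air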
See also
Environmental chamber
ISO 1 – standard reference temperature for geometric product specifications
Reference atmospheric model
Room temperature
Standard sea-level conditions
Standard state
Explanatory notes
References
External links
"Standard conditions for gases" from the IUPAC Gold Book.
"Standard pressure" from the IUPAC Gold Book.
"STP" from the IUPAC Gold Book.
"Standard state" from the IUPAC Gold Book.
Atmospheric thermodynamics
Aerodynamics
Engineering thermodynamics
Gases
Measurement
Physical chemistry
Standards
Thermodynamics | Standard temperature and pressure | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,789 | [
"Matter",
"Applied and interdisciplinary physics",
"Physical quantities",
"Engineering thermodynamics",
"Quantity",
"Phases of matter",
"Measurement",
"Size",
"Aerodynamics",
"Thermodynamics",
"nan",
"Aerospace engineering",
"Dynamical systems",
"Mechanical engineering",
"Statistical mec... |
27,752 | https://en.wikipedia.org/wiki/Spectroscopy | Spectroscopy is the field of study that measures and interprets electromagnetic spectra. In narrower contexts, spectroscopy is the precise study of color as generalized from visible light to all bands of the electromagnetic spectrum.
Spectroscopy, primarily in the electromagnetic spectrum, is a fundamental exploratory tool in the fields of astronomy, chemistry, materials science, and physics, allowing the composition, physical structure and electronic structure of matter to be investigated at the atomic, molecular and macro scale, and over astronomical distances.
Historically, spectroscopy originated as the study of the wavelength dependence of the absorption by gas phase matter of visible light dispersed by a prism. Current applications of spectroscopy include biomedical spectroscopy in the areas of tissue analysis and medical imaging. Matter waves and acoustic waves can also be considered forms of radiative energy, and recently gravitational waves have been associated with a spectral signature in the context of the Laser Interferometer Gravitational-Wave Observatory (LIGO).
Introduction
Spectroscopy is a branch of science concerned with the spectra of electromagnetic radiation as a function of its wavelength or frequency, measured by spectrographic equipment and other techniques, in order to obtain information concerning the structure and properties of matter. Spectral measurement devices are referred to as spectrometers, spectrophotometers, spectrographs or spectral analyzers. Most spectroscopic analysis in the laboratory starts with a sample to be analyzed; a light source is chosen from any desired range of the light spectrum, the light passes through the sample to a dispersion element (such as a diffraction grating) and is captured by a photodiode. For astronomical purposes, the telescope must be equipped with a light dispersion device. There are various versions of this basic setup that may be employed.
Spectroscopy began with Isaac Newton splitting light with a prism; a key moment in the development of modern optics. Therefore, it was originally the study of visible light that we call color that later under the studies of James Clerk Maxwell came to include the entire electromagnetic spectrum. Although color is involved in spectroscopy, it is not equated with the color of elements or objects that involve the absorption and reflection of certain electromagnetic waves to give objects a sense of color to our eyes. Rather spectroscopy involves the splitting of light by a prism, diffraction grating, or similar instrument, to give off a particular discrete line pattern called a "spectrum" unique to each different type of element. Most elements are first put into a gaseous phase to allow the spectra to be examined although today other methods can be used on different phases. Each element that is diffracted by a prism-like instrument displays either an absorption spectrum or an emission spectrum depending upon whether the element is being cooled or heated.
Until recently all spectroscopy involved the study of line spectra, and most spectroscopy still does. Vibrational spectroscopy is the branch of spectroscopy that studies the spectra of molecular vibrations. However, the latest developments in spectroscopy can sometimes dispense with the dispersion technique. In biochemical spectroscopy, information can be gathered about biological tissue by absorption and light scattering techniques. Light scattering spectroscopy is a type of reflectance spectroscopy that determines tissue structures by examining elastic scattering. In such a case, it is the tissue that acts as a diffraction or dispersion mechanism.
Spectroscopic studies were central to the development of quantum mechanics, because the first useful atomic models described the spectra of hydrogen, which include the Bohr model, the Schrödinger equation, and Matrix mechanics, all of which can produce the spectral lines of hydrogen, therefore providing the basis for discrete quantum jumps to match the discrete hydrogen spectrum. Also, Max Planck's explanation of blackbody radiation involved spectroscopy because he was comparing the wavelength of light using a photometer to the temperature of a Black Body. Spectroscopy is used in physical and analytical chemistry because atoms and molecules have unique spectra. As a result, these spectra can be used to detect, identify and quantify information about the atoms and molecules. Spectroscopy is also used in astronomy and remote sensing on Earth. Most research telescopes have spectrographs. The measured spectra are used to determine the chemical composition and physical properties of astronomical objects (such as their temperature, density of elements in a star, velocity, black holes and more). An important use for spectroscopy is in biochemistry. Molecular samples may be analyzed for species identification and energy content.
Theory
The underlying premise of spectroscopy is that light is made of different wavelengths and that each wavelength corresponds to a different frequency. The importance of spectroscopy rests on the fact that every element in the periodic table has a unique light spectrum, described by the frequencies of light it emits or absorbs, which consistently appear in the same part of the electromagnetic spectrum when that light is diffracted. This opened up an entire field of study involving anything that contains atoms. Spectroscopy is the key to understanding the atomic properties of all matter, and it has opened up many new sub-fields of science. The idea that each atomic element has a unique spectral signature enables spectroscopy to be used in a broad range of fields, each with a specific goal achieved by different spectroscopic procedures. The National Institute of Standards and Technology maintains a public Atomic Spectra Database that is continually updated with precise measurements.
The broadening of the field of spectroscopy is due to the fact that any part of the electromagnetic spectrum, from the infrared to the ultraviolet, may be used to analyze a sample, telling scientists different properties about the very same sample. For instance, in chemical analysis the most common types of spectroscopy include atomic spectroscopy, infrared spectroscopy, ultraviolet and visible spectroscopy, Raman spectroscopy and nuclear magnetic resonance. In nuclear magnetic resonance (NMR), the underlying idea is that nuclei respond to radiation at a corresponding resonant frequency. Resonance was first characterized in mechanical systems such as pendulums, whose frequency of motion was famously noted by Galileo.
Classification of methods
Spectroscopy is a sufficiently broad field that many sub-disciplines exist, each with numerous implementations of specific spectroscopic techniques. The various implementations and techniques can be classified in several ways.
Type of radiative energy
The types of spectroscopy are distinguished by the type of radiative energy involved in the interaction. In many applications, the spectrum is determined by measuring changes in the intensity or frequency of this energy. The types of radiative energy studied include:
Electromagnetic radiation was the first source of energy used for spectroscopic studies. Techniques that employ electromagnetic radiation are typically classified by the wavelength region of the spectrum and include microwave, terahertz, infrared, near-infrared, ultraviolet-visible, x-ray, and gamma spectroscopy.
Particles, because of their de Broglie waves, can also be a source of radiative energy. Both electron and neutron spectroscopy are commonly used. For a particle, its kinetic energy determines its wavelength.
Acoustic spectroscopy involves radiated pressure waves.
Dynamic mechanical analysis can be employed to impart radiating energy, similar to acoustic waves, to solid materials.
Nature of the interaction
The types of spectroscopy also can be distinguished by the nature of the interaction between the energy and the material. These interactions include:
Absorption spectroscopy: Absorption occurs when energy from the radiative source is absorbed by the material. Absorption is often determined by measuring the fraction of energy transmitted through the material, with absorption decreasing the transmitted portion.
Emission spectroscopy: Emission indicates that radiative energy is released by the material. A material's blackbody spectrum is a spontaneous emission spectrum determined by its temperature. This feature can be measured in the infrared by instruments such as the atmospheric emitted radiance interferometer. Emission can also be induced by other sources of energy such as flames, sparks, electric arcs or electromagnetic radiation in the case of fluorescence.
Elastic scattering and reflection spectroscopy determine how incident radiation is reflected or scattered by a material. Crystallography employs the scattering of high energy radiation, such as x-rays and electrons, to examine the arrangement of atoms in proteins and solid crystals.
Impedance spectroscopy: Impedance is the ability of a medium to impede or slow the transmittance of energy. For optical applications, this is characterized by the index of refraction.
Inelastic scattering phenomena involve an exchange of energy between the radiation and the matter that shifts the wavelength of the scattered radiation. These include Raman and Compton scattering.
Coherent or resonance spectroscopy are techniques where the radiative energy couples two quantum states of the material in a coherent interaction that is sustained by the radiating field. The coherence can be disrupted by other interactions, such as particle collisions and energy transfer, and so often require high intensity radiation to be sustained. Nuclear magnetic resonance (NMR) spectroscopy is a widely used resonance method, and ultrafast laser spectroscopy is also possible in the infrared and visible spectral regions.
Nuclear spectroscopy are methods that use the properties of specific nuclei to probe the local structure in matter, mainly condensed matter, molecules in liquids or frozen liquids and bio-molecules.
Quantum logic spectroscopy is a general technique used in ion traps that enables precision spectroscopy of ions with internal structures that preclude laser cooling, state manipulation, and detection. Quantum logic operations enable a controllable ion to exchange information with a co-trapped ion that has a complex or unknown electronic structure.
Type of material
Spectroscopic studies are designed so that the radiant energy interacts with specific types of matter.
Atoms
Atomic spectroscopy was the first application of spectroscopy. Atomic absorption spectroscopy and atomic emission spectroscopy involve visible and ultraviolet light. These absorptions and emissions, often referred to as atomic spectral lines, are due to electronic transitions of outer shell electrons as they rise and fall from one electron orbit to another. Atoms also have distinct x-ray spectra that are attributable to the excitation of inner shell electrons to excited states.
Atoms of different elements have distinct spectra and therefore atomic spectroscopy allows for the identification and quantitation of a sample's elemental composition. After inventing the spectroscope, Robert Bunsen and Gustav Kirchhoff discovered new elements by observing their emission spectra. Atomic absorption lines are observed in the solar spectrum and referred to as Fraunhofer lines after their discoverer. A comprehensive explanation of the hydrogen spectrum was an early success of quantum mechanics and explained the Lamb shift observed in the hydrogen spectrum, which further led to the development of quantum electrodynamics.
Modern implementations of atomic spectroscopy for studying visible and ultraviolet transitions include flame emission spectroscopy, inductively coupled plasma atomic emission spectroscopy, glow discharge spectroscopy, microwave induced plasma spectroscopy, and spark or arc emission spectroscopy. Techniques for studying x-ray spectra include X-ray spectroscopy and X-ray fluorescence.
Molecules
The combination of atoms into molecules leads to the creation of unique types of energetic states and therefore unique spectra of the transitions between these states. Molecular spectra can be obtained due to electron spin states (electron paramagnetic resonance), molecular rotations, molecular vibration, and electronic states. Rotations are collective motions of the atomic nuclei and typically lead to spectra in the microwave and millimetre-wave spectral regions. Rotational spectroscopy and microwave spectroscopy are synonymous. Vibrations are relative motions of the atomic nuclei and are studied by both infrared and Raman spectroscopy. Electronic excitations are studied using visible and ultraviolet spectroscopy as well as fluorescence spectroscopy.
Studies in molecular spectroscopy led to the development of the first maser and contributed to the subsequent development of the laser.
Crystals and extended materials
The combination of atoms or molecules into crystals or other extended forms leads to the creation of additional energetic states. These states are numerous and therefore have a high density of states. This high density often makes the spectra weaker and less distinct, i.e., broader. For instance, blackbody radiation is due to the thermal motions of atoms and molecules within a material. Acoustic and mechanical responses are due to collective motions as well.
Pure crystals, though, can have distinct spectral transitions, and the crystal arrangement also has an effect on the observed molecular spectra. The regular lattice structure of crystals also scatters x-rays, electrons or neutrons allowing for crystallographic studies.
Nuclei
Nuclei also have distinct energy states that are widely separated and lead to gamma ray spectra. Distinct nuclear spin states can have their energy separated by a magnetic field, and this allows for nuclear magnetic resonance spectroscopy.
Other types
Other types of spectroscopy are distinguished by specific applications or implementations:
Acoustic resonance spectroscopy is based on sound waves primarily in the audible and ultrasonic regions.
Auger electron spectroscopy is a method used to study surfaces of materials on a micro-scale. It is often used in connection with electron microscopy.
Cavity ring-down spectroscopy
Circular dichroism spectroscopy
Coherent anti-Stokes Raman spectroscopy is a recent technique that has high sensitivity and powerful applications for in vivo spectroscopy and imaging.
Cold vapour atomic fluorescence spectroscopy
Correlation spectroscopy encompasses several types of two-dimensional NMR spectroscopy.
Deep-level transient spectroscopy measures concentration and analyzes parameters of electrically active defects in semiconducting materials.
Dielectric spectroscopy
Dual-polarization interferometry measures the real and imaginary components of the complex refractive index.
Electron energy loss spectroscopy in transmission electron microscopy.
Electron phenomenological spectroscopy measures the physicochemical properties and characteristics of the electronic structure of multicomponent and complex molecular systems.
Electron paramagnetic resonance spectroscopy
Force spectroscopy
Fourier-transform spectroscopy is an efficient method for processing spectra data obtained using interferometers. Fourier-transform infrared spectroscopy is a common implementation of infrared spectroscopy. NMR also employs Fourier transforms.
Gamma spectroscopy
Hadron spectroscopy studies the energy/mass spectrum of hadrons according to spin, parity, and other particle properties. Baryon spectroscopy and meson spectroscopy are types of hadron spectroscopy.
Multispectral imaging and hyperspectral imaging is a method to create a complete picture of the environment or various objects, each pixel containing a full visible, visible near infrared, near infrared, or infrared spectrum.
Inelastic electron tunneling spectroscopy uses the changes in current due to inelastic electron-vibration interaction at specific energies that can also measure optically forbidden transitions.
Inelastic neutron scattering is similar to Raman spectroscopy, but uses neutrons instead of photons.
Laser-induced breakdown spectroscopy, also called laser-induced plasma spectrometry
Laser spectroscopy uses tunable lasers and other types of coherent emission sources, such as optical parametric oscillators, for selective excitation of atomic or molecular species.
Light scattering spectroscopy (LSS) is a spectroscopic technique typically used to evaluate morphological changes in epithelial cells in order to study mucosal tissue and detect early cancer and precancer.
Mass spectroscopy is a historical term used to refer to mass spectrometry. The current recommendation is to use the latter term. The term "mass spectroscopy" originated in the use of phosphor screens to detect ions.
Mössbauer spectroscopy probes the properties of specific isotopic nuclei in different atomic environments by analyzing the resonant absorption of gamma rays. See also Mössbauer effect.
Multivariate optical computing is an all optical compressed sensing technique, generally used in harsh environments, that directly calculates chemical information from a spectrum as analogue output.
Neutron spin echo spectroscopy measures internal dynamics in proteins and other soft matter systems.
Nuclear quadrupole resonance is a chemical spectroscopy method mediated by NMR of the electric field gradient (EFG) in the absence of magnetic field
Perturbed angular correlation (PAC) uses radioactive nuclei as probe to study electric and magnetic fields (hyperfine interactions) in crystals (condensed matter) and bio-molecules.
Photoacoustic spectroscopy measures the sound waves produced upon the absorption of radiation.
Photoemission spectroscopy
Photothermal spectroscopy measures heat evolved upon absorption of radiation.
Pump-probe spectroscopy can use ultrafast laser pulses to measure reaction intermediates in the femtosecond timescale.
Raman optical activity spectroscopy exploits Raman scattering and optical activity effects to reveal detailed information on chiral centers in molecules.
Raman spectroscopy
Saturated spectroscopy
Scanning tunneling spectroscopy
Spectrophotometry
Spin noise spectroscopy traces spontaneous fluctuations of electronic and nuclear spins.
Time-resolved spectroscopy measures the decay rates of excited states using various spectroscopic methods.
Time-stretch spectroscopy
Thermal infrared spectroscopy measures thermal radiation emitted from materials and surfaces and is used to determine the type of bonds present in a sample as well as their lattice environment. The techniques are widely used by organic chemists, mineralogists, and planetary scientists.
Transient grating spectroscopy measures quasiparticle propagation. It can track changes in metallic materials as they are irradiated.
Ultraviolet photoelectron spectroscopy
Ultraviolet–visible spectroscopy
Vibrational circular dichroism spectroscopy
Video spectroscopy
X-ray photoelectron spectroscopy
Applications
There are several applications of spectroscopy in the fields of medicine, physics, chemistry, and astronomy. Taking advantage of the properties of absorbance and with astronomy emission, spectroscopy can be used to identify certain states of nature. The uses of spectroscopy in so many different fields and for so many different applications has caused specialty scientific subfields. Such examples include:
Determining the atomic structure of a sample
Studying spectral emission lines of the sun and distant galaxies
Space exploration
Cure monitoring of composites using optical fibers.
Estimating weathered wood exposure times using near infrared spectroscopy.
Measurement of different compounds in food samples by absorption spectroscopy both in visible and infrared spectrum.
Measurement of toxic compounds in blood samples
Non-destructive elemental analysis by X-ray fluorescence.
Electronic structure research with various spectroscopes.
Redshift to determine the speed and velocity of a distant object
Determining the metabolic structure of a muscle
Monitoring dissolved oxygen content in freshwater and marine ecosystems
Altering the structure of drugs to improve effectiveness
Characterization of proteins
Respiratory gas analysis in hospitals
Finding the physical properties of a distant star or nearby exoplanet using the Relativistic Doppler effect.
In-ovo sexing: spectroscopy allows the sex of an egg to be determined before it hatches. The technique was developed by French and German companies, and both countries decided to ban chick culling, mostly done through a macerator, in 2022.
Process monitoring in Industrial process control
History
The history of spectroscopy began with Isaac Newton's optics experiments (1666–1672). According to Andrew Fraknoi and David Morrison, "In 1672, in the first paper that he submitted to the Royal Society, Isaac Newton described an experiment in which he permitted sunlight to pass through a small hole and then through a prism. Newton found that sunlight, which looks white to us, is actually made up of a mixture of all the colors of the rainbow." Newton applied the word "spectrum" to describe the rainbow of colors that combine to form white light and that are revealed when the white light is passed through a prism.
Fraknoi and Morrison state that "In 1802, William Hyde Wollaston built an improved spectrometer that included a lens to focus the Sun's spectrum on a screen. Upon use, Wollaston realized that the colors were not spread uniformly, but instead had missing patches of colors, which appeared as dark bands in the spectrum." During the early 1800s, Joseph von Fraunhofer made experimental advances with dispersive spectrometers that enabled spectroscopy to become a more precise and quantitative scientific technique. Since then, spectroscopy has played and continues to play a significant role in chemistry, physics, and astronomy. Per Fraknoi and Morrison, "Later, in 1815, German physicist Joseph Fraunhofer also examined the solar spectrum, and found about 600 such dark lines (missing colors), which are now known as Fraunhofer lines, or Absorption lines."
In quantum mechanical systems, the analogous resonance is a coupling of two quantum mechanical stationary states of one system, such as an atom, via an oscillatory source of energy such as a photon. The coupling of the two states is strongest when the energy of the source matches the energy difference between the two states. The energy E of a photon is related to its frequency ν by E = hν, where h is the Planck constant, and so a spectrum of the system response vs. photon frequency will peak at the resonant frequency or energy. Particles such as electrons and neutrons have a comparable relationship, the de Broglie relations, between their kinetic energy and their wavelength and frequency and therefore can also excite resonant interactions.
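A minimal numeric illustration of the photon energy relation E = hν = hc/λ (the chosen wavelengths are just examples):

H = 6.62607015e-34           # Planck constant, J*s
C = 2.99792458e8             # speed of light, m/s
E_CHARGE = 1.602176634e-19   # J per electronvolt

def photon_energy_ev(wavelength_nm):
    # E = h*c / lambda, converted from joules to electronvolts
    return H * C / (wavelength_nm * 1e-9) / E_CHARGE

print(photon_energy_ev(656.3))   # hydrogen Balmer-alpha line, ~1.89 eV
print(photon_energy_ev(121.6))   # hydrogen Lyman-alpha line, ~10.2 eV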
Spectra of atoms and molecules often consist of a series of spectral lines, each one representing a resonance between two different quantum states. The explanation of these series, and the spectral patterns associated with them, were one of the experimental enigmas that drove the development and acceptance of quantum mechanics. The hydrogen spectral series in particular was first successfully explained by the Rutherford–Bohr quantum model of the hydrogen atom. In some cases spectral lines are well separated and distinguishable, but spectral lines can also overlap and appear to be a single transition if the density of energy states is high enough. Named series of lines include the principal, sharp, diffuse and fundamental series.
See also
Notes
References
External links
NIST Atomic Spectroscopy Databases
MIT Spectroscopy Lab's History of Spectroscopy
Timeline of Spectroscopy
Spectroscopy: Reading the Rainbow
Observational astronomy
Scattering, absorption and radiative transfer (optics)
Scientific techniques
Concepts in astronomy
Gustav Kirchhoff | Spectroscopy | [
"Physics",
"Chemistry",
"Astronomy"
] | 4,284 | [
" absorption and radiative transfer (optics)",
"Molecular physics",
"Spectrum (physical sciences)",
"Concepts in astronomy",
"Instrumental analysis",
"Observational astronomy",
"Scattering",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
27,837 | https://en.wikipedia.org/wiki/Superoxide%20dismutase | Superoxide dismutase (SOD, EC 1.15.1.1) is an enzyme that alternately catalyzes the dismutation (or partitioning) of the superoxide (O2−) anion radical into normal molecular oxygen (O2) and hydrogen peroxide (H2O2). Superoxide is produced as a by-product of oxygen metabolism and, if not regulated, causes many types of cell damage. Hydrogen peroxide is also damaging and is degraded by other enzymes such as catalase. Thus, SOD is an important antioxidant defense in nearly all living cells exposed to oxygen. One exception is Lactobacillus plantarum and related lactobacilli, which use intracellular manganese to prevent damage from reactive O2−.
Chemical reaction
SODs catalyze the disproportionation of superoxide:
2 O2− + 2 H+ → O2 + H2O2
In this way, superoxide is converted into two less damaging species.
The general form, applicable to all the different metal−coordinated forms of SOD, can be written as follows:
M(n+1)+−SOD + O2− → Mn+−SOD + O2
Mn+−SOD + O2− + 2 H+ → M(n+1)+−SOD + H2O2
The reactions for the SOD-catalyzed dismutation of superoxide by Cu,Zn SOD can be written as follows:
Cu2+−SOD + O2− → Cu+−SOD + O2 (reduction of copper; oxidation of superoxide)
Cu+−SOD + O2− + 2 H+ → Cu2+−SOD + H2O2 (oxidation of copper; reduction of superoxide)
where M = Cu (n = 1); Mn (n = 2); Fe (n = 2); or Ni (n = 2), the latter occurring only in prokaryotes.
In a series of such reactions, the oxidation state and the charge of the metal cation oscillate between n and n+1: +1 and +2 for Cu, or +2 and +3 for the other metals.
Types
General
Irwin Fridovich and Joe McCord at Duke University discovered the enzymatic activity of superoxide dismutase in 1968. SODs were previously known as a group of metalloproteins with unknown function; for example, CuZnSOD was known as erythrocuprein (or hemocuprein, or cytocuprein) or as the veterinary anti-inflammatory drug "Orgotein". Likewise, Brewer (1967) identified a protein that later became known as superoxide dismutase as an indophenol oxidase by protein analysis of starch gels using the phenazine-tetrazolium technique.
There are three major families of superoxide dismutase, depending on the protein fold and the metal cofactor: the Cu/Zn type (which binds both copper and zinc), Fe and Mn types (which bind either iron or manganese), and the Ni type (which binds nickel).
Copper and zinc – most commonly used by eukaryotes, including humans. The cytosols of virtually all eukaryotic cells contain a SOD enzyme with copper and zinc (Cu-Zn-SOD). For example, Cu-Zn-SOD available commercially is normally purified from bovine red blood cells. The bovine Cu-Zn enzyme is a homodimer of molecular weight 32,500. It was the first SOD whose atomic-detail crystal structure was solved, in 1975. It is an 8-stranded "Greek key" beta-barrel, with the active site held between the barrel and two surface loops. The two subunits are tightly joined back-to-back, mostly by hydrophobic and some electrostatic interactions. The ligands of the copper and zinc are six histidine and one aspartate side-chains; one histidine is bound between the two metals.
Iron or manganese – used by prokaryotes and protists, and in mitochondria and chloroplasts
Iron – Many bacteria contain a form of the enzyme with iron (Fe-SOD); some bacteria contain Fe-SOD, others Mn-SOD, and some (such as E. coli) contain both. Fe-SOD can also be found in the chloroplasts of plants. The 3D structures of the homologous Mn and Fe superoxide dismutases have the same arrangement of alpha-helices, and their active sites contain the same type and arrangement of amino acid side-chains. They are usually dimers, but occasionally tetramers.
Manganese – Nearly all mitochondria, and many bacteria, contain a form with manganese (Mn-SOD): For example, the Mn-SOD found in human mitochondria. The ligands of the manganese ions are 3 histidine side-chains, an aspartate side-chain and a water molecule or hydroxy ligand, depending on the Mn oxidation state (respectively II and III).
Nickel – prokaryotic. This has a hexameric (6-copy) structure built from right-handed 4-helix bundles, each containing N-terminal hooks that chelate a Ni ion. The Ni-hook contains the motif His-Cys-X-X-Pro-Cys-Gly-X-Tyr; it provides most of the interactions critical for metal binding and catalysis and is, therefore, a likely diagnostic of NiSODs.
In higher plants, SOD isozymes have been localized in different cell compartments. Mn-SOD is present in mitochondria and peroxisomes. Fe-SOD has been found mainly in chloroplasts but has also been detected in peroxisomes, and CuZn-SOD has been localized in cytosol, chloroplasts, peroxisomes, and apoplast.
Human
There are three forms of superoxide dismutase present in humans, in all other mammals, and most chordates. SOD1 is located in the cytoplasm, SOD2 in the mitochondria, and SOD3 is extracellular. The first is a dimer (consists of two units), whereas the others are tetramers (four subunits). SOD1 and SOD3 contain copper and zinc, whereas SOD2, the mitochondrial enzyme, has manganese in its reactive centre. The genes are located on chromosomes 21, 6, and 4, respectively (21q22.1, 6q25.3 and 4p15.3-p15.1).
Plants
In higher plants, superoxide dismutase enzymes (SODs) act as antioxidants and protect cellular components from being oxidized by reactive oxygen species (ROS). ROS can form as a result of drought, injury, herbicides and pesticides, ozone, plant metabolic activity, nutrient deficiencies, photoinhibition, temperature above and below ground, toxic metals, and UV or gamma rays. To be specific, molecular O2 is reduced to O2− (a ROS called superoxide) when it absorbs an excited electron released from compounds of the electron transport chain. Superoxide is known to denature enzymes, oxidize lipids, and fragment DNA. SODs catalyze the production of O2 and H2O2 from superoxide (O2−), which results in less harmful reactants.
When acclimating to increased levels of oxidative stress, SOD concentrations typically increase with the degree of stress conditions. The compartmentalization of different forms of SOD throughout the plant makes them counteract stress very effectively. There are three well-known and -studied classes of SOD metallic coenzymes that exist in plants. First, Fe SODs consist of two species, one homodimer (containing 1–2 g Fe) and one tetramer (containing 2–4 g Fe). They are thought to be the most ancient SOD metalloenzymes and are found within both prokaryotes and eukaryotes. Fe SODs are most abundantly localized inside plant chloroplasts, where they are indigenous. Second, Mn SODs consist of a homodimer and homotetramer species each containing a single Mn(III) atom per subunit. They are found predominantly in mitochondria and peroxisomes. Third, Cu-Zn SODs have electrical properties very different from those of the other two classes. These are concentrated in the chloroplast, cytosol, and in some cases the extracellular space. Note that Cu-Zn SODs provide less protection than Fe SODs when localized in the chloroplast.
Bacteria
Human white blood cells use enzymes such as NADPH oxidase to generate superoxide and other reactive oxygen species to kill bacteria. During infection, some bacteria (e.g., Burkholderia pseudomallei) therefore produce superoxide dismutase to protect themselves from being killed.
Biochemistry
SOD out-competes damaging reactions of superoxide, thus protecting the cell from superoxide toxicity.
The reaction of superoxide with non-radicals is spin-forbidden. In biological systems, this means that its main reactions are with itself (dismutation) or with another biological radical such as nitric oxide (NO) or with a transition-series metal. The superoxide anion radical (O2−) spontaneously dismutes to O2 and hydrogen peroxide (H2O2) quite rapidly (~10⁵ M⁻¹s⁻¹ at pH 7). SOD is necessary because superoxide reacts with sensitive and critical cellular targets. For example, it reacts with the NO radical, and makes toxic peroxynitrite.
Because the uncatalysed dismutation reaction for superoxide requires two superoxide molecules to react with each other, the dismutation rate is second-order with respect to initial superoxide concentration. Thus, the half-life of superoxide, although very short at high concentrations (e.g., 0.05 seconds at 0.1 mM), is actually quite long at low concentrations (e.g., 14 hours at 0.1 nM). In contrast, the reaction of superoxide with SOD is first order with respect to superoxide concentration. Moreover, superoxide dismutase has the largest kcat/KM (an approximation of catalytic efficiency) of any known enzyme (~7 × 10⁹ M⁻¹s⁻¹), this reaction being limited only by the frequency of collision between itself and superoxide. That is, the reaction rate is "diffusion-limited".
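The concentration dependence quoted above can be checked with a small calculation, assuming the second-order rate law −d[A]/dt = 2k[A]² with k ≈ 10⁵ M⁻¹s⁻¹ (an illustrative sketch, not taken from the source):

K_UNCATALYSED = 1e5   # M^-1 s^-1, approximate spontaneous rate constant at pH 7

def half_life_s(initial_conc_m, k=K_UNCATALYSED):
    # Second-order half-life t1/2 = 1 / (2*k*[A]0) for 2 A -> products
    return 1.0 / (2.0 * k * initial_conc_m)

print(half_life_s(1e-4))           # 0.1 mM  -> ~0.05 s
print(half_life_s(1e-10) / 3600)   # 0.1 nM  -> ~14 hours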
The high efficiency of superoxide dismutase seems necessary: even at the subnanomolar concentrations achieved by the high concentrations of SOD within cells, superoxide inactivates the citric acid cycle enzyme aconitase, can poison energy metabolism, and releases potentially toxic iron. Aconitase is one of several iron-sulfur-containing (de)hydratases in metabolic pathways shown to be inactivated by superoxide.
Stability and folding mechanism
SOD1 is an extremely stable protein. In the holo form (both copper and zinc bound) the melting point is > 90 °C. In the apo form (no copper or zinc bound) the melting point is ~60 °C. By differential scanning calorimetry (DSC), holo SOD1 unfolds by a two-state mechanism: from dimer to two unfolded monomers. In chemical denaturation experiments, holo SOD1 unfolds by a three-state mechanism with observation of a folded monomeric intermediate.
Physiology
Superoxide is one of the main reactive oxygen species in the cell. As a consequence, SOD serves a key antioxidant role. The physiological importance of SODs is illustrated by the severe pathologies evident in mice genetically engineered to lack these enzymes. Mice lacking SOD2 die several days after birth, amid massive oxidative stress. Mice lacking SOD1 develop a wide range of pathologies, including hepatocellular carcinoma, an acceleration of age-related muscle mass loss, an earlier incidence of cataracts, and a reduced lifespan. Mice lacking SOD3 do not show any obvious defects and exhibit a normal lifespan, though they are more sensitive to hyperoxic injury. Knockout mice of any SOD enzyme are more sensitive to the lethal effects of superoxide-generating compounds, such as paraquat and diquat (herbicides).
Drosophila lacking SOD1 have a dramatically shortened lifespan, whereas flies lacking SOD2 die before birth. Depletion of SOD1 and SOD2 in the nervous system and muscles of Drosophila is associated with reduced lifespan. The accumulation of neuronal and muscular ROS appears to contribute to age-associated impairments. When overexpression of mitochondrial SOD2 is induced, the lifespan of adult Drosophila is extended.
Among black garden ants (Lasius niger), the lifespan of queens is an order of magnitude greater than that of workers despite no systematic nucleotide sequence difference between them. The SOD3 gene was found to be the most differentially over-expressed in the brains of queen vs worker ants. This finding raises the possibility of an important role of antioxidant function in modulating lifespan.
SOD knockdowns in the worm C. elegans do not cause major physiological disruptions. However, the lifespan of C. elegans can be extended by superoxide/catalase mimetics suggesting that oxidative stress is a major determinant of the rate of aging.
Knockout or null mutations in SOD1 are highly detrimental to aerobic growth in the budding yeast Saccharomyces cerevisiae and result in a dramatic reduction in post-diauxic lifespan. In wild-type S. cerevisiae, DNA damage rates increased 3-fold with age, but more than 5-fold in mutants deleted for either the SOD1 or SOD2 genes. Reactive oxygen species levels increase with age in these mutant strains and show a similar pattern to the pattern of DNA damage increase with age. Thus it appears that superoxide dismutase plays a substantial role in preserving genome integrity during aging in S. cerevisiae.
SOD2 knockout or null mutations cause growth inhibition on respiratory carbon sources in addition to decreased post-diauxic lifespan.
In the fission yeast Schizosaccharomyces pombe, deficiency of mitochondrial superoxide dismutase SOD2 accelerates chronological aging.
Several prokaryotic SOD null mutants have been generated, including E. coli. The loss of periplasmic CuZnSOD causes loss of virulence and might be an attractive target for new antibiotics.
Role in disease
Mutations in the first SOD enzyme (SOD1) can cause familial amyotrophic lateral sclerosis (ALS, a form of motor neuron disease). The most common mutation in the U.S. is A4V, while the most intensely studied is G93A. Inactivation of SOD1 causes hepatocellular carcinoma. Diminished SOD3 activity has been linked to lung diseases such as acute respiratory distress syndrome (ARDS) or chronic obstructive pulmonary disease (COPD). Superoxide dismutase is not expressed in neural crest cells in the developing fetus. Hence, high levels of free radicals can cause damage to them and induce dysraphic anomalies (neural tube defects).
Mutations in SOD1 can cause familial ALS (several pieces of evidence also show that wild-type SOD1, under conditions of cellular stress, is implicated in a significant fraction of sporadic ALS cases, which represent 90% of ALS patients), by a mechanism that is presently not understood, but not due to loss of enzymatic activity or a decrease in the conformational stability of the SOD1 protein. Overexpression of SOD1 has been linked to the neural disorders seen in Down syndrome. In patients with thalassemia, SOD increases as a compensation mechanism. However, in the chronic stage, SOD does not seem to be sufficient and tends to decrease due to the destruction of proteins from the massive reaction of oxidant-antioxidant.
In mice, the extracellular superoxide dismutase (SOD3, ecSOD) contributes to the development of hypertension. Inactivation of SOD2 in mice causes perinatal lethality.
Medical uses
Supplementary superoxide dismutase has been suggested as a treatment to prevent bronchopulmonary dysplasia in infants who are born preterm; however, the effectiveness of this treatment is not clear.
Research
SOD has been used in experimental treatment of chronic inflammation in inflammatory bowel conditions. SOD may ameliorate cis-platinum-induced nephrotoxicity (rodent studies). As "Orgotein" or "ontosein", a pharmacologically-active purified bovine liver SOD, it is also effective in the treatment of urinary tract inflammatory disease in man. For a time, bovine liver SOD even had regulatory approval in several European countries for such use. This was cut short by concerns about prion disease.
An SOD-mimetic agent, TEMPOL, is currently in clinical trials for radioprotection and to prevent radiation-induced dermatitis. TEMPOL and similar SOD-mimetic nitroxides exhibit a multiplicity of actions in diseases involving oxidative stress.
The synthesis of enzymes such as superoxide dismutase, L-ascorbate oxidase, and Delta 1 DNA polymerase is initiated in plants with the activation of genes associated with stress conditions for plants. The most common stress conditions can be injury, drought or soil salinity. Limiting this process initiated by the conditions of strong soil salinity can be achieved by administering exogenous glutamine to plants. The decrease in the level of expression of genes responsible for the synthesis of superoxide dismutase increases with the increase in glutamine concentration.
Cosmetic uses
SOD may reduce free radical damage to skin—for example, to reduce fibrosis following radiation for breast cancer. Studies of this kind must be regarded as tentative, however, as there were not adequate controls in the study including a lack of randomization, double-blinding, or placebo. Superoxide dismutase is known to reverse fibrosis, possibly through de-differentiation of myofibroblasts back to fibroblasts.
Commercial sources
SOD is commercially obtained from marine phytoplankton, bovine liver, horseradish, cantaloupe, and certain bacteria. For therapeutic purpose, SOD is usually injected locally. There is no evidence that ingestion of unprotected SOD or SOD-rich foods can have any physiological effects, as all ingested SOD is broken down into amino acids before being absorbed. However, ingestion of SOD bound to wheat proteins could improve its therapeutic activity, at least in theory.
See also
Catalase
Glutathione peroxidase
Jiaogulan
NADPH oxidase, an enzyme which produces superoxide
Peroxidase
References
External links
(ALS)
The ALS Online Database
A short but substantive overview of SOD and its literature.
Damage-Based Theories of Aging Includes a discussion of the roles of SOD1 and SOD2 in aging.
Physicians' Comm. For Responsible Med.
SOD and Oxidative Stress Pathway Image
PDBe-KB provides an overview of all the structure information available in the PDB for Human Superoxide dismutase [Cu-Zn]
PDBe-KB provides an overview of all the structure information available in the PDB for Human Superoxide dismutase [Mn], mitochondrial
PDBe-KB provides an overview of all the structure information available in the PDB for Human Extracellular superoxide dismutase [Cu-Zn]
Antioxidants
Metalloproteins
Oxidoreductases
EC 1.15.1
Copper enzymes
Aging-related enzymes
Iron enzymes
Zinc enzymes
Nickel enzymes
Manganese enzymes | Superoxide dismutase | [
"Chemistry",
"Biology"
] | 4,161 | [
"Aging-related enzymes",
"Oxidoreductases",
"Senescence",
"Metalloproteins",
"Bioinorganic chemistry"
] |
27,970 | https://en.wikipedia.org/wiki/Stereoisomerism | In stereochemistry, stereoisomerism, or spatial isomerism, is a form of isomerism in which molecules have the same molecular formula and sequence of bonded atoms (constitution), but differ in the three-dimensional orientations of their atoms in space. This contrasts with structural isomers, which share the same molecular formula, but the bond connections or their order differs. By definition, molecules that are stereoisomers of each other represent the same structural isomer.
Enantiomers
Enantiomers, also known as optical isomers, are two stereoisomers that are related to each other by a reflection: they are mirror images of each other that are non-superposable. Human hands are a macroscopic analog of this. Every stereogenic center in one has the opposite configuration in the other. Two compounds that are enantiomers of each other have the same physical properties, except for the direction in which they rotate polarized light and how they interact with different enantiomers of other compounds. As a result, different enantiomers of a compound may have substantially different biological effects. Pure enantiomers also exhibit the phenomenon of optical activity and can be separated only with the use of a chiral agent. In nature, only one enantiomer of most chiral biological compounds, such as amino acids (except glycine, which is achiral), is present. An optically active compound shows two forms: D-(+) form and L-(−) form.
Diastereomers
Diastereomers are stereoisomers not related through a reflection operation. They are not mirror images of each other. These include meso compounds, cis–trans isomers, E-Z isomers, and non-enantiomeric optical isomers. Diastereomers seldom have the same physical properties. In the example shown below, the meso form of tartaric acid forms a diastereomeric pair with both levo- and dextro-tartaric acids, which form an enantiomeric pair.
The D- and L- labeling of the isomers above is not the same as the d- and l- labeling more commonly seen, explaining why these may appear reversed to those familiar with only the latter naming convention.
A Fischer projection can be used to differentiate between L- and D- molecules (see Chirality (chemistry)). For instance, by definition, in a Fischer projection the penultimate carbon of D-sugars is depicted with hydrogen on the left and hydroxyl on the right. L-sugars will be shown with the hydrogen on the right and the hydroxyl on the left.
The other labeling refers to optical rotation: when looking at the source of light, the rotation of the plane of polarization may be either to the right (dextrorotary — d-rotary, represented by (+), clockwise) or to the left (levorotary — l-rotary, represented by (−), counter-clockwise), depending on which stereoisomer is dominant. For instance, sucrose and camphor are d-rotary whereas cholesterol is l-rotary.
Cis–trans and E–Z isomerism
Stereoisomerism about double bonds arises because rotation about the double bond is restricted, keeping the substituents fixed relative to each other. If the two substituents on at least one end of a double bond are the same, then there is no stereoisomer and the double bond is not a stereocenter, e.g. propene, CH3CH=CH2 where the two substituents at one end are both H.
Traditionally, double bond stereochemistry was described as either cis (Latin, on this side) or trans (Latin, across), in reference to the relative position of substituents on either side of a double bond. A simple example of cis–trans isomerism is the 1,2-disubstituted ethenes, like the dichloroethene (C2H2Cl2) isomers shown below.
Molecule I is cis-1,2-dichloroethene and molecule II is trans-1,2-dichloroethene. Due to occasional ambiguity, IUPAC adopted a more rigorous system wherein the substituents at each end of the double bond are assigned priority based on their atomic number. If the high-priority substituents are on the same side of the bond, it is assigned Z (Ger. zusammen, together). If they are on opposite sides, it is E (Ger. entgegen, opposite). Since chlorine has a larger atomic number than hydrogen, it is the highest-priority group. Using this notation to name the above pictured molecules, molecule I is (Z)-1,2-dichloroethene and molecule II is (E)-1,2-dichloroethene. It is not the case that Z and cis, or E and trans, are always interchangeable. Consider the following fluoromethylpentene:
The proper name for this molecule is either trans-2-fluoro-3-methylpent-2-ene because the alkyl groups that form the backbone chain (i.e., methyl and ethyl) reside across the double bond from each other, or (Z)-2-fluoro-3-methylpent-2-ene because the highest-priority groups on each side of the double bond are on the same side of the double bond. Fluoro is the highest-priority group on the left side of the double bond, and ethyl is the highest-priority group on the right side of the molecule.
The terms cis and trans are also used to describe the relative position of two substituents on a ring; cis if on the same side, otherwise trans.
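The priority comparison described above can be illustrated in code. The following is a minimal, hypothetical Python sketch that assigns E or Z only in the simple case where each substituent's priority is decided by the atomic number of the atom attached directly to the double-bond carbon, as in the dichloroethene example above; the full CIP rules require a recursive comparison that is not implemented here, so this sketch would not handle the fluoromethylpentene case.

```python
# Toy E/Z assignment: priorities decided only by the atomic number of the
# directly attached atom (a simplification of the CIP rules).
ATOMIC_NUMBER = {"H": 1, "C": 6, "F": 9, "Cl": 17, "Br": 35}

def assign_ez(left_up, left_down, right_up, right_down):
    """Substituents are element symbols; 'up'/'down' say on which side of the
    double-bond plane each substituent sits."""
    def higher(up, down):
        if ATOMIC_NUMBER[up] == ATOMIC_NUMBER[down]:
            raise ValueError("tie at the first atom: full CIP comparison needed")
        return "up" if ATOMIC_NUMBER[up] > ATOMIC_NUMBER[down] else "down"

    return "Z" if higher(left_up, left_down) == higher(right_up, right_down) else "E"

print(assign_ez("Cl", "H", "Cl", "H"))  # Z  (cis-1,2-dichloroethene)
print(assign_ez("Cl", "H", "H", "Cl"))  # E  (trans-1,2-dichloroethene)
```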
Conformers
Conformational isomerism is a form of isomerism describing molecules with the same structural formula but different shapes due to rotations about one or more bonds. Different conformations can have different energies, can usually interconvert, and are very rarely isolatable. For example, cyclohexane (an essential intermediate in the synthesis of nylon-6,6) adopts a variety of conformations, including a chair conformation, in which four of the carbon atoms form the "seat" of the chair, one carbon atom forms the "back", and one the "foot rest", and a boat conformation. The boat conformation represents the energy maximum on a conformational itinerary between the two equivalent chair forms; however, it is not the transition state for this process, because lower-energy pathways exist. The conformational inversion of substituted cyclohexanes is a very rapid process at room temperature, with a half-life of about 0.00001 seconds.
There are some molecules that can be isolated in several conformations, due to the large energy barriers between different conformations. 2,2',6,6'-Tetrasubstituted biphenyls can fit into this latter category.
Anomers
Anomerism applies to singly bonded ring structures in which a carbon atom exhibits both geometric isomerism ("cis"/"Z" versus "trans"/"E" placement of substituents) and optical isomerism (enantiomerism); anomers differ at one or more such ring carbons. Anomers are named "alpha" (axial) or "beta" (equatorial) according to how a substituent (for example a hydroxyl group, a hydroxymethyl group, a methoxy group, or another pyranose or furanose unit, among other typical single-bond substituents) is attached to a ring whose carbon atoms are joined by single bonds. An axial bond is perpendicular (90 degrees) to the reference plane of the ring, while an equatorial bond lies 120 degrees away from the axial bond, that is, it deviates by about 30 degrees from the reference plane.
Atropisomers
Atropisomers are stereoisomers resulting from hindered rotation about single bonds where the steric strain barrier to rotation is high enough to allow for the isolation of the conformers.
More definitions
A configurational stereoisomer is a stereoisomer of a reference molecule that has the opposite configuration at a stereocenter (e.g., R- vs S- or E- vs Z-). This means that configurational isomers can be interconverted only by breaking covalent bonds to the stereocenter, for example, by inverting the configurations of some or all of the stereocenters in a compound.
An epimer is a diastereoisomer that has the opposite configuration at only one of the stereocenters.
Le Bel-van't Hoff rule
The Le Bel–van 't Hoff rule states that for a structure with n asymmetric carbon atoms, there is a maximum of 2^n different stereoisomers possible. As an example, D-glucose is an aldohexose and has the formula C6H12O6. Four of its six carbon atoms are stereogenic, which means D-glucose is one of 2^4 = 16 possible stereoisomers.
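As a quick worked illustration of the rule, the following Python sketch (the helper name is made up for illustration) simply evaluates the 2^n upper bound; note that the rule gives a maximum, since features such as meso forms can reduce the actual count.

```python
# Upper bound on the number of stereoisomers from the Le Bel-van 't Hoff rule.
def max_stereoisomers(n_stereocenters: int) -> int:
    return 2 ** n_stereocenters

# D-glucose has 4 stereogenic carbons, so it is one of at most 2**4 = 16
# aldohexose stereoisomers.
print(max_stereoisomers(4))  # 16
```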
See also
Descriptor (chemistry)
Backbone-dependent rotamer library
References
Stereochemistry
Isomerism
Jacobus Henricus van 't Hoff | Stereoisomerism | [
"Physics",
"Chemistry"
] | 2,003 | [
"Stereochemistry",
"Space",
"Isomerism",
"nan",
"Spacetime"
] |
28,002 | https://en.wikipedia.org/wiki/Simple%20machine | A simple machine is a mechanical device that changes the direction or magnitude of a force. In general, they can be defined as the simplest mechanisms that use mechanical advantage (also called leverage) to multiply force. Usually the term refers to the six classical simple machines that were defined by Renaissance scientists:
Lever
Wheel and axle
Pulley
Inclined plane
Wedge
Screw
A simple machine uses a single applied force to do work against a single load force. Ignoring friction losses, the work done on the load is equal to the work done by the applied force. The machine can increase the amount of the output force, at the cost of a proportional decrease in the distance moved by the load. The ratio of the output to the applied force is called the mechanical advantage.
Simple machines can be regarded as the elementary "building blocks" of which all more complicated machines (sometimes called "compound machines") are composed. For example, wheels, levers, and pulleys are all used in the mechanism of a bicycle. The mechanical advantage of a compound machine is just the product of the mechanical advantages of the simple machines of which it is composed.
Although they continue to be of great importance in mechanics and applied science, modern mechanics has moved beyond the view of the simple machines as the ultimate building blocks of which all machines are composed, which arose in the Renaissance as a neoclassical amplification of ancient Greek texts. The great variety and sophistication of modern machine linkages, which arose during the Industrial Revolution, is inadequately described by these six simple categories. Various post-Renaissance authors have compiled expanded lists of "simple machines", often using terms like basic machines, compound machines, or machine elements to distinguish them from the classical simple machines above. By the late 1800s, Franz Reuleaux had identified hundreds of machine elements, calling them simple machines. Modern machine theory analyzes machines as kinematic chains composed of elementary linkages called kinematic pairs.
History
The idea of a simple machine originated with the Greek philosopher Archimedes around the 3rd century BC, who studied the Archimedean simple machines: lever, pulley, and screw. He discovered the principle of mechanical advantage in the lever. Archimedes' famous remark with regard to the lever, "Give me a place to stand on, and I will move the Earth," expresses his realization that there was no limit to the amount of force amplification that could be achieved by using mechanical advantage. Later Greek philosophers defined the classic five simple machines (excluding the inclined plane) and were able to calculate their (ideal) mechanical advantage. For example, Heron of Alexandria (1st century AD) in his work Mechanics lists five mechanisms that can "set a load in motion": lever, windlass, pulley, wedge, and screw, and describes their fabrication and uses. However, the Greeks' understanding was limited to the statics of simple machines (the balance of forces), and did not include dynamics, the tradeoff between force and distance, or the concept of work.
During the Renaissance the dynamics of the mechanical powers, as the simple machines were called, began to be studied from the standpoint of how far they could lift a load, in addition to the force they could apply, leading eventually to the new concept of mechanical work. In 1586 Flemish engineer Simon Stevin derived the mechanical advantage of the inclined plane, and it was included with the other simple machines. The complete dynamic theory of simple machines was worked out by Italian scientist Galileo Galilei in 1600 in his work On Mechanics, in which he showed the underlying mathematical similarity of the machines as force amplifiers. He was the first to explain that simple machines do not create energy, only transform it.
The classic rules of sliding friction in machines were discovered by Leonardo da Vinci (1452–1519), but remained unpublished, merely documented in his notebooks, and rested on pre-Newtonian ideas such as the belief that friction was an ethereal fluid. They were rediscovered by Guillaume Amontons (1699) and were further developed by Charles-Augustin de Coulomb (1785).
Ideal simple machine
If a simple machine does not dissipate energy through friction, wear or deformation, then energy is conserved and it is called an ideal simple machine. In this case, the power into the machine equals the power out, and the mechanical advantage can be calculated from its geometric dimensions.
Although each machine works differently mechanically, the way they function is similar mathematically. In each machine, a force F_in is applied to the device at one point, and it does work moving a load with force F_out at another point. Although some machines only change the direction of the force, such as a stationary pulley, most machines multiply the magnitude of the force by a factor, the mechanical advantage MA = F_out / F_in, which can be calculated from the machine's geometry and friction.
Simple machines do not contain a source of energy, so they cannot do more work than they receive from the input force. A simple machine with no friction or elasticity is called an ideal machine. Due to conservation of energy, in an ideal simple machine, the power output (rate of energy output) at any time is equal to the power input: P_out = P_in.
The power output equals the velocity of the load v_out multiplied by the load force F_out, and similarly the power input from the applied force is equal to the velocity of the input point v_in multiplied by the applied force F_in. Therefore, F_out v_out = F_in v_in.
So the mechanical advantage of an ideal machine is equal to the velocity ratio, the ratio of input velocity to output velocity: MA_ideal = F_out / F_in = v_in / v_out.
The velocity ratio is also equal to the ratio of the distances covered in any given period of time: v_in / v_out = d_in / d_out.
Therefore, the mechanical advantage of an ideal machine is also equal to the distance ratio, the ratio of input distance moved to output distance moved: MA_ideal = d_in / d_out.
This can be calculated from the geometry of the machine. For example, the mechanical advantage and distance ratio of the lever is equal to the ratio of its lever arms.
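As a hedged numerical sketch of these relations (function and variable names are illustrative, not from any particular source), the ideal-lever case can be worked through directly: the mechanical advantage equals the ratio of the lever arms, and the output force is the input force multiplied by that ratio.

```python
# Ideal (frictionless) lever: MA = F_out / F_in = d_in / d_out = ratio of lever arms.
def ideal_lever(effort_arm_m: float, load_arm_m: float, input_force_N: float):
    ma = effort_arm_m / load_arm_m      # mechanical advantage from geometry
    output_force = ma * input_force_N   # force delivered to the load
    return ma, output_force

ma, f_out = ideal_lever(effort_arm_m=2.0, load_arm_m=0.5, input_force_N=100.0)
print(ma, f_out)  # 4.0 400.0 -- the load moves only 1/4 as far as the effort
```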
The mechanical advantage can be greater or less than one:
If MA > 1, the output force is greater than the input force; the machine acts as a force amplifier, but the distance moved by the load is less than the distance moved by the input force.
If MA < 1, the output force is less than the input force, but the distance moved by the load is greater than the distance moved by the input force.
In the screw, which uses rotational motion, the input force should be replaced by the torque, and the velocity by the angular velocity at which the shaft is turned.
Friction and efficiency
All real machines have friction, which causes some of the input power to be dissipated as heat. If P_fric is the power lost to friction, then from conservation of energy P_in = P_out + P_fric.
The mechanical efficiency η of a machine (where 0 < η < 1) is defined as the ratio of power out to power in, η = P_out / P_in, and is a measure of the frictional energy losses.
As above, the power is equal to the product of force and velocity, so F_out v_out = η F_in v_in.
Therefore, MA = F_out / F_in = η (v_in / v_out).
So in non-ideal machines, the mechanical advantage is always less than the velocity ratio, being equal to the velocity ratio multiplied by the efficiency η. So a machine that includes friction will not be able to move as large a load as a corresponding ideal machine using the same input force.
Compound machines
A compound machine is a machine formed from a set of simple machines connected in series with the output force of one providing the input force to the next. For example, a bench vise consists of a lever (the vise's handle) in series with a screw, and a simple gear train consists of a number of gears (wheels and axles) connected in series.
The mechanical advantage of a compound machine is the ratio of the output force exerted by the last machine in the series divided by the input force applied to the first machine, that is MA_compound = F_outN / F_in1.
Because the output force of each machine is the input of the next, F_out,k = F_in,k+1, this mechanical advantage is also given by the telescoping product (F_out1 / F_in1)(F_out2 / F_in2)···(F_outN / F_inN).
Thus, the mechanical advantage of the compound machine is equal to the product of the mechanical advantages of the series of simple machines that form it: MA_compound = MA_1 MA_2 ··· MA_N.
Similarly, the efficiency of a compound machine is also the product of the efficiencies of the series of simple machines that form it: η_compound = η_1 η_2 ··· η_N.
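A small Python sketch (illustrative numbers and names, not a measured example) shows how these product rules combine: multiplying the ideal mechanical advantages gives the ideal compound advantage, multiplying the efficiencies gives the overall efficiency, and the actual force amplification is their product.

```python
from math import prod

# mas: ideal (distance-ratio) mechanical advantages of the machines in series
# efficiencies: the corresponding mechanical efficiencies (0 < eta <= 1)
def compound_machine(mas, efficiencies):
    ma_ideal = prod(mas)              # ideal compound mechanical advantage
    eta_total = prod(efficiencies)    # overall efficiency
    ma_actual = ma_ideal * eta_total  # actual force amplification
    return ma_ideal, eta_total, ma_actual

# e.g. a lever (MA 3, 95% efficient) driving a screw (MA 40, 30% efficient)
print(compound_machine([3, 40], [0.95, 0.30]))  # approximately (120, 0.285, 34.2)
```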
Self-locking machines
In many simple machines, if the load force F_out on the machine is high enough in relation to the input force F_in, the machine will move backwards, with the load force doing work on the input force. So these machines can be used in either direction, with the driving force applied to either input point. For example, if the load force on a lever is high enough, the lever will move backwards, moving the input arm backwards against the input force. These are called reversible, non-locking or overhauling machines, and the backward motion is called overhauling.
However, in some machines, if the frictional forces are high enough, no amount of load force can move it backwards, even if the input force is zero. This is called a self-locking, nonreversible, or non-overhauling machine. These machines can only be set in motion by a force at the input, and when the input force is removed will remain motionless, "locked" by friction at whatever position they were left.
Self-locking occurs mainly in those machines with large areas of sliding contact between moving parts: the screw, inclined plane, and wedge:
The most common example is a screw. In most screws, one can move the screw forward or backward by turning it, and one can move the nut along the shaft by turning it, but no amount of pushing the screw or the nut will cause either of them to turn.
On an inclined plane, a load can be pulled up the plane by a sideways input force, but if the plane is not too steep and there is enough friction between load and plane, when the input force is removed the load will remain motionless and will not slide down the plane, regardless of its weight.
A wedge can be driven into a block of wood by force on the end, such as from hitting it with a sledge hammer, forcing the sides apart, but no amount of compression force from the wood walls will cause it to pop back out of the block.
A machine will be self-locking if and only if its efficiency is below 50%, that is, η < 0.5.
Whether a machine is self-locking depends on both the friction forces (coefficient of static friction) between its parts, and the distance ratio (ideal mechanical advantage). If both the friction and ideal mechanical advantage are high enough, it will self-lock.
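The 50% criterion can be expressed as a one-line check. The sketch below is illustrative only (the function name is made up), and it presumes that the machine's mechanical efficiency is already known.

```python
# A machine is self-locking (cannot be back-driven by the load) when its
# efficiency is below 50%, per the criterion stated above.
def is_self_locking(efficiency: float) -> bool:
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return efficiency < 0.5

print(is_self_locking(0.30))  # True  -- typical of many screws
print(is_self_locking(0.85))  # False -- e.g. an efficient pulley system
```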
Proof
When a machine moves in the forward direction from point 1 to point 2, with the input force doing work on a load force, from conservation of energy the input work W_in is equal to the sum of the work W_load done on the load force and the work W_fric lost to friction: W_in = W_load + W_fric.
If the efficiency is below 50%, then η = W_load / W_in < 1/2, so 2 W_load < W_in.
From W_in = W_load + W_fric it follows that W_fric > W_load.
When the machine moves backward from point 2 to point 1 with the load force doing work on the input force, the work W_fric lost to friction is the same.
So the output work is W_out = W_load − W_fric < 0.
Thus the machine self-locks, because the work dissipated in friction is greater than the work done by the load force moving it backwards even with no input force.
Modern machine theory
Machines are studied as mechanical systems consisting of actuators and mechanisms that transmit forces and movement, monitored by sensors and controllers. The components of actuators and mechanisms consist of links and joints that form kinematic chains.
Kinematic chains
Simple machines are elementary examples of kinematic chains that are used to model mechanical systems ranging from the steam engine to robot manipulators. The bearings that form the fulcrum of a lever and that allow the wheel and axle and pulleys to rotate are examples of a kinematic pair called a hinged joint. Similarly, the flat surface of an inclined plane and wedge are examples of the kinematic pair called a sliding joint. The screw is usually identified as its own kinematic pair called a helical joint.
Two levers, or cranks, are combined into a planar four-bar linkage by attaching a link that connects the output of one crank to the input of another. Additional links can be attached to form a six-bar linkage or in series to form a robot.
Classification of machines
The identification of simple machines arises from a desire for a systematic method to invent new machines. Therefore, an important concern is how simple machines are combined to make more complex machines. One approach is to attach simple machines in series to obtain compound machines.
However, a more successful strategy was identified by Franz Reuleaux, who collected and studied over 800 elementary machines. He realized that a lever, pulley, and wheel and axle are in essence the same device: a body rotating about a hinge. Similarly, an inclined plane, wedge, and screw are a block sliding on a flat surface.
This realization shows that it is the joints, or the connections that provide movement, that are the primary elements of a machine. Starting with four types of joints, the revolute joint, sliding joint, cam joint and gear joint, and related connections such as cables and belts, it is possible to understand a machine as an assembly of solid parts that connect these joints.
Kinematic synthesis
The design of mechanisms to perform required movement and force transmission is known as kinematic synthesis. This is a collection of geometric techniques for the mechanical design of linkages, cam and follower mechanisms and gears and gear trains.
See also
Linkage (mechanical)
Cam and follower mechanisms
Gears and gear trains
Mechanism (engineering)
Rolamite, the only elementary machine discovered in the 20th century
References
Mechanical engineering | Simple machine | [
"Physics",
"Technology",
"Engineering"
] | 2,725 | [
"Machines",
"Applied and interdisciplinary physics",
"Physical systems",
"Mechanical engineering",
"Simple machines"
] |
28,186 | https://en.wikipedia.org/wiki/Symmetry%20group | In group theory, the symmetry group of a geometric object is the group of all transformations under which the object is invariant, endowed with the group operation of composition. Such a transformation is an invertible mapping of the ambient space which takes the object to itself, and which preserves all the relevant structure of the object. A frequent notation for the symmetry group of an object X is G = Sym(X).
For an object in a metric space, its symmetries form a subgroup of the isometry group of the ambient space. This article mainly considers symmetry groups in Euclidean geometry, but the concept may also be studied for more general types of geometric structure.
Introduction
We consider the "objects" possessing symmetry to be geometric figures, images, and patterns, such as a wallpaper pattern. For symmetry of physical objects, one may also take their physical composition as part of the pattern. (A pattern may be specified formally as a scalar field, a function of position with values in a set of colors or substances; as a vector field; or as a more general function on the object.) The group of isometries of space induces a group action on objects in it, and the symmetry group Sym(X) consists of those isometries which map X to itself (as well as mapping any further pattern to itself). We say X is invariant under such a mapping, and the mapping is a symmetry of X.
The above is sometimes called the full symmetry group of X to emphasize that it includes orientation-reversing isometries (reflections, glide reflections and improper rotations), as long as those isometries map this particular X to itself. The subgroup of orientation-preserving symmetries (translations, rotations, and compositions of these) is called its proper symmetry group. An object is chiral when it has no orientation-reversing symmetries, so that its proper symmetry group is equal to its full symmetry group.
Any symmetry group whose elements have a common fixed point, which is true if the group is finite or the figure is bounded, can be represented as a subgroup of the orthogonal group O(n) by choosing the origin to be a fixed point. The proper symmetry group is then a subgroup of the special orthogonal group SO(n), and is called the rotation group of the figure.
In a discrete symmetry group, the points symmetric to a given point do not accumulate toward a limit point. That is, every orbit of the group (the images of a given point under all group elements) forms a discrete set. All finite symmetry groups are discrete.
Discrete symmetry groups come in three types: (1) finite point groups, which include only rotations, reflections, inversions and rotoinversions – i.e., the finite subgroups of O(n); (2) infinite lattice groups, which include only translations; and (3) infinite space groups containing elements of both previous types, and perhaps also extra transformations like screw displacements and glide reflections. There are also continuous symmetry groups (Lie groups), which contain rotations of arbitrarily small angles or translations of arbitrarily small distances. An example is O(3), the symmetry group of a sphere. Symmetry groups of Euclidean objects may be completely classified as the subgroups of the Euclidean group E(n) (the isometry group of Rn).
Two geometric figures have the same symmetry type when their symmetry groups are conjugate subgroups of the Euclidean group: that is, when the subgroups H1, H2 are related by H1 = g H2 g−1 for some g in E(n). For example:
two 3D figures have mirror symmetry, but with respect to different mirror planes.
two 3D figures have 3-fold rotational symmetry, but with respect to different axes.
two 2D patterns have translational symmetry, each in one direction; the two translation vectors have the same length but a different direction.
In the following sections, we only consider isometry groups whose orbits are topologically closed, including all discrete and continuous isometry groups. However, this excludes for example the 1D group of translations by a rational number; such a non-closed figure cannot be drawn with reasonable accuracy due to its arbitrarily fine detail.
One dimension
The isometry groups in one dimension are:
the trivial cyclic group C1
the groups of two elements generated by a reflection; they are isomorphic with C2
the infinite discrete groups generated by a translation; they are isomorphic with Z, the additive group of the integers
the infinite discrete groups generated by a translation and a reflection; they are isomorphic with the generalized dihedral group of Z, Dih(Z), also denoted by D∞ (which is a semidirect product of Z and C2).
the group generated by all translations (isomorphic with the additive group of the real numbers R); this group cannot be the symmetry group of a Euclidean figure, even endowed with a pattern: such a pattern would be homogeneous, hence could also be reflected. However, a constant one-dimensional vector field has this symmetry group.
the group generated by all translations and reflections in points; they are isomorphic with the generalized dihedral group Dih(R).
Two dimensions
Up to conjugacy the discrete point groups in two-dimensional space are the following classes:
cyclic groups C1, C2, C3, C4, ... where Cn consists of all rotations about a fixed point by multiples of the angle 360°/n
dihedral groups D1, D2, D3, D4, ..., where Dn (of order 2n) consists of the rotations in Cn together with reflections in n axes that pass through the fixed point.
C1 is the trivial group containing only the identity operation, which occurs when the figure is asymmetric, for example the letter "F". C2 is the symmetry group of the letter "Z", C3 that of a triskelion, C4 of a swastika, and C5, C6, etc. are the symmetry groups of similar swastika-like figures with five, six, etc. arms instead of four.
D1 is the 2-element group containing the identity operation and a single reflection, which occurs when the figure has only a single axis of bilateral symmetry, for example the letter "A".
D2, which is isomorphic to the Klein four-group, is the symmetry group of a non-equilateral rectangle. This figure has four symmetry operations: the identity operation, one twofold axis of rotation, and two nonequivalent mirror planes.
D3, D4 etc. are the symmetry groups of the regular polygons.
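As a concrete, hedged illustration of these finite groups, the short Python sketch below (names are illustrative) realizes Dn as permutations of the vertices 0, ..., n-1 of a regular n-gon: rotations send vertex i to i+k (mod n) and reflections send i to j-i (mod n), giving 2n elements in total.

```python
# Dihedral group D_n as vertex permutations of a regular n-gon.
def dihedral_group(n):
    rotations = [tuple((i + k) % n for i in range(n)) for k in range(n)]
    reflections = [tuple((j - i) % n for i in range(n)) for j in range(n)]
    return rotations + reflections

d3 = dihedral_group(3)        # symmetries of the equilateral triangle
print(len(d3), len(set(d3)))  # 6 6 -- matching the order 2n of D_3
```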
Within each of these symmetry types, there are two degrees of freedom for the center of rotation, and in the case of the dihedral groups, one more for the positions of the mirrors.
The remaining isometry groups in two dimensions with a fixed point are:
the special orthogonal group SO(2) consisting of all rotations about a fixed point; it is also called the circle group S1, the multiplicative group of complex numbers of absolute value 1. It is the proper symmetry group of a circle and the continuous equivalent of Cn. There is no geometric figure that has as full symmetry group the circle group, but for a vector field it may apply (see the three-dimensional case below).
the orthogonal group O(2) consisting of all rotations about a fixed point and reflections in any axis through that fixed point. This is the symmetry group of a circle. It is also called Dih(S1) as it is the generalized dihedral group of S1.
Non-bounded figures may have isometry groups including translations; these are:
the 7 frieze groups
the 17 wallpaper groups
for each of the symmetry groups in one dimension, the combination of all symmetries in that group in one direction, and the group of all translations in the perpendicular direction
ditto with also reflections in a line in the first direction.
Three dimensions
Up to conjugacy the set of three-dimensional point groups consists of 7 infinite series, and 7 other individual groups. In crystallography, only those point groups are considered which preserve some crystal lattice (so their rotations may only have order 1, 2, 3, 4, or 6). This crystallographic restriction of the infinite families of general point groups results in 32 crystallographic point groups (27 individual groups from the 7 series, and 5 of the 7 other individuals).
The continuous symmetry groups with a fixed point include those of:
cylindrical symmetry without a symmetry plane perpendicular to the axis. This applies, for example, to a bottle or cone.
cylindrical symmetry with a symmetry plane perpendicular to the axis
spherical symmetry
For objects with scalar field patterns, the cylindrical symmetry implies vertical reflection symmetry as well. However, this is not true for vector field patterns: for example, in cylindrical coordinates (ρ, φ, z) with respect to some axis, the vector field A = Aρ(ρ, z) eρ + Aφ(ρ, z) eφ + Az(ρ, z) ez (where eρ, eφ, ez are the local unit vectors)
has cylindrical symmetry with respect to the axis whenever the components Aρ, Aφ and Az have this symmetry (no dependence on the angle φ), and it has reflectional symmetry only when Aφ = 0.
For spherical symmetry, there is no such distinction: any patterned object has planes of reflection symmetry.
The continuous symmetry groups without a fixed point include those with a screw axis, such as an infinite helix. See also subgroups of the Euclidean group.
Symmetry groups in general
In wider contexts, a symmetry group may be any kind of transformation group, or automorphism group. Each type of mathematical structure has invertible mappings which preserve the structure. Conversely, specifying the symmetry group can define the structure, or at least clarify the meaning of geometric congruence or invariance; this is one way of looking at the Erlangen programme.
For example, objects in a hyperbolic non-Euclidean geometry have Fuchsian symmetry groups, which are the discrete subgroups of the isometry group of the hyperbolic plane, preserving hyperbolic rather than Euclidean distance. (Some are depicted in drawings of Escher.) Similarly, automorphism groups of finite geometries preserve families of point-sets (discrete subspaces) rather than Euclidean subspaces, distances, or inner products. Just as for Euclidean figures, objects in any geometric space have symmetry groups which are subgroups of the symmetries of the ambient space.
Another example of a symmetry group is that of a combinatorial graph: a graph symmetry is a permutation of the vertices which takes edges to edges. Any finitely presented group is the symmetry group of its Cayley graph; the free group is the symmetry group of an infinite tree graph.
Group structure in terms of symmetries
Cayley's theorem states that any abstract group is a subgroup of the permutations of some set X, and so can be considered as the symmetry group of X with some extra structure. In addition, many abstract features of the group (defined purely in terms of the group operation) can be interpreted in terms of symmetries.
For example, let G = Sym(X) be the finite symmetry group of a figure X in a Euclidean space, and let H ⊂ G be a subgroup. Then H can be interpreted as the symmetry group of X+, a "decorated" version of X. Such a decoration may be constructed as follows. Add some patterns such as arrows or colors to X so as to break all symmetry, obtaining a figure X# with Sym(X#) = {1}, the trivial subgroup; that is, gX# ≠ X# for all non-trivial g ∈ G. Now the decorated figure X+ = ∪h∈H hX#, the union of the copies of X# produced by the symmetries in H, has symmetry group Sym(X+) = H.
Normal subgroups may also be characterized in this framework.
The symmetry group of the translated figure gX+ is the conjugate subgroup gHg−1. Thus H is normal whenever gHg−1 = H for all g in G;
that is, whenever the decoration of X+ may be drawn in any orientation, with respect to any side or feature of X, and still yield the same symmetry group gHg−1 = H.
As an example, consider the dihedral group G = D3 = Sym(X), where X is an equilateral triangle. We may decorate this with an arrow on one edge, obtaining an asymmetric figure X#. Letting τ ∈ G be the reflection of the arrowed edge, the composite figure X+ = X# ∪ τX# has a bidirectional arrow on that edge, and its symmetry group is H = {1, τ}. This subgroup is not normal, since gX+ may have the bi-arrow on a different edge, giving a different reflection symmetry group.
However, letting H = {1, ρ, ρ2} ⊂ D3 be the cyclic subgroup generated by a rotation, the decorated figure X+ consists of a 3-cycle of arrows with consistent orientation. Then H is normal, since drawing such a cycle with either orientation yields the same symmetry group H.
See also
Further reading
External links
Overview of the 32 crystallographic point groups - form the first parts (apart from skipping n=5) of the 7 infinite series and 5 of the 7 separate 3D point groups
Geometry
Symmetry
Group theory | Symmetry group | [
"Physics",
"Mathematics"
] | 2,688 | [
"Group theory",
"Fields of abstract algebra",
"Geometry",
"Symmetry"
] |
28,305 | https://en.wikipedia.org/wiki/String%20theory | In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string acts like a particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force. Thus, string theory is a theory of quantum gravity.
String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. String theory has contributed a number of advances to mathematical physics, which have been applied to a variety of problems in black hole physics, early universe cosmology, nuclear physics, and condensed matter physics, and it has stimulated a number of major developments in pure mathematics. Because string theory potentially provides a unified description of gravity and particle physics, it is a candidate for a theory of everything, a self-contained mathematical model that describes all fundamental forces and forms of matter. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details.
String theory was first studied in the late 1960s as a theory of the strong nuclear force, before being abandoned in favor of quantum chromodynamics. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. Five consistent versions of superstring theory were developed before it was conjectured in the mid-1990s that they were all different limiting cases of a single theory in eleven dimensions known as M-theory. In late 1997, theorists discovered an important relationship called the anti-de Sitter/conformal field theory correspondence (AdS/CFT correspondence), which relates string theory to another type of physical theory called a quantum field theory.
One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. Another issue is that the theory is thought to describe an enormous landscape of possible universes, which has complicated efforts to develop theories of particle physics based on string theory. These issues have led some in the community to criticize these approaches to physics, and to question the value of continued research on string theory unification.
Fundamentals
Overview
In the 20th century, two theoretical frameworks emerged for formulating the laws of physics. The first is Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of spacetime at the macro-level. The other is quantum mechanics, a completely different formulation, which uses known probability principles to describe physical phenomena at the micro-level. By the late 1970s, these two frameworks had proven to be sufficient to explain most of the observed features of the universe, from elementary particles to atoms to the evolution of stars and the universe as a whole.
In spite of these successes, there are still many problems that remain to be solved. One of the deepest problems in modern physics is the problem of quantum gravity. The general theory of relativity is formulated within the framework of classical physics, whereas the other fundamental forces are described within the framework of quantum mechanics. A quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, but difficulties arise when one attempts to apply the usual prescriptions of quantum theory to the force of gravity.
String theory is a theoretical framework that attempts to address these questions.
The starting point for string theory is the idea that the point-like particles of particle physics can also be modeled as one-dimensional objects called strings. String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle consistent with non-string models of elementary particles, with its mass, charge, and other properties determined by the vibrational state of the string. String theory's application as a form of quantum gravity proposes a vibrational state responsible for the graviton, a yet unproven quantum particle that is theorized to carry gravitational force.
One of the main developments of the past several decades in string theory was the discovery of certain 'dualities', mathematical transformations that identify one physical theory with another. Physicists studying string theory have discovered a number of these dualities between different versions of string theory, and this has led to the conjecture that all consistent versions of string theory are subsumed in a single framework known as M-theory.
Studies of string theory have also yielded a number of results on the nature of black holes and the gravitational interaction. There are certain paradoxes that arise when one attempts to understand the quantum aspects of black holes, and work on string theory has attempted to clarify these issues. In late 1997 this line of work culminated in the discovery of the anti-de Sitter/conformal field theory correspondence or AdS/CFT. This is a theoretical result that relates string theory to other physical theories which are better understood theoretically. The AdS/CFT correspondence has implications for the study of black holes and quantum gravity, and it has been applied to other subjects, including nuclear and condensed matter physics.
Since string theory incorporates all of the fundamental interactions, including gravity, many physicists hope that it will eventually be developed to the point where it fully describes our universe, making it a theory of everything. One of the goals of current research in string theory is to find a solution of the theory that reproduces the observed spectrum of elementary particles, with a small cosmological constant, containing dark matter and a plausible mechanism for cosmic inflation. While there has been progress toward these goals, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of details.
One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. The scattering of strings is most straightforwardly defined using the techniques of perturbation theory, but it is not known in general how to define string theory nonperturbatively. It is also not clear whether there is any principle by which string theory selects its vacuum state, the physical state that determines the properties of our universe. These problems have led some in the community to criticize these approaches to the unification of physics and question the value of continued research on these problems.
Strings
The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields.
In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory. Developed by Richard Feynman and others in the first half of the twentieth century, perturbative quantum field theory uses special diagrams called Feynman diagrams to organize computations. One imagines that these diagrams depict the paths of point-like particles and their interactions.
The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings. The interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional (2D) surface representing the motion of a string. Unlike in quantum field theory, string theory does not have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach.
In theories of particle physics based on string theory, the characteristic length scale of strings is assumed to be on the order of the Planck length, or about 10^-35 meters, the scale at which the effects of quantum gravity are believed to become significant. On much larger length scales, such as the scales visible in physics laboratories, such objects would be indistinguishable from zero-dimensional point particles, and the vibrational state of the string would determine the type of particle. One of the vibrational states of a string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force.
The original version of string theory was bosonic string theory, but this version described only bosons, a class of particles that transmit forces between the matter particles, or fermions. Bosonic string theory was eventually superseded by theories called superstring theories. These theories describe both bosons and fermions, and they incorporate a theoretical idea called supersymmetry. In theories with supersymmetry, each boson has a counterpart which is a fermion, and vice versa.
There are several versions of superstring theory: type I, type IIA, type IIB, and two flavors of heterotic string theory (SO(32) and E8×E8). The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings (which are segments with endpoints) and closed strings (which form closed loops), while types IIA, IIB and heterotic include only closed strings.
Extra dimensions
In everyday life, there are three familiar dimensions (3D) of space: height, width and length. Einstein's general theory of relativity treats time as a dimension on par with the three spatial dimensions; in general relativity, space and time are not modeled as separate entities but are instead unified to a four-dimensional (4D) spacetime. In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of spacetime.
In spite of the fact that the Universe is well described by 4D spacetime, there are several reasons why physicists consider theories in other dimensions. In some cases, by modeling spacetime in a different number of dimensions, a theory becomes more mathematically tractable, and one can perform calculations and gain general insights more easily. There are also situations where theories in two or three spacetime dimensions are useful for describing phenomena in condensed matter physics. Finally, there exist scenarios in which there could actually be more than 4D of spacetime which have nonetheless managed to escape detection.
String theories require extra dimensions of spacetime for their mathematical consistency. In bosonic string theory, spacetime is 26-dimensional, while in superstring theory it is 10-dimensional, and in M-theory it is 11-dimensional. In order to describe real physical phenomena using string theory, one must therefore imagine scenarios in which these extra dimensions would not be observed in experiments.
Compactification is one way of modifying the number of dimensions in a physical theory. In compactification, some of the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions.
Compactification can be used to construct models in which spacetime is effectively four-dimensional. However, not every way of compactifying the extra dimensions produces a model with the right properties to describe nature. In a viable model of particle physics, the compact extra dimensions must be shaped like a Calabi–Yau manifold. A Calabi–Yau manifold is a special space which is typically taken to be six-dimensional in applications to string theory. It is named after mathematicians Eugenio Calabi and Shing-Tung Yau.
Another approach to reducing the number of dimensions is the so-called brane-world scenario. In this approach, physicists assume that the observable universe is a four-dimensional subspace of a higher dimensional space. In such models, the force-carrying bosons of particle physics arise from open strings with endpoints attached to the four-dimensional subspace, while gravity arises from closed strings propagating through the larger ambient space. This idea plays an important role in attempts to develop models of real-world physics based on string theory, and it provides a natural explanation for the weakness of gravity compared to the other fundamental forces.
Dualities
A notable fact about string theory is that the different versions of the theory all turn out to be related in highly nontrivial ways. One of the relationships that can exist between different string theories is called S-duality. This is a relationship that says that a collection of strongly interacting particles in one theory can, in some cases, be viewed as a collection of weakly interacting particles in a completely different theory. Roughly speaking, a collection of particles is said to be strongly interacting if they combine and decay often and weakly interacting if they do so infrequently. Type I string theory turns out to be equivalent by S-duality to the SO(32) heterotic string theory. Similarly, type IIB string theory is related to itself in a nontrivial way by S-duality.
Another relationship between different string theories is T-duality. Here one considers strings propagating around a circular extra dimension. T-duality states that a string propagating around a circle of radius R is equivalent to a string propagating around a circle of radius 1/R (in string units), in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times. The number of times the string winds around a circle is called the winding number. If a string has momentum p and winding number n in one description, it will have momentum n and winding number p in the dual description. For example, type IIA string theory is equivalent to type IIB string theory via T-duality, and the two versions of heterotic string theory are also related by T-duality.
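A hedged numerical illustration of this momentum–winding exchange: the sketch below uses the standard contribution (n/R)^2 + (wR/α')^2 of momentum and winding to the closed-string mass squared, with the string scale α' set to 1 and the duality-invariant oscillator terms omitted; these conventions and the function name are assumptions for illustration, not a full spectrum computation.

```python
# Momentum/winding contribution to the closed-string mass squared on a circle,
# in units where alpha' = 1; oscillator contributions (unchanged by T-duality)
# are omitted.
def mass_squared(n_momentum, w_winding, radius, alpha_prime=1.0):
    return (n_momentum / radius) ** 2 + (w_winding * radius / alpha_prime) ** 2

R = 2.7
original = mass_squared(n_momentum=3, w_winding=1, radius=R)
dual = mass_squared(n_momentum=1, w_winding=3, radius=1.0 / R)  # n and w swapped, R -> 1/R
print(abs(original - dual) < 1e-12)  # True: the two descriptions agree
```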
In general, the term duality refers to a situation where two seemingly different physical systems turn out to be equivalent in a nontrivial way. Two theories related by a duality need not be string theories. For example, Montonen–Olive duality is an example of an S-duality relationship between quantum field theories. The AdS/CFT correspondence is an example of a duality that relates string theory to a quantum field theory. If two theories are related by a duality, it means that one theory can be transformed in some way so that it ends up looking just like the other theory. The two theories are then said to be dual to one another under the transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena.
Branes
In string theory and other related theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For instance, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes. In dimension p, these are called p-branes. The word brane comes from the word "membrane" which refers to a two-dimensional brane.
Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They have mass and can have other attributes such as charge. A p-brane sweeps out a (p+1)-dimensional volume in spacetime called its worldvolume. Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane.
In string theory, D-branes are an important class of branes that arise when one considers open strings. As an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The letter "D" in D-brane refers to a certain mathematical condition on the system known as the Dirichlet boundary condition. The study of D-branes in string theory has led to important results such as the AdS/CFT correspondence, which has shed light on many problems in quantum field theory.
Branes are frequently studied from a purely mathematical point of view, and they are described as objects of certain categories, such as the derived category of coherent sheaves on a complex algebraic variety, or the Fukaya category of a symplectic manifold. The connection between the physical notion of a brane and the mathematical notion of a category has led to important mathematical insights in the fields of algebraic and symplectic geometry and representation theory.
M-theory
Prior to 1995, theorists believed that there were five consistent versions of superstring theory (type I, type IIA, type IIB, and two versions of heterotic string theory). This understanding changed in 1995 when Edward Witten suggested that the five theories were just special limiting cases of an eleven-dimensional theory called M-theory. Witten's conjecture was based on the work of a number of other physicists, including Ashoke Sen, Chris Hull, Paul Townsend, and Michael Duff. His announcement led to a flurry of research activity now known as the second superstring revolution.
Unification of superstring theories
In the 1970s, many physicists became interested in supergravity theories, which combine general relativity with supersymmetry. Whereas general relativity makes sense in any number of dimensions, supergravity places an upper limit on the number of dimensions. In 1978, work by Werner Nahm showed that the maximum spacetime dimension in which one can formulate a consistent supersymmetric theory is eleven. In the same year, Eugene Cremmer, Bernard Julia, and Joël Scherk of the École Normale Supérieure showed that supergravity not only permits up to eleven dimensions but is in fact most elegant in this maximal number of dimensions.
Initially, many physicists hoped that by compactifying eleven-dimensional supergravity, it might be possible to construct realistic models of our four-dimensional world. The hope was that such models would provide a unified description of the four fundamental forces of nature: electromagnetism, the strong and weak nuclear forces, and gravity. Interest in eleven-dimensional supergravity soon waned as various flaws in this scheme were discovered. One of the problems was that the laws of physics appear to distinguish between clockwise and counterclockwise, a phenomenon known as chirality. Edward Witten and others observed this chirality property cannot be readily derived by compactifying from eleven dimensions.
In the first superstring revolution in 1984, many physicists turned to string theory as a unified theory of particle physics and quantum gravity. Unlike supergravity theory, string theory was able to accommodate the chirality of the standard model, and it provided a theory of gravity consistent with quantum effects. Another feature of string theory that many physicists were drawn to in the 1980s and 1990s was its high degree of uniqueness. In ordinary particle theories, one can consider any collection of elementary particles whose classical behavior is described by an arbitrary Lagrangian. In string theory, the possibilities are much more constrained: by the 1990s, physicists had argued that there were only five consistent supersymmetric versions of the theory.
Although there were only a handful of consistent superstring theories, it remained a mystery why there was not just one consistent formulation. However, as physicists began to examine string theory more closely, they realized that these theories are related in intricate and nontrivial ways. They found that a system of strongly interacting strings can, in some cases, be viewed as a system of weakly interacting strings. This phenomenon is known as S-duality. It was studied by Ashoke Sen in the context of heterotic strings in four dimensions and by Chris Hull and Paul Townsend in the context of the type IIB theory. Theorists also found that different string theories may be related by T-duality. This duality implies that strings propagating on completely different spacetime geometries may be physically equivalent.
At around the same time, as many physicists were studying the properties of strings, a small group of physicists were examining the possible applications of higher dimensional objects. In 1987, Eric Bergshoeff, Ergin Sezgin, and Paul Townsend showed that eleven-dimensional supergravity includes two-dimensional branes. Intuitively, these objects look like sheets or membranes propagating through the eleven-dimensional spacetime. Shortly after this discovery, Michael Duff, Paul Howe, Takeo Inami, and Kellogg Stelle considered a particular compactification of eleven-dimensional supergravity with one of the dimensions curled up into a circle. In this setting, one can imagine the membrane wrapping around the circular dimension. If the radius of the circle is sufficiently small, then this membrane looks just like a string in ten-dimensional spacetime. Duff and his collaborators showed that this construction reproduces exactly the strings appearing in type IIA superstring theory.
Speaking at a string theory conference in 1995, Edward Witten made the surprising suggestion that all five superstring theories were in fact just different limiting cases of a single theory in eleven spacetime dimensions. Witten's announcement drew together all of the previous results on S- and T-duality and the appearance of higher-dimensional branes in string theory. In the months following Witten's announcement, hundreds of new papers appeared on the Internet confirming different parts of his proposal. Today this flurry of work is known as the second superstring revolution.
Initially, some physicists suggested that the new theory was a fundamental theory of membranes, but Witten was skeptical of the role of membranes in the theory. In a paper from 1996, Hořava and Witten wrote "As it has been proposed that the eleven-dimensional theory is a supermembrane theory but there are some reasons to doubt that interpretation, we will non-committally call it the M-theory, leaving to the future the relation of M to membranes." In the absence of an understanding of the true meaning and structure of M-theory, Witten has suggested that the M should stand for "magic", "mystery", or "membrane" according to taste, and the true meaning of the title should be decided when a more fundamental formulation of the theory is known.
Matrix theory
In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics.
One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting.
The development of the matrix model formulation of M-theory has led physicists to consider various connections between string theory and a branch of mathematics called noncommutative geometry. This subject is a generalization of ordinary geometry in which mathematicians define new geometric notions using tools from noncommutative algebra. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which spacetime is described mathematically using noncommutative geometry. This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories.
Black holes
In general relativity, a black hole is defined as a region of spacetime in which the gravitational field is so strong that no particle or radiation can escape. In the currently accepted models of stellar evolution, black holes are thought to arise when massive stars undergo gravitational collapse, and many galaxies are thought to contain supermassive black holes at their centers. Black holes are also important for theoretical reasons, as they present profound challenges for theorists attempting to understand the quantum aspects of gravity. String theory has proved to be an important tool for investigating the theoretical properties of black holes because it provides a framework in which theorists can study their thermodynamics.
Bekenstein–Hawking formula
In the branch of physics called statistical mechanics, entropy is a measure of the randomness or disorder of a physical system. This concept was studied in the 1870s by the Austrian physicist Ludwig Boltzmann, who showed that the thermodynamic properties of a gas could be derived from the combined properties of its many constituent molecules. Boltzmann argued that by averaging the behaviors of all the different molecules in a gas, one can understand macroscopic properties such as volume, temperature, and pressure. In addition, this perspective led him to give a precise definition of entropy as the natural logarithm of the number of different states of the molecules (also called microstates) that give rise to the same macroscopic features.
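In modern notation, Boltzmann's definition is usually written as the following formula; the symbols S, k_B, and W are the conventional ones, supplied here since no equation appears in the text:
S = k_{\mathrm{B}} \ln W
where W is the number of microstates compatible with the given macroscopic state and k_B is the Boltzmann constant.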
In the twentieth century, physicists began to apply the same concepts to black holes. In most systems such as gases, the entropy scales with the volume. In the 1970s, the physicist Jacob Bekenstein suggested that the entropy of a black hole is instead proportional to the surface area of its event horizon, the boundary beyond which matter and radiation are lost to its gravitational attraction. When combined with ideas of the physicist Stephen Hawking, Bekenstein's work yielded a precise formula for the entropy of a black hole. The Bekenstein–Hawking formula expresses the entropy S as
S = \frac{k c^{3} A}{4 \hbar G}
where c is the speed of light, k is the Boltzmann constant, \hbar is the reduced Planck constant, G is Newton's constant, and A is the surface area of the event horizon.
Like any physical system, a black hole has an entropy defined in terms of the number of different microstates that lead to the same macroscopic features. The Bekenstein–Hawking entropy formula gives the expected value of the entropy of a black hole, but by the 1990s, physicists still lacked a derivation of this formula by counting microstates in a theory of quantum gravity. Finding such a derivation of this formula was considered an important test of the viability of any theory of quantum gravity such as string theory.
Derivation within string theory
In a paper from 1996, Andrew Strominger and Cumrun Vafa showed how to derive the Bekenstein–Hawking formula for certain black holes in string theory. Their calculation was based on the observation that D-branes—which look like fluctuating membranes when they are weakly interacting—become dense, massive objects with event horizons when the interactions are strong. In other words, a system of strongly interacting D-branes in string theory is indistinguishable from a black hole. Strominger and Vafa analyzed such D-brane systems and calculated the number of different ways of placing D-branes in spacetime so that their combined mass and charge is equal to a given mass and charge for the resulting black hole. Their calculation reproduced the Bekenstein–Hawking formula exactly, including the factor of 1/4. Subsequent work by Strominger, Vafa, and others refined the original calculations and gave the precise values of the "quantum corrections" needed to describe very small black holes.
The black holes that Strominger and Vafa considered in their original work were quite different from real astrophysical black holes. One difference was that Strominger and Vafa considered only extremal black holes in order to make the calculation tractable. These are defined as black holes with the lowest possible mass compatible with a given charge. Strominger and Vafa also restricted attention to black holes in five-dimensional spacetime with unphysical supersymmetry.
Although it was originally developed in this very particular and physically unrealistic context in string theory, the entropy calculation of Strominger and Vafa has led to a qualitative understanding of how black hole entropy can be accounted for in any theory of quantum gravity. Indeed, in 1998, Strominger argued that the original result could be generalized to an arbitrary consistent theory of quantum gravity without relying on strings or supersymmetry. In collaboration with several other authors in 2010, he showed that some results on black hole entropy could be extended to non-extremal astrophysical black holes.
AdS/CFT correspondence
One approach to formulating string theory and studying its properties is provided by the anti-de Sitter/conformal field theory (AdS/CFT) correspondence. This is a theoretical result that implies that string theory is in some cases equivalent to a quantum field theory. In addition to providing insights into the mathematical structure of string theory, the AdS/CFT correspondence has shed light on many aspects of quantum field theory in regimes where traditional calculational techniques are ineffective. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2010, Maldacena's article had over 7000 citations, becoming the most highly cited article in the field of high energy physics.
Overview of the correspondence
In the AdS/CFT correspondence, the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk as illustrated on the left. This image shows a tessellation of a disk by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior.
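One standard way to realize such a distance function is the Poincaré metric on the unit disk; the particular normalization below is the usual convention and is supplied here as background rather than taken from the text:
ds^{2} = \frac{4\,(dx^{2} + dy^{2})}{\left(1 - x^{2} - y^{2}\right)^{2}}
With this metric the boundary circle x^2 + y^2 = 1 lies at infinite distance from every interior point, so all the triangles and squares of the tessellation have the same hyperbolic size.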
One can imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction in this picture. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface.
This construction describes a hypothetical universe with only two space dimensions and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space.
An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, within a small region on the surface around any given point, it looks just like Minkowski space, the model of spacetime used in non-gravitational physics. One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a quantum field theory. The claim is that this quantum field theory is equivalent to a gravitational theory, such as string theory, in the bulk anti-de Sitter space in the sense that there is a "dictionary" for translating entities and calculations in one theory into their counterparts in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding.
Applications to quantum gravity
The discovery of the AdS/CFT correspondence was a major advance in physicists' understanding of string theory and quantum gravity. One reason for this is that the correspondence provides a formulation of string theory in terms of quantum field theory, which is well understood by comparison. Another reason is that it provides a general framework in which physicists can study and attempt to resolve the paradoxes of black holes.
In 1975, Stephen Hawking published a calculation which suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox.
The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space. These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics. In 2005, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information.
Applications to nuclear physics
In addition to its applications to theoretical problems in quantum gravity, the AdS/CFT correspondence has been applied to a variety of problems in quantum field theory. One physical system that has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies. Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvin, conditions similar to those present in the first tiny fractions of a second after the Big Bang.
The physics of the quark–gluon plasma is governed by a theory called quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma. In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark-gluon plasma by describing it in the language of string theory. By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark-gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark-gluon plasma, the shear viscosity and volume density of entropy, should be approximately equal to a certain universal constant. In 2008, the predicted value of this ratio for the quark-gluon plasma was confirmed at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.
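The universal constant in question is widely identified with the Kovtun–Son–Starinets value; the formula is quoted here as standard background, with η the shear viscosity and s the entropy density:
\frac{\eta}{s} = \frac{\hbar}{4\pi k_{\mathrm{B}}}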
Applications to condensed matter physics
The AdS/CFT correspondence has also been used to study aspects of condensed matter physics. Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists including Subir Sachdev hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior.
So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator. A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on the Planck constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole.
Phenomenology
In addition to being an idea of considerable theoretical interest, string theory provides a framework for constructing models of real-world physics that combine general relativity and particle physics. Phenomenology is the branch of theoretical physics in which physicists construct realistic models of nature from more abstract theoretical ideas. String phenomenology is the part of string theory that attempts to construct realistic or semi-realistic models based on string theory.
Partly because of theoretical and mathematical difficulties and partly because of the extremely high energies needed to test these theories experimentally, there is so far no experimental evidence that would unambiguously point to any of these models being a correct fundamental description of nature. This has led some in the community to criticize these approaches to unification and question the value of continued research on these problems.
Particle physics
The currently accepted theory describing elementary particles and their interactions is known as the standard model of particle physics. This theory provides a unified description of three of the fundamental forces of nature: electromagnetism and the strong and weak nuclear forces. Despite its remarkable success in explaining a wide range of physical phenomena, the standard model cannot be a complete description of reality. This is because the standard model fails to incorporate the force of gravity and because of problems such as the hierarchy problem and the inability to explain the structure of fermion masses or dark matter.
String theory has been used to construct a variety of models of particle physics going beyond the standard model. Typically, such models are based on the idea of compactification. Starting with the ten- or eleven-dimensional spacetime of string or M-theory, physicists postulate a shape for the extra dimensions. By choosing this shape appropriately, they can construct models roughly similar to the standard model of particle physics, together with additional undiscovered particles. One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. Such compactifications offer many ways of extracting realistic physics from string theory. Other similar methods can be used to construct realistic or semi-realistic models of our four-dimensional world based on M-theory.
Cosmology
The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. Despite its success in explaining many observed features of the universe including galactic redshifts, the relative abundance of light elements such as hydrogen and helium, and the existence of a cosmic microwave background, there are several questions that remain unanswered. For example, the standard Big Bang model does not explain why the universe appears to be the same in all directions, why it appears flat on very large distance scales, or why certain hypothesized particles such as magnetic monopoles are not observed in experiments.
Currently, the leading candidate for a theory going beyond the Big Bang is the theory of cosmic inflation. Developed by Alan Guth and others in the 1980s, inflation postulates a period of extremely rapid accelerated expansion of the universe prior to the expansion described by the standard Big Bang theory. The theory of cosmic inflation preserves the successes of the Big Bang while providing a natural explanation for some of the mysterious features of the universe. The theory has also received striking support from observations of the cosmic microwave background, the radiation that has filled the sky since around 380,000 years after the Big Bang.
In the theory of inflation, the rapid initial expansion of the universe is caused by a hypothetical particle called the inflaton. The exact properties of this particle are not fixed by the theory but should ultimately be derived from a more fundamental theory such as string theory. Indeed, there have been a number of attempts to identify an inflaton within the spectrum of particles described by string theory and to study inflation using string theory. While these approaches might eventually find support in observational data such as measurements of the cosmic microwave background, the application of string theory to cosmology is still in its early stages.
Connections to mathematics
In addition to influencing research in theoretical physics, string theory has stimulated a number of major developments in pure mathematics. Like many developing ideas in theoretical physics, string theory does not at present have a mathematically rigorous formulation in which all of its concepts can be defined precisely. As a result, physicists who study string theory are often guided by physical intuition to conjecture relationships between the seemingly different mathematical structures that are used to formalize different parts of the theory. These conjectures are later proved by mathematicians, and in this way, string theory serves as a source of new ideas in pure mathematics.
Mirror symmetry
After Calabi–Yau manifolds had entered physics as a way to compactify extra dimensions in string theory, many physicists began studying these manifolds. In the late 1980s, several physicists noticed that given such a compactification of string theory, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, two different versions of string theory, type IIA and type IIB, can be compactified on completely different Calabi–Yau manifolds giving rise to the same physics. In this situation, the manifolds are called mirror manifolds, and the relationship between the two physical theories is called mirror symmetry.
Regardless of whether Calabi–Yau compactifications of string theory provide a correct description of nature, the existence of the mirror duality between different string theories has significant mathematical consequences. The Calabi–Yau manifolds used in string theory are of interest in pure mathematics, and mirror symmetry allows mathematicians to solve problems in enumerative geometry, a branch of mathematics concerned with counting the numbers of solutions to geometric questions.
Enumerative geometry studies a class of geometric objects called algebraic varieties which are defined by the vanishing of polynomials. For example, the Clebsch cubic illustrated on the right is an algebraic variety defined using a certain polynomial of degree three in four variables. A celebrated result of nineteenth-century mathematicians Arthur Cayley and George Salmon states that there are exactly 27 straight lines that lie entirely on such a surface.
Generalizing this problem, one can ask how many lines can be drawn on a quintic Calabi–Yau manifold, such as the one illustrated above, which is defined by a polynomial of degree five. This problem was solved by the nineteenth-century German mathematician Hermann Schubert, who found that there are exactly 2,875 such lines. In 1986, geometer Sheldon Katz proved that the number of curves, such as circles, that are defined by polynomials of degree two and lie entirely in the quintic is 609,250.
By the year 1991, most of the classical problems of enumerative geometry had been solved and interest in enumerative geometry had begun to diminish. The field was reinvigorated in May 1991 when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parkes showed that mirror symmetry could be used to translate difficult mathematical questions about one Calabi–Yau manifold into easier questions about its mirror. In particular, they used mirror symmetry to show that a six-dimensional Calabi–Yau manifold can contain exactly 317,206,375 curves of degree three. In addition to counting degree-three curves, Candelas and his collaborators obtained a number of more general results for counting rational curves which went far beyond the results obtained by mathematicians.
Originally, these results of Candelas were justified on physical grounds. However, mathematicians generally prefer rigorous proofs that do not require an appeal to physical intuition. Inspired by physicists' work on mirror symmetry, mathematicians have therefore constructed their own arguments proving the enumerative predictions of mirror symmetry. Today mirror symmetry is an active area of research in mathematics, and mathematicians are working to develop a more complete mathematical understanding of mirror symmetry based on physicists' intuition. Major approaches to mirror symmetry include the homological mirror symmetry program of Maxim Kontsevich and the SYZ conjecture of Andrew Strominger, Shing-Tung Yau, and Eric Zaslow.
Monstrous moonshine
Group theory is the branch of mathematics that studies the concept of symmetry. For example, one can consider a geometric shape such as an equilateral triangle. There are various operations that one can perform on this triangle without changing its shape. One can rotate it through 120°, 240°, or 360°, or one can reflect it in any of its three lines of symmetry. Each of these operations is called a symmetry, and the collection of these symmetries satisfies certain technical properties making it into what mathematicians call a group. In this particular example, the group is known as the dihedral group of order 6 because it has six elements. A general group may describe finitely many or infinitely many symmetries; if there are only finitely many symmetries, it is called a finite group.
Mathematicians often strive for a classification (or list) of all mathematical objects of a given type. It is generally believed that finite groups are too diverse to admit a useful classification. A more modest but still challenging problem is to classify all finite simple groups. These are finite groups that may be used as building blocks for constructing arbitrary finite groups in the same way that prime numbers can be used to construct arbitrary whole numbers by taking products. One of the major achievements of contemporary group theory is the classification of finite simple groups, a mathematical theorem that provides a list of all possible finite simple groups.
This classification theorem identifies several infinite families of groups as well as 26 additional groups which do not fit into any family. The latter groups are called the "sporadic" groups, and each one owes its existence to a remarkable combination of circumstances. The largest sporadic group, the so-called monster group, has over 10^53 elements, more than a thousand times the number of atoms in the Earth.
A seemingly unrelated construction is the j-function of number theory. This object belongs to a special class of functions called modular functions, whose graphs form a certain kind of repeating pattern. Although this function appears in a branch of mathematics that seems very different from the theory of finite groups, the two subjects turn out to be intimately related. In the late 1970s, mathematicians John McKay and John Thompson noticed that certain numbers arising in the analysis of the monster group (namely, the dimensions of its irreducible representations) are related to numbers that appear in a formula for the j-function (namely, the coefficients of its Fourier series). This relationship was further developed by John Horton Conway and Simon Norton who called it monstrous moonshine because it seemed so far fetched.
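A minimal illustration of the McKay observation, assuming the standard q-expansion of the j-function (with q = e^{2\pi i \tau}):
j(\tau) = q^{-1} + 744 + 196884\,q + 21493760\,q^{2} + \cdots, \qquad 196884 = 1 + 196883
where 1 and 196883 are the dimensions of the two smallest irreducible representations of the monster group.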
In 1992, Richard Borcherds constructed a bridge between the theory of modular functions and finite groups and, in the process, explained the observations of McKay and Thompson. Borcherds' work used ideas from string theory in an essential way, extending earlier results of Igor Frenkel, James Lepowsky, and Arne Meurman, who had realized the monster group as the symmetries of a particular version of string theory. In 1998, Borcherds was awarded the Fields medal for his work.
Since the 1990s, the connection between string theory and moonshine has led to further results in mathematics and physics. In 2010, physicists Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa discovered connections between a different sporadic group, the Mathieu group M24, and a certain version of string theory. Miranda Cheng, John Duncan, and Jeffrey A. Harvey proposed a generalization of this moonshine phenomenon called umbral moonshine, and their conjecture was proved mathematically by Duncan, Michael Griffin, and Ken Ono. Witten has also speculated that the version of string theory appearing in monstrous moonshine might be related to a certain simplified model of gravity in three spacetime dimensions.
History
Early results
Some of the structures reintroduced by string theory arose for the first time much earlier as part of the program of classical unification started by Albert Einstein. The first person to add a fifth dimension to a theory of gravity was Gunnar Nordström in 1914, who noted that gravity in five dimensions describes both gravity and electromagnetism in four. Nordström attempted to unify electromagnetism with his theory of gravitation, which was however superseded by Einstein's general relativity in 1919. Thereafter, German mathematician Theodor Kaluza combined the fifth dimension with general relativity, and only Kaluza is usually credited with the idea. In 1926, the Swedish physicist Oskar Klein gave a physical interpretation of the unobservable extra dimension—it is wrapped into a small circle. Einstein introduced a non-symmetric metric tensor, while much later Brans and Dicke added a scalar component to gravity. These ideas would be revived within string theory, where they are demanded by consistency conditions.
String theory was originally developed during the late 1960s and early 1970s as a never completely successful theory of hadrons, the subatomic particles like the proton and neutron that feel the strong interaction. In the 1960s, Geoffrey Chew and Steven Frautschi discovered that the mesons make families called Regge trajectories with masses related to spins in a way that was later understood by Yoichiro Nambu, Holger Bech Nielsen and Leonard Susskind to be the relationship expected from rotating strings. Chew advocated making a theory for the interactions of these trajectories that did not presume that they were composed of any fundamental particles, but would construct their interactions from self-consistency conditions on the S-matrix. The S-matrix approach was started by Werner Heisenberg in the 1940s as a way of constructing a theory that did not rely on the local notions of space and time, which Heisenberg believed break down at the nuclear scale. While the scale was off by many orders of magnitude, the approach he advocated was ideally suited for a theory of quantum gravity.
Working with experimental data, R. Dolen, D. Horn and C. Schmid developed some sum rules for hadron exchange. When a particle and antiparticle scatter, virtual particles can be exchanged in two qualitatively different ways. In the s-channel, the two particles annihilate to make temporary intermediate states that fall apart into the final state particles. In the t-channel, the particles exchange intermediate states by emission and absorption. In field theory, the two contributions add together, one giving a continuous background contribution, the other giving peaks at certain energies. In the data, it was clear that the peaks were stealing from the background—the authors interpreted this as saying that the t-channel contribution was dual to the s-channel one, meaning both described the whole amplitude and included the other.
The result was widely advertised by Murray Gell-Mann, leading Gabriele Veneziano to construct a scattering amplitude that had the property of Dolen–Horn–Schmid duality, later renamed world-sheet duality. The amplitude needed poles where the particles appear, on straight-line trajectories, and there is a special mathematical function whose poles are evenly spaced on half the real line—the gamma function—which was widely used in Regge theory. By manipulating combinations of gamma functions, Veneziano was able to find a consistent scattering amplitude with poles on straight lines, with mostly positive residues, which obeyed duality and had the appropriate Regge scaling at high energy. The amplitude could fit near-beam scattering data as well as other Regge type fits and had a suggestive integral representation that could be used for generalization.
Over the next years, hundreds of physicists worked to complete the bootstrap program for this model, with many surprises. Veneziano himself discovered that for the scattering amplitude to describe the scattering of a particle that appears in the theory, an obvious self-consistency condition, the lightest particle must be a tachyon. Miguel Virasoro and Joel Shapiro found a different amplitude now understood to be that of closed strings, while Ziro Koba and Holger Nielsen generalized Veneziano's integral representation to multiparticle scattering. Veneziano and Sergio Fubini introduced an operator formalism for computing the scattering amplitudes that was a forerunner of world-sheet conformal theory, while Virasoro understood how to remove the poles with wrong-sign residues using a constraint on the states. Claud Lovelace calculated a loop amplitude, and noted that there is an inconsistency unless the dimension of the theory is 26. Charles Thorn, Peter Goddard and Richard Brower went on to prove that there are no wrong-sign propagating states in dimensions less than or equal to 26.
In 1969–1970, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind recognized that the theory could be given a description in space and time in terms of strings. The scattering amplitudes were derived systematically from the action principle by Peter Goddard, Jeffrey Goldstone, Claudio Rebbi, and Charles Thorn, giving a space-time picture to the vertex operators introduced by Veneziano and Fubini and a geometrical interpretation to the Virasoro conditions.
In 1971, Pierre Ramond added fermions to the model, which led him to formulate a two-dimensional supersymmetry to cancel the wrong-sign states. John Schwarz and André Neveu added another sector to the fermi theory a short time later. In the fermion theories, the critical dimension was 10. Stanley Mandelstam formulated a world sheet conformal theory for both the bose and fermi case, giving a two-dimensional field theoretic path-integral to generate the operator formalism. Michio Kaku and Keiji Kikkawa gave a different formulation of the bosonic string, as a string field theory, with infinitely many particle types and with fields taking values not on points, but on loops and curves.
In 1974, Tamiaki Yoneya discovered that all the known string theories included a massless spin-two particle that obeyed the correct Ward identities to be a graviton. John Schwarz and Joël Scherk came to the same conclusion and made the bold leap to suggest that string theory was a theory of gravity, not a theory of hadrons. They reintroduced Kaluza–Klein theory as a way of making sense of the extra dimensions. At the same time, quantum chromodynamics was recognized as the correct theory of hadrons, shifting the attention of physicists and apparently leaving the bootstrap program in the dustbin of history.
String theory eventually made it out of the dustbin, but for the following decade, all work on the theory was completely ignored. Still, the theory continued to develop at a steady pace thanks to the work of a handful of devotees. Ferdinando Gliozzi, Joël Scherk, and David Olive realized in 1977 that the original Ramond and Neveu–Schwarz strings were separately inconsistent and needed to be combined. The resulting theory did not have a tachyon and was proven to have space-time supersymmetry by John Schwarz and Michael Green in 1984. The same year, Alexander Polyakov gave the theory a modern path integral formulation, and went on to develop conformal field theory extensively. In 1979, Daniel Friedan showed that the equations of motion of string theory, which are generalizations of the Einstein equations of general relativity, emerge from the renormalization group equations for the two-dimensional field theory. Schwarz and Green discovered T-duality, and constructed two superstring theories—IIA and IIB related by T-duality, and type I theories with open strings. The consistency conditions had been so strong that the entire theory was nearly uniquely determined, with only a few discrete choices.
First superstring revolution
In the early 1980s, Edward Witten discovered that most theories of quantum gravity could not accommodate chiral fermions like the neutrino. This led him, in collaboration with Luis Álvarez-Gaumé, to study violations of the conservation laws in gravity theories with anomalies, concluding that type I string theories were inconsistent. Green and Schwarz discovered a contribution to the anomaly that Witten and Alvarez-Gaumé had missed, which restricted the gauge group of the type I string theory to be SO(32). In coming to understand this calculation, Edward Witten became convinced that string theory was truly a consistent theory of gravity, and he became a high-profile advocate. Following Witten's lead, between 1984 and 1986, hundreds of physicists started to work in this field, and this is sometimes called the first superstring revolution.
During this period, David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm discovered heterotic strings. The gauge group of these closed strings was two copies of E8, and either copy could easily and naturally include the standard model. Philip Candelas, Gary Horowitz, Andrew Strominger and Edward Witten found that the Calabi–Yau manifolds are the compactifications that preserve a realistic amount of supersymmetry, while Lance Dixon and others worked out the physical properties of orbifolds, distinctive geometrical singularities allowed in string theory. Cumrun Vafa generalized T-duality from circles to arbitrary manifolds, creating the mathematical field of mirror symmetry. Daniel Friedan, Emil Martinec and Stephen Shenker further developed the covariant quantization of the superstring using conformal field theory techniques. David Gross and Vipul Periwal discovered that string perturbation theory was divergent. Stephen Shenker showed it diverged much faster than in field theory suggesting that new non-perturbative objects were missing.
In the 1990s, Joseph Polchinski discovered that the theory requires higher-dimensional objects, called D-branes and identified these with the black-hole solutions of supergravity. These were understood to be the new objects suggested by the perturbative divergences, and they opened up a new field with rich mathematical structure. It quickly became clear that D-branes and other p-branes, not just strings, formed the matter content of the string theories, and the physical interpretation of the strings and branes was revealed—they are a type of black hole. Leonard Susskind had incorporated the holographic principle of Gerardus 't Hooft into string theory, identifying the long highly excited string states with ordinary thermal black hole states. As suggested by 't Hooft, the fluctuations of the black hole horizon, the world-sheet or world-volume theory, describes not only the degrees of freedom of the black hole, but all nearby objects too.
Second superstring revolution
In 1995, at the annual conference of string theorists at the University of Southern California (USC), Edward Witten gave a speech on string theory that in essence united the five string theories that existed at the time, and giving birth to a new 11-dimensional theory called M-theory. M-theory was also foreshadowed in the work of Paul Townsend at approximately the same time. The flurry of activity that began at this time is sometimes called the second superstring revolution.
During this period, Tom Banks, Willy Fischler, Stephen Shenker and Leonard Susskind formulated matrix theory, a full holographic description of M-theory using IIA D0 branes. This was the first definition of string theory that was fully non-perturbative and a concrete mathematical realization of the holographic principle. It is an example of a gauge-gravity duality and is now understood to be a special case of the AdS/CFT correspondence. Andrew Strominger and Cumrun Vafa calculated the entropy of certain configurations of D-branes and found agreement with the semi-classical answer for extreme charged black holes. Petr Hořava and Witten found the eleven-dimensional formulation of the heterotic string theories, showing that orbifolds solve the chirality problem. Witten noted that the effective description of the physics of D-branes at low energies is by a supersymmetric gauge theory, and found geometrical interpretations of mathematical structures in gauge theory that he and Nathan Seiberg had earlier discovered in terms of the location of the branes.
In 1997, Juan Maldacena noted that the low energy excitations of a theory near a black hole consist of objects close to the horizon, which for extreme charged black holes looks like an anti-de Sitter space. He noted that in this limit the gauge theory describes the string excitations near the branes. So he hypothesized that string theory on a near-horizon extreme-charged black-hole geometry, an anti-de Sitter space times a sphere with flux, is equally well described by the low-energy limiting gauge theory, the N = 4 supersymmetric Yang–Mills theory. This hypothesis, which is called the AdS/CFT correspondence, was further developed by Steven Gubser, Igor Klebanov and Alexander Polyakov, and by Edward Witten, and it is now well-accepted. It is a concrete realization of the holographic principle, which has far-reaching implications for black holes, locality and information in physics, as well as the nature of the gravitational interaction. Through this relationship, string theory has been shown to be related to gauge theories like quantum chromodynamics and this has led to a more quantitative understanding of the behavior of hadrons, bringing string theory back to its roots.
Criticism
Number of solutions
To construct models of particle physics based on string theory, physicists typically begin by specifying a shape for the extra dimensions of spacetime. Each of these different shapes corresponds to a different possible universe, or "vacuum state", with a different collection of particles and forces. String theory as it is currently understood has an enormous number of vacuum states, typically estimated to be around 10^500, and these might be sufficiently diverse to accommodate almost any phenomenon that might be observed at low energies.
Many critics of string theory have expressed concerns about the large number of possible universes described by string theory. In his book Not Even Wrong, Peter Woit, a lecturer in the mathematics department at Columbia University, has argued that the large number of different physical scenarios renders string theory vacuous as a framework for constructing models of particle physics. According to Woit,
Some physicists believe this large number of solutions is actually a virtue because it may allow a natural anthropic explanation of the observed values of physical constants, in particular the small value of the cosmological constant. The anthropic principle is the idea that some of the numbers appearing in the laws of physics are not fixed by any fundamental principle but must be compatible with the evolution of intelligent life. In 1987, Steven Weinberg published an article in which he argued that the cosmological constant could not have been too large, or else galaxies and intelligent life would not have been able to develop. Weinberg suggested that there might be a huge number of possible consistent universes, each with a different value of the cosmological constant, and observations indicate a small value of the cosmological constant only because humans happen to live in a universe that has allowed intelligent life, and hence observers, to exist.
String theorist Leonard Susskind has argued that string theory provides a natural anthropic explanation of the small value of the cosmological constant. According to Susskind, the different vacuum states of string theory might be realized as different universes within a larger multiverse. The fact that the observed universe has a small cosmological constant is just a tautological consequence of the fact that a small value is required for life to exist. Many prominent theorists and critics have disagreed with Susskind's conclusions. According to Woit, "in this case [anthropic reasoning] is nothing more than an excuse for failure. Speculative scientific ideas fail not just when they make incorrect predictions, but also when they turn out to be vacuous and incapable of predicting anything."
Compatibility with dark energy
It remains unknown whether string theory is compatible with a metastable, positive cosmological constant.
Some putative examples of such solutions do exist, such as the model described by Kachru et al. in 2003. In 2018, a group of four physicists advanced a controversial conjecture which would imply that no such universe exists. This is contrary to some popular models of dark energy such as Λ-CDM, which requires a positive vacuum energy. However, string theory is likely compatible with certain types of quintessence, where dark energy is caused by a new field with exotic properties.
Background independence
One of the fundamental properties of Einstein's general theory of relativity is that it is background independent, meaning that the formulation of the theory does not in any way privilege a particular spacetime geometry.
One of the main criticisms of string theory from early on is that it is not manifestly background-independent. In string theory, one must typically specify a fixed reference geometry for spacetime, and all other possible geometries are described as perturbations of this fixed one. In his book The Trouble With Physics, physicist Lee Smolin of the Perimeter Institute for Theoretical Physics claims that this is the principal weakness of string theory as a theory of quantum gravity, saying that string theory has failed to incorporate this important insight from general relativity.
Others have disagreed with Smolin's characterization of string theory. In a review of Smolin's book, string theorist Joseph Polchinski writes
Polchinski notes that an important open problem in quantum gravity is to develop holographic descriptions of gravity which do not require the gravitational field to be asymptotically anti-de Sitter. Smolin has responded by saying that the AdS/CFT correspondence, as it is currently understood, may not be strong enough to resolve all concerns about background independence.
Sociology of science
Since the superstring revolutions of the 1980s and 1990s, string theory has been one of the dominant paradigms of high energy theoretical physics. Some string theorists have expressed the view that there does not exist an equally successful alternative theory addressing the deep questions of fundamental physics. In an interview from 1987, Nobel laureate David Gross made the following controversial comments about the reasons for the popularity of string theory:
Several other high-profile theorists and commentators have expressed similar views, suggesting that there are no viable alternatives to string theory.
Many critics of string theory have commented on this state of affairs. In his book criticizing string theory, Peter Woit views the status of string theory research as unhealthy and detrimental to the future of fundamental physics. He argues that the extreme popularity of string theory among theoretical physicists is partly a consequence of the financial structure of academia and the fierce competition for scarce resources. In his book The Road to Reality, mathematical physicist Roger Penrose expresses similar views, stating "The often frantic competitiveness that this ease of communication engenders leads to bandwagon effects, where researchers fear to be left behind if they do not join in." Penrose also claims that the technical difficulty of modern physics forces young scientists to rely on the preferences of established researchers, rather than forging new paths of their own. Lee Smolin expresses a slightly different position in his critique, claiming that string theory grew out of a tradition of particle physics which discourages speculation about the foundations of physics, while his preferred approach, loop quantum gravity, encourages more radical thinking. According to Smolin,
Smolin goes on to offer a number of prescriptions for how scientists might encourage a greater diversity of approaches to quantum gravity research.
Notes
References
Bibliography
Further reading
Popular science
Textbooks
External links
Concepts in physics
Dimension
Mathematical physics
Multi-dimensional geometry
Physics beyond the Standard Model | String theory | [
"Physics",
"Astronomy",
"Mathematics"
] | 14,401 | [
"Geometric measurement",
"Astronomical hypotheses",
"Physical quantities",
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"Particle physics",
"nan",
"Theory of relativity",
"String theory",
"Mathematical physics",
"Dimension",
"Physics beyond the Standard Model... |
28,356 | https://en.wikipedia.org/wiki/Symplectic%20manifold | In differential geometry, a subject of mathematics, a symplectic manifold is a smooth manifold, M, equipped with a closed nondegenerate differential 2-form ω, called the symplectic form. The study of symplectic manifolds is called symplectic geometry or symplectic topology. Symplectic manifolds arise naturally in abstract formulations of classical mechanics and analytical mechanics as the cotangent bundles of manifolds. For example, in the Hamiltonian formulation of classical mechanics, which provides one of the major motivations for the field, the set of all possible configurations of a system is modeled as a manifold, and this manifold's cotangent bundle describes the phase space of the system.
Motivation
Symplectic manifolds arise from classical mechanics; in particular, they are a generalization of the phase space of a closed system. In the same way the Hamilton equations allow one to derive the time evolution of a system from a set of differential equations, the symplectic form should allow one to obtain a vector field describing the flow of the system from the differential dH of a Hamiltonian function H. So we require a linear map TM → T*M from the tangent manifold TM to the cotangent manifold T*M, or equivalently, an element of T*M ⊗ T*M. Letting ω denote a section of T*M ⊗ T*M, the requirement that ω be non-degenerate ensures that for every differential dH there is a unique corresponding vector field V_H such that dH = ω(V_H, ·). Since one desires the Hamiltonian to be constant along flow lines, one should have dH(V_H) = ω(V_H, V_H) = 0, which implies that ω is alternating and hence a 2-form. Finally, one makes the requirement that ω should not change under flow lines, i.e. that the Lie derivative of ω along V_H vanishes. Applying Cartan's formula, this amounts to (here \iota denotes the interior product):
\mathcal{L}_{V_H}\omega = \mathrm{d}(\iota_{V_H}\omega) + \iota_{V_H}\mathrm{d}\omega = \mathrm{d}(\mathrm{d}H) + \iota_{V_H}\mathrm{d}\omega = \iota_{V_H}\mathrm{d}\omega = 0
so that, on repeating this argument for different smooth functions H such that the corresponding vector fields V_H span the tangent space at each point the argument is applied at, we see that the requirement for the vanishing Lie derivative along flows of V_H corresponding to arbitrary smooth H is equivalent to the requirement that ω should be closed.
Definition
A symplectic form on a smooth manifold M is a closed non-degenerate differential 2-form ω. Here, non-degenerate means that for every point p ∈ M, the skew-symmetric pairing on the tangent space T_pM defined by ω_p is non-degenerate. That is to say, if there exists an X ∈ T_pM such that ω_p(X, Y) = 0 for all Y ∈ T_pM, then X = 0. Since in odd dimensions, skew-symmetric matrices are always singular, the requirement that ω be nondegenerate implies that M has an even dimension. The closed condition means that the exterior derivative of ω vanishes. A symplectic manifold is a pair (M, ω) where M is a smooth manifold and ω is a symplectic form. Assigning a symplectic form to M is referred to as giving M a symplectic structure.
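By Darboux's theorem (discussed further below in the context of Lagrangian fibrations), every symplectic form can locally be written in the standard coordinate form; the coordinate names here are the conventional ones rather than anything fixed above:
\omega = \sum_{i=1}^{n} \mathrm{d}p_i \wedge \mathrm{d}q^i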
Examples
Symplectic vector spaces
Let {v_1, …, v_{2n}} be a basis for ℝ^{2n}. We define our symplectic form ω on this basis as follows:
ω(v_i, v_j) = 1 if j − i = n, ω(v_i, v_j) = −1 if i − j = n, and ω(v_i, v_j) = 0 otherwise.
In this case the symplectic form reduces to a simple quadratic form. If I_n denotes the n × n identity matrix then the matrix, Ω, of this quadratic form is given by the block matrix:
\Omega = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}
Cotangent bundles
Let Q be a smooth manifold of dimension n. Then the total space of the cotangent bundle T*Q has a natural symplectic form, called the Poincaré two-form or the canonical symplectic form
\omega = \sum_{i=1}^{n} \mathrm{d}p_i \wedge \mathrm{d}q^i
Here (q^1, …, q^n) are any local coordinates on Q and (p_1, …, p_n) are fibrewise coordinates with respect to the cotangent vectors dq^1, …, dq^n. Cotangent bundles are the natural phase spaces of classical mechanics. The point of distinguishing upper and lower indexes is driven by the case of the manifold having a metric tensor, as is the case for Riemannian manifolds. Upper and lower indexes transform contra and covariantly under a change of coordinate frames. The phrase "fibrewise coordinates with respect to the cotangent vectors" is meant to convey that the momenta p_i are "soldered" to the velocities dq^i. The soldering is an expression of the idea that velocity and momentum are colinear, in that both move in the same direction, and differ by a scale factor.
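The global nature of this form can be seen by writing it as the exterior derivative of the tautological one-form; the sign convention below is chosen to match the formula above (conventions differ between authors):
\theta = \sum_{i} p_i \, \mathrm{d}q^i, \qquad \omega = \mathrm{d}\theta = \sum_{i} \mathrm{d}p_i \wedge \mathrm{d}q^i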
Kähler manifolds
A Kähler manifold is a symplectic manifold equipped with a compatible integrable complex structure. They form a particular class of complex manifolds. A large class of examples come from complex algebraic geometry. Any smooth complex projective variety V ⊂ ℂℙ^n has a symplectic form which is the restriction of the Fubini–Study form on the projective space ℂℙ^n.
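In an affine coordinate chart the Fubini–Study form can be written as follows; the overall normalization varies between authors, and this particular choice is one common convention rather than something fixed by the text:
\omega_{FS} = \frac{i}{2}\,\partial\bar{\partial}\,\log\!\left(1 + |z|^{2}\right)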
Almost-complex manifolds
Riemannian manifolds with an ω-compatible almost complex structure are termed almost-complex manifolds. They generalize Kähler manifolds, in that they need not be integrable. That is, they do not necessarily arise from a complex structure on the manifold.
Lagrangian and other submanifolds
There are several natural geometric notions of submanifold of a symplectic manifold (M, ω):
Symplectic submanifolds of M (potentially of any even dimension) are submanifolds S ⊂ M such that the restriction of ω to S is a symplectic form on S.
Isotropic submanifolds are submanifolds where the symplectic form restricts to zero, i.e. each tangent space is an isotropic subspace of the ambient manifold's tangent space. Similarly, if each tangent subspace to a submanifold is co-isotropic (the dual of an isotropic subspace), the submanifold is called co-isotropic.
Lagrangian submanifolds of a symplectic manifold (M, ω) are submanifolds L ⊂ M where the restriction of the symplectic form ω to L is vanishing, i.e. ω|_L = 0 and dim L = ½ dim M. Lagrangian submanifolds are the maximal isotropic submanifolds.
One major example is that the graph of a symplectomorphism in the product symplectic manifold (M × M, ω × −ω) is Lagrangian. Their intersections display rigidity properties not possessed by smooth manifolds; the Arnold conjecture gives the sum of the submanifold's Betti numbers as a lower bound for the number of self-intersections of a smooth Lagrangian submanifold, rather than the Euler characteristic in the smooth case.
Examples
Let ℝ^{2n} have global coordinates labelled (x_1, …, x_n, y_1, …, y_n). Then, we can equip ℝ^{2n} with the canonical symplectic form
\omega = \mathrm{d}x_1 \wedge \mathrm{d}y_1 + \cdots + \mathrm{d}x_n \wedge \mathrm{d}y_n
There is a standard Lagrangian submanifold given by ℝ^n_x ⊂ ℝ^{2n}_{x,y}, the subset where all the y coordinates vanish. The form ω vanishes on ℝ^n_x because given any pair of tangent vectors X = Σ_i f_i(x) ∂/∂x_i and X′ = Σ_i g_i(x) ∂/∂x_i, we have that ω(X, X′) = 0. To elucidate, consider the case n = 1. Then ω = dx ∧ dy, X = f(x) ∂/∂x and X′ = g(x) ∂/∂x. Notice that when we expand this out
\omega(X, X') = f(x)\,g(x)\,\bigl[\mathrm{d}x(\partial/\partial x)\,\mathrm{d}y(\partial/\partial x) - \mathrm{d}y(\partial/\partial x)\,\mathrm{d}x(\partial/\partial x)\bigr],
in both terms we have a dy(∂/∂x) factor, which is 0, by definition.
Example: Cotangent bundle
The cotangent bundle of a manifold is locally modeled on a space similar to the first example. It can be shown that we can glue these affine symplectic forms, hence this bundle forms a symplectic manifold. A less trivial example of a Lagrangian submanifold is the zero section of the cotangent bundle of a manifold. For example, let X be a plane curve cut out by a single smooth equation in ℝ².
Then, we can present T*X inside ℝ⁴ = T*ℝ² as the subset cut out by the defining equation of X together with the corresponding linear condition on the fibre coordinates,
where we are treating the symbols dx, dy as coordinates of ℝ⁴ = T*ℝ². We can consider the subset where the coordinates dx = 0 and dy = 0, giving us the zero section. This example can be repeated for any manifold defined by the vanishing locus of smooth functions f_1, …, f_k and their differentials df_1, …, df_k.
Example: Parametric submanifold
Consider the canonical space ℝ^{2n} with coordinates (x_1, …, x_n, p_1, …, p_n). A parametric submanifold L of ℝ^{2n} is one that is parameterized by coordinates (u_1, …, u_n) such that
x_i = x_i(u_1, \ldots, u_n), \qquad p_i = p_i(u_1, \ldots, u_n)
This manifold is a Lagrangian submanifold if the Lagrange bracket [u_a, u_b] vanishes for all a, b. That is, it is Lagrangian if
[u_a, u_b] = \sum_i \left( \frac{\partial x_i}{\partial u_a}\frac{\partial p_i}{\partial u_b} - \frac{\partial x_i}{\partial u_b}\frac{\partial p_i}{\partial u_a} \right) = 0
for all a, b. This can be seen by expanding
\frac{\partial}{\partial u_a} = \sum_i \left( \frac{\partial x_i}{\partial u_a}\frac{\partial}{\partial x_i} + \frac{\partial p_i}{\partial u_a}\frac{\partial}{\partial p_i} \right)
in the condition for a Lagrangian submanifold L. This is that the symplectic form ω must vanish on the tangent manifold TL; that is, it must vanish for all tangent vectors:
\omega\!\left( \frac{\partial}{\partial u_a}, \frac{\partial}{\partial u_b} \right) = 0
for all a, b. Simplify the result by making use of the canonical symplectic form on ℝ^{2n}:
\omega\!\left( \frac{\partial}{\partial x_i}, \frac{\partial}{\partial p_i} \right) = -\,\omega\!\left( \frac{\partial}{\partial p_i}, \frac{\partial}{\partial x_i} \right) = 1
and all others vanishing.
As local charts on a symplectic manifold take on the canonical form, this example suggests that Lagrangian submanifolds are relatively unconstrained. The classification of symplectic manifolds is done via Floer homology—this is an application of Morse theory to the action functional for maps between Lagrangian submanifolds. In physics, the action describes the time evolution of a physical system; here, it can be taken as the description of the dynamics of branes.
Example: Morse theory
Another useful class of Lagrangian submanifolds occurs in Morse theory. Given a Morse function f : M → ℝ and for a small enough ε one can construct a Lagrangian submanifold given by the graph of the closed one-form ε·df inside the cotangent bundle T*M. For a generic Morse function this Lagrangian intersects the zero section precisely in the critical points of f.
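A minimal sketch of this construction, with the notation Crit(f) for the critical set assumed here since the original formula is not shown:
L_{\varepsilon} = \{ (x, \varepsilon\, \mathrm{d}f_x) : x \in M \} \subset T^{*}M, \qquad L_{\varepsilon} \cap M = \mathrm{Crit}(f)
Here M ⊂ T*M denotes the zero section; L_ε is Lagrangian because ε·df is a closed one-form, and it meets the zero section exactly where df vanishes.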
Special Lagrangian submanifolds
In the case of Kähler manifolds (or Calabi–Yau manifolds) we can make a choice of a holomorphic n-form Ω = Re Ω + i Im Ω on M, where Re Ω is the real part and Im Ω the imaginary part. A Lagrangian submanifold L is called special if in addition to the above Lagrangian condition the restriction of Im Ω to L is vanishing. In other words, the real part Re Ω restricted to L leads to the volume form on L. The following examples are known as special Lagrangian submanifolds,
complex Lagrangian submanifolds of hyperkähler manifolds,
fixed points of a real structure of Calabi–Yau manifolds.
The SYZ conjecture deals with the study of special Lagrangian submanifolds in mirror symmetry.
The Thomas–Yau conjecture predicts that the existence of a special Lagrangian submanifolds on Calabi–Yau manifolds in Hamiltonian isotopy classes of Lagrangians is equivalent to stability with respect to a stability condition on the Fukaya category of the manifold.
Lagrangian fibration
A Lagrangian fibration of a symplectic manifold M is a fibration where all of the fibres are Lagrangian submanifolds. Since M is even-dimensional we can take local coordinates (p1, …, pn, q1, …, qn), and by Darboux's theorem the symplectic form ω can be, at least locally, written as ω = dp1 ∧ dq1 + … + dpn ∧ dqn, where d denotes the exterior derivative and ∧ denotes the exterior product. This form is called the Poincaré two-form or the canonical two-form. Using this set-up we can locally think of M as being the cotangent bundle T*R^n, and the Lagrangian fibration as the trivial fibration π : T*R^n → R^n. This is the canonical picture.
Lagrangian mapping
Let L be a Lagrangian submanifold of a symplectic manifold (K,ω) given by an immersion (i is called a Lagrangian immersion). Let give a Lagrangian fibration of K. The composite is a Lagrangian mapping. The critical value set of π ∘ i is called a caustic.
Two Lagrangian maps and are called Lagrangian equivalent if there exist diffeomorphisms σ, τ and ν such that both sides of the diagram given on the right commute, and τ preserves the symplectic form. Symbolically:
where τ∗ω2 denotes the pull back of ω2 by τ.
Special cases and generalizations
A symplectic manifold is exact if the symplectic form is exact. For example, the cotangent bundle of a smooth manifold is an exact symplectic manifold if we use the canonical symplectic form. The area 2-form on the 2-sphere is a symplectic form that is not exact.
A symplectic manifold endowed with a metric that is compatible with the symplectic form is an almost Kähler manifold in the sense that the tangent bundle has an almost complex structure, but this need not be integrable.
Symplectic manifolds are special cases of a Poisson manifold.
A multisymplectic manifold of degree k is a manifold equipped with a closed nondegenerate k-form.
A polysymplectic manifold is a Legendre bundle provided with a polysymplectic tangent-valued -form; it is utilized in Hamiltonian field theory.
See also
Contact manifold—an odd-dimensional counterpart of the symplectic manifold.
Citations
General and cited references
Further reading
Differential topology
Hamiltonian mechanics
Smooth manifolds
Symplectic geometry | Symplectic manifold | [
"Physics",
"Mathematics"
] | 2,519 | [
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Topology",
"Differential topology",
"Dynamical systems"
] |
28,420 | https://en.wikipedia.org/wiki/Specific%20heat%20capacity | In thermodynamics, the specific heat capacity (symbol c) of a substance is the amount of heat that must be added to one unit of mass of the substance in order to cause an increase of one unit in temperature. It is also referred to as massic heat capacity or as the specific heat. More formally it is the heat capacity of a sample of the substance divided by the mass of the sample. The SI unit of specific heat capacity is joule per kelvin per kilogram, J⋅kg−1⋅K−1. For example, the heat required to raise the temperature of 1 kg of water by 1 K is 4184 joules, so the specific heat capacity of water is 4184 J⋅kg−1⋅K−1.
Specific heat capacity often varies with temperature, and is different for each state of matter. Liquid water has one of the highest specific heat capacities among common substances, about 4184 J⋅kg−1⋅K−1 at 20 °C; but that of ice, just below 0 °C, is only 2093 J⋅kg−1⋅K−1. The specific heat capacities of iron, granite, and hydrogen gas are about 449 J⋅kg−1⋅K−1, 790 J⋅kg−1⋅K−1, and 14300 J⋅kg−1⋅K−1, respectively. While the substance is undergoing a phase transition, such as melting or boiling, its specific heat capacity is technically undefined, because the heat goes into changing its state rather than raising its temperature.
The specific heat capacity of a substance, especially a gas, may be significantly higher when it is allowed to expand as it is heated (specific heat capacity at constant pressure) than when it is heated in a closed vessel that prevents expansion (specific heat capacity at constant volume). These two values are usually denoted by cp and cv, respectively; their quotient is the heat capacity ratio.
The term specific heat may also refer to the ratio between the specific heat capacities of a substance at a given temperature and of a reference substance at a reference temperature, such as water at 15 °C; much in the fashion of specific gravity. Specific heat capacity is also related to other intensive measures of heat capacity with other denominators. If the amount of substance is measured as a number of moles, one gets the molar heat capacity instead, whose SI unit is joule per kelvin per mole, J⋅mol−1⋅K−1. If the amount is taken to be the volume of the sample (as is sometimes done in engineering), one gets the volumetric heat capacity, whose SI unit is joule per kelvin per cubic meter, J⋅m−3⋅K−1.
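As a quick numerical illustration of how the mass-based, molar, and volumetric measures are related, here is a minimal Python sketch; the molar mass and density of water used are approximate round values, not figures taken from this article.

```python
# Convert liquid water's specific heat capacity into molar and
# volumetric heat capacities (approximate values, for illustration).
c_mass = 4184.0    # specific heat capacity of water, J/(kg*K)
M = 0.018015       # molar mass of water, kg/mol (assumed)
rho = 997.0        # density of water near room temperature, kg/m^3 (assumed)

c_molar = c_mass * M          # J/(mol*K), roughly 75
c_volumetric = c_mass * rho   # J/(m^3*K), roughly 4.2e6

print(f"molar heat capacity      ~ {c_molar:.1f} J/(mol*K)")
print(f"volumetric heat capacity ~ {c_volumetric:.2e} J/(m^3*K)")
```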
History
Discovery of specific heat
One of the first scientists to use the concept was Joseph Black, an 18th-century medical doctor and professor of medicine at Glasgow University. He measured the specific heat capacities of many substances, using the term capacity for heat. In 1756 or soon thereafter, Black began an extensive study of heat. In 1760 he realized that when two different substances of equal mass but different temperatures are mixed, the changes in number of degrees in the two substances differ, though the heat gained by the cooler substance and lost by the hotter is the same. Black related an experiment conducted by Daniel Gabriel Fahrenheit on behalf of Dutch physician Herman Boerhaave. For clarity, he then described a hypothetical, but realistic variant of the experiment: If equal masses of 100 °F water and 150 °F mercury are mixed, the water temperature increases by 20 ° and the mercury temperature decreases by 30 ° (both arriving at 120 °F), even though the heat gained by the water and lost by the mercury is the same. This clarified the distinction between heat and temperature. It also introduced the concept of specific heat capacity, being different for different substances. Black wrote: “Quicksilver [mercury] ... has less capacity for the matter of heat than water.”
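The heat balance behind Black's hypothetical mixing experiment can be written out as a small calculation. This sketch uses only the illustrative temperatures quoted above; the implied ratio of heat capacities is a property of that hypothetical example, not a measured value for mercury.

```python
# Equal masses of 100 F water and 150 F mercury reach 120 F.
# Heat gained by water = heat lost by mercury:
#   m * c_water * (T_final - 100) = m * c_mercury * (150 - T_final)
T_water, T_mercury, T_final = 100.0, 150.0, 120.0   # degrees Fahrenheit

rise_water = T_final - T_water        # 20 degrees
drop_mercury = T_mercury - T_final    # 30 degrees

# With equal masses, the implied ratio of specific heat capacities is
ratio = rise_water / drop_mercury     # c_mercury / c_water
print(f"implied c_mercury / c_water = {ratio:.3f}")   # 2/3 in this hypothetical example
```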
Definition
The specific heat capacity of a substance, usually denoted by c or s, is the heat capacity C of a sample of the substance, divided by the mass m of the sample:
where dQ represents the amount of heat needed to uniformly raise the temperature of the sample by a small increment dT.
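A sketch of the defining relation just described, using c for the specific heat capacity, C for the heat capacity of the sample, m for its mass, and dQ for the heat required to produce a small temperature increment dT (notation assumed):

```latex
c \;=\; \frac{C}{m} \;=\; \frac{1}{m}\,\frac{\mathrm{d}Q}{\mathrm{d}T}.
```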
Like the heat capacity of an object, the specific heat capacity of a substance may vary, sometimes substantially, depending on the starting temperature of the sample and the pressure applied to it. Therefore, it should be considered a function of those two variables.
These parameters are usually specified when giving the specific heat capacity of a substance. For example, "Water (liquid): = 4187 J⋅kg−1⋅K−1 (15 °C)." When not specified, published values of the specific heat capacity generally are valid for some standard conditions for temperature and pressure.
However, the dependency of on starting temperature and pressure can often be ignored in practical contexts, e.g. when working in narrow ranges of those variables. In those contexts one usually omits the qualifier and approximates the specific heat capacity by a constant suitable for those ranges.
Specific heat capacity is an intensive property of a substance, an intrinsic characteristic that does not depend on the size or shape of the amount in consideration. (The qualifier "specific" in front of an extensive property often indicates an intensive property derived from it.)
Variations
The injection of heat energy into a substance, besides raising its temperature, usually causes an increase in its volume and/or its pressure, depending on how the sample is confined. The choice made about the latter affects the measured specific heat capacity, even for the same starting pressure and starting temperature . Two particular choices are widely used:
If the pressure is kept constant (for instance, at the ambient atmospheric pressure), and the sample is allowed to expand, the expansion generates work, as the force from the pressure displaces the enclosure or the surrounding fluid. That work must come from the heat energy provided. The specific heat capacity thus obtained is said to be measured at constant pressure (or isobaric) and is often denoted cp.
On the other hand, if the expansion is prevented (for example, by a sufficiently rigid enclosure or by increasing the external pressure to counteract the internal one), no work is generated, and the heat energy that would have gone into it must instead contribute to the internal energy of the sample, including raising its temperature by an extra amount. The specific heat capacity obtained this way is said to be measured at constant volume (or isochoric) and denoted cv.
The value of cv is always less than the value of cp for all fluids. This difference is particularly notable in gases where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. Hence the heat capacity ratio of gases is typically between 1.3 and 1.67.
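The quoted range of heat capacity ratios follows from the equipartition estimates for ideal gases; the short sketch below assumes cv = (f/2)R and cp = cv + R for f quadratic degrees of freedom, which is an idealization rather than measured data.

```python
# Heat capacity ratios (gamma = cp/cv) for ideal gases with f quadratic
# degrees of freedom, illustrating the 1.3 to 1.67 range quoted above.
R = 8.314  # gas constant, J/(mol*K)

for name, f in [("monatomic (e.g. He, Ar)", 3),
                ("diatomic, rigid rotor (e.g. N2, O2)", 5),
                ("polyatomic, nonlinear (e.g. CH4)", 6)]:
    cv = f / 2 * R
    cp = cv + R
    print(f"{name:38s} gamma = {cp / cv:.3f}")
```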
Applicability
The specific heat capacity can be defined and measured for gases, liquids, and solids of fairly general composition and molecular structure. These include gas mixtures, solutions and alloys, or heterogeneous materials such as milk, sand, granite, and concrete, if considered at a sufficiently large scale.
The specific heat capacity can be defined also for materials that change state or composition as the temperature and pressure change, as long as the changes are reversible and gradual. Thus, for example, the concepts are definable for a gas or liquid that dissociates as the temperature increases, as long as the products of the dissociation promptly and completely recombine when it drops.
The specific heat capacity is not meaningful if the substance undergoes irreversible chemical changes, or if there is a phase change, such as melting or boiling, at a sharp temperature within the range of temperatures spanned by the measurement.
Measurement
The specific heat capacity of a substance is typically determined according to the definition; namely, by measuring the heat capacity of a sample of the substance, usually with a calorimeter, and dividing by the sample's mass. Several techniques can be applied for estimating the heat capacity of a substance, such as differential scanning calorimetry.
The specific heat capacities of gases can be measured at constant volume, by enclosing the sample in a rigid container. On the other hand, measuring the specific heat capacity at constant volume can be prohibitively difficult for liquids and solids, since one often would need impractical pressures in order to prevent the expansion that would be caused by even small increases in temperature. Instead, the common practice is to measure the specific heat capacity at constant pressure (allowing the material to expand or contract as it wishes), determine separately the coefficient of thermal expansion and the compressibility of the material, and compute the specific heat capacity at constant volume from these data according to the laws of thermodynamics.
Units
International system
The SI unit for specific heat capacity is joule per kelvin per kilogram, J⋅K−1⋅kg−1. Since an increment of temperature of one degree Celsius is the same as an increment of one kelvin, that is the same as joule per degree Celsius per kilogram: J/(kg⋅°C). Sometimes the gram is used instead of kilogram for the unit of mass: 1 J⋅g−1⋅K−1 = 1000 J⋅kg−1⋅K−1.
The specific heat capacity of a substance (per unit of mass) has dimension L2⋅Θ−1⋅T−2, or (L/T)2/Θ. Therefore, the SI unit J⋅kg−1⋅K−1 is equivalent to metre squared per second squared per kelvin (m2⋅K−1⋅s−2).
Imperial engineering units
Professionals in construction, civil engineering, chemical engineering, and other technical disciplines, especially in the United States, may use English Engineering units including the pound (lb = 0.45359237 kg) as the unit of mass, the degree Fahrenheit or Rankine (°R = 5/9 K, about 0.555556 K) as the unit of temperature increment, and the British thermal unit (BTU ≈ 1055.056 J) as the unit of heat.
In those contexts, the unit of specific heat capacity is the BTU per pound per degree Fahrenheit (or Rankine), with 1 BTU/(lb⋅°F) = 4186.68 J/(kg⋅K). The BTU was originally defined so that the average specific heat capacity of water would be 1 BTU/(lb⋅°F). Note the value's similarity to that of the calorie-based figure for water, 4187 J/(kg⋅°C) ≈ 4184 J/(kg⋅°C) (a difference of about 0.07%): both essentially measure the same quantity, using water as the reference substance, scaled to their systems' respective units of mass and temperature.
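The conversion factor quoted above can be reproduced from the constants given in this paragraph; a minimal sketch:

```python
# Express 1 BTU/(lb * degF) in SI units using the constants quoted above.
BTU = 1055.056        # joules per British thermal unit (approximate)
LB = 0.45359237       # kilograms per pound
DEG_F = 5.0 / 9.0     # kelvins per Fahrenheit (or Rankine) degree increment

si_value = BTU / (LB * DEG_F)      # J/(kg*K)
print(f"1 BTU/(lb*degF) ~ {si_value:.1f} J/(kg*K)")   # close to 4187
```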
Calories
In chemistry, heat amounts were often measured in calories. Confusingly, there are two common units with that name, respectively denoted cal and Cal:
the small calorie (gram-calorie, cal) is 4.184 J exactly. It was originally defined so that the specific heat capacity of liquid water would be 1 cal/(°C⋅g).
The grand calorie (kilocalorie, kilogram-calorie, food calorie, kcal, Cal) is 1000 small calories, 4184 J exactly. It was defined so that the specific heat capacity of water would be 1 Cal/(°C⋅kg).
While these units are still used in some contexts (such as kilogram calorie in nutrition), their use is now deprecated in technical and scientific fields. When heat is measured in these units, the unit of specific heat capacity is usually:
Note that while a cal is 1/1000 of a Cal or kcal, it is also defined per gram instead of per kilogram: ergo, in either unit, the specific heat capacity of water is approximately 1.
Physical basis
The temperature of a sample of a substance reflects the average kinetic energy of its constituent particles (atoms or molecules) relative to its center of mass. However, not all energy provided to a sample of a substance will go into raising its temperature, exemplified via the equipartition theorem.
Monatomic gases
Statistical mechanics predicts that at room temperature and ordinary pressures, an isolated atom in a gas cannot store any significant amount of energy except in the form of kinetic energy, unless multiple electronic states are accessible at room temperature (such is the case for atomic fluorine). Thus, the heat capacity per mole at room temperature is the same for all of the noble gases as well as for many other atomic vapors. More precisely, cV,m = 3R/2 ≈ 12.5 J⋅K−1⋅mol−1 and cP,m = 5R/2 ≈ 20.8 J⋅K−1⋅mol−1, where R is the ideal gas constant (which is the product of the Boltzmann constant, converting from the kelvin microscopic energy unit to the macroscopic energy unit joule, and the Avogadro number).
Therefore, the specific heat capacity (per gram, not per mole) of a monatomic gas will be inversely proportional to its (adimensional) atomic weight A. That is, approximately, cV ≈ (12470 J⋅K−1⋅kg−1)/A and cP ≈ (20785 J⋅K−1⋅kg−1)/A.
For the noble gases, from helium to xenon, these computed values range from about 3100 J⋅K−1⋅kg−1 at constant volume for helium down to about 95 J⋅K−1⋅kg−1 for xenon, as in the sketch below.
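A short script can generate those values from the relation above; the atomic weights used are assumed rounded figures, and the results are equipartition estimates rather than tabulated measurements.

```python
# Specific heat capacities of monatomic noble gases from
# c_v = (3/2) R / M and c_p = (5/2) R / M.
R = 8.314  # J/(mol*K)

atomic_weights = {"He": 4.003, "Ne": 20.18, "Ar": 39.95,
                  "Kr": 83.80, "Xe": 131.29}   # g/mol (assumed, rounded)

for gas, A in atomic_weights.items():
    M = A / 1000.0              # molar mass, kg/mol
    cv = 1.5 * R / M            # J/(kg*K)
    cp = 2.5 * R / M            # J/(kg*K)
    print(f"{gas}: c_v ~ {cv:6.0f}   c_p ~ {cp:6.0f}   J/(kg*K)")
```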
Polyatomic gases
On the other hand, a polyatomic gas molecule (consisting of two or more atoms bound together) can store heat energy in additional degrees of freedom. Its kinetic energy contributes to the heat capacity in the same way as monatomic gases, but there are also contributions from the rotations of the molecule and vibration of the atoms relative to each other (including internal potential energy).
There may also be contributions to the heat capacity from excited electronic states for molecules where the energy gap between the ground state and the excited state is sufficiently small, such as . For a few systems, quantum spin statistics can also be important contributions to the heat capacity, even at room temperature. The analysis of the heat capacity of due to ortho/para separation, which arises from nuclear spin statistics, has been referred to as "one of the great triumphs of post-quantum mechanical statistical mechanics."
These extra degrees of freedom or "modes" contribute to the specific heat capacity of the substance. Namely, when heat energy is injected into a gas with polyatomic molecules, only part of it will go into increasing their kinetic energy, and hence the temperature; the rest will go into the other degrees of freedom. To achieve the same increase in temperature, more heat energy is needed for a gram of that substance than for a gram of a monatomic gas. Thus, the specific heat capacity per mole of a polyatomic gas depends both on the molecular mass and the number of degrees of freedom of the molecules.
Quantum statistical mechanics predicts that each rotational or vibrational mode can only take or lose energy in certain discrete amounts (quanta), and that this affects the system’s thermodynamic properties. Depending on the temperature, the average heat energy per molecule may be too small compared to the quanta needed to activate some of those degrees of freedom. Those modes are said to be "frozen out". In that case, the specific heat capacity of the substance increases with temperature, sometimes in a step-like fashion as each mode becomes unfrozen and starts absorbing part of the input heat energy.
For example, the molar heat capacity of nitrogen at constant volume is 20.6 J⋅K−1⋅mol−1 (at 15 °C, 1 atm), which is close to 5R/2. That is the value expected from the Equipartition Theorem if each molecule had 5 kinetic degrees of freedom. These turn out to be three degrees of the molecule's velocity vector, plus two degrees from its rotation about an axis through the center of mass and perpendicular to the line of the two atoms. Because of those two extra degrees of freedom, the specific heat capacity of nitrogen (736 J⋅K−1⋅kg−1) is greater than that of a hypothetical monatomic gas with the same molecular mass 28 (445 J⋅K−1⋅kg−1), by a factor of 5/3. The vibrational and electronic degrees of freedom do not contribute significantly to the heat capacity in this case, due to the relatively large energy level gaps for both vibrational and electronic excitation in this molecule.
This value for the specific heat capacity of nitrogen is practically constant from below −150 °C to about 300 °C. In that temperature range, the two additional degrees of freedom that correspond to vibrations of the atoms, stretching and compressing the bond, are still "frozen out". At about that temperature, those modes begin to "un-freeze" as vibrationally excited states become accessible. As a result starts to increase rapidly at first, then slower as it tends to another constant value. It is 35.5 J⋅K−1⋅mol−1 at 1500 °C, 36.9 at 2500 °C, and 37.5 at 3500 °C. The last value corresponds almost exactly to the value predicted by the Equipartition Theorem, since in the high-temperature limit the theorem predicts that the vibrational degree of freedom contributes twice as much to the heat capacity as any one of the translational or rotational degrees of freedom.
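The comparison made above between nitrogen and a hypothetical monatomic gas of the same molar mass can be reproduced with a short equipartition estimate; the computed figure comes out slightly above the measured 736 J⋅K−1⋅kg−1 quoted in the text, as expected for an idealized calculation.

```python
# Diatomic nitrogen (5 quadratic degrees of freedom) versus a hypothetical
# monatomic gas of the same molar mass (28 g/mol), per the discussion above.
R = 8.314      # J/(mol*K)
M = 0.028      # kg/mol (assumed round value)

cv_diatomic = 2.5 * R / M     # ~742 J/(kg*K); the measured value quoted above is ~736
cv_monatomic = 1.5 * R / M    # ~445 J/(kg*K)

print(cv_diatomic, cv_monatomic, cv_diatomic / cv_monatomic)  # ratio = 5/3
```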
Derivations of heat capacity
Relation between specific heat capacities
Starting from the fundamental thermodynamic relation one can show,
where
α is the coefficient of thermal expansion,
βT is the isothermal compressibility, and
ρ is density.
A derivation is discussed in the article Relations between specific heats.
For an ideal gas, if ρ is expressed as molar density in the above equation, this equation reduces simply to Mayer's relation,
where CP,m and CV,m are intensive property heat capacities expressed on a per mole basis at constant pressure and constant volume, respectively.
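For reference, the relations summarized in this subsection can be sketched as follows, using the symbols α, βT and ρ introduced above; the exact displayed form in the original may differ.

```latex
% General relation between the two specific heats, and its ideal-gas
% (per-mole) limit, Mayer's relation.
c_p - c_v \;=\; \frac{\alpha^{2}\, T}{\rho\, \beta_T},
\qquad\qquad
C_{P,\mathrm{m}} - C_{V,\mathrm{m}} \;=\; R \quad \text{(ideal gas)} .
```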
Specific heat capacity
The specific heat capacity of a material on a per mass basis is
which in the absence of phase transitions is equivalent to
where
is the heat capacity of a body made of the material in question,
is the mass of the body,
is the volume of the body, and
is the density of the material.
For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, ) or isochoric (constant volume, ) processes. The corresponding specific heat capacities are expressed as
A related parameter is the volumetric heat capacity, the heat capacity per unit volume (the mass-specific heat capacity multiplied by the density). In engineering practice, the symbol cv for solids or liquids often signifies a volumetric heat capacity, rather than a constant-volume one. In such cases, the mass-specific heat capacity is often explicitly written with the subscript m, as cm. Of course, from the above relationships, for solids the volumetric and mass-specific values differ only by the factor of the density.
For pure homogeneous chemical compounds with an established molecular or molar mass, or for which a molar quantity is established, heat capacity as an intensive property can be expressed on a per mole basis instead of a per mass basis by the following equations analogous to the per mass equations:
where n = number of moles in the body or thermodynamic system. One may refer to such a per mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per-mass basis.
Polytropic heat capacity
The polytropic heat capacity is calculated for processes in which all the thermodynamic properties (pressure, volume, temperature) change.
The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index lies between 1 and the adiabatic exponent (γ or κ).
Dimensionless heat capacity
The dimensionless heat capacity of a material is
where
C is the heat capacity of a body made of the material in question (J/K)
n is the amount of substance in the body (mol)
R is the gas constant (J⋅K−1⋅mol−1)
N is the number of molecules in the body. (dimensionless)
kB is the Boltzmann constant (J⋅K−1)
Again, SI units shown for example.
Read more about the quantities of dimension one at BIPM
In the Ideal gas article, dimensionless heat capacity is expressed as .
Heat capacity at absolute zero
From the definition of entropy
the absolute entropy can be calculated by integrating from zero kelvin to the final temperature Tf
The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, thus violating the third law of thermodynamics. One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached.
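A sketch of the integral referred to above and of why the heat capacity must vanish at absolute zero (standard notation assumed):

```latex
S(T_f) \;=\; \int_{0}^{T_f} \frac{C(T)}{T}\, \mathrm{d}T .
% If C approached a nonzero constant as T -> 0, the integrand would behave
% like 1/T and the entropy would diverge. In the Debye model C ~ T^3 at low
% temperature, so the integrand ~ T^2 and the integral stays finite.
```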
Solid phase
The theoretical maximum heat capacity for larger and larger multi-atomic gases at higher temperatures, also approaches the Dulong–Petit limit of 3R, so long as this is calculated per mole of atoms, not molecules. The reason is that gases with very large molecules, in theory have almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity contribution that comes from potential energy that cannot be stored between separate molecules in a gas.
The Dulong–Petit limit results from the equipartition theorem, and as such is only valid in the classical limit of a microstate continuum, which is a high temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature, quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3R per mole of atoms in the solid, although in molecular solids, heat capacities calculated per mole of molecules may be more than 3R. For example, the heat capacity of water ice at the melting point is about 4.6R per mole of molecules, but only 1.5R per mole of atoms. The lower than 3R number "per atom" (as is the case with diamond and beryllium) results from the “freezing out” of possible vibration modes for light atoms at suitably low temperatures, just as in many low-mass-atom gases at room temperatures. Because of high crystal binding energies, these effects are seen in solids more often than liquids: for example the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3R per mole of atoms of the Dulong–Petit theoretical maximum.
For a more modern and precise analysis of the heat capacities of solids, especially at low temperatures, it is useful to use the idea of phonons. See Debye model.
Theoretical estimation
The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, R is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.
Water (liquid): CP = 4185.5 J⋅K−1⋅kg−1 (15 °C, 101.325 kPa)
Water (liquid): CVH = 74.539 J⋅K−1⋅mol−1 (25 °C)
For liquids and gases, it is important to know the pressure to which given heat capacity data refer. Most published data are given for standard pressure. However, different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (≈750.062 Torr).
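The Dulong–Petit estimate mentioned above (about 3R per mole of atoms) translates directly into rough specific heat capacities for heavy-element solids; the molar masses below are assumed round values and the results are order-of-magnitude estimates only.

```python
# Rough Dulong-Petit estimates: c ~ 3R / (molar mass).
R = 8.314  # J/(mol*K)

for element, molar_mass_g in [("iron", 55.85), ("copper", 63.55),
                              ("lead", 207.2)]:
    c = 3 * R / (molar_mass_g / 1000.0)   # J/(kg*K)
    print(f"{element}: ~{c:.0f} J/(kg*K)")   # iron comes out near the ~449 quoted earlier
```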
Relation between heat capacities
Measuring the specific heat capacity at constant volume can be prohibitively difficult for liquids and solids. That is, small temperature changes typically require large pressures to maintain a liquid or solid at constant volume, implying that the containing vessel must be nearly rigid or at least very strong (see coefficient of thermal expansion and compressibility). Instead, it is easier to measure the heat capacity at constant pressure (allowing the material to expand or contract freely) and solve for the heat capacity at constant volume using mathematical relationships derived from the basic thermodynamic laws.
The heat capacity ratio, or adiabatic index, is the ratio of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.
Ideal gas
For an ideal gas, evaluating the partial derivatives above according to the equation of state, where R is the gas constant, for an ideal gas
Substituting
this equation reduces simply to Mayer's relation:
The differences in heat capacities as defined by the above Mayer relation is only exact for an ideal gas and would be different for any real gas.
Specific heat capacity
The specific heat capacity of a material on a per mass basis is
which in the absence of phase transitions is equivalent to
where
is the heat capacity of a body made of the material in question,
is the mass of the body,
is the volume of the body,
is the density of the material.
For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, ) or isochoric (constant volume, ) processes. The corresponding specific heat capacities are expressed as
From the results of the previous section, dividing through by the mass gives the relation
A related parameter is the volumetric heat capacity, the heat capacity per unit volume. In engineering practice, the symbol cv for solids or liquids often signifies a volumetric heat capacity, rather than a constant-volume one. In such cases, the specific heat capacity is often explicitly written with the subscript m, as cm. Of course, from the above relationships, the volumetric heat capacity of a solid is the mass-specific heat capacity multiplied by the density.
For pure homogeneous chemical compounds with established molecular or molar mass, or a molar quantity, heat capacity as an intensive property can be expressed on a per-mole basis instead of a per-mass basis by the following equations analogous to the per mass equations:
where n is the number of moles in the body or thermodynamic system. One may refer to such a per-mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per-mass basis.
Polytropic heat capacity
The polytropic heat capacity is calculated for processes in which all the thermodynamic properties (pressure, volume, temperature) change:
The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index lies between 1 and the adiabatic exponent (γ or κ).
Dimensionless heat capacity
The dimensionless heat capacity of a material is
where
is the heat capacity of a body made of the material in question (J/K),
n is the amount of substance in the body (mol),
R is the gas constant (J/(K⋅mol)),
N is the number of molecules in the body (dimensionless),
kB is the Boltzmann constant (J/(K⋅molecule)).
In the ideal gas article, dimensionless heat capacity is expressed as and is related there directly to half the number of degrees of freedom per particle. This holds true for quadratic degrees of freedom, a consequence of the equipartition theorem.
More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle , measured in nats.
Alternatively, using base-2 logarithms, relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits.
Heat capacity at absolute zero
From the definition of entropy
the absolute entropy can be calculated by integrating from zero to the final temperature Tf:
Thermodynamic derivation
In theory, the specific heat capacity of a substance can also be derived from its abstract thermodynamic modeling by an equation of state and an internal energy function.
State of matter in a homogeneous sample
To apply the theory, one considers the sample of the substance (solid, liquid, or gas) for which the specific heat capacity can be defined; in particular, that it has homogeneous composition and fixed mass m. Assume that the evolution of the system is always slow enough for the internal pressure and temperature to be considered uniform throughout. The pressure would be equal to the pressure applied to it by the enclosure or some surrounding fluid, such as air.
The state of the material can then be specified by three parameters: its temperature , the pressure , and its specific volume , where is the volume of the sample. (This quantity is the reciprocal of the material's density .) Like and , the specific volume is an intensive property of the material and its state, that does not depend on the amount of substance in the sample.
Those variables are not independent. The allowed states are defined by an equation of state relating those three variables: The function depends on the material under consideration. The specific internal energy stored internally in the sample, per unit of mass, will then be another function of these state variables, that is also specific of the material. The total internal energy in the sample then will be .
For some simple materials, like an ideal gas, one can derive from basic theory the equation of state and even the specific internal energy In general, these functions must be determined experimentally for each substance.
Conservation of energy
The absolute value of this quantity is undefined, and (for the purposes of thermodynamics) the state of "zero internal energy" can be chosen arbitrarily. However, by the law of conservation of energy, any infinitesimal increase in the total internal energy must be matched by the net flow of heat energy into the sample, plus any net mechanical energy provided to it by the enclosure or surrounding medium. The latter is −P dV, where dV is the change in the sample's volume in that infinitesimal step. Therefore
hence
If the volume of the sample (hence the specific volume of the material) is kept constant during the injection of the heat amount , then the term is zero (no mechanical work is done). Then, dividing by ,
where is the change in temperature that resulted from the heat input. The left-hand side is the specific heat capacity at constant volume of the material.
For the heat capacity at constant pressure, it is useful to define the specific enthalpy of the system as the sum . An infinitesimal change in the specific enthalpy will then be
therefore
If the pressure is kept constant, the second term on the left-hand side is zero, and
The left-hand side is the specific heat capacity at constant pressure of the material.
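The two results of this derivation can be summarized compactly; the symbols below (u for specific internal energy, h for specific enthalpy, ν for specific volume) are assumed notation consistent with the description above.

```latex
c_V \;=\; \left( \frac{\partial u}{\partial T} \right)_{\nu},
\qquad
c_P \;=\; \left( \frac{\partial h}{\partial T} \right)_{P},
\qquad
h \;=\; u + P\,\nu .
```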
Connection to equation of state
In general, the infinitesimal quantities are constrained by the equation of state and the specific internal energy function. Namely,
Here denotes the (partial) derivative of the state equation with respect to its argument, keeping the other two arguments fixed, evaluated at the state in question. The other partial derivatives are defined in the same way. These two equations on the four infinitesimal increments normally constrain them to a two-dimensional linear subspace of possible infinitesimal state changes, that depends on the material and on the state. The constant-volume and constant-pressure changes are only two particular directions in this space.
This analysis also holds no matter how the energy increment is injected into the sample, namely by heat conduction, irradiation, electromagnetic induction, radioactive decay, etc.
Relation between heat capacities
For any specific volume , denote the function that describes how the pressure varies with the temperature , as allowed by the equation of state, when the specific volume of the material is forcefully kept constant at . Analogously, for any pressure , let be the function that describes how the specific volume varies with the temperature, when the pressure is kept constant at . Namely, those functions are such that
and
for any values of . In other words, the graphs of and are slices of the surface defined by the state equation, cut by planes of constant and constant , respectively.
Then, from the fundamental thermodynamic relation it follows that
This equation can be rewritten as
where
α is the coefficient of thermal expansion,
βT is the isothermal compressibility,
both depending on the state.
The heat capacity ratio, or adiabatic index, is the ratio of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.
Calculation from first principles
The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, R is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below. However, attention should be made for the consistency of such ab-initio considerations when used along with an equation of state for the considered material.
Ideal gas
For an ideal gas, evaluating the partial derivatives above according to the equation of state, where R is the gas constant, for an ideal gas
Substituting
this equation reduces simply to Mayer's relation:
The differences in heat capacities as defined by the above Mayer relation is only exact for an ideal gas and would be different for any real gas.
See also
Specific heat of melting (Enthalpy of fusion)
Specific heat of vaporization (Enthalpy of vaporization)
Frenkel line
Heat capacity ratio
Heat equation
Heat transfer coefficient
History of thermodynamics
Joback method (Estimation of heat capacities)
Latent heat
Material properties (thermodynamics)
Quantum statistical mechanics
R-value (insulation)
Enthalpy of vaporization
Enthalpy of fusion
Statistical mechanics
Table of specific heat capacities
Thermal mass
Thermodynamic databases for pure substances
Thermodynamic equations
Volumetric heat capacity
Notes
References
Further reading
Emmerich Wilhelm & Trevor M. Letcher, Eds., 2010, Heat Capacities: Liquids, Solutions and Vapours, Cambridge, U.K.:Royal Society of Chemistry, . A very recent outline of selected traditional aspects of the title subject, including a recent specialist introduction to its theory, Emmerich Wilhelm, "Heat Capacities: Introduction, Concepts, and Selected Applications" (Chapter 1, pp. 1–27), chapters on traditional and more contemporary experimental methods such as photoacoustic methods, e.g., Jan Thoen & Christ Glorieux, "Photothermal Techniques for Heat Capacities," and chapters on newer research interests, including on the heat capacities of proteins and other polymeric systems (Chs. 16, 15), of liquid crystals (Ch. 17), etc.
External links
(2012-05-24) Phonon theory sheds light on liquid thermodynamics, heat capacity – Physics World The phonon theory of liquid thermodynamics | Scientific Reports
Physical quantities
Thermodynamic properties | Specific heat capacity | [
"Physics",
"Chemistry",
"Mathematics"
] | 7,211 | [
"Thermodynamic properties",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Physical properties"
] |
28,437 | https://en.wikipedia.org/wiki/Simple%20harmonic%20motion | In mechanics and physics, simple harmonic motion (sometimes abbreviated as SHM) is a special type of periodic motion that an object experiences by means of a restoring force whose magnitude is directly proportional to the distance of the object from an equilibrium position and acts towards the equilibrium position. It results in an oscillation that is described by a sinusoid which continues indefinitely (if uninhibited by friction or any other dissipation of energy).
Simple harmonic motion can serve as a mathematical model for a variety of motions, but is typified by the oscillation of a mass on a spring when it is subject to the linear elastic restoring force given by Hooke's law. The motion is sinusoidal in time and demonstrates a single resonant frequency. Other phenomena can be modeled by simple harmonic motion, including the motion of a simple pendulum, although for it to be an accurate model, the net force on the object at the end of the pendulum must be proportional to the displacement (and even so, it is only a good approximation when the angle of the swing is small; see small-angle approximation). Simple harmonic motion can also be used to model molecular vibration.
Simple harmonic motion provides a basis for the characterization of more complicated periodic motion through the techniques of Fourier analysis.
Introduction
The motion of a particle moving along a straight line with an acceleration whose direction is always toward a fixed point on the line and whose magnitude is proportional to the displacement from the fixed point is called simple harmonic motion.
In the diagram, a simple harmonic oscillator, consisting of a weight attached to one end of a spring, is shown. The other end of the spring is connected to a rigid support such as a wall. If the system is left at rest at the equilibrium position then there is no net force acting on the mass. However, if the mass is displaced from the equilibrium position, the spring exerts a restoring elastic force that obeys Hooke's law.
Mathematically,
where is the restoring elastic force exerted by the spring (in SI units: N), is the spring constant (N·m−1), and is the displacement from the equilibrium position (in metres).
For any simple mechanical harmonic oscillator:
When the system is displaced from its equilibrium position, a restoring force that obeys Hooke's law tends to restore the system to equilibrium.
Once the mass is displaced from its equilibrium position, it experiences a net restoring force. As a result, it accelerates and starts going back to the equilibrium position. When the mass moves closer to the equilibrium position, the restoring force decreases. At the equilibrium position, the net restoring force vanishes. However, at the equilibrium position, the mass has momentum because of the acceleration that the restoring force has imparted. Therefore, the mass continues past the equilibrium position, compressing the spring. A net restoring force then slows it down until its velocity reaches zero, whereupon it is accelerated back to the equilibrium position again.
As long as the system has no energy loss, the mass continues to oscillate. Thus simple harmonic motion is a type of periodic motion. If energy is lost in the system, then the mass exhibits damped oscillation.
Note if the real space and phase space plot are not co-linear, the phase space motion becomes elliptical. The area enclosed depends on the amplitude and the maximum momentum.
Dynamics
In Newtonian mechanics, for one-dimensional simple harmonic motion, the equation of motion, which is a second-order linear ordinary differential equation with constant coefficients, can be obtained by means of Newton's second law and Hooke's law for a mass on a spring.
where is the inertial mass of the oscillating body, is its displacement from the equilibrium (or mean) position, and is a constant (the spring constant for a mass on a spring).
Therefore,
Solving the differential equation above produces a solution that is a sinusoidal function:
where
The meaning of the constants c1 and c2 can be easily found: setting t = 0 in the equation above we see that x(0) = c1, so that c1 is the initial position of the particle, x0; taking the derivative of that equation and evaluating at t = 0 we get that v(0) = ωc2, so that c2 is the initial speed of the particle divided by the angular frequency, v0/ω. Thus we can write:
This equation can also be written in the form:
where
or equivalently
In the solution, and are two constants determined by the initial conditions (specifically, the initial position at time is , while the initial velocity is ), and the origin is set to be the equilibrium position. Each of these constants carries a physical meaning of the motion: is the amplitude (maximum displacement from the equilibrium position), is the angular frequency, and is the initial phase.
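A small numerical check of this solution can be useful: the sketch below integrates the mass-spring equation directly and compares the result with A cos(ωt + φ). The parameter values are arbitrary and the integration scheme is a simple semi-implicit Euler step, chosen only for illustration.

```python
import math

m, k = 1.0, 4.0                 # mass (kg) and spring constant (N/m), arbitrary
omega = math.sqrt(k / m)        # angular frequency, rad/s
x, v = 1.0, 0.0                 # initial displacement (m) and velocity (m/s)

# Amplitude and initial phase from the initial conditions.
A = math.sqrt(x**2 + (v / omega) ** 2)
phi = math.atan2(-v / omega, x)

dt, t = 1e-4, 0.0
while t < 5.0:                  # semi-implicit Euler integration of x'' = -(k/m) x
    v += -(k / m) * x * dt
    x += v * dt
    t += dt

analytic = A * math.cos(omega * t + phi)
print(f"numeric x(t) = {x:.4f}, analytic x(t) = {analytic:.4f}")
```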
Using the techniques of calculus, the velocity and acceleration as a function of time can be found:
Speed:
Maximum speed: (at equilibrium point)
Maximum acceleration: (at extreme points)
By definition, if a mass is under SHM its acceleration is directly proportional to displacement.
where ω² = k/m.
Since ω = 2πf, the frequency is f = (1/2π)√(k/m),
and, since T = 1/f where T is the time period, the period is T = 2π√(m/k).
These equations demonstrate that the simple harmonic motion is isochronous (the period and frequency are independent of the amplitude and the initial phase of the motion).
Energy
Substituting with , the kinetic energy of the system at time is
and the potential energy is
In the absence of friction and other energy loss, the total mechanical energy has a constant value
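The kinetic, potential, and total energies referred to above take the following standard form for x(t) = A cos(ωt + φ); the notation matches the rest of this section and k = mω² is assumed.

```latex
\begin{aligned}
K(t) &= \tfrac{1}{2} m \dot{x}^{2}
      = \tfrac{1}{2} m \omega^{2} A^{2} \sin^{2}(\omega t + \varphi), \\
U(t) &= \tfrac{1}{2} k x^{2}
      = \tfrac{1}{2} k A^{2} \cos^{2}(\omega t + \varphi), \\
E    &= K + U = \tfrac{1}{2} k A^{2} .
\end{aligned}
```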
Examples
The following physical systems are some examples of simple harmonic oscillator.
Mass on a spring
A mass attached to a spring of spring constant exhibits simple harmonic motion in closed space. The equation for describing the period:
shows the period of oscillation is independent of the amplitude, though in practice the amplitude should be small. The above equation is also valid in the case when an additional constant force is being applied on the mass, i.e. the additional constant force cannot change the period of oscillation.
Uniform circular motion
Simple harmonic motion can be considered the one-dimensional projection of uniform circular motion. If an object moves with angular speed around a circle of radius centered at the origin of the -plane, then its motion along each coordinate is simple harmonic motion with amplitude and angular frequency .
Oscillatory motion
The motion of a body in which it moves to and fro about a definite point is also called oscillatory motion or vibratory motion. The time period can be calculated by
where l is the distance from the point of rotation to the centre of mass of the object undergoing SHM and g is the gravitational acceleration. This is analogous to the mass-spring system.
Mass of a simple pendulum
In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length with gravitational acceleration is given by
This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity, , therefore a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of varies slightly over the surface of the earth, the time period will vary slightly from place to place and will also vary with height above sea level.
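The g-dependence described above is easy to make concrete; the sketch below uses approximate surface gravities for the Earth and the Moon and assumes the small-angle formula T = 2π√(l/g).

```python
import math

l = 1.0               # pendulum length, m
gravities = {"Earth": 9.81, "Moon": 1.62}   # m/s^2 (approximate)

for body, g in gravities.items():
    T = 2 * math.pi * math.sqrt(l / g)      # small-angle period
    print(f"{body}: T ~ {T:.2f} s")          # the Moon pendulum swings more slowly
```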
This approximation is accurate only for small angles because of the expression for angular acceleration being proportional to the sine of the displacement angle:
where is the moment of inertia. When is small, and therefore the expression becomes
which makes angular acceleration directly proportional and opposite to , satisfying the definition of simple harmonic motion (that net force is directly proportional to the displacement from the mean position and is directed towards the mean position).
Scotch yoke
A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion. The linear motion can take various forms depending on the shape of the slot, but the basic yoke with a constant rotation speed produces a linear motion that is simple harmonic in form.
See also
Notes
References
External links
Simple Harmonic Motion from HyperPhysics
Java simulation of spring-mass oscillator
Geogebra applet for spring-mass, with 3 attached PDFs on SHM, driven/damped oscillators, spring-mass with friction
Classical mechanics
Motion (physics)
Pendulums | Simple harmonic motion | [
"Physics"
] | 1,677 | [
"Physical phenomena",
"Classical mechanics",
"Motion (physics)",
"Mechanics",
"Space",
"Spacetime"
] |
28,462 | https://en.wikipedia.org/wiki/Sudbury%20Neutrino%20Observatory | The Sudbury Neutrino Observatory (SNO) was a neutrino observatory located 2100 m underground in Vale's Creighton Mine in Sudbury, Ontario, Canada. The detector was designed to detect solar neutrinos through their interactions with a large tank of heavy water.
The detector was turned on in May 1999, and was turned off on 28 November 2006. The SNO collaboration was active for several years after that analyzing the data taken.
The director of the experiment, Art McDonald, was co-awarded the Nobel Prize in Physics in 2015 for the experiment's contribution to the discovery of neutrino oscillation.
The underground laboratory has been enlarged into a permanent facility and now operates multiple experiments as SNOLAB. The SNO equipment itself was being refurbished for use in the SNO+ experiment.
Experimental motivation
The first measurements of the number of solar neutrinos reaching the Earth were taken in the 1960s, and all experiments prior to SNO observed a third to a half fewer neutrinos than were predicted by the Standard Solar Model. As several experiments confirmed this deficit the effect became known as the solar neutrino problem. Over several decades many ideas were put forward to try to explain the effect, one of which was the hypothesis of neutrino oscillations. All of the solar neutrino detectors prior to SNO had been sensitive primarily or exclusively to electron neutrinos and yielded little to no information on muon neutrinos and tau neutrinos.
In 1984, Herb Chen of the University of California at Irvine first pointed out the advantages of using heavy water as a detector for solar neutrinos. Unlike previous detectors, using heavy water would make the detector sensitive to two reactions, one reaction sensitive to all neutrino flavours, the other reaction sensitive to only electron neutrino. Thus, such a detector could measure neutrino oscillations directly. A location in Canada was attractive because Atomic Energy of Canada Limited, which maintains large stockpiles of heavy water to support its CANDU reactor power plants, was willing to lend the necessary amount (worth at market prices) at no cost.
The Creighton Mine in Sudbury is among the deepest in the world and, accordingly, experiences a very small background flux of radiation. It was quickly identified as an ideal place for Chen's proposed experiment to be built, and the mine management was willing to make the location available for only incremental costs.
The SNO collaboration held its first meeting in 1984. At the time it competed with TRIUMF's KAON Factory proposal for federal funding, and the wide variety of universities backing SNO quickly led to it being selected for development. The official go-ahead was given in 1990.
The experiment observed the light produced by relativistic electrons in the water created by neutrino interactions. As relativistic electrons travel through a medium, they lose energy producing a cone of blue light through the Cherenkov effect, and it is this light that is directly detected.
Detector description
The SNO detector target consisted of of heavy water contained in a acrylic vessel. The detector cavity outside the vessel was filled with normal water to provide both buoyancy for the vessel and radiation shielding. The heavy water was viewed by approximately 9,600 photomultiplier tubes (PMTs) mounted on a geodesic sphere at a radius of about . The cavity housing the detector was the largest in the world at such a depth, requiring a variety of high-performance rock bolting techniques to prevent rock bursts.
The observatory is located at the end of a drift, named the "SNO drift", isolating it from other mining operations. Along the drift are a number of operations and equipment rooms, all held in a clean room setting. Most of the facility is Class 3000 (fewer than 3,000 particles of 1 μm or larger per 1 ft3 of air) but the final cavity containing the detector is an even stricter Class 100.
Charged current interaction
In the charged current interaction, a neutrino converts the neutron in a deuteron to a proton. The neutrino is absorbed in the reaction and an electron is produced. Solar neutrinos have energies smaller than the mass of muons and tau leptons, so only electron neutrinos can participate in this reaction. The emitted electron carries off most of the neutrino's energy, on the order of 5–15 MeV, and is detectable. The proton which is produced does not have enough energy to be detected easily. The electrons produced in this reaction are emitted in all directions, but there is a slight tendency for them to point back in the direction from which the neutrino came.
Neutral current interaction
In the neutral current interaction, a neutrino dissociates the deuteron, breaking it into its constituent neutron and proton. The neutrino continues on with slightly less energy, and all three neutrino flavours are equally likely to participate in this interaction. Heavy water has a small cross section for neutrons, but when neutrons are captured by a deuterium nucleus, a gamma ray (photon) with roughly 6 MeV of energy is produced. The direction of the gamma ray is completely uncorrelated with the direction of the neutrino. Some of the neutrons produced from the dissociated deuterons make their way through the acrylic vessel into the light water jacket surrounding the heavy water, and since light water has a very large cross section for neutron capture, these neutrons are captured very quickly. Gamma rays of roughly 2.2 MeV are produced in this reaction, but because the energy of the photons is less than the detector's energy threshold (meaning they do not trigger the photomultipliers), they are not directly observable. However, when the gamma ray collides with an electron via Compton scattering, the accelerated electron can be detected through Cherenkov radiation.
Electron elastic scattering
In the elastic scattering interaction, a neutrino collides with an atomic electron and imparts some of its energy to the electron. All three neutrinos can participate in this interaction through the exchange of the neutral Z boson, and electron neutrinos can also participate with the exchange of a charged W boson. For this reason this interaction is dominated by electron neutrinos, and this is the channel through which the Super-Kamiokande (Super-K) detector can observe solar neutrinos. This interaction is the relativistic equivalent of billiards, and for this reason the electrons produced usually point in the direction that the neutrino was travelling (away from the sun). Because this interaction takes place on atomic electrons it occurs with the same rate in both the heavy and light water.
Experimental results and impact
The first scientific results of SNO were published on 18 June 2001, and presented the first clear evidence that neutrinos oscillate (i.e. that they can transmute into one another), as they travel from the Sun. This oscillation, in turn, implies that neutrinos have non-zero masses. The total flux of all neutrino flavours measured by SNO agrees well with theoretical predictions. Further measurements carried out by SNO have since confirmed and improved the precision of the original result.
Although Super-K had beaten SNO to the punch, having published evidence for neutrino oscillation as early as 1998, the Super-K results were not conclusive and did not specifically deal with solar neutrinos. SNO's results were the first to directly demonstrate oscillations in solar neutrinos. This was important to the standard solar model. In 2007, the Franklin Institute awarded the director of SNO Art McDonald with the Benjamin Franklin Medal in Physics. In 2015 the Nobel Prize for Physics was jointly awarded to Arthur B. McDonald, and Takaaki Kajita of the University of Tokyo, for the discovery of neutrino oscillations.
Other possible analyses
The SNO detector would have been capable of detecting a supernova within our galaxy if one had occurred while the detector was online. As neutrinos emitted by a supernova are released earlier than the photons, it is possible to alert the astronomical community before the supernova is visible. SNO was a founding member of the Supernova Early Warning System (SNEWS) with Super-Kamiokande and the Large Volume Detector. No such supernovae have yet been detected.
The SNO experiment was also able to observe atmospheric neutrinos produced by cosmic ray interactions in the atmosphere. Due to the limited size of the SNO detector in comparison with Super-K, the low cosmic ray neutrino signal is not statistically significant at neutrino energies below 1 GeV.
Participating institutions
Large particle physics experiments require large collaborations. With approximately 100 collaborators, SNO was a rather small group compared to collider experiments. The participating institutions have included:
Canada
Carleton University
Laurentian University
Queen's University – designed and built many calibration sources and the device for deploying sources
TRIUMF
University of British Columbia
University of Guelph
Although no longer a collaborating institution, Chalk River Laboratories led the construction of the acrylic vessel that holds the heavy water, and Atomic Energy of Canada Limited was the source of the heavy water.
United Kingdom
University of Oxford – developed much of the experiment's Monte Carlo analysis program (SNOMAN), and maintained the program
University of Sussex – calibration
United States
Lawrence Berkeley National Laboratory (LBNL) – Led the construction of the geodesic structure that holds the PMTs
Pacific Northwest National Laboratory (PNNL)
Los Alamos National Laboratory (LANL)
University of Pennsylvania – designed and built the front end electronics and trigger
University of Washington – designed and built proportional counter tubes for detection of neutrons in the third phase of the experiment
Brookhaven National Laboratory
University of Texas at Austin
Massachusetts Institute of Technology
Honours and awards
Asteroid 14724 SNO is named in honour of SNO.
In November 2006, the entire SNO team was awarded the inaugural John C. Polanyi Award for "a recent outstanding advance in any field of the natural sciences or engineering" conducted in Canada.
SNO principal investigator Arthur B. McDonald won the 2015 Nobel Prize in Physics, jointly with Takaaki Kajita of Super-Kamiokande, for the discovery of neutrino oscillation.
SNO was awarded the 2016 Fundamental Physics Prize along with 4 other neutrino experiments.
See also
DEAP – Dark Matter Experiment using Argon Pulse-shape at SNO location
Homestake experiment – A predecessor experiment conducted 1970–1994 in a mine at Lead, South Dakota
SNO+ – The successor of SNO
SNOLAB – A permanent underground physics laboratory being built around SNO
References
External links
SNO's official site
Joshua Klein's Introduction to SNO, Solar Neutrinos, and Penn at SNO
Showcase of Canadian Engineering Achievement: Sudbury Neutrino Observatory (IEEE Canada). Several articles about the civil engineering of SNO.
SNO experiment record on INSPIRE-HEP
Neutrino observatories
Buildings and structures in Greater Sudbury
Research institutes in Canada
Nuclear research institutes
Particle experiments
Underground laboratories
Physics beyond the Standard Model
Laboratories in Canada | Sudbury Neutrino Observatory | [
"Physics",
"Engineering"
] | 2,339 | [
"Nuclear research institutes",
"Nuclear organizations",
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
28,464 | https://en.wikipedia.org/wiki/Super-Kamiokande | Super-Kamiokande (abbreviation of Super-Kamioka Neutrino Detection Experiment, also abbreviated to Super-K or SK) is a neutrino observatory located under Mount Ikeno near the city of Hida, Gifu Prefecture, Japan. It is operated by the Institute for Cosmic Ray Research, University of Tokyo with the help of an international team. It is located 1,000 m (3,300 ft) underground in the Mozumi Mine in Hida's Kamioka area. The observatory was designed to detect high-energy neutrinos, to search for proton decay, study solar and atmospheric neutrinos, and keep watch for supernovae in the Milky Way galaxy.
Description
Super-K is located underground in the Mozumi Mine in Hida's Kamioka area. It consists of a cylindrical stainless steel tank that is tall and in diameter holding 50,220 tonnes (55,360 US tons) of ultrapure water. The tank volume is divided by a stainless steel superstructure into an inner detector (ID) region, which is in height and in diameter, and outer detector (OD) which consists of the remaining tank volume. Mounted on the superstructure are 11,146 photomultiplier tubes (PMT) in diameter that face the ID and 1,885 PMTs that face the OD. A Tyvek and blacksheet barrier attached to the superstructure optically separates the ID and OD.
A neutrino interaction with the electrons or nuclei of water can produce a charged particle that moves faster than the speed of light in water, which is slower than the speed of light in vacuum. This creates a cone of light known as Cherenkov radiation, which is the optical equivalent of a sonic boom. The Cherenkov light is projected as a ring on the wall of the detector and recorded by the PMTs. Using the timing and charge information recorded by each PMT, the interaction vertex, ring direction, and flavor of the incoming neutrino are determined. From the sharpness of the edge of the ring the type of particle can be inferred. The multiple scattering of electrons is large, so electromagnetic showers produce fuzzy rings. Highly relativistic muons, in contrast, travel almost straight through the detector and produce rings with sharp edges.
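As a rough numerical illustration of the Cherenkov condition described above (a sketch, not code from the experiment), the following Python snippet evaluates the cone angle from cos θ = 1/(nβ) and the threshold total energies for electrons and muons in water, assuming a refractive index of about 1.33.

```python
import math

N_WATER = 1.33            # approximate refractive index of water (assumed)
M_E, M_MU = 0.511, 105.7  # electron and muon masses in MeV/c^2

def cherenkov_angle_deg(beta: float, n: float = N_WATER) -> float:
    """Cherenkov cone half-angle for a particle with velocity beta = v/c."""
    if beta * n <= 1.0:
        raise ValueError("below Cherenkov threshold: beta <= 1/n")
    return math.degrees(math.acos(1.0 / (n * beta)))

def threshold_total_energy(mass_mev: float, n: float = N_WATER) -> float:
    """Minimum total energy (MeV) for Cherenkov emission: beta must exceed 1/n."""
    beta_min = 1.0 / n
    gamma_min = 1.0 / math.sqrt(1.0 - beta_min**2)
    return gamma_min * mass_mev

print(f"near-maximal Cherenkov angle in water: {cherenkov_angle_deg(0.9999):.1f} deg")
print(f"electron threshold ~ {threshold_total_energy(M_E):.2f} MeV")
print(f"muon threshold ~ {threshold_total_energy(M_MU):.0f} MeV")
```

With these assumed inputs the electron threshold comes out below 1 MeV while the muon threshold is of order 160 MeV, which is why electrons and muons produce rings only above very different energies.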
History
Construction of the predecessor of the present Kamioka Observatory of the Institute for Cosmic Ray Research, University of Tokyo, began in 1982 and was completed in April 1983. The purpose of the observatory was to determine the existence of proton decay, one of the most fundamental questions of elementary particle physics.
The detector, named KamiokaNDE for Kamioka Nucleon Decay Experiment, was a tank in height and in width, containing 3,058 tonnes (3,400 US tons) of pure water and about 1,000 photomultiplier tubes (PMTs) attached to its inner surface. The detector was upgraded, starting in 1985, to allow it to observe solar neutrinos. As a result, the detector (KamiokaNDE-II) had become sensitive enough to detect ten neutrinos from SN 1987A, a supernova which was observed in the Large Magellanic Cloud in February 1987, and to observe solar neutrinos in 1988. The ability of the Kamiokande experiment to observe the direction of electrons produced in solar neutrino interactions allowed experimenters to directly demonstrate for the first time that the Sun was a source of neutrinos.
While making discoveries in neutrino astronomy and neutrino astrophysics, Kamiokande never detected a proton decay, the primary goal for its construction. The absence of any such observation pushed back the possible half-life of any potential proton decay far enough to eliminate some of the GUT models which allow for such a decay. Other models predict a longer half-life, with rarer decays.
To increase the chance of detecting such decays, a larger detector was needed. A higher sensitivity was also necessary to obtain a higher statistical confidence in other detections. This led to the design and construction of Super-Kamiokande, with fifteen times the volume of water and ten times as many PMTs as Kamiokande.
The Super-Kamiokande project was approved by the Japanese Ministry of Education, Science, Sports and Culture in 1991 for total funding of approximately $100 million. The American portion of the proposal, which was primarily to build the OD system, was approved by the United States Department of Energy in 1993 for $3 million. In addition, the United States has also contributed about 2000 20 cm PMTs recycled from the IMB experiment.
Super-Kamiokande started operation in 1996 and announced the first evidence of neutrino oscillation in 1998. This was the first experimental observation supporting the theory that the neutrino has non-zero mass, a possibility that theorists had speculated about for years. The 2015 Nobel Prize in Physics was awarded to Super-Kamiokande researcher Takaaki Kajita alongside Arthur McDonald at the Sudbury Neutrino Observatory for their work confirming neutrino oscillation.
On 12 November 2001, about 6,600 of the photomultiplier tubes imploded in a chain reaction, as the shock wave from the concussion of each imploding tube cracked its neighbours. The detector was partially restored by redistributing the photomultiplier tubes which did not implode, and by adding protective acrylic shells intended to prevent another chain reaction (Super-Kamiokande-II).
In July 2005, preparations began to restore the detector to its original form by reinstalling about 6,000 PMTs. The work was completed in June 2006, whereupon the detector was renamed Super-Kamiokande-III. This phase of the experiment collected data from October 2006 till August 2008. At that time, significant upgrades were made to the electronics. After the upgrade, the new phase of the experiment has been referred to as Super-Kamiokande-IV. SK-IV collected data on various natural sources of neutrinos, as well as acted as the far detector for the Tokai-to-Kamioka (T2K) long baseline neutrino oscillation experiment.
SK-IV continued until June 2018. After that, the detector underwent a full refurbishment during Autumn of 2018. On 29 January 2019 the detector resumed data acquisition.
In 2020, the detector was upgraded for the SuperKGd project by adding a gadolinium salt to the ultrapure water in order to enable the detection of antineutrinos from supernova explosions.
Detector
The Super-Kamiokande (SK) is a Cherenkov detector used to study neutrinos from different sources including the Sun, supernovae, the atmosphere, and accelerators. It is also used to search for proton decay. The experiment began in April 1996 and was shut down for maintenance in July 2001, a period known as "SK-I". Because an accident occurred during the maintenance work, the experiment resumed in October 2002 with only half of its original number of ID-PMTs.
In order to prevent further accidents, all of the ID-PMTs were covered by fiber-reinforced plastic with acrylic front windows. This phase from October 2002 to another closure for an entire reconstruction in October 2005 is called "SK-II". In July 2006, the experiment resumed with the full number of PMTs and stopped in September 2008 for electronics upgrades. This period was known as "SK-III". The period after 2008 is known as "SK-IV". The phases and their main characteristics are summarised in table 1.
SK-IV upgrade
In the previous phases, the ID-PMT signals were processed by custom electronics modules called analog timing modules (ATMs). These modules contain charge-to-analog converters (QACs) and time-to-analog converters (TACs), with a dynamic range from 0 to 450 picocoulombs (pC) at 0.2 pC resolution for charge, and from −300 to 1000 ns at 0.4 ns resolution for time. There were two pairs of QAC/TAC for each PMT input signal; this prevented dead time and allowed the readout of multiple sequential hits that may arise, e.g., from electrons that are decay products of stopping muons.
The front-end electronics were upgraded in September 2008 to QTC-based electronics with Ethernet (QBEE), in order to maintain stability over the following decade and to improve the throughput of the data acquisition system. The QBEE provides high-speed signal processing by combining pipelined components. These components are a newly developed custom charge-to-time converter (QTC) in the form of an application-specific integrated circuit (ASIC), a multi-hit time-to-digital converter (TDC), and a field-programmable gate array (FPGA). Each QTC input has three gain ranges – "Small", "Medium", and "Large" – each with its own charge resolution.
For each range, analog-to-digital conversion is conducted separately, but only the highest-resolution range that is not saturated is used. The overall charge dynamic range of the QTC is 0.2–2500 pC, about five times larger than that of the old ATMs. The charge and timing resolution of the QBEE at the single-photoelectron level is 0.1 photoelectrons and 0.3 ns respectively, both better than the intrinsic resolution of the 20-in. PMTs used in SK. The QBEE achieves good charge linearity over a wide dynamic range; the integrated charge linearity of the electronics is better than 1%. The thresholds of the discriminators in the QTC are set to −0.69 mV (equivalent to 0.25 photoelectrons, the same as for SK-III). This threshold was chosen to replicate the behavior of the detector during its previous ATM-based phases.
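The "use the finest non-saturated range" logic can be illustrated with a highly simplified sketch. The per-range full scales and resolutions below are invented placeholders (the actual QBEE values appear in a table not reproduced here); only the overall 0.2–2500 pC span is taken from the text.

```python
# Hypothetical sketch of range selection; RANGES values are placeholders, not QBEE specs.
RANGES = [
    {"name": "Small",  "full_scale_pc": 51.0,   "resolution_pc": 0.1},  # assumed
    {"name": "Medium", "full_scale_pc": 357.0,  "resolution_pc": 0.7},  # assumed
    {"name": "Large",  "full_scale_pc": 2500.0, "resolution_pc": 4.9},  # assumed
]

def digitize_charge(q_pc: float) -> dict:
    """Digitize a charge (pC) using the finest range that the charge does not saturate."""
    for r in RANGES:  # ordered from finest to coarsest resolution
        if q_pc <= r["full_scale_pc"]:
            counts = round(q_pc / r["resolution_pc"])
            return {"range": r["name"], "counts": counts,
                    "charge_pc": counts * r["resolution_pc"]}
    # Beyond the largest range: report saturation at full scale.
    r = RANGES[-1]
    return {"range": r["name"],
            "counts": round(r["full_scale_pc"] / r["resolution_pc"]),
            "charge_pc": r["full_scale_pc"]}

print(digitize_charge(3.2))    # falls in the "Small" range
print(digitize_charge(800.0))  # falls in the "Large" range
```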
SuperKGd
Gadolinium was introduced into the Super-Kamiokande water tank in 2020 in order to distinguish neutrinos from antineutrinos that arise from supernova explosions. This is known as the SK-Gd project (also written SuperKGd or SUPERK-GD). In the first phase of the project, 1.3 tons of a Gd salt (gadolinium sulfate octahydrate) were added to the ultrapure water in 2020, giving 0.02% (by mass) of the salt. This amount is about a tenth of the planned final target concentration.
Nuclear fusion in the Sun and other stars turns protons into neutrons with the emission of neutrinos. Beta decay in the Earth and in supernovae turns neutrons into protons with the emission of antineutrinos. Super-Kamiokande detects electrons knocked off water molecules, which produce a flash of blue Cherenkov light, and such events are produced by both neutrinos and antineutrinos. A rarer instance is when an antineutrino interacts with a proton in water to produce a neutron and a positron.
Gadolinium has an affinity for neutrons and produces a bright flash of gamma rays when it absorbs one. Adding gadolinium to Super-Kamiokande allows it to distinguish between neutrinos and antineutrinos. Antineutrinos produce a double flash of light about 30 microseconds apart, first when the antineutrino hits a proton and second when gadolinium absorbs the resulting neutron. The brightness of the first flash allows physicists to distinguish between low-energy antineutrinos from the Earth and high-energy antineutrinos from supernovae. In addition to observing neutrinos from distant supernovae, Super-Kamiokande will be able to set off an alarm to inform astronomers around the world of the presence of a supernova in the Milky Way within one second of it occurring.
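The delayed-coincidence idea behind this selection can be sketched as follows. This is an illustration only, not SK-Gd's reconstruction software: prompt (positron-like) flashes are paired with delayed (neutron-capture-like) flashes inside a search window, and the 500 μs window is an assumed value chosen to comfortably contain the ~30 μs average separation quoted above.

```python
def find_coincidences(events, max_dt_us=500.0):
    """events: list of (time_us, kind) tuples sorted by time, kind in {"prompt", "delayed"}.
    Return (prompt, delayed) pairs whose time difference is within max_dt_us."""
    pairs = []
    used = set()
    for i, (t1, kind1) in enumerate(events):
        if kind1 != "prompt":
            continue
        for j in range(i + 1, len(events)):
            t2, kind2 = events[j]
            if t2 - t1 > max_dt_us:
                break  # events are time-ordered, so stop searching
            if kind2 == "delayed" and j not in used:
                pairs.append((events[i], events[j]))
                used.add(j)
                break
    return pairs

sample = [(0.0, "prompt"), (28.5, "delayed"), (1500.0, "prompt")]
print(find_coincidences(sample))  # one antineutrino-like (prompt, delayed) pair
```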
The biggest challenge was whether the detector's water could be continuously filtered to remove impurities without removing the gadolinium at the same time. A 200-ton prototype called EGADS with added gadolinium sulfate was installed in the Kamioka mine and operated for years. It finished operation in 2018 and showed that the new water purification system would remove impurities while keeping the gadolinium concentration stable. It also showed that gadolinium sulfate would not significantly impair the transparency of the otherwise ultrapure water, or cause corrosion or deposition on existing equipment or on the new valves that will later be installed in the Hyper-Kamiokande.
Water tank
The outer shell of the water tank is a cylindrical stainless-steel tank 39 m in diameter and 42 m in height. The tank is self-supporting, with concrete backfilled against the rough-hewn stone walls to counteract water pressure when the tank is filled. The capacity of the tank exceeds 50 kilotonnes of water.
PMTs and associate structure
The basic unit for the ID PMTs is a "supermodule", a frame which supports a 3×4 array of PMTs. Supermodule frames are 2.1 m in height, 2.8 m in width, and 0.55 m in thickness. These frames are connected to each other in both the vertical and horizontal directions. Then the whole support structure is connected to the bottom of the tank and to the top structure. In addition to serving as rigid structural elements, supermodules simplified the initial assembly of the ID.
Each supermodule was assembled on the tank floor and then hoisted into its final position. Thus the ID is in effect tiled with supermodules. During installation, ID PMTs were pre-assembled in units of three for easy installation. Each supermodule has two OD PMTs attached on its back side. The support structure for the bottom PMTs is attached to the bottom of the stainless-steel tank by one vertical beam per supermodule frame. The support structure for the top of the tank is also used as the support structure for the top PMTs.
Cables from each group of three PMTs are bundled together. All cables run up the outer surface of the PMT support structure, i.e., on the OD PMT plane, pass through cable ports at the top of the tank, and are then routed into the electronics huts.
The thickness of the OD varies slightly, but is on average about 2.6 m on top and bottom, and 2.7 m on the barrel wall, giving the OD a total mass of 18 kilotons. OD PMTs were distributed with 302 on the top layer, 308 on the bottom, and 1275 on the barrel wall.
To protect against low-energy background radiation from radon decay products in the air, the roof of the cavity and the access tunnels were sealed with a coating called Mineguard. Mineguard is a spray-applied polyurethane membrane developed for use as a rock support system and radon gas barrier in the mining industry.
The average geomagnetic field is about 450 mG and is inclined by about 45° with respect to the horizon at the detector site. This presents a problem for the large and very sensitive PMTs, which require a much lower ambient field. The strength and uniform direction of the geomagnetic field could systematically bias photoelectron trajectories and timing in the PMTs. To counteract this, 26 sets of horizontal and vertical Helmholtz coils are arranged around the inner surfaces of the tank. With these in operation the average field in the detector is reduced to about 50 mG. The magnetic field at various PMT locations was measured before the tank was filled with water.
A standard fiducial volume of approximately 22.5 kilotonnes is defined as the region inside a surface drawn 2.00 m from the ID wall to minimize the anomalous response caused by natural radioactivity in the surrounding rock.
Monitoring system
Online monitoring system
An online monitor computer located in the control room reads data from the DAQ host computer via an FDDI link. It provides shift operators with a flexible tool for selecting event display features, makes online and recent-history histograms to monitor detector performance, and performs a variety of additional tasks needed to efficiently monitor status and diagnose detector and DAQ problems. Events in the data stream can be skimmed off and elementary analysis tools can be applied to check data quality during calibrations or after changes in hardware or online software.
Realtime supernova monitor
To detect and identify supernova neutrino bursts as efficiently and promptly as possible, Super-Kamiokande is equipped with an online supernova monitor system. About 10,000 total events are expected in Super-Kamiokande for a supernova explosion at the center of the Milky Way Galaxy. Super-Kamiokande can measure a burst with no dead-time, up to 30,000 events within the first second of a burst. Theoretical calculations of supernova explosions suggest that neutrinos are emitted over a total time-scale of tens of seconds, with about half of them emitted during the first one or two seconds. Super-K searches for event clusters in specified time windows of 0.5, 2, and 10 s.
Data are transmitted to the realtime SN-watch analysis process every 2 minutes, and the analysis is typically completed in 1 minute. When supernova (SN) event candidates are found, the quantity Rmean is calculated if the event multiplicity is larger than 16, where Rmean is defined as the average spatial distance between events, i.e. the mean of the distances |xi − xj| over all pairs of reconstructed event vertices in the cluster:
Rmean = [2 / (N(N − 1))] Σi<j |xi − xj|,
where N is the number of events in the cluster.
Neutrinos from supernovae interact with free protons, producing positrons which are distributed so uniformly in the detector that Rmean for SN events should be significantly larger than Rmean for ordinary spatial clusters of events. In the Super-Kamiokande detector, the Rmean distribution for uniformly distributed Monte Carlo events shows that no tail exists below Rmean ⩽ 1000 cm. For the "alarm" class of burst, the events are required to have Rmean ⩾ 900 cm for a multiplicity of 25 ⩽ N ⩽ 40, or Rmean ⩾ 750 cm for N > 40. These thresholds were determined by extrapolation from SN 1987A data.
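As a minimal numerical sketch of this burst selection (not the experiment's monitor code), the following computes Rmean as the mean pairwise distance between reconstructed vertices and applies the quoted multiplicity and Rmean thresholds; the treatment of multiplicities between 17 and 24 is an assumption, since the text gives no threshold for that range.

```python
import itertools
import math

def r_mean(positions_cm):
    """Average spatial distance (cm) between all pairs of event vertices."""
    pairs = list(itertools.combinations(positions_cm, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def passes_alarm_cut(positions_cm):
    """Apply the thresholds quoted in the text: multiplicity N > 16, and
    Rmean >= 900 cm for 25 <= N <= 40 or Rmean >= 750 cm for N > 40.
    Multiplicities 17-24 are treated as not alarmed (an assumption)."""
    n = len(positions_cm)
    if n <= 16:
        return False
    rm = r_mean(positions_cm)
    if 25 <= n <= 40:
        return rm >= 900.0
    if n > 40:
        return rm >= 750.0
    return False
```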
When a burst candidate meets the "alarm" criteria, the system runs special processes to check for spallation muons and makes a preliminary decision about further processing. If the burst candidate passes these checks, the data are reanalyzed using an offline process and a final decision is made within a few hours. During the Super-Kamiokande I run, this never occurred.
One of the important capabilities of Super-Kamiokande is the ability to reconstruct the direction to the supernova. Via neutrino–electron scattering, a total of 100–150 events are expected in the case of a supernova at the center of the Milky Way Galaxy. The direction to the supernova can be measured with an angular resolution that improves with the number N of events produced by ν–e scattering, and can be as good as δθ~3° for a supernova at the center of the Milky Way Galaxy. In this case, not only the time profile and energy spectrum of the neutrino burst but also the direction to the supernova can be provided.
Slow control monitor and offline process monitor
A process called the "slow control" monitor, part of the online monitoring system, watches the status of the HV systems, the temperatures of electronics crates, and the status of the compensating coils used to cancel the geomagnetic field. When any deviation from the norm is detected, it alerts physicists to investigate, take appropriate action, or notify experts.
To monitor and control the offline processes that analyze and transfer data, a sophisticated set of software was developed. This monitor allows non-expert shift physicists to identify and repair common problems to minimize down time, and the software package was a significant contribution to the smooth operation of the experiment and its overall high lifetime efficiency for data taking.
Research
Solar neutrino
The energy of the Sun comes from nuclear fusion in its core, where four protons are converted into a helium nucleus and two electron neutrinos. The neutrinos emitted from this reaction are called solar neutrinos. Photons created by nuclear fusion in the center of the Sun take millions of years to reach the surface; solar neutrinos, on the other hand, arrive at the Earth in eight minutes because of their lack of interactions with matter. Hence, solar neutrinos make it possible to observe the inner Sun in "real time", whereas visible light takes millions of years to carry the same information.
In 1999, Super-Kamiokande detected strong evidence of neutrino oscillation that successfully explained the solar neutrino problem. The Sun and about 80% of the visible stars produce their energy by the conversion of hydrogen to helium via
4p + 2e− → 4He + 2νe + 26.7 MeV
Consequently, stars are a source of neutrinos, including the Sun. These neutrinos come primarily from the p–p chain in lower-mass, cooler stars, and primarily from the CNO cycle in more massive, hotter stars.
In the early 1990s, particularly with the uncertainties that accompanied the initial results from Kamioka II and the Ga experiments, no individual experiment required a non-astrophysical solution of the solar neutrino problem. But in aggregate, the Cl, Kamioka II, and Ga experiments indicated a pattern of neutrino fluxes that was not compatible with any adjustment of the SSM. This in turn helped motivate a new generation of spectacularly capable active detectors. These experiments are Super-Kamiokande, the Sudbury Neutrino Observatory (SNO), and Borexino. Super-Kamiokande was able to detect elastic scattering (ES) events
which, due to the charged-current contribution to the scattering, has a relative sensitivity to electron neutrinos versus heavy-flavor neutrinos of ~7:1. Since the direction of the recoil electron is constrained to be very forward, the recoil electrons retain the direction of the incoming neutrinos. The distribution of the angle between the recoil electron direction and the direction from the Sun therefore shows a clear solar peak, from which the solar neutrino flux is extracted. Compared with the SSM prediction, the measured flux clearly indicates a deficit of solar neutrinos.
Atmospheric neutrino
Atmospheric neutrinos are secondary cosmic rays produced by the decay of particles resulting from interactions of primary cosmic rays (mostly protons) with Earth atmosphere. The observed atmospheric neutrino events fall into four categories. Fully contained (FC) events have all their tracks in the inner detector, while partially contained (PC) events have escaping tracks from the inner detector. Upward through-going muons (UTM) are produced in the rock beneath the detector and go through the inner detector. Upward stopping muons (USM) are also produced in the rock beneath the detector, but stop in the inner detector.
The number of observed neutrinos is predicted to be uniform regardless of the zenith angle. However, in 1998 Super-Kamiokande found that the number of upward-going muon neutrinos (generated on the other side of the Earth) was half the number of downward-going muon neutrinos. This can be explained by the neutrinos changing, or oscillating, into other neutrino flavors that are not detected. This is called neutrino oscillation; the discovery indicates that neutrinos have finite mass and suggests an extension of the Standard Model. Neutrinos oscillate among three flavors, implying that neutrinos have nonzero rest masses. A later analysis in 2004 suggested a sinusoidal dependence of the event rate as a function of "Length/Energy", which confirmed neutrino oscillations.
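A toy illustration of this up–down comparison is given below; the event counts are invented, not SK data, and the simple asymmetry definition is a common convention rather than necessarily the exact estimator used by the experiment.

```python
def up_down_asymmetry(n_up, n_down):
    """A = (U - D) / (U + D): near zero without oscillation, negative if
    upward-going muon neutrinos (which cross the Earth) partly disappear."""
    return (n_up - n_down) / (n_up + n_down)

# Invented counts showing roughly half as many upward as downward events.
print(up_down_asymmetry(n_up=150, n_down=300))  # -> about -0.33
```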
K2K Experiment
The K2K experiment was a neutrino experiment that ran from June 1999 to November 2004. It was designed to verify the oscillations observed by Super-Kamiokande using muon neutrinos, and it gave the first positive measurement of neutrino oscillations under conditions in which both the source and the detector were under control. The Super-Kamiokande detector played an important role in the experiment as the far detector. The later T2K experiment continued as the second-generation follow-up to K2K.
T2K Experiment
The T2K (Tokai to Kamioka) experiment is an international neutrino experiment involving several countries, including Japan and the United States. Its goal is to gain a deeper understanding of the parameters of neutrino oscillation. T2K searched for oscillations from muon neutrinos to electron neutrinos and announced the first experimental indications for them in June 2011. The Super-Kamiokande detector serves as the far detector, recording the Cherenkov radiation of muons and electrons created by interactions between high-energy neutrinos and water.
Proton decay
The proton is assumed to be absolutely stable in the Standard Model. However, Grand Unified Theories (GUTs) predict that protons can decay into lighter energetic charged particles such as electrons, muons, or pions, which can be observed. Kamiokande helped to rule out some of these theories. Super-Kamiokande is currently the largest detector for the observation of proton decay.
Purification
Water purification system
Since 2002, the 50 kilotons of pure water have been continually reprocessed at a rate of about 30 tons per hour in a closed system. Now, raw mine water is recycled through the first step (particle filters and RO) for some time before other processes, which involve expensive expendables, are imposed. Initially, water from the Super-Kamiokande tank is passed through nominal 1 μm mesh filters to remove dust and particles, which reduce the transparency of the water for Cherenkov photons and provide a possible radon source inside the Super-Kamiokande detector.
A heat exchanger is used to cool down the water in order to reduce the PMT dark noise level as well as suppress the growth of bacteria. Surviving bacteria are killed by a UV sterilizer stage. A cartridge polisher (CP) eliminates heavy ions, which also reduce water transparency and include radioactive species. The CP module increases the typical resistivity of recirculating water from 11 MΩ cm to 18.24 MΩ cm, approaching the chemical limit.
Originally, an ion-exchanger (IE) was included in the system, but it was removed when the IE resin was found to be a significant radon source. The reverse osmosis (RO) step, which removes additional particulates, and the introduction of Rn-reduced air into the water, which increases the radon removal efficiency of the vacuum degasifier (VD) stage that follows, were installed in 1999. The VD then removes dissolved gases from the water.
These dissolved gases are a serious background source for solar neutrino events in the MeV energy range, and the dissolved oxygen encourages the growth of bacteria. The removal efficiency is about 96%. Next, an ultra filter (UF) removes particles whose minimum size corresponds to a molecular weight of approximately 10,000 (or about 10 nm diameter), using hollow fiber membrane filters. Finally, a membrane degasifier (MD) removes radon dissolved in the water; the measured removal efficiency for radon is about 83%. The radon concentration is monitored by realtime detectors. In June 2001, typical radon concentrations in water coming into the purification system from the Super-Kamiokande tank were less than 2 mBq m−3, and in water output by the system, 0.4±0.2 mBq m−3.
Air purification system
Purified air is supplied in the gap between the water surface and the top of the Super-Kamiokande tank. The air purification system contains three compressors, a buffer tank, dryers, filters, and activated charcoal filters. A total of 8 m3 of activated charcoal is used. The last 50 L of charcoal is cooled to −40 °C to increase removal efficiency for radon. Typical flow rates, dew point, and residual radon concentration are 18 m3/h, −65 °C (@+1 kg/cm2), and a few mBq m−3, respectively. Typical radon concentration in the dome air is measured to be 40 Bq m−3.
Radon levels in the mine tunnel air, near the tank cavity dome, typically reach 2000–3000 Bq m−3 during the warm season, from May until October, while from November to April the radon level is approximately 100–300 Bq m−3. This variation is due to the chimney effect in the ventilation pattern of the mine tunnel system; in cold seasons, fresh air flows into the Atotsu tunnel entrance that is a relatively short path through exposed rock before reaching the experimental area, while in the summer, air flows out the tunnel, drawing radon-rich air from deep within the mine past the experimental area.
In order to keep radon levels in the dome area and water purification system below 100 Bq m−3, fresh air is continually pumped at approximately 10 m3/min from outside the mine, which generates a slight over-pressure in the Super-Kamiokande experimental area to minimize the entry of ambient mine air. A "Radon Hut" (Rn Hut) was constructed near the Atotsu tunnel entrance to house equipment for the dome air system: a 40 hp air pump with 10 m3 min−1/15 PSI pump capacity, an air dehumidifier, carbon filter tanks, and control electronics. In autumn 1997, an extended intake air pipe was installed at a location approximately 25 m above the Atotsu tunnel entrance. The radon level at this intake is low enough to satisfy the air-quality goals, so carbon filter regeneration operations are no longer required.
Data processing
Offline data processing is produced both in Kamioka and in the United States.
In Kamioka
The offline data processing system is located in Kenkyuto and is connected to the Super-Kamiokande detector by a 4 km FDDI optical fiber link. Data flow from the online system is 450 kbytes s−1 on average, corresponding to 40 Gbytes day−1 or 14 Tbytes yr−1. Magnetic tapes are used in the offline system to store data, and most of the analysis is accomplished here. The offline processing system is designed to be platform-independent, because different computer architectures are used for data analysis. Because of this, the data structures are based on the ZEBRA bank system developed at CERN, as well as the ZEBRA exchange system.
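The quoted daily and yearly data volumes follow from the average rate by simple arithmetic; the short check below assumes decimal units for the original bookkeeping and reproduces roughly 40 GB per day and 14 TB per year.

```python
# Order-of-magnitude cross-check of the quoted offline data rates.
rate_kb_s = 450                        # average data flow from the online system (kB/s)
per_day_gb = rate_kb_s * 86400 / 1e6   # kB/s -> GB/day (decimal units assumed)
per_year_tb = per_day_gb * 365 / 1e3   # GB/day -> TB/yr
print(f"{per_day_gb:.0f} GB/day, {per_year_tb:.1f} TB/yr")  # ~39 GB/day, ~14 TB/yr
```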
Event data from the Super-Kamiokande online DAQ system basically contain the list of hit PMTs with their TDC and ADC counts, GPS time-stamps, and other housekeeping data. For solar neutrino analysis, lowering the energy threshold is a constant goal, so there is a continual effort to improve the efficiency of the reduction algorithms; however, changes in calibrations or reduction methods require reprocessing of earlier data. Typically, 10 Tbytes of raw data are processed every month, so a large amount of CPU power and high-speed I/O access to the raw data are required. Extensive Monte Carlo simulation processing is also necessary.
The offline system was designed to meet all of these demands: tape storage of a large database (14 Tbytes yr−1), stable semi-realtime processing, and nearly continuous re-processing and Monte Carlo simulation. At the end of Run I, the computer system consisted of three major sub-systems: the data server, the CPU farm, and the network.
In the United States
A system dedicated to offsite offline data processing was set up at Stony Brook University in Stony Brook, New York, to process raw data sent from Kamioka. Most of the reformatted raw data are copied from the system facility in Kamioka. At Stony Brook, a system was set up for analysis and further processing, and the raw data were processed with a multi-tape DLT drive. First-stage data reduction processes were run separately for the high-energy analysis and for the low-energy analysis.
The data reduction for the high-energy analysis was mainly for atmospheric neutrino events and proton decay search while the low-energy analysis was mainly for the solar neutrino events. The reduced data for the high-energy analysis was further filtered by other reduction processes and the resulting data were stored on disks. The reduced data for the low-energy were stored on DLT tapes and sent to University of California, Irvine, for further processing.
This offsite analysis system continued for three years, until the analysis chains were shown to produce equivalent results. Then, in order to limit the required manpower, the collaboration concentrated on a single combined analysis.
Results
In 1998, Super-K found the first strong evidence of neutrino oscillation from the observation of muon neutrinos changing into tau neutrinos.
SK has set limits on the proton lifetime, other rare decays, and neutrino properties. SK set a lower bound of 5.9 × 10³³ years on the lifetime of protons decaying to kaons.
In January 2023, using data collected during the 1996–2018 period, Super-Kamiokande reported new limits on sub-GeV dark matter, excluding a range of dark matter–nucleon elastic scattering cross sections for dark matter masses in the sub-GeV region.
In popular culture
Super-Kamiokande is the subject of Andreas Gursky's 2007 photograph Kamiokande and was featured in an episode of Cosmos: A Spacetime Odyssey.
In September 2018, the detector was drained for maintenance, affording a team of Australian Broadcasting Corporation reporters the opportunity to obtain 4K resolution video from within the detection tank.
See also
Masatoshi Koshiba
Yoji Totsuka
References
External links
The Super-Kamiokande Homepage
Super-Kamiokande experiment record on INSPIRE-HEP
Neutrino observatories
Particle experiments
Physics beyond the Standard Model
Buildings and structures in Gifu Prefecture
Hida, Gifu
Laboratories in Japan
1983 establishments in Japan | Super-Kamiokande | [
"Physics"
] | 7,134 | [
"Unsolved problems in physics",
"Particle physics",
"Physics beyond the Standard Model"
] |
28,481 | https://en.wikipedia.org/wiki/Statistical%20mechanics | In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in the fields of physics, biology, chemistry, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.
Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.
While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.
History
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:
Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework classical mechanics, however they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.
Principles: mechanics and ensembles
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:
The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics)
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix.
As is usual for probabilities, the ensemble can be interpreted in different ways:
an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
Statistical thermodynamics
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.
Fundamental postulate
A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that
For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).
Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates:
where the third postulate can be replaced by the following:
Three thermodynamic ensembles
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
Microcanonical ensemble
describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble
describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble
describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
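As a minimal, self-contained illustration of the canonical ensemble just described (a sketch only, with units chosen so that the Boltzmann constant k_B = 1), the following computes the Boltzmann probabilities and average energy of a single two-level system in contact with a heat bath.

```python
import math

def canonical_averages(eps, temperature):
    """Canonical ensemble for a two-level system with energies 0 and eps (k_B = 1)."""
    beta = 1.0 / temperature
    energies = [0.0, eps]
    weights = [math.exp(-beta * e) for e in energies]  # Boltzmann factors
    z = sum(weights)                                   # partition function
    probs = [w / z for w in weights]
    mean_e = sum(p * e for p, e in zip(probs, energies))
    return z, probs, mean_e

for t in (0.1, 1.0, 10.0):
    z, probs, mean_e = canonical_averages(eps=1.0, temperature=t)
    print(f"T={t:>4}: P(excited)={probs[1]:.3f}, <E>={mean_e:.3f}")
```

At low temperature the excited state is essentially never occupied, while at high temperature both levels approach equal probability, which is the qualitative behaviour the canonical distribution is meant to capture.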
For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. The Gibbs theorem about equivalence of ensembles was developed into the theory of concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
Important cases where the thermodynamic ensembles do not give identical results include:
Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.
Calculation methods
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
Exact
There are some cases which allow exact solutions.
For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integral over all phase space in classical mechanics).
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
A few large systems with interactions have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models. Some examples include the Bethe ansatz, the square-lattice Ising model in zero field, and the hard hexagon model.
Monte Carlo
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.
Path integral Monte Carlo, also used to sample the canonical ensemble.
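A minimal sketch of the Metropolis–Hastings idea mentioned above is given below, sampling the canonical ensemble of a small one-dimensional Ising chain with periodic boundary conditions; the chain length, temperature, step count, and the choice of units (coupling J = 1, k_B = 1) are arbitrary illustrative assumptions.

```python
import math
import random

def metropolis_ising_1d(n_spins=50, temperature=2.0, n_steps=100_000, seed=1):
    """Sample spin configurations of a 1D Ising chain with the Metropolis rule."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n_spins)]
    beta = 1.0 / temperature
    for _ in range(n_steps):
        i = rng.randrange(n_spins)
        left = spins[(i - 1) % n_spins]
        right = spins[(i + 1) % n_spins]
        # Energy change if spin i is flipped (E = -sum_i s_i s_{i+1}, J = 1):
        d_e = 2.0 * spins[i] * (left + right)
        # Metropolis acceptance: always accept if dE <= 0, else with prob exp(-beta*dE).
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            spins[i] = -spins[i]
    return sum(spins) / n_spins  # magnetization per spin of the final configuration

print(metropolis_ising_1d())
```

Averaging observables over many such sampled configurations approximates canonical ensemble averages without ever enumerating all 2^N states.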
Other
For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
Non-equilibrium statistical mechanics
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
Stochastic methods
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
Near-equilibrium methods
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
A few of the theoretical tools used to make this connection include:
Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism
Hybrid methods
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.
Applications
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and non-equilibrium economics.
Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasmas, and the structural features of liquids. It underlies modern astrophysics and the virial theorem. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system, including the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in areas such as medical diagnostics.
Quantum statistical mechanics
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
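A small numerical illustration of these density-operator properties (not tied to any particular system discussed here) is the spin-1/2 mixed state below: its trace is one, its eigenvalues are non-negative, and expectation values are computed as Tr(ρA).

```python
import numpy as np

# Mixed state: 75% spin-up and 25% spin-down along z (an invented example).
rho = 0.75 * np.array([[1, 0], [0, 0]]) + 0.25 * np.array([[0, 0], [0, 1]])
sigma_z = np.array([[1, 0], [0, -1]])

assert np.isclose(np.trace(rho), 1.0)        # trace one
assert np.all(np.linalg.eigvalsh(rho) >= 0)  # non-negative (positive semidefinite)

expectation_sz = np.trace(rho @ sigma_z)     # <sigma_z> = Tr(rho sigma_z) = 0.5
purity = np.trace(rho @ rho)                 # Tr(rho^2) < 1 signals a mixed state
print(expectation_sz, purity)
```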
See also
Quantum statistical mechanics
List of textbooks in thermodynamics and statistical mechanics
Laplace transform
References
Further reading
External links
Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
taught by Leonard Susskind.
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see this article in the web archive from 28 April 2012.
Statistical mechanics
Thermodynamics | Statistical mechanics | ["Physics", "Chemistry", "Mathematics"] | 4,445 | ["Statistical mechanics", "Thermodynamics", "Dynamical systems"] |
28,524 | https://en.wikipedia.org/wiki/RNA%20splicing | RNA splicing is a process in molecular biology where a newly-made precursor messenger RNA (pre-mRNA) transcript is transformed into a mature messenger RNA (mRNA). It works by removing all the introns (non-coding regions of RNA) and splicing back together exons (coding regions). For nuclear-encoded genes, splicing occurs in the nucleus either during or immediately after transcription. For those eukaryotic genes that contain introns, splicing is usually needed to create an mRNA molecule that can be translated into protein. For many eukaryotic introns, splicing occurs in a series of reactions which are catalyzed by the spliceosome, a complex of small nuclear ribonucleoproteins (snRNPs). There exist self-splicing introns, that is, ribozymes that can catalyze their own excision from their parent RNA molecule. The process of transcription, splicing and translation is called gene expression, the central dogma of molecular biology.
Splicing pathways
Several methods of RNA splicing occur in nature; the type of splicing depends on the structure of the spliced intron and the catalysts required for splicing to occur.
Spliceosomal complex
Introns
The word intron is derived from the terms intragenic region, and intracistron, that is, a segment of DNA that is located between two exons of a gene. The term intron refers to both the DNA sequence within a gene and the corresponding sequence in the unprocessed RNA transcript. As part of the RNA processing pathway, introns are removed by RNA splicing either shortly after or concurrent with transcription. Introns are found in the genes of most organisms and many viruses. They can be located in a wide range of genes, including those that generate proteins, ribosomal RNA (rRNA), and transfer RNA (tRNA).
Within introns, a donor site (5' end of the intron), a branch site (near the 3' end of the intron) and an acceptor site (3' end of the intron) are required for splicing. The splice donor site includes an almost invariant sequence GU at the 5' end of the intron, within a larger, less highly conserved region. The splice acceptor site at the 3' end of the intron terminates the intron with an almost invariant AG sequence. Upstream (5'-ward) from the AG there is a region high in pyrimidines (C and U), or polypyrimidine tract. Further upstream from the polypyrimidine tract is the branchpoint, which includes an adenine nucleotide involved in lariat formation. The consensus sequence for an intron (in IUPAC nucleic acid notation) is: G-G-[cut]-G-U-R-A-G-U (donor site) ... intron sequence ... Y-U-R-A-C (branch sequence 20-50 nucleotides upstream of acceptor site) ... Y-rich-N-C-A-G-[cut]-G (acceptor site). However, it is noted that the specific sequence of intronic splicing elements and the number of nucleotides between the branchpoint and the nearest 3' acceptor site affect splice site selection. Also, point mutations in the underlying DNA or errors during transcription can activate a cryptic splice site in part of the transcript that usually is not spliced. This results in a mature messenger RNA with a missing section of an exon. In this way, a point mutation, which might otherwise affect only a single amino acid, can manifest as a deletion or truncation in the final protein.
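As a toy illustration of the GU–AG boundary rule and the polypyrimidine tract described above (and nothing more: real splice-site recognition depends on the full consensus, the branchpoint, and many regulatory elements), the sketch below scans a made-up pre-mRNA for candidate GU...AG introns. The sequence, minimum intron length, and pyrimidine-fraction threshold are all invented for the example.

```python
# Toy scan for candidate GU...AG introns in an RNA sequence (uppercase, no gaps).
# Only the almost-invariant GU/AG boundary rule plus a crude polypyrimidine-tract
# check are applied; this is not a realistic splice-site predictor.

def candidate_introns(rna, min_len=20, tract_len=12, min_pyrimidine_frac=0.6):
    candidates = []
    for donor in range(len(rna) - 1):
        if rna[donor:donor + 2] != "GU":            # donor site: intron starts with GU
            continue
        for acceptor in range(donor + min_len, len(rna) - 1):
            if rna[acceptor:acceptor + 2] != "AG":  # acceptor site: intron ends with AG
                continue
            tract = rna[max(donor + 2, acceptor - tract_len):acceptor]
            pyr_frac = sum(base in "CU" for base in tract) / max(len(tract), 1)
            if pyr_frac >= min_pyrimidine_frac:     # crude polypyrimidine-tract filter
                candidates.append((donor, acceptor + 2, round(pyr_frac, 2)))
    return candidates

# Hypothetical pre-mRNA: exon - intron (GU ... polypyrimidine tract ... AG) - exon
pre_mrna = "AUGGCCAAA" + "GUAAGUACUAACAUCUCUUUUUUCCCUUUCAG" + "GGCUUUUAA"
print(candidate_introns(pre_mrna))
```

The scan reports more than one candidate for this single intron, which illustrates why the cell relies on additional sequence context and regulatory elements to select the correct splice sites.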
Formation and activity
Splicing is catalyzed by the spliceosome, a large RNA-protein complex composed of five small nuclear ribonucleoproteins (snRNPs). Assembly and activity of the spliceosome occurs during transcription of the pre-mRNA. The RNA components of snRNPs interact with the intron and are involved in catalysis. Two types of spliceosomes have been identified (major and minor) which contain different snRNPs.
The major spliceosome splices introns containing GU at the 5' splice site and AG at the 3' splice site. It is composed of the U1, U2, U4, U5, and U6 snRNPs and is active in the nucleus. In addition, a number of proteins including U2 small nuclear RNA auxiliary factor 1 (U2AF35), U2AF2 (U2AF65) and SF1 are required for the assembly of the spliceosome. The spliceosome forms different complexes during the splicing process:
Complex E
The U1 snRNP binds to the GU sequence at the 5' splice site of an intron;
Splicing factor 1 binds to the intron branch point sequence;
U2AF1 binds at the 3' splice site of the intron;
U2AF2 binds to the polypyrimidine tract;
Complex A (pre-spliceosome)
The U2 snRNP displaces SF1 and binds to the branch point sequence and ATP is hydrolyzed;
Complex B (pre-catalytic spliceosome)
The U5/U4/U6 snRNP trimer binds, and the U5 snRNP binds exons at the 5' site, with U6 binding to U2;
Complex B*
The U1 snRNP is released, U5 shifts from exon to intron, and the U6 binds at the 5' splice site;
Complex C (catalytic spliceosome)
U4 is released, U6/U2 catalyzes transesterification, joining the 5' end of the intron to the branch-point adenosine (A) within the intron to form a lariat; U5 binds the exon at the 3' splice site, and the 5' site is cleaved, resulting in the formation of the lariat;
Complex C* (post-spliceosomal complex)
U2/U5/U6 remain bound to the lariat, and the 3' site is cleaved and exons are ligated using ATP hydrolysis. The spliced RNA is released, the lariat is released and degraded, and the snRNPs are recycled.
This type of splicing is termed canonical splicing, or the lariat pathway, and accounts for more than 99% of splicing. By contrast, when the intronic flanking sequences do not follow the GU-AG rule, noncanonical splicing is said to occur (see "minor spliceosome" below).
The minor spliceosome is very similar to the major spliceosome, but instead it splices out rare introns with different splice site sequences. While the minor and major spliceosomes contain the same U5 snRNP, the minor spliceosome has different but functionally analogous snRNPs for U1, U2, U4, and U6, which are respectively called U11, U12, U4atac, and U6atac.
Recursive splicing
In most cases, splicing removes introns as single units from precursor mRNA transcripts. However, in some cases, especially in mRNAs with very long introns, splicing happens in steps, with part of an intron removed and then the remaining intron spliced out in a following step. This was first found in the Ultrabithorax (Ubx) gene of the fruit fly, Drosophila melanogaster, and a few other Drosophila genes, but cases in humans have been reported as well.
Trans-splicing
Trans-splicing is a form of splicing that removes introns or outrons, and joins two exons that are not within the same RNA transcript. Trans-splicing can occur between two different endogenous pre-mRNAs or between an endogenous and an exogenous (such as from viruses) or artificial RNAs.
Self-splicing
Self-splicing occurs for rare introns that form a ribozyme, performing the functions of the spliceosome by RNA alone. There are three kinds of self-splicing introns, Group I, Group II and Group III. Group I and II introns perform splicing similar to the spliceosome without requiring any protein. This similarity suggests that Group I and II introns may be evolutionarily related to the spliceosome. Self-splicing may also be very ancient, and may have existed in an RNA world present before protein.
Two transesterifications characterize the mechanism in which group I introns are spliced:
3'OH of a free guanine nucleoside (or one located in the intron) or a nucleotide cofactor (GMP, GDP, GTP) attacks phosphate at the 5' splice site.
3'OH of the 5' exon becomes a nucleophile and the second transesterification results in the joining of the two exons.
The mechanism by which group II introns are spliced (two transesterification reactions, like group I introns) is as follows:
The 2'OH of a specific adenosine in the intron attacks the 5' splice site, thereby forming the lariat
The 3'OH of the 5' exon triggers the second transesterification at the 3' splice site, thereby joining the exons together.
tRNA splicing
tRNA (also tRNA-like) splicing is another rare form of splicing that usually occurs in tRNA. The splicing reaction involves a different biochemistry than the spliceosomal and self-splicing pathways.
In the yeast Saccharomyces cerevisiae, a yeast tRNA splicing endonuclease heterotetramer, composed of TSEN54, TSEN2, TSEN34, and TSEN15, cleaves pre-tRNA at two sites in the acceptor loop to form a 5'-half tRNA, terminating at a 2',3'-cyclic phosphodiester group, and a 3'-half tRNA, terminating at a 5'-hydroxyl group, along with a discarded intron. Yeast tRNA kinase then phosphorylates the 5'-hydroxyl group using adenosine triphosphate. Yeast tRNA cyclic phosphodiesterase cleaves the cyclic phosphodiester group to form a 2'-phosphorylated 3' end. Yeast tRNA ligase adds an adenosine monophosphate group to the 5' end of the 3'-half and joins the two halves together. NAD-dependent 2'-phosphotransferase then removes the 2'-phosphate group.
Evolution
Splicing occurs in all the kingdoms or domains of life, however, the extent and types of splicing can be very different between the major divisions. Eukaryotes splice many protein-coding messenger RNAs and some non-coding RNAs. Prokaryotes, on the other hand, splice rarely and mostly non-coding RNAs. Another important difference between these two groups of organisms is that prokaryotes completely lack the spliceosomal pathway.
Because spliceosomal introns are not conserved in all species, there is debate concerning when spliceosomal splicing evolved. Two models have been proposed: the intron late and intron early models (see intron evolution).
Biochemical mechanism
Spliceosomal splicing and self-splicing involve a two-step biochemical process. Both steps involve transesterification reactions that occur between RNA nucleotides. tRNA splicing, however, is an exception and does not occur by transesterification.
Spliceosomal and self-splicing transesterification reactions occur via two sequential transesterification reactions. First, the 2'OH of a specific branchpoint nucleotide within the intron, defined during spliceosome assembly, performs a nucleophilic attack on the first nucleotide of the intron at the 5' splice site, forming the lariat intermediate. Second, the 3'OH of the released 5' exon then performs a nucleophilic attack at the first nucleotide following the last nucleotide of the intron at the 3' splice site, thus joining the exons and releasing the intron lariat.
Alternative splicing
In many cases, the splicing process can create a range of unique proteins by varying the exon composition of the same mRNA. This phenomenon is called alternative splicing. Alternative splicing can occur in many ways: exons can be extended or skipped, or introns can be retained. It is estimated that 95% of transcripts from multiexon genes undergo alternative splicing, some instances of which occur in a tissue-specific manner and/or under specific cellular conditions. The development of high-throughput mRNA sequencing technology can help quantify the expression levels of alternatively spliced isoforms. Differential expression levels across tissues and cell lineages have allowed computational approaches to be developed to predict the functions of these isoforms.
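One widely used way to quantify such events from RNA-seq data is the "percent spliced in" (PSI) of a cassette exon, the fraction of junction reads supporting inclusion of the exon versus skipping it. The sketch below computes PSI from hypothetical read counts; the counts, the tissue labels, and the simple per-junction normalisation are illustrative assumptions rather than a description of any particular published pipeline.

```python
def percent_spliced_in(inclusion_reads, skipping_reads):
    """PSI for a cassette exon from junction read counts.

    inclusion_reads: reads spanning the two inclusion junctions (upstream
    exon - cassette exon and cassette exon - downstream exon).
    skipping_reads: reads spanning the single skipping junction.
    Inclusion counts are averaged over the two junctions so both isoforms
    are measured on a per-junction basis.
    """
    inclusion_per_junction = inclusion_reads / 2
    total = inclusion_per_junction + skipping_reads
    return inclusion_per_junction / total if total > 0 else float("nan")

# Hypothetical junction counts for the same exon in two tissues.
print("tissue A PSI:", round(percent_spliced_in(inclusion_reads=180, skipping_reads=10), 3))
print("tissue B PSI:", round(percent_spliced_in(inclusion_reads=40, skipping_reads=60), 3))
```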
Given this complexity, alternative splicing of pre-mRNA transcripts is regulated by a system of trans-acting proteins (activators and repressors) that bind to cis-acting sites or "elements" (enhancers and silencers) on the pre-mRNA transcript itself. These proteins and their respective binding elements promote or reduce the usage of a particular splice site. The binding specificity comes from the sequence and structure of the cis-elements, e.g. in HIV-1 there are many donor and acceptor splice sites. Among the various splice sites, ssA7, which is a 3' acceptor site, folds into three stem-loop structures, i.e. the intronic splicing silencer (ISS), the exonic splicing enhancer (ESE), and the exonic splicing silencer (ESS3). The solution structure of the intronic splicing silencer and its interaction with the host protein hnRNPA1 give insight into specific recognition. However, adding to the complexity of alternative splicing, it is noted that the effects of regulatory factors are many times position-dependent. For example, a splicing factor that serves as a splicing activator when bound to an intronic enhancer element may serve as a repressor when bound to its splicing element in the context of an exon, and vice versa. In addition to the position-dependent effects of enhancer and silencer elements, the location of the branchpoint (i.e., distance upstream of the nearest 3' acceptor site) also affects splicing. The secondary structure of the pre-mRNA transcript also plays a role in regulating splicing, such as by bringing together splicing elements or by masking a sequence that would otherwise serve as a binding element for a splicing factor.
Role of nuclear speckles in RNA splicing
Pre-mRNA splicing occurs throughout the nucleus, and once mature mRNA is generated, it is transported to the cytoplasm for translation. In both plant and animal cells, nuclear speckles are regions with high concentrations of splicing factors. These speckles were once thought to be mere storage centers for splicing factors. However, it is now understood that nuclear speckles help concentrate splicing factors near genes that are physically located close to them. Genes located farther from speckles can still be transcribed and spliced, but their splicing is less efficient compared to those closer to speckles. Cells can vary the genomic positions of genes relative to nuclear speckles as a mechanism to modulate the expression of genes via splicing.
Role of splicing/alternative splicing in HIV-integration
The process of splicing is linked with HIV integration, as HIV-1 targets highly spliced genes.
Splicing response to DNA damage
DNA damage affects splicing factors by altering their post-translational modification, localization, expression and activity. Furthermore, DNA damage often disrupts splicing by interfering with its coupling to transcription. DNA damage also has an impact on the splicing and alternative splicing of genes intimately associated with DNA repair. For instance, DNA damage modulates the alternative splicing of the DNA repair genes Brca1 and Ercc1.
Experimental manipulation of splicing
Splicing events can be experimentally altered by binding steric-blocking antisense oligos, such as Morpholinos or Peptide nucleic acids to snRNP binding sites, to the branchpoint nucleotide that closes the lariat, or to splice-regulatory element binding sites.
The use of antisense oligonucleotides to modulate splicing has shown great promise as a therapeutic strategy for a variety of genetic diseases caused by splicing defects.
Recent studies have shown that RNA splicing can be regulated by a variety of epigenetic modifications, including DNA methylation and histone modifications.
Splicing errors and variation
It has been suggested that one third of all disease-causing mutations impact splicing. Common errors include:
Mutation of a splice site resulting in loss of function of that site. Results in exposure of a premature stop codon, loss of an exon, or inclusion of an intron.
Mutation of a splice site reducing specificity. May result in variation in the splice location, causing insertion or deletion of amino acids, or most likely, a disruption of the reading frame.
Displacement of a splice site, leading to inclusion or exclusion of more RNA than expected, resulting in longer or shorter exons.
Although many splicing errors are safeguarded by a cellular quality control mechanism termed nonsense-mediated mRNA decay (NMD), a number of splicing-related diseases also exist, as suggested above.
Allelic differences in mRNA splicing are likely to be a common and important source of phenotypic diversity at the molecular level, in addition to their contribution to genetic disease susceptibility. Indeed, genome-wide studies in humans have identified a range of genes that are subject to allele-specific splicing.
In plants, variation for flooding stress tolerance correlated with stress-induced alternative splicing of transcripts associated with gluconeogenesis and other processes.
Protein splicing
In addition to RNA, proteins can undergo splicing. Although the biomolecular mechanisms are different, the principle is the same: parts of the protein, called inteins instead of introns, are removed. The remaining parts, called exteins instead of exons, are fused together.
Protein splicing has been observed in a wide range of organisms, including bacteria, archaea, plants, yeast and humans.
Splicing and genesis of circRNAs
The existence of backsplicing was first suggested in 2012. Backsplicing explains the genesis of circular RNAs resulting from the exact junction between the 3' boundary of an exon and the 5' boundary of an exon located upstream. In these exonic circular RNAs, the junction is a classic 3'-5' link.
The exclusion of intronic sequences during splicing can also leave traces, in the form of circular RNAs. In some cases, the intronic lariat is not destroyed and the circular part remains as a lariat-derived circRNA. In these lariat-derived circular RNAs, the junction is a 2'-5' link.
See also
cDNA
DBASS3/5
Exon junction complex
mRNA capping
Polyadenylation
Post-transcriptional modification
RNA editing
SWAP protein domain, a splicing regulator
References
External links
Virtual Cell Animation Collection: mRNA Splicing
Gene expression
Spliceosome | RNA splicing | ["Chemistry", "Biology"] | 4,246 | ["Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry"] |
28,616 | https://en.wikipedia.org/wiki/Shotgun%20sequencing | In genetics, shotgun sequencing is a method used for sequencing random DNA strands. It is named by analogy with the rapidly expanding, quasi-random shot grouping of a shotgun.
The chain-termination method of DNA sequencing ("Sanger sequencing") can only be used for short DNA strands of 100 to 1000 base pairs. Due to this size limit, longer sequences are subdivided into smaller fragments that can be sequenced separately, and these sequences are assembled to give the overall sequence.
In shotgun sequencing, DNA is broken up randomly into numerous small segments, which are sequenced using the chain termination method to obtain reads. Multiple overlapping reads for the target DNA are obtained by performing several rounds of this fragmentation and sequencing. Computer programs then use the overlapping ends of different reads to assemble them into a continuous sequence.
Shotgun sequencing was one of the precursor technologies that was responsible for enabling whole genome sequencing.
Example
For example, consider the following two rounds of shotgun reads:
In this extremely simplified example, none of the reads cover the full length of the original sequence, but the four reads can be assembled into the original sequence using the overlap of their ends to align and order them. In reality, this process uses enormous amounts of information that are rife with ambiguities and sequencing errors. Assembly of complex genomes is additionally complicated by the great abundance of repetitive sequences, meaning similar short reads could come from completely different parts of the sequence.
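To make the overlap idea concrete, here is a minimal greedy assembler that repeatedly merges the pair of reads with the longest exact suffix–prefix overlap. The reads are invented for the example, and real assemblers must additionally cope with sequencing errors, reverse complements, and repeats, all of which this sketch ignores.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that exactly matches a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        olen, a, b = max(
            ((overlap(a, b), a, b) for a in reads for b in reads if a is not b),
            key=lambda t: t[0],
        )
        if olen == 0:          # no overlaps left; stop merging
            break
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[olen:])
    return reads

# Hypothetical overlapping reads from two rounds of fragmentation.
reads = ["AGCATGCTGCAG", "GCTGCAGTCATGCT", "TCATGCTTAGGCTA"]
print(greedy_assemble(reads))
```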
Many overlapping reads for each segment of the original DNA are necessary to overcome these difficulties and accurately assemble the sequence. For example, to complete the Human Genome Project, most of the human genome was sequenced at 12X or greater coverage; that is, each base in the final sequence was present on average in 12 different reads. Even so, current methods have failed to isolate or assemble reliable sequence for approximately 1% of the (euchromatic) human genome, as of 2004.
Whole genome shotgun sequencing
History
Whole genome shotgun sequencing for small (4000- to 7000-base-pair) genomes was first suggested in 1979. The first genome sequenced by shotgun sequencing was that of cauliflower mosaic virus, published in 1981.
Paired-end sequencing
Broader application benefited from pairwise end sequencing, known colloquially as double-barrel shotgun sequencing. As sequencing projects began to take on longer and more complicated DNA sequences, multiple groups began to realize that useful information could be obtained by sequencing both ends of a fragment of DNA. Although sequencing both ends of the same fragment and keeping track of the paired data was more cumbersome than sequencing a single end of two distinct fragments, the knowledge that the two sequences were oriented in opposite directions and were about the length of a fragment apart from each other was valuable in reconstructing the sequence of the original target fragment.
The first published description of the use of paired ends was in 1990 as part of the sequencing of the human HGPRT locus, although the use of paired ends was limited to closing gaps after the application of a traditional shotgun sequencing approach. The first theoretical description of a pure pairwise end sequencing strategy, assuming fragments of constant length, was in 1991. At the time, there was community consensus that the optimal fragment length for pairwise end sequencing would be three times the sequence read length. In 1995 Roach et al. introduced the innovation of using fragments of varying sizes, and demonstrated that a pure pairwise end-sequencing strategy would be possible on large targets. The strategy was subsequently adopted by The Institute for Genomic Research (TIGR) to sequence the genome of the bacterium Haemophilus influenzae in 1995, and then by Celera Genomics to sequence the Drosophila melanogaster (fruit fly) genome in 2000, and subsequently the human genome.
Approach
To apply the strategy, a high-molecular-weight DNA strand is sheared into random fragments, size-selected (usually 2, 10, 50, and 150 kb), and cloned into an appropriate vector. The clones are then sequenced from both ends using the chain termination method, yielding two short sequences. Each sequence is called an end-read, or read 1 and read 2, and two reads from the same clone are referred to as mate pairs. Since the chain termination method usually can only produce reads between 500 and 1000 bases long, in all but the smallest clones, mate pairs will rarely overlap.
Assembly
The original sequence is reconstructed from the reads using sequence assembly software. First, overlapping reads are collected into longer composite sequences known as contigs. Contigs can be linked together into scaffolds by following connections between mate pairs. The distance between contigs can be inferred from the mate pair positions if the average fragment length of the library is known and has a narrow window of deviation. Depending on the size of the gap between contigs, different techniques can be used to find the sequence in the gaps. If the gap is small (5-20kb) then the use of polymerase chain reaction (PCR) to amplify the region is required, followed by sequencing. If the gap is large (>20kb) then the large fragment is cloned in special vectors such as bacterial artificial chromosomes (BAC) followed by sequencing of the vector.
Pros and cons
Proponents of this approach argue that it is possible to sequence the whole genome at once using large arrays of sequencers, which makes the whole process much more efficient than more traditional approaches. Detractors argue that although the technique quickly sequences large regions of DNA, its ability to correctly link these regions is suspect, particularly for eukaryotic genomes with repeating regions. As sequence assembly programs become more sophisticated and computing power becomes cheaper, it may be possible to overcome this limitation.
Coverage
Coverage (read depth or depth) is the average number of reads representing a given nucleotide in the reconstructed sequence. It can be calculated from the length of the original genome (G), the number of reads (N), and the average read length (L) as coverage = N × L / G. For example, a hypothetical genome with 2,000 base pairs reconstructed from 8 reads with an average length of 500 nucleotides will have 2x redundancy. This parameter also enables one to estimate other quantities, such as the percentage of the genome covered by reads (sometimes also called coverage). A high coverage in shotgun sequencing is desired because it can overcome errors in base calling and assembly. The subject of DNA sequencing theory addresses the relationships of such quantities.
Sometimes a distinction is made between sequence coverage and physical coverage. Sequence coverage is the average number of times a base is read (as described above). Physical coverage is the average number of times a base is read or spanned by mate paired reads.
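Under the idealised assumption that reads are placed uniformly at random, these quantities follow the Lander–Waterman model: the coverage is c = NL/G and the expected fraction of the genome covered by no read is approximately e^(−c). The sketch below evaluates both for a hypothetical sequencing project; the genome size, read length, and read counts are invented for the example.

```python
import math

def coverage(genome_length, n_reads, read_length):
    """Average sequence coverage c = N * L / G."""
    return n_reads * read_length / genome_length

def expected_uncovered_fraction(c):
    """Lander-Waterman estimate of the genome fraction covered by no read."""
    return math.exp(-c)

# Hypothetical project: 5 Mb genome, 150 bp reads.
G, L = 5_000_000, 150
for n_reads in (100_000, 400_000, 1_000_000):
    c = coverage(G, n_reads, L)
    print(f"{n_reads:>9} reads: coverage {c:4.1f}x, "
          f"~{expected_uncovered_fraction(c):.2%} of genome uncovered")
```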
Hierarchical shotgun sequencing
Although shotgun sequencing can in theory be applied to a genome of any size, its direct application to the sequencing of large genomes (for instance, the human genome) was limited until the late 1990s, when technological advances made practical the handling of the vast quantities of complex data involved in the process. Historically, full-genome shotgun sequencing was believed to be limited by both the sheer size of large genomes and by the complexity added by the high percentage of repetitive DNA (greater than 50% for the human genome) present in large genomes. It was not widely accepted that a full-genome shotgun sequence of a large genome would provide reliable data. For these reasons, other strategies that lowered the computational load of sequence assembly had to be utilized before shotgun sequencing was performed.
In hierarchical sequencing, also known as top-down sequencing, a low-resolution physical map of the genome is made prior to actual sequencing. From this map, a minimal number of fragments that cover the entire chromosome are selected for sequencing. In this way, the minimum amount of high-throughput sequencing and assembly is required.
The amplified genome is first sheared into larger pieces (50-200kb) and cloned into a bacterial host using BACs or P1-derived artificial chromosomes (PAC). Because multiple genome copies have been sheared at random, the fragments contained in these clones have different ends, and with enough coverage (see section above) finding the smallest possible scaffold of BAC contigs that covers the entire genome is theoretically possible. This scaffold is called the minimum tiling path. Once a tiling path has been found, the BACs that form this path are sheared at random into smaller fragments and can be sequenced using the shotgun method on a smaller scale.
Although the full sequences of the BAC contigs is not known, their orientations relative to one another are known. There are several methods for deducing this order and selecting the BACs that make up a tiling path. The general strategy involves identifying the positions of the clones relative to one another and then selecting the fewest clones required to form a contiguous scaffold that covers the entire area of interest. The order of the clones is deduced by determining the way in which they overlap. Overlapping clones can be identified in several ways. A small radioactively or chemically labeled probe containing a sequence-tagged site (STS) can be hybridized onto a microarray upon which the clones are printed. In this way, all the clones that contain a particular sequence in the genome are identified. The end of one of these clones can then be sequenced to yield a new probe and the process repeated in a method called chromosome walking.
Alternatively, the BAC library can be restriction-digested. Two clones that have several fragment sizes in common are inferred to overlap because they contain multiple similarly spaced restriction sites in common. This method of genomic mapping is called restriction or BAC fingerprinting because it identifies a set of restriction sites contained in each clone. Once the overlap between the clones has been found and their order relative to the genome known, a scaffold of a minimal subset of these contigs that covers the entire genome is shotgun-sequenced.
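A toy version of this fingerprint comparison (all fragment sizes invented for the example): two clones are flagged as overlapping when their fingerprints share at least a chosen number of fragment sizes within a small measurement tolerance.

```python
def shared_fragments(sizes_a, sizes_b, tolerance=50):
    """Count fragment sizes (in bp) that the two fingerprints have in common."""
    remaining = list(sizes_b)
    shared = 0
    for size in sizes_a:
        for other in remaining:
            if abs(size - other) <= tolerance:
                remaining.remove(other)
                shared += 1
                break
    return shared

def clones_overlap(sizes_a, sizes_b, min_shared=3, tolerance=50):
    return shared_fragments(sizes_a, sizes_b, tolerance) >= min_shared

# Hypothetical restriction fingerprints (fragment sizes in bp) of three BAC clones.
clone_1 = [1200, 3400, 5600, 7800, 9100]
clone_2 = [3420, 5580, 7790, 11000, 15000]
clone_3 = [800, 2100, 12500, 16000, 18000]

print("clone 1 / clone 2 overlap:", clones_overlap(clone_1, clone_2))  # shares ~3 sizes
print("clone 1 / clone 3 overlap:", clones_overlap(clone_1, clone_3))  # shares none
```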
Because it involves first creating a low-resolution map of the genome, hierarchical shotgun sequencing is slower than whole-genome shotgun sequencing, but relies less heavily on computer algorithms than whole-genome shotgun sequencing. The process of extensive BAC library creation and tiling path selection, however, make hierarchical shotgun sequencing slow and labor-intensive. Now that the technology is available and the reliability of the data demonstrated, the speed and cost efficiency of whole-genome shotgun sequencing has made it the primary method for genome sequencing.
Newer sequencing technologies
The classical shotgun sequencing was based on the Sanger sequencing method: this was the most advanced technique for sequencing genomes from about 1995–2005. The shotgun strategy is still applied today, however using other sequencing technologies, such as short-read sequencing and long-read sequencing.
Short-read or "next-gen" sequencing produces shorter reads (anywhere from 25–500bp) but many hundreds of thousands or millions of reads in a relatively short time (on the order of a day).
This results in high coverage, but the assembly process is much more computationally intensive. These technologies are vastly superior to Sanger sequencing due to the high volume of data and the relatively short time it takes to sequence a whole genome.
Metagenomic shotgun sequencing
Having reads of 400-500 base pairs length is sufficient to determine the species or strain of the organism where the DNA comes from, provided its genome is already known, by using for example a k-mer based taxonomic classifier software. With millions of reads from next generation sequencing of an environmental sample, it is possible to get a complete overview of any complex microbiome with thousands of species, like the gut flora. Advantages over 16S rRNA amplicon sequencing are: not being limited to bacteria; strain-level classification where amplicon sequencing only gets the genus; and the possibility to extract whole genes and specify their function as part of the metagenome. The sensitivity of metagenomic sequencing makes it an attractive choice for clinical use. It however emphasizes the problem of contamination of the sample or the sequencing pipeline.
See also
Clinical metagenomic sequencing
DNA sequencing theory
References
Further reading
External links
Molecular biology
DNA sequencing
1981 in biotechnology
Metagenomics
Bioinformatics | Shotgun sequencing | ["Chemistry", "Engineering", "Biology"] | 2,470 | ["Biological engineering", "Bioinformatics", "Molecular biology techniques", "DNA sequencing", "Molecular biology", "Biochemistry"] |
28,728 | https://en.wikipedia.org/wiki/Superconducting%20magnetic%20energy%20storage | Superconducting magnetic energy storage (SMES) systems store energy in the magnetic field created by the flow of direct current in a superconducting coil that has been cryogenically cooled to a temperature below its superconducting critical temperature. This use of superconducting coils to store magnetic energy was invented by M. Ferrier in 1970.
A typical SMES system includes three parts: superconducting coil, power conditioning system and cryogenically cooled refrigerator. Once the superconducting coil is energized, the current will not decay and the magnetic energy can be stored indefinitely.
The stored energy can be released back to the network by discharging the coil. The power conditioning system uses an inverter/rectifier to transform alternating current (AC) power to direct current or convert DC back to AC power. The inverter/rectifier accounts for about 2–3% energy loss in each direction. SMES loses the least amount of electricity in the energy storage process compared to other methods of storing energy. SMES systems are highly efficient; the round-trip efficiency is greater than 95%.
Due to the energy requirements of refrigeration and the high cost of superconducting wire, SMES is currently used for short duration energy storage. Therefore, SMES is most commonly devoted to improving power quality.
Advantages over other energy storage methods
There are several reasons for using superconducting magnetic energy storage instead of other energy storage methods. The most important advantage of SMES is that the time delay during charge and discharge is quite short. Power is available almost instantaneously and very high power output can be provided for a brief period of time. Other energy storage methods, such as pumped hydro or compressed air, have a substantial time delay associated with the energy conversion of stored mechanical energy back into electricity. Thus if demand is immediate, SMES is a viable option. Another advantage is that the loss of power is less than other storage methods because electric currents encounter almost no resistance. Additionally the main parts in a SMES are motionless, which results in high reliability.
Current use
There are several small SMES units available for commercial use and several larger test bed projects. Several 1 MW·h units are used for power quality control in installations around the world, especially to provide power quality at manufacturing plants requiring ultra-clean power, such as microchip fabrication facilities.
These facilities have also been used to provide grid stability in distribution systems. SMES is also used in utility applications. In northern Wisconsin, a string of distributed SMES units was deployed to enhance the stability of a transmission loop. The transmission line is subject to large, sudden load changes due to the operation of a paper mill, with the potential for uncontrolled fluctuations and voltage collapse.
The Engineering Test Model is a large SMES with a capacity of approximately 20 MW·h, capable of providing 40 MW of power for 30 minutes or 10 MW of power for 2 hours.
System architecture
A SMES system typically consists of four parts:
Superconducting magnet and supporting structure
This system includes the superconducting coil, a magnet and the coil protection. Here the energy is stored by disconnecting the coil from the larger system and then using electromagnetic induction from the magnet to induce a current in the superconducting coil. This coil then preserves the current until the coil is reconnected to the larger system, after which the coil partly or fully discharges.
Refrigeration system
The refrigeration system maintains the superconducting state of the coil by cooling the coil to the operating temperature.
Power conditioning system
The power conditioning system typically contains a power conversion system that converts DC to AC current and the other way around.
Control system
The control system monitors the power demand of the grid and controls the power flow from and to the coil. The control system also manages the condition of the SMES coil by controlling the refrigerator.
Working principle
As a consequence of Faraday's law of induction, any loop of wire in which the current, and hence the magnetic field it creates, changes in time also generates an electromotive force (EMF). The EMF is defined as the electromagnetic work done on a unit charge when it has travelled once around the conductive loop, and the energy put in against this EMF can be regarded as stored in the magnetic field. Building up the field draws power from the circuit equal to the EMF times the total charge divided by time,
P = ℰ Q / t,
where ℰ is the voltage or EMF. From the power we can calculate the work needed to create the field, and by energy conservation this work equals the energy stored in the field.
This formula can be rewritten in the easier-to-measure variable of electric current by the substitution I = Q / t, giving
P = ℰ I,
where I is the electric current in amperes. The EMF of an inductor is proportional to the rate of change of the current and can thus be rewritten as
ℰ = L dI/dt.
Substitution now gives
P = L I dI/dt,
where L is just a linearity constant called the inductance, measured in henries. Now that the power is known, all that is left to do is integrate it over the charging time to find the work:
W = ∫ P dt = ½ L I².
As said earlier, this work has to be equal to the energy stored in the field. This calculation is based on a single looped wire. For wires that are looped multiple times the inductance L increases, as L is simply defined as the ratio between the induced voltage and the rate of change of the current. In conclusion, the stored energy in the coil is equal to E = ½ L I² (a short numerical illustration follows the definitions below),
where
E = energy measured in joules
L = inductance measured in henries
I = current measured in amperes
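To give a feel for the magnitudes involved, the sketch below evaluates E = ½ L I² for a hypothetical coil and then asks what current a coil of the same inductance would need in order to store one megawatt-hour. Both the 10 H inductance and the storage target are invented for illustration.

```python
import math

def stored_energy(inductance_h, current_a):
    """Magnetic energy of a coil in joules: E = 1/2 * L * I^2."""
    return 0.5 * inductance_h * current_a ** 2

def required_current(inductance_h, energy_j):
    """Current needed to store a given energy in a coil of inductance L."""
    return math.sqrt(2 * energy_j / inductance_h)

# Hypothetical coil: 10 H carrying 1 kA.
print(f"10 H at 1 kA stores {stored_energy(10, 1_000) / 1e6:.1f} MJ")

# Current needed to store 1 MW*h (3.6 GJ) in the same 10 H coil.
target_j = 3.6e9
print(f"1 MW*h in a 10 H coil requires {required_current(10, target_j) / 1e3:.1f} kA")
```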
Consider a cylindrical coil with conductors of a rectangular cross section. The mean radius of coil is R. a and b are width and depth of the conductor. f is called form function, which is different for different shapes of coil. ξ (xi) and δ (delta) are two parameters to characterize the dimensions of the coil. We can therefore write the magnetic energy stored in such a cylindrical coil as shown below. This energy is a function of coil dimensions, number of turns and carrying current.
where
E = energy measured in joules
I = current measured in amperes
f(ξ, δ) = form function, joules per ampere-meter
N = number of turns of coil
Solenoid versus toroid
Besides the properties of the wire, the configuration of the coil itself is an important issue from a mechanical engineering aspect. Three factors affect the design and shape of the coil: the strain tolerance of the conductor, thermal contraction upon cooling, and the Lorentz forces in an energized coil. Among them, the strain tolerance is crucial not because of any electrical effect, but because it determines how much structural material is needed to keep the SMES from breaking. For small SMES systems, an optimistic strain tolerance of 0.3% is selected. Toroidal geometry can help to lessen the external magnetic forces and therefore reduces the size of mechanical support needed. Also, due to the low external magnetic field, toroidal SMES can be located near a utility or customer load.
For small SMES, solenoids are usually used because they are easy to coil and no pre-compression is needed. In toroidal SMES, the coil is always under compression by the outer hoops and two disks, one of which is on the top and the other is on the bottom to avoid breakage. Currently, there is little need for toroidal geometry for small SMES, but as the size increases, mechanical forces become more important and the toroidal coil is needed.
The older large SMES concepts usually featured a low aspect ratio solenoid approximately 100 m in diameter buried in earth. At the low extreme of size is the concept of micro-SMES solenoids, for energy storage range near 1 MJ.
Low-temperature versus high-temperature superconductors
Under steady state conditions and in the superconducting state, the coil resistance is negligible. However, the refrigerator necessary to keep the superconductor cool requires electric power and this refrigeration energy must be considered when evaluating the efficiency of SMES as an energy storage device.
Although high-temperature superconductors (HTS) have higher critical temperature, flux lattice melting takes place in moderate magnetic fields around a temperature lower than this critical temperature. The heat loads that must be removed by the cooling system include conduction through the support system, radiation from warmer to colder surfaces, AC losses in the conductor (during charge and discharge), and losses from the cold–to-warm power leads that connect the cold coil to the power conditioning system. Conduction and radiation losses are minimized by proper design of thermal surfaces. Lead losses can be minimized by good design of the leads. AC losses depend on the design of the conductor, the duty cycle of the device and the power rating.
The refrigeration requirements for HTSC and low-temperature superconductor (LTSC) toroidal coils, for the baseline temperatures of 77 K, 20 K, and 4.2 K, increase in that order. The refrigeration requirement here is defined as the electrical power needed to operate the refrigeration system. As the stored energy increases by a factor of 100, refrigeration cost only goes up by a factor of 20. Also, the savings in refrigeration for an HTSC system are larger (by 60% to 70%) than for LTSC systems.
Cost
Whether HTSC or LTSC systems are more economical is not clear-cut, because other major components determine the cost of SMES: the conductor, consisting of superconductor and copper stabilizer, and the cold support are major costs in themselves. They must be judged against the overall efficiency and cost of the device. Other components, such as the vacuum vessel insulation, have been shown to be a small part compared to the large coil cost. The combined costs of conductors, structure and refrigerator for toroidal coils are dominated by the cost of the superconductor. The same trend is true for solenoid coils. HTSC coils cost more than LTSC coils by a factor of 2 to 4. HTSC was expected to be cheaper due to lower refrigeration requirements, but this is not the case.
To gain some insight into costs consider a breakdown by major components of both HTSC and LTSC coils corresponding to three typical stored energy levels, 2, 20 and 200 MW·h. The conductor cost dominates the three costs for all HTSC cases and is particularly important at small sizes. The principal reason lies in the comparative current density of LTSC and HTSC materials. The critical current of HTSC wire is lower than LTSC wire generally in the operating magnetic field, about 5 to 10 teslas (T). Assume the wire costs are the same by weight. Because HTSC wire has lower (Jc) value than LTSC wire, it will take much more wire to create the same inductance. Therefore, the cost of wire is much higher than LTSC wire. Also, as the SMES size goes up from 2 to 20 to 200 MW·h, the LTSC conductor cost also goes up about a factor of 10 at each step. The HTSC conductor cost rises a little slower but is still by far the costliest item.
The structure costs of either HTSC or LTSC go up uniformly (a factor of 10) with each step from 2 to 20 to 200 MW·h. But HTSC structure cost is higher because the strain tolerance of the HTSC (ceramics cannot carry much tensile load) is less than that of LTSC materials such as NbTi or Nb3Sn, which demands more structural material. Thus, in the very large cases, the HTSC cost cannot be offset by simply reducing the coil size at a higher magnetic field.
It is worth noting here that the refrigerator cost in all cases is so small that there is very little percentage savings associated with reduced refrigeration demands at high temperature. This means that if a HTSC, BSCCO for instance, works better at a low temperature, say 20K, it will certainly be operated there. For very small SMES, the reduced refrigerator cost will have a more significant positive impact.
Clearly, the volume of superconducting coils increases with the stored energy. Also, we can see that the LTSC torus maximum diameter is always smaller for a HTSC magnet than LTSC due to higher magnetic field operation. In the case of solenoid coils, the height or length is also smaller for HTSC coils, but still much higher than in a toroidal geometry (due to low external magnetic field).
An increase in peak magnetic field yields a reduction in both volume (higher energy density) and cost (reduced conductor length). There is an optimum value of the peak magnetic field, about 7 T in this case. If the field is increased past the optimum, further volume reductions are possible with only a minimal increase in cost. The limit to which the field can be increased is usually not economic but physical, and it relates to the impossibility of bringing the inner legs of the toroid any closer together while still leaving room for the bucking cylinder.
The superconductor material is a key issue for SMES. Superconductor development efforts focus on increasing Jc and strain range and on reducing the wire manufacturing cost.
Applications
The energy density, efficiency and the high discharge rate make SMES useful systems to incorporate into modern energy grids and green energy initiatives. The SMES system's uses can be categorized into three categories: power supply systems, control systems and emergency/contingency systems.
FACTS
FACTS (flexible AC transmission system) devices are static devices that can be installed in electricity grids. These devices are used to enhance the controllability and power transfer capability of an electric power grid. The application of SMES in FACTS devices was the first application of SMES systems. The first realization of SMES using FACTS devices was installed by the Bonneville Power Administration in 1980. This system used SMES to damp low-frequency oscillations, which contributes to the stabilization of the power grid. In 2000, SMES-based FACTS systems were introduced at key points in the northern Wisconsin power grid to enhance the stability of the grid.
Load leveling
The use of electric power requires a stable energy supply that delivers constant power. This stability depends on the amount of power used and the amount of power generated. Power usage varies throughout the day and also varies with the seasons. SMES systems can be used to store energy when the generated power is higher than the demand (load), and to release power when the load is higher than the generated power, thereby compensating for power fluctuations. Using these systems makes it possible for conventional generating units to operate at a constant output, which is more efficient and convenient. However, when the power imbalance between supply and demand lasts for a long time, the SMES may become completely discharged.
Load frequency control
When the load does not match the generated power output, due to a load perturbation, the load can become larger than the rated power output of the generators. This can happen, for example, when wind generators do not spin due to a sudden lack of wind. Such a load perturbation can cause a load-frequency control problem, which can be amplified in DFIG-based wind power generators. The load disparity can be compensated by power output from SMES systems that store energy when generation exceeds the load. SMES-based load frequency control systems have the advantage of a fast response compared to contemporary control systems.
Uninterruptible power supplies
Uninterruptible power supplies (UPS) are used to protect against power surges and shortfalls by supplying a continuous power supply. This compensation is done by switching from the failing power supply to a SMES system that can almost instantaneously supply the power needed to continue the operation of essential systems. SMES-based UPS are most useful in systems that need to be kept at certain critical loads.
Circuit breaker reclosing
When the power angle difference across a circuit breaker is too large, protective relays prevent the reclosing of the circuit breaker. SMES systems can be used in these situations to reduce the power angle difference across the circuit breaker, thereby allowing it to reclose. These systems allow the quick restoration of system power after major transmission line outages.
Spinning reserve
Spinning reserve is the extra generating capacity that is available by increasing the power generation of systems that are connected to the grid. This capacity is reserved by the system operator to compensate for disruptions in the power grid. Due to the fast recharge times and fast alternating current to direct current conversion of SMES systems, these systems can be used as a spinning reserve when a major part of the grid or a transmission line is out of service.
SFCL
Superconducting fault current limiters (SFCL) are used to limit the current under a fault in the grid. In this system a superconductor is quenched (raised in temperature) when a fault in the grid line is detected. By quenching the superconductor, the resistance rises and the current is diverted to other grid lines. This is done without interrupting the larger grid. Once the fault is cleared, the SFCL temperature is lowered and the device again becomes invisible to the larger grid.
Electromagnetic launchers
Electromagnetic launchers are electric projectile weapons that use a magnetic field to accelerate projectiles to a very high velocity. These launchers require high power pulse sources in order to work. Such launchers can be realised by using the quick-release capability and the high power density of SMES systems.
Future developments for SMES systems
Future developments in the components of SMES systems could make them more viable for other applications; specifically, superconductors with higher critical temperatures and critical current densities. These limits are the same as those faced in other industrial uses of superconductors. Recent development of HTS wire made of YBCO with a superconducting transition temperature of around 90 K shows promise. Typically, the higher the superconducting transition temperature, the higher the maximum current density the superconductor can sustain before Cooper pair breakdown. A substance with a high critical temperature will generally have a higher critical current at low temperature than a superconductor with a lower critical temperature. This higher critical current will raise the energy storage quadratically, which may make SMES and other industrial applications of superconductors cost-effective.
Technical challenges
The energy content of current SMES systems is usually quite small. Methods to increase the energy stored in SMES often resort to large-scale storage units. As with other superconducting applications, cryogenics are a necessity. A robust mechanical structure is usually required to contain the very large Lorentz forces generated by and on the magnet coils. The dominant cost for SMES is the superconductor, followed by the cooling system and the rest of the mechanical structure.
Mechanical support
Needed because of large Lorentz forces generated by the strong magnetic field acting on the coil, and the strong magnetic field generated by the coil on the larger structure.
Size
To achieve commercially useful levels of storage, around 5 GW·h (18 TJ), a SMES installation would need a loop of around 800 m. This is traditionally pictured as a circle, though in practice it could be more like a rounded rectangle. In either case it would require access to a significant amount of land to house the installation.
Manufacturing
There are two manufacturing issues around SMES. The first is the fabrication of bulk cable suitable to carry the current. The HTSC superconducting materials found to date are relatively delicate ceramics, making it difficult to use established techniques to draw extended lengths of superconducting wire. Much research has focused on layer deposit techniques, applying a thin film of material onto a stable substrate, but this is currently only suitable for small-scale electrical circuits.
Infrastructure
The second problem is the infrastructure required for an installation. Until room-temperature superconductors are found, the 800 m loop of wire would have to be contained within a vacuum flask of liquid nitrogen. This in turn would require stable support, most commonly envisioned by burying the installation.
Critical magnetic field
Above a certain field strength, known as the critical field, the superconducting state is destroyed. This means that there exists a maximum charging rate for the superconducting material, given that the magnitude of the magnetic field determines the flux captured by the superconducting coil.
Critical current
In general power systems look to maximize the current they are able to handle. This makes any losses due to inefficiencies in the system relatively insignificant. Unfortunately, large currents may generate magnetic fields greater than the critical field due to Ampere's Law. Current materials struggle, therefore, to carry sufficient current to make a commercial storage facility economically viable.
Several issues at the onset of the technology have hindered its proliferation:
Expensive refrigeration units and high power cost to maintain operating temperatures
Existence and continued development of adequate technologies using normal conductors
These still pose problems for superconducting applications but are improving over time. Advances have been made in the performance of superconducting materials. Furthermore, the reliability and efficiency of refrigeration systems has improved significantly.
Long precooling time
At the moment it takes four months to cool the coil from room temperature to its operating temperature. This also means that the SMES takes equally long to return to operating temperature after maintenance and when restarting after operating failures.
Protection
Due to the large amount of energy stored, certain measures need to be taken to protect the coils from damage in the case of coil failure. The rapid release of energy in case of coil failure might damage surrounding systems. Some conceptual designs propose to incorporate a superconducting cable into the design with the goal of absorbing the energy released after a coil failure. The system also needs to be kept in excellent electrical isolation in order to prevent loss of energy.
See also
Grid energy storage
References
Further reading
External links
Cost Analysis of Energy Storage Systems for Electric Utility Applications
Loyola SMES summary
Superconductivity
Energy storage | Superconducting magnetic energy storage | ["Physics", "Materials_science", "Engineering"] | 4,724 | ["Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance"] |
28,729 | https://en.wikipedia.org/wiki/Solution%20%28chemistry%29 | In chemistry, a solution is defined by IUPAC as "A liquid or solid phase containing more than one substance, when for convenience one (or more) substance, which is called the solvent, is treated differently from the other substances, which are called solutes. When, as is often but not necessarily the case, the sum of the mole fractions of solutes is small compared with unity, the solution is called a dilute solution. A superscript attached to the ∞ symbol for a property of a solution denotes the property in the limit of infinite dilution." One important parameter of a solution is the concentration, which is a measure of the amount of solute in a given amount of solution or solvent. The term "aqueous solution" is used when one of the solvents is water.
Types
Homogeneous means that the components of the mixture form a single phase. Heterogeneous means that the components of the mixture are of different phases. The properties of the mixture (such as concentration, temperature, and density) can be uniformly distributed through the volume, but only in the absence of diffusion phenomena or after their completion. Usually, the substance present in the greatest amount is considered the solvent. Solvents can be gases, liquids, or solids. One or more components present in the solution other than the solvent are called solutes. The solution has the same physical state as the solvent.
Gaseous mixtures
If the solvent is a gas, only gases (non-condensable) or vapors (condensable) are dissolved under a given set of conditions. An example of a gaseous solution is air (oxygen and other gases dissolved in nitrogen). Since interactions between gaseous molecules play almost no role, non-condensable gases form rather trivial solutions. In the literature, they are not even classified as solutions, but simply addressed as homogeneous mixtures of gases. The Brownian motion and the permanent molecular agitation of gas molecules guarantee the homogeneity of gaseous systems. Non-condensable gaseous mixtures (e.g., air/CO2, or air/xenon) do not spontaneously demix, nor sediment into distinctly stratified and separate gas layers as a function of their relative density. Diffusion forces efficiently counteract gravitational forces under the normal conditions prevailing on Earth. The case of condensable vapors is different: once the saturation vapor pressure at a given temperature is reached, excess vapor condenses into the liquid state.
Liquid solutions
Liquids dissolve gases, other liquids, and solids. An example of a dissolved gas is oxygen in water, which allows fish to breathe under water. An example of a dissolved liquid is ethanol in water, as found in alcoholic beverages. An example of a dissolved solid is sugar water, which contains dissolved sucrose.
Solid solutions
If the solvent is a solid, then gases, liquids, and solids can be dissolved.
Gas in solids:
Hydrogen dissolves rather well in metals, especially in palladium; this is studied as a means of hydrogen storage.
Liquid in solid:
Mercury in gold, forming an amalgam
Water in solid salt or sugar, forming moist solids
Hexane in paraffin wax
Polymers containing plasticizers such as phthalate (liquid) in PVC (solid)
Solid in solid:
Steel, basically a solution of carbon atoms in a crystalline matrix of iron atoms
Alloys like bronze and many others
Radium sulfate dissolved in barium sulfate: a true solid solution of Ra in BaSO4
Solubility
The ability of one compound to dissolve in another compound is called solubility. When a liquid can completely dissolve in another liquid the two liquids are miscible. Two substances that can never mix to form a solution are said to be immiscible.
All solutions have a positive entropy of mixing. The interactions between different molecules or ions may be energetically favored or not. If the interactions are unfavorable, the free-energy gain from dissolving additional solute diminishes as the solute concentration increases. At some point the energetic cost outweighs the entropy gain, and no more solute particles can be dissolved; the solution is said to be saturated. However, the point at which a solution can become saturated can change significantly with different environmental factors, such as temperature, pressure, and contamination. For some solute–solvent combinations, a supersaturated solution can be prepared by raising the solubility (for example by increasing the temperature) to dissolve more solute and then lowering it (for example by cooling).
Usually, the greater the temperature of the solvent, the more of a given solid solute it can dissolve. However, most gases and some compounds exhibit solubilities that decrease with increased temperature. Such behavior is a result of an exothermic enthalpy of solution. Some surfactants exhibit this behaviour. The solubility of liquids in liquids is generally less temperature-sensitive than that of solids or gases.
Properties
The physical properties of compounds such as melting point and boiling point change when other compounds are added. Together they are called colligative properties. There are several ways to quantify the amount of one compound dissolved in the other compounds, collectively called concentration. Examples include molarity, volume fraction, and mole fraction.
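As a minimal illustration of these measures, the Python sketch below computes the molarity and mole fraction for sucrose dissolved in water; the 10 g of sucrose and 100 mL of water are assumed example values, and the solution volume is approximated by the solvent volume:

    # Concentration measures for sucrose (C12H22O11) dissolved in water
    M_SUCROSE = 342.30   # molar mass of sucrose, g/mol
    M_WATER = 18.015     # molar mass of water, g/mol

    mass_sucrose_g = 10.0    # assumed solute mass
    volume_water_L = 0.100   # assumed solvent volume
    mass_water_g = volume_water_L * 1000 * 0.997  # water density ~0.997 g/mL

    mol_sucrose = mass_sucrose_g / M_SUCROSE
    mol_water = mass_water_g / M_WATER

    molarity = mol_sucrose / volume_water_L                # mol per litre (approximate)
    mole_fraction = mol_sucrose / (mol_sucrose + mol_water)

    print(f"molarity ~ {molarity:.3f} mol/L, mole fraction ~ {mole_fraction:.4f}")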
The properties of ideal solutions can be calculated by the linear combination of the properties of its components. If both solute and solvent exist in equal quantities (such as in a 50% ethanol, 50% water solution), the concepts of "solute" and "solvent" become less relevant, but the substance that is more often used as a solvent is normally designated as the solvent (in this example, water).
Liquid solution characteristics
In principle, all types of liquids can behave as solvents: liquid noble gases, molten metals, molten salts, molten covalent networks, and molecular liquids. In the practice of chemistry and biochemistry, most solvents are molecular liquids. They can be classified into polar and non-polar, according to whether their molecules possess a permanent electric dipole moment. Another distinction is whether their molecules can form hydrogen bonds (protic and aprotic solvents). Water, the most commonly used solvent, is both polar and able to form hydrogen bonds.
Salts dissolve in polar solvents, forming positive and negative ions that are attracted to the negative and positive ends of the solvent molecule, respectively. If the solvent is water, hydration occurs when the charged solute ions become surrounded by water molecules. A standard example is saltwater. Such solutions are called electrolytes. Whenever salt dissolves in water, ion association has to be taken into account.
Polar solutes dissolve in polar solvents, forming polar bonds or hydrogen bonds. As an example, all alcoholic beverages are aqueous solutions of ethanol. On the other hand, non-polar solutes dissolve better in non-polar solvents. Examples are hydrocarbons such as oil and grease that easily mix, while being incompatible with water.
An example of the immiscibility of oil and water is a leak of petroleum from a damaged tanker, which does not dissolve in the ocean water but rather floats on the surface.
See also
Total dissolved solids is a common term in a range of disciplines, and can have different meanings depending on the analytical method used. In water quality, it refers to the amount of residue remaining after the evaporation of water from a sample.
References
External links
Homogeneous chemical mixtures
Alchemical processes
Physical chemistry
Colloidal chemistry
Drug delivery devices
Dosage forms | Solution (chemistry) | [
"Physics",
"Chemistry"
] | 1,528 | [
"Pharmacology",
"Colloidal chemistry",
"Applied and interdisciplinary physics",
"Drug delivery devices",
"Colloids",
"Surface science",
"Homogeneous chemical mixtures",
"Chemical mixtures",
"Alchemical processes",
"nan",
"Solutions",
"Physical chemistry"
] |
28,736 | https://en.wikipedia.org/wiki/Speed%20of%20light | The speed of light in vacuum, commonly denoted c, is a universal physical constant that is exactly equal to 299,792,458 metres per second (approximately 300,000 kilometres per second, or 186,000 miles per second). According to the special theory of relativity, c is the upper limit for the speed at which conventional matter or energy (and thus any signal carrying information) can travel through space.
All forms of electromagnetic radiation, including visible light, travel at the speed of light. For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and very sensitive measurements, their finite speed has noticeable effects. Much starlight viewed on Earth is from the distant past, allowing humans to study the history of the universe by viewing distant objects. When communicating with distant space probes, it can take minutes to hours for signals to travel. In computing, the speed of light fixes the ultimate minimum communication delay. The speed of light can be used in time of flight measurements to measure large distances to extremely high precision.
Ole Rømer first demonstrated in 1676 that light does not travel instantaneously by studying the apparent motion of Jupiter's moon Io. Progressively more accurate measurements of its speed came over the following centuries. In a paper published in 1865, James Clerk Maxwell proposed that light was an electromagnetic wave and, therefore, travelled at the speed c. In 1905, Albert Einstein postulated that the speed of light with respect to any inertial frame of reference is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity and, in doing so, showed that the parameter c had relevance outside of the context of light and electromagnetism.
Massless particles and field perturbations, such as gravitational waves, also travel at speed c in vacuum. Such particles and waves travel at c regardless of the motion of the source or the inertial reference frame of the observer. Particles with nonzero rest mass can be accelerated to approach c but can never reach it, regardless of the frame of reference in which their speed is measured. In the theory of relativity, c interrelates space and time and appears in the famous formula for mass–energy equivalence, E = mc².
In some cases, objects or waves may appear to travel faster than light (e.g., phase velocities of waves, the appearance of certain high-speed astronomical objects, and particular quantum effects). The expansion of the universe is understood to exceed the speed of light beyond a certain boundary.
The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of electromagnetic waves in wire cables is slower than c. The ratio between c and the speed at which light travels in a material is called the refractive index of the material (n). For example, for visible light, the refractive index of glass is typically around 1.5, meaning that light in glass travels at around 200,000 km/s; the refractive index of air for visible light is about 1.0003, so the speed of light in air is about 90 km/s slower than c.
Numerical value, notation, and units
The speed of light in vacuum is usually denoted by a lowercase c, for "constant" or the Latin celeritas (meaning 'swiftness, celerity'). In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant that was later shown to equal √2 times the speed of light in vacuum. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of light.
Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum. This subscripted notation, which is endorsed in official SI literature, has the same form as related electromagnetic constants: namely, μ0 for the vacuum permeability or magnetic constant, ε0 for the vacuum permittivity or electric constant, and Z0 for the impedance of free space. This article uses c exclusively for the speed of light in vacuum.
Use in unit systems
Since 1983, the constant has been defined in the International System of Units (SI) as exactly 299,792,458 metres per second; this relationship is used to define the metre as exactly the distance that light travels in vacuum in 1/299,792,458 of a second. By using the value of c, as well as an accurate measurement of the second, one can thus establish a standard for the metre. As a dimensional physical constant, the numerical value of c is different for different unit systems. For example, in imperial units, the speed of light is approximately 186,282 miles per second, or roughly 1 foot per nanosecond.
In branches of physics in which c appears often, such as in relativity, it is common to use systems of natural units of measurement or the geometrized unit system, in which c = 1. Using these units, c does not appear explicitly because multiplication or division by 1 does not affect the result. Its unit of light-second per second is still relevant, even if omitted.
Fundamental role in physics
The speed at which light waves propagate in vacuum is independent both of the motion of the wave source and of the inertial frame of reference of the observer. This invariance of the speed of light was postulated by Einstein in 1905, after being motivated by Maxwell's theory of electromagnetism and the lack of evidence for motion against the luminiferous aether. It has since been consistently confirmed by many experiments. It is only possible to verify experimentally that the two-way speed of light (for example, from a source to a mirror and back again) is frame-independent, because it is impossible to measure the one-way speed of light (for example, from a source to a distant detector) without some convention as to how clocks at the source and at the detector should be synchronized.
By adopting Einstein synchronization for the clocks, the one-way speed of light becomes equal to the two-way speed of light by definition. The special theory of relativity explores the consequences of this invariance of c with the assumption that the laws of physics are the same in all inertial frames of reference. One consequence is that c is the speed at which all massless particles and waves, including light, must travel in vacuum.
Special relativity has many counterintuitive and experimentally verified implications. These include the equivalence of mass and energy (E = mc²), length contraction (moving objects shorten), and time dilation (moving clocks run more slowly). The factor γ by which lengths contract and times dilate is known as the Lorentz factor and is given by γ = 1/√(1 − v²/c²), where v is the speed of the object. The difference of γ from 1 is negligible for speeds much slower than c, such as most everyday speeds (in which case special relativity is closely approximated by Galilean relativity), but it increases at relativistic speeds and diverges to infinity as v approaches c. For example, a time dilation factor of γ = 2 occurs at a relative velocity of 86.6% of the speed of light (v = 0.866 c). Similarly, a time dilation factor of γ = 10 occurs at 99.5% of the speed of light (v = 0.995 c).
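The quoted values of γ can be checked directly from the Lorentz-factor formula; a minimal Python sketch:

    import math

    def lorentz_factor(v_over_c):
        """gamma = 1 / sqrt(1 - (v/c)^2)"""
        return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

    for beta in (0.001, 0.5, 0.866, 0.995):
        print(f"v = {beta:.3f} c  ->  gamma = {lorentz_factor(beta):.3f}")
    # Everyday speeds give gamma ~ 1; 0.866 c gives gamma ~ 2; 0.995 c gives gamma ~ 10.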
The results of special relativity can be summarized by treating space and time as a unified structure known as spacetime (with c relating the units of space and time), and requiring that physical theories satisfy a special symmetry called Lorentz invariance, whose mathematical formulation contains the parameter c. Lorentz invariance is an almost universal assumption for modern physical theories, such as quantum electrodynamics, quantum chromodynamics, the Standard Model of particle physics, and general relativity. As such, the parameter c is ubiquitous in modern physics, appearing in many contexts that are unrelated to light. For example, general relativity predicts that c is also the speed of gravity and of gravitational waves, and observations of gravitational waves have been consistent with this prediction. In non-inertial frames of reference (gravitationally curved spacetime or accelerated reference frames), the local speed of light is constant and equal to c, but the speed of light can differ from c when measured from a remote frame of reference, depending on how measurements are extrapolated to the region.
It is generally assumed that fundamental constants such as c have the same value throughout spacetime, meaning that they do not depend on location and do not vary with time. However, it has been suggested in various theories that the speed of light may have changed over time. No conclusive evidence for such changes has been found, but they remain the subject of ongoing research.
It is generally assumed that the two-way speed of light is isotropic, meaning that it has the same value regardless of the direction in which it is measured. Observations of the emissions from nuclear energy levels as a function of the orientation of the emitting nuclei in a magnetic field (see Hughes–Drever experiment), and of rotating optical resonators (see Resonator experiments) have put stringent limits on the possible two-way anisotropy.
Upper limit on speeds
According to special relativity, the energy of an object with rest mass m and speed v is given by E = γmc², where γ is the Lorentz factor defined above. When v is zero, γ is equal to one, giving rise to the famous E = mc² formula for mass–energy equivalence. The γ factor approaches infinity as v approaches c, and it would take an infinite amount of energy to accelerate an object with mass to the speed of light. The speed of light is the upper limit for the speeds of objects with positive rest mass, and individual photons cannot travel faster than the speed of light. This is experimentally established in many tests of relativistic energy and momentum.
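The divergence of the required energy as v approaches c can be seen numerically; in the sketch below the 1 kg mass is an arbitrary assumption:

    import math

    C = 299_792_458.0  # speed of light, m/s

    def kinetic_energy(mass_kg, v_over_c):
        """Relativistic kinetic energy: (gamma - 1) * m * c^2"""
        gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
        return (gamma - 1.0) * mass_kg * C ** 2

    for beta in (0.9, 0.99, 0.999, 0.999999):
        print(f"v = {beta} c : kinetic energy = {kinetic_energy(1.0, beta):.3e} J")
    # The required energy grows without bound as v/c approaches 1.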
More generally, it is impossible for signals or energy to travel faster than c. One argument for this follows from the counter-intuitive implication of special relativity known as the relativity of simultaneity. If the spatial distance between two events A and B is greater than the time interval between them multiplied by c then there are frames of reference in which A precedes B, others in which B precedes A, and others in which they are simultaneous. As a result, if something were travelling faster than c relative to an inertial frame of reference, it would be travelling backwards in time relative to another frame, and causality would be violated. In such a frame of reference, an "effect" could be observed before its "cause". Such a violation of causality has never been recorded, and would lead to paradoxes such as the tachyonic antitelephone.
Faster-than-light observations and experiments
There are situations in which it may seem that matter, energy, or information-carrying signal travels at speeds greater than c, but they do not. For example, as is discussed in the propagation of light in a medium section below, many wave velocities can exceed c. The phase velocity of X-rays through most glasses can routinely exceed c, but phase velocity does not determine the velocity at which waves convey information.
If a laser beam is swept quickly across a distant object, the spot of light can move faster than c, although the initial movement of the spot is delayed because of the time it takes light to get to the distant object at the speed c. However, the only physical entities that are moving are the laser and its emitted light, which travels at the speed c from the laser to the various positions of the spot. Similarly, a shadow projected onto a distant object can be made to move faster than c, after a delay in time. In neither case does any matter, energy, or information travel faster than light.
The rate of change in the distance between two objects in a frame of reference with respect to which both are moving (their closing speed) may have a value in excess of c. However, this does not represent the speed of any single object as measured in a single inertial frame.
Certain quantum effects appear to be transmitted instantaneously and therefore faster than c, as in the EPR paradox. An example involves the quantum states of two particles that can be entangled. Until either of the particles is observed, they exist in a superposition of two quantum states. If the particles are separated and one particle's quantum state is observed, the other particle's quantum state is determined instantaneously. However, it is impossible to control which quantum state the first particle will take on when it is observed, so information cannot be transmitted in this manner.
Another quantum effect that predicts the occurrence of faster-than-light speeds is called the Hartman effect: under certain conditions the time needed for a virtual particle to tunnel through a barrier is constant, regardless of the thickness of the barrier. This could result in a virtual particle crossing a large gap faster than light. However, no information can be sent using this effect.
So-called superluminal motion is seen in certain astronomical objects, such as the relativistic jets of radio galaxies and quasars. However, these jets are not moving at speeds in excess of the speed of light: the apparent superluminal motion is a projection effect caused by objects moving near the speed of light and approaching Earth at a small angle to the line of sight: since the light which was emitted when the jet was farther away took longer to reach the Earth, the time between two successive observations corresponds to a longer time between the instants at which the light rays were emitted.
A 2011 experiment where neutrinos were observed to travel faster than light turned out to be due to experimental error.
In models of the expanding universe, the farther galaxies are from each other, the faster they drift apart. For example, galaxies far away from Earth are inferred to be moving away from the Earth with speeds proportional to their distances. Beyond a boundary called the Hubble sphere, the rate at which their distance from Earth increases becomes greater than the speed of light.
These recession rates, defined as the increase in proper distance per cosmological time, are not velocities in a relativistic sense. Faster-than-light cosmological recession speeds are only a coordinate artifact.
Propagation of light
In classical physics, light is described as a type of electromagnetic wave. The classical behaviour of the electromagnetic field is described by Maxwell's equations, which predict that the speed c with which electromagnetic waves (such as light) propagate in vacuum is related to the distributed capacitance and inductance of vacuum, otherwise respectively known as the electric constant ε0 and the magnetic constant μ0, by the equation c = 1/√(ε0μ0).
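Numerically, this relation can be checked using the CODATA values of the two constants (hard-coded below rather than taken from a physics library):

    import math

    EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA 2018)
    MU_0 = 1.25663706212e-6       # vacuum permeability, H/m (CODATA 2018)

    c_from_maxwell = 1.0 / math.sqrt(EPSILON_0 * MU_0)
    print(f"c = 1/sqrt(eps0 * mu0) = {c_from_maxwell:,.0f} m/s")  # ~299,792,458 m/s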
In modern quantum physics, the electromagnetic field is described by the theory of quantum electrodynamics (QED). In this theory, light is described by the fundamental excitations (or quanta) of the electromagnetic field, called photons. In QED, photons are massless particles and thus, according to special relativity, they travel at the speed of light in vacuum.
Extensions of QED in which the photon has a mass have been considered. In such a theory, its speed would depend on its frequency, and the invariant speed c of special relativity would then be the upper limit of the speed of light in vacuum. No variation of the speed of light with frequency has been observed in rigorous testing, putting stringent limits on the mass of the photon. The limit obtained depends on the model used: if the massive photon is described by Proca theory, the experimental upper bound for its mass is about 10⁻⁵⁷ grams; if photon mass is generated by a Higgs mechanism, the experimental upper limit is less sharp (roughly 2 × 10⁻⁴⁷ g).
Another reason for the speed of light to vary with its frequency would be the failure of special relativity to apply to arbitrarily small scales, as predicted by some proposed theories of quantum gravity. In 2009, the observation of gamma-ray burst GRB 090510 found no evidence for a dependence of photon speed on energy, supporting tight constraints in specific models of spacetime quantization on how this speed is affected by photon energy for energies approaching the Planck scale.
In a medium
In a medium, light usually does not propagate at a speed equal to c; further, different types of light wave will travel at different speeds. The speed at which the individual crests and troughs of a plane wave (a wave filling the whole space, with only one frequency) propagate is called the phase velocity vp. A physical signal with a finite extent (a pulse of light) travels at a different speed. The overall envelope of the pulse travels at the group velocity vg, and its earliest part travels at the front velocity vf.
The phase velocity is important in determining how a light wave travels through a material or from one material to another. It is often represented in terms of a refractive index. The refractive index of a material is defined as the ratio of c to the phase velocity vp in the material: larger indices of refraction indicate lower speeds. The refractive index of a material may depend on the light's frequency, intensity, polarization, or direction of propagation; in many cases, though, it can be treated as a material-dependent constant. The refractive index of air is approximately 1.0003. Denser media, such as water, glass, and diamond, have refractive indexes of around 1.3, 1.5 and 2.4, respectively, for visible light.
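The corresponding phase velocities follow directly from v_p = c/n; a short sketch using the approximate indices quoted above:

    C = 299_792_458  # speed of light in vacuum, m/s

    refractive_index = {"air": 1.0003, "water": 1.33, "glass": 1.5, "diamond": 2.4}

    for medium, n in refractive_index.items():
        # phase velocity of visible light in the medium
        print(f"{medium:8s}: v_p = c / {n} = {C / n / 1e6:,.1f} million m/s")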
In exotic materials like Bose–Einstein condensates near absolute zero, the effective speed of light may be only a few metres per second. However, this represents absorption and re-radiation delay between atoms, as do all slower-than-c speeds in material substances. As an extreme example of light "slowing" in matter, two independent teams of physicists claimed to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium. The popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrarily later time, as stimulated by a second laser pulse. During the time it had "stopped", it had ceased to be light. This type of behaviour is generally microscopically true of all transparent media which "slow" the speed of light.
In transparent materials, the refractive index generally is greater than 1, meaning that the phase velocity is less than c. In other materials, it is possible for the refractive index to become smaller than 1 for some frequencies; in some exotic materials it is even possible for the index of refraction to become negative. The requirement that causality is not violated implies that the real and imaginary parts of the dielectric constant of any material, corresponding respectively to the index of refraction and to the attenuation coefficient, are linked by the Kramers–Kronig relations. In practical terms, this means that in a material with refractive index less than 1, the wave will be absorbed quickly.
A pulse with different group and phase velocities (which occurs if the phase velocity is not the same for all the frequencies of the pulse) smears out over time, a process known as dispersion. Certain materials have an exceptionally low (or even zero) group velocity for light waves, a phenomenon called slow light.
The opposite, group velocities exceeding c, was proposed theoretically in 1993 and achieved experimentally in 2000. It should even be possible for the group velocity to become infinite or negative, with pulses travelling instantaneously or backwards in time.
None of these options allow information to be transmitted faster than c. It is impossible to transmit information with a light pulse any faster than the speed of the earliest part of the pulse (the front velocity). It can be shown that this is (under certain assumptions) always equal to c.
It is possible for a particle to travel through a medium faster than the phase velocity of light in that medium (but still slower than c). When a charged particle does that in a dielectric material, the electromagnetic equivalent of a shock wave, known as Cherenkov radiation, is emitted.
Practical effects of finiteness
The speed of light is of relevance to telecommunications: the one-way and round-trip delay time are greater than zero. This applies from small to astronomical scales. On the other hand, some techniques depend on the finite speed of light, for example in distance measurements.
Small scales
In computers, the speed of light imposes a limit on how quickly data can be sent between processors. If a processor operates at 1 gigahertz, a signal can travel only a maximum of about 30 centimetres (about one foot) in a single clock cycle – in practice, this distance is even shorter since the printed circuit board refracts and slows down signals. Processors must therefore be placed close to each other, as well as memory chips, to minimize communication latencies, and care must be exercised when routing wires between them to ensure signal integrity. If clock frequencies continue to increase, the speed of light may eventually become a limiting factor for the internal design of single chips.
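The distance available per clock cycle is simply c divided by the clock frequency. In the sketch below, the factor of roughly one half applied to signals on a printed circuit board is an illustrative assumption:

    C = 299_792_458  # m/s

    for clock_hz in (1e9, 3e9, 5e9):
        free_space_cm = C / clock_hz * 100
        on_board_cm = free_space_cm * 0.5  # assumed propagation factor on a circuit board
        print(f"{clock_hz / 1e9:.0f} GHz: {free_space_cm:.1f} cm in vacuum, "
              f"~{on_board_cm:.1f} cm on a typical board")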
Large distances on Earth
Given that the equatorial circumference of the Earth is about 40,075 km and that c is about 300,000 km/s, the theoretical shortest time for a piece of information to travel half the globe along the surface is about 67 milliseconds. When light is traveling in optical fibre (a transparent material) the actual transit time is longer, in part because the speed of light is slower by about 35% in optical fibre, depending on its refractive index n. Straight lines are rare in global communications and the travel time increases when signals pass through electronic switches or signal regenerators.
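The 67-millisecond figure, and the corresponding figure for optical fibre, amount to a few lines of arithmetic; the fibre case below assumes a refractive index of about 1.5, consistent with the roughly 35% slowdown mentioned above:

    C = 299_792_458.0                    # m/s
    EARTH_CIRCUMFERENCE = 40_075_000.0   # m, equatorial

    half_globe = EARTH_CIRCUMFERENCE / 2
    t_vacuum = half_globe / C
    t_fibre = half_globe / (C / 1.5)     # light in glass fibre travels ~35% slower

    print(f"half the globe: {t_vacuum * 1000:.0f} ms at c, "
          f"{t_fibre * 1000:.0f} ms in a straight fibre")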
Although this distance is largely irrelevant for most applications, latency becomes important in fields such as high-frequency trading, where traders seek to gain minute advantages by delivering their trades to exchanges fractions of a second ahead of other traders. For example, traders have been switching to microwave communications between trading hubs, because of the advantage which radio waves travelling at near to the speed of light through air have over comparatively slower fibre optic signals.
Spaceflight and astronomy
Similarly, communications between the Earth and spacecraft are not instantaneous. There is a brief delay from the source to the receiver, which becomes more noticeable as distances increase. This delay was significant for communications between ground control and Apollo 8 when it became the first crewed spacecraft to orbit the Moon: for every question, the ground control station had to wait at least three seconds for the answer to arrive.
The communications delay between Earth and Mars can vary between five and twenty minutes depending upon the relative positions of the two planets. As a consequence of this, if a robot on the surface of Mars were to encounter a problem, its human controllers would not be aware of it until approximately five to twenty minutes later. It would then take a further five to twenty minutes for commands to travel from Earth to Mars.
Receiving light and other signals from distant astronomical sources takes much longer. For example, it takes 13 billion (1.3 × 10¹⁰) years for light to travel to Earth from the faraway galaxies viewed in the Hubble Ultra-Deep Field images. Those photographs, taken today, capture images of the galaxies as they appeared 13 billion years ago, when the universe was less than a billion years old. The fact that more distant objects appear to be younger, due to the finite speed of light, allows astronomers to infer the evolution of stars, of galaxies, and of the universe itself.
Astronomical distances are sometimes expressed in light-years, especially in popular science publications and media. A light-year is the distance light travels in one Julian year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecs. In round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion miles. Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.
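These light-year figures follow from multiplying c by the number of seconds in a Julian year; a short check in Python:

    C = 299_792_458.0                 # m/s
    JULIAN_YEAR_S = 365.25 * 86_400   # seconds in a Julian year

    light_year_km = C * JULIAN_YEAR_S / 1000
    print(f"1 light-year ~ {light_year_km:.4e} km")            # ~9.461e12 km
    print(f"Proxima Centauri ~ {4.2 * light_year_km:.3e} km")  # at ~4.2 light-years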
Distance measurement
Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. A Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's position. Because light travels about 300,000 kilometres (186,000 miles) in one second, these measurements of small fractions of a second must be very precise. The Lunar Laser Ranging experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit times.
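The round-trip principle reduces to distance = c × t/2; a minimal sketch (the echo delay is an assumed example value):

    C = 299_792_458.0  # m/s

    def range_from_echo(round_trip_seconds):
        """Distance to a target from a radar round-trip time."""
        return C * round_trip_seconds / 2

    # An echo returning after 1 millisecond corresponds to a target ~150 km away
    print(f"{range_from_echo(1e-3) / 1000:.1f} km")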
Measurement
There are different ways to determine the value of c. One way is to measure the actual speed at which light waves propagate, which can be done in various astronomical and Earth-based setups. It is also possible to determine c from other physical laws where it appears, for example, by determining the values of the electromagnetic constants ε0 and μ0 and using their relation to c. Historically, the most accurate results have been obtained by separately determining the frequency and wavelength of a light beam, with their product equalling c. This is described in more detail in the "Interferometry" section below.
In 1983 the metre was defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second", fixing the value of the speed of light at 299,792,458 m/s by definition, as described below. Consequently, accurate measurements of the speed of light yield an accurate realization of the metre rather than an accurate value of c.
Astronomical measurements
Outer space is a convenient setting for measuring the speed of light because of its large scale and nearly perfect vacuum. Typically, one measures the time needed for light to traverse some reference distance in the Solar System, such as the radius of the Earth's orbit. Historically, such measurements could be made fairly accurately, compared to how accurately the length of the reference distance is known in Earth-based units.
Ole Rømer used an astronomical measurement to make the first quantitative estimate of the speed of light in the year 1676. When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from it. The difference is small, but the cumulative time becomes significant when measured over months. The distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun. The observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distance. Rømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.
Another method is to use the aberration of light, discovered and explained by James Bradley in the 18th century. This effect results from the vector addition of the velocity of light arriving from a distant source (such as a star) and the velocity of its observer. A moving observer thus sees the light coming from a slightly different direction and consequently sees the source at a position shifted from its original position. Since the direction of the Earth's velocity changes continuously as the Earth orbits the Sun, this effect causes the apparent position of stars to move around. From the angular difference in the position of stars (maximally 20.5 arcseconds) it is possible to express the speed of light in terms of the Earth's velocity around the Sun, which with the known length of a year can be converted to the time needed to travel from the Sun to the Earth. In 1729, Bradley used this method to derive that light travelled 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.
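The ratio of the speed of light to the Earth's orbital speed follows from the maximum aberration angle; the short check below uses the 20.5-arcsecond figure quoted above:

    import math

    aberration_rad = math.radians(20.5 / 3600)    # 20.5 arcseconds in radians
    speed_ratio = 1.0 / math.tan(aberration_rad)  # c divided by Earth's orbital speed
    print(f"c is roughly {speed_ratio:,.0f} times the Earth's orbital speed")
    # With the Earth's orbital speed of ~29.8 km/s this gives c ~ 3e8 m/s.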
Astronomical unit
An astronomical unit (AU) is approximately the average distance between the Earth and Sun. It was redefined in 2012 as exactly 149,597,870,700 metres. Previously the AU was not based on the International System of Units but in terms of the gravitational force exerted by the Sun in the framework of classical mechanics. The current definition uses the recommended value in metres for the previous definition of the astronomical unit, which was determined by measurement. This redefinition is analogous to that of the metre and likewise has the effect of fixing the speed of light to an exact value in astronomical units per second (via the exact speed of light in metres per second).
Previously, the inverse of c expressed in seconds per astronomical unit was measured by comparing the time for radio signals to reach different spacecraft in the Solar System, with their position calculated from the gravitational effects of the Sun and various planets. By combining many such measurements, a best fit value for the light time per unit distance could be obtained. For example, in 2009, the best estimate, as approved by the International Astronomical Union (IAU), was:
light time for unit distance: tau = ,
c = = .
The relative uncertainty in these measurements is 0.02 parts per billion (2 × 10⁻¹¹), equivalent to the uncertainty in Earth-based measurements of length by interferometry. Since the metre is defined to be the length travelled by light in a certain time interval, the measurement of the light time in terms of the previous definition of the astronomical unit can also be interpreted as measuring the length of an AU (old definition) in metres.
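With the exact 2012 value of the astronomical unit and the exact value of c, the light time for unit distance follows from a single division; a quick check:

    C = 299_792_458.0       # m/s (exact)
    AU = 149_597_870_700.0  # m (exact, 2012 definition)

    light_time_s = AU / C
    print(f"light time for 1 au: {light_time_s:.6f} s "
          f"(about 8 min {light_time_s % 60:.0f} s)")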
Time of flight techniques
A method of measuring the speed of light is to measure the time needed for light to travel to a mirror at a known distance and back. This is the working principle behind experiments by Hippolyte Fizeau and Léon Foucault.
The setup as used by Fizeau consists of a beam of light directed at a mirror about 8 kilometres (5 mi) away. On the way from the source to the mirror, the beam passes through a rotating cogwheel. At a certain rate of rotation, the beam passes through one gap on the way out and another on the way back, but at slightly higher or lower rates, the beam strikes a tooth and does not pass through the wheel. Knowing the distance between the wheel and the mirror, the number of teeth on the wheel, and the rate of rotation, the speed of light can be calculated.
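A rough reconstruction of the cogwheel arithmetic is shown below; the mirror distance, tooth count, and rotation rate are approximations of Fizeau's reported setup and should be read as illustrative assumptions:

    # At the first "eclipse" the wheel advances by half a tooth-plus-gap period
    # during the round trip, so t = 1 / (2 * n_teeth * rev_per_s) and c = 2 * d / t.
    distance_m = 8_633   # mirror distance (assumed, ~8.6 km)
    n_teeth = 720        # teeth on the wheel (assumed)
    rev_per_s = 12.6     # rotation rate at the first eclipse (assumed)

    round_trip_time = 1.0 / (2 * n_teeth * rev_per_s)
    c_estimate = 2 * distance_m / round_trip_time
    print(f"c ~ {c_estimate:.3e} m/s")  # ~3.1e8 m/s, a few percent above the modern value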
The method of Foucault replaces the cogwheel with a rotating mirror. Because the mirror keeps rotating while the light travels to the distant mirror and back, the light is reflected from the rotating mirror at a different angle on its way out than it is on its way back. From this difference in angle, the known speed of rotation and the distance to the distant mirror the speed of light may be calculated. Foucault used this apparatus to measure the speed of light in air versus water, based on a suggestion by François Arago.
Today, using oscilloscopes with time resolutions of less than one nanosecond, the speed of light can be directly measured by timing the delay of a light pulse from a laser or an LED reflected from a mirror. This method is less precise (with errors of the order of 1%) than other modern techniques, but it is sometimes used as a laboratory experiment in college physics classes.
Electromagnetic constants
An option for deriving c that does not directly depend on a measurement of the propagation of electromagnetic waves is to use the relation between c and the vacuum permittivity ε0 and vacuum permeability μ0 established by Maxwell's theory: c² = 1/(ε0μ0). The vacuum permittivity may be determined by measuring the capacitance and dimensions of a capacitor, whereas the value of the vacuum permeability was historically fixed at exactly 4π × 10⁻⁷ H/m through the definition of the ampere. Rosa and Dorsey used this method in 1907 to find a value of . Their method depended upon having a standard unit of electrical resistance, the "international ohm", and so its accuracy was limited by how this standard was defined.
Cavity resonance
Another way to measure the speed of light is to independently measure the frequency f and wavelength λ of an electromagnetic wave in vacuum. The value of c can then be found by using the relation c = fλ. One option is to measure the resonance frequency of a cavity resonator. If the dimensions of the resonance cavity are also known, these can be used to determine the wavelength of the wave. In 1946, Louis Essen and A.C. Gordon-Smith established the frequency for a variety of normal modes of microwaves of a microwave cavity of precisely known dimensions. The dimensions were established to an accuracy of about ±0.8 μm using gauges calibrated by interferometry. As the wavelength of the modes was known from the geometry of the cavity and from electromagnetic theory, knowledge of the associated frequencies enabled a calculation of the speed of light.
The Essen–Gordon-Smith result, , was substantially more precise than those found by optical techniques. By 1950, repeated measurements by Essen established a result of .
A household demonstration of this technique is possible, using a microwave oven and food such as marshmallows or margarine: if the turntable is removed so that the food does not move, it will cook the fastest at the antinodes (the points at which the wave amplitude is the greatest), where it will begin to melt. The distance between two such spots is half the wavelength of the microwaves; by measuring this distance and multiplying the wavelength by the microwave frequency (usually displayed on the back of the oven, typically 2450 MHz), the value of c can be calculated, "often with less than 5% error".
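The household calculation amounts to c ≈ 2 × (spot spacing) × frequency; in the sketch below the 6.1 cm spacing is an assumed measurement, while 2450 MHz is the typical magnetron frequency mentioned above:

    antinode_spacing_m = 0.061  # measured distance between melted spots (assumed)
    frequency_hz = 2.45e9       # typical microwave oven frequency

    wavelength = 2 * antinode_spacing_m  # melted spots are half a wavelength apart
    c_estimate = wavelength * frequency_hz
    print(f"c ~ {c_estimate:.2e} m/s")   # ~2.99e8 m/s, within a few percent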
Interferometry
Interferometry is another method to find the wavelength of electromagnetic radiation for determining the speed of light. A coherent beam of light (e.g. from a laser), with a known frequency (f), is split to follow two paths and then recombined. By adjusting the path length while observing the interference pattern and carefully measuring the change in path length, the wavelength of the light (λ) can be determined. The speed of light is then calculated using the equation c = λf.
Before the advent of laser technology, coherent radio sources were used for interferometry measurements of the speed of light. Interferometric determination of wavelength becomes less precise with wavelength and the experiments were thus limited in precision by the long wavelength (~) of the radiowaves. The precision can be improved by using light with a shorter wavelength, but then it becomes difficult to directly measure the frequency of the light.
One way around this problem is to start with a low frequency signal of which the frequency can be precisely measured, and from this signal progressively synthesize higher frequency signals whose frequency can then be linked to the original signal. A laser can then be locked to the frequency, and its wavelength can be determined using interferometry. This technique was due to a group at the National Bureau of Standards (which later became the National Institute of Standards and Technology). They used it in 1972 to measure the speed of light in vacuum with a fractional uncertainty of .
History
Until the early modern period, it was not known whether light travelled instantaneously or at a very fast finite speed. The first extant recorded examination of this subject was in ancient Greece. The ancient Greeks, Arabic scholars, and classical European scientists long debated this until Rømer provided the first calculation of the speed of light. Einstein's theory of special relativity postulates that the speed of light is constant regardless of one's frame of reference. Since then, scientists have provided increasingly accurate measurements.
Early history
Empedocles (c. 490–430 BCE) was the first to propose a theory of light and claimed that light has a finite speed. He maintained that light was something in motion, and therefore must take some time to travel. Aristotle argued, to the contrary, that "light is due to the presence of something, but it is not a movement". Euclid and Ptolemy advanced Empedocles' emission theory of vision, where light is emitted from the eye, thus enabling sight. Based on that theory, Heron of Alexandria argued that the speed of light must be infinite because distant objects such as stars appear immediately upon opening the eyes.
Early Islamic philosophers initially agreed with the Aristotelian view that light had no speed of travel. In 1021, Alhazen (Ibn al-Haytham) published the Book of Optics, in which he presented a series of arguments dismissing the emission theory of vision in favour of the now accepted intromission theory, in which light moves from an object into the eye. This led Alhazen to propose that light must have a finite speed, and that the speed of light is variable, decreasing in denser bodies. He argued that light is substantial matter, the propagation of which requires time, even if this is hidden from the senses. Also in the 11th century, Abū Rayhān al-Bīrūnī agreed that light has a finite speed, and observed that the speed of light is much faster than the speed of sound.
In the 13th century, Roger Bacon argued that the speed of light in air was not infinite, using philosophical arguments backed by the writing of Alhazen and Aristotle. In the 1270s, Witelo considered the possibility of light travelling at infinite speed in vacuum, but slowing down in denser bodies.
In the early 17th century, Johannes Kepler believed that the speed of light was infinite since empty space presents no obstacle to it. René Descartes argued that if the speed of light were to be finite, the Sun, Earth, and Moon would be noticeably out of alignment during a lunar eclipse. Although this argument fails when aberration of light is taken into account, the latter was not recognized until the following century. Since such misalignment had not been observed, Descartes concluded the speed of light was infinite. Descartes speculated that if the speed of light were found to be finite, his whole system of philosophy might be demolished. Despite this, in his derivation of Snell's law, Descartes assumed that some kind of motion associated with light was faster in denser media. Pierre de Fermat derived Snell's law using the opposing assumption, the denser the medium the slower light travelled. Fermat also argued in support of a finite speed of light.
First measurement attempts
In 1629, Isaac Beeckman proposed an experiment in which a person observes the flash of a cannon reflecting off a mirror about one mile (1.6 km) away. In 1638, Galileo Galilei proposed an experiment, with an apparent claim to having performed it some years earlier, to measure the speed of light by observing the delay between uncovering a lantern and its perception some distance away. He was unable to distinguish whether light travel was instantaneous or not, but concluded that if it were not, it must nevertheless be extraordinarily rapid. In 1667, the Accademia del Cimento of Florence reported that it had performed Galileo's experiment, with the lanterns separated by about one mile, but no delay was observed. The actual delay in this experiment would have been about 11 microseconds.
The first quantitative estimate of the speed of light was made in 1676 by Ole Rømer. From the observation that the periods of Jupiter's innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when receding from it, he concluded that light travels at a finite speed, and estimated that it takes light 22 minutes to cross the diameter of Earth's orbit. Christiaan Huygens combined this estimate with an estimate for the diameter of the Earth's orbit to obtain an estimate of the speed of light of about 220,000 km/s, which is 27% lower than the actual value.
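Rømer's 22-minute figure can be combined with the modern size of the Earth's orbit for comparison; the sketch below takes the orbital diameter as two modern astronomical units:

    AU = 1.495978707e11     # m
    light_time_s = 22 * 60  # Rømer's estimate for light to cross the orbit's diameter

    c_estimate = (2 * AU) / light_time_s
    print(f"c ~ {c_estimate:.2e} m/s")  # ~2.3e8 m/s versus the modern ~3.0e8 m/s
    # The true light time across the orbit's diameter is about 16.6 minutes; Huygens'
    # lower figure also reflects the orbit diameter he assumed.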
In his 1704 book Opticks, Isaac Newton reported Rømer's calculations of the finite speed of light and gave a value of "seven or eight minutes" for the time taken for light to travel from the Sun to the Earth (the modern value is 8 minutes 19 seconds). Newton queried whether Rømer's eclipse shadows were coloured. Hearing that they were not, he concluded the different colours travelled at the same speed. In 1729, James Bradley discovered stellar aberration. From this effect he determined that light must travel 10,210 times faster than the Earth in its orbit (the modern figure is 10,066 times faster) or, equivalently, that it would take light 8 minutes 12 seconds to travel from the Sun to the Earth.
Connections with electromagnetism
In the 19th century Hippolyte Fizeau developed a method to determine the speed of light based on time-of-flight measurements on Earth and reported a value of about 313,000 km/s. His method was improved upon by Léon Foucault, who obtained a value of 298,000 km/s in 1862. In the year 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch measured the ratio of the electromagnetic and electrostatic units of charge, 1/√(ε0μ0), by discharging a Leyden jar, and found that its numerical value was very close to the speed of light as measured directly by Fizeau. The following year Gustav Kirchhoff calculated that an electric signal in a resistanceless wire travels along the wire at this speed.
In the early 1860s, Maxwell showed that, according to the theory of electromagnetism he was working on, electromagnetic waves propagate in empty space at a speed equal to the above Weber/Kohlrausch ratio, and drawing attention to the numerical proximity of this value to the speed of light as measured by Fizeau, he proposed that light is in fact an electromagnetic wave. Maxwell backed up his claim with his own experiment published in the 1868 Philosophical Transactions which determined the ratio of the electrostatic and electromagnetic units of electricity.
"Luminiferous aether"
The wave properties of light had been well known since the work of Thomas Young. In the 19th century, physicists believed that light propagated in a medium called the aether (or ether). The electric force, by contrast, appeared to act like the gravitational force in Newton's law, with no transmitting medium required. After Maxwell's theory unified light with electric and magnetic waves, the favoured view became that both light and electromagnetic waves propagate in the same medium, called the luminiferous aether.
It was thought at the time that empty space was filled with a background medium called the luminiferous aether in which the electromagnetic field existed. Some physicists thought that this aether acted as a preferred frame of reference for the propagation of light and therefore it should be possible to measure the motion of the Earth with respect to this medium, by measuring the isotropy of the speed of light. Beginning in the 1880s several experiments were performed to try to detect this motion, the most famous of which is the experiment performed by Albert A. Michelson and Edward W. Morley in 1887. The detected motion was found to always be nil (within observational error). Modern experiments indicate that the two-way speed of light is isotropic (the same in every direction) to within 6 nanometres per second.
Because of this experiment Hendrik Lorentz proposed that the motion of the apparatus through the aether may cause the apparatus to contract along its length in the direction of motion, and he further assumed that the time variable for moving systems must also be changed accordingly ("local time"), which led to the formulation of the Lorentz transformation. Based on Lorentz's aether theory, Henri Poincaré (1900) showed that this local time (to first order in v/c) is indicated by clocks moving in the aether, which are synchronized under the assumption of constant light speed. In 1904, he speculated that the speed of light could be a limiting velocity in dynamics, provided that the assumptions of Lorentz's theory are all confirmed. In 1905, Poincaré brought Lorentz's aether theory into full observational agreement with the principle of relativity.
Special relativity
In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observer. Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to light. This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
Increased accuracy of c and redefinition of the metre and second
In the second half of the 20th century, much progress was made in increasing the accuracy of measurements of the speed of light, first by cavity resonance techniques and later by laser interferometer techniques. These were aided by new, more precise, definitions of the metre and second. In 1950, Louis Essen determined the speed as , using cavity resonance. This value was adopted by the 12th General Assembly of the Radio-Scientific Union in 1957. In 1960, the metre was redefined in terms of the wavelength of a particular spectral line of krypton-86, and, in 1967, the second was redefined in terms of the hyperfine transition frequency of the ground state of caesium-133.
In 1972, using the laser interferometer method and the new definitions, a group at the US National Bureau of Standards in Boulder, Colorado determined the speed of light in vacuum to be c = 299,792,456.2 ± 1.1 m/s. This was 100 times less uncertain than the previously accepted value. The remaining uncertainty was mainly related to the definition of the metre. As similar experiments found comparable results for c, the 15th General Conference on Weights and Measures in 1975 recommended using the value 299,792,458 m/s for the speed of light.
Defined as an explicit constant
In 1983 the 17th meeting of the General Conference on Weights and Measures (CGPM) found that wavelengths from frequency measurements and a given value for the speed of light are more reproducible than the previous standard. They kept the 1967 definition of the second, so the caesium hyperfine frequency would now determine both the second and the metre. To do this, they redefined the metre as "the length of the path traveled by light in vacuum during a time interval of 1/299,792,458 of a second".
As a result of this definition, the value of the speed of light in vacuum is exactly 299,792,458 m/s and has become a defined constant in the SI system of units. Improved experimental techniques that, prior to 1983, would have measured the speed of light no longer affect the known value of the speed of light in SI units, but instead allow a more precise realization of the metre by more accurately measuring the wavelength of krypton-86 and other light sources.
In 2011, the CGPM stated its intention to redefine all seven SI base units using what it calls "the explicit-constant formulation", where each "unit is defined indirectly by specifying explicitly an exact value for a well-recognized fundamental constant", as was done for the speed of light. It proposed a new, but completely equivalent, wording of the metre's definition: "The metre, symbol m, is the unit of length; its magnitude is set by fixing the numerical value of the speed of light in vacuum to be equal to exactly 299,792,458 when it is expressed in the SI unit m s⁻¹." This was one of the changes that was incorporated in the 2019 revision of the SI, also termed the New SI.
See also
Light-second
Speed of electricity
Speed of gravity
Speed of sound
Velocity factor
Warp factor (fictional)
Notes
References
Further reading
Historical references
Translated as
Modern references
External links
"Test Light Speed in Mile Long Vacuum Tube". Popular Science Monthly, September 1930, pp. 17–18.
Definition of the metre (International Bureau of Weights and Measures, BIPM)
Speed of light in vacuum (National Institute of Standards and Technology, NIST)
Data Gallery: Michelson Speed of Light (Univariate Location Estimation) (download data gathered by Albert A. Michelson)
Subluminal (Java applet by Greg Egan demonstrating group velocity information limits)
Light discussion on adding velocities
Speed of Light (Sixty Symbols, University of Nottingham Department of Physics [video])
Speed of Light, BBC Radio 4 discussion (In Our Time, 30 November 2006)
Speed of Light (Live-Counter – Illustrations)
Speed of Light – animated demonstrations
"The Velocity of Light", Albert A. Nicholson, Scientific American, 28 September 1878, p. 193
Fundamental constants
Physical quantities
Light
Special relativity
Light | Speed of light | [
"Physics",
"Mathematics"
] | 10,005 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Physical quantities",
"Quantity",
"Electromagnetic spectrum",
"Special relativity",
"Waves",
"Physical constants",
"Motion (physics)",
"Light",
"Vector physical quantities",
"Fundamental constants",
"Theory of relativity",
"Velocity",
... |
28,758 | https://en.wikipedia.org/wiki/Spacetime | In physics, spacetime, also called the space-time continuum, is a mathematical model that fuses the three dimensions of space and the one dimension of time into a single four-dimensional continuum. Spacetime diagrams are useful in visualizing and understanding relativistic effects, such as how different observers perceive where and when events occur.
Until the turn of the 20th century, the assumption had been that the three-dimensional geometry of the universe (its description in terms of locations, shapes, distances, and directions) was distinct from time (the measurement of when events occur within the universe). However, space and time took on new meanings with the Lorentz transformation and special theory of relativity.
In 1908, Hermann Minkowski presented a geometric interpretation of special relativity that fused time and the three spatial dimensions into a single four-dimensional continuum now known as Minkowski space. This interpretation proved vital to the general theory of relativity, wherein spacetime is curved by mass and energy.
Fundamentals
Definitions
Non-relativistic classical mechanics treats time as a universal quantity of measurement that is uniform throughout, is separate from space, and is agreed on by all observers. Classical mechanics assumes that time has a constant rate of passage, independent of the observer's state of motion or of anything external. It assumes that space is Euclidean: that is, space follows the geometry of common sense.
In the context of special relativity, time cannot be separated from the three dimensions of space, because the observed rate at which time passes for an object depends on the object's velocity relative to the observer. General relativity provides an explanation of how gravitational fields can slow the passage of time for an object as seen by an observer outside the field.
In ordinary space, a position is specified by three numbers, known as dimensions. In the Cartesian coordinate system, these are often called x, y and z. A point in spacetime is called an event, and requires four numbers to be specified: the three-dimensional location in space, plus the position in time (Fig. 1). An event is represented by a set of coordinates x, y, z and t. Spacetime is thus four-dimensional.
Unlike the analogies used in popular writings to explain events, such as firecrackers or sparks, mathematical events have zero duration and represent a single point in spacetime. Although it is possible to be in motion relative to the popping of a firecracker or a spark, it is not possible for an observer to be in motion relative to an event.
The path of a particle through spacetime can be considered to be a sequence of events. The series of events can be linked together to form a curve that represents the particle's progress through spacetime. That path is called the particle's world line.
Mathematically, spacetime is a manifold, which is to say, it appears locally "flat" near each point in the same way that, at small enough scales, the surface of a globe appears to be flat. A scale factor, c (conventionally called the speed of light), relates distances measured in space to distances measured in time. The magnitude of this scale factor (nearly 300,000 kilometres of space being equivalent to one second of time), along with the fact that spacetime is a manifold, implies that at ordinary, non-relativistic speeds and at ordinary, human-scale distances, there is little that humans might observe that is noticeably different from what they might observe if the world were Euclidean. It was only with the advent of sensitive scientific measurements in the mid-1800s, such as the Fizeau experiment and the Michelson–Morley experiment, that puzzling discrepancies began to be noted between observation and the predictions based on the implicit assumption of Euclidean space.
In special relativity, an observer will, in most cases, mean a frame of reference from which a set of objects or events is being measured. This usage differs significantly from the ordinary English meaning of the term. Reference frames are inherently nonlocal constructs, and according to this usage of the term, it does not make sense to speak of an observer as having a location.
In Fig. 1-1, imagine that the frame under consideration is equipped with a dense lattice of clocks, synchronized within this reference frame, that extends indefinitely throughout the three dimensions of space. Any specific location within the lattice is not important. The latticework of clocks is used to determine the time and position of events taking place within the whole frame. The term observer refers to the whole ensemble of clocks associated with one inertial frame of reference.
In this idealized case, every point in space has a clock associated with it, and thus the clocks register each event instantly, with no time delay between an event and its recording. A real observer will see a delay between the emission of a signal and its detection due to the speed of light. To synchronize the clocks, in the data reduction following an experiment, the time when a signal is received will be corrected to reflect its actual time were it to have been recorded by an idealized lattice of clocks.
In many books on special relativity, especially older ones, the word "observer" is used in the more ordinary sense of the word. It is usually clear from context which meaning has been adopted.
Physicists distinguish between what one measures or observes, after one has factored out signal propagation delays, versus what one visually sees without such corrections. Failing to understand the difference between what one measures and what one sees is the source of much confusion among students of relativity.
History
By the mid-1800s, various experiments such as the observation of the Arago spot and differential measurements of the speed of light in air versus water were considered to have proven the wave nature of light as opposed to a corpuscular theory. Propagation of waves was then assumed to require the existence of a waving medium; in the case of light waves, this was considered to be a hypothetical luminiferous aether. The various attempts to establish the properties of this hypothetical medium yielded contradictory results. For example, the Fizeau experiment of 1851, conducted by French physicist Hippolyte Fizeau, demonstrated that the speed of light in flowing water was less than the sum of the speed of light in air plus the speed of the water by an amount dependent on the water's index of refraction.
Among other issues, the dependence of the partial aether-dragging implied by this experiment on the index of refraction (which is dependent on wavelength) led to the unpalatable conclusion that aether simultaneously flows at different speeds for different colors of light. The Michelson–Morley experiment of 1887 (Fig. 1-2) showed no differential influence of Earth's motions through the hypothetical aether on the speed of light, and the most likely explanation, complete aether dragging, was in conflict with the observation of stellar aberration.
George Francis FitzGerald in 1889, and Hendrik Lorentz in 1892, independently proposed that material bodies traveling through the fixed aether were physically affected by their passage, contracting in the direction of motion by an amount that was exactly what was necessary to explain the negative results of the Michelson–Morley experiment. No length changes occur in directions transverse to the direction of motion.
By 1904, Lorentz had expanded his theory such that he had arrived at equations formally identical with those that Einstein was to derive later, i.e. the Lorentz transformation. As a theory of dynamics (the study of forces and torques and their effect on motion), his theory assumed actual physical deformations of the physical constituents of matter. Lorentz's equations predicted a quantity that he called local time, with which he could explain the aberration of light, the Fizeau experiment and other phenomena.
Henri Poincaré was the first to combine space and time into spacetime. He argued in 1898 that the simultaneity of two events is a matter of convention. In 1900, he recognized that Lorentz's "local time" is actually what is indicated by moving clocks by applying an explicitly operational definition of clock synchronization assuming constant light speed. In 1900 and 1904, he suggested the inherent undetectability of the aether by emphasizing the validity of what he called the principle of relativity. In 1905/1906 he mathematically perfected Lorentz's theory of electrons in order to bring it into accordance with the postulate of relativity.
While discussing various hypotheses on Lorentz invariant gravitation, he introduced the innovative concept of a 4-dimensional spacetime by defining various four vectors, namely four-position, four-velocity, and four-force. He did not pursue the 4-dimensional formalism in subsequent papers, however, stating that this line of research seemed to "entail great pain for limited profit", ultimately concluding "that three-dimensional language seems the best suited to the description of our world". Even as late as 1909, Poincaré continued to describe the dynamical interpretation of the Lorentz transform.
In 1905, Albert Einstein analyzed special relativity in terms of kinematics (the study of moving bodies without reference to forces) rather than dynamics. His results were mathematically equivalent to those of Lorentz and Poincaré. He obtained them by recognizing that the entire theory can be built upon two postulates: the principle of relativity and the principle of the constancy of light speed. His work was filled with vivid imagery involving the exchange of light signals between clocks in motion, careful measurements of the lengths of moving rods, and other such examples.
Einstein in 1905 superseded previous attempts of an electromagnetic mass–energy relation by introducing the general equivalence of mass and energy, which was instrumental for his subsequent formulation of the equivalence principle in 1907, which declares the equivalence of inertial and gravitational mass. By using the mass–energy equivalence, Einstein showed that the gravitational mass of a body is proportional to its energy content, which was one of the early results in developing general relativity. While it would appear that he did not at first think geometrically about spacetime, in the further development of general relativity, Einstein fully incorporated the spacetime formalism.
When Einstein published in 1905, another of his competitors, his former mathematics professor Hermann Minkowski, had also arrived at most of the basic elements of special relativity. Max Born later recounted a meeting he had with Minkowski, seeking to become Minkowski's student and collaborator.
Minkowski had been concerned with the state of electrodynamics after Michelson's disruptive experiments at least since the summer of 1905, when Minkowski and David Hilbert led an advanced seminar attended by notable physicists of the time to study the papers of Lorentz, Poincaré et al. Minkowski saw Einstein's work as an extension of Lorentz's, and was most directly influenced by Poincaré.
On 5 November 1907 (a little more than a year before his death), Minkowski introduced his geometric interpretation of spacetime in a lecture to the Göttingen Mathematical society with the title, The Relativity Principle (Das Relativitätsprinzip). On 21 September 1908, Minkowski presented his talk, Space and Time (Raum und Zeit), to the German Society of Scientists and Physicians. The opening words of Space and Time include Minkowski's statement that "Henceforth, space for itself, and time for itself shall completely reduce to a mere shadow, and only some sort of union of the two shall preserve independence." Space and Time included the first public presentation of spacetime diagrams (Fig. 1-4), and included a remarkable demonstration that the concept of the invariant interval (discussed below), along with the empirical observation that the speed of light is finite, allows derivation of the entirety of special relativity.
The spacetime concept and the Lorentz group are closely connected to certain types of sphere, hyperbolic, or conformal geometries and their transformation groups already developed in the 19th century, in which invariant intervals analogous to the spacetime interval are used.
Einstein, for his part, was initially dismissive of Minkowski's geometric interpretation of special relativity, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). However, in order to complete his search for general relativity that started in 1907, the geometric interpretation of relativity proved to be vital. In 1916, Einstein fully acknowledged his indebtedness to Minkowski, whose interpretation greatly facilitated the transition to general relativity. Since there are other types of spacetime, such as the curved spacetime of general relativity, the spacetime of special relativity is today known as Minkowski spacetime.
Spacetime in special relativity
Spacetime interval
In three dimensions, the distance Δd between two points can be defined using the Pythagorean theorem:
(Δd)² = (Δx)² + (Δy)² + (Δz)²
Although two viewers may measure the x, y, and z position of the two points using different coordinate systems, the distance between the points will be the same for both, assuming that they are measuring using the same units. The distance is "invariant".
In special relativity, however, the distance between two points is no longer the same if measured by two different observers, when one of the observers is moving, because of Lorentz contraction. The situation is even more complicated if the two points are separated in time as well as in space. For example, if one observer sees two events occur at the same place, but at different times, a person moving with respect to the first observer will see the two events occurring at different places, because the moving point of view sees itself as stationary, and the position of the event as receding or approaching. Thus, a different measure must be used to measure the effective "distance" between two events.
In four-dimensional spacetime, the analog to distance is the interval. Although time comes in as a fourth dimension, it is treated differently than the spatial dimensions. Minkowski space hence differs in important respects from four-dimensional Euclidean space. The fundamental reason for merging space and time into spacetime is that space and time are separately not invariant, which is to say that, under the proper conditions, different observers will disagree on the length of time between two events (because of time dilation) or the distance between the two events (because of length contraction). Special relativity provides a new invariant, called the spacetime interval, which combines distances in space and in time. All observers who measure the time and distance between any two events will end up computing the same spacetime interval. Suppose an observer measures two events as being separated in time by Δt and by a spatial distance Δx. Then the squared spacetime interval between the two events, separated by a distance Δx in space and by Δct = cΔt in the ct-coordinate, is:
s² = (Δct)² − (Δx)²
or for three space dimensions,
s² = (Δct)² − (Δx)² − (Δy)² − (Δz)²
The constant c, the speed of light, converts time units (like seconds) into space units (like meters). The squared interval is a measure of separation between events A and B that are time separated and in addition space separated, either because there are two separate objects undergoing events, or because a single object in space is moving inertially between its events. The separation interval is the difference between the square of the spatial distance traveled by a light signal in the time interval Δt and the square of the spatial distance separating event B from event A. If the event separation is due to a light signal, then this difference vanishes and s = 0.
When the events considered are infinitesimally close to each other, we may write
ds² = c²dt² − dx² − dy² − dz²
In a different inertial frame, say with coordinates (t′, x′, y′, z′), the spacetime interval can be written in the same form as above. Because of the constancy of the speed of light, the light events in all inertial frames belong to zero interval, ds = ds′ = 0. For any other infinitesimal event where ds ≠ 0, one can prove that ds² = ds′²,
which in turn upon integration leads to s = s′. The invariance of the spacetime interval between the same events for all inertial frames of reference is one of the fundamental results of special theory of relativity.
Although for brevity one frequently sees interval expressions expressed without deltas, including in most of the following discussion, it should be understood that in general, x means Δx, etc. We are always concerned with differences of spatial or temporal coordinate values belonging to two events, and since there is no preferred origin, single coordinate values have no essential meaning.
The equation above is similar to the Pythagorean theorem, except with a minus sign between the (cΔt)² and the (Δx)² terms. The spacetime interval is the quantity s², not s itself. The reason is that unlike distances in Euclidean geometry, intervals in Minkowski spacetime can be negative. Rather than deal with square roots of negative numbers, physicists customarily regard s² as a distinct symbol in itself, rather than the square of something.
Note: There are two sign conventions in use in the relativity literature:
s² = (cΔt)² − (Δx)² − (Δy)² − (Δz)²
and
s² = −(cΔt)² + (Δx)² + (Δy)² + (Δz)²
These sign conventions are associated with the metric signatures (+ − − −) and (− + + +). A minor variation is to place the time coordinate last rather than first. Both conventions are widely used within the field of study.
In the following discussion, we use the first convention.
In general, s² can assume any real number value. If s² is positive, the spacetime interval is referred to as timelike. Since the spatial distance traversed by any massive object is always less than the distance traveled by light in the same time interval, positive intervals are always timelike. If s² is negative, the spacetime interval is said to be spacelike. Spacetime intervals are equal to zero when Δx = ±cΔt. In other words, the spacetime interval between two events on the world line of something moving at the speed of light is zero. Such an interval is termed lightlike or null. A photon arriving in our eye from a distant star will not have aged, despite having (from our perspective) spent years in its passage.
A spacetime diagram is typically drawn with only a single space and a single time coordinate. Fig. 2-1 presents a spacetime diagram illustrating the world lines (i.e. paths in spacetime) of two photons, A and B, originating from the same event and going in opposite directions. In addition, C illustrates the world line of a slower-than-light-speed object. The vertical time coordinate is scaled by c so that it has the same units (meters) as the horizontal space coordinate. Since photons travel at the speed of light, their world lines have a slope of ±1. In other words, every meter that a photon travels to the left or right requires approximately 3.3 nanoseconds of time.
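As a minimal numerical sketch (added for illustration; the event coordinates and function names are invented for this example), the invariance of the interval can be checked directly: boosting an event with the Lorentz transformation leaves (ct)² − x² unchanged.

    import math

    def boost(ct, x, beta):
        # Lorentz boost along x with velocity parameter beta = v/c
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        return gamma * (ct - beta * x), gamma * (x - beta * ct)

    def interval_sq(ct, x):
        # squared spacetime interval, using the (+, -) sign convention adopted above
        return ct**2 - x**2

    ct, x = 5.0, 3.0              # an event, in metres of time and space
    ct2, x2 = boost(ct, x, 0.6)   # the same event seen from a frame moving at 0.6 c
    print(interval_sq(ct, x), interval_sq(ct2, x2))   # both are 16.0 (up to rounding)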
Reference frames
To gain insight in how spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-2, two Galilean reference frames (i.e. conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime") belongs to a second observer O′.
The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
Frame S′ moves in the x-direction of frame S with a constant velocity v as measured in frame S.
The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.
Fig. 2-3a redraws Fig. 2-2 in a different orientation. Fig. 2-3b illustrates a relativistic spacetime diagram from the viewpoint of observer O. Since S and S′ are in standard configuration, their origins coincide at times t = 0 in frame S and t′ = 0 in frame S′. The ct′ axis passes through the events in frame S′ which have x′ = 0. But the points with x′ = 0 are moving in the x-direction of frame S with velocity v, so that they are not coincident with the ct axis at any time other than zero. Therefore, the ct′ axis is tilted with respect to the ct axis by an angle θ given by tan θ = v/c = β.
The x′ axis is also tilted with respect to the x axis. To determine the angle of this tilt, we recall that the slope of the world line of a light pulse is always ±1. Fig. 2-3c presents a spacetime diagram from the viewpoint of observer O′. Event P represents the emission of a light pulse at x′ = 0, ct′ = −a. The pulse is reflected from a mirror situated a distance a from the light source (event Q), and returns to the light source at x′ = 0, ct′ = a (event R).
The same events P, Q, R are plotted in Fig. 2-3b in the frame of observer O. The light paths have slopes = 1 and −1, so that △PQR forms a right triangle with PQ and QR both at 45 degrees to the x and ct axes. Since OP = OQ = OR, the angle between x′ and x must also be θ.
While the rest frame has space and time axes that meet at right angles, the moving frame is drawn with axes that meet at an acute angle. The frames are actually equivalent. The asymmetry is due to unavoidable distortions in how spacetime coordinates can map onto a Cartesian plane, and should be considered no stranger than the manner in which, on a Mercator projection of the Earth, the relative sizes of land masses near the poles (Greenland and Antarctica) are highly exaggerated relative to land masses near the Equator.
Light cone
In Fig. 2–4, event O is at the origin of a spacetime diagram, and the two diagonal lines represent all events that have zero spacetime interval with respect to the origin event. These two lines form what is called the light cone of the event O, since adding a second spatial dimension (Fig. 2-5) makes the appearance that of two right circular cones meeting with their apices at O. One cone extends into the future (t>0), the other into the past (t<0).
A light (double) cone divides spacetime into separate regions with respect to its apex. The interior of the future light cone consists of all events that are separated from the apex by more time (temporal distance) than necessary to cross their spatial distance at lightspeed; these events comprise the timelike future of the event O. Likewise, the timelike past comprises the interior events of the past light cone. So in timelike intervals Δct is greater than Δx, making timelike intervals positive.
The region exterior to the light cone consists of events that are separated from the event O by more space than can be crossed at lightspeed in the given time. These events comprise the so-called spacelike region of the event O, denoted "Elsewhere" in Fig. 2-4. Events on the light cone itself are said to be lightlike (or null separated) from O. Because of the invariance of the spacetime interval, all observers will assign the same light cone to any given event, and thus will agree on this division of spacetime.
The light cone has an essential role within the concept of causality. It is possible for a not-faster-than-light-speed signal to travel from the position and time of O to the position and time of D (Fig. 2-4). It is hence possible for event O to have a causal influence on event D. The future light cone contains all the events that could be causally influenced by O. Likewise, it is possible for a not-faster-than-light-speed signal to travel from the position and time of A, to the position and time of O. The past light cone contains all the events that could have a causal influence on O. In contrast, assuming that signals cannot travel faster than the speed of light, any event, such as B or C, in the spacelike region (Elsewhere) can neither affect event O nor be affected by event O through such signalling. Under this assumption any causal relationship between event O and any events in the spacelike region of a light cone is excluded.
Relativity of simultaneity
All observers will agree that for any given event, an event within the given event's future light cone occurs after the given event. Likewise, for any given event, an event within the given event's past light cone occurs before the given event. The before–after relationship observed for timelike-separated events remains unchanged no matter what the reference frame of the observer, i.e. no matter how the observer may be moving. The situation is quite different for spacelike-separated events. Fig. 2-4 was drawn from the reference frame of a particular moving observer. From this reference frame, event C is observed to occur after event O, and event B is observed to occur before event O.
From a different reference frame, the orderings of these non-causally-related events can be reversed. In particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally related. The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity.
Fig. 2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneity. The events in spacetime are invariant, but the coordinate frames transform as discussed above for Fig. 2-3. The three events are simultaneous from the reference frame of one observer; from the reference frame of an observer moving at a different speed, the events appear to occur in one order, and from the reference frame of a third observer they appear to occur in the opposite order. The white line represents a plane of simultaneity being moved from the past of the observer to the future of the observer, highlighting events residing on it. The gray area is the light cone of the observer, which remains invariant.
A spacelike spacetime interval gives the same distance that an observer would measure if the events being measured were simultaneous to the observer. A spacelike spacetime interval hence provides a measure of proper distance, i.e. the true distance = √(−s²). Likewise, a timelike spacetime interval gives the same measure of time as would be presented by the cumulative ticking of a clock that moves along a given world line. A timelike spacetime interval hence provides a measure of the proper time = √(s²)/c.
Invariant hyperbola
In Euclidean space (having spatial dimensions only), the set of points equidistant (using the Euclidean metric) from some point form a circle (in two dimensions) or a sphere (in three dimensions). In Minkowski spacetime (having one temporal and one spatial dimension), the points at some constant spacetime interval away from the origin (using the Minkowski metric) form curves given by the two equations
(ct)² − x² = ±s², with s² some positive real constant. These equations describe two families of hyperbolae in an x–ct spacetime diagram, which are termed invariant hyperbolae.
In Fig. 2-7a, each magenta hyperbola connects all events having some fixed spacelike separation from the origin, while the green hyperbolae connect events of equal timelike separation.
The magenta hyperbolae, which cross the x axis, are timelike curves, which is to say that these hyperbolae represent actual paths that can be traversed by (constantly accelerating) particles in spacetime: between any two events on one hyperbola a causality relation is possible, because the inverse of the slope, representing the necessary speed, for all secants is less than c. On the other hand, the green hyperbolae, which cross the ct axis, are spacelike curves because all intervals along these hyperbolae are spacelike intervals: no causality is possible between any two points on one of these hyperbolae, because all secants represent speeds larger than c.
Fig. 2-7b reflects the situation in Minkowski spacetime (one temporal and two spatial dimensions) with the corresponding hyperboloids. The invariant hyperbolae displaced by spacelike intervals from the origin generate hyperboloids of one sheet, while the invariant hyperbolae displaced by timelike intervals from the origin generate hyperboloids of two sheets.
The (1+2)-dimensional boundary between the spacelike and timelike hyperboloids, established by the events forming a zero spacetime interval to the origin, is formed by the degeneration of the hyperboloids to the light cone. In (1+1) dimensions the hyperbolae degenerate to the two grey 45° lines depicted in Fig. 2-7a.
Time dilation and length contraction
Fig. 2-8 illustrates the invariant hyperbola for all events that can be reached from the origin in a proper time of 5 meters (approximately 16.7 nanoseconds). Different world lines represent clocks moving at different speeds. A clock that is stationary with respect to the observer has a world line that is vertical, and the elapsed time measured by the observer is the same as the proper time. For a clock traveling at 0.3 c, the elapsed time measured by the observer is 5.24 meters (17.5 nanoseconds), while for a clock traveling at 0.7 c, the elapsed time measured by the observer is 7.00 meters (23.3 nanoseconds).
This illustrates the phenomenon known as time dilation. Clocks that travel faster take longer (in the observer frame) to tick out the same amount of proper time, and they travel further along the x–axis within that proper time than they would have without time dilation. The measurement of time dilation by two observers in different inertial reference frames is mutual. If observer O measures the clocks of observer O′ as running slower in his frame, observer O′ in turn will measure the clocks of observer O as running slower.
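A short numerical check of the figures quoted above (illustrative only): a clock accumulating 5 metres of proper time while moving at speed βc is observed to take γ × 5 metres of coordinate time.

    import math

    def gamma(beta):
        # Lorentz factor for speed beta = v/c
        return 1.0 / math.sqrt(1.0 - beta**2)

    proper_time_m = 5.0   # proper time expressed in metres (i.e. multiplied by c)
    for beta in (0.0, 0.3, 0.7):
        elapsed = gamma(beta) * proper_time_m   # elapsed time in the observer's frame
        print(beta, round(elapsed, 2))          # 0.0 -> 5.0, 0.3 -> 5.24, 0.7 -> 7.0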
Length contraction, like time dilation, is a manifestation of the relativity of simultaneity. Measurement of length requires measurement of the spacetime interval between two events that are simultaneous in one's frame of reference. But events that are simultaneous in one frame of reference are, in general, not simultaneous in other frames of reference.
Fig. 2-9 illustrates the motions of a 1 m rod that is traveling at 0.5 c along the x axis. The edges of the blue band represent the world lines of the rod's two endpoints. The invariant hyperbola illustrates events separated from the origin by a spacelike interval of 1 m. The endpoints O and B measured when t′ = 0 are simultaneous events in the S′ frame. But to an observer in frame S, events O and B are not simultaneous. To measure length, the observer in frame S measures the endpoints of the rod as projected onto the x-axis along their world lines. The projection of the rod's world sheet onto the x axis yields the foreshortened length OC.
(not illustrated) Drawing a vertical line through A so that it intersects the x′ axis demonstrates that, even as OB is foreshortened from the point of view of observer O, OA is likewise foreshortened from the point of view of observer O′. In the same way that each observer measures the other's clocks as running slow, each observer measures the other's rulers as being contracted.
In regards to mutual length contraction, Fig. 2-9 illustrates that the primed and unprimed frames are mutually rotated by a hyperbolic angle (analogous to ordinary angles in Euclidean geometry). Because of this rotation, the projection of a primed meter-stick onto the unprimed x-axis is foreshortened, while the projection of an unprimed meter-stick onto the primed x′-axis is likewise foreshortened.
Mutual time dilation and the twin paradox
Mutual time dilation
Mutual time dilation and length contraction tend to strike beginners as inherently self-contradictory concepts. If an observer in frame S measures a clock, at rest in frame S′, as running slower than his own, while S′ is moving at speed v in S, then the principle of relativity requires that an observer in frame S′ likewise measures a clock in frame S, moving at speed −v in S′, as running slower than hers. How two clocks can each run slower than the other is an important question that "goes to the heart of understanding special relativity."
This apparent contradiction stems from not correctly taking into account the different settings of the necessary, related measurements. These settings allow for a consistent explanation of the only apparent contradiction. It is not about the abstract ticking of two identical clocks, but about how to measure in one frame the temporal distance of two ticks of a moving clock. It turns out that in mutually observing the duration between ticks of clocks, each moving in the respective frame, different sets of clocks must be involved. In order to measure in frame S the tick duration of a moving clock W′ (at rest in S′), one uses two additional, synchronized clocks W1 and W2 at rest in two arbitrarily fixed points in S with the spatial distance d.
Two events can be defined by the condition "two clocks are simultaneously at one place", i.e., when W′ passes each W1 and W2. For both events the two readings of the collocated clocks are recorded. The difference of the two readings of W1 and W2 is the temporal distance of the two events in S, and their spatial distance is d. The difference of the two readings of W′ is the temporal distance of the two events in S′. In S′ these events are only separated in time, they happen at the same place in S′. Because of the invariance of the spacetime interval spanned by these two events, and the nonzero spatial separation d in S, the temporal distance in S′ must be smaller than the one in S: the smaller temporal distance between the two events, resulting from the readings of the moving clock W′, belongs to the slower running clock W′.
Conversely, for judging in frame S′ the temporal distance of two events on a moving clock W (at rest in S), one needs two clocks at rest in S′.
In this comparison the clock W is moving with velocity −v. Recording again the four readings for the events, defined by "two clocks simultaneously at one place", results in the analogous temporal distances of the two events, now temporally and spatially separated in S′, and only temporally separated but collocated in S. To keep the spacetime interval invariant, the temporal distance in S must be smaller than in S′, because of the spatial separation of the events in S′: now clock W is observed to run slower.
The necessary recordings for the two judgements, with "one moving clock" and "two clocks at rest" in respectively S or S′, involve two different sets, each with three clocks. Since there are different sets of clocks involved in the measurements, there is no inherent necessity that the measurements be reciprocally "consistent" such that, if one observer measures the moving clock to be slow, the other observer measures the first observer's clock to be fast.
Fig. 2-10 illustrates the previous discussion of mutual time dilation with Minkowski diagrams. The upper picture reflects the measurements as seen from frame S "at rest" with unprimed, rectangular axes, and frame S′ "moving with v > 0", coordinatized by primed, oblique axes, slanted to the right; the lower picture shows frame S′ "at rest" with primed, rectangular coordinates, and frame S "moving with −v < 0", with unprimed, oblique axes, slanted to the left.
Each line drawn parallel to a spatial axis (x, x′) represents a line of simultaneity. All events on such a line have the same time value (ct, ct′). Likewise, each line drawn parallel to a temporal axis (ct, ct′) represents a line of equal spatial coordinate values (x, x′).
One may designate in both pictures the origin O as the event where the respective "moving clock" is collocated with the "first clock at rest" in both comparisons. Obviously, for this event the readings on both clocks in both comparisons are zero. As a consequence, the worldlines of the moving clocks are the ct′-axis slanted to the right (upper picture, clock W′) and the ct-axis slanted to the left (lower picture, clock W). The worldlines of W1 and W′1 are the corresponding vertical time axes (ct in the upper picture, and ct′ in the lower picture).
In the upper picture the place for W2 is taken to be Ax > 0, and thus the worldline (not shown in the pictures) of this clock intersects the worldline of the moving clock (the ct′-axis) in the event labelled A, where "two clocks are simultaneously at one place". In the lower picture the place for W′2 is taken to be Cx′ < 0, and so in this measurement the moving clock W passes W′2 in the event C.
In the upper picture the ct-coordinate At of the event A (the reading of W2) is labeled B, thus giving the elapsed time between the two events, measured with W1 and W2, as OB. For a comparison, the length of the time interval OA, measured with W′, must be transformed to the scale of the ct-axis. This is done by the invariant hyperbola (see also Fig. 2-8) through A, connecting all events with the same spacetime interval from the origin as A. This yields the event C on the ct-axis, and obviously: OC < OB, the "moving" clock W′ runs slower.
To show the mutual time dilation immediately in the upper picture, the event D may be constructed as the event at x′ = 0 (the location of clock W′ in S′), that is simultaneous to C (OC has equal spacetime interval as OA) in S′. This shows that the time interval OD is longer than OA, showing that the "moving" clock runs slower.
In the lower picture the frame S is moving with velocity −v in the frame S′ at rest. The worldline of clock W is the ct-axis (slanted to the left), the worldline of W′1 is the vertical ct′-axis, and the worldline of W′2 is the vertical through event C, with ct′-coordinate D. The invariant hyperbola through event C scales the time interval OC to OA, which is shorter than OD; also, B is constructed (similar to D in the upper pictures) as simultaneous to A in S, at x = 0. The result OB > OC corresponds again to above.
The word "measure" is important. In classical physics an observer cannot affect an observed object, but the object's state of motion can affect the observer's observations of the object.
Twin paradox
Many introductions to special relativity illustrate the differences between Galilean relativity and special relativity by posing a series of "paradoxes". These paradoxes are, in fact, ill-posed problems, resulting from our unfamiliarity with velocities comparable to the speed of light. The remedy is to solve many problems in special relativity and to become familiar with its so-called counter-intuitive predictions. The geometrical approach to studying spacetime is considered one of the best methods for developing a modern intuition.
The twin paradox is a thought experiment involving identical twins, one of whom makes a journey into space in a high-speed rocket, returning home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin observes the other twin as moving, and so at first glance, it would appear that each should find the other to have aged less. The twin paradox sidesteps the justification for mutual time dilation presented above by avoiding the requirement for a third clock. Nevertheless, the twin paradox is not a true paradox because it is easily understood within the context of special relativity.
The impression that a paradox exists stems from a misunderstanding of what special relativity states. Special relativity does not declare all frames of reference to be equivalent, only inertial frames. The traveling twin's frame is not inertial during periods when she is accelerating. Furthermore, the difference between the twins is observationally detectable: the traveling twin needs to fire her rockets to be able to return home, while the stay-at-home twin does not. Even with no acceleration or deceleration, i.e. using one inertial frame O for the constant, high-velocity outward journey and another inertial frame I for the constant, high-velocity inward journey, the sum of the elapsed time in those frames (O and I) is shorter than the elapsed time in the stationary inertial frame S. Thus acceleration and deceleration are not the cause of the shorter elapsed time during the outward and inward journey. Instead, the use of two different constant, high-velocity inertial frames for the outward and inward journeys is the real cause of the shorter total elapsed time. Granted, if the same twin has to travel the outward and inward legs of the journey and safely switch from the outward to the inward leg, acceleration and deceleration are required; but if the traveling twin could ride the high-velocity outward inertial frame and instantaneously switch to the high-velocity inward inertial frame, the example would still work. The point is that the real reason should be stated clearly: the asymmetry arises from comparing the sum of elapsed times in two different inertial frames (O and I) with the elapsed time in a single inertial frame S.
These distinctions should result in a difference in the twins' ages. The spacetime diagram of Fig. 2-11 presents the simple case of a twin going straight out along the x axis and immediately turning back. From the standpoint of the stay-at-home twin, there is nothing puzzling about the twin paradox at all. The proper time measured along the traveling twin's world line from O to C, plus the proper time measured from C to B, is less than the stay-at-home twin's proper time measured from O to A to B. More complex trajectories require integrating the proper time between the respective events along the curve (i.e. the path integral) to calculate the total amount of proper time experienced by the traveling twin.
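The proper-time bookkeeping can be sketched numerically (an illustration under the assumption of piecewise-constant speeds, not taken from the article): each leg of a journey contributes coordinate time reduced by the factor √(1 − β²).

    import math

    def proper_time(coordinate_time, beta):
        # proper time accumulated while moving at constant speed beta*c
        return coordinate_time * math.sqrt(1.0 - beta**2)

    total = 10.0                                   # years, in the stay-at-home frame
    stay_at_home = proper_time(total, 0.0)         # 10.0 years
    traveller = (proper_time(total / 2, 0.8) +     # outbound leg at 0.8 c
                 proper_time(total / 2, 0.8))      # return leg at 0.8 c
    print(stay_at_home, traveller)                 # 10.0 versus 6.0: the traveller ages less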
Complications arise if the twin paradox is analyzed from the traveling twin's point of view.
Weiss's nomenclature, designating the stay-at-home twin as Terence and the traveling twin as Stella, is hereafter used.
Stella is not in an inertial frame. Given this fact, it is sometimes incorrectly stated that full resolution of the twin paradox requires general relativity.
Although general relativity is not required to analyze the twin paradox, application of the Equivalence Principle of general relativity does provide some additional insight into the subject. Stella is not stationary in an inertial frame. Analyzed in Stella's rest frame, she is motionless for the entire trip. When she is coasting her rest frame is inertial, and Terence's clock will appear to run slow. But when she fires her rockets for the turnaround, her rest frame is an accelerated frame and she experiences a force which is pushing her as if she were in a gravitational field. Terence will appear to be high up in that field and because of gravitational time dilation, his clock will appear to run fast, so much so that the net result will be that Terence has aged more than Stella when they are back together. The theoretical arguments predicting gravitational time dilation are not exclusive to general relativity. Any theory of gravity will predict gravitational time dilation if it respects the principle of equivalence, including Newton's theory.
Gravitation
This introductory section has focused on the spacetime of special relativity, since it is the easiest to describe. Minkowski spacetime is flat, takes no account of gravity, is uniform throughout, and serves as nothing more than a static background for the events that take place in it. The presence of gravity greatly complicates the description of spacetime. In general relativity, spacetime is no longer a static background, but actively interacts with the physical systems that it contains. Spacetime curves in the presence of matter, propagates waves, bends light, and exhibits a host of other phenomena. A few of these phenomena are described in the later sections of this article.
Basic mathematics of spacetime
Galilean transformations
A basic goal is to be able to compare measurements made by observers in relative motion. Suppose there is an observer O in frame S who has measured the time and space coordinates of an event, assigning this event three Cartesian coordinates x, y, z and the time t as measured on his lattice of synchronized clocks (see Fig. 1-1). A second observer O′ in a different frame S′ measures the same event in her coordinate system and her lattice of synchronized clocks, obtaining coordinates x′, y′, z′ and t′. With inertial frames, neither observer is under acceleration, and a simple set of equations allows us to relate the coordinates x, y, z, t to x′, y′, z′, t′. Given that the two coordinate systems are in standard configuration, meaning that they are aligned with parallel coordinates and that their origins coincide when t = t′ = 0, the coordinate transformation is as follows:
x′ = x − vt, y′ = y, z′ = z, t′ = t
Fig. 3-1 illustrates that in Newton's theory, time is universal, not the velocity of light. Consider the following thought experiment: The red arrow illustrates a train that is moving at 0.4 c with respect to the platform. Within the train, a passenger shoots a bullet with a speed of 0.4 c in the frame of the train. The blue arrow illustrates that a person standing on the train tracks measures the bullet as traveling at 0.8 c. This is in accordance with our naive expectations.
More generally, assuming that frame S′ is moving at velocity v with respect to frame S, then within frame S′, observer O′ measures an object moving with velocity u′. Its velocity u with respect to frame S, since x = x′ + vt, Δx = Δx′ + vΔt, and Δt = Δt′, can be written as u = Δx/Δt = (Δx′ + vΔt)/Δt = Δx′/Δt + v. This leads to u = Δx′/Δt′ + v and ultimately
u = u′ + v or, equivalently, u′ = u − v,
which is the common-sense Galilean law for the addition of velocities.
Relativistic composition of velocities
The composition of velocities is quite different in relativistic spacetime. To reduce the complexity of the equations slightly, we introduce a common shorthand for the ratio of the speed of an object relative to light, β = v/c.
Fig. 3-2a illustrates a red train that is moving forward at a speed given by . From the primed frame of the train, a passenger shoots a bullet with a speed given by , where the distance is measured along a line parallel to the red axis rather than parallel to the black x axis. What is the composite velocity u of the bullet relative to the platform, as represented by the blue arrow? Referring to Fig. 3-2b:
From the platform, the composite speed of the bullet is given by .
The two yellow triangles are similar because they are right triangles that share a common angle α. In the large yellow triangle, the ratio .
The ratios of corresponding sides of the two yellow triangles are constant, so that = . So and .
Substitute the expressions for b and r into the expression for u in step 1 to yield Einstein's formula for the addition of velocities:
u = (v + u′)/(1 + vu′/c²)
The relativistic formula for addition of velocities presented above exhibits several important features:
If u′ and v are both very small compared with the speed of light, then the product vu′/c² becomes vanishingly small, and the overall result becomes indistinguishable from the Galilean formula (Newton's formula) for the addition of velocities: u = u′ + v. The Galilean formula is a special case of the relativistic formula applicable to low velocities.
If u′ is set equal to c, then the formula yields u = c regardless of the starting value of v. The velocity of light is the same for all observers regardless of their motions relative to the emitting source. Both limits are checked in the numerical sketch below.
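A numerical sketch of the composition law and of the two limits just listed (illustrative values only):

    def add_velocities(beta1, beta2):
        # Einstein's composition law for collinear velocities, in units of c
        return (beta1 + beta2) / (1.0 + beta1 * beta2)

    print(add_velocities(0.4, 0.4))     # ~0.69, not the Galilean 0.8
    print(add_velocities(0.4, 1.0))     # exactly 1.0: light speed is unchanged
    print(add_velocities(1e-9, 2e-9))   # ~3e-9: Galilean addition recovered at low speed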
Time dilation and length contraction revisited
It is straightforward to obtain quantitative expressions for time dilation and length contraction. Fig. 3-3 is a composite image containing individual frames taken from two previous animations, simplified and relabeled for the purposes of this section.
To reduce the complexity of the equations slightly, there are a variety of different shorthand notations for ct:
w = ct and x⁰ = ct are common.
One also sees very frequently the use of the convention c = 1.
In Fig. 3-3a, segments OA and OK represent equal spacetime intervals. Time dilation is represented by the ratio OB/OK. The invariant hyperbola has the equation w² − x² = k², where k = OK, and the red line representing the world line of a particle in motion has the equation w = x/β = xc/v. A bit of algebraic manipulation yields OB = OK/√(1 − v²/c²).
The expression involving the square root symbol appears very frequently in relativity, and one over the expression is called the Lorentz factor, denoted by the Greek letter gamma γ:
γ = 1/√(1 − v²/c²) = 1/√(1 − β²)
If v is greater than or equal to c, the expression for γ becomes physically meaningless, implying that c is the maximum possible speed in nature. For any v greater than zero, the Lorentz factor will be greater than one, although the shape of the curve is such that for low speeds, the Lorentz factor is extremely close to one.
In Fig. 3-3b, segments OA and OK represent equal spacetime intervals. Length contraction is represented by the ratio OB/OK. The invariant hyperbola has the equation x² − w² = k², where k = OK, and the edges of the blue band representing the world lines of the endpoints of a rod in motion have slope 1/β = c/v. Event A has coordinates
(x, w) = (γk, γβk). Since the tangent line through A and B has the equation w = (x − OB)/β, we have γβk = (γk − OB)/β, and so OB = γk(1 − β²) = k/γ = OK·√(1 − v²/c²).
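Collecting the results of this section in one place (a summary added for reference, written in LaTeX notation):

    \gamma = \frac{1}{\sqrt{1-\beta^{2}}} = \frac{1}{\sqrt{1-v^{2}/c^{2}}}, \qquad
    \Delta t_{\text{observed}} = \gamma\,\Delta\tau \ \ \text{(time dilation)}, \qquad
    L = \frac{L_{0}}{\gamma} \ \ \text{(length contraction)}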
Lorentz transformations
The Galilean transformations and their consequent commonsense law of addition of velocities work well in our ordinary low-speed world of planes, cars and balls. Beginning in the mid-1800s, however, sensitive scientific instrumentation began finding anomalies that did not fit well with the ordinary addition of velocities.
Lorentz transformations are used to transform the coordinates of an event from one frame to another in special relativity.
The Lorentz factor appears in the Lorentz transformations:
t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z
The inverse Lorentz transformations are:
t = γ(t′ + vx′/c²), x = γ(x′ + vt′), y = y′, z = z′
When v ≪ c and x is small enough, the v2/c2 and vx/c2 terms approach zero, and the Lorentz transformations approximate to the Galilean transformations.
Although for brevity the Lorentz transformation equations are written without deltas, x really means Δx, t really means Δt, and so on. We are, in general, always concerned with the space and time differences between events.
Calling one set of transformations the normal Lorentz transformations and the other the inverse transformations is misleading, since there is no intrinsic difference between the frames. Different authors call one or the other set of transformations the "inverse" set. The forwards and inverse transformations are trivially related to each other, since the S frame can only be moving forwards or in reverse with respect to S′. So inverting the equations simply entails switching the primed and unprimed variables and replacing v with −v.
Example: Terence and Stella are at an Earth-to-Mars space race. Terence is an official at the starting line, while Stella is a participant. At time t = t′ = 0, Stella's spaceship accelerates instantaneously to a speed of 0.5 c. The distance from Earth to Mars is 300 light-seconds (about 90 million km). Terence observes Stella crossing the finish-line clock at t = 600.00 s. But Stella observes the time on her ship chronometer to be t′ = 519.62 s as she passes the finish line, and she calculates the distance between the starting and finish lines, as measured in her frame, to be 259.81 light-seconds (about 77.9 million km).
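The numbers in this example follow directly from the Lorentz transformation; a quick check (variable names are illustrative):

    import math

    beta = 0.5
    gamma = 1.0 / math.sqrt(1.0 - beta**2)    # ~1.1547

    x = 300.0                                 # finish line, in light-seconds
    t = x / beta                              # 600.00 s in Terence's frame
    t_prime = gamma * (t - beta * x)          # ~519.62 s on Stella's chronometer
    x_race = x / gamma                        # ~259.81 light-seconds in Stella's frame
    print(round(t, 2), round(t_prime, 2), round(x_race, 2))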
Deriving the Lorentz transformations
There have been many dozens of derivations of the Lorentz transformations since Einstein's original work in 1905, each with its particular focus. Although Einstein's derivation was based on the invariance of the speed of light, there are other physical principles that may serve as starting points. Ultimately, these alternative starting points can be considered different expressions of the underlying principle of locality, which states that the influence that one particle exerts on another can not be transmitted instantaneously.
The derivation given here and illustrated in Fig. 3-5 is based on one presented by Bais and makes use of previous results from the Relativistic Composition of Velocities, Time Dilation, and Length Contraction sections. Event P has coordinates (w, x) in the black "rest system" and coordinates (w′, x′) in the red frame that is moving with velocity parameter β. To determine w′ and x′ in terms of w and x (or the other way around) it is easier at first to derive the inverse Lorentz transformation.
There can be no such thing as length expansion/contraction in the transverse directions. y′ must equal y and z′ must equal z, otherwise whether a fast moving 1 m ball could fit through a 1 m circular hole would depend on the observer. The first postulate of relativity states that all inertial frames are equivalent, and transverse expansion/contraction would violate this law.
From the drawing, w = a + b and x = r + s.
From previous results using similar triangles, we know that s/a = b/r = v/c = β.
Because of time dilation, a = γw′.
Substituting this expression for a into s/a = β yields s = γβw′.
Length contraction and similar triangles give us r = γx′ and b = βr = βγx′.
Substituting the expressions for s, a, r and b into the equations w = a + b and x = r + s immediately yields
w = γw′ + βγx′ and x = γx′ + βγw′.
The above equations are alternate expressions for the t and x equations of the inverse Lorentz transformation, as can be seen by substituting ct for w, ct′ for w′, and v/c for β. From the inverse transformation, the equations of the forwards transformation can be derived by solving for w′ and x′.
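One way to double-check the result (a sketch using symbolic algebra; this verification is not part of the original derivation) is to confirm that the derived transformation preserves the interval w′² − x′²:

    import sympy as sp

    beta = sp.symbols('beta', positive=True)
    wp, xp = sp.symbols("w' x'")
    gamma = 1 / sp.sqrt(1 - beta**2)

    w = gamma * (wp + beta * xp)    # inverse transformation derived above
    x = gamma * (xp + beta * wp)

    print(sp.simplify((w**2 - x**2) - (wp**2 - xp**2)))   # 0: the interval is invariant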
Linearity of the Lorentz transformations
The Lorentz transformations have a mathematical property called linearity, since x′ and t′ are obtained as linear combinations of x and t, with no higher powers involved. The linearity of the transformation reflects a fundamental property of spacetime that was tacitly assumed in the derivation, namely, that the properties of inertial frames of reference are independent of location and time. In the absence of gravity, spacetime looks the same everywhere. All inertial observers will agree on what constitutes accelerating and non-accelerating motion. Any one observer can use her own measurements of space and time, but there is nothing absolute about them. Another observer's conventions will do just as well.
A result of linearity is that if two Lorentz transformations are applied sequentially, the result is also a Lorentz transformation.
Example: Terence observes Stella speeding away from him at 0.500 c, and he can use the Lorentz transformations with β = 0.500 to relate Stella's measurements to his own. Stella, in her frame, observes Ursula traveling away from her at 0.250 c, and she can use the Lorentz transformations with β = 0.250 to relate Ursula's measurements with her own. Because of the linearity of the transformations and the relativistic composition of velocities, Terence can use the Lorentz transformations with β = 0.666 to relate Ursula's measurements with his own.
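A small numerical sketch of this composition property (the 2×2 matrix form is an assumption of this illustration, not something stated in the text): multiplying the boost matrices for β = 0.250 and β = 0.500 gives the boost matrix for the relativistically composed velocity.

    import numpy as np

    def boost_matrix(beta):
        # 2x2 Lorentz boost acting on (ct, x)
        g = 1.0 / np.sqrt(1.0 - beta**2)
        return np.array([[g, -g * beta], [-g * beta, g]])

    b1, b2 = 0.250, 0.500
    combined = boost_matrix(b2) @ boost_matrix(b1)
    beta_composed = (b1 + b2) / (1.0 + b1 * b2)                # ~0.667
    print(np.allclose(combined, boost_matrix(beta_composed)))  # True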
Doppler effect
The Doppler effect is the change in frequency or wavelength of a wave for a receiver and source in relative motion. For simplicity, we consider here two basic scenarios: (1) The motions of the source and/or receiver are exactly along the line connecting them (longitudinal Doppler effect), and (2) the motions are at right angles to the said line (transverse Doppler effect). We are ignoring scenarios where they move along intermediate angles.
Longitudinal Doppler effect
The classical Doppler analysis deals with waves that are propagating in a medium, such as sound waves or water ripples, and which are transmitted between sources and receivers that are moving towards or away from each other. The analysis of such waves depends on whether the source, the receiver, or both are moving relative to the medium. Given the scenario where the receiver is stationary with respect to the medium, and the source is moving directly away from the receiver at a speed of vs for a velocity parameter of βs, the wavelength is increased, and the observed frequency f is given by
f = f₀/(1 + βs), where f₀ is the emitted frequency.
On the other hand, given the scenario where the source is stationary, and the receiver is moving directly away from the source at a speed of vr for a velocity parameter of βr, the wavelength is not changed, but the transmission velocity of the waves relative to the receiver is decreased, and the observed frequency f is given by
f = f₀(1 − βr)
Light, unlike sound or water ripples, does not propagate through a medium, and there is no distinction between a source moving away from the receiver or a receiver moving away from the source. Fig. 3-6 illustrates a relativistic spacetime diagram showing a source separating from the receiver with a velocity parameter β, so that the separation between source and receiver at time w is βw. Because of time dilation, w = γw′. Since the slope of the green light ray is −1, the period of the received light is stretched by a factor of γ(1 + β) relative to the emitted period. Hence, the relativistic Doppler effect is given by
f = f₀ √((1 − β)/(1 + β))
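For comparison (an illustrative calculation only), at β = 0.5 the two classical formulas and the relativistic formula give noticeably different shifts, with the relativistic value lying between the classical ones:

    import math

    def classical_source_receding(f0, beta):
        return f0 / (1.0 + beta)     # wavelength stretched by the receding source

    def classical_receiver_receding(f0, beta):
        return f0 * (1.0 - beta)     # waves arrive more slowly at the receding receiver

    def relativistic_receding(f0, beta):
        return f0 * math.sqrt((1.0 - beta) / (1.0 + beta))

    f0, beta = 1.0, 0.5
    print(classical_source_receding(f0, beta),     # ~0.667
          classical_receiver_receding(f0, beta),   # 0.5
          relativistic_receding(f0, beta))         # ~0.577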
Transverse Doppler effect
Suppose that a source and a receiver, both approaching each other in uniform inertial motion along non-intersecting lines, are at their closest approach to each other. It would appear that the classical analysis predicts that the receiver detects no Doppler shift. Due to subtleties in the analysis, that expectation is not necessarily true. Nevertheless, when appropriately defined, transverse Doppler shift is a relativistic effect that has no classical analog. The subtleties are these:
In scenario (a), the point of closest approach is frame-independent and represents the moment where there is no change in distance versus time (i.e. dr/dt = 0 where r is the distance between receiver and source) and hence no longitudinal Doppler shift. The source observes the receiver as being illuminated by light of the emitted frequency, but also observes the receiver as having a time-dilated clock. In frame S, the receiver is therefore illuminated by blueshifted light whose frequency is higher by a factor of γ.
In scenario (b) the illustration shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on. Because the source's clocks are time dilated as measured in frame S, and since dr/dt was equal to zero at this point, the light from the source, emitted from this closest point, is redshifted in frequency by a factor of γ.
Scenarios (c) and (d) can be analyzed by simple time dilation arguments. In (c), the receiver observes light from the source as being blueshifted by a factor of γ, and in (d), the light is redshifted. The only seeming complication is that the orbiting objects are in accelerated motion. However, if an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation. (The converse, however, is not true.) Most reports of transverse Doppler shift refer to the effect as a redshift and analyze the effect in terms of scenarios (b) or (d).
Energy and momentum
Extending momentum to four dimensions
In classical mechanics, the state of motion of a particle is characterized by its mass and its velocity. Linear momentum, the product of a particle's mass and velocity, is a vector quantity, possessing the same direction as the velocity: . It is a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change.
In relativistic mechanics, the momentum vector is extended to four dimensions. Added to the momentum vector is a time component that allows the spacetime momentum vector to transform like the spacetime position vector (ct, x). In exploring the properties of the spacetime momentum, we start, in Fig. 3-8a, by examining what a particle looks like at rest. In the rest frame, the spatial component of the momentum is zero, i.e. p = 0, but the time component equals mc.
We can obtain the transformed components of this vector in the moving frame by using the Lorentz transformations, or we can read them directly from the figure because we know that the time component is γmc and the magnitude of the space component is βγmc = γmv, since the red axes are rescaled by gamma. Fig. 3-8b illustrates the situation as it appears in the moving frame. It is apparent that the space and time components of the four-momentum go to infinity as the velocity of the moving frame approaches c.
We will use this information shortly to obtain an expression for the four-momentum.
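A minimal numerical sketch of this transformation, assuming units in which c = 1 and using an illustrative boost function that is not from the article, shows the rest-frame four-momentum (mc, 0) being carried into (γmc, γmv):

```python
import math

def boost(E, p, beta):
    # Lorentz boost of the (time, space) components of a 1+1-dimensional
    # four-momentum into a frame in which the particle moves with speed beta (c = 1).
    gamma = 1 / math.sqrt(1 - beta**2)
    return gamma * (E + beta * p), gamma * (p + beta * E)

m = 2.0
E_rest, p_rest = m, 0.0          # at rest: time component mc = m, space component 0
beta = 0.8                       # gamma = 5/3
E, p = boost(E_rest, p_rest, beta)
print(E, p)                      # 3.333..., 2.666...  i.e. gamma*m and gamma*m*v
```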
Momentum of light
Light particles, or photons, travel at the speed c, the constant that is conventionally known as the speed of light. This statement is not a tautology, since many modern formulations of relativity do not start with constant speed of light as a postulate. Photons therefore propagate along a lightlike world line and, in appropriate units, have equal space and time components for every observer.
A consequence of Maxwell's theory of electromagnetism is that light carries energy and momentum, and that their ratio is a constant: E/p = c. Rearranging, E/c = p, and since for photons the space and time components are equal, E/c must therefore be equated with the time component of the spacetime momentum vector.
Photons travel at the speed of light, yet have finite momentum and energy. For this to be so, the mass term in γmc must be zero, meaning that photons are massless particles. Infinity times zero is an ill-defined quantity, but E/c is well-defined.
By this analysis, if the energy of a photon equals E in the rest frame, it equals E′ = (1 − β)γE in a moving frame. This result can be derived by inspection of Fig. 3-9 or by application of the Lorentz transformations, and is consistent with the analysis of Doppler effect given previously.
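The consistency with the Doppler analysis can be verified directly; the short check below uses illustrative values only and confirms that (1 − β)γ equals the recession Doppler factor √((1 − β)/(1 + β)):

```python
import math

E, beta = 1.0, 0.6
gamma = 1 / math.sqrt(1 - beta**2)
print((1 - beta) * gamma * E)                  # 0.5: boosted photon energy
print(math.sqrt((1 - beta) / (1 + beta)) * E)  # 0.5: relativistic Doppler factor applied to E
```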
Mass–energy relationship
Consideration of the interrelationships between the various components of the relativistic momentum vector led Einstein to several important conclusions.
In the low speed limit, as β approaches zero, γ approaches 1, so the spatial component of the relativistic momentum γmv approaches mv, the classical term for momentum. Following this perspective, γm can be interpreted as a relativistic generalization of m. Einstein proposed that the relativistic mass of an object increases with velocity according to the formula mrel = γm.
Likewise, comparing the time component of the relativistic momentum with that of the photon, γmc = E/c, so that Einstein arrived at the relationship E = γmc². Simplified to the case of zero velocity, this is Einstein's equation relating energy and mass, E = mc².
Another way of looking at the relationship between mass and energy is to consider a series expansion of E = γmc² at low velocity: E = γmc² ≈ mc² + ½mv² + …
The second term is just an expression for the kinetic energy of the particle. Mass indeed appears to be another form of energy.
The concept of relativistic mass that Einstein introduced in 1905, mrel, although amply validated every day in particle accelerators around the globe (or indeed in any instrumentation whose use depends on high velocity particles, such as electron microscopes, old-fashioned color television sets, etc.), has nevertheless not proven to be a fruitful concept in physics in the sense that it is not a concept that has served as a basis for other theoretical development. Relativistic mass, for instance, plays no role in general relativity.
For this reason, as well as for pedagogical concerns, most physicists currently prefer a different terminology when referring to the relationship between mass and energy. "Relativistic mass" is a deprecated term. The term "mass" by itself refers to the rest mass or invariant mass, and is equal to the invariant length of the relativistic momentum vector. Expressed as a formula, E² = (pc)² + (mrest c²)².
This formula applies to all particles, massless as well as massive. For photons, where mrest equals zero, it yields E = pc.
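A brief numerical sketch (in units with c = 1; the values are illustrative assumptions) shows that the invariant length computed from E and p returns the rest mass for a massive particle regardless of its speed, and zero for a photon:

```python
import math

def invariant_mass(E, p):
    # m^2 c^4 = E^2 - (pc)^2; with c = 1, m = sqrt(E^2 - p^2).
    return math.sqrt(max(E**2 - p**2, 0.0))

m, beta = 1.0, 0.8
gamma = 1 / math.sqrt(1 - beta**2)
print(invariant_mass(gamma * m, gamma * m * beta))  # 1.0, independent of beta
print(invariant_mass(2.5, 2.5))                     # 0.0 for a photon (E = p)
```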
Four-momentum
Because of the close relationship between mass and energy, the four-momentum (also called 4-momentum) is also called the energy–momentum 4-vector. Using an uppercase P to represent the four-momentum and a lowercase p to denote the spatial momentum, the four-momentum may be written as
P = (E/c, px, py, pz)
or alternatively,
P = (E, px, py, pz)
using the convention that c = 1.
Conservation laws
In physics, conservation laws state that certain particular measurable properties of an isolated physical system do not change as the system evolves over time. In 1915, Emmy Noether discovered that underlying each conservation law is a fundamental symmetry of nature. The fact that physical processes do not care where in space they take place (space translation symmetry) yields conservation of momentum, the fact that such processes do not care when they take place (time translation symmetry) yields conservation of energy, and so on. In this section, we examine the Newtonian views of conservation of mass, momentum and energy from a relativistic perspective.
Total momentum
To understand how the Newtonian view of conservation of momentum needs to be modified in a relativistic context, we examine the problem of two colliding bodies limited to a single dimension.
In Newtonian mechanics, two extreme cases of this problem may be distinguished yielding mathematics of minimum complexity:
(1) The two bodies rebound from each other in a completely elastic collision.
(2) The two bodies stick together and continue moving as a single particle. This second case is the case of completely inelastic collision.
For both cases (1) and (2), momentum, mass, and total energy are conserved. However, kinetic energy is not conserved in cases of inelastic collision. A certain fraction of the initial kinetic energy is converted to heat.
In case (2), two masses with momenta p1 = m1 v1
and p2 = m2 v2 collide to produce a single particle of conserved mass m = m1 + m2 traveling at the center of mass velocity of the original system, vcm = (m1 v1 + m2 v2)/(m1 + m2). The total momentum p = p1 + p2 is conserved.
Fig. 3-10 illustrates the inelastic collision of two particles from a relativistic perspective. The time components E1/c and E2/c add up to the total E/c of the resultant vector, meaning that energy is conserved. Likewise, the space components p1 and p2 add up to form p of the resultant vector. The four-momentum is, as expected, a conserved quantity. However, the invariant mass of the fused particle, given by the point where the invariant hyperbola of the total momentum intersects the energy axis, is not equal to the sum of the invariant masses of the individual particles that collided. Indeed, it is larger than the sum of the individual masses: M > m1 + m2.
Looking at the events of this scenario in reverse sequence, we see that non-conservation of mass is a common occurrence: when an unstable elementary particle spontaneously decays into two lighter particles, total energy is conserved, but the mass is not. Part of the mass is converted into kinetic energy.
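A compact numerical illustration of this point (units with c = 1; the masses and speeds are assumed values, not from the article): two equal rest masses collide head-on and fuse; energy and momentum are conserved, yet the invariant mass of the product exceeds the sum of the rest masses.

```python
import math

def four_momentum(m, beta):
    gamma = 1 / math.sqrt(1 - beta**2)
    return gamma * m, gamma * m * beta          # (E, p) with c = 1

E1, p1 = four_momentum(1.0, 0.6)
E2, p2 = four_momentum(1.0, -0.6)
E, p = E1 + E2, p1 + p2                         # four-momentum conservation
M = math.sqrt(E**2 - p**2)                      # invariant mass of the fused particle
print(E, p, M)                                  # 2.5, 0.0, 2.5 -- larger than 1.0 + 1.0
```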
Choice of reference frames
The freedom to choose any frame in which to perform an analysis allows us to pick one which may be particularly convenient. For analysis of momentum and energy problems, the most convenient frame is usually the "center-of-momentum frame" (also called the zero-momentum frame, or COM frame). This is the frame in which the space component of the system's total momentum is zero. Fig. 3-11 illustrates the breakup of a high speed particle into two daughter particles. In the lab frame, the daughter particles are preferentially emitted in a direction oriented along the original particle's trajectory. In the COM frame, however, the two daughter particles are emitted in opposite directions, although their masses and the magnitude of their velocities are generally not the same.
Energy and momentum conservation
In a Newtonian analysis of interacting particles, transformation between frames is simple because all that is necessary is to apply the Galilean transformation to all velocities. Since u′ = u − v, the momentum p′ = mu′ = p − mv. If the total momentum of an interacting system of particles is observed to be conserved in one frame, it will likewise be observed to be conserved in any other frame.
Conservation of momentum in the COM frame amounts to the requirement that p = 0 both before and after collision. In the Newtonian analysis, conservation of mass dictates that m = m1 + m2. In the simplified, one-dimensional scenarios that we have been considering, only one additional constraint is necessary before the outgoing momenta of the particles can be determined—an energy condition. In the one-dimensional case of a completely elastic collision with no loss of kinetic energy, the outgoing velocities of the rebounding particles in the COM frame will be precisely equal and opposite to their incoming velocities. In the case of a completely inelastic collision with total loss of kinetic energy, the outgoing velocities of the rebounding particles will be zero.
Newtonian momenta, calculated as p = mv, fail to behave properly under Lorentzian transformation. The linear transformation of velocities u′ = u − v is replaced by the highly nonlinear u′ = (u − v)/(1 − uv/c²),
so that a calculation demonstrating conservation of momentum in one frame will be invalid in other frames. Einstein was faced with either having to give up conservation of momentum, or to change the definition of momentum. This second option was what he chose.
The relativistic conservation law for energy and momentum replaces the three classical conservation laws for energy, momentum and mass. Mass is no longer conserved independently, because it has been subsumed into the total relativistic energy. This makes the relativistic conservation of energy a simpler concept than in nonrelativistic mechanics, because the total energy is conserved without any qualifications. Kinetic energy converted into heat or internal potential energy shows up as an increase in mass.
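The failure of the Newtonian definition and the success of the relativistic one can be demonstrated numerically. In the sketch below (units with c = 1; the masses and velocities are assumed illustrative values), an inelastic collision that balances in the COM frame is re-examined in a boosted frame using relativistic velocity addition: the sum of mv is no longer conserved, while the sum of γmv is.

```python
import math

def add_velocity(u, w):
    return (u + w) / (1 + u * w)               # relativistic velocity addition, c = 1

def gamma(v):
    return 1 / math.sqrt(1 - v**2)

m, u, w = 1.0, 0.6, 0.5                        # equal masses, speeds +/-u in the COM frame, boost w
M = 2 * gamma(u) * m                           # invariant mass of the fused particle (= 2.5)

v1, v2 = add_velocity(u, w), add_velocity(-u, w)
v_final = w                                    # the fused particle is at rest in the COM frame

newtonian_before = m * v1 + m * v2
newtonian_after = 2 * m * v_final              # Newtonian bookkeeping: masses simply add
relativistic_before = gamma(v1) * m * v1 + gamma(v2) * m * v2
relativistic_after = gamma(v_final) * M * v_final

print(newtonian_before, newtonian_after)       # ~0.703 vs 1.0  -- not conserved
print(relativistic_before, relativistic_after) # ~1.443 vs ~1.443 -- conserved
```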
Introduction to curved spacetime
Technical topics
Is spacetime really curved?
In Poincaré's conventionalist views, the essential criteria according to which one should select a Euclidean versus non-Euclidean geometry would be economy and simplicity. A realist would say that Einstein discovered spacetime to be non-Euclidean. A conventionalist would say that Einstein merely found it more convenient to use non-Euclidean geometry. The conventionalist would maintain that Einstein's analysis said nothing about what the geometry of spacetime really is.
That said, two questions naturally arise:
Is it possible to represent general relativity in terms of flat spacetime?
Are there any situations where a flat spacetime interpretation of general relativity may be more convenient than the usual curved spacetime interpretation?
In response to the first question, a number of authors, including Deser, Grishchuk, Rosen, and Weinberg, have provided various formulations of gravitation as a field in a flat manifold. Those theories are variously called "bimetric gravity", the "field-theoretical approach to general relativity", and so forth. Kip Thorne has provided a popular review of these theories.
The flat spacetime paradigm posits that matter creates a gravitational field that causes rulers to shrink when they are turned from circumferential orientation to radial, and that causes the ticking rates of clocks to dilate. The flat spacetime paradigm is fully equivalent to the curved spacetime paradigm in that they both represent the same physical phenomena. However, their mathematical formulations are entirely different. Working physicists routinely switch between using curved and flat spacetime techniques depending on the requirements of the problem. The flat spacetime paradigm is convenient when performing approximate calculations in weak fields. Hence, flat spacetime techniques tend to be used when solving gravitational wave problems, while curved spacetime techniques tend to be used in the analysis of black holes.
Asymptotic symmetries
The spacetime symmetry group for Special Relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at lightlike infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields.
What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances.
Riemannian geometry
Curved manifolds
For physical reasons, a spacetime continuum is mathematically defined as a four-dimensional, smooth, connected Lorentzian manifold (M, g). This means the smooth Lorentz metric g has signature (3,1). The metric determines the geometry of spacetime, as well as determining the geodesics of particles and light beams. About each point (event) on this manifold, coordinate charts are used to represent observers in reference frames. Usually, Cartesian coordinates are used. Moreover, for simplicity's sake, units of measurement are usually chosen such that the speed of light is equal to 1.
A reference frame (observer) can be identified with one of these coordinate charts; any such observer can describe any event p. Another reference frame may be identified by a second coordinate chart about p. Two observers (one in each reference frame) may describe the same event p but obtain different descriptions.
Usually, many overlapping coordinate charts are needed to cover a manifold. Given two coordinate charts, one containing p (representing an observer) and another containing p (representing another observer), the intersection of the charts represents the region of spacetime in which both observers can measure physical quantities and hence compare results. The relation between the two sets of measurements is given by a non-singular coordinate transformation on this intersection. The idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense, as this is how one actually collects physical data—locally.
For example, two observers, one of whom is on Earth while the other is on a fast rocket to Jupiter, may observe a comet crashing into Jupiter (this is the event p). In general, they will disagree about the exact location and timing of this impact, i.e., they will have different 4-tuples (as they are using different coordinate systems). Although their kinematic descriptions will differ, dynamical (physical) laws, such as momentum conservation and the first law of thermodynamics, will still hold. In fact, relativity theory requires more than this in the sense that it stipulates these (and all other physical) laws must take the same form in all coordinate systems. This introduces tensors into relativity, by which all physical quantities are represented.
Geodesics are said to be timelike, null, or spacelike if the tangent vector to one point of the geodesic is of this nature. Paths of particles and light beams in spacetime are represented by timelike and null (lightlike) geodesics, respectively.
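Classifying directions in spacetime by the sign of the squared interval can be made concrete with a small sketch, assuming a Lorentz metric of signature (−, +, +, +) and units with c = 1; these conventions are chosen here for illustration and are not taken from the article.

```python
def interval_squared(dt, dx, dy, dz):
    # Squared interval of a tangent vector under a (-, +, +, +) Lorentz metric, c = 1.
    return -dt**2 + dx**2 + dy**2 + dz**2

def classify(dt, dx, dy, dz, tol=1e-12):
    s2 = interval_squared(dt, dx, dy, dz)
    if abs(s2) < tol:
        return "null (lightlike)"
    return "timelike" if s2 < 0 else "spacelike"

print(classify(1.0, 0.3, 0.0, 0.0))   # timelike: possible direction of a particle worldline
print(classify(1.0, 1.0, 0.0, 0.0))   # null: direction of a light ray
print(classify(0.5, 2.0, 0.0, 0.0))   # spacelike
```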
Privileged character of 3+1 spacetime
See also
Basic introduction to the mathematics of curved spacetime
Complex spacetime
Einstein's thought experiments
Four-dimensionalism
Geography
Global spacetime structure
List of spacetimes
Metric space
Philosophy of space and time
Present
Time geography
Notes
Additional details
References
Further reading
George F. Ellis and Ruth M. Williams (1992) Flat and curved space–times. Oxford University Press.
Lorentz, H. A., Einstein, Albert, Minkowski, Hermann, and Weyl, Hermann (1952) The Principle of Relativity: A Collection of Original Memoirs. Dover.
Lucas, John Randolph (1973) A Treatise on Time and Space. London: Methuen.
External links
Albert Einstein on space–time – Albert Einstein's 1926 article for the 13th edition of the Encyclopædia Britannica (historical).
Encyclopedia of Space–time and gravitation – expert articles at Scholarpedia.
Stanford Encyclopedia of Philosophy: "Space and Time: Inertial Frames" by Robert DiSalle.
Concepts in physics
Theoretical physics
Theory of relativity
Time
Time in physics
Conceptual models | Spacetime | [
"Physics",
"Mathematics"
] | 16,030 | [
"Physical phenomena",
"Time in physics",
"Physical quantities",
"Time",
"Vector spaces",
"Quantity",
"Theoretical physics",
"Space (mathematics)",
"nan",
"Theory of relativity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
28,927 | https://en.wikipedia.org/wiki/Stellar%20classification | In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines. Each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary mainly due to the temperature of the photosphere, although in some cases there are true abundance differences. The spectral class of a star is a short code primarily summarizing the ionization state, giving an objective measure of the photosphere's temperature.
Most stars are currently classified under the Morgan–Keenan (MK) system using the letters O, B, A, F, G, K, and M, a sequence from the hottest (O type) to the coolest (M type). Each letter class is then subdivided using a numeric digit with 0 being hottest and 9 being coolest (e.g., A8, A9, F0, and F1 form a sequence from hotter to cooler). The sequence has been expanded with three classes for other stars that do not fit in the classical system: W, S and C. Some stellar remnants or objects of deviating mass have also been assigned letters: D for white dwarfs and L, T and Y for brown dwarfs (and exoplanets).
In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for subgiants, class V for main-sequence stars, class sd (or VI) for subdwarfs, and class D (or VII) for white dwarfs. The full spectral class for the Sun is then G2V, indicating a main-sequence star with a surface temperature around 5,800 K.
Conventional colour description
The conventional colour description takes into account only the peak of the stellar spectrum. In actuality, however, stars radiate in all parts of the spectrum. Because all spectral colours combined appear white, the actual apparent colours the human eye would observe are far lighter than the conventional colour descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colours within the spectrum can be misleading. Excluding colour-contrast effects in dim light, in typical viewing conditions there are no green, cyan, indigo, or violet stars. "Yellow" dwarfs such as the Sun are white, "red" dwarfs are a deep shade of yellow/orange, and "brown" dwarfs do not literally appear brown, but hypothetically would appear dim red or grey/black to a nearby observer.
Modern classification
The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class (from the older Harvard spectral classification, which did not include luminosity) and a luminosity class using Roman numerals as explained below, forming the star's spectral type.
Other modern stellar classification systems, such as the UBV system, are based on color indices—the measured differences in three or more color magnitudes. Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters (e.g. Ultraviolet, Blue and Visual).
Harvard spectral classification
The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified the prior alphabetical system by Draper (see History). Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions. Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars – in particular, newly-formed white dwarfs – can have surface temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are normally listed from hottest to coldest.
A common mnemonic for remembering the order of the spectral type letters, from hottest to coolest, is "Oh, Be A Fine Guy/Girl: Kiss Me!", or another one is "Our Bright Astronomers Frequently Generate Killer Mnemonics!".
The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals (0–9), where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; for example, the star Mu Normae is classified as O9.7. The Sun is classified as G2.
The fact that the Harvard classification of a star indicated its surface or photospheric temperature (or more precisely, its effective temperature) was not fully understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated (by 1914), this was generally suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere, then to stellar spectra.
Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is actually a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon (largely subjective) estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals.
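Because the letter sequence is, at bottom, a temperature sequence, a rough mapping from effective temperature to Harvard class can be sketched as below. The temperature boundaries used here are approximate round numbers commonly quoted for main-sequence stars, not values taken from this article, and the function name is illustrative.

```python
def harvard_class(teff_kelvin):
    # Approximate, rounded photospheric-temperature boundaries (assumed values).
    boundaries = [(30000, "O"), (10000, "B"), (7500, "A"),
                  (6000, "F"), (5200, "G"), (3700, "K"), (2400, "M")]
    for lower_bound, letter in boundaries:
        if teff_kelvin >= lower_bound:
            return letter
    return "cooler than class M (L, T, or Y object)"

print(harvard_class(5800))   # "G" -- roughly the Sun's effective temperature
print(harvard_class(9600))   # "A" -- roughly Vega's effective temperature
```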
Yerkes spectral classification
The Yerkes spectral classification, also called the MK, or Morgan-Keenan (alternatively referred to as the MKK, or Morgan-Keenan-Kellman) system from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan, Philip C. Keenan, and Edith Kellman from Yerkes Observatory. This two-dimensional (temperature and luminosity) classification scheme is based on spectral lines sensitive to stellar temperature and surface gravity, which is related to luminosity (whilst the Harvard classification is based on just surface temperature). Later, in 1953, after some revisions to the list of standard stars and classification criteria, the scheme was named the Morgan–Keenan classification, or MK, which remains in use today.
Denser stars with higher surface gravity exhibit greater pressure broadening of spectral lines. The gravity, and hence the pressure, on the surface of a giant star is much lower than for a dwarf star because the radius of the giant is much greater than a dwarf of similar mass. Therefore, differences in the spectrum can be interpreted as luminosity effects and a luminosity class can be assigned purely from examination of the spectrum.
A number of different luminosity classes are distinguished: 0 or Ia+ for hypergiants, I (subdivided into Ia, Iab, and Ib) for supergiants, II for bright giants, III for normal giants, IV for subgiants, V for main-sequence stars (dwarfs), sd (or VI) for subdwarfs, and D (or VII) for white dwarfs.
Marginal cases are allowed; for example, a star may be either a supergiant or a bright giant, or may be in between the subgiant and main-sequence classifications.
In these cases, two special symbols are used:
A slash (/) means that a star is either one class or the other.
A dash (-) means that the star is in between the two classes.
For example, a star classified as A3-4III/IV would be in between spectral types A3 and A4, while being either a giant star or a subgiant.
Sub-dwarf classes have also been used: VI for sub-dwarfs (stars slightly less luminous than the main sequence).
Nominal luminosity class VII (and sometimes higher numerals) is now rarely used for white dwarf or "hot sub-dwarf" classes, since the temperature-letters of the main sequence and giant stars no longer apply to white dwarfs.
Occasionally, letters a and b are applied to luminosity classes other than supergiants; for example, a giant star slightly less luminous than typical may be given a luminosity class of IIIb, while a luminosity class IIIa indicates a star slightly brighter than a typical giant.
A sample of extreme V stars with strong absorption in He II λ4686 spectral lines have been given the Vz designation. An example star is HD 93129 B.
Spectral peculiarities
Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum.
For example, 59 Cygni is listed as spectral type B1.5Vnne, indicating a spectrum with the general classification B1.5V, as well as very broad absorption lines and certain emission lines.
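The composite nature of such designations can be illustrated with a naive parser. This is only a sketch: the regular expression below handles simple forms like "B1.5Vnne" or "G2V" and ignores the many slash, dash, and prefix forms found in real catalogues.

```python
import re

def parse_spectral_type(designation):
    # Split a simple MK designation into temperature class, subclass,
    # luminosity class, and lower-case peculiarity suffixes.
    match = re.match(
        r"([OBAFGKM])(\d(?:\.\d)?)?(0|I[ab+]*|II|III|IV|V[Iz]?)?([a-z]+)?$",
        designation)
    if not match:
        return None
    letter, subclass, luminosity, peculiarities = match.groups()
    return {"class": letter, "subclass": subclass,
            "luminosity": luminosity, "peculiarities": peculiarities}

print(parse_spectral_type("B1.5Vnne"))  # {'class': 'B', 'subclass': '1.5', 'luminosity': 'V', 'peculiarities': 'nne'}
print(parse_spectral_type("G2V"))       # {'class': 'G', 'subclass': '2', 'luminosity': 'V', 'peculiarities': None}
```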
History
The reason for the odd arrangement of letters in the Harvard classification is historical, having evolved from the earlier Secchi classes and been progressively modified as understanding improved.
Secchi classes
During the 1860s and 1870s, pioneering stellar spectroscopist Angelo Secchi created the Secchi classes in order to classify observed spectra. By 1866, he had developed three classes of stellar spectra: Class I for white and blue stars with broad, strong hydrogen lines (roughly corresponding to the modern class A and early F), Class II for yellow stars with weaker hydrogen lines and evident metallic lines (roughly modern G and K), and Class III for orange to red stars with complex band spectra (roughly modern M).
In the late 1890s, this classification began to be superseded by the Harvard classification, which is discussed in the remainder of this article.
The Roman numerals used for Secchi classes should not be confused with the completely unrelated Roman numerals used for Yerkes luminosity classes and the proposed neutron star classes.
Draper system
In the 1880s, the astronomer Edward C. Pickering began to make a survey of stellar spectra at the Harvard College Observatory, using the objective-prism method. A first result of this work was the Draper Catalogue of Stellar Spectra, published in 1890. Williamina Fleming classified most of the spectra in this catalogue and was credited with classifying over 10,000 featured stars and discovering 10 novae and more than 200 variable stars. With the help of the Harvard computers, especially Williamina Fleming, the first iteration of the Henry Draper catalogue was devised to replace the Roman-numeral scheme established by Angelo Secchi.
The catalogue used a scheme in which the previously used Secchi classes (I to V) were subdivided into more specific classes, given letters from A to P. Also, the letter Q was used for stars not fitting into any other class. Fleming worked with Pickering to differentiate 17 different classes based on the intensity of hydrogen spectral lines, which causes variation in the wavelengths emanated from stars and results in variation in color appearance. The spectra in class A tended to produce the strongest hydrogen absorption lines while spectra in class O produced virtually no visible lines. The lettering system displayed the gradual decrease in hydrogen absorption in the spectral classes when moving down the alphabet. This classification system was later modified by Annie Jump Cannon and Antonia Maury to produce the Harvard spectral classification scheme.
The old Harvard system (1897)
In 1897, another astronomer at Harvard, Antonia Maury, placed the Orion subtype of Secchi class I ahead of the remainder of Secchi class I, thus placing the modern type B ahead of the modern type A. She was the first to do so, although she did not use lettered spectral types, but rather a series of twenty-two types numbered from I–XXII.
Because the 22 Roman numeral groupings did not account for additional variations in spectra, three additional divisions were made to further specify differences: Lowercase letters were added to differentiate relative line appearance in spectra; the lines were defined as:
(a): average width
(b): hazy
(c): sharp
Antonia Maury published her own stellar classification catalogue in 1897 called "Spectra of Bright Stars Photographed with the 11 inch Draper Telescope as Part of the Henry Draper Memorial", which included 4,800 photographs and Maury's analyses of 681 bright northern stars. This was the first instance in which a woman was credited for an observatory publication.
The current Harvard system (1912)
In 1901, Annie Jump Cannon returned to the lettered types, but dropped all letters except O, B, A, F, G, K, M, and N used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one fifth of the way from F to G, and so on.
Finally, by 1912, Cannon had changed the types B, A, B5A, F2G, etc. to B0, A0, B5, F2, etc. This is essentially the modern form of the Harvard classification system. This system was developed through the analysis of spectra on photographic plates, which could convert light emanated from stars into a readable spectrum.
Mount Wilson classes
A luminosity classification known as the Mount Wilson system was used to distinguish between stars of different luminosities. This notation system is still sometimes seen on modern spectra.
sd: subdwarf
d: dwarf
sg: subgiant
g: giant
c: supergiant
Spectral types
The stellar classification system is taxonomic, based on type specimens, similar to classification of species in biology: The categories are defined by one or more standard stars for each category and sub-category, with an associated description of the distinguishing features.
"Early" and "late" nomenclature
Stars are often referred to as early or late types. "Early" is a synonym for hotter, while "late" is a synonym for cooler.
Depending on the context, "early" and "late" may be absolute or relative terms. "Early" as an absolute term would therefore refer to O or B, and possibly A stars. As a relative reference it relates to stars hotter than others, such as "early K" being perhaps K0, K1, K2 and K3.
"Late" is used in the same way, with an unqualified use of the term indicating stars with spectral types such as K and M, but it can also be used for stars that are cool relative to other stars, as in using "late G" to refer to G7, G8, and G9.
In the relative sense, "early" means a lower Arabic numeral following the class letter, and "late" means a higher number.
This obscure terminology is a hold-over from a late nineteenth century model of stellar evolution, which supposed that stars were powered by gravitational contraction via the Kelvin–Helmholtz mechanism, which is now known to not apply to main-sequence stars. If that were true, then stars would start their lives as very hot "early-type" stars and then gradually cool down into "late-type" stars. This mechanism provided ages of the Sun that were much smaller than what is observed in the geologic record, and was rendered obsolete by the discovery that stars are powered by nuclear fusion. The terms "early" and "late" were carried over, beyond the demise of the model they were based on.
Class O
O-type stars are very hot and extremely luminous, with most of their radiated output in the ultraviolet range. These are the rarest of all main-sequence stars. About 1 in 3,000,000 (0.00003%) of the main-sequence stars in the solar neighborhood are O-type stars. Some of the most massive stars lie within this spectral class. O-type stars frequently have complicated surroundings that make measurement of their spectra difficult.
O-type spectra formerly were defined by the ratio of the strength of the He II λ4541 relative to that of He I λ4471, where λ is the radiation wavelength. Spectral type O7 was defined to be the point at which the two intensities are equal, with the He I line weakening towards earlier types. Type O3 was, by definition, the point at which said line disappears altogether, although it can be seen very faintly with modern technology. Due to this, the modern definition uses the ratio of the nitrogen line N IV λ4058 to N III λλ4634-40-42.
O-type stars have dominant lines of absorption and sometimes emission for He II lines, prominent ionized (Si IV, O III, N III, and C III) and neutral helium lines, strengthening from O5 to O9, and prominent hydrogen Balmer lines, although not as strong as in later types. Higher-mass O-type stars do not retain extensive atmospheres due to the extreme velocity of their stellar wind, which may reach 2,000 km/s. Because they are so massive, O-type stars have very hot cores and burn through their hydrogen fuel very quickly, so they are the first stars to leave the main sequence.
When the MKK classification scheme was first described in 1943, the only subtypes of class O used were O5 to O9.5. The MKK scheme was extended to O9.7 in 1971 and O4 in 1978, and new classification schemes that add types O2, O3, and O3.5 have subsequently been introduced.
Spectral standards:
O7V – S Monocerotis
O9V – 10 Lacertae
Class B
B-type stars are very luminous and blue. Their spectra have neutral helium lines, which are most prominent at the B2 subclass, and moderate hydrogen lines. As O- and B-type stars are so energetic, they only live for a relatively short time. Thus, due to the low probability of kinematic interaction during their lifetime, they are unable to stray far from the area in which they formed, apart from runaway stars.
The transition from class O to class B was originally defined to be the point at which the He II λ4541 disappears. However, with modern equipment, the line is still apparent in the early B-type stars. Today for main-sequence stars, the B class is instead defined by the intensity of the He I violet spectrum, with the maximum intensity corresponding to class B2. For supergiants, lines of silicon are used instead; the Si IV λ4089 and Si III λ4552 lines are indicative of early B. At mid-B, the intensity of the latter relative to that of Si II λλ4128-30 is the defining characteristic, while for late B, it is the intensity of Mg II λ4481 relative to that of He I λ4471.
These stars tend to be found in their originating OB associations, which are associated with giant molecular clouds. The Orion OB1 association occupies a large portion of a spiral arm of the Milky Way and contains many of the brighter stars of the constellation Orion. About 1 in 800 (0.125%) of the main-sequence stars in the solar neighborhood are B-type main-sequence stars. B-type stars are relatively uncommon and the closest is Regulus, at around 80 light years.
Massive yet non-supergiant stars known as Be stars have been observed to show one or more Balmer lines in emission, the hydrogen emission series produced by these stars being of particular interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant attrition of stellar mass as the objects rotate unusually rapidly.
Objects known as B[e] stars – or B(e) stars for typographic reasons – possess distinctive neutral or low-ionisation forbidden emission lines, that is, lines arising from transitions that are suppressed by the usual quantum-mechanical selection rules and so appear only under special conditions such as very low densities.
Spectral standards:
B0V – Upsilon Orionis
B0Ia – Alnilam
B2Ia – Chi2 Orionis
B2Ib – 9 Cephei
B3V – Eta Ursae Majoris
B3V – Eta Aurigae
B3Ia – Omicron2 Canis Majoris
B5Ia – Eta Canis Majoris
B8Ia – Rigel
Class A
A-type stars are among the more common naked eye stars, and are white or bluish-white. They have strong hydrogen lines, at a maximum by A0, and also lines of ionized metals (Fe II, Mg II, Si II) at a maximum at A5. The presence of Ca II lines is notably strengthening by this point. About 1 in 160 (0.625%) of the main-sequence stars in the solar neighborhood are A-type stars, which includes 9 stars within 15 parsecs.
Spectral standards:
A0Van – Gamma Ursae Majoris
A0Va – Vega
A0Ib – Eta Leonis
A0Ia – HD 21389
A1V – Sirius A
A2Ia – Deneb
A3Va – Fomalhaut
Class F
F-type stars have strengthening spectral lines H and K of Ca II. Neutral metals (Fe I, Cr I) begin to gain on ionized metal lines by late F. Their spectra are characterized by weaker hydrogen lines and ionized metals. Their color is white. About 1 in 33 (3.03%) of the main-sequence stars in the solar neighborhood are F-type stars, including Procyon A, which lies within 20 light-years.
Spectral standards:
F0IIIa – Zeta Leonis
F0Ib – Alpha Leporis
F1V - 37 Ursae Majoris
F2V – 78 Ursae Majoris
F7V - Iota Piscium
F9V - Beta Virginis
F9V - HD 10647
Class G
G-type stars, including the Sun, have prominent spectral lines H and K of Ca II, which are most pronounced at G2. They have even weaker hydrogen lines than F, but along with the ionized metals, they have neutral metals. There is a prominent spike in the G band of CN molecules. Class G main-sequence stars make up about 7.5%, nearly one in thirteen, of the main-sequence stars in the solar neighborhood. There are 21 G-type stars within 10 pc.
Class G contains the "Yellow Evolutionary Void". Supergiant stars often swing between O or B (blue) and K or M (red). While they do this, they do not stay for long in the unstable yellow supergiant class.
Spectral standards:
G0V – Beta Canum Venaticorum
G0IV – Eta Boötis
G0Ib – Beta Aquarii
G2V – Sun
G5V – Kappa1 Ceti
G5IV – Mu Herculis
G5Ib – 9 Pegasi
G8V – 61 Ursae Majoris
G8IV – Beta Aquilae
G8IIIa – Kappa Geminorum
G8IIIab – Epsilon Virginis
G8Ib – Epsilon Geminorum
Class K
K-type stars are orangish stars that are slightly cooler than the Sun. They make up about 12% of the main-sequence stars in the solar neighborhood. There are also giant K-type stars, which range from hypergiants like RW Cephei, to giants and supergiants, such as Arcturus, whereas orange dwarfs, like Alpha Centauri B, are main-sequence stars.
They have extremely weak hydrogen lines, if those are present at all, and mostly neutral metals (Mn I, Fe I, Si I). By late K, molecular bands of titanium oxide become present. Mainstream theories, which emphasize the lower levels of harmful radiation and the greater longevity of these stars, suggest that such stars offer the best chances for heavily evolved life to develop on orbiting planets (if such life is directly analogous to Earth's), since they combine a broad habitable zone with much lower levels of harmful emission than the stars with the broadest such zones.
Spectral standards:
K0V – Sigma Draconis
K0III – Pollux
K0III – Epsilon Cygni
K2V – Epsilon Eridani
K2III – Kappa Ophiuchi
K3III – Rho Boötis
K5V – 61 Cygni A
K5III – Gamma Draconis
Class M
Class M stars are by far the most common. About 76% of the main-sequence stars in the solar neighborhood are class M stars. However, class M main-sequence stars (red dwarfs) have such low luminosities that none are bright enough to be seen with the unaided eye, unless under exceptional conditions. The brightest-known M class main-sequence star is Lacaille 8760, class M0V, with magnitude 6.7 (the limiting magnitude for typical naked-eye visibility under good conditions being typically quoted as 6.5), and it is extremely unlikely that any brighter examples will be found.
Although most class M stars are red dwarfs, most of the largest-known supergiant stars in the Milky Way are class M stars, such as VY Canis Majoris, VV Cephei, Antares, and Betelgeuse. Furthermore, some larger, hotter brown dwarfs are late class M, usually in the range of M6.5 to M9.5.
The spectrum of a class M star contains lines from oxide molecules (in the visible spectrum, especially TiO) and all neutral metals, but absorption lines of hydrogen are usually absent. TiO bands can be strong in class M stars, usually dominating their visible spectrum by about M5. Vanadium(II) oxide bands become present by late M.
Spectral standards:
M0IIIa – Beta Andromedae
M2III – Chi Pegasi
M1-M2Ia-Iab – Betelgeuse
M2Ia – Mu Cephei ("Herschel's garnet")
Extended spectral types
A number of new spectral types have been introduced for newly discovered types of stars.
Hot blue emission star classes
Spectra of some very hot and bluish stars exhibit marked emission lines from carbon or nitrogen, or sometimes oxygen.
Class WR (or W): Wolf–Rayet
Once included as type O stars, the Wolf–Rayet stars of class W or WR are notable for spectra lacking hydrogen lines. Instead their spectra are dominated by broad emission lines of highly ionized helium, nitrogen, carbon, and sometimes oxygen. They are thought to mostly be dying supergiants with their hydrogen layers blown away by stellar winds, thereby directly exposing their hot helium shells. Class WR is further divided into subclasses according to the relative strength of nitrogen and carbon emission lines in their spectra (and outer layers).
WR spectra range is listed below:
WN – spectrum dominated by N III-V and He I-II lines
WNE (WN2 to WN5 with some WN6) – hotter or "early"
WNL (WN7 to WN9 with some WN6) – cooler or "late"
Extended WN classes WN10 and WN11 sometimes used for the Ofpe/WN9 stars
h tag used (e.g. WN9h) for WR with hydrogen emission and ha (e.g. WN6ha) for both hydrogen emission and absorption
WN/C – WN stars plus strong C IV lines, intermediate between WN and WC stars
WC – spectrum with strong C II-IV lines
WCE (WC4 to WC6) – hotter or "early"
WCL (WC7 to WC9) – cooler or "late"
WO (WO1 to WO4) – strong O VI lines, extremely rare, extension of the WCE class into incredibly hot temperatures (up to 200 kK or more)
Although the central stars of most planetary nebulae (CSPNe) show O-type spectra, around 10% are hydrogen-deficient and show WR spectra. These are low-mass stars and to distinguish them from the massive Wolf–Rayet stars, their spectra are enclosed in square brackets: e.g. [WC]. Most of these show [WC] spectra, some [WO], and very rarely [WN].
Slash stars
The slash stars are O-type stars with WN-like lines in their spectra. The name "slash" comes from their printed spectral type having a slash in it (e.g. "Of/WNL").
There is a secondary group found with these spectra, a cooler, "intermediate" group designated "Ofpe/WN9". These stars have also been referred to as WN10 or WN11, but that has become less popular with the realisation of the evolutionary difference from other Wolf–Rayet stars. Recent discoveries of even rarer stars have extended the range of slash stars as far as O2-3.5If*/WN5-7, which are even hotter than the original "slash" stars.
Magnetic O stars
These are O stars with strong magnetic fields; their designation is Of?p.
Cool red and brown dwarf classes
The new spectral types L, T, and Y were created to classify infrared spectra of cool stars. This includes both red dwarfs and brown dwarfs that are very faint in the visible spectrum.
Brown dwarfs, stars that do not undergo hydrogen fusion, cool as they age and so progress to later spectral types. Brown dwarfs start their lives with M-type spectra and will cool through the L, T, and Y spectral classes, faster the less massive they are; the highest-mass brown dwarfs cannot have cooled to Y or even T dwarfs within the age of the universe. Because this leads to an unresolvable overlap between spectral type, effective temperature, and luminosity for some masses and ages of L-, T-, and Y-type objects, no distinct temperature or luminosity values can be assigned to these types.
Class L
Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra.
Due to low surface gravity in giant stars, TiO- and VO-bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for these L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis while in the height of its luminous red nova eruption.
Class T
Class T dwarfs are cool brown dwarfs with surface temperatures between approximately 550 and 1,300 K. Their emission peaks in the infrared. Methane is prominent in their spectra.
Study of the number of proplyds (protoplanetary disks, clumps of gas in nebulae from which stars and planetary systems are formed) indicates that the number of stars in the galaxy should be several orders of magnitude higher than what was previously conjectured. It is theorized that these proplyds are in a race with each other. The first one to form will become a protostar, a very violent object that will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main-sequence stars or brown dwarfs of the L and T classes, which are quite invisible to us.
Class Y
Brown dwarfs of spectral class Y are cooler than those of spectral class T and have qualitatively different spectra from them. A total of 17 objects have been placed in class Y as of August 2013. Although such dwarfs have been modelled and detected within forty light-years by the Wide-field Infrared Survey Explorer (WISE), there is no well-defined spectral sequence yet and no prototypes. Nevertheless, several objects have been proposed as spectral classes Y0, Y1, and Y2.
The spectra of these prospective Y objects display absorption around 1.55 micrometers. Delorme et al. have suggested that this feature is due to absorption from ammonia, and that this should be taken as the indicative feature for the T-Y transition. In fact, this ammonia-absorption feature is the main criterion that has been adopted to define this class. However, this feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature.
The latest brown dwarf proposed for the Y spectral type, WISE 1828+2650, is a > Y2 dwarf with an effective temperature originally estimated around 300 K, the temperature of the human body. Parallax measurements have, however, since shown that its luminosity is inconsistent with it being colder than ~400 K. The coolest Y dwarf currently known is WISE 0855−0714 with an approximate temperature of 250 K, and a mass just seven times that of Jupiter.
The mass range for Y dwarfs is 9–25 Jupiter masses, but young objects might reach below one Jupiter mass (although they cool to become planets), which means that Y class objects straddle the 13 Jupiter mass deuterium-fusion limit that marks the current IAU division between brown dwarfs and planets.
Peculiar brown dwarfs
Young brown dwarfs have low surface gravities because they have larger radii and lower masses compared to the field stars of similar spectral type. These sources are marked by the letter beta (β) for intermediate surface gravity and gamma (γ) for low surface gravity. Indications of low surface gravity are weak CaH, K and Na lines, as well as strong VO lines. Alpha (α) stands for normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta (δ). The suffix "pec" stands for peculiar. The peculiar suffix is still used for other features that are unusual and summarizes different properties, indicative of low surface gravity, subdwarfs and unresolved binaries.
The prefix sd stands for subdwarf and only includes cool subdwarfs. This prefix indicates a low metallicity and kinematic properties that are more similar to halo stars than to disk stars. Subdwarfs appear bluer than disk objects.
The red suffix describes objects with red color, but an older age. This is not interpreted as low surface gravity, but as a high dust content. The blue suffix describes objects with blue near-infrared colors that cannot be explained with low metallicity. Some are explained as L+T binaries; others, such as 2MASS J11263991−5003550, are not binaries and are explained by thin and/or large-grained clouds.
Late giant carbon-star classes
Carbon-stars are stars whose spectra indicate production of carbon – a byproduct of triple-alpha helium fusion. With increased carbon abundance, and some parallel s-process heavy element production, the spectra of these stars become increasingly deviant from the usual late spectral classes G, K, and M. Equivalent classes for carbon-rich stars are S and C.
The giants among those stars are presumed to produce this carbon themselves, but some stars in this class are double stars, whose odd atmosphere is suspected of having been transferred from a companion that is now a white dwarf, when the companion was a carbon-star.
Class C
Originally classified as R and N stars, these are also known as carbon stars. These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. The old R and N classes ran parallel to the normal classification system from roughly mid-G to late M. These have more recently been remapped into a unified carbon classifier C with N0 starting at roughly C6. Another subset of cool carbon stars are the C–J-type stars, which are characterized by the strong presence of molecules of 13CN in addition to those of 12CN. A few main-sequence carbon stars are known, but the overwhelming majority of known carbon stars are giants or supergiants. There are several subclasses:
C-R – Formerly its own class (R) representing the carbon star equivalent of late G- to early K-type stars.
C-N – Formerly its own class representing the carbon star equivalent of late K- to M-type stars.
C-J – A subtype of cool C stars with a high content of 13C.
C-H – Population II analogues of the C-R stars.
C-Hd – Hydrogen-deficient carbon stars, similar to late G supergiants with CH and C2 bands added.
Class S
Class S stars form a continuum between class M stars and carbon stars. Those most similar to class M stars have strong ZrO absorption bands analogous to the TiO bands of class M stars, whereas those most similar to carbon stars have strong sodium D lines and weak C2 bands. Class S stars have excess amounts of zirconium and other elements produced by the s-process, and have more similar carbon and oxygen abundances to class M or carbon stars. Like carbon stars, nearly all known class S stars are asymptotic-giant-branch stars.
The spectral type is formed by the letter S and a number between zero and ten. This number corresponds to the temperature of the star and approximately follows the temperature scale used for class M giants. The most common types are S3 to S5. The non-standard designation S10 has only been used for the star Chi Cygni when at an extreme minimum.
The basic classification is usually followed by an abundance indication, following one of several schemes: S2,5; S2/5; S2 Zr4 Ti2; or S2*5. A number following a comma is a scale between 1 and 9 based on the ratio of ZrO and TiO. A number following a slash is a more-recent but less-common scheme designed to represent the ratio of carbon to oxygen on a scale of 1 to 10, where a 0 would be an MS star. Intensities of zirconium and titanium may be indicated explicitly. Also occasionally seen is a number following an asterisk, which represents the strength of the ZrO bands on a scale from 1 to 5.
Classes MS and SC: Intermediate carbon-related classes
In between the M and S classes, border cases are named MS stars. In a similar way, border cases between the S and C-N classes are named SC or CS. The sequence M → MS → S → SC → C-N is hypothesized to be a sequence of increased carbon abundance with age for carbon stars in the asymptotic giant branch.
White dwarf classifications
The class D (for Degenerate) is the modern classification used for white dwarfs—low-mass stars that are no longer undergoing nuclear fusion and have shrunk to planetary size, slowly cooling down. Class D is further divided into spectral types DA, DB, DC, DO, DQ, DX, and DZ. The letters are not related to the letters used in the classification of other stars, but instead indicate the composition of the white dwarf's visible outer layer or atmosphere.
The white dwarf types are as follows:
DA – a hydrogen-rich atmosphere or outer layer, indicated by strong Balmer hydrogen spectral lines.
DB – a helium-rich atmosphere, indicated by neutral helium, He I, spectral lines.
DO – a helium-rich atmosphere, indicated by ionized helium, He II, spectral lines.
DQ – a carbon-rich atmosphere, indicated by atomic or molecular carbon lines.
DZ – a metal-rich atmosphere, indicated by metal spectral lines (a merger of the obsolete white dwarf spectral types, DG, DK, and DM).
DC – no strong spectral lines indicating one of the above categories.
DX – spectral lines are insufficiently clear to classify into one of the above categories.
The type is followed by a number giving the white dwarf's surface temperature. This number is a rounded form of 50400/Teff, where Teff is the effective surface temperature, measured in kelvins. Originally, this number was rounded to one of the digits 1 through 9, but more recently fractional values have started to be used, as well as values below 1 and above 9 (for example, DA1.5 for IK Pegasi B).
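The index can be computed directly from the effective temperature. The short sketch below is illustrative only: the rounding step of 0.5 and the effective temperature used for IK Pegasi B (about 35,500 K) are assumed values, not figures taken from this article.

```python
def white_dwarf_index(teff_kelvin, step=0.5):
    # Rounded form of 50400 / Teff, here rounded to the nearest half.
    raw = 50400 / teff_kelvin
    return round(raw / step) * step

print(white_dwarf_index(35500))   # 1.5 -> a hydrogen-rich example would be written DA1.5
print(white_dwarf_index(25200))   # 2.0 -> e.g. DA2
```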
Two or more of the type letters may be used to indicate a white dwarf that displays more than one of the spectral features above.
Extended white dwarf spectral types
DAB – a hydrogen- and helium-rich white dwarf displaying neutral helium lines
DAO – a hydrogen- and helium-rich white dwarf displaying ionized helium lines
DAZ – a hydrogen-rich metallic white dwarf
DBZ – a helium-rich metallic white dwarf
A different set of spectral peculiarity symbols is used for white dwarfs than for other types of stars; for example, P indicates a magnetic white dwarf with detectable polarization, H a magnetic white dwarf without detectable polarization, E the presence of emission lines, and V a variable white dwarf.
Luminous Blue Variables
Luminous blue variables (LBVs) are rare, massive and evolved stars that show unpredictable and sometimes dramatic variations in their spectra and brightness. During their "quiescent" states, they are usually similar to B-type stars, although with unusual spectral lines. During outbursts, they are more similar to F-type stars, with significantly lower temperatures. Many papers treat LBV as its own spectral type.
Spectral types of non-single objects: Classes P and Q
Finally, the classes P and Q are left over from the system developed by Cannon for the Henry Draper Catalogue. They are occasionally used for certain objects, not associated with a single star: Type P objects are stars within planetary nebulae (typically young white dwarfs or hydrogen-poor M giants); type Q objects are novae.
Stellar remnants
Stellar remnants are objects associated with the death of stars. Included in the category are white dwarfs, and as can be seen from the radically different classification scheme for class D, stellar remnants are difficult to fit into the MK system.
The Hertzsprung–Russell diagram, which the MK system is based on, is observational in nature so these remnants cannot easily be plotted on the diagram, or cannot be placed at all. Old neutron stars are relatively small and cold, and would fall on the far right side of the diagram. Planetary nebulae are dynamic and tend to quickly fade in brightness as the progenitor star transitions to the white dwarf branch. If shown, a planetary nebula would be plotted to the right of the diagram's upper right quadrant. A black hole emits no visible light of its own, and therefore would not appear on the diagram.
A classification system for neutron stars using Roman numerals has been proposed: type I for less massive neutron stars with low cooling rates, type II for more massive neutron stars with higher cooling rates, and a proposed type III for still more massive neutron stars (possible exotic star candidates) with even higher cooling rates. The more massive a neutron star is, the higher the neutrino flux it carries. These neutrinos carry away so much heat energy that after only a few years the temperature of an isolated neutron star falls from the order of billions of kelvins to only around a million. This proposed neutron star classification system is not to be confused with the earlier Secchi spectral classes and the Yerkes luminosity classes.
Replaced spectral classes
Several spectral types, all previously used for non-standard stars in the mid-20th century, have been replaced during revisions of the stellar classification system. They may still be found in old editions of star catalogs: R and N have been subsumed into the new C class as C-R and C-N.
Stellar classification, habitability, and the search for life
While humans may eventually be able to colonize any kind of stellar habitat, this section will address the probability of life arising around other stars.
Stability, luminosity, and lifespan are all factors in stellar habitability. Humans know of only one star that hosts life, the G-class Sun, a star with an abundance of heavy elements and low variability in brightness. The Solar System is also unlike many stellar systems in that it only contains one star (see Habitability of binary star systems).
Working from these constraints and the problems of having an empirical sample set of only one, the range of stars that are predicted to be able to support life is limited by a few factors. Of the main-sequence star types, stars more massive than 1.5 times that of the Sun (spectral types O, B, and A) age too quickly for advanced life to develop (using Earth as a guideline). On the other extreme, dwarfs of less than half the mass of the Sun (spectral type M) are likely to tidally lock planets within their habitable zone, along with other problems (see Habitability of red dwarf systems). While there are many problems facing life on red dwarfs, many astronomers continue to model these systems due to their sheer numbers and longevity.
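To make the age argument concrete, a rough standard scaling (not given in this article, and only an order-of-magnitude estimate) relates main-sequence lifetime to stellar mass:

\[ t_{\mathrm{MS}} \approx 10\,\mathrm{Gyr}\left(\frac{M}{M_\odot}\right)^{-2.5}, \]

so a star of about 1.5 solar masses would leave the main sequence after roughly \(10 \times 1.5^{-2.5} \approx 3.6\) Gyr, less time than complex life took to appear on Earth.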
For these reasons NASA's Kepler Mission is searching for habitable planets at nearby main-sequence stars that are less massive than spectral type A but more massive than type M—making the most probable stars to host life dwarf stars of types F, G, and K.
See also
Notes
References
Further reading
External links
Libraries of stellar spectra by D. Montes, UCM
Spectral Types for Hipparcos Catalogue Entries
Stellar Spectral Classification by Richard O. Gray and Christopher J. Corbally
Spectral models of stars by P. Coelho
Stellar classification table
Classification
Concepts in astronomy | Stellar classification | [
"Physics",
"Astronomy"
] | 9,317 | [
"Concepts in astronomy",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
28,930 | https://en.wikipedia.org/wiki/SN%201987A | SN 1987A was a type II supernova in the Large Magellanic Cloud, a dwarf satellite galaxy of the Milky Way. It occurred approximately 168,000 light-years from Earth and was the closest observed supernova since Kepler's Supernova in 1604. Light and neutrinos from the explosion reached Earth on February 23, 1987, and the event was designated "SN 1987A" as the first supernova discovered that year. Its brightness peaked in May of that year, with an apparent magnitude of about 3.
It was the first supernova that modern astronomers were able to study in great detail, and its observations have provided much insight into core-collapse supernovae. SN 1987A provided the first opportunity to confirm by direct observation the radioactive source of the energy for visible light emissions, by detecting predicted gamma-ray line radiation from two of its abundant radioactive nuclei. This proved the radioactive nature of the long-duration post-explosion glow of supernovae.
In 2019, indirect evidence for the presence of a collapsed neutron star within the remnants of SN 1987A was discovered using the Atacama Large Millimeter Array telescope. Further evidence was subsequently uncovered in 2021 through observations conducted by the Chandra and NuSTAR X-ray telescopes.
Discovery
SN 1987A was discovered independently by Ian Shelton and Oscar Duhalde at the Las Campanas Observatory in Chile on February 24, 1987, and within the same 24 hours by Albert Jones in New Zealand.
Later investigations found photographs showing the supernova brightening rapidly early on February 23. On March 4–12, 1987, it was observed from space by Astron, the largest ultraviolet space telescope of that time.
Progenitor
Four days after the event was recorded, the progenitor star was tentatively identified as Sanduleak −69 202 (Sk -69 202), a blue supergiant.
After the supernova faded, that identification was definitively confirmed, as Sk −69 202 had disappeared. The possibility of a blue supergiant producing a supernova was considered surprising, and the confirmation led to further research which identified an earlier supernova with a blue supergiant progenitor.
Some models of SN 1987A's progenitor attributed the blue color largely to its chemical composition rather than its evolutionary stage, particularly the low levels of heavy elements. There was some speculation that the star might have merged with a companion star before the supernova. However, it is now widely understood that blue supergiants are natural progenitors of some supernovae, although there is still speculation that the evolution of such stars could require mass loss involving a binary companion.
Neutrino emissions
Approximately two to three hours before the visible light from SN 1987A reached Earth, a burst of neutrinos was observed at three neutrino observatories. This was likely due to neutrino emission which occurs simultaneously with core collapse, but before visible light is emitted as the shock wave reaches the stellar surface. At 7:35 UT, 12 antineutrinos were detected by Kamiokande II, 8 by IMB, and 5 by Baksan in a burst lasting less than 13 seconds. Approximately three hours earlier, the Mont Blanc liquid scintillator detected a five-neutrino burst, but this is generally not believed to be associated with SN 1987A.
The Kamiokande II detection, which at 12 neutrinos had the largest sample population, showed the neutrinos arriving in two distinct pulses. The first pulse at 07:35:35 comprised 9 neutrinos over a period of 1.915 seconds. A second pulse of three neutrinos arrived during a 3.220-second interval from 9.219 to 12.439 seconds after the beginning of the first pulse.
Although only 25 neutrinos were detected during the event, it was a significant increase from the previously observed background level. This was the first time neutrinos known to be emitted from a supernova had been observed directly, which marked the beginning of neutrino astronomy. The observations were consistent with theoretical supernova models in which 99% of the energy of the collapse is radiated away in the form of neutrinos. The observations are also consistent with the models' estimates of a total neutrino count of 10⁵⁸ with a total energy of 10⁴⁶ joules, i.e. a mean value of some dozens of MeV per neutrino. Billions of neutrinos passed through a square centimeter on Earth.
The neutrino measurements allowed upper bounds on neutrino mass and charge, as well as the number of flavors of neutrinos and other properties. For example, the data show that the rest mass of the electron neutrino is < 16 eV/c² at 95% confidence, which is 30,000 times smaller than the mass of an electron. The data suggest that the total number of neutrino flavors is at most 8 but other observations and experiments give tighter estimates. Many of these results have since been confirmed or tightened by other neutrino experiments such as more careful analysis of solar neutrinos and atmospheric neutrinos as well as experiments with artificial neutrino sources.
Neutron star
SN 1987A appears to be a core-collapse supernova, which should result in a neutron star given the size of the original star. The neutrino data indicate that a compact object did form at the star's core, and astronomers immediately began searching for the collapsed core. The Hubble Space Telescope took images of the supernova regularly from August 1990 without a clear detection of a neutron star.
A number of possibilities for the "missing" neutron star were considered. First, that the neutron star may be obscured by surrounding dense dust clouds. Second, that a pulsar was formed, but with either an unusually large or small magnetic field. Third, that large amounts of material fell back on the neutron star, collapsing it further into a black hole. Neutron stars and black holes often give off light as material falls onto them. If there is a compact object in the supernova remnant, but no material to fall onto it, it would be too dim for detection. A fourth hypothesis is that the collapsed core became a quark star.
In 2019, evidence was presented for a neutron star inside one of the brightest dust clumps, close to the expected position of the supernova remnant. In 2021, further evidence was presented of hard X-ray emissions from SN 1987A originating in the pulsar wind nebula. The latter result is supported by a three-dimensional magnetohydrodynamic model, which describes the evolution of SN 1987A from the SN event to the present, and reconstructs the ambient environment, predicting the absorbing power of the dense stellar material around the pulsar.
In 2024, researchers using the James Webb Space Telescope (JWST) identified distinctive emission lines of ionized argon within the central region of the Supernova 1987A remnants. These emission lines, discernible only near the remnant's core, were analyzed using photoionization models. The models indicate that the observed line ratios and velocities can be attributed to ionizing radiation originating from a neutron star illuminating gas from the inner regions of the exploded star.
Light curve
Much of the light curve, or graph of luminosity as a function of time, after the explosion of a type II supernova such as SN 1987A is produced by the energy from radioactive decay. Although the luminous emission consists of optical photons, it is the radioactive power absorbed that keeps the remnant hot enough to radiate light. Without the radioactive heat, it would dim quickly. The radioactive decay of 56Ni through its daughters 56Co to 56Fe produces gamma-ray photons that are absorbed and dominate the heating and thus the luminosity of the ejecta at intermediate times (several weeks) to late times (several months). Energy for the peak of the light curve of SN1987A was provided by the decay of 56Ni to 56Co (half life of 6 days) while energy for the later light curve in particular fit very closely with the 77.3-day half-life of 56Co decaying to 56Fe. Later measurements by space gamma-ray telescopes of the small fraction of the 56Co and 57Co gamma rays that escaped the SN1987A remnant without absorption confirmed earlier predictions that those two radioactive nuclei were the power source.
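A minimal numerical sketch of this two-step decay chain follows. It is illustrative only: the initial 56Ni mass of about 0.07 solar masses and the per-decay energy releases are assumed round numbers, not values taken from this article.

```python
import numpy as np

# Half-lives of the chain 56Ni -> 56Co -> 56Fe, in days
T_NI, T_CO = 6.1, 77.3
LAM_NI, LAM_CO = np.log(2) / T_NI, np.log(2) / T_CO

N_NI0 = 1.5e54           # assumed initial number of 56Ni nuclei (~0.07 solar masses)
Q_NI, Q_CO = 1.75, 3.74  # assumed mean energy release per decay, MeV
MEV = 1.602e-13          # joules per MeV

def decay_power(t_days):
    """Radioactive power (watts) from the Ni/Co chain at time t, using the Bateman solution."""
    n_ni = N_NI0 * np.exp(-LAM_NI * t_days)
    n_co = N_NI0 * LAM_NI / (LAM_CO - LAM_NI) * (
        np.exp(-LAM_NI * t_days) - np.exp(-LAM_CO * t_days))
    mev_per_day = LAM_NI * n_ni * Q_NI + LAM_CO * n_co * Q_CO
    return mev_per_day * MEV / 86400.0  # convert MeV/day to watts

for t in (10, 50, 100, 300):
    print(f"day {t:4d}: ~{decay_power(t):.2e} W")
```

After a few 56Ni half-lives the output decays with the 77.3-day half-life of 56Co, which is the behaviour the measured light curve followed.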
Because the 56Co in SN 1987A has now completely decayed, it no longer supports the luminosity of the SN 1987A ejecta, which is currently powered by the radioactive decay of 44Ti with a half-life of about 60 years. With this change, X-rays produced by the ring interactions of the ejecta began to contribute significantly to the total light curve. This was noticed by the Hubble Space Telescope as a steady increase in luminosity 10,000 days after the event in the blue and red spectral bands. X-ray lines of 44Ti observed by the INTEGRAL space X-ray telescope showed that the total mass of radioactive 44Ti synthesized during the explosion was .
Observations of the radioactive power from their decays in the 1987A light curve have measured accurate total masses of the 56Ni, 57Ni, and 44Ti created in the explosion, which agree with the masses measured by gamma-ray line space telescopes and provide nucleosynthesis constraints on the computed supernova model.
Interaction with circumstellar material
The three bright rings around SN 1987A that were visible after a few months in images by the Hubble Space Telescope are material from the stellar wind of the progenitor. These rings were ionized by the ultraviolet flash from the supernova explosion, and consequently began emitting in various emission lines. The rings did not "turn on" until several months after the supernova, and the process can be studied very accurately through spectroscopy. The rings are large enough that their angular size can be measured accurately: the inner ring is 0.808 arcseconds in radius. The light-travel time needed for the flash to illuminate the inner ring gives its radius as 0.66 light-years. Using this radius as one side of a right triangle and the measured angular size as the corresponding angle, basic trigonometry gives the distance to SN 1987A, about 168,000 light-years. The material from the explosion is catching up with the material expelled during both its red and blue supergiant phases and heating it, so we observe ring structures around the star.
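The trigonometric step can be made explicit with the small-angle approximation, using only the numbers quoted above:

\[ d \approx \frac{r}{\theta} = \frac{0.66\ \mathrm{ly}}{0.808'' \times \frac{\pi}{180\times 3600}\ \mathrm{rad}/''} \approx \frac{0.66\ \mathrm{ly}}{3.9\times 10^{-6}\ \mathrm{rad}} \approx 1.7\times 10^{5}\ \mathrm{ly}, \]

consistent with the figure of about 168,000 light-years given above.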
Around 2001, the expanding (>7,000 km/s) supernova ejecta collided with the inner ring. This caused its heating and the generation of X-rays; the X-ray flux from the ring increased by a factor of three between 2001 and 2009. A part of the X-ray radiation, which is absorbed by the dense ejecta close to the center, is responsible for a comparable increase in the optical flux from the supernova remnant in 2001–2009. This increase in the brightness of the remnant reversed the trend observed before 2001, when the optical flux was decreasing due to the decay of the 44Ti isotope.
A study reported in June 2015, using images from the Hubble Space Telescope and the Very Large Telescope taken between 1994 and 2014, shows that the emissions from the clumps of matter making up the rings are fading as the clumps are destroyed by the shock wave. It is predicted the ring would fade away between 2020 and 2030. These findings are also supported by the results of a three-dimensional hydrodynamic model which describes the interaction of the blast wave with the circumstellar nebula.
The model also shows that X-ray emission from ejecta heated up by the shock will be dominant very soon, after which the ring would fade away. As the shock wave passes the circumstellar ring it will trace the history of mass loss of the supernova's progenitor and provide useful information for discriminating among various models for the progenitor of SN 1987A.
In 2018, radio observations of the interaction between the circumstellar ring of dust and the shockwave confirmed that the shockwave has now left the circumstellar material. They also show that the speed of the shockwave, which slowed to 2,300 km/s while interacting with the dust in the ring, has now re-accelerated to 3,600 km/s.
Condensation of warm dust in the ejecta
Soon after the SN 1987A outburst, three major groups embarked in a photometric monitoring of the supernova: the South African Astronomical Observatory (SAAO), the Cerro Tololo Inter-American Observatory (CTIO), and the European Southern Observatory (ESO). In particular, the ESO team reported an infrared excess which became apparent beginning less than one month after the explosion (March 11, 1987). Three possible interpretations for it were discussed in this work: the infrared echo hypothesis was discarded, and thermal emission from dust that could have condensed in the ejecta was favoured (in which case the estimated temperature at that epoch was ~ 1250 K, and the dust mass was approximately ). The possibility that the IR excess could be produced by optically thick free-free emission seemed unlikely because the luminosity in UV photons needed to keep the envelope ionized was much larger than what was available, but it was not ruled out in view of the eventuality of electron scattering, which had not been considered.
However, none of these three groups had sufficiently convincing proof to claim a dusty ejecta on the basis of an IR excess alone.
An independent Australian team advanced several arguments in favour of an echo interpretation. This seemingly straightforward interpretation of the nature of the IR emission was challenged by the ESO group and definitively ruled out after presenting optical evidence for the presence of dust in the SN ejecta.
To discriminate between the two interpretations, they considered the implication of the presence of an echoing dust cloud on the optical light curve, and on the existence of diffuse optical emission around the SN. They concluded that the expected optical echo from the cloud should be resolvable, and could be very bright with an integrated visual brightness of magnitude 10.3 around day 650. However, further optical observations, as expressed in the SN light curve, showed no inflection in the light curve at the predicted level. Finally, the ESO team presented a convincing clumpy model for dust condensation in the ejecta.
Although it had been thought more than 50 years ago that dust could form in the ejecta of a core-collapse supernova, which in particular could explain the origin of the dust seen in young galaxies, that was the first time that such a condensation was observed. If SN 1987A is a typical representative of its class then the derived mass of the warm dust formed in the debris of core collapse supernovae is not sufficient to account for all the dust observed in the early universe. However, a much larger reservoir of ~0.25 solar mass of colder dust (at ~26 K) in the ejecta of SN 1987A was found with the infrared Herschel Space Telescope in 2011 and confirmed with the Atacama Large Millimeter Array (ALMA) in 2014.
ALMA observations
Following the confirmation of a large amount of cold dust in the ejecta, ALMA has continued observing SN 1987A. Synchrotron radiation due to shock interaction in the equatorial ring has been measured. Cold (20–100 K) carbon monoxide (CO) and silicon monoxide (SiO) molecules were observed. The data show that the CO and SiO distributions are clumpy, and that different nucleosynthesis products (C, O and Si) are located in different places in the ejecta, indicating the footprints of the stellar interior at the time of the explosion.
Gallery
See also
History of supernova observation
List of supernovae
List of supernova remnants
List of supernova candidates
References
Sources
Further reading
External links
AAVSO: More information on the discovery of SN 1987A
Rochester Astronomy discovery timeline
Light curves and spectra on the Open Supernova Catalog
Light echoes from Sn1987a, Movie with real images by the group EROS2
SN 1987A at ESA/Hubble
Supernova 1987A, WIKISKY.ORG
More information at Phil Plait's Bad Astronomy site
3D View of Supernova's 'Heart' Sheds New Light on Star Explosions (Images) - Space.com
Supernova remnants
Supernovae
Large Magellanic Cloud
Astronomical objects discovered in 1987
Dorado | SN 1987A | [
"Chemistry",
"Astronomy"
] | 3,444 | [
"Supernovae",
"Astronomical events",
"Dorado",
"Constellations",
"Explosions"
] |
4,518,807 | https://en.wikipedia.org/wiki/Quasiprobability%20distribution | A quasiprobability distribution is a mathematical object similar to a probability distribution but which relaxes some of Kolmogorov's axioms of probability theory. Quasiprobability distributions arise naturally in the study of quantum mechanics when treated in phase space formulation, commonly used in quantum optics, time-frequency analysis, and elsewhere.
Quasiprobabilities share several general features with ordinary probabilities, such as, crucially, the ability to yield expectation values with respect to the weights of the distribution. However, they can violate the σ-additivity axiom: integrating over them does not necessarily yield probabilities of mutually exclusive states. Quasiprobability distributions can also have regions of negative probability density, which, counterintuitively, contradicts the first axiom.
Introduction
In the most general form, the dynamics of a quantum-mechanical system are determined by a master equation in Hilbert space: an equation of motion for the density operator (usually written ) of the system. The density operator is defined with respect to a complete orthonormal basis. Although it is possible to directly integrate this equation for very small systems (i.e., systems with few particles or degrees of freedom), this quickly becomes intractable for larger systems. However, it is possible to prove that the density operator can always be written in a diagonal form, provided the representation is taken with respect to an overcomplete basis. When the density operator is represented in such an overcomplete basis, it can be written in a manner more closely resembling an ordinary function, at the expense that the function has the features of a quasiprobability distribution. The evolution of the system is then completely determined by the evolution of the quasiprobability distribution function.
The coherent states, i.e. the right eigenstates of the annihilation operator, serve as the overcomplete basis in the construction described above. By definition, the coherent states have the following property,
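The defining relation referred to here is not displayed in the text; under the standard convention it is the eigenvalue equation of the annihilation operator,

\[ \hat{a}\,|\alpha\rangle = \alpha\,|\alpha\rangle . \]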
They also have some further interesting properties. For example, no two coherent states are orthogonal. In fact, if |α〉 and |β〉 are a pair of coherent states, then
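The overlap formula that this sentence introduces is likewise missing; the standard coherent-state result is

\[ \langle\beta|\alpha\rangle = e^{-\frac{1}{2}|\alpha|^{2}-\frac{1}{2}|\beta|^{2}+\beta^{*}\alpha}, \qquad |\langle\beta|\alpha\rangle|^{2} = e^{-|\alpha-\beta|^{2}} > 0 , \]

so no two coherent states are orthogonal, as stated.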
Note that these states are, however, correctly normalized with〈α | α〉 = 1. Owing to the completeness of the basis of Fock states, the choice of the basis of coherent states must be overcomplete.
In the coherent states basis, however, it is always possible to express the density operator in the diagonal form
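The diagonal form itself does not appear in the text; in the Glauber–Sudarshan convention it reads (a reconstruction, not a quotation)

\[ \hat{\rho} = \int f(\alpha,\alpha^{*})\,|\alpha\rangle\langle\alpha|\, d^{2}\alpha . \]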
where f is a representation of the phase space distribution. This function f is considered a quasiprobability density because it has the following properties:
(normalization)
If is an operator that can be expressed as a power series of the creation and annihilation operators in an ordering Ω, then its expectation value is
(optical equivalence theorem).
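The two properties labelled above presumably correspond to the standard relations. Written out in the usual notation, with \(\hat{A}=g_{\Omega}(\hat{a},\hat{a}^{\dagger})\) an operator expressed in the ordering \(\Omega\), they are

\[ \int f(\alpha,\alpha^{*})\, d^{2}\alpha = 1 \qquad\text{(normalization)}, \]
\[ \langle \hat{A} \rangle = \operatorname{Tr}\!\left[\hat{\rho}\,\hat{A}\right] = \int f_{\Omega}(\alpha,\alpha^{*})\, g_{\Omega}(\alpha,\alpha^{*})\, d^{2}\alpha \qquad\text{(optical equivalence theorem)}. \]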
There exists a family of different representations, each connected to a different ordering Ω. The most popular in the general physics literature and historically first of these is the Wigner quasiprobability distribution, which is related to symmetric operator ordering. In quantum optics specifically, the operators of interest, especially the particle number operator, are often naturally expressed in normal order. In that case, the corresponding representation of the phase space distribution is the Glauber–Sudarshan P representation. The quasiprobabilistic nature of these phase space distributions is best understood in the P representation because of the following key statement: a state possesses a classical analog precisely when its P distribution is everywhere non-negative and no more singular than a Dirac delta function; otherwise it has no classical analog.
This sweeping statement is inoperative in other representations. For example, the Wigner function of the EPR state is positive definite but has no classical analog.
In addition to the representations defined above, there are many other quasiprobability distributions that arise in alternative representations of the phase space distribution. Another popular representation is the Husimi Q representation, which is useful when operators are in anti-normal order. More recently, the positive representation and a wider class of generalized representations have been used to solve complex problems in quantum optics. These are all equivalent and interconvertible to each other, viz. Cohen's class distribution function.
Characteristic functions
Analogous to probability theory, quantum quasiprobability distributions can be written in terms of characteristic functions, from which all operator expectation values can be derived. The characteristic functions for the Wigner, Glauber P and Q distributions of an N-mode system are as follows:
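The expressions themselves are missing from the text; for a single mode (the N-mode case replaces the products \(\lambda\hat{a}^{\dagger}\), \(\lambda^{*}\hat{a}\) by sums over modes), the standard definitions are

\[ \chi_{W}(\lambda) = \operatorname{Tr}\!\left[\hat{\rho}\, e^{\lambda \hat{a}^{\dagger} - \lambda^{*}\hat{a}}\right], \qquad \chi_{P}(\lambda) = \operatorname{Tr}\!\left[\hat{\rho}\, e^{\lambda \hat{a}^{\dagger}} e^{-\lambda^{*}\hat{a}}\right], \qquad \chi_{Q}(\lambda) = \operatorname{Tr}\!\left[\hat{\rho}\, e^{-\lambda^{*}\hat{a}} e^{\lambda \hat{a}^{\dagger}}\right]. \]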
Here and are vectors containing the annihilation and creation operators for each mode of the system. These characteristic functions can be used to directly evaluate expectation values of operator moments. The ordering of the annihilation and creation operators in these moments is specific to the particular characteristic function. For instance, normally ordered (creation operators preceding annihilation operators) moments can be evaluated in the following way from :
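The moment formula that follows is absent; for a single mode the standard result, written in terms of the normally ordered characteristic function \(\chi_{P}\) defined above, is

\[ \langle (\hat{a}^{\dagger})^{m} \hat{a}^{n} \rangle = \left.\frac{\partial^{m+n}}{\partial\lambda^{m}\,\partial(-\lambda^{*})^{n}}\,\chi_{P}(\lambda)\right|_{\lambda=0}. \]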
In the same way, expectation values of anti-normally ordered and symmetrically ordered combinations of annihilation and creation operators can be evaluated from the characteristic functions for the Q and Wigner distributions, respectively. The quasiprobability functions themselves are defined as Fourier transforms of the above characteristic functions. That is,
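The Fourier-transform definition is likewise missing; in the standard convention,

\[ W(\alpha) = \frac{1}{\pi^{2}} \int \chi_{W}(\lambda)\, e^{\alpha\lambda^{*} - \alpha^{*}\lambda}\, d^{2}\lambda , \]

with the P and Q distributions obtained from \(\chi_{P}\) and \(\chi_{Q}\) by the same transform.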
Here and may be identified as coherent state amplitudes in the case of the Glauber P and Q distributions, but simply c-numbers for the Wigner function. Since differentiation in normal space becomes multiplication in Fourier space, moments can be calculated from these functions in the following way:
Here denotes symmetric ordering.
These representations are all interrelated through convolution by Gaussian functions, Weierstrass transforms,
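The convolution relations referred to here are not displayed; under the usual conventions they read

\[ W(\alpha) = \frac{2}{\pi}\int P(\beta)\, e^{-2|\alpha-\beta|^{2}}\, d^{2}\beta , \qquad Q(\alpha) = \frac{2}{\pi}\int W(\beta)\, e^{-2|\alpha-\beta|^{2}}\, d^{2}\beta , \]

and, combining the two Gaussian smoothings (the associativity mentioned next),

\[ Q(\alpha) = \frac{1}{\pi}\int P(\beta)\, e^{-|\alpha-\beta|^{2}}\, d^{2}\beta . \]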
or, using the property that convolution is associative,
It follows that
an often divergent integral, indicating P is often a distribution. Q is always broader than P for the same density matrix.
For example, for a thermal state,
one has
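The explicit forms for a thermal state of mean occupation \(\bar{n}\) (a reconstruction under standard conventions) are

\[ P(\alpha) = \frac{1}{\pi\bar{n}}\, e^{-|\alpha|^{2}/\bar{n}}, \qquad W(\alpha) = \frac{2}{\pi(2\bar{n}+1)}\, e^{-2|\alpha|^{2}/(2\bar{n}+1)}, \qquad Q(\alpha) = \frac{1}{\pi(\bar{n}+1)}\, e^{-|\alpha|^{2}/(\bar{n}+1)}, \]

each broader than the last, illustrating the statement that Q is always broader than P.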
Time evolution and operator correspondences
Since each of the above transformations from to the distribution functions is linear, the equation of motion for each distribution can be obtained by performing the same transformations to . Furthermore, as any master equation which can be expressed in Lindblad form is completely described by the action of combinations of annihilation and creation operators on the density operator, it is useful to consider the effect such operations have on each of the quasiprobability functions.
For instance, consider the annihilation operator acting on . For the characteristic function of the P distribution we have
Taking the Fourier transform with respect to , in order to find the corresponding action on the Glauber P function, we find
By following this procedure for each of the above distributions, the following operator correspondences can be identified:
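The table of correspondences is not reproduced in the text; in the standard single-mode form, with \(\kappa = 0, 1/2, 1\) for the P, Wigner, and Q functions respectively, it is

\[ \hat{a}\hat{\rho} \;\leftrightarrow\; \left(\alpha + \kappa\frac{\partial}{\partial\alpha^{*}}\right) f, \qquad \hat{\rho}\hat{a}^{\dagger} \;\leftrightarrow\; \left(\alpha^{*} + \kappa\frac{\partial}{\partial\alpha}\right) f, \]
\[ \hat{a}^{\dagger}\hat{\rho} \;\leftrightarrow\; \left(\alpha^{*} - (1-\kappa)\frac{\partial}{\partial\alpha}\right) f, \qquad \hat{\rho}\hat{a} \;\leftrightarrow\; \left(\alpha - (1-\kappa)\frac{\partial}{\partial\alpha^{*}}\right) f . \]

In particular, for the P function (\(\kappa=0\)) the worked example above reduces to \(\hat{a}\hat{\rho} \leftrightarrow \alpha P\).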
Here κ = 0, 1/2, or 1 for the P, Wigner, and Q distributions, respectively. In this way, master equations can be expressed as equations of motion of quasiprobability functions.
Examples
Coherent state
By construction, P for a coherent state is simply a delta function:
The Wigner and Q representations follow immediately from the Gaussian convolution formulas above,
The Husimi representation can also be found using the formula above for the inner product of two coherent states,
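None of the three expressions survives in the text; for a coherent state \(|\alpha_{0}\rangle\) the standard results are

\[ P(\alpha) = \delta^{2}(\alpha-\alpha_{0}), \qquad W(\alpha) = \frac{2}{\pi}\, e^{-2|\alpha-\alpha_{0}|^{2}}, \qquad Q(\alpha) = \frac{1}{\pi}\, |\langle\alpha|\alpha_{0}\rangle|^{2} = \frac{1}{\pi}\, e^{-|\alpha-\alpha_{0}|^{2}} . \]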
Fock state
The P representation of a Fock state is
Since for n>0 this is more singular than a delta function, a Fock state has no classical analog. The non-classicality is less transparent as one proceeds with the Gaussian convolutions. If Ln is the nth Laguerre polynomial, W is
which can go negative but is bounded.
Q, by contrast, always remains positive and bounded,
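For reference, the standard expressions for the Fock state \(|n\rangle\), reconstructed here since they do not appear in the text, are

\[ P_{n}(\alpha) = \frac{e^{|\alpha|^{2}}}{n!}\, \frac{\partial^{2n}}{\partial\alpha^{n}\,\partial\alpha^{*n}}\, \delta^{2}(\alpha), \qquad W_{n}(\alpha) = \frac{2}{\pi}\,(-1)^{n}\, L_{n}\!\left(4|\alpha|^{2}\right) e^{-2|\alpha|^{2}}, \qquad Q_{n}(\alpha) = \frac{|\alpha|^{2n}}{\pi\, n!}\, e^{-|\alpha|^{2}} . \]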
Damped quantum harmonic oscillator
Consider the damped quantum harmonic oscillator with the following master equation,
This results in the Fokker–Planck equation,
where κ = 0, 1/2, 1 for the P, W, and Q representations, respectively.
If the system is initially in the coherent state , then this equation has the solution
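The equations referred to in this subsection are missing from the text; a sketch under standard conventions (zero-temperature damping at rate γ) is

\[ \frac{d\hat{\rho}}{dt} = \frac{\gamma}{2}\left( 2\hat{a}\hat{\rho}\hat{a}^{\dagger} - \hat{a}^{\dagger}\hat{a}\hat{\rho} - \hat{\rho}\hat{a}^{\dagger}\hat{a} \right), \]

which, using the operator correspondences above, becomes the Fokker–Planck equation

\[ \frac{\partial f}{\partial t} = \frac{\gamma}{2}\left( \frac{\partial}{\partial\alpha}\alpha + \frac{\partial}{\partial\alpha^{*}}\alpha^{*} \right) f + \gamma\kappa\, \frac{\partial^{2} f}{\partial\alpha\,\partial\alpha^{*}} , \]

with κ as given in the text. For an initial coherent state \(|\alpha_{0}\rangle\) its solution is a Gaussian of fixed width that follows the damped classical trajectory,

\[ f(\alpha,t) = \frac{1}{\pi\kappa}\, \exp\!\left( -\frac{|\alpha - \alpha_{0}e^{-\gamma t/2}|^{2}}{\kappa} \right) \quad (\kappa>0), \qquad P(\alpha,t) = \delta^{2}\!\left(\alpha - \alpha_{0}e^{-\gamma t/2}\right) . \]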
See also
Krein space
References
Particle distributions
Quantum optics
Exotic probabilities | Quasiprobability distribution | [
"Physics"
] | 1,599 | [
"Quantum optics",
"Quantum mechanics"
] |
4,521,890 | https://en.wikipedia.org/wiki/Control%20loop | A control loop is the fundamental building block of control systems in general and industrial control systems in particular. It consists of the process sensor, the controller function, and the final control element (FCE) which controls the process necessary to automatically adjust the value of a measured process variable (PV) to equal the value of a desired set-point (SP).
There are two common classes of control loop: open loop and closed loop. In an open-loop control system, the control action from the controller is independent of the process variable. An example of this is a central heating boiler controlled only by a timer. The control action is the switching on or off of the boiler. The process variable is the building temperature. This controller operates the heating system for a constant time regardless of the temperature of the building.
In a closed-loop control system, the control action from the controller is dependent on the desired and actual process variable. In the case of the boiler analogy, this would utilize a thermostat to monitor the building temperature, and feed back a signal to ensure the controller output maintains the building temperature close to that set on the thermostat. A closed-loop controller has a feedback loop which ensures the controller exerts a control action to control a process variable at the same value as the setpoint. For this reason, closed-loop controllers are also called feedback controllers.
Open-loop and closed-loop
Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback).
In open-loop control, the control action from the controller is independent of the "process output" (or "controlled process variable"). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the switching on/off of the boiler; the variable of interest is the building temperature, but it is not controlled, because this is open-loop control of the boiler, which does not give closed-loop control of the temperature.
In closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy this would include a thermostat to monitor the building temperature, and thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the "reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers.
The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."
Likewise; "A Feedback Control System is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control."
Other examples
An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant desired or reference speed provided by the driver. The controller is the cruise control, the plant is the car, and the system is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position which determines how much power the engine delivers.
A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an open-loop controller because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position.) As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road.
In a closed-loop control system, data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the feedback loop, the controller affects the system output, which in turn is measured and fed back to the controller.
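A minimal simulation can make the difference between the two approaches concrete. This is an illustrative sketch only: the vehicle model, gain, and hill disturbance are invented for the example, not taken from the article.

```python
# Toy cruise-control comparison: open-loop (fixed throttle) vs closed-loop (proportional feedback).
DT = 0.1          # time step, s
SETPOINT = 25.0   # desired speed, m/s
KP = 0.8          # proportional gain (assumed value)
DRAG = 0.05       # crude speed-proportional resistance
HILL = -0.5       # extra deceleration on the hill, m/s^2

def simulate(closed_loop: bool, steps: int = 1200) -> float:
    speed = SETPOINT
    throttle = DRAG * SETPOINT            # throttle trimmed for the flat road
    for i in range(steps):
        if closed_loop:
            error = SETPOINT - speed      # feedback: compare measurement with reference
            throttle = max(0.0, DRAG * SETPOINT + KP * error)
        disturbance = HILL if i > steps // 2 else 0.0   # hill begins halfway through
        accel = throttle - DRAG * speed + disturbance
        speed += accel * DT
    return speed

print("open-loop final speed  :", round(simulate(False), 2), "m/s")
print("closed-loop final speed:", round(simulate(True), 2), "m/s")
```

The open-loop run settles well below the setpoint once the hill begins, while the feedback run stays close to it. A proportional-only controller still leaves a small steady-state error; integral action, as in the PID controller mentioned below, is what removes it.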
Application
The accompanying diagram shows a control loop with a single PV input, a control function, and the control output (CO) which modulates the action of the final control element (FCE) to alter the value of the manipulated variable (MV). In this example, a flow control loop is shown, but the controlled variable can equally be level, temperature, or any one of many process parameters which need to be controlled. The control function shown is an "intermediate type" such as a PID controller, which means it can generate a full range of output signals anywhere between 0–100%, rather than just an on/off signal.
In this example, the value of the PV is always the same as the MV, as they are in series in the pipeline. However, if the feed from the valve was to a tank, and the controller function was to control the level using the fill valve, the PV would be the tank level, and the MV would be the flow to the tank.
The controller function can be a discrete controller or a function block in a computerised control system such as a distributed control system or a programmable logic controller. In all cases, a control loop diagram is a very convenient and useful way of representing the control function and its interaction with plant. In practice at a process control level, control loops are normally abbreviated using standard symbols in a Piping and instrumentation diagram, which shows all elements of the process measurement and control based on a process flow diagram.
At a detailed level the control loop connection diagram is created to show the electrical and pneumatic connections. This greatly aids diagnostics and repair, as all the connections for a single control function are on one diagram.
Loop and control equipment tagging
To aid unique identification of equipment, each loop and its elements are identified by a "tagging" system and each element has a unique tag identification.
Based on the standards ANSI/ISA S5.1 and ISO 14617-6, the identifications consist of up to 5 letters.
The first identification letter is for the measured value, the second is a modifier, the third indicates a passive/readout function, the fourth an active/output function, and the fifth is a function modifier. This is followed by the loop number, which is unique to that loop.
For instance, FIC045 means it is the Flow Indicating Controller in control loop 045. This is also known as the "tag" identifier of the field device, which is normally given to the location and function of the instrument. The same loop may have FT045 - which is the flow transmitter in the same loop.
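As an illustration of how such a tag decomposes, here is a small sketch; the letter meanings are limited to the handful used here and are assumptions based on the text above rather than a full ISA-5.1 table.

```python
# Decompose a simple instrument tag such as "FIC045" into its parts.
MEASURED = {"F": "Flow", "T": "Temperature", "L": "Level", "P": "Pressure"}
FUNCTION = {"I": "Indicating", "C": "Controller", "T": "Transmitter"}

def parse_tag(tag: str) -> dict:
    letters = "".join(ch for ch in tag if ch.isalpha())
    digits = "".join(ch for ch in tag if ch.isdigit())
    return {
        "measured value": MEASURED.get(letters[0], "unknown"),
        "functions": [FUNCTION.get(ch, "unknown") for ch in letters[1:]],
        "loop number": digits,
    }

print(parse_tag("FIC045"))  # Flow, Indicating + Controller, loop 045
print(parse_tag("FT045"))   # Flow, Transmitter, loop 045
```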
For reference designation of any equipment in industrial systems the standard IEC 61346 (Industrial systems, installations and equipment and industrial products — Structuring principles and reference
References
Control engineering
Control loop theory | Control loop | [
"Engineering"
] | 1,608 | [
"Control engineering"
] |
16,953,152 | https://en.wikipedia.org/wiki/Extreme%20environment | An extreme environment is a habitat that is considered very hard to survive in due to its considerably extreme conditions such as temperature, accessibility to different energy sources or under high pressure. For an area to be considered an extreme environment, it must contain certain conditions and aspects that are considered very hard for other life forms to survive. Pressure conditions may be extremely high or low; high or low content of oxygen or carbon dioxide in the atmosphere; high levels of radiation, acidity, or alkalinity; absence of water; water containing a high concentration of salt; the presence of sulphur, petroleum, and other toxic substances.
Examples of extreme environments include the geographical poles, very arid deserts, volcanoes, deep ocean trenches, upper atmosphere, outer space, and the environments of every planet in the Solar System except the Earth. Any organisms living in these conditions are often very well adapted to their living circumstances, which is usually a result of long-term evolution. Physiologists have long known that organisms living in extreme environments are especially likely to exhibit clear examples of evolutionary adaptation because of the presumably intense past natural selection they have experienced.
On Earth
The distribution of extreme environments on Earth has varied through geological time. Humans generally do not inhabit extreme environments. There are organisms referred to as extremophiles that do live in such conditions and are so well-adapted that they readily grow and multiply. Extreme environments are usually hard to survive in.
Beyond Earth
Most of the moons and planets in the Solar System are also extreme environments. Astrobiologists have not yet found life in any environments beyond Earth, though experiments have shown that tardigrades can survive the harsh vacuum and intense radiation of outer space. The conceptual modification of conditions in locations beyond Earth, to make them more habitable by humans and other terrestrial organisms, is known as terraforming.
Types
Among extreme environments are places that are alkaline, acidic, or unusually hot or cold or salty, or without water or oxygen. There are also places altered by humans, such as mine tailings or oil impacted habitats.
Alkaline: broadly conceived as natural habitats above pH 9 whether persistently, or with regular frequency or for protracted periods of time.
Acidic: broadly conceived as natural habitats below pH 5 whether persistently, or with regular frequency or for protracted periods of time.
Extremely cold: broadly conceived as habitats at or below -17 °C, whether persistently, with regular frequency, or for protracted periods of time. Includes montane sites, polar sites, and deep ocean habitats.
Extremely hot: broadly conceived as habitats in excess of 40 °C, whether persistently, with regular frequency, or for protracted periods of time. Includes sites with geothermal influences, such as Yellowstone and comparable locations worldwide, or deep-sea vents.
Hypersaline: environments with salt concentrations greater than that of seawater, that is, >3.5%. Includes salt lakes.
Under pressure: broadly conceived as habitats under extreme hydrostatic pressure – i.e. aquatic habitats deeper than 2000 meters and enclosed habitats under pressure. Includes habitats in oceans and deep lakes.
Radiation: broadly conceived as habitats exposed to abnormally high radiation or of radiation outside the normal range of light. Includes habitats exposed to high UV and IR radiation.
Without water: broadly conceived as habitats without free water whether persistently, or with regular frequency or for protracted periods of time. Includes hot and cold desert environments, and some endolithic habitats
Without oxygen: broadly conceived as habitats without free oxygen – whether persistently, or with regular frequency, or for protracted periods of time. Includes habitats in deeper sediments.
Altered by humans: i.e. anthropogenically impacted habitats. Includes mine tailings, oil impacted habitats, and pollution by heavy metals or organic compounds.
Without light: deep ocean environments and habitats such as caves.
Void of food: areas on earth that lack an abundance of food such as the vast ocean, desert and high country.
Extreme pressure: deep ocean areas
Extreme habitats
Many different habitats can be considered extreme environments, such as the polar ice caps, the driest spots in deserts, and the abyssal depths of the ocean. Many different places on the Earth demand that species become highly specialized if they are to survive. In particular, microscopic organisms that cannot be seen with the naked eye often thrive in surprising places.
Polar regions
Owing to the dangerously low temperatures, the number of species that can survive in these remote areas is very small. Over years of evolution and adaptation to this extremely cold environment, both microscopic and larger species have survived and thrived despite the conditions they face. Only a few species, by changing their feeding patterns and thanks to dense fur or body fat, have been able to adapt to such harsh conditions and learn how to thrive in these cold environments.
Deserts
A desert is known for its extreme temperatures and extremely dry climate. The species that live in this area have adapted to these harsh conditions over many generations. Only species that can store water and protect themselves from the Sun's harsh rays are capable of surviving in these extreme environments.
Oceans
The ocean's depths and temperatures present some of the most extreme conditions any species must survive. The deeper one travels, the higher the pressure and the lower the visibility, until no light penetrates at all. Many of these depths are too hostile for humans to reach, so instead of sending people down to collect data, scientists use small submersibles or deep-sea drones to study these creatures and extreme environments.
Types of species in extreme environments
There are many different species, some commonly known and some not, that live in extreme environments. These species have either adapted to these environments over time or have lived in them for countless generations. They are able to live in these environments because of their flexibility in adaptation: many can adjust to different climate conditions and, if need be, hibernate to survive.
The following list contains only a few species that live in extreme environments.
Examples
Giant kangaroo rat
Certain species of frogs
Thermotolerant worms (Alvinella pompejana)
Devil worms, Halicephalobus mephisto
Greenland shark
Marine microorganism
Bdelloidea
Tardigrade (waterbear)
Himalayan jumping spider, Euophrys omnisuperstes
Cockroach
Extreme environment examples
Antarctica
Dead Sea
Mammoth Hot Springs
Mariana Trench
Mono Lake
Mount Everest
Sahara
Picture gallery
See also
Adaptation
Ecology
Ecophysiology
Evolutionary physiology
Extreme environment clothing
Extremophile
Habitat
LExEN (Life in Extreme Environments)
Natural environment
Species
References
"Extreme Environment." Microbial Life. N.p., n.d. Web. 16 May 2013.
Extremophiles
Geography | Extreme environment | [
"Biology",
"Environmental_science"
] | 1,374 | [
"Organisms by adaptation",
"Extremophiles",
"Environmental microbiology",
"Bacteria"
] |
16,955,480 | https://en.wikipedia.org/wiki/Stormwater%20detention%20vault | A stormwater detention vault is an underground structure designed to manage excess stormwater runoff on a developed site, often in an urban setting. This type of best management practice may be selected when there is insufficient space on the site to infiltrate the runoff or build a surface facility such as a detention basin or retention basin.
Detention vaults manage stormwater quantity flowing to nearby surface waters. They help prevent flooding and can reduce erosion in rivers and streams. They do not provide treatment to improve water quality, though some are attached to a media filter bank to remove pollutants.
Design and installation
Underground stormwater detention allows for high volume storage of runoff in a small footprint area. The storage vessels can be made from a variety of materials, including corrugated metal pipe, aluminum, steel, plastic, fiberglass, pre-cast or poured-in-place concrete.
The vault is typically buried under a parking lot or other open land on the site. In the latter case, this underground vault may be preferable to a surface detention pond if other uses are intended for the land (e.g. a pedestrian plaza or park). In other situations, a vault is used because installing a pond might pose other problems, such as attracting unwanted waterfowl or other animals. In some sites, a vault may be installed in the basement of a building, such as a parking garage. Tunnels may be bored to serve as detention vaults. Tunnels may be cheaper than basins, as they do not require pumps to move the water.
The outlet is generally a restricted-flow drain from the detention vessel, with a weir for containing detritus. Detention vessels delay the delivery of water downstream and may shift the peak water level to later after a rainfall. It is important to consider the timing of water release and the types of reservoirs feeding a waterway.
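A restricted outlet of this kind is often sized with the standard orifice equation. The sketch below is illustrative only: the vault geometry, orifice diameter, and discharge coefficient are assumed values, not taken from this article.

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
CD = 0.6          # assumed discharge coefficient for a sharp-edged orifice
AREA = 200.0      # assumed vault plan area, m^2
D_ORIFICE = 0.15  # assumed orifice diameter, m
A_ORIFICE = math.pi * D_ORIFICE ** 2 / 4

def drain_time(depth: float, dt: float = 10.0) -> float:
    """Approximate time (hours) to drain the vault from 'depth' metres through the orifice."""
    t = 0.0
    while depth > 0.001:
        q = CD * A_ORIFICE * math.sqrt(2 * G * depth)  # orifice outflow, m^3/s
        depth -= q * dt / AREA                          # falling water level
        t += dt
    return t / 3600.0

print(f"Approximate drawdown time from 2 m: {drain_time(2.0):.1f} h")
```

Slowing the drawdown (a smaller orifice) reduces the peak outflow but lengthens the time the vault stays full, which is the release-timing trade-off noted above.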
See also
Best management practice for water pollution
Detention basin
Retention basin
Storm drain
References
Environmental engineering
Hydraulic engineering
Hydrology
Stormwater management | Stormwater detention vault | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 389 | [
"Hydrology",
"Water treatment",
"Stormwater management",
"Chemical engineering",
"Water pollution",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
16,958,330 | https://en.wikipedia.org/wiki/Meshimakobu | Meshimakobu and sanghuang / sanghwang, also known as mesima (English) or black hoof mushroom (American English), is a mushroom in East Asia.
Understanding of the concept
Etymology and association with mulberry
The Japanese name is composed of , an island of Gotō, Nagasaki, where this mushroom used to grow, and , which means bump, referring to the mushroom's appearance. Per Wu et al. (2012) citing Ito (1955) and Imazeki and Hongo (1989), this is a mushroom that is always said to be on mulberry trees.
The Chinese name / is composed of 桑 ("mulberry tree") and 黃 / 黄 ("yellow"). The Korean name is from Chinese.
Historical records
The earliest attestation of the name 桑黃 is in Yaoxing Lun.
Various Chinese historical records documented Xinzhou sanghuang (信州桑黃), in which Xinzhou is a place name in modern-day Jiangxi. It was depicted with hair-like objects, apparently describing Inonotus hispidus.
Folk understandings
In Tonghua, Jilin, various mushrooms were seen as sanghuang by the locals, where it was used to treat cancer and stomach illnesses. The report described the mushrooms and attached photos, but didn't identify them by Latin names.
Associated taxons
Phellinus linteus
It had long been thought that this mushroom is Phellinus linteus. An earlier iteration of this view, that the mushroom is Phellinus yucatanensis, can be traced back to Japanese academic literature of the early 20th century, based on specimens identified as Fomes yucatanensis, later deemed a synonym of P. linteus.
Polyporus linteus is a species named by Miles Joseph Berkeley and Moses Ashley Curtis in 1860 with the specimen from Nicaragua. Shu Chün Teng in 1963 renamed it Phellinus linteus.
Dai and Xu (1998) studied specimens from various East Asian regions, and found them morphologically different from American Phellinus linteus; the study concluded that Phellinus linteus is not found in East Asia. The study held that Phellinus linteus exists in tropical America, based on specimens from there, and in Africa, based on the type specimen of Xanthochrous rudis.
Zhou et al. (2015) examined two African specimens that morphologically fit X. rudis, and their sequences formed a distinct clade from P. linteus. Hence, X. rudis regained standalone taxon status and was renamed Tropicoporus rudis. And Phellinus linteus, now Tropicoporus linteus, is a tropical American species.
Phellinstatin is an enoyl-ACP reductase inhibitor isolated from the Phellinus linteus in East Asia.
Phellinus igniarius
Liu (1974) treated Phellinus igniarius as sanghuang.
Sanghuangporus baumii
Phellinus baumii was the name for this mushroom seen in Dai and Xu (1998). Phellinus baumii is known as Sanghuangporus baumii from 2015 on.
Sanghuangporus vaninii
Xie et al. (2010) inspected sanghuang strains from various institutions with molecular methods; Wu et al. (2012) analyzed those test results and found that they included Inonotus vaninii (formerly Phellinus vaninii). This mushroom has been known since 2015 as Sanghuangporus vaninii.
Sanghuangporus sanghuang
Inonotus sanghuang was seen as this mushroom in Wu et al. (2012). It only grows on mulberry trees. It was renamed Sanghuangporus sanghuang in 2015.
Inonotus hispidus
Inonotus hispidus was seen as sanghuang in Bao et al. (2017). I. hispidus grows on various broad-leaf trees, including mulberry trees.
Folk medicine
There is insufficient evidence from clinical studies to indicate its use as a prescription drug to treat cancer or any disease.
In Asian folk medicine, the mushroom is prepared as a tea. Its processed mycelium may be sold as a dietary supplement in the form of capsules, pills or powder.
See also
Medicinal fungi
Notes
References
Fungus common names
Medicinal fungi
Fungi of Asia | Meshimakobu | [
"Biology"
] | 910 | [
"Fungus common names",
"Fungi",
"Common names of organisms"
] |
16,959,258 | https://en.wikipedia.org/wiki/List%20of%20vascular%20plants%20of%20the%20Karelian%20Isthmus | This is a comprehensive list of the vascular plants of the Karelian Isthmus, a land mass in Russia connected to Finland on one side and otherwise surrounded by three bodies of water: the Gulf of Finland, the Neva River, and Lake Ladoga.
Pteridophyta
Aspleniaceae
Asplenium septentrionale - rare
Asplenium trichomanes - rare
Athyriaceae
Athyrium filix-femina – common
Cystopteris fragilis - rare
Diplazium sibiricum – rare
Gymnocarpium dryopteris - common
Botrychiaceae
Botrychium lanceolatum - rare
Botrychium lunaria
Botrychium matricariifolium - rare
Botrychium multifidum
Botrychium simplex - rare
Botrychium virginianum - rare
Dennstaedtiaceae
Pteridium aquilinum - common
Dryopteridaceae
Dryopteris carthusiana - common
Dryopteris cristata
Dryopteris expansa - common
Dryopteris filix-mas - common
Equisetaceae
Equisetum arvense - common
Equisetum fluviatile - common
Equisetum hyemale - common
Equisetum × litorale
Equisetum palustre - common
Equisetum pratense - common
Equisetum sylvaticum - common
Equisetum variegatum
Onocleaceae
Matteuccia struthiopteris – common
Ophioglossaceae
Ophioglossum vulgatum
Polypodiaceae
Polypodium vulgare
Thelypteridaceae
Phegopteris connectilis - common
Thelypteris palustris
Woodsiaceae
Woodsia ilvensis
Lycopodiophyta
Huperziaceae
Huperzia selago - common
Isoetaceae
Isoetes echinospora
Isoetes lacustris
Lycopodiaceae
Diphasiastrum complanatum - common
Diphasiastrum tristachyum
Diphasiastrum × zeileri
Lycopodiella inundata
Lycopodium annotinum - common
Lycopodium clavatum - common
Selaginellaceae
Selaginella selaginoides - extinct
Pinophyta
Cupressaceae
Juniperus communis - common
Pinaceae
Pinus sylvestris - common
Picea abies - common
Picea × fennica
Magnoliophyta
Liliopsida
Alismataceae
Alisma gramineum - rare
Alisma × juzepczukii - rare
Alisma plantago-aquatica - common
Alisma wahlenbergii
Sagittaria sagittifolia - common
Alliaceae
Allium angulosum - rare
Allium oleraceum
Allium schoenoprasum
Araceae
Calla palustris - common
Spirodela polyrhiza - common
Butomaceae
Butomus umbellatus
Cyperaceae
Blysmus rufus
Bolboschoenus maritimus
Carex acuta - common
Carex acutiformis - rare
Carex appropinquata - rare
Carex aquatilis - rare
Carex arenaria
Carex atherodes - rare
Carex bergrothii - rare
Carex bohemica - rare
Carex brunnescens - common
Carex buxbaumii - rare
Carex canescens - common
Carex caryophyllea - extinct
Carex cespitosa - common
Carex chordorrhiza - common
Carex contigua - rare
Carex diandra - common
Carex digitata
Carex dioica
Carex disperma - common
Carex disticha - rare
Carex echinata - common
Carex elata - rare
Carex elongata - common
Carex ericetorum - common
Carex flava - common
Carex glareosa - rare
Carex globularis - common
Carex hartmanii - rare
Carex heleonastes - rare
Carex hirta - common
Carex juncella - common
Carex lasiocarpa - common
Carex lepidocarpa - rare
Carex leporina - common
Carex limosa - common
Carex livida - rare
Carex loliacea
Carex mackenziei - rare
Carex muricata - rare
Carex nigra - common
Carex omskiana - rare
Carex otrubae - rare
Carex pallescens - common
Carex panicea - common
Carex paniculata - rare
Carex pauciflora - common
Carex paupercula - common
Carex pilulifera
Carex praecox - introduced, rare
Carex pseudocyperus - common
Carex rhynchophysa
Carex riparia - rare
Carex rostrata - common
Carex scandinavica - rare
Carex serotina
Carex sylvatica - rare
Carex vaginata - common
Carex vesicaria - common
Carex vulpina - rare
Eleocharis acicularis - common
Eleocharis fennica
Eleocharis mamillata
Eleocharis palustris - common
Eleocharis parvula - rare
Eleocharis quinqueflora - rare
Eriophorum angustifolium - common
Eriophorum gracile
Eriophorum latifolium
Eriophorum vaginatum - common
Rhynchospora alba
Rhynchospora fusca - rare
Scirpus lacustris - common
Scirpus radicans
Scirpus sylvaticus - common
Scirpus tabernaemontani
Trichophorum alpinum
Trichophorum cespitosum - rare
Hydrocharitaceae
Caulinia tenuissima - rare
Elodea canadensis - introduced, common
Hydrocharis morsus-ranae - common
Najas marina - rare
Stratiotes aloides - common
Iridaceae
Iris pseudacorus - common
Juncaceae
Juncus alpinoarticulatus - common
Juncus articulatus - common
Juncus balticus
Juncus bufonius - common
Juncus bulbosus
Juncus capitatus - extinct
Juncus compressus - common
Juncus conglomeratus - common
Juncus effusus - common
Juncus filiformis - common
Juncus fischerianus - rare
Juncus gerardii
Juncus hylanderi - rare
Juncus nastanthus - common
Juncus nodulosus - common
Juncus ranarius - common
Juncus squarrosus
Juncus stygius - rare
Juncus tenuis - introduced, common
Luzula campestris - rare
Luzula multiflora - common
Luzula pallidula - common
Luzula pilosa - common
Juncaginaceae
Triglochin maritimum
Triglochin palustris - common
Lemnaceae
Lemna gibba - rare
Lemna minor - common
Lemna trisulca - common
Liliaceae
Gagea lutea - rare
Gagea minima
Orchidaceae
Calypso bulbosa - extinct
Coeloglossum viride - rare
Corallorhiza trifida
Cypripedium calceolus - extinct
Dactylorhiza baltica - rare
Dactylorhiza fuchsii - common
Dactylorhiza incarnata
Dactylorhiza maculata - common
Dactylorhiza traunsteineri - rare
Epipactis atrorubens - rare
Epipactis helleborine
Epipactis palustris - rare
Epipogium aphyllum - rare
Goodyera repens - common
Gymnadenia conopsea
Neottia cordata - rare
Neottia ovata
Malaxis monophyllos - rare
Malaxis paludosa
Neottia nidus-avis
Platanthera bifolia
Platanthera chlorantha - rare
Poaceae
Agropyron pectinatum - introduced, rare
Agrostis canina - common
Agrostis capillaris - common
Agrostis gigantea - common
Agrostis stolonifera - common
Agrostis straminea
Alopecurus aequalis - common
Alopecurus arundinaceus - rare
Alopecurus geniculatus - common
Alopecurus pratensis - common
Anisantha tectorum - introduced, rare
Anthoxanthum odoratum - common
Apera spica-venti
Arrhenatherum elatius - introduced, rare
Avena fatua - introduced, rare
Avena strigosa - introduced
Brachypodium pinnatum - rare
Briza media - common
Bromus inermis - common
Bromus mollis - introduced
Calamagrostis arundinacea - common
Calamagrostis canescens - common
Calamagrostis epigeios - common
Calamagrostis meinshausenii - common
Calamagrostis neglecta - common
Calamagrostis phragmitoides - common
Calamagrostis purpurea - rare
Catabrosa aquatica - rare
Cinna latifolia - rare
Dactylis glomerata - common
Dactylis polygama - introduced, rare
Deschampsia cespitosa - common
Deschampsia flexuosa - common
Echinochloa crusgalli - introduced
Elymus caninus - common
Elytrigia repens - common
Festuca arenaria - rare
Festuca brevipila - rare
Festuca gigantea
Festuca ovina - common
Festuca polesica - rare
Festuca pratensis - common
Festuca rubra - common
Festuca sabulosa
Glyceria fluitans - common
Glyceria lithuanica - rare
Glyceria maxima - common
Glyceria notata - common
Helictotrichon pubescens
Hierochloe arctica - common
Hierochloe australis - rare
Hierochloe baltica - common
Hierochloe hirta - rare
Holcus mollis
Koeleria delavignei - introduced, rare
Koeleria glauca - introduced, rare
Leersia oryzoides
Leymus arenarius
Melica nutans - common
Melica picta - rare
Milium effusum - common
Molinia caerulea - common
Nardus stricta
Panicum miliaceum - introduced, rare
Phalaris arundinacea - common
Phalaris canariensis - introduced, rare
Phleum pratense - common
Phragmites australis - common
Poa angustifolia - common
Poa annua - common
Poa compressa - common
Poa humilis - common
Poa nemoralis - common
Poa palustris - common
Poa pratensis - common
Poa remota
Poa supina - introduced, rare
Poa trivialis - common
Puccinellia distans - introduced
Puccinellia hauptiana - introduced, rare
Puccinellia pulvinata - rare
Scolochloa festucacea
Setaria faberi - introduced, rare
Setaria pumila - introduced, rare
Setaria pycnocoma - introduced, rare
Setaria viridis - introduced
Sieglingia decumbens
Potamogetonaceae
Potamogeton alpinus - common
Potamogeton berchtoldii - common
Potamogeton compressus - common
Potamogeton crispus - rare
Potamogeton filiformis - rare
Potamogeton friesii - rare
Potamogeton gramineus - common
Potamogeton lucens - common
Potamogeton natans - common
Potamogeton obtusifolius
Potamogeton pectinatus - common
Potamogeton perfoliatus - common
Potamogeton praelongus - rare
Potamogeton pusillus
Potamogeton rutilus - rare
Potamogeton trichoides - rare
Zannichellia palustris
Ruppiaceae
Ruppia brachypus - rare
Ruscaceae
Convallaria majalis - common
Maianthemum bifolium - common
Polygonatum multiflorum
Polygonatum odoratum - common
Scheuchzeriaceae
Scheuchzeria palustris - common
Sparganiaceae
Sparganium angustifolium
Sparganium emersum - common
Sparganium glomeratum
Sparganium gramineum
Sparganium microcarpum - common
Sparganium minimum - common
Trilliaceae
Paris quadrifolia - common
Typhaceae
Typha angustifolia
Typha latifolia - common
Magnoliopsida
Aceraceae
Acer platanoides
Adoxaceae
Adoxa moschatellina
Viburnum opulus - common
Amaranthaceae
Amaranthus retroflexus - introduced, rare
Atriplex calotheca - rare
Atriplex littoralis
Atriplex longipes - rare
Atriplex patula - introduced, common
Atriplex prostrata - common
Atriplex sagittata - introduced
Chenopodium album - introduced, common
Chenopodium glaucum - introduced, common
Chenopodium polyspermum - introduced
Chenopodium rubrum - introduced, common
Chenopodium suecicum - introduced
Corispermum membranaceum - introduced, rare
Salsola kali
Apiaceae
Aegopodium podagraria - common
Angelica sylvestris - common
Anthriscus sylvestris - common
Archangelica litoralis
Carum carvi - common
Chaerophyllum aromaticum - rare
Chaerophyllum aureum - introduced, rare
Cicuta virosa - common
Conioselinum tataricum - rare
Conium maculatum - introduced
Heracleum sibiricum - common
Heracleum sphondylium - introduced, rare
Kadenia dubia - rare
Oenanthe aquatica
Pastinaca sativa - introduced
Pimpinella saxifraga - common
Selinum carvifolia
Sium latifolium
Thyselium palustre - common
Aristolochiaceae
Asarum europaeum - rare
Asteraceae
Achillea millefolium - common
Antennaria dioica - common
Anthemis arvensis - introduced
Anthemis tinctoria - introduced
Arctium lappa - introduced, rare
Arctium minus - introduced
Arctium tomentosum - introduced, common
Artemisia absinthium - introduced
Artemisia austriaca - introduced, rare
Artemisia campestris - common
Artemisia pontica - introduced, rare
Artemisia sieversiana - introduced, rare
Artemisia vulgaris - common
Bidens cernua
Bidens radiata
Bidens tripartita - common
Carduus crispus - common
Carlina fennica
Centaurea cyanus - introduced
Centaurea diffusa - introduced, rare
Centaurea jacea - common
Centaurea phrygia - common
Centaurea scabiosa - common
Cichorium intybus - introduced
Cirsium arvense - common
Cirsium heterophyllum - common
Cirsium oleraceum
Cirsium palustre - common
Cirsium vulgare - introduced, common
Conyza canadensis - introduced, common
Crepis czerepanovii - rare
Crepis paludosa - common
Crepis tectorum - common
Erigeron acris - common
Erigeron droebachiensis - introduced, rare
Eupatorium cannabinum - rare
Filago arvensis - common
Galinsoga ciliata - introduced, rare
Galinsoga parviflora - introduced, rare
Gnaphalium pilulare - rare
Gnaphalium uliginosum - common
Hieracium aggr. aestivum:
Hieracium ahtii - rare
Hieracium reticulatum - rare
Hieracium aggr. bifidum:
Hieracium chlorellum - rare
Hieracium crispulum - rare
Hieracium prolixum - rare
Hieracium triangulare - rare
Hieracium aggr. caesiomurorum:
Hieracium caesiomurorum - rare
Hieracium fulvescens - rare
Hieracium aggr. caesium:
Hieracium caesium - rare
Hieracium coniops - rare
Hieracium laeticolor - rare
Hieracium plumbeum - rare
Hieracium ravidum
Hieracium aggr. diaphanoides:
Hieracium caespiticola - rare
Hieracium chloromaurum - rare
Hieracium diaphanoides - rare
Hieracium pseudopellucidum - rare
Hieracium silenii
Hieracium subpellucidum
Hieracium aggr. fuscocinereum:
Hieracium godbyense - rare
Hieracium oistophyllum - rare
Hieracium penduliforme - rare
Hieracium philanthrax
Hieracium karelorum
Hieracium aggr. kuusamoense:
Hieracium prolatatum - rare
Hieracium aggr. laevigatum:
Hieracium laevigatum
Hieracium lissolepium - rare
Hieracium tridentatum - rare
Hieracium aggr. murorum:
Hieracium dispansiforme - rare
Hieracium distendens
Hieracium hjeltii - rare
Hieracium lepistoides
Hieracium morulum
Hieracium patale
Hieracium pellucidum - rare
Hieracium praetenerum
Hieracium subcaesium
Hieracium submarginellum
Hieracium aggr. umbellatum:
Hieracium umbellatum - common
Hieracium aggr. vulgatum:
Hieracium diversifolium
Hieracium incurrens
Hieracium vulgatum - common
Inula britannica
Inula salicina
Lactuca serriola - introduced, rare
Lactuca sibirica
Lactuca tatarica - introduced, rare
Lapsana communis - introduced, common
Leontodon autumnalis - common
Leontodon hispidus - common
Lepidotheca suaveolens - introduced, common
Leucanthemum vulgare - common
Ligularia sibirica - extinct
Matricaria recutita - introduced, rare
Mycelis muralis - rare
Omalotheca sylvatica - common
Phalacroloma strigosum - introduced
Picris hieracioides
Pilosella caespitosa
Pilosella cymella
Pilosella × floribunda - common
Pilosella lactucella - rare
Pilosella officinarum - common
Pilosella onegensis
Pilosella praealta
Ptarmica vulgaris - common
Scorzonera humilis
Senecio aquaticus - rare
Senecio jacobaea - introduced, rare
Senecio paludosus - rare
Senecio sylvaticus - introduced, rare
Senecio viscosus - common
Senecio vulgaris - introduced, common
Solidago virgaurea - common
Sonchus arvensis - introduced, common
Sonchus asper - introduced
Sonchus humilis - rare
Sonchus oleraceus - introduced, common
Tanacetum vulgare - common
Taraxacum officinale - common
Tragopogon pratensis - introduced
Tripleurospermum maritimum - rare
Tripleurospermum perforatum - introduced, common
Tripleurospermum subpolare
Tripolium vulgare - rare
Trommsdorffia maculata - common
Tussilago farfara - common
Balsaminaceae
Impatiens noli-tangere - common
Impatiens parviflora - introduced, common
Betulaceae
Alnus glutinosa - common
Alnus incana - common
Betula nana
Betula pendula - common
Betula pubescens - common
Corylus avellana
Boraginaceae
Anchusa officinalis - introduced
Buglossoides arvensis - introduced, rare
Cynoglossum officinale - introduced, rare
Echium vulgare - introduced
Lappula squarrosa - introduced
Lycopsis arvensis - introduced, rare
Myosotis arvensis - introduced, common
Myosotis caespitosa - common
Myosotis micrantha - common
Myosotis palustris - common
Myosotis ramosissima - rare
Myosotis sparsiflora - common
Pulmonaria obscura
Symphytum officinale - common
Brassicaceae
Alliaria petiolata - introduced, rare
Alyssum alyssoides - introduced, rare
Arabidopsis × suecica
Arabidopsis thaliana - common
Barbarea arcuata - common
Barbarea stricta - common
Berteroa incana - introduced, common
Brassica campestris - introduced, common
Bunias orientalis - introduced, common
Cakile baltica
Camelina microcarpa - introduced, rare
Camelina sylvestris - introduced, rare
Capsella bursa-pastoris - introduced, common
Cardamine amara - common
Cardamine dentata - common
Cardamine parviflora - rare
Cardamine pratensis
Cardaminopsis arenosa - common
Dentaria bulbifera - rare
Descurainia sophia - introduced, common
Diplotaxis muralis - introduced, rare
Draba incana - rare
Draba nemorosa
Erophila verna
Erucastrum gallicum - introduced, rare
Erysimum canescens - introduced, rare
Erysimum cheiranthoides - introduced, common
Erysimum cuspidatum
Isatis tinctoria
Lepidium campestre - introduced, rare
Lepidium densiflorum - introduced, common
Lepidium latifolium - introduced, rare
Lepidium ruderale - introduced, common
Neslia paniculata - introduced
Raphanus raphanistrum - introduced, common
Rorippa × armoracioides - introduced, rare
Rorippa amphibia - rare
Rorippa austriaca - introduced, rare
Rorippa palustris - common
Rorippa sylvestris - common
Sinapis arvensis - introduced
Sisymbrium altissimum - introduced
Sisymbrium loeselii - introduced
Sisymbrium officinale - introduced, common
Sisymbrium wolgense - introduced, rare
Subularia aquatica
Thlaspi alpestre - introduced
Thlaspi arvense - introduced, common
Turritis glabra - common
Campanulaceae
Campanula cervicaria
Campanula glomerata - common
Campanula latifolia
Campanula patula - common
Campanula persicifolia - common
Campanula rapunculoides - introduced
Campanula rotundifolia - common
Campanula trachelium
Jasione montana
Cannabaceae
Humulus lupulus - common
Cannabis ruderalis - introduced, rare
Caprifoliaceae
Linnaea borealis - common
Lonicera xylosteum - common
Caryophyllaceae
Arenaria serpyllifolia
Cerastium arvense - common
Cerastium holosteoides - common
Coccyganthe flos-cuculi - common
Dianthus arenarius
Dianthus deltoides - common
Gypsophila fastigiata - rare
Herniaria glabra
Honckenya peploides
Melandrium album - introduced, common
Melandrium dioicum - common
Moehringia trinervia - common
Myosoton aquaticum
Oberna behen - common
Psammophiliella muralis - introduced
Sagina nodosa
Sagina procumbens - common
Scleranthus annuus - common
Scleranthus perennis - rare
Silene nutans - common
Silene rupestris - rare
Silene tatarica - introduced, rare
Spergula arvensis - introduced, common
Spergula morisonii
Spergularia marina - rare
Spergularia rubra - common
Stellaria alsine
Stellaria crassifolia - rare
Stellaria graminea - common
Stellaria holostea - common
Stellaria longifolia
Stellaria media - common
Stellaria nemorum - common
Stellaria palustris - common
Viscaria alpina
Viscaria viscosa - common
Ceratophyllaceae
Ceratophyllum demersum - common
Clusiaceae
Hypericum maculatum - common
Hypericum perforatum - common
Convolvulaceae
Calystegia sepium - common
Convolvulus arvensis - introduced, common
Cuscuta europaea
Cuscuta halophyta - rare
Cornaceae
Cornus suecica
Crassulaceae
Hylotelephium decumbens - common
Hylotelephium triphyllum - common
Sedum acre - common
Tillaea aquatica
Dipsacaceae
Knautia arvensis - common
Succisa pratensis - common
Droseraceae
Drosera anglica - common
Drosera intermedia - rare
Drosera × obovata - common
Drosera rotundifolia - common
Elatinaceae
Elatine hydropiper
Elatine orthosperma - extinct
Elatine triandra
Ericaceae
Andromeda polifolia - common
Arctostaphylos uva-ursi - common
Calluna vulgaris - common
Chamaedaphne calyculata - common
Chimaphila umbellata
Empetrum hermaphroditum
Empetrum nigrum - common
Empetrum subholarcticum
Hypopitys monotropa - common
Ledum palustre - common
Moneses uniflora
Orthilia secunda - common
Oxycoccus microcarpus
Oxycoccus palustris - common
Pyrola chlorantha
Pyrola media
Pyrola minor - common
Pyrola rotundifolia - common
Vaccinium myrtillus - common
Vaccinium uliginosum - common
Vaccinium vitis-idaea - common
Euphorbiaceae
Euphorbia esula - rare
Euphorbia helioscopia - introduced, rare
Euphorbia palustris
Euphorbia seguieriana - introduced, rare
Euphorbia virgata - introduced, common
Mercurialis perennis - rare
Fabaceae
Anthyllis colorata - introduced, rare
Anthyllis macrocephala - introduced, rare
Astragalus danicus - introduced, rare
Astragalus subpolaris - rare
Chrysaspis aurea - common
Chrysaspis campestris - introduced, rare
Chrysaspis spadicea - common
Lathyrus linifolius - rare
Lathyrus maritimus
Lathyrus palustris
Lathyrus pisiformis - introduced, rare
Lathyrus pratensis - common
Lathyrus sylvestris - common
Lathyrus tuberosus - introduced, rare
Lathyrus vernus - common
Lotus ambiguus - introduced, common
Lotus callunetorum - common
Lotus corniculatus - introduced, common
Lotus ruprechtii
Medicago falcata - introduced
Medicago lupulina - introduced, common
Melilotus albus - introduced, common
Melilotus officinalis - introduced, common
Ononis repens - introduced
Oxytropis sordida - rare
Securigera varia - introduced, rare
Trifolium arvense - common
Trifolium hybridum - introduced, common
Trifolium medium - common
Trifolium montanum - introduced, rare
Trifolium pratense - common
Trifolium repens - common
Vicia angustifolia - introduced
Vicia biennis - introduced, rare
Vicia cracca - common
Vicia hirsuta - introduced
Vicia sepium - common
Vicia sylvatica - common
Vicia tetrasperma - introduced
Vicia villosa - introduced, rare
Fagaceae
Quercus robur
Fumariaceae
Corydalis intermedia - rare
Corydalis solida
Fumaria officinalis - introduced, common
Gentianaceae
Centaurium erythraea - rare
Centaurium littorale
Centaurium pulchellum - rare
Gentiana pneumonanthe - rare
Gentianella amarella - rare
Gentianella campestris - rare
Geraniaceae
Erodium cicutarium - introduced
Geranium bohemicum - rare
Geranium palustre
Geranium pratense - common
Geranium robertianum
Geranium sibiricum - introduced, rare
Geranium sylvaticum - common
Grossulariaceae
Ribes alpinum
Ribes nigrum - common
Ribes spicatum - common
Haloragaceae
Myriophyllum alterniflorum
Myriophyllum sibiricum - common
Myriophyllum spicatum - rare
Myriophyllum verticillatum - common
Lamiaceae
Acinos arvensis, synonym of Clinopodium acinos
Ajuga pyramidalis
Ajuga reptans
Betonica officinalis - introduced, rare
Clinopodium vulgare - common
Dracocephalum thymiflorum - introduced, rare
Galeobdolon luteum
Galeopsis bifida - introduced, common
Galeopsis ladanum - introduced
Galeopsis speciosa - introduced, common
Galeopsis tetrahit - introduced, common
Glechoma hederacea - common
Lamium album - introduced, common
Lamium hybridum - introduced
Lamium purpureum - introduced, common
Leonurus cardiaca - introduced, rare
Lycopus europaeus - common
Mentha aquatica - rare
Mentha arvensis - common
Prunella vulgaris - common
Scutellaria galericulata - common
Scutellaria hastifolia
Stachys palustris - common
Stachys recta - introduced, rare
Stachys sylvatica - common
Thymus ovatus - rare
Thymus serpyllum - common
Lentibulariaceae
Pinguicula vulgaris - extinct
Utricularia australis - rare
Utricularia intermedia
Utricularia minor
Utricularia ochroleuca - rare
Utricularia vulgaris - common
Linaceae
Linum catharticum
Linum usitatissimum - introduced, common
Lobeliaceae
Lobelia dortmanna
Lythraceae
Lythrum intermedium
Lythrum salicaria - common
Peplis portula
Malvaceae
Malva pusilla - introduced, rare
Tilia cordata
Menyanthaceae
Menyanthes trifoliata - common
Myricaceae
Myrica gale
Myrsinaceae
Centunculus minimus - rare
Glaux maritima
Lysimachia nummularia - rare
Lysimachia vulgaris - common
Naumburgia thyrsiflora - common
Trientalis europaea - common
Nymphaeaceae
Nuphar lutea - common
Nuphar pumila
Nymphaea candida - common
Nymphaea tetragona - rare
Oleaceae
Fraxinus excelsior
Onagraceae
Chamaenerion angustifolium - common
Circaea alpina
Epilobium adenocaulon - introduced, common
Epilobium bergianum - introduced, rare
Epilobium collinum
Epilobium hirsutum - common
Epilobium montanum - common
Epilobium obscurum - extinct
Epilobium palustre - common
Epilobium pseudorubescens - introduced, common
Epilobium roseum - common
Oenothera rubricaulis
Orobanchaceae
Euphrasia brevipila - common
Euphrasia fennica - common
Euphrasia glabrescens - common
Euphrasia hirtella - rare
Euphrasia parviflora - common
Euphrasia vernalis - common
Lathraea squamaria - rare
Melampyrum cristatum - rare
Melampyrum nemorosum - common
Melampyrum pratense - common
Melampyrum sylvaticum - common
Odontites litoralis
Odontites vulgaris - common
Pedicularis kaufmannii - rare
Pedicularis palustris - common
Pedicularis sceptrum-carolinum - rare
Rhinanthus minor - common
Rhinanthus serotinus - common
Oxalidaceae
Oxalis acetosella - common
Papaveraceae
Chelidonium majus - introduced, common
Papaver rhoeas - introduced, rare
Parnassiaceae
Parnassia palustris
Plantaginaceae
Callitriche cophocarpa - common
Callitriche hermaphroditica
Callitriche palustris - common
Chaenorhinum minus - introduced, rare
Hippuris vulgaris - common
Linaria genistifolia - introduced, rare
Littorella uniflora - rare
Plantago lanceolata - common
Plantago major - common
Plantago maritima
Plantago media - common
Veronica anagallis-aquatica - rare
Veronica arvensis - introduced
Veronica beccabunga
Veronica chamaedrys - common
Veronica longifolia - common
Veronica officinalis - common
Veronica scutellata - common
Veronica serpyllifolia - common
Veronica spicata - rare
Veronica verna - common
Plumbaginaceae
Armeria vulgaris - rare
Polemoniaceae
Polemonium caeruleum
Polygalaceae
Polygala amarella
Polygala vulgaris
Polygonaceae
Bistorta vivipara
Fagopyrum tataricum - introduced, rare
Fallopia convolvulus - introduced, common
Fallopia dumetorum
Persicaria amphibia - common
Persicaria foliosa - rare
Persicaria hydropiper - common
Persicaria lapathifolia - common
Persicaria maculosa
Persicaria minor - common
Persicaria mitis - rare
Persicaria scabra - introduced, common
Polygonum arenastrum - introduced, common
Polygonum aviculare - common
Polygonum boreale - rare
Polygonum calcatum - introduced, common
Polygonum neglectum - common
Polygonum oxyspermum - rare
Polygonum rurivagum - introduced, rare
Rumex acetosa - common
Rumex acetosella - common
Rumex aquaticus - common
Rumex confertus - introduced, rare
Rumex crispus - common
Rumex hydrolapathum - common
Rumex longifolius - introduced, common
Rumex maritimus
Rumex pseudonatronatus - introduced, rare
Rumex stenophyllus - introduced, rare
Rumex sylvestris - common
Rumex thyrsiflorus - common
Rumex triangulivalvis - introduced, rare
Portulacaceae
Montia fontana
Portulaca oleracea - introduced
Primulaceae
Androsace filiformis - rare
Androsace septentrionalis
Hottonia palustris - rare
Primula veris - rare
Ranunculaceae
Aconitum lycoctonum
Actaea spicata
Anemone nemorosa - common
Anemone ranunculoides
Batrachium circinatum
Batrachium dichotomum - common
Batrachium eradicatum - rare
Batrachium floribundum - rare
Batrachium kauffmannii - common
Batrachium marinum
Batrachium nevense - rare
Batrachium penicillatum - rare
Caltha palustris - common
Consolida regalis - introduced, rare
Ficaria verna
Hepatica nobilis
Myosurus minimus - introduced
Pulsatilla patens
Pulsatilla pratensis
Pulsatilla vernalis
Ranunculus acris - common
Ranunculus auricomus - common
Ranunculus bulbosus - rare
Ranunculus cassubicus
Ranunculus fallax - common
Ranunculus flammula - common
Ranunculus lingua
Ranunculus polyanthemos - common
Ranunculus repens - common
Ranunculus reptans - common
Ranunculus sceleratus - common
Ranunculus subborealis - rare
Thalictrum aquilegiifolium - rare
Thalictrum flavum - common
Thalictrum lucidum - rare
Thalictrum minus - introduced, rare
Thalictrum simplex - rare
Rhamnaceae
Frangula alnus - common
Rhamnus cathartica - rare
Rosaceae
Agrimonia eupatoria - rare
Agrimonia pilosa - rare
Alchemilla acutangula - common
Alchemilla baltica - common
Alchemilla cymatophylla - rare
Alchemilla filicaulis - rare
Alchemilla glabra - rare
Alchemilla glabricaulis - rare
Alchemilla glaucescens - rare
Alchemilla glomerulans - rare
Alchemilla hirsuticaulis - common
Alchemilla micans - common
Alchemilla monticola - common
Alchemilla murbeckiana - rare
Alchemilla obtusa - rare
Alchemilla plicata - rare
Alchemilla propinqua - rare
Alchemilla sarmatica
Alchemilla subcrenata - common
Alchemilla xanthochlora - introduced, rare
Comarum palustre - common
Filipendula denudata - common
Filipendula ulmaria - common
Filipendula vulgaris - rare
Fragaria moschata - common
Fragaria vesca - common
Geum aleppicum - introduced
Geum rivale - common
Geum urbanum - common
Potentilla anserina - common
Potentilla argentea - common
Potentilla bifurca - introduced, rare
Potentilla canescens - rare
Potentilla erecta - common
Potentilla goldbachii - common
Potentilla intermedia - common
Potentilla norvegica - common
Potentilla ruthenica - introduced, rare
Potentilla supina - introduced, rare
Potentilla verna - rare
Prunus padus - common
Rosa majalis - common
Rubus arcticus
Rubus caesius - introduced, rare
Rubus chamaemorus - common
Rubus idaeus - common
Rubus nessensis
Rubus saxatilis - common
Sorbus aucuparia - common
Rubiaceae
Galium album - common
Galium aparine - introduced, common
Galium boreale - common
Galium hercynicum - rare
Galium mollugo - rare
Galium odoratum - rare
Galium palustre - common
Galium trifidum - common
Galium triflorum - rare
Galium uliginosum - common
Galium vaillantii - introduced, common
Galium verum - common
Salicaceae
Populus tremula - common
Salix acutifolia
Salix aurita - common
Salix caprea - common
Salix cinerea - common
Salix lapponum
Salix myrsinifolia - common
Salix myrtilloides
Salix pentandra - common
Salix phylicifolia - common
Salix rosmarinifolia
Salix starkeana - common
Salix triandra - common
Salix viminalis - rare
Santalaceae
Thesium arvense - introduced, rare
Saxifragaceae
Chrysosplenium alternifolium - common
Saxifraga cespitosa - rare
Saxifraga hirculus
Scrophulariaceae
Limosella aquatica
Scrophularia nodosa - common
Verbascum nigrum - common
Verbascum thapsus
Solanaceae
Hyoscyamus niger - introduced
Solanum dulcamara - common
Solanum nigrum - introduced, rare
Thymelaeaceae
Daphne mezereum
Ulmaceae
Ulmus glabra
Ulmus laevis
Urticaceae
Urtica dioica - common
Urtica urens - introduced
Valerianaceae
Valeriana officinalis - common
Valeriana salina
Valeriana sambucifolia
Violaceae
Viola arvensis - introduced, common
Viola canina - common
Viola epipsila - common
Viola mirabilis
Viola nemoralis - common
Viola palustris - common
Viola persicifolia - rare
Viola riviniana - common
Viola rupestris
Viola selkirkii
Viola tricolor - common
Viola uliginosa
References
Doronina A. Yu. Vascular Plants of the Karelian Isthmus (Leningrad Region) [Сосудистые растения Карельского перешейка (Ленинградская область)]. Moscow: KMK, 2007.
Illustrated Guide to the Plants of the Leningrad Region [Иллюстрированный определитель растений Ленинградской области], edited by A. L. Budantsev and G. P. Yakovlev. Moscow: KMK, 2006.
Illustrated Guide to the Plants of the Karelian Isthmus [Иллюстрированный определитель растений Карельского перешейка], edited by A. L. Budantsev and G. P. Yakovlev. St. Petersburg: SpetsLit, 2000.
Karelian Isthmus
Flora of Russia
Flora of Finland
Flora of Europe
Lists of plants
Lists of biota of Russia
Lists of biota of Finland
Flora of the Holarctic realm | List of vascular plants of the Karelian Isthmus | [
"Biology"
] | 8,891 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
16,959,357 | https://en.wikipedia.org/wiki/Millimeter%20wave%20scanner | A millimeter wave scanner is a whole-body imaging device used for detecting objects concealed underneath a person’s clothing using millimeter-wave electromagnetic radiation. Typical uses for this technology include detection of items for commercial loss prevention and smuggling, and screening for weapons at government buildings and airport security checkpoints.
It is one of the common full body scanner technologies used for body imaging; a competing technology is backscatter X-ray. Millimeter wave scanners come in two varieties: active and passive. Active scanners direct millimeter wave energy at the subject and then interpret the reflected energy. Passive systems create images using only ambient radiation and radiation emitted from the human body or objects.
Technical details
In active scanners, the millimeter wave is transmitted from two antennas simultaneously as they rotate around the body. The wave energy reflected back from the body or other objects on the body is used to construct a three-dimensional image, which is displayed on a remote monitor for analysis.
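As an illustration of the reconstruction principle only, the following toy sketch simulates a single point reflector seen by a ring of monostatic antennas at one assumed frequency and focuses the echoes back onto an image grid by matched filtering. The geometry, frequency, and antenna count are my own assumptions; commercial scanners use wideband, three-dimensional holographic reconstruction rather than this simplified 2-D scheme.

import numpy as np

c = 3.0e8                      # speed of light (m/s)
f = 30e9                       # assumed operating frequency: 30 GHz
k = 2 * np.pi * f / c          # wavenumber

# Antennas on a ring of radius 1 m around the subject (2-D toy geometry).
angles = np.linspace(0, 2 * np.pi, 256, endpoint=False)
antennas = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# One point reflector standing in for an object carried on the body.
target = np.array([0.15, 0.05])

# Simulated monostatic echoes: phase fixed by the round-trip path length.
ranges = np.linalg.norm(antennas - target, axis=1)
echoes = np.exp(-1j * 2 * k * ranges)

# Focus (matched-filter) the echoes onto an image grid.
xs = np.linspace(-0.3, 0.3, 121)
ys = np.linspace(-0.3, 0.3, 121)
image = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(antennas - np.array([x, y]), axis=1)
        image[iy, ix] = np.abs(np.sum(echoes * np.exp(1j * 2 * k * d)))

iy, ix = np.unravel_index(np.argmax(image), image.shape)
print("brightest pixel at x =", xs[ix], "y =", ys[iy])   # near the reflector

The brightest pixel lands at the reflector position because only there do the phases of all the echoes realign and add coherently.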
History
The first millimeter-wave full body scanner was developed at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington, one of the eight national laboratories that Battelle manages for the U.S. Department of Energy. In the 1990s, PNNL patented its 3-D holographic-imagery technology, with research and development support provided by the Federal Aviation Administration (FAA) and later the TSA. In 2002, Silicon Valley startup SafeView, Inc. obtained an exclusive license to PNNL's (background) intellectual property to commercialize the technology. From 2002 to 2006, SafeView developed a production-ready millimeter wave body scanner system and software that included scanner control, algorithms for threat detection and object recognition, and techniques to conceal raw images in order to resolve privacy concerns. During this time, SafeView developed foreground IP through several patent applications. By 2006, SafeView's body scanning portals had been installed and trialed at various locations around the globe: border crossings in Israel, international airports such as Mexico City and Amsterdam's Schiphol, ferry landings in Singapore, railway stations in the UK, government buildings such as The Hague, and commercial buildings in Tokyo. They were also employed to secure soldiers and workers in Iraq's Green Zone. In 2006, SafeView was acquired by L-3 Communications. From 2006 to 2020, L-3 Communications (later L3Harris) continued to make incremental enhancements to the scanner systems while deploying thousands of units worldwide. In 2020, Leidos acquired L3Harris's security detection and automation businesses, which included the body scanner product line.
Privacy concerns
Historically, privacy advocates were concerned about the use of full body scanning technology because it originally displayed a detailed image of the body surface under clothing, as well as prostheses (including breast prostheses) and other normally hidden medical equipment, such as colostomy bags. These privacy advocates called the images "virtual strip searches". However, in 2013 the U.S. Congress prohibited the display of detailed images and required that metal and other objects be displayed on a generic body outline instead of the person's actual skin. Such generic body outlines can be produced by Automatic Target Recognition (ATR) software. As of June 1, 2013, all backscatter full body scanners were removed from use at U.S. airports because they could not comply with TSA's software requirements. Millimeter-wave full body scanners utilize ATR and are compliant with TSA software requirements.
Software imaging technology can also mask specific body parts. Proposed remedies for privacy concerns include scanning only people who are independently detected to be carrying contraband, or developing technology to mask genitals and other private parts. In some locations, travelers have the choice between the body scan or a "patdown". In Australia, the scans are mandatory; in the UK, however, passengers may opt out of being scanned. In this case, the individual must be screened by an alternative method which includes at least an enhanced hand search in private as set out on the UK government website.
In the United States, the Transportation Security Administration (TSA) claimed to have taken steps to address privacy objections. The TSA claimed that the images captured by the machines were not stored. On the other hand, the U.S. Marshals Service admitted that it had saved thousands of images captured from a Florida checkpoint. The officer sitting at the machine does not see the image; rather that screen shows only whether the viewing officer has confirmed that the passenger has cleared. Conversely, the officer who views the image does not see the person being scanned by the device. In some locations, updated software has removed the necessity of a separate officer in a remote location. These units now generate a generic image of a person, with specific areas of suspicion highlighted by boxes. If no suspicious items are detected by the machine, a green screen instead appears indicating the passenger is cleared.
Concerns remain about alternative ways to capture and disseminate the image. Additionally, the protective steps often do not entirely address the underlying privacy concerns. Subjects may object to anyone viewing them in a state of effective undress, even if it is not the agent next to the machine or the image is not retrievable.
Reports of full-body scanner images being improperly and perhaps illegally saved and disseminated have emerged.
Possible health effects
Millimeter wavelength radiation is a subset of the microwave radio frequency spectrum. Even at its high-energy end, it is still more than 3 orders of magnitude lower in energy than its nearest radiotoxic neighbour (ultraviolet) in the electromagnetic spectrum. As such, millimeter wave radiation is non-ionizing and incapable of causing cancers by radiolytic DNA bond cleavage. Due to the shallow penetration depth of millimeter waves into tissue (typically less than 1 mm), acute biological effects of irradiation are localized in epidermal and dermal layers and manifest primarily as thermal effects. There is no clear evidence to date of harmful effects other than those caused by localised heating and ensuing chemical changes (expression of heat shock proteins, denaturation, proteolysis, and inflammatory response, see also mobile phone radiation and health). The energy density required to produce thermal injury in skin is much higher than that typically delivered in an active millimeter wave scanner.
The fragmented or misfolded molecules resulting from thermal injury may be delivered to neighbouring cells through diffusion and into the systemic circulation through perfusion. Increased skin permeability under irradiation exacerbates this possibility. It is therefore plausible that the molecular products of thermal injury (and their distribution to areas remote from the site of irradiation) could cause secondary injury. Note that this would be no different from the effects of a thermal injury sustained in a more conventional fashion. Due to the increasing ubiquity of millimeter wave radiation (see WiGig), research into its potential biological effects is ongoing.
Independent of thermal injury, a 2009 study funded by the National Institutes of Health and conducted by the U.S. Department of Energy's Los Alamos National Laboratory (Theoretical Division and Center for Nonlinear Studies) together with Harvard Medical School found that terahertz-range radiation creates changes in DNA breathing dynamics, apparently interfering with the naturally occurring local strand-separation dynamics of double-stranded DNA and, consequently, with DNA function. The study was also covered in an MIT Technology Review article on October 30, 2009.
Millimeter wave scanners should not be confused with backscatter X-ray scanners, a completely different technology used for similar purposes at airports. X-rays are ionizing radiation, more energetic than millimeter waves by more than five orders of magnitude, and raise concerns about possible mutagenic potential.
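As a rough worked comparison of the photon energies behind these statements (using representative values I am assuming here: 300 GHz for the upper end of the millimeter-wave band, 400 nm for the low-energy edge of ultraviolet, and about 1 keV for soft X-rays):

E = h\nu, \qquad E_{\mathrm{mm}} \approx h \times 300\,\mathrm{GHz} \approx 1.2\,\mathrm{meV}, \qquad E_{\mathrm{UV}} \approx \frac{hc}{400\,\mathrm{nm}} \approx 3.1\,\mathrm{eV}, \qquad E_{\mathrm{X\text{-}ray}} \gtrsim 1\,\mathrm{keV}

so E_UV/E_mm is roughly 2.5 × 10^3 (more than three orders of magnitude) and E_X-ray/E_mm is roughly 8 × 10^5 or more (more than five orders of magnitude), consistent with the figures quoted above.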
Effectiveness
The efficacy of millimeter wave scanners in detecting threatening objects has been questioned. Formal studies demonstrated the relative inability of these scanners to detect objects, dangerous or not, on the person being scanned. Additionally, some studies suggested that the cost-benefit ratios of these scanners are poor. As of January 2011, there had been no report of a terrorist capture as a result of a body scanner. In a series of repeated tests, the body scanners were not able to detect a handgun hidden in an undercover agent's undergarments, but the agents responsible for monitoring the body scanners were deemed at fault for not recognizing the concealed weapon.
Millimeter wave scanners also have problems reading through sweat, in addition to yielding false positives from buttons and folds in clothing. Some countries, such as Germany, have reported a false-positive rate of 54%.
Deployment
While airport security may be the most visible and public use of body scanners, companies have opted to deploy passive employee screening to help reduce inventory shrink from key distribution centers.
The UK Border Agency (the predecessor of UK Visas and Immigration) initiated use of passive screening technology to detect illicit goods.
As of April 2009, the U.S. Transportation Security Administration began deploying scanners at airports, e.g., at Los Angeles International Airport (LAX). These machines have also been deployed in the Jersey City PATH train system, as well as at San Francisco International Airport (SFO), Salt Lake City International Airport (SLC), Indianapolis International Airport (IND), Detroit Metropolitan Wayne County Airport (DTW), Minneapolis-Saint Paul International Airport (MSP), and McCarran International Airport in Las Vegas (LAS).
Three security scanners using millimeter waves were put into use at Schiphol Airport in Amsterdam on 15 May 2007, with more expected to be installed later. The passenger's head is masked from the view of the security personnel.
Passive scanners are also currently in use at Fiumicino Airport, Italy. They will next be deployed in Malpensa Airport.
The federal courthouse in Orlando, Florida employs passive screening devices capable of recording and storing images.
Canada
In 2008, the Canadian Air Transport Security Authority held a trial of the scanners at Kelowna International Airport in Kelowna, British Columbia. Before the trial, the Office of the Privacy Commissioner of Canada (OPCC) reviewed a preliminary Privacy Impact Assessment and CATSA accepted recommendations from the OPCC. In October 2009, an Assistant Privacy Commissioner, Chantal Bernier, announced that the OPCC had tested the scanning procedure, and the privacy safeguards that CATSA had agreed to would “meet the test for the proper reconciliation of public safety and privacy”. In January 2010, Transport Canada confirmed that 44 scanners had been ordered, to be used in secondary screening at eight Canadian airports. The announcement resulted in controversies over privacy, effectiveness and whether the exemption for those under 18 would be too large a loophole.
Scanners are currently used in Saskatoon (YXE), Toronto (YYZ), Montréal (YUL), Quebec (YQB), Calgary (YYC), Edmonton (YEG), Vancouver (YVR), Halifax (YHZ), and Winnipeg (YWG).
Philippines
Ninoy Aquino International Airport in Manila installed body scanners from Smiths in all four airport terminals in 2015. The scanners are not yet in use, and are controversial among some airport security screeners.
Other applications
Scanners can be used for 3D physical measurement of body shape for applications such as apparel design, prosthetic devices design, ergonomics, entertainment and gaming.
See also
Backscatter X-ray (in security scanning applications)
Explosives trace-detection portal machine (puffer machine)
Full body scanner
Security theater
Electromagnetic radiation
Extremely high frequency
Microwave
References
External links
List of American airports that currently/will use Millimeter Wave Scanners in their passenger searches
Challenge to Airport Body Scanners
Full-Body Scanners: Full Protection from Terrorist Attacks or Full-On Violation of the Constitution?
Measuring instruments
Security technology | Millimeter wave scanner | [
"Technology",
"Engineering"
] | 2,418 | [
"Measuring instruments"
] |
16,960,154 | https://en.wikipedia.org/wiki/Paclobutrazol | Paclobutrazol (PBZ) is the ISO common name for an organic compound that is used as a plant growth retardant and triazole fungicide. It is a known antagonist of the plant hormone gibberellin, acting by inhibiting gibberellin biosynthesis, reducing internodal growth to give stouter stems, increasing root growth, causing early fruitset and increasing seedset in plants such as tomato and pepper. PBZ has also been shown to reduce frost sensitivity in plants. Moreover, paclobutrazol can be used as a chemical approach for reducing the risk of lodging in cereal crops. PBZ has been used by arborists to reduce shoot growth and shown to have additional positive effects on trees and shrubs. Among those are improved resistance to drought stress, darker green leaves, higher resistance against fungi and bacteria, and enhanced development of roots. Cambial growth, as well as shoot growth, has been shown to be reduced in some tree species.
Structure and synthesis
The first synthesis of paclobutrazol was disclosed in patents filed by an ICI group working at Jealott's Hill.
4-Chlorobenzaldehyde and pinacolone are combined in an aldol condensation to form a chalcone which is hydrogenated using Raney nickel as catalyst to give a substituted ketone. This material is brominated and the resulting compound treated with the sodium salt of 1,2,4-triazole in a nucleophilic substitution reaction. The final reduction reaction uses sodium borohydride, which in cold methanol gives almost exclusively the diastereomer pair having the absolute configuration (2R,3R) and its enantiomer (2S,3S), with only about 2% of the alternative (2R,3S) and (2S,3R) isomers. However, this pair of isomers can be produced when the reduction is carried out using butylmagnesium bromide.
In a 1984 study, ICI workers separated the individual enantiomers by chiral resolution and were able to demonstrate that only the (2R,3R) isomer displays substantial fungicidal activity, whereas the (2S,3S) isomer is responsible for the growth regulating properties. However, the commercial product (developed under the code number PP333) was the racemic material, since separation of the isomers was unnecessary when both components had utility in agriculture.
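As a small illustrative sketch (not part of the original work), and assuming both that RDKit is available and that the SMILES string below correctly encodes paclobutrazol, the four stereoisomers implied by the two stereocentres can be enumerated as follows.

from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import EnumerateStereoisomers

# Assumed SMILES for 1-(4-chlorophenyl)-4,4-dimethyl-2-(1H-1,2,4-triazol-1-yl)pentan-3-ol;
# verify against a reference source before relying on it.
smiles = "OC(C(C)(C)C)C(Cc1ccc(Cl)cc1)n1cncn1"
mol = Chem.MolFromSmiles(smiles)

# Two stereocentres (C2 and C3 of the pentanol chain) give four stereoisomers:
# the (2R,3R)/(2S,3S) pair and the (2R,3S)/(2S,3R) pair discussed above.
isomers = list(EnumerateStereoisomers(mol))
for isomer in isomers:
    print(Chem.MolToSmiles(isomer), Chem.FindMolChiralCenters(isomer))
print(len(isomers), "stereoisomers")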
Mechanism of action
Paclobutrazol is an inhibitor of cytochrome P450 enzymes, whose active sites contain a heme center that activates oxygen from the air to oxidise their substrates. The (2S,3S) isomer inhibits the enzyme ent-kaurene oxidase, which lies on the main biosynthetic pathway to gibberellins, important plant hormones. A secondary effect arising from the inhibition of ent-kaurene oxidase is that its precursor, geranylgeranyl pyrophosphate, accumulates in the plant and some of this is diverted into additional production of the phytol group of chlorophyll and the hormone abscisic acid. The latter is responsible for controlling transpiration of water through the leaves, and hence PBZ treatment can lead to better tolerance of drought conditions. The (2R,3R) isomer is a better fit to the active site of the fungal cytochrome P450 14α-demethylase. This inhibits the conversion of lanosterol to ergosterol, a component of the fungal cell membrane, which is lethal for many species. Many other azole derivatives, including propiconazole and tebuconazole, show this type of activity, so the main commercial opportunity for paclobutrazol was as a plant growth retardant, and it was first marketed by ICI in 1985 under the trade names Bonzi, Clipper, Cultar and Parlay.
Usage
As an antagonist of gibberellin biosynthesis, PBZ has a growth retardant effect on most plant species. It is absorbed by plant tissues and transported via the xylem to the growing parts, where the rate of cell division is reduced compared to untreated plants and the new cells do not elongate.
Ornamental crops
PBZ is used in horticulture, especially for glasshouse-reared perennial plants.
Trees and shrubs
The ability of PBZ to reduce the growth of trees and shrubs means that it has found use in areas where there is a need to moderate such growth, for example under electric power lines and where a right-of-way is to be maintained. A single application of the growth regulator can give season-long control.
Fruit and vegetables
PBZ is used to increase the quantity and quality of orchard fruit and of vegetables. The quality is measured by elevated amounts of carbohydrates, total soluble solids (TSS), the TSS/titratable acidity ratio and a decreased acidity. It stimulates the growth of roots and stems and maintains the number of the leaves but suppresses the height of the plants.
Turf management
PBZ has been extensively used as a means to improve the quality of turf on golf courses, where it reduces the need for mowing and by increasing chlorophyll content has the effect of greening the grass.
Cereal crops
By diverting the plant's productivity from stem elongation into seed production, PBZ is demonstrated to increase grain yields and reduce lodging, demonstrated by Kamran et al., 2017 and Tekalign 2007. The same mechanism is responsible for modern high-yield semi-dwarf crops such as the IR8 rice variety. Peng et al., 2014 also describe better lodging tolerance. They find that winter wheat undergoes reduction of internode length, thickened internodes, increased lateral growth, increased lignin synthesis enzyme activity and therefore increased lignification with application of this compound. Although this does not reduce lodging it does make lodging less harmful.
Effects on the environment
PBZ has been the subject of extensive regulatory studies, including in the European Union and the US. These data have been summarised. It was assessed as being of moderate acute toxicity, mildly irritating to skin and eyes and unlikely to be genotoxic or carcinogenic to humans. PBZ is relatively stable in water and soil. Under laboratory aerobic or anaerobic conditions, the half-life of paclobutrazol can be higher than one year. However, in a 2010 quantitative analysis, PBZ was detected in only 3 out of 440 groundwater samples from golf turf areas with a maximum concentration of 4.2 μg/L. In Europe, the highest tolerable concentration of paclobutrazol in drinking water is 66 μg/L.
As research tool
PBZ has been used as a tool to investigate the genes associated with gibberellin biosynthesis in plants. For example, the Arabidopsis allele (of the gibberellic acid interacting gene) confers resistance to paclobutrazol's damage to vegetative growth. However, in normal use, there is no selective pressure on plants to develop resistance to PBZ since it is not lethal to them.
References
External links
Fungicides
Triazoles
Plant growth regulators | Paclobutrazol | [
"Biology"
] | 1,546 | [
"Fungicides",
"Biocides"
] |
16,960,816 | https://en.wikipedia.org/wiki/Trithorax-group%20proteins | Trithorax-group proteins (TrxG) are a heterogeneous collection of proteins whose main action is to maintain gene expression. They can be categorized into three general classes based on molecular function:
histone-modifying TrxG proteins
chromatin-remodeling TrxG proteins
DNA-binding TrxG proteins,
plus other TrxG proteins not categorized in the first three classes.
Discovery
The founding member of the TrxG proteins, trithorax (trx), was discovered around 1978 by Philip Ingham during his doctoral research in the laboratory of J.R.S. Whittle at the University of Sussex. Histone-lysine N-methyltransferase 2A is the human homolog of trx.
The table contains names of Drosophila TrxG members. Homologs in other species may have different names.
Function
Trithorax-group proteins typically function in large complexes formed with other proteins. The complexes formed by TrxG proteins are divided into two groups: histone-modifying complexes and ATP-dependent chromatin-remodeling complexes. The main function of TrxG proteins, along with polycomb group (PcG) proteins, is regulating gene expression. Whereas PcG proteins are typically associated with gene silencing, TrxG proteins are most commonly linked to gene activation. The trithorax complex activates gene transcription by inducing trimethylation of lysine 4 of histone H3 (H3K4me3) at specific sites in chromatin recognized by the complex. Ash1 is involved in H3K36 methylation. The trithorax complex also interacts with CBP (CREB-binding protein), an acetyltransferase that acetylates H3K27. This gene activation is reinforced by acetylation of histone H4. The actions of TrxG proteins are often described as 'antagonistic' to PcG protein function. Aside from gene regulation, evidence suggests TrxG proteins are also involved in other processes, including apoptosis, cancer, and stress responses.
Role in development
During development, TrxG proteins maintain activation of required genes, particularly the Hox genes, after maternal factors are depleted. This is accomplished by preserving the epigenetic marks, specifically H3K4me3, established by maternally supplied factors. TrxG proteins are also implicated in X-chromosome inactivation, which occurs during early embryogenesis. It is unclear whether TrxG activity is required in every cell during the entire development of an organism or only during certain stages in certain cell types.
See also
HIstome
Histone acetyltransferase
Histone deacetylases
Histone methyltransferase
Histone-Modifying Enzymes
Nucleosome
PRMT4 pathway
References
External links
The Polycomb and Trithorax page of the Cavalli lab at IGH (Institut de Génétique Humaine) This page contains useful information on Polycomb and trithorax proteins, in the form of an introduction, links to published reviews, list of Polycomb and trithorax proteins, illustrative power point slides and a link to a genome browser showing the genome-wide distribution of these proteins in Drosophila melanogaster.
The Interactive Fly – Society for Developmental Biology
DNA-binding proteins
Molecular genetics
Drosophila melanogaster genes | Trithorax-group proteins | [
"Chemistry",
"Biology"
] | 710 | [
"Molecular genetics",
"Molecular biology"
] |
16,961,399 | https://en.wikipedia.org/wiki/Velocity%20overshoot | Velocity overshoot is a physical effect that occurs when the transit time of charge carriers between terminals is shorter than the time required for the emission of an optical phonon. Because the carriers cannot fully relax their energy through phonon emission within such a short transit, their velocity can exceed the saturation velocity by up to a factor of three, which leads to faster switching of field-effect and bipolar transistors. The effect is noticeable in ordinary field-effect transistors with gate lengths shorter than 100 nm.
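A rough order-of-magnitude estimate (assuming a representative saturation velocity of about 10^7 cm/s, typical of silicon) shows why such short gates are required:

\tau_{\mathrm{transit}} = \frac{L}{v_{\mathrm{sat}}} \approx \frac{100\,\mathrm{nm}}{10^{7}\,\mathrm{cm/s}} = \frac{10^{-7}\,\mathrm{m}}{10^{5}\,\mathrm{m/s}} = 1\,\mathrm{ps}

Only when this transit time becomes comparable to or shorter than the optical-phonon emission time can carriers cross the channel before their velocity relaxes to the saturation value.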
Ballistic collection transistor
A device intentionally designed to benefit from velocity overshoot is the ballistic collection transistor (not to be confused with the ballistic deflection transistor).
See also
Ballistic conduction
References
Charge carriers | Velocity overshoot | [
"Physics",
"Materials_science"
] | 132 | [
"Physical phenomena",
"Materials science stubs",
"Charge carriers",
"Electrical phenomena",
"Condensed matter physics",
"Condensed matter stubs"
] |
18,156,469 | https://en.wikipedia.org/wiki/Repulsive%20guidance%20molecule | Repulsive guidance molecules (RGMs) are members of a three gene family (in vertebrates) composed of RGMa, RGMb, and RGMc (also called hemojuvelin).
RGMa has been implicated to play an important role in the developing brain and in the scar tissue that forms after a brain injury. For example, RGMa helps guide retinal ganglion cell (RGC) axons to the tectum in the midbrain. It has also been demonstrated that after induced spinal cord injury RGMa accumulates in the scar tissue around the lesion. Further research has shown that RGMa is an inhibitor of axonal outgrowth. Taken together, these findings highlight the importance of RGMa in axonal guidance and outgrowth.
Family members
References
Proteins
Developmental biology | Repulsive guidance molecule | [
"Chemistry",
"Biology"
] | 172 | [
"Biomolecules by chemical classification",
"Behavior",
"Developmental biology",
"Reproduction",
"Molecular biology",
"Proteins"
] |
18,161,071 | https://en.wikipedia.org/wiki/High-performance%20thin-layer%20chromatography | High-performance thin-layer chromatography (HPTLC) is an extension of thin-layer chromatography (TLC) that offers robustness, simplicity, speed, and efficiency in the quantitative analysis of compounds. Compared with conventional TLC, it improves compound resolution by employing higher-quality TLC plates with finer particle sizes in the stationary phase. The separation can be further refined through repeated plate development using a multiple development device. As a result, HPTLC provides superior resolution and lower limits of detection (LODs).
Instrumentation
Advantages of HPTLC:
Provides straightforward information about effects arising from individual compounds in complex or natural samples separated in parallel.
Combines chromatographic separation with effect-directed detection using enzymatic or biological assays.
Helps to select important compounds from a sample for further characterization using high-resolution mass spectrometry.
Offers unique benefits such as super-hyphenation, minimum sample preparation requirements, detection of multi-modulating compounds, and distinguishing agonistic versus antagonistic effects.
Mode
HPTLC can be run in three modes: linear, circular, and anticircular. The anticircular mode is the fastest in both theory and practice. It achieves separation by allowing the mobile phase to enter the plate layer along an outer circular path and flow toward the center at a nearly constant speed. This approach maximizes sample capacity while minimizing time, layer, and mobile phase consumption, making it the most cost-effective HPTLC technique. The narrow spot-path unique to anticircular HPTLC facilitates automated quantification. Compared with the linear and circular modes, the anticircular mode shows superior separation and significantly higher sensitivity, especially at higher Rf values.
Methodology
To begin HPTLC, a stationary phase must be selected that can separate the different compounds within a mixture. Around 90% of all pharmaceutical separations are performed on normal-phase silica gel; however, other stationary phases can be used, such as alumina for samples with dissociating compounds and cellulose for ionic compounds. The reverse-phase HPTLC method (similar in methodology to reverse-phase TLC) is used for compounds with high polarity. After selection of the stationary phase, plates are generally washed with methanol and dried in an oven to remove excess solvent.
Selection for the mobile phase is one of the most important processes of HPTLC and follows a 'trial and error' pathway. However, the 'PRISMA' system stands as a guideline for finding the optimal mobile phase. The mobile phase is dependent on the absorptivity of the stationary phase and the composition of the compound of interest. The compound is first tested with solutions such as diethyl ether, ethanol, dichloromethane, chloroform for normal phase HPTLC, or solutions such as methanol, acetonitrile, and tetrahydrofuran for reverse phase HPTLC. The retardation factors (Rf) of the compounds with the selected solvent are then analyzed and the solvent that gives the largest Rf is chosen to be the mobile phase for the compound. Then, the mobile solvent strength is tested against hexane (for normal HPTLC) and water (for reverse-phase HPTLC) to determine the need for adjustment.
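A minimal sketch of this selection rule, using hypothetical plate readings (the solvent names come from the list above, but the distances are invented for illustration):

# Rf = distance travelled by the compound / distance travelled by the solvent front.
def retardation_factor(compound_distance_mm, solvent_front_mm):
    return compound_distance_mm / solvent_front_mm

# Hypothetical readings for one compound developed in several candidate solvents:
# (compound distance, solvent front distance) in millimetres.
trials = {
    "diethyl ether": (21.0, 70.0),
    "ethanol": (34.0, 70.0),
    "dichloromethane": (12.0, 70.0),
    "chloroform": (9.0, 70.0),
}

rf_values = {solvent: retardation_factor(*d) for solvent, d in trials.items()}
best = max(rf_values, key=rf_values.get)
print(rf_values)
print("candidate mobile phase:", best)   # the solvent giving the largest Rf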
Notable HPTLC devices such as the Linomat 5 and the Automatic TLC Sampler 4 (ATS 4) by CAMAG function very similarly, both using an automated 'spray-on' sample application technique. This automated 'spray-on' technique helps overcome the uncertainty in droplet size and position when the sample is applied to the TLC plate by hand. Additionally, automation provides high resolution and narrow bands since the solvent evaporates immediately as the sample makes contact with the plate. One approach to automation has been the use of piezoelectric devices and inkjet printers for applying the sample. Alternatively, the Nanomat 4 by CAMAG is manually operated, with the sample applied as spots using a capillary pipette.
Upon chromatographic detection, HPTLC plates are usually developed in saturated twin-trough chambers with filter paper for optimal outcomes. However, flat-bottom chambers and horizontal-development chambers are also used for specific compounds. A general mechanism for the HPTLC device goes as follows. A fitted filter paper is placed in the rear trough of the chamber and the mobile phase is poured through the rear trough to ensure complete solvent absorption of the filter paper. The chamber is then tilted to ~45° so both troughs are equal in solvent volume and left alone to equilibrate for ~20 mins. Finally, the HPTLC plate is placed in the chamber to develop. Between each sample reading, the mobile phase and filter paper are changed to ensure the best outcomes.
The spot capacity (analogous to peak capacity in HPLC) can be increased by developing the plate with two different solvents, using two-dimensional chromatography. The procedure begins with development of a sample loaded plate with first solvent. After removing it, the plate is rotated 90° and developed with a second solvent.
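In an idealized picture (assuming the two developments separate by independent mechanisms), the spot capacity of a two-dimensional run approaches the product of the two one-dimensional capacities:

n_{\mathrm{2D}} \approx n_{1} \times n_{2}

so two developments that each resolve on the order of 10 spots give a combined capacity on the order of 100.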
Applications
HPTLC finds extensive application in various fields, including pharmaceutical industries, clinical chemistry, forensic chemistry, biochemistry, cosmetology, food and drug analysis, environmental analysis, and more, owing to its numerous advantages. It distinguishes itself by being the only chromatographic method capable of presenting results as images and offers simplicity, cost-effectiveness, parallel analysis of samples, high sample capacity, rapid results, and the option for multiple detection methods.
Le Roux's research team assessed HPTLC for determining salbutamol serum levels in clinical trials and concluded that it is a suitable method for analyzing serum samples.
HPTLC has also been used successfully in the separation of various lipid subclasses, with reproducible and promising results obtained for 20 different lipid subclasses. Numerous reports related to clinical medicine studies have been published in various journals. As a result, HPTLC is now strongly recommended for drug analysis in serum and other tissues.
References
Chromatography | High-performance thin-layer chromatography | [
"Chemistry"
] | 1,302 | [
"Chromatography",
"Separation processes"
] |
18,162,840 | https://en.wikipedia.org/wiki/CLAS%20detector | The CEBAF Large Acceptance Spectrometer (CLAS) is a nuclear and particle physics detector located in experimental Hall B at Jefferson Laboratory in Newport News, Virginia, United States. It is used to study the properties of nuclear matter by the CLAS Collaboration, a group of over 200 physicists from many countries around the world.
The 0.5 to 12.0 GeV electron beam from the accelerator of Jefferson Laboratory is brought into "Hall B", the experimental hall that houses the CLAS system. Electrons or photons in the incoming beam collide with the nuclei of atoms in the physics "target" located at the center of CLAS. These collisions generally produce new particles, often after the target nucleons (protons and neutrons) are briefly excited to heavier-mass versions of the familiar protons and neutrons. A whole variety of intermediate-mass short-lived particles called "mesons" can be created. The scattered electron and the longer-lived produced particles travel through the CLAS detector, where they are measured. Particle physicists use these measurements to deduce the underlying structure of protons and neutrons and to better understand the interactions that create these new particles.
The CLAS detector system was operational from 1998 until May 2012. From that time onward, analysis of archived data continued for some years, as can be traced in the publications. After 2012, a similar but new system, CLAS12, was constructed; it began operations with particle beams in 2017.
Overview of Detector Function
The CLAS detector was notable among devices in the area of hadronic particle physics in that it had a very large acceptance; in other words, it measured the momentum and angles of almost all of the particles produced in the electron-proton collisions. Roughly spherical, the detector measured 30 feet across. It surrounded the physics target, which was typically a small cylinder of liquid hydrogen (hydrogen's nucleus is composed of a single proton) or deuterium (with a nucleus consisting of a neutron and a proton).
Each particle-target collision is called an "event". An elaborate data acquisition system records each event measured by the particle detectors, up to several thousand events per second on average. This data is then transferred to a "farm" of computing processors. Teams of physicists analyze the events, looking for new kinds of particles or information related to the underlying structure of the proton.
Detector Description
A diagram of the CLAS detector is shown in the Figure, as well as a photograph of the detector when it was partially pulled open for maintenance. The physics target is at the center. Charged particles are detected in almost all directions, excluding the very forward (beam) and backward (beam) directions, and also excluding azimuthal directions occupied by six toroidal magnetic field coils.
The detector was designed in a nested form, with successive layers of particle detectors to either track particle paths or record particle flight times. The toroidal magnetic field causes charged particles from the target to bend in arcs either toward or away from the beam line. Particles leaving the target first pass through a timing counter to register the beginning of their trajectories. The particles then traverse three packages of drift chambers which are used to track their paths through the magnetic field, and thereby allow determination of their momentum.
Outside the magnetic field, a layer of timing detectors measures the time of passage of the particles at a distance of about four meters from the target. Dividing the path length of a particle track by the time of travel gives the speed. Knowing the momentum and speed of a particle leads to its identification via its mass. The CLAS detector also contains additional detectors in the forward direction (Cherenkov counters and Electromagnetic Calorimeters) whose purpose is to distinguish electrons from other types of particles such as pions.
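The identification step described above reduces to a short relativistic calculation. The sketch below is illustrative only (the function name, units, and sample numbers are assumptions, not part of the CLAS reconstruction software): it converts a measured path length and flight time into a velocity, then combines it with the measured momentum to estimate the particle mass in natural units.

```python
import math

C = 0.299792458  # speed of light in m/ns

def particle_mass_gev(momentum_gev, path_m, tof_ns):
    """Estimate a particle's mass from momentum and time of flight.

    beta = v/c follows from path length / flight time; the relativistic
    relation p = gamma * m * beta (with c = 1) then gives
    m = p * sqrt(1/beta^2 - 1).
    """
    beta = path_m / (tof_ns * C)
    if beta >= 1.0:  # finite timing resolution can push beta slightly above 1
        return 0.0
    return momentum_gev * math.sqrt(1.0 / beta**2 - 1.0)

# A 1 GeV/c track over 4 m arriving after ~18.3 ns is consistent with a proton
# (~0.94 GeV), while ~13.5 ns would point to a pion (~0.14 GeV).
print(round(particle_mass_gev(1.0, 4.0, 18.3), 2))
```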
Physics Program
Two categories of experiments were carried out with CLAS: using electrons in the beam and using so-called real photons created using the electron beam. Experiments using electron scattering primarily probe the structure of protons and their excitations at various sub-nuclear "length scales". Experiments using real photon beams primarily probe the production and decay of mesons and excited baryons.
A list of the scientific and technical papers resulting from the CLAS program is linked at the bottom of this article. The range of questions addressed is broad, as seen in the following list of topics given in no particular order:
Inelastic electron scattering on the nucleon to study the creation and decay of nucleon excited states
Hyperon photo- and electro-production, exploring the spectrum of ground state and excited strange baryons
Meson photoproduction off the nucleon, searching for mesonic states not accounted for in the quark model
Electro-disintegration of nuclear targets to study the correlations among nucleons within the nucleus
Deep-inelastic electron scattering with meson production using polarized beam and/or target to study the full 3D distribution of quarks inside the nucleon
Collaborating Institutions (Cumulative since 1989)
Arizona State University - Tempe, AZ
California State University - Dominguez Hills, CA
Carnegie Mellon University - Pittsburgh, PA
Catholic University of America - Washington, DC
CEA-Saclay - Gif-sur-Yvette, France
Christopher Newport University, Newport News, VA
Florida International University - Miami, FL
Florida State University - Tallahassee, FL
George Washington University - Washington, DC
Idaho State University - Pocatello, ID
INFN, Laboratori Nazionali di Frascati - Frascati, Italy
INFN, Sezione di Genova - Genova, Italy
Institut de Physique Nucléaire - Orsay, France
ITEP - Moscow, Russia
James Madison University - Harrisonburg, VA
Kyungpook University - Daegu, South Korea
Massachusetts Institute of Technology - Cambridge, MA
Moscow State University - Moscow, Russia
Mississippi State University - Miss State, MS
Norfolk State University - Norfolk, VA
North Carolina Agricultural and Technical State University - Greensboro, NC
Ohio University - Athens, OH
Old Dominion University - Norfolk, VA
Rensselaer Polytechnic Institute - Troy, NY
Rice University - Houston, TX
The College of William and Mary - Williamsburg, VA
Thomas Jefferson National Accelerator Facility - Newport News, VA
Union College - Schenectady, NY
Universidad Técnica Federico Santa María - Valparaíso, Chile
University of California Los Angeles - Los Angeles, CA
University of Connecticut - Storrs, CT
University of Edinburgh - Edinburgh, Scotland
University of Glasgow - Glasgow, Scotland
University of Massachusetts - Amherst, MA
University of New Hampshire - Durham, NH
Université Paris-Sud 11 - Orsay, France
University of Richmond - Richmond, VA
University of South Carolina - Columbia, SC
University of Virginia - Charlottesville, VA
Virginia Polytechnic Institute - Blacksburg, VA
Yerevan Physics Institute - Yerevan, Armenia
Similar Facilities World Wide
Mainz Microtron
LEPS
HERA
HERMES
External links
Official Hall-B web page at Jefferson Lab
List of Publications
The CEBAF large acceptance spectrometer (CLAS) technical description
A list of CLAS publications from INSPIRE-HEP
A list of CLAS publications from the JLab Hall B website
Spectrometers
Particle experiments | CLAS detector | [
"Physics",
"Chemistry"
] | 1,468 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
15,230,235 | https://en.wikipedia.org/wiki/Material%20handling | Material handling involves short-distance movement within the confines of a building or between a building and a transportation vehicle. It uses a wide range of manual, semi-automated, and automated equipment and includes consideration of the protection, storage, and control of materials throughout their manufacturing, warehousing, distribution, consumption, and disposal. Material handling can be used to create time and place utility through the handling, storage, and control of waste, as distinct from manufacturing, which creates form utility by changing the shape, form, and makeup of material.
Role
Material handling plays an important role in manufacturing and logistics. Almost every item of physical commerce has been transported on a conveyor or lift truck or another type of material handling equipment in manufacturing plants, warehouses, and retail stores. While material handling is usually required as part of every production worker's job, over 650,000 people in the U.S. work as dedicated "material moving machine operators" and have a median annual wage of $31,530 (May 2012). These operators use material handling equipment to transport various goods in a variety of industrial settings including moving construction materials around building sites or moving goods onto ships.
Design of material handling systems
Material handling is integral to the design of most production systems since the efficient flow of material between the activities of a production system is heavily dependent on the arrangement (or layout) of the activities. If two activities are adjacent to each other, then material might easily be handed from one activity to another. If activities are in sequence, a conveyor can move the material at low cost. If activities are separated, more expensive industrial trucks or overhead conveyors are required for transport. The high cost of using an industrial truck for material transport is due to both the labor costs of the operator and the negative impact on the performance of a production system (e.g., increased work in process) when multiple units of material are combined into a single transfer batch in order to reduce the number of trips required for transport.
The unit load concept
A unit load is either a single unit of an item, or multiple units so arranged or restricted that they can be handled as a single unit and maintain their integrity. Although granular, liquid, and gaseous materials can be transported in bulk, they can also be contained into unit loads using bags, drums, and cylinders. Advantages of unit loads are that more items can be handled at the same time (thereby reducing the number of trips required, and potentially reducing handling costs, loading and unloading times, and product damage) and that it enables the use of standardized material handling equipment. Disadvantages of unit loads include the negative impact of batching on production system performance, and the cost of returning empty containers/pallets to their point of origin.
In-process handling
Unit loads can be used both for in-process handling and for distribution (receiving, storing, and shipping). Unit load design involves determining the type, size, weight, and configuration of the load; the equipment and method used to handle the load; and the methods of forming (or building) and breaking down the load. For in-process handling, unit loads should not be larger than the production batch size of parts in process. Large production batches (used to increase the utilization of bottleneck activities) can be split into smaller transfer batches for handling purposes, where each transfer batch contains one or more unit loads, and small unit loads can be combined into a larger transfer batch to allow more efficient transport.
Distribution
Selecting a unit load size for distribution can be difficult because containers/pallets are usually available only in standard sizes and configurations; truck trailers, rail boxcars, and airplane cargo bays are limited in width, length, and height; and the number of feasible container/pallet sizes for a load may be limited due to the existing warehouse layout and storage rack configurations and customer package/carton size and retail store shelf restrictions. Also, the practical size of a unit load may be limited by the equipment and aisle space available and the need for safe material handling.
Health and safety
Manual material handling work contributes to a large percentage of the over half a million cases of musculoskeletal disorders reported annually in the United States. Musculoskeletal disorders often involve strains and sprains to the lower back, shoulders, and upper limbs. They can result in protracted pain, disability, medical treatment, and financial stress for those afflicted with them, and employers often find themselves paying the bill, either directly or through workers’ compensation insurance, at the same time they must cope with the loss of the full capacity of their workers.
Scientific evidence shows that effective ergonomic interventions can lower the physical demands of MMH work tasks, thereby lowering the incidence and severity of the musculoskeletal injuries they can cause. Their potential for reducing injury related costs alone make ergonomic interventions a useful tool for improving a company’s productivity, product quality, and overall business competitiveness. But very often productivity gets an additional and solid shot in the arm when managers and workers take a fresh look at how best to use energy, equipment, and exertion to get the job done in the most efficient, effective, and effortless way possible. Planning that applies these principles can result in big wins for all concerned.
Types
Manual handling
Manual handling refers to the use of a worker’s hands to move individual containers by lifting, lowering, filling, emptying, or carrying them. It can expose workers to physical dangers that can lead to injuries: a large percentage of the over half a million cases of musculoskeletal disorders reported in the U.S. each year arise from manual handling, and often involve strains and sprains to a person's lower back, shoulders and upper limbs.
Ergonomic improvements can be used to modify manual handling tasks to reduce injury. These improvements can include reconfiguring the task and using positioning equipment like lift/tilt/turn tables, hoists, balancers, and manipulators to reduce reaching and bending. The NIOSH (National Institute for Occupational Safety and Health) 1991 Revised Lifting Equation can be used to evaluate manual lifting tasks. Under ideal circumstances, the maximum recommended weight for manual lifting to avoid back injuries is 51 lb (23.13 kg). Using the exact conditions of the lift (height, distance lifted, weight, position of weight relative to body, asymmetrical lifts, and objects that are difficult to grasp), six multipliers are used to reduce the maximum recommended weight for less than ideal lifting tasks.
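As a worked illustration of the multiplier scheme just described, the sketch below encodes the metric form of the 1991 NIOSH equation as it is commonly stated; the frequency and coupling multipliers come from NIOSH lookup tables and are simply passed in here, and the formulas should be checked against the official applications manual before any real use.

```python
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Recommended Weight Limit (kg): RWL = LC * HM * VM * DM * AM * FM * CM.

    LC is the 23 kg load constant; HM, VM, DM, AM are the horizontal,
    vertical, travel-distance, and asymmetry multipliers; FM and CM
    (frequency and coupling) are taken from NIOSH tables and passed in.
    """
    LC = 23.0
    HM = min(1.0, 25.0 / max(h_cm, 25.0))           # horizontal distance of hands from ankles
    VM = max(0.0, 1.0 - 0.003 * abs(v_cm - 75.0))   # vertical height of hands at lift origin
    DM = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))     # vertical travel distance of the lift
    AM = max(0.0, 1.0 - 0.0032 * a_deg)             # asymmetry (twisting) angle in degrees
    return LC * HM * VM * DM * AM * fm * cm

# Ideal lift: close to the body, hands at 75 cm, short travel, no twisting, good grip.
print(round(niosh_rwl(h_cm=25, v_cm=75, d_cm=25, a_deg=0), 1))  # 23.0 kg, i.e. the ~51 lb ceiling
```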
Automated handling
Whenever technically and economically feasible, equipment can be used to reduce and sometimes replace the need to manually handle material. Most existing material handling equipment is only semi-automated because a human operator is needed for tasks like loading/unloading and driving that are difficult and/or too costly to fully automate. However, ongoing advances in sensing, machine intelligence, and robotics have made it possible to fully automate an increasing number of handling tasks. A rough guide to determine how much can be spent for automated equipment that would replace one material handler is to consider that, with benefits, the median moving machine operator costs a company $45,432 per year. Assuming a real interest rate of 1.7% and a service life of 5 years with no adoption/adaptation cost, no learning cost, no training cost, and no operating cost for equipment with no salvage value, a company should be willing to pay up to
to purchase automated equipment to replace one worker. In many cases, automated equipment is not as flexible as a human operator, both with respect to not being able to do a particular task as well as a human and not being able to be as easily redeployed to do other tasks as needs change.
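The figure this paragraph alludes to is the present value of a five-year annuity at the stated real interest rate. A minimal sketch of that arithmetic, using only the assumptions listed above (the printed result is this sketch's own calculation, not a figure quoted by the article):

```python
def max_automation_budget(annual_cost, real_rate, years):
    """Present value of an annuity: the most one could justify paying today
    for equipment that replaces an annual labor cost, ignoring adoption,
    training, and operating costs and assuming zero salvage value."""
    return annual_cost * (1 - (1 + real_rate) ** -years) / real_rate

# Assumptions stated above: $45,432/yr labor cost, 1.7% real rate, 5-year life.
print(round(max_automation_budget(45432, 0.017, 5)))
```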
Benefits of materials handling
Better efficiency: Material handling equipment helps streamline the movement of products. Compared to manual handling, materials handling equipment greatly saves time and effort.
Improved safety: Manual handling of goods carries many injury risks, e.g., falls from heights. When unit load formation equipment is factored in, these risks are greatly reduced.
Cost savings: Materials handling equipment is designed to handle materials and products in a specific way, minimizing the risk of damage, therefore, saving costs that could have been spent on damaged goods.
Flexibility: Materials handling is customizable to meet the specific needs of different industries and operations, therefore offering a high degree of flexibility and versatility.
See also
Automated storage and retrieval system
Automation
Bulk material handling
College-Industry Council on Material Handling Education
Conveyor system
Human factors and ergonomics
Industrial robot
Material handling equipment
Warehouse
Unit load
Notes and references
Further reading
Apple, J.M., 1972, Material Handling System Design, New York: Ronald.
Bartholdi, J.J., III, and Hackman, S.T., 2014, Warehouse & Distribution Science, Release 0.96.
Frazelle, E., 2002, World-Class Warehousing and Material Handling, New York: McGraw-Hill.
Heragu, S.S., 2008, Facilities Design, 3rd Ed., CRC Press.
Kulwiec, R.A., Ed., 1985, Materials Handling Handbook, 2nd Ed., New York: Wiley.
Mulcahy, D.E., 1999, Materials Handling Handbook, New York: McGraw-Hill.
External links
College Industry Council on Material Handling Education (CICMHE)
European Federation of Materials Handling
Material Handling and Logistics U.S. Roadmap
Material Handling Equipment Distributors Association
Material Handling Equipment Taxonomy
Material Handling Industry
Material Handling Web Portal | Material handling | [
"Physics"
] | 1,961 | [
"Materials",
"Material handling",
"Matter"
] |
15,233,345 | https://en.wikipedia.org/wiki/Square%20pyramidal%20molecular%20geometry | Square pyramidal geometry describes the shape of certain chemical compounds with the formula where L is a ligand. If the ligand atoms were connected, the resulting shape would be that of a pyramid with a square base. The point group symmetry involved is of type C4v. The geometry is common for certain main group compounds that have a stereochemically-active lone pair, as described by VSEPR theory. Certain compounds crystallize in both the trigonal bipyramidal and the square pyramidal structures, notably .
As a transition state in Berry pseudorotation
As a trigonal bipyramidal molecule undergoes Berry pseudorotation, it proceeds via an intermediate stage with square pyramidal geometry. Thus, even though the geometry is rarely seen as the ground state, it is accessed by a low-energy distortion from a trigonal bipyramid.
Pseudorotation also occurs in square pyramidal molecules. Molecules with this geometry, as opposed to trigonal bipyramidal, exhibit heavier vibration. The mechanism used is similar to the Berry mechanism.
Examples
Some molecular compounds that adopt square pyramidal geometry are XeOF4, and various halogen pentafluorides (XF5, where X = Cl, Br, I). Complexes of vanadium(IV), such as vanadyl acetylacetonate, [VO(acac)2], are square pyramidal (acac = acetylacetonate, the deprotonated anion of acetylacetone (2,4-pentanedione)).
See also
AXE method
Square pyramid
Hypervalent molecule
Molecular geometry
References
External links
Chem| Chemistry, Structures, and 3D Molecules
Indiana University Molecular Structure Center
Interactive molecular examples for point groups
Molecular Modeling
Animated Trigonal Planar Visual
Molecular geometry | Square pyramidal molecular geometry | [
"Physics",
"Chemistry"
] | 369 | [
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Matter"
] |
15,234,661 | https://en.wikipedia.org/wiki/Nitrogen%20and%20Non-Protein%20Nitrogen%27s%20effects%20on%20Agriculture | Nitrogen's effects on agriculture profoundly influence crop growth, soil fertility, and overall agricultural productivity, while also exerting significant impacts on the environment.
Nitrogen is an element vital to many environmental processes. Nitrogen plays a vital role in the nitrogen cycle, a complex biogeochemical process that involves the transformation of nitrogen between different chemical forms and its movement through various environmental compartments such as the atmosphere, soil, water, and living organisms. In its natural state, nitrogen exists primarily as a gas (N2) in the atmosphere, making up about 78% of the air we breathe. Nitrogen finds extensive use across various sectors, primarily agriculture and transportation. Its versatility stems from its ability to form numerous compounds, each with unique properties and applications.
Impacts on agriculture
Nitrogen is a fundamental nutrient in agriculture, playing a crucial role in plant growth and development. It is an essential component of proteins, enzymes, chlorophyll, and nucleic acids, all of which are essential for various metabolic processes within plants. When discussing the application of nitrogen in agriculture, it is essential to consider the sources of nitrogen used. Synthetic nitrogen fertilizers, such as ammonium nitrate and urea, are commonly applied to crops to replenish soil nitrogen levels and enhance crop productivity. These fertilizers provide readily available nitrogen for plant uptake, thereby promoting vigorous vegetative growth and improving yields. However, the excessive or inefficient use of nitrogen fertilizers can lead to environmental problems such as nitrogen leaching, runoff, and emissions of nitrogen oxides (NOx). Nitrogen leaching occurs when nitrogen compounds, primarily nitrates, move through the soil profile and enter groundwater, potentially contaminating drinking water sources. To mitigate these environmental impacts, various nitrogen management strategies are employed in agriculture. Soil testing is an essential practice that helps farmers assess the nutrient status of their soils and determine appropriate fertilizer application rates. Nutrient management plans based on soil test results help optimize fertilizer use efficiency while minimizing nitrogen losses to the environment.
Effects on water quality
Water quality is greatly influenced by nitrogen, which also has an impact on ecosystems in settings that have been modified by humans. Even though nitrogen is a necessary element for life, too much of it in water can have negative effects on aquatic ecosystems and endanger human health. Agricultural runoff, where fertilizers containing nitrogen compounds can seep into rivers, lakes, and groundwater, is one of the main sources of nitrogen in water. Urban areas also release wastewater and add nitrogen through stormwater runoff. The process of eutrophication, in which an abundance of nutrients encourages the growth of algae and other aquatic plants, can be brought on by elevated nitrogen levels in water.
Non-protein nitrogen
Non-protein nitrogen (or NPN) is a term used in animal nutrition to refer collectively to components such as urea, biuret, and ammonia, which are not proteins but can be converted into proteins by microbes in the ruminant stomach. Due to their lower cost compared to plant and animal proteins, their inclusion in a diet can result in economic gain, but at too high levels cause a depression in growth and possible ammonia toxicity, as microbes convert NPN to ammonia first before using that to make protein. NPN can also be used to artificially raise crude protein values, which are measured based on nitrogen content, as protein is about 16% nitrogen and the only major component of most food that contains nitrogen is protein. The source of NPN is typically a chemical feed additive, or sometimes chicken waste, and cattle manure. However, excessive intake of NPN can have adverse effects on animal health and productivity, as well as environmental implications.
Agricultural effects
In ruminant nutrition, NPN sources such as urea are commonly used as supplements to provide additional nitrogen for microbial protein synthesis in the rumen. Microbes in the rumen can utilize NPN to synthesize microbial protein, which is subsequently digested and absorbed by the animal. This microbial protein serves as a source of amino acids for the animal, supporting growth and productivity. However, excessive consumption of NPN can lead to toxicity issues in ruminants. High levels of ammonia resulting from the breakdown of NPN can disrupt rumen pH balance and microbial activity, leading to conditions such as rumen acidosis and ammonia toxicity. Furthermore, excessive excretion of nitrogen in urine and feces from animals consuming diets high in NPN can contribute to nitrogen pollution in the environment. Nitrogen runoff from agricultural operations can lead to eutrophication of water bodies, harmful algal blooms, and degradation of aquatic ecosystems.
See also
Nitrogen cycle
Cyanamide
Cyanuric acid
Melamine
1,3,5-Triazine
Triazines
Chinese protein export contamination
References
Agricultural chemicals
Fertilizers
Adulteration
Food safety
Nitrogen
Nitrogen cycle | Nitrogen and Non-Protein Nitrogen's effects on Agriculture | [
"Chemistry"
] | 997 | [
"Fertilizers",
"Adulteration",
"Drug safety",
"Nitrogen cycle",
"Soil chemistry",
"Metabolism"
] |
15,235,947 | https://en.wikipedia.org/wiki/Overcurrent | In an electric power system, overcurrent or excess current is a situation where a larger than intended electric current exists through a conductor, leading to excessive generation of heat, and the risk of fire or damage to equipment. Possible causes for overcurrent include short circuits, excessive load, incorrect design, an arc fault, or a ground fault. Fuses, circuit breakers, and current limiters are commonly used overcurrent protection (OCP) mechanisms to control the risks.
Circuit breakers, relays, and fuses protect circuit wiring from damage caused by overcurrent.
Overcurrent in an electrical grid
Overcurrent capabilities of electrical generators are essential for the power system operations. Lack of overcurrent capability (low short circuit ratio) of a weak grid creates a multitude of problems, including:
transients during the large load changes will cause large variations of the grid voltage, causing problems with the loads (e.g., some motors might not be able to start in the undervoltage condition);
the grid protection devices are designed to be triggered by a sufficient level of overcurrent. In a weak system the short circuit current might be hard to distinguish from a normal transient overcurrent encountered during the load changes;
during a black start operation after a failure, large inrush current might be needed to energize the system components. For example, if some loads in a weak system remain connected, an inverter-based resource might not be able to start.
Related standards
IEC 60364-4-43: Electrical installations of buildings – Part 4-43: Protection for safety – Protection against overcurrent
See also
Current limiting
Electrical fault
Electrical safety
Overvoltage
References
Sources
Electrical systems | Overcurrent | [
"Physics"
] | 361 | [
"Physical systems",
"Electrical systems"
] |
7,828,928 | https://en.wikipedia.org/wiki/Anathyrosis | Anathyrosis is the technical word for the ancient method of dressing the joints of stone blocks in dry stone construction, i. e., masonry without mortar, which was then commonly used. Because the stone blocks are set in immediate contact with each other without gaps, their joints must be exactly dressed. In order to reduce the time required to sculpt such joints, the faces of the stones to be joined were finished and smoothed only in narrower margins on the sides and top of the faces to be joined, while the interior of adjoining faces were recessed. The smoothed margins of such a face together resemble a doorframe, and the word, created by the ancients, is allusive. Thyra (θύρα) is Greek for “door”, and thus “door framing” is anathyrosis.
This technique was frequently used to construct walls, including in ashlar form, and was used to join the drums of columns. Close examination of where this technique was applied to a specific stone block since removed or fallen away can help locate its placement in the edifice or determine whether it was joined to other blocks.
References
Robertson, D. S. 1929. Handbook of Greek and Roman Architecture. Cambridge: Cambridge University Press.
Rykwert, Joseph 1996. The Dancing Column. The MIT Press.
External links
Architectural elements
Ancient Greek architecture
Stonemasonry | Anathyrosis | [
"Technology",
"Engineering"
] | 279 | [
"Building engineering",
"Construction",
"Stonemasonry",
"Architectural elements",
"Components",
"Architecture"
] |
7,831,083 | https://en.wikipedia.org/wiki/Extensometer | An extensometer is a device that is used to measure changes in the length of an object. It is useful for stress-strain measurements and tensile tests. Its name comes from "extension-meter". It was invented by Charles Huston who described it in an article in the Journal of the Franklin Institute in 1879. Huston later gave the rights to Fairbanks & Ewing, a major manufacturer of testing machines and scales.
Types
There are two main types of extensometers: contact and non-contact.
Contact
Contact extensometers have been used for many years and are also subdivided into two further categories. The first type of contact extensometer is called a clip-on extensometer. These devices are used for applications where high precision strain measurement is required (most ASTM based tests). They come in many configurations and can measure displacements from very small to relatively large (less than a mm to over 100 mm). They have the advantage of lower cost and ease of use, however they can influence small / delicate specimens.
For automated testing, clip-on devices have been largely replaced by digital "sensor arm" extensometers. These can be applied to the specimen automatically by a motorized system and produce much more repeatable results than the traditional clip-on devices. They are counterbalanced and so have a negligible effect on the specimen. Better linearity, reduced signal noise, and synchronization with the corresponding force data are big advantages due to the lack of analogue-to-digital converters and associated filters, which add time lags and smooth the raw data. In addition, these devices can remain on the specimen until failure and measure very high extensions (up to 1000 mm) without losing any accuracy. These devices typically have resolutions of 0.3 μm or better (the highest quality devices can read values as low as 0.02 μm) and have sufficient measurement accuracy to meet classes 1 and 0.5 of ISO 9513.
Non-contact
For certain special applications, non-contact extensometers are beginning to bring advantages where it is impractical to use a feeler arm or contact extensometer.
Laser
A laser extensometer is an extensometer capable of performing strain or elongation measurements on certain materials when they are subjected to loading in a tensile testing machine. The principle works by illuminating the specimen surface with a laser; the reflections from the specimen surface are then received by a CCD camera and processed by complex algorithms. When using a laser extensometer it is not necessary to attach marks to the specimen, bringing substantial time savings for material testing laboratories.
Resolutions less than one micrometer (typically 0.1 μm) and elongations up to 900 mm can be achieved, which renders these devices suitable for the most complex testing.
Laser extensometers are used primarily for materials which may damage a traditional "clip-on" extensometer, or where the mass of the clip on device affects the material properties, due to being physically attached to the specimen.
Laser extensometers can also be used for testing at elevated or sub zero temperatures.
Video
A video extensometer is a device capable of performing stress/strain measurements of certain materials by capturing continuous images of the specimen during the test, using a frame grabber or a digital video camera attached to a PC. The specimen of the material under test is usually cut in a specific shape and is marked with special markers (usually special stickers, or pen marks that distinguish the markers from the specimen color and texture in the captured image). The pixel distance between these markers in the captured image is constantly tracked in the captured video while the specimen under test is stretched or compressed. This pixel distance can be measured in real time and mapped against a calibration value to give a direct strain measurement, and to control the testing machine in strain control, if required.
With a proper calibration value and good image processing algorithms, a resolution of much less than one micrometre (μm) can be achieved. The proper calibration value also depends on the calibration specimen, which is usually a material etched with great precision. To calibrate, pictures are first captured with the calibration specimen under the same testing conditions to be used for the new specimen.
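A minimal sketch of the strain calculation implied above, assuming two tracked gauge marks and a previously determined scale factor (the function and variable names are illustrative, not taken from any particular video extensometer):

```python
def engineering_strain(px_now, px_initial, mm_per_px):
    """Engineering strain from the tracked pixel distance between two gauge marks.

    mm_per_px is the calibration value obtained by imaging a reference target
    of known length under the same optical setup; it cancels for strain but is
    needed to report absolute extension.
    """
    gauge_now = px_now * mm_per_px
    gauge_initial = px_initial * mm_per_px
    extension_mm = gauge_now - gauge_initial
    return extension_mm / gauge_initial

# A 50 mm gauge length imaged 2000 px apart (0.025 mm/px) stretched to 2040 px:
print(engineering_strain(2040, 2000, 0.025))  # 0.02, i.e. 2% strain
```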
Video extensometers are used primarily for materials which may damage a traditional contact or digital "feeler arm" extensometer. In some applications the video extensometer is replacing mechanical measurement units - but this is mainly clip-on devices.
When measuring the modulus of elasticity on 50 mm gauge length plastics to ISO 527 an accuracy of 1 μm is required. Some video extensometers cannot achieve this, whilst for production testing it is better to use automated motorized digital extensometry to avoid operators manually attaching marks to the specimen, and spending time setting and adjusting the system. Note that some video extensometers have difficulty in achieving acceptable results when used to measure strain within temperature chambers.
For applications demanding high accuracy, non-contact strain measurement, video extensometers are a proven solution. In certain test applications they are superior to other technologies, such as laser speckle because of the ability to measure strain over a large range. This allows measurements such as modulus to be determined as well as strain at failure.
Changing of ambient light conditions during the test can affect the test results if the video extensometer does not utilize appropriate filters both over the lighting array and lens. Systems with this technology remove all effects of ambient lighting conditions.
Mining
In the mining environment, extensometers are used to measure displacements on batters/highwalls. Plotting displacement vs time enables Geotechnical engineers to determine if wall failures are imminent. For complicated failures, further equipment such as radar or laser scans are used enabling 3-dimensional and ultimately 4-dimensional analysis.
Groundwater and Aquifers
Extensometers can be used to measure both the compaction and the expansion of aquifers. They can provide significant data on the depth, rate, and extent of compaction, and this consistent data collection gives a clear picture of subsidence in an area.
Standards
ASTM E83 Standard Practice for Verification and Classification of Extensometers
ASTM D4403 Standard Practice for Extensometers Used in Rock
See also
Compressometer
Deformation monitoring
References
Huston, Charles. "The Effect of Continued and Progressively Increasing Strain upon Iron", Journal of the Franklin Institute, Vol. 107, No. 1, January 1879, pp. 41–44.
Further reading
Tensile testing, edited by J.R. Davis. 2nd ed. Materials Park, Ohio : ASM International, 2004. pp. 77–82. .
Length, distance, or range measuring devices
Materials testing
de:Dehnungssensor | Extensometer | [
"Materials_science",
"Engineering"
] | 1,385 | [
"Materials testing",
"Materials science"
] |
7,831,498 | https://en.wikipedia.org/wiki/HoloVID | HoloVID is a measuring instrument, originally developed by Mark Slater for the holographic dimensional measurement of the internal isogrid structural webbing of the Delta family of launch vehicles in 1981.
History
Delta launch vehicles were produced by McDonnell Douglas Astronautics until the line was purchased by Boeing. Milled out of T6 Aluminum on horizontal mills, the inspection of the huge sheets took longer than the original manufacturing. It was estimated that a real time in situ inspection device could cut costs so an Independent Research and Development (IRAD) budget was generated to solve the problem. Two solutions were worked simultaneously by Mark Slater: a photo-optical technique utilizing a holographic lens and an ultrasonic technique utilizing configurable micro-transducer multiplexed arrays.
A pair of HoloVIDs for simultaneous frontside and backside weld feedback was later used at Martin Marietta to inspect the long weld seams which hold the External Tanks of the Space Shuttle together. By controlling the weld bead profile in real time as it was TIG generated, an optimum weight vs. performance ratio could be obtained, saving the rocket engines from having to waste thrust energy while guaranteeing the highest possible web strengths.
Usage
Many corporations (Kodak, Immunex, Boeing, Johnson & Johnson, The Aerospace Corporation, Silverline Helicopters, and others) use customized versions of the Six Dimensional Non-Contact Reader w/ Integrated Holographic Optical Processing for applications from supercomputer surface mount pad assessment to genetic biochemical assay analysis.
Specifications
HoloVID belongs to the class of sensors known as structured-light 3D scanners. The use of structured light to extract three-dimensional shape information is a well-known technique. The use of single planes of light to measure the distance and orientation of objects has been reported several times.
The use of multiple planes and multiple points of light to measure shapes and construct volumetric estimates of objects has also been widely reported.
The use of segmented phase holograms to selectively deflect portions of an image wavefront is unusual. The holographic optical components used in this device split tessellated segments of a returning wave front in programmable bulk areas and shaped patches to achieve a unique capability, increasing both the size of an object which can be read and the z-axis depth per point which is measurable, while also increasing the simultaneous operations possible, which is a significant advance in the previous state of art.
Operational modes
A laser beam is made to impinge onto a target surface. The angle of the initially nonlinear optical field can be non-orthogonal to the surface. This light beam is then reflected by the surface in a wide conical spread function which is geometrically related to the incidence angle, light frequency, wavelength and relative surface roughness. A portion of this reflected light enters the optical system coaxially, where a 'stop' shadows the edges. In a single point reader, this edge is viewed along a radius by a photodiode array.
The output of this device is a boxcar output where the photodiodes are sequentially lit diode-by-diode as the object distance changes in relation to the sensor, until either no diodes are lit or all diodes are lit. The residual product charge dynamic value in each light diode cell is a function of the bias current, the dark current and the incident ionizing radiation (in this case, the returning laser light).
In the multipoint system, the HoloVID, the cursor point is acousto-optically scanned in the x-axis across a monaxial transformer. A monaxial holographic lens collects the wave front and reconstructs the pattern onto the single dimensional photodiode array and a two dimensional matrix sensor. Image processing of the sensor data derives the correlation between the compressed wave front and the actual physical object.
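For orientation, the range measurement underlying a single-point structured-light sensor of this general kind can be reduced to a similar-triangles relation. The sketch below is a textbook idealization, not the HoloVID's actual holographic optical processing; the geometry (laser parallel to the imaging axis) and all numbers are assumptions.

```python
def triangulated_range(baseline_mm, focal_mm, spot_offset_mm):
    """Idealized single-point laser triangulation.

    The laser beam runs parallel to the camera's optical axis, offset by
    baseline_mm; a surface point at range z images the laser spot at
    offset = focal_mm * baseline_mm / z from the axis, so z follows by
    similar triangles.
    """
    return baseline_mm * focal_mm / spot_offset_mm

# 50 mm baseline, 25 mm lens: a spot imaged 2.5 mm off-axis implies ~500 mm range.
print(triangulated_range(50.0, 25.0, 2.5))
```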
References
Dimensional instruments
Holography
Optical devices | HoloVID | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 797 | [
"Glass engineering and science",
"Dimensional instruments",
"Physical quantities",
"Optical devices",
"Quantity",
"Size"
] |
7,833,211 | https://en.wikipedia.org/wiki/Pascal%20matrix | In matrix theory and combinatorics, a Pascal matrix is a matrix (possibly infinite) containing the binomial coefficients as its elements. It is thus an encoding of Pascal's triangle in matrix form. There are three natural ways to achieve this: as a lower-triangular matrix, an upper-triangular matrix, or a symmetric matrix. For example, the 5 × 5 matrices are:
There are other ways in which Pascal's triangle can be put into matrix form, but these are not easily extended to infinity.
Definition
The non-zero elements of a Pascal matrix are given by the binomial coefficients:
Ln(i, j) = C(i, j) = i! / (j! (i − j)!) for j ≤ i,
Un(i, j) = C(j, i) = j! / (i! (j − i)!) for i ≤ j,
Sn(i, j) = C(i + j, i) = (i + j)! / (i! j!),
such that the indices i, j start at 0, and ! denotes the factorial.
Properties
The matrices have the pleasing relationship Sn = LnUn. From this it is easily seen that all three matrices have determinant 1, as the determinant of a triangular matrix is simply the product of its diagonal elements, which are all 1 for both Ln and Un. In other words, matrices Sn, Ln, and Un are unimodular, with Ln and Un having trace n.
The trace of Sn is given by the sum of the central binomial coefficients,
tr(Sn) = Σ_{k=0}^{n−1} (2k)! / (k!)^2,
with the first few terms given by the sequence 1, 3, 9, 29, 99, 351, 1275, ... .
Construction
A Pascal matrix can actually be constructed by taking the matrix exponential of a special subdiagonal or superdiagonal matrix. The example below constructs a 7 × 7 Pascal matrix, but the method works for any desired n × n Pascal matrices. The dots in the following matrices represent zero elements.
One cannot simply assume exp(A) exp(B) = exp(A + B), for n × n matrices A and B; this equality is only true when AB = BA (i.e. when the matrices A and B commute). In the construction of symmetric Pascal matrices like that above, the sub- and superdiagonal matrices do not commute, so the (perhaps) tempting simplification involving the addition of the matrices cannot be made.
A useful property of the sub- and superdiagonal matrices used for the construction is that both are nilpotent; that is, when raised to a sufficiently great integer power, they degenerate into the zero matrix. (See shift matrix for further details.) As the n × n generalised shift matrices we are using become zero when raised to power n, when calculating the matrix exponential we need only consider the first n + 1 terms of the infinite series to obtain an exact result.
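The construction just described (exponentiating a matrix whose only non-zero entries are 1, 2, ..., n − 1 on the subdiagonal) is easy to check numerically. The following sketch assumes NumPy and SciPy are available; it is a verification aid, not part of any reference implementation.

```python
import numpy as np
from scipy.linalg import expm

n = 7
# Generator: entries 1..n-1 on the subdiagonal, zeros elsewhere.
gen = np.diag(np.arange(1.0, n), k=-1)

L = expm(gen)        # lower-triangular Pascal matrix
U = expm(gen.T)      # upper-triangular Pascal matrix
S = L @ U            # symmetric Pascal matrix, S = L U

print(np.rint(L).astype(int))                    # rows of binomial coefficients
print(round(float(np.linalg.det(np.rint(S)))))   # determinant 1, as noted above
```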
Variants
Interesting variants can be obtained by obvious modification of the matrix-logarithm PL7 and then application of the matrix exponential.
The first example below uses the squares of the values of the log-matrix and constructs a 7 × 7 "Laguerre" matrix (or matrix of coefficients of Laguerre polynomials).
The Laguerre-matrix is actually used with some other scaling and/or the scheme of alternating signs.
(Literature about generalizations to higher powers has not yet been found.)
The second example below uses the products v(v + 1) of the values of the log-matrix and constructs a 7 × 7 "Lah"- matrix (or matrix of coefficients of Lah numbers)
Using v(v − 1) instead provides a diagonal shifting to bottom-right.
The third example below uses the square of the original PL7-matrix, divided by 2, in other words: the first-order binomials (binomial(k, 2)) in the second subdiagonal and constructs a matrix, which occurs in context of the derivatives and integrals of the Gaussian error function:
If this matrix is inverted (using, for instance, the negative matrix-logarithm), then this matrix has alternating signs and gives the coefficients of the derivatives (and by extension the integrals) of Gauss' error function. (Literature about generalizations to greater powers has not yet been found.)
Another variant can be obtained by extending the original matrix to negative values:
See also
Pascal's triangle
LU decomposition
Riordan Array
References
External links
G. Helms Pascalmatrix in a project of compilation of facts about Numbertheoretical matrices
G. Helms Gauss-matrix
Weisstein, Eric W. Gaussian-function
Weisstein, Eric W. Erf-function
Weisstein, Eric W. "Hermite Polynomial". Hermite-polynomials
(Related to Gauss-matrix).
Matrices
Triangles of numbers | Pascal matrix | [
"Mathematics"
] | 928 | [
"Matrices (mathematics)",
"Mathematical objects",
"Triangles of numbers",
"Combinatorics"
] |
7,835,398 | https://en.wikipedia.org/wiki/Quantal%20response%20equilibrium | Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey,
it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.
In a quantal response equilibrium, players are assumed to make errors in choosing which pure strategy to play. The probability of any particular strategy being chosen is positively related to the payoff from that strategy. In other words, very costly errors are unlikely.
The equilibrium arises from the realization of beliefs. A player's payoffs are computed based on beliefs about other players' probability distribution over strategies. In equilibrium, a player's beliefs are correct.
Application to data
When analyzing data from the play of actual games, particularly laboratory experiments such as those with the matching pennies game, Nash equilibrium can be unforgiving. Any non-equilibrium move can appear equally "wrong", but realistically should not be used to reject a theory. QRE allows every strategy to be played with non-zero probability, and so any data is possible (though not necessarily reasonable).
Logit equilibrium
The most common specification for QRE is logit equilibrium (LQRE). In a logit equilibrium, each player's strategies are chosen according to the probability distribution
P_ij = exp(λ·EU_ij(P_−i)) / Σ_k exp(λ·EU_ik(P_−i)),
where P_ij is the probability of player i choosing strategy j, and EU_ij(P_−i) is the expected utility to player i of choosing strategy j under the belief that the other players are playing according to the probability distribution P_−i. Note that the "belief" density in the expected payoff on the right side must match the choice density on the left side. Thus computing expectations of observable quantities such as payoff, demand, output, etc., requires finding fixed points as in mean field theory.
Of particular interest in the logit model is the non-negative parameter λ (sometimes written as 1/μ). λ can be thought of as the rationality parameter. As λ→0, players become "completely non-rational", and play each strategy with equal probability. As λ→∞, players become "perfectly rational", and play approaches a Nash equilibrium.
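A minimal numerical sketch of the fixed-point characterization above for a two-player game. The damped iteration used here is one simple way to find the fixed point (not the standard or only method), and the payoff matrices are just an example:

```python
import numpy as np

def logit_qre(A, B, lam, iters=5000, damp=0.1):
    """Logit quantal response equilibrium of a bimatrix game.

    A[i, j], B[i, j]: payoffs to the row and column player for actions (i, j).
    Each player's choice probabilities are a softmax of expected payoffs
    scaled by the rationality parameter lam; mutual consistency of beliefs
    and choices is enforced by damped fixed-point iteration.
    """
    def quantal_response(expected_payoffs):
        z = np.exp(lam * (expected_payoffs - expected_payoffs.max()))
        return z / z.sum()

    p = np.full(A.shape[0], 1.0 / A.shape[0])  # row player's mixed strategy
    q = np.full(A.shape[1], 1.0 / A.shape[1])  # column player's mixed strategy
    for _ in range(iters):
        p = (1 - damp) * p + damp * quantal_response(A @ q)
        q = (1 - damp) * q + damp * quantal_response(B.T @ p)
    return p, q

# Matching pennies: the LQRE coincides with the 50/50 Nash equilibrium for any lam.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(logit_qre(A, -A, lam=2.0))
```

With λ near zero the returned strategies approach uniform play, and as λ grows the profile approaches a Nash equilibrium, matching the limiting behavior described above.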
For dynamic games
For dynamic (extensive form) games, McKelvey and Palfrey defined agent quantal response equilibrium (AQRE). AQRE is somewhat analogous to subgame perfection. In an AQRE, each player plays with some error as in QRE. At a given decision node, the player determines the expected payoff of each action by treating their future self as an independent player with a known probability distribution over actions. As in QRE, in an AQRE every strategy is used with nonzero probability.
Applications
The quantal response equilibrium approach has been applied in various settings. For example, Goeree et al. (2002) study overbidding in private-value auctions, Yi (2005) explores behavior in ultimatum games, Hoppe and Schmitz (2013) study the role of social preferences in principal-agent problems, and Kawagoe et al. (2018) investigate step-level public goods games with binary decisions.
Most tests of quantal response equilibrium are based on experiments, in which participants are not or only to a small extent incentivized to perform the task well. However, quantal response equilibrium has also been found to explain behavior in high-stakes environments. A large-scale analysis of the American television game show The Price Is Right, for example, shows that contestants' behavior in the so-called Showcase Showdown, a sequential game of perfect information, can be well explained by an agent quantal response equilibrium (AQRE) model.
Critiques
Non-falsifiability
Work by Haile et al. has shown that QRE is not falsifiable in any normal form game, even with significant a priori restrictions on payoff perturbations. The authors argue that the LQRE concept can sometimes restrict the set of possible outcomes from a game, but may be insufficient to provide a powerful test of behavior without a priori restrictions on payoff perturbations.
Loss of Information
As in statistical mechanics, the mean-field approach (specifically, the expectation in the exponent) results in a loss of information. More generally, differences in an agent's payoff with respect to their strategy variable result in a loss of information.
See also
Bounded rationality
Behavioral game theory
Paradox of voting
References
Game theory equilibrium concepts | Quantal response equilibrium | [
"Mathematics"
] | 928 | [
"Game theory",
"Game theory equilibrium concepts"
] |
7,835,738 | https://en.wikipedia.org/wiki/Combinatorial%20explosion | In mathematics, a combinatorial explosion is the rapid growth of the complexity of a problem due to the way its combinatorics depends on input, constraints and bounds. Combinatorial explosion is sometimes used to justify the intractability of certain problems. Examples of such problems include certain mathematical functions, the analysis of some puzzles and games, and some pathological examples which can be modelled as the Ackermann function.
Examples
Latin squares
A Latin square of order is an array with entries from a set of elements with the property that each element of the set occurs exactly once in each row and each column of the array. An example of a Latin square of order three is given by,
{| class="wikitable" style="margin-left:auto;margin-right:auto;text-align:center;width:6em;height:6em;table-layout:fixed;"
|-
| 1|| 2 || 3
|-
| 2 || 3 || 1
|-
| 3 || 1 || 2
|}
A common example of a Latin square would be a completed Sudoku puzzle. A Latin square is a combinatorial object (as opposed to an algebraic object) since only the arrangement of entries matters and not what the entries actually are. The number of Latin squares as a function of the order (independent of the set from which the entries are drawn) provides an example of combinatorial explosion; the brute-force counting sketch below illustrates how quickly exhaustive enumeration becomes infeasible.
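A short illustration of that growth, assuming nothing beyond the Python standard library; the recursion enumerates every valid square, so it is only practical for very small orders:

```python
from itertools import permutations

def count_latin_squares(n):
    """Count Latin squares of order n by stacking rows that are permutations
    of 0..n-1 and keeping only column-compatible stacks (brute force)."""
    rows = list(permutations(range(n)))

    def extend(square):
        if len(square) == n:
            return 1
        total = 0
        for r in rows:
            if all(r[c] not in {row[c] for row in square} for c in range(n)):
                total += extend(square + [r])
        return total

    return extend([])

for n in range(1, 5):
    print(n, count_latin_squares(n))  # 1, 2, 12, 576 -- already 161280 at n = 5
```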
Sudoku
A combinatorial explosion can also occur in some puzzles played on a grid, such as Sudoku. A Sudoku is a type of Latin square with the additional property that each element occurs exactly once in sub-sections of the grid (called boxes). Combinatorial explosion occurs as the order increases, creating limits to the properties of Sudokus that can be constructed, analyzed, and solved.
Games
One example in a game where combinatorial complexity leads to a solvability limit is in solving chess (a game with 64 squares and 32 pieces). Chess is not a solved game. In 2005 all chess game endings with six pieces or fewer were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity.
Furthermore, the prospect of solving larger chess-like games becomes more difficult as the board-size is increased, such as in large chess variants, and infinite chess.
Computing
Combinatorial explosion can occur in computing environments in a way analogous to communications and multi-dimensional space. Imagine a simple system with only one variable, a boolean called A. The system has two possible states, A = true or A = false. Adding another boolean variable B will give the system four possible states, A = true and B = true, A = true and B = false, A = false and B = true, A = false and B = false. A system with n booleans has 2^n possible states, while a system of n variables each with Z allowed values (rather than just the 2 (true and false) of booleans) will have Z^n possible states.
The possible states can be thought of as the leaf nodes of a tree of height n, where each node has Z children. This rapid increase of leaf nodes can be useful in areas like searching, since many results can be accessed without having to descend very far. It can also be a hindrance when manipulating such structures.
A class hierarchy in an object-oriented language can be thought of as a tree, with different types of object inheriting from their parents. If different classes need to be combined, such as in a comparison (like A < B) then the number of possible combinations which may occur explodes. If each type of comparison needs to be programmed then this soon becomes intractable for even small numbers of classes. Multiple inheritance can solve this, by allowing subclasses to have multiple parents, and thus a few parent classes can be considered rather than every child, without disrupting any existing hierarchy.
An example is a taxonomy where different vegetables inherit from their ancestor species. Attempting to compare the tastiness of each vegetable with the others becomes intractable since the hierarchy only contains information about genetics and makes no mention of tastiness. However, instead of having to write comparisons for carrot/carrot, carrot/potato, carrot/sprout, potato/potato, potato/sprout, sprout/sprout, they can all multiply inherit from a separate class of tasty whilst keeping their current ancestor-based hierarchy, then all of the above can be implemented with only a tasty/tasty comparison.
Arithmetic
Suppose we take the factorial of n:
n! = 1 × 2 × 3 × ⋯ × n.
Then 1! = 1, 2! = 2, 3! = 6, and 4! = 24. However, we quickly get to extremely large numbers, even for relatively small n. For example, 100! ≈ 9.33 × 10^157, a number so large that it cannot be displayed on most calculators, and vastly larger than the estimated number of fundamental particles in the observable universe.
Communication
In administration and computing, a combinatorial explosion is the rapidly accelerating increase in communication lines as organizations are added in a process. (This growth is often casually described as "exponential" but is actually polynomial.)
If two organizations need to communicate about a particular topic, it may be easiest to communicate directly in an ad hoc manner—only one channel of communication is required. However, if a third organization is added, three separate channels are required. Adding a fourth organization requires six channels; five, ten; six, fifteen; etc.
In general, it will take n(n − 1)/2 communication lines for n organizations, which is just the number of 2-combinations of n elements (see also Binomial coefficient).
The alternative approach is to realize when this communication will not be a one-off requirement, and produce a generic or intermediate way of passing information. The drawback is that this requires more work for the first pair, since each must convert its internal approach to the common one, rather than the superficially easier approach of just understanding the other.
See also
Birthday problem
Exponential growth
Metcalfe's law
Curse of dimensionality
Information explosion
Intractability (complexity)
Second half of the chessboard
References
Combinatorics
Combinatorial game theory
Game theory | Combinatorial explosion | [
"Mathematics"
] | 1,336 | [
"Discrete mathematics",
"Recreational mathematics",
"Combinatorics",
"Game theory",
"Combinatorial game theory"
] |
7,836,846 | https://en.wikipedia.org/wiki/Model%20maker | A model maker is a professional craftsperson who creates a three-dimensional representation of a design or concept. Most products in use and in development today first take form as a model. This "model" may be an exacting duplicate (prototype) of the future design or a simple mock-up of the general shape or concept. Many prototype models are used for testing physical properties of the design, others for usability and marketing studies.
Mock-ups are generally used as part of the design process to help convey each new iteration. Some model makers specialize in "scale models" that allow an easier grasp of the whole design or for portability of the model to a trade show or an architect or client's office. Other scale models are used in museum displays and in the movie special effects industry. Model makers work in many environments from private studio/shops to corporate design and engineering facilities to research laboratories.
The model maker must be highly skilled in the use of many machines, such as manual lathes, manual mills, Computer Numeric Control (CNC) machines, lasers, wire EDM, water jet saws, tig welders, sheet metal fabrication tools and wood working tools. Fabrication processes model makers take part in are powder coating, shearing, punching, plating, folding, forming and anodizing. Some model makers also use increasingly automated processes, for example cutting parts directly with digital data from computer-aided design plans on a CNC mill or creating the parts through rapid prototyping. Hand tools used by a model maker are an exacto knife, tweezers, sprue cutter, tape, glue, paint, and paint brushes.
There are two basic processes used by the model maker to create models: additive and subtractive. Additive can be as simple as adding clay to create a form, sculpting and smoothing to the final shape. Body fillers, foam and resins are also used in the same manner. Most rapid prototyping technologies are based on the additive process, solidifying thin layered sections or slices one on top of each other. Subtractive is like whittling a solid block of wood or chiseling stone to the desired form. Most milling and other machining methods are subtractive, progressively using smaller and finer tools to remove material from the rough shape to get to the level of detail needed in the final model.
Model makers may use a combination of these methods and technologies to create the model in the most expeditious manner. The parts are usually test fitted, then sanded and painted to represent the intended finish or look. Model makers are required to recreate many faux finishes like brick, stone, grass, molded plastic textures, glass, skin and even water.
See also
Architectural model
Architectural rendering
Architectural visualization
Scale model
References
External links
Association of Professional Model Makers (APMM)
Artisans
Architectural communication
Crafts
Scale modeling | Model maker | [
"Physics",
"Engineering"
] | 589 | [
"Model makers",
"Scale modeling",
"Architectural communication",
"Architecture"
] |
7,836,920 | https://en.wikipedia.org/wiki/Polymethylpentene | Polymethylpentene (PMP), also known as poly(4-methyl-1-pentene), is a thermoplastic polyolefin. It is used for gas-permeable packaging, autoclavable medical and laboratory equipment, microwave components, and cookware. It is commonly called TPX, which is a trademark of Mitsui Chemicals.
Production
Polymethylpentene is a 4-methyl-1-pentene-derived linear isotactic polyolefin and is made by Ziegler–Natta type catalysis. The commercially available grades are usually copolymers. It can be extruded and moulded (by injection moulding or blow moulding).
Physical properties
Polymethylpentene melts at ≈ 235 °C. It has a relatively low density (0.84 g/cm3) among plastics and is transparent. It has low moisture absorption, and exceptional acoustical and electrical properties. Its properties are reasonably similar to those of other polyolefins, although it is more brittle and more gas permeable. The polymer also has a high thermal stability, excellent dielectric characteristics and a high chemical resistance. The crystalline phase has a lower density than the amorphous phase.
Optical properties
In comparison to other materials used for operation in the THz range, TPX shows excellent optical properties, with a wavelength-independent refractive index of 1.460±0.005 between visible light and 100 GHz.
Besides its very good transmission in the THz region, TPX also shows a very wide overall transmission range, extending from the UV to the THz.
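As a rough illustrative calculation (assuming normal incidence and neglecting absorption, which is not a figure quoted in this article), the quoted refractive index fixes the reflection loss of an uncoated TPX window:

    n = 1.460                       # refractive index of TPX, roughly constant from visible light to 100 GHz
    R = ((n - 1) / (n + 1)) ** 2    # Fresnel reflectance of one surface at normal incidence
    print(round(R, 3), round((1 - R) ** 2, 3))
    # ~0.035 lost per face, so roughly 93% of the light is transmitted through two uncoated faces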
Applications
Applications include sonar covers, speaker cones, ultrasonic transducer heads, and lightweight structural parts. It is also FDA compliant for use in food processing machinery. Polymethylpentene is often used in films and coatings for gas-permeable packaging.
Because of its high melting point and good temperature stability, polymethylpentene is used for autoclavable medical and laboratory equipment, microwave components, and cookware.
It is also often used in electrical components e.g. LED molds because it is an excellent electrical insulator.
TPX is a hard, solid material, which can be mechanically shaped into various optical components like lenses and windows. It is used in CO2 laser pumped molecular lasers as an output window because it is transparent in the whole terahertz range and totally suppresses the ~10 μm pump radiation.
References
Krentsel B.A., Kissin Y.V., Kleiner V.I., Stotskaya S.S. Polymers and Copolymers of Higher α-Olefins, Hanser Publishers: New York, 1997.
H. C. Raine, J. Appl. Polym. Sci. 11, 39 (1969).
Mitsui Chemicals Co., Properties of Standard TPX Grades, 2004.
FDA CFR Title 21 Sec. 177.1520 Olefin polymers (C) 3.3b for TPX(4-methylpentene-1-based olefin copolymer)
External links
Mitsui Chemicals, Inc.
Tydex J.S.Co. - Manufacturer of polished TPX lenses and windows
Plastics
Polyolefins
Packaging materials
Household chemicals
Dielectrics
Thermoplastics
Organic polymers | Polymethylpentene | [
"Physics",
"Chemistry"
] | 708 | [
"Organic polymers",
"Unsolved problems in physics",
"Organic compounds",
"Materials",
"Dielectrics",
"Amorphous solids",
"Matter",
"Plastics"
] |
3,332,762 | https://en.wikipedia.org/wiki/Naphthalenesulfonic%20acid | Naphthalenesulfonic acid may refer to:
Naphthalene-1-sulfonic acid
Naphthalene-2-sulfonic acid
Sulfonic acids | Naphthalenesulfonic acid | [
"Chemistry"
] | 39 | [
"Functional groups",
"Sulfonic acids"
] |
3,333,005 | https://en.wikipedia.org/wiki/Trimethylsilyl%20chloride | Trimethylsilyl chloride, also known as chlorotrimethylsilane, is an organosilicon compound (silyl halide) with the formula (CH3)3SiCl, often abbreviated Me3SiCl or TMSCl. It is a colourless volatile liquid that is stable in the absence of water. It is widely used in organic chemistry.
Preparation
TMSCl is prepared on a large scale by the direct process, the reaction of methyl chloride with a silicon-copper alloy. The principal target of this process is dimethyldichlorosilane, but substantial amounts of the trimethyl and monomethyl products are also obtained. The principal reaction is (Me = methyl, CH3): 2 MeCl + Si → Me2SiCl2, with Me3SiCl and MeSiCl3 formed as coproducts.
Typically about 2–4% of the product stream is the monochloride, which forms an azeotrope with .
Reactions and uses
TMSCl is reactive toward nucleophiles, resulting in the replacement of the chloride. In a characteristic reaction of TMSCl, the nucleophile is water, resulting in hydrolysis to give hexamethyldisiloxane: 2 (CH3)3SiCl + H2O → [(CH3)3Si]2O + 2 HCl.
The related reaction of trimethylsilyl chloride with alcohols can be exploited to produce anhydrous solutions of hydrochloric acid in alcohols, which find use in the mild synthesis of esters from carboxylic acids and nitriles, as well as acetals from ketones. Similarly, trimethylsilyl chloride is also used to silanize laboratory glassware, making the surfaces more lipophilic.
Silylation in organic synthesis
By the process of silylation, polar functional groups such as alcohols and amines readily undergo reaction with trimethylsilyl chloride, giving trimethylsilyl ethers and trimethylsilyl amines. These new groups "protect" the original functional group by removing the labile protons and decreasing the basicity of the heteroatom. The lability of the resulting O–Si and N–Si bonds allows these groups to be easily removed afterwards ("deprotected"). Trimethylsilylation can also be used to increase the volatility of a compound, enabling gas chromatography of normally nonvolatile substances such as glucose.
Trimethylsilyl chloride also reacts with carbanions to give trimethylsilyl derivatives. Lithium acetylides react to give trimethylsilylalkynes such as bis(trimethylsilyl)acetylene. Such derivatives are useful protected forms of alkynes.
In the presence of triethylamine and lithium diisopropylamide, enolisable aldehydes, ketones and esters are converted to trimethylsilyl enol ethers. Despite their hydrolytic instability, these compounds have found wide application in organic chemistry; oxidation of the double bond by epoxidation or dihydroxylation can be used to return the original carbonyl group with an alcohol group at the alpha carbon. The trimethylsilyl enol ethers can also be used as masked enolate equivalents in the Mukaiyama aldol addition.
Dehydrations
Dehydration of metal chlorides with trimethylsilyl chloride in THF gives the THF solvate, as illustrated by the case of chromium trichloride: CrCl3·6H2O + 12 (CH3)3SiCl + 3 THF → CrCl3(THF)3 + 6 [(CH3)3Si]2O + 12 HCl.
Other reactions
Trimethylsilyl chloride is used to prepare other trimethylsilyl halides and pseudohalides, including trimethylsilyl fluoride, trimethylsilyl bromide, trimethylsilyl iodide, trimethylsilyl cyanide, trimethylsilyl azide, and trimethylsilyl trifluoromethanesulfonate (TMSOTf). These compounds are produced by a salt metathesis reaction between trimethylsilyl chloride and a salt of the (pseudo)halide (MX): (CH3)3SiCl + MX → (CH3)3SiX + MCl.
TMSCl, lithium, and molecular nitrogen react to give tris(trimethylsilyl)amine under catalysis by nichrome wire or chromium trichloride: 6 (CH3)3SiCl + 6 Li + N2 → 2 N[Si(CH3)3]3 + 6 LiCl.
Using this approach, atmospheric nitrogen can be introduced into organic substrate. For example, tris(trimethylsilyl)amine reacts with α,δ,ω-triketones to give tricyclic pyrroles.
Reduction of trimethylsilyl chloride, e.g. with alkali metals, gives hexamethyldisilane: 2 (CH3)3SiCl + 2 M → (CH3)3Si–Si(CH3)3 + 2 MCl.
References
Reagents for organic chemistry
Trimethylsilyl compounds
Organochlorosilanes | Trimethylsilyl chloride | [
"Chemistry"
] | 929 | [
"Functional groups",
"Trimethylsilyl compounds",
"Reagents for organic chemistry"
] |
3,333,075 | https://en.wikipedia.org/wiki/Q%20cycle | The Q cycle (named for quinol) describes a series of sequential oxidation and reduction of the lipophilic electron carrier Coenzyme Q (CoQ) between the ubiquinol and ubiquinone forms. These reactions can result in the net movement of protons across a lipid bilayer (in the case of the mitochondria, the inner mitochondrial membrane).
The Q cycle was first proposed by Peter D. Mitchell, though a modified version of Mitchell's original scheme is now accepted as the mechanism by which Complex III moves protons (i.e. how Complex III contributes to the biochemical generation of the proton, or pH, gradient, which is used for the biochemical generation of ATP).
The first reaction of the Q cycle is the 2-electron oxidation of ubiquinol by two oxidants, cytochrome c1 (Fe3+) and ubiquinone:
CoQH2 + cytochrome c1 (Fe3+) + CoQ' → CoQ + CoQ'−• + cytochrome c1 (Fe2+) + 2 H+ (intermembrane)
The second reaction of the cycle involves the 2-electron oxidation of a second ubiquinol by two oxidants, a fresh c1 (Fe3+) and the CoQ'−• produced in the first step:
CoQH2 + cytochrome c1 (Fe3+) + CoQ'−• + 2 H+ (matrix) → CoQ + CoQ'H2 + cytochrome c1 (Fe2+) + 2 H+ (intermembrane)
These net reactions are mediated by electron-transfer centers including a Rieske 2Fe-2S cluster (which shunts electrons to cytochrome c1) and cytochrome b (which shunts electrons first to CoQ' and later to CoQ'−•).
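A minimal bookkeeping sketch in Python, using only the species appearing in the two equations above, confirms the net stoichiometry of one complete cycle (the string labels are informal shorthand, not standard nomenclature):

    from collections import Counter

    # Products count positive, reactants negative, copied from the two half-reactions above.
    step1 = Counter({"CoQH2": -1, "cyt c1(Fe3+)": -1, "CoQ'": -1,
                     "CoQ": 1, "CoQ' semiquinone": 1, "cyt c1(Fe2+)": 1, "H+ (intermembrane)": 2})
    step2 = Counter({"CoQH2": -1, "cyt c1(Fe3+)": -1, "CoQ' semiquinone": -1, "H+ (matrix)": -2,
                     "CoQ": 1, "CoQ'H2": 1, "cyt c1(Fe2+)": 1, "H+ (intermembrane)": 2})

    net = step1.copy()
    net.update(step2)          # Counter.update adds counts, cancelling the semiquinone intermediate
    print({k: v for k, v in net.items() if v != 0})
    # Net: 2 CoQH2 + CoQ' + 2 cyt c1(Fe3+) + 2 H+(matrix) -> 2 CoQ + CoQ'H2 + 2 cyt c1(Fe2+) + 4 H+(intermembrane)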
In chloroplasts, a similar reaction is done with plastoquinone by cytochrome b6f complex.
Process
Operation of the modified Q cycle in Complex III results in the reduction of Cytochrome c, oxidation of ubiquinol to ubiquinone, and the transfer of four protons into the intermembrane space, per two-cycle process.
Ubiquinol (QH2) binds to the Qo site of complex III via hydrogen bonding to His182 of the Rieske iron-sulfur protein and Glu272 of Cytochrome b. Ubiquinone (Q), in turn, binds the Qi site of complex III. Ubiquinol is divergently oxidized (gives up one electron each) to the Rieske iron-sulfur '(FeS) protein' and to the bL heme. This oxidation reaction produces a transient semiquinone before complete oxidation to ubiquinone, which then leaves the Qo site of complex III.
Having acquired one electron from ubiquinol, the 'FeS protein' is freed from its electron donor and is able to migrate to the Cytochrome c1 subunit. 'FeS protein' then donates its electron to Cytochrome c1, reducing its bound heme group. The electron is from there transferred to an oxidized molecule of Cytochrome c externally bound to complex III, which then dissociates from the complex. In addition, the reoxidation of the 'FeS protein' releases the proton bound to His181 into the intermembrane space.
The other electron, which was transferred to the bL heme, is used to reduce the bH heme, which in turn transfers the electron to the ubiquinone bound at the Qi site. The movement of this electron is energetically unfavourable, as the electron is moving towards the negatively charged side of the membrane. This is offset by a favourable change in midpoint potential (Em) from −100 mV at the bL heme to +50 mV at the bH heme. The attached ubiquinone is thus reduced to a semiquinone radical. The proton taken up by Glu272 is subsequently transferred to a hydrogen-bonded water chain as Glu272 rotates 170° to hydrogen bond a water molecule, in turn hydrogen-bonded to a propionate of the bL heme.
Because the last step leaves an unstable semiquinone at the Qi site, the reaction is not yet fully completed. A second Q cycle is necessary, with the second electron transfer from cytochrome bH reducing the semiquinone to ubiquinol. The ultimate products of the Q cycle are four protons entering the intermembrane space, two from the matrix and two from the reduction of two molecules of cytochrome c. The reduced cytochrome c is eventually reoxidized by complex IV. The process is cyclic as the ubiquinol created at the Qi site can be reused by binding to the Qo site of complex III.
Notes
References
Trumpower, B.L. (2002) Biochim. Biophys. Acta 1555, 166-173
Hunte, C., Palsdottir, H. and Trumpower, B.L. (2003) FEBS Letters 545, 39-46
Trumpower, B.L. (1990) J. Biol. Chem., 11409-11412
Biochemical reactions
Cellular respiration
Metabolism | Q cycle | [
"Chemistry",
"Biology"
] | 1,111 | [
"Cellular respiration",
"Biochemical reactions",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
3,333,683 | https://en.wikipedia.org/wiki/Combined%20gas%20and%20steam | Combined gas and steam (COGAS) is a type of marine compound powerplant comprising gas and steam turbines, the latter being driven by steam generated using the heat from the exhaust of the gas turbines.
System
In this way, some of the otherwise lost energy can be reclaimed and the specific fuel consumption of the plant can be decreased. Large (land-based) electric powerplants built using this combined cycle can reach conversion efficiencies of over 60%.
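A back-of-the-envelope Python sketch shows why recovering exhaust heat lifts the overall figure (the 40% gas-turbine and 35% steam-cycle efficiencies are assumed for illustration, not taken from any particular plant):

    def combined_efficiency(eta_gas, eta_steam):
        # The steam cycle can only extract work from the heat rejected by the gas turbine,
        # so an idealized combined efficiency is eta_gas + eta_steam * (1 - eta_gas).
        return eta_gas + eta_steam * (1.0 - eta_gas)

    print(combined_efficiency(0.40, 0.35))   # ~0.61, i.e. about 61%, consistent with the "over 60%" figure above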
If the turbines do not drive a propeller shaft directly and instead a turbo-electric transmission is used, the system is known as COGES (combined gas turbine-electric and steam).
COGAS differs from many other combined marine propulsion systems in that it is not intended to operate on one system alone. While this is possible, it will not operate efficiently this way, as with combined diesel and gas systems when run solely on diesel engines. In particular, COGAS should not be confused with combined steam and gas (COSAG) power plants, which employ traditional oil-fired boilers for steam-turbine propulsion during normal cruising and supplement this with gas turbines for faster reaction times and higher dash speed.
COGAS has been proposed as an upgrade for ships that use gas turbines as their main (or only) engines, e.g. in COGOG or COGAG mode, such as the s, but currently no naval ship uses this concept. However, some modern cruise ships are equipped with COGES. For example, Celebrity Cruises' Millennium and other ships of her class use turbo-electric plants with two General Electric LM2500+ gas turbines and one steam turbine.
BMW is currently researching combined gas and steam for automotive use with its turbosteamer system, which uses the waste heat of combustion from the exhaust to raise steam and produce torque that is fed into the crankshaft.
See also
Cogeneration
Combined cycle
Combined cycle powered railway locomotive
References
External links
Gizmag article discussing BMW's turbosteamer
Article on BMW's alternative Combined Cycle Hybrid technology
Marine propulsion
Steam power
Turbo generators
de:Gas-und-Dampf-Kombikraftwerk | Combined gas and steam | [
"Physics",
"Engineering"
] | 433 | [
"Physical quantities",
"Steam power",
"Power (physics)",
"Marine engineering",
"Marine propulsion"
] |
3,335,116 | https://en.wikipedia.org/wiki/Human%20iron%20metabolism | Human iron metabolism is the set of chemical reactions that maintain human homeostasis of iron at the systemic and cellular level. Iron is both necessary to the body and potentially toxic. Controlling iron levels in the body is a critically important part of many aspects of human health and disease. Hematologists have been especially interested in systemic iron metabolism, because iron is essential for red blood cells, where most of the human body's iron is contained. Understanding iron metabolism is also important for understanding diseases of iron overload, such as hereditary hemochromatosis, and iron deficiency, such as iron-deficiency anemia.
Importance of iron regulation
Iron is an essential bioelement for most forms of life, from bacteria to mammals. Its importance lies in its ability to mediate electron transfer. In the ferrous state (Fe2+), iron acts as an electron donor, while in the ferric state (Fe3+) it acts as an acceptor. Thus, iron plays a vital role in the catalysis of enzymatic reactions that involve electron transfer (reduction and oxidation, redox). Proteins can contain iron as part of different cofactors, such as iron–sulfur clusters (Fe-S) and heme groups, both of which are assembled in mitochondria.
Cellular respiration
Human cells require iron in order to obtain energy as ATP from a multi-step process known as cellular respiration, more specifically from oxidative phosphorylation at the mitochondrial cristae. Iron is present in the iron–sulfur cluster and heme groups of the electron transport chain proteins that generate a proton gradient that allows ATP synthase to synthesize ATP (chemiosmosis).
Heme groups are part of hemoglobin, a protein found in red blood cells that serves to transport oxygen from the lungs to other tissues. Heme groups are also present in myoglobin to store and diffuse oxygen in muscle cells.
Oxygen transport
The human body needs iron for oxygen transport. Oxygen (O2) is required for the functioning and survival of nearly all cell types. Oxygen is transported from the lungs to the rest of the body bound to the heme group of hemoglobin in red blood cells. In muscles cells, iron binds oxygen to myoglobin, which regulates its release.
Toxicity
Iron is also potentially toxic. Its ability to donate and accept electrons means that it can catalyze the conversion of hydrogen peroxide into free radicals. Free radicals can cause damage to a wide variety of cellular structures, and ultimately kill the cell.
Iron bound to proteins or cofactors such as heme is safe. Also, there are virtually no truly free iron ions in the cell, since they readily form complexes with organic molecules. However, some of the intracellular iron is bound to low-affinity complexes, and is termed labile iron or "free" iron. Iron in such complexes can cause damage as described above.
To prevent that kind of damage, all life forms that use iron bind the iron atoms to proteins. This binding allows cells to benefit from iron while also limiting its ability to do harm. Typical intracellular labile iron concentrations in bacteria are 10-20 micromolar, though they can be 10-fold higher in anaerobic environment, where free radicals and reactive oxygen species are scarcer. In mammalian cells, intracellular labile iron concentrations are typically smaller than 1 micromolar, less than 5 percent of total cellular iron.
Bacterial protection
In response to a systemic bacterial infection, the immune system initiates a process known as "iron withholding". If bacteria are to survive, then they must obtain iron from their environment. Disease-causing bacteria do this in many ways, including releasing iron-binding molecules called siderophores and then reabsorbing them to recover iron, or scavenging iron from hemoglobin and transferrin. The harder the bacteria have to work to get iron, the greater a metabolic price they must pay. That means that iron-deprived bacteria reproduce more slowly. So, control of iron levels appears to be an important defense against many bacterial infections. Certain bacterial species have developed strategies to circumvent that defense: the bacteria that cause TB can reside within macrophages, which present an iron-rich environment, and Borrelia burgdorferi uses manganese in place of iron. People with increased amounts of iron, as, for example, in hemochromatosis, are more susceptible to some bacterial infections.
Although this mechanism is an elegant response to short-term bacterial infection, it can cause problems when it goes on so long that the body is deprived of needed iron for red cell production. Inflammatory cytokines stimulate the liver to produce the iron metabolism regulator protein hepcidin, that reduces available iron. If hepcidin levels increase because of non-bacterial sources of inflammation, like viral infection, cancer, auto-immune diseases or other chronic diseases, then the anemia of chronic disease may result. In this case, iron withholding actually impairs health by preventing the manufacture of enough hemoglobin-containing red blood cells.
Body iron stores
Most well-nourished people in industrialized countries have 4 to 5 grams of iron in their bodies (~38 mg iron/kg body weight for women and ~50 mg iron/kg body for men). Of this, about is contained in the hemoglobin needed to carry oxygen through the blood (around 0.5 mg of iron per mL of blood), and most of the rest (approximately 2 grams in adult men, and somewhat less in women of childbearing age) is contained in ferritin complexes that are present in all cells, but most common in bone marrow, liver, and spleen. The liver stores of ferritin are the primary physiologic source of reserve iron in the body. The reserves of iron in industrialized countries tend to be lower in children and women of child-bearing age than in men and in the elderly. Women who must use their stores to compensate for iron lost through menstruation, pregnancy or lactation have lower non-hemoglobin body stores, which may consist of , or even less.
Of the body's total iron content, about is devoted to cellular proteins that use iron for important cellular processes like storing oxygen (myoglobin) or performing energy-producing redox reactions (cytochromes). A relatively small amount (3–4 mg) circulates through the plasma, bound to transferrin. Because of its toxicity, free soluble iron is kept in low concentration in the body.
Iron deficiency first affects the storage of iron in the body, and depletion of these stores is thought to be relatively asymptomatic, although some vague and non-specific symptoms have been associated with it. Since iron is primarily required for hemoglobin, iron deficiency anemia is the primary clinical manifestation of iron deficiency. Iron-deficient people will suffer or die from organ damage well before their cells run out of the iron needed for intracellular processes like electron transport.
Macrophages of the reticuloendothelial system store iron as part of the process of breaking down and processing hemoglobin from engulfed red blood cells. Iron is also stored as a pigment called hemosiderin, which is an ill-defined deposit of protein and iron, created by macrophages where excess iron is present, either locally or systemically, e.g., among people with iron overload due to frequent blood cell destruction and the necessary transfusions their condition calls for. If systemic iron overload is corrected, over time the hemosiderin is slowly resorbed by the macrophages.
Mechanisms of iron regulation
Human iron homeostasis is regulated at two different levels. Systemic iron levels are balanced by the controlled absorption of dietary iron by enterocytes, the cells that line the interior of the intestines, and the uncontrolled loss of iron from epithelial sloughing, sweat, injuries and blood loss. In addition, systemic iron is continuously recycled. Cellular iron levels are controlled differently by different cell types due to the expression of particular iron regulatory and transport proteins.
Systemic iron regulation
Dietary iron uptake
The absorption of dietary iron is a variable and dynamic process. The amount of iron absorbed compared to the amount ingested is typically low, but may range from 5% to as much as 35% depending on circumstances and type of iron. The efficiency with which iron is absorbed varies depending on the source. Generally, the best-absorbed forms of iron come from animal products. Absorption of dietary iron in iron salt form (as in most supplements) varies somewhat according to the body's need for iron, and is usually between 10% and 20% of iron intake. Absorption of iron from animal products, and some plant products, is in the form of heme iron, and is more efficient, allowing absorption of from 15% to 35% of intake. Heme iron in animals is from blood and heme-containing proteins in meat and mitochondria, whereas in plants, heme iron is present in mitochondria in all cells that use oxygen for respiration.
Like most mineral nutrients, the majority of the iron absorbed from digested food or supplements is absorbed in the duodenum by enterocytes of the duodenal lining. These cells have special molecules that allow them to move iron into the body. Dietary iron can be absorbed as part of a protein, such as heme protein, or else it must be in its ferrous (Fe2+) form. A ferric reductase enzyme on the enterocytes' brush border, duodenal cytochrome B (Dcytb), reduces ferric Fe3+ to Fe2+. A protein called divalent metal transporter 1 (DMT1), which can transport several divalent metals across the plasma membrane, then transports iron across the enterocyte's cell membrane into the cell. If the iron is bound to heme, it is instead transported across the apical membrane by heme carrier protein 1 (HCP1). Heme is then catabolized by microsomal heme oxygenase into biliverdin, releasing Fe2+.
These intestinal lining cells can then either store the iron as ferritin, which is accomplished by Fe2+ binding to apoferritin (in which case the iron will leave the body when the cell dies and is sloughed off into feces), or the cell can release it into the body via the only known iron exporter in mammals, ferroportin. Hephaestin, a ferroxidase that can oxidize Fe2+ to Fe3+ and is found mainly in the small intestine, helps ferroportin transfer iron across the basolateral end of the intestine cells. Upon release into the bloodstream, Fe3+ binds transferrin and circulates to tissues. In contrast, ferroportin is post-translationally repressed by hepcidin, a 25-amino acid peptide hormone. The body regulates iron levels by regulating each of these steps. For instance, enterocytes synthesize more Dcytb, DMT1 and ferroportin in response to iron deficiency anemia. Iron absorption from diet is enhanced in the presence of vitamin C and diminished by excess calcium, zinc, or manganese.
The human body's rate of iron absorption appears to respond to a variety of interdependent factors, including total iron stores, the extent to which the bone marrow is producing new red blood cells, the concentration of hemoglobin in the blood, and the oxygen content of the blood. The body also absorbs less iron during times of inflammation, in order to deprive bacteria of iron. Recent discoveries demonstrate that hepcidin regulation of ferroportin is responsible for the syndrome of anemia of chronic disease.
Iron recycling and loss
Most of the iron in the body is hoarded and recycled by the reticuloendothelial system, which breaks down aged red blood cells. In contrast to iron uptake and recycling, there is no physiologic regulatory mechanism for excreting iron. People lose a small but steady amount by gastrointestinal blood loss, sweating and by shedding cells of the skin and the mucosal lining of the gastrointestinal tract. The total amount of loss for healthy people in the developed world amounts to an estimated average of a day for men, and 1.5–2 mg a day for women with regular menstrual periods. People with gastrointestinal parasitic infections, more commonly found in developing countries, often lose more. Those who cannot regulate absorption well enough get disorders of iron overload. In these diseases, the toxicity of iron starts overwhelming the body's ability to bind and store it.
Cellular iron regulation
Iron import
Most cell types take up iron primarily through receptor-mediated endocytosis via transferrin receptor 1 (TFR1), transferrin receptor 2 (TFR2) and GAPDH. TFR1 has a 30-fold higher affinity for transferrin-bound iron than TFR2 and thus is the main player in this process. The higher order multifunctional glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) also acts as a transferrin receptor. Transferrin-bound ferric iron is recognized by these transferrin receptors, triggering a conformational change that causes endocytosis. Iron then enters the cytoplasm from the endosome via importer DMT1 after being reduced to its ferrous state by a STEAP family reductase.
Alternatively, iron can enter the cell directly via plasma membrane divalent cation importers such as DMT1 and ZIP14 (Zrt-Irt-like protein 14). Again, iron enters the cytoplasm in the ferrous state after being reduced in the extracellular space by a reductase such as STEAP2, STEAP3 (in red blood cells), Dcytb (in enterocytes) and SDR2.
Iron import in some cancer cells
Iron can also enter cells via CD44 in complexes bound to hyaluronic acid during epithelial–mesenchymal transition (EMT). In this process, epithelial cells transform into mesenchymal cells with detachment from the basement membrane, to which they’re normally anchored, paving the way for the newly differentiated motile mesenchymal cells to begin migration away from the epithelial layer.
While EMT plays a crucial role in physiological processes like implantation, where it enables the embryo to invade the endometrium to facilitate placental attachment, its dysregulation can also fuel the malignant spread of tumors empowering them to invade surrounding tissues and establish distant colonies (metastasis).
Malignant cells often exhibit a heightened demand for iron, fueling their transition towards a more invasive mesenchymal state. This iron is necessary for the expression of mesenchymal genes, like those encoding transforming growth factor beta (TGF-β), crucial for EMT. Notably, iron’s unique ability to catalyze protein and DNA demethylation plays a vital role in this gene expression process.
Conventional iron uptake pathways, such as those using the transferrin receptor 1 (TfR1), often prove insufficient to meet these elevated iron demands in cancer cells. As a result, various cytokines and growth factors trigger the upregulation of CD44, a surface molecule capable of internalizing iron bound to the hyaluronan complex. This alternative pathway, relying on CD44-mediated endocytosis, becomes the dominant iron uptake mechanism compared to the traditional TfR1-dependent route.
The labile iron pool
In the cytoplasm, ferrous iron is found in a soluble, chelatable state which constitutes the labile iron pool (~0.001 mM). In this pool, iron is thought to be bound to low-mass compounds such as peptides, carboxylates and phosphates, although some might be in a free, hydrated form (aqua ions). Alternatively, iron ions might be bound to specialized proteins known as metallochaperones. Specifically, poly-r(C)-binding proteins PCBP1 and PCBP2 appear to mediate transfer of free iron to ferritin (for storage) and non-heme iron enzymes (for use in catalysis). The labile iron pool is potentially toxic due to iron's ability to generate reactive oxygen species. Iron from this pool can be taken up by mitochondria via mitoferrin to synthesize Fe-S clusters and heme groups.
The storage iron pool
Iron can be stored in ferritin as ferric iron due to the ferroxidase activity of the ferritin heavy chain. Dysfunctional ferritin may accumulate as hemosiderin, which can be problematic in cases of iron overload. The ferritin storage iron pool is much larger than the labile iron pool, ranging in concentration from 0.7 mM to 3.6 mM.
Iron export
Iron export occurs in a variety of cell types, including neurons, red blood cells, hepatocytes, macrophages and enterocytes. The latter two are especially important since systemic iron levels depend upon them. There is only one known iron exporter, ferroportin. It transports ferrous iron out of the cell, generally aided by ceruloplasmin and/or hephaestin (mostly in enterocytes), which oxidize iron to its ferric state so it can bind ferritin in the extracellular medium. Hepcidin causes the internalization of ferroportin, decreasing iron export. Besides, hepcidin seems to downregulate both TFR1 and DMT1 through an unknown mechanism. Another player assisting ferroportin in effecting cellular iron export is GAPDH. A specific post translationally modified isoform of GAPDH is recruited to the surface of iron loaded cells where it recruits apo-transferrin in close proximity to ferroportin so as to rapidly chelate the iron extruded.
The expression of hepcidin, which only occurs in certain cell types such as hepatocytes, is tightly controlled at the transcriptional level and it represents the link between cellular and systemic iron homeostasis due to hepcidin's role as "gatekeeper" of iron release from enterocytes into the rest of the body. Erythroblasts produce erythroferrone, a hormone which inhibits hepcidin and so increases the availability of iron needed for hemoglobin synthesis.
Translational control of cellular iron
Although some control exists at the transcriptional level, the regulation of cellular iron levels is ultimately controlled at the translational level by iron-responsive element-binding proteins IRP1 and especially IRP2. When iron levels are low, these proteins are able to bind to iron-responsive elements (IREs). IREs are stem loop structures in the untranslated regions (UTRs) of mRNA.
Both ferritin and ferroportin contain an IRE in their 5' UTRs, so that under iron deficiency their translation is repressed by IRP2, preventing the unnecessary synthesis of storage protein and the detrimental export of iron. In contrast, TFR1 and some DMT1 variants contain 3' UTR IREs, which bind IRP2 under iron deficiency, stabilizing the mRNA, which guarantees the synthesis of iron importers.
Pathology
Iron deficiency
Functional or actual iron deficiency can result from a variety of causes. These causes can be grouped into several categories:
Increased demand for iron, which the diet cannot accommodate.
Increased loss of iron (usually through loss of blood).
Nutritional deficiency. This can result due to a lack of dietary iron or consumption of foods that inhibit iron absorption. Absorption inhibition has been observed caused by phytates in bran, calcium from supplements or dairy products, and tannins from tea, although in all three of these studies the effect was small and the authors of the studies cited regarding bran and tea note that the effect will probably only have a noticeable impact when most iron is obtained from vegetable sources.
Acid-reducing medications: Acid-reducing medications reduce the absorption of dietary iron. These medications are commonly used for gastritis, reflux disease, and ulcers. Proton pump inhibitors (PPIs), H2 antihistamines, and antacids will reduce iron metabolism.
Damage to the intestinal lining. Examples of causes of this kind of damage include surgery involving the duodenum or diseases like Crohn's or celiac sprue which severely reduce the surface area available for absorption. Helicobacter pylori infections also reduce the availability of iron.
Inflammation leading to hepcidin-induced restriction on iron release from enterocytes (see above).
Iron deficiency is also a common occurrence in pregnant women and in growing adolescents, due to poor diets.
Acute blood loss or acute liver cirrhosis creates a lack of transferrin, therefore causing iron to be secreted from the body.
Iron overload
The body is able to substantially reduce the amount of iron it absorbs across the mucosa. It does not seem to be able to entirely shut down the iron transport process. Also, in situations where excess iron damages the intestinal lining itself (for instance, when children eat a large quantity of iron tablets produced for adult consumption), even more iron can enter the bloodstream and cause a potentially deadly syndrome of iron overload. Large amounts of free iron in the circulation will cause damage to critical cells in the liver, the heart and other metabolically active organs.
Iron toxicity results when the amount of circulating iron exceeds the amount of transferrin available to bind it, but the body is able to vigorously regulate its iron uptake. Thus, iron toxicity from ingestion is usually the result of extraordinary circumstances like iron tablet over-consumption rather than variations in diet. The type of acute toxicity from iron ingestion causes severe mucosal damage in the gastrointestinal tract, among other problems.
Excess iron has been linked to higher rates of disease and mortality. For example, breast cancer patients with low ferroportin expression (leading to higher concentrations of intracellular iron) survive for a shorter period of time on average, while high ferroportin expression predicts 90% 10-year survival in breast cancer patients. Similarly, genetic variations in iron transporter genes known to increase serum iron levels also reduce lifespan and the average number of years spent in good health. It has been suggested that mutations that increase iron absorption, such as the ones responsible for hemochromatosis (see below), were selected for during Neolithic times as they provided a selective advantage against iron-deficiency anemia. The increase in systemic iron levels becomes pathological in old age, which supports the notion that antagonistic pleiotropy or "hyperfunction" drives human aging.
Chronic iron toxicity is usually the result of more chronic iron overload syndromes associated with genetic diseases, repeated transfusions or other causes. In such cases the iron stores of an adult may reach 50 grams (10 times normal total body iron) or more. The most common diseases of iron overload are hereditary hemochromatosis (HH), caused by mutations in the HFE gene, and the more severe disease juvenile hemochromatosis (JH), caused by mutations in either hemojuvelin (HJV) or hepcidin (HAMP). The exact mechanisms of most of the various forms of adult hemochromatosis, which make up most of the genetic iron overload disorders, remain unsolved. So, while researchers have been able to identify genetic mutations causing several adult variants of hemochromatosis, they now must turn their attention to the normal function of these mutated genes.
See also
Iron in biology
References
External links
A comprehensive NIH factsheet on iron and nutrition
Iron Disorders Institute: A nonprofit group concerned with iron disorders; site has helpful links and information on iron-related medical disorders.
An interactive medical learning portal on iron metabolism
Information about iron outside the body
Hematology
Human homeostasis
Biology and pharmacology of chemical elements | Human iron metabolism | [
"Chemistry",
"Biology"
] | 5,081 | [
"Pharmacology",
"Properties of chemical elements",
"Biology and pharmacology of chemical elements",
"Human homeostasis",
"Homeostasis",
"Biochemistry"
] |
3,340,143 | https://en.wikipedia.org/wiki/Electro-slag%20remelting | Electroslag remelting (ESR), also known as electro-flux remelting, is a process of remelting and refining steel and other alloys for mission-critical applications in aircraft, thermal power stations, nuclear power plants, military technology and others.
The electroslag remelting (ESR) process is used to remelt and refine steels and various super-alloys, resulting in high-quality ingots. This process can be started up through vacuum induction melting. The ESR process uses the as-cast alloy as a consumable electrode. Electric current (generally AC) is passed between the electrode and the new ingot, which is formed in the bottom of a water-cooled copper mold. The new ingot is covered in an engineered slag that is superheated by the electric current. The electrode tip is slowly melted from contact with the slag. These metal droplets travel through the slag to the bottom of the water-cooled mold and slowly freeze as the ingot is directionally solidified upwards from the bottom of the mold. The slag pool floats above the refined alloy, continuously floating upwards as the alloy solidifies. The molten metal is cleaned of impurities that chemically react with the slag or otherwise float to the top of the molten pool as the molten droplets pass through the slag.
Electroslag remelting uses highly reactive slags (calcium fluoride, lime, alumina, or other oxides are usually the main components) to reduce the amount of type-A sulfide present in biometal alloys. It is a common practice in European industries. ESR reduces other types of inclusions as well, and is seen as an alternative to the vacuum arc remelting (VAR) method that is prevalent in US industries.
An example of the use of the electro-slag refined (ESR) steel technique is the L30 tank gun.
CrNi60WTi is a stainless steel which is best formed by either electro-slag remelting or vacuum arc remelting. This alloy can be used for the construction of nuclear power plants.
See also
Vacuum induction melting
References
External links
Ensuring Mold Steel Polishability at moldmakingtechnology.com
Electro-slag remelting furnace for consumable electrodes and having an electrode drive United States Patent 4394765 at freepatentsonline.com
Steelmaking
Metallurgical processes | Electro-slag remelting | [
"Chemistry",
"Materials_science"
] | 501 | [
"Metallurgical processes",
"Steelmaking",
"Metallurgy"
] |
5,963,858 | https://en.wikipedia.org/wiki/List%20of%20wastewater%20treatment%20technologies | This page consists of a list of wastewater treatment technologies:
See also
Agricultural wastewater treatment
Industrial wastewater treatment
List of solid waste treatment technologies
Waste treatment technologies
Water purification
Sewage sludge treatment
References
Industrial Wastewater Treatment Technology Database EPA.
Chemical processes
Environmental engineering
List
Water pollution
Water technology
Waste-water treatment technologies
Sanitation | List of wastewater treatment technologies | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 59 | [
"Water treatment",
"Chemical engineering",
"Water pollution",
"Sewerage",
"Chemical processes",
"Civil engineering",
"Water technology",
"Environmental engineering",
"Chemical process engineering",
"nan"
] |
1,151,127 | https://en.wikipedia.org/wiki/Polydimethylsiloxane | Polydimethylsiloxane (PDMS), also known as dimethylpolysiloxane or dimethicone, is a silicone polymer with a wide variety of uses, from cosmetics to industrial lubrication and passive daytime radiative cooling.
PDMS is particularly known for its unusual rheological (or flow) properties. It is optically clear and, in general, inert, non-toxic, and non-flammable. It is one of several types of silicone oil (polymerized siloxane). The applications of PDMS range from contact lenses and medical devices to elastomers; it is also present in shampoos (as it makes hair shiny and slippery), food (antifoaming agent), caulk, lubricants and heat-resistant tiles.
Structure
The chemical formula of PDMS is CH3[Si(CH3)2O]nSi(CH3)3, where n is the number of repeating monomer units. Industrial synthesis can begin from dimethyldichlorosilane and water by the following net reaction:
n Si(CH3)2Cl2 + (n + 1) H2O → HO[Si(CH3)2O]nH + 2n HCl
The polymerization reaction evolves hydrochloric acid. For medical and domestic applications, a process was developed in which the chlorine atoms in the silane precursor were replaced with acetate groups. In this case, the polymerization produces acetic acid, which is less chemically aggressive than HCl. As a side-effect, the curing process is also much slower in this case. The acetate is used in consumer applications, such as silicone caulk and adhesives.
Branching and capping
Hydrolysis of Si(CH3)2Cl2 generates a polymer that is terminated with silanol groups (−Si(CH3)2OH). These reactive centers are typically "capped" by reaction with trimethylsilyl chloride: −Si(CH3)2OH + (CH3)3SiCl → −Si(CH3)2−O−Si(CH3)3 + HCl.
Silane precursors with more acid-forming groups and fewer methyl groups, such as methyltrichlorosilane, can be used to introduce branches or cross-links in the polymer chain. Under ideal conditions, each molecule of such a compound becomes a branch point. This can be used to produce hard silicone resins. In a similar manner, precursors with three methyl groups can be used to limit molecular weight, since each such molecule has only one reactive site and so forms the end of a siloxane chain.
Well-defined PDMS with a low polydispersity index and high homogeneity is produced by controlled anionic ring-opening polymerization of hexamethylcyclotrisiloxane. Using this methodology it is possible to synthesize linear block copolymers, heteroarm star-shaped block copolymers and many other macromolecular architectures.
The polymer is manufactured in multiple viscosities, from a thin pourable liquid (when n is very low), to a thick rubbery semi-solid (when n is very high). PDMS molecules have quite flexible polymer backbones (or chains) due to their siloxane linkages, which are analogous to the ether linkages used to impart rubberiness to polyurethanes. Such flexible chains become loosely entangled when molecular weight is high, which results in PDMS' unusually high level of viscoelasticity.
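As a rough sense of scale for n, a minimal Python sketch (the end-group mass is estimated from the formula given above; real commercial grades vary in termination):

    REPEAT_UNIT = 74.15        # g/mol for one -Si(CH3)2O- repeat unit

    def approx_molar_mass(n, end_groups=88.2):
        # end_groups ~ CH3- plus -Si(CH3)3 termini from the formula above (an approximation)
        return n * REPEAT_UNIT + end_groups

    for n in (10, 1000, 10000):
        print(n, round(approx_molar_mass(n)))
    # roughly 0.8 kg/mol (thin, pourable fluid) up to ~740 kg/mol (gum-like semi-solid)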
Mechanical properties
PDMS is viscoelastic, meaning that at long flow times (or high temperatures), it acts like a viscous liquid, similar to honey. However, at short flow times (or low temperatures), it acts like an elastic solid, similar to rubber. Viscoelasticity is a form of nonlinear elasticity that is common amongst noncrystalline polymers. The loading and unloading of a stress-strain curve for PDMS do not coincide; rather, the amount of stress will vary based on the degree of strain, and the general rule is that increasing strain will result in greater stiffness. When the load itself is removed, the strain is slowly recovered (rather than instantaneously). This time-dependent elastic deformation results from the long-chains of the polymer. But the process that is described above is only relevant when cross-linking is present; when it is not, the polymer PDMS cannot shift back to the original state even when the load is removed, resulting in a permanent deformation. However, permanent deformation is rarely seen in PDMS, since it is almost always cured with a cross-linking agent.
If some PDMS is left on a surface overnight (long flow time), it will flow to cover the surface and mold to any surface imperfections. However, if the same PDMS is poured into a spherical mold and allowed to cure (short flow time), it will bounce like a rubber ball. The mechanical properties of PDMS enable this polymer to conform to a diverse variety of surfaces. Since these properties are affected by a variety of factors, this unique polymer is relatively easy to tune. This enables PDMS to become a good substrate that can easily be integrated into a variety of microfluidic and microelectromechanical systems. Specifically, the determination of mechanical properties can be decided before PDMS is cured; the uncured version allows the user to capitalize on myriad opportunities for achieving a desirable elastomer. Generally, the cross-linked cured version of PDMS resembles rubber in a solidified form. It is widely known to be easily stretched, bent, compressed in all directions. Depending on the application and field, the user is able to tune the properties based on what is demanded.
Overall PDMS has a low elastic modulus which enables it to be easily deformed and results in the behavior of a rubber. Viscoelastic properties of PDMS can be more precisely measured using dynamic mechanical analysis. This method requires determination of the material's flow characteristics over a wide range of temperatures, flow rates, and deformations. Because of PDMS's chemical stability, it is often used as a calibration fluid for this type of experiment.
The shear modulus of PDMS depends on preparation conditions, and consequently varies dramatically over the range of 100 kPa to 3 MPa. The loss tangent is very low.
Chemical compatibility
PDMS is hydrophobic. Plasma oxidation can be used to alter the surface chemistry, adding silanol (SiOH) groups to the surface. Atmospheric air plasma and argon plasma will work for this application. This treatment renders the PDMS surface hydrophilic, allowing water to wet it. The oxidized surface can be further functionalized by reaction with trichlorosilanes. After a certain amount of time, recovery of the surface's hydrophobicity is inevitable, regardless of whether the surrounding medium is vacuum, air, or water; the oxidized surface is stable in air for about 30 minutes. Alternatively, for applications where long-term hydrophilicity is a requirement, techniques such as hydrophilic polymer grafting, surface nanostructuring, and dynamic surface modification with embedded surfactants can be of use.
Solid PDMS samples (whether surface-oxidized or not) will not allow aqueous solvents to infiltrate and swell the material. Thus PDMS structures can be used in combination with water and alcohol solvents without material deformation. However most organic solvents will diffuse into the material and cause it to swell. Despite this, some organic solvents lead to sufficiently small swelling that they can be used with PDMS, for instance within the channels of PDMS microfluidic devices. The swelling ratio is roughly inversely related to the solubility parameter of the solvent. Diisopropylamine swells PDMS to the greatest extent; solvents such as chloroform, ether, and THF swell the material to a large extent. Solvents such as acetone, 1-propanol, and pyridine swell the material to a small extent. Alcohols and polar solvents such as methanol, glycerol and water do not swell the material appreciably.
Applications
Surfactants and antifoaming agents
PDMS derivatives are common surfactants and are a component of defoamers. PDMS, in a modified form, is used as an herbicide penetrant and is a critical ingredient in water-repelling coatings, such as .
Hydraulic fluids and related applications
Dimethicone is used in the active silicone fluid in automotive viscous limited slip differentials and couplings.
Daytime radiative cooling
PDMS is a common surface material used in passive daytime radiative cooling as a broadband emitter that is high in solar reflectivity and heat emissivity. Many tested surfaces use PDMS because of its potential scalability as a low-cost polymer. As a daytime radiative cooling surface, PDMS has also been tested to improve solar cell efficiency.
Soft lithography
PDMS is commonly used as a stamp resin in the procedure of soft lithography, making it one of the most common materials used for flow delivery in microfluidics chips. The process of soft lithography consists of creating an elastic stamp, which enables the transfer of patterns of only a few nanometers in size onto glass, silicon or polymer surfaces. With this type of technique, it is possible to produce devices that can be used in the areas of optic telecommunications or biomedical research. The stamp is produced from the normal techniques of photolithography or electron-beam lithography. The resolution depends on the mask used and can reach 6 nm.
The popularity of PDMS in microfluidics area is due to its excellent mechanical properties. Moreover, compared to other materials, it possesses superior optical properties, allowing for minimal background and autofluorescence during fluorescent imaging.
In biomedical (or biological) microelectromechanical systems (bio-MEMS), soft lithography is used extensively for microfluidics in both organic and inorganic contexts. Silicon wafers are used to design channels, and PDMS is then poured over these wafers and left to harden. When removed, even the smallest of details is left imprinted in the PDMS. With this particular PDMS block, hydrophilic surface modification is conducted using plasma etching techniques. Plasma treatment disrupts surface silicon-oxygen bonds, and a plasma-treated glass slide is usually placed on the activated side of the PDMS (the plasma-treated, now hydrophilic side with imprints). Once activation wears off and bonds begin to reform, silicon-oxygen bonds are formed between the surface atoms of the glass and the surface atoms of the PDMS, and the slide becomes permanently sealed to the PDMS, thus creating a waterproof channel. With these devices, researchers can utilize various surface chemistry techniques for different functions creating unique lab-on-a-chip devices for rapid parallel testing.
PDMS can be cross-linked into networks and is a commonly used system for studying the elasticity of polymer networks. PDMS can be directly patterned by surface-charge lithography.
PDMS is being used in the making of synthetic gecko adhesion dry adhesive materials, to date only in laboratory test quantities.
Some flexible electronics researchers use PDMS because of its low cost, easy fabrication, flexibility, and optical transparency. Yet, for fluorescence imaging at different wavelengths, PDMS shows the least autofluorescence and is comparable to BoroFloat glass.
Stereo lithography
In stereo lithography (SLA) 3D printing, light is projected onto photocuring resin to selectively cure it. Some types of SLA printer cure each layer from the bottom of the resin tank and therefore require the growing model to be peeled away from the base so that each printed layer can be supplied with a fresh film of uncured resin. A PDMS layer at the bottom of the tank assists this process by absorbing oxygen: the presence of oxygen adjacent to the resin prevents it adhering to the PDMS, and the optically clear PDMS permits the projected image to pass through to the resin undistorted.
Medicine and cosmetics
Activated dimethicone, a mixture of polydimethylsiloxanes and silicon dioxide (sometimes called simethicone), is often used in over-the-counter drugs as an antifoaming agent and carminative. PDMS also works as a moisturizer that is lighter and more breathable than typical oils.
Silicone breast implants are made out of a PDMS elastomer shell, to which fumed amorphous silica is added, encasing PDMS gel or saline solution.
Skin
PDMS is used variously in the cosmetic and consumer product industry as well. For example, dimethicone is used widely in skin-moisturizing lotions where it is listed as an active ingredient whose purpose is "skin protection." Some cosmetic formulations use dimethicone and related siloxane polymers in concentrations of up to 15%. The Cosmetic Ingredient Review (CIR) Expert Panel has concluded that dimethicone and related polymers are "safe as used in cosmetic formulations."
Hair
PDMS compounds such as amodimethicone are effective conditioners when formulated to consist of small particles, to be soluble in water or alcohol, or to act as surfactants (especially for damaged hair), and are even more conditioning to the hair than common dimethicone and/or dimethicone copolyols.
Contact lenses
A proposed use of PDMS is contact lens cleaning. Its physical properties of low elastic modulus and hydrophobicity have been used to clean micro and nano pollutants from contact lens surfaces more effectively than multipurpose solution and finger rubbing; the researchers involved call the technique PoPPR (polymer on polymer pollution removal) and note that it is highly effective at removing nanoplastic that has adhered to lenses. The use of PDMS in the manufacture of contact lenses was patented (later abandoned).
As anti-parasitic
PDMS is effective for treating lice in humans. This is thought to be due not to suffocation (or poisoning), but to its blocking water excretion, which causes insects to die from physiological stress either through prolonged immobilisation or disruption of internal organs such as the gut.
Dimethicone is the active ingredient in an anti-flea preparation sprayed on a cat, found to be equally effective to a widely used more toxic pyriproxifen/permethrin spray. The parasite becomes trapped and immobilised in the substance, inhibiting adult flea emergence for over three weeks.
Foods
PDMS is added to many cooking oils (as an anti-foaming agent) to prevent oil splatter during the cooking process. As a result of this, PDMS can be found in trace quantities in many fast food items such as McDonald's Chicken McNuggets, french fries, hash browns, milkshakes and smoothies, as well as Wendy's french fries.
Under European food additive regulations, it is listed as E900.
Condom lubricant
PDMS is widely used as a condom lubricant.
Domestic and niche uses
Many people are indirectly familiar with PDMS because it is an important component in Silly Putty, to which PDMS imparts its characteristic viscoelastic properties. Another toy in which PDMS is used is Kinetic Sand. The rubbery, vinegary-smelling silicone caulks, adhesives, and aquarium sealants are also well-known. PDMS is also used as a component in silicone grease and other silicone-based lubricants, as well as in defoaming agents, mold release agents, damping fluids, heat transfer fluids, polishes, cosmetics, hair conditioners and other applications.
It can be used as a sorbent for the analysis of headspace (dissolved gas analysis) of food.
Safety and environmental considerations
According to Ullmann's Encyclopedia of Industrial Chemistry, no "marked harmful effects on organisms in the environment" have been noted for siloxanes. PDMS is nonbiodegradable, but is absorbed in waste water treatment facilities. Its degradation is catalyzed by various clays.
See also
(3-Aminopropyl)triethoxysilane
Cyclomethicone
Polymethylhydrosiloxane (PMHS)
Silicone rubber
Silicone
Siloxane, Cyclosiloxane and other organosilicon compounds
References
External links
Amodimethicone Amodimethicone structure and properties
Biomaterials
Cosmetics chemicals
Food additives
Silicones
Siloxanes
E-number additives | Polydimethylsiloxane | [
"Physics",
"Biology"
] | 3,396 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
1,151,323 | https://en.wikipedia.org/wiki/Thue%20equation | In mathematics, a Thue equation is a Diophantine equation of the form
f(x, y) = r,
where f is an irreducible bivariate form of degree at least 3 over the rational numbers, and r is a nonzero rational number. It is named after Axel Thue, who in 1909 proved that a Thue equation can have only finitely many solutions in integers x and y, a result known as Thue's theorem.
The Thue equation is solvable effectively: there is an explicit bound on the solutions x, y of the form (C1 r)^C2, where the constants C1 and C2 depend only on the form f. A stronger result holds: if K is the field generated by the roots of f, then the equation has only finitely many solutions with x and y integers of K, and again these may be effectively determined.
Finiteness of solutions and diophantine approximation
Thue's original proof that the equation named in his honour has finitely many solutions is through the proof of what is now known as Thue's theorem: it asserts that for any algebraic number α having degree d ≥ 3 and for any ε > 0 there exist only finitely many coprime integers p, q with q > 0 such that |α − p/q| < q^-(d/2 + 1 + ε). Applying this theorem allows one to almost immediately deduce the finiteness of solutions. However, Thue's proof, as well as subsequent improvements by Siegel, Dyson, and Roth, were all ineffective.
Solution algorithm
Finding all solutions to a Thue equation can be achieved by a practical algorithm, which has been implemented in the following computer algebra systems:
in PARI/GP as functions thueinit() and thue().
in Magma as functions ThueObject() and ThueSolve().
in Mathematica through Reduce[]
in Maple through ThueSolve()
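As an illustration of the problem itself (not of the algorithms used by the systems above), the following Python sketch enumerates solutions of one specific Thue equation, x^3 - 2y^3 = 1, inside a finite search box; the equation and the bound are arbitrary choices, and unlike a true Thue-equation solver this search cannot certify that no larger solutions exist.

```python
from itertools import product

def thue_solutions(f, r, bound):
    """Enumerate integer solutions of f(x, y) = r with |x|, |y| <= bound.

    Naive search over a finite box; it may miss solutions outside the box.
    """
    return [(x, y) for x, y in product(range(-bound, bound + 1), repeat=2)
            if f(x, y) == r]

# Example: the degree-3 Thue equation x^3 - 2*y^3 = 1.
print(thue_solutions(lambda x, y: x**3 - 2 * y**3, 1, 50))
# Within this box the solutions found are (-1, -1) and (1, 0).
```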
Bounding the number of solutions
While there are several effective methods to solve Thue equations (including using Baker's method and Skolem's p-adic method), these are not able to give the best theoretical bounds on the number of solutions. One may qualify an effective bound of the Thue equation by the parameters it depends on, and how "good" the dependence is.
The best result known today, essentially building on pioneering work of Bombieri and Schmidt, gives a bound of the shape C·n^(1+ω(r)), where n is the degree of the form f, C is an absolute constant (that is, independent of both n and r), and ω(r) is the number of distinct prime factors of r. The most significant qualitative improvement to the theorem of Bombieri and Schmidt is due to Stewart, who obtained a bound of the form C·n^(1+ω(g)), where g is a divisor of r that is sufficiently large in absolute value. It is conjectured that one may take the bound C(n); that is, depending only on the degree of f but not its coefficients, and completely independent of the integer r on the right hand side of the equation.
This is a weaker form of a conjecture of Stewart, and is a special case of the uniform boundedness conjecture for rational points. This conjecture has been proven for "small" integers r, where smallness is measured in terms of the discriminant of the form f, by various authors, including Evertse, Stewart, and Akhtari. Stewart and Xiao demonstrated that a strong form of this conjecture, asserting that the number of solutions is absolutely bounded, holds on average as r ranges over increasingly long intervals.
See also
Roth's theorem
Faltings's theorem
References
Further reading
Diophantine equations
Theorems in number theory | Thue equation | [
"Mathematics"
] | 684 | [
"Mathematical theorems",
"Number theory stubs",
"Mathematical objects",
"Equations",
"Diophantine equations",
"Theorems in number theory",
"Mathematical problems",
"Number theory"
] |
1,151,599 | https://en.wikipedia.org/wiki/Silver%28I%29%20fluoride | Silver(I) fluoride is the inorganic compound with the formula AgF. It is one of the three main fluorides of silver, the others being silver subfluoride and silver(II) fluoride. AgF has relatively few niche applications; it has been employed as a fluorination and desilylation reagent in organic synthesis and in aqueous solution as a topical caries treatment in dentistry.
The hydrates of AgF present as colourless, while pure anhydrous samples are yellow.
Preparation
High-purity silver(I) fluoride can be produced by heating silver carbonate under a hydrogen fluoride environment in a platinum tube:
Ag2CO3 + 2 HF -> 2 AgF + H2O + CO2
Laboratory routes to the compound typically avoid the use of gaseous hydrogen fluoride. One method is the thermal decomposition of silver tetrafluoroborate:
AgBF4 -> AgF + BF3
In an alternative route, silver(I) oxide is dissolved in concentrated aqueous hydrofluoric acid, and the silver fluoride is precipitated out of the resulting solution by acetone.
Ag2O + 2 HF -> 2 AgF + H2O
Properties
Structure
The structure of AgF has been determined by X-ray diffraction. At ambient temperature and pressure, silver(I) fluoride exists as the polymorph AgF-I, which adopts a cubic crystal system with space group Fm3m in the Hermann–Mauguin notation. The rock salt structure is adopted by the other silver monohalides. The lattice parameter is 4.936(1) Å, significantly lower than those of AgCl and AgBr. Neutron and X-ray diffraction studies have further shown that at 2.70(2) GPa, a structural transition occurs to a second polymorph (AgF-II) with the caesium chloride structure, and lattice parameter 2.945 Å. The associated decrease in volume is approximately ten percent. A third polymorph, AgF-III, forms on reducing the pressure to 2.59(2) GPa, and has an inverse nickel arsenide structure. The lattice parameters are a = 3.244(2) Å and c = 6.24(1) Å; the rock salt structure is regained only on reduction of the pressure to 0.9(1) GPa. Non-stoichiometric behaviour is exhibited by all three polymorphs under extreme pressures.
Spectroscopy
Silver(I) fluoride exhibits unusual optical properties. Simple electronic band theory predicts that the fundamental exciton absorption for AgF would lie higher than that of AgCl (5.10 eV) and would correspond to a transition from an anionic valence band as for the other silver halides. Experimentally, the fundamental exciton for AgF lies at 4.63 eV. This discrepancy can be explained by positing transition from a valence band with largely silver 4d-orbital character. The high frequency refractive index is 1.73(2).
Photosensitivity
In contrast with the other silver halides, anhydrous silver(I) fluoride is not appreciably photosensitive, although the dihydrate is. With this and the material's solubility in water considered, it is unsurprising that it has found little application in photography but may have been one of the salts used by Levi Hill in his "heliochromy", although a US patent for an experimental AgF-based method was granted in 1970.
Solubility
Unlike the other silver halides, AgF is highly soluble in water (1800 g/L), and it even has some solubility in acetonitrile. It is also unique among silver(I) compounds and the silver halides in that it forms the hydrates AgF·(H2O)2 and AgF·(H2O)4 on precipitation from aqueous solution. Like the alkali metal fluorides, it dissolves in hydrogen fluoride to give a conducting solution.
Applications
Organic synthesis
Silver(I) fluoride finds application in organofluorine chemistry for addition of fluoride across multiple bonds. For example, AgF adds to perfluoroalkenes in acetonitrile to give perfluoroalkylsilver(I) derivatives. It can also be used as a desulfuration-fluorination reagent on thiourea derived substrates. Due to its high solubility in water and organic solvents, it is a convenient source of fluoride ions, and can be used to fluorinate alkyl halides under mild conditions. An example is given by the following reaction:
Another organic synthetic method using silver(I) fluoride is the BINAP-AgF complex catalyzed enantioselective protonation of silyl enol ethers:
Inorganic synthesis
The reaction of silver acetylide with a concentrated solution of silver(I) fluoride results in the formation of a chandelier-like [Ag10]2+ cluster with endohedral acetylenediide.
Tetraalkylammonium fluorides can be conveniently prepared in the laboratory by the reaction of the corresponding tetraalkylammonium bromide with an aqueous AgF solution.
Other
It is possible to coat a silicon surface with a uniform silver microlayer (0.1 to 1 μm thickness) by passing AgF vapour over it at 60–800 °C. The relevant reaction is:
4 AgF + Si -> 4 Ag + SiF4
Multiple studies have shown silver(I) fluoride to be an effective anti-caries agent, although the mechanism is the subject of current research. Treatment is typically by the "atraumatic" method, in which 40% by mass aqueous silver(I) fluoride solution is applied to carious lesions, followed by sealing of the dentine with glass ionomer cement. Although the treatment is generally recognised to be safe, fluoride toxicity has been a significant clinical concern in paediatric applications, especially as some commercial preparations have had considerable silver(II) fluoride contamination in the past. Due to the instability of concentrated AgF solutions, silver diammine fluoride (Ag(NH3)2F) is now more commonly used. Preparation is by the addition of ammonia to aqueous silver fluoride solution or by the dissolution of silver fluoride in aqueous ammonia.
References
Fluorides
Silver compounds
Metal halides
Fluorinating agents
Rock salt crystal structure | Silver(I) fluoride | [
"Chemistry"
] | 1,418 | [
"Inorganic compounds",
"Salts",
"Fluorinating agents",
"Metal halides",
"Reagents for organic chemistry",
"Fluorides"
] |
1,151,991 | https://en.wikipedia.org/wiki/Logic%20in%20computer%20science | Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas:
Theoretical foundations and analysis
Use of computer technology to aid logicians
Use of concepts from logic for computer applications
Theoretical foundations and analysis
Logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing. Church first showed the existence of algorithmically unsolvable problems using his notion of lambda-definability. Turing gave the first compelling analysis of what can be called a mechanical procedure, and Kurt Gödel asserted that he found Turing's analysis "perfect". In addition, some other major areas of theoretical overlap between logic and computer science are:
Gödel's incompleteness theorem proves that any logical system powerful enough to characterize arithmetic will contain statements that can neither be proved nor disproved within that system. This has direct application to theoretical issues relating to the feasibility of proving the completeness and correctness of software.
The frame problem is a basic problem that must be overcome when using first-order logic to represent the goals of an artificial intelligence agent and the state of its environment.
The Curry–Howard correspondence is a relation between logical systems and programming languages. This theory established a precise correspondence between proofs and programs. In particular it showed that terms in the simply typed lambda calculus correspond to proofs of intuitionistic propositional logic.
Category theory represents a view of mathematics that emphasizes the relations between structures. It is intimately tied to many aspects of computer science: type systems for programming languages, the theory of transition systems, models of programming languages and the theory of programming language semantics.
Logic programming is a programming, database and knowledge representation paradigm that is based on formal logic. A logic program is a set of sentences about some problem domain. Computation is performed by applying logical reasoning to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog.
Computers to assist logicians
One of the first applications to use the term artificial intelligence was the Logic Theorist system developed by Allen Newell, Cliff Shaw, and Herbert Simon in 1956. One of the things that a logician does is to take a set of statements in logic and deduce the conclusions (additional statements) that must be true by the laws of logic. For example, if given the statements "All humans are mortal" and "Socrates is human" a valid conclusion is "Socrates is mortal". Of course this is a trivial example. In actual logical systems the statements can be numerous and complex. It was realized early on that this kind of analysis could be significantly aided by the use of computers. Logic Theorist validated the theoretical work of Bertrand Russell and Alfred North Whitehead in their influential work on mathematical logic called Principia Mathematica. In addition, subsequent systems have been utilized by logicians to validate and discover new mathematical theorems and proofs.
Logic applications for computers
There has always been a strong influence from mathematical logic on the field of artificial intelligence (AI). From the beginning of the field it was realized that technology to automate logical inferences could have great potential to solve problems and draw conclusions from facts. Ron Brachman has described first-order logic (FOL) as the metric by which all AI knowledge representation formalisms should be evaluated. First-order logic is a general and powerful method for describing and analyzing information. The reason FOL itself is simply not used as a computer language is that it is actually too expressive, in the sense that FOL can easily express statements that no computer, no matter how powerful, could ever solve. For this reason every form of knowledge representation is in some sense a trade-off between expressivity and computability. The more expressive a language is (that is, the closer it comes to FOL), the more likely it is to be slow and prone to infinite loops.
For example, IF–THEN rules used in expert systems approximate to a very limited subset of FOL. Rather than arbitrary formulas with the full range of logical operators the starting point is simply what logicians refer to as modus ponens. As a result, rule-based systems can support high-performance computation, especially if they take advantage of optimization algorithms and compilation.
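As an illustration of that restricted rule form, here is a small Python sketch of forward chaining with modus ponens; the facts and rules are invented for the example and do not come from any particular expert system.

```python
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: when all premises of a rule hold,
    assert its conclusion. Stops once a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base in the IF-THEN style of expert systems.
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)", "famous(socrates)"}, "remembered(socrates)"),
]
print(forward_chain({"human(socrates)", "famous(socrates)"}, rules))
```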
On the other hand, logic programming, which combines the Horn clause subset of first-order logic with a non-monotonic form of negation, has both high expressive power and efficient implementations. In particular, the logic programming language Prolog is a Turing complete programming language. Datalog extends the relational database model with recursive relations, while answer set programming is a form of logic programming oriented towards difficult (primarily NP-hard) search problems.
Another major area of research for logical theory is software engineering. Research projects such as the Knowledge Based Software Assistant and Programmer's Apprentice programs have applied logical theory to validate the correctness of software specifications. They have also used logical tools to transform the specifications into efficient code on diverse platforms and to prove the equivalence between the implementation and the specification. This formal transformation-driven approach is often far more effortful than traditional software development. However, in specific domains with appropriate formalisms and reusable templates the approach has proven viable for commercial products. The appropriate domains are usually those such as weapons systems, security systems, and real-time financial systems where failure of the system has excessively high human or financial cost. An example of such a domain is Very Large Scale Integrated (VLSI) design—the process for designing the chips used for the CPUs and other critical components of digital devices. An error in a chip can be catastrophic. Unlike software, chips can't be patched or updated. As a result, there is commercial justification for using formal methods to prove that the implementation corresponds to the specification.
Another important application of logic to computer technology has been in the area of frame languages and automatic classifiers. Frame languages such as KL-ONE can be directly mapped to set theory and first-order logic. This allows specialized theorem provers called classifiers to analyze the various declarations between sets, subsets, and relations in a given model. In this way the model can be validated and any inconsistent definitions flagged. The classifier can also infer new information, for example define new sets based on existing information and change the definition of existing sets based on new data. The level of flexibility is ideal for handling the ever changing world of the Internet. Classifier technology is built on top of languages such as the Web Ontology Language to allow a logical semantic level on top of the existing Internet. This layer is called the Semantic Web.
Temporal logic is used for reasoning in concurrent systems.
See also
Automated reasoning
Computational logic
Logic programming
References
Further reading
External links
Article on Logic and Artificial Intelligence at the Stanford Encyclopedia of Philosophy.
IEEE Symposium on Logic in Computer Science (LICS)
Alwen Tiu, Introduction to logic video recording of a lecture at ANU Logic Summer School '09 (aimed mostly at computer scientists)
Formal methods | Logic in computer science | [
"Mathematics",
"Engineering"
] | 1,469 | [
"Software engineering",
"Mathematical logic",
"Logic in computer science",
"Formal methods"
] |
1,152,079 | https://en.wikipedia.org/wiki/K-minimum%20spanning%20tree | The k-minimum spanning tree problem, studied in theoretical computer science, asks for a tree of minimum cost that has exactly k vertices and forms a subgraph of a larger graph. It is also called the k-MST or edge-weighted k-cardinality tree. Finding this tree is NP-hard, but it can be approximated to within a constant approximation ratio in polynomial time.
Problem statement
The input to the problem consists of an undirected graph with weights on its edges, and a number k. The output is a tree with k vertices and k − 1 edges, with all of the edges of the output tree belonging to the input graph. The cost of the output is the sum of the weights of its edges, and the goal is to find the tree that has minimum cost. The problem was formulated independently by several sets of authors in the 1990s.
Ravi et al. also considered a geometric version of the problem, which can be seen as a special case of the graph problem.
In the geometric k-minimum spanning tree problem, the input is a set of points in the plane. Again, the output should be a tree with k of the points as its vertices, minimizing the total Euclidean length of its edges. That is, it is a graph k-minimum spanning tree on a complete graph with Euclidean distances as weights.
Computational complexity
When k is a fixed constant, the k-minimum spanning tree problem can be solved in polynomial time by a brute-force search algorithm that tries all k-tuples of vertices.
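A minimal Python sketch of that brute-force approach (the example graph is invented for illustration): it tries every k-subset of vertices and runs Kruskal's algorithm on the induced subgraph, keeping the cheapest spanning tree found.

```python
from itertools import combinations

def k_mst_brute_force(vertices, edges, k):
    """Return (cost, tree_edges) of a cheapest k-vertex tree, or None.

    edges is a list of (u, v, weight) tuples describing an undirected graph.
    The search tries every k-subset of vertices, so it is polynomial only
    when k is a fixed constant.
    """
    best = None
    for subset in combinations(vertices, k):
        chosen = set(subset)
        parent = {v: v for v in chosen}

        def find(v):
            # Find the root of v's component, with path halving.
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        tree, cost = [], 0
        induced = [e for e in edges if e[0] in chosen and e[1] in chosen]
        for u, v, w in sorted(induced, key=lambda e: e[2]):  # Kruskal's algorithm
            root_u, root_v = find(u), find(v)
            if root_u != root_v:
                parent[root_u] = root_v
                tree.append((u, v, w))
                cost += w
        if len(tree) == k - 1 and (best is None or cost < best[0]):
            best = (cost, tree)
    return best

# Tiny example: a 5-cycle with one heavy chord; the cheapest 3-vertex tree costs 3.
verts = [0, 1, 2, 3, 4]
es = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (3, 4, 2), (4, 0, 5), (1, 3, 4)]
print(k_mst_brute_force(verts, es, 3))
```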
However, for variable k, the k-minimum spanning tree problem has been shown to be NP-hard by a reduction from the Steiner tree problem.
The reduction takes as input an instance of the Steiner tree problem: a weighted graph, with a subset of its vertices selected as terminals. The goal of the Steiner tree problem is to connect these terminals by a tree whose weight is as small as possible. To transform this problem into an instance of the k-minimum spanning tree problem, attach to each terminal a tree of zero-weight edges with a large number of vertices per tree, and ask for the k-minimum spanning tree in this augmented graph with k chosen large enough that the only way to include this many vertices in a k-spanning tree is to use at least one vertex from each added tree, for there are not enough vertices remaining if even one of the added trees is missed. However, for this choice of k, it is possible for the k-spanning tree to include only as few edges of the original graph as are needed to connect all the terminals. Therefore, the k-minimum spanning tree must be formed by combining the optimal Steiner tree with enough of the zero-weight edges of the added trees to make the total tree size large enough.
Even for a graph whose edge weights belong to the set {1, 2, 3}, testing whether the optimal solution value is less than a given threshold is NP-complete. It remains NP-complete for planar graphs. The geometric version of the problem is also NP-hard, but not known to belong to NP, because of the difficulty of comparing sums of square roots; instead it lies in the class of problems reducible to the existential theory of the reals.
The -minimum spanning tree may be found in polynomial time for graphs of bounded treewidth, and for graphs with only two distinct edge weights.
Approximation algorithms
Because of the high computational complexity of finding an optimal solution to the k-minimum spanning tree problem, much of the research on the problem has instead concentrated on approximation algorithms for the problem. The goal of such algorithms is to find an approximate solution in polynomial time with a small approximation ratio. The approximation ratio is defined as the ratio of the computed solution length to the optimal length for a worst-case instance, one that maximizes this ratio. Because the NP-hardness reduction for the k-minimum spanning tree problem preserves the weight of all solutions, it also preserves the hardness of approximation of the problem. In particular, because the Steiner tree problem is NP-hard to approximate to an approximation ratio better than 96/95, the same is true for the k-minimum spanning tree problem.
The best approximation known for the general problem achieves an approximation ratio of 2, and is due to Garg. This approximation relies heavily on the primal-dual schema of Goemans and Williamson.
When the input consists of points in the Euclidean plane (any two of which can be connected in the tree with cost equal to their distance), there exists a polynomial-time approximation scheme devised by Arora.
References
External links
Minimum k-spanning tree in "A compendium of NP optimization problems"
KCTLIB, KCTLIB -- A Library for the Edge-Weighted K-Cardinality Tree Problem
Spanning tree
NP-hard problems | K-minimum spanning tree | [
"Mathematics"
] | 942 | [
"NP-hard problems",
"Mathematical problems",
"Computational problems"
] |
1,152,311 | https://en.wikipedia.org/wiki/Signature%20%28topology%29 | In the field of topology, the signature is an integer invariant which is defined for an oriented manifold M of dimension divisible by four.
This invariant of a manifold has been studied in detail, starting with Rokhlin's theorem for 4-manifolds, and Hirzebruch signature theorem.
Definition
Given a connected and oriented manifold M of dimension 4k, the cup product gives rise to a quadratic form Q on the 'middle' real cohomology group H2k(M,R).
The basic identity for the cup product, αp ⌣ βq = (−1)pq (βq ⌣ αp), shows that with p = q = 2k the product is symmetric. It takes values in H4k(M,R).
If we assume also that M is compact, Poincaré duality identifies this with H0(M,R), which can be identified with R. Therefore the cup product, under these hypotheses, does give rise to a symmetric bilinear form on H2k(M,R); and therefore to a quadratic form Q. The form Q is non-degenerate due to Poincaré duality, as it pairs non-degenerately with itself. More generally, the signature can be defined in this way for any general compact polyhedron with 4n-dimensional Poincaré duality.
The signature σ of M is by definition the signature of Q, that is, σ = n+ − n−, where any diagonal matrix defining Q has n+ positive entries and n− negative entries. If M is not connected, its signature is defined to be the sum of the signatures of its connected components.
Other dimensions
If M has dimension not divisible by 4, its signature is usually defined to be 0. There are alternative generalizations in L-theory: the signature can be interpreted as the 4k-dimensional (simply connected) symmetric L-group or as the 4k-dimensional quadratic L-group, and these invariants do not always vanish for other dimensions. The Kervaire invariant is a mod 2 invariant (i.e., an element of Z/2) for framed manifolds of dimension 4k+2 (the quadratic L-group), while the de Rham invariant is a mod 2 invariant of manifolds of dimension 4k+1 (the symmetric L-group); the other-dimensional L-groups vanish.
Kervaire invariant
When the dimension is twice an odd integer (singly even), the same construction gives rise to an antisymmetric bilinear form. Such forms do not have a signature invariant; if they are non-degenerate, any two such forms are equivalent. However, if one takes a quadratic refinement of the form, which occurs if one has a framed manifold, then the resulting ε-quadratic forms need not be equivalent, being distinguished by the Arf invariant. The resulting invariant of a manifold is called the Kervaire invariant.
Properties
Compact oriented manifolds M and N satisfy σ(M ⊔ N) = σ(M) + σ(N) by definition, and satisfy σ(M × N) = σ(M)σ(N) by a Künneth formula.
If M is an oriented boundary, then σ(M) = 0.
René Thom (1954) showed that the signature of a manifold is a cobordism invariant, and in particular is given by some linear combination of its Pontryagin numbers. For example, in four dimensions, it is given by one third of the first Pontryagin number: σ(M) = p1[M]/3. Friedrich Hirzebruch (1954) found an explicit expression for this linear combination as the L genus of the manifold.
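For concreteness, the two lowest-dimensional cases of the Hirzebruch signature theorem can be written out using the standard first two L-polynomials, L1 = p1/3 and L2 = (7p2 − p1²)/45:

```latex
% Hirzebruch signature theorem in dimensions 4 and 8:
\sigma\bigl(M^{4}\bigr) = \tfrac{1}{3}\, p_1[M], \qquad
\sigma\bigl(M^{8}\bigr) = \tfrac{1}{45}\left(7\, p_2 - p_1^2\right)[M]
```

For example, for the complex projective plane CP², the first Pontryagin number is 3, giving signature 1, in agreement with the fact that its intersection form is the rank-one form ⟨1⟩.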
William Browder (1962) proved that a simply connected compact polyhedron with 4n-dimensional Poincaré duality is homotopy equivalent to a manifold if and only if its signature satisfies the expression of the Hirzebruch signature theorem.
Rokhlin's theorem says that the signature of a 4-dimensional simply connected manifold with a spin structure is divisible by 16.
See also
Hirzebruch signature theorem
Genus of a multiplicative sequence
Rokhlin's theorem
References
Geometric topology
Quadratic forms | Signature (topology) | [
"Mathematics"
] | 774 | [
"Quadratic forms",
"Topology",
"Number theory",
"Geometric topology"
] |
1,152,833 | https://en.wikipedia.org/wiki/IP%20Multimedia%20Subsystem | The IP Multimedia Subsystem or IP Multimedia Core Network Subsystem (IMS) is a standardised architectural framework for delivering IP multimedia services. Historically, mobile phones have provided voice call services over a circuit-switched-style network, rather than strictly over an IP packet-switched network. Various voice over IP technologies are available on smartphones; IMS provides a standard protocol across vendors.
IMS was originally designed by the wireless standards body 3rd Generation Partnership Project (3GPP), as a part of the vision for evolving mobile networks beyond GSM. Its original formulation (3GPP Rel-5) represented an approach for delivering Internet services over GPRS. This vision was later updated by 3GPP, 3GPP2 and ETSI TISPAN by requiring support of networks other than GPRS, such as Wireless LAN, CDMA2000 and fixed lines.
IMS uses IETF protocols wherever possible, e.g., the Session Initiation Protocol (SIP). According to the 3GPP, IMS is not intended to standardize applications, but rather to aid the access of multimedia and voice applications from wireless and wireline terminals, i.e., to create a form of fixed-mobile convergence (FMC). This is done by having a horizontal control layer that isolates the access network from the service layer. From a logical architecture perspective, services need not have their own control functions, as the control layer is a common horizontal layer. However, in practice this does not necessarily translate into reduced cost and complexity.
Alternative and overlapping technologies for access and provisioning of services across wired and wireless networks include combinations of Generic Access Network, softswitches and "naked" SIP.
Since it is becoming increasingly easy to access content and contacts using mechanisms outside the control of traditional wireless/fixed operators, the interest in IMS is being challenged.
Examples of global standards based on IMS are MMTel, which is the basis for Voice over LTE (VoLTE), Wi-Fi Calling (VoWiFi), Video over LTE (ViLTE), SMS/MMS over WiFi and LTE, Unstructured Supplementary Service Data (USSD) over LTE, and Rich Communication Services (RCS), which is also known as joyn or Advanced Messaging and is now implemented by operators. RCS also added Presence/EAB (enhanced address book) functionality.
History
IMS was defined by an industry forum called 3G.IP, formed in 1999. 3G.IP developed the initial IMS architecture, which was brought to the 3rd Generation Partnership Project (3GPP), as part of their standardization work for 3G mobile phone systems in UMTS networks. It first appeared in Release 5 (evolution from 2G to 3G networks), when SIP-based multimedia was added. Support for the older GSM and GPRS networks was also provided.
3GPP2 (a different organization from 3GPP) based their CDMA2000 Multimedia Domain (MMD) on 3GPP IMS, adding support for CDMA2000.
3GPP release 6 added interworking with WLAN, inter-operability between IMS using different IP-connectivity networks, routing group identities, multiple registration and forking, presence, speech recognition and speech-enabled services (Push to talk).
3GPP release 7 added support for fixed networks by working together with TISPAN release R1.1. The AGCF (access gateway control function) and PES (PSTN emulation service) were introduced for the wire-line network so that services already provided in the PSTN could be carried over. The AGCF works as a bridge interconnecting the IMS networks and the Megaco/H.248 networks. Megaco/H.248 networks offer the possibility to connect terminals of the old legacy networks to the new generation of IP-based networks. The AGCF acts as a SIP user agent towards the IMS and performs the role of the P-CSCF. SIP user agent functionality is included in the AGCF, and not on the customer device but in the network itself. Release 7 also added voice call continuity between the circuit-switched and packet-switched domains (VCC), fixed broadband connection to the IMS, interworking with non-IMS networks, policy and charging control (PCC), and emergency sessions. It also added SMS over IP.
3GPP release 8 added support for LTE / SAE, multimedia session continuity, enhanced emergency sessions, SMS over SGs and IMS centralized services.
3GPP release 9 added support for IMS emergency calls over GPRS and EPS, enhancements to multimedia telephony, IMS media plane security, enhancements to services centralization and continuity.
3GPP release 10 added support for inter device transfer, enhancements to the single radio voice call continuity (SRVCC), enhancements to IMS emergency sessions.
3GPP release 11 added USSD simulation service, network-provided location information for IMS, SMS submit and delivery without MSISDN in IMS, and overload control.
Some operators opposed IMS because it was seen as complex and expensive.
In response, a cut-down version of IMS—enough of IMS to support voice and SMS over the LTE network—was defined and standardized in 2010 as Voice over LTE (VoLTE).
Architecture
Each of the functions in the diagram is explained below.
The IP multimedia core network subsystem is a collection of different functions, linked by standardized interfaces, which grouped form one IMS administrative network. A function is not a node (hardware box): An implementer is free to combine two functions in one node, or to split a single function into two or more nodes. Each node can also be present multiple times in a single network, for dimensioning, load balancing or organizational issues.
Access network
The user can connect to IMS in various ways, most of which use the standard IP. IMS terminals (such as mobile phones, personal digital assistants (PDAs) and computers) can register directly on IMS, even when they are roaming in another network or country (the visited network). The only requirement is that they can use IP and run SIP user agents. Fixed access (e.g., digital subscriber line (DSL), cable modems, Ethernet, FTTx), mobile access (e.g. 5G NR, LTE, W-CDMA, CDMA2000, GSM, GPRS) and wireless access (e.g., WLAN, WiMAX) are all supported. Other phone systems like plain old telephone service (POTS—the old analogue telephones), H.323 and non IMS-compatible systems, are supported through gateways.
Core network
HSS – Home subscriber server:
The home subscriber server (HSS), or user profile server function (UPSF), is a master user database that supports the IMS network entities that actually handle calls. It contains the subscription-related information (subscriber profiles), performs authentication and authorization of the user, and can provide information about the subscriber's location and IP information. It is similar to the GSM home location register (HLR) and Authentication centre (AuC).
A subscriber location function (SLF) is needed to map user addresses when multiple HSSs are used.
User identities:
Various identities may be associated with IMS: IP multimedia private identity (IMPI), IP multimedia public identity (IMPU), globally routable user agent URI (GRUU), wildcarded public user identity. Both IMPI and IMPU are not phone numbers or other series of digits, but uniform resource identifiers (URIs), that can be digits (a Tel URI, such as tel:+1-555-123-4567) or alphanumeric identifiers (a SIP URI, such as sip:john.doe@example.com).
IP Multimedia Private Identity:
The IP Multimedia Private Identity (IMPI) is a unique permanently allocated global identity assigned by the home network operator. It has the form of a Network Access Identifier (NAI), i.e. user.name@domain, and is used, for example, for registration, authorization, administration, and accounting purposes. Every IMS user shall have one IMPI.
IP Multimedia Public Identity:
The IP Multimedia Public Identity (IMPU) is used by any user for requesting communications to other users (e.g. this might be included on a business card). Also known as Address of Record (AOR). There can be multiple IMPU per IMPI. The IMPU can also be shared with another phone, so that both can be reached with the same identity (for example, a single phone-number for an entire family).
Globally Routable User Agent URI: Globally Routable User Agent URI (GRUU) is an identity that identifies a unique combination of IMPU and UE instance.
There are two types of GRUU: Public-GRUU (P-GRUU) and Temporary GRUU (T-GRUU).
P-GRUU reveal the IMPU and are very long lived.
T-GRUU do not reveal the IMPU and are valid until the contact is explicitly de-registered or the current registration expires.
Wildcarded Public User Identity:
A wildcarded Public User Identity expresses a set of IMPU grouped together.
The HSS subscriber database contains the IMPU, IMPI, IMSI, MSISDN, subscriber service profiles, service triggers, and other information.
Call Session Control Function (CSCF)
Several roles of SIP servers or proxies, collectively called Call Session Control Function (CSCF), are used to process SIP signaling packets in the IMS.
A Proxy-CSCF (P-CSCF) is a SIP proxy that is the first point of contact for the IMS terminal. It can be located either in the visited network (in full IMS networks) or in the home network (when the visited network is not IMS compliant yet). Some networks may use a Session Border Controller (SBC) for this function. The P-CSCF is at its core a specialized SBC for the User–network interface which not only protects the network, but also the IMS terminal. The use of an additional SBC between the IMS terminal and the P-CSCF is unnecessary and infeasible due to the signaling being encrypted on this leg. The terminal discovers its P-CSCF with either DHCP, or it may be configured (e.g. during initial provisioning or via a 3GPP IMS Management Object (MO)) or in the ISIM or assigned in the PDP Context (in General Packet Radio Service (GPRS)).
It is assigned to an IMS terminal before registration, and does not change for the duration of the registration.
It sits on the path of all signaling, and can inspect every signal; the IMS terminal must ignore any other unencrypted signaling.
It provides subscriber authentication and may establish an IPsec or TLS security association with the IMS terminal. This prevents spoofing attacks and replay attacks and protects the privacy of the subscriber.
It inspects the signaling and ensures that the IMS terminals do not misbehave (e.g. change normal signaling routes, disobey home network's routing policy).
It can compress and decompress SIP messages using SigComp, which reduces the round-trip over slow radio links.
It may include a Policy Decision Function (PDF), which authorizes media plane resources e.g., quality of service (QoS) over the media plane. It is used for policy control, bandwidth management, etc. The PDF can also be a separate function.
It also generates charging records.
An Interrogating-CSCF (I-CSCF) is another SIP function located at the edge of an administrative domain. Its IP address is published in the Domain Name System (DNS) of the domain (using NAPTR and SRV type of DNS records), so that remote servers can find it, and use it as a forwarding point (e.g., registering) for SIP packets to this domain.
it queries the HSS to retrieve the address of the S-CSCF and assign it to a user performing SIP registration
it also forwards SIP request or response to the S-CSCF
Up to Release 6 it can also be used to hide the internal network from the outside world (encrypting parts of the SIP message), in which case it's called a Topology Hiding Inter-network Gateway (THIG). From Release 7 onwards this "entry point" function is removed from the I-CSCF and is now part of the Interconnection Border Control Function (IBCF). The IBCF is used as gateway to external networks, and provides NAT and firewall functions (pinholing). The IBCF is a session border controller specialized for the network-to-network interface (NNI).
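As a rough sketch of how a remote server might use those DNS records to locate an IMS domain's entry point, assuming the third-party dnspython package; the domain name and the record contents shown in comments are placeholders, not data from any real deployment.

```python
# Locate a domain's SIP entry point via DNS: NAPTR first, then SRV.
import dns.resolver

domain = "ims.example.com"  # placeholder IMS domain

# 1. NAPTR lookup: which SIP services/transports does the domain publish?
for naptr in dns.resolver.resolve(domain, "NAPTR"):
    service = naptr.service.decode()          # e.g. "SIP+D2U" (SIP over UDP)
    replacement = naptr.replacement.to_text() # e.g. "_sip._udp.ims.example.com."
    print(service, "->", replacement)

    # 2. SRV lookup on the replacement name: host and port of the entry point.
    for srv in dns.resolver.resolve(replacement, "SRV"):
        print("   contact", srv.target.to_text(), "port", srv.port)
```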
A Serving-CSCF (S-CSCF) is the central node of the signaling plane. It is a SIP server, but performs session control too. It is always located in the home network. It uses Diameter Cx and Dx interfaces to the HSS to download user profiles and upload user-to-S-CSCF associations (the user profile is only cached locally for processing reasons and is not changed). All necessary subscriber profile information is loaded from the HSS.
it handles SIP registrations, which allows it to bind the user location (e.g., the IP address of the terminal) and the SIP address
it sits on the path of all signaling messages of the locally registered users, and can inspect every message
it decides to which application server(s) the SIP message will be forwarded, in order to provide their services
it provides routing services, typically using Electronic Numbering (ENUM) lookups
it enforces the policy of the network operator
there can be multiple S-CSCFs in the network for load distribution and high availability reasons. It's the HSS that assigns the S-CSCF to a user, when it's queried by the I-CSCF. There are multiple options for this purpose, including a mandatory/optional capabilities to be matched between subscribers and S-CSCFs.
Application servers
SIP Application servers (AS) host and execute services, and interface with the S-CSCF using SIP. An example of an application server that is being developed in 3GPP is the Voice call continuity Function (VCC Server). Depending on the actual service, the AS can operate in SIP proxy mode, SIP UA (user agent) mode or SIP B2BUA mode. An AS can be located in the home network or in an external third-party network. If located in the home network, it can query the HSS with the Diameter Sh or Si interfaces (for a SIP-AS).
SIP AS: Host and execute IMS specific services
IP Multimedia Service Switching Function (IM-SSF): Interfaces SIP to CAP to communicate with CAMEL Application Servers
OSA service capability server (OSA SCS): Interfaces SIP to the OSA framework;
Functional model
The AS-ILCM (Application Server - Incoming Leg Control Model) and AS-OLCM (Application Server - Outgoing Leg Control Model) store transaction state, and may optionally store session state depending on the specific service being executed.
The AS-ILCM interfaces to the S-CSCF (ILCM) for an incoming leg and the AS-OLCM interfaces to the S-CSCF (OLCM) for an outgoing leg.
Application Logic provides the service(s) and interacts between the AS-ILCM and AS-OLCM.
Public Service Identity
Public Service Identities (PSI) are identities that identify services, which are hosted by application servers. As user identities, PSI takes the form of either a SIP or Tel URI. PSIs are stored in the HSS either as a distinct PSI or as a wildcarded PSI:
a distinct PSI contains the PSI that is used in routing
a wildcarded PSI represents a collection of PSIs.
Media servers
The Media Resource Function (MRF) provides media related functions such as media manipulation (e.g. voice stream mixing) and playing of tones and announcements.
Each MRF is further divided into a media resource function controller (MRFC) and a media resource function processor (MRFP).
The MRFC is a signalling plane node that interprets information coming from an AS and S-CSCF to control the MRFP
The MRFP is a media plane node used to mix, source or process media streams. It can also manage access right to shared resources.
The Media Resource Broker (MRB) is a functional entity that is responsible for both collection of appropriate published MRF information and supplying of appropriate MRF information to consuming entities such as the AS. MRB can be used in two modes:
Query mode: AS queries the MRB for media and sets up the call using the response of MRB
In-Line Mode: AS sends a SIP INVITE to the MRB. The MRB sets up the call
Breakout gateway
A Breakout Gateway Control Function (BGCF) is a SIP proxy which processes requests for routing from an S-CSCF when the S-CSCF has determined that the session cannot be routed using DNS or ENUM/DNS. It includes routing functionality based on telephone numbers.
PSTN gateways
A PSTN/CS gateway interfaces with PSTN circuit switched (CS) networks. For signalling, CS networks use ISDN User Part (ISUP) (or BICC) over Message Transfer Part (MTP), while IMS uses SIP over IP. For media, CS networks use Pulse-code modulation (PCM), while IMS uses Real-time Transport Protocol (RTP).
A signalling gateway (SGW) interfaces with the signalling plane of the CS. It transforms lower layer protocols as Stream Control Transmission Protocol (SCTP, an IP protocol) into Message Transfer Part (MTP, a Signalling System 7 (SS7) protocol), to pass ISDN User Part (ISUP) from the MGCF to the CS network. The SGW does call control protocol conversion between SIP and ISUP/BICC under the control of the MGCF.
A media gateway controller function (MGCF) is a SIP endpoint that interfaces with the SGW over SCTP. It also controls the resources in a Media Gateway (MGW) across an H.248 interface.
A media gateway (MGW) interfaces with the media plane of the CS network, by converting between RTP and PCM. It can also transcode when the codecs don't match (e.g., IMS might use AMR, PSTN might use G.711).
Media resources
Media Resources are those components that operate on the media plane and are under the control of IMS core functions. Specifically, Media Server (MS) and Media gateway (MGW)
NGN interconnection
There are two types of next-generation networking interconnection:
Service-oriented interconnection (SoIx): The physical and logical linking of NGN domains that allows carriers and service providers to offer services over NGN (i.e., IMS and PES) platforms with control, signalling (i.e., session based), which provides defined levels of interoperability. For instance, this is the case of "carrier grade" voice and/or multimedia services over IP interconnection. "Defined levels of interoperability" are dependent upon the service or the QoS or the Security, etc.
Connectivity-oriented interconnection (CoIx): The physical and logical linking of carriers and service providers based on simple IP connectivity irrespective of the levels of interoperability. For example, an IP interconnection of this type is not aware of the specific end to end service and, as a consequence, service specific network performance, QoS and security requirements are not necessarily assured. This definition does not exclude that some services may provide a defined level of interoperability. However, only SoIx fully satisfies NGN interoperability requirements.
An NGN interconnection mode can be direct or indirect. Direct interconnection refers to the interconnection between two network domains without any intermediate network domain. Indirect interconnection at one layer refers to the interconnection between two network domains with one or more intermediate network domain(s) acting as transit networks. The intermediate network domain(s) provide(s) transit functionality to the two other network domains. Different interconnection modes may be used for carrying service layer signalling and media traffic.
Charging
Offline charging is applied to users who pay for their services periodically (e.g., at the end of the month). Online charging, also known as credit-based charging, is used for prepaid services, or real-time credit control of postpaid services. Both may be applied to the same session. Charging function addresses are addresses distributed to each IMS entity and provide a common location for each entity to send charging information. Charging data function (CDF) addresses are used for offline billing and Online Charging Function (OCF) addresses for online billing.
Offline charging: All the SIP network entities (P-CSCF, I-CSCF, S-CSCF, BGCF, MRFC, MGCF, AS) involved in the session use the Diameter Rf interface to send accounting information to a CDF located in the same domain. The CDF collects all this information and builds a call detail record (CDR), which is sent to the billing system of the domain. Each session carries an IMS Charging Identifier (ICID) as a unique identifier generated by the first IMS entity involved in a SIP transaction and used for the correlation with CDRs. The Inter Operator Identifier (IOI) is a globally unique identifier shared between sending and receiving networks. Each domain has its own charging network. Billing systems in different domains will also exchange information, so that roaming charges can be applied.
Online charging: The S-CSCF talks to an IMS gateway function (IMS-GWF), which looks like a regular SIP application server. The IMS-GWF can signal the S-CSCF to terminate the session when the user runs out of credit during a session. The AS and MRFC use the Diameter Ro interface towards an OCF.
When immediate event charging (IEC) is used, a number of credit units is immediately deducted from the user's account by the ECF and the MRFC or AS is then authorized to provide the service. The service is not authorized when not enough credit units are available.
When event charging with unit reservation (ECUR) is used, the ECF (event charging function) first reserves a number of credit units in the user's account and then authorizes the MRFC or the AS. After the service is over, the number of spent credit units is reported and deducted from the account; the reserved credit units are then cleared.
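A toy Python model of the ECUR flow just described (the class, numbers and method names are invented for illustration; real online charging uses Diameter Ro messages rather than local calls):

```python
# Event charging with unit reservation (ECUR), simplified: reserve credit,
# deliver the service, then report actual usage and release the remainder.
class OnlineChargingFunction:
    def __init__(self, balance):
        self.balance = balance
        self.reserved = 0

    def reserve(self, units):
        granted = min(units, self.balance)
        self.balance -= granted
        self.reserved += granted
        return granted            # the service is authorized only if granted > 0

    def report_usage(self, used):
        # Deduct what was spent; return unused reserved units to the account.
        refund = max(self.reserved - used, 0)
        self.balance += refund
        self.reserved = 0

ocf = OnlineChargingFunction(balance=100)
granted = ocf.reserve(30)         # reserve 30 units before the session starts
if granted:
    ocf.report_usage(used=12)     # session over: 12 units spent, 18 returned
print(ocf.balance)                # -> 88
```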
IMS-based PES architecture
IMS-based PES (PSTN Emulation System) provides IP networks services to analog devices. IMS-based PES allows non-IMS devices to appear to IMS as normal SIP users. Analog terminal using standard analog interfaces can connect to IMS-based PES in two ways:
Via an A-MGW (Access Media Gateway) that is linked to and controlled by the AGCF. The AGCF is placed within the operator's network and controls multiple A-MGWs. The A-MGW and AGCF communicate using H.248.1 (Megaco) over the P1 reference point. POTS phones connect to the A-MGW over the z interface. The signalling is converted to H.248 in the A-MGW and passed to the AGCF. The AGCF interprets the H.248 signals and other inputs from the A-MGW to format them into appropriate SIP messages. The AGCF presents itself as a P-CSCF to the S-CSCF and passes the generated SIP messages to the S-CSCF or to the IP border via the IBCF (Interconnection Border Control Function). Services presented to the S-CSCF in SIP messages trigger the PES AS. The AGCF also has certain service-independent logic; for example, on receipt of an off-hook event from the A-MGW, the AGCF requests the A-MGW to play a dial tone.
Via a VGW (VoIP gateway) or SIP gateway/adapter on the customer premises. POTS phones connect via the VoIP gateway to the P-CSCF directly. Operators mostly use session border controllers between VoIP gateways and P-CSCFs for security and to hide the network topology. VoIP gateways link to IMS using SIP over the Gm reference point. The conversion from POTS service over the z interface to SIP occurs in the customer-premises VoIP gateway. POTS signaling is converted to SIP and passed on to the P-CSCF. The VGW acts as a SIP user agent and appears to the P-CSCF as a SIP terminal.
Both A-MGW and VGW are unaware of the services. They only relay call control signalling to and from the PSTN terminal. Session control and handling is done by IMS components.
Interfaces description
Session handling
One of the most important features of IMS, that of allowing for a SIP application to be dynamically and differentially (based on the user's profile) triggered, is implemented as a filter-and-redirect signalling mechanism in the S-CSCF.
The S-CSCF might apply filter criteria to determine the need to forward SIP requests to AS. It is important to note that services for the originating party will be applied in the originating network, while the services for the terminating party will be applied in the terminating network, all in the respective S-CSCFs.
Initial filter criteria
An initial filter criteria (iFC) is an XML-based format used for describing control logic. iFCs represent a provisioned subscription of a user to an application. They are stored in the HSS as part of the IMS Subscription Profile and are downloaded to the S-CSCF upon user registration (for registered users) or on processing demand (for services, acting as unregistered users). iFCs are valid throughout the registration lifetime or until the User Profile is changed.
The iFC is composed of:
Priority - determines the order of checking the trigger.
Trigger point - logical condition(s) which is verified against initial dialog creating SIP requests or stand-alone SIP requests.
Application server URI - specifies the application server to be forwarded to when the trigger point matches.
There are two types of iFCs:
Shared - When provisioning, only a reference number (the shared iFC number) is assigned to the subscriber. During registration, only the number is sent to the CSCF, not the entire XML description. The complete XML will have previously been stored on the CSCF.
Non-shared - when provisioning, the entire XML description of the iFC is assigned to the subscriber. During registration, the entire XML description is sent to the CSCF.
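A deliberately simplified Python sketch of how an S-CSCF might evaluate iFCs against an incoming SIP request (the field names, trigger conditions and SIP URIs are invented for illustration and do not follow the exact 3GPP XML schema):

```python
# Walk the user's iFCs in priority order and collect the application servers
# to which the request should be forwarded.
def matching_application_servers(ifcs, request):
    servers = []
    for ifc in sorted(ifcs, key=lambda f: f["priority"]):
        trigger = ifc["trigger"]            # logical condition on the request
        if trigger(request):
            servers.append(ifc["as_uri"])   # forward the SIP request here
    return servers

ifcs = [
    {"priority": 1,
     "trigger": lambda req: req["method"] == "INVITE",
     "as_uri": "sip:mmtel-as.home.example.net"},
    {"priority": 2,
     "trigger": lambda req: req["method"] == "MESSAGE",
     "as_uri": "sip:im-as.home.example.net"},
]

request = {"method": "INVITE", "request_uri": "sip:alice@home.example.net"}
print(matching_application_servers(ifcs, request))
# -> ['sip:mmtel-as.home.example.net']
```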
Security aspects of early IMS and non-3GPP systems
It is envisaged that security defined in TS 33.203 may not be available for a while, especially because of the lack of USIM/ISIM interfaces and the prevalence of devices that support only IPv4. For this situation, to provide some protection against the most significant threats, 3GPP defines some security mechanisms, which are informally known as "early IMS security," in TR 33.978. This mechanism relies on the authentication performed during the network attachment procedures, which binds the user's profile to its IP address. This mechanism is also weak because the signaling is not protected on the user–network interface.
CableLabs in PacketCable 2.0, which also adopted the IMS architecture but has no USIM/ISIM capabilities in its terminals, published deltas to the 3GPP specifications in which Digest-MD5 is a valid authentication option. Later on, TISPAN made a similar effort given its fixed-network scope, although the procedures differ. To compensate for the lack of IPsec capabilities, TLS has been added as an option for securing the Gm interface. Later 3GPP releases have included the Digest-MD5 method, towards a Common-IMS platform, yet again with their own, different approach. Although all 3 variants of Digest-MD5 authentication have the same functionality and are the same from the IMS terminal's perspective, the implementations on the Cx interface between the S-CSCF and the HSS are different.
See also
4G
Generic Access Network
Image share
OMA Instant Messaging and Presence Service
IP connectivity access network
Mobile broadband
Mobile VoIP
Peer-to-peer video sharing
Service capability interaction manager
System Architecture Evolution
SIMPLE
SIP extensions for the IP multimedia subsystem
Text over IP
Ultra Mobile Broadband
Video share
Voice call continuity
References
Further reading
External links
A decent IMS tutorial
IMS Call Flows
Audio network protocols
3GPP standards
LTE (telecommunication)
Multimedia
Network architecture
Telecommunications
Telecommunications infrastructure
Videotelephony
Voice over IP | IP Multimedia Subsystem | [
"Technology",
"Engineering"
] | 6,195 | [
"Information and communications technology",
"Network architecture",
"Computer networks engineering",
"Telecommunications",
"Multimedia",
"IMS services"
] |
1,153,819 | https://en.wikipedia.org/wiki/Cordierite | Cordierite (mineralogy) or iolite (gemology) is a magnesium iron aluminium cyclosilicate. Iron is almost always present, and a solid solution exists between Mg-rich cordierite and Fe-rich sekaninaite, with a series formula ranging from (Mg,Fe)2Al3(AlSi5O18) to (Fe,Mg)2Al3(AlSi5O18). A high-temperature polymorph exists, indialite, which is isostructural with beryl and has a random distribution of Al in the rings. Cordierite is also synthesized and used in high temperature applications such as catalytic converters and pizza stones.
Name and discovery
Cordierite, which was discovered in 1813, in specimens from Níjar, Almería, Spain, is named after the French geologist Louis Cordier (1777–1861).
Occurrence
Cordierite typically occurs in contact or regional metamorphism of pelitic rocks. It is especially common in hornfels produced by contact metamorphism of pelitic rocks. Two common metamorphic mineral assemblages include sillimanite-cordierite-spinel and cordierite-spinel-plagioclase-orthopyroxene. Other associated minerals include garnet (cordierite-garnet-sillimanite gneisses) and anthophyllite. Cordierite also occurs in some granites, pegmatites, and norites in gabbroic magmas. Alteration products include mica, chlorite, and talc. Cordierite occurs, for example, in the granite contact zone at Geevor Tin Mine in Cornwall.
Commercial use
Catalytic converters are commonly made from ceramics containing a large proportion of synthetic cordierite. The manufacturing process deliberately aligns the cordierite crystals to make use of the very low thermal expansion along one axis. This prevents thermal shock cracking from taking place when the catalytic converter is used.
Gem variety
As the transparent variety iolite, it is often used as a gemstone. The name "iolite" comes from the Greek word for violet. Another old name is dichroite, a Greek word meaning "two-colored rock", a reference to cordierite's strong pleochroism. It has also been called "water-sapphire" and "Vikings' Compass" because of its usefulness in determining the direction of the sun on overcast days, the Vikings having used it for this purpose. This works by determining the direction of polarization of the sky overhead. Light scattered by air molecules is polarized, and the direction of the polarization is at right angles to a line to the sun, even when the sun's disk itself is obscured by dense fog or lies just below the horizon.
Gem quality iolite varies in color from sapphire blue to blue violet to yellowish gray to light blue as the light angle changes. Iolite is sometimes used as an inexpensive substitute for sapphire. It is much softer than sapphires and is abundantly found in Australia (Northern Territory), Brazil, Burma, Canada (Yellowknife area of the Northwest Territories), India, Madagascar, Namibia, Sri Lanka, Tanzania and the United States (Connecticut). The largest iolite crystal found weighed more than 24,000 carats (4,800g), and was discovered in Wyoming, US.
Another name for blue iolite is steinheilite, after Fabian Steinheil, the Russian military governor of Finland who observed that it was a different mineral from quartz. Praseolite is another iolite variety which results from heat treatment. It should not be confused with prasiolite.
Applications
Cordierite is used in manufacturing kiln furniture for its impressive thermal shock resistance, which allows it to withstand rapid temperature changes without cracking. It is also employed to produce insulation equipment and electric heating elements in fuses, thermostats, and lighting technology.
In the automotive industry, cordierite is used in catalytic converters due to its excellent thermal stability and low thermal expansion. It forms the honeycomb substrates within the converters, which support the catalytic coating that reduces harmful emissions.
See also
List of minerals
List of minerals named after people
Sunstone (medieval)
References
External links
Mineral galleries
https://www.gemstone.org/education/gem-by-gem/222-iolite
Magnesium minerals
Iron(II) minerals
Aluminium minerals
Cyclosilicates
Orthorhombic minerals
Minerals in space group 66
Gemstones | Cordierite | [
"Physics"
] | 902 | [
"Materials",
"Gemstones",
"Matter"
] |
1,154,853 | https://en.wikipedia.org/wiki/Pyrosequencing | Pyrosequencing is a method of DNA sequencing (determining the order of nucleotides in DNA) based on the "sequencing by synthesis" principle, in which the sequencing is performed by detecting the nucleotide incorporated by a DNA polymerase. Pyrosequencing relies on light detection based on a chain reaction when pyrophosphate is released. Hence, the name pyrosequencing.
The principle of pyrosequencing was first described in 1993 by Bertil Pettersson, Mathias Uhlen and Pål Nyren by combining the solid phase sequencing method using streptavidin coated magnetic beads with recombinant DNA polymerase lacking 3´ to 5´ exonuclease activity (proof-reading) and luminescence detection using the firefly luciferase enzyme. A mixture of three enzymes (DNA polymerase, ATP sulfurylase and firefly luciferase) and a nucleotide (dNTP) are added to single stranded DNA to be sequenced and the incorporation of nucleotide is followed by measuring the light emitted. The intensity of the light determines if 0, 1 or more nucleotides have been incorporated, thus showing how many complementary nucleotides are present on the template strand. The nucleotide mixture is removed before the next nucleotide mixture is added. This process is repeated with each of the four nucleotides until the DNA sequence of the single stranded template is determined.
A second solution-based method for pyrosequencing was described in 1998 by Mostafa Ronaghi, Mathias Uhlen and Pål Nyren. In this alternative method, an additional enzyme apyrase is introduced to remove nucleotides that are not incorporated by the DNA polymerase. This enabled the enzyme mixture including the DNA polymerase, the luciferase and the apyrase to be added at the start and kept throughout the procedure, thus providing a simple set-up suitable for automation. An automated instrument based on this principle was introduced to the market the following year by the company Pyrosequencing.
A third microfluidic variant of the pyrosequencing method was described in 2005 by Jonathan Rothberg and co-workers at the company 454 Life Sciences. This alternative approach for pyrosequencing was based on the original principle of attaching the DNA to be sequenced to a solid support and they showed that sequencing could be performed in a highly parallel manner using a microfabricated microarray. This allowed for high-throughput DNA sequencing and an automated instrument was introduced to the market. This became the first next generation sequencing instrument starting a new era in genomics research, with rapidly falling prices for DNA sequencing allowing whole genome sequencing at affordable prices.
Procedure
"Sequencing by synthesis" involves taking a single strand of the DNA to be sequenced and then synthesizing its complementary strand enzymatically. The pyrosequencing method is based on detecting the activity of DNA polymerase (a DNA synthesizing enzyme) with another chemoluminescent enzyme. Essentially, the method allows sequencing a single strand of DNA by synthesizing the complementary strand along it, one base pair at a time, and detecting which base was actually added at each step. The template DNA is immobile, and solutions of A, C, G, and T nucleotides are sequentially added and removed from the reaction. Light is produced only when the nucleotide solution complements the first unpaired base of the template. The sequence of solutions which produce chemiluminescent signals allows the determination of the sequence of the template.
For the solution-based version of pyrosequencing, the single-strand DNA (ssDNA) template is hybridized to a sequencing primer and incubated with the enzymes DNA polymerase, ATP sulfurylase, luciferase and apyrase, and with the substrates adenosine 5´ phosphosulfate (APS) and luciferin.
The addition of one of the four deoxynucleotide triphosphates (dNTPs) (dATPαS, which is not a substrate for a luciferase, is added instead of dATP to avoid noise) initiates the second step. DNA polymerase incorporates the correct, complementary dNTPs onto the template. This incorporation releases pyrophosphate (PPi).
ATP sulfurylase converts PPi to ATP in the presence of adenosine 5´ phosphosulfate. This ATP acts as a substrate for the luciferase-mediated conversion of luciferin to oxyluciferin that generates visible light in amounts that are proportional to the amount of ATP. The light produced in the luciferase-catalyzed reaction is detected by a camera and analyzed in a program.
Unincorporated nucleotides and ATP are degraded by the apyrase, and the reaction can restart with another nucleotide.
The process can be represented by the following equations:
PPi + APS → ATP + Sulfate (catalyzed by ATP-sulfurylase);
ATP + luciferin + O2 → AMP + PPi + oxyluciferin + CO2 + hv (catalyzed by luciferase);
where:
PPi is pyrophosphate
APS is adenosine 5´ phosphosulfate;
ATP is adenosine triphosphate;
O2 is oxygen molecule;
AMP is adenosine monophosphate;
CO2 is carbon dioxide;
hv is light.
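The way peak heights encode how many identical bases are incorporated in a row can be illustrated with a short simulation. This is a minimal sketch, not any instrument's actual signal model; the template sequence, the cyclic dispensation order, and the assumption that light intensity simply equals the number of incorporated bases are all illustrative choices.

```python
# Minimal sketch of pyrogram generation: for a cyclic dNTP dispensation order,
# the light emitted in each dispensation is taken to be proportional to the
# number of complementary bases incorporated, so homopolymer runs give
# proportionally taller peaks. Template and dispensation order are assumptions.

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def simulate_pyrogram(template, dispensations):
    """Return (dispensed base, relative light intensity) for each dispensation."""
    pos = 0                      # next unpaired position on the template
    signal = []
    for dntp in dispensations:
        incorporated = 0
        # The polymerase extends as long as the dispensed dNTP pairs with the
        # next template base; apyrase then degrades any leftover nucleotide.
        while pos < len(template) and COMPLEMENT[dntp] == template[pos]:
            incorporated += 1
            pos += 1
        signal.append((dntp, incorporated))
    return signal

if __name__ == "__main__":
    template = "TTACGGA"          # single-stranded template (illustrative)
    order = "ACGT" * 4            # cyclic dispensation order (illustrative)
    for base, intensity in simulate_pyrogram(template, order):
        print(f"{base}: {'*' * intensity} ({intensity})")
```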
Limitations
Currently, a limitation of the method is that the lengths of individual reads of DNA sequence are in the neighborhood of 300-500 nucleotides, shorter than the 800-1000 obtainable with chain termination methods (e.g. Sanger sequencing). This can make the process of genome assembly more difficult, particularly for sequences containing a large amount of repetitive DNA. Lack of proof-reading activity limits accuracy of this method.
Commercialization
The company Pyrosequencing AB in Uppsala, Sweden was founded with venture capital provided by HealthCap in order to commercialize machinery and reagents for sequencing short stretches of DNA using the pyrosequencing technique. Pyrosequencing AB was listed on the Stockholm Stock Exchange in 1999. It was renamed to Biotage in 2003. The pyrosequencing business line was acquired by Qiagen in 2008. Pyrosequencing technology was further licensed to 454 Life Sciences. 454 developed an array-based pyrosequencing technology which emerged as a platform for large-scale DNA sequencing, including genome sequencing and metagenomics.
Roche announced the discontinuation of the 454 sequencing platform in 2013.
References
Further reading
Biotechnology
DNA sequencing methods
Life sciences industry
Molecular biology | Pyrosequencing | [
"Chemistry",
"Biology"
] | 1,405 | [
"Genetics techniques",
"Life sciences industry",
"Biotechnology",
"DNA sequencing methods",
"DNA sequencing",
"nan",
"Molecular biology",
"Biochemistry"
] |
1,156,215 | https://en.wikipedia.org/wiki/Helmholtz%20equation | In mathematics, the Helmholtz equation is the eigenvalue problem for the Laplace operator. It corresponds to the elliptic partial differential equation:
∇²f = −k²f,
where ∇² is the Laplace operator, −k² is the eigenvalue, and f is the (eigen)function. When the equation is applied to waves, k is known as the wave number. The Helmholtz equation has a variety of applications in physics and other sciences, including the wave equation, the diffusion equation, and the Schrödinger equation for a free particle.
In optics, the Helmholtz equation is the wave equation for the electric field.
The equation is named after Hermann von Helmholtz, who studied it in 1860.
Motivation and uses
The Helmholtz equation often arises in the study of physical problems involving partial differential equations (PDEs) in both space and time. The Helmholtz equation, which represents a time-independent form of the wave equation, results from applying the technique of separation of variables to reduce the complexity of the analysis.
For example, consider the wave equation
(∇² − (1/c²) ∂²/∂t²) u(r, t) = 0.
Separation of variables begins by assuming that the wave function u(r, t) is in fact separable:
u(r, t) = A(r) T(t).
Substituting this form into the wave equation and then simplifying, we obtain the following equation:
(∇²A)/A = (1/(c²T)) (d²T/dt²).
Notice that the expression on the left side depends only on r, whereas the right expression depends only on t. As a result, this equation is valid in the general case if and only if both sides of the equation are equal to the same constant value. This argument is key in the technique of solving linear partial differential equations by separation of variables. From this observation, we obtain two equations, one for A(r), the other for T(t):
(∇²A)/A = −k²,
(1/(c²T)) (d²T/dt²) = −k²,
where we have chosen, without loss of generality, the expression −k² for the value of the constant. (It is equally valid to use any constant as the separation constant; −k² is chosen only for convenience in the resulting solutions.)
Rearranging the first equation, we obtain the (homogeneous) Helmholtz equation:
∇²A + k²A = 0.
Likewise, after making the substitution ω = kc, where k is the wave number, and ω is the angular frequency (assuming a monochromatic field), the second equation becomes
(d²T/dt²) + ω²T = 0.
We now have Helmholtz's equation for the spatial variable and a second-order ordinary differential equation in time. The solution in time will be a linear combination of sine and cosine functions, whose exact form is determined by initial conditions, while the form of the solution in space will depend on the boundary conditions. Alternatively, integral transforms, such as the Laplace or Fourier transform, are often used to transform a hyperbolic PDE into a form of the Helmholtz equation.
Because of its relationship to the wave equation, the Helmholtz equation arises in problems in such areas of physics as the study of electromagnetic radiation, seismology, and acoustics.
Solving the Helmholtz equation using separation of variables
The solution to the spatial Helmholtz equation
∇²A = −k²A
can be obtained for simple geometries using separation of variables.
Vibrating membrane
The two-dimensional analogue of the vibrating string is the vibrating membrane, with the edges clamped to be motionless. The Helmholtz equation was solved for many basic shapes in the 19th century: the rectangular membrane by Siméon Denis Poisson in 1829, the equilateral triangle by Gabriel Lamé in 1852, and the circular membrane by Alfred Clebsch in 1862. The elliptical drumhead was studied by Émile Mathieu, leading to Mathieu's differential equation.
If the edges of a shape are straight line segments, then a solution is integrable or knowable in closed-form only if it is expressible as a finite linear combination of plane waves that satisfy the boundary conditions (zero at the boundary, i.e., membrane clamped).
If the domain is a circle of radius a, then it is appropriate to introduce polar coordinates r and θ. The Helmholtz equation takes the form
A_rr + (1/r) A_r + (1/r²) A_θθ + k²A = 0.
We may impose the boundary condition that A vanishes if r = a; thus
A(a, θ) = 0.
The method of separation of variables leads to trial solutions of the form
A(r, θ) = R(r) Θ(θ),
where Θ must be periodic of period 2π. This leads to
Θ″ + n²Θ = 0,
r² R″ + r R′ + (r²k² − n²) R = 0.
It follows from the periodicity condition that
Θ = α cos(nθ) + β sin(nθ),
and that n must be an integer. The radial component R has the form
R(r) = γ J_n(ρ),
where the Bessel function J_n(ρ) satisfies Bessel's equation
ρ² J_n″ + ρ J_n′ + (ρ² − n²) J_n = 0,
and ρ = kr. The radial function J_n has infinitely many roots for each value of n, denoted by ρ_{m,n}. The boundary condition that A vanishes where r = a will be satisfied if the corresponding wavenumbers are given by
k_{m,n} = ρ_{m,n} / a.
The general solution then takes the form of a generalized Fourier series of terms involving products of J_n(k_{m,n} r) and the sine (or cosine) of nθ. These solutions are the modes of vibration of a circular drumhead.
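The allowed wavenumbers k_{m,n} = ρ_{m,n}/a can be computed numerically from the Bessel-function roots. The following is a minimal sketch using SciPy; the membrane radius and the number of modes listed are arbitrary assumptions.

```python
# Minimal sketch: wavenumbers of the clamped circular membrane,
# k_{m,n} = rho_{m,n} / a, where rho_{m,n} is the m-th positive root of J_n.
# The radius a and the number of modes printed are arbitrary assumptions.
from scipy.special import jn_zeros, jv

a = 1.0                                   # membrane radius (assumed)
for n in range(3):                        # angular order of the mode
    roots = jn_zeros(n, 3)                # first three positive roots of J_n
    for m, rho in enumerate(roots, start=1):
        k = rho / a                       # wavenumber of mode (m, n)
        # sanity check: the radial factor J_n(k r) vanishes at the clamped edge r = a
        assert abs(jv(n, k * a)) < 1e-8
        print(f"mode (m={m}, n={n}): k = {k:.4f}")
```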
Three-dimensional solutions
In spherical coordinates, the solution is:
This solution arises from the spatial solution of the wave equation and diffusion equation. Here and are the spherical Bessel functions, and are the spherical harmonics (Abramowitz and Stegun, 1964). Note that these forms are general solutions, and require boundary conditions to be specified to be used in any specific case. For infinite exterior domains, a radiation condition may also be required (Sommerfeld, 1949).
Writing function has asymptotics
where function is called scattering amplitude and is the value of at each boundary point
Three-dimensional solutions given the function on a 2-dimensional plane
Given a 2-dimensional plane where A is known, the solution to the Helmholtz equation is given by:
where
is the solution at the 2-dimensional plane,
As approaches zero, all contributions from the integral vanish except for Thus up to a numerical factor, which can be verified to be by transforming the integral to polar coordinates
This solution is important in diffraction theory, e.g. in deriving Fresnel diffraction.
Paraxial approximation
In the paraxial approximation of the Helmholtz equation, the complex amplitude A is expressed as
A = u(r) e^{ikz},
where u represents the complex-valued amplitude which modulates the sinusoidal plane wave represented by the exponential factor. Then under a suitable assumption, u approximately solves
∇⊥² u + 2ik (∂u/∂z) = 0,
where ∇⊥² = ∂²/∂x² + ∂²/∂y² is the transverse part of the Laplacian.
This equation has important applications in the science of optics, where it provides solutions that describe the propagation of electromagnetic waves (light) in the form of either paraboloidal waves or Gaussian beams. Most lasers emit beams that take this form.
The assumption under which the paraxial approximation is valid is that the derivative of the amplitude function is a slowly varying function of :
This condition is equivalent to saying that the angle between the wave vector and the optical axis is small: .
The paraxial form of the Helmholtz equation is found by substituting the above-stated expression for the complex amplitude into the general form of the Helmholtz equation as follows:
Expansion and cancellation yields the following:
Because of the paraxial inequality stated above, the term is neglected in comparison with the term. This yields the paraxial Helmholtz equation. Substituting then gives the paraxial equation for the original complex amplitude :
The Fresnel diffraction integral is an exact solution to the paraxial Helmholtz equation.
Inhomogeneous Helmholtz equation
The inhomogeneous Helmholtz equation is the equation
∇²A(x) + k²A(x) = −f(x)  in R^n,
where f : R^n → C is a function with compact support, and n = 1, 2, 3. This equation is very similar to the screened Poisson equation, and would be identical if the plus sign (in front of the k² term) were switched to a minus sign.
In order to solve this equation uniquely, one needs to specify a boundary condition at infinity, which is typically the Sommerfeld radiation condition
lim_{r → ∞} r^{(n−1)/2} (∂A/∂r − ikA) = 0
in spatial dimensions, for all angles (i.e. any value of ). Here where are the coordinates of the vector .
With this condition, the solution to the inhomogeneous Helmholtz equation is
A(x) = ∫ G(x − y) f(y) dy
(notice this integral is actually over a finite region, since f has compact support). Here, G is the Green's function of this equation, that is, the solution to the inhomogeneous Helmholtz equation with f equaling the Dirac delta function, so G satisfies
∇²G + k²G = −δ(x)  in R^n.
The expression for the Green's function depends on the dimension n of the space. One has
G(x) = i e^{ik|x|} / (2k)
for n = 1,
G(x) = (i/4) H_0^{(1)}(k|x|)
for n = 2, where H_0^{(1)} is a Hankel function, and
G(x) = e^{ik|x|} / (4π|x|)
for n = 3. Note that we have chosen the boundary condition that the Green's function is an outgoing wave for |x| → ∞.
Finally, for general n,
where and .
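As a quick numerical illustration, the low-dimensional Green's functions listed above can be evaluated directly. This is a minimal sketch; the wavenumber and the evaluation distance are arbitrary assumptions.

```python
# Minimal sketch: evaluate the outgoing-wave Green's functions of the Helmholtz
# equation for n = 1, 2, 3 (convention (laplacian + k^2) G = -delta, as above).
# The wavenumber k and distance r are arbitrary assumptions.
import numpy as np
from scipy.special import hankel1

def green_1d(r, k):
    return 1j * np.exp(1j * k * abs(r)) / (2 * k)

def green_2d(r, k):
    return 0.25j * hankel1(0, k * abs(r))

def green_3d(r, k):
    return np.exp(1j * k * abs(r)) / (4 * np.pi * abs(r))

k, r = 2.0, 1.5
for name, g in (("n=1", green_1d), ("n=2", green_2d), ("n=3", green_3d)):
    print(name, g(r, k))
```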
See also
Laplace's equation (a particular case of the Helmholtz equation)
Weyl expansion
Notes
References
Further reading
External links
Helmholtz Equation at EqWorld: The World of Mathematical Equations.
Vibrating Circular Membrane by Sam Blake, The Wolfram Demonstrations Project.
Green's functions for the wave, Helmholtz and Poisson equations in a two-dimensional boundless domain
Waves
Elliptic partial differential equations
Eponymous equations of physics
Hermann von Helmholtz | Helmholtz equation | [
"Physics"
] | 1,759 | [
"Physical phenomena",
"Equations of physics",
"Eponymous equations of physics",
"Waves",
"Motion (physics)"
] |
1,156,527 | https://en.wikipedia.org/wiki/Detection%20theory | Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator).
In the field of electronics, signal recovery is the separation of such patterns from a disguising background.
According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g. fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion; however, they might also be more likely to treat innocuous stimuli as a threat.
Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954.
Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases.
Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the sciences and confusion matrices used in artificial intelligence. It is also usable in alarm management, where it is important to separate important events from background noise.
Psychology
Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or during eyewitness identification. SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimulus which is impaired by the fog. Since the brightness of the object, such as a traffic light, is used by the brain to discriminate the distance of an object, and the fog reduces the brightness of objects, we perceive the object to be much farther away than it actually is (see also decision theory). According to SDT, during eyewitness identifications, witnesses base their decision as to whether a suspect is the culprit or not based on their perceived level of familiarity with the suspect.
To apply signal detection theory to a data set where stimuli were either present or absent, and the observer categorized each trial as having the stimulus present or absent, the trials are sorted into one of four categories:
                      Respond "Absent"        Respond "Present"
Stimulus Present      Miss                    Hit
Stimulus Absent       Correct Rejection       False Alarm
Based on the proportions of these types of trials, numerical estimates of sensitivity can be obtained with statistics like the sensitivity index d′ and A′, and response bias can be estimated with statistics like c and β. β is the measure of response bias.
Signal detection theory can also be applied to memory experiments, where items are presented on a study list for later testing. A test list is created by combining these 'old' items with novel, 'new' items that did not appear on the study list. On each test trial the subject will respond 'yes, this was on the study list' or 'no, this was not on the study list'. Items presented on the study list are called Targets, and new items are called Distractors. Saying 'Yes' to a target constitutes a Hit, while saying 'Yes' to a distractor constitutes a False Alarm.
                      Respond "No"            Respond "Yes"
Target                Miss                    Hit
Distractor            Correct Rejection       False Alarm
Applications
Signal Detection Theory has wide application, both in humans and animals. Topics include memory, stimulus characteristics of schedules of reinforcement, etc.
Sensitivity or discriminability
Conceptually, sensitivity refers to how hard or easy it is to detect that a target stimulus is present from background events. For example, in a recognition memory paradigm, having longer to study to-be-remembered words makes it easier to recognize previously seen or heard words. In contrast, having to remember 30 words rather than 5 makes the discrimination harder. One of the most commonly used statistics for computing sensitivity is the so-called sensitivity index or d. There are also non-parametric measures, such as the area under the ROC-curve.
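For the equal-variance Gaussian model, d′ and the criterion can be computed directly from the hit and false-alarm rates. The sketch below uses illustrative trial counts, not data from any particular experiment, and omits the corrections usually applied when a rate is exactly 0 or 1.

```python
# Minimal sketch: d' = z(H) - z(F), criterion c = -(z(H) + z(F)) / 2,
# and beta = exp(d' * c), under the equal-variance Gaussian model.
# The trial counts are illustrative assumptions.
from math import exp
from scipy.stats import norm

hits, misses = 75, 25                          # stimulus-present trials (assumed)
false_alarms, correct_rejections = 20, 80      # stimulus-absent trials (assumed)

H = hits / (hits + misses)                     # hit rate
F = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate

zH, zF = norm.ppf(H), norm.ppf(F)              # inverse-normal transforms
d_prime = zH - zF                              # sensitivity
c = -(zH + zF) / 2                             # response bias (criterion location)
beta = exp(d_prime * c)                        # likelihood ratio at the criterion
print(f"d' = {d_prime:.3f}, c = {c:.3f}, beta = {beta:.3f}")
```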
Bias
Bias is the extent to which one response is more probable than another, averaging across stimulus-present and stimulus-absent cases. That is, a receiver may be more likely overall to respond that a stimulus is present or more likely overall to respond that a stimulus is not present. Bias is independent of sensitivity. Bias can be desirable if false alarms and misses lead to different costs. For example, if the stimulus is a bomber, then a miss (failing to detect the bomber) may be more costly than a false alarm (reporting a bomber when there is not one), making a liberal response bias desirable. In contrast, giving false alarms too often (crying wolf) may make people less likely to respond, a problem that can be reduced by a conservative response bias.
Compressed sensing
Another field which is closely related to signal detection theory is called compressed sensing (or compressive sensing). The objective of compressed sensing is to recover high-dimensional but low-complexity entities from only a few measurements. Thus, one of the most important applications of compressed sensing is in the recovery of high dimensional signals which are known to be sparse (or nearly sparse) with only a few linear measurements. The number of measurements needed in the recovery of signals is by far smaller than what the Nyquist sampling theorem requires provided that the signal is sparse, meaning that it only contains a few non-zero elements. There are different methods of signal recovery in compressed sensing including basis pursuit, expander recovery algorithms, CoSaMP and also fast non-iterative algorithms. In all of the recovery methods mentioned above, choosing an appropriate measurement matrix using probabilistic constructions or deterministic constructions is of great importance. In other words, measurement matrices must satisfy certain specific conditions such as RIP (Restricted Isometry Property) or the null-space property in order to achieve robust sparse recovery.
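A small numerical experiment makes the idea concrete. The sketch below is illustrative only: the signal length, number of measurements, sparsity level, and the choice of orthogonal matching pursuit (via scikit-learn) are all assumptions, not a prescription for any particular recovery method named above.

```python
# Minimal compressed-sensing sketch: recover a k-sparse vector from m << n
# random linear measurements with orthogonal matching pursuit.
# All sizes and the sparsity level are illustrative assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                              # signal length, measurements, non-zeros
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)      # random Gaussian measurement matrix
y = A @ x                                         # the m measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
print("max reconstruction error:", np.max(np.abs(omp.coef_ - x)))
```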
Mathematics
P(H1|y) > P(H2|y) / MAP testing
In the case of making a decision between two hypotheses, H1, absent, and H2, present, in the event of a particular observation, y, a classical approach is to choose H1 when p(H1|y) > p(H2|y) and H2 in the reverse case. In the event that the two a posteriori probabilities are equal, one might choose to default to a single choice (either always choose H1 or always choose H2), or might randomly select either H1 or H2. The a priori probabilities of H1 and H2 can guide this choice, e.g. by always choosing the hypothesis with the higher a priori probability.
When taking this approach, usually what one knows are the conditional probabilities, p(y|H1) and p(y|H2), and the a priori probabilities P(H1) and P(H2). In this case,
P(H1|y) = p(y|H1) P(H1) / p(y),
where p(y) is the total probability of event y,
p(y) = p(y|H1) P(H1) + p(y|H2) P(H2).
H2 is chosen in case
p(y|H2) P(H2) ≥ p(y|H1) P(H1)
and H1 otherwise.
Often, the ratio P(H1)/P(H2) is called τ_MAP and p(y|H2)/p(y|H1) is called L(y), the likelihood ratio.
Using this terminology, H2 is chosen in case L(y) ≥ τ_MAP. This is called MAP testing, where MAP stands for "maximum a posteriori".
Taking this approach minimizes the expected number of errors one will make.
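A compact way to see the rule in action is to code it directly. The densities and prior probabilities below are illustrative assumptions chosen only to show the comparison of L(y) against the prior ratio P(H1)/P(H2).

```python
# Minimal MAP-testing sketch: choose H2 when the likelihood ratio
# L(y) = p(y|H2) / p(y|H1) exceeds the prior ratio P(H1) / P(H2).
# The Gaussian densities and the priors are illustrative assumptions.
from scipy.stats import norm

p_h1, p_h2 = 0.7, 0.3                        # a priori probabilities (assumed)
lik_h1 = norm(loc=0.0, scale=1.0)            # p(y | H1), e.g. "absent" (assumed)
lik_h2 = norm(loc=1.5, scale=1.0)            # p(y | H2), e.g. "present" (assumed)

def map_decide(y):
    L = lik_h2.pdf(y) / lik_h1.pdf(y)        # likelihood ratio L(y)
    return "H2" if L >= p_h1 / p_h2 else "H1"

for y in (-0.5, 0.8, 2.0):
    print(f"y = {y:+.1f} -> choose {map_decide(y)}")
```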
Bayes criterion
In some cases, it is far more important to respond appropriately to H1 than it is to respond appropriately to H2. For example, if an alarm goes off, indicating H1 (an incoming bomber is carrying a nuclear weapon), it is much more important to shoot down the bomber if H1 = TRUE, than it is to avoid sending a fighter squadron to inspect a false alarm (i.e., H1 = FALSE, H2 = TRUE) (assuming a large supply of fighter squadrons). The Bayes criterion is an approach suitable for such cases.
Here a utility is associated with each of four situations:
U11: One responds with behavior appropriate to H1 and H1 is true: fighters destroy bomber, incurring fuel, maintenance, and weapons costs, take risk of some being shot down;
U12: One responds with behavior appropriate to H1 and H2 is true: fighters sent out, incurring fuel and maintenance costs, bomber location remains unknown;
U21: One responds with behavior appropriate to H2 and H1 is true: city destroyed;
U22: One responds with behavior appropriate to H2 and H2 is true: fighters stay home, bomber location remains unknown;
As is shown below, what is important are the differences, U11 − U21 and U22 − U12.
Similarly, there are four probabilities, P11, P12, etc., for each of the cases (which are dependent on one's decision strategy).
The Bayes criterion approach is to maximize the expected utility:
Effectively, one may maximize the sum,
,
and make the following substitutions:
where and are the a priori probabilities, and , and is the region of observation events, y, that are responded to as though H1 is true.
and thus are maximized by extending over the region where
This is accomplished by deciding H2 in case
and H1 otherwise, where L(y) is the so-defined likelihood ratio.
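Equivalently, the Bayes criterion can be sketched as choosing the action with the larger posterior-weighted expected utility. Everything numerical below (priors, densities, utility values) is an illustrative assumption; as the text notes, only the utility differences matter.

```python
# Minimal Bayes-criterion sketch: pick the action whose expected utility,
# weighted by the posterior over H1 and H2, is largest.
# Priors, densities, and utilities are illustrative assumptions.
from scipy.stats import norm

priors = {"H1": 0.05, "H2": 0.95}                      # bomber is rare (assumed)
lik = {"H1": norm(3.0, 1.0), "H2": norm(0.0, 1.0)}     # p(y | Hi) (assumed)

# utility[action][true hypothesis]; only differences between actions matter
utility = {"act_H1": {"H1": -1.0, "H2": -2.0},         # scramble the fighters
           "act_H2": {"H1": -100.0, "H2": 0.0}}        # keep them home

def bayes_decide(y):
    post = {h: lik[h].pdf(y) * priors[h] for h in priors}
    total = sum(post.values())
    post = {h: p / total for h, p in post.items()}     # posterior P(Hi | y)
    expected = {a: sum(post[h] * u[h] for h in priors) for a, u in utility.items()}
    return max(expected, key=expected.get)

for y in (0.2, 1.5, 3.5):
    print(f"y = {y:.1f} -> {bayes_decide(y)}")
```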
Normal distribution models
Das and Geisler extended the results of signal detection theory for normally distributed stimuli, and derived methods of computing the error rate and confusion matrix for ideal observers and non-ideal observers for detecting and categorizing univariate and multivariate normal signals from two or more categories.
See also
Binary classification
Constant false alarm rate
Decision theory
Demodulation
Detector (radio)
Estimation theory
Just-noticeable difference
Likelihood-ratio test
Modulation
Neyman–Pearson lemma
Psychometric function
Receiver operating characteristic
Statistical hypothesis testing
Statistical signal processing
Two-alternative forced choice
Type I and type II errors
References
Bibliography
Coren, S., Ward, L.M., Enns, J. T. (1994) Sensation and Perception. (4th Ed.) Toronto: Harcourt Brace.
Kay, SM. Fundamentals of Statistical Signal Processing: Detection Theory ()
McNichol, D. (1972) A Primer of Signal Detection Theory. London: George Allen & Unwin.
Van Trees HL. Detection, Estimation, and Modulation Theory, Part 1 (; website)
Wickens, Thomas D., (2002) Elementary Signal Detection Theory. New York: Oxford University Press. ()
External links
A Description of Signal Detection Theory
An application of SDT to safety
Signal Detection Theory by Garrett Neske, The Wolfram Demonstrations Project
Lecture by Steven Pinker
Signal processing
Telecommunication theory
Psychophysics
Mathematical psychology | Detection theory | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 2,413 | [
"Mathematical psychology",
"Applied and interdisciplinary physics",
"Computer engineering",
"Signal processing",
"Telecommunications engineering",
"Applied mathematics",
"Psychophysics"
] |
1,156,603 | https://en.wikipedia.org/wiki/Biofilter | Biofiltration is a pollution control technique using a bioreactor containing living material to capture and biologically degrade pollutants. Common uses include processing waste water, capturing harmful chemicals or silt from surface runoff, and microbiotic oxidation of contaminants in air. Industrial biofiltration can be classified as the process of utilizing biological oxidation to remove volatile organic compounds, odors, and hydrocarbons.
Examples of biofiltration
Examples of biofiltration include:
Bioswales, biostrips, biobags, bioscrubbers, Vermifilters and trickling filters
Constructed wetlands and natural wetlands
Slow sand filters
Treatment ponds
Green belts
Green walls
Riparian zones, riparian forests, bosques
Bivalve bioaccumulation
Control of air pollution
When applied to air filtration and purification, biofilters use microorganisms to remove air pollution.
The air flows through a packed bed and the pollutant transfers into a thin biofilm on the surface of the packing material. Microorganisms, including bacteria and fungi are immobilized in the biofilm and degrade the pollutant. Trickling filters and bioscrubbers rely on a biofilm and the bacterial action in their recirculating waters.
The technology finds the greatest application in treating malodorous compounds and volatile organic compounds (VOCs). Industries employing the technology include food and animal products, off-gas from wastewater treatment facilities, pharmaceuticals, wood products manufacturing, paint and coatings application and manufacturing and resin manufacturing and application, etc. Compounds treated are typically mixed VOCs and various sulfur compounds, including hydrogen sulfide. Very large airflows may be treated and although a large area (footprint) has typically been required—a large biofilter (>200,000 acfm) may occupy as much or more land than a football field—this has been one of the principal drawbacks of the technology. Since the early 1990s, engineered biofilters have provided significant footprint reductions over the conventional flat-bed, organic media type.
One of the main challenges to optimum biofilter operation is maintaining proper moisture throughout the system. The air is normally humidified before it enters the bed with a watering (spray) system, humidification chamber, bio scrubber, or bio trickling filter. Properly maintained, a natural, organic packing media like peat, vegetable mulch, bark or wood chips may last for several years but engineered, combined natural organic, and synthetic component packing materials will generally last much longer, up to 10 years. Several companies offer these types of proprietary packing materials and multi-year guarantees, not usually provided with a conventional compost or wood chip bed biofilter.
Although widely employed, the scientific community is still unsure of the physical phenomena underpinning biofilter operation, and information about the microorganisms involved continues to be developed. A biofilter/bio-oxidation system is a fairly simple device to construct and operate and offers a cost-effective solution provided the pollutant is biodegradable within a moderate time frame (increasing residence time = increased size and capital costs), at reasonable concentrations (and lb/hr loading rates) and that the airstream is at an organism-viable temperature. For large volumes of air, a biofilter may be the only cost-effective solution. There is no secondary pollution (unlike the case of incineration where additional CO2 and NOx are produced from burning fuels) and degradation products form additional biomass, carbon dioxide and water. Media irrigation water, although many systems recycle part of it to reduce operating costs, has a moderately high biochemical oxygen demand (BOD) and may require treatment before disposal. However, this "blowdown water", necessary for proper maintenance of any bio-oxidation system, is generally accepted by municipal publicly owned treatment works without any pretreatment.
Biofilters are being utilized in Columbia Falls, Montana at Plum Creek Timber Company's fiberboard plant. The biofilters decrease the pollution emitted by the manufacturing process and the exhaust emitted is 98% clean. The newest and largest biofilter addition to Plum Creek cost $9.5 million, yet even though this new technology is expensive, in the long run it will cost less over time than the alternative exhaust-cleaning incinerators fueled by natural gas (which are not as environmentally friendly).
Water treatment
Biofiltration was first introduced in England in 1893 as a trickling filter for wastewater treatment and has since been successfully used for the treatment of different types of water. Biological treatment has been used in Europe to filter surface water for drinking purposes since the early 1900s and is now receiving more interest worldwide. Biofiltration is also common in wastewater treatment, aquaculture and greywater recycling, as a way to minimize water replacement while increasing water quality.
Biofiltration process
A biofilter is a bed of media on which microorganisms attach and grow to form a biological layer called biofilm. Biofiltration is thus usually referred to as a fixed–film process. Generally, the biofilm is formed by a community of different microorganisms (bacteria, fungi, yeast, etc.), macro-organisms (protozoa, worms, insect's larvae, etc.) and extracellular polymeric substances (EPS) (Flemming and Wingender, 2010). Air or water flows through a media bed and any suspended compounds are transferred into a surface biofilm where microorganisms are held to degrade pollutants. The aspect of the biofilm is usually slimy and muddy.
Water to be treated can be applied intermittently or continuously over the media, via upflow or downflow. Typically, a biofilter has two or three phases, depending on the feeding strategy (percolating or submerged biofilter):
a solid phase (media)
a liquid phase (water);
a gaseous phase (air).
Organic matter and other water components diffuse into the biofilm where the treatment occurs, mostly by biodegradation. Biofiltration processes are usually aerobic, which means that microorganisms require oxygen for their metabolism. Oxygen can be supplied to the biofilm, either concurrently or countercurrently with water flow. Aeration occurs passively by the natural flow of air through the process (three phase biofilter) or by forced air supplied by blowers.
Microorganisms' activity is a key-factor of the process performance. The main influencing factors are the water composition, the biofilter hydraulic loading, the type of media, the feeding strategy (percolation or submerged media), the age of the biofilm, temperature, aeration, etc.
The mechanisms by which certain microorganisms can attach and colonize on the surface of filter media of a biofilter can be via transportation, initial adhesion, firm attachment, and colonization [Van Loosdrecht et al., 1990]. The transportation of microorganisms to the surface of the filter media is further controlled by four main processes of diffusion (Brownian motion), convection, sedimentation, and active mobility of the microorganisms. The overall filtration process consists of microorganism attachment, substrate utilization which causes biomass growth, to biomass detachment.
Types of filtering media
Most biofilters use media such as sand, crushed rock, river gravel, or some form of plastic or ceramic material shaped as small beads and rings.
Advantages
Although biological filters have simple superficial structures, their internal hydrodynamics and the microorganisms' biology and ecology are complex and variable. These characteristics confer robustness to the process. In other words, the process has the capacity to maintain its performance or rapidly return to initial levels following a period of no flow, of intense use, toxic shocks, media backwash (high rate biofiltration processes), etc.
The structure of the biofilm protects microorganisms from difficult environmental conditions and retains the biomass inside the process, even when conditions are not optimal for its growth. Biofiltration processes offer the following advantages: (Rittmann et al., 1988):
Since microorganisms are retained within the biofilm, biofiltration allows the development of microorganisms with relatively low specific growth rates;
Biofilters are less subject to variable or intermittent loading and to hydraulic shock;
Operational costs are usually lower than for activated sludge;
The final treatment result is less influenced by biomass separation since the biomass concentration at the effluent is much lower than for suspended biomass processes;
The attached biomass becomes more specialized (higher concentration of relevant organisms) at a given point in the process train because there is no biomass return.
Drawbacks
Because filtration and growth of biomass leads to an accumulation of matter in the filtering media, this type of fixed-film process is subject to bioclogging and flow channeling. Depending on the type of application and on the media used for microbial growth, bioclogging can be controlled using physical and/or chemical methods. Backwash steps can be implemented using air and/or water to disrupt the biomat and recover flow whenever possible. Chemicals such as oxidizing (peroxide, ozone) or biocide agents can also be used.
Biofiltration can require a large area for some treatment techniques (suspended growth and attached growth processes) as well as long hydraulic retention times (anaerobic lagoon and anaerobic baffled reactor).
Drinking water
For drinking water, biological water treatment involves the use of naturally occurring microorganisms in the surface water to improve water quality. Under optimum conditions, including relatively low turbidity and high oxygen content, the organisms break down material in the water and thus improve water quality. Slow sand filters or carbon filters are used to provide a support on which these microorganisms grow. These biological treatment systems effectively reduce water-borne diseases, dissolved organic carbon, turbidity and color in surface water, thus improving overall water quality.
Typically in drinking water treatment, granular activated carbon (GAC) or sand filters are used to prevent re-growth of microorganisms in water distribution pipes by reducing levels of iron and nitrate that act as a microbial nutrient. GAC also reduces chlorine demand and other disinfection by-product accumulation by acting as a first line of disinfection. Bacteria attached to filter media as a biofilm oxidize organic material as both an energy and carbon source; this prevents undesired bacteria from using these sources, which can reduce water odors and tastes [Bouwer, 1998].
Biotechnological techniques can be used to improve the biofiltration of drinking water by studying the microbial communities in the water. Such techniques include qPCR (quantitative polymerase chain reaction), ATP assay, metagenomics, and flow cytometry.
Wastewater
Biofiltration is used to treat wastewater from a wide range of sources, with varying organic compositions and concentrations. Many examples of biofiltration applications are described in the literature. Bespoke biofilters have been developed and commercialized for the treatment of animal wastes, landfill leachates, dairy wastewater, domestic wastewater.
This process is versatile as it can be adapted to small flows (< 1 m3/d), such as onsite sewage as well as to flows generated by a municipality (> 240 000 m3/d). For decentralized domestic wastewater production, such as for isolated dwellings, it has been demonstrated that there are important daily, weekly and yearly fluctuations of hydraulic and organic production rates related to modern families' lifestyle. In this context, a biofilter located after a septic tank constitutes a robust process able to sustain the variability observed without compromising the treatment performance.
In anaerobic wastewater treatment facilities, biogas is fed through a bio-scrubber and “scrubbed” with activated sludge liquid from an aeration tank. Most commonly found in wastewater treatment is the trickling filter process (TFs) [Chaudhary, 2003]. Trickling filters are an aerobic treatment that uses microorganisms on attached medium to remove organic matter from wastewater.
In primary wastewater treatment, biofiltration is used to control levels of biochemical oxygen demand, chemical oxygen demand, and suspended solids. In tertiary treatment processes, biofiltration is used to control levels of organic carbon [Carlson, 1998].
Use in aquaculture
The use of biofilters is common in closed aquaculture systems, such as recirculating aquaculture systems (RAS). The biofiltration techniques used in aquaculture can be separated into three categories: biological, physical, and chemical. The primary biological method is nitrification; physical methods include mechanical techniques and sedimentation, and chemical methods are usually used in tandem with one of the other methods. Some farms use seaweed, such as those from the genera Ulva, to take excess nutrients out of the water and release oxygen into the ecosystem in a “recirculation system” while also serving as a source of income when they sell the seaweed for safe human consumption.
Many designs are used, with different benefits and drawbacks, however the function is the same: reducing water exchanges by converting ammonia to nitrate. Ammonia (NH4+ and NH3) originates from the brachial excretion from the gills of aquatic animals and from the decomposition of organic matter. As ammonia-N is highly toxic, this is converted to a less toxic form of nitrite (by Nitrosomonas sp.) and then to an even less toxic form of nitrate (by Nitrobacter sp.). This "nitrification" process requires oxygen (aerobic conditions), without which the biofilter can crash. Furthermore, as this nitrification cycle produces H+, the pH can decrease, which necessitates the use of buffers such as lime.
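The ammonia-to-nitrite-to-nitrate conversion can be pictured with a very simplified model of two sequential first-order reactions. This is only an illustrative sketch: the rate constants, the initial ammonia concentration, and the first-order assumption itself are not taken from any real biofilter.

```python
# Minimal sketch of the nitrification chain described above, modelled as two
# sequential first-order conversions (rate constants and the initial ammonia
# concentration are illustrative assumptions, not measured biofilter data).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.8, 0.4                 # 1/day: NH4+ -> NO2- and NO2- -> NO3- (assumed)

def rhs(t, y):
    nh4, no2, no3 = y
    return [-k1 * nh4, k1 * nh4 - k2 * no2, k2 * no2]

sol = solve_ivp(rhs, (0.0, 10.0), [5.0, 0.0, 0.0], dense_output=True)
for ti in np.linspace(0.0, 10.0, 6):
    nh4, no2, no3 = sol.sol(ti)
    print(f"day {ti:4.1f}: NH4+={nh4:5.2f}  NO2-={no2:5.2f}  NO3-={no3:5.2f} mg/L")
```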
See also
Bioretention
Bioswale
Folkewall
Media filter
Vermifilter
References
Further reading
Biofilter Bags SE-14. (2012). California Stormwater BMP Handbook, 1–3. Retrieved from https://www.cityofventura.ca.gov/DocumentCenter/View/13163/CASQA-Guidance-SE-14-Biofilter-Bags.
External links
Bioswales and strips for storm runoff - California Dept. of Transportation (CalTrans)
Bioreactors
Environmental engineering
Environmental soil science
Biodegradable waste management
Waste treatment technology
Air pollution control systems
Volatile organic compound abatement
Water filters | Biofilter | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 3,010 | [
"Bioreactors",
"Water filters",
"Biological engineering",
"Water treatment",
"Biodegradable waste management",
"Chemical reactors",
"Chemical engineering",
"Environmental soil science",
"Filters",
"Biochemical engineering",
"Microbiology equipment",
"Biodegradation",
"Civil engineering",
"... |
11,543,936 | https://en.wikipedia.org/wiki/Huber%27s%20equation | Huber's equation, first derived by a Polish engineer Tytus Maksymilian Huber, is a basic formula in elastic material tension calculations, an equivalent of the equation of state, but applying to solids. In its simplest and most commonly used form it reads:
σ_red = √(σ² + 3τ²),
where σ is the tensile stress, and τ is the shear stress, measured in newtons per square meter (N/m2, also called pascals, Pa), while σ_red, called a reduced tension, is the resultant tension of the material.
It finds application in calculating the span width of bridges, their beam cross-sections, etc.
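A worked example of the formula above; the stress values are arbitrary illustrative inputs, not design data.

```python
# Minimal sketch of the reduced-tension formula given above.
# The input stresses are arbitrary example values (assumed), in pascals.
import math

def reduced_stress(sigma, tau):
    """Huber's reduced tension for combined tensile and shear stress (Pa)."""
    return math.sqrt(sigma**2 + 3.0 * tau**2)

sigma = 120e6   # tensile stress, Pa (assumed)
tau = 45e6      # shear stress, Pa (assumed)
print(f"reduced tension: {reduced_stress(sigma, tau) / 1e6:.1f} MPa")
```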
See also
Yield surface
Stress–energy tensor
Tensile stress
von Mises yield criterion
References
Physical quantities
Structural analysis | Huber's equation | [
"Physics",
"Mathematics",
"Engineering"
] | 148 | [
"Structural engineering",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Classical mechanics stubs",
"Structural analysis",
"Classical mechanics",
"Mechanical engineering",
"Aerospace engineering",
"Physical properties"
] |
11,544,764 | https://en.wikipedia.org/wiki/Rollover%20protection%20structure | A rollover protection structure or rollover protection system (ROPS) is a system or structure intended to protect equipment operators and motorists from injuries caused by vehicle overturns or rollovers. Like rollcages and rollbars in cars and trucks, cabs, frames or rollbars on agricultural and construction equipment, a ROPS involves mechanical components attached to the frame of the vehicle that maintain a clearance zone large enough to protect the operator's body in the event of rollover.
Commonly found on heavy equipment (i.e. tractors), earth-moving machinery and UTVs used in construction, agriculture and mining, ROPS structures are defined by various regulatory agencies, including US Occupational Safety and Health Administration (OSHA) and international standard organizations such as ISO and OECD. The regulations include both a strength requirement as well as an energy absorption requirement of the structure. Some dump trucks add a protrusion to their boxes that cover the operator's compartment for ROPS purposes.
ROPS are commonly fitted to 4x4s, pickup trucks, earth moving equipment, soil compactors and utility vehicles used in the mining industry. Products such as this were developed out of necessity so employees travelling around or within mine sites were provided with extra protection in the event of a fleet vehicle rollover.
In the US, ROPS designs have to be certified by a professional engineer, who will normally require a destructive test. The structure will be tested at a reduced temperature (where the metal is more brittle), or fabricated from materials that have satisfactory low temperature performance. The International Organization for Standardization has guidelines for destructively testing ROPS structures on earthmoving machinery, excavators, forestry equipment and tractors. Theoretical performance analysis of major new design ROPS is not permitted as an alternative to physical testing.
Variants
Some tractor operators have raised concerns about using ROPS in low-clearance environments, such as in orchards and buildings. In response, NIOSH developed an Automatically Deploying Rollover Protective Structure (AutoROPS) which stays in a lowered position until a rollover condition is determined, at which time it deploys to a fully extended and locked position. It is currently working with manufacturers to streamline the commercialization of this technology. The Division of Safety Research branch of NIOSH has developed cost-effective rollover protection structures (CROPS) for four tractor models (Ford 8N, Ford 3000, Ford 4000, Massey Ferguson 135), in an effort to provide safety for older model tractors.
Some automobile models have begun to adopt the phrase, substituting system for structure in the ROPS acronym, notably the Volvo C70 convertible models, and Jaguar XK. Their ROPS structures consist of two pyrotechnically charged roll hoops hidden behind the rear seats that will pop up in the case of a roll-over to protect the occupants. If the roof is up, the system will still work, shattering the rear window at the same time.
History
Rollover injury and fatality
Tractor rollover has become one of the leading causes of occupational death in the agricultural industry. In the United States from 1992 to 2005, 1,412 workers were killed from tractor rollover, with roughly 10,000 suffering an injury. These rollover fatalities represented about 20% of all agricultural fatalities. During 2003 to 2010, 933 workers in agriculture, forestry, fishing and hunting industries were killed as a result of tractor rollover, accounting for over 63% of all tractor-related deaths. The National Safety Council estimates that between 150 and 200 tractor operators are killed due to rollover in the US each year. Researchers have also attempted to estimate the chances that a tractor rollover will result in a fatality of the operator. An adjusted probability of about 8 deaths per 100 tractor overturns (8%) was extrapolated using data from the Kentucky Fatality Assessment and Control Evaluation (FACE) Program. Furthermore, youth are particularly at risk of being crushed or pinned by a machine (all-terrain vehicle, tractor, etc.) that is not equipped with a rollover bar. All-terrain vehicles and tractors continue to be leading causes of fatal injury among youth in agricultural settings.
The installation of Rollover Protective Structures (ROPS) on older tractors that lack these protective devices has been identified as a viable solution for reducing overturn fatality rates among US farmers. When worn with a seat belt, these engineering controls are 99% effective in preventing operator death if an overturn occurs. The US National Institute for Occupational Safety and Health estimates that fatality rates from tractor overturns in the US could be reduced by a minimum of 71% if all tractors were equipped with ROPS. When paired with proper seat belt use on tractors, NIOSH estimates that ROPS could eliminate nearly all fatalities caused by tractor and lawn mower overturns. Without a seat belt, the ROPS is still 70% effective in preventing operator death, though there is a possibility that the rider may be thrown from the tractor during the overturn, and thus left unprotected by the ROPS.
Usage rates
Research from Sweden shows that the fatality rate from tractor rollover remains stable when ROPS prevalence rates range from 40% to 75%; only until the rate of ROPS adoption reaches 75% to 80% does the fatality rate from rollover fall significantly, to near-zero. The latest estimates of tractors equipped with ROPS in the United States show that 59% of tractors were ROPS-equipped in 2006, an increase from the 38% in 1993. With steady increases in the installation of ROPS, it is projected that the rollover fatality rate will decline steadily, until reaching a rate near zero by 2028.
ROPS usage has also appeared to be linked to a number of factors. There is regional variation in ROPS usage within the United States, as estimates from 2006 showed that tractor operators in the South had the highest prevalence of ROPS usage at 65%, while the Northeast had the lowest prevalence of ROPS usage at 51%. The West and Midwest reported rates of 60% and 56% respectively.
Age of tractor operator is a large risk factor, as increasing age is associated with decreasing rates of ROPS usage. The oldest group of tractor operators, those ages 65 and above, have the lowest rate of overall ROPS usage at 42%. Additionally, older tractor operators are more likely to suffer fatality and severe injury outcomes following tractor rollover than younger operators. Along with the age of the tractor operator, the age of the tractor itself is a risk factor. Older tractor models are less likely to be equipped with ROPS, possibly owing to impracticality in installation or to mandated installations in newer models. Further, older tractors are more dangerous than newer tractors, possessing narrow front ends and a higher center of gravity, as well as being more prone to operational failure.
Economics also appears to be a major factor in rates of ROPS adoption. Farms with low value of sales, part-time operations, and smaller acreage are less likely to employ ROPS-equipped tractors than farms with high value of sales, full-time operations, and larger acreage. Additionally, farms that use more hired labor over non-hired labor (family) are found to have fewer fatal tractor overturns. Overall, farms that are more economically viable are more likely to install ROPS on tractors than smaller, lower-income farms.
ROPS adoption
Tractor rollover deaths have been identified as a public health problem since the 1920s. Research efforts from several countries towards the development of engineering controls to reduce injury from rollover persisted for several decades before any legislation took place. In 1959, Sweden became the first country to enact ROPS legislation, requiring all newly manufactured tractors in the country to have ROPS installed. This requirement was expanded in 1965, requiring all tractors in Sweden, regardless of manufacture date, to have ROPS installed if it was operated by an employee and not the actual owner. Similar legislation requiring ROPS installation has been enacted in Australia, Germany, and Denmark.
In the United States, standards for ROPS design and utilization for tractors were first developed in 1967 by the American Society for Agricultural and Biological Engineers. ROPS legislation was passed in 1975, with OSHA requiring that all tractors manufactured from 25 October 1976 onwards be equipped with ROPS. In 1985, the development of a new voluntary safety standard by the American Society of Agricultural and Biological Engineers (S318.10) encouraged an initiative by American tractor manufacturers to equip new tractors over 20 horsepower with ROPS.
Agricultural health and safety researchers have observed that increases in ROPS protected tractors in the United States can largely be tied to attrition (older tractors without ROPS being replaced with newer tractors with ROPS) vs. installation of ROPS. Additional studies have indicated the need to promote and facilitate ROPS installation on older tractors, as many farmers are unwilling to replace their older tractors. Overall, these studies demonstrate that relying on the eventual replacement of tractors without ROPS – and the installation of ROPS on all older tractors – is not an expeditious solution to tractor overturn deaths and will result in the deaths of many US tractor operators over the next few decades.
Barriers to ROPS installation in the United States
Over the past few decades, quantitative and qualitative research studies have attempted to identify farmers' potential barriers to ROPS adoption. Cost, time to find and install ROPS parts, and dismissal of personal risk have all been prominently identified barriers to ROPS adoption. Research also shows that knowledge of tractor overturn risks and the benefits of ROPS installation do not appear to stimulate farmer interest in installing ROPS. Equipment dealers have also cited a number of barriers, such as a perceived lack of farmer interest, injury liability, difficulty recovering expenses and a lack of understanding amongst dealers regarding the magnitude of the overturn fatality problem, which negatively impacts dealers' interest in ROPS installation.
Programs to increase ROPS installation in the United States
Several strategies have been employed to address these barriers and motivate farmers to install ROPS. In 1985, equipment manufacturers launched a promotional campaign to encourage ROPS installation activities, although industry representatives state the campaign did not stimulate considerable interest in ROPS installation in the farm community [1]. Education has also been largely employed by extension agents and agricultural health and safety educators as a means for increasing ROPS installations, although evaluations of educational interventions indicate they do not markedly decrease agricultural worker injury rates or increase ROPS installation activity. However, in Kentucky, a community awareness campaign did appear to increase interest in ROPS installation. Various state farm bureaus (VA, NC, and IL) have also offered financial incentives for members to install ROPS, while an online ROPS Inventory Site called the KY ROPS Guide, was developed to assist farmers searching for ROPS.
In 2006, the New York ROPS Rebate Program was launched in an effort to increase access to ROPS among New York tractor operators; this addressed the Northeastern United States' consistently lower rates of ROPS usage than other regions of the United States. The program has since expanded to seven states including New York, Pennsylvania, Vermont, New Hampshire, Wisconsin, Massachusetts, and Minnesota.
These programs incorporate a number of components that build on prior ROPS research. These include targeted promotions, rebates for 70% of the cost to install ROPS (with varying caps on farmers out of pocket expense) and toll-free ROPS hotline assistance with the ROPS purchase and ordering process. Rebate funding is provided via state funding resources or private industry / fundraising campaigns. Programs have increased farmer interest in ROPS installation with an average of 1,200 calls annually to the ROPS hotline and farmers are generally satisfied with these services (99% of program participants would recommend the program to other farmers). Programs have also documented the prevention of injury and death for farmers who have participated in these installation programs.
Current efforts to increase ROPS adoption in the United States
National Tractor Safety Coalition (NTSC)
In an effort to build on the momentum of prior ROPS interventional efforts to create a national ROPS installation solution, a number of research, government and industry groups organized a two-day 'Whole-System-in-the-Room' workshop in Chicago, Illinois in May 2014. The purpose of the meeting was to outline a national strategy for ROPS installation that all stakeholders could agree on and to engage multiple industry groups in strategy implementation efforts. Close to 50 organizations were represented at the meeting, including representatives from the following industry groups: manufacturers and dealers, agricultural organizations, health and safety organizations, financial and insurance groups, government organizations, researchers, private corporations, media, and farmers/farm safety advocates. By the end of the meeting, the National Tractor Safety Coalition was officially organized with the mission "to prevent tractor-related injuries and deaths in US agriculture by developing and implementing collaborative, stakeholder-driven, evidence-based solutions." A detailed list of common goals is featured in the NIOSH Science Blog "The National Tractor Safety Coalition: Taking a new systems-approach to a well-known problem."
Currently the Coalition includes 87 members from a number of agricultural or health related organizations. These organizations include: NIOSH, American Farm Bureau Federation, Farm Foundation, and several universities, Extension agencies, NIOSH Agricultural Safety and Health Centers, State Departments of Health, and insurance companies, among others. Some members serve on the NTSC Steering Committee, which meets on a monthly basis and provides guidance on the overarching initiative to expand ROPS installation programs nationally, while others provide assistance on various aspects of national ROPS implementation efforts, such as promotions, testimonials, congressional outreach, or networking. A manufacturing and technology task force has also been assembled and provides guidance to the group on technical issues.
National ROPS Rebate Program
The NTSC launched the National ROPS Rebate Program in 2017, which helps to facilitate individual state-based programs while also seeking national-level funding. Given the NTSC's broad mission to address tractor-related deaths, the group seeks to tackle issues such as run-overs and implement entanglements once a National ROPS Rebate Program has been sustainably established.
See also
Active rollover protection
Anti-roll bar
Gyroscope
Roll cage
Side Impact Protection System
WHIPS
References
External links
The Kentucky ROPS Guide
Legislation in the EU: Council Directive 87/402/EEC of 25 June 1987 on roll over protection structures mounted in front of the driver's seat on narrow-track wheeled agricultural and forestry tractors. It was modified several times, for the latest version refer to the consolidated version.
ROPS test procedure (US State of Washington)
ROPS Decider
National ROPS Rebate Program
Vehicle safety technologies
Tractors
Agricultural machinery
Agricultural health and safety | Rollover protection structure | [
"Engineering"
] | 3,004 | [
"Engineering vehicles",
"Tractors"
] |
11,546,101 | https://en.wikipedia.org/wiki/Sharp%20Solar | Sharp Solar, a subsidiary of Sharp Electronics, is a solar energy products company owned by Sharp Corporation and based in Osaka, Japan.
Products
The company produces thin film modules and mono and poly-crystalline silicon solar cells.
Sharp's photovoltaic (PV) modules are used for many applications, from satellites to lighthouses, and industrial applications to residential use.
Sharp Solar manufactures PV modules in multiple locations, though it shut down solar panel production at its factories in Wrexham, Wales and Memphis, Tennessee in 2014.
History
Sharp began researching solar cells in 1959, with mass production first beginning in 1963. Production capacity amounted to 324 MW in 2004. In 2010, it was the #1 producer of PV cells in terms of revenue.
Timeline
1959: Started development of solar cells
1963: Began mass production of solar cells
1963: First to supply ocean buoy with solar power cells
1966: Installed solar on lighthouse
1967: Began development of solar space applications
1976: "Ume" satellite successfully launched with solar cells on board
1980: Released first solar calculator
1981: Began operations at Shinjo Plant (now Katsuragi)
1988: Reached 11.5% cell conversion for amorphous silicon solar cells
1992: Reached 17.1% cell conversion for polycrystalline solar cells
1992: Achieved world's highest cell conversion efficiency of 22%
1994: Commercialization of residential solar power system (grid-connected)
2000: Became global leader in solar cell manufacturing
2001: Obtained UL (U.S.) and TUV (EU) certification for PV modules
2002: Developed the industry's first string power conditioner
2003: Space PV module installed on Satellite Observatory "Free Flyer" (SFU)
2003: Began producing PV modules in the United States
2003: Began producing PV modules in Europe
2005: Developed solar cells that admit light and can be used as building materials for windows
2005: Began mass-producing thin film solar cells
2006: Katsuragi plant expands its annual production capacity to 600 megawatts, the world's highest at that time
2007: Expanded production capacity of PV modules to 200 megawatts in Europe
2008: Became first PV manufacturer in the world to achieve cumulative production of 2 GW
2008: Achieved industry's highest conversion efficiency for a polycrystalline PV module of 14.4%
2009: Launched thin film modules globally
2010: Launched world's highest efficiency Solar PV panel with greater than 32.5% efficiency
2010: Investment made into 2.8 GW annual production capacity
See also
List of photovoltaics companies
Photovoltaic array
Photovoltaics
References
External links
Solar energy companies of Japan
Solar
Electronics companies of Japan
Photovoltaics manufacturers
Manufacturing companies based in Osaka
Energy companies established in 1959
Electronics companies established in 1959
Renewable resource companies established in 1959
Japanese companies established in 1959 | Sharp Solar | [
"Engineering"
] | 570 | [
"Photovoltaics manufacturers",
"Engineering companies"
] |
11,546,489 | https://en.wikipedia.org/wiki/List%20of%20photovoltaics%20companies | This is a list of notable photovoltaics (PV) companies.
Grid-connected solar photovoltaics (PV) is the fastest growing energy technology in the world, growing from a cumulative installed capacity of 7.7 GW in 2007, to 320 GW in 2016. In 2016, 93% of the global PV cell manufacturing capacity utilized crystalline silicon (cSi) technology, representing a commanding lead over rival forms of PV technology, such as cadmium telluride (CdTe), amorphous silicon (aSi), and copper indium gallium selenide (CIGS). In 2016, manufacturers in China and Taiwan met the majority of global PV module demand, accounting for 68% of all modules, followed by the rest of Asia at 14%. The United States and Canada manufactured 6%, and Europe manufactured a mere 4%. In 2021 China produced about 80% of the polysilicon, 95% of wafers, 80% of cells and 70% of modules. Module production capacity reached 460 GW with crystalline silicon technology assembly accounting for 98%.
Photovoltaics companies include PV capital equipment producers, cell manufacturers, panel manufacturers and installers. The list does not include silicon manufacturing companies.
Photovoltaic manufacturers
Top 10 by year
Summary
According to EnergyTrend, the 2011 global top ten polysilicon, solar cell and solar module manufacturers by capacity were found in countries including People's Republic of China, United States, Taiwan, Germany, Japan, and Korea.
In 2011, the global top ten polysilicon makers by capacity were GCL, Hemlock, OCI, Wacker, LDK, REC, MEMC/SunEdison, Tokuyama, LCY and Woongjin, represented by People's Republic of China, United States, Taiwan, Germany, Japan and South Korea.
Historical rankings
In 2015, GCL System Integration Technology Company recorded an increase of 500%, topping 2.5–2.7 GW compared to 0.5 GW in 2014, which put it at seventh rank, overtaking Yingli Green. Its solar PV module production appears to have reached a capacity of 3.7 GW at the end of 2015.
Solar modules, as the final products installed to generate electricity, are regarded as the major components to be selected by customers willing to choose solar PV energy. Solar module manufacturers must ensure that their products remain serviceable over application periods of more than 25 years. As a result, major solar module producers have their products tested by publicly recognized testing organizations and guarantee their durable efficiency rate for a certain number of years. The solar PV market has been growing for the past few years. According to solar PV research company PVinsights, worldwide shipments of solar modules in 2011 were around 25 GW, and shipment year-over-year growth was around 40%. The top five solar module producers in 2011 were: Suntech, First Solar, Yingli, Trina, and Canadian Solar. The top five solar module companies held a 51.3% market share of solar modules, according to PVinsights' market intelligence report.
Top 10 solar cell producers
According to an annual market survey by the photovoltaics trade publication Photon International, global production of photovoltaic cells and modules in 2009 was 12.3 GW. The top ten manufacturers accounted for 45% of this total.
In 2010, tremendous growth in solar PV cell shipments doubled the size of the solar PV cell market. According to the solar PV market research company PVinsights, Suntech topped the ranking of solar cell production. Most of the top ten solar PV producers doubled their shipments in 2010, and five of them shipped over one gigawatt. The top ten solar cell producers dominated the market with an even higher market share, around 50–60%, relative to an assumed twenty gigawatts of cell shipments in 2010.
Quarterly ranking
Although the yearly ranking is as listed above, quarterly rankings can indicate which companies can sustain particular conditions such as price adjustments, government feed-in tariff changes, and weather conditions. In 2Q11, First Solar regained the top spot in solar module shipments from Suntech. From the 2Q11 results, four phenomena should be noted: thin-film leader First Solar still dominates; the solar module market became more centralized; Chinese companies soared; and the gigawatt game is prevailing (according to the latest solar module shipment report by PVinsights).
Thin film ranking
Thin film solar cells are commercially used in several technologies, including cadmium telluride (CdTe), copper indium gallium diselenide (CIGS), and amorphous and other thin-film silicon (a-Si, TF-Si). In 2013, thin-film declined to 9% of worldwide PV production.
In 2009, thin films represented 16.8% of total global production, up from 12.5% in 2008. The top ten thin-film producers were:
1100.0 MW First Solar
123.4 MW Suntech solar
94.0 MW Sharp
60.0 MW HELIOSPHERA
50.0 MW Sungen Solar
50.0 MW Trony
50.0 MW Moser Baer
43.0 MW Solar Frontier
42.0 MW Mitsubishi
40.0 MW Kaneka Corporation
40.0 MW Vtech Solar
30.0 MW Würth Solar
30.0 MW Bosch (formerly Ersol)
30.0 MW EPV
1 Estimated
2011 global top 10 polysilicon manufacturers by capacity
On the other hand, the 2011 global top ten solar cell makers by capacity are dominated by both Chinese and Taiwanese companies, including Suntech, JA Solar, Trina, Yingli, Motech, Gintech, Canadian Solar, NeoSolarPower, Hanwha Solar One and JinkoSolar.
2011 global top 10 solar cell manufacturers by capacity
In terms of solar module by capacity, the 2011 global top ten are Suntech, LDK, Canadian Solar, Trina, Yingli, Hanwha Solar One, Solar World, Jinko Solar, Sunneeg and Sunpower, represented by makers in People's Republic of China and Germany.
2011 global top 10 solar module manufacturers by capacity
In terms of wafer and cell capacities, both makers from Taiwan and China have demonstrated significant year over year growth from 2010 to 2011.
China and Taiwan production capacity
Solar photovoltaic production by country
China now manufactures more than half of the world's solar photovoltaics. Its production has been rapidly escalating. In 2001 it had less than 1% of the world market. In contrast, in 2001 Japan and the United States combined had over 70% of world production. By 2011 they produced around 15%.
Other companies
Other notable companies include:
Anwell Solar, Hong Kong, China
Ascent Solar, Tucson, Arizona, US
Cool Earth Solar, California, US
Dyesol, Canberra, Australia
Eurosolar, Germany
Global Solar, Tucson, Arizona, US
GreenSun Energy, Jerusalem, Israel
Hanwha, Seoul, South Korea
HelioVolt, Austin, Texas, US
Hitachi, Japan
IBC SOLAR, Germany
International Solar Electric Technology, Chatsworth, California, US
Isofotón, Malaga, Spain
Konarka Technologies, Inc., Lowell, Massachusetts, US
LDK Solar, Xinyu, China
Meyer Burger, Thun, Switzerland
Miasolé, California, US
Mitsubishi Electric, Tokyo, Japan
Nanosolar, San Jose, California, US
Odersun, Frankfurt Oder, Germany
Panasonic Corporation Osaka, Japan
PowerFilm, Inc., Ames, Iowa, US
Renewable Energy Corporation, Norway
Schott Solar, Germany
Signet Solar, California, US
Skyline Solar, Mountain View, California, US
SolarEdge, Grass Valley, California, US
SolarPark Korea, Wanju, South Korea
SolarWorld, Bonn, Germany
Solimpeks, Munich, Germany
SoloPower, San Jose, California, US
Spectrolab, Inc., Sylmar, California, US
Sulfurcell, company has changed name to Soltecture in 2011, Germany
SunEdison
Suniva, Norcross, Georgia, US
Sun Power Corporation, San Jose, California, US
Targray Technology International, Kirkland, Quebec, Canada
Tenksolar, Minneapolis, Minnesota, US
Topray Solar, China
Toshiba, Tokyo, Japan
Unirac, Albuquerque, New Mexico, US
Wagner & Co., Germany
Wirsol, Waghäusel, Germany
Xinyi Solar, Wuhu, China
List of solar panel factories
Below is a list of solar panel factories. It lists actual factories only, former plants are below this first table.
Closed solar panel factories
See also
List of CIGS companies
List of concentrating solar thermal power companies
List of energy storage projects
List of silicon producers
Renewable energy industry
Silicon Module Super League
Solar cell
Dye-sensitized solar cell
Solar inverter
Power optimizer
Applied Materials, a solar cell capital equipment producer
Notes
References
External links
"Solar Home System"
Electrical-engineering-related lists
Photovoltaics companies
Photovoltaics companies | List of photovoltaics companies | [
"Engineering"
] | 1,852 | [
"Electrical engineering",
"Photovoltaics manufacturers",
"Electrical-engineering-related lists",
"Engineering companies"
] |
11,548,654 | https://en.wikipedia.org/wiki/Hydrogenated%20starch%20hydrolysates | Hydrogenated starch hydrolysates (HSHs), also known as polyglycitol syrup (INS 964), are mixtures of several sugar alcohols (a type of sugar substitute). Hydrogenated starch hydrolysates were developed by the Swedish company Lyckeby Starch in the 1960s. The HSH family of polyols is an approved food ingredient in Canada, Japan, and Australia. HSH sweeteners provide 40 to 90% sweetness relative to table sugar.
Hydrogenated starch hydrolysates are produced by the partial hydrolysis of starch – most often corn starch, but also potato starch or wheat starch. This creates dextrins (glucose and short glucose chains). The hydrolyzed starch (dextrin) then undergoes hydrogenation to convert the dextrins to sugar alcohols.
Hydrogenated starch hydrolysates are similar to sorbitol: if the starch is completely hydrolyzed so that only single glucose molecules remain, then after hydrogenation the result is sorbitol. Because in HSHs the starch is not completely hydrolyzed, a mixture of sorbitol, maltitol, and longer chain hydrogenated saccharides (such as maltotriitol) is produced. When no single polyol is dominant in the mix, the generic name hydrogenated starch hydrolysates is used. However, if 50% or more of the polyols in the mixture are of one type, it can be labeled as "sorbitol syrup", or "maltitol syrup", etc.
Uses
Hydrogenated starch hydrolysates are used commercially in the same way as other common sugar alcohols. They are often used as both a sweetener and as a humectant (moisture-retaining ingredient). As a crystallization modifier, they can prevent syrups from forming crystals of sugar. It is used to add bulk, body, texture, and viscosity to mixtures, and can protect against damage from freezing and drying. HSH products are generally blended with other sweeteners, both caloric and artificial.
Health and safety
Similar to xylitol, hydrogenated starch hydrolysates are not readily fermented by oral bacteria and are used to formulate sugarless products that do not promote dental caries.
HSHs are also more slowly absorbed in the digestive tract, thus, have a reduced glycemic potential relative to glucose. However, they do have a laxative effect when consumed in large amounts.
References
General references
The Sugar Association Inc. "Sugar - Sweet By Nature: Sugar Alcohols"
Science Toys. Ingredients. "Hydrogenated Starch Hydrosylate"
Organic compounds
Starch
Polyols
Sugar alcohols | Hydrogenated starch hydrolysates | [
"Chemistry"
] | 578 | [
"Organic compounds",
"Carbohydrates",
"Sugar alcohols"
] |
11,548,952 | https://en.wikipedia.org/wiki/Censoring%20%28statistics%29 | In statistics, censoring is a condition in which the value of a measurement or observation is only partially known.
For example, suppose a study is conducted to measure the impact of a drug on mortality rate. In such a study, it may be known that an individual's age at death is at least 75 years (but may be more). Such a situation could occur if the individual withdrew from the study at age 75, or if the individual is currently alive at the age of 75.
Censoring also occurs when a value occurs outside the range of a measuring instrument. For example, a bathroom scale might only measure up to 140 kg. If a 160 kg individual is weighed using the scale, the observer would only know that the individual's weight is at least 140 kg.
The problem of censored data, in which the observed value of some variable is partially known, is related to the problem of missing data, where the observed value of some variable is unknown.
Censoring should not be confused with the related idea of truncation. With censoring, observations result either in knowing the exact value that applies, or in knowing that the value lies within an interval. With truncation, observations never result in values outside a given range: values in the population outside the range are never seen, or never recorded if they are seen. Note that in statistics, truncation is not the same as rounding.
Types
Left censoring – a data point is below a certain value but it is unknown by how much.
Interval censoring – a data point is somewhere on an interval between two values.
Right censoring – a data point is above a certain value but it is unknown by how much.
Type I censoring occurs if an experiment has a set number of subjects or items and stops the experiment at a predetermined time, at which point any subjects remaining are right-censored.
Type II censoring occurs if an experiment has a set number of subjects or items and stops the experiment when a predetermined number are observed to have failed; the remaining subjects are then right-censored.
Random (or non-informative) censoring is when each subject has a censoring time that is statistically independent of their failure time. The observed value is the minimum of the censoring and failure times; subjects whose failure time is greater than their censoring time are right-censored.
Interval censoring can occur when observing a value requires follow-ups or inspections. Left and right censoring are special cases of interval censoring, with the beginning of the interval at zero or the end at infinity, respectively.
Estimation methods for using left-censored data vary, and not all methods of estimation may be applicable to, or the most reliable, for all data sets.
A common misconception with time interval data is to class as left censored intervals where the start time is unknown. In these cases we have a lower bound on the time interval, thus the data is right censored (despite the fact that the missing start point is to the left of the known interval when viewed as a timeline!).
Analysis
Special techniques may be used to handle censored data. Tests with specific failure times are coded as actual failures; censored data are coded for the type of censoring and the known interval or limit. Special software programs (often reliability oriented) can conduct a maximum likelihood estimation for summary statistics, confidence intervals, etc.
Epidemiology
One of the earliest attempts to analyse a statistical problem involving censored data was Daniel Bernoulli's 1766 analysis of smallpox morbidity and mortality data to demonstrate the efficacy of vaccination. An early paper to use the Kaplan–Meier estimator for estimating censored costs was Quesenberry et al. (1989); however, this approach was found to be invalid by Lin et al. unless all patients accumulated costs with a common deterministic rate function over time, and they proposed an alternative estimation technique known as the Lin estimator.
Operating life testing
Reliability testing often consists of conducting a test on an item (under specified conditions) to determine the time it takes for a failure to occur.
Sometimes a failure is planned and expected but does not occur: operator error, equipment malfunction, test anomaly, etc. The test result was not the desired time-to-failure but can be (and should be) used as a time-to-termination. The use of censored data is unintentional but necessary.
Sometimes engineers plan a test program so that, after a certain time limit or number of failures, all other tests will be terminated. These suspended times are treated as right-censored data. The use of censored data is intentional.
An analysis of the data from replicate tests includes both the times-to-failure for the items that failed and the time-of-test-termination for those that did not fail.
Censored regression
An earlier model for censored regression, the tobit model, was proposed by James Tobin in 1958.
Likelihood
The likelihood is the probability or probability density of what was observed, viewed as a function of parameters in an assumed model. To incorporate censored data points in the likelihood, each censored data point is represented by its probability under the model as a function of the model parameters, i.e. by a function of the CDF(s) instead of the density or probability mass.
The most general censoring case is interval censoring: $\Pr(a < x \le b) = F(b) - F(a)$, where $F(x)$ is the CDF of the probability distribution, and the two special cases are:
left censoring: $\Pr(x \le b) = F(b)$
right censoring: $\Pr(x > a) = 1 - F(a)$
For continuous probability distributions: $\Pr(a < x \le b) = \Pr(a < x < b)$
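As an illustration of how these CDF-based contributions enter a likelihood in practice, the following Python sketch (added here for illustration; it assumes a normal model with made-up parameter and observation values, and uses SciPy) evaluates the likelihood contribution of an exactly observed, a left-censored, a right-censored, and an interval-censored data point.

```python
# Illustrative sketch (not from the article): likelihood contributions of
# censored observations under an assumed Normal(mu=10, sigma=2) model.
from scipy.stats import norm

mu, sigma = 10.0, 2.0              # hypothetical model parameters
dist = norm(loc=mu, scale=sigma)

# Exactly observed point: density f(x)
L_exact = dist.pdf(9.3)

# Left-censored at b = 8 (value known only to be <= 8): F(b)
L_left = dist.cdf(8.0)

# Right-censored at a = 12 (value known only to be > 12): 1 - F(a)
L_right = dist.sf(12.0)            # sf(a) = 1 - cdf(a)

# Interval-censored on (a, b] = (9, 11]: F(b) - F(a)
L_interval = dist.cdf(11.0) - dist.cdf(9.0)

print(L_exact, L_left, L_right, L_interval)
```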
Example
Suppose we are interested in survival times, $T_1, T_2, \ldots, T_n$, but we don't observe $T_i$ for all $i$. Instead, we observe
$U_i = T_i$, with $\delta_i = 1$, if $T_i$ is actually observed, and
$U_i = c_i$, with $\delta_i = 0$, if all we know is that $T_i$ is longer than $c_i$.
When $\delta_i = 0$, $c_i$ is called the censoring time.
If the censoring times are all known constants, then the likelihood is
$$ L = \prod_{i:\,\delta_i = 1} f(u_i) \prod_{i:\,\delta_i = 0} S(u_i) $$
where $f(u_i)$ = the probability density function evaluated at $u_i$,
and $S(u_i)$ = the probability that $T_i$ is greater than $u_i$, called the survival function.
This can be simplified by defining the hazard function, the instantaneous force of mortality, as
$$ \lambda(u) = \frac{f(u)}{S(u)} $$
so
$$ f(u) = \lambda(u)\,S(u). $$
Then
$$ L = \prod_{i:\,\delta_i = 1} \lambda(u_i) \prod_{i} S(u_i). $$
For the exponential distribution, this becomes even simpler, because the hazard rate, $\lambda$, is constant, and $S(u) = e^{-\lambda u}$. Then:
$$ L(\lambda) = \lambda^{k} e^{-\lambda \sum_i u_i}, $$
where $k = \sum_i \delta_i$ is the number of observed failures.
From this we easily compute $\hat\lambda$, the maximum likelihood estimate (MLE) of $\lambda$, as follows:
$$ \log L(\lambda) = k \log\lambda - \lambda \sum_i u_i. $$
Then
$$ \frac{d}{d\lambda}\log L(\lambda) = \frac{k}{\lambda} - \sum_i u_i. $$
We set this to 0 and solve for $\lambda$ to get:
$$ \hat\lambda = \frac{k}{\sum_i u_i}. $$
Equivalently, the mean time to failure is:
$$ \frac{1}{\hat\lambda} = \frac{\sum_i u_i}{k}. $$
This differs from the standard MLE for the exponential distribution in that censored observations are considered only in the numerator of the mean time to failure: they contribute to $\sum_i u_i$ but not to the failure count $k$.
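A minimal numerical check of this estimator is sketched below (illustrative only; the true rate, sample size, and fixed censoring time are assumed values). It simulates exponentially distributed failure times with known censoring times and applies the closed-form estimate $\hat\lambda = k / \sum_i u_i$.

```python
# Illustrative sketch: MLE of the exponential rate with right-censored data.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.5                     # hypothetical true hazard rate
n = 10_000
t = rng.exponential(scale=1.0 / true_rate, size=n)   # latent failure times
c = np.full(n, 3.0)                 # fixed, known censoring times

u = np.minimum(t, c)                # observed times u_i
delta = (t <= c).astype(int)        # 1 = failure observed, 0 = right-censored

k = delta.sum()                     # number of observed failures
rate_hat = k / u.sum()              # MLE: censored times enter the sum but not k
print(f"true rate = {true_rate}, estimated rate = {rate_hat:.3f}")
```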
See also
Data analysis
Detection limit
Imputation (statistics)
Inverse probability weighting
Sampling bias
Saturation arithmetic
Survival analysis
Winsorising
References
Further reading
Blower, S. (2004), D. Bernoulli's "…", Reviews of Medical Virology, 14: 275–288.
Bagdonavičius, V., Kruopis, J., Nikulin, M.S. (2011), Non-parametric Tests for Censored Data, London: ISTE/Wiley.
External links
"Engineering Statistics Handbook", NIST/SEMATEK,
Statistical data types
Survival analysis
Reliability engineering
Unknown content | Censoring (statistics) | [
"Engineering"
] | 1,528 | [
"Systems engineering",
"Reliability engineering"
] |
11,549,201 | https://en.wikipedia.org/wiki/Pretomanid | Pretomanid is an antibiotic medication used for the treatment of multi-drug-resistant tuberculosis affecting the lungs. It is generally used together with bedaquiline and linezolid. It is taken by mouth.
The most common side effects include nerve damage, acne, vomiting, headache, low blood sugar, diarrhea, and liver inflammation. It is in the nitroimidazole class of medications.
Pretomanid was approved for medical use in the United States in August 2019, and in the European Union in July 2020. Pretomanid was developed by TB Alliance. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Pretomanid is indicated in combination with bedaquiline and linezolid, in adults, for the treatment of pulmonary extensively drug resistant, or treatment-intolerant or nonresponsive multidrug-resistant tuberculosis.
Mechanism of action
Pretomanid is activated in the mycobacterium by a deazaflavin-dependent nitroreductase (Ddn), an enzyme which uses dihydro-F420 (the reduced form of coenzyme F420) to convert pretomanid into nitric oxide and a highly reactive metabolite. This metabolite attacks the synthesis enzyme DprE2, which is important for the synthesis of cell wall arabinogalactan, to which mycolic acid would be attached. This mechanism is shared with delamanid. Clinical isolates resistant to this drug tend to have mutations in the biosynthetic pathway for coenzyme F420.
History
Development of this compound was initiated because of the urgent need for new antibacterial drugs effective against resistant strains of tuberculosis. Also, current anti-TB drugs are mainly effective against replicating and metabolically active bacteria, creating a need for drugs effective against persisting or latent bacterial infections as often occur in patients with tuberculosis.
Discovery and pre-clinical development
Pretomanid was first identified in 2000, in a series of 100 nitroimidazopyran derivatives synthesized and tested for antitubercular activity, by PathoGenesis (now a subsidiary of Novartis). Importantly, pretomanid has activity against static M. tuberculosis isolates that survive under anaerobic conditions, with bactericidal activity comparable to that of the existing drug metronidazole. Pretomanid requires metabolic activation by Mycobacterium for antibacterial activity. Pretomanid was not the most potent compound in the series against cultures of M. tuberculosis, but it was the most active in infected mice after oral administration. Oral pretomanid was active against tuberculosis in mice and guinea pigs at safely tolerated dosages for up to 28 days.
Society and culture
Legal status
The US Food and Drug Administration (FDA) approved pretomanid only in combination with bedaquiline and linezolid for treatment of a limited and specific population of adults with extensively drug resistant, treatment-intolerant or nonresponsive multidrug resistant pulmonary tuberculosis. Pretomanid was approved under the limited population pathway (LPAD pathway) for antibacterial and antifungal drugs. Pretomanid is only the third tuberculosis drug to receive approval from the FDA in more than 40 years.
The FDA granted pretomanid priority review and orphan drug designations. The FDA granted The Global Alliance for TB Drug Development (TB Alliance) the approval of pretomanid and a tropical disease priority review voucher.
Names
Pretomanid is the international nonproprietary name, the generic name, and the nonproprietary name. Pretomanid is referred to as "Pa" in regimen abbreviations, such as BPaL. The "preto" part of the compound's name honors Pretoria, South Africa, the home of a TB Alliance clinical development office where much of the drug's development took place, while the "-manid" stem designates compounds with similar chemical structures. This class of drug is variously referred to as nitroimidazoles or nitroimidazooxazines.
References
Anti-tuberculosis drugs
Orphan drugs
Prodrugs
Trifluoromethyl ethers
World Health Organization essential medicines | Pretomanid | [
"Chemistry"
] | 893 | [
"Chemicals in medicine",
"Prodrugs"
] |
11,549,406 | https://en.wikipedia.org/wiki/Quality%20control%20and%20genetic%20algorithms | The combination of quality control and genetic algorithms led to novel solutions of complex quality control design and optimization problems. Quality is the degree to which a set of inherent characteristics of an entity fulfils a need or expectation that is stated, general implied or obligatory. ISO 9000 defines quality control as "A part of quality management focused on fulfilling quality requirements". Genetic algorithms are search algorithms, based on the mechanics of natural selection and natural genetics.
Quality control
Alternative quality control (QC) procedures can be applied to a process to test statistically the null hypothesis, that the process conforms to the quality specifications and consequently is in control, against the alternative, that the process is out of control. When a true null hypothesis is rejected, a statistical type I error is committed. We have then a false rejection of a run of the process. The probability of a type I error is called probability of false rejection. When a false null hypothesis is accepted, a statistical type II error is committed. We fail then to detect a significant change in the probability density function of a quality characteristic of the process. The probability of rejection of a false null hypothesis equals the probability of detection of the nonconformity of the process to the quality specifications.
The QC procedure to be designed or optimized can be formulated as:
$$ Q_1(n_1, \mathbf{X}_1)\;\#\;Q_2(n_2, \mathbf{X}_2)\;\#\;\cdots\;\#\;Q_q(n_q, \mathbf{X}_q) \qquad (1) $$
where $Q_i(n_i, \mathbf{X}_i)$ denotes a statistical decision rule, $n_i$ denotes the size of the sample $S_i$, that is the number of the samples the rule is applied upon, and $\mathbf{X}_i$ denotes the vector of the rule-specific parameters, including the decision limits. Each symbol $\#$ denotes either the Boolean operator AND or the operator OR. Obviously, for $\#$ denoting AND, and for $n_1 = n_2 = \cdots = n_q = n$, that is for $S_1 = S_2 = \cdots = S_q = S$, (1) denotes an $n$-sampling QC procedure.
Each statistical decision rule is evaluated by calculating the respective statistic of the measured quality characteristic of the sample. Then, if the statistic is out of the interval between the decision limits, the decision rule is considered to be true. Many statistics can be used, including the following: a single value of the variable of a sample, the range, the mean, and the standard deviation of the values of the variable of the samples, the cumulative sum, the smoothed mean, and the smoothed standard deviation. Finally, the QC procedure is evaluated as a Boolean proposition. If it is true, then the null hypothesis is considered to be false, the process is considered to be out of control, and the run is rejected.
A quality control procedure is considered to be optimum when it minimizes (or maximizes) a context specific objective function. The objective function depends on the probabilities of detection of the nonconformity of the process and of false rejection. These probabilities depend on the parameters of the quality control procedure (1) and on the probability density functions (see probability density function) of the monitored variables of the process.
Genetic algorithms
Genetic algorithms are robust search algorithms that do not require knowledge of the objective function to be optimized and that search through large spaces quickly. Genetic algorithms have been derived from the processes of the molecular biology of the gene and the evolution of life. Their operators, cross-over, mutation, and reproduction, are isomorphic with the synonymous biological processes. Genetic algorithms have been used to solve a variety of complex optimization problems. Additionally, classifier systems and the genetic programming paradigm have shown that genetic algorithms can be used for tasks as complex as program induction.
Quality control and genetic algorithms
In general, we can not use algebraic methods to optimize the quality control procedures. Usage of enumerative methods would be very tedious, especially with multi-rule procedures, as the number of the points of the parameter space to be searched grows exponentially with the number of the parameters to be optimized. Optimization methods based on genetic algorithms offer an appealing alternative.
Furthermore, the complexity of the design process of novel quality control procedures is obviously greater than the complexity of the optimization of predefined ones.
In fact, since 1993, genetic algorithms have been used successfully to optimize and to design novel quality control procedures.
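As a hedged illustration of this approach, and not the specific algorithms used in the cited work, the sketch below applies a toy genetic algorithm to tune the decision limit of a simple single-rule QC procedure, trading the probability of detecting a hypothetical shift of the process mean against a penalty on the probability of false rejection. The shift size, penalty weight, and GA settings are all assumptions made for the example.

```python
# Toy sketch of GA-based QC design: tune the decision limit of a single
# control rule. All settings below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

SHIFT = 2.0      # assumed out-of-control shift of the process mean, in SD units
PENALTY = 10.0   # assumed weight of false rejections in the objective

def objective(limit):
    """Probability of detection minus a penalty on false rejection."""
    p_false_reject = 2.0 * norm.sf(limit)                         # in-control: mean 0, SD 1
    p_detect = norm.sf(limit - SHIFT) + norm.cdf(-limit - SHIFT)   # shifted mean
    return p_detect - PENALTY * p_false_reject

rng = np.random.default_rng(1)
pop = rng.uniform(1.0, 4.0, size=40)          # initial population of decision limits

for _ in range(100):
    fitness = np.array([objective(x) for x in pop])
    parents = pop[np.argsort(fitness)][-20:]                              # selection
    children = (rng.choice(parents, 20) + rng.choice(parents, 20)) / 2.0  # crossover
    children += rng.normal(0.0, 0.05, size=20)                            # mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmax([objective(x) for x in pop])]
print(f"optimized decision limit: about {best:.2f} SD from the target mean")
```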
See also
Quality control
Genetic algorithm
Optimization (mathematics)
References
External links
American Society for Quality (ASQ)
Hellenic Complex Systems Laboratory (HCSL)
Statistical process control
Genetic algorithms | Quality control and genetic algorithms | [
"Engineering",
"Biology"
] | 859 | [
"Genetics techniques",
"Statistical process control",
"Engineering statistics",
"Genetic algorithms"
] |
11,551,960 | https://en.wikipedia.org/wiki/Hayes%20similitude%20principle | The Hayes similitude principle enabled aerodynamicists to take the results of one series of tests or calculations and apply them to the design of an entire family of similar configurations where neither tests nor detailed calculations are available.
The similitude principle was developed by Wallace D. Hayes, a pioneer in hypersonic flow, which is considered to begin at about five times the speed of sound, or Mach 5, and is described in his classic book Hypersonic Flow Theory co-written with Ronald Probstein and first published in 1959.
The behavior of the physical processes in actual problems is affected by so many physical quantities that a complete mathematical description is usually very difficult and sometimes practically impossible, due to the complicated nature of the phenomena. We know from experience that if two systems are geometrically similar there usually exists some kind of similarity under certain conditions, such as kinematic similarity, dynamic similarity, thermal similarity, and similarity of concentration distribution, and that if similarity conditions are satisfied we can greatly reduce the number of independent variables required to describe the behavior of the process. In this way, we can systematically understand, describe, and even predict the behavior of physical processes in real problems in a relatively simple manner. This principle is known as the principle of similitude. Dimensional analysis is a method of deducing logical groupings of the variables, through which we can describe similarity criteria of the processes.
Physical quantities such as length [L], mass [M], time [T], and temperature are dimensional quantities, and the magnitude of each quantity can be described by multiples of the unit of each dimension, namely m, kg, s, and K, respectively. Through experience, we can select a certain number of fundamental dimensions, such as those mentioned above, and express all other dimensional quantities in terms of products of powers of these fundamental dimensions. Furthermore, in describing the behavior of physical processes, we know that there is an implicit principle that we cannot add or subtract physical quantities of different dimensions. This means that the equations governing physical processes must be dimensionally consistent and each term of the equation must have the same dimensions. This principle is known as the principle of dimensional homogeneity.
(Source: Mass Transfer: From Fundamentals to Modern Industrial Applications, Weinheim: Wiley-VCH, 2006.)
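To make the idea of products of powers of fundamental dimensions concrete, the short sketch below (an illustration added here, not taken from the cited book) represents each quantity by its exponent vector over [M, L, T, Θ] and checks that the Reynolds number ρvL/μ is dimensionless, which is what lets it serve as a similarity criterion.

```python
# Illustrative sketch: dimensional bookkeeping with exponent vectors over [M, L, T, Theta].
import numpy as np

# Each quantity is a vector of exponents of (mass, length, time, temperature).
DIMS = {
    "density":   np.array([1, -3,  0, 0]),   # kg m^-3
    "velocity":  np.array([0,  1, -1, 0]),   # m s^-1
    "length":    np.array([0,  1,  0, 0]),   # m
    "viscosity": np.array([1, -1, -1, 0]),   # Pa*s = kg m^-1 s^-1
}

# Reynolds number Re = (density * velocity * length) / viscosity
re_dims = DIMS["density"] + DIMS["velocity"] + DIMS["length"] - DIMS["viscosity"]

print("Re exponents:", re_dims)              # -> [0 0 0 0]
print("dimensionless:", not re_dims.any())   # True: a valid similarity criterion
```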
References
External links
Wallace Hayes, Pioneer of Supersonic Flight, Princeton University obituary
Wallace Hayes, 82, Aeronautics Expert, Dies, The New York Times obituary
Wallace D. Hayes Memorial Tributes: National Academy of Engineering, Volume 1, pp. 151–156.
Aerodynamics | Hayes similitude principle | [
"Chemistry",
"Engineering"
] | 523 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
10,109,430 | https://en.wikipedia.org/wiki/Reynolds%20analogy | The Reynolds Analogy is popularly known to relate turbulent momentum and heat transfer. That is because in a turbulent flow (in a pipe or in a boundary layer) the transport of momentum and the transport of heat largely depends on the same turbulent eddies: the velocity and the temperature profiles have the same shape.
The main assumption is that heat flux q/A in a turbulent system is analogous to momentum flux τ, which suggests that the ratio τ/(q/A) must be constant for all radial positions.
The complete Reynolds analogy is:
$$ \frac{f}{2} = \frac{h}{c_p\,G} = \frac{k'_c}{v_{av}} $$
where $f$ is the Fanning friction factor, $h$ is the heat-transfer coefficient, $c_p$ is the specific heat of the fluid, $G$ is the mass velocity, $k'_c$ is the mass-transfer coefficient, and $v_{av}$ is the average velocity.
Experimental data for gas streams agree approximately with the above equation if the Schmidt and Prandtl numbers are near 1.0 and only skin friction is present in flow past a flat plate or inside a pipe. When liquids are present and/or form drag is present, the analogy is conventionally known to be invalid.
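Under these restrictions, the analogy lets one estimate a heat-transfer coefficient directly from the friction factor via St = f/2. The sketch below is illustrative only; the fluid properties and friction factor are assumed round numbers for air in a smooth pipe.

```python
# Illustrative sketch: estimating h from the Fanning friction factor via St = f/2.
# Fluid properties below are assumed round numbers for air near room temperature.
rho = 1.2        # kg/m^3, density
v = 10.0         # m/s, mean velocity
cp = 1005.0      # J/(kg*K), specific heat
f = 0.006        # Fanning friction factor (assumed, e.g. from a smooth-pipe correlation)

G = rho * v                  # mass velocity, kg/(m^2*s)
St = f / 2                   # Reynolds analogy: Stanton number = f/2 (Pr ~ 1)
h = St * cp * G              # heat-transfer coefficient, W/(m^2*K)
print(f"St = {St:.4f}, h ~ {h:.1f} W/(m^2*K)")
```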
In 2008, the qualitative form of validity of Reynolds' analogy was revisited for laminar flow of an incompressible fluid with variable dynamic viscosity (μ). It was shown that the inverse dependence of Reynolds number (Re) and skin friction coefficient (cf) is the basis for validity of the Reynolds' analogy in laminar convective flows with constant and variable μ. For μ = const. it reduces to the popular form of Stanton number (St) increasing with increasing Re, whereas for variable μ it reduces to St increasing with decreasing Re. Consequently, the Chilton–Colburn analogy of St·Pr2/3 increasing with increasing cf is qualitatively valid whenever the Reynolds' analogy is valid. Further, the validity of the Reynolds' analogy is linked to the applicability of Prigogine's theorem of minimum entropy production. Thus, Reynolds' analogy is valid for flows that are close to developed, for which changes in the gradients of field variables (velocity and temperature) along the flow are small.
See also
Reynolds number
Chilton and Colburn J-factor analogy
References
Transport phenomena
Analogy | Reynolds analogy | [
"Physics",
"Chemistry",
"Engineering"
] | 409 | [
"Transport phenomena",
"Chemical engineering",
"Physical phenomena"
] |
10,109,665 | https://en.wikipedia.org/wiki/Chilton%20and%20Colburn%20J-factor%20analogy | Chilton–Colburn J-factor analogy (also known as the modified Reynolds analogy) is a successful and widely used analogy between heat, momentum, and mass transfer. The basic mechanisms and mathematics of heat, mass, and momentum transport are essentially the same. Among many analogies (like Reynolds analogy, Prandtl–Taylor analogy) developed to directly relate heat transfer coefficients, mass transfer coefficients and friction factors, Chilton and Colburn J-factor analogy proved to be the most accurate. The factors are named after Thomas H. Chilton and Allan Philip Colburn (1904–1955).
It is written as follows,
$$ j_H = \mathrm{St}\,\mathrm{Pr}^{2/3} = \frac{f}{2}, \qquad j_M = \mathrm{St}_m\,\mathrm{Sc}^{2/3} = \frac{f}{2} $$
so that $j_H = j_M = \frac{f}{2}$, where $\mathrm{St} = h/(c_p G)$ is the Stanton number for heat transfer, $\mathrm{St}_m = k'_c/v$ is the Stanton number for mass transfer, and $f$ is the Fanning friction factor.
This equation permits the prediction of an unknown transfer coefficient when one of the other coefficients is known. The analogy is valid for fully developed turbulent flow in conduits with Re > 10000, 0.7 < Pr < 160, and tubes where L/d > 60 (the same constraints as the Sieder–Tate correlation). The wider range of data can be correlated by Friend–Metzner analogy.
Relationship between the heat- and mass-transfer coefficients:
$$ \frac{h}{k'_c} = \rho\,c_p \left(\frac{\mathrm{Sc}}{\mathrm{Pr}}\right)^{2/3} $$
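The sketch below (illustrative values only; the property values and measured heat-transfer coefficient are assumptions) uses these j-factor relations to predict a mass-transfer coefficient from a heat-transfer coefficient for the same flow.

```python
# Illustrative sketch: predicting a mass-transfer coefficient from a heat-transfer
# coefficient with the Chilton-Colburn analogy. Property values are assumed (air-like).
rho = 1.2        # kg/m^3
cp = 1005.0      # J/(kg*K)
Pr = 0.7         # Prandtl number (assumed)
Sc = 2.0         # Schmidt number of the diffusing species (assumed)
h = 60.0         # W/(m^2*K), measured heat-transfer coefficient (assumed)

# j_H = j_M  =>  h/(cp*rho*v) * Pr**(2/3) = (kc/v) * Sc**(2/3)
# which rearranges to kc = h / (rho*cp) * (Pr/Sc)**(2/3)
kc = h / (rho * cp) * (Pr / Sc) ** (2.0 / 3.0)
print(f"kc ~ {kc:.4f} m/s")
```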
See also
Reynolds analogy
References
Geankoplis, C.J. Transport processes and separation process principles (2003). Fourth Edition, p. 475.
External links
Lecture notes on mass transfer coefficients: http://facstaff.cbu.edu/rprice/lectures/mtcoeff.html
Transport phenomena
Analogy | Chilton and Colburn J-factor analogy | [
"Physics",
"Chemistry",
"Engineering"
] | 286 | [
"Transport phenomena",
"Chemical engineering",
"Physical phenomena"
] |
10,111,265 | https://en.wikipedia.org/wiki/Tributyltin%20hydride | Tributyltin hydride is an organotin compound with the formula (C4H9)3SnH. It is a colorless liquid that is soluble in organic solvents. The compound is used as a source of hydrogen atoms in organic synthesis.
Synthesis and characterization
The compound is produced by reduction of tributyltin oxide with polymethylhydrosiloxane:
2 "[MeSi(H)O]n" + (Bu3Sn)2O → "[MeSi(OH)O]n" + 2 Bu3SnH
It can also be synthesized by a reduction of tributyltin chloride with lithium aluminium hydride.
The hydride is a distillable liquid that is mildly sensitive to air, decomposing to (Bu3Sn)2O. Its IR spectrum exhibits a strong band at 1814 cm−1 for νSn−H.
Applications
It is a specialized reagent in organic synthesis. Combined with azobisisobutyronitrile (AIBN) or by irradiation with light, tributyltin hydride converts organic halides (and related groups) to the corresponding hydrocarbon. This process occurs via a radical chain mechanism involving the radical Bu3Sn•. The radical abstracts a H• from another equivalent of tributyltin hydride, propagating the chain. Tributyltin hydride's utility as a H• donor can be attributed to its relatively weak bond strength (78 kcal/mol).
It is the reagent of choice for hydrostannylation reactions:
RC2R′ + HSnBu3 → RC(H)=C(SnBu3)R′
See also
Tributyltin
Trimethylsilyl
References
Further reading
Hayashi, K.; Iyoda, J.; Shiihara, I. "Reaction of organotin oxides, alkoxides and acyloxides with organosilicon hydrides. New preparative method of organotin hydrides " J. Organomet. Chem. 1967, 10, 81.
Organotin compounds
Radical initiators
Metal hydrides
Tin(IV) compounds
Butyl compounds | Tributyltin hydride | [
"Chemistry",
"Materials_science"
] | 479 | [
"Inorganic compounds",
"Radical initiators",
"Reducing agents",
"Metal hydrides",
"Polymer chemistry",
"Reagents for organic chemistry"
] |
10,112,524 | https://en.wikipedia.org/wiki/Transforming%20growth%20factor%20beta%20superfamily | The transforming growth factor beta (TGF-β) superfamily is a large group of structurally related cell regulatory proteins that was named after its first member, TGF-β1, originally described in 1983. They interact with TGF-beta receptors.
Many proteins have since been described as members of the TGF-β superfamily in a variety of species, including invertebrates as well as vertebrates and categorized into 23 distinct gene types that fall into four major subfamilies:
The TGF-β subfamily
The bone morphogenetic proteins and the growth differentiation factors
The activin and inhibin subfamilies
The left-right determination factors
A group encompassing various divergent members
Transforming growth factor-beta (TGF-beta) is a multifunctional peptide that controls proliferation, differentiation and other functions in many cell types. TGF-beta-1 is a peptide of 112 amino acid residues derived by proteolytic cleavage from the C-terminal of a precursor protein. These proteins interact with a conserved family of cell surface serine/threonine-specific protein kinase receptors, and generate intracellular signals using a conserved family of proteins called SMADs. They play fundamental roles in the regulation of basic biological processes such as growth, development, tissue homeostasis and regulation of the immune system.
Structure
Proteins from the TGF-beta superfamily are only active as homo- or heterodimers, with the two chains linked by a single disulfide bond. From X-ray studies of TGF-beta-2, it is known that all the other cysteines are involved in intrachain disulfide bonds. As shown in the following schematic representation, there are four disulfide bonds in the TGF-betas and in inhibin beta chains, while the other members of this superfamily lack the first bond.
[Schematic not reproduced: the original diagram marked each conserved cysteine ('C') and indicated the intrachain disulfide pairings and the single interchain disulfide bond.]
Examples
Human genes encoding proteins that contain this domain include:
AMH; ARTN; BMP2; BMP3; BMP4; BMP5; BMP6; BMP7; BMP8A; BMP8B; BMP10; BMP15;
GDF1; GDF2; GDF3; GDF5; GDF6; GDF7; GDF9; GDF10; GDF11; GDF15; GDNF; INHA; INHBA; INHBB; INHBC; INHBE; LEFTY1; LEFTY2;
MSTN; NODAL; NRTN; PSPN; TGFB1; TGFB2; TGFB3;
References
Developmental genes and proteins
TGFβ domain
Protein domains
Membrane proteins
Protein families | Transforming growth factor beta superfamily | [
"Biology"
] | 732 | [
"Protein classification",
"Membrane proteins",
"Protein domains",
"Developmental genes and proteins",
"Protein families",
"Induced stem cells"
] |
10,113,122 | https://en.wikipedia.org/wiki/Quantum%20radar | Quantum radar is a speculative remote-sensing technology based on quantum-mechanical effects, such as the uncertainty principle or quantum entanglement. Broadly speaking, a quantum radar can be seen as a device working in the microwave range, which exploits quantum features, from the point of view of the radiation source and/or the output detection, and is able to outperform a classical counterpart. One approach is based on the use of input quantum correlations (in particular, quantum entanglement) combined with a suitable interferometric quantum detection at the receiver (strongly related to the protocol of quantum illumination).
Paving the way for a technologically viable prototype of a quantum radar involves the resolution of a number of experimental challenges as discussed in some review articles, the latter of which pointed out "inaccurate reporting" in the media. Current experimental designs seem to be limited to very short ranges, of the order of one meter, suggesting that potential applications might instead be for near-distance surveillance or biomedical scanning.
Concept behind a microwave-range model
A microwave-range model of a quantum radar was proposed in 2015 by an international team and is based on the protocol of Gaussian quantum illumination. The basic concept is to create a stream of entangled visible-frequency photons and split it in half. One half, the "signal beam", goes through a conversion to microwave frequencies in a way that preserves the original quantum state. The microwave signal is then sent and received as in a normal radar system. When the reflected signal is received it is converted back into visible photons and compared with the other half of the original entangled beam, the "idler beam".
Although most of the original entanglement will be lost due to quantum decoherence as the microwaves travel to the target objects and back, enough quantum correlations will still remain between the reflected-signal and the idler beams. Using a suitable quantum detection scheme, the system can pick out just those photons that were originally sent by the radar, completely filtering out any other sources. If the system can be made to work in the field, it represents an enormous advance in detection capability.
One way to defeat conventional radar systems is to broadcast signals on the same frequencies used by the radar, making it impossible for the receiver to distinguish between their own broadcasts and the spoofing signal (or "jamming"). However, such systems cannot know, even in theory, what the original quantum state of the radar's internal signal was. Lacking such information, their broadcasts will not match the original signal and will be filtered out in the correlator. Environmental sources, like ground clutter and aurora, will similarly be filtered out.
History
One design was proposed in 2005 by defence contractor Lockheed Martin. The patent on this work was granted in 2013. The aim was to create a radar system providing a better resolution and higher detail than classical radar could provide. However no quantum advantage or better resolution was theoretically proven by this design.
In 2015, an international team of researchers, showed the first theoretical design of a quantum radar able to achieve a quantum advantage over a classical setup. In this model of quantum radar, one considers the remote sensing of a low-reflectivity target that is embedded within a bright microwave background, with detection performance well beyond the capability of a classical microwave radar. By using a suitable wavelength "electro-optomechanical converter", this scheme generates excellent quantum entanglement between a microwave signal beam, sent to probe the target region, and an optical idler beam, retained for detection. The microwave return collected from the target region is subsequently converted into an optical beam and then measured jointly with the idler beam. Such a technique extends the powerful protocol of quantum illumination to its more natural spectral domain, namely microwave wavelengths.
In 2019, a three-dimensional enhancement quantum radar protocol was proposed. It could be understood as a quantum metrology protocol for the localization of a non-cooperative point-like target in three-dimensional space. It employed quantum entanglement to achieve an uncertainty in localization that is quadratically smaller for each spatial direction than what could be achieved by using independent, unentangled photons.
Review articles that delve more into the history and designs of quantum radar, in addition to the ones mentioned in the introduction above, are available on arXiv.
A quantum radar is challenging to realize with current technology, even though a preliminary experimental prototype has been demonstrated.
Challenges and limitations
There are a number of non-trivial challenges behind the experimental implementation of a truly-quantum radar prototype, even at short ranges. According to current quantum illumination designs, an important point is the management of the idler pulse that, ideally, should be jointly detected together with the signal pulse returning from the potential target. However, this would require the use of a quantum memory with a long coherence time, able to work at times comparable with the round-trip of the signal pulse. Other solutions may degrade the quantum correlations between signal and idler pulses to a point where the quantum advantage may disappear. This is a problem that also affects optical designs of quantum illumination. For instance, storing the idler pulse in a delay line by using a standard optical fiber would degrade the system and limit the maximum range of a quantum illumination radar to about 11 km. This value has to be interpreted as a theoretical limit of this design, not to be confused with an achievable range. Other limitations include the fact that current quantum designs only consider a single polarization, azimuth, elevation, range, and Doppler bin at a time.
Media speculation about applications
There is media speculation that a quantum radar could operate at long ranges detecting stealth aircraft, filter out deliberate jamming attempts, and operate in areas of high background noise, e.g., due to ground clutter.
Related to the above, there is considerable media speculation of the use of quantum radar as a potential anti-stealth technology. Stealth aircraft are designed to reflect signals away from the radar, typically by using rounded surfaces and avoiding anything that might form a partial corner reflector. This so reduces the amount of signal returned to the radar's receiver that the target is (ideally) lost in the thermal background noise. Although stealth technologies will still be just as effective at reflecting the original signal away from the receiver of a quantum radar, it is the system's ability to separate out the remaining tiny signal, even when swamped by other sources, that allows it to pick out the return even from highly stealthy designs. At the moment these long-range applications are speculative and not supported by experimental data.
More recently, the generation of large numbers of entangled photons for radar detection has been studied by the University of Waterloo.
References
Quantum optics
Quantum information science
Radar | Quantum radar | [
"Physics"
] | 1,378 | [
"Quantum optics",
"Quantum mechanics"
] |
10,113,242 | https://en.wikipedia.org/wiki/String-net%20liquid | In condensed matter physics, a string-net is an extended object whose collective behavior has been proposed as a physical mechanism for topological order by Michael A. Levin and Xiao-Gang Wen. A particular string-net model may involve only closed loops; or networks of oriented, labeled strings obeying branching rules given by some gauge group; or still more general networks.
Overview
The string-net model is claimed to show the derivation of photons, electrons, and U(1) gauge charge, small (relative to the Planck mass) but nonzero masses, and suggestions that the leptons, quarks, and gluons can be modeled in the same way. In other words, string-net condensation provides a unified origin for photons and electrons (or gauge bosons and fermions). It can be viewed as an origin of light and electron (or gauge interactions and Fermi statistics).
However, their model does not account for the chiral coupling between the fermions and the SU(2) gauge bosons in the standard model.
For strings labeled by the positive integers, string-nets are the spin networks studied in loop quantum gravity. This has led to the proposal by Levin and Wen, and Smolin, Markopoulou and Konopka that loop quantum gravity's spin networks can give rise to the standard model of particle physics through this mechanism, along with fermi statistics and gauge interactions. To date, a rigorous derivation from LQG's spin networks to Levin and Wen's spin lattice has yet to be done, but the project to do so is called quantum graphity, and in a more recent paper, Tomasz Konopka, Fotini Markopoulou, Simone Severini argued that there are some similarities to spin networks (but not necessarily an exact equivalence) that gives rise to U(1) gauge charge and electrons in the string net mechanism.
Herbertsmithite may be an example of string-net matter.
Examples
Z2 spin liquid
Z2 spin liquid obtained using slave-particle approach may be the first theoretical example of string-net liquid.
The toric code
The toric code is a two-dimensional spin-lattice that acts as a quantum error-correcting code. It is defined on a two-dimensional lattice with toric boundary conditions with a spin-1/2 on each link. It can be shown that the ground-state of the standard toric code Hamiltonian is an equal-weight superposition of closed-string states. Such a ground-state is an example of a string-net condensate which has the same topological order
as the Z2 spin liquid above.
References
Quantum phases
Condensed matter physics
Chemical engineering
Phases of matter | String-net liquid | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 560 | [
"Quantum phases",
"Chemical engineering",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"nan",
"Matter"
] |
10,113,932 | https://en.wikipedia.org/wiki/Talking%20clock | A talking clock (also called a speaking clock and an auditory clock) is a timekeeping device that presents the time as sounds. It may present the time solely as sounds, such as a phone-based time service (see "Speaking clock") or a clock for the visually impaired, or may have a sound feature in addition to an analog or digital face.
History
Although they would not be considered to be speaking, clocks have incorporated noisemakers such as clangs, chimes, gongs, melodies, and the sounds of cuckoos or roosters from almost the beginning of the mechanical clock. Soon after Thomas Edison's invention of the phonograph, the earliest attempts to make a clock that incorporated a voice were made. Around 1878, Frank Lambert invented a machine that used a voice recorded on a lead cylinder to call out the hours. Lambert used lead in place of Edison's soft tinfoil. In 1992, the Guinness Book of World Records recognized this as the oldest known sound recording that was playable (though that status now rests with a phonautogram of Édouard-Léon Scott de Martinville, recorded in 1857). It is on display at the National Watch and Clock Museum in Columbia, Pennsylvania.
Although there have been rumors that other talking clocks may have been produced afterward, it was not until around 1910 that another talking clock was introduced, when Bernhard Hiller created a clock that used a belt with a recording on it to announce the time. However, these belts were often broken by the hand-tightening required, and all attempts to reproduce the celluloid ribbon have so far failed.
In 1933, the first practical use of talking clocks was seen when Ernest Esclangon created a talking telephone time service in Paris, France. On its first day, February 14, 1933, more than 140,000 calls were received. London began a similar service three years later. This type of talking time service is still around, and more than a million calls per year are received for the NIST's Telephone Time-of-Day Service.
In 1954, Ted Duncan, Inc., released the Hickory Dickory Clock, a crank toy intended for children. This clock used a record, needle, and tone arm to produce its sound.
In 1968, the first truly portable talking clock, the Mattel-a-Time Talking Clock, was released.
In 1979, Sharp released the world's first quartz-based talking clock, the Talking Time CT-660E (German version CT-660G). Its silver transistor-radio-like case contained complex LSI circuitry with three SMD ICs (likely a clock CPU, a speech CPU, and a sound IC), producing a Speak & Spell-like synthetic voice. A small LCD sat on the front rim. The alarm spoke the time and could also play a melody (Boccherini's Minuet); after 5 minutes the alarm repeated with the words "Please hurry!". It also had stopwatch and countdown timer modes. The tiny controls for turning off the alarm or setting functions were hard to reach under a small bottom lid.
In 1984, the Hattori Seiko Co. released their famous pyramid-shaped talking clock, the Pyramid Talk. As a futuristic design object even its LCD was hidden at the bottom, requiring the user to push the clock's top to hear it talk.
Current talking clocks often include many more features than just giving the time; in these, the ability to speak the time is part of a wide range of voice capabilities, such as reading the weather and other information to the user.
Uses and purposes
Teaching timetelling
After the telephone time service, the next practical application of the talking clock was in the teaching of timetelling to children. The first talking clock to be used for this purpose was the Mattel "Mattel-a-Time Talking Clock" of 1968. Several other clocks of this type followed, including one featuring Thomas the Tank Engine. One of the latest ones, the "Talking Clever Clock", includes a quiz button which asks questions such as "What time is it?", "What time will it be in an hour?", and "How much time has passed between 1:00 and 2:30?" Other educational talking clocks come in a kit designed to be assembled by children.
Talking clocks can also be used with children whose learning disabilities may be partially offset by the reinforcement provided by hearing the time as well as seeing it.
Assisting the blind
Talking clocks have found a natural home as an assistive technology for people who are blind or visually impaired. There are over 150 tabletop clocks and 50 types of watches that talk. Manufacturers of such clocks include Sharp, Panasonic, RadioShack, and Reizen. In addition, one manufacturer purportedly produced a clock that would announce the time upon detecting a user's whistling signal.
Branding/Advertising
Many companies have used talking clocks as a novelty item to promote their brand. In 1987, the H. J. Heinz Company released a clock with the figure of "Mr. Aristocrat", a tomato with a motif similar to Mr. Peanut. At alarm time, the clock said, "It's time to get up; get up right away! Wait any longer and it's 'ketchup' all day! Remember, Heinz is the thick rich one." At roughly the same time, Pillsbury created a similar clock with the character of Little Sprout. In recent years, the Coca-Cola polar bear, the Red and Yellow M&M's characters, the Pillsbury Doughboy, a Campbell's Soup girl, and others have at one time appeared on a talking clock. One of the more interesting branded clocks was produced by Energizer and was a soft, battery-shaped clock whose alarm was turned off by punching it or throwing it against a hard surface.
Entertainment/conversation pieces
The inexpensiveness of modern speech technology has allowed manufacturers to include talking clock capabilities into a wide range of products. Many of these are intended as conversation pieces or speak merely for the entertainment of hearing sounds or words spoken by an inanimate object. Such timepieces include Darth Vader clocks, calculators with time features, and even a painting of Leonardo da Vinci's The Last Supper that announces the time on the hour along with a quote from Jesus.
Other themes of talking timepieces include fortune-telling, astrology, clocks with moving lips, animated creatures, sports and athletes, and movies, among others.
Technology
Most modern talking clocks are based on speech-synthesis integrated circuits that generate speech from sampled, stored data. The rapid technological progress of the 1980s enabled today's high-quality talking products. Early talking clocks employed chips that linked phonemes to generate speech. These products could generate unlimited speech, but it was of relatively poor quality: robotic at best and, at worst, unintelligible. Today's higher-quality speech is produced by sampled-data systems that take elements of an actual human voice. Modern voice-synthesis technologies can produce synthesized vocabularies that retain the style of the speaker exactly and are not limited to standard English; they can be as varied as Scottish accents, Japanese, or even the voice of a young child. Such voices are all generated using tiny, inexpensive voice chips that are readily available.
Almost all of the latest voice-chipped talking clocks use a female human voice to announce the time. Dr. Mark McKinley, the president of the International Society of Talking Clock Collectors, proposes three possible explanations for this: the female voice may be considered more soothing psychologically; it may be a relic of the female voice being historically associated with secretarial (administrative assistant) functions; or a feminine voice may simply be softer and less intrusive.
Many talking clocks include a light sensor or a setting that will automatically silence them between certain hours (usually between 10 p.m. and 8 a.m.).
Ozen Box
Many talking clocks of the 1970s utilized an Ozen box, which is a mechanism similar to a phonograph, in which a needle-like stylus tracks on a 2.25 inch platter similar to a vinyl phonograph record. The Janex Corporation produced most of the clocks which use this device, and they are highly prized among collectors.
Characters
A very large number of popular characters have appeared on talking clocks. The following list is not exhaustive, nor is it intended to be: the International Society of Talking Clock Collectors (ISTCC) maintains a museum collection of over 800 talking clocks.
Mickey Mouse
Several Looney Tunes characters (including Bugs Bunny, Daffy Duck, Tweety, et al.)
The Simpsons
Strawberry Shortcake
Superheroes (including Superman, Spider-Man, The Incredible Hulk, et al.)
Furby
Biz Markie
The Smurfs
SpongeBob SquarePants
Mario
See also
Speaking clock
References
External links
ISTCC Virtual Museum.
Frank Lambert's talking clock.
More on Lambert's clock.
Clocks
Assistive technology
Educational hardware
Novelty items | Talking clock | [
"Physics",
"Technology",
"Engineering"
] | 1,840 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
10,115,658 | https://en.wikipedia.org/wiki/Molecular%20self-assembly | In chemistry and materials science, molecular self-assembly is the process by which molecules adopt a defined arrangement without guidance or management from an outside source. There are two types of self-assembly: intermolecular and intramolecular. Commonly, the term molecular self-assembly refers to the former, while the latter is more commonly called folding.
Supramolecular systems
Molecular self-assembly is a key concept in supramolecular chemistry. This is because assembly of molecules in such systems is directed through non-covalent interactions (e.g., hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi-stacking interactions, and/or electrostatic) as well as electromagnetic interactions. Common examples include the formation of colloids, biomolecular condensates, micelles, vesicles, liquid crystal phases, and Langmuir monolayers by surfactant molecules. Further examples of supramolecular assemblies demonstrate that a variety of different shapes and sizes can be obtained using molecular self-assembly.
Molecular self-assembly allows the construction of challenging molecular topologies. One example is Borromean rings, interlocking rings wherein removal of one ring unlocks each of the other rings. DNA has been used to prepare a molecular analog of Borromean rings. More recently, a similar structure has been prepared using non-biological building blocks.
Biological systems
Molecular self-assembly underlies the construction of biological macromolecular assemblies and biomolecular condensates in living organisms, and so is crucial to the function of cells. It is exhibited in the self-assembly of lipids to form the cell membrane, the formation of double-helical DNA through hydrogen bonding of the individual strands, and the assembly of proteins to form quaternary structures. Molecular self-assembly of incorrectly folded proteins into insoluble amyloid fibers is responsible for infectious prion-related neurodegenerative diseases. Molecular self-assembly of nanoscale structures plays a role in the growth of the remarkable β-keratin lamellae/setae/spatulae structures used to give geckos the ability to climb walls and adhere to ceilings and rock overhangs.
Protein multimers
When multiple copies of a polypeptide encoded by a gene self-assemble to form a complex, this protein structure is referred to as a "multimer". Genes that encode multimer-forming polypeptides appear to be common. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation. Jehle pointed out that, when immersed in a liquid and intermingled with other molecules, charge fluctuation forces favor the association of identical molecules as nearest neighbors.
Nanotechnology
Molecular self-assembly is an important aspect of bottom-up approaches to nanotechnology. Using molecular self-assembly, the final (desired) structure is programmed in the shape and functional groups of the molecules. Self-assembly is referred to as a 'bottom-up' manufacturing technique in contrast to a 'top-down' technique such as lithography where the desired final structure is carved from a larger block of matter. In the speculative vision of molecular nanotechnology, microchips of the future might be made by molecular self-assembly. An advantage to constructing nanostructure using molecular self-assembly for biological materials is that they will degrade back into individual molecules that can be broken down by the body.
DNA nanotechnology
DNA nanotechnology is an area of current research that uses the bottom-up, self-assembly approach for nanotechnological goals. DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information, to make structures such as complex 2D and 3D lattices (both tile-based as well as using the "DNA origami" method) and three-dimensional structures in the shapes of polyhedra. These DNA structures have also been used as templates in the assembly of other molecules such as gold nanoparticles and streptavidin proteins.
Two-dimensional monolayers
The spontaneous assembly of a single layer of molecules at interfaces is usually referred to as two-dimensional self-assembly. One of the common examples of such assemblies are Langmuir-Blodgett monolayers and multilayers of surfactants. Non-surface active molecules can assemble into ordered structures as well. Early direct proofs showing that non-surface active molecules can assemble into higher-order architectures at solid interfaces came with the development of scanning tunneling microscopy and shortly thereafter. Eventually two strategies became popular for the self-assembly of 2D architectures, namely self-assembly following ultra-high-vacuum deposition and annealing and self-assembly at the solid-liquid interface. The design of molecules and conditions leading to the formation of highly-crystalline architectures is considered today a form of 2D crystal engineering at the nanoscopic scale.
See also
Assembly theory
Foldamer
Ice-nine
Macromolecular assembly
Self-assembly of nanoparticles
Supramolecular assembly
References
Supramolecular chemistry
Self-organization | Molecular self-assembly | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,147 | [
"Self-organization",
"Supramolecular chemistry",
"nan",
"Nanotechnology",
"Dynamical systems"
] |
10,117,078 | https://en.wikipedia.org/wiki/Acid%E2%80%93base%20extraction | Acid–base extraction is a subclass of liquid–liquid extractions and involves the separation of chemical species from other acidic or basic compounds. It is typically performed during the work-up step following a chemical synthesis to purify crude compounds and results in the product being largely free of acidic or basic impurities. A separatory funnel is commonly used to perform an acid-base extraction.
Acid-base extraction utilizes the difference in solubility of a compound in its acid or base form to induce separation. Typically, the desired compound is changed into its charged acid or base form, causing it to become soluble in aqueous solution and thus be extracted from the non-aqueous (organic) layer. Acid-base extraction is a simple alternative to more complex methods like chromatography. It is not possible to separate chemically similar acids or bases using this simple method.
Background theory
Acid-base extraction works on the fundamental principle that salts are ionic compounds with a high solubility in water, while neutral molecules typically lack solubility in water.
Consider a mixture of acidic and basic compounds dissolved in an organic solvent. Adding aqueous acid will cause the acidic component to stay uncharged, while the basic component will be protonated to form a salt. The uncharged acid component will remain dissolved in the organic solvent, while the highly charged basic salt will migrate to the aqueous solvent. Since the acidic and basic components are now in two different layers, they can easily be separated.
Alternatively, adding aqueous base will cause the acidic component to be deprotonated and form a salt, while the basic component will remain uncharged. In this case, the uncharged base will stay in the organic layer, while the highly charged acidic salt will migrate to the aqueous layer.
If the organic acid component is relatively weak and has a pKa value of ~5 (such as a carboxylic acid), adding additional acid can further improve separation by lowering the pH of the solution. This suppresses the ionization of the organic acid component and limits its tendency to enter the aqueous layer. The same principle applies to an organic base when it is relatively weak.
Although acid-base extractions are most commonly used to separate acids from bases, they can be used to separate two acids or two bases from each other. However, the acids and bases must differ greatly in strength, e.g. one strong acid and one very weak acid. Therefore, the two acids must have a pKa (or pKb) difference that is as large as possible. For example, the following can be separated:
Very weak acids like phenols (pKa around 10) from stronger acids like carboxylic acids (pKa around 4–5).
Very weak bases (pKb around 13–14) from stronger bases (pKb around 3–4). This is frequently used in purifying soil to determine trace metal concentration.
When separating two acids or two bases, the pH is usually adjusted to a value roughly between the pKa (or pKb) constants. Separation occurs at this intermediate pH because one component is fully ionized, while the other is fully in its neutral form. Often, the solutions used to extract the acids or bases can also be used to control the pH. When separating two acids, the mixture is first washed with a weak base (e.g. sodium bicarbonate) to extract the strong acid, then washed with a strong base (e.g. sodium hydroxide) to extract the weak acid. For separating basic components, weak acid (e.g. dilute acetic acid) is first used to extract the stronger base, then more concentrated acid (e.g. hydrochloric acid or nitric acid) is used to create strongly acidic pH values and separate the weaker base.
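To put numbers on the intermediate-pH argument above, the fraction of an acid present in its ionized (water-soluble) form at a given pH follows from the Henderson–Hasselbalch relation. The short sketch below uses illustrative pKa values for a typical carboxylic acid and a typical phenol; the values and the chosen pH are examples, not data from the article.

```python
# Sketch: fraction of an acid present as its water-soluble conjugate base (A-)
# at a given pH, from the Henderson-Hasselbalch relation. The pKa values and
# the chosen pH below are illustrative, not measured data.

def ionized_fraction(pH: float, pKa: float) -> float:
    """Fraction of HA present as A-: 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

carboxylic_acid_pKa = 4.8   # typical carboxylic acid
phenol_pKa = 10.0           # typical phenol
pH = 7.0                    # roughly midway between the two pKa values

print(f"carboxylic acid: {ionized_fraction(pH, carboxylic_acid_pKa):.3f} ionized")
print(f"phenol:          {ionized_fraction(pH, phenol_pKa):.3f} ionized")
# At pH 7 the carboxylic acid is >99% ionized (extracted into the aqueous layer
# as its salt) while the phenol stays >99% neutral (remains in the organic layer).
```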
Technique
The following procedure is typically followed when performing an acid-base extraction for a mixture containing an acidic and/or basic compound:
The mixture of compounds is dissolved in a suitable organic solvent, such as dichloromethane or diethyl ether.
The solution is added to a separatory funnel. If the desired compound is basic, the solution will be washed with aqueous acid (e.g. 5% HCl); if it is acidic, the solution is washed with aqueous base (e.g. 5% NaOH).
The fractions are then shaken and the two phases are separated. The separatory funnel must be vented frequently to alleviate pressure build-up, especially when containing aqueous solutions that evolve carbon dioxide gas upon neutralization (such as sodium bicarbonate).
The fraction containing the analyte of interest is then collected. Typically, this is the aqueous layer, as addition of acid or base has caused the analyte to become charged and highly soluble in the aqueous layer. The identity of the aqueous layer depends critically on the organic solvent's density. Organic solvents with a density greater than 1.00 g/mL (e.g. dichloromethane) cause the aqueous layer to float to the top, while solvents with a density lower than 1.00 g/mL (e.g. ether) cause the aqueous layer to sink to the bottom.
The organic fraction is added to the separatory funnel again, and steps 2-4 are repeated twice more to maximize the yield of the extraction. On the final rinse, a brine solution drives any remaining aqueous solution out of the organic layer.
If the remaining organic layer contains no analytes of interest, it is discarded; otherwise, the solvent is dried over a suitable drying agent (such as anhydrous sodium sulfate), filtered, then evaporated under reduced pressure to yield the pure compound. If the aqueous layer contains the analyte of interest, it is adjusted to the opposite pH (e.g. basic to acidic). Steps 1-4 are repeated with this fraction using an aqueous solution of opposite pH (e.g. NaOH to HCl). This circular procedure is performed since it is typically much easier to remove organic solvent via rotary evaporation than aqueous solvent.
Common uses in chemical synthesis
Acid-base extraction is frequently used as the first step in a work-up procedure following a chemical synthesis to remove acidic and basic starting materials or impurities. Acid-base extraction is typically a precursor to more complicated purification techniques, such as recrystallization, if the product synthesized is still not completely pure.
Organic synthesis often uses acid-base extractions during work-up procedures. For example, consider a Fischer esterification –– the condensation of a carboxylic acid with an alcohol to form an ester. The post-reaction mixture often consists of small amounts of leftover acid and alcohol, in addition to the desired ester. Acid-base extraction can be used to easily separate out the acidic starting materials from the ester. By rinsing the crude product mixture with a weak base (e.g. sodium bicarbonate), the carboxylic acid and alcohol will be washed away with the aqueous layer, leaving purified ester in the organic layer. The choice of base used for extraction is critical, as a strong base (e.g. sodium hydroxide) will hydrolyze the ester.
Another common example of acid-base extraction occurs following peptide coupling, where the amide product must be separated from leftover carboxylic acid and amine. The carboxylic acid can be removed by rinsing the organic layer with weak base (sodium bicarbonate), while the amine can be removed by rinsing with a weak acid (10% hydrochloric acid). Following these two extractions, the amide will remain in the organic layer and has been significantly purified.
Troubleshooting
The following issues are commonly observed during acid-base extraction and typically have simple solutions:
Only one layer is observed in the separatory funnel.
This is due to using an organic solvent with significant miscibility with water (e.g. acetonitrile). The organic solvent used must be water-insoluble to observe phase separation and perform an acid-base extraction.
Three layers form in the separatory funnel.
Often this is a result of insufficient mixing, and light stirring will solve the issue.
The boundary between the organic layer and aqueous layer is not observed.
Ice can be used to identify the boundary as it will float between the two layers.
An emulsion forms and one layer is suspended in the other as tiny droplets.
This can be solved by using a glass stirring rod to gently "push" the tiny droplets into each other, eventually resulting in separation and causing the two layers to appear. Adding a small amount of brine solution can also be used to break up the emulsion; this process is termed "salting out". Emulsions can be prevented by mixing the solutions gently rather than vigorously.
The relative positions of the aqueous/organic layers are unknown.
A small amount of water can be added to the separatory funnel. Whichever layer these droplets go into is identified as the aqueous layer.
Limitations
Acid-base extraction is efficient at separating compounds with a large difference in solubility between their charged and their uncharged form. Therefore, this procedure will not work for:
Zwitterions with acidic and basic functional groups in the same molecule.
For instance, glycine is soluble in water at most pH values and is therefore difficult to be extracted into organic media.
Lipophilic compounds.
Compounds such as tetrabutylammonium salts or fatty acids do not easily dissolve in the aqueous phase in their charged form.
Basic amines.
Amines like ammonia, methylamine, or triethanolamine are miscible or significantly soluble in water at most pH and cannot be extracted into organic media.
Hydrophilic acids.
Acids like acetic acid are miscible with water in all proportions and cannot be efficiently extracted into organic solvents.
Alternatives
Alternatives to acid–base extraction include:
Filtering the mixture through a plug of silica gel or alumina — if the product is a charged salt, it will remain strongly adsorbed to the silica gel or alumina.
Ion exchange chromatography can separate acids, bases, or mixtures of strong and weak acids and bases by their varying affinities to the column medium at different pH.
Using column chromatography to separate the neutral compounds according to their ratio-of-fronts values.
Gel electrophoresis, which separates large biomolecules based on their charge and size.
See also
Chromatography
Extraction
Multiphasic liquid
Separating funnel
References
External links
Acid base extraction
Extraction (chemistry)
Laboratory techniques
Equilibrium chemistry | Acid–base extraction | [
"Chemistry"
] | 2,252 | [
"Equilibrium chemistry",
"Extraction (chemistry)",
"nan",
"Separation processes"
] |
10,119,238 | https://en.wikipedia.org/wiki/Essential%20extension | In mathematics, specifically module theory, given a ring R and an R-module M with a submodule N, the module M is said to be an essential extension of N (or N is said to be an essential submodule or large submodule of M) if for every submodule H of M,
H ∩ N = 0 implies that H = 0.
As a special case, an essential left ideal of R is a left ideal that is essential as a submodule of the left module RR (R regarded as a left module over itself); such a left ideal has non-zero intersection with every non-zero left ideal of R. Analogously, an essential right ideal is exactly an essential submodule of the right R-module RR.
The usual notations for essential extensions include the following two expressions:
N ⊆e M, and N ⊴ M.
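As a standard concrete illustration, consider the integers inside the rationals, regarded as Z-modules: any non-zero submodule H of Q contains some non-zero fraction a/b, and multiplying by b gives the non-zero integer a in H ∩ Z, so

\[ \mathbb{Z} \subseteq_e \mathbb{Q} . \]

In fact, Q is the injective hull of Z, in the sense discussed under Properties below.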
The dual notion of an essential submodule is that of superfluous submodule (or small submodule). A submodule N is superfluous if for any other submodule H,
H + N = M implies that H = M.
The usual notations for superfluous submodules include:
N ≪ M, and N ⊆s M.
Properties
Here are some of the elementary properties of essential extensions, given in the notation introduced above. Let M be a module, and let K, N and H be submodules of M with K ⊆ N.
Clearly M is an essential submodule of M, and the zero submodule of a nonzero module is never essential.
K ⊆e M if and only if K ⊆e N and N ⊆e M.
H ∩ K ⊆e M if and only if H ⊆e M and K ⊆e M.
Using Zorn's Lemma it is possible to prove another useful fact:
For any submodule N of M, there exists a submodule C such that
N ⊕ C ⊆e M; that is, N ∩ C = 0 and N + C is an essential submodule of M.
Furthermore, a module with no proper essential extension (that is, if the module is essential in another module, then it is equal to that module) is an injective module. It is then possible to prove that every module M has a maximal essential extension E(M), called the injective hull of M. The injective hull is necessarily an injective module, and is unique up to isomorphism. The injective hull is also minimal in the sense that any other injective module containing M contains a copy of E(M).
Many properties dualize to superfluous submodules, but not everything. Again let M be a module, and let K, N and H be submodules of M with K ⊆ N.
The zero submodule is always superfluous, and a nonzero module M is never superfluous in itself.
N ≪ M if and only if K ≪ M and N/K ≪ M/K.
H + K ≪ M if and only if H ≪ M and K ≪ M.
Since every module can be mapped via a monomorphism whose image is essential in an injective module (its injective hull), one might ask if the dual statement is true, i.e. for every module M, is there a projective module P and an epimorphism from P onto M whose kernel is superfluous? (Such a P is called a projective cover). The answer is "No" in general, and the special class of rings whose right modules all have projective covers is the class of right perfect rings.
One form of Nakayama's lemma is that J(R)M is a superfluous submodule of M when M is a finitely-generated module over R.
Generalization
This definition can be generalized to an arbitrary abelian category C. An essential extension is a monomorphism u : M → E such that for every non-zero subobject s : N → E, the fibre product N ×E M ≠ 0.
In a general category, a morphism f : X → Y is essential if any morphism g : Y → Z is a monomorphism if and only if g ∘ f is a monomorphism. Taking g to be the identity morphism of Y shows that an essential morphism f must be a monomorphism.
If X has an injective hull Y, then Y is the largest essential extension of X. But the largest essential extension may not be an injective hull. Indeed, in the category of T1 spaces and continuous maps, every object has a unique largest essential extension, but no space with more than one element has an injective hull.
See also
Dense submodules are a special type of essential submodule
References
David Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, Section III.2.
Commutative algebra
Module theory | Essential extension | [
"Mathematics"
] | 925 | [
"Fields of abstract algebra",
"Commutative algebra",
"Module theory"
] |
10,120,241 | https://en.wikipedia.org/wiki/5-demicubic%20honeycomb | The 5-demicube honeycomb (or demipenteractic honeycomb) is a uniform space-filling tessellation (or honeycomb) in Euclidean 5-space. It is constructed as an alternation of the regular 5-cube honeycomb.
It is the first tessellation in the demihypercube honeycomb family which, with all the next ones, is not regular, being composed of two different types of uniform facets. The 5-cubes become alternated into 5-demicubes h{4,3,3,3} and the alternated vertices create 5-orthoplex {3,3,3,4} facets.
D5 lattice
The vertex arrangement of the 5-demicubic honeycomb is the D5 lattice which is the densest known sphere packing in 5 dimensions. The 40 vertices of the rectified 5-orthoplex vertex figure of the 5-demicubic honeycomb reflect the kissing number 40 of this lattice.
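As a quick illustrative check of the kissing number 40 quoted above, the small sketch below brute-forces the shortest non-zero vectors of the D5 lattice, taken here as the integer 5-vectors with even coordinate sum (the search box and script are only an illustration, not part of the original article).

```python
# Enumerate the minimal (shortest non-zero) vectors of the D5 lattice,
# defined here as integer 5-vectors whose coordinates sum to an even number.
# Their count is the kissing number of the lattice, expected to be 40.
from itertools import product

DIM = 5
shortest_norm = None
shortest_vectors = []

# Minimal vectors have squared norm 2, so a search over entries in {-1, 0, 1} suffices.
for v in product((-1, 0, 1), repeat=DIM):
    if v == (0,) * DIM or sum(v) % 2 != 0:
        continue  # skip the origin and points outside D5
    norm = sum(x * x for x in v)
    if shortest_norm is None or norm < shortest_norm:
        shortest_norm, shortest_vectors = norm, [v]
    elif norm == shortest_norm:
        shortest_vectors.append(v)

print(shortest_norm, len(shortest_vectors))  # prints: 2 40
```

The 40 vectors found are exactly the permutations of (±1, ±1, 0, 0, 0), matching the 40 vertices of the rectified 5-orthoplex vertex figure mentioned above.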
The D5+ packing can be constructed by the union of two D5 lattices. The analogous packings form lattices only in even dimensions. The kissing number is 2^4 = 16 (2^(n−1) for n < 8, 240 for n = 8, and 2n(n−1) for n > 8).
The D5* lattice can be constructed as the union of all four 5-demicubic lattices; it is also the 5-dimensional body-centered cubic lattice, the union of two 5-cube honeycombs in dual positions.
The kissing number of the D5* lattice is 10 (2n for n ≥ 5), and its Voronoi tessellation is a tritruncated 5-cubic honeycomb, containing all bitruncated 5-orthoplex Voronoi cells.
Symmetry constructions
There are three uniform construction symmetries of this tessellation. Each symmetry can be represented by arrangements of different colors on the 32 5-demicube facets around each vertex.
Related honeycombs
See also
Uniform polytope
Regular and uniform honeycombs in 5-space:
5-cube honeycomb
5-demicube honeycomb
5-simplex honeycomb
Truncated 5-simplex honeycomb
Omnitruncated 5-simplex honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition,
pp. 154–156: Partial truncation or alternation, represented by h prefix: h{4,4} = {4,4}; h{4,3,4} = {3^{1,1},4}, h{4,3,3,4} = {3,3,4,3}, ...
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
External links
Honeycombs (geometry)
6-polytopes | 5-demicubic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 688 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
10,121,045 | https://en.wikipedia.org/wiki/Pore%20space%20in%20soil | The pore space of soil contains the liquid and gas phases of soil, i.e., everything but the solid phase that contains mainly minerals of varying sizes as well as organic compounds.
In order to understand porosity better a series of equations have been used to express the quantitative interactions between the three phases of soil.
Macropores or fractures play a major role in infiltration rates in many soils as well as preferential flow patterns, hydraulic conductivity and evapotranspiration. Cracks are also very influential in gas exchange, influencing respiration within soils. Modeling cracks therefore helps understand how these processes work and what the effects of changes in soil cracking such as compaction, can have on these processes.
The pore space of soil may contain the habitat of plants (rhizosphere) and microorganisms.
Background
Dry bulk density
The dry bulk density of a soil greatly depends on the mineral assemblage making up the soil and on its degree of compaction. The density of quartz is around 2.65 g/cm3 but the dry bulk density of a soil can be less than half that value.
Most soils have a dry bulk density between 1.0 and 1.6 g/cm3 but organic soil and some porous clays may have a dry bulk density well below 1 g/cm3.
Core samples are taken by pushing a metallic cutting edge into the soil at the desired depth or soil horizon.
The soil samples are then oven dried (often at 105 °C) until constant weight.
The dry bulk density of a soil is inversely related to its porosity: the more pore space in a soil, the lower its dry bulk density.
Porosity
The porosity f can be written as the ratio of the pore volume to the total soil volume, f = V_pores / V_total, or, more generally, for an unsaturated soil in which the pores are filled by two fluids, air and water, f = (V_air + V_water) / (V_solid + V_air + V_water).
The porosity is a measure of the total pore space in the soil. This is defined as a fraction of volume often given in percent. The amount of porosity in a soil depends on the minerals that make up the soil and on the amount of sorting occurring within the soil structure. For example, a sandy soil will have a larger porosity than a silty sand, because the silt will fill the gaps in between the sand particles.
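A minimal numerical sketch of the relation between dry bulk density and porosity described above follows; the particle density of 2.65 g/cm3 is the quartz value quoted earlier, and the bulk densities are illustrative rather than measured.

```python
# Sketch: estimate total porosity from dry bulk density and particle density.
# Values below are typical illustrative numbers, not measured data.

def porosity(bulk_density_g_cm3: float, particle_density_g_cm3: float = 2.65) -> float:
    """Total porosity f = 1 - (dry bulk density / particle density)."""
    return 1.0 - bulk_density_g_cm3 / particle_density_g_cm3

for rho_b in (1.0, 1.3, 1.6):
    print(f"bulk density {rho_b:.1f} g/cm3 -> porosity {porosity(rho_b):.2f}")
# e.g. 1.3 g/cm3 with quartz-dominated particles (2.65 g/cm3) gives ~0.51 (51%).
```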
Pore space relations
Hydraulic conductivity
Hydraulic conductivity (K) is a property of soil that describes the ease with which water can move through pore spaces. It depends on the permeability of the material (pore geometry, compaction) and on the degree of saturation. Saturated hydraulic conductivity, Ksat, describes water movement through saturated media. Hydraulic conductivity can be measured at any state of saturation and can be estimated with numerous kinds of equipment. To calculate hydraulic conductivity, Darcy's law is used; the form of the law applied depends on the soil saturation and the instrument used.
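As a sketch of how Darcy's law is applied in practice, the snippet below computes saturated hydraulic conductivity from a constant-head laboratory test; the sample dimensions and discharge are invented for illustration.

```python
# Sketch: saturated hydraulic conductivity from Darcy's law, Q = K * A * (dH / L),
# rearranged to K = Q * L / (A * dH). All numbers are hypothetical test values.

def darcy_Ksat(discharge_cm3_s: float, length_cm: float,
               area_cm2: float, head_difference_cm: float) -> float:
    """Saturated hydraulic conductivity K (cm/s) for a constant-head test."""
    return (discharge_cm3_s * length_cm) / (area_cm2 * head_difference_cm)

# Hypothetical constant-head test on a soil core:
K = darcy_Ksat(discharge_cm3_s=0.5, length_cm=10.0,
               area_cm2=20.0, head_difference_cm=25.0)
print(f"Ksat = {K:.3e} cm/s")  # 1.000e-02 cm/s, in the range of a sandy soil
```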
Infiltration
Infiltration is the process by which water on the ground surface enters the soil. The water enters the soil through the pores by the forces of gravity and capillary action. The largest cracks and pores offer a great reservoir for the initial flush of water. This allows a rapid infiltration. The smaller pores take longer to fill and rely on capillary forces as well as gravity. The smaller pores have a slower infiltration as the soil becomes more saturated.
Pore types
A pore is not simply a void in the solid structure of soil. The various pore size categories have different characteristics and contribute different attributes to soils depending on the number and frequency of each type. A widely used classification of pore size is that of Brewer (1964):
Macropore
The pores that are too large to have any significant capillary force. Unless impeded, water will drain from these pores, and they are generally air-filled at field capacity. Macropores can be caused by cracking, division of peds and aggregates, as well as plant roots, and zoological exploration. Size >75 μm.
Mesopore
The largest pores filled with water at field capacity. Also known as storage pores because of the ability to store water useful to plants. They do not have capillary forces too great so that the water does not become limiting to the plants. The properties of mesopores are highly studied by soil scientists because of their impact on agriculture and irrigation. Size 30–75 μm.
Micropore
These are "pores that are sufficiently small that water within these pores is considered immobile, but available for plant extraction." Because there is little movement of water in these pores, solute movement is mainly by the process of diffusion. Size 5–30 μm.
Ultramicropore
These pores are suitable for habitation by microorganisms. Their distribution is determined by soil texture and soil organic matter, and they are not greatly affected by compaction. Size 0.1–5 μm.
Cryptopore
Pores that are too small to be penetrated by most microorganisms. Organic matter in these pores is therefore protected from microbial decomposition. They are filled with water unless the soil is very dry, but little of this water is available to plants, and water movement is very slow. Size <0.1 μm.
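The size classes above translate directly into a simple lookup. The sketch below encodes the boundaries quoted in the list; how ties at exactly 75, 30, 5 and 0.1 μm are assigned is an arbitrary choice here.

```python
# Sketch: classify a pore by equivalent diameter using the Brewer-style size
# classes listed above (sizes in micrometres; boundaries follow the text).
def pore_class(diameter_um: float) -> str:
    if diameter_um > 75:
        return "macropore"
    if diameter_um >= 30:
        return "mesopore"
    if diameter_um >= 5:
        return "micropore"
    if diameter_um >= 0.1:
        return "ultramicropore"
    return "cryptopore"

for d in (100.0, 50.0, 10.0, 1.0, 0.05):   # example diameters
    print(f"{d:6.2f} um -> {pore_class(d)}")
```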
Modeling methods
Basic crack modeling has been undertaken for many years by simple observations and measurements of crack size, distribution, continuity and depth. These observations have either been surface observation or done on profiles in pits. Hand tracing and measurement of crack patterns on paper was one method used prior to advances in modern technology. Another field method was with the use of string and a semicircle of wire. The semi circle was moved along alternating sides of a string line. The cracks within the semicircle were measured for width, length and depth using a ruler. The crack distribution was calculated using the principle of Buffon's needle.
Disc permeameter
This method relies on the fact that pores and cracks of different sizes drain at different water potentials. At zero water potential at the soil surface, an estimate of saturated hydraulic conductivity is produced, with all pores filled with water. As the potential is decreased (made more negative), progressively smaller pores drain, the largest cracks and pores emptying first. By measuring the hydraulic conductivity at a range of negative potentials, the pore size distribution can be determined. While this is not a physical model of the cracks, it does give an indication of the sizes of pores within the soil.
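The link between supply potential and pore size rests on the capillary rise equation. The sketch below converts a few illustrative supply tensions to the radius of the largest pores that remain water-filled, assuming clean water near 20 °C and a zero contact angle; the tension values are examples only.

```python
# Sketch: equivalent pore radius that just remains water-filled at a given
# tension, from the capillary rise equation r = 2*gamma*cos(theta)/(rho*g*h).
# Assumes clean water at ~20 C and a zero contact angle; tensions are examples.
import math

GAMMA = 0.0728    # surface tension of water, N/m
RHO = 998.0       # density of water, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def pore_radius_um(tension_mm: float, contact_angle_deg: float = 0.0) -> float:
    """Largest water-filled pore radius (micrometres) at a tension in mm of water."""
    h = tension_mm / 1000.0  # mm of water -> metres
    r = 2.0 * GAMMA * math.cos(math.radians(contact_angle_deg)) / (RHO * G * h)
    return r * 1e6

for tension in (20.0, 40.0, 100.0):  # illustrative supply tensions, mm of water
    print(f"{tension:5.0f} mm tension -> pores with radius above "
          f"~{pore_radius_um(tension):.0f} um are drained")
```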
Horgan and Young model
Horgan and Young (2000) produced a computer model to create a two-dimensional prediction of surface crack formation. It used the fact that once cracks come within a certain distance of one another they tend to be attracted to each other. Cracks also tend to turn within a particular range of angles and at some stage a surface aggregate gets to a size that no more cracking will occur. These are often characteristic of a soil and can therefore be measured in the field and used in the model. However it was not able to predict the points at which cracking starts and although random in the formation of crack pattern, in many ways, cracking of soil is often not random, but follows lines of weaknesses.
Araldite-impregnation imaging
A large core sample is collected. This is then impregnated with araldite and a fluorescent resin. The core is then cut back using a grinding implement, very gradually (~1 mm per time), and at every interval the surface of the core sample is digitally imaged. The images are then loaded into a computer where they can be analysed. Depth, continuity, surface area and a number of other measurements can then be made on the cracks within the soil.
Electrical resistivity imaging
Using the infinite resistivity of air, the air spaces within a soil can be mapped. A specially designed resistivity meter had improved the meter-soil contact and therefore the area of the reading.
This technology can be used to produce images that can be analysed for a range of cracking properties.
See also
Aeration of soil
Particle density
Pore water pressure
Soil respiration
References
Further reading
Foth, H.D. (1990). Fundamentals of soil science. (Wiley, New York)
Harpstead, M.I. (2001). Soil science simplified. (Iowa State University Press, Ames)
Hillel, D. (2004). Introduction to environmental soil physics. (Elsevier/Academic Press, Amsterdam, Sydney)
Kohnke, H. (1995). Soil science simplified. (Waveland Press: Prospect Heights, Illinois)
Leeper, G.W. (1993). Soil science : an introduction. (Melbourne University Press, Carlton, Victoria)
Soil science
Soil mechanics
Porous media | Pore space in soil | [
"Physics",
"Materials_science",
"Engineering"
] | 1,770 | [
"Soil mechanics",
"Porous media",
"Applied and interdisciplinary physics",
"Materials science"
] |
19,176,602 | https://en.wikipedia.org/wiki/Double%20mass%20analysis | Double mass analysis is a simple graphical method to evaluate the consistency of hydrological data. The DM approach plots the cumulative data of one variable against the cumulative data of a second variable. A break in the slope of a linear function fit to the data is thought to represent a change in the relation between the variables. This approach provides a robust method to determine a change in the behavior of precipitation and recharge in a simple graphical method. It is a commonly used data analysis approach for investigating the behaviour of records made of hydrological or meteorological data at a number of locations. It is used to determine whether there is a need for corrections to the data - to account for changes in data collection procedures or other local conditions. Such changes may result from a variety of things including changes in instrumentation, changes in observation procedures, or changes in gauge location or surrounding conditions. Double mass analysis for checking consistency of a hydrological or meteorological record is considered to be an essential tool before taking it for analysis purpose. This method is based on the hypothesis that each item of the recorded data of a population is consistent.
An example of a double mass analysis is a "double mass plot", or "double mass curve". For this, points and/or a joining line are plotted where the x- and y- coordinates are determined by the running totals of the values observed at two stations. If both stations are affected to the same extent by the same trends then a double mass curve should follow a straight line. A break in the slope of the curve would indicate that conditions have changed at one location but not at another. Breaks in the double-mass curve of such variables are caused by changes in the relation between the variables. These changes may be due to changes in the method of data collection or to physical changes that affect the relation. This technique is based on the principle that when each recorded data comes from the same parent population, they are consistent.
Procedure
Let (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) be the cumulative data points; then the procedure for double mass analysis is as follows:
Divide the data into distinct categories (segments) of equal slope (s_1, s_2, ..., s_k).
Obtain the correction factor k_i for category i as the ratio of the reference slope to the slope of that category, k_i = s_ref / s_i.
Multiply the data in category i by k_i to get corrected data.
After correction, repeat this process until all data points have the same slope.
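A minimal sketch of the procedure in code follows; the two records and the break position are hypothetical, and in practice the segments and the break point would be identified from the plotted double mass curve or by a statistical changepoint test.

```python
# Sketch of a double mass analysis: accumulate two records, estimate the slope
# before and after a suspected break, and rescale the later segment so that it
# matches the earlier (reference) slope. Data and break index are hypothetical.
import numpy as np

station = np.array([10., 12., 11., 13., 12., 18., 19., 17., 20., 18.])   # suspect gauge
reference = np.array([11., 12., 12., 13., 12., 12., 13., 11., 13., 12.]) # base station(s)

cum_station = np.cumsum(station)
cum_reference = np.cumsum(reference)

break_idx = 5  # suspected change point (e.g. gauge relocated), assumed known here

# Slope of each segment of the double mass curve (d cum_station / d cum_reference).
slope_before = np.polyfit(cum_reference[:break_idx], cum_station[:break_idx], 1)[0]
slope_after = np.polyfit(cum_reference[break_idx:], cum_station[break_idx:], 1)[0]

correction = slope_before / slope_after   # bring the later period onto the old slope
corrected = station.copy()
corrected[break_idx:] *= correction

print(f"slope before break: {slope_before:.2f}, after: {slope_after:.2f}, "
      f"correction factor: {correction:.2f}")
print("corrected record:", np.round(corrected, 1))
```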
See also
Statistics
Notes
Further reading
Dubreuil P. (1974) Initiation à l'analyse hydrologique Masson& Cie et ORSTOM, Paris.
Data analysis
Hydrology
Meteorological data and networks
Meteorological concepts
Statistical charts and diagrams | Double mass analysis | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 498 | [
"Hydrology",
"Hydrology stubs",
"Environmental engineering"
] |
19,179,678 | https://en.wikipedia.org/wiki/Protocell | A protocell (or protobiont) is a self-organized, endogenously ordered, spherical collection of lipids proposed as a rudimentary precursor to cells during the origin of life. A central question in evolution is how simple protocells first arose and how their progeny could diversify, thus enabling the accumulation of novel biological emergences over time (i.e. biological evolution). Although a functional protocell has not yet been achieved in a laboratory setting, the goal to understand the process appears well within reach.
A protocell is a pre-cell in abiogenesis: a contained system consisting of simple biologically relevant molecules, such as ribozymes, encapsulated in a simple membrane structure that isolated the entity from the environment and from other individuals; this boundary is thought to have consisted of simple fatty acids, mineral structures, or rock-pore structures.
Overview
Compartmentalization was important in the origin of life. Membranes form enclosed compartments that are separate from the external environment, thus providing the cell with functionally specialized aqueous spaces. As the lipid bilayer of membranes is impermeable to most hydrophilic molecules (dissolved by water), modern cells have membrane transport-systems that achieve nutrient uptake as well as the export of waste. Prior to the development of these molecular assemblies, protocells likely employed vesicle dynamics that are relevant to cellular functions, such as membrane trafficking and self-reproduction, using amphiphilic molecules. On the primitive Earth, numerous chemical reactions of organic compounds produced the ingredients of life. Of these substances, amphiphilic molecules might be the first player in the evolution from molecular assembly to cellular life. Vesicle dynamics could progress towards protocells with the development of self-replication coupled with early metabolism. It is possible that protocells might have had a primitive metabolic system (Wood-Ljungdahl pathway) at alkaline hydrothermal vents or other geological environments like impact crater lakes from meteorites, which are known to be composed of elements found in the Wood-Ljungdahl pathway.
Another conceptual model of a protocell relates to the term "chemoton" (short for 'chemical automaton') which refers to the fundamental unit of life introduced by Hungarian theoretical biologist Tibor Gánti. It is the oldest known computational abstract of a protocell. Gánti conceived the basic idea in 1952 and formulated the concept in 1971 in his book The Principles of Life (originally written in Hungarian, and translated to English only in 2003). He surmised the chemoton as the original ancestor of all organisms, or the last universal common ancestor.
The basic assumption of the chemoton model is that life should fundamentally and essentially have three properties: metabolism, self-replication, and a bilipid membrane. The metabolic and replication functions together form an autocatalytic subsystem necessary for the basic functions of life, and a membrane encloses this subsystem to separate it from the surrounding environment. Therefore, any system having such properties may be regarded as alive, and will contain self-sustaining cellular information that is subject to natural selection. Some consider this model a significant contribution to origin-of-life research, as it provides a philosophy of evolutionary units.
Selectivity for compartmentalization
Self-assembled vesicles are essential components of primitive cells. The second law of thermodynamics requires that the universe becomes increasingly disordered (entropy), yet life is distinguished by its great degree of organization. Therefore, a boundary is needed to separate life processes from non-living matter. This fundamental necessity is underpinned by the universality of the cell membrane which is the only cellular structure found in all organisms on Earth.
In the aqueous environment in which all known cells function, a non-aqueous barrier is required to surround a cell and separate it from its surroundings. This non-aqueous membrane establishes a barrier to free diffusion, allowing for regulation of the internal environment within the barrier. The necessity of thermodynamically isolating a subsystem is an irreducible condition of life. In modern biology, such isolation is ordinarily accomplished by amphiphilic bilayers of a thickness of around 10^−8 meters.
Researchers including Irene A. Chen and Jack W. Szostak have demonstrated that simple physicochemical properties of elementary protocells can give rise to simpler conceptual analogues of essential cellular behaviors, including primitive forms of Darwinian competition and energy storage. Such cooperative interactions between the membrane and encapsulated contents could greatly simplify the transition from replicating molecules to true cells. Competition for membrane molecules would favor stabilized membranes, suggesting a selective advantage for the evolution of cross-linked fatty acids and even the phospholipids of today. This micro-encapsulation allowed for metabolism within the membrane, exchange of small molecules and prevention of passage of large substances across it. The main advantages of encapsulation include increased solubility of the cargo and creating energy in the form of chemical gradients. Energy is thus often said to be stored by cells in molecular structures such as carbohydrates (including sugars), lipids, and proteins, which release energy when chemically combined with oxygen during cellular respiration.
Vesicles, micelles and membranes
When phospholipids or simple lipids like fatty acids are placed in water, the molecules spontaneously arrange such that the hydrophobic tails are shielded from the water, resulting in the formation of membrane structures such as bilayers, vesicles, and micelles. In modern cells, vesicles are involved in metabolism, transport, buoyancy control, and enzyme storage. They can also act as natural chemical reaction chambers. A typical vesicle or micelle in aqueous solution forms an aggregate with the hydrophilic "head" regions in contact with surrounding solvent, sequestering the hydrophobic single-tail regions in the micelle center. This phase is caused by the packing behavior of single-tail lipids in a bilayer. Although the spontaneous self-assembly process that form lipid monolayer vesicles and micelles in nature resemble the kinds of primordial vesicles or protocells that might have existed at the beginning of evolution, they are not as sophisticated as the bilayer membranes of today's living organisms. However, in a prebiotic context, electrostatic interactions induced by short, positively charged, hydrophobic peptides containing seven amino acids in length or fewer, can attach RNA to a vesicle membrane, the basic cell membrane.
Rather than being made up of phospholipids, early membranes may have formed from monolayers or bilayers of simple fatty acids, which may have formed more readily in a prebiotic environment. Fatty acids have been synthesized in laboratories under a variety of prebiotic conditions and have been found on meteorites, suggesting their natural synthesis in nature. Oleic acid vesicles represent good models of membrane protocells.
Cohen et al. (2022) suggest that plausible prebiotic production of fatty acids, leading to the development of early protocell membranes, is enriched on metal-rich mineral surfaces, possibly from impact craters, increasing the prebiotic environmental mass of lipids by 10^2 times. They evaluate three different possible synthesis pathways of fatty acids in the Hadean, and found that these metal surfaces could produce 10^11 to 10^15 kg of 6-18 carbon fatty acids. Of these products, the 8-18C fatty acids are compatible with membrane formation. They also propose that alternative amphiphiles like alcohols are co-synthesized with fatty acids, and can help improve membrane stability. However, despite this production, the authors state that net fatty acid synthesis would not yield sufficient concentrations for spontaneous membrane formation without significant evaporation of Earth's aqueous environments.
Membrane transport
For cellular organisms, the transport of specific molecules across compartmentalizing membrane barriers is essential in order to exchange content with their environment and with other individuals. For example, content exchange between individuals enables the exchange of genes between individuals (horizontal gene transfer), an important factor in the evolution of cellular life. While modern cells can rely on complicated protein machineries to catalyze these crucial processes, protocells must have accomplished this using more simple mechanisms.
Protocells composed of fatty acids would have been able to easily exchange small molecules and ions with their environment. Modern phospholipid bilayer cell membranes exhibit low permeability, but contain complex molecular assemblies which both actively and passively transport relevant molecules across the membrane in a highly specific manner. In the absence of these complex assemblies, simple fatty acid based protocell membranes would be more permeable and allow for greater non-specific transport across membranes. Molecules that would be highly permeable across protocell membranes include nucleoside monophosphate (NMP), nucleoside diphosphate (NDP), and nucleoside triphosphate (NTP), and may withstand millimolar concentrations of Mg2+. Osmotic pressure can also play a significant role regarding this passive membrane transport.
Environmental effects have been suggested to trigger conditions under which a transport of larger molecules, such as DNA and RNA, across the membranes of protocells is possible. For example, it has been proposed that electroporation resulting from lightning strikes could enable such transport. Electroporation is the rapid increase in bilayer permeability induced by the application of a large artificial electric field across the membrane. During electroporation, the lipid molecules in the membrane shift position, opening up a pore (hole) that acts as a conductive pathway through which hydrophobic molecules like nucleic acids can pass the lipid bilayer. A similar transfer of content across protocells and with the surrounding solution can be caused by freezing and subsequent thawing. This could, for instance, occur in an environment in which day and night cycles cause recurrent freezing. Laboratory experiments have shown that such conditions allow an exchange of genetic information between populations of protocells. This can be explained by the fact that membranes are highly permeable at temperatures slightly below their phase transition temperature. If this point is reached during the freeze-thaw cycle, even large and highly charged molecules can temporarily pass the protocell membrane.
Some molecules or particles are too large or too hydrophilic to pass through a lipid bilayer even under these conditions, but can be moved across the membrane through fusion or budding of vesicles, events which have also been observed for freeze-thaw cycles. This may eventually have led to mechanisms that facilitate movement of molecules to the inside of the protocell (endocytosis) or to release its contents into the extracellular space (exocytosis).
Suitable prebiotic environments
See also: Abiogenesis: Suitable Geologic Environment, RNA World: Prebiotic RNA Synthesis
Hydrothermal systems
It has been proposed that life began in hydrothermal vents in the deep sea, but a 2012 study suggests that hot springs have the ideal characteristics for the origin of life. The conclusion is based mainly on the chemistry of modern cells, where the cytoplasm is rich in potassium, zinc, manganese, and phosphate ions, not widespread in marine environments. Such conditions, the researchers argue, are found only where hot hydrothermal fluid brings the ions to the surface—places such as geysers, mud pots, fumaroles and other geothermal features. Within these fuming and bubbling basins, water laden with zinc and manganese ions could have collected, cooled and condensed in shallow pools. However, a recent discovery of alkaline hydrothermal vents with an ionic concentration of sodium lower than in seawater suggests that high concentrations of potassium can be found at marine environments.
A study in the 1990s showed that montmorillonite clay can help create RNA chains of as many as 50 nucleotides joined together spontaneously into a single RNA molecule. Later, in 2002, it was discovered that by adding montmorillonite to a solution of fatty acid micelles (lipid spheres), the clay sped up the rate of vesicle formation 100-fold.
Some minerals can catalyze the stepwise formation of hydrocarbon tails of fatty acids from hydrogen and carbon monoxide gases—gases that may have been released from hydrothermal vents or geysers. Fatty acids of various lengths are eventually released into the surrounding water, but vesicle formation requires a higher concentration of fatty acids, so it is suggested that protocell formation started at land-bound hydrothermal freshwater environments such as geysers, mud pots, fumaroles and other geothermal features where water evaporates and concentrates the solute.
In 2019, Nick Lane and colleagues show that vesicles form readily in seawater conditions at pH between 6.5 and >12 and temperatures 70 °C, meant to mimic the conditions of alkaline hydrothermal vents, with the presence of lipid mixtures, however a prebiotic source to such mixtures is unclear in those environments. Simple amphiphilic compounds in seawater do not assemble into vesicles because of the high concentration of ionic solutes. Research has shown that vesicles can be bound and stabilized by prebiotic amino acids even while in the presence of salt ions and magnesium ions.
In hot spring conditions, which have a lower concentration of ionic solutes, self-assembly of vesicles occurs. Scientists have oligomerized RNA under alkaline hydrothermal vent conditions in the laboratory; although the products were estimated to be only about 4 units in length, this implies that RNA polymers could possibly have been synthesized in such environments. Experimental research at hot springs gave higher yields of RNA-like polymers than in the laboratory. The polymers were encapsulated in fatty acid vesicles when rehydrated, further supporting the hot spring hypothesis of abiogenesis. These wet-dry cycles also improved vesicle stability and binding. UV exposure has also been shown to promote the synthesis of stable biomolecules like nucleotides.
In the origin of chemiosmosis, if early cells originated at alkaline hydrothermal vents, proton gradients can be maintained by the acidic ocean and alkaline water from white smokers while an inorganic membranous structure is in a rock cavity. If early cells originated in terrestrial pools such as hot springs, quinones present in meteorites like the Murchison meteorite would promote the development of proton gradients by coupled redox reactions if the ferricyanide, the electron acceptor, was within the vesicle and an electron donor like a sulfur compound was outside of the lipid membrane. Because of the "water problem", a primitive ATP synthase and other biomolecules would go through hydrolysis due to the absence of wet-dry cycles at hydrothermal vents, unlike at terrestrial pools. Other researchers propose hydrothermal pore systems coated in mineral gels at deep sea hydrothermal vents to an alternative compartment of membranous structures, promote biochemical reactions of biopolymers, and could solve the "water problem". David Deamer and Bruce Damer argue that biomolecules would become trapped within these pore systems upon polymerization and would not undergo combinatorial selection. Catalytic FeS and NiS walls at alkaline hydrothermal vents has also been suggested to have promoted polymerization.
However, Jackson (2016) evaluates how the pH gradient between alkaline hydrothermal vents and acidic Hadean seawater might influence prebiotic synthesis. Three main criticisms emerge from this evaluation. Firstly, the maintenance and stability of membranes positioned suitably between turbulent pH gradients seem implausible. They claim that the proposition of CaCO3 and Mg(OH)2 precipitates interacting with fluid mixing in subsurface pores does not produce satisfactory environments. Secondly, they suggest that the molecular assemblies required to utilize key energetic gradients available at hydrothermal systems were too complex to have been relevant at the origin of life. Lastly, they argue that even if a molecular assembly could have harvested available hydrothermal energy, those assemblies would have been too large to operate within the membrane thicknesses accepted by proponents of the hydrothermal vent hypothesis. In 2017, Jackson went further, suggesting that even if an organism successfully originated in alkaline hydrothermal pores, exploiting natural pH gradients for energy, it would not be able to withstand the drastic change of environment after emergence from the vent environment in which it had solely evolved. This emergence, however, is essential to the niche differentiation of life, allowing for the diversification of habitats and energetic strategies. Counters to these arguments suggest that the close resemblance between biochemical pathways and geochemical systems at alkaline hydrothermal vents gives merit to the hypothesis, and that selection on these protocells would improve resilience to environmental change, allowing for emergence and distribution.
Other researchers have argued that life originating in hydrothermal volcanic ponds, exposed to UV radiation and zinc sulfide photocatalysis and subjected to continuous wet-dry cycling, would not resemble modern biochemistry. Maximal ATP synthesis has been shown to occur at high water activity and low ion concentrations. Despite this, hydrothermal vents are still considered to be a feasible environment, as some shallow hydrothermal vents emit freshwater and the concentration of divalent cations in Hadean oceans was likely lower than in modern oceans. Nick Lane and coauthors state that "alkaline hydrothermal systems tend to precipitate Ca2+ and Mg2+ ions as aragonite and brucite, so their concentrations are typically much lower than mean ocean values. Modelling work in relation to Hadean systems indicates that hydrothermal concentrations of Ca2+ and Mg2+ would likely have been <1 mM, which is in the range that enhanced phosphorylation here. Other conditions considered here, including salinity and high pressure, would have only limited effects on ATP synthesis in submarine hydrothermal systems (which typically have pressures in the range of 100 to 300 Bars). Alkaline hydrothermal systems might also have generated Fe3+ in situ for ADP phosphorylation. Thermodynamic modelling shows that the mixing of alkaline hydrothermal fluids with seawater in submarine systems can promote continuous cycling between ferrous and ferric iron, potentially forming soluble hydrous ferric chlorides, which our experiments show have the same effect as ferric sulphate".
Montmorillonite bubbles
Another group suggests that primitive cells might have formed inside inorganic clay microcompartments, which can provide an ideal container for the synthesis and compartmentalization of complex organic molecules. Clay-armored bubbles form naturally when particles of montmorillonite clay collect on the outer surface of air bubbles under water. This creates a semipermeable vesicle from materials that are readily available in the environment. The authors remark that montmorillonite is known to serve as a chemical catalyst, encouraging lipids to form membranes and single nucleotides to join into strands of RNA. Primitive reproduction can be envisioned when the clay bubbles burst, releasing the lipid membrane-bound product into the surrounding medium.
Membraneless droplets
Another route to primitive compartments that may have led to protocells is through membraneless polyester droplets, which can host biochemicals (proteins and RNA) and/or scaffold the assembly of lipids around them. While these droplets are leaky towards genetic materials, this leakiness is consistent with the progenote hypothesis.
Coacervates
Researchers have also proposed early encapsulation in aqueous phase-separated droplets called coacervates. These droplets form through the accumulation of macromolecules, producing a distinct dense liquid phase within a more dilute liquid medium. The droplets can propagate through shear forces and turbulence in the medium while retaining their internal composition, and could have acted as a means of replicating encapsulation for an early protocell. However, such replication is highly disordered and droplet fusion is common, calling into question coacervates' potential for the distinct compartmentalization needed for competition and early Darwinian selection.
Sexual reproduction
Eigen et al. and Woese proposed that the genomes of early protocells were composed of single-stranded RNA, and that individual genes corresponded to separate RNA segments, rather than being linked end-to-end as in present-day DNA genomes. A protocell that was haploid (one copy of each RNA gene) would be vulnerable to damage, since a single lesion in any RNA segment would be potentially lethal to the protocell (e.g. by blocking replication or inhibiting the function of an essential gene).
Vulnerability to damage could be reduced by maintaining two or more copies of each RNA segment in each protocell, i.e. by maintaining diploidy or polyploidy. Genome redundancy would allow a damaged RNA segment to be replaced by an additional replication of its homolog. For such a simple organism, the proportion of available resources tied up in the genetic material would be a large fraction of the total resource budget. Under limited resource conditions, the protocell reproductive rate would likely be inversely related to ploidy number, and the protocell's fitness would be reduced by the costs of redundancy. Consequently, coping with damaged RNA genes while minimizing the costs of redundancy would likely have been a fundamental problem for early protocells.
A cost-benefit analysis was carried out in which the costs of maintaining redundancy were balanced against the costs of genome damage. This analysis led to the conclusion that, under a wide range of circumstances, the selected strategy would be for each protocell to be haploid, but to periodically fuse with another haploid protocell to form a transient diploid. The retention of the haploid state maximizes the growth rate. The periodic fusions permit mutual reactivation of otherwise lethally damaged protocells. If at least one damage-free copy of each RNA gene is present in the transient diploid, viable progeny can be formed. For two, rather than one, viable daughter cells to be produced would require an extra replication of the intact RNA gene homologous to any RNA gene that had been damaged prior to the division of the fused protocell. The cycle of haploid reproduction, with occasional fusion to a transient diploid state, followed by splitting to the haploid state, can be considered to be the sexual cycle in its most primitive form. In the absence of this sexual cycle, haploid protocells with damage in an essential RNA gene would simply die.
This model for the early sexual cycle is hypothetical, but it is very similar to the known sexual behavior of the segmented RNA viruses, which are among the simplest organisms known. Influenza virus, whose genome consists of 8 physically separated single-stranded RNA segments, is an example of this type of virus. In segmented RNA viruses, "mating" can occur when a host cell is infected by at least two virus particles. If these viruses each contain an RNA segment with lethal damage, multiple infection can lead to reactivation, provided that at least one undamaged copy of each virus gene is present in the infected cell. This phenomenon is known as "multiplicity reactivation". Multiplicity reactivation has been reported to occur in influenza virus infections after induction of RNA damage by UV irradiation and ionizing radiation.
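The advantage of pooling two genomes can be made concrete with a simple illustrative calculation (not taken from the cited studies): if a genome consists of n segments and each segment in each copy is independently damaged with probability p, a lone haploid genome is fully intact with probability

$$P_{\text{haploid}} = (1-p)^{n},$$

whereas two pooled genomes supply at least one undamaged copy of every segment with probability

$$P_{\text{pooled}} = \left(1-p^{2}\right)^{n}.$$

For n = 8 segments, as in influenza virus, and a per-segment damage probability of p = 0.3, these expressions give roughly 0.06 and 0.47 respectively, illustrating why transient diploidy in protocells, or multiplicity reactivation in viruses, can rescue genomes that would individually be lethally damaged.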
Artificial models
Langmuir–Blodgett deposition
Starting with a technique commonly used to deposit molecules on a solid surface, Langmuir–Blodgett deposition, scientists are able to assemble phospholipid membranes of arbitrary complexity layer by layer. These artificial phospholipid membranes support functional insertion both of purified and of in situ expressed membrane proteins. The technique could help astrobiologists understand how the first living cells originated.
Jeewanu protocells
Jeewanu protocells are synthetic chemical particles that possess cell-like structure and seem to have some functional living properties. First synthesized in 1963 from simple minerals and basic organics exposed to sunlight, they are reported to have some metabolic capabilities, a semipermeable membrane, amino acids, phospholipids, carbohydrates, and RNA-like molecules. The nature and properties of the Jeewanu remain to be clarified.
In a similar synthesis experiment, a frozen mixture of water, methanol, ammonia and carbon monoxide was exposed to ultraviolet (UV) radiation. This combination yielded large amounts of organic material that self-organised to form globules or vesicles when immersed in water. The investigating scientist considered these globules to resemble cell membranes that enclose and concentrate the chemistry of life, separating their interior from the outside world. The globules were roughly the size of red blood cells. Remarkably, the globules fluoresced, or glowed, when exposed to UV light. Absorbing UV and converting it into visible light in this way was considered one possible way of providing energy to a primitive cell. If such globules played a role in the origin of life, the fluorescence could have been a precursor to primitive photosynthesis. Such fluorescence also provides the benefit of acting as a sunscreen, diffusing any damage that otherwise would be inflicted by UV radiation. Such a protective function would have been vital for life on the early Earth, since the ozone layer, which blocks out the sun's most destructive UV rays, did not form until after photosynthetic life began to produce oxygen.
Bio-like structures
The synthesis of three kinds of "jeewanu" has been reported; two of them were organic, and the other was inorganic. Other similar inorganic structures have also been produced. The investigating scientist (V. O. Kalinenko) referred to them as "bio-like structures" and "artificial cells". Formed in distilled water (as well as on agar gel) under the influence of an electric field, they lack protein, amino acids, purine or pyrimidine bases, and certain enzyme activities. According to NASA researchers, "presently known scientific principles of biology and biochemistry cannot account for living inorganic units" and "the postulated existence of these living units has not been proved".
Analogous Research: Fuel Cells
In March 2014, NASA's Jet Propulsion Laboratory demonstrated a unique way to study the origins of life: fuel cells. Fuel cells are similar to biological cells in that electrons are also transferred to and from molecules. In both cases, this results in electricity and power. The study of fuel cells suggests that an important factor in protocell development was that the Earth provides electrical energy at the seafloor. "This energy could have kick-started life and could have sustained life after it arose. Now, we have a way of testing different materials and environments that could have helped life arise not just on Earth, but possibly on Mars, Europa and other places in the Solar System."
Ethics, controversy, and research considerations
Protocell research has created controversy and opposing opinions, including criticism of vague definitions of "artificial life". The creation of a basic unit of life is the most pressing ethical concern, although the most widespread worry about protocells is their potential threat to human health and the environment through uncontrolled replication.
Additionally, the postulated conditions for protocellular origins of life on Earth remain debated. Scientists in the field emphasize the importance of further hypothesis-based experimentation over theoretical conjecture to more concretely constrain the prebiotic plausibility of different protocell morphologies, geologic conditions, and synthetic schemes.
See also
Protocell Circus, a film
Pseudo-panspermia
References
Evolutionarily significant biological phenomena
Evolutionary biology
Membrane biology
Origin of life
Synthetic biology
Prebiotic chemistry | Protocell | [
"Chemistry",
"Engineering",
"Biology"
] | 5,756 | [
"Synthetic biology",
"Evolutionary biology",
"Biological engineering",
"Origin of life",
"Membrane biology",
"Prebiotic chemistry",
"Bioinformatics",
"Molecular genetics",
"Molecular biology",
"Biological hypotheses"
] |
19,179,706 | https://en.wikipedia.org/wiki/Abiogenesis | Abiogenesis is the natural process by which life arises from non-living matter, such as simple organic compounds. The prevailing scientific hypothesis is that the transition from non-living to living entities on Earth was not a single event, but a process of increasing complexity involving the formation of a habitable planet, the prebiotic synthesis of organic molecules, molecular self-replication, self-assembly, autocatalysis, and the emergence of cell membranes. The transition from non-life to life has never been observed experimentally, but many proposals have been made for different stages of the process.
The study of abiogenesis aims to determine how pre-life chemical reactions gave rise to life under conditions strikingly different from those on Earth today. It primarily uses tools from biology and chemistry, with more recent approaches attempting a synthesis of many sciences. Life functions through the specialized chemistry of carbon and water, and builds largely upon four key families of chemicals: lipids for cell membranes, carbohydrates such as sugars, amino acids for protein metabolism, and nucleic acid DNA and RNA for the mechanisms of heredity. Any successful theory of abiogenesis must explain the origins and interactions of these classes of molecules.
Many approaches to abiogenesis investigate how self-replicating molecules, or their components, came into existence. Researchers generally think that current life descends from an RNA world, although other self-replicating and self-catalyzing molecules may have preceded RNA. Other approaches ("metabolism-first" hypotheses) focus on understanding how catalysis in chemical systems on the early Earth might have provided the precursor molecules necessary for self-replication. The classic 1952 Miller–Urey experiment demonstrated that most amino acids, the chemical constituents of proteins, can be synthesized from inorganic compounds under conditions intended to replicate those of the early Earth. External sources of energy may have triggered these reactions, including lightning, radiation, atmospheric entries of micro-meteorites, and implosion of bubbles in sea and ocean waves. More recent research has found amino acids in meteorites, comets, asteroids, and star-forming regions of space.
While the last universal common ancestor of all modern organisms (LUCA) is thought to have been quite different from the origin of life, investigations into LUCA can guide research into early universal characteristics. A genomics approach has sought to characterize LUCA by identifying the genes shared by Archaea and Bacteria, members of the two major branches of life (with Eukaryotes included in the archaean branch in the two-domain system). It appears there are 60 proteins common to all life and 355 prokaryotic genes that trace to LUCA; their functions imply that the LUCA was anaerobic with the Wood–Ljungdahl pathway, deriving energy by chemiosmosis, and maintaining its hereditary material with DNA, the genetic code, and ribosomes. Although the LUCA lived over 4 billion years ago (4 Gya), researchers believe it was far from the first form of life. Earlier cells might have had a leaky membrane and been powered by a naturally occurring proton gradient near a deep-sea white smoker hydrothermal vent.
Earth remains the only place in the universe known to harbor life. Geochemical and fossil evidence from the Earth informs most studies of abiogenesis. The Earth formed 4.54 Gya, and the earliest evidence of life on Earth dates to at least 3.8 Gya, from Western Australia. Some studies have suggested that fossil micro-organisms may have lived within hydrothermal vent precipitates dated 3.77 to 4.28 Gya from Quebec, soon after ocean formation 4.4 Gya during the Hadean.
Overview
Life consists of reproduction with (heritable) variations. NASA defines life as "a self-sustaining chemical system capable of Darwinian [i.e., biological] evolution." Such a system is complex; the last universal common ancestor (LUCA), presumably a single-celled organism which lived some 4 billion years ago, already had hundreds of genes encoded in the DNA genetic code that is universal today. That in turn implies a suite of cellular machinery including messenger RNA, transfer RNA, and ribosomes to translate the code into proteins. Those proteins included enzymes to operate its anaerobic respiration via the Wood–Ljungdahl metabolic pathway, and a DNA polymerase to replicate its genetic material.
The challenge for abiogenesis (origin of life) researchers is to explain how such a complex and tightly interlinked system could develop by evolutionary steps, as at first sight all its parts are necessary to enable it to function. For example, a cell, whether the LUCA or in a modern organism, copies its DNA with the DNA polymerase enzyme, which is itself produced by translating the DNA polymerase gene in the DNA. Neither the enzyme nor the DNA can be produced without the other. The likely answer to this challenge is that the evolutionary process could have involved molecular self-replication, self-assembly such as of cell membranes, and autocatalysis via RNA ribozymes. Nonetheless, the transition of non-life to life has never been observed experimentally, nor has there been a satisfactory chemical explanation.
The preconditions to the development of a living cell like the LUCA are clear enough, though disputed in their details: a habitable world is formed with a supply of minerals and liquid water. Prebiotic synthesis creates a range of simple organic compounds, which are assembled into polymers such as proteins and RNA. On the other side, the process after the LUCA is readily understood: biological evolution caused the development of a wide range of species with varied forms and biochemical capabilities. However, the derivation of living things such as LUCA from simple components is far from understood.
Although Earth remains the only place where life is known, the science of astrobiology seeks evidence of life on other planets. The 2015 NASA strategy on the origin of life aimed to solve the puzzle by identifying interactions, intermediary structures and functions, energy sources, and environmental factors that contributed to the diversity, selection, and replication of evolvable macromolecular systems, and mapping the chemical landscape of potential primordial informational polymers. The advent of polymers that could replicate, store genetic information, and exhibit properties subject to selection was, it suggested, most likely a critical step in the emergence of prebiotic chemical evolution. Those polymers derived, in turn, from simple organic compounds such as nucleobases, amino acids, and sugars that could have been formed by reactions in the environment. A successful theory of the origin of life must explain how all these chemicals came into being.
Pre-1960s conceptual history
Spontaneous generation
One ancient view of the origin of life, from Aristotle until the 19th century, is of spontaneous generation. This theory held that "lower" animals such as insects were generated by decaying organic substances, and that life arose by chance. This was questioned from the 17th century, in works like Thomas Browne's Pseudodoxia Epidemica. In 1665, Robert Hooke published the first drawings of a microorganism. In 1676, Antonie van Leeuwenhoek drew and described microorganisms, probably protozoa and bacteria. Van Leeuwenhoek disagreed with spontaneous generation, and by the 1680s convinced himself, using experiments ranging from sealed and open meat incubation and the close study of insect reproduction, that the theory was incorrect. In 1668 Francesco Redi showed that no maggots appeared in meat when flies were prevented from laying eggs. By the middle of the 19th century, spontaneous generation was considered disproven.
Panspermia
Dating back to Anaxagoras in the 5th century BC, panspermia is the idea that life originated elsewhere in the universe and came to Earth. The modern version of panspermia holds that life may have been distributed to Earth by meteoroids, asteroids, comets or planetoids. It does not attempt to explain how life originated, but shifts the origin of life to another heavenly body. The advantage is that life is not required to have formed on each planet it occurs on, but rather in a more limited set of locations, or even a single location, and then spread about the galaxy to other star systems via cometary or meteorite impact. Panspermia has not received much scientific support, since it deflects the question of life's origin rather than explaining observable phenomena. Although interest in panspermia grew when traces of organic materials were found in meteorites, it is currently accepted that life started locally on Earth.
"A warm little pond": primordial soup
The idea that life originated from non-living matter in slow stages appeared in Herbert Spencer's 1864–1867 book Principles of Biology, and in William Turner Thiselton-Dyer's 1879 paper "On spontaneous generation and evolution". On 1 February 1871 Charles Darwin wrote about these publications to Joseph Hooker, and set out his own speculation, suggesting that the original spark of life may have begun in a "warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, &c., present, that a compound was chemically formed ready to undergo still more complex changes." Darwin went on to explain that "at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed."
Alexander Oparin in 1924 and J. B. S. Haldane in 1929 proposed that the first molecules constituting the earliest cells slowly self-organized from a primordial soup, and this theory is called the Oparin–Haldane hypothesis. Haldane suggested that the Earth's prebiotic oceans consisted of a "hot dilute soup" in which organic compounds could have formed. J. D. Bernal showed that such mechanisms could form most of the necessary molecules for life from inorganic precursors. In 1967, he suggested three "stages": the origin of biological monomers; the origin of biological polymers; and the evolution from molecules to cells.
Miller–Urey experiment
In 1952, Stanley Miller and Harold Urey carried out a chemical experiment to demonstrate how organic molecules could have formed spontaneously from inorganic precursors under prebiotic conditions like those posited by the Oparin–Haldane hypothesis. It used a highly reducing (lacking oxygen) mixture of gases—methane, ammonia, and hydrogen, as well as water vapor—to form simple organic monomers such as amino acids. Bernal said of the Miller–Urey experiment that "it is not enough to explain the formation of such molecules, what is necessary, is a physical-chemical explanation of the origins of these molecules that suggests the presence of suitable sources and sinks for free energy." However, current scientific consensus describes the primitive atmosphere as weakly reducing or neutral, diminishing the amount and variety of amino acids that could be produced. The addition of iron and carbonate minerals, present in early oceans, however, produces a diverse array of amino acids. Later work has focused on two other potential reducing environments: outer space and deep-sea hydrothermal vents.
Producing a habitable Earth
Evolutionary history
Early universe with first stars
Soon after the Big Bang, which occurred roughly 14 Gya, the only chemical elements present in the universe were hydrogen, helium, and lithium, the three lightest atoms in the periodic table. These elements gradually accreted and began orbiting in disks of gas and dust. Gravitational accretion of material at the hot and dense centers of these protoplanetary disks formed stars by the fusion of hydrogen. Early stars were massive and short-lived, producing all the heavier elements through stellar nucleosynthesis. Element formation through stellar nucleosynthesis proceeds up to the most stable element, iron-56. Heavier elements were formed during supernovae at the end of a star's lifecycle. Carbon, currently the fourth most abundant chemical element in the universe (after hydrogen, helium, and oxygen), was formed mainly in white dwarf stars, particularly those bigger than twice the mass of the sun. As these stars reached the end of their lifecycles, they ejected these heavier elements, among them carbon and oxygen, throughout the universe. These heavier elements allowed for the formation of new objects, including rocky planets and other bodies. According to the nebular hypothesis, the formation and evolution of the Solar System began 4.6 Gya with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
Emergence of Earth
The age of the Earth is 4.54 Gya as found by radiometric dating of calcium-aluminium-rich inclusions in carbonaceous chondrite meteorites, the oldest material in the Solar System. Earth, during the Hadean eon (from its formation until 4.031 Gya), was at first inhospitable to any living organisms. During its formation, the Earth lost a significant part of its initial mass, and consequently lacked the gravity to hold molecular hydrogen and the bulk of the original inert gases. Soon after initial accretion of Earth at 4.48 Ga, its collision with Theia, a hypothesised impactor, is thought to have created the ejected debris that would eventually form the Moon. This impact would have removed the Earth's primary atmosphere, leaving behind clouds of viscous silicates and carbon dioxide. This unstable atmosphere was short-lived and condensed shortly after to form the bulk silicate Earth, leaving behind an atmosphere largely consisting of water vapor, nitrogen, and carbon dioxide, with smaller amounts of carbon monoxide, hydrogen, and sulfur compounds. The solution of carbon dioxide in water is thought to have made the seas slightly acidic, with a pH of about 5.5.
Condensation to form liquid oceans is theorised to have occurred as early as the Moon-forming impact. This scenario has found support from the dating of 4.404 Gya zircon crystals with high δ18O values from metamorphosed quartzite of Mount Narryer in Western Australia. The Hadean atmosphere has been characterized as a "gigantic, productive outdoor chemical laboratory," similar to volcanic gases today which still support some abiotic chemistry. Despite the likely increased volcanism from early plate tectonics, the Earth may have been a predominantly water world between 4.4 and 4.3 Gya. It is debated whether or not crust was exposed above this ocean due to uncertainties of what early plate tectonics looked like. For early life to have developed, it is generally thought that a land setting is required, so this question is essential to determining when in Earth's history life evolved. Immediately after the Moon-forming impact, Earth likely had little if any continental crust, a turbulent atmosphere, and a hydrosphere subject to intense ultraviolet light from a T Tauri stage Sun. It was also affected by cosmic radiation, and continued asteroid and comet impacts. Despite all this, niche environments likely existed conducive to life on Earth in the Late-Hadean to Early-Archaean.
The Late Heavy Bombardment hypothesis posits that a period of intense impact occurred at 4.1 to 3.8 Gya during the Hadean and early Archean eons. Originally it was thought that the Late Heavy Bombardment was a single cataclysmic impact event occurring at 3.9 Gya; this would have had the potential to sterilise all life on Earth by volatilising liquid oceans and blocking the Sun needed for photosynthesising primary producers, pushing back the earliest possible emergence of life to after the Late Heavy Bombardment. However, more recent research has questioned both the intensity of the Late Heavy Bombardment and its potential for sterilisation. Uncertainties as to whether the Late Heavy Bombardment was one giant impact or a period of greater impact rates greatly change the implications for its destructive power. The 3.9 Ga date arose from dating of Apollo mission sample returns collected mostly near the Imbrium Basin, biasing the age of recorded impacts. Impact modelling of the lunar surface reveals that rather than a cataclysmic event at 3.9 Ga, multiple small-scale, short-lived periods of bombardment likely occurred. Terrestrial data backs this idea by showing multiple periods of ejecta in the rock record both before and after the 3.9 Ga marker, suggesting that the early Earth was subject to continuous impacts that would not have had as great an effect on extinction as previously thought. If the Late Heavy Bombardment was not a single cataclysmic event, the emergence of life could have taken place far before 3.9 Ga.
If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from late impacts and the then high levels of ultraviolet radiation from the sun. Geothermically heated oceanic crust could have yielded far more organic compounds through deep hydrothermal vents than the Miller–Urey experiments indicated. The available energy is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live.
Earliest evidence of life
The timing at which life emerged on Earth is most likely between 3.48 and 4.32 Gya. Minimum age estimates are based on evidence from the geologic rock record. In 2017, the earliest physical evidence of life so far found was reported to consist of microbialites in the Nuvvuagittuq Greenstone Belt of Northern Quebec, in banded iron formation rocks at least 3.77 and possibly as old as 4.32 Gya. The micro-organisms could have lived within hydrothermal vent precipitates, soon after the 4.4 Gya formation of oceans during the Hadean. The microbes resembled modern hydrothermal vent bacteria, supporting the view that abiogenesis began in such an environment. However, later research disputed this interpretation of the data, stating that the observations may be better explained by abiotic processes in silica-rich waters, "chemical gardens," circulating hydrothermal fluids, or volcanic ejecta.
Biogenic graphite has been found in 3.7 Gya metasedimentary rocks from southwestern Greenland and in microbial mat fossils from 3.49 Gya cherts in the Pilbara region of Western Australia. Evidence of early life in rocks from Akilia Island, near the Isua supracrustal belt in southwestern Greenland, dating to 3.7 Gya, has shown biogenic carbon isotopes. In other parts of the Isua supracrustal belt, graphite inclusions trapped within garnet crystals are connected to the other elements of life: oxygen, nitrogen, and possibly phosphorus in the form of phosphate, providing further evidence for life 3.7 Gya. In the Pilbara region of Western Australia, compelling evidence of early life was found in pyrite-bearing sandstone in a fossilized beach, with rounded tubular cells that oxidized sulfur by photosynthesis in the absence of oxygen. Carbon isotope ratios on graphite inclusions from the Jack Hills zircons suggest that life could have existed on Earth from 4.1 Gya.
The Pilbara region of Western Australia contains the Dresser Formation with rocks 3.48 Gya, including layered structures called stromatolites. Their modern counterparts are created by photosynthetic micro-organisms including cyanobacteria. These lie within undeformed hydrothermal-sedimentary strata; their texture indicates a biogenic origin. Parts of the Dresser formation preserve hot springs on land, but other regions seem to have been shallow seas. A molecular clock analysis suggests the LUCA emerged prior to 3.9 Gya.
Producing molecules: prebiotic synthesis
All chemical elements derive from stellar nucleosynthesis except for hydrogen and some helium and lithium. Basic chemical ingredients of life – the carbon-hydrogen molecule (CH), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+) – can be produced by ultraviolet light from stars. Complex molecules, including organic molecules, form naturally both in space and on planets. Organic molecules on the early Earth could have had either terrestrial origins, with organic molecule synthesis driven by impact shocks or by other energy sources, such as ultraviolet light, redox coupling, or electrical discharges; or extraterrestrial origins (pseudo-panspermia), with organic molecules formed in interstellar dust clouds raining down on to the planet.
Observed extraterrestrial organic molecules
An organic compound is a chemical whose molecules contain carbon. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets of the Solar System. Organic compounds are relatively common in space, formed by "factories of complex molecular synthesis" which occur in molecular clouds and circumstellar envelopes, and chemically evolve after reactions are initiated mostly by ionizing radiation. Purine and pyrimidine nucleobases including guanine, adenine, cytosine, uracil, and thymine have been found in meteorites. These could have provided the materials for DNA and RNA to form on the early Earth. The amino acid glycine was found in material ejected from comet Wild 2; it had earlier been detected in meteorites. Comets are encrusted with dark material, thought to be a tar-like organic substance formed from simple carbon compounds under ionizing radiation. A rain of material from comets could have brought such complex organic molecules to Earth. It is estimated that during the Late Heavy Bombardment, meteorites may have delivered up to five million tons of organic prebiotic elements to Earth per year. Currently 40,000 tons of cosmic dust falls to Earth each year.
Polycyclic aromatic hydrocarbons
Polycyclic aromatic hydrocarbons (PAH) are the most common and abundant polyatomic molecules in the observable universe, and are a major store of carbon. They seem to have formed shortly after the Big Bang, and are associated with new stars and exoplanets. They are a likely constituent of Earth's primordial sea. PAHs have been detected in nebulae, and in the interstellar medium, in comets, and in meteorites.
A star, HH 46-IR, resembling the sun early in its life, is surrounded by a disk of material which contains molecules including cyanide compounds, hydrocarbons, and carbon monoxide. PAHs in the interstellar medium can be transformed through hydrogenation, oxygenation, and hydroxylation to more complex organic compounds used in living cells.
Nucleobases and nucleotides
Organic compounds introduced on Earth by interstellar dust particles can help to form complex molecules, thanks to their peculiar surface-catalytic activities. The RNA component uracil and related molecules, including xanthine, in the Murchison meteorite were likely formed extraterrestrially, as suggested by studies of 12C/13C isotopic ratios. NASA studies of meteorites suggest that all four DNA nucleobases (adenine, guanine and related organic molecules) have been formed in outer space. The cosmic dust permeating the universe contains complex organics ("amorphous organic solids with a mixed aromatic–aliphatic structure") that could be created rapidly by stars. Glycolaldehyde, a sugar molecule and RNA precursor, has been detected in regions of space including around protostars and on meteorites.
Laboratory synthesis
As early as the 1860s, experiments demonstrated that biologically relevant molecules can be produced from interaction of simple carbon sources with abundant inorganic catalysts. The spontaneous formation of complex polymers from abiotically generated monomers under the conditions posited by the "soup" theory is not straightforward. Besides the necessary basic organic monomers, compounds that would have prohibited the formation of polymers were also formed in high concentration during the Miller–Urey and Joan Oró experiments. Biology uses essentially 20 amino acids for its coded protein enzymes, representing a very small subset of the structurally possible products. Since life tends to use whatever is available, an explanation is needed for why the set used is so small. Formamide is attractive as a medium that potentially provided a source of amino acid derivatives from simple aldehyde and nitrile feedstocks.
Sugars
Alexander Butlerov showed in 1861 that the formose reaction created sugars including tetroses, pentoses, and hexoses when formaldehyde is heated under basic conditions with divalent metal ions like calcium. R. Breslow proposed that the reaction was autocatalytic in 1959.
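A simplified version of the commonly cited formose mechanism (a sketch, not a full kinetic scheme) shows why the reaction is autocatalytic: glycolaldehyde consumes formaldehyde but is regenerated by retro-aldol cleavage of the larger sugars it helps to form:

$$2\,\mathrm{CH_2O} \longrightarrow \mathrm{C_2H_4O_2}\ \text{(glycolaldehyde, slow initiation)}$$
$$\mathrm{C_2H_4O_2} + \mathrm{CH_2O} \longrightarrow \mathrm{C_3H_6O_3}\ \text{(glyceraldehyde)}$$
$$\mathrm{C_3H_6O_3} + \mathrm{CH_2O} \longrightarrow \mathrm{C_4H_8O_4} \longrightarrow 2\,\mathrm{C_2H_4O_2}\ \text{(tetrose; retro-aldol closes the cycle)}$$

Further aldol additions of formaldehyde and glycolaldehyde to these intermediates yield the pentoses and hexoses mentioned above.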
Nucleobases
Nucleobases, such as guanine and adenine, can be synthesized from simple carbon and nitrogen sources, such as hydrogen cyanide (HCN) and ammonia. Formamide produces all four ribonucleotides when warmed with terrestrial minerals. Formamide is ubiquitous in the Universe, produced by the reaction of water and HCN. It can be concentrated by the evaporation of water. HCN is poisonous only to aerobic organisms (eukaryotes and aerobic bacteria), which did not yet exist. It can play roles in other chemical processes such as the synthesis of the amino acid glycine.
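The overall stoichiometry of the classic adenine synthesis from hydrogen cyanide, first reported by Joan Oró, illustrates why HCN is regarded as such an efficient feedstock: adenine (C5H5N5) is formally a pentamer of HCN,

$$5\,\mathrm{HCN} \longrightarrow \mathrm{C_5H_5N_5}\ \text{(adenine)},$$

although the actual reaction proceeds through intermediates such as the HCN tetramer diaminomaleonitrile rather than in a single step.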
DNA and RNA components including uracil, cytosine and thymine can be synthesized under outer space conditions, using starting chemicals such as pyrimidine found in meteorites. Pyrimidine may have been formed in red giant stars or in interstellar dust and gas clouds. All four RNA-bases may be synthesized from formamide in high-energy density events like extraterrestrial impacts.
Other pathways for synthesizing bases from inorganic materials have been reported. Freezing temperatures are advantageous for the synthesis of purines, due to the concentrating effect for key precursors such as hydrogen cyanide. However, while adenine and guanine require freezing conditions for synthesis, cytosine and uracil may require boiling temperatures. Seven amino acids and eleven types of nucleobases formed in ice when ammonia and cyanide were left in a freezer for 25 years. S-triazines (alternative nucleobases), pyrimidines including cytosine and uracil, and adenine can be synthesized by subjecting a urea solution to freeze-thaw cycles under a reductive atmosphere, with spark discharges as an energy source. The explanation given for the unusual speed of these reactions at such a low temperature is eutectic freezing, which crowds impurities in microscopic pockets of liquid within the ice, causing the molecules to collide more often.
Peptides
Prebiotic peptide synthesis is proposed to have occurred through a number of possible routes. Some center on high temperature/concentration conditions in which condensation becomes energetically favorable, while others focus on the availability of plausible prebiotic condensing agents.
Experimental evidence for the formation of peptides in uniquely concentrated environments is bolstered by work suggesting that wet-dry cycles and the presence of specific salts can greatly increase spontaneous condensation of glycine into poly-glycine chains. Other work suggests that while mineral surfaces, such as those of pyrite, calcite, and rutile catalyze peptide condensation, they also catalyze their hydrolysis. The authors suggest that additional chemical activation or coupling would be necessary to produce peptides at sufficient concentrations. Thus, mineral surface catalysis, while important, is not sufficient alone for peptide synthesis.
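The benefit of wet-dry cycling follows from the condensation equilibrium itself (shown here for glycine as a minimal illustration): each peptide bond formed releases one molecule of water, so removing water during a drying phase shifts the equilibrium toward the peptide,

$$2\,\mathrm{H_2N{-}CH_2{-}COOH} \rightleftharpoons \mathrm{H_2N{-}CH_2{-}CO{-}NH{-}CH_2{-}COOH} + \mathrm{H_2O}.$$

In dilute aqueous solution the reverse reaction, hydrolysis, is favored, which is why condensing agents, mineral surfaces, or evaporative concentration are invoked in the scenarios described here.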
Many prebiotically plausible condensing/activating agents have been identified, including the following: cyanamide, dicyanamide, dicyandiamide, diaminomaleonitrile, urea, trimetaphosphate, NaCl, CuCl2, (Ni,Fe)S, CO, carbonyl sulfide (COS), carbon disulfide (CS2), SO2, and diammonium phosphate (DAP).
An experiment reported in 2024 used a sapphire substrate with a web of thin cracks under a heat flow, similar to the environment of deep-ocean vents, as a mechanism to separate and concentrate prebiotically relevant building blocks from a dilute mixture, purifying their concentration by up to three orders of magnitude. The authors propose this as a plausible model for the origin of complex biopolymers. This presents another physical process that allows for concentrated peptide precursors to combine in the right conditions. A similar role of increasing amino acid concentration has been suggested for clays as well.
While all of these scenarios involve the condensation of amino acids, the prebiotic synthesis of peptides from simpler molecules such as CO, NH3 and C, skipping the step of amino acid formation, is very efficient.
Producing suitable vesicles
The largest unanswered question in evolution is how simple protocells first arose and differed in reproductive contribution to the following generation, thus initiating the evolution of life. The lipid world theory postulates that the first self-replicating object was lipid-like. Phospholipids form lipid bilayers in water while under agitation—the same structure as in cell membranes. These molecules were not present on early Earth, but other amphiphilic long-chain molecules also form membranes. These bodies may expand by insertion of additional lipids, and may spontaneously split into two offspring of similar size and composition. Lipid bodies may have provided sheltering envelopes for information storage, allowing the evolution and preservation of polymers like RNA that store information. Only one or two types of amphiphiles have been studied which might have led to the development of vesicles. There is an enormous number of possible arrangements of lipid bilayer membranes, and those with the best reproductive characteristics would have converged toward a hypercycle reaction, a positive feedback composed of two mutual catalysts represented by a membrane site and a specific compound trapped in the vesicle. Such site/compound pairs are transmissible to the daughter vesicles leading to the emergence of distinct lineages of vesicles, which would have allowed natural selection.
A protocell is a self-organized, self-ordered, spherical collection of lipids proposed as a stepping-stone to the origin of life. A functional protocell has (as of 2014) not yet been achieved in a laboratory setting. Self-assembled vesicles are essential components of primitive cells. The theory of classical irreversible thermodynamics treats self-assembly under a generalized chemical potential within the framework of dissipative systems. The second law of thermodynamics requires that overall entropy increases, yet life is distinguished by its great degree of organization. Therefore, a boundary is needed to separate ordered life processes from chaotic non-living matter.
Irene Chen and Jack W. Szostak suggest that elementary protocells can give rise to cellular behaviors including primitive forms of differential reproduction, competition, and energy storage. Competition for membrane molecules would favor stabilized membranes, suggesting a selective advantage for the evolution of cross-linked fatty acids and even the phospholipids of today. Such micro-encapsulation would allow for metabolism within the membrane and the exchange of small molecules, while retaining large biomolecules inside. Such a membrane is needed for a cell to create its own electrochemical gradient to store energy by pumping ions across the membrane. Fatty acid vesicles in conditions relevant to alkaline hydrothermal vents can be stabilized by isoprenoids which are synthesized by the formose reaction; the advantages and disadvantages of isoprenoids incorporated within the lipid bilayer in different microenvironments might have led to the divergence of the membranes of archaea and bacteria.
Laboratory experiments have shown that vesicles can undergo an evolutionary process under pressure cycling conditions. Simulating the systemic environment in tectonic fault zones within the Earth's crust, pressure cycling leads to the periodic formation of vesicles. Under the same conditions, random peptide chains are formed and continuously selected for their ability to integrate into the vesicle membrane. A further selection of the vesicles for their stability potentially leads to the development of functional peptide structures, associated with an increase in the survival rate of the vesicles.
Producing biology
Energy and entropy
Life requires a loss of entropy, or disorder, as molecules organize themselves into living matter. At the same time, the emergence of life is associated with the formation of structures beyond a certain threshold of complexity. The emergence of life with increasing order and complexity does not contradict the second law of thermodynamics, which states that overall entropy never decreases, since a living organism creates order in some places (e.g. its living body) at the expense of an increase of entropy elsewhere (e.g. heat and waste production).
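This can be stated more formally with a standard textbook entropy balance (included for clarity; it is not specific to any particular origin-of-life model): only the combined entropy change of the organism and its surroundings must be non-negative,

$$\Delta S_{\text{total}} = \Delta S_{\text{organism}} + \Delta S_{\text{surroundings}} \geq 0, \qquad \Delta S_{\text{surroundings}} \approx \frac{Q_{\text{released}}}{T},$$

so the organism's entropy can decrease locally provided that the heat Q released to the surroundings at temperature T carries away at least as much entropy.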
Multiple sources of energy were available for chemical reactions on the early Earth. Heat from geothermal processes is a standard energy source for chemistry. Other examples include sunlight, lightning, atmospheric entries of micro-meteorites, and implosion of bubbles in sea and ocean waves. This has been confirmed by experiments and simulations.
Unfavorable reactions can be driven by highly favorable ones, as in the case of iron-sulfur chemistry. For example, this was probably important for carbon fixation. Carbon fixation by reaction of CO2 with H2S via iron-sulfur chemistry is favorable, and occurs at neutral pH and 100 °C. Iron-sulfur surfaces, which are abundant near hydrothermal vents, can drive the production of small amounts of amino acids and other biomolecules.
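A frequently cited example is Wächtershäuser's proposed pyrite-forming reaction, given here as an illustration of how an exergonic mineral reaction could supply reducing power rather than as a demonstrated prebiotic pathway:

$$\mathrm{FeS} + \mathrm{H_2S} \longrightarrow \mathrm{FeS_2} + \mathrm{H_2} \qquad (\Delta G < 0).$$

The hydrogen produced, or the electrons it represents, could in principle drive otherwise unfavorable reductions such as the conversion of CO2 into simple organic acids.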
Chemiosmosis
In 1961, Peter Mitchell proposed chemiosmosis as a cell's primary system of energy conversion. The mechanism, now ubiquitous in living cells, powers energy conversion in micro-organisms and in the mitochondria of eukaryotes, making it a likely candidate for early life. Mitochondria produce adenosine triphosphate (ATP), the energy currency of the cell used to drive cellular processes such as chemical syntheses. The mechanism of ATP synthesis involves a closed membrane in which the ATP synthase enzyme is embedded. The energy required to release strongly bound ATP has its origin in protons that move across the membrane. In modern cells, those proton movements are caused by the pumping of ions across the membrane, maintaining an electrochemical gradient. In the first organisms, the gradient could have been provided by the difference in chemical composition between the flow from a hydrothermal vent and the surrounding seawater or, if life had a terrestrial origin, perhaps by meteoritic quinones conducive to the development of chemiosmotic energy across lipid membranes.
PAH world hypothesis
The PAH world hypothesis posits polycyclic aromatic hydrocarbons as precursors to the RNA world.
The RNA world
The RNA world hypothesis describes an early Earth with self-replicating and catalytic RNA but no DNA or proteins. Many researchers concur that an RNA world must have preceded the DNA-based life that now dominates. However, RNA-based life may not have been the first to exist. Another model echoes Darwin's "warm little pond" with cycles of wetting and drying.
RNA is central to the translation process. Small RNAs can catalyze all the chemical groups and information transfers required for life. RNA both expresses and maintains genetic information in modern organisms; and the chemical components of RNA are easily synthesized under the conditions that approximated the early Earth, which were very different from those that prevail today. The structure of the ribosome has been called the "smoking gun", with a central core of RNA and no amino acid side chains within 18 Å of the active site that catalyzes peptide bond formation.
The concept of the RNA world was proposed in 1962 by Alexander Rich, and the term was coined by Walter Gilbert in 1986. There were initial difficulties in the explanation of the abiotic synthesis of the nucleotides cytosine and uracil. Subsequent research has shown possible routes of synthesis; for example, formamide produces all four ribonucleotides and other biological molecules when warmed in the presence of various terrestrial minerals.
RNA replicase can function as both code and catalyst for further RNA replication, i.e. it can be autocatalytic. Jack Szostak has shown that certain catalytic RNAs can join smaller RNA sequences together, creating the potential for self-replication. The RNA replication systems, which include two ribozymes that catalyze each other's synthesis, showed a doubling time of the product of about one hour, and were subject to natural selection under the experimental conditions. If such conditions were present on early Earth, then natural selection would favor the proliferation of such autocatalytic sets, to which further functionalities could be added. Self-assembly of RNA may occur spontaneously in hydrothermal vents. A preliminary form of tRNA could have assembled into such a replicator molecule.
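To put the reported doubling time in perspective (simple arithmetic, not a claim about prebiotic conditions), exponential amplification with doubling time t_d gives

$$N(t) = N_{0}\,2^{\,t/t_{d}},$$

so a replicator doubling every hour would, in an idealized setting with unlimited substrate, be amplified by a factor of about 2^24 ≈ 1.7 × 10^7 in a single day; substrate depletion and strand degradation limit such growth in practice.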
Possible precursors to protein synthesis include the synthesis of short peptide cofactors or the self-catalysing duplication of RNA. It is likely that the ancestral ribosome was composed entirely of RNA, although some roles have since been taken over by proteins. Major remaining questions on this topic include identifying the selective force for the evolution of the ribosome and determining how the genetic code arose.
Eugene Koonin has argued that "no compelling scenarios currently exist for the origin of replication and translation, the key processes that together comprise the core of biological systems and the apparent pre-requisite of biological evolution. The RNA World concept might offer the best chance for the resolution of this conundrum but so far cannot adequately account for the emergence of an efficient RNA replicase or the translation system."
From RNA to directed protein synthesis
In line with the RNA world hypothesis, much of modern biology's templated protein biosynthesis is done by RNA molecules—namely tRNAs and the ribosome (consisting of both protein and rRNA components). The most central reaction of peptide bond synthesis is understood to be carried out by base catalysis by the 23S rRNA domain V. Experimental evidence has demonstrated successful di- and tripeptide synthesis with a system consisting of only aminoacyl phosphate adaptors and RNA guides, which could be a possible stepping stone between an RNA world and modern protein synthesis. Aminoacylation ribozymes that can charge tRNAs with their cognate amino acids have also been selected in in vitro experimentation. The authors also extensively mapped fitness landscapes within their selection to find that chance emergence of active sequences was more important than sequence optimization.
Early functional peptides
The first proteins would have had to arise without a fully-fledged system of protein biosynthesis. As discussed above, numerous mechanisms for the prebiotic synthesis of polypeptides exist. However, these random sequence peptides would not have likely had biological function. Thus, significant study has gone into exploring how early functional proteins could have arisen from random sequences. First, some evidence on hydrolysis rates shows that abiotically plausible peptides likely contained significant "nearest-neighbor" biases. This could have had some effect on early protein sequence diversity. In other work by Anthony Keefe and Jack Szostak, mRNA display selection on a library of 6 × 10¹² 80-mers was used to search for sequences with ATP binding activity. They concluded that approximately 1 in 10¹¹ random sequences had ATP binding function. While this is a single example of functional frequency in the random sequence space, the methodology can serve as a powerful simulation tool for understanding early protein evolution.
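Taking the reported figures at face value (a rough back-of-the-envelope reading, not a result stated in the study), a functional frequency of 1 in 10¹¹ implies that a library of 6 × 10¹² random sequences would be expected to contain on the order of

$$6 \times 10^{12} \times 10^{-11} \approx 60$$

ATP-binding sequences, which illustrates why even very rare functions can be discovered in random-sequence pools of realistic size.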
Phylogeny and LUCA
Starting with the work of Carl Woese from 1977, genomics studies have placed the last universal common ancestor (LUCA) of all modern life-forms between Bacteria and a clade formed by Archaea and Eukaryota in the phylogenetic tree of life. It lived over 4 Gya. A minority of studies have placed the LUCA in Bacteria, proposing that Archaea and Eukaryota are evolutionarily derived from within Eubacteria; Thomas Cavalier-Smith suggested in 2006 that the phenotypically diverse bacterial phylum Chloroflexota contained the LUCA.
In 2016, a set of 355 genes likely present in the LUCA was identified. A total of 6.1 million prokaryotic genes from Bacteria and Archaea were sequenced, identifying 355 protein clusters from among 286,514 protein clusters that were probably common to the LUCA. The results suggest that the LUCA was anaerobic with a Wood–Ljungdahl (reductive Acetyl-CoA) pathway, nitrogen- and carbon-fixing, thermophilic. Its cofactors suggest dependence upon an environment rich in hydrogen, carbon dioxide, iron, and transition metals. Its genetic material was probably DNA, requiring the 4-nucleotide genetic code, messenger RNA, transfer RNA, and ribosomes to translate the code into proteins such as enzymes. LUCA likely inhabited an anaerobic hydrothermal vent setting in a geochemically active environment. It was evidently already a complex organism, and must have had precursors; it was not the first living thing. The physiology of LUCA has been in dispute. Previous research identified 60 proteins common to all life.
Leslie Orgel argued that early translation machinery for the genetic code would be susceptible to error catastrophe. Geoffrey Hoffmann however showed that such machinery can be stable in function against "Orgel's paradox". Metabolic reactions that have also been inferred in LUCA are the incomplete reverse Krebs cycle, gluconeogenesis, the pentose phosphate pathway, glycolysis, reductive amination, and transamination.
Suitable geological environments
A variety of geologic and environmental settings have been proposed for an origin of life. These theories are often in competition with one another as there are many differing views of prebiotic compound availability, geophysical setting, and early life characteristics. The first organism on Earth likely looked different from LUCA. Between the first appearance of life and where all modern phylogenies began branching, an unknown amount of time passed, with unknown gene transfers, extinctions, and evolutionary adaptation to various environmental niches. One major shift is believed to be from the RNA world to an RNA-DNA-protein world. Modern phylogenies provide more pertinent genetic evidence about LUCA than about its precursors.
The most popular hypotheses for settings for the origin of life are deep sea hydrothermal vents and surface bodies of water. Surface waters can be classified into hot springs, moderate temperature lakes and ponds, and cold settings.
Deep sea hydrothermal vents
Hot fluids
Early micro-fossils may have come from a hot world of gases such as methane, ammonia, carbon dioxide, and hydrogen sulfide, toxic to much current life. Analysis of the tree of life places thermophilic and hyperthermophilic bacteria and archaea closest to the root, suggesting that life may have evolved in a hot environment. The deep sea or alkaline hydrothermal vent theory posits that life began at submarine hydrothermal vents. William Martin and Michael Russell have suggested "that life evolved in structured iron monosulphide precipitates in a seepage site hydrothermal mound at a redox, pH, and temperature gradient between sulphide-rich hydrothermal fluid and iron(II)-containing waters of the Hadean ocean floor. The naturally arising, three-dimensional compartmentation observed within fossilized seepage-site metal sulphide precipitates indicates that these inorganic compartments were the precursors of cell walls and membranes found in free-living prokaryotes. The known capability of FeS and NiS to catalyze the synthesis of the acetyl-methylsulphide from carbon monoxide and methylsulphide, constituents of hydrothermal fluid, indicates that pre-biotic syntheses occurred at the inner surfaces of these metal-sulphide-walled compartments".
These form where hydrogen-rich fluids emerge from below the sea floor, as a result of serpentinization of ultra-mafic olivine with seawater and a pH interface with carbon dioxide-rich ocean water. The vents form a sustained chemical energy source derived from redox reactions, in which electron donors (molecular hydrogen) react with electron acceptors (carbon dioxide); see iron–sulfur world theory. These are exothermic reactions.
Chemiosmotic gradient
Russell demonstrated that alkaline vents created an abiogenic proton motive force (a chemiosmotic gradient), ideal for abiogenesis. Their microscopic compartments, composed of iron-sulfur minerals such as mackinawite, "provide a natural means of concentrating organic molecules" and endowed these mineral cells with the catalytic properties envisaged by Günter Wächtershäuser. This movement of ions across the membrane depends on a combination of two factors:
Diffusion force caused by concentration gradient—all particles including ions tend to diffuse from higher concentration to lower.
Electrostatic force caused by electrical potential gradient—cations like protons H+ tend to diffuse down the electrical potential, anions in the opposite direction.
These two gradients taken together can be expressed as an electrochemical gradient, providing energy for abiogenic synthesis. The proton motive force can be described as the measure of the potential energy stored as a combination of proton and voltage gradients across a membrane (differences in proton concentration and electrical potential).
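The relationship described here is commonly summarized by the proton motive force equation; the form below is the standard textbook expression, not taken from the sources above, and is included only as an illustration:

```latex
% Proton motive force \Delta p across a membrane:
%   \Delta\Psi        : electrical potential difference (membrane potential)
%   \Delta\mathrm{pH} : pH difference across the membrane
%   R, T, F           : gas constant, absolute temperature, Faraday constant
\Delta p = \Delta\Psi \;-\; \frac{2.303\,R\,T}{F}\,\Delta\mathrm{pH}
```

At 25 °C the factor 2.303RT/F is about 59 mV per pH unit, so a one-unit pH difference contributes roughly as much to the gradient as a 59 mV membrane potential.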
The surfaces of mineral particles inside deep-ocean hydrothermal vents have catalytic properties similar to those of enzymes and can create simple organic molecules, such as methanol (CH3OH) and formic, acetic, and pyruvic acids out of the dissolved CO2 in the water, if driven by an applied voltage or by reaction with H2 or H2S.
Starting in 1985, researchers proposed that life arose at hydrothermal vents, that spontaneous chemistry in the Earth's crust driven by rock–water interactions at disequilibrium thermodynamically underpinned life's origin, and that the founding lineages of the archaea and bacteria were H2-dependent autotrophs that used CO2 as their terminal acceptor in energy metabolism. In 2016, Martin suggested, based upon this evidence, that the LUCA "may have depended heavily on the geothermal energy of the vent to survive". Pores at deep sea hydrothermal vents are suggested to have been occupied by membrane-bound compartments which promoted biochemical reactions. Metabolic intermediates in the Krebs cycle, gluconeogenesis, amino acid biosynthetic pathways, glycolysis, and the pentose phosphate pathway, including sugars like ribose, as well as lipid precursors, can occur non-enzymatically at conditions relevant to deep-sea alkaline hydrothermal vents.
If the deep marine hydrothermal setting was the site for the origin of life, then abiogenesis could have happened as early as 4.0-4.2 Gya. If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from impacts and the then high levels of ultraviolet radiation from the sun. The available energy in hydrothermal vents is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live. Arguments against a hydrothermal origin of life state that hyperthermophily was a result of convergent evolution in bacteria and archaea, and that a mesophilic environment would have been more likely. This hypothesis, suggested in 1999 by Galtier, was proposed one year before the discovery of the Lost City Hydrothermal Field, where white-smoker hydrothermal vents average ~45-90 °C. Moderate temperatures and alkaline seawater such as that at Lost City are now the favoured hydrothermal vent setting in contrast to acidic, high temperature (~350 °C) black-smokers.
Arguments against a vent setting
Production of prebiotic organic compounds at hydrothermal vents is estimated to be 1×10⁸ kg yr⁻¹. While a large amount of key prebiotic compounds, such as methane, are found at vents, they are in far lower concentrations than estimates of a Miller-Urey Experiment environment. In the case of methane, the production rate at vents is around 2-4 orders of magnitude lower than predicted amounts in a Miller-Urey Experiment surface atmosphere.
Other arguments against an oceanic vent setting for the origin of life include the inability to concentrate prebiotic materials due to strong dilution from seawater. This open system cycles compounds through the minerals that make up the vents, leaving little residence time for compounds to accumulate. All modern cells rely on phosphates and potassium for nucleotide backbone and protein formation respectively, making it likely that the first life forms also shared these functions. These elements were not available in high quantities in the Archaean oceans as both primarily come from the weathering of continental rocks on land, far from vent settings. Submarine hydrothermal vents are not conducive to condensation reactions needed for polymerisation to form macromolecules.
An older argument was that key polymers were encapsulated in vesicles after condensation, which supposedly would not happen in saltwater because of the high concentrations of ions. However, while it is true that salinity inhibits vesicle formation from low-diversity mixtures of fatty acids, vesicle formation from a broader, more realistic mix of fatty-acid and 1-alkanol species is more resilient.
Surface bodies of water
Surface bodies of water provide environments able to dry out and be rewetted. Continued wet-dry cycles allow the concentration of prebiotic compounds and condensation reactions to polymerise macromolecules. Moreover, lakes and ponds on land allow for detrital input from the weathering of continental rocks which contain apatite, the most common source of phosphates needed for nucleotide backbones. The amount of exposed continental crust in the Hadean is unknown, but models of early ocean depths and rates of ocean island and continental crust growth make it plausible that there was exposed land. Another line of evidence for a surface start to life is the requirement for UV for organism function. UV is necessary for the formation of the U+C nucleotide base pair by partial hydrolysis and nucleobase loss. Simultaneously, UV can be harmful and sterilising to life, especially for simple early lifeforms with little ability to repair radiation damage. Radiation levels from a young Sun were likely greater, and, with no ozone layer, harmful shortwave UV rays would reach the surface of Earth. For life to begin, a shielded environment with influx from UV-exposed sources is necessary both to benefit from and to be protected from UV. Shielding under ice, liquid water, mineral surfaces (e.g. clay) or regolith is possible in a range of surface water settings. While deep sea vents may have input from raining down of surface exposed materials, the likelihood of concentration is lessened by the ocean's open system.
Hot springs
Most branching phylogenies are thermophilic or hyperthermophilic, making it possible that the last universal common ancestor (LUCA) and preceding lifeforms were similarly thermophilic. Hot springs are formed from the heating of groundwater by geothermal activity. This intersection allows for influxes of material from deep penetrating waters and from surface runoff that transports eroded continental sediments. Interconnected groundwater systems create a mechanism for the distribution of life to a wider area.
Mulkidjanian and co-authors argue that marine environments did not provide the ionic balance and composition universally found in cells, or the ions required by essential proteins and ribozymes, especially with respect to high K+/Na+ ratio, Mn2+, Zn2+ and phosphate concentrations. They argue that the only environments that mimic the needed conditions on Earth are hot springs similar to ones at Kamchatka. Mineral deposits in these environments under an anoxic atmosphere would have had a suitable pH (while current pools in an oxygenated atmosphere would not), would have contained precipitates of photocatalytic sulfide minerals that absorb harmful ultraviolet radiation, and would have undergone wet-dry cycles that concentrate substrate solutions to levels amenable to the spontaneous formation of biopolymers, created both by chemical reactions in the hydrothermal environment and by exposure to UV light during transport from vents to adjacent pools. The hypothesized pre-biotic environments are similar to hydrothermal vents, with additional components that help explain peculiarities of the LUCA.
A phylogenomic and geochemical analysis of proteins plausibly traced to the LUCA shows that the ionic composition of its intracellular fluid is identical to that of hot springs. The LUCA likely was dependent upon synthesized organic matter for its growth. Experiments show that RNA-like polymers can be synthesized in wet-dry cycling and UV light exposure. These polymers were encapsulated in vesicles after condensation. Potential sources of organics at hot springs might have been transport by interplanetary dust particles, extraterrestrial projectiles, or atmospheric or geochemical synthesis. Hot springs could have been abundant in volcanic landmasses during the Hadean.
Temperate surface bodies of water
The hypothesis of a mesophilic start in surface bodies of water evolved from Darwin's concept of a 'warm little pond' and the Oparin-Haldane hypothesis. Freshwater bodies under temperate climates can accumulate prebiotic materials while providing suitable environmental conditions conducive to simple life forms. The climate during the Archaean is still a highly debated topic, as there is uncertainty about what continents, oceans, and the atmosphere looked like then. Atmospheric reconstructions of the Archaean from geochemical proxies and models state that sufficient greenhouse gases were present to maintain surface temperatures between 0 and 40 °C. Under this assumption, there is a greater abundance of moderate temperature niches in which life could begin.
Strong lines of evidence for mesophily from biomolecular studies include Galtier's G+C nucleotide thermometer. G+C pairs are more abundant in thermophiles due to the added stability of an additional hydrogen bond not present between A+T nucleotides. rRNA sequencing on a diverse range of modern lifeforms shows that LUCA's reconstructed G+C content was likely representative of moderate temperatures.
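The quantity behind Galtier's thermometer is simply the G+C fraction of a sequence. A minimal sketch of that calculation is below; the two short sequences are invented for illustration and are not taken from the cited studies.

```python
def gc_fraction(seq: str) -> float:
    """Fraction of G and C among the A/C/G/T/U bases of a nucleotide sequence."""
    bases = [b for b in seq.upper() if b in "ACGTU"]
    if not bases:
        raise ValueError("sequence contains no recognizable bases")
    return sum(1 for b in bases if b in "GC") / len(bases)

# Hypothetical rRNA fragments used only to illustrate the comparison:
mesophile_like   = "AUGCAUUAGCAUAAUGCUUA"   # lower G+C content
thermophile_like = "GCGGCCGUAGCGGCCGCGGC"   # higher G+C content

print(f"mesophile-like G+C:   {gc_fraction(mesophile_like):.2f}")
print(f"thermophile-like G+C: {gc_fraction(thermophile_like):.2f}")
```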
Although most modern phylogenies are thermophilic or hyperthermophilic, it is possible that their widespread diversity today is a product of convergent evolution and horizontal gene transfer rather than an inherited trait from LUCA. The reverse gyrase topoisomerase is found exclusively in thermophiles and hyperthermophiles as it allows for coiling of DNA. The reverse gyrase enzyme requires ATP to function, both of which are complex biomolecules. If an origin of life is hypothesised to involve a simple organism that had not yet evolved a membrane, let alone ATP, this would make the existence of reverse gyrase improbable. Moreover, phylogenetic studies show that reverse gyrase had an archaeal origin, and that it was transferred to bacteria by horizontal gene transfer. This implies that reverse gyrase was not present in the LUCA.
Icy surface bodies of water
Cold-start origin of life theories stem from the idea that there may have been regions on the early Earth cold enough for large ice cover to be found. Stellar evolution models predict that the Sun's luminosity was ~25% weaker than it is today. Feulner states that although this significant decrease in solar energy would have formed an icy planet, there is strong evidence that liquid water was present, possibly driven by a greenhouse effect. This would create an early Earth with both liquid oceans and icy poles.
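The effect of a ~25% weaker Sun can be illustrated with the standard planetary equilibrium-temperature formula. The solar constant, albedo and the formula itself are textbook values rather than figures from the sources above, so this is only an order-of-magnitude sketch.

```python
# Equilibrium temperature T_eq = [ S * (1 - A) / (4 * sigma) ] ** 0.25
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.30         # assumed Earth-like Bond albedo (hypothetical for the early Earth)
S_TODAY = 1361.0      # present-day solar constant, W m^-2

def t_eq(solar_constant: float, albedo: float = ALBEDO) -> float:
    """Blackbody equilibrium temperature of a planet, ignoring any greenhouse effect."""
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(f"today           : {t_eq(S_TODAY):5.1f} K")
print(f"~25% weaker Sun : {t_eq(0.75 * S_TODAY):5.1f} K")
```

Both values fall below the freezing point of water, which is why a greenhouse effect is invoked to keep the early oceans liquid.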
Meltwater from ice sheets or glaciers creates freshwater pools, another niche capable of experiencing wet-dry cycles. While pools on the surface would be exposed to intense UV radiation, bodies of water within and under ice are sufficiently shielded while remaining connected to UV-exposed areas through ice cracks. Impact melting of ice could pair freshwater with meteoritic input, a popular vessel for prebiotic components. Near-seawater levels of sodium chloride are found to destabilize fatty acid membrane self-assembly, making freshwater settings appealing for early membranous life.
Icy environments would trade the faster reaction rates that occur in warm environments for increased stability and accumulation of larger polymers. Experiments simulating Europa-like conditions of ~20 °C have synthesised amino acids and adenine, showing that Miller-Urey type syntheses can still occur at cold temperatures. In an RNA world, the ribozyme would have had even more functions than in a later DNA-RNA-protein-world. For RNA to function, it must be able to fold, a process that is hindered by temperatures above 30 °C. While RNA folding in psychrophilic organisms is slower, the process is more successful as hydrolysis is also slower. Shorter nucleotides would not suffer from higher temperatures.
Inside the continental crust
An alternative geological environment has been proposed by the geologist Ulrich Schreiber and the physical chemist Christian Mayer: the continental crust. Tectonic fault zones could present a stable and well-protected environment for long-term prebiotic evolution. Inside these systems of cracks and cavities, water and carbon dioxide are the bulk solvents. Their phase state would depend on the local temperature and pressure conditions and could vary between liquid, gaseous and supercritical. When forming two separate phases (e.g., liquid water and supercritical carbon dioxide at depths of little more than 1 km), the system provides optimal conditions for phase transfer reactions. Concurrently, the contents of the tectonic fault zones are supplied with a multitude of inorganic starting materials (e.g., carbon monoxide, hydrogen, ammonia, hydrogen cyanide, nitrogen, and even phosphate from dissolved apatite) and simple organic molecules formed by hydrothermal chemistry (e.g. amino acids, long-chain amines, fatty acids, long-chain aldehydes). Finally, the abundant mineral surfaces provide a rich variety of catalytic activity.
An especially interesting section of the tectonic fault zones is located at a depth of approximately 1000 m. For the carbon dioxide part of the bulk solvent, it provides temperature and pressure conditions near the phase transition point between the supercritical and the gaseous state. This leads to a natural accumulation zone for lipophilic organic molecules that dissolve well in supercritical CO2, but not in its gaseous state, leading to their local precipitation. Periodic pressure variations, such as those caused by geyser activity or tidal influences, result in periodic phase transitions, keeping the local reaction environment in a constant non-equilibrium state. In the presence of amphiphilic compounds (such as the long chain amines and fatty acids mentioned above), subsequent generations of vesicles are formed and are constantly and efficiently selected for their stability. The resulting structures could provide hydrothermal vents as well as hot springs with raw material for further development.
Homochirality
Homochirality is the geometric uniformity of materials composed of chiral (non-mirror-symmetric) units. Living organisms use molecules that have the same chirality (handedness): with almost no exceptions, amino acids are left-handed while nucleotides and sugars are right-handed. Chiral molecules can be synthesized, but in the absence of a chiral source or a chiral catalyst, they are formed in a 50/50 (racemic) mixture of both forms. Known mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction; asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation; statistical fluctuations during racemic synthesis; and spontaneous symmetry breaking.
Once established, chirality would be selected for. A small bias (enantiomeric excess) in the population can be amplified into a large one by asymmetric autocatalysis, such as in the Soai reaction. In asymmetric autocatalysis, the catalyst is a chiral molecule, which means that a chiral molecule is catalyzing its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other.
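A minimal numerical sketch of how autocatalysis can amplify a small enantiomeric excess is given below. It uses Frank's classic 1953 model (autocatalytic production of each enantiomer plus mutual antagonism), which is not the Soai reaction itself; the rate constants, feedstock level and starting concentrations are arbitrary illustrative values.

```python
# Frank (1953) model: each enantiomer catalyses its own formation from an achiral
# feedstock A, and the two enantiomers inactivate each other on meeting.
#   dL/dt = k * A * L - q * L * D
#   dD/dt = k * A * D - q * L * D
k, q, A = 1.0, 1.0, 1.0        # arbitrary rate constants and feedstock concentration
L, D = 0.0101, 0.0100          # tiny initial excess of the L form (ee = 0.5%)
dt, steps = 0.01, 1200         # simple forward-Euler time stepping

for step in range(steps + 1):
    if step % 200 == 0:
        ee = (L - D) / (L + D)                     # enantiomeric excess
        print(f"t = {step * dt:5.2f}   ee = {ee:8.5f}")
    dL = (k * A * L - q * L * D) * dt
    dD = (k * A * D - q * L * D) * dt
    L, D = L + dL, D + dD
```

Run as written, the excess stays near 0.5% while both enantiomers are dilute and then grows toward 1 once the antagonism term becomes significant, illustrating the amplification described above.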
Homochirality may have started in outer space, as on the Murchison meteorite the amino acid L-alanine (left-handed) is more than twice as frequent as its D (right-handed) form, and L-glutamic acid is more than three times as abundant as its D counterpart. Amino acids from meteorites show a left-handed bias, whereas sugars show a predominantly right-handed bias: this is the same preference found in living organisms, suggesting an abiogenic origin of these compounds.
In a 2010 experiment by Robert Root-Bernstein, "two D-RNA-oligonucleotides having inverse base sequences (D-CGUA and D-AUGC) and their corresponding L-RNA-oligonucleotides (L-CGUA and L-AUGC) were synthesized and their affinity determined for Gly and eleven pairs of L- and D-amino acids". The results suggest that homochirality, including codon directionality, might have "emerged as a function of the origin of the genetic code".
See also
Alternative abiogenesis scenarios
Autopoiesis
Manganese metallic nodules
Notes
References
Sources
International Symposium on the Origin of Life on the Earth (held at Moscow, 19–24 August 1957)
Proceedings of the SPIE held at San Jose, California, 22–24 January 2001
Proceedings of the SPIE held at San Diego, California, 31 July–2 August 2005
External links
Making headway with the mysteries of life's origins – Adam Mann (PNAS; 14 April 2021)
Exploring Life's Origins a virtual exhibit at the Museum of Science (Boston)
How life began on Earth – Marcia Malory (Earth Facts; 2015)
The Origins of Life – Richard Dawkins et al. (BBC Radio; 2004)
Life in the Universe – Essay by Stephen Hawking (1996)
Astrobiology
Evolutionarily significant biological phenomena
Evolutionary biology
Global events
Natural events
Prebiotic chemistry | Abiogenesis | [
"Chemistry",
"Astronomy",
"Biology"
] | 12,888 | [
"Evolutionary biology",
"Origin of life",
"Speculative evolution",
"Prebiotic chemistry",
"Astrobiology",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
19,188,374 | https://en.wikipedia.org/wiki/Konrad%20Osterwalder | Konrad Osterwalder (born June 3, 1942) is a Swiss mathematician and physicist, former Undersecretary-General of the United Nations, former Rector of the United Nations University (UNU), and Rector Emeritus of the Swiss Federal Institute of Technology Zurich (ETH Zurich). He is known for the Osterwalder–Schrader theorem.
United Nations University
Osterwalder was appointed to the position of United Nations Under Secretary General and United Nations University Rector by United Nations Secretary-General Ban Ki-moon May 2007 and served until 28 February 2013. He succeeded Prof. Hans van Ginkel from the Netherlands to be the fifth Rector of the United Nations University.
He is credited with turning the United Nations University into a world-leading institution, ranked #5 and #6 in two categories according to the 2012 Global Go to Think Tank Rankings. He was responsible for ensuring that UNU's charter was amended by the United Nations General Assembly in 2009, allowing the United Nations University to grant degrees, introducing UNU's degree programmes, and creating a new concept in education, research and development by introducing the twin institute programmes, a concept that is changing the way that development, aid and capacity building are approached both by developed countries and by developing and least developed countries.
Bologna Process
In March 2000, following the Bologna Declaration by 28 European Education Ministers, the European University Association and the Comite de Liaison within the National Rector's Conference convened the Convention of European Higher Education in Salamanca Spain, hereinafter referred to as the "Salamanca Process" with the aim of discussing the Bologna Declaration and delivering an overall, univocal response to the Council of Ministers. Professor Osterwalder, Rector of ETH, was chosen by the conference as the Rapporteur of the Salamanca Process and the voice of Higher Education institutions. The meeting concluded with a declaration and a report that led to the basis of Higher Education reform within the Bologna process and the EU. In addition, the two conveners of the conference formed the European University Association.
Life and career
Konrad Osterwalder was born in Frauenfeld, Thurgau, Switzerland, in June 1942. He studied at the Swiss Federal Institute of Technology (Eidgenössische Technische Hochschule; ETH) in Zurich, where he earned a Diploma in theoretical physics in 1965 and a Doctorate in theoretical physics in 1970. He is married to Verena Osterwalder-Bollag, an analytical therapist. They have three children.
After one year with the Courant Institute of Mathematical Sciences, New York University, he accepted a research position at Harvard University with Arthur Jaffe in 1971. He remained on the faculty of Harvard for seven years, and was promoted to Assistant Professor for Mathematical Physics in 1973 and Associate Professor for Mathematical Physics in 1976. In 1977, he returned to Switzerland upon being appointed a full Professor for Mathematical Physics at ETH Zurich. His doctoral students include Felix Finster and Emil J. Straube.
During his tenure at ETH Zurich, Osterwalder served as Head of the Department of Mathematics (1986–1990) and Head of the Planning Committee (1990–1995), and was founder of the Centro Stefano Franscini seminar center in Ascona. He was appointed Rector of ETH in 1995 and held that post for 12 years. From November 2006 through August 2007, he also served concurrently as ETH President pro tempore.
On 1 September 2007, Osterwalder joined the United Nations University as its fifth rector. In that role, he held the rank of Under-Secretary-General of the United Nations.
Osterwalder's research focused on the mathematical structure of relativistic quantum field theory as well as on elementary particle physics and statistical mechanics. During his long and distinguished career, he has been a Visiting Fellow/Guest Professor at several prominent universities around the world, including the Institut des Hautes Études Scientifiques (IHES; Bures-sur-Yvette, France); Harvard University; University of Texas (Austin); Max Planck Institute for Physics and Astrophysics (Munich), Università La Sapienza (Rome); Università di Napoli; Waseda University; and Weizmann Institute of Science (Rehovot, Israel).
Since 2014, he has been a member of the International Scientific Council of Tomsk Polytechnic University.
Career achievements
Osterwalder's career encompasses service on many advisory boards, committees and associations, including as
Editor of Communications in Mathematical Physics;
Treasurer and president of the International Association of Mathematical Physics;
Member of the visiting committee of the Harvard Department of Physics;
President of the IHÉS National Committee of the Swiss Academy of Natural Sciences;
Member of the advisory council of the Euler Institute in St. Petersburg;
Vice-president of the Conference of Rectors of Swiss Universities;
President of the Conference of European Schools of Advanced Engineering Education and Research (CESAER);
Member of the International Academic Advisory Panel of the Government of Singapore;
President of UNITECH International (a collaboration between several European Technical Universities and more than 20 leading multinational corporations);
Chairman of the Bologna-Project Group (Swiss Rectors Conference);
President, Jury of the Brandenberger Foundation;
Member of the Nucleo di Valutazione (supervisory council) of the Politecnico di Milano;
Member of the Conseil d'administration of the École Polytechnique de France (Paris);
Member of the "Comité de l´Enseignement" of the Ecole Nationale Supérieure des Mines de Paris;
Member of the University Council of the Università della Svizzera Italiana;
President chair of the University Council of the Technical University Darmstadt;
Head of the Evaluationsverbund Darmstadt-Kaiserslautern-Karlsruhe;
Member, Strategic Council, Free University of Berlin;
Member, Comitato Scientifico Alta Scuola Politecnica (Politecnici di Milano e di Torino);
Member, Beirat Robert Bosch Stiftung; and
Member, Academic Council of the International Council on Systems Engineering (INCOSE)
Member, Consiglio Fondazione Italian Institute of Technology
Member, The International Selection committee for the Millennium Technology Prize, the world’s biggest technology prize (1.5 Million US$), awarded by the Technology Academy Finland
Executive Committee Member, Club of Rome
Awards and prizes
Osterwalder has been a recipient of many honours and prizes including:
having one of the top-cited mathematical physics papers of all time
Fellow of the Alfred P. Sloan Foundation (1974–1978);
member of the Swiss Academy of Technical Sciences;
Honorary degree from the Helsinki Technical University
Honorary Member of Riga Technical University.
2009 Matteo Ricci International Award
2010 Leonardo da Vinci Medal (SEFI, European Society for Engineering Education)
From 1987 until 1995, awarded by ETH's students the prize for the best teacher of the term
Fellow of the American Mathematical Society, 2012.
Publications
Cluster Properties of the S-Matrix, diploma thesis, unpublished
Boson Fields with the λ ϕ3 Interaction in Two, Three and Four Dimensions, Ph. D. thesis, published by Physikalisches Institut ETH, Zürich (1970)
On the Hamiltonian of the Cubic Boson Self-Interaction in Four Dimensional Space Time, Fortschritte der Physik 19, 43-113 (1971)
On the Spectrum of the Cubic Boson Self-Interaction, ETH Preprint (1971)
On the Uniqueness of the Hamiltonian and of the Representation of the CCR for the Quartic Boson Interaction in Three Dimensions, Helv. Phys. Acta . 44, 884-909 (1971), with J.-P. Eckmann
Duality for Free Bose Fields, Comm. Math. Phys. 29, 1-14 (1973)
On the Uniqueness of the Energy Density in the Infinite Volume Limit for Quantum Field Models, Helv. Phys. Acta. 45, 746-754 (1972), with R. Schrader
An Application of Tomita’s Theory of Modular Hilbert Algebras: Duality for Free Bose Fields, Jour. Funct. Anal. 13, 1-12 (1973), with J.-P. Eckmann
Feynman-Kac Formula for Euclidean Fermi and Bose Fields, Phys. Rev. Lett. 29, 1423-1425 (1971), with R. Schrader
Axioms for Euclidean Green’s Functions, Comm. Math. Phys. 31, 83-113, (1973), with R. Schrader
Euclidean Fermi Fields and Feynman-Kac Formula for Boson-Fermion Models, Helv. Phys. Acta. 46, 277-302 (1973), with R. Schrader
Euclidean Green’s Functions and Wightman Distributions, in Constructive Quantum Field Theory, G. Velo and A. Wightman (eds.), 1973 Erice Lectures, Vol. 25, Springer-Verlag, Berlin - Heidelberg - New York (1973); Russian translation, MIR 1977
Euclidean Fermi Fields, in Constructive Quantum Field Theory, G. Velo and A. Wightman (eds.), 1973 Erice Lectures, Lecture Notes in Physics, Vol 25, Springer Verlag, Berlin-Heidelberg-New York, (1973); Russian translation, MIR 1977
Axioms for Euclidean Green’s Functions, II, Comm. Math. Phys. 42, 281 (1975), with R. Schrader; Russian translation in: Euclidean Quantum Field Theory, MIR 1978
Is there a Euclidean Field Theory for Fermions, Helv. Phys. Acta. 47, 781 (1974), with J. Fröhlich
The Wightman Axioms and the Mass Gap for Weakly Coupled (φ4)3 Quantum Field Theories, Ann. of Phys. 97, 80-135 (1976), with J. Feldman
The Wightman Axioms and the Mass Gap for Weakly Coupled (φ4)3 Quantum Field Theories, Proc. of the International Symposium on Mathematical Problems in Theoretical Physics, Kyoto Japan, January 23–29, 1975. Lecture Notes in Physics, Springer-Verlag, with Joel Feldman
Recent Results in Constructive Quantum Field Theory (in Japanese), Kagaku, June 1975
The Construction of λ (φ4)3 Quantum Field Models, in Colloques Internationaux C.N.R.S. No. 248, les méthodes mathématiques de la théorie quantique des champs (1975), with J. Feldman
A Nontrivial Scattering Matrix for Weakly Coupled P(ϕ)2 Models, Helv. Phys. Acta. 49, 525 (1976), with R. Sénéor
Time Ordered Operator Products and the Scattering Matrix in P(φ)2 Models, in Quantum Dynamics: Models and Mathematics, ed. L. Streit, Springer Verlag Wien, New York 1976
Gauge Theories on the Lattice, in New Developments in Quantum Theory and Statistical Mechanics, p. 173-200, ed. M. Lévy and P. Mitter (Cargèse 1976), Plenum Press New York, London 1977
Gauge Field Theories on the Lattice, Ann. Phys. 110, 440-471 (1978), with E. Seiler; reprinted in: Lattice Gauge Theories, ed. Y. Iwasaki and T. Yonega, Series of selected papers in physics, Physical Society of Japan
Lattice Gauge Theories, in Mathematical Problems in Theoretical Physics, ed. G. Dell’Antonio et al., Springer Lecture Notes in Physics, vol. 80, Springer Verlag 1978
Auf dem Weg zu einer relativistischen Quantenfeldtheorie, in Einstein Symposion Berlin, Springer Lecture Notes in Physics, vol. 100, Springer Verlag 1979
Operators, in Encyclopedia of Physics, eds. R.G. Lerner, G.L. Trigg, Addison Wesley (1981)
Constructive Quantum Field Theories: Scalar Fields, in Gauge Theories, Fundamental Interactions and Rigorous Results, eds. P. Dita, V. Georgescu, R. Purice, Birkhäuser (1982)
Virtual Representation of Symmetric Spaces and their Analytic Continuation, Ann. of Math., 118, 461 (1983) with J. Fröhlich and E. Seiler
Constructive Quantum Field Theory: Goals, Methods, Results, Helv. Phys. Acta 59, 220 (1986)
On the convergence of inverse functions of operators, J.Func. Anal. 81, 320 - 324, (1988), with A. Jaffe and A. Lesniewski
Quantum K-Theory: The Chern Character, Commun. Math. Phys. 118, 1- 14 (1988), with A. Jaffe and A. Lesniewski
On super-KMS functionals and entire cyclic cohomology, K-Theory 2, 675 - 682, (1989), with A. Jaffe and A. Lesniewski
Ward Identities for non-commutative geometry, Commun. Math. Phys. 132, 118 - 130, (1990), with A. Jaffe
Operators, in Encyclopedia of Physics, eds. R.G. Lerner, G.L. Trigg, second edition, VCH Publishers, New York, Cambridge(UK), 1991
Stability for a class of bilocal Hamiltonians, Commun. Math. Phys. 155, 183 -197, (1993), with A. Jaffe and A. Lesniewski
Supersymmetry and the stability of non-local interactions, in Differential Geometric Methods in Theoretical Physics, H.M.Ho, editor, World Scientific (1993)
Constructing Supersymmetric Quantum Field Theories, in Advances in Dynamical Systems and Quantum Physics, R. Figari, editor, World Scientific (1994)
Superspace Formulation of the Chern Character of a Theta Summable Fredholm Module, Commun. Math. Phys. 168, 643 (1995), with A. Lesniewski
Supersymmetric Quantum Field Theory, in Constructive Results in Field Theory, Statistical Mechanics and Solid State Physics, V. Rivasseau, editor, Springer Verlag 1995
Unitary Representations of Super Groups, to appear, with A. Lesniewski
Axioms for Supersymmetric Quantum Field Theories, to appear, with A. Lesniewski
Mathematical Problems in Theoretical Physics, Springer Lecture Notes in Physics, Vol.116, Springer-Verlag 1980
Critical Phenomena, Random Systems, Gauge Theories, Parts I/II, Les Houches 1984, Session XLIII, North Holland 1986 (with R. Stora)
Akademikerproduktionsanlage GmbH? Gedanken zur Positionierung der Hochschulen, NZZ, Bildung und Erziehung, 25.Nov.1993
Lehre für die Zukunft, Bulletin der ETHZ, 261, 4 - 7, 1996
The Renaissance Engineer in face of Unexpected Vulnerabilities, 30th SEFI Annual conference, Firenze 2002
Worldwide Trends and their Impacts, The 3rd Technology Trends Seminar Sept. 2008
Was erwartet die Wirtschaft von der Hochschulwelt, ZEIT Konferenz Hochschule und Bildung, Juli 2009
L’Università delle Nazioni Unite per il dialogo tra le culture, Milano, Università cattolica, Cerimonia per il premio Matteo Ricci
References
External links
New challenge for the outgoing ETH Zurich Rector, ETH Campus Life
Rector Konrad Osterwalder takes over the President’s official duties, ETH Campus Life
A Konrad Osterwalder il Matteo Ricci 2009, Cattolica News
1942 births
Living people
Swiss physicists
Swiss officials of the United Nations
Academic staff of United Nations University
ETH Zurich alumni
Swiss mathematicians
Differential geometers
Academic staff of ETH Zurich
Swiss theoretical physicists
Quantum physicists
Geometers
Fellows of the American Mathematical Society
People from Frauenfeld
Academic staff of Technische Universität Darmstadt
Presidents of the International Association of Mathematical Physics | Konrad Osterwalder | [
"Physics",
"Mathematics"
] | 3,320 | [
"Quantum physicists",
"Geometers",
"Quantum mechanics",
"Geometry"
] |
219,713 | https://en.wikipedia.org/wiki/Titanium%20dioxide | Titanium dioxide, also known as titanium(IV) oxide or titania, is the inorganic compound derived from titanium with the chemical formula TiO2. When used as a pigment, it is called titanium white, Pigment White 6 (PW6), or CI 77891. It is a white solid that is insoluble in water, although mineral forms can appear black. As a pigment, it has a wide range of applications, including paint, sunscreen, and food coloring. When used as a food coloring, it has E number E171. World production in 2014 exceeded 9 million tonnes. It has been estimated that titanium dioxide is used in two-thirds of all pigments, and pigments based on the oxide have been valued at a price of $13.2 billion.
Structure
In all three of its main dioxides, titanium exhibits octahedral geometry, being bonded to six oxide anions. The oxides in turn are bonded to three Ti centers. The overall crystal structures of rutile and anatase are tetragonal in symmetry whereas brookite is orthorhombic. The oxygen substructures are all slight distortions of close packing: in rutile, the oxide anions are arranged in distorted hexagonal close-packing, whereas they are close to cubic close-packing in anatase and to "double hexagonal close-packing" for brookite. The rutile structure is widespread for other metal dioxides and difluorides, e.g. RuO2 and ZnF2.
Molten titanium dioxide has a local structure in which each Ti is coordinated to, on average, about 5 oxygen atoms. This is distinct from the crystalline forms in which Ti coordinates to 6 oxygen atoms.
Synthetic and geologic occurrence
Synthetic TiO2 is mainly produced from the mineral ilmenite. Rutile, and anatase, naturally occurring TiO2, occur widely also, e.g. rutile as a 'heavy mineral' in beach sand. Leucoxene, fine-grained anatase formed by natural alteration of ilmenite, is yet another ore. Star sapphires and rubies get their asterism from oriented inclusions of rutile needles.
Mineralogy and uncommon polymorphs
Titanium dioxide occurs in nature as the minerals rutile and anatase. Additionally, two high-pressure forms are known as minerals: a monoclinic baddeleyite-like form known as akaogiite, and a form with a slight monoclinic distortion of the orthorhombic α-PbO2 structure known as riesite. Both can be found at the Ries crater in Bavaria. It is mainly sourced from ilmenite, which is the most widespread titanium dioxide-bearing ore around the world. Rutile is the next most abundant and contains around 98% titanium dioxide in the ore. The metastable anatase and brookite phases convert irreversibly to the equilibrium rutile phase upon heating above a threshold temperature range.
Titanium dioxide has twelve known polymorphs – in addition to rutile, anatase, brookite, akaogiite and riesite, three metastable phases can be produced synthetically (monoclinic, tetragonal, and orthorhombic ramsdellite-like), and four high-pressure forms (α-PbO2-like, cotunnite-like, orthorhombic OI, and cubic phases) also exist:
The cotunnite-type phase was claimed to be the hardest known oxide with the Vickers hardness of 38 GPa and the bulk modulus of 431 GPa (i.e. close to diamond's value of 446 GPa) at atmospheric pressure. However, later studies came to different conclusions with much lower values for both the hardness (7–20 GPa, which makes it softer than common oxides like corundum Al2O3 and rutile TiO2) and bulk modulus (~300 GPa).
Titanium dioxide (B) is found as a mineral in magmatic rocks and hydrothermal veins, as well as weathering rims on perovskite. TiO2 also forms lamellae in other minerals.
Production
The largest pigment processors are Chemours, Venator, and Tronox. Major paint and coating company end users for pigment grade titanium dioxide include Akzo Nobel, PPG Industries, Sherwin Williams, BASF, Kansai Paints and Valspar. Global pigment demand for 2010 was 5.3 Mt with annual growth expected to be about 3–4%.
The production method depends on the feedstock. In addition to ores, other feedstocks include upgraded slag. Both the chloride process and the sulfate process (both described below) produce titanium dioxide pigment in the rutile crystal form, but the sulfate process can be adjusted to produce the anatase form. Anatase, being softer, is used in fiber and paper applications. The sulfate process is run as a batch process; the chloride process is run as a continuous process.
Chloride process
In the chloride process, the ore is treated with chlorine and carbon to give titanium tetrachloride, a volatile liquid that is further purified by distillation. The TiCl4 is treated with oxygen to regenerate chlorine and produce the titanium dioxide.
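In idealized form, the two steps can be summarized by the following overall equations; real feedstocks are impure ores, so these stoichiometries are only representative:

TiO2 (in ore) + 2 Cl2 + C → TiCl4 + CO2

TiCl4 + O2 → TiO2 + 2 Cl2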
Sulfate process
In the sulfate process, ilmenite is treated with sulfuric acid to extract iron(II) sulfate pentahydrate. This process requires concentrated ilmenite (45–60% TiO2) or pretreated feedstocks as a suitable source of titanium. The resulting synthetic rutile is further processed according to the specifications of the end user, i.e. pigment grade or otherwise.
Examples of plants using the sulfate process are the Sorel-Tracy plant of QIT-Fer et Titane and the Eramet Titanium & Iron smelter in Tyssedal Norway.
Becher process
The Becher process is another method for the production of synthetic rutile from ilmenite. It first oxidizes the ilmenite as a means to separate the iron component.
Specialized methods
For specialty applications, TiO2 films are prepared by various specialized chemistries. Sol-gel routes involve the hydrolysis of titanium alkoxides such as titanium ethoxide:
Ti(OEt)4 + 2 H2O → TiO2 + 4 EtOH
A related approach that also relies on molecular precursors involves chemical vapor deposition. In this method, the alkoxide is volatilized and then decomposed on contact with a hot surface:
Ti(OEt)4 → TiO2 + 2 Et2O
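As a rough illustration of the sol-gel route, the theoretical TiO2 yield per gram of titanium ethoxide follows directly from the hydrolysis equation above. The molar masses are standard values and the 10 g batch size is an arbitrary example, not a figure from the text.

```python
# Ti(OEt)4 + 2 H2O -> TiO2 + 4 EtOH   (1 mol of precursor gives 1 mol of TiO2)
M_TI_OET4 = 228.11   # g/mol, titanium(IV) ethoxide
M_TIO2 = 79.87       # g/mol, titanium dioxide

def tio2_yield(precursor_mass_g: float) -> float:
    """Theoretical TiO2 mass (g) from a given mass of Ti(OEt)4, assuming full hydrolysis."""
    return precursor_mass_g / M_TI_OET4 * M_TIO2

batch = 10.0  # grams of Ti(OEt)4
print(f"{batch} g Ti(OEt)4 -> {tio2_yield(batch):.2f} g TiO2 at 100% conversion")
```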
Applications
Pigment
First mass-produced in 1916, titanium dioxide is the most widely used white pigment because of its brightness and very high refractive index, in which it is surpassed only by a few other materials (see list of indices of refraction). Titanium dioxide crystal size is ideally around 220 nm (measured by electron microscope) to optimize the maximum reflection of visible light. However, abnormal grain growth is often observed in titanium dioxide, particularly in its rutile phase. The occurrence of abnormal grain growth brings about a deviation of a small number of crystallites from the mean crystal size and modifies the physical behaviour of TiO2. The optical properties of the finished pigment are highly sensitive to purity. As little as a few parts per million (ppm) of certain metals (Cr, V, Cu, Fe, Nb) can disturb the crystal lattice so much that the effect can be detected in quality control. Approximately 4.6 million tons of pigmentary TiO2 are used annually worldwide, and this number is expected to increase as use continues to rise.
TiO2 is also an effective opacifier in powder form, where it is employed as a pigment to provide whiteness and opacity to products such as paints, coatings, plastics, papers, inks, foods, supplements, medicines (i.e. pills and tablets), and most toothpastes; in 2019 it was present in two-thirds of toothpastes on the French market. In paint, it is often referred to offhandedly as "brilliant white", "the perfect white", "the whitest white", or other similar terms. Opacity is improved by optimal sizing of the titanium dioxide particles.
Food additive
In food, it is commonly found in ice creams, chocolates, all types of candy, creamers, desserts, marshmallows, chewing gum, pastries, spreads, dressings, cakes, some cheeses, and many other foods.
Thin films
When deposited as a thin film, its refractive index and colour make it an excellent reflective optical coating for dielectric mirrors; it is also used in generating decorative thin films such as found in "mystic fire topaz".
Some grades of modified titanium-based pigments are used in sparkly paints, plastics, finishes and cosmetics. These are man-made pigments whose particles have two or more layers of various oxides – often titanium dioxide, iron oxide or alumina – in order to have glittering, iridescent and/or pearlescent effects similar to crushed mica or guanine-based products. In addition to these effects, a limited colour change is possible in certain formulations depending on how and at which angle the finished product is illuminated and the thickness of the oxide layer in the pigment particle; one or more colours appear by reflection while the other tones appear due to interference of the transparent titanium dioxide layers. In some products, the layer of titanium dioxide is grown in conjunction with iron oxide by calcination of titanium salts (sulfates, chlorates) around 800 °C. One example of a pearlescent pigment is Iriodin, based on mica coated with titanium dioxide or iron(III) oxide.
The iridescent effect in these titanium oxide particles is unlike the opaque effect obtained with usual ground titanium oxide pigment obtained by mining, in which case only a certain diameter of the particle is considered and the effect is due only to scattering.
Sunscreen and UV blocking pigments
In cosmetic and skin care products, titanium dioxide is used as a pigment, sunscreen and a thickener. As a sunscreen, ultrafine TiO2 is used, which is notable in that combined with ultrafine zinc oxide, it is considered to be an effective sunscreen that lowers the incidence of sun burns and minimizes the premature photoaging, photocarcinogenesis and immunosuppression associated with long term excess sun exposure. Sometimes these UV blockers are combined with iron oxide pigments in sunscreen to increase visible light protection.
Titanium dioxide and zinc oxide are generally considered to be less harmful to coral reefs than sunscreens that include chemicals such as oxybenzone, octocrylene and octinoxate.
Nanosized titanium dioxide is found in the majority of physical sunscreens because of its strong UV light absorbing capabilities and its resistance to discolouration under ultraviolet light. This advantage enhances its stability and ability to protect the skin from ultraviolet light. Nano-scaled (particle size of 20–40 nm) titanium dioxide particles are primarily used in sunscreen lotion because they scatter visible light much less than titanium dioxide pigments, and can give UV protection. Sunscreens designed for infants or people with sensitive skin are often based on titanium dioxide and/or zinc oxide, as these mineral UV blockers are believed to cause less skin irritation than other UV absorbing chemicals. Nano-TiO2, which blocks both UV-A and UV-B radiation, is used in sunscreens and other cosmetic products.
The EU Scientific Committee on Consumer Safety considered nano-sized titanium dioxide to be safe for skin applications, in concentrations of up to 25 percent, based on animal testing. The risk assessment of different titanium dioxide nanomaterials in sunscreen is currently evolving since nano-sized TiO2 is different from the well-known micronized form. The rutile form is generally used in cosmetic and sunscreen products due to it not possessing any observed ability to damage the skin under normal conditions and having a higher UV absorption. In 2016, Scientific Committee on Consumer Safety (SCCS) tests concluded that the use of nano titanium dioxide (95–100% rutile, ≦5% anatase) as a UV filter can be considered to not pose any risk of adverse effects in humans post-application on healthy skin, except in cases where the application method would lead to substantial risk of inhalation (i.e., powder or spray formulations). This safety opinion applied to nano TiO2 in concentrations of up to 25%.
Initial studies indicated that nano-TiO2 particles could penetrate the skin, causing concern over its use. These studies were later refuted, when it was discovered that the testing methodology couldn't differentiate between penetrated particles and particles simply trapped in hair follicles and that having a diseased or physically damaged dermis could be the true cause of insufficient barrier protection.
SCCS research found that when nanoparticles had certain photostable coatings (e.g., alumina, silica, cetyl phosphate, triethoxycaprylylsilane, manganese dioxide), the photocatalytic activity was attenuated and no notable skin penetration was observed; the sunscreen in this research was applied at amounts of 10 mg/cm2 for exposure periods of 24 hours. Coating TiO2 with alumina, silica, zircon or various polymers can minimize avobenzone degradation and enhance UV absorption by adding an additional light diffraction mechanism.
Titanium dioxide is used extensively in plastics and other applications as a white pigment or an opacifier and for its UV-resistant properties, where the powder disperses light – unlike organic UV absorbers – and reduces UV damage, due mostly to the particle's high refractive index.
Other uses of titanium dioxide
In ceramic glazes, titanium dioxide acts as an opacifier and seeds crystal formation.
It is used as a tattoo pigment and in styptic pencils. Titanium dioxide is produced in varying particle sizes which are both oil and water dispersible, and in certain grades for the cosmetic industry. It is also a common ingredient in toothpaste.
The exterior of the Saturn V rocket was painted with titanium dioxide; this later allowed astronomers to determine that J002E3 was likely the S-IVB stage from Apollo 12 and not an asteroid.
Titanium dioxide is an n-type semiconductor and is used in dye-sensitized solar cells. It is also used in other electronics components such as electrodes in batteries.
Research
Patenting activities
Between 2002 and 2022, there were 459 patent families that describe the production of titanium dioxide from ilmenite. The majority of these patents describe pre-treatment processes, such as using smelting and magnetic separation to increase titanium concentration in low-grade ores, leading to titanium concentrates or slags. Other patents describe processes to obtain titanium dioxide, either by a direct hydrometallurgical process or through the main industrial production processes, the sulfate process and the chloride process. The sulfate process represents 40% of the world’s titanium dioxide production and is protected in 23% of patent families. The chloride process is only mentioned in 8% of patent families, although it provides 60% of the worldwide industrial production of titanium dioxide.
Key contributors to patents on the production of titanium dioxide are companies from China, Australia and the United States, reflecting the major contribution of these countries to industrial production. Chinese companies Pangang and Lomon Billions Groups hold major patent portfolios.
Photocatalyst
Nanosized titanium dioxide, particularly in the anatase form, exhibits photocatalytic activity under ultraviolet (UV) irradiation. This photoactivity is reportedly most pronounced at the {001} planes of anatase, although the {101} planes are thermodynamically more stable and thus more prominent in most synthesised and natural anatase, as evident by the often observed tetragonal dipyramidal growth habit. Interfaces between rutile and anatase are further considered to improve photocatalytic activity by facilitating charge carrier separation and as a result, biphasic titanium dioxide is often considered to possess enhanced functionality as a photocatalyst. It has been reported that titanium dioxide, when doped with nitrogen ions or doped with metal oxide like tungsten trioxide, exhibits excitation also under visible light. The strong oxidative potential of the positive holes oxidizes water to create hydroxyl radicals. It can also oxidize oxygen or organic materials directly. Hence, in addition to its use as a pigment, titanium dioxide can be added to paints, cements, windows, tiles, or other products for its sterilizing, deodorizing, and anti-fouling properties, and is used as a hydrolysis catalyst. It is also used in dye-sensitized solar cells, which are a type of chemical solar cell (also known as a Graetzel cell).
The photocatalytic properties of nanosized titanium dioxide were discovered by Akira Fujishima in 1967 and published in 1972. The process on the surface of the titanium dioxide was called the Honda–Fujishima effect. In thin film and nanoparticle form, titanium dioxide has the potential for use in energy production: as a photocatalyst, it can break water into hydrogen and oxygen. With the hydrogen collected, it could be used as a fuel. The efficiency of this process can be greatly improved by doping the oxide with carbon. Further efficiency and durability have been obtained by introducing disorder to the lattice structure of the surface layer of titanium dioxide nanocrystals, permitting infrared absorption. Visible-light-active nanosized anatase and rutile have been developed for photocatalytic applications.
In 1995 Fujishima and his group discovered the superhydrophilicity phenomenon for titanium dioxide coated glass exposed to sun light. This resulted in the development of self-cleaning glass and anti-fogging coatings.
Nanosized TiO2 incorporated into outdoor building materials, such as paving stones in noxer blocks or paints, could reduce concentrations of airborne pollutants such as volatile organic compounds and nitrogen oxides. A TiO2-containing cement has been produced.
Using TiO2 as a photocatalyst, attempts have been made to mineralize pollutants (to convert into CO2 and H2O) in waste water. The photocatalytic destruction of organic matter could also be exploited in coatings with antimicrobial applications.
Hydroxyl radical formation
Although nanosized anatase TiO2 does not absorb visible light, it does strongly absorb ultraviolet (UV) radiation (hv), leading to the formation of hydroxyl radicals. This occurs when photo-induced valence bond holes (h+vb) are trapped at the surface of TiO2 leading to the formation of trapped holes (h+tr) that cannot oxidize water.
TiO2 + hv → e− + h+vb
h+vb → h+tr
O2 + e− → O2•−
O2•− + O2•− + 2 H+ → H2O2 + O2
O2•− + h+vb → O2
O2•− + h+tr → O2
OH− + h+vb → HO•
e− + h+tr → recombination
Note: wavelength (λ) = 387 nm. This reaction has been found to mineralize and decompose undesirable compounds in the environment, specifically in the air and in wastewater.
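The 387 nm wavelength quoted above corresponds to the photon energy needed to drive this chemistry; converting wavelength to energy is a one-line calculation. The physical constants below are standard values, not figures from the cited studies, and the comparison wavelength of 550 nm is an arbitrary example of visible light.

```python
# Photon energy E = h * c / wavelength, expressed in electron-volts
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

print(f"387 nm -> {photon_energy_ev(387):.2f} eV")   # ~3.2 eV, a commonly cited anatase band gap
print(f"550 nm -> {photon_energy_ev(550):.2f} eV")   # visible light carries less energy per photon
```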
Nanotubes
Anatase can be converted into non-carbon nanotubes and nanowires. Hollow TiO2 nanofibers can be also prepared by coating carbon nanofibers by first applying titanium butoxide.
Solubility
Titanium dioxide is insoluble in water, organic solvents, and inorganic acids. It is slightly soluble in alkali, soluble in saturated potassium acid carbonate, and can be completely dissolved in strong sulfuric acid and hydrofluoric acid after boiling for a long time.
Health and safety
Widely-occurring minerals and even gemstones are composed of TiO2. All natural titanium, comprising more than 0.5% of the Earth's crust, exists as oxides.
Food additive
In 2006, titanium dioxide was, according to one chemical encyclopedia, regarded as "completely nontoxic when orally administered". However, this is now seriously disputed.
Government policies
TiO2 whitener in food was banned in France from 2020, due to uncertainty about safe quantities for human consumption.
In 2021, the European Food Safety Authority (EFSA) ruled that as a consequence of new understandings of nanoparticles, titanium dioxide could "no longer be considered safe as a food additive", and the EU health commissioner announced plans to ban its use across the EU, with discussions beginning in June 2021. EFSA concluded that genotoxicity—which could lead to carcinogenic effects—could not be ruled out, and that a "safe level for daily intake of the food additive could not be established". In 2022, the UK Food Standards Agency and Food Standards Scotland announced their disagreement with the EFSA ruling, and did not follow the EU in banning titanium dioxide as a food additive. Health Canada similarly reviewed the available evidence in 2022 and decided not to change their position on titanium dioxide as a food additive.
The European Union removed the authorization to use titanium dioxide (E 171) in foods, effective 7 February 2022, with a six months grace period.
As of May 2023, following the European Union 2022 ban, the U.S. states California and New York were considering banning the use of titanium dioxide in foods.
As of 2024, the Food and Drug Administration (FDA) in the United States permits titanium dioxide as a food additive. It may be used to increase whiteness and opacity in dairy products (some cheeses, ice cream, and yogurt), candies, frostings, fillings, and many other foods. The FDA regulates the labeling of products containing titanium dioxide, allowing the product's ingredients list to identify titanium dioxide either as "color added" or "artificial colors" or "titanium dioxide;" it does not require that titanium dioxide be explicitly named despite growing scientific concerns. In 2023, the Consumer Healthcare Products Association, a manufacturer's trade group, defended the substance as safe at certain limits while allowing that additional studies could provide further insight, saying an immediate ban would be a "knee-jerk" reaction.
Industry response
Dunkin' Donuts dropped titanium dioxide from their merchandise in 2015 after public pressure.
Research as an ingestible nanomaterial
Due to the potential that long-term ingestion of titanium dioxide may be toxic, particularly to cells and functions of the gastrointestinal tract, preliminary research as of 2021 was assessing its possible role in disease development, such as inflammatory bowel disease and colorectal cancer.
Size distribution analyses showed that batches of food-grade TiO₂, which is produced with a target particle size in the 200–300 nm range for optimal pigmentation qualities, always include a nanoparticle-sized fraction as an inevitable byproduct of the manufacturing processes.
Andrew Maynard, director of Risk Science Center at the University of Michigan, rejected the supposed danger from use of titanium dioxide in food. He says that the titanium dioxide used by Dunkin' Brands and many other food producers is not a new material, and it is not a nanomaterial either. Nanoparticles are typically smaller than 100 nanometres in diameter, yet most of the particles in food-grade titanium dioxide are much larger.
Inhalation
Titanium dioxide dust, when inhaled, has been classified by the International Agency for Research on Cancer (IARC) as an IARC Group 2B carcinogen, meaning it is possibly carcinogenic to humans.
The US National Institute for Occupational Safety and Health recommends two separate exposure limits. NIOSH recommends that fine particles be set at an exposure limit of 2.4 mg/m3, while ultrafine particles be set at an exposure limit of 0.3 mg/m3, as time-weighted average concentrations for up to 10 hours a day during a 40-hour work week.
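A time-weighted average is simply an exposure-duration-weighted mean. The sketch below compares a hypothetical shift's measurements against the two NIOSH limits quoted above; the sampled concentrations and durations are invented for illustration.

```python
# NIOSH recommended exposure limits for TiO2 dust (time-weighted averages), from the text above
REL_FINE = 2.4        # mg/m^3, fine particles
REL_ULTRAFINE = 0.3   # mg/m^3, ultrafine particles

def time_weighted_average(samples):
    """samples: iterable of (concentration in mg/m^3, duration in hours) pairs."""
    total_exposure = sum(conc * hours for conc, hours in samples)
    total_time = sum(hours for _, hours in samples)
    return total_exposure / total_time

# Hypothetical 10-hour shift: (concentration, hours sampled)
shift = [(0.5, 4.0), (1.8, 3.0), (0.2, 3.0)]
twa = time_weighted_average(shift)
print(f"TWA = {twa:.2f} mg/m^3, fine-particle limit {'exceeded' if twa > REL_FINE else 'met'}")
```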
Although no evidence points to acute toxicity, recurring concerns have been expressed about nanophase forms of these materials. Studies of workers with high exposure to TiO2 particles indicate that even at high exposure there is no adverse effect to human health.
Environmental waste introduction
Titanium dioxide (TiO₂) is mostly introduced into the environment as nanoparticles via wastewater treatment plants. Cosmetic pigments including titanium dioxide enter the wastewater when the product is washed off into sinks after cosmetic use. Once in the sewage treatment plants, pigments separate into sewage sludge which can then be released into the soil when injected into the soil or distributed on its surface. 99% of these nanoparticles wind up on land rather than in aquatic environments due to their retention in sewage sludge. In the environment, titanium dioxide nanoparticles have low to negligible solubility and have been shown to be stable once particle aggregates are formed in soil and water surroundings. In the process of dissolution, water-soluble ions typically dissociate from the nanoparticle into solution when thermodynamically unstable. TiO2 dissolution increases when there are higher levels of dissolved organic matter and clay in the soil. However, aggregation is promoted by pH values at the isoelectric point of TiO2 (pH = 5.8), which renders the particles neutral, and by solution ion concentrations above 4.5 mM.
See also
Delustrant
Dye-sensitized solar cell
List of inorganic pigments
Noxer blocks, TiO2-coated pavers that remove pollutants from the air
Suboxide
Surface properties of transition metal oxides
Titanium dioxide nanoparticle
Sources
References
External links
International Chemical Safety Card 0338
NIOSH Pocket Guide to Chemical Hazards
"Titanium Dioxide Classified as Possibly Carcinogenic to Humans", Canadian Centre for Occupational Health and Safety, August, 2006 (if inhaled as a powder)
A description of TiO2 photocatalysis
Titanium and titanium dioxide production data (US and World)
Dye-sensitized solar cells
E-number additives
Excipients
Food colorings
IARC Group 2B carcinogens
Inorganic pigments
Sunscreening agents
Titanium(IV) compounds
Transition metal oxides | Titanium dioxide | [
"Chemistry"
] | 5,460 | [
"Inorganic pigments",
"Inorganic compounds"
] |
219,847 | https://en.wikipedia.org/wiki/Levinson%20recursion | Levinson recursion or Levinson–Durbin recursion is a procedure in linear algebra to recursively calculate the solution to an equation involving a Toeplitz matrix. The algorithm runs in Θ(n²) time, which is a strong improvement over Gauss–Jordan elimination, which runs in Θ(n³).
The Levinson–Durbin algorithm was proposed first by Norman Levinson in 1947, improved by James Durbin in 1960, and subsequently improved to 4n² and then 3n² multiplications by W. F. Trench and S. Zohar, respectively.
Other methods to process data include Schur decomposition and Cholesky decomposition. In comparison to these, Levinson recursion (particularly split Levinson recursion) tends to be faster computationally, but more sensitive to computational inaccuracies like round-off errors.
The Bareiss algorithm for Toeplitz matrices (not to be confused with the general Bareiss algorithm) runs about as fast as Levinson recursion, but it uses O(n²) space, whereas Levinson recursion uses only O(n) space. The Bareiss algorithm, though, is numerically stable, whereas Levinson recursion is at best only weakly stable (i.e. it exhibits numerical stability for well-conditioned linear systems).
Newer algorithms, called asymptotically fast or sometimes superfast Toeplitz algorithms, can solve in Θ(n log^p n) for various p (e.g. p = 2, p = 3). Levinson recursion remains popular for several reasons; for one, it is relatively easy to understand in comparison; for another, it can be faster than a superfast algorithm for small n (usually n < 256).
Derivation
Background
Matrix equations follow the form M x = y.
The Levinson–Durbin algorithm may be used for any such equation, as long as M is a known Toeplitz matrix with a nonzero main diagonal. Here y is a known vector, and x is an unknown vector of numbers x_i yet to be determined.
For the sake of this article, êi is a vector made up entirely of zeroes, except for its ith place, which holds the value one. Its length will be implicitly determined by the surrounding context. The term N refers to the width of the matrix above – M is an N×N matrix. Finally, in this article, superscripts refer to an inductive index, whereas subscripts denote indices. For example (and definition), in this article, the matrix T^n is an n×n matrix that copies the upper left n×n block from M – that is, T^n_(i,j) = M_(i,j).
T^n is also a Toeplitz matrix, meaning that it can be written as

T^n = [ t_0      t_(−1)   t_(−2)   …   t_(−n+1) ]
      [ t_1      t_0      t_(−1)   …   t_(−n+2) ]
      [ t_2      t_1      t_0      …   t_(−n+3) ]
      [ ⋮        ⋮        ⋮        ⋱   ⋮        ]
      [ t_(n−1)  t_(n−2)  t_(n−3)  …   t_0      ]

so that each entry T^n_(i,j) depends only on the difference i − j, written here as t_(i−j).
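As a concrete illustration (not part of the original derivation), such a matrix can be built numerically from its first column and first row. The following Python sketch uses NumPy and SciPy, and the example values are hypothetical:

import numpy as np
from scipy.linalg import toeplitz

# First column (t_0, t_1, ..., t_(n-1)) and first row (t_0, t_(-1), ..., t_(-n+1));
# the two must agree in their first entry. Values here are hypothetical.
col = np.array([5.0, 1.0, 0.5, 0.2])
row = np.array([5.0, 2.0, 1.0, 0.3])

T = toeplitz(col, row)  # every descending diagonal of T is constant
print(T)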
Introductory steps
The algorithm proceeds in two steps. In the first step, two sets of vectors, called the forward and backward vectors, are established. The forward vectors are used to help get the set of backward vectors; then they can be immediately discarded. The backwards vectors are necessary for the second step, where they are used to build the solution desired.
Levinson–Durbin recursion defines the nth "forward vector", denoted f^n, as the vector of length n which satisfies T^n f^n = ê1.
The nth "backward vector" is defined similarly; it is the vector of length n which satisfies:
An important simplification can occur when M is a symmetric matrix; then the two vectors are related by b^n_i = f^n_(n+1−i); that is, they are row-reversals of each other. This can save some extra computation in that special case.
Obtaining the backward vectors
Even if the matrix is not symmetric, the nth forward and backward vectors may be found from the vectors of length n − 1 as follows. First, the forward vector may be extended with a zero to obtain

T^n [f^(n−1), 0]^T = [1, 0, …, 0, ε_f]^T.
In going from T^(n−1) to T^n, the extra column added to the matrix does not perturb the solution when a zero is used to extend the forward vector. However, the extra row added to the matrix has perturbed the solution; it has created an unwanted error term ε_f, which occurs in the last place. The above equation gives it the value

ε_f = Σ_(i=1..n−1) M_(n,i) f^(n−1)_i = Σ_(i=1..n−1) t_(n−i) f^(n−1)_i.
This error will be returned to shortly and eliminated from the new forward vector; but first, the backwards vector must be extended in a similar (albeit reversed) fashion. For the backwards vector,

T^n [0, b^(n−1)]^T = [ε_b, 0, …, 0, 1]^T.
As before, the extra column added to the matrix does not perturb this new backwards vector; but the extra row does. Here we have another unwanted error ε_b with value

ε_b = Σ_(i=2..n) M_(1,i) b^(n−1)_(i−1) = Σ_(i=1..n−1) t_(−i) b^(n−1)_i.
These two error terms can be used to form higher-order forward and backward vectors described as follows. Using the linearity of matrices, the following identity holds for all (α, β):

T^n ( α [f^(n−1), 0]^T + β [0, b^(n−1)]^T ) = α [1, 0, …, 0, ε_f]^T + β [ε_b, 0, …, 0, 1]^T.
If α and β are chosen so that the right hand side yields ê1 or ên, then the quantity in the parentheses will fulfill the definition of the nth forward or backward vector, respectively. With those alpha and beta chosen, the vector sum in the parentheses is simple and yields the desired result.
To find these coefficients, α_f and β_f are chosen such that

f^n = α_f [f^(n−1), 0]^T + β_f [0, b^(n−1)]^T,

and respectively α_b and β_b are chosen such that

b^n = α_b [f^(n−1), 0]^T + β_b [0, b^(n−1)]^T.
By multiplying both previous equations by T^n one gets the following equation:

[ 1     ε_b ]                    [ 1   0 ]
[ 0     0   ]                    [ 0   0 ]
[ ⋮     ⋮   ]  [ α_f   α_b ]  =  [ ⋮   ⋮ ]
[ 0     0   ]  [ β_f   β_b ]     [ 0   0 ]
[ ε_f   1   ]                    [ 0   1 ]
Now, with all the zeroes in the middle of the two vectors above disregarded and collapsed, only the following equation is left:

[ 1     ε_b ]  [ α_f   α_b ]     [ 1   0 ]
[ ε_f   1   ]  [ β_f   β_b ]  =  [ 0   1 ]
With these solved for (by using the Cramer 2×2 matrix inverse formula), the new forward and backward vectors are

f^n = ( [f^(n−1), 0]^T − ε_f [0, b^(n−1)]^T ) / (1 − ε_b ε_f)

b^n = ( [0, b^(n−1)]^T − ε_b [f^(n−1), 0]^T ) / (1 − ε_b ε_f).
Performing these vector summations, then, gives the nth forward and backward vectors from the prior ones. All that remains is to find the first of these vectors, and then some quick sums and multiplications give the remaining ones. The first forward and backward vectors are simply

f^1 = b^1 = [1/M_(1,1)] = [1/t_0].
Using the backward vectors
The above steps give the N backward vectors for M. From there, a more arbitrary equation is

M x = y.
The solution can be built in the same recursive way that the backwards vectors were built. Accordingly, x must be generalized to a sequence of intermediates x^n, such that x^N = x.
The solution is then built recursively by noticing that if

T^(n−1) x^(n−1) = [y_1, y_2, …, y_(n−1)]^T,

then, extending x^(n−1) with a zero again and defining an error constant where necessary,

T^n [x^(n−1), 0]^T = [y_1, y_2, …, y_(n−1), ε_x]^T,   with   ε_x = Σ_(i=1..n−1) t_(n−i) x^(n−1)_i.
We can then use the nth backward vector to eliminate the error term and replace it with the desired entry y_n as follows:

x^n = [x^(n−1), 0]^T + (y_n − ε_x) b^n.
Extending this method until n = N yields the solution x.
In practice, these steps are often done concurrently with the rest of the procedure, but they form a coherent unit and deserve to be treated as their own step.
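The derivation above translates directly into code. The following Python sketch is an illustrative implementation of the recursion as just described, not a reference or optimized version; the function name and example data are assumptions, and the matrix is assumed to have nonzero leading principal minors so that no division by zero occurs:

import numpy as np
from scipy.linalg import toeplitz

def levinson_solve(col, row, y):
    # Solve T x = y for a Toeplitz matrix T given by its first column `col`
    # and first row `row` (with col[0] == row[0]), using the forward/backward
    # vector recursion described above. Illustrative sketch only.
    n = len(y)
    t = {}                       # t[k] is the value on diagonal k: T[i, j] = t[i - j]
    for k in range(n):
        t[k] = col[k]            # main diagonal and subdiagonals
        t[-k] = row[k]           # superdiagonals (t[0] is set twice, consistently)
    f = np.array([1.0 / t[0]])   # forward vector:  T^1 f = e_1
    b = np.array([1.0 / t[0]])   # backward vector: T^1 b = e_1
    x = np.array([y[0] / t[0]])  # intermediate solution x^1
    for m in range(1, n):
        # Error terms produced by extending the previous vectors with a zero
        eps_f = sum(t[m - i] * f[i] for i in range(m))     # last entry of T [f, 0]
        eps_b = sum(t[-(i + 1)] * b[i] for i in range(m))  # first entry of T [0, b]
        denom = 1.0 - eps_f * eps_b
        f_ext = np.append(f, 0.0)
        b_ext = np.insert(b, 0, 0.0)
        f = (f_ext - eps_f * b_ext) / denom
        b = (b_ext - eps_b * f_ext) / denom
        # Extend the intermediate solution and cancel its error with the new b
        eps_x = sum(t[m - i] * x[i] for i in range(m))
        x = np.append(x, 0.0) + (y[m] - eps_x) * b
    return x

# Hypothetical example, checked against a dense matrix-vector product
col = np.array([5.0, 1.0, 0.5, 0.2])
row = np.array([5.0, 2.0, 1.0, 0.3])
y = np.array([1.0, 2.0, 3.0, 4.0])
x = levinson_solve(col, row, y)
print(np.allclose(toeplitz(col, row) @ x, y))  # expected: True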
Block Levinson algorithm
If M is not strictly Toeplitz, but block Toeplitz, the Levinson recursion can be derived in much the same way by regarding the block Toeplitz matrix as a Toeplitz matrix with matrix elements (Musicus 1988). Block Toeplitz matrices arise naturally in signal processing algorithms when dealing with multiple signal streams (e.g., in MIMO systems) or cyclo-stationary signals.
See also
Split Levinson recursion
Linear prediction
Autoregressive model
Notes
References
Defining sources
Levinson, N. (1947). "The Wiener RMS error criterion in filter design and prediction." J. Math. Phys., v. 25, pp. 261–278.
Durbin, J. (1960). "The fitting of time series models." Rev. Inst. Int. Stat., v. 28, pp. 233–243.
Trench, W. F. (1964). "An algorithm for the inversion of finite Toeplitz matrices." J. Soc. Indust. Appl. Math., v. 12, pp. 515–522.
Musicus, B. R. (1988). "Levinson and Fast Choleski Algorithms for Toeplitz and Almost Toeplitz Matrices." RLE TR No. 538, MIT.
Delsarte, P. and Genin, Y. V. (1986). "The split Levinson algorithm." IEEE Transactions on Acoustics, Speech, and Signal Processing, v. ASSP-34(3), pp. 470–478.
Further work
Brent R.P. (1999), "Stability of fast algorithms for structured linear systems", Fast Reliable Algorithms for Matrices with Structure (editors—T. Kailath, A.H. Sayed), ch.4 (SIAM).
Bunch, J. R. (1985). "Stability of methods for solving Toeplitz systems of equations." SIAM J. Sci. Stat. Comput., v. 6, pp. 349–364.
Summaries
Bäckström, T. (2004). "2.2. Levinson–Durbin Recursion." Linear Predictive Modelling of Speech – Constraints and Line Spectrum Pair Decomposition. Doctoral thesis. Report no. 71 / Helsinki University of Technology, Laboratory of Acoustics and Audio Signal Processing. Espoo, Finland.
Claerbout, Jon F. (1976). "Chapter 7 – Waveform Applications of Least-Squares." Fundamentals of Geophysical Data Processing. Palo Alto: Blackwell Scientific Publications.
Golub, G.H., and Loan, C.F. Van (1996). "Section 4.7 : Toeplitz and related Systems" Matrix Computations, Johns Hopkins University Press
Matrices
Numerical analysis | Levinson recursion | [
"Mathematics"
] | 1,901 | [
"Mathematical objects",
"Computational mathematics",
"Matrices (mathematics)",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
219,861 | https://en.wikipedia.org/wiki/In-place%20algorithm | In computer science, an in-place algorithm is an algorithm that operates directly on the input data structure without requiring extra space proportional to the input size. In other words, it modifies the input in place, without creating a separate copy of the data structure. An algorithm which is not in-place is sometimes called not-in-place or out-of-place.
In-place can have slightly different meanings. In its strictest form, the algorithm can only have a constant amount of extra space, counting everything including function calls and pointers. However, this form is very limited, as simply having an index into a length-n array requires O(log n) bits. More broadly, in-place means that the algorithm does not use extra space for manipulating the input but may require a small though nonconstant extra space for its operation. Usually, this space is O(log n), though sometimes anything in o(n) is allowed. Note that space complexity also has varied choices in whether or not to count the index lengths as part of the space used. Often, the space complexity is given in terms of the number of indices or pointers needed, ignoring their length. In this article, we refer to total space complexity (DSPACE), counting pointer lengths. Therefore, the space requirements here have an extra log n factor compared to an analysis that ignores the lengths of indices and pointers.
An algorithm may or may not count the output as part of its space usage. Since in-place algorithms usually overwrite their input with output, no additional space is needed. When writing the output to write-only memory or a stream, it may be more appropriate to only consider the working space of the algorithm. In theoretical applications such as log-space reductions, it is more typical to always ignore output space (in these cases it is more essential that the output is write-only).
Examples
Given an array a of n items, suppose we want an array that holds the same elements in reversed order and to dispose of the original. One seemingly simple way to do this is to create a new array b of equal size, fill it with copies from a in the appropriate order, and then delete a.
function reverse(a[0..n - 1])
allocate b[0..n - 1]
for i from 0 to n - 1
b[n − 1 − i] := a[i]
return b
Unfortunately, this requires O(n) extra space for having the arrays a and b available simultaneously. Also, allocation and deallocation are often slow operations. Since we no longer need a, we can instead overwrite it with its own reversal using this in-place algorithm, which needs only a constant number (2) of auxiliary variables, i and tmp, no matter how large the array is.
function reverse_in_place(a[0..n-1])
for i from 0 to floor((n-2)/2)
tmp := a[i]
a[i] := a[n − 1 − i]
a[n − 1 − i] := tmp
As another example, many sorting algorithms rearrange arrays into sorted order in-place, including: bubble sort, comb sort, selection sort, insertion sort, heapsort, and Shell sort. These algorithms require only a few pointers, so their space complexity is O(log n).
Quicksort operates in-place on the data to be sorted. However, quicksort requires O(log n) stack space pointers to keep track of the subarrays in its divide and conquer strategy. Consequently, quicksort needs O(log² n) additional space. Although this non-constant space technically takes quicksort out of the in-place category, quicksort and other algorithms needing only O(log n) additional pointers are usually considered in-place algorithms.
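To illustrate why the extra space stays logarithmic, the Python sketch below (a generic quicksort, not tied to any particular library) always recurses into the smaller partition and iterates over the larger one, which bounds the recursion depth, and hence the number of stack frames, by O(log n). The Lomuto partition scheme and the names used are illustrative choices:

def quicksort_in_place(a, lo=0, hi=None):
    # Sorts a[lo..hi] in place. Recursing only into the smaller partition
    # and looping over the larger one keeps the stack depth O(log n).
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort_in_place(a, lo, p - 1)  # smaller left part
            lo = p + 1                        # continue with the larger right part
        else:
            quicksort_in_place(a, p + 1, hi)  # smaller right part
            hi = p - 1                        # continue with the larger left part

def partition(a, lo, hi):
    # Lomuto partition around the pivot a[hi]; only swaps elements in place.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

data = [3, 6, 1, 8, 2, 9, 4]
quicksort_in_place(data)
print(data)  # [1, 2, 3, 4, 6, 8, 9]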
Most selection algorithms are also in-place, although some considerably rearrange the input array in the process of finding the final, constant-sized result.
Some text manipulation algorithms such as trim and reverse may be done in-place.
In computational complexity
In computational complexity theory, the strict definition of in-place algorithms includes all algorithms with O(1) space complexity, the class DSPACE(1). This class is very limited; it equals the regular languages. In fact, it does not even include any of the examples listed above.
Algorithms in L, the class of problems requiring O(log n) additional space, are usually considered in-place. This class is more in line with the practical definition, as it allows numbers of size O(log n) as pointers or indices. This expanded definition still excludes quicksort, however, because of its recursive calls.
Identifying the in-place algorithms with L has some interesting implications; for example, it means that there is a (rather complex) in-place algorithm to determine whether a path exists between two nodes in an undirected graph, a problem that requires O(n) extra space using typical algorithms such as depth-first search (a visited bit for each node). This in turn yields in-place algorithms for problems such as determining if a graph is bipartite or testing whether two graphs have the same number of connected components.
Role of randomness
In many cases, the space requirements of an algorithm can be drastically cut by using a randomized algorithm. For example, if one wishes to know if two vertices in a graph of n vertices are in the same connected component of the graph, there is no known simple, deterministic, in-place algorithm to determine this. However, if we simply start at one vertex and perform a random walk of roughly n³ steps, the chance that we will stumble across the other vertex, provided that it is in the same component, is very high. Similarly, there are simple randomized in-place algorithms for primality testing such as the Miller–Rabin primality test, and there are also simple in-place randomized factoring algorithms such as Pollard's rho algorithm.
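A minimal Python sketch of this random-walk idea is given below; the step budget, function name, and example graph are illustrative assumptions rather than a tuned algorithm. Apart from the read-only graph, it stores only the current vertex and a step counter, so its working space is logarithmic in the size of the graph:

import random

def probably_connected(adj, s, t, steps=None):
    # Randomized, one-sided test of whether s and t lie in the same connected
    # component of the undirected graph `adj` (adjacency lists). Returning
    # True is always correct; returning False may occasionally be wrong.
    n = len(adj)
    if steps is None:
        steps = 20 * n ** 3  # illustrative budget: enough steps with high probability
    v = s
    for _ in range(steps):
        if v == t:
            return True
        v = random.choice(adj[v])  # move to a uniformly random neighbour
    return v == t

# Hypothetical example: a path 0-1-2-3 and a separate edge 4-5
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
print(probably_connected(adj, 0, 3))  # very likely True
print(probably_connected(adj, 0, 4))  # False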
In functional programming
Functional programming languages often discourage or do not support explicit in-place algorithms that overwrite data, since this is a type of side effect; instead, they only allow new data to be constructed. However, good functional language compilers will often recognize when an object very similar to an existing one is created and then the old one is thrown away, and will optimize this into a simple mutation "under the hood".
Note that it is possible in principle to carefully construct in-place algorithms that do not modify data (unless the data is no longer being used), but this is rarely done in practice.
See also
Table of in-place and not-in-place sorting algorithms
References
Algorithms | In-place algorithm | [
"Mathematics"
] | 1,342 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
220,046 | https://en.wikipedia.org/wiki/Index%20of%20chemistry%20articles | Chemistry (from Egyptian kēme (chem), meaning "earth") is the physical science concerned with the composition, structure, and properties of matter, as well as the changes it undergoes during chemical reactions.
Below is a list of chemistry-related articles in alphabetical order. Chemical compounds are listed separately at List of inorganic compounds, List of biomolecules, or List of organic compounds.
The Outline of chemistry delineates different aspects of chemistry.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
References
Indexes of science articles | Index of chemistry articles | [
"Chemistry"
] | 128 | [
"nan"
] |