id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
22,307,998 | https://en.wikipedia.org/wiki/LTR%20retrotransposon | LTR retrotransposons are class I transposable elements (TEs) characterized by the presence of long terminal repeats (LTRs) directly flanking an internal coding region. As retrotransposons, they mobilize through reverse transcription of their mRNA and integration of the newly created cDNA into another genomic location. Their mechanism of retrotransposition is shared with retroviruses, with the difference that LTR retrotransposons spread mainly vertically, by passing active TE insertions to the progeny, and undergo horizontal transfer much more rarely. LTR retrotransposons that form virus-like particles are classified under Ortervirales.
Their size ranges from a few hundred base pairs to 30 kb; the largest elements reported to date are members of the Burro retrotransposon family in Schmidtea mediterranea.
In plant genomes, LTR retrotransposons are the major repetitive sequence class constituting more than 75% of the maize genome. LTR retrotransposons make up about 8% of the human genome and approximately 10% of the mouse genome.
Structure and propagation
LTR retrotransposons have direct long terminal repeats that range from ~100 bp to over 5 kb in size. LTR retrotransposons are further sub-classified into the Ty1-copia-like (Pseudoviridae), Ty3-like (Metaviridae, formerly referred to as Gypsy-like, a name that is being considered for retirement), and BEL-Pao-like (Belpaoviridae) groups, based on both their degree of sequence similarity and the order of encoded gene products. The Ty1-copia and Ty3-Metaviridae groups of retrotransposons are commonly found in high copy number (up to a few million copies per haploid nucleus) in animal, fungal, protist, and plant genomes. BEL-Pao-like elements have so far only been found in animals.
All functional LTR retrotransposons encode a minimum of two genes, gag and pol, that are sufficient for their replication. Gag encodes a polyprotein with a capsid and a nucleocapsid domain. Gag proteins form virus-like particles in the cytoplasm, inside which reverse transcription occurs. The pol gene produces three proteins: a protease (PR), a reverse transcriptase carrying both an RT (reverse transcriptase) domain and an RNase H domain, and an integrase (IN).
Typically, LTR-retrotransposon mRNAs are produced by the host RNA pol II acting on a promoter located in their 5’ LTR. The Gag and Pol genes are encoded in the same mRNA. Depending on the host species, two different strategies can be used to express the two polyproteins: a fusion into a single open reading frame (ORF) that is then cleaved or the introduction of a frameshift between the two ORFs. Occasional ribosomal frameshifting allows the production of both proteins, while ensuring that much more Gag protein is produced to form virus-like particles.
Reverse transcription usually initiates at a short sequence located immediately downstream of the 5' LTR, termed the primer binding site (PBS). Specific host tRNAs bind to the PBS and act as primers for reverse transcription, which occurs in a complex, multi-step process, ultimately producing a double-stranded cDNA molecule. The cDNA is finally integrated into a new location, creating short target site duplications (TSDs) and adding a new copy to the host genome.
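The structural signature just described, two direct LTRs around an internal region plus a short TSD created on integration, is what de novo annotation tools scan for. Below is a minimal, purely illustrative sketch of that idea; the toy sequence and every parameter are invented, and real annotators (LTRharvest-style tools, for instance) tolerate mismatches and use indexed search rather than brute force:

```python
def find_ltr_candidate(genome, min_ltr=8, tsd_len=5):
    """Toy scan: report the first region flanked by an exact direct repeat
    (the two LTRs) that is itself flanked by a short exact duplication
    (the TSD). Brute force, purely to illustrate the element's anatomy."""
    n = len(genome)
    for i in range(tsd_len, n - 2 * min_ltr - tsd_len):
        ltr = genome[i:i + min_ltr]
        j = genome.find(ltr, i + min_ltr)          # second copy of the LTR
        if j == -1 or j + min_ltr + tsd_len > n:
            continue
        left_tsd = genome[i - tsd_len:i]           # sequence just 5' of the element
        right_tsd = genome[j + min_ltr:j + min_ltr + tsd_len]  # just 3' of it
        if left_tsd == right_tsd:                  # TSD: same sequence on both sides
            return {"element": (i, j + min_ltr), "ltr": ltr, "tsd": left_tsd}
    return None

# Hypothetical toy genome: TSD 'ACGTA', LTR 'GGGCCCAT', internal region, LTR, TSD again.
toy = "TTTT" + "ACGTA" + "GGGCCCAT" + "AAAACCCGGTTT" + "GGGCCCAT" + "ACGTA" + "TTTT"
print(find_ltr_candidate(toy))
```

On the toy genome this reports the element's coordinates together with the shared LTR and the flanking 5-bp duplication.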
Types
Ty1-copia retrotransposons
Ty1-copia retrotransposons are abundant in species ranging from single-cell algae to bryophytes, gymnosperms, and angiosperms. They encode four protein domains in the following order: protease, integrase, reverse transcriptase, and ribonuclease H.
At least two classification systems exist for the subdivision of Ty1-copia retrotransposons into five lineages: Sireviruses/Maximus, Oryco/Ivana, Retrofit/Ale, TORK (subdivided into Angela/Sto, TAR/Fourf, GMR/Tork), and Bianca.
Sireviruses/Maximus retrotransposons contain an additional putative envelope gene. This lineage is named for the founder element SIRE1 in the Glycine max genome, and was later described in many species such as Zea mays, Arabidopsis thaliana, Beta vulgaris, and Pinus pinaster. Plant Sireviruses of many sequenced plant genomes are summarized at the MASIVEdb Sirevirus database.
Ty3-retrotransposons (formerly gypsy)
Ty3-retrotransposons are widely distributed in the plant kingdom, including both gymnosperms and angiosperms. They encode at least four protein domains in the order: protease, reverse transcriptase, ribonuclease H, and integrase. Based on structure, presence/absence of specific protein domains, and conserved protein sequence motifs, they can be subdivided into several lineages:
Errantiviruses contain an additional defective envelope ORF with similarities to the retroviral envelope gene. First described as Athila-elements in Arabidopsis thaliana, they have been later identified in many species, such as Glycine max and Beta vulgaris.
Chromoviruses contain an additional chromodomain (chromatin organization modifier domain) at the C-terminus of their integrase protein. They are widespread in plants and fungi, probably retaining protein domains during evolution of these two kingdoms. It is thought that the chromodomain directs retrotransposon integration to specific target sites. According to sequence and structure of the chromodomain, chromoviruses are subdivided into the four clades CRM, Tekay, Reina and Galadriel. Chromoviruses from each clade show distinctive integration patterns, e.g. into centromeres or into the rRNA genes.
Ogre elements are gigantic Ty3-retrotransposons reaching lengths of up to 25 kb. They were first described in Pisum sativum.
Metaviruses describe conventional Ty3-gypsy retrotransposons that do not contain additional domains or ORFs.
The Sushi family of Ty3 long terminal repeat retrotransposons was first identified in teleost fish, and Sushi-like neogenes were subsequently identified in mammals. Mammalian retrotransposon-derived transcripts (MARTs) cannot transpose but have retained open reading frames, show high levels of evolutionary conservation, and are subject to selective pressures, which suggests some have become neofunctionalized genes with new cellular functions. Retrotransposon gag-like-3 (RTL3/ZCCHC5/MART3) is one of eleven Sushi-like neogenes identified in the human genome.
BEL/pao family
The BEL/pao family is found in animals.
Endogenous retroviruses (ERV)
Although retroviruses are often classified separately, they share many features with LTR retrotransposons. A major difference from Ty1-copia and Ty3-gypsy retrotransposons is that retroviruses have an envelope protein (ENV). A retrovirus can be transformed into an LTR retrotransposon through inactivation or deletion of the domains that enable extracellular mobility. If such a retrovirus infects and subsequently inserts itself into the genome of germ-line cells, it may be transmitted vertically and become an endogenous retrovirus (ERV).
Terminal repeat retrotransposons in miniature (TRIMs)
Some LTR retrotransposons lack all of their coding domains. Due to their short size, they are referred to as terminal repeat retrotransposons in miniature (TRIMs). Nevertheless, TRIMs may still retrotranspose, as they can rely on the coding domains of autonomous Ty1-copia or Ty3-gypsy retrotransposons. Among the TRIMs, the Cassandra family plays an exceptional role, as the family is unusually widespread among higher plants. In contrast to all other characterized TRIMs, Cassandra elements harbor a 5S rRNA promoter in their LTR sequence. Due to their short overall length and the relatively high contribution of the flanking LTRs, TRIMs are prone to rearrangements by recombination.
References
Mobile genetic elements | LTR retrotransposon | [
"Biology"
] | 1,804 | [
"Molecular genetics",
"Mobile genetic elements"
] |
4,764,018 | https://en.wikipedia.org/wiki/Electroless%20nickel-phosphorus%20plating | Electroless nickel-phosphorus plating, also referred to as E-nickel, is a chemical process that deposits an even layer of nickel-phosphorus alloy on the surface of a solid substrate, like metal or plastic. The process involves dipping the substrate in a water solution containing nickel salt and a phosphorus-containing reducing agent, usually a hypophosphite salt. It is the most common version of electroless nickel plating (EN plating) and is often referred to by that name. A similar process uses a borohydride reducing agent, yielding a nickel-boron coating instead.
Unlike electroplating, electroless plating processes in general do not require passing an electric current through the bath and the substrate; the reduction of the metal cations in solution to metallic nickel is achieved by purely chemical means, through an autocatalytic reaction. This creates an even layer of metal regardless of the geometry of the surface - in contrast to electroplating, which suffers from uneven current density due to the effect of substrate shape on the electric resistance of the bath and therefore on the current distribution within it. Moreover, electroless plating can be applied to non-conductive surfaces.
It has many industrial applications, from merely decorative to the prevention of corrosion and wear. It can be used to apply composite coatings, by suspending suitable powders in the bath.
Historical overview
The reduction of nickel salts to nickel metal by hypophosphite was accidentally discovered by Charles Adolphe Wurtz in 1844. In 1911, François Auguste Roux of L'Aluminium Français patented the process (using both hypophosphite and orthophosphite) for general metal plating.
However, Roux's invention does not seem to have received much commercial use. In 1946 the process was accidentally rediscovered by Abner Brenner and Grace E. Riddell of the National Bureau of Standards. They tried adding various reducing agents to an electroplating bath in order to prevent undesirable oxidation reactions at the anode. When they added sodium hypophosphite, they observed that the amount of nickel that was deposited at the cathode exceeded the theoretical limit of Faraday's law.
Brenner and Riddell presented their discovery at the 1946 Convention of the American Electroplaters' Society (AES); a year later, at the same conference, they proposed the term "electroless" for the process and described optimized bath formulations, which resulted in a patent.
A declassified US Army technical report in 1963 credits the discovery to Wurtz and Roux more than to Brenner and Riddell.
During 1954–1959, a team led by Gregorie Gutzeit at General American Transportation Corporation greatly developed the process, determining the optimum parameters and concentrations of the bath, and introducing many important additives to speed up the deposition rate and prevent unwanted reactions, such as spontaneous deposition. They also studied the chemistry of the process.
In 1969, Harold Edward Bellis from DuPont filed a patent for a general class of processes using sodium borohydride, dimethylamine borane, or sodium hypophosphite in the presence of thallium salts, thus producing a metal-thallium-boron or metal-thallium-phosphorus coating, where the metal could be either nickel or cobalt. The boron or phosphorus content was claimed to be variable from 0.1 to 12%, and that of thallium from 0.5 to 6%. The coatings were claimed to be "an intimate dispersion of hard trinickel boride (Ni₃B) or nickel phosphide (Ni₃P) in a soft matrix of nickel and thallium".
Procedure
Surface cleaning
Before plating, the surface of the material must be thoroughly cleaned. Unwanted solids left on the surface cause poor plating. Cleaning is usually achieved by a series of chemical baths, including non-polar solvents to remove oils and greases, as well as acids and alkalis to remove oxides, insoluble organics, and other surface contaminants. After applying each bath, the surface must be thoroughly rinsed with water to remove any residue of the cleaning chemicals.
Internal stresses in the substrate created by machining or welding can affect the plating.
Plating bath
The main ingredients of an electroless nickel plating bath are a source of nickel cations Ni²⁺, usually nickel sulfate, and a suitable reducing agent, such as hypophosphite H₂PO₂⁻ or borohydride BH₄⁻.
With hypophosphite, the main reaction that produces the nickel plating yields orthophosphite H₂PO₃⁻, elemental phosphorus, protons H⁺ and molecular hydrogen H₂:
2 Ni²⁺ + 8 H₂PO₂⁻ + 2 H₂O → 2 Ni(s) + 6 H₂PO₃⁻ + 2 H⁺ + 2 P(s) + 3 H₂(g)
This reaction is catalyzed by some metals including cobalt, palladium, rhodium, and nickel itself. Because of the latter, the reaction is auto-catalytic, and proceeds spontaneously once an initial layer of nickel has formed on the surface.
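The 8:2 molar ratio of hypophosphite to deposited nickel in the equation above translates directly into reagent consumption, which is what makes bath replenishment an economic factor (see the advantages and disadvantages section below). A rough sketch, assuming sodium hypophosphite monohydrate as the reducing salt and ignoring competing side reactions:

```python
# Approximate molar masses in g/mol
M_Ni = 58.69
M_NaH2PO2_H2O = 22.99 + 2 * 1.008 + 30.97 + 2 * 16.00 + 18.02  # sodium hypophosphite monohydrate

# The plating reaction consumes 8 mol hypophosphite per 2 mol Ni deposited (4:1).
hypo_per_ni_mol = 8 / 2
grams_hypo_per_gram_ni = hypo_per_ni_mol * M_NaH2PO2_H2O / M_Ni
print(f"~{grams_hypo_per_gram_ni:.1f} g NaH2PO2*H2O per g Ni plated")  # roughly 7 g/g
```

Even this idealized estimate shows that several grams of reducing salt are consumed per gram of nickel, which is why automatic replenishment is common in production baths.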
The plating bath also often includes:
complexing agents, such as carboxylic acids or amines, to increase phosphate solubility and to prevent the white-out phenomenon by slowing the reaction.
stabilizers, such as lead salts, sulfur compounds, or various organic compounds, to slow the reduction by co-depositing with the nickel.
buffers, to maintain the acidity of the bath. Many complexing agents act as buffers.
brighteners, such as cadmium salts or certain organic compounds, to improve the surface finish. They are mostly co-deposited with nickel (like the stabilizers).
surfactants, to keep the deposited layer hydrophilic in order to reduce pitting and staining.
accelerators, such as certain sulfur compounds, to counteract the reduction of plating rate caused by complexing agents. They are usually co-deposited and may cause discoloration.
Surface activation
Because of the autocatalytic character of the reaction, the surface to be plated must be activated by making it hydrophilic, then ensuring that it consists of a metal with catalytic activity. If the substrate is not made of one of those metals, then a thin layer of one of them must be deposited first, by some other process.
If the substrate is a metal that is more electropositive than nickel, such as iron and aluminum, an initial nickel film will be created spontaneously by a redox reaction with the bath, such as:
Fe(s) + Ni²⁺(aq) → Ni(s) + Fe²⁺(aq)
2 Al(s) + 3 Ni²⁺(aq) → 3 Ni(s) + 2 Al³⁺(aq)
For metals that are less electropositive than nickel, such as copper, the initial nickel layer can be created by immersing a piece of a more electropositive metal, such as zinc, electrically connected to the substrate, thus creating a shorted Galvanic cell.
On substrates that are not metallic but are electrically conductive, such as graphite, the initial layer can be created by briefly running an electric current through it and the bath, as in electroplating. If the substrate is not conductive, such as ABS and other plastics, one can use an activating bath containing a noble metal salt, like palladium chloride or silver nitrate, and a suitable reducing agent.
If the substrate is non-metallic, activation can be done with a weak acid etch, a nickel strike, or a proprietary solution.
After-plating treatment
After plating, an anti-oxidation or anti-tarnish chemical coating, such as phosphate or chromate, is applied, followed by rinsing with water and drying, to prevent staining. Baking may be necessary to improve the hardness and adhesion of the plating, anneal away internal stresses, and expel trapped hydrogen that could make the coating brittle.
Variants
The processes for electroless nickel-phosphorus plating can be modified by substituting cobalt for nickel, wholly or partially, with relatively few changes. Other nickel-phosphorus alloys can be created with suitable baths, such as nickel-zinc-phosphorus.
Composites by codeposition
Electroless nickel-phosphorus plating can produce composite materials consisting of minute solid particles embedded in the nickel-phosphorus coat. The general procedure is to suspend the particles in the plating bath, so that the growing metal layer will surround and cover them. This procedure was initially developed by Odekerken in 1966 for electrodeposited nickel-chromium coatings. In that study, in an intermediate layer, finely powdered particles, like aluminum oxide and polyvinyl chloride (PVC) resin, were distributed within a metallic matrix. By changing the baths, the procedure can create coatings with multiple layers of different composition.
The first commercial application of Odekerken's work was electroless nickel-silicon carbide coatings on the Wankel internal combustion engine. Another commercial composite, introduced in 1981, incorporated polytetrafluoroethylene (nickel-phosphorus-PTFE). However, the co-deposition of diamond and PTFE particles was more difficult than that of aluminum oxide or silicon carbide. The feasibility of incorporating a second phase of fine particles, from nanometer to micrometer size, within a metal-alloy matrix has initiated a new generation of composite coatings.
Characteristics
Advantages and disadvantages
Compared to the electrolytic process, a major advantage of electroless nickel plating is that it creates an even coating of a desired thickness and volume, even in parts with complex shape, recesses, and blind holes. Because of this property, it may often be the only option.
Another major advantage of EN plating is that it does not require electrical power, electrical apparatuses, or sophisticated jigs and racks.
If properly formulated, EN plating may also provide a less porous coating, harder and more resistant to corrosion and hydrogen absorption.
Electroless nickel plating also can produce coatings that are free of built-in mechanical stress, or even have compressive stress.
A disadvantage is the higher cost of the chemicals, which are consumed in proportion to the mass of nickel deposited; whereas in electroplating the nickel ions are replenished by the metallic nickel anode. Automatic mechanisms may be needed to replenish those reagents during plating.
The specific characteristics vary depending on the type of EN plating and nickel alloy used, which are chosen to suit the application.
Types
The metallurgical properties of the alloy depend on the percentage of phosphorus.
Low-phosphorus coatings have a phosphorus content of up to 4%. Their hardness reaches up to 60 on the Rockwell C scale.
Medium-phosphorus coatings, the most common type, are defined as those with 4 to 10% P, although the range depends on the application: up to 4–7% for decorative applications, 6–9% for industrial applications, and 4–10% for electronics.
High-phosphorus coatings have 10-14% P. They are preferred for parts that will be exposed to highly corrosive acidic environments, such as in oil drilling and coal mining. Their hardness may reach up to 600 on the Vickers test. Note that Vickers hardness is not easily comparable to the Rockwell scale.
Surface finish
Electroless nickel plating can have a matte, semi-bright, or bright finish.
Structure
Electroless nickel-phosphorus coatings with less than 7% phosphorus are solid solutions with a microcrystalline structure, with each grain 2–6 nm across. Coatings with more than 10% phosphorus are amorphous. Between these two limits, the coating is a mixture of amorphous and microcrystalline materials.
Physical properties
The melting point of the nickel-phosphorus alloy deposited by the EN process is significantly lower than that of pure nickel (1455 °C), and decreases as the phosphorus content increases, down to 890 °C at about 14% P.
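For a feel of the trend just described, here is a deliberately crude sketch that interpolates linearly between the two endpoints quoted above; the linearity is purely an assumption for illustration, since the real Ni-P phase diagram is not linear (there is a eutectic near 11% P):

```python
def approx_melting_point(p_wt_percent):
    """Crude linear interpolation between pure Ni (1455 C at 0% P) and the
    ~890 C figure quoted above at ~14% P. Illustration only; the actual
    liquidus curve of the Ni-P system is not a straight line."""
    t_ni, t_14p = 1455.0, 890.0
    p = min(max(p_wt_percent, 0.0), 14.0)
    return t_ni + (t_14p - t_ni) * p / 14.0

print(approx_melting_point(8))  # mid-phosphorus coating: ~1132 C by this toy model
```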
The magnetic properties of the coatings decrease with increasing phosphorus content. Coatings with more than 11.2% P are non-magnetic.
Solderability of low-phosphorus coatings is good, but decreases with increasing P content.
Porosity decreases as the phosphorus content increases, while hardness, wear resistance, and resistance to corrosion increase.
Applications
Electroless nickel-phosphorus is used when wear resistance, hardness and corrosion protection are required. Applications include oilfield valves, rotors, drive shafts, paper handling equipment, fuel rails, optical surfaces for diamond turning, door knobs, kitchen utensils, bathroom fixtures, electrical/mechanical tools and office equipment.
Due to the high hardness of the coating, it can be used to salvage worn parts. Coatings of 25 to 100 micrometers can be applied and machined back to the final dimensions. Its uniform deposition profile means it can be applied to complex components not readily suited to other hard-wearing coatings like hard chromium.
It is also used extensively in the manufacture of hard disk drives, as a way of providing an atomically smooth coating to the aluminium disks. The magnetic layers are then deposited on top of this film, usually by sputtering and finishing with protective carbon and lubrication layers.
Its use in the automotive industry for wear resistance has increased significantly. However, it is important to recognize that only End of Life Vehicles Directive or RoHS compliant process types (free from heavy metal stabilizers) may be used for these applications.
Printed circuit boards
Electroless nickel plating, covered by a thin layer of gold, is used in the manufacture of printed circuit boards (PCBs) to avoid oxidation and improve the solderability of copper contacts and plated through-holes and vias. The gold is typically applied by quick immersion in a solution containing gold salts. This process is known in the industry as electroless nickel immersion gold (ENIG). A variant of this process adds a thin layer of electroless palladium over the nickel, a process known by the acronym ENEPIG.
Standards
AMS-2404
AMS-C-26074
ASTM B-733
ASTM-B-656 (inactive)
Mil-C-26074E
MIL-DTL-32119
IPC-4552 (for ENIG)
IPC-7095 (for ENIG)
See also
Nickel electroplating
Nucleation
Organic Solderability Preservative (OSP)
Electroless nickel-boron plating
Electroless copper plating
References
Printed circuit board manufacturing
Metal plating | Electroless nickel-phosphorus plating | [
"Chemistry",
"Engineering"
] | 2,958 | [
"Metallurgical processes",
"Coatings",
"Electronic engineering",
"Electrical engineering",
"Metal plating",
"Printed circuit board manufacturing"
] |
4,765,320 | https://en.wikipedia.org/wiki/Guide%20RNA | Guide RNA (gRNA) or single guide RNA (sgRNA) is a short sequence of RNA that functions as a guide for the Cas9 endonuclease or other Cas proteins that cut double-stranded DNA, and can thereby be used for gene editing. In bacteria and archaea, gRNAs are part of the CRISPR-Cas system, which serves as an adaptive immune defense that protects the organism from viruses. Here the short gRNAs serve as detectors of foreign DNA and direct the Cas enzymes that degrade the foreign nucleic acid.
History
The RNA editing guide RNA was discovered in 1990 by B. Blum, N. Bakalara, and L. Simpson through Northern blot hybridization in the mitochondrial maxicircle DNA of the eukaryotic parasite Leishmania tarentolae. Subsequent research throughout the mid-2000s and the following years explored the structure and function of gRNA and the CRISPR-Cas system. A significant breakthrough occurred in 2012, when it was discovered that gRNA could guide the Cas9 endonuclease to introduce target-specific cuts in double-stranded DNA. This discovery contributed to the 2020 Nobel Prize in Chemistry awarded to Jennifer Doudna and Emmanuelle Charpentier for their development of CRISPR-Cas9 gene-editing technology.
Guide RNA in Protists
Trypanosomatid protists and other kinetoplastids have a post-transcriptional RNA modification process known as "RNA editing" that performs uridine insertion/deletion inside the mitochondria. This mitochondrial DNA is circular and is divided into maxicircles and minicircles. A mitochondrion contains about 50 maxicircles, which have both coding and non-coding regions and are approximately 20 kilobases (kb) long. The coding region is highly conserved (16-17 kb), and the non-coding region varies depending on the species. Minicircles are small (around 1 kb) but more numerous than maxicircles; a mitochondrion contains several thousand minicircles. Maxicircles can encode "cryptogenes" and some gRNAs; minicircles can encode the majority of gRNAs. Some gRNA genes show identical insertion and deletion sites even if they have different sequences, whereas other gRNA sequences are not complementary to pre-edited mRNA. Maxicircle and minicircle molecules are catenated into a giant network of DNA inside the mitochondrion.
The majority of maxicircle transcripts cannot be translated into proteins due to frameshifts in their sequences. These frameshifts are corrected post-transcriptionally through the insertion and deletion of uridine residues at precise sites, which creates an open reading frame. This open reading frame is subsequently translated into a protein that is homologous to mitochondrial proteins found in other cells. The process of uridine insertion and deletion is mediated by short guide RNAs (gRNAs), which encode the editing information through complementary sequences and allow base pairing between guanine and uracil (G-U) as well as between guanine and cytosine (G-C), facilitating the editing process.
The function of the gRNA-mRNA Complex
Guide RNAs are mainly transcribed from the intergenic regions of the DNA maxicircle and have sequences complementary to mRNA. The 3' end of a gRNA contains an oligo-U tail (5-24 nucleotides in length), which lies in a non-encoded region but interacts with the A- and G-rich regions of pre-edited mRNA to form a stable gRNA-mRNA complex, thermodynamically stabilized by 5' and 3' anchors. This initial hybrid helps in the recognition of the specific mRNA site to be edited.
RNA editing typically progresses from the 3' to the 5' end on the mRNA. The initial editing process begins when a gRNA forms an RNA duplex with a complementary mRNA sequence located just downstream of the editing site. This pairing recruits a number of ribonucleoprotein complexes that direct the cleavage of the first mismatched base adjacent to the gRNA-mRNA anchor. Following this, Uridylyltransferase inserts a 'U' at the 3' end, and RNA ligase then joins the two severed ends. The process repeats at the next upstream editing site in a similar manner. A single gRNA usually encodes the information for several editing sites (an editing "block"), the editing of which produces a complete gRNA/mRNA duplex. This process of sequential editing is known as the enzyme cascade model.
In the case of "pan-edited" mRNAs, the duplex unwinds and another gRNA forms a duplex with the edited mRNA sequence, initiating another round of editing. These overlapping gRNAs form an editing "domain". Some genes contain multiple editing domains. The extent of editing for any particular gene varies among trypanosomatid species. The variation consists of the loss of editing at the 3' side, probably due to the loss of minicircle sequence classes that encode specific gRNAs. A retroposition model has been proposed to explain the partial, and in some cases, complete loss of editing through evolution. Although the loss of editing is typically lethal, such losses have been observed in old laboratory strains. The maintenance of editing over the long evolutionary history of these ancient protists suggests the presence of a selective advantage, the exact nature of which is still uncertain.
It is not clear why trypanosomatids utilize such an elaborate mechanism to produce mRNAs. It might have originated in the early mitochondria of the ancestor of the kinetoplastid protist lineage, since it is present in the bodonids, which are ancestral to the trypanosomatids, and may not be present in the euglenoids, which branched from the same common ancestor as the kinetoplastids.
Guide RNA sequences
In the protozoan Leishmania tarentolae, 12 of the 18 mitochondrial genes are edited using this process. One such gene is Cyb. The mRNA is actually edited twice in succession. For the first edit, the relevant sequence on the mRNA is as follows:
mRNA 5' AAAGAAAAGGCUUUAACUUCAGGUUGU 3'
The 3' end is used to anchor the gRNA (gCyb-I gRNA in this case) by basepairing (some G/U pairs are used). The 5' end does not exactly match and one of three specific endonucleases cleaves the mRNA at the mismatch site.
gRNA 3' AAUAAUAAAUUUUUAAAUAUAAUAGAAAAUUGAAGUUCAGUA 5'
mRNA 5' A A AGAAA A G G C UUUAACUUCAGGUUGU 3'
The mRNA is now "repaired" by adding U's at each editing site in succession, giving the following sequence:
gRNA 3' AAUAAUAAAUUUUUAAAUAUAAUAGAAAAUUGAAGUUCAGUA 5'
mRNA 5' UUAUUAUUUAGAAAUUUAUGUUGUCUUUUAACUUCAGGUUGU 3'
This particular gene has two overlapping gRNA editing sites. The 5' end of this section is the 3' anchor for another gRNA (gCyb-II gRNA).
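Because duplex recognition tolerates G-U wobble pairs (as noted above), checking a gRNA/mRNA duplex reduces to a per-position tally. A small sketch using the two 42-nt sequences shown above, aligned antiparallel so that position i of the gRNA (written 3'→5') faces position i of the edited mRNA (written 5'→3'); the pair sets are the standard Watson-Crick pairs plus the two wobble pairs:

```python
WATSON_CRICK = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
WOBBLE = {("G", "U"), ("U", "G")}

def classify_duplex(grna_3to5, mrna_5to3):
    """Tally pair types for an antiparallel gRNA/mRNA duplex written so that
    the two strings face each other position by position."""
    counts = {"watson_crick": 0, "wobble": 0, "mismatch": 0}
    for g, m in zip(grna_3to5, mrna_5to3):
        if (g, m) in WATSON_CRICK:
            counts["watson_crick"] += 1
        elif (g, m) in WOBBLE:
            counts["wobble"] += 1
        else:
            counts["mismatch"] += 1
    return counts

gRNA = "AAUAAUAAAUUUUUAAAUAUAAUAGAAAAUUGAAGUUCAGUA"  # 3'->5', from the text above
mRNA = "UUAUUAUUUAGAAAUUUAUGUUGUCUUUUAACUUCAGGUUGU"  # 5'->3', edited sequence above
print(classify_duplex(gRNA, mRNA))
```

For these two sequences the tally comes out as 36 Watson-Crick pairs and 6 wobble pairs with no mismatches, i.e. a fully paired post-editing duplex.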
Guide RNA in Prokaryotes
CRISPR In Prokaryotes
Prokaryotes such as bacteria and archaea use CRISPR (clustered regularly interspaced short palindromic repeats) and its associated Cas enzymes as their adaptive immune system. When prokaryotes are infected by phages and manage to fend off the attack, specific Cas enzymes cut the phage DNA (or RNA) and integrate the fragments into the CRISPR sequence interspaces. These stored segments are then recognized during future virus attacks, allowing Cas enzymes to use RNA copies of these segments, along with their associated CRISPR sequences, as gRNA to identify and neutralize the foreign sequences.
Structure
Guide RNA targets the complementary sequences by simple Watson-Crick base pairing. In the type II CRISPR/Cas system, the sgRNA directs the Cas enzyme to specific regions in the genome for targeted DNA cleavage. The sgRNA is an artificially engineered combination of two RNA molecules: CRISPR RNA (crRNA) and trans-activating crRNA (tracrRNA). The crRNA component is responsible for binding to the target-specific DNA region, while the tracrRNA component is responsible for activation of the Cas9 endonuclease activity. These two components are linked by a short tetraloop structure, resulting in the formation of the sgRNA. The tracrRNA consists of base pairs that form a stem-loop structure, enabling its attachment to the endonuclease enzyme. Transcription of the CRISPR locus generates crRNA, which contains spacer regions flanked by repeat sequences, typically 18-20 base pairs (bp) in length. This crRNA guides the Cas9 endonuclease to the complementary target region on the DNA, where it cleaves the DNA, forming what is known as the effector complex. Modifications of the crRNA sequence within the sgRNA alter the binding location, allowing precise targeting of different DNA regions and effectively making it a programmable system for genome editing.
Applications
Designing gRNAs
The targeting specificity of CRISPR-Cas9 is determined by the 20-nucleotide (nt) sequence at the 5' end of the gRNA. The desired target sequence must precede the Protospacer Adjacent Motif (PAM), which is a short DNA sequence usually 2-6 base pairs in length that follows the DNA region targeted for cleavage by the CRISPR system, such as CRISPR-Cas9. The PAM is required for a Cas nuclease to cut and is usually located 3-4 nucleotides downstream from the cut site. Once the gRNA base pairs with the target, Cas9 induces a double-strand break about 3 nucleotides upstream of the PAM.
The optimal GC content of the guide sequence should be over 50%. A higher GC content enhances the stability of the RNA-DNA duplex and reduces off-target hybridization. The length of guide sequences is typically 20 bp, but they can also range from 17 to 24 bp. A longer sequence minimizes off-target effects. Guide sequences shorter than 17 bp are at risk of targeting multiple loci.
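The two design rules just described, a 20-nt protospacer immediately 5' of the PAM and a GC content above ~50%, translate directly into a scanning routine. A minimal sketch assuming SpCas9's NGG PAM; real design tools also scan the reverse strand and score off-target sites genome-wide, both of which are omitted here:

```python
def candidate_guides(dna, guide_len=20, min_gc=0.50):
    """Scan the given (+) strand for guide-length protospacers followed by an
    NGG PAM and keep those passing the GC-content rule of thumb."""
    dna = dna.upper()
    hits = []
    for i in range(len(dna) - guide_len - 2):
        pam = dna[i + guide_len:i + guide_len + 3]
        if pam[1:] == "GG":                        # N-G-G
            guide = dna[i:i + guide_len]
            gc = (guide.count("G") + guide.count("C")) / guide_len
            if gc >= min_gc:
                # Cas9 cuts ~3 nt upstream (5') of the PAM, per the text above
                hits.append({"guide": guide, "pam": pam,
                             "cut_site": i + guide_len - 3, "gc": round(gc, 2)})
    return hits

# Hypothetical 60-bp target region, purely for illustration
region = "ATGCGTACGGCTAGCTAGGCCGCTAGCGGATCGATCGGGTACGCTAGCGGCGGATCAGGA"
for h in candidate_guides(region):
    print(h)
```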
CRISPR Cas9
CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 is a technique used for gene editing and gene therapy. Cas9 is an endonuclease enzyme that cuts DNA at a specific location directed by a guide RNA. This is a target-specific technique that can introduce gene knockouts or knock-ins, depending on the double-strand repair pathway. Evidence shows that, both in vitro and in vivo, tracrRNA is required for Cas9 to bind to the target DNA sequence. The CRISPR-Cas9 system consists of three main stages. The first stage involves extension of the CRISPR locus by addition of foreign DNA spacers to the genome sequence; proteins such as Cas1 and Cas2 assist in acquiring new spacers. The second stage involves transcription of the CRISPR locus: pre-crRNAs (precursor CRISPR RNAs) are expressed by transcription of the CRISPR repeat-spacer array. Upon further processing, the pre-crRNA is converted into short crRNAs, each carrying a single spacer flanked by repeat sequence. The RNA maturation process is similar in types I and III but different in type II. The third stage involves binding of the Cas9 protein and directing it to cleave the DNA segment. The Cas9 protein binds to a combined form of crRNA and tracrRNA, forming an effector complex; this serves as the guide RNA for the Cas9 protein, directing its endonuclease activity.
RNA mutagenesis
One important method of gene regulation is RNA mutagenesis, which can be introduced through RNA editing with the assistance of gRNA. Guided RNA editing replaces adenosine with inosine at specific target sites, modifying the genetic code: adenosine deaminases acting on RNA carry out this post-transcriptional modification, altering codons and thereby protein function. Guide RNAs in this sense are small nucleolar RNAs that, along with riboproteins, perform intracellular RNA alterations such as ribose methylation in rRNA and the introduction of pseudouridine in preribosomal RNA. Guide RNAs bind to the antisense RNA sequence and regulate RNA modification. It has been observed that small interfering RNA (siRNA) and micro RNA (miRNA) are generally used as target RNA sequences, and modifications are comparatively easy to introduce due to their small size.
See also
CRISPR gene editing
CRISPR/Cas Tools
SiRNA
Gene knockout
Protospacer adjacent motif
References
Further reading
Guide RNA-directed uridine insertion RNA editing in vitro: http://www.jbc.org/content/272/7/4212.full
Genome editing
RNA | Guide RNA | [
"Engineering",
"Biology"
] | 2,747 | [
"Genetics techniques",
"Genetic engineering",
"Genome editing"
] |
4,767,789 | https://en.wikipedia.org/wiki/Superconductor%E2%80%93insulator%20transition | The superconductor–insulator transition is an example of a quantum phase transition, whereupon tuning some parameter in the Hamiltonian, a dramatic change in the behavior of the electrons occurs. The nature of how this transition occurs is disputed, and many studies seek to understand how the order parameter $\Psi = |\Psi|e^{i\phi}$ changes. Here $|\Psi|$ is the amplitude of the order parameter, and $\phi$ is the phase. Most theories involve either the destruction of the amplitude of the order parameter - by a reduction in the density of states at the Fermi surface - or the destruction of the phase coherence, which results from the proliferation of vortices.
Destruction of superconductivity
In two dimensions, the subject of superconductivity becomes very interesting because the existence of true long-range order is not possible. In the 1970s, J. Michael Kosterlitz and David J. Thouless (along with Vadim Berezinskii) showed that a different kind of long-range order could exist - topological order - which shows power-law correlations: the two-point correlation function decays algebraically.
This picture changes if disorder is included. Kosterlitz-Thouless behavior can be obtained, but the fluctuations of the order parameter are greatly enhanced, and the transition temperature is suppressed.
The model to keep in mind in the understanding of how superconductivity occurs in a two-dimensional disordered superconductor is the following. At high temperatures, the system is in the normal state. As the system is cooled towards its transition temperature $T_c$, superconducting grains begin to fluctuate in and out of existence. When one of these grains "pops" into existence, it is accelerated without dissipation for a time before decaying back into the normal state. This has the effect of increasing the conductivity even before the system has condensed into the superconducting state. This increased conductivity above $T_c$ is referred to as paraconductivity, or fluctuation conductivity, and was first correctly described by Lev G. Aslamazov and Anatoly Larkin. As the system is cooled further, the lifetime of these fluctuations increases and becomes comparable to the Ginzburg-Landau time
$\tau_{GL} = \dfrac{\pi\hbar}{8 k_B (T - T_c)}$.
Eventually, the amplitude of the order parameter becomes well defined (it is non-zero wherever there are superconducting patches), and it can begin to support phase fluctuations. These phase fluctuations set in at a lower temperature and are caused by vortices, which are topological defects in the order parameter. It is the motion of vortices that gives rise to a finite resistance below $T_c$. When the system is cooled further, below the Kosterlitz-Thouless temperature $T_{KT}$, all of the free vortices become bound into vortex-antivortex pairs, and the system attains a state with zero resistance.
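To get a feel for the scale of the Ginzburg-Landau time above, a quick numeric check; the constants are the CODATA values, and the 0.1 K offset from $T_c$ is an arbitrary illustration:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def tau_GL(delta_T):
    """Ginzburg-Landau fluctuation lifetime pi*hbar / (8 k_B (T - T_c))."""
    return math.pi * hbar / (8 * k_B * delta_T)

# 0.1 K above T_c the fluctuations live for tens of picoseconds:
print(f"{tau_GL(0.1):.1e} s")  # ~3.0e-11 s
```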
Finite magnetic field
Cooling the system below $T_{KT}$ and turning on a magnetic field has certain effects. For very small fields ($H < H_{c1}$) the magnetic field is shielded from the interior of the sample. Above $H_{c1}$, however, the energy cost to keep out the external field becomes too great, and the superconductor allows the field to penetrate in quantized fluxons. Now the superconductor has transitioned into the "mixed state", in which there is a superfluid along with vortices, which now all have the same sense of circulation.
Increasing the field adds vortices to the system. Eventually the density of vortices becomes so large that they overlap. The core of a vortex contains normal electrons (i.e. the amplitude of the superconducting order parameter is zero there), so when the vortices overlap, superconductivity is killed by destruction of the amplitude of the order parameter. Increasing the field further leads to a very interesting possibility - in two dimensions, where the fluctuations are enhanced - that the vortices may condense into a Bose condensate, which localizes the superconducting pairs.
See also
Metal–insulator transition
Anderson's theorem
References
Superconductivity
Quantum phases | Superconductor–insulator transition | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 824 | [
"Quantum phases",
"Matter",
"Physical quantities",
"Superconductivity",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
23,652,892 | https://en.wikipedia.org/wiki/Hitchin%20system | In mathematics, the Hitchin integrable system is an integrable system depending on the choice of a complex reductive group and a compact Riemann surface, introduced by Nigel Hitchin in 1987. It lies on the crossroads of algebraic geometry, the theory of Lie algebras and integrable system theory. It also plays an important role in the geometric Langlands correspondence over the field of complex numbers through conformal field theory.
A genus zero analogue of the Hitchin system, the Garnier system, was discovered by René Garnier somewhat earlier as a certain limit of the Schlesinger equations, and Garnier solved his system by defining spectral curves. (The Garnier system is the classical limit of the Gaudin model. In turn, the Schlesinger equations are the classical limit of the Knizhnik–Zamolodchikov equations).
Almost all integrable systems of classical mechanics can be obtained as particular cases of the Hitchin system or their common generalization defined by Bottacin and Markman in 1994.
Description
Using the language of algebraic geometry, the phase space of the system is a partial compactification of the cotangent bundle to the moduli space of stable G-bundles for some reductive group G, on some compact algebraic curve. This space is endowed with a canonical symplectic form. Suppose for simplicity that $G = \mathrm{GL}(n, \mathbb{C})$, the general linear group; then the Hamiltonians can be described as follows: the tangent space to the moduli space of G-bundles at the bundle $F$ is
$H^1(\mathrm{End}(F)),$
which by Serre duality is dual to
$H^0(\mathrm{End}(F) \otimes K),$
where $K$ is the canonical bundle, so a pair
$(F, \Phi), \qquad \Phi \in H^0(\mathrm{End}(F) \otimes K),$
called a Hitchin pair or Higgs bundle, defines a point in the cotangent bundle. Taking
$\operatorname{tr}(\Phi^i), \qquad i = 1, \ldots, n,$
one obtains elements in
$H^0(K^i),$
which is a vector space that does not depend on $(F, \Phi)$. So taking any basis in these vector spaces we obtain functions $H_i$, which are Hitchin's Hamiltonians. The construction for a general reductive group is similar and uses invariant polynomials on the Lie algebra of $G$.
For trivial reasons these functions are algebraically independent, and some calculations show that their number is exactly half of the dimension of the phase space. The nontrivial part is a proof of Poisson commutativity of these functions. They therefore define an integrable system in the symplectic or Arnol'd–Liouville sense.
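The claim that the number of Hamiltonians is exactly half the dimension of the phase space can be checked directly for $G = \mathrm{GL}(n, \mathbb{C})$ on a curve of genus $g \ge 2$; the following is a routine Riemann-Roch verification sketch:

```latex
% h^0(K) = g and, for i >= 2, h^0(K^i) = (2i-1)(g-1) by Riemann-Roch,
% since deg K^i = i(2g-2) > 2g-2 kills H^1. Hence the number of Hamiltonians is
\sum_{i=1}^{n} h^0(K^i) \;=\; g + \sum_{i=2}^{n} (2i-1)(g-1)
  \;=\; g + (n^2 - 1)(g-1) \;=\; n^2(g-1) + 1,
% which equals \dim \mathcal{M} for rank-n bundles, i.e. exactly half of
% \dim T^*\mathcal{M} = 2\,(n^2(g-1)+1).
```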
Hitchin fibration
The Hitchin fibration is the map from the moduli space of Hitchin pairs to characteristic polynomials, a higher-genus analogue of the map Garnier used to define the spectral curves. Ngô Bảo Châu used Hitchin fibrations over finite fields in his proof of the fundamental lemma.
To be more precise, the version of the Hitchin fibration used by Ngô has as its source the moduli stack of Hitchin pairs, instead of the moduli space. Let $\mathfrak{g}$ be the Lie algebra of the reductive algebraic group $G$. We have the adjoint action of $G$ on $\mathfrak{g}$. We can then take the stack quotient $[\mathfrak{g}/G]$ and the GIT quotient $\mathfrak{g}/\!\!/G$, and there is a natural morphism $[\mathfrak{g}/G] \to \mathfrak{g}/\!\!/G$. There is also the natural scaling action of the multiplicative group $\mathbb{G}_m$ on $\mathfrak{g}$, which descends to the stack and GIT quotients. Furthermore, the morphism is equivariant with respect to the $\mathbb{G}_m$-actions. Therefore, given any line bundle $L$ on our curve $X$, we can twist the morphism by the $\mathbb{G}_m$-torsor associated to $L$, and obtain a morphism of stacks over $X$. Finally, the moduli stack of $L$-twisted Higgs bundles is recovered as the stack of sections of the twisted quotient $[\mathfrak{g}_L/G]$ over $X$; the corresponding Hitchin base is recovered as the space of sections of the twisted GIT quotient, which is represented by a vector space; and the Hitchin morphism at the stack level is simply the morphism induced by the morphism above. Note that this definition is not relevant to semistability. To obtain the Hitchin fibration mentioned above, we need to take $L$ to be the canonical bundle, restrict to the semistable part of the moduli stack, and then take the induced morphism on the moduli space. To be even more precise, the version of the Hitchin fibration used by Ngô often has the restriction that $\deg L > 2g - 2$, so that $L$ cannot be the canonical bundle. This condition is added to guarantee that the topology of the Hitchin morphism is, in a precise sense, determined by its restriction to the smooth part; see the references for the vector bundle case.
See also
Yang–Mills equations
Higgs bundle
Nonabelian Hodge correspondence
Character variety
Hitchin's equations
References
Algebraic geometry
Dynamical systems
Hamiltonian mechanics
Integrable systems
Lie groups
Differential geometry | Hitchin system | [
"Physics",
"Mathematics"
] | 933 | [
"Lie groups",
"Mathematical structures",
"Integrable systems",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Fields of abstract algebra",
"Mechanics",
"Algebraic structures",
"Algebraic geometry",
"Dynamical systems"
] |
23,653,409 | https://en.wikipedia.org/wiki/C15H25NO3 | The molecular formula C15H25NO3 (molar mass: 267.36 g/mol, exact mass: 267.183444 u) may refer to:
Butaxamine
Desacetylmetipranolol
EEE (psychedelic)
Metoprolol
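The molar mass quoted above can be checked by summing standard atomic weights. A small sketch with a simplified formula parser (it handles element symbols followed by optional counts, which suffices for formulas like this one; no parentheses or hydrates):

```python
import re

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Sum atomic weights for a simple Hill-style formula (no parentheses)."""
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHT[symbol] * (int(count) if count else 1)
    return total

print(round(molar_mass("C15H25NO3"), 2))  # ~267.37, consistent with the value quoted above
```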
Molecular formulas | C15H25NO3 | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,653,447 | https://en.wikipedia.org/wiki/C17H20N2O | The molecular formula C17H20N2O (molar mass: 268.35 g/mol, exact mass: 268.1576 u) may refer to:
Centralite, or ethyl centralite
Michler's ketone
Remacemide
Molecular formulas | C17H20N2O | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,655,028 | https://en.wikipedia.org/wiki/Uzbekistan%20GTL | Uzbekistan GTL (Oltin Yo'l GTL) is a gas-to-liquids (GTL) plant in the Qashqadaryo Region, Uzbekistan.
History
Construction of a GTL plant in Uzbekistan was discussed between Uzbekneftegaz and Abu Dhabi's International Petroleum Investment Company in March 2008. However, in April 2009 Uzbekneftegaz signed a heads of agreement for the GTL project with Sasol and Petronas. On 15 July 2009, Sasol, Petronas, and Uzbekneftegaz signed an agreement to establish a joint venture for developing the GTL project. The detailed feasibility study was conducted by Technip. Technip also conducted the front end engineering design.
Work commenced after the 2016 Uzbekistan election.
Technical features
The plant will use Sasol's slurry phase distillate process. The annual capacity of the plant would be 1.3 million tonnes of petroleum products such as diesel, kerosene, naphtha and liquefied petroleum gas. The project is expected to cost US$5.6 billion. It is scheduled to be commissioned in 2021.
Ownership
The project is jointly developed by Sasol, Petronas, and Uzbekneftegaz. Each partner will have an equal share in the joint venture.
References
Natural gas plants
Petroleum production
Synthetic fuel facilities
Energy infrastructure in Uzbekistan
Natural gas in Uzbekistan
Petroleum in Uzbekistan
Proposed energy infrastructure in Uzbekistan
Petronas | Uzbekistan GTL | [
"Chemistry"
] | 300 | [
"Natural gas technology",
"Natural gas plants"
] |
23,655,263 | https://en.wikipedia.org/wiki/Superintegrable%20Hamiltonian%20system | In mathematics, a superintegrable Hamiltonian system is a Hamiltonian system on a $2n$-dimensional symplectic manifold $Z$ for which the following conditions hold:
(i) There exist $k > n$ independent integrals of motion $F_i$. Their level surfaces (invariant submanifolds) form a fibered manifold $F : Z \to N = F(Z)$ over a connected open subset $N \subset \mathbb{R}^k$.
(ii) There exist smooth real functions $s_{ij}$ on $N$ such that the Poisson bracket of integrals of motion reads
$\{F_i, F_j\} = s_{ij} \circ F$.
(iii) The matrix function $(s_{ij})$ is of constant corank $m = 2n - k$ on $N$.
If $k = n$, this is the case of a completely integrable Hamiltonian system. The Mishchenko-Fomenko theorem for superintegrable Hamiltonian systems generalizes the Liouville-Arnold theorem on action-angle coordinates of completely integrable Hamiltonian systems as follows.
Let the invariant submanifolds of a superintegrable Hamiltonian system be connected, compact and mutually diffeomorphic. Then the fibered manifold $F$ is a fiber bundle
in tori $T^m$. There exists an open neighbourhood $U$ of any fiber which is a trivial fiber bundle provided with the bundle (generalized action-angle) coordinates
$(I_A, p_i, q_i, \phi^A)$, $A = 1, \ldots, m$, $i = 1, \ldots, n - m$, such that $(\phi^A)$ are coordinates on $T^m$. These coordinates are the Darboux coordinates on the symplectic manifold $U$. A Hamiltonian of a superintegrable system depends only on the action variables $I_A$, which are the Casimir functions of the coinduced Poisson structure on $N$.
The Liouville-Arnold theorem for completely integrable systems and the Mishchenko-Fomenko theorem for superintegrable ones are generalized to the case of non-compact invariant submanifolds. Such submanifolds are diffeomorphic to a toroidal cylinder $T^{m-r} \times \mathbb{R}^r$.
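A concrete orientation is the textbook example of maximal superintegrability, the Kepler problem, sketched here with the counting made explicit:

```latex
% Kepler problem: Z = T^*(\mathbb{R}^3 \setminus \{0\}), so 2n = 6.
% Integrals: energy H, angular momentum \mathbf{L}, and the Laplace-Runge-Lenz
% vector \mathbf{A} (mass \mu, potential -\kappa/r). They satisfy the relations
\mathbf{L} \cdot \mathbf{A} = 0, \qquad
|\mathbf{A}|^2 = \mu^2 \kappa^2 + 2 \mu H \, |\mathbf{L}|^2,
% so only k = 5 integrals are independent; the corank is 2n - k = 6 - 5 = 1,
% and compact invariant submanifolds are 1-tori T^1: the closed Kepler orbits.
```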
See also
Integrable system
Action-angle coordinates
Nambu mechanics
Laplace–Runge–Lenz vector
Fradkin tensor
References
Mishchenko, A., Fomenko, A., Generalized Liouville method of integration of Hamiltonian systems, Funct. Anal. Appl. 12 (1978) 113.
Bolsinov, A., Jovanovic, B., Noncommutative integrability, moment map and geodesic flows, Ann. Global Anal. Geom. 23 (2003) 305.
Fasso, F., Superintegrable Hamiltonian systems: geometry and perturbations, Acta Appl. Math. 87 (2005) 93.
Fiorani, E., Sardanashvily, G., Global action-angle coordinates for completely integrable systems with non-compact invariant manifolds, J. Math. Phys. 48 (2007) 032901.
Miller, W., Jr, Post, S., Winternitz, P., Classical and quantum superintegrability with applications, J. Phys. A 46 (2013), no. 42, 423001.
Giachetta, G., Mangiarotti, L., Sardanashvily, G., Geometric Methods in Classical and Quantum Mechanics (World Scientific, Singapore, 2010).
Hamiltonian mechanics
Dynamical systems
Integrable systems | Superintegrable Hamiltonian system | [
"Physics",
"Mathematics"
] | 623 | [
"Integrable systems",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mechanics",
"Dynamical systems"
] |
23,655,997 | https://en.wikipedia.org/wiki/Septocellula | Septocellula is an extinct genus of fly in the family Dolichopodidae. It contains only one species, Septocellula asiatica, described from Eocene amber found near Fushun, China.
The genus Septocellula was first described in 1981 by You-chong Hong, who assigned to it three species: Septocellula asiatica, Septocellula fera and Septocellula trichopoda. In 2002, the latter two species were transferred by Hong to their own genera – Orbilabia and Wangia (later renamed Fushuniregis), respectively.
References
Prehistoric Diptera genera
Eocene insects
Monotypic prehistoric insect genera
Eocene animals of Asia
Prehistoric animals of China
Amber | Septocellula | [
"Physics"
] | 154 | [
"Amorphous solids",
"Unsolved problems in physics",
"Amber"
] |
23,657,481 | https://en.wikipedia.org/wiki/List%20of%20black%20holes | This list of black holes (and stars considered probable candidates) is organized by mass (including black holes of undetermined mass); some items in this list are galaxies or star clusters that are believed to be organized around a black hole. Messier and New General Catalogue designations are given where possible.
Supermassive black holes and candidates
1ES 2344+514
Ton 618 (this quasar has possibly the biggest black hole ever found, estimated at 66 billion solar masses; see the size sketch after this list)
3C 371
4C +37.11 (this radio galaxy is believed to have binary supermassive black holes)
AP Lib
S5 0014+81 (said to be a compact hyperluminous quasar, estimated at 40 billion solar masses)
APM 08279+5255 (contains one of the largest black holes, estimated at 10-23 billion solar masses; previous candidate for largest)
Arp 220
Centaurus A
Fornax A
HE0450-2958
IC 1459
Messier 31 (or the Andromeda Galaxy)
Messier 32
Messier 51 (or the Whirlpool Galaxy)
Messier 60
Messier 77
Messier 81 (or Bode's Galaxy)
Messier 84
Messier 87 (or Virgo A)
Messier 104 (or the Sombrero Galaxy)
Messier 105
Messier 106
M31* (the quiescent black hole at the center of the Andromeda Galaxy)
Mrk 421
Mrk 501
NGC 821
NGC 1023
NGC 1097
NGC 1271
NGC 1277
NGC 1332
NGC 1566
NGC 2787
NGC 3079
NGC 3115
NGC 3377
NGC 3384
NGC 3998
NGC 4151
NGC 4261
NGC 4438
NGC 4459
NGC 4473
NGC 4486B (a satellite galaxy of Messier 87)
NGC 4564
NGC 4579
NGC 4596
NGC 4697
NGC 4889
NGC 4945
NGC 5033
NGC 6251
NGC 7052
NGC 7314
PKS 0521-365
Q0906+6930 (a blazar organized around a supermassive black hole)
RX J1131 (first black hole whose spin was directly measured)
Sagittarius A*, which is in the center of the Milky Way
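To put the solar-mass figures in the list above into length units, the Schwarzschild radius $r_s = 2GM/c^2$ is the standard conversion. A quick sketch; the constants are standard CODATA/IAU values, the Sagittarius A* mass of roughly 4 million solar masses is the commonly cited estimate, and the Ton 618 figure is the 66-billion-solar-mass estimate quoted above:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def schwarzschild_radius_m(mass_solar):
    """r_s = 2GM/c^2 for a mass given in solar masses."""
    return 2 * G * mass_solar * M_sun / c**2

for name, m in [("Sun", 1), ("Sagittarius A*", 4e6), ("Ton 618", 6.6e10)]:
    r = schwarzschild_radius_m(m)
    print(f"{name}: r_s = {r:.2e} m = {r / AU:.2e} AU")
```

For Ton 618 this gives a horizon of roughly 1.3 thousand AU, which conveys why such objects are called supermassive.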
Types
Quasar
Supermassive black hole
Hypercompact stellar system (hypothetical object organized around a supermassive black hole)
Intermediate-mass black holes and candidates
Cigar Galaxy (Messier 82, NGC 3034)
GCIRS 13E
HLX-1
M82 X-1
Messier 15 (NGC 7078)
Messier 110 (NGC 205)
Sculptor Galaxy (NGC 253)
Triangulum Galaxy (Messier 33, NGC 598)
Stellar black holes and candidates
1E1740.7-2942 (Great Annihilator), 340 ly from Sgr A*
4U 1543-475/IL Lupi
A0620-00/V616 Mon (once thought to be the closest to Earth known, at about 3,000 light years)
CXOU J132527.6-430023 (a candidate stellar mass black hole outside of the Local Group)
Cygnus X-1
Cygnus X-3
GRO J0422+32 (possibly the smallest black hole yet discovered)
GRO J1655-40/V1033 Sco (at one time considered the smallest black hole known)
GRS 1124-683/GU Mus
GRS 1915+105/V1487 Aql
GS 2000+25/QZ Vul
GX 339-4/V821 Ara
IGR J17091-3624 (candidate smallest known stellar black hole)
LB-1 (name of both a galactic B-type star and a very closely associated over-massive stellar-mass black hole)
M33 X-7 (stellar black hole with the most massive stellar companion, located in the Triangulum Galaxy)
MOA-2011-BLG-191/OGLE-2011-BLG-0462 (first known isolated stellar black hole)
SN 1997D (in NGC 1536)
SS 433
V404 Cyg
V Puppis
XTE J1118+480/KV UMa
XTE J1550-564/V381 Nor
XTE J1650-500 (at one time considered the smallest black hole known)
XTE J1819-254/V4641 Sgr
LMC X-1 (first X-ray source in the Large Magellanic Cloud)
Black holes detected by gravitational wave signals
As of late 2018, ten mergers of binary black holes had been observed. In each case two black holes merged into a larger black hole. In addition, one neutron star merger has been observed (GW170817), forming a black hole. Since April 2019, over 30 alerts of black hole merger candidates have been issued.
GW 150914
Multiple black hole systems
Binary black holes
EGSD2 J142033.66 525917.5 core black holes — galaxy hosting a dual AGN
OJ 287 core black holes — a BL Lac object with a candidate binary supermassive black hole core system
PG 1302-102 – the first binary-cored quasar — a pair of supermassive black holes at the core of this quasar
SDSS J120136.02+300305.5 core black holes — a pair of supermassive black holes at the centre of this galaxy
In addition, the signal of several binary black holes merging into a single black hole and in so doing producing gravitational waves have been observed by the LIGO instrument. These are listed above in the section Black holes detected by gravitational wave signals.
Trinary black holes
As of 2014, five triple black hole systems were known.
SDSS J150243.09+111557.3 (SDSS J1502+1115) core black holes — the three components are distant tertiary J1502P, and the close binary pair J1502S composed of J1502SE and J1502SW
GOODS J123652.77+621354.7 core black holes of triple-clump galaxy
2MASX J10270057+1749001 (SDSS J1027+1749) core black holes
See also
Black hole
Lists of black holes
List of nearest black holes
Supermassive black hole
Intermediate-mass black hole
Stellar black hole
Micro black hole
Lists of astronomical objects
References
External links
NASA's general description of black holes.
A list of black hole stars and candidates compiled by Dr. William Robert Johnston, Ph.D (Physics), a post-doctoral researcher at the University of Texas (Dallas).
Theory of relativity | List of black holes | [
"Physics",
"Astronomy"
] | 1,410 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
23,658,163 | https://en.wikipedia.org/wiki/Instream%20use | Instream use refers to water use taking place within a stream channel. Examples are hydroelectric power generation, navigation, fish propagation and use, and recreational activities. Some instream uses, usually associated with fish populations and navigation, require a minimum amount of water to be viable.
The term is often used in discussions concerning water resources allocation and/or water rights.
See also
Water law
International trade and water
References
Hydrology
Water resources management | Instream use | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 87 | [
"Hydrology",
"Environmental engineering"
] |
23,658,272 | https://en.wikipedia.org/wiki/Flight%20control%20modes | A flight control mode or flight control law is a computer software algorithm that transforms the movement of the yoke or joystick, made by an aircraft pilot, into movements of the aircraft control surfaces. The control surface movements depend on which of several modes the flight computer is in. In aircraft in which the flight control system is fly-by-wire, the movements the pilot makes to the yoke or joystick in the cockpit, to control the flight, are converted to electronic signals, which are transmitted to the flight control computers that determine how to move each control surface to provide the aircraft movement the pilot ordered.
A reduction of electronic flight control can be caused by the failure of a computational device, such as the flight control computer or an information providing device, such as the Air Data Inertial Reference Unit (ADIRU).
Electronic flight control systems (EFCS) also provide augmentation in normal flight, such as increased protection of the aircraft from overstress or providing a more comfortable flight for passengers by recognizing and correcting for turbulence and providing yaw damping.
Two aircraft manufacturers produce commercial passenger aircraft with primary flight computers that can perform under different flight control modes. The most well-known is the system of normal, alternate, direct laws and mechanical alternate control laws of the Airbus A320-A380. The other is Boeing's fly-by-wire system, used in the Boeing 777, Boeing 787 Dreamliner and Boeing 747-8.
These newer aircraft use electronic control systems to increase safety and performance while saving aircraft weight. These electronic systems are lighter than the old mechanical systems and can also protect the aircraft from overstress situations, allowing designers to reduce over-engineered components, which further reduces the aircraft's weight.
Flight control laws (Airbus)
Airbus aircraft designs after the A300/A310 are almost completely controlled by fly-by-wire equipment. These newer aircraft, including the A320, A330, A340, A350 and A380 operate under Airbus flight control laws. The flight controls on the Airbus A330, for example, are all electronically controlled and hydraulically activated. Some surfaces, such as the rudder, can also be mechanically controlled. In normal flight, the computers act to prevent excessive forces in pitch and roll.
The aircraft is controlled by three primary control computers (captain's, first officer's, and standby) and two secondary control computers (captain's and first officer's). In addition there are two flight control data computers (FCDC) that read information from the sensors, such as air data (airspeed, altitude). This is fed along with GPS data, into three redundant processing units known as air data inertial reference units (ADIRUs) that act both as an air data reference and inertial reference. ADIRUs are part of the air data inertial reference system, which, on the Airbus is linked to eight air data modules: three are linked to pitot tubes and five are linked to static sources. Information from the ADIRU is fed into one of several flight control computers (primary and secondary flight control). The computers also receive information from the control surfaces of the aircraft and from the pilot's aircraft control devices and autopilot. Information from these computers is sent both to the pilot's primary flight display and also to the control surfaces.
There are four named flight control laws, plus a back-up mechanical control; alternate law itself consists of two modes, alternate law 1 and alternate law 2. Each of these laws has different sub-modes: ground mode, flight mode and flare.
Normal law
Normal law differs depending on the stage of flight. These include:
Stationary at the gate
Taxiing from the gate to a runway or from a runway back to the gate
Beginning the take-off roll
Initial climb
Cruise climb and cruise flight at altitude
Final descent, flare and landing.
In normal law there is a five-second transition from take-off to cruise, a two-second transition from descent to flare, and another two-second transition from flare to ground.
Ground mode
The aircraft behaves as in direct mode: the autotrim feature is turned off and there is a direct response of the elevators to the sidestick inputs. The horizontal stabilizer is set to 4° up but manual settings (e.g. for center of gravity) override this setting. After the wheels leave the ground, a 5-second transition occurs where normal law – flight mode takes over from ground mode.
Flight mode
The flight mode of normal law provides five types of protection: pitch attitude, load factor limitations, high speed, high-AOA and bank angle. Flight mode is operational from take-off, until shortly before the aircraft lands, around 100 feet above ground level. It can be lost prematurely as a result of pilot commands or system failures. Loss of normal law as a result of a system failure results in alternate law 1 or 2.
Unlike conventional controls, in normal law vertical side stick movement corresponds to a load factor proportional to stick deflection independent of aircraft speed. When the stick is neutral and the load factor is 1g, the aircraft remains in level flight without the pilot changing the elevator trim. Horizontal side stick movement commands a roll rate, and the aircraft maintains a proper pitch angle once a turn has been established, up to 33° bank. The system prevents further trim up when the angle of attack is excessive, the load factor exceeds 1.3g, or when the bank angle exceeds 33°.
Alpha protection (α-Prot) prevents stalling and guards against the effects of windshear. The protection engages when the angle of attack is between α-Prot and α-Max and limits the angle of attack commanded by the pilot's sidestick or, if autopilot is engaged, it disengages the autopilot.
High speed protection automatically recovers from an overspeed. There are two speed limitations for high-altitude aircraft: VMO (maximum operational velocity) and MMO (maximum operational Mach). The two limits coincide at approximately 31,000 feet; below that altitude overspeed is determined by VMO, above it by MMO.
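Both protections can be pictured as clamps on the commanded values. A minimal sketch in Python (all threshold values and function names are illustrative placeholders, not actual Airbus parameters):

```python
# Hypothetical illustration of Airbus-style envelope protections.
# All threshold values are placeholders, not real A330 figures.

ALPHA_PROT = 12.0      # deg: angle of attack at which protection engages
ALPHA_MAX = 15.0       # deg: hard angle-of-attack limit
VMO = 330.0            # knots: maximum operational velocity (low altitude)
MMO = 0.86             # maximum operational Mach (high altitude)
CROSSOVER_FT = 31_000  # altitude where the VMO and MMO limits coincide

def clamp_alpha(alpha_commanded: float) -> float:
    """Above ALPHA_PROT the sidestick commands alpha directly, capped at ALPHA_MAX."""
    return min(alpha_commanded, ALPHA_MAX)

def overspeed_limit(altitude_ft: float) -> tuple[str, float]:
    """Below the crossover altitude overspeed is judged against VMO, above it against MMO."""
    return ("VMO", VMO) if altitude_ft < CROSSOVER_FT else ("MMO", MMO)

print(clamp_alpha(18.0))        # -> 15.0 (alpha-max protection active)
print(overspeed_limit(35_000))  # -> ('MMO', 0.86)
```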
Flare mode
This mode is automatically engaged when the radar altimeter indicates 100 feet above ground. At 50 feet the aircraft trims the nose slightly down. During the landing flare, normal law provides high-angle-of-attack protection and bank angle protection. The load factor is permitted to range from 2.5g to −1g, or from 2.0g to 0g when slats are extended. Pitch attitude is limited to between −15° and +30°, and the upper limit is further reduced to +25° as the aircraft slows.
Alternate law
There are four reconfiguration modes for the Airbus fly-by-wire aircraft: alternate law 1, alternate law 2, direct law and mechanical law. The ground mode and flare modes for alternate law are identical to those modes for normal law.
Alternate law 1 (ALT1) mode combines a normal law lateral mode with the load factor and bank angle protections retained. High angle of attack protection may be lost, and low energy (level-flight stall) protection is lost. High speed and high angle of attack protections revert to their alternate law forms.
ALT1 may be entered if there are faults in the horizontal stabilizer, an elevator, yaw-damper actuation, slat or flap sensor, or a single air data reference fault.
Alternate law 2 (ALT2) loses normal law lateral mode (replaced by roll direct mode and yaw alternate mode) along with pitch attitude protection, bank angle protection and low energy protection. Load factor protection is retained. High angle of attack and high speed protections are retained unless the reason for alternate law 2 mode is the failure of two air-data references or if the two remaining air data references disagree.
ALT2 mode is entered when two engines flame out (on twin-engine aircraft) or on faults in two inertial or air-data references; the autopilot is lost, except in the case of an ADR disagreement. This mode may also be entered with an all-spoilers fault, certain aileron faults, or a pedal transducer fault.
Direct law
Direct law (DIR) introduces a direct stick-to-control surfaces relationship: control surface motion is directly related to the sidestick and rudder pedal motion. The trimmable horizontal stabilizer can only be controlled by the manual trim wheel. All protections are lost, and the maximum deflection of the elevators is limited for each configuration as a function of the current aircraft centre of gravity. This aims to create a compromise between adequate pitch control with a forward center of gravity and not-too-sensitive control with an aft center of gravity.
DIR is entered if there is failure of three inertial reference units or the primary flight computers, faults in two elevators, or flame-out in two engines (on a two-engine aircraft) when the captain's primary flight computer is also inoperable.
Mechanical control
In the mechanical control back-up mode, pitch is controlled by the mechanical trim system and lateral direction is controlled by the rudder pedals operating the rudder mechanically.
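Taken together, the laws form a degradation ladder from normal law down to the mechanical back-up. The following toy state machine is a sketch of that reconfiguration idea; the fault names and the fault-to-law mapping are simplifications distilled from the text above, not an Airbus specification:

```python
# Toy reconfiguration ladder for Airbus-style control laws.
# Fault names and mapping are simplified from the prose above,
# not an actual Airbus specification.

LAWS = ["normal", "alternate 1", "alternate 2", "direct", "mechanical"]

FAULT_TO_LAW = {
    "single air data reference fault": "alternate 1",
    "slat or flap sensor fault": "alternate 1",
    "dual air data reference fault": "alternate 2",
    "dual engine flame-out": "alternate 2",
    "triple inertial reference failure": "direct",
    "dual elevator fault": "direct",
    "all flight control computers lost": "mechanical",
}

def reconfigure(active_faults: list[str]) -> str:
    """Degrade to the most severe law demanded by any active fault."""
    demanded = [FAULT_TO_LAW.get(f, "normal") for f in active_faults]
    return max(demanded, key=LAWS.index, default="normal")

print(reconfigure(["slat or flap sensor fault"]))                     # alternate 1
print(reconfigure(["dual engine flame-out", "dual elevator fault"]))  # direct
```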
Boeing 777 primary flight control system
The fly-by-wire electronic flight control system of the Boeing 777 differs from the Airbus EFCS. The design principle is to provide a system that responds similarly to a mechanically controlled system. Because the system is controlled electronically, the flight control system can provide flight envelope protection.
The electronic system is subdivided between two levels, the four actuator control electronics (ACE) and the three primary flight computers (PFC). The ACEs control actuators (from those on pilot controls to control surface controls and the PFC). The role of the PFC is to calculate the control laws and provide feedback forces, pilot information and warnings.
Standard protections and augmentations
The flight control system on the 777 is designed to restrict control authority beyond a certain range by increasing the back pressure once the desired limit is reached. This is done via electronically controlled backdrive actuators (controlled by the ACEs). The protections and augmentations are: bank angle protection, turn compensation, stall protection, over-speed protection, pitch control, stability augmentation and thrust asymmetry compensation. The design philosophy is: "to inform the pilot that the command being given would put the aircraft outside of its normal operating envelope, but the ability to do so is not precluded."
Normal mode
In normal mode the PFCs transmit actuator commands to the ACEs, which convert them into analog servo commands. Full functionality is provided, including all enhanced performance, envelope protection and ride quality features.
Secondary mode
Boeing secondary mode is comparable to the Airbus alternate law, with the PFCs supplying commands to the ACEs. However, EFCS functionality is reduced, including loss of flight envelope protection. Like the Airbus system, this state is entered when a number of failures occur in the EFCS or interfacing systems (e.g. ADIRU or SAARU). Moreover, in case of a complete failure of all PFCs and ACEs, the ailerons and selected roll spoilers are connected to the pilot controls by control cable, permitting mechanical control on a temporary basis.
See also
Index of aviation articles
Dual control (aviation)
References
Aerospace engineering
Aircraft instruments
Flight control systems
Technology systems | Flight control modes | [
"Technology",
"Engineering"
] | 2,331 | [
"Systems engineering",
"Technology systems",
"Measuring instruments",
"Aircraft instruments",
"nan",
"Aerospace engineering"
] |
23,659,948 | https://en.wikipedia.org/wiki/Damping%20torque | Damping torque is provided by indicating instrument. Damper is a generic term used to identify any mechanism used for vibration energy absorption, the shaft vibration suppression, soft start and overload protection device. In order to design an efficient damper, it is imperative that the damping torque is calculated first. Damping torque or damping forces is the speed deviation of an electromechanical torque deviations of a machine while the angle deviation is called synchronizing torque [1]. In a measuring instrument, the damping torque is necessary to bring the moving system to rest to indicate steady reflection in a reasonable short time. It exists only as long as the pointer is in motion. Under the absence of damping torque the pointer oscillates for a short period of time and comes to steady position and this situation is called under damping. If the damping force is too large, then the pointer will come to rest slowly and this is called as over damping. Damping torque is a physical process of controlling a system's movement through producing motion that opposes the natural oscillation of a system. Similar to friction, it only acts when a system is in motion, and is not present if the system is at rest. Its primary purpose is to enable fast and accurate readings for an oscillating system. Instead of allowing an object to oscillate at its fundamental frequency forever, damping torque applies a counteractive force that slows the oscillation enough for a reading to be made. Although damping torque is used in many measurement devices, it is not something that has a set value, but instead is adjusted based on a pointer that is graphed on a deflection torque vs. time graph. Damping torque is an integral part in the measurement of moving systems because of its ability to control oscillation.
Production
There are four different ways of producing damping torque, these include air friction damping, fluid friction damping, eddy current damping, and electromagnetic damping.
Air friction damping is created by a piston oscillating in and out of an air chamber. When the piston enters the chamber it causes compression, when it exits the chamber there is a force acting back against it. This method is often used in the presence of a relatively weak electrical field, as air friction damping does not involve the use of any electric components that could distort the electrical field.
Fluid friction damping is created through the oscillation of a disk in and out of liquid, normally oil, thus causing it to always oppose motion. This method is very similar to air friction damping, except rather than having air in a chamber, it is replaced with fluid. This method is hindered by the fact that it can only be done vertically, as it requires the liquid to be in an upright position.
Eddy current damping uses eddy currents induced in a conductor moving through a magnetic field to create an electromagnetic torque that opposes motion. In this method the damping torque produced is proportional to the strength of the current and the magnetic field. This method is very efficient, but it has the downside of distorting a weak electrical field.

Electromagnetic damping is created by sending an electric current through a magnetic coil, causing a torque that goes against the natural movement of the coil. It has a similar disadvantage to eddy current damping in that it can distort the electrical field.
Uses
Damping torque is used to enable fast and accurate reading of an object that undergoes oscillation. Due to inertia, an object in motion tends to stay in motion, thus requiring a counteractive force to bring it to its final rate of oscillation in a short period of time. Damping torque does this by opposing the natural oscillation, enabling the user to get an accurate reading. It is used in most experiments that involve gathering data from a system in motion, as one of the few practical ways to obtain accurate data. It also has several different methods of production, as outlined above, allowing it to be used in many designs where a counteractive force is required. As noted above, however, certain methods of producing damping torque are only applicable if the system meets the corresponding requirements.
Measurement
Damping torque is not assigned a numerical value while in use; rather, it is tested and observed using a pointer in an experiment. The pointer of a device is the part that shows the damping torque on a deflection torque versus time graph. This is done by taking into account both the deflection and the controlling torque in order to apply the correct amount of damping torque. Deflection torque is what causes the pointer on the machine to oscillate, and the controlling torque is a counteractive force that stops the pointer from oscillating uncontrollably. Deflection torque and controlling torque work much like a scale: the deflection torque is the weight pressed on the scale and the controlling torque is the counterweight used to balance it. In order to get good results it is very important that these two forces equal one another.
Deflection and Controlling Torque Production
Deflection and controlling torque, like damping torque, are not explicitly measured, but can be created and thus controlled in different ways. By creating these two torques the pointer will move in a specific way that can be analyzed as shown below. Deflection torque can be any type of force that initially puts the system in motion. Controlling torque, on the other hand, is generated by a measuring device and thus is not a naturally occurring motion. There are two ways of producing a controlling torque, spring control and gravity control:
Spring control is created through the use of a control spring that is connected to the pointer of the system. When the system moves the spring is twisted in the opposite direction, thus creating a torque that directly counteracts the deflection torque.
Gravity control is created by attaching small weights to a moving system, generating a torque based on the angle of deflection, which is the angle the back and forward tangents make with one another. This method is hindered by the fact that it requires the system to be vertical so that the weights can be acted on by gravity.
When analyzing the deflection and controlling torque there are three main categories: under damped, over damped, and critically damped. If a system is under damped, the pointer overshoots and oscillates about its final position for a long time before settling. If it is over damped, the pointer approaches its final position too slowly to give a timely reading. Finally, if it is critically damped, the deflection and controlling torques are balanced by just the right amount of damping, allowing the pointer to find the correct value quickly without oscillating past it. Critically damped means the instrument has the right amount of damping torque and is ready to be used for experiments.
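In the standard second-order model of a pointer instrument (an assumption here; the article itself gives no equations), the moving system obeys J·θ″ + c·θ′ + k·θ = T_d, and the damping ratio ζ = c / (2√(Jk)) decides the category: ζ < 1 under damped, ζ = 1 critically damped, ζ > 1 over damped. A minimal sketch with illustrative parameter values:

```python
import math

def damping_category(J: float, c: float, k: float) -> str:
    """Classify a J*theta'' + c*theta' + k*theta = 0 pointer model by damping ratio."""
    zeta = c / (2 * math.sqrt(J * k))
    if zeta < 1:
        return f"under damped (zeta = {zeta:.2f}): oscillates before settling"
    if zeta > 1:
        return f"over damped (zeta = {zeta:.2f}): creeps slowly to rest"
    return "critically damped (zeta = 1.00): fastest settling without overshoot"

# Illustrative values, not from any particular instrument:
print(damping_category(J=1.0e-6, c=5.0e-4, k=1.0e-1))  # -> under damped (zeta = 0.79)
```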
References
Power Engineering Society General Meeting, 2006. IEEE, 10.1109/PES.2006.1709001
External links
Classical mechanics
Torsional vibration
Mechanical vibrations | Damping torque | [
"Physics",
"Engineering"
] | 1,475 | [
"Structural engineering",
"Mechanics",
"Classical mechanics",
"Mechanical vibrations"
] |
8,210,422 | https://en.wikipedia.org/wiki/Stereo%20cameras | The stereo cameras approach is a method of distilling a noisy video signal into a coherent data set that a computer can begin to process into actionable symbolic objects, or abstractions. Stereo cameras is one of many approaches used in the broader fields of computer vision and machine vision.
Calculation
In this approach, two cameras with a known physical relationship (i.e. a common field of view the cameras can see, and how far apart their focal points sit in physical space) are correlated via software. By finding mappings of common pixel values, and calculating how far apart these common areas reside in pixel space, a rough depth map can be created. This is very similar to how the human brain uses stereoscopic information from the eyes to gain depth cue information, i.e. how far apart any given object in the scene is from the viewer.
The camera attributes must be known (focal length, distance apart, etc.) and a calibration done. Once this is completed, the system can be used to sense the distances of objects by triangulation. Finding the same singular physical point in the left and right images is known as the correspondence problem. Correctly locating the point gives the computer the capability to calculate the distance of the robot or camera from the object. On the BH2 Lunar Rover the cameras use five steps: a Bayer array filter, a photometric-consistency dense matching algorithm, a Laplacian of Gaussian (LoG) edge detection algorithm, a stereo matching algorithm and finally a uniqueness constraint.
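For a calibrated, rectified pair, the triangulation reduces to the textbook relation Z = f·B/d, with focal length f (in pixels), baseline B between the two focal points, and pixel disparity d (this relation is standard stereo geometry, not stated in the article). A minimal sketch with illustrative rig numbers, not data from the BH2 rover:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair (f in pixels, B in metres)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 700 px focal length, 12 cm baseline, 20 px disparity.
print(depth_from_disparity(700.0, 0.12, 20.0))  # -> 4.2 metres
```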
Uses
This type of stereoscopic image processing technique is used in applications such as 3D reconstruction, robotic control and sensing, crowd dynamics monitoring and off-planet terrestrial rovers; for example, in mobile robot navigation, tracking, gesture recognition, targeting, 3D surface visualization, immersive and interactive gaming. Although the Xbox Kinect sensor is also able to create a depth map of an image, it uses an infrared camera for this purpose, and does not use the dual-camera technique.
Other approaches to stereoscopic sensing include time of flight sensors and ultrasound.
See also
Stereo camera (about a camera with two separated views)
References
Computer vision
Geometry in computer vision
Robotic sensing
Robot control
tr:Stereo kameralar | Stereo cameras | [
"Mathematics",
"Engineering"
] | 448 | [
"Robotics engineering",
"Packaging machinery",
"Robot control",
"Geometry",
"Artificial intelligence engineering",
"Geometry in computer vision",
"Computer vision"
] |
8,211,999 | https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Ganea%20conjecture | The Eilenberg–Ganea conjecture is a claim in algebraic topology. It was formulated by Samuel Eilenberg and Tudor Ganea in 1957, in a short, but influential paper. It states that if a group G has cohomological dimension 2, then it has a 2-dimensional Eilenberg–MacLane space . For n different from 2, a group G of cohomological dimension n has an n-dimensional Eilenberg–MacLane space. It is also known that a group of cohomological dimension 2 has a 3-dimensional Eilenberg−MacLane space.
In 1997, Mladen Bestvina and Noel Brady constructed a group G so that either G is a counterexample to the Eilenberg–Ganea conjecture, or there must be a counterexample to the Whitehead conjecture; in other words, it is not possible for both conjectures to be true.
References
Conjectures
Theorems in algebraic topology
Unsolved problems in mathematics | Eilenberg–Ganea conjecture | [
"Mathematics"
] | 201 | [
"Unsolved problems in mathematics",
"Theorems in topology",
"Topology stubs",
"Conjectures",
"Topology",
"Mathematical problems",
"Theorems in algebraic topology"
] |
8,212,459 | https://en.wikipedia.org/wiki/Muramyl%20dipeptide | Muramyl dipeptide is a component of bacterial peptidoglycan, a recognition structure or activator for nucleotide-binding oligomerization domain 2 (NOD2) protein. It is a constituent of both Gram-positive and Gram-negative bacteria composed of N-acetylmuramic acid linked by its lactic acid moiety to the N-terminus of an L-alanine D-isoglutamine dipeptide. It can be recognized by the immune system as a pathogen-associated molecular pattern and activate the NALP3 inflammasome which, in turn, leads to cytokine activation, IL-1α and IL-1β especially.
Human NOD2 protein of the nucleotide-binding leucine-rich repeat family, is a cytoplasmic receptor involved in host innate immune system defense. Mutations in the CARD15 gene encoding NOD2 protein have been observed in Crohn's disease patients, decreasing the immune systems of these patients ability to recognize muramyl dipeptide. Analogues of muramyl dipeptide and their potential for immune response therapies in cancer and disease are being investigated. Experiments published in 2008 showed that muramyl dipeptide is involved in a molecular pathway in mice that conferred protection from colitis.
See also
Taxol
Dipeptide
Mifamurtide, a synthetic analogue for the treatment of osteosarcoma
References
Peptides
Amino sugars | Muramyl dipeptide | [
"Chemistry"
] | 303 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Amino sugars",
"Molecular biology",
"Peptides"
] |
8,213,823 | https://en.wikipedia.org/wiki/Trabectedin | Trabectedin, sold under the brand name Yondelis, is an antitumor chemotherapy medication for the treatment of advanced soft-tissue sarcoma and ovarian cancer.
The most common adverse reactions include nausea, fatigue, vomiting, constipation, decreased appetite, diarrhea, peripheral edema, dyspnea, and headache.
It is sold by Pharma Mar S.A. and Johnson and Johnson. It is approved for use in the European Union, Russia, South Korea and the United States. The European Commission and the U.S. Food and Drug Administration (FDA) granted orphan drug status to trabectedin for soft-tissue sarcomas and ovarian cancer.
Discovery and production
During the 1950s and 1960s, the National Cancer Institute carried out a wide-ranging program of screening plant and marine organism material. As part of that program, extract from the sea squirt Ecteinascidia turbinata was found to have anticancer activity in 1969.
Separation and characterization of the active molecules had to wait many years for the development of sufficiently sensitive techniques, and the structure of one of them, Ecteinascidin 743, was determined by KL Rinehart at the University of Illinois in 1984. Rinehart had collected his sea squirts by scuba diving in the reefs of the West Indies. The biosynthetic pathway responsible for producing the drug has been determined to come from Candidatus Endoecteinascidia frumentensis, a microbial symbiont of the tunicate.
The Spanish company PharmaMar licensed the compound from the University of Illinois before 1994 and attempted to farm the sea squirt, with limited success. Yields from the sea squirt are extremely low: around 1,000 kilograms of animals are needed to isolate 1 gram of trabectedin, and about 5 grams were believed to be needed for a clinical trial, so Rinehart asked the Harvard chemist E. J. Corey to search for a synthetic method of preparation. His group developed such a method and published it in 1996. This was later followed by a simpler and more tractable method which was patented by Harvard and subsequently licensed to PharmaMar. The current supply is based on a semisynthetic process developed by PharmaMar starting from safracin B, a chemical obtained by fermentation of the bacterium Pseudomonas fluorescens. PharmaMar entered into an agreement with Johnson & Johnson to market the compound outside Europe.
Approvals and indications
Trabectedin was first trialed in humans in 1996.
Soft tissue sarcoma
In 2007, the European Commission gave authorization for the marketing of trabectedin, under the trade name Yondelis, "for the treatment of patients with advanced soft tissue sarcoma, after failure of anthracyclines and ifosfamide, or who are unsuited to receive these agents". The European Medicines Agency's evaluating committee, the Committee for Medicinal Products for Human Use (CHMP), observed that trabectedin had not been evaluated in an adequately designed and analyzed randomized controlled trial against current best care, and that the clinical efficacy data were mainly based on patients with liposarcoma and leiomyosarcoma. However, the pivotal study did show a significant difference between two different trabectedin treatment regimens, and due to the rarity of the disease, the CHMP considered that marketing authorization could be granted under exceptional circumstances. As part of the approval PharmaMar agreed to conduct a further trial to identify whether any specific chromosomal translocations could be used to predict responsiveness to trabectedin.
Trabectedin is also approved in South Korea and Russia.
In 2015, (after a phase III study comparing trabectedin with dacarbazine), the US FDA approved trabectedin (Yondelis) for the treatment of liposarcoma and leiomyosarcoma that is either unresectable or has metastasized. Patients must have received prior chemotherapy with an anthracycline.
Ovarian cancer and other
In 2008, the submission was announced of a registration dossier to the European Medicines Agency and the FDA for Yondelis when administered in combination with pegylated liposomal doxorubicin (Doxil, Caelyx) for the treatment of women with relapsed ovarian cancer. In 2011, Johnson & Johnson voluntarily withdrew the submission in the United States following a request by the FDA for an additional phase III study to be done in support of the submission.
Trabectedin is also in phase II trials for prostate, breast, and paediatric cancers.
Structure
Trabectedin is composed of three tetrahydroisoquinoline moieties, eight rings including one 10-membered heterocyclic ring containing a cysteine residue, and seven chiral centers.
Biosynthesis
The biosynthesis of trabectedin in the tunicate symbiotic bacteria Candidatus Endoecteinascidia frumentensis starts with a fatty acid loading onto the acyl-ligase domain of the EtuA3 module. A cysteine and glycine are then loaded as canonical NRPS amino acids. A tyrosine residue is modified by the enzymes EtuH, EtuM1, and EtuM2 to add a hydroxyl at the meta position of the phenol, and adding two methyl groups at the para-hydroxyl and the meta carbon position. This modified tyrosine reacts with the original substrate via a Pictet-Spengler reaction, where the amine group is converted to an imine by deprotonation, then attacks the free aldehyde to form a carbocation that is quenched by electrons from the methyl-phenol ring. This is done in the EtuA2 T-domain. This reaction is done a second time to yield a dimer of modified tyrosine residues that have been further cyclized via Pictet-Spengler reaction, yielding a bicyclic ring moiety. The EtuO and EtuF3 enzymes continue to post-translationally modify the molecule, adding several functional groups and making a sulfide bridge between the original cysteine residue and the beta-carbon of the first tyrosine to form ET-583, ET-597, ET-596, and ET-594 which have been previously isolated. A third O-methylated tyrosine is added and cyclized via Pictet-Spengler to yield the final product.
Total synthesis
The total synthesis by E.J. Corey used this proposed biosynthesis to guide the synthetic strategy. The synthesis uses such reactions as the Mannich reaction, the Pictet-Spengler reaction, the Curtius rearrangement, and a chiral rhodium-based diphosphine-catalyzed enantioselective hydrogenation. A separate synthetic process also involved the Ugi reaction to assist in the formation of the pentacyclic core. The use of such a one-pot multicomponent reaction in the synthesis of so complex a molecule was unprecedented.
Mechanism of action
Recently, it has been shown that trabectedin blocks DNA binding of the oncogenic transcription factor FUS-CHOP and reverses the transcriptional program in myxoid liposarcoma. By reversing the genetic program created by this transcription factor, trabectedin promotes differentiation and reverses the oncogenic phenotype in these cells.
Other than transcriptional interference, the mechanism of action of trabectedin is complex and not completely understood. The compound is known to bind and alkylate DNA at the N2 position of guanine. It is known from in vitro work that this binding occurs in the minor groove, spans approximately three to five base pairs and is most efficient with CGG sequences. Additional favorable binding sequences are TGG, AGC, or GGC. Once bound, this reversible covalent adduct bends DNA toward the major groove, interferes directly with activated transcription, poisons the transcription-coupled nucleotide excision repair complex, promotes degradation of RNA polymerase II, and generates DNA double-strand breaks.
In 2024, researchers from ETH Zürich and UNIST determined that abortive transcription-coupled nucleotide excision repair of trabectedin-DNA adducts forms persistent single-strand breaks (SSBs) as the adducts block the second of the two sequential NER incisions. The researchers mapped the 3’-hydroxyl groups of SSBs originating from the first NER incision at trabectedin lesions, recording TC-NER on a genome-wide scale, which resulted in a TC-NER-profiling assay TRABI-Seq.
Society and culture
Legal status
In September 2020, the European Medicines Agency recommended that the use of trabectedin in treating ovarian cancer remain unchanged.
References
2,5-Dimethoxyphenethylamines
Acetate esters
Antineoplastic drugs
Benzodioxoles
Drugs developed by Johnson & Johnson
Orphan drugs
Hydroxyarenes
Total synthesis
Phenethylamine alkaloids
Bacterial alkaloids | Trabectedin | [
"Chemistry"
] | 1,934 | [
"Total synthesis",
"Alkaloids by chemical classification",
"Phenethylamine alkaloids",
"Chemical synthesis"
] |
8,218,773 | https://en.wikipedia.org/wiki/Brinkman%20number | The Brinkman number (Br) is a dimensionless number related to heat conduction from a wall to a flowing viscous fluid, commonly used in polymer processing. It is named after the Dutch mathematician and physicist Henri Brinkman. There are several definitions; one is
where
μ is the dynamic viscosity;
u is the flow velocity;
κ is the thermal conductivity;
T0 is the bulk fluid temperature;
Tw is the wall temperature;
Pr is the Prandtl number
Ec is the Eckert number
It is the ratio of heat produced by viscous dissipation to heat transported by molecular conduction, i.e., the ratio of viscous heat generation to external heating. The higher its value, the slower the conduction of heat produced by viscous dissipation and hence the larger the temperature rise.
In, for example, a screw extruder, the energy supplied to the polymer melt comes primarily from two sources:
viscous heat generated by shear between elements of the flowing liquid moving at different velocities;
direct heat conduction from the wall of the extruder.
The former is supplied by the motor turning the screw, the latter by heaters. The Brinkman number is a measure of the ratio of the two.
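As a quick numerical illustration of the definition above (a sketch; the melt properties are order-of-magnitude placeholders, not data from the article):

```python
def brinkman_number(mu: float, u: float, kappa: float, T_wall: float, T_bulk: float) -> float:
    """Br = mu * u**2 / (kappa * (T_wall - T_bulk))."""
    return mu * u**2 / (kappa * (T_wall - T_bulk))

# Illustrative polymer-melt values (order of magnitude only):
# mu = 1000 Pa.s, u = 0.1 m/s, kappa = 0.2 W/(m.K), wall 50 K above bulk.
print(brinkman_number(mu=1000.0, u=0.1, kappa=0.2, T_wall=500.0, T_bulk=450.0))  # -> 1.0
```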
References
Continuum mechanics
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Polymer chemistry | Brinkman number | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 280 | [
"Thermodynamic properties",
"Physical quantities",
"Dimensionless numbers of thermodynamics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"Polymer chemistry"
] |
18,591,429 | https://en.wikipedia.org/wiki/Thermo-mechanical%20fatigue | Thermo-mechanical fatigue (short TMF) is the overlay of a cyclical mechanical loading, that leads to fatigue of a material, with a cyclical thermal loading. Thermo-mechanical fatigue is an important point that needs to be considered, when constructing turbine engines or gas turbines.
Failure mechanisms
There are three mechanisms acting in thermo-mechanical fatigue
Creep is the flow of material at high temperatures
Fatigue is crack growth and propagation due to repeated loading
Oxidation is a change in the chemical composition of the material due to environmental factors. The oxidized material is more brittle and prone to crack creation.
Each factor has more or less of an effect depending on the parameters of loading. In phase (IP) thermo-mechanical loading (when the temperature and load increase at the same time) is dominated by creep. The combination of high temperature and high stress is the ideal condition for creep. The heated material flows more easily in tension, but cools and stiffens under compression. Out of phase (OP) thermo-mechanical loading is dominated by the effects of oxidation and fatigue. Oxidation weakens the surface of the material, creating flaws and seeds for crack propagation. As the crack propagates, the newly exposed crack surface then oxidizes, weakening the material further and enabling the crack to extend. A third case occurs in OP TMF loading when the stress difference is much greater than the temperature difference. Fatigue alone is the driving cause of failure in this case, causing the material to fail before oxidation can have much of an effect.
TMF still is not fully understood. There are many different models to attempt to predict the behavior and life of materials undergoing TMF loading. The two models presented below take different approaches.
Models
There are many different models that have been developed in an attempt to understand and explain TMF. This page will address the two broadest approaches, constitutive and phenomenological models. Constitutive models utilize the current understanding of the microstructure of materials and failure mechanisms. These models tend to be more complex, as they try to incorporate everything we know about how the materials fail. These types of models are becoming more popular recently as improved imaging technology has allowed for a better understanding of failure mechanisms. Phenomenological models are based purely on the observed behavior of materials. They treat the exact mechanism of failure as a sort of "black box". Temperature and loading conditions are input, and the result is the fatigue life. These models try to fit some equation to match the trends found between different inputs and outputs.
Damage accumulation model
The damage accumulation model is a constitutive model of TMF. It adds together the damage from the three failure mechanisms of fatigue, creep, and oxidation:

$$\frac{1}{N_f} = \frac{1}{N_f^{\mathrm{fatigue}}} + \frac{1}{N_f^{\mathrm{oxidation}}} + \frac{1}{N_f^{\mathrm{creep}}}$$

where $N_f$ is the fatigue life of the material, that is, the number of loading cycles until failure. The fatigue life for each failure mechanism is calculated individually and combined to find the total fatigue life of the specimen.
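Since the reciprocal lives add, the total life follows directly from the three mechanism lives. A minimal sketch of this combination step (the mechanism lives are illustrative placeholders, taken as given inputs):

```python
def total_tmf_life(N_fatigue: float, N_oxidation: float, N_creep: float) -> float:
    """Combine mechanism lives via 1/N = 1/N_fat + 1/N_ox + 1/N_creep."""
    return 1.0 / (1.0 / N_fatigue + 1.0 / N_oxidation + 1.0 / N_creep)

# Illustrative mechanism lives (cycles):
print(total_tmf_life(N_fatigue=10_000, N_oxidation=50_000, N_creep=20_000))  # -> ~5,880 cycles
```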
Fatigue
The life from fatigue is calculated for isothermal loading conditions. It is dominated by the strain applied to the specimen and can be written in a Coffin–Manson-type strain–life form,

$$\frac{\Delta\varepsilon_{\mathrm{mech}}}{2} = C \left(2 N_f^{\mathrm{fatigue}}\right)^{d},$$

where $C$ and $d$ are material constants found through isothermal testing. Note that this term does not account for temperature effects. The effects of temperature are treated in the oxidation and creep terms.
Oxidation
The life from oxidation is affected by temperature and cycle time.
where
and
Parameters are found by comparing fatigue tests done in air and in an environment with no oxygen (vacuum or argon). Under these testing conditions, it has been found that the effects of oxidation can reduce the fatigue life of a specimen by a whole order of magnitude. Higher temperatures greatly increase the amount of damage from environmental factors.
Creep
where
Benefit
The damage accumulation model is one of the most in-depth and accurate models for TMF. It accounts for the effects of each failure mechanism.
Drawback
The damage accumulation model is also one of the most complex models for TMF. There are several material parameters that must be found through extensive testing.
Strain-rate partitioning
Strain-rate partitioning is a phenomenological model of thermo-mechanical fatigue. It is based on observed phenomena instead of the failure mechanisms. This model deals only with inelastic strain and ignores elastic strain completely. It accounts for different types of deformation and breaks strain into four possible scenarios:
PP – plastic in tension and compression
CP – creep in tension and plastic in compression
PC – plastic in tension and creep in compression
CC – creep in tension and compression
The damage and life for each partition are calculated and combined in the model

$$\frac{1}{N_f} = \frac{F_{pp}}{N_{pp}} + \frac{F_{cp}}{N_{cp}} + \frac{F_{pc}}{N_{pc}} + \frac{F_{cc}}{N_{cc}},$$

where $F_{ij}$ is the fraction of the inelastic strain range contributed by partition $ij$. The partition lives $N_{pp}$, $N_{cp}$, etc., are found from variations of the equation

$$\Delta\varepsilon_{\mathrm{in}} = A\, N^{C},$$

where A and C are material constants for individual loading.
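A minimal sketch of the combination step, assuming the interaction damage rule written above, in which each partition life is weighted by its strain-range fraction $F_{ij}$ (the fractions and lives below are illustrative, not measured values):

```python
def srp_life(F: dict[str, float], N: dict[str, float]) -> float:
    """Interaction damage rule: 1/N_f = sum of F_ij / N_ij over partitions pp, cp, pc, cc."""
    return 1.0 / sum(F[ij] / N[ij] for ij in ("pp", "cp", "pc", "cc"))

# Illustrative partition fractions (summing to 1) and per-partition lives (cycles):
F = {"pp": 0.5, "cp": 0.2, "pc": 0.2, "cc": 0.1}
N = {"pp": 20_000, "cp": 5_000, "pc": 8_000, "cc": 3_000}
print(srp_life(F, N))  # -> ~8,100 cycles
```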
Benefit
Strain-Rate Partitioning is a much simpler model than the damage accumulation model. Because it breaks down the loading into specific scenarios, it can account for different phases in loading.
Drawback
The model is based on inelastic strain. This means that it does not work well with scenarios of low inelastic strain, such as brittle materials or loading with very low strain.
This model can be an oversimplification. Because it fails to account for oxidation damage, it may overpredict specimen life in certain loading conditions.
Looking forward
The next area of research is attempting to understand TMF of composites. The interaction between the different materials adds another layer of complexity. Zhang and Wang are currently investigating the TMF of a unidirectional fiber reinforced matrix. They are using a finite element method that accounts for the known microstructure. They have discovered that the large difference in the thermal expansion coefficient between the matrix and the fiber is the driving cause of failure, causing high internal stress.
References
Mechanical engineering
Fracture mechanics | Thermo-mechanical fatigue | [
"Physics",
"Materials_science",
"Engineering"
] | 1,162 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Fracture mechanics",
"Materials science",
"Mechanical engineering",
"Materials degradation"
] |
18,593,401 | https://en.wikipedia.org/wiki/Estimation%20of%20signal%20parameters%20via%20rotational%20invariance%20techniques | Estimation of signal parameters via rotational invariant techniques (ESPRIT), is a technique to determine the parameters of a mixture of sinusoids in background noise. This technique was first proposed for frequency estimation. However, with the introduction of phased-array systems in everyday technology, it is also used for angle of arrival estimations.
One-dimensional ESPRIT
At instance $t$, the (complex-valued) output signals (measurements) $y_m[t]$, $m = 1, \ldots, M$, of the system are related to the (complex-valued) input signals $x_k[t]$, $k = 1, \ldots, K$, as

$$y_m[t] = \sum_{k=1}^{K} a_{m,k}\, x_k[t] + n_m[t],$$

where $n_m[t]$ denotes the noise added by the system. The one-dimensional form of ESPRIT can be applied if the weights have the form $a_{m,k} = e^{j(m-1)\omega_k}$, whose phases are integer multiples of some radial frequency $\omega_k$. This frequency only depends on the index of the system's input, i.e., $k$. The goal of ESPRIT is to estimate the $\omega_k$'s, given the outputs $y_m[t]$ and the number of input signals, $K$. Since the radial frequencies are the actual objectives, $a_{m,k}$ is denoted as $a_m(\omega_k)$.
Collating the weights as $\mathbf{a}(\omega_k) = [\,1,\; e^{j\omega_k},\; e^{j2\omega_k},\; \ldots,\; e^{j(M-1)\omega_k}\,]^{\mathsf{T}}$ and the output signals at instance $t$ as $\mathbf{y}[t] = [\,y_1[t], \ldots, y_M[t]\,]^{\mathsf{T}}$. Further, when the weight vectors are put into a Vandermonde matrix $\mathbf{A} = [\,\mathbf{a}(\omega_1), \ldots, \mathbf{a}(\omega_K)\,]$, and the inputs at instance $t$ into a vector $\mathbf{x}[t] = [\,x_1[t], \ldots, x_K[t]\,]^{\mathsf{T}}$, we can write

$$\mathbf{y}[t] = \mathbf{A}\,\mathbf{x}[t] + \mathbf{n}[t].$$

With several measurements at instances $t = 1, \ldots, T$ and the notations $\mathbf{Y} = [\,\mathbf{y}[1], \ldots, \mathbf{y}[T]\,]$, $\mathbf{X} = [\,\mathbf{x}[1], \ldots, \mathbf{x}[T]\,]$ and $\mathbf{N} = [\,\mathbf{n}[1], \ldots, \mathbf{n}[T]\,]$, the model equation becomes

$$\mathbf{Y} = \mathbf{A}\,\mathbf{X} + \mathbf{N}.$$
Dividing into virtual sub-arrays
The weight vector $\mathbf{a}(\omega_k)$ has the property that adjacent entries are related:

$$[\mathbf{a}(\omega_k)]_{m+1} = e^{j\omega_k}\, [\mathbf{a}(\omega_k)]_m.$$

For the whole vector $\mathbf{a}(\omega_k)$, the equation introduces two selection matrices $\mathbf{J}_1$ and $\mathbf{J}_2$: $\mathbf{J}_1 = [\,\mathbf{I}_{M-1} \;\; \mathbf{0}\,]$ and $\mathbf{J}_2 = [\,\mathbf{0} \;\; \mathbf{I}_{M-1}\,]$. Here, $\mathbf{I}_{M-1}$ is an identity matrix of size $M-1$ and $\mathbf{0}$ is a vector of zeros.

The vectors $\mathbf{J}_1\mathbf{a}(\omega_k)$ [$\mathbf{J}_2\mathbf{a}(\omega_k)$] contain all elements of $\mathbf{a}(\omega_k)$ except the last [first] one. Thus,

$$\mathbf{J}_2\,\mathbf{a}(\omega_k) = e^{j\omega_k}\, \mathbf{J}_1\,\mathbf{a}(\omega_k) \quad\text{and}\quad \mathbf{J}_2\,\mathbf{A} = \mathbf{J}_1\,\mathbf{A}\,\mathbf{H}, \qquad \mathbf{H} = \operatorname{diag}\!\left(e^{j\omega_1}, \ldots, e^{j\omega_K}\right).$$

The above relation is the first major observation required for ESPRIT. The second major observation concerns the signal subspace that can be computed from the output signals.
Signal subspace
The singular value decomposition (SVD) of $\mathbf{Y}$ is given as

$$\mathbf{Y} = \mathbf{U}\,\boldsymbol{\Sigma}\,\mathbf{V}^{\dagger}$$

where $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices and $\boldsymbol{\Sigma}$ is a diagonal matrix of size $M \times T$, that holds the singular values from the largest (top left) in descending order. The operator $\dagger$ denotes the complex-conjugate transpose (Hermitian transpose).
Let us assume that $T \geq M$. Notice that we have $K$ input signals. If there was no noise, there would only be $K$ non-zero singular values. We assume that the $K$ largest singular values stem from these input signals and the other singular values are presumed to stem from noise. The matrices in the SVD of $\mathbf{Y}$ can be partitioned into submatrices, where some submatrices correspond to the signal subspace and some correspond to the noise subspace:

$$\mathbf{U} = [\,\mathbf{U}_{\mathrm{S}} \;\; \mathbf{U}_{\mathrm{N}}\,], \qquad \boldsymbol{\Sigma} = \begin{bmatrix} \boldsymbol{\Sigma}_{\mathrm{S}} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Sigma}_{\mathrm{N}} & \mathbf{0} \end{bmatrix}, \qquad \mathbf{V} = [\,\mathbf{V}_{\mathrm{S}} \;\; \mathbf{V}_{\mathrm{N}}\,],$$

where $\mathbf{U}_{\mathrm{S}}$ and $\mathbf{V}_{\mathrm{S}}$ contain the first $K$ columns of $\mathbf{U}$ and $\mathbf{V}$, respectively, and $\boldsymbol{\Sigma}_{\mathrm{S}}$ is a diagonal matrix comprising the $K$ largest singular values.
Thus, the SVD can be written as

$$\mathbf{Y} = \mathbf{U}_{\mathrm{S}}\,\boldsymbol{\Sigma}_{\mathrm{S}}\,\mathbf{V}_{\mathrm{S}}^{\dagger} + \mathbf{U}_{\mathrm{N}}\,\boldsymbol{\Sigma}_{\mathrm{N}}\,\mathbf{V}_{\mathrm{N}}^{\dagger},$$

where $\mathbf{U}_{\mathrm{S}}$, $\boldsymbol{\Sigma}_{\mathrm{S}}$, and $\mathbf{V}_{\mathrm{S}}$ represent the contribution of the input signal to $\mathbf{Y}$. We term $\mathbf{U}_{\mathrm{S}}$ the signal subspace. In contrast, $\mathbf{U}_{\mathrm{N}}$, $\boldsymbol{\Sigma}_{\mathrm{N}}$, and $\mathbf{V}_{\mathrm{N}}$ represent the contribution of noise to $\mathbf{Y}$.
Hence, from the system model, we can write $\mathbf{A}\,\mathbf{X} = \mathbf{U}_{\mathrm{S}}\,\boldsymbol{\Sigma}_{\mathrm{S}}\,\mathbf{V}_{\mathrm{S}}^{\dagger}$ and $\mathbf{N} = \mathbf{U}_{\mathrm{N}}\,\boldsymbol{\Sigma}_{\mathrm{N}}\,\mathbf{V}_{\mathrm{N}}^{\dagger}$. Also, from the former, we can write

$$\mathbf{U}_{\mathrm{S}} = \mathbf{A}\,\mathbf{F},$$

where $\mathbf{F} = \mathbf{X}\,\mathbf{V}_{\mathrm{S}}\,\boldsymbol{\Sigma}_{\mathrm{S}}^{-1}$. In the sequel, it is only important that there exists such an invertible matrix $\mathbf{F}$, and its actual content will not be important.
Note: The signal subspace can also be extracted from the spectral decomposition of the auto-correlation matrix of the measurements, which is estimated as

$$\mathbf{R}_{yy} = \frac{1}{T} \sum_{t=1}^{T} \mathbf{y}[t]\, \mathbf{y}[t]^{\dagger}.$$
Estimation of radial frequencies
We have established two expressions so far: $\mathbf{J}_2\,\mathbf{A} = \mathbf{J}_1\,\mathbf{A}\,\mathbf{H}$ and $\mathbf{U}_{\mathrm{S}} = \mathbf{A}\,\mathbf{F}$. Now,

$$\mathbf{S}_2 := \mathbf{J}_2\,\mathbf{U}_{\mathrm{S}} = \mathbf{J}_2\,\mathbf{A}\,\mathbf{F} = \mathbf{J}_1\,\mathbf{A}\,\mathbf{H}\,\mathbf{F} = \mathbf{J}_1\,\mathbf{U}_{\mathrm{S}}\,\mathbf{F}^{-1}\,\mathbf{H}\,\mathbf{F} = \mathbf{S}_1\,\mathbf{P},$$

where $\mathbf{S}_1 = \mathbf{J}_1\,\mathbf{U}_{\mathrm{S}}$ and $\mathbf{S}_2 = \mathbf{J}_2\,\mathbf{U}_{\mathrm{S}}$ denote the truncated signal subspaces, and

$$\mathbf{P} = \mathbf{F}^{-1}\,\mathbf{H}\,\mathbf{F}.$$

The above equation has the form of an eigenvalue decomposition, and the phases of the eigenvalues in the diagonal matrix $\mathbf{H}$ are used to estimate the radial frequencies.

Thus, after solving for $\mathbf{P}$ in the relation $\mathbf{S}_2 = \mathbf{S}_1\,\mathbf{P}$, we would find the eigenvalues $\lambda_1, \ldots, \lambda_K$ of $\mathbf{P}$, where $\lambda_k = \alpha_k\, e^{j\omega_k}$, and the radial frequencies $\omega_1, \ldots, \omega_K$ are estimated as the phases (argument) of the eigenvalues.
Remark: In general, $\mathbf{S}_1$ is not invertible. One can use the least squares estimate $\hat{\mathbf{P}} = (\mathbf{S}_1^{\dagger}\,\mathbf{S}_1)^{-1}\,\mathbf{S}_1^{\dagger}\,\mathbf{S}_2$. An alternative would be the total least squares estimate.
Algorithm summary
Input: Measurements $\mathbf{Y} := [\,\mathbf{y}[1], \ldots, \mathbf{y}[T]\,]$, the number of input signals $K$ (estimate if not already known).
Compute the singular value decomposition (SVD) of $\mathbf{Y}$ and extract the signal subspace $\mathbf{U}_{\mathrm{S}}$ as the first $K$ columns of $\mathbf{U}$.
Compute $\mathbf{S}_1 = \mathbf{J}_1\,\mathbf{U}_{\mathrm{S}}$ and $\mathbf{S}_2 = \mathbf{J}_2\,\mathbf{U}_{\mathrm{S}}$, where $\mathbf{J}_1 = [\,\mathbf{I}_{M-1} \;\; \mathbf{0}\,]$ and $\mathbf{J}_2 = [\,\mathbf{0} \;\; \mathbf{I}_{M-1}\,]$.
Solve for $\mathbf{P}$ in $\mathbf{S}_2 = \mathbf{S}_1\,\mathbf{P}$ (see the remark above).
Compute the eigenvalues $\lambda_1, \ldots, \lambda_K$ of $\mathbf{P}$.
The phases of the eigenvalues $\lambda_k = \alpha_k\, e^{j\omega_k}$ provide the radial frequencies, i.e., $\omega_k = \arg \lambda_k$.
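A minimal NumPy sketch of the steps above, using the least-squares solve from the remark; the two-tone test signal is synthetic and illustrative:

```python
import numpy as np

def esprit(Y: np.ndarray, K: int) -> np.ndarray:
    """1-D ESPRIT: estimate K radial frequencies from an M x T measurement matrix Y."""
    U, _, _ = np.linalg.svd(Y)                   # signal subspace = first K left singular vectors
    Us = U[:, :K]
    S1, S2 = Us[:-1, :], Us[1:, :]               # J1/J2 selections: drop last / first row
    P = np.linalg.lstsq(S1, S2, rcond=None)[0]   # least-squares solve of S1 P = S2
    return np.angle(np.linalg.eigvals(P))        # phases of the eigenvalues

# Synthetic test: two complex exponentials over M = 8 sensors, T = 100 snapshots.
rng = np.random.default_rng(0)
M, T, omegas = 8, 100, np.array([0.5, 1.4])
A = np.exp(1j * np.outer(np.arange(M), omegas))  # Vandermonde steering matrix
X = rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))
Y = A @ X + 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
print(np.sort(esprit(Y, K=2)))  # -> approximately [0.5, 1.4]
```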
Notes
Choice of selection matrices
In the derivation above, the selection matrices $\mathbf{J}_1$ and $\mathbf{J}_2$ were used. However, any appropriate matrices $\mathbf{J}_1$ and $\mathbf{J}_2$ may be used as long as the rotational invariance, i.e., $\mathbf{J}_2\,\mathbf{A} = \mathbf{J}_1\,\mathbf{A}\,\mathbf{H}$, or some generalization of it (see below) holds; accordingly, the matrices $\mathbf{J}_1\,\mathbf{A}$ and $\mathbf{J}_2\,\mathbf{A}$ may contain any rows of $\mathbf{A}$.
Generalized rotational invariance
The rotational invariance used in the derivation may be generalized. So far, the matrix $\mathbf{H}$ has been defined to be a diagonal matrix that stores the sought-after complex exponentials on its main diagonal. However, $\mathbf{H}$ may also exhibit some other structure. For instance, it may be an upper triangular matrix. In this case, $\mathbf{F}^{-1}\,\mathbf{H}\,\mathbf{F}$ constitutes a triangularization of $\mathbf{P}$.
See also
Multiple signal classification
Generalized pencil-of-function method
Independent component analysis
References
Further reading
.
.
Haardt, M., Zoltowski, M. D., Mathews, C. P., & Nossek, J. (1995, May). 2D unitary ESPRIT for efficient 2D parameter estimation. In ICASSP (pp. 2096–2099). IEEE.
Signal estimation
Trigonometry
Wave mechanics | Estimation of signal parameters via rotational invariance techniques | [
"Physics"
] | 1,098 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
18,595,173 | https://en.wikipedia.org/wiki/Mohamed%20Gad-el-Hak | Mohamed Gad-el-Hak (born 1945) is an engineering scientist. He is currently the Inez Caudill Eminent Professor of biomedical engineering and professor of mechanical and nuclear engineering at Virginia Commonwealth University.
Biography
Gad-el-Hak was born on 11 February 1945 in Tanta, Egypt.
Gad-el-Hak was senior research scientist and program manager at Flow Research Company in Seattle, Washington, and then professor of aerospace and mechanical engineering at the University of Notre Dame, finally coming to Virginia Commonwealth University in 2002 as chair of mechanical engineering, subsequently expanded to mechanical and nuclear engineering.
Scientific work
Gad-el-Hak has developed diagnostic tools for turbulent flows, including the laser-induced fluorescence (LIF) technique for flow visualization, and discovered the efficient mechanism by which a turbulent region rapidly grows by destabilizing a surrounding laminar flow. He has also published on Reynolds number effects in turbulent boundary layers and on the fluid mechanics of microdevices.
Gad-el-Hak is the author of the book Flow Control: Passive, Active, and Reactive Flow Management, and editor of the books Frontiers in Experimental Fluid Mechanics, Advances in Fluid Mechanics Measurements, Flow Control: Fundamentals and Practices, The MEMS Handbook (three volumes), and Large-Scale Disasters: Prediction, Control, and Mitigation.
Honors
Gad-el-Hak has been a member of several advisory panels for DOD, DOE, NASA, and NSF. During the 1991/1992 academic year, he was a visiting professor at Institut de Mécanique de Grenoble, France. During the summers of 1993, 1994, and 1997, he was, respectively, a distinguished faculty fellow at Naval Undersea Warfare Center, Newport, Rhode Island, a visiting exceptional professor at Université de Poitiers, France, and a Gastwissenschaftler (guest scientist) at Forschungszentrum Rossendorf, Dresden, Germany.
Gad-el-Hak is a fellow of the American Academy of Mechanics, a fellow of the American Physical Society, a fellow of the American Institute of Physics, a fellow of the American Society of Mechanical Engineers, a fellow of the American Association for the Advancement of Science, an associate fellow of the American Institute of Aeronautics and Astronautics, and a member of the European Mechanics Society. Gad-el-Hak served as editor of eight international journals, including AIAA Journal, Applied Mechanics Reviews, and Bulletin of the Polish Academy of Sciences. He is additionally a contributing editor for Springer-Verlag's Lecture Notes in Engineering and Lecture Notes in Physics, for McGraw-Hill's Year Book of Science and Technology, and for CRC Press's Mechanical Engineering Series.
An editorial in honor of Gad-el-Hak titled "Homage to a Legendary Dynamicist on His Seventy-Fifth Birthday" appeared in the July 2020 issue of the Journal of Fluids Engineering.
In 1998, Gad-el-Hak was named the 14th American Society of Mechanical Engineers (ASME) Freeman Scholar. In 1999, he was awarded the Alexander von Humboldt Prize as well as the Japanese Government Research Award for Foreign Scholars. In 2002, he was named ASME Distinguished Lecturer. Gad-el-Hak has also been awarded the ASME Medal for contributions to the discipline of fluids engineering, as well as a Certificate of Appreciation.
Selected publications
Gad-el-Hak, M., and Bandyopadhyay, P.R. (1994) "Reynolds Number Effects in Wall-Bounded Flows," Applied Mechanics Reviews, vol. 47, pp. 307–365.
Sen, M., Wajerski, D., and Gad-el-Hak, M. (1996) "A Novel Pump for MEMS Applications," Journal of Fluids Engineering, vol. 118, pp. 624–627.
Gad-el-Hak, M. (1999) "The Fluid Mechanics of Microdevices—The Freeman Scholar Lecture," Journal of Fluids Engineering, vol. 121, pp. 5–33.
Hemeda, A.A., Esteves, R.J.A., McLeskey, J.T., Gad-el-Hak, M., Khraisheh, M., and Vahedi Tafreshi, H. (2018) "Molecular Dynamic Simulations of Fibrous Distillation Membranes," International Communications in Heat and Mass Transfer, vol. 98, pp. 304–309.
Ullah, R., Khraisheh, M., Esteves, R.J., McLeskey, J.T., AlGhouti, M., Gad-el-Hak, M., and Vahedi, Tafreshi, H. (2018) "Energy Efficiency of Direct Contact Membrane Distillation," Desalination, vol. 433, pp. 56–67.
Zhu, Y., Lee, C., Chen, X., Wu, J., Chen, S., and Gad-el-Hak, M. (2018) "Newly Identified Principle for Aerodynamic Heating in Hypersonic Flows," Journal of Fluid Mechanics, vol. 855, pp. 152–180.
Gad-el-Hak, M. (2019) "Coherent Structures and Flow Control: Genesis and Prospect," Bulletin of the Polish Academy of Sciences, vol. 67, pp. 411–444.
References
External links
1945 births
American physicists
Fluid dynamicists
American mechanical engineers
Aerospace engineers
Egyptian mechanical engineers
Egyptian emigrants to the United States
Engineers from Virginia
Ain Shams University alumni
Fellows of the American Association for the Advancement of Science
Fellows of the American Physical Society
Fellows of the American Society of Mechanical Engineers
Johns Hopkins University alumni
Living people
People from Tanta
Scientists from Virginia
Whiting School of Engineering alumni
University of Southern California faculty
University of Virginia faculty
University of Notre Dame faculty
Virginia Commonwealth University faculty | Mohamed Gad-el-Hak | [
"Chemistry",
"Engineering"
] | 1,232 | [
"Aerospace engineering",
"Fluid dynamicists",
"Aerospace engineers",
"Fluid dynamics"
] |
1,250,206 | https://en.wikipedia.org/wiki/Intermetallic | An intermetallic (also called intermetallic compound, intermetallic alloy, ordered intermetallic alloy, long-range-ordered alloy) is a type of metallic alloy that forms an ordered solid-state compound between two or more metallic elements. Intermetallics are generally hard and brittle, with good high-temperature mechanical properties. They can be classified as stoichiometric or nonstoichiometic.
The term "intermetallic compounds" applied to solid phases has long been in use. However, Hume-Rothery argued that it misleads, suggesting a fixed stoichiometry and a clear decomposition into species.
Definitions
Research definition
In 1967 Schulze defined intermetallic compounds as solid phases containing two or more metallic elements, with optionally one or more non-metallic elements, whose crystal structure differs from that of the other constituents. This definition includes:
Electron (or Hume-Rothery) compounds
Size packing phases. e.g. Laves phases, Frank–Kasper phases and Nowotny phases
Zintl phases
The definition of metal includes:
Post-transition metals, i.e. aluminium, gallium, indium, thallium, tin, lead, and bismuth.
Metalloids, e.g. silicon, germanium, arsenic, antimony and tellurium.
Homogeneous and heterogeneous solid solutions of metals, and interstitial compounds such as carbides and nitrides are excluded under this definition. However, interstitial intermetallic compounds are included, as are alloys of intermetallic compounds with a metal.
Common use
In common use, the research definition, including post-transition metals and metalloids, is extended to include compounds such as cementite, Fe3C. These compounds, sometimes termed interstitial compounds, can be stoichiometric, and share properties with the above intermetallic compounds.
Complexes
The term intermetallic is used to describe compounds involving two or more metals such as the cyclopentadienyl complex Cp6Ni2Zn4.
B2
A B2 intermetallic compound has equal numbers of atoms of two metals such as aluminum and iron, arranged as two interpenetrating simple cubic lattices of the component metals.
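A minimal sketch that generates the atom positions of one such B2 cell, with iron on the corner sublattice and aluminium at the body centre (the lattice parameter is an illustrative value, roughly that of FeAl):

```python
# B2 (CsCl-type) cell: two interpenetrating simple cubic sublattices.
# Lattice parameter is illustrative (roughly that of FeAl).
import itertools

a = 2.91  # angstroms

fe_sites = [tuple(a * c for c in xyz) for xyz in itertools.product((0.0, 1.0), repeat=3)]
al_site = (a / 2, a / 2, a / 2)

for pos in fe_sites:
    print("Fe", pos)   # eight shared corner sites of the cubic cell
print("Al", al_site)   # body-centre site of the second sublattice
```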
Properties
Intermetallic compounds are generally brittle at room temperature and have high melting points. Cleavage or intergranular fracture modes are typical of intermetallics due to the limited number of independent slip systems required for plastic deformation. However, some intermetallics have ductile fracture modes, such as Nb–15Al–40Ti. Others can exhibit improved ductility when alloyed with elements, such as boron, that increase grain boundary cohesion. Intermetallics may offer a compromise between ceramic and metallic properties when hardness and/or resistance to high temperatures is important enough to sacrifice some toughness and ease of processing. They can display desirable magnetic and chemical properties, due to their strong internal order and mixed (metallic and covalent/ionic) bonding, respectively. Intermetallics have given rise to various novel materials developments.
Applications
Examples include alnico and the hydrogen storage materials in nickel metal hydride batteries. Ni3Al, which is the hardening phase in the familiar nickel-base super alloys, and the various titanium aluminides have attracted interest for turbine blade applications, while the latter is also used in small quantities for grain refinement of titanium alloys. Silicides, intermetallics involving silicon, serve as barrier and contact layers in microelectronics. Others include:
Magnetic materials e.g. alnico, sendust, Permendur, FeCo, Terfenol-D
Superconductors e.g. A15 phases, niobium-tin
Hydrogen storage e.g. AB5 compounds (nickel metal hydride batteries)
Shape memory alloys e.g. Cu-Al-Ni (alloys of Cu3Al and nickel), Nitinol (NiTi)
Coating materials e.g. NiAl
High-temperature structural materials e.g. nickel aluminide, Ni3Al
Dental amalgams, which are alloys of intermetallics Ag3Sn and Cu3Sn
Gate contact/ barrier layer for microelectronics e.g. TiSi2
Laves phases (AB2), e.g., MgCu2, MgZn2 and MgNi2.
The unintended formation of intermetallics can cause problems. For example, intermetallics of gold and aluminium can be a significant cause of wire bond failures in semiconductor devices and other microelectronics devices. The management of intermetallics is a major issue in the reliability of solder joints between electronic components.
Intermetallic particles
Intermetallic particles often form during solidification of metallic alloys, and can be used as a dispersion strengthening mechanism.
History
Examples of intermetallics through history include:
Roman yellow brass, CuZn
Chinese high tin bronze, Cu31Sn8
Type metal, SbSn
Chinese white copper, CuNi
German type metal is described as breaking like glass, without bending, softer than copper, but more fusible than lead. The chemical formula does not agree with the one above; however, the properties match with an intermetallic compound or an alloy of one.
See also
Complex metallic alloys
Kirkendall effect
Maraging steel
Metallurgy
Solid solution
References
Sources
External links
Intermetallics, scientific journal | Intermetallic | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,154 | [
"Inorganic compounds",
"Metallurgy",
"Intermetallics",
"Condensed matter physics",
"Alloys"
] |
1,250,498 | https://en.wikipedia.org/wiki/Joachim%20Lambek | Joachim "Jim" Lambek (5 December 1922 – 23 June 2014) was a Canadian mathematician. He was Peter Redpath Emeritus Professor of Pure Mathematics at McGill University, where he earned his PhD degree in 1950 with Hans Zassenhaus as advisor.
Biography
Lambek was born in Leipzig, Germany, where he attended a Gymnasium. He came to England in 1938 as a refugee on the Kindertransport. From there he was interned as an enemy alien and deported to a prison work camp in New Brunswick, Canada. There, he began in his spare time a mathematical apprenticeship with Fritz Rothberger, also interned, and wrote the McGill Junior Matriculation in fall of 1941. In the spring of 1942, he was released and settled in Montreal, where he entered studies at McGill University, graduating with an honours mathematics degree in 1945 and an MSc a year later. In 1950, he completed his doctorate under Hans Zassenhaus, becoming McGill's first PhD in mathematics.
Lambek became assistant professor at McGill; he was made a full professor in 1963. He spent his sabbatical year 1965–66 at the Institute for Mathematical Research at ETH Zurich, where Beno Eckmann had gathered together a group of researchers interested in algebraic topology and category theory, including Bill Lawvere. There Lambek reoriented his research into category theory.
Lambek retired in 1992 but continued his involvement at McGill's mathematics department. In 2000 a festschrift celebrating Lambek's contributions to mathematical structures in computer science was published. On the occasion of Lambek's 90th birthday, a collection Categories and Types in Logic, Language, and Physics was produced in tribute to him.
Scholarly work
Lambek's PhD thesis investigated vector fields using the biquaternion algebra over Minkowski space, as well as the immersion of a semigroup in a group. The second component was published by the Canadian Journal of Mathematics. He later returned to biquaternions when in 1995 he contributed "If Hamilton had prevailed: Quaternions in Physics", which exhibited the Riemann–Silberstein bivector to express the free-space electromagnetic equations.
Lambek supervised 17 doctoral students, and has 75 doctoral descendants as of 2020. He has over 100 publications listed in Mathematical Reviews, including 6 books. His earlier work was mostly in module theory, especially torsion theories, non-commutative localization, and injective modules. One of his earliest papers proved the Lambek–Moser theorem about integer sequences. In 1963 he published an important result, now known as Lambek's theorem, on character modules characterizing flatness of a module. His more recent work is in pregroups and formal languages. He is noted, among other things, for the Lambek calculus, an effort to capture mathematical aspects of natural language syntax in logical form, a work that has been very influential in computational linguistics, as well as for developing the connections between typed lambda calculus and cartesian closed categories (see Curry–Howard–Lambek correspondence). His last works were on pregroup grammar.
Selected works
Books
Articles
Reprinted in
See also
Cartesian monoid
Michael K. Brame
References
External links
Faculty profile of Joachim Lambek at McGill University
Lambek festival (80th anniversary)
1922 births
2014 deaths
20th-century Canadian mathematicians
21st-century Canadian mathematicians
21st-century German mathematicians
Algebraists
Canadian logicians
Category theorists
Kindertransport refugees
German emigrants to Canada
McGill University alumni | Joachim Lambek | [
"Mathematics"
] | 716 | [
"Mathematical structures",
"Algebraists",
"Category theory",
"Category theorists",
"Algebra"
] |
1,250,552 | https://en.wikipedia.org/wiki/Paper%20chromatography | Paper chromatography is an analytical method used to separate colored chemicals or substances. It can also be used for colorless chemicals that can be located by a stain or other visualisation method after separation. It is now primarily used as a teaching tool, having been replaced in the laboratory by other chromatography methods such as thin-layer chromatography (TLC).
This analytic method has three components: a mobile phase, a stationary phase, and a support medium. The mobile phase is a solution that travels up the stationary phase by capillary action. The mobile phase is generally a mixture of non-polar organic solvents, while the stationary phase is water, a polar inorganic solvent. Here, paper is used to support the stationary phase, water. Polar water molecules are held inside the void space of the cellulose network of the paper. The difference between TLC and paper chromatography is that the stationary phase in TLC is a layer of adsorbent (usually silica gel or aluminium oxide), whereas the stationary phase in paper chromatography is the less absorbent paper.
A paper chromatography variant, two-dimensional chromatography, involves using two solvents and rotating the paper 90° in between. This is useful for separating complex mixtures of compounds having similar polarity, for example, amino acids.
Rƒ value, solutes, and solvents
The retention factor (Rƒ) may be defined as the ratio of the distance travelled by the solute to the distance travelled by the solvent. It is used in chromatography to quantify the amount of retardation of a sample in a stationary phase relative to a mobile phase. Rƒ values are usually expressed as a decimal fraction to two decimal places.
If Rƒ value of a solution is zero, the solute remains in the stationary phase and thus it is immobile.
If Rƒ value = 1 then the solute has no affinity for the stationary phase and travels with the solvent front.
For example, if a compound travels 9.9 cm and the solvent front travels 12.7 cm, the Rƒ value = 9.9/12.7 ≈ 0.78. The Rƒ value depends on temperature and on the solvent used in the experiment, so different solvents yield different Rƒ values for the same mixture of compounds. In chromatography, the solvent is the liquid the paper is placed in, and the solute is the ink being separated.
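As a minimal sketch of the arithmetic above (the function name and the input guard are our own additions, not part of any standard library), the Rƒ calculation can be expressed as:

```python
# Illustrative only: compute the retention factor Rf from two measured
# distances, mirroring the worked example above.
def retention_factor(solute_distance_cm: float, solvent_front_cm: float) -> float:
    """Rf = distance travelled by solute / distance travelled by solvent."""
    if not 0 <= solute_distance_cm <= solvent_front_cm:
        raise ValueError("solute cannot travel farther than the solvent front")
    return solute_distance_cm / solvent_front_cm

print(round(retention_factor(9.9, 12.7), 2))  # 0.78, as in the example
```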
Pigments and polarity
Paper chromatography is one method for testing the purity of compounds and identifying substances. Paper chromatography is a useful technique because it is relatively quick and requires only small quantities of material. Separations in paper chromatography involve the principle of partition. In paper chromatography, substances are distributed between a stationary phase and a mobile phase. The stationary phase is the water trapped between the cellulose fibers of the paper. The mobile phase is a developing solution that travels up the stationary phase, carrying the samples with it. Components of the sample will separate readily according to how strongly they adsorb onto the stationary phase versus how readily they dissolve in the mobile phase.
When a colored chemical sample is placed on a filter paper, the colors separate from the sample by placing one end of the paper in a solvent. The solvent diffuses up the paper, dissolving the various molecules in the sample according to the polarities of the molecules and the solvent. If the sample contains more than one color, that means it must have more than one kind of molecule. Because of the different chemical structures of each kind of molecule, the chances are very high that each molecule will have at least a slightly different polarity, giving each molecule a different solubility in the solvent. The unequal solubility causes the various color molecules to leave solution at different places as the solvent continues to move up the paper. The more soluble a molecule is, the higher it will migrate up the paper. If a chemical is very non-polar it will not dissolve at all in a very polar solvent. This is the same for a very polar chemical and a very non-polar solvent.
When using water (a very polar substance) as a solvent, the more polar the color, the higher it will rise on the papers.
Types
Descending
Development of the chromatogram is done by allowing the solvent to travel down the paper. Here, the mobile phase is placed in a solvent holder at the top. The spot is kept at the top and solvent flows down the paper from above.
Ascending
Here the solvent travels up the chromatographic paper. Both descending and ascending paper chromatography are used for the separation of organic and inorganic substances.
The sample and solvent move upward.
The ascending and descending method
This is the hybrid of both of the above techniques. The upper part of ascending chromatography can be folded over a rod in order to allow the paper to become descending after crossing the rod.
Circular chromatography
A circular filter paper is taken and the sample is deposited at the center of the paper. After drying the spot, the filter paper is tied horizontally on a Petri dish containing solvent, so that the wick of the paper is dipped in the solvent. The solvent rises through the wick and the components are separated into concentric rings.
Two-dimensional
In this technique a square or rectangular paper is used. Here the sample is applied to one of the corners and development is performed at a right angle to the direction of the first run.
History of paper chromatography
The discovery of paper chromatography in 1943 by Martin and Synge provided, for the first time, the means of surveying the constituents of plants and of separating and identifying them. In Weintraub's history of the man, Erwin Chargaff credits the 1944 article by Consden, Gordon and Martin. There was an explosion of activity in this field after 1945.
References
Bibliography
Chromatography | Paper chromatography | [
"Chemistry"
] | 1,223 | [
"Chromatography",
"Separation processes"
] |
1,250,632 | https://en.wikipedia.org/wiki/Calculus%20of%20structures | In mathematical logic, the calculus of structures is a proof calculus with deep inference for studying the structural proof theory of noncommutative logic. The calculus has since been applied to study linear logic, classical logic, modal logic, and process calculi, and many benefits are claimed to follow in these investigations from the way in which deep inference is made available in the calculus.
References
Alessio Guglielmi (2004). 'A System of Interaction and Structure'. ACM Transactions on Computational Logic.
Kai Brünnler (2004). Deep Inference and Symmetry in Classical Proofs. Logos Verlag.
External links
Calculus of structures homepage
CoS in Maude: page documenting implementations of logical systems in the calculus of structures, using the Maude system.
Logical calculi | Calculus of structures | [
"Mathematics"
] | 163 | [
"Mathematical logic",
"Logical calculi"
] |
1,250,664 | https://en.wikipedia.org/wiki/Deep%20inference | In mathematical logic, deep inference names a general idea in structural proof theory that breaks with the classical sequent calculus by generalising the notion of structure to permit inference to occur in contexts of high structural complexity. The term deep inference is generally reserved for proof calculi where the structural complexity is unbounded; in this article we will use non-shallow inference to refer to calculi that have structural complexity greater than the sequent calculus, but not unboundedly so, although this is not at present established terminology.
Deep inference is not important in logic outside of structural proof theory, since the phenomena that lead to the proposal of formal systems with deep inference are all related to the cut-elimination theorem. The first calculus of deep inference was proposed by Kurt Schütte, but the idea did not generate much interest at the time.
Nuel Belnap proposed display logic in an attempt to characterise the essence of structural proof theory. The calculus of structures was proposed in order to give a cut-free characterisation of noncommutative logic. Cirquent calculus was developed as a system of deep inference that makes it possible to explicitly account for subcomponent-sharing.
Notes
Further reading
Kai Brünnler, "Deep Inference and Symmetry in Classical Proofs" (Ph.D. thesis 2004), also published in book form by Logos Verlag ().
Deep Inference and the Calculus of Structures Intro and reference web page about ongoing research in deep inference.
Proof theory
Inference | Deep inference | [
"Mathematics"
] | 302 | [
"Mathematical logic",
"Proof theory"
] |
1,250,665 | https://en.wikipedia.org/wiki/Proof%20calculus | In mathematical logic, a proof calculus or a proof system is built to prove statements.
Overview
A proof system includes the components:
Formal language: The set L of formulas admitted by the system, for example, propositional logic or first-order logic.
Rules of inference: List of rules that can be employed to prove theorems from axioms and theorems.
Axioms: Formulas in L assumed to be valid. All theorems are derived from axioms.
A formal proof of a well-formed formula in a proof system is a finite sequence of formulas, each of which is an axiom of the system or follows from earlier formulas in the sequence by a rule of inference, and whose last formula is the one being proved; its existence establishes that the formula is a theorem of the system.
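As an illustrative sketch (not from the article; the string encoding of formulas, the single axiom instance, and all names are our own inventions), this definition can be made concrete: a proof is checked line by line, each line being an axiom or following from earlier lines by modus ponens.

```python
# A toy proof checker. Formulas are strings or tuples; ("->", A, B)
# encodes the implication A -> B. Only one axiom and one rule are included.
AXIOMS = {("->", "p", ("->", "q", "p"))}  # a Hilbert-style axiom instance

def modus_ponens(a, b):
    """From A and A -> B, infer B; return None if the rule does not apply."""
    if isinstance(b, tuple) and b[0] == "->" and b[1] == a:
        return b[2]
    return None

def is_proof(lines):
    proved = []
    for f in lines:
        ok = f in AXIOMS or any(
            modus_ponens(a, b) == f for a in proved for b in proved
        )
        if not ok:
            return False
        proved.append(f)
    return True

print(is_proof([("->", "p", ("->", "q", "p"))]))  # True: a one-line proof
```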
Usually a given proof calculus encompasses more than a single particular formal system, since many proof calculi are under-determined and can be used for radically different logics. For example, a paradigmatic case is the sequent calculus, which can be used to express the consequence relations of both intuitionistic logic and relevance logic. Thus, loosely speaking, a proof calculus is a template or design pattern, characterized by a certain style of formal inference, that may be specialized to produce specific formal systems, namely by specifying the actual inference rules for such a system. There is no consensus among logicians on how best to define the term.
Examples of proof calculi
The most widely known proof calculi are those classical calculi that are still in widespread use:
The class of Hilbert systems, of which the most famous example is the 1928 Hilbert–Ackermann system of first-order logic;
Gerhard Gentzen's calculus of natural deduction, which is the first formalism of structural proof theory, and which is the cornerstone of the formulae-as-types correspondence relating logic to functional programming;
Gentzen's sequent calculus, which is the most studied formalism of structural proof theory.
Many other proof calculi were, or might have been, seminal, but are not widely used today.
Aristotle's syllogistic calculus, presented in the Organon, readily admits formalisation. There is still some modern interest in syllogisms, carried out under the aegis of term logic.
Gottlob Frege's two-dimensional notation of the Begriffsschrift (1879) is usually regarded as introducing the modern concept of quantifier to logic.
C.S. Peirce's existential graphs might easily have been seminal, had history worked out differently.
Modern research in logic teems with rival proof calculi:
Several systems have been proposed that replace the usual textual syntax with some graphical syntax. proof nets and cirquent calculus are among such systems.
Recently, many logicians interested in structural proof theory have proposed calculi with deep inference, for instance display logic, hypersequents, the calculus of structures, and bunched implication.
See also
Method of analytic tableaux
Proof procedure
Propositional proof system
Resolution (logic)
References
Proof theory
Logical calculi | Proof calculus | [
"Mathematics"
] | 609 | [
"Mathematical logic",
"Logical calculi",
"Proof theory"
] |
1,250,779 | https://en.wikipedia.org/wiki/Glycation | Glycation (non-enzymatic glycosylation) is the covalent attachment of a sugar to a protein, lipid or nucleic acid molecule. Typical sugars that participate in glycation are glucose, fructose, and their derivatives. Glycation is the non-enzymatic process responsible for many (e.g. micro and macrovascular) complications in diabetes mellitus and is implicated in some diseases and in aging. Glycation end products are believed to play a causative role in the vascular complications of diabetes mellitus.
In contrast with glycation, glycosylation is the enzyme-mediated ATP-dependent attachment of sugars to a protein or lipid. Glycosylation occurs at defined sites on the target molecule. It is a common form of post-translational modification of proteins and is required for the functioning of the mature protein.
Biochemistry
Glycations occur mainly in the bloodstream to a small proportion of the absorbed simple sugars: glucose, fructose, and galactose. It appears that fructose has approximately ten times the glycation activity of glucose, the primary body fuel. Glycation can occur through Amadori reactions, Schiff base reactions, and Maillard reactions, which lead to advanced glycation end products (AGEs).
Biomedical implications
Red blood cells have a consistent lifespan of 120 days and are accessible for measurement of glycated hemoglobin. Measurement of HbA1c—the predominant form of glycated hemoglobin—enables medium-term blood sugar control to be monitored in diabetes.
Some glycation products are implicated in many age-related chronic diseases, including cardiovascular diseases (the endothelium, fibrinogen, and collagen are damaged) and Alzheimer's disease (amyloid proteins are side-products of the reactions progressing to AGEs).
Long-lived cells (such as nerves and different types of brain cell), long-lasting proteins (such as crystallins of the lens and cornea), and DNA can sustain substantial glycation over time. Damage by glycation results in stiffening of the collagen in the blood vessel walls, leading to high blood pressure, especially in diabetes. Glycations also cause weakening of the collagen in the blood vessel walls, which may lead to micro- or macro-aneurysms; in the brain these may cause strokes.
DNA glycation
The term DNA glycation applies to DNA damage induced by reactive carbonyls (principally methylglyoxal and glyoxal) that are present in cells as by-products of sugar metabolism. Glycation of DNA can cause mutation, breaks in DNA and cytotoxicity. Guanine in DNA is the base most susceptible to glycation. Glycated DNA, as a form of damage, appears to be as frequent as the more well studied oxidative DNA damage. A protein, designated DJ-1 (also known as PARK7), is employed in the repair of glycated DNA bases in humans, and homologs of this protein have also been identified in bacteria.
See also
Advanced glycation end-product
Alagebrium
Fructose
Galactose
Glucose
Glycosylation
Glycated hemoglobin
List of aging processes
Additional reading
References
Ageing processes
Carbohydrates
Post-translational modification
Protein metabolism | Glycation | [
"Chemistry",
"Biology"
] | 715 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Gene expression",
"Protein metabolism",
"Biochemical reactions",
"Organic compounds",
"Senescence",
"Post-translational modification",
"Carbohydrate chemistry",
"Ageing processes",
"Metabolism"
] |
1,251,318 | https://en.wikipedia.org/wiki/Plasma%20window | The plasma window (not to be confused with a plasma shield) is a technology that fills a volume of space with plasma confined by a magnetic field. With current technology, this volume is quite small and the plasma is generated as a flat plane inside a cylindrical space.
Plasma is any gas whose atoms or molecules have been ionized, and is a separate phase of matter. This is most commonly achieved by heating the gas to extremely high temperatures, although other methods exist. Plasma becomes increasingly viscous at higher temperatures, to the point where other matter has trouble passing through.
A plasma window's viscosity allows it to separate gas at standard atmospheric pressure from a total vacuum, and can reportedly withstand a pressure difference of up to nine atmospheres. At the same time, the plasma window will allow radiation such as laser beams and electron beams to pass. This property is the key to the plasma window's usefulness – the technology of the plasma window allows radiation that can only be generated in a vacuum to be applied to objects in an atmosphere. Electron-beam welding is a major application of plasma windows, making it practical outside a hard vacuum.
History
The plasma window was invented at Brookhaven National Laboratory by Ady Hershcovitch and patented in 1995.
Further inventions using this principle include the plasma valve in 1996.
In 2014, a group of students from the University of Leicester released a study describing functioning of spaceship plasma deflector shields.
In 2015, Boeing was granted a patent on a force field system designed to protect against shock waves generated by explosions. It is not intended to protect against projectiles, radiation, or energy weapons such as lasers. The field purportedly works by using a combination of lasers, electricity and microwaves to rapidly heat up the air creating a field of (ionised) superheated air-plasma which disrupts, or at least attenuates, the shock wave. As of March 2016, no working models are known to have been demonstrated.
Michio Kaku proposes force fields consisting of three layers. The first is the high-powered plasma window, which can vaporize incoming objects and block radiation and particles. The second layer consists of thousands of laser beams arranged in a tight lattice configuration to vaporize any objects that manage to get through the plasma screen. The third layer is an invisible but stable sheet of material like carbon nanotubes, or graphene that is only one atom thick, and thus transparent, but stronger than steel, to block possible debris from destroyed objects.
Plasma valve
A related technology is the plasma valve, invented shortly after the plasma window. A plasma valve is a layer of gas in the shell of a particle accelerator. The ring of a particle accelerator contains a vacuum, and ordinarily a breach of this vacuum is disastrous. If, however, an accelerator equipped with plasma valve technology breaches, the gas layer is ionized within a nanosecond, creating a seal that prevents the accelerator's recompression. This gives technicians time to shut off the particle beam in the accelerator and slowly recompress the accelerator ring to avoid damage.
Properties
The physical properties of the plasma window vary depending on application. The initial patent cited temperatures around 15,000 K.
The only limit on the size of the plasma window is the available energy, as generating the window consumes around 20 kilowatts per inch (8 kW/cm) of window diameter.
The plasma window emits a bright glow, with the color being dependent on the gas used.
Similarity to fictional "force fields"
In science fiction, such as the television series Star Trek, a fictional technology known as the "force field" is often used as a device. In some cases it is used as an external "door" to hangars on spacecraft, to prevent the ship's internal atmosphere from venting into outer space. Plasma windows could theoretically serve such a purpose if enough energy were available to produce them. The StarTram proposal plans on use of a power-demanding MHD window over a multi-meter diameter launch tube periodically, but briefly at a time, to prevent excessive loss of vacuum during the moments when a mechanical shutter temporarily opens in advance of a hypervelocity spacecraft.
See also
Tractor beam
Other sources
BNL Wins R&D 100 Award for 'Plasma Window'
Ady Hershcovitch. Plasma Window Technology for Propagating Particle Beams and Radiation from Vacuum to Atmosphere
Bibliography
Ady Hershcovitch (1995). High-pressure arcs as vacuum-atmosphere interface and plasma lens for nonvacuum electron beam welding machines, electron beam melting, and nonvacuum ion material modification, Journal of Applied Physics, 78(9): 5283–5288
References
External links
Official article by Ady Hershcovitch – Inventor of the Plasma Window.
Brookhaven Lab Wins R&D 100 Award for the "Plasma Window"
Brookhaven National Laboratory – Where the Plasma Window was invented
News on the Plasma Window
Plasma Valve Patent
Plasma technology and applications | Plasma window | [
"Physics"
] | 1,018 | [
"Plasma technology and applications",
"Plasma physics"
] |
1,251,702 | https://en.wikipedia.org/wiki/Suspension%20%28topology%29 | In topology, a branch of mathematics, the suspension of a topological space X is intuitively obtained by stretching X into a cylinder and then collapsing both end faces to points. One views X as "suspended" between these end points. The suspension of X is denoted by SX or susp(X).
There is a variation of the suspension for pointed space, which is called the reduced suspension and denoted by ΣX. The "usual" suspension SX is sometimes called the unreduced suspension, unbased suspension, or free suspension of X, to distinguish it from ΣX.
Free suspension
The (free) suspension of a topological space can be defined in several ways.
1. $SX$ is the quotient space $(X \times [0,1]) / (X \times \{0\},\, X \times \{1\})$. In other words, it can be constructed as follows:
Construct the cylinder $X \times [0,1]$.
Consider the entire set $X \times \{0\}$ as a single point ("glue" all its points together).
Consider the entire set $X \times \{1\}$ as a single point ("glue" all its points together).
2. Another way to write this is:
$SX := v_0 \cup_{p_0} (X \times [0,1]) \cup_{p_1} v_1$
Where $v_0, v_1$ are two points, and for each $i$ in {0,1}, $p_i$ is the projection of $X \times \{i\}$ to the point $v_i$ (a function that maps everything to $v_i$). That means, the suspension $SX$ is the result of constructing the cylinder $X \times [0,1]$, and then attaching it by its faces, $X \times \{0\}$ and $X \times \{1\}$, to the points $v_0$ and $v_1$ along the projections $p_0$ and $p_1$.
3. One can view $SX$ as two cones on X, glued together at their base.
4. $SX$ can also be defined as the join $X \star S^0$, where $S^0$ is a discrete space with two points.
5. In homotopy type theory, $SX$ can be defined as a higher inductive type generated by two point constructors
$S : SX$,
$N : SX$,
together with a path constructor $\mathrm{merid} : X \to (N = S)$ giving a path from $N$ to $S$ for every point of $X$.
Properties
In rough terms, S increases the dimension of a space by one: for example, it takes an n-sphere to an (n + 1)-sphere for n ≥ 0.
Given a continuous map $f : X \to Y$, there is a continuous map $Sf : SX \to SY$ defined by $Sf([x, t]) := [f(x), t]$, where square brackets denote equivalence classes. This makes $S$ into a functor from the category of topological spaces to itself.
Reduced suspension
If X is a pointed space with basepoint x0, there is a variation of the suspension which is sometimes more useful. The reduced suspension or based suspension ΣX of X is the quotient space:
$\Sigma X = (X \times I)/(X \times \{0\} \cup X \times \{1\} \cup \{x_0\} \times I)$.
This is equivalent to taking SX and collapsing the line (x0 × I) joining the two ends to a single point. The basepoint of the pointed space ΣX is taken to be the equivalence class of (x0, 0).
One can show that the reduced suspension of X is homeomorphic to the smash product of X with the unit circle S1.
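The sphere case makes this concrete; the following display (standard facts assumed here, not spelled out in the article) records the smash-product description and its effect on spheres:

```latex
\[
  \Sigma X \;\cong\; X \wedge S^1,
  \qquad
  \Sigma S^n \;\cong\; S^n \wedge S^1 \;\cong\; S^{n+1} \quad (n \ge 0).
\]
```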
For well-behaved spaces, such as CW complexes, the reduced suspension of X is homotopy equivalent to the unbased suspension.
Adjunction of reduced suspension and loop space functors
Σ gives rise to a functor from the category of pointed spaces to itself. An important property of this functor is that it is left adjoint to the functor $\Omega$ taking a pointed space $X$ to its loop space $\Omega X$. In other words, we have a natural isomorphism
$\operatorname{Maps}_*(\Sigma X, Y) \cong \operatorname{Maps}_*(X, \Omega Y)$
where $X$ and $Y$ are pointed spaces and $\operatorname{Maps}_*$ stands for continuous maps that preserve basepoints. This adjunction can be understood geometrically, as follows: $\Sigma X$ arises out of $X$ if a pointed circle is attached to every non-basepoint of $X$, and the basepoints of all these circles are identified and glued to the basepoint of $X$. Now, to specify a pointed map from $\Sigma X$ to $Y$, we need to give pointed maps from each of these pointed circles to $Y$. This is to say we need to associate to each element of $X$ a loop in $Y$ (an element of the loop space $\Omega Y$), and the trivial loop should be associated to the basepoint of $X$: this is a pointed map from $X$ to $\Omega Y$. (The continuity of all involved maps needs to be checked.)
The adjunction is thus akin to currying, taking maps on cartesian products to their curried form, and is an example of Eckmann–Hilton duality.
This adjunction is a special case of the adjunction explained in the article on smash products.
Applications
The reduced suspension can be used to construct a homomorphism of homotopy groups, to which the Freudenthal suspension theorem applies. In homotopy theory, the phenomena which are preserved under suspension, in a suitable sense, make up stable homotopy theory.
Examples
Some examples of suspensions are:
The suspension of an n-ball is homeomorphic to the (n+1)-ball.
Desuspension
Desuspension is an operation partially inverse to suspension.
See also
Double suspension theorem
Cone (topology)
Join (topology)
References
Topology
Homotopy theory | Suspension (topology) | [
"Physics",
"Mathematics"
] | 929 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
1,251,777 | https://en.wikipedia.org/wiki/Thermal%20desorption%20spectroscopy | Temperature programmed desorption (TPD) is the method of observing desorbed molecules from a surface when the surface temperature is increased. When experiments are performed using well-defined surfaces of single-crystalline samples in a continuously pumped ultra-high vacuum (UHV) chamber, then this experimental technique is often also referred to as thermal desorption spectroscopy or thermal desorption spectrometry (TDS).
Desorption
When molecules or atoms come in contact with a surface, they adsorb onto it, minimizing their energy by forming a bond with the surface. The binding energy varies with the combination of adsorbate and surface. If the surface is heated, at some point the energy transferred to the adsorbed species will cause it to desorb. The temperature at which this happens is known as the desorption temperature. Thus TPD provides information on the binding energy.
Measurement
Since TPD observes the mass of desorbed molecules, it shows what molecules are adsorbed on the surface. Moreover, TPD recognizes the different adsorption conditions of the same molecule from the differences between the desorption temperatures of molecules desorbing from different sites at the surface, e.g. terraces vs. steps. TPD also obtains the amounts of adsorbed molecules on the surface from the intensity of the peaks of the TPD spectrum, and the total amount of adsorbed species is given by the integral of the spectrum.
To measure TPD, one needs a mass spectrometer, such as a quadrupole mass spectrometer or a time-of-flight (TOF) mass spectrometer, under ultrahigh vacuum (UHV) conditions. The amount of adsorbed molecules is measured by increasing the temperature at a heating rate of typically 2 K/s to 10 K/s. Several masses may be simultaneously measured by the mass spectrometer, and the intensity of each mass as a function of temperature is obtained as a TDS spectrum.
The heating procedure is often controlled by the PID control algorithm, with the controller being either a computer or specialised equipment such as a Eurotherm.
Other methods of measuring desorption are Thermal Gravimetric Analysis (TGA) or using infrared detectors, thermal conductivity detectors etc.
Quantitative interpretation of TPD data
TDS spectra 1 and 2 are typical examples of a TPD measurement. Both are examples of NO desorbing from a single crystal in high vacuum. The crystal was mounted on a titanium filament and heated with current. The desorbing NO was measured using a mass spectrometer monitoring the atomic mass of 30.
Before 1990, analysis of a TPD spectrum was usually done using a so-called simplified method, the "Redhead" method, assuming the exponential prefactor and the desorption energy to be independent of the surface coverage. After 1990, with the use of computer algorithms, TDS spectra were analyzed using the "complete analysis method" or the "leading edge method". These methods assume the exponential prefactor and the desorption energy to be dependent on the surface coverage. Several available methods of analyzing TDS are described and compared in an article by A.M. de Jong and J.W. Niemantsverdriet. During parameter optimization/estimation, using the integral has been found to create a more well-behaved objective function than the differential.
Theoretical Introduction
Thermal desorption is described by the Polanyi–Wigner equation derived from the Arrhenius equation:

$$r(t) = -\frac{d\theta}{dt} = \nu_n(\theta)\,\theta^{n}\,\exp\!\left(-\frac{E_a(\theta)}{RT}\right)$$

where:
$r(t)$: the desorption rate [mol/(cm² s)],
$n$: order of desorption,
$\theta$: surface coverage, on which both $\nu_n$ and $E_a$ may depend,
$\nu_n$: pre-exponential factor [Hz] as a function of $\theta$,
$E_a$: activation energy of desorption [kJ/mol] as a function of $\theta$,
$R$: gas constant [J/(K mol)],
$T$: temperature [K].
This equation is difficult in practice while several variables are a function of the coverage and influence each other. The “complete analysis method” calculates the pre-exponential factor and the activation energy at several coverages. This calculation can be simplified. First we assume the pre-exponential factor and the activation energy to be independent of the coverage.
We also assume a linear heating rate:

$$T(t) = T_0 + \beta t \qquad \text{(equation 1)}$$

where:
$\beta$: the heating rate in [K/s],
$T_0$: the start temperature in [K],
$t$: the time in [s].
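To make the rate law and the linear ramp concrete, here is a minimal numerical sketch; every parameter value (ν, Ea, β, T0) is an assumption chosen for illustration, not a value from the text:

```python
# Integrate the first-order Polanyi-Wigner equation along a linear
# temperature ramp T = T0 + beta*t (equation 1) to produce a TPD trace.
import math

R = 8.314      # gas constant, J/(K mol)
nu = 1e13      # assumed pre-exponential factor, Hz
Ea = 120e3     # assumed activation energy, J/mol
beta = 5.0     # assumed heating rate, K/s
T0, theta, dt = 300.0, 1.0, 0.01   # start temperature, coverage, time step

trace = []
for step in range(60_000):
    T = T0 + beta * step * dt                      # equation 1
    rate = nu * theta * math.exp(-Ea / (R * T))    # first order: n = 1
    trace.append((T, rate))
    theta = max(theta - rate * dt, 0.0)            # deplete the coverage

T_peak = max(trace, key=lambda p: p[1])[0]
print(f"peak desorption temperature ~ {T_peak:.0f} K")  # roughly 470 K here
```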
We assume that the pump rate of the system is indefinitely large, thus no gasses will re-adsorb during the desorption. The change in pressure during desorption is described as:

$$\frac{dp}{dt} = \frac{a\,A\,r(t)}{V} - \frac{S\,p}{V} \qquad \text{(equation 2)}$$

where:
$p$: the pressure in the system,
$t$: the time in [s],
$A$: the sample surface [m²],
$a$: a constant,
$V$: volume of the system [m³],
$r(t)$: the desorption rate [mol/(cm² s)],
$S$: the pump rate [m³/s].

We assume that $S$ is indefinitely large, so molecules do not re-adsorb during the desorption process, and we assume that $V\,dp/dt$ is indefinitely small compared to $S\,p$, and thus:

$$r(t) = \frac{S\,p}{a\,A} \qquad \text{(equation 3)}$$

Equations 2 and 3 lead to the conclusion that the desorption rate is proportional to the pressure. One can therefore use a quantity measured in an experiment that is a function of the pressure, like the intensity of a mass spectrometer signal, to determine the desorption rate.
Since we assumed the pre-exponential factor and the activation energy to be independent of the coverage, thermal desorption is described with a simplified Arrhenius equation:

$$r(t) = \nu\,\theta^{n}\,\exp\!\left(-\frac{E_a}{RT}\right) \qquad \text{(equation 4)}$$

where:
$r(t)$: the desorption rate [mol/(cm² s)],
$n$: order of desorption,
$\theta$: surface coverage,
$\nu$: pre-exponential factor [Hz],
$E_a$: activation energy of desorption [kJ/mol],
$R$: gas constant,
$T$: temperature [K].

Using the before-mentioned Redhead method (a method less precise than the "complete analysis" or the "leading edge" method) and the temperature maximum $T_m$, one can determine the activation energy:

$$E_a = R\,T_m\left[\ln\!\left(\frac{\nu\,T_m}{\beta}\right) - 3.64\right] \qquad \text{(equation 5, for } n = 1\text{)}$$

$$\frac{E_a}{R\,T_m^{2}} = \frac{2\,\nu\,\theta_m}{\beta}\,\exp\!\left(-\frac{E_a}{R\,T_m}\right) \qquad \text{(equation 6, for } n = 2\text{)}$$
M. Ehasi and K. Christmann described a simple method to determine the activation energy of the second order.
Equation 6 can be changed into:

$$\ln\!\left(\theta_0\,T_m^{2}\right) = \frac{E_a}{R\,T_m} + \ln\!\left(\frac{E_a\,\beta}{2\,R\,\nu}\right) \qquad \text{(equation 6a)}$$

where:
$\theta_0$ is the surface area of a TDS or TPD peak.

A graph of $\ln(\theta_0\,T_m^{2})$ versus $1/T_m$ results in a straight line with a slope equal to $E_a/R$.

Thus in a first-order reaction the peak temperature $T_m$ is independent of the surface coverage, while in a second-order reaction one can determine $E_a$ by changing the surface coverage. Usually a fixed value of the pre-exponential factor $\nu$ is used and $\beta$ is known; with these values one can derive $E_a$ iteratively from $T_m$.
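The iterative determination just mentioned can be sketched numerically. Instead of the approximate equation 5, one can solve the exact first-order peak condition $E_a/(R\,T_m^2) = (\nu/\beta)\exp(-E_a/(R\,T_m))$ for $E_a$ by fixed-point iteration; the numerical values below are assumptions for demonstration only. Feeding in the roughly 470 K peak from the simulation above, with the same assumed ν and β, recovers an activation energy close to the 120 kJ/mol that generated it.

```python
# Solve the exact first-order Redhead peak condition for Ea, given a fixed
# pre-exponential factor nu, a known heating rate beta and a measured Tm.
import math

R = 8.314  # gas constant, J/(K mol)

def redhead_Ea(Tm, nu=1e13, beta=5.0, tol=1e-6):
    Ea = 30.0 * R * Tm                 # crude starting guess
    for _ in range(100):
        # rearranged peak condition: Ea = R*Tm * ln(nu*R*Tm^2 / (beta*Ea))
        Ea_new = R * Tm * math.log(nu * R * Tm**2 / (beta * Ea))
        if abs(Ea_new - Ea) < tol:
            break
        Ea = Ea_new
    return Ea

print(f"Ea ~ {redhead_Ea(470.0) / 1000:.0f} kJ/mol")  # about 121 kJ/mol here
```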
See also
Temperature-programmed reduction
References
External links
Temperature programmed desorption @ the Surface Science Laboratory
Thermal desorption of large adsorbates
Mass spectrometry
Analytical chemistry
Surface science | Thermal desorption spectroscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,401 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Surface science",
"Mass spectrometry",
"Condensed matter physics",
"nan",
"Matter"
] |
1,252,256 | https://en.wikipedia.org/wiki/Inelastic%20scattering | In chemistry, nuclear physics, and particle physics, inelastic scattering is a process in which the internal states of a particle or a system of particles change after a collision. Often, this means the kinetic energy of the incident particle is not conserved (in contrast to elastic scattering). Additionally, relativistic collisions which involve a transition from one type of particle to another are referred to as inelastic even if the outgoing particles have the same kinetic energy as the incoming ones. Processes which are governed by elastic collisions at a microscopic level will appear to be inelastic if a macroscopic observer only has access to a subset of the degrees of freedom. In Compton scattering for instance, the two particles in the collision transfer energy causing a loss of energy in the measured particle.
Electrons
When an electron is the incident particle, the probability of inelastic scattering, depending on the energy of the incident electron, is usually smaller than that of elastic scattering. Thus in the case of gas electron diffraction (GED), reflection high-energy electron diffraction (RHEED), and transmission electron diffraction, because the energy of the incident electron is high, the contribution of inelastic electron scattering can be ignored. Deep inelastic scattering of electrons from protons provided the first direct evidence for the existence of quarks.
Photons
When a photon is the incident particle, there is an inelastic scattering process called Raman scattering. In this scattering process, the incident photon interacts with matter (gas, liquid, and solid) and the frequency of the photon is shifted towards red or blue. A red shift can be observed when part of the energy of the photon is transferred to the interacting matter, where it adds to its internal energy in a process called Stokes Raman scattering. The blue shift can be observed when internal energy of the matter is transferred to the photon; this process is called anti-Stokes Raman scattering.
Inelastic scattering is seen in the interaction between an electron and a photon. When a high-energy photon collides with a free electron (more precisely, weakly bound since a free electron cannot participate in inelastic scattering with a photon) and transfers energy, the process is called Compton scattering. Furthermore, when an electron with relativistic energy collides with an infrared or visible photon, the electron gives energy to the photon. This process is called inverse Compton scattering.
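The Compton process mentioned above is governed by a standard relation (quoted here as background; it is not derived in this article) between the scattering angle and the photon wavelength shift:

```latex
\[
  \Delta\lambda \;=\; \lambda' - \lambda
  \;=\; \frac{h}{m_e c}\,\bigl(1 - \cos\theta\bigr),
  \qquad
  \frac{h}{m_e c} \approx 2.43 \times 10^{-12}\ \mathrm{m},
\]
```

so the shift is largest for back-scattering (θ = 180°) and vanishes in the forward direction.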
Neutrons
Neutrons undergo many types of scattering, including both elastic and inelastic scattering. Whether elastic or inelastic scattering occurs depends on the speed of the neutron, whether fast or thermal, or somewhere in between. It also depends on the nucleus it strikes and its neutron cross section. In inelastic scattering, the neutron interacts with the nucleus and the kinetic energy of the system is changed. This often activates the nucleus, putting it into an excited, unstable, short-lived energy state which causes it to quickly emit some kind of radiation to bring it back down to a stable or ground state. Alpha particles, beta particles, gamma rays, and protons may be emitted. Particles scattered in this type of nuclear reaction may cause the nucleus to recoil in the other direction.
Molecular collisions
Inelastic scattering is common in molecular collisions. Any collision which leads to a chemical reaction will be inelastic, but the term inelastic scattering is reserved for those collisions which do not result in reactions. There is a transfer of energy between the translational mode (kinetic energy) and rotational and vibrational modes.
If the transferred energy is small compared to the incident energy of the scattered particle, one speaks of quasielastic scattering.
See also
Inelastic collision
Elastic scattering
Scattering theory
References
Particle physics
Chemical kinetics
Scattering, absorption and radiative transfer (optics) | Inelastic scattering | [
"Physics",
"Chemistry"
] | 772 | [
"Chemical reaction engineering",
" absorption and radiative transfer (optics)",
"Scattering",
"Particle physics",
"Chemical kinetics"
] |
1,252,913 | https://en.wikipedia.org/wiki/Thermionic%20converter | A thermionic converter consists of a hot electrode which thermionically emits electrons over a potential energy barrier to a cooler electrode, producing a useful electric power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization or electron impact ionization in a plasma) to neutralize the electron space charge.
Definition
From a physical electronic viewpoint, thermionic energy conversion is the direct production of electric power from heat by thermionic electron emission. From a thermodynamic viewpoint, it is the use of electron vapor as the working fluid in a power-producing cycle. A thermionic converter consists of a hot emitter electrode from which electrons are vaporized by thermionic emission and a colder collector electrode into which they are condensed after conduction through the inter-electrode plasma. The resulting current, typically several amperes per square centimeter of emitter surface, delivers electrical power to a load at a typical potential difference of 0.5–1 volt and thermal efficiency of 5–20%, depending on the emitter temperature (1500–2000 K) and mode of operation.
History
After the first demonstration of the practical arc-mode caesium vapor thermionic converter by V. Wilson in 1957, several applications of it were demonstrated in the following decade, including its use with solar, combustion, radioisotope, and nuclear reactor heat sources. The application most seriously pursued, however, was the integration of thermionic nuclear fuel elements directly into the core of nuclear reactors for production of electrical power in space. The exceptionally high operating temperature of thermionic converters, which makes their practical use difficult in other applications, gives the thermionic converter decisive advantages over competing energy conversion technologies in the space power application where radiant heat rejection is required. Substantial thermionic space reactor development programs were conducted in the U.S., France, and Germany in the period 1963–1973, and the US resumed a significant thermionic nuclear fuel element development program in the period 1983–1993.
Thermionic power systems were used in combination with various nuclear reactors (BES-5, TOPAZ) as electrical power supply on a number of Soviet military surveillance satellites between 1967 and 1988.
See Kosmos 954 for more details.
Although the priority for thermionic reactor use diminished as the US and Russian space programs were curtailed, research and technology development in thermionic energy conversion have continued. In recent years technology development programs for solar-heated thermionic space power systems were conducted. Prototype combustion-heated thermionic systems for domestic heat and electric power cogeneration, and for rectification, have been developed.
Description
The scientific aspects of thermionic energy conversion primarily concern the fields of surface physics and plasma physics. The electrode surface properties determine the magnitude of electron emission current and electric potential at the electrode surfaces, and the plasma properties determine the transport of electron current from the emitter to the collector. All practical thermionic converters to date employ caesium vapor between the electrodes, which determines both the surface and plasma properties. Caesium is employed because it is the most easily ionized of all stable elements.
A thermionic generator is like a cyclic heat engine and its maximum efficiency is limited by Carnot's law. It is a low-voltage, high-current device in which current densities of 25–50 A/cm² have been achieved at voltages from 1–2 V. The energy of high-temperature gases can be partly converted into electricity if the riser tubes of a boiler are provided with the cathode and anode of a thermionic generator, with the interspace filled with ionized caesium vapor.
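A back-of-envelope sketch of the figures just quoted (the specific operating point of 35 A/cm² at 0.8 V and the 1800 K / 900 K electrode temperatures are our own illustrative assumptions):

```python
# Output power density and the Carnot bound for an assumed operating point.
def electrical_power_density(j_a_per_cm2: float, v_volts: float) -> float:
    """Electrical output power density in W/cm^2."""
    return j_a_per_cm2 * v_volts

def carnot_limit(t_emitter_k: float, t_collector_k: float) -> float:
    """Upper bound on efficiency of any heat engine between two temperatures."""
    return 1.0 - t_collector_k / t_emitter_k

print(electrical_power_density(35.0, 0.8))    # 28.0 W/cm^2
print(round(carnot_limit(1800.0, 900.0), 2))  # 0.5; real converters reach 5-20%
```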
The surface property of primary interest is the work function, which is the barrier that limits electron emission current from the surface and essentially is the heat of vaporization of electrons from the surface. The work function is determined primarily by a layer of caesium atoms adsorbed on the electrode surfaces. The properties of the interelectrode plasma are determined by the mode of operation of the thermionic converter. In the ignited (or "arc") mode the plasma is maintained via ionization internally by hot plasma electrons (~ 3300 K); in the unignited mode the plasma is maintained via injection of externally produced positive ions into a cold plasma; in the hybrid mode the plasma is maintained by ions from a hot-plasma interelectrode region transferred into a cold-plasma interelectrode region.
Recent work
All the applications cited above have employed technology in which the basic physical understanding and performance of the thermionic converter were essentially the same as those achieved before 1970. During the period from 1973 to 1983, however, significant research on advanced low-temperature thermionic converter technology for fossil-fueled industrial and commercial electric power production was conducted in the US, and continued until 1995 for possible space reactor and naval reactor applications. That research has shown that substantial improvements in converter performance can now be obtained at lower operating temperatures by addition of oxygen to the caesium vapor, by suppression of electron reflection at the electrode surfaces, and by hybrid mode operation. Similarly, improvements via use of oxygen-containing electrodes have been demonstrated in Russia, along with design studies of systems employing the advanced thermionic converter performance. Recent studies have shown that excited Cs atoms in thermionic converters form clusters of Cs-Rydberg matter, which yield a decrease of the collector emission work function from 1.5 eV to 1.0–0.7 eV. Due to the long-lived nature of Rydberg matter, this work function remains low for a long time, which essentially increases the low-temperature converter's efficiency.
See also
Atomic battery
Betavoltaics
Optoelectric nuclear battery
Magnetohydrodynamic generator
Radioisotope piezoelectric generator
Radioisotope thermoelectric generator
Thermocouple
Thermoelectric generator
Belousov–Zhabotinsky reaction
References
Battery types
Electric power
Electrical generators
Nuclear power in space
Nuclear technology | Thermionic converter | [
"Physics",
"Technology",
"Engineering"
] | 1,254 | [
"Electrical generators",
"Machines",
"Physical quantities",
"Nuclear technology",
"Physical systems",
"Power (physics)",
"Electric power",
"Nuclear physics",
"Electrical engineering"
] |
1,252,991 | https://en.wikipedia.org/wiki/Orbital%20hybridisation | In chemistry, orbital hybridisation (or hybridization) is the concept of mixing atomic orbitals to form new hybrid orbitals (with different energies, shapes, etc., than the component atomic orbitals) suitable for the pairing of electrons to form chemical bonds in valence bond theory. For example, in a carbon atom which forms four single bonds, the valence-shell s orbital combines with three valence-shell p orbitals to form four equivalent sp3 mixtures in a tetrahedral arrangement around the carbon to bond to four different atoms. Hybrid orbitals are useful in the explanation of molecular geometry and atomic bonding properties and are symmetrically disposed in space. Usually hybrid orbitals are formed by mixing atomic orbitals of comparable energies.
History and uses
Chemist Linus Pauling first developed the hybridisation theory in 1931 to explain the structure of simple molecules such as methane (CH4) using atomic orbitals. Pauling pointed out that a carbon atom forms four bonds by using one s and three p orbitals, so that "it might be inferred" that a carbon atom would form three bonds at right angles (using p orbitals) and a fourth weaker bond using the s orbital in some arbitrary direction. In reality, methane has four C–H bonds of equivalent strength. The angle between any two bonds is the tetrahedral bond angle of 109°28' (around 109.5°). Pauling supposed that in the presence of four hydrogen atoms, the s and p orbitals form four equivalent combinations which he called hybrid orbitals. Each hybrid is denoted sp3 to indicate its composition, and is directed along one of the four C–H bonds. This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalizing the structures of organic compounds. It gives a simple orbital picture equivalent to Lewis structures.
Hybridisation theory is an integral part of organic chemistry, one of the most compelling examples being Baldwin's rules. For drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons. Hybridisation theory explains bonding in alkenes and methane. The amount of p character or s character, which is decided mainly by orbital hybridisation, can be used to reliably predict molecular properties such as acidity or basicity.
Overview
Orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridization, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen.
Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each carbon–hydrogen bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form $N(s + \sqrt{3}\,p_\sigma)$, where $N$ is a normalisation constant (here 1/2) and $p_\sigma$ is a p orbital directed along the C–H axis to form a sigma bond. The ratio of coefficients (denoted λ in general) is $\sqrt{3}$ in this example. Since the electron density associated with an orbital is proportional to the square of the wavefunction, the ratio of p-character to s-character is λ² = 3. The p character, or the weight of the p component, is N²λ² = 3/4.
Types of hybridisation
sp3
Hybridisation describes the bonding of atoms from an atom's point of view. For a tetrahedrally coordinated carbon (e.g., methane CH4), the carbon should have 4 orbitals directed towards the 4 hydrogen atoms.
Carbon's ground-state configuration is 1s² 2s² 2p², with the two 2p electrons occupying separate orbitals.
This configuration suggests that the carbon atom could use its two singly occupied p-type orbitals to form two covalent bonds with two hydrogen atoms in a methylene (CH2) molecule, with a hypothetical bond angle of 90° corresponding to the angle between two p orbitals on the same atom. However, the true H–C–H angle in singlet methylene is about 102°, which implies the presence of some orbital hybridisation.
The carbon atom can also bond to four hydrogen atoms in methane by an excitation (or promotion) of an electron from the doubly occupied 2s orbital to the empty 2p orbital, producing four singly occupied orbitals.
The energy released by the formation of two additional bonds more than compensates for the excitation energy required, energetically favoring the formation of four C-H bonds.
According to quantum mechanics the lowest energy is obtained if the four bonds are equivalent, which requires that they are formed from equivalent orbitals on the carbon. A set of four equivalent orbitals can be obtained that are linear combinations of the valence-shell (core orbitals are almost never involved in bonding) s and p wave functions, which are the four sp3 hybrids.
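For concreteness, one standard choice of the four equivalent sp3 combinations (a sketch; the signs depend on the axis convention chosen, and the normalisation 1/2 matches the constant quoted earlier) is:

```latex
\begin{align*}
  h_1 &= \tfrac{1}{2}\,(s + p_x + p_y + p_z), &
  h_2 &= \tfrac{1}{2}\,(s + p_x - p_y - p_z),\\
  h_3 &= \tfrac{1}{2}\,(s - p_x + p_y - p_z), &
  h_4 &= \tfrac{1}{2}\,(s - p_x - p_y + p_z).
\end{align*}
```

These four orbitals are orthonormal and point toward alternating corners of a cube, i.e. the vertices of a tetrahedron.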
In CH4, four sp3 hybrid orbitals are overlapped by hydrogen 1s orbitals, yielding four σ (sigma) bonds (that is, four single covalent bonds) of equal length and strength.
sp2
Other carbon compounds and other molecules may be explained in a similar way. For example, ethene (C2H4) has a double bond between the carbons.
For this molecule, carbon sp2 hybridises, because one π (pi) bond is required for the double bond between the carbons and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals, usually denoted 2px and 2py. The third 2p orbital (2pz) remains unhybridised.
This mixing forms a total of three sp2 orbitals with one remaining p orbital. In ethene, the two carbon atoms form a σ bond by overlapping one sp2 orbital from each carbon atom. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. Each carbon atom forms covalent C–H bonds with two hydrogens by s–sp2 overlap, all with 120° bond angles. The hydrogen–carbon bonds are all of equal strength and length, in agreement with experimental data.
sp
The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridization. In this model, the 2s orbital is mixed with only one of the three p orbitals,
resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a σ s–sp overlap at 180° angles.
Hybridisation and molecule shape
Hybridisation helps to explain molecule shape, since the angles between bonds are approximately equal to the angles between hybrid orbitals. This is in contrast to valence shell electron-pair repulsion (VSEPR) theory, which can be used to predict molecular geometry based on empirical rules rather than on valence-bond or orbital theories.
spx hybridisation
As the valence orbitals of main group elements are the one s and three p orbitals with the corresponding octet rule, spx hybridization is used to model the shape of these molecules.
spxdy hybridisation
As the valence orbitals of transition metals are the five d, one s and three p orbitals with the corresponding 18-electron rule, spxdy hybridisation is used to model the shape of these molecules. These molecules tend to have multiple shapes corresponding to the same hybridization due to the different d-orbitals involved. A square planar complex has one unoccupied p-orbital and hence has 16 valence electrons.
sdx hybridisation
In certain transition metal complexes with a low d electron count, the p-orbitals are unoccupied and sdx hybridisation is used to model the shape of these molecules.
Hybridisation of hypervalent molecules
Octet expansion
In some general chemistry textbooks, hybridization is presented for main group coordination number 5 and above using an "expanded octet" scheme with d-orbitals first proposed by Pauling. However, such a scheme is now considered to be incorrect in light of computational chemistry calculations.
In 1990, Eric Alfred Magnusson of the University of New South Wales published a paper definitively excluding the role of d-orbital hybridisation in bonding in hypervalent compounds of second-row (period 3) elements, ending a point of contention and confusion. Part of the confusion originates from the fact that d-functions are essential in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result). Also, the contribution of the d-function to the molecular wavefunction is large. These facts were incorrectly interpreted to mean that d-orbitals must be involved in bonding.
Resonance
In light of computational chemistry, a better treatment would be to invoke sigma bond resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. All resonance structures must obey the octet rule.
Hybridisation in computational VB theory
While the simple model of orbital hybridisation is commonly used to explain molecular shape, hybridisation is used differently when computed in modern valence bond programs. Specifically, hybridisation is not determined a priori but is instead variationally optimized to find the lowest energy solution and then reported. This means that all artificial constraints, specifically two constraints, on orbital hybridisation are lifted:
that hybridisation is restricted to integer values (isovalent hybridisation)
that hybrid orbitals are orthogonal to one another (hybridisation defects)
This means that in practice, hybrid orbitals do not conform to the simple ideas commonly taught and thus in scientific computational papers are simply referred to as spx, spxdy or sdx hybrids to express their nature instead of more specific integer values.
Isovalent hybridisation
Although ideal hybrid orbitals can be useful, in reality, most bonds require orbitals of intermediate character. This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of the bond formation when the molecular geometry deviates from ideal bond angles. The amount of p-character is not restricted to integer values; i.e., hybridizations like sp2.5 are also readily described.
The hybridization of bond orbitals is determined by Bent's rule: "Atomic s character concentrates in orbitals directed towards electropositive substituents".
For molecules with lone pairs, the bonding orbitals are isovalent spx hybrids. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4.0 to give the interorbital angle of 104.5°. This means that they have 20% s character and 80% p character and does not imply that a hybrid orbital is formed from one s and four p orbitals on oxygen since the 2p subshell of oxygen only contains three p orbitals.
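The quoted sp4.0 assignment can be checked against Coulson's orthogonality relation for two equivalent hybrids, cos θ = −1/λ² (a standard result that the article itself does not derive); a minimal sketch:

```python
# Interorbital angle between two equivalent sp^x hybrids via Coulson's
# relation cos(theta) = -1/x, where x = lambda^2 is the hybridisation index.
import math

def interorbital_angle_deg(x: float) -> float:
    """Angle in degrees between two equivalent sp^x hybrid orbitals."""
    return math.degrees(math.acos(-1.0 / x))

print(round(interorbital_angle_deg(3.0), 1))  # sp3 -> 109.5 (methane)
print(round(interorbital_angle_deg(4.0), 1))  # sp4 -> 104.5, as quoted for water
```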
Hybridisation defects
Hybridisation of s and p orbitals to form effective spx hybrids requires that they have comparable radial extent. While 2p orbitals are on average less than 10% larger than 2s, in part attributable to the lack of a radial node in 2p orbitals, 3p orbitals, which have one radial node, exceed the 3s orbitals by 20–33%. The difference in extent of s and p orbitals increases further down a group. The hybridisation of atoms in chemical bonds can be analysed by considering localised molecular orbitals, for example using natural localised molecular orbitals in a natural bond orbital (NBO) scheme. In methane, CH4, the calculated p/s ratio is approximately 3, consistent with "ideal" sp3 hybridisation, whereas for silane, SiH4, the p/s ratio is closer to 2. A similar trend is seen for the other 2p elements. Substitution of fluorine for hydrogen further decreases the p/s ratio. The 2p elements exhibit near-ideal hybridisation with orthogonal hybrid orbitals. For heavier p block elements this assumption of orthogonality cannot be justified. These deviations from the ideal hybridisation were termed hybridisation defects by Kutzelnigg.
However, computational VB groups such as Gerratt, Cooper and Raimondi (SCVB) as well as Shaik and Hiberty (VBSCF) go a step further to argue that even for model molecules such as methane, ethylene and acetylene, the hybrid orbitals are already defective and nonorthogonal, with hybridisations such as sp1.76 instead of sp3 for methane.
Photoelectron spectra
One misconception concerning orbital hybridisation is that it incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule, which implies resonance in valence bond theory. For example, in methane, the ionised states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and an A1 state. The difference in energy between each ionized state and the ground state would be an ionization energy, which yields two values in agreement with experimental results.
Localized vs canonical molecular orbitals
Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is, therefore, equivalent to the delocalized orbital description for ground state total energy and electron density, as well as the molecular geometry that corresponds to the minimum total energy value.
Two localized representations
Molecules with multiple bonds or multiple lone pairs can have orbitals represented in terms of sigma and pi symmetry or equivalent orbitals. Different valence bond methods use either of the two representations, which have mathematically equivalent total many-electron wave functions and are related by a unitary transformation of the set of occupied molecular orbitals.
For multiple bonds, the sigma-pi representation is the predominant one compared to the equivalent orbital (bent bond) representation. In contrast, for multiple lone pairs, most textbooks use the equivalent orbital representation. However, the sigma-pi representation is also used, such as by Weinhold and Landis within the context of natural bond orbitals, a localized orbital theory containing modernized analogs of classical (valence bond/Lewis structure) bonding pairs and lone pairs. For the hydrogen fluoride molecule, for example, two F lone pairs are essentially unhybridized p orbitals, while the other is an spx hybrid orbital. An analogous consideration applies to water (one O lone pair is in a pure p orbital, another is in an spx hybrid orbital).
See also
Crystal field theory
Isovalent hybridisation
Ligand field theory
Linear combination of atomic orbitals
MO diagrams
VALBOND
References
External links
Covalent Bonds and Molecular Structure
Hybridisation flash movie
Hybrid orbital 3D preview program in OpenGL
Understanding Concepts: Molecular Orbitals
General Chemistry tutorial on orbital hybridization
Chemical bonding
Molecular geometry
Stereochemistry
Quantum chemistry | Orbital hybridisation | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,303 | [
"Quantum chemistry",
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Space",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Spacetime",
"Chemical bonding",
"Matter",
" and optical physics"
] |
1,253,305 | https://en.wikipedia.org/wiki/Wheeler%E2%80%93DeWitt%20equation | The Wheeler–DeWitt equation, in theoretical physics and applied mathematics, is a field equation attributed to John Archibald Wheeler and Bryce DeWitt. The equation attempts to mathematically combine the ideas of quantum mechanics and general relativity, a step towards a theory of quantum gravity.
In this approach, time plays a role different from what it does in non-relativistic quantum mechanics, leading to the so-called "problem of time". More specifically, the equation describes the quantum version of the Hamiltonian constraint using metric variables. Its commutation relations with the diffeomorphism constraints generate the Bergmann–Komar "group" (which is the diffeomorphism group on-shell).
Motivation and background
In canonical gravity, spacetime is foliated into spacelike submanifolds. The three-metric (i.e., the metric on the hypersurface) is $\gamma_{ij}$ and given by
$g_{\mu\nu}\,\mathrm{d}x^{\mu}\,\mathrm{d}x^{\nu} = \left(-\,N^{2} + N_{k}N^{k}\right)\mathrm{d}t^{2} + 2N_{k}\,\mathrm{d}x^{k}\,\mathrm{d}t + \gamma_{ij}\,\mathrm{d}x^{i}\,\mathrm{d}x^{j}.$
In that equation the Latin indices run over the values 1, 2, 3, and the Greek indices run over the values 1, 2, 3, 4. The three-metric $\gamma_{ij}$ is the field, and we denote its conjugate momenta as $\pi^{ij}$. The Hamiltonian is a constraint (characteristic of most relativistic systems):
$\mathcal{H} = \frac{1}{2\sqrt{\gamma}}\,G_{ijkl}\,\pi^{ij}\pi^{kl} - \sqrt{\gamma}\,{}^{(3)}\!R = 0,$
where $\gamma = \det(\gamma_{ij})$ and $G_{ijkl} = \gamma_{ik}\gamma_{jl} + \gamma_{il}\gamma_{jk} - \gamma_{ij}\gamma_{kl}$ is the Wheeler–DeWitt metric. In index-free notation, the Wheeler–DeWitt metric on the space of positive definite quadratic forms $g$ in three dimensions is
$\operatorname{tr}\!\left[(g^{-1}\,\mathrm{d}g)^{2}\right] - \left(\operatorname{tr}\!\left[g^{-1}\,\mathrm{d}g\right]\right)^{2}.$
Quantization "puts hats" on the momenta and field variables; that is, the functions of numbers in the classical case become operators that modify the state function in the quantum case. Thus we obtain the operator
Working in "position space", these operators are
One can apply the operator to a general wave functional of the metric, $\Psi[\gamma_{ij}]$,
which would give a set of constraints amongst the expansion coefficients of $\Psi$. This means that the amplitudes for gravitons at certain positions are related to the amplitudes for a different number of gravitons at different positions. Or, one could use the two-field formalism, treating the second field as an independent field, so that the wave function is a functional of both fields.
Mathematical formalism
The Wheeler–DeWitt equation is a functional differential equation. It is ill-defined in the general case, but very important in theoretical physics, especially in quantum gravity. It is a functional differential equation on the space of three-dimensional spatial metrics. The Wheeler–DeWitt equation has the form of an operator acting on a wave functional; the functional reduces to a function in cosmology. Contrary to the general case, the Wheeler–DeWitt equation is well defined in minisuperspaces like the configuration space of cosmological theories. An example of such a wave function is the Hartle–Hawking state. Bryce DeWitt first published this equation in 1967 under the name "Einstein–Schrödinger equation"; it was later renamed the "Wheeler–DeWitt equation".
Hamiltonian constraint
Simply speaking, the Wheeler–DeWitt equation says
$\hat{\mathcal{H}}(x)\,|\psi\rangle = 0,$
where $\hat{\mathcal{H}}(x)$ is the Hamiltonian constraint in quantized general relativity, and $|\psi\rangle$ stands for the wave function of the universe. Unlike ordinary quantum field theory or quantum mechanics, the Hamiltonian is a first-class constraint on physical states. We also have an independent constraint for each point in space.
Although the symbols $\hat{\mathcal{H}}$ and $|\psi\rangle$ may appear familiar, their interpretation in the Wheeler–DeWitt equation is substantially different from non-relativistic quantum mechanics. $|\psi\rangle$ is no longer a spatial wave function in the traditional sense of a complex-valued function that is defined on a 3-dimensional space-like surface and normalized to unity. Instead it is a functional of field configurations on all of spacetime. This wave function contains all of the information about the geometry and matter content of the universe. $\hat{\mathcal{H}}$ is still an operator that acts on the Hilbert space of wave functions, but it is not the same Hilbert space as in the nonrelativistic case, and the Hamiltonian no longer determines the evolution of the system, so the Schrödinger equation no longer applies. This property is known as timelessness. Various attempts to incorporate time in a fully quantum framework have been made, starting with the "Page and Wootters mechanism" and other subsequent proposals. The reemergence of time was also proposed as arising from quantum correlations between an evolving system and a reference quantum clock system; here the concept of system-time entanglement serves as a quantifier of the actual distinguishable evolution undergone by the system.
Momentum constraint
We also need to augment the Hamiltonian constraint with momentum constraints
$\vec{\mathcal{P}}(x)\,|\psi\rangle = 0,$
associated with spatial diffeomorphism invariance.
In minisuperspace approximations, we only have one Hamiltonian constraint (instead of infinitely many of them).
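For illustration, a standard quantum-cosmology sketch that is not part of this article (the units and operator ordering below are a common but non-unique choice): for a closed FLRW universe with scale factor $a$ and cosmological constant $\Lambda$, the Wheeler–DeWitt equation reduces to a single ordinary differential equation.

```latex
% Minisuperspace Wheeler--DeWitt equation (illustrative; the overall
% normalization and factor ordering are convention dependent):
\left[\frac{\mathrm{d}^{2}}{\mathrm{d}a^{2}} - U(a)\right]\psi(a) = 0,
\qquad
U(a) = a^{2}\left(1 - \frac{\Lambda}{3}\,a^{2}\right).
```

Proposals such as the Hartle–Hawking state correspond to particular boundary conditions imposed on $\psi(a)$ in this equation.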
In fact, the principle of general covariance in general relativity implies that global evolution per se does not exist; the time is just a label we assign to one of the coordinate axes. Thus, what we think of as time evolution of any physical system is just a gauge transformation, similar to that induced in QED by a local U(1) gauge transformation, where $\theta(t)$ plays the role of local time. The role of a Hamiltonian is simply to restrict the space of the "kinematic" states of the Universe to that of "physical" states—the ones that follow gauge orbits. For this reason we call it a "Hamiltonian constraint". Upon quantization, physical states become wave functions that lie in the kernel of the Hamiltonian operator.
In general, the Hamiltonian vanishes for a theory with general covariance or time-scaling invariance.
See also
ADM formalism
Diffeomorphism constraint
Euclidean quantum gravity
Regge calculus
Canonical quantum gravity
Peres metric
Loop quantum gravity
References
Eponymous equations of physics
Quantum gravity | Wheeler–DeWitt equation | [
"Physics"
] | 1,166 | [
"Equations of physics",
"Unsolved problems in physics",
"Eponymous equations of physics",
"Quantum gravity",
"Physics beyond the Standard Model"
] |
19,651,210 | https://en.wikipedia.org/wiki/Ontological%20maximalism | In philosophy, ontological maximalism (or metaontological maximalism) is an ontological realist position that asserts, "whatever can exist does in some sense exist".
Overview
Meta-ontology deals with questions related to ontology, such as whether there are mind-independent (objective) answers to "what exists". Ontological realism asserts that reality (at least a part of it) is independent of the human mind. In contrast to realists, ontological anti-realists deny that the world is mind-independent. Believing the epistemological and semantic problems to be insoluble, they conclude that realism must be false.
Maximalism is one of two main metaontological positions. In a maximalist framework, any entity whose existence is consistent with the nature of this world can be taken to exist.
See also
Ontology
Large cardinal property
Continuum hypothesis
Mathematical universe hypothesis
Modal realism
References
Set theory
Ontology | Ontological maximalism | [
"Mathematics"
] | 188 | [
"Mathematical logic",
"Set theory"
] |
19,660,216 | https://en.wikipedia.org/wiki/Meyer%20hardness%20test | The Meyer hardness test is a hardness test based upon the projected area of an impression. The hardness, $H$, is defined as the maximum load, $P_{\max}$, divided by the projected area of the indent, $A_p$: $H = P_{\max}/A_p$. For a spherical indenter leaving an impression of diameter $d$, the projected area is $A_p = \pi d^{2}/4$.
This is a more fundamental measurement of hardness than other hardness tests, which are based on the surface area of an indentation. The principle behind the test is that the mean pressure over the projected area of the indentation is taken as the measure of the material's hardness. Units of megapascals (MPa) are frequently used for reporting Meyer hardness, but any unit of pressure can be used.
The test was originally defined for spherical indenters, but can be applied to any indenter shape. It is often the definition used in nanoindentation testing. An advantage of the Meyer test is that it is less sensitive to the applied load, especially compared to the Brinell hardness test. For cold worked materials the Meyer hardness is relatively constant and independent of load, whereas for the Brinell hardness test it decreases with higher loads. For annealed materials the Meyer hardness increases continuously with load due to strain hardening.
Based on Meyer's law, hardness values from this test can be converted into Brinell hardness values, and vice versa.
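For concreteness, the following short script is an illustrative sketch (the function names, units, and example load and diameters are choices made for this example, not values from the article). It computes the Meyer number from the projected area and the Brinell number from the curved surface area of the same impression:

```python
import math

def meyer_hardness(load_kgf: float, d_mm: float) -> float:
    """Meyer hardness: load divided by the *projected* area of the
    indent, A_p = pi * d^2 / 4, for an impression of diameter d."""
    return load_kgf / (math.pi * d_mm**2 / 4.0)

def brinell_hardness(load_kgf: float, D_mm: float, d_mm: float) -> float:
    """Brinell hardness: load divided by the *curved surface* area of
    the spherical cap left by a ball indenter of diameter D."""
    cap_area = math.pi * D_mm * (D_mm - math.sqrt(D_mm**2 - d_mm**2)) / 2.0
    return load_kgf / cap_area

# Example: 3000 kgf load, 10 mm ball, 4 mm impression diameter.
P, D, d = 3000.0, 10.0, 4.0
print(f"Meyer:   {meyer_hardness(P, d):.0f} kgf/mm^2")    # ~239
print(f"Brinell: {brinell_hardness(P, D, d):.0f} kgf/mm^2")  # ~229
```

For the same impression the Meyer number is slightly higher, because the projected area is smaller than the curved cap area used by the Brinell test.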
The Meyer hardness test was devised by Eugene Meyer of the Materials Testing Laboratory at the Imperial School of Technology, Charlottenburg, Germany, circa 1908.
See also
Brinelling
Hardness comparison
Knoop hardness test
Leeb rebound hardness test
Rockwell scale
Vickers hardness test
References
Bibliography
.
Hardness tests | Meyer hardness test | [
"Materials_science",
"Engineering"
] | 303 | [
"Hardness tests",
"Materials testing",
"Mechanical engineering stubs",
"Mechanical engineering"
] |
19,662,199 | https://en.wikipedia.org/wiki/Convection%E2%80%93diffusion%20equation | The convection–diffusion equation is a parabolic partial differential equation that combines the diffusion and convection (advection) equations. It describes physical phenomena where particles, energy, or other physical quantities are transferred inside a physical system due to two processes: diffusion and convection. Depending on context, the same equation can be called the advection–diffusion equation, drift–diffusion equation, or (generic) scalar transport equation.
Equation
The general equation in conservative form is
$\frac{\partial c}{\partial t} = \nabla\cdot(D\,\nabla c) - \nabla\cdot(\mathbf{v}\,c) + R,$
where
$c$ is the variable of interest (species concentration for mass transfer, temperature for heat transfer),
$D$ is the diffusivity (also called diffusion coefficient), such as mass diffusivity for particle motion or thermal diffusivity for heat transport,
$\mathbf{v}$ is the velocity field that the quantity is moving with. It is a function of time and space. For example, in advection, $c$ might be the concentration of salt in a river, and then $\mathbf{v}$ would be the velocity of the water flow as a function of time and location. Another example: $c$ might be the concentration of small bubbles in a calm lake, and then $\mathbf{v}$ would be the velocity of bubbles rising towards the surface by buoyancy (see below) depending on time and location of the bubble. For multiphase flows and flows in porous media, $\mathbf{v}$ is the (hypothetical) superficial velocity.
$R$ describes sources or sinks of the quantity $c$, i.e. the creation or destruction of the quantity. For example, for a chemical species, $R > 0$ means that a chemical reaction is creating more of the species, and $R < 0$ means that a chemical reaction is destroying the species. For heat transport, $R > 0$ might occur if thermal energy is being generated by friction.
$\nabla$ represents gradient and $\nabla\cdot$ represents divergence. In this equation, $\nabla c$ represents the concentration gradient.
In general, $D$, $\mathbf{v}$, and $R$ may vary with space and time. In cases in which they depend on concentration as well, the equation becomes nonlinear, giving rise to many distinctive mixing phenomena such as Rayleigh–Bénard convection when $\mathbf{v}$ depends on temperature in the heat transfer formulation and reaction–diffusion pattern formation when $R$ depends on concentration in the mass transfer formulation.
Often there are several quantities, each with its own convection–diffusion equation, where the destruction of one quantity entails the creation of another. For example, when methane burns, it involves not only the destruction of methane and oxygen but also the creation of carbon dioxide and water vapor. Therefore, while each of these chemicals has its own convection–diffusion equation, they are coupled together and must be solved as a system of differential equations.
Derivation
The convection–diffusion equation can be derived in a straightforward way from the continuity equation, which states that the rate of change for a scalar quantity in a differential control volume is given by flow and diffusion into and out of that part of the system along with any generation or consumption inside the control volume:
$\frac{\partial c}{\partial t} + \nabla\cdot\mathbf{j} = R,$
where $\mathbf{j}$ is the total flux and $R$ is a net volumetric source for $c$. There are two sources of flux in this situation. First, diffusive flux arises due to diffusion. This is typically approximated by Fick's first law:
$\mathbf{j}_{\text{diff}} = -D\,\nabla c,$
i.e., the flux of the diffusing material (relative to the bulk motion) in any part of the system is proportional to the local concentration gradient. Second, when there is overall convection or flow, there is an associated flux called advective flux:
$\mathbf{j}_{\text{adv}} = \mathbf{v}\,c.$
The total flux (in a stationary coordinate system) is given by the sum of these two:
$\mathbf{j} = \mathbf{j}_{\text{diff}} + \mathbf{j}_{\text{adv}} = -D\,\nabla c + \mathbf{v}\,c.$
Plugging into the continuity equation:
$\frac{\partial c}{\partial t} + \nabla\cdot\left(-D\,\nabla c + \mathbf{v}\,c\right) = R.$
Common simplifications
In a common situation, the diffusion coefficient is constant, there are no sources or sinks, and the velocity field describes an incompressible flow (i.e., it has zero divergence). Then the formula simplifies to:
$\frac{\partial c}{\partial t} = D\,\nabla^{2}c - \mathbf{v}\cdot\nabla c.$
In this case the equation can be put in the simple diffusion form:
$\frac{\mathrm{D}c}{\mathrm{D}t} = D\,\nabla^{2}c,$
where the derivative on the left-hand side is the material derivative of the variable $c$.
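To make the constant-coefficient form concrete, the following one-dimensional finite-difference integrator is a minimal illustrative sketch (the explicit scheme, periodic boundaries, and all parameter values are choices made for this example, not part of the article). It uses central differences for the diffusion term and first-order upwinding for the advection term:

```python
import numpy as np

# Parameters (illustrative choices)
L, N = 1.0, 200          # domain length, number of grid points
D, v = 1e-3, 0.5         # diffusivity, advection velocity (v > 0)
dx = L / N
dt = 0.4 * min(dx * dx / (2 * D), dx / abs(v))  # below both stability limits

x = np.linspace(0.0, L, N)
c = np.exp(-((x - 0.2) ** 2) / 0.002)           # initial Gaussian pulse

def step(c):
    """One explicit step: central diffusion + upwind advection."""
    cp = np.roll(c, -1)   # c[i+1] (periodic boundaries)
    cm = np.roll(c, 1)    # c[i-1]
    diffusion = D * (cp - 2 * c + cm) / dx**2
    advection = -v * (c - cm) / dx   # upwind difference, valid for v > 0
    return c + dt * (diffusion + advection)

for _ in range(int(0.5 / dt)):      # integrate to t = 0.5
    c = step(c)

print(f"peak {c.max():.3f} at x = {x[np.argmax(c)]:.2f}")  # pulse advected and spread
```

The time step is deliberately kept below both the diffusive stability limit and the advective (CFL) limit of the explicit scheme.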
In non-interacting material, $D = 0$ (for example, when temperature is close to absolute zero, dilute gas has almost zero mass diffusivity), hence the transport equation is simply the continuity equation:
$\frac{\partial c}{\partial t} + \mathbf{v}\cdot\nabla c = 0.$
Using a Fourier transform in both the temporal and spatial domain (that is, with integral kernel $e^{i\omega t + i\mathbf{k}\cdot\mathbf{x}}$), its characteristic equation can be obtained:
$i\omega\,\tilde{c} + i\,\mathbf{v}\cdot\mathbf{k}\,\tilde{c} = 0 \quad\Longrightarrow\quad \omega = -\,\mathbf{v}\cdot\mathbf{k},$
which gives the general solution:
$c(\mathbf{x}, t) = f(\mathbf{x} - \mathbf{v}\,t),$
where $f$ is any differentiable scalar function. This is the basis of temperature measurement for near Bose–Einstein condensate via the time-of-flight method.
Stationary version
The stationary convection–diffusion equation describes the steady-state behavior of a convection–diffusion system. In a steady state, $\partial c/\partial t = 0$, so the equation to solve becomes the second-order equation:
$\nabla\cdot\left(-D\,\nabla c + \mathbf{v}\,c\right) = R.$
In one spatial dimension, the equation can be written as
$\frac{\mathrm{d}}{\mathrm{d}x}\!\left(-D\,\frac{\mathrm{d}c}{\mathrm{d}x} + v\,c\right) = R,$
which can be integrated once in the space variable $x$ to give:
$-D\,\frac{\mathrm{d}c}{\mathrm{d}x} + v\,c = \int R\,\mathrm{d}x + F,$
where $F$ is a constant of integration. Where $D$ is not zero, this is an inhomogeneous first-order linear differential equation with variable coefficients in the variable $c(x)$:
$c'(x) = a(x)\,c(x) + b(x),$
where the coefficients are:
$a(x) = \frac{v(x)}{D(x)}$
and:
$b(x) = -\,\frac{\int R\,\mathrm{d}x + F}{D(x)}.$
On the other hand, in the positions $x$ where $D = 0$, the first-order diffusion term disappears and the solution becomes simply the ratio:
$c(x) = \frac{\int R\,\mathrm{d}x + F}{v(x)}.$
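As a worked special case (supplementary to the article; the constants are left symbolic): for constant $D$ and $v$ with $R = 0$, the once-integrated equation $-D\,c' + v\,c = F$ has the general solution

```latex
c(x) = \frac{F}{v} + B\, e^{\,v x / D},
```

so steady convection–diffusion profiles are exponential boundary layers whose characteristic thickness scales as $D/v$; the constants $F$ and $B$ are fixed by the boundary conditions.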
Velocity in response to a force
In some cases, the average velocity field exists because of a force; for example, the equation might describe the flow of ions dissolved in a liquid, with an electric field pulling the ions in some direction (as in gel electrophoresis). In this situation, it is usually called the drift–diffusion equation or the Smoluchowski equation, after Marian Smoluchowski who described it in 1915 (not to be confused with the Einstein–Smoluchowski relation or Smoluchowski coagulation equation).
Typically, the average velocity is directly proportional to the applied force, giving the equation:
$\frac{\partial c}{\partial t} = \nabla\cdot(D\,\nabla c) - \nabla\cdot\!\left(\frac{\mathbf{F}}{\zeta}\,c\right) + R,$
where $\mathbf{F}$ is the force, and $\zeta$ characterizes the friction or viscous drag. (The inverse, $\zeta^{-1}$, is called mobility.)
Derivation of Einstein relation
When the force is associated with a potential energy, $\mathbf{F} = -\nabla U$ (see conservative force), a steady-state solution to the above equation (i.e. with $\partial c/\partial t = 0$ and $R = 0$) is:
$c \propto \exp\!\left(-\,\frac{U}{D\,\zeta}\right)$
(assuming $D$ and $\zeta$ are constant). In other words, there are more particles where the energy is lower. This concentration profile is expected to agree with the Boltzmann distribution (more precisely, the Gibbs measure), $c \propto \exp(-U/k_{\mathrm B}T)$. From this assumption, the Einstein relation can be proven:
$D\,\zeta = k_{\mathrm B}\,T.$
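The flux balance behind this result can be made explicit (a supplementary sketch using only quantities defined above): in steady state the diffusive and drift fluxes cancel pointwise,

```latex
-D\,\nabla c + \frac{\mathbf{F}}{\zeta}\,c = 0,
\qquad \mathbf{F} = -\nabla U
\;\Longrightarrow\;
\nabla \ln c = -\frac{\nabla U}{D\,\zeta}
\;\Longrightarrow\;
c \propto e^{-U/(D\zeta)} .
```

Matching the exponent against the Boltzmann factor $e^{-U/k_{\mathrm B}T}$ then gives $D\zeta = k_{\mathrm B}T$.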
Similar equations in other contexts
The convection–diffusion equation is a relatively simple equation describing flows, or alternatively, describing a stochastically-changing system. Therefore, the same or similar equation arises in many contexts unrelated to flows through space.
It is formally identical to the Fokker–Planck equation for the velocity of a particle.
It is closely related to the Black–Scholes equation and other equations in financial mathematics.
It is closely related to the Navier–Stokes equations, because the flow of momentum in a fluid is mathematically similar to the flow of mass or energy. The correspondence is clearest in the case of an incompressible Newtonian fluid, in which case the Navier–Stokes equation is:
$\frac{\partial\mathbf{M}}{\partial t} = \nu\,\nabla^{2}\mathbf{M} - \mathbf{v}\cdot\nabla\mathbf{M} - \nabla P + \mathbf{f},$
where $\mathbf{M}$ is the momentum of the fluid (per unit volume) at each point (equal to the density $\rho$ multiplied by the velocity $\mathbf{v}$), $\nu$ is the kinematic viscosity, $P$ is fluid pressure, and $\mathbf{f}$ is any other body force such as gravity. In this equation, the term on the left-hand side describes the change in momentum at a given point; the first term on the right describes the diffusion of momentum by viscosity; the second term on the right describes the advective flow of momentum; and the last two terms on the right describe the external and internal forces which can act as sources or sinks of momentum.
In probability theory
The convection–diffusion equation (with $R = 0$) can be viewed as a stochastic differential equation, describing random motion with diffusivity $D$ and bias $\mathbf{v}$. For example, the equation can describe the Brownian motion of a single particle, where the variable $c$ describes the probability distribution for the particle to be in a given position at a given time. The reason the equation can be used that way is because there is no mathematical difference between the probability distribution of a single particle, and the concentration profile of a collection of infinitely many particles (as long as the particles do not interact with each other).
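To make the correspondence concrete, here is a minimal Monte Carlo check (an illustrative sketch; the parameter values and random seed are arbitrary choices). It simulates the associated stochastic process and recovers the drift and spread predicted by the PDE:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama simulation of dX = v dt + sqrt(2 D) dW, whose
# probability density solves dc/dt = D c'' - v c' (1D, R = 0).
D, v, dt, steps, walkers = 1e-3, 0.5, 1e-3, 500, 100_000
X = np.zeros(walkers)                 # all walkers start at x = 0

for _ in range(steps):
    X += v * dt + np.sqrt(2 * D * dt) * rng.normal(size=walkers)

t = steps * dt
print(f"mean {X.mean():.3f} (theory {v * t:.3f})")      # drift v*t
print(f"var  {X.var():.4f} (theory {2 * D * t:.4f})")   # spread 2*D*t
```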
The Langevin equation describes advection, diffusion, and other phenomena in an explicitly stochastic way. One of the simplest forms of the Langevin equation is when its "noise term" is Gaussian; in this case, the Langevin equation is exactly equivalent to the convection–diffusion equation. However, the Langevin equation is more general.
In semiconductor physics
In semiconductor physics, this equation is called the drift–diffusion equation. The word "drift" is related to drift current and drift velocity. The equation is normally written:
$\mathbf{J}_{n} = q\,\mu_{n}\,n\,\mathbf{E} + q\,D_{n}\,\nabla n, \qquad \mathbf{J}_{p} = q\,\mu_{p}\,p\,\mathbf{E} - q\,D_{p}\,\nabla p,$
$\frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla\cdot\mathbf{J}_{n} + R, \qquad \frac{\partial p}{\partial t} = -\,\frac{1}{q}\,\nabla\cdot\mathbf{J}_{p} + R,$
where
$n$ and $p$ are the concentrations (densities) of electrons and holes, respectively,
$q$ is the elementary charge,
$\mathbf{J}_{n}$ and $\mathbf{J}_{p}$ are the electric currents due to electrons and holes respectively,
$\mathbf{J}_{n}/(-q)$ and $\mathbf{J}_{p}/q$ are the corresponding "particle currents" of electrons and holes respectively,
$R$ represents carrier generation and recombination ($R > 0$ for generation of electron-hole pairs, $R < 0$ for recombination),
$\mathbf{E}$ is the electric field vector,
$\mu_{n}$ and $\mu_{p}$ are the electron and hole mobility.
The diffusion coefficient and mobility are related by the Einstein relation as above:
$D_{n} = \frac{\mu_{n}\,k_{\mathrm B}T}{q}, \qquad D_{p} = \frac{\mu_{p}\,k_{\mathrm B}T}{q},$
where $k_{\mathrm B}$ is the Boltzmann constant and $T$ is absolute temperature. The drift current and diffusion current refer separately to the two terms in the expressions for $\mathbf{J}_{n}$ and $\mathbf{J}_{p}$, namely:
$\mathbf{J}_{n,\text{drift}} = q\,\mu_{n}\,n\,\mathbf{E}, \qquad \mathbf{J}_{n,\text{diff}} = q\,D_{n}\,\nabla n,$
$\mathbf{J}_{p,\text{drift}} = q\,\mu_{p}\,p\,\mathbf{E}, \qquad \mathbf{J}_{p,\text{diff}} = -\,q\,D_{p}\,\nabla p.$
This equation can be solved together with Poisson's equation numerically.
An example of the results of solving the drift–diffusion equation: when light shines on the center of a semiconductor, carriers are generated in the middle and diffuse towards the two ends. The drift–diffusion equation is solved in this structure, and the resulting electron density distribution shows the gradient of carriers from the center towards the two ends.
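A small numerical illustration of the Einstein relation above (a sketch; the silicon mobility figure is a typical textbook value, not taken from this article):

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
q   = 1.602176634e-19   # elementary charge, C

def thermal_voltage(T: float) -> float:
    """kT/q in volts."""
    return k_B * T / q

def diffusivity(mu: float, T: float = 300.0) -> float:
    """Einstein relation D = mu * kT/q (units follow the mobility)."""
    return mu * thermal_voltage(T)

mu_n = 1400.0  # electron mobility in Si, cm^2/(V*s), typical value
print(f"V_T at 300 K : {thermal_voltage(300.0)*1e3:.2f} mV")   # ~25.85 mV
print(f"D_n          : {diffusivity(mu_n):.1f} cm^2/s")        # ~36 cm^2/s
```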
See also
Advanced Simulation Library
Buckley–Leverett equation
Burgers' equation
Conservation equations
Double diffusive convection
Incompressible Navier–Stokes equations
Natural convection
Nernst–Planck equation
Numerical solution of the convection–diffusion equation
Notes
References
Further reading
Diffusion
Parabolic partial differential equations
Stochastic differential equations
Transport phenomena
Equations of physics
Functions of space and time | Convection–diffusion equation | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 2,037 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Equations of physics",
"Functions of space and time",
"Chemical engineering",
"Mathematical objects",
"Equations",
"Spacetime"
] |
10,600,606 | https://en.wikipedia.org/wiki/Fritz%20Sauter | Fritz Eduard Josef Maria Sauter (9 June 1906 – 24 May 1983) was an Austrian-German physicist who worked mostly in quantum electrodynamics and solid-state physics.
Education
From 1924 to 1928, Sauter studied mathematics and physics at the Leopold-Franzens-Universität Innsbruck. He received his doctorate in 1928 under Arthur March, with a thesis on Kirchhoff’s theory of diffraction. After graduation, he did postdoctoral studies with Arnold Sommerfeld and was his assistant at the Ludwig Maximilian University of Munich. In January 1931, Sommerfeld recommended Sauter to Max Born, director of the Institute of Theoretical Physics at the University of Göttingen.
Career
From 1931 to 1934, Sauter was an assistant to Richard Becker at the Technische Hochschule Berlin (today Technische Universität Berlin) in Charlottenburg. From 1933, he was also a lecturer at Berlin. While at Berlin, he did work on atomic physics and Dirac’s theory of electrons related to Klein's paradox.
Adolf Hitler came to power in Germany on 30 January 1933 and Max Born took leave as director of the Institute of Theoretical Physics at the University of Göttingen on 1 July of that year and emigrated to England. In 1934, Sauter, while only a Privatdozent, was brought in to Göttingen as acting director of the Institute of Theoretical Physics and lecturer on theoretical physics; Born was officially retired under the Nuremberg Laws on 31 December 1935. Sauter continued in this role until 1936, when Becker was appointed director, after the Reichserziehungsministerium (Reich Education Ministry) eliminated his position at Berlin and reassigned him to Göttingen.
After Göttingen, Sauter took a teaching assignment and became acting director of the theoretical physics department at the University of Königsberg. In 1939, he became ordinarius professor of theoretical physics and director of the theoretical physics department at Königsberg. From 1942 to 1945, Sauter was ordinarius professor of theoretical physics at the Ludwig Maximilian University of Munich.
From 1950 to 1951, Sauter had a teaching assignment and was substitute director of the theoretical physics department at Technische Hochschule Hanover. From 1951 to 1952, he had a teaching assignment at Göttingen and Bamberg Universities. In 1952, he became ordinarius professor and director of the theoretical physics department at the University of Cologne, which he held until achieving emeritus status in 1971.
Having been a student of Sommerfeld, Sauter was a superb mathematician. He wrote his own book on differential equations of physics, and, after Sommerfeld’s death in 1951, Sauter was editor on the 4th, 5th, and 6th editions of Sommerfeld’s book on the same subject, and he was also editor of the four volume, collected works of Sommerfeld. Sauter was also editor of books by Becker, with whom he had been an assistant in Berlin.
Bibliography
Articles
Fritz Sauter Über das Verhalten eines Elektrons im homogenen elektrischen Feld nach der relativistischen Theorie Diracs, Zeitschrift für Physik 69 (11-12) 742-764 (1931). Author cited as being at Munich.
Fritz Sauter Über die Bremsstrahlung schneller Elektronen Annalen der Physik 412 (4) 404-412 (1934)
Books
Fritz Sauter Differentialgleichungen der Physik (de Gruyter, 1950, 1958, and 1966)
Arnold Sommerfeld, author and Fritz Sauter, editor Vorlesungen über theoretische Physik. Band 6: Partielle Differentialgleichungen der Physik. 4. Auflage, bearbeitet und ergänzt (Akademische Verlagsgesellschaft, 1958)
Arnold Sommerfeld, author and Fritz Sauter, editor Vorlesungen über theoretische Physik. Bd. 6. Partielle Differentialgleichungen der Physik. 5. Auflage, bearbeitet und ergänzt (Akademische Verlagsgesellschaft, 1962)
Arnold Sommerfeld, author and Fritz Sauter, editor Vorlesungen über theoretische Physik. Band 6: Partielle Differentialgleichungen der Physik. 5. Auflage, bearbeitet und ergänzt (Akademische Verlagsgesellschaft, 1962)
Richard Becker, author and Fritz Sauter, editor Theorie der Elektrizität. Bd. 1. Einführung in die Maxwellsche Theorie (Teubner, 1957, 1962, 1964, and 1969)
Richard Becker, author, Fritz Sauter, editor, and Ivor De Teissier, translator Electromagnetic Fields and Interactions, Volume I: Electromagnetic Theory and Relativity (Blaisdell, 1964)
Richard Becker, author and Fritz Sauter, editor Theorie der Elektrizität. Bd. 2. Einführung in die Quantentheorie der Atome und der Strahlung (Teubner, 1959, 1963, 1970, and 1997)
Richard Becker, author, Fritz Sauter, editor, and Ivor De Teissier, translator Electromagnetic Fields and Interactions, Volume II: Quantum Theory of Atoms and Radiation (Blaisdell, 1964)
Arnold Sommerfeld, author and Fritz Sauter, editor Vorlesungen über theoretische Physik. Band 6: Partielle Differentialgleichungen der Physik. 6. Auflage, bearbeitet und ergänzt (Akademische Verlagsgesellschaft, 1966)
Fritz Sauter, editor Arnold Sommerfeld: Gesammelte Schriften, 4 Volumes (Braunschweig, 1968)
Richard Becker, author and Fritz Sauter, editor Theorie der Elektrizität. Bd. 3. Elektrodynamik der Materie (Teubner, 1969)
References
Further reading
Beyerchen, Alan D. Scientists Under Hitler: Politics and the Physics Community in the Third Reich (Yale, 1977)
Hentschel, Klaus, editor and Ann M. Hentschel, editorial assistant and Translator Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996)
Hoffmann, Dieter Between Autonomy and Accommodation: The German Physical Society during the Third Reich, Physics in Perspective 7(3) 293-329 (2005)
1906 births
1983 deaths
20th-century Austrian physicists
20th-century German physicists
German theoretical physicists
Theoretical physicists
University of Innsbruck alumni
Ludwig Maximilian University of Munich alumni
Academic staff of the Ludwig Maximilian University of Munich
Academic staff of the University of Königsberg
Academic staff of Technische Universität Berlin
Academic staff of the University of Göttingen
Academic staff of the University of Bamberg
Academic staff of the University of Hanover
Academic staff of the University of Cologne
Austrian emigrants to Germany | Fritz Sauter | [
"Physics"
] | 1,431 | [
"Theoretical physics",
"Theoretical physicists"
] |
10,601,618 | https://en.wikipedia.org/wiki/Protomer | In structural biology, a protomer is the structural unit of an oligomeric protein. It is the smallest unit composed of at least one protein chain. The protomers associate to form a larger oligomer of two or more copies of this unit. Protomers usually arrange in cyclic symmetry to form closed point group symmetries.
The term was introduced by Chetverin to make nomenclature in the Na/K-ATPase enzyme unambiguous. This enzyme is composed of two subunits: a large, catalytic α subunit, and a smaller glycoprotein β subunit (plus a proteolipid, called the γ-subunit). At the time it was unclear how many of each worked together. In addition, when people spoke of a dimer, it was unclear whether they were referring to αβ or to (αβ)2. Chetverin suggested calling αβ a protomer and (αβ)2 a diprotomer. Thus, in the work by Chetverin the term protomer was only applied to a hetero-oligomer and subsequently used mainly in the context of hetero-oligomers. Following this usage, a protomer consists of at least two different protein chains. In the current structural biology literature, the term is commonly also applied to the smallest unit of homo-oligomers, avoiding the term "monomer".
In chemistry, a so-called protomer is a molecule that displays tautomerism due to the position of a proton.
Examples
Hemoglobin is a heterotetramer consisting of four subunits (two α and two β). However, structurally and functionally hemoglobin is described better as (αβ)2, so we call it a dimer of two αβ-protomers, that is, a diprotomer.
Aspartate carbamoyltransferase has an α6β6 subunit composition. The six αβ-protomers are arranged in D3 symmetry.
Viral capsids are usually composed of protomers.
HIV-1 protease forms a homodimer consisting of two protomers.
Examples in chemistry include tyrosine and 4-aminobenzoic acid. The former may be deprotonated to form the carboxylate and phenoxide anions, and the latter may be protonated at the amino or carboxyl groups.
References
External links
Structural biology
Polymer chemistry | Protomer | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 502 | [
"Biochemistry",
"Structural biology",
"Materials science",
"Polymer chemistry"
] |
10,601,794 | https://en.wikipedia.org/wiki/Carbon%E2%80%93fluorine%20bond | The carbon–fluorine bond is a polar covalent bond between carbon and fluorine that is a component of all organofluorine compounds. It is one of the strongest single bonds in chemistry (after the B–F single bond, Si–F single bond, and H–F single bond), and relatively short, due to its partial ionic character. The bond also strengthens and shortens as more fluorines are added to the same carbon on a chemical compound. As such, fluoroalkanes like tetrafluoromethane (carbon tetrafluoride) are some of the most unreactive organic compounds.
Electronegativity and bond strength
The high electronegativity of fluorine (4.0 for fluorine vs. 2.5 for carbon) gives the carbon–fluorine bond a significant polarity or dipole moment. The electron density is concentrated around the fluorine, leaving the carbon relatively electron poor. This introduces ionic character to the bond through partial charges (Cδ+—Fδ−). The partial charges on the fluorine and carbon are attractive, contributing to the unusual bond strength of the carbon–fluorine bond. The bond is labeled as "the strongest in organic chemistry," because fluorine forms the strongest single bond to carbon. Carbon–fluorine bonds can have a bond dissociation energy (BDE) of up to 130 kcal/mol. The BDE (strength of the bond) of C–F is higher than other carbon–halogen and carbon–hydrogen bonds. For example, the BDEs of the C–X bond within a CH3–X molecule is 115, 104.9, 83.7, 72.1, and 57.6 kcal/mol for X = fluorine, hydrogen, chlorine, bromine, and iodine, respectively.
Bond length
The carbon–fluorine bond length is typically about 1.35 ångström (1.39 Å in fluoromethane). It is shorter than any other carbon–halogen bond, and shorter than single carbon–nitrogen and carbon–oxygen bonds. The short length of the bond can also be attributed to the ionic character of the bond (the electrostatic attractions between the partial charges on the carbon and the fluorine). The carbon–fluorine bond length varies by several hundredths of an ångstrom depending on the hybridization of the carbon atom and the presence of other substituents on the carbon or even in atoms farther away. These fluctuations can be used as indication of subtle hybridization changes and stereoelectronic interactions. The table below shows how the average bond length varies in different bonding environments (carbon atoms are sp3-hybridized unless otherwise indicated for sp2 or aromatic carbon).
{| class="wikitable"
|-
! Bond !! Mean bond length (Å)
|-
| CCH2F, C2CHF || 1.399
|-
| C3CF || 1.428
|-
| C2CF2, H2CF2, CCHF2 || 1.349
|-
| CCF3 || 1.346
|-
| FCNO2 || 1.320
|-
| FCCF || 1.371
|-
| Csp2F || 1.340
|-
| CarF || 1.363
|-
| FCarCarF || 1.340
|}
The variability in bond lengths and the shortening of bonds to fluorine due to their partial ionic character are also observed for bonds between fluorine and other elements, and have been a source of difficulties with the selection of an appropriate value for the covalent radius of fluorine. Linus Pauling originally suggested 64 pm, but that value was eventually replaced by 72 pm, which is half of the fluorine–fluorine bond length. However, 72 pm is too long to be representative of the lengths of the bonds between fluorine and other elements, so values between 54 pm and 60 pm have been suggested by other authors.
Bond strength effect of geminal bonds
With increasing number of fluorine atoms on the same (geminal) carbon the other bonds become stronger and shorter. This can be seen by the changes in bond length and strength (BDE) for the fluoromethane series, as shown on the table below; also, the partial charges (qC and qF) on the atoms change within the series. The partial charge on carbon becomes more positive as fluorines are added, increasing the electrostatic interactions, and ionic character, between the fluorines and carbon.
{| class="wikitable"
|-
!Compound
!C-F bond length (Å)
!BDE (kcal/mol)
!qC
!qF
|-
|CH3F
|1.385
|109.9 ± 1
|0.01
| −0.23
|-
|CH2F2
|1.357
|119.5
|0.40
| −0.23
|-
|CHF3
|1.332
|127.5
|0.56
| −0.21
|-
|CF4
|1.319
|130.5 ± 3
|0.72
| −0.18
|}
Gauche effect
When two fluorine atoms are in vicinal (i.e., adjacent) carbons, as in 1,2-difluoroethane (H2FCCFH2), the gauche conformer is more stable than the anti conformer—this is the opposite of what would normally be expected and to what is observed for most 1,2-disubstituted ethanes; this phenomenon is known as the gauche effect. In 1,2-difluoroethane, the gauche conformation is more stable than the anti conformation by 2.4 to 3.4 kJ/mole in the gas phase. This effect is not unique to the halogen fluorine, however; the gauche effect is also observed for 1,2-dimethoxyethane. A related effect is the alkene cis effect. For instance, the cis isomer of 1,2-difluoroethylene is more stable than the trans isomer.
There are two main explanations for the gauche effect: hyperconjugation and bent bonds. In the hyperconjugation model, the donation of electron density from the carbon–hydrogen σ bonding orbital to the carbon–fluorine σ* antibonding orbital is considered the source of stabilization in the gauche isomer. Due to the greater electronegativity of fluorine, the carbon–hydrogen σ orbital is a better electron donor than the carbon–fluorine σ orbital, while the carbon–fluorine σ* orbital is a better electron acceptor than the carbon–hydrogen σ* orbital. Only the gauche conformation allows good overlap between the better donor and the better acceptor.
Key in the bent bond explanation of the gauche effect in difluoroethane is the increased p orbital character of both carbon–fluorine bonds due to the large electronegativity of fluorine. As a result, electron density builds up above and below to the left and right of the central carbon–carbon bond. The resulting reduced orbital overlap can be partially compensated when a gauche conformation is assumed, forming a bent bond. Of these two models, hyperconjugation is generally considered the principal cause behind the gauche effect in difluoroethane.
Spectroscopy
The carbon–fluorine bond stretching appears in the infrared spectrum between 1000 and 1360 cm−1. The wide range is due to the sensitivity of the stretching frequency to other substituents in the molecule. Monofluorinated compounds have a strong band between 1000 and 1110 cm−1; with more than one fluorine atoms, the band splits into two bands, one for the symmetric mode and one for the asymmetric. The carbon–fluorine bands are so strong that they may obscure any carbon–hydrogen bands that might be present.
Organofluorine compounds can also be characterized using NMR spectroscopy, using carbon-13, fluorine-19 (the only natural fluorine isotope), or hydrogen-1 (if present). The chemical shifts in 19F NMR appear over a very wide range, depending on the degree of substitution and functional group. The table below shows the ranges for some of the major classes.
{| class="wikitable"
|-
! Type of Compound !! Chemical Shift Range (ppm) Relative to neat CFCl3
|-
| F–C=O || −70 to −20
|-
| CF3 || +40 to +80
|-
| CF2 || +80 to +140
|-
| CF || +140 to +250
|-
| ArF || +80 to +170
|}
Breaking C–F bonds
Breaking C–F bonds is of interest as a way to decompose and destroy organofluorine "forever chemicals" such as PFOA and perfluorinated compounds (PFCs). Candidate methods include catalysts such as platinum atoms; photocatalysts; UV light combined with iodide or sulfite; radicals; and others.
Some metal complexes cleave C-F bonds. These reactions are of interest from the perspectives of organic synthesis and remediation of xenochemicals. C-F bond activation has been classified as follows: "(i) oxidative addition of fluorocarbon, (ii) M–C bond formation with HF elimination, (iii) M–C bond formation with fluorosilane elimination, (iv) hydrodefluorination of fluorocarbon with M–F bond formation, (v) nucleophilic attack on fluorocarbon, and (vi) defluorination of fluorocarbon". An illustrative metal-mediated C-F activation reaction is the defluorination of fluorohexane by a zirconocene dihydride.
See also
Fluorocarbon
Organofluorine chemistry
Carbon–hydrogen bond
Carbon–carbon bond
Carbon–nitrogen bond
Carbon–oxygen bond
References
Fluorine
Organic chemistry
Chemical bonding | Carbon–fluorine bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,163 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
10,603,568 | https://en.wikipedia.org/wiki/Unitarity%20%28physics%29 | In quantum physics, unitarity is (or a unitary process has) the condition that the time evolution of a quantum state according to the Schrödinger equation is mathematically represented by a unitary operator. This is typically taken as an axiom or basic postulate of quantum mechanics, while generalizations of or departures from unitarity are part of speculations about theories that may go beyond quantum mechanics. A unitarity bound is any inequality that follows from the unitarity of the evolution operator, i.e. from the statement that time evolution preserves inner products in Hilbert space.
Hamiltonian evolution
Time evolution described by a time-independent Hamiltonian is represented by a one-parameter family of unitary operators, for which the Hamiltonian is a generator: $U(t) = e^{-iHt/\hbar}$.
In the Schrödinger picture, the unitary operators are taken to act upon the system's quantum state, whereas in the Heisenberg picture, the time dependence is incorporated into the observables instead.
Implications of unitarity on measurement results
In quantum mechanics, every state is described as a vector in Hilbert space. When a measurement is performed, it is convenient to describe this space using a vector basis in which every basis vector has a defined result of the measurement – e.g., a vector basis of defined momentum in case momentum is measured. The measurement operator is diagonal in this basis.
The probability to get a particular measured result depends on the probability amplitude, given by the inner product of the physical state with the basis vectors that diagonalize the measurement operator. For a physical state that is measured after it has evolved in time, the probability amplitude can be described either by the inner product of the physical state after time evolution with the relevant basis vectors, or equivalently by the inner product of the physical state with the basis vectors that are evolved backwards in time. Using the time evolution operator $U(t) = e^{-iHt/\hbar}$, we have:
$\langle x \mid U(t)\,\psi \rangle = \langle\, U(-t)\,x \mid \psi \,\rangle.$
But by definition of Hermitian conjugation, this is also:
$\langle x \mid U(t)\,\psi \rangle = \langle\, U^{\dagger}(t)\,x \mid \psi \,\rangle.$
Since these equalities are true for every two vectors, we get
$U^{\dagger}(t) = U(-t) = U(t)^{-1}.$
This means that the Hamiltonian $H$ is Hermitian and the time evolution operator $U(t)$ is unitary.
Since by the Born rule the norm determines the probability to get a particular result in a measurement, unitarity together with the Born rule guarantees the sum of probabilities is always one. Furthermore, unitarity together with the Born rule implies that the measurement operators in Heisenberg picture indeed describe how the measurement results are expected to evolve in time.
Implications on the form of the Hamiltonian
That the time evolution operator is unitary is equivalent to the Hamiltonian being Hermitian. Equivalently, this means that the possible measured energies, which are the eigenvalues of the Hamiltonian, are always real numbers.
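A quick numerical illustration (a self-contained sketch, not from the article; the matrix size, seed, and time value are arbitrary): for a Hermitian H, the exponential U = exp(−iHt) is unitary and preserves state norms.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Random Hermitian "Hamiltonian" (hbar = 1 for simplicity).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

U = expm(-1j * H * 0.7)                         # time evolution operator U(t)

# Unitarity: U^dagger U = 1 (up to floating-point error).
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True

# Consequence: time evolution preserves the norm of any state.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
print(np.isclose(np.linalg.norm(U @ psi), np.linalg.norm(psi)))  # True
```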
Scattering amplitude and the optical theorem
The S-matrix is used to describe how the physical system changes in a scattering process. It is in fact equal to the time evolution operator over a very long time (approaching infinity) acting on momentum states of particles (or bound complex of particles) at infinity. Thus it must be a unitary operator as well; a calculation yielding a non-unitary S-matrix often implies a bound state has been overlooked.
Optical theorem
Unitarity of the S-matrix implies, among other things, the optical theorem. This can be seen as follows:
The S-matrix can be written as:
$S = 1 + iT,$
where $T$ is the part of the S-matrix that is due to interactions; e.g. $T = 0$ just implies the S-matrix is 1, no interaction occurs and all states remain unchanged.
Unitarity of the S-matrix:
$S^{\dagger}S = S\,S^{\dagger} = 1$
is then equivalent to:
$-\,i\,(T - T^{\dagger}) = T^{\dagger}T.$
The left-hand side is twice the imaginary part of the S-matrix. In order to see what the right-hand side is, let us look at any specific element of this matrix, e.g. between some initial state $|I\rangle$ and final state $\langle F|$, each of which may include many particles. The matrix element is then:
$\langle F \mid T^{\dagger}T \mid I \rangle = \sum_{i} \langle F \mid T^{\dagger} \mid A_{i} \rangle \langle A_{i} \mid T \mid I \rangle,$
where $\{A_{i}\}$ is the set of possible on-shell states, i.e. momentum states of particles (or bound complexes of particles) at infinity.
Thus, twice the imaginary part of the S-matrix, is equal to a sum representing products of contributions from all the scatterings of the initial state of the S-matrix to any other physical state at infinity, with the scatterings of the latter to the final state of the S-matrix. Since the imaginary part of the S-matrix can be calculated by virtual particles appearing in intermediate states of the Feynman diagrams, it follows that these virtual particles must only consist of real particles that may also appear as final states. The mathematical machinery which is used to ensure this includes gauge symmetry and sometimes also Faddeev–Popov ghosts.
Unitarity bounds
According to the optical theorem, the probability amplitude M (= iT) for any scattering process must obey
$M^{\dagger}M = -\,(M + M^{\dagger}),$
which in particular bounds the size of the individual matrix elements of $M$.
Similar unitarity bounds imply that the amplitudes and cross section cannot increase too much with energy or they must decrease as quickly as a certain formula dictates. For example, the Froissart bound says that the total cross section of two particles scattering is bounded by $c\,\ln^{2}s$, where $c$ is a constant, and $s$ is the square of the center-of-mass energy. (See Mandelstam variables)
See also
Antiunitary operator
the Born rule
Probability axioms
Quantum channel
Stone's theorem on one-parameter unitary groups
Wigner's theorem
References
Quantum mechanics | Unitarity (physics) | [
"Physics"
] | 1,091 | [
"Theoretical physics",
"Quantum mechanics"
] |
10,604,051 | https://en.wikipedia.org/wiki/Jane%20Cain | Ethel Jane Cain (1 May 1909 – 19 September 1996) was a British telephonist and actress, and the original voice of the speaking clock in the United Kingdom.
Working at London's Victoria Exchange, she was appointed on 21 June 1935 following a competition among GPO telephonists; there were nine finalists in total and the adjudication panel included leading actress Sybil Thorndike and Poet Laureate John Masefield, who announced that "She has a golden voice. It is beautiful." Her recording was used from 1936 until 1963, when it was replaced by Pat Simmons. She also made a record for the GPO, helping other staff members improve their speaking voices, and went on to become announcer for Henry Hall during his broadcast concerts.
Having been chosen as the 'Golden Voice Girl', in July 1935 she was offered the leading role in the Columbia Pictures film Vanity. Directed by Adrian Brunel, it began shooting at Walton-on-Thames in October and was first shown in December. Using the name Jane Cain as an actress, she then made her professional stage debut at the Open Air Theatre, Regent's Park on 17 July 1936, playing Celia in As You Like It. The Post Office had started its 'speaking clock' service on the 1st of the same month, over a year after her appointment had been announced.
In addition to working with regional repertory companies, notably a lengthy association with Scotland's Perth Theatre Company in the 1950s, she also appeared in such West End shows as A Soldier for Christmas (1944), Maigret and the Lady (1965) and The Sleeping Prince (1968). She also played supporting roles in such TV series as Starr and Company (1958) and Thirty-Minute Theatre (1961).
See also
Speaking clock
Pat Simmons, second permanent voice
Brian Cobby, third permanent voice
Lenny Henry, comedian, temporary voice
Alicia Roland, 12-year-old schoolgirl, temporary voice
Sara Mendes da Costa, fourth permanent voice
References
External links
Telecommunications Heritage Group
Includes video clips of Jane Cain.
1909 births
1996 deaths
British voice actresses
Clocks
Telephone voiceover talent
20th-century British actresses | Jane Cain | [
"Physics",
"Technology",
"Engineering"
] | 434 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
10,606,531 | https://en.wikipedia.org/wiki/Flow%20tracer | A flow tracer is any fluid property used to track the flow velocity (i.e., flow magnitude and direction) and circulation patterns. Tracers can be chemical properties, such as radioactive material, or chemical compounds, physical properties, such as density, temperature, salinity, or dyes, and can be natural or artificially induced. Flow tracers are used in many fields, such as physics, hydrology, limnology, oceanography, environmental studies and atmospheric studies.
Conservative tracers remain constant following fluid parcels, whereas reactive tracers (such as compounds undergoing a mutual chemical reaction) grow or decay with time. Active tracers dynamically alter the flow of the fluid by changing fluid properties which appear in the equation of motion such as density or viscosity, while passive tracers have no influence on flow.
Uses in oceanography
Ocean tracers are used to deduce small scale flow patterns, large-scale ocean circulation, water mass formation and changes, "dating" of water masses, and carbon dioxide storage and uptake.
Temperature, salinity, density, and other conservative tracers are often used to track currents, circulation and water mass mixing. An interesting example occurred when 28,000 plastic ducks fell overboard from a container ship in the middle of the Pacific Ocean. Over the following twelve years, oceanographers recorded where the ducks washed ashore, some thousands of miles from the spill site, and this data was used to calibrate and verify the circulation patterns of the North Pacific Gyre.
Transient tracers change over time, such as radioactive material (tritium and caesium-137) and chemical concentrations (CFCs and SF6), which are used to date water masses and can also track mixing. In the mid-1900s, nuclear weapons testing and chemical production released tons of compounds that are not naturally found in the environment. Scientists were able to use the concentrations of these anthropogenic compounds and the half-lives of the radioactive material to determine how old a water body is. The spread of radioactive material from the Fukushima nuclear disaster was studied extensively by oceanographers, who tracked it throughout the Pacific Ocean and used it to better understand ocean currents and mixing patterns.
Biological tracers can also be used to track water masses in the ocean. Phytoplankton blooms can be seen by satellites and move with the changing currents. They can be used as a "check point" to see how well water masses are mixing. Subtropical water is often warm, which is ideal for phytoplankton, but nutrient poor, which inhibits their growth, while subpolar water is cold and nutrient rich. When these two types of water masses mix, such as in the Kuroshio Current in the north Pacific, it often causes huge phytoplankton blooms, because the phytoplankton now have both of the conditions they need to grow: warm temperatures and high nutrients. Vertical mixing and eddy formation can also cause phytoplankton blooms, and these blooms are tracked by satellites to observe current patterns and mixing.
See also
Perfluorocarbon tracer
References
External links
ctraj Library of advection codes, including passive tracer modelling.
Fluid dynamics
Data collection
Oceanography | Flow tracer | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 654 | [
"Data collection",
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Chemical engineering",
"Data",
"Piping",
"Fluid dynamics"
] |
10,607,261 | https://en.wikipedia.org/wiki/Folding%20%28chemistry%29 | In chemistry, folding is the process by which a molecule assumes its shape or conformation. The process can also be described as intramolecular self-assembly, a type of molecular self-assembly, where the molecule is directed to form a specific shape through noncovalent interactions, such as hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi-pi interactions, and electrostatic effects.
The most active area of interest in the folding of molecules is the process of protein folding, in which the specific sequence of amino acids in a protein determines how it folds. The shape of the folded protein can be used to understand its function and to design drugs that influence the processes it is involved in.
There is also a great deal of interest in the construction of artificial folding molecules or foldamers. They are studied as models of biological molecules and for potential application to the development of new functional materials.
See also
Secondary structure
Tertiary structure
Circuit topology
References
A Field Guide to Foldamers. Hill, D. J.; Mio, M. J.; Prince, R. B.; Hughes, T.; Moore, J. S. Chem. Rev. 2001, 101, 3893-4011 .
Supramolecular chemistry
Self-organization
Stereochemistry | Folding (chemistry) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 261 | [
"Self-organization",
"Stereochemistry",
"Space",
"Stereochemistry stubs",
"nan",
"Dynamical systems",
"Spacetime",
"Nanotechnology",
"Supramolecular chemistry"
] |
10,610,469 | https://en.wikipedia.org/wiki/Theory%20of%20tides | The theory of tides is the application of continuum mechanics to interpret and predict the tidal deformations of planetary and satellite bodies and their atmospheres and oceans (especially Earth's oceans) under the gravitational loading of another astronomical body or bodies (especially the Moon and Sun).
History
Australian Aboriginal astronomy
The Yolngu people of northeastern Arnhem Land in the Northern Territory of Australia identified a link between the Moon and the tides, which they mythically attributed to the Moon filling with water and emptying out again.
Classical era
The tides received relatively little attention in the civilizations around the Mediterranean Sea, as the tides there are relatively small, and the areas that experience tides do so unreliably. A number of theories were advanced, however, from comparing the movements to breathing or blood flow to theories involving whirlpools or river cycles. A similar "breathing earth" idea was considered by some Asian thinkers. Plato reportedly believed that the tides were caused by water flowing in and out of undersea caverns. Crates of Mallus attributed the tides to "the counter-movement (ἀντισπασμός) of the sea" and Apollodorus of Corcyra to "the refluxes from the Ocean". An ancient Indian Purana text dated to 400-300 BC refers to the ocean rising and falling because of heat expansion from the light of the Moon.
Ultimately the link between the Moon (and Sun) and the tides became known to the Greeks, although the exact date of discovery is unclear; references to it are made in sources such as Pytheas of Massilia in 325 BC and Pliny the Elder's Natural History in 77 AD. Although the schedule of the tides and the link to lunar and solar movements was known, the exact mechanism that connected them was unclear. The classicist Thomas Little Heath claimed that both Pytheas and Posidonius connected the tides with the moon, "the former directly, the latter through the setting up of winds". Seneca mentions in De Providentia the periodic motion of the tides controlled by the lunar sphere. Eratosthenes (3rd century BC) and Posidonius (1st century BC) both produced detailed descriptions of the tides and their relationship to the phases of the Moon, Posidonius in particular making lengthy observations of the sea on the Spanish coast, although little of their work survived. The influence of the Moon on tides was mentioned in Ptolemy's Tetrabiblos as evidence of the reality of astrology. Seleucus of Seleucia is thought to have theorized around 150 BC that tides were caused by the Moon as part of his heliocentric model.
Aristotle, judging from discussions of his beliefs in other sources, is thought to have believed the tides were caused by winds driven by the Sun's heat, and he rejected the theory that the Moon caused the tides. An apocryphal legend claims that he committed suicide in frustration with his failure to fully understand the tides. Heraclides also held "the sun sets up winds, and that these winds, when they blow, cause the high tide and, when they cease, the low tide". Dicaearchus also "put the tides down to the direct action of the sun according to its position". Philostratus discusses tides in Book Five of Life of Apollonius of Tyana (circa 217-238 AD); he was vaguely aware of a correlation of the tides with the phases of the Moon but attributed them to spirits moving water in and out of caverns, which he connected with the legend that spirits of the dead cannot move on at certain phases of the Moon.
Medieval period
The Venerable Bede discusses the tides in The Reckoning of Time and shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. However, he made no progress regarding the question of how exactly the Moon created the tides.
Medieval rule-of-thumb methods for predicting tides were said to allow one "to know what Moon makes high water" from the Moon's movements. Dante references the Moon's influence on the tides in his Divine Comedy.
Medieval European understanding of the tides was often based on works of Muslim astronomers that became available through Latin translation starting from the 12th century. Abu Ma'shar al-Balkhi, in his Introductorium in astronomiam, taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji contributed the notion that the tides were caused by the general circulation of the heavens. Medieval Arabic astrologers frequently referenced the Moon's influence on the tides as evidence for the reality of astrology; some of their treatises on the topic influenced western Europe. Some theorized that the influence was caused by lunar rays heating the ocean's floor.
Modern era
Simon Stevin in his 1608 De spiegheling der Ebbenvloet (The Theory of Ebb and Flood) dismisses a large number of misconceptions that still existed about ebb and flood. Stevin pleads for the idea that the attraction of the Moon was responsible for the tides and writes in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made. In 1609, Johannes Kepler correctly suggested that the gravitation of the Moon causes the tides, which he compared to magnetic attraction basing his argument upon ancient observations and correlations.
In 1616, Galileo Galilei wrote Discourse on the Tides. He strongly and mockingly rejects the lunar theory of the tides, and tries to explain the tides as the result of the Earth's rotation and revolution around the Sun, believing that the oceans moved like water in a large basin: as the basin moves, so does the water. Therefore, as the Earth revolves, the force of the Earth's rotation causes the oceans to "alternately accelerate and retardate". His view on the oscillation and "alternately accelerated and retardated" motion of the Earth's rotation is a "dynamic process" that deviated from the previous dogma, which proposed "a process of expansion and contraction of seawater." However, Galileo's theory was erroneous. In subsequent centuries, further analysis led to the current tidal physics. Galileo tried to use his tidal theory to prove the movement of the Earth around the Sun. Galileo theorized that because of the Earth's motion, borders of the oceans like the Atlantic and Pacific would show one high tide and one low tide per day. The Mediterranean Sea had two high tides and low tides, though Galileo argued that this was a product of secondary effects and that his theory would hold in the Atlantic. However, Galileo's contemporaries noted that the Atlantic also had two high tides and low tides per day, which led to Galileo omitting this claim from his 1632 Dialogue.
René Descartes theorized that the tides (alongside the movement of planets, etc.) were caused by aetheric vortices, without reference to Kepler's theories of gravitation by mutual attraction; this was extremely influential, with numerous followers of Descartes expounding on this theory throughout the 17th century, particularly in France. However, Descartes and his followers acknowledged the influence of the Moon, speculating that pressure waves from the Moon via the aether were responsible for the correlation.
Newton, in the Principia, provides a correct explanation for the tidal force, which can be used to explain tides on a planet covered by a uniform ocean but which takes no account of the distribution of the continents or ocean bathymetry.
Dynamic theory
While Newton explained the tides by describing the tide-generating forces and Daniel Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the dynamic theory of tides, developed by Pierre-Simon Laplace in 1775, describes the ocean's real reaction to tidal forces. Laplace's theory of ocean tides takes into account friction, resonance and natural periods of ocean basins. It predicts the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed.
The equilibrium theory—based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects—could not explain the real ocean tides. Since measurements have confirmed the dynamic theory, many phenomena now have explanations, such as how tides interacting with deep-sea ridges and chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. The equilibrium tide theory predicts a tide wave height of less than half a metre, while the dynamic theory explains why tides can reach up to 15 metres.
Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimeters. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels.
Laplace's tidal equations
In 1776, Laplace formulated a single set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamics equations, but they can also be derived from energy integrals via Lagrange's equation.
For a fluid sheet of average thickness D, the vertical tidal elevation ζ, as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively) satisfy Laplace's tidal equations:
$$\begin{aligned}
\frac{\partial \zeta}{\partial t} &+ \frac{1}{a \cos\varphi} \left[ \frac{\partial}{\partial \lambda}(uD) + \frac{\partial}{\partial \varphi}\left(vD \cos\varphi\right) \right] = 0, \\
\frac{\partial u}{\partial t} &- v \left(2\Omega \sin\varphi\right) + \frac{1}{a \cos\varphi} \frac{\partial}{\partial \lambda}\left(g\zeta + U\right) = 0, \\
\frac{\partial v}{\partial t} &+ u \left(2\Omega \sin\varphi\right) + \frac{1}{a} \frac{\partial}{\partial \varphi}\left(g\zeta + U\right) = 0,
\end{aligned}$$
where Ω is the angular frequency of the planet's rotation, g is the planet's gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential.
William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity. Under certain conditions this can be further rewritten as a conservation of vorticity.
Tidal analysis and prediction
Harmonic analysis
Laplace's improvements in theory were substantial, but they still left prediction in an approximate state. This position changed in the 1860s when the local circumstances of tidal phenomena were more fully brought into account by William Thomson's application of Fourier analysis to the tidal motions as harmonic analysis. Thomson's work in this field was further developed and extended by George Darwin, applying the lunar theory current in his time. Darwin's symbols for the tidal harmonic constituents are still used, for example: M: moon/lunar; S: sun/solar; K: moon-sun/lunisolar.
Darwin's harmonic developments of the tide-generating forces were later improved when A.T. Doodson, applying the lunar theory of E.W. Brown, developed the tide-generating potential (TGP) in harmonic form, distinguishing 388 tidal frequencies. Doodson's work was carried out and published in 1921. Doodson devised a practical system for specifying the different harmonic components of the tide-generating potential, the Doodson numbers, a system still in use.
Since the mid-twentieth century further analysis has generated many more terms than Doodson's 388. About 62 constituents are of sufficient size to be considered for possible use in marine tide prediction, but sometimes many fewer can predict tides to useful accuracy. The calculations of tide predictions using the harmonic constituents are laborious, and from the 1870s to about the 1960s they were carried out using a mechanical tide-predicting machine, a special-purpose form of analog computer. More recently, digital computers, using the method of matrix inversion, have been used to determine the tidal harmonic constituents directly from tide gauge records.
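To make the synthesis step concrete, here is a minimal Python sketch of harmonic prediction: the predicted height is a sum of cosines, one per constituent. The constituent speeds are standard published values; the amplitudes, phase lags, and mean level are invented for illustration and describe no real station.

```python
# Harmonic tide synthesis: height(t) = mean + sum of A*cos(speed*t - phase).
# Speeds are standard; amplitudes and phases below are made up for the demo.
import math

# (name, speed in degrees per hour, amplitude in metres, phase lag in degrees)
CONSTITUENTS = [
    ("M2", 28.9841042, 1.20, 110.0),  # principal lunar semi-diurnal
    ("S2", 30.0000000, 0.40, 140.0),  # principal solar semi-diurnal
    ("K1", 15.0410686, 0.15,  95.0),  # lunisolar diurnal
    ("O1", 13.9430356, 0.10,  80.0),  # lunar diurnal
]

def tide_height(t_hours, mean_level=0.0):
    """Predicted height (metres) at t_hours after the chosen epoch."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * t_hours - phase))
        for _name, speed, amp, phase in CONSTITUENTS
    )

for t in range(0, 25, 3):
    print(f"t = {t:2d} h  height = {tide_height(t):+.2f} m")
```

In practice the amplitudes and phases are fitted to tide-gauge records (the matrix-inversion step mentioned above) and nodal corrections are applied; none of that is shown here.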
Tidal constituents
Tidal constituents combine to give an endlessly varying aggregate because of their different and incommensurable frequencies: the effect is visualized in an animation of the American Mathematical Society illustrating the way in which the components used to be mechanically combined in the tide-predicting machine. Amplitudes (half of peak-to-peak amplitude) of tidal constituents are given below for six example locations:
Eastport, Maine (ME), Biloxi, Mississippi (MS), San Juan, Puerto Rico (PR), Kodiak, Alaska (AK), San Francisco, California (CA), and Hilo, Hawaii (HI).
Semi-diurnal
Diurnal
Long period
Short period
Doodson numbers
In order to specify the different harmonic components of the tide-generating potential, Doodson devised a practical system which is still in use, involving what are called the Doodson numbers based on the six Doodson arguments or Doodson variables. The number of different tidal frequency components is large, but each corresponds to a specific linear combination of six frequencies using small-integer multiples, positive or negative. In principle, these basic angular arguments can be specified in numerous ways; Doodson's choice of his six "Doodson arguments" has been widely used in tidal work. In terms of these Doodson arguments, each tidal frequency can then be specified as a sum made up of a small integer multiple of each of the six arguments. The resulting six small integer multipliers effectively encode the frequency of the tidal argument concerned, and these are the Doodson numbers: in practice all except the first are usually biased upwards by +5 to avoid negative numbers in the notation. (In the case that the biased multiple exceeds 9, the system adopts X for 10, and E for 11.)
The Doodson arguments are specified in the following way, in order of decreasing frequency:
τ is mean lunar time, the Greenwich hour angle of the mean Moon plus 12 hours.
s is the mean longitude of the Moon.
h is the mean longitude of the Sun.
p is the longitude of the Moon's mean perigee.
N′ is the negative of the longitude of the Moon's mean ascending node on the ecliptic.
p1 or ps is the longitude of the Sun's mean perigee.
In these expressions, the symbols l, l′, F and D refer to an alternative set of fundamental angular arguments (usually preferred for use in modern lunar theory), in which:
l is the mean anomaly of the Moon (distance from its perigee).
l′ is the mean anomaly of the Sun (distance from its perigee).
F is the Moon's mean argument of latitude (distance from its node).
D is the Moon's mean elongation (distance from the Sun).
It is possible to define several auxiliary variables on the basis of combinations of these.
In terms of this system, each tidal constituent frequency can be identified by its Doodson numbers. The strongest tidal constituent "M2" has a frequency of 2 cycles per lunar day; its Doodson numbers are usually written 255.555, meaning that its frequency is composed of twice the first Doodson argument, and zero times all of the others. The second strongest tidal constituent "S2" is influenced by the sun, and its Doodson numbers are 273.555, meaning that its frequency is composed of twice the first Doodson argument, +2 times the second, −2 times the third, and zero times each of the other three. This aggregates to the angular equivalent of mean solar time + 12 hours. These two strongest component frequencies have simple arguments for which the Doodson system might appear needlessly complex, but each of the hundreds of other component frequencies can be briefly specified in a similar way, showing in the aggregate the usefulness of the encoding. A worked decoding is sketched below.
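As an illustration of the encoding, here is a small Python sketch that decodes a Doodson number into its six multipliers and the constituent's angular speed, under the conventions just described (+5 bias on all but the first digit, X for 10, E for 11). The six fundamental speeds in degrees per hour are standard rounded values; decoding 255.555 and 273.555 recovers the familiar speeds of M2 (≈28.984°/h) and S2 (exactly 30°/h).

```python
# Decode Doodson numbers into integer multipliers of the six Doodson
# arguments, then form the constituent speed as the weighted sum of the
# arguments' rates of change (standard values, degrees per hour).
SPEEDS = [14.4920521,  # tau: mean lunar time
           0.5490165,  # s:   mean longitude of the Moon
           0.0410686,  # h:   mean longitude of the Sun
           0.0046418,  # p:   longitude of the Moon's mean perigee
           0.0022064,  # N':  negative longitude of the ascending node
           0.0000020]  # p1:  longitude of the Sun's mean perigee

def decode(code):
    symbols = {"X": 10, "E": 11}          # extended digits described above
    digits = [symbols[c] if c in symbols else int(c)
              for c in code if c != "."]
    multipliers = [digits[0]] + [d - 5 for d in digits[1:]]  # undo +5 bias
    speed = sum(m * f for m, f in zip(multipliers, SPEEDS))
    return multipliers, speed

for name, code in [("M2", "255.555"), ("S2", "273.555")]:
    mult, speed = decode(code)
    print(f"{name}: multipliers {mult}, speed {speed:.7f} deg/hour")
# M2: [2, 0, 0, 0, 0, 0]  -> 28.9841042 deg/hour
# S2: [2, 2, -2, 0, 0, 0] -> 30.0000000 deg/hour
```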
See also
Long-period tides
Lunar node § Effect on tides
Kelvin wave
Tide table
Notes
References
External links
Contributions of satellite laser ranging to the studies of earth tides
Dynamic Theory of Tides
Tidal Observations
Publications from NOAA's Center for Operational Oceanographic Products and Services
Understanding Tides
150 Years of Tides on the Western Coast
Our Relentless Tides
GeoTide Tidal Analysis System
Tides
Geophysics
Oceanography
Continuum mechanics
Fluid dynamics
Fluid mechanics
Planetary science | Theory of tides | [
"Physics",
"Chemistry",
"Astronomy",
"Engineering",
"Environmental_science"
] | 3,340 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Continuum mechanics",
"Oceanography",
"Chemical engineering",
"Classical mechanics",
"Civil engineering",
"Geophysics",
"Piping",
"Planetary science",
"Fluid mechanics",
"Astronomical sub-disciplines",
"Fluid dynamics"
] |
10,611,092 | https://en.wikipedia.org/wiki/Nonextensive%20entropy | Entropy is considered to be an extensive property, i.e., that its value depends on the amount of material present. Constantino Tsallis has proposed a nonextensive entropy (Tsallis entropy), which is a generalization of the traditional Boltzmann–Gibbs entropy.
The rationale behind the theory is that Boltzmann–Gibbs entropy leads to systems that have a strong dependence on initial conditions. In reality, most materials behave quite independently of initial conditions.
Nonextensive entropy leads to nonextensive statistical mechanics, whose typical functions are power laws, instead of the traditional exponentials.
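As a concrete illustration, here is a short Python sketch of the Tsallis entropy $S_q = (1 - \sum_i p_i^q)/(q - 1)$, with Boltzmann's constant set to 1; as q → 1 it recovers the Boltzmann–Gibbs value $-\sum_i p_i \ln p_i$. The toy distribution is arbitrary.

```python
# Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1), with k_B = 1.
# The q -> 1 limit is the Boltzmann-Gibbs entropy -sum_i p_i ln p_i.
import math

def tsallis_entropy(p, q):
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]     # a toy probability distribution
for q in (0.5, 0.99, 1.0, 1.01, 2.0):
    print(f"q = {q:4.2f}: S_q = {tsallis_entropy(p, q):.6f}")
# Values near q = 1 approach the Boltzmann-Gibbs entropy 1.213008...
```

The nonextensivity shows up in the composition rule: for independent subsystems A and B, $S_q(A{+}B) = S_q(A) + S_q(B) + (1-q)\,S_q(A)\,S_q(B)$, which reduces to ordinary additivity only at q = 1.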
See also
Tsallis entropy
Statistical mechanics
Entropy and information
Thermodynamic entropy
Information theory | Nonextensive entropy | [
"Physics",
"Materials_science",
"Mathematics",
"Technology",
"Engineering"
] | 146 | [
"Statistical mechanics stubs",
"Materials science stubs",
"Telecommunications engineering",
"Physical quantities",
"Applied mathematics",
"Statistical mechanics",
"Thermodynamic entropy",
"Entropy and information",
"Computer science",
"Entropy",
"Information theory",
"Condensed matter physics"... |
23,822,972 | https://en.wikipedia.org/wiki/Pocket%20filter | Pocket filters are filters used in HVAC applications to remove dust from ambient air. They are commonly used as final filters in commercial applications, or as prefilters for HEPA filters in hospitals and in the pharmaceutical industry.
Pocket filters were historically produced from glass fiber media; however, in recent years a shift to synthetic media has taken place.
Glass fiber media is prone to bacterial growth and shedding. On the other hand, it has the advantage of increased filtration efficiency over time.
Synthetic media filters are usually electrostatically charged to increase efficiency. The drawback of this approach is the media loses efficiency by as much as 75% over time. To remedy this manufacturers have introduced new multilayer synthetic media with a smaller fiber diameter. Not only does this approach contribute to a limited loss of efficiency, but also to a service life that is longer by more than 30%. Some filters come with bacteriostatically treated media to prevent bacterial growth, while others come with a media that is inherently not suitable for bacterial growth.
Pocket filters typically have 3 to 12 pockets, depending on the frame size. The frame material can be metal or plastic. The most common frame sizes are:
Despite the advances in synthetic media, pocket filters are slowly being overtaken by rigid type filters made from glass fiber paper or more recently from nanofiber. These filters usually offer a service life that is 4–8 times what is offered by pocket filters at a lower energy expenditure.
References
Filters | Pocket filter | [
"Chemistry",
"Engineering"
] | 297 | [
"Chemical equipment",
"Filtration",
"Filters"
] |
23,822,989 | https://en.wikipedia.org/wiki/General%20Dirichlet%20series | In the field of mathematical analysis, a general Dirichlet series is an infinite series that takes the form of
$$\sum_{n=1}^{\infty} a_n e^{-\lambda_n s},$$
where $a_n$ and $s$ are complex numbers and $\{\lambda_n\}$ is a strictly increasing sequence of nonnegative real numbers that tends to infinity.
A simple observation shows that an 'ordinary' Dirichlet series
$$\sum_{n=1}^{\infty} \frac{a_n}{n^s}$$
is obtained by substituting $\lambda_n = \ln n$, while a power series
$$\sum_{n=1}^{\infty} a_n (e^{-s})^n$$
is obtained when $\lambda_n = n$.
Fundamental theorems
If a Dirichlet series is convergent at $s_0 = \sigma_0 + t_0 i$, then it is uniformly convergent in the domain
$$|\arg(s - s_0)| \leq \theta < \frac{\pi}{2},$$
and convergent for any $s = \sigma + ti$ where $\sigma > \sigma_0$.
There are now three possibilities regarding the convergence of a Dirichlet series, i.e. it may converge for all, for none or for some values of s. In the latter case, there exists a $\sigma_c$ such that the series is convergent for $\sigma > \sigma_c$ and divergent for $\sigma < \sigma_c$. By convention, $\sigma_c = \infty$ if the series converges nowhere and $\sigma_c = -\infty$ if the series converges everywhere on the complex plane.
Abscissa of convergence
The abscissa of convergence of a Dirichlet series can be defined as $\sigma_c$ above. Another equivalent definition is
$$\sigma_c = \inf\{\sigma \in \mathbb{R} : \text{the series converges for all } s \text{ with } \operatorname{Re}(s) > \sigma\}.$$
The line $\sigma = \sigma_c$ is called the line of convergence. The half-plane of convergence is defined as
$$\mathbb{C}_{\sigma_c} = \{s \in \mathbb{C} : \operatorname{Re}(s) > \sigma_c\}.$$
The abscissa, line and half-plane of convergence of a Dirichlet series are analogous to radius, boundary and disk of convergence of a power series.
On the line of convergence, the question of convergence remains open as in the case of power series. However, if a Dirichlet series converges and diverges at different points on the same vertical line, then this line must be the line of convergence. The proof is implicit in the definition of abscissa of convergence. An example would be the series
$$\sum_{n=1}^{\infty} \frac{1}{n} e^{-ns},$$
which converges at $s = i\pi$ (alternating harmonic series) and diverges at $s = 0$ (harmonic series). Thus, $\sigma = 0$ is the line of convergence.
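The example can be checked numerically. The following Python sketch (the choice of truncation points is illustrative only) shows the partial sums settling toward −ln 2 at s = iπ while growing without bound at s = 0.

```python
# Partial sums of sum_{n>=1} (1/n) e^{-n s}: at s = i*pi the terms are
# (-1)^n / n (alternating harmonic, limit -ln 2); at s = 0 they are the
# harmonic series, which diverges. Both points have Re(s) = 0.
import cmath, math

def partial_sum(s, N):
    return sum(cmath.exp(-n * s) / n for n in range(1, N + 1))

for N in (10**2, 10**3, 10**5):
    at_ipi = partial_sum(1j * math.pi, N).real   # imaginary part vanishes
    at_zero = partial_sum(0.0, N).real
    print(f"N = {N:>6}: at s=i*pi {at_ipi:+.6f}, at s=0 {at_zero:.3f}")
print(f"-ln 2 = {-math.log(2):+.6f}")
```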
Suppose that a Dirichlet series does not converge at $s = 0$; then it is clear that $\sigma_c \geq 0$ and $\sum a_n$ diverges. On the other hand, if a Dirichlet series converges at $s = 0$, then $\sigma_c \leq 0$ and $\sum a_n$ converges. Thus, there are two formulas to compute $\sigma_c$, depending on the convergence of $\sum a_n$, which can be determined by various convergence tests. These formulas are similar to the Cauchy–Hadamard theorem for the radius of convergence of a power series.
If $\sum a_k$ is divergent, i.e. $\sigma_c \geq 0$, then $\sigma_c$ is given by
$$\sigma_c = \limsup_{n \to \infty} \frac{\log |a_1 + a_2 + \cdots + a_n|}{\lambda_n}.$$
If $\sum a_k$ is convergent, i.e. $\sigma_c \leq 0$, then $\sigma_c$ is given by
$$\sigma_c = \limsup_{n \to \infty} \frac{\log |a_{n+1} + a_{n+2} + \cdots|}{\lambda_n}.$$
Abscissa of absolute convergence
A Dirichlet series is absolutely convergent if the series
$$\sum_{n=1}^{\infty} |a_n| e^{-\lambda_n \sigma}$$
is convergent. As usual, an absolutely convergent Dirichlet series is convergent, but the converse is not always true.
If a Dirichlet series is absolutely convergent at $s_0$, then it is absolutely convergent for all s where $\sigma > \sigma_0$. A Dirichlet series may converge absolutely for all, for no or for some values of s. In the latter case, there exists a $\sigma_a$ such that the series converges absolutely for $\sigma > \sigma_a$ and converges non-absolutely for $\sigma < \sigma_a$.
The abscissa of absolute convergence can be defined as $\sigma_a$ above, or equivalently as
$$\sigma_a = \inf\{\sigma \in \mathbb{R} : \text{the series converges absolutely for all } s \text{ with } \operatorname{Re}(s) > \sigma\}.$$
The line and half-plane of absolute convergence can be defined similarly. There are also two formulas to compute $\sigma_a$.
If $\sum |a_k|$ is divergent, then $\sigma_a$ is given by
$$\sigma_a = \limsup_{n \to \infty} \frac{\log(|a_1| + |a_2| + \cdots + |a_n|)}{\lambda_n}.$$
If $\sum |a_k|$ is convergent, then $\sigma_a$ is given by
$$\sigma_a = \limsup_{n \to \infty} \frac{\log(|a_{n+1}| + |a_{n+2}| + \cdots)}{\lambda_n}.$$
In general, the abscissa of convergence does not coincide with abscissa of absolute convergence. Thus, there might be a strip between the line of convergence and absolute convergence where a Dirichlet series is conditionally convergent. The width of this strip is given by
$$0 \leq \sigma_a - \sigma_c \leq L := \limsup_{n \to \infty} \frac{\log n}{\lambda_n}.$$
In the case where L = 0, then $\sigma_a = \sigma_c$.
All the formulas provided so far still hold true for 'ordinary' Dirichlet series by substituting $\lambda_n = \ln n$.
Other abscissas of convergence
It is possible to consider other abscissas of convergence for a Dirichlet series. The abscissa of bounded convergence $\sigma_b$ is given by
$$\sigma_b = \inf\{\sigma \in \mathbb{R} : \text{the series is bounded in the half-plane } \operatorname{Re}(s) \geq \sigma\},$$
while the abscissa of uniform convergence $\sigma_u$ is given by
$$\sigma_u = \inf\{\sigma \in \mathbb{R} : \text{the series converges uniformly in the half-plane } \operatorname{Re}(s) \geq \sigma\}.$$
These abscissas are related to the abscissa of convergence $\sigma_c$ and of absolute convergence $\sigma_a$ by the formulas
$$\sigma_c \leq \sigma_b \leq \sigma_u \leq \sigma_a,$$
and a remarkable theorem of Bohr in fact shows that for any ordinary Dirichlet series where $\lambda_n = \ln n$ (i.e. Dirichlet series of the form $\sum a_n n^{-s}$), $\sigma_a - \sigma_u \leq \tfrac{1}{2}$, and Bohnenblust and Hille subsequently showed that for every number $d \in [0, \tfrac{1}{2}]$ there are Dirichlet series for which $\sigma_a - \sigma_u = d$.
A formula for the abscissa of uniform convergence $\sigma_u$ for the general Dirichlet series is given as follows: for any $N \geq 1$, let $U_N = \sup_{t \in \mathbb{R}} \left| \sum_{n=1}^{N} a_n e^{-i \lambda_n t} \right|$; then
$$\sigma_u = \limsup_{N \to \infty} \frac{\log U_N}{\lambda_N}.$$
Analytic functions
A function represented by a Dirichlet series
$$f(s) = \sum_{n=1}^{\infty} a_n e^{-\lambda_n s}$$
is analytic on the half-plane of convergence. Moreover, for $\sigma > \sigma_c$,
$$f^{(k)}(s) = (-1)^k \sum_{n=1}^{\infty} a_n \lambda_n^k e^{-\lambda_n s}.$$
Further generalizations
A Dirichlet series can be further generalized to the multi-variable case where $\lambda_n \in \mathbb{R}^k$, k = 2, 3, 4,..., or the complex-variable case where $\lambda_n \in \mathbb{C}^m$, m = 1, 2, 3,...
References
G. H. Hardy, and M. Riesz, The general theory of Dirichlet's series, Cambridge University Press, first edition, 1915.
E. C. Titchmarsh, The theory of functions, Oxford University Press, second edition, 1939.
Tom Apostol, Modular functions and Dirichlet series in number theory, Springer, second edition, 1990.
A.F. Leont'ev, Entire functions and series of exponentials (in Russian), Nauka, first edition, 1982.
A.I. Markushevich, Theory of functions of a complex variables (translated from Russian), Chelsea Publishing Company, second edition, 1977.
J.-P. Serre, A Course in Arithmetic, Springer-Verlag, fifth edition, 1973.
John E. McCarthy, Dirichlet Series, 2018.
H. F. Bohnenblust and Einar Hille, On the Absolute Convergence of Dirichlet Series, Annals of Mathematics, Second Series, Vol. 32, No. 3 (Jul., 1931), pp. 600-622.
External links
Complex analysis
Mathematical series | General Dirichlet series | [
"Mathematics"
] | 1,179 | [
"Sequences and series",
"Mathematical structures",
"Series (mathematics)",
"Calculus"
] |
23,823,849 | https://en.wikipedia.org/wiki/Catalan%20Institute%20of%20Nanotechnology | The Catalan Institute of Nanotechnology (ICN) (NanoCAT or Institut Català de Nanotecnologia) was established in 2003 by the Catalan government and the Autonomous University of Barcelona (UAB) with the aim of attracting skilled international researchers to create a hub for nanoscience and nanotechnology research. In 2006, a collaboration with the Consejo Superior de Investigaciones Científicas (CSIC), the main Spanish scientific institution, began. The institute was later renamed the Instituto Catalán de Nanociencia y Nanotecnología (Catalan Institute of Nanoscience and Nanotechnology, ICN2).
Research areas
Research focused in the following areas:
Synthesis and applications of nanoparticles and nanotubes
Design and synthesis of macromolecules for the integration of nanodevices
Magnetism of thin films and spintronics
Imaging and manipulation of atoms and molecules
Theory of surfaces and interfaces
Interaction of biomaterials with inorganic matter
Following collaborations between the ICN and the Spanish government's Centre for Research in Nanoscience and Nanotechnology (CIN2), the ICN and the Spanish Research Council (CSIC) signed a memorandum of understanding of their official collaboration in 2006. This agreement was formalised in 2011, when CSIC representatives joined the ICN's board of patrons, and then in 2013, when the ICN changed its name to the Catalan Institute of Nanoscience and Nanotechnology (ICN2).
References
Nanotechnology institutions
Autonomous University of Barcelona | Catalan Institute of Nanotechnology | [
"Materials_science"
] | 321 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
23,824,085 | https://en.wikipedia.org/wiki/Spinon | Spinons are one of three quasiparticles, along with holons and orbitons, that electrons in solids are able to split into during the process of spin–charge separation, when extremely tightly confined at temperatures close to absolute zero. The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital location and the holon carrying the charge, but in certain conditions they can behave as independent quasiparticles.
The term spinon is frequently used in discussions of experimental facts within the framework of both quantum spin liquid and strongly correlated quantum spin liquid.
Overview
Electrons, being of like charge, repel each other. As a result, in order to move past each other in an extremely crowded environment, they are forced to modify their behavior. Research published in July 2009 by the University of Cambridge and the University of Birmingham in England showed that electrons could jump from the surface of the metal onto a closely located quantum wire by quantum tunneling, and upon doing so, will separate into two quasiparticles, named spinons and holons by the researchers.
The orbiton was predicted theoretically by van den Brink, Khomskii and Sawatzky in 1997–1998.
Its experimental observation as a separate quasiparticle was reported in a paper sent to publishers in September 2011.
The research states that by firing a beam of X-ray photons at a single electron in a one-dimensional sample of strontium cuprate, this will excite the electron to a higher orbital, causing the beam to lose a fraction of its energy in the process. In doing so, the electron will be separated into a spinon and an orbiton. This can be traced by observing the energy and momentum of the X-rays before and after the collision.
See also
Condensed matter physics
Tomonaga–Luttinger liquid
References
Quasiparticles | Spinon | [
"Physics",
"Materials_science"
] | 393 | [
"Matter",
"Particle physics",
"Particle physics stubs",
"Condensed matter physics",
"Quasiparticles",
"Subatomic particles"
] |
23,824,086 | https://en.wikipedia.org/wiki/Holon%20%28physics%29 | Holons are one of three quasi-particles, along with spinons and orbitons, that electrons in solids are able to split into during the process of spin–charge separation, when extremely tightly confined at temperatures close to absolute zero. The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital location and the holon carrying the charge, but in certain conditions they can become deconfined and behave as independent particles.
Overview
Electrons, being fermions, repel each other due to the Pauli exclusion principle. As a result, in order to move past each other in an extremely crowded environment, they are forced to modify their behavior. Research published in July 2009 by the University of Cambridge and the University of Birmingham in Britain showed that electrons could jump past each other by quantum tunneling, and in order to do so will separate into two particles, named spinons and holons by the researchers.
Notes
General References
See also
Condensed matter physics
Tomonaga–Luttinger liquid
Quasiparticles | Holon (physics) | [
"Physics",
"Materials_science"
] | 226 | [
"Matter",
"Quasiparticles",
"Particle physics",
"Condensed matter physics",
"Particle physics stubs",
"Subatomic particles"
] |
23,825,035 | https://en.wikipedia.org/wiki/Non-squeezing%20theorem | The non-squeezing theorem, also called Gromov's non-squeezing theorem, is one of the most important theorems in symplectic geometry. It was first proven in 1985 by Mikhail Gromov.
The theorem states that one cannot embed a ball into a cylinder via a symplectic map unless the radius of the ball is less than or equal to the radius of the cylinder. The theorem is important because formerly very little was known about the geometry behind symplectic maps.
One easy consequence of a transformation being symplectic is that it preserves volume. One can easily embed a ball of any radius into a cylinder of any other radius by a volume-preserving transformation: just picture squeezing the ball into the cylinder (hence, the name non-squeezing theorem). Thus, the non-squeezing theorem tells us that, although symplectic transformations are volume-preserving, it is much more restrictive for a transformation to be symplectic than it is to be volume-preserving.
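The contrast can be made concrete for linear maps, which are symplectic exactly when $M^T J M = J$ for the standard matrix J of the symplectic form. The Python sketch below (illustrative; it fixes the coordinate ordering x1, y1, x2, y2 on R⁴) exhibits a determinant-1 "squeezing" map that fails the test, while a rotation of the (x1, y1) plane passes it.

```python
# A linear map M of R^4 (coordinates x1, y1, x2, y2) is symplectic iff
# M^T J M = J. The diagonal squeeze shrinks (x1, y1) and expands (x2, y2)
# so that det M = 1: volume-preserving but not symplectic.
import numpy as np

J = np.array([[ 0.0,  1,  0, 0],
              [-1,    0,  0, 0],
              [ 0,    0,  0, 1],
              [ 0,    0, -1, 0]])

def is_symplectic(M):
    return np.allclose(M.T @ J @ M, J)

eps = 0.1
squeeze = np.diag([eps, eps, 1 / eps, 1 / eps])   # det = 1 exactly

theta = 0.7                                       # rotate the (x1, y1) plane
c, s = np.cos(theta), np.sin(theta)
rot = np.eye(4)
rot[:2, :2] = [[c, -s], [s, c]]

for name, M in [("squeeze", squeeze), ("rotation", rot)]:
    print(f"{name}: det = {np.linalg.det(M):.3f}, symplectic = {is_symplectic(M)}")
# squeeze:  det = 1.000, symplectic = False
# rotation: det = 1.000, symplectic = True
```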
Background and statement
Consider the symplectic spaces
$$B(R) = \{z = (x_1, \ldots, x_n, y_1, \ldots, y_n) \in \mathbb{R}^{2n} : |z| < R\}, \qquad Z(r) = \{z \in \mathbb{R}^{2n} : x_1^2 + y_1^2 < r^2\},$$
each endowed with the symplectic form
$$\omega = dx_1 \wedge dy_1 + \cdots + dx_n \wedge dy_n.$$
The space $B(R)$ is called the ball of radius $R$ and $Z(r)$ is called the cylinder of radius $r$. The choice of axes for the cylinder are not arbitrary given the fixed symplectic form above; the circles of the cylinder each lie in a symplectic subspace of $\mathbb{R}^{2n}$.
If $(M, \omega)$ and $(N, \nu)$ are symplectic manifolds, a symplectic embedding is a smooth embedding $\varphi : M \to N$ such that $\varphi^* \nu = \omega$. For $R \leq r$, there is a symplectic embedding $B(R) \hookrightarrow Z(r)$ which takes each $z \in B(R)$ to the same point $z \in Z(r)$.
Gromov's non-squeezing theorem says that if there is a symplectic embedding $\varphi : B(R) \to Z(r)$, then $R \leq r$.
Symplectic capacities
A symplectic capacity is a map $c$ from symplectic manifolds of a fixed dimension $2n$ to $[0, \infty]$ satisfying
(Monotonicity) If there is a symplectic embedding $\varphi : M_1 \to M_2$ and $\dim M_1 = \dim M_2$, then $c(M_1) \leq c(M_2)$,
(Conformality) $c(M, \lambda \omega) = |\lambda| \, c(M, \omega)$,
(Nontriviality) $c(B(1)) > 0$ and $c(Z(1)) < \infty$.
The existence of a symplectic capacity satisfying
$$c(B(1)) = c(Z(1)) = \pi$$
is equivalent to Gromov's non-squeezing theorem. Given such a capacity, one can verify the non-squeezing theorem, and given the non-squeezing theorem, the Gromov width
$$c_G(M) = \sup\{\pi R^2 : B(R) \text{ embeds symplectically into } M\}$$
is such a capacity.
The “symplectic camel”
Gromov's non-squeezing theorem has also become known as the principle of the symplectic camel since Ian Stewart referred to it by alluding to the parable of the camel and the eye of a needle. As Maurice A. de Gosson states:
Similarly:
Further work
De Gosson has shown that the non-squeezing theorem is closely linked to the Robertson–Schrödinger–Heisenberg inequality, a generalization of the Heisenberg uncertainty relation. The Robertson–Schrödinger–Heisenberg inequality states that:
$$\operatorname{var}(Q) \operatorname{var}(P) \geq \operatorname{cov}^2(Q, P) + \frac{\hbar^2}{4},$$
with Q and P the canonical coordinates and var and cov the variance and covariance functions.
References
Further reading
Maurice A. de Gosson: The symplectic egg, arXiv:1208.5969v1, submitted on 29 August 2012 – includes a proof of a variant of the theorem for case of linear canonical transformations
Dusa McDuff: What is symplectic geometry?, 2009
Symplectic geometry
Theorems in geometry | Non-squeezing theorem | [
"Mathematics"
] | 650 | [
"Mathematical theorems",
"Mathematical problems",
"Geometry",
"Theorems in geometry"
] |
23,826,709 | https://en.wikipedia.org/wiki/Lombard%20Steam%20Log%20Hauler | The Lombard Steam Log Hauler, patented 21 May 1901, was the first successful commercial application of a continuous track for vehicle propulsion. The concept was later used for military tanks during World War I and for agricultural tractors and construction equipment following the war.
Description
Alvin Orlando Lombard was a blacksmith building logging equipment in Waterville, Maine. He built 83 steam log haulers between 1901 and 1917. These log haulers resembled a saddle-tank steam locomotive with a small platform in front of the boiler where the cowcatcher might be expected.
A steering wheel on the platform moved a large pair of skis beneath the platform. A set of tracked vehicle treads occupied the space beneath the boiler where driving wheels might be expected. The locomotive cylinders powered the treads through a gear train. The log haulers mechanically resembled 10- to 30-ton snowmobiles with a top speed of about .
Operation
While the ground was covered with snow and ice, a log hauler could tow a string of sleds filled with logs. Each sled train required a crew of four men. An engineer and fireman occupied the cab behind the boiler, and a steersman sat on the platform in front. A conductor rode on the sleds with a bell-rope or wire to signal the crew in the cab. The earliest log haulers pulled three sleds, and later models were designed to pull eight sleds. Each train carried 40,000 to 100,000 board-feet of logs. The record train length was said to be 24 sleds with a total length of .
The greatest operational difficulty was on downhill grades where ice allowed the sleds to accelerate faster than the engine. Jack-knifing sleds pushed many log haulers into trees, and most photos of log haulers show rebuilt cabs and bent ironwork on the boiler and saddle tank. Hay was spread over the downhill routes in an effort to increase friction under the sleds, but hungry deer sometimes consumed the hay before the train arrived.
The steersman was regarded as the hero of the crew. In sub-freezing temperatures down to 40 degrees below zero, he sat in an exposed position in front of the train. Sparks flying out of the boiler stack above him would sometimes set his clothing on fire as avoidance of trees required his full attention and effort turning the large iron steering wheel. Some steersmen earned enough money to purchase fire-resistant leather clothing. Some log haulers had a small roofed shelter built on the steering platform, but the shelter limited the steersman's ability to jump clear when collision became inevitable, and he would require luck to avoid injury from the following trainload of logs.
Berlin Mills Company was one of the larger woods operators to use Lombard log haulers. They purchased one machine in 1904, and then purchased two more to maintain reliable operation when one needed repairs. The company maintained a single iced haul road in Stetson, Maine, by nightly application of water from a sprinkler sled, and strung a telephone line with frequent call boxes to dispatch sled trains over that road. The company estimated those three Lombard log haulers did the work of 60 horses.
History
The first two Lombard log haulers were used near Eustis, Maine, in 1901 prior to construction of the Eustis Railroad. These early machines had an upright boiler and were steered by a team of horses. Most of the Lombard log haulers were used in Maine and New Hampshire. A few were used in Michigan, Wisconsin and Russia. Lombard began building 6-cylinder gasoline-powered log haulers in 1914, produced a more powerful "Big 6" later, and built one Fairbanks-Morse Diesel engine hauler in 1934. The internal combustion log haulers (called Lombard tractors) were less powerful than the steam log haulers, and resembled a stake-body truck on a ski-and-track chassis. The steam-powered haulers are thought to have been used as late as 1929. At least ten of the Lombard tractors were preserved at Churchill Depot as recently as the 1960s.
Legacy
Antarctic Mount Lombard was named in recognition of the Lombard Log Hauler as the first application of knowledge of snow mechanics to trafficability. The Lombard Steam Log Hauler was designated a National Historic Mechanical Engineering Landmark in 1982 following nomination by the American Society of Mechanical Engineers.
Lombard Steam Log Haulers have been preserved and restored in:
Ashland Logging Museum in Ashland, Maine
Maine Forest & Logging Museum in Bradley, Maine
Owls Head Transportation Museum in Owls Head, Maine
Lumberman's Museum in Patten, Maine
Clark's Trading Post in Lincoln, New Hampshire
Rhinelander, Wisconsin
Saskatchewan Western Development Museum in Saskatoon
Waterville, Maine
Tulppio, Finland
Rovaniemi Forestry Museum, Finland
Notes
References
External links
Railway Museums in Finland, including a photograph of Lombard Steam log hauler at Tulppio
Forestry Museum of Lapland, Rovaniemi
Vehicle technology
Steam road vehicles
Half-tracks
Tractors
Historic Mechanical Engineering Landmarks | Lombard Steam Log Hauler | [
"Engineering"
] | 1,001 | [
"Vehicle technology",
"Engineering vehicles",
"Tractors",
"Mechanical engineering by discipline"
] |
260,836 | https://en.wikipedia.org/wiki/Twistor%20theory | In theoretical physics, twistor theory was proposed by Roger Penrose in 1967 as a possible path to quantum gravity and has evolved into a widely studied branch of theoretical and mathematical physics. Penrose's idea was that twistor space should be the basic arena for physics from which space-time itself should emerge. It has led to powerful mathematical tools that have applications to differential and integral geometry, nonlinear differential equations and representation theory, and in physics to general relativity, quantum field theory, and the theory of scattering amplitudes.
Twistor theory arose in the context of the rapidly expanding mathematical developments in Einstein's theory of general relativity in the late 1950s and in the 1960s and carries a number of influences from that period. In particular, Roger Penrose has credited Ivor Robinson as an important early influence in the development of twistor theory, through his construction of so-called Robinson congruences.
Overview
Projective twistor space is projective 3-space , the simplest 3-dimensional compact algebraic variety. It has a physical interpretation as the space of massless particles with spin. It is the projectivisation of a 4-dimensional complex vector space, non-projective twistor space , with a Hermitian form of signature (2, 2) and a holomorphic volume form. This can be most naturally understood as the space of chiral (Weyl) spinors for the conformal group of Minkowski space; it is the fundamental representation of the spin group of the conformal group. This definition can be extended to arbitrary dimensions except that beyond dimension four, one defines projective twistor space to be the space of projective pure spinors for the conformal group.
In its original form, twistor theory encodes physical fields on Minkowski space in terms of complex analytic objects on twistor space via the Penrose transform. This is especially natural for massless fields of arbitrary spin. In the first instance these are obtained via contour integral formulae in terms of free holomorphic functions on regions in twistor space. The holomorphic twistor functions that give rise to solutions to the massless field equations can be more deeply understood as Čech representatives of analytic cohomology classes on regions in . These correspondences have been extended to certain nonlinear fields, including self-dual gravity in Penrose's nonlinear graviton construction and self-dual Yang–Mills fields in the so-called Ward construction; the former gives rise to deformations of the underlying complex structure of regions in , and the latter to certain holomorphic vector bundles over regions in . These constructions have had wide applications, including inter alia the theory of integrable systems.
The self-duality condition is a major limitation for incorporating the full nonlinearities of physical theories, although it does suffice for Yang–Mills–Higgs monopoles and instantons (see ADHM construction). An early attempt to overcome this restriction was the introduction of ambitwistors by Isenberg, Yasskin and Green, and their superspace extension, super-ambitwistors, by Edward Witten. Ambitwistor space is the space of complexified light rays or massless particles and can be regarded as a complexification or cotangent bundle of the original twistor description. By extending the ambitwistor correspondence to suitably defined formal neighborhoods, Isenberg, Yasskin and Green showed the equivalence between the vanishing of the curvature along such extended null lines and the full Yang–Mills field equations. Witten showed that a further extension, within the framework of super Yang–Mills theory, including fermionic and scalar fields, gave rise, in the case of N = 1 or 2 supersymmetry, to the constraint equations, while for N = 3 (or 4), the vanishing condition for supercurvature along super null lines (super ambitwistors) implied the full set of field equations, including those for the fermionic fields. This was subsequently shown to give an equivalence between the null curvature constraint equations and the supersymmetric Yang-Mills field equations. Through dimensional reduction, it may also be deduced from the analogous super-ambitwistor correspondence for 10-dimensional, N = 1 super-Yang–Mills theory.
Twistorial formulae for interactions beyond the self-dual sector also arose in Witten's twistor string theory, which is a quantum theory of holomorphic maps of a Riemann surface into twistor space. This gave rise to the remarkably compact RSV (Roiban, Spradlin and Volovich) formulae for tree-level S-matrices of Yang–Mills theories, but its gravity degrees of freedom gave rise to a version of conformal supergravity limiting its applicability; conformal gravity is an unphysical theory containing ghosts, but its interactions are combined with those of Yang–Mills theory in loop amplitudes calculated via twistor string theory.
Despite its shortcomings, twistor string theory led to rapid developments in the study of scattering amplitudes. One was the so-called MHV formalism loosely based on disconnected strings, but was given a more basic foundation in terms of a twistor action for full Yang–Mills theory in twistor space. Another key development was the introduction of BCFW recursion. This has a natural formulation in twistor space that in turn led to remarkable formulations of scattering amplitudes in terms of Grassmann integral formulae and polytopes. These ideas have evolved more recently into the positive Grassmannian and amplituhedron.
Twistor string theory was extended first by generalising the RSV Yang–Mills amplitude formula, and then by finding the underlying string theory. The extension to gravity was given by Cachazo & Skinner, and formulated as a twistor string theory for maximal supergravity by David Skinner. Analogous formulae were then found in all dimensions by Cachazo, He and Yuan for Yang–Mills theory and gravity and subsequently for a variety of other theories. They were then understood as string theories in ambitwistor space by Mason and Skinner in a general framework that includes the original twistor string and extends to give a number of new models and formulae. As string theories they have the same critical dimensions as conventional string theory; for example the type II supersymmetric versions are critical in ten dimensions and are equivalent to the full field theory of type II supergravities in ten dimensions (this is distinct from conventional string theories that also have a further infinite hierarchy of massive higher spin states that provide an ultraviolet completion). They extend to give formulae for loop amplitudes and can be defined on curved backgrounds.
The twistor correspondence
Denote Minkowski space by $M$, with coordinates $x^a = (t, x, y, z)$ and Lorentzian metric signature $(+,-,-,-)$. Introduce 2-component spinor indices $A = 0, 1$; $A' = 0', 1'$, and set
$$x^{AA'} = \frac{1}{\sqrt{2}} \begin{pmatrix} t - z & x + iy \\ x - iy & t + z \end{pmatrix}.$$
Non-projective twistor space $\mathbb{T}$ is a four-dimensional complex vector space with coordinates denoted by $Z^\alpha = (\omega^A, \pi_{A'})$, where $\omega^A$ and $\pi_{A'}$ are two constant Weyl spinors. The Hermitian form can be expressed by defining a complex conjugation from $\mathbb{T}$ to its dual $\mathbb{T}^*$ by $Z^\alpha \mapsto \bar{Z}_\alpha = (\bar{\pi}_A, \bar{\omega}^{A'})$, so that the Hermitian form can be expressed as
$$\Sigma(Z) = Z^\alpha \bar{Z}_\alpha = \omega^A \bar{\pi}_A + \bar{\omega}^{A'} \pi_{A'}.$$
This together with the holomorphic volume form, $\varepsilon_{\alpha\beta\gamma\delta}\, Z^\alpha \, dZ^\beta \wedge dZ^\gamma \wedge dZ^\delta$, is invariant under the group SU(2,2), a quadruple cover of the conformal group C(1,3) of compactified Minkowski spacetime.
Points in Minkowski space are related to subspaces of twistor space through the incidence relation
$$\omega^A = i x^{AA'} \pi_{A'}.$$
The incidence relation is preserved under an overall re-scaling of the twistor, so usually one works in projective twistor space $\mathbb{PT}$, which is isomorphic as a complex manifold to $\mathbb{CP}^3$. A point $x \in M$ thereby determines a line $\mathbb{CP}^1$ in $\mathbb{PT}$ parametrised by $\pi_{A'}$. A twistor is easiest understood in space-time for complex values of the coordinates, where it defines a totally null two-plane that is self-dual. Take $x$ to be real; then if $\Sigma(Z)$ vanishes, $x$ lies on a light ray, whereas if $\Sigma(Z)$ is non-vanishing, there are no solutions, and indeed then $Z$ corresponds to a massless particle with spin that is not localised in real space-time.
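The null property admits a quick numerical sanity check: for a real space-time point, any twistor incident with it has vanishing Hermitian form. The sketch below is illustrative only; the matrix convention for $x^{AA'}$ varies between references, but the computation uses only the fact that the matrix of a real point is Hermitian, so any such choice gives Σ(Z) = 0.

```python
# For a real Minkowski point, X = x^{AA'} is a Hermitian 2x2 matrix, and the
# twistor Z = (omega, pi) with omega = i X pi satisfies
# Sigma(Z) = 2 Re(conj(pi) . omega) = 0, i.e. Z is null.
import numpy as np

rng = np.random.default_rng(0)
t, x, y, z = rng.normal(size=4)                   # a real space-time point
X = np.array([[t - z, x + 1j * y],
              [x - 1j * y, t + z]]) / np.sqrt(2)  # Hermitian by construction

pi = rng.normal(size=2) + 1j * rng.normal(size=2) # an arbitrary Weyl spinor
omega = 1j * X @ pi                               # incidence relation

sigma = 2 * np.real(np.conj(pi) @ omega)          # the Hermitian form
print(f"Sigma(Z) = {sigma:.2e}")                  # ~1e-16, i.e. zero
```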
Variations
Supertwistors
Supertwistors are a supersymmetric extension of twistors introduced by Alan Ferber in 1978. Non-projective twistor space is extended by fermionic coordinates $\eta^i$, where $i = 1, \ldots, \mathcal{N}$ and $\mathcal{N}$ is the number of supersymmetries, so that a twistor is now given by $(\omega^A, \pi_{A'}, \eta^i)$ with the $\eta^i$ anticommuting. The super conformal group naturally acts on this space and a supersymmetric version of the Penrose transform takes cohomology classes on supertwistor space to massless supersymmetric multiplets on super Minkowski space. The case $\mathcal{N} = 4$ provides the target for Penrose's original twistor string and the case $\mathcal{N} = 8$ is that for Skinner's supergravity generalisation.
Higher dimensional generalization of the Klein correspondence
A higher dimensional generalization of the Klein correspondence underlying twistor theory, applicable to isotropic subspaces of conformally compactified (complexified) Minkowski space and its super-space extensions, was developed by J. Harnad and S. Shnider.
Hyperkähler manifolds
Hyperkähler manifolds of dimension $4k$ also admit a twistor correspondence with a twistor space of complex dimension $2k + 1$.
Palatial twistor theory
The nonlinear graviton construction encodes only anti-self-dual, i.e., left-handed fields. A first step towards the problem of modifying twistor space so as to encode a general gravitational field is the encoding of right-handed fields. Infinitesimally, these are encoded in twistor functions or cohomology classes of homogeneity −6. The task of using such twistor functions in a fully nonlinear way so as to obtain a right-handed nonlinear graviton has been referred to as the (gravitational) googly problem. (The word "googly" is a term used in the game of cricket for a ball bowled with right-handed helicity using the apparent action that would normally give rise to left-handed helicity.) The most recent proposal in this direction by Penrose in 2015 was based on noncommutative geometry on twistor space and referred to as palatial twistor theory. The theory is named after Buckingham Palace, where Michael Atiyah suggested to Penrose the use of a type of "noncommutative algebra", an important component of the theory. (The underlying twistor structure in palatial twistor theory was modeled not on the twistor space but on the non-commutative holomorphic twistor quantum algebra.)
See also
Background independence
Complex spacetime
History of loop quantum gravity
Robinson congruences
Spin network
Twisted geometries
Notes
References
Roger Penrose (2004), The Road to Reality, Alfred A. Knopf, ch. 33, pp. 958–1009.
Roger Penrose and Wolfgang Rindler (1984), Spinors and Space-Time; vol. 1, Two-Spinor Calculus and Relativistic Fields, Cambridge University Press, Cambridge.
Roger Penrose and Wolfgang Rindler (1986), Spinors and Space-Time; vol. 2, Spinor and Twistor Methods in Space-Time Geometry, Cambridge University Press, Cambridge.
Further reading
Baird, P., "An Introduction to Twistors."
Huggett, S. and Tod, K. P. (1994). An Introduction to Twistor Theory, second edition. Cambridge University Press. . OCLC 831625586.
Hughston, L. P. (1979) Twistors and Particles. Springer Lecture Notes in Physics 97, Springer-Verlag. .
Hughston, L. P. and Ward, R. S., eds (1979) Advances in Twistor Theory. Pitman. .
Mason, L. J. and Hughston, L. P., eds (1990) Further Advances in Twistor Theory, Volume I: The Penrose Transform and its Applications. Pitman Research Notes in Mathematics Series 231, Longman Scientific and Technical. .
Mason, L. J., Hughston, L. P., and Kobak, P. K., eds (1995) Further Advances in Twistor Theory, Volume II: Integrable Systems, Conformal Geometry, and Gravitation. Pitman Research Notes in Mathematics Series 232, Longman Scientific and Technical. .
Mason, L. J., Hughston, L. P., Kobak, P. K., and Pulverer, K., eds (2001) Further Advances in Twistor Theory, Volume III: Curved Twistor Spaces. Research Notes in Mathematics 424, Chapman and Hall/CRC. .
External links
Penrose, Roger (1999), "Einstein's Equation and Twistor Theory: Recent Developments"
Penrose, Roger; Hadrovich, Fedja. "Twistor Theory."
Hadrovich, Fedja, "Twistor Primer."
Penrose, Roger. "On the Origins of Twistor Theory."
Jozsa, Richard (1976), "Applications of Sheaf Cohomology in Twistor Theory."
Andrew Hodges, Summary of recent developments.
Huggett, Stephen (2005), "The Elements of Twistor Theory."
Mason, L. J., "The twistor programme and twistor strings: From twistor strings to quantum gravity?"
Sparling, George (1999), "On Time Asymmetry."
MathWorld: Twistors.
Universe Review: "Twistor Theory."
Twistor newsletter archives.
Clifford algebras
Quantum field theory
Theories of gravity | Twistor theory | [
"Physics"
] | 2,785 | [
"Quantum field theory",
"Theoretical physics",
"Quantum mechanics",
"Theories of gravity"
] |
260,914 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28numbers%29 | This list contains selected positive numbers in increasing order, including counts of things, dimensionless quantities and probabilities. Each number is given a name in the short scale, which is used in English-speaking countries, as well as a name in the long scale, which is used in some of the countries that do not have English as their national language.
Smaller than 10−100 (one googolth)
Mathematics – random selections: Approximately 10−183,800 is a rough first estimate of the probability that a typing "monkey", or an English-illiterate typing robot, when placed in front of a typewriter, will type out William Shakespeare's play Hamlet as its first set of inputs, on the precondition it typed the needed number of characters. However, demanding correct punctuation, capitalization, and spacing, the probability falls to around 10−360,783.
Computing: 2.2 × 10−78984 is approximately equal to the smallest non-zero value that can be represented by an octuple-precision IEEE floating-point value.
1 × 10−6176 is equal to the smallest non-zero value that can be represented by a quadruple-precision IEEE decimal floating-point value.
6.5 × 10−4966 is approximately equal to the smallest non-zero value that can be represented by a quadruple-precision IEEE floating-point value.
3.6 × 10−4951 is approximately equal to the smallest non-zero value that can be represented by an 80-bit x86 double-extended IEEE floating-point value.
1 × 10−398 is equal to the smallest non-zero value that can be represented by a double-precision IEEE decimal floating-point value.
4.9 × 10−324 is approximately equal to the smallest non-zero value that can be represented by a double-precision IEEE floating-point value.
1.5 × 10−157 is approximately equal to the probability that in a randomly selected group of 365 people, all of them will have different birthdays.
1 × 10−101 is equal to the smallest non-zero value that can be represented by a single-precision IEEE decimal floating-point value.
10−100 to 10−30
Mathematics: The chances of shuffling a standard 52-card deck in any specific order is around 1.24 × 10−68 (or exactly 1/52!); a short sketch below verifies the figure.
Computing: The number 1.4 × 10−45 is approximately equal to the smallest positive non-zero value that can be represented by a single-precision IEEE floating-point value.
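The deck-shuffle figure above is exactly 1/52!; since Python integers are arbitrary-precision, it takes only a few lines to verify (a sketch, with the printed values rounded):

```python
# The probability of one specific ordering of a 52-card deck is 1/52!.
import math

orderings = math.factorial(52)          # ~8.066e67 distinct deck orders
print(f"52!   ~ {float(orderings):.3e}")
print(f"1/52! ~ {1 / orderings:.3e}")   # ~1.240e-68, matching the entry
```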
10−30
(0.000000000000000000000000000001; 1000−10; short scale: one nonillionth; long scale: one quintillionth)
ISO: quecto- (q)
Mathematics: The probability in a game of bridge of all four players getting a complete suit each is approximately 4.47 × 10−28.
10−27
(0.000000000000000000000000001; 1000−9; short scale: one octillionth; long scale: one quadrilliardth)
ISO: ronto- (r)
10−24
(0.000000000000000000000001; 1000−8; short scale: one septillionth; long scale: one quadrillionth)
ISO: yocto- (y)
10−21
(0.000000000000000000001; 1000−7; short scale: one sextillionth; long scale: one trilliardth)
ISO: zepto- (z)
Mathematics: The probability of matching 20 numbers for 20 in a game of keno is approximately 2.83 × 10−19.
Mathematics: The odds of a perfect bracket in the NCAA Division I men's basketball tournament are 1 in 263, approximately 1.08 × 10−19, if coin flips are used to predict the winners of the 63 matches.
10−18
(0.000000000000000001; 1000−6; short scale: one quintillionth; long scale: one trillionth)
ISO: atto- (a)
Mathematics: The probability of rolling snake eyes 10 times in a row on a pair of fair dice is about 2.74 × 10−16.
10−15
(0.000000000000001; 1000−5; short scale: one quadrillionth; long scale: one billiardth)
ISO: femto- (f)
Mathematics: The Ramanujan constant, eπ√163, is an almost integer, differing from the nearest integer by approximately 7.5 × 10−13.
10−12
(0.000000000001; 1000−4; short scale: one trillionth; long scale: one billionth)
ISO: pico- (p)
Mathematics: The probability in a game of bridge of one player getting a complete suit is approximately 2.52 × 10−11 (0.00000000252%).
Biology: Human visual sensitivity to 1000 nm light is approximately 1.0 × 10−10 of its peak sensitivity at 555 nm.
10−9
(0.000000001; 1000−3; short scale: one billionth; long scale: one milliardth)
ISO: nano- (n)
Mathematics – Lottery: The odds of winning the Grand Prize (matching all 6 numbers) in the US Powerball lottery, with a single ticket, under the rules, are 292,201,338 to 1 against, for a probability of 3.42 × 10−9 (0.00000034%).
Mathematics – Lottery: The odds of winning the Grand Prize (matching all 6 numbers) in the Australian Powerball lottery, with a single ticket, under the rules, are 134,490,400 to 1 against, for a probability of 7.44 × 10−9 (0.00000074%).
Mathematics – Lottery: The odds of winning the Jackpot (matching the 6 main numbers) in the current 59-ball UK National Lottery Lotto, with a single ticket, under the rules, are 45,057,474 to 1 against, for a probability of 2.22 × 10−8 (0.0000022%).
Mathematics – Lottery: The odds of winning the Jackpot (matching the 6 main numbers) in the former 49-ball UK National Lottery, with a single ticket, were 13,983,815 to 1 against, for a probability of 7.15 × 10−8 (0.0000072%).
10−6
(0.000001; 1000−2; long and short scales: one millionth)
ISO: micro- (μ)
Mathematics – Poker: The odds of being dealt a royal flush in poker are 649,739 to 1 against, for a probability of 1.5 × 10−6 (0.00015%).
Mathematics – Poker: The odds of being dealt a straight flush (other than a royal flush) in poker are 72,192 to 1 against, for a probability of 1.4 × 10−5 (0.0014%).
Mathematics – Poker: The odds of being dealt a four of a kind in poker are 4,164 to 1 against, for a probability of 2.4 × 10−4 (0.024%).
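All three poker figures follow from counting hands among the comb(52, 5) = 2,598,960 equally likely five-card deals; the short Python sketch below uses the standard hand counts.

```python
# Poker hand probabilities by direct counting: 4 royal flushes, 36 other
# straight flushes (10 top ranks per suit minus the 4 royals), and
# 13 * 48 = 624 four-of-a-kind hands (quad rank times the odd card).
from math import comb

total = comb(52, 5)                    # 2,598,960 possible 5-card hands
counts = {"royal flush": 4,
          "straight flush": 4 * 10 - 4,
          "four of a kind": 13 * 48}

for name, n in counts.items():
    p = n / total
    print(f"{name:>15}: {n:>3} hands, p = {p:.1e}, "
          f"about {total / n - 1:,.0f} to 1 against")
# royal flush: 1.5e-06 (~649,739 : 1); straight flush: 1.4e-05 (~72,192 : 1);
# four of a kind: 2.4e-04 (~4,164 : 1)
```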
10−3
(0.001; 1000−1; one thousandth)
ISO: milli- (m)
Mathematics – Poker: The odds of being dealt a full house in poker are 693 to 1 against, for a probability of 1.4 × 10−3 (0.14%).
Mathematics – Poker: The odds of being dealt a flush in poker are 507.8 to 1 against, for a probability of 1.9 × 10−3 (0.19%).
Mathematics – Poker: The odds of being dealt a straight in poker are 253.8 to 1 against, for a probability of 4 × 10−3 (0.39%).
Physics: α ≈ 0.0072973525693 ≈ 1/137.036, the fine-structure constant.
10−2
(0.01; one hundredth)
ISO: centi- (c)
Mathematics – Lottery: The odds of winning any prize in the UK National Lottery, with a single ticket, under the rules as of 2003, are 54 to 1 against, for a probability of about 0.018 (1.8%).
Mathematics – Poker: The odds of being dealt a three of a kind in poker are 46 to 1 against, for a probability of 0.021 (2.1%).
Mathematics – Lottery: The odds of winning any prize in the Powerball, with a single ticket, under the rules as of 2015, are 24.87 to 1 against, for a probability of 0.0402 (4.02%).
Mathematics – Poker: The odds of being dealt two pair in poker are 21 to 1 against, for a probability of 0.048 (4.8%).
10−1
(0.1; one tenth)
ISO: deci- (d)
Legal history: 10% was widespread as the tax raised for income or produce in the ancient and medieval period; see tithe.
Mathematics – Poker: The odds of being dealt only one pair in poker are about 7 to 5 against (1.37 to 1), for a probability of 0.42 (42%).
Mathematics – Poker: The odds of being dealt no pair in poker are nearly 1 to 1, for a probability of about 0.5 (50%).
100
(1; one)
Demography: The population of Monowi, an incorporated village in Nebraska, United States, was one in 2010.
Religion: One is the number of gods in Judaism, Christianity, and Islam (monotheistic religions).
Computing – Unicode: One character is assigned to the Lisu Supplement Unicode block, the fewest of any public-use Unicode block as of Unicode 15.0 (2022).
Mathematics: √2 ≈ 1.414213562, the ratio of the diagonal of a square to its side length.
Mathematics: φ ≈ 1.618033989, the golden ratio.
Mathematics: √3 ≈ 1.732050808, the length of the diagonal of a unit cube.
Mathematics: the number system understood by most computers, the binary system, uses 2 digits: 0 and 1.
Mathematics: √5 ≈ 2.236 067 9775, the length of the diagonal of a rectangle whose side lengths are 1 and 2.
Mathematics: √2 + 1 ≈ 2.414213562, the silver ratio; the ratio of the smaller of the two quantities to the larger quantity is the same as the ratio of the larger quantity to the sum of the smaller quantity and twice the larger quantity.
Mathematics: e ≈ 2.718281828, the base of the natural logarithm.
Mathematics: the number system understood by ternary computers, the ternary system, uses 3 digits: 0, 1, and 2.
Religion: three manifestations of God in the Christian Trinity.
Mathematics: π ≈ 3.141592654, the ratio of a circle's circumference to its diameter.
Religion: the Four Noble Truths in Buddhism.
Biology: 7 ± 2, in cognitive science, George A. Miller's estimate of the number of objects that can be simultaneously held in human working memory.
Music: 7 notes in a major or minor scale.
Astronomy: 8 planets in the Solar System.
Religion: the Noble Eightfold Path in Buddhism.
Literature: 9 circles of Hell in the Inferno by Dante Alighieri.
101
(10; ten)
ISO: deca- (da)
Demography: The population of Pesnopoy, a village in Bulgaria, was 10 in 2007.
Human scale: There are 10 digits on a pair of human hands, and 10 toes on a pair of human feet.
Mathematics: The decimal system has 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
Religion: the Ten Commandments in the Abrahamic religions.
Music: There are 12 notes in the chromatic scale.
Astrology: There are 12 zodiac signs, each one representing part of the annual path of the sun's movement across the night sky.
Computing – Microsoft Windows: Twelve successive consumer versions of Windows NT have been released as of December 2021.
Music: Composers Ludwig van Beethoven and Dmitri Shostakovich both completed and numbered 15 string quartets in their lifetimes.
Linguistics: The Finnish language has 15 noun cases.
Mathematics: The hexadecimal system, a common number system used in computer programming, uses 16 digits where the last 6 are typically represented by letters: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.
Computing – Unicode: The minimum possible size of a Unicode block is 16 contiguous code points (i.e., U+abcde0 - U+abcdeF).
Computing – UTF-16/Unicode: There are 17 addressable planes in UTF-16, and, thus, as Unicode is limited to the UTF-16 code space, 17 valid planes in Unicode.
Science fiction: The 23 enigma plays a prominent role in the plot of The Illuminatus! Trilogy by Robert Shea and Robert Anton Wilson.
Mathematics: e^π ≈ 23.140692633, Gelfond's constant.
Music: There is a combined total of 24 major and minor keys, also the number of works in some musical cycles of J. S. Bach, Frédéric Chopin, Alexander Scriabin, and Dmitri Shostakovich.
Alphabetic writing: There are 26 letters in the Latin-derived English alphabet (excluding letters found only in foreign loanwords).
Science fiction: The number 42, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, is the Answer to the Ultimate Question of Life, the Universe, and Everything, which is calculated by an enormous supercomputer over a period of 7.5 million years.
Biology: A human cell typically contains 46 chromosomes.
Phonology: There are 47 phonemes in English phonology in Received Pronunciation.
Syllabic writing: There are 49 letters in each of the two kana syllabaries (hiragana and katakana) used to represent Japanese (not counting letters representing sound patterns that have never occurred in Japanese).
Chess: Either player in a chess game can claim a draw if 50 consecutive moves are made by each side without any captures or pawn moves.
Demography: The population of Nassau Island, part of the Cook Islands, was around 78 in 2016.
Syllabic writing: There are 85 letters in the modern version of the Cherokee syllabary.
Music: Typically, there are 88 keys on a grand piano.
Computing – ASCII: There are 95 printable characters in the ASCII character set.
10²
(100; hundred)
ISO: hecto- (h)
European history: Groupings of 100 homesteads were a common administrative unit in Northern Europe and Great Britain (see Hundred (county division)).
Music: There are 104 numbered symphonies of Franz Josef Haydn.
Religion: 108 is a sacred number in Hinduism.
Chemistry: 118 chemical elements have been discovered or synthesized as of 2016.
Computing – ASCII: There are 128 characters in the ASCII character set, including nonprintable control characters.
Videogames: There are 151 Pokémon in the first generation.
Phonology: The Taa language is estimated to have between 130 and 164 distinct phonemes.
Political science: There were 193 member states of the United Nations as of 2011.
Computing: A GIF image (or an 8-bit image) supports a maximum of 256 (2⁸) colors.
Computing – Unicode: There are 327 different Unicode blocks as of Unicode 15.0 (2022).
Aviation: 583 people died in the 1977 Tenerife airport disaster, the deadliest accident in the history of civil aviation.
Music: The largest number (626) in the Köchel catalogue of works of Wolfgang Amadeus Mozart.
Demography: Vatican City, the least populous independent country, has an approximate population of 800 as of 2018.
10³
(1,000; thousand)
ISO: kilo- (k)
Demography: The population of Ascension Island is 1,122.
Music: 1,128: number of known extant works by Johann Sebastian Bach recognized in the Bach-Werke-Verzeichnis as of 2017.
Typesetting: 2,000–3,000 letters on a typical typed page of text.
Mathematics: 2,520 (5×7×8×9 or 2³×3²×5×7) is the least common multiple of every positive integer under (and including) 10; see the sketch after this section.
Terrorism: 2,996 persons (including 19 terrorists) died in the terrorist attacks of September 11, 2001.
Biology: the DNA of the simplest viruses has 3,000 base pairs.
Military history: 4,200 (Republic) or 5,200 (Empire) was the standard size of a Roman legion.
Linguistics: Estimates for the linguistic diversity of living human languages or dialects range between 5,000 and 10,000. (SIL Ethnologue in 2009 listed 6,909 known living languages.)
Astronomy – Catalogues: There are 7,840 deep-sky objects in the NGC Catalogue from 1888.
Lexicography: 8,674 unique words in the Hebrew Bible.
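As a quick check of the 2,520 entry above, Python 3.9+ exposes an n-ary least common multiple directly; the same call also reproduces the lcm entries for 1–100 and 1–1000 that appear further down this list:

    from math import lcm

    print(lcm(*range(1, 11)))    # 2520
    print(lcm(*range(1, 101)))   # the 41-digit value quoted under 10^39 below
    print(lcm(*range(1, 1001)))  # the 433-digit value quoted near the end of the list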
10⁴
(10,000; ten thousand or a myriad)
Biology: Each neuron in the human brain is estimated to connect to 10,000 others.
Demography: The population of Tuvalu was 10,544 in 2007.
Lexicography: 14,500 unique English words occur in the King James Version of the Bible.
Zoology: There are approximately 17,500 distinct butterfly species known.
Language: There are 20,000–40,000 distinct Chinese characters in more than occasional use.
Biology: Each human being is estimated to have 20,000 coding genes.
Grammar: Each regular verb in Cherokee can have 21,262 inflected forms.
War: 22,717 Union and Confederate soldiers were killed, wounded, or missing in the Battle of Antietam, the bloodiest single day of battle in American history.
Computing – Unicode: 42,720 characters are encoded in CJK Unified Ideographs Extension B, the most of any single public-use Unicode block as of Unicode 15.0 (2022).
Aviation: 44,000+ airframes have been built of the Cessna 172, the most-produced aircraft in history.
Computing – Fonts: The maximum possible number of glyphs in a TrueType or OpenType font is 65,535 (2¹⁶ − 1), the largest number representable by the 16-bit unsigned integer used to record the total number of glyphs in the font.
Computing – Unicode: A plane contains 65,536 (2¹⁶) code points; this is also the maximum size of a Unicode block, and the total number of code points available in the obsolete UCS-2 encoding.
Mathematics: 65,537 (2¹⁶ + 1) is the largest known Fermat prime; see the sketch below.
Memory: The largest number of decimal places of π that have been recited from memory is 70,030.
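The cluster of 2¹⁶ figures above can be checked directly; the primality test for 65,537 below is plain trial division, which is adequate at this size:

    n = 2**16
    print(n - 1, n, n + 1)     # 65535, 65536, 65537
    fermat = 2**(2**4) + 1     # the Fermat number F4 = 65537
    print(all(fermat % d != 0 for d in range(2, int(fermat**0.5) + 1)))  # True: F4 is prime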
10⁵
(100,000; one hundred thousand or a lakh)
Demography: The population of Saint Vincent and the Grenadines was 100,982 in 2009.
Biology – Strands of hair on a head: The average human head has about 100,000–150,000 strands of hair.
Literature: approximately 100,000 verses (shlokas) in the Mahabharata.
Computing – Unicode: 149,186 characters (including control characters) encoded in Unicode as of version 15.0 (2022).
Language: 267,000 words in James Joyce's Ulysses.
Computing – Unicode: 293,168 code points assigned to a Unicode block as of Unicode 15.0.
Genocide: 300,000 people killed in the Nanjing Massacre.
Language – English words: The New Oxford Dictionary of English contains about 360,000 definitions for English words.
Mathematics: 360,000 – The approximate number of entries in The On-Line Encyclopedia of Integer Sequences.
Biology – Plants: There are approximately 390,000 distinct plant species known, of which approximately 20% (or 78,000) are at risk of extinction.
Biology – Flowers: There are approximately 400,000 distinct flower species on Earth.
Literature: 564,000 words in War and Peace by Leo Tolstoy.
Literature: 930,000 words in the King James Version of the Bible.
Mathematics: There are 933,120 possible combinations on the Pyraminx.
Computing – Unicode: There are 974,530 publicly-assignable code points (i.e., not surrogates, private-use code points, or noncharacters) in Unicode; this count is reproduced in the sketch below.
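The 974,530 figure above follows from the published Unicode code-space constants, a calculation small enough to inline:

    planes = 17
    total = planes * 2**16                  # 1,114,112 code points in all
    surrogates = 2048                       # U+D800..U+DFFF
    private_use = 6400 + 2 * 65534          # BMP private-use area plus planes 15 and 16
    noncharacters = 32 + 2 * planes         # U+FDD0..U+FDEF plus two per plane
    print(total - surrogates - private_use - noncharacters)   # 974530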
10⁶
(1,000,000; 1000²; long and short scales: one million)
ISO: mega- (M)
Demography: The population of Riga, Latvia was 1,003,949 in 2004, according to Eurostat.
Computing – UTF-8: There are 1,112,064 (2²⁰ + 2¹⁶ − 2¹¹) valid UTF-8 sequences (excluding overlong sequences and sequences corresponding to code points used for UTF-16 surrogates or code points beyond U+10FFFF).
Computing – UTF-16/Unicode: There are 1,114,112 (2²⁰ + 2¹⁶) distinct values encodable in UTF-16, and, thus (as Unicode is currently limited to the UTF-16 code space), 1,114,112 valid code points in Unicode (1,112,064 scalar values and 2,048 surrogates).
Ludology – Number of games: Approximately 1,181,019 video games have been created as of 2019.
Biology – Species: The World Resources Institute claims that approximately 1.4 million species have been named, out of an unknown number of total species (estimates range between 2 and 100 million species). Some scientists give 8.8 million species as an exact figure.
Genocide: Approximately 800,000–1,500,000 (1.5 million) Armenians were killed in the Armenian genocide.
Linguistics: The number of possible conjugations for each verb in the Archi language is 1,502,839.
Info: The freedb database of CD track listings has around 1,750,000 entries.
Computing – UTF-8: 2,164,864 (2²¹ + 2¹⁶ + 2¹¹ + 2⁷) possible one- to four-byte UTF-8 sequences, if the restrictions on overlong sequences, surrogate code points, and code points beyond U+10FFFF are not adhered to. (Note that not all of these correspond to unique code points.)
Mathematics – Playing cards: There are 2,598,960 different 5-card poker hands that can be dealt from a standard 52-card deck.
Mathematics: There are 3,149,280 possible positions for the Skewb.
Mathematics – Rubik's Cube: 3,674,160 is the number of combinations for the Pocket Cube (2×2×2 Rubik's Cube); this count is derived in the sketch below.
Geography/Computing – Geographic places: The NIMA GEOnet Names Server contains approximately 3.88 million named geographic features outside the United States, with 5.34 million names. The USGS Geographic Names Information System claims to have almost 2 million physical and cultural geographic features within the United States.
Computing - Supercomputer hardware: 4,981,760 processor cores in the final configuration of the Tianhe-2 supercomputer.
Genocide: Approximately 5,100,000–6,200,000 Jews were killed in the Holocaust.
Info – Web sites: The English Wikipedia contains more than 6 million articles in the English language.
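The Pocket Cube count above can be derived from first principles: the 8 corner cubies permute in 8! ways and twist in 3⁷ ways (the total twist is constrained), and dividing by 24 fixes the orientation of the whole cube in space, since a 2×2×2 has no fixed centers:

    from math import factorial

    print(factorial(8) * 3**7 // 24)   # 3674160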
10⁷
(10,000,000; a crore; long and short scales: ten million)
Demography: The population of Haiti was 10,085,214 in 2010.
Literature: 11,206,310 words in Devta by Mohiuddin Nawab, the longest continuously published story known in the history of literature.
Genocide: An estimated 12 million persons were shipped from Africa to the New World in the Atlantic slave trade.
Mathematics: 12,988,816 is the number of domino tilings of an 8×8 checkerboard.
Genocide/Famine: 15 million is an estimated lower bound for the death toll of the 1959–1961 Great Chinese Famine, the deadliest known famine in human history.
War: 15 to 22 million casualties estimated as a result of World War I.
Computing: 16,777,216 different colors can be generated using the hex code system in HTML (note that the trichromatic color vision of the human eye can only distinguish between about an estimated 1,000,000 different colors).
Science Fiction: In Isaac Asimov's Galactic Empire, in 22,500 CE, there are 25,000,000 different inhabited planets in the Galactic Empire, all inhabited by humans in Asimov's "human galaxy" scenario.
Genocide/Famine: 55 million is an estimated upper bound for the death toll of the Great Chinese Famine.
Literature: Wikipedia contains a total of more than 50 million articles across its roughly 300 language editions.
War: 70 to 85 million casualties estimated as a result of World War II.
Mathematics: 73,939,133 is the largest right-truncatable prime.
10⁸
(100,000,000; long and short scales: one hundred million)
Demography: The population of the Philippines was 100,981,437 in 2015.
Internet – YouTube: The number of YouTube channels is estimated to be 113.9 million.
Info – Books: The British Library holds over 150 million items. The Library of Congress holds approximately 148 million items. See The Gutenberg Galaxy.
Video gaming: Approximately 200 million copies of Minecraft (the most-sold video game in history) have been sold.
Mathematics: More than 215,000,000 mathematical constants are collected on the Plouffe's Inverter.
Mathematics: 275,305,224 is the number of 5×5 normal magic squares, not counting rotations and reflections. This result was found in 1973 by Richard Schroeppel.
Demography: The population of the United States was 328,239,523 in 2019.
Mathematics: 358,833,097 stellations of the rhombic triacontahedron.
Info – Web sites: The Netcraft web survey estimates that there are 525,998,433 (526 million) distinct websites.
Astronomy – Cataloged stars: The Guide Star Catalog II has entries on 998,402,801 distinct astronomical objects.
10⁹
(1,000,000,000; 1000³; short scale: one billion; long scale: one thousand million, or one milliard)
ISO: giga- (G)
Transportation – Cars: There are approximately 1.4 billion cars in the world, corresponding to around 18% of the human population.
Demographics – China: 1,409,670,000 – approximate population of the People's Republic of China in 2023.
Demographics – India: 1,428,627,663 – approximate population of India in 2023.
Demographics – Africa: The population of Africa reached 1,430,000,000 sometime in 2023.
Internet – Google: There are more than 1,500,000,000 active Gmail users globally.
Internet: Approximately 1,500,000,000 active users were on Facebook as of October 2015.
Computing – Computational limit of a 32-bit CPU: 2,147,483,647 is equal to 2³¹ − 1, and as such is the largest number which can fit into a signed (two's complement) 32-bit integer on a computer.
Computing – UTF-8: 2,147,483,648 (2³¹) possible code points (U+0000 – U+7FFFFFFF) in the pre-2003 version of UTF-8 (including five- and six-byte sequences), before the UTF-8 code space was limited to the much smaller set of values encodable in UTF-16.
Biology – base pairs in the genome: approximately 3.3 × 10⁹ base pairs in the human genome.
Linguistics: 3,400,000,000 – the total number of speakers of Indo-European languages, of which 2,400,000,000 are native speakers; the other 1,000,000,000 speak Indo-European languages as a second language.
Mathematics and computing: 4,294,967,295 (2³² − 1), the product of the five known Fermat primes and the maximum value for a 32-bit unsigned integer in computing; see the sketch below.
Computing – IPv4: 4,294,967,296 (2³²) possible unique IP addresses.
Computing: 4,294,967,296 – the number of bytes in 4 gibibytes; in computation, 32-bit computers can directly access 2³² units (bytes) of address space, which leads directly to the 4-gigabyte limit on main memory.
Mathematics: 4,294,967,297 (2³² + 1) is a Fermat number and semiprime. It is the smallest number of the form 2^(2ⁿ) + 1 which is not a prime number.
Demographics – world population: 8,019,876,189 – Estimated population for the world as of 1 January 2024.
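The 2³² family of entries above is easy to verify, including Euler's factorisation of the Fermat number F5:

    print(2**31 - 1)                 # 2147483647, largest signed 32-bit integer
    print(2**32 - 1)                 # 4294967295, largest unsigned 32-bit integer
    print(3 * 5 * 17 * 257 * 65537)  # 4294967295 again: product of the five known Fermat primes
    print(641 * 6700417)             # 4294967297 = 2^32 + 1, so F5 is semiprime, not prime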
10¹⁰
(10,000,000,000; short scale: ten billion; long scale: ten thousand million, or ten milliard)
Biology – bacteria in the human body: There are roughly 10¹⁰ bacteria in the human mouth.
Computing – web pages: approximately 5.6 × 10¹⁰ web pages indexed by Google as of 2010.
10¹¹
(100,000,000,000; short scale: one hundred billion; long scale: hundred thousand million, or hundred milliard)
Astronomy: There are an estimated 100 billion planets located in the Milky Way.
Biology – Neurons in the brain: approximately (1±0.2) × 10¹¹ neurons in the human brain.
Medicine: The United States Food and Drug Administration requires a minimum of 3 × 10¹¹ (300 billion) platelets per apheresis unit.
Paleodemography – Number of humans that have ever lived: approximately (1.2±0.3) × 10¹¹ live births of anatomically modern humans since the beginning of the Upper Paleolithic.
Astronomy – stars in our galaxy: of the order of 10¹¹ stars in the Milky Way galaxy.
Mathematics: 608,981,813,029 is the smallest number for which there are more primes of the form 3k + 1 than of the form 3k + 2 up to the number.
10¹²
(1,000,000,000,000; 1000⁴; short scale: one trillion; long scale: one billion)
ISO: tera- (T)
Astronomy: Andromeda Galaxy, which is part of the same Local Group as our galaxy, contains about 10¹² stars.
Biology – Bacteria on the human body: The surface of the human body houses roughly 10¹² bacteria.
Astronomy – Galaxies: A 2016 estimate says there are 2 × 10¹² galaxies in the observable universe.
Biology: An estimate says there were 3.04 × 10¹² trees on Earth in 2015.
Mathematics: 7,625,597,484,987 – a number that often appears when dealing with powers of 3. It can be expressed as 27⁹, 3²⁷, or 3^(3³), or, using Knuth's up-arrow notation, as 3↑↑3 or 3↑3↑3.
Astronomy: A light-year, as defined by the International Astronomical Union (IAU), is the distance that light travels in a vacuum in one year, which is equivalent to about 9.46 trillion kilometers (9.46 × 10¹² km).
Mathematics: 10¹³ – The approximate number of known non-trivial zeros of the Riemann zeta function.
Biology – Blood cells in the human body: The average human body is estimated to have (2.5 ± 0.5) × 10¹³ red blood cells.
Mathematics – Known digits of π: The number of known digits of π is 31,415,926,535,897 (the integer part of π × 10¹³).
Biology – approximately 10¹⁴ synapses in the human brain.
Biology – Cells in the human body: The human body consists of roughly 10¹⁴ cells, of which only 10¹³ are human. The remaining 90% non-human cells (though much smaller and constituting much less mass) are bacteria, which mostly reside in the gastrointestinal tract, although the skin is also covered in bacteria.
Mathematics: The first case of exactly 18 prime numbers between multiples of 100 is 122,853,771,370,900 + n, for n = 1, 3, 7, 19, 21, 27, 31, 33, 37, 49, 51, 61, 69, 73, 87, 91, 97, 99.
Cryptography: 150,738,274,937,250 configurations of the plug-board of the Enigma machine used by the Germans in WW2 to encode and decode messages by cipher.
Computing – MAC-48: 281,474,976,710,656 (2⁴⁸) possible unique physical addresses.
Mathematics: 953,467,954,114,363 is the largest known Motzkin prime.
10¹⁵
(1,000,000,000,000,000; 1000⁵; short scale: one quadrillion; long scale: one thousand billion, or one billiard)
ISO: peta- (P)
Biology – Insects: 1,000,000,000,000,000 to 10,000,000,000,000,000 (10¹⁵ to 10¹⁶) – The estimated total number of ants on Earth alive at any one time (their biomass is approximately equal to the total biomass of the human species).
Computing: 9,007,199,254,740,992 (2⁵³) – the number up to which all integer values can be exactly represented in the IEEE double-precision floating-point format; see the sketch below.
Mathematics: 48,988,659,276,962,496 is the fifth taxicab number.
Science Fiction: In Isaac Asimov's Galactic Empire, in what we call 22,500 CE, there are 25,000,000 different inhabited planets in the Galactic Empire, all inhabited by humans in Asimov's "human galaxy" scenario, each with an average population of 2,000,000,000, thus yielding a total Galactic Empire population of approximately 50,000,000,000,000,000.
Cryptography: There are 2⁵⁶ = 72,057,594,037,927,936 different possible keys in the obsolete 56-bit DES symmetric cipher.
Science Fiction: There are approximately 100,000,000,000,000,000 (10¹⁷) sentient beings in the Star Wars galaxy.
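The 2⁵³ limit above is a direct consequence of the 53-bit significand of an IEEE double, and is easy to demonstrate (Python floats are IEEE doubles):

    n = 2**53
    print(float(n) == float(n + 1))   # True: 2^53 + 1 rounds back to 2^53
    print(float(n - 1) == float(n))   # False: below 2^53, consecutive integers stay distinct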
10¹⁸
(1,000,000,000,000,000,000; 1000⁶; short scale: one quintillion; long scale: one trillion)
ISO: exa- (E)
Mathematics: The first case of exactly 19 prime numbers between multiples of 100 is 1,468,867,005,116,420,800 + n, for n = 1, 3, 7, 9, 21, 31, 37, 39, 43, 49, 51, 63, 67, 69, 73, 79, 81, 87, 93.
Mathematics: 2⁶¹ − 1 = 2,305,843,009,213,693,951 (≈2.31 × 10¹⁸) is the ninth Mersenne prime. It was determined to be prime in 1883 by Ivan Mikheevich Pervushin. This number is sometimes called Pervushin's number.
Mathematics: Goldbach's conjecture has been verified for all n ≤ 4 × 10¹⁸ by a project which computed all prime numbers up to that limit.
Computing – Manufacturing: An estimated 6 × 10¹⁸ transistors were produced worldwide in 2008.
Computing – Computational limit of a 64-bit CPU: 9,223,372,036,854,775,807 (about 9.22 × 10¹⁸) is equal to 2⁶³ − 1, and as such is the largest number which can fit into a signed (two's complement) 64-bit integer on a computer.
Mathematics – NCAA basketball tournament: There are 9,223,372,036,854,775,808 (2⁶³) possible ways to enter the bracket.
Mathematics – Bases: 9,439,829,801,208,141,318 (≈9.44 × 10¹⁸) is the 10th and (by conjecture) largest number with more than one digit that can be written from base 2 to base 18 using only the digits 0 to 9, meaning the digits for 10 to 17 are not needed in bases greater than 10.
Biology – Insects: It has been estimated that the insect population of the Earth is about 10¹⁹.
Mathematics – Answer to the wheat and chessboard problem: When doubling the grains of wheat on each successive square of a chessboard, beginning with one grain of wheat on the first square, the final number of grains of wheat on all 64 squares of the chessboard when added up is 2⁶⁴ − 1 = 18,446,744,073,709,551,615 (≈1.84 × 10¹⁹); see the sketch after these entries.
Mathematics – Legends: The Tower of Brahma legend tells about a Hindu temple containing a large room with three posts, on one of which are 64 golden discs, and the object of the mathematical game is for the Brahmins in this temple to move all of the discs to another pole so that they are in the same order, never placing a larger disc above a smaller disc, moving only one at a time. Using the simplest algorithm for moving the disks, it would take 2⁶⁴ − 1 = 18,446,744,073,709,551,615 (≈1.84 × 10¹⁹) turns to complete the task (the same number as the wheat and chessboard problem above). (Ivan Moscovich, 1000 playthinks: puzzles, paradoxes, illusions & games, Workman Pub., 2001.)
Computing – IPv6: 18,446,744,073,709,551,616 (2⁶⁴; ≈1.84 × 10¹⁹) possible unique /64 subnetworks.
Mathematics – Rubik's Cube: There are 43,252,003,274,489,856,000 (≈4.33 × 10¹⁹) different positions of a 3×3×3 Rubik's Cube.
Password strength: Usage of the 95-character set found on standard computer keyboards for a 10-character password yields a computationally intractable 59,873,693,923,837,890,625 (95¹⁰, approximately 5.99 × 10¹⁹) permutations.
Economics: Hyperinflation in Zimbabwe estimated in February 2009 by some economists at 10 sextillion percent, or a factor of 10²⁰.
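The wheat-and-chessboard figure above is the sum of a geometric series, which a two-line check confirms:

    grains = sum(2**square for square in range(64))   # 1 grain, doubled per square
    print(grains, grains == 2**64 - 1)                # 18446744073709551615 True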
10²¹
(1,000,000,000,000,000,000,000; 1000⁷; short scale: one sextillion; long scale: one thousand trillion, or one trilliard)
ISO: zetta- (Z)
Geo – Grains of sand: All the world's beaches combined have been estimated to hold roughly 10²¹ grains of sand.
Computing – Manufacturing: Intel predicted that there would be 1.2 × 10²¹ transistors in the world by 2015 and Forbes estimated that 2.9 × 10²¹ transistors had been shipped up to 2014.
Mathematics – Sudoku: There are 6,670,903,752,021,072,936,960 (≈6.7 × 10²¹) 9×9 sudoku grids.
Mathematics: The first case of exactly 20 prime numbers between multiples of 100 is 20,386,095,164,137,273,086,400 + n, for n = 1, 3, 7, 9, 13, 19, 21, 31, 33, 37, 49, 57, 63, 73, 79, 87, 91, 93, 97, 99.
Astronomy – Stars: 70 sextillion = 7 × 10²², the estimated number of stars within range of telescopes (as of 2003).
Astronomy – Stars: in the range of 10²³ to 10²⁴ stars in the observable universe.
Mathematics: 146,361,946,186,458,562,560,000 (≈1.5 × 10²³) is the fifth unitary perfect number.
Mathematics: 357,686,312,646,216,567,629,137 (≈3.6 × 10²³) is the largest left-truncatable prime; it is rebuilt in the sketch below.
Chemistry – Physics: The Avogadro constant (6.022 140 76 × 10²³ mol⁻¹) is the number of constituents (e.g. atoms or molecules) in one mole of a substance, defined for convenience as expressing the order of magnitude separating the molecular from the macroscopic scale.
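The largest left-truncatable prime quoted above can be rebuilt by breadth-first search: start from the single-digit primes and repeatedly prepend a nonzero digit, keeping only results that remain prime. A sketch follows; it delegates primality testing to sympy.isprime, an assumed third-party dependency:

    from sympy import isprime

    layer = [2, 3, 5, 7]       # the left-truncatable primes with one digit
    largest, width = 7, 1
    while layer:
        largest = max(layer)   # longest numbers are largest, so track the deepest layer
        layer = [d * 10**width + p
                 for p in layer for d in range(1, 10)
                 if isprime(d * 10**width + p)]
        width += 1
    print(largest)             # 357686312646216567629137

The search terminates because the family is finite: only 4,260 left-truncatable primes exist.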
10²⁴
(1,000,000,000,000,000,000,000,000; 1000⁸; short scale: one septillion; long scale: one quadrillion)
ISO: yotta- (Y)
Mathematics: 2,833,419,889,721,787,128,217,599 (≈2.8 × 10²⁴) is the fifth Woodall prime.
Mathematics: 3,608,528,850,368,400,786,036,725 (≈3.6 × 10²⁴) is the largest polydivisible number.
Mathematics: 2⁸⁶ = 77,371,252,455,336,267,181,195,264 is the largest known power of two not containing the digit '0' in its decimal representation.
10²⁷
(1,000,000,000,000,000,000,000,000,000; 1000⁹; short scale: one octillion; long scale: one thousand quadrillion, or one quadrilliard)
ISO: ronna- (R)
Biology – Atoms in the human body: the average human body contains roughly 7 × 10²⁷ atoms.
Mathematics – Poker: the number of unique combinations of hands and shared cards in a 10-player game of Texas hold 'em is approximately 2.117 × 10²⁸.
10³⁰
(1,000,000,000,000,000,000,000,000,000,000; 1000¹⁰; short scale: one nonillion; long scale: one quintillion)
ISO: quetta- (Q)
Biology – Bacterial cells on Earth: The number of bacterial cells on Earth is estimated at 5,000,000,000,000,000,000,000,000,000,000, or 5 × 10³⁰.
Mathematics: 5,000,000,000,000,000,000,000,000,000,027 is the largest quasi-minimal prime.
Mathematics: The number of partitions of 1000 is 24,061,467,864,032,622,473,692,149,727,991.
Mathematics: 3⁶⁸ = 278,128,389,443,693,511,257,285,776,231,761 is the largest known power of three not containing the digit '0' in its decimal representation.
Mathematics: 2¹⁰⁸ = 324,518,553,658,426,726,783,156,020,576,256 is the largest known power of two not containing the digit '9' in its decimal representation.
Mathematics: 7³⁹ = 909,543,680,129,861,140,820,205,019,889,143 is the largest known power of 7 not containing the digit '7' in its decimal representation.
10³³
(1,000,000,000,000,000,000,000,000,000,000,000; 1000¹¹; short scale: one decillion; long scale: one thousand quintillion, or one quintilliard)
Mathematics – Alexander's Star: There are 72,431,714,252,715,638,411,621,302,272,000,000 (about 7.24 × 10³⁴) different positions of Alexander's Star.
10³⁶
(1,000,000,000,000,000,000,000,000,000,000,000,000; 1000¹²; short scale: one undecillion; long scale: one sextillion)
Mathematics: 2¹²⁷ − 1 (i.e., 2^(2⁷−1) − 1) = 170,141,183,460,469,231,731,687,303,715,884,105,727 (≈1.7 × 10³⁸) is the largest known double Mersenne prime and the 12th Mersenne prime.
Computing: 2¹²⁸ = 340,282,366,920,938,463,463,374,607,431,768,211,456 (≈3.40282367 × 10³⁸), the theoretical maximum number of Internet addresses that can be allocated under the IPv6 addressing system, one more than the largest value that can be represented by a single-precision IEEE floating-point value, the total number of different Universally Unique Identifiers (UUIDs) that can be generated.
Cryptography: 2¹²⁸ = 340,282,366,920,938,463,463,374,607,431,768,211,456 (≈3.40282367 × 10³⁸), the total number of different possible keys in the AES 128-bit key space (symmetric cipher).
10³⁹
(1,000,000,000,000,000,000,000,000,000,000,000,000,000; 1000¹³; short scale: one duodecillion; long scale: one thousand sextillion, or one sextilliard)
Cosmology: The Eddington–Dirac number is roughly 10⁴⁰.
Mathematics: 97# × 2⁵ × 3³ × 5 × 7 = 69,720,375,229,712,477,164,533,808,935,312,303,556,800 (≈6.97 × 10⁴⁰) is the least common multiple of every integer from 1 to 100.
10⁴² to 10¹⁰⁰
(1,000,000,000,000,000,000,000,000,000,000,000,000,000,000; 1000¹⁴; short scale: one tredecillion; long scale: one septillion)
Mathematics: 141 × 2¹⁴¹ + 1 = 393,050,634,124,102,232,869,567,034,555,427,371,542,904,833 (≈3.93 × 10⁴⁴) is the second Cullen prime.
Mathematics: There are 7,401,196,841,564,901,869,874,093,974,498,574,336,000,000,000 (≈7.4 × 10⁴⁵) possible permutations for the Rubik's Revenge (4×4×4 Rubik's Cube).
Chess: 4.52 × 10⁴⁶ is a proven upper bound for the number of chess positions allowed according to the rules of chess.
Geo: 1.33 × 10⁵⁰ is the estimated number of atoms on Earth.
Mathematics: 2¹⁶⁸ = 374,144,419,156,711,147,060,143,317,175,368,453,031,918,731,001,856 is the largest known power of two which is not pandigital: There is no digit '2' in its decimal representation.
Mathematics: 3¹⁰⁶ = 375,710,212,613,636,260,325,580,163,599,137,907,799,836,383,538,729 is the largest known power of three which is not pandigital: There is no digit '4'.
Mathematics: 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 (≈8.08 × 10⁵³) is the order of the monster group.
Cryptography: 2¹⁹² = 6,277,101,735,386,680,763,835,789,423,207,666,416,102,355,444,464,034,512,896 (≈6.27710174 × 10⁵⁷), the total number of different possible keys in the Advanced Encryption Standard (AES) 192-bit key space (symmetric cipher).
Cosmology: 8 × 10⁶⁰ is roughly the number of Planck time intervals since the universe is theorised to have been created in the Big Bang 13.799 ± 0.021 billion years ago.
Cosmology: 10⁶³ is Archimedes' estimate in The Sand Reckoner of the total number of grains of sand that could fit into the entire cosmos, the diameter of which he estimated in stadia to be what we call 2 light-years.
Mathematics – Cards: 52! = 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000 (≈8.07 × 10⁶⁷) – the number of ways to order the cards in a 52-card deck; it is computed in the sketch below.
Mathematics: There are ≈1.01 × 10⁶⁸ possible combinations for the Megaminx.
Mathematics: 1,808,422,353,177,349,564,546,512,035,512,530,001,279,481,259,854,248,860,454,348,989,451,026,887 (≈1.81 × 10⁷²) – The largest known prime factor found by Lenstra elliptic-curve factorization (LECF).
Mathematics: There are 282,870,942,277,741,856,536,180,333,107,150,328,293,127,731,985,672,134,721,536,000,000,000,000,000 (≈2.83 × 10⁷⁴) possible permutations for the Professor's Cube (5×5×5 Rubik's Cube).
Cryptography: 2²⁵⁶ = 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 (≈1.15792089 × 10⁷⁷), the total number of different possible keys in the Advanced Encryption Standard (AES) 256-bit key space (symmetric cipher).
Cosmology: Various sources estimate the total number of fundamental particles in the observable universe to be within the range of 10⁸⁰ to 10⁸⁵ (WMAP – Content of the Universe, map.gsfc.nasa.gov, 16 April 2010; retrieved 1 May 2011). However, these estimates are generally regarded as guesswork. (Compare the Eddington number, the estimated total number of protons in the observable universe.)
Computing: 9.999999 × 10⁹⁶ is equal to the largest value that can be represented in the IEEE decimal32 floating-point format.
Computing: 69! (roughly 1.7112245 × 10⁹⁸) is the largest factorial value that can be represented on a calculator with two digits for powers of ten without overflow.
Mathematics: One googol, 10¹⁰⁰, 1 followed by one hundred zeros, or 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
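The 52! entry above, and its relation to the googol that opens the next band, take one line each to confirm:

    from math import factorial

    deck = factorial(52)       # about 8.07 x 10^67 orderings of a deck
    print(deck < 10**100)      # True: 52! is still dwarfed by a googol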
10¹⁰⁰ (one googol) to 10¹⁰⁰⁰
(10¹⁰⁰; short scale: ten duotrigintillion; long scale: ten thousand sexdecillion, or ten sexdecilliard)
Mathematics: There are 157 152 858 401 024 063 281 013 959 519 483 771 508 510 790 313 968 742 344 694 684 829 502 629 887 168 573 442 107 637 760 000 000 000 000 000 000 000 000 (≈1.57 × 10¹¹⁶) distinguishable permutations of the V-Cube 6 (6×6×6 Rubik's Cube).
Chess: Shannon number, 10¹²⁰, a lower bound of the game-tree complexity of chess.
Physics: 10¹²⁰, discrepancy between the observed value of the cosmological constant and a naive estimate based on Quantum Field Theory and the Planck energy.
Physics: 8 × 10¹²⁰, ratio of the mass-energy in the observable universe to the energy of a photon with a wavelength the size of the observable universe.
Mathematics: 19 568 584 333 460 072 587 245 340 037 736 278 982 017 213 829 337 604 336 734 362 294 738 647 777 395 483 196 097 971 852 999 259 921 329 236 506 842 360 439 300 (≈1.96 × 10¹²¹) is the period of Fermat pseudoprimes.
History – Religion: Asaṃkhyeya is a Buddhist name for the number 10¹⁴⁰. It is listed in the Avatamsaka Sutra and metaphorically means "innumerable" in the Sanskrit language of ancient India.
Xiangqi: 10¹⁵⁰, an estimation of the game-tree complexity of xiangqi.
Mathematics: There are 19 500 551 183 731 307 835 329 126 754 019 748 794 904 992 692 043 434 567 152 132 912 323 232 706 135 469 180 065 278 712 755 853 360 682 328 551 719 137 311 299 993 600 000 000 000 000 000 000 000 000 000 000 000 (≈1.95 × 10¹⁶⁰) distinguishable permutations of the V-Cube 7 (7×7×7 Rubik's Cube).
Go: There are 208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935 (≈2.08 × 10¹⁷⁰) legal positions in the game of Go. See Go and mathematics.
Economics: The annualized rate of the hyperinflation in Hungary in 1946 was estimated to be 2.9 × 10¹⁷⁷ percent. It was the most extreme case of hyperinflation ever recorded.
Board games: 3.457, number of ways to arrange the tiles in English Scrabble on a standard 15-by-15 Scrabble board.
Physics: 10¹⁸⁶, approximate number of Planck volumes in the observable universe.
Shogi: 10²²⁶, an estimation of the game-tree complexity of shogi.
Physics: 7 × 10²⁴⁵, approximate spacetime volume of the history of the observable universe in Planck units.
Computing: 1.797 693 134 862 315 807 × 10³⁰⁸ is approximately equal to the largest value that can be represented in the IEEE double precision floating-point format.
Computing: (10 − 10⁻¹⁵) × 10³⁸⁴ is equal to the largest value that can be represented in the IEEE decimal64 floating-point format.
Mathematics: 997# × 31# × 7 × 5² × 3⁴ × 2⁷ = 7 128 865 274 665 093 053 166 384 155 714 272 920 668 358 861 885 893 040 452 001 991 154 324 087 581 111 499 476 444 151 913 871 586 911 717 817 019 575 256 512 980 264 067 621 009 251 465 871 004 305 131 072 686 268 143 200 196 609 974 862 745 937 188 343 705 015 434 452 523 739 745 298 963 145 674 982 128 236 956 232 823 794 011 068 809 262 317 708 861 979 540 791 247 754 558 049 326 475 737 829 923 352 751 796 735 248 042 463 638 051 137 034 331 214 781 746 850 878 453 485 678 021 888 075 373 249 921 995 672 056 932 029 099 390 891 687 487 672 697 950 931 603 520 000 (≈7.13 × 10⁴³²) is the least common multiple of every integer from 1 to 1000.
10¹⁰⁰⁰ to 10^(10¹⁰⁰) (one googolplex)
Mathematics: There are approximately 1.869 × 10⁴⁰⁹⁹ distinguishable permutations of the world's largest Rubik's Cube (33×33×33).
Computing: 1.189 731 495 357 231 765 05 × 10⁴⁹³² is approximately equal to the largest value that can be represented in the IEEE 80-bit x86 extended precision floating-point format.
Computing: 1.189 731 495 357 231 765 085 759 326 628 007 0 × 10⁴⁹³² is approximately equal to the largest value that can be represented in the IEEE quadruple-precision floating-point format.
Computing: (10 − 10⁻³³) × 10⁶¹⁴³ is equal to the largest value that can be represented in the IEEE decimal128 floating-point format.
Computing: 10^10,000 − 1 is equal to the largest value that can be represented in Windows Phone's calculator.
Mathematics: 104,824^5 + 5^104,824 is the largest proven Leyland prime, with 73,269 digits.
Mathematics: approximately 7.76 × 10^206,544 cattle in the smallest herd which satisfies the conditions of Archimedes's cattle problem.
Mathematics: 2,618,163,402,417 × 2^1,290,000 − 1 is a 388,342-digit Sophie Germain prime; the largest known.
Mathematics: 2,996,863,034,895 × 2^1,290,000 ± 1 are 388,342-digit twin primes; the largest known.
Mathematics: 3,267,113# − 1 is a 1,418,398-digit primorial prime; the largest known.
Mathematics – Literature: Jorge Luis Borges' Library of Babel contains at least 25^1,312,000 ≈ 1.956 × 10^1,834,097 books (this is a lower bound).
Mathematics: 10^1,888,529 − 10^944,264 − 1 is a 1,888,529-digit palindromic prime, the largest known.
Mathematics: 4 × 72^1,119,849 − 1 is the smallest prime of the form 4 × 72ⁿ − 1.
Mathematics: 422,429! + 1 is a 2,193,027-digit factorial prime; the largest known.
Mathematics: (2^15,135,397 + 1)/3 is a 4,556,209-digit Wagstaff probable prime, the largest known.
Mathematics: 1,963,736^1,048,576 + 1 is a 6,598,776-digit Generalized Fermat prime, the largest known.
Mathematics: (10^8,177,207 − 1)/9 is an 8,177,207-digit probable prime, the largest known.
Mathematics: 10,223 × 2^31,172,165 + 1 is a 9,383,761-digit Proth prime, the largest known Proth prime and non-Mersenne prime.
Mathematics: 2^82,589,933 − 1 is a 24,862,048-digit Mersenne prime; the largest known prime of any kind.
Mathematics: 2^82,589,932 × (2^82,589,933 − 1) is a 49,724,095-digit perfect number, the largest known as of 2020.
Mathematics – History: 10^(8 × 10¹⁶), the largest named number in Archimedes' Sand Reckoner.
Mathematics: 10^googol (i.e., 10^(10¹⁰⁰)), a googolplex: a number written as 1 followed by a googol of zeros. Carl Sagan has estimated that a googolplex, fully written out, would not fit in the observable universe because of its size, while also noting that one could simply write the number as 10^(10¹⁰⁰).
Larger than 10^(10¹⁰⁰)
(One googolplex; 10^googol; short scale: googolplex; long scale: googolplex)
Go: There are at least 10^(10¹⁰⁸) legal games of Go. See Game Tree Complexity.
Mathematics – Literature: The number of different ways in which the books in Jorge Luis Borges' Library of Babel can be arranged is approximately (1.956 × 10^1,834,097)!, the factorial of the number of books in the Library of Babel.
Cosmology: In chaotic inflation theory, proposed by physicist Andrei Linde, our universe is one of many other universes with different physical constants that originated as part of our local section of the multiverse, owing to a vacuum that had not decayed to its ground state. According to Linde and Vanchurin, the total number of these universes is about 10^(10^(10⁷)).
Mathematics: 10^(10^(10³⁴)), order of magnitude of an upper bound that occurred in a proof of Skewes (this was later estimated to be closer to 1.397 × 10³¹⁶).
Cosmology: The estimated number of Planck time units for quantum fluctuations and tunnelling to generate a new Big Bang is 10^(10^(10⁵⁶)).
Mathematics: 10^(10^(10¹⁰⁰)), a number in the googol family called a googolplexplex, googolplexian, or googolduplex: 1 followed by a googolplex zeros, or 10^googolplex.
Cosmology: The uppermost estimate of the size of the entire universe is approximately 10^(10^(10¹²²)) times that of the observable universe.
Mathematics: 10^(10^(10⁹⁶⁴)), order of magnitude of another upper bound in a proof of Skewes.
Mathematics: Steinhaus' mega lies between 10[4]257 and 10[4]258 (where a[n]b is hyperoperation).
Mathematics: Moser's number, "2 in a mega-gon" in Steinhaus–Moser notation, is approximately equal to 10[10[4]257]10, the last four digits are ...1056.
Mathematics: Graham's number, the last ten digits of which are ...2464195387. Arises as an upper bound solution to a problem in Ramsey theory. Representation in powers of 10 would be impractical (the number of 10s in the power tower would be virtually indistinguishable from the number itself).
Mathematics: TREE(3): appears in relation to a theorem on trees in graph theory. Representation of the number is difficult, but one weak lower bound is A^(A(187196))(1), where A(n) is a version of the Ackermann function and the superscript denotes function iteration.
Mathematics: SSCG(3): appears in relation to the Robertson–Seymour theorem. Known to be greater than TREE(3).
Mathematics: Transcendental integers: a set of numbers defined in 2000 by Harvey Friedman, appears in proof theory.
Mathematics: Rayo's number is a large number named after Agustín Rayo which has been claimed to be the largest number to have ever been named. It was originally defined in a "big number duel" at MIT on 26 January 2007.
See also
Conway chained arrow notation
Encyclopedic size comparisons on Wikipedia
Fast-growing hierarchy
Indian numbering system
Infinity
Large numbers
List of numbers
Mathematical constant
Names of large numbers
Names of small numbers
Power of 10
References
External links
Seth Lloyd's paper Computational capacity of the universe provides a number of interesting dimensionless quantities.
Notable properties of specific numbers
Numbers | Orders of magnitude (numbers) | [
"Mathematics"
] | 13,006 | [
"Quantity",
"Orders of magnitude",
"Units of measurement"
] |
261,151 | https://en.wikipedia.org/wiki/XML%20pipeline | In software, an XML pipeline is formed when XML (Extensible Markup Language) processes, especially XML transformations and XML validations, are connected.
For instance, given two transformations T1 and T2, the two can be connected so that an input XML document is transformed by T1 and then the output of T1 is fed as input document to T2. Simple pipelines like the one described above are called linear; a single input document always goes through the same sequence of transformations to produce a single output document.
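A linear two-stage pipeline of this kind can be sketched in a few lines of Python using the third-party lxml library (the stylesheet and document file names here are hypothetical):

    from lxml import etree

    t1 = etree.XSLT(etree.parse("t1.xsl"))   # first transformation
    t2 = etree.XSLT(etree.parse("t2.xsl"))   # second transformation

    doc = etree.parse("in.xml")
    result = t2(t1(doc))                     # the output of T1 becomes the input of T2
    print(str(result))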
Linear operations
Linear operations can be divided in at least two parts
Micro-operations
They operate at the inner document level
Rename - renames elements or attributes without modifying the content
Replace - replaces elements or attributes
Insert - adds a new data element to the output stream at a specified point
Delete - removes an element or attribute (also known as pruning the input tree)
Wrap - wraps elements with additional elements
Reorder - changes the order of elements
Document operations
They take the input document as a whole
Identity transform - makes a verbatim copy of its input to the output
Compare - takes two documents and compares them
Transform - executes a transformation on the input file using a specified XSLT stylesheet; the XSLT version (1.0 or 2.0) should be specified
Split - takes a single XML document and splits it into distinct documents
Sequence operations
They are mainly introduced in XProc and help to handle a sequence of documents as a whole
Count - it takes a sequence of documents and counts them
Identity transform - makes a verbatim copy of its input sequence of documents to the output
split-sequence - takes a sequence of documents as input and routes them to different outputs depending on matching rules
wrap-sequence - takes a sequence of documents as input and wraps them into one or more documents
Non-linear
Non-linear operations on pipelines may include:
Conditionals — where a given transformation is executed if a condition is met while another transformation is executed otherwise
Loops — where a transformation is executed on each node of a node set selected from a document or a transformation is executed until a condition evaluates to false
Tees — where a document is fed to multiple transformations potentially happening in parallel
Aggregations — where multiple documents are aggregated into a single document
Exception Handling — where failures in processing can result in an alternate pipeline being processed
Some standards also categorize transformations as macro (changes impacting an entire file) or micro (impacting only an element or attribute).
XML pipeline languages
XML pipeline languages are used to define pipelines. A program written with an XML pipeline language is implemented by software known as an XML pipeline engine, which creates processes, connects them together and finally executes the pipeline. Existing XML pipeline languages include:
Standards
XProc: An XML Pipeline Language is a W3C Recommendation for defining linear and non-linear XML pipelines.
Product-specific
W3C XML Pipeline Definition Language is specified in a W3C Note.
W3C XML Pipeline Language (XPL) Version 1.0 (Draft) is specified in a W3C Submission and is a component of Orbeon Presentation Server (OPS, now called Orbeon Forms), which provides an implementation of an earlier version of the language. XPL allows the declaration of complex pipelines with conditionals, loops, tees, aggregations, and sub-pipelines. XProc is roughly a superset of XPL.
Cocoon sitemaps allow, among other functionality, the declaration of XML pipelines. Cocoon sitemaps are one of the earliest implementations of the concept of XML pipeline.
smallx XML Pipelines are used by the smallx project.
ServingXML defines a vocabulary for expressing flat-XML, XML-flat, flat-flat, and XML-XML transformations in pipelines.
PolarLake Circuit Markup Language used by PolarLake's runtime to define XML pipelines. Circuits are collections of paths through which fragments of XML stream (usually as SAX or DOM events). Components are placed on paths to interact with the stream (and/or the outside world) in a low latency process.
xmlsh is a scripting language based on the unix shells which natively supports xml and text pipelines
Stylus Studio XML Pipeline is a visual grammar which defines the following operations: Input, Output, XQuery, XSLT, Validate, XSL-FO to PDF, Convert To XML, Convert From XML, Choose, Warning, Stop.
Pipe granularity
Different XML Pipeline implementations support different granularity of flow.
Document: Whole documents flow through the pipe as atomic units. A document can only be in one place at a time, though usually multiple documents may be in the pipe at once.
Event: Element/text node events may flow through different paths. A document may be concurrently flowing through many components at the same time. Both granularities are illustrated in the sketch below.
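Python's standard library is enough to illustrate the contrast (the input file name is hypothetical): document granularity materialises the whole tree before anything can flow onward, while event granularity streams parse events as they occur:

    import xml.etree.ElementTree as ET

    # Document granularity: the document is one atomic unit, fully in memory.
    tree = ET.parse("in.xml")

    # Event granularity: start/end events stream through and could be routed
    # to different paths while parsing is still in progress.
    for event, elem in ET.iterparse("in.xml", events=("start", "end")):
        pass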
Standardization
Until May 2010, there was no widely used standard for XML pipeline languages. However, with the introduction of the W3C XProc standard as a W3C Recommendation as of May 2010, widespread adoption can be expected.
History
1972 Douglas McIlroy of Bell Laboratories adds the pipe operator to the UNIX command shell. This allows the output from one shell program to go directly into the input of another shell program without going to disk. This allowed programs such as the UNIX awk and sed to be specialized yet work together. For more details see Pipeline (Unix).
1993 Sean McGrath developed a C++ toolkit for SGML processing.
1998 Stefano Mazzocchi releases the first version of Apache Cocoon, one of the first software programs to use XML pipelines.
1998 PolarLake builds its XML Operating System, which includes XML pipelining.
2002 Notes submitted by Norman Walsh and Eve Maler from Sun Microsystems, as well as a W3C Submission submitted in 2005 by Erik Bruchez and Alessandro Vernet from Orbeon, were important steps toward spawning an actual standardization effort. While neither submission directly became a W3C recommendation, they were considered key sources of inspiration for the W3C XML Processing Working Group.
September 2005 W3C XML Processing Working Group started. The task of this working group was to create a specification for an XML pipelining language.
August 2008 xmlsh, an XML pipeline language, was announced at Balisage 2008
See also
Apache Cocoon
Identity transform
NetKernel
Pipeline (Unix)
W3C recommendation
XSLT
References
External links
Standards
Recommendations
XProc: An XML Pipeline Language, W3C Recommendation 11 May 2010
Working drafts
W3C XML Processing Model Working Group
W3C XML Pipeline Definition Language Note
W3C XML Pipeline Language (XPL) Version 1.0 (Draft) Submission
Product specific
XProc tutorial and reference
Oracle's XML Pipeline Definition Language Controller Implementation Part of XML Developer's kit, no individual download
Cocoon sitemap
NetKernel XML Pipelines
Managing Complex Document Generation through Pipelining
XML Pipeline Language (XPL) Documentation
SXPipe
PolarLake Reference data management PolarLake XML circuits and reference data management
smallx
ServingXML
XML Pipeline Implementation from Stylus Studio - This program allows XML transforms to be chained together along with other operations on XML files such as validation and HTML Tidy.
IVI XML Pipeline Server XML Pipeline Server is an implementation for the Stylus Studio XML Pipeline language
Norman Walsh's XProc web site - Norman Walsh is the chair of the W3C XProc standards committee.
yax - an XProc Implementation currently with commandline and Apache ant interface
Yahoo! Pipes lets users create multi-source data mashups in a web-based visual environment
xmlsh A shell for manipulating xml based on the unix shells. Supports in-process multithreaded xml and text processing pipelines.
How to implement XML Pipeline in XSLT
Calabash is an implementation of XProc
Calumet is an XProc implementation from EMC
QuiXProc is an XProc implementation of Innovimax
XML-based standards
Inter-process communication | XML pipeline | [
"Technology"
] | 1,629 | [
"Computer standards",
"XML-based standards"
] |
261,362 | https://en.wikipedia.org/wiki/ITER | ITER (initially the International Thermonuclear Experimental Reactor, iter meaning "the way" or "the path" in Latin) is an international nuclear fusion research and engineering megaproject aimed at creating energy through a fusion process similar to that of the Sun. It is being built next to the Cadarache facility in southern France. Upon completion of construction of the main reactor and first plasma, planned for 2033–2034, ITER will be the largest of more than 100 fusion reactors built since the 1950s, with six times the plasma volume of JT-60SA in Japan, the largest tokamak operating today.
The long-term goal of fusion research is to generate electricity. ITER's stated purpose is scientific research, and technological demonstration of a large fusion reactor, without electricity generation. ITER's goals are to achieve enough fusion to produce 10 times as much thermal output power as thermal power absorbed by the plasma for short time periods; to demonstrate and test technologies that would be needed to operate a fusion power plant including cryogenics, heating, control and diagnostics systems, and remote maintenance; to achieve and learn from a burning plasma; to test tritium breeding; and to demonstrate the safety of a fusion plant.
ITER's thermonuclear fusion reactor will use over 300 MW of electrical power to cause the plasma to absorb 50 MW of thermal power, creating 500 MW of heat from fusion for periods of 400 to 600 seconds. This would mean a ten-fold gain of plasma heating power (Q), as measured by heating input to thermal output, or Q ≥ 10. The record for energy production using nuclear fusion is held by the National Ignition Facility reactor, which achieved a Q of 1.5 in December 2022. Beyond just heating the plasma, the total electricity consumed by the reactor and facilities will range from 110 MW up to 620 MW peak for 30-second periods during plasma operation. As a research reactor, the heat energy generated will not be converted to electricity, but simply vented.
ITER is funded and run by seven member parties: China, the European Union, India, Japan, Russia, South Korea and the United States. In the immediate aftermath of Brexit, the United Kingdom continued to participate in ITER through the EU's Fusion for Energy (F4E) program; however, in September 2023, the UK decided to discontinue its participation in ITER via F4E, and by March 2024 had rejected an invitation to join ITER directly, deciding instead to pursue its own independent fusion research program. Switzerland participated through Euratom and F4E, but the EU effectively suspended Switzerland's participation in response to the May 2021 collapse in talks on an EU-Swiss framework agreement; Switzerland is currently considered a non-participant pending resolution of its dispute with the EU. The project also has cooperation agreements with Australia, Canada, Kazakhstan and Thailand.
Construction of the ITER complex in France started in 2013, and assembly of the tokamak began in 2020. The initial budget was close to €6 billion, but the total price of construction and operations is projected to be from €18 to €22 billion; other estimates place the total cost between $45 billion and $65 billion, though these figures are disputed by ITER. Regardless of the final cost, ITER has already been described as the most expensive science experiment of all time, the most complicated engineering project in human history, and one of the most ambitious human collaborations since the development of the International Space Station (€100 billion or $150 billion budget) and the Large Hadron Collider (€7.5 billion budget).
ITER's planned successor, the EUROfusion-led DEMO, is expected to be one of the first fusion reactors to produce electricity in an experimental environment.
Background
Fusion aims to replicate the process that takes place in stars where the intense heat at the core fuses together nuclei and produces large amounts of energy in the form of heat and light. Harnessing fusion power in terrestrial conditions would provide sufficient energy to satisfy mounting demand, and to do so in a sustainable manner that has a relatively small impact on the environment. One gram of deuterium-tritium fuel mixture in the process of nuclear fusion produces 90,000 kilowatt-hours of energy, or the equivalent of 11 tonnes of coal.
Nuclear fusion uses a different approach from traditional nuclear energy. Current nuclear power stations rely on nuclear fission with the nucleus of an atom being split to release energy. Nuclear fusion takes multiple nuclei and uses intense heat to fuse them together, a process that also releases energy.
Nuclear fusion has many potential attractions. The fuel is relatively abundant or can be produced in a fusion reactor. After preliminary tests with deuterium, ITER will use a mix of deuterium-tritium for its fusion because of the combination's high energy potential and because this fusion reaction is the easiest to run. The first isotope, deuterium, can be extracted from seawater, making it a nearly inexhaustible resource. The second isotope, tritium, only occurs in trace amounts in nature and the estimated world supply (mainly produced by heavy-water CANDU fission reactors) is just 20 kilograms per year, insufficient for power plants. ITER will be testing tritium breeding blanket technology that would allow a future fusion reactor to create its own tritium and thus be self-sufficient. Furthermore, a fusion reactor would produce virtually no CO2 emissions or atmospheric pollutants, there would be no chance of a meltdown, and its radioactive waste products would mostly be very short-lived compared to those produced by conventional nuclear reactors (fission reactors).
On 21 November 2006, the seven project partners formally agreed to fund the creation of a nuclear fusion reactor. The program is anticipated to last for 30 years – 10 years for construction, and 20 years of operation. ITER was originally expected to cost approximately €5 billion. However, delays, the rising price of raw materials, and changes to the initial design have seen the official budget estimate rise to between €18 billion and €20 billion.
The reactor was expected to take 10 years to build, and ITER had planned to test its first plasma in 2020 and achieve full fusion by 2023. In 2024, ITER published a new schedule with deuterium-deuterium plasma operations starting in 2035. Site preparation began near the Cadarache center in France, and French President Emmanuel Macron launched the assembly phase of the project at a ceremony in 2020. Under the revised schedule, work to achieve the first hydrogen plasma discharge was 70% complete in the middle of 2020 and considered to be on track.
One of the ITER objectives is a Q-value ("fusion gain") of 10. Q = 1 is called "breakeven". The best result achieved in a tokamak is 0.67 in the JET tokamak. The best result achieved for fusion in general is Q = 1.5, achieved in an inertial confinement fusion (ICF) experiment by the National Ignition Facility in late 2022.
For commercial fusion power stations, engineering gain factor is important. Engineering gain factor is defined as the ratio of a plant electrical power output to electrical power input of all plant's internal systems (tokamak external heating systems, electromagnets, cryogenics plant, diagnostics and control systems, etc.). Commercial fusion plants will be designed with engineering breakeven in mind (see DEMO). Some nuclear engineers consider a Q of 100 to be required for commercial fusion power stations to be viable.
ITER will not produce electricity. Producing electricity from thermal sources is a well known process (used in many power stations) and ITER will not run with significant fusion power output continuously. Adding electricity production to ITER would raise the cost of the project and bring no value for experiments on the tokamak. The DEMO-class reactors that are planned to follow ITER are intended to demonstrate the net production of electricity.
One of the primary ITER objectives is to achieve a state of "burning plasma". Burning plasma is the state of the plasma when more than 50% of the energy received for plasma heating is received from fusion reactions (not from external sources). No fusion reactors had created a burning plasma until the competing NIF fusion project reached the milestone on 8 August 2021 using inertial confinement. At higher Q values, progressively bigger parts of plasma heating power will be produced by fusion reactions. This reduces the power needed from external heating systems at high Q values. The bigger a tokamak is, the more fusion-reaction-produced energy is preserved for internal plasma heating (and the less external heating is required), which also improves its Q-value. This is how ITER plans for its tokamak reactor to scale.
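A back-of-the-envelope calculation ties these figures together. In deuterium-tritium fusion, roughly a fifth of the fusion power (the 3.5 MeV alpha particle out of 17.6 MeV released per reaction) stays in the plasma as self-heating, so ITER's target numbers already imply a burning plasma; the sketch below uses only the values quoted in this article:

    p_external = 50.0                  # MW of external heating absorbed by the plasma
    p_fusion = 500.0                   # MW of fusion power targeted by ITER

    q = p_fusion / p_external          # 10.0, the stated tenfold gain
    alpha_heating = p_fusion * 3.5 / 17.6          # ~99 MW retained by the plasma
    fraction = alpha_heating / (alpha_heating + p_external)
    print(q, round(fraction, 2))       # 10.0 0.67 - above the 50% burning-plasma threshold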
Organisation history
The initial international cooperation for a nuclear fusion project that was the foundation of ITER began in 1978 with the International Tokamak Reactor, or INTOR, which had four partners: the Soviet Union, the European Atomic Energy Community, the United States, and Japan. However, the INTOR project stalled until Mikhail Gorbachev became general secretary of the Communist Party of the Soviet Union in March 1985. Gorbachev first revived interest in a collaborative fusion project in an October 1985 meeting with French President François Mitterrand, and then the idea was further developed in November 1985 at the Geneva Summit with Ronald Reagan.
Preparations for the Gorbachev-Reagan summit showed that there were no tangible agreements in the works for the summit. However, the ITER project was gaining momentum in political circles due to the quiet work being done by two physicists, the American scientist Alvin Trivelpiece who served as Director of the Office of Energy Research in the 1980s and the Russian scientist Evgeny Velikhov who would become head of the Kurchatov Institute for nuclear research. The two scientists both supported a project to construct a demonstration fusion reactor. At the time, magnetic fusion research was ongoing in Japan, Europe, the Soviet Union and the US, but Trivelpiece and Velikhov believed that taking the next step in fusion research would be beyond the budget of any of the key nations and that collaboration would be useful internationally.
Dr. Michael Robert, then director of International Programs of the Office of Fusion Energy at the US Department of Energy, explained: 'In September 1985, I led a US science team to Moscow as part of our bilateral fusion activities. Velikhov proposed to me at lunch one day his idea of having the USSR and USA work together to proceed to a fusion reactor. My response was 'great idea', but from my position, I have no capability of pushing that idea upward to the President.'
This push for cooperation on nuclear fusion is cited as a key moment of science diplomacy, but nonetheless a major bureaucratic fight erupted in the US government over the project. One argument against collaboration was that the Soviets would use it to steal US technology and expertise. A second was symbolic and involved American criticism of how the Soviet physicist Andrei Sakharov was being treated. Sakharov was an early proponent of the peaceful use of nuclear technology, and along with Igor Tamm he developed the idea for the tokamak that is at the heart of nuclear fusion research. However, Sakharov also supported broader civil liberties in the Soviet Union, and his activism earned him both the 1975 Nobel Peace Prize and internal exile, which he protested with multiple hunger strikes. The United States National Security Council convened a meeting under the direction of William Flynn Martin to discuss the nuclear fusion project, which resulted in a consensus that the US should go forward with it.
This led to nuclear fusion cooperation being discussed at the Geneva summit and release of a historic joint statement from Reagan and Gorbachev that emphasized, "the potential importance of the work aimed at utilizing controlled thermonuclear fusion for peaceful purposes and, in this connection, advocated the widest practicable development of international cooperation in obtaining this source of energy, which is essentially inexhaustible, for the benefit of all mankind." For the fusion community, this statement was a breakthrough, and it was reinforced when Reagan evoked the possibilities of nuclear fusion in a Joint Session of Congress later in the month.
As a result, collaboration on an international fusion experiment began to move forward. In October 1986 at the Reykjavik Summit, the so-called 'Quadripartite Initiative Committee' (Europe through the Euratom countries, Japan, USSR, and the US) was formed to oversee the development of the project. The year after, in March 1987, the Quadripartite Initiative Committee met at the International Atomic Energy Agency (IAEA) headquarters in Vienna. This meeting marked the launch of the conceptual design studies for the experimental reactors as well as the start of negotiations for operational issues such as the legal foundations for the peaceful use of fusion technology, the organizational structure and staffing, and the eventual location for the project. This meeting in Vienna was also where the project was baptized the International Thermonuclear Experimental Reactor, though it was quickly called by its abbreviation alone and its Latin meaning of 'the way'.
Conceptual and engineering design phases were carried out under the auspices of the IAEA. The original technical objectives were established in 1992 and the original Engineering Design Activities (EDA) were completed in 1998. An acceptable, detailed design was validated in July 2001 to complete the extended EDA period, and the validated design then went through a Design Review that began November 2006 and concluded in December 2007. The design process was difficult with arguments over issues such as whether there should be circular cross sections for magnetic confinement or D-shaped cross sections. These issues were partly responsible for the United States temporarily exiting the project in 1998 before rejoining in 2003.
At this same time, the group of ITER partners was expanding; China and South Korea joined the project in 2003 and India formally joined in 2005.
There was a heated competition to host the ITER project, with the candidates narrowed down to two possible sites: France and Japan. Russia, China, and the European Union supported the choice of Cadarache in France, while the United States, South Korea, and Japan supported the choice of Rokkasho in Japan. In June 2005, it was officially announced that ITER would be built in the South of France at the Cadarache site. The negotiations that led to the decision ended in a compromise between the EU and Japan: Japan was promised 20% of the research staff at the French location of ITER, as well as the head of the administrative body of ITER. In addition, it was agreed that 8% of the ITER construction budget would go to partner facilities built in Japan.
On 21 November 2006, at a ceremony hosted by French President Jacques Chirac at the Élysée Palace in Paris, an international consortium signed a formal agreement to build the reactor. Initial work to clear the site for construction began in Cadarache in March 2007 and, once this agreement was ratified by all partners, the ITER Organization was officially established on 24 October 2007.
In 2016, Australia became the first non-member partner of the project. ITER signed a technical cooperation agreement with the Australian Nuclear Science and Technology Organisation (ANSTO), granting this country access to research results of ITER in exchange for the construction of selected parts of the ITER machine. In 2017, Kazakhstan signed a cooperation agreement that laid the groundwork for technical collaboration between the National Nuclear Center of the Republic of Kazakhstan and ITER. Most recently, after collaborating with ITER in the early stages of the project, Canada signed a cooperation agreement in 2020 with a focus on tritium and tritium-related equipment.
The project began its assembly phase in July 2020, launched by French President Emmanuel Macron in the presence of other members of the ITER project.
Directors-General
ITER is supervised by a governing body known as the ITER Council that is composed of representatives of the seven signatories to the ITER Agreement. The ITER Council is responsible for the overall direction of the organization and decides such issues as the budget.
The ITER Council also appoints the director-general of the project. There have been five directors-general so far:
2005–2010: Kaname Ikeda
2010–2015: Osamu Motojima
2015–2022: Bernard Bigot
2022: Eisuke Tada (acting)
2022–present: Pietro Barabaschi
Bernard Bigot was appointed to reform the management and governance of the ITER project in 2015. In January 2019, the ITER Council voted unanimously to reappoint Bigot for a second five-year term. Bigot died on 14 May 2022, and his deputy Eisuke Tada took over leadership of ITER during the search for a new director.
Objectives
ITER's stated mission is to demonstrate the feasibility of fusion power as a large-scale, carbon-free source of energy.
More specifically, the project has aims to:
Momentarily produce a fusion plasma with thermal power ten times greater than the injected thermal power (a Q value of 10).
Produce a steady-state plasma with a Q value greater than 5. (Q = 1 is scientific breakeven, as defined in fusion energy gain factor.)
Maintain a fusion pulse for up to 8 minutes.
Develop technologies and processes needed for a fusion power station — including superconducting magnets and remote handling (maintenance by robot).
Verify tritium breeding concepts.
Refine neutron shield / heat conversion technology (most of the energy in the D+T fusion reaction is released in the form of fast neutrons).
Experiment with burning plasma state.
The objectives of the ITER project are not limited to creating the nuclear fusion device itself. They are much broader, including building the technical, organizational, and logistical capabilities, skills, tools, supply chains, and culture needed to manage such megaprojects among the participating countries, bootstrapping their domestic nuclear fusion industries.
Timeline and status
ITER is nearly 85% complete toward first plasma. First plasma had been scheduled for late 2025; however, delays acknowledged in 2023 put this target out of reach. In July 2024, ITER announced a new schedule that included full plasma current in 2034, the start of operations with a deuterium-deuterium plasma in 2035, and deuterium-tritium operations in 2039.
The start of the project can be traced back to 1978 when the European Commission, Japan, United States, and USSR joined for the International Tokamak Reactor (INTOR) Workshop. This initiative was held under the auspices of the International Atomic Energy Agency and its goals were to assess the readiness of magnetic fusion to move forward to the experimental power reactor (EPR) stage, to identify the additional R&D that must be undertaken, and to define the characteristics of such an EPR by means of a conceptual design. From 1978 to the middle of the 1980s, hundreds of fusion scientists and engineers in each participating country took part in a detailed assessment of the tokamak confinement system and the design possibilities for harnessing nuclear fusion energy.
In 1985, at the Geneva summit meeting, Mikhail Gorbachev suggested to Ronald Reagan that the two countries jointly undertake the construction of a tokamak EPR as proposed by the INTOR Workshop. The ITER project was initiated in 1988.
Ground was broken in 2007 and construction of the ITER tokamak complex started in 2013.
Machine assembly was launched on 28 July 2020. Construction of the facility was originally expected to be completed in 2025, when commissioning of the reactor could commence, with initial plasma experiments scheduled to begin at the end of that year. When ITER becomes operational, it will be the largest magnetic confinement plasma physics experiment in use, with a plasma volume of 840 cubic meters, surpassing the Joint European Torus by a factor of 8.
On 3 July 2024, ITER director-general Pietro Barabaschi announced that first plasma production will not take place until at least 2033, and that full magnet energy will be reached no earlier than 2036, rather than 2033 as planned in 2016. He additionally said the cost of repairing some defective components was estimated at €5 billion.
Reactor overview
When deuterium and tritium fuse, two nuclei come together to form a helium nucleus (an alpha particle), and a high-energy neutron.
²H + ³H → ⁴He (3.5 MeV) + n (14.1 MeV)
While nearly all stable isotopes lighter than iron-56 and nickel-62 (which have the highest binding energy per nucleon) will fuse with some other isotope and release energy, deuterium and tritium are by far the most attractive for energy generation, as they require the lowest activation energy (and thus the lowest temperature) to do so, while producing among the most energy per unit mass.
All proto- and mid-life stars radiate enormous amounts of energy generated by fusion processes. Deuterium–tritium fusion releases, per mass, roughly three times as much energy as uranium-235 fission, and millions of times more energy than a chemical reaction such as the burning of coal. It is the goal of a fusion power station to harness this energy to produce electricity.
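That per-mass comparison can be checked on the back of an envelope: 17.6 MeV per roughly 5 amu of D-T fuel against roughly 200 MeV per 235 amu of uranium. This naive estimate lands at about a factor of four; quoted factors of about three differ mainly in what is counted (for example, recoverable versus total energy):

```python
MEV_TO_J = 1.602e-13    # joules per MeV
AMU_TO_KG = 1.6605e-27  # kilograms per atomic mass unit

# D-T fusion: 17.6 MeV per (2H + 3H) pair of about 5.03 amu.
e_fusion = 17.6 * MEV_TO_J / (5.03 * AMU_TO_KG)
# U-235 fission: about 200 MeV per nucleus of about 235 amu.
e_fission = 200 * MEV_TO_J / (235 * AMU_TO_KG)

print(f"fusion:  {e_fusion:.2e} J/kg")   # ~3.4e14 J/kg
print(f"fission: {e_fission:.2e} J/kg")  # ~8.2e13 J/kg
print(round(e_fusion / e_fission, 1))    # ~4 per kilogram of fuel
```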
Activation energies (in most fusion systems this is the temperature required to initiate the reaction) for fusion are generally high because the protons in each nucleus strongly repel one another, as they each have the same positive charge. A heuristic for estimating reaction rates is that nuclei must be able to get within 100 femtometers (10⁻¹³ meter) of each other, where the nuclei are increasingly likely to undergo quantum tunneling past the electrostatic barrier and the turning point where the strong nuclear force and the electrostatic force are equally balanced, allowing them to fuse. In ITER, this distance of approach is made possible by high temperatures and magnetic confinement. ITER uses cooling equipment like a cryopump to cool the magnets to near absolute zero. High temperatures give the nuclei enough energy to overcome their electrostatic repulsion (see Maxwell–Boltzmann distribution). For deuterium and tritium, the optimal reaction rates occur at temperatures higher than 10⁸ kelvin. At ITER, the plasma will be heated to 150 million kelvin (about ten times the temperature at the core of the Sun) by ohmic heating (running a current through the plasma). Additional heating is applied using neutral beam injection (which crosses magnetic field lines without a net deflection and will not cause a large electromagnetic disruption) and radio frequency (RF) or microwave heating.
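Plasma physicists usually quote these temperatures as energies. A one-liner converts ITER's 150 million kelvin into the customary units, showing that the typical thermal energy (about 13 keV) is far below the Coulomb barrier, which is why the high-energy tail of the Maxwell–Boltzmann distribution and quantum tunneling matter:

```python
K_B = 1.381e-23   # Boltzmann constant, J/K
EV = 1.602e-19    # joules per electronvolt

def thermal_energy_kev(temp_k: float) -> float:
    """Characteristic thermal energy k_B * T, expressed in keV."""
    return K_B * temp_k / EV / 1e3

print(round(thermal_energy_kev(150e6), 1))  # ~12.9 keV for ITER's plasma
```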
At such high temperatures, particles have a large kinetic energy, and hence speed. If unconfined, the particles will rapidly escape, taking the energy with them, cooling the plasma to the point where net energy is no longer produced. A successful reactor would need to contain the particles in a small enough volume for a long enough time for much of the plasma to fuse.
In ITER and many other magnetic confinement reactors, the plasma, a gas of charged particles, is confined using magnetic fields. A charged particle moving through a magnetic field experiences a force perpendicular to the direction of travel, resulting in centripetal acceleration, thereby confining it to move in a circle or helix around the lines of magnetic flux. ITER will use four types of magnets to contain the plasma: a central solenoid magnet, poloidal magnets around the edges of the tokamak, 18 D-shaped toroidal-field coils, and correction coils.
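The radius of that helical motion (the Larmor or gyro radius) is what makes magnetic confinement work: at fusion temperatures and ITER-scale fields it is millimetres, versus a metres-wide vessel. A sketch, assuming a thermal deuteron and a 5.3 T field (ITER's nominal on-axis value):

```python
import math

K_B, Q_E = 1.381e-23, 1.602e-19   # Boltzmann constant, elementary charge
M_D = 3.344e-27                   # deuteron mass, kg
B = 5.3                           # tesla; ITER's nominal on-axis field
T = 150e6                         # plasma temperature, kelvin

v_perp = math.sqrt(2 * K_B * T / M_D)   # thermal speed across the field
r_gyro = M_D * v_perp / (Q_E * B)       # Larmor radius r = m*v / (q*B)
print(f"{r_gyro * 1e3:.1f} mm")         # a few millimetres
```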
A solid confinement vessel is also needed, both to shield the magnets and other equipment from high temperatures and energetic photons and particles, and to maintain a near-vacuum for the plasma to populate. The containment vessel is subjected to a barrage of very energetic particles, where electrons, ions, photons, alpha particles, and neutrons constantly bombard it and degrade the structure. The material must be designed to endure this environment so that a power station would be economical. Tests of such materials will be carried out both at ITER and at IFMIF (International Fusion Materials Irradiation Facility).
Once fusion has begun, high-energy neutrons will radiate from the reactive regions of the plasma, crossing magnetic field lines easily due to charge neutrality (see neutron flux). Since it is the neutrons that receive the majority of the energy, they will be ITER's primary source of energy output. Ideally, alpha particles will expend their energy in the plasma, further heating it.
The inner wall of the containment vessel will have 440 blanket modules that are designed to slow and absorb neutrons in a reliable and efficient manner and therefore protect the steel structure and the superconducting toroidal field magnets. At later stages of the ITER project, experimental blanket modules will be used to test the breeding of tritium fuel from lithium-bearing ceramic pebbles contained within the blanket module, via the following reactions:
n + ⁶Li → ³H + ⁴He
n + ⁷Li → ³H + ⁴He + n
where the reactant neutron is supplied by the D-T fusion reaction.
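The energetics of the two reactions can be recovered from tabulated atomic masses: the lithium-6 route is exothermic, while the lithium-7 route consumes energy, which is why it needs a fast (unmoderated) neutron. A quick mass-balance check (masses in atomic mass units, rounded):

```python
AMU_MEV = 931.494   # MeV released per amu of mass defect
m = {                # approximate atomic masses, amu
    "n": 1.008665, "T": 3.016049, "He4": 4.002602,
    "Li6": 6.015122, "Li7": 7.016004,
}

q_li6 = (m["Li6"] + m["n"] - m["T"] - m["He4"]) * AMU_MEV
q_li7 = (m["Li7"] + m["n"] - m["T"] - m["He4"] - m["n"]) * AMU_MEV
print(f"n + Li-6: Q = {q_li6:+.2f} MeV")  # about +4.8 MeV (exothermic)
print(f"n + Li-7: Q = {q_li7:+.2f} MeV")  # about -2.5 MeV (needs a fast neutron)
```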
Energy absorbed from the fast neutrons is extracted and passed into the primary coolant. This heat energy would then be used to power an electricity-generating turbine in a real power station; in ITER this electricity generating system is not of scientific interest, so instead the heat will be extracted and disposed of.
Technical design
Vacuum vessel
The vacuum vessel is the central part of the ITER machine: a double-walled steel container in which the plasma is contained by means of magnetic fields.
The ITER vacuum vessel will be twice as large and 16 times as heavy as any previously manufactured fusion vessel: each of the nine torus-shaped sectors will weigh approximately 450 tonnes. When all the shielding and port structures are included, this adds up to a total of 5,116 tonnes. Its external diameter will measure 19.4 metres, the internal 6.5 metres. Once assembled, the whole structure will be 11.3 metres high.
The primary function of the vacuum vessel is to provide a hermetically sealed plasma container. Its main components are the main vessel, the port structures and the supporting system. The main vessel is a double-walled structure with poloidal and toroidal stiffening ribs between shells to reinforce the vessel structure. These ribs also form the flow passages for the cooling water. The space between the double walls will be filled with shield structures made of stainless steel. The inner surfaces of the vessel will act as the interface with breeder modules containing the breeder blanket component. These modules will provide shielding from the high-energy neutrons produced by the fusion reactions and some will also be used for tritium breeding concepts.
The vacuum vessel has a total of 44 openings that are known as ports – 18 upper, 17 equatorial, and 9 lower ports – that will be used for remote handling operations, diagnostic systems, neutral beam injections and vacuum pumping. Remote handling is made necessary by the radioactive interior of the reactor following a shutdown, which is caused by neutron bombardment during operation.
Vacuum pumping will be done before the start of fusion reactions to create the necessary low-density environment, with a density about one million times lower than that of air.
Breeder blanket
ITER will use deuterium-tritium fuel. While deuterium is abundant in nature, tritium is much rarer because it is radioactive, with a half-life of just 12.3 years, and there are only about 3.5 kg of natural tritium on Earth. Owing to this tiny supply of tritium, an important component for testing on ITER is the breeding blanket. This component, located in the ports of the vacuum vessel, serves to test the production of tritium by reaction with neutrons from the plasma. There are several reactions that produce tritium within the blanket: lithium-6 produces tritium via (n,t) reactions with moderated neutrons, while lithium-7 produces tritium via (n,nt) reactions with higher-energy neutrons.
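The 12.3-year half-life is why tritium cannot simply be stockpiled far in advance; any inventory decays exponentially. A minimal decay sketch:

```python
def tritium_remaining(initial_kg: float, years: float,
                      half_life_years: float = 12.3) -> float:
    """Radioactive decay: N(t) = N0 * 2 ** (-t / half-life)."""
    return initial_kg * 2 ** (-years / half_life_years)

print(round(tritium_remaining(1.0, 12.3), 2))  # 0.5 kg after one half-life
print(round(tritium_remaining(1.0, 50.0), 3))  # ~0.06 kg after 50 years
```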
Concepts for the breeder blanket include helium-cooled lithium lead (HCLL), helium-cooled pebble bed (HCPB), and water-cooled lithium lead (WCLL) methods. Six different tritium breeding blanket mock-ups, known as Test Blanket Modules (TBM), will be tested in ITER and will share a common box geometry. Materials for use as breeder pebbles in the HCPB concept include lithium metatitanate and lithium orthosilicate. Requirements of breeder materials include good tritium production and extraction, mechanical stability and low levels of radioactive activation.
Magnet system
ITER is based on magnetic confinement fusion that uses magnetic fields to contain the fusion fuel in plasma form. The magnet system used in the ITER tokamak will be the largest superconducting magnet system ever built. The system will use four types of magnets to achieve plasma confinement: a central solenoid magnet, poloidal magnets, toroidal-field coils, and correction coils. The central solenoid coil will be 18 meters tall, 4.3 m wide, and weigh 1000 tonnes. It will use superconducting niobium–tin to carry 45 kA and produce a peak field of more than 13 teslas.
The 18 toroidal field coils will also use niobium-tin. They are the most powerful superconductive magnets ever designed with a nominal peak field strength of 11.8 teslas and a stored magnetic energy of 41 gigajoules. Other lower field ITER magnets (poloidal field and correction coils) will use niobium–titanium for their superconducting elements.
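The stored energy figures follow from the magnetic energy density u = B²/(2μ₀); at the toroidal coils' peak field this is tens of megajoules per cubic metre of field volume, consistent with a total in the tens of gigajoules:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def magnetic_energy_density(b_tesla: float) -> float:
    """u = B**2 / (2 * mu_0), in joules per cubic metre."""
    return b_tesla ** 2 / (2 * MU_0)

print(f"{magnetic_energy_density(11.8) / 1e6:.0f} MJ/m^3")  # ~55 MJ/m^3 at peak field
```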
Additional heating
To achieve fusion, plasma particles must be heated to temperatures as high as 150 million °C, and multiple heating methods must be used to reach them. Within the tokamak itself, changing magnetic fields produce a heating effect, but external heating is also required. There will be three types of external heating in ITER (a sketch of the resonance frequencies involved follows this list):
Two 1 MV heating neutral beam injectors (HNB) that will each provide about 16.5 MW to the burning plasma, with the possibility to add a third injector. The beams generate electrically charged deuterium ions that are accelerated through five grids to reach the required energy of 1 MeV, and the beams can operate for the entire plasma pulse duration, a total of up to 3,600 seconds. The prototype is being built at the Neutral Beam Test Facility (NBTF), which was constructed in Padua, Italy. There is also a smaller neutral beam that will be used for diagnostics to help detect the amount of helium ash inside the tokamak.
An ion cyclotron resonance heating (ICRH) system that will inject 20 MW of electromagnetic power into the plasma by using antennas to generate radio waves that have the same rate of oscillation as the ions in the plasma.
An electron cyclotron resonance heating (ECRH) system that will heat electrons in the plasma using a high-intensity beam of electromagnetic radiation.
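As referenced above, the ICRH and ECRH frequencies are set by the cyclotron frequency f = qB/(2πm) of ions and electrons in the confining field. A sketch assuming deuterons and a 5.3 T on-axis field (the actual ITER systems operate at roughly 40 to 55 MHz and 170 GHz):

```python
import math

Q_E = 1.602e-19                  # elementary charge, C
M_D, M_E = 3.344e-27, 9.109e-31  # deuteron and electron masses, kg
B = 5.3                          # tesla; assumed on-axis field

def cyclotron_frequency(mass_kg: float, b_tesla: float) -> float:
    """f = q * B / (2 * pi * m): gyration frequency of a unit charge."""
    return Q_E * b_tesla / (2 * math.pi * mass_kg)

print(f"deuterons: {cyclotron_frequency(M_D, B) / 1e6:.0f} MHz")  # tens of MHz (radio)
print(f"electrons: {cyclotron_frequency(M_E, B) / 1e9:.0f} GHz")  # ~150 GHz (microwaves)
```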
Cryostat
The ITER cryostat is a large 3,850-tonne stainless steel structure surrounding the vacuum vessel and the superconducting magnets, with the purpose of providing a super-cool vacuum environment. Its walls are thick enough to withstand the stresses induced by atmospheric pressure acting on the enclosed volume of 8,500 cubic meters. On 9 June 2020, Larsen & Toubro completed the delivery and installation of the cryostat module. The cryostat is the major component of the tokamak complex, which sits on a seismically isolated base.
Divertor
The divertor is a device within the tokamak that allows for removal of waste and impurities from the plasma while the reactor is operating. At ITER, the divertor will extract heat and ash that are created by the fusion process, while also protecting the surrounding walls and reducing plasma contamination.
The ITER divertor, which has been compared to a massive ashtray, is primarily composed of tungsten. The divertor targets, which are the components directly exposed to the plasma, are made of tungsten due to its high melting point, low sputtering yield, and low tritium retention. The underlying structure of the divertor includes materials like copper alloy for heat conduction and stainless steel for structural support.
The divertor consists of 54 cassettes. Each cassette weighs roughly eight tonnes and measures 0.8 m × 2.3 m × 3.5 m. The divertor design and construction is being overseen by the Fusion For Energy agency.
When the ITER tokamak is in operation, the plasma-facing units endure heat spikes as high as 20 megawatts per square metre, which is more than four times higher than what is experienced by a spacecraft entering Earth's atmosphere.
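A back-of-envelope Stefan–Boltzmann estimate shows why such a surface must be actively cooled rather than left to radiate the load away: re-radiating 20 MW/m² as a blackbody would require a temperature above tungsten's roughly 3,700 K melting point:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
q = 20e6          # peak divertor heat flux, W/m^2

# Temperature a passive blackbody surface would need to re-radiate that flux:
t_required = (q / SIGMA) ** 0.25
print(f"{t_required:.0f} K")   # ~4300 K, above tungsten's ~3700 K melting point
```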
The testing of the divertor is being done at the ITER Divertor Test Facility (IDTF) in Russia. This facility was created at the Efremov Institute in Saint Petersburg as part of the ITER Procurement Arrangement that spreads design and manufacturing across the project's member countries.
Cooling systems
The ITER tokamak will use interconnected cooling systems to manage the heat generated during operation. Most of the heat will be removed by a primary water cooling loop, itself cooled by water from a secondary loop through a heat exchanger within the tokamak building's secondary confinement. The secondary cooling loop will be cooled by a larger complex, comprising a cooling tower, a pipeline supplying water from the Canal de Provence, and basins that allow cooling water to be cooled and tested for chemical contamination and tritium before being released into the river Durance. This system will need to dissipate the plant's waste heat throughout the tokamak's operation. A liquid nitrogen system will provide further cooling to about 80 K, and a liquid helium system will provide cooling to about 4.5 K. The liquid helium system will be designed, manufactured, installed and commissioned by Air Liquide in France.
Location
The process of selecting a location for ITER was long and drawn out. Japan proposed a site in Rokkasho. Two European sites were considered, the Cadarache site in France and the Vandellòs site in Spain, but the European Competitiveness Council named Cadarache as its official candidate in November 2003. Additionally, Canada announced a bid for the site in Clarington in May 2001, but withdrew from the race in 2003.
From this point on, the choice was between France and Japan. On 3 May 2005, the EU and Japan agreed to a process which would settle their dispute by July. At the final meeting in Moscow on 28 June 2005, the participating parties agreed to construct ITER at Cadarache with Japan receiving a privileged partnership that included a Japanese director-general for the project and a financial package to construct facilities in Japan.
Fusion for Energy, the EU agency in charge of the European contribution to the project, is located in Barcelona, Spain. Fusion for Energy (F4E) is the European Union's Joint Undertaking for ITER and the Development of Fusion Energy. According to the agency's website: F4E is responsible for providing Europe's contribution to ITER, the world's largest scientific partnership that aims to demonstrate fusion as a viable and sustainable source of energy. [...] F4E also supports fusion research and development initiatives [...]
The ITER Neutral Beam Test Facility, aimed at developing and optimizing the neutral beam injector prototype, is being constructed in Padua, Italy. It will be the only ITER facility located outside the Cadarache site.
Most of the buildings at ITER have been or will be clad in an alternating pattern of reflective stainless steel and grey lacquered metal. This was done for aesthetic reasons, to blend the buildings with their surrounding environment, and to aid with thermal insulation.
Participants
Currently there are seven signatories to the ITER Agreement: China, the European Union, India, Japan, Russia, South Korea and the United States.
As a consequence of Brexit, the United Kingdom formally withdrew from Euratom on 31 January 2020. Under the terms of the EU–UK Trade and Cooperation Agreement, the United Kingdom initially remained a member of ITER as a part of Fusion for Energy following the end of the transition period on 31 December 2020. In 2023, however, the United Kingdom decided to discontinue its participation in Fusion for Energy, and in 2024 it decided not to seek membership in ITER independently of the EU, leaving the UK no longer a participant in the ITER project.
In March 2009, Switzerland, an associate member of Euratom since 1979, also ratified the country's accession to Fusion for Energy as a third-country member.
In 2016, ITER announced a partnership with Australia for "technical cooperation in areas of mutual benefit and interest", but without Australia becoming a full member.
In 2017, ITER signed a Cooperation Agreement with Kazakhstan.
Thailand also has an official role in the project after a cooperation agreement was signed between the ITER Organization and the Thailand Institute of Nuclear Technology in 2018. The agreement provides courses and lectures to students and scientists in Thailand and facilitates relationships between Thailand and the ITER project.
Canada was previously a full member but pulled out due to a lack of funding from the federal government. The lack of funding also resulted in Canada's withdrawing from its bid for the ITER site in 2003. Canada rejoined the project in 2020 via a cooperation agreement that focused on tritium and tritium-related equipment.
ITER's work is supervised by the ITER Council, which has the authority to appoint senior staff, amend regulations, decide on budgeting issues, and allow additional states or organizations to participate in ITER. The current Chairman of the ITER Council is Won Namkung, and the ITER Director-General is Pietro Barabaschi.
Members
China, the European Union, India, Japan, Russia, South Korea, and the United States
Switzerland (as a member of Euratom and Fusion for Energy)
United Kingdom (as a part of Fusion for Energy)
Non-members
Australia (through the Australian Nuclear Science and Technology Organisation (ANSTO) in 2016)
Canada (through the Government of Canada in 2020, mostly on the grounds of tritium)
Kazakhstan (through the National Nuclear Center of the Republic of Kazakhstan (NNC-RK) in 2017)
Thailand (through the Thailand Institute of Nuclear Technology (TINT) in 2018)
Domestic agencies
Each member of the ITER project – the European Union, China, India, Japan, Korea, Russia, and the United States – has created a domestic agency to meet its contribution and procurement responsibilities. These agencies employ their own staff, have their own budgets, and directly oversee all industrial contracts and subcontracting.
ITER EU
The ITER Agreement was signed by Euratom representing the EU. Fusion for Energy, often referred to as F4E, was created in 2007 as the EU's domestic agency, with headquarters in Barcelona, Spain, and further offices in Cadarache, France, Garching, Germany, and Rokkasho, Japan. F4E is responsible for contributing to the design and manufacture of components such as the vacuum vessel, the divertor, and the magnets.
ITER China
China's contribution to ITER is managed through the China International Nuclear Fusion Energy Program, or CNDA. The Chinese agency is working on components such as the correction coil, magnet supports, the first wall, and the shield blanket. China is also running experiments on its HL-2M tokamak in Chengdu and HT-7U (EAST) in Hefei to help support ITER research.
ITER India
ITER-India is a special project run by India's Institute for Plasma Research. ITER-India's research facility is based in Ahmedabad in the state of Gujarat. India's deliverables to the ITER project include the cryostat, in-vessel shielding, and cooling water systems.
ITER Japan
Japan's National Institutes for Quantum and Radiological Sciences and Technology, or QST, is now the designated Japanese domestic agency for the ITER project. The organization is based in Chiba, Japan. Japan collaborates with the ITER Organization and ITER members to help design and produce components for the tokamak, including the blanket remote handling system, the central solenoid coils, the plasma diagnostics systems, and the neutral beam injection heating systems.
ITER Korea
ITER Korea was established in 2007 under Korea's National Fusion Research Institute, and the organization is based in Daejeon, South Korea. Among the procurement items for which ITER Korea is responsible are four sectors of the vacuum vessel, the blanket shield block, thermal shields, and the tritium storage and delivery system.
ITER Russia
Russia occupies one of the key positions in the international ITER project, contributing the manufacture and supply of high-tech equipment and basic reactor systems under the aegis of Rosatom, the State Atomic Energy Corporation. The Russian Federation's obligations include the supply of 22 km of conductors based on 90 tonnes of superconducting Nb3Sn strands for winding the toroidal field coils, and 11 km of conductors based on 40 tonnes of superconducting NbTi strands for winding the poloidal field coils of the ITER magnet system, delivered in late 2022. Russia is also responsible for manufacturing 179 of the most energy-intensive (up to 5 MW/m²) panels of the first wall. The panels are covered with beryllium plates soldered to CuCrZr bronze, which is connected to a steel base; each panel is up to 2 m wide and 1.4 m high, with a mass of about 1,000 kg. Russia's obligations also include conducting thermal tests of ITER components that face the plasma. Today, thanks to its participation in the project, Russia has the full design documentation for the ITER reactor.
ITER US
US ITER is part of the US Department of Energy and is managed by the Oak Ridge National Laboratory in Tennessee. US ITER is responsible for both the design and manufacturing of components for the ITER project, and American involvement includes contributions to the tokamak cooling system, the diagnostics systems, the electron and ion cyclotron heating transmission lines, the toroidal and central solenoid magnet systems, and the pellet injection systems.
In 2022, the US fusion research community released its plan for a US ITER Research Program covering key research areas such as plasma-material interactions, plasma diagnostics, and fusion nuclear science and technology. The plan envisions close collaboration between the US and other ITER partners to ensure the successful operation of ITER.
Funding
In 2006, the ITER Agreement was signed on the basis of an estimated cost of €5.9 billion over a ten-year period. In 2008, as a result of a design review, the estimate was revised upwards to about €19 billion. As of 2016, the total cost of constructing and operating the experiment was expected to be over €22 billion, an increase of €4.6 billion over the 2010 estimate and of €9.6 billion over the 2009 estimate.
At the June 2005 conference in Moscow the participating members of the ITER cooperation agreed on the following division of funding contributions for the construction phase: 45.4% by the hosting member, the European Union, and the rest split between the non-hosting members at a rate of 9.1% each for China, India, Japan, South Korea, the Russian Federation and the US. During the operation and deactivation phases, Euratom will contribute to 34% of the total costs; Japan and the United States 13%; and China, India, Korea, and Russia 10%.
90% of contributions will be delivered 'in-kind' using ITER's own currency, the ITER Units of Account (IUAs). Though Japan's financial contribution as a non-hosting member is one-eleventh of the total, the EU agreed to grant it a special status so that Japan will provide for two-elevenths of the research staff at Cadarache and be awarded two-elevenths of the construction contracts, while the European Union's staff and construction components contributions will be cut from five-elevenths to four-elevenths.
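The quoted construction shares are simply elevenths of the whole, which is what makes the "two-elevenths" adjustments for Japan arithmetically clean. A quick check with exact fractions:

```python
from fractions import Fraction

host = Fraction(5, 11)     # hosting member's construction-phase share
member = Fraction(1, 11)   # each of the six non-hosting members

print(f"host:   {float(host):.2%}")    # 45.45%, quoted as 45.4%
print(f"member: {float(member):.2%}")  # 9.09%, quoted as 9.1%
print(host + 6 * member == 1)          # True: the eleven elevenths sum to the whole
```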
The American contribution to ITER has been the subject of debate. The US Department of Energy (USDOE) has estimated the total construction costs to 2025, including in-kind contributions, at $65 billion, though ITER disputes this calculation. After having reduced funding to ITER in 2017, the United States ended up doubling its budget, to a $122 million in-kind contribution, in 2018. The total US contribution to ITER for the year 2020 is estimated at $247 million, an amount that is part of the USDOE's Fusion Energy Sciences program. Under a strategic plan to guide American fusion energy efforts that was approved in January 2021, the USDOE directed the Fusion Energy Sciences Advisory Committee to assume that the US will continue to fund ITER for a ten-year period.
Support for the European budget for ITER has also varied over the course of the project. It was reported in December 2010 that the European Parliament had refused to approve a plan by member states to reallocate €1.4 billion from the budget to cover a shortfall in ITER building costs in 2012–13. The closure of the 2010 budget required this financing plan to be revised, and the European Commission (EC) was forced to put forward an ITER budgetary resolution proposal in 2011. In the end, the European contribution to ITER for the 2014 to 2020 period was set at €2.9 billion. Most recently, in February 2021, the European Council approved ITER financing of €5.61 billion for the period of 2021 to 2027.
Manufacturing
The construction of the ITER tokamak has been compared to the assembly of “a giant three-dimensional puzzle” because the parts are manufactured around the world and then shipped to France for assembly. This assembly system is the result of the ITER Agreement, which stipulated that member contributions would be mostly “in-kind”, with countries manufacturing components instead of providing money. This system was devised to provide economic stimulus and fusion expertise in the countries funding the project, and the general framework called for 90% of member contributions to be in materials or components and 10% in money.
As a result, more than 2,800 design or manufacturing contracts have been signed since the launch of the project. According to a 2017 estimate from the French Minister for Research, Education and Innovation, Frédérique Vidal, some 500 companies were involved in the construction of ITER, and Bernard Bigot stated that €7 billion in contracts had been awarded to prime contractors in Europe alone since 2007.
The overall assembly of the tokamak facility is being overseen through a €174-million contract awarded to Momentum, a joint venture between Amec Foster Wheeler (Britain), Assystem (France), and Kepco (South Korea). One of the largest tenders was a €530-million contract for HVAC systems and mechanical and electrical equipment that was awarded to a European consortium involving ENGIE (France) and Exyte (Germany). A tokamak assembly contract worth €200 million also went to a European consortium, Dynamic, that includes the companies Ansaldo Energia (Italy), ENGIE (France), and SIMIC (Italy). The French industrial conglomerate Daher was awarded more than €100 million in logistics contracts for ITER, which includes the shipment of the heavy components from the different manufacturers around the world.
In America, US ITER has awarded $1.3 billion in contracts to American companies since the beginning of the project and there is an estimated $800 million in future contracts still to come. The major US contracts include General Atomics being selected to design and manufacture the crucial central solenoid magnet.
In 2019, the Chinese consortium led by China Nuclear Power Engineering Corporation signed a contract for machine assembly at ITER that was the biggest nuclear energy contract ever signed by a Chinese company in Europe.
Russia is supplying magnet and vacuum-injection systems for ITER with construction being done at the Sredne-Nevsky Shipyard in Saint Petersburg.
In India, the contract for construction of the cryostat, one of the fundamental pieces of the tokamak, was awarded to Larsen & Toubro, who also have ITER contracts for water cooling systems. InoxCVA, an Inox Group company, will supply cryolines for the ITER project.
Two of Japan's industrial leaders, Toshiba Energy Systems & Solutions and Mitsubishi Heavy Industries, have contracts to manufacture the toroidal field coils for ITER. Construction of another key part of the tokamak, the vacuum vessel, was awarded to Hyundai Heavy Industries and is being built in Korea.
Delays were acknowledged in 2023, which would impact the target to create plasma by 2025; it was hoped the 2035 full-fusion target could be maintained. A new schedule was issued in July 2024, targeting first plasma in the mid-2030s and the start of deuterium-tritium operations by 2039.
Criticism
The ITER project has been criticized for issues such as its possible environmental impacts, its usefulness as a response to climate change, the design of its tokamak, and how the experiment's objectives have been expressed.
When France was announced as the site of the ITER project in 2005, several European environmentalists stated their opposition to the project. For example, the French politician Noël Mamère argued that the fight against global warming would be neglected as a result of ITER: “This is not good news for the fight against the greenhouse effect because we're going to put ten billion euros towards a project that has a term of 30–50 years when we're not even sure it will be effective." However, another French environmental association, Association des Ecologistes Pour le Nucléaire (AEPN), welcomed the ITER project as an important part of the response to climate change.
Within the broader fusion sector, a number of researchers working on non-tokamak systems, such as the independent fusion scientist Eric Lerner, have argued that other fusion projects would be a fraction of ITER's cost and could be a potentially more viable and/or more cost-effective path to fusion power. Other critics, such as Daniel Jassby, accuse ITER researchers of being unwilling to face up to the technical and economic potential problems posed by tokamak fusion schemes.
In terms of the design of the tokamak, one concern arose from a 2013 interpolation of the tokamak parameters database, which suggested that the power load on a tokamak divertor would be five times the previously expected value. Given that the projected power load on the ITER divertor is already very high, these new findings led to new design testing initiatives.
Another issue that critics raised regarding ITER and future deuterium-tritium (DT) fusion projects is the available supply of tritium. As it stands, ITER will use all existing supplies of tritium for its experiment, and current state-of-the-art technology is not sufficient to generate enough tritium to meet the needs of future DT fuel-cycle experiments for fusion energy. According to the conclusion of a 2020 study that analyzed the tritium issue, “successful development of the DT fuel cycle for DEMO and future fusion reactors requires an intensive R&D program in key areas of plasma physics and fusion technologies.”
Responses to criticism
Proponents believe that much of the ITER criticism is misleading and inaccurate, in particular the allegations of the experiment's "inherent danger". The stated goals for a commercial fusion power station design are that the amount of radioactive waste produced should be hundreds of times less than that of a fission reactor, that it should produce no long-lived radioactive waste, and that it should be impossible for such a reactor to undergo a large-scale runaway chain reaction. Direct contact of the plasma with the ITER inner walls would contaminate the plasma, causing it to cool immediately and stop the fusion process. In addition, the amount of fuel contained in a fusion reactor chamber (one half gram of deuterium/tritium fuel) is only sufficient to sustain the fusion burn pulse from minutes up to an hour at most, whereas a fission reactor usually contains several years' worth of fuel.
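The half-gram figure can be sanity-checked: counting D-T pairs in 0.5 g and multiplying by 17.6 MeV per reaction gives on the order of 10¹¹ J, a few minutes of burn at an assumed 500 MW output (and proportionally longer at lower power):

```python
MEV_TO_J = 1.602e-13    # joules per MeV
AMU_TO_KG = 1.6605e-27  # kilograms per atomic mass unit

fuel_kg = 0.5e-3                        # half a gram of D-T mixture
pair_kg = 5.03 * AMU_TO_KG              # mass of one 2H + 3H pair
energy_j = (fuel_kg / pair_kg) * 17.6 * MEV_TO_J

p_fusion_w = 500e6                      # assume ITER's 500 MW design output
print(f"{energy_j / p_fusion_w / 60:.0f} minutes of burn")  # a few minutes
```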
Moreover, detritiation systems will be implemented, so that ITER will eventually need to recycle large amounts of tritium, at turnovers orders of magnitude higher than any preceding tritium facility worldwide.
In the case of an accident (or sabotage), it is expected that a fusion reactor would release far less radioactive pollution than would an ordinary fission nuclear station. Furthermore, ITER's type of fusion power has little in common with nuclear weapons technology, and does not produce the fissile materials necessary for the construction of a weapon. Proponents note that large-scale fusion power would be able to produce reliable electricity on demand, and with virtually zero pollution (no gaseous CO2, SO2, or NOx by-products are produced).
According to researchers at a demonstration reactor in Japan, a fusion generator should be feasible in the 2030s and no later than the 2050s. Japan is pursuing its own research program with several operational facilities that are exploring several fusion paths.
In the United States alone, electricity accounts for US$210 billion in annual sales. Asia's electricity sector attracted US$93 billion in private investment between 1990 and 1999. These figures take into account only current prices. Proponents of ITER contend that an investment in research now should be viewed as an attempt to earn a far greater future return, and a 2017–18 study of the impact of ITER investments on the EU economy has concluded that 'in the medium and long-term, there is likely to be a positive return on investment from the EU commitment to ITER.'
Also, worldwide investment of less than US$1 billion per year into ITER is not incompatible with concurrent research into other methods of power generation, which in 2007 totaled US$16.9 billion.
Supporters of ITER emphasize that the only way to test ideas for withstanding the intense neutron flux is to subject materials experimentally to that flux, which is one of the primary missions of ITER and the IFMIF, and both facilities will be vitally important to that effort. The purpose of ITER is to explore the scientific and engineering questions that surround potential fusion power stations. It is nearly impossible to acquire satisfactory data for the properties of materials expected to be subject to an intense neutron flux, and burning plasmas are expected to have quite different properties from externally heated plasmas. Supporters contend that the answer to these questions requires the ITER experiment, especially in the light of the monumental potential benefits.
Furthermore, the main line of research via tokamaks has been developed to the point that it is now possible to undertake the penultimate step in magnetic confinement plasma physics research with a self-sustained reaction. In the tokamak research program, recent advances in controlling the configuration of the plasma have led to substantially improved energy and pressure confinement, which reduces the projected cost of electricity from such reactors by a factor of two, to a value only about 50% more than the projected cost of electricity from advanced light-water reactors. In addition, progress in the development of advanced, low-activation structural materials supports the promise of environmentally benign fusion reactors, and research into alternate confinement concepts holds the promise of future improvements in confinement. Finally, supporters contend that other potential replacements for fossil fuels have environmental issues of their own. Solar, wind, and hydroelectric power all have very low surface power density compared to ITER's successor DEMO which, at 2,000 MW, would have a power density exceeding that of even large fission power stations.
Safety of the project is regulated according to French and EU nuclear power regulations. In 2011, the French Nuclear Safety Authority (ASN) delivered a favorable opinion, and then, under the French Act on Nuclear Transparency and Safety, the licensing application was subject to a public enquiry that allowed the general public to submit requests for information regarding the safety of the project. According to published safety assessments (approved by the ASN), in the worst case of a reactor leak, the released radioactivity will not exceed 1/1000 of natural background radiation, and no evacuation of local residents will be required. The safety case includes a number of stress tests to confirm the efficiency of all barriers. The whole reactor building is built on top of almost 500 seismic suspension columns, and the whole complex is located almost 300 m above sea level. Overall, extremely rare events such as a 100-year flood of the nearby Durance river and 10,000-year earthquakes were assumed in the safety design of the complex, and respective safeguards are part of the design.
Between 2008 and 2017, the project has generated 34,000 job-years in the EU economy alone. It is estimated that in the 2018–2030 period, it will generate a further 74,000 job-years and €15.9 billion in gross value.
Similar projects
Precursors to ITER were JET, Tore Supra, MAST, SST-1, EAST, and KSTAR.
Other planned and proposed fusion reactors include NIF, W7X, T-15MD, STEP, SPARC, SST-2, CFETR, DEMO, K-DEMO and other 'DEMO-phase' national or private-sector fusion power plants.
See also
DEMOnstration Power Plant, generic term for a future class of fusion reactors that produce useful power
Experimental Advanced Superconducting Tokamak (EAST), China's ongoing effort at Hefei Institutes
Wendelstein 7-X, an advanced stellarator of Max Planck IPP in Germany for evaluating components of future fusion power plants
Notes
References
Further reading
Claessens, Michel. (2020). ITER: The giant fusion reactor: Bringing a Sun to Earth. Springer.
Clery, Daniel. (2013). A Piece of the Sun. Gerald Duckworth & Co. Ltd.
ITER. (2018). ITER Research Plan within the Staged Approach (Level III – Provisional Version). ITER.
Wendell Horton Jr, C., and Sadruddin Benkadda. (2015). ITER physics. World Scientific.
External links
ITER China website
ITER EU (Fusion for Energy) website
ITER India website
ITER Japan website
ITER Korea website
ITER Russia website
ITER US website
The New Yorker, 3 March 2014, Star in a Bottle, by Raffi Khatchadourian
Archival material collected by Prof. McCray relating to ITER's early phase (1979–1989) can be consulted at the Historical Archives of the European Union in Florence
"ITER Talks (1): Introduction to ITER" video (53:00) at YouTube, by ITER Organization, July 23, 2021.
The roles of the Host and the non-Host for the ITER Project (June 2005); the Broader Approach agreement with Japan
Fusion Electricity – A roadmap to the realisation of fusion energy (EFDA, 2012): eight missions, ITER, and a project plan with dependencies
Magnetic confinement fusion devices
Buildings and structures in Bouches-du-Rhône
International science experiments
Science diplomacy | ITER | ["Chemistry"] | 12,200 | ["Particle traps", "Magnetic confinement fusion devices"] |
261,366 | https://en.wikipedia.org/wiki/Dimerization | In chemistry, dimerization is the process of joining two identical or similar molecular entities by bonds. The resulting bonds can be either strong or weak. Many symmetrical chemical species are described as dimers, even when the monomer is unknown or highly unstable.
The term homodimer is used when the two subunits are identical (e.g. A–A) and heterodimer when they are not (e.g. A–B). The reverse of dimerization is often called dissociation. When two oppositely-charged ions associate into dimers, they are referred to as Bjerrum pairs, after Danish chemist Niels Bjerrum.
Noncovalent dimers
Anhydrous carboxylic acids form dimers by hydrogen bonding of the acidic hydrogen and the carbonyl oxygen. For example, acetic acid forms a dimer in the gas phase, where the monomer units are held together by hydrogen bonds. Many OH-containing molecules form dimers, e.g. the water dimer.
Excimers and exciplexes are excited structures with a short lifetime. For example, noble gases do not form stable dimers, but they do form the excimers Ar₂*, Kr₂* and Xe₂* under high pressure and electrical stimulation.
Covalent dimers
Molecular dimers are often formed by the reaction of two identical compounds, e.g.: 2 A → A–A. In this example, monomer "A" is said to dimerize to give the dimer "A–A".
Dicyclopentadiene is an asymmetrical dimer of two cyclopentadiene molecules that have reacted in a Diels-Alder reaction to give the product. Upon heating, it "cracks" (undergoes a retro-Diels-Alder reaction) to give identical monomers:
C₁₀H₁₂ → 2 C₅H₆
Many nonmetallic elements occur as dimers: hydrogen, nitrogen, oxygen, and the halogens fluorine, chlorine, bromine and iodine. Some metals form a proportion of dimers in their vapour phase: dilithium (Li₂), disodium (Na₂), dipotassium (K₂), dirubidium (Rb₂) and dicaesium (Cs₂). Such elemental dimers are homonuclear diatomic molecules.
Polymer chemistry
In the context of polymers, "dimer" also refers to a degree of polymerization of 2, regardless of the stoichiometry or of any condensation reaction involved.
One case where this is applicable is with disaccharides. For example, cellobiose is a dimer of glucose, even though the formation reaction produces water:
2 C₆H₁₂O₆ → C₁₂H₂₂O₁₁ + H₂O
Here, the resulting dimer has a stoichiometry different from the initial pair of monomers.
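That stoichiometric difference is just an atom-bookkeeping fact, and is easy to verify mechanically. A toy check (the formulas are the standard ones for glucose, cellobiose, and water):

```python
from collections import Counter

def atoms(formula: dict, copies: int = 1) -> Counter:
    """Count atoms in `copies` molecules of the given formula."""
    return Counter({el: n * copies for el, n in formula.items()})

glucose    = {"C": 6, "H": 12, "O": 6}
cellobiose = {"C": 12, "H": 22, "O": 11}
water      = {"H": 2, "O": 1}

lhs = atoms(glucose, 2)                    # 2 C6H12O6
rhs = atoms(cellobiose) + atoms(water)     # C12H22O11 + H2O
print(lhs == rhs)                          # True: the condensation balances
```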
Disaccharides need not be composed of the same monosaccharides to be considered dimers. An example is sucrose, a dimer of fructose and glucose, which follows the same reaction equation as presented above.
Amino acids can also form dimers, which are called dipeptides. An example is glycylglycine, consisting of two glycine molecules joined by a peptide bond. Other examples include aspartame and carnosine.
Inorganic and organometallic dimers
Many molecules and ions are described as dimers, even when the monomer is elusive.
Boranes
Diborane (B₂H₆) is a dimer of borane (BH₃), which is elusive and rarely observed. Almost all compounds of the type R₂BH exist as dimers.
Organoaluminium compounds
Trialkylaluminium compounds can exist as either monomers or dimers, depending on the steric bulk of the groups attached. For example, trimethylaluminium exists as a dimer, but trimesitylaluminium adopts a monomeric structure.
Organochromium compounds
Cyclopentadienylchromium tricarbonyl dimer exists in measurable equilibrium quantities with the monometallic radical CpCr(CO)₃•.
Biochemical dimers
Pyrimidine dimers
Pyrimidine dimers (also known as thymine dimers) are formed by a photochemical reaction from pyrimidine DNA bases when exposed to ultraviolet light. This cross-linking causes DNA mutations, which can be carcinogenic, causing skin cancers. When pyrimidine dimers are present, they can block polymerases, decreasing DNA functionality until it is repaired.
Protein dimers
Protein dimers arise from the interaction between two proteins which can interact further to form larger and more complex oligomers. For example, tubulin is formed by the dimerization of α-tubulin and β-tubulin and this dimer can then polymerize further to make microtubules. For symmetric proteins, the larger protein complex can be broken down into smaller identical protein subunits, which then dimerize to decrease the genetic code required to make the functional protein.
G protein-coupled receptors
As the largest and most diverse family of receptors within the human genome, G protein-coupled receptors (GPCR) have been studied extensively, with recent studies supporting their ability to form dimers. GPCR dimers include both homodimers and heterodimers formed from related members of the GPCR family. While not all GPCRs require dimerization to function, some, such as the GABAB receptor, do, emphasizing the importance of dimers in biological systems.
Receptor tyrosine kinase
Much like for G protein-coupled receptors, dimerization is essential for receptor tyrosine kinases (RTK) to perform their function in signal transduction, affecting many different cellular processes. RTKs typically exist as monomers, but undergo a conformational change upon ligand binding, allowing them to dimerize with nearby RTKs. The dimerization activates the cytoplasmic kinase domains that are responsible for further signal transduction.
See also
Monomer
Trimer
Polymer
Protein dimer
Oligomer
References
Chemical compounds | Dimerization | ["Physics", "Chemistry", "Materials_science"] | 1,253 | ["Chemical compounds", "Molecules", "Dimers (chemistry)", "Polymer chemistry", "Matter"] |
261,407 | https://en.wikipedia.org/wiki/Plasma%20cosmology | Plasma cosmology is a non-standard cosmology whose central postulate is that the dynamics of ionized gases and plasmas play important, if not dominant, roles in the physics of the universe at interstellar and intergalactic scales. In contrast, the current observations and models of cosmologists and astrophysicists explain the formation, development, and evolution of large-scale structures as dominated by gravity (including its formulation in Albert Einstein's general theory of relativity).
The original form of the theory, Alfvén–Klein cosmology, was developed by Hannes Alfvén and Oskar Klein in the 1960s and 1970s, and holds that matter and antimatter exist in equal quantities at very large scales, that the universe is eternal rather than bounded in time by the Big Bang, and that the expansion of the observable universe is caused by annihilation between matter and antimatter rather than a mechanism like cosmic inflation.
Cosmologists and astrophysicists who have evaluated plasma cosmology reject it because it does not match the observations of astrophysical phenomena as well as the currently accepted Big Bang model. Very few papers supporting plasma cosmology have appeared in the literature since the mid-1990s.
The term plasma universe is sometimes used as a synonym for plasma cosmology, as an alternative description of the plasma in the universe. Plasma cosmology is distinct from pseudoscientific ideas collectively called the Electric Universe, though proponents of each are known to be sympathetic to each other. These pseudoscientific ideas vary widely but generally claim that electric currents flow into stars and power them like light bulbs, contradicting well-established scientific theories and observations showing that stars are powered by nuclear fusion.
Alfvén–Klein cosmology
In the 1960s, the theory behind plasma cosmology was introduced by Alfvén, a plasma expert who won the 1970 Nobel Prize in Physics for his work on magnetohydrodynamics. He proposed the use of plasma scaling to extrapolate the results of laboratory experiments and plasma physics observations and scale them over many orders of magnitude up to the largest observable objects in the universe (see box). In 1971, Oskar Klein, a Swedish theoretical physicist, extended the earlier proposals and developed the Alfvén–Klein model of the universe, or "metagalaxy", an earlier term used to refer to the empirically accessible part of the universe, rather than the entire universe including parts beyond our particle horizon.
In this model, the universe is made up of equal amounts of matter and antimatter, with the boundaries between the regions of matter and antimatter being delineated by cosmic electromagnetic fields formed by double layers, thin regions comprising two parallel layers with opposite electrical charge. Interaction between these boundary regions would generate radiation, and this would form the plasma. Alfvén introduced the term ambiplasma for a plasma made up of matter and antimatter; the double layers are thus formed of ambiplasma. According to Alfvén, such an ambiplasma would be relatively long-lived, as the component particles and antiparticles would be too hot and too low-density to annihilate each other rapidly. The double layers would act to repel clouds of the opposite type but combine clouds of the same type, creating ever-larger regions of matter and antimatter. The idea of ambiplasma was developed further into the forms of heavy ambiplasma (protons–antiprotons) and light ambiplasma (electrons–positrons).
Alfvén–Klein cosmology was proposed in part to explain the observed baryon asymmetry in the universe, starting from an initial condition of exact symmetry between matter and antimatter. According to Alfvén and Klein, ambiplasma would naturally form pockets of matter and pockets of antimatter that would expand outwards as annihilation between matter and antimatter occurred in the double layer at the boundaries. They concluded that we must just happen to live in one of the pockets that was mostly baryons rather than antibaryons, explaining the baryon asymmetry. The pockets, or bubbles, of matter or antimatter would expand because of annihilations at the boundaries, which Alfvén considered as a possible explanation for the observed expansion of the universe, which would be merely a local phase of a much larger history. Alfvén postulated that the universe has always existed due to causality arguments and the rejection of ex nihilo models, such as the Big Bang, as a stealth form of creationism. The exploding double layer was also suggested by Alfvén as a possible mechanism for the generation of cosmic rays, X-ray bursts and gamma-ray bursts.
In 1993, theoretical cosmologist Jim Peebles criticized Alfvén–Klein cosmology, writing that "there is no way that the results can be consistent with the isotropy of the cosmic microwave background radiation and X-ray backgrounds". In his book he also showed that Alfvén's models do not predict Hubble's law, the abundance of light elements, or the existence of the cosmic microwave background. A further difficulty with the ambiplasma model is that matter–antimatter annihilation results in the production of high energy photons, which are not observed in the amounts predicted. While it is possible that the local "matter-dominated" cell is simply larger than the observable universe, this proposition does not lend itself to observational tests.
Plasma cosmology and the study of galaxies
Hannes Alfvén from the 1960s to 1980s argued that plasma played an important if not dominant role in the universe. He argued that electromagnetic forces are far more important than gravity when acting on interplanetary and interstellar charged particles. He further hypothesized that they might promote the contraction of interstellar clouds and may even constitute the main mechanism for contraction, initiating star formation. The current standard view is that magnetic fields can hinder collapse, that large-scale Birkeland currents have not been observed, and that the length scale for charge neutrality is predicted to be far smaller than the relevant cosmological scales.
In the 1980s and 1990s, Alfvén and Anthony Peratt, a plasma physicist at Los Alamos National Laboratory, outlined a program they called the "plasma universe". In plasma universe proposals, various plasma physics phenomena were associated with astrophysical observations and were used to explain contemporary mysteries and problems outstanding in astrophysics in the 1980s and 1990s. In various venues, Peratt profiled what he characterized as an alternative viewpoint to the mainstream models applied in astrophysics and cosmology.
For example, Peratt proposed that the mainstream approach to galactic dynamics, which relied on gravitational modeling of stars and gas in galaxies with the addition of dark matter, was overlooking a possibly major contribution from plasma physics. He mentions laboratory experiments of Winston H. Bostick in the 1950s that created plasma discharges that looked like galaxies. Peratt conducted computer simulations of colliding plasma clouds that he reported also mimicked the shape of galaxies. Peratt proposed that galaxies formed due to plasma filaments joining in a z-pinch, the filaments starting 300,000 light years apart and carrying Birkeland currents of 10¹⁸ amperes. Peratt also reported simulations he did showing emerging jets of material from the central buffer region that he compared to quasars and active galactic nuclei occurring without supermassive black holes. Peratt proposed a sequence for galaxy evolution: "the transition of double radio galaxies to radioquasars to radioquiet QSO's to peculiar and Seyfert galaxies, finally ending in spiral galaxies". He also reported that flat galaxy rotation curves were simulated without dark matter. At the same time Eric Lerner, an independent plasma researcher and supporter of Peratt's ideas, proposed a plasma model for quasars based on a dense plasma focus.
Comparison with mainstream astrophysics
Standard astronomical modeling and theories attempt to incorporate all known physics into descriptions and explanations of observed phenomena, with gravity playing a dominant role on the largest scales as well as in celestial mechanics and dynamics. To that end, both Keplerian orbits and Albert Einstein's general theory of relativity are generally used as the underlying frameworks for modeling astrophysical systems and structure formation, while high-energy astronomy and particle physics in cosmology additionally appeal to electromagnetic processes including plasma physics and radiative transfer to explain relatively small-scale energetic processes observed in X-rays and gamma rays. Due to overall charge neutrality, plasma physics does not provide for very long-range interactions in astrophysics, even while much of the matter in the universe is plasma. (See astrophysical plasma for more.)
Proponents of plasma cosmology claim electrodynamics is as important as gravity in explaining the structure of the universe, and speculate that it provides an alternative explanation for the evolution of galaxies and the initial collapse of interstellar clouds. In particular plasma cosmology is claimed to provide an alternative explanation for the flat rotation curves of spiral galaxies and to do away with the need for dark matter in galaxies and with the need for supermassive black holes in galaxy centres to power quasars and active galactic nuclei. However, theoretical analysis shows that "many scenarios for the generation of seed magnetic fields, which rely on the survival and sustainability of currents at early times [of the universe are disfavored]", i.e. Birkeland currents of the magnitude needed (10¹⁸ amps over scales of megaparsecs) for galaxy formation do not exist. Additionally, many of the issues that were mysterious in the 1980s and 1990s, including discrepancies relating to the cosmic microwave background and the nature of quasars, have been solved with more evidence that, in detail, provides a distance and time scale for the universe.
Some of the places where plasma cosmology supporters are most at odds with standard explanations include the need for their models to have light element production without Big Bang nucleosynthesis, which, in the context of Alfvén–Klein cosmology, has been shown to produce excessive X-rays and gamma rays beyond those observed. Plasma cosmology proponents have made further proposals to explain light element abundances, but the attendant issues have not been fully addressed. In 1995 Eric Lerner published his alternative explanation for the cosmic microwave background radiation (CMBR). He argued that his model explained the fidelity of the CMB spectrum to that of a black body and the low level of anisotropies found, even though the observed level of isotropy, about 1 part in 10⁵, is not accounted for to that precision by any alternative model. Additionally, the sensitivity and resolution of the measurement of the CMB anisotropies was greatly advanced by WMAP and the Planck satellite, and the statistics of the signal were so in line with the predictions of the Big Bang model that the CMB has been heralded as a major confirmation of the Big Bang model to the detriment of alternatives. The acoustic peaks in the early universe are fit with high accuracy by the predictions of the Big Bang model, and, to date, there has never been an attempt to explain the detailed spectrum of the anisotropies within the framework of plasma cosmology or any other alternative cosmological model.
References and notes
Further reading
Alfvén, Hannes:
"Cosmic Plasma" (Reidel, 1981)
"Cosmology in the plasma universe", Laser and Particle Beams (), vol. 6, August 1988, pp. 389–398 Full text
"Model of the plasma universe", IEEE Transactions on Plasma Science (), vol. PS-14, December 1986, pp. 629–638 Full text (PDF)
"The Plasma Universe", Physics Today (), vol. 39, issue 9, September 1986, pp. 22 – 27
Peratt, Anthony:
"Physics of the Plasma Universe", (Springer, 1992)
"Simulating spiral galaxies", Sky and Telescope (), vol. 68, August 1984, pp. 118–122
"Are Black Holes Necessary?", Sky and Telescope (), vol. 66, July 1983, pp. 19–22
"Evolution of the plasma universe. I – Double radio galaxies, quasars, and extragalactic jets", IEEE Transactions on Plasma Science (), vol. PS-14, December 1986, pp. 639–660 Full text (PDF)
"Evolution of the plasma universe. II – The formation of systems of galaxies", IEEE Transactions on Plasma Science (), vol. PS-14, December 1986, pp. 763–778 Full text (PDF)
"The role of particle beams and electrical currents in the plasma universe", Laser and Particle Beams (), vol. 6, August 1988, pp. 471–491 Full text (PDF)
IEEE journal Transactions on Plasma Science: special issues on Space and Cosmic Plasma 1986, 1989, 1990, 1992, 2000, 2003, and 2007
Cambridge University Press journal Laser and Particle Beams: Particle Beams and Basic Phenomena in the Plasma Universe, a Special Issue in Honor of the 80th Birthday of Hannes Alfvén, vol. 6, issue 3, August 1988
Various authors: "Introduction to Plasma Astrophysics and Cosmology", Astrophysics and Space Science, v. 227 (1995) p. 3–11. Proceedings of the Second IEEE International Workshop on Plasma Astrophysics and Cosmology, held from 10 to 12 May 1993 in Princeton, New Jersey
External links
Wright, E. L. "Errors in The Big Bang Never Happened". See also: Lerner, E. J. "Dr. Wright is Wrong", Lerner's reply to the above.
Physical cosmology
Space plasmas
Fringe physics | Plasma cosmology | [
"Physics",
"Astronomy"
] | 2,841 | [
"Space plasmas",
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"Physical cosmology"
] |
262,135 | https://en.wikipedia.org/wiki/Rocket%20engine | A rocket engine is a reaction engine, producing thrust in accordance with Newton's third law by ejecting reaction mass rearward, usually a high-speed jet of high-temperature gas produced by the combustion of rocket propellants stored inside the rocket. However, non-combusting forms such as cold gas thrusters and nuclear thermal rockets also exist. Rocket vehicles carry their own oxidiser, unlike most combustion engines, so rocket engines can be used in a vacuum, and they can achieve great speed, beyond escape velocity. Vehicles commonly propelled by rocket engines include missiles, artillery shells, ballistic missiles and rockets of any size, from tiny fireworks to man-sized weapons to huge spaceships.
Compared to other types of jet engine, rocket engines are the lightest and have the highest thrust, but are the least propellant-efficient (they have the lowest specific impulse). The ideal exhaust is hydrogen, the lightest of all elements, but chemical rockets produce a mix of heavier species, reducing the exhaust velocity.
Terminology
Here, "rocket" is used as an abbreviation for "rocket engine".
Thermal rockets use an inert propellant, heated by electricity (electrothermal propulsion) or a nuclear reactor (nuclear thermal rocket).
Chemical rockets are powered by exothermic reduction-oxidation chemical reactions of the propellant:
Solid-fuel rockets (or solid-propellant rockets or motors) are chemical rockets which use propellant in a solid state.
Liquid-propellant rockets use one or more propellants in a liquid state fed from tanks.
Hybrid rockets use a solid propellant in the combustion chamber, to which a second liquid or gas oxidiser or propellant is added to permit combustion.
Monopropellant rockets use a single propellant decomposed by a catalyst. The most common monopropellants are hydrazine and hydrogen peroxide.
Principle of operation
Rocket engines produce thrust by the expulsion of an exhaust fluid that has been accelerated to high speed through a propelling nozzle. The fluid is usually a gas created by high-pressure combustion of solid or liquid propellants, consisting of fuel and oxidiser components, within a combustion chamber. As the gases expand through the nozzle, they are accelerated to very high (supersonic) speed, and the reaction to this pushes the engine in the opposite direction. Combustion is most frequently used for practical rockets, as the laws of thermodynamics (specifically Carnot's theorem) dictate that high temperatures and pressures are desirable for the best thermal efficiency. Nuclear thermal rockets are capable of higher efficiencies, but currently have environmental problems which preclude their routine use in the Earth's atmosphere and cislunar space.
For model rocketry, an available alternative to combustion is the water rocket pressurized by compressed air, carbon dioxide, nitrogen, or any other readily available, inert gas.
Propellant
Rocket propellant is mass that is stored, usually in some form of tank, or within the combustion chamber itself, prior to being ejected from a rocket engine in the form of a fluid jet to produce thrust.
Chemical rocket propellants are the most commonly used. These undergo exothermic chemical reactions producing a hot gas jet for propulsion. Alternatively, a chemically inert reaction mass can be heated by a high-energy power source through a heat exchanger in lieu of a combustion chamber.
Solid rocket propellants are prepared in a mixture of fuel and oxidising components called grain, and the propellant storage casing effectively becomes the combustion chamber.
Injection
Liquid-fuelled rockets force separate fuel and oxidiser components into the combustion chamber, where they mix and burn. Hybrid rocket engines use a combination of solid and liquid or gaseous propellants. Both liquid and hybrid rockets use injectors to introduce the propellant into the chamber. These are often an array of simple jets – holes through which the propellant escapes under pressure; but sometimes may be more complex spray nozzles. When two or more propellants are injected, the jets usually deliberately cause the propellants to collide as this breaks up the flow into smaller droplets that burn more easily.
Combustion chamber
For chemical rockets the combustion chamber is typically cylindrical, and flame holders, used to hold a part of the combustion in a slower-flowing portion of the combustion chamber, are not needed. The dimensions of the cylinder are such that the propellant is able to combust thoroughly; different rocket propellants require different combustion chamber sizes for this to occur.
This leads to a number called $L^*$, the characteristic length:

$$L^* = \frac{V_c}{A_t}$$

where:
$V_c$ is the volume of the chamber
$A_t$ is the area of the throat of the nozzle.
The appropriate value of $L^*$ varies with the propellant combination.
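As a minimal illustration of the definition above, this Python sketch computes the chamber volume implied by a chosen characteristic length and throat area; both input values are hypothetical round numbers, not data for any particular engine.

```python
def chamber_volume(l_star_m: float, throat_area_m2: float) -> float:
    """Chamber volume implied by the characteristic length L* = V_c / A_t."""
    return l_star_m * throat_area_m2

# Hypothetical inputs: L* of 1.0 m and a throat area of 0.01 m^2.
v_chamber = chamber_volume(1.0, 0.01)
print(f"Required chamber volume: {v_chamber:.4f} m^3")  # prints 0.0100 m^3
```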
The temperatures and pressures typically reached in a rocket combustion chamber in order to achieve practical thermal efficiency are extreme compared to a non-afterburning airbreathing jet engine. No atmospheric nitrogen is present to dilute and cool the combustion, so the propellant mixture can reach true stoichiometric ratios. This, in combination with the high pressures, means that the rate of heat conduction through the walls is very high.
In order for fuel and oxidiser to flow into the chamber, the pressure of the propellants entering the combustion chamber must exceed the pressure inside the combustion chamber itself. This may be accomplished by a variety of design approaches including turbopumps or, in simpler engines, via sufficient tank pressure to advance fluid flow. Tank pressure may be maintained by several means, including a high-pressure helium pressurization system common to many large rocket engines or, in some newer rocket systems, by a bleed-off of high-pressure gas from the engine cycle to autogenously pressurize the propellant tanks. For example, the self-pressurization gas system of the SpaceX Starship is a critical part of SpaceX's strategy to reduce launch vehicle fluids from five in their legacy Falcon 9 vehicle family to just two in Starship, eliminating not only the helium tank pressurant but all hypergolic propellants, as well as nitrogen for cold-gas reaction-control thrusters.
Nozzle
The hot gas produced in the combustion chamber is permitted to escape through an opening (the "throat"), and then through a diverging expansion section. When sufficient pressure is provided to the nozzle (about 2.5–3 times ambient pressure), the nozzle chokes and a supersonic jet is formed, dramatically accelerating the gas, converting most of the thermal energy into kinetic energy. Exhaust speeds vary, depending on the expansion ratio the nozzle is designed for, but exhaust speeds as high as ten times the speed of sound in air at sea level are not uncommon. About half of the rocket engine's thrust comes from the unbalanced pressures inside the combustion chamber, and the rest comes from the pressures acting against the inside of the nozzle (see diagram). As the gas expands (adiabatically) the pressure against the nozzle's walls forces the rocket engine in one direction while accelerating the gas in the other.
The most commonly used nozzle is the de Laval nozzle, a fixed geometry nozzle with a high expansion-ratio. The large bell- or cone-shaped nozzle extension beyond the throat gives the rocket engine its characteristic shape.
The exit static pressure of the exhaust jet depends on the chamber pressure and the ratio of exit to throat area of the nozzle. As exit pressure varies from the ambient (atmospheric) pressure, a choked nozzle is said to be
under-expanded (exit pressure greater than ambient),
perfectly expanded (exit pressure equals ambient),
over-expanded (exit pressure less than ambient; shock diamonds form outside the nozzle), or
grossly over-expanded (a shock wave forms inside the nozzle extension).
In practice, perfect expansion is only achievable with a variable–exit-area nozzle (since ambient pressure decreases as altitude increases), and is not possible above a certain altitude as ambient pressure approaches zero. If the nozzle is not perfectly expanded, then loss of efficiency occurs. Grossly over-expanded nozzles lose less efficiency than might be expected, because the flow separates from the nozzle wall, but the separation can cause mechanical problems with the nozzle. Fixed-area nozzles become progressively more under-expanded as they gain altitude. Almost all de Laval nozzles will be momentarily grossly over-expanded during startup in an atmosphere.
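To make the four regimes concrete, the sketch below classifies a choked nozzle's expansion state by comparing exit and ambient pressure. The numeric threshold for the grossly over-expanded (flow-separated) case is an assumed rule of thumb, not a value from the text above; real separation behaviour is engine-specific.

```python
def expansion_state(p_exit: float, p_ambient: float, gross_ratio: float = 0.4) -> str:
    """Classify a choked nozzle's expansion regime from exit vs ambient pressure.

    gross_ratio is an assumed rule of thumb: flow separation (the grossly
    over-expanded case) is often taken to begin when exit pressure falls
    below roughly 25-40% of ambient; the exact value is engine-specific.
    """
    if p_ambient == 0.0:
        return "under-expanded"  # any finite exit pressure exceeds a vacuum
    if p_exit > p_ambient:
        return "under-expanded"
    if p_exit == p_ambient:
        return "perfectly expanded"
    if p_exit >= gross_ratio * p_ambient:
        return "over-expanded (shock diamonds form outside the nozzle)"
    return "grossly over-expanded (a shock forms inside the nozzle extension)"

# A vacuum-optimised nozzle firing at sea level:
print(expansion_state(p_exit=30e3, p_ambient=101_325.0))  # grossly over-expanded
```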
Nozzle efficiency is affected by operation in the atmosphere because atmospheric pressure changes with altitude; but due to the supersonic speeds of the gas exiting from a rocket engine, the pressure of the jet may be either below or above ambient, and equilibrium between the two is not reached at all altitudes (see diagram).
Back pressure and optimal expansion
For optimal performance, the pressure of the gas at the end of the nozzle should just equal the ambient pressure: if the exhaust's pressure is lower than the ambient pressure, then the vehicle will be slowed by the difference in pressure between the top of the engine and the exit; on the other hand, if the exhaust's pressure is higher, then exhaust pressure that could have been converted into thrust is not converted, and energy is wasted.
To maintain this ideal of equality between the exhaust's exit pressure and the ambient pressure, the diameter of the nozzle would need to increase with altitude, giving the pressure a longer nozzle to act on (and reducing the exit pressure and temperature). This increase is difficult to arrange in a lightweight fashion, although is routinely done with other forms of jet engines. In rocketry a lightweight compromise nozzle is generally used and some reduction in atmospheric performance occurs when used at other than the 'design altitude' or when throttled. To improve on this, various exotic nozzle designs such as the plug nozzle, stepped nozzles, the expanding nozzle and the aerospike have been proposed, each providing some way to adapt to changing ambient air pressure and each allowing the gas to expand further against the nozzle, giving extra thrust at higher altitudes.
When exhausting into a sufficiently low ambient pressure (vacuum) several issues arise. One is the sheer weight of the nozzle—beyond a certain point, for a particular vehicle, the extra weight of the nozzle outweighs any performance gained. Secondly, as the exhaust gases adiabatically expand within the nozzle they cool, and eventually some of the chemicals can freeze, producing 'snow' within the jet. This causes instabilities in the jet and must be avoided.
On a de Laval nozzle, exhaust gas flow detachment will occur in a grossly over-expanded nozzle. As the detachment point will not be uniform around the axis of the engine, a side force may be imparted to the engine. This side force may change over time and result in control problems with the launch vehicle.
Advanced altitude-compensating designs, such as the aerospike or plug nozzle, attempt to minimize performance losses by adjusting to varying expansion ratio caused by changing altitude.
Propellant efficiency
For a rocket engine to be propellant efficient, it is important that the maximum possible pressures be created on the walls of the chamber and nozzle by a specific amount of propellant, as this is the source of the thrust. This can be achieved by all of:
heating the propellant to as high a temperature as possible (using a high energy fuel, containing hydrogen and carbon and sometimes metals such as aluminium, or even using nuclear energy)
using a low specific density gas (as hydrogen rich as possible)
using propellants which are, or decompose to, simple molecules with few degrees of freedom to maximise translational velocity
All of these things minimise the mass of propellant used. Since pressure is proportional to the mass of propellant present to be accelerated as it pushes on the engine, and since from Newton's third law the pressure that acts on the engine also reciprocally acts on the propellant, it turns out that for any given engine, the speed at which the propellant leaves the chamber is unaffected by the chamber pressure (although the thrust is proportional to it). However, speed is significantly affected by all three of the above factors, and the exhaust speed is an excellent measure of engine propellant efficiency. This is termed exhaust velocity, and after allowance is made for factors that can reduce it, the effective exhaust velocity is one of the most important parameters of a rocket engine (although weight, cost, ease of manufacture and so on are usually also very important).
For aerodynamic reasons the flow goes sonic ("chokes") at the narrowest part of the nozzle, the 'throat'. Since the speed of sound in gases increases with the square root of temperature, the use of hot exhaust gas greatly improves performance. By comparison, at room temperature the speed of sound in air is about 340 m/s while the speed of sound in the hot gas of a rocket engine can be over 1700 m/s; much of this performance is due to the higher temperature, but additionally rocket propellants are chosen to be of low molecular mass, and this also gives a higher velocity compared to air.
Expansion in the rocket nozzle then further multiplies the speed, typically between 1.5 and 2 times, giving a highly collimated hypersonic exhaust jet. The speed increase of a rocket nozzle is mostly determined by its area expansion ratio—the ratio of the area of the exit to the area of the throat, but detailed properties of the gas are also important. Larger ratio nozzles are more massive but are able to extract more heat from the combustion gases, increasing the exhaust velocity.
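A rough Python sketch of the two effects just described: the sound speed in the hot gas follows $a = \sqrt{\gamma R T / M}$, and the nozzle then multiplies that speed by roughly 1.5 to 2 times. The gas properties used here ($\gamma$, temperature, molar mass) are hypothetical but representative values, not figures for any particular engine.

```python
import math

R_UNIVERSAL = 8.314  # universal gas constant, J/(mol*K)

def sound_speed(gamma: float, temp_k: float, molar_mass_kg_mol: float) -> float:
    """Speed of sound a = sqrt(gamma * R * T / M) in an ideal gas."""
    return math.sqrt(gamma * R_UNIVERSAL * temp_k / molar_mass_kg_mol)

# Hypothetical combustion gas: gamma ~ 1.2, 3500 K, mean molar mass 22 g/mol.
a_throat = sound_speed(gamma=1.2, temp_k=3500.0, molar_mass_kg_mol=0.022)
print(f"Throat sound speed: {a_throat:.0f} m/s")  # roughly 1260 m/s

# Per the text, the nozzle multiplies this by about 1.5-2x:
print(f"Estimated exhaust speed: {1.5 * a_throat:.0f}-{2.0 * a_throat:.0f} m/s")
```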
Thrust vectoring
Vehicles typically require the overall thrust to change direction over the length of the burn. A number of different ways to achieve this have been flown:
The entire engine is mounted on a hinge or gimbal and any propellant feeds reach the engine via low pressure flexible pipes or rotary couplings.
Just the combustion chamber and nozzle are gimballed; the pumps are fixed, and high-pressure feeds attach to the engine.
Multiple engines (often canted at slight angles) are deployed but throttled to give the overall vector that is required, giving only a very small penalty.
High-temperature vanes protrude into the exhaust and can be tilted to deflect the jet.
Overall performance
Rocket technology can combine very high thrust (meganewtons), very high exhaust speeds (around 10 times the speed of sound in air at sea level) and very high thrust/weight ratios (>100) simultaneously as well as being able to operate outside the atmosphere, and while permitting the use of low pressure and hence lightweight tanks and structure.
Rockets can be further optimised to even more extreme performance along one or more of these axes at the expense of the others.
Specific impulse
The most important metric for the efficiency of a rocket engine is impulse per unit of propellant; this is called specific impulse (usually written $I_{sp}$). This is measured either as a speed (the effective exhaust velocity in metres/second or ft/s) or as a time (seconds). For example, if an engine producing 100 pounds of thrust runs for 320 seconds and burns 100 pounds of propellant, then the specific impulse is 320 seconds. The higher the specific impulse, the less propellant is required to provide the desired impulse.
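A minimal sketch of the worked example above: specific impulse in seconds is total impulse divided by propellant weight, and multiplying by standard gravity converts it to an effective exhaust velocity.

```python
G0 = 9.80665  # standard gravity, m/s^2

def isp_seconds(thrust_lbf: float, burn_time_s: float, propellant_lb: float) -> float:
    """Specific impulse in seconds: total impulse divided by propellant weight."""
    return thrust_lbf * burn_time_s / propellant_lb

isp = isp_seconds(thrust_lbf=100.0, burn_time_s=320.0, propellant_lb=100.0)
print(f"Isp = {isp:.0f} s")                                # 320 s, as in the text
print(f"Effective exhaust velocity = {isp * G0:.0f} m/s")  # about 3138 m/s
```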
The specific impulse that can be achieved is primarily a function of the propellant mix (which ultimately limits the achievable specific impulse), but practical limits on chamber pressures and nozzle expansion ratios reduce the performance that can be achieved.
Net thrust
Below is an approximate equation for calculating the net thrust of a rocket engine:

$$F_n = \dot{m}\,v_{e} + A_{e}\,(p_{e} - p_{amb})$$

where $\dot{m}$ is the propellant mass flow rate, $v_{e}$ the exhaust velocity at the nozzle exit plane, $A_{e}$ the nozzle exit area, $p_{e}$ the static pressure at the exit plane, and $p_{amb}$ the ambient pressure.

Since, unlike a jet engine, a conventional rocket motor lacks an air intake, there is no 'ram drag' to deduct from the gross thrust. Consequently, the net thrust of a rocket motor is equal to the gross thrust (apart from static back pressure).

The $\dot{m}\,v_{e}$ term represents the momentum thrust, which remains constant at a given throttle setting, whereas the $A_{e}\,(p_{e} - p_{amb})$ term represents the pressure thrust. At full throttle, the net thrust of a rocket motor improves slightly with increasing altitude, because as atmospheric pressure decreases with altitude, the pressure thrust term increases. At the surface of the Earth the pressure thrust may be reduced by up to 30%, depending on the engine design. This reduction drops roughly exponentially to zero with increasing altitude.
Maximum efficiency for a rocket engine is achieved by maximising the momentum contribution of the equation without incurring penalties from over-expanding the exhaust. This occurs when $p_{e} = p_{amb}$. Since ambient pressure changes with altitude, most rocket engines spend very little time operating at peak efficiency.
Since specific impulse is force divided by the rate of mass flow, this equation means that the specific impulse varies with altitude.
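A brief Python sketch evaluating the net-thrust equation above at sea level and in vacuum. The engine figures (mass flow, exit velocity, exit area, exit pressure) are hypothetical round numbers chosen only to illustrate the altitude effect, not data for a real engine.

```python
def net_thrust(mdot: float, v_e: float, a_e: float, p_e: float, p_amb: float) -> float:
    """Net thrust F_n = mdot * v_e + A_e * (p_e - p_amb), per the equation above."""
    return mdot * v_e + a_e * (p_e - p_amb)

# Hypothetical engine: 250 kg/s flow, 3000 m/s exit velocity,
# 1 m^2 exit area, 40 kPa exit-plane pressure.
for label, p_amb in (("sea level", 101_325.0), ("vacuum", 0.0)):
    thrust = net_thrust(mdot=250.0, v_e=3000.0, a_e=1.0, p_e=40e3, p_amb=p_amb)
    print(f"{label}: {thrust / 1e3:.0f} kN")

# Thrust rises from ~689 kN at sea level to 790 kN in vacuum,
# illustrating the altitude gain described above.
```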
Vacuum specific impulse, Isp
Due to the specific impulse varying with pressure, a quantity that is easy to compare and calculate with is useful. Because rockets choke at the throat, and because the supersonic exhaust prevents external pressure influences travelling upstream, it turns out that the pressure at the exit is ideally exactly proportional to the propellant flow $\dot{m}$, provided the mixture ratios and combustion efficiencies are maintained. It is thus quite usual to rearrange the above equation slightly:

$$F_n = \dot{m}\left(v_{e} + \frac{A_{e}\,p_{e}}{\dot{m}}\right) - A_{e}\,p_{amb}$$

and so define the vacuum Isp to be:

$$v_{e,\mathrm{vac}} = v_{e} + \frac{A_{e}\,p_{e}}{\dot{m}}$$

where:
$v_{e}$ is the exhaust velocity at the nozzle exit plane
$p_{e}$ is the static pressure at the exit plane and $A_{e}$ the exit area.

And hence:

$$F_n = \dot{m}\,v_{e,\mathrm{vac}} - A_{e}\,p_{amb}$$
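Continuing the hypothetical numbers from the previous sketch, this snippet applies the rearranged equation to obtain the vacuum effective exhaust velocity and the corresponding vacuum Isp.

```python
G0 = 9.80665  # standard gravity, m/s^2

def v_e_vacuum(v_e: float, a_e: float, p_e: float, mdot: float) -> float:
    """Vacuum effective exhaust velocity: v_e,vac = v_e + A_e * p_e / mdot."""
    return v_e + a_e * p_e / mdot

v_vac = v_e_vacuum(v_e=3000.0, a_e=1.0, p_e=40e3, mdot=250.0)
print(f"v_e,vac = {v_vac:.0f} m/s")        # 3160 m/s
print(f"Vacuum Isp = {v_vac / G0:.0f} s")  # about 322 s

# Vacuum thrust is then mdot * v_e,vac = 250 * 3160 N = 790 kN,
# matching the vacuum figure from the previous sketch.
```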
Throttling
Rockets can be throttled by controlling the propellant combustion rate (usually measured in kg/s or lb/s). In liquid and hybrid rockets, the propellant flow entering the chamber is controlled using valves; in solid rockets it is controlled by changing the area of propellant that is burning, and this can be designed into the propellant grain (and hence cannot be controlled in real time).
Rockets can usually be throttled down to an exit pressure of about one-third of ambient pressure (often limited by flow separation in nozzles) and up to a maximum limit determined only by the mechanical strength of the engine.
In practice, the degree to which rockets can be throttled varies greatly, but most rockets can be throttled by a factor of 2 without great difficulty; the typical limitation is combustion stability, as for example, injectors need a minimum pressure to avoid triggering damaging oscillations (chugging or combustion instabilities); but injectors can be optimised and tested for wider ranges.
For example, some more recent liquid-propellant engine designs that have been optimised for greater throttling capability (BE-3, Raptor) can be throttled to as low as 18–20 per cent of rated thrust.
Solid rockets can be throttled by using shaped grains that will vary their surface area over the course of the burn.
Energy efficiency
Rocket engine nozzles are surprisingly efficient heat engines for generating a high speed jet, as a consequence of the high combustion temperature and high compression ratio. Rocket nozzles give an excellent approximation to adiabatic expansion which is a reversible process, and hence they give efficiencies which are very close to that of the Carnot cycle. Given the temperatures reached, over 60% efficiency can be achieved with chemical rockets.
For a vehicle employing a rocket engine the energetic efficiency is very good if the vehicle speed approaches or somewhat exceeds the exhaust velocity (relative to launch); but at low speeds the energy efficiency goes to 0% at zero speed (as with all jet propulsion). See Rocket energy efficiency for more details.
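One common way to quantify this behaviour is the propulsive-efficiency expression for a jet with a fixed exhaust velocity, $\eta = 2(v/c)/(1+(v/c)^2)$, which is zero at standstill and peaks at 100% when the vehicle speed $v$ equals the exhaust velocity $c$; the sketch below assumes this standard expression applies.

```python
def propulsive_efficiency(v_vehicle: float, v_exhaust: float) -> float:
    """eta = 2*(v/c) / (1 + (v/c)**2): zero at rest, 100% when v equals c."""
    ratio = v_vehicle / v_exhaust
    return 2.0 * ratio / (1.0 + ratio**2)

# Efficiency of a 3000 m/s exhaust at several vehicle speeds:
for v in (0.0, 1500.0, 3000.0, 6000.0):
    print(f"v = {v:6.0f} m/s -> eta = {propulsive_efficiency(v, 3000.0):.0%}")

# Prints 0%, 80%, 100%, 80%: efficiency peaks when vehicle speed
# matches exhaust speed, as described above.
```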
Thrust-to-weight ratio
Rockets, of all the jet engines, indeed of essentially all engines, have the highest thrust-to-weight ratio. This is especially true for liquid-fuelled rocket engines.
This high performance is due to the small volume of pressure vessels that make up the engine—the pumps, pipes and combustion chambers involved. The lack of inlet duct and the use of dense liquid propellant allows the pressurisation system to be small and lightweight, whereas duct engines have to deal with air which has around three orders of magnitude lower density.
Of the liquid fuels used, density is lowest for liquid hydrogen. Although hydrogen/oxygen burning has the highest specific impulse of any in-use chemical rocket, hydrogen's very low density (about one-fourteenth that of water) requires larger and heavier turbopumps and pipework, which decreases the engine's thrust-to-weight ratio (for example the RS-25) compared to those that do not use hydrogen (NK-33).
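As a trivial worked example of the ratio itself (engine thrust divided by the weight of the engine alone), with hypothetical figures chosen only to land above 100:

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_to_weight(thrust_n: float, engine_mass_kg: float) -> float:
    """Dimensionless thrust-to-weight ratio of the engine alone."""
    return thrust_n / (engine_mass_kg * G0)

# Hypothetical liquid engine: 800 kN of thrust from a 500 kg engine.
print(f"T/W = {thrust_to_weight(800e3, 500.0):.0f}")  # about 163
```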
Mechanical issues
Rocket combustion chambers are normally operated at fairly high pressure, typically 10–200 bar (1–20 MPa; 150–3,000 psi). When operated within significant atmospheric pressure, higher combustion chamber pressures give better performance by permitting a larger and more efficient nozzle to be fitted without it being grossly overexpanded.
However, these high pressures cause the outermost part of the chamber to be under very large hoop stresses – rocket engines are pressure vessels.
Worse, due to the high temperatures created in rocket engines the materials used tend to have a significantly lowered working tensile strength.
In addition, significant temperature gradients are set up in the walls of the chamber and nozzle; these cause differential expansion of the inner liner, creating internal stresses.
Hard starts
A hard start refers to an over-pressure condition during start of a rocket engine at ignition. In the worst cases, this takes the form of an unconfined explosion, resulting in the damage or destruction of the engine.
Rocket fuels, hypergolic or otherwise, must be introduced into the combustion chamber at the correct rate in order to have a controlled rate of production of hot gas. A "hard start" indicates that the quantity of combustible propellant that entered the combustion chamber prior to ignition was too large. The result is an excessive spike of pressure, possibly leading to structural failure or explosion.
Avoiding hard starts involves careful timing of the ignition relative to valve timing, varying the mixture ratio so as to limit the maximum pressure that can occur, or simply ensuring an adequate ignition source is present well before propellant enters the chamber.
Explosions from hard starts usually cannot happen with purely gaseous propellants, since the amount of the gas present in the chamber is limited by the injector area relative to the throat area, and for practical designs, propellant mass escapes too quickly to be an issue.
A famous example of a hard start was the explosion of Wernher von Braun's "1W" engine during a demonstration to General Walter Dornberger on December 21, 1932. Delayed ignition allowed the chamber to fill with alcohol and liquid oxygen, which exploded violently. Shrapnel was embedded in the walls, but nobody was hit.
Acoustic issues
The extreme vibration and acoustic environment inside a rocket motor commonly result in peak stresses well above mean values, especially in the presence of organ pipe-like resonances and gas turbulence.
Combustion instabilities
The combustion may display undesired instabilities of sudden or periodic nature. The pressure in the injection chamber may increase until the propellant flow through the injector plate decreases; a moment later the pressure drops and the flow increases, injecting more propellant into the combustion chamber which burns a moment later and again increases the chamber pressure, repeating the cycle. This may lead to high-amplitude pressure oscillations, often in the ultrasonic range, which may damage the motor. Oscillations of ±200 psi at 25 kHz were the cause of failures of early versions of the Titan II missile second-stage engines. The other failure mode is a deflagration-to-detonation transition; the supersonic pressure wave formed in the combustion chamber may destroy the engine.
Combustion instability was also a problem during Atlas development. The Rocketdyne engines used in the Atlas family were found to suffer from this effect in several static firing tests, and three missile launches exploded on the pad due to rough combustion in the booster engines. In most cases, it occurred while attempting to start the engines with a "dry start" method whereby the igniter mechanism would be activated prior to propellant injection. During the process of man-rating Atlas for Project Mercury, solving combustion instability was a high priority, and the final two Mercury flights sported an upgraded propulsion system with baffled injectors and a hypergolic igniter.
The problem affecting Atlas vehicles was mainly the so-called "racetrack" phenomenon, where burning propellant would swirl around in a circle at faster and faster speeds, eventually producing vibration strong enough to rupture the engine, leading to complete destruction of the rocket. It was eventually solved by adding several baffles around the injector face to break up swirling propellant.
More significantly, combustion instability was a problem with the Saturn F-1 engines. Some of the early units tested exploded during static firing, which led to the addition of injector baffles.
In the Soviet space program, combustion instability also proved a problem on some rocket engines, including the RD-107 engine used in the R-7 family and the RD-216 used in the R-14 family, and several failures of these vehicles occurred before the problem was solved. Soviet engineering and manufacturing processes never satisfactorily resolved combustion instability in larger RP-1/LOX engines, so the RD-171 engine used to power the Zenit family still used four smaller thrust chambers fed by a common engine mechanism.
The combustion instabilities can be provoked by remains of cleaning solvents in the engine (e.g. the first attempted launch of a Titan II in 1962), reflected shock wave, initial instability after ignition, explosion near the nozzle that reflects into the combustion chamber, and many more factors. In stable engine designs the oscillations are quickly suppressed; in unstable designs they persist for prolonged periods. Oscillation suppressors are commonly used.
Three different types of combustion instabilities occur:
Chugging
A low frequency oscillation in chamber pressure below 200 Hertz. Usually it is caused by pressure variations in feed lines due to variations in acceleration of the vehicle, when rocket engines are building up thrust, are shut down or are being throttled.
Chugging can cause a worsening feedback loop, as cyclic variation in thrust causes longitudinal vibrations to travel up the rocket, causing the fuel lines to vibrate, which in turn do not deliver propellant smoothly into the engines. This phenomenon is known as "pogo oscillations" or "pogo", named after the pogo stick.
In the worst case, this may result in damage to the payload or vehicle. Chugging can be minimised by using several methods, such as installing energy-absorbing devices on feed lines. Chugging may cause Screeching.
Buzzing
An intermediate frequency oscillation in chamber pressure, between 200 and 1000 Hertz. Usually caused by insufficient pressure drop across the injectors. It is generally more annoying than damaging.
Buzzing is known to have adverse effects on engine performance and reliability, primarily as it causes material fatigue. In extreme cases combustion can end up being forced backwards through the injectors – this can cause explosions with monopropellants. Buzzing may cause Screeching.
Screeching
A high frequency oscillation in chamber pressure above 1000 Hertz, sometimes called screaming or squealing. The most immediately damaging, and the hardest to control. It is due to acoustics within the combustion chamber that often couples to the chemical combustion processes that are the primary drivers of the energy release, and can lead to unstable resonant "screeching" that commonly leads to catastrophic failure due to thinning of the insulating thermal boundary layer. Acoustic oscillations can be excited by thermal processes, such as the flow of hot air through a pipe or combustion in a chamber. Specifically, standing acoustic waves inside a chamber can be intensified if combustion occurs more intensely in regions where the pressure of the acoustic wave is maximal.
Such effects are very difficult to predict analytically during the design process, and have usually been addressed by expensive, time-consuming and extensive testing, combined with trial and error remedial correction measures.
Screeching is often dealt with by detailed changes to injectors, changes in the propellant chemistry, vaporising the propellant before injection or use of Helmholtz dampers within the combustion chambers to change the resonant modes of the chamber.
Testing for the possibility of screeching is sometimes done by exploding small explosive charges outside the combustion chamber, with a tube set tangentially to the combustion chamber near the injectors, to determine the engine's impulse response and then evaluating the time response of the chamber pressure; a fast recovery indicates a stable system.
Exhaust noise
For all but the very smallest sizes, rocket exhaust compared to other engines is generally very noisy. As the hypersonic exhaust mixes with the ambient air, shock waves are formed. The Space Shuttle generated over 200 dB(A) of noise around its base. To reduce this, and the risk of payload damage or injury to the crew atop the stack, the mobile launcher platform was fitted with a Sound Suppression System that sprayed water around the base of the rocket over 41 seconds at launch time. Using this system kept sound levels within the payload bay to 142 dB.
The sound intensity from the shock waves generated depends on the size of the rocket and on the exhaust velocity. Such shock waves seem to account for the characteristic crackling and popping sounds produced by large rocket engines when heard live. These noise peaks typically overload microphones and audio electronics, and so are generally weakened or entirely absent in recorded or broadcast audio reproductions. For large rockets at close range, the acoustic effects could actually kill.
More worryingly for space agencies, such sound levels can also damage the launch structure, or worse, be reflected back at the comparatively delicate rocket above. This is why so much water is typically used at launches. The water spray changes the acoustic qualities of the air and reduces or deflects the sound energy away from the rocket.
Generally speaking, noise is most intense when a rocket is close to the ground, since the noise from the engines radiates up away from the jet, as well as reflecting off the ground. Also, when the vehicle is moving slowly, little of the chemical energy input to the engine can go into increasing the kinetic energy of the rocket (since the useful power $P$ transmitted to the vehicle is $P = F V$ for thrust $F$ and speed $V$). Then the largest portion of the energy is dissipated in the exhaust's interaction with the ambient air, producing noise. This noise can be reduced somewhat by flame trenches with roofs, by water injection around the jet and by deflecting the jet at an angle.
Rocket engine development
United States
The development of the US rocket engine industry has been shaped by a complex web of relationships between government agencies, private companies, research institutions, and other stakeholders.
Since the establishment of the first liquid-propellant rocket engine company (Reaction Motors, Inc.) in 1941 and the first government laboratory (GALCIT) devoted to the subject, the US liquid-propellant rocket engine (LPRE) industry has undergone significant changes. At least 14 US companies have been involved in the design, development, manufacture, testing, and flight support operations of various types of rocket engines from 1940 to 2000. In contrast to other countries like Russia, China, or India, where only government or pseudogovernment organisations engage in this business, the US government relies heavily on private industry. These commercial companies are essential to the continued viability of the United States and its form of governance, as they compete with one another to provide cutting-edge rocket engines that meet the needs of the government, the military, and the private sector. In the United States the company that develops the LPRE usually is awarded the production contract.
Generally, the need or demand for a new rocket engine comes from government agencies such as NASA or the Department of Defense. Once the need is identified, government agencies may issue requests for proposals (RFPs) to solicit proposals from private companies and research institutions. Private companies and research institutions, in turn, may invest in research and development (R&D) activities to develop new rocket engine technologies that meet the needs and specifications outlined in the RFPs.
Alongside private companies, universities, independent research institutes and government laboratories also play a critical role in the research and development of rocket engines.
Universities provide graduate and undergraduate education to train qualified technical personnel, and their research programs often contribute to the advancement of rocket engine technologies. More than 25 universities in the US have taught or are currently teaching courses related to Liquid Propellant Rocket Engines (LPREs), and their graduate and undergraduate education programs are considered one of their most important contributions. Universities such as Princeton University, Cornell University, Purdue University, Pennsylvania State University, University of Alabama, the Navy's Post-Graduate School, or the California Institute of Technology have conducted excellent R&D work on topics related to the rocket engine industry. One of the earliest examples of the contribution of universities to the rocket engine industry is the work of the GALCIT in 1941. They demonstrated the first jet-assisted takeoff (JATO) rockets to the Army, leading to the establishment of the Jet Propulsion Laboratory.
However the transfer of knowledge from research professors and their projects to the rocket engine industry has been a mixed experience. While some notable professors and relevant research projects have positively influenced industry practices and understanding of LPREs, the connection between university research and commercial companies has been inconsistent and weak. Universities were not always aware of the industry's specific needs, and engineers and designers in the industry had limited knowledge of university research. As a result, many university research programs remained relatively unknown to industry decision-makers. Furthermore, in the last few decades, certain university research projects, while interesting to professors, were not useful to the industry due to a lack of communication or relevance to industry needs.
Government laboratories, including the Rocket Propulsion Laboratory (now part of Air Force Research Laboratory), Arnold Engineering Test Center, NASA Marshall Space Flight Center, Jet Propulsion Laboratory, Stennis Space Center, White Sands Proving Grounds, and NASA John H. Glenn Research Center, have played crucial roles in the development of liquid propellant rocket engines (LPREs). They have conducted unbiased testing, guided work at US and some non-US contractors, performed research and development, and provided essential testing facilities, including hover test facilities, simulated altitude test facilities, and other resources. Initially, private companies or foundations financed smaller test facilities, but since the 1950s, the U.S. government has funded larger test facilities at government laboratories. This approach reduced costs for the government by not building similar facilities at contractors' plants but increased complexity and expenses for contractors. Nonetheless, government laboratories have solidified their significance and contributed to LPRE advancements.
LPRE programs have been subject to several cancellations in the United States, even after spending millions of dollars on their development. For example, the M-1 LOX/LH2 LPRE, Titan I, and the RS-2200 aerospike, as well as several JATO units and large uncooled thrust chambers, were cancelled. The cancellations of these programs were not related to the specific LPRE's performance or any issues with it. Instead, they were due to the cancellation of the vehicle programs the engine was intended for or budget cuts imposed by the government.
USSR
Russia and the former Soviet Union have been, and remain, the world's foremost nation in developing and building rocket engines. From 1950 to 1998, their organisations developed, built, and put into operation a larger number and a larger variety of liquid propellant rocket engine (LPRE) designs than any other country: approximately 500 different LPREs were developed before 2003, compared with slightly more than 300 in the United States over the same period. The Soviets also had the most rocket-propelled flight vehicles. They had more liquid propellant ballistic missiles, and more space launch vehicles derived or converted from decommissioned ballistic missiles, than any other nation. As of the end of 1998, Russia (or earlier the Soviet Union) had successfully launched 2573 satellites with LPREs, almost 65% of the world total of 3973. All of these vehicle flights were made possible by the timely development of suitable high-performance, reliable LPREs.
Institutions and actors
Unlike many other countries, where the development and production of rocket engines were consolidated within a single organisation, the Soviet Union took a different approach: it established numerous specialised design bureaus (DBs) which would compete for development contracts. These design bureaus, or "konstruktorskoye buro" (KB) in Russian, were state-run organisations primarily responsible for carrying out research, development and prototyping of advanced technologies, usually related to military hardware such as turbojet engines, aircraft components, missiles, or space launch vehicles.
Design Bureaus which specialised in rocket engines often possessed the necessary personnel, facilities, and equipment to conduct laboratory tests, flow tests, and ground testing of experimental rocket engines. Some even had specialised facilities for testing very large engines, conducting static firings of engines installed in vehicle stages, or simulating altitude conditions during engine tests. In certain cases, engine testing, certification and quality control were outsourced to other organisations and locations with more suitable test facilities. Many DBs also had housing complexes, gymnasiums, and medical facilities intended to support the needs of their employees and their families.
The Soviet Union's LPRE development effort saw significant growth during the 1960s and reached its peak in the 1970s. This era coincided with the Cold War between the Soviet Union and the United States, characterised by intense competition in spaceflight achievements. Between 14 and 17 Design Bureaus and research institutes were actively involved in developing LPREs during this period. These organisations received relatively steady support and funding due to high military and spaceflight priorities, which facilitated the continuous development of new engine concepts and manufacturing methods.
Once a mission with a new vehicle (missile or spacecraft) was established it was passed on to a design bureau whose role was to oversee the development of the entire rocket. If none of the previously developed rocket engines met the needs of the mission, a new rocket engine with specific requirements would be contracted to another DB specialised in LPRE development (oftentimes each DB had expertise in specific types of LPREs with different applications, propellants, or engine sizes). This meant that the development or design study of a rocket engine was always aimed at a specific application which entailed set requirements.
When it came to awarding contracts for the development of new rocket engines, either a single design bureau would be chosen or several design bureaus would be given the same contract, which sometimes led to fierce competition between DBs.
When only one DB was picked for the development, it was often the result of the relationship between a vehicle or system's chief designer and the chief designer of a rocket engine specialised DB. If the vehicle's chief designer was happy with previous work done by a certain design bureau it was not unusual to see continued reliance on that LPRE bureau for that class of engines. For example, all but one of the LPREs for submarine-launched missiles were developed by the same design bureau for the same vehicle development prime contractor.
However, when two parallel engine development programs were supported in order to select the superior one for a specific application, several qualified rocket engine models were never used. This luxury of choice was not commonly available in other nations. At the same time, the use of design bureaus also led to certain issues, including program cancellations and duplication. Some major programs were cancelled, resulting in the disposal or storage of previously developed engines.
One notable example of duplication and cancellation was the development of engines for the R-9A ballistic missile. Two sets of engines were supported, but ultimately only one set was selected, leaving several perfectly functional engines unused. Similarly, for the ambitious heavy N-1 space launch vehicle intended for lunar and planetary missions, the Soviet Union developed and put into production at least two engines for each of the six stages. Additionally, they developed alternate engines for a more advanced N-1 vehicle. However, the program faced multiple flight failures, and with the United States' successful Moon landing, the program was ultimately cancelled, leaving the Soviet Union with a surplus of newly qualified engines without a clear purpose.
These examples demonstrate the complex dynamics and challenges faced by the Soviet Union in managing the development and production of rocket engines through Design Bureaus.
Accidents
The development of rocket engines in the Soviet Union was marked by significant achievements, but it also carried ethical considerations due to numerous accidents and fatalities. From a Science and Technology Studies point of view, the ethical implications of these incidents shed light on the complex relationship between technology, human factors, and the prioritisation of scientific advancement over safety.
The Soviet Union encountered a series of tragic accidents and mishaps in the development and operation of rocket engines. Notably, the USSR holds the unfortunate distinction of having experienced more injuries and deaths resulting from liquid propellant rocket engine (LPRE) accidents than any other country. These incidents brought into question the ethical considerations surrounding the development, testing, and operational use of rocket engines.
One of the most notable disasters occurred in 1960 when the R-16 ballistic missile suffered a catastrophic accident on the launchpad at the Tyuratam launch facility. This incident resulted in the deaths of 124 engineers and military personnel, including Marshal M.I. Nedelin, a former deputy minister of defence. The explosion occurred after the second-stage rocket engine suddenly ignited, causing the fully loaded missile to disintegrate. The explosion resulted from the ignition and explosion of the mixed hypergolic propellants, consisting of nitric acid with additives and UDMH (unsymmetrical dimethylhydrazine).
While the immediate cause of the 1960 accident was attributed to a lack of protective circuits in the missile control unit, the ethical considerations surrounding LPRE accidents in the USSR extend beyond specific technical failures. The secrecy surrounding these accidents, which remained undisclosed for approximately three decades, raises concerns about transparency, accountability, and the protection of human life.
The decision to keep fatal LPRE accidents hidden from the public eye reflects a broader ethical dilemma. The Soviet government, driven by the pursuit of scientific and technological superiority during the Cold War, sought to maintain an image of invincibility and conceal the failures that accompanied their advancements. This prioritisation of national prestige over the well-being and safety of workers raises questions about the ethical responsibility of the state and the organisations involved.
Testing
Rocket engines are usually statically tested at a test facility before being put into production. For high altitude engines, either a shorter nozzle must be used, or the rocket must be tested in a large vacuum chamber.
Safety
Rocket vehicles have a reputation for unreliability and danger, especially catastrophic failures. Contrary to this reputation, carefully designed rockets can be made arbitrarily reliable, and in military use rockets are not regarded as unreliable. However, one of the main non-military uses of rockets is for orbital launch. In this application, the premium has typically been placed on minimum weight, and it is difficult to achieve high reliability and low weight simultaneously. In addition, if the number of flights launched is low, there is a very high chance of a design, operations or manufacturing error causing destruction of the vehicle.
Saturn family (1961–1975)
The Rocketdyne H-1 engine, used in a cluster of eight in the first stage of the Saturn I and Saturn IB launch vehicles, had no catastrophic failures in 152 engine-flights. The Pratt and Whitney RL10 engine, used in a cluster of six in the Saturn I second stage, had no catastrophic failures in 36 engine-flights. The Rocketdyne F-1 engine, used in a cluster of five in the first stage of the Saturn V, had no failures in 65 engine-flights. The Rocketdyne J-2 engine, used in a cluster of five in the Saturn V second stage, and singly in the Saturn IB second stage and Saturn V third stage, had no catastrophic failures in 86 engine-flights.
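Records like these bound, rather than measure, the underlying risk. A minimal sketch using the standard "rule of three" approximation (with zero failures observed in n trials, the 95% upper confidence bound on the per-trial failure probability is roughly 3/n), applied to the engine-flight counts above:

```python
# Sketch: 95% upper confidence bounds on per-engine-flight failure
# probability for engines with zero observed failures, using the
# standard "rule of three" approximation (p_upper ~= 3/n for n trials
# with no failures). Engine-flight counts are taken from the text.
engine_flights = {
    "Rocketdyne H-1": 152,
    "Pratt & Whitney RL10": 36,
    "Rocketdyne F-1": 65,
    "Rocketdyne J-2": 86,
}

for engine, n in engine_flights.items():
    p_upper = 3.0 / n  # rule of three: 95% upper bound with 0 failures
    print(f"{engine}: 0 failures in {n} flights -> p < {p_upper:.3f} (95% CL)")
```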
Space Shuttle (1981–2011)
The Space Shuttle Solid Rocket Booster, used in pairs, caused one notable catastrophic failure in 270 engine-flights.
The RS-25, used in a cluster of three, flew in 46 refurbished engine units. These made a total of 405 engine-flights with no catastrophic in-flight failures. A single in-flight RS-25 engine failure occurred during Challenger's STS-51-F mission. This failure had no effect on mission objectives or duration.
Cooling
For efficiency reasons, higher temperatures are desirable, but materials lose their strength if the temperature becomes too high. Rockets run with combustion temperatures that can reach about 3,500 K.
Most other jet engines have gas turbines in the hot exhaust. Due to their larger surface area, they are harder to cool and hence there is a need to run the combustion processes at much lower temperatures, losing efficiency. In addition, duct engines use air as an oxidant, which contains 78% largely unreactive nitrogen, which dilutes the reaction and lowers the temperatures. Rockets have none of these inherent combustion temperature limiters.
The temperatures reached by combustion in rocket engines often substantially exceed the melting points of the nozzle and combustion chamber materials (about 1,200 K for copper alloys). Most construction materials will also combust if exposed to high-temperature oxidiser, which leads to a number of design challenges. The nozzle and combustion chamber walls must not be allowed to combust, melt, or vaporize (a failure sometimes facetiously termed an "engine-rich exhaust").
Rockets that use common construction materials such as aluminium, steel, nickel or copper alloys must employ cooling systems to limit the temperatures that engine structures experience. Regenerative cooling, where the propellant is passed through tubes around the combustion chamber or nozzle, and other techniques, such as film cooling, are employed to give longer nozzle and chamber life. These techniques ensure that a gaseous thermal boundary layer touching the material is kept below the temperature which would cause the material to catastrophically fail.
Material exceptions that can sustain rocket combustion temperatures to a certain degree are carbon–carbon materials and rhenium, although both are subject to oxidation under certain conditions. Other refractory materials, such as alumina, molybdenum, tantalum or tungsten, have been tried but abandoned due to various issues.
Materials technology, combined with the engine design, is a limiting factor in chemical rockets.
In rockets, the heat fluxes that can pass through the wall are among the highest in engineering; fluxes are generally in the range of 0.8–80 MW/m² (0.5–50 BTU/(in²·s)). The strongest heat fluxes are found at the throat, which often sees twice the flux of the associated chamber and nozzle. This is due to the combination of high gas speeds at the throat (which produce a very thin boundary layer) with temperatures that, although lower than in the chamber, remain high (see the combustion temperatures noted above).
In rockets the coolant methods include:
Ablative: The combustion chamber inside walls are lined with a material that traps heat and carries it away with the exhaust as it vaporizes.
Radiative cooling: The engine is made of one or several refractory materials, which take heat flux until its outer thrust chamber wall glows red- or white-hot, radiating the heat away.
Dump cooling: A cryogenic propellant, usually hydrogen, is passed around the nozzle and dumped. This cooling method has various issues, such as wasting propellant. It is only used rarely.
Regenerative cooling: The fuel (and possibly, the oxidiser) of a liquid rocket engine is routed around the nozzle before being injected into the combustion chamber or preburner. This is the most widely applied method of rocket engine cooling.
Film cooling: The engine is designed with rows of multiple orifices lining the inside wall through which additional propellant is injected, cooling the chamber wall as it evaporates. This method is often used in cases where the heat fluxes are especially high, likely in combination with regenerative cooling. A more efficient subtype of film cooling is transpiration cooling, in which propellant passes through a porous inner combustion chamber wall and transpires through it. So far, this method has not seen usage due to various issues with this concept.
Rocket engines may also use several cooling methods. Examples:
Regeneratively and film cooled combustion chamber and nozzle: V-2 Rocket Engine
Regeneratively cooled combustion chamber with a film cooled nozzle extension: Rocketdyne F-1 Engine
Regeneratively cooled combustion chamber with an ablatively cooled nozzle extension: The LR-91 rocket engine
Ablatively and film cooled combustion chamber with a radiatively cooled nozzle extension: Lunar module descent engine (LMDE), Service propulsion system engine (SPS)
Radiatively and film cooled combustion chamber with a radiatively cooled nozzle extension: R-4D storable propellant thrusters
In all cases, another effect that aids in cooling the rocket engine chamber wall is a thin layer of combustion gases (a boundary layer) that is notably cooler than the combustion temperature. Disruption of the boundary layer may occur during cooling failures or combustion instabilities, and wall failure typically occurs soon after.
With regenerative cooling a second boundary layer is found in the coolant channels around the chamber. This boundary layer thickness needs to be as small as possible, since the boundary layer acts as an insulator between the wall and the coolant. This may be achieved by making the coolant velocity in the channels as high as possible.
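A minimal sketch of this velocity effect, using the standard Dittus–Boelter correlation for turbulent channel flow; the channel diameter and kerosene-like fluid properties below are illustrative placeholders, not data for any particular engine:

```python
# Sketch: coolant-side heat-transfer coefficient in a regenerative
# cooling channel via the Dittus-Boelter correlation for turbulent
# flow (Nu = 0.023 * Re^0.8 * Pr^0.4). Property values below are
# illustrative placeholders.

def coolant_htc(velocity, diameter, rho, mu, cp, k):
    """Heat-transfer coefficient h [W/(m^2*K)] for coolant in a channel."""
    re = rho * velocity * diameter / mu          # Reynolds number
    pr = cp * mu / k                             # Prandtl number
    nu = 0.023 * re**0.8 * pr**0.4               # Dittus-Boelter (heating)
    return nu * k / diameter

# Kerosene-like properties (placeholder values)
props = dict(rho=800.0, mu=1.0e-3, cp=2000.0, k=0.12)
for v in (10.0, 50.0, 100.0):                    # coolant velocity, m/s
    h = coolant_htc(v, diameter=0.005, **props)
    print(f"v = {v:5.1f} m/s -> h = {h/1e3:7.1f} kW/(m^2*K)")
```

The coefficient grows roughly as the 0.8 power of velocity, which is why designers push coolant speeds as high as the pressure-drop budget allows.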
Liquid-fuelled engines are often run fuel-rich, which lowers combustion temperatures. This reduces heat loads on the engine and allows lower cost materials and a simplified cooling system. This can also increase performance by lowering the average molecular weight of the exhaust and increasing the efficiency with which combustion heat is converted to kinetic exhaust energy.
Chemistry
Rocket propellants require a high energy per unit mass (specific energy), which must be balanced against the tendency of highly energetic propellants to spontaneously explode. Assuming that the chemical potential energy of the propellants can be safely stored, the combustion process results in a great deal of heat being released. A significant fraction of this heat is transferred to kinetic energy in the engine nozzle, propelling the rocket forward in combination with the mass of combustion products released.
Ideally all the reaction energy appears as kinetic energy of the exhaust gases, as exhaust velocity is the single most important performance parameter of an engine. However, real exhaust species are molecules, which typically have translational, vibrational, and rotational modes with which to dissipate energy. Of these, only translation can do useful work on the vehicle, and while energy does transfer between modes, this process occurs on a timescale far in excess of the time required for the exhaust to leave the nozzle.
The more chemical bonds an exhaust molecule has, the more rotational and vibrational modes it will have. Consequently, it is generally desirable for the exhaust species to be as simple as possible, with a diatomic molecule composed of light, abundant atoms such as H2 being ideal in practical terms. However, in the case of a chemical rocket, hydrogen is a reactant and reducing agent, not a product. An oxidizing agent, most typically oxygen or an oxygen-rich species, must be introduced into the combustion process, adding mass and chemical bonds to the exhaust species.
An additional advantage of light molecules is that they may be accelerated to high velocity at temperatures that can be contained by currently available materials - the high gas temperatures in rocket engines pose serious problems for the engineering of survivable motors.
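A minimal sketch of the standard isentropic exhaust-velocity relation, illustrating the 1/√M advantage of light exhaust molecules; the chamber temperature, ratio of specific heats, and pressure ratio below are illustrative assumptions, not properties of any specific engine:

```python
# Sketch: ideal exhaust velocity from isentropic nozzle theory,
# showing the 1/sqrt(M) benefit of light exhaust molecules. Chamber
# conditions and gamma are illustrative placeholders.
from math import sqrt

R_U = 8.314  # universal gas constant, J/(mol*K)

def exhaust_velocity(T_c, M, gamma=1.2, pressure_ratio=0.001):
    """Ideal exhaust velocity [m/s]; M in kg/mol, p_e/p_c = pressure_ratio."""
    term = 1.0 - pressure_ratio ** ((gamma - 1.0) / gamma)
    return sqrt(2.0 * gamma / (gamma - 1.0) * R_U * T_c / M * term)

T_c = 3500.0  # chamber temperature, K (illustrative)
for name, M in (("H2O (M=18)", 0.018), ("CO2 (M=44)", 0.044)):
    print(f"{name}: v_e ~ {exhaust_velocity(T_c, M):.0f} m/s")
```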
Liquid hydrogen (LH2) and oxygen (LOX, or LO2), are the most effective propellants in terms of exhaust velocity that have been widely used to date, though a few exotic combinations involving boron or liquid ozone are potentially somewhat better in theory if various practical problems could be solved.
When computing the specific reaction energy of a given propellant combination, the entire mass of the propellants (both fuel and oxidiser) must be included. The exception is in the case of air-breathing engines, which use atmospheric oxygen and consequently have to carry less mass for a given energy output. Fuels for car or turbojet engines have a much better effective energy output per unit mass of propellant that must be carried, but are similar per unit mass of fuel.
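A rough worked example of this accounting; the heating values and stoichiometric mixture ratios below are rounded textbook approximations:

```python
# Sketch: why the full propellant mass matters. Heating values are
# approximate lower heating values (~120 MJ/kg for H2, ~43 MJ/kg for
# kerosene); stoichiometric oxidizer ratios are rounded.
fuel_lhv = {"H2/O2": 120.0, "kerosene/O2": 43.0}        # MJ per kg of FUEL
oxidizer_per_fuel = {"H2/O2": 8.0, "kerosene/O2": 3.4}  # kg O2 per kg fuel

for combo in fuel_lhv:
    per_fuel = fuel_lhv[combo]
    per_propellant = per_fuel / (1.0 + oxidizer_per_fuel[combo])
    print(f"{combo}: {per_fuel:.0f} MJ/kg fuel -> "
          f"{per_propellant:.1f} MJ/kg propellant")
```

A car or turbojet burning kerosene against atmospheric oxygen keeps the full ~43 MJ/kg figure, while a rocket carrying its own oxidiser drops to roughly 10 MJ/kg of stored propellant.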
Computer programs that predict the performance of propellants in rocket engines are available.
Ignition
With liquid and hybrid rockets, immediate ignition of the propellants as they first enter the combustion chamber is essential.
With liquid propellants (but not gaseous), failure to ignite within milliseconds usually causes too much liquid propellant to be inside the chamber, and if/when ignition occurs the amount of hot gas created can exceed the maximum design pressure of the chamber, causing a catastrophic failure of the pressure vessel. This is sometimes called a hard start or a rapid unscheduled disassembly (RUD).
Ignition can be achieved by a number of different methods; a pyrotechnic charge can be used, a plasma torch can be used, or electric spark ignition may be employed. Some fuel/oxidiser combinations ignite on contact (hypergolic), and non-hypergolic fuels can be "chemically ignited" by priming the fuel lines with hypergolic propellants (popular in Russian engines).
Gaseous propellants generally do not cause hard starts; in these rockets the total injector area is less than the throat area, so the chamber pressure tends to ambient prior to ignition, and high pressures cannot form even if the entire chamber is full of flammable gas at ignition.
Solid propellants are usually ignited with one-shot pyrotechnic devices and combustion usually proceeds through total consumption of the propellants.
Once ignited, rocket chambers are self-sustaining and igniters are not needed. Indeed, chambers often spontaneously reignite if they are restarted after being shut down for a few seconds. Unless designed for re-ignition, when cooled, many rockets cannot be restarted without at least minor maintenance, such as replacement of the pyrotechnic igniter or even refueling of the propellants.
Jet physics
Rocket jets vary depending on the rocket engine, design altitude, flight altitude, thrust and other factors.
Carbon-rich exhausts from kerosene-based fuels such as RP-1 are often orange in colour due to the black-body radiation of the unburnt particles, in addition to the blue Swan bands. Peroxide oxidiser-based rockets and hydrogen rocket jets contain largely steam and are nearly invisible to the naked eye but shine brightly in the ultraviolet and infrared ranges. Jets from solid-propellant rockets can be highly visible, as the propellant frequently contains metals such as elemental aluminium which burns with an orange-white flame and adds energy to the combustion process. Rocket engines which burn liquid hydrogen and oxygen will exhibit a nearly transparent exhaust, due to it being mostly superheated steam (water vapour), plus some unburned hydrogen.
The nozzle is usually over-expanded at sea level, and the exhaust can exhibit visible shock diamonds, made visible through a schlieren effect by the incandescence of the exhaust gas.
The shape of the jet varies for a fixed-area nozzle as the expansion ratio varies with altitude: at high altitude all rockets are grossly under-expanded, and only a small percentage of exhaust gases actually end up expanding forwards.
Types of rocket engines
Physically powered
Chemically powered
Electrically powered
Thermal
Preheated
Solar thermal
The solar thermal rocket would make use of solar power to directly heat reaction mass, and therefore does not require an electrical generator as most other forms of solar-powered propulsion do. A solar thermal rocket only has to carry the means of capturing solar energy, such as concentrators and mirrors. The heated propellant is fed through a conventional rocket nozzle to produce thrust. The engine thrust is directly related to the surface area of the solar collector and to the local intensity of the solar radiation and inversely proportional to the Isp.
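A short sketch of where that scaling comes from, assuming ideal conversion of collected power into jet power (here η, A and I denote collector efficiency, collector area and local solar intensity, and g0 is standard gravity; all symbols are illustrative):

```latex
P = \eta A I
    % collected solar power
P = \tfrac{1}{2}\dot{m}v_e^{2} = \tfrac{1}{2} F v_e
    % ideal jet power for thrust F and exhaust velocity v_e
v_e = g_0 I_{sp}
    % definition of specific impulse
\Rightarrow\quad F = \frac{2P}{v_e} = \frac{2\,\eta A I}{g_0\, I_{sp}}
```

Thrust is thus proportional to collector area and solar intensity, and inversely proportional to Isp, exactly as stated above.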
Beamed thermal
Nuclear thermal
Nuclear
Nuclear propulsion includes a wide variety of propulsion methods that use some form of nuclear reaction as their primary power source. Various types of nuclear propulsion have been proposed, and some of them tested, for spacecraft applications.
History of rocket engines
According to the writings of the Roman Aulus Gellius, the earliest known example of jet propulsion was in c. 400 BC, when a Greek Pythagorean named Archytas propelled a wooden bird along wires using steam. However, it was not powerful enough to take off under its own thrust.
The aeolipile described in the first century BC, often known as Hero's engine, consisted of a pair of steam rocket nozzles mounted on a bearing. It was created almost two millennia before the Industrial Revolution but the principles behind it were not well understood, and it was not developed into a practical power source.
The availability of black powder to propel projectiles was a precursor to the development of the first solid rocket. Ninth Century Chinese Taoist alchemists discovered black powder in a search for the elixir of life; this accidental discovery led to fire arrows which were the first rocket engines to leave the ground.
It is stated that "the reactive forces of incendiaries were probably not applied to the propulsion of projectiles prior to the 13th century". A turning point in rocket technology emerged with a short manuscript entitled Liber Ignium ad Comburendos Hostes (abbreviated as The Book of Fires). The manuscript is composed of recipes for creating incendiary weapons from the mid-eighth to the end of the thirteenth centuries—two of which are rockets. The first recipe calls for one part of colophonium and sulfur added to six parts of saltpeter (potassium nitrate) dissolved in laurel oil, then inserted into hollow wood and lit to "fly away suddenly to whatever place you wish and burn up everything". The second recipe combines one pound of sulfur, two pounds of charcoal, and six pounds of saltpeter—all finely powdered on a marble slab. This powder mixture is packed firmly into a long and narrow case. The introduction of saltpeter into pyrotechnic mixtures connected the shift from hurled Greek fire into self-propelled rocketry.
Articles and books on the subject of rocketry appeared increasingly from the fifteenth through seventeenth centuries. In the sixteenth century, German military engineer Conrad Haas (1509–1576) wrote a manuscript which introduced the construction of multi-staged rockets.
Rocket engines were also put in use by Tippu Sultan, the king of Mysore. These usually consisted of a tube of soft hammered iron, closed at one end, packed with black powder propellant and strapped to a long shaft of bamboo. A rocket carrying about one pound of powder could travel a considerable distance. These 'rockets', fitted with swords, would travel several meters in the air before coming down with sword edges facing the enemy. These were used very effectively against the British Empire.
Modern rocketry
Slow development of this technology continued up to the later 19th century, when Russian Konstantin Tsiolkovsky first wrote about liquid-fuelled rocket engines. He was the first to develop the Tsiolkovsky rocket equation, though it was not published widely for some years.
The modern solid- and liquid-fuelled engines became realities early in the 20th century, thanks to the American physicist Robert Goddard. Goddard was the first to use a De Laval nozzle on a solid-propellant (gunpowder) rocket engine, doubling the thrust and increasing the efficiency by a factor of about twenty-five. This was the birth of the modern rocket engine. He calculated from his independently derived rocket equation that a reasonably sized rocket, using solid fuel, could place a one-pound payload on the Moon.
The era of liquid-fuel rocket engines
Goddard began to use liquid propellants in 1921, and in 1926 became the first to launch a liquid-fuelled rocket. Goddard pioneered the use of the De Laval nozzle, lightweight propellant tanks, small light turbopumps, thrust vectoring, the smoothly-throttled liquid fuel engine, regenerative cooling, and curtain cooling.
During the late 1930s, German scientists, such as Wernher von Braun and Hellmuth Walter, investigated installing liquid-fuelled rockets in military aircraft (Heinkel He 112, He 111, He 176 and Messerschmitt Me 163).
The turbopump was employed by German scientists in World War II. Until then cooling the nozzle had been problematic, and the A4 ballistic missile used dilute alcohol for the fuel, which reduced the combustion temperature sufficiently.
Staged combustion (Замкнутая схема) was first proposed by Alexey Isaev in 1949. The first staged combustion engine was the S1.5400 used in the Soviet planetary rocket, designed by Melnikov, a former assistant to Isaev. About the same time (1959), Nikolai Kuznetsov began work on the closed cycle engine NK-9 for Korolev's orbital ICBM, GR-1. Kuznetsov later evolved that design into the NK-15 and NK-33 engines for the unsuccessful Lunar N1 rocket.
In the West, the first laboratory staged-combustion test engine was built in Germany in 1963, by Ludwig Boelkow.
Liquid hydrogen engines were first successfully developed in America: the RL-10 engine first flew in 1962. Its successor, the Rocketdyne J-2, was used in the Apollo program's Saturn V rocket to send humans to the Moon. The high specific impulse and low density of liquid hydrogen lowered the upper stage mass and the overall size and cost of the vehicle.
The record for most engines on one rocket flight is 44, set by NASA in 2016 on a Black Brant.
See also
Comparison of orbital rocket engines
Rotating detonation engine
Jet damping, an effect of the exhaust jet of a rocket that tends to slow a vehicle's rotation speed
Model rocket motor classification lettered engines
NERVA (Nuclear Energy for Rocket Vehicle Applications), a US nuclear thermal rocket programme
Photon rocket
Project Prometheus, NASA development of nuclear propulsion for long-duration spaceflight, begun in 2003
Rocket propulsion technologies (disambiguation)
Notes
References
External links
Designing for rocket engine life expectancy
Rocket Engine performance analysis with Plume Spectrometry
Rocket Engine Thrust Chamber technical article
Net Thrust of a Rocket Engine calculator
Design Tool for Liquid Rocket Engine Thermodynamic Analysis
Rocket & Space Technology - Rocket Propulsion
The official website of test pilot Erich Warsitz (world's first jet pilot) which includes videos of the Heinkel He 112 fitted with von Braun's and Hellmuth Walter's rocket engines (as well as the He 111 with ATO Units)
Aerospace technologies | Rocket engine | [
"Technology"
] | 12,859 | [
"Rocket engines",
"Engines"
] |
262,252 | https://en.wikipedia.org/wiki/Pyrolysis | Pyrolysis is the process of thermal decomposition of materials at elevated temperatures, often in an inert atmosphere without access to oxygen.
Etymology
The word pyrolysis is coined from the Greek-derived elements pyro- (from Ancient Greek πῦρ : pûr - "fire, heat, fever") and lysis (λύσις : lúsis - "separation, loosening").
Applications
Pyrolysis is most commonly used in the treatment of organic materials. It is one of the processes involved in the charring of wood or pyrolysis of biomass. In general, pyrolysis of organic substances produces volatile products and leaves char, a carbon-rich solid residue. Extreme pyrolysis, which leaves mostly carbon as the residue, is called carbonization. Pyrolysis is considered one of the steps in the processes of gasification or combustion. Laypeople often confuse pyrolysis gas with syngas. Pyrolysis gas has a high percentage of heavy tar fractions, which condense at relatively high temperatures, preventing its direct use in gas burners and internal combustion engines, unlike syngas.
The process is used heavily in the chemical industry, for example, to produce ethylene, many forms of carbon, and other chemicals from petroleum, coal, and even wood, or to produce coke from coal. It is used also in the conversion of natural gas (primarily methane) into hydrogen gas and solid carbon char, recently introduced on an industrial scale. Aspirational applications of pyrolysis would convert biomass into syngas and biochar, waste plastics back into usable oil, or waste into safely disposable substances.
Terminology
Pyrolysis is one of the various types of chemical degradation processes that occur at higher temperatures (above the boiling point of water or other solvents). It differs from other processes like combustion and hydrolysis in that it usually does not involve the addition of other reagents such as oxygen (O2, in combustion) or water (in hydrolysis). Pyrolysis produces solids (char), condensable liquids (light and heavy oils and tar), and non-condensable gases.
Pyrolysis is different from gasification. In the chemical process industry, pyrolysis refers to a partial thermal degradation of carbonaceous materials that takes place in an inert (oxygen-free) atmosphere and produces gases, liquids, and solids. The pyrolysis can be extended to full gasification, which produces mainly gaseous output, often with the addition of, for example, steam to gasify residual carbonaceous solids; see steam reforming.
Types
Specific types of pyrolysis include:
Carbonization, the complete pyrolysis of organic matter, which usually leaves a solid residue that consists mostly of elemental carbon.
Methane pyrolysis, the direct conversion of methane to hydrogen fuel and separable solid carbon, sometimes using molten metal catalysts.
Hydrous pyrolysis, in the presence of superheated water or steam, producing hydrogen and substantial atmospheric carbon dioxide.
Dry distillation, as in the original production of sulfuric acid from sulfates.
Destructive distillation, as in the manufacture of charcoal, coke and activated carbon.
Charcoal burning, the production of charcoal.
Tar production by destructive distillation of wood in tar kilns.
Caramelization of sugars.
High-temperature cooking processes such as roasting, frying, toasting, and grilling.
Cracking of heavier hydrocarbons into lighter ones, as in oil refining.
Thermal depolymerization, which breaks down plastics and other polymers into monomers and oligomers.
Ceramization involving the formation of polymer derived ceramics from preceramic polymers under an inert atmosphere.
Catagenesis, the natural conversion of buried organic matter to fossil fuels.
Flash vacuum pyrolysis, used in organic synthesis.
Other pyrolysis types come from a different classification that focuses on the pyrolysis operating conditions and heating system used, which have an impact on the yield of the pyrolysis products.
History
Pyrolysis has been used for turning wood into charcoal since ancient times. The ancient Egyptians used the liquid fraction obtained from the pyrolysis of cedar wood, in their embalming process.
The dry distillation of wood remained the major source of methanol into the early 20th century.
Pyrolysis was instrumental in the discovery of many chemical substances, such as phosphorus from ammonium sodium hydrogen phosphate in concentrated urine, oxygen from mercuric oxide, and various nitrates.
General processes and mechanisms
Pyrolysis generally consists of heating the material above its decomposition temperature, breaking chemical bonds in its molecules. The fragments usually become smaller molecules, but may combine to produce residues with larger molecular mass, even amorphous covalent solids.
In many settings, some amounts of oxygen, water, or other substances may be present, so that combustion, hydrolysis, or other chemical processes may occur besides pyrolysis proper. Sometimes those chemicals are added intentionally, as in the burning of firewood, in the traditional manufacture of charcoal, and in the steam cracking of crude oil.
Conversely, the starting material may be heated in a vacuum or in an inert atmosphere to avoid chemical side reactions (such as combustion or hydrolysis). Pyrolysis in a vacuum also lowers the boiling point of the byproducts, improving their recovery.
When organic matter is heated at increasing temperatures in open containers, the following processes generally occur, in successive or overlapping stages:
Below about 100 °C, volatiles, including some water, evaporate. Heat-sensitive substances, such as vitamin C and proteins, may already partially change or decompose at this stage.
At about 100 °C or slightly higher, any remaining water that is merely absorbed in the material is driven off. This process consumes a lot of energy, so the temperature may stop rising until all water has evaporated. Water trapped in the crystal structure of hydrates may come off at somewhat higher temperatures.
Some solid substances, like fats, waxes, and sugars, may melt and separate.
Between 100 and 500 °C, many common organic molecules break down. Most sugars start decomposing at 160–180 °C. Cellulose, a major component of wood, paper, and cotton fabrics, decomposes at about 350 °C. Lignin, another major wood component, starts decomposing at about 350 °C, but continues releasing volatile products up to 500 °C. The decomposition products usually include water, carbon monoxide and/or carbon dioxide, as well as a large number of organic compounds. Gases and volatile products leave the sample, and some of them may condense again as smoke. Generally, this process also absorbs energy. Some volatiles may ignite and burn, creating a visible flame. The non-volatile residues typically become richer in carbon and form large disordered molecules, with colors ranging between brown and black. At this point the matter is said to have been "charred" or "carbonized".
At 200–300 °C, if oxygen has not been excluded, the carbonaceous residue may start to burn, in a highly exothermic reaction, often with no or little visible flame. Once carbon combustion starts, the temperature rises spontaneously, turning the residue into a glowing ember and releasing carbon dioxide and/or monoxide. At this stage, some of the nitrogen still remaining in the residue may be oxidized into nitrogen oxides. Sulfur and other elements like chlorine and arsenic may be oxidized and volatilized at this stage.
Once combustion of the carbonaceous residue is complete, a powdery or solid mineral residue (ash) is often left behind, consisting of inorganic oxidized materials of high melting point. Some of the ash may have left during combustion, entrained by the gases as fly ash or particulate emissions. Metals present in the original matter usually remain in the ash as oxides or carbonates, such as potash. Phosphorus, from materials such as bone, phospholipids, and nucleic acids, usually remains as phosphates.
Safety challenges
Because pyrolysis takes place at high temperatures which exceed the autoignition temperature of the produced gases, an explosion risk exists if oxygen is present. Pyrolysis systems therefore need careful temperature control, which can be accomplished with an open-source pyrolysis controller. Pyrolysis also produces various toxic gases, mainly carbon monoxide. The greatest risk of fire, explosion and release of toxic gases comes when the system is starting up and shutting down, operating intermittently, or during operational upsets.
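As a minimal sketch of what such a controller does, here is a bang-bang loop with a deadband; read_temperature() and set_heater() are hypothetical hardware hooks, not the API of any real controller:

```python
# Sketch: a minimal bang-bang temperature controller of the kind an
# open-source pyrolysis controller implements. The hardware hooks
# (read_temperature, set_heater) are hypothetical placeholders.
import time

SETPOINT_C = 500.0     # target pyrolysis temperature
HYSTERESIS_C = 5.0     # deadband to avoid rapid heater cycling

def control_loop(read_temperature, set_heater, period_s=1.0):
    heater_on = False
    while True:
        t = read_temperature()
        if t < SETPOINT_C - HYSTERESIS_C:
            heater_on = True           # too cold: heat
        elif t > SETPOINT_C + HYSTERESIS_C:
            heater_on = False          # too hot: coast
        set_heater(heater_on)
        time.sleep(period_s)
```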
Inert gas purging is essential to manage inherent explosion risks. The procedure is not trivial and failure to keep oxygen out has led to accidents.
Occurrence and uses
Clandestine chemistry
Conversion of CBD to THC can be brought about by pyrolysis.
Cooking
Pyrolysis has many applications in food preparation. Caramelization is the pyrolysis of sugars in food (often after the sugars have been produced by the breakdown of polysaccharides). The food goes brown and changes flavor. The distinctive flavors are used in many dishes; for instance, caramelized onion is used in French onion soup. The temperatures needed for caramelization lie above the boiling point of water. Frying oil can easily rise above the boiling point of water. Putting a lid on the frying pan keeps the water in, and some of it re-condenses, keeping the temperature too low for browning for a longer time.
Pyrolysis of food can also be undesirable, as in the charring of burnt food (at temperatures too low for the oxidative combustion of carbon to produce flames and burn the food to ash).
Coke, carbon, charcoals, and chars
Carbon and carbon-rich materials have desirable properties but are nonvolatile, even at high temperatures. Consequently, pyrolysis is used to produce many kinds of carbon; these can be used for fuel, as reagents in steelmaking (coke), and as structural materials.
Charcoal is a less smoky fuel than pyrolyzed wood. Some cities ban, or used to ban, wood fires; when residents only use charcoal (and similarly treated rock coal, called coke), air pollution is significantly reduced. In cities where people do not generally cook or heat with fires, this is not needed. In the mid-20th century, "smokeless" legislation in Europe required cleaner-burning techniques, such as coke fuel and smoke-burning incinerators, as an effective measure to reduce air pollution.
The coke-making or "coking" process consists of heating the material in "coking ovens" to very high temperatures so that the molecules are broken down into lighter volatile substances, which leave the vessel, and a porous but hard residue that is mostly carbon and inorganic ash. The amount of volatiles varies with the source material, but is typically 25–30% of it by weight. High temperature pyrolysis is used on an industrial scale to convert coal into coke. This is useful in metallurgy, where the higher temperatures are necessary for many processes, such as steelmaking. Volatile by-products of this process are also often useful, including benzene and pyridine. Coke can also be produced from the solid residue left from petroleum refining.
The original vascular structure of the wood and the pores created by escaping gases combine to produce a light and porous material. By starting with a dense wood-like material, such as nutshells or peach stones, one obtains a form of charcoal with particularly fine pores (and hence a much larger pore surface area), called activated carbon, which is used as an adsorbent for a wide range of chemical substances.
Biochar is the residue of incomplete organic pyrolysis, e.g., from cooking fires. It is a key component of the terra preta soils associated with ancient indigenous communities of the Amazon basin. Terra preta is much sought by local farmers for its superior fertility and capacity to promote and retain an enhanced suite of beneficial microbiota, compared to the typical red soil of the region. Efforts are underway to recreate these soils through biochar, the solid residue of pyrolysis of various materials, mostly organic waste.
Carbon fibers are filaments of carbon that can be used to make very strong yarns and textiles. Carbon fiber items are often produced by spinning and weaving the desired item from fibers of a suitable polymer, and then pyrolyzing the material at high temperature. The first carbon fibers were made from rayon, but polyacrylonitrile has become the most common starting material. For their first workable electric lamps, Joseph Wilson Swan and Thomas Edison used carbon filaments made by pyrolysis of cotton yarns and bamboo splinters, respectively.
Pyrolysis is the reaction used to coat a preformed substrate with a layer of pyrolytic carbon. This is typically done in a fluidized bed reactor heated to high temperature. Pyrolytic carbon coatings are used in many applications, including artificial heart valves.
Liquid and gaseous biofuels
Pyrolysis is the basis of several methods for producing fuel from biomass, i.e. lignocellulosic biomass. Crops studied as biomass feedstock for pyrolysis include native North American prairie grasses such as switchgrass and bred versions of other grasses such as Miscanthus giganteus. Other sources of organic matter as feedstock for pyrolysis include greenwaste, sawdust, waste wood, leaves, vegetables, nut shells, straw, cotton trash, rice hulls, and orange peels. Animal waste, including poultry litter, dairy manure, and potentially other manures, is also under evaluation. Some industrial byproducts are also suitable feedstock, including paper sludge, distillers grain, and sewage sludge.
In the biomass components, the pyrolysis of hemicellulose happens between 210 and 310 °C. The pyrolysis of cellulose starts from 300 to 315 °C and ends at 360–380 °C, with a peak at 342–354 °C. Lignin starts to decompose at about 200 °C and continues until 1000 °C.
Synthetic diesel fuel by pyrolysis of organic materials is not yet economically competitive. Higher efficiency is sometimes achieved by flash pyrolysis, in which finely divided feedstock is quickly heated for less than two seconds.
Syngas is usually produced by pyrolysis.
The low quality of oils produced through pyrolysis can be improved by physical and chemical processes, which might drive up production costs, but may make sense economically as circumstances change.
There is also the possibility of integrating with other processes such as mechanical biological treatment and anaerobic digestion. Fast pyrolysis is also investigated for biomass conversion. Fuel bio-oil can also be produced by hydrous pyrolysis.
Methane pyrolysis for hydrogen
Methane pyrolysis is an industrial process for "turquoise" hydrogen production from methane by removing solid carbon from natural gas. This one-step process produces hydrogen in high volume at low cost (less than steam reforming with carbon sequestration). No greenhouse gas is released. No deep well injection of carbon dioxide is needed. Only water is released when hydrogen is used as the fuel for fuel-cell electric heavy truck transportation,
gas turbine electric power generation, and industrial processes including the production of ammonia fertilizer and cement. Methane pyrolysis, operating around 1065 °C, produces hydrogen from natural gas while allowing easy removal of carbon (solid carbon is a byproduct of the process). The industrial-quality solid carbon can then be sold or landfilled and is not released into the atmosphere, avoiding emission of greenhouse gas (GHG) or groundwater pollution from a landfill. In 2015, a company called Monolith Materials built a pilot plant in Redwood City, CA, to study scaling methane pyrolysis using renewable power in the process. A successful pilot project then led to a larger commercial-scale demonstration plant in Hallam, Nebraska in 2016. As of 2020, this plant is operational and can produce around 14 metric tons of hydrogen per day. In 2021, the US Department of Energy backed Monolith Materials' plans for major expansion with a $1B loan guarantee. The funding will help produce a plant capable of generating 164 metric tons of hydrogen per day by 2024. Pilots with gas utilities and biogas plants are underway with companies like Modern Hydrogen. Volume production is also being evaluated in the BASF "methane pyrolysis at scale" pilot plant, by the chemical engineering team at the University of California, Santa Barbara, and in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA). The power consumed for process heat is only about one-seventh of that consumed in the water electrolysis method for producing hydrogen.
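The "one-seventh" figure can be sanity-checked from textbook enthalpies; a rough sketch (the values are standard approximations, and real processes have additional losses):

```python
# Sketch: comparing the minimum heat for methane pyrolysis with the
# minimum energy for water electrolysis. CH4 -> C(s) + 2 H2 requires
# ~74.9 kJ/mol CH4 (the negative of methane's enthalpy of formation);
# splitting liquid water requires ~285.8 kJ/mol H2.
dH_pyrolysis_per_mol_CH4 = 74.9e3     # J/mol CH4, yields 2 mol H2
dH_electrolysis_per_mol_H2 = 285.8e3  # J/mol H2

M_H2 = 2.016e-3  # kg/mol

pyro_per_kg = dH_pyrolysis_per_mol_CH4 / 2.0 / M_H2  # J per kg H2
elec_per_kg = dH_electrolysis_per_mol_H2 / M_H2

print(f"pyrolysis:    {pyro_per_kg / 1e6:6.1f} MJ/kg H2")
print(f"electrolysis: {elec_per_kg / 1e6:6.1f} MJ/kg H2")
print(f"ratio: ~1/{elec_per_kg / pyro_per_kg:.1f}")
```

The result, roughly 19 versus 142 MJ per kilogram of hydrogen, is consistent with the approximately one-seventh figure quoted above.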
The Australian company Hazer Group was founded in 2010 to commercialise technology originally developed at the University of Western Australia (UWA). The company was listed on the ASX in December 2015. It is completing a commercial demonstration project to produce renewable hydrogen and graphite from wastewater, using iron ore as a process catalyst. The Commercial Demonstration Plant project is an Australian first, and is expected to produce around 100 tonnes of fuel-grade hydrogen and 380 tonnes of graphite each year starting in 2023. It was originally scheduled to commence in 2022: "10 December 2021: Hazer Group (ASX: HZR) regret to advise that there has been a delay to the completion of the fabrication of the reactor for the Hazer Commercial Demonstration Project (CDP). This is expected to delay the planned commissioning of the Hazer CDP, with commissioning now expected to occur after our current target date of 1Q 2022." The Hazer Group has collaboration agreements with Engie for a facility in France (May 2023), a memorandum of understanding with Chubu Electric and Chiyoda in Japan (April 2023), and an agreement with Suncor Energy and FortisBC to develop a 2,500-tonne-per-annum Burrard-Hazer hydrogen production plant in Canada (April 2022).
The American company C-Zero's technology converts natural gas into hydrogen and solid carbon. The hydrogen provides clean, low-cost energy on demand, while the carbon can be permanently sequestered. C-Zero announced in June 2022 that it closed a $34 million financing round led by SK Gas, a subsidiary of South Korea's second-largest conglomerate, the SK Group. SK Gas was joined by two other new investors, Engie New Ventures and Trafigura, one of the world's largest physical commodities trading companies, in addition to participation from existing investors including Breakthrough Energy Ventures, Eni Next, Mitsubishi Heavy Industries, and AP Ventures. Funding was for C-Zero's first pilot plant, which was expected to be online in Q1 2023. The plant may be capable of producing up to 400 kg of hydrogen per day from natural gas with no CO2 emissions.
One of the world's largest chemical companies, BASF, has been researching hydrogen pyrolysis for more than 10 years.
Ethylene
Pyrolysis is used to produce ethylene, the chemical compound produced on the largest scale industrially (>110 million tons/year in 2005). In this process, hydrocarbons from petroleum are heated to around in the presence of steam; this is called steam cracking. The resulting ethylene is used to make antifreeze (ethylene glycol), PVC (via vinyl chloride), and many other polymers, such as polyethylene and polystyrene.
Semiconductors
The process of metalorganic vapour-phase epitaxy (MOCVD) entails pyrolysis of volatile organometallic compounds to give semiconductors, hard coatings, and other applicable materials. The reactions entail thermal degradation of precursors, with deposition of the inorganic component and release of the hydrocarbons as gaseous waste. Since it is an atom-by-atom deposition, these atoms organize themselves into crystals to form the bulk semiconductor. Raw polycrystalline silicon is produced by the chemical vapor deposition of silane gas: SiH4 → Si + 2 H2.
Gallium arsenide, another semiconductor, forms upon co-pyrolysis of trimethylgallium and arsine.
Waste management
Pyrolysis can also be used to treat municipal solid waste and plastic waste. The main advantage is the reduction in volume of the waste. In principle, pyrolysis will regenerate the monomers (precursors) to the polymers that are treated, but in practice the process is neither a clean nor an economically competitive source of monomers.
In tire waste management, tire pyrolysis is a well-developed technology.
Other products from car tire pyrolysis include steel wires, carbon black and bitumen. The area faces legislative, economic, and marketing obstacles. Oil derived from tire rubber pyrolysis has a high sulfur content, which gives it high potential as a pollutant; consequently it should be desulfurized.
Alkaline pyrolysis of sewage sludge at a low temperature of 500 °C can enhance H2 production with in-situ carbon capture. The use of NaOH (sodium hydroxide) has the potential to produce H2-rich gas that can be used for fuel cells directly.
In early November 2021, the U.S. State of Georgia announced a joint effort with Igneo Technologies to build an $85 million large electronics recycling plant in the Port of Savannah. The project will focus on lower-value, plastics-heavy devices in the waste stream using multiple shredders and furnaces using pyrolysis technology.
One-stepwise pyrolysis and Two-stepwise pyrolysis for Tobacco Waste
Pyrolysis has also been used to try to mitigate tobacco waste. In one study, tobacco waste was separated into two categories: TLW (tobacco leaf waste) and TSW (tobacco stick waste). TLW was defined as waste from cigarettes and TSW as waste from electronic cigarettes. Both TLW and TSW were dried at 80 °C for 24 hours and stored in a desiccator. Samples were ground so that the contents were uniform. Tobacco waste (TW) also contains inorganic (metal) contents, which were determined using an inductively coupled plasma–optical emission spectrometer. Thermogravimetric analysis was used to thermally degrade four samples (TLW, TSW, glycerol, and guar gum), monitored under specific dynamic temperature conditions. About one gram each of TLW and TSW was used in the pyrolysis tests. During these tests, CO2 and N2 were used as atmospheres inside a tubular reactor built from quartz tubing. For both the CO2 and N2 atmospheres the flow rate was 100 mL min−1. External heating was provided by a tubular furnace. The pyrogenic products were classified into three phases. The first phase was biochar, a solid residue produced by the reactor at 650 °C. The second phase, liquid hydrocarbons, was collected by a cold solvent trap and sorted using chromatography. The third and final phase, gases, was analyzed using an online micro GC unit.
Two different types of experiments were conducted: one-stepwise pyrolysis and two-stepwise pyrolysis. One-stepwise pyrolysis consisted of a constant heating rate (10 °C min−1) from 30 to 720 °C. In the second step of the two-stepwise pyrolysis test the pyrolysates from the one-stepwise pyrolysis were pyrolyzed in the second heating zone which was controlled isothermally at 650 °C. The two-stepwise pyrolysis was used to focus primarily on how well CO2 affects carbon redistribution when adding heat through the second heating zone.
The thermolytic behaviors of TLW and TSW in the CO2 and N2 environments were noted first. For both TLW and TSW the thermolytic behaviors were identical at or below 660 °C in the two environments. Differences between the environments start to occur when temperatures increase above 660 °C, where the residual mass percentages decrease significantly in the CO2 environment compared to the N2 environment. This observation is likely due to the Boudouard reaction, in which spontaneous gasification happens when temperatures exceed 710 °C. That these observations were seen at temperatures lower than 710 °C is most likely due to the catalytic capabilities of inorganics in TLW. This was further investigated with ICP-OES measurements, which found that a fifth of the residual mass percentage was Ca species. CaCO3 is used in cigarette papers and filter material, leading to the explanation that degradation of CaCO3 releases CO2 that reacts with CaO in a dynamic equilibrium state; this explains the mass decay seen between 660 °C and 710 °C. Differences in differential thermogram (DTG) peaks for TLW were compared to TSW. TLW had four distinctive peaks at 87, 195, 265, and 306 °C, whereas TSW had two major drop-offs at 200 and 306 °C with one spike in between. The four peaks indicated that TLW contains more diverse types of additives than TSW. The residual mass percentages of TLW and TSW were further compared: the residual mass of TSW was less than that of TLW in both the CO2 and N2 environments, leading to the conclusion that TSW has higher quantities of additives than TLW.
The one-stepwise pyrolysis experiment showed different results for the CO2 and N2 environments. During this process the evolution of five notable gases was observed: hydrogen, methane, ethane, carbon dioxide, and ethylene are all produced when the thermolytic rate of TLW begins to be retarded at or above 500 °C. Thermolysis begins at the same temperature in both the CO2 and N2 environments, but the production of hydrogen, ethane, ethylene, and methane is more concentrated in the N2 environment than in the CO2 environment. The concentration of CO in the CO2 environment is significantly greater as temperatures increase past 600 °C, due to CO2 being liberated from CaCO3 in TLW. This significant increase in CO concentration is why lower concentrations of the other gases are produced in the CO2 environment, a dilution effect. Pyrolysis redistributes the carbon in carbon substrates into three pyrogenic products, and the CO2 environment is more effective because the reduction of CO2 into CO allows the oxidation of pyrolysates to form CO. In conclusion, the CO2 environment yields more gas relative to oil and biochar. When the same process is carried out for TSW the trends are almost identical, so the same explanations can be applied to the pyrolysis of TSW.
Harmful chemicals were reduced in the CO2 environment because CO formation reduced the tar yield. One-stepwise pyrolysis was not very effective at activating CO2 for carbon rearrangement, owing to the high quantities of liquid pyrolysates (tar). Two-stepwise pyrolysis in the CO2 environment allowed greater concentrations of gases thanks to the second heating zone, which was held isothermally at a consistent 650 °C. More reactions between CO2 and gaseous pyrolysates with longer residence time meant that CO2 could further convert pyrolysates into CO. The results showed that two-stepwise pyrolysis was an effective way to decrease tar content and increase gas yield by about 10 wt.% for both TLW (64.20 wt.%) and TSW (73.71 wt.%).
Thermal cleaning
Pyrolysis is also used for thermal cleaning, an industrial application to remove organic substances such as polymers, plastics and coatings from parts, products or production components like extruder screws, spinnerets and static mixers. During the thermal cleaning process, at elevated temperatures, organic material is converted by pyrolysis and oxidation into volatile organic compounds, hydrocarbons and carbonized gas. Inorganic elements remain.
Several types of thermal cleaning systems use pyrolysis:
Molten Salt Baths belong to the oldest thermal cleaning systems; cleaning with a molten salt bath is very fast but implies the risk of dangerous splatters, or other potential hazards connected with the use of salt baths, like explosions or highly toxic hydrogen cyanide gas.
Fluidized Bed Systems use sand or aluminium oxide as heating medium; these systems also clean very fast but the medium does not melt or boil, nor emit any vapors or odors; the cleaning process takes one to two hours.
Vacuum Ovens use pyrolysis in a vacuum avoiding uncontrolled combustion inside the cleaning chamber; the cleaning process takes 8 to 30 hours.
Burn-Off Ovens, also known as Heat-Cleaning Ovens, are gas-fired and used in the painting, coatings, electric motors and plastics industries for removing organics from heavy and large metal parts.
Fine chemical synthesis
Pyrolysis is used in the production of chemical compounds, mainly, but not only, in the research laboratory.
The area of boron-hydride clusters started with the study of the pyrolysis of diborane (B2H6) at ca. 200 °C. Products include the clusters pentaborane and decaborane. These pyrolyses involve not only cracking (to give H2), but also recondensation.
Ultrasonic spray pyrolysis (USP), a process employing an ultrasonic nozzle, is used for the synthesis of nanoparticles, zirconia and other oxides.
Other uses and occurrences
Pyrolysis is used to turn organic materials into carbon for the purpose of carbon-14 dating.
Pyrolysis liquids from slow pyrolysis of bark and hemp have been tested for their antifungal activity against wood-decaying fungi, showing potential to substitute for current wood preservatives, although further tests are still required. However, their ecotoxicity is very variable, and while some are less toxic than current wood preservatives, other pyrolysis liquids have shown high ecotoxicity, which may cause detrimental effects in the environment.
Pyrolysis of tobacco, paper, and additives, in cigarettes and other products, generates many volatile products (including nicotine, carbon monoxide, and tar) that are responsible for the aroma and negative health effects of smoking. Similar considerations apply to the smoking of marijuana and the burning of incense products and mosquito coils.
Pyrolysis occurs during the incineration of trash, potentially generating volatiles that are toxic or contribute to air pollution if not completely burned.
Laboratory or industrial equipment sometimes gets fouled by carbonaceous residues that result from coking, the pyrolysis of organic products that come into contact with hot surfaces.
PAHs generation
Polycyclic aromatic hydrocarbons (PAHs) can be generated from the pyrolysis of different solid waste fractions, such as hemicellulose, cellulose, lignin, pectin, starch, polyethylene (PE), polystyrene (PS), polyvinyl chloride (PVC), and polyethylene terephthalate (PET). PS, PVC, and lignin generate significant amounts of PAHs. Naphthalene is the most abundant PAH among all the polycyclic aromatic hydrocarbons.
When the temperature is increased from 500 to 900 °C, most PAHs increase. With increasing temperature, the percentage of light PAHs decreases and the percentage of heavy PAHs increases.
Study tools
Thermogravimetric analysis
Thermogravimetric analysis (TGA) is one of the most common techniques used to investigate pyrolysis free of heat- and mass-transfer limitations. The results can be used to determine mass loss kinetics. Activation energies can be calculated using the Kissinger method or the peak analysis–least squares method (PA-LSM).
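A minimal sketch of the Kissinger method: for DTG peak temperatures Tp measured at several heating rates β, ln(β/Tp²) plotted against 1/Tp gives a line of slope −Ea/R. The data below are invented purely to illustrate the fit:

```python
# Sketch: estimating activation energy with the Kissinger method,
# ln(beta / T_p^2) = const - E_a / (R * T_p). The peak temperatures
# below are made-up illustrative values, not measured data.
import numpy as np

R = 8.314                                     # J/(mol*K)
beta = np.array([5.0, 10.0, 20.0, 40.0])      # heating rates, K/min
T_p = np.array([600.0, 612.0, 625.0, 639.0])  # DTG peak temperatures, K

y = np.log(beta / T_p**2)
x = 1.0 / T_p
slope, intercept = np.polyfit(x, y, 1)        # linear Kissinger plot
E_a = -slope * R
print(f"E_a ~ {E_a / 1e3:.0f} kJ/mol")
```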
TGA can couple with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry. As the temperature increases, the volatiles generated from pyrolysis can be measured.
Macro-TGA
In TGA, the sample is loaded before the temperature is increased, and the heating rate is low (less than 100 °C min−1). Macro-TGA can use gram-scale samples to investigate the effects of pyrolysis with mass and heat transfer.
Pyrolysis–gas chromatography–mass spectrometry
Pyrolysis–gas chromatography–mass spectrometry (Py-GC-MS) is an important laboratory procedure to determine the structure of compounds.
Machine learning
In recent years, machine learning has attracted significant research interest in predicting yields, optimizing parameters, and monitoring pyrolytic processes.
See also
Dextrin
Gasification
Hydrogen
Hydrogen production
Karrick process
Pyrolytic coating
Thermal decomposition
Torrefaction
Wood gas
References
External links
In Situ Catalytic Fast Pyrolysis Technology Pathway National Renewable Energy Laboratory
Organic reactions
Chemical processes
Industrial processes
Oil shale technology
Biodegradable waste management
Waste treatment technology
Fire protection | Pyrolysis | [
"Chemistry",
"Engineering"
] | 6,996 | [
"Water treatment",
"Biodegradable waste management",
"Pyrolysis",
"Petroleum technology",
"Oil shale technology",
"Building engineering",
"Fire protection",
"Organic reactions",
"Biodegradation",
"Chemical processes",
"Synthetic fuel technologies",
"Environmental engineering",
"Chemical proc... |
262,259 | https://en.wikipedia.org/wiki/IEEE%20802.6 | IEEE 802.6 is a standard governed by ANSI for metropolitan area networks (MANs). It is an improvement on an older standard (also created by ANSI) which used the Fiber Distributed Data Interface (FDDI) network structure. The FDDI-based standard failed due to its expensive implementation and lack of compatibility with current LAN standards. The IEEE 802.6 standard uses the Distributed Queue Dual Bus (DQDB) network form. This form supports 150 Mbit/s transfer rates. It consists of two unconnected unidirectional buses. DQDB is rated for a maximum of 160 km before significant signal degradation over fiber-optic cable with an optical wavelength of 1310 nm.
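For illustration, a much-simplified sketch of the per-station counters behind DQDB's distributed queue, as the access protocol is commonly described (one bus direction only; slot formats, priorities, and bandwidth balancing are omitted):

```python
# Much-simplified sketch of DQDB's distributed queue for one bus
# direction: each station keeps a request counter (rq) and, when it
# has a frame queued, a countdown counter (cd).
class DQDBStation:
    def __init__(self):
        self.rq = 0       # empty slots owed to downstream stations
        self.cd = None    # countdown to our own slot; None = idle

    def queue_frame(self):
        """Join the distributed queue (a request bit is also sent
        upstream on the reverse bus, not modelled here)."""
        self.cd = self.rq  # downstream requests that precede ours
        self.rq = 0

    def on_request_bit(self):
        """Request bit from a downstream station on the reverse bus."""
        self.rq += 1

    def on_empty_slot(self):
        """Empty slot passing on the forward bus; return True if we
        may fill it with our own frame."""
        if self.cd is None:                    # nothing to send:
            self.rq = max(self.rq - 1, 0)      # slot serves downstream
            return False
        if self.cd == 0:                       # our turn has come
            self.cd = None
            return True
        self.cd -= 1                           # a prior request goes first
        return False
```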
This standard has also failed, mostly for the same reasons that the FDDI standard failed. MANs are traditionally designed using Synchronous Optical Network (SONET), Synchronous Digital Hierarchy (SDH) or Asynchronous Transfer Mode (ATM). Recent designs use native Ethernet or MPLS.
References
IEEE 802.06
Networking standards
Metropolitan area networks | IEEE 802.6 | [
"Technology",
"Engineering"
] | 221 | [
"Computer network stubs",
"Computer standards",
"Computer networks engineering",
"Networking standards",
"Computing stubs"
] |
262,861 | https://en.wikipedia.org/wiki/Zeroth%20law%20of%20thermodynamics | The zeroth law of thermodynamics is one of the four principal laws of thermodynamics. It provides an independent definition of temperature without reference to entropy, which is defined in the second law. The law was established by Ralph H. Fowler in the 1930s, long after the first, second, and third laws had been widely recognized.
The zeroth law states that if two thermodynamic systems are both in thermal equilibrium with a third system, then the two systems are in thermal equilibrium with each other.
Two systems are said to be in thermal equilibrium if they are linked by a wall permeable only to heat, and they do not change over time.
Another formulation by James Clerk Maxwell is "All heat is of the same kind". Another statement of the law is "All diathermal walls are equivalent".
The zeroth law is important for the mathematical formulation of thermodynamics. It makes the relation of thermal equilibrium between systems an equivalence relation, which can represent equality of some quantity associated with each system. A quantity that is the same for two systems, if they can be placed in thermal equilibrium with each other, is a scale of temperature. The zeroth law is needed for the definition of such scales, and justifies the use of practical thermometers.
Equivalence relation
A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems. In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not. This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these are not implied by the standard statement of the zeroth law.
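The partitioning can be made concrete with a small sketch: treating observed equilibrium pairs as edges and computing connected components with a union-find structure assigns each system exactly the kind of "tag" described above (the system names are arbitrary illustrations):

```python
# Sketch: the zeroth law makes "in thermal equilibrium with" an
# equivalence relation, so observed equilibrium pairs partition
# systems into classes that can share one temperature "tag".
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    parent[find(parent, a)] = find(parent, b)

systems = ["A", "B", "C", "D"]
parent = {s: s for s in systems}

# Observed: A~C and B~C. The zeroth law then demands A~B.
union(parent, "A", "C")
union(parent, "B", "C")

tags = {s: find(parent, s) for s in systems}
print(tags)   # A, B, C share one tag (a "temperature"); D stands alone
```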
If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows: if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. Thus, again implicitly assuming reflexivity, the zeroth law is therefore often expressed as a right-Euclidean statement: if a system is in thermal equilibrium with each of two other systems, then those two systems are in thermal equilibrium with each other.
One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: if A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus, the two systems are in thermal equilibrium with each other, or they are in mutual equilibrium. Another consequence of equivalence is that thermal equilibrium is a transitive relation: if A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C.
A reflexive, transitive relation does not guarantee an equivalence relationship. For the above statement to be true, both reflexivity and symmetry must be implicitly assumed.
It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid tagging system for the equivalence classes of a set of equilibrated thermodynamic systems, then the systems are in thermal equilibrium if a thermometer gives the same reading for each of them; if the systems are then thermally connected, no subsequent change in the state of either one can occur. If the readings are different, then thermally connecting the two systems causes a change in the states of both systems. The zeroth law provides no information regarding the final reading.
Foundation of temperature
Nowadays, there are two nearly separate concepts of temperature, the thermodynamic concept, and that of the kinetic theory of gases and other materials.
The zeroth law belongs to the thermodynamic concept, but this is no longer the primary international definition of temperature. The current primary international definition of temperature is in terms of the kinetic energy of freely moving microscopic particles such as molecules, related to temperature through the Boltzmann constant k_B. The present article is about the thermodynamic concept, not about the kinetic theory concept.
The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set (such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary, temperature is just such a labeling process which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yield any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature.
In the space of thermodynamic parameters, zones of constant temperature form a surface, which provides a natural ordering of nearby surfaces. One may therefore construct a global temperature function that provides a continuous ordering of states. The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters; thus, for an ideal gas described with three thermodynamic parameters P, V and N, it is a two-dimensional surface.
For example, if two systems of ideal gases are in joint thermodynamic equilibrium across an immovable diathermal wall, then P₁V₁/N₁ = P₂V₂/N₂, where Pᵢ is the pressure in the ith system, Vᵢ is the volume, and Nᵢ is the amount (in moles, or simply the number of atoms) of gas.
The surface PV/N = constant defines surfaces of equal thermodynamic temperature, and one may label them by defining T so that PV/N = RT, where R is some constant. These systems can now be used as a thermometer to calibrate other systems. Such systems are known as "ideal gas thermometers".
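As a small numerical sketch (the values and function name are invented for this example), the tagging of equilibrium classes by an ideal-gas thermometer can be made concrete:

```python
# Minimal sketch: empirical temperature from an ideal-gas thermometer,
# using T = P*V / (n*R). The input values below are illustrative assumptions.
R = 8.314  # molar gas constant, J/(mol*K)

def ideal_gas_temperature(pressure_pa, volume_m3, moles):
    """Return the empirical temperature (K) tagging the equilibrium class."""
    return pressure_pa * volume_m3 / (moles * R)

# Two systems with the same P*V/(n*R) carry the same "tag" and therefore
# belong to the same equivalence class of thermal equilibrium.
t1 = ideal_gas_temperature(101_325, 0.0224, 1.0)   # ~273 K
t2 = ideal_gas_temperature(202_650, 0.0112, 1.0)   # same tag, ~273 K
print(t1, t2)
```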
In a sense, focused on the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind". But in another sense, heat is transferred in different ranks, as expressed by Arnold Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat." This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence.
Dependence on the existence of walls permeable only to heat
In Constantin Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium".
It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes
It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities.
It is the opinion of Elliott H. Lieb and Jakob Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers. Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Max Planck. On the other hand, Planck (1926) clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes.
History
Writing long before the term "zeroth law" was coined, in 1871 Maxwell discussed at some length ideas which he summarized by the words "All heat is of the same kind". Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping. This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent". This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems.
According to Sommerfeld, Ralph H. Fowler coined the term zeroth law of thermodynamics while discussing the 1935 text by Meghnad Saha and B.N. Srivastava.
They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves are in temperature equilibrium with each other". Then they italicize a self-standing paragraph, as if to state their basic postulate:
They do not themselves here use the phrase "zeroth law of thermodynamics".
There are very many statements of these same physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label zeroth law of thermodynamics.
Fowler & Guggenheim (1936/1965) wrote of the zeroth law as follows:
They then proposed that
The first sentence of this present article is a version of this statement. It is not explicitly evident in the existence statement of Fowler and Edward A. Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems.
References
Further reading
0 | Zeroth law of thermodynamics | [
"Physics",
"Chemistry"
] | 2,529 | [
"Thermodynamics",
"Laws of thermodynamics"
] |
262,927 | https://en.wikipedia.org/wiki/Groundwater | Groundwater is the water present beneath Earth's surface in rock and soil pore spaces and in the fractures of rock formations. About 30 percent of all readily available fresh water in the world is groundwater. A unit of rock or an unconsolidated deposit is called an aquifer when it can yield a usable quantity of water. The depth at which soil pore spaces or fractures and voids in rock become completely saturated with water is called the water table. Groundwater is recharged from the surface; it may discharge from the surface naturally at springs and seeps, and can form oases or wetlands. Groundwater is also often withdrawn for agricultural, municipal, and industrial use by constructing and operating extraction wells. The study of the distribution and movement of groundwater is hydrogeology, also called groundwater hydrology.
Typically, groundwater is thought of as water flowing through shallow aquifers, but, in the technical sense, it can also contain soil moisture, permafrost (frozen soil), immobile water in very low permeability bedrock, and deep geothermal or oil formation water. Groundwater is hypothesized to provide lubrication that can possibly influence the movement of faults. It is likely that much of Earth's subsurface contains some water, which may be mixed with other fluids in some instances.
Groundwater is often cheaper, more convenient and less vulnerable to pollution than surface water. Therefore, it is commonly used for public drinking water supplies. For example, groundwater provides the largest source of usable water storage in the United States, and California annually withdraws the largest amount of groundwater of all the states. Underground reservoirs contain far more water than the capacity of all surface reservoirs and lakes in the US, including the Great Lakes. Many municipal water supplies are derived solely from groundwater. Over 2 billion people rely on it as their primary water source worldwide.
Human use of groundwater causes environmental problems. For example, polluted groundwater is less visible and more difficult to clean up than pollution in rivers and lakes. Groundwater pollution most often results from improper disposal of wastes on land. Major sources include industrial and household chemicals and garbage landfills, excessive fertilizers and pesticides used in agriculture, industrial waste lagoons, tailings and process wastewater from mines, industrial fracking, oil field brine pits, leaking underground oil storage tanks and pipelines, sewage sludge and septic systems. Additionally, groundwater is susceptible to saltwater intrusion in coastal areas and can cause land subsidence when extracted unsustainably, leading to sinking cities (like Bangkok) and loss in elevation (such as the multiple meters lost in the Central Valley of California). These issues are made more complicated by sea level rise and other effects of climate change, particularly those on the water cycle. Human groundwater pumping has even shifted Earth's rotational pole by about 31 inches (80 cm).
Definition
Groundwater is fresh water located in the subsurface pore space of soil and rocks. It is also water that is flowing within aquifers below the water table. Sometimes it is useful to make a distinction between groundwater that is closely associated with surface water, and deep groundwater in an aquifer (called "fossil water" if it infiltrated into the ground millennia ago).
Role in the water cycle
Groundwater can be thought of in the same terms as surface water: inputs, outputs and storage. The natural input to groundwater is seepage from surface water. The natural outputs from groundwater are springs and seepage to the oceans. Due to its slow rate of turnover, groundwater storage is generally much larger (in volume) compared to inputs than it is for surface water. This difference makes it easy for humans to use groundwater unsustainably for a long time without severe consequences. Nevertheless, over the long term the average rate of seepage above a groundwater source is the upper bound for average consumption of water from that source.
Groundwater is naturally replenished by surface water from precipitation, streams, and rivers when this recharge reaches the water table.
Groundwater can be a long-term 'reservoir' of the natural water cycle (with residence times from days to millennia), as opposed to short-term water reservoirs like the atmosphere and fresh surface water (which have residence times from minutes to years). Deep groundwater (which is quite distant from the surface recharge) can take a very long time to complete its natural cycle.
The Great Artesian Basin in central and eastern Australia is one of the largest confined aquifer systems in the world, extending for almost 2 million km2. By analysing the trace elements in water sourced from deep underground, hydrogeologists have been able to determine that water extracted from these aquifers can be more than 1 million years old.
By comparing the age of groundwater obtained from different parts of the Great Artesian Basin, hydrogeologists have found it increases in age across the basin. Where water recharges the aquifers along the Eastern Divide, ages are young. As groundwater flows westward across the continent, it increases in age, with the oldest groundwater occurring in the western parts. This means that in order to have travelled almost 1000 km from the source of recharge in 1 million years, the groundwater flowing through the Great Artesian Basin travels at an average rate of about 1 metre per year.
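As a sketch of the arithmetic behind that figure (using the rounded numbers quoted above):

```python
# Rough average groundwater velocity across the Great Artesian Basin,
# from the rounded figures in the text: ~1000 km in ~1 million years.
distance_m = 1000 * 1000      # ~1000 km travelled, in metres
travel_time_yr = 1_000_000    # ~1 million years of groundwater age
velocity_m_per_yr = distance_m / travel_time_yr
print(velocity_m_per_yr)      # -> 1.0 metre per year
```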
Groundwater recharge
Location in aquifers
Characteristics
Temperature
The high specific heat capacity of water and the insulating effect of soil and rock can mitigate the effects of climate and maintain groundwater at a relatively steady temperature. In some places where groundwater temperatures are maintained by this effect at about 10 °C (50 °F), groundwater can be used for controlling the temperature inside structures at the surface. For example, during hot weather relatively cool groundwater can be pumped through radiators in a home and then returned to the ground in another well. During cold seasons, because it is relatively warm, the water can be used in the same way as a source of heat for heat pumps that is much more efficient than using air.
Availability
Groundwater makes up about thirty percent of the world's fresh water supply, which is about 0.76% of the entire world's water, including oceans and permanent ice. About 99% of the world's liquid fresh water is groundwater. Global groundwater storage is roughly equal to the total amount of freshwater stored in the snow and ice pack, including the north and south poles. This makes it an important resource that can act as a natural storage buffer against shortages of surface water, such as during times of drought.
The volume of groundwater in an aquifer can be estimated by measuring water levels in local wells and by examining geologic records from well-drilling to determine the extent, depth and thickness of water-bearing sediments and rocks. Before an investment is made in production wells, test wells may be drilled to measure the depths at which water is encountered and collect samples of soils, rock and water for laboratory analyses. Pumping tests can be performed in test wells to determine flow characteristics of the aquifer.
The characteristics of aquifers vary with the geology and structure of the substrate and topography in which they occur. In general, the more productive aquifers occur in sedimentary geologic formations. By comparison, weathered and fractured crystalline rocks yield smaller quantities of groundwater in many environments. Unconsolidated to poorly cemented alluvial materials that have accumulated as valley-filling sediments in major river valleys and geologically subsiding structural basins are included among the most productive sources of groundwater.
Fluid flows can be altered in different lithological settings by brittle deformation of rocks in fault zones; the mechanisms by which this occurs are the subject of fault zone hydrogeology.
Uses by humans
Reliance on groundwater will only increase, mainly due to growing water demand by all sectors combined with increasing variation in rainfall patterns. Safe use of groundwater varies substantially by the elements present and use-cases, with significant differences between consumption for humans, livestock and different crops.
Quantities
Groundwater is the most accessed source of freshwater around the world, used for drinking water, irrigation, and manufacturing. Groundwater accounts for about half of the world's drinking water, 40% of its irrigation water, and a third of water for industrial purposes.
Another estimate stated that globally groundwater accounts for about one third of all water withdrawals, and surface water for the other two thirds. Groundwater provides drinking water to at least 50% of the global population. About 2.5 billion people depend solely on groundwater resources to satisfy their basic daily water needs.
A similar estimate was published in 2021 which stated that "groundwater is estimated to supply between a quarter and a third of the world's annual freshwater withdrawals to meet agricultural, industrial and domestic demands."
Global freshwater withdrawal was probably around 600 km3 per year in 1900 and increased to 3,880 km3 per year in 2017. The rate of increase was especially high (around 3% per year) during the period 1950–1980, partly due to a higher population growth rate, and partly to rapidly increasing groundwater development, particularly for irrigation. The rate of increase is (as per 2022) approximately 1% per year, in tune with the current population growth rate.
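As a rough consistency check on these endpoints (a sketch; the decadal rates in between cannot be recovered from two data points):

```python
import math

# Implied average exponential growth rate of global freshwater
# withdrawal between 1900 (~600 km^3/yr) and 2017 (~3880 km^3/yr).
w_1900, w_2017, years = 600.0, 3880.0, 2017 - 1900
avg_rate = math.exp(math.log(w_2017 / w_1900) / years) - 1
print(f"{avg_rate:.1%}")  # -> about 1.6% per year on average, consistent
# with faster (~3%) growth in 1950-1980 and slower (~1%) growth today.
```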
Global groundwater depletion has been calculated to be between 100 and 300 km3 per year. This depletion is mainly caused by "expansion of irrigated agriculture in drylands".
The Asia-Pacific region is the largest groundwater abstractor in the world, containing seven out of the ten countries that extract most groundwater (Bangladesh, China, India, Indonesia, Iran, Pakistan and Turkey). These countries alone account for roughly 60% of the world's total groundwater withdrawal.
Drinking water quality aspects
Groundwater may or may not be a safe water source. In fact, there is considerable uncertainty with groundwater in different hydrogeologic contexts: the widespread presence of contaminants such as arsenic, fluoride and salinity can reduce the suitability of groundwater as a drinking water source. Arsenic and fluoride have been considered as priority contaminants at a global level, although priority chemicals will vary by country.
There is a lot of heterogeneity in hydrogeologic properties. For this reason, the salinity of groundwater is often highly variable over space, which contributes to highly variable groundwater security risks even within a specific region. Salinity makes groundwater unpalatable and unusable, and is often worst in coastal areas, especially where saltwater intrusion results from excessive use; this is notable in Bangladesh, East and West India, and many island nations.
Due to climate change, groundwater is warming. The temperature of Viennese groundwater increased by 0.9 degrees Celsius between 2001 and 2010, and by 1.4 degrees between 2011 and 2020. In a joint research project, scientists at the Karlsruher Institut für Technologie and the University of Vienna have tried to quantify the loss of drinking water to be expected due to groundwater warming up to the end of the current century. Stressing that regional shallow groundwater warming patterns vary substantially due to spatial variability in climate change and water table depth, these researchers note that we currently lack knowledge about how groundwater responds to surface warming across spatial and temporal scales. Their study shows, however, that following a medium emissions pathway, by 2100 between 77 million and 188 million people are projected to live in areas where groundwater exceeds the highest threshold for drinking water temperatures (DWTs) set by any country.
Water supply for municipal and industrial uses
Municipal and industrial water supplies are provided through large wells. Multiple wells for one water supply source are termed "wellfields", which may withdraw water from confined or unconfined aquifers. Using groundwater from deep, confined aquifers provides more protection from surface water contamination. Some wells, termed "collector wells", are specifically designed to induce infiltration of surface (usually river) water.
Aquifers that provide sustainable fresh groundwater to urban areas and for agricultural irrigation are typically close to the ground surface (within a couple of hundred metres) and have some recharge by fresh water. This recharge is typically from rivers or meteoric water (precipitation) that percolates into the aquifer through overlying unsaturated materials. In cases where the groundwater has unacceptable levels of salinity or specific ions, desalination is a common treatment. However, the resulting brine requires safe disposal or reuse.
Irrigation
In general, irrigation of 20% of farming land (with various types of water sources) accounts for 40% of global food production. Irrigation techniques across the globe include canals redirecting surface water, groundwater pumping, and diverting water from dams. Aquifers are critically important in agriculture. Deep aquifers in arid areas have long been water sources for irrigation. A majority of extracted groundwater, 70%, is used for agricultural purposes. Significant investigation has gone into determining safe levels of specific salts present for different agricultural uses.
In India, 65% of the irrigation is from groundwater and about 90% of extracted groundwater is used for irrigation.
Occasionally, sedimentary or "fossil" aquifers are used to provide irrigation and drinking water to urban areas. In Libya, for example, Muammar Gaddafi's Great Manmade River project has pumped large amounts of groundwater from aquifers beneath the Sahara to populous areas near the coast. Though this has saved Libya money over the alternative, seawater desalination, the aquifers are likely to run dry in 60 to 100 years.
In developing countries
Challenges
First, flood mitigation schemes, intended to protect infrastructure built on floodplains, have had the unintended consequence of reducing aquifer recharge associated with natural flooding. Second, prolonged depletion of groundwater in extensive aquifers can result in land subsidence, with associated infrastructure damage, and, third, in saline intrusion. Fourth, draining acid sulphate soils, often found in low-lying coastal plains, can result in acidification and pollution of formerly freshwater and estuarine streams.
Overdraft
Groundwater is a highly useful and often abundant resource. Most land areas on Earth have some form of aquifer underlying them, sometimes at significant depths. In some cases, these aquifers are rapidly being depleted by the human population. Such over-use, over-abstraction or overdraft can cause major problems to human users and to the environment. The most evident problem (as far as human groundwater use is concerned) is a lowering of the water table beyond the reach of existing wells. As a consequence, wells must be drilled deeper to reach the groundwater; in some places (e.g., California, Texas, and India) the water table has dropped hundreds of feet because of extensive well pumping. The GRACE satellites have collected data that demonstrates 21 of Earth's 37 major aquifers are undergoing depletion. In the Punjab region of India, for example, groundwater levels have dropped 10 meters since 1979, and the rate of depletion is accelerating. A lowered water table may, in turn, cause other problems such as groundwater-related subsidence and saltwater intrusion.
Another cause for concern is that groundwater drawdown from over-allocated aquifers has the potential to cause severe damage to both terrestrial and aquatic ecosystems, in some cases very conspicuously but in others quite imperceptibly, because of the extended period over which the damage occurs. The importance of groundwater to ecosystems is often overlooked, even by freshwater biologists and ecologists. Groundwaters sustain rivers, wetlands, and lakes, as well as subterranean ecosystems within karst or alluvial aquifers.
Not all ecosystems need groundwater, of course. Some terrestrial ecosystems (for example, those of the open deserts and similar arid environments) exist on irregular rainfall and the moisture it delivers to the soil, supplemented by moisture in the air. While there are other terrestrial ecosystems in more hospitable environments where groundwater plays no central role, groundwater is in fact fundamental to many of the world's major ecosystems. Water flows between groundwaters and surface waters. Most rivers, lakes, and wetlands are fed by, and (at other places or times) feed groundwater, to varying degrees. Groundwater feeds soil moisture through percolation, and many terrestrial vegetation communities depend directly on either groundwater or the percolated soil moisture above the aquifer for at least part of each year. Hyporheic zones (the mixing zone of streamwater and groundwater) and riparian zones are examples of ecotones largely or totally dependent on groundwater.
A 2021 study found that of the ~39 million investigated groundwater wells, 6-20% are at high risk of running dry if local groundwater levels decline by a few meters or, as in many areas and possibly more than half of major aquifers, continue to decline.
Fresh-water aquifers, especially those with limited recharge by snow or rain, also known as meteoric water, can be over-exploited and depending on the local hydrogeology, may draw in non-potable water or saltwater intrusion from hydraulically connected aquifers or surface water bodies. This can be a serious problem, especially in coastal areas and other areas where aquifer pumping is excessive.
Subsidence
Subsidence occurs when too much water is pumped out from underground, deflating the pore space below the surface and thus causing the ground to collapse. The result can look like craters on plots of land. This occurs because, in its natural equilibrium state, the hydraulic pressure of groundwater in the pore spaces of the aquifer and the aquitard supports some of the weight of the overlying sediments. When groundwater is removed from aquifers by excessive pumping, pore pressures in the aquifer drop and compression of the aquifer may occur. This compression may be partially recoverable if pressures rebound, but much of it is not. When the aquifer gets compressed, it may cause land subsidence, a drop in the ground surface.
In unconsolidated aquifers, groundwater is produced from pore spaces between particles of gravel, sand, and silt. If the aquifer is confined by low-permeability layers, the reduced water pressure in the sand and gravel causes slow drainage of water from the adjoining confining layers. If these confining layers are composed of compressible silt or clay, the loss of water to the aquifer reduces the water pressure in the confining layer, causing it to compress from the weight of overlying geologic materials. In severe cases, this compression can be observed on the ground surface as subsidence. Unfortunately, much of the subsidence from groundwater extraction is permanent (elastic rebound is small). Thus, the subsidence is not only permanent, but the compressed aquifer has a permanently reduced capacity to hold water.
The city of New Orleans, Louisiana is actually below sea level today, and its subsidence is partly caused by removal of groundwater from the various aquifer/aquitard systems beneath it. In the first half of the 20th century, the San Joaquin Valley experienced significant subsidence, in some places up to 8.5 metres (28 feet), due to groundwater removal. Cities on river deltas, including Venice in Italy and Bangkok in Thailand, have experienced surface subsidence; Mexico City, built on a former lake bed, has experienced rates of subsidence of up to 40 cm (15 in) per year.
For coastal cities, subsidence can increase the risk of other environmental issues, such as sea level rise. For example, Bangkok is expected to have 5.138 million people exposed to coastal flooding by 2070 because of these combining factors.
Groundwater becoming saline due to evaporation
If the surface water source is also subject to substantial evaporation, a groundwater source may become saline. This situation can occur naturally under endorheic bodies of water, or artificially under irrigated farmland. In coastal areas, human use of a groundwater source may cause the direction of seepage to the ocean to reverse, which can also cause soil salinization.
As water moves through the landscape, it collects soluble salts, mainly sodium chloride. Where such water enters the atmosphere through evapotranspiration, these salts are left behind. In irrigation districts, poor drainage of soils and surface aquifers can result in water tables' coming to the surface in low-lying areas. Major land degradation problems of soil salinity and waterlogging result, combined with increasing levels of salt in surface waters. As a consequence, major damage has occurred to local economies and environments.
In surface-irrigated areas in semi-arid zones, aquifers that are replenished by the unavoidable irrigation water losses percolating into the underground, and that are reused for supplemental irrigation from wells, run the risk of salination.
Surface irrigation water normally contains salts in the order of 0.5 g/L or more, and the annual irrigation requirement is in the order of 10,000 m3/ha or more, so the annual import of salt is in the order of 5,000 kg/ha or more.
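The third figure follows directly from the first two; a minimal sketch of the arithmetic, using the representative values above:

```python
# Annual salt import to an irrigated field, from representative values.
salt_conc_g_per_l = 0.5        # salinity of surface irrigation water
irrigation_m3_per_ha = 10_000  # annual irrigation requirement

litres_per_m3 = 1000
salt_kg_per_ha = salt_conc_g_per_l * irrigation_m3_per_ha * litres_per_m3 / 1000
print(salt_kg_per_ha)          # -> 5000 kg of salt per hectare per year
```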
Under the influence of continuous evaporation, the salt concentration of the aquifer water may increase continually and eventually cause an environmental problem.
For salinity control in such a case, annually an amount of drainage water is to be discharged from the aquifer by means of a subsurface drainage system and disposed of through a safe outlet. The drainage system may be horizontal (i.e. using pipes, tile drains or ditches) or vertical (drainage by wells). To estimate the drainage requirement, the use of a groundwater model with an agro-hydro-salinity component may be instrumental, e.g. SahysMod.
Seawater intrusion
Aquifers near the coast have a lens of freshwater near the surface and denser seawater under freshwater. Seawater penetrates the aquifer diffusing in from the ocean and is denser than freshwater. For porous (i.e., sandy) aquifers near the coast, the thickness of freshwater atop saltwater is about 40 ft (12 m) for every 1 ft (0.3 m) of freshwater head above sea level. This relationship is called the Ghyben-Herzberg equation. If too much groundwater is pumped near the coast, salt-water may intrude into freshwater aquifers causing contamination of potable freshwater supplies. Many coastal aquifers, such as the Biscayne Aquifer near Miami and the New Jersey Coastal Plain aquifer, have problems with saltwater intrusion as a result of overpumping and sea level rise.
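The roughly 40:1 ratio in the Ghyben-Herzberg relation comes from the density contrast between fresh water and seawater; a minimal sketch, with typical assumed densities:

```python
# Ghyben-Herzberg relation: depth of the fresh/salt interface below
# sea level, z = rho_f / (rho_s - rho_f) * h, for freshwater head h.
rho_fresh = 1000.0   # kg/m^3 (assumed typical fresh water density)
rho_salt = 1025.0    # kg/m^3 (assumed typical seawater density)

def interface_depth(head_m):
    return rho_fresh / (rho_salt - rho_fresh) * head_m

print(interface_depth(1.0))  # -> 40.0 m of freshwater below sea level
                             #    per metre of head above it
```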
Seawater intrusion is the flow or presence of seawater into coastal aquifers; it is a case of saltwater intrusion. It is a natural phenomenon but can also be caused or worsened by anthropogenic factors, such as sea level rise due to climate change. In the case of homogeneous aquifers, seawater intrusion forms a saline wedge below a transition zone to fresh groundwater, flowing seaward on the top. These changes can have other effects on the land above the groundwater. For example, coastal groundwater in California would rise in many aquifers, increasing risks of flooding and runoff challenges.
Sea level rise causes the mixing of sea water into the coastal groundwater, rendering it unusable once it amounts to more than 2-3% of the reservoir. Along an estimated 15% of the US coastline, the majority of local groundwater levels are already below the sea level.
Pollution
Climate change
The impacts of climate change on groundwater may be greatest through its indirect effects on irrigation water demand via increased evapotranspiration. There is an observed decline in groundwater storage in many parts of the world. This is due to more groundwater being used for irrigation activities in agriculture, particularly in drylands. Some of this increase in irrigation can be due to water scarcity issues made worse by effects of climate change on the water cycle. Direct redistribution of water by human activities amounting to ~24,000 km3 per year is about double the global groundwater recharge each year.
Climate change causes changes to the water cycle which in turn affect groundwater in several ways: There can be a decline in groundwater storage, and reduction in groundwater recharge and water quality deterioration due to extreme weather events. In the tropics intense precipitation and flooding events appear to lead to more groundwater recharge.
However, the exact impacts of climate change on groundwater are still under investigation. This is because scientific data derived from groundwater monitoring is still missing, such as changes in space and time, abstraction data and "numerical representations of groundwater recharge processes".
Effects of climate change could have different impacts on groundwater storage: The expected more intense (but fewer) major rainfall events could lead to increased groundwater recharge in many environments. But more intense drought periods could result in soil drying-out and compaction which would reduce infiltration to groundwater.
For higher-altitude regions, the reduced duration and amount of snow may lead to reduced recharge of groundwater in spring. The impacts of receding alpine glaciers on groundwater systems are not well understood.
Global sea level rise due to climate change has induced seawater intrusion into coastal aquifers around the world, particularly in low-lying areas and small islands. However, groundwater abstraction is usually the main reason for seawater intrusion, rather than sea level rise (see the section on seawater intrusion). Seawater intrusion threatens coastal ecosystems and livelihood resilience. Bangladesh is a vulnerable country for this issue, and the mangrove forest of the Sundarbans is a vulnerable ecosystem.
Groundwater pollution may also increase indirectly due to climate change: More frequent and intense storms can pollute groundwater by mobilizing contaminants, for example fertilizers, wastewater or human excreta from pit latrines. Droughts reduce river dilution capacities and groundwater levels, increasing the risk of groundwater contamination.
Aquifer systems that are vulnerable to climate change include the following examples (the first four are largely independent of human withdrawals, unlike examples 5 to 8 where the intensity of human groundwater withdrawals plays a key role in amplifying vulnerability to climate change):
low-relief coastal and deltaic aquifer systems
aquifer systems in continental northern latitudes or alpine and polar regions
aquifers in rapidly expanding low-income cities and large displaced and informal communities
shallow alluvial aquifers underlying seasonal rivers in drylands
intensively pumped aquifer systems for groundwater-fed irrigation in drylands
intensively pumped aquifers for dryland cities
intensively pumped coastal aquifers
low-storage/low-recharge aquifer systems in drylands
Climate change adaptation
Using more groundwater, particularly in Sub-Saharan Africa, is seen as a method for climate change adaptation in the case that climate change causes more intense or frequent droughts.
Groundwater-based adaptations to climate change exploit distributed groundwater storage and the capacity of aquifer systems to store seasonal or episodic water surpluses. They incur substantially lower evaporative losses than conventional infrastructure, such as surface dams. For example, in tropical Africa, pumping water from groundwater storage can help to improve the climate resilience of water and food supplies.
Climate change mitigation
The development of geothermal energy, a sustainable energy source, plays an important role in reducing CO2 emissions and thus mitigating climate change. Groundwater is an agent in the storage, movement, and extraction of geothermal energy.
In pioneering nations, such as the Netherlands and Sweden, the ground/groundwater is increasingly seen as just one component (a seasonal source, sink or thermal 'buffer') in district heating and cooling networks.
Deep aquifers can also be used for carbon capture and sequestration, the process of storing carbon to curb accumulation of carbon dioxide in the atmosphere.
Groundwater governance
Groundwater governance processes enable groundwater management, planning and policy implementation. It takes place at multiple scales and geographic levels, including regional and transboundary scales.
Groundwater management is action-oriented, focusing on practical implementation activities and day-to-day operations. Because groundwater is often perceived as a private resource (that is, closely connected to land ownership, and in some jurisdictions treated as privately owned), regulation and top–down governance and management are difficult. Governments need to fully assume their role as resource custodians in view of the common-good aspects of groundwater.
Domestic laws and regulations regulate access to groundwater as well as human activities that impact the quality of groundwater. Legal frameworks also need to include protection of discharge and recharge zones and of the area surrounding water supply wells, as well as sustainable yield norms and abstraction controls, and conjunctive use regulations. In some jurisdictions, groundwater is regulated in conjunction with surface water, including rivers.
By country
Groundwater is an important water resource for the supply of drinking water, especially in arid countries.
The Arab region is one of the most water-scarce in the world and groundwater is the most relied-upon water source in at least 11 of the 22 Arab states. Over-extraction of groundwater in many parts of the region has led to groundwater table declines, especially in highly populated and agricultural areas.
See also
Hydrology
List of aquifers
List of aquifers in the United States
References
External links
USGS Office of Groundwater
IAH, International Association of Hydrogeologists
The Groundwater Project Online platform for groundwater knowledge
UGPRO 7-year research project on the "Potential of Groundwater for the Poor" (2013–2020)
Hydrology
Hydraulic engineering
Soil mechanics
Liquid water
Water and the environment
Water
Lithosphere
Subterranea (geography) | Groundwater | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,009 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Soil mechanics",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Aquifers",
"Environmental engineering",
"Water",
"Hydraulic engineering"
] |
1,866,778 | https://en.wikipedia.org/wiki/Nonlinear%20filter | In signal processing, a nonlinear filter is a filter whose output is not a linear function of its input. That is, if the filter outputs signals and for two input signals and separately, but does not always output when the input is a linear combination .
Both continuous-domain and discrete-domain filters may be nonlinear. A simple example of the former would be an electrical device whose output voltage R(t) at any moment is the square of the input voltage V(t), R(t) = V(t)²; or an output which is the input clipped to a fixed range [a, b], namely R(t) = max(a, min(b, V(t))). An important example of the latter is the running-median filter, such that every output sample R_i is the median of the last three input samples r_i, r_{i−1}, r_{i−2}. Like linear filters, nonlinear filters may be shift invariant or not.
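A minimal sketch of the running-median filter just described (plain Python; the fallback to shorter windows at the start of the signal is an edge-handling assumption):

```python
def median3(samples):
    """Running median over the current and two previous samples.

    Output sample i is the median of samples[i-2..i]; the first two
    outputs fall back to shorter windows (an edge-handling assumption).
    """
    out = []
    for i in range(len(samples)):
        window = sorted(samples[max(0, i - 2): i + 1])
        out.append(window[len(window) // 2])
    return out

# A single large spike is removed while the step edge is preserved:
print(median3([0, 0, 0, 99, 0, 5, 5, 5]))  # -> [0, 0, 0, 0, 0, 5, 5, 5]
```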
Non-linear filters have many applications, especially in the removal of certain types of noise that are not additive. For example, the median filter is widely used to remove spike noise — that affects only a small percentage of the samples, possibly by very large amounts. Indeed, all radio receivers use non-linear filters to convert kilo- to gigahertz signals to the audio frequency range; and all digital signal processing depends on non-linear filters (analog-to-digital converters) to transform analog signals to binary numbers.
However, nonlinear filters are considerably harder to use and design than linear ones, because the most powerful mathematical tools of signal analysis (such as the impulse response and the frequency response) cannot be used on them. Thus, for example, linear filters are often used to remove noise and distortion that was created by nonlinear processes, simply because the proper non-linear filter would be too hard to design and construct.
From the foregoing, we can see that nonlinear filters behave quite differently from linear filters. The most important characteristic is that, for nonlinear filters, the filter output or response does not obey the principles of scaling and superposition outlined below. Furthermore, a nonlinear filter can produce results that vary in a non-intuitive manner.
Linear system
Several principles define a linear system. The basic definition of linearity is that the output must be a linear function of the inputs; that is, writing H for the map from input signal to output signal,

H(α x₁ + β x₂) = α H(x₁) + β H(x₂)

for any scalar values α and β.
This is a fundamental property of linear system design, and is known as superposition. So, a system is said to be nonlinear if this equation is not valid. That is to say, when the system is linear, the superposition principle can be applied. This important fact is the reason that the techniques of linear-system analysis have been so well developed.
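As a hedged sketch (the function names and test signals here are invented for illustration), one can check superposition numerically: a three-sample moving average satisfies it, while the clipping filter mentioned earlier does not:

```python
def moving_average3(x):
    # Linear: average of the current and two previous samples (zero-padded).
    padded = [0, 0] + list(x)
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3 for i in range(len(x))]

def clip(x, lo=-1.0, hi=1.0):
    # Nonlinear: clamp each sample to the range [lo, hi].
    return [max(lo, min(hi, v)) for v in x]

x1, x2, a, b = [0.5, 2.0, -3.0], [1.0, -1.0, 0.5], 2.0, 1.0
combined = [a * u + b * v for u, v in zip(x1, x2)]

for f in (moving_average3, clip):
    lhs = f(combined)                                    # filter the combination
    rhs = [a * u + b * v for u, v in zip(f(x1), f(x2))]  # combine the filtered
    print(f.__name__, "linear?", all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs)))
# moving_average3 passes the test; clip fails it.
```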
Applications
Noise removal
Signals often get corrupted during transmission or processing; and a frequent goal in filter design is the restoration of the original signal, a process commonly called "noise removal". The simplest type of corruption is additive noise, when the desired signal S gets added with an unwanted signal N that has no known connection with S. If the noise N has a simple statistical description, such as Gaussian noise, then a Kalman filter will reduce N and restore S to the extent allowed by Shannon's theorem. In particular, if S and N do not overlap in the frequency domain, they can be completely separated by linear bandpass filters.
For almost any other form of noise, on the other hand, some sort of non-linear filter will be needed for maximum signal recovery. For multiplicative noise (that gets multiplied by the signal, instead of added to it), for example, it may suffice to convert the input to a logarithmic scale, apply a linear filter, and then convert the result to linear scale. In this example, the first and third steps are not linear.
Non-linear filters may also be useful when certain "nonlinear" features of the signal are more important than the overall information contents. In digital image processing, for example, one may wish to preserve the sharpness of silhouette edges of objects in photographs, or the connectivity of lines in scanned drawings. A linear noise-removal filter will usually blur those features; a non-linear filter may give more satisfactory results (even if the blurry image may be more "correct" in the information-theoretic sense).
Many nonlinear noise-removal filters operate in the time domain. They typically examine the input digital signal within a finite window surrounding each sample, and use some statistical inference model (implicitly or explicitly) to estimate the most likely value for the original signal at that point. The design of such filters is known as the filtering problem for a stochastic process in estimation theory and control theory.
Examples of nonlinear filters include:
phase-locked loops
detectors
mixers
median filters
ranklets
Nonlinear filters also occupy a decisive position in image processing functions. In a typical pipeline for real-time image processing, it is common to have many nonlinear filters included to form, shape, detect, and manipulate image information. Furthermore, each of these filter types can be parameterized to work one way under certain circumstances and another way under a different set of circumstances, using adaptive filter rule generation. The goals vary from noise removal to feature abstraction. Filtering image data is a standard process used in almost all image processing systems. Nonlinear filters are the most utilized forms of filter construction. For example, if an image contains a low amount of noise but with relatively high magnitude, then a median filter may be more appropriate.
Kushner–Stratonovich filtering
The context here is the formulation of the nonlinear filtering problem seen through the lens of the theory of stochastic processes. In this context, both the random signal and the noisy partial observations are described by continuous time stochastic processes. The unobserved random signal to be estimated is modeled through a non-linear Ito stochastic differential equation and the observation function is a continuous time non-linear transformation of the unobserved signal, an observation perturbed by continuous time observation noise.
Given the nonlinear nature of the dynamics, familiar frequency domain concepts that can be applied to linear filters are not viable, and a theory based on the state space representation is formulated. The complete information on the nonlinear filter at a given time is the probability law of the unobserved signal at that time conditional on the history of observations up to that time. This law may have a density, and the infinite dimensional equation for the density of this law takes the form of a stochastic partial differential equation (SPDE).
The problem of optimal nonlinear filtering in this context was solved in the late 1950s and early 1960s by Ruslan L. Stratonovich and Harold J. Kushner.
The optimal filter SPDE is called Kushner-Stratonovich equation.
In 1969, Moshe Zakai introduced a simplified dynamics for the unnormalized conditional law of the filter known as Zakai equation.
It has been proved by Mireille Chaleyat-Maurel and Dominique Michel that the solution is infinite dimensional in general, and as such requires finite dimensional approximations. These may be heuristics-based such as the extended Kalman filter or the assumed density filters described by Peter S. Maybeck or the projection filters introduced by Damiano Brigo, Bernard Hanzon and François Le Gland, some sub-families of which are shown to coincide with the assumed density filters. Particle filters are another option, related to sequential Monte Carlo methods.
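Since the exact filter is infinite-dimensional, particle filters approximate the conditional law with a cloud of weighted samples. The following is a minimal bootstrap particle filter sketch; the scalar model, noise levels and constants are illustrative assumptions, not part of the theory above:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(x):
    # Assumed signal dynamics: x_{k+1} = 0.9 x_k + Gaussian process noise.
    return 0.9 * x + rng.normal(0.0, 0.5, size=x.shape)

def h(x):
    # Assumed nonlinear observation function.
    return x ** 2 / 20.0

n_particles, n_steps, obs_std = 1000, 50, 0.1
particles = rng.normal(0.0, 1.0, size=n_particles)
x_true = 0.0

for _ in range(n_steps):
    # Simulate the true signal and a noisy observation of it.
    x_true = 0.9 * x_true + rng.normal(0.0, 0.5)
    y = h(x_true) + rng.normal(0.0, obs_std)

    # Prediction step: push particles through the signal dynamics.
    particles = propagate(particles)

    # Correction step: reweight by the observation likelihood, then resample.
    w = np.exp(-0.5 * ((y - h(particles)) / obs_std) ** 2) + 1e-300
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)

# The particle cloud approximates the conditional law of the signal.
print("true state:", x_true, " filtered mean:", particles.mean())
```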
Energy transfer filters
Energy transfer filters are a class of nonlinear dynamic filters that can be used to move energy in a designed manner. Energy can be moved to higher or lower frequency bands, spread over a designed range, or focused. Many energy transfer filter designs are possible, and these provide extra degrees of freedom in filter design that are just not possible using linear designs.
Types of Non-linear Filters
Min Filter
A min filter, also known as erosion in morphological image processing, is a spatial domain filter used for image processing. It replaces each pixel in the image with the minimum value of its neighboring pixels.
The size and shape of the neighborhood are defined by a structuring element, typically a square or circular mask.
The transformation replaces the central pixel with the darkest one in the running window.
For example, if you have text that is lightly printed, the minimum filter makes letters thicker.
Max Filter
A max filter, also known as dilation in morphological image processing, is another spatial domain filter used for image processing.
It replaces each pixel in the image with the maximum value of its neighboring pixels, again defined by a structuring element.
The maximum and minimum filters are shift-invariant. Whereas the minimum filter replaces the central pixel with the darkest one in the running window, the maximum filter replaces it with the lightest one.
For example, if you have a text string drawn with a thick pen, you can make the sign skinnier.
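A minimal sketch of both filters on a one-dimensional signal (the window size and sample values are assumptions for illustration; a 2-D image version would slide the structuring element over rows and columns in the same way):

```python
def min_filter(signal, size=3):
    """Erosion: replace each sample with the darkest value in its window."""
    half = size // 2
    return [min(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

def max_filter(signal, size=3):
    """Dilation: replace each sample with the lightest value in its window."""
    half = size // 2
    return [max(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

row = [9, 9, 1, 9, 9]               # a one-pixel-wide dark stroke
print(min_filter(row))              # -> [9, 1, 1, 1, 9]  (stroke thickened)
print(max_filter(min_filter(row)))  # dilation after erosion restores the stroke
```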
See also
Moving horizon estimation
Nonlinear system
Particle filter
Unscented Kalman filter section in Kalman filter
Nonlinear filtering problem
Projection filters
References
Further reading
External links
Prof. Ilya Shmulevich page on nonlinear signal processing
Filter theory | Nonlinear filter | [
"Engineering"
] | 1,811 | [
"Telecommunications engineering",
"Filter theory"
] |
1,866,872 | https://en.wikipedia.org/wiki/Canonical%20ring | In mathematics, the pluricanonical ring of an algebraic variety V (which is nonsingular), or of a complex manifold, is the graded ring
of sections of powers of the canonical bundle K. Its nth graded component (for ) is:
that is, the space of sections of the n-th tensor product Kn of the canonical bundle K.
The 0th graded component is sections of the trivial bundle, and is one-dimensional as V is projective. The projective variety defined by this graded ring is called the canonical model of V, and the dimension of the canonical model is called the Kodaira dimension of V.
One can define an analogous ring for any line bundle L over V; the analogous dimension is called the Iitaka dimension. A line bundle is called big if the Iitaka dimension equals the dimension of the variety.
Properties
Birational invariance
The canonical ring and therefore likewise the Kodaira dimension is a birational invariant: Any birational map between smooth compact complex manifolds induces an isomorphism between the respective canonical rings. As a consequence one can define the Kodaira dimension of a singular space as the Kodaira dimension of a desingularization. Due to the birational invariance this is well defined, i.e., independent of the choice of the desingularization.
Fundamental conjecture of birational geometry
A basic conjecture is that the pluricanonical ring is finitely generated. This is considered a major step in the Mori program.
Birkar, Cascini, Hacon & McKernan (2010) proved this conjecture.
The plurigenera
The dimension

\[ P_n = \dim H^0(V, K^{\otimes n}) \]

is the classically defined n-th plurigenus of V. The pluricanonical divisor nK, via the corresponding linear system of divisors, gives a map to projective space P^{P_n − 1}, called the n-canonical map.
The size of R is a basic invariant of V, and is called the Kodaira dimension.
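As a hedged worked example (a standard Riemann-Roch computation, not drawn from the text above), the plurigenera of a smooth projective curve of genus g ≥ 2 have a closed form:

```latex
% For a smooth projective curve V of genus g >= 2, deg K = 2g - 2 > 0.
% For n >= 2 one has deg(K - nK) = (1 - n)(2g - 2) < 0, so
% h^0(V, K - nK) = 0, and Riemann-Roch gives
\[
  P_n \;=\; h^0(V, K^{\otimes n})
       \;=\; n(2g - 2) - g + 1
       \;=\; (2n - 1)(g - 1), \qquad n \ge 2,
\]
% while P_1 = g. The plurigenera grow linearly in n, so the Kodaira
% dimension of such a curve is 1.
```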
Notes
References
Algebraic geometry
Birational geometry
Structures on manifolds | Canonical ring | [
"Mathematics"
] | 402 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
1,867,005 | https://en.wikipedia.org/wiki/Joule%20expansion | The Joule expansion (a subset of free expansion) is an irreversible process in thermodynamics in which a volume of gas is kept in one side of a thermally isolated container (via a small partition), with the other side of the container being evacuated. The partition between the two parts of the container is then opened, and the gas fills the whole container.
The Joule expansion, treated as a thought experiment involving ideal gases, is a useful exercise in classical thermodynamics. It provides a convenient example for calculating changes in thermodynamic quantities, including the resulting increase in entropy of the universe (entropy production) that results from this inherently irreversible process. An actual Joule expansion experiment necessarily involves real gases; the temperature change in such a process provides a measure of intermolecular forces.
This type of expansion is named after James Prescott Joule, who used this expansion in 1845 in his study of the mechanical equivalent of heat; but the expansion was known long before Joule, e.g. by John Leslie at the beginning of the 19th century, and it was studied by Joseph Louis Gay-Lussac in 1807, with similar results to those obtained by Joule.
The Joule expansion should not be confused with the Joule–Thomson expansion or throttling process which refers to the steady flow of a gas from a region of higher pressure to one of lower pressure via a valve or porous plug.
Description
The process begins with gas under some pressure P_i, at temperature T_i, confined to one half of a thermally isolated container (see the top part of the drawing at the beginning of this article). The gas occupies an initial volume V_i, mechanically separated from the other part of the container, which has a volume V_f − V_i, and is under near zero pressure. The tap (solid line) between the two halves of the container is then suddenly opened, and the gas expands to fill the entire container, which has a total volume of V_f (see the bottom part of the drawing). A thermometer inserted into the compartment on the left (not shown in the drawing) measures the temperature of the gas before and after the expansion.
The system in this experiment consists of both compartments; that is, the entire region occupied by the gas at the end of the experiment. Because this system is thermally isolated, it cannot exchange heat with its surroundings. Also, since the system's total volume is kept constant, the system cannot perform work on its surroundings. As a result, the change in internal energy, , is zero. Internal energy consists of internal kinetic energy (due to the motion of the molecules) and internal potential energy (due to intermolecular forces). When the molecular motion is random, temperature is the measure of the internal kinetic energy. In this case, the internal kinetic energy is called heat. If the chambers have not reached equilibrium, there will be some kinetic energy of flow, which is not detectable by a thermometer (and therefore is not a component of heat). Thus, a change in temperature indicates a change in kinetic energy, and some of this change will not appear as heat until and unless thermal equilibrium is reestablished. When heat is transferred into kinetic energy of flow, this causes a decrease in temperature.
In practice, the simple two-chamber free expansion experiment often incorporates a 'porous plug' through which the expanding air must flow to reach the lower pressure chamber. The purpose of this plug is to inhibit directional flow, thereby quickening the reestablishment of thermal equilibrium.
Since the total internal energy does not change, the stagnation of flow in the receiving chamber converts kinetic energy of flow back into random motion (heat) so that the temperature climbs to its predicted value.
If the initial air temperature is low enough that non-ideal gas properties cause condensation, some internal energy is converted into latent heat (an offsetting change in potential energy) in the liquid products. Thus, at low temperatures the Joule expansion process provides information on intermolecular forces.
Ideal gases
If the gas is ideal, both the initial (P_i, V_i, T_i) and final (P_f, V_f, T_f) conditions follow the ideal gas law, so that initially

P_i V_i = n R T_i

and then, after the tap is opened,

P_f V_f = n R T_f.

Here n is the number of moles of gas and R is the molar ideal gas constant. Because the internal energy does not change and the internal energy of an ideal gas is solely a function of temperature, the temperature of the gas does not change; therefore T_i = T_f. This implies that

P_i V_i = P_f V_f.
Therefore if the volume doubles, the pressure halves.
The fact that the temperature does not change makes it easy to compute the change in entropy of the universe for this process.
Real gases
Unlike ideal gases, the temperature of a real gas will change during a Joule expansion. At temperatures below their inversion temperature gases will cool during Joule expansion, while at higher temperatures they will heat up. The inversion temperature of a gas is typically much higher than room temperature; exceptions are helium, with an inversion temperature of about 40 K, and hydrogen, with an inversion temperature of about 200 K. Since the internal energy of the gas during Joule expansion is constant, cooling must be due to the conversion of internal kinetic energy to internal potential energy, with the opposite being the case for warming.
Intermolecular forces are repulsive at short range and attractive at long range (for example, see the Lennard-Jones potential). Since distances between gas molecules are large compared to molecular diameters, the energy of a gas is usually influenced mainly by the attractive part of the potential. As a result, expanding a gas usually increases the potential energy associated with intermolecular forces. Some textbooks say that for gases this must always be the case and that a Joule expansion must always produce cooling. When molecules are close together, however, repulsive interactions are much more important and it is thus possible to get an increase in temperature during a Joule expansion.
It is theoretically predicted that, at sufficiently high temperature, all gases will warm during a Joule expansion. The reason is that at any moment, a very small number of molecules will be undergoing collisions; for those few molecules, repulsive forces will dominate and the potential energy will be positive. As the temperature rises, both the frequency of collisions and the energy involved in the collisions increase, so the positive potential energy associated with collisions increases strongly. If the temperature is high enough, that can make the total potential energy positive, in spite of the much larger number of molecules experiencing weak attractive interactions. When the potential energy is positive, a constant energy expansion reduces potential energy and increases kinetic energy, resulting in an increase in temperature. This behavior has only been observed for hydrogen and helium, which have very weak attractive interactions. For other gases this "Joule inversion temperature" appears to be extremely high.
Entropy production
Entropy is a function of state, and therefore the entropy change can be computed directly from the knowledge of the final and initial equilibrium states. For an ideal gas, the change in entropy is the same as for isothermal expansion where all heat is converted to work:
ΔS = n R ln(V_f/V_i) = n R ln 2
For an ideal monatomic gas, the entropy as a function of the internal energy U, volume V, and number of moles n is given by the Sackur–Tetrode equation:
S = n R [ ln( (V/(n N_A)) (4 π m U / (3 n N_A h^2))^(3/2) ) + 5/2 ]
In this expression m is the particle mass, h is the Planck constant, and N_A is the Avogadro constant. For a monatomic ideal gas U = (3/2) n R T = n C_V T, with C_V = (3/2) R the molar heat capacity at constant volume.
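A short sketch confirming that, at constant U, doubling V in the Sackur–Tetrode expression reproduces n R ln 2 (the particle mass shown is for argon and is purely illustrative; it cancels in the difference):

import math

R, N_A, h = 8.314, 6.022e23, 6.626e-34
m = 39.95 * 1.66054e-27          # mass of one argon atom, kg (illustrative)
n = 1.0                          # moles
U = 1.5 * n * R * 293.0          # internal energy at 293 K

def sackur_tetrode(U, V, n):
    N = n * N_A                  # number of particles
    return n * R * (math.log((V / N) * (4 * math.pi * m * U / (3 * N * h**2))**1.5) + 2.5)

dS = sackur_tetrode(U, 2.0, n) - sackur_tetrode(U, 1.0, n)
print(dS, n * R * math.log(2))   # both about 5.76 J/K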
A second way to evaluate the entropy change is to choose a route from the initial state to the final state where all the intermediate states are in equilibrium. Such a route can only be realized in the limit where the changes happen infinitely slowly. Such routes are also referred to as quasistatic routes. In some books one demands that a quasistatic route has to be reversible; here we do not add this extra condition. The net entropy change from the initial state to the final state is independent of the particular choice of the quasistatic route, as the entropy is a function of state.
Here is how we can effect the quasistatic route. Instead of letting the gas undergo a free expansion in which the volume is doubled, a free expansion is allowed in which the volume expands by a very small amount ΔV. After thermal equilibrium is reached, we then let the gas undergo another free expansion by ΔV and wait until thermal equilibrium is reached. We repeat this until the volume has been doubled. In the limit of ΔV going to zero, this becomes an ideal quasistatic process, albeit an irreversible one. Now, according to the fundamental thermodynamic relation, we have:
dU = T dS − P dV
As this equation relates changes in thermodynamic state variables, it is valid for any quasistatic change, regardless of whether it is irreversible or reversible. For the above defined path we have that dU = 0 and thus T dS = P dV, and hence the increase in entropy for the Joule expansion is
ΔS = ∫ (P/T) dV = ∫ (n R/V) dV = n R ln(V_f/V_i) = n R ln 2
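The stepwise construction can be checked numerically; as a sketch, summing n R ΔV/V over many small free expansions converges to n R ln 2 as ΔV shrinks:

import math

R, n = 8.314, 1.0
steps = 100000                       # many small expansions
V, V_final = 1.0, 2.0
dV = (V_final - V) / steps

S = 0.0
while V < V_final - 1e-12:
    S += n * R * dV / V              # dS = P dV / T = n R dV / V for an ideal gas
    V += dV

print(S, n * R * math.log(2))        # both about 5.76 J/K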
A third way to compute the entropy change involves a route consisting of reversible adiabatic expansion followed by heating. We first let the system undergo a reversible adiabatic expansion in which the volume is doubled. During the expansion, the system performs work and the gas temperature goes down, so we have to supply heat to the system equal to the work performed to bring it to the same final state as in case of Joule expansion.
During the reversible adiabatic expansion, we have dS = 0. From the classical expression for the entropy it can be derived that the temperature after the doubling of the volume at constant entropy is given as:
T = T_i / 2^(2/3)
for the monatomic ideal gas. Heating the gas up to the initial temperature T_i increases the entropy by the amount
ΔS = n C_V ∫ from T to T_i of dT′/T′ = (3/2) n R ln(2^(2/3)) = n R ln 2
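A sketch of this third route in numbers (monatomic gas; the starting temperature is illustrative):

import math

R, n = 8.314, 1.0
T_i = 293.0

# Reversible adiabatic doubling of the volume: T V^(2/3) is constant
T_after = T_i / 2**(2.0 / 3.0)       # about 184.6 K

# Heating back to T_i at constant volume: dS = (3/2) n R ln(T_i / T_after)
dS = 1.5 * n * R * math.log(T_i / T_after)
print(T_after, dS, n * R * math.log(2))   # dS equals n R ln 2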
We might ask what the work would be if, once the Joule expansion has occurred, the gas is put back into the left-hand side by compressing it. The best method (i.e. the method involving the least work) is that of a reversible isothermal compression, which would take work W given by
W = −∫ from V_f to V_i of P dV = n R T ln(V_f/V_i) = n R T ln 2
During the Joule expansion the surroundings do not change, i.e. the entropy of the surroundings is constant. Therefore the entropy change of the so-called "universe" is equal to the entropy change of the gas, which is n R ln 2.
Real-gas effect
Joule performed his experiment with air at room temperature which was expanded from a pressure of about 22 bar. Under these conditions, air behaves only approximately as an ideal gas. As a result, the real temperature change will not be exactly zero. Rather, one can calculate that the temperature of the air should drop by about 3 degrees Celsius when the volume is doubled under adiabatic conditions. However, due to the low heat capacity of the air and the high heat capacity of the strong copper containers and the water of the calorimeter, the observed temperature drop is much smaller, so Joule found that the temperature change was zero within his measuring accuracy.
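The size of this drop can be estimated with a van der Waals model. From constant internal energy, ΔT = (a/c_V)(1/V_f − 1/V_i) per mole; a sketch, with parameter values for air that are assumed here for illustration:

R = 8.314
a = 0.1370               # van der Waals 'a' for air, Pa m^6/mol^2 (assumed value)
c_V = 2.5 * R            # molar heat capacity of air (diatomic), J/(mol K)
T_i, P_i = 293.0, 22e5   # room temperature, about 22 bar

V_i = R * T_i / P_i      # molar volume, ideal-gas estimate
V_f = 2.0 * V_i          # volume doubles

dT = (a / c_V) * (1.0 / V_f - 1.0 / V_i)
print(dT)                # about -3 K, matching the estimate in the text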
See also
Szilard's engine, a reverse thought experiment
References
The majority of good undergraduate textbooks deal with this expansion in great depth; see e.g. Concepts in Thermal Physics, Blundell & Blundell, OUP
Thermodynamics
Thought experiments in physics | Joule expansion | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,258 | [
"Thermodynamics",
"Dynamical systems"
] |
1,867,116 | https://en.wikipedia.org/wiki/Quantum%20sort | A quantum sort is any sorting algorithm that runs on a quantum computer. Any comparison-based quantum sorting algorithm would take at least Ω(n log n) steps, which is already achievable by classical algorithms. Thus, for this task, quantum computers are no better than classical ones, and should be disregarded when it comes to time complexity. However, in space-bounded sorts, quantum algorithms outperform their classical counterparts.
References
Sorting algorithms
Quantum algorithms | Quantum sort | [
"Physics",
"Mathematics"
] | 89 | [
"Order theory",
"Quantum mechanics",
"Quantum physics stubs",
"Sorting algorithms"
] |
1,867,619 | https://en.wikipedia.org/wiki/Organotin%20chemistry | Organotin chemistry is the scientific study of the synthesis and properties of organotin compounds or stannanes, which are organometallic compounds containing tin–carbon bonds. The first organotin compound was diethyltin diiodide ((C2H5)2SnI2), discovered by Edward Frankland in 1849. The area grew rapidly in the 1900s, especially after the discovery of the Grignard reagents, which are useful for producing Sn–C bonds. The area remains rich with many applications in industry and continuing activity in the research laboratory.
Structure
Organotin compounds are generally classified according to their oxidation states. Tin(IV) compounds are much more common and more useful.
Organic derivatives of tin(IV)
The tetraorgano derivatives are invariably tetrahedral. Compounds of the type SnRR'R''R''' have been resolved into individual enantiomers.
Organotin halides
Organotin chlorides have the formula R(4−n)SnCl(n) for values of n up to 3. Bromides, iodides, and fluorides are also known, but are less important. These compounds are known for many R groups. They are always tetrahedral. The tri- and dihalides form adducts with good Lewis bases such as pyridine. The fluorides tend to associate such that dimethyltin difluoride forms sheet-like polymers. Di- and especially tri-organotin halides, e.g. tributyltin chloride, exhibit toxicities approaching that of hydrogen cyanide.
Organotin hydrides
Organotin hydrides have the formula R(4−n)SnH(n) for values of n up to 3. The parent member of this series, stannane (SnH4), is an unstable colourless gas. Stability is correlated with the number of organic substituents. Tributyltin hydride is used as a source of hydride radical in some organic reactions.
Organotin oxides and hydroxides
Organotin oxides and hydroxides are common products from the hydrolysis of organotin halides. Unlike the corresponding derivatives of silicon and germanium, tin oxides and hydroxides often adopt structures with penta- and even hexacoordinated tin centres, especially for the diorgano- and monoorgano derivatives. The Sn-O-Sn group is called a stannoxane (which is a tin analogue of ethers), and the Sn-OH group is also called a stannanol (which is a tin analogue of alcohols). Structurally simplest of the oxides and hydroxides are the triorganotin derivatives. A commercially important triorganotin hydroxide is the acaricide cyhexatin (also called Plictran, tricyclohexyltin hydroxide and tricyclohexylstannanol), (C6H11)3SnOH. Such triorganotin hydroxides exist in equilibrium with the distannoxanes:
2 R3SnOH <=> (R3Sn)2O + H2O
With only two organic substituents on each Sn centre, the diorganotin oxides and hydroxides are structurally more complex than the triorgano derivatives. The simple tin geminal diols (R2Sn(OH)2, the tin analogues of geminal diols R2C(OH)2) and monomeric stannanones (R2Sn=O, the tin analogues of ketones R2C=O) are unknown. Diorganotin oxides (R2SnO) are polymers except when the organic substituents are very bulky, in which case cyclic trimers or dimers form, with Sn3O3 and Sn2O2 rings. The distannoxanes exist as dimers with the formula [R2SnX]2O, wherein the X groups (e.g., chloride −Cl, hydroxide −OH, carboxylate −O2CR′) can be terminal or bridging. The hydrolysis of the monoorganotin trihalides has the potential to generate stannanoic acids, RSnO2H. As for the diorganotin oxides/hydroxides, the monoorganotin species form structurally complex materials because of the occurrence of dehydration/hydration and aggregation. Illustrative is the hydrolysis of butyltin trichloride, which gives a polynuclear butyltin oxo/hydroxide cluster.
Hypercoordinated stannanes
Unlike carbon(IV) analogues but somewhat like silicon compounds, tin(IV) can also be coordinated to five and even six atoms instead of the regular four. These hypercoordinated compounds usually have electronegative substituents. Numerous examples of hypercoordination are provided by the organotin oxides and associated carboxylates and related pseudohalide derivatives. The organotin halides form adducts with Lewis bases such as pyridine.
All-organic penta- and hexaorganostannates(IV) have even been characterized, and a six-coordinated tetraorganotin compound was reported soon after. A crystal structure of a room-temperature stable (under argon) all-carbon pentaorganostannate(IV) was reported as the lithium salt with this structure:
In this distorted trigonal bipyramidal structure the carbon to tin bond lengths (2.26 Å apical, 2.17 Å equatorial) are longer than regular C-Sn bonds (2.14 Å) reflecting its hypercoordinated nature.
Triorganotin cations
Some reactions of triorganotin halides implicate a role for R3Sn+ intermediates. Such cations are analogous to carbocations. They have been characterized crystallographically when the organic substituents are large, such as 2,4,6-triisopropylphenyl.
Tin radicals (organic derivatives of tin(III))
Tin radicals, with the formula R3Sn•, are called stannyl radicals. They are a type of tetrel radical, and are invoked as intermediates in certain atom-transfer reactions. For example, tributyltin hydride (tris(n-butyl)stannane) serves as a useful source of "hydrogen atoms" because of the stability of the tributyltin radical.
Organic derivatives of tin(II)
Organotin(II) compounds are somewhat rare. Compounds with the empirical formula SnR2 are somewhat fragile and exist as rings or polymers when R is not bulky. The polymers, called polystannanes, have the formula (SnR2)n.
In principle, compounds of tin(II) might be expected to form tin analogues of alkenes with a formal double bond between two tin atoms (R2Sn=SnR2) or between a tin atom and another carbon group atom (e.g. R2Sn=CR2). Indeed, compounds with the formula R2Sn=SnR2, called distannenes or distannylenes, which are tin analogues of ethylenes (R2C=CR2), are known for certain organic substituents. The Sn centres in stannenes are trigonal. But, contrary to the C centres in alkenes which are trigonal planar, the Sn centres in stannenes tend to be highly pyramidal. Monomeric compounds with the formula SnR2, tin analogues of carbenes (CR2), are also known in a few cases. One example is a stannylene in which R is a very bulky substituent. Such species reversibly dimerize to the distannylene upon crystallization:
2 SnR2 <=> R2Sn=SnR2
Stannenes, compounds with tin-carbon double bonds, are exemplified by derivatives of stannabenzene. Stannoles, structural analogs of cyclopentadiene, exhibit little C-Sn double bond character.
Organic derivatives of tin(I)
Compounds of Sn(I) are rare and only observed with very bulky ligands. One prominent family of cages is accessed by pyrolysis of the 2,6-diethylphenyl-substituted tristannylene [Sn(C6H3-2,6-Et2)2]3, which affords the cubane-type cluster and a prismane. These cages contain Sn(I) and have the formula [Sn(C6H3-2,6-Et2)]n where n = 8, 10 and Et stands for the ethyl group. A stannyne contains a triple bond between a tin atom and a carbon group atom (e.g. RSn≡CR′), and a distannyne a triple bond between two tin atoms (RSn≡SnR). Distannynes only exist for extremely bulky substituents. Unlike alkynes, the cores of these distannynes are nonlinear, although they are planar. The Sn-Sn distance is 3.066(1) Å, and the Sn-Sn-C angles are 99.25(14)°. Such compounds are prepared by reduction of bulky aryltin(II) halides.
Preparation
Organotin compounds can be synthesised by numerous methods. Classic is the reaction of a Grignard reagent with tin halides, for example tin tetrachloride. An example is provided by the synthesis of tetraethyltin:
4 C2H5MgBr + SnCl4 -> (C2H5)4Sn + 4 MgBrCl
The symmetrical tetraorganotin compounds, especially tetraalkyl derivatives, can then be converted to various mixed chlorides by redistribution reactions (also known as the "Kocheshkov comproportionation" in the case of organotin compounds):
3 R4Sn + SnCl4 -> 4 R3SnCl
R4Sn + SnCl4 -> 2 R2SnCl2
R4Sn + 3 SnCl4 -> 4 RSnCl3
A related method involves redistribution of tin halides with organoaluminium compounds.
In principle, alkyltin halides can be formed from direct insertion of the metal into the carbon-halogen bond. However, such reactions are temperamental, typically requiring a very weak carbon-halogen bond (e.g. an alkyl iodide or an allyl halide) or crown-complexed alkali metal salt catalyst. Lewis acids or an ionic solvent may also promote the reaction.
The mixed organo-halo tin compounds can be converted to the mixed organic derivatives, as illustrated by the synthesis of dibutyldivinyltin:
(C4H9)2SnCl2 + 2 CH2=CHMgBr -> (C4H9)2Sn(CH=CH2)2 + 2 MgBrCl
The organotin hydrides are generated by reduction of the mixed alkyl chlorides. For example, treatment of dibutyltin dichloride with lithium aluminium hydride gives the dibutyltin dihydride, a colourless distillable oil:
2 (C4H9)2SnCl2 + LiAlH4 -> 2 (C4H9)2SnH2 + LiAlCl4
The Wurtz-like coupling of alkyl sodium compounds with tin halides yields tetraorganotin compounds.
Hydrostannylation involves the metal-catalyzed addition of tin hydrides across unsaturated substrates.
Alternatively, stannides attack organic electrophiles to give organostannanes, e.g.:
4 LiSnMe3 + CCl4 -> C(SnMe3)4 + 4 LiCl
Reactions
Important reactions, discussed above, usually involve reactions of organotin halides and pseudohalides with nucleophiles. All-alkyl organotin compounds generally do not hydrolyze except in concentrated acid, the major exception being tin acetylides. An organostannane addition is the nucleophilic addition of an allyl-, allenyl-, or propargylstannane to aldehydes and imines, whereas hydrostannylation conveniently reduces only unpolarized multiple bonds.
Organotin hydrides are unstable to strong base, disproportionating to hydrogen gas and distannanes. The latter equilibrate with the corresponding radicals only in the continued presence of base, or if strongly sterically hindered. Conversely, mineral acids cleave distannanes to the organotin halide and more hydrogen gas.
In "pure" organic synthesis, the Stille reaction is considered is a key coupling technique. In the Stille reaction, sp2-hybridized organic halides (e.g. vinyl chloride ) catalyzed by palladium:
Organotin compounds are also used extensively in radical chemistry (e.g. radical cyclizations, Barton–McCombie deoxygenation, Barton decarboxylation, etc.).
Applications
Organotin compounds are commercially applied as stabilizers in polyvinyl chloride. In this capacity, they suppress degradation by removing allylic chloride groups and by absorbing hydrogen chloride. This application consumes about 20,000 tons of tin each year. The main class of organotin stabilizers are the diorganotin dithiolates with the formula R2Sn(SR')2. The Sn-S bond is the reactive component. Diorganotin carboxylates, e.g. dibutyltin dilaurate, are used as catalysts for the formation of polyurethanes, for vulcanization of silicones, and for transesterification.
n-Butyltin trichloride is used in the production of tin dioxide layers on glass bottles by chemical vapor deposition.
Biological applications
"Tributyltins" are used as industrial biocides, e.g. as antifungal agents in textiles and paper, wood pulp and paper mill systems, breweries, and industrial cooling systems. Triphenyltin derivatives are used as active components of antifungal paints and agricultural fungicides. Other triorganotins are used as miticides and acaricides. Tributyltin oxide has been extensively used as a wood preservative.
Tributyltin compounds were once widely used as marine anti-biofouling agents to improve the efficiency of ocean-going ships. Concerns over toxicity of these compounds (some reports describe biological effects to marine life at a concentration of 1 nanogram per liter) led to a worldwide ban by the International Maritime Organization. As anti-fouling compounds, organotin compounds have been replaced by dichlorooctylisothiazolinone.
Toxicity
The toxicities of tributyltin and triphenyltin derivative compounds are comparable to that of hydrogen cyanide. Furthermore, tri-n-alkyltins are phytotoxic and therefore cannot be used in agriculture. Depending on the organic groups, they can be powerful bactericides and fungicides. Reflecting their high bioactivity, "tributyltins" were once used in marine anti-fouling paint.
In contrast to the triorganotin compounds, monoorgano-, diorgano- and tetraorganotin compounds are far less dangerous, although dibutyltin (DBT) may be immunotoxic.
See also
Organostannane addition
Tributyltin azide
Carbastannatranes
References
External links
National Pollutant Inventory Fact Sheet for organotins
Industry information site
Organotin chemistry in synthesis
Endocrine disruptors | Organotin chemistry | [
"Chemistry"
] | 3,002 | [
"Endocrine disruptors"
] |
1,868,229 | https://en.wikipedia.org/wiki/Thomas%20precession | In physics, the Thomas precession, named after Llewellyn Thomas, is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope and relates the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion.
For a given inertial frame, if a second frame is Lorentz-boosted relative to it, and a third boosted relative to the second, but non-collinear with the first boost, then the Lorentz transformation between the first and third frames involves a combined boost and rotation, known as the "Wigner rotation" or "Thomas rotation". For accelerated motion, the accelerated frame has an inertial frame at every instant. Two boosts a small time interval (as measured in the lab frame) apart leads to a Wigner rotation after the second boost. In the limit the time interval tends to zero, the accelerated frame will rotate at every instant, so the accelerated frame rotates with an angular velocity.
The precession can be understood geometrically as a consequence of the fact that the space of velocities in relativity is hyperbolic, and so parallel transport of a vector (the gyroscope's angular velocity) around a circle (its linear velocity) leaves it pointing in a different direction, or understood algebraically as being a result of the non-commutativity of Lorentz transformations. Thomas precession gives a correction to the spin–orbit interaction in quantum mechanics, which takes into account the relativistic time dilation between the electron and the nucleus of an atom.
Thomas precession is a kinematic effect in the flat spacetime of special relativity. In the curved spacetime of general relativity, Thomas precession combines with a geometric effect to produce de Sitter precession. Although Thomas precession (net rotation after a trajectory that returns to its initial velocity) is a purely kinematic effect, it only occurs in curvilinear motion and therefore cannot be observed independently of some external force causing the curvilinear motion such as that caused by an electromagnetic field, a gravitational field or a mechanical force, so Thomas precession is usually accompanied by dynamical effects.
If the system experiences no external torque, e.g., in external scalar fields, its spin dynamics are determined only by the Thomas precession. A single discrete Thomas rotation (as opposed to the series of infinitesimal rotations that add up to the Thomas precession) is present in situations anytime there are three or more inertial frames in non-collinear motion, as can be seen using Lorentz transformations.
History
Thomas precession in relativity was already known to Ludwik Silberstein in 1914. But the only knowledge Thomas had of relativistic precession came from de Sitter's paper on the relativistic precession of the moon, first published in a book by Eddington.
In 1925 Thomas recomputed the relativistic precessional frequency of the doublet separation in the fine structure of the atom. He thus found the missing factor 1/2, which came to be known as the Thomas half.
This discovery of the relativistic precession of the electron spin led to the understanding of the significance of the relativistic effect. The effect was consequently named "Thomas precession".
Introduction
Definition
Consider a physical system moving through Minkowski spacetime. Assume that there is at any moment an inertial system such that in it, the system is at rest. This assumption is sometimes called the third postulate of relativity. This means that at any instant, the coordinates and state of the system can be Lorentz transformed to the lab system through some Lorentz transformation.
Let the system be subject to external forces that produce no torque with respect to its center of mass in its (instantaneous) rest frame. The condition of "no torque" is necessary to isolate the phenomenon of Thomas precession. As a simplifying assumption one assumes that the external forces bring the system back to its initial velocity after some finite time. Fix a Lorentz frame such that the initial and final velocities are zero.
The Pauli–Lubanski spin vector S_μ is defined to be (0, S) in the system's rest frame, with S the angular-momentum three-vector about the center of mass. In the motion from initial to final position, S_μ undergoes a rotation, as recorded in the fixed frame, from its initial to its final value. This continuous change is the Thomas precession.
Statement
Consider the motion of a particle. Introduce a lab frame in which an observer can measure the relative motion of the particle. At each instant of time the particle has an inertial frame in which it is at rest. Relative to this lab frame, the instantaneous velocity of the particle is v(t) with magnitude |v| = v bounded by the speed of light c, so that 0 ≤ v < c. Here the time t is the coordinate time as measured in the lab frame, not the proper time of the particle.
Apart from the upper limit on magnitude, the velocity of the particle is arbitrary and not necessarily constant; its corresponding vector of acceleration is a = dv(t)/dt. As a result of the Wigner rotation at every instant, the particle's frame precesses with an angular velocity given by the equation
ω_T = (1/c^2) (γ^2/(γ + 1)) a × v
where × is the cross product and
γ = 1/√(1 − v·v/c^2)
is the instantaneous Lorentz factor, a function of the particle's instantaneous velocity. Like any angular velocity, ω_T is a pseudovector; its magnitude is the angular speed at which the particle's frame precesses (in radians per second), and the direction points along the rotation axis. As is usual, the right-hand convention of the cross product is used (see right-hand rule).
The precession depends on accelerated motion, and the non-collinearity of the particle's instantaneous velocity and acceleration. No precession occurs if the particle moves with uniform velocity (constant so ), or accelerates in a straight line (in which case and are parallel or antiparallel so their cross product is zero). The particle has to move in a curve, say an arc, spiral, helix, or a circular orbit or elliptical orbit, for its frame to precess. The angular velocity of the precession is a maximum if the velocity and acceleration vectors are perpendicular throughout the motion (a circular orbit), and is large if their magnitudes are large (the magnitude of is almost ).
In the non-relativistic limit, v → 0 so γ → 1, and the angular velocity is approximately
ω_T ≈ (1/(2c^2)) a × v
The factor of 1/2 turns out to be the critical factor to agree with experimental results. It is informally known as the "Thomas half".
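As a numeric sketch comparing the exact formula above against the "Thomas half" approximation (the circular-orbit radius and speeds below are assumed purely for illustration):

import math

c = 299792458.0                      # speed of light, m/s

def thomas_omega(v, a):
    """Magnitude of the Thomas precession rate for perpendicular v and a."""
    gamma = 1.0 / math.sqrt(1.0 - v * v / c**2)
    return (gamma**2 / (gamma + 1.0)) * a * v / c**2

# Circular motion: a = v^2 / r, with illustrative values
r = 1.0                              # orbit radius, m
for v in (0.01 * c, 0.5 * c, 0.9 * c):
    a = v * v / r
    exact = thomas_omega(v, a)
    half = 0.5 * a * v / c**2        # non-relativistic approximation
    print(v / c, exact, half)        # the two agree only when v << c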
Mathematical explanation
Lorentz transformations
The description of relative motion involves Lorentz transformations, and it is convenient to use them in matrix form; symbolic matrix expressions summarize the transformations and are easy to manipulate, and when required the full matrices can be written explicitly. Also, to prevent extra factors of c cluttering the equations, it is convenient to use the definition β(t) = v(t)/c with magnitude |β| = β such that 0 ≤ β < 1.
The spacetime coordinates of the lab frame are collected into a 4×1 column vector X = (ct, x, y, z)^T, and the boost is represented as the 4×4 symmetric matrix, written here in block form,
B(β) = [ γ, −γβ^T ; −γβ, I + (γ − 1)ββ^T/β^2 ]
where I is the 3×3 identity matrix and
γ = 1/√(1 − β·β)
is the Lorentz factor of β. In other frames, the corresponding coordinates are also arranged into column vectors. The inverse matrix of the boost corresponds to a boost in the opposite direction, and is given by B(β)^−1 = B(−β).
At an instant of lab-recorded time t measured in the lab frame, the transformation of spacetime coordinates from the lab frame to the particle's frame Σ′ is
X′ = B(β)X
and at later lab-recorded time t + Δt we can define a new frame Σ′′ for the particle, which moves with velocity β + Δβ relative to the lab frame, and the corresponding boost is
X′′ = B(β + Δβ)X
The vectors β and Δβ are two separate vectors. The latter is a small increment, and can be conveniently split into components parallel (‖) and perpendicular (⊥) to β:
Δβ = Δβ‖ + Δβ⊥
Combining the two boost relations obtains the Lorentz transformation between Σ′ and Σ′′,
X′′ = B(β + Δβ)B(−β)X′
and this composition contains all the required information about the motion between these two lab times. Notice that the composition B(β + Δβ)B(−β) is an infinitesimal transformation, because it involves only a small increment in the relative velocity, while B(β + Δβ) itself is not.
The composition of two boosts equates to a single boost combined with a Wigner rotation about an axis perpendicular to the relative velocities:
B(β + Δβ)B(−β) = R(Δθ)B(Δb)
The rotation is given by R(Δθ), a 4×4 rotation matrix in the axis–angle representation, and coordinate systems are taken to be right-handed. This matrix rotates 3d vectors anticlockwise about an axis (active transformation), or equivalently rotates coordinate frames clockwise about the same axis (passive transformation). The axis–angle vector Δθ parametrizes the rotation, its magnitude Δθ is the angle the frame has rotated, and its direction is parallel to the rotation axis, in this case the axis is parallel to the cross product β × Δβ. If the angles are negative, then the sense of rotation is reversed. The inverse matrix is given by R(Δθ)^−1 = R(−Δθ).
Corresponding to the boost B(Δb) is the (small change in the) boost vector Δb, with the magnitude and direction of the relative velocity of the boost (divided by c). The boost B(Δb) and rotation R(Δθ) here are infinitesimal transformations because Δb and Δθ are small.
The rotation gives rise to the Thomas precession, but there is a subtlety. To interpret the particle's frame as a co-moving inertial frame relative to the lab frame, and agree with the non-relativistic limit, we expect the transformation between the particle's instantaneous frames at times t and t + Δt to be related by a boost without rotation. Combining the previous two relations and rearranging gives
B(Δb)X′ = R(−Δθ)X′′ = X′′′
where another instantaneous frame Σ′′′ is introduced with coordinates X′′′, to prevent conflation with Σ′′. To summarize the frames of reference: in the lab frame an observer measures the motion of the particle, and the three instantaneous inertial frames in which the particle is at rest are Σ′ (at time t), Σ′′ (at time t + Δt), and Σ′′′ (at time t + Δt). The frames Σ′′ and Σ′′′ are at the same location and time, they differ only by a rotation. By contrast Σ′ and Σ′′′ differ by a boost and lab time interval Δt.
Relating the coordinates X′′′ to the lab coordinates X:
X′′′ = R(−Δθ)B(β + Δβ)X
so the frame Σ′′′ is rotated in the negative sense.
The rotation is between two instants of lab time. As Δt → 0, the particle's frame rotates at every instant, and the continuous motion of the particle amounts to a continuous rotation with an angular velocity at every instant. Dividing −Δθ by Δt, and taking the limit Δt → 0, the angular velocity is by definition
ω = lim (Δt → 0) (−Δθ/Δt)
It remains to find what Δθ precisely is.
Extracting the formula
The composition can be obtained by explicitly calculating the matrix product. The boost matrix of β + Δβ will require the magnitude and Lorentz factor of this vector. Since Δβ is small, terms of "second order" such as |Δβ|^2, (Δβ‖)^2, (Δβ⊥)^2, and higher are negligible. Taking advantage of this fact, the magnitude squared of the vector is
|β + Δβ|^2 = β^2 + 2β·Δβ
and expanding the Lorentz factor of β + Δβ as a power series gives, to first order in Δβ,
γ(β + Δβ) = γ + γ^3 β·Δβ
using the Lorentz factor of as above.
Introducing the boost generators K = (K_x, K_y, K_z) and rotation generators J = (J_x, J_y, J_z), the standard 4×4 matrices that generate pure boosts and pure rotations about the coordinate axes, along with the dot product · facilitates the coordinate independent expression
B(β + Δβ)B(−β) = I − Δb·K − Δθ·J
which holds if β and Δβ lie in any plane. This is an infinitesimal Lorentz transformation in the form of a combined boost and rotation, where
Δb = γ^2 Δβ‖ + γΔβ⊥ ,  Δθ = (γ^2/(γ + 1)) β × Δβ
After dividing by Δt and taking the limit Δt → 0 as above, one obtains the instantaneous angular velocity
ω = (γ^2/(γ + 1)) (a × v)/c^2
where a is the acceleration of the particle as observed in the lab frame. No forces were specified or used in the derivation, so the precession is a kinematical effect - it arises from the geometric aspects of motion. However, forces cause accelerations, so the Thomas precession is observed if the particle is subject to forces.
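This extraction can be checked numerically. The sketch below composes two explicit boost matrices (using the block form of B(β) given above), reads off the Wigner rotation, and compares its angle with γ^2/(γ + 1) |β × Δβ|; the velocity values are assumed purely for illustration:

import numpy as np

def boost(beta):
    """Return the 4x4 pure boost matrix B(beta) acting on X = (ct, x, y, z)."""
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta
    g = 1.0 / np.sqrt(1.0 - b2)
    B = np.eye(4)
    B[0, 0] = g
    B[0, 1:] = -g * beta
    B[1:, 0] = -g * beta
    B[1:, 1:] = np.eye(3) + (g - 1.0) * np.outer(beta, beta) / b2
    return B

beta = np.array([0.6, 0.0, 0.0])        # illustrative velocity in units of c
dbeta = np.array([0.0, 1e-4, 0.0])      # small non-collinear increment

L = boost(beta + dbeta) @ boost(-beta)  # composition of the two boosts

# Velocity of the composed transformation, read from its inverse's first column
Linv = np.linalg.inv(L)
u = Linv[1:, 0] / Linv[0, 0]

R = L @ boost(-u)                       # strip off the boost; what remains is the Wigner rotation
cos_angle = (np.trace(R[1:, 1:]) - 1.0) / 2.0
angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))

g = 1.0 / np.sqrt(1.0 - beta @ beta)
predicted = g**2 / (g + 1.0) * np.linalg.norm(np.cross(beta, dbeta))
print(angle, predicted)                 # the two agree to first order in dbeta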
Thomas precession can also be derived using the Fermi-Walker transport equation. One assumes uniform circular motion in flat Minkowski spacetime. The spin 4-vector is orthogonal to the velocity 4-vector. Fermi-Walker transport preserves this relation. One finds that the dot product of the acceleration 4-vector with the spin 4-vector varies sinusoidally with time with an angular frequency γω, where ω is the angular frequency of the circular motion and γ=1/√(1-v^2/c^2), the Lorentz factor. This is easily shown by taking the second time derivative of that dot product. Because this angular frequency exceeds ω, the spin precesses in the retrograde direction. The difference (γ-1)ω is the Thomas precession angular frequency already given, as is simply shown by realizing that the magnitude of the 3-acceleration is ω v.
Applications
In electron orbitals
In quantum mechanics Thomas precession is a correction to the spin-orbit interaction, which takes into account the relativistic time dilation between the electron and the nucleus in hydrogenic atoms.
Basically, it states that spinning objects precess when they accelerate in special relativity because Lorentz boosts do not commute with each other.
To calculate the spin of a particle in a magnetic field, one must also take into account Larmor precession.
In a Foucault pendulum
The rotation of the swing plane of Foucault pendulum can be treated as a result of parallel transport of the pendulum in a 2-dimensional sphere of Euclidean space. The hyperbolic space of velocities in Minkowski spacetime represents a 3-dimensional (pseudo-) sphere with imaginary radius and imaginary timelike coordinate. Parallel transport of a spinning particle in relativistic velocity space leads to Thomas precession, which is similar to the rotation of the swing plane of a Foucault pendulum. The angle of rotation in both cases is determined by the area integral of curvature in agreement with the Gauss–Bonnet theorem.
Thomas precession gives a correction to the precession of a Foucault pendulum. For a Foucault pendulum located in the city of Nijmegen in the Netherlands the correction is:
Note that it is more than two orders of magnitude smaller than the precession due to the general-relativistic correction arising from frame-dragging, the Lense–Thirring precession.
See also
Velocity-addition formula
Relativistic angular momentum
Holonomy
Remarks
Notes
References
Thomas, L. H. (1927). "The kinematics of an electron with an axis". Phil. Mag. 7: 1–23.
Textbooks
External links
Mathpages article on Thomas Precession
Alternate, detailed derivation of Thomas Precession (by Robert Littlejohn)
Short derivation of the Thomas precession
Special relativity
Atomic physics
Precession | Thomas precession | [
"Physics",
"Chemistry"
] | 3,000 | [
"Physical quantities",
"Quantum mechanics",
"Precession",
"Special relativity",
"Atomic physics",
" molecular",
"Theory of relativity",
"Atomic",
"Wikipedia categories named after physical quantities",
" and optical physics"
] |
1,868,531 | https://en.wikipedia.org/wiki/Molecularity | In chemistry, molecularity is the number of molecules that come together to react in an elementary (single-step) reaction and is equal to the sum of stoichiometric coefficients of reactants in the elementary reaction with effective collision (sufficient energy) and correct orientation.
Depending on how many molecules come together, a reaction can be unimolecular, bimolecular or even trimolecular.
The kinetic order of any elementary reaction or reaction step is equal to its molecularity, and the rate equation of an elementary reaction can therefore be determined by inspection, from the molecularity.
The kinetic order of a complex (multistep) reaction, however, is not necessarily equal to the number of molecules involved. The concept of molecularity is only useful to describe elementary reactions or steps.
Unimolecular reactions
In a unimolecular reaction, a single molecule rearranges atoms, forming different molecules. This is illustrated by the equation
A -> P,
where P refers to chemical product(s). The reaction or reaction step is an isomerization if there is only one product molecule, or a dissociation if there is more than one product molecule.
In either case, the rate of the reaction or step is described by the first order rate law
r = -d[A]/dt = k[A]
where [A] is the concentration of species A, t is time, and k is the reaction rate constant.
As can be deduced from the rate law equation, the number of A molecules that decay is proportional to the number of A molecules available. An example of a unimolecular reaction, is the isomerization of cyclopropane to propene:
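The first-order law integrates to [A](t) = [A]0 exp(-kt); a minimal sketch (the rate constant and initial concentration are assumed for illustration):

import math

k = 1.2e-3        # first-order rate constant, 1/s (illustrative)
A0 = 0.10         # initial concentration of A, mol/L (illustrative)

def concentration(t):
    """Concentration of A after time t for a unimolecular decay."""
    return A0 * math.exp(-k * t)

for t in (0.0, 600.0, 1200.0):
    print(t, concentration(t))   # exponential decay; half-life = ln(2)/k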
Unimolecular reactions can be explained by the Lindemann-Hinshelwood mechanism.
Bimolecular reactions
In a bimolecular reaction, two molecules collide and exchange energy, atoms or groups of atoms.
This can be described by the equation
A + B -> P
which corresponds to the second order rate law: r = k[A][B].
Here, the rate of the reaction is proportional to the rate at which the reactants come together. An example of a bimolecular
reaction is the SN2-type nucleophilic substitution of methyl bromide by hydroxide ion:
CH3Br + OH^- -> CH3OH + Br^-
Termolecular reactions
A termolecular (or trimolecular) reaction in solutions or gas mixtures involves three reactants simultaneously colliding, with appropriate orientation and sufficient energy. However the term trimolecular is also used to refer to three body association reactions of the type:
A + B ->[\ce{M}] C
Where the M over the arrow denotes that to conserve energy and momentum a second reaction with a third body is required. After the initial bimolecular collision of A and B an energetically excited reaction intermediate is formed, then, it collides with a M body, in a second bimolecular reaction, transferring the excess energy to it.
The reaction can be explained as two consecutive bimolecular reactions:
A + B <=> AB*
AB* + M -> C + M
These reactions frequently have a pressure and temperature dependence region of transition between second and third order kinetics.
Catalytic reactions are often three-component, but in practice a complex of the starting materials is first formed and the rate-determining step is the reaction of this complex into products, not an adventitious collision between the two species and the catalyst. For example, in hydrogenation with a metal catalyst, molecular dihydrogen first dissociates onto the metal surface into hydrogen atoms bound to the surface, and it is these monatomic hydrogens that react with the starting material, also previously adsorbed onto the surface.
Reactions of higher molecularity are not observed due to very small probability of simultaneous interaction between 4 or more molecules.
Difference between molecularity and order of reaction
It is important to distinguish molecularity from order of reaction. The order of reaction is an empirical quantity determined by experiment from the rate law of the reaction. It is the sum of the exponents in the rate law equation. Molecularity, on the other hand, is deduced from the mechanism of an elementary reaction, and is used only in context of an elementary reaction. It is the number of molecules taking part in this reaction.
This difference can be illustrated on the reaction between nitric oxide and hydrogen:
2NO + 2H2 -> N2 + 2H2O,
where the observed rate law is r = k[NO]^2[H2], so that the reaction is third order. Since the order does not equal the sum of reactant stoichiometric coefficients, the reaction must involve more than one step. The proposed two-step mechanism has a rate-limiting first step whose molecularity corresponds to the overall order of 3:
Slow: 2 NO + H2 -> N2 + H2O2
Fast: H2O2 + H2 -> 2H2O
On the other hand, the molecularity of this reaction is undefined, because it involves a mechanism of more than one step. However, we can consider the molecularity of the individual elementary reactions that make up this mechanism: the first step is trimolecular because it involves three reactant molecules, while the second step is bimolecular because it involves two reactant molecules.
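A quick sketch of how such an empirical rate law behaves (the rate constant is illustrative): doubling [NO] quadruples the rate, while doubling [H2] only doubles it.

k = 0.5   # third-order rate constant, L^2/(mol^2 s), illustrative

def rate(no, h2):
    """Observed rate law for 2 NO + 2 H2 -> N2 + 2 H2O."""
    return k * no**2 * h2

base = rate(0.01, 0.01)
print(rate(0.02, 0.01) / base)   # 4.0: second order in NO
print(rate(0.01, 0.02) / base)   # 2.0: first order in H2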
See also
Reaction rate
Dissociation (chemistry)
Lindemann mechanism
Crossed molecular beam
Cage effect
Reaction progress kinetic analysis
References
Chemical kinetics | Molecularity | [
"Chemistry"
] | 1,092 | [
"Chemical kinetics",
"Chemical reaction engineering"
] |
1,868,978 | https://en.wikipedia.org/wiki/SLOSS%20debate | The SLOSS debate was a debate in ecology and conservation biology during the 1970s and 1980s as to whether a single large or several small (SLOSS) reserves were a superior means of conserving biodiversity in a fragmented habitat. Since its inception, multiple alternate theories have been proposed. There have been applications of the concept outside of the original context of habitat conservation.
History
In 1975, Jared Diamond suggested some "rules" for the design of protected areas, based on Robert MacArthur and E. O. Wilson's book The Theory of Island Biogeography. One of his suggestions was that a single large reserve was preferable to several smaller reserves whose total areas were equal to the larger.
Since species richness increases with habitat area, as established by the species-area curve, a larger block of habitat would support more species than any of the smaller blocks. This idea was popularised by many other ecologists, and has been incorporated into most standard textbooks in conservation biology, and was used in real-world conservation planning. This idea was challenged by Wilson's former student Daniel Simberloff, who pointed out that this idea relied on the assumption that smaller reserves had a nested species composition — it assumed that each larger reserve had all the species present in any smaller reserve. If the smaller reserves had unshared species, then it was possible that two smaller reserves could have more species than a single large reserve.
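The arithmetic behind the argument can be sketched with the common power-law species-area relationship S = cA^z (the constants and the overlap assumptions below are illustrative, not values from the original debate):

c, z = 20.0, 0.25          # illustrative species-area constants

def species(area):
    return c * area**z     # power-law species-area relationship

single_large = species(4.0)             # one reserve of area 4
one_small = species(1.0)                # one reserve of area 1

# Fully nested composition: four small reserves share all their species,
# so together they hold no more than one of them alone
nested_total = one_small

# No shared species: richness of four independent small reserves adds up
independent_total = 4 * one_small

print(single_large, nested_total, independent_total)
# about 28.3 vs 20.0 vs 80.0: the ranking hinges on how many species
# the small reserves share, which is exactly what Simberloff pointed out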
Simberloff and Abele expanded their argument in a subsequent paper in the journal The American Naturalist, stating that neither ecological theory nor empirical data exist to support the hypothesis that subdividing a nature reserve would increase extinction rates, essentially negating Diamond as well as MacArthur and Wilson. Bruce A. Wilcox and Dennis D. Murphy responded with a key paper, "Conservation strategy - effects of fragmentation on extinction", pointing out flaws in their argument while providing a comprehensive definition of habitat fragmentation. Wilcox and Murphy also argued that habitat fragmentation is probably the major threat to global biological diversity.
This helped set the stage for fragmentation research as an important area of conservation biology. The SLOSS debate ensued as to the extent to which smaller reserves shared species with one another, leading to the development of nested subset theory by Bruce D. Patterson and Wirt Atmar in the 1980s and to the establishment of the Biological Dynamics of Forest Fragments Project (BDFFP) near Manaus, Brazil in 1979 by Thomas Lovejoy and Richard Bierregaard.
Alternate theories
In 1986, Michael E. Soulé and Daniel Simberloff proposed that the SLOSS debate was irrelevant and that a three-step process was the ideal way to determine reserve size: first, decide which species' presence is most important to the reserve's biodiversity; second, decide how many individuals of those species are required for them to survive; and lastly, using known population densities, estimate how much space is needed to sustain the required number of individuals.
Other considerations
Dispersal and genetics, which alternate theories often center on because the original debate tended to ignore them.
Habitat connectivity or Landscape connectivity.
Applications
Conservation park planning
The purpose of the debate itself is in regards to conservation planning and is currently used in most spatial allotment planning.
Urban areas
The SLOSS debate has come into play in urban planning concerning green spaces, with considerations extending beyond biodiversity to human well-being. The concept can also be applied to other aspects of city planning.
Current status of debate
The general consensus of the SLOSS debate is that neither option fits every situation, and that proposals must be evaluated on a case-by-case basis in accordance with the conservation goal to decide the best course of action.
In the field of metapopulation ecology, modelling studies suggest that the SLOSS debate should be refined and cannot be resolved without explicit spatial consideration of dispersal and environmental dynamics. In particular, a large number of small patches may be optimal for long-term species persistence only if the species range increases with the number of patches.
In conservation biology and conservation genetics, metapopulations (i.e. connected groups of sub-populations) are considered to be more stable if they are larger, or have more populations. This is because although individual small populations may go extinct due to stochastic processes of environment or biology (such as genetic drift and inbreeding), they can be recolonized by rare migrants from other surviving populations. Thus several small populations could be better than a single large: if a catastrophe wipes out a single big population, the species goes extinct, but if some regional populations in a large metapopulation get wiped out, recolonization from the rest of the metapopulation can ensure their eventual survival. In cases of habitat loss, when the loss is dispersed, few large reserves are best, when the loss is in clusters, multiple small reserves are best.
See also
Island biogeography
Patch dynamics
References
Further reading
Atmar, W. and B.D. Patterson. 1993. "The measure of order and disorder in the distribution of species in fragmented habitat." Oecologia 96:373-382.
Diamond, J.M. 1975. "The Island Dilemma: Lessons of Modern Biogeographic Studies for the Design of Natural Reserves". Biological Conservation Vol. 7, no. 2, pp. 129–146
MacArthur, R. H. and Wilson, E. O. 1967. The Theory of Island Biogeography Princeton University Press.
Patterson, B.D. and W. Atmar. 1986. "Nested subsets and the structure of insular mammalian faunas and archipelagos." In: Heaney L.R. and Patterson B.D. (eds), Island biogeography of mammals. Academic Press, London, pp 65–82.
Simberloff, D. S. and L. G. Abele. 1976. Island biogeography theory and conservation practice. Science 191: 285-286
Simberloff, D. S. and L. G. Abele. 1982. Refuge design and island biogeographic theory - effects of fragmentation. American Naturalist 120:41-56
Wilcox, B. A., and D. D. Murphy. 1985. Conservation strategy - effects of fragmentation on extinction. American Naturalist 125:879-887
Ecology | SLOSS debate | [
"Biology"
] | 1,280 | [
"Ecology"
] |
1,869,011 | https://en.wikipedia.org/wiki/Chlorine%20fluoride | A chlorine fluoride is an interhalogen compound containing only chlorine and fluorine.
External links
National Pollutant Inventory - Fluoride compounds fact sheet
NIST Standard Reference Database
WebElements
Inorganic chlorine compounds
Fluorides
Interhalogen compounds | Chlorine fluoride | [
"Chemistry"
] | 59 | [
"Inorganic compounds",
"Interhalogen compounds",
"Oxidizing agents",
"Salts",
"Inorganic compound stubs",
"Inorganic chlorine compounds",
"Fluorides"
] |
1,869,136 | https://en.wikipedia.org/wiki/Volume-weighted%20average%20price | In finance, volume-weighted average price (VWAP) is the ratio of the value of a security or financial asset traded to the total volume of transactions during a trading session. It is a measure of the average trading price for the period.
Typically, the indicator is computed for one day, but it can be measured between any two points in time.
VWAP is often used as a trading benchmark by investors who aim to be as passive as possible in their execution. Many pension funds, and some mutual funds, fall into this category. The aim of using a VWAP trading target is to ensure that the trader executing the order does so in line with the volume on the market. It is sometimes argued that such execution reduces transaction costs by minimizing market impact costs (the additional cost due to the market impact, i.e. the adverse effect of a trader's activities on the price of a security).
VWAP is often used in algorithmic trading. A broker may guarantee the execution of an order at the VWAP and have a computer program enter the orders into the market to earn the trader's commission and create P&L. This is called a guaranteed VWAP execution. The broker can also trade in a best effort way and answer the client with the realized price. This is called a VWAP target execution; it incurs more dispersion in the answered price compared to the VWAP price for the client but a lower received/paid commission. Trading algorithms that use VWAP as a target belong to a class of algorithms known as volume participation algorithms.
The first execution based on the VWAP was in 1984 for the Ford Motor Company by James Elkins, then head trader at Abel Noser.
Formula
VWAP is calculated using the following formula:
P_VWAP = Σ_j (P_j · Q_j) / Σ_j Q_j
where:
P_VWAP is the Volume Weighted Average Price;
P_j is the price of trade j;
Q_j is the quantity of trade j;
j is each individual trade that takes place over the defined period of time, excluding cross trades and basket cross trades.
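A minimal sketch of the computation (the trade data and function name are assumed for illustration):

def vwap(trades):
    """Volume-weighted average price over a list of (price, quantity) trades."""
    total_value = sum(price * qty for price, qty in trades)
    total_volume = sum(qty for _, qty in trades)
    return total_value / total_volume

# Illustrative trades: (price, quantity)
trades = [(10.00, 100), (10.05, 300), (9.95, 200)]
print(vwap(trades))   # about 10.008: large trades pull the average toward their price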
Using the VWAP
The VWAP can be used similar to moving averages, where prices above the VWAP reflect a bullish sentiment and prices below the VWAP reflect a bearish sentiment. Traders may initiate short positions as a stock price moves below VWAP for a given time period or initiate long positions as the price moves above VWAP.
Institutional buyers and algorithms often use VWAP to plan entries and initiate larger positions without disturbing the stock price.
VWAP slippage refers to the difference between the intended and executed prices, and is a common measure of broker performance. Many buy-side firms now use an algo wheel to algorithmically direct their flow to the best broker.
See also
Electronic trading
Time-weighted average price
References
Mathematical finance
Stock market
Algorithmic trading | Volume-weighted average price | [
"Mathematics"
] | 571 | [
"Applied mathematics",
"Mathematical finance"
] |
1,869,214 | https://en.wikipedia.org/wiki/SEMI | SEMI is an industry association comprising companies involved in the electronics design and manufacturing supply chain. They provide equipment, materials and services for the manufacture of semiconductors, photovoltaic panels, LED and flat panel displays, micro-electromechanical systems (MEMS), printed and flexible electronics, and related micro and nano-technologies.
SEMI is headquartered in Milpitas, California, and has offices in Bangalore; Berlin; Brussels; Hsinchu; Seoul; Shanghai; Singapore; Tokyo; and Washington, D.C. Its main activities include conferences and trade shows, development of industry standards, market research reporting, and industry advocacy. The president and chief executive officer of the organization is Ajit Manocha. The previous CEO was Dennis P. McGuirk, and before him, Stanley T. Myers.
Global advocacy
SEMI Global Advocacy represents the interests of the semiconductor industry's design, manufacturing and supply chain businesses worldwide. SEMI promotes its positions on public issues via press releases, position papers, presentations, social media, web content, and media interviews.
SEMI Global Advocacy focuses on five priorities: taxes, trade, technology, talent, and environment, health and safety (EHS).
Workforce development
SEMI Workforce Development attracts and develops talent that can fulfill the requirements of the electronics industry. SEMI programs include:
SEMI Works. Begun in 2019, SEMI Works develops a standardized process that identifies technical competencies and certifies relevant college coursework. The program is designed to improve the job hiring process for both applicants and employers.
Diversity and Inclusion Council. This council communicates best practices and benefits arising from diverse and inclusive cultures, using white papers, services, webinars, workshops, presentations and events.
SEMI standards
The SEMI Standards program was established in 1973 using proceeds from the west coast SEMICON show. Its first initiative, following meetings with silicon suppliers, was a successful effort to set common wafer diameters to be used in silicon manufacturing. This standardization helped the industry avoid an anticipated wafer shortage in 1973–1974. The standards became internationally utilized over the years, through partnerships with the ASTM, the DIN, and other national standards organizations. Before these standards, there were more than two thousand different specifications for silicon, and by 1975 80% of all silicon wafers met the SEMI standard. The standards were first published annually as the Book of SEMI Standards. The book was eventually replaced with a CD-ROM, and the standards are now available online on an annual subscription basis.
Today, more than 1,000 SEMI standards and safety guidelines are available to address all aspects of automated fabs. The standards are developed and maintained by over 5,000 volunteer experts representing more than 2,000 companies, working in 23 technical committees and 200 task forces. High-profile standards include wafer dimensions and materials, factory efficiency and reliability, equipment interfaces, and environmental, health and safety standards. In 2022, SEMI published its first pair of cybersecurity standards, to help protect against future cyberattacks on factory equipment: SEMI E187 - Specification for Cybersecurity of Fab Equipment, and SEMI E188 - Specification for Malware Free Equipment Integration.
The four main equipment communication standards are the SECS-I (SECS stands for SEMI Equipment Communication Standards), established in 1978, which deals with the communication protocol and physical definitions; the SECS-II, established in 1982, which deals with message format; the GEM, established in 1992, which refines the SECS-II; and the HSMS, established in 1994, which supersedes SECS-I. The organization also provides safety and ergonomics guidelines, the first of which was the SEMI S2, developed in 1993, followed by the SEMI S8 in 1995.
Conferences and trade shows
SEMI was founded in 1970 as an association of semiconductor production equipment vendors. At that time, most companies in the semiconductor industry exhibited at the Wescon Show on the west coast and the IEEE show on the east coast. Wishing to organize a show dedicated to semiconductor production equipment, 55 companies met in Palo Alto and agreed to found a new association, originally called Semiconductor Equipment and Materials Institute.
The first SEMICON show was held in 1971 at the San Mateo Fairground in California, which featured “semiconductor processing equipment, materials, and service firms.” It featured 80 exhibitors and attracted 2,800 visitors. In 1973, the first SEMICON East show was held in New York City, with 120 exhibitors participating. This was followed by SEMICON Europa in Zürich, Switzerland (1975) and SEMICON Japan in Tokyo (1977), which attracted more than 200 exhibitors and 4,500 visitors. Through this and other activities, the organization grew from a domestic organization to one with an international focus. Part of this focus was to work with governments to reduce trade barriers and develop “a sympathetic regulatory climate” for its member organizations—companies that sold equipment and materials to firms that produce microprocessors.
Today SEMI organizes and produces nearly 100 technology showcases, trade shows, conferences and special events per year in all of the major manufacturing regions of the world. They include trade shows in China, Japan, Germany, Singapore, South Korea, Taiwan, North America, and Europe, as well as executive conferences, technical programs, and standards meetings. The organization also has technical education programs, and a weekly email newsletter. Presentations delivered at its symposia are available to members of the organization on the Members Only section of the website.
Market research reports
SEMI provides market research reports for the semiconductor equipment, materials, and LED industries. Its billing data is considered an important leading indicator of demand trends and is closely watched within the industry and by semiconductor market analysts and investor. It also releases the World Fab Forecast.
The semiconductor equipment billings report provides a three-month rolling average of the book-to-bill ratio for semiconductor equipment manufacturers with headquarters in North America. It is released approximately three weeks after the close of each month.
Data for the reports is collected directly from suppliers through a confidential data collection program via an independent financial services company.
There are data collection programs in the following areas.
Equipment market
Packaging market
Materials market
Semiconductor Fabrication foundries and capacity
In-depth reports are broken down by region, supply chain segment, and equipment type.
Smart initiatives
SEMI Smart Initiatives build activities around promising electronics markets emerging from mass digitalization in the Fourth Industrial Era. The initiatives synchronize advances around semiconductors, electronics and imaging systems, the Internet of Things, MEMS, sensors, devices, displays, and other digital technologies used in the electronics industry.
SEMI Smart Initiatives include:
Smart Mobility. This initiative is focused primarily on the automotive and autonomous vehicle supply chain. In 2018, SEMI formed the Global Automotive Advisory Council, composed of five regional chapters (Europe, United States, Japan, Taiwan, China), which engages stakeholders to address common challenges, priorities, solutions, and opportunities. GAAC members include Audi, BMW, Ford, and Volkswagen.
Smart MedTech. For example, SEMI's Nano-Bio Materials Consortium (NVMC) brings critical information and project funding to the electronics industry; the SEMI-FlexTech partnership focuses on the rise of flexible, printed and hybrid electronics to better fit the contours and movement of the human form, in part for healthcare applications.
Smart Manufacturing. This initiative focuses on challenges and opportunities integrating production and sensor data, analytics, artificial intelligence, automated systems and more with traditional manufacturing technologies. Regional chapters worldwide collaborate through conferences, communities, working groups and online meetings.
Smart Data. This initiative aims to improve efficiency in the semiconductor design and manufacturing supply chain through new data analytics, artificial intelligence (AI) and machine learning (ML). In addition to efficiency improvements, activities aim to reduce costs, validate semiconductor chips and products, speed process development and to problem-solve root causes.
Technology communities
More than 20 SEMI Technology Communities, 150 Committees, and 15 Partner organizations provide access to global networks for collaboration, professional growth, business opportunities, educational events, workshops, and industrywide intelligence. In June 2021, SEMI established its first Semiconductor Committee focused on cybersecurity, seeking to raise overall supply chain security and build supply resilience through cybersecurity.
Strategic Association Partnerships
In 2018, the Electronic System Design Alliance (ESDA) joined SEMI as a Strategic Association partner.
In 2019, the Nano-Bio Materials Consortium (NBMC) joined SEMI as a Strategic Association partner.
In 2018, the Fab Owners Association joined SEMI as a Strategic Association partner.
In 2017, the MEMS & Sensors Industry Group (MSIG) joined SEMI as a Strategic Association partner, bringing the MEMS and sensors community to SEMI's global platforms.
In 2016, FlexTech joined SEMI as a Strategic Association partner.
See also
Semiconductor device fabrication
Semiconductor Industry Association
Notes
External links
SEMI
Technology trade associations | SEMI | [
"Materials_science"
] | 1,811 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
20,838,163 | https://en.wikipedia.org/wiki/Alberta%20Taciuk%20process | The Alberta Taciuk process (ATP; known also as the AOSTRA Taciuk process) is an above-ground dry thermal retorting technology for extracting oil from oil sands, oil shale and other organics-bearing materials, including oil contaminated soils, sludges and wastes. The technology is named after its inventor William Taciuk and the Alberta Oil Sands Technology and Research Authority.
History
The research and development of the ATP technology started in 1970. In 1975, its inventor, William Taciuk, formed UMATAC Industrial Processes (now part of Polysius) to further its development. The first ATP pilot plant was constructed in 1977.
The ATP was originally developed for pyrolysis of oil sand. However, its first commercial application in 1989 was dedicated to the environmental remediation of contaminated soils. From 1999 to 2004, ATP technology was used for shale oil extraction at the Stuart Oil Shale Plant in Australia. Shale oil was extracted there until the owner, Southern Pacific Petroleum Pty Ltd, went into receivership. The subsequent owner, Queensland Energy Resources, closed and dismantled the plant.
In 2002, Estonian company Viru Keemia Grupp tested this technology; however, it was not taken into use.
Technology
The ATP is an above-ground oil-shale retorting technology classified as a hot recycled solids technology. The distinguishing feature of the ATP is that the drying and pyrolysis of the oil shale or other feed, as well as the combustion, recycling, and cooling of spent materials and residues, all occur within a single rotating multi-chamber horizontal retort. Its feed consists of fine particles.
In its shale-oil applications, fine particles are fed into the preheat tubes of the retort, where they are dried and preheated indirectly by hot shale ash and hot flue gas. In the pyrolysis zone, the preheated oil shale particles are mixed with hot shale ash and pyrolyzed. The resulting shale oil vapor is withdrawn from the retort through a vapour tube and recovered by condensation in other equipment. The char residues, mixed with ash, are moved to the combustion zone and burnt to form shale ash. Part of the ash is delivered to the pyrolysis zone, where its heat is recycled as a hot solid carrier; the other part is removed and cooled in the cooling zone with the combustion gases by heat transfer to the feed oil shale.
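Because the recycled hot ash is the heat carrier for pyrolysis, a rough solid-to-solid heat balance fixes the required ash-to-feed recycle ratio. The sketch below is a minimal illustration of that balance; the temperatures and heat capacities are assumed round numbers chosen for the example, not published ATP operating data.

```python
# Rough heat balance for a hot-recycled-solids retort:
# heat released by the ash cooling down to the pyrolysis temperature
# must supply the heat needed to bring the feed up to that temperature.
#   m_ash * cp_ash * (T_ash - T_pyr) = m_feed * cp_feed * (T_pyr - T_feed)

def ash_recycle_ratio(t_ash, t_pyr, t_feed, cp_ash=1.0, cp_feed=1.2):
    """Return kg of recycled ash required per kg of preheated feed."""
    heat_needed = cp_feed * (t_pyr - t_feed)    # kJ per kg of feed
    heat_per_kg_ash = cp_ash * (t_ash - t_pyr)  # kJ per kg of ash
    return heat_needed / heat_per_kg_ash

# Assumed illustrative temperatures (degrees C): ash leaving the
# combustion zone, target pyrolysis temperature, preheated feed.
print(ash_recycle_ratio(t_ash=750, t_pyr=500, t_feed=250))  # -> 1.2
```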
The advantages of the ATP technology for shale oil extraction lie in its simple and robust design, energy self-sufficiency, minimal process water requirements, ability to handle fine particles, and high oil yields. It is particularly suited for processing materials with otherwise low oil yield. The mechanical transfer of solids through the machine does not involve moving parts and it achieves improved process efficiencies through solid-to-solid heat transfer. Most of the process energy (over 80%) is produced by combustion of char and produced oil shale gas; external energy inputs are minimal. The oil yields are about 85–90% of Fischer Assay. The organic carbon content of the process residue (spent shale) is less than 3%. The process produces only small amounts of contaminated water with low concentrations of phenols. These advantages also apply to its oil sands applications, including increased oil yield, a simplified process flow, reduction of bitumen losses to tailings, elimination of the need for tailing ponds, improvement in energy efficiency compared with the hot water extraction process, and elimination of requirements for chemical and other additives.
A complication of the ATP is that retorting operations can reach temperatures at which carbonate minerals within the shale decompose, increasing greenhouse gas emissions.
Operations
As of 2008, ATP was used by the United States Environmental Protection Agency at a PCB-contaminated site near Buffalo, New York, and at Waukegan Harbor, Illinois.
UMATAC Industrial Processes operates a pilot processor in Calgary, Alberta, that handles 5 tons of oil shale per hour for large-scale tests of different oil shales. The Fushun Mining Group of China has built a 250-tonnes-per-hour ATP plant that began commissioning in 2010. Jordan Energy and Mining Ltd planned to use the ATP technology for extracting oil from the Al Lajjun and Attarat oil shale deposits in Jordan.
See also
Shale oil extraction
Galoter process
Petrosix process
Kiviter process
TOSCO II process
Fushun process
Paraho process
Lurgi–Ruhrgas process
References
Oil shale technology
Bituminous sands
Waste treatment technology
Petroleum industry in Alberta | Alberta Taciuk process | [
"Chemistry",
"Engineering"
] | 941 | [
"Bituminous sands",
"Waste treatment technology",
"Water treatment",
"Petroleum technology",
"Oil shale technology",
"Synthetic fuel technologies",
"Environmental engineering",
"Asphalt"
] |
23,664,140 | https://en.wikipedia.org/wiki/Estrogen%20dominance | Estrogen dominance (ED) is a theory about a metabolic state where the level of estrogen outweighs the level of progesterone in the body. This is said to be caused by a decrease in progesterone without a subsequent decrease in estrogen.
The theory was proposed by John R. Lee and Virginia Hopkins in their 1996 book, What Your Doctor May Not Tell You About Menopause: The Breakthrough Book on Natural Progesterone. In their book Lee and Hopkins assert that ED causes fatigue, depression, anxiety, low libido, weight gain specifically in the midsection, water retention, headaches, mood swings, white spots on fingernails, and fibrocystic breasts. The book criticizes estrogen replacement therapy and proposes the use of "natural progesterone" for menopausal women in order to alleviate a variety of complaints. Lee's theories have been criticized for being inadequately supported through science, being primarily based on anecdotal evidence with no rigorous research to support them.
Estrogen dominance can affect both men and women.
Proponents
Estrogen dominance is widely discussed by many proponents and on many alternative medicine websites, including:
Christiane Northrup, former obstetrics and gynaecology physician, believes that estrogen dominance is linked to "allergies, autoimmune disorders, breast cancer, uterine cancer, infertility, ovarian cysts, and increased blood clotting, and is also associated with acceleration of the aging process." She believes that ED can be reduced by several methods including taking multi-vitamins, using progesterone cream, decreasing stress and detoxifying the liver.
Nisha Chellam, an internal medicine and holistic and integrative health physician, admits that "estrogen dominance isn't an official medical diagnosis" but believes that it is "an under-diagnosed condition." The list of symptoms Chellam attributes to ED includes "unexplained weight gain, difficulty losing weight, breast tenderness, subcutaneous fat, heavy periods, missing periods, prolonged cycles, painful periods, premenstrual dysphoric disorder, infertility, mood swings, insomnia, headaches and migraines."
Bob Wood, R.Ph., lists the symptoms of estrogen dominance as "fibrocystic and tender breasts, heavy menstrual bleeding, irregular menstrual cycles, mood swings, vasomotor symptoms, weight gain and uterine fibroids" and believes that testing and "balancing hormones is of benefit to women of all ages".
Research
Extensive research has been conducted on all aspects of estrogen, including its mechanism of action, contraindications to estrogen supplementation and estrogen toxicity. Research on hormone replacement therapies has indicated that hormone replacement did not help prevent heart disease and increased risk for some medical conditions. Research conducted by Alfred Plechner points to cortisol as a possible cause of naturally elevated estrogen. "The cortisol abnormality creates a domino effect on feedback loops involving the hypothalamus–pituitary–adrenal axis. In this scenario, estrogen becomes elevated..."
References
Alternative medicine
Naturopathy
Sex hormones | Estrogen dominance | [
"Biology"
] | 671 | [
"Behavior",
"Sexuality",
"Sex hormones"
] |
6,234,485 | https://en.wikipedia.org/wiki/Viral%20culture | Viral culture is a laboratory technique in which samples of a virus are applied to different cell lines to test the virus's ability to infect them. If the cells show changes, known as cytopathic effects, then the culture is positive.
Traditional viral culture has been generally superseded by shell vial culture, in which the sample is centrifuged onto a single layer of cells and viral growth is measured by antigen detection methods. This greatly reduces the time to detection for slow growing viruses such as cytomegalovirus, for which the method was developed. In addition, the centrifugation step in shell vial culture enhances the sensitivity of this method because after centrifugation, the viral particles of the sample are in close proximity to the cells.
Human and monkey cells are used in both traditional viral culture and shell vial culture.
Human virus types that can be identified by viral culture include adenovirus, cytomegalovirus, enteroviruses, herpes simplex virus, influenza virus, parainfluenza virus, rhinovirus, respiratory syncytial virus, varicella zoster virus, measles and mumps. For these, the final identification method is generally immunofluorescence, with the exception of cytomegalovirus and rhinovirus, whose identification in a viral culture is determined by cytopathic effects.
Research has explored the suitability of viral culture for testing for SARS-CoV-2.
See also
Cell culture
Instruments used in microbiology
Laboratory diagnosis of viral infections
Viral plaque
Viral disease testing
References
External links
Medical tests
Virology | Viral culture | [
"Biology"
] | 324 | [
"Virus stubs",
"Viruses"
] |
6,234,521 | https://en.wikipedia.org/wiki/Building%20performance | Building performance is an attribute of a building that expresses how well that building carries out its functions. It may also relate to the performance of the building construction process. Categories of building performance are quality (how well the building fulfills its functions), resource savings (how much of a particular resource is needed to fulfill its functions) and workload capacity (how much the building can do). The performance of a building depends on the response of the building to an external load or shock. Building performance plays an important role in architecture, building services engineering, building regulation, architectural engineering and construction management. Furthermore, improving building performance (particularly energy efficiency) is important for addressing climate change, since buildings account for 30% of global energy consumption, resulting in 27% of global greenhouse gas emissions. Prominent building performance aspects are energy efficiency, occupant comfort, indoor air quality and daylighting.
Background
Building performance has been of interest to humans since the very first shelters were built for protection from the weather, natural enemies and other dangers. Initially, design and performance were managed by craftsmen who combined their expertise in both domains. More formal approaches to building performance appeared in the 1970s and 1980s, with seminal works being the book on Building Performance and CIB Report 64. Further progress on building performance studies took place in parallel with the development of building science as a discipline, and with the introduction of personal computing (especially computer simulation) in the field; for a good overview of the role of simulation in building design see the chapter by Augenbroe. A more general overview that also includes physical measurement, expert judgement and stakeholder evaluation is presented in the book Building Performance Analysis. While energy efficiency, thermal comfort, indoor air quality and (day)lighting are very prominent in the debate on building performance, there is a much longer list of building performance aspects that includes things like resistance against burglary and flexibility for change of use; for an overview see the building performance analysis platform website in the external links below.
Building performance standards
There are several different building performance standards widely used for designing building codes and energy-efficiency certifications. For instance, the standards produced by ASHRAE (American Society of Heating, Refrigeration, and Air Conditioning Engineers) and the IECC (International Energy Conservation Code) have been widely used to inform local building codes and energy-efficiency certification programs, such as Passive House, Energy Star, and LEED. Building performance standards include specifications on the building envelope (which includes the windows, walls, roofs, and foundation), the HVAC system, electric lighting, hot water consumption, and home appliances, among others.
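As a toy illustration of one envelope-level performance check, the sketch below estimates steady-state transmission heat loss from component U-values and areas; the component figures and temperatures are hypothetical assumptions, not values taken from any of the standards above.

```python
# Steady-state transmission heat loss through the building envelope:
#   Q = sum over components of U * A * (T_inside - T_outside)
# U in W/(m^2*K), A in m^2, temperatures in degrees C.

ENVELOPE = {            # assumed illustrative component values
    "walls":   {"u": 0.30, "area": 120.0},
    "roof":    {"u": 0.20, "area": 80.0},
    "windows": {"u": 1.40, "area": 25.0},
    "floor":   {"u": 0.25, "area": 80.0},
}

def transmission_loss_watts(t_in, t_out, envelope=ENVELOPE):
    dt = t_in - t_out
    return sum(c["u"] * c["area"] for c in envelope.values()) * dt

print(f"{transmission_loss_watts(t_in=20.0, t_out=-5.0):.0f} W")  # -> 2675 W
```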
See also
Building energy simulation
Ecological design
Energy audit
Environmental impact assessment
Green retrofit
Sociology of architecture
Sustainable architecture
Sustainable design
Weatherization
References
External links
ASHRAE - measuring commercial building performance
Global Buildings Performance Network
BPI Building Performance Institute - U.S. organization setting home performance technical standards
Building Performance Association - U.S. trade association of home performance contractors and others promoting performance based energy retrofits.
Building Performance Journal - Home performance articles.
Platform for discussion of theory on building performance - Building Performance Analysis book companion website
Building engineering
Energy conservation | Building performance | [
"Engineering"
] | 639 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
6,238,120 | https://en.wikipedia.org/wiki/Phi%20value%20analysis | Phi value analysis, φ analysis, or φ-value analysis is an experimental protein engineering technique for studying the structure of the folding transition state of small protein domains that fold in a two-state manner. The structure of the folding transition state is hard to find using methods such as protein NMR or X-ray crystallography because folding transition states are mobile and partly unstructured by definition. In φ-value analysis, the folding kinetics and conformational folding stability of the wild-type protein are compared with those of point mutants to find phi values. These measure the mutant residue's energetic contribution to the folding transition state, which reveals the degree of native structure around the mutated residue in the transition state, by accounting for the relative free energies of the unfolded state, the folded state, and the transition state for the wild-type and mutant proteins.
The protein's residues are mutated one by one to identify residue clusters that are well-ordered in the folding transition state. These residues' interactions can be checked by double-mutant-cycle analysis, in which the single-site mutants' effects are compared to the double mutants'. Most mutations are conservative and replace the original residue with a smaller one (cavity-creating mutations) like alanine, though tyrosine-to-phenylalanine, isoleucine-to-valine and threonine-to-serine mutants can be used too. Chymotrypsin inhibitor, SH3 domains, WW domain, individual domains of proteins L and G, ubiquitin, and barnase have all been studied by φ analysis.
Mathematical approach
Phi is defined thus:

$$\phi = \frac{\Delta G^{\ddagger}_{\mathrm{W}} - \Delta G^{\ddagger}_{\mathrm{M}}}{\Delta G^{\mathrm{N}}_{\mathrm{W}} - \Delta G^{\mathrm{N}}_{\mathrm{M}}}$$

Here $\Delta G^{\ddagger}_{\mathrm{W}}$ is the difference in energy between the wild-type protein's transition and denatured states, $\Delta G^{\ddagger}_{\mathrm{M}}$ is the same energy difference for the mutant protein, and $\Delta G^{\mathrm{N}}_{\mathrm{W}}$ and $\Delta G^{\mathrm{N}}_{\mathrm{M}}$ are the corresponding differences in energy between the native and denatured states. The phi value is interpreted as the degree to which the mutation destabilizes the transition state relative to the folded state.
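A minimal numerical sketch of this definition, assuming hypothetical inputs: the transition-state term is derived from folding rate constants via the usual transition-state-theory relation, and the equilibrium destabilization is taken as given. In practice both would come from chevron-plot kinetics and equilibrium unfolding experiments.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)

def ddG_ts_from_rates(kf_wt, kf_mut, temp_k=298.0):
    """Destabilization of the transition state relative to the denatured
    state, from folding rate constants: ddG = -R*T*ln(kf_mut / kf_wt).
    Positive when the mutant folds more slowly."""
    return -R * temp_k * math.log(kf_mut / kf_wt)

def phi_value(ddG_ts, ddG_fold):
    """phi = ddG(transition state) / ddG(native state), both measured
    relative to the denatured state and taken as destabilizations
    (positive for a destabilizing mutation)."""
    return ddG_ts / ddG_fold

# Hypothetical example: the mutant folds about 2.2x more slowly and is
# destabilized by 8.0 kJ/mol at equilibrium.
ddg_ts = ddG_ts_from_rates(kf_wt=100.0, kf_mut=45.0)  # ~2.0 kJ/mol
print(round(phi_value(ddg_ts, ddG_fold=8.0), 2))      # -> 0.25, site mostly unstructured in the transition state
```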
Though φ may have been meant to range from zero to one, negative values can appear. A value of zero suggests the mutation doesn't affect the structure of the folding pathway's rate-limiting transition state, and a value of one suggests the mutation destabilizes the transition state as much as the folded state; values near zero suggest the area around the mutation is relatively unfolded or unstructured in the transition state, and values near one suggest the transition state's local structure near the mutation site is similar to the native state's. Conservative substitutions on the protein's surface often give phi values near one. When φ is well between zero and one, it is less informative as it doesn't tell us which is the case:
The transition state itself is partly structured; or
There are two protein populations of near-equal numbers, one kind which is mostly-unfolded and the other which is mostly-folded.
Key assumptions
Phi value analysis assumes Hammond's postulate, which states that energy and chemical structure are correlated. Though the relationship between the structures of the folding intermediate and the native state may mirror the relationship between their energies when the energy landscape has a well-defined, deep global minimum, free energy destabilizations may not give useful structural information when the energy landscape is flatter or has many local minima.
Phi value analysis assumes the folding pathway isn't significantly altered, though the folding energies may be. As nonconservative mutations may not bear this out, conservative substitutions, though they may give smaller energetic destabilizations which are harder to detect, are preferred.
Restricting φ to values greater than zero is the same as assuming the mutation increases the stability and lowers the energy of neither the native nor the transition state. Along the same lines, it is assumed that the interactions that stabilize a folding transition state are like those of the native structure, though some protein folding studies have found that stabilizing non-native interactions in a transition state facilitates folding.
Example: barnase
Alan Fersht pioneered phi value analysis in his study of the small bacterial protein barnase. Using molecular dynamics simulations, he found that the transition state between folding and unfolding looks like the native state and is the same no matter the reaction direction. Phi varied with the mutation location as some regions gave values near zero and others near one. The distribution of φ values throughout the protein's sequence agreed with all of the simulated transition state but one helix, which folded semi-independently and made native-like contacts with the rest of the protein only once the transition state had formed fully. Such variation in the folding rate in one protein makes it hard to interpret φ values, as the transition state structure must otherwise be compared to folding-unfolding simulations, which are computationally expensive.
Variants
Other 'kinetic perturbation' techniques for studying the folding transition state have appeared recently. Best known is the psi (ψ) value, which is found by engineering two metal-binding amino acid residues like histidine into a protein and then recording the folding kinetics as a function of metal ion concentration, though Fersht thought this approach difficult. A 'cross-linking' variant of the φ-value was used to study segment association in a folding transition state as covalent crosslinks like disulfide bonds were introduced. φ-T value analysis has been used as an extension of φ-value analysis to measure the response of mutants as a function of temperature to separate enthalpic and entropic contributions to the transition state free energy.
Limitations
The error in equilibrium stability and aqueous (un)folding rate measurements may be large when values measured in solutions with denaturants must be extrapolated to nearly denaturant-free aqueous solutions, or when the stability difference between the native and mutant protein is 'low', meaning less than 7 kJ/mol. This may cause φ to fall beyond the zero-one range. Calculated φ values depend strongly on how many data points are available. A study of 78 mutants of the WW domain with up to four mutations per residue has quantified what types of mutations avoid interference from native state flexibility, solvation, and other effects, and statistical analysis shows that reliable information about transition state perturbation can be obtained from large mutant screens.
See also
Chevron plot
Denaturation midpoint
Equilibrium unfolding
References
Protein structure
Protein folding
Protein engineering
Protein methods | Phi value analysis | [
"Chemistry",
"Biology"
] | 1,283 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Structural biology",
"Protein structure"
] |
6,239,416 | https://en.wikipedia.org/wiki/Frequency%20domain%20sensor | A frequency domain (FD) sensor is an instrument developed for measuring soil moisture content. The instrument has an oscillating circuit, the sensing part of the sensor is embedded in the soil, and the operating frequency depends on the soil's dielectric constant.
Types of sensors
Capacitance probe, or fringe capacitance sensor. Capacitance probes use capacitance to measure the dielectric permittivity of the soil. The volume of water in the total volume of soil most heavily influences the dielectric permittivity of the soil because the dielectric constant of water (80) is much greater than that of the other constituents of the soil (mineral soil: 4, organic matter: 4, air: 1). Thus, when the amount of water in the soil changes, the probe measures a change in capacitance (from the change in dielectric permittivity) that can be directly correlated with a change in water content. Circuitry inside some commercial probes changes the capacitance measurement into a proportional millivolt output. Other configurations resemble the neutron probe, where an access tube made of PVC is installed in the soil. The probe consists of a sensing head at a fixed depth. The sensing head consists of an oscillator circuit whose frequency is determined by an annular electrode, a fringe-effect capacitor, and the dielectric constant of the soil.
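To illustrate how a dielectric reading is converted into a moisture value, the sketch below applies the empirical Topp et al. (1980) polynomial, a widely used generic calibration for mineral soils; as noted under the limitations below, FD sensors generally require soil-specific calibration, so this curve is only a first approximation.

```python
def topp_moisture(ka):
    """Volumetric water content (m^3/m^3) from apparent dielectric
    permittivity ka, using the empirical Topp et al. (1980) polynomial
    for mineral soils. Soil-specific calibration is usually preferred."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Water has a dielectric constant near 80 and dry mineral soil near 4,
# so wetter soil gives a higher permittivity reading.
for ka in (4, 10, 25):
    print(ka, round(topp_moisture(ka), 3))
# ka=4  -> ~0.055  (near dry)
# ka=10 -> ~0.188
# ka=25 -> ~0.400  (near saturation)
```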
Electrical impedance sensor, which consists of soil probes and uses electrical impedance measurement. The most common configuration is based on the standing wave principle (Gaskin & Miller, 1996). The device comprises a 100 MHz sinusoidal oscillator, a fixed impedance coaxial transmission line, and probe wires which are buried in the soil. The oscillator signal is propagated along the transmission line into the soil probe, and if the probe's impedance differs from that of the transmission line, a proportion of the incident signal is reflected back along the line towards the signal source.
Benefits and limitations
Compared to time domain reflectometer (TDR), FD sensors are cheaper to build and have a faster response time. However, because of the complex electrical field around the probe, the sensor needs to be calibrated for different soil types. Some commercial sensors have been able to remove the soil type sensitivity by using a high frequency.
References
Gaskin G.J., Miller J.D. 1996. Measurement of soil water content using a simplified impedance measuring technique. Journal of Agricultural Engineering Research 63, 153-160.
Campbell C.S., Campbell G.S., Cobos D.R.2004. Response of Low Cost Dielectric Moisture Sensor to Temperature Variation. Eos Trans. AGU, 85(17), Jt. Assem. Suppl. Abstract NS44A-05.
Soil physics
Hydrology
Physical chemistry
Soil mechanics
Measuring instruments
Impedance measurements | Frequency domain sensor | [
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 608 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Physical quantities",
"Soil mechanics",
"Soil physics",
"Measuring instruments",
"Impedance measurements",
"nan",
"Environmental engineering",
"Physical chemistry",
"Electrical resistance and conductance"
] |
22,313,894 | https://en.wikipedia.org/wiki/Molecular%20beam | A molecular beam is produced by allowing a gas at higher pressure to expand through a small orifice into a chamber at lower pressure to form a beam of particles (atoms, free radicals, molecules or ions) moving at approximately equal velocities, with very few collisions between the particles. Molecular beams are useful for fabricating thin films in molecular beam epitaxy and artificial structures such as quantum wells, quantum wires, and quantum dots. Molecular beams have also been applied as crossed molecular beams. The molecules in the molecular beam can be manipulated by electrical fields and magnetic fields. Molecules can be decelerated in a Stark decelerator or in a Zeeman slower.
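For a rough sense of the beam velocities involved, the terminal speed of a supersonic expansion of an ideal monatomic gas follows from converting the source enthalpy into directed motion; the sketch below uses this standard textbook estimate, with an illustrative (assumed) source condition.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
AMU = 1.66053907e-27  # atomic mass unit, kg

def terminal_speed(mass_amu, t_source):
    """Terminal speed (m/s) of a supersonic beam of an ideal monatomic
    gas: the source enthalpy of (5/2) k T0 per atom is converted into
    directed kinetic energy, so v = sqrt(5 k T0 / m)."""
    return math.sqrt(5.0 * K_B * t_source / (mass_amu * AMU))

# Helium expanding from an assumed room-temperature (300 K) source:
print(round(terminal_speed(4.0, 300.0)))  # -> ~1766 m/s
```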
History
The first to study atomic beams was Louis Dunoyer de Segonzac in 1911, but these were simple experiments to confirm that atoms travelled in straight lines when not acted on by external forces.
In 1921, Hartmut Kallmann and Fritz Reiche wrote about the deflection of beams of polar molecules in an inhomogeneous electric field, with an ultimate aim of measuring their dipole moments.
Seeing the page proofs for the Kallmann and Reiche work prompted Otto Stern, at the University of Hamburg and University of Frankfurt am Main, to rush publication of his work with Walther Gerlach on what later became known as the Stern–Gerlach experiment. (Stern's paper references the preprint, but the Kallmann and Reiche work would go largely unnoticed.)
When the 1922 Stern–Gerlach paper appeared it caused a sensation: they claimed to have experimentally demonstrated "space quantization", clear evidence of quantum effects at a time when classical models were still considered viable. The initial quantum explanation of the measurement, as an observation of orbital angular momentum, was not correct. Five years of intense work on quantum theory were needed before it was realized that the experiment was in fact the first demonstration of quantum electron spin. Stern's group would go on to create pioneering experiments with atomic beams, and later with molecular beams. The advances of Stern and collaborators led to decisive discoveries including: the discovery of space quantization; de Broglie matter waves; anomalous magnetic moments of the proton and neutron; recoil of an atom on emission of a photon; and the limitation of scattering cross-sections for molecular collisions imposed by the uncertainty principle.
The first to report on the relationship between dipole moments and deflection in a molecular beam (using binary salts such as KCl) was Erwin Wrede in 1927.
In 1939 Isidor Rabi invented a molecular beam magnetic resonance method in which two magnets placed one after the other create an inhomogeneous magnetic field. The method was used to measure the magnetic moment of several lithium isotopes with molecular beams of LiCl, LiF and dilithium. This method is a predecessor of NMR. The invention of the maser in 1957 by James P. Gordon, Herbert J. Zeiger and Charles H. Townes was made possible by a molecular beam of ammonia and a special electrostatic quadrupole focuser.
The study of molecular beam led to the development of molecular-beam epitaxy in the 1960s.
See also
Norman Ramsey
John B. Fenn
F.M. Devienne
Dudley R. Herschbach
References
Quantum electronics
Chemical physics | Molecular beam | [
"Physics",
"Chemistry",
"Materials_science"
] | 673 | [
"Applied and interdisciplinary physics",
"Quantum electronics",
"Quantum mechanics",
"Condensed matter physics",
"nan",
"Nanotechnology",
"Chemical physics"
] |
22,315,224 | https://en.wikipedia.org/wiki/Electromethanogenesis | Electromethanogenesis is a form of electrofuel production where methane is produced by direct biological conversion of electrical current and carbon dioxide.
Methane producing technologies garnered interest from the scientific community prior to 2000, but electromethanogenesis did not become a significant area of interest until 2008. Publications concerning catalytic methanation have increased from 44 to over 130 since 2008. Electromethanogenesis has drawn more research due to its proposed applications. The production of methane from electrical current may provide an approach to renewable energy storage. Electrical current produced from renewable energy sources may, through electromethanogenesis, be converted into methane which may then be used as a biofuel. It may also be a useful method for the capture of carbon dioxide which may be used for air purification.
In nature, methane formation occurs biotically and abiotically. Abiogenic methane is produced on a smaller scale and the required chemical reactions do not necessitate organic materials. Biogenic methane is produced in anaerobic natural environments where methane forms as the result of the breakdown of organic materials by microbes—or microorganisms. Researchers have found that the biogenic methane production process can be replicated in a laboratory environment through electromethanogenesis. The reduction of CO2 in electromethanogenesis is facilitated by an electrical current at a biocathode in a microbial electrolysis cell (MEC) and with the help of microbes and electrons (Equation 1) or abiotically produced hydrogen (Equation 2).
(1) CO2 + 8H+ + 8e− ↔ CH4 + 2H2O
(2) CO2 + 4H2 ↔ CH4 + 2H2O
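Equation 1 fixes the electron stoichiometry at eight electrons per methane molecule. A minimal sketch, assuming an idealized cell and hypothetical measurements, converts the charge passed into the maximum possible methane yield and a coulombic (Faradaic) efficiency:

```python
F = 96485.0  # Faraday constant, C per mol of electrons

def max_methane_mol(charge_c):
    """Upper bound on methane (mol) from charge passed (coulombs),
    using the 8-electron stoichiometry of CO2 + 8H+ + 8e- -> CH4 + 2H2O."""
    return charge_c / (8.0 * F)

def coulombic_efficiency(ch4_mol, charge_c):
    """Fraction of the electrons passed that were recovered as methane."""
    return ch4_mol / max_methane_mol(charge_c)

# Hypothetical run: 0.10 A held for 24 h, with 9.8 mmol CH4 recovered.
q = 0.10 * 24 * 3600                              # 8640 C
print(round(max_methane_mol(q) * 1000, 2))        # ~11.19 mmol at 100% efficiency
print(round(coulombic_efficiency(9.8e-3, q), 2))  # ~0.88
```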
Biocathode
A biocathode is a cathode used in a microbial electrolysis cell during electromethanogenesis that utilizes microorganisms to catalyze the process of accepting electrons and protons from the anode. A biocathode is usually made of a cheap material, such as carbon or graphite, like the anode in the MEC. The microbe population that is placed on the biocathode must be able to pick up electrons from the electrode material (carbon or graphite) and convert those electrons to hydrogen.
Mechanism
The mechanism of electromethanogenesis is as follows. Water is introduced into the system with the anode, biocathode, and microbes. At the anode, microbes attract H2O molecules, which are oxidized once an electrical current is supplied from the power source. Oxygen is released on the anode side. The protons released by oxidation of the H2O migrate across the membrane, while the electrons travel through the external circuit to the material that makes up the biocathode. The microbes on the biocathode are able to take up electrons from the biocathode material and combine them with the protons, driving the major pathway of methane production in electromethanogenesis, CO2 reduction. CO2 is brought in on the biocathode side of the system, where it is reduced to yield H2O and methane (CH4). The methane produced can then be released from the biocathode side and stored.
Limitations
One limitation is the energy loss in methane-producing bioelectrochemical systems. This occurs as a result of overpotentials occurring at the anode, membrane, and biocathode. The energy loss reduces efficiency significantly. Another limitation is the biocathode. Because the biocathode is so important in electron exchange and methane formation, its make-up can have a dramatic effect on the efficiency of the reaction. Efforts are being made to improve the biocathodes used in electromethanogenesis through combining new and existing materials, reshaping the materials, or applying different "pre-treatments" to the biocathode surface, thereby increasing biocompatibility.
See also
Bioelectrochemical reactor
Electrochemical energy conversion
Electrochemical engineering
Electrochemical reduction of carbon dioxide
Electrohydrogenesis
Microbial fuel cell
Photoelectrolysis
Sabatier reaction
References
Environmental engineering
Electrochemistry
Biotechnology
Bioelectrochemistry
Electrochemical engineering | Electromethanogenesis | [
"Chemistry",
"Engineering",
"Biology"
] | 888 | [
"Bioelectrochemistry",
"Chemical engineering",
"Electrochemical engineering",
"Biotechnology",
"Electrochemistry",
"Civil engineering",
"nan",
"Environmental engineering",
"Electrical engineering"
] |
2,595,419 | https://en.wikipedia.org/wiki/Aluminum%20building%20wiring | Aluminum building wiring is a type of electrical wiring for residential construction or houses that uses aluminum electrical conductors. Aluminum provides a better conductivity-to-weight ratio than copper, and therefore is also used for wiring power grids, including overhead power transmission lines and local power distribution lines, as well as for power wiring of some airplanes. Utility companies have used aluminum wire for electrical transmission in power grids since around the late 1800s to the early 1900s. It has cost and weight advantages over copper wires. Aluminum in power transmission and distribution applications is still the preferred wire material today.
In North American residential construction, aluminum wire was used for wiring entire houses for a short time from the 1960s to the mid-1970s during a period of high copper prices. Electrical devices (outlets, switches, lighting, fans, etc.) at the time were not designed with the particular properties of the aluminum wire being used in mind, and there were some issues related to the properties of the wire itself, making the installations with aluminum wire much more susceptible to problems. Revised manufacturing standards for both the wire and the devices were developed to reduce the problems. Existing homes with this older aluminum wiring used in branch circuits present a potential fire hazard.
In the former communist East Germany (GDR, 1949–1990), aluminum or copper-clad aluminium wire ("AlCu-Kabel") had to be used for wiring, as copper was expensive to import. While all devices were designed for aluminum during that era, this ended with unification in 1990, when standard Western European equipment became available and the national publicly owned enterprises (Volkseigener Betrieb) went out of business.
Materials
Aluminum wire has been used as an electrical conductor for a considerable period of time, particularly by electrical utilities for power transmission lines, which have been in use since shortly after the beginning of modern power distribution systems in the late 1880s. Aluminum wire requires a larger wire gauge than copper wire to carry the same current, but is still less expensive than copper wire for a particular application.
Aluminum alloys used for electrical conductors are only approximately 61% as conductive as copper of the same cross-section, but aluminum's density is 30.5% that of copper. Accordingly, one pound of aluminum has the same current-carrying capacity as two pounds of copper. Since copper costs about four times as much as aluminum by weight, aluminum wires are one-eighth the cost of copper wire of the same conductivity. The lower weight of aluminum wires in particular makes these electrical conductors well suited for use in power distribution systems by electrical utilities, as supporting towers or structures only need to support half the weight of wires to carry the same current.
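The weight and cost claims above follow directly from the quoted conductivity, density, and price ratios; a quick sketch using only those figures:

```python
# Figures quoted above: aluminum has ~61% of copper's conductivity and
# ~30.5% of its density, and copper costs ~4x as much per unit weight.
CONDUCTIVITY_RATIO = 0.61   # aluminum / copper
DENSITY_RATIO = 0.305       # aluminum / copper
COPPER_PRICE_RATIO = 4.0    # copper price / aluminum price, by weight

# For equal conductance, the aluminum conductor needs a larger
# cross-section, in proportion to the conductivity shortfall:
area_ratio = 1.0 / CONDUCTIVITY_RATIO           # ~1.64x the copper area
weight_ratio = area_ratio * DENSITY_RATIO       # ~0.50 -- half the weight
cost_ratio = weight_ratio / COPPER_PRICE_RATIO  # ~0.125 -- one-eighth the cost

print(round(area_ratio, 2), round(weight_ratio, 2), round(cost_ratio, 3))
# -> 1.64 0.5 0.125
```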
In the early 1960s when there was a housing construction boom in North America and the price of copper spiked, aluminum building wire was manufactured using utility grade AA-1350 aluminum alloy in sizes small enough to be used for lower load branch circuits in homes. In the late 1960s problems and failures related to branch circuit connections for building wire made with the utility grade AA-1350 alloy aluminum began to surface, resulting in a re-evaluation of the use of that alloy for building wire and an identification of the need for newer alloys to produce aluminum building wire. The first 8000 series electric conductor alloy, still widely used in some applications, was developed and patented in 1972 by Aluminum Company of America (Alcoa). This alloy, along with AA-8030 (patented by Olin in 1973) and AA-8176 (patented by Southwire in 1975 and 1980), performs mechanically like copper.
Unlike the older AA-1350 alloy previously used, these AA-8000 series alloys also retain their tensile strength after the standard current cycle test or the current-cycle submersion test (CCST), as described in ANSI C119.4:2004. Depending on the annealing grade, AA-8176 may elongate up to 30% with less springback effect and possesses a higher yield strength (as in cold-worked AA-8076 wire).
A home with aluminum wiring installed prior to the mid-1970s (as the stock of pre-1972 aluminum wire was permitted to be used up) likely has wire made with the older AA-1350 alloy that was developed for power transmission. The AA-1350 aluminum alloy was more prone to problems related to branch circuit wiring in homes due to mechanical properties that made it more susceptible to failures resulting from the electrical devices being used at that time combined with poor workmanship.
The 1977 Beverly Hills Supper Club fire was a notable incident triggered by poorly-installed aluminum wiring.
Modern building construction
Aluminum building wiring for modern construction is manufactured with AA-8000 series aluminum alloy (sometimes referred to as "new technology" aluminum wiring) as specified by the industry standards such as the National Electrical Code (NEC). The use of larger gauge stranded aluminum wire (larger than #8AWG) is fairly common in much of North America for modern residential construction. Aluminum wire is used in residential applications for lower voltage service feeders from the utility to the building. This is installed with materials and methods as specified by the local electrical utility companies. Also, larger aluminum stranded building wire made with AA-8000 series alloy of aluminum is used for electrical services (e.g. service entrance conductors from the utility connection to the service breaker panel) and for larger branch circuits such as for sub-panels, ranges, clothes dryers and air-conditioning units.
In the United States, solid aluminum wires made with AA-8000 series aluminum alloy are allowed for 15 A or 20 A branch circuit wiring according to the National Electrical Code. The terminations need to be rated for aluminum wire, which can be problematic. This is particularly a problem with wire to wire connections made with twist-on connectors. As of 2017 most twist-on connectors for typical smaller branch circuit wire sizes, even those designed to connect copper to aluminum wiring, are not rated for aluminum-to-aluminum connections, with one exception being the Marette #63 or #65 used in Canada but not approved by UL for use in the United States. Also, the size of the aluminum wire needs to be larger compared to copper wire used for the same circuit due to the increased resistance of the aluminum alloys. For example, a 15 A branch circuit supplying standard lighting fixtures can be installed with either #14AWG copper building wire or #12AWG aluminum building wire according to the NEC. However, smaller solid aluminum branch circuit wiring is almost never used for residential construction in North America.
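The sizing rule in the example above can be captured in a small lookup. The 15 A row restates the figures from this paragraph; the 20 A row is the commonly cited NEC pairing and is included here as an assumption, not a quotation from this article.

```python
# Minimum branch-circuit wire gauge (AWG) by conductor material.
# 15 A row is from the text; 20 A row is the commonly cited NEC pairing.
MIN_GAUGE = {
    15: {"copper": 14, "aluminum": 12},
    20: {"copper": 12, "aluminum": 10},
}

def min_gauge(amps, material):
    """A smaller AWG number means a physically larger wire, so aluminum
    always needs a larger conductor than copper on the same circuit."""
    return MIN_GAUGE[amps][material]

print(min_gauge(15, "aluminum"))  # -> 12
```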
Older homes
When utility grade AA-1350 alloy aluminum wire was first used in branch circuit wiring in the early 1960s, solid aluminum wire was installed the same way as copper wire with the same electrical devices.
For smaller branch circuits with solid wires (15 or 20 A circuits) typical connections of an electrical wire to an electrical device are usually made by wrapping the wire around a screw on the device, also called a terminal, and then tightening the screw. At around the same time the use of steel screws became more common than brass screws for electrical devices.
Over time, many of these terminations with solid aluminum wire began to fail due to improper connection techniques and the dissimilar metals having different resistances and significantly different coefficients of thermal expansion, as well as problems with properties of the solid wires. These connection failures generated heat under electrical load and caused overheated connections.
The larger size stranded aluminum wires don't have the same historical problems as solid aluminum wires, and the common terminations for larger size wires are dual-rated terminations called lugs. These lugs are typically made with a coated aluminum alloy, which can accommodate either an aluminum wire or a copper wire. Larger stranded aluminum wiring with proper terminations is generally considered safe, since long-term installations have proven its reliability.
Problems
The use of older solid aluminum wiring in residential construction has resulted in failures of connections at electrical devices, has been implicated in house fires according to the U.S. Consumer Product Safety Commission (CPSC), and in some areas it may be difficult to obtain homeowners insurance for a house with older aluminum wiring. There are several possible reasons why these connections failed. The two main reasons were improper installations (poor workmanship) and the differences in the coefficient of expansion between aluminum wire used in the 1960s to mid-1970s and the terminations, particularly when the termination was a steel screw on an electrical device. The reported hazards are associated with older solid aluminum branch circuit wiring (smaller than #8 AWG).
Improper installations
Many terminations of aluminum wire installed in the 1960s and 1970s that were properly installed continue to operate with no problems. However, problems can develop in the future, particularly if connections were not properly installed initially.
Improper installation, or poor workmanship, includes: not abrading the wires, not applying a corrosion inhibitor, not wrapping wires around terminal screws, wrapping wires around terminal screws the wrong way, and inadequate torque on the connection screws. There can also be problems with connections made with too much torque on the connection screw as it causes damage to the wire, particularly with the softer aluminum wire.
Coefficient of expansion and creep
Most of the problems related to aluminum wire are typically associated with older (pre-1972) AA-1350 alloy solid aluminum wire, sometimes referred to as "old technology" aluminum wiring, as the properties of that wire result in significantly more expansion and contraction than copper wire or modern day AA-8000 series aluminum wire. Older solid aluminum wire also had some problems with a property called creep, which results in the wire permanently deforming or relaxing over time under load.
Aluminum wire used before the mid-1970s had a somewhat higher rate of creep, but a more significant issue was that the same high price of copper driving the use of aluminum wire led to the use of brass-coated steel rather than solid brass screws for terminations at devices such as outlets and switches. Aluminum and steel expand and contract at significantly different rates under thermal load, so a connection can become loose, particularly for older terminations initially installed with inadequate torque of the screws combined with creep of the aluminum over time. Loose connections get progressively worse over time.
This cycle results from the connection loosening slightly, with a reduced contact area at the connection leading to overheating, and allowing intermetallic aluminum–iron and aluminum–copper compounds to be formed between the wire, the screw, and the device conductors under the wire. This resulted in a higher resistance junction, leading to additional overheating. Although many believe that oxidation was the issue, studies have shown that oxidation was not significant in these cases.
Electrical device ratings
Many electrical devices used in the 1960s had smaller plain steel terminal screws, which made the attachment of the aluminum wires being used at that time to these devices much more vulnerable to problems. In the late 1960s, a device specification known as CU/AL (meaning copper-aluminum) was created that specified standards for devices intended for use with aluminum wire. Some of these devices used larger undercut screw terminals to more securely hold the wire.
Unfortunately, CU/AL switches and receptacles failed to work well enough with aluminum wire, and a new specification called CO/ALR (meaning copper-aluminum, revised) was created. These devices employ brass screw terminals that are designed to act as a similar metal to aluminum and to expand at a similar rate, and the screws have even deeper undercuts. The CO/ALR rating is only available for standard light switches and receptacles; CU/AL is the standard connection marking for circuit breakers and larger equipment.
Oxidation
Most metals (with a few exceptions, such as gold) oxidize freely when exposed to air. Aluminium oxide is not an electrical conductor, but rather an electrical insulator. Consequently, the flow of electrons through the oxide layer can be greatly impeded. However, since the oxide layer is only a few nanometers thick, the added resistance is not noticeable under most conditions. When aluminum wire is terminated properly, the mechanical connection breaks the thin, brittle layer of oxide to form an excellent electrical connection. Unless this connection is loosened, there is no way for oxygen to penetrate the connection point to form further oxide.
If inadequate torque is applied to the electrical device termination screw or if the devices are not CO/ALR rated (or at least CU/AL-rated for breakers and larger equipment) this can result in an inadequate connection of the aluminum wire. Also, due to the significant difference in thermal expansion rates of older aluminum wire and steel termination screws connections can loosen over time, allowing the formation of some additional oxide on the wire. However, oxidation was found not to be a substantial factor in failures of aluminum wire terminations.
Joining aluminum and copper wires
Another issue is the joining of aluminum wire to copper wire. In addition to the oxidation that occurs on the surface of aluminum wires which can cause a poor connection, aluminum and copper are dissimilar metals. As a result, galvanic corrosion can occur in the presence of an electrolyte, causing these connections to become unstable over time.
Upgrades and repairs
Several upgrades or repairs are available for homes with older pre-1970s aluminum branch circuit wiring:
Completely rewiring the house with copper wires (usually cost prohibitive)
"Pig-tailing" which involves splicing a short length of copper wire (pigtail) to the original aluminum wire, and then attaching the copper wire to the existing electrical device. The splice of the copper pigtail to the existing aluminum wire can be accomplished with special crimp connectors, special miniature lug-type connectors, or approved twist-on connectors (with special installation procedures). Pig-tailing generally saves time and money, and is possible as long as the wiring itself is not damaged.
However, the U.S. Consumer Product Safety Commission (CPSC) currently recommends only two alternatives for a "permanent repair" using the pig-tailing method. The more extensively tested method uses special crimp-on connectors called COPALUM connectors. As of April 2011, the CPSC has also recognized miniature lug-type connectors called AlumiConn connectors. The CPSC considers the use of pigtails with wire nuts a temporary repair, and even as a temporary repair recommends special installation procedures, and notes that there can still be hazards with attempting the repairs.
COPALUM connectors use a special crimping system that creates a cold weld between the copper and aluminum wire, and is considered a permanent, maintenance-free repair. However, there may not be sufficient length of wires in enclosures to permit a special crimping tool to be used, and the resulting connections are sometimes too large to install in existing enclosures due to limited space (or "box fill"). Installing an enclosure extender for unfinished surfaces, replacing the enclosure with a larger one or installing an additional adjacent enclosure can be done to increase the available space. Also, COPALUM connectors are costly to install, require special tools that cannot simply be purchased and electricians certified to use them by the manufacturer, and it can sometimes be very difficult to find local electricians certified to install these connectors.
The AlumiConn miniature lug connector can also be used for a permanent repair. The only special tool required for an electrician installing them is a special torque screwdriver that should be readily available to qualified electrical contractors. Proper torque on the connectors set screws is critical to having an acceptable repair. However, use of the Alumiconn connectors is a relatively newer repair option for older aluminum wiring compared to other methods, and use of these connectors can have some of the same or similar problems with limited enclosure space as the COPALUM connectors.
Special twist-on connectors (or "wire nuts") are available for joining aluminum to copper wire, which are pre-filled with an antioxidant compound made of zinc dust in polybutene base with silicon dioxide added to the compound to abrade the wires. As of 2014 there was only one twist-on connector rated or "UL Listed" for connecting aluminum and copper branch circuit wires in the U.S., which is the Ideal no. 65 "Twister Al/Cu wire connector". These special twist-on connectors have a distinctive purple color, have been UL Listed for aluminum to copper branch circuit wire connections since 1995, and according to the manufacturer's current literature are "perfect for pig-tailing a copper conductor onto aluminum branch circuit wiring in retrofit applications". The CPSC still considers the use of twist-on connectors, including the Ideal no. 65 "Twister Al/Cu wire connector", to be a temporary repair.
According to the CPSC, even using (listed) twist-on connectors to attach copper pigtails to older aluminum wires as a temporary repair requires special installation procedures, including abrading and pre-twisting the wires. However, the manufacturer's instructions for the Ideal no. 65 Twister only recommends pre-twisting the wires, and does not state it is required. Also, the instructions do not mention physically abrading the wires as recommended by the CPSC, although the manufacturer current literature states the pre-filled "compound cuts aluminum oxide". Some researchers have criticized the UL listing/tests for this wire connector, and there have been reported problems with tests (without pre-twisting) and installations. However, it is unknown if the reported installation problems were associated with unqualified persons attempting these repairs, or not using recommended special installation procedures (such as abrading and pre-twisting the wires as recommended by the CPSC for older aluminum wire, or at least pre-twisting the wires as recommended by Ideal for their connectors).
The use of newer CO/ALR rated devices (switches and receptacles) can be used to replace older devices that did not have the proper rating in homes with aluminum branch circuit wiring to reduce the hazards. These devices are reportedly tested and listed for both AA-1350 and AA-8000 series aluminum wire, and are acceptable according to the National Electrical Code.
However, some manufacturers of CO/ALR devices recommend periodically checking/tightening the terminal screws on these devices which can be hazardous for unqualified individuals to attempt, and there is criticism of their use as a permanent repair as some CO/ALR devices have failed in tests when connected to "old technology" aluminum wire. Furthermore, just installing CO/ALR devices (switches and receptacles) doesn't address potential hazards associated with other connections such as those at ceiling fans, lights and equipment.
See also
Copper-clad aluminium wire
References
"Installing Aluminum Building Wire", Christel Hunter, EC&M Article, 2007.
Electrical wiring
Aluminium | Aluminum building wiring | [
"Physics",
"Engineering"
] | 3,843 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
2,595,698 | https://en.wikipedia.org/wiki/L%C3%BCders%20band | Lüders bands are a type of plastic band, slip band or stretcher-strain mark formed by localized bands of plastic deformation in metals experiencing tensile stresses, common to low-carbon steels and certain Al-Mg alloys. First reported by Guillaume Piobert, and later by W. Lüders, the mechanism that stimulates their appearance is known as dynamic strain aging, or the inhibition of dislocation motion by interstitial atoms (in steels, typically carbon and nitrogen), around which "atmospheres" or "zones" naturally congregate.
As internal stresses tend to be highest at the shoulders of tensile test specimens, band formation is favored in those areas. However, the formation of Lüders bands depends primarily on the microscopic (i.e. average grain size and crystal structure, if applicable) and macroscopic geometries of the material. For example, a tensile-tested steel bar with a square cross-section tends to develop comparatively more bands than would a bar of identical composition having a circular cross-section.
The formation of a Lüders band is preceded by a yield point and a drop in the flow stress. The band then appears as a localized front between plastically deformed and undeformed material that propagates while the crosshead moves at constant velocity. The Lüders band usually starts at one end of the specimen and propagates toward the other end. The visible front on the material usually makes a well-defined angle, typically 50–55° from the specimen axis, as it moves down the sample. During the propagation of the band the nominal stress–strain curve is flat. After the band has passed through the material, the deformation proceeds uniformly with positive strain hardening. Lüders bands sometimes transition into the Portevin–Le Chatelier effect when the temperature or strain rate is changed, which implies that the two are related phenomena. Lüders bands are known as a strain-softening instability.
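Because the nominal stress–strain curve is flat while the band sweeps the gauge length, essentially all of the imposed elongation is absorbed at the band front. A standard kinematic estimate built on that assumption (with illustrative numbers) relates the front velocity to the crosshead velocity and the Lüders strain:

```python
def band_front_velocity(crosshead_velocity, luders_strain):
    """Kinematic estimate: during the stress plateau, the imposed
    elongation rate is accommodated by material crossing the band front
    and jumping from zero strain to the Lueders strain, so
    v_front = v_crosshead / eps_L."""
    return crosshead_velocity / luders_strain

# Illustrative numbers: 1 mm/min crosshead speed, 2% Lueders strain.
print(band_front_velocity(1.0, 0.02))  # -> 50.0 mm/min front velocity
```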
If a sample is stretched beyond the range of the Lüders strain once, no Lüders strain occurs when the sample is deformed again, since the dislocations have already torn themselves away from the interstitial atoms. For this reason, deep drawing sheets are often cold rolled in advance to prevent the formation of stretcher-strain marks during the actual deep drawing process. The formation of Lüders bands can occur again after a period of time, since the interstitial atoms accumulate again around the dislocations through diffusion processes (aging).
See also
Portevin–Le Chatelier effect
Adiabatic shear band
Persistent slip bands
References
Further reading
Richard W. Hertzberg, Deformation and Fracture Mechanics of Engineering Materials, 4th Edition, pp. 29–30
W. Mason, "The Lüders' Lines on Mild Steel", 1910 Proc. Phys. Soc. London 23 305
Materials degradation
Mechanical failure | Lüders band | [
"Materials_science",
"Engineering"
] | 586 | [
"Materials degradation",
"Mechanical failure",
"Materials science",
"Mechanical engineering"
] |
2,595,792 | https://en.wikipedia.org/wiki/Portevin%E2%80%93Le%20Chatelier%20effect | The Portevin–Le Chatelier (PLC) effect describes a serrated stress–strain curve or jerky flow, which some materials exhibit as they undergo plastic deformation, specifically inhomogeneous deformation. This effect has been long associated with dynamic strain aging or the competition between diffusing solutes pinning dislocations and dislocations breaking free of this stoppage.
The onset of the PLC effect occurs when the strain rate sensitivity becomes negative and inhomogeneous deformation starts. This effect can also appear on the specimen's surface and in bands of plastic deformation. The process starts at a so-called critical strain, the minimum strain needed for the onset of the serrations in the stress–strain curve. The critical strain is both temperature and strain rate dependent. The existence of a critical strain is attributed to improved solute diffusivity due to deformation-created vacancies and to an increased mobile dislocation density. Both of these contribute to the instability in substitutional alloys, while interstitial alloys are affected only by the increase in mobile dislocation density.
History
While the effect is named after Albert Portevin and François Le Chatelier, they were not the first to discover it. Félix Savart made the discovery when he observed non-homogeneous deformation during a tensile test of copper strips. He documented the physical serrations in his samples that are currently known as Portevin–Le Chatelier bands. A student of Savart, Antoine Masson, repeated the experiment while controlling for loading rate. Masson observed that under a constant loading rate, the samples would experience sudden large changes in elongation (as large as a few millimeters).
Underlying physics
Much of the underlying physics of the Portevin-Le Chatelier effect lies in a specific case of solute drag creep. Adding solute atoms to a pure crystal introduces a size misfit into the system. This size misfit leads to restriction of dislocation motion. At low temperature, these solute atoms are immobile within the lattice, but at high temperatures, the solute atoms become mobile and interact in a more complex manner with the dislocations. When solute atoms are mobile and the dislocation velocity is not too high, the solute atoms and dislocation can move together where the solute atom decreases the motion of the dislocation.
The Portevin-Le Chatelier effect occurs in the specific case where solute drag creep is occurring and there is an applied stress, with a material dependent range, on the sample. The applied stress causes the velocity of the dislocations to increase, allowing the dislocation to move away from the solute. This process is commonly referred to as “breakaway”. Once the dislocation has moved away from the solute, the stress on it decreases which causes its velocity to decrease. This allows the solute atoms to “catch up” with the dislocation. As soon as the solute atom catches up, the stress on the dislocation significantly increases, causing the process to repeat.
The cyclic changes described above produce serrations in the plastic region of the stress strain diagram of a tensile test that is undergoing the Portevin-Le Chatelier effect. The variation in stress also causes non-homogeneous deformation to occur throughout the sample which can be visible to the naked eye through observation of a rough finish.
Conditions that affect the PLC effect
Temperature
Temperature affects both the speed of band propagation through the material and the critical strain. The speed of band propagation increases with temperature (lower temperatures give lower speeds, higher temperatures higher speeds). The critical strain often decreases initially with increasing temperature.
The temperature effect on the PLC regime is caused by the increased ability of the solutes to diffuse to the dislocations with increasing temperature. Although the mechanism of diffusion is not entirely understood, it is believed that solute atoms diffuse by either volume (high temperature), by diffusion in stacking fault ribbons between partial dislocations (intermediate temperature) or pipe diffusion (low temperature).
Strain rate
While temperature is related to the rate of diffusion, the strain rate determines the time the dislocations take to overcome these obstacles, and it has a dramatic effect on the conditions of the PLC effect. Generally, the critical strain decreases with increasing imposed strain rate. Also, the higher the strain rate, the lower the band speed.
Precipitates
Precipitates, often found in Al alloys (especially those containing Mg), complicate the PLC effect. These precipitates often cause so-called inverse behavior, which changes the effect of both strain rate and temperature on the solid. The presence of precipitates has been shown to influence the appearance and disappearance of serrations in the stress–strain curve.
Grain size
The structure of the material also affects the appearance and the parameters that describe the PLC effect. For example, the magnitude of the stress drops is larger for smaller grain sizes. The critical strain often increases with larger grains, which is linked to the dependence of the dislocation density on grain size. Serration amplitude is greater in Al-Mg alloys with a finer grain size. There is a correlation between an increasing critical strain for the onset of serration and increasing grain size, but some findings indicate that grain size has practically no effect on the band velocity or the band width.
Material finish
Polishing the material affects the onset of the PLC effect and the band velocities. A rougher surface apparently provides more nucleation points of high stress, which help initiate deformation bands. These bands also propagate twice as fast in a polished specimen.
Non effects
The number of vacancies does not directly affect the onset of the PLC effect. It has been found that if a material is pre-strained to half the value required to initiate jerky flow, and is then rested at the test temperature or annealed to remove vacancies (at a temperature low enough that the dislocation structure is not affected), the total critical strain is only slightly decreased, as are the types of serrations that occur.
Serrations descriptors
While properties such as strain rate sensitivity and critical strain mark the beginning of the PLC effect, a classification system has been developed to describe the serrations themselves. The serration types often depend on strain rate, temperature, and grain size. The bands are usually labeled A, B, and C, although some sources add D- and E-type bands. Because types A, B, and C are the most commonly reported in the literature, only they are covered here.
Type A bands
Type A bands are often seen at high strain rates and low temperatures. They develop randomly over the entire specimen and are usually described as continuously propagating with small stress drops.
Type B bands
Type B bands are sometimes described as "hopping" bands, and they appear at medium to high strain rates. Each band typically forms ahead of the previous one in a spatially correlated way. The serrations are more irregular, with smaller amplitudes than type C.
Type C bands
Type C bands are often seen at low applied strain rates or high temperatures. They are identified with randomly nucleated static bands showing large characteristic stress drops in the serrations.
Other notes on band types
The different band types are believed to represent different dislocation states within the bands, and the band type can change along a material's stress–strain curve. Currently there are no models that capture the change in band types.
The Portevin–Le Chatelier (PLC) effect is evidence of non-uniform deformation in commercial CuNi25 alloys at intermediate temperatures. In the CuNi25 alloy it manifests itself as serration-shaped irregularities in the stress–strain curve, demonstrating the instability of force during tension, the heterogeneity of the microstructure, and the presence of many heterogeneous factors affecting the alloy's mechanical properties.
Problems caused by the PLC effect
Because the PLC effect is related to a strengthening mechanism, the strength of steel may increase; however, the plasticity and ductility of a material afflicted by the PLC effect decrease drastically. The PLC effect is known to induce blue brittleness in steel; additionally, the loss of ductility may cause rough surfaces to develop during deformation (Al-Mg alloys are especially susceptible to this), rendering them useless for autobody or casting applications.
References
See also
Lüders band
Materials science | Portevin–Le Chatelier effect | [
"Physics",
"Materials_science",
"Engineering"
] | 1,737 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
2,596,238 | https://en.wikipedia.org/wiki/Shock%20response%20spectrum | A Shock Response Spectrum (SRS) is a graphical representation of a shock, or any other transient acceleration input, in terms of how a Single Degree Of Freedom (SDOF) system (like a mass on a spring) would respond to that input. The horizontal axis shows the natural frequency of a hypothetical SDOF, and the vertical axis shows the peak acceleration which this SDOF would undergo as a consequence of the shock input.
Calculation
The most direct and intuitive way to generate an SRS from a shock waveform is the following procedure:
Pick a damping ratio (or equivalently, a quality factor Q) for your SRS to be based on;
Pick a frequency f, and assume that there is a hypothetical Single Degree of Freedom (SDOF) system with a damped natural frequency of f ;
Calculate (by direct time-domain simulation) the maximum instantaneous absolute acceleration experienced by the mass element of your SDOF at any time during (or after) exposure to the shock in question. This acceleration is a;
Draw a dot at (f,a);
Repeat steps 2–4 for many other values of f, and connect all the dots together into a smooth curve.
The resulting plot of peak acceleration versus test-system frequency is called a shock response spectrum. It is often plotted with frequency in Hz and acceleration in units of g.
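The five-step procedure above translates directly into a short numerical routine. The sketch below (Python with NumPy/SciPy; the half-sine test input and Q = 10 are illustrative assumptions, not part of any standard) simulates the base-excited SDOF at each natural frequency and records the peak absolute acceleration:

```python
import numpy as np
from scipy.signal import lsim

def shock_response_spectrum(t, accel, freqs_hz, q=10.0):
    """Peak absolute SDOF acceleration at each natural frequency (step 3)."""
    zeta = 1.0 / (2.0 * q)              # damping ratio from quality factor Q
    peaks = []
    for f in freqs_hz:                  # step 2: one hypothetical SDOF per f
        wn = 2.0 * np.pi * f            # natural frequency in rad/s
        # Base-acceleration -> absolute-acceleration transfer function of a
        # base-excited SDOF: (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2)
        num = [2.0 * zeta * wn, wn ** 2]
        den = [1.0, 2.0 * zeta * wn, wn ** 2]
        _, response, _ = lsim((num, den), U=accel, T=t)
        peaks.append(np.max(np.abs(response)))   # step 3: peak |acceleration|
    return np.array(peaks)              # steps 4-5: the (f, a) dots of the SRS

# Illustrative input: a 1 ms, 100 g half-sine shock sampled at 100 kHz.
t = np.linspace(0.0, 0.05, 5001)
shock = np.where(t < 1e-3, 100.0 * np.sin(np.pi * t / 1e-3), 0.0)
freqs = np.logspace(1, 4, 50)           # 10 Hz to 10 kHz
srs = shock_response_spectrum(t, shock, freqs)
```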
Example application
Consider a computer chassis containing three cards with fundamental natural frequencies of f1, f2, and f3. Lab tests have previously confirmed that this system survives a certain shock waveform—say, the shock from dropping the chassis from 2 feet above a hard floor. Now, the customer wants to know whether the system will survive a different shock waveform—say, from dropping the chassis from 4 feet above a carpeted floor. If the SRS of the new shock is lower than the SRS of the old shock at each of the three frequencies f1, f2, and f3, then the chassis is likely to survive the new shock. (It is not, however, guaranteed.)
Details and limitations
Any transient waveform can be presented as an SRS, but the relationship is not unique; many different transient waveforms can produce the same SRS (something one can take advantage of through a process called "Shock Synthesis"). Due to only tracking the peak instantaneous acceleration the SRS does not contain all the information in the transient waveform from which it was created.
Different damping ratios produce different SRSs for the same shock waveform. Zero damping produces the maximum response. Very high damping produces a featureless SRS: a horizontal line. The level of damping is characterized by the quality factor Q, which can also be thought of as the transmissibility in the sinusoidal-vibration case. A relative damping of 5% results in a Q of 10. An SRS plot is incomplete if it does not specify the assumed Q value.
An SRS is of little use for fatigue-type damage scenarios, as the transform removes information of how many times a peak acceleration (and inferred stress) is reached.
The SDOF system model also can be used to characterize the severity of vibrations, with two criteria:
the exceeding of characteristic instantaneous stress limits (yield stress, ultimate stress etc.). We then define the extreme response spectrum (ERS), similar to the shock response spectrum;
the damage by fatigue following the application of a large number of cycles, thus taking into account the duration of the vibration (Fatigue damage spectrum (FDS)).
Like many other useful tools, the SRS is not applicable to significantly non-linear systems.
See also
Shock data logger
Shock detector
References
Harris, C., Piersol, A., Harris Shock and Vibration Handbook, Fifth Edition, McGraw-Hill, 2002.
Lalanne, C., Mechanical Vibration and Shock Analysis. Volume 2: Mechanical Shock, Second Edition, Wiley, 2009.
MIL-STD-810G, Environmental Test Methods and Engineering Guidelines, 2000, sect 516.6
External links
FreeSRS, http://freesrs.sourceforge.net/, is a toolbox in the public domain to calculate SRS.
Mechanical vibrations | Shock response spectrum | [
"Physics",
"Engineering"
] | 862 | [
"Structural engineering",
"Mechanics",
"Mechanical vibrations"
] |
2,597,750 | https://en.wikipedia.org/wiki/Detection%20limit | The limit of detection (LOD or LoD) is the lowest signal, or the lowest corresponding quantity to be determined (or extracted) from the signal, that can be observed with a sufficient degree of confidence or statistical significance. However, the exact threshold (level of decision) used to decide when a signal significantly emerges above the continuously fluctuating background noise remains arbitrary and is a matter of policy and often of debate among scientists, statisticians and regulators depending on the stakes in different fields.
Significance in analytical chemistry
In analytical chemistry, the detection limit, lower limit of detection, also termed LOD for limit of detection or analytical sensitivity (not to be confused with statistical sensitivity), is the lowest quantity of a substance that can be distinguished from the absence of that substance (a blank value) with a stated confidence level (generally 99%). The detection limit is estimated from the mean of the blank, the standard deviation of the blank, the slope (analytical sensitivity) of the calibration plot, and a defined confidence factor (e.g. 3.2 is the most commonly accepted value for this arbitrary factor). Another consideration that affects the detection limit is the adequacy and accuracy of the model used to predict concentration from the raw analytical signal.
As a typical example, consider a calibration plot following a linear equation, taken here as the simplest possible model:

y = a + b x

where y corresponds to the signal measured (e.g. voltage, luminescence, energy, etc.), a is the value at which the straight line cuts the ordinate axis, b is the sensitivity of the system (i.e., the slope of the line, or the function relating the measured signal to the quantity to be determined), and x is the value of the quantity (e.g. temperature, concentration, pH, etc.) to be determined from the signal y. The LOD for x is calculated as the x value at which y equals the average value of the blanks plus k times their standard deviation (or, if zero, the standard deviation corresponding to the lowest value measured), where k is the chosen confidence value (e.g. for a confidence of 95% it can be taken as k = 3.2, determined from the limit of blank).
Thus, in this didactic example:

x_LOD = (y_blank + k s_blank − a) / b

where y_blank is the mean blank signal and s_blank its standard deviation.
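A minimal numeric illustration of this formula, assuming invented replicate blank readings and a hypothetical calibration line y = a + b x, might look like:

```python
import numpy as np

# Invented replicate blank signals and a hypothetical calibration y = a + b*x.
blanks = np.array([0.051, 0.049, 0.052, 0.048, 0.050,
                   0.053, 0.047, 0.051, 0.049, 0.050])
a, b = 0.045, 0.82      # intercept and slope (sensitivity) of the calibration
k = 3.2                 # confidence factor quoted in the text

y_lod = blanks.mean() + k * blanks.std(ddof=1)   # signal at the LOD
x_lod = (y_lod - a) / b                          # quantity at the LOD
print(f"x_LOD = {x_lod:.4g} (in the units of the calibration x-axis)")
```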
There are a number of concepts derived from the detection limit that are commonly used. These include the instrument detection limit (IDL), the method detection limit (MDL), the practical quantitation limit (PQL), and the limit of quantitation (LOQ). Even when the same terminology is used, there can be differences in the LOD according to nuances of what definition is used and what type of noise contributes to the measurement and calibration.
The figure below illustrates the relationship between the blank, the limit of detection (LOD), and the limit of quantitation (LOQ) by showing the probability density function for normally distributed measurements at the blank, at the LOD defined as 3 × the standard deviation of the blank, and at the LOQ defined as 10 × the standard deviation of the blank. (The identical spread along the abscissa of these two functions is problematic.) For a signal at the LOD, the alpha error (probability of a false positive) is small (1%). However, the beta error (probability of a false negative) is 50% for a sample that has a concentration at the LOD (red line). This means a sample could contain an impurity at the LOD, but there is a 50% chance that a measurement would give a result less than the LOD. At the LOQ (blue line), there is minimal chance of a false negative.
Instrument detection limit
Most analytical instruments produce a signal even when a blank (matrix without analyte) is analyzed. This signal is referred to as the noise level. The instrument detection limit (IDL) is the analyte concentration required to produce a signal greater than three times the standard deviation of the noise level. This may be measured in practice by analyzing 8 or more standards at the estimated IDL and then calculating the standard deviation of the measured concentrations of those standards.
The detection limit (according to IUPAC) is the smallest concentration, or the smallest absolute amount, of analyte that has a signal statistically significantly larger than the signal arising from the repeated measurements of a reagent blank.
Mathematically, the analyte's signal at the detection limit (S_dl) is given by:

S_dl = S_reag + 3 σ_reag

where S_reag is the mean value of the signal for a reagent blank measured multiple times, and σ_reag is the known standard deviation for the reagent blank's signal.
Other approaches for defining the detection limit have also been developed. In atomic absorption spectrometry the detection limit for a given element is usually determined by analyzing a diluted solution of that element and recording the corresponding absorbance at a given wavelength. The measurement is repeated 10 times. The 3σ of the recorded absorbance signal can be taken as the detection limit for the specific element under the experimental conditions: selected wavelength, type of flame or graphite oven, chemical matrix, presence of interfering substances, instrument type, and so on.
Method detection limit
Often there is more to the analytical method than just performing a reaction or submitting the analyte to direct analysis. Many analytical methods developed in the laboratory, especially those involving delicate scientific instruments, require sample preparation or pretreatment of the samples prior to analysis. For example, it might be necessary to first heat a sample that is to be analyzed for a particular metal with the addition of acid (a digestion process). The sample may also be diluted or concentrated prior to analysis on a given instrument. Additional steps in an analysis method add additional opportunities for error. Since detection limits are defined in terms of error, this naturally increases the measured detection limit. This "global" detection limit (including all the steps of the analysis method) is called the method detection limit (MDL). The practical way to determine the MDL is to analyze seven samples with concentrations near the expected limit of detection. The standard deviation is then determined, and the one-sided Student's t-distribution value is multiplied by the determined standard deviation. For seven samples (with six degrees of freedom) the t value for a 99% confidence level is 3.14. Rather than performing the complete analysis of seven identical samples, if the instrument detection limit is known, the MDL may be estimated by multiplying the instrument detection limit, or lower level of detection, by the dilution applied prior to analyzing the sample solution with the instrument. This estimation, however, ignores any uncertainty that arises from the sample preparation and will therefore probably underestimate the true MDL.
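The seven-replicate procedure can be sketched as follows (Python; the replicate concentrations are invented for the example, and the t value is taken from Student's distribution as in the text):

```python
import numpy as np
from scipy.stats import t as student_t

# Seven invented replicate concentrations near the expected detection limit.
replicates = np.array([1.9, 2.1, 2.0, 2.3, 1.8, 2.2, 2.0])
t_99 = student_t.ppf(0.99, df=len(replicates) - 1)  # ~3.14 for 6 d.o.f.
mdl = t_99 * replicates.std(ddof=1)
print(f"MDL = {mdl:.3g} (same concentration units as the replicates)")
```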
Limit of each model
The issue of the limit of detection, or limit of quantification, is encountered in all scientific disciplines. This explains the variety of definitions and the diversity of jurisdiction-specific solutions developed to address preferences. In the simplest cases, as in nuclear and chemical measurements, definitions and approaches have probably received the clearest and simplest solutions. In biochemical tests and biological experiments, which depend on many more intricate factors, situations involving false-positive and false-negative responses are more delicate to handle. In many other disciplines, such as geochemistry, seismology, astronomy, dendrochronology, climatology, and the life sciences in general, the problem is wider and deals with extracting a signal from a background of noise. It involves complex statistical analysis procedures and therefore also depends on the models used, the hypotheses, and the simplifications or approximations made to handle and manage uncertainties. When the data resolution is poor and different signals overlap, different deconvolution procedures are applied to extract parameters. The use of different phenomenological, mathematical, and statistical models may also complicate the exact mathematical definition of the limit of detection and how it is calculated. This explains why it is not easy to reach a general consensus, if any, on a precise mathematical definition of the limit of detection. One thing is clear, however: it always requires a sufficient number of data points (or accumulated data) and a rigorous statistical analysis to achieve statistical significance.
Limit of quantification
The limit of quantification (LoQ, or LOQ) is the lowest value of a signal (or concentration, activity, response...) that can be quantified with acceptable precision and accuracy.
The LoQ is the limit at which the difference between two distinct signals / values can be discerned with a reasonable certainty, i.e., when the signal is statistically different from the background. The LoQ may be drastically different between laboratories, so another detection limit is commonly used that is referred to as the Practical Quantification Limit (PQL).
See also
References
Further reading
External links
Downloads of articles (a.o. harmonization of concepts by ISO and IUPAC) and an extensive list of references
Analytical chemistry
Measurement
Background radiation | Detection limit | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,879 | [
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"nan"
] |
2,597,872 | https://en.wikipedia.org/wiki/Stamped%20concrete | Stamped concrete is concrete that has been imprinted, or that is patterned, textured, or embossed to resemble brick, slate, flagstone, stone, tile, wood, or various other patterns and textures. The practice of stamping concrete for various purposes began with the ancient Romans. In the nineteenth and twentieth centuries, concrete was sometimes stamped with contractor names and years during public works projects, but by the late twentieth century the term "stamped concrete" came to refer primarily to decorative concrete produced with special modern techniques for use in patios, sidewalks, driveways, pool decks, and interior flooring.
History
The ancient Romans used basic concrete stamping techniques, as evidenced in well-known structures such as the Pantheon. In the late nineteenth and early twentieth centuries, concrete companies who received government bids for public works projects sometimes used concrete stamps featuring the company name and sometimes the year in which the concrete was poured, creating a visible historical record of when certain sidewalks were made.
Concrete manufacturers started experimenting with modern decorative concrete techniques as early as the 1890s. In the 1950s, Brad Bowman, considered the "father" of modern concrete stamping, began developing and patenting new techniques for producing concrete that resembled non-concrete materials, such as flagstone and wood. He used wooden platform stamps that could imprint multiple pieces of concrete at once; later, platform stamps would be made of sheet metal or aluminium. In 1956, Bill Stegmeier of the Stegmeier Company discovered that a color powder used to impart an antiquing effect to concrete also prevented stamps from sticking to the concrete, which opened up new possibilities.
By the 1970s the demand for stamped concrete grew, and the material became a common component in building projects. In the late 1970s, manufacturer Jon Nasvik developed lightweight and durable urethane stamps for concrete. He later developed plastic stamps that could imprint both texture and pattern on wet concrete, making the process more efficient.
Modern stamped concrete increased in popularity in the 1970s in part because it was featured in the World of Concrete trade show. Builders saw it as a new way to satisfy the customer and make their budget work simultaneously. When stamped concrete for aesthetic purposes was first developed, there were very few choices of design and colors. However, as the industry grew more stamping patterns were created along with many different types of stains. Another advantage to using stamped concrete was that it could be applied to many different surfaces and textures, such as driveways, highways, patios, decks, and even floors inside the home.
In the twenty-first century, demand for stamped concrete in the non-residential construction industry has increased as part of a more general boom in demand for concrete products.
Modern techniques
The ability of stamped concrete to resemble other building materials makes it a less expensive alternative to authentic materials such as stone, slate, or brick. Three procedures used in modern stamped concrete distinguish it from other concrete work: the addition of a base color, the addition of an accent color, and the stamping of a pattern into the concrete. These three procedures give stamped concrete a color and shape similar to the natural building material. It is also longer-lasting than paved stone while preserving a similar look.
Adding base color
The base color is the primary color used in stamped concrete. The base color is chosen to reflect the color of the natural building material. The base color is produced by adding a color hardener to the concrete. Color hardener is a powder pigment used to dye the concrete.
The color hardener can be applied using one of two procedures; integral color or cast-on color. Integral color is the procedure where the entire volume of concrete is dyed the base color. The entire volume of concrete is colored by adding the color hardener to the concrete truck, and allowing all the concrete in the truck to be dyed. Cast-on color is the procedure where the surface of the concrete is dyed the base color. The surface of the concrete is colored by spreading the color hardener onto the surface of the wet concrete and floating the powder into the top layer of the wet concrete.
Concrete can be colored in many ways: with color hardener, integral liquid or powder, or acid stains, to name a few. Integrally coloring the concrete offers the advantage that the entire volume is colored; however, the surface strength is not increased as it is with color hardener. Dry-shake color hardener is another popular way to color concrete. The hardener is broadcast onto the concrete as soon as it has been floated for the first time. After the bleed water soaks into the hardener, it is floated and troweled in. This method colors only the top roughly 3/16 inch of the surface, but it gives the concrete a longer wear life.
Adding accent color
The accent color is the secondary color used in stamped concrete. It is used to produce texture and to suggest additional building materials (e.g. grout) in the stamped concrete. The accent color is produced by applying color release to the concrete. Color release serves two purposes: (1) it is a pigment used to color the concrete, and (2) it is a non-stick agent that prevents the concrete stamps from adhering to the concrete.
Color release can be applied in one of two procedures, corresponding to the two forms in which it is manufactured: powdered (cast-on color release, made up of calcium-releasing powders that repel water) and liquid (a light aromatic-solvent-based, spray-on color release). In the cast-on procedure, the powdered color release is spread on the surface of the concrete before the concrete is stamped. In the spray-on procedure, the liquid color release is sprayed on the bottom of the concrete stamps before the concrete is stamped.
Stamping patterns
The pattern is the shape of the surface of the stamped concrete, reflecting the shape of the natural building material. It is made by imprinting the freshly poured concrete with a "concrete stamp". Most modern concrete stamps are made of polyurethane, but older "cookie cutter" style stamps were made of various metals; these old-style stamps could not form a natural stone texture.
Concrete stamping is the procedure which uses the concrete stamps to make the pattern in the stamped concrete. Concrete stamps are placed on the concrete after the color release has been applied. The concrete stamps are pushed into the concrete and then removed to leave the pattern in the stamped concrete.
In most cases concrete stamping is made to look like ordinary building products such as flagstone, brick, natural stone, etc.
See also
Brutalist architecture
Concrete
Decorative concrete
References
Concrete
Materials
Building materials | Stamped concrete | [
"Physics",
"Engineering"
] | 1,363 | [
"Structural engineering",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Concrete",
"Matter",
"Architecture"
] |
2,597,955 | https://en.wikipedia.org/wiki/Aircraft%20rescue%20and%20firefighting | Aircraft rescue and firefighting (ARFF) is a type of firefighting that involves the emergency response, mitigation, evacuation, and rescue of passengers and crew of aircraft involved in aviation accidents and incidents.
Airports with scheduled passenger flights are obliged to have firefighters and firefighting apparatus on location ready for duty any time aircraft operate. Airports may have regulatory oversight by an arm of their individual national governments or voluntarily under standards of the International Civil Aviation Organization.
Duties
Due to the mass casualty potential of an aviation emergency, the speed with which emergency response equipment and personnel arrive at the scene of the emergency is of paramount importance. Their arrival and initial mission to secure the aircraft against all hazards, particularly fire, increases the survivability of the passengers and crew on board. Airport firefighters have advanced training in the application of firefighting foams, dry chemical, and clean agents used to extinguish burning aviation fuel in and around an aircraft, in order to maintain a path by which evacuating passengers can exit the fire hazard area. Further, should fire be encountered in the cabin or spread there from an external fire, the ARFF responders must work to control and extinguish these fires as well.
Primary to the hazard mitigation and safe evacuation of ambulatory passengers is the need to perform rescue operations. Passengers unable to extricate themselves must be removed from the aircraft and provided medical care. This process is extremely labor-intensive, requiring both firefighters and support personnel. Due to the nature of a mass casualty incident, rescue workers employ triage to classify the victims and direct their efforts where they can maximize survival.
Subsequent to the emergency being declared under control, the ARFF function reverts to one of protecting the scene, eliminating any peripheral or slowly evolving hazards and assisting to preserve the scene for investigators.
In 2016, an Emirati fire fighter died from burns when trying to fight the fire in the Emirates Flight 521 crash. The man was the only fatality.
Because aircraft fires are rare, airport firefighters often have other routine duties, such as loading luggage or serving as security guards, which they must abandon when a fire alarm sounds.
Apparatus
Specialized fire apparatus are required for the ARFF function, the design of which is predicated on many factors but primarily: speed, water-carrying capacity, off-road performance, and agent discharge rates. Since an accident could occur anywhere on or off airport property, sufficient water and other agents must be carried to contain the fire to allow for the best possibility of extinguishment, maximum possibility for evacuation and/or until additional resources arrive on the scene.
Personal protective equipment
Due to the intense radiant heat generated by burning fuels, firefighters wear protective ensembles that are coated with a silvered material to reflect heat away from their bodies, called a fire proximity suit. They also must wear self-contained breathing apparatus to provide a source of clean air, enabling them to work in the presence of smoke or other super-heated gases, such as when making entry into the burning cabin of an aircraft.
ARFF in the United States
The Federal Aviation Administration (FAA) mandates ARFF operations at all U.S. airports that serve scheduled passenger air carriers. These are the only civilian fire protection services that are specifically regulated by any governmental entity. Military bases may have their own ARFF services with specialized duties and training.
Airports required to have ARFF services are inspected at least annually by the FAA for compliance with FAR, Part 139. Military ARFF operations must meet the mission requirements for their individual branch of the service.
In many cases the FAA will perform the investigatory duties after an incident, but in instances involving significant injuries or any fatal accident the National Transportation Safety Board (NTSB) will investigate, and the ARFF contingent will assist where needed.
Airport index
An index is assigned to each FAA Part 139 certificate holder based on a combination of the air carrier aircraft length and the average number of daily departures. If the longest air carrier aircraft at the airport has five or more average daily departures, the matching index is used. If the longest aircraft has less than five average daily departures, the next lower index is used. That index determines the required number of ARFF vehicles and required amount of extinguishing agents.
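The departure rule described above lends itself to a small lookup function. In the sketch below, the length breakpoints (in feet) are the commonly cited FAR Part 139 index lengths and should be treated as illustrative assumptions rather than a substitute for the regulation:

```python
def airport_arff_index(longest_aircraft_ft, avg_daily_departures):
    """Apply the Part 139 index rule described in the text."""
    indices = ["A", "B", "C", "D", "E"]
    # Assumed index length breakpoints in feet (illustrative only).
    breakpoints = [90, 126, 159, 200]
    i = len(breakpoints)                     # default to "E" (no upper limit)
    for n, limit in enumerate(breakpoints):
        if longest_aircraft_ft < limit:
            i = n
            break
    # Fewer than five average daily departures: use the next lower index.
    if avg_daily_departures < 5 and i > 0:
        i -= 1
    return indices[i]

print(airport_arff_index(155.0, 3))          # matching index C, reduced to B
```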
See also
Airport crash tender
Water salute
References
External links
Aircraft fires
Airport infrastructure
Firefighting
Rescue | Aircraft rescue and firefighting | [
"Engineering"
] | 874 | [
"Airport infrastructure",
"Aerospace engineering"
] |
2,597,983 | https://en.wikipedia.org/wiki/Boyer%E2%80%93Lindquist%20coordinates | In the mathematical description of general relativity, the Boyer–Lindquist coordinates are a generalization of the coordinates used for the metric of a Schwarzschild black hole that can be used to express the metric of a Kerr black hole.
The Hamiltonian for particle motion in Kerr spacetime is separable in Boyer–Lindquist coordinates. Using Hamilton–Jacobi theory one can derive a fourth constant of the motion known as Carter's constant.
The 1967 paper introducing Boyer–Lindquist coordinates was a posthumous publication for Robert H. Boyer, who was killed in the 1966 University of Texas tower shooting.
Line element
The line element for a black hole with a total mass equivalent M, angular momentum J, and charge Q in Boyer–Lindquist coordinates and geometrized units (G = c = 1) is

\[ ds^2 = -\frac{\Delta}{\Sigma}\left(dt - a\sin^2\theta\,d\phi\right)^2 + \frac{\sin^2\theta}{\Sigma}\left[(r^2+a^2)\,d\phi - a\,dt\right]^2 + \frac{\Sigma}{\Delta}\,dr^2 + \Sigma\,d\theta^2 \]

where

\[ \Delta = r^2 - 2Mr + a^2 + Q^2, \]

called the discriminant,

\[ \Sigma = r^2 + a^2\cos^2\theta, \]

and

\[ a = \frac{J}{M}, \]

called the Kerr parameter.
Note that in geometrized units M, a, and Q all have units of length. This line element describes the Kerr–Newman metric. Here, M is to be interpreted as the mass of the black hole as seen by an observer at infinity, J as the angular momentum, and Q as the electric charge. These are all meant to be constant parameters, held fixed. The name "discriminant" arises because Δ appears as the discriminant of the quadratic equation bounding the time-like motion of particles orbiting the black hole, i.e. defining the ergosphere.
The coordinate transformation from Boyer–Lindquist coordinates r, θ, φ to Cartesian coordinates x, y, z is given by:

\[ x = \sqrt{r^2 + a^2}\,\sin\theta\cos\phi, \qquad y = \sqrt{r^2 + a^2}\,\sin\theta\sin\phi, \qquad z = r\cos\theta. \]
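A small helper implementing the transformation just given (Python/NumPy; surfaces of constant r are oblate spheroids when a ≠ 0):

```python
import numpy as np

def bl_to_cartesian(r, theta, phi, a):
    """Map Boyer–Lindquist (r, theta, phi) to Cartesian (x, y, z)."""
    rho = np.sqrt(r ** 2 + a ** 2)   # oblate-spheroidal radius factor
    x = rho * np.sin(theta) * np.cos(phi)
    y = rho * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

# Example: an equatorial point around a hole with Kerr parameter a = 0.9 M.
print(bl_to_cartesian(r=2.0, theta=np.pi / 2, phi=0.0, a=0.9))
```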
Vierbein
The vierbein one-forms can be read off directly from the line element:
so that the line element is given by
where η is the flat-space Minkowski metric.
Spin connection
The torsion-free spin connection is defined by
The contorsion tensor gives the difference between a connection with torsion, and a corresponding connection without torsion. By convention, Riemann manifolds are always specified with torsion-free geometries; torsion is often used to specify equivalent, flat geometries.
The spin connection is useful because it provides an intermediate way-point for computing the curvature two-form:
It is also the most suitable form for describing the coupling to spinor fields, and opens the door to the twistor formalism.
All six components of the spin connection are non-vanishing. These are:
Riemann and Ricci tensors
The Riemann tensor written out in full is quite verbose; it can be found in Frè. The Ricci tensor takes the diagonal form:
Notice the location of the minus-one entry: this comes entirely from the electromagnetic contribution. Namely, when the electromagnetic stress tensor has only two non-vanishing components, the corresponding energy–momentum tensor takes the form
Equating this with the energy–momentum tensor for the gravitational field leads to the Kerr–Newman electrovacuum solution.
References
Black holes
Coordinate charts in general relativity | Boyer–Lindquist coordinates | [
"Physics",
"Astronomy",
"Mathematics"
] | 604 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Coordinate systems",
"Stellar phenomena",
"Astronomical objects",
"Coordinate charts in general relativity"
] |
2,599,187 | https://en.wikipedia.org/wiki/Earth%20System%20Modeling%20Framework | The Earth System Modeling Framework (ESMF) is open-source software for building climate, numerical weather prediction, data assimilation, and other Earth science software applications. These applications are computationally demanding and usually run on supercomputers. The ESMF is considered a technical layer, integrated into a sophisticated common modeling infrastructure for interoperability. Other aspects of interoperability and shared infrastructure include: common experimental protocols, common analytic methods, common documentation standards for data and data provenance, shared workflow, and shared model components.
The ESMF project is distinguished by its strong emphasis on community governance and distributed development, and by a diverse customer base that includes modeling groups from universities, major U.S. research centers, the National Weather Service, the Department of Defense, and NASA. The ESMF development team was centered at NCAR until 2009, after which it moved to the NOAA Earth System Research Laboratories.
Earth System Modeling Framework is free software released under the University of Illinois/NCSA Open Source License.
Purpose
ESMF increases the interoperability of Earth-science modeling software developed at different sites and promotes code reuse. The idea is to transform distributed, specialized knowledge and resources into a collaborative, integrated modeling community that operates more efficiently, can address a wider variety of problems more effectively, and is more responsive to societal needs.
Software architecture
ESMF is based on principles of component-based software engineering. The components within an ESMF software application usually represent large-scale physical domains such as the atmosphere, ocean, cryosphere, or land surface. Some models also represent specific processes (e.g. ocean biogeochemistry, the impact of solar radiation on the atmosphere) as components. In ESMF, components can create and drive other components so that an ocean biogeochemistry component can be part of a larger ocean component.
The software that connects physical domains is called a coupler in the Earth system modeling community. Couplers follow the mediator pattern and take the outputs from one component and transform them into the inputs that are needed to run another component. Transformations may include unit conversions, grid interpolation or remapping, mergers (i.e., combining land and ocean surfaces to form a completely covered global surface) or other specialized transformations. In ESMF, couplers are also software components.
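The following schematic sketch, written in plain Python rather than the actual ESMF API, illustrates the mediator-style coupling described above: a coupler component takes the exports of one component, applies a (here hypothetical) unit conversion, and hands the result to another component as imports.

```python
class AtmosphereComponent:
    def run(self, imports):
        # ... advance the atmosphere over one coupling interval ...
        return {"precip_kg_m2_s": 2.5e-5, "sw_down_W_m2": 310.0}

class LandComponent:
    def run(self, imports):
        # ... advance the land surface using the coupled fields ...
        return {"latent_heat_W_m2": 85.0}

class AtmToLandCoupler:
    """Mediator: transforms one component's exports into another's imports."""
    def couple(self, atm_exports):
        return {
            # hypothetical unit conversion: kg m^-2 s^-1 -> mm/day
            "precip_mm_day": atm_exports["precip_kg_m2_s"] * 86400.0,
            "sw_down_W_m2": atm_exports["sw_down_W_m2"],
        }

atm, land, coupler = AtmosphereComponent(), LandComponent(), AtmToLandCoupler()
for step in range(3):                      # driver loop over coupling steps
    land.run(coupler.couple(atm.run({})))
```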
Capabilities
ESMF represents user data in the form of data objects such as grids, fields, and arrays. The user data within a component may be copied or referenced into these ESMF objects. Once user data is part of an ESMF data object, framework methods can be used to transform and transfer the data as required to other components in the system. This generally happens within a coupler component.
Grid interpolation and remapping are core utilities of ESMF. Interpolation weights can be generated in ESMF using bilinear interpolation, finite element patch recovery, and conservative remapping methods.
ESMF can associate metadata with data objects. The metadata, in the form of name and value pairs, is grouped into packages, which can be written out in XML and other standard formats. ESMF metadata packages are based on community conventions including the Climate and Forecast Metadata Conventions and the METAFOR Common Information Model.
History
The ESMF collaboration had its roots in the Common Modeling Infrastructure Working Group (CMIWG), an unfunded, grass-roots effort to explore ways of enhancing collaborative Earth system model development. The CMIWG attracted broad participation from major weather and climate modeling groups at research and operational centers. In a series of meetings held from 1998 to 2000, CMIWG members established general requirements and a preliminary design for a common software framework.
In September 2000, the NASA Earth Science Technology Office (ESTO) released a solicitation that called for the creation of an ESMF. A critical mass of CMIWG participants agreed to develop a coordinated response, based on their strawman framework design, and submitted three linked proposals. The first focused on development of the core ESMF software, the second on deployment of Earth science modeling applications, and the third on deployment of ESMF data assimilation applications. All three proposals were funded, at a collective level of $9.8 million over a three-year period. As the ESMF project gained momentum, it replaced the CMIWG as the focal point for developing community modeling infrastructure.
During the period of NASA funding, the ESMF team developed a prototype of the framework and used it in a number of experiments that demonstrated coupling of modeling components from different institutions. ESMF was also used as the basis for the construction of a new model, the Goddard Earth Observing System (GEOS) atmospheric general circulation model at NASA Goddard.
As the end of the first funding cycle for ESMF neared, its collaborators wrote a project plan that described how ESMF could transition to an organization with multi-agency sponsorship for its next funding cycle. Major new five-year grants came from NASA, through the Modeling Analysis and Prediction (MAP) program for climate change and variability, and from the Department of Defense Battlespace Environments Institute. The National Science Foundation (NSF) continued funding part of the development team through NCAR core funds. Many smaller ESMF-based application adoption projects were funded in domains as diverse as space weather and sediment transport.
Also at the end of the first funding cycle, the ESMF collaborators wrote a white paper on future directions for the ESMF. This paper formed the basis for a proposal to NSF to combine ESMF (and other software frameworks) with data services to create a computational environment that supports an end-to-end modeling workflow.
In 2008, a project manager was appointed for the National Unified Operational Prediction Capability (NUOPC), a joint project for weather prediction of the United States Navy, the National Weather Service, and the United States Air Force.
See also
Coupled Model Intercomparison Project (CMIP)
References
Further reading
External links
Earth System Modeling Framework (ESMF)
Earth System Prediction Capability (ESPC)
Modeling Analysis and Prediction Program (NASA)
Earth System Grid Federation (ESGF)
Weather prediction
Numerical climate and weather models
Modeling Framework
Free science software | Earth System Modeling Framework | [
"Physics"
] | 1,280 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
12,866,103 | https://en.wikipedia.org/wiki/Epedigree | An epedigree (sometimes referred to as e-pedigree or electronic pedigree) is an electronic document which provides data on the history of a particular batch of a drug. It satisfies the requirement for a drug pedigree while using a convenient electronic form.
Pedigree
The U.S. Food and Drug Administration (FDA) in the 2006 Compliance Policy Guide for the Prescription Drug Marketing Act states that:
"A drug pedigree is a statement of origin that identifies each prior sale, purchase, or trade of a drug, including the date of those transactions and the names and addresses of all parties to them."
An epedigree is simply an electronic document which satisfies the pedigree requirement. The primary purpose of an epedigree is to protect consumers from any contaminated medicine or counterfeit drugs.
Standard
On January 5, 2007 EPCglobal ratified the Pedigree Standard as an international standard that specifies an XML description of the life history of a product across an arbitrarily complex supply chain.
As of 2008, most states had enacted some sort of pedigree requirement, and many had also required an epedigree. However, the existing epedigree requirements amounted to little more than requiring that pharmaceutical supply chain companies be able to provide reports in formats such as PDF, text files, or spreadsheets.
The basic data elements of an original epedigree are:
Lot
Potency
Expiration
National Drug Code and Electronic Product Code
Manufacturer
Distributor, Wholesaler or Pharmacy
Unique identifier of the salable unit
As the product moves down the supply chain, each company is required to carry forward all previous epedigree information. In this way, the final point of sale has the complete lineage of every unit.
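As a purely hypothetical illustration (all field names and values are invented for the example, loosely in the spirit of the EPCglobal XML pedigree), the basic data elements could be assembled as follows:

```python
import xml.etree.ElementTree as ET

ped = ET.Element("pedigree")
for tag, value in [
    ("lot", "LOT-2023-0147"),
    ("potency", "50 mg"),
    ("expiration", "2026-05-31"),
    ("ndc", "00000-0000-00"),                       # National Drug Code
    ("epc", "urn:epc:id:sgtin:0000000.000000.1"),   # Electronic Product Code
    ("manufacturer", "Example Pharma Inc."),
    ("distributor", "Example Wholesale LLC"),
    ("serial", "SN-000000001"),     # unique identifier of the salable unit
]:
    ET.SubElement(ped, tag).text = value

print(ET.tostring(ped, encoding="unicode"))
```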
Laws
ePedigree laws were eventually replaced by a harmonized national standard called the Drug Quality and Security Act.
ePedigree laws were in a rapid state of flux, with states pushing the "drop dead" date for compliance with tracking and authentication years beyond the original dates set by Florida and California. The definitive requirements will include serialization. Companies that focus purely on achieving compliance miss the opportunity to use regulation as a business driver. The ability to track and serialize unit-level salable packages (e.g. a bottle of 25 pills), not just cases or pallets, can create business value; companies that know exactly where their products are purchased can do the following:
1) Minimize cost of chargebacks through 100% accurate adjudication. Chargebacks account for 2-15% of gross revenue for a pharmaceutical manufacturer.
2) Minimize risk by increasing accuracy in Medicare/Medicaid pricing calculations by fully knowing all fees, rebates, and chargebacks that should be applied to a specific unit sale. Over $4B in fines have been handed down for improperly calculating Medicare/Medicaid pricing.
3) Limit the liability of having to recall entire lots of product because a (non-serialized) shipment was stolen.
4) Achieve visibility for manufacturers in the labyrinth that is the wholesale distribution network to more accurately forecast demand and measure sales & marketing programs.
Although simple epedigree systems are an important first step, significant improvement in public safety would result from a more standardized and automated approach. The larger and more difficult task of providing for an automated epedigree system has been suggested, but not required by any state. Such a system would require fairly significant changes to supply chain companies' data interchanges and would certainly require advanced track-and-trace technology (with bar codes or RFID). The requirements that come closest to an automated epedigree system were proposed by California. In March 2008, the California Board of Pharmacy (CBOP) published its "E-Pedigree Requirements", which were scheduled to go into effect in a phased approach between 2015 and 2017 (since abandoned). CBOP proposed an XML standard document, and the law requires an "interoperable electronic system".
References
External links
ePedigree info from the California Board of Pharmacy
Florida ePedigree Law
FDA Counterfeit Drug page
EPCglobal's Pedigree Standard an XML description of the life history of a product
IBM's ePedigree solution
Oracle Pedigree and Serialization Manager
Axway ePedigree solution
Pharmaceutical industry
Forgery | Epedigree | [
"Chemistry",
"Biology"
] | 885 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
12,866,442 | https://en.wikipedia.org/wiki/7-Hydroxyamoxapine | 7-Hydroxyamoxapine is an active metabolite of the antidepressant drug amoxapine (Asendin). It contributes to amoxapine's pharmacology. It is a dopamine receptor antagonist and contributes to amoxapine's antipsychotic properties.
See also
8-Hydroxyamoxapine
References
Dibenzoxazepines
1-Piperazinyl compounds
Tricyclic antidepressants
Chloroarenes
Hydroxyarenes
Human drug metabolites | 7-Hydroxyamoxapine | [
"Chemistry"
] | 115 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
12,868,362 | https://en.wikipedia.org/wiki/Filling%20radius | In Riemannian geometry, the filling radius of a Riemannian manifold X is a metric invariant of X. It was originally introduced in 1983 by Mikhail Gromov, who used it to prove his systolic inequality for essential manifolds, vastly generalizing Loewner's torus inequality and Pu's inequality for the real projective plane, and creating systolic geometry in its modern form.
The filling radius of a simple loop C in the plane is defined as the largest radius, R > 0, of a circle that fits inside C:

FillRad(C ⊂ R²) = R.
Dual definition via neighborhoods
There is a kind of dual point of view that allows one to generalize this notion in an extremely fruitful way, as shown by Gromov. Namely, we consider the ε-neighborhoods of the loop C, denoted U_ε C ⊂ R².

As ε increases, the ε-neighborhood U_ε C swallows up more and more of the interior of the loop. The last point to be swallowed up is precisely the center of a largest inscribed circle. Therefore, we can reformulate the above definition by defining FillRad(C ⊂ R²)

to be the infimum of ε > 0 such that the loop C contracts to a point in U_ε C.
Given a compact manifold X imbedded in, say, Euclidean space E, we could define the filling radius relative to the imbedding, by minimizing the size of the neighborhood in which X could be homotoped to something smaller dimensional, e.g., to a lower-dimensional polyhedron. Technically it is more convenient to work with a homological definition.
Homological definition
Denote by A the coefficient ring Z or Z₂, depending on whether or not X is orientable. Then the fundamental class, denoted [X], of a compact n-dimensional manifold X is a generator of the homology group Hₙ(X; A), and we set

FillRad(X ⊂ E) = inf { ε > 0 | ι_ε([X]) = 0 ∈ Hₙ(U_ε X; A) },

where ι_ε is the inclusion homomorphism induced by the inclusion of X in its ε-neighborhood U_ε X in E.
To define an absolute filling radius in a situation where X is equipped with a Riemannian metric g, Gromov proceeds as follows.
One exploits Kuratowski embedding.
One imbeds X in the Banach space L^∞(X) of bounded Borel functions on X, equipped with the sup norm ‖·‖. Namely, we map a point x ∈ X to the function f_x ∈ L^∞(X) defined by the formula

f_x(y) = d(x, y)

for all y ∈ X, where d is the distance function defined by the metric. By the triangle inequality we have d(x, y) = ‖f_x − f_y‖, and therefore the imbedding is strongly isometric, in the precise sense that internal distance and ambient distance coincide. Such a strongly isometric imbedding is impossible if the ambient space is a Hilbert space, even when X is the Riemannian circle (the distance between opposite points must be π, not 2!). We then set E = L^∞(X) in the formula above, and define

FillRad(X) = FillRad(X ⊂ L^∞(X)).
Properties
The filling radius is at most a third of the diameter (Katz, 1983).
The filling radius of real projective space with a metric of constant curvature is a third of its Riemannian diameter, see (Katz, 1983). Equivalently, the filling radius is a sixth of the systole in these cases.
The filling radius of the Riemannian circle of length 2π, i.e. the unit circle with the induced Riemannian distance function, equals π/3, i.e. a sixth of its length. This follows by combining the diameter upper bound mentioned above with Gromov's lower bound in terms of the systole (Gromov, 1983).
The systole of an essential manifold M is at most six times its filling radius, see (Gromov, 1983).
The inequality is optimal in the sense that the boundary case of equality is attained by the real projective spaces as above.
The injectivity radius of a compact manifold gives a lower bound on its filling radius.
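In symbols, the diameter bound (Katz, 1983) and the systolic bound (Gromov, 1983) listed above combine, for a closed essential manifold M, into:

```latex
% Diameter upper bound (Katz, 1983) and systolic lower bound (Gromov, 1983)
% on the filling radius of a closed essential manifold M.
\[
  \tfrac{1}{6}\,\operatorname{sys}(M)
  \;\le\; \operatorname{FillRad}(M)
  \;\le\; \tfrac{1}{3}\,\operatorname{diam}(M).
\]
```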
See also
Filling area conjecture
Gromov's systolic inequality for essential manifolds
References
Gromov, M.: Filling Riemannian manifolds, Journal of Differential Geometry 18 (1983), 1–147.
Katz, M.: The filling radius of two-point homogeneous spaces. Journal of Differential Geometry 18, Number 3 (1983), 505–511.
Differential geometry
Manifolds
Riemannian geometry
Systolic geometry | Filling radius | [
"Mathematics"
] | 835 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
12,869,072 | https://en.wikipedia.org/wiki/Charge%20controller | A charge controller, charge regulator or battery regulator limits the rate at which electric current is added to or drawn from electric batteries to protect against electrical overload, overcharging, and may protect against overvoltage. This prevents conditions that reduce battery performance or lifespan and may pose a safety risk. It may also prevent completely draining ("deep discharging") a battery, or perform controlled discharges, depending on the battery technology, to protect battery life.
The terms "charge controller" or "charge regulator" may refer to either a stand-alone device, or to control circuitry integrated within a battery pack, battery-powered device, and/or battery charger.
Stand-alone charge controllers
Charge controllers are sold to consumers as separate devices, often in conjunction with solar or wind power generators, for uses such as RV, boat, and off-the-grid home battery storage systems.
In solar applications, charge controllers may also be called solar regulators or solar charge controllers. Some charge controllers / solar regulators have additional features, such as a low voltage disconnect (LVD), a separate circuit which powers down the load when the batteries become overly discharged (some battery chemistries are such that over-discharge can ruin the battery).
A series charge controller or series regulator disables further current flow into batteries when they are full. A shunt charge controller or shunt regulator diverts excess electricity to an auxiliary or "shunt" load, such as an electric water heater, when batteries are full.
Simple charge controllers stop charging a battery when the battery voltage exceeds a set high level, and re-enable charging once it drops back below that level. Pulse-width modulation (PWM) and maximum power point tracker (MPPT) technologies are more electronically sophisticated, adjusting charging rates depending on the battery's level to allow charging closer to its maximum capacity.
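A minimal sketch of this on/off behavior, assuming illustrative thresholds for a nominal 12 V lead-acid battery (the exact setpoints depend on battery chemistry and are not specified here):

```python
class SimpleChargeController:
    """On/off charge control with hysteresis between two voltage setpoints."""
    def __init__(self, v_high=14.4, v_rearm=13.2):
        self.v_high = v_high        # stop charging above this voltage
        self.v_rearm = v_rearm      # re-enable charging below this voltage
        self.charging = True

    def update(self, battery_voltage):
        if self.charging and battery_voltage >= self.v_high:
            self.charging = False   # battery full: open the charge switch
        elif not self.charging and battery_voltage <= self.v_rearm:
            self.charging = True    # voltage relaxed: resume charging
        return self.charging

ctrl = SimpleChargeController()
for v in (12.8, 13.9, 14.5, 14.0, 13.1):
    print(v, ctrl.update(v))        # True, True, False, False, True
```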
A charge controller with MPPT capability frees the system designer from closely matching available PV voltage to battery voltage. Considerable efficiency gains can be achieved, particularly when the PV array is located at some distance from the battery. By way of example, a 150 volt PV array connected to an MPPT charge controller can be used to charge a 24 or 48 volt battery. Higher array voltage means lower array current, so the savings in wiring costs can more than pay for the controller.
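One common tracking algorithm is "perturb and observe"; the text above does not say which algorithm a given controller uses, so the following is an illustrative sketch against a toy PV power curve, not a vendor implementation:

```python
def pv_power(v):
    """Toy PV power curve (watts) with its maximum power point at 30 V."""
    return max(0.0, 900.0 - (v - 30.0) ** 2)

v, direction, last_p = 20.0, 1.0, 0.0
for _ in range(40):
    p = pv_power(v)
    if p < last_p:
        direction = -direction      # the last perturbation reduced power
    v += direction * 0.5            # perturb the operating voltage by 0.5 V
    last_p = p
print(f"operating point settles near {v:.1f} V (true maximum at 30 V)")
```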
Charge controllers may also monitor battery temperature to prevent overheating. Some charge controller systems also display data, transmit data to remote displays, and log data to track electricity flow over time.
Integrated charge controller circuitry
Circuitry that functions as a charge regulator controller may consist of several electrical components, or may be encapsulated in a single microchip, an integrated circuit (IC) usually called a charge controller IC or charge control IC.
Charge controller circuits are used for rechargeable electronic devices such as cell phones, laptop computers, portable audio players, and uninterruptible power supplies, as well as for larger battery systems found in electric vehicles and orbiting space satellites.
Charging protocols
Due to limits on the current that copper wires can safely carry, charging protocols have been developed that allow the end device to request an elevated voltage, increasing power throughput without increasing heat in the wires. The arriving voltage is then converted down to the battery's optimum charging voltage inside the end device.
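A back-of-the-envelope calculation shows why raising the voltage helps: cable heat depends on current, while delivered power scales with voltage. The current limit and cable resistance below are assumed values chosen purely for illustration.

```python
# At a fixed cable current, raising the negotiated voltage raises power
# while the I^2 * R heat dissipated in the cable stays the same.
CABLE_CURRENT_A = 2.0        # assumed safe cable current
CABLE_RESISTANCE_OHM = 0.1   # assumed round-trip cable resistance

for volts in (5.0, 9.0, 12.0, 20.0):
    power = volts * CABLE_CURRENT_A                      # P = V * I
    heat = CABLE_CURRENT_A ** 2 * CABLE_RESISTANCE_OHM   # independent of V
    print(f"{volts:4.0f} V: {power:5.1f} W delivered, {heat:.1f} W cable heat")
```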
Quick Charge and Pump Express
The two most widely used standards are Quick Charge by Qualcomm and Pump Express by MediaTek.
The 2014 and 2015 versions of Pump Express, Pump Express Plus and Pump Express Plus 2.0, differ from Quick Charge by communicating voltage requests to the charger using current-modulation signals on the main USB power lane (VBUS) rather than negotiating through the USB 2.0 data lanes.
Pump Express Plus supports elevated voltage levels of 7, 9 and 12 volts, whereas the specification for Quick Charge 2.0 lacks the 7-volt level. A 20-volt level was added in the "class B" revision of the Quick Charge 2.0 specification.
The voltage range of the successor Pump Express Plus 2.0 is between 5 volts and 20 volts, in 0.5-volt steps. The Quick Charge 3.0 protocol supports finer-grained voltage levels, with 0.2-volt steps and a lower minimum voltage of approximately 3.3 volts. According to PocketNow, Quick Charge 3.0 starts at 3.2 volts with 0.2 volts between each step and goes up to 20 V (3.2 V, 3.4 V, 3.6 V, ..., 19.8 V, 20 V). The site powerbankexpert.com claims that the protocol has a minimum voltage of 3.6 volts.
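Taking the PocketNow description at face value, the set of selectable levels can be enumerated directly; this is a sketch of that reading, not an official protocol table.

```python
# Enumerate Quick Charge 3.0 voltage levels as described by PocketNow:
# 3.2 V up to 20.0 V in 0.2 V steps (85 levels in total).
levels = [round(3.2 + 0.2 * k, 1) for k in range(85)]
assert levels[0] == 3.2 and levels[-1] == 20.0
print(f"{len(levels)} levels: {levels[:3]} ... {levels[-2:]}")
```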
Oppo VOOC and Huawei SuperCharge
Oppo VOOC, also branded as "Dash Charge" for the subsidiary OnePlus, as well as SuperCharge by Huawei, take the opposite approach of increasing the charging current. Since the voltage arriving at the end device matches the optimum battery charging voltage, no conversion inside the end device is necessary, which reduces heat there. However, unlike the protocols that only elevate voltage, the higher current produces more heat in the cable's copper wires, making these protocols incompatible with ordinary cables and requiring special high-current cables with thicker copper conductors.
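Reusing the assumed cable figures from the earlier sketch, the trade-off between the two strategies at equal delivered power is easy to see; again, the numbers are illustrative only.

```python
# Two ways to deliver ~20 W: elevate voltage (cable heat stays low) or
# elevate current (cable heat grows with I^2).
CABLE_RESISTANCE_OHM = 0.1   # assumed round-trip cable resistance

for label, volts, amps in (("high-voltage (QC-style)", 10.0, 2.0),
                           ("high-current (VOOC-style)", 5.0, 4.0)):
    power = volts * amps
    heat = amps ** 2 * CABLE_RESISTANCE_OHM
    print(f"{label}: {power:.0f} W delivered, {heat:.1f} W cable heat")
# Same 20 W, but the high-current path dissipates 4x the heat in the cable.
```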
See also
Battery management system
Battery balancing
Solar inverter
Voltage regulator
Zener diode
References
Integrated circuits
Electrical power control
Battery charging | Charge controller | [
"Technology",
"Engineering"
] | 1,122 | [
"Computer engineering",
"Integrated circuits"
] |
12,869,518 | https://en.wikipedia.org/wiki/Performance-based%20logistics | Performance-based logistics (PBL), also known as performance-based life-cycle product support, is a defense acquisition strategy for cost-effective weapon system support which has been adopted in particular by the United States Department of Defense. Rather than contracting for the acquisition of parts or services, DoD contracts to secure outcomes or results. Under PBL, the product support manager identifies product support integrator(s) (PSI) to deliver performance outcomes as defined by performance metric(s) for a system or product. The integrator often commits to this performance level at a lower cost, or increased performance at costs similar to those previously achieved under a non-PBL or transactional portfolio of product support arrangements for goods and services.
As the preferred approach to supporting weapon system logistics, it seeks to deliver product support as an integrated, affordable performance package designed to optimize system readiness. PBL meets performance goals for a weapon system through a support structure based on long-term performance agreements with clear lines of authority and responsibility.
DoD program managers are required to develop and implement performance-based life-cycle support strategies for weapon systems. These strategies should optimize total system availability while minimizing cost and logistics footprint. Trade-off decisions involve cost, useful service, and effectiveness. The specific performance metrics should be carefully selected and supported by an operationally oriented analysis that takes technology maturity, fiscal constraints, and schedule into account. In implementing performance-based life-cycle product support strategies, the metrics should be appropriate to the scope of the product support integrators' and providers' responsibilities, and should be revisited as necessary to ensure they are motivating the desired behaviors across the enterprise.
PBL strategies do not mandate that work be contracted to commercial contractors; integrating the best features of the public and private sectors is a key component of the support strategy. Instead of a pre-ordained course of action, Product Managers are directed to implement "sustainment strategies that include the best use of public and private sector capabilities through government/industry partnering initiatives, in accordance with statutory requirements".
In many cases, employing a PBL strategy has resulted in increased system performance issues or increased costs. Examples include the C-17 PBL, FIRST, and PBtH. Ideally, the provider profits by controlling the constituent elements (PSIs) that are used to generate the performance results.
Because in PBL part or all of the payment is typically tied to the provider's performance, and the purchaser does not get involved in the details of the process, it becomes crucial to define a clear set of requirements for the provider. Occasionally governments, and Defence in particular, fail to define the requirements clearly. This leaves room for providers to misinterpret the requirements, intentionally or unintentionally, creating a game-like situation and excuses for delivering imperfect services.
History
Beginning in the early 1990s, rising costs to support fielded systems and declining reliability and operational readiness of weapon systems were recognized as trends that would continue unless addressed. As a result, a performance-based approach, PBL, was advanced by the U.S. DoD in its 2001 Quadrennial Defense Review. Since then, not only has the U.S. DoD adopted the PBL approach, but other countries have as well. Many programs that have employed it have yielded increased system availability, shorter maintenance cycles, and/or reduced costs.
Awards
Since the inception of the PBL concept, there have been numerous examples of DoD systems that have yielded the anticipated results, and many that have exceeded performance expectations, some dramatically. Annual PBL awards highlight achievement in three areas:
component-level performance
sub-system performance
system-level performance
Criticism
In 2009, partially in response to some who believed that PBL concepts were inadequate, and to assess the current state of DoD systems sustainment, DoD's Office of the Assistant Deputy Under Secretary of Defense for Materiel Readiness (OADUSD(MR)) initiated a Weapon System Acquisition Reform Product Support Assessment. Its final report, signed by Ashton B. Carter, Under Secretary of Defense for Acquisition, Technology and Logistics, affirms the essence of the PBL concept by stating that "there remains a strong consensus that an outcome-based, performance-oriented product support strategy is a worthy objective". It further identified eight areas that would make product support even more effective, if developed and improved:
Product Support Business Model
Industrial Integration Strategy
Supply Chain Operational Strategy
Governance
Metrics
Operating and Support (O&S) Costs
Analytical Tools
Human Capital
In 2003 the United States Air Force found that logistics support contracts were more expensive than undertaking support operations in-house through its organic depot system.
See also
Cost-plus contract
Fixed-price contract
Military surplus
Performance-based contracting
References
External links
Performance Based Logistics
Performance Based Logistics What it Takes
Defense Acquisition Guidebook, Section 5.1.1.2
Defense Acquisition Guidebook, Section 5.1.1.3
DoD Directive 5000.01, The Defense Acquisition System, Enclosure 1, Section E1.1.17 - Performance Based Logistics
Defense Acquisition Guidebook
Life Cycle Logistics Community of Practice PBL Topic Area
PBL Toolkit
Going Organic
Maintenance
Military logistics | Performance-based logistics | [
"Engineering"
] | 1,068 | [
"Maintenance",
"Mechanical engineering"
] |
8,223,245 | https://en.wikipedia.org/wiki/Riving%20knife | A riving knife is a safety device installed on a table saw, circular saw, or radial arm saw used for woodworking. Attached to the saw's arbor, it is fixed relative to the blade and moves with it as blade depth is adjusted.
A splitter is a similar device attached to a trunnion on the far side of the saw and fixed in relation to the saw table; it must be removed to make any non-through cuts or dados within the depth of the wood.
Function
A table saw is typically used for cross-cutting and ripping; cross-cutting slices a board across its grain width-wise, ripping cuts lengthwise along the grain. Various conditions experienced while cutting either way can cause a partially cut board to move, twist, or have the saw blade's kerf close up and bind the blade. Poor blade or fence alignment, operator error, or pre-existing stresses in the wood released by cutting may cause these different and dangerous conditions. A riving knife rides within the kerf, pivoting on the saw's arbor in relation to blade height, to maintain an even gap between the two cut sides of the board, preventing jamming which could cause the stock to be forcefully ejected rearward toward the saw's operator.
Kickback can pull the operator's hand into contact with the saw blade, as demonstrated by Popular Mechanics.
Forms of kickback
Saw blade "grabbing" occurs more frequently during ripping than cross-cutting (cuts made to wood or stone across its main grain or axis). It can occur with both hand saws and bandsaws but is more dangerous with a circular saw as areas of the circular blade close to the cutting area are moving in different directions. If a bandsaw grabs, the wood is pressed safely down into the machine table (though the saw may jam, stall or break the blade). If a table saw grabs at the rear of the blade where the teeth are rising up from the table, it may rapidly lift the wood upwards. The wood is then likely to catch the teeth on top of the blade and be thrown forwards at high speed towards the operator. This accident is termed a "kickback".
Table saw kickback may occur if the saw's fence is not parallel with the blade, but is slightly closer to the rear of it than the front, causing the fence to push the wood into the rear of the blade. This is especially likely when cross-cutting sheet materials that are wider than the cut length, which may pivot on the table and jam against the blade. If a proper cross-cutting jig is not being used, the fence should be adjusted (either slid forward, or a false fence added) so the end of the fence stops alongside the blade, leaving a free space for the cut-off to pivot into without binding.
Kickback may also occur when a loose piece of wood, freshly cut free, slips against the back of the blade. Apart from the measures above, this "falling board" may require an assistant to control it.
Riving knife versus splitter
A splitter is a stationary blade of similar thickness to the rotating saw blade, mounted behind it to prevent a board from pinching inward into the saw kerf and binding on the blade, potentially causing a dangerous kickback. Like a riving knife, its thickness should be greater than that of the saw blade's body but less than the width of its kerf. Blades with a narrow kerf relative to their body are more susceptible to grabbing and kickback.
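The sizing rule above is a simple pair of inequalities. Here is a hypothetical helper that checks it; the function name and the example blade figures are invented for illustration, not manufacturer specifications.

```python
def splitter_thickness_ok(blade_body_mm: float, kerf_mm: float,
                          splitter_mm: float) -> bool:
    """Return True if the splitter/riving knife thickness sits strictly
    between the blade body thickness and the kerf width."""
    return blade_body_mm < splitter_mm < kerf_mm

# Illustrative 10-inch blade figures: 1.8 mm body, 2.8 mm kerf.
print(splitter_thickness_ok(1.8, 2.8, 2.2))  # True: fits inside the kerf
print(splitter_thickness_ok(1.8, 2.8, 3.0))  # False: too thick, would bind
```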
A riving knife has these advantages over a splitter:
It does not need to be removed from the saw when cross-cutting or doing a blind (non-through) cut as it does not extend above the top of the saw blade. If it is not removed, the operator cannot forget to put it back on.
It sits closer to the back edge of the blade, making it much more effective – there is less space for the stock to shift into the path of the blade.
It provides some additional protection for the operator – blocking contact with the back edge of the blade – in those situations where the stock is being pulled from the outfeed side of the saw.
It is independent of (and will not interfere with) other blade guards and dust collectors.
It achieves all of this by being attached to the saw's arbor, allowing it to move with the saw blade as the blade is raised, lowered and tilted.
Riving knives are also fitted to some hand-held electrical circular and powered miter or cross-cut saws (known generically as "chop saws").
As of 2008, Underwriters Laboratories (UL) requires that all new table saw designs include a riving knife.
Other anti-kickback devices
Featherboard
A featherboard is a safety device that applies sideways pressure holding the workpiece against the saw fence. It can reduce the risk of a kickback developing, but will not restrain the board if one does occur.
Kickback pawl
Some US table saws are fitted with sharpened ratchet teeth on a free-swinging pawl attached to the guard, which restrain a board during a kickback. This combination may require awkward adjustment and is less effective than a splitter.
See also
Stop block
References
Woodworking machines
Safety equipment | Riving knife | [
"Physics",
"Technology"
] | 1,071 | [
"Physical systems",
"Machines",
"Woodworking machines"
] |