A still is an apparatus used to distill liquid mixtures by heating to selectively boil and then cooling to condense the vapor . [ 1 ] A still uses the same concepts as a basic distillation apparatus , but on a much larger scale. Stills have been used to produce perfume and medicine , water for injection (WFI) for pharmaceutical use, generally to separate and purify different chemicals, and to produce distilled beverages containing ethanol . Since ethanol boils at a much lower temperature than water , simple distillation can separate ethanol from water by applying heat to the mixture. Historically, a copper vessel was used for this purpose, since copper removes undesirable sulfur -based compounds from the alcohol. However, many modern stills are made of stainless steel pipes with copper linings to prevent erosion of the entire vessel and lower copper levels in the waste product (which in large distilleries is processed to become animal feed). [ 2 ] Copper is the preferred material for stills because it yields an overall better-tasting spirit. The taste is improved by the chemical reaction between the copper in the still and the sulfur compounds created by the yeast during fermentation. These unwanted and flavor-changing sulfur compounds are chemically removed from the final product resulting in a smoother, better-tasting drink. All copper stills will require repairs about every eight years due to the precipitation of copper-sulfur compounds . The beverage industry was the first to implement a modern distillation apparatus and led the way in developing equipment standards which are now widely accepted in the chemical industry. There is also an increasing usage of the distillation of gin under glass and PTFE , and even at reduced pressures, to facilitate a fresher product. This is irrelevant to alcohol quality because the process starts with triple distilled grain alcohol, and the distillation is used solely to harvest botanical flavors such as limonene and other terpene like compounds. The ethyl alcohol is relatively unchanged. The simplest standard distillation apparatus is commonly known as a pot still , consisting of a single heated chamber and a vessel to collect purified alcohol. A pot still incorporates only one condensation , whereas other types of distillation equipment have multiple stages which result in higher purification of the more volatile component (alcohol). Pot still distillation gives an incomplete separation , but this can be desirable for the flavor of some distilled beverages . If a purer distillate is desired, a reflux still is the most common solution. Reflux stills incorporate a fractionating column , commonly created by filling copper vessels with glass beads to maximize available surface area . [ 3 ] As alcohol boils, condenses, and reboils through the column, the effective number of distillations greatly increases. Vodka and gin and other neutral grain spirits are distilled by this method, then diluted to concentrations appropriate for human consumption. Alcoholic products from home distilleries are common throughout the world but are sometimes in violation of local statutes. The product of illegal stills in the United States is commonly referred to as moonshine and in Ireland , poitín . However, poitín, although made illegal in 1661, has been legal for export in Ireland since 1997. 
Note that the term moonshine itself is often misused: many believe it to be a specific kind of high-proof alcohol distilled from corn, but the term can refer to any illicitly distilled alcohol. [4]
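To make the effect of stacking condensation stages (as in the reflux column described above) concrete, the following minimal Python sketch iterates an idealized vapor-liquid equilibrium step. The constant relative volatility (alpha ≈ 2.2) and the 10% starting wash are illustrative assumptions, not figures from the text, and the simple model ignores the real ethanol-water azeotrope that caps practical distillation.

```python
# Illustrative sketch (not from the source article): a constant-relative-
# volatility model of repeated boil/condense cycles, showing why a reflux
# column with many effective stages yields a far stronger ethanol distillate
# than a single pot-still vaporization. alpha = 2.2 and the 10% starting
# wash are illustrative assumptions only.

def vapor_fraction(x_liquid: float, alpha: float = 2.2) -> float:
    """Ethanol mole fraction in the vapor above a liquid of mole fraction
    x_liquid, assuming a constant relative volatility alpha."""
    return alpha * x_liquid / (1.0 + (alpha - 1.0) * x_liquid)

x = 0.10  # ~10% ethanol (mole fraction) wash, an illustrative starting point
for stage in range(1, 6):
    x = vapor_fraction(x)  # each ideal stage re-vaporizes the previous condensate
    print(f"after {stage} ideal stage(s): ethanol mole fraction ~ {x:.2f}")

# One stage (a pot still) gives roughly 0.20; five stages give roughly 0.85,
# illustrating how stacked condensations in a reflux column raise purity
# (real columns are ultimately limited by the ethanol-water azeotrope).
```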
https://en.wikipedia.org/wiki/Still
The Stille reaction is a chemical reaction widely used in organic synthesis . The reaction involves the coupling of two organic groups, one of which is carried as an organotin compound (also known as organostannanes ). A variety of organic electrophiles provide the other coupling partner . The Stille reaction is one of many palladium-catalyzed coupling reactions . [ 1 ] [ 2 ] [ 3 ] The R 1 group attached to the trialkyltin is normally sp 2 -hybridized, including vinyl , and aryl groups. These organostannanes are also stable to both air and moisture, and many of these reagents either are commercially available or can be synthesized from literature precedent. However, these tin reagents tend to be highly toxic. X is typically a halide , such as Cl , Br , or I , yet pseudohalides such as triflates and sulfonates and phosphates can also be used. [ 4 ] [ 5 ] Several reviews have been published. [ 6 ] [ 2 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ excessive citations ] The first example of a palladium catalyzed coupling of aryl halides with organotin reagents was reported by Colin Eaborn in 1976. [ 16 ] This reaction yielded from 7% to 53% of diaryl product. This process was expanded to the coupling of acyl chlorides with alkyl-tin reagents in 1977 by Toshihiko Migita, yielding 53% to 87% ketone product. [ 17 ] In 1977, Migita published further work on the coupling of allyl -tin reagents with both aryl ( C ) and acyl ( D ) halides. The greater ability of allyl groups to migrate to the palladium catalyst allowed the reactions to be performed at lower temperatures. Yields for aryl halides ranged from 4% to 100%, and for acyl halides from 27% to 86%. [ 18 ] [ 19 ] Reflecting the early contributions of Migita and Kosugi, the Stille reaction is sometimes called the Migita–Kosugi–Stille coupling . John Kenneth Stille subsequently reported the coupling of a variety of alkyl tin reagents in 1978 with numerous aryl and acyl halides under mild reaction conditions with much better yields (76%–99%). [ 18 ] [ 20 ] Stille continued his work in the 1980s on the synthesis of a multitude of ketones using this broad and mild process and elucidated a mechanism for this transformation. [ 21 ] [ 22 ] By the mid-1980s, over 65 papers on the topic of coupling reactions involving tin had been published, continuing to explore the substrate scope of this reaction. While initial research in the field focused on the coupling of alkyl groups, most future work involved the much more synthetically useful coupling of vinyl , alkenyl , aryl, and allyl organostannanes to halides. Due to these organotin reagent's stability to air and their ease of synthesis, the Stille reaction became common in organic synthesis. [ 8 ] The mechanism of the Stille reaction has been extensively studied. [ 11 ] [ 23 ] The catalytic cycle involves an oxidative addition of a halide or pseudohalide ( 2 ) to a palladium catalyst ( 1 ), transmetalation of 3 with an organotin reagent ( 4 ), and reductive elimination of 5 to yield the coupled product ( 7 ) and the regenerated palladium catalyst ( 1 ). [ 24 ] However, the detailed mechanism of the Stille coupling is extremely complex and can occur via numerous reaction pathways. Like other palladium-catalyzed coupling reactions , the active palladium catalyst is believed to be a 14-electron Pd(0) complex, which can be generated in a variety of ways. Use of an 18- or 16- electron Pd(0) source Pd(PPh 3 ) 4 , Pd(dba) 2 can undergo ligand dissociation to form the active species. 
Second, phosphines can be added to ligandless palladium(0). Finally, as pictured, reduction of a Pd(II) source ( 8 ) (Pd(OAc) 2 , PdCl 2 (MeCN) 2 , PdCl 2 (PPh 3 ) 2 , BnPdCl(PPh 3 ) 2 , etc.) by added phosphine ligands or organotin reagents is also common [ 6 ] Oxidative addition to the 14-electron Pd(0) complex is proposed. This process gives a 16-electron Pd(II) species. It has been suggested that anionic ligands , such as OAc , accelerate this step by the formation of [Pd(OAc)(PR 3 ) n ] − , making the palladium species more nucleophillic. [ 11 ] [ 25 ] In some cases, especially when an sp 3 -hybridized organohalide is used, an S N 2 type mechanism tends to prevail, yet this is not as commonly seen in the literature. [ 11 ] [ 25 ] However, despite normally forming a cis -intermediate after a concerted oxidative addition , this product is in rapid equilibrium with its trans -isomer. [ 26 ] [ 27 ] There are multiple reasons why isomerization is favored here. First, a bulky ligand set is usually used in these processes, such as phosphines , and it is highly unfavorable for them to adopt a cis orientation relative to each other, resulting in isomerization to the more favorable trans product. [ 26 ] [ 27 ] An alternative explanation for this phenomenon, dubbed antisymbiosis or transphobia, is by invocation of the sd n model. [ 24 ] [ 28 ] Under this theory, palladium is a hypervalent species . Hence R 1 and the trans ligand, being trans to each other, will compete with one palladium orbital for bonding. This 4-electron 3-center bond is weakest when two strong donating groups are present, which heavily compete for the palladium orbital. Relative to any ligand normally used, the C-donor R 1 ligand has a much higher trans effect . This trans influence is a measure of how competitive ligands trans to each other will compete for palladium's orbital. The usual ligand set, phosphines, and C-donors (R 1 ) are both soft ligands, meaning that they will form strong bonds to palladium , and heavily compete with each other for bonding. [ 29 ] [ 30 ] Since halides or pseudohalides are significantly more electronegative , their bonding with palladium will be highly polarized , with most of the electron density on the X group, making them low trans effect ligands. Hence, it will be highly favorable for R 1 to be trans to X, since the R 1 group will be able to form a stronger bond to the palladium. [ 24 ] [ 28 ] [ 30 ] The transmetallation of the trans intermediate from the oxidative addition step is believed to proceed via a variety of mechanisms depending on the substrates and conditions. The most common type of transmetallation for the Stille coupling involves an associative mechanism . This pathway implies that the organostannane , normally a tin atom bonded to an allyl, alkenyl, or aryl group, can coordinate to the palladium via one of these double bonds. This produces a fleeting pentavalent, 18-electron species , which can then undergo ligand detachment to form a square planar complex again. Despite the organostannane being coordinated to the palladium through the R 2 group, R 2 must be formally transferred to the palladium (the R 2 -Sn bond must be broken), and the X group must leave with the tin, completing the transmetalation. This is believed to occur through two mechanisms. [ 31 ] First, when the organostannane initially adds to the trans metal complex, the X group can coordinate to the tin , in addition to the palladium, producing a cyclic transition state . 
Breakdown of this adduct results in the loss of R 3 Sn-X and a trivalent palladium complex with R 1 and R 2 present in a cis relationship. Another commonly seen mechanism involves the same initial addition of the organostannane to the trans palladium complex as seen above; however, in this case, the X group does not coordinate to the tin, producing an open transition state . After the α-carbon relative to tin attacks the palladium, the tin complex will leave with a net positive charge. In the scheme below, please note that the double bond coordinating to tin denotes R 2 , so any alkenyl , allyl , or aryl group. Furthermore, the X group can dissociate at any time during the mechanism and bind to the Sn + complex at the end. Density functional theory calculations predict that an open mechanism will prevail if the 2 ligands remain attached to the palladium and the X group leaves, while the cyclic mechanism is more probable if a ligand dissociates prior to the transmetalation . Hence, good leaving groups such as triflates in polar solvents favor the cyclic transition state, while bulky phosphine ligands will favor the open transition state. [ 31 ] A less common pathway for transmetalation is through a dissociative or solvent assisted mechanism. Here, a ligand from the tetravalent palladium species dissociates, and a coordinating solvent can add onto the palladium. When the solvent detaches, to form a 14-electron trivalent intermediate, the organostannane can add to the palladium , undergoing an open or cyclic type process as above. [ 31 ] In order for R 1 -R 2 to reductively eliminate , these groups must occupy mutually cis coordination sites. Any trans -adducts must therefore isomerize to the cis intermediate or the coupling will be frustrated. A variety of mechanisms exist for reductive elimination and these are usually considered to be concerted. [ 11 ] [ 32 ] [ 33 ] First, the 16-electron tetravalent intermediate from the transmetalation step can undergo unassisted reductive elimination from a square planar complex. This reaction occurs in two steps: first, the reductive elimination is followed by coordination of the newly formed sigma bond between R 1 and R 2 to the metal, with ultimate dissociation yielding the coupled product. [ 11 ] [ 32 ] [ 33 ] The previous process, however, is sometimes slow and can be greatly accelerated by dissociation of a ligand to yield a 14-electron T shaped intermediate . This intermediate can then rearrange to form a Y-shaped adduct, which can undergo faster reductive elimination. [ 11 ] [ 32 ] [ 33 ] Finally, an extra ligand can associate to the palladium to form an 18-electron trigonal bipyramidal structure, with R 1 and R 2 cis to each other in equatorial positions. The geometry of this intermediate makes it similar to the Y-shaped above. [ 11 ] [ 32 ] [ 33 ] The presence of bulky ligands can also increase the rate of elimination. Ligands such as phosphines with large bite angles cause steric repulsion between L and R 1 and R 2 , resulting in the angle between L and the R groups to increase and the angle between R 1 and R 2 to hence decrease, allowing for quicker reductive elimination . [ 11 ] [ 24 ] The rate at which organostannanes transmetalate with palladium catalysts is shown below. Sp 2 -hybridized carbon groups attached to tin are the most commonly used coupling partners, and sp 3 -hybridized carbons require harsher conditions and terminal alkynes may be coupled via a C-H bond through the Sonogashira reaction . 
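Collecting the three elementary steps discussed above, the overall catalytic cycle can be sketched compactly as follows. This is a simplified scheme only: the ancillary ligands on palladium and the cis/trans isomerization details described above are omitted.

```latex
% Simplified sketch of the Stille catalytic cycle described above
% (ancillary ligands on Pd and isomerization steps omitted for clarity).
\begin{align*}
\text{oxidative addition:}    &\quad \mathrm{Pd^{0}} + R^{1}{-}X \longrightarrow R^{1}{-}\mathrm{Pd^{II}}{-}X \\
\text{transmetalation:}       &\quad R^{1}{-}\mathrm{Pd^{II}}{-}X + R^{2}{-}\mathrm{SnR_{3}} \longrightarrow R^{1}{-}\mathrm{Pd^{II}}{-}R^{2} + X{-}\mathrm{SnR_{3}} \\
\text{reductive elimination:} &\quad R^{1}{-}\mathrm{Pd^{II}}{-}R^{2} \longrightarrow R^{1}{-}R^{2} + \mathrm{Pd^{0}}
\end{align*}
```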
As the organic tin compound, a trimethylstannyl or tributylstannyl compound is normally used. Although trimethylstannyl compounds show higher reactivity compared with tributylstannyl compounds and have much simpler 1H-NMR spectra, the toxicity of the former is much greater. [34] Optimizing which ligands are best at carrying out the reaction with high yield and turnover rate can be difficult. This is because the oxidative addition requires an electron-rich metal, hence favoring electron-donating ligands. However, an electron-deficient metal is more favorable for the transmetalation and reductive elimination steps, making electron-withdrawing ligands the best there. Therefore, the optimal ligand set depends heavily on the individual substrates and conditions used; these can change the rate-determining step as well as the mechanism of the transmetalation step. [35] Normally, ligands of intermediate donicity, such as phosphines, are utilized. Rate enhancements can be seen when moderately electron-poor ligands, such as tri-2-furylphosphine or triphenylarsine, are used. Likewise, ligands of high donor number can slow down or inhibit coupling reactions. [35] [36] These observations imply that, normally, the rate-determining step for the Stille reaction is transmetalation. [36] The most common additive to the Stille reaction is stoichiometric or co-catalytic copper(I), specifically copper(I) iodide, which can enhance rates by more than 10³-fold. It has been theorized that in polar solvents copper transmetalates with the organostannane; the resulting organocuprate reagent could then transmetalate with the palladium catalyst. Furthermore, in ethereal solvents, the copper could also facilitate the removal of a phosphine ligand, activating the Pd center. [9] [37] [38] [39] [40] Lithium chloride has been found to be a powerful rate accelerant in cases where the X group dissociates from palladium (i.e. the open mechanism). The chloride ion is believed either to displace the X group on the palladium, making the catalyst more active for transmetalation, or to coordinate to the Pd(0) adduct, accelerating the oxidative addition. LiCl also enhances the polarity of the solvent, making it easier for this normally anionic ligand (–Cl, –Br, –OTf, etc.) to leave. This additive is necessary when a solvent like THF is used; however, utilization of a more polar solvent, such as NMP, can replace the need for this salt additive. When the coupling's transmetalation step proceeds via the cyclic mechanism, however, addition of lithium chloride can actually decrease the rate, since in the cyclic mechanism a neutral ligand, such as a phosphine, must dissociate instead of the anionic X group. [10] [41] Finally, sources of fluoride ions, such as cesium fluoride, also affect the catalytic cycle. First, fluoride can increase the rates of reactions of organotriflates, possibly by the same effect as lithium chloride. Furthermore, fluoride ions can act as scavengers for tin byproducts, making them easier to remove via filtration. [39] The most common side reactivity associated with the Stille reaction is homocoupling of the stannane reagents to form an R2–R2 dimer. It is believed to proceed through two possible mechanisms. First, reaction of two equivalents of organostannane with the Pd(II) precatalyst yields the homocoupled product after reductive elimination. Second, the Pd(0) catalyst can undergo a radical process to yield the dimer.
The organostannane reagent used is traditionally tetravalent at tin, normally consisting of the sp2-hybridized group to be transferred and three "non-transferable" alkyl groups. As seen above, alkyl groups are normally the slowest at migrating onto the palladium catalyst. [10] It has also been found that at temperatures as low as 50 °C, aryl groups on both the palladium and a coordinated phosphine can exchange; while normally not detected, the exchange products can appear as minor byproducts in many cases. [10] Finally, a rather rare and exotic side reaction is known as cine substitution. Here, after initial oxidative addition of an aryl halide, the Pd-Ar species can insert across a vinyltin double bond. After β-hydride elimination, migratory insertion, and protodestannylation, a 1,2-disubstituted olefin can be synthesized. [10] Numerous other side reactions can occur, including E/Z isomerization, which can potentially be a problem when an alkenylstannane is utilized; the mechanism of this transformation is currently unknown. Normally, organostannanes are quite stable to hydrolysis, yet when very electron-rich aryl stannanes are used, this can become a significant side reaction. [10] Vinyl halides are common coupling partners in the Stille reaction, and reactions of this type are found in numerous natural product total syntheses. Normally, vinyl iodides and bromides are used; vinyl chlorides are insufficiently reactive toward oxidative addition to Pd(0). Iodides are normally preferred: they will typically react faster and under milder conditions than bromides. This difference is demonstrated below by the selective coupling of a vinyl iodide in the presence of a vinyl bromide. [10] Normally, the stereochemistry of the alkene is retained throughout the reaction, except under harsh reaction conditions. A variety of alkenes may be used, including both α- and β-halo-α,β-unsaturated ketones, esters, and sulfoxides (which normally need a copper(I) additive to proceed), and more (see example below). [42] Vinyl triflates are also sometimes used. Some reactions require the addition of LiCl and others are slowed down by it, implying that two mechanistic pathways are present. [10] Another class of common electrophiles is aryl and heterocyclic halides. As with the vinyl substrates, bromides and iodides are more common despite their greater expense. A multitude of aryl groups can be chosen, including rings substituted with electron-donating substituents, biaryl rings, and more. Halogen-substituted heterocycles have also been used as coupling partners, including pyridines, furans, thiophenes, thiazoles, indoles, imidazoles, purines, uracil, cytosines, pyrimidines, and more (see below for a table of heterocycles; halogens can be substituted at a variety of positions on each). [10] Below is an example of the use of the Stille coupling to build complexity on heterocycles of nucleosides, such as purines. [43] Aryl triflates and sulfonates also couple to a wide variety of organostannane reagents. Triflates tend to react comparably to bromides in the Stille reaction. [10] Acyl chlorides are also used as coupling partners and can be used with a large range of organostannanes, even alkyl-tin reagents, to produce ketones (see example below). [44] However, it is sometimes difficult to introduce acyl chloride functional groups into large molecules bearing sensitive functional groups.
An alternative developed to address this problem is the Stille-carbonylative cross-coupling reaction, which introduces the carbonyl group via carbon monoxide insertion. [10] Allylic, benzylic, and propargylic halides can also be coupled. While commonly employed, allylic halides proceed via an η3 transition state, allowing for coupling with the organostannane at either the α or γ position, occurring predominantly at the least substituted carbon (see example below). [45] Alkenyl epoxides (adjacent epoxides and alkenes) can also undergo this same coupling through an η3 transition state, opening the epoxide to an alcohol. While allylic and benzylic acetates are commonly used, propargylic acetates are unreactive with organostannanes. [10] Organostannane reagents are common, and several are commercially available. [46] Stannane reagents can be synthesized by the reaction of a Grignard or organolithium reagent with trialkyltin chlorides. For example, vinyltributyltin is prepared by the reaction of vinylmagnesium bromide with tributyltin chloride. [47] Hydrostannylation of alkynes or alkenes provides many derivatives. Organotin reagents are air- and moisture-stable, and some reactions can even take place in water. [48] They can be purified by chromatography, and they are tolerant of most functional groups. Some organotin compounds are heavily toxic, especially trimethylstannyl derivatives. [10] The use of vinylstannane, or alkenylstannane, reagents is widespread. [10] With regard to limitations, both very bulky stannane reagents and stannanes with substitution on the α-carbon tend to react sluggishly or require optimization. For example, in the case below, the α-substituted vinylstannane only reacts with a terminal iodide due to steric hindrance. [49] Arylstannane reagents are also common, and both electron-donating and electron-withdrawing groups actually increase the rate of the transmetalation; this again implies that two mechanisms of transmetalation can occur. The only limitation of these reagents is that substituents at the ortho position, even ones as small as methyl groups, can decrease the rate of reaction. A wide variety of heterocycles (see Electrophile section) can also be used as coupling partners (see example with a thiazole ring below). [10] [50] Alkynylstannanes, the most reactive of the stannanes, have also been used in Stille couplings. They are not usually needed, as terminal alkynes can couple directly to palladium catalysts through their C-H bond via Sonogashira coupling. Allylstannanes have been reported to work, yet, as with allylic halides, difficulties arise in controlling the regioselectivity between α and γ addition. Distannane and acylstannane reagents have also been used in Stille couplings. [10] The Stille reaction has been used in the synthesis of a variety of polymers. [51] [52] [53] However, the most widespread use of the Stille reaction is in organic synthesis, and specifically in the synthesis of natural products. Larry Overman's 19-step enantioselective total synthesis of quadrigemine C involves a double Stille cross-coupling reaction. [6] [54] The complex organostannane is coupled onto two aryl iodide groups, and after a double Heck cyclization the product is obtained. Panek's 32-step enantioselective total synthesis of the ansamycin antibiotic (+)-mycotrienol makes use of a late-stage tandem Stille-type macrocycle coupling. Here, the organostannane has two terminal tributyltin groups attached to an alkene.
This organostannane "stitches" the two ends of the linear starting material into a macrocycle, adding the missing two methylene units in the process. After oxidation of the aromatic core with ceric ammonium nitrate (CAN) and deprotection with hydrofluoric acid yields the natural product in 54% yield for the 3 steps. [ 6 ] [ 55 ] Stephen F. Martin and coworkers' 21 step enantioselective total synthesis of the manzamine antitumor alkaloid Ircinal A makes use of a tandem one-pot Stille/Diels-Alder reaction. An alkene group is added to vinyl bromide, followed by an in situ Diels-Alder cycloaddition between the added alkene and the alkene in the pyrrolidine ring. [ 6 ] [ 56 ] Numerous other total syntheses utilize the Stille reaction, including those of oxazolomycin, [ 57 ] lankacidin C, [ 58 ] onamide A, [ 59 ] calyculin A, [ 60 ] lepicidin A, [ 61 ] ripostatin A, [ 62 ] and lucilactaene. [ 6 ] [ 63 ] The image below displays the final natural product , the organohalide (blue), the organostannane (red), and the bond being formed (green and circled). From these examples, it is clear that the Stille reaction can be used both at the early stages of the synthesis (oxazolomycin and calyculin A), at the end of a convergent route (onamide A, lankacidin C, ripostatin A), or in the middle (lepicidin A and lucilactaene). The synthesis of ripostatin A features two concurrent Stille couplings followed by a ring-closing metathesis . The synthesis of lucilactaene features a middle subunit, having a borane on one side and a stannane on the other, allowing for a Stille reaction followed by a subsequent Suzuki coupling. In addition to performing the reaction in a variety of organic solvents, conditions have been devised which allow for a broad range of Stille couplings in aqueous solvent. [ 14 ] In the presence of Cu(I) salts, palladium-on-carbon has been shown to be an effective catalyst. [ 64 ] [ 65 ] In the realm of green chemistry a Stille reaction is reported taking place in a low melting and highly polar mixture of a sugar such as mannitol , a urea such as dimethylurea and a salt such as ammonium chloride [ 66 ] . [ 67 ] The catalyst system is tris(dibenzylideneacetone)dipalladium(0) with triphenylarsine : A common alteration to the Stille coupling is the incorporation of a carbonyl group between R 1 and R 2 , serving as an efficient method to form ketones . This process is extremely similar to the initial exploration by Migita and Stille (see History) of coupling organostannane to acyl chlorides . However, these moieties are not always readily available and can be difficult to form, especially in the presence of sensitive functional groups . Furthermore, controlling their high reactivity can be challenging. The Stille-carbonylative cross-coupling employs the same conditions as the Stille coupling, except with an atmosphere of carbon monoxide (CO) being used. The CO can coordinate to the palladium catalyst ( 9 ) after initial oxidative addition, followed by CO insertion into the Pd-R 1 bond ( 10 ), resulting in subsequent reductive elimination to the ketone ( 12 ). The transmetalation step is normally the rate-determining step . [ 6 ] Larry Overman and coworkers make use of the Stille-carbonylative cross-coupling in their 20-step enantioselective total synthesis of strychnine . The added carbonyl is later converted to a terminal alkene via a Wittig reaction , allowing for the key tertiary nitrogen and the pentacyclic core to be formed via an aza- Cope - Mannich reaction . 
[6] [68] Giorgio Ortar et al. explored how the Stille-carbonylative cross-coupling could be used to synthesize benzophenone photophores. These were embedded into 4-benzoyl-L-phenylalanine peptides and used for their photoaffinity labelling properties to explore various peptide-protein interactions. [6] [69] Louis Hegedus' 16-step racemic total synthesis of jatrophone involved a Stille-carbonylative cross-coupling as its final step to form the 11-membered macrocycle; instead of a halide, a vinyl triflate is used there as the coupling partner. [6] [70] Building on the seminal 1976 publication by Eaborn, which forms arylstannanes from aryl halides and distannanes, T. Ross Kelly applied this process to the intramolecular coupling of aryl halides. This tandem stannylation/aryl halide coupling was used for the synthesis of a variety of dihydrophenanthrenes. Most of the internal rings formed are limited to 5 or 6 members; however, some cases of macrocyclization have been reported. Unlike in a normal Stille coupling, chlorine does not work as a halogen, possibly due to its lower reactivity in the halogen sequence (its shorter bond length and stronger bond dissociation energy make it more difficult to break via oxidative addition). Starting in the middle of the scheme below and going clockwise, the palladium catalyst (1) oxidatively adds to the most reactive C-X bond (13) to form 14, followed by transmetalation with the distannane (15) to yield 16 and reductive elimination to yield an arylstannane (18). The regenerated palladium catalyst (1) can then oxidatively add to the second C-X bond of 18 to form 19, followed by intramolecular transmetalation to yield 20 and reductive elimination to yield the coupled product (22). [6] Jie Jack Li et al. made use of the Stille-Kelly coupling in their synthesis of a variety of benzo[4,5]furopyridine ring systems. They invoke a three-step process involving a Buchwald-Hartwig amination, another palladium-catalyzed coupling reaction, followed by an intramolecular Stille-Kelly coupling. Note that the aryl-iodide bond will oxidatively add to the palladium faster than either of the aryl-bromide bonds. [6] [71]
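The two-stage Stille-Kelly sequence described above can be summarized schematically as follows. This is a simplified sketch with the palladium ligands omitted: Ar and Ar' denote the two tethered aryl groups of the same molecule, and X is a bromide or iodide.

```latex
% Simplified sketch of the Stille-Kelly sequence described above
% (palladium ligands omitted; Ar and Ar' are tethered in one molecule; X = Br or I).
\begin{align*}
\text{Pd-catalyzed stannylation:}      &\quad \mathrm{Ar{-}X} + \mathrm{R_3Sn{-}SnR_3} \longrightarrow \mathrm{Ar{-}SnR_3} + \mathrm{R_3Sn{-}X} \\
\text{intramolecular Stille coupling:} &\quad \mathrm{Ar{-}SnR_3} + \mathrm{X{-}Ar'} \longrightarrow \mathrm{Ar{-}Ar'} + \mathrm{R_3Sn{-}X}
\end{align*}
```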
https://en.wikipedia.org/wiki/Stille_reaction
Stilts are poles, posts or pillars used to allow a structure or building to stand at a distance above the ground or water. In flood plains, and on beaches or unstable ground, buildings are often constructed on stilts to protect them from damage by water, waves or shifting soil or sand. As these issues were commonly faced by many societies around the world, stilts have become synonymous with various places and cultures, particularly in South East Asia and Venice . Stilts are a common architectural element in tropical architecture, especially in Southeast Asia and South America , but can be found worldwide. Stilts also have a large prominence in Oceania and Europe as well as the Arctic , where the stilts elevate houses above the permafrost . The length of stilts may vary widely; stilts of traditional houses can be measured from half a meter to 5 or 6 meters. Stilt houses have been used for millennia, with evidence in the European Alps that stilt houses were constructed on a lake over 6000 years ago [ 1 ] and Herodotus making reference to stilt housing on lakes in Paeonia . [ 2 ] Settlements primarily composed of stilt housing are common in Micronesia and in Oceania. Stilt homes in South America date back to Pre-Columbian times, with early explorers such as Vespucci noting the houses built on stilts by the local people whilst exploring, consequently giving the area the name Venezuela , or “Little Venice”. [ citation needed ] In the 18th Century, Jesuit João Daniel noted “Many nations live on lakes, or among them, where they have, over the water, their houses made of the same sort, only with the amend of being out of hay, that they erect with poles, and palm tree branches, and in them they live joyfully, like fish in the water” whilst travelling in the Brazilian Amazon Rainforest . [ 3 ] On the island of Chiloé , modern dwellers have incorporated stilts into house design due to local seismic activity causing tides up to 7 metres in height. [ 4 ] Stilts were utilised by Inuit inhabiting the Bering Strait and Western Alaska , with stilts used to create level terraces for the community inhabiting Ugiuvak , also known as King Island. These stilt homes had a platform and a walrus skin roof and were built on up to 45 degree inclines, with the stilts largely constructed out of driftwood , due to the islands lack of forest cover. Many storehouses in the Bering Strait and nearby areas inhabited by the Yup’ik people on the mainland were constructed with driftwood stilts, a concept found in many regions around the world, usually to prevent pests from damaging food. [ 5 ] There are many types and names of stilt housing, including: Many regions that utilise stilts in housing and architecture globally often face similar challenges to each other. Communities in tropical regions , wetlands , or other environments prone to high levels of moisture often utilise stilts to solve a particular issue facing an area. One of the largest reasons stilts are used in vernacular architecture is to provide thermal comfort for inhabitants. For example, a study surveying the traditional stilt housing utilised by the Dong minority in Southern China, discovered that the airflow from elevating a house significantly cooled the house down. [ 6 ] Furthermore, the majority of people surveyed were satisfied with the natural cooling of their stilt homes in the hot, humid summer months as compared to people living in modern housing. 
Stilt housing also provides a large area to store commodities during non-flooding events, with many people using the bottom area to store livestock or items, or as entertainment areas. Stilts are often used in buildings where there is a regular risk of flooding. Tropical regions can experience large quantities of rainfall in a small amount of time, often causing long and devastating floods for local people. The force of floodwaters often destroys buildings, meaning many people in flood communities build their houses on stilts such that they are well protected from high flood levels. Modelling of floodwaters acting on stilts and pillars in traditional and modern Thai stilts show that by using suitable simple construction methods, stilt houses can withstand large flooding events, protecting people and their possessions from being destroyed. [ 7 ] Whilst the short term durability of stilt housing prevents consistent destruction, [ 7 ] the materials often used to make stilts can be damaged. This is due to materials becoming overstressed by flash flooding , where a large enough load is applied to the stilt that is large enough to cause deformation or damage, potentially causing structural failure or other serious damage to the building. Stilt homes which have been built using wooden pillars can rot due to general humidity or after being wet by flooding, compromising structural integrity. [ 8 ] Despite providing cooling due to elevating, stilts can adversely affect the thermal efficiency of building, making it more expensive to heat/cool using technologies such as air-conditioning. A study on stilt houses in Chile found that traditional construction methods resulted in an average of 30.25% of heat losses in stilt houses came from the open floor, increasing the energy consumption of each home. [ 9 ] A large social disadvantage of stilt housing is the difficulties faced by people with mobility issues. [ 7 ] The stairs leading up to the main floor may often be inaccessible to people with disabilities such as people who are in a wheelchair. While an elevator may be added, this is often an expensive investment and cannot be afforded by people in remote communities, or feasible with local issues such as regular flooding. In traditional stilt houses, wood is a prevalent structural material used to manufacture the stilts. This is usually from a local lumber source, with many traditional stilt houses in Asia using bamboo for structural support. [ 8 ] In modern homes, concrete and steel are often used as construction material for the structural stilts in houses. In the Avieiras stilt houses along the Tagus River in Portugal , canes growing by the riverbank and trunks of large trees were used as stilts to support the homes of local fisherman. [ 10 ] Over time, concrete slabs have been added to support the wood and extend the pillars foundation into the ground, making buildings more stable in the case of flooding. [ 11 ] Over the years many cultures have modified aspects of their construction method to improve the stability and strength of buildings on stilts. In Sumatra , severe damage from flooding and other natural disasters has modernised many aspects of stilt house construction, with concrete being added to foundations of some buildings more prone to such events such as flooding, earthquakes, and large storms. By using concrete slabs in construction as well as by using concrete pillars, the stilts supporting the main building on top have been less damaged by recent events as compared to previous years. 
The improvement of technologies such as the durability of nails and screws has also made the connections between the pillar and various beams stronger. Often the materials used in stilt housing reflect the challenges of its location. For example, a building with foundations underwater for most of the time often uses wood or reinforced concrete as the main material for stilts. A building that sits on ground that is only flood-prone, however, can have brick and mortar as the primary structural element. Another type of stilts involves wooden stilts with ballasts that allow a building to float freely in water. This can allow a large amount of water to enter an area while the buildings stay safely afloat, reducing damage to a building during flooding events or from waves, winds, or tides. These stilts must be designed to provide the floating building with stability and buoyancy. This construction technique of developing a floating village is seen globally, from Peru to Hong Kong. Some floating villages in Vietnam are composed of a raft fixed to wooden stilts that are driven into the shallow sea floor; these stilts are periodically replaced every 30 years. [12] In Indonesia, there are a variety of construction methods used in stilt houses. Foundations used for stilts include concrete pedestals or piles, with joints being fixed using screws or nails or formed as detachable interlocking wooden joints. A mix of continued pillars, where two pillars are connected directly vertically, or discontinued pillars, where a plate is placed in between the two pillars, is used depending on local constraints. This durable building style has allowed some stilt dwellings to surpass 100 years in age. [13] Whilst fleeing the barbarians pillaging the Italian Peninsula in the 6th century, [14] Roman farmers built elevated huts on wooden stilts on and surrounding the islands in the Venetian Lagoon. Over time, as Venetian power and the local population grew, the city expanded, and the foundations of the city were required to be stronger and more durable. As such, the Venetians drove approximately 18-metre-long (60-foot) wooden poles, manufactured from oak, larch, or pine from local forests, into the ground to serve as the foundations of the city. [15] These stilts were driven deep through the unstable silt and dirt and into the hard clay beneath, allowing for a strong and stable structure. While wood is susceptible to rot and decay, the lack of dissolved oxygen in the mud protects the wood from significant rot, with some wooden Venetian foundations being over 500 years old. The disadvantage of using this system is that industrial activity in the city often causes the city to sink at an increased rate. [16] For example, artesian wells constructed in the 1960s were originally drilled to give the city a reliable supply of fresh drinking water, as the water in the lagoon is entirely salt water. However, as water was pumped from the wells, Venice began to sink faster, leading to a ban on wells in the city due to the sensitivity of the foundations to surrounding construction. Architecture and housing play an integral role in a culture, allowing for artistic expression in day-to-day life. The Dong minority in the Guangxi province of China decorate all aspects of their homes, including the pillars that support the house. [6] With modern construction using concrete instead of wood, many locals create a façade to ensure the style of housing remains consistent with the traditional style that defines the local culture.
The area between the first floor and the ground is often used to store livestock. Stilts have become embedded in Thai architectural culture, with stilt housing making up a significant proportion of the country's housing in agricultural regions such as Uttaradit and Phetchabun. [7] Many buildings, even those away from areas prone to flooding, incorporate stilts into their design, such as temples. Due to the prominence of such buildings in Thailand, the architecture there is often associated with stilts. In Indonesia, the construction of the house symbolizes the division of the macrocosm into three regions: the upper world, the seat of deities and ancestors; the middle world, the realm of humans; and the lower world, the realm of demons and malevolent spirits. The typical way of building in Southeast Asia is to build on stilts, an architectural form usually combined with a saddle roof. [17] The usage of stilts in homes in Indonesia has been dated back hundreds of years. [11] Many styles of vernacular buildings have been developed depending on the needs of the people and the dynamics of the environment. Recent disasters such as tsunamis and flooding in the Teunom region of Sumatra have forced the modernisation of building materials and methods, with concrete replacing the wooden foundations of many houses. The area at the bottom of the building, referred to as the stage area, is often used aesthetically, with fruits and flowers being commonplace in the space. Stilts can be found in Indonesian vernacular architecture such as Dayak long houses, [17] Torajan Tongkonan, Minangkabau Rumah Gadang, and Malay houses. The construction is known locally as Rumah Panggung (lit. "stage house"), houses built on stilts. This was to avoid wild animals and floods, to deter thieves, and for added ventilation. In Sumatra, traditional stilted houses are designed to avoid dangerous wild animals, such as snakes and tigers, while in areas located close to the big rivers of Sumatra and Borneo, the stilts help to elevate the house above the flood level. The development of the Avieira architecture along the Tagus River in Portugal [10] arose from seasonal migration: cold winters meant fishermen would fish in rivers instead of the ocean, developing communities along the shoreline. Painting the exterior, including the stilts, usually green, red, blue, or orange, gave individual expression to the fishermen, who usually built the houses themselves.
https://en.wikipedia.org/wiki/Stilts_(architecture)
In quantum optics, stimulated Raman adiabatic passage (STIRAP) is a process that permits transfer of a population between two applicable quantum states via at least two coherent electromagnetic (light) pulses. [1] [2] These light pulses drive the transitions of the three-level Λ atom or multilevel system. [3] [4] The process is a form of state-to-state coherent control.

Consider a three-level Λ atom having ground states |g1⟩ and |g2⟩ (for simplicity suppose that the energies of the ground states are the same) and excited state |e⟩. Suppose that in the beginning the total population is in the ground state |g1⟩. By applying pulses with specific shapes and durations, the initially unpopulated states |g2⟩ and |e⟩ are first coupled, and afterward the superposition of states |g2⟩ and |e⟩ is coupled to the state |g1⟩. Thereby a state is formed that permits the transfer of the population into state |g2⟩ without populating the excited state |e⟩. This process of transferring the population without populating the excited state is called stimulated Raman adiabatic passage. [5]

Consider states |1⟩, |2⟩ and |3⟩ with the goal of transferring population initially in state |1⟩ to state |3⟩ without populating state |2⟩. Allow the system to interact with two coherent radiation fields, the pump and Stokes fields. Let the pump field couple only states |1⟩ and |2⟩ and the Stokes field couple only states |2⟩ and |3⟩, for instance due to far-detuning or selection rules. Denote the Rabi frequencies and detunings of the pump and Stokes couplings by Ω_P/S and Δ_P/S. Setting the energy of state |2⟩ to zero, the rotating-wave Hamiltonian is given by

$$H_{\mathrm{RWA}} = -\hbar\Delta_P\,|1\rangle\langle 1| + \hbar\Delta_S\,|3\rangle\langle 3| + \frac{\hbar\Omega_P}{2}\left(|1\rangle\langle 2| + \mathrm{h.c.}\right) + \frac{\hbar\Omega_S}{2}\left(|3\rangle\langle 2| + \mathrm{h.c.}\right)$$

The energy ordering of the states is not critical, and here it is taken so that E1 < E2 < E3 only for concreteness. Λ and V configurations can be realized by changing the signs of the detunings. Shifting the energy zero by Δ_P allows the Hamiltonian to be written in the more configuration-independent form

$$H_{\mathrm{RWA}} = \hbar\begin{pmatrix} 0 & \frac{\Omega_P}{2} & 0 \\ \frac{\Omega_P}{2} & \Delta & \frac{\Omega_S}{2} \\ 0 & \frac{\Omega_S}{2} & \delta \end{pmatrix}$$

Here Δ and δ denote the single- and two-photon detunings respectively.
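As a quick numerical cross-check of the two forms of H_RWA given above, the following Python sketch builds both and confirms that they differ only by the constant shift ħΔ_P times the identity. The Rabi frequencies and detunings are arbitrary illustrative values, and the identification Δ = Δ_P, δ = Δ_P + Δ_S follows simply from comparing the two expressions.

```python
import numpy as np

# Cross-check of the two forms of the RWA Hamiltonian given above
# (hbar = 1; the numeric parameter values are arbitrary illustrative choices).
omega_p, omega_s = 0.7, 1.3
delta_p, delta_s = 0.4, -0.2

e1, e2, e3 = np.eye(3)  # basis kets |1>, |2>, |3> as vectors

# Ket form: -Dp|1><1| + Ds|3><3| + (Wp/2)(|1><2| + h.c.) + (Ws/2)(|3><2| + h.c.)
h_ket = (-delta_p * np.outer(e1, e1) + delta_s * np.outer(e3, e3)
         + 0.5 * omega_p * (np.outer(e1, e2) + np.outer(e2, e1))
         + 0.5 * omega_s * (np.outer(e3, e2) + np.outer(e2, e3)))

# Matrix form after shifting the energy zero by Dp, with
# Delta = Dp (single-photon) and delta = Dp + Ds (two-photon) detunings.
delta, delta2 = delta_p, delta_p + delta_s
h_matrix = np.array([[0.0,           omega_p / 2.0, 0.0],
                     [omega_p / 2.0, delta,         omega_s / 2.0],
                     [0.0,           omega_s / 2.0, delta2]])

# The two forms agree up to the constant energy shift Dp * identity.
print(np.allclose(h_matrix, h_ket + delta_p * np.eye(3)))  # True
```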
STIRAP is achieved on two-photon resonance, δ = 0. Focusing on this case, the energies upon diagonalization of H_RWA are given by

$$E_0 = 0, \qquad E_\pm = \frac{\Delta \pm \sqrt{\Delta^2 + \Omega^2}}{2}$$

where Ω² = Ω_P² + Ω_S². Solving for the E_0 eigenstate (c1 c2 c3)ᵀ, it is seen to obey the condition

$$c_2 = 0, \qquad \Omega_P c_1 + \Omega_S c_3 = 0$$

The first condition reveals that the critical two-photon resonance condition yields a dark state which is a superposition of only the initial and target states. By defining the mixing angle tan θ = Ω_P/Ω_S and utilizing the normalization condition |c1|² + |c3|² = 1, the second condition can be used to express this dark state as

$$|\mathrm{dark}\rangle = \cos\theta\,|1\rangle - \sin\theta\,|3\rangle$$

From this, the STIRAP counter-intuitive pulse sequence can be deduced. At θ = 0, which corresponds to the presence of only the Stokes field (Ω_S ≫ Ω_P), the dark state exactly corresponds to the initial state |1⟩. As the mixing angle is rotated from 0 to π/2, the dark state smoothly interpolates from purely state |1⟩ to purely state |3⟩. The latter case, θ = π/2, corresponds to the opposing limit of a strong pump field (Ω_P ≫ Ω_S). Practically, this corresponds to applying the Stokes and pump field pulses to the system with a slight delay between them while still maintaining significant temporal overlap between the pulses; the delay provides the correct limiting behavior and the overlap ensures adiabatic evolution. A population initially prepared in state |1⟩ will adiabatically follow the dark state and end up in state |3⟩ without populating state |2⟩, as desired. The pulse envelopes can take on fairly arbitrary shapes so long as the time rate of change of the mixing angle is slow compared to the energy splitting with respect to the non-dark states. This adiabatic condition takes its simplest form at the single-photon resonance condition Δ = 0, where it can be expressed as

$$\Omega(t) \gg |\dot{\theta}(t)| = \frac{|\Omega_S(t)\dot{\Omega}_P(t) - \Omega_P(t)\dot{\Omega}_S(t)|}{\Omega(t)^2}$$
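A minimal numerical sketch of this counter-intuitive pulse ordering is given below, assuming ħ = 1, single- and two-photon resonance, and illustrative Gaussian pulse shapes (the amplitudes, widths, and delay are not from the article). It integrates the Schrödinger equation with the matrix form of H_RWA given above and checks that the population ends in |3⟩ while |2⟩ stays nearly empty.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal STIRAP sketch: counter-intuitive pulse ordering (Stokes first,
# pump later, with overlap). Units: hbar = 1; all numbers are illustrative.
def omega_stokes(t):
    return 10.0 * np.exp(-((t + 1.0) / 1.5) ** 2)   # Stokes pulse arrives first

def omega_pump(t):
    return 10.0 * np.exp(-((t - 1.0) / 1.5) ** 2)   # pump pulse arrives later

def schroedinger(t, c):
    # H_RWA on single- and two-photon resonance (Delta = delta = 0)
    h = np.array([[0.0,                 omega_pump(t) / 2.0,  0.0],
                  [omega_pump(t) / 2.0, 0.0,                  omega_stokes(t) / 2.0],
                  [0.0,                 omega_stokes(t) / 2.0, 0.0]])
    return -1j * h @ c

c0 = np.array([1.0, 0.0, 0.0], dtype=complex)        # all population starts in |1>
sol = solve_ivp(schroedinger, (-6.0, 6.0), c0, max_step=0.01, rtol=1e-8)

final_pops = np.abs(sol.y[:, -1]) ** 2
print("final populations of |1>, |2>, |3>:", final_pops.round(3))
print("peak population in |2> during the sweep:", (np.abs(sol.y[1]) ** 2).max().round(3))
# With these slowly varying, overlapping pulses nearly all population ends in |3>
# while the intermediate state |2> stays only weakly populated, as expected for
# adiabatic following of the dark state.
```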
https://en.wikipedia.org/wiki/Stimulated_Raman_adiabatic_passage
Stimulated emission is the process by which an incoming photon of a specific frequency can interact with an excited atomic electron (or other excited molecular state), causing it to drop to a lower energy level. The liberated energy transfers to the electromagnetic field, creating a new photon with a frequency , polarization , and direction of travel that are all identical to the photons of the incident wave. This is in contrast to spontaneous emission , which occurs at a characteristic rate for each of the atoms/oscillators in the upper energy state regardless of the external electromagnetic field. According to the American Physical Society , the first person to correctly predict the phenomenon of stimulated emission was Albert Einstein in a series of papers starting in 1916, culminating in what is now called the Einstein B Coefficient . Einstein's work became the theoretical foundation of the maser and the laser . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The process is identical in form to atomic absorption in which the energy of an absorbed photon causes an identical but opposite atomic transition: from the lower level to a higher energy level. In normal media at thermal equilibrium, absorption exceeds stimulated emission because there are more electrons in the lower energy states than in the higher energy states. However, when a population inversion is present, the rate of stimulated emission exceeds that of absorption, and a net optical amplification can be achieved. Such a gain medium , along with an optical resonator, is at the heart of a laser or maser. Lacking a feedback mechanism, laser amplifiers and superluminescent sources also function on the basis of stimulated emission. Electrons and their interactions with electromagnetic fields are important in our understanding of chemistry and physics . In the classical view , the energy of an electron orbiting an atomic nucleus is larger for orbits further from the nucleus of an atom . However, quantum mechanical effects force electrons to take on discrete positions in orbitals . Thus, electrons are found in specific energy levels of an atom, two of which are shown below: When an electron absorbs energy either from light (photons) or heat ( phonons ), it receives that incident quantum of energy. But transitions are only allowed between discrete energy levels such as the two shown above. This leads to emission lines and absorption lines . When an electron is excited from a lower to a higher energy level, it is unlikely for it to stay that way forever. An electron in an excited state may decay to a lower energy state which is not occupied, according to a particular time constant characterizing that transition. When such an electron decays without external influence, emitting a photon, that is called " spontaneous emission ". The phase and direction associated with the photon that is emitted is random. A material with many atoms in such an excited state may thus result in radiation which has a narrow spectrum (centered around one wavelength of light), but the individual photons would have no common phase relationship and would also emanate in random directions. This is the mechanism of fluorescence and thermal emission . An external electromagnetic field at a frequency associated with a transition can affect the quantum mechanical state of the atom without being absorbed. 
As the electron in the atom makes a transition between two stationary states (neither of which shows a dipole field), it enters a transition state which does have a dipole field and which acts like a small electric dipole, and this dipole oscillates at a characteristic frequency. In response to the external electric field at this frequency, the probability of the electron entering this transition state is greatly increased. Thus, the rate of transitions between two stationary states is increased beyond that of spontaneous emission. A transition from the higher to a lower energy state produces an additional photon with the same phase and direction as the incident photon; this is the process of stimulated emission.

Stimulated emission was a theoretical discovery by Albert Einstein within the framework of the old quantum theory, wherein the emission is described in terms of photons that are the quanta of the EM field. [5] [6] Stimulated emission can also occur in classical models, without reference to photons or quantum mechanics. [7] (See also Laser § History.) According to physics professor and director of the MIT-Harvard Center for Ultracold Atoms Daniel Kleppner, Einstein's theory of radiation was ahead of its time and prefigures the modern theory of quantum electrodynamics and quantum optics by several decades. [8]

Stimulated emission can be modelled mathematically by considering an atom that may be in one of two electronic energy states, a lower level state (possibly the ground state) (1) and an excited state (2), with energies E1 and E2 respectively. If the atom is in the excited state, it may decay into the lower state by the process of spontaneous emission, releasing the difference in energies between the two states as a photon. The photon will have frequency ν0 and energy hν0, given by

$$E_2 - E_1 = h\,\nu_0$$

where h is the Planck constant. Alternatively, if the excited-state atom is perturbed by an electric field of frequency ν0, it may emit an additional photon of the same frequency and in phase, thus augmenting the external field and leaving the atom in the lower energy state. This process is known as stimulated emission. In a group of such atoms, if the number of atoms in the excited state is given by N2, the rate at which stimulated emission occurs is given by

$$\frac{\partial N_2}{\partial t} = -\frac{\partial N_1}{\partial t} = -B_{21}\,\rho(\nu)\,N_2$$

where the proportionality constant B21 is known as the Einstein B coefficient for that particular transition, and ρ(ν) is the radiation density of the incident field at frequency ν. The rate of emission is thus proportional to the number of atoms in the excited state N2 and to the density of incident photons. At the same time, there will be a process of atomic absorption which removes energy from the field while raising electrons from the lower state to the upper state. Its rate is precisely the negative of the stimulated emission rate,

$$\frac{\partial N_2}{\partial t} = -\frac{\partial N_1}{\partial t} = B_{12}\,\rho(\nu)\,N_1.$$

The rate of absorption is thus proportional to the number of atoms in the lower state, N1.
The B coefficients can be calculated using the dipole approximation and time-dependent perturbation theory in quantum mechanics as: [ 9 ] [ 10 ] $B_{ab} = \frac{e^2}{6\epsilon_0\hbar^2}\,|\langle a|\vec{r}\,|b\rangle|^2$, where this form of the B coefficient corresponds to an energy distribution expressed in terms of frequency $\nu$. The B coefficient may vary based on the choice of energy distribution function used; however, the product of the energy distribution function and its respective B coefficient remains the same. Einstein showed from the form of Planck's law [ citation needed ] that the coefficient for this transition must be identical to that for stimulated emission: $B_{12} = B_{21}$. Thus absorption and stimulated emission are reverse processes proceeding at somewhat different rates. Another way of viewing this is to look at the net stimulated emission or absorption, treating it as a single process. The net rate of transitions from $E_2$ to $E_1$ due to this combined process can be found by adding their respective rates, given above: $\frac{\partial N_1^{\text{net}}}{\partial t} = -\frac{\partial N_2^{\text{net}}}{\partial t} = B_{21}\,\rho(\nu)\,(N_2 - N_1) = B_{21}\,\rho(\nu)\,\Delta N$. Thus a net power is released into the electric field equal to the photon energy $h\nu$ times this net transition rate. In order for this to be a positive number, indicating net stimulated emission, there must be more atoms in the excited state than in the lower level: $\Delta N > 0$. Otherwise there is net absorption and the power of the wave is reduced during passage through the medium. The special condition $N_2 > N_1$ is known as a population inversion, a rather unusual condition that must be effected in the gain medium of a laser. The notable characteristic of stimulated emission compared to everyday light sources (which depend on spontaneous emission) is that the emitted photons have the same frequency, phase, polarization, and direction of propagation as the incident photons. The photons involved are thus mutually coherent. When a population inversion ($\Delta N > 0$) is present, therefore, optical amplification of incident radiation will take place. Although energy generated by stimulated emission is always at the exact frequency of the field which has stimulated it, the above rate equation refers only to excitation at the particular optical frequency $\nu_0$ corresponding to the energy of the transition. At frequencies offset from $\nu_0$ the strength of stimulated (or spontaneous) emission will be decreased according to the so-called line shape. Considering only homogeneous broadening affecting an atomic or molecular resonance, the spectral line shape function is described as a Lorentzian distribution $g'(\nu) = \frac{1}{\pi}\,\frac{\Gamma/2}{(\nu - \nu_0)^2 + (\Gamma/2)^2}$, where $\Gamma$ is the full width at half maximum (FWHM) bandwidth. The peak value of the Lorentzian line shape occurs at the line center, $\nu = \nu_0$.
A line shape function can be normalized so that its value at $\nu_0$ is unity; in the case of a Lorentzian we obtain $g(\nu) = \frac{g'(\nu)}{g'(\nu_0)} = \frac{(\Gamma/2)^2}{(\nu-\nu_0)^2 + (\Gamma/2)^2}$. Thus stimulated emission at frequencies away from $\nu_0$ is reduced by this factor. In practice there may also be broadening of the line shape due to inhomogeneous broadening, most notably due to the Doppler effect resulting from the distribution of velocities in a gas at a certain temperature. This has a Gaussian shape and reduces the peak strength of the line shape function. In a practical problem the full line shape function can be computed through a convolution of the individual line shape functions involved. Therefore, optical amplification will add power to an incident optical field at frequency $\nu$ at a rate given by $P = h\nu\,g(\nu)\,B_{21}\,\rho(\nu)\,\Delta N$. The stimulated emission cross section is $\sigma_{21}(\nu) = A_{21}\,\frac{\lambda^2}{8\pi n^2}\,g'(\nu)$, where $A_{21}$ is the Einstein A coefficient for spontaneous emission, $\lambda$ is the wavelength in vacuum, $n$ is the refractive index of the medium, and $g'(\nu)$ is the spectral line shape function. Stimulated emission can provide a physical mechanism for optical amplification. If an external source of energy stimulates more than 50% of the atoms in the ground state to transition into the excited state, then what is called a population inversion is created. When light of the appropriate frequency passes through the inverted medium, the photons are either absorbed by the atoms that remain in the ground state, or they stimulate the excited atoms to emit additional photons of the same frequency, phase, and direction. Since more atoms are in the excited state than in the ground state, an amplification of the input intensity results. The population inversion, in units of atoms per cubic metre, is $\Delta N = N_2 - \frac{g_2}{g_1}N_1$, where $g_1$ and $g_2$ are the degeneracies of energy levels 1 and 2, respectively. The intensity (in watts per square metre) of the stimulated emission is governed by the following differential equation, as long as the intensity $I(z)$ is small enough so that it does not have a significant effect on the magnitude of the population inversion: $\frac{dI(z)}{dz} = \sigma_{21}(\nu)\,\Delta N\,I(z)$. Grouping the first two factors together, this equation simplifies as $\frac{dI(z)}{dz} = \gamma_0(\nu)\,I(z)$, where $\gamma_0(\nu) = \sigma_{21}(\nu)\,\Delta N$ is the small-signal gain coefficient (in units of radians per metre). We can solve the differential equation using separation of variables: $\frac{dI(z)}{I(z)} = \gamma_0(\nu)\,dz$. Integrating, we find $\ln\!\left(\frac{I(z)}{I_{\text{in}}}\right) = \gamma_0(\nu)\,z$, or $I(z) = I_{\text{in}}\,e^{\gamma_0(\nu) z}$, where $I_{\text{in}} = I(z{=}0)$ is the optical intensity of the input signal. The saturation intensity $I_S$ is defined as the input intensity at which the gain of the optical amplifier drops to exactly half of the small-signal gain. We can compute the saturation intensity as $I_S = \frac{h\nu}{\sigma(\nu)\,\tau_S}$, where $h\nu$ is the photon energy and $\tau_S$ is the saturation time constant, which depends on the spontaneous emission lifetimes of the transitions involved in the amplification. The minimum value of $I_S(\nu)$ occurs on resonance, [ 11 ] where the cross section $\sigma(\nu)$ is the largest. For a simple two-level atom with a natural linewidth $\Gamma$, the saturation time constant is $\tau_S = \Gamma^{-1}$. The general form of the gain equation, which applies regardless of the input intensity, derives from the general differential equation for the intensity $I$ as a function of position $z$ in the gain medium: $\frac{dI(z)}{dz} = \frac{\gamma_0(\nu)}{1 + \frac{I(z)}{I_S}}\,I(z)$, where $I_S$ is the saturation intensity.
To solve, we first rearrange the equation in order to separate the variables, intensity $I$ and position $z$: $\frac{1 + \frac{I(z)}{I_S}}{I(z)}\,dI(z) = \gamma_0(\nu)\,dz$. Integrating both sides, we obtain $\ln\!\left(\frac{I(z)}{I_{\text{in}}}\right) + \frac{I(z) - I_{\text{in}}}{I_S} = \gamma_0(\nu)\,z$, or $\ln\!\left(\frac{I(z)}{I_{\text{in}}}\right) + \frac{I_{\text{in}}}{I_S}\left(\frac{I(z)}{I_{\text{in}}} - 1\right) = \gamma_0(\nu)\,z$. The gain $G$ of the amplifier is defined as the optical intensity $I$ at position $z$ divided by the input intensity: $G = G(z) = \frac{I(z)}{I_{\text{in}}}$. Substituting this definition into the prior equation, we find the general gain equation: $\ln G + \frac{I_{\text{in}}}{I_S}\,(G - 1) = \gamma_0(\nu)\,z$. In the special case where the input signal is small compared to the saturation intensity, in other words $I_{\text{in}} \ll I_S$, the general gain equation gives the small-signal gain as $\ln G = \gamma_0(\nu)\,z$, or $G = G_0 = e^{\gamma_0(\nu) z}$, which is identical to the small-signal gain equation (see above). For large input signals, where $I_{\text{in}} \gg I_S$, the gain approaches unity and the general gain equation approaches a linear asymptote: $I(z) \approx I_{\text{in}} + \gamma_0(\nu)\,I_S\,z$.
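As a rough numerical companion to the derivation above, the sketch below (with arbitrary, illustrative values for the linewidth, line-centre frequency, small-signal gain coefficient, length, and saturation intensity) evaluates the normalized Lorentzian line shape and solves the implicit general gain equation $\ln G + (I_{\text{in}}/I_S)(G - 1) = \gamma_0 z$ for a few input intensities. It is only a sanity check of the formulas, not a model of any particular amplifier.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative values only (assumed, not from the article).
Gamma = 1.0e9         # FWHM linewidth in Hz
nu0 = 3.0e14          # line-centre frequency in Hz
gamma0, z = 2.0, 1.0  # small-signal gain coefficient (1/m) and gain-medium length (m)
I_S = 1.0             # saturation intensity (arbitrary units)

def g_norm(nu):
    """Lorentzian line shape normalized to 1 at the line centre."""
    return (Gamma / 2) ** 2 / ((nu - nu0) ** 2 + (Gamma / 2) ** 2)

def saturated_gain(I_in):
    """Solve ln(G) + (I_in/I_S)*(G - 1) = gamma0*z for the gain G."""
    f = lambda G: np.log(G) + (I_in / I_S) * (G - 1.0) - gamma0 * z
    return brentq(f, 1.0, np.exp(gamma0 * z))  # G lies between 1 and the small-signal gain

print(g_norm(nu0), g_norm(nu0 + Gamma / 2))    # 1.0 and 0.5, as expected for a Lorentzian
for I_in in (1e-3, 1.0, 10.0):
    print(I_in, saturated_gain(I_in))          # the gain falls from about e^2 toward 1 as the input grows
```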
https://en.wikipedia.org/wiki/Stimulated_emission
Stimulus-triggered acquisition of pluripotency ( STAP ) was a proposed method of generating pluripotent stem cells by subjecting ordinary cells to certain types of stress, such as the application of a bacterial toxin, submersion in a weak acid, or physical trauma. [ 1 ] [ 2 ] The technique gained prominence in January 2014 when research by Haruko Obokata et al. was published in Nature . Over the following months, all scientists who tried to duplicate her results failed, and suspicion arose that Obokata's results were due to error or fraud . An investigation by her employer, RIKEN , was launched. On April 1, 2014, RIKEN concluded that Obokata had falsified data to obtain her results. [ 3 ] On June 4, 2014, Obokata agreed to retract the papers. [ 4 ] On August 5, 2014, Yoshiki Sasai —Obokata's supervisor at RIKEN and one of the coauthors on the STAP cell papers—was found dead at a RIKEN facility after an apparent suicide by hanging. [ 5 ] STAP would have been a radically simpler method of stem cell generation than previously researched methods as it requires neither nuclear transfer nor the introduction of transcription factors . [ 6 ] Haruko Obokata claimed that STAP cells were produced by exposing CD45 + murine spleen cells to certain stresses including an acidic medium with a pH of 5.7 for half an hour. [ 6 ] [ 7 ] Following this treatment, the cells were verified to be pluripotent by observing increasing levels of Oct-4 (a transcription factor expressed in embryonic stem cells) over the following week using an Oct4- GFP transgene . [ 6 ] [ 8 ] On average only 25% of cells survived the acid treatment, but over 50% of those that survived converted to Oct4-GFP + CD45 − pluripotent cells. [ 6 ] The researchers also claimed that treatment with bacterial toxins or physical stress were conducive to the acquisition of pluripotent markers. [ 1 ] STAP cells injected into mouse embryos grew into a variety of tissues and organs found throughout the body. According to the researchers, the chimaeric mice "[appeared] to be healthy, fertile, and normal" after one-to-two years of observation. [ 9 ] Additionally, these mice produced healthy offspring, thereby demonstrating germline transmission which is "a strict criterion for pluripotency as well as genetic and epigenetic normality." [ 6 ] STAP cells were supposedly able to differentiate into placental cells, meaning they would be more potent than embryonic stem cells or induced pluripotent stem cells (iPS). [ 1 ] It was not clear why ordinary cells do not convert into stem cells when subjected to similar stimuli under ordinary conditions, such as acidity in the body; Obokata et al. suggested that in vivo inhibitory mechanisms may block conversion to pluripotency. [ 6 ] Research is underway to generate stimulus-triggered acquisition of pluripotency (STAP) cells using human tissue: in February 2014, Charles Vacanti and Koji Kojima ( Harvard researchers originally involved in the discovery and publication of STAP) claimed to have preliminary results of STAP cells generated from human fibroblasts , but concomitantly cautioned that these preliminary results require further analysis and validation. [ 10 ] In the early 2000s, Charles Vacanti and Martin Vacanti conducted studies that led them to the idea that stem cells— spore-like cells —could be spontaneously recovered from ordinary tissues that are stressed via mechanical injury or increased acidity. 
[ 11 ] The technique for producing STAP cells was subsequently studied by Obokata at the Brigham and Women's Hospital (BWH), while she was studying as a post doc under Charles Vacanti, and then at the RIKEN Center for Developmental Biology in Japan . [ 12 ] [ 13 ] In 2008, while working at Harvard Medical School , she verified at the request of Charles Vacanti that some of the cultured cells she was working with shrank to the size of stem cells after being mechanically injured in a capillary tube . [ 1 ] [ 9 ] She went on as directed, to test the effects of various stimuli on cells. After modifying the technique, Obokata was able to show that white blood cells from newborn mice could be transformed into cells that behaved much like stem cells. She repeated the experiment with other cell types including brain, skin, and muscle cells with the same result. [ 9 ] Initially Obokata's findings were met with skepticism, even among her coworkers. "Everyone said it was an artefact – there were some really hard days", she recalled. [ 1 ] The manuscript describing the work was rejected multiple times before its eventual publication as an article (together with a shorter jointly-written "letter") within the journal Nature . [ 9 ] A series of experiments, first turning a mouse embryo green by fluorescently tagging STAP cells, then videotaping the transformation of T-cells into pluripotent cells, finally convinced skeptics that the results were real. [ 1 ] In the months after the two Nature papers [ 9 ] were released, all scientists who tried to duplicate Obokata's results failed and suspicion arose that her results were due to error or fraud. An investigation into alleged irregularities was launched by RIKEN on February 15, 2014. The allegations questioned the use of seemingly duplicated images in the papers, and reported failure to reproduce her results in other prominent stem-cell laboratories. Nature also announced that they were investigating. Several stem-cell scientists defended Obokata or reserved their opinion while the investigation was ongoing. [ 14 ] To address the problem of reproducibility in other laboratories, Obokata published some technical 'tips' on the protocols on March 5 while promising that the detailed procedure would be published in due course. [ 15 ] On March 11, Teruhiko Wakayama, one of Obokata's coauthors, urged all the researchers involved to withdraw the articles, citing many "questionable points". [ 16 ] Charles Vacanti said he opposed their retraction and posted a "revised protocol" for creating STAP cells on his own website, which was taken down after he resigned his BWH post. [ 17 ] On March 14, RIKEN released an interim report of the investigation. Out of the six items being investigated, the committee concluded that there was inappropriate handling of data on two items, but did not judge the mishandling as research misconduct. [ 18 ] On April 1, RIKEN concluded that Obokata had engaged in "research misconduct", falsifying data on two occasions. The co-authors were cleared of misconduct, but bore "grave responsibility" for not verifying the data themselves. RIKEN also announced that an internal group had been established to verify whether the ‘stimulus-triggered acquisition of pluripotency’ is reproducible. [ 19 ] Obokata maintained her innocence and said she would appeal the decision. [ 3 ] On June 4, 2014, Obokata agreed to retract both the article and the "letter". [ 4 ] The article was officially retracted on July 2, 2014. 
An article analyzing the controversy concluded that while issues of image manipulation, duplication and plagiarism were potentially detectable, the reviewers could not have concluded that the article was the product of academic misconduct prior to acceptance. [ 20 ] In the wake of the controversy, observers, journalists, and former members of RIKEN have stated that the organization is riddled with unprofessional and inadequate scientific rigor and consistency, and that this is reflective of serious issues with scientific research in Japan in general. [ 21 ] [ 22 ] RIKEN commissioned a team of scientists to attempt to verify Obokata's original results and asked Obokata to participate in the effort. On August 5, 2014, Obokata's supervisor and co-author of the original paper, Yoshiki Sasai , was discovered dead by apparent suicide by hanging in a building at the RIKEN facility in Kobe, Japan. [ 23 ] On September 24, 2015, the RIKEN scientists reported that Obokata's STAP cells came from embryonic stem cell contamination, [ 24 ] while on the same day, research groups who had attempted to reproduce the STAP protocol jointly reported that they had found it irreproducible. [ 25 ] [ 26 ] If the findings had proven to be valid, stimulus-triggered pluripotency cells could have been generated more easily and efficiently than by existing iPS techniques. [ 1 ] Adapted to human tissue, the technique could have led to cheap and simple procedures to create patient-specific stem cells. Stem-cell researcher Dusko Ilic of King's College London called STAP cells "a major scientific discovery that will be opening a new era in stem-cell biology". [ 9 ] Shinya Yamanaka , a pioneer of iPS research, called the findings "important to understand nuclear reprogramming ... [and] a new approach to generate iPS-like cells". [ 1 ] The idea that STAP cells can form placental tissue meant they could have made cloning considerably easier by bypassing the need for a donor egg and in vitro cultivation. [ 1 ] One previous way of creating stem cells has been via genetic manipulation of adult cells into iPS cells. Progress on iPS-based therapies has been slow due to regulatory hurdles surrounding genetic manipulation. [ 9 ] Additionally, iPS techniques have an observed efficiency of around 1%, significantly lower than the claimed efficiency of STAP. [ 1 ]
https://en.wikipedia.org/wiki/Stimulus-triggered_acquisition_of_pluripotency
A stinging plant or a plant with stinging hairs is a plant with hairs ( trichomes ) on its leaves or stems that are capable of injecting substances that cause pain or irritation. Other plants, such as opuntias , have hairs or spines that cause mechanical irritation, but do not inject chemicals. Stinging hairs occur particularly in the families Urticaceae , Loasaceae , Boraginaceae (subfamily Hydrophylloideae ) and Euphorbiaceae . [ 1 ] Such hairs have been shown to deter grazing mammals, but are no more effective against insect attack than non-stinging hairs. [ 2 ] Many plants with stinging hairs have the word "nettle" in their English name, but may not be related to "true nettles" (the genus Urtica ). Though several unrelated families of plants have stinging hairs, their structure is generally similar. A solid base supports a single elongated cell with a brittle tip. When the tip is broken, the exposed sharp point penetrates the skin and pressure injects toxins. The precise chemicals involved in causing pain and irritation are not yet fully understood. Stiff hairs or trichomes without the ability to inject irritating compounds occur on the leaves and stems of many plants. They appear to deter feeding insects to some degree by impeding movement and restricting access to the surface of the stem or leaf. Some plants have glandular hairs, either as well as non-glandular hairs or instead of them. Glandular hairs have regions of tissue that produce secretions of secondary metabolites. These chemical substances can repel or poison feeding insects. [ 3 ] Stinging hairs may be defined as those with ability to inject a chemical substance through the skin of an animal causing irritation or pain. Since some glandular hairs can cause irritation merely by contact, the difference between "stinging hairs" and "irritating hairs" is not always clear. For example, the hairs of Mucuna species are described in both ways. Some species of Mucuna have sharply tipped hairs, in which the upper part easily breaks off, whereas other species have hairs that are blunter. [ 4 ] In those subspecies of Urtica dioica that have stinging hairs (stinging nettles), these also have a point that easily breaks off, allowing the irritants in the cell below to enter through the skin. [ 5 ] Being stung in this way has been shown to deter grazing mammals, such as rabbits, [ 6 ] and even large herbivores such as cows. [ 3 ] Many plant species respond to physical damage by producing a higher density of trichomes of all kinds. [ 3 ] The general structure of a stinging hair is very similar in all the families of plants that possess them (except Tragia and Dalechampia ). A multicellular base supports a single long thin cell, typically 1–8 mm long, with a brittle tip that easily breaks to form a sharp point that can penetrate skin. [ 7 ] Stinging hairs of Urtica species have been studied in some detail. Each hair contains a fine tube, stiffened with calcium carbonate (calcified) at its base and with silica (silicified) at its tip. In Urtica thunbergiana , individual hairs contain around 4 nanolitres ( 4 × 10 −6 ml ) of fluid. The silicified tip breaks off on contact, and the resulting fine point pierces the skin. Pressure forces the fluid out of the hair. [ 8 ] Different toxins may be involved. The stinging hairs of Tragia spp, notably Tragia volubilis , a South American member of the Euphorbiaceae , are capable of injecting a crystal of calcium oxalate . 
[ 9 ] The stinging sensation is initially caused by the mechanical entry of the stiff hair into the skin, but is then intensified by the effect of the oxalate. [ 10 ] The effects of the stinging hairs of Urtica species, particularly some subspecies of Urtica dioica , have been attributed to a number of substances, including histamine , acetylcholine , serotonin , [ 10 ] and formic acid . [ 8 ] Histamine is a component of the stinging hairs of other Urtica species (e.g. U. urens and U. parviflora ) and of Cnidoscolus urens and Laportea species. In vertebrates , histamine is a neurotransmitter . When it is released naturally, inflammation of the skin results, causing pain and itching. Injection of histamine by stinging hairs has been considered to have the same effect. [ 5 ] This traditional interpretation was challenged in 2006 by research on Urtica thunbergiana , the main species of Urtica present in Taiwan . In tests on rats, the long-lasting pain caused by stings was attributed to oxalic and tartaric acid , although a synergistic effect of the other components of the stinging hairs was not ruled out. Fu et al. concluded that "stinging hairs, although studied for a long time, are still mysterious, particularly concerning the mechanism of the skin reaction after being stung." [ 8 ] Many plants with stinging hairs belong to the genus Urtica . Between twenty-four and thirty-nine species of flowering plants of the genus Urtica in the family Urticaceae fall into this category, with a cosmopolitan though mainly temperate distribution. They are mostly herbaceous perennial plants , but some are annual and a few are shrubby . The most prominent member of the genus Urtica is the stinging nettle , Urtica dioica , native to Europe , Africa , Asia , and North America . [ citation needed ] The family Urticaceae also contains some other plants with stinging hairs that are not members of the genus Urtica . These include: There are also plants with stinging hairs that are unrelated to the Urticaceae: [ 11 ] Though plants with stinging hairs can cause pain and acute urticaria , only a few are seriously harmful. The genus Dendrocnide (stinging trees) has been said to cause the most pain, particularly the Australian Dendrocnide moroides (gympie-gympie), although other sources [ 15 ] describe the pain of stinging trees as only differing from that of nettles in terms of persistence rather than severity. There are reports of dogs and horses being killed, and once of a human death. [ citation needed ] The researcher Marina Hurley reports being hospitalized after being stung by a dead leaf. Deaths are probably due to heart failure caused by pain and shock. [ 16 ] Urtica ferox (tree nettle or ongaonga) is endemic to New Zealand . One recorded human death is known: a lightly clad young man died five hours after walking through a dense patch. [ 17 ] After cooking, some plants with stinging hairs, such as Urtica dioica (stinging nettle), are eaten as vegetables. [ 18 ]
https://en.wikipedia.org/wiki/Stinging_plant
The StingRay is an IMSI-catcher , a cellular phone surveillance device, manufactured by Harris Corporation . [ 2 ] Initially developed for the military and intelligence community, the StingRay and similar Harris devices are in widespread use by local and state law enforcement agencies across Canada, [ 3 ] the United States, [ 4 ] [ 5 ] and in the United Kingdom. [ 6 ] [ 7 ] Stingray has also become a generic name to describe these kinds of devices. [ 8 ] The StingRay is an IMSI-catcher with both passive (digital analyzer) and active (cell-site simulator) capabilities. When operating in active mode, the device mimics a wireless carrier cell tower in order to force all nearby mobile phones and other cellular data devices to connect to it. [ 9 ] [ 10 ] [ 11 ] The StingRay family of devices can be mounted in vehicles, [ 10 ] on airplanes, helicopters and unmanned aerial vehicles . [ 12 ] Hand-carried versions are referred to under the trade name KingFish . [ 13 ] In active mode, the StingRay will force each compatible cellular device in a given area to disconnect from its service provider cell site (e.g., operated by Verizon, AT&T, etc.) and establish a new connection with the StingRay. [ 17 ] In most cases, this is accomplished by having the StingRay broadcast a pilot signal that is either stronger than, or made to appear stronger than, the pilot signals being broadcast by legitimate cell sites operating in the area. [ 18 ] A common function of all cellular communications protocols is to have the cellular device connect to the cell site offering the strongest signal. StingRays exploit this function as a means to force temporary connections with cellular devices within a limited area. During the process of forcing connections from all compatible cellular devices in a given area, the StingRay operator needs to determine which device is the desired surveillance target. This is accomplished by downloading the IMSI, ESN, or other identifying data from each of the devices connected to the StingRay. [ 14 ] In this context, the IMSI or equivalent identifier is not obtained from the cellular service provider or from any other third-party. The StingRay downloads this data directly from the device using radio waves. [ 19 ] In some cases, the IMSI or equivalent identifier of a target device is known to the StingRay operator beforehand. When this is the case, the operator will download the IMSI or equivalent identifier from each device as it connects to the StingRay. [ 20 ] When the downloaded IMSI matches the known IMSI of the desired target, the dragnet will end and the operator will proceed to conduct specific surveillance operations on just the target device. [ 21 ] In other cases, the IMSI or equivalent identifier of a target is not known to the StingRay operator and the goal of the surveillance operation is to identify one or more cellular devices being used in a known area. [ 22 ] For example, if visual surveillance is being conducted on a group of protestors, [ 23 ] a StingRay can be used to download the IMSI or equivalent identifier from each phone within the protest area. After identifying the phones, locating and tracking operations can be conducted, and service providers can be forced to turn over account information identifying the phone users. Cellular telephones are radio transmitters and receivers, much like a walkie-talkie . However, the cell phone communicates only with a repeater inside a nearby cell tower installation. 
At that installation, the device takes in all cell calls in its geographic area and repeats them out to other cell installations, which repeat the signals onward to their destination telephone (either by radio or landline wires). Radio is also used to transmit a caller's voice/data back to the receiver's cell telephone. The two-way duplex phone conversation then exists via these interconnections. To make all of that work correctly, the system allows automatic increases and decreases in transmitter power (for the individual cell phone and for the tower repeater, too) so that only the minimum transmit power is used to complete and hold the call active and to allow the users to hear and be heard continuously during the conversation. The goal is to hold the call active while using the least amount of transmitting power, mainly to conserve batteries and be efficient. The tower system will sense when a cell phone is not coming in clearly and will order the cell phone to boost transmit power. The user has no control over this boosting; it may occur for a split second or for the whole conversation. If the user is in a remote location, the power boost may be continuous. In addition to carrying voice or data, the cell phone also transmits data about itself automatically, and that is boosted or not as the system detects the need. Encoding of all transmissions ensures that no crosstalk or interference occurs between two nearby cell users. The boosting of power, however, is limited by the design of the devices to a maximum setting. The standard systems are not "high power" and thus can be overpowered by secret systems using much more boosted power, which can then take over a user's cell phone. If overpowered that way, a cell phone will not indicate the change, because the secret radio is programmed to hide from normal detection. The ordinary user cannot know whether their cell phone has been captured via overpowering boosts or not. (There are other ways of secret capture that need not overpower, too.) Just as a person shouting drowns out someone whispering, the boost in RF watts of power into the cell telephone system can overtake and control that system, in total, or only a few, or even only one, conversation. This strategy requires only more RF power, and thus it is simpler than other types of secret control. Power boosting equipment can be installed anywhere there can be an antenna, including in a vehicle, perhaps even in a vehicle on the move. Once a secretly boosted system takes control, any manipulation is possible, from simple recording of the voice or data to total blocking of all cell phones in the geographic area. [ 24 ] A StingRay can be used to identify and track a phone or other compatible cellular data device even while the device is not engaged in a call or accessing data services. [ 25 ] A StingRay closely resembles a portable cellphone tower. Typically, law enforcement officials place the StingRay in their vehicle along with compatible computer software. The StingRay acts as a cellular tower, sending out signals that make the specific device connect to it. Cell phones are programmed to connect with the cellular tower offering the best signal. When the phone and the StingRay connect, the computer system determines the strength of the signal and thus the distance to the device. The vehicle then moves to another location and sends out signals until it connects with the phone. When the signal strength has been determined from enough locations, the computer system can triangulate the phone's position and locate it.
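Geometrically, the location procedure just described amounts to multilateration: received signal strengths are converted into rough range estimates from several measurement positions, and the resulting circles are intersected. The sketch below illustrates only that geometric idea with a generic log-distance path-loss model and made-up readings; the reference power, path-loss exponent, measurement positions, and signal-strength values are all assumptions for illustration, and this is not a description of any vendor's actual measurements or software.

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, ref_power_dbm=-30.0, path_loss_exp=2.7):
    """Rough range estimate (metres) from received power, using an assumed log-distance model."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# Vehicle measurement positions (x, y in metres) and received signal strengths (made-up data).
positions = np.array([[0.0, 0.0], [120.0, 10.0], [60.0, 150.0], [-40.0, 90.0]])
rssi = np.array([-72.0, -78.0, -70.0, -75.0])
ranges = rssi_to_distance(rssi)

def residuals(p):
    # Mismatch between the distance from candidate point p to each measurement
    # position and the range inferred from the signal strength there.
    return np.linalg.norm(positions - p, axis=1) - ranges

estimate = least_squares(residuals, x0=positions.mean(axis=0)).x
print("estimated handset position (m):", estimate)
```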
Cell phones are programmed to constantly search for the strongest signal emitted from cell phone towers in the area. Over the course of the day, most cell phones connect and reconnect to multiple towers in an attempt to connect to the strongest, fastest, or closest signal. Because of the way they are designed, the signals that the Stingray emits are far stronger than those coming from surrounding towers. For this reason, all cell phones in the vicinity connect to the Stingray regardless of the cell phone owner's knowledge. From there, the stingray is capable of locating the device, interfering with the device, and collecting personal data from the device. [ 26 ] [ 27 ] The FBI has claimed that when used to identify, locate, or track a cellular device, the StingRay does not collect communications content or forward it to the service provider. [ 28 ] Instead, the device causes a disruption in service. [ 29 ] Under this scenario, any attempt by the cellular device user to place a call or access data services will fail while the StingRay is conducting its surveillance. On August 21, 2018, Senator Ron Wyden noted that Harris Corporation confirmed that Stingrays disrupt the targeted phone's communications. Additionally, he noted that "while the company claims its cell-site simulators include a feature that detects and permits the delivery of emergency calls to 9-1-1 , its officials admitted to my office that this feature has not been independently tested as part of the Federal Communications Commission’s certification process, nor were they able to confirm this feature is capable of detecting and passing-through 9-1-1 emergency communications made by people who are deaf, hard of hearing, or speech disabled using Real-Time Text technology." [ 30 ] By way of software upgrades, [ 31 ] the StingRay and similar Harris products can be used to intercept GSM communications content transmitted over-the-air between a target cellular device and a legitimate service provider cell site. The StingRay does this by way of the following man-in-the-middle attack : (1) simulate a cell site and force a connection from the target device, (2) download the target device's IMSI and other identifying information, (3) conduct "GSM Active Key Extraction" [ 31 ] to obtain the target device's stored encryption key, (4) use the downloaded identifying information to simulate the target device over-the-air, (5) while simulating the target device, establish a connection with a legitimate cell site authorized to provide service to the target device, (6) use the encryption key to authenticate the StingRay to the service provider as being the target device, and (7) forward signals between the target device and the legitimate cell site while decrypting and recording communications content. The "GSM Active Key Extraction" [ 31 ] performed by the StingRay in step three merits additional explanation. A GSM phone encrypts all communications content using an encryption key stored on its SIM card with a copy stored at the service provider. [ 32 ] While simulating the target device during the above explained man-in-the-middle attack, the service provider cell site will ask the StingRay (which it believes to be the target device) to initiate encryption using the key stored on the target device. [ 33 ] Therefore, the StingRay needs a method to obtain the target device's stored encryption key else the man-in-the-middle attack will fail. GSM primarily encrypts communications content using the A5/1 call encryption cypher. 
In 2008 it was reported that a GSM phone's encryption key can be obtained using $1,000 worth of computer hardware and 30 minutes of cryptanalysis performed on signals encrypted using A5/1. [ 34 ] However, GSM also supports an export weakened variant of A5/1 called A5/2 . This weaker encryption cypher can be cracked in real-time. [ 32 ] While A5/1 and A5/2 use different cypher strengths, they each use the same underlying encryption key stored on the SIM card. [ 33 ] Therefore, the StingRay performs "GSM Active Key Extraction" [ 31 ] during step three of the man-in-the-middle attack as follows: (1) instruct target device to use the weaker A5/2 encryption cypher, (2) collect A5/2 encrypted signals from target device, and (3) perform cryptanalysis of the A5/2 signals to quickly recover the underlying stored encryption key. [ 35 ] Once the encryption key is obtained, the StingRay uses it to comply with the encryption request made to it by the service provider during the man-in-the-middle attack. [ 35 ] A rogue base station can force unencrypted links, if supported by the handset software. The rogue base station can send a 'Cipher Mode Settings' element (see GSM 04.08 Chapter 10.5.2.9 ) to the phone, with this element clearing the one bit that marks if encryption should be used. In such cases the phone display could indicate the use of an unsafe link—but the user interface software in most phones does not interrogate the handset's radio subsystem for use of this insecure mode nor display any warning indication. In passive mode, the StingRay operates either as a digital analyzer, which receives and analyzes signals being transmitted by cellular devices and/or wireless carrier cell sites or as a radio jamming device, which transmits signals that block communications between cellular devices and wireless carrier cell sites. By "passive mode", it is meant that the StingRay does not mimic a wireless carrier cell site or communicate directly with cellular devices. A StingRay and a test phone can be used to conduct base station surveys, which is the process of collecting information on cell sites, including identification numbers, signal strength, and signal coverage areas. When conducting base station surveys, the StingRay mimics a cell phone while passively collecting signals being transmitted by cell-sites in the area of the StingRay. Base station survey data can be used to further narrow the past locations of a cellular device if used in conjunction with historical cell site location information ("HCSLI") obtained from a wireless carrier. HCSLI includes a list of all cell sites and sectors accessed by a cellular device, and the date and time each access was made. Law enforcement will often obtain HCSLI from wireless carriers in order to determine where a particular cell phone was located in the past. Once this information is obtained, law enforcement will use a map of cell site locations to determine the past geographical locations of the cellular device. However, the signal coverage area of a given cell site may change according to the time of day, weather, and physical obstructions in relation to where a cellular device attempts to access service. The maps of cell site coverage areas used by law enforcement may also lack precision as a general matter. For these reasons, it is beneficial to use a StingRay and a test phone to map out the precise coverage areas of all cell sites appearing in the HCSLI records. 
This is typically done at the same time of day and under the same weather conditions that were in effect when the HCSLI was logged. Using a StingRay to conduct base station surveys in this manner allows for mapping out cell site coverage areas that more accurately match the coverage areas that were in effect when the cellular device was used. The use of the devices has been frequently funded by grants from the Department of Homeland Security . [ 36 ] The Los Angeles Police Department used a Department of Homeland Security grant in 2006 to buy a StingRay for "regional terrorism investigations". [ 37 ] However, according to the Electronic Frontier Foundation , the "LAPD has been using it for just about any investigation imaginable." [ 38 ] In addition to federal law enforcement, military and intelligence agencies, StingRays have in recent years been purchased by local and state law enforcement agencies. In 2006, Harris Corporation employees directly conducted wireless surveillance using StingRay units on behalf of the Palm Bay Police Department—where Harris has a campus [ 39 ] —in response to a bomb threat against a middle school. The search was conducted without a warrant or judicial oversight. [ 40 ] [ 41 ] [ 42 ] [ 43 ] The American Civil Liberties Union (ACLU) confirmed that local police have cell site simulators in Washington, Nevada, Arizona, Alaska, Missouri, New Mexico, Georgia, and Massachusetts. State police have cell site simulators in Oklahoma, Louisiana, Pennsylvania, and Delaware. Local and state police have cell site simulators in California, Texas, Minnesota, Wisconsin, Michigan, Illinois, Indiana, Tennessee, North Carolina, Virginia, Florida, Maryland, and New York. [ 4 ] The police use of cell site simulators is unknown in the remaining states. However, many agencies do not disclose their use of StingRay technology, so these statistics are still potentially an under-representation of the actual number of agencies. According to the most recent information published by the American Civil Liberties Union, 72 law enforcement agencies in 24 states own StingRay technology in 2017. Since 2014, these numbers have increased from 42 agencies in 17 states. The following are federal agencies in the United States that have validated their use of cell-site simulators: Federal Bureau of Investigation, Drug Enforcement Administration, US Secret Service, Immigration and Customs Enforcement, US Marshals Service, Bureau of Alcohol, Tobacco, Firearms, and Explosives, US Army, US Navy, US Marine Corps, US National Guard, US Special Command, and National Security Agency. [ 4 ] In the 2010–2014 fiscal years, the Department of Justice has confirmed spending "more than $71 million on cell-site simulation technology", while the Department of Homeland Security confirmed spending "more than $24 million on cell-site simulation technology". [ 44 ] Several court decisions have been issued on the legality of using a Stingray without a warrant, with some courts ruling a warrant is required [ 45 ] [ 46 ] [ 47 ] and others not requiring a warrant. [ 48 ] Police in Vancouver , British Columbia, Canada, admitted after much speculation across the country that they had made use of a Stingray device [ 49 ] provided by the RCMP . They also stated that they intended to make use of such devices in the future. Two days later, a statement by Edmonton 's police force had been taken as confirming their use of the devices, but they said later that they did not mean to create what they called a miscommunication. 
[ 50 ] Privacy International and The Sunday Times reported on the usage of StingRays and IMSI-catchers in Ireland , against the Irish Garda Síochána Ombudsman Commission (GSOC), which is an oversight agency of the Irish police force Garda Síochána . [ 51 ] [ 52 ] On June 10, 2015, the BBC reported on an investigation by Sky News [ 53 ] [ 54 ] about possible false mobile phone towers being used by the London Metropolitan Police . Commissioner Bernard Hogan-Howe refused comment. Between February 2015 and April 2016, over 12 companies in the United Kingdom were authorized to export IMSI-catcher devices to states including Saudi Arabia , the UAE , and Turkey . Critics have expressed concern about the export of surveillance technology to countries with poor human rights records and histories of abusing surveillance technology . [ 55 ] The increasing use of the devices has largely been kept secret from the court system and the public. [ 56 ] In 2014, police in Florida revealed they had used such devices at least 200 additional times since 2010 without disclosing it to the courts or obtaining a warrant. [ 2 ] One of the reasons the Tallahassee police provided for not pursuing court approval is that such efforts would allegedly violate the non-disclosure agreements (NDAs) that police sign with the manufacturer. [ 57 ] The American Civil Liberties Union has filed multiple requests for the public records of Florida law enforcement agencies about their use of the cell phone tracking devices. [ 58 ] Local law enforcement and the federal government have resisted judicial requests for information about the use of Stingrays, refusing to turn over information or heavily censoring it. [ 59 ] [ 60 ] In June 2014, the American Civil Liberties Union published information from court regarding the extensive use of these devices by local Florida police. [ 61 ] After this publication, United States Marshals Service then seized the local police's surveillance records in a bid to keep them from coming out in court. [ 62 ] In some cases, police have refused to disclose information to the courts citing non-disclosure agreements signed with Harris Corporation. [ 59 ] [ 63 ] [ 64 ] The FBI defended these agreements, saying that information about the technology could allow adversaries to circumvent it. [ 63 ] The ACLU has said "potentially unconstitutional government surveillance on this scale should not remain hidden from the public just because a private corporation desires secrecy. And it certainly should not be concealed from judges." [ 2 ] In 2014, former U.S. Magistrate Judge for the Southern District of Texas, Brian Owsley, became the first judge to openly testify about the problematic and unconstitutional aspects of the ways in which law enforcement use of Stingray machines seemed to him to regularly surpass the parameters of the very electronic surveillance warrants that were being issued to sanction the Stingray machine's capabilities. In his testimony, former-Judge Owsley stated: "The first time I ever dealt with a StingRay was in April 2011. I received a pen register application filed by an Assistant U.S. Attorney alleging that some federal inmates were suspected of using cell phones to engage in federal crimes at the Federal Corrections Institution in Three Rivers, Texas. Although the Government knew who these inmates were, they did not know the cell phone numbers. 
Hence, they filed the pen register application, which essentially seeks authorization of a list of all telephone numbers that are outgoing from a given telephone. Although it was captioned as a pen register, the application sought to use a device that would capture any cell phone used within the vicinity of the prison. In other words, this did not sound like a pen register." [ 65 ] [ 66 ] [ 67 ] In 2015 Santa Clara County pulled out of contract negotiations with Harris for StingRay units, citing onerous restrictions imposed by Harris on what could be released under public records requests as the reason for exiting negotiations. [ 68 ] Beginning around 2018 and over the next several years until 2023, the ACLU and the Center for Human Rights and Privacy were able to obtain, through both Freedom of Information Act requests and other legal channels, several copies of various NDAs between some of America's largest police departments and the Harris Corporation, the primary American manufacturer of the Stingray Machine, and its latest upgrade the HailStorm Machine. [ 67 ] The NDAs revealed that the FBI has often intervened directly in state criminal trials to protect the confidentiality of any information relating to the Harris Corporation. In fact, NDA between the Harris Corporation and Police Departments in San Diego, Chicago, Miami, Indianapolis, Tucson, and many others include a contractual clause that reads: "In the event that the San Diego Police Department receives a request pursuant to the Freedom of Information Act ... or any equivalent state or local law, the civil or criminal discovery process, or other judicial, legislative, or administrative process, to disclose information concerning the Harris Corporation wireless collection equipment/technology ... the San Diego Police Department will notify the FBI of any such request telephonically and in writing in order to allow sufficient time for the FBI to seek to prevent disclosure through appropriate channels". [ 69 ] [ 67 ] This language is repeated identically in virtually all of the NDAs between the Harris Corporation and major police departments that have been disclosed since 2015. In recent years, legal scholars, public interest advocates, legislators and several members of the judiciary have strongly criticized the use of this technology by law enforcement agencies. Critics have called the use of the devices by government agencies warrantless cell phone tracking, as they have frequently been used without informing the court system or obtaining a warrant. [ 2 ] The Electronic Frontier Foundation has called the devices "an unconstitutional, all-you-can-eat data buffet". [ 70 ] In June 2015, WNYC Public Radio published a podcast with Daniel Rigmaiden about the StingRay device. [ 71 ] In 2016, Professor Laura Moy of the Georgetown University Law Center filed a formal complaint to the FCC regarding the use of the devices by law enforcement agencies, taking the position that because the devices mimic the properties of cell phone towers , the agencies operating them are in violation of FCC regulation, as they lack the appropriate spectrum licenses . [ 72 ] On December 4, 2019, the American Civil Liberties Union and the New York Civil Liberties Union (NYCLU) filed a federal lawsuit against the Customs and Border Protection and the Immigrations and Customs Enforcement agencies. According to the ACLU, the union had filed a Freedom of Information Act request in 2017, but were not given access to documents. 
[ 73 ] The NYCLU and ACLU proceeded with the lawsuit under the statement that both CBP and ICE had failed "to produce a range of records about their use, purchase, and oversight of Stingrays". [ 73 ] In an official statement expanding their reasoning for the lawsuit, the ACLU expressed their concern over the Stingrays current and future applications, stating that ICE were using them for "unlawfully tracking journalists and advocates and subjecting people to invasive searches of their electronic devices at the border". [ 73 ] A number of countermeasures to the StingRay and other devices have been developed. One is the existence of crypto phones such as GSMK's Cryptophone, which has a firewall that can identify and thwart the StingRay's actions or alert the user to IMSI capture. [ 74 ] The EFF itself developed a system to catch Stingrays. [ 75 ] [ 76 ] In a 2023 paper, two university researchers in the US demonstrated simple timing-based approaches to detect Stingray attacks. [ 77 ]
https://en.wikipedia.org/wiki/Stingray_phone_tracker
A stink bomb , sometimes called a stinkpot , is a device designed to create an unpleasant smell . They range in effectiveness from being used as simple pranks [ 1 ] to military grade malodorants or riot control chemical agents . A stink bomb that could be launched with arrows was invented by Leonardo da Vinci . [ 2 ] The 1972 U.S. presidential campaign of Edmund Muskie was disrupted at least four times in Florida in 1972 with the use of stink bombs during the Florida presidential primary . [ 3 ] Stink bombs were set off at campaign picnics in Miami and Tampa, at the Muskie campaign headquarters in Tampa and at offices in Tampa where the campaign's telephone bank was located. [ 3 ] The stink bomb plantings served to disrupt the picnics and campaign operations, and was deemed by the U.S. Select Committee on Presidential Campaign Activities of the U.S. Senate to have "disrupted, confused, and unnecessarily interfered with a campaign for the office of the Presidency". [ 3 ] In 2004, it was reported that the Israeli weapons research and development directorate had created a liquid stink bomb, dubbed the "skunk bomb", with an odor that lingers for five years on clothing. [ 4 ] It is a synthetic stink bomb based upon the chemistry of the spray that is emitted from the anal glands of the skunk. [ 4 ] It was designed as a crowd control tool to be used as a deterrent that causes people to scatter, such as at a protest. [ 4 ] It has been described as a less than lethal weapon . [ 4 ] At the lower end of the spectrum, relatively harmless stink bombs consist of a mixture of ammonium sulfide , vinegar and bicarbonate, which smells strongly of rotten eggs . [ 2 ] When exposed to air, the ammonium sulfide reacts with moisture , hydrolyzes , and a mixture of hydrogen sulfide (rotten egg smell) and ammonia is released. Another mixture consists of hydrogen sulfide and ammonia mixed together directly. [ 5 ] Other popular substances on which to base stink bombs are thiols with lower molecular weight such as methyl mercaptan and ethyl mercaptan —the chemicals that are added in minute quantities to natural gas in order to make gas leaks detectable by smell. A variation on this idea is the scent bomb , or perfume bomb , filled with an overpowering "cheap perfume" smell. At the upper end of the spectrum, the governments of Israel [ 4 ] and the United States of America are developing stink bombs for use by their law enforcement agencies and militaries as riot control [ 4 ] and area denial weapons . Using stink bombs for these purposes has advantages over traditional riot control agents : unlike pepper spray and tear gas , stink bombs are believed not to be dangerous, although their psychological effects can make people sick. [ 6 ] Prank stink bombs and perfume bombs are usually sold as a 1- or 2- mL sealed glass ampoule , which can be broken by throwing against a hard surface or by crushing under one's shoe sole, thus releasing the malodorous liquid contained therein. Another variety of prank stink bomb comprises two bags, one smaller and inside the other. The inner one contains a liquid and the outer one a powder. When the inner one is ruptured by squeezing it, the liquid reacts with the powder, producing hydrogen sulfide, which expands and bursts the outer bag, releasing an unpleasant odor. Typically, lower molecular weight volatile organic compounds are used. 
Generally, the higher the molecular weight for a given class of compounds, the lower the volatility and initial concentration but the longer the persistence. Some chemicals (typically thiols) have a certain concentration threshold over which the smell is not perceived significantly stronger; therefore a lower-volatility compound is capable of providing comparable stench intensity to a higher-volatility compound, but for longer time. Another issue is the operating temperature , on which the compound's volatility strongly depends. Care should be taken as some compounds are toxic either in higher concentration or after prolonged exposure in low concentration. Some plants may be used as improvised stink bombs; one such plant is the Parkia speciosa or 'stinky bean', which grows in India , Southeast Asia and Eastern Australia . The pods from this plant are collected when partly dried and stamped on, to release the stink. [ 7 ] Some common components are: The US Government Standard Bathroom Malodor, said to be one of the worst-smelling substances, [ 8 ] is quoted as having this composition: (Note that this substance is a concoction) [ 9 ]
https://en.wikipedia.org/wiki/Stink_bomb
A stipulative definition is a type of definition in which a new or currently existing term is given a new specific meaning for the purposes of argument or discussion in a given context. When the term already exists, this definition may, but does not necessarily, contradict the dictionary ( lexical ) definition of the term. Because of this, a stipulative definition cannot be "correct" or "incorrect"; it can only differ from other definitions, but it can be useful for its intended purpose. [ 1 ] [ 2 ] For example, in the riddle of induction by Nelson Goodman , " grue " was stipulated to be "a property of an object that makes it appear green if observed before some future time t , and blue if observed afterward". "Grue" has no meaning in standard English; therefore, Goodman created the new term and gave it a stipulative definition . Stipulative definitions of existing terms are useful in making theoretical arguments, or stating specific cases. For example: Some of these are also precising definitions , a subtype of stipulative definition that may not contradict but only extend the lexical definition of a term. Theoretical definitions , used extensively in science and philosophy, are similar in some ways to stipulative definitions (although theoretical definitions are somewhat normative, [ clarification needed ] more like persuasive definitions ). [ 2 ] Many holders of controversial and highly charged opinions use stipulative definitions to attach the emotional or other connotations of a word to the meaning they would like to give it; for example, defining "murder" as "the killing of any living thing for any reason". The other side of such an argument is likely to use a different stipulative definition for the same term: "the unlawful killing of a human being with malice aforethought" or "the premeditated killing of a human being". The lexical definition in such a case is likely to fall somewhere in between. When a stipulative definition is confused with a lexical definition within an argument there is a risk of equivocation .
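As a toy illustration of how a stipulative definition fixes a meaning for the purposes of a discussion, the snippet below encodes Goodman's "grue" as stipulated in the passage above; the cutoff year and the colour encoding are arbitrary choices made only for this example.

```python
T_CUTOFF = 2100  # the stipulated future time t (an arbitrary choice for this example)

def is_grue(colour: str, year_observed: int) -> bool:
    """Goodman's stipulated predicate: appears green if observed before t, blue if observed after."""
    return (year_observed < T_CUTOFF and colour == "green") or \
           (year_observed >= T_CUTOFF and colour == "blue")

print(is_grue("green", 2024))  # True: a green thing observed before t counts as grue
print(is_grue("green", 2150))  # False: after t, only blue things count as grue
```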
https://en.wikipedia.org/wiki/Stipulative_definition
In mathematics , Stirling's approximation (or Stirling's formula ) is an asymptotic approximation for factorials . It is a good approximation, leading to accurate results even for small values of n {\displaystyle n} . It is named after James Stirling , though a related but less precise result was first stated by Abraham de Moivre . [ 1 ] [ 2 ] [ 3 ] One way of stating the approximation involves the logarithm of the factorial: ln ⁡ ( n ! ) = n ln ⁡ n − n + O ( ln ⁡ n ) , {\displaystyle \ln(n!)=n\ln n-n+O(\ln n),} where the big O notation means that, for all sufficiently large values of n {\displaystyle n} , the difference between ln ⁡ ( n ! ) {\displaystyle \ln(n!)} and n ln ⁡ n − n {\displaystyle n\ln n-n} will be at most proportional to the logarithm of n {\displaystyle n} . In computer science applications such as the worst-case lower bound for comparison sorting , it is convenient to instead use the binary logarithm , giving the equivalent form log 2 ⁡ ( n ! ) = n log 2 ⁡ n − n log 2 ⁡ e + O ( log 2 ⁡ n ) . {\displaystyle \log _{2}(n!)=n\log _{2}n-n\log _{2}e+O(\log _{2}n).} The error term in either base can be expressed more precisely as 1 2 log 2 ⁡ ( 2 π n ) + O ( 1 n ) {\displaystyle {\tfrac {1}{2}}\log _{2}(2\pi n)+O({\tfrac {1}{n}})} , corresponding to an approximate formula for the factorial itself, n ! ∼ 2 π n ( n e ) n . {\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.} Here the sign ∼ {\displaystyle \sim } means that the two quantities are asymptotic, that is, their ratio tends to 1 as n {\displaystyle n} tends to infinity. Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum ln ⁡ ( n ! ) = ∑ j = 1 n ln ⁡ j {\displaystyle \ln(n!)=\sum _{j=1}^{n}\ln j} with an integral : ∑ j = 1 n ln ⁡ j ≈ ∫ 1 n ln ⁡ x d x = n ln ⁡ n − n + 1. {\displaystyle \sum _{j=1}^{n}\ln j\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1.} The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating n ! {\displaystyle n!} , one considers its natural logarithm , as this is a slowly varying function : ln ⁡ ( n ! ) = ln ⁡ 1 + ln ⁡ 2 + ⋯ + ln ⁡ n . {\displaystyle \ln(n!)=\ln 1+\ln 2+\cdots +\ln n.} The right-hand side of this equation minus 1 2 ( ln ⁡ 1 + ln ⁡ n ) = 1 2 ln ⁡ n {\displaystyle {\tfrac {1}{2}}(\ln 1+\ln n)={\tfrac {1}{2}}\ln n} is the approximation by the trapezoid rule of the integral ln ⁡ ( n ! ) − 1 2 ln ⁡ n ≈ ∫ 1 n ln ⁡ x d x = n ln ⁡ n − n + 1 , {\displaystyle \ln(n!)-{\tfrac {1}{2}}\ln n\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1,} and the error in this approximation is given by the Euler–Maclaurin formula : ln ⁡ ( n ! ) − 1 2 ln ⁡ n = 1 2 ln ⁡ 1 + ln ⁡ 2 + ln ⁡ 3 + ⋯ + ln ⁡ ( n − 1 ) + 1 2 ln ⁡ n = n ln ⁡ n − n + 1 + ∑ k = 2 m ( − 1 ) k B k k ( k − 1 ) ( 1 n k − 1 − 1 ) + R m , n , {\displaystyle {\begin{aligned}\ln(n!)-{\tfrac {1}{2}}\ln n&={\tfrac {1}{2}}\ln 1+\ln 2+\ln 3+\cdots +\ln(n-1)+{\tfrac {1}{2}}\ln n\\&=n\ln n-n+1+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}\left({\frac {1}{n^{k-1}}}-1\right)+R_{m,n},\end{aligned}}} where B k {\displaystyle B_{k}} is a Bernoulli number , and R m , n is the remainder term in the Euler–Maclaurin formula. Take limits to find that lim n → ∞ ( ln ⁡ ( n ! ) − n ln ⁡ n + n − 1 2 ln ⁡ n ) = 1 − ∑ k = 2 m ( − 1 ) k B k k ( k − 1 ) + lim n → ∞ R m , n . 
{\displaystyle \lim _{n\to \infty }\left(\ln(n!)-n\ln n+n-{\tfrac {1}{2}}\ln n\right)=1-\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}+\lim _{n\to \infty }R_{m,n}.} Denote this limit as y {\displaystyle y} . Because the remainder R m , n in the Euler–Maclaurin formula satisfies R m , n = lim n → ∞ R m , n + O ( 1 n m ) , {\displaystyle R_{m,n}=\lim _{n\to \infty }R_{m,n}+O\left({\frac {1}{n^{m}}}\right),} where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form: ln ⁡ ( n ! ) = n ln ⁡ ( n e ) + 1 2 ln ⁡ n + y + ∑ k = 2 m ( − 1 ) k B k k ( k − 1 ) n k − 1 + O ( 1 n m ) . {\displaystyle \ln(n!)=n\ln \left({\frac {n}{e}}\right)+{\tfrac {1}{2}}\ln n+y+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)n^{k-1}}}+O\left({\frac {1}{n^{m}}}\right).} Taking the exponential of both sides and choosing any positive integer m {\displaystyle m} , one obtains a formula involving an unknown quantity e y {\displaystyle e^{y}} . For m = 1 , the formula is n ! = e y n ( n e ) n ( 1 + O ( 1 n ) ) . {\displaystyle n!=e^{y}{\sqrt {n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).} The quantity e y {\displaystyle e^{y}} can be found by taking the limit on both sides as n {\displaystyle n} tends to infinity and using Wallis' product , which shows that e y = 2 π {\displaystyle e^{y}={\sqrt {2\pi }}} . Therefore, one obtains Stirling's formula: n ! = 2 π n ( n e ) n ( 1 + O ( 1 n ) ) . {\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).} An alternative formula for n ! {\displaystyle n!} using the gamma function is n ! = ∫ 0 ∞ x n e − x d x . {\displaystyle n!=\int _{0}^{\infty }x^{n}e^{-x}\,{\rm {d}}x.} (as can be seen by repeated integration by parts). Rewriting and changing variables x = ny , one obtains n ! = ∫ 0 ∞ e n ln ⁡ x − x d x = e n ln ⁡ n n ∫ 0 ∞ e n ( ln ⁡ y − y ) d y . {\displaystyle n!=\int _{0}^{\infty }e^{n\ln x-x}\,{\rm {d}}x=e^{n\ln n}n\int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y.} Applying Laplace's method one has ∫ 0 ∞ e n ( ln ⁡ y − y ) d y ∼ 2 π n e − n , {\displaystyle \int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y\sim {\sqrt {\frac {2\pi }{n}}}e^{-n},} which recovers Stirling's formula: n ! ∼ e n ln ⁡ n n 2 π n e − n = 2 π n ( n e ) n . {\displaystyle n!\sim e^{n\ln n}n{\sqrt {\frac {2\pi }{n}}}e^{-n}={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.} In fact, further corrections can also be obtained using Laplace's method. From previous result, we know that Γ ( x ) ∼ x x e − x {\displaystyle \Gamma (x)\sim x^{x}e^{-x}} , so we "peel off" this dominant term, then perform two changes of variables, to obtain: x − x e x Γ ( x ) = ∫ R e x ( 1 + t − e t ) d t {\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{x(1+t-e^{t})}dt} To verify this: ∫ R e x ( 1 + t − e t ) d t = t ↦ ln ⁡ t e x ∫ 0 ∞ t x − 1 e − x t d t = t ↦ t / x x − x e x ∫ 0 ∞ e − t t x − 1 d t = x − x e x Γ ( x ) {\displaystyle \int _{\mathbb {R} }e^{x(1+t-e^{t})}dt{\overset {t\mapsto \ln t}{=}}e^{x}\int _{0}^{\infty }t^{x-1}e^{-xt}dt{\overset {t\mapsto t/x}{=}}x^{-x}e^{x}\int _{0}^{\infty }e^{-t}t^{x-1}dt=x^{-x}e^{x}\Gamma (x)} . Now the function t ↦ 1 + t − e t {\displaystyle t\mapsto 1+t-e^{t}} is unimodal, with maximum value zero. Locally around zero, it looks like − t 2 / 2 {\displaystyle -t^{2}/2} , which is why we are able to perform Laplace's method. 
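As a rough numerical illustration (not part of the original article), the following Python sketch compares n! with the leading-order approximation √(2πn)(n/e)^n just derived; the relative error shrinks roughly like 1/(12n), consistent with the first correction term obtained below.

```python
import math

def stirling_basic(n: int) -> float:
    """Leading-order Stirling approximation sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 2, 5, 10, 20, 50):
    exact = math.factorial(n)
    approx = stirling_basic(n)
    # The relative error behaves roughly like 1/(12n), matching the 1 + O(1/n) factor.
    print(f"n={n:3d}  rel. error = {(exact - approx) / exact:.3e}")
```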
In order to extend Laplace's method to higher orders, we perform another change of variables by 1 + t − e t = − τ 2 / 2 {\displaystyle 1+t-e^{t}=-\tau ^{2}/2} . This equation cannot be solved in closed form, but it can be solved by serial expansion, which gives us t = τ − τ 2 / 6 + τ 3 / 36 + a 4 τ 4 + O ( τ 5 ) {\displaystyle t=\tau -\tau ^{2}/6+\tau ^{3}/36+a_{4}\tau ^{4}+O(\tau ^{5})} . Now plug back to the equation to obtain x − x e x Γ ( x ) = ∫ R e − x τ 2 / 2 ( 1 − τ / 3 + τ 2 / 12 + 4 a 4 τ 3 + O ( τ 4 ) ) d τ = 2 π ( x − 1 / 2 + x − 3 / 2 / 12 ) + O ( x − 5 / 2 ) {\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{-x\tau ^{2}/2}(1-\tau /3+\tau ^{2}/12+4a_{4}\tau ^{3}+O(\tau ^{4}))d\tau ={\sqrt {2\pi }}(x^{-1/2}+x^{-3/2}/12)+O(x^{-5/2})} notice how we don't need to actually find a 4 {\displaystyle a_{4}} , since it is cancelled out by the integral. Higher orders can be achieved by computing more terms in t = τ + ⋯ {\displaystyle t=\tau +\cdots } , which can be obtained programmatically. [ note 1 ] Thus we get Stirling's formula to two orders: n ! = 2 π n ( n e ) n ( 1 + 1 12 n + O ( 1 n 2 ) ) . {\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+O\left({\frac {1}{n^{2}}}\right)\right).} A complex-analysis version of this method [ 4 ] is to consider 1 n ! {\displaystyle {\frac {1}{n!}}} as a Taylor coefficient of the exponential function e z = ∑ n = 0 ∞ z n n ! {\displaystyle e^{z}=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}} , computed by Cauchy's integral formula as 1 n ! = 1 2 π i ∮ | z | = r e z z n + 1 d z . {\displaystyle {\frac {1}{n!}}={\frac {1}{2\pi i}}\oint \limits _{|z|=r}{\frac {e^{z}}{z^{n+1}}}\,\mathrm {d} z.} This line integral can then be approximated using the saddle-point method with an appropriate choice of contour radius r = r n {\displaystyle r=r_{n}} . The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term. An alternative version uses the fact that the Poisson distribution converges to a normal distribution by the Central Limit Theorem . [ 5 ] Since the Poisson distribution with parameter λ {\displaystyle \lambda } converges to a normal distribution with mean λ {\displaystyle \lambda } and variance λ {\displaystyle \lambda } , their density functions will be approximately the same: exp ⁡ ( − μ ) μ x x ! ≈ 1 2 π μ exp ⁡ ( − 1 2 ( x − μ μ ) ) {\displaystyle {\frac {\exp(-\mu )\mu ^{x}}{x!}}\approx {\frac {1}{\sqrt {2\pi \mu }}}\exp(-{\frac {1}{2}}({\frac {x-\mu }{\sqrt {\mu }}}))} Evaluating this expression at the mean, at which the approximation is particularly accurate, simplifies this expression to: exp ⁡ ( − μ ) μ μ μ ! ≈ 1 2 π μ {\displaystyle {\frac {\exp(-\mu )\mu ^{\mu }}{\mu !}}\approx {\frac {1}{\sqrt {2\pi \mu }}}} Taking logs then results in: − μ + μ ln ⁡ μ − ln ⁡ μ ! ≈ − 1 2 ln ⁡ 2 π μ {\displaystyle -\mu +\mu \ln \mu -\ln \mu !\approx -{\frac {1}{2}}\ln 2\pi \mu } which can easily be rearranged to give: ln ⁡ μ ! ≈ μ ln ⁡ μ − μ + 1 2 ln ⁡ 2 π μ {\displaystyle \ln \mu !\approx \mu \ln \mu -\mu +{\frac {1}{2}}\ln 2\pi \mu } Evaluating at μ = n {\displaystyle \mu =n} gives the usual, more precise form of Stirling's approximation. Stirling's formula is in fact the first approximation to the following series (now called the Stirling series ): [ 6 ] n ! ∼ 2 π n ( n e ) n ( 1 + 1 12 n + 1 288 n 2 − 139 51840 n 3 − 571 2488320 n 4 + ⋯ ) . 
{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).} An explicit formula for the coefficients in this series was given by G. Nemes. [ 7 ] Further terms are listed in the On-Line Encyclopedia of Integer Sequences as A001163 and A001164 . The first graph in this section shows the relative error vs. n {\displaystyle n} , for 1 through all 5 terms listed above. (Bender and Orszag [ 8 ] p. 218) gives the asymptotic formula for the coefficients: A 2 j + 1 ∼ ( − 1 ) j 2 ( 2 j ) ! / ( 2 π ) 2 ( j + 1 ) {\displaystyle A_{2j+1}\sim (-1)^{j}2(2j)!/(2\pi )^{2(j+1)}} which shows that it grows superexponentially, and that by the ratio test the radius of convergence is zero. As n → ∞ , the error in the truncated series is asymptotically equal to the first omitted term. This is an example of an asymptotic expansion . It is not a convergent series ; for any particular value of n {\displaystyle n} there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, let S ( n , t ) be the Stirling series to t {\displaystyle t} terms evaluated at n {\displaystyle n} . The graphs show | ln ⁡ ( S ( n , t ) n ! ) | , {\displaystyle \left|\ln \left({\frac {S(n,t)}{n!}}\right)\right|,} which, when small, is essentially the relative error. Writing Stirling's series in the form ln ⁡ ( n ! ) ∼ n ln ⁡ n − n + 1 2 ln ⁡ ( 2 π n ) + 1 12 n − 1 360 n 3 + 1 1260 n 5 − 1 1680 n 7 + ⋯ , {\displaystyle \ln(n!)\sim n\ln n-n+{\tfrac {1}{2}}\ln(2\pi n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots ,} it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term. [ citation needed ] Other bounds, due to Robbins, [ 9 ] valid for all positive integers n {\displaystyle n} are 2 π n ( n e ) n e 1 12 n + 1 < n ! < 2 π n ( n e ) n e 1 12 n . {\displaystyle {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n+1}}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.} This upper bound corresponds to stopping the above series for ln ⁡ ( n ! ) {\displaystyle \ln(n!)} after the 1 n {\displaystyle {\frac {1}{n}}} term. The lower bound is weaker than that obtained by stopping the series after the 1 n 3 {\displaystyle {\frac {1}{n^{3}}}} term. A looser version of this bound is that n ! e n n n + 1 2 ∈ ( 2 π , e ] {\displaystyle {\frac {n!e^{n}}{n^{n+{\frac {1}{2}}}}}\in ({\sqrt {2\pi }},e]} for all n ≥ 1 {\displaystyle n\geq 1} . For all positive integers, n ! = Γ ( n + 1 ) , {\displaystyle n!=\Gamma (n+1),} where Γ denotes the gamma function . However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. If Re( z ) > 0 , then ln ⁡ Γ ( z ) = z ln ⁡ z − z + 1 2 ln ⁡ 2 π z + ∫ 0 ∞ 2 arctan ⁡ ( t z ) e 2 π t − 1 d t . 
{\displaystyle \ln \Gamma (z)=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{z}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t.} Repeated integration by parts gives ln ⁡ Γ ( z ) ∼ z ln ⁡ z − z + 1 2 ln ⁡ 2 π z + ∑ n = 1 N − 1 B 2 n 2 n ( 2 n − 1 ) z 2 n − 1 = z ln ⁡ z − z + 1 2 ln ⁡ 2 π z + 1 12 z − 1 360 z 3 + 1 1260 z 5 + … , {\displaystyle {\begin{aligned}\ln \Gamma (z)\sim z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\sum _{n=1}^{N-1}{\frac {B_{2n}}{2n(2n-1)z^{2n-1}}}\\=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+{\frac {1}{12z}}-{\frac {1}{360z^{3}}}+{\frac {1}{1260z^{5}}}+\dots ,\end{aligned}}} where B n {\displaystyle B_{n}} is the n {\displaystyle n} th Bernoulli number (note that the limit of the sum as N → ∞ {\displaystyle N\to \infty } is not convergent, so this formula is just an asymptotic expansion ). The formula is valid for z {\displaystyle z} large enough in absolute value, when | arg( z ) | < π − ε , where ε is positive, with an error term of O ( z −2 N + 1 ) . The corresponding approximation may now be written: Γ ( z ) = 2 π z ( z e ) z ( 1 + O ( 1 z ) ) . {\displaystyle \Gamma (z)={\sqrt {\frac {2\pi }{z}}}\,{\left({\frac {z}{e}}\right)}^{z}\left(1+O\left({\frac {1}{z}}\right)\right).} where the expansion is identical to that of Stirling's series above for n ! {\displaystyle n!} , except that n {\displaystyle n} is replaced with z − 1 . [ 10 ] A further application of this asymptotic expansion is for complex argument z with constant Re( z ) . See for example the Stirling formula applied in Im( z ) = t of the Riemann–Siegel theta function on the straight line ⁠ 1 / 4 ⁠ + it . Thomas Bayes showed, in a letter to John Canton published by the Royal Society in 1763, that Stirling's formula did not give a convergent series . [ 11 ] Obtaining a convergent version of Stirling's formula entails evaluating Binet's formula : ∫ 0 ∞ 2 arctan ⁡ ( t x ) e 2 π t − 1 d t = ln ⁡ Γ ( x ) − x ln ⁡ x + x − 1 2 ln ⁡ 2 π x . {\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\ln \Gamma (x)-x\ln x+x-{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}.} One way to do this is by means of a convergent series of inverted rising factorials . If z n ¯ = z ( z + 1 ) ⋯ ( z + n − 1 ) , {\displaystyle z^{\bar {n}}=z(z+1)\cdots (z+n-1),} then ∫ 0 ∞ 2 arctan ⁡ ( t x ) e 2 π t − 1 d t = ∑ n = 1 ∞ c n ( x + 1 ) n ¯ , {\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\sum _{n=1}^{\infty }{\frac {c_{n}}{(x+1)^{\bar {n}}}},} where c n = 1 n ∫ 0 1 x n ¯ ( x − 1 2 ) d x = 1 2 n ∑ k = 1 n k | s ( n , k ) | ( k + 1 ) ( k + 2 ) , {\displaystyle c_{n}={\frac {1}{n}}\int _{0}^{1}x^{\bar {n}}\left(x-{\tfrac {1}{2}}\right)\,{\rm {d}}x={\frac {1}{2n}}\sum _{k=1}^{n}{\frac {k|s(n,k)|}{(k+1)(k+2)}},} where s ( n , k ) denotes the Stirling numbers of the first kind . From this one obtains a version of Stirling's series ln ⁡ Γ ( x ) = x ln ⁡ x − x + 1 2 ln ⁡ 2 π x + 1 12 ( x + 1 ) + 1 12 ( x + 1 ) ( x + 2 ) + + 59 360 ( x + 1 ) ( x + 2 ) ( x + 3 ) + 29 60 ( x + 1 ) ( x + 2 ) ( x + 3 ) ( x + 4 ) + ⋯ , {\displaystyle {\begin{aligned}\ln \Gamma (x)&=x\ln x-x+{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}+{\frac {1}{12(x+1)}}+{\frac {1}{12(x+1)(x+2)}}+\\&\quad +{\frac {59}{360(x+1)(x+2)(x+3)}}+{\frac {29}{60(x+1)(x+2)(x+3)(x+4)}}+\cdots ,\end{aligned}}} which converges when Re( x ) > 0 . 
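The truncated asymptotic series above is easy to test numerically. The following Python sketch (an illustration, not from the source) evaluates z ln z − z + ½ ln(2π/z) plus the first few Bernoulli-number terms 1/(12z) − 1/(360z³) + 1/(1260z⁵) and compares the result against math.lgamma for real arguments.

```python
import math

BERNOULLI_TERMS = (1 / 12, -1 / 360, 1 / 1260)   # B_{2n} / (2n(2n-1)) for n = 1, 2, 3

def lgamma_series(z: float, terms: int) -> float:
    """Truncated Stirling series for ln Gamma(z)."""
    total = z * math.log(z) - z + 0.5 * math.log(2 * math.pi / z)
    for n, c in enumerate(BERNOULLI_TERMS[:terms], start=1):
        total += c / z ** (2 * n - 1)
    return total

for z in (2.0, 5.0, 10.0, 30.0):
    errors = [abs(lgamma_series(z, t) - math.lgamma(z)) for t in range(4)]
    print(f"z={z:5.1f}  errors with 0..3 correction terms: "
          + "  ".join(f"{e:.1e}" for e in errors))
```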
Stirling's formula may also be given in convergent form as [ 12 ] Γ ( x ) = 2 π x x − 1 2 e − x + μ ( x ) {\displaystyle \Gamma (x)={\sqrt {2\pi }}x^{x-{\frac {1}{2}}}e^{-x+\mu (x)}} where μ ( x ) = ∑ n = 0 ∞ ( ( x + n + 1 2 ) ln ⁡ ( 1 + 1 x + n ) − 1 ) . {\displaystyle \mu \left(x\right)=\sum _{n=0}^{\infty }\left(\left(x+n+{\frac {1}{2}}\right)\ln \left(1+{\frac {1}{x+n}}\right)-1\right).} The approximation Γ ( z ) ≈ 2 π z ( z e z sinh ⁡ 1 z + 1 810 z 6 ) z {\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {z}{e}}{\sqrt {z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}}}\right)^{z}} and its equivalent form 2 ln ⁡ Γ ( z ) ≈ ln ⁡ ( 2 π ) − ln ⁡ z + z ( 2 ln ⁡ z + ln ⁡ ( z sinh ⁡ 1 z + 1 810 z 6 ) − 2 ) {\displaystyle 2\ln \Gamma (z)\approx \ln(2\pi )-\ln z+z\left(2\ln z+\ln \left(z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}\right)-2\right)} can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory. [ 13 ] Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler: [ 14 ] Γ ( z ) ≈ 2 π z ( 1 e ( z + 1 12 z − 1 10 z ) ) z , {\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {1}{e}}\left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)\right)^{z},} or equivalently, ln ⁡ Γ ( z ) ≈ 1 2 ( ln ⁡ ( 2 π ) − ln ⁡ z ) + z ( ln ⁡ ( z + 1 12 z − 1 10 z ) − 1 ) . {\displaystyle \ln \Gamma (z)\approx {\tfrac {1}{2}}\left(\ln(2\pi )-\ln z\right)+z\left(\ln \left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)-1\right).} An alternative approximation for the gamma function stated by Srinivasa Ramanujan in Ramanujan's lost notebook [ 15 ] is Γ ( 1 + x ) ≈ π ( x e ) x ( 8 x 3 + 4 x 2 + x + 1 30 ) 1 6 {\displaystyle \Gamma (1+x)\approx {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{\frac {1}{6}}} for x ≥ 0 . The equivalent approximation for ln n ! has an asymptotic error of ⁠ 1 / 1400 n 3 ⁠ and is given by ln ⁡ n ! ≈ n ln ⁡ n − n + 1 6 ln ⁡ ( 8 n 3 + 4 n 2 + n + 1 30 ) + 1 2 ln ⁡ π . {\displaystyle \ln n!\approx n\ln n-n+{\tfrac {1}{6}}\ln(8n^{3}+4n^{2}+n+{\tfrac {1}{30}})+{\tfrac {1}{2}}\ln \pi .} The approximation may be made precise by giving paired upper and lower bounds; one such inequality is [ 16 ] [ 17 ] [ 18 ] [ 19 ] π ( x e ) x ( 8 x 3 + 4 x 2 + x + 1 100 ) 1 / 6 < Γ ( 1 + x ) < π ( x e ) x ( 8 x 3 + 4 x 2 + x + 1 30 ) 1 / 6 . {\displaystyle {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{100}}\right)^{1/6}<\Gamma (1+x)<{\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{1/6}.} The formula was first discovered by Abraham de Moivre [ 2 ] in the form n ! ∼ [ c o n s t a n t ] ⋅ n n + 1 2 e − n . {\displaystyle n!\sim [{\rm {constant}}]\cdot n^{n+{\frac {1}{2}}}e^{-n}.} De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely 2 π {\displaystyle {\sqrt {2\pi }}} . [ 3 ]
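A short Python comparison (an illustrative sketch, not from the cited papers) of the Windschitl, Nemes and Ramanujan approximations quoted above against math.gamma; Ramanujan's formula approximates Γ(1 + x), so it is divided by z before being compared with Γ(z).

```python
import math

def windschitl(z: float) -> float:
    return math.sqrt(2 * math.pi / z) * (
        (z / math.e) * math.sqrt(z * math.sinh(1 / z) + 1 / (810 * z ** 6))) ** z

def nemes(z: float) -> float:
    return math.sqrt(2 * math.pi / z) * ((z + 1 / (12 * z - 1 / (10 * z))) / math.e) ** z

def ramanujan(x: float) -> float:
    # Approximates Gamma(1 + x).
    return math.sqrt(math.pi) * (x / math.e) ** x * (
        8 * x ** 3 + 4 * x ** 2 + x + 1 / 30) ** (1 / 6)

for z in (5.0, 10.0, 20.0):
    exact = math.gamma(z)
    print(f"z={z:4.1f}"
          f"  Windschitl {abs(windschitl(z) - exact) / exact:.1e}"
          f"  Nemes {abs(nemes(z) - exact) / exact:.1e}"
          f"  Ramanujan {abs(ramanujan(z) / z - exact) / exact:.1e}")
```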
https://en.wikipedia.org/wiki/Stirling's_approximation
In mathematics , especially in combinatorics , Stirling numbers of the first kind arise in the study of permutations. In particular, the unsigned Stirling numbers of the first kind count permutations according to their number of cycles (counting fixed points as cycles of length one). [ 1 ] The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices . This article is devoted to specifics of Stirling numbers of the first kind. Identities linking the two kinds appear in the article on Stirling numbers . The Stirling numbers of the first kind are the coefficients s ( n , k ) {\displaystyle s(n,k)} in the expansion of the falling factorial into powers of the variable x {\displaystyle x} : For example, ( x ) 3 = x ( x − 1 ) ( x − 2 ) = x 3 − 3 x 2 + 2 x {\displaystyle (x)_{3}=x(x-1)(x-2)=x^{3}-3x^{2}+2x} , leading to the values s ( 3 , 3 ) = 1 {\displaystyle s(3,3)=1} , s ( 3 , 2 ) = − 3 {\displaystyle s(3,2)=-3} , and s ( 3 , 1 ) = 2 {\displaystyle s(3,1)=2} . The unsigned Stirling numbers may also be defined algebraically as the coefficients of the rising factorial : The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources; the square bracket notation [ n k ] {\displaystyle \left[{n \atop k}\right]} is also common notation for the Gaussian coefficients . [ 2 ] Subsequently, it was discovered that the absolute values | s ( n , k ) | {\displaystyle |s(n,k)|} of these numbers are equal to the number of permutations of certain kinds. These absolute values, which are known as unsigned Stirling numbers of the first kind, are often denoted c ( n , k ) {\displaystyle c(n,k)} or [ n k ] {\displaystyle \left[{n \atop k}\right]} . They may be defined directly to be the number of permutations of n {\displaystyle n} elements with k {\displaystyle k} disjoint cycles . [ 1 ] For example, of the 3 ! = 6 {\displaystyle 3!=6} permutations of three elements, there is one permutation with three cycles (the identity permutation , given in one-line notation by 123 {\displaystyle 123} or in cycle notation by ( 1 ) ( 2 ) ( 3 ) {\displaystyle (1)(2)(3)} ), three permutations with two cycles ( 132 = ( 1 ) ( 23 ) {\displaystyle 132=(1)(23)} , 213 = ( 12 ) ( 3 ) {\displaystyle 213=(12)(3)} , and 321 = ( 13 ) ( 2 ) {\displaystyle 321=(13)(2)} ) and two permutations with one cycle ( 312 = ( 132 ) {\displaystyle 312=(132)} and 231 = ( 123 ) {\displaystyle 231=(123)} ). Thus [ 3 3 ] = 1 {\displaystyle \left[{3 \atop 3}\right]=1} , [ 3 2 ] = 3 , {\displaystyle \left[{3 \atop 2}\right]=3,} and [ 3 1 ] = 2 {\displaystyle \left[{3 \atop 1}\right]=2} . These can be seen to agree with the previous algebraic calculations of s ( n , k ) {\displaystyle s(n,k)} for n = 3 {\displaystyle n=3} . For another example, the image at right shows that [ 4 2 ] = 11 {\displaystyle \left[{4 \atop 2}\right]=11} : the symmetric group on 4 objects has 3 permutations of the form and 8 permutations of the form These numbers can be calculated by considering the orbits as conjugacy classes . Alfréd Rényi observed that the unsigned Stirling number of the first kind [ n k ] {\displaystyle \left[{n \atop k}\right]} also counts the number of permutations of size n {\displaystyle n} with k {\displaystyle k} left-to-right maxima. [ 3 ] The signs of the signed Stirling numbers of the first kind depend only on the parity of n − k . 
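The defining expansion can be checked directly with a few lines of Python (an illustrative sketch, not from the source): multiplying out the falling factorial x(x−1)⋯(x−n+1) as a coefficient list recovers the signed values s(n, k), and their absolute values match the cycle counts described above.

```python
def falling_factorial_coeffs(n: int) -> list[int]:
    """Coefficients [c_0, ..., c_n] of x(x-1)...(x-n+1); c_k = s(n, k)."""
    poly = [1]                        # (x)_0 = 1, the empty product
    for j in range(n):
        new = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):  # multiply the current polynomial by (x - j)
            new[k + 1] += c
            new[k] -= j * c
        poly = new
    return poly

coeffs = falling_factorial_coeffs(3)
print(coeffs)                          # [0, 2, -3, 1]: s(3,1)=2, s(3,2)=-3, s(3,3)=1
print([abs(c) for c in coeffs[1:]])    # [2, 3, 1], the unsigned cycle counts
```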
The unsigned Stirling numbers of the first kind follow the recurrence relation for k > 0 {\displaystyle k>0} , with the boundary conditions for n > 0 {\displaystyle n>0} . [ 2 ] It follows immediately that the signed Stirling numbers of the first kind satisfy the recurrence We prove the recurrence relation using the definition of Stirling numbers in terms of rising factorials. Distributing the last term of the product, we have The coefficient of x k {\displaystyle x^{k}} on the left-hand side of this equation is [ n + 1 k ] {\displaystyle \left[{n+1 \atop k}\right]} . The coefficient of x k {\displaystyle x^{k}} in n ⋅ x n ¯ {\displaystyle n\cdot x^{\overline {n}}} is n ⋅ [ n k ] {\displaystyle n\cdot \left[{n \atop k}\right]} , while the coefficient of x k {\displaystyle x^{k}} in x ⋅ x n ¯ {\displaystyle x\cdot x^{\overline {n}}} is [ n k − 1 ] {\displaystyle \left[{n \atop k-1}\right]} . Since the two sides are equal as polynomials, the coefficients of x k {\displaystyle x^{k}} on both sides must be equal, and the result follows. We prove the recurrence relation using the definition of Stirling numbers in terms of permutations with a given number of cycles (or equivalently, orbits ). Consider forming a permutation of n + 1 {\displaystyle n+1} objects from a permutation of n {\displaystyle n} objects by adding a distinguished object. There are exactly two ways in which this can be accomplished. We could do this by forming a singleton cycle, i.e., leaving the extra object alone. This increases the number of cycles by 1 and so accounts for the [ n k − 1 ] {\displaystyle \left[{n \atop k-1}\right]} term in the recurrence formula. We could also insert the new object into one of the existing cycles. Consider an arbitrary permutation of n {\displaystyle n} objects with k {\displaystyle k} cycles, and label the objects a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} , so that the permutation is represented by To form a new permutation of n + 1 {\displaystyle n+1} objects and k {\displaystyle k} cycles one must insert the new object into this array. There are n {\displaystyle n} ways to perform this insertion, inserting the new object immediately following any of the a i {\displaystyle a_{i}} already present. This explains the n [ n k ] {\displaystyle n\left[{n \atop k}\right]} term of the recurrence relation. These two cases include all possibilities, so the recurrence relation follows. Below is a triangular array of unsigned values for the Stirling numbers of the first kind, similar in form to Pascal's triangle . These values are easy to generate using the recurrence relation in the previous section. Using the Kronecker delta one has, and Also and Similar relationships involving the Stirling numbers hold for the Bernoulli polynomials . Many relations for the Stirling numbers shadow similar relations on the binomial coefficients . The study of these 'shadow relationships' is termed umbral calculus and culminates in the theory of Sheffer sequences . Generalizations of the Stirling numbers of both kinds to arbitrary complex-valued inputs may be extended through the relations of these triangles to the Stirling convolution polynomials . [ 4 ] These identities may be derived by enumerating permutations directly. For example, a permutation of n elements with n − 3 cycles must have one of the following forms: The three types may be enumerated as follows: Sum the three contributions to obtain Note that all the combinatorial proofs above use either binomials or multinomials of n {\displaystyle n} . 
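The recurrence just proved, [n+1, k] = n[n, k] + [n, k−1], can be exercised in code and cross-checked against a brute-force count of permutations by number of cycles (an illustrative Python sketch, not from the source); for n = 4 both give the row 6, 11, 6, 1, including [4, 2] = 11 as described above.

```python
from itertools import permutations

def stirling1_unsigned(nmax: int) -> list[list[int]]:
    """Triangle of unsigned Stirling numbers of the first kind from
    [n+1, k] = n*[n, k] + [n, k-1] with [0, 0] = 1."""
    tri = [[1]]
    for n in range(nmax):
        row = [0] * (n + 2)
        for k in range(1, n + 2):
            above = tri[n][k] if k < len(tri[n]) else 0
            row[k] = n * above + tri[n][k - 1]
        tri.append(row)
    return tri

def count_cycles(perm: tuple) -> int:
    """Number of cycles of a permutation given in 0-indexed one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

n = 4
brute = [0] * (n + 1)
for p in permutations(range(n)):
    brute[count_cycles(p)] += 1
print(stirling1_unsigned(n)[n])   # [0, 6, 11, 6, 1]
print(brute)                      # [0, 6, 11, 6, 1]
```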
Therefore if p {\displaystyle p} is prime, then: p | [ p k ] {\displaystyle p\ |\left[{p \atop k}\right]} for 1 < k < p {\displaystyle 1<k<p} . Since the Stirling numbers are the coefficients of a polynomial with roots 0, 1, ..., n − 1 , one has by Vieta's formulas that [ n n − k ] = ∑ 0 ≤ i 1 < … < i k < n i 1 i 2 ⋯ i k . {\displaystyle \left[{\begin{matrix}n\\n-k\end{matrix}}\right]=\sum _{0\leq i_{1}<\ldots <i_{k}<n}i_{1}i_{2}\cdots i_{k}.} In other words, the Stirling numbers of the first kind are given by elementary symmetric polynomials evaluated at 0, 1, ..., n − 1 . [ 5 ] In this form, the simple identities given above take the form [ n n − 1 ] = ∑ i = 0 n − 1 i = ( n 2 ) , {\displaystyle \left[{\begin{matrix}n\\n-1\end{matrix}}\right]=\sum _{i=0}^{n-1}i={\binom {n}{2}},} [ n n − 2 ] = ∑ i = 0 n − 1 ∑ j = 0 i − 1 i j = 3 n − 1 4 ( n 3 ) , {\displaystyle \left[{\begin{matrix}n\\n-2\end{matrix}}\right]=\sum _{i=0}^{n-1}\sum _{j=0}^{i-1}ij={\frac {3n-1}{4}}{\binom {n}{3}},} [ n n − 3 ] = ∑ i = 0 n − 1 ∑ j = 0 i − 1 ∑ k = 0 j − 1 i j k = ( n 2 ) ( n 4 ) , {\displaystyle \left[{\begin{matrix}n\\n-3\end{matrix}}\right]=\sum _{i=0}^{n-1}\sum _{j=0}^{i-1}\sum _{k=0}^{j-1}ijk={\binom {n}{2}}{\binom {n}{4}},} and so on. One may produce alternative forms for the Stirling numbers of the first kind with a similar approach preceded by some algebraic manipulation: since it follows from Newton's formulas that one can expand the Stirling numbers of the first kind in terms of generalized harmonic numbers . This yields identities like where H n is the harmonic number H n = 1 1 + 1 2 + … + 1 n {\displaystyle H_{n}={\frac {1}{1}}+{\frac {1}{2}}+\ldots +{\frac {1}{n}}} and H n ( m ) is the generalized harmonic number H n ( m ) = 1 1 m + 1 2 m + … + 1 n m . {\displaystyle H_{n}^{(m)}={\frac {1}{1^{m}}}+{\frac {1}{2^{m}}}+\ldots +{\frac {1}{n^{m}}}.} These relations can be generalized to give where w ( n , m ) is defined recursively in terms of the generalized harmonic numbers by (Here δ is the Kronecker delta function and ( m ) k {\displaystyle (m)_{k}} is the Pochhammer symbol .) [ 6 ] For fixed n ≥ 0 {\displaystyle n\geq 0} these weighted harmonic number expansions are generated by the generating function where the notation [ x k ] {\displaystyle [x^{k}]} means extraction of the coefficient of x k {\displaystyle x^{k}} from the following formal power series (see the non-exponential Bell polynomials and section 3 of [ 7 ] ). More generally, sums related to these weighted harmonic number expansions of the Stirling numbers of the first kind can be defined through generalized zeta series transforms of generating functions . [ 8 ] [ 9 ] One can also "invert" the relations for these Stirling numbers given in terms of the k {\displaystyle k} -order harmonic numbers to write the integer-order generalized harmonic numbers in terms of weighted sums of terms involving the Stirling numbers of the first kind. For example, when k = 2 , 3 {\displaystyle k=2,3} the second-order and third-order harmonic numbers are given by More generally, one can invert the Bell polynomial generating function for the Stirling numbers expanded in terms of the m {\displaystyle m} -order harmonic numbers to obtain that for integers m ≥ 2 {\displaystyle m\geq 2} Since permutations are partitioned by number of cycles, one has The identities and can be proved by the techniques at Stirling numbers and exponential generating functions#Stirling numbers of the first kind and Binomial coefficient#Ordinary generating functions . 
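The identification with elementary symmetric polynomials is also easy to test numerically. The Python sketch below (illustrative, not from the source) expands ∏(1 + i·x) over i = 0, …, n−1 and compares the resulting e_k with the closed forms [n, n−1] = C(n,2), [n, n−2] = (3n−1)/4·C(n,3) and [n, n−3] = C(n,2)C(n,4) quoted above.

```python
from math import comb

def elementary_symmetric(values) -> list[int]:
    """Coefficients e_0, e_1, ... of prod_{v in values} (1 + v*x)."""
    e = [1]
    for v in values:
        e = [a + v * b for a, b in zip(e + [0], [0] + e)]
    return e

n = 7
e = elementary_symmetric(range(n))              # e[k] = [n, n-k] by Vieta's formulas
print(e[1] == comb(n, 2))                       # [n, n-1]
print(e[2] == (3 * n - 1) * comb(n, 3) // 4)    # [n, n-2]
print(e[3] == comb(n, 2) * comb(n, 4))          # [n, n-3]
```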
The table in section 6.1 of Concrete Mathematics provides a plethora of generalized forms of finite sums involving the Stirling numbers. Several particular finite sums relevant to this article include Additionally, if we define the second-order Eulerian numbers by the triangular recurrence relation [ 10 ] we arrive at the following identity related to the form of the Stirling convolution polynomials which can be employed to generalize both Stirling number triangles to arbitrary real, or complex-valued, values of the input x {\displaystyle x} : Particular expansions of the previous identity lead to the following identities expanding the Stirling numbers of the first kind for the first few small values of n := 1 , 2 , 3 {\displaystyle n:=1,2,3} : Software tools for working with finite sums involving Stirling numbers and Eulerian numbers are provided by the RISC Stirling.m package utilities in Mathematica . Other software packages for guessing formulas for sequences (and polynomial sequence sums) involving Stirling numbers and other special triangles is available for both Mathematica and Sage here and here , respectively. [ 11 ] The following congruence identity may be proved via a generating function -based approach: [ 12 ] More recent results providing Jacobi-type J-fractions that generate the single factorial function and generalized factorial-related products lead to other new congruence results for the Stirling numbers of the first kind. [ 13 ] For example, working modulo 2 {\displaystyle 2} we can prove that Where [ b ] {\displaystyle [b]} is the Iverson bracket . and working modulo 3 {\displaystyle 3} we can similarly prove that More generally, for fixed integers h ≥ 3 {\displaystyle h\geq 3} if we define the ordered roots then we may expand congruences for these Stirling numbers defined as the coefficients in the following form where the functions, p h , i [ m ] ( n ) {\displaystyle p_{h,i}^{[m]}(n)} , denote fixed polynomials of degree m {\displaystyle m} in n {\displaystyle n} for each h {\displaystyle h} , m {\displaystyle m} , and i {\displaystyle i} : Section 6.2 of the reference cited above provides more explicit expansions related to these congruences for the r {\displaystyle r} -order harmonic numbers and for the generalized factorial products , p n ( α , R ) := R ( R + α ) ⋯ ( R + ( n − 1 ) α ) {\displaystyle p_{n}(\alpha ,R):=R(R+\alpha )\cdots (R+(n-1)\alpha )} . A variety of identities may be derived by manipulating the generating function (see change of basis ): Using the equality it follows that and This identity is valid for formal power series , and the sum converges in the complex plane for | z | < 1. Other identities arise by exchanging the order of summation, taking derivatives, making substitutions for z or u , etc. For example, we may derive: [ 14 ] or and where ζ ( k ) {\displaystyle \zeta (k)} and ζ ( k , v ) {\displaystyle \zeta (k,v)} are the Riemann zeta function and the Hurwitz zeta function respectively, and even evaluate this integral where Γ ( z ) {\displaystyle \Gamma (z)} is the gamma function . There also exist more complicated expressions for the zeta-functions involving the Stirling numbers. One, for example, has This series generalizes Hasse 's series for the Hurwitz zeta-function (we obtain Hasse's series by setting k =1). 
[ 15 ] [ 16 ] The next estimate given in terms of the Euler gamma constant applies: [ 17 ] For fixed n {\displaystyle n} we have the following estimate : It is well-known that we don't know any one-sum formula for Stirling numbers of the first kind. A two-sum formula can be obtained using one of the symmetric formulae for Stirling numbers in conjunction with the explicit formula for Stirling numbers of the second kind . As discussed earlier, by Vieta's formulas , one get [ n k ] = ∑ 0 ≤ i 1 < … < i n − k < n i 1 i 2 ⋯ i n − k . {\displaystyle \left[{\begin{matrix}n\\k\end{matrix}}\right]=\sum _{0\leq i_{1}<\ldots <i_{n-k}<n}i_{1}i_{2}\cdots i_{n-k}.} The Stirling number s(n,n-p) can be found from the formula [ 18 ] where K = k 1 + ⋯ + k p . {\displaystyle K=k_{1}+\cdots +k_{p}.} The sum is a sum over all partitions of p . Another exact nested sum expansion for these Stirling numbers is computed by elementary symmetric polynomials corresponding to the coefficients in x {\displaystyle x} of a product of the form ( 1 + c 1 x ) ⋯ ( 1 + c n − 1 x ) {\displaystyle (1+c_{1}x)\cdots (1+c_{n-1}x)} . In particular, we see that Newton's identities combined with the above expansions may be used to give an alternate proof of the weighted expansions involving the generalized harmonic numbers already noted above . The n th derivative of the μ th power of the natural logarithm involves the signed Stirling numbers of the first kind: d n ( ln ⁡ x ) μ d x n = x − n ∑ k = 1 n μ k _ s ( n , n − k + 1 ) ( ln ⁡ x ) μ − k , {\displaystyle {\operatorname {d} ^{n}\!(\ln x)^{\mu } \over \operatorname {d} \!x^{n}}=x^{-n}\sum _{k=1}^{n}\mu ^{\underline {k}}s(n,n-k+1)(\ln x)^{\mu -k},} where μ i _ {\displaystyle \mu ^{\underline {i}}} is the falling factorial , and s ( n , n − k + 1 ) {\displaystyle s(n,n-k+1)} is the signed Stirling number. It can be proved by using mathematical induction . Stirling numbers of the first kind appear in the formula for Gregory coefficients and in a finite sum identity involving Bell numbers [ 19 ] n ! G n = ∑ l = 0 n s ( n , l ) l + 1 {\displaystyle n!G_{n}=\sum _{l=0}^{n}{\frac {s(n,l)}{l+1}}} ∑ j = 0 n ( n j ) B j k n − j = ∑ i = 0 k [ k i ] B n + i ( − 1 ) k − i {\displaystyle \sum _{j=0}^{n}{\binom {n}{j}}B_{j}k^{n-j}=\sum _{i=0}^{k}\left[{k \atop i}\right]B_{n+i}(-1)^{k-i}} Infinite series involving the finite sums with the Stirling numbers often lead to the special functions. For example [ 14 ] [ 20 ] ln ⁡ Γ ( z ) = ( z − 1 2 ) ln ⁡ z − z + 1 2 ln ⁡ 2 π + 1 π ∑ n = 1 ∞ 1 n ⋅ n ! ∑ l = 0 ⌊ n / 2 ⌋ ( − 1 ) l ( 2 l ) ! ( 2 π z ) 2 l + 1 [ n 2 l + 1 ] {\displaystyle \ln \Gamma (z)=\left(z-{\frac {1}{2}}\right)\!\ln z-z+{\frac {1}{2}}\ln 2\pi +{\frac {1}{\pi }}\sum _{n=1}^{\infty }{\frac {1}{n\cdot n!}}\!\sum _{l=0}^{\lfloor n/2\rfloor }\!{\frac {(-1)^{l}(2l)!}{(2\pi z)^{2l+1}}}\left[{n \atop 2l+1}\right]} and Ψ ( z ) = ln ⁡ z − 1 2 z − 1 π z ∑ n = 1 ∞ 1 n ⋅ n ! ∑ l = 0 ⌊ n / 2 ⌋ ( − 1 ) l ( 2 l + 1 ) ! ( 2 π z ) 2 l + 1 [ n 2 l + 1 ] {\displaystyle \Psi (z)=\ln z-{\frac {1}{2z}}-{\frac {1}{\pi z}}\sum _{n=1}^{\infty }{\frac {1}{n\cdot n!}}\!\sum _{l=0}^{\lfloor n/2\rfloor }\!{\frac {(-1)^{l}(2l+1)!}{(2\pi z)^{2l+1}}}\left[{n \atop 2l+1}\right]} or even γ m = 1 2 δ m , 0 + ( − 1 ) m m ! π ∑ n = 1 ∞ 1 n ⋅ n ! 
∑ k = 0 ⌊ n / 2 ⌋ ( − 1 ) k ( 2 π ) 2 k + 1 [ 2 k + 2 m + 1 ] [ n 2 k + 1 ] {\displaystyle \gamma _{m}={\frac {1}{2}}\delta _{m,0}+{\frac {(-1)^{m}m!}{\pi }}\!\sum _{n=1}^{\infty }{\frac {1}{n\cdot n!}}\!\sum _{k=0}^{\lfloor n/2\rfloor }{\frac {(-1)^{k}}{(2\pi )^{2k+1}}}\left[{2k+2 \atop m+1}\right]\left[{n \atop 2k+1}\right]\,} where γ m are the Stieltjes constants and δ m ,0 represents the Kronecker delta function . Notice that this last identity immediately implies relations between the polylogarithm functions, the Stirling number exponential generating functions given above, and the Stirling-number-based power series for the generalized Nielsen polylogarithm functions. There are many notions of generalized Stirling numbers that may be defined (depending on application) in a number of differing combinatorial contexts. In so much as the Stirling numbers of the first kind correspond to the coefficients of the distinct polynomial expansions of the single factorial function , n ! = n ( n − 1 ) ( n − 2 ) ⋯ 2 ⋅ 1 {\displaystyle n!=n(n-1)(n-2)\cdots 2\cdot 1} , we may extend this notion to define triangular recurrence relations for more general classes of products. In particular, for any fixed arithmetic function f : N → C {\displaystyle f:\mathbb {N} \rightarrow \mathbb {C} } and symbolic parameters x , t {\displaystyle x,t} , related generalized factorial products of the form may be studied from the point of view of the classes of generalized Stirling numbers of the first kind defined by the following coefficients of the powers of x {\displaystyle x} in the expansions of ( x ) n , f , t {\displaystyle (x)_{n,f,t}} and then by the next corresponding triangular recurrence relation: These coefficients satisfy a number of analogous properties to those for the Stirling numbers of the first kind as well as recurrence relations and functional equations related to the f-harmonic numbers , F n ( r ) ( t ) := ∑ k ≤ n t k / f ( k ) r {\displaystyle F_{n}^{(r)}(t):=\sum _{k\leq n}t^{k}/f(k)^{r}} . [ 21 ] One special case of these bracketed coefficients corresponding to t ≡ 1 {\displaystyle t\equiv 1} allows us to expand the multiple factorial, or multifactorial functions as polynomials in n {\displaystyle n} . [ 22 ] The Stirling numbers of both kinds, the binomial coefficients , and the first and second-order Eulerian numbers are all defined by special cases of a triangular super-recurrence of the form for integers n , k ≥ 0 {\displaystyle n,k\geq 0} and where | n k | ≡ 0 {\displaystyle \left|{\begin{matrix}n\\k\end{matrix}}\right|\equiv 0} whenever n < 0 {\displaystyle n<0} or k < 0 {\displaystyle k<0} . In this sense, the form of the Stirling numbers of the first kind may also be generalized by this parameterized super-recurrence for fixed scalars α , β , γ , α ′ , β ′ , γ ′ {\displaystyle \alpha ,\beta ,\gamma ,\alpha ^{\prime },\beta ^{\prime },\gamma ^{\prime }} (not all zero).
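As a concrete check of the Gregory-coefficient identity n!·G_n = Σ_{l=0}^{n} s(n, l)/(l+1) stated above, the following Python sketch (illustrative, not from the source) generates the signed Stirling numbers by the usual recurrence and evaluates the sum in exact rational arithmetic, reproducing G_1 = 1/2, G_2 = −1/12, G_3 = 1/24, G_4 = −19/720 and G_5 = 3/160.

```python
from fractions import Fraction
from math import factorial

def signed_stirling1_row(n: int) -> list[int]:
    """Row s(n, 0..n) via the signed recurrence s(n+1, k) = s(n, k-1) - n*s(n, k)."""
    row = [1]
    for m in range(n):
        nxt = [0] * (m + 2)
        for k in range(m + 2):
            nxt[k] = (row[k - 1] if k >= 1 else 0) - m * (row[k] if k <= m else 0)
        row = nxt
    return row

for n in range(1, 6):
    s = signed_stirling1_row(n)
    G = sum(Fraction(s[l], l + 1) for l in range(n + 1)) / factorial(n)
    print(n, G)   # 1/2, -1/12, 1/24, -19/720, 3/160
```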
https://en.wikipedia.org/wiki/Stirling_numbers_of_the_first_kind
In mathematics , particularly in combinatorics , a Stirling number of the second kind (or Stirling partition number ) is the number of ways to partition a set of n objects into k non-empty subsets and is denoted by S ( n , k ) {\displaystyle S(n,k)} or { n k } {\displaystyle \textstyle \left\{{n \atop k}\right\}} . [ 1 ] Stirling numbers of the second kind occur in the field of mathematics called combinatorics and the study of partitions . They are named after James Stirling . The Stirling numbers of the first and second kind can be understood as inverses of one another when viewed as triangular matrices . This article is devoted to specifics of Stirling numbers of the second kind. Identities linking the two kinds appear in the article on Stirling numbers . The Stirling numbers of the second kind, written S ( n , k ) {\displaystyle S(n,k)} or { n k } {\displaystyle \lbrace \textstyle {n \atop k}\rbrace } or with other notations, count the number of ways to partition a set of n {\displaystyle n} labelled objects into k {\displaystyle k} nonempty unlabelled subsets. Equivalently, they count the number of different equivalence relations with precisely k {\displaystyle k} equivalence classes that can be defined on an n {\displaystyle n} element set. In fact, there is a bijection between the set of partitions and the set of equivalence relations on a given set. Obviously, as the only way to partition an n -element set into n parts is to put each element of the set into its own part, and the only way to partition a nonempty set into one part is to put all of the elements in the same part. Unlike Stirling numbers of the first kind , they can be calculated using a one-sum formula: [ 2 ] (see also Stirling numbers of the second kind for a proof of the latter formula) The Stirling numbers of the first kind may be characterized as the numbers that arise when one expresses powers of an indeterminate x in terms of the falling factorials [ 3 ] (In particular, ( x ) 0 = 1 because it is an empty product .) Stirling numbers of the second kind satisfy the relation Various notations have been used for Stirling numbers of the second kind. The brace notation { n k } {\textstyle \textstyle \lbrace {n \atop k}\rbrace } was used by Imanuel Marx and Antonio Salmeri in 1962 for variants of these numbers. [ 4 ] [ 5 ] This led Knuth to use it, as shown here, in the first volume of The Art of Computer Programming (1968). [ 6 ] [ 7 ] According to the third edition of The Art of Computer Programming , this notation was also used earlier by Jovan Karamata in 1935. [ 8 ] [ 9 ] The notation S ( n , k ) was used by Richard Stanley in his book Enumerative Combinatorics and also, much earlier, by many other writers. [ 6 ] The notations used on this page for Stirling numbers are not universal, and may conflict with notations in other sources. Since the Stirling number { n k } {\displaystyle \left\{{n \atop k}\right\}} counts set partitions of an n -element set into k parts, the sum over all values of k is the total number of partitions of a set with n members. This number is known as the n th Bell number . Analogously, the ordered Bell numbers can be computed from the Stirling numbers of the second kind via Below is a triangular array of values for the Stirling numbers of the second kind (sequence A008277 in the OEIS ): As with the binomial coefficients , this table could be extended to k > n , but the entries would all be 0. 
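The "one-sum formula" referred to above is the standard inclusion–exclusion expression S(n, k) = (1/k!) Σ_{j=0}^{k} (−1)^j C(k, j)(k−j)^n, which follows from the surjection-counting argument mentioned in the next section (the formula itself is not reproduced in this extract). A short Python sketch evaluates it and recovers the row for n = 5 together with the Bell number B_5 = 52.

```python
from math import comb, factorial

def stirling2(n: int, k: int) -> int:
    """Inclusion-exclusion formula S(n, k) = (1/k!) * sum_j (-1)^j C(k, j) (k - j)^n."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

print([stirling2(5, k) for k in range(6)])      # [0, 1, 15, 25, 10, 1]
print(sum(stirling2(5, k) for k in range(6)))   # 52, the 5th Bell number
```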
Stirling numbers of the second kind obey the recurrence relation (first discovered by Masanobu Saka in his 1782 Sanpō-Gakkai ): [ 11 ] with initial conditions For instance, the number 25 in column k = 3 and row n = 5 is given by 25 = 7 + (3×6), where 7 is the number above and to the left of 25, 6 is the number above 25 and 3 is the column containing the 6. To prove this recurrence, observe that a partition of the ⁠ n + 1 {\displaystyle n+1} ⁠ objects into k nonempty subsets either contains the ⁠ ( n + 1 ) {\displaystyle (n+1)} ⁠ -th object as a singleton or it does not. The number of ways that the singleton is one of the subsets is given by since we must partition the remaining n objects into the available ⁠ k − 1 {\displaystyle k-1} ⁠ subsets. In the other case the ⁠ ( n + 1 ) {\displaystyle (n+1)} ⁠ -th object belongs to a subset containing other objects. The number of ways is given by since we partition all objects other than the ⁠ ( n + 1 ) {\displaystyle (n+1)} ⁠ -th into k subsets, and then we are left with k choices for inserting object ⁠ n + 1 {\displaystyle n+1} ⁠ . Summing these two values gives the desired result. Another recurrence relation is given by { n k } = k n k ! − ∑ r = 1 k − 1 { n r } ( k − r ) ! . {\displaystyle \left\lbrace {\begin{matrix}n\\k\end{matrix}}\right\rbrace ={\frac {k^{n}}{k!}}-\sum _{r=1}^{k-1}{\frac {\left\lbrace {\begin{matrix}n\\r\end{matrix}}\right\rbrace }{(k-r)!}}.} which follows from evaluating ∑ r = 0 n { n r } ( x ) r = x n {\displaystyle \sum _{r=0}^{n}\left\{{n \atop r}\right\}(x)_{r}=x^{n}} at x = k {\displaystyle x=k} . It is also conjectured that for a fixed n {\displaystyle n} we have Here we start with recursively computing of { n n − 1 } {\displaystyle \left\{{n \atop n-1}\right\}} , then compute { n n − 2 } {\displaystyle \left\{{n \atop n-2}\right\}} and so on up to { n 1 } {\displaystyle \left\{{n \atop 1}\right\}} . Another conjecture is that for a fixed k {\displaystyle k} we have If you swap ( j − 2 ) ! {\displaystyle (j-2)!} from the first sum and ( − 1 ) j {\displaystyle (-1)^{j}} from the second, you will get similar conjectures, but for Stirling numbers of the first kind . Some simple identities include This is because dividing n elements into n − 1 sets necessarily means dividing it into one set of size 2 and n − 2 sets of size 1. Therefore we need only pick those two elements; and To see this, first note that there are 2 n ordered pairs of complementary subsets A and B . In one case, A is empty, and in another B is empty, so 2 n − 2 ordered pairs of subsets remain. Finally, since we want unordered pairs rather than ordered pairs we divide this last number by 2, giving the result above. Another explicit expansion of the recurrence-relation gives identities in the spirit of the above example. The table in section 6.1 of Concrete Mathematics provides a plethora of generalized forms of finite sums involving the Stirling numbers. Several particular finite sums relevant to this article include The Stirling numbers of the second kind are given by the explicit formula: This can be derived by using inclusion-exclusion to count the surjections from n to k and using the fact that the number of such surjections is k ! { n k } {\textstyle k!\left\{{n \atop k}\right\}} . 
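In code, the recurrence described above, S(n+1, k) = k·S(n, k) + S(n, k−1) with S(0, 0) = 1, reproduces the triangle and the worked example 25 = 7 + 3×6 (an illustrative Python sketch, not from the source).

```python
def stirling2_triangle(nmax: int) -> list[list[int]]:
    """Triangle of S(n, k) from S(n+1, k) = k*S(n, k) + S(n, k-1), S(0, 0) = 1."""
    tri = [[1]]
    for n in range(nmax):
        prev = tri[n] + [0]
        row = [0] * (n + 2)
        for k in range(1, n + 2):
            row[k] = k * prev[k] + prev[k - 1]
        tri.append(row)
    return tri

tri = stirling2_triangle(5)
print(tri[5])                                  # [0, 1, 15, 25, 10, 1]
print(tri[5][3] == tri[4][2] + 3 * tri[4][3])  # 25 = 7 + 3*6, the example above
```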
Additionally, this formula is a special case of the k th forward difference of the monomial x n {\displaystyle x^{n}} evaluated at x = 0: Because the Bernoulli polynomials may be written in terms of these forward differences, one immediately obtains a relation in the Bernoulli numbers : The evaluation of incomplete exponential Bell polynomial B n , k ( x 1 , x 2 ,...) on the sequence of ones equals a Stirling number of the second kind: Another explicit formula given in the NIST Handbook of Mathematical Functions is The parity of a Stirling number of the second kind is same as the parity of a related binomial coefficient : This relation is specified by mapping n and k coordinates onto the Sierpiński triangle . More directly, let two sets contain positions of 1's in binary representations of results of respective expressions: One can mimic a bitwise AND operation by intersecting these two sets: to obtain the parity of a Stirling number of the second kind in O (1) time. In pseudocode : where [ b ] {\displaystyle \left[b\right]} is the Iverson bracket . The parity of a central Stirling number of the second kind { 2 n n } {\displaystyle \textstyle \left\{{2n \atop n}\right\}} is odd if and only if n {\displaystyle n} is a fibbinary number , a number whose binary representation has no two consecutive 1s. [ 12 ] For a fixed integer n , the ordinary generating function for Stirling numbers of the second kind { n 0 } , { n 1 } , … {\displaystyle \left\{{n \atop 0}\right\},\left\{{n \atop 1}\right\},\ldots } is given by where T n ( x ) {\displaystyle T_{n}(x)} are Touchard polynomials . If one sums the Stirling numbers against the falling factorial instead, one can show the following identities, among others: and which has special case For a fixed integer k , the Stirling numbers of the second kind have rational ordinary generating function and have an exponential generating function given by A mixed bivariate generating function for the Stirling numbers of the second kind is If n ≥ 2 {\displaystyle n\geq 2} and 1 ≤ k ≤ n − 1 {\displaystyle 1\leq k\leq n-1} , then For fixed value of k , {\displaystyle k,} the asymptotic value of the Stirling numbers of the second kind as n → ∞ {\displaystyle n\rightarrow \infty } is given by If k = o ( n ) {\displaystyle k=o({\sqrt {n}})} (where o denotes the little o notation ) then A uniformly valid approximation also exists: for all k such that 1 < k < n , one has where v = n / k {\displaystyle v=n/k} , and G ∈ ( 0 , 1 ) {\displaystyle G\in (0,1)} is the unique solution to G = v e G − v {\displaystyle G=ve^{G-v}} . [ 15 ] Relative error is bounded by about 0.066 / n {\displaystyle 0.066/n} . For fixed n {\displaystyle n} , { n k } {\displaystyle \left\{{n \atop k}\right\}} is unimodal, that is, the sequence increases and then decreases. The maximum is attained for at most two consecutive values of k . That is, there is an integer k n {\displaystyle k_{n}} such that Looking at the table of values above, the first few values for k n {\displaystyle k_{n}} are 0 , 1 , 1 , 2 , 2 , 3 , 3 , 4 , 4 , 4 , 5 , … {\displaystyle 0,1,1,2,2,3,3,4,4,4,5,\ldots } When n {\displaystyle n} is large and the maximum value of the Stirling number can be approximated with If X is a random variable with a Poisson distribution with expected value λ, then its n- th moment is In particular, the n th moment of the Poisson distribution with expected value 1 is precisely the number of partitions of a set of size n , i.e., it is the n th Bell number (this fact is Dobiński's formula ). 
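The pseudocode for the O(1) parity test is not reproduced in this extract, but the fibbinary criterion for the central numbers is easy to verify directly. The Python sketch below (illustrative, not from the source) checks that S(2n, n) is odd exactly when n has no two consecutive 1 bits, using the bit test n & (n >> 1) == 0.

```python
from math import comb, factorial

def stirling2(n: int, k: int) -> int:
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def is_fibbinary(n: int) -> bool:
    """True if the binary representation of n contains no two adjacent 1 bits."""
    return n & (n >> 1) == 0

for n in range(1, 16):
    assert (stirling2(2 * n, n) % 2 == 1) == is_fibbinary(n)
print("S(2n, n) is odd exactly when n is fibbinary, for n = 1..15")
```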
Let the random variable X be the number of fixed points of a uniformly distributed random permutation of a finite set of size m . Then the n th moment of X is Note: The upper bound of summation is m , not n . In other words, the n th moment of this probability distribution is the number of partitions of a set of size n into no more than m parts. This is proved in the article on random permutation statistics , although the notation is a bit different. The Stirling numbers of the second kind can represent the total number of rhyme schemes for a poem of n lines. S ( n , k ) {\displaystyle S(n,k)} gives the number of possible rhyming schemes for n lines using k unique rhyming syllables. As an example, for a poem of 3 lines, there is 1 rhyme scheme using just one rhyme (aaa), 3 rhyme schemes using two rhymes (aab, aba, abb), and 1 rhyme scheme using three rhymes (abc). The r -Stirling number of the second kind { n k } r {\displaystyle \left\{{n \atop k}\right\}_{r}} counts the number of partitions of a set of n objects into k non-empty disjoint subsets, such that the first r elements are in distinct subsets. [ 16 ] These numbers satisfy the recurrence relation Some combinatorial identities and a connection between these numbers and context-free grammars can be found in [ 17 ] An r -associated Stirling number of the second kind is the number of ways to partition a set of n objects into k subsets, with each subset containing at least r elements. [ 18 ] It is denoted by S r ( n , k ) {\displaystyle S_{r}(n,k)} and obeys the recurrence relation The 2-associated numbers (sequence A008299 in the OEIS ) appear elsewhere as "Ward numbers" and as the magnitudes of the coefficients of Mahler polynomials . Denote the n objects to partition by the integers 1, 2, ..., n . Define the reduced Stirling numbers of the second kind, denoted S d ( n , k ) {\displaystyle S^{d}(n,k)} , to be the number of ways to partition the integers 1, 2, ..., n into k nonempty subsets such that all elements in each subset have pairwise distance at least d . That is, for any integers i and j in a given subset, it is required that | i − j | ≥ d {\displaystyle |i-j|\geq d} . It has been shown that these numbers satisfy (hence the name "reduced"). [ 19 ] Observe (both by definition and by the reduction formula), that S 1 ( n , k ) = S ( n , k ) {\displaystyle S^{1}(n,k)=S(n,k)} , the familiar Stirling numbers of the second kind.
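The rhyme-scheme interpretation can be made concrete by generating canonical schemes in which each new rhyme is labelled with the next unused letter (an illustrative Python sketch, not from the source); for three lines this yields exactly the 1 + 3 + 1 schemes listed above.

```python
def rhyme_schemes(lines: int) -> list[str]:
    """Canonical rhyme schemes: the first line is 'a', and each later line either
    reuses an earlier rhyme or introduces the next new letter."""
    schemes = ['a']
    for _ in range(lines - 1):
        extended = []
        for s in schemes:
            next_new = chr(ord(max(s)) + 1)           # the next unused rhyme letter
            for c in sorted(set(s)) + [next_new]:
                extended.append(s + c)
        schemes = extended
    return schemes

by_count = {}
for s in rhyme_schemes(3):
    by_count.setdefault(len(set(s)), []).append(s)
print(by_count)   # {1: ['aaa'], 2: ['aab', 'aba', 'abb'], 3: ['abc']}
```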
https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind
In mathematics , the Stirling polynomials are a family of polynomials that generalize important sequences of numbers appearing in combinatorics and analysis , which are closely related to the Stirling numbers , the Bernoulli numbers , and the generalized Bernoulli polynomials . There are multiple variants of the Stirling polynomial sequence considered below most notably including the Sheffer sequence form of the sequence, S k ( x ) {\displaystyle S_{k}(x)} , defined characteristically through the special form of its exponential generating function, and the Stirling (convolution) polynomials , σ n ( x ) {\displaystyle \sigma _{n}(x)} , which also satisfy a characteristic ordinary generating function and that are of use in generalizing the Stirling numbers (of both kinds) to arbitrary complex -valued inputs. We consider the " convolution polynomial " variant of this sequence and its properties second in the last subsection of the article. Still other variants of the Stirling polynomials are studied in the supplementary links to the articles given in the references. For nonnegative integers k , the Stirling polynomials, S k ( x ), are a Sheffer sequence for ( g ( t ) , f ¯ ( t ) ) := ( e − t , log ⁡ ( t 1 − e − t ) ) {\displaystyle (g(t),{\bar {f}}(t)):=\left(e^{-t},\log \left({\frac {t}{1-e^{-t}}}\right)\right)} [ 1 ] defined by the exponential generating function The Stirling polynomials are a special case of the Nørlund polynomials (or generalized Bernoulli polynomials ) [ 2 ] each with exponential generating function given by the relation S k ( x ) = B k ( x + 1 ) ( x + 1 ) {\displaystyle S_{k}(x)=B_{k}^{(x+1)}(x+1)} . The first 10 Stirling polynomials are given in the following table: Yet another variant of the Stirling polynomials is considered in [ 3 ] (see also the subsection on Stirling convolution polynomials below). In particular, the article by I. Gessel and R. P. Stanley defines the modified Stirling polynomial sequences, f k ( n ) := S ( n + k , n ) {\displaystyle f_{k}(n):=S(n+k,n)} and g k ( n ) := c ( n , n − k ) {\displaystyle g_{k}(n):=c(n,n-k)} where c ( n , k ) := ( − 1 ) n − k s ( n , k ) {\displaystyle c(n,k):=(-1)^{n-k}s(n,k)} are the unsigned Stirling numbers of the first kind , in terms of the two Stirling number triangles for non-negative integers n ≥ 1 , k ≥ 0 {\displaystyle n\geq 1,\ k\geq 0} . For fixed k ≥ 0 {\displaystyle k\geq 0} , both f k ( n ) {\displaystyle f_{k}(n)} and g k ( n ) {\displaystyle g_{k}(n)} are polynomials of the input n ∈ Z + {\displaystyle n\in \mathbb {Z} ^{+}} each of degree 2 k {\displaystyle 2k} and with leading coefficient given by the double factorial term ( 1 ⋅ 3 ⋅ 5 ⋯ ( 2 k − 1 ) ) / ( 2 k ) ! {\displaystyle (1\cdot 3\cdot 5\cdots (2k-1))/(2k)!} . Below B k ( x ) {\displaystyle B_{k}(x)} denote the Bernoulli polynomials and B k = B k ( 0 ) {\displaystyle B_{k}=B_{k}(0)} the Bernoulli numbers under the convention B 1 = B 1 ( 0 ) = − 1 2 ; {\displaystyle B_{1}=B_{1}(0)=-{\tfrac {1}{2}};} s m , n {\displaystyle s_{m,n}} denotes a Stirling number of the first kind ; and S m , n {\displaystyle S_{m,n}} denotes Stirling numbers of the second kind . Another variant of the Stirling polynomial sequence corresponds to a special case of the convolution polynomials studied by Knuth's article [ 5 ] and in the Concrete Mathematics reference. 
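The statement that f_k(n) = S(n+k, n) is a polynomial in n of degree 2k with leading coefficient (1·3·5⋯(2k−1))/(2k)! can be probed numerically: for such a polynomial the (2k)-th finite difference is the constant (2k−1)!!. The Python sketch below (illustrative, not from the source) checks this for k = 1, 2, 3.

```python
from math import comb, factorial

def stirling2(n: int, k: int) -> int:
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def finite_difference(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

for k in (1, 2, 3):
    values = [stirling2(n + k, n) for n in range(2 * k + 4)]   # f_k(n) at n = 0, 1, 2, ...
    diff = values
    for _ in range(2 * k):
        diff = finite_difference(diff)
    double_factorial = 1
    for odd in range(1, 2 * k, 2):
        double_factorial *= odd
    print(k, diff, double_factorial)   # the (2k)-th difference is constant, equal to (2k-1)!!
```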
We first define these polynomials through the Stirling numbers of the first kind as It follows that these polynomials satisfy the next recurrence relation given by These Stirling " convolution " polynomials may be used to define the Stirling numbers, [ x x − n ] {\displaystyle \scriptstyle {\left[{\begin{matrix}x\\x-n\end{matrix}}\right]}} and { x x − n } {\displaystyle \scriptstyle {\left\{{\begin{matrix}x\\x-n\end{matrix}}\right\}}} , for integers n ≥ 0 {\displaystyle n\geq 0} and arbitrary complex values of x {\displaystyle x} . The next table provides several special cases of these Stirling polynomials for the first few n ≥ 0 {\displaystyle n\geq 0} . This variant of the Stirling polynomial sequence has particularly nice ordinary generating functions of the following forms: More generally, if S t ( z ) {\displaystyle {\mathcal {S}}_{t}(z)} is a power series that satisfies ln ⁡ ( 1 − z S t ( z ) t − 1 ) = − z S t ( z ) t {\displaystyle \ln \left(1-z{\mathcal {S}}_{t}(z)^{t-1}\right)=-z{\mathcal {S}}_{t}(z)^{t}} , we have that We also have the related series identity [ 6 ] and the Stirling (Sheffer) polynomial related generating functions given by For integers 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} and r , s ∈ C {\displaystyle r,s\in \mathbb {C} } , these polynomials satisfy the two Stirling convolution formulas given by and When n , m ∈ N {\displaystyle n,m\in \mathbb {N} } , we also have that the polynomials, σ n ( m ) {\displaystyle \sigma _{n}(m)} , are defined through their relations to the Stirling numbers and their relations to the Bernoulli numbers given by
https://en.wikipedia.org/wiki/Stirling_polynomials
The Stobbe condensation entails the reaction of an aldehyde or ketone with an ester of succinic acid to generate an alkylidene succinic acid or related derivatives. [ 1 ] The reaction consumes one equivalent of metal alkoxide. Commonly, diethyl succinate is a component of the reaction. The usual product is the salt of the half-ester. The Stobbe condensation is named after its discoverer, Hans Stobbe, whose work involved the sodium ethoxide-induced condensation of acetone and diethyl succinate. [ 2 ] An example is the reaction of benzophenone with diethyl succinate. [ 3 ] A reaction mechanism that explains the formation of both an ester group and a carboxylic acid group is centered on a lactone intermediate ( 5 ). The Stobbe condensation is also illustrated by the synthesis of the drug tametraline. [ 4 ]
https://en.wikipedia.org/wiki/Stobbe_condensation
Stochastic ( / s t ə ˈ k æ s t ɪ k / ; from Ancient Greek στόχος ( stókhos ) ' aim, guess ' ) [ 1 ] is the property of being well-described by a random probability distribution . [ 1 ] Stochasticity and randomness are technically distinct concepts: the former refers to a modeling approach, while the latter describes phenomena; in everyday conversation, however, these terms are often used interchangeably . In probability theory , the formal concept of a stochastic process is also referred to as a random process . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Stochasticity is used in many different fields, including image processing , signal processing , computer science , information theory , telecommunications , [ 7 ] chemistry , [ 8 ] ecology , [ 9 ] neuroscience , [ 10 ] physics , [ 11 ] [ 12 ] [ 13 ] [ 14 ] and cryptography . [ 15 ] [ 16 ] It is also used in finance (e.g., stochastic oscillator ), due to seemingly random changes in the different markets within the financial sector and in medicine, linguistics, music, media, colour theory, botany, manufacturing and geomorphology. [ 17 ] [ 18 ] [ 19 ] The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", and stemming from a Greek word meaning "to aim at a mark, guess", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. [ 1 ] In his work on probability Ars Conjectandi , originally published in Latin in 1713, Jakob Bernoulli used the phrase "Ars Conjectandi sive Stochastice", which has been translated to "the art of conjecturing or stochastics". [ 20 ] This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz , [ 21 ] who in 1917 wrote in German the word Stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph L. Doob . [ 1 ] For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin , [ 22 ] [ 23 ] though the German term had been used earlier in 1931 by Andrey Kolmogorov . [ 24 ] In the early 1930s, Aleksandr Khinchin gave the first mathematical definition of a stochastic process as a family of random variables indexed by the real line. [ 25 ] [ 22 ] [ a ] Further fundamental work on probability theory and stochastic processes was done by Khinchin as well as other mathematicians such as Andrey Kolmogorov , Joseph Doob , William Feller , Maurice Fréchet , Paul Lévy , Wolfgang Doeblin , and Harald Cramér . [ 27 ] [ 28 ] Decades later Cramér referred to the 1930s as the "heroic period of mathematical probability theory". [ 28 ] In mathematics, the theory of stochastic processes is an important contribution to probability theory , [ 29 ] and continues to be an active topic of research for both theory and applications. [ 30 ] [ 31 ] [ 32 ] The word stochastic is used to describe other terms and objects in mathematics. Examples include a stochastic matrix , which describes a stochastic process known as a Markov process , and stochastic calculus, which involves differential equations and integrals based on stochastic processes such as the Wiener process , also called the Brownian motion process. One of the simplest continuous-time stochastic processes is Brownian motion . This was first observed by botanist Robert Brown while looking through a microscope at pollen grains in water. 
The Monte Carlo method is a stochastic method popularized by physics researchers Stanisław Ulam , Enrico Fermi , John von Neumann , and Nicholas Metropolis . [ 33 ] The use of randomness and the repetitive nature of the process are analogous to the activities conducted at a casino. Methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread. Perhaps the most famous early use was by Enrico Fermi in 1930, when he used a random method to calculate the properties of the newly discovered neutron . Monte Carlo methods were central to the simulations required for the Manhattan Project , though they were severely limited by the computational tools of the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb , and became popularized in the fields of physics , physical chemistry , and operations research . The RAND Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields. Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators , which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling. In biological systems the technique of stochastic resonance - introducing stochastic "noise" - has been found to help improve the signal-strength of the internal feedback-loops for balance and other vestibular communication. [ 34 ] The technique has helped diabetic and stroke patients with balance control. [ 35 ] Many biochemical events lend themselves to stochastic analysis. Gene expression , for example, has a stochastic component through the molecular collisions—e.g., during binding and unbinding of RNA polymerase to a gene promoter which contributes to bursts of transcription and super-Poissonian variability in cell-to-cell RNA distributions [ 36 ] —via the solution's Brownian motion . Simonton (2003, Psych Bulletin ) argues that creativity in science (of scientists) is a constrained stochastic behaviour such that new theories in all sciences are, at least in part, the product of a stochastic process . [ 37 ] Stochastic ray tracing is the application of Monte Carlo simulation to the computer graphics ray tracing algorithm. " Distributed ray tracing samples the integrand at many randomly chosen points and averages the results to obtain a better approximation. It is essentially an application of the Monte Carlo method to 3D computer graphics , and for this reason is also called Stochastic ray tracing ." [ citation needed ] Stochastic forensics analyzes computer crime by viewing computers as stochastic steps. In artificial intelligence , stochastic programs work by using probabilistic methods to solve problems, as in simulated annealing , stochastic neural networks , stochastic optimization , genetic algorithms , and genetic programming . A problem itself may be stochastic as well, as in planning under uncertainty. 
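As a minimal illustration of the Monte Carlo idea described above — repeated random sampling used to approximate a deterministic quantity — the following Python snippet estimates π; the function name and sample size are arbitrary choices for this example:

import random

def estimate_pi(num_samples=100_000):
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that falls inside the quarter circle."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi())  # typically prints a value near 3.14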
The financial markets use stochastic models to represent the seemingly random behaviour of various financial assets, including the random behavior of the price of one currency compared to that of another (such as the price of US Dollar compared to that of the Euro), and also to represent random behaviour of interest rates . These models are then used by financial analysts to value options on stock prices, bond prices, and on interest rates, see Markov models . Moreover, it is at the heart of the insurance industry . The formation of river meanders has been analyzed as a stochastic process. Non-deterministic approaches in language studies are largely inspired by the work of Ferdinand de Saussure , for example, in functionalist linguistic theory , which argues that competence is based on performance . [ 38 ] [ 39 ] This distinction in functional theories of grammar should be carefully distinguished from the langue and parole distinction. To the extent that linguistic knowledge is constituted by experience with language, grammar is argued to be probabilistic and variable rather than fixed and absolute. This conception of grammar as probabilistic and variable follows from the idea that one's competence changes in accordance with one's experience with language. Though this conception has been contested, [ 40 ] it has also provided the foundation for modern statistical natural language processing [ 41 ] and for theories of language learning and change. [ 42 ] Manufacturing processes are assumed to be stochastic processes . This assumption is largely valid for either continuous or batch manufacturing processes. Testing and monitoring of the process is recorded using a process control chart which plots a given process control parameter over time. Typically a dozen or many more parameters will be tracked simultaneously. Statistical models are used to define limit lines which define when corrective actions must be taken to bring the process back to its intended operational window. This same approach is used in the service industry where parameters are replaced by processes related to service level agreements. The marketing and the changing movement of audience tastes and preferences, as well as the solicitation of and the scientific appeal of certain film and television debuts (i.e., their opening weekends, word-of-mouth, top-of-mind knowledge among surveyed groups, star name recognition and other elements of social media outreach and advertising), are determined in part by stochastic modeling. A recent attempt at repeat business analysis was done by Japanese scholars [ citation needed ] and is part of the Cinematic Contagion Systems patented by Geneva Media Holdings, and such modeling has been used in data collection from the time of the original Nielsen ratings to modern studio and television test audiences. Stochastic effect, or "chance effect" is one classification of radiation effects that refers to the random, statistical nature of the damage. [ citation needed ] In contrast to the deterministic effect, severity is independent of dose. Only the probability of an effect increases with dose. [ citation needed ] In music , mathematical processes based on probability can generate stochastic elements. Stochastic processes may be used in music to compose a fixed piece or may be produced in performance. Stochastic music was pioneered by Iannis Xenakis , who coined the term stochastic music . 
Specific examples of mathematics, statistics, and physics applied to music composition are the use of the statistical mechanics of gases in Pithoprakta , statistical distribution of points on a plane in Diamorphoses , minimal constraints in Achorripsis , the normal distribution in ST/10 and Atrées , Markov chains in Analogiques , game theory in Duel and Stratégie , group theory in Nomos Alpha (for Siegfried Palm ), set theory in Herma and Eonta , [ 43 ] and Brownian motion in N'Shima . [ citation needed ] Xenakis frequently used computers to produce his scores, such as the ST series including Morsima-Amorsima and Atrées , and founded CEMAMu . Earlier, John Cage and others had composed aleatoric or indeterminate music , which is created by chance processes but does not have the strict mathematical basis (Cage's Music of Changes , for example, uses a system of charts based on the I-Ching ). Lejaren Hiller and Leonard Issacson used generative grammars and Markov chains in their 1957 Illiac Suite . Modern electronic music production techniques make these processes relatively simple to implement, and many hardware devices such as synthesizers and drum machines incorporate randomization features. Generative music techniques are therefore readily accessible to composers, performers, and producers. Stochastic social science theory is similar to systems theory in that events are interactions of systems, although with a marked emphasis on unconscious processes. The event creates its own conditions of possibility, rendering it unpredictable if simply for the number of variables involved. Stochastic social science theory can be seen as an elaboration of a kind of 'third axis' in which to situate human behavior alongside the traditional 'nature vs. nurture' opposition. See Julia Kristeva on her usage of the 'semiotic', Luce Irigaray on reverse Heideggerian epistemology, and Pierre Bourdieu on polythetic space for examples of stochastic social science theory. [ citation needed ] The term stochastic terrorism has come into frequent use [ 44 ] with regard to lone wolf terrorism . The terms "Scripted Violence" and "Stochastic Terrorism" are linked in a "cause <> effect" relationship. "Scripted violence" rhetoric can result in an act of "stochastic terrorism". The phrase "scripted violence" has been used in social science since at least 2002. [ 45 ] Author David Neiwert, who wrote the book Alt-America , told Salon interviewer Chauncey Devega: Scripted violence is where a person who has a national platform describes the kind of violence that they want to be carried out. He identifies the targets and leaves it up to the listeners to carry out this violence. It is a form of terrorism. It is an act and a social phenomenon where there is an agreement to inflict massive violence on a whole segment of society. Again, this violence is led by people in high-profile positions in the media and the government. They're the ones who do the scripting, and it is ordinary people who carry it out. Think of it like Charles Manson and his followers. Manson wrote the script; he didn't commit any of those murders. He just had his followers carry them out. [ 46 ] When color reproductions are made, the image is separated into its component colors by taking multiple photographs filtered for each color. One resultant film or plate represents each of the cyan, magenta, yellow, and black data. 
Color printing is a binary system, where ink is either present or not present, so all color separations to be printed must be translated into dots at some stage of the work-flow. Traditional line screens which are amplitude modulated had problems with moiré but were used until stochastic screening became available. A stochastic (or frequency modulated ) dot pattern creates a sharper image.
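A crude way to see what stochastic (frequency-modulated) screening does is a random-threshold halftone, sketched below in Python; this toy example only illustrates the idea of randomly placed, equally sized dots and ignores the blue-noise masks used in real stochastic screening:

import random

def fm_halftone(gray_levels):
    """Toy frequency-modulated (stochastic) screening: each pixel is printed
    as a dot with probability equal to its darkness, so darker areas receive
    more, randomly placed, dots.
    gray_levels: 2-D list of values in [0, 1], where 1.0 is full ink coverage."""
    return [[1 if random.random() < level else 0 for level in row]
            for row in gray_levels]

# A small gradient from light (0.1) to dark (0.9) ink coverage.
image = [[0.1, 0.3, 0.5, 0.7, 0.9] for _ in range(4)]
for row in fm_halftone(image):
    print(row)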
https://en.wikipedia.org/wiki/Stochastic
The stochastic Gronwall inequality is a generalization of Gronwall's inequality . It has been used to prove the well-posedness of path-dependent stochastic differential equations under local monotonicity and coercivity assumptions with respect to the supremum norm. [ 1 ] [ 2 ] Let $X(t),\,t\geq 0$ be a non-negative right-continuous $({\mathcal {F}}_{t})_{t\geq 0}$ - adapted process . Assume that $A:[0,\infty )\to [0,\infty )$ is a deterministic non-decreasing càdlàg function with $A(0)=0$ , and let $H(t),\,t\geq 0$ be a non-decreasing càdlàg adapted process starting from $H(0)\geq 0$ . Further, let $M(t),\,t\geq 0$ be an $({\mathcal {F}}_{t})_{t\geq 0}$ - local martingale with $M(0)=0$ and càdlàg paths. Assume that for all $t\geq 0$ ,
$$X(t)\leq \int _{0}^{t}X^{*}(u^{-})\,dA(u)+M(t)+H(t),$$
where $X^{*}(u):=\sup _{r\in [0,u]}X(r)$ , and define $c_{p}={\frac {p^{-p}}{1-p}}$ . Then, for $p\in (0,1)$ and $T>0$ , explicit estimates (involving $c_{p}$ , $A$ , and $H$ ) hold. [ 1 ] [ 2 ] The result is proven by means of Lenglart's inequality . [ 1 ]
https://en.wikipedia.org/wiki/Stochastic_Gronwall_inequality
Stochastic calculus is a branch of mathematics that operates on stochastic processes . It allows a consistent theory of integration to be defined for integrals of stochastic processes with respect to stochastic processes. This field was created and started by the Japanese mathematician Kiyosi Itô during World War II . The best-known stochastic process to which stochastic calculus is applied is the Wiener process (named in honor of Norbert Wiener ), which is used for modeling Brownian motion as described by Louis Bachelier in 1900 and by Albert Einstein in 1905 and other physical diffusion processes in space of particles subject to random forces. Since the 1970s, the Wiener process has been widely applied in financial mathematics and economics to model the evolution in time of stock prices and bond interest rates. The main flavours of stochastic calculus are the Itô calculus and its variational relative the Malliavin calculus . For technical reasons the Itô integral is the most useful for general classes of processes, but the related Stratonovich integral is frequently useful in problem formulation (particularly in engineering disciplines). The Stratonovich integral can readily be expressed in terms of the Itô integral, and vice versa. The main benefit of the Stratonovich integral is that it obeys the usual chain rule and therefore does not require Itô's lemma . This enables problems to be expressed in a coordinate system invariant form, which is invaluable when developing stochastic calculus on manifolds other than R n . The dominated convergence theorem does not hold for the Stratonovich integral; consequently it is very difficult to prove results without re-expressing the integrals in Itô form. The Itô integral is central to the study of stochastic calculus. The integral ∫ H d X {\displaystyle \int H\,dX} is defined for a semimartingale X and locally bounded predictable process H . [ citation needed ] The Stratonovich integral or Fisk–Stratonovich integral of a semimartingale X {\displaystyle X} against another semimartingale Y can be defined in terms of the Itô integral as where [ X , Y ] t c denotes the optional quadratic covariation of the continuous parts of X and Y , which is the optional quadratic covariation minus the jumps of the processes X {\displaystyle X} and Y {\displaystyle Y} , i.e. The alternative notation is also used to denote the Stratonovich integral. An important application of stochastic calculus is in mathematical finance , in which asset prices are often assumed to follow stochastic differential equations . For example, the Black–Scholes model prices options as if they follow a geometric Brownian motion , illustrating the opportunities and risks from applying stochastic calculus. Besides the classical Itô and Fisk–Stratonovich integrals, many other notions of stochastic integrals exist, such as the Hitsuda–Skorokhod integral , the Marcus integral, and the Ogawa integral .
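The relation between the Itô and Stratonovich integrals can be checked numerically: along a simulated Brownian path, the left-endpoint (Itô) Riemann sum for the integral of W dW approaches (W_T^2 - T)/2, while the midpoint (Stratonovich) sum gives W_T^2/2, in line with the ordinary chain rule. The Python sketch below is illustrative only; the step count and seed are arbitrary:

import math
import random

def ito_vs_stratonovich(T=1.0, n=100_000, seed=42):
    """Approximate the integral of W dW over [0, T] with left-endpoint (Ito)
    and midpoint (Stratonovich) Riemann sums along one Brownian path."""
    rng = random.Random(seed)
    dt = T / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))  # Brownian increments
    ito = sum(w[k] * (w[k + 1] - w[k]) for k in range(n))
    strat = sum(0.5 * (w[k] + w[k + 1]) * (w[k + 1] - w[k]) for k in range(n))
    wT = w[-1]
    print("Ito sum:         ", ito, " expected:", 0.5 * (wT ** 2 - T))
    print("Stratonovich sum:", strat, " expected:", 0.5 * wT ** 2)

ito_vs_stratonovich()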
https://en.wikipedia.org/wiki/Stochastic_calculus
A stochastic differential equation ( SDE ) is a differential equation in which one or more of the terms is a stochastic process , [ 1 ] resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices , [ 2 ] random growth models [ 3 ] or physical systems that are subjected to thermal fluctuations . SDEs have a random differential that is in the most basic case random white noise calculated as the distributional derivative of a Brownian motion or more generally a semimartingale . However, other types of random behaviour are possible, such as jump processes like Lévy processes [ 4 ] or semimartingales with jumps. Stochastic differential equations are in general neither differential equations nor random differential equations . Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds . [ 5 ] [ 6 ] [ 7 ] [ 8 ] Stochastic differential equations originated in the theory of Brownian motion , in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modeling Brownian motion in 1900, giving a very early example of a stochastic differential equation now known as Bachelier model . Some of these early examples were linear stochastic differential equations, also called Langevin equations after French physicist Langevin , describing the motion of a harmonic oscillator subject to a random force. The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô , who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Stratonovich , leading to a calculus similar to ordinary calculus. The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as continuous time limit of the corresponding stochastic difference equations . This understanding of SDEs is ambiguous and must be complemented by a proper mathematical definition of the corresponding integral. [ 1 ] [ 3 ] Such a mathematical definition was first proposed by Kiyosi Itô in the 1940s, leading to what is known today as the Itô calculus . Another construction was later proposed by Russian physicist Stratonovich , leading to what is known as the Stratonovich integral . The Itô integral and Stratonovich integral are related, but different, objects and the choice between them depends on the application considered. The Itô calculus is based on the concept of non-anticipativeness or causality, which is natural in applications where the variable is time. The Stratonovich calculus, on the other hand, has rules which resemble ordinary calculus and has intrinsic geometric properties which render it more natural when dealing with geometric problems such as random motion on manifolds , although it is possible and in some cases preferable to model random motion on manifolds through Itô SDEs, [ 6 ] for example when trying to optimally approximate SDEs on submanifolds. [ 9 ] An alternative view on SDEs is the stochastic flow of diffeomorphisms. 
This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation , an equation describing the time evolution of probability distribution functions . The generalization of the Fokker–Planck evolution to temporal evolution of differential forms is provided by the concept of stochastic evolution operator . In physical science, there is an ambiguity in the usage of the term "Langevin SDEs" . While Langevin SDEs can be of a more general form , this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure, [ 10 ] leading to a N=2 supersymmetric model closely related to supersymmetric quantum mechanics . From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic . Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; [ 1 ] [ 3 ] thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus . Each of the two has advantages and disadvantages, and newcomers are often confused whether the one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003) [ 3 ] and conveniently, one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. [ 1 ] [ 3 ] Still, one must be careful which calculus to use when the SDE is initially written down. Numerical methods for solving stochastic differential equations [ 11 ] include the Euler–Maruyama method , Milstein method , Runge–Kutta method (SDE) , Rosenbrock method, [ 12 ] and methods based on different representations of iterated stochastic integrals. [ 13 ] [ 14 ] In physics, SDEs have wide applicability ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems, in which quantum effects are either unimportant or can be taken into account as perturbations. SDEs can be viewed as a generalization of the dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence. There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs: where x ∈ X {\displaystyle x\in X} is the position in the system in its phase (or state) space , X {\displaystyle X} , assumed to be a differentiable manifold, the F ∈ T X {\displaystyle F\in TX} is a flow vector field representing deterministic law of evolution, and g α ∈ T X {\displaystyle g_{\alpha }\in TX} is a set of vector fields that define the coupling of the system to Gaussian white noise, ξ α {\displaystyle \xi ^{\alpha }} . If X {\displaystyle X} is a linear space and g {\displaystyle g} are constants, the system is said to be subject to additive noise, otherwise it is said to be subject to multiplicative noise. 
For additive noise, the Itô and Stratonovich forms of the SDE generate the same solution, and it is not important which definition is used to solve the SDE. For multiplicative noise SDEs the Itô and Stratonovich forms of the SDE are different, and care should be used in mapping between them. [ 15 ] For a fixed configuration of noise, SDE has a unique solution differentiable with respect to the initial condition. [ 16 ] Nontriviality of stochastic case shows up when one tries to average various objects of interest over noise configurations. In this sense, an SDE is not a uniquely defined entity when noise is multiplicative and when the SDE is understood as a continuous time limit of a stochastic difference equation . In this case, SDE must be complemented by what is known as "interpretations of SDE" such as Itô or a Stratonovich interpretations of SDEs. Nevertheless, when SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to Stratonovich approach to a continuous time limit of a stochastic difference equation. In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation . It tells how the probability distribution function evolves in time similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include the path integration that draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables) or by writing down ordinary differential equations for the statistical moments of the probability distribution function. [ citation needed ] The notation used in probability theory (and in many applications of probability theory, for instance in signal processing with the filtering problem and in mathematical finance ) is slightly different. It is also the notation used in publications on numerical methods for solving stochastic differential equations. This notation makes the exotic nature of the random function of time ξ α {\displaystyle \xi ^{\alpha }} in the physics formulation more explicit. In strict mathematical terms, ξ α {\displaystyle \xi ^{\alpha }} cannot be chosen as an ordinary function, but only as a generalized function . The mathematical formulation treats this complication with less ambiguity than the physics formulation. A typical equation is of the form where B {\displaystyle B} denotes a Wiener process (standard Brownian motion). This equation should be interpreted as an informal way of expressing the corresponding integral equation The equation above characterizes the behavior of the continuous time stochastic process X t as the sum of an ordinary Lebesgue integral and an Itô integral . A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length δ the stochastic process X t changes its value by an amount that is normally distributed with expectation μ ( X t , t ) δ and variance σ ( X t , t ) 2 δ and is independent of the past behavior of the process. 
This is so because the increments of a Wiener process are independent and normally distributed. The function μ is referred to as the drift coefficient, while σ is called the diffusion coefficient. The stochastic process X t is called a diffusion process , and satisfies the Markov property . [ 1 ] The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution [ 1 ] Both require the existence of a process X t that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space ( Ω , F , P {\displaystyle \Omega ,\,{\mathcal {F}},\,P} ). A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space. The Yamada–Watanabe theorem makes a connection between the two. An important example is the equation for geometric Brownian motion which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model [ 2 ] of financial mathematics. Generalizing the geometric Brownian motion, it is also possible to define SDEs admitting strong solutions and whose distribution is a convex combination of densities coming from different geometric Brownian motions or Black Scholes models, obtaining a single SDE whose solutions is distributed as a mixture dynamics of lognormal distributions of different Black Scholes models. [ 2 ] [ 17 ] [ 18 ] [ 19 ] This leads to models that can deal with the volatility smile in financial mathematics. The simpler SDE called arithmetic Brownian motion [ 3 ] was used by Louis Bachelier as the first model for stock prices in 1900, known today as Bachelier model . There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process X t , but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X , is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depends only on present and past values of X , the defining equation is called a stochastic delay differential equation. A generalization of stochastic differential equations with the Fisk-Stratonovich integral to semimartingales with jumps are the SDEs of Marcus type . The Marcus integral is an extension of McShane's stochastic calculus. [ 20 ] An innovative application in stochastic finance derives from the usage of the equation for Ornstein–Uhlenbeck process which is the equation for the dynamics of the return of the price of a stock under the hypothesis that returns display a Log-normal distribution . Under this hypothesis, the methodologies developed by Marcello Minenna determines prediction interval able to identify abnormal return that could hide market abuse phenomena. [ 21 ] [ 22 ] More generally one can extend the theory of stochastic calculus onto differential manifolds and for this purpose one uses the Fisk-Stratonovich integral. 
Consider a manifold M {\displaystyle M} , some finite-dimensional vector space E {\displaystyle E} , a filtered probability space ( Ω , F , ( F t ) t ∈ R + , P ) {\displaystyle (\Omega ,{\mathcal {F}},({\mathcal {F}}_{t})_{t\in \mathbb {R} _{+}},P)} with ( F t ) t ∈ R + {\displaystyle ({\mathcal {F}}_{t})_{t\in \mathbb {R} _{+}}} satisfying the usual conditions and let M ^ = M ∪ { ∞ } {\displaystyle {\widehat {M}}=M\cup \{\infty \}} be the one-point compactification and x 0 {\displaystyle x_{0}} be F 0 {\displaystyle {\mathcal {F}}_{0}} -measurable. A stochastic differential equation on M {\displaystyle M} written is a pair ( A , Z ) {\displaystyle (A,Z)} , such that For each x ∈ M {\displaystyle x\in M} the map A ( x ) : E → T x M {\displaystyle A(x):E\to T_{x}M} is linear and A ( ⋅ ) e ∈ Γ ( T M ) {\displaystyle A(\cdot )e\in \Gamma (TM)} for each e ∈ E {\displaystyle e\in E} . A solution to the SDE on M {\displaystyle M} with initial condition X 0 = x 0 {\displaystyle X_{0}=x_{0}} is a continuous { F t } {\displaystyle \{{\mathcal {F}}_{t}\}} -adapted M {\displaystyle M} -valued process ( X t ) t < ζ {\displaystyle (X_{t})_{t<\zeta }} up to life time ζ {\displaystyle \zeta } , s.t. for each test function f ∈ C c ∞ ( M ) {\displaystyle f\in C_{c}^{\infty }(M)} the process f ( X ) {\displaystyle f(X)} is a real-valued semimartingale and for each stopping time τ {\displaystyle \tau } with 0 ≤ τ < ζ {\displaystyle 0\leq \tau <\zeta } the equation holds P {\displaystyle P} -almost surely, where ( d f ) X : T x M → T f ( x ) M {\displaystyle (df)_{X}:T_{x}M\to T_{f(x)}M} is the differential at X {\displaystyle X} . It is a maximal solution if the life time is maximal, i.e., P {\displaystyle P} -almost surely. It follows from the fact that f ( X ) {\displaystyle f(X)} for each test function f ∈ C c ∞ ( M ) {\displaystyle f\in C_{c}^{\infty }(M)} is a semimartingale, that X {\displaystyle X} is a semimartingale on M {\displaystyle M} . Given a maximal solution we can extend the time of X {\displaystyle X} onto full R + {\displaystyle \mathbb {R} _{+}} and after a continuation of f {\displaystyle f} on M ^ {\displaystyle {\widehat {M}}} we get up to indistinguishable processes. [ 23 ] Although Stratonovich SDEs are the natural choice for SDEs on manifolds, given that they satisfy the chain rule and that their drift and diffusion coefficients behave as vector fields under changes of coordinates, there are cases where Ito calculus on manifolds is preferable. A theory of Ito calculus on manifolds was first developed by Laurent Schwartz through the concept of Schwartz morphism, [ 6 ] see also the related 2-jet interpretation of Ito SDEs on manifolds based on the jet-bundle. [ 8 ] This interpretation is helpful when trying to optimally approximate the solution of an SDE given on a large space with the solutions of an SDE given on a submanifold of that space, [ 9 ] in that a Stratonovich based projection does not result to be optimal. This has been applied to the filtering problem , leading to optimal projection filters. [ 9 ] Usually the solution of an SDE requires a probabilistic setting, as the integral implicit in the solution is a stochastic integral. If it were possible to deal with the differential equation path by path, one would not need to define a stochastic integral and one could develop a theory independently of probability theory. 
This points to considering the SDE as a single deterministic differential equation for every ω ∈ Ω {\displaystyle \omega \in \Omega } , where Ω {\displaystyle \Omega } is the sample space in the given probability space ( Ω , F , P {\displaystyle \Omega ,\,{\mathcal {F}},\,P} ). However, a direct path-wise interpretation of the SDE is not possible, as the Brownian motion paths have unbounded variation and are nowhere differentiable with probability one, so that there is no naive way to give meaning to terms like d B t ( ω ) {\displaystyle \mathrm {d} B_{t}(\omega )} , precluding also a naive path-wise definition of the stochastic integral as an integral against every single d B t ( ω ) {\displaystyle \mathrm {d} B_{t}(\omega )} . However, motivated by the Wong-Zakai result [ 24 ] for limits of solutions of SDEs with regular noise and using rough paths theory, while adding a chosen definition of iterated integrals of Brownian motion, it is possible to define a deterministic rough integral for every single ω ∈ Ω {\displaystyle \omega \in \Omega } that coincides for example with the Ito integral with probability one for a particular choice of the iterated Brownian integral. [ 24 ] Other definitions of the iterated integral lead to deterministic pathwise equivalents of different stochastic integrals, like the Stratonovich integral. This has been used for example in financial mathematics to price options without probability. [ 25 ] As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n - dimensional Euclidean space R n and driven by an m -dimensional Brownian motion B ; the proof may be found in Øksendal (2003, §5.2). [ 3 ] Let T > 0, and let be measurable functions for which there exist constants C and D such that for all t ∈ [0, T ] and all x and y ∈ R n , where Let Z be a random variable that is independent of the σ -algebra generated by B s , s ≥ 0, and with finite second moment : Then the stochastic differential equation/initial value problem has a P- almost surely unique t -continuous solution ( t , ω ) ↦ X t ( ω ) such that X is adapted to the filtration F t Z generated by Z and B s , s ≤ t , and The stochastic differential equation above is only a special case of a more general form where More generally one can also look at stochastic differential equations on manifolds . Whether the solution of this equation explodes depends on the choice of α {\displaystyle \alpha } . Suppose α {\displaystyle \alpha } satisfies some local Lipschitz condition, i.e., for t ≥ 0 {\displaystyle t\geq 0} and some compact set K ⊂ U {\displaystyle K\subset U} and some constant L ( t , K ) {\displaystyle L(t,K)} the condition where | ⋅ | {\displaystyle |\cdot |} is the Euclidean norm. This condition guarantees the existence and uniqueness of a so-called maximal solution . Suppose α {\displaystyle \alpha } is continuous and satisfies the above local Lipschitz condition and let F : Ω → U {\displaystyle F:\Omega \to U} be some initial condition, meaning it is a measurable function with respect to the initial σ-algebra. Let ζ : Ω → R ¯ + {\displaystyle \zeta :\Omega \to {\overline {\mathbb {R} }}_{+}} be a predictable stopping time with ζ > 0 {\displaystyle \zeta >0} almost surely. 
A U {\displaystyle U} -valued semimartingale ( Y t ) t < ζ {\displaystyle (Y_{t})_{t<\zeta }} is called a maximal solution of with life time ζ {\displaystyle \zeta } if ζ {\displaystyle \zeta } is also a so-called explosion time . Explicitly solvable SDEs include: [ 11 ] where for a given differentiable function f {\displaystyle f} is equivalent to the Stratonovich SDE which has a general solution where for a given differentiable function f {\displaystyle f} is equivalent to the Stratonovich SDE which is reducible to where Y t = h ( X t ) {\displaystyle Y_{t}=h(X_{t})} where h {\displaystyle h} is defined as before. Its general solution is In supersymmetric theory of SDEs, stochastic dynamics is defined via stochastic evolution operator acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess topological supersymmetry which represents the preservation of the continuity of the phase space by continuous time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomenon known across disciplines as chaos , turbulence , self-organized criticality etc. and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect , 1/f and crackling noises, and scale-free statistics of earthquakes, neuroavalanches, solar flares etc.
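The heuristic interpretation given earlier — over a short interval δ the process moves by a normally distributed amount with mean μ(X_t, t)δ and variance σ(X_t, t)²δ — is exactly what the Euler–Maruyama method, listed above among the numerical schemes, turns into an algorithm. The following Python sketch simulates one path of geometric Brownian motion dX = μX dt + σX dB; the parameter values are arbitrary illustrations, not defaults of any particular library:

import math
import random

def euler_maruyama_gbm(x0=1.0, mu=0.05, sigma=0.2, T=1.0, n=1000, seed=1):
    """Simulate one path of geometric Brownian motion
    dX = mu * X dt + sigma * X dB with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    dt = T / n
    x = x0
    path = [x]
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))     # Brownian increment ~ N(0, dt)
        x = x + mu * x * dt + sigma * x * dB   # drift step + diffusion step
        path.append(x)
    return path

path = euler_maruyama_gbm()
print("X_0 =", path[0], " X_T =", path[-1])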
https://en.wikipedia.org/wiki/Stochastic_differential_equation
The stochastic empirical loading and dilution model ( SELDM ) [ 1 ] [ 2 ] [ 3 ] is a stormwater quality model. SELDM is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. Although SELDM is nominally a highway runoff model, it can be used to estimate flows, concentrations, and loads of runoff-quality constituents from other land-use areas as well. Because SELDM was developed by the U.S. Geological Survey, the model, source code, and all related documentation are provided free of copyright restrictions, in accordance with U.S. copyright law and the USGS Software User Rights Notice. SELDM is widely used to assess the potential effect of runoff from highways, bridges, and developed areas on receiving-water quality with and without the use of mitigation measures. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] Stormwater practitioners evaluating highway runoff commonly use data from the Highway Runoff Database (HRDB) with SELDM to assess the risks for adverse effects of runoff on receiving waters. [ 13 ] [ 14 ] [ 15 ] [ 16 ] SELDM is a stochastic mass-balance model. [ 17 ] [ 18 ] [ 19 ] A mass-balance approach is commonly applied to estimate the concentrations and loads of water-quality constituents in receiving waters downstream of an urban or highway-runoff outfall. In a mass-balance model, the loads from the upstream basin and the runoff source area are added to calculate the discharge, concentration, and load in the receiving water downstream of the discharge point. SELDM can perform a stream-basin analysis and a lake-basin analysis. The stream-basin analysis is a stochastic mass-balance analysis based on multi-year simulations that include hundreds to thousands of runoff events. SELDM generates storm-event values for the site of interest (the highway site) and the upstream receiving stream to calculate flows, concentrations, and loads in the receiving stream downstream of the stormwater outfall. The lake-basin analysis also is a stochastic multi-year mass-balance analysis. It uses the highway loads that occur during runoff periods and the total annual loads from the lake basin to calculate annual loads to and from the lake, and it uses the volume of the lake and pollutant-specific attenuation factors to calculate a population of average-annual lake concentrations. The annual flows and loads SELDM calculates for the stream and lake analyses also can be used to estimate total maximum daily loads (TMDLs) for the site of interest and the upstream lake basin. [ 13 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] The TMDL can be based on the average of the annual loads because the product of the average load and the number of years of record is the total load for that (simulated) period of record.
The variability in annual values can be used to estimate the risk of exceedance and the margin of safety for the TMDL analysis. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. [ 1 ] SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physicochemical equations. [ citation needed ] SELDM is a lumped-parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. [ 1 ] Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped-parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water-quality and impervious-fraction data are available. [ citation needed ] SELDM is easy to use because it has a simple graphical user interface and because much of the information and data needed to run SELDM are embedded in the model. [ 1 ] SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from national datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. Information and data from hundreds to thousands of sites across the country were compiled to facilitate use of SELDM. [ 24 ] [ 25 ] [ 26 ] [ 27 ] Most of the necessary input data are obtained by defining the location of the site of interest and five simple basin properties. These basin properties are the drainage area, the basin length, the basin slope, the impervious fraction, and the basin development factor. [ 1 ] [ 28 ] [ 29 ] SELDM models the potential effect of mitigation measures by using Monte Carlo methods with statistics that approximate the net effects of structural and nonstructural best management practices (BMPs). [ 1 ] [ 13 ] [ 30 ] [ 31 ] Structural BMPs are defined as the components of the drainage pathway between the source of runoff and a stormwater discharge location that affect the volume, timing, or quality of runoff. SELDM uses a simple stochastic statistical model of BMP performance to develop planning-level estimates of runoff-event characteristics.
This statistical approach can be used to represent a single BMP or an assemblage of BMPs. The SELDM BMP-treatment module has provisions for stochastic modeling of three stormwater treatments: volume reduction, hydrograph extension, and water-quality treatment. In SELDM, these three treatment variables are modeled by using the trapezoidal distribution [ 32 ] and the rank correlation [ 33 ] with the associated highway-runoff variables. This report describes methods for calculating the trapezoidal-distribution statistics and rank correlation coefficients for stochastic modeling of volume reduction, hydrograph extension, and water-quality treatment by structural stormwater BMPs and provides the calculated values for these variables. These statistics are different from the statistics commonly used to characterize or compare BMPs. They are designed to provide a stochastic transfer function to approximate the quantity, duration, and quality of BMP effluent given the associated inflow values for a population of storm events. [ citation needed ] SELDM was developed as a Microsoft Access ® database software application to facilitate storage, handling, and use of the hydrologic dataset with a simple graphical user interface (GUI). [ 1 ] The program's menu-driven GUI uses standard Microsoft Visual Basic for Applications ® (VBA) interface controls to facilitate entry, processing, and output of data. Appendix 4 of the SELDM manual [ 1 ] has detailed instructions for using the GUI. The SELDM user interface has one or more GUI forms that are used to enter four categories of input data, which include documentation, site and region information, hydrologic statistics, and water-quality data. The documentation data include information about the analyst, the project, and the analysis. The site and region data include the highway-site characteristics, the ecoregions , the upstream-basin characteristics, and, if a lake analysis is selected, the lake-basin characteristics. The hydrologic data include precipitation, streamflow, and runoff-coefficient statistics. The water-quality data include highway-runoff-quality statistics, upstream-water-quality statistics, downstream-water-quality definitions, and BMP-performance statistics. There also is a GUI form for running the model and accessing the distinct set of output files. The SELDM interface is designed to populate the database with data and statistics for the analysis and to specify index variables that are used by the program to query the database when SELDM is run. It is necessary to step through the input forms each time an analysis is run. [ citation needed ] The results of each SELDM analysis are written to 5–10 output files, depending on the options that were selected during the analysis-specification process. The five output files that are created for every model run are the output documentation, highway-runoff quality, annual highway runoff, precipitation events, and stormflow file. If the Stream Basin or Stream and Lake Basin output options are selected, then the prestorm streamflow and dilution factor files also are created. If these same two output options are selected and, in addition, one or more downstream water-quality pairs are defined by using the water-quality menu, then the upstream water-quality and downstream water-quality output files also are created by SELDM. 
If the Stream and Lake Basin Output or Lake Basin Output option is selected, and one or more downstream water-quality pairs are defined by using the water-quality menu, then the Lake Analysis output file is created when the Lake Basin Analysis is run. The output files are written as tab-delimited ASCII text files in a relational database (RDB) format that can be imported into many software packages. This output is designed to facilitate post-modeling analysis and presentation of results. [ citation needed ] The benefit of the Monte Carlo analysis is not to decrease uncertainty in the input statistics, but to represent the different combinations of the variables that determine potential risks of water-quality excursions. SELDM provides a method for rapid assessment of information that is otherwise difficult or impossible to obtain because it models the interactions among hydrologic variables (with different probability distributions) that result in a population of values that represent likely long-term outcomes from runoff processes and the potential effects of different mitigation measures. SELDM also provides the means for rapidly doing sensitivity analyses to determine the potential effects of different input assumptions on the risks for water-quality excursions. SELDM produces a population of storm-event and annual values to address the questions about the potential frequency, magnitude, and duration of water-quality excursions. The output represents a collection of random events rather than a time series. Each storm that is generated in SELDM is identified by sequence number and annual-load accounting year. The model generates each storm randomly; there is no serial correlation, and the order of storms does not reflect seasonal patterns. The annual-load accounting years, which are just random collections of events generated with the sum of storm interevent times less than or equal to a year, are used to generate annual highway flows and loads for TMDL analysis and the lake basin analysis. [ citation needed ] In 2019, the USGS developed a model post processor for SELDM to facilitate analysis and graphing of results from SELDM simulations; that software, known as InterpretSELDM, is available in the public domain on a USGS ScienceBase site. [ 34 ] SELDM was developed between 2010 and 2013 and was published as version 1.0.0 in March 2013. A small problem with the algorithm used to calculate upstream and lake-basin transport curves was discovered and version 1.0.1 was released in July 2013. Version 1.0.2 was released in June, 2016 to use the Cunnane plotting position formula for all output files. Version 1.0.3 was released in July, 2018 to address issues with load calculations for constituents with concentrations of nanograms per liter or picograms per liter and to address other sundry issues. Version 1.1.0 was released in May 2021 to add batch processing, change the highway runoff duration used for upstream transport curves from the discharge duration, which could vary from BMP to BMP, to the runoff-concurrent duration and volume, and fix a problem that allowed users to simulate a dependent variable in a lake analysis without the explanatory variable, which caused an error. 
Version 1.1.1 was released in December 2022 to make SELDM compatible with the 32- and 64-bit versions of Microsoft Office; this version has the ability to simulate emerging contaminants including Microplastics , PFAS/PFOS (see Per- and polyfluoroalkyl substances and Perfluorooctanesulfonic acid ), and tire chemicals (see Tire manufacturing , Rubber pollution , and 6PPD ). The code for SELDM is open source and public domain code that can be downloaded from the SELDM software support page. [ 35 ] This article incorporates public domain material from websites or documents of the United States Geological Survey .
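The stochastic mass-balance dilution concept that SELDM applies can be illustrated with a toy Monte Carlo calculation. The sketch below is not SELDM code, input format, or statistical machinery; it uses invented lognormal inputs purely to show how downstream event mean concentrations arise from mixing highway-runoff and upstream loads over many simulated storms:

import random

def simulate_downstream_emc(n_events=10_000, seed=7):
    """Toy stochastic mass balance: mix highway-runoff and upstream loads
    to get downstream event mean concentrations (EMCs) for many storms."""
    rng = random.Random(seed)
    downstream = []
    for _ in range(n_events):
        q_runoff = rng.lognormvariate(0.0, 1.0)    # runoff volume (arbitrary units)
        c_runoff = rng.lognormvariate(4.0, 0.8)    # runoff EMC (arbitrary units)
        q_upstream = rng.lognormvariate(2.0, 1.0)  # upstream stormflow volume
        c_upstream = rng.lognormvariate(2.0, 0.6)  # upstream concentration
        load = q_runoff * c_runoff + q_upstream * c_upstream
        downstream.append(load / (q_runoff + q_upstream))
    downstream.sort()
    return downstream

emcs = simulate_downstream_emc()
print("median downstream EMC:", emcs[len(emcs) // 2])
print("90th-percentile EMC:  ", emcs[int(0.9 * len(emcs))])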
https://en.wikipedia.org/wiki/Stochastic_empirical_loading_and_dilution_model
Stochastic multicriteria acceptability analysis ( SMAA ) is a multiple-criteria decision analysis method for problems with missing or incomplete information. This means that criteria and preference information can be uncertain, inaccurate, or partially missing. Incomplete information is represented in SMAA using suitable probability distributions. The method is based on stochastic simulation: random values for the criteria measurements and weights are drawn from their corresponding distributions. [ 1 ] SMAA can handle mixed cardinal and ordinal information; ordinal information is treated by a special joint distribution that preserves the ordering. [ 2 ] A survey of the different variants and applications of SMAA is available. [ 3 ] Open-source implementations of SMAA can be found at the website SMAA.fi. [ 4 ]
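The core SMAA computation — drawing random criteria values and weights and recording how often each alternative comes out on top — can be sketched as follows. The alternatives, distributions, and weight-generation scheme are invented for this example, and a full SMAA implementation also reports quantities such as central weight vectors and confidence factors:

import random

def smaa_acceptability(criteria, n_draws=10_000, seed=3):
    """Estimate rank-1 acceptability indices: the share of random
    weight/measurement draws in which each alternative scores highest.
    criteria: {alternative: [(mean, std), ...]} one (mean, std) per criterion."""
    rng = random.Random(seed)
    n_crit = len(next(iter(criteria.values())))
    wins = {name: 0 for name in criteria}
    for _ in range(n_draws):
        raw = [rng.random() for _ in range(n_crit)]
        weights = [w / sum(raw) for w in raw]  # random weights summing to 1
        utilities = {name: sum(w * rng.gauss(m, s)
                               for w, (m, s) in zip(weights, dists))
                     for name, dists in criteria.items()}
        wins[max(utilities, key=utilities.get)] += 1
    return {name: wins[name] / n_draws for name in criteria}

alternatives = {"A": [(0.7, 0.1), (0.5, 0.2)], "B": [(0.6, 0.1), (0.7, 0.1)]}
print(smaa_acceptability(alternatives))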
https://en.wikipedia.org/wiki/Stochastic_multicriteria_acceptability_analysis
Stochastic portfolio theory ( SPT ) is a mathematical theory for analyzing stock market structure and portfolio behavior introduced by E. Robert Fernholz in 2002. It is descriptive as opposed to normative, and is consistent with the observed behavior of actual markets. Normative assumptions, which serve as a basis for earlier theories like modern portfolio theory (MPT) and the capital asset pricing model (CAPM), are absent from SPT. SPT uses continuous-time random processes (in particular, continuous semi-martingales) to represent the prices of individual securities. Processes with discontinuities, such as jumps, have also been incorporated* into the theory (*unverifiable claim due to missing citation!). SPT considers stocks and stock markets , but its methods can be applied to other classes of assets as well. A stock is represented by its price process, usually in the logarithmic representation . In the case the market is a collection of stock-price processes X i , {\displaystyle X_{i},} for i = 1 , … , n , {\displaystyle i=1,\dots ,n,} each defined by a continuous semimartingale where W := ( W 1 , … , W d ) {\displaystyle W:=(W_{1},\dots ,W_{d})} is an n {\displaystyle n} -dimensional Brownian motion (Wiener) process with d ≥ n {\displaystyle d\geq n} , and the processes γ i {\displaystyle \gamma _{i}} and ξ i ν {\displaystyle \xi _{i\nu }} are progressively measurable with respect to the Brownian filtration { F t } = { F t W } {\displaystyle \{{\mathcal {F}}_{t}\}=\{{\mathcal {F}}_{t}^{W}\}} . In this representation γ i ( t ) {\displaystyle \gamma _{i}(t)} is called the (compound) growth rate of X i , {\displaystyle X_{i},} and the covariance between log ⁡ X i {\displaystyle \log X_{i}} and log ⁡ X j {\displaystyle \log X_{j}} is σ i j ( t ) = ∑ ν = 1 d ξ i ν ( t ) ξ j ν ( t ) . {\displaystyle \sigma _{ij}(t)=\sum _{\nu =1}^{d}\xi _{i\nu }(t)\xi _{j\nu }(t).} It is frequently assumed that, for all i , {\displaystyle i,} the process ξ i , 1 2 ( t ) + ⋯ + ξ i d 2 ( t ) {\displaystyle \xi _{i,1}^{2}(t)+\cdots +\xi _{id}^{2}(t)} is positive, locally square-integrable , and does not grow too rapidly as t → ∞ . {\displaystyle t\rightarrow \infty .} The logarithmic representation is equivalent to the classical arithmetic representation which uses the rate of return α i ( t ) , {\displaystyle \alpha _{i}(t),} however the growth rate can be a meaningful indicator of long-term performance of a financial asset, whereas the rate of return has an upward bias. The relation between the rate of return and the growth rate is The usual convention in SPT is to assume that each stock has a single share outstanding, so X i ( t ) {\displaystyle X_{i}(t)} represents the total capitalization of the i {\displaystyle i} -th stock at time t , {\displaystyle t,} and X ( t ) = X 1 ( t ) + ⋯ + X n ( t ) {\displaystyle X(t)=X_{1}(t)+\cdots +X_{n}(t)} is the total capitalization of the market. Dividends can be included in this representation, but are omitted here for simplicity. An investment strategy π = ( π 1 , ⋯ , π n ) {\displaystyle \pi =(\pi _{1},\cdots ,\pi _{n})} is a vector of bounded, progressively measurable processes; the quantity π i ( t ) {\displaystyle \pi _{i}(t)} represents the proportion of total wealth invested in the i {\displaystyle i} -th stock at time t {\displaystyle t} , and π 0 ( t ) := 1 − ∑ i = 1 n π i ( t ) {\displaystyle \pi _{0}(t):=1-\sum _{i=1}^{n}\pi _{i}(t)} is the proportion hoarded (invested in a money market with zero interest rate). Negative weights correspond to short positions. 
The cash strategy κ ≡ 0 ( κ 0 ≡ 1 ) {\displaystyle \kappa \equiv 0(\kappa _{0}\equiv 1)} keeps all wealth in the money market. A strategy π {\displaystyle \pi } is called portfolio , if it is fully invested in the stock market , that is π 1 ( t ) + ⋯ + π n ( t ) = 1 {\displaystyle \pi _{1}(t)+\cdots +\pi _{n}(t)=1} holds, at all times. The value process Z π {\displaystyle Z_{\pi }} of a strategy π {\displaystyle \pi } is always positive and satisfies where the process γ π ∗ {\displaystyle \gamma _{\pi }^{*}} is called the excess growth rate process and is given by This expression is non-negative for a portfolio with non-negative weights π i ( t ) {\displaystyle \pi _{i}(t)} and has been used in quadratic optimization of stock portfolios, a special case of which is optimization with respect to the logarithmic utility function. The market weight processes , where i = 1 , … , n {\displaystyle i=1,\dots ,n} define the market portfolio μ {\displaystyle \mu } . With the initial condition Z μ ( 0 ) = X ( 0 ) , {\displaystyle Z_{\mu }(0)=X(0),} the associated value process will satisfy Z μ ( t ) = X ( t ) {\displaystyle Z_{\mu }(t)=X(t)} for all t . {\displaystyle t.} A number of conditions can be imposed on a market, sometimes to model actual markets and sometimes to emphasize certain types of hypothetical market behavior. Some commonly invoked conditions are: 1 T ∫ 0 T μ max ( t ) d t ≤ 1 − ε {\displaystyle {\frac {1}{T}}\int _{0}^{T}\mu _{\max }(t)\,dt\leq 1-\varepsilon } Diversity and weak diversity are rather weak conditions, and markets are generally far more diverse than would be tested by these extremes. A measure of market diversity is market entropy , defined by We consider the vector process ( μ ( 1 ) ( t ) , … , μ ( n ) ( t ) ) , {\displaystyle (\mu _{(1)}(t),\dots ,\mu _{(n)}(t)),} with 0 ≤ t < ∞ {\displaystyle 0\leq t<\infty } of ranked market weights where ties are resolved “lexicographically”, always in favor of the lowest index. The log-gaps where 0 ≤ t < ∞ {\displaystyle 0\leq t<\infty } and k = 1 , … , n − 1 {\displaystyle k=1,\dots ,n-1} are continuous, non-negative semimartingales; we denote by Λ ( k , k + 1 ) ( t ) = L G ( k , k + 1 ) ( t ; 0 ) {\displaystyle \Lambda ^{(k,k+1)}(t)=L^{G^{(k,k+1)}}(t;0)} their local times at the origin. These quantities measure the amount of turnover between ranks k {\displaystyle k} and k + 1 {\displaystyle k+1} during the time-interval [ 0 , t ] {\displaystyle [0,t]} . A market is called stochastically stable , if ( μ ( 1 ) ( t ) , ⋯ , μ ( n ) ( t ) ) {\displaystyle (\mu _{(1)}(t),\cdots ,\mu _{(n)}(t))} converges in distribution as t → ∞ {\displaystyle t\rightarrow \infty } to a random vector ( M ( 1 ) , ⋯ , M ( n ) ) {\displaystyle (M_{(1)},\cdots ,M_{(n)})} with values in the Weyl chamber { ( x 1 , … , x n ) ∣ x 1 > x 2 > ⋯ > x n and ∑ i = 1 n x i = 1 } {\displaystyle \{(x_{1},\dots ,x_{n})\mid x_{1}>x_{2}>\dots >x_{n}{\text{ and }}\sum _{i=1}^{n}x_{i}=1\}} of the unit simplex, and if the strong law of large numbers holds for suitable real constants λ ( 1 , 2 ) , … , λ ( n − 1 , n ) . 
Given any two investment strategies π, ρ and a real number T > 0, we say that π is arbitrage relative to ρ over the time-horizon [0,T] if P(Z_π(T) ≥ Z_ρ(T)) = 1 and P(Z_π(T) > Z_ρ(T)) > 0 both hold; this relative arbitrage is called “strong” if P(Z_π(T) > Z_ρ(T)) = 1. When ρ is the cash strategy κ ≡ 0, we recover the usual definition of arbitrage relative to cash. We say that a given strategy ν has the numeraire property if, for any strategy π, the ratio Z_π / Z_ν is a P-supermartingale. In such a case, the process 1/Z_ν is called a “deflator” for the market. No arbitrage is possible, over any given time horizon , relative to a strategy ν that has the numeraire property (either with respect to the underlying probability measure P, or with respect to any other probability measure which is equivalent to P). A strategy ν with the numeraire property maximizes the asymptotic growth rate from investment, in the sense that lim sup_{T→∞} (1/T) log( Z_π(T) / Z_ν(T) ) ≤ 0 holds for any strategy π; it also maximizes the expected log-utility from investment, in the sense that for any strategy π and real number T > 0 we have E[ log Z_π(T) ] ≤ E[ log Z_ν(T) ]. If the vector α(t) = (α_1(t), ⋯, α_n(t))′ of instantaneous rates of return and the matrix σ(t) = (σ_{ij}(t))_{1≤i,j≤n} of instantaneous covariances are known, then the strategy that maximizes the instantaneous expected log-growth π′ α(t) − (1/2) π′ σ(t) π over weight vectors π has the numeraire property whenever the indicated maximum is attained. The study of the numeraire portfolio links SPT to the so-called benchmark approach to mathematical finance , which takes such a numeraire portfolio as given and provides a way to price contingent claims, without any further assumptions. A probability measure Q is called an equivalent martingale measure (EMM) on a given time-horizon [0,T] if it has the same null sets as P on F_T, and if the processes X_1(t), …, X_n(t), 0 ≤ t ≤ T, are all Q-martingales. Assuming that such an EMM exists, arbitrage is not possible on [0,T] relative either to cash κ or to the market portfolio μ (or, more generally, relative to any strategy ρ whose wealth process Z_ρ is a martingale under some EMM). Conversely, if π, ρ are portfolios and one of them is arbitrage relative to the other on [0,T], then no EMM can exist on this horizon.
Suppose we are given a smooth function G : U → (0, ∞) on some neighborhood U of the unit simplex in R^n. The portfolio with weights π_i(t) = ( D_i log G(μ(t)) + 1 − Σ_{j=1}^n μ_j(t) D_j log G(μ(t)) ) μ_i(t) is called the portfolio generated by the function G. It can be shown that all the weights of this portfolio are non-negative if its generating function G is concave. Under mild conditions, the relative performance of this functionally-generated portfolio π_G with respect to the market portfolio μ is given by the F-G decomposition log( Z_{π_G}(T) / Z_μ(T) ) = log( G(μ(T)) / G(μ(0)) ) + ∫_0^T g(t) dt, which involves no stochastic integrals. Here the expression g(t) = − (1 / (2 G(μ(t)))) Σ_{i,j=1}^n D²_{ij} G(μ(t)) μ_i(t) μ_j(t) τ^μ_{ij}(t) is called the drift process of the portfolio (and it is a non-negative quantity if the generating function G is concave); and the quantities τ^μ_{ij}(t) := σ_{ij}(t) − σ_{iμ}(t) − σ_{jμ}(t) + σ_{μμ}(t), with 1 ≤ i, j ≤ n, are called the relative covariances between log X_i and log X_j with respect to the market. The excess growth rate of the market portfolio admits the representation 2 γ*_μ(t) = Σ_{i=1}^n μ_i(t) τ^μ_{ii}(t) as a capitalization-weighted average relative stock variance. This quantity is non-negative; if it happens to be bounded away from zero, namely γ*_μ(t) ≥ h for all 0 ≤ t < ∞ and some real constant h > 0, then it can be shown using the F-G decomposition that, for every T > S(μ(0))/h, there exists a constant c > 0 for which the modified entropic portfolio Θ^(c) is strict arbitrage relative to the market μ over [0,T]; see Fernholz and Karatzas (2005) for details. It is an open question whether such arbitrage exists over arbitrary time horizons (for two special cases, in which the answer to this question turns out to be affirmative, please see the paragraph below and the next section). If the eigenvalues of the covariance matrix (σ_{ij}(t))_{1≤i,j≤n} are bounded away from both zero and infinity, the condition γ*_μ ≥ h > 0 can be shown to be equivalent to diversity, namely μ_max ≤ 1 − ε for a suitable ε ∈ (0,1). Then the diversity-weighted portfolio δ^(p), with weights δ^(p)_i(t) = μ_i(t)^p / Σ_{j=1}^n μ_j(t)^p for some p ∈ (0,1), leads to strict arbitrage relative to the market portfolio over sufficiently long time horizons; whereas suitable modifications of this diversity-weighted portfolio realize such strict arbitrage over arbitrary time horizons. We consider the example of the system of stochastic differential equations d log X_i(t) = ( α / (2 μ_i(t)) ) dt + (1 / √(μ_i(t))) dW_i(t), for 1 ≤ i ≤ n, given a real constant α ≥ 0 and an n-dimensional Brownian motion (W_1, …, W_n). It follows from the work of Bass and Perkins (2002) that this system has a weak solution, which is unique in distribution. Fernholz and Karatzas (2005) show how to construct this solution in terms of scaled and time-changed squared Bessel processes , and prove that the resulting system is coherent.
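Functionally generated portfolios can be computed from the market weights alone. The sketch below (illustrative only; the weight vector is made up) derives the entropy-weighted portfolio, generated by G(x) = −Σ x_i log x_i, and the diversity-weighted portfolio, generated by G(x) = (Σ x_i^p)^{1/p}, from the generating formula quoted above.

```python
import numpy as np

def generated_portfolio(mu, grad_log_G):
    """pi_i = (D_i log G(mu) + 1 - sum_j mu_j D_j log G(mu)) * mu_i."""
    mu = np.asarray(mu, dtype=float)
    d = np.asarray(grad_log_G, dtype=float)          # vector of D_i log G(mu)
    return (d + 1.0 - np.dot(mu, d)) * mu

mu = np.array([0.5, 0.3, 0.2])                       # hypothetical market weights

# Entropy function G(x) = -sum x_i log x_i:  D_i log G = (-log x_i - 1) / G(x).
S = -np.sum(mu * np.log(mu))
pi_entropy = generated_portfolio(mu, (-np.log(mu) - 1.0) / S)
# Closed form of the same portfolio: pi_i = -mu_i log mu_i / S(mu).
assert np.allclose(pi_entropy, -mu * np.log(mu) / S)

# Diversity function G(x) = (sum x_i^p)^(1/p), p in (0,1):
# the generated weights reduce to mu_i^p / sum_j mu_j^p.
p = 0.5
pi_diversity = mu**p / np.sum(mu**p)

print(pi_entropy, pi_entropy.sum())      # weights sum to 1
print(pi_diversity, pi_diversity.sum())  # weights sum to 1
```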
In this model the total market capitalization X behaves as geometric Brownian motion with drift, and has the same constant growth rate as the largest stock, whereas the excess growth rate of the market portfolio is a positive constant. On the other hand, the relative market weights μ_i, 1 ≤ i ≤ n, have the dynamics of multi-allele Wright-Fisher processes . This model is an example of a non-diverse market with unbounded variances, in which strong arbitrage opportunities with respect to the market portfolio μ exist over arbitrary time horizons , as was shown by Banner and Fernholz (2008). Moreover, Pal (2012) derived the joint density of market weights at fixed times and at certain stopping times. We fix an integer m ∈ {2, …, n−1} and construct two capitalization-weighted portfolios: one consisting of the top m stocks, denoted ζ, and one consisting of the bottom n − m stocks, denoted η. More specifically, ζ_i(t) = μ_i(t) 1{r_t(i) ≤ m} / Σ_{j : r_t(j) ≤ m} μ_j(t) and η_i(t) = μ_i(t) 1{r_t(i) > m} / Σ_{j : r_t(j) > m} μ_j(t), for 1 ≤ i ≤ n, where r_t(i) denotes the rank of X_i(t). Fernholz (1999), (2002) showed that the relative performance of the large-stock portfolio ζ with respect to the market decomposes into a term recording the change in the combined market weight of the m largest stocks between times 0 and T, minus a “leakage” term driven by turnover at the m-th rank. Indeed, if there is no turnover at the m-th rank during the interval [0,T], the fortunes of ζ relative to the market are determined solely on the basis of how the total capitalization of this sub-universe of the m largest stocks fares at time T versus time 0; whenever there is turnover at the m-th rank, though, ζ has to sell at a loss a stock that gets “relegated” to the lower league, and buy a stock that has risen in value and been promoted. This accounts for the “leakage” evident in the last term of the decomposition, an integral with respect to the cumulative turnover process Λ^{(m,m+1)} of the relative weight in the large-cap portfolio ζ of the stock that occupies the m-th rank. The reverse situation prevails with the portfolio η of small stocks, which gets to sell at a profit stocks that are being promoted to the “upper capitalization” league, and to buy relatively cheaply stocks that are being relegated; the corresponding leakage term enters its decomposition with a favorable sign. It is clear from these two decompositions that, in a coherent and stochastically stable market, the small-stock cap-weighted portfolio η will tend to outperform its large-stock counterpart ζ, at least over large time horizons; in particular, under those conditions the long-run growth rate of Z_η exceeds that of Z_ζ. This quantifies the so-called size effect . In Fernholz (1999, 2002), constructions such as these are generalized to include functionally generated portfolios based on ranked market weights. First- and second-order models are hybrid Atlas models that reproduce some of the structure of real stock markets. First-order models have only rank-based parameters, and second-order models have both rank-based and name-based parameters. Suppose that X_1, …, X_n is a coherent market and that, for each rank k = 1, …, n, the long-run average growth rate g_k and variance σ_k² of the stock occupying rank k exist as limits.
Then the Atlas model X̂_1, …, X̂_n defined by d log X̂_i(t) = g_{r̂_t(i)} dt + σ_{r̂_t(i)} dW_i(t), where r̂_t(i) is the rank of X̂_i(t) and (W_1, …, W_n) is an n-dimensional Brownian motion process, is the first-order model for the original market X_1, …, X_n. Under reasonable conditions, the capital distribution curve for a first-order model will be close to that of the original market. However, a first-order model is ergodic in the sense that each stock asymptotically spends (1/n)-th of its time at each rank, a property that is not present in actual markets. In order to vary the proportion of time that a stock spends at each rank, it is necessary to use some form of hybrid Atlas model with parameters that depend on both rank and name. An effort in this direction was made by Fernholz, Ichiba, and Karatzas (2013), who introduced a second-order model for the market with rank- and name-based growth parameters, and variance parameters that depended on rank alone.
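A first-order (Atlas-type) model is straightforward to simulate, since each stock's log-capitalization receives the drift and volatility assigned to its current rank. The sketch below is illustrative only: the rank-based parameters g_k and σ_k are invented, with the smallest rank receiving the positive "Atlas" drift, and a simple Euler scheme is used.

```python
import numpy as np

def simulate_first_order(log_x0, g, sigma, dt=1/252, steps=2520, seed=0):
    """Euler scheme for d log X_i = g[rank(i)] dt + sigma[rank(i)] dW_i."""
    rng = np.random.default_rng(seed)
    log_x = np.array(log_x0, dtype=float)
    path = [log_x.copy()]
    for _ in range(steps):
        # rank 0 = largest capitalization, rank n-1 = smallest
        ranks = np.argsort(np.argsort(-log_x))
        drift = g[ranks] * dt
        diffusion = sigma[ranks] * np.sqrt(dt) * rng.standard_normal(len(log_x))
        log_x = log_x + drift + diffusion
        path.append(log_x.copy())
    return np.array(path)

g = np.array([-0.02, -0.01, 0.0, 0.01, 0.10])   # hypothetical: bottom rank gets the Atlas drift
sigma = np.array([0.15, 0.18, 0.20, 0.25, 0.30])  # hypothetical rank-based volatilities
path = simulate_first_order(np.log([100., 80., 60., 40., 20.]), g, sigma)

caps = np.exp(path[-1])
print("final market weights:", caps / caps.sum())
```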
https://en.wikipedia.org/wiki/Stochastic_portfolio_theory
Stochastic-process rare event sampling (SPRES) is a rare-event sampling method in computer simulation , designed specifically for non-equilibrium calculations, including those for which the rare-event rates are time-dependent ( non-stationary process). To treat systems in which there is time dependence in the dynamics, due either to variation of an external parameter or to evolution of the system itself, the scheme for branching paths must be devised so as to achieve sampling which is distributed evenly in time and which takes account of changing fluxes through different regions of the phase space . The SPRES algorithm [ 1 ] branches simulation paths at fixed time intervals. The process of branching requires that identical paths can be made to diverge from each other, such as by changing the seed in the computer's random number generator . For systems which would naturally be considered deterministic , it may be possible to inject an element of randomness, for instance by coupling to a fluctuating heat bath or by adding random perturbations to account for some elements of the simulation which are not modelled explicitly but which exist in the real system. The amount of over- or under-sampling (the branching density) is decided based on some system-specific 'progress coordinate' which measures progress toward a rare event of interest. The probability of selecting a configuration as the starting point for a new path segment is conditioned jointly by its probability of appearing in an unbiased simulation and by the local flux forwards in the progress coordinate, with a small flux leading adaptively to a larger oversampling. The method is designed to allow ready observation of rare events with respect to time. An additional benefit relative to methods which mainly split trajectories based on interfaces in the progress coordinate, rather than on time, is that over most of the progress coordinate space the coordinate only needs to be evaluated at fixed time intervals (rather than continuously), because the exact time-point at which interfaces other than the final interface are reached is no longer of importance.
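SPRES itself prescribes a specific flux-based rule for branching and reweighting, for which the cited reference should be consulted. The sketch below is only a schematic illustration of the general idea of time-interval path branching: trajectories are propagated over fixed intervals, paths are cloned (diverging through fresh random numbers) when they make forward progress along a progress coordinate, and statistical weights are divided on cloning so that ensemble averages remain unbiased. All function names and parameters here are hypothetical.

```python
import random

def propagate(x, dt, rng):
    """Hypothetical stochastic dynamics over one branching interval dt."""
    return x + (-0.2 * x) * dt + rng.gauss(0.0, (2.0 * dt) ** 0.5)

def branching_run(n_walkers=100, n_intervals=200, dt=0.1, threshold=3.0, n_split=2, seed=1):
    """Schematic time-interval branching: split (with weight division) whenever a
    walker reaches a new highest level of the progress coordinate, so that rare
    forward excursions are oversampled while averages remain unbiased."""
    rng = random.Random(seed)
    # each walker: (position, statistical weight, best progress reached so far)
    walkers = [(0.0, 1.0 / n_walkers, 0.0) for _ in range(n_walkers)]
    hit_weight = 0.0
    for _ in range(n_intervals):
        new_walkers = []
        for x, w, best in walkers:
            x = propagate(x, dt, rng)
            if x >= threshold:                 # rare event reached: record its weight
                hit_weight += w
                continue
            if x > best + 0.5:                 # new progress level: branch the path
                for _ in range(n_split):       # copies diverge via fresh random numbers
                    new_walkers.append((x, w / n_split, x))
            else:
                new_walkers.append((x, w, best))
        walkers = new_walkers
    return hit_weight                          # estimated probability of reaching the threshold

print(branching_run())
```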
https://en.wikipedia.org/wiki/Stochastic_process_rare_event_sampling
Stochastic resonance ( SR ) is a physical phenomenon in which random (stochastic) fluctuations in the microstate of a non-linear system cause deterministic changes in its macrostate. This occurs when the non-linear nature of the system amplifies certain (resonant) portions of the fluctuations, while not amplifying other portions of the noise. In information theory, SR can be used to reveal weak signals: a signal that is normally too weak to be detected by a sensor can be boosted by adding white noise, which contains a wide spectrum of frequencies. The frequencies in the white noise corresponding to the original signal's frequencies will resonate with each other, amplifying the original signal while not amplifying the rest of the white noise – thereby increasing the signal-to-noise ratio , which makes the original signal more prominent. Further, the added white noise can be enough to be detectable by the sensor, which can then filter it out to effectively detect the original, previously undetectable signal. This phenomenon of boosting undetectable signals by resonating with added white noise extends to many other systems – whether electromagnetic, physical or biological – and is an active area of research. [ 1 ] Stochastic resonance was first proposed by the Italian physicists Roberto Benzi, Alfonso Sutera and Angelo Vulpiani in 1981, [ 2 ] and the first application they proposed (together with Giorgio Parisi ) was in the context of climate dynamics. [ 3 ] [ 4 ] Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity . It often occurs in bistable systems or in systems with a sensory threshold, when the input signal to the system is "sub-threshold". For low noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity shows a peak. Strictly speaking, stochastic resonance occurs in bistable systems, when a small periodic ( sinusoidal ) force is applied together with a large wide-band stochastic force (noise). The system response is driven by the combination of the two forces that compete/cooperate to make the system switch between the two stable states. The degree of order is related to the amount of periodicity in the system response. When the periodic force is chosen too small to make the system response switch on its own, the presence of a non-negligible noise is required for switching to happen. When the noise is small, very few switches occur, mainly at random with no significant periodicity in the system response. When the noise is very strong, a large number of switches occur for each period of the sinusoid, and the system response does not show remarkable periodicity. Between these two conditions, there exists an optimal value of the noise that cooperatively concurs with the periodic forcing in order to make almost exactly one switch per period (a maximum in the signal-to-noise ratio).
Such a favorable condition is quantitatively determined by the matching of two timescales: the period of the sinusoid (the deterministic time scale) and the Kramers rate [ 5 ] (i.e., the average switch rate induced by the sole noise: the inverse of the stochastic time scale [ 6 ] [ 7 ] ). Stochastic resonance was discovered and proposed for the first time in 1981 to explain the periodic recurrence of ice ages. [ 8 ] Since then, the same principle has been applied in a wide variety of systems. Nowadays stochastic resonance is commonly invoked when noise and nonlinearity concur to determine an increase of order in the system response. Suprathreshold stochastic resonance is a particular form of stochastic resonance, in which random fluctuations , or noise, provide a signal processing benefit in a nonlinear system . Unlike most of the nonlinear systems in which stochastic resonance occurs, suprathreshold stochastic resonance occurs when the strength of the fluctuations is small relative to that of an input signal, or even small for random noise . It is not restricted to a subthreshold signal, hence the qualifier. Stochastic resonance has been observed in the neural tissue of the sensory systems of several organisms. [ 9 ] Computationally, neurons exhibit SR because of non-linearities in their processing. SR has yet to be fully explained in biological systems, but neural synchrony in the brain (specifically in the gamma wave frequency [ 10 ] ) has been suggested as a possible neural mechanism for SR by researchers who have investigated the perception of "subconscious" visual sensation. [ 11 ] Single neurons in vitro including cerebellar Purkinje cells [ 12 ] and squid giant axon [ 13 ] could also demonstrate the inverse stochastic resonance, when spiking is inhibited by synaptic noise of a particular variance. SR-based techniques have been used to create a novel class of medical devices for enhancing sensory and motor functions such as vibrating insoles especially for the elderly, or patients with diabetic neuropathy or stroke. [ 14 ] See the Review of Modern Physics [ 15 ] article for a comprehensive overview of stochastic resonance. Stochastic Resonance has found noteworthy application in the field of image processing. A related phenomenon is dithering applied to analog signals before analog-to-digital conversion . [ 16 ] Stochastic resonance can be used to measure transmittance amplitudes below an instrument's detection limit . If Gaussian noise is added to a subthreshold (i.e., immeasurable) signal, then it can be brought into a detectable region. After detection, the noise is removed. A fourfold improvement in the detection limit can be obtained. [ 17 ]
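The peak in signal-to-noise ratio at an intermediate noise level can be reproduced with a very simple threshold detector. The sketch below is illustrative only (all parameters are arbitrary): it feeds a sub-threshold sinusoid plus Gaussian white noise through a hard threshold and measures the output power at the signal frequency, which is small for weak noise, rises at a moderate noise level, and falls again when noise dominates.

```python
import numpy as np

def output_signal_power(noise_std, f_sig=5.0, amp=0.8, threshold=1.0,
                        fs=1000.0, duration=20.0, seed=0):
    """Power of the thresholded output at the signal frequency, for a
    sub-threshold sinusoid (amp < threshold) plus Gaussian white noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    x = amp * np.sin(2 * np.pi * f_sig * t) + rng.normal(0.0, noise_std, t.size)
    y = (x > threshold).astype(float)            # hard threshold detector
    spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_sig))]

for s in [0.05, 0.2, 0.5, 1.0, 2.0]:
    print(f"noise std {s:4.2f} -> output power at signal frequency {output_signal_power(s):.1f}")
# The power at the signal frequency typically peaks at an intermediate noise level,
# which is the signature of stochastic resonance.
```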
https://en.wikipedia.org/wiki/Stochastic_resonance
Stochastic resonance is a phenomenon that occurs in a threshold measurement system (e.g. a man-made instrument or device; a natural cell, organ or organism) when an appropriate measure of information transfer ( signal-to-noise ratio , mutual information , coherence , d' , etc.) is maximized in the presence of a non-zero level of stochastic input noise thereby lowering the response threshold; [ 1 ] the system resonates at a particular noise level. The three criteria that must be met for stochastic resonance to occur are: Stochastic resonance occurs when these conditions combine in such a way that a certain average noise intensity results in maximized information transfer. A time-averaged (or, equivalently, low-pass filtered ) output due to signal of interest plus noise will yield an even better measurement of the signal compared to the system's response without noise in terms of SNR. The idea of adding noise to a system in order to improve the quality of measurements is counter-intuitive. Measurement systems are usually constructed or evolved to reduce noise as much as possible and thereby provide the most precise measurement of the signal of interest. Numerous experiments have demonstrated that, in both biological and non-biological systems, the addition of noise can actually improve the probability of detecting the signal; this is stochastic resonance. The systems in which stochastic resonance occur are always nonlinear systems. The addition of noise to a linear system will always decrease the information transfer rate. [ 1 ] [ 2 ] Stochastic resonance was first discovered in a study of the periodic recurrence of Earth's ice ages. [ 2 ] [ 3 ] The theory developed out of an effort to understand how the Earth's climate oscillates periodically between two relatively stable global temperature states, one "normal" and the other an "ice age" state. The conventional explanation was that variations in the eccentricity of Earth's orbital path occurred with a period of about 100,000 years and caused the average temperature to shift dramatically. The measured variation in the eccentricity had a relatively small amplitude compared to the dramatic temperature change, however, and stochastic resonance was developed to show that the temperature change due to the weak eccentricity oscillation and added stochastic variation due to the unpredictable energy output of the Sun (known as the solar constant ) could cause the temperature to move in a nonlinear fashion between two stable dynamic states. As an example of stochastic resonance, consider the following demonstration after Simonotto et al. [ 4 ] The image to the left shows an original picture of the Arc de Triomphe in Paris. If this image is passed through a nonlinear threshold filter in which each pixel detects light intensity as above or below a given threshold, a representation of the image is obtained as in the images to the right. It can be hard to discern the objects in the filtered image in the top left because of the reduced amount of information present. The addition of noise before the threshold operation can result in a more recognizable output. The image below shows four versions of the image after the threshold operation with different levels of noise variance; the image in the top right hand corner appears to have the optimal level of noise allowing the Arc to be recognized, but other noise variances reveal different features. 
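The thresholded-image demonstration described above can be reproduced with a few lines of array code. The sketch below is illustrative only: it uses a synthetic gradient image rather than the Arc de Triomphe photograph, adds zero-mean Gaussian noise, and then thresholds. With no noise the sub-threshold regions are lost entirely, while a moderate noise level makes the underlying intensity pattern visible in the density of white pixels.

```python
import numpy as np

def noisy_threshold(image, noise_std, threshold=0.6, seed=0):
    """Binary (1-bit) rendering of `image` after adding Gaussian noise and thresholding."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, noise_std, image.shape)
    return (noisy > threshold).astype(np.uint8)

# Synthetic test image: smooth horizontal gradient, entirely below the threshold.
image = np.tile(np.linspace(0.1, 0.5, 256), (64, 1))

for s in [0.0, 0.15, 1.5]:
    out = noisy_threshold(image, s)
    col_density = out.mean(axis=0)
    print(f"noise std {s:4.2f}: white-pixel density per column ranges "
          f"{col_density.min():.2f} to {col_density.max():.2f}")
# With zero noise every pixel is black; at a moderate noise level the column-wise density
# of white pixels tracks the underlying gradient; at high noise the contrast washes out.
```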
The quality of the image resulting from stochastic resonance can be improved further by blurring, or subjecting the image to low-pass spatial filtering. This can be approximated in the visual system by squinting one's eyes or moving away from the image. This allows the observer's visual system to average the pixel intensities over areas, which is in effect a low-pass filter. The resonance breaks up the harmonic distortion due to the threshold operation by spreading the distortion across the spectrum, and the low-pass filter eliminates much of the noise that has been pushed into higher spatial frequencies. A similar output could be achieved by examining multiple threshold levels, so in a sense the addition of noise creates a new effective threshold for the measurement device. Evidence for stochastic resonance in a sensory system was first found in nerve signals from the mechanoreceptors located on the tail fan of the crayfish ( Procambarus clarkii ). [ 5 ] An appendage from the tail fan was mechanically stimulated to trigger the cuticular hairs that the crayfish uses to detect pressure waves in water. The stimulus consisted of sinusoidal motion at 55.2 Hz with random Gaussian noise at varying levels of average intensity. Spikes along the nerve root of the terminal abdominal ganglion were recorded extracellularly for 11 cells and analyzed to determine the SNR. Two separate measurements were used to estimate the signal-to-noise ratio of the neural response. The first was based on the Fourier power spectrum of the spike time series response. The power spectra from the averaged spike data for three different noise intensities all showed a clear peak at the 55.2 Hz component with different average levels of broadband noise. The relatively low- and mid-level added noise conditions also show a second harmonic component at about 110 Hz. The mid-level noise condition clearly shows a stronger component at the signal of interest than either low- or high-level noise, and the harmonic component is greatly reduced at mid-level noise and not present in the high-level noise. A standard measure of the SNR as a function of noise variance shows a clear peak at the mid-level noise condition. The other measure used for SNR was based on the inter-spike interval histogram instead of the power spectrum. A similar peak was found on a plot of SNR as a function of noise variance for mid-level noise, although it was slightly different from that found using the power spectrum measurement. These data support the claim that noise can enhance detection at the single neuron level but are not enough to establish that noise helps the crayfish detect weak signals in a natural setting. Experiments performed after this at a slightly higher level of analysis establish behavioral effects of stochastic resonance in other organisms; these are described below. A similar experiment was performed on the cricket ( Acheta domestica ), an arthropod like the crayfish. [ 6 ] The cercal system in the cricket senses the displacement of particles due to air currents utilizing filiform hairs covering the cerci, the two antenna-like appendages extending from the posterior section of the abdomen. Sensory interneurons in terminal abdominal ganglion carry information about intensity and direction of pressure perturbations. Crickets were presented with signal plus noise stimuli and the spikes from cercal interneurons due to this input were recorded. Two types of measurements of stochastic resonance were conducted. 
The first, like the crayfish experiment, consisted of a pure tone pressure signal at 23 Hz in a broadband noise background of varying intensities. A power spectrum analysis of the signals yielded maximum SNR for a noise intensity equal to 25 times the signal stimulus resulting in a maximum increase of 600% in SNR. 14 cells in 12 animals were tested, and all showed an increased SNR at a particular level of noise, meeting the requirements for the occurrence of stochastic resonance. The other measurement consisted of the rate of mutual information transfer between the nerve signal and a broadband stimulus combined with varying levels of broadband noise uncorrelated with the signal. The power spectrum SNR could not be calculated in the same manner as before because there were signal and noise components present at the same frequencies. Mutual information measures the degree to which one signal predicts another; independent signals carry no mutual information, while perfectly identical signals carry maximal mutual information. For varying low amplitudes of signal, stochastic resonance peaks were found in plots of mutual information transfer rate as a function of input noise with a maximum increase in information transfer rate of 150%. For stronger signal amplitudes that stimulated the interneurons in the presence of no noise, however, the addition of noise always decreased the mutual information transfer demonstrating that stochastic resonance only works in the presence of low-intensity signals. The information carried in each spike at different levels of input noise was also calculated. At the optimum level of noise, the cells were more likely to spike, resulting in spikes with more information and more precise temporal coherence with the stimulus. Stochastic resonance is a possible cause of escape behavior in crickets to attacks from predators that cause pressure waves in the tested frequency range at very low amplitudes, like the wasp Liris niger . Similar effects have also been noted in cockroaches. [ 6 ] Another investigation of stochastic resonance in broadband (or, equivalently, aperiodic) signals was conducted by probing cutaneous mechanoreceptors in the rat. [ 7 ] A patch of skin from the thigh and its corresponding section of the saphenous nerve were removed, mounted on a test stand immersed in interstitial fluid. Slowly adapting type 1 (SA1) mechanoreceptors output signals in response to mechanical vibrations below 500 Hz. The skin was mechanically stimulated with a broadband pressure signal with varying amounts of broadband noise using the up-and-down motion of a cylindrical probe. The intensity of the pressure signal was tested without noise and then set at a near sub-threshold intensity that would evoke 10 action potentials over a 60-second stimulation time. Several trials were then conducted with noise of increasing amplitude variance. Extracellular recordings were made of the mechanoreceptor response from the extracted nerve. The encoding of the pressure stimulus in the neural signal was measured by the coherence of the stimulus and response. The coherence was found to be maximized by a particular level of input Gaussian noise, consistent with the occurrence of stochastic resonance. The paddlefish ( Polyodon spathula ) hunts plankton using thousands of tiny passive electroreceptors located on its extended snout, or rostrum . The paddlefish is able to detect electric fields that oscillate at 0.5–20 Hz, and large groups of plankton generate this type of signal. 
Due to the small magnitude of the generated fields, plankton are usually caught by the paddlefish when they are within 40 mm of the fish's rostrum. An experiment was performed to test the hunting ability of the paddlefish in environments with different levels of background noise. [ 8 ] It was found that the paddlefish had a wider distance range of successful strikes in an electrical background with a low level of noise than in the absence of noise. In other words, there was a peak noise level, implying effects of stochastic resonance. In the absence of noise, the distribution of successful strikes has greater variance in the horizontal direction than in the vertical direction. With the optimal level of noise, the variance in the vertical direction increased relative to the horizontal direction and also shifted to a peak slightly below center, although the horizontal variance did not increase. Another measure of the increase in accuracy due to the optimal noise background is the number of plankton captured per unit time. For four paddlefish tested, two showed no increase in capture rate, while the other two showed a 50% increase in capture rate. Separate observations of the paddlefish hunting in the wild have provided evidence that the background noise generated by plankton increase the paddlefish's hunting abilities. Each individual organism generates a particular electrical signal; these individual signals cause massed groups of plankton to emit what amounts to a noisy background signal. It has been found that the paddlefish does not respond to only noise without signals from nearby individual organisms, so it uses the strong individual signals of nearby plankton to acquire specific targets, and the background electrical noise provides a cue to their presence. For these reasons, it is likely that the paddlefish takes advantage of stochastic resonance to improve its sensitivity to prey. Stochastic resonance was demonstrated in a high-level mathematical model of a single neuron using a dynamical systems approach. [ 9 ] The model neuron was composed of a bi-stable potential energy function treated as a dynamical system that was set up to fire spikes in response to a pure tonal input with broadband noise and the SNR is calculated from the power spectrum of the potential energy function, which loosely corresponds to an actual neuron's spike-rate output. The characteristic peak on a plot of the SNR as a function of noise variance was apparent, demonstrating the occurrence of stochastic resonance. Another phenomenon closely related to stochastic resonance is inverse stochastic resonance. It happens in the bistable dynamical systems having the limit cycle and stable fixed point solutions. In this case the noise of particular variance could efficiently inhibit spiking activity by moving the trajectory to the stable fixed point . It has been initially found in single neuron models, including classical Hodgkin-Huxley system. [ 10 ] [ 11 ] Later inverse stochastic resonance has been confirmed in Purkinje cells of cerebellum , [ 12 ] where it could play the role for generation of pauses of spiking activity in vivo . An aspect of stochastic resonance that is not entirely understood has to do with the relative magnitude of stimuli and the threshold for triggering the sensory neurons that measure them. If the stimuli are generally of a certain magnitude, it seems that it would be more evolutionarily advantageous for the threshold of the neuron to match that of the stimuli. 
In systems with noise, however, tuning thresholds for taking advantage of stochastic resonance may be the best strategy. A theoretical account of how a large model network (up to 1000) of summed FitzHugh–Nagumo neurons could adjust the threshold of the system based on the noise level present in the environment was devised. [ 13 ] [ 14 ] This can be equivalently conceived of as the system lowering its threshold, and this is accomplished such that the ability to detect suprathreshold signals is not degraded. Stochastic resonance in large-scale physiological systems of neurons (above the single-neuron level but below the behavioral level) has not yet been investigated experimentally. Psychophysical experiments testing the thresholds of sensory systems have also been performed in humans across sensory modalities and have yielded evidence that our systems make use of stochastic resonance as well. The above demonstration using the Arc de Triomphe photo is a simplified version of an earlier experiment. A photo of a clocktower was made into a video by adding noise with a particular variance a number of times to create successive frames. This was done for different levels of noise variance, and a particularly optimal level was found for discerning the appearance of the clocktower. [ 4 ] Similar experiments also demonstrated an increased level of contrast sensitivity to sine wave gratings. [ 4 ] Human subjects who undergo mechanical stimulation of a fingertip are able to detect a subthreshold impulse signal in the presence of a noisy mechanical vibration. The percentage of correct detections of the presence of the signal was maximized for a particular value of noise. [ 15 ] The auditory intensity detection thresholds of a number of human subjects were tested in the presence of noise. [ 16 ] The subjects include four people with normal hearing, two with cochlear implants and one with an auditory brainstem implant . The normal subjects were presented with two sound samples, one with a pure tone plus white noise and one with just white noise, and asked which one contained the pure tone. The level of noise which optimized the detection threshold in all four subjects was found to be between -15 and -20 dB relative to the pure tone, showing evidence for stochastic resonance in normal human hearing. A similar test in the subjects with cochlear implants only found improved detection thresholds for pure tones below 300 Hz, while improvements were found at frequencies greater than 60 Hz in the brainstem implant subject. The reason for the limited range of resonance effects are unknown. Additionally, the addition of noise to cochlear implant signals improved the threshold for frequency discrimination. The authors recommend that some type of white noise addition to cochlear implant signals could well improve the utility of such devices.
https://en.wikipedia.org/wiki/Stochastic_resonance_(sensory_neurobiology)
Stochastic thermodynamics is an emergent field of research in statistical mechanics that uses stochastic variables to better understand the non-equilibrium dynamics present in many microscopic systems [ 1 ] such as colloidal particles , biopolymers (e.g. DNA , RNA , and proteins ), enzymes , and molecular motors . [ a ] [ 5 ] When a microscopic machine (e.g. a MEMS device) performs useful work it generates heat and entropy as byproducts of the process; however, it is also predicted that this machine will operate in "reverse" or "backwards" over appreciably short periods. That is, heat energy from the surroundings will be converted into useful work. For larger engines, this would be described as a violation of the second law of thermodynamics , as entropy is consumed rather than generated. Loschmidt's paradox [ 6 ] states that in a time-reversible system, for every trajectory there exists a time-reversed anti-trajectory. As the entropy productions of a trajectory and its anti-trajectory are of identical magnitude but opposite sign, then, so the argument goes, one cannot prove that entropy production is positive. [ 7 ] For a long time, exact results in thermodynamics were only possible in linear systems capable of reaching equilibrium, leaving other questions like the Loschmidt paradox unsolved. During the last few decades fresh approaches have revealed general laws applicable to non-equilibrium systems which are described by nonlinear equations, pushing the range of exact thermodynamic statements beyond the realm of traditional linear solutions. These exact results are particularly relevant for small systems where appreciable (typically non-Gaussian) fluctuations occur. Thanks to stochastic thermodynamics it is now possible to accurately predict distribution functions of thermodynamic quantities relating to exchanged heat, applied work or entropy production for these systems. [ 8 ] The mathematical resolution to Loschmidt's paradox is called the (steady state) fluctuation theorem (FT), which is a generalisation of the second law of thermodynamics. The FT shows that as a system gets larger or the trajectory duration becomes longer, entropy-consuming trajectories become more unlikely, and the expected second-law behaviour is recovered. The FT was first put forward by Evans et al. (1993) [ 9 ] and much of the work done in developing and extending the theorem was accomplished by theoreticians and mathematicians interested in nonequilibrium statistical mechanics. [ b ] [ 7 ] The first observation and experimental proof of Evans's fluctuation theorem (FT) was performed by Wang et al. (2002). [ 10 ] Seifert [ 8 ] formulates an exact fluctuation relation for the entropy production along individual trajectories, and shows it to be a special case of a more general relation. Stochastic thermodynamics can be applied to driven (i.e. open) quantum systems whenever the effects of quantum coherence can be ignored. The dynamics of an open quantum system is then equivalent to a classical stochastic one. However, this is sometimes at the cost of requiring unrealistic measurements at the beginning and end of a process. [ c ] [ 13 ] Understanding non-equilibrium quantum thermodynamics more broadly is an important and active area of research. The efficiency of some computing and information theory tasks can be greatly enhanced when using quantum correlated states; quantum correlations can be used not only as a valuable resource in quantum computation, but also in the realm of quantum thermodynamics.
[ 14 ] New types of quantum devices in non-equilibrium states function very differently to their classical counterparts. For example, it has been theoretically shown that non-equilibrium quantum ratchet systems function far more efficiently then that predicted by classical thermodynamics. [ d ] [ 15 ] It has also been shown that quantum coherence can be used to enhance the efficiency of systems beyond the classical Carnot limit . This is because it could be possible to extract work, in the form of photons, from a single heat bath. Quantum coherence can be used in effect to play the role of Maxwell's demon [ 16 ] though the broader information theory based interpretation of the second law of thermodynamics is not violated. [ e ] [ 21 ] Quantum versions of stochastic thermodynamics have been studied for some time [ f ] and the past few years have seen a surge of interest in this topic. [ c ] Quantum mechanics involves profound issues around the interpretation of reality (e.g. the Copenhagen interpretation , many-worlds , de Broglie-Bohm theory etc are all competing interpretations that try to explain the unintuitive results of quantum theory) . It is hoped that by trying to specify the quantum-mechanical definition of work, dealing with open quantum systems, analyzing exactly solvable models, or proposing and performing experiments to test non-equilibrium predictions, [ g ] important insights into the interpretation of quantum mechanics and the true nature of reality will be gained. [ 26 ] Applications of non-equilibrium work relations, like the Jarzynski equality, have recently been proposed for the purposes of detecting quantum entanglement ( Hide & Vedral 2010 ) and to improving optimization problems (minimize or maximize a function of multivariables called the cost function ) via quantum annealing ( Ohzeki & Nishimori 2011 ). [ 26 ] Until recently thermodynamics has only considered systems coupled to a thermal bath and, therefore, satisfying Boltzmann statistics . However, some systems do not satisfy these conditions and are far from equilibrium such as living matter, for which fluctuations are expected to be non-Gaussian . [ 27 ] Active particle systems are able to take energy from their environment and drive themselves far from equilibrium. An important example of active matter is constituted by objects capable of self propulsion. Thanks to this property, they feature a series of novel behaviours that are not attainable by matter at thermal equilibrium, including, for example, swarming and the emergence of other collective properties. [ 28 ] A passive particle is considered in an active bath when it is in an environment where a wealth of active particles are present. These particles will exert nonthermal forces on the passive object so that it will experience non-thermal fluctuations and will behave widely different from a passive Brownian particle in a thermal bath. The presence of an active bath can significantly influence the microscopic thermodynamics of a particle. Experiments have suggested that the Jarzynski equality does not hold in some cases due to the presence of non-Boltzmann statistics in active baths. [ h ] This observation points towards a new direction in the study of non-equilibrium statistical physics and stochastic thermodynamics, where also the environment itself is far from equilibrium. [ 30 ] Active baths are a question of particular importance in biochemistry. 
For example, biomolecules within cells are coupled with an active bath due to the presence of molecular motors within the cytoplasm, which leads to striking and largely not yet understood phenomena such as the emergence of anomalous diffusion (Barkai et al., 2012). Also, protein folding might be facilitated by the presence of active fluctuations (Harder et al., 2014b) and active matter dynamics could play a central role in several biological functions (Mallory et al., 2015; Shin et al., 2015; Suzuki et al., 2015). It is an open question to what degree stochastic thermodynamics can be applied to systems coupled to active baths. [ 27 ]
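The flavour of the exact non-equilibrium work relations mentioned above, such as the Jarzynski equality, can be seen in a small simulation. The sketch below is an illustrative check, not taken from the article; units and parameters are arbitrary, with k_B T = 1 and friction coefficient 1. An overdamped Brownian particle is dragged in a harmonic trap at constant speed, the work is accumulated along each trajectory, and the Jarzynski-type identity <exp(-W)> = exp(-ΔF) = 1 is verified; it holds here because translating the trap leaves the free energy unchanged even though the average work is positive (dissipation).

```python
import numpy as np

def dragged_trap_work(n_traj=20000, n_steps=800, dt=0.005, k=1.0, v=0.5, seed=0):
    """Work values for an overdamped particle in a harmonic trap moving at speed v.
    Langevin dynamics (friction = 1, k_B T = 1): dx = -k (x - v t) dt + sqrt(2 dt) * xi.
    Work increment from moving the trap: dW = dH/dt * dt = -k v (x - v t) dt = v * force * dt."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(1.0 / k), n_traj)      # start equilibrated in the trap
    work = np.zeros(n_traj)
    for step in range(n_steps):
        trap = v * step * dt
        force = -k * (x - trap)
        work += v * force * dt
        x += force * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_traj)
    return work

W = dragged_trap_work()
print("mean work :", W.mean())            # positive: dissipated work
print("<exp(-W)> :", np.exp(-W).mean())   # close to exp(-dF) = 1, as the Jarzynski equality requires
```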
https://en.wikipedia.org/wiki/Stochastic_thermodynamics
Stock nomenclature for inorganic compounds is a widely used system of chemical nomenclature developed by the German chemist Alfred Stock and first published in 1919. In the "Stock system", the oxidation states of some or all of the elements in a compound are indicated in parentheses by Roman numerals . [ 1 ] [ 2 ] Contrary to the usual English style for parentheses, there is no space between the end of the element name and the opening parenthesis: for AgF , the correct style is "silver(I) fluoride" not "silver (I) fluoride". Where there is no ambiguity about the oxidation state of an element in a compound, it is not necessary to indicate it with Roman numerals: hence for NaCl , sodium chloride will suffice; sodium(I) chloride(−I) is unnecessarily long and such usage is very rare.
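As a simple illustration of the naming rule (not part of the original article; the helper and compound list are made-up examples), the sketch below formats a cation name with a Roman-numeral oxidation state, placing no space before the opening parenthesis, and omits the numeral when the oxidation state is unambiguous.

```python
def roman(n):
    """Roman numeral for small positive oxidation states."""
    numerals = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII"]
    return numerals[n - 1]

def stock_name(metal, oxidation_state, anion, ambiguous=True):
    """Return e.g. 'iron(III) chloride'; omit the numeral when the state is unambiguous."""
    if ambiguous:
        return f"{metal}({roman(oxidation_state)}) {anion}"   # note: no space before '('
    return f"{metal} {anion}"

print(stock_name("silver", 1, "fluoride"))                    # silver(I) fluoride
print(stock_name("iron", 3, "chloride"))                      # iron(III) chloride
print(stock_name("sodium", 1, "chloride", ambiguous=False))   # sodium chloride
```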
https://en.wikipedia.org/wiki/Stock_nomenclature
Stockholm Convention on Persistent Organic Pollutants is an international environmental treaty , signed on 22 May 2001 in Stockholm and effective from 17 May 2004, that aims to eliminate or restrict the production and use of persistent organic pollutants (POPs). [ 2 ] In 1995, the Governing Council of the United Nations Environment Programme (UNEP) called for global action to be taken on POPs, which it defined as "chemical substances that persist in the environment, bio-accumulate through the food web , and pose a risk of causing adverse effects to human health and the environment". Following this, the Intergovernmental Forum on Chemical Safety (IFCS) and the International Programme on Chemical Safety (IPCS) prepared an assessment of the 12 worst offenders, known as the dirty dozen . The INC met five times between June 1998 and December 2000 to elaborate the convention, and delegates adopted the Stockholm Convention on POPs at the Conference of the Plenipotentiaries convened from 22 to 23 May 2001 in Stockholm , Sweden. The negotiations for the convention were completed on 23 May 2001 in Stockholm. The convention entered into force on 17 May 2004 with ratification by an initial 128 parties and 151 signatories. Co-signatories agree to outlaw nine of the dirty dozen chemicals, limit the use of DDT to malaria control, and curtail inadvertent production of dioxins and furans . Parties to the convention have agreed to a process by which persistent toxic compounds can be reviewed and added to the convention, if they meet certain criteria for persistence and transboundary threat. The first set of new chemicals to be added to the convention were agreed at a conference in Geneva on 8 May 2009. As of September 2022, there are 186 parties to the convention (185 states and the European Union ). [ 1 ] Notable non-ratifying states include the United States, Israel, and Malaysia. The Stockholm Convention was adopted to EU legislation in Regulation (EC) No 850/2004. [ 3 ] In 2019, the latter was replaced by Regulation (EU) 2019/1021. [ 4 ] Key elements of the Convention include the requirement that developed countries provide new and additional financial resources and measures to eliminate production and use of intentionally produced POPs, eliminate unintentionally produced POPs where feasible, and manage and dispose of POPs wastes in an environmentally sound manner. Precaution is exercised throughout the Stockholm Convention, with specific references in the preamble, the objective, and the provision on identifying new POPs. When adopting the convention, provision was made for a procedure to identify additional POPs and the criteria to be considered in doing so. At the first meeting of the Conference of the Parties (COP1), held in Punta del Este, Uruguay, from 2 to 6 May 2005, the POPRC was established to consider additional candidates nominated for listing under the convention. The committee is composed of 31 experts nominated by parties from the five United Nations regional groups and reviews nominated chemicals in three stages. The Committee first determines whether the substance fulfills POP screening criteria detailed in Annex D of the convention, relating to its persistence, bioaccumulation, potential for long-range environmental transport (LRET), and toxicity. 
If a substance is deemed to fulfill these requirements, the Committee then drafts a risk profile according to Annex E to evaluate whether the substance is likely, as a result of its LRET, to lead to significant adverse human health and/or environmental effects and therefore warrants global action. Finally, if the POPRC finds that global action is warranted, it develops a risk management evaluation, according to Annex F, reflecting socioeconomic considerations associated with possible control measures. Based on this, the POPRC decides to recommend that the COP list the substance under one or more of the annexes to the convention. The POPRC has met annually in Geneva, Switzerland, since its establishment. The seventh meeting of the Persistent Organic Pollutants Review Committee (POPRC-7) of the Stockholm Convention on Persistent Organic Pollutants (POPs) took place from 10 to 14 October 2011 in Geneva. POPRC-8 was held from 15 to 19 October 2012 in Geneva, POPRC-9 to POPRC-15 were held in Rome, while POPRC-16 needed to be held online. There were initially twelve distinct chemicals ("dirty dozen") listed in three categories. Two chemicals, hexachlorobenzene and polychlorinated biphenyls, were listed in both categories A and C. [ 5 ] Currently, five chemicals are listed in both categories. POPRC-7 considered three proposals for listing in Annexes A, B and/or C of the convention: chlorinated naphthalenes (CNs), hexachlorobutadiene (HCBD) and pentachlorophenol (PCP), its salts and esters. The proposal is the first stage of the POPRC's work in assessing a substance, and requires the POPRC to assess whether the proposed chemical satisfies the criteria in Annex D of the convention. The criteria for forwarding a proposed chemical to the risk profile preparation stage are persistence, bioaccumulation, potential for long-range environmental transport (LRET), and adverse effects. POPRC-8 proposed hexabromocyclododecane for listing in Annex A, with specific exemptions for production and use in expanded polystyrene and extruded polystyrene in buildings. This proposal was agreed at the sixth Conference of Parties on 28 April – 10 May 2013. [ 15 ] [ 16 ] POPRC-9 proposed di-, tri-, tetra-, penta-, hexa-, hepta- and octa-chlorinated naphthalenes , and hexachlorobutadiene for listing in Annexes A and C. It also set up further work on pentachlorophenol, its salts and esters, and decabromodiphenyl ether, perfluorooctanesulfonic acid, its salts and perfluorooctane sulfonyl chloride. [ 17 ] POPRC-15 proposed PFHxS for listing in Annex A without specific exemptions. [ 18 ] Currently, [ timeframe? ] chlorpyrifos , long-chain perfluorocarboxylic acids and medium-chain chlorinated paraffins are under review. [ 19 ] Although some critics have alleged that the treaty is responsible for the continuing death toll from malaria, in reality the treaty specifically permits the public health use of DDT for the control of mosquitoes (the malaria vector ). [ 20 ] [ 21 ] [ 22 ] There are also ways to prevent high amounts of DDT consumed by using other malaria controls such as window screens. As long as there are specific measures taken, such as use of DDT indoors, then the limited amount of DDT can be used in a regulated fashion. [ 23 ] From a developing country perspective, a lack of data and information about the sources, releases, and environmental levels of POPs hampers negotiations on specific compounds, and indicates a strong need for research. 
[ 24 ] [ 25 ] Another controversy concerns certain POPs (which remain active, particularly in Arctic biota) that are mentioned in the Stockholm Convention but were not part of the dirty dozen, such as perfluorooctane sulfonate (PFOS). [ 26 ] PFOS has many general uses, such as in stain repellents, but has properties that can make it dangerous, since it is highly resistant to environmental breakdown. PFOS can be toxic, with effects including increased offspring mortality, decreased body weight, and disruption of neurological systems. What makes this compound controversial is the economic and political impact it can have among various countries and businesses. [ 27 ]
https://en.wikipedia.org/wiki/Stockholm_Convention_on_Persistent_Organic_Pollutants
Stockholm format is a multiple sequence alignment format used by Pfam , Rfam and Dfam to disseminate protein, RNA and DNA sequence alignments. [ 1 ] [ 2 ] [ 3 ] The alignment editors Ralee , [ 4 ] Belvu and Jalview support Stockholm format, as do the probabilistic database search tools Infernal and HMMER , and the phylogenetic analysis tool Xrate . Stockholm format files often have the filename extension .sto or .stk . [ 5 ] A well-formed Stockholm file always contains a header which states the format and version identifier, currently ' # STOCKHOLM 1.0 '. The header is then followed by multiple lines: a mix of markup lines (starting with # ) and sequence lines. Finally, the " // " line indicates the end of the alignment. An example without markup looks like the minimal alignment embedded in the sketch below. Sequences are written one per line. The sequence name is written first, and after any number of whitespace characters the sequence is written. Sequence names are typically in the form "name/start-end" or just "name". Sequence letters may include any characters except whitespace. Gaps may be indicated by " . " or " - ". Mark-up lines start with # . The "parameters" are separated by whitespace, so an underscore ("_") instead of a space should be used for the 1-char-per-column markups. Mark-up types defined include #=GF (per-file annotation), #=GS (per-sequence annotation), #=GR (per-residue annotation for a named sequence) and #=GC (per-column annotation for the whole alignment). These feature names are used by Pfam and Rfam for specific types of annotation. (See the Pfam and the Rfam documentation under "Description of fields".) Pfam and Rfam may use the following tags: Rfam and Pfam may use these features: The list of valid features includes those shown below, as well as the same features as for #=GR with "_cons" appended, meaning "consensus". Example: "SS_cons". There are no explicit size limits on any field. However, a simple parser that uses fixed field sizes should work safely on Pfam and Rfam alignments with these limits: A simple example of an Rfam alignment ( UPSK RNA ) with a pseudoknot in Stockholm format is shown below: [ 6 ] Here is a slightly more complex example showing the Pfam CBS domain:
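A minimal reader for the format as described above can be written in a few lines. The sketch below is an illustrative parser, not a reference implementation, and the embedded alignment is a made-up example: it checks the header, skips markup lines beginning with '#', collects name/sequence pairs, and stops at the '//' terminator.

```python
def parse_stockholm(text):
    """Very small Stockholm 1.0 reader: returns {sequence name: aligned sequence}.
    Markup lines (starting with '#') are ignored; '//' ends the alignment."""
    lines = text.strip().splitlines()
    if not lines or not lines[0].startswith("# STOCKHOLM 1.0"):
        raise ValueError("missing '# STOCKHOLM 1.0' header")
    seqs = {}
    for line in lines[1:]:
        line = line.strip()
        if line == "//":
            break
        if not line or line.startswith("#"):
            continue                        # markup (#=GF, #=GS, #=GR, #=GC) or blank line
        name, seq = line.split(None, 1)
        seqs[name] = seqs.get(name, "") + seq.replace(" ", "")
    return seqs

example = """\
# STOCKHOLM 1.0
seq1/1-8   ACDEF-GH
seq2/1-7   ACD.FAGH
//
"""
print(parse_stockholm(example))
# {'seq1/1-8': 'ACDEF-GH', 'seq2/1-7': 'ACD.FAGH'}
```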
https://en.wikipedia.org/wiki/Stockholm_format
The Stockmayer potential is a mathematical model for representing the interactions between pairs of atoms or molecules . It is defined as a Lennard-Jones potential with a point electric dipole moment . A Stockmayer liquid consists of a collection of spheres with point dipoles embedded at the centre of each. These spheres interact both by Lennard-Jones and by dipolar interactions. In the absence of the point dipoles, the spheres face no rotational friction and the translational dynamics of such LJ spheres have been studied in detail. This system, therefore, provides a simple model where the only source of rotational friction is dipolar interactions. [ 1 ] The interaction potential may be written as V(r) = 4 ε₁₂ [ (σ₁₂/r)¹² − (σ₁₂/r)⁶ ] − ξ ( μ₁ μ₂ / r³ ), where the parameters ε₁₂ and σ₁₂ are related to dispersion strength and particle size respectively, just as in the Mie potential or Lennard-Jones potential , which is the source of the first term; μ_i is the dipole moment of species i; and ξ is a parameter describing the relative orientation of the two dipoles, which may vary between −2 and 2. [ 2 ]
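The potential is straightforward to evaluate numerically. The sketch below is illustrative only (parameter values are arbitrary and reduced units are assumed); it implements V(r) exactly as written above.

```python
import numpy as np

def stockmayer(r, eps, sigma, mu1, mu2, xi):
    """Stockmayer potential: Lennard-Jones term plus point-dipole term.
    V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6] - xi*mu1*mu2/r**3,
    with xi in [-2, 2] encoding the relative dipole orientation."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6) - xi * mu1 * mu2 / r ** 3

r = np.linspace(0.9, 3.0, 5)
print(stockmayer(r, eps=1.0, sigma=1.0, mu1=1.0, mu2=1.0, xi=2.0))   # aligned dipoles
print(stockmayer(r, eps=1.0, sigma=1.0, mu1=1.0, mu2=1.0, xi=-2.0))  # anti-aligned dipoles
```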
https://en.wikipedia.org/wiki/Stockmayer_potential
The stocks-to-use ratio (S/U) is a convenient measure of the supply and demand interrelationships of commodities . This ratio indicates the level of carryover stock for any given commodity as a percentage of the total use of the commodity. It is typically used for grain commodity stocks such as wheat , corn and soybeans . Comparing both the ending stock and the stocks-to-use ratio against previous years indicates whether current ending stocks are at historically small levels, justifying higher prices, or at plentiful levels, resulting in lower prices. According to Futures Trading Charts the ratio is calculated as follows: [ 1 ] {\displaystyle {\frac {\text{Beginning Stock + Total Production - Total Use}}{\text{Total Use}}}={\text{Stocks-To-Use Ratio}}} Since the numerator is simply the year-ending carryover, this can be simplified (expressed as a percentage) to: {\displaystyle {\frac {{\text{Carryover}}\times 100}{\text{Total Use}}}={\text{Stocks-To-Use Ratio}}} Beginning stocks represent the previous year's ending or carry-over inventories. Total production represents the total grain produced in a given year. Total use is the sum of all the end uses in which the stock of grain has been consumed, including human consumption, export programs, seed, waste , dockage and feed consumption. Adding carry-over stocks to the total production gives the total supply; subtracting the total use from the total supply gives the year-ending carry-over stock.
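The calculation described above amounts to one line of arithmetic. The sketch below implements it directly; the numerical inputs are invented for illustration and are not actual commodity figures.

def stocks_to_use_ratio(beginning_stock, total_production, total_use):
    """Stocks-to-use ratio as described above, returned as a percentage.
    Carryover = beginning stock + total production - total use."""
    carryover = beginning_stock + total_production - total_use
    return 100.0 * carryover / total_use

# Illustrative numbers (e.g. millions of bushels), not real market data:
print(round(stocks_to_use_ratio(1500, 14000, 14200), 1))   # -> 9.2 (percent)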
https://en.wikipedia.org/wiki/Stocks-to-use_ratio
Air–fuel ratio ( AFR ) is the mass ratio of air to a solid, liquid, or gaseous fuel present in a combustion process. The combustion may take place in a controlled manner such as in an internal combustion engine or industrial furnace, or may result in an explosion (e.g., a dust explosion ). The air–fuel ratio determines whether a mixture is combustible at all, how much energy is being released, and how much unwanted pollutants are produced in the reaction. Typically a range of air to fuel ratios exists, outside of which ignition will not occur. These are known as the lower and upper explosive limits. In an internal combustion engine or industrial furnace, the air–fuel ratio is an important measure for anti-pollution and performance-tuning reasons. If exactly enough air is provided to completely burn all of the fuel ( stoichiometric combustion ), the ratio is known as the stoichiometric mixture , often abbreviated to stoich . Ratios lower than stoichiometric (where the fuel is in excess) are considered "rich". Rich mixtures are less efficient, but may produce more power and burn cooler. Ratios higher than stoichiometric (where the air is in excess) are considered "lean". Lean mixtures are more efficient but may cause higher temperatures, which can lead to the formation of nitrogen oxides . Some engines are designed with features to allow lean-burn . For precise air–fuel ratio calculations, the oxygen content of combustion air should be specified because of different air density due to different altitude or intake air temperature, possible dilution by ambient water vapor , or enrichment by oxygen additions. An air-fuel ratio meter monitors the air–fuel ratio of an internal combustion engine . Also called air–fuel ratio gauge , air–fuel meter , or air–fuel gauge , it reads the voltage output of an oxygen sensor , sometimes also called AFR sensor or lambda sensor. The original narrow-band oxygen sensors became factory installed standard in the late 1970s and early 1980s. In recent years a newer and much more accurate wide-band sensor, though more expensive, has become available. Most stand-alone narrow-band meters have 10 LEDs and some have more. Also common, narrow band meters in round housings with the standard mounting 52 and 67 mm ( 2 + 1 ⁄ 16 and 2 + 5 ⁄ 8 in) diameters, as other types of car 'gauges'. These usually have 10 or 20 LEDs. Analogue 'needle' style gauges are also available. In theory, a stoichiometric mixture has just enough air to completely burn the available fuel. In practice, this is never quite achieved, due primarily to the very short time available in an internal combustion engine for each combustion cycle. Most of the combustion process is completed in approximately 2 milliseconds at an engine speed of 6,000 revolutions per minute (100 revolutions per second, or 10 milliseconds per revolution of the crankshaft. For a four-stroke engine this would mean 5 milliseconds for each piston stroke, and 20 milliseconds to complete one 720 degree Otto cycle ). This is the time that elapses from the spark plug firing until 90% of the fuel–air mix is combusted, typically some 80 degrees of crankshaft rotation later. Catalytic converters are designed to work best when the exhaust gases passing through them are the result of nearly perfect combustion. A perfectly stoichiometric mixture burns very hot and can damage engine components if the engine is placed under high load at this fuel–air mixture. 
Due to the high temperatures at this mixture, the detonation of the fuel-air mix while approaching or shortly after maximum cylinder pressure is possible under high load (referred to as knocking or pinging), specifically a "pre-detonation" event in the context of a spark-ignition engine model. Such detonation can cause serious engine damage as the uncontrolled burning of the fuel-air mix can create very high pressures in the cylinder. As a consequence, stoichiometric mixtures are only used under light to low-moderate load conditions. For acceleration and high-load conditions, a richer mixture (lower air–fuel ratio) is used to produce cooler combustion products (thereby utilizing evaporative cooling ), and so avoid overheating of the cylinder head , and thus prevent detonation. The stoichiometric mixture for a gasoline engine is the ideal ratio of air to fuel that burns all fuel with no excess air. For gasoline fuel, the stoichiometric air–fuel mixture is about 14.7:1 [ 1 ] i.e. for every one gram of fuel, 14.7 grams of air are required. For pure octane fuel, the oxidation reaction is: Any mixture greater than 14.7:1 is considered a lean mixture ; any less than 14.7:1 is a rich mixture – given perfect (ideal) "test" fuel (gasoline consisting of solely n - heptane and iso-octane ). In reality, most fuels consist of a combination of heptane, octane, a handful of other alkanes , plus additives including detergents, and possibly oxygenators such as MTBE ( methyl tert -butyl ether ) or ethanol / methanol . These compounds all alter the stoichiometric ratio, with most of the additives pushing the ratio downward (oxygenators bring extra oxygen to the combustion event in liquid form that is released at the time of combustions; for MTBE -laden fuel, a stoichiometric ratio can be as low as 14.1:1). Vehicles that use an oxygen sensor or other feedback loops to control fuel to air ratio (lambda control), compensate automatically for this change in the fuel's stoichiometric rate by measuring the exhaust gas composition and controlling fuel volume. Vehicles without such controls (such as most motorcycles until recently, and cars predating the mid-1980s) may have difficulties running certain fuel blends (especially winter fuels used in some areas) and may require different carburetor jets (or otherwise have the fueling ratios altered) to compensate. Vehicles that use oxygen sensors can monitor the air–fuel ratio with an air–fuel ratio meter . In the typical air to natural gas combustion burner, a double-cross limit strategy is employed to ensure ratio control. (This method was used in World War II). [ citation needed ] The strategy involves adding the opposite flow feedback into the limiting control of the respective gas (air or fuel). This assures ratio control within an acceptable margin. There are other terms commonly used when discussing the mixture of air and fuel in internal combustion engines. Mixture is the predominant word that appears in training texts, operation manuals, and maintenance manuals in the aviation world. Air-fuel ratio is the ratio between the mass of air and the mass of fuel in the air-fuel mix at any given moment. The mass is the mass of all constituents that compose the air or fuel, whether they take part in the combustion or not. For example, a calculation of the mass of natural gas as fuel — which often contains carbon dioxide ( CO 2 ), nitrogen ( N 2 ), and various alkanes — includes the mass of the carbon dioxide, nitrogen and all alkanes in determining the value of m fuel . 
[ 2 ] For pure octane the stoichiometric mixture is approximately 15.1:1, or λ of 1.00 exactly. In naturally aspirated engines powered by octane, maximum power is frequently reached at AFRs ranging from 12.5 to 13.3:1 or λ of 0.850 to 0.901. [ citation needed ] The air-fuel ratio of 12:1 is considered as the maximum output ratio, whereas the air-fuel ratio of 16:1 is considered as the maximum fuel economy ratio. [ citation needed ] Fuel–air ratio is commonly used in the gas turbine industry as well as in government studies of internal combustion engine , and refers to the ratio of fuel to the air. [ citation needed ] Air–fuel equivalence ratio, λ (lambda), is the ratio of actual AFR to stoichiometry for a given mixture. λ = 1.0 is at stoichiometry, rich mixtures λ < 1.0, and lean mixtures λ > 1.0. There is a direct relationship between λ and AFR. To calculate AFR from a given λ , multiply the measured λ by the stoichiometric AFR for that fuel. Alternatively, to recover λ from an AFR, divide AFR by the stoichiometric AFR for that fuel. This last equation is often used as the definition of λ : Because the composition of common fuels varies seasonally, and because many modern vehicles can handle different fuels when tuning, it makes more sense to talk about λ values rather than AFR. [ 3 ] Most practical AFR devices actually measure the amount of residual oxygen (for lean mixes) or unburnt hydrocarbons (for rich mixtures) in the exhaust gas. The fuel–air equivalence ratio , Φ (phi), of a system is defined as the ratio of the fuel-to-oxidizer ratio to the stoichiometric fuel-to-oxidizer ratio. Mathematically, where m represents the mass, n represents a number of moles, subscript st stands for stoichiometric conditions. The advantage of using equivalence ratio over fuel–oxidizer ratio is that it takes into account (and is therefore independent of) both mass and molar values for the fuel and the oxidizer. Consider, for example, a mixture of one mole of ethane ( C 2 H 6 ) and one mole of oxygen ( O 2 ). The fuel–oxidizer ratio of this mixture based on the mass of fuel and air is and the fuel-oxidizer ratio of this mixture based on the number of moles of fuel and air is Clearly the two values are not equal. To compare it with the equivalence ratio, we need to determine the fuel–oxidizer ratio of ethane and oxygen mixture. For this we need to consider the stoichiometric reaction of ethane and oxygen, This gives Thus we can determine the equivalence ratio of the given mixture as or, equivalently, as Another advantage of using the equivalence ratio is that ratios greater than one always mean there is more fuel in the fuel–oxidizer mixture than required for complete combustion (stoichiometric reaction), irrespective of the fuel and oxidizer being used—while ratios less than one represent a deficiency of fuel or equivalently excess oxidizer in the mixture. This is not the case if one uses fuel–oxidizer ratio, which takes different values for different mixtures. The fuel–air equivalence ratio is related to the air–fuel equivalence ratio (defined previously) as follows: The relative amounts of oxygen enrichment and fuel dilution can be quantified by the mixture fraction , Z, defined as where Y F,0 and Y O,0 represent the fuel and oxidizer mass fractions at the inlet, W F and W O are the species molecular weights, and v F and v O are the fuel and oxygen stoichiometric coefficients, respectively. 
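The relationships above reduce to straightforward arithmetic. The sketch below first estimates the stoichiometric AFR of pure octane, assuming the standard balanced combustion 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O and air that is about 23.2% oxygen by mass (a figure quoted later in this document); it then converts a measured AFR to λ and reproduces the ethane–oxygen equivalence-ratio example, assuming the standard stoichiometry C2H6 + 3.5 O2 -> 2 CO2 + 3 H2O. The variable names and rounded outputs are ours, offered as a hedged sketch rather than the article's own worked figures.

# Stoichiometric AFR of pure octane from 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O,
# taking air as roughly 23.2% O2 by mass (approximate molar masses).
M_octane, M_O2 = 114.23, 32.00
afr_stoich_octane = (12.5 * M_O2 / M_octane) / 0.232
print(round(afr_stoich_octane, 1))                  # ~15.1, matching the figure above

# Air-fuel equivalence ratio: lambda = measured AFR / stoichiometric AFR.
def lambda_from_afr(afr, afr_stoich):
    return afr / afr_stoich

print(round(lambda_from_afr(12.5, 14.7), 3))        # ~0.85, a typical max-power mixture

# Fuel-air equivalence ratio (phi) for 1 mol ethane + 1 mol oxygen,
# using C2H6 + 3.5 O2 -> 2 CO2 + 3 H2O as the stoichiometric reaction.
M_C2H6 = 30.07
phi_by_mass = (M_C2H6 / M_O2) / (M_C2H6 / (3.5 * M_O2))
phi_by_moles = (1 / 1) / (1 / 3.5)
print(round(phi_by_mass, 2), round(phi_by_moles, 2))   # both 3.5, independent of the basis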
The stoichiometric mixture fraction, Z st , is the value of Z obtained for a stoichiometric mixture; it is related to λ (lambda) and Φ (phi) by corresponding algebraic expressions. In industrial fired heaters , power plant steam generators, and large gas-fired turbines , the more common terms are percent excess combustion air and percent stoichiometric air. [ 6 ] [ 7 ] For example, excess combustion air of 15 percent means that 15 percent more than the required stoichiometric air (or 115 percent of stoichiometric air) is being used. A combustion control point can be defined by specifying the percent excess air (or oxygen) in the oxidant , or by specifying the percent oxygen in the combustion product. [ 8 ] An air–fuel ratio meter may be used to measure the percent oxygen in the combustion gas, from which the percent excess oxygen can be calculated from stoichiometry and a mass balance for fuel combustion. For example, for propane ( C 3 H 8 ) combustion between stoichiometric and 30 percent excess air (a mass AFR between 15.58 and 20.3), the percent oxygen in the combustion gas rises in step with the percent excess air, as sketched below.
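As an illustration of that last point, the following sketch relates percent excess air to the oxygen content of the wet combustion products of propane, assuming complete combustion C3H8 + 5 O2 -> 3 CO2 + 4 H2O and air that is 20.95% O2 by volume. The wet-basis choice, the helper names, and the printed values are our own assumptions and arithmetic, not figures taken from the source.

# Percent O2 (by volume, wet basis) in the combustion products of propane
# as a function of percent excess air, assuming complete combustion.
N2_PER_O2 = (100 - 20.95) / 20.95           # ~3.77 mol N2 per mol O2 in air

def o2_in_wet_flue_gas(excess_air_percent):
    e = excess_air_percent / 100.0
    o2_supplied = 5 * (1 + e)               # O2 supplied per mol propane
    n2 = o2_supplied * N2_PER_O2            # inert nitrogen passes straight through
    co2, h2o, o2_left = 3, 4, 5 * e
    total = co2 + h2o + o2_left + n2
    return 100.0 * o2_left / total

for excess in (0, 15, 30):
    print(excess, round(o2_in_wet_flue_gas(excess), 2))   # 0 -> 0.0, 15 -> ~2.5, 30 -> ~4.5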
https://en.wikipedia.org/wiki/Stoichiometric_air_ratio
Stoichiometry ( /ˌstɔɪkiˈɒmɪtri/ ) is the relationship between the masses of reactants and products before, during, and following chemical reactions . Stoichiometry is based on the law of conservation of mass ; the total mass of reactants must equal the total mass of products, and the relationship between reactants and products forms a ratio of positive integers. This means that if the amounts of the separate reactants are known, then the amount of the product can be calculated. Conversely, if one reactant has a known quantity and the quantity of the products can be empirically determined, then the amount of the other reactants can also be calculated. This is illustrated by the balanced equation for the combustion of methane, CH 4 + 2 O 2 → CO 2 + 2 H 2 O . Here, one molecule of methane reacts with two molecules of oxygen gas to yield one molecule of carbon dioxide and two molecules of liquid water . This particular chemical equation is an example of complete combustion . The numbers in front of each quantity are a set of stoichiometric coefficients which directly reflect the molar ratios between the products and reactants. Stoichiometry measures these quantitative relationships, and is used to determine the amount of products and reactants that are produced or needed in a given reaction. Describing the quantitative relationships among substances as they participate in chemical reactions is known as reaction stoichiometry . In the example above, reaction stoichiometry measures the relationship between the quantities of methane and oxygen that react to form carbon dioxide and water: for every mole of methane combusted, two moles of oxygen are consumed, one mole of carbon dioxide is produced, and two moles of water are produced. Because of the well known relationship of moles to atomic weights , the ratios that are arrived at by stoichiometry can be used to determine quantities by weight in a reaction described by a balanced equation. This is called composition stoichiometry . Gas stoichiometry deals with reactions solely involving gases, where the gases are at a known temperature, pressure, and volume and can be assumed to be ideal gases . For gases, the volume ratio is ideally the same by the ideal gas law , but the mass ratio of a single reaction has to be calculated from the molecular masses of the reactants and products. In practice, because of the existence of isotopes , molar masses are used instead in calculating the mass ratio. The term stoichiometry was first used by Jeremias Benjamin Richter in 1792 when the first volume of Richter's Anfangsgründe der Stöchyometrie oder Meßkunst chymischer Elemente ( Fundamentals of Stoichiometry, or the Art of Measuring the Chemical Elements ) was published. [ 1 ] The term is derived from the Ancient Greek words στοιχεῖον stoikheîon "element" [ 2 ] and μέτρον métron "measure". Ludwig Darmstaedter and Ralph E. Oesper have written a useful account of this history. [ 3 ] A stoichiometric amount [ 4 ] or stoichiometric ratio of a reagent is the optimum amount or ratio at which, assuming that the reaction proceeds to completion, all of the reagent is consumed and none remains in excess. Stoichiometry rests upon a few very basic laws that help in understanding it, i.e., the law of conservation of mass , the law of definite proportions (i.e., the law of constant composition ), the law of multiple proportions and the law of reciprocal proportions . In general, chemical reactions combine in definite ratios of chemicals.
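The methane example above can be checked by simple bookkeeping: converting the 1:2:1:2 mole ratios to masses shows that the total mass is conserved. The approximate molar masses and variable names in the sketch below are our own inputs.

# Mass bookkeeping for CH4 + 2 O2 -> CO2 + 2 H2O using approximate atomic masses.
M = {"C": 12.011, "H": 1.008, "O": 15.999}
m_CH4 = M["C"] + 4 * M["H"]
m_O2 = 2 * M["O"]
m_CO2 = M["C"] + 2 * M["O"]
m_H2O = 2 * M["H"] + M["O"]

reactants = m_CH4 + 2 * m_O2
products = m_CO2 + 2 * m_H2O
print(round(reactants, 3), round(products, 3))   # both ~80.04 g per mole of reaction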
Since chemical reactions can neither create nor destroy matter, nor transmute one element into another, the amount of each element must be the same throughout the overall reaction. For example, the number of atoms of a given element X on the reactant side must equal the number of atoms of that element on the product side, whether or not all of those atoms are actually involved in a reaction. [ 5 ] Chemical reactions, as macroscopic unit operations, consist of simply a very large number of elementary reactions , where a single molecule reacts with another molecule. As the reacting molecules (or moieties) consist of a definite set of atoms in an integer ratio, the ratio between reactants in a complete reaction is also in integer ratio. A reaction may consume more than one molecule, and the stoichiometric number counts this number, defined as positive for products (added) and negative for reactants (removed). [ 6 ] The unsigned coefficients are generally referred to as the stoichiometric coefficients . [ 7 ] Each element has an atomic mass , and considering molecules as collections of atoms, compounds have a definite molecular mass , which when expressed in daltons is numerically equal to the molar mass in g / mol . By definition, the atomic mass of carbon-12 is 12 Da , giving a molar mass of 12 g/mol. The number of molecules per mole in a substance is given by the Avogadro constant , exactly 6.022 140 76 × 10 23 mol −1 since the 2019 revision of the SI . Thus, to calculate the stoichiometry by mass, the number of molecules required for each reactant is expressed in moles and multiplied by the molar mass of each to give the mass of each reactant per mole of reaction. The mass ratios can be calculated by dividing each by the total in the whole reaction. Elements in their natural state are mixtures of isotopes of differing mass; thus, atomic masses and thus molar masses are not exactly integers. For instance, instead of an exact 14:3 proportion, 17.04 g of ammonia consists of 14.01 g of nitrogen and 3 × 1.01 g of hydrogen, because natural nitrogen includes a small amount of nitrogen-15, and natural hydrogen includes hydrogen-2 ( deuterium ). A stoichiometric reactant is a reactant that is consumed in a reaction, as opposed to a catalytic reactant , which is not consumed in the overall reaction because it reacts in one step and is regenerated in another step. Stoichiometry is not only used to balance chemical equations but also used in conversions, i.e., converting from grams to moles using molar mass as the conversion factor, or from grams to milliliters using density . For example, to find the amount of NaCl (sodium chloride) in 2.00 g, one would do the following: In the above example, when written out in fraction form, the units of grams form a multiplicative identity, which is equivalent to one (g/g = 1), with the resulting amount in moles (the unit that was needed), as shown in the following equation, Stoichiometry is often used to balance chemical equations (reaction stoichiometry). For example, the two diatomic gases, hydrogen and oxygen , can combine to form a liquid, water, in an exothermic reaction , as described by the following equation: Reaction stoichiometry describes the 2:1:2 ratio of hydrogen, oxygen, and water molecules in the above equation. The molar ratio allows for conversion between moles of one substance and moles of another. 
For example, in the reaction the amount of water that will be produced by the combustion of 0.27 moles of CH 3 OH is obtained using the molar ratio between CH 3 OH and H 2 O of 2 to 4. The term stoichiometry is also often used for the molar proportions of elements in stoichiometric compounds (composition stoichiometry). For example, the stoichiometry of hydrogen and oxygen in H 2 O is 2:1. In stoichiometric compounds, the molar proportions are whole numbers. Stoichiometry can also be used to find the quantity of a product yielded by a reaction. If a piece of solid copper (Cu) were added to an aqueous solution of silver nitrate ( AgNO 3 ), the silver (Ag) would be replaced in a single displacement reaction forming aqueous copper(II) nitrate ( Cu(NO 3 ) 2 ) and solid silver. How much silver is produced if 16.00 grams of Cu is added to the solution of excess silver nitrate? The following steps would be used: The complete balanced equation would be: For the mass to mole step, the mass of copper (16.00 g) would be converted to moles of copper by dividing the mass of copper by its molar mass : 63.55 g/mol. Now that the amount of Cu in moles (0.2518) is found, we can set up the mole ratio. This is found by looking at the coefficients in the balanced equation: Cu and Ag are in a 1:2 ratio. Now that the moles of Ag produced is known to be 0.5036 mol, we convert this amount to grams of Ag produced to come to the final answer: This set of calculations can be further condensed into a single step: For propane ( C 3 H 8 ) reacting with oxygen gas ( O 2 ), the balanced chemical equation is: The mass of water formed if 120 g of propane ( C 3 H 8 ) is burned in excess oxygen is then Stoichiometry is also used to find the right amount of one reactant to "completely" react with the other reactant in a chemical reaction – that is, the stoichiometric amounts that would result in no leftover reactants when the reaction takes place. An example is shown below using the thermite reaction , [ citation needed ] This equation shows that 1 mole of iron(III) oxide and 2 moles of aluminium will produce 1 mole of aluminium oxide and 2 moles of iron . So, to completely react with 85.0 g of iron(III) oxide (0.532 mol), 28.7 g (1.06 mol) of aluminium are needed. The limiting reagent is the reagent that limits the amount of product that can be formed and is completely consumed when the reaction is complete. An excess reactant is a reactant that is left over once the reaction has stopped due to the limiting reactant being exhausted. Consider the equation of roasting lead(II) sulfide (PbS) in oxygen ( O 2 ) to produce lead(II) oxide (PbO) and sulfur dioxide ( SO 2 ): To determine the theoretical yield of lead(II) oxide if 200.0 g of lead(II) sulfide and 200.0 g of oxygen are heated in an open container: Because a lesser amount of PbO is produced for the 200.0 g of PbS, it is clear that PbS is the limiting reagent. In reality, the actual yield is not the same as the stoichiometrically-calculated theoretical yield. Percent yield, then, is expressed in the following equation: If 170.0 g of lead(II) oxide is obtained, then the percent yield would be calculated as follows: Consider the following reaction, in which iron(III) chloride reacts with hydrogen sulfide to produce iron(III) sulfide and hydrogen chloride : The stoichiometric masses for this reaction are: Suppose 90.0 g of FeCl 3 reacts with 52.0 g of H 2 S . 
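The copper and silver-nitrate calculation just described can be reproduced in a few lines using the 1:2 Cu:Ag mole ratio and the molar masses quoted in the text; the iron(III) chloride example introduced at the end of this passage is treated the same way in the sketch that follows it. The code below is a restatement of the worked example, not an independent result.

# Silver produced when 16.00 g Cu displaces Ag from excess AgNO3,
# following the 1:2 Cu:Ag mole ratio in Cu + 2 AgNO3 -> Cu(NO3)2 + 2 Ag.
M_Cu, M_Ag = 63.55, 107.87                 # g/mol
mol_Cu = 16.00 / M_Cu                      # ~0.252 mol
mol_Ag = 2 * mol_Cu                        # ~0.504 mol
mass_Ag = mol_Ag * M_Ag
print(round(mol_Cu, 3), round(mol_Ag, 3), round(mass_Ag, 1))   # 0.252 0.504 54.3 (g Ag)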
To find the limiting reagent and the mass of HCl produced by the reaction, we change the above amounts by a factor of 90/324.41 and obtain the following amounts: The limiting reactant (or reagent) is FeCl 3 , since all 90.00 g of it is used up while only 28.37 g H 2 S are consumed. Thus, 52.0 − 28.4 = 23.6 g H 2 S left in excess. The mass of HCl produced is 60.7 g. By looking at the stoichiometry of the reaction, one might have guessed FeCl 3 being the limiting reactant; three times more FeCl 3 is used compared to H 2 S (324 g vs 102 g). Often, more than one reaction is possible given the same starting materials. The reactions may differ in their stoichiometry. For example, the methylation of benzene ( C 6 H 6 ), through a Friedel–Crafts reaction using AlCl 3 as a catalyst, may produce singly methylated ( C 6 H 5 CH 3 ), doubly methylated ( C 6 H 4 (CH 3 ) 2 ), or still more highly methylated ( C 6 H 6− n (CH 3 ) n ) products, as shown in the following example, In this example, which reaction takes place is controlled in part by the relative concentrations of the reactants. In lay terms, the stoichiometric coefficient of any given component is the number of molecules and/or formula units that participate in the reaction as written. A related concept is the stoichiometric number (using IUPAC nomenclature), wherein the stoichiometric coefficient is multiplied by +1 for all products and by −1 for all reactants. For example, in the reaction CH 4 + 2 O 2 → CO 2 + 2 H 2 O , the stoichiometric number of CH 4 is −1, the stoichiometric number of O 2 is −2, for CO 2 it would be +1 and for H 2 O it is +2. In more technically precise terms, the stoichiometric number in a chemical reaction system of the i -th component is defined as or where N i {\displaystyle N_{i}} is the number of molecules of i , and ξ {\displaystyle \xi } is the progress variable or extent of reaction . [ 8 ] [ 9 ] The stoichiometric number ν i {\displaystyle \nu _{i}} represents the degree to which a chemical species participates in a reaction. The convention is to assign negative numbers to reactants (which are consumed) and positive ones to products , consistent with the convention that increasing the extent of reaction will correspond to shifting the composition from reactants towards products. However, any reaction may be viewed as going in the reverse direction, and in that point of view, would change in the negative direction in order to lower the system's Gibbs free energy. Whether a reaction actually will go in the arbitrarily selected forward direction or not depends on the amounts of the substances present at any given time, which determines the kinetics and thermodynamics , i.e., whether equilibrium lies to the right or the left of the initial state, In reaction mechanisms , stoichiometric coefficients for each step are always integers , since elementary reactions always involve whole molecules. If one uses a composite representation of an overall reaction, some may be rational fractions . There are often chemical species present that do not participate in a reaction; their stoichiometric coefficients are therefore zero. Any chemical species that is regenerated, such as a catalyst , also has a stoichiometric coefficient of zero. The simplest possible case is an isomerization in which ν B = 1 since one molecule of B is produced each time the reaction occurs, while ν A = −1 since one molecule of A is necessarily consumed. 
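Returning to the iron(III) chloride and hydrogen sulfide example above, the scaling-factor approach described in the text can be written out directly. The balanced reaction 2 FeCl3 + 3 H2S -> Fe2S3 + 6 HCl and the molar masses below are assumptions consistent with the stated stoichiometric masses (324.41 g vs 102 g).

# Limiting-reagent check for 90.0 g FeCl3 + 52.0 g H2S.
M_FeCl3, M_H2S, M_HCl = 162.2, 34.08, 36.46    # g/mol (approximate)
stoich_FeCl3 = 2 * M_FeCl3                      # ~324.4 g per "unit" of reaction
stoich_H2S = 3 * M_H2S                          # ~102.2 g
stoich_HCl = 6 * M_HCl                          # ~218.8 g

scale = 90.0 / stoich_FeCl3                     # the factor ~90/324.41 used in the text
h2s_needed = scale * stoich_H2S                 # ~28.4 g, so FeCl3 is limiting
h2s_excess = 52.0 - h2s_needed                  # ~23.6 g H2S left over
hcl_made = scale * stoich_HCl                   # ~60.7 g HCl produced
print(round(scale, 3), round(h2s_needed, 1), round(h2s_excess, 1), round(hcl_made, 1))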
In any chemical reaction, not only is the total mass conserved but also the numbers of atoms of each kind are conserved, and this imposes corresponding constraints on possible values for the stoichiometric coefficients. There are usually multiple reactions proceeding simultaneously in any natural reaction system, including those in biology . Since any chemical component can participate in several reactions simultaneously, the stoichiometric number of the i -th component in the k -th reaction is defined as so that the total (differential) change in the amount of the i -th component is Extents of reaction provide the clearest and most explicit way of representing compositional change, although they are not yet widely used. With complex reaction systems, it is often useful to consider both the representation of a reaction system in terms of the amounts of the chemicals present { N i } ( state variables ), and the representation in terms of the actual compositional degrees of freedom , as expressed by the extents of reaction { ξ k } . The transformation from a vector expressing the extents to a vector expressing the amounts uses a rectangular matrix whose elements are the stoichiometric numbers [ ν i k ] . The maximum and minimum for any ξ k occur whenever the first of the reactants is depleted for the forward reaction; or the first of the "products" is depleted if the reaction as viewed as being pushed in the reverse direction. This is a purely kinematic restriction on the reaction simplex , a hyperplane in composition space, or N ‑space, whose dimensionality equals the number of linearly-independent chemical reactions. This is necessarily less than the number of chemical components, since each reaction manifests a relation between at least two chemicals. The accessible region of the hyperplane depends on the amounts of each chemical species actually present, a contingent fact. Different such amounts can even generate different hyperplanes, all sharing the same algebraic stoichiometry. In accord with the principles of chemical kinetics and thermodynamic equilibrium , every chemical reaction is reversible , at least to some degree, so that each equilibrium point must be an interior point of the simplex. As a consequence, extrema for the ξ s will not occur unless an experimental system is prepared with zero initial amounts of some products. The number of physically -independent reactions can be even greater than the number of chemical components, and depends on the various reaction mechanisms. For example, there may be two (or more) reaction paths for the isomerism above. The reaction may occur by itself, but faster and with different intermediates, in the presence of a catalyst. The (dimensionless) "units" may be taken to be molecules or moles . Moles are most commonly used, but it is more suggestive to picture incremental chemical reactions in terms of molecules. The N s and ξ s are reduced to molar units by dividing by the Avogadro constant . While dimensional mass units may be used, the comments about integers are then no longer applicable. In complex reactions, stoichiometries are often represented in a more compact form called the stoichiometry matrix. The stoichiometry matrix is denoted by the symbol N . [ 10 ] [ 11 ] [ 12 ] If a reaction network has n reactions and m participating molecular species, then the stoichiometry matrix will have correspondingly m rows and n columns. 
For example, consider the system of reactions shown below: This system comprises four reactions and five different molecular species. The stoichiometry matrix for this system can be written as: where the rows correspond to S 1 , S 2 , S 3 , S 4 and S 5 , respectively. The process of converting a reaction scheme into a stoichiometry matrix can be a lossy transformation: for example, the stoichiometries in the second reaction simplify when included in the matrix. This means that it is not always possible to recover the original reaction scheme from a stoichiometry matrix. Often the stoichiometry matrix is combined with the rate vector, v , and the species vector, x to form a compact equation, the biochemical systems equation , describing the rates of change of the molecular species: Gas stoichiometry is the quantitative relationship (ratio) between reactants and products in a chemical reaction with reactions that produce gases . Gas stoichiometry applies when the gases produced are assumed to be ideal , and the temperature, pressure, and volume of the gases are all known. The ideal gas law is used for these calculations. Often, but not always, the standard temperature and pressure (STP) are taken as 0 °C and 1 bar and used as the conditions for gas stoichiometric calculations. Gas stoichiometry calculations solve for the unknown volume or mass of a gaseous product or reactant. For example, if we wanted to calculate the volume of gaseous NO 2 produced from the combustion of 100 g of NH 3 , by the reaction: we would carry out the following calculations: There is a 1:1 molar ratio of NH 3 to NO 2 in the above balanced combustion reaction, so 5.871 mol of NO 2 will be formed. We will employ the ideal gas law to solve for the volume at 0 °C (273.15 K) and 1 atmosphere using the gas law constant of R = 0.08206 L·atm·K −1 ·mol −1 : Gas stoichiometry often involves having to know the molar mass of a gas, given the density of that gas. The ideal gas law can be re-arranged to obtain a relation between the density and the molar mass of an ideal gas: and thus: where: In the combustion reaction, oxygen reacts with the fuel, and the point where exactly all oxygen is consumed and all fuel burned is defined as the stoichiometric point. With more oxygen (overstoichiometric combustion), some of it stays unreacted. Likewise, if the combustion is incomplete due to lack of sufficient oxygen, fuel remains unreacted. (Unreacted fuel may also remain because of slow combustion or insufficient mixing of fuel and oxygen – this is not due to stoichiometry.) Different hydrocarbon fuels have different contents of carbon, hydrogen and other elements, thus their stoichiometry varies. Oxygen makes up only 20.95% of the volume of air, and only 23.20% of its mass. [ 13 ] The air-fuel ratios listed below are much higher than the equivalent oxygen-fuel ratios, due to the high proportion of inert gasses in the air. Gasoline engines can run at stoichiometric air-to-fuel ratio, because gasoline is quite volatile and is mixed (sprayed or carburetted) with the air prior to ignition. Diesel engines, in contrast, run lean, with more air available than simple stoichiometry would require. Diesel fuel is less volatile and is effectively burned as it is injected. [ 16 ]
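The ammonia gas-stoichiometry example above can be checked numerically. The 1:1 NH3:NO2 mole ratio, the conditions (0 °C, 1 atm) and the value of R are as stated in the text; the molar mass of NH3 (about 17.03 g/mol) and the variable names are our own inputs, so treat this as a sketch.

# Volume of NO2 produced from 100 g of NH3 at 0 degC and 1 atm (ideal gas law).
M_NH3 = 17.031          # g/mol
R = 0.08206             # L atm / (K mol)
T, P = 273.15, 1.0      # K, atm

n_NH3 = 100.0 / M_NH3   # ~5.87 mol
n_NO2 = n_NH3           # 1:1 molar ratio in the balanced combustion
V = n_NO2 * R * T / P
print(round(n_NO2, 2), round(V, 1))   # ~5.87 mol, ~131.6 L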
https://en.wikipedia.org/wiki/Stoichiometry
In fluid dynamics , Stokes' law gives the frictional force – also called drag force – exerted on spherical objects moving at very small Reynolds numbers in a viscous fluid . [ 1 ] It was derived by George Gabriel Stokes in 1851 by solving the Stokes flow limit for small Reynolds numbers of the Navier–Stokes equations . [ 2 ] The force of viscosity on a small sphere moving through a viscous fluid is given by: [ 3 ] [ 4 ] where (in SI units ): Stokes' law makes the following assumptions for the behavior of a particle in a fluid: Depending on desired accuracy, the failure to meet these assumptions may or may not require the use of a more complicated model. To 10% error, for instance, velocities need be limited to those giving Re < 1. For molecules Stokes' law is used to define their Stokes radius and diameter . The CGS unit of kinematic viscosity was named "stokes" after his work. Stokes' law is the basis of the falling-sphere viscometer , in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes' law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters are normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine or golden syrup as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes. Several school experiments often involve varying the temperature and/or concentration of the substances used in order to demonstrate the effects this has on the viscosity. Industrial methods include many different oils , and polymer liquids such as solutions. The importance of Stokes' law is illustrated by the fact that it played a critical role in the research leading to at least three Nobel Prizes. [ 5 ] Stokes' law is important for understanding the swimming of microorganisms and sperm ; also, the sedimentation of small particles and organisms in water, under the force of gravity. [ 5 ] In air, the same theory can be used to explain why small water droplets (or ice crystals) can remain suspended in air (as clouds) until they grow to a critical size and start falling as rain (or snow and hail). [ 6 ] Similar use of the equation can be made in the settling of fine particles in water or other fluids. [ citation needed ] At terminal (or settling) velocity , the excess force F e due to the difference between the weight and buoyancy of the sphere (both caused by gravity [ 7 ] ) is given by: where (in SI units ): Requiring the force balance F d = F e and solving for the velocity v gives the terminal velocity v s . Note that since the excess force increases as R 3 and Stokes' drag increases as R , the terminal velocity increases as R 2 and thus varies greatly with particle size as shown below. If a particle only experiences its own weight while falling in a viscous fluid, then a terminal velocity is reached when the sum of the frictional and the buoyant forces on the particle due to the fluid exactly balances the gravitational force . 
This velocity v [m/s] is given by: [ 7 ] where (in SI units): In Stokes flow , at very low Reynolds number , the convective acceleration terms in the Navier–Stokes equations are neglected. Then the flow equations become, for an incompressible steady flow : [ 8 ] where: By using some vector calculus identities , these equations can be shown to result in Laplace's equations for the pressure and each of the components of the vorticity vector: [ 8 ] Additional forces like those by gravity and buoyancy have not been taken into account, but can easily be added since the above equations are linear, so linear superposition of solutions and associated forces can be applied. For the case of a sphere in a uniform far field flow, it is advantageous to use a cylindrical coordinate system ( r , φ , z ) . The z –axis is through the centre of the sphere and aligned with the mean flow direction, while r is the radius as measured perpendicular to the z –axis. The origin is at the sphere centre. Because the flow is axisymmetric around the z –axis, it is independent of the azimuth φ . In this cylindrical coordinate system, the incompressible flow can be described with a Stokes stream function ψ , depending on r and z : [ 9 ] [ 10 ] with u r and u z the flow velocity components in the r and z direction, respectively. The azimuthal velocity component in the φ –direction is equal to zero, in this axisymmetric case. The volume flux, through a tube bounded by a surface of some constant value ψ , is equal to 2 πψ and is constant. [ 9 ] For this case of an axisymmetric flow, the only non-zero component of the vorticity vector ω is the azimuthal φ –component ω φ [ 11 ] [ 12 ] The Laplace operator , applied to the vorticity ω φ , becomes in this cylindrical coordinate system with axisymmetry: [ 12 ] From the previous two equations, and with the appropriate boundary conditions, for a far-field uniform-flow velocity u in the z –direction and a sphere of radius R , the solution is found to be [ 13 ] The solution of velocity in cylindrical coordinates and components follows as: The solution of vorticity in cylindrical coordinates follows as: The solution of pressure in cylindrical coordinates follows as: The solution of pressure in spherical coordinates follows as: The formula of pressure is also called dipole potential analogous to the concept in electrostatics. 
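The terminal settling velocity referred to at the start of this passage is, in its standard form, v = 2 (ρp − ρf) g R² / (9 μ), which follows from balancing the Stokes drag against the net weight. The sketch below implements that formula; the formula's arrangement and the ball-in-glycerine numbers are our own illustrative assumptions, not values from the article.

def stokes_terminal_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
    """Terminal settling velocity of a small sphere in a viscous fluid,
    v = 2 (rho_p - rho_f) g R^2 / (9 mu), valid only at very small Reynolds number."""
    return 2.0 * (rho_particle - rho_fluid) * g * radius ** 2 / (9.0 * mu)

# Rough example: a 1 mm steel ball falling in glycerine
# (densities in kg/m^3, viscosity in Pa.s; the values are approximate).
v = stokes_terminal_velocity(radius=0.5e-3, rho_particle=7800, rho_fluid=1260, mu=1.4)
print(round(v * 1000, 2), "mm/s")   # a few millimetres per second, so Re << 1 holds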
A more general formulation, with arbitrary far-field velocity-vector u ∞ {\displaystyle \mathbf {u} _{\infty }} , in cartesian coordinates x = ( x , y , z ) T {\displaystyle \mathbf {x} =(x,y,z)^{T}} follows with: u ( x ) = R 3 4 ⋅ ( 3 ( u ∞ ⋅ x ) ⋅ x ‖ x ‖ 5 − u ∞ ‖ x ‖ 3 ) ⏟ conservative: curl=0, ∇ 2 u = 0 + u ∞ ⏟ far-field ⏟ Terms of Boundary-Condition − 3 R 4 ⋅ ( u ∞ ‖ x ‖ + ( u ∞ ⋅ x ) ⋅ x ‖ x ‖ 3 ) ⏟ non-conservative: curl = ω ( x ) , μ ∇ 2 u = ∇ p = [ 3 R 3 4 x ⊗ x ‖ x ‖ 5 − R 3 4 I ‖ x ‖ 3 − 3 R 4 x ⊗ x ‖ x ‖ 3 − 3 R 4 I ‖ x ‖ + I ] ⋅ u ∞ {\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {x} )&=\underbrace {\underbrace {{\frac {R^{3}}{4}}\cdot \left({\frac {3\left(\mathbf {u} _{\infty }\cdot \mathbf {x} \right)\cdot \mathbf {x} }{\|\mathbf {x} \|^{5}}}-{\frac {\mathbf {u} _{\infty }}{\|\mathbf {x} \|^{3}}}\right)} _{{\text{conservative: curl=0,}}\ \nabla ^{2}\mathbf {u} =0}+\underbrace {\mathbf {u} _{\infty }} _{\text{far-field}}} _{\text{Terms of Boundary-Condition}}\;\underbrace {-{\frac {3R}{4}}\cdot \left({\frac {\mathbf {u} _{\infty }}{\|\mathbf {x} \|}}+{\frac {\left(\mathbf {u} _{\infty }\cdot \mathbf {x} \right)\cdot \mathbf {x} }{\|\mathbf {x} \|^{3}}}\right)} _{{\text{non-conservative: curl}}={\boldsymbol {\omega }}(\mathbf {x} ),\ \mu \nabla ^{2}\mathbf {u} =\nabla p}\\[8pt]&=\left[{\frac {3R^{3}}{4}}{\frac {\mathbf {x\otimes \mathbf {x} } }{\|\mathbf {x} \|^{5}}}-{\frac {R^{3}}{4}}{\frac {\mathbf {I} }{\|\mathbf {x} \|^{3}}}-{\frac {3R}{4}}{\frac {\mathbf {x} \otimes \mathbf {x} }{\|\mathbf {x} \|^{3}}}-{\frac {3R}{4}}{\frac {\mathbf {I} }{\|\mathbf {x} \|}}+\mathbf {I} \right]\cdot \mathbf {u} _{\infty }\end{aligned}}} In this formulation the non-conservative term represents a kind of so-called Stokeslet . The Stokeslet is the Green's function of the Stokes-Flow-Equations. The conservative term is equal to the dipole gradient field . The formula of vorticity is analogous to the Biot–Savart law in electromagnetism . Alternatively, in a more compact way, one can formulate the velocity field as follows: where H = ∇ ⊗ ∇ {\displaystyle \mathrm {H} =\nabla \otimes \nabla } is the Hessian matrix differential operator and S = I ∇ 2 − H {\displaystyle \mathrm {S} =\mathbf {I} \nabla ^{2}-\mathrm {H} } is a differential operator composed as the difference of the Laplacian and the Hessian. In this way it becomes explicitly clear, that the solution is composed from derivatives of a Coulomb potential ( 1 / ‖ x ‖ {\displaystyle 1/\|\mathbf {x} \|} ) and a Biharmonic potential ( ‖ x ‖ {\displaystyle \|\mathbf {x} \|} ). The differential operator S {\displaystyle \mathrm {S} } applied to the vector norm ‖ x ‖ {\displaystyle \|\mathbf {x} \|} generates the Stokeslet. The following formula describes the viscous stress tensor for the special case of Stokes flow. It is needed in the calculation of the force acting on the particle. In Cartesian coordinates the vector-gradient ∇ u {\displaystyle \nabla \mathbf {u} } is identical to the Jacobian matrix . The matrix I represents the identity matrix . The force acting on the sphere can be calculated via the integral of the stress tensor over the surface of the sphere, where e r represents the radial unit-vector of spherical-coordinates : Although the liquid is static and the sphere is moving with a certain velocity, with respect to the frame of sphere, the sphere is at rest and liquid is flowing in the opposite direction to the motion of the sphere.
https://en.wikipedia.org/wiki/Stokes'_law
In the science of fluid flow , Stokes' paradox is the phenomenon that there can be no creeping flow of a fluid around a disk in two dimensions; or, equivalently, the fact there is no non-trivial steady-state solution for the Stokes equations around an infinitely long cylinder. This is opposed to the 3-dimensional case, where Stokes' method provides a solution to the problem of flow around a sphere. [ 1 ] [ 2 ] Stokes' paradox was resolved by Carl Wilhelm Oseen in 1910, by introducing the Oseen equations which improve upon the Stokes equations – by adding convective acceleration . The velocity vector u {\displaystyle \mathbf {u} } of the fluid may be written in terms of the stream function ψ {\displaystyle \psi } as The stream function in a Stokes flow problem, ψ {\displaystyle \psi } satisfies the biharmonic equation . [ 3 ] By regarding the ( x , y ) {\displaystyle (x,y)} -plane as the complex plane , the problem may be dealt with using methods of complex analysis . In this approach, ψ {\displaystyle \psi } is either the real or imaginary part of Here z = x + i y {\displaystyle z=x+iy} , where i {\displaystyle i} is the imaginary unit, z ¯ = x − i y {\displaystyle {\bar {z}}=x-iy} , and f ( z ) , g ( z ) {\displaystyle f(z),g(z)} are holomorphic functions outside of the disk. We will take the real part without loss of generality . Now the function u {\displaystyle u} , defined by u = u x + i u y {\displaystyle u=u_{x}+iu_{y}} is introduced. u {\displaystyle u} can be written as u = − 2 i ∂ ψ ∂ z ¯ {\displaystyle u=-2i{\frac {\partial \psi }{\partial {\bar {z}}}}} , or 1 2 i u = ∂ ψ ∂ z ¯ {\displaystyle {\frac {1}{2}}iu={\frac {\partial \psi }{\partial {\bar {z}}}}} (using the Wirtinger derivatives ). This is calculated to be equal to Without loss of generality, the disk may be assumed to be the unit disk , consisting of all complex numbers z of absolute value smaller or equal to 1. The boundary conditions are: whenever | z | = 1 {\displaystyle |z|=1} , [ 1 ] [ 5 ] and by representing the functions f , g {\displaystyle f,g} as Laurent series : [ 6 ] the first condition implies f n = 0 , g n = 0 {\displaystyle f_{n}=0,g_{n}=0} for all n ≥ 2 {\displaystyle n\geq 2} . Using the polar form of z {\displaystyle z} results in z n = r n e i n θ , z ¯ n = r n e − i n θ {\displaystyle z^{n}=r^{n}e^{in\theta },{\bar {z}}^{n}=r^{n}e^{-in\theta }} . After deriving the series form of u , substituting this into it along with r = 1 {\displaystyle r=1} , and changing some indices, the second boundary condition translates to Since the complex trigonometric functions e i n θ {\displaystyle e^{in\theta }} compose a linearly independent set, it follows that all coefficients in the series are zero. Examining these conditions for every n {\displaystyle n} after taking into account the condition at infinity shows that f {\displaystyle f} and g {\displaystyle g} are necessarily of the form where a {\displaystyle a} is an imaginary number (opposite to its own complex conjugate ), and b {\displaystyle b} and c {\displaystyle c} are complex numbers. Substituting this into u {\displaystyle u} gives the result that u = 0 {\displaystyle u=0} globally, compelling both u x {\displaystyle u_{x}} and u y {\displaystyle u_{y}} to be zero. Therefore, there can be no motion – the only solution is that the cylinder is at rest relative to all points of the fluid. 
The paradox is caused by the limited validity of Stokes' approximation, as explained in Oseen's criticism: the validity of Stokes' equations relies on Reynolds number being small, and this condition cannot hold for arbitrarily large distances r {\displaystyle r} . [ 7 ] [ 2 ] A correct solution for a cylinder was derived using Oseen's equations , and the same equations lead to an improved approximation of the drag force on a sphere . [ 8 ] [ 9 ] On the contrary to Stokes' paradox , there exists the unsteady-state solution of the same problem which models a fluid flow moving around a circular cylinder with Reynolds number being small. This solution can be given by explicit formula in terms of vorticity of the flow's vector field. The vorticity of Stokes' flow is given by the following relation: [ 10 ] w k ( t , r ) = W | k | , | k | − 1 − 1 [ e − λ 2 t W | k | , | k | − 1 [ w k ( 0 , ⋅ ) ] ( λ ) ] ( t , r ) . {\displaystyle w_{k}(t,r)=W_{|k|,|k|-1}^{-1}\left[e^{-\lambda ^{2}t}W_{|k|,|k|-1}[w_{k}(0,\cdot )](\lambda )\right](t,r).} Here w k ( t , r ) {\displaystyle w_{k}(t,r)} - are the Fourier coefficients of the vorticity's expansion by polar angle which are defined on ( r 0 , ∞ ) {\displaystyle (r_{0},\infty )} , r 0 {\displaystyle r_{0}} - radius of the cylinder, W | k | , | k | − 1 {\displaystyle W_{|k|,|k|-1}} , W | k | , | k | − 1 − 1 {\displaystyle W_{|k|,|k|-1}^{-1}} are the direct and inverse special Weber's transforms, [ 11 ] and initial function for vorticity w k ( 0 , r ) {\displaystyle w_{k}(0,r)} satisfies no-slip boundary condition. Special Weber's transform has a non-trivial kernel, but from the no-slip condition follows orthogonality of the vorticity flow to the kernel. [ 10 ] Special Weber's transform [ 11 ] is an important tool in solving problems of the hydrodynamics . It is defined for k ∈ R {\displaystyle k\in \mathbb {R} } as W k , k − 1 [ f ] ( λ ) = ∫ r 0 ∞ J k ( λ s ) Y k − 1 ( λ r 0 ) − Y k ( λ s ) J k − 1 ( λ r 0 ) J k − 1 2 ( λ r 0 ) + Y k − 1 2 ( λ r 0 ) f ( s ) s d s , {\displaystyle W_{k,k-1}[f](\lambda )=\int _{r_{0}}^{\infty }{\frac {J_{k}(\lambda s)Y_{k-1}(\lambda r_{0})-Y_{k}(\lambda s)J_{k-1}(\lambda r_{0})}{\sqrt {J_{k-1}^{2}(\lambda r_{0})+Y_{k-1}^{2}(\lambda r_{0})}}}f(s)sds,} where J k {\displaystyle J_{k}} , Y k {\displaystyle Y_{k}} are the Bessel functions of the first and second kind [ 12 ] respectively. For k > 1 {\displaystyle k>1} it has a non-trivial kernel [ 13 ] [ 10 ] which consists of the functions C / r k ∈ ker ⁡ ( W k , k − 1 ) {\displaystyle C/r^{k}\in \ker(W_{k,k-1})} . The inverse transform is given by the formula W k , k − 1 − 1 [ f ^ ] ( r ) = ∫ 0 ∞ J k ( λ r ) Y k − 1 ( λ r 0 ) − Y k ( λ s ) J k − 1 ( λ r 0 ) J k − 1 2 ( λ r 0 ) + Y k − 1 2 ( λ r 0 ) f ^ ( λ ) λ d λ . {\displaystyle W_{k,k-1}^{-1}[{\hat {f}}](r)=\int _{0}^{\infty }{\frac {J_{k}(\lambda r)Y_{k-1}(\lambda r_{0})-Y_{k}(\lambda s)J_{k-1}(\lambda r_{0})}{\sqrt {J_{k-1}^{2}(\lambda r_{0})+Y_{k-1}^{2}(\lambda r_{0})}}}{\hat {f}}(\lambda )\lambda d\lambda .} Due to non-triviality of the kernel, the inversion identity f ( r ) = W k , k − 1 − 1 [ W k , k − 1 [ f ] ] ( r ) {\displaystyle f(r)=W_{k,k-1}^{-1}\left[W_{k,k-1}[f]\right](r)} is valid if k ≤ 1 {\displaystyle k\leq 1} . 
Also it is valid in the case of k > 1 {\displaystyle k>1} but only for functions, which are orthogonal to the kernel of W k , k − 1 {\displaystyle W_{k,k-1}} in L 2 ( r 0 , ∞ ) {\displaystyle L_{2}(r_{0},\infty )} with infinitesimal element r d r {\displaystyle rdr} : ∫ r 0 ∞ 1 r k f ( r ) r d r = 0 , k > 1. {\displaystyle \int _{r_{0}}^{\infty }{\frac {1}{r^{k}}}f(r)rdr=0,~k>1.} In exterior of the disc of radius r 0 {\displaystyle r_{0}} B r 0 = { x ∈ R 2 , | x | > r 0 } {\displaystyle B_{r_{0}}=\{\mathbf {x} \in \mathbb {R} ^{2},~\vert \mathbf {x} \vert >r_{0}\}} the Biot-Savart law v ( x ) = 1 2 π ∫ B r 0 ( x − y ) ⊥ | x − y | 2 w ( y ) d y + v ∞ , {\displaystyle \mathbf {v} (\mathbf {x} )={\frac {1}{2\pi }}\int _{B_{r_{0}}}{\frac {(\mathbf {x} -\mathbf {y} )^{\perp }}{\vert \mathbf {x} -\mathbf {y} \vert ^{2}}}w(\mathbf {y} )\operatorname {d\mathbf {y} } +\mathbf {v} _{\infty },} restores the velocity field v ( x ) {\displaystyle \mathbf {v} (\mathbf {x} )} which is induced by the vorticity w ( x ) {\displaystyle w(\mathbf {x} )} with zero-circularity and given constant velocity v ∞ {\displaystyle \mathbf {v} _{\infty }} at infinity. No-slip condition for x ∈ S r 0 = { x ∈ R 2 , | x | = r 0 } {\displaystyle \mathbf {x} \in S_{r_{0}}=\{\mathbf {x} \in \mathbb {R} ^{2},~\vert \mathbf {x} \vert =r_{0}\}} 1 2 π ∫ B r 0 ( x − y ) ⊥ | x − y | 2 w ( y ) d y + v ∞ = 0 {\displaystyle {\frac {1}{2\pi }}\int _{B_{r_{0}}}{\frac {(\mathbf {x} -\mathbf {y} )^{\perp }}{\vert \mathbf {x} -\mathbf {y} \vert ^{2}}}w(\mathbf {y} )\operatorname {d\mathbf {y} } +\mathbf {v} _{\infty }=0} leads to the relations for k ∈ Z {\displaystyle k\in \mathbf {Z} } : ∫ r 0 ∞ r − | k | + 1 w k ( r ) d r = d k , {\displaystyle \int _{r_{0}}^{\infty }r^{-\vert k\vert +1}w_{k}(r)dr=d_{k},} where d k = δ | k | , 1 ( v ∞ , y + i k v ∞ , x ) , {\displaystyle d_{k}=\delta _{\vert k\vert ,1}(v_{\infty ,y}+ikv_{\infty ,x}),} δ | k | , 1 {\displaystyle \delta _{\vert k\vert ,1}} is the Kronecker delta , v ∞ , x {\displaystyle v_{\infty ,x}} , v ∞ , y {\displaystyle v_{\infty ,y}} are the cartesian coordinates of v ∞ {\displaystyle \mathbf {v} _{\infty }} . In particular, from the no-slip condition follows orthogonality the vorticity to the kernel of the Weber's transform W k , k − 1 {\displaystyle W_{k,k-1}} : ∫ r 0 ∞ r − | k | + 1 w k ( r ) d r = 0 f o r | k | > 1. {\displaystyle \int _{r_{0}}^{\infty }r^{-\vert k\vert +1}w_{k}(r)dr=0~for~|k|>1.} Vorticity w ( t , x ) {\displaystyle w(t,\mathbf {x} )} for Stokes flow satisfies to the vorticity equation ∂ w ( t , x ) ∂ t − Δ w = 0 , {\displaystyle {\frac {\partial w(t,\mathbf {x} )}{\partial t}}-\Delta w=0,} or in terms of the Fourier coefficients in the expansion by polar angle ∂ w k ( t , r ) ∂ t − Δ w k = 0 , {\displaystyle {\frac {\partial w_{k}(t,r)}{\partial t}}-\Delta w_{k}=0,} where Δ k w k ( t , r ) = 1 r ∂ ∂ r ( r ∂ ∂ r w k ( t , r ) ) − k 2 r 2 w k ( t , r ) . {\displaystyle \Delta _{k}w_{k}(t,r)={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial }{\partial r}}w_{k}(t,r)\right)-{\frac {k^{2}}{r^{2}}}w_{k}(t,r).} From no-slip condition follows d d t ∫ r 0 ∞ r − | k | + 1 w k ( t , r ) d r = 0. {\displaystyle {\frac {d}{dt}}\int _{r_{0}}^{\infty }r^{-\vert k\vert +1}w_{k}(t,r)dr=0.} Finally, integrating by parts, we obtain the Robin boundary condition for the vorticity: ∫ r 0 ∞ s − | k | + 1 Δ k w k ( t , r ) d r = − r 0 − | k | ( r 0 ∂ w k ( t , r ) ∂ r | r = r 0 + | k | w k ( t , r 0 ) ) = 0. 
{\displaystyle \int _{r_{0}}^{\infty }r^{-|k|+1}\Delta _{k}w_{k}(t,r)dr=-r_{0}^{-|k|}\left(r_{0}{\frac {\partial w_{k}(t,r)}{\partial r}}{\Big |}_{r=r_{0}}+|k|w_{k}(t,r_{0})\right)=0.} The solution of the boundary-value problem can then be expressed via the Weber integral given above. The formula for the vorticity also gives another explanation of Stokes' paradox: the functions {\displaystyle {\frac {C}{r^{k}}}\in ker(\Delta _{k}),~k>1} belong to the kernel of {\displaystyle \Delta _{k}} and generate stationary solutions of the vorticity equation with the Robin-type boundary condition. By the arguments above, any Stokes vorticity flow satisfying the no-slip boundary condition must be orthogonal to these stationary solutions, which is possible only for {\displaystyle w\equiv 0} .
https://en.wikipedia.org/wiki/Stokes'_paradox
Stokes' theorem , [ 1 ] also known as the Kelvin–Stokes theorem [ 2 ] [ 3 ] after Lord Kelvin and George Stokes , the fundamental theorem for curls , or simply the curl theorem , [ 4 ] is a theorem in vector calculus on R 3 {\displaystyle \mathbb {R} ^{3}} . Given a vector field , the theorem relates the integral of the curl of the vector field over some surface, to the line integral of the vector field around the boundary of the surface. The classical theorem of Stokes can be stated in one sentence: Stokes' theorem is a special case of the generalized Stokes theorem . [ 5 ] [ 6 ] In particular, a vector field on R 3 {\displaystyle \mathbb {R} ^{3}} can be considered as a 1-form in which case its curl is its exterior derivative , a 2-form. Let Σ {\displaystyle \Sigma } be a smooth oriented surface in R 3 {\displaystyle \mathbb {R} ^{3}} with boundary ∂ Σ ≡ Γ {\displaystyle \partial \Sigma \equiv \Gamma } . If a vector field F ( x , y , z ) = ( F x ( x , y , z ) , F y ( x , y , z ) , F z ( x , y , z ) ) {\displaystyle \mathbf {F} (x,y,z)=(F_{x}(x,y,z),F_{y}(x,y,z),F_{z}(x,y,z))} is defined and has continuous first order partial derivatives in a region containing Σ {\displaystyle \Sigma } , then ∬ Σ ( ∇ × F ) ⋅ d Σ = ∮ ∂ Σ F ⋅ d Γ . {\displaystyle \iint _{\Sigma }(\nabla \times \mathbf {F} )\cdot \mathrm {d} \mathbf {\Sigma } =\oint _{\partial \Sigma }\mathbf {F} \cdot \mathrm {d} \mathbf {\Gamma } .} More explicitly, the equality says that ∬ Σ ( ( ∂ F z ∂ y − ∂ F y ∂ z ) d y d z + ( ∂ F x ∂ z − ∂ F z ∂ x ) d z d x + ( ∂ F y ∂ x − ∂ F x ∂ y ) d x d y ) = ∮ ∂ Σ ( F x d x + F y d y + F z d z ) . {\displaystyle {\scriptstyle {\begin{aligned}&\iint _{\Sigma }\left(\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right)\,\mathrm {d} y\,\mathrm {d} z+\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right)\,\mathrm {d} z\,\mathrm {d} x+\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right)\,\mathrm {d} x\,\mathrm {d} y\right)\\&=\oint _{\partial \Sigma }{\Bigl (}F_{x}\,\mathrm {d} x+F_{y}\,\mathrm {d} y+F_{z}\,\mathrm {d} z{\Bigr )}.\end{aligned}}}} The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary. Surfaces such as the Koch snowflake , for example, are well-known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in Lebesgue theory cannot be defined for a non- Lipschitz surface. One (advanced) technique is to pass to a weak formulation and then apply the machinery of geometric measure theory ; for that approach see the coarea formula . In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of R 2 {\displaystyle \mathbb {R} ^{2}} . A more detailed statement will be given for subsequent discussions. Let γ : [ a , b ] → R 2 {\displaystyle \gamma :[a,b]\to \mathbb {R} ^{2}} be a piecewise smooth Jordan plane curve . The Jordan curve theorem implies that γ {\displaystyle \gamma } divides R 2 {\displaystyle \mathbb {R} ^{2}} into two components, a compact one and another that is non-compact. Let D {\displaystyle D} denote the compact part; then D {\displaystyle D} is bounded by γ {\displaystyle \gamma } . It now suffices to transfer this notion of boundary along a continuous map to our surface in R 3 {\displaystyle \mathbb {R} ^{3}} . But we already have such a map: the parametrization of Σ {\displaystyle \Sigma } . 
Suppose ψ : D → R 3 {\displaystyle \psi :D\to \mathbb {R} ^{3}} is piecewise smooth at the neighborhood of D {\displaystyle D} , with Σ = ψ ( D ) {\displaystyle \Sigma =\psi (D)} . [ note 1 ] If Γ {\displaystyle \Gamma } is the space curve defined by Γ ( t ) = ψ ( γ ( t ) ) {\displaystyle \Gamma (t)=\psi (\gamma (t))} [ note 2 ] then we call Γ {\displaystyle \Gamma } the boundary of Σ {\displaystyle \Sigma } , written ∂ Σ {\displaystyle \partial \Sigma } . With the above notation, if F {\displaystyle \mathbf {F} } is any smooth vector field on R 3 {\displaystyle \mathbb {R} ^{3}} , then [ 7 ] [ 8 ] ∮ ∂ Σ F ⋅ d Γ = ∬ Σ ∇ × F ⋅ d Σ . {\displaystyle \oint _{\partial \Sigma }\mathbf {F} \,\cdot \,\mathrm {d} {\mathbf {\Gamma } }=\iint _{\Sigma }\nabla \times \mathbf {F} \,\cdot \,\mathrm {d} \mathbf {\Sigma } .} Here, the " ⋅ {\displaystyle \cdot } " represents the dot product in R 3 {\displaystyle \mathbb {R} ^{3}} . Stokes' theorem can be viewed as a special case of the following identity: [ 9 ] ∮ ∂ Σ ( F ⋅ d Γ ) g = ∬ Σ [ d Σ ⋅ ( ∇ × F − F × ∇ ) ] g , {\displaystyle \oint _{\partial \Sigma }(\mathbf {F} \,\cdot \,\mathrm {d} {\mathbf {\Gamma } })\,\mathbf {g} =\iint _{\Sigma }\left[\mathrm {d} \mathbf {\Sigma } \cdot \left(\nabla \times \mathbf {F} -\mathbf {F} \times \nabla \right)\right]\mathbf {g} ,} where g {\displaystyle \mathbf {g} } is any smooth vector or scalar field in R 3 {\displaystyle \mathbb {R} ^{3}} . When g {\displaystyle \mathbf {g} } is a uniform scalar field, the standard Stokes' theorem is recovered. The proof of the theorem consists of 4 steps. We assume Green's theorem , so what is of concern is how to boil down the three-dimensional complicated problem (Stokes' theorem) to a two-dimensional rudimentary problem (Green's theorem). [ 10 ] When proving this theorem, mathematicians normally deduce it as a special case of a more general result , which is stated in terms of differential forms , and proved using more sophisticated machinery. While powerful, these techniques require substantial background, so the proof below avoids them, and does not presuppose any knowledge beyond a familiarity with basic vector calculus and linear algebra. [ 8 ] At the end of this section, a short alternative proof of Stokes' theorem is given, as a corollary of the generalized Stokes' theorem. As in § Theorem , we reduce the dimension by using the natural parametrization of the surface. Let ψ and γ be as in that section, and note that by change of variables ∮ ∂ Σ F ( x ) ⋅ d Γ = ∮ γ F ( ψ ( γ ) ) ⋅ d ψ ( γ ) = ∮ γ F ( ψ ( y ) ) ⋅ J y ( ψ ) d γ {\displaystyle \oint _{\partial \Sigma }{\mathbf {F} (\mathbf {x} )\cdot \,\mathrm {d} \mathbf {\Gamma } }=\oint _{\gamma }{\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {\gamma } ))\cdot \,\mathrm {d} {\boldsymbol {\psi }}(\mathbf {\gamma } )}=\oint _{\gamma }{\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))\cdot J_{\mathbf {y} }({\boldsymbol {\psi }})\,\mathrm {d} \gamma }} where J y ψ stands for the Jacobian matrix of ψ at y = γ ( t ) . Now let { e u , e v } be an orthonormal basis in the coordinate directions of R 2 . 
[ note 3 ] Recognizing that the columns of J y ψ are precisely the partial derivatives of ψ at y , we can expand the previous equation in coordinates as ∮ ∂ Σ F ( x ) ⋅ d Γ = ∮ γ F ( ψ ( y ) ) J y ( ψ ) e u ( e u ⋅ d y ) + F ( ψ ( y ) ) J y ( ψ ) e v ( e v ⋅ d y ) = ∮ γ ( ( F ( ψ ( y ) ) ⋅ ∂ ψ ∂ u ( y ) ) e u + ( F ( ψ ( y ) ) ⋅ ∂ ψ ∂ v ( y ) ) e v ) ⋅ d y {\displaystyle {\begin{aligned}\oint _{\partial \Sigma }{\mathbf {F} (\mathbf {x} )\cdot \,\mathrm {d} \mathbf {\Gamma } }&=\oint _{\gamma }{\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))J_{\mathbf {y} }({\boldsymbol {\psi }})\mathbf {e} _{u}(\mathbf {e} _{u}\cdot \,\mathrm {d} \mathbf {y} )+\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))J_{\mathbf {y} }({\boldsymbol {\psi }})\mathbf {e} _{v}(\mathbf {e} _{v}\cdot \,\mathrm {d} \mathbf {y} )}\\&=\oint _{\gamma }{\left(\left(\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(\mathbf {y} )\right)\mathbf {e} _{u}+\left(\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(\mathbf {y} )\right)\mathbf {e} _{v}\right)\cdot \,\mathrm {d} \mathbf {y} }\end{aligned}}} The previous step suggests we define the function P ( u , v ) = ( F ( ψ ( u , v ) ) ⋅ ∂ ψ ∂ u ( u , v ) ) e u + ( F ( ψ ( u , v ) ) ⋅ ∂ ψ ∂ v ( u , v ) ) e v {\displaystyle \mathbf {P} (u,v)=\left(\mathbf {F} ({\boldsymbol {\psi }}(u,v))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(u,v)\right)\mathbf {e} _{u}+\left(\mathbf {F} ({\boldsymbol {\psi }}(u,v))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(u,v)\right)\mathbf {e} _{v}} Now, if the scalar value functions P u {\displaystyle P_{u}} and P v {\displaystyle P_{v}} are defined as follows, P u ( u , v ) = ( F ( ψ ( u , v ) ) ⋅ ∂ ψ ∂ u ( u , v ) ) {\displaystyle {P_{u}}(u,v)=\left(\mathbf {F} ({\boldsymbol {\psi }}(u,v))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(u,v)\right)} P v ( u , v ) = ( F ( ψ ( u , v ) ) ⋅ ∂ ψ ∂ v ( u , v ) ) {\displaystyle {P_{v}}(u,v)=\left(\mathbf {F} ({\boldsymbol {\psi }}(u,v))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(u,v)\right)} then, P ( u , v ) = P u ( u , v ) e u + P v ( u , v ) e v . {\displaystyle \mathbf {P} (u,v)={P_{u}}(u,v)\mathbf {e} _{u}+{P_{v}}(u,v)\mathbf {e} _{v}.} This is the pullback of F along ψ , and, by the above, it satisfies ∮ ∂ Σ F ( x ) ⋅ d l = ∮ γ P ( y ) ⋅ d l = ∮ γ ( P u ( u , v ) e u + P v ( u , v ) e v ) ⋅ d l {\displaystyle \oint _{\partial \Sigma }{\mathbf {F} (\mathbf {x} )\cdot \,\mathrm {d} \mathbf {l} }=\oint _{\gamma }{\mathbf {P} (\mathbf {y} )\cdot \,\mathrm {d} \mathbf {l} }=\oint _{\gamma }{({P_{u}}(u,v)\mathbf {e} _{u}+{P_{v}}(u,v)\mathbf {e} _{v})\cdot \,\mathrm {d} \mathbf {l} }} We have successfully reduced one side of Stokes' theorem to a 2-dimensional formula; we now turn to the other side. 
First, calculate the partial derivatives appearing in Green's theorem , via the product rule : ∂ P u ∂ v = ∂ ( F ∘ ψ ) ∂ v ⋅ ∂ ψ ∂ u + ( F ∘ ψ ) ⋅ ∂ 2 ψ ∂ v ∂ u ∂ P v ∂ u = ∂ ( F ∘ ψ ) ∂ u ⋅ ∂ ψ ∂ v + ( F ∘ ψ ) ⋅ ∂ 2 ψ ∂ u ∂ v {\displaystyle {\begin{aligned}{\frac {\partial P_{u}}{\partial v}}&={\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial v}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}+(\mathbf {F} \circ {\boldsymbol {\psi }})\cdot {\frac {\partial ^{2}{\boldsymbol {\psi }}}{\partial v\,\partial u}}\\[5pt]{\frac {\partial P_{v}}{\partial u}}&={\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial u}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}+(\mathbf {F} \circ {\boldsymbol {\psi }})\cdot {\frac {\partial ^{2}{\boldsymbol {\psi }}}{\partial u\,\partial v}}\end{aligned}}} Conveniently, the second term vanishes in the difference, by equality of mixed partials . So, [ note 4 ] ∂ P v ∂ u − ∂ P u ∂ v = ∂ ( F ∘ ψ ) ∂ u ⋅ ∂ ψ ∂ v − ∂ ( F ∘ ψ ) ∂ v ⋅ ∂ ψ ∂ u = ∂ ψ ∂ v ⋅ ( J ψ ( u , v ) F ) ∂ ψ ∂ u − ∂ ψ ∂ u ⋅ ( J ψ ( u , v ) F ) ∂ ψ ∂ v (chain rule) = ∂ ψ ∂ v ⋅ ( J ψ ( u , v ) F − ( J ψ ( u , v ) F ) T ) ∂ ψ ∂ u {\displaystyle {\begin{aligned}{\frac {\partial P_{v}}{\partial u}}-{\frac {\partial P_{u}}{\partial v}}&={\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial u}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}-{\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial v}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}\\[5pt]&={\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\cdot (J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} ){\frac {\partial {\boldsymbol {\psi }}}{\partial u}}-{\frac {\partial {\boldsymbol {\psi }}}{\partial u}}\cdot (J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} ){\frac {\partial {\boldsymbol {\psi }}}{\partial v}}&&{\text{(chain rule)}}\\[5pt]&={\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\cdot \left(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} -{(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )}^{\mathsf {T}}\right){\frac {\partial {\boldsymbol {\psi }}}{\partial u}}\end{aligned}}} But now consider the matrix in that quadratic form—that is, J ψ ( u , v ) F − ( J ψ ( u , v ) F ) T {\displaystyle J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} -(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )^{\mathsf {T}}} . We claim this matrix in fact describes a cross product. Here the superscript " T {\displaystyle {}^{\mathsf {T}}} " represents the transposition of matrices . To be precise, let A = ( A i j ) i j {\displaystyle A=(A_{ij})_{ij}} be an arbitrary 3 × 3 matrix and let a = [ a 1 a 2 a 3 ] = [ A 32 − A 23 A 13 − A 31 A 21 − A 12 ] {\displaystyle \mathbf {a} ={\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}}={\begin{bmatrix}A_{32}-A_{23}\\A_{13}-A_{31}\\A_{21}-A_{12}\end{bmatrix}}} Note that x ↦ a × x is linear, so it is determined by its action on basis elements. 
But by direct calculation ( A − A T ) e 1 = [ 0 a 3 − a 2 ] = a × e 1 ( A − A T ) e 2 = [ − a 3 0 a 1 ] = a × e 2 ( A − A T ) e 3 = [ a 2 − a 1 0 ] = a × e 3 {\displaystyle {\begin{aligned}\left(A-A^{\mathsf {T}}\right)\mathbf {e} _{1}&={\begin{bmatrix}0\\a_{3}\\-a_{2}\end{bmatrix}}=\mathbf {a} \times \mathbf {e} _{1}\\\left(A-A^{\mathsf {T}}\right)\mathbf {e} _{2}&={\begin{bmatrix}-a_{3}\\0\\a_{1}\end{bmatrix}}=\mathbf {a} \times \mathbf {e} _{2}\\\left(A-A^{\mathsf {T}}\right)\mathbf {e} _{3}&={\begin{bmatrix}a_{2}\\-a_{1}\\0\end{bmatrix}}=\mathbf {a} \times \mathbf {e} _{3}\end{aligned}}} Here, { e 1 , e 2 , e 3 } represents an orthonormal basis in the coordinate directions of R 3 {\displaystyle \mathbb {R} ^{3}} . [ note 5 ] Thus ( A − A T ) x = a × x for any x . Substituting ( J ψ ( u , v ) F ) {\displaystyle {(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )}} for A , we obtain ( ( J ψ ( u , v ) F ) − ( J ψ ( u , v ) F ) T ) x = ( ∇ × F ) × x , for all x ∈ R 3 {\displaystyle \left({(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )}-{(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )}^{\mathsf {T}}\right)\mathbf {x} =(\nabla \times \mathbf {F} )\times \mathbf {x} ,\quad {\text{for all}}\,\mathbf {x} \in \mathbb {R} ^{3}} We can now recognize the difference of partials as a (scalar) triple product : ∂ P v ∂ u − ∂ P u ∂ v = ∂ ψ ∂ v ⋅ ( ∇ × F ) × ∂ ψ ∂ u = ( ∇ × F ) ⋅ ∂ ψ ∂ u × ∂ ψ ∂ v {\displaystyle {\begin{aligned}{\frac {\partial P_{v}}{\partial u}}-{\frac {\partial P_{u}}{\partial v}}&={\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\cdot (\nabla \times \mathbf {F} )\times {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}=(\nabla \times \mathbf {F} )\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}\times {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\end{aligned}}} On the other hand, the definition of a surface integral also includes a triple product—the very same one! ∬ Σ ( ∇ × F ) ⋅ d Σ = ∬ D ( ∇ × F ) ( ψ ( u , v ) ) ⋅ ∂ ψ ∂ u ( u , v ) × ∂ ψ ∂ v ( u , v ) d u d v {\displaystyle {\begin{aligned}\iint _{\Sigma }(\nabla \times \mathbf {F} )\cdot \,d\mathbf {\Sigma } &=\iint _{D}{(\nabla \times \mathbf {F} )({\boldsymbol {\psi }}(u,v))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(u,v)\times {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(u,v)\,\mathrm {d} u\,\mathrm {d} v}\end{aligned}}} So, we obtain ∬ Σ ( ∇ × F ) ⋅ d Σ = ∬ D ( ∂ P v ∂ u − ∂ P u ∂ v ) d u d v {\displaystyle \iint _{\Sigma }(\nabla \times \mathbf {F} )\cdot \,\mathrm {d} \mathbf {\Sigma } =\iint _{D}\left({\frac {\partial P_{v}}{\partial u}}-{\frac {\partial P_{u}}{\partial v}}\right)\,\mathrm {d} u\,\mathrm {d} v} Combining the second and third steps and then applying Green's theorem completes the proof. Green's theorem asserts the following: for any region D bounded by the Jordans closed curve γ and two scalar-valued smooth functions P u ( u , v ) , P v ( u , v ) {\displaystyle P_{u}(u,v),P_{v}(u,v)} defined on D; ∮ γ ( P u ( u , v ) e u + P v ( u , v ) e v ) ⋅ d l = ∬ D ( ∂ P v ∂ u − ∂ P u ∂ v ) d u d v {\displaystyle \oint _{\gamma }{({P_{u}}(u,v)\mathbf {e} _{u}+{P_{v}}(u,v)\mathbf {e} _{v})\cdot \,\mathrm {d} \mathbf {l} }=\iint _{D}\left({\frac {\partial P_{v}}{\partial u}}-{\frac {\partial P_{u}}{\partial v}}\right)\,\mathrm {d} u\,\mathrm {d} v} We can substitute the conclusion of STEP2 into the left-hand side of Green's theorem above, and substitute the conclusion of STEP3 into the right-hand side. Q.E.D. 
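As an illustrative cross-check of the identity just proved (an editor's illustration, not part of the cited article), the following Python sketch compares the two sides of Stokes' theorem numerically for the arbitrarily chosen vector field F = (−y, x, z²), whose curl is (0, 0, 2), over the upper unit hemisphere with the unit circle as its positively oriented boundary.

```python
import numpy as np

# Vector field F = (-y, x, z^2); its curl is the constant field (0, 0, 2).
def curl_F(x, y, z):
    return np.array([0.0, 0.0, 2.0])

# Surface: upper unit hemisphere, parametrized by spherical angles (theta, phi),
# sampled at midpoints for a simple quadrature.
n_t, n_p = 200, 400
theta = (np.arange(n_t) + 0.5) * (np.pi / 2) / n_t
phi = (np.arange(n_p) + 0.5) * (2 * np.pi) / n_p
dth, dph = (np.pi / 2) / n_t, (2 * np.pi) / n_p
T, P = np.meshgrid(theta, phi, indexing="ij")

# On the unit sphere the outward normal equals the position vector and the
# area element is sin(theta) dtheta dphi.
nx, ny, nz = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)
c = curl_F(nx, ny, nz)
surface_integral = np.sum((c[0] * nx + c[1] * ny + c[2] * nz)
                          * np.sin(T) * dth * dph)

# Boundary: unit circle z = 0, traversed counterclockwise (consistent orientation).
s = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
ds = s[1] - s[0]
x, y, z = np.cos(s), np.sin(s), np.zeros_like(s)
dx, dy, dz = -np.sin(s) * ds, np.cos(s) * ds, np.zeros_like(s)
line_integral = np.sum(-y * dx + x * dy + z**2 * dz)

print(surface_integral, line_integral, 2 * np.pi)
# Both integrals come out as ~6.2832 (= 2*pi), agreeing to quadrature accuracy.
```

The exact common value here is 2π, since the curl is (0, 0, 2) and the hemisphere projects onto the unit disk.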
The functions R 3 → R 3 {\displaystyle \mathbb {R} ^{3}\to \mathbb {R} ^{3}} can be identified with the differential 1-forms on R 3 {\displaystyle \mathbb {R} ^{3}} via the map F x e 1 + F y e 2 + F z e 3 ↦ F x d x + F y d y + F z d z . {\displaystyle F_{x}\mathbf {e} _{1}+F_{y}\mathbf {e} _{2}+F_{z}\mathbf {e} _{3}\mapsto F_{x}\,\mathrm {d} x+F_{y}\,\mathrm {d} y+F_{z}\,\mathrm {d} z.} Write the differential 1-form associated to a function F as ω F . Then one can calculate that ⋆ ω ∇ × F = d ω F , {\displaystyle \star \omega _{\nabla \times \mathbf {F} }=\mathrm {d} \omega _{\mathbf {F} },} where ★ is the Hodge star and d {\displaystyle \mathrm {d} } is the exterior derivative . Thus, by generalized Stokes' theorem, [ 11 ] ∮ ∂ Σ F ⋅ d γ = ∮ ∂ Σ ω F = ∫ Σ d ω F = ∫ Σ ⋆ ω ∇ × F = ∬ Σ ∇ × F ⋅ d Σ {\displaystyle \oint _{\partial \Sigma }{\mathbf {F} \cdot \,\mathrm {d} \mathbf {\gamma } }=\oint _{\partial \Sigma }{\omega _{\mathbf {F} }}=\int _{\Sigma }{\mathrm {d} \omega _{\mathbf {F} }}=\int _{\Sigma }{\star \omega _{\nabla \times \mathbf {F} }}=\iint _{\Sigma }{\nabla \times \mathbf {F} \cdot \,\mathrm {d} \mathbf {\Sigma } }} In this section, we will discuss the irrotational field ( lamellar vector field ) based on Stokes' theorem. Definition 2-1 (irrotational field). A smooth vector field F on an open U ⊆ R 3 {\displaystyle U\subseteq \mathbb {R} ^{3}} is irrotational ( lamellar vector field ) if ∇ × F = 0 . This concept is very fundamental in mechanics; as we'll prove later, if F is irrotational and the domain of F is simply connected , then F is a conservative vector field . In this section, we will introduce a theorem that is derived from Stokes' theorem and characterizes vortex-free vector fields. In classical mechanics and fluid dynamics it is called Helmholtz's theorem . Theorem 2-1 (Helmholtz's theorem in fluid dynamics). [ 5 ] [ 3 ] : 142 Let U ⊆ R 3 {\displaystyle U\subseteq \mathbb {R} ^{3}} be an open subset with a lamellar vector field F and let c 0 , c 1 : [0, 1] → U be piecewise smooth loops. If there is a function H : [0, 1] × [0, 1] → U such that Then, ∫ c 0 F d c 0 = ∫ c 1 F d c 1 {\displaystyle \int _{c_{0}}\mathbf {F} \,\mathrm {d} c_{0}=\int _{c_{1}}\mathbf {F} \,\mathrm {d} c_{1}} Some textbooks such as Lawrence [ 5 ] call the relationship between c 0 and c 1 stated in theorem 2-1 as "homotopic" and the function H : [0, 1] × [0, 1] → U as "homotopy between c 0 and c 1 ". However, "homotopic" or "homotopy" in above-mentioned sense are different (stronger than) typical definitions of "homotopic" or "homotopy"; the latter omit condition [TLH3]. So from now on we refer to homotopy (homotope) in the sense of theorem 2-1 as a tubular homotopy (resp. tubular-homotopic) . [ note 6 ] In what follows, we abuse notation and use " ⊕ {\displaystyle \oplus } " for concatenation of paths in the fundamental groupoid and " ⊖ {\displaystyle \ominus } " for reversing the orientation of a path. Let D = [0, 1] × [0, 1] , and split ∂ D into four line segments γ j . 
γ 1 : [ 0 , 1 ] → D ; γ 1 ( t ) = ( t , 0 ) γ 2 : [ 0 , 1 ] → D ; γ 2 ( s ) = ( 1 , s ) γ 3 : [ 0 , 1 ] → D ; γ 3 ( t ) = ( 1 − t , 1 ) γ 4 : [ 0 , 1 ] → D ; γ 4 ( s ) = ( 0 , 1 − s ) {\displaystyle {\begin{aligned}\gamma _{1}:[0,1]\to D;\quad &\gamma _{1}(t)=(t,0)\\\gamma _{2}:[0,1]\to D;\quad &\gamma _{2}(s)=(1,s)\\\gamma _{3}:[0,1]\to D;\quad &\gamma _{3}(t)=(1-t,1)\\\gamma _{4}:[0,1]\to D;\quad &\gamma _{4}(s)=(0,1-s)\end{aligned}}} so that ∂ D = γ 1 ⊕ γ 2 ⊕ γ 3 ⊕ γ 4 {\displaystyle \partial D=\gamma _{1}\oplus \gamma _{2}\oplus \gamma _{3}\oplus \gamma _{4}} By our assumption that c 0 and c 1 are piecewise smoothly homotopic, there is a piecewise smooth homotopy H : D → U . Define Γ i ( t ) = H ( γ i ( t ) ) i = 1 , 2 , 3 , 4 Γ ( t ) = H ( γ ( t ) ) = ( Γ 1 ⊕ Γ 2 ⊕ Γ 3 ⊕ Γ 4 ) ( t ) {\displaystyle {\begin{aligned}\Gamma _{i}(t)&=H(\gamma _{i}(t))&&i=1,2,3,4\\\Gamma (t)&=H(\gamma (t))=(\Gamma _{1}\oplus \Gamma _{2}\oplus \Gamma _{3}\oplus \Gamma _{4})(t)\end{aligned}}} Let S be the image of D under H . That ∬ S ∇ × F d S = ∮ Γ F d Γ {\displaystyle \iint _{S}\nabla \times \mathbf {F} \,\mathrm {d} S=\oint _{\Gamma }\mathbf {F} \,\mathrm {d} \Gamma } follows immediately from Stokes' theorem. F is lamellar, so the left side vanishes, i.e. 0 = ∮ Γ F d Γ = ∑ i = 1 4 ∮ Γ i F d Γ {\displaystyle 0=\oint _{\Gamma }\mathbf {F} \,\mathrm {d} \Gamma =\sum _{i=1}^{4}\oint _{\Gamma _{i}}\mathbf {F} \,\mathrm {d} \Gamma } As H is tubular (satisfying [TLH3]), Γ 2 = ⊖ Γ 4 {\displaystyle \Gamma _{2}=\ominus \Gamma _{4}} . Thus the line integrals along Γ 2 ( s ) and Γ 4 ( s ) cancel, leaving 0 = ∮ Γ 1 F d Γ + ∮ Γ 3 F d Γ {\displaystyle 0=\oint _{\Gamma _{1}}\mathbf {F} \,\mathrm {d} \Gamma +\oint _{\Gamma _{3}}\mathbf {F} \,\mathrm {d} \Gamma } On the other hand, Γ 1 and ⊖ Γ 3 {\displaystyle \ominus \Gamma _{3}} are precisely the loops c 0 and c 1 , so that the desired equality follows almost immediately. The above Helmholtz theorem explains why the work done by a conservative force in changing an object's position is path independent. First, we introduce Lemma 2-2, which is both a corollary and a special case of Helmholtz's theorem. Lemma 2-2. [ 5 ] [ 6 ] Let U ⊆ R 3 {\displaystyle U\subseteq \mathbb {R} ^{3}} be an open subset , with a lamellar vector field F and a piecewise smooth loop c 0 : [0, 1] → U . Fix a point p ∈ U . If there is a homotopy H : [0, 1] × [0, 1] → U such that Then, ∫ c 0 F d c 0 = 0 {\displaystyle \int _{c_{0}}\mathbf {F} \,\mathrm {d} c_{0}=0} Lemma 2-2 follows from theorem 2-1. In Lemma 2-2, the existence of H satisfying [SC0] to [SC3] is crucial; the question is whether such a homotopy can be taken for arbitrary loops. If U is simply connected, such an H exists. The definition of a simply connected space follows: Definition 2-2 (simply connected space). [ 5 ] [ 6 ] Let M ⊆ R n {\displaystyle M\subseteq \mathbb {R} ^{n}} be non-empty and path-connected . M is called simply connected if and only if for any continuous loop, c : [0, 1] → M there exists a continuous tubular homotopy H : [0, 1] × [0, 1] → M from c to a fixed point p ∈ c ; that is, The claim that "for a conservative force, the work done in changing an object's position is path independent" might seem to follow immediately if M is simply connected. However, recall that simple connectedness only guarantees the existence of a continuous homotopy satisfying [SC1-3]; we seek a piecewise smooth homotopy satisfying those conditions instead.
Fortunately, the gap in regularity is resolved by Whitney's approximation theorem . [ 6 ] : 136, 421 [ 12 ] In other words, the possibility of finding a continuous homotopy but not being able to integrate over it is eliminated with the benefit of higher mathematics. We thus obtain the following theorem. Theorem 2-2. [ 5 ] [ 6 ] Let U ⊆ R 3 {\displaystyle U\subseteq \mathbb {R} ^{3}} be open and simply connected with an irrotational vector field F . Then for every piecewise smooth loop c 0 : [0, 1] → U , ∫ c 0 F d c 0 = 0 {\displaystyle \int _{c_{0}}\mathbf {F} \,\mathrm {d} c_{0}=0} In the physics of electromagnetism , Stokes' theorem justifies the equivalence between the differential and integral forms of the Maxwell–Faraday equation and the Maxwell–Ampère equation. For Faraday's law, Stokes' theorem is applied to the electric field, E {\displaystyle \mathbf {E} } : ∮ ∂ Σ E ⋅ d l = ∬ Σ ∇ × E ⋅ d S . {\displaystyle \oint _{\partial \Sigma }\mathbf {E} \cdot \mathrm {d} {\boldsymbol {l}}=\iint _{\Sigma }\mathbf {\nabla } \times \mathbf {E} \cdot \mathrm {d} \mathbf {S} .} For Ampère's law, Stokes' theorem is applied to the magnetic field, B {\displaystyle \mathbf {B} } : ∮ ∂ Σ B ⋅ d l = ∬ Σ ∇ × B ⋅ d S . {\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {l}}=\iint _{\Sigma }\mathbf {\nabla } \times \mathbf {B} \cdot \mathrm {d} \mathbf {S} .}
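The path-independence statement of Theorem 2-2 can also be checked numerically. The sketch below is an editor's illustration with an arbitrarily chosen irrotational field, F = (yz, xz, xy), whose curl vanishes on all of R³; it evaluates the line integral along two different paths joining the same endpoints and around a closed loop.

```python
import numpy as np

def F(p):
    x, y, z = p
    return np.array([y * z, x * z, x * y])  # curl F = 0 everywhere (F = grad(xyz))

def line_integral(path, n=20000):
    """Approximate the line integral of F along a path parametrized on [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])
    dl = np.diff(pts, axis=0)
    mid = 0.5 * (pts[:-1] + pts[1:])
    return sum(F(m) @ d for m, d in zip(mid, dl))

A, B = np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0])
straight = lambda t: (1 - t) * A + t * B                                # segment A -> B
curved = lambda t: np.array([t, 2 * t**2, 3 * np.sin(np.pi * t / 2)])   # another path A -> B
loop = lambda t: np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.5])  # closed loop

print(line_integral(straight))   # ~ 6.0
print(line_integral(curved))     # ~ 6.0, same value: path independence
print(line_integral(loop))       # ~ 0.0, closed-loop integral vanishes
```

Both open paths give the value x y z evaluated at B (= 6), and the closed loop gives zero, as the theorem requires on the simply connected domain R³.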
https://en.wikipedia.org/wiki/Stokes'_theorem
In acoustics , Stokes's law of sound attenuation is a formula for the attenuation of sound in a Newtonian fluid , such as water or air, due to the fluid's viscosity . It states that the amplitude of a plane wave decreases exponentially with distance traveled, at a rate α given by α = 2 η ω 2 3 ρ V 3 {\displaystyle \alpha ={\frac {2\eta \omega ^{2}}{3\rho V^{3}}}} where η is the dynamic viscosity coefficient of the fluid, ω is the sound's angular frequency , ρ is the fluid density , and V is the speed of sound in the medium. [ 1 ] The law and its derivation were published in 1845 by the Anglo-Irish physicist G. G. Stokes , who also developed Stokes's law for the friction force in fluid motion . A generalisation of Stokes attenuation taking into account the effect of thermal conductivity was proposed by the German physicist Gustav Kirchhoff in 1868. [ 2 ] [ 3 ] Sound attenuation in fluids is also accompanied by acoustic dispersion , meaning that the different frequencies are propagating at different sound speeds. [ 1 ] Stokes's law of sound attenuation applies to sound propagation in an isotropic and homogeneous Newtonian medium. Consider a plane sinusoidal pressure wave that has amplitude A 0 at some point. After traveling a distance d from that point, its amplitude A ( d ) will be A ( d ) = A 0 e − α d {\displaystyle A(d)=A_{0}e^{-\alpha d}} The parameter α is a kind of attenuation constant , dimensionally the reciprocal of length. In the International System of Units (SI), it is expressed in neper per meter or simply reciprocal of meter (m –1 ). That is, if α = 1 m –1 , the wave's amplitude decreases by a factor of 1/ e for each meter traveled. The law is amended to include a contribution by the volume viscosity ζ : α = ( 2 η + 3 2 ζ ) ω 2 3 ρ V 3 = ( 4 3 η + ζ ) ω 2 2 ρ V 3 {\displaystyle \alpha ={\frac {\left(2\eta +{\frac {3}{2}}\zeta \right)\omega ^{2}}{3\rho V^{3}}}={\frac {\left({\frac {4}{3}}\eta +\zeta \right)\omega ^{2}}{2\rho V^{3}}}} The volume viscosity coefficient is relevant when the fluid's compressibility cannot be ignored, such as in the case of ultrasound in water. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The volume viscosity of water at 15 C is 3.09 centipoise . 
[ 8 ] Stokes's law is actually an asymptotic approximation for low frequencies of a more general formula involving relaxation time τ : 2 ( α V ω ) 2 = 1 1 + ( ω τ ) 2 − 1 1 + ( ω τ ) 2 α = ω V 1 + ( ω τ ) 2 − 1 2 ( 1 + ( ω τ ) 2 ) τ = 4 η 3 + ζ ρ V 2 = 4 η + 3 ζ 3 ρ V 2 α = ω 3 2 ( ρ ( ( ω ( 4 η + 3 ζ ) ) 2 + ( 3 ρ V 2 ) 2 − 3 ρ V 2 ) ( ω ( 4 η + 3 ζ ) ) 2 + ( 3 ρ V 2 ) 2 ) 1 2 {\displaystyle {\begin{aligned}2\left({\frac {\alpha V}{\omega }}\right)^{2}&={\frac {1}{\sqrt {1+\left(\omega \tau \right)^{2}}}}-{\frac {1}{1+\left(\omega \tau \right)^{2}}}\\\alpha &={\frac {\omega }{V}}{\sqrt {\frac {{\sqrt {1+\left(\omega \tau \right)^{2}}}-1}{2\left(1+\left(\omega \tau \right)^{2}\right)}}}\\\tau &={\frac {{\frac {4\eta }{3}}+\zeta }{\rho V^{2}}}={\frac {4\eta +3\zeta }{3\rho V^{2}}}\\\alpha &=\omega {\sqrt {\frac {3}{2}}}\left({\frac {\rho \left({\sqrt {\left(\omega \left(4\eta +3\zeta \right)\right)^{2}+\left(3\rho V^{2}\right)^{2}}}-3\rho V^{2}\right)}{\left(\omega \left(4\eta +3\zeta \right)\right)^{2}+\left(3\rho V^{2}\right)^{2}}}\right)^{\frac {1}{2}}\\\end{aligned}}} The relaxation time for water is about 2.0 × 10 −12 seconds (2 picoseconds ) per radian [ citation needed ] , corresponding to an angular frequency ω of 5 × 10 11 radians (500 gigaradians) per second and therefore an ordinary frequency ω /(2 π ) of about 8 × 10 10 hertz (about 80 gigahertz ).
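As a worked example (an editor's illustration using nominal water properties near 15 °C, not values quoted in the article), the snippet below evaluates the shear-only attenuation coefficient, the version amended with the volume viscosity, and the general relaxation-time expression at a few ultrasonic frequencies, showing that the low-frequency and general formulas agree when ωτ ≪ 1.

```python
import numpy as np

# Nominal properties of water near 15 degC (approximate, illustrative values).
eta = 1.1e-3      # shear (dynamic) viscosity, Pa*s
zeta = 3.09e-3    # volume viscosity, Pa*s
rho = 999.0       # density, kg/m^3
V = 1466.0        # speed of sound, m/s

def alpha_shear_only(f):
    """Classical Stokes law, shear viscosity only."""
    w = 2 * np.pi * f
    return 2 * eta * w**2 / (3 * rho * V**3)

def alpha_with_bulk(f):
    """Stokes law amended with the volume (bulk) viscosity."""
    w = 2 * np.pi * f
    return (4 * eta / 3 + zeta) * w**2 / (2 * rho * V**3)

def alpha_general(f):
    """General relaxation-time formula; reduces to alpha_with_bulk for w*tau << 1."""
    w = 2 * np.pi * f
    tau = (4 * eta / 3 + zeta) / (rho * V**2)
    return (w / V) * np.sqrt((np.sqrt(1 + (w * tau)**2) - 1)
                             / (2 * (1 + (w * tau)**2)))

for f in (1e6, 10e6, 100e6):  # 1, 10 and 100 MHz
    print(f, alpha_shear_only(f), alpha_with_bulk(f), alpha_general(f))
# At these frequencies w*tau is at most ~1e-3, so the last two columns agree to
# many digits, and the attenuation (in Np/m) grows as the square of the frequency.
```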
https://en.wikipedia.org/wiki/Stokes's_law_of_sound_attenuation
This article provides an error analysis of time discretization applied to spatially discrete approximations of the stationary and nonstationary Navier-Stokes equations . The nonlinearity of the convection term is the main difficulty in solving stationary or nonstationary Navier-Stokes or Euler equation problems, and the method of artificial compressibility is one way of addressing it. ρ ( ∂ v ∂ t + v ⋅ ∇ v ) = − ∇ p + μ ∇ 2 v + f . {\displaystyle \rho \left({\frac {\partial \mathbf {v} }{\partial t}}+\mathbf {v} \cdot \nabla \mathbf {v} \right)=-\nabla p+\mu \nabla ^{2}\mathbf {v} +\mathbf {f} .} The Stokes approximation is obtained from the Navier-Stokes equations by omitting the convective term. For small Reynolds numbers in incompressible flow this approximation is useful, because the linear diffusion term then dominates the convection term; in the stationary problem the convection term is neglected entirely, and many theorems can be proved for the resulting linear system. The main obstacle in solving the incompressible flow equations is the coupling of the continuity and momentum equations: the continuity equation contains no pressure (or density) term, so there is no evolution equation for the pressure. Chorin proposed a remedy for this pressure decoupling, known as artificial compressibility, in which a time derivative of the pressure is added to the continuity equation. The nonstationary Navier-Stokes problem modified in this way is assumed to converge, as t → ∞, towards the solution of the corresponding stationary problem, and this limiting solution does not depend on the added artificial term. Consequently, if the system consisting of the Navier-Stokes momentum equation and the continuity equation augmented with a time derivative of pressure is integrated forward in this artificial time, the limit coincides with the stationary solution of the original Navier-Stokes problem. The artificial compressibility method is commonly combined with a dual time stepping procedure, which iterates in pseudo-time within each physical time step; this guarantees convergence towards the solution of the incompressible flow problem. Ansorge, R. , Mathematical Models of Fluid Dynamics
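To make the pseudo-time idea concrete, the following sketch (an editor's illustration; the grid, the artificial-compressibility parameter β, the forcing and all numerical values are assumptions, not taken from the article) iterates the convection-free (Stokes) momentum equations together with the replacement ∂p/∂τ = −β ∇·u on a small periodic grid. As the pseudo-time iteration converges, the velocity divergence is driven to zero and the steady incompressible solution is recovered.

```python
import numpy as np

# Illustrative 2-D periodic setup: parameters chosen only to keep the explicit
# pseudo-time iteration stable, not taken from any reference.
n, L = 32, 2 * np.pi
h = L / n
nu, beta = 0.1, 1.0
dtau = 0.2 * h**2 / (4 * nu)

x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
fx = np.sin(X) * np.cos(Y)           # steady, divergence-free body force
fy = -np.cos(X) * np.sin(Y)

u = np.zeros((n, n)); v = np.zeros((n, n)); p = np.zeros((n, n))

def ddx(a):  # centered differences on the periodic grid
    return (np.roll(a, -1, 0) - np.roll(a, 1, 0)) / (2 * h)

def ddy(a):
    return (np.roll(a, -1, 1) - np.roll(a, 1, 1)) / (2 * h)

def lap(a):
    return (np.roll(a, -1, 0) + np.roll(a, 1, 0)
            + np.roll(a, -1, 1) + np.roll(a, 1, 1) - 4 * a) / h**2

# Artificial-time iteration: march until the residuals are negligible, i.e. until
# the steady solution of the incompressible (Stokes) problem is reached.
for it in range(20000):
    du = -ddx(p) + nu * lap(u) + fx      # momentum residual (convection omitted)
    dv = -ddy(p) + nu * lap(v) + fy
    dp = -beta * (ddx(u) + ddy(v))       # artificial-compressibility pressure update
    u, v, p = u + dtau * du, v + dtau * dv, p + dtau * dp
    if max(np.abs(du).max(), np.abs(dv).max(), np.abs(dp).max()) < 1e-8:
        break

print(it, np.abs(ddx(u) + ddy(v)).max())  # divergence of u is ~0 at convergence
```

The iteration typically settles in a few thousand pseudo-time steps for this setup; in a dual time stepping scheme the same inner iteration would be repeated inside every physical time step.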
https://en.wikipedia.org/wiki/Stokes_approximation_and_artificial_time
For a pure wave motion in fluid dynamics , the Stokes drift velocity is the average velocity when following a specific fluid parcel as it travels with the fluid flow . For instance, a particle floating at the free surface of water waves , experiences a net Stokes drift velocity in the direction of wave propagation . More generally, the Stokes drift velocity is the difference between the average Lagrangian flow velocity of a fluid parcel, and the average Eulerian flow velocity of the fluid at a fixed position. This nonlinear phenomenon is named after George Gabriel Stokes , who derived expressions for this drift in his 1847 study of water waves . The Stokes drift is the difference in end positions, after a predefined amount of time (usually one wave period ), as derived from a description in the Lagrangian and Eulerian coordinates . The end position in the Lagrangian description is obtained by following a specific fluid parcel during the time interval. The corresponding end position in the Eulerian description is obtained by integrating the flow velocity at a fixed position—equal to the initial position in the Lagrangian description—during the same time interval. The Stokes drift velocity equals the Stokes drift divided by the considered time interval. Often, the Stokes drift velocity is loosely referred to as Stokes drift. Stokes drift may occur in all instances of oscillatory flow which are inhomogeneous in space. For instance in water waves , tides and atmospheric waves . In the Lagrangian description , fluid parcels may drift far from their initial positions. As a result, the unambiguous definition of an average Lagrangian velocity and Stokes drift velocity, which can be attributed to a certain fixed position, is by no means a trivial task. However, such an unambiguous description is provided by the Generalized Lagrangian Mean (GLM) theory of Andrews and McIntyre in 1978 . [ 2 ] The Stokes drift is important for the mass transfer of various kinds of material and organisms by oscillatory flows. It plays a crucial role in the generation of Langmuir circulations . [ 3 ] For nonlinear and periodic water waves, accurate results on the Stokes drift have been computed and tabulated. [ 4 ] The Lagrangian motion of a fluid parcel with position vector x = ξ ( α , t) in the Eulerian coordinates is given by [ 5 ] where Often, the Lagrangian coordinates α are chosen to coincide with the Eulerian coordinates x at the initial time t = t 0 : [ 5 ] If the average value of a quantity is denoted by an overbar, then the average Eulerian velocity vector ū E and average Lagrangian velocity vector ū L are Different definitions of the average may be used, depending on the subject of study (see ergodic theory ): The Stokes drift velocity ū S is defined as the difference between the average Eulerian velocity and the average Lagrangian velocity: [ 6 ] In many situations, the mapping of average quantities from some Eulerian position x to a corresponding Lagrangian position α forms a problem. Since a fluid parcel with label α traverses along a path of many different Eulerian positions x , it is not possible to assign α to a unique x . A mathematically sound basis for an unambiguous mapping between average Lagrangian and Eulerian quantities is provided by the theory of the generalized Lagrangian mean (GLM) by Andrews and McIntyre (1978) . 
For the Eulerian velocity as a monochromatic wave of any nature in a continuous medium: u = u ^ sin ⁡ ( k x − ω t ) , {\displaystyle u={\hat {u}}\sin(kx-\omega t),} one readily obtains by the perturbation theory – with k u ^ / ω {\displaystyle k{\hat {u}}/\omega } as a small parameter – for the particle position x = ξ ( ξ 0 , t ) {\displaystyle x=\xi (\xi _{0},t)} : Here the last term describes the Stokes drift velocity 1 2 k u ^ 2 / ω . {\displaystyle {\tfrac {1}{2}}k{\hat {u}}^{2}/\omega .} [ 7 ] The Stokes drift was formulated for water waves by George Gabriel Stokes in 1847. For simplicity, the case of infinitely deep water is considered, with linear wave propagation of a sinusoidal wave on the free surface of a fluid layer: [ 8 ] where As derived below, the horizontal component ū S ( z ) of the Stokes drift velocity for deep-water waves is approximately: [ 9 ] As can be seen, the Stokes drift velocity ū S is a nonlinear quantity in terms of the wave amplitude a . Further, the Stokes drift velocity decays exponentially with depth: at a depth of a quarter wavelength, z = − λ /4, it is about 4% of its value at the mean free surface , z = 0. It is assumed that the waves are of infinitesimal amplitude and the free surface oscillates around the mean level z = 0. The waves propagate under the action of gravity, with a constant acceleration vector by gravity (pointing downward in the negative z direction). Further the fluid is assumed to be inviscid [ 10 ] and incompressible , with a constant mass density . The fluid flow is irrotational . At infinite depth, the fluid is taken to be at rest . Now the flow may be represented by a velocity potential φ , satisfying the Laplace equation and [ 8 ] In order to have non-trivial solutions for this eigenvalue problem, the wave length and wave period may not be chosen arbitrarily, but must satisfy the deep-water dispersion relation: [ 11 ] with g the acceleration by gravity in (m/s 2 ). Within the framework of linear theory, the horizontal and vertical components, ξ x and ξ z respectively, of the Lagrangian position ξ are [ 9 ] The horizontal component ū S of the Stokes drift velocity is estimated by using a Taylor expansion around x of the Eulerian horizontal velocity component u x = ∂ ξ x / ∂ t at the position ξ : [ 5 ]
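The second-order drift quoted above for a monochromatic wave, ½ k û²/ω, can be reproduced numerically. The sketch below is an independent illustration (the wave parameters are arbitrary choices): it integrates the Lagrangian particle equation dx/dt = û sin(k x − ω t) over many wave periods and compares the mean particle velocity with the perturbation-theory value.

```python
import math

# Illustrative wave parameters (k*u_hat/omega must be small for the expansion).
k, omega, u_hat = 1.0, 2.0, 0.2

def u_eulerian(x, t):
    return u_hat * math.sin(k * x - omega * t)

# Integrate the Lagrangian equation dx/dt = u(x(t), t) with RK4 over many periods.
T = 2 * math.pi / omega
n_periods, steps_per_period = 1000, 400
dt = T / steps_per_period
x, t = 0.0, 0.0
for _ in range(n_periods * steps_per_period):
    k1 = u_eulerian(x, t)
    k2 = u_eulerian(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = u_eulerian(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = u_eulerian(x + dt * k3, t + dt)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

u_lagrangian_mean = x / t                      # average Lagrangian velocity (particle started at x = 0)
u_stokes_theory = 0.5 * k * u_hat**2 / omega   # second-order Stokes drift from perturbation theory
print(u_lagrangian_mean, u_stokes_theory)
# The Eulerian time average at a fixed point is zero, so the Lagrangian mean itself
# is the Stokes drift (~0.01 here); the two values agree to a fraction of a percent.
```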
https://en.wikipedia.org/wiki/Stokes_drift
Stokes flow (named after George Gabriel Stokes ), also named creeping flow or creeping motion , [ 1 ] is a type of fluid flow where advective inertial forces are small compared with viscous forces. [ 2 ] The Reynolds number is low, i.e. R e ≪ 1 {\displaystyle \mathrm {Re} \ll 1} . This is a typical situation in flows where the fluid velocities are very slow, the viscosities are very large, or the length-scales of the flow are very small. Creeping flow was first studied to understand lubrication . In nature, this type of flow occurs in the swimming of microorganisms and sperm . [ 3 ] In technology, it occurs in paint , MEMS devices, and in the flow of viscous polymers generally. The equations of motion for Stokes flow, called the Stokes equations, are a linearization of the Navier–Stokes equations , and thus can be solved by a number of well-known methods for linear differential equations. [ 4 ] The primary Green's function of Stokes flow is the Stokeslet , which is associated with a singular point force embedded in a Stokes flow. From its derivatives, other fundamental solutions can be obtained. [ 5 ] The Stokeslet was first derived by Oseen in 1927, although it was not named as such until 1953 by Hancock. [ 6 ] The closed-form fundamental solutions for the generalized unsteady Stokes and Oseen flows associated with arbitrary time-dependent translational and rotational motions have been derived for the Newtonian [ 7 ] and micropolar [ 8 ] fluids. The equation of motion for Stokes flow can be obtained by linearizing the steady state Navier–Stokes equations . The inertial forces are assumed to be negligible in comparison to the viscous forces, and eliminating the inertial terms of the momentum balance in the Navier–Stokes equations reduces it to the momentum balance in the Stokes equations: [ 1 ] where σ {\displaystyle \sigma } is the stress (sum of viscous and pressure stresses), [ 9 ] [ 10 ] and f {\displaystyle \mathbf {f} } an applied body force . The full Stokes equations also include an equation for the conservation of mass , commonly written in the form: where ρ {\displaystyle \rho } is the fluid density and u {\displaystyle \mathbf {u} } the fluid velocity. To obtain the equations of motion for incompressible flow , it is assumed that the density, ρ {\displaystyle \rho } , is a constant. Furthermore, occasionally one might consider the unsteady Stokes equations, in which the term ρ ∂ u ∂ t {\displaystyle \rho {\frac {\partial \mathbf {u} }{\partial t}}} is added to the left hand side of the momentum balance equation. [ 1 ] The Stokes equations represent a considerable simplification of the full Navier–Stokes equations , especially in the incompressible Newtonian case. [ 2 ] [ 4 ] [ 9 ] [ 10 ] They are the leading-order simplification of the full Navier–Stokes equations, valid in the distinguished limit R e → 0. {\displaystyle \mathrm {Re} \to 0.} While these properties are true for incompressible Newtonian Stokes flows, the non-linear and sometimes time-dependent nature of non-Newtonian fluids means that they do not hold in the more general case. An interesting property of Stokes flow is known as the Stokes' paradox : that there can be no Stokes flow of a fluid around a disk in two dimensions; or, equivalently, the fact there is no non-trivial solution for the Stokes equations around an infinitely long cylinder. [ 13 ] A Taylor–Couette system can create laminar flows in which concentric cylinders of fluid move past each other in an apparent spiral. 
[ 14 ] A fluid such as corn syrup with high viscosity fills the gap between two cylinders, with colored regions of the fluid visible through the transparent outer cylinder. The cylinders are rotated relative to one another at a low speed, which together with the high viscosity of the fluid and thinness of the gap gives a low Reynolds number , so that the apparent mixing of colors is actually laminar and can then be reversed to approximately the initial state. This creates a dramatic demonstration of seemingly mixing a fluid and then unmixing it by reversing the direction of the mixer. [ 15 ] [ 16 ] [ 17 ] In the common case of an incompressible Newtonian fluid , the Stokes equations take the (vectorized) form: where u {\displaystyle \mathbf {u} } is the velocity of the fluid, ∇ p {\displaystyle {\boldsymbol {\nabla }}p} is the gradient of the pressure , μ {\displaystyle \mu } is the dynamic viscosity, and f {\displaystyle \mathbf {f} } an applied body force. The resulting equations are linear in velocity and pressure, and therefore can take advantage of a variety of linear differential equation solvers. [ 4 ] With the velocity vector expanded as u = ( u , v , w ) {\displaystyle \mathbf {u} =(u,v,w)} and similarly the body force vector f = ( f x , f y , f z ) {\displaystyle \mathbf {f} =(f_{x},f_{y},f_{z})} , we may write the vector equation explicitly, We arrive at these equations by making the assumptions that P = μ ( ∇ u + ( ∇ u ) T ) − p I {\displaystyle \mathbb {P} =\mu \left({\boldsymbol {\nabla }}\mathbf {u} +({\boldsymbol {\nabla }}\mathbf {u} )^{\mathsf {T}}\right)-p\mathbb {I} } and the density ρ {\displaystyle \rho } is a constant. [ 9 ] The equation for an incompressible Newtonian Stokes flow can be solved by the stream function method in planar or in 3-D axisymmetric cases The linearity of the Stokes equations in the case of an incompressible Newtonian fluid means that a Green's function , J ( r ) {\displaystyle \mathbb {J} (\mathbf {r} )} , exists. The Green's function is found by solving the Stokes equations with the forcing term replaced by a point force acting at the origin, and boundary conditions vanishing at infinity: where δ ( r ) {\displaystyle \mathbf {\delta } (\mathbf {r} )} is the Dirac delta function , and F ⋅ δ ( r ) {\displaystyle \mathbf {F} \cdot \delta (\mathbf {r} )} represents a point force acting at the origin. The solution for the pressure p and velocity u with | u | and p vanishing at infinity is given by [ 1 ] where is a second-rank tensor (or more accurately tensor field ) known as the Oseen tensor (after Carl Wilhelm Oseen ). Here, r r is a quantity such that F ⋅ ( r r ) = ( F ⋅ r ) r {\displaystyle \mathbf {F} \cdot (\mathbf {r} \mathbf {r} )=(\mathbf {F} \cdot \mathbf {r} )\mathbf {r} } . [ clarification needed ] The terms Stokeslet and point-force solution are used to describe F ⋅ J ( r ) {\displaystyle \mathbf {F} \cdot \mathbb {J} (\mathbf {r} )} . Analogous to the point charge in electrostatics , the Stokeslet is force-free everywhere except at the origin, where it contains a force of strength F {\displaystyle \mathbf {F} } . For a continuous-force distribution (density) f ( r ) {\displaystyle \mathbf {f} (\mathbf {r} )} the solution (again vanishing at infinity) can then be constructed by superposition: This integral representation of the velocity can be viewed as a reduction in dimensionality: from the three-dimensional partial differential equation to a two-dimensional integral equation for unknown densities. 
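The explicit form of the Oseen tensor is not reproduced in the extracted text above, so the following sketch (an editor's illustration) uses the standard expression for the Stokeslet, u = F·J(r) with J_ij = (δ_ij/|r| + r_i r_j/|r|³)/(8πμ) and pressure p = F·r/(4π|r|³), under that prefactor convention; the viscosity and force values are placeholders. It also spot-checks that the resulting velocity field is divergence-free away from the singular point, as continuity requires.

```python
import numpy as np

MU = 1.0e-3                              # dynamic viscosity, Pa*s (illustrative)
FORCE = np.array([0.0, 0.0, 1.0e-6])     # point force at the origin, N (illustrative)

def stokeslet_velocity(r, F=FORCE, mu=MU):
    """Stokeslet velocity u = F . J(r), with the Oseen tensor written as
    J = (I/|r| + r r / |r|^3) / (8 pi mu) (assumed prefactor convention)."""
    r = np.asarray(r, dtype=float)
    d = np.linalg.norm(r)
    return (F / d + (F @ r) * r / d**3) / (8 * np.pi * mu)

def stokeslet_pressure(r, F=FORCE):
    """Stokeslet pressure field p = F . r / (4 pi |r|^3)."""
    r = np.asarray(r, dtype=float)
    d = np.linalg.norm(r)
    return (F @ r) / (4 * np.pi * d**3)

def divergence(point, h=1e-5):
    """Central-difference divergence of the Stokeslet velocity at a point."""
    div = 0.0
    for i in range(3):
        e = np.zeros(3); e[i] = h
        div += (stokeslet_velocity(point + e)[i]
                - stokeslet_velocity(point - e)[i]) / (2 * h)
    return div

pt = np.array([0.3, -0.2, 0.5])
print(stokeslet_velocity(pt), stokeslet_pressure(pt), divergence(pt))
# The divergence is ~0 to finite-difference accuracy, since the Stokeslet is
# force-free and incompressible everywhere except at the origin.
```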
[ 1 ] The Papkovich–Neuber solution represents the velocity and pressure fields of an incompressible Newtonian Stokes flow in terms of two harmonic potentials. Certain problems, such as the evolution of the shape of a bubble in a Stokes flow, are conducive to numerical solution by the boundary element method . This technique can be applied to both 2- and 3-dimensional flows. Hele-Shaw flow is an example of a geometry for which inertia forces are negligible. It is defined by two parallel plates arranged very close together with the space between the plates occupied partly by fluid and partly by obstacles in the form of cylinders with generators normal to the plates. [ 9 ] Slender-body theory in Stokes flow is a simple approximate method of determining the irrotational flow field around bodies whose length is large compared with their width. The basis of the method is to choose a distribution of flow singularities along a line (since the body is slender) so that their irrotational flow in combination with a uniform stream approximately satisfies the zero normal velocity condition. [ 9 ] Lamb 's general solution arises from the fact that the pressure p {\displaystyle p} satisfies the Laplace equation , and can be expanded in a series of solid spherical harmonics in spherical coordinates. As a result, the solution to the Stokes equations can be written: where p n , Φ n , {\displaystyle p_{n},\Phi _{n},} and χ n {\displaystyle \chi _{n}} are solid spherical harmonics of order n {\displaystyle n} : and the P n m {\displaystyle P_{n}^{m}} are the associated Legendre polynomials . The Lamb's solution can be used to describe the motion of fluid either inside or outside a sphere. For example, it can be used to describe the motion of fluid around a spherical particle with prescribed surface flow, a so-called squirmer , or to describe the flow inside a spherical drop of fluid. For interior flows, the terms with n < 0 {\displaystyle n<0} are dropped, while for exterior flows the terms with n > 0 {\displaystyle n>0} are dropped (often the convention n → − n − 1 {\displaystyle n\to -n-1} is assumed for exterior flows to avoid indexing by negative numbers). [ 1 ] The drag resistance to a moving sphere, also known as Stokes' solution is here summarised. Given a sphere of radius a {\displaystyle a} , travelling at velocity U {\displaystyle U} , in a Stokes fluid with dynamic viscosity μ {\displaystyle \mu } , the drag force F D {\displaystyle F_{D}} is given by: [ 9 ] The Stokes solution dissipates less energy than any other solenoidal vector field with the same boundary velocities: this is known as the Helmholtz minimum dissipation theorem . [ 1 ] The Lorentz reciprocal theorem states a relationship between two Stokes flows in the same region. Consider fluid filled region V {\displaystyle V} bounded by surface S {\displaystyle S} . Let the velocity fields u {\displaystyle \mathbf {u} } and u ′ {\displaystyle \mathbf {u} '} solve the Stokes equations in the domain V {\displaystyle V} , each with corresponding stress fields σ {\displaystyle \mathbf {\sigma } } and σ ′ {\displaystyle \mathbf {\sigma } '} . Then the following equality holds: Where n {\displaystyle \mathbf {n} } is the unit normal on the surface S {\displaystyle S} . The Lorentz reciprocal theorem can be used to show that Stokes flow "transmits" unchanged the total force and torque from an inner closed surface to an outer enclosing surface. 
[ 1 ] The Lorentz reciprocal theorem can also be used to relate the swimming speed of a microorganism, such as cyanobacterium , to the surface velocity which is prescribed by deformations of the body shape via cilia or flagella . [ 19 ] The Lorentz reciprocal theorem has also been used in the context of elastohydrodynamic theory to derive the lift force exerted on a solid object moving tangent to the surface of an elastic interface at low Reynolds numbers . [ 20 ] [ 21 ] Faxén's laws are direct relations that express the multipole moments in terms of the ambient flow and its derivatives. First developed by Hilding Faxén to calculate the force, F {\displaystyle \mathbf {F} } , and torque, T {\displaystyle \mathbf {T} } on a sphere, they take the following form: where μ {\displaystyle \mu } is the dynamic viscosity, a {\displaystyle a} is the particle radius, v ∞ {\displaystyle \mathbf {v} ^{\infty }} is the ambient flow, U {\displaystyle \mathbf {U} } is the speed of the particle, Ω ∞ {\displaystyle \mathbf {\Omega } ^{\infty }} is the angular velocity of the background flow, and ω {\displaystyle \mathbf {\omega } } is the angular velocity of the particle. Faxén's laws can be generalized to describe the moments of other shapes, such as ellipsoids, spheroids, and spherical drops. [ 1 ]
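The drag formula summarised above is not written out in the extracted text; the snippet below (an editor's illustration with assumed particle and fluid properties) uses the standard result F_D = 6πμaU to estimate the drag on, and the terminal settling velocity of, a small sphere, and checks that the resulting particle Reynolds number is indeed small so that the creeping-flow assumption is self-consistent.

```python
import numpy as np

def stokes_drag(mu, a, U):
    """Drag on a sphere of radius a moving at speed U in Stokes flow (F = 6 pi mu a U)."""
    return 6 * np.pi * mu * a * U

def terminal_velocity(mu, a, rho_p, rho_f, g=9.81):
    """Settling speed at which Stokes drag balances the buoyant weight:
    6 pi mu a v = (4/3) pi a^3 (rho_p - rho_f) g."""
    return 2 * (rho_p - rho_f) * g * a**2 / (9 * mu)

# Illustrative case: a 10-micron-radius quartz-like particle settling in water.
mu, a = 1.0e-3, 10e-6             # Pa*s, m
rho_p, rho_f = 2650.0, 1000.0     # kg/m^3

v_t = terminal_velocity(mu, a, rho_p, rho_f)
Re = rho_f * v_t * (2 * a) / mu   # particle Reynolds number
print(v_t, stokes_drag(mu, a, v_t), Re)
# v_t ~ 3.6e-4 m/s and Re ~ 7e-3 << 1, so using Stokes flow here is consistent.
```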
https://en.wikipedia.org/wiki/Stokes_flow
The Stokes number ( Stk ), named after George Gabriel Stokes , is a dimensionless number characterising the behavior of particles suspended in a fluid flow . The Stokes number is defined as the ratio of the characteristic time of a particle (or droplet ) to a characteristic time of the flow or of an obstacle, or S t k = t 0 u 0 l 0 {\displaystyle \mathrm {Stk} ={\frac {t_{0}\,u_{0}}{l_{0}}}} where t 0 {\displaystyle t_{0}} is the relaxation time of the particle (the time constant in the exponential decay of the particle velocity due to drag), u 0 {\displaystyle u_{0}} is the fluid velocity of the flow well away from the obstacle, and l 0 {\displaystyle l_{0}} is the characteristic dimension of the obstacle (typically its diameter) or a characteristic length scale in the flow (like boundary layer thickness). [ 1 ] A particle with a low Stokes number follows fluid streamlines (perfect advection ), while a particle with a large Stokes number is dominated by its inertia and continues along its initial trajectory. In the case of Stokes flow , which is when the particle (or droplet) Reynolds number is less than about one, the particle drag coefficient is inversely proportional to the Reynolds number itself. In that case, the characteristic time of the particle can be written as t 0 = ρ p d p 2 18 μ g {\displaystyle t_{0}={\frac {\rho _{p}d_{p}^{2}}{18\mu _{g}}}} where ρ p {\displaystyle \rho _{p}} is the particle density , d p {\displaystyle d_{p}} is the particle diameter and μ g {\displaystyle \mu _{g}} is the fluid dynamic viscosity . [ 2 ] In experimental fluid dynamics, the Stokes number is a measure of flow tracer fidelity in particle image velocimetry (PIV) experiments where very small particles are entrained in turbulent flows and optically observed to determine the speed and direction of fluid movement (also known as the velocity field of the fluid). For acceptable tracing accuracy, the particle response time should be faster than the smallest time scale of the flow. Smaller Stokes numbers represent better tracing accuracy; for S t k ≫ 1 {\displaystyle \mathrm {Stk} \gg 1} , particles will detach from a flow especially where the flow decelerates abruptly. For S t k ≪ 1 {\displaystyle \mathrm {Stk} \ll 1} , particles follow fluid streamlines closely. If S t k < 0.1 {\displaystyle \mathrm {Stk} <0.1} , tracing accuracy errors are below 1%. [ 3 ] The Stokes number provides a means of estimating the quality of PIV data sets, as previously discussed. However, a definition of a characteristic velocity or length scale may not be evident in all applications. Thus, a deeper insight of how a tracking delay arises could be drawn by simply defining the differential equations of a particle in the Stokes regime. A particle moving with the fluid at some velocity v p ( t ) {\displaystyle v_{p}(t)} will encounter a variable fluid velocity field as it advects. Let's assume the velocity of the fluid, in the Lagrangian frame of reference of the particle, is v f ( t ) {\displaystyle v_{f}(t)} . 
It is the difference between these velocities that will generate the drag force necessary to correct the particle path: Δ v ( t ) = v f ( t ) − v p ( t ) {\displaystyle \Delta v(t)=v_{f}(t)-v_{p}(t)} The stokes drag force is then: F D = 3 π μ d p Δ v {\displaystyle F_{D}=3\pi \mu d_{p}\Delta v} The particle mass is: m p = ρ p 4 3 π ( d p 2 ) 3 = ρ p π d p 3 6 {\displaystyle m_{p}=\rho _{p}{\frac {4}{3}}\pi {\bigg (}{\frac {d_{p}}{2}}{\bigg )}^{3}=\rho _{p}{\frac {\pi d_{p}^{3}}{6}}} Thus, the particle acceleration can be found through Newton's second law: d v p ( t ) d t = F D m p = 18 μ d p 2 ρ p Δ v ( t ) {\displaystyle {\frac {dv_{p}(t)}{dt}}={\frac {F_{D}}{m_{p}}}={\frac {18\mu }{{d_{p}}^{2}\rho _{p}}}\Delta v(t)} Note the relaxation time t 0 = ρ p d p 2 18 μ g {\displaystyle t_{0}={\frac {\rho _{p}d_{p}^{2}}{18\mu _{g}}}} can be replaced to yield: d v p ( t ) d t = 1 t 0 Δ v ( t ) {\displaystyle {\frac {dv_{p}(t)}{dt}}={\frac {1}{t_{0}}}\Delta v(t)} The first-order differential equation above can be solved through the Laplace transform method: t 0 s v p ( s ) = v f − v p ( s ) {\displaystyle t_{0}sv_{p}(s)=v_{f}-v_{p}(s)} v p ( s ) v f ( s ) = 1 t 0 s + 1 {\displaystyle {\frac {v_{p}(s)}{v_{f}(s)}}={\frac {1}{t_{0}s+1}}} The solution above, in the frequency domain, characterizes a first-order system with a characteristic time of t 0 {\displaystyle t_{0}} . Thus, the −3 dB gain (cut-off) frequency will be: f − 3 dB = 1 2 π t 0 {\displaystyle f_{-3{\text{ dB}}}={\frac {1}{2\pi t_{0}}}} The cut-off frequency and the particle transfer function, plotted on the side panel, allows for the assessment of PIV error in unsteady flow applications and its effect on turbulence spectral quantities and kinetic energy. The bias error in particle tracking discussed in the previous section is evident in the frequency domain, but it can be difficult to appreciate in cases where the particle motion is being tracked to perform flow field measurements (like in particle image velocimetry ). A simple but insightful solution to the above-mentioned differential equation is possible when the forcing function v f ( t ) = V u − Δ V H ( t ) {\displaystyle v_{f}(t)=V_{u}-\Delta VH(t)} is a Heaviside step function; representing particles going through a shockwave. In this case, V u {\displaystyle V_{u}} is the flow velocity upstream of the shock; whereas Δ V {\displaystyle \Delta V} is the velocity drop across the shock. The step response for a particle is a simple exponential: v p ( t ) = ( V u − Δ V ) + Δ V e − t / t 0 {\displaystyle v_{p}(t)=(V_{u}-\Delta V)+\Delta Ve^{-t/t_{0}}} To convert the velocity as a function of time to a particle velocity distribution as a function of distance, let's assume a 1-dimensional velocity jump in the x {\displaystyle x} direction. 
Let's assume x = 0 {\displaystyle x=0} is positioned where the shock wave is, and then integrate the previous equation to get: x particle = ∫ 0 Δ t v p ( t ) d t = ∫ 0 Δ t ( V u − Δ V ) d t + ∫ 0 Δ t Δ V e − t / t 0 d t {\displaystyle x_{\text{particle}}=\int _{0}^{\Delta t}v_{p}(t)dt=\int _{0}^{\Delta t}(V_{u}-\Delta V)dt+\int _{0}^{\Delta t}\Delta Ve^{-t/t_{0}}dt} x particle = Δ t ( V u − Δ V ) + Δ t Δ V ( 1 − e − Δ t / t 0 ) {\displaystyle x_{\text{particle}}=\Delta t(V_{u}-\Delta V)+\Delta t\Delta V(1-e^{-\Delta t/t_{0}})} Considering a relaxation time of Δ t = 3 t 0 {\displaystyle \Delta t=3t_{0}} (time to 95% velocity change), we have: x particle , 95 % = 3 t 0 ( V u − Δ V ) + 3 t 0 Δ V ( 1 − e − 3 ) {\displaystyle x_{{\text{particle}},95\%}=3t_{0}(V_{u}-\Delta V)+3t_{0}\Delta V(1-e^{-3})} x particle , 95 % = 3 t 0 ( V u − 0.05 Δ V ) {\displaystyle x_{{\text{particle}},95\%}=3t_{0}(V_{u}-0.05\Delta V)} This means the particle velocity would be settled to within 5% of the downstream velocity at x particle , 95 % {\displaystyle x_{{\text{particle}},95\%}} from the shock. In practice, this means a shock wave would look, to a PIV system, blurred by approximately this x particle , 95 % {\displaystyle x_{{\text{particle}},95\%}} distance. For example, consider a normal shock wave of Mach number M = 2 {\displaystyle M=2} at a stagnation temperature of 298 K. A propylene glycol particle of d p = 1 μ m {\displaystyle d_{p}=1~\mu {\text{m}}} would blur the flow by x particle , 95 % = 5 mm {\displaystyle x_{{\text{particle}},95\%}=5{\text{ mm}}} ; whereas a d p = 10 μ m {\displaystyle d_{p}=10~\mu {\text{m}}} would blur the flow by x particle , 95 % = 500 mm {\displaystyle x_{{\text{particle}},95\%}=500{\text{ mm}}} (which would, in most cases, yield unacceptable PIV results). Although a shock wave is the worst-case scenario of abrupt deceleration of a flow, it illustrates the effect of particle tracking error in PIV, which results in a blurring of the velocity fields acquired at the length scales of order x particle , 95 % {\displaystyle x_{{\text{particle}},95\%}} . The preceding analysis will not be accurate in the ultra-Stokesian regime. i.e. if the particle Reynolds number is much greater than unity. Assuming a Mach number much less than unity, a generalized form of the Stokes number was demonstrated by Israel & Rosner. 
[ 4 ] Stk e = Stk 24 Re o ∫ 0 Re o d Re ′ C D ( Re ′ ) Re ′ {\displaystyle {\text{Stk}}_{\text{e}}={\text{Stk}}{\frac {24}{{\text{Re}}_{o}}}\int _{0}^{{\text{Re}}_{o}}{\frac {d{\text{Re}}^{\prime }}{C_{D}({\text{Re}}^{\prime }){\text{Re}}^{\prime }}}} Where Re o {\displaystyle {\text{Re}}_{o}} is the "particle free-stream Reynolds number", Re o = ρ g | u | d p μ g {\displaystyle {\text{Re}}_{o}={\frac {\rho _{g}|\mathbf {u} |d_{p}}{\mu _{g}}}} An additional function ψ ( Re o ) {\displaystyle \psi ({\text{Re}}_{o})} was defined by; [ 4 ] this describes the non-Stokesian drag correction factor, Stk e = Stk ⋅ ψ ( Re o ) {\displaystyle {\text{Stk}}_{e}={\text{Stk}}\cdot \psi ({\text{Re}}_{o})} It follows that this function is defined by, ψ ( Re o ) = 24 Re o ∫ 0 Re o d Re ′ C D ( Re ′ ) Re ′ {\displaystyle \psi ({\text{Re}}_{o})={\frac {24}{{\text{Re}}_{o}}}\int _{0}^{{\text{Re}}_{o}}{\frac {d{\text{Re}}^{\prime }}{C_{D}({\text{Re}}^{\prime }){\text{Re}}^{\prime }}}} Considering the limiting particle free-stream Reynolds numbers, as Re o → 0 {\displaystyle {\text{Re}}_{o}\to 0} then C D ( Re o ) → 24 / Re o {\displaystyle C_{D}({\text{Re}}_{o})\to 24/{\text{Re}}_{o}} and therefore ψ → 1 {\displaystyle \psi \to 1} . Thus as expected there correction factor is unity in the Stokesian drag regime. Wessel & Righi [ 5 ] evaluated ψ {\displaystyle \psi } for C D ( Re ) {\displaystyle C_{D}({\text{Re}})} from the empirical correlation for drag on a sphere from Schiller & Naumann. [ 6 ] ψ ( Re o ) = 3 ( c Re o 1 / 3 − arctan ⁡ ( c Re o 1 / 3 ) ) c 3 / 2 Re o {\displaystyle \psi ({\text{Re}}_{o})={\frac {3({\sqrt {c}}{\text{Re}}_{o}^{1/3}-\arctan({\sqrt {c}}{\text{Re}}_{o}^{1/3}))}{c^{3/2}{\text{Re}}_{o}}}} Where the constant c = 0.158 {\displaystyle c=0.158} . The conventional Stokes number will significantly underestimate the drag force for large particle free-stream Reynolds numbers. Thus overestimating the tendency for particles to depart from the fluid flow direction. This will lead to errors in subsequent calculations or experimental comparisons. For example, the selective capture of particles by an aligned, thin-walled circular nozzle is given by Belyaev and Levin [ 7 ] as: c / c 0 = 1 + ( u 0 / u − 1 ) ( 1 − 1 1 + S t k ( 2 + 0.617 u / u 0 ) ) {\displaystyle c/c_{0}=1+(u_{0}/u-1)\left(1-{\frac {1}{1+\mathrm {Stk} (2+0.617u/u_{0})}}\right)} where c {\displaystyle c} is particle concentration, u {\displaystyle u} is speed, and the subscript 0 indicates conditions far upstream of the nozzle. The characteristic distance is the diameter of the nozzle. Here the Stokes number is calculated, S t k = u 0 V s d g {\displaystyle \mathrm {Stk} ={\frac {u_{0}V_{s}}{dg}}} where V s {\displaystyle V_{s}} is the particle's settling velocity, d {\displaystyle d} is the sampling tube's inner diameter, and g {\displaystyle g} is the acceleration of gravity.
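To make the tracer-fidelity discussion concrete, the following sketch (an editor's illustration; the seeding and flow values are assumptions, not taken from the article) computes the relaxation time t0 = ρp dp²/(18 μg), the resulting Stokes number for a given flow scale, the −3 dB response frequency of the first-order particle response, and the 95% shock-response distance 3 t0 (Vu − 0.05 ΔV) discussed above.

```python
import numpy as np

def relaxation_time(rho_p, d_p, mu_g):
    """Particle relaxation time in the Stokes drag regime: rho_p d_p^2 / (18 mu_g)."""
    return rho_p * d_p**2 / (18 * mu_g)

def stokes_number(t0, u0, l0):
    return t0 * u0 / l0

# Illustrative values: 1-micron oil droplet in air, 100 m/s flow,
# 10 mm characteristic obstacle size.
rho_p, d_p = 900.0, 1.0e-6        # particle density (kg/m^3) and diameter (m)
mu_g = 1.8e-5                     # air dynamic viscosity, Pa*s
u0, l0 = 100.0, 10e-3             # flow speed (m/s) and length scale (m)

t0 = relaxation_time(rho_p, d_p, mu_g)
stk = stokes_number(t0, u0, l0)
f_cut = 1 / (2 * np.pi * t0)      # -3 dB frequency of the first-order response

# 95% response distance behind a velocity step V_u -> V_u - dV (illustrative values):
V_u, dV = 500.0, 200.0            # m/s
x95 = 3 * t0 * (V_u - 0.05 * dV)

print(t0, stk, f_cut, x95)
# t0 ~ 2.8e-6 s, Stk ~ 0.03 (< 0.1, acceptable tracing), f_cut ~ 57 kHz, x95 ~ 4 mm.
```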
https://en.wikipedia.org/wiki/Stokes_number
The Stokes operator , named after George Gabriel Stokes , is an unbounded linear operator used in the theory of partial differential equations , specifically in the fields of fluid dynamics and electromagnetics . If we define P σ {\displaystyle P_{\sigma }} as the Leray projection onto divergence free vector fields , then the Stokes Operator A {\displaystyle A} is defined by where Δ ≡ ∇ 2 {\displaystyle \Delta \equiv \nabla ^{2}} is the Laplacian . Since A {\displaystyle A} is unbounded, we must also give its domain of definition, which is defined as D ( A ) = H 2 ∩ V {\displaystyle {\mathcal {D}}(A)=H^{2}\cap V} , where V = { u → ∈ ( H 0 1 ( Ω ) ) n | div u → = 0 } {\displaystyle V=\{{\vec {u}}\in (H_{0}^{1}(\Omega ))^{n}|\operatorname {div} \,{\vec {u}}=0\}} . Here, Ω {\displaystyle \Omega } is a bounded open set in R n {\displaystyle \mathbb {R} ^{n}} (usually n = 2 or 3), H 2 ( Ω ) {\displaystyle H^{2}(\Omega )} and H 0 1 ( Ω ) {\displaystyle H_{0}^{1}(\Omega )} are the standard Sobolev spaces , and the divergence of u → {\displaystyle {\vec {u}}} is taken in the distribution sense. For a given domain Ω {\displaystyle \Omega } which is open, bounded, and has C 2 {\displaystyle C^{2}} boundary, the Stokes operator A {\displaystyle A} is a self-adjoint positive-definite operator with respect to the L 2 {\displaystyle L^{2}} inner product. It has an orthonormal basis of eigenfunctions { w k } k = 1 ∞ {\displaystyle \{w_{k}\}_{k=1}^{\infty }} corresponding to eigenvalues { λ k } k = 1 ∞ {\displaystyle \{\lambda _{k}\}_{k=1}^{\infty }} which satisfy and λ k → ∞ {\displaystyle \lambda _{k}\rightarrow \infty } as k → ∞ {\displaystyle k\rightarrow \infty } . Note that the smallest eigenvalue is unique and non-zero. These properties allow one to define powers of the Stokes operator. Let α > 0 {\displaystyle \alpha >0} be a real number. We define A α {\displaystyle A^{\alpha }} by its action on u → ∈ D ( A ) {\displaystyle {\vec {u}}\in {\mathcal {D}}(A)} : where u k := ( u → , w k → ) {\displaystyle u_{k}:=({\vec {u}},{\vec {w_{k}}})} and ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} is the L 2 ( Ω ) {\displaystyle L^{2}(\Omega )} inner product. The inverse A − 1 {\displaystyle A^{-1}} of the Stokes operator is a bounded, compact, self-adjoint operator in the space H := { u → ∈ ( L 2 ( Ω ) ) n | div u → = 0 and γ ( u → ) = 0 } {\displaystyle H:=\{{\vec {u}}\in (L^{2}(\Omega ))^{n}|\operatorname {div} \,{\vec {u}}=0{\text{ and }}\gamma ({\vec {u}})=0\}} , where γ {\displaystyle \gamma } is the trace operator . Furthermore, A − 1 : H → V {\displaystyle A^{-1}:H\rightarrow V} is injective.
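The spectral definition of A^α above lends itself to direct computation once the eigenvalues λk and the coefficients uk = (u, wk) are known. The toy sketch below is purely illustrative: the eigenvalues and coefficients are made-up stand-ins (computing the true eigenpairs of the Stokes operator on a given Ω requires a numerical PDE solver); it only demonstrates the coefficientwise action of A^α and the composition property A^α A^β u = A^(α+β) u.

```python
import numpy as np

# Made-up spectral data standing in for Stokes eigenvalues (increasing, positive)
# and for the expansion coefficients u_k = (u, w_k) of some divergence-free field u.
lam = np.arange(1, 21, dtype=float) ** 2
u_k = 1.0 / np.arange(1, 21, dtype=float) ** 3

def power_of_A(coeffs, alpha):
    """Coefficients of A^alpha u in the eigenbasis: (A^alpha u)_k = lambda_k^alpha u_k."""
    return lam**alpha * coeffs

half = power_of_A(u_k, 0.5)                          # coefficients of A^{1/2} u
composed = power_of_A(power_of_A(u_k, 0.25), 0.25)   # A^{1/4}(A^{1/4} u)
print(np.allclose(half, composed))                    # True: powers compose as expected

# By Parseval, the L^2 norm of A^{1/2} u follows directly from the coefficients.
print(np.sqrt(np.sum(half**2)))
```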
https://en.wikipedia.org/wiki/Stokes_operator
In fluid dynamics, Stokes problem also known as Stokes second problem or sometimes referred to as Stokes boundary layer or Oscillating boundary layer is a problem of determining the flow created by an oscillating solid surface, named after Sir George Stokes . This is considered one of the simplest unsteady problems that has an exact solution for the Navier–Stokes equations . [ 1 ] [ 2 ] In turbulent flow, this is still named a Stokes boundary layer, but now one has to rely on experiments , numerical simulations or approximate methods in order to obtain useful information on the flow. Consider an infinitely long plate which is oscillating with a velocity U cos ⁡ ω t {\displaystyle U\cos \omega t} in the x {\displaystyle x} direction, which is located at y = 0 {\displaystyle y=0} in an infinite domain of fluid, where ω {\displaystyle \omega } is the frequency of the oscillations. The incompressible Navier–Stokes equations reduce to where ν {\displaystyle \nu } is the kinematic viscosity . The pressure gradient does not enter into the problem. The initial, no-slip condition on the wall is and the second boundary condition is due to the fact that the motion at y = 0 {\displaystyle y=0} is not felt at infinity. The flow is only due to the motion of the plate, there is no imposed pressure gradient. The initial condition is not required because of periodicity. Since both the equation and the boundary conditions are linear, the velocity can be written as the real part of some complex function because cos ⁡ ω t = ℜ e i ω t {\displaystyle \cos \omega t=\Re e^{i\omega t}} . Substituting this into the partial differential equation reduces it to ordinary differential equation with boundary conditions The solution to the above problem is The disturbance created by the oscillating plate travels as the transverse wave through the fluid, but it is highly damped by the exponential factor. The depth of penetration δ = 2 ν / ω {\displaystyle \delta ={\sqrt {2\nu /\omega }}} of this wave decreases with the frequency of the oscillation, but increases with the kinematic viscosity of the fluid. The force per unit area exerted on the plate by the fluid is There is a phase shift between the oscillation of the plate and the force created. An important observation from Stokes' solution for the oscillating Stokes flow is that vorticity oscillations are confined to a thin boundary layer and damp exponentially when moving away from the wall. [ 7 ] This observation is also valid for the case of a turbulent boundary layer. Outside the Stokes boundary layer – which is often the bulk of the fluid volume – the vorticity oscillations may be neglected. To good approximation, the flow velocity oscillations are irrotational outside the boundary layer, and potential flow theory can be applied to the oscillatory part of the motion. This significantly simplifies the solution of these flow problems, and is often applied in the irrotational flow regions of sound waves and water waves . If the fluid domain is bounded by an upper, stationary wall, located at a height y = h {\displaystyle y=h} , the flow velocity is given by where λ = ω / ( 2 ν ) {\displaystyle \lambda ={\sqrt {\omega /(2\nu )}}} . Suppose the extent of the fluid domain be 0 < y < h {\displaystyle 0<y<h} with y = h {\displaystyle y=h} representing a free surface. Then the solution as shown by Chia-Shun Yih in 1968 [ 8 ] is given by where δ = 2 ν / ω . 
{\displaystyle \delta ={\sqrt {2\nu /\omega }}.} The case for an oscillating far-field flow, with the plate held at rest, can easily be constructed from the previous solution for an oscillating plate by using linear superposition of solutions. Consider a uniform velocity oscillation u ( ∞ , t ) = U ∞ cos ⁡ ω t {\displaystyle u(\infty ,t)=U_{\infty }\cos \omega t} far away from the plate and a vanishing velocity at the plate u ( 0 , t ) = 0 {\displaystyle u(0,t)=0} . Unlike the stationary fluid in the original problem, the pressure gradient here at infinity must be a harmonic function of time. The solution is then given by which is zero at the wall y = 0 , corresponding with the no-slip condition for a wall at rest. This situation is often encountered in sound waves near a solid wall, or for the fluid motion near the sea bed in water waves . The vorticity, for the oscillating flow near a wall at rest, is equal to the vorticity in case of an oscillating plate but of opposite sign. Consider an infinitely long cylinder of radius a {\displaystyle a} exhibiting torsional oscillation with angular velocity Ω cos ⁡ ω t {\displaystyle \Omega \cos \omega t} where ω {\displaystyle \omega } is the frequency. Then the velocity approaches after the initial transient phase to [ 9 ] where K 1 {\displaystyle K_{1}} is the modified Bessel function of the second kind. This solution can be expressed with real argument [ 10 ] as: where k e i {\displaystyle \mathrm {kei} } and k e r {\displaystyle \mathrm {ker} } are Kelvin functions and R ω {\displaystyle R_{\omega }} is to the dimensionless oscillatory Reynolds number defined as R ω = ω a 2 / ν {\displaystyle R_{\omega }=\omega a^{2}/\nu } , being ν {\displaystyle \nu } the kinematic viscosity. If the cylinder oscillates in the axial direction with velocity U cos ⁡ ω t {\displaystyle U\cos \omega t} , then the velocity field is where K 0 {\displaystyle K_{0}} is the modified Bessel function of the second kind. In the Couette flow , instead of the translational motion of one of the plate, an oscillation of one plane will be executed. If we have a bottom wall at rest at y = 0 {\displaystyle y=0} and the upper wall at y = h {\displaystyle y=h} is executing an oscillatory motion with velocity U cos ⁡ ω t {\displaystyle U\cos \omega t} , then the velocity field is given by The frictional force per unit area on the moving plane is − μ U ℜ { k cot ⁡ k h } {\displaystyle -\mu U\Re \{k\cot kh\}} and on the fixed plane is μ U ℜ { k csc ⁡ k h } {\displaystyle \mu U\Re \{k\csc kh\}} .
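The classical solution whose properties are described above is u(y, t) = U e^(−y/δ) cos(ωt − y/δ) with δ = √(2ν/ω), and the oscillating far-field case follows by the superposition mentioned in the text. A short Python sketch (parameter values chosen for illustration only):

```python
import numpy as np

def stokes_layer_velocity(y, t, U, omega, nu):
    """Velocity above a plate oscillating as U*cos(omega*t) at y = 0
    (Stokes second problem): u = U * exp(-y/delta) * cos(omega*t - y/delta)."""
    delta = np.sqrt(2.0 * nu / omega)          # penetration depth
    return U * np.exp(-y / delta) * np.cos(omega * t - y / delta)

def oscillating_freestream_velocity(y, t, U_inf, omega, nu):
    """Far-field flow oscillating over a wall at rest (linear superposition):
    u -> U_inf*cos(omega*t) as y -> infinity, u = 0 at the wall y = 0."""
    delta = np.sqrt(2.0 * nu / omega)
    return U_inf * (np.cos(omega * t)
                    - np.exp(-y / delta) * np.cos(omega * t - y / delta))

# Example: water (nu ~ 1e-6 m^2/s) next to a plate oscillating at 1 Hz.
nu, omega, U = 1.0e-6, 2.0 * np.pi * 1.0, 0.1
delta = np.sqrt(2.0 * nu / omega)
y = np.linspace(0.0, 5.0 * delta, 6)
print(f"penetration depth delta = {delta*1e3:.3f} mm")
print(stokes_layer_velocity(y, t=0.0, U=U, omega=omega, nu=nu))
```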
https://en.wikipedia.org/wiki/Stokes_problem
The Stokes radius or Stokes–Einstein radius of a solute is the radius of a hard sphere that diffuses at the same rate as that solute. Named after George Gabriel Stokes , it is closely related to solute mobility, factoring in not only size but also solvent effects. A smaller ion with stronger hydration, for example, may have a greater Stokes radius than a larger ion with weaker hydration. This is because the smaller ion drags a greater number of water molecules with it as it moves through the solution. [ 1 ] Stokes radius is sometimes used synonymously with effective hydrated radius in solution . [ 2 ] Hydrodynamic radius , R H , can refer to the Stokes radius of a polymer or other macromolecule . According to Stokes’ law , a perfect sphere traveling through a viscous liquid feels a drag force proportional to the frictional coefficient f {\displaystyle f} : F drag = f s = ( 6 π η a ) s {\displaystyle F_{\text{drag}}=fs=(6\pi \eta a)s} where η {\displaystyle \eta } is the liquid's viscosity , s {\displaystyle s} is the sphere's drift speed , and a {\displaystyle a} is its radius. Because ionic mobility μ {\displaystyle \mu } is directly proportional to drift speed, it is inversely proportional to the frictional coefficient: μ = z e f {\displaystyle \mu ={\frac {ze}{f}}} where z e {\displaystyle ze} represents ionic charge in integer multiples of electron charges. In 1905, Albert Einstein found the diffusion coefficient D {\displaystyle D} of an ion to be proportional to its mobility constant: D = μ k B T q = k B T f {\displaystyle D={\frac {\mu k_{\text{B}}T}{q}}={\frac {k_{\text{B}}T}{f}}} where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant and q {\displaystyle q} is electrical charge . This is known as the Einstein relation . Substituting in the frictional coefficient of a perfect sphere from Stokes’ law yields D = k B T 6 π η a {\displaystyle D={\frac {k_{\text{B}}T}{6\pi \eta a}}} which can be rearranged to solve for a {\displaystyle a} , the radius: R H = a = k B T 6 π η D {\displaystyle R_{H}=a={\frac {k_{\text{B}}T}{6\pi \eta D}}} In non-spherical systems, the frictional coefficient is determined by the size and shape of the species under consideration. Stokes radii are often determined experimentally by gel-permeation or gel-filtration chromatography. [ 3 ] [ 4 ] [ 5 ] [ 6 ] They are useful in characterizing biological species due to the size-dependence of processes like enzyme-substrate interaction and membrane diffusion. [ 5 ] The Stokes radii of sediment, soil, and aerosol particles are considered in ecological measurements and models. [ 7 ] They likewise play a role in the study of polymer and other macromolecular systems. [ 5 ]
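The rearranged Stokes–Einstein relation above is a one-line calculation. A minimal Python sketch (the diffusion coefficient and solvent values are illustrative, not from the article):

```python
import math

def stokes_radius(D, T=298.15, eta=8.9e-4):
    """Stokes-Einstein radius R_H = k_B * T / (6 * pi * eta * D), in metres.
    D in m^2/s, T in kelvin, eta in Pa*s (default roughly water at 25 C)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (6.0 * math.pi * eta * D)

# Example: a solute diffusing at D = 1.0e-10 m^2/s in water at 25 C.
print(f"R_H ~ {stokes_radius(1.0e-10) * 1e9:.2f} nm")
```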
https://en.wikipedia.org/wiki/Stokes_radius
In fluid dynamics , the Stokes stream function is used to describe the streamlines and flow velocity in a three-dimensional incompressible flow with axisymmetry . A surface with a constant value of the Stokes stream function encloses a streamtube , everywhere tangential to the flow velocity vectors. Further, the volume flux within this streamtube is constant, and all the streamlines of the flow are located on this surface. The velocity field associated with the Stokes stream function is solenoidal —it has zero divergence . This stream function is named in honor of George Gabriel Stokes . Consider a cylindrical coordinate system ( ρ , φ , z ), with the z –axis the line around which the incompressible flow is axisymmetrical, φ the azimuthal angle and ρ the distance to the z –axis. Then the flow velocity components u ρ and u z can be expressed in terms of the Stokes stream function Ψ {\displaystyle \Psi } by: [ 1 ] The azimuthal velocity component u φ does not depend on the stream function. Due to the axisymmetry, all three velocity components ( u ρ , u φ , u z ) only depend on ρ and z and not on the azimuth φ . The volume flux, through the surface bounded by a constant value ψ of the Stokes stream function, is equal to 2π ψ . In spherical coordinates ( r , θ , φ ), r is the radial distance from the origin , θ is the zenith angle and φ is the azimuthal angle . In axisymmetric flow, with θ = 0 the rotational symmetry axis, the quantities describing the flow are again independent of the azimuth φ . The flow velocity components u r and u θ are related to the Stokes stream function Ψ {\displaystyle \Psi } through: [ 2 ] Again, the azimuthal velocity component u φ is not a function of the Stokes stream function ψ . The volume flux through a stream tube, bounded by a surface of constant ψ , equals 2π ψ , as before. The vorticity is defined as: with ϕ ^ {\displaystyle {\boldsymbol {\hat {\phi }}}} the unit vector in the ϕ {\displaystyle \phi \,} –direction. From the definition of the curl in spherical coordinates : First notice that the r {\displaystyle r} and θ {\displaystyle \theta } components are equal to 0. Secondly substitute u r {\displaystyle u_{r}} and u θ {\displaystyle u_{\theta }} into ω ϕ . {\displaystyle \omega _{\phi }.} The result is: Next the following algebra is performed: As a result, from the calculation the vorticity vector is found to be equal to: The cylindrical and spherical coordinate systems are related through As explained in the general stream function article, definitions using an opposite sign convention – for the relationship between the Stokes stream function and flow velocity – are also in use. [ 3 ] In cylindrical coordinates, the divergence of the velocity field u becomes: [ 4 ] as expected for an incompressible flow. And in spherical coordinates: [ 5 ] From calculus it is known that the gradient vector ∇ Ψ {\displaystyle \nabla \Psi } is normal to the curve Ψ = C {\displaystyle \Psi =C} (see e.g. Level set#Level sets versus the gradient ). If it is shown that everywhere u ⋅ ∇ Ψ = 0 , {\displaystyle {\boldsymbol {u}}\cdot \nabla \Psi =0,} using the formula for u {\displaystyle {\boldsymbol {u}}} in terms of Ψ , {\displaystyle \Psi ,} then this proves that level curves of Ψ {\displaystyle \Psi } are streamlines. In cylindrical coordinates, and So that And in spherical coordinates and So that
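The defining relations can be checked symbolically. The Python/SymPy sketch below takes the Stokes stream function of a uniform stream U along the symmetry axis, Ψ = ½ U r² sin²θ, recovers u_r = U cos θ and u_θ = −U sin θ, and verifies that the velocity field is divergence-free. It uses one common sign convention (u_r = ∂Ψ/∂θ / (r² sin θ), u_θ = −∂Ψ/∂r / (r sin θ)); as noted above, the opposite convention is also in use and simply flips both signs.

```python
import sympy as sp

r, theta, U = sp.symbols("r theta U", positive=True)

# Stokes stream function of a uniform stream U along the symmetry axis.
Psi = sp.Rational(1, 2) * U * r**2 * sp.sin(theta)**2

u_r     =  sp.diff(Psi, theta) / (r**2 * sp.sin(theta))   # radial velocity
u_theta = -sp.diff(Psi, r) / (r * sp.sin(theta))          # polar velocity

print(sp.simplify(u_r))       # -> U*cos(theta)
print(sp.simplify(u_theta))   # -> -U*sin(theta)

# Axisymmetric divergence in spherical coordinates vanishes (incompressible flow):
div = (sp.diff(r**2 * u_r, r) / r**2
       + sp.diff(u_theta * sp.sin(theta), theta) / (r * sp.sin(theta)))
print(sp.simplify(div))       # -> 0
```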
https://en.wikipedia.org/wiki/Stokes_stream_function
In fluid dynamics , a Stokes wave is a nonlinear and periodic surface wave on an inviscid fluid layer of constant mean depth. This type of modelling has its origins in the mid 19th century when Sir George Stokes – using a perturbation series approach, now known as the Stokes expansion – obtained approximate solutions for nonlinear wave motion. Stokes's wave theory is of direct practical use for waves on intermediate and deep water. It is used in the design of coastal and offshore structures , in order to determine the wave kinematics ( free surface elevation and flow velocities ). The wave kinematics are subsequently needed in the design process to determine the wave loads on a structure. [ 2 ] For long waves (as compared to depth) – and using only a few terms in the Stokes expansion – its applicability is limited to waves of small amplitude . In such shallow water, a cnoidal wave theory often provides better periodic-wave approximations. While, in the strict sense, Stokes wave refers to a progressive periodic wave of permanent form, the term is also used in connection with standing waves [ 3 ] and even random waves. [ 4 ] [ 5 ] The examples below describe Stokes waves under the action of gravity (without surface tension effects) in case of pure wave motion, so without an ambient mean current. According to Stokes's third-order theory, the free surface elevation η , the velocity potential Φ, the phase speed (or celerity) c and the wave phase θ are, for a progressive surface gravity wave on deep water – i.e. the fluid layer has infinite depth: [ 6 ] η ( x , t ) = a { [ 1 − 1 16 ( k a ) 2 ] cos ⁡ θ + 1 2 ( k a ) cos ⁡ 2 θ + 3 8 ( k a ) 2 cos ⁡ 3 θ } + O ( ( k a ) 4 ) , Φ ( x , z , t ) = a g k e k z sin ⁡ θ + O ( ( k a ) 4 ) , c = ω k = ( 1 + 1 2 ( k a ) 2 ) g k + O ( ( k a ) 4 ) , and θ ( x , t ) = k x − ω t , {\displaystyle {\begin{aligned}\eta (x,t)=&a\left\{\left[1-{\tfrac {1}{16}}(ka)^{2}\right]\cos \theta +{\tfrac {1}{2}}(ka)\,\cos 2\theta +{\tfrac {3}{8}}(ka)^{2}\,\cos 3\theta \right\}+{\mathcal {O}}\left((ka)^{4}\right),\\\Phi (x,z,t)=&a{\sqrt {\frac {g}{k}}}\,{\text{e}}^{kz}\,\sin \theta +{\mathcal {O}}\left((ka)^{4}\right),\\c=&{\frac {\omega }{k}}=\left(1+{\tfrac {1}{2}}(ka)^{2}\right)\,{\sqrt {\frac {g}{k}}}+{\mathcal {O}}\left((ka)^{4}\right),{\text{ and}}\\\theta (x,t)=&kx-\omega t,\end{aligned}}} where The expansion parameter ka is known as the wave steepness. The phase speed increases with increasing nonlinearity ka of the waves. The wave height H , being the difference between the surface elevation η at a crest and a trough , is: [ 7 ] H = 2 a ( 1 + 3 8 k 2 a 2 ) . {\displaystyle H=2a\,\left(1+{\tfrac {3}{8}}\,k^{2}a^{2}\right).} Note that the second- and third-order terms in the velocity potential Φ are zero. Only at fourth order do contributions deviating from first-order theory – i.e. Airy wave theory – appear. [ 6 ] Up to third order the orbital velocity field u = ∇ Φ consists of a circular motion of the velocity vector at each position ( x , z ). As a result, the surface elevation of deep-water waves is to a good approximation trochoidal , as already noted by Stokes (1847) . [ 8 ] Stokes further observed, that although (in this Eulerian description) the third-order orbital velocity field consists of a circular motion at each point, the Lagrangian paths of fluid parcels are not closed circles. This is due to the reduction of the velocity amplitude at increasing depth below the surface. This Lagrangian drift of the fluid parcels is known as the Stokes drift . 
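The third-order deep-water expressions above are straightforward to evaluate. A Python sketch (the wavelength and amplitude are illustrative):

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def stokes3_deep(x, t, a, k):
    """Third-order Stokes free-surface elevation and phase speed in deep water."""
    ka = k * a
    c = (1.0 + 0.5 * ka**2) * np.sqrt(g / k)       # amplitude-dependent celerity
    theta = k * x - k * c * t                      # wave phase (omega = k * c)
    eta = a * ((1.0 - ka**2 / 16.0) * np.cos(theta)
               + 0.5 * ka * np.cos(2.0 * theta)
               + 0.375 * ka**2 * np.cos(3.0 * theta))
    return eta, c

# Example: wavelength 100 m, amplitude 3 m  ->  steepness ka ~ 0.19.
k, a = 2.0 * np.pi / 100.0, 3.0
crest, c = stokes3_deep(0.0, 0.0, a, k)            # higher, sharper crest
trough, _ = stokes3_deep(np.pi / k, 0.0, a, k)     # shallower, flatter trough
print(f"c = {c:.2f} m/s, crest = {crest:.2f} m, trough = {trough:.2f} m")
```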
[ 8 ] The surface elevation η and the velocity potential Φ are, according to Stokes's second-order theory of surface gravity waves on a fluid layer of mean depth h : [ 6 ] [ 9 ] η ( x , t ) = a { cos θ + k a 3 − σ 2 4 σ 3 cos 2 θ } + O ( ( k a ) 3 ) , Φ ( x , z , t ) = a ω k 1 sinh k h × { cosh k ( z + h ) sin θ + k a 3 cosh 2 k ( z + h ) 8 sinh 3 k h sin 2 θ } − ( k a ) 2 1 2 sinh 2 k h g t k + O ( ( k a ) 3 ) , c = ω k = g k σ + O ( ( k a ) 2 ) , σ = tanh k h and θ ( x , t ) = k x − ω t . {\displaystyle {\begin{aligned}\eta (x,t)=&a\left\{\cos \,\theta +ka\,{\frac {3-\sigma ^{2}}{4\,\sigma ^{3}}}\,\cos \,2\theta \right\}+{\mathcal {O}}\left((ka)^{3}\right),\\\Phi (x,z,t)=&a\,{\frac {\omega }{k}}\,{\frac {1}{\sinh \,kh}}\\&\times \left\{\cosh \,k(z+h)\sin \,\theta +ka\,{\frac {3\cosh \,2k(z+h)}{8\,\sinh ^{3}\,kh}}\,\sin \,2\theta \right\}\\&-(ka)^{2}\,{\frac {1}{2\,\sinh \,2kh}}\,{\frac {g\,t}{k}}+{\mathcal {O}}\left((ka)^{3}\right),\\c=&{\frac {\omega }{k}}={\sqrt {{\frac {g}{k}}\,\sigma }}+{\mathcal {O}}\left((ka)^{2}\right),\\\sigma =&\tanh \,kh\quad {\text{and}}\quad \theta (x,t)=kx-\omega t.\end{aligned}}} Observe that for finite depth the velocity potential Φ contains a linear drift in time, independent of position ( x and z ). Both this temporal drift and the double-frequency term (containing sin 2θ) in Φ vanish for deep-water waves. The ratio S of the free-surface amplitudes at second order and first order – according to Stokes's second-order theory – is: [ 6 ] S = k a 3 − tanh 2 k h 4 tanh 3 k h . {\displaystyle {\mathcal {S}}=ka\,{\frac {3-\tanh ^{2}\,kh}{4\,\tanh ^{3}\,kh}}.} In deep water, for large kh the ratio S has the asymptote lim k h → ∞ S = 1 2 k a . {\displaystyle \lim _{kh\to \infty }{\mathcal {S}}={\frac {1}{2}}\,ka.} For long waves, i.e. small kh , the ratio S behaves as lim k h → 0 S = 3 4 k a ( k h ) 3 , {\displaystyle \lim _{kh\to 0}{\mathcal {S}}={\frac {3}{4}}\,{\frac {ka}{(kh)^{3}}},} or, in terms of the wave height H = 2 a and wavelength λ = 2 π / k : lim k h → 0 S = 3 32 π 2 H λ 2 h 3 = 3 32 π 2 U , {\displaystyle \lim _{kh\to 0}{\mathcal {S}}={\frac {3}{32\,\pi ^{2}}}\,{\frac {H\,\lambda ^{2}}{h^{3}}}={\frac {3}{32\,\pi ^{2}}}\,{\mathcal {U}},} with U ≡ H λ 2 h 3 . {\displaystyle {\mathcal {U}}\equiv {\frac {H\,\lambda ^{2}}{h^{3}}}.} Here U is the Ursell parameter (or Stokes parameter). For long waves ( λ ≫ h ) of small height H , i.e. U ≪ 32π 2 /3 ≈ 100 , second-order Stokes theory is applicable. Otherwise, for fairly long waves ( λ > 7 h ) of appreciable height H a cnoidal wave description is more appropriate. [ 6 ] According to Hedges, fifth-order Stokes theory is applicable for U < 40 , and otherwise fifth-order cnoidal wave theory is preferable. [ 10 ] [ 11 ] For Stokes waves under the action of gravity, the third-order dispersion relation is – according to Stokes's first definition of celerity : [ 9 ] ω 2 = ( g k tanh k h ) { 1 + 9 − 10 σ 2 + 9 σ 4 8 σ 4 ( k a ) 2 } + O ( ( k a ) 4 ) , with σ = tanh k h . {\displaystyle {\begin{aligned}\omega ^{2}&=\left(gk\,\tanh \,kh\right)\;\left\{1+{\frac {9-10\,\sigma ^{2}+9\,\sigma ^{4}}{8\,\sigma ^{4}}}\,(ka)^{2}\right\}+{\mathcal {O}}\left((ka)^{4}\right),\\&\qquad {\text{with}}\\\sigma &=\tanh \,kh.\end{aligned}}} This third-order dispersion relation is a direct consequence of avoiding secular terms , when inserting the second-order Stokes solution into the third-order equations (of the perturbation series for the periodic wave problem). 
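The third-order dispersion relation above, and its deep- and shallow-water limits given next, can be evaluated numerically; the Python sketch below compares it with linear (Airy) theory for an illustrative wave.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def stokes3_dispersion(k, h, a):
    """Third-order Stokes dispersion: omega^2 = g k tanh(kh) * {1 + C * (ka)^2},
    with C = (9 - 10 sigma^2 + 9 sigma^4) / (8 sigma^4) and sigma = tanh(kh)."""
    sigma = np.tanh(k * h)
    C = (9.0 - 10.0 * sigma**2 + 9.0 * sigma**4) / (8.0 * sigma**4)
    return np.sqrt(g * k * sigma * (1.0 + C * (k * a)**2))

# Example: 100 m wavelength in 30 m of water, 2 m amplitude, versus Airy theory.
k, h, a = 2.0 * np.pi / 100.0, 30.0, 2.0
omega_lin = np.sqrt(g * k * np.tanh(k * h))
omega_3rd = stokes3_dispersion(k, h, a)
print(f"linear:    omega = {omega_lin:.4f} rad/s, c = {omega_lin/k:.2f} m/s")
print(f"3rd order: omega = {omega_3rd:.4f} rad/s, c = {omega_3rd/k:.2f} m/s")
```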
In deep water (short wavelength compared to the depth): lim k h → ∞ ω 2 = g k { 1 + ( k a ) 2 } + O ( ( k a ) 4 ) , {\displaystyle \lim _{kh\to \infty }\omega ^{2}=gk\,\left\{1+\left(ka\right)^{2}\right\}+{\mathcal {O}}\left((ka)^{4}\right),} and in shallow water (long wavelengths compared to the depth): lim k h → 0 ω 2 = k 2 g h { 1 + 9 8 ( k a ) 2 ( k h ) 4 } + O ( ( k a ) 4 ) . {\displaystyle \lim _{kh\to 0}\omega ^{2}=k^{2}\,gh\,\left\{1+{\frac {9}{8}}\,{\frac {\left(ka\right)^{2}}{\left(kh\right)^{4}}}\right\}+{\mathcal {O}}\left((ka)^{4}\right).} As shown above , the long-wave Stokes expansion for the dispersion relation will only be valid for small enough values of the Ursell parameter: U ≪ 100 . A fundamental problem in finding solutions for surface gravity waves is that boundary conditions have to be applied at the position of the free surface , which is not known beforehand and is thus a part of the solution to be found. Sir George Stokes solved this nonlinear wave problem in 1847 by expanding the relevant potential flow quantities in a Taylor series around the mean (or still) surface elevation. [ 12 ] As a result, the boundary conditions can be expressed in terms of quantities at the mean (or still) surface elevation (which is fixed and known). Next, a solution for the nonlinear wave problem (including the Taylor series expansion around the mean or still surface elevation) is sought by means of a perturbation series – known as the Stokes expansion – in terms of a small parameter, most often the wave steepness. The unknown terms in the expansion can be solved sequentially. [ 6 ] [ 8 ] Often, only a small number of terms is needed to provide a solution of sufficient accuracy for engineering purposes. [ 11 ] Typical applications are in the design of coastal and offshore structures , and of ships . Another property of nonlinear waves is that the phase speed of nonlinear waves depends on the wave height . In a perturbation-series approach, this easily gives rise to a spurious secular variation of the solution, in contradiction with the periodic behaviour of the waves. Stokes solved this problem by also expanding the dispersion relationship into a perturbation series, by a method now known as the Lindstedt–Poincaré method . [ 6 ] Stokes's wave theory , when using a low order of the perturbation expansion (e.g. up to second, third or fifth order), is valid for nonlinear waves on intermediate and deep water, that is for wavelengths ( λ ) not large as compared with the mean depth ( h ). In shallow water , the low-order Stokes expansion breaks down (gives unrealistic results) for appreciable wave amplitude (as compared to the depth). Then, Boussinesq approximations are more appropriate. Further approximations on Boussinesq-type (multi-directional) wave equations lead – for one-way wave propagation – to the Korteweg–de Vries equation or the Benjamin–Bona–Mahony equation . Like (near) exact Stokes-wave solutions, [ 14 ] these two equations have solitary wave ( soliton ) solutions, besides periodic-wave solutions known as cnoidal waves . [ 11 ] Already in 1914, Wilton extended the Stokes expansion for deep-water surface gravity waves to tenth order, although introducing errors at the eight order. [ 15 ] A fifth-order theory for finite depth was derived by De in 1955. [ 16 ] For engineering use, the fifth-order formulations of Fenton are convenient, applicable to both Stokes first and second definition of phase speed (celerity). 
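The Ursell parameter gives a quick way of deciding which theory applies, using the thresholds quoted above (second-order Stokes for U well below ≈100; Hedges' criterion of U < 40 for fifth-order Stokes versus cnoidal theory). A small Python sketch with illustrative wave conditions:

```python
def ursell(H, wavelength, depth):
    """Ursell parameter U = H * lambda**2 / h**3."""
    return H * wavelength**2 / depth**3

def suggest_theory(H, wavelength, depth):
    """Rough guidance quoted in the text: Stokes theory for U below about 40
    (Hedges), otherwise a cnoidal wave description is preferable."""
    U = ursell(H, wavelength, depth)
    return U, ("Stokes (perturbation) theory" if U < 40.0 else "cnoidal wave theory")

for H, lam, h in [(1.0, 50.0, 20.0), (1.0, 100.0, 3.0)]:
    U, theory = suggest_theory(H, lam, h)
    print(f"H={H} m, lambda={lam} m, h={h} m  ->  U = {U:7.1f}  ->  {theory}")
```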
[ 17 ] The demarcation between when fifth-order Stokes theory is preferable over fifth-order cnoidal wave theory is for Ursell parameters below about 40. [ 10 ] [ 11 ] Different choices for the frame of reference and expansion parameters are possible in Stokes-like approaches to the nonlinear wave problem. In 1880, Stokes himself inverted the dependent and independent variables, by taking the velocity potential and stream function as the independent variables, and the coordinates ( x , z ) as the dependent variables, with x and z being the horizontal and vertical coordinates respectively. [ 18 ] This has the advantage that the free surface, in a frame of reference in which the wave is steady (i.e. moving with the phase velocity), corresponds with a line on which the stream function is a constant. Then the free surface location is known beforehand, and not an unknown part of the solution. The disadvantage is that the radius of convergence of the rephrased series expansion reduces. [ 19 ] Another approach is by using the Lagrangian frame of reference , following the fluid parcels . The Lagrangian formulations show enhanced convergence, as compared to the formulations in both the Eulerian frame , and in the frame with the potential and streamfunction as independent variables. [ 20 ] [ 21 ] An exact solution for nonlinear pure capillary waves of permanent form, and for infinite fluid depth, was obtained by Crapper in 1957. Note that these capillary waves – being short waves forced by surface tension , if gravity effects are negligible – have sharp troughs and flat crests. This contrasts with nonlinear surface gravity waves, which have sharp crests and flat troughs. [ 22 ] By use of computer models, the Stokes expansion for surface gravity waves has been continued, up to high (117th) order by Schwartz (1974) . Schwartz has found that the amplitude a (or a 1 ) of the first-order fundamental reaches a maximum before the maximum wave height H is reached. Consequently, the wave steepness ka in terms of wave amplitude is not a monotone function up to the highest wave, and Schwartz utilizes instead kH as the expansion parameter. To estimate the highest wave in deep water, Schwartz has used Padé approximants and Domb–Sykes plots in order to improve the convergence of the Stokes expansion. Extended tables of Stokes waves on various depths, computed by a different method (but in accordance with the results by others), are provided in Williams ( 1981 , 1985 ). Several exact relationships exist between integral properties – such as kinetic and potential energy , horizontal wave momentum and radiation stress – as found by Longuet-Higgins (1975) . He shows, for deep-water waves, that many of these integral properties have a maximum before the maximum wave height is reached (in support of Schwartz's findings). Cokelet (1978) harvtxt error: no target: CITEREFCokelet1978 ( help ) , using a method similar to the one of Schwartz, computed and tabulated integral properties for a wide range of finite water depths (all reaching maxima below the highest wave height). Further, these integral properties play an important role in the conservation laws for water waves, through Noether's theorem . [ 25 ] In 2005, Hammack, Henderson and Segur have provided the first experimental evidence for the existence of three-dimensional progressive waves of permanent form in deep water – that is bi-periodic and two-dimensional progressive wave patterns of permanent form. 
[ 26 ] The existence of these three-dimensional steady deep-water waves has been revealed in 2002, from a bifurcation study of two-dimensional Stokes waves by Craig and Nicholls, using numerical methods. [ 27 ] Convergence of the Stokes expansion was first proved by Levi-Civita (1925) for the case of small-amplitude waves – on the free surface of a fluid of infinite depth. This was extended shortly afterwards by Struik (1926) for the case of finite depth and small-amplitude waves. [ 28 ] Near the end of the 20th century, it was shown that for finite-amplitude waves the convergence of the Stokes expansion depends strongly on the formulation of the periodic wave problem. For instance, an inverse formulation of the periodic wave problem as used by Stokes – with the spatial coordinates as a function of velocity potential and stream function – does not converge for high-amplitude waves. While other formulations converge much more rapidly, e.g. in the Eulerian frame of reference (with the velocity potential or stream function as a function of the spatial coordinates). [ 19 ] The maximum wave steepness, for periodic and propagating deep-water waves, is H / λ = 0.1410633 ± 4 · 10 −7 , [ 29 ] so the wave height is about one-seventh ( ⁠ 1 / 7 ⁠ ) of the wavelength λ. [ 24 ] And surface gravity waves of this maximum height have a sharp wave crest – with an angle of 120° (in the fluid domain) – also for finite depth, as shown by Stokes in 1880. [ 18 ] An accurate estimate of the highest wave steepness in deep water ( H / λ ≈ 0.142 ) was already made in 1893, by John Henry Michell , using a numerical method. [ 30 ] A more detailed study of the behaviour of the highest wave near the sharp-cornered crest has been published by Malcolm A. Grant, in 1973. [ 31 ] The existence of the highest wave on deep water with a sharp-angled crest of 120° was proved by John Toland in 1978. [ 32 ] The convexity of η(x) between the successive maxima with a sharp-angled crest of 120° was independently proven by C.J. Amick et al. and Pavel I. Plotnikov in 1982 . [ 33 ] [ 34 ] The highest Stokes wave – under the action of gravity – can be approximated with the following simple and accurate representation of the free surface elevation η ( x , t ): [ 35 ] η λ = A [ cosh ( x − c t λ ) − 1 ] , {\displaystyle {\frac {\eta }{\lambda }}=A\,\left[\cosh \,\left({\frac {x-ct}{\lambda }}\right)-1\right],} with A = 1 3 sinh ⁡ ( 1 2 ) ≈ 1.108 , {\displaystyle A={\frac {1}{{\sqrt {3}}\,\sinh \left({\frac {1}{2}}\right)}}\approx 1.108,} for − 1 2 λ ≤ ( x − c t ) ≤ 1 2 λ , {\displaystyle -{\tfrac {1}{2}}\,\lambda \leq (x-ct)\leq {\tfrac {1}{2}}\,\lambda ,} and shifted horizontally over an integer number of wavelengths to represent the other waves in the regular wave train. This approximation is accurate to within 0.7% everywhere, as compared with the "exact" solution for the highest wave. [ 35 ] Another accurate approximation – however less accurate than the previous one – of the fluid motion on the surface of the steepest wave is by analogy with the swing of a pendulum in a grandfather clock . [ 36 ] Large library of Stokes waves computed with high precision for the case of infinite depth, represented with high accuracy (at least 27 digits after decimal point) as a Padé approximant can be found at StokesWave.org [ 37 ] In deeper water, Stokes waves are unstable. [ 38 ] This was shown by T. Brooke Benjamin and Jim E. Feir in 1967. 
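The simple cosh representation of the highest wave quoted above can be checked in a few lines: with the phase chosen so the trough lies at x − ct = 0 and the sharp crest at the ends of the interval, the crest-to-trough height of the approximation reproduces the limiting steepness H/λ ≈ 0.141. A Python sketch:

```python
import math

A = 1.0 / (math.sqrt(3.0) * math.sinh(0.5))   # ~ 1.108

def eta_over_lambda(xi):
    """Approximate highest-wave profile eta/lambda as a function of
    xi = (x - c*t)/lambda, valid for -1/2 <= xi <= 1/2."""
    return A * (math.cosh(xi) - 1.0)

# Crest-to-trough height of this approximation, as a fraction of the wavelength:
H_over_lambda = eta_over_lambda(0.5) - eta_over_lambda(0.0)
print(f"A = {A:.4f}")
print(f"H/lambda ~ {H_over_lambda:.4f}   (exact limiting steepness ~ 0.14106)")
```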
[ 39 ] [ 40 ] The Benjamin–Feir instability is a side-band or modulational instability, with the side-band modulations propagating in the same direction as the carrier wave ; waves become unstable on deeper water for a relative depth kh > 1.363 (with k the wavenumber and h the mean water depth). [ 41 ] The Benjamin–Feir instability can be described with the nonlinear Schrödinger equation , by inserting a Stokes wave with side bands. [ 38 ] Subsequently, with a more refined analysis, it has been shown – theoretically and experimentally – that the Stokes wave and its side bands exhibit Fermi–Pasta–Ulam–Tsingou recurrence : a cyclic alternation between modulation and demodulation. [ 42 ] In 1978 Longuet-Higgins , by means of numerical modelling of fully non-linear waves and modulations (propagating in the carrier wave direction), presented a detailed analysis of the region of instability in deep water: both for superharmonics (for perturbations at the spatial scales smaller than the wavelength λ {\displaystyle \lambda } ) [ 43 ] and subharmonics (for perturbations at the spatial scales larger than λ {\displaystyle \lambda } ). [ 44 ] With increase of Stokes wave's amplitude, new modes of superharmonic instability appear. Appearance of a new branch of instability happens when the energy of the wave passes extremum. Detailed analysis of the mechanism of appearance of the new branches of instability has shown that their behavior follows closely a simple law, which allows to find with a good accuracy instability growth rates for all known and predicted branches. [ 45 ] In Longuet-Higgins studies of two-dimensional wave motion, as well as the subsequent studies of three-dimensional modulations by McLean et al., new types of instabilities were found – these are associated with resonant wave interactions between five (or more) wave components. [ 46 ] [ 47 ] [ 48 ] In many instances, the oscillatory flow in the fluid interior of surface waves can be described accurately using potential flow theory, apart from boundary layers near the free surface and bottom (where vorticity is important, due to viscous effects , see Stokes boundary layer ). [ 49 ] Then, the flow velocity u can be described as the gradient of a velocity potential Φ {\displaystyle \Phi } : Consequently, assuming incompressible flow , the velocity field u is divergence-free and the velocity potential Φ {\displaystyle \Phi } satisfies Laplace's equation [ 49 ] in the fluid interior. The fluid region is described using three-dimensional Cartesian coordinates ( x , y , z ), with x and y the horizontal coordinates, and z the vertical coordinate – with the positive z -direction opposing the direction of the gravitational acceleration . Time is denoted with t . The free surface is located at z = η ( x , y , t ) , and the bottom of the fluid region is at z = − h ( x , y ) . The free-surface boundary conditions for surface gravity waves – using a potential flow description – consist of a kinematic and a dynamic boundary condition. 
[ 50 ] The kinematic boundary condition ensures that the normal component of the fluid's flow velocity , u = [ ∂ Φ / ∂ x ∂ Φ / ∂ y ∂ Φ / ∂ z ] T {\displaystyle \mathbf {u} =[\partial \Phi /\partial x~~~\partial \Phi /\partial y~~~\partial \Phi /\partial z]^{\mathrm {T} }} in matrix notation, at the free surface equals the normal velocity component of the free-surface motion z = η ( x , y , t ) : The dynamic boundary condition states that, without surface tension effects, the atmospheric pressure just above the free surface equals the fluid pressure just below the surface. For an unsteady potential flow this means that the Bernoulli equation is to be applied at the free surface. In case of a constant atmospheric pressure, the dynamic boundary condition becomes: where the constant atmospheric pressure has been taken equal to zero, without loss of generality . Both boundary conditions contain the potential Φ {\displaystyle \Phi } as well as the surface elevation η . A (dynamic) boundary condition in terms of only the potential Φ {\displaystyle \Phi } can be constructed by taking the material derivative of the dynamic boundary condition, and using the kinematic boundary condition: [ 49 ] [ 50 ] [ 51 ] ( ∂ ∂ t + u ⋅ ∇ ) ( ∂ Φ ∂ t + 1 2 | u | 2 + g η ) = 0 {\displaystyle {\color {Gray}{{\Bigl (}{\frac {\partial }{\partial t}}+\mathbf {u} \cdot {\boldsymbol {\nabla }}{\Bigr )}\,\left({\frac {\partial \Phi }{\partial t}}+{\tfrac {1}{2}}\,|\mathbf {u} |^{2}+g\,\eta \right)=0}}} ⇒ ∂ 2 Φ ∂ t 2 + g ∂ Φ ∂ z + u ⋅ ∇ ∂ Φ ∂ t + 1 2 ∂ ∂ t ( | u | 2 ) + 1 2 u ⋅ ∇ ( | u | 2 ) = 0 {\displaystyle {\color {Gray}{\Rightarrow \quad {\frac {\partial ^{2}\Phi }{\partial t^{2}}}+g\,{\frac {\partial \Phi }{\partial z}}+\mathbf {u} \cdot {\boldsymbol {\nabla }}{\frac {\partial \Phi }{\partial t}}+{\tfrac {1}{2}}\,{\frac {\partial }{\partial t}}\left(|\mathbf {u} |^{2}\right)+{\tfrac {1}{2}}\,\mathbf {u} \cdot {\boldsymbol {\nabla }}\left(|\mathbf {u} |^{2}\right)=0}}} At the bottom of the fluid layer, impermeability requires the normal component of the flow velocity to vanish: [ 49 ] where h ( x , y ) is the depth of the bed below the datum z = 0 and n is the coordinate component in the direction normal to the bed . For permanent waves above a horizontal bed, the mean depth h is a constant and the boundary condition at the bed becomes: ∂ Φ ∂ z = 0 at z = − h . {\displaystyle {\frac {\partial \Phi }{\partial z}}=0\qquad {\text{ at }}z=-h.} The free-surface boundary conditions (D) and (E) apply at the yet unknown free-surface elevation z = η ( x , y , t ) . They can be transformed into boundary conditions at a fixed elevation z = constant by use of Taylor series expansions of the flow field around that elevation. [ 49 ] Without loss of generality the mean surface elevation – around which the Taylor series are developed – can be taken at z = 0 . This assures the expansion is around an elevation in the proximity of the actual free-surface elevation. Convergence of the Taylor series for small-amplitude steady-wave motion was proved by Levi-Civita (1925) . 
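At first order, the combined free-surface condition above linearizes to ∂²Φ/∂t² + g ∂Φ/∂z = 0 at z = 0 (this is also the order-ε condition in the perturbation expansion below). The Python/SymPy sketch below verifies that the first-order deep-water potential from the earlier third-order solution satisfies Laplace's equation and that this linearized condition enforces the Airy dispersion relation ω² = gk.

```python
import sympy as sp

x, z, t, a, k, g, omega = sp.symbols("x z t a k g omega", positive=True)

# First-order deep-water velocity potential (from the third-order solution above):
Phi1 = a * sp.sqrt(g / k) * sp.exp(k * z) * sp.sin(k * x - omega * t)

# Laplace's equation holds in the fluid interior:
print(sp.simplify(sp.diff(Phi1, x, 2) + sp.diff(Phi1, z, 2)))       # -> 0

# Linearised combined free-surface condition evaluated at z = 0:
bc = (sp.diff(Phi1, t, 2) + g * sp.diff(Phi1, z)).subs(z, 0)
# Dividing out the common factor leaves the dispersion relation omega^2 = g*k:
print(sp.simplify(bc / (a * sp.sqrt(g / k) * sp.sin(k * x - omega * t))))  # -> g*k - omega**2
```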
The following notation is used: the Taylor series of some field f ( x , y , z , t ) around z = 0 – and evaluated at z = η ( x , y , t ) – is: [ 52 ] f ( x , y , η , t ) = [ f ] 0 + η [ ∂ f ∂ z ] 0 + 1 2 η 2 [ ∂ 2 f ∂ z 2 ] 0 + ⋯ {\displaystyle f(x,y,\eta ,t)=\left[f\right]_{0}+\eta \,\left[{\frac {\partial f}{\partial z}}\right]_{0}+{\frac {1}{2}}\,\eta ^{2}\,\left[{\frac {\partial ^{2}f}{\partial z^{2}}}\right]_{0}+\cdots } with subscript zero meaning evaluation at z = 0 , e.g.: [ f ] 0 = f ( x , y ,0, t ) . Applying the Taylor expansion to free-surface boundary condition Eq. (E) in terms of the potential Φ gives: [ 49 ] [ 52 ] showing terms up to triple products of η , Φ and u , as required for the construction of the Stokes expansion up to third-order O (( ka ) 3 ). Here, ka is the wave steepness, with k a characteristic wavenumber and a a characteristic wave amplitude for the problem under study. The fields η , Φ and u are assumed to be O ( ka ). The dynamic free-surface boundary condition Eq. (D) can be evaluated in terms of quantities at z = 0 as: [ 49 ] [ 52 ] The advantages of these Taylor-series expansions fully emerge in combination with a perturbation-series approach, for weakly non-linear waves ( ka ≪ 1) . The perturbation series are in terms of a small ordering parameter ε ≪ 1 – which subsequently turns out to be proportional to (and of the order of) the wave slope ka , see the series solution in this section . [ 53 ] So, take ε = ka : η = ε η 1 + ε 2 η 2 + ε 3 η 3 + ⋯ , Φ = ε Φ 1 + ε 2 Φ 2 + ε 3 Φ 3 + ⋯ and u = ε u 1 + ε 2 u 2 + ε 3 u 3 + ⋯ . {\displaystyle {\begin{aligned}\eta &=\varepsilon \,\eta _{1}+\varepsilon ^{2}\,\eta _{2}+\varepsilon ^{3}\,\eta _{3}+\cdots ,\\\Phi &=\varepsilon \,\Phi _{1}+\varepsilon ^{2}\,\Phi _{2}+\varepsilon ^{3}\,\Phi _{3}+\cdots \quad {\text{and}}\\\mathbf {u} &=\varepsilon \,\mathbf {u} _{1}+\varepsilon ^{2}\,\mathbf {u} _{2}+\varepsilon ^{3}\,\mathbf {u} _{3}+\cdots .\end{aligned}}} When applied in the flow equations, they should be valid independent of the particular value of ε . By equating in powers of ε , each term proportional to ε to a certain power has to equal to zero. As an example of how the perturbation-series approach works, consider the non-linear boundary condition (G) ; it becomes: [ 6 ] ε { ∂ 2 Φ 1 ∂ t 2 + g ∂ Φ 1 ∂ z } + ε 2 { ∂ 2 Φ 2 ∂ t 2 + g ∂ Φ 2 ∂ z + η 1 ∂ ∂ z ( ∂ 2 Φ 1 ∂ t 2 + g ∂ Φ 1 ∂ z ) + ∂ ∂ t ( | u 1 | 2 ) } + ε 3 { ∂ 2 Φ 3 ∂ t 2 + g ∂ Φ 3 ∂ z + η 1 ∂ ∂ z ( ∂ 2 Φ 2 ∂ t 2 + g ∂ Φ 2 ∂ z ) + η 2 ∂ ∂ z ( ∂ 2 Φ 1 ∂ t 2 + g ∂ Φ 1 ∂ z ) + 2 ∂ ∂ t ( u 1 ⋅ u 2 ) + 1 2 η 1 2 ∂ 2 ∂ z 2 ( ∂ 2 Φ 1 ∂ t 2 + g ∂ Φ 1 ∂ z ) + η 1 ∂ 2 ∂ t ∂ z ( | u 1 | 2 ) + 1 2 u 1 ⋅ ∇ ( | u 1 | 2 ) } + O ( ε 4 ) = 0 , at z = 0. 
{\displaystyle {\begin{aligned}&\varepsilon \,\left\{{\frac {\partial ^{2}\Phi _{1}}{\partial t^{2}}}+g\,{\frac {\partial \Phi _{1}}{\partial z}}\right\}\\&+\varepsilon ^{2}\,\left\{{\frac {\partial ^{2}\Phi _{2}}{\partial t^{2}}}+g\,{\frac {\partial \Phi _{2}}{\partial z}}+\eta _{1}\,{\frac {\partial }{\partial z}}\left({\frac {\partial ^{2}\Phi _{1}}{\partial t^{2}}}+g\,{\frac {\partial \Phi _{1}}{\partial z}}\right)+{\frac {\partial }{\partial t}}\left(|\mathbf {u} _{1}|^{2}\right)\right\}\\&+\varepsilon ^{3}\,\left\{{\frac {\partial ^{2}\Phi _{3}}{\partial t^{2}}}+g\,{\frac {\partial \Phi _{3}}{\partial z}}+\eta _{1}\,{\frac {\partial }{\partial z}}\left({\frac {\partial ^{2}\Phi _{2}}{\partial t^{2}}}+g\,{\frac {\partial \Phi _{2}}{\partial z}}\right)\right.\\&\qquad \quad \left.+\eta _{2}\,{\frac {\partial }{\partial z}}\left({\frac {\partial ^{2}\Phi _{1}}{\partial t^{2}}}+g\,{\frac {\partial \Phi _{1}}{\partial z}}\right)+2\,{\frac {\partial }{\partial t}}\left(\mathbf {u} _{1}\cdot \mathbf {u} _{2}\right)\right.\\&\qquad \quad \left.+{\tfrac {1}{2}}\,\eta _{1}^{2}\,{\frac {\partial ^{2}}{\partial z^{2}}}\left({\frac {\partial ^{2}\Phi _{1}}{\partial t^{2}}}+g\,{\frac {\partial \Phi _{1}}{\partial z}}\right)+\eta _{1}\,{\frac {\partial ^{2}}{\partial t\,\partial z}}\left(|\mathbf {u} _{1}|^{2}\right)+{\tfrac {1}{2}}\,\mathbf {u} _{1}\cdot {\boldsymbol {\nabla }}\left(|\mathbf {u} _{1}|^{2}\right)\right\}\\&+{\mathcal {O}}\left(\varepsilon ^{4}\right)=0,\qquad {\text{at }}z=0.\end{aligned}}} The resulting boundary conditions at z = 0 for the first three orders are: In a similar fashion – from the dynamic boundary condition (H) – the conditions at z = 0 at the orders 1, 2 and 3 become: For the linear equations (A) , (B) and (F) the perturbation technique results in a series of equations independent of the perturbation solutions at other orders: The above perturbation equations can be solved sequentially, i.e. starting with first order, thereafter continuing with the second order, third order, etc. The waves of permanent form propagate with a constant phase velocity (or celerity ), denoted as c . If the steady wave motion is in the horizontal x -direction, the flow quantities η and u are not separately dependent on x and time t , but are functions of x − ct : [ 55 ] η ( x , t ) = η ( x − c t ) and u ( x , z , t ) = u ( x − c t , z ) . {\displaystyle \eta (x,t)=\eta (x-ct)\quad {\text{and}}\quad \mathbf {u} (x,z,t)=\mathbf {u} (x-ct,z).} Further the waves are periodic – and because they are also of permanent form – both in horizontal space x and in time t , with wavelength λ and period τ respectively. Note that Φ ( x , z , t ) itself is not necessary periodic due to the possibility of a constant (linear) drift in x and/or t : [ 56 ] Φ ( x , z , t ) = β x − γ t + φ ( x − c t , z ) , {\displaystyle \Phi (x,z,t)=\beta x-\gamma t+\varphi (x-ct,z),} with φ ( x , z , t ) – as well as the derivatives ∂ Φ /∂ t and ∂ Φ /∂ x – being periodic. Here β is the mean flow velocity below trough level, and γ is related to the hydraulic head as observed in a frame of reference moving with the wave's phase velocity c (so the flow becomes steady in this reference frame). In order to apply the Stokes expansion to progressive periodic waves, it is advantageous to describe them through Fourier series as a function of the wave phase θ ( x , t ): [ 48 ] [ 56 ] θ = k x − ω t = k ( x − c t ) , {\displaystyle \theta =kx-\omega t=k\left(x-ct\right),} assuming waves propagating in the x –direction. 
Here k = 2 π / λ is the wavenumber , ω = 2 π / τ is the angular frequency and c = ω / k (= λ / τ ) is the phase velocity . Now, the free surface elevation η ( x , t ) of a periodic wave can be described as the Fourier series : [ 11 ] [ 56 ] η = ∑ n = 1 ∞ A n cos ( n θ ) . {\displaystyle \eta =\sum _{n=1}^{\infty }A_{n}\,\cos \,(n\theta ).} Similarly, the corresponding expression for the velocity potential Φ ( x , z , t ) is: [ 56 ] Φ = β x − γ t + ∑ n = 1 ∞ B n [ cosh ( n k ( z + h ) ) ] sin ( n θ ) , {\displaystyle \Phi =\beta x-\gamma t+\sum _{n=1}^{\infty }B_{n}\,{\biggl [}\cosh \,\left(nk\,(z+h)\right){\biggr ]}\,\sin \,(n\theta ),} satisfying both the Laplace equation ∇ 2 Φ = 0 in the fluid interior, as well as the boundary condition ∂ Φ /∂ z = 0 at the bed z = − h . For a given value of the wavenumber k , the parameters: A n , B n (with n = 1, 2, 3, ... ), c , β and γ have yet to be determined. They all can be expanded as perturbation series in ε . Fenton (1990) provides these values for fifth-order Stokes's wave theory. For progressive periodic waves, derivatives with respect to x and t of functions f ( θ , z ) of θ ( x , t ) can be expressed as derivatives with respect to θ : ∂ f ∂ x = + k ∂ f ∂ θ and ∂ f ∂ t = − ω ∂ f ∂ θ . {\displaystyle {\frac {\partial f}{\partial x}}=+k\,{\frac {\partial f}{\partial \theta }}\qquad {\text{and}}\qquad {\frac {\partial f}{\partial t}}=-\omega \,{\frac {\partial f}{\partial \theta }}.} The important point for non-linear waves – in contrast to linear Airy wave theory – is that the phase velocity c also depends on the wave amplitude a , besides its dependence on wavelength λ = 2π / k and mean depth h . Negligence of the dependence of c on wave amplitude results in the appearance of secular terms , in the higher-order contributions to the perturbation-series solution. Stokes (1847) already applied the required non-linear correction to the phase speed c in order to prevent secular behaviour. A general approach to do so is now known as the Lindstedt–Poincaré method . Since the wavenumber k is given and thus fixed, the non-linear behaviour of the phase velocity c = ω / k is brought into account by also expanding the angular frequency ω into a perturbation series: [ 9 ] ω = ω 0 + ε ω 1 + ε 2 ω 2 + ⋯ . {\displaystyle \omega =\omega _{0}+\varepsilon \,\omega _{1}+\varepsilon ^{2}\,\omega _{2}+\cdots .} Here ω 0 will turn out to be related to the wavenumber k through the linear dispersion relation . However time derivatives, through ∂ f /∂ t = − ω ∂ f /∂ θ , now also give contributions – containing ω 1 , ω 2 , etc. – to the governing equations at higher orders in the perturbation series. By tuning ω 1 , ω 2 , etc., secular behaviour can be prevented. For surface gravity waves, it is found that ω 1 = 0 and the first non-zero contribution to the dispersion relation comes from ω 2 (see e.g. the sub-section " Third-order dispersion relation " above). [ 9 ] For non-linear surface waves there is, in general, ambiguity in splitting the total motion into a wave part and a mean part. As a consequence, there is some freedom in choosing the phase speed (celerity) of the wave. Stokes (1847) identified two logical definitions of phase speed, known as Stokes's first and second definition of wave celerity: [ 6 ] [ 11 ] [ 57 ] As pointed out by Michael E. 
McIntyre , the mean horizontal mass transport will be (near) zero for a wave group approaching into still water, with also in deep water the mass transport caused by the waves balanced by an opposite mass transport in a return flow (undertow). [ 58 ] This is due to the fact that otherwise a large mean force will be needed to accelerate the body of water into which the wave group is propagating.
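The Fourier representation of η and Φ introduced above is easy to evaluate once the coefficients are known. In the sketch below (Python) the coefficients A_n and B_n are placeholders chosen to decay with n; in practice they, together with c, β and γ, would be taken from a tabulated solution such as Fenton's (1990) fifth-order theory.

```python
import numpy as np

def eta_series(theta, A):
    """Free-surface elevation eta = sum_n A_n cos(n*theta), truncated to len(A) modes."""
    return sum(A_n * np.cos(n * theta) for n, A_n in enumerate(A, start=1))

def phi_series(theta, z, k, h, B, drift=0.0):
    """Wave part of Phi = drift + sum_n B_n cosh(n k (z+h)) sin(n*theta), where
    drift = beta*x - gamma*t; each mode satisfies Laplace's equation and the
    bottom condition dPhi/dz = 0 at z = -h by construction."""
    wave_part = sum(B_n * np.cosh(n * k * (z + h)) * np.sin(n * theta)
                    for n, B_n in enumerate(B, start=1))
    return drift + wave_part

# Placeholder coefficients decaying with n (a weakly nonlinear wave):
A = [1.0, 0.2, 0.05]
theta = np.linspace(0.0, 2.0 * np.pi, 9)
print(eta_series(theta, A))
```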
https://en.wikipedia.org/wiki/Stokes_wave
The Stollé synthesis is a series of chemical reactions that produce oxindoles from anilines and α-haloacid chlorides (or oxalyl chloride ). [ 1 ] [ 2 ] [ 3 ] [ 4 ] The first step is an amide coupling, while the second step is a Friedel–Crafts reaction . [ 5 ] [ 6 ] An improved procedure has been developed. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Stollé_synthesis
The Stolper–Samuelson theorem is a theorem in Heckscher–Ohlin trade theory. It describes the relationship between relative prices of output and relative factor returns—specifically, real wages and real returns to capital. The theorem states that—under specific economic assumptions (constant returns to scale , perfect competition , equality of the number of factors to the number of products)—a rise in the relative price of a good will lead to a rise in the real return to that factor which is used most intensively in the production of the good, and conversely, to a fall in the real return to the other factor. It was derived in 1941 from within the framework of the Heckscher–Ohlin model by Wolfgang Stolper and Paul Samuelson , [ 1 ] but has subsequently been derived in less restricted models. As a term, it is applied to all cases where the effect is seen. Ronald W. Jones and José Scheinkman show that under very general conditions the factor returns change with output prices as predicted by the theorem. [ 2 ] If considering the change in real returns under increased international trade a robust finding of the theorem is that returns to the scarce factor will go down, ceteris paribus . An additional robust corollary of the theorem is that a compensation to the scarce factor exists which will overcome this effect and make increased trade Pareto optimal . [ 3 ] The original Heckscher–Ohlin model was a two-factor model with a labor market specified by a single number. Therefore, the early versions of the theorem could make no predictions about the effect on the unskilled labor force in a high-income country under trade liberalization. However, more sophisticated models with multiple classes of worker productivity have been shown to produce the Stolper–Samuelson effect within each class of labor: Unskilled workers producing traded goods in a high-skill country will be worse off as international trade increases, because, relative to the world market in the good they produce, an unskilled first world production-line worker is a less abundant factor of production than capital. The Stolper–Samuelson theorem is closely linked to the factor price equalization theorem , which states that, regardless of international factor mobility, factor prices will tend to equalize across countries that do not differ in technology. Considering a two-good economy that produces only wheat and cloth, with labor and land being the only factors of production, wheat a land-intensive industry and cloth a labor-intensive one, and assuming that the price of each product equals its marginal cost, the theorem can be derived. The price of cloth should be: with P ( C ) standing for the price of cloth, r standing for rent paid to landowners, w for wage levels and a and b respectively standing for the amount of land and labor used, and do not change with the prices of goods. Similarly, the price of wheat would be: with P ( W ) standing for the price of wheat, r and w for rent and wages, and c and d for the respective amount of land and labor used, and also considered to be constant. If, then, cloth experiences a rise in its price, at least one of its factors must also become more expensive, for equation 1 to hold true, since the relative amounts of labor and land are not affected by changing prices. It can be assumed that it would be labor—the factor that is intensively used in the production of cloth—that would rise. When wages rise, rent must fall, in order for equation 2 to hold true. But a fall in rent also affects equation 1. 
For it to still hold true, then, the rise in wages must be more than proportional to the rise in cloth prices. A rise in the price of a product, then, will more than proportionally raise the return to the most intensively used factor, and decrease the return to the less intensively used factor. The validity of the Heckscher–Ohlin model has been questioned since the classical Leontief paradox . Indeed, Feenstra called the Heckscher–Ohlin model "hopelessly inadequate as an explanation for historical and modern trade patterns". [ 4 ] As for the Stolper–Samuelson theorem itself, Davis and Mishra recently stated, "It is time to declare Stolper–Samuelson dead". [ 5 ] They argue that the Stolper–Samuelson theorem is "dead" because following trade liberalization in some developing countries (particularly in Latin America), wage inequality rose, and, under the assumption that these countries are labor-abundant, the SS theorem predicts that wage inequality should have fallen. Aside from the declining trend in wage inequality in Latin America that has followed trade liberalization in the longer run (see Lopez-Calva and Lustig), an alternative view would be to recognize that technically the SS theorem predicts a relationship between output prices and relative wages. [ 6 ] Papers that compare output prices with changes in relative wages find moderate-to-strong support for the Stolper–Samuelson theorem for Chile , [ 7 ] Mexico , [ 8 ] and Brazil . [ 9 ]
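The two zero-profit (price = unit cost) conditions used in the derivation form a 2×2 linear system in the factor returns, so the magnification effect can be seen numerically. The Python sketch below uses illustrative unit input requirements (not from the article) in which cloth is labor-intensive; raising the cloth price by 10% raises the wage by more than 10% and lowers the rent.

```python
import numpy as np

# Unit input requirements (assumed constant): rows = goods, cols = [land, labor].
# Cloth is labor-intensive, wheat is land-intensive.
A = np.array([[1.0, 4.0],    # cloth: 1 unit land, 4 units labor per unit output
              [3.0, 1.0]])   # wheat: 3 units land, 1 unit labor per unit output

def factor_prices(p_cloth, p_wheat):
    """Solve price = unit cost: [P_C, P_W]^T = A @ [r, w]^T for rent r and wage w."""
    return np.linalg.solve(A, np.array([p_cloth, p_wheat]))

r0, w0 = factor_prices(5.0, 4.0)          # baseline goods prices
r1, w1 = factor_prices(5.5, 4.0)          # cloth price rises by 10%
print(f"baseline:      r = {r0:.3f}, w = {w0:.3f}")
print(f"after +10% Pc: r = {r1:.3f}, w = {w1:.3f}")
print(f"wage change = {100*(w1/w0 - 1):+.1f}%  (more than the 10% price rise)")
print(f"rent change = {100*(r1/r0 - 1):+.1f}%  (falls, so its real return falls)")
```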
https://en.wikipedia.org/wiki/Stolper–Samuelson_theorem
In botany , a stoma ( pl. : stomata , from Greek στόμα , "mouth"), also called a stomate ( pl. : stomates ), is a pore found in the epidermis of leaves, stems, and other organs, that controls the rate of gas exchange between the internal air spaces of the leaf and the atmosphere. The pore is bordered by a pair of specialized parenchyma cells known as guard cells that regulate the size of the stomatal opening. The term is usually used collectively to refer to the entire stomatal complex, consisting of the paired guard cells and the pore itself, which is referred to as the stomatal aperture. [ 1 ] Air, containing oxygen , which is used in respiration , and carbon dioxide , which is used in photosynthesis , passes through stomata by gaseous diffusion . Water vapour diffuses through the stomata into the atmosphere as part of a process called transpiration . Stomata are present in the sporophyte generation of the vast majority of land plants , with the exception of liverworts , as well as some mosses and hornworts . In vascular plants the number, size and distribution of stomata varies widely. Dicotyledons usually have more stomata on the lower surface of the leaves than the upper surface. Monocotyledons such as onion , oat and maize may have about the same number of stomata on both leaf surfaces. [ 2 ] : 5 In plants with floating leaves, stomata may be found only on the upper epidermis and submerged leaves may lack stomata entirely. Most tree species have stomata only on the lower leaf surface. [ 3 ] Leaves with stomata on both the upper and lower leaf surfaces are called amphistomatous leaves; leaves with stomata only on the lower surface are hypostomatous , and leaves with stomata only on the upper surface are epistomatous or hyperstomatous . [ 3 ] Size varies across species, with end-to-end lengths ranging from 10 to 80 μm and width ranging from a few to 50 μm. [ 4 ] Carbon dioxide , a key reactant in photosynthesis , is present in the atmosphere at a concentration of about 400 ppm. Most plants require the stomata to be open during daytime. The air spaces in the leaf are saturated with water vapour , which exits the leaf through the stomata in a process known as transpiration . Therefore, plants cannot gain carbon dioxide without simultaneously losing water vapour. [ 5 ] Ordinarily, carbon dioxide is fixed to ribulose 1,5-bisphosphate (RuBP) by the enzyme RuBisCO in mesophyll cells exposed directly to the air spaces inside the leaf. This exacerbates the transpiration problem for two reasons: first, RuBisCo has a relatively low affinity for carbon dioxide, and second, it fixes oxygen to RuBP, wasting energy and carbon in a process called photorespiration . For both of these reasons, RuBisCo needs high carbon dioxide concentrations, which means wide stomatal apertures and, as a consequence, high water loss. Narrower stomatal apertures can be used in conjunction with an intermediary molecule with a high carbon dioxide affinity, phosphoenolpyruvate carboxylase (PEPcase). Retrieving the products of carbon fixation from PEPCase is an energy-intensive process, however. As a result, the PEPCase alternative is preferable only where water is limiting but light is plentiful, or where high temperatures increase the solubility of oxygen relative to that of carbon dioxide, magnifying RuBisCo's oxygenation problem. A group of mostly desert plants called "C.A.M." 
plants ( crassulacean acid metabolism , after the family Crassulaceae, which includes the species in which the CAM process was first discovered) open their stomata at night (when water evaporates more slowly from leaves for a given degree of stomatal opening), use PEPcase to fix carbon dioxide and store the products in large vacuoles. The following day, they close their stomata and release the carbon dioxide fixed the previous night into the presence of RuBisCO. This saturates RuBisCO with carbon dioxide, allowing minimal photorespiration. This approach, however, is severely limited by the capacity to store fixed carbon in the vacuoles, so it is preferable only when water is severely limited. However, most plants do not have CAM and must therefore open and close their stomata during the daytime, in response to changing conditions, such as light intensity, humidity, and carbon dioxide concentration. When conditions are conducive to stomatal opening (e.g., high light intensity and high humidity), a proton pump drives protons (H + ) from the guard cells. This means that the cells' electrical potential becomes increasingly negative. The negative potential opens potassium voltage-gated channels and so an uptake of potassium ions (K + ) occurs. To maintain this internal negative voltage so that entry of potassium ions does not stop, negative ions balance the influx of potassium. In some cases, chloride ions enter, while in other plants the organic ion malate is produced in guard cells. This increase in solute concentration lowers the water potential inside the cell, which results in the diffusion of water into the cell through osmosis . This increases the cell's volume and turgor pressure . Then, because of rings of cellulose microfibrils that prevent the width of the guard cells from swelling, and thus only allow the extra turgor pressure to elongate the guard cells, whose ends are held firmly in place by surrounding epidermal cells, the two guard cells lengthen by bowing apart from one another, creating an open pore through which gas can diffuse. [ 6 ] When the roots begin to sense a water shortage in the soil, abscisic acid (ABA) is released. [ 7 ] ABA binds to receptor proteins in the guard cells' plasma membrane and cytosol, which first raises the pH of the cytosol of the cells and cause the concentration of free Ca 2+ to increase in the cytosol due to influx from outside the cell and release of Ca 2+ from internal stores such as the endoplasmic reticulum and vacuoles. [ 8 ] This causes the chloride (Cl − ) and organic ions to exit the cells. Second, this stops the uptake of any further K + into the cells and, subsequently, the loss of K + . The loss of these solutes causes an increase in water potential , which results in the diffusion of water back out of the cell by osmosis . This makes the cell plasmolysed , which results in the closing of the stomatal pores. Guard cells have more chloroplasts than the other epidermal cells from which guard cells are derived. Their function is controversial. [ 9 ] [ 10 ] The degree of stomatal resistance can be determined by measuring leaf gas exchange of a leaf. The transpiration rate is dependent on the diffusion resistance provided by the stomatal pores and also on the humidity gradient between the leaf's internal air spaces and the outside air. Stomatal resistance (or its inverse, stomatal conductance ) can therefore be calculated from the transpiration rate and humidity gradient. 
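A minimal sketch of that calculation is below (Python; the measured values are purely illustrative, not from the article). It uses the standard gas-exchange relations described in the following passage: transpiration E = g·(e_i − e_a)/P, so the conductance to water vapour is g = E·P/(e_i − e_a); and, assuming the usual factor of 1.6 between the diffusivities of water vapour and CO2, a sub-stomatal CO2 estimate C_i can be derived from the assimilation rate A.

```python
def stomatal_conductance(E, e_i, e_a, P=101.325):
    """Conductance to water vapour g = E * P / (e_i - e_a)  (g = 1/r).
    E in mol m-2 s-1; e_i, e_a and P in the same pressure units (kPa here)."""
    return E * P / (e_i - e_a)

# Illustrative gas-exchange measurement:
E = 0.004               # transpiration, mol H2O m-2 s-1
A = 12.0e-6             # net CO2 assimilation, mol CO2 m-2 s-1
e_i, e_a = 3.17, 1.90   # leaf-internal and ambient vapour pressure, kPa
P = 101.325             # atmospheric pressure, kPa
C_a = 40.0              # ambient CO2 partial pressure, Pa (~400 ppm)

g = stomatal_conductance(E, e_i, e_a, P)   # mol m-2 s-1
wue = A / E                                # water-use efficiency, A/E
iwue = A / g                               # intrinsic water-use efficiency, A/g
g_co2 = g / 1.6                            # conductance to CO2 (assumed 1.6 ratio)
C_i = C_a - (A / g_co2) * P * 1e3          # sub-stomatal CO2 partial pressure, Pa

print(f"g = {g:.3f} mol m-2 s-1")
print(f"A/E = {wue:.4f} mol CO2 per mol H2O, A/g = {iwue*1e6:.1f} umol mol-1")
print(f"C_i ~ {C_i:.1f} Pa (C_i/C_a ~ {C_i/C_a:.2f})")
```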
This allows scientists to investigate how stomata respond to changes in environmental conditions, such as light intensity and concentrations of gases such as water vapor, carbon dioxide, and ozone . [ 11 ] Evaporation ( E ) can be calculated as [ 12 ] where e i and e a are the partial pressures of water in the leaf and in the ambient air respectively, P is atmospheric pressure, and r is stomatal resistance. The inverse of r is conductance to water vapor ( g ), so the equation can be rearranged to [ 12 ] and solved for g : [ 12 ] Photosynthetic CO 2 assimilation ( A ) can be calculated from where C a and C i are the atmospheric and sub-stomatal partial pressures of CO 2 respectively [ clarification needed ] . The rate of evaporation from a leaf can be determined using a photosynthesis system . These scientific instruments measure the amount of water vapour leaving the leaf and the vapor pressure of the ambient air. Photosynthetic systems may calculate water use efficiency ( A / E ), g , intrinsic water use efficiency ( A / g ), and C i . These scientific instruments are commonly used by plant physiologists to measure CO 2 uptake and thus measure photosynthetic rate. [ 13 ] [ 14 ] There is little evidence of the evolution of stomata in the fossil record, but they had appeared in land plants by the middle of the Silurian period. [ 15 ] They may have evolved by the modification of conceptacles from plants' alga-like ancestors. [ 16 ] However, the evolution of stomata must have happened at the same time as the waxy cuticle was evolving – these two traits together constituted a major advantage for early terrestrial plants. [ citation needed ] There are three major epidermal cell types which all ultimately derive from the outermost (L1) tissue layer of the shoot apical meristem , called protodermal cells: trichomes , pavement cells and guard cells, all of which are arranged in a non-random fashion. An asymmetrical cell division occurs in protodermal cells resulting in one large cell that is fated to become a pavement cell and a smaller cell called a meristemoid that will eventually differentiate into the guard cells that surround a stoma. This meristemoid then divides asymmetrically one to three times before differentiating into a guard mother cell. The guard mother cell then makes one symmetrical division, which forms a pair of guard cells. [ 17 ] Cell division is inhibited in some cells so there is always at least one cell between stomata. [ 18 ] Stomatal patterning is controlled by the interaction of many signal transduction components such as EPF (Epidermal Patterning Factor), ERL (ERecta Like) and YODA (a putative MAP kinase kinase kinase ). [ 18 ] Mutations in any one of the genes which encode these factors may alter the development of stomata in the epidermis. [ 18 ] For example, a mutation in one gene causes more stomata that are clustered together, hence is called Too Many Mouths ( TMM ). [ 17 ] Whereas, disruption of the SPCH (SPeecCHless) gene prevents stomatal development all together. [ 18 ] Inhibition of stomatal production can occur by the activation of EPF1, which activates TMM/ERL, which together activate YODA. YODA inhibits SPCH, causing SPCH activity to decrease, preventing asymmetrical cell division that initiates stomata formation. [ 18 ] [ 19 ] Stomatal development is also coordinated by the cellular peptide signal called stomagen, which signals the activation of the SPCH, resulting in increased number of stomata. 
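The evaporation, conductance and assimilation quantities defined earlier in this passage can be written out explicitly. The sketch below is an illustration rather than the article's own notation: it assumes the flux form E = (e_i - e_a)/(P*r), the rearrangement g = E*P/(e_i - e_a), and the standard factor of about 1.6 by which CO2 diffuses more slowly through stomata than water vapour.

def evaporation(e_i, e_a, P, r):
    # Transpiration driven by the leaf-to-air vapour pressure gradient and stomatal resistance r
    return (e_i - e_a) / (P * r)

def conductance(E, e_i, e_a, P):
    # Since g = 1/r, rearranging the evaporation equation gives g = E * P / (e_i - e_a)
    return E * P / (e_i - e_a)

def assimilation(g_w, C_a, C_i, P):
    # CO2 conductance is taken as roughly the water-vapour conductance divided by 1.6 (assumed here)
    return (g_w / 1.6) * (C_a - C_i) / P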
[ 20 ] Environmental and hormonal factors can affect stomatal development. Light increases stomatal development in plants, while plants grown in the dark develop fewer stomata. Auxin represses stomatal development by acting at the receptor level, for example on the ERL and TMM receptors. However, a low concentration of auxin allows for equal division of a guard mother cell and increases the chance of producing guard cells. [ 21 ] Most angiosperm trees have stomata only on their lower leaf surface. Poplars and willows have them on both surfaces. When leaves develop stomata on both leaf surfaces, the stomata on the lower surface tend to be larger and more numerous, but size and frequency vary considerably across species and genotypes. White ash and white birch leaves, for example, have fewer but larger stomata, whereas sugar maple and silver maple have smaller, more numerous stomata. [ 22 ] Different classifications of stoma types exist. One that is widely used is based on the types that Julien Joseph Vesque introduced in 1889, which was further developed by Metcalfe and Chalk, [ 23 ] and later complemented by other authors. It is based on the size, shape and arrangement of the subsidiary cells that surround the two guard cells. [ 24 ] They distinguish several types for dicots ; in monocots , several different types of stomata occur; and in ferns , four different types are distinguished. A catalogue of leaf epidermis prints showing stomata from a wide range of species can be found on Wikimedia Commons at https://commons.wikimedia.org/wiki/Category:Leaf_epidermis_and_stomata_prints Stomatal crypts are sunken areas of the leaf epidermis which form a chamber-like structure that contains one or more stomata and sometimes trichomes or accumulations of wax . Stomatal crypts can be an adaptation to drought and dry climate conditions when the stomatal crypts are very pronounced. However, dry climates are not the only places where they can be found. The following plants are examples of species with stomatal crypts or antechambers: Nerium oleander , conifers, Hakea [ 26 ] and Drimys winteri , a species of plant found in the cloud forest . [ 27 ] Stomata are holes in the leaf by which pathogens can enter unchallenged. However, stomata can sense the presence of some, if not all, pathogens. [ 28 ] Even so, pathogenic bacteria applied to Arabidopsis plant leaves can release the chemical coronatine , which induces the stomata to reopen. [ 29 ] Photosynthesis , plant water transport ( xylem ) and gas exchange are regulated by stomatal function, which is therefore central to the functioning of plants. [ 30 ] Stomata are responsive to light, with blue light almost 10 times as effective as red light in causing a stomatal response. Research suggests this is because the light response of stomata to blue light is independent of other leaf components such as chlorophyll . Guard cell protoplasts swell under blue light provided there is sufficient availability of potassium . [ 31 ] Multiple studies have found evidence that increasing potassium concentrations may increase stomatal opening in the mornings, before the photosynthesis process starts, but that later in the day sucrose plays a larger role in regulating stomatal opening. [ 32 ] Zeaxanthin in guard cells acts as a blue-light photoreceptor which mediates stomatal opening. [ 33 ] The effect of blue light on guard cells is reversed by green light, which isomerizes zeaxanthin.

[ 33 ] Stomatal density and aperture (length of stomata) varies under a number of environmental factors such as atmospheric CO 2 concentration, light intensity, air temperature and photoperiod (daytime duration). [ 34 ] [ 35 ] Decreasing stomatal density is one way plants have responded to the increase in concentration of atmospheric CO 2 ([CO 2 ] atm ). [ 36 ] Although changes in [CO 2 ] atm response is the least understood mechanistically, this stomatal response has begun to plateau where it is soon expected to impact transpiration and photosynthesis processes in plants. [ 30 ] [ 37 ] Drought inhibits stomatal opening, but research on soybeans suggests moderate drought does not have a significant effect on stomatal closure of its leaves. There are different mechanisms of stomatal closure. Low humidity stresses guard cells causing turgor loss, termed hydropassive closure. Hydroactive closure is contrasted as the whole leaf affected by drought stress, believed to be most likely triggered by abscisic acid . [ 38 ] It is expected that [CO 2 ] atm will reach 500–1000 ppm by 2100. [ 30 ] 96% of the past 400,000 years experienced below 280 ppm CO 2 . From this figure, it is highly probable that genotypes of today’s plants have diverged from their pre-industrial relatives. [ 30 ] The gene HIC (high carbon dioxide) encodes a negative regulator for the development of stomata in plants. [ 39 ] Research into the HIC gene using Arabidopsis thaliana found no increase of stomatal development in the dominant allele , but in the ‘wild type’ recessive allele showed a large increase, both in response to rising CO 2 levels in the atmosphere. [ 39 ] These studies imply the plants response to changing CO 2 levels is largely controlled by genetics. The CO 2 fertiliser effect has been greatly overestimated during Free-Air Carbon dioxide Enrichment (FACE) experiments where results show increased CO 2 levels in the atmosphere enhances photosynthesis, reduce transpiration, and increase water use efficiency (WUE). [ 36 ] Increased biomass is one of the effects with simulations from experiments predicting a 5–20% increase in crop yields at 550 ppm of CO 2 . [ 40 ] Rates of leaf photosynthesis were shown to increase by 30–50% in C3 plants, and 10–25% in C4 under doubled CO 2 levels. [ 40 ] The existence of a feedback mechanism results a phenotypic plasticity in response to [CO 2 ] atm that may have been an adaptive trait in the evolution of plant respiration and function. [ 30 ] [ 35 ] Predicting how stomata perform during adaptation is useful for understanding the productivity of plant systems for both natural and agricultural systems . [ 34 ] Plant breeders and farmers are beginning to work together using evolutionary and participatory plant breeding to find the best suited species such as heat and drought resistant crop varieties that could naturally evolve to the change in the face of food security challenges. [ 36 ]
https://en.wikipedia.org/wiki/Stoma
Stomach oil is the light oil composed of neutral dietary lipids found in the proventriculus (fore-gut) of birds in the order Procellariiformes . All albatrosses , procellarids (gadfly petrels and shearwaters) and northern and austral storm petrels use the oil. The only Procellariiformes that do not are the diving petrels . The chemical make up of stomach oil varies from species to species and between individuals, but almost always contains both wax esters and triglycerides . Other compounds found in stomach oil include glycerol ethers , pristane and squalene . Stomach oil has low viscosity and will solidify into a hard wax if allowed to cool. It was once thought that stomach oil was a secretion of the proventriculus, but it is now known to be a residue of the diet created by digestion of the prey items such as krill , squid , copepods and fish . It is thought to serve several functions for Procellariiformes, primarily as an energy store; its calorific value is around 40 MJ/kg (9.6 kcal per gram), which is only slightly lower than the value for diesel oil. For this reason a great deal more energy can be stored in oil form as opposed to undigested prey. This can be a real advantage for species that range over huge distances to provide food for hungry chicks, or as a store for lean times when ranging across the sea looking for patchy areas of prey. Surface nesting petrels and albatross can eject this oil out of their mouths (not nostrils, as has sometimes been suggested) towards attacking predators or conspecific rivals. This oil can be deadly to birds, as it can cause matting of the feathers leading to the loss of flight or water repellency. Against threatening mammals (including humans) it is not outright dangerous, but due to its extremely offensive smell it is usually highly repulsive and liable to spoil a predator's hunting success for quite some time. The smell of the hydrophobic oil cannot be removed with water, and can persist (e.g. on clothing) for months or even years.
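As a quick check of the energy density quoted above (a unit conversion only, not additional data):

energy_mj_per_kg = 40                                    # value quoted for stomach oil
kcal_per_gram = energy_mj_per_kg * 1e6 / 4184 / 1000     # J per kg -> kcal per g
print(round(kcal_per_gram, 1))                           # 9.6, matching the figure given in the text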
https://en.wikipedia.org/wiki/Stomach_oil
The Stomatophyta are a proposed sister branch of the Marchantiophyta (Liverworts), together forming the Embryophyta . [ 1 ] The Stomatophyta consist of the Bryophyta ( Moss ), and the remainder of the Embryophyta, including the Anthocerotophyta (Hornworts). The word stomatophyta means plant with stomata . [ citation needed ] An updated phylogeny of Embryophyta based on the work by Novíkov & Barabaš-Krasni 2015 [ 2 ] with plant taxon authors from Anderson, Anderson & Cleal 2007 [ 3 ] and some clade names from Pelletier 2012 and Lecointre et al. [ 4 ] [ self-published source? ] [ 1 ] covers the Marchantiophyta (Liverworts), Bryophyta (Mosses), Anthocerotophyta (Hornworts) and Polysporangiophyta.
https://en.wikipedia.org/wiki/Stomatophyta
Stone Aerospace is an aerospace engineering firm founded by engineer and explorer Bill Stone , located in Del Valle , a suburb of Austin, Texas . [ 1 ] Bill Stone began Stone Aerospace as a part-time consulting business in 1999, at which time he was working at the National Institute of Standards and Technology . At the time, Stone already had an extensive background in underground and underwater exploration, which had led him to develop several technologies to further human exploration capabilities. This background, and in particular the success of the Wakulla II Project in Wakulla Springs , Florida , which employed Stone's human-navigated digital wall mapper, led to inquiries as to whether it would be possible to design an autonomous underwater vehicle , which could explore on its own, making exploration possible where it was not safe or possible for human divers to go. After submitting several proposals to NASA , in 2003 DEPTHX was funded. [ 1 ] Shortly thereafter Stone's Piedra-Sombra Corporation began doing business as Stone Aerospace in Del Valle, Texas. After the successful autonomous exploration by DEPTHX of several cenotes in Mexico , [ 2 ] NASA then funded the ENDURANCE Project, which spent two seasons exploring frozen-over lakes in the Dry Valleys of Antarctica . [ 3 ] Project VALKYRIE was awarded NASA funding in 2010, and it was field-tested in Matanuska Glacier , Alaska in 2015. [ 4 ] Currently, Stone Aerospace is further refining its technologies to produce a full-sized cryobot called SPINDLE . [ 5 ] DEPTHX was a NASA-funded project for which Stone Aerospace was the Principal Investigator. Co-investigators included Carnegie Mellon University , which was responsible for the navigation and guidance software, the Southwest Research Institute , which built the vehicle's science payload , and research scientists from the University of Texas at Austin , the Colorado School of Mines , and NASA Ames Research Center . [ 6 ] The DEPTHX vehicle was a fully autonomous underwater vehicle outfitted with scientific sampling equipment [ 7 ] designed to expand upon the limits of human underwater exploration, and was successfully tested over two field seasons in cenotes in southeastern Mexico. [ 2 ] Among its most notable accomplishments were the discovery of at least three new divisions of bacteria [ 8 ] (the first such discovery by a robotic vehicle) and the first use of three-dimensional simultaneous localization and mapping (SLAM). [ 9 ] The NASA-funded ENDURANCE project built upon the successes of DEPTHX. The DEPTHX vehicle itself was reconfigured to create ENDURANCE, with a new science payload and new navigation systems added to meet the challenges particular to the frozen-over environment in Antarctica, where it spent two field seasons. [ 3 ] The principal investigator for ENDURANCE was Peter Doran of the University of Illinois at Chicago . Co-investigators were Stone Aerospace, John Priscu of Montana State University , and NASA Ames Research Center. [ 10 ] ENDURANCE spent two seasons exploring West Lake Bonney in the Dry Valleys , autonomously collecting aqueous chemistry data as well as making high-resolution maps of the lake floor and the portion of Taylor Glacier which interfaces with the lake. [ 11 ] The results are thought to be one of the most comprehensive three-dimensional biogeochemical maps of any lake on the planet. [ 3 ] The project was the subject of an episode of National Geographic Explorer in 2010 which focused on the goal of discovering life on Jupiter 's moon, Europa .
[ 12 ] In 2011, NASA awarded Stone Aerospace $4 million to fund Phase 2 of project VALKYRIE (Very-Deep Autonomous Laser-Powered Kilowatt-Class Yo-Yoing Robotic Ice Explorer). [ 13 ] This project created an autonomous cryobot capable of melting through vast amounts of ice. [ 14 ] The 5 kW power source on the surface uses optical fiber to conduct a high-energy laser beam to the vehicle, where it produces hot water jets that melt the ice ahead. [ 13 ] [ 15 ] Some beam energy is converted to electricity via photovoltaic cells to power on-board electronics and jet pumps. [ 13 ] Phase 2 of project VALKYRIE consisted of testing a scaled-down version of the cryobot in Matanuska Glacier , Alaska in 2015. [ 4 ] Stone Aerospace is now integrating a prototype submersible called ARTEMIS (4.3 m long, 1,270 kg) [ 16 ] with the VALKYRIE technology to produce SPINDLE . [ 5 ] This phase will use a full-scale version of the cryobot which will melt its way to an Antarctic subglacial lake — Lake Vostok — to collect samples, and then resurface. [ 14 ] [ 4 ] The vehicle features a radar integrated with an intelligent algorithm for autonomous scientific sampling and navigation through ice and water. [ 17 ] [ 18 ] This phase of the project would be viewed as a precursor to possible future missions to an icy moon. [ 19 ] [ 20 ] Stone Aerospace developed an autonomous underwater vehicle called Sunfish that was used to map flooded caves in Florida ( Peacock Springs ) in 2017–2018 and Namibia ( Lake Guinas , Harasib Cave , and Dragon's Breath Cave ) in 2019, and Hudson Grotto in Florida in 2020. [ 21 ] [ 22 ] [ 23 ]
https://en.wikipedia.org/wiki/Stone_Aerospace
In mathematics , a Stone algebra or Stone lattice is a pseudocomplemented distributive lattice L in which any of the following equivalent statements hold for all x , y ∈ L : {\displaystyle x,y\in L:} [ 1 ] They were introduced by Grätzer & Schmidt (1957) and named after Marshall Harvey Stone . The set S ( L ) ≝ { x ** | x ∈ L } is called the skeleton of L . Then L is a Stone algebra if and only if its skeleton S ( L ) is a sublattice of L . [ 1 ] Boolean algebras are Stone algebras, and Stone algebras are Ockham algebras .
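The list of equivalent conditions is not reproduced above. The characterization most commonly quoted in the literature, stated here from general knowledge rather than recovered from the source, is the Stone identity together with an equivalent De Morgan-type form:

x^{*} \vee x^{**} = 1 \quad \text{for all } x \in L, \qquad \text{equivalently} \qquad (x \wedge y)^{*} = x^{*} \vee y^{*} \quad \text{for all } x, y \in L.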
https://en.wikipedia.org/wiki/Stone_algebra
Stone sealing is the application of a surface treatment to products constructed of natural stone to retard staining and corrosion. [ 1 ] All bulk natural stone is riddled with interconnected capillary channels that permit penetration by liquids and gases. This is true for igneous rock types such as granite and basalt , metamorphic rocks such as marble and slate , and sedimentary rocks such as limestone , travertine , and sandstone . These porous channels act like a sponge, and capillary action draws in liquids over time, along with any dissolved salts and other solutes. Very porous stones, such as sandstone, absorb liquids relatively quickly, while denser igneous stones such as granite are significantly less porous; they absorb smaller volumes, and more slowly, especially when absorbing viscous liquids. Natural stone is used in kitchens, floors, walls, bathrooms, dining rooms, around swimming pools, building foyers, public areas and facades. Since ancient times, stone has been popular for building and decorative purposes. It has been valued for its strength, durability, and insulation properties. It can be cut, cleft, or sculpted to shape as required, and the variety of natural stone types, textures, and colors provides an exceptionally versatile range of building materials. The porosity and makeup of most stone do, however, leave it prone to certain types of damage if unsealed. The longevity and usefulness of stone can be extended by sealing its surface effectively, so as to exclude harmful liquids and gases. The ancient Romans often used olive oil to seal their stone. Such treatment provides some protection by excluding water and other weathering agents, but it stains the stone permanently. During the Renaissance, Europeans experimented with the use of topical varnishes and sealants made from ingredients such as egg white, natural resins and silica, which were clear, could be applied wet and hardened to form a protective skin. Most such measures did not last long, and some proved harmful in the long run. Modern stone sealers are divided into three broad types: topical sealers, penetrating sealers, and impregnating sealers. Topical sealers are generally made from polyurethanes, acrylics, or natural wax. [ 2 ] These sealers may be effective at stopping stains but, being exposed on the surface of the material, they tend to wear out relatively quickly, especially on high-traffic areas of flooring. This type of sealer will significantly change the look and slip resistance of the surface, especially when it is wet. These sealers are not breathable, i.e. they do not allow the escape of water vapour and other gases, and are not effective against salt attack, such as efflorescence and spalling . Penetrating sealers typically use siliconates, fluoro-polymers and siloxanes, which repel liquids. These sealers penetrate the surface of the stone enough to anchor the material to the surface. They are generally longer lasting than topical sealers and often do not substantially alter the look of the stone, but still can change the slip characteristics of the surface and do wear relatively quickly. Penetrating sealers often require the use of special cleaners which both clean and top up the repellent ingredient left on the stone surface. These sealers are often breathable to a certain degree, but do not penetrate deeply enough (generally less than 1 mm) to be effective against salt attack, such as efflorescence and spalling. Impregnating sealers use silanes or modified silanes.
These are a type of penetrating sealer that penetrates deeply into the material, impregnating it with molecules that bond to the capillary pores and repel water and/or oils from within the material. Some modified silane sealers impregnate deeply enough to protect against salt attack, such as efflorescence, spalling, picture framing and freeze-thaw spalling. Some silane stone sealers based on nanotechnology are claimed to be resistant to UV light and the higher pH levels found in new masonry and pointing. [ 3 ] A good depth of penetration is also essential for protection from weathering and traffic.
https://en.wikipedia.org/wiki/Stone_sealer
In mathematics , the Stoneham numbers are a certain class of real numbers , named after mathematician Richard G. Stoneham (1920–1996). [ 1 ] For coprime numbers b , c > 1, the Stoneham number α b , c is defined by an infinite series (reproduced in the note below). It was shown by Stoneham in 1973 that α b , c is b - normal whenever c is an odd prime and b is a primitive root of c 2 . In 2002, Bailey & Crandall showed that coprimality of b , c > 1 is sufficient for b -normality of α b , c . [ 2 ]
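The defining series is missing from the text above. The definition usually given in the literature, stated here from general knowledge rather than recovered from the source, is

\alpha_{b,c} = \sum_{n=1}^{\infty} \frac{1}{c^{n} b^{c^{n}}},

and a short Python sketch evaluating a partial sum, with the number of terms chosen arbitrarily for the example, is:

from fractions import Fraction

def stoneham_partial(b, c, terms=4):
    # Partial sum of alpha_{b,c} = sum over n >= 1 of 1 / (c^n * b^(c^n))
    return sum(Fraction(1, c**n * b**(c**n)) for n in range(1, terms + 1))

print(float(stoneham_partial(2, 3)))  # about 0.04188368, the start of the 2-normal number alpha_{2,3}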
https://en.wikipedia.org/wiki/Stoneham_number
A Stone–Wales defect is a crystallographic defect that involves the change of connectivity of two π-bonded carbon atoms, leading to their rotation by 90° with respect to the midpoint of their bond. [ 1 ] The reaction commonly involves conversion between a naphthalene -like structure into a fulvalene -like structure, that is, two rings that share an edge vs two separate rings that have vertices bonded to each other. The reaction occurs on carbon nanotubes , graphene , and similar carbon frameworks, where the four adjacent six-membered rings of a pyrene -like region are changed into two five-membered rings and two seven-membered rings when the bond uniting two of the adjacent rings rotates. In these materials, the rearrangement is thought to have important implications for the thermal, [ 3 ] chemical, electrical, and mechanical properties. [ 4 ] The rearrangement is an example of a pyracyclene rearrangement. The defect is named after Anthony Stone and David J. Wales at the University of Cambridge , who described it in a 1986 paper [ 5 ] on the isomerization of fullerenes . However, a similar defect was described much earlier by G. J. Dienes in 1952 in a paper on diffusion mechanisms in graphite [ 6 ] and later in 1969 in a paper on defects in graphite by Peter Thrower . [ 7 ] For this reason, the term Stone–Thrower–Wales defect is sometimes used. The defects have been imaged using scanning tunneling microscopy [ citation needed ] and transmission electron microscopy [ 8 ] and can be determined using various vibrational spectroscopy techniques. [ citation needed ] It has been proposed that the coalescence process of fullerenes or carbon nanotubes may occur through a sequence of such a rearrangements. [ citation needed ] The defect is thought to be responsible for nanoscale plasticity and the brittle–ductile transitions in carbon nanotubes. [ citation needed ] The activation energy for the simple atomic motion that gives the bond-rotation apparent in a Stone–Wales defects is fairly high—a barrier of several electronvolts . [ 4 ] [ 9 ] but various processes can create the defects at substantially lower energies than might be expected. [ 8 ] The rearrangement creates a structure with less resonance stabilization among the sp 2 atoms involved and higher strain energy in the local structure. As a result, the defect creates a region with greater chemical reactivity, including acting as a nucleophile [ citation needed ] and creating a preferred site for binding to hydrogen atoms. [ 10 ] The high affinity of these defects for hydrogen, coupled with the large surface area of the bulk material, might make these defects an important aspect in the use of carbon nanomaterials for hydrogen storage. [ 10 ] Incorporation of defects along a carbon-nanotube network can program a carbon-nanotube circuit to enhance the conductance along a specific path. [ citation needed ] In this scenario, the defects lead to a charge delocalization, which redirects an incoming electron down a given trajectory.
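To give a feel for the several-electronvolt barrier mentioned above, the following sketch compares Boltzmann factors at two temperatures; the 5 eV barrier and the temperatures are assumed, illustrative values rather than data from the text.

from math import exp

k_B = 8.617e-5          # Boltzmann constant in eV per kelvin
barrier = 5.0           # assumed activation energy in eV, of the order quoted above

for T in (300.0, 2000.0):
    # Boltzmann factor exp(-E_a / k_B T): vanishingly small at room temperature, still tiny when hot
    print(T, exp(-barrier / (k_B * T)))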
https://en.wikipedia.org/wiki/Stone–Wales_defect
In mathematical analysis , the Weierstrass approximation theorem states that every continuous function defined on a closed interval [ a , b ] can be uniformly approximated as closely as desired by a polynomial function. Because polynomials are among the simplest functions, and because computers can directly evaluate polynomials, this theorem has both practical and theoretical relevance, especially in polynomial interpolation . The original version of this result was established by Karl Weierstrass in 1885 using the Weierstrass transform . Marshall H. Stone considerably generalized the theorem [ 1 ] and simplified the proof. [ 2 ] His result is known as the Stone–Weierstrass theorem . The Stone–Weierstrass theorem generalizes the Weierstrass approximation theorem in two directions: instead of the real interval [ a , b ] , an arbitrary compact Hausdorff space X is considered, and instead of the algebra of polynomial functions, a variety of other families of continuous functions on X {\displaystyle X} are shown to suffice, as is detailed below . The Stone–Weierstrass theorem is a vital result in the study of the algebra of continuous functions on a compact Hausdorff space . Further, there is a generalization of the Stone–Weierstrass theorem to noncompact Tychonoff spaces , namely, any continuous function on a Tychonoff space is approximated uniformly on compact sets by algebras of the type appearing in the Stone–Weierstrass theorem and described below. A different generalization of Weierstrass' original theorem is Mergelyan's theorem , which generalizes it to functions defined on certain subsets of the complex plane . The statement of the approximation theorem as originally discovered by Weierstrass is as follows: Weierstrass approximation theorem — Suppose f is a continuous real-valued function defined on the real interval [ a , b ] . For every ε > 0 , there exists a polynomial p such that for all x in [ a , b ] , we have | f ( x ) − p ( x )| < ε , or equivalently, the supremum norm ‖ f − p ‖ < ε . A constructive proof of this theorem using Bernstein polynomials is outlined on that page. For differentiable functions, Jackson's inequality bounds the error of approximations by polynomials of a given degree: if f {\displaystyle f} has a continuous k-th derivative, then for every n ∈ N {\displaystyle n\in \mathbb {N} } there exists a polynomial p n {\displaystyle p_{n}} of degree at most n {\displaystyle n} such that ‖ f − p n ‖ ≤ π 2 1 ( n + 1 ) k ‖ f ( k ) ‖ {\displaystyle \lVert f-p_{n}\rVert \leq {\frac {\pi }{2}}{\frac {1}{(n+1)^{k}}}\lVert f^{(k)}\rVert } . [ 3 ] However, if f {\displaystyle f} is merely continuous, the convergence of the approximations can be arbitrarily slow in the following sense: for any sequence of positive real numbers ( a n ) n ∈ N {\displaystyle (a_{n})_{n\in \mathbb {N} }} decreasing to 0 there exists a function f {\displaystyle f} such that ‖ f − p ‖ > a n {\displaystyle \lVert f-p\rVert >a_{n}} for every polynomial p {\displaystyle p} of degree at most n {\displaystyle n} . [ 4 ] As a consequence of the Weierstrass approximation theorem, one can show that the space C[ a , b ] is separable : the polynomial functions are dense, and each polynomial function can be uniformly approximated by one with rational coefficients; there are only countably many polynomials with rational coefficients. Since C[ a , b ] is metrizable and separable it follows that C[ a , b ] has cardinality at most 2 ℵ 0 . 
(Remark: This cardinality result also follows from the fact that a continuous function on the reals is uniquely determined by its restriction to the rationals.) The set C[ a , b ] of continuous real-valued functions on [ a , b ] , together with the supremum norm ‖ f ‖ = sup a ≤ x ≤ b | f ( x ) | is a Banach algebra , (that is, an associative algebra and a Banach space such that ‖ fg ‖ ≤ ‖ f ‖·‖ g ‖ for all f , g ). The set of all polynomial functions forms a subalgebra of C[ a , b ] (that is, a vector subspace of C[ a , b ] that is closed under multiplication of functions), and the content of the Weierstrass approximation theorem is that this subalgebra is dense in C[ a , b ] . Stone starts with an arbitrary compact Hausdorff space X and considers the algebra C( X , R ) of real-valued continuous functions on X , with the topology of uniform convergence . He wants to find subalgebras of C( X , R ) which are dense. It turns out that the crucial property that a subalgebra must satisfy is that it separates points : a set A of functions defined on X is said to separate points if, for every two different points x and y in X there exists a function p in A with p ( x ) ≠ p ( y ) . Now we may state: Stone–Weierstrass theorem (real numbers) — Suppose X is a compact Hausdorff space and A is a subalgebra of C( X , R ) which contains a non-zero constant function. Then A is dense in C( X , R ) if and only if it separates points. This implies Weierstrass' original statement since the polynomials on [ a , b ] form a subalgebra of C[ a , b ] which contains the constants and separates points. A version of the Stone–Weierstrass theorem is also true when X is only locally compact . Let C 0 ( X , R ) be the space of real-valued continuous functions on X that vanish at infinity ; that is, a continuous function f is in C 0 ( X , R ) if, for every ε > 0 , there exists a compact set K ⊂ X such that | f |  < ε on X \ K . Again, C 0 ( X , R ) is a Banach algebra with the supremum norm . A subalgebra A of C 0 ( X , R ) is said to vanish nowhere if not all of the elements of A simultaneously vanish at a point; that is, for every x in X , there is some f in A such that f ( x ) ≠ 0 . The theorem generalizes as follows: Stone–Weierstrass theorem (locally compact spaces) — Suppose X is a locally compact Hausdorff space and A is a subalgebra of C 0 ( X , R ) . Then A is dense in C 0 ( X , R ) (given the topology of uniform convergence ) if and only if it separates points and vanishes nowhere. This version clearly implies the previous version in the case when X is compact, since in that case C 0 ( X , R ) = C( X , R ) . There are also more general versions of the Stone–Weierstrass theorem that weaken the assumption of local compactness. [ 5 ] The Stone–Weierstrass theorem can be used to prove the following two statements, which go beyond Weierstrass's result. Slightly more general is the following theorem, where we consider the algebra C ( X , C ) {\displaystyle C(X,\mathbb {C} )} of complex-valued continuous functions on the compact space X {\displaystyle X} , again with the topology of uniform convergence. This is a C*-algebra with the *-operation given by pointwise complex conjugation . Stone–Weierstrass theorem (complex numbers) — Let X {\displaystyle X} be a compact Hausdorff space and let S {\displaystyle S} be a separating subset of C ( X , C ) {\displaystyle C(X,\mathbb {C} )} . Then the complex unital *-algebra generated by S {\displaystyle S} is dense in C ( X , C ) {\displaystyle C(X,\mathbb {C} )} . 
The complex unital *-algebra generated by S {\displaystyle S} consists of all those functions that can be obtained from the elements of S {\displaystyle S} by throwing in the constant function 1 and adding them, multiplying them, conjugating them, or multiplying them with complex scalars, and repeating finitely many times. This theorem implies the real version, because if a net of complex-valued functions uniformly approximates a given function, f n → f {\displaystyle f_{n}\to f} , then the real parts of those functions uniformly approximate the real part of that function, Re ⁡ f n → Re ⁡ f {\displaystyle \operatorname {Re} f_{n}\to \operatorname {Re} f} , and because for real subsets, S ⊂ C ( X , R ) ⊂ C ( X , C ) , {\displaystyle S\subset C(X,\mathbb {R} )\subset C(X,\mathbb {C} ),} taking the real parts of the generated complex unital (selfadjoint) algebra agrees with the generated real unital algebra generated. As in the real case, an analog of this theorem is true for locally compact Hausdorff spaces. The following is an application of this complex version. Following Holladay (1957) , consider the algebra C( X , H ) of quaternion-valued continuous functions on the compact space X , again with the topology of uniform convergence. If a quaternion q is written in the form q = a + i b + j c + k d {\textstyle q=a+ib+jc+kd} Likewise Then we may state: Stone–Weierstrass theorem (quaternion numbers) — Suppose X is a compact Hausdorff space and A is a subalgebra of C( X , H ) which contains a non-zero constant function. Then A is dense in C( X , H ) if and only if it separates points . The space of complex-valued continuous functions on a compact Hausdorff space X {\displaystyle X} i.e. C ( X , C ) {\displaystyle C(X,\mathbb {C} )} is the canonical example of a unital commutative C*-algebra A {\displaystyle {\mathfrak {A}}} . The space X may be viewed as the space of pure states on A {\displaystyle {\mathfrak {A}}} , with the weak-* topology. Following the above cue, a non-commutative extension of the Stone–Weierstrass theorem, which remains unsolved, is as follows: Conjecture — If a unital C*-algebra A {\displaystyle {\mathfrak {A}}} has a C*-subalgebra B {\displaystyle {\mathfrak {B}}} which separates the pure states of A {\displaystyle {\mathfrak {A}}} , then A = B {\displaystyle {\mathfrak {A}}={\mathfrak {B}}} . In 1960, Jim Glimm proved a weaker version of the above conjecture. Stone–Weierstrass theorem (C*-algebras) [ 6 ] — If a unital C*-algebra A {\displaystyle {\mathfrak {A}}} has a C*-subalgebra B {\displaystyle {\mathfrak {B}}} which separates the pure state space (i.e. the weak-* closure of the pure states) of A {\displaystyle {\mathfrak {A}}} , then A = B {\displaystyle {\mathfrak {A}}={\mathfrak {B}}} . Let X be a compact Hausdorff space. Stone's original proof of the theorem used the idea of lattices in C( X , R ) . A subset L of C( X , R ) is called a lattice if for any two elements f , g ∈ L , the functions max{ f , g }, min{ f , g } also belong to L . The lattice version of the Stone–Weierstrass theorem states: Stone–Weierstrass theorem (lattices) — Suppose X is a compact Hausdorff space with at least two points and L is a lattice in C( X , R ) with the property that for any two distinct elements x and y of X and any two real numbers a and b there exists an element f ∈ L with f ( x ) = a and f ( y ) = b . Then L is dense in C( X , R ) . 
The above versions of Stone–Weierstrass can be proven from this version once one realizes that the lattice property can also be formulated using the absolute value | f | which in turn can be approximated by polynomials in f . A variant of the theorem applies to linear subspaces of C( X , R ) closed under max: [ 7 ] Stone–Weierstrass theorem (max-closed) — Suppose X is a compact Hausdorff space and B is a family of functions in C( X , R ) such that Then B is dense in C( X , R ) . More precise information is available: Another generalization of the Stone–Weierstrass theorem is due to Errett Bishop . Bishop's theorem is as follows: [ 8 ] Bishop's theorem — Let A be a closed subalgebra of the complex Banach algebra C( X , C ) of continuous complex-valued functions on a compact Hausdorff space X , using the supremum norm. For S ⊂ X we write A S = { g| S : g ∈ A } . Suppose that f ∈ C( X , C ) has the following property: Then f ∈ A . Glicksberg (1962) gives a short proof of Bishop's theorem using the Krein–Milman theorem in an essential way, as well as the Hahn–Banach theorem : the process of Louis de Branges (1959) . See also Rudin (1973 , §5.7). Nachbin's theorem gives an analog for Stone–Weierstrass theorem for algebras of complex valued smooth functions on a smooth manifold. [ 9 ] Nachbin's theorem is as follows: [ 10 ] Nachbin's theorem — Let A be a subalgebra of the algebra C ∞ ( M ) of smooth functions on a finite dimensional smooth manifold M . Suppose that A separates the points of M and also separates the tangent vectors of M : for each point m ∈ M and tangent vector v at the tangent space at m , there is a f ∈ A such that d f ( x )( v ) ≠ 0. Then A is dense in C ∞ ( M ) . In 1885 it was also published in an English version of the paper whose title was On the possibility of giving an analytic representation to an arbitrary function of real variable . [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] According to the mathematician Yamilet Quintana , Weierstrass "suspected that any analytic functions could be represented by power series ". [ 15 ] [ 14 ] The historical publication of Weierstrass (in German language ) is freely available from the digital online archive of the Berlin Brandenburgische Akademie der Wissenschaften :
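As an illustration of the constructive route via Bernstein polynomials mentioned near the beginning of this article, the following Python sketch evaluates the Bernstein approximant of a continuous function on [0, 1]; the test function and the degree are chosen arbitrarily for the example.

from math import comb

def bernstein(f, n, x):
    # B_n(f)(x) = sum_{k=0}^{n} f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

f = lambda x: abs(x - 0.5)   # continuous on [0, 1] but not differentiable at 0.5
error = max(abs(bernstein(f, 200, i / 100) - f(i / 100)) for i in range(101))
print(error)                 # small uniform error, which shrinks further as the degree n grows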
https://en.wikipedia.org/wiki/Stone–Weierstrass_theorem
In mathematics and in theoretical physics , the Stone–von Neumann theorem refers to any one of a number of different formulations of the uniqueness of the canonical commutation relations between position and momentum operators . It is named after Marshall Stone and John von Neumann . [ 1 ] [ 2 ] [ 3 ] [ 4 ] In quantum mechanics , physical observables are represented mathematically by linear operators on Hilbert spaces . For a single particle moving on the real line R {\displaystyle \mathbb {R} } , there are two important observables: position and momentum . In the Schrödinger representation quantum description of such a particle, the position operator x and momentum operator p {\displaystyle p} are respectively given by [ x ψ ] ( x 0 ) = x 0 ψ ( x 0 ) [ p ψ ] ( x 0 ) = − i ℏ ∂ ψ ∂ x ( x 0 ) {\displaystyle {\begin{aligned}[][x\psi ](x_{0})&=x_{0}\psi (x_{0})\\[][p\psi ](x_{0})&=-i\hbar {\frac {\partial \psi }{\partial x}}(x_{0})\end{aligned}}} on the domain V {\displaystyle V} of infinitely differentiable functions of compact support on R {\displaystyle \mathbb {R} } . Assume ℏ {\displaystyle \hbar } to be a fixed non-zero real number—in quantum theory ℏ {\displaystyle \hbar } is the reduced Planck constant , which carries units of action (energy times time). The operators x {\displaystyle x} , p {\displaystyle p} satisfy the canonical commutation relation Lie algebra, [ x , p ] = x p − p x = i ℏ . {\displaystyle [x,p]=xp-px=i\hbar .} Already in his classic book, [ 5 ] Hermann Weyl observed that this commutation law was impossible to satisfy for linear operators p , x acting on finite-dimensional spaces unless ħ vanishes. This is apparent from taking the trace over both sides of the latter equation and using the relation Trace( AB ) = Trace( BA ) ; the left-hand side is zero, the right-hand side is non-zero. Further analysis shows that any two self-adjoint operators satisfying the above commutation relation cannot be both bounded (in fact, a theorem of Wielandt shows the relation cannot be satisfied by elements of any normed algebra [ note 1 ] ). For notational convenience, the nonvanishing square root of ℏ may be absorbed into the normalization of p and x , so that, effectively, it is replaced by 1. We assume this normalization in what follows. The idea of the Stone–von Neumann theorem is that any two irreducible representations of the canonical commutation relations are unitarily equivalent. Since, however, the operators involved are necessarily unbounded (as noted above), there are tricky domain issues that allow for counter-examples. [ 6 ] : Example 14.5 To obtain a rigorous result, one must require that the operators satisfy the exponentiated form of the canonical commutation relations, known as the Weyl relations. The exponentiated operators are bounded and unitary. Although, as noted below, these relations are formally equivalent to the standard canonical commutation relations, this equivalence is not rigorous, because (again) of the unbounded nature of the operators. (There is also a discrete analog of the Weyl relations, which can hold in a finite-dimensional space, [ 6 ] : Chapter 14, Exercise 5 namely Sylvester 's clock and shift matrices in the finite Heisenberg group, discussed below.) One would like to classify representations of the canonical commutation relation by two self-adjoint operators acting on separable Hilbert spaces, up to unitary equivalence . 
By Stone's theorem , there is a one-to-one correspondence between self-adjoint operators and (strongly continuous) one-parameter unitary groups. Let Q and P be two self-adjoint operators satisfying the canonical commutation relation, [ Q , P ] = i , and s and t two real parameters. Introduce e itQ and e isP , the corresponding unitary groups given by functional calculus . (For the explicit operators x and p defined above, these are multiplication by e itx and pullback by translation x → x + s .) A formal computation [ 6 ] : Section 14.2 (using a special case of the Baker–Campbell–Hausdorff formula ) readily yields e i t Q e i s P = e − i s t e i s P e i t Q . {\displaystyle e^{itQ}e^{isP}=e^{-ist}e^{isP}e^{itQ}.} Conversely, given two one-parameter unitary groups U ( t ) and V ( s ) satisfying the braiding relation U ( t ) V ( s ) = e − i s t V ( s ) U ( t ) ∀ s , t , {\displaystyle U(t)V(s)=e^{-ist}V(s)U(t)\qquad \forall s,t,} ( E1 ) formally differentiating at 0 shows that the two infinitesimal generators satisfy the above canonical commutation relation. This braiding formulation of the canonical commutation relations (CCR) for one-parameter unitary groups is called the Weyl form of the CCR . It is important to note that the preceding derivation is purely formal. Since the operators involved are unbounded, technical issues prevent application of the Baker–Campbell–Hausdorff formula without additional domain assumptions. Indeed, there exist operators satisfying the canonical commutation relation but not the Weyl relations ( E1 ). [ 6 ] : Example 14.5 Nevertheless, in "good" cases, we expect that operators satisfying the canonical commutation relation will also satisfy the Weyl relations. The problem thus becomes classifying two jointly irreducible one-parameter unitary groups U ( t ) and V ( s ) which satisfy the Weyl relation on separable Hilbert spaces. The answer is the content of the Stone–von Neumann theorem : all such pairs of one-parameter unitary groups are unitarily equivalent . [ 6 ] : Theorem 14.8 In other words, for any two such U ( t ) and V ( s ) acting jointly irreducibly on a Hilbert space H , there is a unitary operator W : L 2 ( R ) → H so that W ∗ U ( t ) W = e i t x and W ∗ V ( s ) W = e i s p , {\displaystyle W^{*}U(t)W=e^{itx}\quad {\text{and}}\quad W^{*}V(s)W=e^{isp},} where p and x are the explicit position and momentum operators from earlier. When W is U in this equation, so, then, in the x -representation, it is evident that P is unitarily equivalent to e − itQ P e itQ = P + t , and the spectrum of P must range along the entire real line. The analog argument holds for Q . There is also a straightforward extension of the Stone–von Neumann theorem to n degrees of freedom. [ 6 ] : Theorem 14.8 Historically, this result was significant, because it was a key step in proving that Heisenberg 's matrix mechanics , which presents quantum mechanical observables and dynamics in terms of infinite matrices, is unitarily equivalent to Schrödinger 's wave mechanical formulation (see Schrödinger picture ), [ U ( t ) ψ ] ( x ) = e i t x ψ ( x ) , [ V ( s ) ψ ] ( x ) = ψ ( x + s ) . {\displaystyle [U(t)\psi ](x)=e^{itx}\psi (x),\qquad [V(s)\psi ](x)=\psi (x+s).} In terms of representation theory, the Stone–von Neumann theorem classifies certain unitary representations of the Heisenberg group . This is discussed in more detail in the Heisenberg group section , below. 
Informally stated, with certain technical assumptions, every representation of the Heisenberg group H 2 n + 1 is equivalent to the position operators and momentum operators on R n . Alternatively, that they are all equivalent to the Weyl algebra (or CCR algebra ) on a symplectic space of dimension 2 n . More formally, there is a unique (up to scale) non-trivial central strongly continuous unitary representation. This was later generalized by Mackey theory – and was the motivation for the introduction of the Heisenberg group in quantum physics. In detail: In all cases, if one has a representation H 2 n + 1 → A , where A is an algebra [ clarification needed ] and the center maps to zero, then one simply has a representation of the corresponding abelian group or algebra, which is Fourier theory . [ clarification needed ] If the center does not map to zero, one has a more interesting theory, particularly if one restricts oneself to central representations. Concretely, by a central representation one means a representation such that the center of the Heisenberg group maps into the center of the algebra : for example, if one is studying matrix representations or representations by operators on a Hilbert space, then the center of the matrix algebra or the operator algebra is the scalar matrices . Thus the representation of the center of the Heisenberg group is determined by a scale value, called the quantization value (in physics terms, the Planck constant), and if this goes to zero, one gets a representation of the abelian group (in physics terms, this is the classical limit). More formally, the group algebra of the Heisenberg group over its field of scalars K , written K [ H ] , has center K [ R ] , so rather than simply thinking of the group algebra as an algebra over the field K , one may think of it as an algebra over the commutative algebra K [ R ] . As the center of a matrix algebra or operator algebra is the scalar matrices, a K [ R ] -structure on the matrix algebra is a choice of scalar matrix – a choice of scale. Given such a choice of scale, a central representation of the Heisenberg group is a map of K [ R ] -algebras K [ H ] → A , which is the formal way of saying that it sends the center to a chosen scale. Then the Stone–von Neumann theorem is that, given the standard quantum mechanical scale (effectively, the value of ħ), every strongly continuous unitary representation is unitarily equivalent to the standard representation with position and momentum. Let G be a locally compact abelian group and G ^ be the Pontryagin dual of G . The Fourier–Plancherel transform defined by f ↦ f ^ ( γ ) = ∫ G γ ( t ) ¯ f ( t ) d μ ( t ) {\displaystyle f\mapsto {\hat {f}}(\gamma )=\int _{G}{\overline {\gamma (t)}}f(t)d\mu (t)} extends to a C*-isomorphism from the group C*-algebra C*( G ) of G and C 0 ( G ^ ) , i.e. the spectrum of C*( G ) is precisely G ^ . When G is the real line R , this is Stone's theorem characterizing one-parameter unitary groups. The theorem of Stone–von Neumann can also be restated using similar language. The group G acts on the C *-algebra C 0 ( G ) by right translation ρ : for s in G and f in C 0 ( G ) , ( s ⋅ f ) ( t ) = f ( t + s ) . {\displaystyle (s\cdot f)(t)=f(t+s).} Under the isomorphism given above, this action becomes the natural action of G on C*( G ^ ) : ( s ⋅ f ) ^ ( γ ) = γ ( s ) f ^ ( γ ) . 
{\displaystyle {\widehat {(s\cdot f)}}(\gamma )=\gamma (s){\hat {f}}(\gamma ).} So a covariant representation corresponding to the C *- crossed product C ∗ ( G ^ ) ⋊ ρ ^ G {\displaystyle C^{*}\left({\hat {G}}\right)\rtimes _{\hat {\rho }}G} is a unitary representation U ( s ) of G and V ( γ ) of G ^ such that U ( s ) V ( γ ) U ∗ ( s ) = γ ( s ) V ( γ ) . {\displaystyle U(s)V(\gamma )U^{*}(s)=\gamma (s)V(\gamma ).} It is a general fact that covariant representations are in one-to-one correspondence with *-representation of the corresponding crossed product. On the other hand, all irreducible representations of C 0 ( G ) ⋊ ρ G {\displaystyle C_{0}(G)\rtimes _{\rho }G} are unitarily equivalent to the K ( L 2 ( G ) ) {\displaystyle {\mathcal {K}}\left(L^{2}(G)\right)} , the compact operators on L 2 ( G )) . Therefore, all pairs { U ( s ), V ( γ )} are unitarily equivalent. Specializing to the case where G = R yields the Stone–von Neumann theorem. The above canonical commutation relations for P , Q are identical to the commutation relations that specify the Lie algebra of the general Heisenberg group H 2 n +1 for n a positive integer. This is the Lie group of ( n + 2) × ( n + 2) square matrices of the form M ( a , b , c ) = [ 1 a c 0 1 n b 0 0 1 ] . {\displaystyle \mathrm {M} (a,b,c)={\begin{bmatrix}1&a&c\\0&1_{n}&b\\0&0&1\end{bmatrix}}.} In fact, using the Heisenberg group, one can reformulate the Stone von Neumann theorem in the language of representation theory. Note that the center of H 2n+1 consists of matrices M(0, 0, c ) . However, this center is not the identity operator in Heisenberg's original CCRs. The Heisenberg group Lie algebra generators, e.g. for n = 1 , are P = [ 0 1 0 0 0 0 0 0 0 ] , Q = [ 0 0 0 0 0 1 0 0 0 ] , z = [ 0 0 1 0 0 0 0 0 0 ] , {\displaystyle {\begin{aligned}P&={\begin{bmatrix}0&1&0\\0&0&0\\0&0&0\end{bmatrix}},&Q&={\begin{bmatrix}0&0&0\\0&0&1\\0&0&0\end{bmatrix}},&z&={\begin{bmatrix}0&0&1\\0&0&0\\0&0&0\end{bmatrix}},\end{aligned}}} and the central generator z = log M (0, 0, 1) = exp( z ) − 1 is not the identity. Theorem — For each non-zero real number h there is an irreducible representation U h acting on the Hilbert space L 2 ( R n ) by [ U h ( M ( a , b , c ) ) ] ψ ( x ) = e i ( b ⋅ x + h c ) ψ ( x + h a ) . {\displaystyle \left[U_{h}(\mathrm {M} (a,b,c))\right]\psi (x)=e^{i(b\cdot x+hc)}\psi (x+ha).} All these representations are unitarily inequivalent ; and any irreducible representation which is not trivial on the center of H n is unitarily equivalent to exactly one of these. Note that U h is a unitary operator because it is the composition of two operators which are easily seen to be unitary: the translation to the left by ha and multiplication by a function of absolute value 1. To show U h is multiplicative is a straightforward calculation. The hard part of the theorem is showing the uniqueness; this claim, nevertheless, follows easily from the Stone–von Neumann theorem as stated above. We will sketch below a proof of the corresponding Stone–von Neumann theorem for certain finite Heisenberg groups. In particular, irreducible representations π , π′ of the Heisenberg group H n which are non-trivial on the center of H n are unitarily equivalent if and only if π ( z ) = π′ ( z ) for any z in the center of H n . 
One representation of the Heisenberg group which is important in number theory and the theory of modular forms is the theta representation , so named because the Jacobi theta function is invariant under the action of the discrete subgroup of the Heisenberg group. For any non-zero h , the mapping α h : M ( a , b , c ) → M ( − h − 1 b , h a , c − a ⋅ b ) {\displaystyle \alpha _{h}:\mathrm {M} (a,b,c)\to \mathrm {M} \left(-h^{-1}b,ha,c-a\cdot b\right)} is an automorphism of H n which is the identity on the center of H n . In particular, the representations U h and U h α are unitarily equivalent. This means that there is a unitary operator W on L 2 ( R n ) such that, for any g in H n , W U h ( g ) W ∗ = U h α ( g ) . {\displaystyle WU_{h}(g)W^{*}=U_{h}\alpha (g).} Moreover, by irreducibility of the representations U h , it follows that up to a scalar , such an operator W is unique (cf. Schur's lemma ). Since W is unitary, this scalar multiple is uniquely determined and hence such an operator W is unique. Theorem — The operator W is the Fourier transform on L 2 ( R n ) . This means that, ignoring the factor of (2 π ) n /2 in the definition of the Fourier transform, ∫ R n e − i x ⋅ p e i ( b ⋅ x + h c ) ψ ( x + h a ) d x = e i ( h a ⋅ p + h ( c − b ⋅ a ) ) ∫ R n e − i y ⋅ ( p − b ) ψ ( y ) d y . {\displaystyle \int _{\mathbf {R} ^{n}}e^{-ix\cdot p}e^{i(b\cdot x+hc)}\psi (x+ha)\ dx=e^{i(ha\cdot p+h(c-b\cdot a))}\int _{\mathbf {R} ^{n}}e^{-iy\cdot (p-b)}\psi (y)\ dy.} This theorem has the immediate implication that the Fourier transform is unitary , also known as the Plancherel theorem . Moreover, ( α h ) 2 M ( a , b , c ) = M ( − a , − b , c ) . {\displaystyle (\alpha _{h})^{2}\mathrm {M} (a,b,c)=\mathrm {M} (-a,-b,c).} Theorem — The operator W 1 such that W 1 U h W 1 ∗ = U h α 2 ( g ) {\displaystyle W_{1}U_{h}W_{1}^{*}=U_{h}\alpha ^{2}(g)} is the reflection operator [ W 1 ψ ] ( x ) = ψ ( − x ) . {\displaystyle [W_{1}\psi ](x)=\psi (-x).} From this fact the Fourier inversion formula easily follows. The Segal–Bargmann space is the space of holomorphic functions on C n that are square-integrable with respect to a Gaussian measure. Fock observed in 1920s that the operators a j = ∂ ∂ z j , a j ∗ = z j , {\displaystyle a_{j}={\frac {\partial }{\partial z_{j}}},\qquad a_{j}^{*}=z_{j},} acting on holomorphic functions, satisfy the same commutation relations as the usual annihilation and creation operators, namely, [ a j , a k ∗ ] = δ j , k . {\displaystyle \left[a_{j},a_{k}^{*}\right]=\delta _{j,k}.} In 1961, Bargmann showed that a ∗ j is actually the adjoint of a j with respect to the inner product coming from the Gaussian measure. By taking appropriate linear combinations of a j and a ∗ j , one can then obtain "position" and "momentum" operators satisfying the canonical commutation relations. It is not hard to show that the exponentials of these operators satisfy the Weyl relations and that the exponentiated operators act irreducibly. [ 6 ] : Section 14.4 The Stone–von Neumann theorem therefore applies and implies the existence of a unitary map from L 2 ( R n ) to the Segal–Bargmann space that intertwines the usual annihilation and creation operators with the operators a j and a ∗ j . This unitary map is the Segal–Bargmann transform . The Heisenberg group H n ( K ) is defined for any commutative ring K . In this section let us specialize to the field K = Z / p Z for p a prime. This field has the property that there is an embedding ω of K as an additive group into the circle group T . 
Note that H n ( K ) is finite with cardinality | K | 2 n + 1 . For finite Heisenberg group H n ( K ) one can give a simple proof of the Stone–von Neumann theorem using simple properties of character functions of representations. These properties follow from the orthogonality relations for characters of representations of finite groups. For any non-zero h in K define the representation U h on the finite-dimensional inner product space ℓ 2 ( K n ) by [ U h M ( a , b , c ) ψ ] ( x ) = ω ( b ⋅ x + h c ) ψ ( x + h a ) . {\displaystyle \left[U_{h}\mathrm {M} (a,b,c)\psi \right](x)=\omega (b\cdot x+hc)\psi (x+ha).} Theorem — For a fixed non-zero h , the character function χ of U h is given by: χ ( M ( a , b , c ) ) = { | K | n ω ( h c ) if a = b = 0 0 otherwise {\displaystyle \chi (\mathrm {M} (a,b,c))={\begin{cases}|K|^{n}\,\omega (hc)&{\text{if }}a=b=0\\0&{\text{otherwise}}\end{cases}}} It follows that 1 | H n ( K ) | ∑ g ∈ H n ( K ) | χ ( g ) | 2 = 1 | K | 2 n + 1 | K | 2 n | K | = 1. {\displaystyle {\frac {1}{\left|H_{n}(\mathbf {K} )\right|}}\sum _{g\in H_{n}(K)}|\chi (g)|^{2}={\frac {1}{|K|^{2n+1}}}|K|^{2n}|K|=1.} By the orthogonality relations for characters of representations of finite groups this fact implies the corresponding Stone–von Neumann theorem for Heisenberg groups H n ( Z / p Z ) , particularly: Actually, all irreducible representations of H n ( K ) on which the center acts nontrivially arise in this way. [ 6 ] : Chapter 14, Exercise 5 The Stone–von Neumann theorem admits numerous generalizations. Much of the early work of George Mackey was directed at obtaining a formulation [ 7 ] of the theory of induced representations developed originally by Frobenius for finite groups to the context of unitary representations of locally compact topological groups.
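The finite-dimensional analogue mentioned earlier (Sylvester's clock and shift matrices) can be checked directly. The following NumPy sketch, an illustration rather than material from the source, verifies the discrete Weyl relation underlying the finite Heisenberg group representations discussed above; the modulus p = 5 is chosen arbitrarily.

import numpy as np

p = 5
omega = np.exp(2j * np.pi / p)                     # primitive p-th root of unity
clock = np.diag([omega**k for k in range(p)])      # "clock" matrix Z
shift = np.roll(np.eye(p), 1, axis=0)              # "shift" matrix X, a cyclic permutation

# Discrete analogue of U(s)V(gamma)U*(s) = gamma(s)V(gamma): Z X = omega * X Z
print(np.allclose(clock @ shift, omega * (shift @ clock)))   # True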
https://en.wikipedia.org/wiki/Stone–von_Neumann_theorem
In mathematics , the Stone–Čech remainder of a topological space X , also called the corona or corona set , is the complement β X \ X of the space in its Stone–Čech compactification β X . A topological space is said to be σ-compact if it is the union of countably many compact subspaces, and locally compact if every point has a neighbourhood with compact closure . The Stone–Čech remainder of a σ-compact and locally compact Hausdorff space is a sub-Stonean space , i.e., any two open σ-compact disjoint subsets have disjoint compact closures.
https://en.wikipedia.org/wiki/Stone–Čech_remainder
A Stonyhurst disk is a transparent circular grid with lines of longitude and latitude that can overlay a solar image to reference the positions of sunspots . This overlay system was originally created at the Stonyhurst College observatory . [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Stonyhurst_disks
Stop-and-wait ARQ , also referred to as alternating bit protocol , is a method in telecommunications to send information between two connected devices. It ensures that information is not lost due to dropped packets and that packets are received in the correct order. It is the simplest automatic repeat-request (ARQ) mechanism. A stop-and-wait ARQ sender sends one frame at a time; it is a special case of the general sliding window protocol with transmit and receive window sizes equal to one in both cases. After sending each frame, the sender doesn't send any further frames until it receives an acknowledgement (ACK) signal. After receiving a valid frame, the receiver sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the sender sends the same frame again. The timeout countdown is reset after each frame transmission. The above behavior is a basic example of Stop-and-Wait. However, real-life implementations vary to address certain issues of design. Typically the transmitter adds a redundancy check number to the end of each frame. The receiver uses the redundancy check number to check for possible damage. If the receiver sees that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, the receiver discards it and does not send an ACK—pretending that the frame was completely lost, not merely damaged. One problem is when the ACK sent by the receiver is damaged or lost. In this case, the sender doesn't receive the ACK, times out, and sends the frame again. Now the receiver has two copies of the same frame, and doesn't know if the second one is a duplicate frame or the next frame of the sequence carrying identical DATA. Another problem is when the transmission medium has such a long latency that the sender's timeout runs out before the frame reaches the receiver. In this case the sender resends the same packet. Eventually the receiver gets two copies of the same frame, and sends an ACK for each one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it assumes that the second ACK is for the next frame in the sequence. To avoid these problems, the most common solution is to define a 1 bit sequence number in the header of the frame. This sequence number alternates (from 0 to 1) in subsequent frames. When the receiver sends an ACK, it includes the sequence number of the next packet it expects. This way, the receiver can detect duplicated frames by checking if the frame sequence numbers alternate. If two subsequent frames have the same sequence number, they are duplicates, and the second frame is discarded. Similarly, if two subsequent ACKs reference the same sequence number, they are acknowledging the same frame. Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if the ACK and the data are received successfully, is twice the transit time (assuming the turnaround time can be zero). The throughput on the channel is a fraction of what it could be. To solve this problem, one can send more than one packet at a time with a larger sequence number and use one ACK for a set. This is what is done in Go-Back-N ARQ and the Selective Repeat ARQ .
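The alternating-bit mechanism described above can be made concrete with a short simulation. The sketch below (Python; the function name, loss probability and frame contents are invented for illustration) models a lossy channel in which both frames and ACKs can be dropped. The sender retransmits on timeout, and the receiver uses the 1-bit sequence number to discard duplicates, so the delivered stream comes out complete and in order.

import random

def simulate_stop_and_wait(frames, loss_prob=0.3, seed=0):
    # toy stop-and-wait ARQ: 1-bit sequence numbers, lossy channel, retransmit on timeout
    rng = random.Random(seed)
    delivered, expected_seq, send_seq = [], 0, 0
    for data in frames:
        while True:                          # keep retransmitting until an ACK arrives
            if rng.random() < loss_prob:     # frame lost in transit: sender times out, resends
                continue
            if send_seq == expected_seq:     # receiver: new frame, accept and flip expected bit
                delivered.append(data)
                expected_seq ^= 1
            # else: duplicate frame, discard it but still acknowledge
            if rng.random() < loss_prob:     # ACK lost: sender times out, resends the frame
                continue
            break                            # ACK received; move on to the next frame
        send_seq ^= 1
    return delivered

print(simulate_stop_and_wait(["a", "b", "c", "d"]))   # delivered in order despite losses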
https://en.wikipedia.org/wiki/Stop-and-wait_ARQ
In molecular biology , a stop codon (or termination codon ) is a codon ( nucleotide triplet within messenger RNA ) that signals the termination of the translation process of the current protein . [ 1 ] Most codons in messenger RNA correspond to the addition of an amino acid to a growing polypeptide chain, which may ultimately become a protein; stop codons signal the termination of this process by binding release factors , which cause the ribosomal subunits to disassociate, releasing the amino acid chain. While start codons need nearby sequences or initiation factors to start translation, a stop codon alone is sufficient to initiate termination. In the standard genetic code, there are three different termination codons: There are variations on the standard genetic code , and alternative stop codons have been found in the mitochondrial genomes of vertebrates , [ 2 ] Scenedesmus obliquus , [ 3 ] and Thraustochytrium . [ 4 ] The nuclear genetic code is flexible as illustrated by variant genetic codes that reassign standard stop codons to amino acids. [ 5 ] In 1986, convincing evidence was provided that selenocysteine (Sec) was incorporated co-translationally. Moreover, the codon partially directing its incorporation in the polypeptide chain was identified as UGA also known as the opal termination codon. [ 6 ] Different mechanisms for overriding the termination function of this codon have been identified in prokaryotes and in eukaryotes. [ 7 ] A particular difference between these kingdoms is that cis elements seem restricted to the neighborhood of the UAG codon in prokaryotes while in eukaryotes this restriction is not present. Instead such locations seem disfavored albeit not prohibited. [ 8 ] In 2003, a landmark paper described the identification of all known selenoproteins in humans: 25 in total. [ 9 ] Similar analyses have been run for other organisms. The UAG codon can translate into pyrrolysine (Pyl) in a similar manner. Distribution of stop codons within the genome of an organism is non-random and can correlate with GC-content . [ 10 ] [ 11 ] For example, the E. coli K-12 genome contains 2705 TAA (63%), 1257 TGA (29%), and 326 TAG (8%) stop codons (GC content 50.8%). [ 12 ] Also the substrates for the stop codons release factor 1 or release factor 2 are strongly correlated to the abundance of stop codons. [ 11 ] Large scale study of bacteria with a broad range of GC-contents shows that while the frequency of occurrence of TAA is negatively correlated to the GC-content and the frequency of occurrence of TGA is positively correlated to the GC-content, the frequency of occurrence of the TAG stop codon, which is often the minimally used stop codon in a genome, is not influenced by the GC-content. [ 13 ] Recognition of stop codons in bacteria have been associated with the so-called 'tripeptide anticodon', [ 14 ] a highly conserved amino acid motif in RF1 (PxT) and RF2 (SPF). Even though this is supported by structural studies, it was shown that the tripeptide anticodon hypothesis is an oversimplification. [ 15 ] Stop codons were historically given many different names, as they each corresponded to a distinct class of mutants that all behaved in a similar manner. These mutants were first isolated within bacteriophages ( T4 and lambda ), viruses that infect the bacteria Escherichia coli . Mutations in viral genes weakened their infectious ability, sometimes creating viruses that were able to infect and grow within only certain varieties of E. coli . 
They were the first set of nonsense mutations to be discovered, isolated by Richard H. Epstein and Charles Steinberg and named after their friend and graduate Caltech student Harris Bernstein, whose last name means " amber " in German ( cf. Bernstein ). [ 16 ] [ 17 ] [ 18 ] Viruses with amber mutations are characterized by their ability to infect only certain strains of bacteria, known as amber suppressors. These bacteria carry their own mutation that allows a recovery of function in the mutant viruses. For example, a mutation in the tRNA that recognizes the amber stop codon allows translation to "read through" the codon and produce a full-length protein, thereby recovering the normal form of the protein and "suppressing" the amber mutation. [ 19 ] Thus, amber mutants are an entire class of virus mutants that can grow in bacteria that contain amber suppressor mutations. Similar suppressors are known for ochre and opal stop codons as well. tRNA molecules carrying unnatural aminoacids have been designed to recognize the amber stop codon in bacterial RNA. This technology allows for incorporation of orthogonal aminoacids (such as p-azidophenylalanine) at specific locations of the target protein. It was the second stop codon mutation to be discovered. Reminiscent of the usual yellow-orange-brown color associated with amber, this second stop codon was given the name of " ochre " , an orange-reddish-brown mineral pigment. [ 17 ] Ochre mutant viruses had a property similar to amber mutants in that they recovered infectious ability within certain suppressor strains of bacteria. The set of ochre suppressors was distinct from amber suppressors, so ochre mutants were inferred to correspond to a different nucleotide triplet. Through a series of mutation experiments comparing these mutants with each other and other known amino acid codons, Sydney Brenner concluded that the amber and ochre mutations corresponded to the nucleotide triplets "UAG" and "UAA". [ 20 ] The third and last stop codon in the standard genetic code was discovered soon after, and corresponds to the nucleotide triplet "UGA". [ 21 ] To continue matching with the theme of colored minerals, the third nonsense codon came to be known as " opal " , which is a type of silica showing a variety of colors. [ 17 ] Nonsense mutations that created this premature stop codon were later called opal mutations or umber mutations. Nonsense mutations are changes in DNA sequence that introduce a premature stop codon, causing any resulting protein to be abnormally shortened. This often causes a loss of function in the protein, as critical parts of the amino acid chain are no longer assembled. Because of this terminology, stop codons have also been referred to as nonsense codons . A nonstop mutation , also called a stop-loss variant , is a point mutation that occurs within a stop codon. Nonstop mutations cause the continued translation of an mRNA strand into what should be an untranslated region. Most polypeptides resulting from a gene with a nonstop mutation lose their function due to their extreme length and the impact on normal folding. Nonstop mutations differ from nonsense mutations in that they do not create a stop codon but, instead, delete one. Nonstop mutations also differ from missense mutations , which are point mutations where a single nucleotide is changed to cause replacement by a different amino acid . 
Nonstop mutations have been linked with many inherited diseases including endocrine disorders, [ 22 ] eye disease, [ 23 ] and neurodevelopmental disorders . [ 24 ] [ 25 ] Hidden stops are non-stop codons that would be read as stop codons if they were frameshifted +1 or −1. These prematurely terminate translation if the corresponding frame-shift (such as due to a ribosomal RNA slip) occurs before the hidden stop. It is hypothesised that this decreases resource wastage on nonfunctional proteins and the production of potential cytotoxins . Researchers at Louisiana State University propose the ambush hypothesis , that hidden stops are selected for. Codons that can form hidden stops are used in genomes more frequently compared to synonymous codons that would otherwise code for the same amino acid. Unstable rRNA in an organism correlates with a higher frequency of hidden stops. [ 26 ] However, this hypothesis could not be validated with a larger data set. [ 27 ] Stop-codons and hidden stops together are collectively referred as stop-signals. Researchers at University of Memphis found that the ratios of the stop-signals on the three reading frames of a genome (referred to as translation stop-signals ratio or TSSR) of genetically related bacteria, despite their great differences in gene contents, are much alike. This nearly identical genomic-TSSR value of genetically related bacteria may suggest that bacterial genome expansion is limited by their unique stop-signals bias of that bacterial species. [ 28 ] Stop codon suppression or translational readthrough occurs when in translation a stop codon is interpreted as a sense codon, that is, when a (standard) amino acid is 'encoded' by the stop codon. Mutated tRNAs can be the cause of readthrough, but also certain nucleotide motifs close to the stop codon. Translational readthrough is very common in viruses and bacteria, and has also been found as a gene regulatory principle in humans, yeasts, bacteria and drosophila. [ 29 ] [ 30 ] This kind of endogenous translational readthrough constitutes a variation of the genetic code , because a stop codon codes for an amino acid. In the case of human malate dehydrogenase , the stop codon is read through with a frequency of about 4%. [ 31 ] The amino acid inserted at the stop codon depends on the identity of the stop codon itself: Gln, Tyr, and Lys have been found for the UAA and UAG codons, while Cys, Trp, and Arg for the UGA codon have been identified by mass spectrometry. [ 32 ] Extent of readthrough in mammals have widely variable extents, and can broadly diversify the proteome and affect cancer progression. [ 33 ] In 2010, when Craig Venter unveiled the first fully functioning, reproducing cell controlled by synthetic DNA he described how his team used frequent stop codons to create watermarks in RNA and DNA to help confirm the results were indeed synthetic (and not contaminated or otherwise), using it to encode authors' names and website addresses. [ 34 ]
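A small script makes the reading-frame dependence of stop codons concrete. The sketch below (Python; the sequence is invented, not a real gene) scans a DNA string codon by codon: in frame 0 it reports the in-frame stop codons that would terminate translation, while the same scan shifted by one base reports off-frame "hidden" stops of the kind discussed in the ambush hypothesis.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_stops(seq, frame=0):
    # return 0-based positions of stop codons read in the given frame of a DNA string
    seq = seq.upper()
    return [i for i in range(frame, len(seq) - 2, 3) if seq[i:i + 3] in STOP_CODONS]

cds = "ATGGCTTGACGTTAAGGCTAGATAA"   # made-up sequence for illustration only
print(find_stops(cds, frame=0))      # in-frame stops, where translation would terminate
print(find_stops(cds, frame=1))      # "hidden" stops seen only after a +1 frameshift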
https://en.wikipedia.org/wiki/Stop_codon
In telecommunications , a stop signal is a signal that marks the end of part of a transmission. This article incorporates public domain material from Federal Standard 1037C . General Services Administration . (in support of MIL-STD-188 ).
https://en.wikipedia.org/wiki/Stop_signal
In particle physics , a stop squark , symbol t͂ , is the superpartner of the top quark as predicted by supersymmetry (SUSY). It is a sfermion , which means it is a spin-0 boson ( scalar boson ). While the top quark is the heaviest known quark, the stop squark is actually often the lightest squark in many supersymmetry models. [ 1 ] The stop squark is a key ingredient of a wide range of SUSY models that address the hierarchy problem of the Standard Model (SM) in a natural way. A boson partner to the top quark would stabilize the Higgs boson mass against quadratically divergent quantum corrections, provided its mass is close to the electroweak symmetry breaking energy scale. If this were the case, the stop squark would be accessible at the Large Hadron Collider . In the generic R-parity conserving Minimal Supersymmetric Standard Model (MSSM) the scalar partners of right-handed and left-handed top quarks mix to form two stop mass eigenstates. Depending on the specific details of the SUSY model and the mass hierarchy of the sparticles, the stop might decay into a bottom quark and a chargino , with a subsequent decay of the chargino into the lightest neutralino (which is often the lightest supersymmetric particle ). Many searches for evidence of the stop squark have been performed by both the ATLAS and CMS experiments at the LHC but so far no signal has been discovered. [ 2 ] [ 3 ] In January 2019, the CMS Collaboration published findings excluding stop squarks with masses as large as 1230 GeV at 95% (2σ) confidence level. [ 4 ]
https://en.wikipedia.org/wiki/Stop_squark
Stoplogs are hydraulic engineering control elements that are used in floodgates to adjust the water level or discharge in a river , canal , or reservoir . Stoplogs are designed to cut off or stop flow through a conduit. They are typically long rectangular timber beams or boards that are placed on top of each other and dropped into premade slots inside a weir , gate , or channel . Present day, the process of adding and removing stoplogs is not manual, but done with hydraulic stoplog lifters and hoists. [ 1 ] Since the height of the barrier can only be adjusted through the addition and removal of stoplogs, finding a lighter and stronger material other than wood or concrete became a more desirable choice. [ 2 ] Other materials, including steel and composites, can be used as stoplogs as well. Stoplogs are sometimes confused with flashboards , as both elements are used in bulkhead or crest gates. Stoplogs are modular in nature, giving the operator of a gated structure the ability to control the water level in a channel by adding or removing individual stoplogs. A gate may make use of one or more logs . Each log is lowered horizontally into a space or bay between two grooved piers referred to as a stoplog check. [ 3 ] In larger gate structures, there will be multiple bays in which stoplogs can be placed to better control the discharge through the structure. Stoplogs are frequently used to temporarily block flow through a spillway or canal during routine maintenance. At other times stoplogs can be used over longer periods of times, such as when a field is flooded and stoplogs are being used in smaller gates in order to control the depth of water in fields. The logs may be left in and adjusted during the entire time that the field is submerged. In most cases, the boards used are subjected to high flow conditions. As individual stoplogs begin to age they are replaced. Typically small amounts of water will leak between individual logs. Stoplogs are typically used in structures where the removal, installation, and replacement of the logs is expected infrequently. When larger flows of water are passing through a stoplog gate, it can be difficult to remove or place individuals logs. Larger logs often require multiple people to position and lift the logs. Sometimes engineers will use these two terms interchangeably by calling a stoplog a flashboard. This is done in part because unlike many other types of bulkhead gates that are one continuous unit, both stoplogs and flashboards are modular and can be easily designed to hold back water at varying levels. However, most engineering texts and design firms differentiate between the two structures. Stoplogs are specialized bulkheads that are dropped into premade slots or guides in a channel or control structure, while flashboards are bulkheads that are placed on the crest or top of a channel wall or control structure. Flashboards are sometimes designed to break away under high flow conditions and thus to provide only a temporary diversion. In contrast, stoplogs are intended to be reused, and failure of a stoplog will result in an uncontrolled flow through a gate. Smaller stoplogs are sometimes referred to as handstops. Handstops are used in smaller gated structures, such as irrigation delivery ditches or the gates used to control water depth in larger submerged fields (such as rice fields ). They are designed to be easily operated by a single individual.
https://en.wikipedia.org/wiki/Stoplogs
Stopped-flow is one of a number of methods of studying the kinetics of reactions in solution. It is ideal for studying chemical reactions with a typical dead time on the order of 1 millisecond. In the simplest form of the technique, the solutions of two reactants are rapidly mixed by being forced through a mixing chamber, on emerging from which the mixed fluid passes through an optical observation cell. At some point in time, the flow is suddenly stopped, and the reaction is monitored using a suitable spectroscopic probe, such as absorbance , fluorescence or fluorescence polarization . The change in spectroscopic signal as a function of time is recorded, and the rate constants that define the reaction kinetics can then be obtained by fitting the data using a suitable model. Stopped-flow as an experimental technique was introduced by Britton Chance [ 1 ] [ 2 ] and extended by Quentin Gibson . [ 3 ] Other techniques, such as the temperature-jump method, are available for much faster processes. Stopped-flow spectrometry enables the solution-phase study of chemical kinetics for fast reactions, typically with half-lives in the millisecond range. Initially, it was primarily used for investigating enzyme-catalyzed reactions but quickly became a staple in biochemistry, biophysics, and chemistry laboratories for tracking rapid chemical processes. In its simplest form, a stopped-flow system rapidly mixes two solutions. Small volumes of each solution are driven into a high-efficiency mixer, initiating a fast reaction. The mixed solution then flows into the observation cell, displacing the remaining contents from the previous experiment or a washing step. The time it takes for the solution to travel from the mixing point to the observation point is referred to as the "dead time." The minimum injection volume depends on the size of the mixing cell. Once enough solution has been injected to completely replace the previous one, the system reaches a stationary state, and the flow is stopped. This can be achieved using a stop syringe and hard-stop assembly. At this point, the instrument sends a "start signal," or trigger, to the detector so the reaction can be observed. The timing of the trigger is software-controlled, allowing users to synchronize it with the flow stop or slightly earlier to confirm that the stationary state has been reached. The performance of a stopped-flow instrument is determined to a large extent by its dead time. This is defined as the time between the reactants mixing and the observation beginning, and is essentially the age of the reaction as the reaction mixture enters the observation cell. The limiting factors in the dead time of a particular stopped-flow apparatus are the efficiency of the mixer, the distance between the mixer and the cell, and the flow rate of the reaction mixture at the instant at which flow is stopped. Depending on the dimensions of the observation cell used, modern stopped-flow instruments are typically capable of achieving dead times of between 0.5-1 milliseconds. The simplest operating mode of a stopped-flow instrument is with a single-mixing configuration. Two reactants are used; these are loaded into syringes and are forced through the mixer and optical cell by the action of a pneumatically controlled ram which drives the syringe plungers. The reaction mixture emerging from the optical cell enters a third (stop) syringe, and flow ceases when the stop syringe plunger contacts a trigger switch. 
This simultaneously stops the flow and starts data acquisition. Normally, the two drive syringes are the same size, to achieve a mixing ratio of 1:1, but syringes of different sizes can be combined to obtain other mixing ratios up to 1:10 or 1:20. This so called asymmetric, or ratio mixing, is a common requirement in stopped-flow work. Sequential-, or double-, mixing is a variation of stopped-flow in which two reactants are forced through a pre-mixer into an ageing loop. After a specified delay period, the mixed fluid is forced through a separate mixer with a third reactant, and the subsequent reaction is studied as in single-mixing. Sequential-mixing is used to investigate the behavior of reaction intermediates or short-lived transients. A non-ozone-producing xenon arc lamp is commonly used for most general stopped-flow experiments above 250 nm. Broad-spectrum xenon lamps are highly versatile, allowing users to select virtually any wavelength for absorbance or fluorescence studies, making them ideal for applications such as monitoring structural changes in proteins over time. For far-UV applications, ozone-producing xenon arc lamps are available, but they require purging with pure nitrogen gas to prevent ozone buildup and optical degradation. Alternatively, mercury-xenon (Hg-Xe) lamps are well-suited for fluorescence experiments where the desired excitation wavelength corresponds to one of the intense mercury emission lines. LED light sources are another popular and inexpensive choice for stopped-flow experiments, especially when only a single or a few specific wavelengths are needed. Two syringes are filled with solutions that remain inert until mixed. These drive syringes are coupled and simultaneously emptied into a mixing device, either by a single drive ram (piston) or independent stepping motors. Ratio mixing is easily achieved by using syringes with different volume capacities, enabling precise control over the proportions of the combined solutions. For applications requiring sequential mixing—such as preincubating two reagents before introducing a third—two independent drive rams can be employed to allow for more complex mixing sequences. Once the two solutions are expelled from their syringes, they enter a mixing system designed to ensure thorough mixing, typically using a geometry like a T-mixer. This setup promotes turbulent flow, which achieves complete mixing. In contrast, laminar flow would result in the solutions flowing side by side, leading to incomplete mixing. The dead time is the interval required for solutions to travel from the mixing point to the observation point, representing the portion of reaction kinetics that cannot be observed. A shorter dead time enhances instrument performance and enables the study of a wider range of reactions. Typical dead times range from 0.5 to 1 millisecond, depending on the instrument design. [ 4 ] Dead time can be minimized by reducing the dimensions of the flow cell, but this approach has limitations due to the decreased signal-to-noise ratio caused by smaller observation windows and shorter pathlengths. The fluorescence quenching reaction between N-acetyltryptophanamide (NAT) and N-bromosuccinimide (NBS), as described by Peterman, is a commonly used method for measuring the dead time of a stopped-flow instrument. 
[ 5 ] The mixed reactants are delivered into an observation cell (flow cell) where the reaction can be monitored spectrophotometrically, typically using techniques such as absorbance , fluorescence , fluorescence anisotropy , or circular dichroism . It is increasingly common to combine several of these techniques for more comprehensive analysis. [ 6 ] Flow cell cartridges are commonly available with absorbance pathlengths ranging from 1 to 10 mm and shorter fluorescence pathlengths of around 2 mm. Short pathlengths are particularly important for fluorescence measurements to minimize the inner filter effect. Modern stopped-flow instruments are designed to accommodate a variety of flow cell sizes to suit different experimental needs. Once sufficient solution has been injected to fully replace the previous contents in the observation cell, the mixture flows into a third syringe, known as the stop syringe. This syringe hits a volume-calibrated hard-stop assembly, halting the flow and bringing the system to a stationary state. At this moment, the detector is triggered to begin observing the reaction. Stopped-flow spectrophotometers may function as stand-alone instruments, but they are often integrated into systems for circular dichroism (CD), absorbance, and/or fluorescence measurements, or equipped with various accessories to support specialized applications. Common stopped-flow accessories include: Other popular add-ons or accessories include: These accessories and configurations enhance the versatility of stopped-flow spectrophotometers, enabling their use across a broad range of applications in biochemistry, biophysics, and chemistry. The stopped-flow method evolved from the continuous-flow technique developed by Hamilton Hartridge and Francis Roughton [ 7 ] to study the binding of oxygen to hemoglobin. In the continuous-flow system, the reaction mixture was passed through a long tube, past an observation system (a simple colorimeter in 1923), and then discarded as waste. By moving the colorimeter along the tube and knowing the flow rate, Hartridge and Roughton were able to measure reaction progress at specific time intervals. This innovation was groundbreaking for its time, demonstrating that processes occurring within milliseconds could be studied using relatively simple equipment, despite the limitations of instruments requiring seconds for each measurement. However, the method had significant practical constraints, particularly the need for large quantities of reactants, making it suitable mainly for studies on abundant proteins like hemoglobin. Today, the continuous-flow approach is considered obsolete for practical purposes, having been replaced by more efficient and versatile techniques like stopped-flow spectrometry. The stopped-flow method relies on the presence of spectroscopic properties to monitor reactions in real time. When such properties are unavailable, quenched-flow provides an alternative by using conventional chemical analysis. [ 8 ] Instead of a mechanical stopping system, the reaction is halted by quenching , where the products are immediately stopped by freezing, chemical denaturation, or exposure to a denaturing light source. Similar to the continuous-flow method, the time between mixing and quenching can be adjusted by varying the length of the reaction tube. The pulsed quenched-flow method, introduced by Alan Fersht and Ross Jakes, [ 9 ] eliminates the need for a long reaction tube. 
In this approach, the reaction is initiated as in a stopped-flow experiment, but quenching is performed using a third syringe, which delivers the quenching agent at a precise, pre-set time after initiation. Quenched-flow has distinct advantages and disadvantages compared to stopped-flow. On the positive side, chemical analysis provides clear identification of the measured process, whereas spectroscopic signals in stopped-flow experiments may sometimes be ambiguous. However, quenched-flow is significantly more labor-intensive, as each time point must be measured individually. For example, in studies of nitrogenase catalysis from Klebsiella pneumoniae [ 10 ] , the agreement in half-times showed that absorbance at 420 nm corresponded to P i release, but obtaining this result through quenched-flow required 11 individual data points, highlighting the method's demanding nature. Stopped-flow is only one of multiple biophysical techniques used to study the kinetics of biological systems. For a broader perspective, Zheng et al. (2015) review various analytical methods for investigating biological interactions, including stopped-flow analysis, surface plasmon resonance spectroscopy, affinity chromatography, and capillary electrophoresis. The article provides an overview of each technique’s principles, applications, advantages, and limitations. [ 11 ]
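As described above, the rate constants come from fitting the recorded stopped-flow trace to a kinetic model. A minimal sketch of that step follows (Python with NumPy/SciPy; the rate constant, dead time and noise level are invented, not taken from any experiment): it simulates a single-exponential trace and recovers the observed rate constant by nonlinear least squares, and it also shows why amplitude lost during the dead time cannot be recovered by the fit.

import numpy as np
from scipy.optimize import curve_fit

# simulated trace: signal(t) = A_inf + dA * exp(-k_obs * (t + dead_time)); illustrative values only
k_true, dead_time = 150.0, 1.5e-3                       # s^-1, s
t = np.linspace(0.0, 0.05, 500)                         # time after the flow stops
rng = np.random.default_rng(0)
trace = 0.2 + 0.5 * np.exp(-k_true * (t + dead_time)) + rng.normal(0.0, 0.002, t.size)

def single_exp(t, a_inf, amp, k_obs):
    return a_inf + amp * np.exp(-k_obs * t)

(a_inf, amp, k_obs), _ = curve_fit(single_exp, t, trace, p0=(0.2, 0.4, 100.0))
print("fitted k_obs = %.0f per second" % k_obs)         # close to the simulated 150 s^-1
# the fitted amplitude is smaller than the true 0.5 by roughly exp(-k*dead_time),
# i.e. reaction progress during the dead time is simply never observed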
https://en.wikipedia.org/wiki/Stopped-flow
Stopping and Range of Ions in Matter ( SRIM ) is a group of computer programs which calculate interactions between ions and matter ; the core of SRIM is a program called Transport of Ions in Matter ( TRIM ). SRIM is popular in the ion implantation research and technology community, and also used widely in other branches of radiation material science . SRIM originated in 1980 as a DOS based program then called TRIM. [ 1 ] The DOS version was upgraded until 1998 and is still available for download. It will run on a Unix PC having a DOS emulator. SRIM-2000 requires a computer with any Windows operating system. The program may work with Unix or Macintosh based systems through Wine . [ 2 ] [ 3 ] The programs were developed by James F. Ziegler and Jochen P. Biersack around 1983 [ 1 ] [ 4 ] and are being continuously upgraded with the major changes occurring approximately every five years. [ 5 ] SRIM is based on a Monte Carlo simulation method , namely the binary collision approximation [ 6 ] [ 7 ] [ 8 ] with a random selection of the impact parameter of the next colliding ion. As the input parameters, it needs the ion type and energy (in the range 10 eV – 2 GeV) and the material of one or several target layers. As the output, it lists or plots the three-dimensional distribution of the ions in the solid and its parameters, such as penetration depth, its spread along the ion beam (called straggle) and perpendicular to it, all target atom cascades in the target are followed in detail; concentration of vacancies , sputtering rate, ionization, and phonon production in the target material; energy partitioning between the nuclear and electron losses , energy deposition rate; The programs are made so they can be interrupted at any time, and then resumed later. They have an easy-to-use user interface and built-in default parameters for all ions and materials. Another part of the software allows calculating the electronic stopping power of any ion in any material (including gaseous targets) based on an averaging parametrization of a vast range of experimental data. [ 4 ] Those features made SRIM immensely popular. However, it doesn't take account of the crystal structure nor dynamic composition changes in the material that severely limits its usefulness in some cases. Other approximations of the program include binary collision (i.e. the influence of neighboring atoms is neglected); the material is fully amorphous, i.e. description of ion channeling effects [ 9 ] is not possible, recombination of knocked off atoms (interstitials) with the vacancies, [ 10 ] an effect known to be very important in heat spikes in metals, [ 11 ] is neglected; There is no description of defect clustering and irradiation-induced amorphization, even though the former occurs in most materials [ 12 ] [ 13 ] and the latter is very important in semiconductors. [ 14 ] The electronic stopping power is an averaging fit to a large number of experiments. [ 4 ] and the interatomic potential as a universal form which is an averaging fit to quantum mechanical calculations, [ 4 ] [ 15 ] the target atom which reaches the surface can leave the surface (be sputtered ) if it has momentum and energy to pass the surface barrier, which is a simplifying assumption that does not work well e.g. at energies below the surface penetration energy [ 16 ] or if chemical effects are present. [ 17 ] The system is layered, i.e. simulation of materials with composition differences in 2D or 3D is not possible. 
The threshold displacement energy is a step function for each element, even though in reality it is crystal-direction dependent. [ 18 ]
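As a rough illustration of what a Monte Carlo range calculation produces (this is not SRIM's actual algorithm, and every number in it is arbitrary), the toy sketch below in Python slows a batch of ions down in fixed depth steps, drawing a random "nuclear" energy loss on top of a constant "electronic" loss per step, and then reports the mean penetration depth and its spread, i.e. the range and straggle quantities mentioned above.

import numpy as np

rng = np.random.default_rng(1)

def toy_range(e0=100.0, n_ions=5000, electronic_loss=0.5, nuclear_scale=2.0, step=1.0):
    # each ion loses a constant "electronic" amount plus a random "nuclear" amount
    # per depth step until its energy is exhausted (all units arbitrary)
    depths = np.empty(n_ions)
    for i in range(n_ions):
        e, depth = e0, 0.0
        while e > 0.0:
            e -= electronic_loss + rng.exponential(nuclear_scale)
            depth += step
        depths[i] = depth
    return depths.mean(), depths.std()

mean_range, straggle = toy_range()
print(mean_range, straggle)   # mean projected depth and its statistical spread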
https://en.wikipedia.org/wiki/Stopping_and_Range_of_Ions_in_Matter
In nuclear and materials physics , stopping power is the retarding force acting on charged particles , typically alpha and beta particles , due to interaction with matter , resulting in loss of particle kinetic energy . [ 1 ] [ 2 ] Stopping power is also interpreted as the rate at which a material absorbs the kinetic energy of a charged particle . Its application is important in a wide range of thermodynamic areas such as radiation protection , ion implantation and nuclear medicine . [ 3 ] Both charged and uncharged particles lose energy while passing through matter. Positive ions are considered in most cases below. The stopping power depends on the type and energy of the radiation and on the properties of the material it passes. Since the production of an ion pair (usually a positive ion and a (negative) electron) requires a fixed amount of energy (for example, 33.97 eV in dry air [ 4 ] : 305 ), the number of ionizations per path length is proportional to the stopping power. The stopping power of the material is numerically equal to the loss of energy E per unit path length, x : The minus sign makes S positive. The force usually increases toward the end of range and reaches a maximum, the Bragg peak , shortly before the energy drops to zero. The curve that describes the force as function of the material depth is called the Bragg curve . This is of great practical importance for radiation therapy . The equation above defines the linear stopping power which in the international system is expressed in N but is usually indicated in other units like M eV /mm or similar. If a substance is compared in gaseous and solid form, then the linear stopping powers of the two states are very different just because of the different density. One therefore often divides the force by the density of the material to obtain the mass stopping power which in the international system is expressed in m 4 / s 2 but is usually found in units like MeV/(mg/cm 2 ) or similar. The mass stopping power then depends only very little on the density of the material. The picture shows how the stopping power of 5.49 MeV alpha particles increases while the particle traverses air, until it reaches the maximum. This particular energy corresponds to that of the alpha particle radiation from naturally radioactive gas radon ( 222 Rn) which is present in the air in minute amounts. The mean range can be calculated by integrating the reciprocal stopping power over energy: [ 5 ] where: The deposited energy can be obtained by integrating the stopping power over the entire path length of the ion while it moves in the material. Electronic stopping refers to the slowing down of a projectile ion due to the inelastic collisions between bound electrons in the medium and the ion moving through it. The term inelastic is used to signify that energy is lost during the process (the collisions may result both in excitations of bound electrons of the medium, and in excitations of the electron cloud of the ion as well). Linear electronic stopping power is identical to unrestricted linear energy transfer . Instead of energy transfer, some models consider the electronic stopping power as momentum transfer between electron gas and energetic ion. This is consistent with the result of Bethe in the high energy range. 
[ 6 ] Since the number of collisions an ion experiences with electrons is large, and since the charge state of the ion while traversing the medium may change frequently, it is very difficult to describe all possible interactions for all possible ion charge states. Instead, the electronic stopping power is often given as a simple function of energy F e ( E ) {\displaystyle F_{e}(E)} which is an average taken over all energy loss processes for different charge states. It can be theoretically determined to an accuracy of a few % in the energy range above several hundred keV per nucleon from theoretical treatments, the best known being the Bethe formula . At energies lower than about 100 keV per nucleon, it becomes more difficult to determine the electronic stopping using analytical models. [ 7 ] Recently real-time Time-dependent density functional theory has been successfully used to accurately determine the electronic stopping for various ion-target systems over a wide range of energies including the low energy regime. [ 8 ] [ 9 ] Graphical presentations of experimental values of the electronic stopping power for many ions in many substances have been given by Paul. [ 10 ] The accuracy of various stopping tables has been determined using statistical comparisons. [ 11 ] Nuclear stopping power refers to the elastic collisions between the projectile ion and atoms in the sample (the established designation "nuclear" may be confusing since nuclear stopping is not due to nuclear forces, [ 12 ] but it is meant to note that this type of stopping involves the interaction of the ion with the nuclei in the target). If one knows the form of the repulsive potential energy V ( r ) {\displaystyle V(r)} between two atoms (see below), it is possible to calculate the nuclear stopping power F n ( E ) {\displaystyle F_{n}(E)} . This is done by determining the energy loss in binary collisions T ( E , b ) {\displaystyle T(E,b)} of two atoms interacting with the energy V ( r ) {\displaystyle V(r)} as a function of impact parameter b {\displaystyle b} In the stopping power figure shown above for aluminium ions in aluminum, nuclear stopping is negligible except at the lowest energy. Nuclear stopping increases when the mass of the ion increases. In the figure shown on the right, nuclear stopping is larger than electronic stopping at low energy. For very light ions slowing down in heavy materials, the nuclear stopping is weaker than the electronic at all energies. Especially in the field of radiation damage in detectors, the term " non-ionizing energy loss " (NIEL) is used as a term opposite to the linear energy transfer (LET), see e.g. Refs. [ 13 ] [ 14 ] [ 15 ] Since per definition nuclear stopping power does not involve electronic excitations, NIEL and nuclear stopping can be considered to be the same quantity in the absence of nuclear reactions. The total non-relativistic stopping power is therefore the sum of two terms: F ( E ) = F e ( E ) + F n ( E ) {\displaystyle F(E)=F_{e}(E)+F_{n}(E)} . Several semi-empirical stopping power formulas have been devised. The model given by Ziegler, Biersack and Littmark (the so-called "ZBL" stopping, see next chapter), [ 16 ] [ 17 ] implemented in different versions of the TRIM/SRIM codes, [ 18 ] is used most often today. Radiative stopping power , which is due to the emission of bremsstrahlung in the electric fields of the particles in the material traversed, must be considered at extremely high ion energies. 
[ 12 ] For electron projectiles, radiative stopping is always important. At high ion energies, there may also be energy losses due to nuclear reactions, but such processes are not normally described by stopping power. [ 12 ] Close to the surface of a solid target material, both nuclear and electronic stopping may lead to sputtering . In the beginning of the slowing-down process at high energies, the ion is slowed mainly by electronic stopping, and it moves almost in a straight path. When the ion has slowed sufficiently, the collisions with nuclei (the nuclear stopping) become more and more probable, finally dominating the slowing down. When atoms of the solid receive significant recoil energies when struck by the ion, they will be removed from their lattice positions, and produce a cascade of further collisions in the material. These collision cascades are the main cause of damage production during ion implantation in metals and semiconductors. When the energies of all atoms in the system have fallen below the threshold displacement energy , the production of new damage ceases, and the concept of nuclear stopping is no longer meaningful. The total amount of energy deposited by the nuclear collisions to atoms in the materials is called the nuclear deposited energy. The inset in the figure shows a typical range distribution of ions deposited in the solid. The case shown here might, for instance, be the slowing down of a 1 MeV silicon ion in silicon. The mean range for a 1 MeV ion is typically in the micrometer range. At very small distances between the nuclei the repulsive interaction can be regarded as essentially Coulombic. At greater distances, the electron clouds screen the nuclei from each other. Thus the repulsive potential can be described by multiplying the Coulombic repulsion between nuclei with a screening function φ(r/a), where φ(r/a) → 1 when r → 0. Here Z 1 {\displaystyle Z_{1}} and Z 2 {\displaystyle Z_{2}} are the charges of the interacting nuclei, and r the distance between them; a is the so-called screening parameter. A large number of different repulsive potentials and screening functions have been proposed over the years, some determined semi-empirically, others from theoretical calculations. A much used repulsive potential is the one given by Ziegler, Biersack and Littmark, the so-called ZBL repulsive potential. It has been constructed by fitting a universal screening function to theoretically obtained potentials calculated for a large variety of atom pairs. [ 16 ] The ZBL screening parameter and function have the forms and where x = r/a u , and a 0 is the Bohr atomic radius = 0.529 Å. The standard deviation of the fit of the universal ZBL repulsive potential to the theoretically calculated pair-specific potentials it is fit to is 18% above 2 eV. [ 16 ] Even more accurate repulsive potentials can be obtained from self-consistent total energy calculations using density-functional theory [ 19 ] or other quantum chemical methods such as Hartree-Fock methods. Nordlund, Lehtola and Hobler have compared potentials with these methods for all atom pairs from Z=1 (hydrogen) to Z=92 (uranium) and shown that pair-specific quantum chemical calculations can give repulsive potentials that are accurate to withing ~1% above 30 eV. 
[ 20 ] Moreover, they fit pair-specific "NLH" screening parameters for all atom pairs in the range Z 1 , Z 2 <= 92 in the functional form φ ( r ) = a 1 − b 1 r + a 2 − b 2 r + a 3 − b 3 r {\displaystyle \varphi (r)=a_{1}^{-b_{1}r}+a_{2}^{-b_{2}r}+a_{3}^{-b_{3}r}} where r is directly the interatomic distance, i.e. the screening parameter a =1. The pair-specific parameters are available in the supplemental material of Ref.: [ 20 ] [1] . In crystalline materials the ion may in some instances get "channeled", i.e., get focused into a channel between crystal planes where it experiences almost no collisions with nuclei. [ 21 ] Also, the electronic stopping power may be weaker in the channel. [ 22 ] Thus the nuclear and electronic stopping do not only depend on material type and density but also on its microscopic structure and cross-section. Computer simulation methods to calculate the motion of ions in a medium have been developed since the 1960s, and are now the dominant way of treating stopping power theoretically. The basic idea in them is to follow the movement of the ion in the medium by simulating the collisions with nuclei in the medium. The electronic stopping power is usually taken into account as a frictional force slowing down the ion. Conventional methods used to calculate ion ranges are based on the binary collision approximation (BCA). [ 23 ] In these methods the movement of ions in the implanted sample is treated as a succession of individual collisions between the recoil ion and atoms in the sample. For each individual collision the classical scattering integral is solved by numerical integration . The impact parameter p in the scattering integral is determined either from a stochastic distribution or in a way that takes into account the crystal structure of the sample. The former method is suitable only in simulations of implantation into amorphous materials, as it does not account for channeling. The best known BCA simulation program is TRIM/SRIM ( acronym for TRansport of Ions in Matter, in more recent versions called Stopping and Range of Ions in Matter), which is based on the ZBL electronic stopping and interatomic potential . [ 16 ] [ 18 ] [ 24 ] It has a very easy-to-use user interface, and has default parameters for all ions in all materials up to an ion energy of 1 GeV, which has made it immensely popular. However, it doesn't take account of the crystal structure, which severely limits its usefulness in many cases. Several BCA programs overcome this difficulty; some fairly well known are MARLOWE, [ 25 ] BCCRYS and crystal-TRIM. Although the BCA methods have been successfully used in describing many physical processes, they have some obstacles for describing the slowing down process of energetic ions realistically. Basic assumption that collisions are binary results in severe problems when trying to take multiple interactions into account. Also, in simulating crystalline materials the selection process of the next colliding lattice atom and the impact parameter p always involve several parameters which may not have perfectly well defined values, which may affect the results 10–20% even for quite reasonable-seeming choices of the parameter values. The best reliability in BCA is obtained by including multiple collisions in the calculations, which is not easy to do correctly. However, at least MARLOWE does this. 
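Returning to the ZBL repulsive potential introduced above: its universal screening function is a four-exponential fit in x = r/a_u. The sketch below (Python) uses the commonly quoted coefficients and the screening length a_u = 0.8854 a_0 / (Z1^0.23 + Z2^0.23); these values are reproduced here from the standard parameterization as an assumption and should be checked against Ziegler, Biersack and Littmark before serious use.

import numpy as np

A0 = 0.529177   # Bohr radius in angstrom

def zbl_screening(r, z1, z2):
    # phi(x) with x = r / a_u; four-exponential universal fit, coefficients as
    # commonly quoted for the ZBL potential (verify against the original reference)
    a_u = 0.8854 * A0 / (z1 ** 0.23 + z2 ** 0.23)
    x = r / a_u
    return (0.18175 * np.exp(-3.19980 * x)
            + 0.50986 * np.exp(-0.94229 * x)
            + 0.28022 * np.exp(-0.40290 * x)
            + 0.02817 * np.exp(-0.20162 * x))

def zbl_potential_ev(r, z1, z2):
    # screened Coulomb repulsion V(r) = (Z1*Z2*e^2 / (4*pi*eps0*r)) * phi(r/a_u), r in angstrom
    coulomb = 14.3996   # e^2 / (4*pi*eps0) in eV*angstrom
    return z1 * z2 * coulomb / r * zbl_screening(r, z1, z2)

print(zbl_potential_ev(1.0, 14, 14))   # e.g. a silicon-silicon pair at 1 angstrom separation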
A fundamentally more straightforward way to model multiple atomic collisions is provided by molecular dynamics (MD) simulations, in which the time evolution of a system of atoms is calculated by solving the equations of motion numerically. Special MD methods have been devised in which the number of interactions and atoms involved in MD simulations have been reduced in order to make them efficient enough for calculating ion ranges. [ 26 ] [ 27 ] The MD simulations this automatically describe the nuclear stopping power. The electronic stopping power can be readily included in molecular dynamics simulations, either as a frictional force [ 26 ] [ 28 ] [ 29 ] [ 30 ] [ 27 ] [ 31 ] [ 32 ] [ 33 ] or in a more advanced manner by also following the heating of the electronic systems and coupling the electronic and atomic degrees of freedom. [ 34 ] [ 35 ] [ 36 ] Beyond the maximum, stopping power decreases approximately like 1/v 2 with increasing particle velocity v , but after a minimum, it increases again. [ 37 ] A minimum ionizing particle (MIP) is a particle whose mean energy loss rate through matter is close to the minimum. In many practical cases, relativistic particles (e.g., cosmic-ray muons ) are minimum ionizing particles. An important property of all minimum ionizing particles is that β γ ≃ 3 {\displaystyle \beta \gamma \simeq 3} is approximately true where β {\displaystyle \beta } and γ {\displaystyle \gamma } are the usual relativistic kinematic quantities. Moreover, all of the MIPs have almost the same energy loss in the material which value is: − d E d x ≃ 2 MeV g cm − 2 {\displaystyle -{\frac {dE}{dx}}\simeq 2{\frac {\text{MeV}}{\mathrm {g} \,{\text{cm}}^{-2}}}} . [ 37 ]
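The mean-range integral given near the start of this article is easy to evaluate numerically once a stopping-power curve is available. The sketch below (Python with SciPy; the stopping-power function is a made-up placeholder, not data for any real ion and target pair) integrates the reciprocal stopping power from zero up to the initial energy to obtain a range.

import numpy as np
from scipy.integrate import quad

def stopping_power(e_mev):
    # placeholder S(E) in MeV/mm; a real calculation would use tabulated or
    # semi-empirical values (for example from SRIM) instead of this toy expression
    return 0.8 * np.sqrt(e_mev) + 0.05

def mean_range_mm(e0_mev):
    # R(E0) = integral from 0 to E0 of dE / S(E)
    r, _ = quad(lambda e: 1.0 / stopping_power(e), 0.0, e0_mev)
    return r

print(mean_range_mm(5.49))   # initial energy chosen to echo the alpha-particle example above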
https://en.wikipedia.org/wiki/Stopping_power_(particle_radiation)
The storage effect is a coexistence mechanism proposed in the ecological theory of species coexistence, which tries to explain how such a wide variety of similar species are able to coexist within the same ecological community or guild . The storage effect was originally proposed in the 1980s [ 1 ] to explain coexistence in diverse communities of coral reef fish, however it has since been generalized to cover a variety of ecological communities. [ 2 ] [ 3 ] [ 4 ] The theory proposes one way for multiple species to coexist: in a changing environment, no species can be the best under all conditions. [ 5 ] Instead, each species must have a unique response to varying environmental conditions, and a way of buffering against the effects of bad years. [ 5 ] The storage effect gets its name because each population "stores" the gains in good years or microhabitats (patches) to help it survive population losses in bad years or patches. [ 1 ] One strength of this theory is that, unlike most coexistence mechanisms, the storage effect can be measured and quantified, with units of per-capita growth rate (offspring per adult per generation). [ 5 ] The storage effect can be caused by both temporal and spatial variation. The temporal storage effect (often referred to as simply "the storage effect") occurs when species benefit from changes in year-to-year environmental patterns, [ 1 ] while the spatial storage effect occurs when species benefit from variation in microhabitats across a landscape. [ 6 ] For the storage effect to operate, it requires variation (i.e. fluctuations) in the environment [ 1 ] and thus can be termed a "fluctuation-dependent mechanism". This variation can come from a large degree of factors, including resource availability, temperature, and predation levels. However, for the storage effect to function, this variation must change the birth, survival, or recruitment rate of species from year to year (or patch to patch). [ 1 ] [ 6 ] For competing species within the same community to coexist, they have to meet one fundamental requirement: the impact of competition from a species on itself must exceed its competitive impact on other species. In other words, intraspecific competition must exceed interspecific competition . [ 7 ] For example, jackrabbits living in the same area compete for food and nesting grounds. Such competition within the same species is called intraspecific competition , which limits the growth of the species itself. Members from different species can also compete. For instance, jackrabbits and cottontail rabbits also compete for food and nesting grounds. Competition between different species is called interspecific competition , which limits the growth of other species. Stable coexistence occurs when any one species in the community limits its own growth more strongly than the growth of others. [ citation needed ] The storage effect mixes three essential ingredients to assemble a community of competing species that fulfill the requirement. They are 1) correlation between the quality of an environment and the amount of competition experienced by a population in that environment (i.e. covariance between environment and competition), 2) differences in species response to the same environment (i.e. species-specific environmental responses), and 3) the ability of a population to diminish the impact of competition under worsening environment (i.e. buffered population growth). 
[ 5 ] Each ingredient is described in detail below with an explanation why the combination of the three leads to species coexistence. The growth of a population can be strongly influenced by the environment it experiences. An environment consists of not only physical elements such as resource abundance, temperature, and level of physical disturbance, but also biological elements such as the abundance of natural enemies and mutualists . [ 8 ] Usually organisms reproduce more in a favorable environment (i.e. either during a good year, or within a good patch), build up their population densities, and lead themselves to a high level of competition due to this increasing crowding. [ 5 ] Such a trend means that higher quality environments usually correlate with a higher strength of competition experienced by the organisms in those environments. In short, a better environment results in stronger competition. [ 5 ] In statistics, such correlation means that there will be a non-zero covariance between the change of population density in response to the environment and that to the competition. That is why the first ingredient is called " covariance between environment and competition". [ citation needed ] Covariance between environment and competition suggests that organisms experience the strongest competition under their optimal environmental conditions because their populations grow most rapidly in those conditions. In nature, we often find that different species from the same community respond to the same conditions in distinctive manners. For example, plant species have different preferred levels of light and water availability, which affect their germination and physical growth rates. [ 2 ] Such differences in their response to the environment, which is called "species-specific environmental response," means no two species from a community will have the same best environment in a given year or a given patch. As a result, when a species is under its optimal environmental conditions and thus experiencing the strongest intraspecific competition , other species from the same community only experience the strongest interspecific competition coming from that species, but not the strongest intraspecific competition coming from themselves. A population can decline when environmental conditions worsen and when competition intensifies. If a species cannot limit the impact of competition in a hostile environment, its population will crash, and it will become locally extinct. [ 1 ] Marvelously, in nature organisms are often able to slow down the rate of population decline in a hostile environment by alleviating the impact of competition. In doing so, they are able to set up a lower limit on the rate of their population decline. [ 1 ] This phenomenon is called "buffered population growth", which occurs under a variety of situations. Under the temporal storage effect, it can be accomplished by the adults of a species having long life spans, which are relatively unaffected by environmental stressors. For example, an adult tree is unlikely to be killed by a few weeks of drought or a single night of freezing temperatures, whereas a seedling may not survive these conditions. [ 4 ] Even if all seedlings are killed by bad environmental conditions, the long-lived adults are able to keep the overall population from crashing. 
[ 1 ] Moreover, the adults often adopt strategies such as dormancy or hibernation in a hostile environment, which make them less sensitive to competition and allow them to buffer against the combined pressures of a hostile environment and competition from their rivals. As another example, buffered population growth is attained by annual plants with a persistent seed bank. [ 2 ] Thanks to these long-lived seeds, the entire population cannot be destroyed by a single bad year. Moreover, the seeds stay dormant under unfavorable environmental conditions, avoiding direct competition with rivals that are favored by the same environment, and thus diminishing the impact of competition in bad years. [ 2 ] There are some temporal situations in which buffered population growth is not expected to occur. Namely, when multiple generations do not overlap (as in Labord's chameleon ) or when adults have a high mortality rate (as in many aquatic insects, or some populations of the eastern fence lizard [ 9 ] ), buffered growth does not occur. Under the spatial storage effect, buffered population growth is generally automatic, because the effects of a detrimental microhabitat are experienced only by the individuals in that area, rather than by the population as a whole. [ 6 ] The combined effect of (1) covariance between environment and competition and (2) species-specific responses to the environment decouples the strongest intraspecific and interspecific competition experienced by a species. [ 5 ] Intraspecific competition is strongest when a species is favored by the environment, whereas interspecific competition is strongest when its rivals are favored. After this decoupling, buffered population growth limits the impact of interspecific competition when a species is not favored by the environment. As a consequence, the impact of intraspecific competition on the species favored by a particular environment exceeds the impact of interspecific competition on the species less favored by that environment. The fundamental requirement for species coexistence is thereby fulfilled, and the storage effect is able to maintain stable coexistence in a community of competing species. [ 5 ] For species to coexist in a community, all species must be able to recover from low density. [ 7 ] Not surprisingly, being a coexistence mechanism, the storage effect helps species when they become rare. It does so by making the abundant species' effect on itself greater than its effect on the rare species. [ 5 ] The difference between species' responses to environmental conditions means that a rare species' optimal environment is not the same as that of its competitors. Under these conditions, the rare species will experience low levels of interspecific competition . Because the rare species is itself rare, it will also experience little impact from intraspecific competition, even at its highest possible levels of intraspecific competition. Free from the impact of competition, the rare species is able to make gains in these good years or patches. [ 1 ] Moreover, thanks to buffered population growth, the rare species is able to survive the bad years or patches by "storing" the gains from the good years or patches. As a result, the population of a rare species is able to grow due to the storage effect. One natural outcome of the covariance between environment and competition is that species at very low densities will show more fluctuation in their recruitment rates than species at normal densities.
[ 10 ] This occurs because in good environments, species at high densities will often experience a large amount of crowding by members of the same species, thus limiting the benefits of good years/patches and making good years/patches more similar to bad years/patches. Low-density species are rarely able to cause crowding, thus allowing significantly increased fitness in good years/patches. [ 1 ] Since fluctuation in recruitment rate is an indicator of covariance between environment and competition, and since species-specific environmental responses and buffered population growth can normally be assumed in nature, finding much stronger fluctuation in the recruitment rates of rare and low-density species provides a strong indication that the storage effect is operating within a community. [ 4 ] [ 10 ] The storage effect is not itself a model for population growth (such as the Lotka–Volterra equation ), but is an effect that appears in non-additive models of population growth. [ 5 ] Thus, the equations shown below will work for any arbitrary model of population growth, but will only be as accurate as the original model. The derivation below is taken from Chesson 1994. [ 5 ] It is a derivation of the temporal storage effect, but is very similar to that of the spatial storage effect. The fitness of an individual, as well as its expected growth rate , can be measured in terms of the average number of offspring it will leave during its lifetime. This parameter, r(t), is a function of both environmental factors, e(t), and how much the organism must compete with other individuals (both of its own species and of different species), c(t). Thus,

r(t) = g(e(t), c(t))

where g is an arbitrary function for growth rate. Throughout the article, subscripts are occasionally used to represent functions of a particular species (e.g. r_j(t) is the fitness of species j). It is assumed that there must be some values e* and c*, such that g(e*, c*) = 0, representing a zero-population-growth equilibrium. These values need not be unique, but for every e*, there is a unique c*. For ease of calculation, standard parameters E(t) and C(t) are defined, such that

E(t) = g(e(t), c*)
C(t) = −g(e*, c(t))

Both E and C represent the effect of deviations in environmental response from equilibrium. E represents the effect that varying environmental conditions (e.g. rainfall patterns, temperature, food availability, etc.) have on fitness, in the absence of abnormal competitive effects. For the storage effect to occur, the environmental response of each species must be unique (i.e. E_j(t) ≠ E_i(t) when j ≠ i). C(t) represents how much average fitness is lowered as a result of competition . For example, if there is more rain during a given year, E(t) will likely increase. If more plants begin to bloom, and thus compete for that rain, then C(t) will increase as well. Because e* and c* are not unique, E(t) and C(t) are not unique, and thus one should choose them as conveniently as possible. Under most conditions (see Chesson 1994 [ 5 ] ), r(t) can be approximated as

r(t) ≈ E(t) − C(t) + γ E(t) C(t)

where γ represents the nonadditivity of growth rates. If γ = 0 (known as additivity ), the impact of competition on fitness does not change with the environment. If γ > 0 ( superadditivity ), the adverse effects of competition during a bad year are relatively worse than during a good year. In other words, a population suffers more from competition in bad years than in good years.
If γ < 0 ( subadditivity , or buffered population growth), the harm done by competition during a bad year is relatively minor compared with that during a good year. In other words, the population is able to diminish the impact of competition as the environment worsens. As stated above, for the storage effect to contribute to species coexistence, we must have buffered population growth (i.e. it must be the case that γ < 0). The long-term average of the above equation is

r̄ = Ē − C̄ + γ ⟨EC⟩

where an overbar and the brackets ⟨·⟩ both denote a long-term time average. Because ⟨EC⟩ = Cov(E, C) + Ē·C̄, in environments with sufficient variation relative to mean effects this can be approximated as

r̄ ≈ Ē − C̄ + γ Cov(E, C)

For any effect to act as a coexistence mechanism, it must boost the average fitness of individuals when they are at below-normal population density. Otherwise, a species at low density (known as an 'invader') will continue to dwindle, and this negative feedback will cause its extinction. When a species is at equilibrium (known as a 'resident'), its average long-term fitness must be 0. For a species to recover from low density, its average fitness must be greater than 0. For the remainder of the text, we refer to functions of the invader with the subscript i, and to the residents with the subscript r. Using the fact that each resident's long-term growth rate is zero, the long-term average growth rate of an invader is often written in the form

r̄_i ≈ (Ē_i − C̄_i) − (1/(n−1)) Σ_{r≠i} q_ir (Ē_r − C̄_r) + ΔI

where the sum runs over the n − 1 resident species, and ΔI, the storage effect, is

ΔI = γ [ Cov(E_i, C_i) − (1/(n−1)) Σ_{r≠i} q_ir Cov(E_r, C_r) ]

In this equation, q_ir tells us how much the competition experienced by r affects the competition experienced by i. The biological meaning of the storage effect is expressed in the mathematical form of ΔI. The first term of the expression is the covariance between environment and competition (Cov(E, C)), scaled by a factor representing buffered population growth (γ). The difference between the first term and the second term represents the difference in responses to the environment between the invader and the sum of the residents, scaled by the effect each resident has on the invader (q_ir). [ 5 ] Recent work has extended what is known about the storage effect to include apparent competition (i.e., competition mediated through a shared predator). These models showed that generalist predators can undermine the benefits of the storage effect that arise from competition. [ 11 ] This occurs because generalist predators depress population levels by eating individuals. When this happens, there are fewer individuals competing for resources. As a result, relatively abundant species are less constrained by competition for resources in favorable years (i.e., the covariance between environment and competition is weakened), and therefore the storage effect from competition is weakened. This conclusion follows the general trend that the introduction of a generalist predator will often weaken other competition-based coexistence mechanisms, which can result in competitive exclusion . [ 12 ] [ 13 ] Additionally, certain types of predators can produce a storage effect from predation. This effect has been shown for frequency-dependent predators , which are more likely to attack prey that are abundant, [ 14 ] and for generalist pathogens, which cause outbreaks when prey are abundant. [ 15 ] When prey species are especially numerous and active, frequency-dependent predators become more active and pathogen outbreaks become more severe (i.e., there is a positive covariance between the environment and predation, analogous to the covariance between the environment and competition). As a result, abundant species are limited during their best years by high predation – an effect that is analogous to the storage effect from competition.
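The temporal storage effect can be illustrated with a small simulation of a lottery model of the kind originally used for reef fish: adults survive each year with probability 1 − δ, and the fraction δ of sites they vacate is won by juveniles in proportion to each species' fluctuating, species-specific birth rate. The sketch below is an illustration rather than a calculation from the sources above; the birth-rate distribution, δ values and run length are arbitrary choices. Long-lived adults (small δ) supply the buffered population growth, and setting δ = 1 removes it.

```python
# Minimal sketch of a two-species lottery model (hypothetical parameters).
# Species 0 is held rare (the "invader"); species 1 is the established "resident".
import numpy as np

def invader_growth_rate(delta, years=20000, seed=0):
    """Average log growth rate of the rare species under adult turnover rate delta."""
    rng = np.random.default_rng(seed)
    eps = 1e-6                                    # invader kept at negligible density
    logs = np.empty(years)
    for t in range(years):
        n = np.array([eps, 1.0 - eps])            # fractions of a fixed pool of sites
        b = np.exp(rng.normal(0.0, 1.0, size=2))  # independent, species-specific birth rates
        recruits = b * n / np.sum(b * n)          # lottery over the freed fraction of sites
        n_next = (1.0 - delta) * n + delta * recruits
        logs[t] = np.log(n_next[0] / n[0])
    return logs.mean()

print("long-lived adults (delta = 0.1):", round(invader_growth_rate(0.1), 3))
print("no buffering      (delta = 1.0):", round(invader_growth_rate(1.0), 3))
```

With δ = 0.1 the invader's average log growth rate comes out clearly positive, while with δ = 1 it sits near zero: without overlapping generations there is nothing to store the gains of good years, and the low-density advantage disappears.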
The first empirical study that tested the requirements of the storage effect was done by Pake and Venable, [ 2 ] who looked at three desert annual plants . They experimentally manipulated density and water availability over a two-year period, and found that fitness and germination rates varied greatly from year to year and over different environmental conditions. This showed that each species has a unique environmental response, and implied that there is likely a covariance between environment and competition. This, combined with the buffered population growth provided by a long-lived seed bank , showed that a temporal storage effect was probably an important factor in mediating coexistence. This study was also important because it showed that variation in germination conditions could be a major factor promoting species coexistence. [ 2 ] The first attempt to quantify the temporal storage effect was made by Carla Cáceres in 1997. [ 3 ] Using 30 years of water-column data from Oneida Lake , New York, she studied the effect of the storage effect on two species of plankton ( Daphnia galeata mendotae and D. pulicaria ). These species of plankton lay diapausing eggs which, much like the seeds of annual plants, lie dormant in the sediment for many years before hatching. Cáceres found that the sizes of reproductive bouts were fairly uncorrelated between the two species. She also found that, in the absence of the storage effect, D. galeata mendotae would have gone extinct. She was unable to measure certain important parameters (such as the rate of egg predation), but found that her results were robust to a wide range of estimates. [ 3 ] The first test of the spatial storage effect was done by Sears and Chesson [ 10 ] in the desert area east of Portal, Arizona. Using a common neighbor-removal experiment, they examined whether coexistence between two annual plants, Erodium cicutarium and Phacelia popeii, was due to the spatial storage effect or to resource partitioning. The storage effect was quantified in terms of the number of inflorescences (a proxy for fitness) instead of the actual population growth rate. They found that E. cicutarium was able to outcompete P. popeii in many situations and, in the absence of the storage effect, would likely competitively exclude P. popeii. However, they found a very strong difference in the covariance between environment and competition, which showed that some of the most favorable areas for P. popeii (the rare species) were unfavorable to E. cicutarium (the common species). This suggests that P. popeii is able to avoid strong interspecific competition in some good patches, and that this may be enough to compensate for losses in areas favorable to E. cicutarium. [ 10 ] Colleen Kelly and colleagues have used congeneric species pairs to examine storage dynamics in situations where species similarity is a natural outcome of relatedness and is not dependent on researcher-based estimates. Initial studies were of 12 species of trees coexisting in a tropical deciduous forest at the Chamela Biological Station in Jalisco , Mexico. [ 4 ] [ 16 ] For each of the 12 species they examined age structure (calculated from size and species-specific growth rate), and found that recruitment of young trees varies from year to year. When the species were grouped into 6 congeneric pairs, the locally rarer species of each pair consistently had a more irregular age distribution than the more common species.
This finding strongly suggests that, between closely competing tree species, the rarer species experiences stronger recruitment fluctuation than the commoner species. Such a difference in recruitment fluctuation, combined with evidence of greater competitive ability in the rarer species of each pair, indicates a difference in the covariance between environment and competition between rare and common species. Since species-specific environmental responses and buffered population growth can be naturally assumed, their finding strongly suggests that the storage effect operates in this tropical deciduous forest to maintain the coexistence of different tree species. Further work with these species has shown that the storage dynamic is a pairwise competitive relationship between congeneric species pairs, possibly extending as successively nested pairs within a genus. [ 17 ] Angert and colleagues demonstrated the temporal storage effect in the desert annual plant community on Tumamoc Hill , Arizona. [ 18 ] Previous studies [ 19 ] [ 20 ] had shown that the annual plants in that community exhibited a trade-off between growth rate (a proxy for competitive ability) and water use efficiency (a proxy for drought tolerance). As a result, some plants grew better during wet years, while others grew better during dry years. This, combined with variation in germination rates, produced an overall community-average storage effect of 0.103. In other words, the storage effect is expected to help the population of any species at low density to increase, on average, by 10.3% each generation, until it recovers from low density. [ 18 ]
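To put that figure in perspective (the following arithmetic is purely illustrative and is not part of the study), a per-capita advantage of 0.103 per generation compounds multiplicatively, so a population knocked down to 1% of its usual size would need on the order of fifty generations of such growth to recover.

```python
# Illustrative only: generations needed to grow 100-fold at 10.3% per generation.
import math
print(round(math.log(100) / math.log(1.103)))   # ≈ 47 generations
```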
https://en.wikipedia.org/wiki/Storage_effect
A storage organ is a part of a plant specifically modified for storage of energy (generally in the form of carbohydrates ) or water. [ 1 ] Storage organs often grow underground, where they are better protected from attack by herbivores . Plants that have an underground storage organ are called geophytes in the Raunkiær plant life-form classification system . [ 2 ] [ 3 ] Storage organs often, but not always, act as perennating organs which enable plants to survive adverse conditions (such as cold, excessive heat, lack of light or drought). Storage organs may act as perennating organs ('perennating' as in perennial , meaning "through the year", used in the sense of continuing beyond the year and in due course lasting for multiple years). These are used by plants to survive adverse periods in the plant's life-cycle (e.g. caused by cold, excessive heat, lack of light or drought). During these periods, parts of the plant die and then, when conditions become favourable again, re-growth occurs from buds in the perennating organs. For example, geophytes growing in woodland under deciduous trees (e.g. bluebells , trilliums ) die back to underground storage organs during summer, when tree leaf cover restricts light and water is less available. [ citation needed ] However, perennating organs need not be storage organs. After losing their leaves, deciduous trees grow them again from 'resting buds', which are the perennating organs of phanerophytes in the Raunkiær classification , but which do not specifically act as storage organs. Equally, storage organs need not be perennating organs. Many succulents have leaves adapted for water storage, which they retain in adverse conditions. In common parlance, underground storage organs may be generically called roots, tubers , or bulbs, but to the botanist there is a more specific technical nomenclature distinguishing, among other organs, true bulbs, stem tubers, root tubers, storage roots, pseudobulbs and caudices. Some of these, particularly pseudobulbs and caudices, may occur wholly or partially above ground. Intermediates and combinations of these organ types are also found, making classification difficult. As an example of an intermediate, the tuber of Cyclamen arises from the stem of the seedling, which forms the junction of the roots and stem of the mature plant. In some species (e.g. Cyclamen coum ) roots come from the bottom of the tuber, suggesting that it is a stem tuber; in others (e.g. Cyclamen hederifolium ) roots come largely from the top of the tuber, suggesting that it is a root tuber. [ 6 ] As an example of a combination, juno irises have both bulbs and storage roots. [ 7 ] Underground storage organs used for food may be generically called root vegetables , although this phrase should not be taken to imply that the class only includes true roots. Succulents are plants which are adapted to withstand periods of drought by their ability to store moisture in specialized storage organs. [ 8 ]
https://en.wikipedia.org/wiki/Storage_organ
Storage tanks are containers that hold liquids or compressed gases. The term can be used for reservoirs (artificial lakes and ponds) and for manufactured containers. The usage of the word "tank" for reservoirs is uncommon in American English but is moderately common in British English . In other countries, the term tends to refer only to artificial containers. In the U.S., storage tanks operate under no (or very little) pressure , distinguishing them from pressure vessels . Tanks can be used to hold materials as diverse as milk , water , waste , petroleum , chemicals , and other hazardous materials , all while meeting industry standards and regulations. [ 1 ] Storage tanks are available in many shapes: vertical and horizontal cylindrical; open top and closed top; flat bottom, cone bottom, slope bottom and dish bottom. Large tanks tend to be vertical cylindrical with flat bottoms and a fixed frangible or floating roof, or to have rounded corners that transition from the vertical side wall to the bottom profile, in order to withstand the hydrostatic pressure of the contents. Tanks built below ground level are sometimes used and are referred to as underground storage tanks (USTs). Reservoirs can be covered, in which case they may be called covered or underground storage tanks or reservoirs. Covered water tanks are common in urban areas. Tanks can be mounted on a lorry or an articulated lorry trailer. The resulting vehicle is called a road tanker (or simply tanker; tank truck in American English). Tank cars are tanks mounted on goods wagons for rail transportation. The word "tank" originally meant "artificial lake" and came from India, perhaps via Portuguese tanque . While steel and concrete remain among the most popular choices for tanks, glass-reinforced plastic , thermoplastic and polyethylene tanks are increasing in popularity. They offer lower build costs and greater chemical resistance, especially for storage of specialty chemicals . There are several relevant standards, such as British Standard 4994 (1989), DVS 2205, and ASME RTP-1, which give guidance on wall thickness, quality-control procedures, testing procedures, accreditation, fabrication and design criteria for the final product. Some storage tanks need a floating roof in addition to, or in lieu of, the fixed roof and structure. This floating roof rises and falls with the liquid level inside the tank, thereby decreasing the vapour space above the liquid level. Floating roofs are considered a safety requirement as well as a pollution prevention measure for many industries, including petroleum refining. In order for level measurements from the tank to be converted into volumes, the tank typically has a capacity table created using appropriate standards. [ 3 ] Each row of the capacity table contains a fill level and the corresponding volume (along with other related data). In the U.S., metal tanks in contact with soil and containing petroleum products must be protected from corrosion to prevent escape of the product into the environment. [ 4 ] The most effective and common corrosion control technique for steel in contact with soil is cathodic protection . Outside the United States, and at some locations in the United States, elevated tank support foundations with a sand-bitumen mix finish are often used. This type of foundation keeps the tank bottom plates free from water, thereby preventing corrosion. In addition to their design and application, maintenance and inspection of storage tanks play a critical role in ensuring their safety and efficiency.
Regular inspection is essential for identifying potential issues such as corrosion, leaks, structural weaknesses, and compliance with environmental regulations. These inspections can vary in frequency and detail depending on the type of tank, the material stored, and the regulatory requirements applicable in the location where the tank is used. For instance, tanks storing hazardous materials may require more frequent and thorough inspections compared to those used for non-hazardous materials. Maintenance protocols, including cleaning, repairs, and preventative measures, are equally important to prolong the lifespan of the tanks and prevent environmental contamination or accidents. Advanced technologies, such as remote sensing , ultrasonic testing , and robotic inspection tools, like remotely-operated drones , are increasingly being employed to enhance the effectiveness and safety of these inspection processes. Understanding and implementing appropriate inspection and maintenance schedules is paramount for operators of storage tanks to ensure operational reliability and adherence to safety standards. [ 5 ] Several environmental regulations apply to the design and operation of storage tanks, often depending on the nature of the fluid contained within. [ 1 ] In the U.S., air emissions are typically required to undergo air quality permitting under the federal Clean Air Act . Quantification of potential emissions from tanks for permitting purposes is most often accomplished by applying emission equations published in chapter 7.1 of the Environmental Protection Agency 's AP-42 (Compilation of Air Pollutant Emissions Factors from Stationary Sources) . Since most liquids can spill or seep through even the smallest opening, special consideration must be made for their safe and secure handling. This usually involves building a bunding , or containment dike, around the tank, so that any leakage may be safely contained. An atmospheric tank is a container for holding a liquid at atmospheric pressure . The major design codes for welded atmospheric tanks are API 650 and API 620. API 653 is used for analysis of in-service storage tanks. In Europe the applicable design code is EN 14015, which uses load cases from Eurocode 3 (EN 1993), part 4-2. In the case of a liquefied gas such as hydrogen or chlorine , or a compressed gas such as compressed natural gas or MAPP , the storage tank must be made to withstand the sometimes-considerable pressures exerted by the contents. These tanks, being pressure vessels , are sometimes excluded from the class of "tanks". Container tanks for handling liquids during transportation are often designed to handle varying degrees of pressure. One form of seasonal thermal energy storage (STES) is the use of large surface water tanks that are insulated and then covered with earth berms to enable storage of seasonal solar-thermal heat that is collected primarily in the summer for all-year heating. [ 6 ] A related technology has become widespread in Danish district heating systems. The thermal storage medium is gravel and water in large, shallow, lined pits that are covered with insulation, soil and grass. [ 7 ] Ice and slush tanks are used for short-term storage of cold for use in air conditioning, allowing refrigeration equipment to be run at night when electric power is less expensive, yet provide cooling during hot daytime hours. 
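The capacity table mentioned earlier in this article simply maps fill level to contained volume. Purely as an illustration of the idea (not the method of any particular calibration standard, and ignoring real-world corrections such as shell temperature, tank deformation and internal fittings), the sketch below tabulates level against volume for an idealized flat-bottomed vertical cylindrical tank.

```python
# Toy capacity (strapping) table for an ideal vertical cylindrical tank.
# Real tables come from calibration against the applicable measurement standards.
import math

def capacity_table(diameter_m, height_m, step_m):
    area = math.pi * (diameter_m / 2.0) ** 2      # constant cross-sectional area
    rows, level = [], 0.0
    while level <= height_m + 1e-9:
        rows.append((level, area * level))        # (fill level in m, volume in m^3)
        level += step_m
    return rows

for level, volume in capacity_table(diameter_m=10.0, height_m=12.0, step_m=2.0):
    print(f"{level:5.1f} m  ->  {volume:8.1f} m^3")
```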
A bulk milk cooling tank is a storage tank located in a dairy farm's milkhouse, used for cooling and holding fluid milk at a low temperature until it can be picked up by a milk hauler. Since milk leaves the udder at approximately 35 °C, milk tanks are needed to rapidly cool fresh raw milk to a storage temperature of 4 °C to 6 °C, thereby slowing the growth of microorganisms. [ 8 ] Bulk milk cooling tanks are usually made of stainless steel and are constructed to sanitary standards. They must be cleaned after each milk collection. The milk cooling tank may be the property of the farmer, or may be rented by the farmer from a dairy plant. A septic tank is part of a small-scale sewage treatment system often referred to as a septic system. Septic systems are commonly used to treat wastewater from homes and small businesses in rural and suburban areas. [ 9 ] A septic system consists of the tank and a septic drain field . Waste water enters the tank, where solids settle and scum floats. Anaerobic digestion occurs in the settled solids, reducing their volume. The water released by the system is normally absorbed by the drain field without needing any further treatment. While not strictly "storage" tanks, mobile tanks share many of the same features as storage tanks. In addition, they must be designed to deal with heavy sloshing loads and the risk of collision or other accident. Examples include ocean-going oil tankers and LNG carriers ; railroad tank cars ; and road tankers . Also included are the holding tanks that store toilet waste on RVs , boats and aircraft. Tanks for crude oil and oil-based fuels are chosen according to the flash point of the material. If the material is not a liquefied gas, such as LPG , tanks are atmospheric and generally come in two types: fixed-roof tanks and floating-roof tanks. Liquefied gases (such as LPG, butane , propylene , etc.) may be stored in spherical tanks (or Horton spheres). Tanks in a refinery are also assigned typical classification codes. Chemical tanks are storage containers for chemicals and are widely used within the chemical industry . They come in a variety of sizes and shapes, and are used for static storage and transport of both raw materials and finished chemical products. A chemical tank is of necessity designed for a specific chemical. Chemicals have variable corrosion potentials, so the size and features of chemical tanks are diverse. Chemical resistance is usually the first priority in designing chemical tanks. Selected materials have to be as resistant to the chemical stored as design and economics allow. This includes the selection of smaller features such as gaskets and plumbing materials. Other parameters to be taken into consideration are heat, cold, vacuum, pressure, exothermic reactivity and the inherently aggressive nature of the chemical ( acids , caustics, etc.). Secondary containment is a back-up strategy sometimes used to mitigate potential failure of the primary container. The typical profile of a vessel with secondary containment is a primary vessel with an exterior shell encompassing it and providing at least 100% of its capacity. Secondary vessels are available in polyethylene , fiberglass and metal. Secondary containment tank systems are suggested for all aggressive chemicals. There have been numerous catastrophic failures of storage tanks, one of the most notorious being that which occurred at Boston on January 14, 1919. The large tank involved had been filled only eight times when it failed, and the resulting wave of molasses killed 21 people in the vicinity.
The Boston molasses disaster was caused by poor design and construction, with a wall too thin to bear the repeated loads from the contents. The tank had not been tested before use by filling it with water, and it was also poorly riveted. The owner of the tank, United States Industrial Alcohol Company , paid out $300,000 (nearly $4 million in 2012 ) in compensation to the victims or their relatives. There have been many other tank failures since then, often caused by faulty welding or by sub-standard steel . Newer designs have at least fixed some of the more common issues with tank seals. [ 10 ] [ 11 ] Perhaps surprisingly, storage tanks also present a hazard when empty. If they have been used to hold oil or oil products such as gasoline , the atmosphere in the tanks may be highly explosive as the space fills with hydrocarbons . If welding operations are then started, sparks can easily ignite the vapour, with disastrous results for the welders. The problem is similar to that of empty bunkers on tanker ships , which are now required to use an inert gas blanket to prevent explosive atmospheres building up from residues.
https://en.wikipedia.org/wiki/Storage_tank
A stored energy printer is a computer printer that uses the energy stored in a spring or magnetic field to push a hammer into a ribbon to print a dot. It is a type of impact printer. [ 1 ] As compared to dot matrix printers that print a single column of dots at a time, this printer generally creates an entire line of dots at a time. Therefore, it is also known as a line matrix printer . [ clarification needed ] An advantage of this technology is its low running costs: printer hammers have a lifespan of millions to billions of dots, and ink is transferred using conventional typewriter-style ribbons. The most common printer to use this technology was the line-matrix printer made by Printronix and its licensees. In this type, the hammers are machined from an oval of magnetically permeable stainless steel, and the hammer-tips form vertical rows. The hammers are arranged as a "hammerbank"; a type of comb that oscillates horizontally to produce a line of dots. The original technology was patented by Printronix in 1974. [ 2 ] The tungsten carbide hammer is brazed to the center-top of a leaf spring . The top of this stiff spring is initially held back by a magnetic pole-piece. To produce a dot, an electromagnetic coil wrapped around the pole-piece neutralizes the magnetic field, causing the spring to release the hammer and hit the ribbon and the paper behind it, leaving behind the printed dot. [ citation needed ] Character matrix printers have also been produced. [ 2 ] Recent [ when? ] designs have performed complex optimizations in the magnetic circuit, and eliminated unwanted resonances in the spring. The result was a near-doubling of speed. Other improvements include the use of electrical discharge machining to produce complex, three-dimensional hammers that trade-off the magnetic circuit, mechanical resonances, and printing speed. Normal wear occurs on the pole piece when the spring rubs against it as it returns. This eventually requires the pole pieces to be reground and recertified. However, using hexavalent chrome plating on the pole-piece, combined with careful design, [ specify ] more than doubles speeds and improves life-span, allowing approximately a billion impressions per hammer. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Stored_energy_printer
The Stork enamine alkylation involves the addition of an enamine to a Michael acceptor (e.g., an α,β -unsaturated carbonyl compound ) or another electrophilic alkylation reagent to give an alkylated iminium product, which is hydrolyzed by dilute aqueous acid to give the alkylated ketone or aldehyde. [ 1 ] Since enamines are generally produced from ketones or aldehydes, this overall process (known as the Stork enamine synthesis ) constitutes a selective monoalkylation of a ketone or aldehyde, a process that may be difficult to achieve directly. The Stork enamine synthesis thus proceeds in three stages: formation of the enamine from the carbonyl compound, alkylation of the enamine by the electrophile, and hydrolysis of the resulting iminium salt back to the carbonyl compound. The reaction also applies to acyl halides as electrophiles, which results in the formation of 1,3- diketones ( Stork acylation ). [ 2 ] It is also effective for activated sp3 alkyl electrophiles, including benzylic, allylic/propargylic, α-carbonyl (e.g., bromoacetone ), and α-alkoxy (e.g., methoxymethyl chloride ) alkyl halides. However, nonactivated alkyl halides, including methyl and other primary alkyl halides, generally give only low to moderate yields of the desired alkylation product ( see below ). [ 3 ] The reaction is named after its inventor, Gilbert Stork (Columbia University). By using an anionic version of an enamine, known as an azaenolate or metalloenamine , it is also possible to alkylate ketones or aldehydes with alkyl halides as less reactive electrophiles. [ 4 ] In this method, a carbonyl compound is condensed to a Schiff base . The imine then reacts with a Grignard reagent to give the corresponding Hauser base . This species' negative charge enables it to displace the halide of less reactive alkyl halides, including methyl, ethyl, and other nonactivated halides. Hydrolysis then yields the alkylated ketone. In the Enders SAMP/RAMP hydrazone-alkylation reaction , a hydrazone formed from a chiral hydrazine (in place of the secondary amine) is used, allowing enantioselective alkylation.
https://en.wikipedia.org/wiki/Stork_enamine_alkylation
A storm drain , storm sewer ( United Kingdom , U.S. and Canada ), highway drain , [ 1 ] surface water drain / sewer ( United Kingdom ), or stormwater drain ( Australia and New Zealand ) is infrastructure designed to drain excess rain and ground water from impervious surfaces such as paved streets, car parks, parking lots, footpaths, sidewalks, and roofs. Storm drains vary in design from small residential dry wells to large municipal systems. Drains receive water from street gutters on most motorways , freeways and other busy roads , as well as towns in areas with heavy rainfall that leads to flooding , and coastal towns with regular storms . Even rain gutters from houses and buildings can connect to the storm drain. Since many storm drainage systems are gravity sewers that drain untreated storm water into rivers or streams, any hazardous substances poured into the drains will contaminate the destination bodies of water. Storm drains sometimes cannot manage the quantity of rain that falls in heavy rains or storms. Inundated drains can cause basement and street flooding. Many areas require detention tanks inside a property that temporarily hold runoff in heavy rains and restrict outlet flow to the public sewer. This reduces the risk of overwhelming the public sewer. Some storm drains mix stormwater (rainwater) with sewage , either intentionally in the case of combined sewers , or unintentionally . Several related terms are used differently in American and British English. There are two main types of stormwater drain (highway drain or road gully in the UK) inlets: side inlets and grated inlets. Side inlets are located adjacent to the curb and rely on the ability of the opening under the back stone or lintel to capture flow. They are usually depressed at the invert of the channel to improve capture capacity. [ 4 ] Many inlets have gratings or grids to prevent people, vehicles, large objects or debris from falling into the storm drain. Grate bars are spaced so that the flow of water is not impeded, but sediment and many small objects can also fall through. However, if grate bars are too far apart, the openings may present a risk to pedestrians, bicyclists, and others in the vicinity. Grates with long narrow slots parallel to traffic flow are of particular concern to cyclists, as the front tire of a bicycle may become stuck, causing the cyclist to go over the handlebars or lose control and fall. Storm drains in streets and parking areas must be strong enough to support the weight of vehicles, and are often made of cast iron or reinforced concrete. [ citation needed ] Some of the heavier sediment and small objects may settle in a catch basin, or sump , which lies immediately below the outlet, where water from the top of the catch basin reservoir overflows into the sewer proper. The catchbasin serves much the same function as the "trap" in household wastewater plumbing in trapping objects. In the United States, unlike the plumbing trap, the catch basin does not necessarily prevent sewer gases such as hydrogen sulfide and methane from escaping. However, in the United Kingdom, where they are called gully pots , [ 5 ] they are designed as true water-filled traps and do block the egress of gases and rodents. Most catchbasins contain stagnant water during drier parts of the year and can, in warm countries, become mosquito breeding grounds. Larvicides or disruptive larval hormones, sometimes released from "mosquito biscuits", have been used to control mosquito breeding in catch basins. 
Mosquitoes may be physically prevented from reaching the standing water or migrating into the sewer proper by the use of an "inverted cone filter". Another method of mosquito control is to spread a thin layer of oil on the surface of stagnant water, interfering with the breathing tubes of mosquito larvae. The performance of catch basins at removing sediment and other pollutants depends on the design of the catchbasin (for example, the size of the sump), and on routine maintenance to retain the storage available in the sump to capture sediment. Municipalities typically have large vacuum trucks that perform this task. Catch basins act as the first-line pretreatment for other treatment practices, such as retention basins , by capturing large sediments and street litter from urban runoff before it enters the storm drainage pipes. Pipes can come in many different cross-sectional shapes (rectangular, square, bread-loaf-shaped, oval, inverted pear-shaped, egg shaped, and most commonly, circular). Drainage systems may have many different features including waterfalls , stairways, balconies and pits for catching rubbish, sometimes called Gross Pollutant Traps (GPTs). Pipes made of different materials can also be used, such as brick, concrete, high-density polyethylene or galvanized steel. Fibre reinforced plastic is being used more commonly for drain pipes and fittings. [ 6 ] Most drains have a single large exit at their point of discharge (often covered by a grating ) into a canal , river, lake, reservoir , sea or ocean. Other than catchbasins, typically there are no treatment facilities in the piping system. Small storm drains may discharge into individual dry wells . Storm drains may be interconnected using slotted pipe, to make a larger dry well system. Storm drains may discharge into human-made excavations known as recharge basins or retention ponds. Storm drains are often unable to manage the quantity of rain that falls during heavy rains and/or storms. When storm drains are inundated, basement and street flooding can occur. Unlike catastrophic flooding events, this type of urban flooding occurs in built-up areas where human-made drainage systems are prevalent. Urban flooding is the primary cause of sewer backups and basement flooding, which can affect properties repeatedly. [ 7 ] Clogged drains also contribute to flooding by the obstruction of storm drains. Communities or cities can help reduce this by cleaning leaves from the storm drains to stop ponding or flooding into yards. [ 8 ] Snow in the winter can also clog drains when there is an unusual amount of rain in the winter and snow is plowed atop storm drains. [ 9 ] Runoff into storm sewers can be minimized by including sustainable urban drainage systems (UK term) or low impact development or green infrastructure practices (US terms) into municipal plans. To reduce stormwater from rooftops, flows from eaves troughs ( rain gutters and downspouts) may be infiltrated into adjacent soil, rather than discharged into the storm sewer system. Storm water runoff from paved surfaces can be directed to unlined ditches (sometimes called swales or bioswales ) before flowing into the storm sewers, again to allow the runoff to soak into the ground. Permeable paving materials can be used in building sidewalks, driveways and in some cases, parking lots, to infiltrate a portion of the stormwater volume. [ 10 ] Many areas require that properties have detention tanks that temporarily hold rainwater runoff, and restrict the outlet flow to the public sewer. 
This lessens the risk of overburdening the public sewer during heavy rain. An overflow outlet may also connect higher on the outlet side of the detention tank. This overflow prevents the detention tank from completely filling. Restricting the flow and temporarily holding the water in a detention tank in this way makes it far less likely for rain to overwhelm the public sewers. [ 11 ] The first flush from urban runoff can be extremely dirty. Storm water may become contaminated while running down the road or other impervious surface , or from lawn chemical run-off, before entering the drain. Water running off these impervious surfaces tends to pick up gasoline , motor oil , heavy metals , trash and other pollutants from roadways and parking lots, as well as fertilizers and pesticides from lawns. Roads and parking lots are major sources of nickel , copper , zinc , cadmium , lead and polycyclic aromatic hydrocarbons (PAHs), which are created as combustion byproducts of gasoline and other fossil fuels . Roof runoff contributes high levels of synthetic organic compounds and zinc (from galvanized gutters). Fertilizer use on residential lawns, parks and golf courses is a significant source of nitrates and phosphorus . [ 12 ] [ 13 ] Separation of undesired runoff can be achieved by installing devices within the storm sewer system. These devices are relatively new and can only be installed with new development or during major upgrades. They are referred to as oil-grit separators (OGS) or oil-sediment separators (OSS). They consist of a specialized manhole chamber, and use the water flow and/or gravity to separate oil and grit. [ 14 ] Catch basins are commonly designed with a sump area below the outlet pipe level—a reservoir for water and debris that helps prevent the pipe from clogging. Unless constructed with permeable bottoms to let water infiltrate into underlying soil, this subterranean basin can become a mosquito breeding area, because it is cool, dark, and retains stagnant water for a long time. Combined with standard grates, which have holes large enough for mosquitoes to enter and leave the basin, this is a major problem in mosquito control. [ 16 ] Basins can be filled with concrete up to the pipe level to prevent this reservoir from forming. Without proper maintenance, the functionality of the basin is questionable, as catch basins are most commonly not cleaned annually, as is needed to make them perform as designed. The trapping of debris serves no purpose because, once filled, they operate as if no basins were present, but continue to allow a shallow area of water retention for the breeding of mosquitoes. Moreover, even if cleaned and maintained, the water reservoir remains filled, accommodating the breeding of mosquitoes. Storm drains are separate and distinct from sanitary sewer systems. The separation of storm sewers from sanitary sewers helps prevent sewage treatment plants becoming overwhelmed by infiltration/inflow during a rainstorm, which could discharge untreated sewage into the environment. Many storm drainage systems drain untreated storm water into rivers or streams. In the US, many local governments conduct public awareness campaigns about this, lest people dump waste into the storm drain system. [ 17 ] In Cleveland, Ohio , for example, all new catch basins installed have inscriptions on them not to dump any waste, and usually include a fish imprint as well. Trout Unlimited Canada recommends that a yellow fish symbol be painted next to existing storm drains.
[ 18 ] Cities that installed their sewage collection systems before the 1930s typically used single piping systems to transport both urban runoff and sewage. This type of collection system is referred to as a combined sewer system (CSS). The cities' rationale when combined sewers were built was that it would be cheaper to build just a single system. [ 19 ] In these systems a sudden large rainfall that exceeds sewage treatment capacity is allowed to overflow directly from storm drains into receiving waters via structures called combined sewer overflows . [ 20 ] Storm drains are typically installed at shallower depths than combined sewers. This is because combined sewers were designed to accept sewage flows from buildings with basements, in addition to receiving surface runoff from streets. [ 21 ] About 860 communities in the US have combined sewer systems, serving about 40 million people. [ 22 ] New York City , Washington, D.C. , Seattle and other cities with combined systems have this problem due to a large influx of storm water after every heavy rain event. Some cities have dealt with this by adding large storage tanks or ponds to hold the water until it can be treated. Chicago has a system of tunnels, collectively called the Deep Tunnel , underneath the city for storing its stormwater. [ 23 ] Many areas require detention tanks or roof detention systems that temporarily hold runoff in heavy rains and restrict outlet flow to the public sewer. This lessens the risk of overwhelming the public sewer in heavy rain. An overflow outlet may also connect higher on the outlet side of the detention tank. This overflow prevents the detention tank from completely filling. By restricting the flow of water in this way and temporarily holding the water in a detention vault or tank or by rooftop detention, public sewers are less likely to overflow. [ 24 ] Building codes and local government ordinances vary significantly on the handling of storm drain runoff. New developments might be required to construct their storm drain processing capacity for returning the runoff to the water table and bioswales may be required in sensitive ecological areas to protect the watershed . In the United States, cities, suburban communities, and towns with over 100,000 population, smaller community drainage systems in urbanized areas, and additional municipal systems that are specifically designated by state agencies are required to obtain discharge permits for their storm sewer systems under the Clean Water Act . [ 25 ] The Environmental Protection Agency (EPA) issued stormwater regulations for large cities in 1990 and for other communities in 1999. [ 26 ] The permits require local governments to operate stormwater management programs, covering both construction of new buildings and facilities, and maintenance of their existing municipal drainage networks. For new construction projects, many municipalities require builders to obtain approval of the site drainage system and structural plans. State government facilities, such as roads and highways , are also subject to the stormwater management regulations. [ 27 ] Southeastern Los Angeles County installed thousands of stainless steel, full-capture trash devices on their road drains in 2011. [ 28 ] An international subculture has grown up around exploring stormwater drains. Societies such as the Cave Clan regularly explore the drains underneath cities. This is commonly known as " urban exploration ", but is also known as draining when in specific relation to storm drains. 
[ 29 ] In several large American cities, homeless people live in storm drains. At least 300 people live in the 200 miles of underground storm drains of Las Vegas , many of them making a living finding unclaimed winnings in the gambling machines. [ 30 ] An organization called Shine a Light was founded in 2009 to help the drain residents after over 20 drowning deaths occurred in the preceding years. [ 30 ] [ 31 ] A man in San Diego was evicted from a storm drain after living there for nine months in 1986. [ 32 ] Archaeological studies have revealed use of rather sophisticated stormwater runoff systems in ancient cultures. For example, in Minoan Crete around 2000 BC, cities such as Phaistos were designed to have storm drains and channels to collect precipitation runoff. At Cretan Knossos , storm drains include stone-lined structures large enough for a person to crawl through. [ 33 ] Other examples of early civilizations with elements of stormwater drain systems include early people of Mainland Orkney such as Gurness and the Brough of Birsay in Scotland .
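The detention tanks described earlier work by a simple mass balance: inflow in excess of the restricted outlet accumulates in the tank and drains away slowly after the storm, while water beyond the tank's volume would leave through the overflow outlet. The sketch below is a toy routing calculation with made-up numbers, not a design method.

```python
# Toy detention-tank routing: storage absorbs whatever inflow exceeds the
# restricted outlet; water beyond tank_volume is assumed to spill via the overflow.
def route(inflow, outlet_capacity, tank_volume):
    stored, outflow = 0.0, []
    for q_in in inflow:                              # one value per time step
        q_out = min(stored + q_in, outlet_capacity)  # restricted outlet to the sewer
        stored = min(max(stored + q_in - q_out, 0.0), tank_volume)
        outflow.append(q_out)
    return outflow

storm = [0, 5, 20, 40, 25, 10, 2, 0, 0, 0]           # hypothetical inflow per step
print("peak inflow :", max(storm))
print("peak outflow:", max(route(storm, outlet_capacity=12, tank_volume=100)))
```

The peak flow passed to the sewer is capped at the outlet capacity, which is exactly the effect that detention requirements aim for.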
https://en.wikipedia.org/wiki/Storm_drain
Storm hardening is the process whereby construction is used to create new infrastructure or retrofit existing infrastructure such that it is more capable of withstanding extreme weather events. It "involves physically changing infrastructure to make it less susceptible to damage from extreme wind , flooding , or flying debris. Hardening measures include adopting new technology, installing new equipment, constructing protective barriers, or changing communications/IT at the facility." "Hardening usually requires significant investment by the energy company . Some projects take years to complete; for example, large earth-moving equipment may be brought in to build a new levee . Sometimes the sheer magnitude of assets involved (e.g., thousands of wooden distribution poles) requires years of concerted effort to upgrade." [ 1 ]
https://en.wikipedia.org/wiki/Storm_hardening
Storm oil is a nearly water-insoluble oil that acts as a surfactant , and it has been used since ancient times to smooth ocean waves. [ 1 ] [ 2 ] It has historically been employed to facilitate sea rescues and improve navigational safety by pouring the oil onto the ocean surface to reduce wave intensity. [ 3 ] [ 4 ] The nearly immiscible oil acts as a surfactant and accumulates on the surface; as waves locally stretch or compress the film, the resulting concentration gradient induces tangential shear forces that produce extra dissipation and damping. [ 5 ] [ 1 ] The phenomenon was later explored scientifically by figures such as Benjamin Franklin , Lord Rayleigh , and Agnes Pockels , whose work collectively deepened scientific knowledge of surface tension and wave dynamics. [ 1 ] [ 4 ] Steamships and lifeboats from many countries were required to carry storm oil until the end of the 20th century. [ 6 ] [ 7 ] The United States Maritime Service Training Manual included storm oil in the list of general equipment aboard lifeboats, [ 8 ] while the Merchant Shipping Act 1894 ( 57 & 58 Vict. c. 60) mandated it for British vessels until 1998. [ 6 ] [ 7 ] Vegetable oil or fish oil was frequently used as a cheap form of storm oil. [ 6 ] Oil has a damping effect on water and quickly forms a thin layer over a large expanse of the surface, which absorbs some of the energy of the waves. [ 6 ] [ 9 ] This prevents wind from getting traction along the water; thus, waves cannot form as easily. [ 10 ] The practice can be traced back as far as 350 BC with Aristotle and to the early 1st century with Pliny the Elder . Aristotle described the use of oil spread on the eyes of divers with the intention to "quiet the surface and permit the rays of light to reach them". Whaling vessels are purported to have dangled blubber around the hull when in heavy seas to help calm the ocean. [ 11 ] Benjamin Franklin famously investigated oil's calming effect on waves during his visits to England, beginning in 1757, to negotiate on taxation issues, [ 1 ] demonstrating the effect on lakes such as Derwentwater . Communications between Franklin and William Brownrigg show that Franklin had first encountered the phenomenon aboard a ship in 1757 and investigated it several years later alongside Brownrigg and Sir John Pringle . [ 12 ] This led to the discussion of the topic at the Royal Society on 2 June 1774. Franklin was also the first to carry out controlled experiments on various ponds and lakes in England, and the first to publish the findings as a scientific publication. [ 13 ] [ 1 ] Subsequent investigators included John William Strutt, Lord Rayleigh . [ 14 ] [ 1 ] In parallel, Agnes Pockels , working from her kitchen in Brunswick, Germany, experimented with the properties of oil monolayers on water, measuring the thickness of oil layers on water at approximately 1.3 nanometers. [ 15 ] Her work studying storm oils through her surface film balance technique later influenced the design of tools like the Langmuir trough. [ 15 ] Pockels also suggested that the calming effect of oil on water involved more than just reduced surface tension, including additional viscous resistance. [ 1 ]
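A back-of-the-envelope estimate shows why these experiments pointed to films only molecules thick. Assuming, purely for illustration, that about a teaspoon of oil (5 mL) spreads evenly over roughly 2,000 m² of calm water, the film thickness is simply the volume divided by the area:

```python
# Illustrative film-thickness estimate; the 5 mL and 2,000 m^2 figures are
# assumed round numbers, not measurements reported in the article.
volume_m3 = 5e-6            # ~one teaspoon of oil
area_m2 = 2000.0            # assumed spreading area
thickness_nm = volume_m3 / area_m2 * 1e9
print(f"{thickness_nm:.1f} nm")   # ≈ 2.5 nm, a few molecules thick
```

That is the same order of magnitude as the roughly 1.3-nanometre monolayer thickness Pockels measured, which is why a very small quantity of oil can cover, and calm, a surprisingly large area of water.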
https://en.wikipedia.org/wiki/Storm_oil
Stormtroopers Advance Under a Gas Attack (German: Sturmtruppe geht unter Gas vor ) is an engraving in aquatint by German painter and printmaker Otto Dix representing German soldiers in combat during the First World War . It is the twelfth in the series of fifty engravings entitled The War , published in 1924. Copies are kept at the German Historical Museum in Berlin , at the Museum of Modern Art in New York , and at the Minneapolis Institute of Art , among other public collections. The engraving is almost monochrome, rectangular in format (19.3 × 28.8 cm for the engraving, 34.8 × 47.3 cm for the sheet). The engraving represents five German stormtroopers , recognizable by their steel helmets, all wearing gas masks, as they are advancing into enemy lines, while suffering a gas attack. [ 1 ] [ 2 ] [ 3 ] This article about a twentieth-century painting is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Stormtroopers_Advance_Under_a_Gas_Attack
Stormwater , also written storm water , is water that originates from precipitation ( storm ), including heavy rain and meltwater from hail and snow . Stormwater can soak into the soil ( infiltrate ) and become groundwater , be stored on depressed land surface in ponds and puddles , evaporate back into the atmosphere, or contribute to surface runoff . Most runoff is conveyed directly as surface water to nearby streams , rivers or other large water bodies ( wetlands , lakes and oceans ) without treatment. In natural landscapes, such as forests, soil absorbs much of the stormwater. Plants also reduce stormwater by improving infiltration, intercepting precipitation as it falls, and by taking up water through their roots. In developed environments, such as cities , unmanaged stormwater can create two major issues: one related to the volume and timing of runoff ( flooding ) and the other related to potential contaminants the water is carrying ( water pollution ). In addition to the pollutants carried in stormwater runoff, urban runoff is being recognized as a cause of pollution in its own right. Stormwater is also an important resource as human population and demand for water grow, particularly in arid and drought-prone climates. Stormwater harvesting techniques and purification could potentially make some urban environments self-sustaining in terms of water. With less vegetation and more impervious surfaces ( parking lots , roads , buildings , compacted soil ), developed areas allow less rain to infiltrate into the ground, and more runoff is generated than in undeveloped conditions. Additionally, passages such as ditches and storm sewers quickly transport runoff away from commercial and residential areas into nearby water bodies. This greatly increases the volume of water in waterways and the discharge of those waterways, leading to erosion and flooding. Because the water is flushed out of the watershed during the storm event, little infiltrates the soil, replenishes groundwater , or supplies stream baseflow in dry weather. [ 1 ] A first flush is the initial runoff of a rainstorm. During this phase, polluted water entering storm drains in areas with high proportions of impervious surfaces is typically more concentrated compared to the remainder of the storm. Consequently, these high concentrations of urban runoff result in high levels of pollutants discharged from storm sewers to surface waters . [ 2 ] [ 3 ] : 216 Daily human activities result in deposition of pollutants on roads , lawns , roofs , farm fields, and other land surfaces. Such pollutants include trash, sediment, nutrients, bacteria, pesticides, metals, and petroleum byproducts. [ 4 ] When it rains or there is irrigation , water runs off and ultimately makes its way to a river , lake , or the ocean . While there is some attenuation of these pollutants before entering receiving waters, polluted runoff results in large enough quantities of pollutants to impair receiving waters. [ 5 ] In addition to the pollutants carried in stormwater runoff , urban runoff is being recognized as a cause of pollution in its own right. In natural catchments ( watersheds ) surface runoff entering waterways is a relatively rare event, occurring only a few times each year and generally after larger storms. Before land development occurs in a particular area, most rainfall soaks into the ground and contributes to groundwater recharge, or is recycled into the atmosphere by vegetation through evapotranspiration . 
Modern drainage systems, which collect runoff from impervious surfaces (e.g., roofs and roads), ensure that water is efficiently moved to waterways through pipe networks, meaning that even small storms result in increased waterway flows. In addition to delivering higher pollutants from the urban catchment, increased stormwater flow can lead to stream erosion , encourage weed invasion, and alter natural flow regimes. Native species often rely on such flow regimes for spawning, juvenile development, and migration. Stormwater runoff from roadways has been observed to contain many metals including zinc , cadmium , copper , nickel , lead , chromium , manganese , iron , vanadium , cobalt , and aluminum and other constituents. [ 6 ] In some areas, especially along the U.S. coast, polluted runoff from roads and highways may be the largest source of water pollution . For example, about 75 percent of the toxic chemicals getting to Seattle , Washington's Puget Sound are carried by stormwater that runs off paved roads and driveways, rooftops, yards, and other developed land. [ 7 ] Industrial stormwater is runoff from precipitation that lands on industrial sites (e.g. manufacturing facilities, mines, airports). This runoff is often polluted by materials that are handled or stored on the sites, and the facilities are subject to regulations to control the discharges. [ 8 ] [ 9 ] Stormwater management facilities (SWMF) are generally designed using Stokes' law to allow rudimentary treatment through the settling particulate matter larger than 40 micron in size and to impound water to reduce downstream flooding. [ citation needed ] However, regulation on the effluent from SWMFs is becoming more stringent. The effect of phosphorus , either dissolved from (fertilizers) or bound to sediment particles from construction or agriculture runoff, causes algae and toxic cyanobacteria (aka Blue-green algae ) blooms in receiving lakes. Cyanotoxin is of particular concern as many drinking water treatment plants can not effectively remove this health hazard. [ citation needed ] In a recent [ when? ] municipal stormwater treatment study, an advanced sedimentation technology was used passively in large diameter stormwater mains upstream of SWMFs to remove an average of 90% of total suspended solids (TSS) and phosphorus during a near 50 year rain event turning a management facility into a passive treatment facility. [ 10 ] Chemical treatment of stormwater to remove pollutants can be accomplished without large scale infrastructure improvements. Passive treatment technologies use the energy of water flowing by gravity through ditches, canals, culverts, pipes or other constructed conveyances to enable treatment. Self-dosing products, such as gel flocculants, are placed in the flowing water where sediment particles, colloids and flow energy combine to release the required dosage, thereby creating heavy flocs which can then be easily filtered or settled. [ 11 ] Natural woven fibers like jute are often used in ditch bottoms to act as filtration media. Silt retention mats can also be placed in situ to capture floccules. Sedimentation in a forebay is often utilized as a deposition area to clarify the water and concentrate the material. Mining, heavy construction and other industries have used passive systems for more than twenty years. 
These types of systems are low carbon as no external power source is needed, they require little skill to operate, minimal maintenance and are effective at reducing TSS, some heavy metals and phosphorus. [ citation needed ] Stormwater is a major cause of urban flooding . Urban flooding is the inundation of land or property in a built-up environment caused by stormwater overwhelming the capacity of drainage systems , such as storm sewers . Although triggered by single events such as flash flooding or snow melt , urban flooding is a condition, characterized by its repetitive, costly and systemic impacts on communities. In areas susceptible to urban flooding, backwater valves and other infrastructure may be installed to mitigate losses. Where properties are built with basements , urban flooding is the primary cause of basement and sewer backups. Although the number of casualties from urban flooding is usually limited, the economic, social and environmental consequences can be considerable: in addition to direct damage to property and infrastructure ( highways , utilities and services), chronically wet houses are linked to an increase in respiratory problems and other illnesses. [ 12 ] Sewer backups are often caused by defects in the sanitary sewer system, which takes on some storm water as a result of Infiltration/Inflow . An example of urban stormwater creating a sinkhole collapse is the February 25, 2002 Dishman Lane collapse in Bowling Green, Kentucky where a sinkhole suddenly dropped the road under four traveling vehicles. The nine-month repair of the Dishman Lane collapse cost a million dollars but there remains the potential for future problems. [ 14 ] In undisturbed areas with natural subsurface ( karst ) drainage, soil and rock fragments choke karst openings, thereby being a self-limitation to the growth of openings. [ 15 ] : 189–190, 196 The undisturbed karst drainage system becomes balanced with the climate so it can drain the water produced by most storms. However, problems occur when the landscape is altered by urban development. [ 16 ] : 28 In urban areas with natural subsurface karst drainage there are no surface streams for the increased stormwater from impervious surfaces such as roofs, parking lots, and streets to receive drainage. Instead, the stormwater enters the subsurface drainage system by moving down through the ground. When the subsurface water flow becomes great enough to transport soil and rock fragments, the karst openings grow rapidly. [ 15 ] : 190 Where karst openings are roofed by supportive ( competent ) limestone, there frequently is no surface warning that an opening has grown so large it will suddenly collapse catastrophically. [ 15 ] : 198 It is recommended that land-use planning agencies avoid karst areas when considering new development projects. [ 16 ] : 37–38 Ultimately taxpayers end up paying the costs for poor land use decisions. Managing the quantity and quality of stormwater is termed, "Stormwater Management." [ 17 ] The term Best Management Practice (BMP) or stormwater control measure (SCM) is often used to refer to both structural or engineered control devices and systems (e.g. retention ponds ) to treat or store polluted stormwater, as well as operational or procedural practices (e.g. street sweeping). [ 18 ] Stormwater management includes both technical and institutional aspects. 
[ 19 ] Integrated water management (IWM) of stormwater has the potential to address many of the issues affecting the health of waterways and water supply challenges facing the modern urban city. IWM is often associated with green infrastructure when considered in the design process. Professionals in their respective fields, such as urban planners , architects , landscape architects , interior designers , and engineers , often consider integrated water management as a foundation of the design process. Also known as low impact development (LID) [ 21 ] in the United States , or Water Sensitive Urban Design (WSUD) [ 22 ] in Australia , IWM has the potential to improve runoff quality, reduce the risk and impact of flooding and deliver an additional water resource to augment potable supply. The development of the modern city often results in increased demands for water supply due to population growth, while at the same time altered runoff predicted by climate change has the potential to increase the volume of stormwater that can contribute to drainage and flooding problems. IWM offers several techniques, including stormwater harvest (to reduce the amount of water that can cause flooding), infiltration (to restore the natural recharge of groundwater), biofiltration or bioretention (e.g., rain gardens ), to store and treat runoff and release it at a controlled rate to reduce impact on streams and wetland treatments (to store and control runoff rates and provide habitat in urban areas). There are many ways of achieving LID. The most popular is to incorporate land-based solutions to reduce stormwater runoff through the use of retention ponds, bioswales , infiltration trenches, sustainable pavements (such as permeable paving ), and others noted above. LID can also be achieved by utilizing engineered, manufactured products to achieve similar, or potentially better, results as land-based systems (underground storage tanks, stormwater treatment systems, biofilters , etc.). The proper LID solution is one that balances the desired results (controlling runoff and pollution) with the associated costs (loss of usable land for land-based systems versus capital cost of manufactured solution). Green (vegetated) roofs are also another low-cost solution. IWM as a movement can be regarded as being in its infancy and brings together elements of drainage science, ecology and a realization that traditional drainage solutions transfer problems further downstream to the detriment of the environment and water resources. In the United States , the Environmental Protection Agency (EPA) is charged with regulating stormwater pursuant to the Clean Water Act (CWA). [ 23 ] The goal of the CWA is to restore all " Waters of the United States " to their "fishable" and "swimmable" conditions. Point source discharges, which originate mostly from municipal wastewater ( sewage ) and industrial wastewater discharges, have been regulated since enactment of the CWA in 1972. Pollutant loadings from these sources are tightly controlled through the issuance of National Pollution Discharge Elimination System ( NPDES ) permits. However, despite these controls, thousands of water bodies in the U.S. remain classified as "impaired," meaning that they contain pollutants at levels higher than is considered safe by EPA for the intended beneficial uses of the water. Much of this impairment is due to polluted runoff, generally in urbanized watersheds (in other US watersheds, agricultural pollution is a major source). 
[ 24 ] : 15 To address the nationwide problem of stormwater pollution, Congress broadened the CWA definition of "point source" in 1987 to include industrial stormwater discharges and Municipal Separate Storm Sewer Systems ("MS4"). These facilities are required to obtain NPDES permits. [ 25 ] In 2017, about 855 large municipal systems (serving populations of 100,000 or more), and 6,695 small systems are regulated by the permit system. [ 26 ] EPA has authorized 47 states to issue NPDES permits. [ 27 ] In addition to implementing the NPDES requirements, many states and local governments have enacted their own stormwater management laws and ordinances, and some have published stormwater treatment design manuals. [ 17 ] [ 28 ] Some of these state and local requirements have expanded coverage beyond the federal requirements. For example, the State of Maryland requires erosion and sediment controls on construction sites of 5,000 sq ft (460 m 2 ) or more. [ 29 ] It is not uncommon for state agencies to revise their requirements and impose them upon counties and cities; daily fines ranging as high as $25,000 can be imposed for failure to modify their local stormwater permitting for construction sites, for instance. Agricultural runoff (except for concentrated animal feeding operations, or " CAFO ") is classified as nonpoint source pollution under the CWA. It is not included in the CWA definition of "point source" and therefore not subject to NPDES permit requirements. The 1987 CWA amendments established a non-regulatory program at EPA for nonpoint source pollution management consisting of research and demonstration projects. [ 30 ] Related programs, such as the Environmental Quality Incentives Program are conducted by the Natural Resources Conservation Service (NRCS) in the U.S. Department of Agriculture . [ 31 ] Education is a key component of stormwater management. A number of agencies and organizations have launched campaigns to teach the public about stormwater pollution, and how they can contribute to solving it. Thousands of local governments in the U.S. have developed education programs as required by their NPDES stormwater permits. [ 32 ] One example of a local educational program is that of the West Michigan Environmental Action Council (WMEAC), which has coined the term Hydrofilth to describe stormwater pollution, [ 33 ] as part of its "15 to the River" campaign. (During a rain storm, it may take only 15 minutes for contaminated runoff in Grand Rapids, Michigan to reach the Grand River .) [ 34 ] Its outreach activities include a rain barrel distribution program and materials for homeowners on installing rain gardens . [ 35 ] Other public education campaigns highlight the importance of green infrastructure in slowing down and treating stormwater runoff. DuPage County Stormwater Management launched the "Love Blue. Live Green." outreach campaign on social media sites to educate the public on green infrastructure and some other best management practices for stormwater runoff. [ 36 ] Articles, websites, pictures, videos and other media are spread to the public through this campaign. Stormwater infrastructure is an expensive long-term investment that is difficult to replace when the underlying circumstances change. As a result, the system will perform worse or malfunction more frequently over time. 
This is precisely what is occurring in the Baltic Sea region of Europe, where drainage systems are being stressed by the quickening pace of climate change, advancing urbanization, and stricter regulations. Rethinking stormwater management techniques and investing in infrastructure are essential to adapting to these rapidly changing circumstances. [ 37 ] [ 38 ] Stormwater runoff has been an issue since humans began living in concentrated villages or urban settings. During the Bronze Age , housing took a more concentrated form, and impervious surfaces emerged as a factor in the design of early human settlements . Some of the early incorporation of stormwater engineering is evidenced in Ancient Greece . [ 39 ] A specific example of an early stormwater runoff system design is found in the archaeological recovery at Minoan Phaistos on Crete . [ 40 ]
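The Stokes'-law sizing mentioned above for stormwater management facilities can be illustrated with a short calculation. The sketch below is only an illustration, not the method of any cited study; the particle density, water viscosity and the 40 micron cut-off are assumed values.

```python
# Rough illustration (assumed values) of Stokes'-law settling, the basis for the
# rudimentary treatment in stormwater management facilities described above.
def stokes_velocity(d, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere of diameter d (m)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

v = stokes_velocity(40e-6)            # the 40 micron design cut-off quoted above
print(f"settling velocity ~ {v * 1000:.1f} mm/s")   # about 1.4 mm/s
```

At roughly a millimetre per second, such particles settle out only if the facility holds the water long enough, which is why detention time is central to these designs.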
https://en.wikipedia.org/wiki/Stormwater
A stormwater detention vault is an underground structure designed to manage excess stormwater runoff on a developed site, often in an urban setting. This type of best management practice may be selected when there is insufficient space on the site to infiltrate the runoff or build a surface facility such as a detention basin or retention basin . [ 1 ] Detention vaults manage stormwater quantity flowing to nearby surface waters . They help prevent flooding and can reduce erosion in rivers and streams. They do not provide treatment to improve water quality , [ 2 ] though some are attached to a media filter bank to remove pollutants. Underground stormwater detention allows for high volume storage of runoff in a small footprint area. The storage vessels can be made from a variety of materials, including corrugated metal pipe , aluminum , steel , plastic , fiberglass , pre-cast or poured-in-place concrete . [ 3 ] The vault is typically buried under a parking lot or other open land on the site. In the latter case, this underground vault may be preferable to a surface detention pond if other uses are intended for the land (e.g. a pedestrian plaza or park). In other situations, a vault is used because installing a pond might pose other problems, such as attracting unwanted waterfowl or other animals. In some sites, a vault may be installed in the basement of a building, such as a parking garage. [ 4 ] Tunnels may be bored to serve as detention vaults. [ 5 ] Tunnels may be cheaper than basins, as they do not require pumps to move the water. [ 6 ] The outlet is generally a restricted-flow drain from the detention vessel, with a weir for containing detritus. [ 3 ] Detention vessels delay the delivery of water downstream and can create a later water-level peak after the rainfall has ended. It is important to consider the timing of water release and the types of reservoirs feeding a waterway. [ 7 ]
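The delayed, flattened peak described above can be made concrete with a toy mass balance. The inflow hydrograph, time step and outlet capacity below are all assumed numbers, not a design procedure from any cited source.

```python
# Toy mass balance (assumed numbers): a vault with a restricted outlet stores the
# excess of inflow over outlet capacity and releases it after the storm passes.
dt = 60.0                                                     # seconds per step
inflow = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0, 0.0, 0.0, 0.0]  # m^3/s (assumed storm)
outlet_capacity = 0.3                                         # m^3/s (assumed)
storage = 0.0                                                 # m^3 currently held

for q_in in inflow:
    q_out = min(outlet_capacity, q_in + storage / dt)   # outlet limits the release
    storage = max(0.0, storage + (q_in - q_out) * dt)
    print(f"in={q_in:.2f} m^3/s  out={q_out:.2f} m^3/s  stored={storage:.0f} m^3")

# The released peak (0.3 m^3/s) stays well below the inflow peak (1.0 m^3/s),
# and outflow continues after the rain stops while the vault drains.
```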
https://en.wikipedia.org/wiki/Stormwater_detention_vault
In engineering and computing , "stovepipe system" is a pejorative term for a system that has the potential to share data or functionality with other systems but which does not do so. The term evokes the image of stovepipes rising above buildings, each functioning individually. A simple example of a stovepipe system is one that implements its own user IDs and passwords, instead of relying on a common user ID and password shared with other systems. Stovepipes are systems procured and developed to solve a specific problem, characterized by a limited focus and functionality, and containing data that cannot be easily shared with other systems. A stovepipe system is generally considered an example of an anti-pattern , particularly found in legacy systems . This is due to the lack of code reuse , and resulting software brittleness due to potentially general functions only being used on limited input. However, in certain cases stovepipe systems are considered appropriate, due to benefits from vertical integration and avoiding dependency hell . [ 2 ] For example, the Microsoft Excel team has avoided dependencies and even maintained its own C compiler, which helped it to ship on time, have high-quality code, and generate small, cross-platform code. [ 2 ]
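The user-ID example above can be sketched in a few lines of code. This is a hypothetical illustration (the class names are invented, not from the article): the stovepipe application keeps its own credential store, while the integrated application delegates to a shared authentication service that other systems can reuse.

```python
# Hypothetical sketch: the stovepipe anti-pattern vs. a shared service.
class StovepipeApp:
    """Keeps its own user IDs and passwords -- nothing is shared or reused."""
    def __init__(self):
        self._users = {}                      # credentials live only in this app

    def register(self, user, password):
        self._users[user] = password

    def login(self, user, password):
        return self._users.get(user) == password


class SharedAuthService:
    """A single credential store that several systems can rely on."""
    def __init__(self):
        self._users = {}

    def register(self, user, password):
        self._users[user] = password

    def check(self, user, password):
        return self._users.get(user) == password


class IntegratedApp:
    """Delegates authentication instead of duplicating it."""
    def __init__(self, auth):
        self._auth = auth

    def login(self, user, password):
        return self._auth.check(user, password)
```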
https://en.wikipedia.org/wiki/Stovepipe_system
Str8ts is a logic -based number-placement puzzle , invented by Jeff Widderich in 2008. [ 1 ] It is distinct from, but shares some properties and rules with, Sudoku . The name is derived from the poker straight . The puzzle is published in a number of newspapers internationally, [ 2 ] in two book collections, and in downloadable apps. It was featured on the Canadian television show Dragons' Den on November 24, 2010. A hand made prototype of Str8ts which used black cells and the new rule of straights in compartments was invented by Canadian puzzle designer Jeff Widderich in 2007. He approached Andrew Stuart, a UK-based puzzle maker and programmer, to make the puzzle. Their collaboration settled how the clues would be determined and finalized the rules. The first puzzle was presented at the Nuremberg International Toy Fair in February 2008. A daily puzzle has been published at their website since 24 November 2008, and more recently, a weekly "extreme" puzzle has appeared, which incorporates an active discussion forum for each puzzle. The puzzle has appeared in Süddeutsche Zeitung since March 2010, and in the Saturday edition of Die Rheinpfalz since August 2010. An iOS app was released in July 2009, containing hundreds of puzzles in four difficulty levels. [ 3 ] In August 2010, a wooden board and piece version was designed by Intellego holzspiele. [ 4 ] Widderich appeared with the game in the fifth season of the Canadian television show Dragons' Den on November 24, 2010, where he made a deal for a $150,000 investment in return for 10% royalties, from three of the panelists. [ 5 ] [ 6 ] [ 7 ] The solver is given a 9x9 grid, partially divided by black cells into compartments . Each compartment, vertically or horizontally, must contain a straight – a set of consecutive numbers, but in any order. For example: 7, 6, 4, 5 is valid, but 1, 3, 8, 7 is not. Like Sudoku, the solver must fill the remaining white cells with numbers 1 to 9 (or 1 to n in puzzles with N cells per side) such that each row and column contains unique digits. Whereas Sudoku has the additional constraint of 3x3 boxes , in Str8ts rows and columns are divided by black cells. Additional clues are set in some of the black cells – these numbers remove that digit as an option in the row and column. Such digits do not form part of any straight. There are a number of variants to the basic game, including Symmetric Str8ts, Asymmetric Str8ts, Mini-Str8ts, and Transformations. As Str8ts belongs to the same class of puzzles as Sudoku the puzzle can demonstrate a wide spectrum of relative difficulty. The grade is determined by a combination of opportunities to solve at each stage and the difficulty of the strategy that grants each solution. An easier puzzle will have many places where a logical deduction can place a solution or eliminate a candidate number. When the whole puzzle is assessed in this way, plus some heuristics, a score can be determined. Over a large number of puzzles (>10,000) a bell curve of scores can be produced. This can be quartiled to group puzzles into specific grades. [ 8 ]
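The compartment rule described above is easy to express in code. The following is a minimal sketch (not taken from the puzzle's published materials): a compartment's white cells must hold distinct digits forming one consecutive run, in any order.

```python
# Minimal sketch of the Str8ts compartment ("straight") rule described above.
def is_straight(cells):
    """True if the digits are distinct and form a consecutive run in any order."""
    if len(set(cells)) != len(cells):                 # digits must not repeat
        return False
    return max(cells) - min(cells) == len(cells) - 1  # span matches the cell count

print(is_straight([7, 6, 4, 5]))   # True  -- the valid example from the text
print(is_straight([1, 3, 8, 7]))   # False -- the invalid example from the text
```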
https://en.wikipedia.org/wiki/Str8ts
In mathematics , the Strahler number or Horton–Strahler number of a mathematical tree is a numerical measure of its branching complexity. These numbers were first developed in hydrology , as a way of measuring the complexity of rivers and streams, by Robert E. Horton ( 1945 ) and Arthur Newell Strahler ( 1952 , 1957 ). In this application, they are referred to as the Strahler stream order and are used to define stream size based on a hierarchy of tributaries . The same numbers also arise in the analysis of L-systems and of hierarchical biological structures such as (biological) trees and animal respiratory and circulatory systems, in register allocation for compilation of high-level programming languages and in the analysis of social networks . All trees in this context are directed graphs , oriented from the root towards the leaves; in other words, they are arborescences . The degree of a node in a tree is just its number of children. One may assign a Strahler number to all nodes of a tree, in bottom-up order, as follows: a leaf (a node with no children) has Strahler number 1; if a node has one child with Strahler number i and every other child has a smaller number, then the node's Strahler number is i again; and if a node has two or more children with Strahler number i and no child with a greater number, then the node's Strahler number is i + 1. The Strahler number of a tree is the number of its root node. Algorithmically , these numbers may be assigned by performing a depth-first search and assigning each node's number in postorder . The same numbers may also be generated via a pruning process in which the tree is simplified in a sequence of stages, where in each stage one removes all leaf nodes and all of the paths of degree-one nodes leading to leaves: the Strahler number of a node is the stage at which it would be removed by this process, and the Strahler number of a tree is the number of stages required to remove all of its nodes. Another equivalent definition of the Strahler number of a tree is that it is the height of the largest complete binary tree that can be homeomorphically embedded into the given tree; the Strahler number of a node in a tree is similarly the height of the largest complete binary tree that can be embedded below that node. Any node with Strahler number i must have at least two descendants with Strahler number i − 1, at least four descendants with Strahler number i − 2, etc., and at least 2^(i−1) leaf descendants. Therefore, in a tree with n nodes, the largest possible Strahler number is log₂(n + 1). [ 1 ] However, unless the tree forms a complete binary tree its Strahler number will be less than this bound. In an n -node binary tree , chosen uniformly at random among all possible binary trees , the expected index of the root is with high probability very close to log₄ n . [ 2 ] In the application of the Strahler stream order to hydrology, each segment of a stream or river within a river network is treated as a node in a tree, with the next segment downstream as its parent. When two first-order streams come together, they form a second-order stream. When two second-order streams come together, they form a third-order stream. Streams of lower order joining a higher order stream do not change the order of the higher stream. Thus, if a first-order stream joins a second-order stream, it remains a second-order stream. It is not until a second-order stream combines with another second-order stream that it becomes a third-order stream. As with mathematical trees, a segment with index i must be fed by at least 2^(i−1) different tributaries of index 1. Shreve noted that Horton's and Strahler's Laws should be expected from any topologically random distribution.
A later review of the relationships confirmed this argument, establishing that, from the properties the laws describe, no conclusion can be drawn to explain the structure or origin of the stream network. [ 3 ] [ 4 ] To qualify as a stream, a hydrological feature must be either recurring or perennial . Recurring (or "intermittent") streams have water in the channel for at least part of the year. The index of a stream or river may range from 1 (a stream with no tributaries) to 12 (globally the most powerful river, the Amazon , at its mouth). The Ohio River is of order eight and the Mississippi River is of order 10. Estimates are that 80% of the streams on the planet are first to third order headwater streams . [ 5 ] If the bifurcation ratio of a river network is high, then there is a higher chance of flooding. There would also be a lower time of concentration. [ 6 ] The bifurcation ratio can also show which parts of a drainage basin are more likely to flood, comparatively, by looking at the separate ratios. Most British rivers have a bifurcation ratio of between 3 and 5. [ 7 ] Gleyzer et al. (2004) describe how to compute Strahler stream order values in a GIS application. This algorithm is implemented by RivEX , an ESRI ArcGIS Pro 3.4.x tool. The input to their algorithm is a network of the centre lines of the bodies of water, represented as arcs (or edges) joined at nodes. Lake boundaries and river banks should not be used as arcs, as these will generally form a non-tree network with an incorrect topology. Alternative stream ordering systems have been developed by Shreve [ 8 ] [ 9 ] and Hodgkinson et al. [ 3 ] A statistical comparison of Strahler and Shreve systems, together with an analysis of stream/link lengths, is given by Smart. [ 10 ] The Strahler numbering may be applied in the statistical analysis of any hierarchical system, not just to rivers. When translating a high-level programming language to assembly language, the minimum number of registers required to evaluate an expression tree is exactly its Strahler number. In this context, the Strahler number may also be called the register number . [ 13 ] For expression trees that require more registers than are available, the Sethi–Ullman algorithm may be used to translate an expression tree into a sequence of machine instructions that uses the registers as efficiently as possible, minimizing the number of times intermediate values are spilled from registers to main memory and the total number of instructions in the resulting compiled code. Associated with the Strahler numbers of a tree are bifurcation ratios , numbers describing how close to balanced a tree is. For each order i in a hierarchy, the i th bifurcation ratio is b_i = n_i / n_{i+1} , where n_i denotes the number of nodes with order i . The bifurcation ratio of an overall hierarchy may be taken by averaging the bifurcation ratios at different orders. In a complete binary tree, the bifurcation ratio will be 2, while other trees will have larger bifurcation ratios. It is a dimensionless number. The pathwidth of an arbitrary undirected graph G may be defined as the smallest number w such that there exists an interval graph H containing G as a subgraph, with the largest clique in H having w + 1 vertices.
For trees (viewed as undirected graphs by forgetting their orientation and root) the pathwidth differs from the Strahler number, but is closely related to it: in a tree with pathwidth w and Strahler number s , these two numbers are related by the inequalities [ 14 ] The ability to handle graphs with cycles and not just trees gives pathwidth extra versatility compared to the Strahler number. However, unlike the Strahler number, the pathwidth is defined only for the whole graph, and not separately for each node in the graph.
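The bottom-up assignment described above is straightforward to implement. The following is a minimal sketch (not from the sources cited here), using a dictionary that maps each node to its list of children:

```python
# Minimal sketch of the bottom-up Strahler rule: a leaf gets 1; a node keeps the
# maximum of its children's numbers unless that maximum is achieved by two or
# more children, in which case the node's number is one larger.
def strahler(children, node):
    kids = children.get(node, [])
    if not kids:
        return 1
    values = [strahler(children, k) for k in kids]
    top = max(values)
    return top + 1 if values.count(top) >= 2 else top

# Example arborescence: root r has children a and b; a has two leaf children.
tree = {"r": ["a", "b"], "a": ["c", "d"]}
print(strahler(tree, "r"))   # prints 2
```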
https://en.wikipedia.org/wiki/Strahler_number
Straight-Through Quality ( STQ ) refers to approaches to, and outputs of, test automation that have quality and deliver business benefit. STQ takes its name from the business concept of straight-through processing (STP), and also acts as a tool and enabler for STP. Traditional techniques for testing and delivery have often required a great deal of manual support and intervention. These approaches are subject to human error, cost of delay and lack of reuse. They also have the negative side-effect of being unable to deliver ' fail-fast ' approaches, which have proven popular with Agile practitioners. Traditional approaches have typically been expensive, with whole siloed departments created within commercial companies to deliver quality assurance and deployment alone. STQ as an approach aims to resolve this problem. Tangible examples of STQ approaches in the software industry are continuous integration (CI) and continuous delivery (CD). Combined, these can ensure that software delivery is integrated, automatically tested and ready for automatic delivery at any time. Together, CI/CD can enable STQ, which can be used as business-output terminology for business users who do not understand the technical complexities of CI/CD. [ 1 ] [ unreliable source? ]
https://en.wikipedia.org/wiki/Straight-Through_Quality
A straightedge or straight edge is a tool used for drawing straight lines, or checking their straightness. If it has equally spaced markings along its length, it is usually called a ruler . Straightedges are used in the automotive service and machining industry to check the flatness of machined mating surfaces. They are also used in the decorating industry for cutting and hanging wallpaper. [ 1 ] True straightness can in some cases be checked by using a laser line level as an optical straightedge: it can illuminate an accurately straight line on a flat surface such as the edge of a plank or shelf. A pair of straightedges called winding sticks are used in woodworking to make warping easier to perceive in pieces of wood. Three straight edges can be used to test and calibrate one another to a certain extent; however, this procedure does not control twist. For accurate calibration of a straight edge, a surface plate must be used. [ 2 ] An idealized straightedge is used in compass-and-straightedge constructions in plane geometry . It may be used to draw a line through two given points, or to extend an existing line segment. The idealized straightedge is assumed to be infinitely long and to carry no markings. It may not be marked or used together with the compass so as to transfer the length of one segment to another. It is possible to do all compass and straightedge constructions without the straightedge . That is, it is possible, using only a compass, to find the intersection of two lines given two points on each, and to find the tangent points to circles. It is not, however, possible to do all constructions using only a straightedge. It is possible to do them with straightedge alone given a circle and its center .
https://en.wikipedia.org/wiki/Straightedge
In differential calculus , the domain-straightening theorem states that, given a vector field $X$ on a manifold , there exist local coordinates $y_1, \dots, y_n$ such that $X = \partial/\partial y_1$ in a neighborhood of a point where $X$ is nonzero. The theorem is also known as straightening out of a vector field . The Frobenius theorem in differential geometry can be considered as a higher-dimensional generalization of this theorem. It is clear that we only have to find such coordinates at $0$ in $\mathbb{R}^n$. First we write $X = \sum_j f_j(x)\,\partial/\partial x_j$, where $x$ is some coordinate system at $0$ and $f_1, f_2, \dots, f_n$ are the component functions of $X$ relative to $x$. Let $f = (f_1, \dots, f_n)$. By a linear change of coordinates, we can assume $f(0) = (1, 0, \dots, 0)$. Let $\Phi(t, p)$ be the solution of the initial value problem $\dot{x} = f(x)$, $x(0) = p$, and let $\psi(x_1, x_2, \dots, x_n) = \Phi(x_1, (0, x_2, \dots, x_n))$. $\Phi$ (and thus $\psi$) is smooth by smooth dependence on initial conditions in ordinary differential equations . It follows that $\partial \psi/\partial x_1 = f(\psi)$, and, since $\psi(0, x_2, \dots, x_n) = \Phi(0, (0, x_2, \dots, x_n)) = (0, x_2, \dots, x_n)$, the differential $d\psi$ is the identity at $0$. Thus, $y = \psi^{-1}(x)$ is a coordinate system at $0$. Finally, since $x = \psi(y)$, we have $\partial x_j/\partial y_1 = f_j(\psi(y)) = f_j(x)$ and so $\partial/\partial y_1 = X$ as required.
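As a concrete illustration of the construction (an added example, not part of the original article), take $X = \partial/\partial x_1 + x_1\,\partial/\partial x_2$ on $\mathbb{R}^2$, which already satisfies $f(0) = (1, 0)$:

```latex
% Worked example (illustration only): straightening X = d/dx1 + x1 d/dx2 near 0.
\[
\dot x_1 = 1,\quad \dot x_2 = x_1
\;\Longrightarrow\;
\Phi\bigl(t,(0,x_2)\bigr) = \left(t,\; x_2 + \tfrac{t^2}{2}\right),
\qquad
\psi(y_1,y_2) = \left(y_1,\; y_2 + \tfrac{y_1^2}{2}\right).
\]
\[
\frac{\partial}{\partial y_1}
= \frac{\partial x_1}{\partial y_1}\,\frac{\partial}{\partial x_1}
+ \frac{\partial x_2}{\partial y_1}\,\frac{\partial}{\partial x_2}
= \frac{\partial}{\partial x_1} + x_1\,\frac{\partial}{\partial x_2}
= X,
\]
% so in the coordinates (y1, y2) the field is the single coordinate derivative d/dy1.
```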
https://en.wikipedia.org/wiki/Straightening_theorem_for_vector_fields
In biology , a strain is a genetic variant, a subtype or a culture within a biological species . Strains are often seen as inherently artificial concepts, characterized by a specific intent for genetic isolation. [ 1 ] This is most easily observed in microbiology where strains are derived from a single cell colony and are typically quarantined by the physical constraints of a Petri dish . Strains are also commonly referred to within virology , botany , and with rodents used in experimental studies . [ citation needed ] It has been said that "there is no universally accepted definition for the terms 'strain', ' variant ', and 'isolate' in the virology community, and most virologists simply copy the usage of terms from others". [ 2 ] A strain is a genetic variant or subtype of a microorganism such as a bacterial strain or a specific strain of a virus , or fungus . For example, a "flu strain" is a certain biological form of the influenza or "flu" virus. These flu strains are characterized by their differing isoforms of surface proteins. New viral strains can be created due to mutation or swapping of genetic components when two or more viruses infect the same cell in nature. [ 3 ] These phenomena are known respectively as antigenic drift and antigenic shift . Microbial strains can also be differentiated by their genetic makeup using metagenomic methods to maximize resolution within species. [ 4 ] This has become a valuable tool to analyze the microbiome . [ citation needed ] Scientists have modified strains of viruses in order to study their behavior, as in the case of the H5N1 influenza virus. While funding for such research has aroused controversy at times due to safety concerns, leading to a temporary pause, it has subsequently proceeded. [ 5 ] [ 6 ] In biotechnology, microbial strains have been constructed to establish metabolic pathways suitable for treating a variety of applications. [ 7 ] Historically, a major effort of metabolic research has been devoted to the field of biofuel production. [ 8 ] Escherichia coli is most common species for prokaryotic strain engineering. Scientists have succeeded in establishing viable minimal genomes from which new strains can be developed. [ 9 ] These minimal strains provide a near guarantee that experiments on genes outside the minimal framework will not be effected by non-essential pathways. Optimized strains of E. coli are typically used for this application. E. coli are also often used as a chassis for the expression of simple proteins. These strains, such as BL21, are genetically modified to minimize protease activity, hence enabling potential for high efficiency industrial scale protein production . [ 10 ] Strains of yeasts are the most common subjects of eukaryotic genetic modification, especially with respect to industrial fermentation . [ 11 ] The term has no official ranking status in botany; the term refers to the collective descendants produced from a common ancestor that share a uniform morphological or physiological character. [ 12 ] A strain is a designated group of offspring that are either descended from a modified plant (produced by conventional breeding or by biotechnological means), or which result from genetic mutation. 
[ citation needed ] As an example, some rice strains are made by inserting new genetic material into a rice plant, [ 13 ] all the descendants of the genetically modified rice plant are a strain with unique genetic information that is passed on to later generations; the strain designation, which is normally a number or a formal name, covers all the plants that descend from the originally modified plant. The rice plants in the strain can be bred to other rice strains or cultivars , and if desirable plants are produced, these are further bred to stabilize the desirable traits; the stabilized plants that can be propagated and "come true" (remain identical to the parent plant) are given a cultivar name and released into production to be used by farmers. [ citation needed ] A laboratory mouse or rat strain is a group of animals that is genetically uniform. Strains are used in laboratory experiments. Mouse strains can be inbred , mutated , or genetically modified , while rat strains are usually inbred . A given inbred rodent population is considered genetically identical after 20 generations of sibling-mating. Many rodent strains have been developed for a variety of disease models, and they are also often used to test drug toxicity. [ 14 ] [ 15 ] [ 16 ] The common fruit fly ( Drosophila melanogaster ) was among the first organisms used for genetic analysis , has a simple genome , and is very well understood. It has remained a popular model organism for many other reasons, like the ease of its breeding and maintenance, and the speed and volume of its reproduction. Various specific strains have been developed, including a flightless version with stunted wings (also used in the pet trade as live food for small reptiles and amphibians). [ citation needed ]
https://en.wikipedia.org/wiki/Strain_(biology)
In chemistry , a molecule experiences strain when its chemical structure undergoes some stress which raises its internal energy in comparison to a strain-free reference compound . The internal energy of a molecule consists of all the energy stored within it. A strained molecule has an additional amount of internal energy which an unstrained molecule does not. This extra internal energy, or strain energy , can be likened to a compressed spring . [ 1 ] Much like a compressed spring must be held in place to prevent release of its potential energy , a molecule can be held in an energetically unfavorable conformation by the bonds within that molecule. Without the bonds holding the conformation in place, the strain energy would be released. The equilibrium of two molecular conformations is determined by the difference in Gibbs free energy of the two conformations. From this energy difference, the equilibrium constant for the two conformations can be determined. If there is a decrease in Gibbs free energy from one state to another, this transformation is spontaneous and the lower energy state is more stable . A highly strained, higher energy molecular conformation will spontaneously convert to the lower energy molecular conformation. Enthalpy and entropy are related to Gibbs free energy through the equation (at a constant temperature ): Enthalpy is typically the more important thermodynamic function for determining a more stable molecular conformation. [ 1 ] While there are different types of strain, the strain energy associated with all of them is due to the weakening of bonds within the molecule. Since enthalpy is usually more important, entropy can often be ignored. [ 1 ] This isn't always the case; if the difference in enthalpy is small, entropy can have a larger effect on the equilibrium. For example, n-butane has two possible conformations, anti and gauche . The anti conformation is more stable by 0.9 kcal mol −1 . [ 1 ] We would expect that butane is roughly 82% anti and 18% gauche at room temperature. However, there are two possible gauche conformations and only one anti conformation. Therefore, entropy makes a contribution of 0.4 kcal in favor of the gauche conformation. [ 2 ] We find that the actual conformational distribution of butane is 70% anti and 30% gauche at room temperature. The standard heat of formation (Δ f H °) of a compound is described as the enthalpy change when the compound is formed from its separated elements. [ 3 ] When the heat of formation for a compound is different from either a prediction or a reference compound, this difference can often be attributed to strain. For example, Δ f H ° for cyclohexane is -29.9 kcal mol −1 while Δ f H ° for methylcyclopentane is -25.5 kcal mol −1 . [ 1 ] Despite having the same atoms and number of bonds, methylcyclopentane is higher in energy than cyclohexane. This difference in energy can be attributed to the ring strain of a five-membered ring which is absent in cyclohexane. Experimentally, strain energy is often determined using heats of combustion which is typically an easy experiment to perform. Determining the strain energy within a molecule requires knowledge of the expected internal energy without the strain. There are two ways do this. First, one could compare to a similar compound that lacks strain, such as in the previous methylcyclohexane example. Unfortunately, it can often be difficult to obtain a suitable compound. An alternative is to use Benson group increment theory . 
As long as suitable group increments are available for the atoms within a compound, a prediction of Δ f H ° can be made. If the experimental Δ f H ° differs from the predicted Δ f H °, this difference in energy can be attributed to strain energy. Van der Waals strain , or steric strain, occurs when atoms are forced to get closer than their Van der Waals radii allow. [ 4 ] : 5 Specifically, Van der Waals strain is considered a form of strain where the interacting atoms are at least four bonds away from each other. [ 5 ] The amount on steric strain in similar molecules is dependent on the size of the interacting groups; bulky tert-butyl groups take up much more space than methyl groups and often experience greater steric interactions. The effects of steric strain in the reaction of trialkylamines and trimethylboron were studied by Nobel laureate Herbert C. Brown et al. [ 6 ] They found that as the size of the alkyl groups on the amine were increased, the equilibrium constant decreased as well. The shift in equilibrium was attributed to steric strain between the alkyl groups of the amine and the methyl groups on boron. There are situations where seemingly identical conformations are not equal in strain energy. Syn-pentane strain is an example of this situation. There are two different ways to put both of the bonds the central in n -pentane into a gauche conformation, one of which is 3 kcal mol −1 higher in energy than the other. [ 1 ] When the two methyl-substituted bonds are rotated from anti to gauche in opposite directions, the molecule assumes a cyclopentane-like conformation where the two terminal methyl groups are brought into proximity. If the bonds are rotated in the same direction, this doesn't occur. The steric strain between the two terminal methyl groups accounts for the difference in energy between the two similar, yet very different conformations. Allylic strain , or A 1,3 strain is closely associated to syn-pentane strain. An example of allylic strain can be seen in the compound 2-pentene . It's possible for the ethyl substituent of the olefin to rotate such that the terminal methyl group is brought near to the vicinal methyl group of the olefin. These types of compounds usually take a more linear conformation to avoid the steric strain between the substituents. [ 1 ] 1,3-diaxial strain is another form of strain similar to syn-pentane. In this case, the strain occurs due to steric interactions between a substituent of a cyclohexane ring ('α') and gauche interactions between the alpha substituent and both methylene carbons two bonds away from the substituent in question (hence, 1,3-diaxial interactions). [ 4 ] : 10 When the substituent is axial , it is brought near to an axial gamma hydrogen. The amount of strain is largely dependent on the size of the substituent and can be relieved by forming into the major chair conformation placing the substituent in an equatorial position. The difference in energy between conformations is called the A value and is well known for many different substituents. The A value is a thermodynamic parameter and was originally measured along with other methods using the Gibbs free energy equation and, for example, the Meerwein–Ponndorf–Verley reduction / Oppenauer oxidation equilibrium for the measurement of axial versus equatorial values of cyclohexanone/cyclohexanol (0.7 kcal mol −1 ). [ 7 ] Torsional strain is the resistance to bond twisting. In cyclic molecules, it is also called Pitzer strain . 
Torsional strain occurs when atoms separated by three bonds are placed in an eclipsed conformation instead of the more stable staggered conformation . The barrier of rotation between staggered conformations of ethane is approximately 2.9 kcal mol −1 . [ 1 ] It was initially believed that the barrier to rotation was due to steric interactions between vicinal hydrogens, but the Van der Waals radius of hydrogen is too small for this to be the case. Recent research has shown that the staggered conformation may be more stable due to a hyperconjugative effect . [ 8 ] Rotation away from the staggered conformation interrupts this stabilizing force. More complex molecules, such as butane, have more than one possible staggered conformation. The anti conformation of butane is approximately 0.9 kcal mol −1 (3.8 kJ mol −1 ) more stable than the gauche conformation. [ 1 ] Both of these staggered conformations are much more stable than the eclipsed conformations. Instead of a hyperconjugative effect, such as that in ethane , the strain energy in butane is due to both steric interactions between methyl groups and angle strain caused by these interactions. According to the VSEPR theory of molecular bonding, the preferred geometry of a molecule is that in which both bonding and non-bonding electrons are as far apart as possible. In molecules, it is quite common for these angles to be somewhat compressed or expanded compared to their optimal value. This strain is referred to as angle strain, or Baeyer strain. [ 9 ] The simplest examples of angle strain are small cycloalkanes such as cyclopropane and cyclobutane, which are discussed below. Furthermore, there is often eclipsing or Pitzer strain in cyclic systems. These and possible transannular interactions were summarized early by H.C. Brown as internal strain, or I-Strain. [ 10 ] Molecular mechanics or force field approaches allow to calculate such strain contributions, which then can be correlated e.g. with reaction rates or equilibria. Many reactions of alicyclic compounds, including equilibria, redox and solvolysis reactions, which all are characterized by transition between sp2 and sp3 state at the reaction center, correlate with corresponding strain energy differences SI (sp2 -sp3). [ 11 ] The data reflect mainly the unfavourable vicinal angles in medium rings, as illustrated by the severe increase of ketone reduction rates with increasing SI (Figure 1). Another example is the solvolysis of bridgehead tosylates with steric energy differences between corresponding bromide derivatives (sp3) and the carbenium ion as sp2- model for the transition state . [ 12 ] (Figure 2) In principle, angle strain can occur in acyclic compounds, but the phenomenon is rare. Cyclohexane is considered a benchmark in determining ring strain in cycloalkanes and it is commonly accepted that there is little to no strain energy. [ 1 ] In comparison, smaller cycloalkanes are much higher in energy due to increased strain. Cyclopropane is analogous to a triangle and thus has bond angles of 60°, much lower than the preferred 109.5° of an sp 3 hybridized carbon. Furthermore, the hydrogens in cyclopropane are eclipsed. Cyclobutane experiences similar strain, with bond angles of approximately 88° (it isn't completely planar) and eclipsed hydrogens. The strain energy of cyclopropane and cyclobutane are 27.5 and 26.3 kcal mol −1 , respectively. 
[ 1 ] Cyclopentane experiences much less strain, mainly due to torsional strain from eclipsed hydrogens: its preferred conformations interconvert by a process called pseudorotation . [ 4 ] : 14 Ring strain can be considerably higher in bicyclic systems . For example, bicyclobutane , C 4 H 6 , is noted for being one of the most strained compounds that is isolatable on a large scale; its strain energy is estimated at 63.9 kcal mol −1 (267 kJ mol −1 ). [ 13 ] [ 14 ] Medium-sized rings (7–13 carbons) experience more strain energy than cyclohexane, due mostly to deviation from ideal vicinal angles, or Pitzer strain. Molecular mechanics calculations indicate that transannular strain, also known as Prelog strain , does not play an essential role. Transannular reactions however, such as 1,5-shifts in cyclooctane substitution reactions, are well known. The amount of strain energy in bicyclic systems is commonly the sum of the strain energy in each individual ring. [ 1 ] This isn't always the case, as sometimes the fusion of rings induces some extra strain. In synthetic allosteric systems there are typically two or more conformers with stability differences due to strain contributions. Positive cooperativity for example results from increased binding of a substrate A to a conformer C2 which is produced by binding of an effector molecule E. If the conformer C2 has a similar stability as another equilibrating conformer C1 a fit induced by the substrate A will lead to binding of A to C2 also in absence of the effector E. Only if the stability of the conformer C2 is significantly smaller, meaning that in absence of an effector E the population of C2 is much smaller than that of C1, the ratio K2/K1 which measures the efficiency of the allosteric signal will increase. The ratio K2/K1 can be related directly to the strain energy difference between the conformers C1 and C2; if it is small higher concentrations of A will directly bind to C2 and make the effector E inefficient. In addition, the response time of such allosteric switches depends on the strain of the conformer interconversion transitions state. [ 15 ]
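The butane numbers quoted earlier (a 0.9 kcal mol −1 enthalpy preference for anti, an entropy contribution of roughly 0.4 kcal mol −1 from the two equivalent gauche conformers, and an approximately 70:30 anti:gauche population) can be checked with a short Boltzmann-distribution calculation. This is an added illustration under the stated assumptions (ideal behaviour, T = 298 K), not a computation from the cited sources.

```python
import math

# Boltzmann populations for n-butane conformers: one anti conformer and two
# equivalent gauche conformers lying ~0.9 kcal/mol higher in enthalpy.
R = 1.987e-3     # gas constant, kcal mol^-1 K^-1
T = 298.0        # assumed room temperature, K
dH = 0.9         # kcal mol^-1, gauche relative to anti

gauche_over_anti = 2 * math.exp(-dH / (R * T))   # factor 2 = gauche degeneracy
p_anti = 1.0 / (1.0 + gauche_over_anti)
print(f"anti ~ {p_anti:.0%}, gauche ~ {1 - p_anti:.0%}")          # ~70% / ~30%
print(f"entropy term T*R*ln(2) ~ {T * R * math.log(2):.2f} kcal/mol")  # ~0.4
```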
https://en.wikipedia.org/wiki/Strain_(chemistry)
In mechanics , strain is defined as relative deformation , compared to a reference position configuration. Different equivalent choices may be made for the expression of a strain field depending on whether it is defined with respect to the initial or the final configuration of the body and on whether the metric tensor or its dual is considered. Strain has dimension of a length ratio , with SI base units of meter per meter (m/m). Hence strains are dimensionless and are usually expressed as a decimal fraction or a percentage . Parts-per notation is also used, e.g., parts per million or parts per billion (sometimes called "microstrains" and "nanostrains", respectively), corresponding to μm /m and nm /m. Strain can be formulated as the spatial derivative of displacement : \[ {\boldsymbol{\varepsilon}} \doteq \frac{\partial}{\partial \mathbf{X}}\left(\mathbf{x} - \mathbf{X}\right) = {\boldsymbol{F}}' - {\boldsymbol{I}}, \] where $\boldsymbol{I}$ is the identity tensor . The displacement of a body may be expressed in the form x = F ( X ) , where X is the reference position of material points of the body; displacement has units of length and does not distinguish between rigid body motions (translations and rotations) and deformations (changes in shape and size) of the body. The spatial derivative of a uniform translation is zero, thus strains measure how much a given displacement differs locally from a rigid-body motion. [ 1 ] A strain is in general a tensor quantity. Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components. The amount of stretch or compression along material line elements or fibers is the normal strain , and the amount of distortion associated with the sliding of plane layers over each other is the shear strain , within a deforming body. [ 2 ] This could be applied by elongation, shortening, or volume changes, or angular distortion. [ 3 ] The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers, the normal strain , which pass through that point and also the totality of all the changes in the angle between pairs of lines initially perpendicular to each other, the shear strain , radiating from this point. However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions. If there is an increase in length of the material line, the normal strain is called tensile strain ; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain . Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories: In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials, e.g., elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%; [ 4 ] thus other more complex definitions of strain are required, such as stretch , logarithmic strain , Green strain , and Almansi strain .
Engineering strain , also known as Cauchy strain , is expressed as the ratio of total deformation to the initial dimension of the material body on which forces are applied. In the case of a material line element or fiber axially loaded, its elongation gives rise to an engineering normal strain or engineering extensional strain e , which equals the relative elongation or the change in length Δ L per unit of the original length L of the line element or fibers (in meters per meter). The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have \[ e = \frac{\Delta L}{L} = \frac{l - L}{L}, \] where e is the engineering normal strain , L is the original length of the fiber and l is the final length of the fiber. The true shear strain is defined as the change in the angle (in radians) between two material line elements initially perpendicular to each other in the undeformed or initial configuration. The engineering shear strain is defined as the tangent of that angle, and is equal to the length of deformation at its maximum divided by the perpendicular length in the plane of force application, which sometimes makes it easier to calculate. The stretch ratio or extension ratio (symbol λ) is an alternative measure related to the extensional or normal strain of an axially loaded differential line element. It is defined as the ratio between the final length l and the initial length L of the material line: \[ \lambda = \frac{l}{L}. \] The extension ratio λ is related to the engineering strain e by \[ e = \lambda - 1. \] This equation implies that when the normal strain is zero, so that there is no deformation, the stretch ratio is equal to unity. The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers , which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios. The logarithmic strain ε is also called true strain or Hencky strain . [ 5 ] Considering an incremental strain (Ludwik) \[ \delta\varepsilon = \frac{\delta l}{l}, \] the logarithmic strain is obtained by integrating this incremental strain: \[ \int \delta\varepsilon = \int_{L}^{l} \frac{\delta l}{l} \quad\Longrightarrow\quad \varepsilon = \ln\left(\frac{l}{L}\right) = \ln(\lambda) = \ln(1 + e) = e - \frac{e^2}{2} + \frac{e^3}{3} - \cdots \] where e is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path.
[ 2 ] The Green strain is defined as: ε G = 1 2 ( l 2 − L 2 L 2 ) = 1 2 ( λ 2 − 1 ) {\displaystyle \varepsilon _{G}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{L^{2}}}\right)={\tfrac {1}{2}}(\lambda ^{2}-1)} The Euler-Almansi strain is defined as ε E = 1 2 ( l 2 − L 2 l 2 ) = 1 2 ( 1 − 1 λ 2 ) {\displaystyle \varepsilon _{E}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{l^{2}}}\right)={\tfrac {1}{2}}\left(1-{\frac {1}{\lambda ^{2}}}\right)} The (infinitesimal) strain tensor (symbol ε {\displaystyle {\boldsymbol {\varepsilon }}} ) is defined in the International System of Quantities (ISQ), more specifically in ISO 80000-4 (Mechanics), as a "tensor quantity representing the deformation of matter caused by stress. Strain tensor is symmetric and has three linear strain and three shear strain (Cartesian) components." [ 6 ] ISO 80000-4 further defines linear strain as the "quotient of change in length of an object and its length" and shear strain as the "quotient of parallel displacement of two surfaces of a layer and the thickness of the layer". [ 6 ] Thus, strains are classified as either normal or shear . A normal strain is perpendicular to the face of an element, and a shear strain is parallel to it. These definitions are consistent with those of normal stress and shear stress . The strain tensor can then be expressed in terms of normal and shear components as: ε _ _ = [ ε x x ε x y ε x z ε y x ε y y ε y z ε z x ε z y ε z z ] = [ ε x x 1 2 γ x y 1 2 γ x z 1 2 γ y x ε y y 1 2 γ y z 1 2 γ z x 1 2 γ z y ε z z ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{xx}&{\tfrac {1}{2}}\gamma _{xy}&{\tfrac {1}{2}}\gamma _{xz}\\{\tfrac {1}{2}}\gamma _{yx}&\varepsilon _{yy}&{\tfrac {1}{2}}\gamma _{yz}\\{\tfrac {1}{2}}\gamma _{zx}&{\tfrac {1}{2}}\gamma _{zy}&\varepsilon _{zz}\\\end{bmatrix}}} Consider a two-dimensional, infinitesimal, rectangular material element with dimensions dx × dy , which, after deformation, takes the form of a rhombus . The deformation is described by the displacement field u . From the geometry of the adjacent figure we have l e n g t h ( A B ) = d x {\displaystyle \mathrm {length} (AB)=dx} and l e n g t h ( a b ) = ( d x + ∂ u x ∂ x d x ) 2 + ( ∂ u y ∂ x d x ) 2 = d x 2 ( 1 + ∂ u x ∂ x ) 2 + d x 2 ( ∂ u y ∂ x ) 2 = d x ( 1 + ∂ u x ∂ x ) 2 + ( ∂ u y ∂ x ) 2 {\displaystyle {\begin{aligned}\mathrm {length} (ab)&={\sqrt {\left(dx+{\frac {\partial u_{x}}{\partial x}}dx\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}dx\right)^{2}}}\\&={\sqrt {dx^{2}\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+dx^{2}\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\\&=dx~{\sqrt {\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\end{aligned}}} For very small displacement gradients the squares of the derivative of u y {\displaystyle u_{y}} and u x {\displaystyle u_{x}} are negligible and we have l e n g t h ( a b ) ≈ d x ( 1 + ∂ u x ∂ x ) = d x + ∂ u x ∂ x d x {\displaystyle \mathrm {length} (ab)\approx dx\left(1+{\frac {\partial u_{x}}{\partial x}}\right)=dx+{\frac {\partial u_{x}}{\partial x}}dx} For an isotropic material that obeys Hooke's law , a normal stress will cause a normal strain. Normal strains produce dilations . 
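For a single axially stretched fibre the Green and Euler-Almansi measures defined above can be compared directly with the engineering strain; the Python sketch below uses arbitrary stretch ratios and shows that all measures agree for small deformations and diverge for large ones.

# Compare engineering, Green and Euler-Almansi strain for a uniaxial stretch.
for lam in (1.001, 1.05, 1.5, 2.0):           # illustrative stretch ratios
    e = lam - 1.0                             # engineering strain
    green = 0.5 * (lam**2 - 1.0)              # Green strain
    almansi = 0.5 * (1.0 - 1.0 / lam**2)      # Euler-Almansi strain
    print(f"lambda={lam:5.3f}  e={e:7.4f}  Green={green:7.4f}  Almansi={almansi:7.4f}")
# Near lambda = 1 the three values coincide; at large stretch they differ
# appreciably, which is why the choice of strain measure matters for
# elastomers but hardly at all for typical structural metals.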
The normal strain in the x -direction of the rectangular element is defined by ε x = extension original length = l e n g t h ( a b ) − l e n g t h ( A B ) l e n g t h ( A B ) = ∂ u x ∂ x {\displaystyle \varepsilon _{x}={\frac {\text{extension}}{\text{original length}}}={\frac {\mathrm {length} (ab)-\mathrm {length} (AB)}{\mathrm {length} (AB)}}={\frac {\partial u_{x}}{\partial x}}} Similarly, the normal strain in the y - and z -directions becomes ε y = ∂ u y ∂ y , ε z = ∂ u z ∂ z {\displaystyle \varepsilon _{y}={\frac {\partial u_{y}}{\partial y}}\quad ,\qquad \varepsilon _{z}={\frac {\partial u_{z}}{\partial z}}} The engineering shear strain ( γ xy ) is defined as the change in angle between lines AC and AB . Therefore, γ x y = α + β {\displaystyle \gamma _{xy}=\alpha +\beta } From the geometry of the figure, we have tan ⁡ α = ∂ u y ∂ x d x d x + ∂ u x ∂ x d x = ∂ u y ∂ x 1 + ∂ u x ∂ x tan ⁡ β = ∂ u x ∂ y d y d y + ∂ u y ∂ y d y = ∂ u x ∂ y 1 + ∂ u y ∂ y {\displaystyle {\begin{aligned}\tan \alpha &={\frac {{\tfrac {\partial u_{y}}{\partial x}}dx}{dx+{\tfrac {\partial u_{x}}{\partial x}}dx}}={\frac {\tfrac {\partial u_{y}}{\partial x}}{1+{\tfrac {\partial u_{x}}{\partial x}}}}\\\tan \beta &={\frac {{\tfrac {\partial u_{x}}{\partial y}}dy}{dy+{\tfrac {\partial u_{y}}{\partial y}}dy}}={\frac {\tfrac {\partial u_{x}}{\partial y}}{1+{\tfrac {\partial u_{y}}{\partial y}}}}\end{aligned}}} For small displacement gradients we have ∂ u x ∂ x ≪ 1 ; ∂ u y ∂ y ≪ 1 {\displaystyle {\frac {\partial u_{x}}{\partial x}}\ll 1~;~~{\frac {\partial u_{y}}{\partial y}}\ll 1} For small rotations, i.e. α and β are ≪ 1 we have tan α ≈ α , tan β ≈ β . Therefore, α ≈ ∂ u y ∂ x ; β ≈ ∂ u x ∂ y {\displaystyle \alpha \approx {\frac {\partial u_{y}}{\partial x}}~;~~\beta \approx {\frac {\partial u_{x}}{\partial y}}} thus γ x y = α + β = ∂ u y ∂ x + ∂ u x ∂ y {\displaystyle \gamma _{xy}=\alpha +\beta ={\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}} By interchanging x and y and u x and u y , it can be shown that γ xy = γ yx . 
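The small-angle approximation used in this derivation can be checked with a few lines of Python; the displacement gradients below are invented small numbers, and the exact angles α and β from the deformed geometry are compared with the small-strain expression γ_xy = ∂u_y/∂x + ∂u_x/∂y.

import math

# Illustrative small displacement gradients of the rectangular element.
dux_dx, dux_dy = 0.0040, 0.0010
duy_dx, duy_dy = 0.0015, -0.0020

# Exact angles from the geometry of the deformed element.
alpha = math.atan(duy_dx / (1 + dux_dx))
beta = math.atan(dux_dy / (1 + duy_dy))

gamma_exact = alpha + beta            # change of the initially right angle
gamma_small = duy_dx + dux_dy         # small-strain approximation

print(f"gamma from exact angles      = {gamma_exact:.7f} rad")
print(f"gamma from small-strain form = {gamma_small:.7f} rad")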
Similarly, for the yz - and xz -planes, we have γ y z = γ z y = ∂ u y ∂ z + ∂ u z ∂ y , γ z x = γ x z = ∂ u z ∂ x + ∂ u x ∂ z {\displaystyle \gamma _{yz}=\gamma _{zy}={\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\quad ,\qquad \gamma _{zx}=\gamma _{xz}={\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}} The volumetric strain, also called bulk strain, is the relative variation of the volume, as arising from dilation or compression ; it is the first strain invariant or trace of the tensor: δ = Δ V V 0 = I 1 = ε 11 + ε 22 + ε 33 {\displaystyle \delta ={\frac {\Delta V}{V_{0}}}=I_{1}=\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}} Actually, if we consider a cube with an edge length a , it is a quasi-cube after the deformation (the variations of the angles do not change the volume) with the dimensions a ⋅ ( 1 + ε 11 ) × a ⋅ ( 1 + ε 22 ) × a ⋅ ( 1 + ε 33 ) {\displaystyle a\cdot (1+\varepsilon _{11})\times a\cdot (1+\varepsilon _{22})\times a\cdot (1+\varepsilon _{33})} and V 0 = a 3 , thus Δ V V 0 = ( 1 + ε 11 + ε 22 + ε 33 + ε 11 ⋅ ε 22 + ε 11 ⋅ ε 33 + ε 22 ⋅ ε 33 + ε 11 ⋅ ε 22 ⋅ ε 33 ) ⋅ a 3 − a 3 a 3 {\displaystyle {\frac {\Delta V}{V_{0}}}={\frac {\left(1+\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}+\varepsilon _{11}\cdot \varepsilon _{22}+\varepsilon _{11}\cdot \varepsilon _{33}+\varepsilon _{22}\cdot \varepsilon _{33}+\varepsilon _{11}\cdot \varepsilon _{22}\cdot \varepsilon _{33}\right)\cdot a^{3}-a^{3}}{a^{3}}}} as we consider small deformations, 1 ≫ ε i i ≫ ε i i ⋅ ε j j ≫ ε 11 ⋅ ε 22 ⋅ ε 33 {\displaystyle 1\gg \varepsilon _{ii}\gg \varepsilon _{ii}\cdot \varepsilon _{jj}\gg \varepsilon _{11}\cdot \varepsilon _{22}\cdot \varepsilon _{33}} therefore the formula. A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet , von Neumann and Jordan , states that, if the lengths of the tangent vectors fulfil the axioms of a norm and the parallelogram law , then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula , with a positive definite bilinear map called the metric tensor .
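Returning to the volumetric strain defined above, the quality of the approximation ΔV/V0 ≈ ε11 + ε22 + ε33 can be checked directly against the exact product of the edge stretches; the strain values in this Python sketch are arbitrary small numbers.

eps = (0.0020, -0.0005, 0.0013)   # illustrative principal normal strains

# Exact relative volume change of the deformed quasi-cube.
exact = (1 + eps[0]) * (1 + eps[1]) * (1 + eps[2]) - 1
# First strain invariant (trace), the small-strain volumetric strain.
trace = sum(eps)

print(f"exact dV/V0          = {exact:.8f}")
print(f"trace (eps11+22+33)  = {trace:.8f}")
# The difference is second order in the strains, which justifies using the
# trace of the strain tensor as the volumetric strain for small deformations.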
https://en.wikipedia.org/wiki/Strain_(mechanics)
In physics , the elastic potential energy gained by a wire during elongation with a tensile (stretching) or compressive (contractile) force is called strain energy. For linearly elastic materials, the strain energy is U = 1 2 σ ε V = σ 2 V 2 E {\displaystyle U={\tfrac {1}{2}}\sigma \varepsilon V={\frac {\sigma ^{2}V}{2E}}} , where σ is stress , ε is strain , V is volume, and E is Young's modulus . The external work done on an elastic member in causing it to distort from its unstressed state is transformed into strain energy, which is a form of potential energy; strain energy stored as elastic deformation is mostly recoverable as mechanical work. In a molecule , strain energy is released when the constituent atoms are allowed to rearrange themselves in a chemical reaction. [ 1 ] For example, the heat of combustion of cyclopropane (696 kJ/mol) is higher than that of propane (657 kJ/mol) for each additional CH 2 unit , reflecting the ring strain stored in the smaller molecule. Compounds with unusually large strain energy include tetrahedranes , propellanes , cubane-type clusters , fenestranes and cyclophanes .
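As a worked example of the linear-elastic formula, the Python sketch below evaluates the strain energy stored in an axially loaded steel wire; the modulus, wire dimensions and load are invented illustrative values, not taken from the article.

import math

E = 200e9      # Young's modulus of steel, Pa (typical textbook value)
d = 2e-3       # wire diameter, m
L = 1.5        # wire length, m
F = 400.0      # axial force, N

A = math.pi * d**2 / 4        # cross-sectional area, m^2
sigma = F / A                 # stress, Pa
eps = sigma / E               # strain, from Hooke's law
V = A * L                     # volume of the wire, m^3
U = 0.5 * sigma * eps * V     # strain energy U = (1/2) sigma eps V, in joules

print(f"stress = {sigma/1e6:.1f} MPa, strain = {eps:.2e}, energy = {U*1e3:.1f} mJ")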
https://en.wikipedia.org/wiki/Strain_energy
The strain hardening exponent (also called the strain hardening index ), usually denoted n {\displaystyle n} , is a measured parameter that quantifies the ability of a material to become stronger due to strain hardening. Strain hardening (work hardening) is the process by which a material's load-bearing capacity increases during plastic (permanent) strain , or deformation . This characteristic is what sets ductile materials apart from brittle materials. [ 1 ] The uniaxial tension test is the primary experimental method used to directly measure a material's stress–strain behavior , providing valuable insights into its strain-hardening behavior. [ 1 ] The strain hardening exponent is sometimes regarded as a constant and occurs in forging and forming calculations as well as in the formula known as the Hollomon equation (after John Herbert Hollomon Jr. , who originally posited it): σ = K ϵ n {\displaystyle \sigma =K\epsilon ^{n}} [ 2 ] where σ {\displaystyle \sigma } represents the applied true stress on the material, ϵ {\displaystyle \epsilon } is the true strain , and K {\displaystyle K} is the strength coefficient. The value of the strain hardening exponent lies between 0 and 1, with a value of 0 implying a perfectly plastic solid and a value of 1 representing a perfectly elastic solid. Most metals have an n {\displaystyle n} -value between 0.10 and 0.50. In one study, strain hardening exponent values extracted from tensile data from 58 steel pipes from natural gas pipelines were found to range from 0.08 to 0.25, [ 1 ] with the lower end of the range dominated by high-strength low alloy steels and the upper end of the range consisting mostly of normalized steels.
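Because the Hollomon equation is linear in log-log form, n and K are commonly extracted from tensile-test data by a straight-line fit of log σ against log ε. The Python sketch below does this with synthetic data generated for illustration; in practice the points would come from a uniaxial tension test.

import numpy as np

# Synthetic true stress / true strain data following sigma = K * eps**n, with noise.
rng = np.random.default_rng(0)
true_strain = np.linspace(0.01, 0.20, 15)
K_true, n_true = 700.0, 0.18                    # MPa, dimensionless (assumed values)
true_stress = K_true * true_strain**n_true * (1 + 0.01 * rng.standard_normal(15))

# log(sigma) = log(K) + n * log(eps): the slope gives n, the intercept gives log(K).
slope, intercept = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
n_fit, K_fit = slope, np.exp(intercept)

print(f"fitted n = {n_fit:.3f}, fitted K = {K_fit:.1f} MPa")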
https://en.wikipedia.org/wiki/Strain_hardening_exponent
In mechanics and materials science , strain rate is the time derivative of strain of a material. Strain rate has dimension of inverse time and SI units of inverse second , s −1 (or its multiples). The strain rate at some point within the material measures the rate at which the distances of adjacent parcels of the material change with time in the neighborhood of that point. It comprises both the rate at which the material is expanding or shrinking ( expansion rate ), and also the rate at which it is being deformed by progressive shearing without changing its volume ( shear rate ). It is zero if these distances do not change, as happens when all particles in some region are moving with the same velocity (same speed and direction) and/or rotating with the same angular velocity , as if that part of the medium were a rigid body . The strain rate is a concept of materials science and continuum mechanics that plays an essential role in the physics of fluids and deformable solids. In an isotropic Newtonian fluid , in particular, the viscous stress is a linear function of the rate of strain, defined by two coefficients, one relating to the expansion rate (the bulk viscosity coefficient) and one relating to the shear rate (the "ordinary" viscosity coefficient). In solids, higher strain rates can often cause normally ductile materials to fail in a brittle manner. [ 1 ] The definition of strain rate was first introduced in 1867 by American metallurgist Jade LeCocq, who defined it as "the rate at which strain occurs. It is the time rate of change of strain." In physics the strain rate is generally defined as the derivative of the strain with respect to time. Its precise definition depends on how strain is measured. The strain is the ratio of two lengths, so it is a dimensionless quantity (a number that does not depend on the choice of measurement units ). Thus, strain rate has dimension of inverse time and units of inverse second , s −1 (or its multiples). In simple contexts, a single number may suffice to describe the strain, and therefore the strain rate. For example, when a long and uniform rubber band is gradually stretched by pulling at the ends, the strain can be defined as the ratio ϵ {\displaystyle \epsilon } between the amount of stretching and the original length of the band: ϵ ( t ) = L ( t ) − L 0 L 0 {\displaystyle \epsilon (t)={\frac {L(t)-L_{0}}{L_{0}}}} where L 0 {\displaystyle L_{0}} is the original length and L ( t ) {\displaystyle L(t)} its length at each time t {\displaystyle t} . Then the strain rate will be ϵ ˙ ( t ) = d ϵ d t = 1 L 0 d L ( t ) d t = v ( t ) L 0 {\displaystyle {\dot {\epsilon }}(t)={\frac {d\epsilon }{dt}}={\frac {1}{L_{0}}}{\frac {dL(t)}{dt}}={\frac {v(t)}{L_{0}}}} where v ( t ) {\displaystyle v(t)} is the speed at which the ends are moving away from each other. The strain rate can also be expressed by a single number when the material is being subjected to parallel shear without change of volume; namely, when the deformation can be described as a set of infinitesimally thin parallel layers sliding against each other as if they were rigid sheets, in the same direction, without changing their spacing. This description fits the laminar flow of a fluid between two solid plates that slide parallel to each other (a Couette flow ) or inside a circular pipe of constant cross-section (a Poiseuille flow ). In those cases, the state of the material at some time t {\displaystyle t} can be described by the displacement X ( y , t ) {\displaystyle X(y,t)} of each layer, measured since an arbitrary starting time, as a function of its distance y {\displaystyle y} from the fixed wall.
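For the stretched rubber-band example above, the strain rate can be obtained numerically from the length history L(t); the stretching schedule in this Python sketch is invented and simply moves the ends apart at a constant speed v, so the result should be close to v/L0.

import numpy as np

L0 = 0.10                        # original length of the band, m (illustrative)
v = 0.005                        # speed at which the ends separate, m/s
t = np.linspace(0.0, 4.0, 401)   # time samples, s
L = L0 + v * t                   # invented length history L(t)

strain = (L - L0) / L0                  # engineering strain eps(t)
strain_rate = np.gradient(strain, t)    # numerical d(eps)/dt, units 1/s

print(f"mean strain rate = {strain_rate.mean():.4f} 1/s "
      f"(expected v/L0 = {v / L0:.4f} 1/s)")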
Then the strain in each layer can be expressed as the limit of the ratio between the current relative displacement X ( y + d , t ) − X ( y , t ) {\displaystyle X(y+d,t)-X(y,t)} of a nearby layer, divided by the spacing d {\displaystyle d} between the layers: ϵ ( y , t ) = lim d → 0 X ( y + d , t ) − X ( y , t ) d = ∂ X ∂ y ( y , t ) {\displaystyle \epsilon (y,t)=\lim _{d\to 0}{\frac {X(y+d,t)-X(y,t)}{d}}={\frac {\partial X}{\partial y}}(y,t)} Therefore, the strain rate is ϵ ˙ ( y , t ) = ∂ ∂ t ∂ X ∂ y ( y , t ) = ∂ V ∂ y ( y , t ) {\displaystyle {\dot {\epsilon }}(y,t)={\frac {\partial }{\partial t}}{\frac {\partial X}{\partial y}}(y,t)={\frac {\partial V}{\partial y}}(y,t)} where V ( y , t ) {\displaystyle V(y,t)} is the current linear speed of the material at distance y {\displaystyle y} from the wall. In more general situations, when the material is being deformed in various directions at different rates, the strain (and therefore the strain rate) around a point within a material cannot be expressed by a single number, or even by a single vector . In such cases, the rate of deformation must be expressed by a tensor , a linear map between vectors, that expresses how the relative velocity of the medium changes when one moves by a small distance away from the point in a given direction. This strain rate tensor can be defined as the time derivative of the strain tensor , or as the symmetric part of the gradient (derivative with respect to position) of the velocity of the material. With a chosen coordinate system , the strain rate tensor can be represented by a symmetric 3×3 matrix of real numbers. The strain rate tensor typically varies with position and time within the material, and is therefore a (time-varying) tensor field . It only describes the local rate of deformation to first order ; but that is generally sufficient for most purposes, even when the viscosity of the material is highly non-linear. Materials can be tested using the so-called epsilon dot ( ε ˙ {\displaystyle {\dot {\varepsilon }}} ) method [ 2 ] which can be used to derive viscoelastic parameters through lumped parameter analysis . Similarly, the sliding rate, also called the deviatoric strain rate or shear strain rate , is the derivative with respect to time of the shear strain. Engineering sliding strain can be defined as the angular displacement created by an applied shear stress, τ {\displaystyle \tau } . [ 3 ] Therefore the unidirectional sliding strain rate can be defined as the time derivative of that angular (shear) strain: γ ˙ = d γ d t {\displaystyle {\dot {\gamma }}={\frac {d\gamma }{dt}}} .
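A compact way to see the general case is to compute the strain-rate tensor as the symmetric part of the velocity gradient; the 2-D velocity field in the Python sketch below (a simple shear plus a weak uniform expansion) is invented for illustration.

import numpy as np

def velocity(p):
    # Illustrative steady 2-D flow: simple shear in x plus a weak expansion.
    x, y = p
    return np.array([0.8 * y + 0.05 * x,    # v_x
                     0.05 * y])             # v_y

def velocity_gradient(p, h=1e-6):
    # Central-difference approximation of dv_i/dx_j at the point p.
    grad = np.zeros((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = h
        grad[:, j] = (velocity(p + dp) - velocity(p - dp)) / (2 * h)
    return grad

G = velocity_gradient(np.array([0.0, 0.0]))
D = 0.5 * (G + G.T)    # strain-rate tensor: symmetric part of the gradient
W = 0.5 * (G - G.T)    # spin tensor: rigid rotation rate, no deformation

print("strain-rate tensor D:\n", D)
print("expansion rate (trace of D):", np.trace(D))
print("shear component D_xy:", D[0, 1])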
https://en.wikipedia.org/wiki/Strain_rate
In physics , strain scanning is the general name for various techniques that aim to measure the strain in a crystalline material through its effect on the diffraction of X-rays and neutrons . In these methods the material itself is used as a form of strain gauge . The various methods are derived from powder diffraction but look for the small shifts in the diffraction spectrum that indicate a change in a lattice parameter instead of trying to derive unknown structural information. By comparing the lattice parameter to a known reference value it is possible to determine the strain. If sufficient measurements are made in different directions it is possible to derive the strain tensor . If the elastic properties of the material are known, one can then compute the stress tensor . At its most basic level strain scanning uses shifts in Bragg diffraction [ 1 ] peaks to determine the strain. Strain is defined as the change in length (shift in lattice parameter, d) divided by the original length (unstrained lattice parameter, d 0 ). In diffraction based strain scanning this becomes the change in peak position divided by the original position. The precise form of the equation depends on whether the measurement is expressed in terms of diffraction angle, energy, or, for relatively slow moving neutrons, time of flight. The details of the technique are heavily influenced by the type of radiation used since lab X-rays, synchrotron X-rays and neutrons have very different properties. Nevertheless, there is considerable overlap between the various methods.
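As a rough sketch of the angle-dispersive case (fixed wavelength, strain inferred from the shift of one Bragg peak), the Python snippet below converts an assumed reference and measured peak position into a lattice strain via Bragg's law; the wavelength and angles are invented illustrative numbers.

import math

wavelength = 1.54e-10                   # m, e.g. Cu K-alpha (illustrative)
two_theta_ref = math.radians(86.00)     # peak position in the unstrained reference
two_theta_meas = math.radians(85.94)    # peak position in the strained sample

def d_spacing(two_theta):
    # Bragg's law: lambda = 2 d sin(theta), with theta half the scattering angle.
    return wavelength / (2.0 * math.sin(two_theta / 2.0))

d0 = d_spacing(two_theta_ref)
d = d_spacing(two_theta_meas)
strain = (d - d0) / d0                  # lattice strain along the scattering vector

print(f"d0 = {d0 * 1e10:.5f} A, d = {d * 1e10:.5f} A, strain = {strain:.3e}")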
https://en.wikipedia.org/wiki/Strain_scanning
Straintronics (from strain and electronics ) is the study of how folds and mechanically induced stresses in a layer of two-dimensional materials can change their electrical properties. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] It is distinct from twistronics in that the latter involves changes in the angle between two layers of 2D material. However, in such multi-layers, if strain is applied to only one of the layers, which is called heterostrain , strain can have an effect similar to twist in changing electronic properties. [ 8 ] [ 9 ] It is also distinct from, but similar to, the piezoelectric effects which are created by bending, twisting, or squeezing certain materials.
https://en.wikipedia.org/wiki/Straintronics
The Strait of Sicily Tunnel is a proposed megaproject to link Sicily and Tunisia . The distance between the coastlines is about 155 kilometres (96 miles); it would be crossed by five tunnels constructed between four intermediate artificial islands, which would be built with the excavated material. A preliminary study was promoted by the ENEA institute. [ 1 ] The connections across the Strait of Sicily , as of 2011 , are by car ferry and air travel. There are ferries Palermo–Tunis (3 round trips per week), Trapani–Tunis (1 round trip per week), Civitavecchia–Tunis (2 per week), Genoa–Tunis, and Marseille–Tunis. [ 2 ]
https://en.wikipedia.org/wiki/Strait_of_Sicily_Tunnel
A strand jack (also known as strandjack ) is a jack used to lift very heavy loads (e.g. thousands of tons or more with multiple jacks) for construction and engineering purposes. [ 1 ] Strandjacking was invented by VSL Australia's Patrick Kilkeary & Bruce Ramsay in 1969 for concrete post tensioning systems, and is now widely used for heavy lifting, to erect bridges, offshore structures, refineries, power stations, major buildings and other structures where the use of conventional cranes is either impractical or too expensive. Strand jacks can be used horizontally for pulling objects and structures, and are widely used in the oil and gas industry for skidded loadouts. Oil rigs of 38,000 t have been moved in this way from the place of construction on to a barge . Since multiple jacks can be operated simultaneously by hydraulic controllers, they can be used in tandem to lift very large loads of thousands of tons. Tandem use of even two cranes is a very difficult operation. [ 2 ] A strand jack is a hollow hydraulic cylinder with a set of steel cables (the "strands") passing through the open centre, each one passing through two clamps, one mounted at either end of the cylinder. The jack operates in the manner of a caterpillar 's walk: climbing (or descending) along the strands by releasing the clamp at one end, expanding the cylinder, clamping there, releasing the trailing end, contracting, and clamping the trailing end before starting over again. The real significance of this device lies in the facility for precision control: the expansion and contraction can be done at any speed, and paused at any location. Although a lone jack may lift only 1700 tons or so, there exist computer control systems that can operate 120 jacks simultaneously, offering fine, fingertip control over the movement of extremely massive objects. Strand jacking is a construction process whereby large pre-fabricated building sections are carefully lifted and precisely placed. The alternative would be to do all assembly in situ , even if expensive, technically risky, or dangerous. Strand jacks used for heavy lifting and skidding operations are owned and operated by a large number of construction and heavy lifting companies around the world. They are manufactured by a small number of companies based in Europe.
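A trivial Python sketch of the numbers involved in the climbing cycle described above; the stroke length, load, jack capacity and lift height are invented round figures, not manufacturer data.

import math

stroke = 0.45            # usable cylinder stroke per clamp/extend cycle, m (assumed)
lift_height = 30.0       # required lift, m
load = 5000.0            # total load to be raised, tonnes
jack_capacity = 170.0    # assumed working capacity of one jack of this size, tonnes

jacks_needed = math.ceil(load / jack_capacity)     # jacks operated in parallel
cycles = math.ceil(lift_height / stroke)           # extend/clamp cycles per jack

print(f"{jacks_needed} jacks in parallel, about {cycles} climbing cycles each")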
https://en.wikipedia.org/wiki/Strand_jack
Strange matter (or strange quark matter ) is quark matter containing strange quarks . In extreme environments, strange matter is hypothesized to occur in the core of neutron stars , or, more speculatively, as isolated droplets that may vary in size from femtometers ( strangelets ) to kilometers, as in the hypothetical strange stars . At high enough density, strange matter is expected to be color superconducting . [ citation needed ] Ordinary matter , also referred to as atomic matter, is composed of atoms, with nearly all matter concentrated in the atomic nuclei. Nuclear matter is a liquid composed of neutrons and protons , and they are themselves composed of up and down quarks . Quark matter is a condensed form of matter composed entirely of quarks . When quark matter does not contain strange quarks, it is sometimes referred to as non-strange quark matter. In particle physics and astrophysics , the term 'strange matter' is used in two different contexts, one broader and the other more specific and hypothetical: [ 1 ] [ 2 ] In the general context, strange matter might occur inside neutron stars, if the pressure at their core is high enough to provide a sufficient gravitational force (i.e. above the critical pressure). At the sort of densities and high pressures we expect in the center of a neutron star, the quark matter would probably be strange matter. It could conceivably be non-strange quark matter, if the effective mass of the strange quark were too high. Charm quarks and heavier quarks would only occur at much higher densities. Strange matter comes about as a way to relieve degeneracy pressure . The Pauli exclusion principle forbids fermions such as quarks from occupying the same position and energy level. When the particle density is high enough that all energy levels below the available thermal energy are already occupied, increasing the density further requires raising some to higher, unoccupied energy levels. This need for energy to cause compression manifests as a pressure. Neutrons consist of twice as many down quarks (charge − ⁠ 1 / 3 ⁠ e ) as up quarks (charge + ⁠ 2 / 3 ⁠ e ), so the degeneracy pressure of down quarks usually dominates electrically neutral quark matter. However, when the required energy level is high enough, an alternative becomes available: half of the down quarks can be transmuted to strange quarks (charge − ⁠ 1 / 3 ⁠ e ). The higher rest mass of the strange quark costs some energy, but by opening up an additional set of energy levels, the average energy per particle can be lower, [ 1 ] : 5 making strange matter more stable than non-strange quark matter. A neutron star with a quark matter core is often [ 1 ] [ 2 ] called a hybrid star. However, it is difficult to know whether hybrid stars really exist in nature because physicists currently have little idea of the likely value of the critical pressure or density. It seems plausible that the transition to quark matter will already have occurred when the separation between the nucleons becomes much smaller than their size, so the critical density must be less than about 100 times nuclear saturation density. But a more precise estimate is not yet available, because the strong interaction that governs the behavior of quarks is mathematically intractable, and numerical calculations using lattice QCD are currently blocked by the fermion sign problem . 
One major area of activity in neutron star physics is the attempt to find observable signatures by which we could tell whether neutron stars have quark matter (probably strange matter) in their core. During the merger of two neutron stars, strange matter may be ejected out into the space around the stars, which may allow for the studying of strange matter. However, the rate at which strange matter decays is unknown, and there are very few binary pairs of neutron stars nearby to the Solar System, which could make the official discovery of strange matter very difficult. If the " strange matter hypothesis " is true, then nuclear matter is metastable against decaying into strange matter. The lifetime for spontaneous decay is very long, so we do not see this decay process happening around us. [ 4 ] However, under this hypothesis there should be strange matter in the universe:
https://en.wikipedia.org/wiki/Strange_matter
A strange particle is an elementary particle with a strangeness quantum number different from zero. Strange particles are members of a large family of elementary particles carrying the quantum number of strangeness , including several cases where the quantum number is hidden in a strange/anti-strange pair, for example in the Φ meson . The classification of these particles as mesons or baryons follows from their quark content: a quark/anti-quark pair for mesons and three quarks for baryons. Murray Gell-Mann recognized the group structure underlying the classification of elementary particles, introducing flavour SU(3) and strangeness as a new quantum number. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Strange_particle
A strangelet (pronounced / ˈ s t r eɪ n dʒ . l ɪ t / ) is a hypothetical particle consisting of a bound state of roughly equal numbers of up , down , and strange quarks . An equivalent description is that a strangelet is a small fragment of strange matter , small enough to be considered a particle . The size of an object composed of strange matter could, theoretically, range from a few femtometers across (with the mass of a light nucleus) to arbitrarily large. Once the size becomes macroscopic (on the order of metres across), such an object is usually called a strange star . The term "strangelet" originates with Edward Farhi and Robert Jaffe in 1984. It has been theorized that strangelets can convert matter to strange matter on contact. [ 1 ] Strangelets have also been suggested as a dark matter candidate. [ 2 ] The known particles with strange quarks are unstable. Because the strange quark is heavier than the up and down quarks, it can spontaneously decay , via the weak interaction , into an up quark. Consequently, particles containing strange quarks, such as the lambda particle , always lose their strangeness , by decaying into lighter particles containing only up and down quarks. However, condensed states with a larger number of quarks might not suffer from this instability. That possible stability against decay is the " strange matter hypothesis ", proposed separately by Arnold Bodmer [ 3 ] and Edward Witten . [ 4 ] According to this hypothesis, when a large enough number of quarks are concentrated together, the lowest energy state is one which has roughly equal numbers of up, down, and strange quarks, namely a strangelet. This stability would occur because of the Pauli exclusion principle ; having three types of quarks, rather than two as in normal nuclear matter, allows more quarks to be placed in lower energy levels. A nucleus is a collection of a number of up and down quarks (in some nuclei a fairly large number), confined into triplets ( neutrons and protons ). According to the strange matter hypothesis, strangelets are more stable than nuclei, so nuclei are expected to decay into strangelets. But this process may be extremely slow because there is a large energy barrier to overcome: as the weak interaction starts making a nucleus into a strangelet, the first few strange quarks form strange baryons, such as the Lambda, which are heavy. Only if many conversions occur almost simultaneously will the number of strange quarks reach the critical proportion required to achieve a lower energy state. This is very unlikely to happen, so even if the strange matter hypothesis were correct, nuclei would never be seen to decay to strangelets because their lifetime would be longer than the age of the universe. [ 5 ] The stability of strangelets also depends on their size, because of surface tension at the interface between quark matter and the vacuum: below a critical value of the surface tension, large strangelets are unstable against fragmentation into smaller ones, while above it larger strangelets are the more stable. Although nuclei do not decay to strangelets, there are other ways to create strangelets, so if the strange matter hypothesis is correct there should be strangelets in the universe. There are at least three ways they might be created in nature, and these scenarios offer possibilities for observing strangelets. If strangelets can be produced in high-energy collisions, then they might be produced by heavy-ion colliders.
Similarly, if there are strangelets flying around the universe, then occasionally a strangelet should hit Earth, where it may appear as an exotic type of cosmic ray; alternatively, a stable strangelet could end up incorporated into the bulk of the Earth's matter, acquiring an electron shell proportional to its charge and hence appearing as an anomalously heavy isotope of the appropriate element—though searches for such anomalous "isotopes" have, so far, been unsuccessful. [ 10 ] At heavy ion accelerators like the Relativistic Heavy Ion Collider (RHIC), nuclei are collided at relativistic speeds, creating strange and antistrange quarks that could conceivably lead to strangelet production. The experimental signature of a strangelet would be its very high ratio of mass to charge, which would cause its trajectory in a magnetic field to be very nearly, but not quite, straight. The STAR collaboration has searched for strangelets produced at the RHIC, [ 11 ] but none were found. The Large Hadron Collider (LHC) is even less likely to produce strangelets, [ 12 ] but searches are planned [ 13 ] for the LHC ALICE detector. The Alpha Magnetic Spectrometer (AMS), an instrument that is mounted on the International Space Station , could detect strangelets. [ 14 ] In May 2002, a group of researchers at Southern Methodist University reported the possibility that strangelets may have been responsible for seismic events recorded on October 22 and November 24 in 1993. [ 15 ] The authors later retracted their claim, after finding that the clock of one of the seismic stations had a large error during the relevant period. [ 16 ] It has been suggested that the International Monitoring System be set up to verify the Comprehensive Nuclear Test Ban Treaty (CTBT) after entry into force may be useful as a sort of "strangelet observatory" using the entire Earth as its detector. The IMS will be designed to detect anomalous seismic disturbances down to 1 kiloton of TNT (4.2 TJ ) energy release or less, and could be able to track strangelets passing through Earth in real time if properly exploited. It has been suggested that strangelets of subplanetary (i.e. heavy meteorite) mass would puncture planets and other Solar System objects, leading to impact craters which show characteristic features. [ 17 ] If the strange matter hypothesis is correct, and if a stable negatively-charged strangelet with a surface tension larger than the aforementioned critical value exists, then a larger strangelet would be more stable than a smaller one. One speculation that has resulted from the idea is that a strangelet coming into contact with a lump of ordinary matter could over time convert the ordinary matter to strange matter. [ 18 ] [ 19 ] This is not a concern for strangelets in cosmic rays because they are produced far from Earth and have had time to decay to their ground state , which is predicted by most models to be positively charged, so they are electrostatically repelled by nuclei, and would rarely merge with them. [ 20 ] [ 21 ] On the other hand, high-energy collisions could produce negatively charged strangelet states, which could live long enough to interact with the nuclei of ordinary matter . [ 22 ] The danger of catalyzed conversion by strangelets produced in heavy-ion colliders has received some media attention, [ 23 ] [ 24 ] and concerns of this type were raised [ 18 ] [ 25 ] at the commencement of the RHIC experiment at Brookhaven , which could potentially have created strangelets. 
A detailed analysis [ 19 ] concluded that the RHIC collisions were comparable to ones which naturally occur as cosmic rays traverse the Solar System , so we would already have seen such a disaster if it were possible. RHIC has been operating since 2000 without incident. Similar concerns have been raised about the operation of the LHC at CERN [ 26 ] but such fears are dismissed as far-fetched by scientists. [ 26 ] [ 27 ] [ 28 ] In the case of a neutron star , the conversion scenario may be more plausible. A neutron star is in a sense a giant nucleus (20 km across), held together by gravity , but it is electrically neutral and would not electrostatically repel strangelets. If a strangelet hit a neutron star, it might catalyze quarks near its surface to form into more strange matter, potentially continuing until the entire star became a strange star . [ 29 ] The strange matter hypothesis remains unproven. No direct search for strangelets in cosmic rays or particle accelerators has yet confirmed a strangelet. If any of the objects such as neutron stars could be shown to have a surface made of strange matter, this would indicate that strange matter is stable at zero pressure , which would vindicate the strange matter hypothesis. However, there is no strong evidence for strange matter surfaces on neutron stars. Another argument against the hypothesis is that if it were true, essentially all neutron stars should be made of strange matter, and otherwise none should be. [ 30 ] Even if there were only a few strange stars initially, violent events such as collisions would soon create many fragments of strange matter flying around the universe. Because collision with a single strangelet would convert a neutron star to strange matter, all but a few of the most recently formed neutron stars should by now have already been converted to strange matter. This argument is still debated, [ 31 ] [ 32 ] [ 33 ] [ 34 ] but if it is correct then showing that one old neutron star has a conventional nuclear matter crust would disprove the strange matter hypothesis. Because of its importance for the strange matter hypothesis, there is an ongoing effort to determine whether the surfaces of neutron stars are made of strange matter or nuclear matter . The evidence currently favors nuclear matter. This comes from the phenomenology of X-ray bursts , which is well explained in terms of a nuclear matter crust, [ 35 ] and from measurement of seismic vibrations in magnetars . [ 36 ]
https://en.wikipedia.org/wiki/Strangelet
In particle physics , strangeness (symbol S ) [ 1 ] [ 2 ] is a property of particles , expressed as a quantum number , used to describe the decay of particles in strong and electromagnetic interactions , which occur in a short period of time . The strangeness of a particle is defined as: S = − ( n s − n s ¯ ) {\displaystyle S=-(n_{\text{s}}-n_{\bar {\text{s}}})} where n s {\displaystyle n_{\text{s}}} represents the number of strange quarks ( s ) and n s ¯ {\displaystyle n_{\bar {\text{s}}}} represents the number of strange antiquarks ( s ). Evaluation of strangeness production has become an important tool in the search for, discovery, observation and interpretation of quark–gluon plasma (QGP). [ 3 ] Strangeness is an excited state of matter and its decay is governed by CKM mixing . The terms strange and strangeness predate the discovery of the quark, and were retained after its discovery in order to preserve the continuity of the naming: a strangeness of −1 for particles and +1 for anti-particles, per the original definition. For all the quark flavour quantum numbers (strangeness, charm , topness and bottomness ) the convention is that the flavour charge and the electric charge of a quark have the same sign. With this, any flavour carried by a charged meson has the same sign as its charge. Strangeness was introduced by Murray Gell-Mann , [ 4 ] Abraham Pais , [ 5 ] [ 6 ] Tadao Nakano and Kazuhiko Nishijima [ 7 ] to explain the fact that certain particles, such as the kaons or the hyperons Σ and Λ , were created easily in particle collisions, yet decayed much more slowly than expected for their large masses and large production cross sections . Noting that collisions seemed to always produce pairs of these particles, it was postulated that a new conserved quantity, dubbed "strangeness", was preserved during their creation, but not conserved in their decay. [ 8 ] In our modern understanding, strangeness is conserved during the strong and the electromagnetic interactions , but not during the weak interactions . Consequently, the lightest particles containing a strange quark cannot decay by the strong interaction, and must instead decay via the much slower weak interaction. In most cases these decays change the value of the strangeness by one unit. This doesn't necessarily hold in second-order weak reactions, however, where there is mixing between the K 0 meson and its antiparticle. All in all, the amount of strangeness can change in a weak interaction reaction by +1, 0 or −1 (depending on the reaction). For example, the interaction of a K − meson with a proton is represented as: K − + p → Ξ 0 + K 0 {\displaystyle K^{-}+p\rightarrow \Xi ^{0}+K^{0}} ( − 1 ) + ( 0 ) → ( − 2 ) + ( 1 ) {\displaystyle (-1)+(0)\rightarrow (-2)+(1)} Here strangeness is conserved and the interaction can proceed via the strong nuclear force. [ 9 ] In contrast, consider the decay of the positive kaon: K + → π + + π 0 {\displaystyle K^{+}\rightarrow \pi ^{+}+\pi ^{0}} + 1 → ( 0 ) + ( 0 ) {\displaystyle +1\rightarrow (0)+(0)} Since both pions have a strangeness of 0, this violates conservation of strangeness, meaning the reaction must go via the weak force. [ 9 ]
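The bookkeeping in the two example reactions can be reproduced with a few lines of Python; the per-particle strangeness values below follow from the standard quark content (they are not taken from the article's references), and the helper simply compares total strangeness before and after a reaction.

# Strangeness S = -(n_s - n_sbar) of each particle, from its standard quark content.
S = {
    "K-": -1,     # s u-bar
    "K0": +1,     # d s-bar
    "K+": +1,     # u s-bar
    "p": 0,       # uud
    "Xi0": -2,    # uss
    "pi+": 0,     # u d-bar
    "pi0": 0,     # superposition of u u-bar and d d-bar
}

def delta_strangeness(initial, final):
    """Change in total strangeness between the two sides of a reaction."""
    return sum(S[p] for p in final) - sum(S[p] for p in initial)

# K- + p -> Xi0 + K0: strangeness conserved, so the strong interaction is allowed.
print(delta_strangeness(["K-", "p"], ["Xi0", "K0"]))    # prints 0

# K+ -> pi+ + pi0: strangeness changes by one unit, so only the weak force allows it.
print(delta_strangeness(["K+"], ["pi+", "pi0"]))        # prints -1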
https://en.wikipedia.org/wiki/Strangeness
In high-energy nuclear physics , strangeness production in relativistic heavy-ion collisions is a signature and diagnostic tool of quark–gluon plasma (QGP) formation and properties. [ 1 ] Unlike up and down quarks , from which everyday matter is made, heavier quark flavors such as strange and charm typically approach chemical equilibrium in a dynamic evolution process. QGP (also known as quark matter ) is an interacting localized assembly of quarks and gluons at thermal (kinetic) and not necessarily chemical (abundance) equilibrium. The word plasma signals that color charged particles (quarks and/or gluons) are able to move in the volume occupied by the plasma. The abundance of strange quarks is formed in pair-production processes in collisions between constituents of the plasma, creating the chemical abundance equilibrium. The dominant mechanism of production involves gluons only present when matter has become a quark–gluon plasma. When quark–gluon plasma disassembles into hadrons in a breakup process, the high availability of strange antiquarks helps to produce antimatter containing multiple strange quarks, which is otherwise rarely made. Similar considerations are at present made for the heavier charm flavor, which is made at the beginning of the collision process in the first interactions and is only abundant in the high-energy environments of CERN 's Large Hadron Collider . Free quarks probably existed in the extreme conditions of the very early universe until about 30 microseconds after the Big Bang, [ 2 ] in a very hot gas of free quarks, antiquarks and gluons. This gas is called quark–gluon plasma (QGP), since the quark-interaction charge ( color charge ) is mobile and quarks and gluons move around. This is possible because at a high temperature the early universe is in a different vacuum state , in which normal matter cannot exist but quarks and gluons can; they are deconfined (able to exist independently as separate unbound particles). In order to recreate this deconfined phase of matter in the laboratory it is necessary to exceed a minimum temperature, or its equivalent, a minimum energy density . Scientists achieve this using particle collisions at extremely high speeds, where the energy released in the collision can raise the subatomic particles' energies to an exceedingly high level, sufficient for them to briefly form a tiny amount of quark–gluon plasma that can be studied in laboratory experiments for little more than the time light needs to cross the QGP fireball, thus about 10 −22 s. After this brief time the hot drop of quark plasma evaporates in a process called hadronization . This is so since practically all QGP components flow out at relativistic speed. In this way, it is possible to study conditions akin to those in the early Universe at the age of 10–40 microseconds. Discovery of this new QGP state of matter has been announced both at CERN [ 3 ] and at Brookhaven National Laboratory (BNL). [ 4 ] Preparatory work, allowing for these discoveries, was carried out at the Joint Institute for Nuclear Research (JINR) and Lawrence Berkeley National Laboratory (LBNL) at the Bevalac . [ 5 ] New experimental facilities, FAIR at the GSI Helmholtz Centre for Heavy Ion Research (GSI) and NICA at JINR, are under construction. Strangeness as a signature of QGP was first explored in 1983. [ 6 ] Comprehensive experimental evidence about its properties is being assembled. 
Recent work by the ALICE collaboration [ 7 ] at CERN has opened a new path to study of QGP and strangeness production in very high energy pp collisions. The diagnosis and the study of the properties of quark–gluon plasma can be undertaken using quarks not present in matter seen around us. The experimental and theoretical work relies on the idea of strangeness enhancement. This was the first observable of quark–gluon plasma proposed in 1980 by Johann Rafelski and Rolf Hagedorn . [ 8 ] Unlike the up and down quarks, strange quarks are not brought into the reaction by the colliding nuclei. Therefore, any strange quarks or antiquarks observed in experiments have been "freshly" made from the kinetic energy of colliding nuclei, with gluons being the catalyst. [ 9 ] Conveniently, the mass of strange quarks and antiquarks is equivalent to the temperature or energy at which protons, neutrons and other hadrons dissolve into quarks. This means that the abundance of strange quarks is sensitive to the conditions, structure and dynamics of the deconfined matter phase, and if their number is large it can be assumed that deconfinement conditions were reached. An even stronger signature of strangeness enhancement is the highly enhanced production of strange antibaryons . [ 10 ] [ 11 ] An early comprehensive review of strangeness as a signature of QGP was presented by Koch, Müller and Rafelski, [ 12 ] which was recently updated. [ 13 ] The abundance of produced strange anti-baryons, and in particular anti-omega Ω ¯ ( s ¯ s ¯ s ¯ ) {\displaystyle {\bar {\Omega }}({\bar {s}}{\bar {s}}{\bar {s}})} , allowed to distinguish fully deconfined large QGP domain [ 14 ] from transient collective quark models such as the color rope model proposed by Biró, Nielsen and Knoll. [ 15 ] The relative abundance of ϕ ( s s ¯ ) / Ξ ¯ ( q ¯ s ¯ s ¯ ) {\displaystyle \phi (s{\bar {s}})/{\bar {\Xi }}({\bar {q}}{\bar {s}}{\bar {s}})} resolves [ 16 ] questions raised by the canonical model of strangeness enhancement. [ 17 ] One cannot assume that under all conditions the yield of strange quarks is in thermal equilibrium. In general, the quark-flavor composition of the plasma varies during its ultra short lifetime as new flavors of quarks such as strangeness are cooked up inside. The up and down quarks from which normal matter is made are easily produced as quark–antiquark pairs in the hot fireball because they have small masses. On the other hand, the next lightest quark flavor—strange quarks—will reach its high quark–gluon plasma thermal abundance provided that there is enough time and that the temperature is high enough. [ 13 ] This work elaborated the kinetic theory of strangness production proposed by T. Biro and J. Zimanyi who demonstrated  that strange quarks could not be produced fast enough alone by quark-antiquark reactions. [ 18 ] A new mechanism operational alone in QGP was proposed. Yield equilibration of strangeness yield in QGP is only possible due to a new process, gluon fusion, as shown by Rafelski and Müller . [ 9 ] The top section of the Feynman diagrams figure, shows the new gluon fusion processes: gluons are the wavy lines; strange quarks are the solid lines; time runs from left to right. The bottom section is the process where the heavier quark pair arises from the lighter pair of quarks shown as dashed lines. 
The gluon fusion process occurs almost ten times faster than the quark-based strangeness process, and allows achievement of the high thermal yield where the quark based process would fail to do so during the duration of the "micro-bang". [ 19 ] The ratio of newly produced s ¯ s {\displaystyle {\bar {s}}s} pairs with the normalized light quark pairs u ¯ u + d ¯ d / 2 {\displaystyle {\bar {u}}u+{\bar {d}}d/2} —the  Wroblewski ratio [ 20 ] —is considered a measure of efficacy of strangeness production. This ratio more than doubles in heavy ion collisions, [ 21 ] providing a model independent confirmation of a new mechanism of strangeness production operating in collisions that are producing QGP. Regarding charm and bottom flavour : [ 22 ] [ 23 ] the gluon collisions here are occurring within the thermal matter phase and thus are different from the high energy processes that can ensue in the early stages of the collisions when the nuclei crash into each other. The heavier, charm and bottom quarks are produced there dominantly. The study in relativistic nuclear (heavy ion) collisions of charmed and soon also bottom hadronic particle production—beside strangeness—will provide complementary and important confirmation of the mechanisms of formation, evolution and hadronization of quark–gluon plasma in the laboratory. [ 7 ] These newly cooked strange quarks find their way into a multitude of different final particles that emerge as the hot quark–gluon plasma fireball breaks up, see the scheme of different processes in figure. Given the ready supply of antiquarks in the "fireball", one also finds a multitude of antimatter particles containing more than one strange quark. On the other hand, in a system involving a cascade of nucleon–nucleon collisions, multi-strange antimatter are produced less frequently considering that several relatively improbable events must occur in the same collision process. For this reason one expects that the yield of multi-strange antimatter particles produced in the presence of quark matter is enhanced compared to conventional series of reactions. [ 24 ] [ 25 ] Strange quarks also bind with the heavier charm and bottom quarks which also like to bind with each other. Thus, in the presence of a large number of these quarks, quite unusually abundant exotic particles can be produced; some of which have never been observed before. This should be the case in the forthcoming exploration at the new Large Hadron Collider at CERN of the particles that have charm and strange quarks, and even bottom quarks, as components. [ 26 ] Strange quarks are naturally radioactive and decay by weak interactions into lighter quarks on a timescale that is extremely long compared with the nuclear-collision times. This makes it relatively easy to detect strange particles through the tracks left by their decay products. Consider as an example the decay of a negatively charged Ξ {\displaystyle \Xi } baryon (green in figure, dss), into a negative pion ( u d) and a neutral Λ {\displaystyle \Lambda } (uds) baryon . Subsequently, the Λ {\displaystyle \Lambda } decays into a proton and another negative pion. In general this is the signature of the decay of a Ξ {\displaystyle \Xi } . Although the negative Ω {\displaystyle \Omega } (sss) baryon has a similar final state decay topology, it can be clearly distinguished from the Ξ {\displaystyle \Xi } because its decay products are different. 
Measurement of abundant formation of Ξ {\displaystyle \Xi } (uss/dss), Ω {\displaystyle \Omega } (sss) and especially their antiparticles is an important cornerstone of the claim that quark–gluon plasma has been formed. [ 27 ] This abundant formation is often presented in comparison with the scaled expectation from normal proton–proton collisions; however, such a comparison is not a necessary step in view of the large absolute yields which defy conventional model expectations. [ 12 ] The overall yield of strangeness is also larger than expected if the new form of matter has been achieved. However, considering that the light quarks are also produced in gluon fusion processes, one expects increased production of all hadrons. The study of the relative yields of strange and non strange particles provides information about the competition of these processes and thus the reaction mechanism of particle production. The work of Koch, Muller, Rafelski [ 12 ] predicts that in a quark–gluon plasma hadronization process the enhancement for each particle species increases with the strangeness content of the particle. The enhancements for particles carrying one, two and three strange or antistrange quarks were measured and this effect was demonstrated by the CERN WA97 experiment [ 28 ] in time for the CERN announcement in 2000 [ 29 ] of a possible quark–gluon plasma formation in its experiments. [ 30 ] These results were elaborated by the successor collaboration NA57 [ 31 ] as shown in the enhancement of antibaryon figure. The gradual rise of the enhancement as a function of the variable representing the amount of nuclear matter participating in the collisions, and thus as a function of the geometric centrality of nuclear collision strongly favors the quark–gluon plasma source over normal matter reactions. A similar enhancement was obtained by the STAR experiment at the RHIC . [ 32 ] Here results obtained when two colliding systems at 100 A GeV in each beam are considered: in red the heavier gold–gold collisions and in blue the smaller copper–copper collisions. The energy at RHIC is 11 times greater in the CM frame of reference compared to the earlier CERN work. The important result is that enhancement observed by STAR also increases with the number of participating nucleons. We further note that for the most peripheral events at the smallest number of participants, copper and gold systems show, at the same number of participants, the same enhancement as expected. Another remarkable feature of these results, comparing CERN and STAR, is that the enhancement is of similar magnitude for the vastly different collision energies available in the reaction. This near energy independence of the enhancement also agrees with the quark–gluon plasma approach regarding the mechanism of production of these particles and confirms that a quark–gluon plasma is created over a wide range of collision energies, very probably once a minimal energy threshold is exceeded. The very high precision of (strange) particle spectra and large transverse momentum coverage reported by the ALICE Collaboration at the Large Hadron Collider (LHC) allows in-depth exploration of lingering challenges, which always accompany new physics, and here in particular the questions surrounding strangeness signature. Among the most discussed challenges has been the question if the abundance of particles produced is enhanced or if the comparison base line is suppressed. 
Suppression is expected when an otherwise absent quantum number, such as strangeness, is rarely produced. This situation was recognized by Hagedorn in his early analysis of particle production [ 37 ] and solved by Rafelski and Danos. [ 38 ] In that work it was shown that even if just a few new pairs of strange particles were produced the effect disappears. However, the matter was revived by Hamieh et al. [ 17 ] who argued that is possible that small sub-volumes in QGP are of relevance. This argument can be resolved by exploring specific sensitive experimental signatures for example the ratio of double strange particles of different type, such yield of s s q {\displaystyle ssq} ( Ξ {\displaystyle \Xi } ) compared to s ¯ s {\displaystyle {\bar {s}}s} ( ϕ {\displaystyle \phi } ). The ALICE experiment obtained this ratio for several collision systems in a wide range of hadronization volumes as described by the total produced particle multiplicy. The results show that this ratio assumes the expected value for a large range volumes (two orders of magnitude). At small particle volume or multiplicity, the curve shows the expected reduction: The s s q {\displaystyle ssq} ( Ξ {\displaystyle \Xi } ) must be smaller compared to s ¯ s {\displaystyle {\bar {s}}s} ( ϕ {\displaystyle \phi } ) as the number of produced strange pairs decreases and thus it easier to make s ¯ s {\displaystyle {\bar {s}}s} ( ϕ {\displaystyle \phi } ) compared to s s q {\displaystyle ssq} ( Ξ {\displaystyle \Xi } ) that requires two pairs minimum to be made. However, we also see an increase at very high volume—this is an effect at the level of one to two standard deviations. Similar results were already recognized before by Petran et al. [ 16 ] Another highly praised ALICE result [ 7 ] is the observation of same strangeness enhancement, not only on AA (nucleus–nucleus) but also in pA (proton–nucleus) and pp (proton–proton) collisions when the particle production yields are presented as a function of the multiplicity, which, as noted, corresponds to the available hadronization volume. ALICE results display a smooth volume dependence of total yield of all studied particles as function of volume, there is no additional "canonical" suppression. [ 17 ] This is so since the yield of strange pairs in QGP is sufficiently high and tracks well the expected abundance increase as the volume and lifespan of QGP increases. This increase is incompatible with the hypothesis that for all reaction volumes QGP is always in chemical (yield) equilibrium of strangeness. Instead, this confirms the theoretical kinetic model proposed by Rafelski and Müller . [ 9 ] The production of QGP in pp collisions was not expected by all, but should not be a surprise. The onset of deconfinement is naturally a function of both energy and collision system size. The fact that at extreme LHC energies we cross this boundary also in experiments with the smallest elementary collision systems, such as pp, confirms the unexpected strength of the processes leading to QGP formation. Onset of deconfinement in pp and other "small" system collisions remains an active research topic. Beyond strangeness the great advantage offered by LHC energy range is the abundant production of charm and bottom flavor . [ 22 ] When QGP is formed, these quarks are embedded in a high density of strangeness present. This should lead to copious production of exotic heavy particles, for example D s . 
Other heavy flavor particles, some which have not even been discovered at this time, are also likely to appear. [ 39 ] [ 40 ] Looking back to the beginning of the CERN heavy ion program one sees de facto announcements of quark–gluon plasma discoveries. The CERN- NA35 [ 25 ] and CERN-WA85 [ 42 ] experimental collaborations announced Λ ¯ {\displaystyle {\bar {\Lambda }}} formation in heavy ion reactions in May 1990 at the Quark Matter Conference, Menton , France . The data indicates a significant enhancement of the production of this antimatter particle comprising one antistrange quark as well as antiup and antidown quarks. All three constituents of the Λ ¯ {\displaystyle {\bar {\Lambda }}} particle are newly produced in the reaction. The WA85 results were in agreement with theoretical predictions. [ 12 ] In the published report, WA85 interpreted their results as QGP. [ 43 ] NA35 had large systematic errors in its data, which were improved in the following years. Moreover, the collaboration needed to evaluate the pp-background. These results are presented as function of the variable called rapidity which characterizes the speed of the source. The peak of emission indicates that the additionally formed antimatter particles do not originate from the colliding nuclei themselves, but from a source that moves at a speed corresponding to one-half of the rapidity of the incident nucleus that is a common center of momentum frame of reference source formed when both nuclei collide, that is, the hot quark–gluon plasma fireball. One of the most interesting questions is if there is a threshold in reaction energy and/or volume size which needs to be exceeded in order to form a domain in which quarks can move freely. [ 44 ] It is natural to expect that if such a threshold exists the particle yields/ratios we have shown above should indicate that. [ 45 ] One of the most accessible signatures would be the relative Kaon yield ratio. [ 46 ] A possible structure has been predicted, [ 47 ] and indeed, an unexpected structure is seen in the ratio of particles comprising the positive kaon K (comprising anti s-quarks and up-quark) and positive pion particles, seen in the figure (solid symbols). The rise and fall (square symbols) of the ratio has been reported by the CERN NA49 . [ 48 ] [ 49 ] The reason the negative kaon particles do not show this "horn" feature is that the s-quarks prefer to hadronize bound in the Lambda particle, where the counterpart structure is observed. Data point from BNL–RHIC–STAR (red stars) in figure agree with the CERN data. In view of these results the objective of ongoing NA61/SHINE experiment at CERN SPS and the proposed low energy run at BNL RHIC where in particular the STAR detector can search for the onset of production of quark–gluon plasma as a function of energy in the domain where the horn maximum is seen, in order to improve the understanding of these results, and to record the behavior of other related quark–gluon plasma observables. The strangeness production and its diagnostic potential as a signature of quark–gluon plasma has been discussed for nearly 30 years. The theoretical work in this field today focuses on the interpretation of the overall particle production data and the derivation of the resulting properties of the bulk of quark–gluon plasma at the time of breakup. 
[ 33 ] The global description of all produced particles can be attempted based on the picture of a hadronizing hot drop of quark–gluon plasma or, alternatively, on the picture of confined and equilibrated hadron matter. In both cases one describes the data within the statistical thermal production model, but considerable differences in detail differentiate the nature of the source of these particles. The experimental groups working in the field also tend to develop their own data analysis models, so the outside observer sees many different analysis results. There are as many as 10–15 different particle species that follow the pattern predicted for the QGP as a function of reaction energy, reaction centrality, and strangeness content. At yet higher LHC energies, saturation of the strangeness yield and binding to heavy flavor open new experimental opportunities. Scientists studying strangeness as a signature of quark–gluon plasma present and discuss their results at specialized meetings. Well established is the series International Conference on Strangeness in Quark Matter, first organized in Tucson , Arizona , in 1995. [ 50 ] [ 51 ] The latest edition of the conference, 10–15 June 2019, was held in Bari, Italy, attracting about 300 participants. [ 52 ] [ 53 ] A more general venue is the Quark Matter conference, which most recently took place from 3–9 September 2023 in Houston , USA , attracting about 800 participants. [ 54 ] [ 55 ]
https://en.wikipedia.org/wiki/Strangeness_and_quark–gluon_plasma