Polonium hydride (also known as polonium dihydride, hydrogen polonide, or polane) is a chemical compound with the formula PoH2. It is a liquid at room temperature, the second hydrogen chalcogenide with this property after water. It is chemically very unstable and tends to decompose into elemental polonium and hydrogen. It is a volatile and very labile compound, from which many polonides can be derived. Additionally, it is radioactive. [2] Polonium hydride cannot be produced by direct reaction of the elements upon heating. Other unsuccessful routes to synthesis include the reaction of polonium tetrachloride (PoCl4) with lithium aluminium hydride (LiAlH4), which only produces elemental polonium, and the reaction of hydrochloric acid with magnesium polonide (MgPo). The failure of these synthesis routes may be caused by the radiolysis of polonium hydride upon formation. [3] Trace quantities of polonium hydride may be prepared by reacting hydrochloric acid with polonium-plated magnesium foil. In addition, the diffusion of trace quantities of polonium in palladium or platinum that is saturated with hydrogen (see palladium hydride) may be due to the formation and migration of polonium hydride. [3] Polonium hydride is a more covalent compound than most metal hydrides because polonium straddles the border between metals and metalloids and has some nonmetallic properties. It is intermediate between a hydrogen halide like hydrogen chloride and a metal hydride like stannane. It should have properties similar to those of hydrogen selenide and hydrogen telluride, other borderline hydrides. It is expected to be an endothermic compound, like the lighter hydrogen telluride and hydrogen selenide, and would therefore decompose into its constituent elements, releasing heat in the process. The amount of heat given off in the decomposition of polonium hydride is over 100 kJ/mol, the largest of all the hydrogen chalcogenides.
It is predicted that, like the other hydrogen chalcogenides, polonium may form two types of salts: polonides (containing the Po2− anion) and salts derived from polonium hydride (containing the –PoH group, which would be the polonium analogue of thiols, selenols and tellurols). However, no salts derived from polonium hydride are known. An example of a polonide is lead polonide (PbPo), which occurs naturally as lead is formed in the alpha decay of polonium. [4] Polonium hydride is difficult to work with due to the extreme radioactivity of polonium and its compounds and has only been prepared in very dilute tracer quantities. As a result, its physical properties are not definitely known. [3] It is also unknown whether polonium hydride forms an acidic solution in water like its lighter homologues, or whether it behaves more like a metal hydride (see also hydrogen astatide).
https://en.wikipedia.org/wiki/PoH2
Polonium monoxide (also known as polonium(II) oxide) is a chemical compound with the formula PoO. It is one of three oxides of polonium, the other two being polonium dioxide (PoO2) and polonium trioxide (PoO3). It is an interchalcogen. Polonium monoxide is a black solid. It is formed during the radiolysis of polonium sulfite (PoSO3) and polonium selenite (PoSeO3). [1] [2] On contact with oxygen or water, both polonium monoxide and its related hydroxide (polonium(II) hydroxide, Po(OH)2) are quickly oxidized to Po(IV). [2]
https://en.wikipedia.org/wiki/PoO
Polonium dioxide (also known as polonium(IV) oxide) is a chemical compound with the formula PoO2. It is one of three oxides of polonium, the other two being polonium monoxide (PoO) and polonium trioxide (PoO3). It is a pale yellow crystalline solid at room temperature. Under lowered pressure (such as a vacuum), it decomposes into elemental polonium and oxygen at 500 °C. It is the most stable oxide of polonium and is an interchalcogen. [5] At room temperature, polonium dioxide has a face-centered cubic (fluorite) crystal structure; upon heating to high temperatures, it crystallises in the tetragonal crystal system. The cubic form is pale yellow, while the tetragonal form is red. Polonium dioxide darkens upon heating, and is chocolate brown at its sublimation point, 885 °C. [2] [3] The ionic radius of the Po4+ ion is 1.02 or 1.04 Å; thus, the ratio of the ionic radii Po4+/O2− is about 0.73, the lower limit of stability for the cubic crystal system, allowing polonium dioxide to have two modifications. When freshly prepared, polonium dioxide is always in the tetragonal form, and changes to the cubic form after being left to stand or after being strongly cooled. [6] Polonium dioxide does not occur naturally due to the scarcity of polonium in nature and the high temperatures (250 °C) required to form the dioxide. [2] Polonium dioxide is prepared by reacting elemental polonium with oxygen at 250 °C, or by thermal decomposition of polonium(IV) hydroxide (PoO(OH)2) or of various polonium salts such as polonium disulfate (Po(SO4)2), polonium selenate (Po(SeO4)2), or polonium tetranitrate (Po(NO3)4). [2] [4] When placed in hydrogen, polonium dioxide is slowly reduced to metallic polonium at 200 °C; the same reduction occurs at 250 °C in ammonia or hydrogen sulfide. When heated in sulfur dioxide at 250 °C, a white compound is formed, possibly a polonium sulfite.
[6] When polonium dioxide is hydrated, polonous acid (H2PoO3), a pale yellow, voluminous precipitate, is formed. Despite its name, polonous acid is an amphoteric compound, reacting with both acids and bases. [2] [4] Halogenation of polonium dioxide with the hydrogen halides yields the polonium tetrahalides (PoO2 + 4 HX → PoX4 + 2 H2O). [2] In reactions, polonium dioxide behaves very much like its homologue tellurium dioxide, forming Po(IV) salts; however, the acidic character of the chalcogen oxides decreases going down the group, and polonium dioxide and polonium(IV) hydroxide are much less acidic than their lighter homologues. [6] For example, SO2, SO3, SeO2, SeO3 and TeO3 are acidic, but TeO2 is amphoteric, and PoO2, while amphoteric, even shows some basic character. [7] The reaction of polonium dioxide with potassium hydroxide or potassium nitrate in air gives the colourless potassium polonite (K2PoO3). [6] Polonium dioxide is closely related to the polonite anion (PoO3^2−), similar to the relationship between polonium trioxide and the polonate anion (PoO4^2−). Polonium dioxide has no uses outside of basic research. [6] Polonium, whether in elemental form or as any polonium compound, such as polonium dioxide, is extremely radioactive. Thus PoO2 must be handled in a glove box, which must itself be enclosed in a second, similar box maintained at a slightly higher pressure than the first to prevent the radioactive materials from leaking out. Gloves made of natural rubber do not provide sufficient protection against the radiation from polonium, so surgical gloves are necessary; neoprene gloves shield radiation from polonium better than natural rubber. [6]
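The radius-ratio figure quoted in the crystal-structure discussion above can be checked with a line of arithmetic. Note that the O2− radius of 1.40 Å used here is the conventional reference value, assumed because the text does not state it:

```python
# Radius-ratio estimate for PoO2. The O2- radius (1.40 Å) is an assumed
# standard reference value; the Po4+ radii are the two values quoted above.
R_OXIDE = 1.40
for r_po in (1.02, 1.04):
    print(f"r(Po4+)/r(O2-) = {r_po / R_OXIDE:.2f}")  # 0.73 and 0.74
```

Both quotients sit at the quoted lower limit (about 0.73) for stability of the cubic fluorite arrangement, consistent with the compound having both a cubic and a tetragonal modification.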
https://en.wikipedia.org/wiki/PoO2
Polonium trioxide (also known as polonium(VI) oxide) is a chemical compound with the formula PoO3. It is one of three oxides of polonium, the other two being polonium monoxide (PoO) and polonium dioxide (PoO2). It is an interchalcogen that has so far only been detected in trace amounts. [1] [2] It has been reported that trace quantities of polonium trioxide form during the anodic deposition of polonium from acidic solutions. Although there is no experimental evidence for this, the fact that the deposit dissolves in hydrogen peroxide suggests that it contains polonium in a high oxidation state. It has been predicted that polonium trioxide may be formed by heating polonium dioxide and chromium trioxide together in air. [2] It is very difficult to oxidize polonium beyond Po(IV); for example, the only hexahalide of polonium is the hexafluoride, PoF6, and fluorine is already the most electronegative element [2] (though polonium hexaiodide was once reportedly formed in the vapour phase, it immediately decomposed). [3] However, the difficulty in obtaining polonium trioxide and polonates (containing the PoO4^2− anion, analogous to sulfate, selenate, and tellurate) by direct oxidation of Po(IV) compounds may be due to the fact that polonium-210, while the most easily available isotope of polonium, is strongly radioactive. Similar work with curium shows that it is easier to achieve higher oxidation states with longer-lived isotopes; thus, it may be easier to obtain Po(VI) (especially polonium trioxide) using the longer-lived polonium-208 or polonium-209. [2] It has been suggested that Po(VI) might be more stabilized in anions such as PoF8^2− or PoO6^6−, like other high oxidation states. [3]
https://en.wikipedia.org/wiki/PoO3
Po is a word that precedes and signals a provocation. A provocation is an idea which moves thinking forward to a new place from which new ideas or solutions may be found. Po is also an interjection, aimed at obtaining further clarification without agreeing or disagreeing. The term po was coined by Edward de Bono as part of a lateral thinking technique to suggest forward movement, that is, making a statement and seeing where it leads. It is an extraction from words such as hypothesis, suppose, possible and poetry, all of which indicate forward movement and contain the syllable "po". Po can be taken to refer to any of the following: provoking operation, provocative operation or provocation operation. Additionally, in Maori, the word "po" refers to the original chaotic state of formlessness, from which evolution occurred; de Bono argues that this context also applies to the term. [citation needed] For example, provocations might be offered in response to "sales are dropping off because our product is perceived as old-fashioned". Some of the resulting ideas may be impractical, not sensible or not business-minded. Their value is that they move thinking from a place where it is entrenched to a place where it can move, and from there they may develop into workable proposals. The point is that an initial po may seem silly, but a further development may be very good indeed. The intermediate silly idea is a necessary step to finding the good idea.
https://en.wikipedia.org/wiki/Po_(lateral_thinking)
PocketGenie was an embedded wireless application developed for two-way pagers in 1997. At the time, WAP was commonplace for mobile services; PocketGenie instead utilized HTML and URL links. [2] [3] It was the first commercial wireless service for RIM (BlackBerry) and Motorola. [4] WolfeTech was founded by Surya Jayaweera (CEO) in January 1997. Jayaweera developed PocketGenie in November 1996; two days later, he drove to COMDEX and pitched the idea to Motorola, whose approval led to the start of WolfeTech. [5] [6] PocketGenie was announced to the public in 1997. [7] PocketGenie was text-based software before it was upgraded to an icon-based GUI in 2000. Users could type keywords into the software's menu to access the Internet services of WolfeTech's business partners; users could check stock reports and news headlines, get driving directions, or purchase assorted goods. Later versions made improvements to business transactions and included a multilingual translator. [8] [9] Over 200 different services were provided. [10] A companion application, PocketInternet, permitted global web browsing and full HTML functionality. [2] In its review, PC Mag gave PocketGenie a good performance rating but noted sluggish lag times. [1] In 2011, the trademark for PocketGenie expired. [11]
https://en.wikipedia.org/wiki/PocketGenie
PocketMail was a very small and inexpensive mobile computer with a built-in acoustic coupler, developed by the company PocketScience using technology developed by NASA. [1] This was the first mass-market mobile email service. The hardware cost around US$100 and the service was initially US$9.95 per month for unlimited use; the monthly fee later increased. After the company made a reference hardware design, leading consumer electronics manufacturers Audiovox, Sharp, JVC, and others made their own PocketMail devices. [2] Later, a PocketMail dongle was created for the PalmPilot. PocketMail users were given a custom email address or could sync PocketMail with their existing email account (including AOL accounts). Although actually a computer, its main function was email. Its main advantages were that it was simple and that it worked with any phone, even outside the United States. It was a low-cost personal digital assistant (PDA) with an inbuilt acoustic coupler which allowed users to send and receive email while connected to a normal telephone, thus allowing use outside of mobile phone range, or without the need to be signed up with a mobile telephone provider. Popularity of PocketMail peaked around 2000, when the company stopped investing in new technology development. In Australia, the company known as PocketMail stopped marketing the PocketMail service in 2007, changed its name to Adavale Resources Limited and now owns uranium mining prospects in Queensland and South Australia. [3]
https://en.wikipedia.org/wiki/PocketMail
A pocket comparator is an optical device for measuring and inspection, comprising a loupe and a reticle. [1] The instrument was developed and manufactured by the Bell & Howell Company, but similar instruments under other names are made by other manufacturers. [2] [3] It is used for a variety of small-scale measurement and inspection tasks. Measurements are performed by bringing the surface of the reticle as close as possible to the work being inspected. [8]
https://en.wikipedia.org/wiki/Pocket_comparator
Pocket filters are filters used in HVAC applications [1] to remove dust from ambient air. They are commonly used as final filters in commercial applications, or as prefilters for HEPA filters in hospitals and in the pharmaceutical industry. Pocket filters were historically produced from glass fiber media; however, in recent years a shift to synthetic media has taken place. Glass fiber media is prone to bacterial growth and shedding; on the other hand, it has the advantage of increased filtration efficiency over time. Synthetic media filters are usually electrostatically charged to increase efficiency. The drawback of this approach is that the media loses efficiency, by as much as 75%, over time. To remedy this, manufacturers have introduced new multilayer synthetic media with a smaller fiber diameter. This approach contributes not only to a limited loss of efficiency but also to a service life that is longer by more than 30%. Some filters come with bacteriostatically treated media to prevent bacterial growth, while others come with media that is inherently unsuitable for bacterial growth. Pocket filters come with 3–12 pockets depending on the frame size, and frames can be made of metal or plastic in a range of common sizes. Despite the advances in synthetic media, pocket filters are slowly being overtaken by rigid filters made from glass fiber paper or, more recently, from nanofiber. These filters usually offer a service life that is 4–8 times that of pocket filters, at a lower energy expenditure.
https://en.wikipedia.org/wiki/Pocket_filter
A pocket forest is created by planting native trees and shrubs in close proximity as a means of rapidly restoring native plant species in damaged ecosystems. While forests naturally grow through a primary stage and then a secondary stage before reaching their climax stage, pocket forests are created by a dense planting of climax-stage species, which grow rapidly in competition for sunlight. Pocket forests have been embraced by environmentalists as a means of reforesting urban spaces and teaching urban residents about native forest environments. The growing interest in pocket forests was inspired in large part by the work of Japanese botanist Akira Miyawaki, whose "Miyawaki forests" have influenced the development of a variety of pocket forest methodologies adapted to different climates and spatial constraints. A variety of protocols for site preparation and planting have been developed, all sharing the same underlying principles as the Miyawaki method. [citation needed] The following is an example methodology: The area to be planted is first covered with a layer of cardboard, which is then covered with 3–6 in (7.6–15.2 cm) of compost and allowed to acclimate to local moisture conditions for several months. The covered area is then planted with year-old plant nursery saplings spaced approximately 2 ft (0.6 m) apart. The entire surface area should be planted at the same time with a variety of native species, so that no saplings of the same species are adjacent to each other. [1] Watering is unnecessary for native plants acclimated to the local environment, although watering for the first few years after planting, and during drought periods, will reduce the mortality of individual plants. [2] Pocket forests planted with greater density than commercial timberland utilize edge lighting in addition to overhead lighting to grow faster while absorbing more carbon dioxide per acre.
[3] A pocket forest needs a minimum of three different species of nursery saplings; an arrangement of species A, B and C can then avoid planting the same species in adjacent positions. Miyawaki developed the method as a means of replenishing forest soils by allowing dead leaves and twigs to decompose in a moist, wood-rotting ecosystem. [9] This process may be less successful in drier fire ecosystems, where nutrients are recycled as ashes. [10] [11] The dense pocket forest forms a capture mechanism for wind-blown embers, dried ground litter is an ignition source, and the multi-layered pocket forest forms a fuel ladder, posing wildfire risks in urban areas. [12] Miyawaki forest at Edappally, Ernakulam.
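The three-species adjacency rule described above can be sketched in a few lines. This layout (cycling A, B, C with a one-position shift per row) is a hypothetical illustration, not a pattern from the original text; it checks only orthogonal neighbours:

```python
# Illustrative only: one way to lay out three species A, B, C so that no
# sapling has the same species directly beside, in front of, or behind it.
SPECIES = "ABC"

def planting_grid(rows, cols):
    # Shift the A-B-C cycle by one position on each successive row.
    return [[SPECIES[(r + c) % 3] for c in range(cols)] for r in range(rows)]

grid = planting_grid(4, 6)
for row in grid:
    print(" ".join(row))

# Verify: no orthogonally adjacent pair shares a species.
for r in range(4):
    for c in range(6):
        if c + 1 < 6:
            assert grid[r][c] != grid[r][c + 1]
        if r + 1 < 4:
            assert grid[r][c] != grid[r + 1][c]
```

In practice a planter would also mix in more than three species and randomize within these constraints, as the methodology above recommends planting many native species at once.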
https://en.wikipedia.org/wiki/Pocket_forest
A pocket prairie is a small, artificially created, self-sustaining area of land where forbs and other prairie plants predominate. [1] Often these plants are native. Pocket prairies are typically found in urban and suburban areas where there is a lack of vegetation and wildlife (e.g. vacant lots, backyards, green spaces). [2] These parcels of land serve as a habitat for nearby bird, insect, and mammal species, and provide a number of related ecological benefits. As a result of economic and population decline, thousands of vacant lots are dispersed across Cleveland, Ohio. From the abundance of these lots came The Cleveland Pocket Prairie Project, an Ohio State University-led project which aims to repurpose 64 vacant lots with 8 distinct plant communities referred to as "pocket prairies". [4] The project employs 8 different treatment methods to create plant communities, two of which use existing vegetation while the rest introduce plant mixes. The project took place in Cleveland neighborhoods including Glenville, Slavic Village, Buckeye, Central, Tremont/Clark Fulton, Fairfax, and Hough. [4]
https://en.wikipedia.org/wiki/Pocket_prairie
Pocklington's algorithm is a technique for solving a congruence of the form x^2 ≡ a (mod p), where x and a are integers and a is a quadratic residue. The algorithm is one of the first efficient methods to solve such a congruence. It was described by H. C. Pocklington in 1917. [1] (Note: all congruences ≡ are taken mod p unless indicated otherwise.) The input is an odd prime p and an integer a that is a quadratic residue mod p; the output is an x satisfying x^2 ≡ a. Pocklington separates 3 different cases for p. The first case: if p = 4m + 3, with m ∈ ℕ, the solution is x ≡ ±a^(m+1). The second case: if p = 8m + 5, with m ∈ ℕ, the solution depends on a^(2m+1): if a^(2m+1) ≡ 1, then x ≡ ±a^(m+1); if a^(2m+1) ≡ −1, then x ≡ ±y/2, where y ≡ (4a)^(m+1) (as in the third example below). The third case: if p = 8m + 1, put D ≡ −a, so the equation to solve becomes x^2 + D ≡ 0. Now find by trial and error t_1 and u_1 so that N = t_1^2 − D·u_1^2 is a quadratic non-residue. Furthermore, let t_n and u_n be defined by (t_1 + u_1·√D)^n = t_n + u_n·√D. The following equalities now hold: t_(2n) = t_n^2 + D·u_n^2, u_(2n) = 2·t_n·u_n, and t_n^2 − D·u_n^2 = N^n. Supposing that p is of the form 4m + 1 (which is true if p is of the form 8m + 1), D is a quadratic residue and t_p ≡ t_1^p ≡ t_1, u_p ≡ u_1^p·D^((p−1)/2) ≡ u_1. Now the equations give a solution t_(p−1) ≡ 1, u_(p−1) ≡ 0. Let p − 1 = 2r. Then 0 ≡ u_(p−1) ≡ 2·t_r·u_r. This means that either t_r or u_r is divisible by p. If it is u_r, put r = 2s and proceed similarly with 0 ≡ 2·t_s·u_s.
Not every u_i is divisible by p, for u_1 is not. The case u_m ≡ 0 with m odd is impossible, because t_m^2 − D·u_m^2 ≡ N^m holds, and this would mean that t_m^2 is congruent to a quadratic non-residue, which is a contradiction. So this loop stops when t_l ≡ 0 for a particular l. This gives −D·u_l^2 ≡ N^l, and because −D is a quadratic residue, l must be even. Put l = 2k. Then 0 ≡ t_l ≡ t_k^2 + D·u_k^2. So the solution of x^2 + D ≡ 0 is obtained by solving the linear congruence u_k·x ≡ ±t_k. The following are 4 examples, corresponding to the 3 different cases into which Pocklington divided forms of p. All ≡ are taken with the modulus in the example. First example: solve x^2 ≡ 43 (mod 47). This looks like the first case; according to the algorithm, x ≡ 43^((47+1)/4) = 43^12 ≡ 2, but then x^2 = 2^2 = 4, not 43, so the algorithm should not be applied at all. The reason the algorithm is not applicable here is that a = 43 is a quadratic non-residue for p = 47. Second example: solve x^2 ≡ 18 (mod 23). The modulus is 23 = 4·5 + 3, so m = 5. The solution should be x ≡ ±18^6 ≡ ±8 (mod 23), which is indeed true: (±8)^2 ≡ 64 ≡ 18 (mod 23). Third example: solve x^2 ≡ 10 (mod 13). The modulus is 13 = 8·1 + 5, so m = 1.
Now verify that 10^(2m+1) ≡ 10^3 ≡ −1 (mod 13). So the solution is x ≡ ±y/2 ≡ ±(4a)^2/2 ≡ ±800 ≡ ±7 (mod 13). This is indeed true: (±7)^2 ≡ 49 ≡ 10 (mod 13). Fourth example: solve the congruence x^2 ≡ 13 (mod 17). For this, write x^2 − 13 ≡ 0, i.e. D ≡ −13. First find t_1 and u_1 such that t_1^2 + 13·u_1^2 is a quadratic non-residue; take for example t_1 = 3, u_1 = 1. Now find t_8, u_8 by repeated doubling, reducing mod 17 along the way: t_2 = −4 ≡ 13, u_2 = 6; then t_4 = 13^2 − 13·6^2 = −299 ≡ 7 (mod 17), u_4 = 2·13·6 = 156 ≡ 3 (mod 17); and t_8 = 7^2 − 13·3^2 = −68 ≡ 0 (mod 17), u_8 = 2·7·3 = 42 ≡ 8 (mod 17). Since t_8 ≡ 0, we have l = 8 and k = 4, and the solution of x^2 − 13 ≡ 0 is obtained by solving the linear congruence u_4·x ≡ ±t_4, that is, 3x ≡ ±7 (mod 17). This has solution x ≡ ±8 (mod 17). Indeed, (±8)^2 = 64 ≡ 13 (mod 17).
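The three cases above can be collected into a short Python sketch. Names such as `pocklington_sqrt` are illustrative, not from the original text; the search for t_1, u_1 in the third case is naive trial, as the text suggests, and the residue test uses Euler's criterion. The modular inverses via `pow(x, -1, p)` require Python 3.8+:

```python
# A minimal sketch of Pocklington's algorithm for x^2 ≡ a (mod p), p an odd prime.

def is_residue(n, p):
    """Euler's criterion: True iff n is a nonzero quadratic residue mod p."""
    return pow(n % p, (p - 1) // 2, p) == 1

def pocklington_sqrt(a, p):
    """Return one root x of x^2 ≡ a (mod p); p - x is the other."""
    a %= p
    if not is_residue(a, p):            # cf. the first example: a = 43, p = 47
        raise ValueError("a is a quadratic non-residue mod p")
    if p % 4 == 3:                      # first case: p = 4m + 3
        return pow(a, (p + 1) // 4, p)
    if p % 8 == 5:                      # second case: p = 8m + 5
        m = (p - 5) // 8
        if pow(a, 2 * m + 1, p) == 1:
            return pow(a, m + 1, p)
        y = pow(4 * a, m + 1, p)        # a^(2m+1) ≡ -1, so x ≡ ±y/2
        return y * pow(2, -1, p) % p
    # third case, p = 8m + 1: solve x^2 + D ≡ 0 with D ≡ -a
    D = -a % p
    t1, u1 = next((t, u) for t in range(1, p) for u in range(1, p)
                  if (t * t - D * u * u) % p
                  and not is_residue(t * t - D * u * u, p))
    def mul(x, y):                      # multiply t + u*sqrt(D) pairs mod p
        return ((x[0] * y[0] + D * x[1] * y[1]) % p,
                (x[0] * y[1] + x[1] * y[0]) % p)
    def tu(n):                          # (t_n, u_n) by binary exponentiation
        r, b = (1, 0), (t1, u1)
        while n:
            if n & 1:
                r = mul(r, b)
            b, n = mul(b, b), n >> 1
        return r
    l = p - 1
    while True:                         # halve while u_l ≡ 0; stop at t_l ≡ 0
        l //= 2
        t, u = tu(l)
        if t == 0:
            break
    tk, uk = tu(l // 2)                 # l is even; solve u_k x ≡ ±t_k
    return tk * pow(uk, -1, p) % p

for a, p in [(18, 23), (10, 13), (13, 17)]:
    x = pocklington_sqrt(a, p)
    print(f"x^2 ≡ {a} (mod {p}): x ≡ ±{x}")   # 8, 7, 8 as in the examples above
```

Run on the worked examples, this reproduces x ≡ ±8 (mod 23), ±7 (mod 13), and ±8 (mod 17), and raises an error for the inapplicable case a = 43, p = 47.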
https://en.wikipedia.org/wiki/Pocklington's_algorithm
Pocosin is a type of palustrine wetland with deep, acidic, sandy, peat soils. [1] Groundwater saturates the soil except during brief seasonal dry spells and during prolonged droughts. Pocosin soils are nutrient-deficient (oligotrophic), especially in phosphorus. [2] Pocosins occur in the southern portions of the Atlantic coastal plain of North America, spanning from southeastern Virginia, through North Carolina, and into South Carolina; the majority of pocosins are found in North Carolina. [3] The Alligator River National Wildlife Refuge was created in 1984 to help preserve pocosin wetlands, and the nearby Cedar Island National Wildlife Refuge also protects pocosin habitat. [4] Pocosins occupy poorly drained higher ground between streams and floodplains, with seeps causing the inundation. There are often perched water tables underlying pocosins. Shrub vegetation is common in a pocosin ecosystem, and pocosins are sometimes called shrub bogs. Pond pines (Pinus serotina) dominate pocosin forests, but loblolly pine (Pinus taeda) and longleaf pine (Pinus palustris) are also associated with pocosins. Additionally, pocosins are home to rare and threatened plant species including the Venus flytrap (Dionaea muscipula) and sweet pitcher plant (Sarracenia rubra). [2] A distinction is sometimes made between short pocosins, which have shorter trees (less than 20 feet (6.1 m)), deeper peat, and fewer soil nutrients, and tall pocosins, which have taller trees (greater than 20 feet (6.1 m)), shallower peat, and more nutrient-rich soil. [2] [5] Where soil saturation is less frequent and peat depths are shallower, pocosins transition into pine flatwoods. A loose definition of "pocosin" can include all shrub and forest bogs, as well as stands of Atlantic white cedar (Chamaecyparis thyoides) and loblolly pine on the Atlantic coastal plain. [2] Pocosins are formed by the accumulation of organic matter, resembling black muck, built up over thousands of years.
This accumulation of material causes the area to be highly acidic and nutrient-deficient. The thickness of the organic buildup varies with location within the pocosin: near the edges the buildup can be several inches thick, while toward the center it can be up to several feet thick. Vegetation on the pocosin also varies: at the edges, more pond pine is found, with an abundance of titi, zenobia (a shrub unique to pocosins), and greenbrier vines; [6] closer to the center, thin stunted trees are typical and fewer shrubs and vines are present. [7] Pocosins are important to migratory birds due to their abundance of various types of berries. [7] Pocosin ecosystems are fire-adapted (pyrophytic). Pond pines exhibit serotiny, such that wildfire can create a pond pine seedbed in the soil. Wildfires in pocosins tend to be intense, sometimes burning deep into the peat and resulting in small lakes and ponds. Wildfires occurring about once a decade tend to cause pond pines to dominate over other trees, and cane (Arundinaria), rather than shrubs, to dominate the understory. More frequent fires result in a pyrophytic shrub understory, while annual fires prevent shrub growth and thin the pond pine forest cover, creating a flooded savanna with grass, sedge, and herb groundcover. The word pocosin has Eastern Algonquian roots. [8] Sources have long attested that the term translates into English as "swamp-on-a-hill", but evidence for this precise translation is lacking. [1] [8] The city of Poquoson, Virginia, located in the coastal plain of Virginia (see the Tidewater region of Virginia), derives its name from this geographic feature.
https://en.wikipedia.org/wiki/Pocosin
In computing, Podman (pod manager) is an open source Open Container Initiative (OCI)-compliant [2] container management tool from Red Hat used for handling containers, images, volumes, and pods on the Linux operating system, [3] with support for macOS and Microsoft Windows via a virtual machine. [4] Based on the libpod library, it offers APIs for the lifecycle management of containers, pods, images, and volumes. The API is identical to the Docker API. [5] Podman Desktop provides an alternative to Docker Desktop. [6] Podman lets containers run without root privileges (rootless), meaning they can be created, run, and managed by regular users without administrator rights, by using Linux namespaces. [7]
https://en.wikipedia.org/wiki/Podman
Podospora anserina is a filamentous ascomycete fungus from the order Sordariales. It is considered a model organism for the study of the molecular biology of senescence (aging), prions, sexual reproduction, and meiotic drive. [1] [2] It has an obligate sexual and pseudohomothallic life cycle. It is a non-pathogenic coprophilous fungus that colonizes the dung of herbivorous animals such as horses, rabbits, cows and sheep. [1] [3] Podospora anserina was originally named Malinvernia anserina Rabenhorst (1857). The name Podospora anserina was subsequently published by Niessl (1883), [4] which is used today to reference the common laboratory strain derived therefrom, 'Niessl'. It is also known as Pleurage anserina (Ces.) Kuntze. [5] [6] The genetics of P. anserina were characterized by Rizet and Engelmann (1949) and reviewed by Esser (1974). Based on 18S rRNA, P. anserina is estimated to have diverged from Neurospora crassa 75 million years ago, and orthologous proteins of the two species share 60–70% homology. [7] Gene cluster orthologs between Aspergillus nidulans and Podospora anserina have 63% identical primary amino acid sequence (even though these species are from distinct classes), while the average amino acid identity of the compared proteomes is about 10% lower, giving rise to hypotheses of distinct species that nevertheless share genes. Podospora is a model organism used to study genetics, aging (senescence, cell degeneration), ascomycete development, heterokaryon incompatibility, [8] mating in fungi, prions, and mitochondrial and peroxisomal physiology. [9] Podospora is easily culturable on complex (full) potato dextrose or cornmeal agar/broth, or even on synthetic medium, and, using modern molecular tools, is easy to manipulate. Its optimal growth temperature is 25–27 °C (77–81 °F), and it can complete its life cycle in 7 to 11 days under laboratory conditions. [1] [10] Most research has been done in a small collection of French strains sampled in the 1920s, in particular the strains named S and s.
[11] These two strains are known to be very similar except for the het-s locus. The reference genome published in 2008 corresponds to S+, a haploid derivative of the S strain with a + mating type. [7] In addition, two other populations have been sampled, one in Usingen, Germany, [12] and the other in Wageningen, the Netherlands, [13] [14] [15] [16] both of which have been used to study spore killing, the phenotypic expression of meiotic drive in fungi. [2] In addition, there are multiple lab-derived strains. Podospora anserina has a definite life span and shows senescence phenotypically (slower growth, fewer aerial hyphae, and increased pigment production in distal hyphae). However, some isolates show increased life span or immortality, and to study the process of aging many genetic manipulations have been performed to produce immortal strains or to increase life span. In general, the mitochondrion and the mitochondrial chromosome are investigated. [21] [22] Senescence occurs because reactive oxygen species produced during respiration limit the life span, and defective mitochondrial DNA can accumulate over time. [19] [23] With this knowledge, focus turned to nutrient availability, respiration (ATP synthesis) and oxidases, such as cytochrome c oxidase. Carotenoids, pigments that are also found in plants and provide health benefits to humans, [24] are known to occur in fungi such as Podospora's divergent relative Neurospora crassa; in N. crassa (and other fungi), the carotenoids produced by the al (albino) genes provide protection against UV radiation. Over-expression of al-2 in Podospora anserina increased life span by 31%. [25] Calorie restriction studies show that decreasing nutrients, such as sugar, increases life span, likely due to slower metabolism and thus decreased reactive oxygen species production, or to the induction of survival genes. Intracellular copper levels were also found to be correlated with growth.
This was studied in Grisea-deleted and ex1-deleted strains, as well as in a wild-type s strain. Podospora without Grisea, a copper transcription factor, had decreased intracellular copper levels, which led to the use of an alternative respiratory pathway that consequently produced less oxidative stress. [ 20 ] In the P. anserina aging model, autophagy , a pathway for the degradation of damaged biomolecules and organelles, was shown to be a longevity assurance mechanism. [ 26 ] The following genes, both allelic and nonallelic, are found to be involved in vegetative incompatibility (only those cloned and characterized are listed): het-c , het-e , het-s , idi-2 , idi-1 , idi-3 , mod-A , mod-D , mod-E , psp-A . Podospora anserina contains at least 9 het loci. [ 27 ] Podospora anserina is known to produce laccases, a type of phenoloxidase. [ 28 ] Original genetic studies by gel electrophoresis led to an estimate of the genome size, c. 35 megabases , with 7 chromosomes and 1 mitochondrial chromosome. In the 1980s the mitochondrial chromosome was sequenced. Then, in 2003, a pilot study was initiated to sequence the regions bordering chromosome V's centromere using BAC clones and direct sequencing. [ 29 ] In 2008, a 10x whole-genome draft sequence was published. [ 7 ] The genome size is now estimated to be 35–36 megabases. [ 7 ] Genetic manipulation in fungi is difficult due to low homologous recombination efficiency and ectopic integrations, [ 30 ] which hinder genetic studies using allele replacement and knock-outs . [ 9 ] In 2005, a method for gene deletion was developed based on a model for Aspergillus nidulans that involved cosmid plasmid transformation. A better system for Podospora was developed in 2008 by using a strain lacking a nonhomologous end joining protein (the Ku protein, known in Podospora as PaKu70 ).
With this method, reportedly 100% of transformants undergo the desired homologous recombination leading to allelic replacement; after the transformation, the PaKu70 deletion can be restored by crossing with a wild-type strain to yield progeny carrying only the targeted gene deletion or allelic exchange (e.g. a point mutation). [ 9 ] It is well known that many organisms across all domains produce secondary metabolites , and fungi are prolific in this regard. Product mining was well underway in the 1990s for the genus Podospora . From Podospora anserina , two new natural products classified as pentaketides , specifically derivatives of benzoquinones , were discovered; these showed antifungal, antibacterial, and cytotoxic activities. [ 31 ] Horizontal gene transfer is common in bacteria and between prokaryotes and eukaryotes, yet is rarer between eukaryotic organisms. Between fungi, secondary metabolite clusters are good candidates for horizontal gene transfer: for example, a functional ST gene cluster that produces sterigmatocystin and originally derived from Aspergillus was found in Podospora anserina . This cluster is well conserved, including, notably, the transcription-factor binding sites. Sterigmatocystin itself is toxic and is a precursor to another toxic metabolite, aflatoxin . [ 32 ]
https://en.wikipedia.org/wiki/Podospora_anserina
Podsolisation is an extreme form of leaching which causes the eluviation of iron and aluminium sesquioxides . [ 1 ] The process generally occurs in areas where precipitation is greater than evapotranspiration . The minerals are removed by a process known as leaching . When organic material is broken down, nutrients are released, but so are organic acids, which act as chelating agents . Many podsol soils form underneath coniferous forests ; because pine trees are evergreen, the litter layer is very thin, which inhibits the production of humus . As a result, an acidic (pH 4.5) mor humus is produced, which provides a greater amount of chelating agents. [ 2 ] In podsolisation, chelating agents break down clay and release minerals such as iron and aluminium. When iron and aluminium are hydrated they become sesquioxides. The sesquioxides are translocated from the A horizon, a zone of out-washing, to the B horizon, a zone of illuviation. Many bases such as calcium and potassium are also leached from this zone, along with organic matter and silica . Often minerals like quartz and silica are left behind in the A horizon. What is significantly different about podsols in comparison to other soils is that the bottom of the A horizon, known as the AE horizon, is an eluviated area which has lost sesquioxides. It tends to be an ash-grey colour. [ 3 ] The B horizon has a dark layer where minerals, organic matter and bases are illuviated (washed in and accumulated). Below this is a red/orange layer where iron and aluminium sesquioxides are deposited. Some bases remain in the soil, though others may be lost by throughflow . In many podsols, iron pans form. These can cause waterlogging, which may then saturate the A horizon, leading to mottling or a gleyed podsol .
https://en.wikipedia.org/wiki/Podsolisation
Pohlke's theorem is the fundamental theorem of axonometry . It was established in 1853 by the German painter and teacher of descriptive geometry Karl Wilhelm Pohlke . The first proof of the theorem was published in 1864 by the German mathematician Hermann Amandus Schwarz , who was a student of Pohlke. The theorem is therefore sometimes also called the theorem of Pohlke and Schwarz . For a mapping of a unit cube, one has to apply an additional scaling either in the space or in the plane. Because a parallel projection and a scaling preserve ratios, one can map an arbitrary point P = ( x , y , z ) {\displaystyle P=(x,y,z)} by the axonometric procedure below. Pohlke's theorem can also be stated in terms of linear algebra. Pohlke's theorem is the justification for the following easy procedure to construct a scaled parallel projection of a 3-dimensional object using coordinates: [ 2 ] [ 3 ] In order to get undistorted pictures, one has to choose the images of the axes and the foreshortenings carefully (see Axonometry ). In order to get an orthographic projection, only the images of the axes are free and the foreshortenings are determined (see de:orthogonale Axonometrie ). Schwarz formulated and proved a more general statement, using a theorem of L'Huilier.
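The coordinate procedure described above — choose plane images of the three coordinate axes (with the foreshortening factors folded into their lengths) and extend linearly — can be sketched in Python. The particular axis images below are an illustrative "cavalier-style" choice, not prescribed by the theorem:

```python
import numpy as np

def axonometric_projection(point, img_x, img_y, img_z):
    """Map a 3D point to the drawing plane by sending each unit axis
    to its chosen 2D image and extending linearly. Pohlke's theorem
    guarantees such images arise from a scaled parallel projection."""
    x, y, z = point
    return x * np.asarray(img_x) + y * np.asarray(img_y) + z * np.asarray(img_z)

# Illustrative choice: x- and y-axes drawn at full length along the
# drawing-plane axes, the z-axis foreshortened to 0.5 and drawn at 135°.
img_x = (1.0, 0.0)
img_y = (0.0, 1.0)
img_z = (0.5 * np.cos(np.radians(135)), 0.5 * np.sin(np.radians(135)))

corner = axonometric_projection((1, 1, 1), img_x, img_y, img_z)
print(corner)  # plane image of the unit-cube corner (1, 1, 1)
```

Linearity means the whole wireframe of an object can be drawn by projecting its vertices this way and connecting them with straight segments.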
https://en.wikipedia.org/wiki/Pohlke's_theorem
Poimapper is on-site data collection, sharing and analysis software. [ 1 ] [ 2 ] The mobile application is used to collect and update data. By uploading data to a cloud server, it is shared among other mobile and office workers. [ 3 ] Poimapper is developed by Pajat Solutions Ltd. Pajat Solutions was founded in 2012 and is headquartered in Finland . In 2013, Pajat was awarded the European CSR award for innovative, non-business partnerships that have helped to solve social problems while creating business advantage. The award came from a partnership in which NGOs such as Plan International were using Poimapper in their health-related monitoring and evaluation work. [ 4 ] Poimapper is also used in supplier audits, due to its ability to process large supplier audit checklists effectively, including score calculations, coloring of results and summarizing findings into corrective action tables. The action table can be shared with suppliers for updating their action proposals, schedules and action status. The solution also supports supplier self-audits.
https://en.wikipedia.org/wiki/Poimapper
In mathematics, and especially topology , a Poincaré complex (named after the mathematician Henri Poincaré ) is an abstraction of the singular chain complex of a closed , orientable manifold . The singular homology and cohomology groups of a closed, orientable manifold are related by Poincaré duality , an isomorphism between homology and cohomology groups . A chain complex is called a Poincaré complex if its homology groups and cohomology groups have the abstract properties of Poincaré duality. [ 1 ] A Poincaré space is a topological space whose singular chain complex is a Poincaré complex. These are used in surgery theory to analyze manifolds algebraically. Let C = { C i } {\displaystyle C=\{C_{i}\}} be a chain complex of abelian groups , and assume that the homology groups of C {\displaystyle C} are finitely generated . Assume that there exists a map Δ : C → C ⊗ C {\displaystyle \Delta \colon C\to C\otimes C} , called a chain-diagonal, with the property that ( ε ⊗ 1 ) Δ = ( 1 ⊗ ε ) Δ {\displaystyle (\varepsilon \otimes 1)\Delta =(1\otimes \varepsilon )\Delta } . Here the map ε : C 0 → Z {\displaystyle \varepsilon \colon C_{0}\to \mathbb {Z} } denotes the ring homomorphism known as the augmentation map , which is defined as follows: if n 1 σ 1 + ⋯ + n k σ k ∈ C 0 {\displaystyle n_{1}\sigma _{1}+\cdots +n_{k}\sigma _{k}\in C_{0}} , then ε ( n 1 σ 1 + ⋯ + n k σ k ) = n 1 + ⋯ + n k ∈ Z {\displaystyle \varepsilon (n_{1}\sigma _{1}+\cdots +n_{k}\sigma _{k})=n_{1}+\cdots +n_{k}\in \mathbb {Z} } . [ 2 ] Using the diagonal as defined above, we are able to form the cap-product pairings H k ( C ) ⊗ H n ( C ) → H n − k ( C ) {\displaystyle H^{k}(C)\otimes H_{n}(C)\to H_{n-k}(C)} , where ⌢ {\displaystyle \scriptstyle \frown } denotes the cap product .
[ 3 ] A chain complex C is called geometric if a chain-homotopy exists between Δ {\displaystyle \Delta } and τ Δ {\displaystyle \tau \Delta } , where τ : C ⊗ C → C ⊗ C {\displaystyle \tau \colon C\otimes C\to C\otimes C} is the transposition/flip given by τ ( a ⊗ b ) = b ⊗ a {\displaystyle \tau (a\otimes b)=b\otimes a} . A geometric chain complex is called an algebraic Poincaré complex , of dimension n , if there exists an element of infinite order in the n -dimensional homology group, say μ ∈ H n ( C ) {\displaystyle \mu \in H_{n}(C)} , such that the cap-product maps μ ⌢ − : H k ( C ) → H n − k ( C ) {\displaystyle \mu \frown -\colon H^{k}(C)\to H_{n-k}(C)} are group isomorphisms for all 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} . These isomorphisms are the isomorphisms of Poincaré duality. [ 4 ] [ 5 ]
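As a simple illustration (a standard example, not spelled out in the text above), the singular chain complex of the n-sphere is an algebraic Poincaré complex of dimension n:

```latex
% Homology and cohomology of S^n:
H_k(S^n) \cong H^k(S^n) \cong
\begin{cases}
\mathbb{Z} & k = 0 \text{ or } k = n, \\
0 & \text{otherwise.}
\end{cases}
% With \mu = [S^n] \in H_n(S^n) \cong \mathbb{Z} a generator (the fundamental
% class, an element of infinite order), cap product with \mu gives the
% Poincaré duality isomorphisms
\mu \frown - \;\colon\; H^k(S^n) \xrightarrow{\ \cong\ } H_{n-k}(S^n),
\qquad 0 \le k \le n .
```

Here the only nontrivial check is in degrees 0 and n, where both sides are infinite cyclic.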
https://en.wikipedia.org/wiki/Poincaré_complex
In the mathematical field of geometric topology , the Poincaré conjecture ( UK : / ˈ p w æ̃ k ær eɪ / , [ 2 ] US : / ˌ p w æ̃ k ɑː ˈ r eɪ / , [ 3 ] [ 4 ] French: [pwɛ̃kaʁe] ) is a theorem about the characterization of the 3-sphere , which is the hypersphere that bounds the unit ball in four-dimensional space. Originally conjectured by Henri Poincaré in 1904, the theorem concerns spaces that locally look like ordinary three-dimensional space but which are finite in extent. Poincaré hypothesized that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere . Attempts to resolve the conjecture drove much progress in the field of geometric topology during the 20th century. The eventual proof built upon Richard S. Hamilton 's program of using the Ricci flow to solve the problem. By developing a number of new techniques and results in the theory of Ricci flow, Grigori Perelman was able to modify and complete Hamilton's program. In papers posted to the arXiv repository in 2002 and 2003, Perelman presented his work proving the Poincaré conjecture (and the more powerful geometrization conjecture of William Thurston ). Over the next several years, several mathematicians studied his papers and produced detailed formulations of his work. Hamilton and Perelman's work on the conjecture is widely recognized as a milestone of mathematical research. Hamilton was recognized with the Shaw Prize in 2011 and the Leroy P. Steele Prize for Seminal Contribution to Research in 2009. The journal Science marked Perelman's proof of the Poincaré conjecture as the scientific Breakthrough of the Year in 2006. [ 5 ] The Clay Mathematics Institute , having included the Poincaré conjecture in their well-known Millennium Prize Problem list, offered Perelman their prize of US$ 1 million in 2010 for the conjecture's resolution. 
[ 6 ] He declined the award, saying that Hamilton's contribution had been equal to his own. [ 7 ] [ 8 ] The Poincaré conjecture was a mathematical problem in the field of geometric topology . In terms of the vocabulary of that field, it says the following: Poincaré conjecture . Every three-dimensional topological manifold which is closed , connected , and has trivial fundamental group is homeomorphic to the three-dimensional sphere . Familiar shapes, such as the surface of a ball (which is known in mathematics as the two -dimensional sphere) or of a torus , are two-dimensional. The surface of a ball has trivial fundamental group, meaning that any loop drawn on the surface can be continuously deformed to a single point. By contrast, the surface of a torus has nontrivial fundamental group, as there are loops on the surface which cannot be so deformed. Both are topological manifolds which are closed (meaning that they have no boundary and take up a finite region of space) and connected (meaning that they consist of a single piece). Two closed manifolds are said to be homeomorphic when it is possible for the points of one to be reallocated to the other in a continuous way. Because the (non)triviality of the fundamental group is known to be invariant under homeomorphism, it follows that the two-dimensional sphere and torus are not homeomorphic. The two-dimensional analogue of the Poincaré conjecture says that any two-dimensional topological manifold which is closed and connected but non-homeomorphic to the two-dimensional sphere must possess a loop which cannot be continuously contracted to a point. (This is illustrated by the example of the torus, as above.) This analogue is known to be true via the classification of closed and connected two-dimensional topological manifolds, which was understood in various forms since the 1860s. 
In higher dimensions, the closed and connected topological manifolds do not have a straightforward classification, precluding an easy resolution of the Poincaré conjecture. In the 1800s, Bernhard Riemann and Enrico Betti initiated the study of topological invariants of manifolds . [ 9 ] [ 10 ] They introduced the Betti numbers , which associate to any manifold a list of nonnegative integers. Riemann showed that a closed connected two-dimensional manifold is fully characterized by its Betti numbers. As part of his 1895 paper Analysis Situs (announced in 1892), Poincaré showed that Riemann's result does not extend to higher dimensions. [ 11 ] [ 12 ] [ 13 ] To do this he introduced the fundamental group as a novel topological invariant, and was able to exhibit examples of three-dimensional manifolds which have the same Betti numbers but distinct fundamental groups. He posed the question of whether the fundamental group is sufficient to topologically characterize a manifold (of given dimension), although he made no attempt to pursue the answer, saying only that it would "demand lengthy and difficult study". [ 12 ] [ 13 ] [ 14 ] The primary purpose of Poincaré's paper was the interpretation of the Betti numbers in terms of his newly-introduced homology groups , along with the Poincaré duality theorem on the symmetry of Betti numbers. Following criticism of the completeness of his arguments, he released a number of subsequent "supplements" to enhance and correct his work. The closing remark of his second supplement, published in 1900, said: [ 15 ] [ 13 ] In order to avoid making this work too prolonged, I confine myself to stating the following theorem, the proof of which will require further developments: Each polyhedron which has all its Betti numbers equal to 1 and all its tables T q orientable is simply connected, i.e., homeomorphic to a hypersphere. 
(In a modern language, taking note of the fact that Poincaré is using the terminology of simple-connectedness in an unusual way, [ 16 ] this says that a closed connected oriented manifold with the homology of a sphere must be homeomorphic to a sphere. [ 14 ] ) This modified his negative generalization of Riemann's work in two ways. Firstly, he was now making use of the full homology groups and not only the Betti numbers. Secondly, he narrowed the scope of the problem from asking if an arbitrary manifold is characterized by topological invariants to asking whether the sphere can be so characterized. However, after publication he found his announced theorem to be incorrect. In his fifth and final supplement, published in 1904, he proved this with the counterexample of the Poincaré homology sphere , which is a closed connected three-dimensional manifold which has the homology of the sphere but whose fundamental group has 120 elements. This example made it clear that homology is not powerful enough to characterize the topology of a manifold. In the closing remarks of the fifth supplement, Poincaré modified his erroneous theorem to use the fundamental group instead of homology: [ 17 ] [ 13 ] One question remains to be dealt with: is it possible for the fundamental group of V to reduce to the identity without V being simply connected? [...] However, this question would carry us too far away. In this remark, as in the closing remark of the second supplement, Poincaré used the term "simply connected" in a way which is at odds with modern usage, as well as his own 1895 definition of the term. [ 12 ] [ 16 ] (According to modern usage, Poincaré's question is a tautology , asking if it is possible for a manifold to be simply connected without being simply connected.) However, as can be inferred from context, [ 18 ] Poincaré was asking whether the triviality of the fundamental group uniquely characterizes the sphere. 
[ 14 ] Throughout the work of Riemann, Betti, and Poincaré, the topological notions in question are not defined or used in a way that would be recognized as precise from a modern perspective. Even the key notion of a "manifold" was not used in a consistent way in Poincaré's own work, and there was frequent confusion between the notion of a topological manifold , a PL manifold , and a smooth manifold . [ 16 ] [ 19 ] For this reason, it is not possible to read Poincaré's questions unambiguously. It is only through the formalization and vocabulary of topology as developed by later mathematicians that Poincaré's closing question has been understood as the "Poincaré conjecture" as stated in the preceding section. However, despite its usual phrasing in the form of a conjecture, proposing that all manifolds of a certain type are homeomorphic to the sphere, Poincaré only posed an open-ended question, without venturing to conjecture one way or the other. Moreover, there is no evidence as to which way he believed his question would be answered. [ 14 ] In the 1930s, J. H. C. Whitehead claimed a proof but then retracted it. In the process, he discovered some examples of simply-connected (indeed contractible, i.e. homotopically equivalent to a point) non-compact 3-manifolds not homeomorphic to R 3 {\displaystyle \mathbb {R} ^{3}} , the prototype of which is now called the Whitehead manifold . In the 1950s and 1960s, other mathematicians attempted proofs of the conjecture only to discover that they contained flaws. Influential mathematicians such as Georges de Rham , R. H. Bing , Wolfgang Haken , Edwin E. Moise , and Christos Papakyriakopoulos attempted to prove the conjecture. In 1958, R. H. Bing proved a weak version of the Poincaré conjecture: if every simple closed curve of a compact 3-manifold is contained in a 3-ball, then the manifold is homeomorphic to the 3-sphere. [ 20 ] Bing also described some of the pitfalls in trying to prove the Poincaré conjecture. 
[ 21 ] Włodzimierz Jakobsche showed in 1978 that, if the Bing–Borsuk conjecture is true in dimension 3, then the Poincaré conjecture must also be true. [ 22 ] Over time, the conjecture gained the reputation of being particularly tricky to tackle. John Milnor commented that sometimes the errors in false proofs can be "rather subtle and difficult to detect". [ 23 ] Work on the conjecture improved understanding of 3-manifolds. Experts in the field were often reluctant to announce proofs and tended to view any such announcement with skepticism. The 1980s and 1990s witnessed some well-publicized fallacious proofs (which were not actually published in peer-reviewed form). [ 24 ] [ 25 ] An exposition of attempts to prove this conjecture can be found in the non-technical book Poincaré's Prize by George Szpiro . [ 26 ] The classification of closed surfaces gives an affirmative answer to the analogous question in two dimensions. For dimensions greater than three, one can pose the Generalized Poincaré conjecture: is a homotopy n -sphere homeomorphic to the n -sphere? A stronger assumption than simply-connectedness is necessary; in dimensions four and higher there are simply-connected, closed manifolds which are not homotopy equivalent to an n -sphere. Historically, while the conjecture in dimension three seemed plausible, the generalized conjecture was thought to be false. In 1961, Stephen Smale shocked mathematicians by proving the Generalized Poincaré conjecture for dimensions greater than four and extended his techniques to prove the fundamental h-cobordism theorem . In 1982, Michael Freedman proved the Poincaré conjecture in four dimensions. Freedman's work left open the possibility that there is a smooth four-manifold homeomorphic to the four-sphere which is not diffeomorphic to the four-sphere. This so-called smooth Poincaré conjecture, in dimension four, remains open and is thought to be very difficult. 
Milnor 's exotic spheres show that the smooth Poincaré conjecture is false in dimension seven, for example. These earlier successes in higher dimensions left the case of three dimensions in limbo. The Poincaré conjecture was essentially true in both dimension four and all higher dimensions for substantially different reasons. In dimension three, the conjecture had an uncertain reputation until the geometrization conjecture put it into a framework governing all 3-manifolds. John Morgan wrote: [ 27 ] It is my view that before Thurston 's work on hyperbolic 3-manifolds and … the Geometrization conjecture there was no consensus among the experts as to whether the Poincaré conjecture was true or false. After Thurston's work, notwithstanding the fact that it had no direct bearing on the Poincaré conjecture, a consensus developed that the Poincaré conjecture (and the Geometrization conjecture) were true. Hamilton's program was started in his 1982 paper in which he introduced the Ricci flow on a manifold and showed how to use it to prove some special cases of the Poincaré conjecture. [ 28 ] In the following years, he extended this work but was unable to prove the conjecture. The actual solution was not found until Grigori Perelman published his papers. In late 2002 and 2003, Perelman posted three papers on arXiv . [ 29 ] [ 30 ] [ 31 ] In these papers, he sketched a proof of the Poincaré conjecture and a more general conjecture, Thurston's geometrization conjecture , completing the Ricci flow program outlined earlier by Richard S. Hamilton . From May to July 2006, several groups presented papers that filled in the details of Perelman's proof of the Poincaré conjecture, as follows: In this paper, we shall present the Hamilton-Perelman theory of Ricci flow. Based on it, we shall give the first written account of a complete proof of the Poincaré conjecture and the geometrization conjecture of Thurston. 
While the complete work is an accumulated efforts of many geometric analysts, the major contributors are unquestionably Hamilton and Perelman. All three groups found that the gaps in Perelman's papers were minor and could be filled in using his own techniques. On August 22, 2006, the ICM awarded Perelman the Fields Medal for his work on the Ricci flow, but Perelman refused the medal. [ 38 ] [ 39 ] John Morgan spoke at the ICM on the Poincaré conjecture on August 24, 2006, declaring that "in 2003, Perelman solved the Poincaré Conjecture". [ 40 ] In December 2006, the journal Science honored the proof of the Poincaré conjecture as the Breakthrough of the Year and featured it on its cover. [ 5 ] Hamilton's program for proving the Poincaré conjecture involves first putting a Riemannian metric on the unknown simply connected closed 3-manifold. The basic idea is to try to "improve" this metric; for example, if the metric can be improved enough so that it has constant positive curvature, then according to classical results in Riemannian geometry, it must be the 3-sphere. Hamilton prescribed the " Ricci flow equation" ∂ g / ∂ t = − 2 R {\displaystyle \partial g/\partial t=-2R} for improving the metric, where g is the metric and R its Ricci curvature, and one hopes that, as the time t increases, the manifold becomes easier to understand. Ricci flow expands the negative curvature part of the manifold and contracts the positive curvature part. In some cases, Hamilton was able to show that this works; for example, his original breakthrough was to show that if the Riemannian manifold has positive Ricci curvature everywhere, then the above procedure can only be followed for a bounded interval of parameter values, t ∈ [ 0 , T ) {\displaystyle t\in [0,T)} with T < ∞ {\displaystyle T<\infty } , and more significantly, that there are numbers c t {\displaystyle c_{t}} such that as t ↗ T {\displaystyle t\nearrow T} , the Riemannian metrics c t g ( t ) {\displaystyle c_{t}g(t)} smoothly converge to one of constant positive curvature.
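A standard worked example (not given in the text, stated for the Ricci flow equation ∂g/∂t = −2 Ric): on the round n-sphere, whose Ricci curvature is Ric = (n − 1)g, the flow simply shrinks the metric homothetically and becomes extinct in finite time:

```latex
% Ansatz: g(t) = \lambda(t)\, g_0 with g_0 the round metric, so
% \mathrm{Ric}(g_0) = (n-1)\, g_0. Since the Ricci tensor is invariant under
% constant rescaling, \mathrm{Ric}(g(t)) = (n-1)\, g_0, and the flow
% \partial_t g = -2\,\mathrm{Ric}(g) reduces to \lambda'(t) = -2(n-1):
g(t) = \bigl(1 - 2(n-1)\,t\bigr)\, g_0 ,
\qquad 0 \le t < T = \frac{1}{2(n-1)} .
% The metric shrinks to a point as t \nearrow T, while the rescaled metrics
% c_t\, g(t) with c_t = \bigl(1 - 2(n-1)t\bigr)^{-1} are constantly equal to
% g_0, a metric of constant positive curvature — the convergence behavior
% described above, in its simplest instance.
```

This is the model singularity against which Perelman's shrinking spheres and cylinders are measured.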
According to classical Riemannian geometry, the only simply-connected compact manifold which can support a Riemannian metric of constant positive curvature is the sphere. So, in effect, Hamilton showed a special case of the Poincaré conjecture: if a compact simply-connected 3-manifold supports a Riemannian metric of positive Ricci curvature, then it must be diffeomorphic to the 3-sphere. If, instead, one only has an arbitrary Riemannian metric, the Ricci flow equations must lead to more complicated singularities. Perelman's major achievement was to show that, from a certain perspective, any singularities that appear in finite time can only look like shrinking spheres or cylinders. With a quantitative understanding of this phenomenon, he cuts the manifold along the singularities, splitting the manifold into several pieces, and then continues with the Ricci flow on each of these pieces. This procedure is known as Ricci flow with surgery. Perelman provided a separate argument based on curve shortening flow to show that, on a simply-connected compact 3-manifold, any solution of the Ricci flow with surgery becomes extinct in finite time. An alternative argument, based on the min-max theory of minimal surfaces and geometric measure theory, was provided by Tobias Colding and William Minicozzi . Hence, in the simply-connected context, the above finite-time phenomenon of Ricci flow with surgery is all that is relevant. In fact, this is even true if the fundamental group is a free product of finite groups and cyclic groups. This condition on the fundamental group turns out to be necessary and sufficient for finite-time extinction. It is equivalent to saying that the prime decomposition of the manifold has no acyclic components and turns out to be equivalent to the condition that all geometric pieces of the manifold have geometries based on the two Thurston geometries S 2 × R and S 3 .
In the context that one makes no assumption about the fundamental group whatsoever, Perelman made a further technical study of the limit of the manifold for infinitely large times, and in so doing, proved Thurston's geometrization conjecture: at large times, the manifold has a thick-thin decomposition , whose thick piece has a hyperbolic structure, and whose thin piece is a graph manifold . Due to Perelman's and Colding and Minicozzi's results, however, these further results are unnecessary in order to prove the Poincaré conjecture. On November 11, 2002, Russian mathematician Grigori Perelman posted the first of a series of three eprints on arXiv outlining a solution of the Poincaré conjecture. Perelman's proof uses a modified version of a Ricci flow program developed by Richard S. Hamilton . In August 2006, Perelman was awarded, but declined, the Fields Medal (worth $15,000 CAD) for his work on the Ricci flow. On March 18, 2010, the Clay Mathematics Institute awarded Perelman the $1 million Millennium Prize in recognition of his proof. [ 41 ] [ 42 ] Perelman rejected that prize as well. [ 7 ] [ 43 ] Perelman proved the conjecture by deforming the manifold using the Ricci flow (which behaves similarly to the heat equation that describes the diffusion of heat through an object). The Ricci flow usually deforms the manifold towards a rounder shape, except for some cases where it stretches the manifold apart from itself towards what are known as singularities . Perelman and Hamilton then chop the manifold at the singularities (a process called "surgery"), causing the separate pieces to form into ball-like shapes. Major steps in the proof involve showing how manifolds behave when they are deformed by the Ricci flow, examining what sort of singularities develop, determining whether this surgery process can be completed, and establishing that the surgery need not be repeated infinitely many times. The first step is to deform the manifold using the Ricci flow . 
The Ricci flow was defined by Richard S. Hamilton as a way to deform manifolds. The formula for the Ricci flow is an imitation of the heat equation , which describes the way heat flows in a solid. Like the heat flow, Ricci flow tends towards uniform behavior. Unlike the heat flow, the Ricci flow could run into singularities and stop functioning. A singularity in a manifold is a place where it is not differentiable: like a corner or a cusp or a pinching. The Ricci flow was only defined for smooth differentiable manifolds. Hamilton used the Ricci flow to prove that some compact manifolds were diffeomorphic to spheres, and he hoped to apply it to prove the Poincaré conjecture. He needed to understand the singularities. [ 44 ] Hamilton created a list of possible singularities that could form, but he was concerned that some singularities might lead to difficulties. He wanted to cut the manifold at the singularities and paste in caps and then run the Ricci flow again, so he needed to understand the singularities and show that certain kinds of singularities do not occur. Perelman discovered the singularities were all very simple: consider that a cylinder is formed by 'stretching' a circle along a line in another dimension, repeating that process with spheres instead of circles essentially gives the form of the singularities. Perelman proved this using something called the "Reduced Volume", which is closely related to an eigenvalue of a certain elliptic equation . Sometimes, an otherwise complicated operation reduces to multiplication by a scalar (a number). Such numbers are called eigenvalues of that operation. Eigenvalues are closely related to vibration frequencies and are used in analyzing a famous problem: can you hear the shape of a drum? Essentially, an eigenvalue is like a note being played by the manifold. Perelman proved this note goes up as the manifold is deformed by the Ricci flow. 
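The eigenvalue idea mentioned above — an operator acting on certain vectors as plain multiplication by a scalar — can be illustrated numerically (a generic example, unrelated to Perelman's specific elliptic equation):

```python
import numpy as np

# A symmetric matrix standing in for a "complicated operation".
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Its eigenvalues and eigenvectors: A @ v = lam * v for each pair.
# np.linalg.eigh handles symmetric matrices and returns eigenvalues
# in ascending order, with eigenvectors as the columns of the second array.
eigenvalues, eigenvectors = np.linalg.eigh(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # On its eigenvector, the matrix acts as multiplication by the scalar lam.
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # [1. 3.]
```

For this matrix the "notes" are 1 and 3; Perelman's monotonicity result is the statement that the analogous lowest note of his operator can only rise along the Ricci flow.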
This helped him eliminate some of the more troublesome singularities that had concerned Hamilton, particularly the cigar soliton solution, which looked like a strand sticking out of a manifold with nothing on the other side. In essence, Perelman showed that all the strands that form can be cut and capped and none stick out on one side only. Completing the proof, Perelman takes any compact, simply connected, three-dimensional manifold without boundary and starts to run the Ricci flow. This deforms the manifold into round pieces with strands running between them. He cuts the strands and continues deforming the manifold until, eventually, he is left with a collection of round three-dimensional spheres. Then, he rebuilds the original manifold by connecting the spheres together with three-dimensional cylinders, morphs them into a round shape, and sees that, despite all the initial confusion, the manifold was, in fact, homeomorphic to a sphere. One immediate question posed was how one could be sure that infinitely many cuts are not necessary. This was raised due to the cutting potentially progressing forever. Perelman proved this cannot happen by using minimal surfaces on the manifold. A minimal surface is one on which any local deformation increases area; a familiar example is a soap film spanning a bent loop of wire. Hamilton had shown that the area of a minimal surface decreases as the manifold undergoes Ricci flow. Perelman verified what happened to the area of the minimal surface when the manifold was sliced. He proved that, eventually, the area is so small that any cut after the area is that small can only be chopping off three-dimensional spheres and not more complicated pieces. This is described as a battle with a Hydra by Sormani in Szpiro's book cited below. This last part of the proof appeared in Perelman's third and final paper on the subject.
https://en.wikipedia.org/wiki/Poincaré_conjecture
In mathematics, the Poincaré duality theorem, named after Henri Poincaré, is a basic result on the structure of the homology and cohomology groups of manifolds. It states that if M is an n-dimensional oriented closed manifold (compact and without boundary), then the k-th cohomology group of M is isomorphic to the (n − k)-th homology group of M, for all integers k. Poincaré duality holds for any coefficient ring, so long as one has taken an orientation with respect to that coefficient ring; in particular, since every manifold has a unique orientation mod 2, Poincaré duality holds mod 2 without any assumption of orientation. A form of Poincaré duality was first stated, without proof, by Henri Poincaré in 1893. It was stated in terms of Betti numbers: the k-th and (n − k)-th Betti numbers of a closed (i.e., compact and without boundary) orientable n-manifold are equal. The cohomology concept was at that time about 40 years from being clarified. In his 1895 paper Analysis Situs, Poincaré tried to prove the theorem using topological intersection theory, which he had invented. Criticism of his work by Poul Heegaard led him to realize that his proof was seriously flawed. In the first two complements to Analysis Situs, Poincaré gave a new proof in terms of dual triangulations. Poincaré duality did not take on its modern form until the advent of cohomology in the 1930s, when Eduard Čech and Hassler Whitney invented the cup and cap products and formulated Poincaré duality in these new terms. The modern statement of the Poincaré duality theorem is in terms of homology and cohomology: if M is a closed oriented n-manifold, then there is a canonically defined isomorphism H^k(M, Z) → H_{n−k}(M, Z) for any integer k. To define such an isomorphism, one chooses a fixed fundamental class [M] of M, which will exist if M is oriented.
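The Betti-number form of the theorem is easy to check against standard examples. The sketch below uses well-known Betti numbers of a few closed orientable manifolds (standard textbook values, listed here as plain data) and verifies the duality symmetry b_k = b_{n−k}:

```python
# Betti numbers b_0, ..., b_n of some closed orientable manifolds
# (standard values from algebraic topology):
betti = {
    "S^2":       [1, 0, 1],
    "T^2":       [1, 2, 1],
    "T^3":       [1, 3, 3, 1],
    "S^1 x S^2": [1, 1, 1, 1],
    "CP^2":      [1, 0, 1, 0, 1],
}

# Poincaré duality forces the sequence of Betti numbers to be a palindrome:
# b_k = b_(n-k) for every k.
for name, b in betti.items():
    assert b == b[::-1], name
print("b_k = b_(n-k) holds for all examples")
```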
Then the isomorphism is defined by mapping an element α ∈ H^k(M) to the cap product [M] ⌢ α. [1] Homology and cohomology groups are defined to be zero for negative degrees, so Poincaré duality in particular implies that the homology and cohomology groups of orientable closed n-manifolds are zero for degrees bigger than n. Here, homology and cohomology are integral, but the isomorphism remains valid over any coefficient ring. In the case where an oriented manifold is not compact, one has to replace homology by Borel–Moore homology or replace cohomology by cohomology with compact support. Given a triangulated manifold, there is a corresponding dual polyhedral decomposition. The dual polyhedral decomposition is a cell decomposition of the manifold such that the k-cells of the dual polyhedral decomposition are in bijective correspondence with the (n − k)-cells of the triangulation, generalizing the notion of dual polyhedra. Precisely, let T be a triangulation of an n-manifold M. Let S be a simplex of T. Let Δ be a top-dimensional simplex of T containing S, so we can think of S as a subset of the vertices of Δ. Define the dual cell DS corresponding to S so that Δ ∩ DS is the convex hull in Δ of the barycentres of all subsets of the vertices of Δ that contain S. One can check that if S is i-dimensional, then DS is an (n − i)-dimensional cell.
Moreover, the dual cells to T form a CW-decomposition of M, and the only (n − i)-dimensional dual cell that intersects an i-cell S is DS. Thus the pairing C_i M ⊗ C_{n−i} M → Z given by taking intersections induces an isomorphism C_i M → C^{n−i} M, where C_i is the cellular homology of the triangulation T, and C_{n−i} M and C^{n−i} M are the cellular homology and cohomology of the dual polyhedral/CW decomposition of the manifold, respectively. The fact that this is an isomorphism of chain complexes is a proof of Poincaré duality. Roughly speaking, this amounts to the fact that the boundary relation for the triangulation T is the incidence relation for the dual polyhedral decomposition under the correspondence S ⟼ DS. Note that H^k is a contravariant functor while H_{n−k} is covariant. The family of isomorphisms is natural in the following sense: if f : M → N is a continuous map between two oriented n-manifolds which is compatible with orientation, i.e. which maps the fundamental class of M to the fundamental class of N, then f_* ∘ D_M ∘ f^* = D_N, where D_M and D_N denote the duality isomorphisms and f_* and f^* are the maps induced by f in homology and cohomology, respectively. Note the very strong and crucial hypothesis that f maps the fundamental class of M to the fundamental class of N. Naturality does not hold for an arbitrary continuous map f, since in general f^* is not an injection on cohomology.
For example, if f is a covering map then it maps the fundamental class of M to a multiple of the fundamental class of N. This multiple is the degree of the map f. Assuming the manifold M is compact, boundaryless, and orientable, let τH_i M denote the torsion subgroup of H_i M and let fH_i M be the free part; all homology groups are taken with integer coefficients in this section. Then there are bilinear maps which are duality pairings (explained below): the pairing fH_i M ⊗ fH_{n−i} M → Z and the pairing τH_i M ⊗ τH_{n−i−1} M → Q/Z. Here Q/Z is the quotient of the rationals by the integers, taken as an additive group. Notice that in the torsion linking form, there is a −1 in the dimension, so the paired dimensions add up to n − 1, rather than to n. The first form is typically called the intersection product and the second the torsion linking form. Assuming the manifold M is smooth, the intersection product is computed by perturbing the homology classes to be transverse and computing their oriented intersection number. For the torsion linking form, one computes the pairing of x and y by realizing nx as the boundary of some class z. The form then takes the value equal to the fraction whose numerator is the transverse intersection number of z with y, and whose denominator is n. The statement that the pairings are duality pairings means that the adjoint maps fH_i M → Hom(fH_{n−i} M, Z) and τH_i M → Hom(τH_{n−i−1} M, Q/Z) are isomorphisms of groups. This result is an application of Poincaré duality together with the universal coefficient theorem, which gives identifications fH^{n−i} M ≅ Hom(fH_{n−i} M; Z) and τH^{n−i} M ≅ Hom(τH_{n−i−1} M; Q/Z). Thus, Poincaré duality says that fH_i M and fH_{n−i} M are isomorphic, although there is no natural map giving the isomorphism, and similarly τH_i M and τH_{n−i−1} M are also isomorphic, though not naturally.
While for most dimensions Poincaré duality induces a bilinear pairing between different homology groups, in the middle dimension it induces a bilinear form on a single homology group. The resulting intersection form is a very important topological invariant. What is meant by "middle dimension" depends on parity. For even dimension n = 2k, which is more common, this is literally the middle dimension k, and there is a form on the free part of the middle homology, fH_k M ⊗ fH_k M → Z. By contrast, for odd dimension n = 2k + 1, which is less commonly discussed, it is most simply the lower middle dimension k, and there is a form on the torsion part of the homology in that dimension, τH_k M ⊗ τH_k M → Q/Z. However, there is also a pairing between the free part of the homology in the lower middle dimension k and in the upper middle dimension k + 1, fH_k M ⊗ fH_{k+1} M → Z. The resulting groups, while not a single group with a bilinear form, form a simple chain complex and are studied in algebraic L-theory. This approach to Poincaré duality was used by Józef Przytycki and Akira Yasuhara to give an elementary homotopy and diffeomorphism classification of 3-dimensional lens spaces. [2] An immediate consequence of Poincaré duality is that any closed odd-dimensional manifold M has Euler characteristic zero, which in turn implies that any manifold that bounds has even Euler characteristic. Poincaré duality is closely related to the Thom isomorphism theorem. Let M be a compact, boundaryless oriented n-manifold, and M × M the product of M with itself. Let V be an open tubular neighbourhood of the diagonal in M × M. Consider the composite of the cross product H_i M ⊗ H_j M → H_{i+j}(M × M), restriction to the pair (M × M, (M × M) ∖ V), and the Thom isomorphism for the neighbourhood V of the diagonal. Combined, this gives a map H_i M ⊗ H_j M → H_{i+j−n} M, which is the intersection product, generalizing the intersection product discussed above. A similar argument with the Künneth theorem gives the torsion linking form.
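The vanishing of the Euler characteristic in odd dimensions follows directly from the Betti-number symmetry: duality pairs b_k with b_{n−k}, and when n is odd those two terms enter the alternating sum with opposite signs and cancel. A small check, using standard Betti numbers as plain data:

```python
def euler_characteristic(betti):
    """Alternating sum of Betti numbers: chi = sum_k (-1)^k b_k."""
    return sum((-1) ** k * b for k, b in enumerate(betti))

# Closed odd-dimensional manifolds: duality pairs b_k with b_(n-k), which
# carry opposite signs when n is odd, so every term cancels.
assert euler_characteristic([1, 3, 3, 1]) == 0   # 3-torus T^3
assert euler_characteristic([1, 0, 0, 1]) == 0   # 3-sphere S^3
# An even-dimensional example need not vanish: the 2-sphere has chi = 2.
assert euler_characteristic([1, 0, 1]) == 2
print("odd-dimensional examples have Euler characteristic 0")
```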
This formulation of Poincaré duality has become popular [3] as it defines Poincaré duality for any generalized homology theory, given a Künneth theorem and a Thom isomorphism for that homology theory. A Thom isomorphism theorem for a homology theory is now viewed as the generalized notion of orientability for that theory. For example, a spin^C-structure on a manifold is a precise analog of an orientation within complex topological K-theory. The Poincaré–Lefschetz duality theorem is a generalisation for manifolds with boundary. In the non-orientable case, taking into account the sheaf of local orientations, one can give a statement that is independent of orientability: see twisted Poincaré duality. Blanchfield duality is a version of Poincaré duality which provides an isomorphism between the homology of an abelian covering space of a manifold and the corresponding cohomology with compact supports. It is used to obtain basic structural results about the Alexander module and can be used to define the signatures of a knot. With the development of homology theory to include K-theory and other extraordinary theories from about 1955, it was realised that the ordinary homology H_* could be replaced by other theories, once the products on manifolds were constructed; and there are now textbook treatments in full generality. More specifically, there is a general Poincaré duality theorem for a generalized homology theory, which requires a notion of orientation with respect to that homology theory and is formulated in terms of a generalized Thom isomorphism theorem. The Thom isomorphism theorem in this regard can be considered as the germinal idea for Poincaré duality for generalized homology theories.
Verdier duality is the appropriate generalization to (possibly singular ) geometric objects, such as analytic spaces or schemes , while intersection homology was developed by Robert MacPherson and Mark Goresky for stratified spaces , such as real or complex algebraic varieties, precisely so as to generalise Poincaré duality to such stratified spaces. There are many other forms of geometric duality in algebraic topology , including Lefschetz duality , Alexander duality , Hodge duality , and S-duality . More algebraically, one can abstract the notion of a Poincaré complex , which is an algebraic object that behaves like the singular chain complex of a manifold, notably satisfying Poincaré duality on its homology groups, with respect to a distinguished element (corresponding to the fundamental class). These are used in surgery theory to algebraicize questions about manifolds. A Poincaré space is one whose singular chain complex is a Poincaré complex. These are not all manifolds, but their failure to be manifolds can be measured by obstruction theory .
https://en.wikipedia.org/wiki/Poincaré_duality
The Poincaré group , named after Henri Poincaré (1905), [ 1 ] was first defined by Hermann Minkowski (1908) as the isometry group of Minkowski spacetime . [ 2 ] [ 3 ] It is a ten-dimensional non-abelian Lie group that is of importance as a model in our understanding of the most basic fundamentals of physics . The Poincaré group consists of all coordinate transformations of Minkowski space that do not change the spacetime interval between events . For example, if everything were postponed by two hours, including the two events and the path you took to go from one to the other, then the time interval between the events recorded by a stopwatch that you carried with you would be the same. Or if everything were shifted five kilometres to the west, or turned 60 degrees to the right, you would also see no change in the interval. It turns out that the proper length of an object is also unaffected by such a shift. In total, there are ten degrees of freedom for such transformations. They may be thought of as translation through time or space (four degrees, one per dimension); reflection through a plane (three degrees, the freedom in orientation of this plane); or a " boost " in any of the three spatial directions (three degrees). Composition of transformations is the operation of the Poincaré group, with rotations being produced as the composition of an even number of reflections. In classical physics , the Galilean group is a comparable ten-parameter group that acts on absolute time and space . Instead of boosts, it features shear mappings to relate co-moving frames of reference. In general relativity , i.e. under the effects of gravity , Poincaré symmetry applies only locally. A treatment of symmetries in general relativity is not in the scope of this article. Poincaré symmetry is the full symmetry of special relativity . 
It includes: translations (displacements) in time and space, generated by P and forming the abelian group of spacetime translations; rotations in space, generated by J; and boosts, transformations connecting two uniformly moving frames, generated by K. The last two symmetries, J and K, together make the Lorentz group (see also Lorentz invariance); the semi-direct product of the spacetime translations group and the Lorentz group then produces the Poincaré group. Objects that are invariant under this group are then said to possess Poincaré invariance or relativistic invariance. The 10 generators (in four spacetime dimensions) associated with the Poincaré symmetry, by Noether's theorem, imply 10 conservation laws: [4][5] conservation of energy (time translation), of linear momentum (space translations), of angular momentum (rotations), and of the uniform motion of the center of mass (boosts). The Poincaré group is the group of Minkowski spacetime isometries. It is a ten-dimensional noncompact Lie group. The four-dimensional abelian group of spacetime translations is a normal subgroup, while the six-dimensional Lorentz group is also a subgroup, the stabilizer of the origin. The Poincaré group itself is the minimal subgroup of the affine group which includes all translations and Lorentz transformations. More precisely, it is a semidirect product of the spacetime translations group and the Lorentz group, with group multiplication (α_1, f_1) · (α_2, f_2) = (α_1 + f_1 · α_2, f_1 f_2). Another way of putting this is that the Poincaré group is a group extension of the Lorentz group by a vector representation of it; it is sometimes dubbed, informally, the inhomogeneous Lorentz group. In turn, it can also be obtained as a group contraction of the de Sitter group SO(4, 1) ~ Sp(2, 2), as the de Sitter radius goes to infinity. Its positive energy unitary irreducible representations are indexed by mass (nonnegative number) and spin (integer or half integer) and are associated with particles in quantum mechanics (see Wigner's classification). In accordance with the Erlangen program, the geometry of Minkowski space is defined by the Poincaré group: Minkowski space is considered as a homogeneous space for the group.
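The semidirect-product multiplication law can be exercised numerically. The sketch below (written with the (+, −, −, −) metric and a boost along x as illustrative choices) composes two Poincaré transformations, each a pair (Λ, a) acting as x ↦ Λx + a, and checks that the composite still preserves the spacetime interval:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def boost_x(v):
    """Lorentz boost along the x-axis with velocity v (units where c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

def compose(t1, t2):
    """Semidirect-product law: (L1, a1)(L2, a2) = (L1 L2, a1 + L1 a2)."""
    L1, a1 = t1
    L2, a2 = t2
    return (L1 @ L2, a1 + L1 @ a2)

def interval(x, y):
    d = x - y
    return d @ eta @ d

x = np.array([1.0, 2.0, 0.5, -1.0])      # two sample events
y = np.array([0.0, 1.0, 0.0, 3.0])

L, a = compose((boost_x(0.6), np.array([2.0, 0.0, 0.0, 0.0])),
               (boost_x(-0.3), np.array([0.0, 5.0, 0.0, 0.0])))
# the composite is again a Poincaré transformation: intervals are unchanged
print(interval(L @ x + a, L @ y + a), interval(x, y))
```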
In quantum field theory, the universal cover of the Poincaré group, which may be identified with the double cover SL(2, C) ⋉ R^{1,3}, is more important, because representations of SO(1, 3) are not able to describe fields with spin 1/2, i.e. fermions. Here SL(2, C) is the group of complex 2 × 2 matrices with unit determinant, isomorphic to the Lorentz-signature spin group Spin(1, 3). The Poincaré algebra is the Lie algebra of the Poincaré group. It is a Lie algebra extension of the Lie algebra of the Lorentz group. More specifically, the proper (det Λ = 1), orthochronous (Λ^0_0 ≥ 1) part of the Lorentz subgroup (its identity component), SO(1, 3)_+^↑, is connected to the identity and is thus obtained by the exponentiation exp(i a_μ P^μ) exp((i/2) ω_{μν} M^{μν}) of this Lie algebra.
In component form, the Poincaré algebra is given by the commutation relations: [7][8] [P_μ, P_ν] = 0, (1/i) [M_{μν}, P_ρ] = η_{μρ} P_ν − η_{νρ} P_μ, and (1/i) [M_{μν}, M_{ρσ}] = η_{μρ} M_{νσ} − η_{μσ} M_{νρ} − η_{νρ} M_{μσ} + η_{νσ} M_{μρ}, where P is the generator of translations, M is the generator of Lorentz transformations, and η is the (+, −, −, −) Minkowski metric (see Sign convention). The bottom commutation relation is the ("homogeneous") Lorentz group, consisting of rotations, J_i = (1/2) ε_{imn} M^{mn}, and boosts, K_i = M_{i0}. In this notation, the entire Poincaré algebra is expressible in noncovariant (but more practical) language; for the Lorentz part, [J_m, J_n] = i ε_{mnk} J_k, [J_m, K_n] = i ε_{mnk} K_k, and [K_m, K_n] = −i ε_{mnk} J_k, where the bottom-line commutator of two boosts is often referred to as a "Wigner rotation". The simplification [J_m + iK_m, J_n − iK_n] = 0 permits reduction of the Lorentz subalgebra to su(2) ⊕ su(2) and efficient treatment of its associated representations. In terms of the physical parameters, the generators correspond to the energy P^0, the momentum P^i, the angular momentum J_i, and the boost generators K_i. The Casimir invariants of this algebra are P_μ P^μ and W_μ W^μ, where W_μ is the Pauli–Lubanski pseudovector; they serve as labels for the representations of the group. The Poincaré group is the full symmetry group of any relativistic field theory.
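The quoted commutation relations can be checked concretely in the defining (vector) representation, extended to 5×5 affine matrices so that translations act as generators too. In the sketch below the conventional factor of i is absorbed by working with real generators (M = i m gives back the quoted form); the structure constants are identical:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # (+,-,-,-) Minkowski metric

def M(mu, nu):
    """Real Lorentz generator m_{mu nu} in the 5x5 affine representation:
    (m_{mu nu})^a_b = delta(a, nu) eta(mu, b) - delta(a, mu) eta(nu, b)."""
    m = np.zeros((5, 5))
    m[nu, :4] += eta[mu, :4]
    m[mu, :4] -= eta[nu, :4]
    return m

def P(rho):
    """Translation generator: only nonzero entry is row rho, column 4."""
    p = np.zeros((5, 5))
    p[rho, 4] = 1.0
    return p

def comm(A, B):
    return A @ B - B @ A

# verify the Poincaré algebra relations for all index combinations
for mu in range(4):
    for nu in range(4):
        for rho in range(4):
            assert np.allclose(comm(P(mu), P(rho)), 0)        # [P, P] = 0
            lhs = comm(M(mu, nu), P(rho))                      # [M, P] relation
            rhs = eta[mu, rho] * P(nu) - eta[nu, rho] * P(mu)
            assert np.allclose(lhs, rhs)
            for sig in range(4):                               # [M, M] relation
                lhs = comm(M(mu, nu), M(rho, sig))
                rhs = (eta[mu, rho] * M(nu, sig) - eta[mu, sig] * M(nu, rho)
                       - eta[nu, rho] * M(mu, sig) + eta[nu, sig] * M(mu, rho))
                assert np.allclose(lhs, rhs)
print("Poincare algebra relations verified")
```

The 5×5 matrices realize the affine action x ↦ Λx + a as block matrices [[Λ, a], [0, 1]], which is why the translations close among themselves while transforming as a vector under M.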
As a result, all elementary particles fall into representations of this group. These are usually specified by the four-momentum squared of each particle (i.e. its mass squared) and the intrinsic quantum numbers J^{PC}, where J is the spin quantum number, P is the parity and C is the charge-conjugation quantum number. In practice, charge conjugation and parity are violated by many quantum field theories; where this occurs, P and C are forfeited. Since CPT symmetry holds in quantum field theory, a time-reversal quantum number may be constructed from those given. As a topological space, the group has four connected components: the component of the identity; the time-reversed component; the spatial inversion component; and the component which is both time-reversed and spatially inverted. [9] The definitions above can be generalized to arbitrary dimensions in a straightforward manner. The d-dimensional Poincaré group is analogously defined by the semi-direct product R^{1,d−1} ⋊ O(1, d − 1) with the analogous multiplication (α_1, f_1) · (α_2, f_2) = (α_1 + f_1 · α_2, f_1 f_2). The Lie algebra retains its form, with indices μ and ν now taking values between 0 and d − 1. The alternative representation in terms of J_i and K_i has no analogue in higher dimensions.
https://en.wikipedia.org/wiki/Poincaré_group
In mathematics, the Poincaré inequality [1] is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is Friedrichs' inequality. Let p satisfy 1 ≤ p < ∞ and let Ω be a subset bounded in at least one direction. Then there exists a constant C, depending only on Ω and p, such that, for every function u of the Sobolev space W_0^{1,p}(Ω) of zero-trace (a.k.a. zero on the boundary) functions, ‖u‖_{L^p(Ω)} ≤ C ‖∇u‖_{L^p(Ω)}. Assume that 1 ≤ p ≤ ∞ and that Ω is a bounded connected open subset of the n-dimensional Euclidean space R^n with a Lipschitz boundary (i.e., Ω is a Lipschitz domain). Then there exists a constant C, depending only on Ω and p, such that for every function u in the Sobolev space W^{1,p}(Ω), ‖u − u_Ω‖_{L^p(Ω)} ≤ C ‖∇u‖_{L^p(Ω)}, where u_Ω = (1/|Ω|) ∫_Ω u(y) dy is the average value of u over Ω, with |Ω| standing for the Lebesgue measure of the domain Ω. When Ω is a ball, the above inequality is called a (p, p)-Poincaré inequality; for more general domains Ω, the above is more familiarly known as a Sobolev inequality. The necessity of subtracting the average value can be seen by considering constant functions, for which the derivative is zero while, without subtracting the average, the integral of the function can be as large as we wish.
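The zero-trace case can be illustrated numerically in one dimension. On Ω = (0, 1) with p = 2, the function u(x) = sin(πx) vanishes on the boundary and is in fact the extremal function, so the ratio ‖u‖/‖u′‖ equals the sharp constant 1/π exactly (the quadrature below is an approximation on a fine grid):

```python
import numpy as np

# Numerical check of the p = 2 Poincaré inequality on Omega = (0, 1)
# for the zero-trace function u(x) = sin(pi x).
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
u = np.sin(np.pi * x)
du = np.pi * np.cos(np.pi * x)

def l2(f):
    # trapezoidal approximation of the L^2(0, 1) norm
    return np.sqrt(dx * (np.sum(f * f) - 0.5 * (f[0] ** 2 + f[-1] ** 2)))

ratio = l2(u) / l2(du)
print(ratio, 1.0 / np.pi)   # the two values agree: sin is extremal
```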
There are other conditions, instead of subtracting the average, that we can require in order to deal with this issue with constant functions: for example, requiring zero trace, or subtracting the average over some proper subset of the domain. The constant C in the Poincaré inequality may be different from condition to condition. Also note that the issue is not just the constant functions, because it is the same as saying that adding a constant value to a function can increase its integral while the integral of its derivative remains the same. So, simply excluding the constant functions will not solve the issue. In the context of metric measure spaces, the definition of a Poincaré inequality is slightly different. One definition is: a metric measure space supports a (q, p)-Poincaré inequality for some 1 ≤ q, p < ∞ if there are constants C and λ ≥ 1 so that for each ball B in the space, μ(B)^{−1/q} ‖u − u_B‖_{L^q(B)} ≤ C rad(B) μ(B)^{−1/p} ‖∇u‖_{L^p(λB)}. Here we have an enlarged ball on the right-hand side. In the context of metric measure spaces, ‖∇u‖ is the minimal p-weak upper gradient of u in the sense of Heinonen and Koskela. [2] Whether a space supports a Poincaré inequality has turned out to have deep connections to the geometry and analysis of the space. For example, Cheeger has shown that a doubling space satisfying a Poincaré inequality admits a notion of differentiation. [3] Such spaces include sub-Riemannian manifolds and Laakso spaces. There exist other generalizations of the Poincaré inequality to other Sobolev spaces. For example, consider the Sobolev space H^{1/2}(T^2), i.e.
the space of functions u in the L^2 space of the unit torus T^2 with Fourier transform û satisfying [u]^2_{H^{1/2}(T^2)} = Σ_{k ∈ Z^2} |k| |û(k)|^2 < +∞. In this context, the Poincaré inequality says: there exists a constant C such that, for every u ∈ H^{1/2}(T^2) with u identically zero on an open set E ⊆ T^2, ∫_{T^2} |u(x)|^2 dx ≤ C (1 + 1/cap(E × {0})) [u]^2_{H^{1/2}(T^2)}, where cap(E × {0}) denotes the harmonic capacity of E × {0} when thought of as a subset of R^3. [4] Yet another generalization involves weighted Poincaré inequalities, where the Lebesgue measure is replaced by a weighted version. The optimal constant C in the Poincaré inequality is sometimes known as the Poincaré constant for the domain Ω. Determining the Poincaré constant is, in general, a very hard task that depends upon the value of p and the geometry of the domain Ω. Certain special cases are tractable, however. For example, if Ω is a bounded, convex, Lipschitz domain with diameter d, then the Poincaré constant is at most d/2 for p = 1 and d/π for p = 2, [5][6] and this is the best possible estimate on the Poincaré constant in terms of the diameter alone. For smooth functions, this can be understood as an application of the isoperimetric inequality to the function's level sets. [7] In one dimension, this is Wirtinger's inequality for functions. However, in some special cases the constant C can be determined concretely. For example, for p = 2, it is well known that over the domain of the unit isosceles right triangle, C = 1/π (< d/π, where d = √2).
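The sharpness of the d/π constant for p = 2 is visible already in one dimension. On Ω = (0, 1), which has diameter d = 1, the mean-zero function u(x) = cos(πx) attains the Wirtinger bound, so the computed ratio below should come out to 1/π (again up to quadrature error on a fine grid):

```python
import numpy as np

# On Omega = (0, 1) (diameter d = 1), u(x) = cos(pi x) has zero mean and
# ||u - u_mean|| / ||u'|| = 1/pi = d/pi, the sharp p = 2 constant.
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
u = np.cos(np.pi * x)
du = -np.pi * np.sin(np.pi * x)

def trap(f):
    # trapezoidal approximation of the integral over (0, 1)
    return dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))

mean = trap(u)                       # ~0: cos(pi x) integrates to zero here
num = np.sqrt(trap((u - mean) ** 2))
den = np.sqrt(trap(du ** 2))
print(num / den, 1.0 / np.pi)
```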
[8] Furthermore, for a smooth, bounded domain Ω, since the Rayleigh quotient for the Laplace operator in the space W_0^{1,2}(Ω) is minimized by the eigenfunction corresponding to the minimal eigenvalue λ_1 of the (negative) Laplacian, it is a simple consequence that, for any u ∈ W_0^{1,2}(Ω), ‖u‖^2_{L^2} ≤ λ_1^{−1} ‖∇u‖^2_{L^2}, and furthermore, that the constant λ_1 is optimal. Since the 1990s there have been several fruitful ways to make sense of Sobolev functions on general metric measure spaces (metric spaces equipped with a measure that is often compatible with the metric in certain senses). For example, the approach based on "upper gradients" leads to the Newtonian–Sobolev space of functions. Thus, it makes sense to say that a space "supports a Poincaré inequality". It turns out that whether a space supports any Poincaré inequality, and if so the critical exponent for which it does, is tied closely to the geometry of the space. For example, a space that supports a Poincaré inequality must be path connected. Indeed, between any pair of points there must exist a rectifiable path with length comparable to the distance between the points. Much deeper connections have been found, e.g. through the notion of modulus of path families. A good and rather recent reference is the monograph "Sobolev Spaces on Metric Measure Spaces: An Approach Based on Upper Gradients" by Heinonen et al. Given 0 < s < 1 and p ∈ [1, ∞), the Sobolev–Slobodeckij space W^{s,p}(Ω) is defined as the set of all functions u such that u ∈ L^p(Ω) and the seminorm [u]_{s,p} is finite.
The seminorm [u]_{s,p} is the Gagliardo seminorm, defined by [u]_{s,p} = (∫_Ω ∫_Ω |u(x) − u(y)|^p / |x − y|^{n+sp} dx dy)^{1/p}. The Poincaré inequality in this context can be generalized as follows: ‖u − u_Ω‖_{L^p(Ω)} ≤ C [u]_{s,p}, where u_Ω is the average of u over Ω and C is a constant depending on s, p, and Ω. This inequality holds for every bounded Ω. The proof follows that of Irene Drelichman and Ricardo G. Durán. [9] Let f_Ω = (1/|Ω|) ∫_Ω f(x) dx. By applying Jensen's inequality and then exploiting the boundedness of Ω through further estimates, one bounds ‖u − u_Ω‖_{L^p(Ω)} by the seminorm, and it follows that the constant C is given as C = diam(Ω)^{n/p + s} / |Ω|^{1/p}; however, the reference [10] (Theorem 1) indicates that this is not the optimal constant. We can derive a growth constant for balls in a manner similar to the previous cases. The relationship is given by the following inequality: ‖u − u_Ω‖_{L^p(B_R(y))} ≤ C R^s [u]_{Ẇ^{s,p}(B_R(y))}. The proof proceeds similarly to the classical one, by using the scaling u_R(x) = u(Rx); then, by using a form of chain rule for the fractional derivative, we obtain the factor R^s.
https://en.wikipedia.org/wiki/Poincaré_inequality
In mathematics , particularly in dynamical systems , a first recurrence map or Poincaré map , named after Henri Poincaré , is the intersection of a periodic orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, called the Poincaré section , transversal to the flow of the system. More precisely, one considers a periodic orbit with initial conditions within a section of the space, which leaves that section afterwards, and observes the point at which this orbit first returns to the section. One then creates a map to send the first point to the second, hence the name first recurrence map . The transversality of the Poincaré section means that periodic orbits starting on the subspace flow through it and not parallel to it. A Poincaré map can be interpreted as a discrete dynamical system with a state space that is one dimension smaller than the original continuous dynamical system. Because it preserves many properties of periodic and quasiperiodic orbits of the original system and has a lower-dimensional state space, it is often used for analyzing the original system in a simpler way. [ citation needed ] In practice this is not always possible as there is no general method to construct a Poincaré map. A Poincaré map differs from a recurrence plot in that space, not time, determines when to plot a point. For instance, the locus of the Moon when the Earth is at perihelion is a recurrence plot; the locus of the Moon when it passes through the plane perpendicular to the Earth's orbit and passing through the Sun and the Earth at perihelion is a Poincaré map. [ citation needed ] It was used by Michel Hénon to study the motion of stars in a galaxy , because the path of a star projected onto a plane looks like a tangled mess, while the Poincaré map shows the structure more clearly. Let ( R , M , φ ) be a global dynamical system , with R the real numbers , M the phase space and φ the evolution function . 
Let γ be a periodic orbit through a point p and S be a local differentiable and transversal section of φ through p, called a Poincaré section through p. Given an open and connected neighborhood U ⊂ S of p, a function P : U → S is called a Poincaré map for the orbit γ on the Poincaré section S through the point p if P(p) = p, P(U) is a neighborhood of p with P : U → P(U) a diffeomorphism, and for every point x in U, the positive semi-orbit of x intersects S for the first time at P(x). Consider the following system of differential equations in polar coordinates, (θ, r) ∈ S^1 × R^+: θ′ = 1, r′ = (1 − r^2) r. The flow of the system can be obtained by integrating the equation: for the θ component we simply have θ(t) = θ_0 + t, while for the r component we need to separate the variables and integrate; inverting the resulting expression gives r(t) = (1 + e^{−2t} (r_0^{−2} − 1))^{−1/2}. The flow of the system is therefore Φ(t, (θ_0, r_0)) = (θ_0 + t, (1 + e^{−2t} (r_0^{−2} − 1))^{−1/2}). The behaviour of the flow is the following: the unit circle r = 1 is invariant and carries a periodic orbit, while the solution with initial data (θ_0, r_0 ≠ 1) draws a spiral that tends towards the radius-1 circle. We can take as Poincaré section for this flow the positive horizontal axis, namely Σ = {(θ, r) : θ = 0}; obviously we can use r as coordinate on the section. Every point in Σ returns to the section after a time t = 2π (this can be understood by looking at the evolution of the angle): we can take as Poincaré map the restriction of Φ to the section Σ computed at the time 2π, Φ_{2π}|_Σ.
The Poincaré map is therefore: Ψ ( r ) = 1 1 + e − 4 π ( 1 r 2 − 1 ) {\displaystyle \Psi (r)={\sqrt {\frac {1}{1+e^{-4\pi }\left({\frac {1}{r^{2}}}-1\right)}}}} The behaviour of the orbits of the discrete dynamical system ( Σ , Z , Ψ ) {\displaystyle (\Sigma ,\mathbb {Z} ,\Psi )} is the following: Poincaré maps can be interpreted as a discrete dynamical system . The stability of a periodic orbit of the original system is closely related to the stability of the fixed point of the corresponding Poincaré map. Let ( R , M , φ ) be a differentiable dynamical system with periodic orbit γ through p . Let be the corresponding Poincaré map through p . We define and then ( Z , U , P ) is a discrete dynamical system with state space U and evolution function By definition, this system has a fixed point at p . The periodic orbit γ of the continuous dynamical system is stable if and only if the fixed point p of the discrete dynamical system is stable. The periodic orbit γ of the continuous dynamical system is asymptotically stable if and only if the fixed point p of the discrete dynamical system is asymptotically stable.
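The worked example above can be checked numerically. A minimal Python sketch (the function below is just the map Ψ derived above) iterates the discrete system and shows positive initial radii converging to the fixed point r = 1, mirroring the spiralling of the flow onto the unit circle:

```python
import math

def poincare_map(r):
    """One return to the section θ = 0, i.e. the flow at time 2π: Ψ(r)."""
    return math.sqrt(1.0 / (1.0 + math.exp(-4 * math.pi) * (1.0 / r**2 - 1.0)))

# Iterate the discrete system (Σ, Z, Ψ) from two initial radii,
# one inside and one outside the unit circle.
for r in (0.2, 3.0):
    for _ in range(10):
        r = poincare_map(r)
    print(r)  # both sequences approach the stable fixed point r = 1
```

Because e⁻⁴π is tiny, convergence is extremely fast: a single application of Ψ already brings any radius very close to 1.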
https://en.wikipedia.org/wiki/Poincaré_map
A Poincaré plot , named after Henri Poincaré , is a graphical representation of the relationship between consecutive data points in a time series. It is used to detect patterns and irregularities in the time series, revealing information about the stability of dynamical systems and providing insights into periodic orbits , chaotic motions , and bifurcations . It plays a role in controlling and predicting the system's long-term behavior, making it an indispensable tool for various scientific and engineering disciplines. It is also known as a return map . [ 1 ] [ 2 ] Poincaré plots can be used to distinguish chaos from randomness by embedding a data set in a higher-dimensional state space . Given a time series of the form x 1 , x 2 , x 3 , … {\displaystyle x_{1},x_{2},x_{3},\dots } a Poincaré map in its simplest form first plots dots in a scatter plot at the positions ( x t , x t + 1 ) {\displaystyle (x_{t},x_{t+1})} , then plots ( x t + 1 , x t + 2 ) {\displaystyle (x_{t+1},x_{t+2})} , then ( x t + 2 , x t + 3 ) {\displaystyle (x_{t+2},x_{t+3})} , and so on. For iterative ( discrete time ) maps, the Poincaré map represents the function that maps the values of the system from one time step to the next. In the logistic map x n + 1 = r ⋅ x n ⋅ ( 1 − x n ) {\displaystyle x_{n+1}=r\cdot x_{n}\cdot (1-x_{n})} with r = 4 , the Poincaré plot traces out the curve of the function f ( x ) = 4 ( x − x 2 ) {\displaystyle f(x)=4(x-x^{2})} . An electrocardiogram (ECG) is a tracing of the voltage changes in the chest generated by the heart, whose contraction in a normal person is triggered by an electrical impulse that originates in the sinoatrial node . The ECG normally consists of a series of waves, labeled the P, Q, R, S and T waves. The P wave represents depolarization of the atria , the Q-R-S series of waves depolarization of the ventricles and the T wave repolarization of the ventricles. The interval between two successive R waves (the RR interval) is a measure of the heart rate. 
The heart rate normally varies slightly: during a deep breath, it speeds up and during a deep exhalation, it slows down. (The RR interval will shorten when the heart speeds up, and lengthen when it slows.) An RR tachograph is a graph of the numerical value of the RR-interval versus time. In the context of RR tachography , a Poincaré plot is a graph of RR( n ) on the x -axis versus RR( n + 1) (the succeeding RR interval) on the y -axis, i.e. one takes a sequence of intervals and plots each interval against the following interval. [ 3 ] The Poincaré plot is used as a standard visualizing technique to detect the presence of oscillations in heart rate variability and other non-linear dynamic systems. In the context of electrocardiography, the rate of the healthy heart is normally tightly controlled by the body's regulatory mechanisms (specifically, by the autonomic nervous system ). Several research papers [ 4 ] [ 5 ] demonstrate the potential of ECG signal-based Poincaré plots in detecting heart-related diseases or abnormalities. Various characteristics of the plot can be quantified, such as the SD1 and SD2 values, which are the standard deviations of the points as measured in the directions perpendicular to and along the line of identity, respectively. SD1 reflects short-term (beat-to-beat) variability, while SD2 represents long-term variations; the ratio SD1:SD2 is often measured. An ellipse with these dimensions is often fitted to the plot, and the area of this ellipse can also be measured.
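The SD1/SD2 descriptors can be computed directly from the successive-interval pairs. A minimal sketch with a made-up RR series (illustrative numbers, not clinical data), using the standard 45° rotation of the scatter onto the line of identity:

```python
import math

def poincare_sd(rr):
    """SD1/SD2 of a Poincaré plot of successive intervals (rr[n], rr[n+1]).

    SD1 is the standard deviation perpendicular to the line of identity,
    SD2 along it; both follow from rotating each scatter point by 45 degrees.
    """
    x, y = rr[:-1], rr[1:]
    # Coordinates after rotating the scatter by 45°:
    d1 = [(b - a) / math.sqrt(2) for a, b in zip(x, y)]   # across the identity line
    d2 = [(b + a) / math.sqrt(2) for a, b in zip(x, y)]   # along the identity line
    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((t - m) ** 2 for t in v) / len(v))
    return sd(d1), sd(d2)

rr = [0.80, 0.82, 0.79, 0.85, 0.83, 0.81, 0.84, 0.80]  # RR intervals in seconds
sd1, sd2 = poincare_sd(rr)
print(sd1, sd2)
```

Since the 45° rotation is orthogonal, SD1² + SD2² equals the total variance of the two scatter coordinates, which is a convenient sanity check.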
https://en.wikipedia.org/wiki/Poincaré_plot
In mathematics and physics , the Poincaré recurrence theorem states that certain dynamical systems will, after a sufficiently long but finite time, return to a state arbitrarily close to (for continuous state systems), or exactly the same as (for discrete state systems), their initial state. The Poincaré recurrence time is the length of time elapsed until the recurrence. This time may vary greatly depending on the exact initial state and required degree of closeness. The result applies to isolated mechanical systems subject to some constraints, e.g., all particles must be bound to a finite volume. The theorem is commonly discussed in the context of ergodic theory , dynamical systems and statistical mechanics . Systems to which the Poincaré recurrence theorem applies are called conservative systems . The theorem is named after Henri Poincaré , who discussed it in 1890. [ 1 ] [ 2 ] A proof was presented by Constantin Carathéodory using measure theory in 1919. [ 3 ] [ 4 ] Any dynamical system defined by an ordinary differential equation determines a flow map f t mapping phase space onto itself. The system is said to be volume-preserving if the volume of a set in phase space is invariant under the flow. For instance, all Hamiltonian systems are volume-preserving because of Liouville's theorem . The theorem is then: If a flow preserves volume and has only bounded orbits, then, for each open set , any orbit that intersects this open set intersects it infinitely often. [ 5 ] The proof, speaking qualitatively, hinges on two premises: [ 6 ] Imagine any finite starting volume D 1 {\displaystyle D_{1}} of the phase space and follow its path under the dynamics of the system. The volume evolves through a "phase tube" in the phase space, keeping its size constant. Assuming a finite phase space, after some number of steps k 1 {\displaystyle k_{1}} the phase tube must intersect itself. 
This means that at least a finite fraction R 1 {\displaystyle R_{1}} of the starting volume is recurring. Now, consider the size of the non-returning portion D 2 {\displaystyle D_{2}} of the starting phase volume – that portion that never returns to the starting volume. Using the principle just discussed in the last paragraph, we know that if the non-returning portion has non-zero size, then a finite part R 2 {\displaystyle R_{2}} of it must return after k 2 {\displaystyle k_{2}} steps. But that would be a contradiction, since in a number k 3 = {\displaystyle k_{3}=} lcm ( k 1 , k 2 ) {\displaystyle (k_{1},k_{2})} of steps, both R 1 {\displaystyle R_{1}} and R 2 {\displaystyle R_{2}} would be returning, against the hypothesis that only R 1 {\displaystyle R_{1}} was. Thus, the non-returning portion of the starting volume must be the empty set , i.e. all of D 1 {\displaystyle D_{1}} is recurring after some number of steps. The theorem does not comment on certain aspects of recurrence which this proof cannot guarantee: Let be a finite measure space and let be a measure-preserving transformation . Below are two alternative statements of the theorem. For any E ∈ Σ {\displaystyle E\in \Sigma } , the set of those points x {\displaystyle x} of E {\displaystyle E} for which there exists N ∈ N {\displaystyle N\in \mathbb {N} } such that f n ( x ) ∉ E {\displaystyle f^{n}(x)\notin E} for all n > N {\displaystyle n>N} has zero measure. In other words, almost every point of E {\displaystyle E} returns to E {\displaystyle E} . In fact, almost every point returns infinitely often; i.e. The following is a topological version of this theorem: If X {\displaystyle X} is a second-countable Hausdorff space and Σ {\displaystyle \Sigma } contains the Borel sigma-algebra , then the set of recurrent points of f {\displaystyle f} has full measure. That is, almost every point is recurrent. 
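For discrete state systems the recurrence is exact, since any volume-preserving map of a finite state space is a bijection and every orbit eventually returns to its starting point. A small sketch using Arnold's cat map on a discrete torus (a standard volume-preserving example; the particular map and grid size are my choices, not from the text above):

```python
def cat_map(p, n):
    """Arnold's cat map on the discrete torus (Z/n)^2; it is a bijection,
    hence volume-preserving in the counting-measure sense."""
    x, y = p
    return ((2 * x + y) % n, (x + y) % n)

def recurrence_time(p, n):
    """Number of steps until the orbit of p first returns to p."""
    q = cat_map(p, n)
    steps = 1
    while q != p:
        q = cat_map(q, n)
        steps += 1
    return steps

n = 101
print(recurrence_time((1, 0), n))  # finite, as Poincaré recurrence guarantees
```

The recurrence time is bounded by the (finite) number of grid points, but as the theorem's caveats suggest, its exact value depends delicately on the initial state.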
More generally, the theorem applies to conservative systems , and not just to measure-preserving dynamical systems. Roughly speaking, one can say that conservative systems are precisely those to which the recurrence theorem applies. For time-independent quantum mechanical systems with discrete energy eigenstates, a similar theorem holds. For every ε > 0 {\displaystyle \varepsilon >0} and T 0 > 0 {\displaystyle T_{0}>0} there exists a time T larger than T 0 {\displaystyle T_{0}} , such that | | ψ ( T ) ⟩ − | ψ ( 0 ) ⟩ | < ε {\displaystyle ||\psi (T)\rangle -|\psi (0)\rangle |<\varepsilon } , where | ψ ( t ) ⟩ {\displaystyle |\psi (t)\rangle } denotes the state vector of the system at time t . [ 7 ] [ 8 ] [ 9 ] The essential elements of the proof are as follows. The system evolves in time according to: | ψ ( t ) ⟩ = ∑ n = 0 ∞ c n e − i E n t | ϕ n ⟩ {\displaystyle |\psi (t)\rangle =\sum _{n=0}^{\infty }c_{n}e^{-iE_{n}t}|\phi _{n}\rangle } where the E n {\displaystyle E_{n}} are the energy eigenvalues (we use natural units , so ℏ = 1 {\displaystyle \hbar =1} ), and the | ϕ n ⟩ {\displaystyle |\phi _{n}\rangle } are the energy eigenstates . The squared norm of the difference of the state vector at time T {\displaystyle T} and time zero can be written as: 2 ∑ n = 0 ∞ | c n | 2 [ 1 − cos ⁡ ( E n T ) ] {\displaystyle 2\sum _{n=0}^{\infty }|c_{n}|^{2}[1-\cos(E_{n}T)]} We can truncate the summation at some n = N independent of T , because ∑ n = N + 1 ∞ | c n | 2 [ 1 − cos ⁡ ( E n T ) ] ≤ 2 ∑ n = N + 1 ∞ | c n | 2 {\displaystyle \sum _{n=N+1}^{\infty }|c_{n}|^{2}[1-\cos(E_{n}T)]\leq 2\sum _{n=N+1}^{\infty }|c_{n}|^{2}} which can be made arbitrarily small by increasing N , as the summation ∑ n = 0 ∞ | c n | 2 {\displaystyle \sum _{n=0}^{\infty }|c_{n}|^{2}} , being the squared norm of the initial state, converges to 1. The finite sum can be made arbitrarily small for specific choices of the time T , according to the following construction. Choose an arbitrary δ > 0 {\displaystyle \delta >0} , and then choose T such that there are integers k n {\displaystyle k_{n}} that satisfy | E n T − 2 π k n | < δ {\displaystyle |E_{n}T-2\pi k_{n}|<\delta } for all numbers 0 ≤ n ≤ N {\displaystyle 0\leq n\leq N} . 
For this specific choice of T , every term 1 − cos ⁡ ( E n T ) {\displaystyle 1-\cos(E_{n}T)} in the finite sum is smaller than δ 2 / 2 {\displaystyle \delta ^{2}/2} . As such, we have: The state vector | ψ ( T ) ⟩ {\displaystyle |\psi (T)\rangle } thus returns arbitrarily close to the initial state | ψ ( 0 ) ⟩ {\displaystyle |\psi (0)\rangle } . This article incorporates material from Poincaré recurrence theorem on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
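The quantum statement can be illustrated numerically with a toy two-level system (the coefficients and energies below are my own choices); because the two energies are commensurate, the recurrence here is exact at T = 2π:

```python
import cmath, math

# |psi(t)> = c0 e^{-i E0 t}|phi0> + c1 e^{-i E1 t}|phi1>, with hbar = 1.
c = [1 / math.sqrt(2), 1 / math.sqrt(2)]
E = [1.0, 3.0]  # integer spacing => exact recurrence at T = 2*pi

def distance_to_initial(t):
    """|| |psi(t)> - |psi(0)> || for the two-level state above."""
    return math.sqrt(sum(abs(cn * (cmath.exp(-1j * En * t) - 1)) ** 2
                         for cn, En in zip(c, E)))

print(distance_to_initial(2 * math.pi))  # ~0: the state has recurred
print(distance_to_initial(math.pi))      # far from the initial state
```

With incommensurate energies the return would only be approximate, which is exactly the role of the integers k_n in the construction above.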
https://en.wikipedia.org/wiki/Poincaré_recurrence_theorem
In mathematics , the Poincaré separation theorem , also known as the Cauchy interlacing theorem , gives upper and lower bounds on the eigenvalues of a real symmetric matrix B T AB that can be considered as the orthogonal projection of a larger real symmetric matrix A onto a linear subspace spanned by the columns of B . The theorem is named after Henri Poincaré . More specifically, let A be an n × n real symmetric matrix and B an n × r semi-orthogonal matrix such that B T B = I r . Denote by λ i {\displaystyle \lambda _{i}} , i = 1, 2, ..., n and μ i {\displaystyle \mu _{i}} , i = 1, 2, ..., r the eigenvalues of A and B T AB , respectively (in descending order). We have λ i ≥ μ i ≥ λ i + n − r , i = 1 , 2 , … , r . {\displaystyle \lambda _{i}\geq \mu _{i}\geq \lambda _{i+n-r},\quad i=1,2,\ldots ,r.} An algebraic proof, based on the variational interpretation of eigenvalues , has been published in Magnus' Matrix Differential Calculus with Applications in Statistics and Econometrics . [ 1 ] From the geometric point of view, B T AB can be considered as the orthogonal projection of A onto the linear subspace spanned by B , so the above results follow immediately. [ 2 ] An alternative proof can be made for the case where B T AB is a principal submatrix of A , demonstrated by Steve Fisk. [ 3 ] When considering two mechanical systems, each described by an equation of motion , that differ by exactly one constraint (such that n − r = 1 {\displaystyle n-r=1} ), the natural frequencies of the two systems interlace. This has an important consequence when considering the frequency response of a complicated system such as a large room . Even though there may be many modes, each with unpredictable mode shapes that will vary as details change, such as furniture being moved, the interlacing theorem implies that the modal density (average number of modes per frequency interval) remains predictable and approximately constant. This allows for the technique of modal density analysis . See also: Min-max theorem § Cauchy interlacing theorem
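For a 2 × 2 symmetric matrix the interlacing can be checked against the closed-form eigenvalues without any linear-algebra library. A minimal sketch with r = 1 and B the first standard basis vector, so that B T AB is the 1 × 1 principal submatrix [ a ] (the numbers are arbitrary):

```python
import math

def eigs_2x2_sym(a, b, c):
    """Eigenvalues of [[a, b], [b, c]] in descending order."""
    mean = (a + c) / 2
    radius = math.hypot((a - c) / 2, b)
    return mean + radius, mean - radius

# B = e1 (so B^T B = 1): B^T A B is the 1x1 principal submatrix [a],
# whose single eigenvalue is mu_1 = a.
a, b, c = 2.0, 1.5, -1.0
lam1, lam2 = eigs_2x2_sym(a, b, c)
mu1 = a
print(lam2 <= mu1 <= lam1)  # True: the Poincaré separation inequality
```

Here n = 2 and r = 1, so the theorem reads λ₁ ≥ μ₁ ≥ λ₂, which is the classical Cauchy interlacing statement for a principal submatrix.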
https://en.wikipedia.org/wiki/Poincaré_separation_theorem
In number theory , a Poincaré series is a mathematical series generalizing the classical theta series that is associated to any discrete group of symmetries of a complex domain , possibly of several complex variables . In particular, they generalize classical Eisenstein series . They are named after Henri Poincaré . If Γ is a finite group acting on a domain D and H ( z ) is any meromorphic function on D , then one obtains an automorphic function by averaging over Γ: However, if Γ is a discrete group , then additional factors must be introduced in order to assure convergence of such a series. To this end, a Poincaré series is a series of the form where J γ is the Jacobian determinant of the group element γ, [ 1 ] and the asterisk denotes that the summation takes place only over coset representatives yielding distinct terms in the series. The classical Poincaré series of weight 2 k of a Fuchsian group Γ is defined by the series the summation extending over congruence classes of fractional linear transformations belonging to Γ. Choosing H to be a character of the cyclic group of order n , one obtains the so-called Poincaré series of order n : The latter Poincaré series converges absolutely and uniformly on compact sets (in the upper halfplane), and is a modular form of weight 2 k for Γ. Note that, when Γ is the full modular group and n = 0, one obtains the Eisenstein series of weight 2 k . In general, the Poincaré series is, for n ≥ 1, a cusp form .
https://en.wikipedia.org/wiki/Poincaré_series_(modular_form)
In algebraic topology , a Poincaré space is an n -dimensional topological space with a distinguished element μ of its n th homology group such that taking the cap product with an element of the k th cohomology group yields an isomorphism to the ( n − k )th homology group. [ 1 ] The space is essentially one for which Poincaré duality is valid; more precisely, one whose singular chain complex forms a Poincaré complex with respect to the distinguished element μ . For example, any closed, orientable, connected manifold M is a Poincaré space, where the distinguished element is the fundamental class [ M ] . {\displaystyle [M].} Poincaré spaces are used in surgery theory to analyze and classify manifolds. Not every Poincaré space is a manifold, but the difference can be studied, first by having a normal map from a manifold, and then via obstruction theory . Sometimes, [ 2 ] Poincaré space means a homology sphere with non-trivial fundamental group —for instance, the Poincaré dodecahedral space in 3 dimensions.
https://en.wikipedia.org/wiki/Poincaré_space
In mathematics , the Poincaré–Bendixson theorem is a statement about the long-term behaviour of orbits of continuous dynamical systems on the plane, cylinder, or two-sphere. [ 1 ] Given a differentiable real dynamical system defined on an open subset of the plane, every non-empty compact ω -limit set of an orbit , which contains only finitely many fixed points, is either [ 2 ] a fixed point , a periodic orbit , or a connected set composed of a finite number of fixed points together with homoclinic and heteroclinic orbits connecting these. Moreover, there is at most one orbit connecting different fixed points in the same direction. However, there could be countably many homoclinic orbits connecting one fixed point. A weaker version of the theorem was originally conceived by Henri Poincaré ( 1892 ), although he lacked a complete proof, which was later given by Ivar Bendixson ( 1901 ). Continuous dynamical systems that are defined on two-dimensional manifolds other than the plane (or cylinder or two-sphere), as well as those defined on higher-dimensional manifolds, may exhibit ω -limit sets that defy the three possible cases under the Poincaré–Bendixson theorem. On a torus , for example, it is possible to have a recurrent non-periodic orbit, [ 3 ] and three-dimensional systems may have strange attractors . Nevertheless, it is possible to classify the minimal sets of continuous dynamical systems on any two-dimensional compact and connected manifold due to a generalization of Arthur J. Schwartz. [ 4 ] [ 5 ] One important implication is that a two-dimensional continuous dynamical system cannot give rise to a strange attractor . If a strange attractor C did exist in such a system, then it could be enclosed in a closed and bounded subset of the phase space. By making this subset small enough, any nearby stationary points could be excluded. But then the Poincaré–Bendixson theorem says that C is not a strange attractor at all: it is either a limit cycle or it converges to a limit cycle. The Poincaré–Bendixson theorem does not apply to discrete dynamical systems , where chaotic behaviour can arise in two- or even one-dimensional systems.
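The no-strange-attractor consequence can be illustrated numerically: planar trajectories of the Van der Pol oscillator (a standard textbook example, chosen here for illustration) settle onto a limit cycle rather than wandering chaotically. A rough fixed-step RK4 sketch:

```python
def vdp(state, mu=1.0):
    """Van der Pol oscillator written as a first-order planar system."""
    x, v = state
    return v, mu * (1 - x * x) * v - x

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f((s[0] + dt/2 * k1[0], s[1] + dt/2 * k1[1]))
    k3 = f((s[0] + dt/2 * k2[0], s[1] + dt/2 * k2[1]))
    k4 = f((s[0] + dt * k3[0], s[1] + dt * k3[1]))
    return (s[0] + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

s, dt = (0.1, 0.0), 0.01
xs = []
for i in range(20000):            # integrate long enough to reach the cycle
    s = rk4_step(vdp, s, dt)
    if i > 15000:                 # record only the late, settled portion
        xs.append(s[0])
print(max(xs))  # amplitude of the limit cycle, roughly 2 for mu = 1
```

Whatever small initial condition is chosen, the recorded late-time amplitude is the same: the ω-limit set is the periodic orbit that the theorem allows, not a strange attractor.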
https://en.wikipedia.org/wiki/Poincaré–Bendixson_theorem
In symplectic topology and dynamical systems , Poincaré–Birkhoff theorem (also known as Poincaré–Birkhoff fixed point theorem and Poincaré's last geometric theorem ) states that every area-preserving, orientation-preserving homeomorphism of an annulus that rotates the two boundaries in opposite directions has at least two fixed points . The Poincaré–Birkhoff theorem was discovered by Henri Poincaré , who published it in a 1912 paper titled "Sur un théorème de géométrie", and proved it for some special cases. The general case was proved by George D. Birkhoff in his 1913 paper titled "Proof of Poincaré's geometric theorem". [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Poincaré–Birkhoff_theorem
In mathematics, the Poincaré–Miranda theorem is a generalization of the intermediate value theorem , from a single function in a single dimension, to n functions in n dimensions. It states the following: consider n continuous functions f 1 , … , f n of n variables, defined on the n -dimensional cube [ 0 , 1 ] n ; if each function f i is negative on the face x i = 0 and positive on the face x i = 1 , then the functions have a common zero inside the cube. The theorem is named after Henri Poincaré (who conjectured it in 1883) and Carlo Miranda (who in 1940 showed that it is equivalent to the Brouwer fixed-point theorem ). [ 1 ] [ 2 ] : 545 [ 3 ] It is sometimes called the Miranda theorem or the Bolzano–Poincaré–Miranda theorem. [ 4 ] The picture on the right shows an illustration of the Poincaré–Miranda theorem for n = 2 functions. Consider a pair of functions ( f , g ) whose domain of definition is the square [-1,1] 2 . The function f is negative on the left boundary and positive on the right boundary (green sides of the square), while the function g is negative on the lower boundary and positive on the upper boundary (red sides of the square). When we go from left to right along any path, we must go through a point in which f is 0 . Therefore, there must be a "wall" separating the left from the right, along which f is 0 (green curve inside the square). Similarly, there must be a "wall" separating the top from the bottom, along which g is 0 (red curve inside the square). These walls must intersect in a point in which both functions are 0 (blue point inside the square). The simplest generalization (in fact, a corollary ) of this theorem is the following one. For every variable x i , let a i be any value in the range [sup x i = 0 f i , inf x i = 1 f i ] , where the supremum is taken over the face x i = 0 and the infimum over the face x i = 1 . Then there is a point in the unit cube in which f i = a i for all i . This statement can be reduced to the original one by a simple translation of axes . By using topological degree theory it is possible to prove yet another generalization. [ 5 ] The Poincaré–Miranda theorem has also been generalized to infinite-dimensional spaces. [ 6 ]
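The n = 2 picture can be reproduced numerically with a pair of functions of my own choosing that satisfy the sign conditions on the square; a coarse grid search then locates the common zero whose existence the theorem guarantees:

```python
import math

# f is negative on the left edge and positive on the right edge;
# g is negative on the bottom edge and positive on the top edge.
def f(x, y): return x - 0.3 * math.cos(y)
def g(x, y): return y - 0.3 * math.sin(x)

# Grid search over [-1, 1]^2 for the point where both functions vanish.
best, best_pt = float("inf"), None
n = 400
for i in range(n + 1):
    for j in range(n + 1):
        x, y = -1 + 2 * i / n, -1 + 2 * j / n
        r = abs(f(x, y)) + abs(g(x, y))
        if r < best:
            best, best_pt = r, (x, y)
print(best_pt, best)  # near the guaranteed common zero
```

The residual at the best grid point shrinks with the grid spacing; the theorem is what guarantees there is a point with residual exactly zero to converge to.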
https://en.wikipedia.org/wiki/Poincaré–Miranda_theorem
In mathematics , Poinsot's spirals are two spirals represented by the polar equations r = a csch ⁡ ( n θ ) {\displaystyle r=a\operatorname {csch} (n\theta )} and r = a sech ⁡ ( n θ ) {\displaystyle r=a\operatorname {sech} (n\theta )} , where csch is the hyperbolic cosecant , and sech is the hyperbolic secant . [ 1 ] They are named after the French mathematician Louis Poinsot .
https://en.wikipedia.org/wiki/Poinsot's_spirals
Point-in-time recovery ( PITR ) in the context of computers involves systems, often databases , whereby an administrator can restore or recover a set of data or a particular setting from a time in the past. [ 1 ] [ 2 ] [ 3 ] Note for example Windows 's capability to restore operating-system settings from a past date (for instance, before data corruption occurred). Time Machine for macOS provides another example of point-in-time recovery. Once PITR logging starts for a PITR-capable database , a database administrator can restore that database from backups to the state that it had at any time since. [ 1 ]
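As a concrete illustration, PostgreSQL implements point-in-time recovery through continuous write-ahead-log (WAL) archiving; a sketch of the relevant configuration (the archive path and the target timestamp are placeholders):

```
# postgresql.conf on the primary: ship every completed WAL segment
# to an archive location.
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'

# To recover: restore a base backup, then tell the server where the
# archived WAL lives and which moment in the past to stop at.
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2024-01-15 09:30:00'
```

Replaying the archived log up to recovery_target_time reconstructs the database as it stood at that instant, which is exactly the "restore to the state that it had at any time since" capability described above.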
https://en.wikipedia.org/wiki/Point-in-time_recovery
The curved point-normal triangle , in short PN triangle , is an interpolation algorithm to retrieve a cubic Bézier triangle from the vertex coordinates of a regular flat triangle and normal vectors . The PN triangle retains the vertices of the flat triangle as well as the corresponding normals. For computer graphics applications, a linear or quadratic interpolant of the normals is additionally created to represent a plausible (though not exact) normal when rendering, giving the impression of smooth transitions between adjacent PN triangles. [ 1 ] The usage of the PN triangle enables the visualization of triangle-based surfaces in a smoother shape at low cost in terms of rendering complexity and time. From the given vertex positions P 1 , P 2 , P 3 ∈ R 3 {\textstyle \mathbf {P} _{1},\mathbf {P} _{2},\mathbf {P} _{3}\in \mathbb {R} ^{3}} of a flat triangle and the corresponding normal vectors N 1 , N 2 , N 3 {\textstyle \mathbf {N} _{1},\mathbf {N} _{2},\mathbf {N} _{3}} at the vertices, a cubic Bézier triangle is constructed. In contrast to the notation of the Bézier triangle page, the nomenclature follows G. Farin (2002), [ 2 ] therefore we denote the 10 control points as b i j k {\textstyle \mathbf {b} _{ijk}} with the non-negative indices satisfying the condition i + j + k = 3 {\textstyle i+j+k=3} . The first three control points are equal to the given vertices. b 300 = P 1 , b 030 = P 2 , b 003 = P 3 {\displaystyle {\begin{aligned}\mathbf {b} _{300}&=\mathbf {P} _{1},&\mathbf {b} _{030}&=\mathbf {P} _{2},&\mathbf {b} _{003}&=\mathbf {P} _{3}\end{aligned}}} Six control points related to the triangle edges, i.e. 
i , j , k = { 0 , 1 , 2 } {\textstyle i,j,k=\left\{0,1,2\right\}} are computed as b 012 = 1 3 ( 2 P 3 + P 2 − ω 32 N 3 ) , b 021 = 1 3 ( 2 P 2 + P 3 − ω 23 N 2 ) , b 102 = 1 3 ( 2 P 3 + P 1 − ω 31 N 3 ) , b 201 = 1 3 ( 2 P 1 + P 3 − ω 13 N 1 ) , b 120 = 1 3 ( 2 P 2 + P 1 − ω 21 N 2 ) , b 210 = 1 3 ( 2 P 1 + P 2 − ω 12 N 1 ) with ω i j = ( P j − P i ) ⋅ N i . {\displaystyle {\begin{aligned}\mathbf {b} _{012}&={\frac {1}{3}}\left(2\mathbf {P} _{3}+\mathbf {P} _{2}-\omega _{32}\mathbf {N} _{3}\right),&\mathbf {b} _{021}&={\frac {1}{3}}\left(2\mathbf {P} _{2}+\mathbf {P} _{3}-\omega _{23}\mathbf {N} _{2}\right),&&\\\mathbf {b} _{102}&={\frac {1}{3}}\left(2\mathbf {P} _{3}+\mathbf {P} _{1}-\omega _{31}\mathbf {N} _{3}\right),&\mathbf {b} _{201}&={\frac {1}{3}}\left(2\mathbf {P} _{1}+\mathbf {P} _{3}-\omega _{13}\mathbf {N} _{1}\right),&&\\\mathbf {b} _{120}&={\frac {1}{3}}\left(2\mathbf {P} _{2}+\mathbf {P} _{1}-\omega _{21}\mathbf {N} _{2}\right),&\mathbf {b} _{210}&={\frac {1}{3}}\left(2\mathbf {P} _{1}+\mathbf {P} _{2}-\omega _{12}\mathbf {N} _{1}\right)&\qquad {\text{with}}\quad \omega _{ij}&=\left(\mathbf {P} _{j}-\mathbf {P} _{i}\right)\cdot \mathbf {N} _{i}.\\\end{aligned}}} This definition ensures that the original vertex normals are reproduced in the interpolated triangle. Finally the internal control point ( i = j = k = 1 ) {\textstyle (i=j=k=1)} is derived from the previously calculated control points as b 111 = E + 1 2 ( E − V ) with E = 1 6 ( b 012 + b 021 + b 102 + b 201 + b 120 + b 210 ) and V = 1 3 ( P 1 + P 2 + P 3 ) . 
{\displaystyle {\begin{aligned}\mathbf {b} _{111}&=\mathbf {E} +{\frac {1}{2}}\left(\mathbf {E} -\mathbf {V} \right)\\{\text{with}}&\quad \mathbf {E} ={\frac {1}{6}}\left(\mathbf {b} _{012}+\mathbf {b} _{021}+\mathbf {b} _{102}+\mathbf {b} _{201}+\mathbf {b} _{120}+\mathbf {b} _{210}\right)\\{\text{and}}&\quad \mathbf {V} ={\frac {1}{3}}\left(\mathbf {P} _{1}+\mathbf {P} _{2}+\mathbf {P} _{3}\right).\end{aligned}}} An alternative interior control point b 111 = E + 5 ( E − V ) {\displaystyle {\begin{aligned}\mathbf {b} _{111}&=\mathbf {E} +5\left(\mathbf {E} -\mathbf {V} \right)\end{aligned}}} was suggested in. [ 3 ]
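The control-point formulas above translate directly into code. A minimal sketch using plain tuples (no graphics); as a sanity check, when all three vertex normals equal the plane normal of a flat triangle, every ω i j vanishes and all ten control points stay in the plane:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(*vs):  return tuple(sum(c) for c in zip(*vs))
def mul(s, v): return tuple(s * x for x in v)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def pn_control_points(P, N):
    """10 control points b_ijk of a PN triangle from vertices P[0..2]
    and unit normals N[0..2], following the formulas above."""
    def edge(i, j):
        # Control point on the edge near P[i], pushed off the plane by N[i]:
        w = dot(sub(P[j], P[i]), N[i])          # omega_ij
        return mul(1/3, sub(add(mul(2, P[i]), P[j]), mul(w, N[i])))
    b = {
        (3,0,0): P[0], (0,3,0): P[1], (0,0,3): P[2],
        (2,1,0): edge(0, 1), (1,2,0): edge(1, 0),
        (2,0,1): edge(0, 2), (1,0,2): edge(2, 0),
        (0,2,1): edge(1, 2), (0,1,2): edge(2, 1),
    }
    E = mul(1/6, add(b[(2,1,0)], b[(1,2,0)], b[(2,0,1)],
                     b[(1,0,2)], b[(0,2,1)], b[(0,1,2)]))
    V = mul(1/3, add(P[0], P[1], P[2]))
    b[(1,1,1)] = add(E, mul(1/2, sub(E, V)))    # internal control point
    return b

P = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
N = [(0.0, 0.0, 1.0)] * 3
b = pn_control_points(P, N)
print(b[(1, 1, 1)])  # lies in the z = 0 plane for this degenerate case
```

With non-uniform vertex normals the edge and interior control points leave the plane, which is what bends the patch.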
https://en.wikipedia.org/wiki/Point-normal_triangle
In a cyclic order , such as the real projective line , two pairs of points separate each other when they occur alternately in the order. Thus the ordering a b c d of four points has ( a,c ) and ( b,d ) as separating pairs. This point-pair separation is an invariant of projectivities of the line. The concept was described by G. B. Halsted at the outset of his Synthetic Projective Geometry : With regard to a pair of different points of those on a straight, all remaining fall into two classes, such that every point belongs to one and only one. If two points belong to different classes with regard to a pair of points, then also the latter two belong to different classes with regard to the first two. Two such point pairs are said to 'separate each other.' Four different points on a straight can always be partitioned in one and only one way into pairs separating each other. [ 1 ] Given any pair of points on a projective line, they separate a third point from its harmonic conjugate . A pair of lines in a pencil separates another pair when a transversal crosses the pairs in separated points. The point-pair separation of points was written AC//BD by H. S. M. Coxeter in his textbook The Real Projective Plane . [ 2 ] The relation may be used in showing the real projective plane is a complete space . The axiom of continuity used is "Every monotonic sequence of points has a limit." The point-pair separation is used to provide definitions: Whereas a linear order endows a set with a positive end and a negative end, the separation relation forgets not only which end is which, but also where the ends are located. In this way it is a final, further weakening of the concepts of a betweenness relation and a cyclic order . There is nothing else that can be forgotten: up to the relevant sense of interdefinability, these three relations are the only nontrivial reducts of the ordered set of rational numbers . 
[ 3 ] A quaternary relation S(a, b, c, d) is defined satisfying certain axioms, which is interpreted as asserting that a and c separate b from d . [ 4 ] [ 5 ] The separation relation was described with axioms in 1898 by Giovanni Vailati . [ 6 ]
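On the affine part of the real projective line the separation relation has a standard algebraic test from projective geometry: the pairs { a , b } and { c , d } separate each other exactly when the cross-ratio ( a , b ; c , d ) is negative. A small sketch:

```python
def cross_ratio(a, b, c, d):
    """(a, b; c, d) = ((a - c)(b - d)) / ((a - d)(b - c)) for four
    distinct affine points of the real projective line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def separate(a, b, c, d):
    """True when the pair {a, b} separates the pair {c, d}."""
    return cross_ratio(a, b, c, d) < 0

# In the ordering 0 1 2 3, the pairs (0, 2) and (1, 3) separate each other:
print(separate(0, 2, 1, 3))  # True
print(separate(0, 1, 2, 3))  # False
```

Because the cross-ratio is invariant under projectivities, so is the sign test, which reflects the invariance of point-pair separation noted above.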
https://en.wikipedia.org/wiki/Point-pair_separation
In computer vision , pattern recognition , and robotics , point-set registration , also known as point-cloud registration or scan matching , is the process of finding a spatial transformation ( e.g., scaling , rotation and translation ) that aligns two point clouds . The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model (or coordinate frame), and mapping a new measurement to a known data set to identify features or to estimate its pose . Raw 3D point cloud data are typically obtained from Lidars and RGB-D cameras . 3D point clouds can also be generated from computer vision algorithms such as triangulation , bundle adjustment , and more recently, monocular image depth estimation using deep learning . For 2D point set registration used in image processing and feature-based image registration , a point set may be 2D pixel coordinates obtained by feature extraction from an image, for example corner detection . Point cloud registration has extensive applications in autonomous driving , [ 1 ] motion estimation and 3D reconstruction , [ 2 ] object detection and pose estimation , [ 3 ] [ 4 ] robotic manipulation , [ 5 ] simultaneous localization and mapping (SLAM), [ 6 ] [ 7 ] panorama stitching , [ 8 ] virtual and augmented reality , [ 9 ] and medical imaging . [ 10 ] As a special case, registration of two point sets that only differ by a 3D rotation ( i.e., there is no scaling and translation), is called the Wahba Problem and also related to the orthogonal procrustes problem . 
The problem may be summarized as follows: [ 11 ] Let { M , S } {\displaystyle \lbrace {\mathcal {M}},{\mathcal {S}}\rbrace } be two finite size point sets in a finite-dimensional real vector space R d {\displaystyle \mathbb {R} ^{d}} , which contain M {\displaystyle M} and N {\displaystyle N} points respectively ( e.g., d = 3 {\displaystyle d=3} recovers the typical case of when M {\displaystyle {\mathcal {M}}} and S {\displaystyle {\mathcal {S}}} are 3D point sets). The problem is to find a transformation to be applied to the moving "model" point set M {\displaystyle {\mathcal {M}}} such that the difference (typically defined in the sense of point-wise Euclidean distance ) between M {\displaystyle {\mathcal {M}}} and the static "scene" set S {\displaystyle {\mathcal {S}}} is minimized. In other words, a mapping from R d {\displaystyle \mathbb {R} ^{d}} to R d {\displaystyle \mathbb {R} ^{d}} is desired which yields the best alignment between the transformed "model" set and the "scene" set. The mapping may consist of a rigid or non-rigid transformation. The transformation model may be written as T {\displaystyle T} , using which the transformed, registered model point set is: The output of a point set registration algorithm is therefore the optimal transformation T ⋆ {\displaystyle T^{\star }} such that M {\displaystyle {\mathcal {M}}} is best aligned to S {\displaystyle {\mathcal {S}}} , according to some defined notion of distance function dist ⁡ ( ⋅ , ⋅ ) {\displaystyle \operatorname {dist} (\cdot ,\cdot )} : where T {\displaystyle {\mathcal {T}}} is used to denote the set of all possible transformations that the optimization tries to search for. 
The most popular choice of the distance function is to take the square of the Euclidean distance for every pair of points: where ‖ ⋅ ‖ 2 {\displaystyle \|\cdot \|_{2}} denotes the vector 2-norm , s m {\displaystyle s_{m}} is the corresponding point in set S {\displaystyle {\mathcal {S}}} that attains the shortest distance to a given point m {\displaystyle m} in set M {\displaystyle {\mathcal {M}}} after transformation. Minimizing such a function in rigid registration is equivalent to solving a least squares problem. When the correspondences ( i.e., s m ↔ m {\displaystyle s_{m}\leftrightarrow m} ) are given before the optimization, for example, using feature matching techniques, then the optimization only needs to estimate the transformation. This type of registration is called correspondence-based registration . On the other hand, if the correspondences are unknown, then the optimization is required to jointly find out the correspondences and transformation together. This type of registration is called simultaneous pose and correspondence registration . Given two point sets, rigid registration yields a rigid transformation which maps one point set to the other. A rigid transformation is defined as a transformation that does not change the distance between any two points. Typically such a transformation consists of translation and rotation . [ 12 ] In rare cases, the point set may also be mirrored. In robotics and computer vision, rigid registration has the most applications. Given two point sets, non-rigid registration yields a non-rigid transformation which maps one point set to the other. Non-rigid transformations include affine transformations such as scaling and shear mapping . However, in the context of point set registration, non-rigid registration typically involves nonlinear transformation. If the eigenmodes of variation of the point set are known, the nonlinear transformation may be parametrized by the eigenvalues. 
[ 13 ] A nonlinear transformation may also be parametrized as a thin plate spline . [ 14 ] [ 13 ] Some approaches to point set registration use algorithms that solve the more general graph matching problem. [ 11 ] However, the computational complexity of such methods tends to be high, and they are limited to rigid registrations. In this article, we will only consider algorithms for rigid registration, where the transformation is assumed to contain 3D rotations and translations (possibly also including a uniform scaling). The PCL (Point Cloud Library) is an open-source framework for n-dimensional point cloud and 3D geometry processing. It includes several point registration algorithms. [ 15 ] Correspondence-based methods assume the putative correspondences m ↔ s m {\displaystyle m\leftrightarrow s_{m}} are given for every point m ∈ M {\displaystyle m\in {\mathcal {M}}} . Therefore, we arrive at a setting where both point sets M {\displaystyle {\mathcal {M}}} and S {\displaystyle {\mathcal {S}}} have N {\displaystyle N} points and the correspondences m i ↔ s i , i = 1 , … , N {\displaystyle m_{i}\leftrightarrow s_{i},i=1,\dots ,N} are given. In the simplest case, one can assume that all the correspondences are correct, meaning that the points m i , s i ∈ R 3 {\displaystyle m_{i},s_{i}\in \mathbb {R} ^{3}} are generated as follows: where l > 0 {\displaystyle l>0} is a uniform scaling factor (in many cases l = 1 {\displaystyle l=1} is assumed), R ∈ SO ( 3 ) {\displaystyle R\in {\text{SO}}(3)} is a proper 3D rotation matrix ( SO ( d ) {\displaystyle {\text{SO}}(d)} is the special orthogonal group of degree d {\displaystyle d} ), t ∈ R 3 {\displaystyle t\in \mathbb {R} ^{3}} is a 3D translation vector and ϵ i ∈ R 3 {\displaystyle \epsilon _{i}\in \mathbb {R} ^{3}} models the unknown additive noise ( e.g., Gaussian noise ). 
Specifically, if the noise ϵ i {\displaystyle \epsilon _{i}} is assumed to follow a zero-mean isotropic Gaussian distribution with standard deviation σ i {\displaystyle \sigma _{i}} , i.e., ϵ i ∼ N ( 0 , σ i 2 I 3 ) {\displaystyle \epsilon _{i}\sim {\mathcal {N}}(0,\sigma _{i}^{2}I_{3})} , then the following optimization can be shown to yield the maximum likelihood estimate for the unknown scale, rotation and translation: Note that when the scaling factor is 1 and the translation vector is zero, then the optimization recovers the formulation of the Wahba problem . Despite the non-convexity of the optimization ( cb.2 ) due to non-convexity of the set SO ( 3 ) {\displaystyle {\text{SO}}(3)} , seminal work by Berthold K.P. Horn showed that ( cb.2 ) actually admits a closed-form solution, by decoupling the estimation of scale, rotation and translation. [ 16 ] Similar results were discovered by Arun et al . [ 17 ] In addition, in order to find a unique transformation ( l , R , t ) {\displaystyle (l,R,t)} , at least N = 3 {\displaystyle N=3} non-collinear points in each point set are required. More recently, Briales and Gonzalez-Jimenez have developed a semidefinite relaxation using Lagrangian duality , for the case where the model set M {\displaystyle {\mathcal {M}}} contains different 3D primitives such as points, lines and planes (which is the case when the model M {\displaystyle {\mathcal {M}}} is a 3D mesh). [ 18 ] Interestingly, the semidefinite relaxation is empirically tight, i.e., a certifiably globally optimal solution can be extracted from the solution of the semidefinite relaxation. The least squares formulation ( cb.2 ) is known to perform arbitrarily badly in the presence of outliers . An outlier correspondence is a pair of measurements s i ↔ m i {\displaystyle s_{i}\leftrightarrow m_{i}} that departs from the generative model ( cb.1 ). 
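To illustrate the decoupled closed-form structure of such solutions, here is a sketch of the 2D analogue (rotation angle and translation only, unit scale): rotation is recovered from the "cross-covariance" terms of the centred sets, and translation from the centroids. This is a simplification for illustration, not the full 3D quaternion/SVD machinery of Horn's or Arun's method:

```python
import math

def rigid_fit_2d(model, scene):
    """Closed-form least-squares rigid fit in 2D, aligning model[i] -> scene[i]
    (known correspondences, unit scale). Returns (theta, (tx, ty))."""
    n = len(model)
    mcx = sum(p[0] for p in model) / n   # model centroid
    mcy = sum(p[1] for p in model) / n
    scx = sum(p[0] for p in scene) / n   # scene centroid
    scy = sum(p[1] for p in scene) / n
    dot = cross = 0.0
    for (mx, my), (sx, sy) in zip(model, scene):
        mx, my = mx - mcx, my - mcy      # centre both sets
        sx, sy = sx - scx, sy - scy
        dot += mx * sx + my * sy
        cross += mx * sy - my * sx
    theta = math.atan2(cross, dot)       # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = scx - (c * mcx - s * mcy)       # translation aligning centroids
    ty = scy - (s * mcx + c * mcy)
    return theta, (tx, ty)

# model rotated by 90 degrees and shifted by (1, 2) gives the scene
theta, t = rigid_fit_2d([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                        [(1.0, 2.0), (1.0, 3.0), (0.0, 2.0)])
```

As the text notes, at least three non-collinear points are needed for the fit to be unique; with fewer, the cross-covariance terms do not pin down the rotation.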
In this case, one can consider a different generative model as follows: [ 19 ] where if the i − {\displaystyle i-} th pair s i ↔ m i {\displaystyle s_{i}\leftrightarrow m_{i}} is an inlier, then it obeys the outlier-free model ( cb.1 ), i.e., s i {\displaystyle s_{i}} is obtained from m i {\displaystyle m_{i}} by a spatial transformation plus some small noise; however, if the i − {\displaystyle i-} th pair s i ↔ m i {\displaystyle s_{i}\leftrightarrow m_{i}} is an outlier, then s i {\displaystyle s_{i}} can be any arbitrary vector o i {\displaystyle o_{i}} . Since one does not know which correspondences are outliers beforehand, robust registration under the generative model ( cb.3 ) is of paramount importance for computer vision and robotics deployed in the real world, because current feature matching techniques tend to output highly corrupted correspondences where over 95 % {\displaystyle 95\%} of the correspondences can be outliers. [ 20 ] Next, we describe several common paradigms for robust registration. Maximum consensus seeks to find the largest set of correspondences that are consistent with the generative model ( cb.1 ) for some choice of spatial transformation ( l , R , t ) {\displaystyle (l,R,t)} . Formally speaking, maximum consensus solves the following optimization: where | I | {\displaystyle \vert {\mathcal {I}}\vert } denotes the cardinality of the set I {\displaystyle {\mathcal {I}}} . The constraint in ( cb.4 ) enforces that every pair of measurements in the inlier set I {\displaystyle {\mathcal {I}}} must have residuals smaller than a pre-defined threshold ξ {\displaystyle \xi } . Unfortunately, recent analyses have shown that globally solving problem (cb.4) is NP-Hard , and global algorithms typically have to resort to branch-and-bound (BnB) techniques that take exponential-time complexity in the worst case. 
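The consensus objective |I| can be evaluated directly for any candidate transformation; a minimal sketch (2D tuples, transformation passed as a callable, both illustrative assumptions) looks like:

```python
def consensus_size(model, scene, transform, xi):
    """Objective of consensus maximization: count correspondences
    model[i] <-> scene[i] whose residual under `transform` is below
    the threshold xi."""
    return sum(
        1 for m, s in zip(model, scene)
        if sum((a - b) ** 2 for a, b in zip(transform(m), s)) <= xi ** 2
    )

# A pure translation by (1, 0) explains 2 of the 3 pairs below;
# the third pair is an outlier under this hypothesis.
shift = lambda p: (p[0] + 1.0, p[1])
size = consensus_size([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
                      [(1.0, 0.0), (2.0, 0.0), (9.0, 9.0)], shift, xi=0.1)
```

The hard part of ( cb.4 ) is not this evaluation but the search over all transformations, which is what makes the problem NP-Hard.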
[ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] Although solving consensus maximization exactly is hard, there exist efficient heuristics that perform quite well in practice. One of the most popular heuristics is the Random Sample Consensus (RANSAC) scheme. [ 26 ] RANSAC is an iterative hypothesize-and-verify method. At each iteration, the method first randomly samples 3 out of the total number of N {\displaystyle N} correspondences and computes a hypothesis ( l , R , t ) {\displaystyle (l,R,t)} using Horn's method, [ 16 ] then the method evaluates the constraints in ( cb.4 ) to count how many correspondences actually agree with such a hypothesis (i.e., it computes the residual ‖ s i − l R m i − t ‖ 2 2 / σ i 2 {\displaystyle \Vert s_{i}-lRm_{i}-t\Vert _{2}^{2}/\sigma _{i}^{2}} and compares it with the threshold ξ {\displaystyle \xi } for each pair of measurements). The algorithm terminates either after it has found a consensus set that has enough correspondences, or after it has reached the total number of allowed iterations. RANSAC is highly efficient because the main computation of each iteration is carrying out the closed-form solution in Horn's method. However, RANSAC is non-deterministic and only works well in the low-outlier-ratio regime ( e.g., below 50 % {\displaystyle 50\%} ), because its runtime grows exponentially with respect to the outlier ratio. [ 20 ] To fill the gap between the fast but inexact RANSAC scheme and the exact but exhaustive BnB optimization, recent research has developed deterministic approximate methods to solve consensus maximization. [ 21 ] [ 22 ] [ 27 ] [ 23 ] Outlier removal methods seek to pre-process the set of highly corrupted correspondences before estimating the spatial transformation. 
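The hypothesize-and-verify loop can be sketched as follows. For brevity the transformation here is a pure 2D translation, so the minimal sample is a single correspondence; the full scheme samples 3 pairs and computes the hypothesis with Horn's method:

```python
import random

def ransac_translation(model, scene, threshold=0.1, iters=100, seed=0):
    """RANSAC sketch on given (possibly outlier-corrupted) correspondences
    model[i] <-> scene[i], fitting a 2D translation."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        i = rng.randrange(len(model))                      # minimal sample
        # hypothesize: translation explaining the sampled pair
        t = (scene[i][0] - model[i][0], scene[i][1] - model[i][1])
        # verify: count correspondences with residual below the threshold
        inliers = [j for j in range(len(model))
                   if (scene[j][0] - model[j][0] - t[0]) ** 2
                    + (scene[j][1] - model[j][1] - t[1]) ** 2 < threshold ** 2]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Three pairs agree with a shift by (1, 1); the fourth is an outlier.
t, inl = ransac_translation([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)],
                            [(1.0, 1.0), (2.0, 1.0), (3.0, 1.0), (9.0, 9.0)])
```

A sample drawn from an inlier pair yields a hypothesis that the other inliers vote for, which is why the method degrades as the probability of drawing an all-inlier sample shrinks with the outlier ratio.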
The motivation of outlier removal is to significantly reduce the number of outlier correspondences, while maintaining inlier correspondences, so that optimization over the transformation becomes easier and more efficient ( e.g., RANSAC works poorly when the outlier ratio is above 95 % {\displaystyle 95\%} but performs quite well when outlier ratio is below 50 % {\displaystyle 50\%} ). Parra et al. have proposed a method called Guaranteed Outlier Removal (GORE) that uses geometric constraints to prune outlier correspondences while guaranteeing to preserve inlier correspondences. [ 20 ] GORE has been shown to be able to drastically reduce the outlier ratio, which can significantly boost the performance of consensus maximization using RANSAC or BnB. Yang and Carlone have proposed to build pairwise translation-and-rotation-invariant measurements (TRIMs) from the original set of measurements and embed TRIMs as the edges of a graph whose nodes are the 3D points. Since inliers are pairwise consistent in terms of the scale, they must form a clique within the graph. Therefore, using efficient algorithms for computing the maximum clique of a graph can find the inliers and effectively prune the outliers. [ 4 ] The maximum clique based outlier removal method is also shown to be quite useful in real-world point set registration problems. [ 19 ] Similar outlier removal ideas were also proposed by Parra et al. . [ 28 ] M-estimation replaces the least squares objective function in ( cb.2 ) with a robust cost function that is less sensitive to outliers. Formally, M-estimation seeks to solve the following problem: where ρ ( ⋅ ) {\displaystyle \rho (\cdot )} represents the choice of the robust cost function. Note that choosing ρ ( x ) = x 2 {\displaystyle \rho (x)=x^{2}} recovers the least squares estimation in ( cb.2 ). Popular robust cost functions include ℓ 1 {\displaystyle \ell _{1}} -norm loss, Huber loss , [ 29 ] Geman-McClure loss [ 30 ] and truncated least squares loss . 
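The robust cost functions named above are simple scalar functions of the residual; sketches of three of them (using one common parametrization of the Geman-McClure loss, which saturates at the constant mu) are:

```python
def truncated_least_squares(r, c_bar):
    """TLS cost: quadratic for |r| <= c_bar, constant beyond it, so
    outliers contribute a fixed penalty and no gradient."""
    return min(r ** 2, c_bar ** 2)

def huber(r, delta):
    """Huber cost: quadratic near zero, linear in the tails."""
    a = abs(r)
    return a ** 2 / 2 if a <= delta else delta * (a - delta / 2)

def geman_mcclure(r, mu=1.0):
    """Geman-McClure cost: smooth and bounded, saturating to mu
    for large residuals."""
    return mu * r ** 2 / (mu + r ** 2)
```

The key shared property is bounded (or sub-quadratic) growth: a single gross outlier cannot dominate the objective the way it does under the plain squared loss.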
[ 19 ] [ 8 ] [ 4 ] M-estimation has been one of the most popular paradigms for robust estimation in robotics and computer vision. [ 31 ] [ 32 ] Because robust objective functions are typically non-convex ( e.g., the truncated least squares loss vs. the least squares loss), algorithms for solving the non-convex M-estimation are typically based on local optimization , where first an initial guess is provided, followed by iterative refinements of the transformation to keep decreasing the objective function. Local optimization tends to work well when the initial guess is close to the global minimum, but it is also prone to get stuck in local minima if provided with poor initialization. Graduated non-convexity (GNC) is a general-purpose framework for solving non-convex optimization problems without initialization. It has achieved success in early vision and machine learning applications. [ 33 ] [ 34 ] The key idea behind GNC is to solve the hard non-convex problem by starting from an easy convex problem. Specifically, for a given robust cost function ρ ( ⋅ ) {\displaystyle \rho (\cdot )} , one can construct a surrogate function ρ μ ( ⋅ ) {\displaystyle \rho _{\mu }(\cdot )} with a hyper-parameter μ {\displaystyle \mu } , tuning which can gradually increase the non-convexity of the surrogate function ρ μ ( ⋅ ) {\displaystyle \rho _{\mu }(\cdot )} until it converges to the target function ρ ( ⋅ ) {\displaystyle \rho (\cdot )} . [ 34 ] [ 35 ] Therefore, at each level of the hyper-parameter μ {\displaystyle \mu } , the following optimization is solved: Black and Rangarajan proved that the objective function of each optimization ( cb.6 ) can be dualized into a sum of weighted least squares and a so-called outlier process function on the weights that determine the confidence of the optimization in each pair of measurements. [ 33 ] Using Black-Rangarajan duality and GNC tailored for the Geman-McClure function, Zhou et al. 
developed the fast global registration algorithm that is robust against about 80 % {\displaystyle 80\%} outliers in the correspondences. [ 30 ] More recently, Yang et al. showed that the joint use of GNC (tailored to the Geman-McClure function and the truncated least squares function) and Black-Rangarajan duality can lead to a general-purpose solver for robust registration problems, including point clouds and mesh registration. [ 35 ] Almost none of the robust registration algorithms mentioned above (except the BnB algorithm that runs in exponential-time in the worst case) comes with performance guarantees , which means that these algorithms can return completely incorrect estimates without notice. Therefore, these algorithms are undesirable for safety-critical applications like autonomous driving. Very recently, Yang et al. have developed the first certifiably robust registration algorithm, named Truncated least squares Estimation And SEmidefinite Relaxation (TEASER). [ 19 ] For point cloud registration, TEASER not only outputs an estimate of the transformation, but also quantifies the optimality of the given estimate. TEASER adopts the following truncated least squares (TLS) estimator: which is obtained by choosing the TLS robust cost function ρ ( x ) = min ( x 2 , c ¯ 2 ) {\displaystyle \rho (x)=\min(x^{2},{\bar {c}}^{2})} , where c ¯ 2 {\displaystyle {\bar {c}}^{2}} is a pre-defined constant that determines the maximum allowed residuals to be considered inliers. The TLS objective function has the property that for inlier correspondences ( ‖ s i − l R m i − t ‖ 2 2 / σ i 2 < c ¯ 2 {\displaystyle \Vert s_{i}-lRm_{i}-t\Vert _{2}^{2}/\sigma _{i}^{2}<{\bar {c}}^{2}} ), the usual least square penalty is applied; while for outlier correspondences ( ‖ s i − l R m i − t ‖ 2 2 / σ i 2 > c ¯ 2 {\displaystyle \Vert s_{i}-lRm_{i}-t\Vert _{2}^{2}/\sigma _{i}^{2}>{\bar {c}}^{2}} ), no penalty is applied and the outliers are discarded. 
If the TLS optimization ( cb.7 ) is solved to global optimality, then it is equivalent to running Horn's method on only the inlier correspondences. However, solving ( cb.7 ) is quite challenging due to its combinatorial nature. TEASER solves ( cb.7 ) as follows: (i) It builds invariant measurements such that the estimation of scale, rotation and translation can be decoupled and solved separately, a strategy that is inspired by the original Horn's method; (ii) The same TLS estimation is applied for each of the three sub-problems, where the scale TLS problem can be solved exactly using an algorithm called adaptive voting, the rotation TLS problem can be relaxed to a semidefinite program (SDP) where the relaxation is exact in practice, [ 8 ] even with a large number of outliers; the translation TLS problem can be solved using component-wise adaptive voting. A fast implementation leveraging GNC is open-sourced here . In practice, TEASER can tolerate more than 99 % {\displaystyle 99\%} outlier correspondences and runs in milliseconds. In addition to developing TEASER, Yang et al. also prove that, under some mild conditions on the point cloud data, TEASER's estimated transformation has bounded errors from the ground-truth transformation. [ 19 ] The iterative closest point (ICP) algorithm was introduced by Besl and McKay. [ 36 ] The algorithm performs rigid registration in an iterative fashion by alternating in (i) given the transformation, finding the closest point in S {\displaystyle {\mathcal {S}}} for every point in M {\displaystyle {\mathcal {M}}} ; and (ii) given the correspondences, finding the best rigid transformation by solving the least squares problem ( cb.2 ). As such, it works best if the initial pose of M {\displaystyle {\mathcal {M}}} is sufficiently close to S {\displaystyle {\mathcal {S}}} . 
In pseudocode , the basic algorithm is implemented as follows: Here, the function least_squares performs least squares optimization to minimize the distance in each of the ⟨ m i , s ^ i ⟩ {\displaystyle \langle m_{i},{\hat {s}}_{i}\rangle } pairs, using the closed-form solutions by Horn [ 16 ] and Arun. [ 17 ] Because the cost function of registration depends on finding the closest point in S {\displaystyle {\mathcal {S}}} to every point in M {\displaystyle {\mathcal {M}}} , it can change as the algorithm is running. As such, it is difficult to prove that ICP will in fact converge exactly to the local optimum. [ 37 ] In fact, empirically, ICP and EM-ICP do not converge to the local minimum of the cost function. [ 37 ] Nonetheless, because ICP is intuitive to understand and straightforward to implement, it remains the most commonly used point set registration algorithm. [ 37 ] Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. [ 13 ] [ 38 ] For example, the expectation maximization algorithm is applied to the ICP algorithm to form the EM-ICP method, and the Levenberg-Marquardt algorithm is applied to the ICP algorithm to form the LM-ICP method. [ 12 ] Robust point matching (RPM) was introduced by Gold et al. [ 39 ] The method performs registration using deterministic annealing and soft assignment of correspondences between point sets. Whereas in ICP the correspondence generated by the nearest-neighbour heuristic is binary, RPM uses a soft correspondence where the correspondence between any two points can be anywhere from 0 to 1, although it ultimately converges to either 0 or 1. The correspondences found by RPM are always one-to-one, which is not always the case in ICP. 
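The alternating structure of ICP can be sketched compactly. This simplification fits only a 2D translation in the least-squares step (the centroid offset of the matched pairs); the full algorithm also fits a rotation via Horn's closed-form solution:

```python
def icp_translation(model, scene, iters=20):
    """Minimal ICP sketch (translation-only least-squares step)."""
    tx = ty = 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in model]
        # step (i): closest scene point for every transformed model point
        pairs = [(m, min(scene, key=lambda s: (s[0] - m[0]) ** 2
                                            + (s[1] - m[1]) ** 2))
                 for m in moved]
        # step (ii): best translation for the current correspondences
        dx = sum(s[0] - m[0] for m, s in pairs) / len(pairs)
        dy = sum(s[1] - m[1] for m, s in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
    return tx, ty

tx, ty = icp_translation([(0.0, 0.0), (1.0, 0.0)], [(0.4, 0.0), (1.4, 0.0)])
```

Because the correspondences are recomputed inside the loop, the objective being minimized changes between iterations, which is exactly the convergence subtlety the text describes.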
[ 14 ] Let m i {\displaystyle m_{i}} be the i {\displaystyle i} th point in M {\displaystyle {\mathcal {M}}} and s j {\displaystyle s_{j}} be the j {\displaystyle j} th point in S {\displaystyle {\mathcal {S}}} . The match matrix μ {\displaystyle \mathbf {\mu } } is defined as such: The problem is then defined as: Given two point sets M {\displaystyle {\mathcal {M}}} and S {\displaystyle {\mathcal {S}}} find the Affine transformation T {\displaystyle T} and the match matrix μ {\displaystyle \mathbf {\mu } } that best relates them. [ 39 ] Knowing the optimal transformation makes it easy to determine the match matrix, and vice versa. However, the RPM algorithm determines both simultaneously. The transformation may be decomposed into a translation vector and a transformation matrix: The matrix A {\displaystyle \mathbf {A} } in 2D is composed of four separate parameters { a , θ , b , c } {\displaystyle \lbrace a,\theta ,b,c\rbrace } , which are scale, rotation, and the vertical and horizontal shear components respectively. The cost function is then: subject to ∀ j ∑ i = 1 M μ i j ≤ 1 {\textstyle \forall j~\sum _{i=1}^{M}\mu _{ij}\leq 1} , ∀ i ∑ j = 1 N μ i j ≤ 1 {\textstyle \forall i~\sum _{j=1}^{N}\mu _{ij}\leq 1} , ∀ i j μ i j ∈ { 0 , 1 } {\textstyle \forall ij~\mu _{ij}\in \lbrace 0,1\rbrace } . The α {\displaystyle \alpha } term biases the objective towards stronger correlation by decreasing the cost if the match matrix has more ones in it. The function g ( A ) {\displaystyle g(\mathbf {A} )} serves to regularize the Affine transformation by penalizing large values of the scale and shear components: for some regularization parameter γ {\displaystyle \gamma } . The RPM method optimizes the cost function using the Softassign algorithm. The 1D case will be derived here. Given a set of variables { Q j } {\displaystyle \lbrace Q_{j}\rbrace } where Q j ∈ R 1 {\displaystyle Q_{j}\in \mathbb {R} ^{1}} . 
A variable μ j {\displaystyle \mu _{j}} is associated with each Q j {\displaystyle Q_{j}} such that ∑ j = 1 J μ j = 1 {\textstyle \sum _{j=1}^{J}\mu _{j}=1} . The goal is to find μ {\displaystyle \mathbf {\mu } } that maximizes ∑ j = 1 J μ j Q j {\textstyle \sum _{j=1}^{J}\mu _{j}Q_{j}} . This can be formulated as a continuous problem by introducing a control parameter β > 0 {\displaystyle \beta >0} . In the deterministic annealing method, the control parameter β {\displaystyle \beta } is slowly increased as the algorithm runs. Let μ {\displaystyle \mathbf {\mu } } be: this is known as the softmax function . As β {\displaystyle \beta } increases, it approaches a binary value as desired in Equation ( rpm.1 ). The problem may now be generalized to the 2D case, where instead of maximizing ∑ j = 1 J μ j Q j {\textstyle \sum _{j=1}^{J}\mu _{j}Q_{j}} , the following is maximized: where This is straightforward, except that now the constraints on μ {\displaystyle \mu } are doubly stochastic matrix constraints: ∀ j ∑ i = 1 M μ i j = 1 {\textstyle \forall j~\sum _{i=1}^{M}\mu _{ij}=1} and ∀ i ∑ j = 1 N μ i j = 1 {\textstyle \forall i~\sum _{j=1}^{N}\mu _{ij}=1} . As such the denominator from Equation ( rpm.3 ) cannot be expressed for the 2D case simply. To satisfy the constraints, it is possible to use a result due to Sinkhorn, [ 39 ] which states that a doubly stochastic matrix is obtained from any square matrix with all positive entries by the iterative process of alternating row and column normalizations. Thus the algorithm is written as such: [ 39 ] where the deterministic annealing control parameter β {\displaystyle \beta } is initially set to β 0 {\displaystyle \beta _{0}} and increases by factor β r {\displaystyle \beta _{r}} until it reaches the maximum value β f {\displaystyle \beta _{f}} . 
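Sinkhorn's alternating row/column normalization, which Softassign relies on to satisfy the doubly stochastic constraints, is easy to sketch for a square all-positive matrix:

```python
def sinkhorn(matrix, iters=200):
    """Alternately normalise rows and columns of a square all-positive
    matrix; the iterates converge to a doubly stochastic matrix (all row
    sums and column sums equal to 1)."""
    m = [row[:] for row in matrix]
    n = len(m)
    for _ in range(iters):
        for i in range(n):                       # row normalisation
            r = sum(m[i])
            m[i] = [v / r for v in m[i]]
        for j in range(n):                       # column normalisation
            c = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= c
    return m

m = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
```

In RPM the matrix being normalized is the exponentiated benefit matrix (with the slack row and column handled as described next), and the annealing parameter β is raised between rounds of Sinkhorn iterations.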
The summations in the normalization steps sum to M + 1 {\displaystyle M+1} and N + 1 {\displaystyle N+1} instead of just M {\displaystyle M} and N {\displaystyle N} because the constraints on μ {\displaystyle \mu } are inequalities. As such the M + 1 {\displaystyle M+1} th and N + 1 {\displaystyle N+1} th elements are slack variables . The algorithm can also be extended for point sets in 3D or higher dimensions. The constraints on the correspondence matrix μ {\displaystyle \mathbf {\mu } } are the same in the 3D case as in the 2D case. Hence the structure of the algorithm remains unchanged, with the main difference being how the rotation and translation matrices are solved. [ 39 ] The thin plate spline robust point matching (TPS-RPM) algorithm by Chui and Rangarajan augments the RPM method to perform non-rigid registration by parametrizing the transformation as a thin plate spline . [ 14 ] However, because the thin plate spline parametrization only exists in three dimensions, the method cannot be extended to problems involving four or more dimensions. The kernel correlation (KC) approach of point set registration was introduced by Tsin and Kanade. [ 37 ] Compared with ICP, the KC algorithm is more robust against noisy data. Unlike ICP, where, for every model point, only the closest scene point is considered, here every scene point affects every model point. [ 37 ] As such this is a multiply-linked registration algorithm. For some kernel function K {\displaystyle K} , the kernel correlation K C {\displaystyle KC} of two points x i , x j {\displaystyle x_{i},x_{j}} is defined thus: [ 37 ] The kernel function K {\displaystyle K} chosen for point set registration is typically a symmetric, non-negative kernel, similar to those used in Parzen window density estimation. The Gaussian kernel is typically used for its simplicity, although others, such as the Epanechnikov kernel and the tricube kernel, may be substituted. 
[ 37 ] The kernel correlation of an entire point set χ {\displaystyle {\mathcal {\chi }}} is defined as the sum of the kernel correlations of every point in the set to every other point in the set: [ 37 ] The logarithm of KC of a point set is proportional, within a constant factor, to the information entropy . Observe that the KC is a measure of a "compactness" of the point set—trivially, if all points in the point set were at the same location, the KC would evaluate to a large value. The cost function of the point set registration algorithm for some transformation parameter θ {\displaystyle \theta } is defined thus: Some algebraic manipulation yields: The expression is simplified by observing that K C ( S ) {\displaystyle KC({\mathcal {S}})} is independent of θ {\displaystyle \theta } . Furthermore, assuming rigid registration, K C ( T ( M , θ ) ) {\displaystyle KC(T({\mathcal {M}},\theta ))} is invariant when θ {\displaystyle \theta } is changed because the Euclidean distance between every pair of points stays the same under rigid transformation . So the above equation may be rewritten as: The kernel density estimates are defined as: The cost function can then be shown to be the correlation of the two kernel density estimates: Having established the cost function , the algorithm simply uses gradient descent to find the optimal transformation. It is computationally expensive to compute the cost function from scratch on every iteration, so a discrete version of the cost function Equation ( kc.6 ) is used. The kernel density estimates P M , P S {\displaystyle P_{\mathcal {M}},P_{\mathcal {S}}} can be evaluated at grid points and stored in a lookup table . Unlike the ICP and related methods, it is not necessary to find the nearest neighbour, which allows the KC algorithm to be comparatively simple in implementation. Compared to ICP and EM-ICP for noisy 2D and 3D point sets, the KC algorithm is less sensitive to noise and results in correct registration more often. 
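A minimal sketch of the Gaussian-kernel correlation and the whole-set "compactness" measure (2D tuples and the bandwidth parameter are illustrative choices):

```python
import math

def kernel_correlation(xi, xj, sigma=1.0):
    """Gaussian-kernel correlation of two points: close pairs score high,
    distant pairs score near zero."""
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-d2 / (2 * sigma ** 2))

def point_set_kc(points, sigma=1.0):
    """KC of a whole set: sum of pairwise kernel correlations over all
    ordered pairs of distinct points. It grows as points cluster together."""
    return sum(kernel_correlation(points[i], points[j], sigma)
               for i in range(len(points))
               for j in range(len(points)) if i != j)

tight = point_set_kc([(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)])
loose = point_set_kc([(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)])
```

Since every point interacts with every other, a smooth gradient is available everywhere, which is why registration can proceed by plain gradient descent on this cost without any nearest-neighbour search.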
[ 37 ] The kernel density estimates are sums of Gaussians and may therefore be represented as Gaussian mixture models (GMM). [ 40 ] Jian and Vemuri use the GMM version of the KC registration algorithm to perform non-rigid registration parametrized by thin plate splines . Coherent point drift (CPD) was introduced by Myronenko and Song. [ 13 ] [ 41 ] The algorithm takes a probabilistic approach to aligning point sets, similar to the GMM KC method. Unlike earlier approaches to non-rigid registration which assume a thin plate spline transformation model, CPD is agnostic with regard to the transformation model used. The point set M {\displaystyle {\mathcal {M}}} represents the Gaussian mixture model (GMM) centroids. When the two point sets are optimally aligned, the correspondence is the maximum of the GMM posterior probability for a given data point. To preserve the topological structure of the point sets, the GMM centroids are forced to move coherently as a group. The expectation maximization algorithm is used to optimize the cost function. [ 13 ] Let there be M points in M {\displaystyle {\mathcal {M}}} and N points in S {\displaystyle {\mathcal {S}}} . The GMM probability density function for a point s is: where, in D dimensions, p ( s | i ) {\displaystyle p(s|i)} is the Gaussian distribution centered on point m i ∈ M {\displaystyle m_{i}\in {\mathcal {M}}} . The membership probabilities P ( i ) = 1 M {\displaystyle P(i)={\frac {1}{M}}} are equal for all GMM components. The weight of the uniform distribution is denoted as w ∈ [ 0 , 1 ] {\displaystyle w\in [0,1]} . The mixture model is then: The GMM centroids are re-parametrized by a set of parameters θ {\displaystyle \theta } estimated by maximizing the likelihood. This is equivalent to minimizing the negative log-likelihood function : where it is assumed that the data is independent and identically distributed . 
The correspondence probability between two points m i {\displaystyle m_{i}} and s j {\displaystyle s_{j}} is defined as the posterior probability of the GMM centroid given the data point: The expectation maximization (EM) algorithm is used to find θ {\displaystyle \theta } and σ 2 {\displaystyle \sigma ^{2}} . The EM algorithm consists of two steps. First, in the E-step or estimation step, it guesses the values of parameters ("old" parameter values) and then uses Bayes' theorem to compute the posterior probability distributions P old ( i , s j ) {\displaystyle P^{\text{old}}(i,s_{j})} of mixture components. Second, in the M-step or maximization step, the "new" parameter values are then found by minimizing the expectation of the complete negative log-likelihood function, i.e. the cost function: Ignoring constants independent of θ {\displaystyle \theta } and σ {\displaystyle \sigma } , Equation ( cpd.4 ) can be expressed thus: where with N = N P {\displaystyle N=N_{\mathbf {P} }} only if w = 0 {\displaystyle w=0} . The posterior probabilities of GMM components computed using previous parameter values P old {\displaystyle P^{\text{old}}} are: Minimizing the cost function in Equation ( cpd.5 ) necessarily decreases the negative log-likelihood function E in Equation ( cpd.3 ) unless it is already at a local minimum. [ 13 ] Thus, the algorithm can be expressed using the following pseudocode, where the point sets M {\displaystyle {\mathcal {M}}} and S {\displaystyle {\mathcal {S}}} are represented as M × D {\displaystyle M\times D} and N × D {\displaystyle N\times D} matrices M {\displaystyle \mathbf {M} } and S {\displaystyle \mathbf {S} } respectively: [ 13 ] where the vector 1 {\displaystyle \mathbf {1} } is a column vector of ones. The solve function differs by the type of registration performed. For example, in rigid registration, the output is a scale a , a rotation matrix R {\displaystyle \mathbf {R} } , and a translation vector t {\displaystyle \mathbf {t} } . 
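A sketch of the E-step posterior computation (assuming 2D points, equal membership weights 1/M, and a uniform outlier component with weight w, consistent with the mixture above; this is an illustration, not Myronenko's reference implementation):

```python
import math

def cpd_posteriors(model, scene, sigma2, w=0.0):
    """E-step sketch of CPD: posterior probability P[i][j] that GMM
    centroid model[i] generated scene point scene[j]."""
    M, N, D = len(model), len(scene), 2
    P = [[0.0] * N for _ in range(M)]
    for j, s in enumerate(scene):
        # Gaussian likelihood of s under each centroid (common factors cancel)
        num = [math.exp(-sum((a - b) ** 2 for a, b in zip(m, s)) / (2 * sigma2))
               for m in model]
        denom = sum(num)
        if w > 0:
            # constant contributed by the uniform-outlier component
            denom += (2 * math.pi * sigma2) ** (D / 2) * (w / (1 - w)) * (M / N)
        for i in range(M):
            P[i][j] = num[i] / denom
    return P

# With w = 0, a scene point on top of one centroid has posterior ~1 for it.
P = cpd_posteriors([(0.0, 0.0), (10.0, 10.0)], [(0.0, 0.0)], sigma2=1.0)
```

With w = 0 the posteriors over centroids sum to one for each scene point; with w > 0 some probability mass is reserved for the outlier component, which is what makes CPD tolerant of spurious scene points.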
The parameter θ {\displaystyle \theta } can be written as a tuple of these: which is initialized to one, the identity matrix , and a column vector of zeroes: The aligned point set is: The solve_rigid function for rigid registration can then be written as follows, with derivation of the algebra explained in Myronenko's 2010 paper. [ 13 ] For affine registration, where the goal is to find an affine transformation instead of a rigid one, the output is an affine transformation matrix B {\displaystyle \mathbf {B} } and a translation t {\displaystyle \mathbf {t} } such that the aligned point set is: The solve_affine function for affine registration can then be written as follows, with derivation of the algebra explained in Myronenko's 2010 paper. [ 13 ] It is also possible to use CPD with non-rigid registration using a parametrization derived using calculus of variations . [ 13 ] Sums of Gaussian distributions can be computed in linear time using the fast Gauss transform (FGT). [ 13 ] Consequently, the time complexity of CPD is O ( M + N ) {\displaystyle O(M+N)} , which is asymptotically much faster than O ( M N ) {\displaystyle O(MN)} methods. [ 13 ] A variant of coherent point drift, called Bayesian coherent point drift (BCPD), was derived through a Bayesian formulation of point set registration. [ 42 ] BCPD has several advantages over CPD, e.g., (1) nonrigid and rigid registrations can be performed in a single algorithm, (2) the algorithm can be accelerated regardless of the Gaussianity of a Gram matrix to define motion coherence, (3) the algorithm is more robust against outliers because of a more reasonable definition of an outlier distribution. Additionally, in the Bayesian formulation, motion coherence was introduced through a prior distribution of displacement vectors, providing a clear difference between tuning parameters that control motion coherence. 
BCPD was further accelerated by a method called BCPD++, which is a three-step procedure composed of (1) downsampling of point sets, (2) registration of downsampled point sets, and (3) interpolation of a deformation field. [ 43 ] The method can register point sets composed of more than 10M points while maintaining its registration accuracy. A variant of coherent point drift, called CPD with Local Surface Geometry (LSG-CPD), has been proposed for rigid point cloud registration. [ 44 ] The method adaptively adds different levels of point-to-plane penalization on top of the point-to-point penalization based on the flatness of the local surface. This results in GMM components with anisotropic covariances, instead of the isotropic covariances in the original CPD. [ 13 ] The anisotropic covariance matrix is modeled as: where Σ m {\displaystyle \Sigma _{m}} is the anisotropic covariance matrix of the m-th point in the target set; n m {\displaystyle \mathbf {n} _{m}} is the normal vector corresponding to the same point; I {\displaystyle \mathbf {I} } is an identity matrix, serving as a regularizer, pulling the problem away from ill-posedness. α m {\displaystyle \alpha _{m}} is a penalization coefficient (a modified sigmoid function), which is set adaptively to add different levels of point-to-plane penalization depending on how flat the local surface is. This is realized by evaluating the surface variation κ m {\displaystyle \kappa _{m}} [ 45 ] within the neighborhood of the m-th target point. α m a x {\displaystyle \alpha _{max}} is the upper bound of the penalization. The point cloud registration is formulated as a maximum likelihood estimation (MLE) problem and solved with the Expectation-Maximization (EM) algorithm. In the E step, the correspondence computation is recast into simple matrix manipulations and efficiently computed on a GPU. In the M step, an unconstrained optimization on a matrix Lie group is designed to efficiently update the rigid transformation of the registration. 
Taking advantage of the local geometric covariances, the method shows superior performance in accuracy and robustness to noise and outliers, compared with the baseline CPD. [ 46 ] Enhanced runtime performance is expected thanks to the GPU-accelerated correspondence calculation. An open-source implementation of LSG-CPD is available. The sorting the correspondence space (SCS) algorithm was introduced in 2013 by H. Assalih to accommodate sonar image registration. [ 47 ] These types of images tend to have high amounts of noise, so many outliers are expected in the point sets to be matched. SCS delivers high robustness against outliers and can surpass ICP and CPD performance in the presence of outliers. SCS does not use iterative optimization in high-dimensional space and is neither probabilistic nor spectral. SCS can match rigid and non-rigid transformations, and performs best when the target transformation is between three and six degrees of freedom .
https://en.wikipedia.org/wiki/Point-set_registration
A triangulation of a set of points P {\displaystyle {\mathcal {P}}} in the Euclidean space R d {\displaystyle \mathbb {R} ^{d}} is a simplicial complex that covers the convex hull of P {\displaystyle {\mathcal {P}}} , and whose vertices belong to P {\displaystyle {\mathcal {P}}} . [ 1 ] In the plane (when P {\displaystyle {\mathcal {P}}} is a set of points in R 2 {\displaystyle \mathbb {R} ^{2}} ), triangulations are made up of triangles, together with their edges and vertices. Some authors require that all the points of P {\displaystyle {\mathcal {P}}} are vertices of its triangulations. [ 2 ] In this case, a triangulation of a set of points P {\displaystyle {\mathcal {P}}} in the plane can alternatively be defined as a maximal set of non-crossing edges between points of P {\displaystyle {\mathcal {P}}} . In the plane, triangulations are special cases of planar straight-line graphs . A particularly interesting kind of triangulation is the Delaunay triangulation . Delaunay triangulations are the geometric duals of Voronoi diagrams . The Delaunay triangulation of a set of points P {\displaystyle {\mathcal {P}}} in the plane contains the Gabriel graph , the nearest neighbor graph and the minimal spanning tree of P {\displaystyle {\mathcal {P}}} . Triangulations have a number of applications, and there is interest in finding "good" triangulations of a given point set under some criterion, for instance minimum-weight triangulations . Sometimes it is desirable to have a triangulation with special properties, e.g., in which all triangles have large angles (long and narrow ("splinter") triangles are avoided). [ 3 ] Given a set of edges that connect points of the plane, the problem of determining whether they contain a triangulation is NP-complete .
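In the plane, a triangulation of a point set is easy to compute with standard tools. A brief sketch using SciPy (assuming `scipy` is available) also checks the defining covering property: the triangles exactly cover the convex hull of the points.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
pts = rng.random((20, 2))       # 20 random points in the unit square
tri = Delaunay(pts)             # tri.simplices: one row of point indices per triangle

def tri_area(a, b, c):
    # half the absolute value of the 2D cross product of two edge vectors
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

# the triangles cover the convex hull of pts: their areas sum to the hull's area
total_area = sum(tri_area(*pts[s]) for s in tri.simplices)
hull_area = ConvexHull(pts).volume    # in 2D, ConvexHull.volume is the enclosed area
```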
[ 4 ] Some triangulations of a set of points P ⊂ R d {\displaystyle {\mathcal {P}}\subset \mathbb {R} ^{d}} can be obtained by lifting the points of P {\displaystyle {\mathcal {P}}} into R d + 1 {\displaystyle \mathbb {R} ^{d+1}} (which amounts to adding a coordinate x d + 1 {\displaystyle x_{d+1}} to each point of P {\displaystyle {\mathcal {P}}} ), by computing the convex hull of the lifted set of points, and by projecting the lower faces of this convex hull back on R d {\displaystyle \mathbb {R} ^{d}} . The triangulations built this way are referred to as the regular triangulations of P {\displaystyle {\mathcal {P}}} . When the points are lifted to the paraboloid of equation x d + 1 = x 1 2 + ⋯ + x d 2 {\displaystyle x_{d+1}=x_{1}^{2}+\cdots +x_{d}^{2}} , this construction results in the Delaunay triangulation of P {\displaystyle {\mathcal {P}}} . Note that, in order for this construction to provide a triangulation, the lower convex hull of the lifted set of points needs to be simplicial . In the case of Delaunay triangulations, this amounts to requiring that no d + 2 {\displaystyle d+2} points of P {\displaystyle {\mathcal {P}}} lie on the same sphere. Every triangulation of any set P {\displaystyle {\mathcal {P}}} of n {\displaystyle n} points in the plane has 2 n − h − 2 {\displaystyle 2n-h-2} triangles and 3 n − h − 3 {\displaystyle 3n-h-3} edges where h {\displaystyle h} is the number of points of P {\displaystyle {\mathcal {P}}} in the boundary of the convex hull of P {\displaystyle {\mathcal {P}}} . This follows from a straightforward Euler characteristic argument. [ 5 ] Triangle Splitting Algorithm : Find the convex hull of the point set P {\displaystyle {\mathcal {P}}} and triangulate this hull as a polygon. Choose an interior point and draw edges to the three vertices of the triangle that contains it. Continue this process until all interior points are exhausted.
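The lifting construction and the triangle-count formula can both be checked numerically. The sketch below (assuming `scipy` is available; random points are in general position, so the lower hull is simplicial) lifts plane points to the paraboloid, extracts the lower hull faces, and compares their count with SciPy's Delaunay triangulation and with the 2n − h − 2 formula:

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(1)
P = rng.random((30, 2))
# lift each point (x, y) to (x, y, x^2 + y^2) on the paraboloid
lifted = np.hstack([P, (P ** 2).sum(axis=1, keepdims=True)])
hull3d = ConvexHull(lifted)
# lower faces are the facets whose outward normal points downward (negative z)
lower_faces = [f for f, eq in zip(hull3d.simplices, hull3d.equations) if eq[2] < 0]
# projecting the lower faces back to the plane yields the Delaunay triangulation,
# so the face count matches both Delaunay's triangle count and 2n - h - 2
n = len(P)
h = len(ConvexHull(P).vertices)   # points on the boundary of the convex hull
```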
[ 6 ] Incremental Algorithm : Sort the points of P {\displaystyle {\mathcal {P}}} according to x-coordinates. The first three points determine a triangle. Consider the next point p {\displaystyle p} in the ordered set and connect it with all previously considered points { p 1 , . . . , p k } {\displaystyle \{p_{1},...,p_{k}\}} which are visible to p. Continue this process of adding one point of P {\displaystyle {\mathcal {P}}} at a time until all of P {\displaystyle {\mathcal {P}}} has been processed. [ 7 ] The following table reports time complexity results for the construction of triangulations of point sets in the plane, under different optimality criteria, where n {\displaystyle n} is the number of points.
https://en.wikipedia.org/wiki/Point-set_triangulation
In telecommunications , point-to-multipoint communication ( P2MP , PTMP or PMP ) is communication which is accomplished via a distinct type of one-to-many connection, providing multiple paths from a single location to multiple locations. [ 1 ] Point-to-multipoint telecommunications is typically used in wireless Internet and IP telephony via gigahertz radio frequencies . P2MP systems have been designed with and without a return channel from the multiple receivers. A central antenna or antenna array broadcasts to several receiving antennas and the system uses a form of time-division multiplexing to allow for the return channel traffic. In contemporary usage, the term point-to-multipoint wireless communications relates to fixed wireless data communications for Internet or voice over IP via radio or microwave frequencies in the gigahertz range. Point-to-multipoint is the most popular approach for wireless communications that have a large number of nodes, end destinations or end users. Point-to-multipoint generally assumes there is a central base station to which remote subscriber units or customer premises equipment (CPE) (a term that was originally used in the wired telephone industry) are connected over the wireless medium. Connections between the base station and subscriber units can be either line-of-sight or, for lower-frequency radio systems, non-line-of-sight where link budgets permit. [ 2 ] Generally, lower frequencies can offer non-line-of-sight connections. Various software planning tools can be used to determine the feasibility of potential connections using topographic data as well as link budget simulation. Often, point-to-multipoint links are installed to reduce infrastructure costs and to increase the number of connected CPEs. [ 2 ] Point-to-multipoint wireless networks employing directional antennas are affected by the hidden node problem (also called hidden terminal) when they employ a CSMA/CA medium access control protocol.
The negative impact of the hidden node problem can be mitigated using a time-division multiple access (TDMA) based protocol or a polling protocol rather than the CSMA/CA protocol. [ 3 ] The telecommunications signal in a point-to-multipoint system is typically bi-directional, TDMA or channelized. Systems using frequency-division duplexing (FDD) offer full-duplex connections between base station and remote sites, and time-division duplex (TDD) systems offer half-duplex connections. Point-to-multipoint systems can be implemented in licensed, semi-licensed or unlicensed frequency bands depending on the specific application. Point-to-point and point-to-multipoint links are very popular in the wireless industry and, when paired with other high-capacity wireless links or technologies such as free space optics (FSO), can be referred to as backhaul . The base station may have a single omnidirectional antenna or multiple sector antennas, the latter of which allows greater range and capacity.
https://en.wikipedia.org/wiki/Point-to-multipoint_communication
Point-to-point laser technology (PPLT) [ 1 ] refers to a technology that enables a user or surveyor to survey or capture a building's geometry in real time or while on site by translating laser range finder data directly into a Computer-aided design (CAD) or building information models (BIM) work station. It is most commonly used for as-built and existing-conditions documentation, or for converting the built environment into a digital format. PPLT has many benefits in creating BIM or CAD models. Entering data directly into a CAD- or BIM-enabled work station allows a user or surveyor to capture and confirm a building's geometry on site. This effectively builds a digital model of a building while it is being measured, enabling not only speed but also accuracy. Additionally, building in real time can eliminate the need for revisits and also minimizes the need for future interpretation and manipulation of measurements and data by a CAD operator. Since PPLT mainly deals with building surveying, there are a few associations that are specific to building surveying. There is the Association of Professional Building Surveyors (APBS) in the United States and the Royal Institution of Chartered Surveyors (RICS) in Britain. Whereas building surveying is offered at the curriculum level in Europe, it is still in its nascent stages in the United States. Both the APBS and RICS offer certification methods and programs. The APBS has the professional building surveyor (PBS) certification.
https://en.wikipedia.org/wiki/Point-to-point_laser_technology
In geometry , a point is an abstract idealization of an exact position , without size, in physical space , [ 1 ] or its generalization to other kinds of mathematical spaces . As zero-dimensional objects, points are usually taken to be the fundamental indivisible elements comprising the space, of which one-dimensional curves , two-dimensional surfaces , and higher-dimensional objects consist. In classical Euclidean geometry , a point is a primitive notion , defined as "that which has no part". Points and other primitive notions are not defined in terms of other concepts, but only by certain formal properties, called axioms , that they must satisfy; for example, "there is exactly one straight line that passes through two distinct points" . As physical diagrams, geometric figures are made with tools such as a compass , scriber , or pen, whose pointed tip can mark a small dot or prick a small hole representing a point, or can be drawn across a surface to represent a curve. A point can also be determined by the intersection of two curves or three surfaces, called a vertex or corner . Since the advent of analytic geometry , points are often defined or represented in terms of numerical coordinates . In modern mathematics, a space of points is typically treated as a set , a point set . An isolated point is an element of some subset of points which has some neighborhood containing no other points of the subset. Points, considered within the framework of Euclidean geometry , are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". [ 2 ] In the two-dimensional Euclidean plane , a point is represented by an ordered pair ( x , y ) of numbers, where the first number conventionally represents the horizontal and is often denoted by x , and the second number conventionally represents the vertical and is often denoted by y . 
This idea is easily generalized to three-dimensional Euclidean space , where a point is represented by an ordered triplet ( x , y , z ) with the additional third number representing depth and often denoted by z . Further generalizations are represented by an ordered tuple of n terms, ( a 1 , a 2 , … , a n ) where n is the dimension of the space in which the point is located. [ 3 ] Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is an infinite set of points of the form L = { ( a 1 , a 2 , . . . a n ) ∣ a 1 c 1 + a 2 c 2 + . . . a n c n = d } , {\displaystyle L=\lbrace (a_{1},a_{2},...a_{n})\mid a_{1}c_{1}+a_{2}c_{2}+...a_{n}c_{n}=d\rbrace ,} where c 1 through c n and d are constants and n is the dimension of the space. Similar constructions exist that define the plane , line segment , and other related concepts. [ 4 ] A line segment consisting of only a single point is called a degenerate line segment. [ citation needed ] In addition to defining points and constructs related to points, Euclid also postulated a key idea about points, that any two points can be connected by a straight line. [ 5 ] This is easily confirmed under modern extensions of Euclidean geometry, and had lasting consequences at its introduction, allowing the construction of almost all the geometric concepts known at the time. However, Euclid's postulation of points was neither complete nor definitive, and he occasionally assumed facts about points that did not follow directly from his axioms, such as the ordering of points on the line or the existence of specific points. In spite of this, modern expansions of the system serve to remove these assumptions. [ 6 ] There are several inequivalent definitions of dimension in mathematics. In all of the common definitions, a point is 0-dimensional.
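The set-of-points definition of a line translates directly into a membership test: a point (a₁, …, aₙ) lies on L exactly when a₁c₁ + ⋯ + aₙcₙ = d. A small illustrative sketch (the function name `on_line` is made up for this example):

```python
def on_line(point, coeffs, d, tol=1e-9):
    """True if point (a_1, ..., a_n) satisfies a_1*c_1 + ... + a_n*c_n = d,
    up to a small floating-point tolerance."""
    return abs(sum(a * c for a, c in zip(point, coeffs)) - d) <= tol

# the line x + 2y = 5 in the plane: (1, 2) lies on it, (0, 0) does not
on_line((1, 2), (1, 2), 5)
on_line((0, 0), (1, 2), 5)
```

The same function works in any dimension n, matching the general form of L above.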
The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point (which must be the zero vector 0 ), there is no linearly independent subset. The zero vector is not itself linearly independent, because there is a non-trivial linear combination making it zero: 1 ⋅ 0 = 0 {\displaystyle 1\cdot \mathbf {0} =\mathbf {0} } . The topological dimension of a topological space X {\displaystyle X} is defined to be the minimum value of n , such that every finite open cover A {\displaystyle {\mathcal {A}}} of X {\displaystyle X} admits a finite open cover B {\displaystyle {\mathcal {B}}} of X {\displaystyle X} which refines A {\displaystyle {\mathcal {A}}} in which no point is included in more than n +1 elements. If no such minimal n exists, the space is said to be of infinite covering dimension. A point is zero-dimensional with respect to the covering dimension because every open cover of the space has a refinement consisting of a single open set. Let X be a metric space . If S ⊂ X and d ∈ [0, ∞) , the d -dimensional Hausdorff content of S is the infimum of the set of numbers δ ≥ 0 such that there is some (indexed) collection of balls { B ( x i , r i ) : i ∈ I } {\displaystyle \{B(x_{i},r_{i}):i\in I\}} covering S with r i > 0 for each i ∈ I that satisfies ∑ i ∈ I r i d < δ . {\displaystyle \sum _{i\in I}r_{i}^{d}<\delta .} The Hausdorff dimension of X is defined by dim H ⁡ ( X ) := inf { d ≥ 0 : C H d ( X ) = 0 } . {\displaystyle \operatorname {dim} _{\operatorname {H} }(X):=\inf\{d\geq 0:C_{H}^{d}(X)=0\}.} A point has Hausdorff dimension 0 because it can be covered by a single ball of arbitrarily small radius. Although the notion of a point is generally considered fundamental in mainstream geometry and topology, there are some systems that forgo it, e.g. noncommutative geometry and pointless topology . 
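The final claim, that a point has Hausdorff dimension 0, follows in one line from the definitions just given; as a worked derivation:

```latex
\text{Let } X=\{p\}. \text{ For any } r>0, \text{ the single-ball cover } \{B(p,r)\}
\text{ of } X \text{ gives } \sum_{i} r_i^{\,d} = r^{d}, \text{ so}
\]
\[
C_{H}^{d}(X) \le r^{d} \quad \text{for all } r>0
\;\Longrightarrow\; C_{H}^{d}(X)=0 \quad \text{for every } d>0.
\]
\[
\text{Hence } \dim_{\operatorname{H}}(X)=\inf\{\,d\ge 0 : C_{H}^{d}(X)=0\,\}=0.
```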
A "pointless" or "pointfree" space is defined not as a set , but via some structure ( algebraic or logical respectively) which looks like a well-known function space on the set: an algebra of continuous functions or an algebra of sets respectively. More precisely, such structures generalize well-known spaces of functions in a way that the operation "take a value at this point" may not be defined. [ 7 ] A further tradition starts from some books of A. N. Whitehead in which the notion of region is assumed as a primitive together with the one of inclusion or connection . [ 8 ] Often in physics and mathematics, it is useful to think of a point as having non-zero mass or charge (this is especially common in classical electromagnetism , where electrons are idealized as points with non-zero charge). The Dirac delta function , or δ function , is (informally) a generalized function on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line. [ 9 ] The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and physically represents an idealized point mass or point charge . [ 10 ] It was introduced by theoretical physicist Paul Dirac . In the context of signal processing it is often referred to as the unit impulse symbol (or function). [ 11 ] Its discrete analog is the Kronecker delta function which is usually defined on a finite domain and takes values 0 and 1.
https://en.wikipedia.org/wiki/Point_(geometry)
The Point Cloud Library ( PCL ) is an open-source library of algorithms for point cloud processing tasks and 3D geometry processing , such as occur in three-dimensional computer vision . The library contains algorithms for filtering, feature estimation, surface reconstruction, 3D registration , [ 5 ] model fitting , object recognition , and segmentation . Each module is implemented as a smaller library that can be compiled separately (for example, libpcl_filters, libpcl_features, libpcl_surface, ...). PCL has its own data format for storing point clouds - PCD (Point Cloud Data), but also allows datasets to be loaded and saved in many other formats. It is written in C++ and released under the BSD license . These algorithms have been used, for example, for perception in robotics to filter outliers from noisy data, stitch 3D point clouds together , segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects in the world based on their geometric appearance, and create surfaces from point clouds and visualize them. [ 6 ] [ failed verification ] PCL requires several third-party libraries to function, which must be installed. Most mathematical operations are implemented using the Eigen library. The visualization module for 3D point clouds is based on VTK . Boost is used for shared pointers and the FLANN library for quick k-nearest neighbor search. Additional libraries such as Qhull, OpenNI , or Qt are optional and extend PCL with additional features. PCL is cross-platform software that runs on the most commonly used operating systems : Linux , Windows , macOS and Android . The library is fully integrated with the Robot Operating System (ROS) and provides support for OpenMP and Intel Threading Building Blocks (TBB) libraries for multi-core parallelism . [ 7 ] [ 8 ] The library is constantly updated and expanded, and its use in various industries is constantly growing. 
For example, PCL participated in the Google Summer of Code 2020 initiative with three projects. One was the extension of PCL for use with Python using Pybind11. [ 9 ] A large number of examples and tutorials are available on the PCL website, either as C++ source files or as tutorials with a detailed description and explanation of the individual steps. The Point Cloud Library is widely used in many different fields; here are some examples: PCL requires for its installation several third-party libraries, which are listed below. Some libraries are optional and extend PCL with additional features. The PCL library is built with the CMake build system ( http://www.cmake.org/ ), version 3.5.0 or newer. [ 10 ] [ 8 ] Mandatory libraries: Optional libraries that enable some additional features: The PCD ( Point Cloud Data ) is a file format for storing 3D point cloud data. It was created because existing formats did not support some of the features provided by the PCL library. PCD is the primary data format in PCL, but the library also offers the ability to save and load data in other formats (such as PLY, IFS, VTK, STL, OBJ, X3D). However, these other formats do not have the flexibility and speed of PCD files. One of the advantages of PCD is the ability to store and process organized point cloud datasets. Another is very fast saving and loading of points that are stored in binary form. [ 11 ] [ 12 ] The PCD version is specified with the numbers 0.x (e.g., 0.5, 0.6, etc.) in the header of each file. The official version in 2020 is PCD 0.7 ( PCD_V7 ). The main difference compared to version 0.6 is that a new header entry, VIEWPOINT , has been added. It specifies the information about the orientation of the sensor relative to the dataset. [ 13 ] The PCD file is divided into two parts - header and data . The header has a precisely defined format and contains the necessary information about the point cloud data that are stored in it.
The header must be encoded in ASCII, however, the data can be stored in ASCII or binary format. Because the ASCII format is human-readable, it can be opened in standard software tools and easily edited. In version 0.7 the version of the PCD file is at the beginning of the header, followed by the name , size , and type of each dimension of the stored data. It also gives the number of points ( height * width ) in the whole cloud and information about whether the point cloud dataset is organized or unorganized. The data type specifies in which format the point cloud data are stored (ASCII or binary). The header is followed by a set of points. Each point can be stored on a separate line (unorganized point-cloud) or they are stored in an image-like organized structure (organized point-cloud). [ 11 ] More detailed information about header entries can be found in the documentation . Below is an example of a PCD file. The order of header entries is important! The development of the Point Cloud Library started in March 2010 at Willow Garage . The project initially resided on a subdomain of Willow Garage, then moved to a new website www.pointclouds.org in March 2011. [ 1 ] PCL's first official release (Version 1.0) was released two months later in May 2011. [ 2 ] PCL is divided into several smaller code libraries that can be compiled separately. Some of the most important modules and their functions are described below. [ 14 ] [ 15 ] When scanning a 3D point cloud, errors and various deviations can occur, which causes noise in the data. This complicates the estimation of some local point cloud characteristics, such as surface normals. These inaccuracies can lead to significant errors in further processing, and it is therefore advisable to remove them with a suitable filter. The pcl_filters library provides several useful filters for removing outliers and noise and also downsampling the data.
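The PCD example mentioned above can be illustrated with a minimal PCD v0.7 file storing three XYZ points in ASCII form (the point values here are made up; the header entries and their order follow the format described in the text):

```
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 3
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 3
DATA ascii
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
```

Here WIDTH × HEIGHT gives the point count (3 × 1, an unorganized cloud), each of the x, y, z fields is a 4-byte float, and VIEWPOINT holds the sensor pose as a translation plus quaternion.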
Some of them use simple criteria to trim points, others use statistical analysis. The pcl_features library contains algorithms and data structures for 3D feature estimation. The most commonly used local geometric features are the point normal and the underlying surface's estimated curvature. The features describe geometrical patterns at a certain point based on a selected k-neighborhood (data space selected around the point). The neighborhood can be selected by determining a fixed number of points in the closest area or defining a radius of a sphere around the point. One of the simplest methods for estimating the surface normal is an analysis of the eigenvectors and eigenvalues of a covariance matrix created from the neighborhood of the point. Point Feature Histograms (or the faster FPFH) descriptors are an advanced feature representation and depend on normal estimations at each point. They generalize the mean curvature around the point using a multidimensional histogram of values. Other descriptors in the library include the Viewpoint Feature Histogram (VFH) descriptor, NARF descriptors, moment of inertia and eccentricity based descriptors, Globally Aligned Spatial Distribution (GASD) descriptors, and more. The pcl_segmentation library contains algorithms for segmenting a point cloud into different clusters. Clustering is often used to divide the cloud into individual parts that can be further processed. Several classes are implemented that support various segmentation methods: The pcl_visualization library is used to quickly and easily visualize 3D point cloud data. The package makes use of the VTK library for 3D rendering of clouds and range images. The library offers: Registration is the problem of aligning various point cloud datasets acquired from different views into a single point cloud model. The pcl_registration library implements a number of point cloud registration algorithms for both organized and unorganized datasets.
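The eigenvector-based normal estimation described above is compact enough to sketch directly. The following NumPy sketch (not PCL's implementation; a stand-alone illustration of the same technique) takes the covariance matrix of a point's neighborhood and returns the eigenvector of the smallest eigenvalue, i.e. the direction of least variance:

```python
import numpy as np

def estimate_normal(neighborhood):
    """Estimate a surface normal from a (k, 3) array of neighboring points as
    the eigenvector of the neighborhood covariance matrix that belongs to the
    smallest eigenvalue (the direction of least variance)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigenvectors[:, 0]                         # unit normal, defined up to sign
```

For points sampled from a flat patch, the returned vector is (up to sign) the patch's plane normal; the ratio of the smallest eigenvalue to the total gives the surface-variation measure used to gauge curvature.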
The task is to identify the corresponding points between the data sets and find a transformation that minimizes their distance. The iterative closest point algorithm minimizes the distances between the points of two point clouds. It can be used for determining if one point cloud is just a rigid transformation of another. Normal Distributions Transform (NDT) is a registration algorithm that can be used to determine a rigid transformation between two point clouds that have over 100,000 points. The sample_consensus library holds SAmple Consensus (SAC) methods like RANSAC and models to detect specific objects in point clouds. Some of the models implemented in this library include plane models that are often used to detect interior surfaces such as walls and floors. Other models include lines, 2D and 3D circles in a plane, spheres, cylinders, cones, a model for determining a line parallel to a given axis, a model for determining a plane perpendicular to a user-specified axis, a plane parallel to a user-specified axis, etc. These can be used to detect objects with common geometric structures (e.g., fitting a cylinder model to a mug). Robust sample consensus estimators that are available in the library: Several algorithms for surface reconstruction of 3D point clouds are implemented in the pcl_surface library. There are several ways to reconstruct the surface. One of the most commonly used is meshing, and the PCL library has two algorithms: a very fast triangulation of the original points, and a slower meshing algorithm that also smooths and fills holes. If the cloud is noisy, it is advisable to use surface smoothing using one of the implemented algorithms. The Moving Least Squares (MLS) surface reconstruction method is a resampling algorithm that can reconstruct missing parts of a surface. Thanks to higher order polynomial interpolations between surrounding data points, MLS can correct and smooth out small errors caused by scanning.
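The plane-detection use case of RANSAC mentioned above is easy to sketch. The following stand-alone NumPy sketch (not PCL's implementation; function name and defaults are illustrative) fits a plane n·p + d = 0 by repeatedly sampling three points and keeping the candidate plane with the most inliers:

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, seed=0):
    """RANSAC plane fit for an (N, 3) point cloud: fit candidate planes to
    random 3-point samples and keep the one with the most inliers within
    `threshold` distance.  Returns (unit normal, offset, inlier mask)."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        a, b, c = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                     # collinear sample: no unique plane
        n = n / norm
        d = -n @ a
        inliers = np.abs(points @ n + d) < threshold   # point-to-plane distances
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```

On a cloud that is mostly a wall or floor plus clutter, the winning plane captures the dominant surface and the inlier mask segments it out, which is exactly how plane models are used to detect walls and floors.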
Greedy Projection Triangulation implements an algorithm for fast surface triangulation on an unordered PointCloud with normals. The result is a triangle mesh that is created by projecting the local neighborhood of a point along the normal of the point. It works best if the surface is locally smooth and there are smooth transitions between areas with different point densities. Many parameters can be set that are taken into account when connecting points (how many neighbors are searched, the maximum distance for a point, minimum and maximum angle of a triangle). The library also implements functions for creating a concave or convex hull polygon for a plane model, Grid projection surface reconstruction algorithm, marching cubes , ear clipping triangulation algorithm, Poisson surface reconstruction algorithm, etc. The io_library allows you to load and save point clouds to files, as well as capture clouds from various devices. It includes functions that allow you to concatenate the points of two different point clouds with the same type and number of fields. The library can also concatenate fields (e.g., dimensions) of two different point clouds with same number of points. Starting with PCL 1.0 the library offers a new generic grabber interface that provides easy access to different devices and file formats. The first devices supported for data collection were OpenNI compatible cameras (tested with Primesense Reference Design , Microsoft Kinect and Asus Xtion Pro cameras ). As of PCL 1.7 , point cloud data can be also obtained from the Velodyne High Definition LiDAR (HDL) system, which produces 360 degree point clouds. PCL supports both the original HDL-64e and HDL-32e . There is also a new driver for Dinast Cameras (tested with IPA-1110 , Cyclopes II and IPA-1002 ng T-Less NG ). PCL 1.8 brings support for IDS-Imaging Ensenso cameras, DepthSense cameras (e.g. Creative Senz3D , DepthSense DS325 ), and davidSDK scanners. 
The pcl_kdtree library provides the kd-tree data-structure for organizing a set of points in a space with k dimensions. It is used to find the K nearest neighbors (using FLANN) of a specific point or location. The pcl_octree library implements the octree hierarchical tree data structure for point cloud data. The library provides nearest neighbor search algorithms, such as “Neighbors within Voxel Search”, “K Nearest Neighbor Search” and “Neighbors within Radius Search”. There are also several octree types that differ by their leaf node properties. Each leaf node can hold a single point or a list of point indices, or it does not store any point information. The library can also be used for detection of spatial changes between multiple unorganized point clouds by recursive comparison of octree structures. The pcl_search library implements methods for searching for nearest neighbors using different data structures that can be found in other modules, such as KdTree, Octree, or a specialized search for organized datasets. The range_image library contains two classes for representing and working with range images whose pixel values represent a distance from the sensor. The range image can be converted to a point cloud if the sensor position is specified, or the borders can be extracted from it. The pcl_keypoints library contains implementations of point cloud keypoint detection algorithms (AGAST corner point detector, Harris detector , BRISK detector, etc.). The pcl_common library contains the core data structures for point cloud, types for point representation, surface normals, RGB color values, etc. There are also implemented useful methods for computing distances, mean values and covariance, geometric transformations, and more. The common library is mainly used by other PCL modules.
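The two search operations named above, K-nearest-neighbor search and neighbors-within-radius search, behave the same way in any kd-tree implementation. A brief sketch using SciPy's `cKDTree` (a stand-in for PCL's FLANN-backed kd-tree, assuming `scipy` is available):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))            # a synthetic 3D point cloud
tree = cKDTree(cloud)                    # build the k-d tree once

query = np.array([0.5, 0.5, 0.5])
dists, idx = tree.query(query, k=5)      # "K Nearest Neighbor Search"
inside = tree.query_ball_point(query, r=0.1)   # "Neighbors within Radius Search"
```

`query` returns the five nearest points in increasing distance order, while `query_ball_point` returns the indices of every point within the given radius; both avoid the brute-force scan over all 1000 points.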
https://en.wikipedia.org/wiki/Point_Cloud_Library
A point accepted mutation — also known as a PAM — is the replacement of a single amino acid in the primary structure of a protein with another single amino acid, which is accepted by the processes of natural selection . This definition does not include all point mutations in the DNA of an organism. In particular, silent mutations are not point accepted mutations, nor are mutations that are lethal or that are rejected by natural selection in other ways. A PAM matrix is a matrix where each column and row represents one of the twenty standard amino acids. In bioinformatics , PAM matrices are sometimes used as substitution matrices to score sequence alignments for proteins. Each entry in a PAM matrix indicates the likelihood of the amino acid of that row being replaced with the amino acid of that column through a series of one or more point accepted mutations during a specified evolutionary interval, rather than these two amino acids being aligned due to chance. Different PAM matrices correspond to different lengths of time in the evolution of the protein sequence. The genetic instructions of every replicating cell in a living organism are contained within its DNA. [ 1 ] Throughout the cell's lifetime, this information is transcribed and replicated by cellular mechanisms to produce proteins or to provide instructions for daughter cells during cell division , and the possibility exists that the DNA may be altered during these processes. [ 1 ] [ 2 ] This is known as a mutation . At the molecular level, there are regulatory systems that correct most — but not all — of these changes to the DNA before it is replicated. [ 2 ] [ 3 ] One of the possible mutations that occurs is the replacement of a single nucleotide , known as a point mutation. If a point mutation occurs within an expressed region of a gene , an exon , then this will change the codon specifying a particular amino acid in the protein produced by that gene. 
[ 2 ] Despite the redundancy in the genetic code , there is a possibility that this mutation will then change the amino acid that is produced during translation , and as a consequence the structure of the protein will be changed. The functionality of a protein is highly dependent on its structure. [ 4 ] Changing a single amino acid in a protein may reduce its ability to carry out this function, or the mutation may even change the function that the protein carries out. [ 2 ] Changes like these may severely impact a crucial function in a cell, potentially causing the cell — and in extreme cases, the organism — to die. [ 5 ] Conversely, the change may allow the cell to continue functioning albeit differently, and the mutation can be passed on to the organism's offspring. If this change does not result in any significant physical disadvantage to the offspring, the possibility exists that this mutation will persist within the population. The possibility also exists that the change in function becomes advantageous. In either case, while being subjected to the processes of natural selection, the point mutation has been accepted into the genetic pool. The 20 amino acids translated by the genetic code vary greatly by the physical and chemical properties of their side chains. [ 4 ] However, these amino acids can be categorised into groups with similar physicochemical properties. [ 4 ] Substituting an amino acid with another from the same category is more likely to have a smaller impact on the structure and function of a protein than replacement with an amino acid from a different category. Consequently, acceptance of point mutations depends heavily on the amino acid being replaced in the mutation, and the replacement amino acid. The PAM matrices are a mathematical tool that account for these varying rates of acceptance when evaluating the similarity of proteins during alignment. The term accepted point mutation was initially used to describe the mutation phenomenon. 
However, the acronym PAM was preferred over APM due to readability, and so the term point accepted mutation is used more regularly. [ 6 ] Because the value n in the PAM n matrix represents the number of mutations per 100 amino acids, which can be likened to a percentage of mutations, the term percentage accepted mutation is sometimes used. It is important to distinguish between point accepted mutations (PAMs), point accepted mutation matrices (PAM matrices) and the PAM n matrix. The term 'point accepted mutation' refers to the mutation event itself. However, 'PAM matrix' refers to one of a family of matrices which contain scores representing the likelihood of two amino acids being aligned due to a series of mutation events, rather than due to random chance. The 'PAM n matrix' is the PAM matrix corresponding to a time frame long enough for n mutation events to occur per 100 amino acids. PAM matrices were introduced by Margaret Dayhoff in 1978. [ 7 ] The calculation of these matrices was based on 1572 observed mutations in the phylogenetic trees of 71 families of closely related proteins. The proteins to be studied were selected on the basis of having high similarity with their predecessors. The protein alignments included were required to display at least 85% identity. [ 6 ] [ 8 ] As a result, it is reasonable to assume that any aligned mismatches were the result of a single mutation event, rather than several at the same location. Each PAM matrix has twenty rows and twenty columns — one representing each of the twenty amino acids translated by the genetic code. The value in each cell of a PAM matrix is related to the probability of a row amino acid before the mutation being aligned with a column amino acid afterwards. [ 6 ] [ 7 ] [ 8 ] From this definition, PAM matrices are an example of a substitution matrix.
For each branch in the phylogenetic trees of the protein families, the number of mismatches that were observed was recorded, along with the two amino acids involved. [ 7 ] These counts were used as entries below the main diagonal of the matrix A. Since the vast majority of protein samples come from organisms that are alive today (extant species), the 'direction' of a mutation cannot be determined. That is, the amino acid present before the mutation cannot be distinguished from the amino acid that replaced it after the mutation. Because of this, the matrix A is assumed to be symmetric, and the entries of A above the main diagonal are computed on this basis. The entries along the diagonal of A do not correspond to mutations and can be left unfilled. In addition to these counts, data on the mutability and the frequency of the amino acids was obtained. [ 6 ] [ 7 ] The mutability of an amino acid is the ratio of the number of mutations it is involved in to the number of times it occurs in an alignment. [ 7 ] Mutability measures how likely an amino acid is to mutate acceptably. Asparagine, an amino acid with a small polar side chain, was found to be the most mutable of the amino acids. [ 7 ] Cysteine and tryptophan were found to be the least mutable amino acids. [ 7 ] The side chains for cysteine and tryptophan have less common structures: cysteine's side chain contains sulfur, which participates in disulfide bonds with other cysteine molecules, and tryptophan's side chain is large and aromatic. [ 4 ] Since there are several small polar amino acids, these extremes suggest that amino acids are more likely to acceptably mutate if their physical and chemical properties are more common among alternative amino acids. [ 6 ] [ 8 ] For the j-th amino acid, the values m(j) and f(j) are its mutability and frequency.
The frequencies of the amino acids are normalised so that they sum to 1. If the total number of occurrences of the j-th amino acid is n(j), and N is the total number of all amino acids, then f(j) = n(j)/N. Based on the definition of mutability as the ratio of mutations to occurrences of an amino acid, m(j) = Σ_i A(i,j) / n(j), or equivalently n(j) m(j) = Σ_i A(i,j). The mutation matrix M is constructed so that the entry M(i,j) represents the probability of the j-th amino acid mutating into the i-th amino acid. The non-diagonal entries are computed by the equation [ 7 ] M(i,j) = λ m(j) A(i,j) / Σ_k A(k,j), where λ is a constant of proportionality. However, this equation does not compute the diagonal entries. Each column in the matrix M lists each of the twenty possible outcomes for an amino acid: it can mutate into one of the 19 other amino acids, or remain unchanged. Since the non-diagonal entries listing the probabilities of each of the 19 mutations are known, and the sum of the probabilities of these twenty outcomes must be 1, this last probability can be calculated by M(j,j) = 1 − Σ_{i≠j} M(i,j). Substituting in the expression for the non-diagonal entries gives M(j,j) = 1 − λ m(j) Σ_{i≠j} A(i,j) / Σ_k A(k,j); since λ and m(j) are constants that do not change with the value of i, they factor out of the sum, and cancellation of the two (equal) sums of counts reveals that M(j,j) = 1 − λ m(j). A result of particular significance is that for the non-diagonal entries f(j) M(i,j) = (n(j)/N) λ m(j) A(i,j) / Σ_k A(k,j) = λ A(i,j) / N, which, together with the symmetry of A, means that for all entries in the mutation matrix f(j) M(i,j) = f(i) M(j,i). The probabilities contained in M vary as some unknown function of the amount of time that a protein sequence is allowed to mutate for. Instead of attempting to determine this relationship, the values of M are calculated for a short time frame, and the matrices for longer periods of time are calculated by assuming mutations follow a Markov chain model.
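A numeric sketch of this construction, using a toy 3-letter alphabet instead of the 20 amino acids (all counts below are invented for illustration, not real PAM data):

```python
# Toy accepted-mutation counts A(i, j): symmetric, diagonal unused.
A = [[0, 4, 2],
     [4, 0, 6],
     [2, 6, 0]]
n = [10, 20, 30]                 # occurrences n(j) of each "amino acid"
N = sum(n)                       # total number of amino acids
f = [nj / N for nj in n]         # frequencies f(j), normalised to sum to 1

col = [sum(row[j] for row in A) for j in range(3)]   # mutations involving j
m = [col[j] / n[j] for j in range(3)]                # mutabilities m(j)

lam = 0.01                       # proportionality constant (arbitrary here)
M = [[0.0] * 3 for _ in range(3)]
for j in range(3):
    for i in range(3):
        if i != j:
            M[i][j] = lam * m[j] * A[i][j] / col[j]  # non-diagonal entries
    M[j][j] = 1 - lam * m[j]                         # diagonal entries

# Each column sums to 1, and f(j) M(i,j) = f(i) M(j,i) for all entries.
col_sums = [sum(M[i][j] for i in range(3)) for j in range(3)]
```

The two closing properties (columns summing to 1, and the f-weighted symmetry) are exactly the results derived above, so the sketch doubles as a check on the algebra.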
[ 9 ] [ 10 ] The base unit of time for the PAM matrices is the time required for 1 mutation to occur per 100 amino acids, sometimes called 'a PAM unit' or 'a PAM' of time. [ 6 ] This is precisely the duration of mutation assumed by the PAM 1 matrix. The constant λ is used to control the proportion of amino acids that are unchanged. By using only alignments of proteins that had at least 85% similarity, it could be reasonably assumed that the mutations observed were direct, without any intermediate states. This means that scaling down these counts by a common factor would provide an accurate estimate of the mutation counts had the similarity been closer to 100%. It also means that the number of mutations per 100 amino acids, the n in PAM n, is equal to the number of mutated amino acids per 100 amino acids. To find the mutation matrix for the PAM 1 matrix, the requirement that 99% of the amino acids in a sequence are conserved is imposed. The quantity n(j) M(j,j) is equal to the number of conserved amino acid j units, and so the total number of conserved amino acids is Σ_j n(j) M(j,j). The value of λ needed to produce 99% identity after mutation is then given by the equation Σ_j n(j) M(j,j) = 0.99 N. This λ value can then be used in the mutation matrix for the PAM 1 matrix. The Markov chain model of protein mutation relates the mutation matrix for PAM n, M_n, to the mutation matrix for the PAM 1 matrix, M_1, by the simple relationship M_n = (M_1)^n. The PAM n matrix is constructed from the ratio of the probability of point accepted mutations replacing the j-th amino acid with the i-th amino acid, to the probability of these amino acids being aligned by chance.
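A toy numeric sketch of the λ calibration and the Markov-chain powering (a 3-letter alphabet with invented counts, purely illustrative):

```python
A = [[0, 4, 2], [4, 0, 6], [2, 6, 0]]   # toy symmetric mutation counts
n = [10, 20, 30]                        # toy occurrence counts n(j)
N = sum(n)
f = [nj / N for nj in n]
col = [sum(row[j] for row in A) for j in range(3)]
m = [col[j] / n[j] for j in range(3)]

# Sum_j n(j) M(j,j) = Sum_j n(j) (1 - lam * m(j)) = 0.99 N
#   =>  lam = 0.01 * N / Sum_j n(j) m(j)
lam = 0.01 * N / sum(n[j] * m[j] for j in range(3))

M1 = [[lam * m[j] * A[i][j] / col[j] if i != j else 1 - lam * m[j]
       for j in range(3)] for i in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Markov-chain assumption: the PAM 250 mutation matrix is (M_1)^250.
M250 = [[float(i == j) for j in range(3)] for i in range(3)]
for _ in range(250):
    M250 = matmul(M250, M1)

conserved_1 = sum(f[j] * M1[j][j] for j in range(3))      # 0.99 by construction
conserved_250 = sum(f[j] * M250[j][j] for j in range(3))  # much lower
```

Repeated powering drives the conserved fraction down from 99%, and the f-weighted symmetry property survives powering, which is the fact the induction proof below establishes.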
The entries of the PAM n matrix are given by the equation [ 11 ] [ 12 ] PAM_n(i,j) = log( M_n(i,j) / f(i) ): the numerator is the probability of the j-th amino acid being replaced by the i-th through accepted mutations, and the denominator is the probability of the i-th amino acid appearing opposite it by chance. Note that in Gusfield's book, the entries M(i,j) and PAM_n(i,j) are related to the probability of the i-th amino acid mutating into the j-th amino acid. [ 11 ] This is the origin of the different equation for the entries of the PAM matrices. When using the PAM n matrix to score an alignment of two proteins, the following assumption is made: when the alignment of the i-th and j-th amino acids is considered, the score indicates the relative likelihoods of the alignment being due to the proteins being related or due to random chance. While the mutation probability matrix M is not symmetric, each of the PAM matrices is. [ 6 ] [ 7 ] This somewhat surprising property is a result of the relationship that was noted for the mutation probability matrix: f(j) M(i,j) = f(i) M(j,i). In fact, this relationship holds for all positive integer powers of the matrix M: f(j) M^k(i,j) = f(i) M^k(j,i). This generalisation can be proven using mathematical induction. Suppose that for a matrix M, f(j) M(i,j) = f(i) M(j,i), and that for a positive integer k, f(j) M^k(i,j) = f(i) M^k(j,i). By expansion of the matrix product M^(k+1) = M^k ⋅ M, f(j) M^(k+1)(i,j) = Σ_x M^k(i,x) M(x,j) f(j). Using the property we have assumed of the matrix M, f(j) M(x,j) = f(x) M(j,x), this equals Σ_x M^k(i,x) f(x) M(j,x), and using the property for the matrix M^k, f(x) M^k(i,x) = f(i) M^k(x,i), it equals f(i) Σ_x M(j,x) M^k(x,i) = f(i) M^(k+1)(j,i). In this case, it is only known at first that the result holds for k = 1. However, the above argument shows that the property also holds for k = 2. This new knowledge then shows that the property also holds for k = 3, and this repeats to show that the property holds for all positive integers k.
As a result, the entries of the PAM n matrix are symmetric, since PAM_n(i,j) = log( M_n(i,j) / f(i) ) = log( f(j) M_n(i,j) / (f(i) f(j)) ) = log( f(i) M_n(j,i) / (f(i) f(j)) ) = PAM_n(j,i). The value n represents the number of mutations that occur per 100 amino acids; however, this value is rarely accessible and often estimated. When comparing two proteins it is easy to calculate m instead, which is the number of mutated amino acids per 100 amino acids. Despite the random nature of mutation, these values can be approximately related. [ 13 ] Mutations in the primary structure of a protein can occur anywhere along the sequence. If it is assumed the distribution of the mutations among amino acid positions is uniform, the problem is analogous to a distribution of "balls into bins", a common problem in combinatorics. In a case where K balls (i.e. mutations) are distributed amongst N bins (amino acid positions), the number of bins containing at least one ball has a distribution with a mean given by [ 14 ] N (1 − (1 − 1/N)^K). If the rate of mutation is n mutations per 100 amino acids, then K = nN/100. And if there are m mutated amino acids per 100 amino acids, then the mean number of occupied bins is approximately mN/100. Now m and n can be related by m = 100 (1 − (1 − 1/N)^(nN/100)). For large values of N, an assumption that can be reasonably made for typical proteins, this expression is approximately equal to m = 100 (1 − e^(−n/100)). The validity of these estimates can be verified by counting the number of amino acids that remain unchanged under the action of the matrix M. The total number of unchanged amino acids for the time interval of the PAM n matrix is Σ_j n(j) M_n(j,j), and so the proportion of unchanged amino acids is Σ_j f(j) M_n(j,j). The PAM250 matrix is a commonly used scoring matrix for sequence comparison. Only the lower half of the matrix needs to be computed, since by their construction, PAM matrices are required to be symmetric.
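The balls-into-bins estimate can be sketched directly (the sequence length N below is an arbitrary choice for illustration):

```python
import math

def observed_per_100(n_mut, N=10_000):
    # Expected number m of changed positions per 100, when n_mut mutations
    # per 100 positions strike a length-N sequence uniformly at random.
    K = n_mut * N / 100                  # total mutations ("balls")
    hit = N * (1 - (1 - 1 / N) ** K)     # mean positions hit at least once
    return 100 * hit / N

# Large-N limit: m ~ 100 * (1 - exp(-n/100)), e.g. for n = 50:
approx = 100 * (1 - math.exp(-50 / 100))
```

Because repeated hits land on already-mutated positions, the observed difference m lags the actual mutation count n; note this simple model ignores back-mutations that restore the original residue, so it still overstates observed differences relative to the full Markov model.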
Each of the 20 amino acids is shown along the top and side of the matrix, with 3 additional ambiguous amino acids. The amino acids are most commonly listed either alphabetically or in groups of amino acids that share characteristics. [ 7 ] The molecular clock hypothesis predicts that the rate of amino acid substitution in a particular protein will be approximately constant over time, though this rate may vary between protein families. [ 13 ] This suggests that the number of mutations per amino acid in a protein increases approximately linearly with time. Determining the time at which two proteins diverged is an important task in phylogenetics. Fossil records are often used to establish the position of events on the timeline of the Earth's evolutionary history, but the application of this source is limited. However, if the rate at which the molecular clock of a protein family ticks — that is, the rate at which the number of mutations per amino acid increases — is known, then knowing this number of mutations would allow the date of divergence to be found. Suppose the date of divergence for two related proteins, taken from organisms living today, is sought. The two proteins have both been accumulating accepted mutations since the date of divergence, and so the total number of mutations per amino acid separating them is approximately twice that which separates them from their common ancestor. If a range of PAM matrices are used to align two proteins that are known to be related, then the value of n in the PAM n matrix which results in the best score is most likely to correspond to the mutations per amino acid separating the two proteins. Halving this value and dividing by the rate at which accepted mutations accumulate in the protein family provides an estimate of the time of divergence of these two proteins from their common ancestor.
That is, the time of divergence in millions of years (Myr) is [ 13 ] T = K / (2r), where K is the number of mutations per amino acid, and r is the rate of accepted mutation accumulation in mutations per amino acid site per million years. PAM matrices are also used as a scoring matrix when comparing DNA sequences or protein sequences to judge the quality of the alignment. This form of scoring system is utilized by a wide range of alignment software including BLAST. [ 15 ] Although the PAM log-odds matrices were the first scoring matrices used with BLAST, the PAM matrices have largely been replaced by the BLOSUM matrices. Although both matrices produce similar scoring outcomes, they were generated using differing methodologies: the BLOSUM matrices were generated directly from the amino acid differences in aligned blocks that have diverged to varying degrees, while the PAM matrices reflect the extrapolation of evolutionary information based on closely related sequences to longer timescales. [ 16 ] Since the scoring information for the PAM and BLOSUM matrices was generated in very different ways, the numbers associated with the matrices have fundamentally different meanings; the numbers for PAM matrices increase for comparisons among more divergent proteins, whereas the numbers for the BLOSUM matrices decrease. [ 17 ] However, all amino acid substitution matrices can be compared in an information theoretic framework [ 18 ] using their relative entropy.
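A minimal numeric sketch of the divergence-time estimate (the mutation count and clock rate below are invented for illustration):

```python
def divergence_time_myr(K, r):
    # Time since the two proteins' common ancestor, in millions of years.
    # K: mutations per amino acid separating the two extant proteins
    #    (accumulated along both lineages, hence the factor of 2).
    # r: accepted mutations per amino acid site per million years.
    return K / (2 * r)

# e.g. 0.8 mutations per site between the proteins, clock rate 0.001/site/Myr
t = divergence_time_myr(0.8, 0.001)   # 400 Myr
```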
https://en.wikipedia.org/wiki/Point_accepted_mutation
In geometry , a point at infinity or ideal point is an idealized limiting point at the "end" of each line. In the case of an affine plane (including the Euclidean plane ), there is one ideal point for each pencil of parallel lines of the plane. Adjoining these points produces a projective plane , in which no point can be distinguished, if we "forget" which points were added. This holds for a geometry over any field , and more generally over any division ring . [ 1 ] In the real case, a point at infinity completes a line into a topologically closed curve. In higher dimensions, all the points at infinity form a projective subspace of one dimension less than that of the whole projective space to which they belong. A point at infinity can also be added to the complex line (which may be thought of as the complex plane), thereby turning it into a closed surface known as the complex projective line, C P 1 , also called the Riemann sphere (when complex numbers are mapped to each point). In the case of a hyperbolic space , each line has two distinct ideal points . Here, the set of ideal points takes the form of a quadric . In an affine or Euclidean space of higher dimension, the points at infinity are the points which are added to the space to get the projective completion . [ citation needed ] The set of the points at infinity is called, depending on the dimension of the space, the line at infinity , the plane at infinity or the hyperplane at infinity , in all cases a projective space of one less dimension. [ 2 ] As a projective space over a field is a smooth algebraic variety , the same is true for the set of points at infinity. Similarly, if the ground field is the real or the complex field, the set of points at infinity is a manifold . In artistic drawing and technical perspective, the projection on the picture plane of the point at infinity of a class of parallel lines is called their vanishing point . 
[ 3 ] In hyperbolic geometry , points at infinity are typically named ideal points . [ 4 ] Unlike Euclidean and elliptic geometries, each line has two points at infinity: given a line l and a point P not on l , the right- and left- limiting parallels converge asymptotically to different points at infinity. All points at infinity together form the Cayley absolute or boundary of a hyperbolic plane . A symmetry of points and lines arises in a projective plane: just as a pair of points determines a line, so a pair of lines determines a point. The existence of parallel lines leads to establishing a point at infinity which represents the intersection of these parallels. This axiomatic symmetry grew out of a study of graphical perspective, where a parallel projection arises as a central projection whose center C is a point at infinity, or figurative point . [ 5 ] The axiomatic symmetry of points and lines is called duality . Though a point at infinity is considered on a par with any other point of a projective range , in the representation of points with projective coordinates a distinction is noted: finite points are represented with a 1 in the final coordinate, while a point at infinity has a 0 there. Representing points at infinity thus requires one extra coordinate beyond the space of finite points. This construction can be generalized to topological spaces . Different compactifications may exist for a given space, but an arbitrary topological space admits an Alexandroff extension , also called the one-point compactification when the original space is not itself compact . The projective line (over an arbitrary field) is the Alexandroff extension of the corresponding field. Thus, the circle is the one-point compactification of the real line , and the sphere is the one-point compactification of the plane.
Projective spaces P n for n > 1 are not one-point compactifications of corresponding affine spaces for the reason mentioned above under § Affine geometry , and completions of hyperbolic spaces with ideal points are also not one-point compactifications.
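The homogeneous-coordinate convention mentioned above (finite points carry a 1 in the final coordinate, ideal points a 0) can be sketched concretely: in the projective plane, both the line through two points and the intersection of two lines are given by a cross product, and intersecting two parallel lines yields a point at infinity.

```python
def cross(a, b):
    # Cross product of homogeneous triples (x, y, w): gives the line through
    # two points, or the intersection point of two lines.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Lines ax + by + c = 0 as triples (a, b, c): y = x and y = x + 1 are parallel.
l1 = (1, -1, 0)
l2 = (1, -1, 1)
p = cross(l1, l2)   # (-1, -1, 0): final coordinate 0, so this is the
                    # ideal point shared by all lines of slope 1
```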
https://en.wikipedia.org/wiki/Point_at_infinity
An SS7 point code is an address for the SS7 telephone switching system. It is similar to an IP address in an IP network. It is a unique address for a node (Signaling Point, or SP), used in MTP layer 3 to identify the destination of a message signal unit (MSU). Messages contain an OPC (Originating Point Code) and a DPC (Destination Point Code); some documents refer to these as signaling point codes. Depending on the network, a point code can be 24 bits (North America, China), 16 bits (Japan), or 14 bits (ITU standard, International SS7 network and most countries) in length. ANSI point codes use 24 bits, mostly in 8-8-8 format. [ 1 ] ITU point codes use 14 bits in 3-8-3 format. [ 2 ] Fourteen-bit point codes can be written in multiple formats. The most common are decimal number, hexadecimal number, or 3-8-3 format (3 most significant bits, 8 middle bits, 3 least significant bits). Twenty-four-bit point codes may be written in decimal, hexadecimal, or 8-8-8 format.
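Converting a 14-bit ITU point code between its integer and 3-8-3 forms is simple bit manipulation; the function names below are illustrative, not taken from any standard API.

```python
def to_383(pc):
    # Split a 14-bit ITU point code into (3 MSBs, 8 middle bits, 3 LSBs).
    assert 0 <= pc < 1 << 14
    return pc >> 11, (pc >> 3) & 0xFF, pc & 0x7

def from_383(zone, area, sp):
    # Reassemble the three fields into the 14-bit integer form.
    assert 0 <= zone < 8 and 0 <= area < 256 and 0 <= sp < 8
    return (zone << 11) | (area << 3) | sp

# Decimal point code 5167 is 2-133-7 in 3-8-3 format.
```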
https://en.wikipedia.org/wiki/Point_code
In geometry , a point group is a mathematical group of symmetry operations ( isometries in a Euclidean space ) that have a fixed point in common. The coordinate origin of the Euclidean space is conventionally taken to be a fixed point, and every point group in dimension d is then a subgroup of the orthogonal group O( d ). Point groups are used to describe the symmetries of geometric figures and physical objects such as molecules . Each point group can be represented as sets of orthogonal matrices M that transform point x into point y according to y = Mx . Each element of a point group is either a rotation ( determinant of M = 1 ), or it is a reflection or improper rotation (determinant of M = −1 ). The geometric symmetries of crystals are described by space groups , which allow translations and contain point groups as subgroups. Discrete point groups in more than one dimension come in infinite families, but from the crystallographic restriction theorem and one of Bieberbach's theorems , each number of dimensions has only a finite number of point groups that are symmetric over some lattice or grid with that number of dimensions. These are the crystallographic point groups . Point groups can be classified into chiral (or purely rotational) groups and achiral groups. [ 1 ] The chiral groups are subgroups of the special orthogonal group SO( d ): they contain only orientation-preserving orthogonal transformations, i.e., those of determinant +1. The achiral groups contain also transformations of determinant −1. In an achiral group, the orientation-preserving transformations form a (chiral) subgroup of index 2. Finite Coxeter groups or reflection groups are those point groups that are generated purely by a set of reflectional mirrors passing through the same point. A rank n Coxeter group has n mirrors and is represented by a Coxeter–Dynkin diagram . 
Coxeter notation offers a bracketed notation equivalent to the Coxeter diagram, with markup symbols for rotational and other subsymmetry point groups. Reflection groups are necessarily achiral (except for the trivial group containing only the identity element). There are only two one-dimensional point groups, the identity group and the reflection group. Point groups in two dimensions are sometimes called rosette groups . They come in two infinite families: the cyclic groups C n of n -fold rotations, and the dihedral groups D n of n -fold rotations and reflections. Applying the crystallographic restriction theorem restricts n to values 1, 2, 3, 4, and 6 for both families, yielding 10 groups. The subset of pure reflectional point groups, defined by 1 or 2 mirrors, can also be given by their Coxeter group and related polygons. These include 5 crystallographic groups. The symmetry of the reflectional groups can be doubled by an isomorphism , mapping both mirrors onto each other by a bisecting mirror, doubling the symmetry order. Point groups in three dimensions are sometimes called molecular point groups after their wide use in studying symmetries of molecules . They come in 7 infinite families of axial groups (also called prismatic), and 7 additional polyhedral groups (also called Platonic), each of which has a standard name in Schoenflies notation . Applying the crystallographic restriction theorem to these groups yields the 32 crystallographic point groups . The reflection point groups, defined by 1 to 3 mirror planes, can also be given by their Coxeter group and related polyhedra. The [3,3] group can be doubled, written as [[3,3]], mapping the first and last mirrors onto each other, doubling the symmetry to 48, and isomorphic to the [4,3] group. The four-dimensional point groups (chiral as well as achiral) are listed in Conway and Smith, [ 1 ] Section 4, Tables 4.1–4.3. The following list gives the four-dimensional reflection groups (excluding those that leave a subspace fixed and that are therefore lower-dimensional reflection groups).
Each group is specified as a Coxeter group , and like the polyhedral groups of 3D, it can be named by its related convex regular 4-polytope . Related pure rotational groups exist for each with half the order, and can be represented by the bracket Coxeter notation with a '+' exponent, for example [3,3,3] + has three 3-fold gyration points and symmetry order 60. Front-back symmetric groups like [3,3,3] and [3,4,3] can be doubled, shown as double brackets in Coxeter's notation, for example [[3,3,3]] with its order doubled to 240. The following table gives the five-dimensional reflection groups (excluding those that are lower-dimensional reflection groups), by listing them as Coxeter groups . Related chiral groups exist for each with half the order, and can be represented by the bracket Coxeter notation with a '+' exponent, for example [3,3,3,3] + has four 3-fold gyration points and symmetry order 360. The following table gives the six-dimensional reflection groups (excluding those that are lower-dimensional reflection groups), by listing them as Coxeter groups . Related pure rotational groups exist for each with half the order, and can be represented by the bracket Coxeter notation with a '+' exponent, for example [3,3,3,3,3] + has five 3-fold gyration points and symmetry order 2520. The following table gives the seven-dimensional reflection groups (excluding those that are lower-dimensional reflection groups), by listing them as Coxeter groups . Related chiral groups exist for each with half the order, defined by an even number of reflections, and can be represented by the bracket Coxeter notation with a '+' exponent, for example [3,3,3,3,3,3] + has six 3-fold gyration points and symmetry order 20160. The following table gives the eight-dimensional reflection groups (excluding those that are lower-dimensional reflection groups), by listing them as Coxeter groups . 
Related chiral groups exist for each with half the order, defined by an even number of reflections, and can be represented by the bracket Coxeter notation with a '+' exponent, for example [3,3,3,3,3,3,3] + has seven 3-fold gyration points and symmetry order 181440.
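The determinant test stated at the start of this article (det M = +1 for a rotation, det M = −1 for a reflection or improper rotation) can be sketched for 3×3 orthogonal matrices:

```python
def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def kind(M):
    # Classify an orthogonal matrix by whether it preserves orientation.
    return "rotation" if det3(M) > 0 else "reflection or improper rotation"

rot_z_90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]    # 90-degree rotation about z
mirror_xy = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]   # reflection in the xy-plane
```

A chiral point group contains only elements of the first kind; an achiral group contains both, with the orientation-preserving half forming an index-2 subgroup.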
https://en.wikipedia.org/wiki/Point_group
In computational geometry , the point-in-polygon ( PIP ) problem asks whether a given point in the plane lies inside, outside, or on the boundary of a polygon . It is a special case of point location problems and finds applications in areas that deal with processing geometrical data, such as computer graphics , computer vision , geographic information systems (GIS), motion planning , and computer-aided design (CAD). An early description of the problem in computer graphics shows two common approaches ( ray casting and angle summation) in use as early as 1974. [ 1 ] An attempt of computer graphics veterans to trace the history of the problem and some tricks for its solution can be found in an issue of the Ray Tracing News . [ 2 ] One simple way of finding whether the point is inside or outside a simple polygon is to test how many times a ray , starting from the point and going in any fixed direction, intersects the edges of the polygon. If the point is on the outside of the polygon the ray will intersect its edge an even number of times. If the point is on the inside of the polygon then it will intersect the edge an odd number of times. The status of a point on the edge of the polygon depends on the details of the ray intersection algorithm. This algorithm is sometimes also known as the crossing number algorithm or the even–odd rule algorithm , and was known as early as 1962. [ 3 ] The algorithm is based on a simple observation that if a point moves along a ray from infinity to the probe point and if it crosses the boundary of a polygon, possibly several times, then it alternately goes from the outside to inside, then from the inside to the outside, etc. As a result, after every two "border crossings" the moving point goes outside. This observation may be mathematically proved using the Jordan curve theorem . 
If implemented on a computer with finite precision arithmetic, the results may be incorrect if the point lies very close to the boundary, because of rounding errors. For some applications, like video games or other entertainment products, this is not a large concern since they often favor speed over precision. However, for a formally correct computer program , one would have to introduce a numerical tolerance ε and test whether P (the point) lies within ε of L (the line), in which case the algorithm should stop and report " P lies very close to the boundary." Most implementations of the ray casting algorithm consecutively check intersections of a ray with all sides of the polygon in turn. In this case the following problem must be addressed. If the ray passes exactly through a vertex of a polygon, then it will intersect 2 segments at their endpoints. While this is acceptable for the case of the topmost vertex in the example or the vertex between crossings 4 and 5, the case of the rightmost vertex (in the example) requires that we count one intersection for the algorithm to work correctly. A similar problem arises with horizontal segments that happen to fall on the ray. The issue is solved as follows: if the intersection point is a vertex of a tested polygon side, then the intersection counts only if the other vertex of the side lies below the ray. This is effectively equivalent to considering vertices on the ray as lying slightly above the ray. Once again, the case of the ray passing through a vertex may pose numerical problems in finite precision arithmetic: for two sides adjacent to the same vertex, the straightforward computation of the intersection with a ray may not give the vertex in both cases. If the polygon is specified by its vertices, then this problem is eliminated by checking the y-coordinates of the ray and the ends of the tested polygon side before actual computation of the intersection.
In other cases, when polygon sides are computed from other types of data, other tricks must be applied for the numerical robustness of the algorithm. Another technique used to check if a point is inside a polygon is to compute the given point's winding number with respect to the polygon. If the winding number is non-zero, the point lies inside the polygon. This algorithm is sometimes also known as the nonzero-rule algorithm . One way to compute the winding number is to sum up the angles subtended by each side of the polygon. [ 4 ] However, this involves costly inverse trigonometric functions , which generally makes this algorithm performance-inefficient (slower) compared to the ray casting algorithm. Luckily, these inverse trigonometric functions do not need to be computed. Since the result, the sum of all angles, can add up to 0 or 2 π {\displaystyle 2\pi } (or multiples of 2 π {\displaystyle 2\pi } ) only, it is sufficient to track through which quadrants the polygon winds, [ 5 ] as it turns around the test point, which makes the winding number algorithm comparable in speed to counting the boundary crossings. An improved algorithm to calculate the winding number was developed by Dan Sunday in 2001. [ 6 ] It does not use angles in calculations, nor any trigonometry, and functions exactly the same as the ray casting algorithms described above. Sunday's algorithm works by considering an infinite horizontal ray cast from the point being checked. Whenever that ray crosses an edge of the polygon, Juan Pineda's edge crossing algorithm (1988) [ 7 ] is used to determine how the crossing will affect the winding number. As Sunday describes it, if the edge crosses the ray going "upwards", the winding number is incremented; if it crosses the ray "downwards", the number is decremented. Sunday's algorithm gives the correct answer for nonsimple polygons, whereas the boundary crossing algorithm fails in this case. 
[ 6 ] Similar methods are used in SVG to define how various shapes (such as path, polyline, polygon, text, etc.) are filled with color. [ 8 ] The fill algorithm is controlled by the 'fill-rule' attribute, whose value may be either nonzero or evenodd. For example, in a pentagram, there is a central "hole" (visible background) with evenodd, and none with the nonzero attribute. [ 9 ] For simple polygons, the algorithms will give the same result. However, for complex polygons, the algorithms may give different results for points in the regions where the polygon intersects itself, where the polygon does not have a clearly defined inside and outside. One solution using the even-odd rule is to transform (complex) polygons into simpler ones that are even-odd-equivalent before the intersection check. [ 10 ] This, however, is computationally expensive. It is less expensive to use the fast non-zero winding number algorithm, which gives the correct result even when the polygon overlaps itself. The point in polygon problem may be considered in the general repeated geometric query setting: given a single polygon and a sequence of query points, quickly find the answer for each query point. Clearly, any of the general approaches for planar point location may be used. Simpler algorithms are possible for some special polygons: monotone polygons, star-shaped polygons, convex polygons and triangles. The triangle case can be solved easily by use of a barycentric coordinate system, parametric equation or dot product. [ 11 ] The dot product method extends naturally to any convex polygon.
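For the triangle case mentioned above, the cross/dot product approach reduces to three sign tests, equivalent to checking that the point's barycentric coordinates all share the same sign (a sketch; names are illustrative):

```python
def point_in_triangle(p, a, b, c):
    """True if p lies in triangle abc (either winding order), using the
    sign of the 2D cross product to test which side of each edge p is on."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    # Inside (or on an edge) iff p is on the same side of all three edges
    return not (has_neg and has_pos)
```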
https://en.wikipedia.org/wiki/Point_in_polygon
A point mutation is a genetic mutation where a single nucleotide base is changed, inserted or deleted from a DNA or RNA sequence of an organism's genome. [ 1 ] Point mutations have a variety of effects on the downstream protein product; these consequences are moderately predictable from the specifics of the mutation, and can range from no effect (e.g. synonymous mutations) to deleterious effects (e.g. frameshift mutations) on protein production, composition, and function. Point mutations usually take place during DNA replication, which occurs when one double-stranded DNA molecule gives rise to two single strands of DNA, each of which is a template for the creation of its complementary strand. Changing a single purine or pyrimidine may change the amino acid that the nucleotides code for. Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention. Point mutations can occur in several ways. First, ultraviolet (UV) light and higher-frequency light have ionizing capability, which can damage DNA. Second, reactive oxygen molecules with free radicals, a byproduct of cellular metabolism, can also be very harmful to DNA; these radicals can lead to both single-stranded and double-stranded DNA breaks. Third, the bonds in DNA eventually degrade, posing a further threat to the integrity of the DNA. Finally, replication errors can lead to substitution, insertion, or deletion mutations.
In 1959 Ernst Freese coined the terms "transitions" and "transversions" to categorize different types of point mutations. [ 2 ] [ 3 ] Transitions are replacements of a purine base with another purine, or of a pyrimidine with another pyrimidine. Transversions are replacements of a purine with a pyrimidine or vice versa. There is a systematic difference in mutation rates for transitions (alpha) and transversions (beta); transition mutations are about ten times more common than transversions. Nonsense mutations include stop-gain and start-loss. Stop-gain is a mutation that results in a premature termination codon (a stop was gained), which signals the end of translation. This interruption causes the protein to be abnormally shortened. The number of amino acids lost determines the impact on the protein's functionality and whether it will function at all. [ 4 ] Stop-loss is a mutation in the original termination codon (a stop was lost), resulting in abnormal extension of a protein's carboxyl terminus. Start-gain creates an AUG start codon upstream of the original start site. If the new AUG is near the original start site, in frame within the processed transcript and downstream of a ribosomal binding site, it can be used to initiate translation. The likely effect is additional amino acids added to the amino terminus of the original protein. Frame-shift mutations are also possible in start-gain mutations, but typically do not affect translation of the original protein. Start-loss is a point mutation in a transcript's AUG start codon, resulting in the reduction or elimination of protein production. Missense mutations code for a different amino acid. A missense mutation changes a codon so that a different protein is created, a non-synonymous change. [ 4 ] Conservative mutations result in an amino acid change; however, the properties of the amino acid remain the same (e.g., hydrophobic, hydrophilic, etc.).
At times, a change to one amino acid in the protein is not detrimental to the organism as a whole. Most proteins can withstand one or two point mutations before their function changes. Non-conservative mutations result in an amino acid change whose properties differ from those of the wild type. The protein may lose its function, which can result in a disease in the organism. For example, sickle-cell disease is caused by a single point mutation (a missense mutation) in the beta-hemoglobin gene that converts a GAG codon into GUG, which encodes the amino acid valine rather than glutamic acid. The protein may also exhibit a "gain of function" or become activated, as is the case with the mutation changing a valine to glutamic acid in the BRAF gene; this leads to activation of the RAF protein, which causes unlimited proliferative signalling in cancer cells. [ 5 ] These are both examples of a non-conservative (missense) mutation. Silent mutations code for the same amino acid (a " synonymous substitution "). A silent mutation does not affect the functioning of the protein: a single nucleotide can change, but the new codon specifies the same amino acid, so the protein is unchanged. This type of change is called a synonymous change, since the old and new codon code for the same amino acid. This is possible because 64 codons specify only 20 amino acids. Different codons can lead to differential protein expression levels, however. [ 4 ] Sometimes the term point mutation is used to describe insertions or deletions of a single base pair, which have a more adverse effect on the synthesized protein because the nucleotides are still read in triplets, but in different frames: a mutation called a frameshift mutation. [ 4 ] Point mutations that occur in non-coding sequences are most often without consequences, although there are exceptions. If the mutated base pair is in the promoter sequence of a gene, then the expression of the gene may change.
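The classes of substitution described above can be illustrated with a short sketch. The codon table here is deliberately partial and the function names are hypothetical; GAG → GUG is the sickle-cell example from the text:

```python
# Transitions swap within a chemical class; transversions swap across classes.
PURINES = {"A", "G"}

def substitution_type(ref, alt):
    """Classify a single-base substitution as transition or transversion."""
    if ref == alt:
        raise ValueError("not a substitution")
    same_class = (ref in PURINES) == (alt in PURINES)
    return "transition" if same_class else "transversion"

# Minimal RNA codon table (partial, for illustration only).
CODON = {"GAG": "Glu", "GAA": "Glu", "GUG": "Val", "UAG": "STOP"}

def classify_codon_change(ref, alt):
    """Classify a single-codon substitution as silent, missense, or nonsense."""
    before, after = CODON[ref], CODON[alt]
    if before == after:
        return "silent"      # synonymous: same amino acid
    if after == "STOP":
        return "nonsense"    # premature termination codon (stop-gain)
    return "missense"        # different amino acid
```

Here `classify_codon_change("GAG", "GUG")` reproduces the Glu → Val missense change behind hemoglobin S.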
Also, if the mutation occurs in the splicing site of an intron, it may interfere with correct splicing of the transcribed pre-mRNA. By altering just one amino acid, the entire peptide may change, thereby changing the entire protein; the new protein is called a protein variant. If the original protein functions in cellular reproduction, this single point mutation can change the entire process of cellular reproduction for the organism. Point germline mutations can lead to beneficial as well as harmful traits or diseases, leading to adaptations based on the environment where the organism lives. An advantageous mutation can create an advantage for that organism and lead to the trait's being passed down from generation to generation, improving and benefiting the entire population. The scientific theory of evolution is greatly dependent on point mutations in cells. The theory explains the diversity and history of living organisms on Earth. In relation to point mutations, it states that beneficial mutations allow the organism to thrive and reproduce, thereby passing its positively affected mutated genes on to the next generation. On the other hand, harmful mutations cause the organism to die or be less likely to reproduce, in a phenomenon known as natural selection. There are different short-term and long-term effects that can arise from mutations. A short-term effect is a halting of the cell cycle at numerous points: for example, a codon coding for the amino acid glycine may be changed to a stop codon, causing the proteins that should have been produced to be truncated and unable to complete their intended tasks. Because mutations can affect the DNA and thus the chromatin, they can also prevent mitosis from occurring due to the lack of a complete chromosome. Problems can also arise during transcription and replication of DNA. All of these prevent the cell from reproducing and thus lead to the death of the cell.
Long-term effects can be a permanent change to a chromosome, which can lead to a mutation. These mutations can be either beneficial or detrimental; cancer is an example of how they can be detrimental. [ 6 ] Other effects of point mutations, or single nucleotide polymorphisms in DNA, depend on the location of the mutation within the gene. For example, if the mutation occurs in the coding region of the gene, the amino acid sequence of the encoded protein may be altered, causing a change in the function, localization, or stability of the protein or protein complex. Many methods have been proposed to predict the effects of missense mutations on proteins. Machine learning algorithms train their models to distinguish known disease-associated from neutral mutations, whereas other methods do not explicitly train their models; almost all methods, however, exploit evolutionary conservation, assuming that changes at conserved positions tend to be more deleterious. While the majority of methods provide a binary classification of mutation effects into damaging and benign, a new level of annotation is needed to explain why and how these mutations damage proteins. [ 7 ] Moreover, if the mutation occurs in the region of the gene where transcriptional machinery binds, it can affect the binding of transcription factors, because the short nucleotide sequences recognized by the transcription factors will be altered. Mutations in this region can affect the rate and efficiency of gene transcription, which in turn can alter levels of mRNA and, thus, protein levels in general. Point mutations can have several effects on the behavior and reproduction of a protein depending on where the mutation occurs in the amino acid sequence of the protein. If the mutation occurs in the region of the gene that is responsible for coding for the protein, the amino acid may be altered.
This slight change in the sequence of amino acids can cause a change in the protein's function, in its activation (how it binds with a given enzyme), in where the protein will be located within the cell, or in the amount of free energy stored within the protein. If the mutation occurs in the region of the gene where transcriptional machinery binds, it can affect the way in which transcription factors bind: transcription factors recognize short nucleotide sequences, and a mutation in this region may alter those sequences. Mutations in this region can affect the efficiency of gene transcription, which controls both the levels of mRNA and overall protein levels. [ 8 ] Point mutations in multiple tumor suppressor proteins cause cancer. For instance, point mutations in Adenomatous Polyposis Coli promote tumorigenesis. [ 9 ] A novel assay, Fast parallel proteolysis (FASTpp), might help swift screening of specific stability defects in individual cancer patients. [ 10 ] Neurofibromatosis is caused by point mutations in the Neurofibromin 1 [ 11 ] [ 12 ] or Neurofibromin 2 gene. [ 13 ] Sickle-cell anemia is caused by a point mutation in the β-globin chain of hemoglobin, causing the hydrophilic amino acid glutamic acid to be replaced with the hydrophobic amino acid valine at the sixth position. The β-globin gene is found on the short arm of chromosome 11. The association of two wild-type α-globin subunits with two mutant β-globin subunits forms hemoglobin S (HbS). Under low-oxygen conditions (being at high altitude, for example), the absence of a polar amino acid at position six of the β-globin chain promotes the non-covalent polymerisation (aggregation) of hemoglobin, which distorts red blood cells into a sickle shape and decreases their elasticity.
[ 14 ] Hemoglobin is a protein found in red blood cells, and is responsible for the transportation of oxygen through the body. [ 15 ] There are two subunits that make up the hemoglobin protein: beta-globins and alpha-globins . [ 16 ] Beta-hemoglobin is created from the genetic information on the HBB, or "hemoglobin, beta" gene found on chromosome 11p15.5. [ 17 ] A single point mutation in this polypeptide chain, which is 147 amino acids long, results in the disease known as Sickle Cell Anemia. [ 18 ] Sickle-cell anemia is an autosomal recessive disorder that affects 1 in 500 African Americans, and is one of the most common blood disorders in the United States. [ 17 ] The single replacement of the sixth amino acid in the beta-globin, glutamic acid, with valine results in deformed red blood cells. These sickle-shaped cells cannot carry nearly as much oxygen as normal red blood cells and they get caught more easily in the capillaries, cutting off blood supply to vital organs. The single nucleotide change in the beta-globin means that even the smallest of exertions on the part of the carrier results in severe pain and even heart attack. Below is a chart depicting the first thirteen amino acids in the normal and abnormal sickle cell polypeptide chain. [ 18 ] The cause of Tay–Sachs disease is a genetic defect that is passed from parent to child. This genetic defect is located in the HEXA gene, which is found on chromosome 15. The HEXA gene makes part of an enzyme called beta-hexosaminidase A, which plays a critical role in the nervous system. This enzyme helps break down a fatty substance called GM2 ganglioside in nerve cells. Mutations in the HEXA gene disrupt the activity of beta-hexosaminidase A, preventing the breakdown of the fatty substances. As a result, the fatty substances accumulate to deadly levels in the brain and spinal cord. The buildup of GM2 ganglioside causes progressive damage to the nerve cells. 
This is the cause of the signs and symptoms of Tay-Sachs disease. [ 19 ] In molecular biology, repeat-induced point mutation or RIP is a process by which DNA accumulates G:C to A:T transition mutations. Genomic evidence indicates that RIP occurs or has occurred in a variety of fungi, [ 20 ] while experimental evidence indicates that RIP is active in Neurospora crassa, [ 21 ] Podospora anserina, [ 22 ] Magnaporthe grisea, [ 23 ] Leptosphaeria maculans, [ 24 ] Gibberella zeae, [ 25 ] Nectria haematococca [ 26 ] and Paecilomyces variotii. [ 27 ] In Neurospora crassa, sequences mutated by RIP are often methylated de novo. [ 21 ] RIP occurs during the sexual stage in haploid nuclei after fertilization but prior to meiotic DNA replication. [ 21 ] In Neurospora crassa, repeat sequences at least 400 base pairs in length are vulnerable to RIP, and repeats with as little as 80% nucleotide identity may also be subject to it. Though the exact mechanisms of repeat recognition and mutagenesis are poorly understood, RIP results in repeated sequences undergoing multiple transition mutations. The RIP mutations do not seem to be limited to repeated sequences: for example, in the phytopathogenic fungus L. maculans, RIP mutations are found in single copy regions adjacent to the repeated elements. These regions are either non-coding regions or genes encoding small secreted proteins, including avirulence genes. The degree of RIP within these single copy regions was proportional to their proximity to repetitive elements. [ 28 ] Rep and Kistler have speculated that the presence of highly repetitive regions containing transposons may promote mutation of resident effector genes. [ 29 ] So the presence of effector genes within such regions is suggested to promote their adaptation and diversification when exposed to strong selection pressure. [ 30 ] As RIP mutation is traditionally observed to be restricted to repetitive regions and not single copy regions, Fudal et al.
[ 31 ] suggested that leakage of RIP mutation might occur within a relatively short distance of a RIP-affected repeat. Indeed, this has been reported in N. crassa, in which leakage of RIP was detected in single copy sequences at least 930 bp from the boundary of neighbouring duplicated sequences. [ 32 ] Elucidating the mechanism of detection of repeated sequences leading to RIP may help explain how the flanking sequences are also affected. RIP causes G:C to A:T transition mutations within repeats; however, the mechanism that detects the repeated sequences is unknown. RID is the only known protein essential for RIP. It is a DNA methyltransferase-like protein that, when mutated or knocked out, results in loss of RIP. [ 33 ] Deletion of the rid homolog in Aspergillus nidulans, dmtA, results in loss of fertility, [ 34 ] while deletion of the rid homolog in Ascobolus immersus, masc1, results in fertility defects and loss of methylation induced premeiotically (MIP). [ 35 ] RIP is believed to have evolved as a defense mechanism against transposable elements, which resemble parasites by invading and multiplying within the genome. RIP creates multiple missense and nonsense mutations in the coding sequence. This hypermutation of G-C to A-T in repetitive sequences eliminates functional gene products of the sequence (if there were any to begin with). In addition, many of the C-bearing nucleotides become methylated, thus decreasing transcription. Because RIP is so efficient at detecting and mutating repeats, biologists working on Neurospora crassa have used it as a tool for mutagenesis. A second copy of a single-copy gene is first transformed into the genome. The fungus must then mate and go through its sexual cycle to activate the RIP machinery.
Many different mutations within the duplicated gene are obtained from even a single fertilization event, so that inactivated alleles, usually due to nonsense mutations, as well as alleles containing missense mutations, can be obtained. [ 36 ] The cellular reproduction process of meiosis was discovered by Oscar Hertwig in 1876. Mitosis was discovered several years later, in 1882, by Walther Flemming. Hertwig studied sea urchins and noticed that each egg contained one nucleus prior to fertilization and two nuclei after. This discovery showed that a single spermatozoon could fertilize an egg, and therefore demonstrated the process of meiosis. Hermann Fol continued Hertwig's research by testing the effects of injecting several spermatozoa into an egg, and found that the process did not work with more than one spermatozoon. [ 37 ] Flemming began his research on cell division in 1868; the study of cells was an increasingly popular topic in this time period. By 1873, Schneider had already begun to describe the steps of cell division, and Flemming furthered this description in 1874 and 1875 as he explained the steps in more detail. He also disputed Schneider's finding that the nucleus separated into rod-like structures, suggesting that the nucleus actually separated into threads that in turn separated. Flemming concluded that cells replicate through cell division, specifically mitosis. [ 38 ] Matthew Meselson and Franklin Stahl are credited with demonstrating how DNA replicates. Watson and Crick acknowledged that the structure of DNA indicated that there is some form of replicating process; however, little research was done on this aspect of DNA until after Watson and Crick. Researchers considered all possible methods of determining the replication process of DNA, but none were successful until Meselson and Stahl, who introduced a heavy isotope into some DNA and traced its distribution.
Through this experiment, Meselson and Stahl were able to prove that DNA replicates semi-conservatively. [ 39 ]
https://en.wikipedia.org/wiki/Point_mutation
A point of delivery ( PoD ) is "a module of network, compute, storage, and application components that work together to deliver networking services. The PoD is a repeatable design pattern , and its components maximize the modularity, scalability, and manageability of data centers." [ 1 ] The modular design principle has been applied to telephone and data networks, for instance through a repeatable node design describing the configuration of equipment housed in point of presence facilities. The term is similarly used in cable video networks, [ 2 ] to describe the modular component that delivers video service to a subscriber. The distinction of a PoD versus other design patterns is that it is a deployable module which delivers a service . The PoD design pattern is especially important in service provider infrastructure , for instance in datacenters supporting cloud computing services , in order to sustain scalability as usage grows.
https://en.wikipedia.org/wiki/Point_of_delivery_(networking)
A point of interest ( POI ) is a specific point location that someone may find useful or interesting. An example is a point on the Earth representing the location of the Eiffel Tower, or a point on Mars representing the location of its highest mountain, Olympus Mons. Most consumers use the term when referring to hotels, campsites, fuel stations or any other categories used in modern automotive navigation systems. Users of a mobile device can be provided with geolocation and time-aware POI services [ 1 ] that recommend nearby geolocations with temporal relevance (e.g. POIs for special services in a ski resort may be available only in winter). The term is widely used in cartography, especially in electronic variants including GIS, and in GPS navigation software, where the synonym waypoint is common. A GPS point of interest specifies, at minimum, the latitude and longitude of the POI, assuming a certain map datum. A name or description for the POI is usually included, and other information such as altitude or a telephone number may also be attached. GPS applications typically use icons to graphically represent different categories of POI on a map. [ 2 ] A region of interest (ROI) and a volume of interest (VOI) are similar in concept, denoting a region or a volume (which may contain various individual POIs). In medical fields such as histology, pathology, and histopathology, points of interest are selected from the general background in a field of view; for example, among hundreds of normal cells, the pathologist may find 3 or 4 neoplastic cells that stand out from the others upon staining. Digital maps for modern GPS devices typically include a basic selection of POI for the map area. [ 3 ] However, websites exist that specialize in the collection, verification, management and distribution of POI which end-users can load onto their devices to replace or supplement the existing POI.
[ 4 ] [ 5 ] While some of these websites are generic, and will collect and categorize POI for any interest, others are more specialized in a particular category (such as speed cameras) or GPS device (e.g. TomTom / Garmin ). End-users also have the ability to create their own custom collections. Commercial POI collections, especially those that ship with digital maps or that are sold on a subscription basis, are usually protected by copyright. However, there are also many websites from which royalty-free POI collections can be obtained, e.g. SPOI - Smart Points of Interest, which is distributed under the ODbL license. [ 6 ] The applications for POI are extensive. As GPS-enabled devices, as well as software applications that use digital maps, become more available, the applications for POI are also expanding. Newer digital cameras, for example, can automatically tag a photograph using Exif with the GPS location where the picture was taken; these pictures can then be overlaid as POI on a digital map or satellite image such as Google Earth. Geocaching applications are built around POI collections. In vehicle tracking systems, POIs are used to mark destination points and offices so that users of GPS tracking software can easily monitor the position of vehicles relative to POIs. Many different file formats, including proprietary formats, are used by different vendors and devices to store and exchange POI data (and in some cases, also navigation tracks), even where the same underlying WGS84 system is used. Third party and vendor-supplied utilities are available to convert point of interest data [ 7 ] between different formats to allow them to be exchanged between otherwise incompatible GPS devices or systems.
[ 8 ] Furthermore, many applications will support the generic ASCII text file format, although this format is more prone to error due to its loose structure as well as the many ways in which GPS co-ordinates can be represented (e.g. decimal vs degree/minute/second). POI format converters are often named after the file formats they convert from and to, such as KML2GPX (which converts KML to GPX) and KML2OV2 (which converts KML to OV2).
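The interchange issues mentioned above (serializing a POI into an XML-based format, and converting between coordinate representations) can be sketched as follows. The `wpt` element with `lat`/`lon` attributes and a `name` child follows the GPX 1.1 schema; the sample coordinates and helper names are illustrative:

```python
import xml.etree.ElementTree as ET

def poi_to_gpx(lat, lon, name):
    """Serialize one POI as a minimal GPX 1.1 document."""
    gpx = ET.Element("gpx", version="1.1", creator="example")
    wpt = ET.SubElement(gpx, "wpt", lat=f"{lat:.6f}", lon=f"{lon:.6f}")
    ET.SubElement(wpt, "name").text = name
    return ET.tostring(gpx, encoding="unicode")

def to_dms(dd):
    """Decimal degrees -> (degrees, minutes, seconds), sign kept on degrees.
    (A fuller version would carry the sign separately so that values
    between -1 and 0 degrees are not mishandled.)"""
    sign = -1 if dd < 0 else 1
    dd = abs(dd)
    d = int(dd)
    m = int((dd - d) * 60)
    s = (dd - d) * 3600 - m * 60
    return sign * d, m, s

def to_decimal(d, m, s):
    """(degrees, minutes, seconds) -> decimal degrees."""
    sign = -1 if d < 0 else 1
    return sign * (abs(d) + m / 60 + s / 3600)
```

Converters such as those named above perform the same kind of mapping between formats, with the added complication of each vendor's field conventions.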
https://en.wikipedia.org/wiki/Point_of_interest
Point of use water filters are used in individual houses or offices to provide filtration of potable water close to the point of consumption. [ 1 ] The related topic, point-of-use water treatment, describes full-scale water treatment options and technologies designed to serve communities when municipal water treatment fails or is unavailable. Probably the best known POU water filters are those installed in the kitchen plumbing just before the tap, and jug filters, where water is passed through a filter in a specially constructed plastic jug. [ 2 ] Such filters are typically based on ion exchange resins designed to remove calcium ions, to reduce water hardness, and to remove any toxic heavy metal ions such as lead. Many filters also incorporate activated charcoal to eliminate excess chlorine and to reduce unwanted tastes and odours. They may also be effective in reducing concentrations of halogenated organic species that can be created through the halogenation of organic-rich waters as part of the disinfection process at the municipal water treatment facility. [ 3 ] Filters incorporating reverse osmosis are also available and can be effective in removing many pathogenic organisms. Point of use filters have limited capacity to modify water chemistry and typically require that treatment cartridges are replaced at regular intervals, especially in hard water areas. POU filters are generally efficient at softening hard water, reducing lime scale in kitchen utensils and on shower heads, and reducing water smear on shower enclosures, provided that the treatment cartridges are regularly replaced. They can also be efficient in removing heavy metal ions where these are present. However, modern municipal water treatment and modern plumbing standards mean that toxic metal concentrations are very rarely a significant issue.
Even where such ions are detectable, they may be at concentrations below the effective treatment range of the filter. Although the technology in POU filters is generally robust, the limited contact time with the water stream and the technological limitations of the devices mean that the level of performance is not as great as the user may expect. Per- and polyfluoroalkyl substances (PFAS) may occur in some water supplies because of contamination of the water catchment, [ 1 ] but a number of POU filters only offer a reduction of concentration down to 70 ng/L, whereas the limit on a municipal water treatment plant may be as low as 20 ng/L. In such cases the POU filter will be of no benefit. All point of use filters incorporate technology that requires periodic exchange or replacement. For example, ion exchange units become exhausted and no longer work efficiently, and activated carbon units become saturated with organic species and can no longer perform as designed. Because the levels of treatment may be undetectable by the user, many manufacturers recommend replacement of units on a regular basis. The cost of such units can be significant. [ 4 ] Many filters are designed to remove chlorine from water to improve its smell and taste. Removing the chlorine, however, can allow bacteria and other microorganisms to colonise parts of the filter downstream of the chlorine removal, as well as the stem of the tap or the shower hose and head. Such colonisation may pose health risks not present in the unfiltered water. [ 4 ] Three organizations are accredited by the American National Standards Institute, and each one of them certifies products using American National Standards Institute/National Science Foundation standards.
Each American National Standards Institute/National Science Foundation standard requires verification of contaminant reduction performance claims, an evaluation of the unit, including its materials and structural integrity, and a review of the product labels and sales literature. Each organization certifies that home water treatment units meet or exceed American National Standards Institute/National Science Foundation and Environmental Protection Agency drinking water standards. American National Standards Institute/National Science Foundation standards are issued in two different sets: one for health concerns, such as removal of specific contaminants (Standard 53, Health Effects), and one for aesthetic concerns, such as improving the taste or appearance of water (Aesthetic Effects). Certification from these organizations will specify one or both of these specific standards. NSF International, as it is now known, started out as the National Sanitation Foundation in 1944 at the University of Michigan School of Public Health. [ 5 ] The NSF's water treatment Device Certification Program requires extensive product testing and unannounced audits of production facilities. One goal of this not-for-profit organization is to provide assurance to consumers that the water treatment devices they are purchasing meet the design, material, and performance requirements of national standards. [ 5 ] Underwriters Laboratories, Inc., is an independent, accredited testing and certification organization that certifies home water treatment units which meet or exceed EPA and American National Standards Institute/National Science Foundation drinking water standards for contaminant reduction, aesthetic concerns, structural integrity, and materials safety. The Water Quality Association is a trade organization that tests water treatment equipment, and awards its Gold Seal to systems that meet or exceed ANSI/NSF standards for contaminant reduction performance, structural integrity, and materials safety.
[ 6 ] Filters that use reverse osmosis , those labeled as “absolute one micron filters,” or those labeled as certified by an American National Standards Institute (ANSI)-accredited organization to American National Standards Institute/National Science Foundation Standard 53 for “Cyst Removal” provide the greatest assurance of removing Cryptosporidium . As with all filters, follow the manufacturer's instructions for filter use and replacement. [ 7 ]
https://en.wikipedia.org/wiki/Point_of_use_water_filter
The point of zero charge (pzc) is generally described as the pH at which the net electrical charge of the particle surface (i.e. the adsorbent 's surface) is equal to zero. The concept was introduced in studies of colloidal flocculation to explain why pH affects the phenomenon. [ 1 ] A related concept in electrochemistry is the electrode potential at the point of zero charge. Generally, the pzc in electrochemistry is the value of the negative decimal logarithm of the activity of the potential-determining ion in the bulk fluid. [ 2 ] The pzc is of fundamental importance in surface science . For example, in the field of environmental science , it determines how easily a substrate is able to adsorb potentially harmful ions. It also has countless applications in the technology of colloids , e.g., flotation of minerals. The pzc value has therefore been examined in many applications of adsorption in environmental science. [ 3 ] [ 4 ] The pzc value is typically obtained by titration, and several titration methods have been developed. [ 5 ] [ 6 ] Related values associated with soil characteristics exist alongside the pzc value, including the zero point of charge (zpc), the point of zero net charge (pznc), etc. [ 7 ] The point of zero charge is the pH value at which the net surface charge of an adsorbent is equal to zero. The concept came to attention with growing interest in the pH of the solution during adsorption experiments, [ 1 ] because the adsorption of some substances is strongly dependent on pH. The pzc value is determined by the characteristics of the adsorbent. For example, the surface charge of an adsorbent is determined by the ions that lie on the surface of the particle structure. At a lower pH, hydrogen ions (protons, H + ) are adsorbed in preference to other cations (the adsorbate), so those cations are adsorbed less than they would be on a negatively charged particle.
On the other hand, if the surface is positively charged, anions are adsorbed less strongly as the pH increases. From the point of view of the adsorbent, if the pH of the solution is below the pzc value, the surface charge of the adsorbent is positive, so that anions can be adsorbed. Conversely, if the pH is above the pzc value, the surface charge is negative, so that cations can be adsorbed. For example, the electrical charge on the surface of silver iodide (AgI) crystals can be determined by the concentration of iodide ions present in the solution above the crystals. The pzc of the AgI surface can then be described as a function of the concentration of I − in the solution (or as the negative decimal logarithm of this concentration, -log 10 [I – ] = p I − ). The pzc is the same as the isoelectric point (iep) if there is no adsorption of ions other than the potential-determining H + /OH − at the surface [ clarification needed ] . [ 8 ] This is often the case for pure ("pristine surface") oxides in suspension in water. In the presence of specific adsorption, the pzc and the isoelectric point generally have different values. The pzc is typically obtained by acid-base titrations of colloidal dispersions while monitoring the electrophoretic mobility of the particles and the pH of the suspension. Several titrations are required to distinguish the pzc from the iep, using different supporting electrolytes (including varying the electrolyte ionic strength ). Once satisfactory curves are obtained (acid/base amount—pH, and pH— zeta potential ), the pzc is established as the common intersection point (cip) of the lines. Therefore, the pzc is also sometimes referred to as the cip.
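The titration procedure just described can be sketched numerically: given pH and zeta-potential measurements, the pH at which the zeta potential crosses zero estimates the iep, which coincides with the pzc in the absence of specific adsorption. The data and function names below are illustrative only, not a standard API.

```python
# Numerical sketch: estimate the iep from hypothetical pH /
# zeta-potential titration data by linear interpolation at the
# zero crossing of the zeta potential.

def estimate_iep(ph, zeta):
    """Return the pH at which the zeta potential crosses zero."""
    pairs = list(zip(ph, zeta))
    for (p1, z1), (p2, z2) in zip(pairs, pairs[1:]):
        if z1 == 0:
            return p1
        if z1 * z2 <= 0:  # sign change between consecutive samples
            return p1 + (p2 - p1) * (-z1) / (z2 - z1)
    raise ValueError("no zero crossing in the titration data")

# Hypothetical curve for an oxide suspension (values illustrative only)
ph = [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
zeta = [35.0, 22.0, 10.0, -2.0, -15.0, -28.0]
iep = estimate_iep(ph, zeta)  # equals the pzc absent specific adsorption
```

In practice the cip is found as the common intersection of several such curves at different ionic strengths; this single-curve interpolation is the simplest special case.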
Besides pzc, iep, and cip, there are also numerous other terms used in the literature, usually expressed as initialisms , with identical or (confusingly) near-identical meaning: zero point of charge (zpc), point of zero net charge (pznc), point of zero net proton charge (pznpc), pristine point of zero charge (ppzc), point of zero salt effect (pzse), zero point of titration (zpt) of colloidal dispersion, and isoelectric point of the solid (ieps) [ 9 ] and point of zero surface tension (pzst [ 10 ] or pzs [ 11 ] ). In electrochemistry, the electrode -electrolyte interface is generally charged. If the electrode is polarizable , then its surface charge depends on the electrode potential . IUPAC defines [ 2 ] the potential at the point of zero charge as the potential of an electrode (against a defined reference electrode ) at which one of the charges defined is zero. The potential of zero charge is used for determination of the absolute electrode potential in a given electrolyte . IUPAC also defines the potential difference with respect to the potential of zero charge as: where: The structure of electrolyte at the electrode surface can also depend on the surface charge, with a change around the pzc potential. For example, on a platinum electrode, water molecules have been reported to be weakly hydrogen-bonded with "oxygen-up" orientation on negatively charged surfaces, and strongly hydrogen-bonded with nearly flat orientation at positively charged surfaces. [ 12 ] At pzc, the colloidal system exhibits zero zeta potential (that is, the particles remain stationary in an electric field ), minimum stability (exhibits maximum coagulation or flocculation rate), maximum solubility of the solid phase, maximum viscosity of the dispersion, and other peculiarities. [ citation needed ] In the field of environmental science, adsorption is involved in many techniques that can eliminate pollutants and governs the concentration of chemicals in soils and/or atmosphere. 
When studying pollutant degradation or a sorption process, it is important to examine the pzc value related to adsorption. For example, natural and organic substrates including wood ash, sawdust, etc. are used as adsorbents to eliminate harmful heavy metals such as arsenic, cobalt, and mercury ions in contaminated neutral drainage (CND), a passive reactor that enables metal adsorption with low-cost materials. The pzc values of the organic substrates were therefore evaluated to optimize the selection of materials in CND. [ 3 ] Another example is the emission of nitrous acid , which affects the atmosphere's oxidative capacity. Different soil pH values lead to different surface charges on minerals, so the emission of nitrous acid varies, further impacting the biological cycles involving nitrous acid species. [ 4 ]
https://en.wikipedia.org/wiki/Point_of_zero_charge
A point particle , ideal particle [ 1 ] or point-like particle (often spelled pointlike particle ) is an idealization of particles heavily used in physics . Its defining feature is that it lacks spatial extension ; being dimensionless, it does not take up space . [ 2 ] A point particle is an appropriate representation of any object whenever its size, shape, and structure are irrelevant in a given context. For example, from far enough away, any finite-size object will look and behave as a point-like object. Point masses and point charges, discussed below, are two common cases. When a point particle has an additive property, such as mass or charge, it is often represented mathematically by a Dirac delta function . In classical mechanics there is usually no concept of rotation of point particles about their "center". In quantum mechanics , the concept of a point particle is complicated by the Heisenberg uncertainty principle , because even an elementary particle , with no internal structure, occupies a nonzero volume. For example, the atomic orbital of an electron in the hydrogen atom occupies a volume of ~ 10 −30 m 3 . There is nevertheless a distinction between elementary particles such as electrons or quarks , which have no known internal structure, and composite particles such as protons and neutrons, whose internal structures are made up of quarks. Elementary particles are sometimes called "point particles" in reference to their lack of internal structure, but this is in a different sense than that discussed herein. Point mass ( pointlike mass ) is the concept, for example in classical physics , of a physical object (typically matter ) that has nonzero mass, and yet explicitly and specifically is (or is being thought of or modeled as) infinitesimal (infinitely small) in its volume or linear dimensions . In the theory of gravity , extended objects can behave as point-like even in their immediate vicinity.
For example, spherical objects interacting in 3-dimensional space whose interactions are described by Newtonian gravitation behave, as long as they do not touch each other, as if all their matter were concentrated at their centers of mass . [ 3 ] In fact, this is true for all fields described by an inverse square law . [ 4 ] [ 5 ] Similarly to point masses, in electromagnetism physicists discuss a point charge , a point particle with a nonzero electric charge . [ 6 ] The fundamental equation of electrostatics is Coulomb's law , which describes the electric force between two point charges. Another result, Earnshaw's theorem , states that a collection of point charges cannot be maintained in a static equilibrium configuration solely by the electrostatic interaction of the charges. The electric field associated with a classical point charge increases to infinity as the distance from the point charge decreases towards zero, which suggests that the model is no longer accurate in this limit. In quantum mechanics , there is a distinction between an elementary particle (also called a "point particle") and a composite particle . An elementary particle, such as an electron , quark , or photon , is a particle with no known internal structure. A composite particle, such as a proton or neutron , by contrast, has an internal structure. However, neither elementary nor composite particles are spatially localized, because of the Heisenberg uncertainty principle . The particle wavepacket always occupies a nonzero volume. For example, see atomic orbital : the electron is an elementary particle, but its quantum states form three-dimensional patterns. Nevertheless, there is good reason that an elementary particle is often called a point particle. Even if an elementary particle has a delocalized wavepacket, the wavepacket can be represented as a quantum superposition of quantum states wherein the particle is exactly localized.
Moreover, the interactions of the particle can be represented as a superposition of interactions of individual states which are localized. This is not true for a composite particle, which can never be represented as a superposition of exactly-localized quantum states. It is in this sense that physicists can discuss the intrinsic "size" of a particle: The size of its internal structure, not the size of its wavepacket. The "size" of an elementary particle, in this sense, is exactly zero. For example, for the electron, experimental evidence shows that the size of an electron is less than 10 −18 m . [ 7 ] This is consistent with the expected value of exactly zero. (This should not be confused with the classical electron radius , which, despite the name, is unrelated to the actual size of an electron.)
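The inverse-square behavior of the interaction between point charges, noted above for Coulomb's law, can be illustrated with a short sketch. The constant and charge values are standard SI figures, but the function names and the 1 nm separation are made up for this example.

```python
# Illustrative sketch of Coulomb's law for two point charges.

K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between point charges q1 and
    q2 (coulombs) separated by r (meters): F = K * q1 * q2 / r**2."""
    if r <= 0:
        raise ValueError("distinct point charges require r > 0")
    return K * q1 * q2 / r**2

e = 1.602176634e-19            # elementary charge, C
f = coulomb_force(e, e, 1e-9)  # two elementary charges 1 nm apart
# Doubling the separation quarters the force (inverse-square law):
ratio = f / coulomb_force(e, e, 2e-9)
```

The divergence of the field as r approaches zero, mentioned in the article, shows up here as the guard against r ≤ 0: the classical point-charge model stops being meaningful in that limit.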
https://en.wikipedia.org/wiki/Point_particle
Point plotting is an elementary mathematical skill required in analytic geometry . Invented by René Descartes and originally used to locate positions on military maps , this skill is now assumed of everyone who wants to locate grid 7A on any map. Using point plotting, one associates an ordered pair of real numbers ( x , y ) with a point in the plane in a one-to-one manner. As a result, one obtains the 2-dimensional Cartesian coordinate system . To be able to plot points, one first needs to decide on a point in the plane, which will be called the origin , and on a pair of perpendicular lines, called the x and y axes, as well as a preferred direction on each of the lines. Usually one chooses the x axis pointing right and the y axis pointing up, and these are named the positive directions. One also picks a segment in the plane which is declared to be of unit length. Using rotated versions of this segment, one can measure distances along the x and y axes. Having the origin and the axes in place, given a pair ( x , y ) of real numbers, one considers the point on the x axis at distance | x | from the origin, along the positive direction if x ≥ 0 and the other direction otherwise. In the same way one picks the point on the y axis corresponding to the number y . The line parallel to the y axis going through the first point and the line parallel to the x axis going through the second point intersect at precisely one point, which is called the point with coordinates ( x , y ). This mathematics -related article is a stub . You can help Wikipedia by expanding it .
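The procedure described above — fix an origin and a pair of perpendicular axes, then locate each ordered pair (x, y) — can be sketched as a minimal text-based plotter. This is a simplified illustration restricted to integer coordinates; all names are made up for this example.

```python
# Sketch of point plotting on a character grid standing in for the
# plane, with the origin at the center of the grid.

def plot_points(points, width=21, height=11):
    """Render integer-coordinate points with the origin at the center,
    x increasing rightward and y increasing upward."""
    cx, cy = width // 2, height // 2
    grid = [[" "] * width for _ in range(height)]
    for r in range(height):
        grid[r][cx] = "|"            # the y axis
    for c in range(width):
        grid[cy][c] = "-"            # the x axis
    grid[cy][cx] = "+"               # the origin
    for x, y in points:
        col, row = cx + x, cy - y    # screen rows grow downward
        if 0 <= col < width and 0 <= row < height:
            grid[row][col] = "*"
    return "\n".join("".join(r) for r in grid)

print(plot_points([(3, 2), (-4, -1), (0, 4)]))
```

Each marked cell is exactly the intersection of the vertical line through x on the x axis and the horizontal line through y on the y axis, mirroring the construction in the article.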
https://en.wikipedia.org/wiki/Point_plotting
In probability and statistics , point process notation comprises the range of mathematical notation used to symbolically represent random objects known as point processes , which are used in related fields such as stochastic geometry , spatial statistics and continuum percolation theory and frequently serve as mathematical models of random phenomena, representable as points, in time, space or both. The notation varies due to the histories of certain mathematical fields and the different interpretations of point processes, [ 1 ] [ 2 ] [ 3 ] and borrows notation from mathematical areas of study such as measure theory and set theory . [ 1 ] The notation, as well as the terminology, of point processes depends on their setting and interpretation as mathematical objects which under certain assumptions can be interpreted as random sequences of points, random sets of points or random counting measures . [ 1 ] In some mathematical frameworks, a given point process may be considered as a sequence of points with each point randomly positioned in d -dimensional Euclidean space R d [ 1 ] as well as some other more abstract mathematical spaces . In general, whether or not a random sequence is equivalent to the other interpretations of a point process depends on the underlying mathematical space, but this holds true for the setting of finite-dimensional Euclidean space R d . [ 4 ] A point process is called simple if no two (or more) points coincide in location with probability one . Given that point processes are often simple and the order of the points does not matter, a collection of random points can be considered as a random set of points. [ 1 ] [ 5 ] The theory of random sets was independently developed by David Kendall and Georges Matheron .
In terms of being considered as a random set, a sequence of random points is a random closed set if the sequence has no accumulation points with probability one. [ 6 ] A point process is often denoted by a single letter, [ 1 ] [ 7 ] [ 8 ] for example N {\displaystyle {N}} , and if the point process is considered as a random set, then the corresponding notation: [ 1 ] is used to denote that a random point x {\displaystyle x} is an element of (or belongs to) the point process N {\displaystyle {N}} . The theory of random sets can be applied to point processes owing to this interpretation, which alongside the random sequence interpretation has resulted in a point process being written as: which highlights its interpretation as either a random sequence or random closed set of points. [ 1 ] Furthermore, sometimes an uppercase letter denotes the point process, while a lowercase letter denotes a point from the process, so, for example, the point x {\displaystyle \textstyle x} (or x i {\displaystyle \textstyle x_{i}} ) belongs to or is a point of the point process X {\displaystyle \textstyle X} , or with set notation, x ∈ X {\displaystyle \textstyle x\in X} . [ 8 ] To denote the number of points of N {\displaystyle {N}} located in some Borel set B {\displaystyle B} , it is sometimes written [ 7 ] where Φ ( B ) {\displaystyle \Phi (B)} is a random variable and # {\displaystyle \#} is a counting measure , which gives the number of points in some set. In this mathematical expression the point process is denoted by: On the other hand, the symbol: represents the number of points of N {\displaystyle {N}} in B {\displaystyle B} . In the context of random measures, one can write: to denote that there is the set B {\displaystyle B} that contains n {\displaystyle n} points of N {\displaystyle {N}} . In other words, a point process can be considered as a random measure that assigns some non-negative integer-valued measure to sets.
[ 1 ] This interpretation has motivated a point process being considered just another name for a random counting measure [ 9 ] : 106 and the techniques of random measure theory offering another way to study point processes, [ 1 ] [ 10 ] which also induces the use of the various notations used in integration and measure theory. [ a ] The different interpretations of point processes as random sets and counting measures are captured with the often used notation [ 1 ] [ 3 ] [ 8 ] [ 11 ] in which: Denoting the counting measure again with # {\displaystyle \#} , this dual notation implies: If f {\displaystyle f} is some measurable function on R d , then the sum of f ( x ) {\displaystyle f(x)} over all the points x {\displaystyle x} in N {\displaystyle {N}} can be written in a number of ways [ 1 ] [ 3 ] such as: which has the random sequence appearance, or with set notation as: or, equivalently, with integration notation as: which puts an emphasis on the interpretation of N {\displaystyle {N}} being a random counting measure. An alternative integration notation may be used to write this integral as: The dual interpretation of point processes is illustrated when writing the number of N {\displaystyle {N}} points in a set B {\displaystyle B} as: where the indicator function 1 B ( x ) = 1 {\displaystyle 1_{B}(x)=1} if the point x {\displaystyle x} lies in B {\displaystyle B} and zero otherwise, which in this setting is also known as a Dirac measure . [ 11 ] In this expression the random measure interpretation is on the left-hand side while the random set notation is used on the right-hand side. The average or expected value of a sum of functions over a point process is written as: [ 1 ] [ 3 ] where (in the random measure sense) P {\displaystyle P} is an appropriate probability measure defined on the space of counting measures N {\displaystyle {\textbf {N}}} .
The expected value of N ( B ) {\displaystyle {N}(B)} can be written as: [ 1 ] which is also known as the first moment measure of N {\displaystyle {N}} . The expectation of such a random sum, known as a shot noise process in the theory of point processes, can be calculated with Campbell's theorem . [ 2 ] Point processes are employed in other mathematical and statistical disciplines, hence the notation may be used in fields such as stochastic geometry , spatial statistics or continuum percolation theory , and areas which use the methods and theory from these fields.
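The dual view described above — a point process as a random set of points and as a counting measure N(B) — can be made concrete with a small simulation. The sketch below samples a homogeneous Poisson point process on the unit square and counts the points falling in a Borel set B (here an axis-aligned box); the function names are illustrative, not a standard API.

```python
# Sketch: a homogeneous Poisson point process on the unit square,
# viewed both as a random set of points and via its counting measure.
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_process(lam, rng):
    """Points of a Poisson process of intensity lam on [0, 1)^2,
    returned as a list — the random-set interpretation."""
    return [(rng.random(), rng.random())
            for _ in range(poisson_sample(lam, rng))]

def count_in(points, box):
    """The counting-measure view: N(B) for B = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return sum(1 for (x, y) in points if x0 <= x < x1 and y0 <= y < y1)

rng = random.Random(42)
pts = poisson_process(100, rng)
n_b = count_in(pts, (0.0, 0.0, 0.5, 0.5))  # distributed as Poisson(100 * 0.25)
```

By the first moment measure of a homogeneous Poisson process, the expected value of N(B) is the intensity times the area of B, which can be checked by averaging `n_b` over many seeds.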
https://en.wikipedia.org/wiki/Point_process_notation
A point source is a single identifiable localized source of something. A point source has a negligible extent, distinguishing it from other source geometries. Sources are called point sources because, in mathematical modeling , these sources can usually be approximated as a mathematical point to simplify analysis. The actual source need not be physically small if its size is negligible relative to other length scales in the problem. For example, in astronomy , stars are routinely treated as point sources, even though they are in actuality much larger than the Earth . In three dimensions , the density of something leaving a point source decreases in proportion to the inverse square of the distance from the source, if the distribution is isotropic , and there is no absorption or other loss. In mathematics, a point source is a singularity from which flux or flow is emanating. Although singularities such as this do not exist in the observable universe, mathematical point sources are often used as approximations to reality in physics and other fields. Generally, a source of light can be considered a point source if the resolution of the imaging instrument is too low to resolve the source's apparent size. There are two types of light sources: point sources and extended sources. Mathematically, an object may be considered a point source if its angular size , θ {\displaystyle \theta } , is much smaller than the resolving power of the telescope: θ << λ / D {\displaystyle \theta <<\lambda /D} , where λ {\displaystyle \lambda } is the wavelength of light and D {\displaystyle D} is the telescope diameter. Examples: Radio wave sources that are smaller than one radio wavelength are also generally treated as point sources. Radio emissions generated by a fixed electrical circuit are usually polarized , producing anisotropic radiation.
If the propagating medium is lossless, however, the radiant power in the radio waves at a given distance will still vary as the inverse square of the distance if the angle remains constant to the source polarization. Gamma ray and X-ray sources may be treated as a point source if sufficiently small. Radiological contamination and nuclear sources are often point sources. This has significance in health physics and radiation protection . Examples: Sound is an oscillating pressure wave. As the pressure oscillates up and down, an audio point source acts in turn as a fluid point source and then a fluid point sink. (Such an object does not exist physically, but is often a good simplified model for calculations.) Examples: A coaxial loudspeaker is designed to work as a point source to allow a wider field for listening. Point sources are used as a means of calibrating ionizing radiation instruments. They are usually sealed capsules and are most commonly used for gamma, x-ray and beta-measuring instruments. In a vacuum , heat escapes as radiation isotropically. If the source remains stationary in a compressible fluid such as air , flow patterns can form around the source due to convection , leading to an anisotropic pattern of heat loss. The most common form of anisotropy is the formation of a thermal plume above the heat source. Examples: Fluid point sources are commonly used in fluid dynamics and aerodynamics . A point source of fluid is the inverse of a fluid point sink (a point where fluid is removed). Whereas fluid sinks exhibit complex rapidly changing behavior such as is seen in vortices (for example water running into a plug-hole or tornadoes generated at points where air is rising), fluid sources generally produce simple flow patterns, with stationary isotropic point sources generating an expanding sphere of new fluid. If the fluid is moving (such as wind in air or currents in water) a plume is generated from the point source. 
Examples: Sources of various types of pollution are often considered as point sources in large-scale studies of pollution. [ 1 ]
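Two of the quantitative statements above — the inverse-square fall-off of flux from an isotropic point source, and the angular-size criterion θ << λ/D for point-like appearance in a telescope — can be sketched as follows. The `margin` threshold is an arbitrary illustrative choice, since "much smaller" has no standard cutoff, and the function names are made up for this example.

```python
# Sketch: isotropic inverse-square fall-off, and the angular-size
# criterion for treating a source as point-like.
import math

def flux(power, r):
    """Flux (W/m^2) at distance r (m) from an isotropic point source of
    total power `power` (W), with no absorption or other loss."""
    return power / (4 * math.pi * r**2)

def is_pointlike(theta, wavelength, diameter, margin=0.1):
    """True when the angular size theta (rad) is well below the
    telescope's resolving power lambda / D (margin is illustrative)."""
    return theta < margin * wavelength / diameter

f1 = flux(100.0, 1.0)
f2 = flux(100.0, 2.0)  # doubling the distance quarters the flux
```

The 4πr² in the denominator is simply the area of the sphere over which the emitted power is spread, which is where the inverse-square behavior comes from.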
https://en.wikipedia.org/wiki/Point_source
In mathematics , a pointed set [ 1 ] [ 2 ] (also based set [ 1 ] or rooted set [ 3 ] ) is an ordered pair ( X , x 0 ) {\displaystyle (X,x_{0})} where X {\displaystyle X} is a set and x 0 {\displaystyle x_{0}} is an element of X {\displaystyle X} called the base point [ 2 ] (also spelled basepoint ). [ 4 ] : 10–11 Maps between pointed sets ( X , x 0 ) {\displaystyle (X,x_{0})} and ( Y , y 0 ) {\displaystyle (Y,y_{0})} —called based maps , [ 5 ] pointed maps , [ 4 ] or point-preserving maps [ 6 ] —are functions from X {\displaystyle X} to Y {\displaystyle Y} that map one basepoint to another, i.e. maps f : X → Y {\displaystyle f\colon X\to Y} such that f ( x 0 ) = y 0 {\displaystyle f(x_{0})=y_{0}} . Based maps are usually denoted f : ( X , x 0 ) → ( Y , y 0 ) {\textstyle f\colon (X,x_{0})\to (Y,y_{0})} . Pointed sets are very simple algebraic structures . In the sense of universal algebra , a pointed set is a set X {\displaystyle X} together with a single nullary operation ∗ : X 0 → X , {\displaystyle *:X^{0}\to X,} [ a ] which picks out the basepoint. [ 7 ] Pointed maps are the homomorphisms of these algebraic structures. The class of all pointed sets together with the class of all based maps forms a category . Every pointed set can be converted to an ordinary set by forgetting the basepoint (the forgetful functor is faithful ), but the reverse is not true. [ 8 ] : 44 In particular, the empty set cannot be pointed, because it has no element that can be chosen as the basepoint. [ 9 ] The category of pointed sets and based maps is equivalent to the category of sets and partial functions . [ 6 ] The base point serves as a "default value" for those arguments for which the partial function is not defined. One textbook notes that "This formal completion of sets and partial maps by adding 'improper', 'infinite' elements was reinvented many times, in particular, in topology ( one-point compactification ) and in theoretical computer science ." 
[ 10 ] This category is also isomorphic to the coslice category ( 1 ↓ S e t {\displaystyle \mathbf {1} \downarrow \mathbf {Set} } ), where 1 {\displaystyle \mathbf {1} } is (a functor that selects) a singleton set, and S e t {\displaystyle \scriptstyle {\mathbf {Set} }} (the identity functor of) the category of sets . [ 8 ] : 46 [ 11 ] This coincides with the algebraic characterization, since the unique map 1 → 1 {\displaystyle \mathbf {1} \to \mathbf {1} } extends the commutative triangles defining arrows of the coslice category to form the commutative squares defining homomorphisms of the algebras. There is a faithful functor from pointed sets to usual sets, but it is not full and these categories are not equivalent . [ 8 ] The category of pointed sets is a pointed category . The pointed singleton sets ( { a } , a ) {\displaystyle (\{a\},a)} are both initial objects and terminal objects , [ 1 ] i.e. they are zero objects . [ 4 ] : 226 The category of pointed sets and pointed maps has both products and coproducts , but it is not a distributive category . It is also an example of a category where 0 × A {\displaystyle 0\times A} is not isomorphic to 0 {\displaystyle 0} . [ 9 ] Many algebraic structures rely on a distinguished point. For example, groups are pointed sets by choosing the identity element as the basepoint, so that group homomorphisms are point-preserving maps. [ 12 ] : 24 This observation can be restated in category theoretic terms as the existence of a forgetful functor from groups to pointed sets. [ 12 ] : 582 A pointed set may be seen as a pointed space under the discrete topology or as a vector space over the field with one element . [ 13 ] As "rooted set" the notion naturally appears in the study of antimatroids [ 3 ] and transportation polytopes. [ 14 ]
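The definitions above can be modeled concretely for finite sets: a pointed set is a pair (X, x0) with x0 in X, and a based map must send basepoint to basepoint. The sketch below checks the based-map condition for reduction modulo 2 as a map of the pointed sets underlying the groups Z/4 and Z/2, with the identity elements as basepoints; all helper names are illustrative.

```python
# Concrete model of pointed sets and based maps on finite sets.

def is_pointed(xs, base):
    """A pair (xs, base) is a pointed set when the basepoint lies in xs."""
    return base in xs

def is_based_map(f, src, tgt):
    """Check that f maps src = (X, x0) into tgt = (Y, y0) and that it
    preserves basepoints, i.e. f(x0) = y0."""
    X, x0 = src
    Y, y0 = tgt
    return all(f(x) in Y for x in X) and f(x0) == y0

# A group becomes a pointed set by taking its identity as basepoint,
# and group homomorphisms are then point-preserving maps:
Z4 = ({0, 1, 2, 3}, 0)  # integers mod 4, based at the identity 0
Z2 = ({0, 1}, 0)

def reduce_mod2(n):
    return n % 2  # a group homomorphism, hence a based map

ok = is_based_map(reduce_mod2, Z4, Z2)
```

Note that `is_based_map` rejects a function such as n ↦ (n + 1) mod 2 even though it lands in {0, 1}, because it moves the basepoint; this is exactly the extra condition that distinguishes based maps from arbitrary functions.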
https://en.wikipedia.org/wiki/Pointed_set
In mathematics , a pointed space or based space is a topological space with a distinguished point, the basepoint . The distinguished point is simply one particular point, picked out from the space and given a name, such as x 0 , {\displaystyle x_{0},} that remains unchanged during subsequent discussion and is kept track of during all operations. Maps of pointed spaces ( based maps ) are continuous maps preserving basepoints, i.e., a map f {\displaystyle f} between a pointed space X {\displaystyle X} with basepoint x 0 {\displaystyle x_{0}} and a pointed space Y {\displaystyle Y} with basepoint y 0 {\displaystyle y_{0}} is a based map if it is continuous with respect to the topologies of X {\displaystyle X} and Y {\displaystyle Y} and if f ( x 0 ) = y 0 . {\displaystyle f\left(x_{0}\right)=y_{0}.} This is usually denoted Pointed spaces are important in algebraic topology , particularly in homotopy theory , where many constructions, such as the fundamental group , depend on a choice of basepoint. The pointed set concept is less important; it is simply the case of a pointed discrete space . Pointed spaces are often taken as a special case of the relative topology , where the subset is a single point. Thus, much of homotopy theory is usually developed on pointed spaces and then moved to relative topologies in algebraic topology . The class of all pointed spaces forms a category Top ∙ {\displaystyle \bullet } with basepoint-preserving continuous maps as morphisms . Another way to think about this category is as the comma category , ( { ∙ } ↓ {\displaystyle \{\bullet \}\downarrow } Top ) where { ∙ } {\displaystyle \{\bullet \}} is any one-point space and Top is the category of topological spaces . (This is also called a coslice category , denoted { ∙ } / {\displaystyle \{\bullet \}/} Top .) Objects in this category are continuous maps { ∙ } → X . {\displaystyle \{\bullet \}\to X.} Such maps can be thought of as picking out a basepoint in X .
{\displaystyle X.} Morphisms in ( { ∙ } ↓ {\displaystyle \{\bullet \}\downarrow } Top ) are morphisms in Top for which the following diagram commutes : It is easy to see that commutativity of the diagram is equivalent to the condition that f {\displaystyle f} preserves basepoints. As a pointed space, { ∙ } {\displaystyle \{\bullet \}} is a zero object in Top { ∙ } {\displaystyle \{\bullet \}} , while it is only a terminal object in Top . There is a forgetful functor Top { ∙ } {\displaystyle \{\bullet \}} → {\displaystyle \to } Top which "forgets" which point is the basepoint. This functor has a left adjoint which assigns to each topological space X {\displaystyle X} the disjoint union of X {\displaystyle X} and a one-point space { ∙ } {\displaystyle \{\bullet \}} whose single element is taken to be the basepoint.
https://en.wikipedia.org/wiki/Pointed_space
Pointer Telocation was a publicly traded company, headquartered in Israel , that developed automatic vehicle location systems and provided roadside automotive service. Its shares were traded on the NASDAQ Capital Market and were listed on the Tel Aviv Stock Exchange until 2012, and then again starting in April 2016. In October 2019, the company merged with I.D. Systems, Inc., to form PowerFleet . [ 2 ] Pointer Telocation provided two different motor vehicle-related services: RF - and GPS / GPRS -based automatic vehicle location systems and command-and-control systems for fleet management and stolen vehicle recovery , and, through its Shagrir subsidiary, roadside assistance to motorists. [ 3 ] Pointer was originally the name of a product developed by Nexus Telecommunications Systems Ltd. and marketed by Eden Telecom Ltd., beginning in 1999 with Eden's founding. The product's name was inspired by the Pointer dog breed. At the time of its establishment, Eden was a joint venture of four Israeli investors: Motorola, Nexus, Poalim Investments, and Clal. In 1997 Nexus Telecommunications Systems changed its name to Nexus Telocation Systems Ltd. In 2002 Eden changed its name to Pointer. In 2004 Nexus Telocation Systems acquired Pointer and in 2006 changed its name to Pointer Telocation Ltd. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Shaul Elovitch 's Eurocom Group began accumulating shares of Pointer in August 2008, acquiring 9.03% of the company. [ 10 ] [ 11 ] Since December 2008, Eurocom has held 14.66% of Pointer's shares. DBSI Investments controls 37.27% of the company. [ 12 ] Giora Inbar, formerly commander of the Lebanon Liaison Unit of the Israel Defense Forces, was brought on board Eden Telecom as CEO at the company's founding in 1999. He left the position as part of the 2004–2005 deal to merge Shagrir with Pointer. In 2005 Danny Stern was appointed CEO and president of Pointer. He vacated the positions in 2011, whereupon David Mahlab became Pointer's CEO and president.
[ 13 ] [ 14 ] [ 15 ]
https://en.wikipedia.org/wiki/Pointer_Telocation
In quantum Darwinism and similar theories, pointer states are quantum states , sometimes of a measuring apparatus, if present, that are less perturbed by decoherence than other states, and are the quantum equivalents of the classical states of the system after decoherence has occurred through interaction with the environment. [ 1 ] [ 2 ] [ 3 ] 'Pointer' refers to the reading of a recording or measuring device, which in old analog versions would often have a gauge or pointer display.
https://en.wikipedia.org/wiki/Pointer_state
Pointman is a seated user interface for controlling one's avatar in a 3D virtual environment . It combines head tracking , a gamepad , and sliding foot pedals to provide positional control over many aspects of the avatar's posture. [ 1 ] [ 2 ] Pointman was developed by the US Naval Research Laboratory (NRL) to support the use of dismounted infantry simulation for USMC training and mission rehearsal. [ 1 ] [ 2 ] [ 3 ] NRL's goal in developing Pointman was to extend the range and precision of actions supported by virtual simulators, to better represent what infantrymen can do. [ 4 ] Pointman seeks to enhance the level of control provided by conventional desktop and console game controllers by engaging the user's whole body to control corresponding segments of the avatar's body. [ 1 ] [ 2 ] [ 3 ] [ 5 ] The user employs his head and upper body to control looking and aiming, as well as leaning to duck and peek around cover. He uses his hands to operate virtual weapons and direct tactical movement, and he uses his feet for stepping and controlling his avatar's postural height. [ 1 ] [ 2 ] Pointman uses a set of three consumer grade input devices: a Natural Point TrackIR 5 head tracker, a Sony DualShock 3 gamepad, and a pair of flight simulator foot pedals from CH Products. [ 1 ] The additional input from the head and feet offloads the hands from having to control the entire avatar and allows for a more natural assignment of control. Together, the three input devices offer twelve independent channels of control over the avatar's posture. [ 4 ] The head tracker registers the translation and rotation of the user's head. Pointman uses these inputs to map the movements of the user's head and torso one-for-one to those of his avatar. [ 1 ] [ 2 ] The virtual view changes as the user turns his head to look around or leans his torso in any direction. 
When the weapon is raised into an aim position its sights remain centered in the field of view , so that turning the head also adjusts the aim. [ 1 ] The user can aim as precisely as he can hold his head on target. [ 2 ] [ 6 ] Hunching the head down by flexing the spine is also registered by the head's translation, and the avatar adopts a matching posture. Leaning forward and hunching are used to duck behind cover. Rising up and leaning to the side are used to look out and shoot from behind cover. [ 1 ] [ 2 ] The gamepad includes dual thumb sticks and a pair of tilt sensors. Pointman uses the thumb sticks to turn the avatar's body and set the stepping direction. [ 1 ] [ 2 ] [ 3 ] The tilt of the gamepad is mapped to control how the virtual rifle is held. The user tilts the gamepad down to lower the rifle, and tilts the gamepad up to continuously raise the rifle up through a low ready into an aim and then to a high ready. [ 1 ] [ 2 ] This allows users to practice muzzle discipline, by lowering the rifle to avoid muzzle sweeping friendlies, minimizing collisions when moving through tight spaces, or leading with the rifle when moving around cover. [ 2 ] [ 6 ] Once the rifle is raised into an aim position, the user's head motion aligns the sight picture. [ 1 ] [ 2 ] The user rolls the gamepad (tilting it side to side) to cant the weapon. [ 1 ] Gamepad buttons are mapped to control various weapon operations (including firing and reloading) and aiming functions (such as the optic zoom level). [ 7 ] The foot pedals slide back and forth and also move up and down like accelerator pedals. Pointman uses these inputs to control the avatar's lower body. [ 1 ] [ 2 ] [ 3 ] The translational sliding (apart then together) is mapped to control the separation of the avatar's legs, simulating stepping when the avatar is upright and crawling when the avatar is prone. 
[ 2 ] This allows users to take precise, measured steps when moving around obstacles or cover, and to continuously vary their speed over a realistic range of walking, running and crawling gaits. [ 1 ] [ 2 ] [ 6 ] The up-down movement of the pedals is mapped to control the avatar's postural height via the flexing of the avatar's legs. This allows the avatar to continuously transition from standing tall to a low crouch (or kneel when the legs are apart), and when prone from hands-and-knees to belly-on-the-ground. [ 1 ] [ 2 ] The ability to precisely control their avatar's postural height allows users to make better use of cover and concealment, and to look and shoot out from behind cover while minimizing their exposure. [ 2 ] [ 3 ] [ 6 ] The Virtual Battlespace combined arms simulators (VBS2, VBS3, and their successor VBS4), from Bohemia Interactive Simulations (BIS), are used for training by the USMC , the US Army , and a number of the NATO armed forces. BIS worked closely with NRL to tightly integrate the Pointman interface with VBS, and to allow Pointman to control the posture of the user's avatar on a continuous basis. [ 1 ] [ 2 ] The detailed articulation of the user's avatar is made visible to other squad members running in a networked simulation. Pointman-enhanced VBS (VBS-Pointman) supports the operation of a wide range of small arms and additional forms of mobility, including climbing, swimming, and mounted roles (driver, passenger and gunner) using the full complement of manned vehicles. [ 8 ] Pointman was designed by Dr. Jim Templeman of NRL and implemented by Patricia Denbrook of DCS, Inc. NRL's Base Funding Program supported the initial development of Pointman, and the USMC Program Manager Training Systems (PM_TRASYS) together with the Office of Naval Research (ONR) Rapid Technology Transition program office supported its integration with VBS2. 
ONR's Human Performance, Training, and Education Thrust Area added its support in refining and demonstrating Pointman. [ 1 ] A formal Military Utility Assessment (MUA) of Pointman integrated with VBS2 was performed by the MarForPac Experimentation Center at MCB Hawaii in September 2011. [ 1 ] [ 2 ] [ 5 ] [ 6 ] The squad of Marines that participated in the study (Golf Company, 2nd Battalion, 3d Marine Regiment) gave Pointman high marks for realism and usability. In response to a series of survey questions, the Marines felt Pointman allowed them to realistically: control viewing, perform tactical movements, control the virtual rifle, utilize cover, and control the avatar’s posture. They found it comfortable and easy to use, and felt that it enhanced the simulation. The primary recommendation of the MUA report was: “Transition the Pointman DISI (dismounted infantry simulation interface) enhancements into VBS2 to increase realism and efficacy as a virtual training aid.” [ 6 ] NRL is continuing to develop the Pointman interface as part of its ongoing research in expressive interaction for desktop simulation. This involves extending Pointman to include non-verbal communications (such as eye movements, facial expression, and arm gestures) needed to support team and cross-cultural interaction, without limiting tactical mobility. [ 4 ] A driving application is the training of cultural interaction skills alongside warfighting skills, using training scenarios which pose a mix of tactical, cultural and ethical challenges. [ 9 ]
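The head, gamepad, and pedal mappings described above can be sketched as follows. This is a purely illustrative Python sketch: the function names, thresholds, and scaling are hypothetical, not NRL's actual implementation.

```python
# Illustrative sketch (not NRL's code) of Pointman-style input mappings:
# gamepad tilt selects the rifle carry position, and pedal depression
# sets the avatar's postural height. All thresholds are made up.

def rifle_pose(gamepad_pitch_deg: float) -> str:
    """Map gamepad tilt to a rifle carry position, from lowered to high ready."""
    if gamepad_pitch_deg < -15:
        return "lowered"      # muzzle down, e.g. to avoid sweeping friendlies
    elif gamepad_pitch_deg < 10:
        return "low ready"
    elif gamepad_pitch_deg < 30:
        return "aim"          # sights centred; head tracking then refines aim
    else:
        return "high ready"

def postural_height(pedal_depression: float) -> float:
    """Map pedal depression (0 = released, 1 = fully pressed) to avatar
    height as a fraction of full standing height (hypothetical scaling)."""
    return 1.0 - 0.6 * pedal_depression   # 1.0 standing tall, 0.4 low crouch

print(rifle_pose(20.0))        # aim
print(postural_height(0.5))    # 0.7
```

The point of the sketch is only that each physical input channel maps continuously onto one aspect of the avatar's posture, rather than toggling between discrete stances.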
https://en.wikipedia.org/wiki/Pointman_(user_interface)
The points of the compass are a set of horizontal, radially arrayed compass directions (or azimuths ) used in navigation and cartography . A compass rose is primarily composed of four cardinal directions — north , east , south , and west — each separated by 90 degrees , and secondarily divided by four ordinal (intercardinal) directions — northeast, southeast, southwest, and northwest — each located halfway between two cardinal directions. Some disciplines such as meteorology and navigation further divide the compass with additional azimuths. Within European tradition, a fully defined compass has 32 "points" (and any finer subdivisions are described in fractions of points). [ 1 ] Compass points or compass directions are valuable in that they allow a user to refer to a specific azimuth in a colloquial fashion, without having to compute or remember degrees. [ 2 ] The names of the compass point directions follow these rules: In summary, the 32-wind compass rose comes from the eight principal winds, eight half-winds, and sixteen quarter-winds combined, with each compass point at an 11¼° angle from the next. By the middle of the 18th century, the 32-point system had been further extended by using half- and quarter-points to give a total of 128 directions. [ 6 ] These fractional points are named by appending, for example, ¼ east, ½ east, or ¾ east to the name of one of the 32 points. Each of the 96 fractional points can be named in two ways, depending on which of the two adjoining whole points is used, for example, N¾E is equivalent to NbE¼N. Either form is easily understood, but alternative conventions as to correct usage developed in different countries and organisations. "It is the custom in the United States Navy to box from north and south toward east and west, with the exception that divisions adjacent to a cardinal or inter-cardinal point are always referred to that point." 
[ 7 ] The Royal Navy used the additional "rule that quarter points were never read from a point beginning and ending with the same letter." [ 8 ] Compass roses very rarely named the fractional points and only showed small, unlabelled markers as a guide for helmsmen. Prior to the modern three-figure method of describing directions (using the 360° of a circle), the 32-point compass was used for directions on most ships, especially among European crews. The smallest unit of measure recognized was 'one point', 1/32 of a circle, or 11¼°. [ 9 ] In the mariner's exercise of "boxing the compass", all thirty-two points of the compass are named in clockwise order. [ 10 ] This exercise became more significant as navigation improved and the half- and quarter-point system increased the number of directions to include in the 'boxing'. Points remained the standard unit until switching to the three-figure degree method. These points were also used for relative measurement, so that an obstacle might be noted as 'two points off the starboard bow', meaning two points clockwise of straight ahead, or 22½°. [ 9 ] This relative measurement may still be used in shorthand on modern ships, especially for handoffs between outgoing and incoming helmsmen, as the loss of granularity is less significant than the brevity and simplicity of the summary. The table below shows how each of the 128 directions is named. The first two columns give the number of points and degrees clockwise from north. The third gives the equivalent bearing to the nearest degree from north or south towards east or west. The "CW" column gives the fractional-point bearings increasing in the clockwise direction and "CCW" counterclockwise . The final three columns show three common naming conventions; the "No 'by'" convention avoids the use of "by" with fractional points. Colour coding shows whether each of the three naming systems matches the "CW" or "CCW" column. 
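The 32-point scheme described above is regular enough to generate programmatically: the eight principal winds, half-winds, and quarter-winds repeat the same pattern in each quadrant, with each point 360/32 = 11.25° from the next. A short, illustrative Python sketch (not from the source):

```python
# Generate the 32 compass point names in clockwise order ("boxing the
# compass") and their azimuths. "b" abbreviates "by", e.g. NbE = north by east.

CARDINALS = ["N", "E", "S", "W"]
INTERCARDINALS = ["NE", "SE", "SW", "NW"]

def point_name(i: int) -> str:
    """Name of compass point i, where point 0 is north."""
    q, j = divmod(i % 32, 8)          # quadrant, and position within it
    a = CARDINALS[q]                  # cardinal starting the quadrant
    b = CARDINALS[(q + 1) % 4]        # cardinal ending the quadrant
    m = INTERCARDINALS[q]             # intercardinal between them
    # The eight points of every quadrant follow the same pattern:
    return [a, a + "b" + b, a + m, m + "b" + a,
            m, m + "b" + b, b + m, b + "b" + a][j]

def point_azimuth(i: int) -> float:
    """Azimuth of compass point i in degrees clockwise from north."""
    return (i % 32) * 11.25

print(point_name(1), point_azimuth(1))    # NbE 11.25
print(point_name(11), point_azimuth(11))  # SEbE 123.75
```

Running `point_name` over 0–31 reproduces the full boxing-the-compass sequence from N round to NbW.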
The traditional compass rose of eight winds (and its 16-wind and 32-wind derivatives) was invented by seafarers in the Mediterranean Sea during the Middle Ages (with no obvious connection to the twelve classical compass winds of the ancient Greeks and Romans). The traditional mariner's wind names were expressed in Italian , or more precisely, the Italianate Mediterranean lingua franca common among sailors in the 13th and 14th centuries, which was principally composed of Genoese ( Ligurian ), mixed with Venetian , Sicilian , Provençal , Catalan , Greek , and Arabic terms from around the Mediterranean basin. This Italianate patois was used to designate the names of the principal winds on the compass rose found in mariners' compasses and portolan charts of the 14th and 15th centuries. The traditional names of the eight principal winds are: Local spelling variations are far more numerous than listed, e.g. Tramutana, Gregale, Grecho, Sirocco, Xaloc, Lebeg, Libezo, Leveche, Mezzodi, Migjorn, Magistro, Mestre, etc. Traditional compass roses will typically have the initials T, G, L, S, O, L, P, and M on the main points. Portolan charts also colour-coded the compass winds: black for the eight principal winds, green for the eight half-winds, and red for the sixteen quarter-winds. Each half-wind name is simply a combination of the two principal winds that it bisects, with the shortest name usually placed first, for example: NNE is "Greco-Tramontana"; ENE is "Greco-Levante"; SSE is "Ostro-Scirocco", etc. The quarter winds are expressed with an Italian phrase, " Quarto di X verso Y" ( pronounced [ˈkwarto di X ˈvɛrso Y ] [ 11 ] [ 12 ] [ 13 ] one quarter from X towards Y), or "X al Y" (X to Y) or "X per Y" (X by Y). There are no irregularities to trip over; the closest principal wind always comes first, the more distant one second, for example: north-by-east is " Quarto di Tramontana verso Greco "; and northeast-by-north is " Quarto di Greco verso Tramontana ". 
The table below shows how the 32 compass points are named. Each point has an angular range of 11¼ degrees, where the azimuth midpoint is the horizontal angular direction (clockwise from north) of the given compass bearing; minimum is the lower (counterclockwise) angular limit of the compass point; and maximum is the upper (clockwise) angular limit of the compass point. Navigation texts dating from the Yuan , Ming , and Qing dynasties in China use a 24-pointed compass with named directions. These are based on the twelve Earthly Branches , which also form the basis of the Chinese zodiac. When a single direction is specified, it may be prefaced by the character 單 (meaning single) or 丹 . Headings midway between two named points are compounds, as in English. For instance, 癸子 refers to the direction halfway between point 子 and point 癸 , or 7½°. This technique is referred to as a double-needle ( 雙針 ) compass.
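The compound ("double-needle") headings described above are simply midpoints of two adjacent 15° points. A small illustrative sketch (the helper name is my own, not a standard function):

```python
# In the 24-point Chinese compass, adjacent named points are 15 degrees
# apart, and a compound of two adjacent points denotes the heading
# halfway between them (7.5 degrees from each).

def compound_heading(a_deg: float, b_deg: float) -> float:
    """Midpoint heading of two adjacent 24-point directions, in degrees."""
    # Handle the wrap-around at north (e.g. 345 and 0 -> 352.5).
    if abs(a_deg - b_deg) > 180:
        a_deg, b_deg = sorted((a_deg, b_deg))
        b_deg += 360
    return ((a_deg + b_deg) / 2) % 360

# 子 is due north (0 deg) and 癸 is the next point clockwise (15 deg),
# so the compound 癸子 reads as 7.5 deg, matching the example above:
print(compound_heading(0, 15))   # 7.5
```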
https://en.wikipedia.org/wiki/Points_of_the_compass
In mathematics , the qualifier pointwise is used to indicate that a certain property is defined by considering each value f(x) of some function f. An important class of pointwise concepts are the pointwise operations , that is, operations defined on functions by applying the operations to function values separately for each point in the domain of definition. Important relations can also be defined pointwise. A binary operation o : Y × Y → Y on a set Y can be lifted pointwise to an operation O : (X → Y) × (X → Y) → (X → Y) on the set X → Y of all functions from X to Y as follows: given two functions f1 : X → Y and f2 : X → Y, define the function O(f1, f2) : X → Y by O(f1, f2)(x) = o(f1(x), f2(x)). Commonly, o and O are denoted by the same symbol. A similar definition is used for unary operations o, and for operations of other arity . The pointwise addition f + g of two functions f and g with the same domain and codomain is defined by (f + g)(x) = f(x) + g(x). The pointwise product or pointwise multiplication is (f ⋅ g)(x) = f(x) ⋅ g(x). The pointwise product with a scalar is usually written with the scalar term first; thus, when λ is a scalar , (λ ⋅ f)(x) = λ ⋅ f(x). An example of an operation on functions which is not pointwise is convolution . Pointwise operations inherit such properties as associativity , commutativity and distributivity from corresponding operations on the codomain . If A is some algebraic structure , the set of all functions from X to the carrier set of A can be turned into an algebraic structure of the same type in an analogous way. Componentwise operations are usually defined on vectors, where vectors are elements of the set K^n for some natural number n and some field K. 
If we denote the i-th component of any vector v as v_i, then componentwise addition is (u + v)_i = u_i + v_i. Componentwise operations can be defined on matrices: matrix addition, where (A + B)_ij = A_ij + B_ij, is a componentwise operation, while matrix multiplication is not. A tuple can be regarded as a function, and a vector is a tuple. Therefore, any vector v corresponds to the function f : n → K such that f(i) = v_i, and any componentwise operation on vectors is the pointwise operation on functions corresponding to those vectors. In order theory it is common to define a pointwise partial order on functions. With A , B posets , the set of functions A → B can be ordered by defining f ≤ g if (∀x ∈ A) f(x) ≤ g(x). Pointwise orders also inherit some properties of the underlying posets. For instance, if A and B are continuous lattices , then so is the set of functions A → B with pointwise order. [ 1 ] Using the pointwise order on functions one can concisely define other important notions. [ 2 ] An example of an infinitary pointwise relation is pointwise convergence of functions: a sequence of functions (f_n) with f_n : X → Y converges pointwise to a function f if for each x in X, lim_{n→∞} f_n(x) = f(x). This article incorporates material from Pointwise on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
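Both the pointwise lifting of operations and pointwise convergence described above can be sketched in a few lines of Python (an illustration, not from the source):

```python
import operator

# (1) Lifting a binary operation o : Y x Y -> Y pointwise to functions:
#     O(f1, f2)(x) = o(f1(x), f2(x)).
def lift(o):
    """Lift a binary operation on values to functions with a common domain."""
    def lifted(f1, f2):
        return lambda x: o(f1(x), f2(x))
    return lifted

pointwise_add = lift(operator.add)   # (f + g)(x) = f(x) + g(x)
pointwise_mul = lift(operator.mul)   # (f * g)(x) = f(x) * g(x)

f = lambda x: x * x
g = lambda x: 2 * x
print(pointwise_add(f, g)(3))   # 15 = 9 + 6
print(pointwise_mul(f, g)(3))   # 54 = 9 * 6

# (2) Pointwise convergence: f_n(x) = x**n on [0, 1] converges pointwise
#     to 0 for x < 1 and to 1 at x = 1 -- a discontinuous limit function.
def f_n(n, x):
    return x ** n

for x in (0.0, 0.5, 0.9, 1.0):
    print(x, f_n(10_000, x))    # a large n approximates the pointwise limit
```

Note that the convergence in (2) is pointwise but not uniform: for any fixed n there are x close to 1 where f_n(x) is still far from the limit.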
https://en.wikipedia.org/wiki/Pointwise
PoisonIvy is a remote access trojan that enables key logging, screen capturing, video capturing, file transfers, system administration, password theft, and traffic relaying. [ 1 ] It was created around 2005 by a Chinese hacker [ 2 ] and has been used in several prominent hacks, including a breach of the RSA SecurID authentication tool and the Nitro attacks on chemical companies, both in 2011. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] Another name for the malware is "Backdoor.Darkmoon". [ 9 ]
https://en.wikipedia.org/wiki/PoisonIvy_(trojan)
The Poison Prevention Packaging Act of 1970 ( PPPA ; Pub. L. 91-601 , 84 Stat. 1670-74) was signed into law by U.S. President Richard Nixon on December 30, 1970. It was enacted by the 91st United States Congress . This law required the use of child-resistant packaging for prescription drugs, over-the-counter (OTC) drugs, household chemicals, and other hazardous materials that could be considered dangerous for children. Before the PPPA was enacted, unintentional poisonings by both medicines and common household products were considered by most pediatricians to be the leading cause of injury to children aged 5 and under. At that time there were about 500 deaths per year being reported for children aged 5 and under due to accessibility of these chemicals. [ 1 ] The purpose of the PPPA was to protect children from accidentally ingesting harmful chemicals and prescription medications. After the PPPA was implemented, deaths in children aged 5 and under went down by 1.4 per million. This represented a reduction of up to 45% from projected fatality rates in the absence of child-resistant packaging, and equated to an average of 24 fewer deaths in children annually. [ 2 ] In 1957, the National Clearinghouse for Poison Control Centers was established with the goal of collecting data from individual poison control centers and providing them with the information needed on the many types of household products involved in childhood poisonings. [ 1 ] Some of the earliest attempts at controlling the problem of poisonings in children came about after World War II. In 1960, the Food and Drug Administration (FDA), in association with the American Medical Association (AMA), drafted what became known as the Hazardous Substances Labeling Act . This law stated that certain products, identified as "hazardous substances" within the meaning of the law, had to carry on their labels specific statements of caution. 
[ 1 ] There are some exceptions to the child-resistant packaging requirements. There were concerns about accessibility of medications to the elderly and handicapped. As such, a manufacturer may package any over-the-counter household substance, subject to a PPPA standard, in a single-size package if the manufacturer also supplies such a substance in packages that comply with such a standard and if the packages of such substance that do not meet such standard bear conspicuous labeling stating: "This package for households without young children" (or "Package Not Child-Resistant" for small packages). [ 3 ] As a result, with the exception of prescription drugs, manufacturers of certain household products that are regulated under the PPPA have the option of marketing one size in a conventional package as long as that same product is supplied in a popular-sized package, which is child-resistant. [ 1 ] Some of the main products that are exempted from the PPPA include the following: There is a long list of substances that fall under the authority of the PPPA. These substances include, but are not limited to
https://en.wikipedia.org/wiki/Poison_Prevention_Packaging_Act_of_1970
Poison exons (PEs; also called premature termination codon (PTC) exons or nonsense-mediated decay (NMD) exons) are a class of cassette exons that contain PTCs. Inclusion of a PE in a transcript targets the transcript for degradation via NMD. PEs are generally highly conserved elements of the genome and are thought to have important regulatory roles in biology. [ 1 ] [ 2 ] Targeting PE inclusion or exclusion in certain transcripts is being evaluated as a therapeutic strategy. In 2002, a model termed regulated unproductive splicing and translation (RUST) was proposed based on the finding that many (~one-third) alternatively spliced transcripts contain PEs. In this model, coupling alternative splicing to NMD (AS-NMD) is thought to tune transcript levels to regulate protein expression. [ 3 ] Alternative splicing may also lead to NMD via other pathways besides PE inclusion, e.g., intron retention. [ 4 ] [ 5 ] PEs were initially characterized in RNA-binding proteins from the SR protein family . [ 1 ] [ 2 ] Genes for other RNA-binding proteins (RBPs) such as those for heterogeneous nuclear ribonucleoprotein (hnRNP) also contain PEs. [ 2 ] Numerous chromatin regulators also contain PEs, though these are less conserved than PEs within RBPs such as the SR proteins. [ 6 ] Multiple spliceosomal components contain PEs. [ 7 ] Certain PEs may occur only in specific tissues. [ 8 ] PE-containing transcripts generally represent a minority of the overall transcript population, in part due to their active degradation via NMD, though this relative abundance can be elevated upon inhibition of NMD or in certain biological states. [ 2 ] [ 7 ] [ 9 ] [ 10 ] [ 11 ] Certain PE-containing transcripts are resistant to NMD and may be translated into truncated proteins. [ 12 ] Cis-regulatory elements neighboring PEs have been found to affect PE inclusion. 
[ 13 ] Many proteins whose corresponding genes contain PEs autoregulate PE inclusion in their respective transcripts and thereby control their own levels via a feedback loop. [ 12 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] Cross-regulation of PE inclusion has also been observed. [ 20 ] [ 21 ] [ 22 ] Differential splicing of PEs is implicated in biological processes such as differentiation , [ 23 ] [ 24 ] neurodevelopment , [ 25 ] dispersal of nuclear speckles during hypoxia , [ 26 ] tumorigenesis , [ 24 ] [ 27 ] organism growth, [ 15 ] and T cell expansion. [ 28 ] Protein kinases that regulate phosphorylation of splicing factors can affect splicing processes, thus kinase inhibitors may affect inclusion of PEs. For example, CMGC kinase inhibitors and CDK9 inhibitors have been found to induce PE inclusion in RBM39 . [ 29 ] Small molecules that modulate chromatin accessibility can affect PE inclusion. [ 30 ] Mutations in splicing factors can lead to inclusion of PEs in certain transcripts. [ 31 ] PE inclusion can be regulated by external variables such as temperature and electrical activity. For example, PE inclusion in RBM3 transcript is lowered during hypothermia . This is mediated by temperature-dependent binding of the splicing factor HNRNPH1 to the RBM3 transcript. [ 9 ] The neuronal RBPs NOVA1/2 are translocated from the nucleus to the cytoplasm during pilocarpine -induced seizure in mice, and it was found that NOVA1/2 regulates the expression of cryptic PEs. [ 32 ] The glycosyltransferase O-GlcNAc transferase is responsible for installing the O-GlcNAc post-translational modification and contains a PE. [ 33 ] It has been frequently observed that pharmacological or genetic perturbations that elevate cellular O-GlcNAc levels increase PE inclusion in the OGT transcript. [ 34 ] Proper regulation of PE inclusion and exclusion is important for health. Genetic mutations can affect inclusion of PEs and cause disease. 
For example, loss of CCAR1 leads to PE inclusion in the FANCA transcript, resulting in a Fanconi anemia phenotype. [ 35 ] Dysregulation of components of the splicing machinery can also cause dysregulation of PE inclusion. Mutations in the splicing factor SF3B1 have been found to promote PE inclusion in BRD9 , reducing BRD9 mRNA and protein levels and leading to melanomagenesis . [ 36 ] Mutations in U2AF1 promote PE inclusion in EIF4A2 , leading to impaired global mRNA translation and acute myeloid leukemia (AML) chemoresistance through the integrated stress response pathway. [ 37 ] The splicing factor SRSF6 contains a PE whose skipping is connected to T cell acute lymphoblastic leukemia (T-ALL), [ 38 ] and PE inclusion in SRSF10 is linked to acute lymphoblastic leukemia (ALL) . [ 39 ] Intronic mutations can lead to PE inclusion, such as in the case of SCN1A , where mutations within intron 20 promote inclusion of the nearby PE 20N, leading to Dravet syndrome -like phenotypes in mouse models. [ 40 ] [ 41 ] An intronic mutation in FLNA has been found to impair binding of the splicing regulator PTBP1 , leading to inclusion of a poison exon in FLNA transcripts that causes a brain-specific malformation. [ 25 ] In RAD50 , TGAGT deletion is associated with a cryptic poison exon that occurs 30 nucleotides downstream within intron 21 mediated by altered U2AF recognition. [ 42 ] Differential inclusion of PEs in various splicing factor and hnRNP genes has been reported in type 1 diabetes . [ 43 ] SRSF2 mutations have been found to promote PE inclusion in the epigenetic regulator EZH2 , resulting in impaired hematopoietic differentiation. [ 31 ] The TRA2B PE is essential for male fertility and meiotic cell division in mouse models. Deletion of this PE leads to an azoospermia phenotype. 
[ 44 ] With the advent of next-generation sequencing technologies, [ 45 ] diagnostic genetic testing has emerged as a powerful tool to diagnose afflictions associated with specific genetic variants. Many diagnostic genetic testing efforts have focused on exome sequencing . [ 46 ] PE annotations may improve the diagnostic yield of these tests for certain diseases. For example, variants that affect PE inclusion in sodium channel genes ( SCN1A , SCN2A , and SCN8A ) have been found to be associated with epilepsies, and analogous variants in SNRPB have been found to be associated with cerebrocostomandibular syndrome . [ 47 ] [ 48 ] As PE inclusion results in transcript degradation, targeted PE inclusion or exclusion is being evaluated as a therapeutic strategy. [ 49 ] This strategy may prove especially applicable to targets whose gene products are not easily ligandable, such as "undruggable" proteins . Targeting PE inclusion/exclusion has been demonstrated with both small molecules [ 50 ] [ 51 ] and antisense oligonucleotides (ASOs) . [ 24 ] [ 52 ] Small molecules may modulate splicing by stabilizing alternative splice sites. [ 50 ] [ 53 ] ASOs may block specific splice sites or target certain cis -regulatory elements to promote splicing at other sites. [ 54 ] [ 55 ] These ASOs may also be referred to as splice-switching oligonucleotides (SSOs). [ 24 ] [ 55 ] ASO walks, tiling different ASOs across a gene sequence, may be necessary to identify ASOs that have the desired effect on PE inclusion. [ 52 ] Stoke Therapeutics is evaluating a strategy termed Targeted Augmentation of Nuclear Gene Output (TANGO). [ 52 ] Targeting exon 20N in SCN1A mRNA with the antisense oligonucleotide zorevunersen (STK-001) blocks inclusion of this PE, leading to elevated levels of the productive SCN1A transcript and the gene product sodium channel protein 1 subunit alpha (Na V 1.1). 
In mouse models of Dravet syndrome, which is driven by mutations in SCN1A , [ 40 ] [ 41 ] [ 56 ] zorevunersen was able to reduce the incidence of electrographic seizures and sudden unexpected death in epilepsy and prolong survival. [ 57 ] [ 58 ] As of October 2024, zorevunersen is being evaluated in phase 2 clinical trials (NCT04740476). [ 59 ] Zorevunersen received FDA Breakthrough Therapy Designation in December 2024. [ 60 ] Also in December 2024, Stoke Therapeutics disclosed that zorevunersen is generally well tolerated and shows substantial and sustained reductions in convulsive seizure frequency. [ 61 ] Stoke Therapeutics expects to launch a phase 3 clinical trial in 2025 evaluating zorevunersen, with reduction in seizure frequency as the primary endpoint and cognition and behavioral changes as secondary endpoints. [ 62 ] Stoke Therapeutics is also evaluating the ASO STK-002 for treatment of autosomal dominant optic atrophy (ADOA) . STK-002 promotes removal of a PE in the transcript of OPA1 , leading to elevated OPA1 protein levels. [ 63 ] Remix Therapeutics developed REM-422, an oral small molecule that promotes PE inclusion in the oncogene MYB . REM-422 was discovered through a screening campaign for molecules that promote PE inclusion in MYB . Subsequent in vitro experiments showed that REM-422 selectively facilitates binding of the U1 snRNP complex to oligonucleotides containing the MYB 5' splice site sequence. In various AML cell lines, REM-422 leads to degradation of MYB mRNA and lower MYB protein levels. REM-422 demonstrated antitumor activity in mouse xenograft models of acute myeloid leukemia. [ 50 ] [ 64 ] As of October 2024, REM-422 is being evaluated in phase 1 clinical trials (NCT06118086, NCT06297941).
[ 65 ] [ 66 ] The splicing modulator small molecule risdiplam , originally developed to promote exon 7 inclusion in the SMN2 transcript for treatment of spinal muscular atrophy, [ 67 ] [ 68 ] dose-dependently promotes PE inclusion in the MYB transcript as well. [ 69 ] Rgenta Therapeutics has also developed RGT-61159, an oral small molecule that promotes PE inclusion in MYB , as a potential treatment for adenoid cystic carcinoma (ACC) . [ 70 ] RGT-61159 is being evaluated in phase 1 clinical trials (NCT06462183). [ 71 ] PTC Therapeutics is evaluating the oral small molecule PTC518 as a treatment for Huntington's disease . [ 51 ] PTC518 was well-tolerated and showed dose-dependent decreases in HTT mRNA and HTT protein levels in a phase 1 clinical trial. [ 72 ] As of October 2024, PTC518 is being evaluated in phase 2 clinical trials (NCT05358717). [ 73 ] In December 2024, Novartis entered a global license and collaboration agreement with PTC Therapeutics for PTC518 with an upfront payment of $1.0 billion and up to $1.9 billion in development, regulatory, and sales milestones. [ 74 ] Therapeutic targeting of poison exon inclusion/exclusion has also been proposed for oncogenic splicing factors, [ 24 ] [ 27 ] BRD9 (for treatment of cancer), [ 36 ] SYNGAP1 , [ 75 ] RBM3 (for treatment of neurodegeneration), [ 54 ] and CFTR (for treatment of cystic fibrosis ). [ 76 ]
https://en.wikipedia.org/wiki/Poison_exon
The poison laboratory of the Soviet secret services , alternatively known as Laboratory 1 , Laboratory 12 , and Kamera (which means "The Cell" in Russian ), was a covert research-and-development facility of the Soviet secret police agencies . Prior to the dissolution of the Soviet Union, the laboratory manufactured and tested poisons, [ 1 ] [ 2 ] and was reportedly reactivated by the Russian government in the late 1990s. [ 3 ] [ 4 ] The laboratory activities were mentioned in the Mitrokhin archive . Mairanovsky and his colleagues tested a variety of lethal poisons on prisoners from the Gulags , including mustard gas , ricin , digitoxin , curare , cyanide, and many others. [ 7 ] The objective of these experiments was to identify a tasteless, odorless chemical that could not be detected post-mortem . Candidate poisons were administered to the victims along with a meal or drink, disguised as "medication". [ 5 ] Ultimately, a preparation meeting the desired criteria was developed and referred to as C-2 or K-2 ( carbylamine choline chloride ). [ 5 ] [ 8 ] [ 9 ] According to witness testimonies, the victims experienced physical changes, such as a rapid weakening and diminishment in height, followed by a calm and silent demeanor, culminating in death within 15 minutes. [ 5 ] Mairanovsky intentionally brought individuals of various physical conditions and ages into the laboratory to comprehensively understand the effects of each poison. Pavel Sudoplatov and Nahum Eitingon only approved specialized equipment (namely, poisons) if it had been tested on "humans", as revealed in the testimony of Mikhail Filimonov. [ 5 ] Vsevolod Merkulov stated that these experiments received authorization from NKVD chief Lavrentiy Beria . [ 5 ] Following Stalin's death and Beria's subsequent arrest, Beria attested on August 28, 1953, that "I gave orders to Mairanovsky to conduct experiments on people sentenced to the highest measure of punishment, but it was not my idea". 
[ 5 ] In addition to human experimentation, Mairanovsky personally executed people with poisons, under the supervision of Sudoplatov. [ 5 ] [ 10 ] The New York Times reported that Garry Kasparov , the chess champion and Putin opponent, drinks bottled water and eats prepared meals carried by his bodyguards. [ 35 ]
https://en.wikipedia.org/wiki/Poison_laboratory_of_the_Soviet_secret_services
A poison message refers to a client–server model issue in which a client machine tries to send a message to the server and fails too many times (the threshold for "too many" varies by system). The handling of poison messages also varies: they may be discarded, generate a service request event, or initiate other failure indications. The term is used mainly in Microsoft -related frameworks, like SQL Server [ 1 ] or Windows Communication Foundation (WCF). [ 2 ] RabbitMQ also has a notion of poisoned messages. [ 3 ]
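The retry-then-quarantine behavior described above can be sketched in a few lines of Python. This is a minimal, framework-agnostic illustration, not the actual SQL Server, WCF, or RabbitMQ API; the names `MAX_ATTEMPTS`, `process`, and the `dead_letter` list are hypothetical:

```python
from collections import deque

MAX_ATTEMPTS = 3  # hypothetical threshold for "too many" failures


def process(queue, dead_letter, handler):
    """Drain `queue`, retrying each failing message up to MAX_ATTEMPTS times.

    A message that keeps failing is treated as a poison message and moved
    to `dead_letter` so it cannot block the queue indefinitely.
    """
    attempts = {}
    while queue:
        msg = queue.popleft()
        try:
            handler(msg)
        except Exception:
            attempts[msg] = attempts.get(msg, 0) + 1
            if attempts[msg] >= MAX_ATTEMPTS:
                dead_letter.append(msg)  # quarantine instead of retrying forever
            else:
                queue.append(msg)        # requeue for another attempt


def handler(msg):
    # Simulated processing: one message always fails.
    if msg == "poison":
        raise ValueError("cannot process " + msg)


queue = deque(["a", "poison", "b"])
dead_letter = []
process(queue, dead_letter, handler)
print(dead_letter)  # ['poison']
```

Real brokers implement the same idea declaratively: for example, RabbitMQ tracks delivery attempts and can route repeatedly rejected messages to a dead-letter exchange rather than redelivering them forever.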
https://en.wikipedia.org/wiki/Poison_message
Poisoning is the harmful effect which occurs when toxic substances are introduced into the body. [ 1 ] The term "poisoning" is a derivative of poison , a term describing any chemical substance that may harm or kill a living organism upon ingestion . [ 2 ] Poisoning can be brought on by swallowing, inhaling, injecting or absorbing toxins through the skin. Toxicology is the practice and study of symptoms, mechanisms, diagnoses, and treatments correlated to poisoning. [ 3 ] When a living organism is introduced to a poison, the symptoms that follow successful contact develop in close relation to the degree of exposure . [ 4 ] Acute toxicity/poisoning consists of a living organism being harmfully exposed to poison one or more times during a brief period, with symptoms manifesting within 14 days of administration. [ 5 ] Chronic toxicity/poisoning involves a living organism being exposed to a toxin on multiple occasions over an extended period of time, with symptoms that either develop gradually or appear after a protracted latent period . [ 6 ] Chronic poisoning most commonly occurs following exposure to poisons that bioaccumulate , or are biomagnified , such as mercury , gadolinium , and lead . [ 7 ] In 2020, America's Poison Centers' NPDS (National Poison Data System) report determined that 76.9% of recorded toxin exposures were accidental, with the remainder of exposures being deliberate or unexpected. [ 9 ] A large portion of these accidental incidents occurred due to mistakenly taking the incorrect medicine, or doubling one's dose by mistake. [ 9 ] Nerve gases are synthetic substances used in industry or warfare that are specifically engineered to bring harm to living organisms. [ 10 ] They may paralyze a person in a matter of seconds or shut down organ function, quickly resulting in death. They are considered to be biologically derived neurotoxins , a class of toxic agents that act specifically against the nervous system.
Inhaled or ingested cyanide (or Zyklon B ) was used as a method of execution in gas chambers . [ 11 ] This method of poisoning instantly starved the body of energy by inhibiting the enzymes in mitochondria that produce ATP . [ 12 ] Intravenous injection of an unnaturally high concentration of potassium chloride , such as in the execution of prisoners in parts of the United States, quickly stops the heart by eliminating the cell potential necessary for muscle contraction . [ 13 ] Most biocides, including pesticides , are created to act as poisons to target organisms, although acute or less observable chronic poisoning can also occur in non-target organisms ( secondary poisoning ), including the humans who apply the biocides and other beneficial organisms . [ 14 ] For example, the herbicide 2,4-D imitates the action of a plant hormone, which makes its lethal toxicity specific to plants. Indeed, 2,4-D is not a poison, but is classified as harmful. [ 15 ] Many substances regarded as poisons are toxic only indirectly through toxication . An example is "wood alcohol" or methanol , which is not poisonous itself but is chemically converted to toxic formaldehyde and formic acid once it reaches the liver . [ 16 ] Many drug molecules are made toxic in the liver, and the genetic variability of certain liver enzymes makes the toxicity of many compounds differ between people. As mandated in the GHS , various safety-orientated government agencies from around the globe have put into place the usage of pictograms when labelling toxic substances. [ 17 ] [ 18 ] The hazard symbol which labels a substance as capable of poisoning depicts a human skull in front of two crossed bones. [ 19 ] GHS precautionary statements , which advise users to exercise caution or be aware of the substance's potentially dangerous features, are added to a legal toxin's labelling.
[ 20 ] Toxic substances can also come with instructions on how to handle the product, what compounds to avoid mixing the product with, and how to treat a victim at risk of poisoning who has come in contact with the product. [ 21 ] Various poison control centers are also available to assist in diagnosing, managing, and preventing possible incidents of poisoning. [ 22 ] Many are accessible through phone calls or official websites . Seeking medical attention is strongly advised if someone is thought to have been exposed to or consumed a poison, ideally by contacting a nearby poison control centre. [ 23 ] [ 24 ] It is advised to provide medical personnel with information regarding the poisoning, the patient's age, weight, and any other drugs they may be taking, in addition to the symptoms of the illness. Try to determine what was ingested, the amount, and how long ago the person was exposed to it. If possible, have on hand the pill bottle, medication package or other suspect container. [ 25 ] The treatment will depend on the substance to which the patient was exposed. Depending on the type of poisoning, some first aid measures may help. Treatments include activated charcoal, induction of vomiting, and dilution or neutralization of the poison. [ 26 ]
https://en.wikipedia.org/wiki/Poisoning
On 20 August 2020, Russian opposition leader and anti-corruption activist Alexei Navalny was poisoned with the Novichok nerve agent and, as a result, was hospitalized in serious condition. He became ill during a flight from Tomsk to Moscow and, after an emergency landing, was taken to a hospital in Omsk , where he was put into a coma . He was evacuated to the Charité hospital in Berlin , Germany, two days later. The use of the nerve agent was confirmed by five Organisation for the Prohibition of Chemical Weapons (OPCW) certified laboratories. [ 4 ] [ 5 ] On 7 September, doctors announced that they had taken Navalny out of the induced coma and that his condition had improved. [ 6 ] He was discharged from the hospital on 22 September 2020. [ 7 ] The OPCW said that a cholinesterase inhibitor from the Novichok group was found in Navalny's blood, urine, skin samples and his water bottle. [ 4 ] [ 8 ] [ 9 ] [ 10 ] At the same time, the OPCW report clarified that Navalny was poisoned with a new type of Novichok, which was not included in the list of controlled chemicals of the Chemical Weapons Convention . [ 11 ] [ 12 ] [ 13 ] Navalny accused President Vladimir Putin of being responsible for his poisoning, but the Kremlin said the accusations were "utterly unfounded" and "insulting". The Kremlin further alleged that Navalny was working for the CIA. [ 14 ] The EU and the UK [ 15 ] imposed sanctions over Navalny's poisoning on the director of the Russian Federal Security Service (FSB) Alexander Bortnikov , five other senior Russian officials, and the State Research Institute of Organic Chemistry and Technology (GosNIIOKhT). [ 16 ] According to the EU, the poisoning of Navalny became possible "only with the consent of the Presidential Executive Office" and with the participation of the FSB. [ 17 ] [ 18 ] [ 19 ] An investigation by Bellingcat and The Insider implicated agents from the FSB in Navalny's poisoning.
[ 20 ] Russian prosecutors refused to open an official criminal investigation of the poisoning, claiming they found no sign that a crime had been committed, [ 21 ] [ 22 ] and the Kremlin denied involvement in the poisoning of Navalny. [ 23 ] Alexei Navalny had previously been attacked by chemical substances. On 27 April 2017, Navalny was attacked by unknown assailants outside his office in the Anti-Corruption Foundation who sprayed brilliant green dye , possibly mixed with other components, into his face (see Zelyonka attack ). He said he had lost 80 percent of the sight in his right eye. He also said that his doctor believed there was a second corrosive substance in the liquid and that "there is hope" the lost eyesight would be restored. He also alleged that the attacker was Aleksandr Petrunko, a man he claimed had ties with State Duma deputy speaker Pyotr Olegovich Tolstoy . [ 24 ] [ 25 ] Navalny accused the Kremlin of orchestrating the attack. [ 26 ] [ 27 ] Another incident occurred in July 2019, when Navalny was arrested and imprisoned. On 28 July, he was hospitalized with severe damage to his eyes and skin. At the hospital, he was diagnosed with an allergic reaction, although this diagnosis was disputed by Anastasia Vasilyeva , one of his personal doctors. [ 28 ] Vasilyeva questioned the diagnosis and suggested the possibility that Navalny's condition was the result of "the damaging effects of undetermined chemicals". [ 29 ] On 29 July 2019, Navalny was discharged from hospital and taken back to prison, despite the objections of his personal physician who questioned the hospital's motives. [ 28 ] [ 30 ] In August 2020, in the days leading up to the poisoning, Navalny had been publishing videos on his YouTube channel in which he expressed support for the pro-democracy 2020 Belarusian protests , which were triggered by the heavily contested 2020 Belarusian presidential election . 
[ 31 ] Navalny had also written that the kind of 'revolution' that was taking place in neighboring Belarus would soon happen in Russia. [ 32 ] Local news site Tayga.Info reported that during his Siberia trip, Navalny had been carrying out an investigation, as well as meeting local candidates and volunteers. When asked if Navalny had been preparing an exposé shortly before he became violently ill, Navalny ally Lyubov Sobol stated "I can't reveal all the details, but Navalny was on a work trip. He wasn't relaxing in the regions". [ 32 ] The video investigation was later published by Navalny's team on 31 August. [ 33 ] It is assumed that Navalny was poisoned in a politically motivated attack as 'punishment' for his opposition work. [ 32 ] According to The New York Times , experts expressed doubts that the Novichok agent would be used by anyone other than a state-sponsored agent. [ 34 ] Journalist and human rights advocate Anna Politkovskaya , known for her criticism of Putin and her coverage of the Second Chechen War , fell ill during a flight to cover the Beslan school siege in 2004 after drinking tea in an apparent poisoning attempt. She was later assassinated in 2006. [ 35 ] [ 36 ] In 2018, Pussy Riot activist Pyotr Verzilov was hospitalised in Moscow after a suspected poisoning and taken a few days later to the Charité hospital in Berlin for treatment organised by the Cinema for Peace Foundation; doctors at the hospital said it was "highly probable" that he was poisoned. [ 37 ] According to activist Ilya Chumakov, who met Navalny along with other supporters the day before his flight, when Navalny was asked why he was not dead, he said that his death would not be beneficial to Putin and that it would turn him into a hero. [ 38 ] On 20 August 2020, Navalny fell ill during a flight from Tomsk to Moscow and was hospitalised in the City Clinical Emergency Hospital No.
1 in Omsk ( Russian : Городская клиническая больница скорой медицинской помощи №1 ), where the plane had made an emergency landing . The change in his condition on the plane was sudden and violent, and video footage showed crew members on the flight scurrying towards him and Navalny crying out loudly. [ 32 ] Afterwards, his spokeswoman said that he was in a coma and on a ventilator in the hospital. She also said that Navalny had drunk only tea since the morning and that it was suspected that something had been added to his drink. The hospital said that he was in a stable but serious condition, and after initially acknowledging that Navalny had probably been poisoned, the hospital's deputy chief physician told reporters that poisoning was "one scenario among many" being considered. [ 32 ] Although doctors in Russia initially suggested he suffered from a metabolic disorder caused by low blood sugar, they later stated that he had most likely been poisoned by antipsychotics or neuroleptics and that industrial chemicals such as 2-ethylhexyl diphenyl phosphate were found. [ 39 ] [ 40 ] A photograph on social media taken by a supporter appeared to show Navalny drinking tea at a Tomsk airport café, where Interfax news agency reported that the owners of the café were checking CCTV footage to see if any evidence could be provided. [ 41 ] [ 42 ] [ 43 ] By the afternoon, Navalny's wife, Yulia, had reached the hospital from Moscow. She brought with her Navalny's personal doctor, Anastasia Vasilyeva. The authorities, however, initially refused to allow them into the room. They demanded proof in the form of a marriage certificate that Yulia was indeed his wife. [ 32 ] A chartered plane paid for by the Cinema for Peace Foundation was sent from Germany to evacuate Navalny from Omsk for treatment at the Charité in Berlin.
[ 44 ] Approximately 31 hours after the onset of his symptoms, a doctor from the German team was granted brief access to Alexei Navalny, and recorded bradycardia, hypothermia (34.4°C), and wide pupils that were non-reactive to light. According to German doctors, Navalny was under sedation with propofol, which was the only obvious drug given at that time. [ 45 ] The doctors treating him in Omsk had initially declared he was too sick to be transported [ 46 ] but later released him, and he arrived in Berlin on 22 August. [ 47 ] [ 48 ] Alexander Murakhovsky, the head doctor at the Omsk hospital, told a press conference on 24 August that they had saved his life and found no traces of any poison in his system; he also said the doctors at the hospital had not been under pressure from Russian officials. [ 49 ] The doctors treating him at the Charité announced later in the day that while the specific substance was not yet known, clinical findings indicated poisoning with a substance from the group of nerve agents known as cholinesterase inhibitors , and that they would be performing further tests to discover the exact substance. Further evidence was expected with the publication of the laboratory testing that had been initiated. [ 50 ] As of 2 September 2020, Navalny was in a medically-induced coma. German physicians said that if he recovered, lasting effects could not be ruled out. [ 34 ] Dr. Murakhovsky wrote a letter to the Charité, demanding that they show laboratory data about him being poisoned with a cholinesterase inhibitor, stating that the doctors in his hospital had found no such evidence. He stated that the cholinesterase decrease may have happened either by intake of a compound or naturally, also publishing a purported independent analysis detecting no cholinesterase inhibitors. He confirmed giving him atropine , which is used to counteract certain nerve agents and pesticide poisoning , but claimed the reasons were unrelated to poisoning.
[ 51 ] [ 52 ] On 7 September, doctors brought Navalny out of the medically-induced coma. [ 6 ] In a press release, Charité said: [ 53 ] The condition of Alexei Navalny, ... has improved. The patient has been removed from his medically induced coma and is being weaned off mechanical ventilation. He is responding to verbal stimuli. It remains too early to gauge the potential long-term effects of his severe poisoning. On 10 September, news media reported that the police protection outside the Charité hospital had been stepped up and that Navalny was able to speak again, but Navalny's spokeswoman described reports of his quick recuperation as "exaggerated". [ 54 ] On 14 September, the Charité hospital said that Navalny had been taken off the ventilator and that he was able to get out of bed. For the first time, the hospital said that it published the statement following consultations "with the patient and his wife", rather than with his wife only. [ 55 ] On 15 September, Navalny's spokeswoman said that Navalny would return to Russia. Navalny also posted a picture from his hospital bed on social media for the first time since his poisoning. The Kremlin ruled out a meeting between Navalny and Putin. [ 56 ] On 22 September, the doctors at the Charité hospital declared him well enough to be discharged from in-patient care. [ 7 ] [ 57 ] While recovering after discharge from the Charité hospital, Navalny stated "I assert that Putin was behind the crime, and I have no other explanation for what happened. Only three people can give orders to put into action ' active measures ' and use Novichok ... [but] FSB director Alexander Bortnikov , foreign intelligence service head Sergey Naryshkin and the director of GRU cannot make such a decision without Putin's orders."
[ 58 ] Anti-Corruption Foundation (FBK) employees in Tomsk, having learned about the poisoning, told the administration of the hotel where Navalny had stayed that he could have been poisoned with "something from the minibar" and received permission to inspect his room. [ 59 ] [ 60 ] The inspection was carried out in the presence of a hotel administrator and a lawyer, and was filmed. [ 61 ] Navalny's associates took his personal belongings from the room, including several plastic water bottles. The head of the FBK investigation department, Maria Pevchikh, subsequently took these bottles to Germany on the same medical plane on which Navalny himself was transported, and handed them over to German specialists. [ 60 ] [ 62 ] [ 63 ] Although a member of Navalny's team maintained control of the bottle, there is no official chain of custody for it. [ 64 ] [ verification needed ] On the same day, 20 August 2020, Navalny's lawyers appealed to the Investigative Committee of Russia and demanded the initiation of a criminal case under Articles 30, 105, and 227 of the Criminal Code of the Russian Federation. [ 65 ] Upon Navalny's admission to the Charité hospital intensive care unit, toxicological analysis and drug screening of the patient's blood and urine samples were performed. [ 45 ] Several drugs, including atropine , were identified, whose presence was attributed to the previous treatment in Omsk. [ 45 ] Cholinesterase status testing was performed in an external laboratory, and it showed virtually no activity of acetylcholinesterase in erythrocytes, which served as strong evidence supporting exposure to a cholinesterase inhibitor. [ 45 ] Navalny's attending doctors from the Charité turned to Bundeswehr experts for help to check whether Navalny had been poisoned with a chemical warfare agent.
[ 66 ] On 2 September 2020, the German government announced that scientists at the Bundeswehr Institute of Pharmacology and Toxicology had obtained unequivocal proof that Navalny was poisoned by a Novichok-type nerve agent. [ 67 ] [ 68 ] [ 69 ] [ 70 ] [ 71 ] Although the German government did not disclose any technical details about the exact procedure of Novichok's identification, or the concrete formula of the poison, Marc-Michael Blum, an expert in chemical weapons testing and former OPCW employee, suggested that the analytic procedure used by German chemists was similar to the one used for identification of Sarin poisoning, and that it allows reliable identification of the poison at the parts-per-billion level. [ 67 ] His opinion is in agreement with that of Palmer Taylor, a pharmacologist at the University of California, San Diego. [ 72 ] Traces of the Novichok nerve agent were found in blood and urine, as well as on Navalny's skin samples. [ 8 ] [ 73 ] Traces of the poison were also found on one of Navalny's bottles, which had previously been handed over to Berlin doctors, [ 74 ] [ 75 ] and on some other undisclosed object(s). [ 76 ] [ 77 ] Experts suggest that Navalny drank from it after he was poisoned, leaving traces on it. [ 74 ] [ 75 ] Navalny's team suggested that he was possibly poisoned before leaving the hotel. It was also stated that before leaving Russia, Navalny's clothes were seized by the Russian government. [ 78 ] Bruno Kahl , the head of Germany's foreign intelligence service , revealed that the Novichok agent identified from Navalny's toxicology results was a "harder" form than previously seen. [ 79 ] The research results from the Bundeswehr Institute of Pharmacology and Toxicology were handed over to the OPCW. [ 80 ] The Charité hospital, with Navalny's consent, published a scientific article titled "Novichok nerve agent poisoning" in the peer-reviewed medical journal The Lancet .
[ 45 ] In the article, 14 doctors described Navalny's clinical details and course of treatment. [ 81 ] [ 82 ] [ 83 ] The doctors also confirmed that severe poisoning was the cause of Navalny's condition: "A laboratory of the German armed forces designated by the Organization for the Prohibition of Chemical Weapons identified an organophosphorus nerve agent from the Novichok group in blood samples collected immediately after the patient's admission to Charité ." [ 84 ] They also expressed the opinion that Navalny survived thanks to timely treatment [ 81 ] and previous good health. [ 85 ] After the publication, Navalny said that the evidence of the poisoning that Putin was demanding was now available to the whole world. [ 86 ] [ 87 ] On 3 September 2020, the Organisation for the Prohibition of Chemical Weapons (OPCW) received from the German government a request for assistance under the Chemical Weapons Convention . [ 88 ] A team of inspectors visited the Charité Hospital, met Navalny, confirmed his identity, and directly supervised and monitored blood and urine sampling, which was conducted by the hospital staff in line with OPCW procedure. [ 89 ] The samples, which were maintained under OPCW chain of custody, [ 64 ] [ verification needed ] were transferred to two certified laboratories designated by the OPCW Director General. [ 89 ] Researchers in the two OPCW-designated laboratories, [ 90 ] the laboratory in Bouchet, subordinate to the Direction générale de l'armement , [ 91 ] and the laboratory in Umeå , subordinate to the Swedish Defence Research Agency , [ 92 ] confirmed Navalny's poisoning with a Novichok nerve agent. [ 93 ] [ 94 ] On 6 October 2020, the OPCW announced that results of testing samples obtained from Navalny had confirmed the presence of a Novichok nerve agent, saying: [ 4 ] ...
the biomarkers of the cholinesterase inhibitor found in Mr Navalny's blood and urine samples have similar structural characteristics as the toxic chemicals belonging to schedules 1.A.14 and 1.A.15 that were added to the Annex on Chemicals to the Convention during the Twenty-Fourth Session of the Conference of the States Parties in November 2019. This cholinesterase inhibitor is not listed in the Annex on Chemicals to the Convention. The exact structure of the agent involved has not been publicly disclosed, but according to the announcement above, the compound shares structural similarities with A-232 (the example compound for schedule 1.A.14) and A-242 (the example compound for schedule 1.A.15). [ 95 ] [ failed verification ] It was emphasized that any use of chemical weapons is "reprehensible and wholly contrary to the legal norms established by the international community." [ 4 ] [ 96 ] United Nations special rapporteur on Extrajudicial Executions Agnès Callamard and UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion Irene Khan have confirmed that they intend to investigate Navalny's poisoning at his request. [ 97 ] [ 98 ] According to Die Zeit and Der Spiegel , [ 99 ] "a new and improved version of the Novichok agent, which has not been encountered in the world before", was used in Navalny's poisoning. This new type of Novichok is more toxic and dangerous than its previously known variants, but acts more slowly. It had been planned that Navalny would die on board the plane, but he had survived "thanks to a sequence of successful coincidences: the quick reaction of the pilot who made an emergency landing, and the doctors in Omsk, who immediately injected Navalny with atropine ". German experts came to the conclusion that only Russian special services could have used such a "deadly and complex poison". 
To create a binary chemical weapon of this kind, a special laboratory is needed; it could not be synthesized by ordinary criminals. The German side rejected the version of Navalny's poisoning by foreign special services, as it would have been "unthinkable" under the conditions of constant surveillance of Navalny by Russia's Federal Security Service (FSB): "All this allows us to draw only one plausible conclusion: it was the Kremlin who gave the order to get rid of unwanted criticism." [ 100 ] [ 101 ] A source in the German special services told The New York Times that, according to German experts, Navalny was poisoned by Novichok in the form of a powder dissolved in a liquid, most likely in the tea he drank at the Russian airport. Given that the poison was also found on a bottle from Navalny's hotel room, The New York Times concluded that he could have been poisoned twice. [ 102 ] [ 103 ] The Parliamentary Assembly of the Council of Europe appointed Jacques Maire as special rapporteur on the poisoning of Navalny. [ 104 ] On 13 December 2020, an article from The Sunday Times , quoting anonymous intelligence sources, reported that Navalny had been poisoned a second time while in hospital in Omsk; the prior administration of the antidote atropine in response to the first poisoning is thought to have saved Navalny's life by counteracting the second dose of Novichok. [ 105 ] [ 106 ] Specialists found Novichok on the politician's underwear and clothes, including on his belt. The poison got onto these items after intelligence agents entered Navalny's hotel room in Tomsk. [ 107 ] On 14 December 2020, Bellingcat and The Insider , in co-operation with CNN , Der Spiegel [ 108 ] and the Anti-Corruption Foundation, published a joint investigation implicating agents from Russia's Federal Security Service (FSB) in Navalny's poisoning.
The investigation detailed a special unit of the FSB specialising in chemical substances, and the investigators tracked members of the unit using telecom and travel data. The same day, Navalny published a new video and tweeted: "Case closed. I know who tried to kill me. The case concerning my murder attempt is solved. We know the names, we know the job ranks, and we have the photos." According to the investigation, Navalny had been under surveillance by a group of eight operatives from the unit since 2017, and there may have been earlier attempts to poison him. [ 20 ] [ 109 ] [ 110 ] The travel data of the alleged FSB officers were made publicly available. [ 111 ] When asked about the investigation, Putin called it "the legalisation of the materials of American intelligence agencies" and confirmed that Russian security agents were tailing Navalny, claiming that Navalny was backed by U.S. intelligence and denying that he was poisoned. [ 112 ] [ 113 ] [ 114 ] In January 2021, Bellingcat in a separate joint investigation with Der Spiegel linked the unit that tracked Navalny and allegedly poisoned him to the deaths of two other activists including Timur Kuashev in 2014 and Ruslan Magomedragimov in 2015, as well as potentially the politician Nikita Isayev in 2019, but it noted that Isayev was "absolutely loyal" to the Kremlin and there was no motive for him to have been killed by the FSB. [ 115 ] According to Bellingcat, the same FSB unit had been involved in the assassination of Boris Nemtsov and the poisonings of Vladimir Kara-Murza and Dmitry Bykov . [ 116 ] Following the investigation but before its publication, Navalny recorded his telephone conversation with Konstantin Borisovich Kudryavtsev, an FSB operative who was allegedly involved in his poisoning. The recording was released on 21 December 2020. 
During the phone conversation, Navalny posed as an aide to the secretary of Russia's Security Council Nikolai Patrushev , pretending to debrief Kudryavtsev about the operation and asking for details of why the mission had failed. Investigators used caller ID spoofing software to make the call look like it was coming from an FSB work phone number. [ 117 ] [ 118 ] [ 119 ] Kudryavtsev unwittingly confessed that the Novichok agent had been applied to Navalny's underwear while he was staying at the hotel in Tomsk ; Navalny had worn the underwear on the flight as planned, but the poison had apparently been absorbed too slowly. He attributed Navalny's survival to the pilots rerouting the flight and doctors in Omsk administering an antidote "almost immediately". Following Navalny's medical evacuation to Germany, Kudryavtsev said he had been sent to recover Navalny's clothes so that they could be treated to remove traces of Novichok before they could be tested by independent experts. [ 120 ] The FSB later dismissed the recording of Navalny's telephone call as a forgery, calling it a "provocation" that "would not have been possible without the organizational and technical support of foreign special services". [ 121 ] [ 122 ] However, Bellingcat had arranged for its representatives to be present during the call, so there are direct witnesses in addition to the published audio and visual records of the call. [ 117 ] On 27 August 2020, Russian police and the Ministry of the Interior said they had launched a routine preliminary investigation into the poisoning, inspecting the hotel room and security footage. Russian police said that over 100 pieces of potential evidence had been collected. [ 123 ] Prosecutors asserted that no further investigation was needed after the preliminary one, claiming it had found no sign that a crime had been committed. 
[ 21 ] In January 2021, Navalny returned to Russia and was immediately detained on accusations of violating parole conditions while he was hospitalised in Germany. Following his arrest, mass protests were held across Russia. [ 124 ] In February 2021, his suspended sentence was replaced with a prison sentence of over two and a half years' detention, and his organisations were later designated as extremist and liquidated. In March 2022, Navalny was sentenced to an additional nine years in prison after being found guilty of embezzlement and contempt of court in a new trial described as a sham by Amnesty International ; [ 125 ] [ 126 ] his appeal was rejected and in June, he was transferred to a high-security prison. [ 127 ] In August 2023, Navalny was sentenced to an additional 19 years in prison on extremism charges. [ 128 ] In December 2023, Navalny went missing from prison for almost three weeks. He re-emerged in an Arctic Circle corrective colony in the Yamalo-Nenets Autonomous Okrug . [ 129 ] [ 130 ] On 16 February 2024, the Russian prison service reported that Navalny had died at the age of 47. [ 131 ] [ 132 ] One of the developers of substances like Novichok , Vladimir Uglyov, said that he "trusts the German specialists 100%" and suggested that the poison was delivered using "a solution (for example, in dimethylformamide ) of a solid analogue of A-234 , namely, a solid A-242 , which was applied on Navalny's underwear, plus the addition of some faster-acting substance that hides symptoms (for example, clonidine )." [ 133 ] Uglyov suggested that Novichok had been applied on Navalny's underwear in a hotel room, and that the poison had entered his body a few hours before departure. According to the scientist, Navalny received about 20% of the lethal dose. [ 134 ] Uglyov suggested that the organizers decided to poison Navalny with Novichok, since they mistakenly believed that this substance would be impossible to detect. 
[ 135 ] Uglyov also expressed the opinion that the Russian laboratory could not detect Novichok in the biological samples acquired from Navalny because "German specialists had more modern equipment and instruments for determining such quantities, which were taken from Alexei's blood. And those who did the tests in Moscow had much weaker equipment and the tests showed the absence of the substance." [ 136 ] After the release of an independent investigation by Navalny, in which it was assumed that Alexei and his wife Yulia could have been subjected to attempts at poisoning earlier, Uglyov did not rule out that Navalny could have encountered Novichok three times, in previous cases receiving small doses insufficient for a lethal outcome. [ 137 ] Chemical weapons specialist Vil Mirzayanov , who worked at GosNIIOKhT and in the 1990s revealed the Novichok development program in the USSR and Russia, agreed with the conclusions of the Bundeswehr special laboratory and suggested that the latest versions of Novichok were used to poison Navalny: the compounds A-242 or A-262 . [ 138 ] [ 139 ] Mirzayanov also stated that the symptoms described by Navalny on 19 September were comparable to those he was aware of for similar cases. [ 140 ] One of the developers of the Novichok agent, Leonid Rink, stated that if Novichok had been used to poison Navalny, he "would have been dead, not left in coma" and suggested that what happened to Navalny was an attack of acute pancreatitis . This position was later dismissed as "nonsense" by doctor Yaroslav Ashikhmin who treated Navalny, as no increase in pancreatic enzymes was found in Navalny's blood. [ 141 ] Another fact-checker pointed out at least three cases of people poisoned with Novichok who survived (Andrei Zheleznyakov, Sergei and Yulia Skripal ) whose symptoms were very consistent with those of Navalny. 
[ 142 ] [ 143 ] Rink also theorized that Navalny had poisoned himself in a "big theater play" or that the Germans had copied Novichok and then poisoned Navalny in Germany, without providing any evidence for those theories. [ 144 ] Rink had also previously (in 2018) echoed a theory floated in Russian state media that the British could have been behind the poisoning of Sergei Skripal . In 1995, he had also confessed to selling Novichok to a criminal gang for $1500, but was never convicted. [ 142 ] [ 145 ] Reanimatologist ( anesthesiologist ) Boris Teplykh, one of the participants in the council of Russian doctors who treated Navalny in the first days after the poisoning, said in an interview with Meduza that the Russian specialists from the Moscow Medical Center of Forensic Medicine were looking for organophosphates and traces of cholinesterase inhibitors, but did not find any. Teplykh explained the difference in the test results of Russian and German specialists by the fact that "we worked with forensic toxicologists, and they [Germans] worked with superchemists who deal with chemical warfare agents . Slightly different things." [ 146 ] Navalny's attending physician Anastasia Vasilyeva stated on the TV channel RTVI that all of Navalny's symptoms clearly indicated organophosphate poisoning . In her opinion, the fact that she, as his attending physician, was for two days not allowed to examine Navalny or to access consultations and medical documentation indicated an attempt to hide the symptoms of poisoning from her, from which she concluded that the Omsk doctors had been instructed to keep silent. [ 147 ] Andy Smith, professor in the Department of Toxicology at the University of Leicester , noted that it would be difficult to identify a specific toxic substance in Navalny's body after a few days, though not impossible, given recent advances in analytical chemistry . 
He also noted that although Navalny, with the help of atropine and other drugs, had survived the acute stage of the poisoning, inhibition of cholinesterase could lead to neurodegenerative and neuropsychiatric diseases. In his opinion, this was exactly what the poisoners were counting on. [ 148 ] The head of the Institute of Toxicology in Munich , Professor Martin Goettlicher, in an interview with Deutsche Welle , noted that Navalny's symptomatology was in many ways similar to that of the poisoning of Sergei and Yulia Skripal . For example, in both cases, patients were put into an artificial coma for about three weeks to restore cholinesterase . Goettlicher also explained that when Navalny was poisoned, those around him did not necessarily have to suffer, as was the case with the Skripals, since Novichok can be an oily liquid that evaporates or spreads poorly, depending on how and in what quantity the substance entered the body. [ 149 ] [ 150 ] German biochemist Dr. Marc-Michael Blum, who previously headed the OPCW laboratory, [ 151 ] [ 152 ] as well as the team investigating the poisoning of Sergei and Yulia Skripal, confirmed that Navalny could have been poisoned with Novichok without those around him being harmed: according to Blum, this indicated either that the level of exposure to the substance or the degree of its ingress into Navalny's body was too low, or that no one around had come into contact with the epicenter of the poison. Blum praised the work of the laboratory of the Institute of Pharmacology and Toxicology of the Bundeswehr , which was the first to confirm the poisoning of Navalny by Novichok (Blum had worked there in 2006–2010). [ 153 ] Blum categorically denied that the OPCW laboratories could have engaged in conspiracies and falsified the results of chemical analysis at anyone's political behest. After the OPCW published its own report, Blum stated: "five laboratories ... certified by the OPCW ... 
did the same tests and came to the same result," that "Navalny was poisoned with a chemical agent." Blum confirmed that "use of chemical weapons" and "violation of the Chemical Weapons Convention " is not determined by the presence of a substance on the List of Schedule 1 substances (CWC) : "in the end, there is no need for the substance to be on the list — the point is how it will be used." [ 154 ] Boris Zhuikov, Doctor of Chemistry and Head of the Laboratory of the Institute for Nuclear Research of the Russian Academy of Sciences, explained that although Novichok can break down in the body relatively quickly (for example, in a couple of days), when decomposed, it leaves behind specific compounds containing fragments of Novichok molecules, with which it is possible not only to confirm a poisoning by Novichok, but also to establish which substance from this group was used. The Russian team stated that it did not find Novichok itself in its analyses of Navalny's samples, while the German laboratory found traces of Novichok's presence. Zhuikov explained that these statements do not necessarily contradict each other: "the substance itself is really no longer there, but the interaction products remained." Modern methods of analysis (primarily mass spectrometry in combination with chromatography ) make it possible to detect such chemical byproducts with very high sensitivity (for example, easily detecting the presence of 1 mg of poison in a human body weighing 70 kg), and the detection of these byproducts can unambiguously identify the original poisonous substance. The German laboratory of the Bundeswehr, which analyzed Navalny's samples, had such equipment. [ 143 ] A group of six leading Western experts in the field of toxicology and chemical weapons, in an interview with the BBC Russian Service , commented that prompt medical assistance saved Navalny's life: he was given the antidote atropine (perhaps preventively) and breathing support. 
Scientists explained that there were no other victims of Navalny's poisoning since only Navalny received a high dose of the poisonous substance and was in prolonged contact with it. Experts also said that it is impossible to find the components necessary for the manufacture of Novichok on the market (some of the components are themselves banned under the Chemical Weapons Convention ), and only military laboratories can produce such poisons, since this requires special equipment and special security conditions. [ 155 ] In November 2021, 55 states parties to the OPCW published a statement that condemned the use of a "toxic chemical as a weapon" against Navalny, confirmed the presence of cholinesterase inhibitors consistent with the "Novichok" series, and urged Russia to answer the OPCW's questions, which it had failed to do since 2020. [ 156 ] The news of Navalny's poisoning caused the ruble to fall against the dollar and the euro. [ 157 ] On 20 August, UN spokesman Stéphane Dujarric expressed concern about Navalny's "sudden illness". [ 158 ] U.S. President Donald Trump said the U.S. was monitoring reports on the leader of the Russian opposition. [ 159 ] U.S. National Security Advisor Robert O'Brien said that reports of the possible poisoning of Navalny were causing "extreme concern" in Washington. [ 160 ] On 21 August, the Office of the United Nations High Commissioner for Human Rights said it expected Navalny to receive proper medical attention. [ 161 ] The German government announced that there were serious grounds to suspect that poisoning had taken place, and called for Navalny to be provided with any medical assistance that could save him. [ 162 ] The head of the European Council , Charles Michel , expressed concern about Navalny's condition. [ 163 ] French President Emmanuel Macron stated that France was ready to offer "all necessary assistance ... 
in terms of health care, asylum, protection" to Navalny and his family and demanded clarity on the circumstances surrounding the incident. German Chancellor Angela Merkel also offered any medical assistance necessary in German hospitals. Amnesty International called for an investigation into the alleged poisoning. [ 164 ] According to John Sipher, a former CIA station chief in Moscow, "Whether or not Putin personally ordered the poisoning, he is behind any and all efforts to maintain control through intimidation and murder". [ 165 ] On 24 August, German Chancellor Angela Merkel and Foreign Minister Heiko Maas in a joint statement called on the Russian authorities to clarify in detail and as transparently as possible the entirety of circumstances surrounding the incident, and to identify and prosecute those responsible. [ 166 ] [ 167 ] The head of EU diplomacy, Josep Borrell , said that the Russian authorities should immediately begin an independent and transparent investigation into the poisoning of Navalny. [ 168 ] On 25 August, French Foreign Minister Jean-Yves Le Drian said that France, on the basis of the preliminary conclusion of the Charité clinic doctors about Navalny's poisoning, considered the incident a criminal act and called for finding and punishing those responsible. [ 169 ] [ 170 ] U.S. Deputy Secretary of State Stephen Biegun expressed deep concern about Navalny's condition, as well as the impact of reports of his poisoning on civil society in Russia. The American diplomat also stressed the importance of transparency and freedom of speech in any democratic state. Biegun said that "if Navalny's poisoning is confirmed, the U.S. could take steps that would exceed Washington's response to findings of Russian meddling in the 2016 U.S. presidential election." 
[ 171 ] On 25 August, businessman Yevgeny Prigozhin , who had ties to Putin and had been nicknamed "Putin's chef", was quoted as saying that he intended to enforce a court decision the previous year that required Navalny, his associate Lyubov Sobol and his Anti-Corruption Foundation to pay 88 million rubles in damages to the Moskovsky Shkolnik company over a video investigation. Prigozhin had bought the debt so that Navalny and his associate would owe him directly. Prigozhin was quoted by the company as saying "I intend to strip this group of unscrupulous people of their clothes and shoes" and that if Navalny survived, Navalny would be liable "according to the full severity of Russian law". [ 172 ] On 26 August, British Prime Minister Boris Johnson and NATO Secretary General Jens Stoltenberg joined in demanding a transparent investigation. According to Johnson, Navalny's poisoning "shocked the world," and Stoltenberg saw no reason to question the conclusions of the Charité doctors. [ 173 ] [ 174 ] On 2 September, after the German government officially announced that the pharmaceutical and toxicology laboratory of the Bundeswehr found traces of poison from the Novichok group in Alexey Navalny's body, [ 69 ] German Chancellor Angela Merkel issued a statement in which she called Navalny's poisoning an attempt to silence him: "Someone tried to silence [Mr Navalny] and in the name of the whole German government I condemn that in the strongest terms." [ 71 ] Merkel said that "Mr. Navalny ha[d] been the victim of a crime" which "raise[d] very serious questions that only the Russian government c[ould] and must answer". [ 3 ] The European External Action Service in a statement condemned the poisoning and said that it is "essential that the Russian government investigates thoroughly and in a transparent manner the assassination attempt of Mr Navalny". 
[ 175 ] Boris Johnson demanded that Russia provide an explanation and said that he considered it "outrageous that a chemical weapon was used against Alexey Navalny." He promised to "work with international partners to ensure justice is done." [ 176 ] Jean-Yves Le Drian condemned the "shocking and irresponsible" [ 177 ] use of a Novichok poisoning agent and said it was in violation of the ban on the use of chemical weapons. [ 178 ] The Italian Foreign Ministry "condemned with force" the poisoning of Navalny, called this act a "crime", and, expressing "profound concern and indignation", demanded an explanation from Russia. [ 176 ] Stephen Biegun stated that the U.S. finds the German conclusion about the use of Novichok "very credible" and "deeply concerning". [ 34 ] Navalny's chief of staff, Leonid Volkov , stated "In 2020, poisoning Navalny with Novichok is the same as leaving an autograph at the scene of the crime". [ 34 ] On 3 September, the European Council called the incident an "assassination attempt". [ 179 ] [ 180 ] After the German government concluded that Navalny was poisoned by Novichok, the wife of British policeman Nick Bailey, who was exposed to Novichok after the poisoning of Sergei and Yulia Skripal , tweeted, "It's been almost 2½ years after the events in Salisbury and there has been no justice for Dawn and her family and none for the Skripals, Charlie or us. And now it's happened again". [ 181 ] On 4 September, Jens Stoltenberg stated: "Time and again, we have seen opposition leaders and critics of the Russian regime attacked, and their lives threatened. Some have even been killed. So this is not just an attack on an individual, but on fundamental democratic rights. It is a serious breach of international law, which demands an international response." He also asked the Russian authorities to fully cooperate with an impartial international investigation. 
[ 182 ] On 5 September, Donald Trump announced that the United States should soon receive documents from Germany on the Navalny case, which would allow Washington to determine its position. He noted that he had no reason to question Germany's conclusions that Navalny was poisoned by Novichok, and stressed that if the poisoning was confirmed, it would anger him. [ 183 ] On 6 September, Heiko Maas said that Germany was planning to discuss possible sanctions against Russia if the Kremlin did not provide an explanation soon, saying that any sanctions should be "targeted". Maas also said that there were "several indications" that Russian authorities were behind the poisoning. [ 184 ] He also said that a lack of support from Russia in the investigation could "force" Germany to change its position on the Nord Stream 2 gas pipeline from Russia to Germany. [ 185 ] On 8 September, UN High Commissioner for Human Rights Michelle Bachelet called on Russia "to carry out, or fully cooperate with, a thorough, transparent, independent and impartial investigation", after German specialists said they had "unequivocal proof" that he was poisoned with a Novichok nerve agent. [ 186 ] [ 187 ] In a March 2021 report, the East StratCom Task Force of the European External Action Service registered an increase in false information propagated in Russia about Germany, a result of the deterioration in German-Russian relations that had developed since the poisoning. [ 188 ] [ 189 ] In 2020, the Task Force reported over 300 cases of false information targeting Alexei Navalny, most of which appeared after the poisoning. [ 190 ] Political scientist Nikolai Petrov, senior fellow at the Royal Institute of International Affairs (Chatham House) and professor of political science at the Higher School of Economics , commenting on the poisoning of Navalny for The New York Times , noted that in the Kremlin "no one else causes such hostility and fear as Navalny". 
This means that "there is a very long list of potential enemies" who may wish for Navalny's death, or at least want to render him incapacitated. However, Navalny is such a prominent figure in Russia that none of his personal enemies would have dared to take such radical measures against him without, "at least, the tacit consent of Putin." According to Petrov, in Russia there is a system "like in the mafia: nothing can be done without the approval and guarantees of immunity from the boss. I'm not saying that Putin directly ordered the poisoning of him, but no one can act without making sure the boss is happy and punishes them." [ 191 ] The opinion that Putin was personally involved in the poisoning of Navalny was also expressed by the political scientist Valery Solovei , a doctor of historical sciences. In his opinion, such operations cannot be carried out without Putin's knowledge; at the very least, Putin was aware of the plans for the poisoning. Solovei also believed that after Navalny's return to Russia, the authorities would continue to exert financial pressure on him and would try to "hit all his connections, all with whom he cooperates and interacts." [ 192 ] Mark Galeotti , an expert on Russian special services and professor emeritus at University College London , noted in an interview with Deutsche Welle that the use of Novichok is proof that either the state or "someone with a high degree of power and authority in the state" was involved in the attempt to poison Navalny. According to Galeotti, if the measures taken by the West in response to this poisoning were not effective, Moscow would continue to carry out such operations. 
[ 193 ] Rüdiger von Fritsch , former German ambassador to Russia and former vice-president of Germany's Federal Intelligence Service , commenting on the inaction of the Russian Federation in the investigation of Navalny's poisoning, said: "For more than three weeks now we have been in a situation where Russia does not help in any way in the investigation, but only puts forward accusations in reply ... The thief himself shouts the loudest, pointing in the other direction. We are faced with this for the umpteenth time. The scenario is the same: reprimands, shifting the burden of gathering evidence to others, threats, ridicule. The most important thing is not to conduct an investigation, never". [ 194 ] Sergei Erofeev, a sociologist and professor at Rutgers University , said that a group of professors from a number of recognized universities had nominated Navalny for the Nobel Peace Prize for his significant contribution to the fight for human rights. [ 195 ] Erofeev noted that although the very idea of such a nomination is not new, it acquired particular relevance in connection with the poisoning of Navalny. [ 196 ] Retired FSB General Evgeniy Savostyanov said that the poisoning of Navalny was "an act of state terrorism", in which "the president and his special services" were involved. [ 197 ] [ 198 ] Mark Galeotti , commenting on the telephone conversation between Navalny and the FSB agent, said that Navalny was able to "demonstrate how much top-secret information is available on the darknet — phone numbers, names, everything else. And that all this can be used to identify individuals. And what's more, getting them to talk about their work." [ 199 ] According to polls conducted by the Levada Center in December 2020, 78% of Russian respondents had heard about Navalny's poisoning. 
30% of the respondents believed there was no poisoning at all and that it had been staged, 19% saw a provocation by Western special services, and 15% saw an attempt by the authorities to eliminate a political opponent. 7% of respondents saw personal revenge by one of the subjects of Navalny's investigations, 6% a struggle within the Russian opposition, and 1% believed in health problems, accidental poisoning, or ordinary poisoning. 4% thought other reasons more likely, and 19% found the question hard to answer. There was a significant correlation between the belief in poisoning by the authorities and age, sources of information, and attitude towards the government in general. [ 200 ] [ 201 ] Navalny , a 2022 documentary directed by Daniel Roher, was released, recounting the days leading up to, during, and after the assassination attempt. On the review aggregator website Rotten Tomatoes , 99% of 97 critics' reviews are positive, with an average rating of 8.4/10. The website's consensus reads, " Navalny is a documentary that's as gripping as any thriller – but the real-life fight against authoritarianism that it details is deadly serious." [ 202 ] Metacritic, which uses a weighted average, assigned the film a score of 82 out of 100, based on 22 critics, indicating "universal acclaim". [ 203 ] The Guardian critic Phil Harrison awarded it 5/5 stars, calling it "... one of the most jaw-dropping things you'll ever witness", and "this terrifying documentary enters the realms of the far-fetched spy thriller – and yet it's all true". [ 204 ] New York Times film critic Ben Kenigsberg added the film to the "Critic's List" and also praised it, saying "Roher has assembled a tense and absorbing look at Navalny and his inner circle". [ 205 ]
https://en.wikipedia.org/wiki/Poisoning_of_Alexei_Navalny
The poisoning of Sergei and Yulia Skripal , also known as the Salisbury Poisonings , was a botched attempt to assassinate Sergei Skripal , a former Russian military officer and double agent for the British intelligence agencies, by poisoning in the city of Salisbury , England, on 4 March 2018. [ 5 ] [ 6 ] Sergei and his daughter, Yulia Skripal, were poisoned by means of a Novichok nerve agent . [ 7 ] Both spent several weeks in hospital in a critical condition before being discharged. A police officer, Nick Bailey, was also taken into intensive care after attending the incident, and was later discharged. [ 8 ] [ 9 ] [ 10 ] [ a ] The British government accused Russia of attempted murder and announced a series of punitive measures against Russia, including the expulsion of diplomats. The UK's official assessment of the incident was supported by 28 other countries which responded similarly. Altogether, an unprecedented 153 Russian diplomats were expelled by the end of March 2018. [ 12 ] Russia denied the accusations, expelled foreign diplomats in retaliation for the expulsion of its own diplomats, and accused Britain of the poisoning. [ 13 ] On 30 June 2018, a similar poisoning of two British nationals in Amesbury , seven miles (11 km) north of Salisbury, involved the same nerve agent. [ 14 ] [ 15 ] Charlie Rowley found a perfume bottle, later discovered to contain the agent, in a litter bin somewhere in Salisbury and gave it to Dawn Sturgess , who sprayed it on her wrist. [ 16 ] [ 17 ] Sturgess fell ill within 15 minutes and died on 8 July, but Rowley, who had also come into contact with the poison, survived. [ 18 ] British police believe this incident was not a targeted attack, but a result of the way the nerve agent was disposed of after the poisoning in Salisbury. [ 19 ] A public inquiry was launched into the circumstances of Sturgess's death. 
[ 20 ] On 5 September 2018, British authorities identified two Russian nationals, using the names Alexander Petrov and Ruslan Boshirov, as suspected of the Skripals' poisoning, [ 2 ] and alleged that they were active officers in Russian military intelligence. [ 21 ] Later, investigative website Bellingcat stated that it had positively identified Ruslan Boshirov as being the highly decorated GRU Colonel Anatoliy Chepiga , [ 22 ] that Alexander Petrov was Alexander Mishkin , also of the GRU, [ 23 ] [ 24 ] and that a third GRU officer present in the UK at the time was Denis Vyacheslavovich Sergeev , [ 25 ] [ 26 ] believed to hold the rank of major general in the GRU. The pattern of his communications while in the UK indicates that he liaised with superior officers in Moscow. [ 27 ] The attempted assassination and the subsequent exposure of the agents were an embarrassment for Putin and for Russia's spying organisation. [ 5 ] [ 28 ] It was allegedly organised by the secret Unit 29155 of the Russian GRU, under the command of Major General Andrei V. Averyanov. [ 29 ] On 27 November 2019, the Organisation for the Prohibition of Chemical Weapons (OPCW) added Novichok, the Soviet-era nerve agent used in the attack, to its list of banned substances. [ 30 ] At 09:03 the following morning, Salisbury NHS Foundation Trust declared a major incident in response to concerns raised by medical staff; shortly afterwards this became a multi-agency incident named Operation Fairline. [ 39 ] [ 40 ] Health authorities checked 21 members of the emergency services and the public for possible symptoms; [ 41 ] [ 42 ] two police officers were treated for minor symptoms, said to be itchy eyes and wheezing, while one, Detective Sergeant Nick Bailey, who had been sent to Skripal's house, was in a serious condition. [ 43 ] [ 44 ] On 22 March, Bailey was discharged from the hospital. In a statement he said "normal life for me will probably never be the same" and thanked the hospital staff. 
[ 9 ] On 26 March, Skripal and his daughter were reported to still be critically ill. [ 45 ] [ 46 ] On 29 March it was announced that Yulia's condition was improving and she was no longer in a critical condition. [ 47 ] After three weeks in a critical condition, Yulia regained consciousness and was able to speak. [ 48 ] [ 49 ] [ 50 ] Sergei was also in a critical condition until he regained consciousness one month after the attack. [ 51 ] [ 52 ] On 5 April, doctors said that Sergei was no longer in critical condition and was responding well to treatment. [ 53 ] On 9 April, Yulia was discharged from hospital and taken to a secure location. [ 54 ] [ 55 ] On 18 May, Sergei Skripal was discharged from the hospital too. [ 56 ] On 23 May, a handwritten letter and a video statement by Yulia, her first since the poisoning, were released to the Reuters news agency. She stated that she was lucky to be alive after the poisoning and thanked the staff of the Salisbury hospital. She described her treatment as slow, heavy and extremely painful and mentioned a scar on her neck, apparently from a tracheotomy . She expressed hope that someday she would return to Russia. She thanked the Russian embassy for its offer of assistance but said she and her father were "not ready to take it". [ 57 ] On 5 April, British authorities said that when vets were allowed into Skripal's house, which had been sealed by the police, they found two guinea pigs dead, along with a cat in a distressed state that had to be put down. [ 58 ] On 22 November the first interview with DS Bailey was released, in which he reported that he had been poisoned, despite having inspected the Skripals' house wearing a forensic suit . In addition to the poisoning, Bailey and his family had lost their home and all their possessions, because of contamination. 
Investigators said that the perfume bottle containing the Novichok nerve agent , which was later found in a bin, had held enough of the agent to potentially kill thousands of people. [ 59 ] In early 2019, building contractors erected a scaffolding "sealed frame" over the house and the garage of Skripal's home. A military team then dismantled and removed the roofs of both buildings over the course of two weeks. Cleaning and decontamination were followed by rebuilding over a period of four months. [ 60 ] [ 61 ] On 22 February 2019, government officials announced that the last of the 12 sites that had been undergoing an intense and hazardous clean-up – Skripal's house – had been judged safe. [ 62 ] In May 2019, Sergei Skripal made a phone call and left a voice message for his niece Viktoria, living in Russia. This was the first time since the poisoning that his voice had been heard by the public. [ 63 ] In August 2019 it was confirmed that a second police officer had been poisoned while investigating, though only with trace amounts. [ 64 ] The first public response to the poisoning came on 6 March. It was agreed under the Counter Terrorism Policing network that the Counter Terrorism Command based within the Metropolitan Police would take over the investigation from Wiltshire Police . Assistant Commissioner Mark Rowley, head of Counter Terrorism Policing, appealed for witnesses to the incident following a COBR meeting chaired by Home Secretary Amber Rudd . [ 65 ] Samples taken from the scene tested positive at the Defence Science and Technology Laboratory at Porton Down for a "very rare" nerve agent, according to the UK Home Secretary. [ 66 ] 180 military experts in chemical warfare defence and decontamination, as well as 18 vehicles, were deployed on 9 March to assist the Metropolitan Police in removing vehicles and objects from the scene and looking for any further traces of the nerve agent.
The personnel were drawn mostly from the Army, including instructors from the Defence CBRN Centre and the 29 Explosive Ordnance Disposal and Search Group , as well as from the Royal Marines and Royal Air Force . The vehicles included TPz Fuchs operated by Falcon Squadron of the Royal Tank Regiment . [ 67 ] On 11 March, the UK government advised those present at either The Mill pub or the Zizzi restaurant in Salisbury on 4 and 5 March to wash or wipe their possessions, emphasising that the risk to the general public was low. [ 68 ] [ 69 ] Several days later, on 12 March, Prime Minister Theresa May said the agent had been identified as one of the Novichok family of agents, believed to have been developed in the 1980s by the Soviet Union . [ 70 ] [ 71 ] According to the Russian ambassador to the UK, Alexander Yakovenko , the British authorities identified the agent as A-234 , [ 72 ] derived from an earlier version known as A-232. [ 73 ] By 14 March, the investigation was focused on Skripal's home and car, the bench where the two fell unconscious, the restaurant in which they dined and the pub where they had drinks. [ 74 ] A recovery vehicle was removed by the military from Gillingham in Dorset on 14 March, in connection with the poisoning. [ 75 ] [ 76 ] Subsequently, there was speculation within the British media that the nerve agent had been planted in one of the personal items in Yulia Skripal's suitcase before she left Moscow for London, [ 77 ] and in US media that it had been planted in their car. [ 78 ] [ 79 ] Ahmet Üzümcü , Director-General of the Organisation for the Prohibition of Chemical Weapons (OPCW), said on 20 March that it would take "another two to three weeks to finalise the analysis" of samples taken in the Skripal poisoning case. [ 80 ] On 22 March, the Court of Protection gave permission for new blood samples to be obtained from Yulia and Sergei Skripal for use by the OPCW.
[ 81 ] [ 82 ] By 28 March, the police investigation had concluded that the Skripals were poisoned at Sergei's home, with the highest concentration of the nerve agent found on the handle of his front door. [ 83 ] On 12 April the OPCW confirmed the UK's analysis of the type of nerve agent and reported that it was of a "high purity", stating that the "name and structure of the identified toxic chemical are contained in the full classified report of the Secretariat, available to States Parties". [ 84 ] [ 85 ] [ 86 ] A declassified letter from the UK's national security adviser , Sir Mark Sedwill , to NATO Secretary General Jens Stoltenberg , stated that Russian military intelligence had been hacking Yulia Skripal's email account since at least 2013 and had tested methods for delivering nerve agents, including via door handles. [ 87 ] The Department for Environment confirmed that the nerve agent was delivered "in a liquid form". It said eight sites required decontamination, which would take several months to complete and cost millions of pounds. The BBC reported that, according to experts, the nerve agent does not evaporate or disappear over time, and that intense cleaning with caustic chemicals is required to remove it. [ 88 ] [ 89 ] The Skripals' survival was possibly due to the weather – there had been heavy fog and high humidity, and according to its inventor and other scientists, moisture weakens the potency of this type of toxin. [ 90 ] [ 91 ] [ 92 ] On 22 April 2018, it was reported that British counter-terror police had identified a suspect in the poisoning: a former Federal Security Service (FSB) officer (reportedly a 54-year-old former FSB captain [ citation needed ] ) who acted under several code names, including "Gordon" and "Mihails Savickis". According to detectives, he led a team of six Russian assassins who organised the chemical weapons attack. [ 93 ] On 1 May 2018, however, Sedwill reported that UK intelligence and police agencies had failed to identify the individual or individuals who carried out the attack.
[ 94 ] On 3 May 2018, the head of the OPCW, Ahmet Üzümcü , informed the New York Times that he had been told that about 50–100 grams of the nerve agent was thought to have been used in the attack, which indicated it was likely created for use as a weapon and was enough to kill a large number of people. [ 95 ] The next day, however, the OPCW issued a correction, stating that the "quantity should probably be characterised in milligrams", though "the OPCW would not be able to estimate or determine the amount of the nerve agent that was used". [ 96 ] [ 97 ] On 19 July the Press Association reported that police believed they had identified "several Russians" as the suspected perpetrators of the attack. They had been identified through CCTV footage cross-checked with border entry data . [ 98 ] On 6 August 2018, it was reported that the British government was "poised to submit an extradition request to Moscow for two Russians suspected of carrying out the Salisbury nerve agent attack". [ needs update ] The Metropolitan Police used two super recognisers to identify the suspects after trawling through up to 5,000 hours of CCTV footage from Salisbury and numerous airports across the country. [ 99 ] [ 100 ] British Prime Minister Theresa May announced in the Commons the same day that British intelligence services had identified the two suspects as officers in the G. U. Intelligence Service (formerly known as the GRU), and that the assassination attempt was not a rogue operation and was "almost certainly" approved at a senior level of the Russian government. [ 2 ] [ 101 ] May also said Britain would push for the EU to agree new sanctions against Russia. On 5 September 2018, the Russian news site Fontanka reported that the numbers on leaked passport files for Petrov and Boshirov were only three digits apart, and fell within a range that includes the passport files of a Russian military official expelled from Poland for spying.
[ 102 ] [ 103 ] It is not known how the passport files were obtained, but Andrew Roth, the Moscow correspondent for The Guardian , commented that "If the reporting is confirmed, it would be a major blunder by the intelligence agency, allowing any country to check passport data for Russians requesting visas or entering the country against a list of nearly 40 passport files of suspected GRU officers." [ 104 ] On 14 September 2018, the online platforms Bellingcat and The Insider Russia observed that in Petrov's leaked passport files there is no record of a residential address or any identification papers prior to 2009, suggesting that the name is an alias created that year; the analysis also noted that Petrov's dossier is stamped "Do not provide any information" and carries the handwritten annotation "S.S.", a common abbreviation in Russian for "top secret". [ 105 ] On 15 September 2018, the Russian opposition newspaper Novaya Gazeta reported finding in Petrov's passport files a cryptic number that appeared to be a telephone number associated with the Russian Defence Ministry , most likely the Military Intelligence Directorate. [ 106 ] As part of the announcement, Scotland Yard and the Counter Terrorism Command released a detailed track of the individuals' 48 hours in the UK. [ 107 ] This covered their arrival from Moscow at Gatwick Airport , a trip to Salisbury by train the day before the attack, stated by police to be for reconnaissance , a trip to Salisbury by train on the day of the attack, and their return to Moscow via Heathrow Airport . [ 108 ] [ 21 ] The two spent both nights at the City Stay Hotel, next to Bow Church DLR station in Bow , East London. Novichok was found in their hotel room after police sealed it off on 4 May 2018. Neil Basu, National Lead for Counter Terrorism Policing, said that tests were carried out on their hotel room and it was "deemed safe".
[ 109 ] [ 110 ] On 26 September 2018, the real identity of the suspect named by police as Ruslan Boshirov was revealed as Colonel Anatoliy Vladimirovich Chepiga by The Daily Telegraph , citing reporting by itself and Bellingcat, with Petrov holding a more junior rank in the GRU. [ 111 ] The 39-year-old had been made a Hero of the Russian Federation by decree of the President in 2014. Two European security sources confirmed that the details were accurate. [ 112 ] [ 113 ] The BBC commented: "The BBC understands there is no dispute over the identification." [ 3 ] The Secretary of State for Defence Gavin Williamson wrote: "The true identity of one of the Salisbury suspects has been revealed to be a Russian Colonel. I want to thank all the people who are working so tirelessly on this case." [ 114 ] However, that statement was subsequently deleted from Twitter. [ 115 ] On 8 October 2018, the real identity of the suspect named by police as Alexander Petrov was revealed as Alexander Mishkin . [ 1 ] [ 116 ] [ 117 ] [ 118 ] On 22 November 2018, more CCTV footage of the two suspects walking in Salisbury was published by the police. [ 119 ] On 19 December 2018, Mishkin (a.k.a. Petrov) and Chepiga (a.k.a. Boshirov) were added to the sanctions list of the United States Treasury Department , along with 13 other members of the GRU. [ 120 ] [ 121 ] [ 122 ] On 6 January 2019, the Telegraph reported that the British authorities had established all the essential details of the assassination attempt, including the chain of command leading up to Vladimir Putin . [ 123 ] In February, a third GRU officer present in the UK at the time, Denis Sergeev , was identified. [ 25 ] [ 26 ] In September 2021, the BBC reported that the Crown Prosecution Service had authorised charges against the three men but that formal charges could not be laid unless the men were arrested.
[ 124 ] The charges authorised against the three men are conspiracy to murder, attempted murder , causing grievous bodily harm and use and possession of a chemical weapon. [ 124 ] Within days of the attack, political pressure began to mount on Theresa May's government to take action against the perpetrators, and most senior politicians appeared to believe that the Russian government was behind the attack. [ 125 ] [ 126 ] The situation was additionally sensitive for Russia, as Russian president Vladimir Putin was facing his fourth presidential election in mid-March and Russia was to host the 2018 FIFA World Cup football competition in June. [ 126 ] [ 127 ] Responding on 6 March to an urgent question from Tom Tugendhat , the chairman of the Foreign Affairs Select Committee of the House of Commons , who suggested that Moscow was conducting "a form of soft war against the West", Foreign Secretary Boris Johnson said the government would "respond appropriately and robustly" if the Russian state was found to have been involved in the poisoning. [ 128 ] [ 129 ] UK Home Secretary Amber Rudd said on 8 March 2018 that the use of a nerve agent on UK soil was a "brazen and reckless act" of attempted murder "in the most cruel and public way". [ 130 ] Prime Minister Theresa May said in the House of Commons on 12 March: It is now clear that Mr Skripal and his daughter were poisoned with a military-grade nerve agent of a type developed by Russia. This is part of a group of nerve agents known as 'Novichok'.
Based on the positive identification of this chemical agent by world-leading experts at the Defence Science and Technology Laboratory at Porton Down; our knowledge that Russia has previously produced this agent and would still be capable of doing so; Russia's record of conducting state-sponsored assassinations; and our assessment that Russia views some defectors as legitimate targets for assassinations; the Government has concluded that it is highly likely that Russia was responsible for the act against Sergei and Yulia Skripal. Mr Speaker, there are therefore only two plausible explanations for what happened in Salisbury on 4 March. Either this was a direct act by the Russian State against our country. Or the Russian government lost control of this potentially catastrophically damaging nerve agent and allowed it to get into the hands of others. [ 70 ] May also said that the UK government requested that Russia explain which of these two possibilities it was by the end of 13 March 2018. [ 70 ] She also said: "[T]he extra-judicial killing of terrorists and dissidents outside Russia were given legal sanction by the Russian Parliament in 2006. And of course Russia used radiological substances in its barbaric assault on Mr Litvinenko ."  She said that the UK government would "consider in detail the response from the Russian State" and in the event that there was no credible response, the government would "conclude that this action amounts to an unlawful use of force by the Russian State against the United Kingdom" and measures would follow. [ 70 ] British media billed the statement as "Theresa May's ultimatum to Putin". [ 7 ] [ 131 ] On 13 March 2018, UK Home Secretary Amber Rudd ordered an inquiry by the police and security services into alleged Russian state involvement in 14 previous suspicious deaths of Russian exiles and businessmen in the UK. 
[ 132 ] May unveiled a series of measures on 14 March 2018 in retaliation for the poisoning attack, after the Russian government refused to meet the UK's request for an account of the incident. One of the chief measures was the expulsion of 23 Russian diplomats, which she presented as "actions to dismantle the Russian espionage network in the UK", as these diplomats had been identified by the UK as "undeclared intelligence agents". [ 133 ] [ 134 ] The BBC reported a number of other responses as well. [ 135 ] [ 136 ] May said that some measures which the government planned could "not be shared publicly for reasons of national security". [ 133 ] In his parliamentary response to May's statement, Jeremy Corbyn cast doubt on blaming Russia for the attack before the results of an independent investigation, which provoked criticism from some MPs, including members of his own party. [ 138 ] [ 139 ] A few days later, Corbyn was satisfied that the evidence pointed to Russia. [ 140 ] He supported the expulsion but argued that a crackdown on money laundering by UK financial firms on behalf of Russian oligarchs would be a more effective measure against "the Putin regime" than the Conservative government's plans. [ 141 ] Corbyn pointed to the pre- Iraq War judgements about Iraq and weapons of mass destruction as reason to be suspicious. [ 142 ] The United Nations Security Council called an urgent meeting on 14 March 2018 at the initiative of the UK to discuss the Salisbury incident. [ 143 ] [ 37 ] According to the Russian mission's press secretary, the draft press statement introduced by Russia at the United Nations Security Council meeting was blocked by the UK. [ 144 ] The UK and the US blamed Russia for the incident during the meeting, with the UK accusing Russia of breaking its obligations under the Chemical Weapons Convention . [ 145 ] Separately, the White House fully supported the UK in attributing the attack to Russia, as well as the punitive measures taken against Russia.
The White House also accused Russia of undermining the security of countries worldwide. [ 146 ] [ 147 ] The UK, and subsequently NATO , requested that Russia provide "full and complete disclosure" of the Novichok programme to the OPCW. [ 148 ] [ 149 ] [ 150 ] On 14 March 2018, the government stated it would supply a sample of the substance used to the OPCW once UK legal obligations arising from the criminal investigation permitted. [ 151 ] Boris Johnson said on 16 March that it was "overwhelmingly likely" that the poisoning had been ordered directly by Russian president Vladimir Putin, which marked the first time the British government had accused Putin of personally ordering the poisoning. [ 152 ] According to the UK Foreign Office , the UK attributed the attack to Russia based on Porton Down 's determination that the chemical was Novichok, additional intelligence, and a lack of alternative explanations from Russia. [ 153 ] The Defence Science and Technology Laboratory announced that it was "completely confident" that the agent used was Novichok, but that it still did not know the "precise source" of the agent. [ 154 ] [ 155 ] The UK had held an intelligence briefing with its allies in which it stated that the Novichok chemical used in the Salisbury poisoning was produced at a chemical facility in the town of Shikhany , Saratov Oblast , Russia. [ 156 ] On 6 March 2018, Andrey Lugovoy , a deputy of Russia's State Duma and the alleged killer of Alexander Litvinenko , said in an interview with the Echo of Moscow : "Something constantly happens to Russian citizens who either run away from Russian justice, or for some reason choose for themselves a way of life they call a change of their Motherland. So the more Britain accepts on its territory every good-for-nothing, every scum from all over the world, the more problems they will have."
[ 157 ] [ 158 ] Russian Foreign Minister Sergey Lavrov on 9 March rejected Britain's claim of Russia's involvement in Skripal's poisoning and accused the United Kingdom of spreading propaganda. [ 159 ] [ 160 ] Lavrov said that Russia was "ready to cooperate" and demanded access to the samples of the nerve agent used to poison Skripal. The request was rejected by the British government. [ 161 ] Following Theresa May's 12 March statement in Parliament – in which she gave President Putin's administration until midnight of the following day to explain how a former spy was poisoned in Salisbury, failing which she would conclude it was an "unlawful use of force" by the Russian state against the UK [ 7 ] – Lavrov, talking to the Russian press on 13 March, [ 162 ] [ 163 ] [ 164 ] said that the procedure stipulated by the Chemical Weapons Convention should be followed, under which Russia was entitled to access to the substance in question and to ten days to respond. [ 162 ] [ 165 ] [ 166 ] [ 167 ] On 17 March, Russia announced that it was expelling 23 British diplomats and ordered the closure of the UK's consulate in St Petersburg and the British Council office in Moscow, stopping all British Council activities in Russia. [ 168 ] Russia officially declared the poisoning to be a fabrication and a "grotesque provocation rudely staged by the British and U.S. intelligence agencies" to undermine the country. [ 169 ] [ 170 ] The Russian government and the embassy of Russia in the United Kingdom repeatedly requested access to the Skripals and sought to offer consular assistance; these requests and offers were respectively denied or declined. [ 171 ] [ 172 ] [ 173 ] [ 174 ] On 5 September 2018, Putin's Press Secretary, Dmitry Peskov , stated that Russia had not received any official request from Britain for assistance in identifying the two suspected Russian GRU military intelligence officers that Scotland Yard believes carried out the Skripal attack.
The same day, the Foreign Ministry of Russia asserted that the UK ambassador in Moscow , Laurie Bristow , had said that London would not provide Russia with the suspects' fingerprints, passport numbers, visa numbers, or any extra data. [ 175 ] [ better source needed ] On 12 September 2018, Putin, while answering questions at the plenary meeting of the 4th Eastern Economic Forum in Russia's Far Eastern port city of Vladivostok , said that the identities of both men London suspected of involvement in the Skripal case were known to the Russian authorities, and that both were civilians who had done nothing criminal. He also said he would like the men to come forward to tell their story. [ 176 ] [ 177 ] In a 13 September 2018 interview on the state-funded television channel RT , the accused claimed to be sports nutritionists who had gone to Salisbury merely to see the sights and look for nutrition products, saying that they took a second day-trip to Salisbury because slush had dampened their first one. [ 178 ] On 26 September, the same day one of the suspects was identified as a colonel in the GRU, Lavrov urged the British authorities to cooperate in the investigation of the case, said Britain had given no proof of Russia's guilt, and suggested that Britain had something to hide. [ 179 ] [ 180 ] On 25 September, the FSB began searching for Ministry of Internal Affairs (MVD) officers who had provided journalists with foreign passport and flight information about the suspects [ needs update ] . [ 181 ] For a few days following the poisoning, the story was discussed by websites, radio stations and newspapers, but Russia's main state-run national TV channels largely ignored the incident. [ 182 ] [ 183 ] Eventually, on 7 March, anchor Kirill Kleimyonov of the state television station Channel One Russia 's current affairs programme Vremya mentioned the incident, attributing the allegation to Boris Johnson.
[ 184 ] After speaking of Johnson disparagingly, Kleimyonov said that being "a traitor to the motherland" was one of the most hazardous professions and warned: "Don't choose England as a next country to live in. Whatever the reasons, whether you're a professional traitor to the motherland or you just hate your country in your spare time, I repeat, no matter, don't move to England. Something is not right there. Maybe it's the climate, but in recent years there have been too many strange incidents with a grave outcome. People get hanged, poisoned, they die in helicopter crashes and fall out of windows in industrial quantities." [ 128 ] [ 182 ] [ 184 ] [ 185 ] [ 186 ] Kleimyonov's commentary was accompanied by a report highlighting previous suspicious Russia-related deaths in the UK, namely those of financier Alexander Perepilichny , businessman Boris Berezovsky , ex-FSB officer Alexander Litvinenko and radiation expert Matthew Puncher. [ 184 ] Puncher had discovered that Litvinenko was poisoned by polonium ; Puncher died in 2016, five months after a trip to Russia. [ 187 ] Dmitry Kiselyov , a pro-Kremlin TV presenter, said on 11 March that the poisoning of Sergei Skripal, who was "completely wrung out and of little interest" as a source, was only advantageous to the British, to "nourish their Russophobia " and organise a boycott of the FIFA World Cup scheduled for June 2018. Kiselyov referred to London as a "pernicious place for Russian exiles". [ 188 ] [ 189 ] [ 190 ] The prominent Russian television hosts' warnings to Russians living in the UK were echoed by a similar direct warning from a senior member of the Russian Federation Council , Andrey Klimov , who said: "It's going to be very unsafe for you." [ 166 ] Claims made by Russian media were fact-checked by UK media organisations. [ 191 ] [ 192 ] An interview with two men claiming to be the suspects named by the UK was aired on RT on 13 September 2018 with RT editor Margarita Simonyan .
[ 193 ] They said they were ordinary tourists who had wished to see Stonehenge , Old Sarum , and the "famous ... 123-metre spire" of Salisbury Cathedral . They also said that they "maybe approached Skripal's house, but we didn't know where it was located," and denied using Novichok, which they had allegedly transported in a fake perfume bottle, saying, "Is it silly for decent lads to have women's perfume? The customs are checking everything, they would have questions as to why men have women's perfume in their luggage." [ 194 ] Although Simonyan avoided most questions about the two men's backgrounds, she hinted that they might be gay by asking, "All footage features you two together ... What do you have in common that you spend so much time together?" [ 194 ] The New York Times interpreted the hint by noting that "The possibility that Mr. Petrov and Mr. Boshirov could be gay would, for a Russian audience, immediately rule out the possibility that they serve as military intelligence officers." [ 178 ] On 22 August 2022, the editor-in-chief of the Kremlin-backed RT network, Margarita Simonyan , appeared to lend support to the suggestion that Russia had been involved in the poisoning, with her remark "I am sure we can find professionals willing to admire the famous spires in the vicinity of Tallinn" – seen as a reference to the agents' claims that they were sightseeing in Salisbury. [ 195 ] On 3 April 2018, Gary Aitkenhead, the chief executive of the Government's Defence Science and Technology Laboratory (Dstl) at Porton Down, which was responsible for testing the substance involved in the case, said they had established the agent was Novichok, or from that family, but had been unable to verify the "precise source" of the nerve agent, and that they had "provided the scientific info to Government who have then used a number of other sources to piece together the conclusions you have come to".
[ 196 ] [ 197 ] Aitkenhead refused to comment on whether the laboratory had developed or maintains stocks of Novichok. [ 197 ] He also dismissed speculation that the substance could have come from Porton Down: "There is no way anything like that could have come from us or left the four walls of our facility." [ 197 ] Aitkenhead stated the creation of the nerve agent was "probably only within the capabilities of a state actor", and that there was no known antidote. [ 196 ] [ 155 ] Vil Mirzayanov , a former Soviet scientist who worked at the research institute that developed the Novichok class of nerve agents and now lives in the United States, believes that hundreds of people could have been affected by residual contamination in Salisbury. He said that Sergei and Yulia Skripal, if poisoned with Novichok, would be left with debilitating health issues for the rest of their lives. He also criticised the response of Public Health England , saying that washing personal belongings was insufficient to remove traces of the chemical. [ 198 ] [ 199 ] Two other Russian scientists who now live in Russia and had been involved in Soviet-era chemical weapons development, Vladimir Uglev and Leonid Rink, were quoted as saying that Novichok agents had been developed in the 1970s–1980s within the programme that was officially titled FOLIANT , while the term Novichok referred to a whole system of chemical weapons use; they, as well as Mirzayanov, who published Novichok's formula in 2008, also noted that Novichok-type agents might be synthesised in other countries. [ 200 ] [ 201 ] [ 202 ] [ 203 ] In 1995, Leonid Rink received a one-year suspended sentence for selling Novichok agents to unnamed buyers, soon after the fatal poisoning of Russian banker Ivan Kivilidi by Novichok.
[ 204 ] [ 205 ] [ 206 ] [ 207 ] A former KGB and FSB officer, Boris Karpichkov, who operated in Latvia in the 1990s and fled to the UK in 1998, [ 208 ] told ITV's Good Morning Britain that on 12 February 2018, three weeks before the Salisbury attack and on his birthday, he received a message over a burner phone from "a very reliable source" in the FSB telling him that "something bad [wa]s going to happen with [him] and seven other people, including Mr. Skripal", of whom he then knew nothing. [ 209 ] Karpichkov said he disregarded the message at the time, thinking it was not serious, as he had previously received such messages. [ 209 ] According to Karpichkov, the FSB's list includes the names of Oleg Gordievsky and William Browder . [ 208 ] [ 210 ] The Swiss Federal Intelligence Service announced on 14 September 2018 that two Russian spies had been caught in the Netherlands and expelled earlier in the year for attempting to hack into the Spiez Laboratory in the Swiss town of Spiez , a designated lab of the OPCW that had been tasked with confirming that the samples of poison collected in Salisbury were Novichok. The spies were discovered through a joint investigation by the Swiss, Dutch, and British intelligence services. The two men expelled were not the same as the Salisbury suspects. [ 211 ] [ 212 ] Following Theresa May's statement in Parliament , the US Secretary of State Rex Tillerson released a statement on 12 March that fully supported the stance of the UK government on the poisoning attack, including "its assessment that Russia was likely responsible for the nerve agent attack that took place in Salisbury". [ 213 ] The following day, US President Donald Trump said that Russia was likely responsible.
[ 214 ] United States Ambassador to the United Nations Nikki Haley stated at the Security Council briefing on 14 March 2018: "The United States believes that Russia is responsible for the attack on two people in the United Kingdom using a military-grade nerve agent". [ 215 ] Following the United States National Security Council 's recommendation, [ 216 ] President Trump, on 26 March, ordered the expulsion of sixty Russian diplomats (referred to by the White House as "Russian intelligence officers" [ 217 ] ) and the closure of the Russian consulate in Seattle. [ 218 ] [ 219 ] The action was cast as being "in response to Russia's use of a military-grade chemical weapon on the soil of the United Kingdom, the latest in its ongoing pattern of destabilising activities around the world". [ 217 ] On 6 August, [ 220 ] the US State Department concluded that Russia was behind the poisoning. [ 220 ] On 8 August, [ 220 ] five months after the poisoning, [ 221 ] the US government agreed to place sanctions on Russian banks and exports. [ 222 ] [ 223 ] [ 224 ] [ 221 ] The sanctions, which are enforced under the Chemical and Biological Weapons Control and Warfare Elimination Act of 1991 (CBW Act), [ 220 ] were planned to come into effect on 27 August. [ 225 ] However, these sanctions were not implemented by the Trump administration. [ 226 ] European Commission Vice-President Frans Timmermans argued for "unequivocal, unwavering and very strong" European solidarity with the United Kingdom when speaking to lawmakers in Strasbourg on 13 March. [ 227 ] Federica Mogherini , the High Representative of the Union for Foreign Affairs and Security Policy , expressed shock and offered the bloc's support. [ 228 ] MEP and leader of the Alliance of Liberals and Democrats for Europe in the European Parliament Guy Verhofstadt proclaimed solidarity with the British people.
[ 229 ] During a meeting of the Foreign Affairs Council on 19 March, all foreign ministers of the European Union declared in a joint statement that the "European Union expresses its unqualified solidarity with the UK and its support, including for the UK's efforts to bring those responsible for this crime to justice." In addition, the statement pointed out that "The European Union takes extremely seriously the UK Government's assessment that it is highly likely that the Russian Federation is responsible." [ 230 ] Norbert Röttgen , a former federal minister in Angela Merkel's government and current chairman of Germany's parliamentary foreign affairs committee, said the incident demonstrated the need for Britain to review its open-door policy towards Russian capital of dubious origin. [ 229 ] Sixteen EU countries expelled 33 Russian diplomats on 26 March. [ 231 ] [ 232 ] On 21 January 2019, the European Union officially sanctioned four Russians suspected of carrying out the attack. The head of the GRU, Igor Kostyukov , and the deputy head, Vladimir Alexseyev , were both sanctioned along with Mishkin and Chepiga. The sanctions banned them from travelling to the EU, froze any assets they held there, and banned any person or company in the EU from providing financial support to those sanctioned. [ 233 ] Albania, Australia, Canada, Georgia, North Macedonia, Moldova, Norway and Ukraine expelled a total of 27 Russian diplomats who were believed to have been intelligence officers. [ 234 ] Australia's Prime Minister Malcolm Turnbull said, "We responded with the solidarity we've always shown when Britain's freedoms have been challenged." [ 235 ] The New Zealand Government also issued a statement supporting the actions, noting that it would have expelled any Russian intelligence agents who had been detected in the country. [ 236 ] NATO issued an official response to the attack on 14 March.
The alliance expressed its deep concern over the first offensive use of a nerve agent on its territory since its foundation and said that the attack was in breach of international treaties. It called on Russia to fully disclose its research into the Novichok agent to the OPCW. [ 149 ] Jens Stoltenberg , NATO Secretary General , announced on 27 March that NATO would be expelling seven Russian diplomats from the Russian mission to NATO in Brussels. In addition, three unfilled positions at the mission were denied accreditation by NATO. Russia blamed the US for the NATO response. [ 237 ] The leaders of France, Germany, the United States and the United Kingdom released a joint statement on 15 March which supported the UK's stance on the incident, stating that it was "highly likely that Russia was responsible" and calling on Russia to provide complete disclosure to the OPCW concerning its Novichok nerve agent programme. [ 238 ] [ 239 ] On 19 March, the European Union also issued a statement strongly condemning the attack and stating it "takes extremely seriously the UK Government's assessment that it is highly likely that the Russian Federation is responsible". [ 230 ] On 6 September 2018, Canada, France, Germany and the United States issued a joint statement saying they had "full confidence" that the Salisbury attack was orchestrated by Russia's Main Intelligence Directorate and "almost certainly approved at a senior government level" and urged Russia to provide full disclosure of its Novichok programme to the OPCW. [ 240 ] By the end of March 2018, a number of countries and other organisations had expelled a total of more than 150 Russian diplomats in a show of solidarity with the UK. According to the BBC it was "the largest collective expulsion of Russian intelligence officers in history". [ 231 ] [ 237 ] [ 241 ] The UK expelled 23 Russian diplomats on 14 March 2018.
Three days later, Russia expelled an equal number of British diplomats and ordered the closure of the UK consulate in St. Petersburg and of the British Council in Russia. [ 168 ] Nine countries expelled Russian diplomats on 26 March: the US, Canada, Ukraine and Albania, along with six other EU nations. The following day, several nations inside and outside the EU, as well as NATO, responded similarly. By 30 March, Russia had expelled an equal number of diplomats from most of the nations that had expelled Russian diplomats. By that time, Belgium, Montenegro, Hungary and Georgia had also expelled one or more Russian diplomats. Additionally on 30 March, Russia reduced the size of the total UK mission's personnel in Russia to match that of the Russian mission to the UK. Bulgaria, Luxembourg, Malta, Portugal, Slovakia, Slovenia and the European Union itself did not expel any Russian diplomats but recalled their ambassadors from Russia for consultations. [ 242 ] [ 243 ] [ 244 ] [ 245 ] [ 246 ] [ 247 ] Furthermore, Iceland decided to diplomatically boycott the 2018 FIFA World Cup held in Russia. [ 248 ] This cooperation of countries in the mass expulsion of Russian diplomats was used again just four years later, in 2022, as the format for the diplomatic expulsions during the Russo-Ukrainian War . The failed poisoning of the Skripals became an embarrassment for Putin, and ended up causing severe damage to Russia's spying organisations. [ 5 ] [ 28 ] [ 257 ] [ 258 ] Once Bellingcat exposed the agents' names in September, Moscow targeted interior ministry leaks that may have helped expose dozens of undercover operatives. [ 5 ] It also prompted fury in the Kremlin, the result of which was a purge in the senior ranks of the GRU. [ 259 ] Furthermore, a number of botched attempts by the GRU were also revealed in October – the Sandworm cybercrime unit had attempted unsuccessfully to hack the UK Foreign Office and the Porton Down facility within a month of the poisonings.
Another hack was attempted in April, this time on the headquarters of the Organisation for the Prohibition of Chemical Weapons (OPCW) in the Netherlands. [ 260 ] The OPCW was investigating the poisonings in the UK, as well as the Douma chemical attack in Syria. Four Russian intelligence officers, believed to have been part of a 'GRU cleanup' unit for the earlier failed operations, travelled to The Hague on diplomatic passports. The incident was thwarted by Dutch military intelligence, who had been tipped off by British intelligence officials. The four tried – and failed – to destroy their equipment and were immediately put on a plane back to Moscow. [ 261 ] Soon after these events Vladimir Putin's tone changed; at the Russian Energy Forum in Moscow he described Skripal as "scum and a traitor to the motherland". [ 262 ] The 2018 disclosure that GRU agents had been issued passports with sequential numbers led to a number of other Russian agents fleeing the West and returning to Russia, including Maria Adela Kuhfeldt Rivera – real name Olga Kolobova, a deep cover agent in Naples. [ 263 ] Another was Sergey Vladimirovich Cherkasov , arrested and jailed in Brazil in 2022. [ 264 ] Russia's chief of military intelligence, Igor Korobov , and his agency thus came under heavy criticism. [ 265 ] Putin was angered over the identification of the agents and the botched operations, and in a meeting apparently scolded Korobov. [ 266 ] Soon after this, Korobov collapsed at home in sudden "ill health" (according to journalist Sergey Kanev) and died in November after a "long illness". GRU defector Viktor Suvorov claimed that 'Korobov was murdered, and everyone in the GRU understood why'. [ 267 ] Alexander Golts, a Russian military analyst, admitted that agents 'got a bit too relaxed' and went on to say 'such sloppy work is the reality'.
[ 28 ] In February 2019, Bellingcat confirmed that a third GRU officer present in the UK at the time had been identified as Denis Vyacheslavovich Sergeev , [ 25 ] [ 26 ] believed to hold the rank of major general in the GRU. The pattern of his communications while in the UK indicated that he liaised with superior officers in Moscow. [ 268 ] In September 2021, Bellingcat revealed that "Russian authorities had taken the unusual measure of erasing any public records" of Sergeev's existence, as well as those of the other two main suspects in the Skripal poisoning. Sergeev is said to have been senior to Chepiga and Mishkin and was likely in charge of coordinating the operation in Salisbury. [ 269 ] In April 2021, Mishkin and Chepiga were named as having been involved in the 2014 Vrbětice ammunition warehouses explosions in the Czech Republic. [ 270 ] The Moscow Times reported on a public opinion survey conducted later in the year of the poisonings: The results of the survey published by the independent Levada Center pollster [in October 2018] say that 28 percent of Russians believe that British intelligence services were behind Skripals' poisoning, with only 3 percent saying they believe their own intelligence officers carried out the attack. Another 56 percent said that "it could have been anyone". Meanwhile, 37 percent of respondents said they knew about the case in detail and 33 percent said they had "heard something" about it, with another 20 percent saying they had heard nothing about the poisoning. [ 271 ] In the UK, the response to the poisonings was viewed as a success. [ 272 ] Initially there was criticism of the intelligence failures that let the supposed GRU agents gain access to the UK in the first place. [ 273 ] After the Litvinenko poisoning, however, there had been calls for more robust action against Russia should a similar event unfold. [ 274 ] The Salisbury poisonings put that robustness into action, rallying significant solidarity from the West.
In addition, the response also exposed many Russian intelligence officers, and British officials believe it did real damage to Russian intelligence operations, even if only in the short term. [ 5 ] [ 272 ] [ 274 ] Some of the emergency vehicles used in the response to the poisoning were buried in a landfill site near Cheltenham . [ 275 ] In June 2019 it was revealed that emergency services had spent £891,000 on replacing and discarding contaminated vehicles. South Western Ambulance Service discarded eight vehicles, comprising three ambulances and five paramedic cars. Wiltshire Police destroyed a total of 16 vehicles at a cost of £460,000. [ 276 ] On 13 September 2018, Chris Busby , a retired research scientist who is regularly featured as an expert on the Russian government -controlled RT television network, was arrested after his home in Bideford was raided by police. [ 277 ] [ 278 ] Busby was an outspoken critic of the British Government's handling of the Salisbury poisoning. [ 279 ] In one video he said: "Just to make it perfectly clear, there's no way that there's any proof that the material that poisoned the Skripals came from Russia." Busby was held for 19 hours under the Explosive Substances Act 1883 , [ 280 ] before being released with no further action. [ 281 ] Following his release, Busby told the BBC he believed that the fact that two of the officers who had raided his property had felt unwell was explained by "psychological problems associated with their knowledge of the Skripal poisoning". [ 282 ] On 16 September, fears of Novichok contamination flared up again after two people fell ill at a Prezzo restaurant, 300 metres (980 ft) from the Zizzi location where the Skripals had eaten before collapsing. The restaurant, a nearby pub, and surrounding streets were cordoned off, with some patrons under observation or unable to leave the area. [ 283 ] The next day, the police said there was "nothing to suggest that Novichok" was the cause of the two people falling ill.
[ 284 ] However, on 19 September, one of the apparent victims, Anna Shapiro, claimed in The Sun newspaper that the incident had been an attempted assassination against her and her husband by Russia. [ 285 ] This article was later removed from The Sun "for legal reasons" [ 285 ] and the police began to investigate the incident as a "possible hoax" after the couple were discharged from hospital. [ 286 ] In 2020, senior British officials told The Times that Sergei and Yulia Skripal had been given new identities and state support to start a new life. Both had relocated to New Zealand under the assumed identities. [ 287 ] In May 2021 Nick Bailey, who continued to feel the effects of his poisoning and had retired early as a result, began personal injury litigation against Wiltshire Police; [ 288 ] an undisclosed settlement was reached in April 2022. [ 20 ] As of 17 October 2018, a total of £7.5 million had been pledged by the government to support the city and its businesses, boost tourism and cover unexpected costs. Wiltshire Council had spent or pledged £7,338,974 on recovery, and a further £500,000 "was in the pipeline". Deputy Chief Constable Paul Mills and Superintendent Dave Minty of Wiltshire Police were each awarded the Queen's Police Medal in the 2020 New Year Honours for their roles in responding to the incident. [ 290 ] [ 291 ] The combined Wiltshire Emergency Services received Wiltshire Life ' s 2019 "Pride of Wiltshire" award. [ 292 ] The Salisbury Poisonings , a three-part dramatisation of the events in Salisbury and Amesbury, with a focus on the response of local officials and the local community, was broadcast on BBC One in June 2020 and later released on Netflix in December 2021. [ 293 ] [ 294 ]
https://en.wikipedia.org/wiki/Poisoning_of_Sergei_and_Yulia_Skripal
In materials science and solid mechanics , Poisson's ratio (symbol: ν ( nu )) is a measure of the Poisson effect , the deformation (expansion or contraction) of a material in directions perpendicular to the specific direction of loading . The value of Poisson's ratio is the negative of the ratio of transverse strain to axial strain . For small values of these changes, ν is the amount of transversal elongation divided by the amount of axial compression . Most materials have Poisson's ratio values ranging between 0.0 and 0.5. For soft materials, [ 1 ] such as rubber, where the bulk modulus is much higher than the shear modulus, Poisson's ratio is near 0.5. For open-cell polymer foams, Poisson's ratio is near zero, since the cells tend to collapse in compression. Many typical solids have Poisson's ratios in the range of 0.2 to 0.3. The ratio is named after the French mathematician and physicist Siméon Poisson . Poisson's ratio is a measure of the Poisson effect, the phenomenon in which a material tends to expand in directions perpendicular to the direction of compression. Conversely, if the material is stretched rather than compressed, it usually tends to contract in the directions transverse to the direction of stretching. It is a common observation that when a rubber band is stretched, it becomes noticeably thinner. Again, the Poisson ratio will be the ratio of relative contraction to relative expansion and will have the same value as above. In certain rare cases, [ 2 ] a material will actually shrink in the transverse direction when compressed (or expand when stretched), which will yield a negative value of the Poisson ratio. The Poisson's ratio of a stable, isotropic , linear elastic material must be between −1.0 and +0.5 because of the requirement for Young's modulus , the shear modulus and bulk modulus to have positive values. [ 3 ] Most materials have Poisson's ratio values ranging between 0.0 and 0.5.
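The sign convention above can be illustrated in a few lines of code. This is a sketch with illustrative numbers (not measured data), using the definition ν = −ε_trans/ε_axial:

```python
# Poisson's ratio from a pair of strains: nu = -eps_trans / eps_axial.
# Illustrative numbers for a steel-like material (nu ~ 0.3), not measured data.

def poissons_ratio(eps_axial: float, eps_trans: float) -> float:
    """Negative ratio of transverse strain to axial strain."""
    return -eps_trans / eps_axial

# A bar stretched 0.1% axially that contracts 0.03% transversely:
nu = poissons_ratio(eps_axial=1.0e-3, eps_trans=-3.0e-4)
print(nu)  # approximately 0.3
```

Note that passing a positive transverse strain for a positive axial strain (an auxetic material) yields a negative ν, matching the rare-case behaviour described above.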
A perfectly incompressible isotropic material deformed elastically at small strains would have a Poisson's ratio of exactly 0.5. Most steels and rigid polymers when used within their design limits (before yield ) exhibit values of about 0.3, increasing to 0.5 for post-yield deformation which occurs largely at constant volume. [ 4 ] Rubber has a Poisson ratio of nearly 0.5. Cork's Poisson ratio is close to 0, showing very little lateral expansion when compressed. Glass is between 0.18 and 0.30. Some materials, e.g. some polymer foams, origami folds, [ 5 ] [ 6 ] and certain cells can exhibit negative Poisson's ratio, and are referred to as auxetic materials . If these auxetic materials are stretched in one direction, they become thicker in the perpendicular direction. In contrast, some anisotropic materials, such as carbon nanotubes , zigzag-based folded sheet materials, [ 7 ] [ 8 ] and honeycomb auxetic metamaterials [ 9 ] to name a few, can exhibit one or more Poisson's ratios above 0.5 in certain directions. Assuming that the material is stretched or compressed in only one direction (the x axis in the diagram below), Poisson's ratio is ν = −dε_y/dε_x = −dε_z/dε_x, where ε_x is the axial strain and ε_y, ε_z are the transverse strains, and positive strain indicates extension and negative strain indicates contraction. For a cube stretched in the x -direction (see Figure 1) with a length increase of Δ L in the x -direction, and a length decrease of Δ L ′ in the y - and z -directions, the infinitesimal diagonal strains are given by dε_x = dx/x and dε_y = dε_z = dy/y. If Poisson's ratio is constant through deformation, integrating these expressions and using the definition of Poisson's ratio gives −ν ∫_L^(L+ΔL) dx/x = ∫_L^(L−ΔL′) dy/y. Solving and exponentiating, the relationship between Δ L and Δ L ′ is then (1 + ΔL/L)^(−ν) = 1 − ΔL′/L. For very small values of Δ L and Δ L ′ , the first-order approximation yields ν ≈ ΔL′/ΔL. The relative change of volume ⁠ Δ V / V ⁠ of a cube due to the stretch of the material can now be calculated.
Since V = L³ and V + ΔV = (L + ΔL)(L − ΔL′)², one can derive ΔV/V = (1 + ΔL/L)(1 − ΔL′/L)² − 1. Using the above derived relationship between Δ L and Δ L ′ : ΔV/V = (1 + ΔL/L)^(1−2ν) − 1, and for very small values of Δ L and Δ L ′ , the first-order approximation yields ΔV/V ≈ (1 − 2ν) ΔL/L. For isotropic materials we can use Lamé's relation [ 10 ] ν = 1/2 − E/(6K), where K is bulk modulus and E is Young's modulus . If a rod with diameter (or width, or thickness) d and length L is subject to tension so that its length will change by Δ L then its diameter d will change by Δd = −d ν ΔL/L. The above formula is true only in the case of small deformations; if deformations are large then the following (more precise) formula can be used: Δd = −d (1 − (1 + ΔL/L)^(−ν)), where d is the original diameter and Δ L is the change in length. The value is negative because the diameter decreases as the length increases. For a linear isotropic material subjected only to compressive (i.e. normal) forces, the deformation of a material in the direction of one axis will produce a deformation of the material along the other axis in three dimensions. Thus it is possible to generalize Hooke's law (for compressive forces) into three dimensions: ε_x = (σ_x − ν(σ_y + σ_z))/E, ε_y = (σ_y − ν(σ_x + σ_z))/E, ε_z = (σ_z − ν(σ_x + σ_y))/E, where ε_x, ε_y and ε_z are the strains and σ_x, σ_y and σ_z the stresses in the respective directions. These equations can all be synthesized as ε_ii = (σ_ii − ν(σ_jj + σ_kk))/E. In the most general case, shear stresses are present as well as normal stresses, and the full generalization of Hooke's law is given by ε_ij = ((1 + ν)σ_ij − ν δ_ij σ_kk)/E, where δ ij is the Kronecker delta and the Einstein notation (summation over the repeated index k ) is adopted. For anisotropic materials, the Poisson ratio ν( n , m ) depends on the direction of extension and the direction of transverse deformation. Here ν is Poisson's ratio, E is Young's modulus , n is a unit vector directed along the direction of extension, and m is a unit vector directed perpendicular to the direction of extension. Poisson's ratio has a different number of special directions depending on the type of anisotropy. [ 11 ] [ 12 ] Orthotropic materials have three mutually perpendicular planes of symmetry in their material properties. An example is wood, which is most stiff (and strong) along the grain, and less so in the other directions.
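The diameter-change formulas for a stretched rod can be checked numerically. A brief sketch with illustrative values; `dd_exact` assumes, as in the derivation above, that ν stays constant through the deformation:

```python
def dd_small(d: float, L: float, dL: float, nu: float) -> float:
    """First-order (small-strain) change in diameter of a stretched rod."""
    return -d * nu * dL / L

def dd_exact(d: float, L: float, dL: float, nu: float) -> float:
    """Finite-strain change in diameter, assuming nu is constant through deformation."""
    return d * ((1 + dL / L) ** (-nu) - 1)

# Illustrative values: a 10 mm rod, 100 mm long, stretched by 1 mm, nu = 0.3.
print(dd_small(10.0, 100.0, 1.0, 0.3))  # -0.03 mm
print(dd_exact(10.0, 100.0, 1.0, 0.3))  # about -0.0298 mm; close to first order
```

At 1% axial strain the two formulas differ by only a few tenths of a percent of the diameter change, which is why the first-order form is standard for small deformations.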
Then Hooke's law can be expressed in matrix form in terms of the compliances. [ 13 ] [ 14 ] The Poisson ratio of an orthotropic material is different in each direction ( x , y and z ). However, the symmetry of the stress and strain tensors implies that not all the six Poisson's ratios in the equation are independent. There are only nine independent material properties: three elastic moduli, three shear moduli, and three Poisson's ratios. The remaining three Poisson's ratios can be obtained from the relations ν_yx /E_y = ν_xy /E_x , ν_zx /E_z = ν_xz /E_x and ν_zy /E_z = ν_yz /E_y . From the above relations we can see that if E x > E y then ν xy > ν yx . The larger ratio (in this case ν xy ) is called the major Poisson ratio while the smaller one (in this case ν yx ) is called the minor Poisson ratio . We can find similar relations between the other Poisson ratios. Transversely isotropic materials have a plane of isotropy in which the elastic properties are isotropic. If we assume that this plane of isotropy is the yz -plane, then Hooke's law takes the form [ 15 ] where we have used the yz -plane of isotropy to reduce the number of constants, that is, E_y = E_z , ν_xy = ν_xz and G_xy = G_xz . The symmetry of the stress and strain tensors implies that ν_xy /E_x = ν_yx /E_y and ν_yz = ν_zy . This leaves us with six independent constants E x , E y , G xy , G yz , ν xy , ν yz . However, transverse isotropy gives rise to a further constraint between G yz and E y , ν yz , which is G_yz = E_y /(2(1 + ν_yz )). Therefore, there are five independent elastic material properties, two of which are Poisson's ratios. For the assumed plane of symmetry, the larger of ν xy and ν yx is the major Poisson ratio. The other major and minor Poisson ratios are equal. Some materials known as auxetic materials display a negative Poisson's ratio. When subjected to positive strain in a longitudinal axis, the transverse strain in the material will actually be positive (i.e. it would increase the cross-sectional area). For these materials, it is usually due to uniquely oriented, hinged molecular bonds.
In order for these bonds to stretch in the longitudinal direction, the hinges must 'open' in the transverse direction, effectively exhibiting a positive strain. [ 19 ] This can also be done in a structured way and lead to new aspects in material design as for mechanical metamaterials . Studies have shown that certain solid wood types display negative Poisson's ratio exclusively during a compression creep test. [ 20 ] [ 21 ] Initially, the compression creep test shows positive Poisson's ratios, which gradually decrease until they reach negative values. Consequently, this also shows that Poisson's ratio for wood is time-dependent during constant loading, meaning that the strains in the axial and transverse directions do not increase at the same rate. Media with engineered microstructure may exhibit negative Poisson's ratio. In a simple case, auxeticity is obtained by removing material and creating a periodic porous medium. [ 22 ] Lattices can reach lower values of Poisson's ratio, [ 23 ] which can be indefinitely close to the limiting value −1 in the isotropic case. [ 24 ] More than three hundred crystalline materials have negative Poisson's ratio. [ 25 ] [ 26 ] [ 27 ] Examples include Li, Na, K, Cu, Rb, Ag, Fe, Ni, Co, Cs, Au, Be, Ca, Zn, Sr, Sb, MoS 2 and others. At finite strains , the relationship between the transverse and axial strains ε trans and ε axial is typically not well described by the Poisson ratio. In fact, the Poisson ratio is often considered a function of the applied strain in the large strain regime. In such instances, the Poisson ratio is replaced by the Poisson function, for which there are several competing definitions. [ 28 ] Defining the transverse stretch λ trans = ε trans + 1 and axial stretch λ axial = ε axial + 1 , where the transverse stretch is a function of the axial stretch, the most common are the Hencky, Biot, Green, and Almansi functions. One area in which Poisson's effect has a considerable influence is in pressurized pipe flow.
When the air or liquid inside a pipe is highly pressurized it exerts a uniform force on the inside of the pipe, resulting in a hoop stress within the pipe material. Due to Poisson's effect, this hoop stress will cause the pipe to increase in diameter and slightly decrease in length. The decrease in length, in particular, can have a noticeable effect upon the pipe joints, as the effect will accumulate for each section of pipe joined in series. A restrained joint may be pulled apart or otherwise prone to failure. [ citation needed ] Another area of application for Poisson's effect is in the realm of structural geology . Rocks, like most materials, are subject to Poisson's effect while under stress. In a geological timescale, excessive erosion or sedimentation of Earth's crust can either create or remove large vertical stresses upon the underlying rock. This rock will expand or contract in the vertical direction as a direct result of the applied stress, and it will also deform in the horizontal direction as a result of Poisson's effect. This change in strain in the horizontal direction can affect or form joints and dormant stresses in the rock. [ 29 ] Although cork was historically chosen to seal wine bottles for other reasons (including its inert nature, impermeability, flexibility, sealing ability, and resilience), [ 30 ] cork's Poisson's ratio of zero provides another advantage. As the cork is inserted into the bottle, the upper part, which is not yet inserted, does not expand in diameter as it is compressed axially. The force needed to insert a cork into a bottle arises only from the friction between the cork and the bottle due to the radial compression of the cork. If the stopper were made of rubber, for example (with a Poisson's ratio of about +0.5), there would be a relatively large additional force required to overcome the radial expansion of the upper part of the rubber stopper.
Most car mechanics are aware that it is hard to pull a rubber hose (such as a coolant hose) off a metal pipe stub, as the tension of pulling causes the diameter of the hose to shrink, gripping the stub tightly. (This is the same effect as shown in a Chinese finger trap .) Hoses can instead be pushed off stubs more easily using a wide flat blade.
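The volume-change relation derived earlier can also be illustrated numerically. Under the assumption that ν stays constant through the deformation, the relative volume change of an axially stretched cube is ΔV/V = (1 + ΔL/L)^(1−2ν) − 1; a brief sketch with illustrative values:

```python
def volume_change(dL_over_L: float, nu: float) -> float:
    """Relative volume change dV/V of a cube stretched axially by dL/L,
    assuming Poisson's ratio nu stays constant through the deformation."""
    return (1 + dL_over_L) ** (1 - 2 * nu) - 1

# 1% axial stretch for three illustrative materials:
print(volume_change(0.01, 0.0))  # cork-like: volume grows by about 1%
print(volume_change(0.01, 0.3))  # steel-like: about 0.4%
print(volume_change(0.01, 0.5))  # rubber-like: 0.0, volume is preserved
```

The ν = 0.5 case makes the exponent vanish, which is the numerical counterpart of the statement that a perfectly incompressible material has a Poisson's ratio of exactly 0.5.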
https://en.wikipedia.org/wiki/Poisson's_ratio
In probability theory , the law of rare events or Poisson limit theorem states that the Poisson distribution may be used as an approximation to the binomial distribution , under certain conditions. [ 1 ] The theorem was named after Siméon Denis Poisson (1781–1840). A generalization of this theorem is Le Cam's theorem . Let p n {\displaystyle p_{n}} be a sequence of real numbers in [ 0 , 1 ] {\displaystyle [0,1]} such that the sequence n p n {\displaystyle np_{n}} converges to a finite limit λ {\displaystyle \lambda } . Then for every fixed k {\displaystyle k} : lim n → ∞ ( n k ) p n k ( 1 − p n ) n − k = e − λ λ k k ! {\displaystyle \lim _{n\to \infty }{n \choose k}p_{n}^{k}(1-p_{n})^{n-k}=e^{-\lambda }{\frac {\lambda ^{k}}{k!}}.} Assume λ > 0 {\displaystyle \lambda >0} (the case λ = 0 {\displaystyle \lambda =0} is easier). Then Since this leaves Using Stirling's approximation , it can be written: Letting n → ∞ {\displaystyle n\to \infty } and n p = λ {\displaystyle np=\lambda } : As n → ∞ {\displaystyle n\to \infty } , ( 1 − x n ) n → e − x {\displaystyle \left(1-{\frac {x}{n}}\right)^{n}\to e^{-x}} so: It is also possible to demonstrate the theorem through the use of ordinary generating functions of the binomial distribution: by virtue of the binomial theorem . Taking the limit N → ∞ {\displaystyle N\rightarrow \infty } while keeping the product p N ≡ λ {\displaystyle pN\equiv \lambda } constant, it can be seen: which is the OGF for the Poisson distribution. (The second equality holds due to the definition of the exponential function .)
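The convergence can be observed numerically. A minimal sketch using only the standard library, holding np_n = λ = 3 fixed while n grows:

```python
from math import comb, exp, factorial

def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k: int, lam: float) -> float:
    return exp(-lam) * lam**k / factorial(k)

# Hold n*p_n = 3 fixed while n grows; the binomial pmf approaches Poisson(3).
lam = 3.0
max_diff = {}
for n in (10, 100, 10000):
    p = lam / n
    max_diff[n] = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(20))
    print(n, max_diff[n])  # worst-case gap over k = 0..19 shrinks as n grows
```

The shrinking gap is consistent with Le Cam's theorem, which bounds the total variation distance by a quantity of order λ²/n.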
https://en.wikipedia.org/wiki/Poisson_limit_theorem
In probability theory , the Poisson scatter theorem describes a probability model of random scattering . It implies that the number of points in a fixed region will follow a Poisson distribution . Let there exist a chance process realized by a set of points (called hits) over a bounded region K ∈ R 2 {\displaystyle K\in \mathbb {R} ^{2}} such that: In any region B , let N B be the number of hits in B . Then there exists a positive constant λ {\displaystyle \lambda } such that for each subregion B ∈ K {\displaystyle B\in K} , N B has a Poisson distribution with parameter λ | B | {\displaystyle \lambda |B|} , where | B | {\displaystyle |B|} is the area of B (remember that this is R 2 {\displaystyle \mathbb {R} ^{2}} ; in other measure spaces , | B | {\displaystyle |B|} could mean different things, i.e. length in R {\displaystyle \mathbb {R} } ). In addition, for any non-overlapping regions B 1 , … , B k {\displaystyle B_{1},\ldots ,B_{k}} , the random variables N B 1 , … , N B k {\displaystyle N_{B_{1}},\ldots ,N_{B_{k}}} are independent of one another. The positive constant λ {\displaystyle \lambda } is called the intensity parameter, and is equal to the expected number of hits in a unit area of K . While the statement of the theorem here is limited to R 2 {\displaystyle \mathbb {R} ^{2}} , the theorem can be generalized to any-dimensional space. Some calculations change depending on the space that the points are scattered in (as is mentioned above), but the general assumptions and outcomes still hold. Consider raindrops falling on a rooftop. The rooftop is the region K ∈ R 2 {\displaystyle K\in \mathbb {R} ^{2}} , while the raindrops can be considered the hits of our system. It is reasonable to assume that the number of raindrops that fall in any particular region of the rooftop follows a Poisson distribution.
The Poisson scatter theorem states that if one were to subdivide the rooftop into k disjoint sub-regions, then the number of raindrops that hit a particular sub-region B i {\displaystyle B_{i}} with intensity λ i {\displaystyle \lambda _{i}} is independent of the number of raindrops that hit any other sub-region. Suppose that 2000 raindrops fall on 1000 subregions of the rooftop, randomly. The expected number of raindrops per subregion would be 2. So the distribution of the number of raindrops on the whole rooftop is Poisson with intensity parameter 2. The distribution of the number of raindrops falling on 1/5 of the rooftop is Poisson with intensity parameter 2/5. Due to the reproductive property of the Poisson distribution, k independent random scatters on the same region can superimpose to produce a random scatter that follows a Poisson distribution with parameter ( λ 1 + λ 2 + ⋯ + λ k ) {\displaystyle (\lambda _{1}+\lambda _{2}+\cdots +\lambda _{k})} .
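The raindrop model can be simulated directly. A sketch under illustrative assumptions (a unit-square rooftop, intensity λ = 5, and subregion B taken as the left fifth of the roof); the total number of hits is drawn with Knuth's Poisson sampler and the hit positions are uniform, which together are equivalent to a Poisson scatter:

```python
import math
import random

random.seed(0)

def poisson_sample(lam: float) -> int:
    """Draw from Poisson(lam) using Knuth's method; fine for small means."""
    n, t, limit = 0, random.random(), math.exp(-lam)
    while t > limit:
        n += 1
        t *= random.random()
    return n

def scatter(lam: float):
    """One realization of a Poisson scatter on the unit square: draw a
    Poisson(lam) number of hits, then place them uniformly at random."""
    return [(random.random(), random.random()) for _ in range(poisson_sample(lam))]

# Count hits in B = [0, 0.2] x [0, 1], the left fifth of the roof; |B| = 0.2.
lam, trials = 5.0, 4000
counts = [sum(1 for (x, y) in scatter(lam) if x <= 0.2) for _ in range(trials)]
mean = sum(counts) / trials
print(mean)  # close to lam * |B| = 1.0
```

The empirical mean of the subregion counts approaches λ|B|, as the theorem predicts for N_B ~ Poisson(λ|B|).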
https://en.wikipedia.org/wiki/Poisson_scatter_theorem
In mathematics , the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform . Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation . For a smooth, complex valued function s ( x ) {\displaystyle s(x)} on R {\displaystyle \mathbb {R} } which decays at infinity with all derivatives ( Schwartz function ), the simplest version of the Poisson summation formula states that where S {\displaystyle S} is the Fourier transform of s {\displaystyle s} , i.e., S ( f ) ≜ ∫ − ∞ ∞ s ( x ) e − i 2 π f x d x . {\textstyle S(f)\triangleq \int _{-\infty }^{\infty }s(x)\ e^{-i2\pi fx}\,dx.} The summation formula can be restated in many equivalent ways, but a simple one is the following. [ 1 ] Suppose that f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} ( L 1 for L 1 space ) and Λ {\displaystyle \Lambda } is a unimodular lattice in R n {\displaystyle \mathbb {R} ^{n}} . Then the periodization of f {\displaystyle f} , which is defined as the sum f Λ ( x ) = ∑ λ ∈ Λ f ( x + λ ) , {\textstyle f_{\Lambda }(x)=\sum _{\lambda \in \Lambda }f(x+\lambda ),} converges in the L 1 {\displaystyle L^{1}} norm of R n / Λ {\displaystyle \mathbb {R} ^{n}/\Lambda } to an L 1 ( R n / Λ ) {\displaystyle L^{1}(\mathbb {R} ^{n}/\Lambda )} function having Fourier series f Λ ( x ) ∼ ∑ λ ′ ∈ Λ ′ f ^ ( λ ′ ) e 2 π i λ ′ x {\displaystyle f_{\Lambda }(x)\sim \sum _{\lambda '\in \Lambda '}{\hat {f}}(\lambda ')e^{2\pi i\lambda 'x}} where Λ ′ {\displaystyle \Lambda '} is the dual lattice to Λ {\displaystyle \Lambda } . 
(Note that the Fourier series on the right-hand side need not converge in L 1 {\displaystyle L^{1}} or otherwise.) Let s ( x ) {\textstyle s\left(x\right)} be a smooth, complex valued function on R {\displaystyle \mathbb {R} } which decays at infinity with all derivatives ( Schwartz function ), and its Fourier transform S ( f ) {\displaystyle S\left(f\right)} , defined as S ( f ) = ∫ − ∞ ∞ s ( x ) e − 2 π i x f d x . {\displaystyle S(f)=\int _{-\infty }^{\infty }s(x)e^{-2\pi ixf}dx.} Then S ( f ) {\displaystyle S(f)} is also a Schwartz function, and we have the reciprocal relationship that s ( x ) = ∫ − ∞ ∞ S ( f ) e 2 π i x f d f . {\displaystyle s(x)=\int _{-\infty }^{\infty }S(f)e^{2\pi ixf}df.} The periodization of s ( x ) {\displaystyle s(x)} with period P > 0 {\displaystyle P>0} is given by s P ( x ) ≜ ∑ n = − ∞ ∞ s ( x + n P ) . {\displaystyle s_{_{P}}(x)\triangleq \sum _{n=-\infty }^{\infty }s(x+nP).} Likewise, the periodization of S ( f ) {\displaystyle S(f)} with period 1 / T {\displaystyle 1/T} , where T > 0 {\displaystyle T>0} , is S 1 / T ( f ) ≜ ∑ k = − ∞ ∞ S ( f + k / T ) . {\displaystyle S_{1/T}(f)\triangleq \sum _{k=-\infty }^{\infty }S(f+k/T).} Then Eq.1 , ∑ n = − ∞ ∞ s ( n ) = ∑ k = − ∞ ∞ S ( k ) , {\displaystyle \sum _{n=-\infty }^{\infty }s(n)=\sum _{k=-\infty }^{\infty }S(k),} is a special case (P=1, x=0) of this generalization: [ 2 ] [ 3 ] which is a Fourier series expansion with coefficients that are samples of the function S ( f ) . {\displaystyle S(f).} Conversely, Eq.2 follows from Eq.1 by applying the known behavior of the Fourier transform under translations (see the Fourier transform properties time scaling and shifting). Similarly: also known as the important Discrete-time Fourier transform . We prove that, [ 2 ] if s ∈ L 1 ( R ) {\displaystyle s\in L^{1}(\mathbb {R} )} , then the (possibly divergent) Fourier series of s P ( x ) {\displaystyle s_{P}(x)} is s P ( x ) ∼ ∑ k = − ∞ ∞ 1 P S ( k P ) e 2 π i k / P . 
{\displaystyle s_{_{P}}(x)\sim \sum _{k=-\infty }^{\infty }{\frac {1}{P}}S\left({\frac {k}{P}}\right)e^{2\pi ik/P}.} When s ( x ) {\displaystyle s(x)} is a Schwartz function, this establishes equality in Eq.2 of the previous section. First, the periodization s P ( x ) {\displaystyle s_{P}(x)} converges in L 1 {\displaystyle L^{1}} norm to an L 1 ( [ 0 , P ] ) {\displaystyle L^{1}([0,P])} function which is periodic on R {\displaystyle \mathbb {R} } , and therefore integrable on any interval of length P . {\displaystyle P.} We must therefore show that the Fourier series coefficients of s P ( x ) {\displaystyle s_{_{P}}(x)} are 1 P S ( k P ) {\textstyle {\frac {1}{P}}S\left({\frac {k}{P}}\right)} where S ( f ) {\textstyle S\left(f\right)} is the Fourier transform of s ( x ) {\textstyle s\left(x\right)} . (Not S [ k ] {\textstyle S\left[k\right]} , which is the Fourier coefficient of s P ( x ) {\displaystyle s_{_{P}}(x)} .) Proceeding from the definition of the Fourier coefficients we have : S [ k ] ≜ 1 P ∫ 0 P s P ( x ) ⋅ e − i 2 π k P x d x = 1 P ∫ 0 P ( ∑ n = − ∞ ∞ s ( x + n P ) ) ⋅ e − i 2 π k P x d x = 1 P ∑ n = − ∞ ∞ ∫ 0 P s ( x + n P ) ⋅ e − i 2 π k P x d x , {\displaystyle {\begin{aligned}S[k]\ &\triangleq \ {\frac {1}{P}}\int _{0}^{P}s_{_{P}}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx\\&=\ {\frac {1}{P}}\int _{0}^{P}\left(\sum _{n=-\infty }^{\infty }s(x+nP)\right)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx\\&=\ {\frac {1}{P}}\sum _{n=-\infty }^{\infty }\int _{0}^{P}s(x+nP)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx,\end{aligned}}} where the interchange of summation with integration is justified by dominated convergence . With a change of variables ( τ = x + n P {\displaystyle \tau =x+nP} ), this becomes the following, completing the proof of Eq.2 : S [ k ] = 1 P ∑ n = − ∞ ∞ ∫ n P ( n + 1 ) P s ( τ ) e − i 2 π k P τ e i 2 π k n ⏟ 1 d τ = 1 P ∫ − ∞ ∞ s ( τ ) e − i 2 π k P τ d τ ≜ 1 P ⋅ S ( k P ) . 
{\displaystyle {\begin{aligned}S[k]={\frac {1}{P}}\sum _{n=-\infty }^{\infty }\int _{nP}^{(n+1)P}s(\tau )\ e^{-i2\pi {\frac {k}{P}}\tau }\ \underbrace {e^{i2\pi kn}} _{1}\,d\tau \ =\ {\frac {1}{P}}\int _{-\infty }^{\infty }s(\tau )\ e^{-i2\pi {\frac {k}{P}}\tau }d\tau \triangleq {\frac {1}{P}}\cdot S\left({\frac {k}{P}}\right)\end{aligned}}.} This proves Eq.2 for L 1 {\displaystyle L^{1}} functions, in the sense that the right-hand side is the (possibly divergent) Fourier series of the left-hand side. Similarly, if S ( f ) {\displaystyle S(f)} is in L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} , a similar proof shows the corresponding version of Eq.3 . Finally, if s P ( x ) {\displaystyle s_{_{P}}(x)} has an absolutely convergent Fourier series, then Eq.2 holds as an equality almost everywhere. This is the case, in particular, when s ( x ) {\displaystyle s(x)} is a Schwartz function. Similarly, Eq.3 holds when S ( f ) {\displaystyle S(f)} is a Schwartz function. These equations can be interpreted in the language of distributions [ 4 ] [ 5 ] : §7.2 for a function s {\displaystyle s} whose derivatives are all rapidly decreasing (see Schwartz function ). The Poisson summation formula arises as a particular case of the Convolution Theorem on tempered distributions , using the Dirac comb distribution and its Fourier series : ∑ n = − ∞ ∞ δ ( x − n T ) ≡ ∑ k = − ∞ ∞ 1 T ⋅ e − i 2 π k T x ⟺ F 1 T ⋅ ∑ k = − ∞ ∞ δ ( f − k / T ) . {\displaystyle \sum _{n=-\infty }^{\infty }\delta (x-nT)\equiv \sum _{k=-\infty }^{\infty }{\frac {1}{T}}\cdot e^{-i2\pi {\frac {k}{T}}x}\quad {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\quad {\frac {1}{T}}\cdot \sum _{k=-\infty }^{\infty }\delta (f-k/T).} In other words, the periodization of a Dirac delta δ , {\displaystyle \delta ,} resulting in a Dirac comb , corresponds to the discretization of its spectrum which is constantly one. Hence, this again is a Dirac comb but with reciprocal increments. 
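Eq.1 can be checked numerically. A minimal sketch, assuming the (non-Schwartz but sufficiently decaying) transform pair s(x) = 1/(x² + a²), S(f) = (π/a)e^{−2πa|f|}, for which Poisson summation reduces to the classical identity Σₙ 1/(n² + a²) = (π/a)·coth(πa); the function names are illustrative:

```python
import math

def lhs(a, terms=200000):
    # Left side of Eq.1: sum of s(x) = 1/(x^2 + a^2) over the integers
    return sum(1.0 / (n * n + a * a) for n in range(-terms, terms + 1))

def rhs(a):
    # Right side of Eq.1: sum of S(f) = (pi/a) * exp(-2*pi*a*|f|) over the
    # integers, which is a geometric series summing to (pi/a) * coth(pi*a)
    return (math.pi / a) / math.tanh(math.pi * a)

a = 1.0
# The truncated left side agrees with the closed form up to the O(1/terms) tail
assert abs(lhs(a) - rhs(a)) < 1e-4
```

The slowly converging lattice sum on the left becomes a rapidly converging (here, exactly summable) series on the right, illustrating the computational remark made later in the article.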
For the case T = 1 , {\displaystyle T=1,} Eq.1 readily follows: ∑ k = − ∞ ∞ S ( k ) = ∑ k = − ∞ ∞ ( ∫ − ∞ ∞ s ( x ) e − i 2 π k x d x ) = ∫ − ∞ ∞ s ( x ) ( ∑ k = − ∞ ∞ e − i 2 π k x ) ⏟ ∑ n = − ∞ ∞ δ ( x − n ) d x = ∑ n = − ∞ ∞ ( ∫ − ∞ ∞ s ( x ) δ ( x − n ) d x ) = ∑ n = − ∞ ∞ s ( n ) . {\displaystyle {\begin{aligned}\sum _{k=-\infty }^{\infty }S(k)&=\sum _{k=-\infty }^{\infty }\left(\int _{-\infty }^{\infty }s(x)\ e^{-i2\pi kx}dx\right)=\int _{-\infty }^{\infty }s(x)\underbrace {\left(\sum _{k=-\infty }^{\infty }e^{-i2\pi kx}\right)} _{\sum _{n=-\infty }^{\infty }\delta (x-n)}dx\\&=\sum _{n=-\infty }^{\infty }\left(\int _{-\infty }^{\infty }s(x)\ \delta (x-n)\ dx\right)=\sum _{n=-\infty }^{\infty }s(n).\end{aligned}}} Similarly: ∑ k = − ∞ ∞ S ( f − k / T ) = ∑ k = − ∞ ∞ F { s ( x ) ⋅ e i 2 π k T x } = F { s ( x ) ∑ k = − ∞ ∞ e i 2 π k T x ⏟ T ∑ n = − ∞ ∞ δ ( x − n T ) } = F { ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ δ ( x − n T ) } = ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ F { δ ( x − n T ) } = ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ e − i 2 π n T f . {\displaystyle {\begin{aligned}\sum _{k=-\infty }^{\infty }S(f-k/T)&=\sum _{k=-\infty }^{\infty }{\mathcal {F}}\left\{s(x)\cdot e^{i2\pi {\frac {k}{T}}x}\right\}\\&={\mathcal {F}}{\bigg \{}s(x)\underbrace {\sum _{k=-\infty }^{\infty }e^{i2\pi {\frac {k}{T}}x}} _{T\sum _{n=-\infty }^{\infty }\delta (x-nT)}{\bigg \}}={\mathcal {F}}\left\{\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot \delta (x-nT)\right\}\\&=\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot {\mathcal {F}}\left\{\delta (x-nT)\right\}=\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot e^{-i2\pi nTf}.\end{aligned}}} Or: [ 6 ] : 143 ∑ k = − ∞ ∞ S ( f − k / T ) = S ( f ) ∗ ∑ k = − ∞ ∞ δ ( f − k / T ) = S ( f ) ∗ F { T ∑ n = − ∞ ∞ δ ( x − n T ) } = F { s ( x ) ⋅ T ∑ n = − ∞ ∞ δ ( x − n T ) } = F { ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⋅ δ ( x − n T ) } as above . 
{\displaystyle {\begin{aligned}\sum _{k=-\infty }^{\infty }S(f-k/T)&=S(f)*\sum _{k=-\infty }^{\infty }\delta (f-k/T)\\&=S(f)*{\mathcal {F}}\left\{T\sum _{n=-\infty }^{\infty }\delta (x-nT)\right\}\\&={\mathcal {F}}\left\{s(x)\cdot T\sum _{n=-\infty }^{\infty }\delta (x-nT)\right\}={\mathcal {F}}\left\{\sum _{n=-\infty }^{\infty }T\cdot s(nT)\cdot \delta (x-nT)\right\}\quad {\text{as above}}.\end{aligned}}} The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as [ 7 ] 0 → Z → R → R / Z → 0. {\textstyle 0\to \mathbb {Z} \to \mathbb {R} \to \mathbb {R} /\mathbb {Z} \to 0.} Eq.2 holds provided s ( x ) {\displaystyle s(x)} is a continuous integrable function which satisfies | s ( x ) | + | S ( x ) | ≤ C ( 1 + | x | ) − 1 − δ {\textstyle |s(x)|+|S(x)|\leq C(1+|x|)^{-1-\delta }} for some C > 0 , δ > 0 {\displaystyle C>0,\delta >0} and every x . {\displaystyle x.} [ 8 ] [ 9 ] Note that such s ( x ) {\displaystyle s(x)} is uniformly continuous , this together with the decay assumption on s {\displaystyle s} , show that the series defining s P {\displaystyle s_{_{P}}} converges uniformly to a continuous function. Eq.2 holds in the strong sense that both sides converge uniformly and absolutely to the same limit. [ 9 ] Eq.2 holds in a pointwise sense under the strictly weaker assumption that s {\displaystyle s} has bounded variation and [ 3 ] 2 ⋅ s ( x ) = lim ε → 0 s ( x + ε ) + lim ε → 0 s ( x − ε ) . {\displaystyle 2\cdot s(x)=\lim _{\varepsilon \to 0}s(x+\varepsilon )+\lim _{\varepsilon \to 0}s(x-\varepsilon ).} The Fourier series on the right-hand side of Eq.2 is then understood as a (conditionally convergent) limit of symmetric partial sums. 
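Pointwise equality in Eq.2 under such decay hypotheses can also be verified numerically; a sketch with the Schwartz function s(x) = e^{−πx²} (self-dual under the Fourier transform, so S(f) = e^{−πf²}) and P = 1:

```python
import cmath
import math

def periodization(x, P=1.0, terms=30):
    # Left side of Eq.2: s_P(x) = sum_n s(x + nP) with s(x) = exp(-pi x^2)
    return sum(math.exp(-math.pi * (x + n * P) ** 2)
               for n in range(-terms, terms + 1))

def fourier_series(x, P=1.0, terms=30):
    # Right side of Eq.2: (1/P) sum_k S(k/P) exp(i 2 pi k x / P),
    # with S(f) = exp(-pi f^2)
    total = sum(math.exp(-math.pi * (k / P) ** 2) *
                cmath.exp(2j * math.pi * k * x / P)
                for k in range(-terms, terms + 1)) / P
    return total.real  # imaginary parts cancel by the k <-> -k symmetry

x = 0.3
assert abs(periodization(x) - fourier_series(x)) < 1e-10
```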
As shown above, Eq.2 holds under the much less restrictive assumption that s ( x ) {\displaystyle s(x)} is in L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} , but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of s P ( x ) . {\displaystyle s_{_{P}}(x).} [ 3 ] In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability . When interpreting convergence in this way Eq.2 , case x = 0 , {\displaystyle x=0,} holds under the less restrictive conditions that s ( x ) {\displaystyle s(x)} is integrable and 0 is a point of continuity of s P ( x ) {\displaystyle s_{_{P}}(x)} . However, Eq.2 may fail to hold even when both s {\displaystyle s} and S {\displaystyle S} are integrable and continuous, and the sums converge absolutely. [ 10 ] In partial differential equations , the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images . Here the heat kernel on R 2 {\displaystyle \mathbb {R} ^{2}} is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions. [ 8 ] In one dimension, the resulting solution is called a theta function . In electrodynamics , the method is also used to accelerate the computation of periodic Green's functions . [ 11 ] In the statistical study of time-series, if s {\displaystyle s} is a function of time, then looking only at its values at equally spaced points of time is called "sampling." 
In applications, typically the function s {\displaystyle s} is band-limited , meaning that there is some cutoff frequency f o {\displaystyle f_{o}} such that S ( f ) {\displaystyle S(f)} is zero for frequencies exceeding the cutoff: S ( f ) = 0 {\displaystyle S(f)=0} for | f | > f o . {\displaystyle |f|>f_{o}.} For band-limited functions, choosing the sampling rate 1 T > 2 f o {\displaystyle {\tfrac {1}{T}}>2f_{o}} guarantees that no information is lost, since S {\displaystyle S} can be reconstructed from these sampled values. Then, by Fourier inversion, so can s . {\displaystyle s.} This leads to the Nyquist–Shannon sampling theorem . [ 2 ] Computationally, the Poisson summation formula is useful since a slowly converging summation in real space can be converted into a quickly converging equivalent summation in Fourier space. [ 12 ] (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation . The Poisson summation formula is also useful to bound the errors obtained when an integral is approximated by a (Riemann) sum. Consider an approximation of S ( 0 ) = ∫ − ∞ ∞ d x s ( x ) {\textstyle S(0)=\int _{-\infty }^{\infty }dx\,s(x)} as δ ∑ n = − ∞ ∞ s ( n δ ) {\textstyle \delta \sum _{n=-\infty }^{\infty }s(n\delta )} , where δ ≪ 1 {\displaystyle \delta \ll 1} is the size of the bin. Then, according to Eq.2 this approximation coincides with ∑ k = − ∞ ∞ S ( k / δ ) {\textstyle \sum _{k=-\infty }^{\infty }S(k/\delta )} . The error in the approximation can then be bounded as | ∑ k ≠ 0 S ( k / δ ) | ≤ ∑ k ≠ 0 | S ( k / δ ) | {\textstyle \left|\sum _{k\neq 0}S(k/\delta )\right|\leq \sum _{k\neq 0}|S(k/\delta )|} . This bound is particularly useful when 1 / δ ≫ 1 {\displaystyle 1/\delta \gg 1} , since the Fourier transform of s ( x ) {\displaystyle s(x)} is then evaluated far from the origin, where it decays rapidly.
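The quality of this error bound is easy to see in a concrete case; a sketch assuming the Gaussian s(x) = e^{−πx²}, for which S(f) = e^{−πf²} and S(0) = ∫ s = 1:

```python
import math

def riemann_sum(delta, terms=2000):
    # delta * sum_n s(n*delta) approximates S(0) = integral of s = 1
    return delta * sum(math.exp(-math.pi * (n * delta) ** 2)
                       for n in range(-terms, terms + 1))

delta = 0.5
approx = riemann_sum(delta)
# By Eq.2 the error equals sum_{k != 0} S(k/delta); bound it by slightly more
# than twice the k = 1 term (the k >= 2 terms are negligible here)
bound = 2.1 * math.exp(-math.pi / delta ** 2)
assert abs(approx - 1.0) <= bound
```

Even with a coarse bin of δ = 0.5, the Riemann sum matches the integral to better than 10⁻⁵, exactly as the Fourier-space error bound predicts.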
The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points inside a large Euclidean sphere. It can also be used to show that if an integrable function s {\displaystyle s} and its Fourier transform S {\displaystyle S} both have compact support, then s = 0. {\displaystyle s=0.} [ 2 ] In number theory , Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function . [ 13 ] One important such use of Poisson summation concerns theta functions : periodic summations of Gaussians. Put q = e i π τ {\displaystyle q=e^{i\pi \tau }} , for τ {\displaystyle \tau } a complex number in the upper half plane, and define the theta function: θ ( τ ) = ∑ n q n 2 . {\displaystyle \theta (\tau )=\sum _{n}q^{n^{2}}.} The relation between θ ( − 1 / τ ) {\displaystyle \theta (-1/\tau )} and θ ( τ ) {\displaystyle \theta (\tau )} turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form . By choosing s ( x ) = e − π x 2 {\displaystyle s(x)=e^{-\pi x^{2}}} and using the fact that S ( f ) = e − π f 2 , {\displaystyle S(f)=e^{-\pi f^{2}},} one can conclude: θ ( − 1 τ ) = τ i θ ( τ ) , {\displaystyle \theta \left({-1 \over \tau }\right)={\sqrt {\tau \over i}}\theta (\tau ),} by putting 1 / λ = τ / i . {\displaystyle {1/\lambda }={\sqrt {\tau /i}}.} It follows from this that θ 8 {\displaystyle \theta ^{8}} has a simple transformation property under τ ↦ − 1 / τ {\displaystyle \tau \mapsto {-1/\tau }} and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight squares. Cohn & Elkies [ 14 ] proved an upper bound on the density of sphere packings using the Poisson summation formula, which subsequently led to a proof of optimal sphere packings in dimensions 8 and 24. The Poisson summation formula holds in Euclidean space of arbitrary dimension.
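The theta transformation law can be checked numerically on the imaginary axis, where τ = it (t > 0) gives the real series θ(it) = Σₙ e^{−πtn²} and the identity reads θ(i/t) = √t · θ(it); a sketch:

```python
import math

def theta(t, terms=50):
    # theta(i*t) = sum_n q^(n^2) with q = exp(i*pi*(i*t)) = exp(-pi*t)
    return sum(math.exp(-math.pi * t * n * n) for n in range(-terms, terms + 1))

t = 2.0
# Transformation law theta(i/t) = sqrt(t) * theta(i*t), a consequence of Eq.1
assert abs(theta(1 / t) - math.sqrt(t) * theta(t)) < 1e-12
```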
Let Λ {\displaystyle \Lambda } be the lattice in R d {\displaystyle \mathbb {R} ^{d}} consisting of points with integer coordinates. For a function s {\displaystyle s} in L 1 ( R d ) {\displaystyle L^{1}(\mathbb {R} ^{d})} , consider the series given by summing the translates of s {\displaystyle s} by elements of Λ {\displaystyle \Lambda } : P s ( x ) = ∑ ν ∈ Λ s ( x + ν ) . {\displaystyle \mathbb {P} s(x)=\sum _{\nu \in \Lambda }s(x+\nu ).} Theorem For s {\displaystyle s} in L 1 ( R d ) {\displaystyle L^{1}(\mathbb {R} ^{d})} , the above series converges pointwise almost everywhere, and defines a Λ {\displaystyle \Lambda } -periodic function on R d {\displaystyle \mathbb {R} ^{d}} , hence a function P s ( x ¯ ) {\displaystyle \mathbb {P} s({\bar {x}})} on the torus R d / Λ . {\displaystyle \mathbb {R} ^{d}/\Lambda .} a.e. P s {\displaystyle \mathbb {P} s} lies in L 1 ( R d / Λ ) {\displaystyle L^{1}(\mathbb {R} ^{d}/\Lambda )} with ‖ P s ‖ L 1 ( R d / Λ ) ≤ ‖ s ‖ L 1 ( R ) . {\displaystyle \|\mathbb {P} s\|_{L_{1}(\mathbb {R} ^{d}/\Lambda )}\leq \|s\|_{L_{1}(\mathbb {R} )}.} Moreover, for all ν {\displaystyle \nu } in Λ , {\displaystyle \Lambda ,} (the Fourier transform of P s {\displaystyle \mathbb {P} s} on the torus R d / Λ {\displaystyle \mathbb {R} ^{d}/\Lambda } ) equals (the Fourier transform of s {\displaystyle s} on R d {\displaystyle \mathbb {R} ^{d}} ). When s {\displaystyle s} is in addition continuous, and both s {\displaystyle s} and S {\displaystyle S} decay sufficiently fast at infinity, then one can "invert" the Fourier series back to their domain R d {\displaystyle \mathbb {R} ^{d}} and make a stronger statement. 
More precisely, if | s ( x ) | + | S ( x ) | ≤ C ( 1 + | x | ) − d − δ {\displaystyle |s(x)|+|S(x)|\leq C(1+|x|)^{-d-\delta }} for some C , δ > 0, then [ 9 ] : VII §2 ∑ ν ∈ Λ s ( x + ν ) = ∑ ν ∈ Λ S ( ν ) e i 2 π ν ⋅ x , {\displaystyle \sum _{\nu \in \Lambda }s(x+\nu )=\sum _{\nu \in \Lambda }S(\nu )e^{i2\pi \nu \cdot x},} where both series converge absolutely and uniformly on Λ. When d = 1 and x = 0, this gives Eq.1 above. More generally, a version of the statement holds if Λ is replaced by a more general lattice in a finite dimensional vector space V {\displaystyle V} . Choose a translation invariant measure m {\displaystyle m} on V {\displaystyle V} . It is unique up to positive scalar. Again for a function s ∈ L 1 ( V , m ) {\displaystyle s\in L_{1}(V,m)} we define the periodisation as above. The dual lattice Λ ′ {\displaystyle \Lambda '} is defined as a subset of the dual vector space V ′ {\displaystyle V'} that evaluates to integers on the lattice Λ {\displaystyle \Lambda } or alternatively, by Pontryagin duality , as the characters of V {\displaystyle V} that contain Λ {\displaystyle \Lambda } in the kernel. Then the statement is that for all ν ∈ Λ ′ {\displaystyle \nu \in \Lambda '} the Fourier transform P S {\displaystyle \mathbb {P} S} of the periodisation P s {\displaystyle \mathbb {P} s} as a function on V / Λ {\displaystyle V/\Lambda } and the Fourier transform S {\displaystyle S} of s {\displaystyle s} on V {\displaystyle V} itself are related by proper normalisation Note that the right-hand side is independent of the choice of invariant measure μ {\displaystyle \mu } . If s {\displaystyle s} and S {\displaystyle S} are continuous and tend to zero faster than 1 / r dim ⁡ ( V ) + δ {\displaystyle 1/r^{\dim(V)+\delta }} then In particular This is applied in the theory of theta functions and is a possible method in geometry of numbers . 
In fact, in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the question, so that the left-hand side of the summation formula is what is sought and the right-hand side is something that can be attacked by mathematical analysis . Further generalization to locally compact abelian groups is required in number theory . In non-commutative harmonic analysis , the idea is taken even further in the Selberg trace formula , but takes on a much deeper character. A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg , Robert Langlands , and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups G {\displaystyle G} with a discrete subgroup Γ {\displaystyle \Gamma } such that G / Γ {\displaystyle G/\Gamma } has finite volume. For example, G {\displaystyle G} can be the real points of S L n {\displaystyle SL_{n}} and Γ {\displaystyle \Gamma } can be the integral points of S L n {\displaystyle SL_{n}} . In this setting, G {\displaystyle G} plays the role of the real number line in the classical version of Poisson summation, and Γ {\displaystyle \Gamma } plays the role of the integers n {\displaystyle n} that appear in the sum. The generalised version of Poisson summation is called the Selberg trace formula and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of Eq.1 becomes a sum over irreducible unitary representations of G {\displaystyle G} , and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of Γ {\displaystyle \Gamma } , and is called "the geometric side." The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory.
The Selberg trace formula was later generalized to more general smooth manifolds (without any algebraic structure) by Gutzwiller, Balian-Bloch, Chazarain, Colin de Verdière, Duistermaat-Guillemin, Uribe, Guillemin-Melrose, Zelditch and others. The "wave trace" or "semiclassical trace" formula relates geometric and spectral properties of the underlying topological space. The spectral side is the trace of a unitary group of operators (e.g., the Schrödinger or wave propagator) which encodes the spectrum of a differential operator and the geometric side is a sum of distributions which are supported at the lengths of periodic orbits of a corresponding Hamiltonian system. The Hamiltonian is given by the principal symbol of the differential operator which generates the unitary group. For the Laplacian, the "wave trace" has singular support contained in the set of lengths of periodic geodesics; this is called the Poisson relation. The Poisson summation formula is a particular case of the convolution theorem on tempered distributions . If one of the two factors is the Dirac comb , one obtains periodic summation on one side and sampling on the other side of the equation. Applied to the Dirac delta function and its Fourier transform , the function that is constantly 1, this yields the Dirac comb identity .
https://en.wikipedia.org/wiki/Poisson_summation_formula
The Poisson–Boltzmann equation describes the distribution of the electric potential in solution in the direction normal to a charged surface. This distribution is important to determine how the electrostatic interactions will affect the molecules in solution. It is expressed as a differential equation of the electric potential ψ {\displaystyle \psi } , which depends on the solvent permittivity ε {\displaystyle \varepsilon } , the solution temperature T {\displaystyle T} , and the mean concentration of each ion species c i 0 {\displaystyle c_{i}^{0}} : ∇ 2 ψ = − 1 ε ∑ i c i 0 q i exp ⁡ ( − q i ψ k B T ) . {\displaystyle \nabla ^{2}\psi =-{\frac {1}{\varepsilon }}\sum _{i}c_{i}^{0}q_{i}\exp \left({\frac {-q_{i}\psi }{k_{B}T}}\right).} The Poisson–Boltzmann equation is derived via mean-field assumptions. [ 1 ] [ 2 ] From the Poisson–Boltzmann equation many other equations have been derived with a number of different assumptions. The Poisson–Boltzmann equation describes a model proposed independently by Louis Georges Gouy and David Leonard Chapman in 1910 and 1913, respectively. [ 3 ] In the Gouy–Chapman model , a charged solid comes into contact with an ionic solution, creating a layer of surface charges and counter-ions, known as a double layer . [ 4 ] Due to the thermal motion of ions, the layer of counter-ions is a diffuse layer, more extended than a single molecular layer, as previously proposed by Hermann Helmholtz in the Helmholtz model. [ 3 ] The Stern layer model goes a step further and takes into account the finite ion size. The Gouy–Chapman model explains the capacitance-like qualities of the electric double layer. [ 4 ] A simple planar case with a negatively charged surface can be seen in the figure below. As expected, the concentration of counter-ions is higher near the surface than in the bulk solution. The Poisson–Boltzmann equation describes the electrochemical potential of ions in the diffuse layer.
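The Boltzmann weighting of ions in the diffuse layer can be illustrated with a quick estimate; a sketch assuming a monovalent counter-ion at 298 K near a surface at −50 mV (illustrative values):

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def enrichment(psi_volts, z=1, T=298.15):
    # Boltzmann factor c/c0 = exp(-z*e*psi / (kB*T)) for an ion of valence z
    return math.exp(-z * E_CHARGE * psi_volts / (K_B * T))

# A +1 counter-ion near a -50 mV surface is enriched roughly 7-fold over bulk
ratio = enrichment(-0.050)
assert 6.5 < ratio < 7.5
```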
The three-dimensional potential distribution can be described by the Poisson equation [ 4 ] ∇ 2 ψ = ∂ 2 ψ ∂ x 2 + ∂ 2 ψ ∂ y 2 + ∂ 2 ψ ∂ z 2 = − ρ e ε , {\displaystyle \nabla ^{2}\psi ={\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}+{\frac {\partial ^{2}\psi }{\partial z^{2}}}=-{\frac {\rho _{e}}{\varepsilon }},} where The freedom of movement of ions in solution can be accounted for by Boltzmann statistics . The Boltzmann equation is used to calculate the local ion density such that c i = c i 0 ⋅ exp ⁡ ( − W i k B T ) , {\displaystyle c_{i}=c_{i}^{0}\cdot \exp \left({\frac {-W_{i}}{k_{\mathrm {B} }T}}\right),} where The equation for local ion density can be substituted into the Poisson equation under the assumptions that the work being done is only electric work, and that the concentration of salt is much higher than the concentration of ions. [ 4 ] The electric work to bring an ion of charge q i {\displaystyle q_{i}} to a surface with potential ψ can be represented by W i = q i ψ {\displaystyle W_{i}=q_{i}\psi } . [ 4 ] These work equations can be substituted into the Boltzmann equation, producing an expression for the concentration of each ion species c i = c i 0 exp ⁡ ( − q i ψ ( x , y , z ) k B T ) {\displaystyle c_{i}=c_{i}^{0}\exp \left(-{\frac {q_{i}\psi (x,y,z)}{k_{B}T}}\right)} . Substituting this Boltzmann relation into the local electric charge density expression, the following expression can be obtained ρ e = ∑ i c i q i = ∑ i c i 0 q i exp ⁡ ( − q i ψ ( x , y , z ) k B T ) . 
{\displaystyle \rho _{e}=\sum _{i}c_{i}q_{i}=\sum _{i}c_{i}^{0}q_{i}\exp \left({\frac {-q_{i}\psi (x,y,z)}{k_{B}T}}\right).} Finally, the charge density can be substituted into the Poisson equation to produce the Poisson–Boltzmann equation: [ 4 ] ∇ 2 ψ = − 1 ε ∑ i c i 0 q i exp ⁡ ( − q i ψ ( x , y , z ) k B T ) {\displaystyle \nabla ^{2}\psi =-{\frac {1}{\varepsilon }}\sum _{i}c_{i}^{0}q_{i}\exp \left({\frac {-q_{i}\psi (x,y,z)}{k_{B}T}}\right)} When distance is measured in multiples of the Bjerrum length l b {\displaystyle l_{b}} and potential is measured in multiples of k B T / e {\displaystyle k_{B}T/e} , then for a symmetric monovalent electrolyte the equation can be rearranged into the dimensionless form ∇ 2 ψ = 2 c 0 ( l b ) 3 sinh ⁡ ( ψ ) . {\displaystyle \nabla ^{2}\psi =2c_{0}(l_{b})^{3}\sinh \left(\psi \right).} The Poisson–Boltzmann equation can take many forms throughout various scientific fields. In biophysics and certain surface chemistry applications, it is known simply as the Poisson–Boltzmann equation. [ 9 ] It is also known in electrochemistry as Gouy–Chapman theory; in solution chemistry as Debye–Hückel theory ; and in colloid chemistry as Derjaguin–Landau–Verwey–Overbeek (DLVO) theory . [ 9 ] Only minor modifications are necessary to apply the Poisson–Boltzmann equation to various interfacial models, making it a highly useful tool in determining electrostatic potential at surfaces. [ 4 ] Because the Poisson–Boltzmann equation is a second-order partial differential equation, it is commonly solved numerically ; however, for certain geometries it can be solved analytically. The geometry that most easily facilitates this is a planar surface. In the case of an infinitely extended planar surface, there are two dimensions in which the potential cannot change because of symmetry. Assuming these are the y and z dimensions, only the x dimension is left. Below, the Poisson–Boltzmann equation reduces to a second-order ordinary differential equation in x.
[ 4 ] d 2 ψ d x 2 = c 0 e ε ⋅ [ e e ψ ( x ) k B T − e − e ψ ( x ) k B T ] {\displaystyle {\frac {d^{2}\psi }{dx^{2}}}={\frac {c_{0}e}{\varepsilon }}\cdot \left[e^{\frac {e\psi (x)}{k_{\mathrm {B} }T}}-e^{\frac {-e\psi (x)}{k_{\mathrm {B} }T}}\right]} Analytical solutions have also been found for the axial and spherical cases in a particular study. [ 10 ] The solution takes the form of a logarithm of a power series; the equation solved is: d 2 ψ d r 2 + L r d ψ d r = e ψ − δ e − ψ {\displaystyle {\frac {d^{2}\psi }{dr^{2}}}+{\frac {L}{r}}{\frac {d\psi }{dr}}=e^{\psi }-\delta e^{-\psi }} It uses a dimensionless potential ψ = e Φ k T {\displaystyle \psi ={\frac {e\Phi }{kT}}} , and lengths are measured in units of the Debye electron radius in the region of zero potential, R e D = k T 4 π e 2 n e 0 {\displaystyle R_{eD}={\sqrt {\frac {kT}{4\pi e^{2}n_{e0}}}}} (where n e 0 {\displaystyle n_{e0}} denotes the number density of negative ions in the zero-potential region). For the spherical case L = 2, for the axial case L = 1, and for the planar case L = 0. When using the Poisson–Boltzmann equation, it is important to determine whether the specific case involves low or high potential . The high-potential case is more complex, so where applicable the low-potential equation is used. In the low-potential condition, the linearized version of the Poisson–Boltzmann equation (shown below) is valid, and it is commonly used because it is simpler and covers a wide variety of cases. [ 11 ] ψ = ψ 0 e − K x {\displaystyle \psi =\psi _{0}e^{-\mathrm {K} x}} Strictly, low potential means that e | ψ | ≪ k B T {\displaystyle e\left\vert \psi \right\vert \ll k_{\mathrm {B} }T} ; however, the results that the equation yields are valid for a wider range of potentials, up to 50–80 mV. [ 4 ] Nevertheless, at room temperature, ψ ≤ 25 m V {\displaystyle \psi \leq 25\,\mathrm {mV} } , and that is generally the standard.
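The decay constant K of the linearized solution can be evaluated numerically; a sketch for a 1:1 salt in water at 25 °C, assuming a relative permittivity of 78.5, which reproduces the commonly quoted rule of thumb λ_D ≈ 0.304 nm/√c₀:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
N_A = 6.02214076e23         # Avogadro constant, 1/mol

def debye_length_nm(c0_molar, eps_r=78.5, T=298.15):
    # lambda_D = 1/K with K = sqrt(2 * c0 * e^2 / (eps * kB * T));
    # c0 is converted from mol/L to ions/m^3
    c0 = c0_molar * 1000.0 * N_A
    kappa = math.sqrt(2.0 * c0 * E_CHARGE ** 2 / (eps_r * EPS0 * K_B * T))
    return 1e9 / kappa  # in nanometres

# For a 1 M monovalent salt this gives ~0.304 nm; screening shortens
# the Debye length as concentration rises
assert abs(debye_length_nm(1.0) - 0.304) < 0.005
```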
[ 4 ] Some boundary conditions that apply in low potential cases are that: at the surface, the potential must be equal to the surface potential and at large distances from the surface the potential approaches a zero value. This distance decay length is yielded by the Debye length λ D {\displaystyle \lambda _{D}} equation. [ 4 ] K = 2 c 0 e 2 ε k B T {\displaystyle \mathrm {K} ={\sqrt {\frac {2c_{0}e^{2}}{\varepsilon k_{\mathrm {B} }T}}}} λ D = K − 1 {\displaystyle \lambda _{D}=\mathrm {K} ^{-1}} As salt concentration increases, the Debye length decreases due to the ions in solution screening the surface charge. [ 12 ] A special instance of this equation is for the case of 25 ∘ C {\displaystyle 25^{\circ }C} water with a monovalent salt. [ 4 ] The Debye length equation is then: λ D = 0.304 n m c 0 {\displaystyle \lambda _{D}={\frac {\mathrm {0.304nm} }{\sqrt {c_{0}}}}} where c 0 {\displaystyle c_{0}} is the salt concentration in mol/L. These equations all require 1:1 salt concentration cases, but if ions that have higher valence are present, the following case is used. [ 4 ] K = e 2 ε k B T ∑ c i Z i 2 {\displaystyle \mathrm {K} ={\sqrt {{\frac {e^{2}}{\varepsilon k_{\mathrm {B} }T}}\sum c_{i}{Z_{i}}^{2}}}} The high-potential case is referred to as the “full one-dimensional case”. In order to obtain the equation, the general solution to the Poisson–Boltzmann equation is used and the case of low potentials is dropped. The equation is solved with a dimensionless parameter y ≡ e ψ k B T {\displaystyle y\equiv {\frac {e\psi }{k_{B}T}}} , which is not to be confused with the spatial coordinate symbol, y. [ 4 ] Employing several trigonometric identities and the boundary conditions that at large distances from the surface, the dimensionless potential and its derivative are zero, the high potential equation is revealed. 
[ 4 ] e − K x = ( e y / 2 − 1 ) ( e y 0 / 2 + 1 ) ( e y / 2 + 1 ) ( e y 0 / 2 − 1 ) {\displaystyle e^{-\mathrm {K} x}={\frac {(e^{y/2}-1)(e^{y_{0}/2}+1)}{(e^{y/2}+1)(e^{y_{0}/2}-1)}}} This equation solved for e y / 2 {\displaystyle e^{y/2}} is shown below. e y / 2 = e y 0 / 2 + 1 + ( e y 0 / 2 − 1 ) ⋅ e − K x e y 0 / 2 + 1 − ( e y 0 / 2 − 1 ) ⋅ e − K x {\displaystyle e^{y/2}={\frac {e^{y_{0}/2}+1+(e^{y_{0}/2}-1)\cdot e^{-\mathrm {K} x}}{e^{y_{0}/2}+1-(e^{y_{0}/2}-1)\cdot e^{-\mathrm {K} x}}}} In order to obtain a more useful equation that facilitates graphing high potential distributions, take the natural logarithm of both sides and solve for the dimensionless potential, y. y = 2 ln ⁡ e y 0 / 2 + 1 + ( e y 0 / 2 − 1 ) ⋅ e − K x e y 0 / 2 + 1 − ( e y 0 / 2 − 1 ) ⋅ e − K x {\displaystyle y=2\ln {\frac {e^{y_{0}/2}+1+(e^{y_{0}/2}-1)\cdot e^{-\mathrm {K} x}}{e^{y_{0}/2}+1-(e^{y_{0}/2}-1)\cdot e^{-\mathrm {K} x}}}} Knowing that y ≡ e ψ k B T {\displaystyle y\equiv {\frac {e\psi }{k_{B}T}}} , substitute this for y in the previous equation and solve for ψ {\displaystyle \psi } . The following equation is rendered. ψ = 2 k B T e ⋅ ln ⁡ e y 0 / 2 + 1 + ( e y 0 / 2 − 1 ) ⋅ e − K x e y 0 / 2 + 1 − ( e y 0 / 2 − 1 ) ⋅ e − K x {\displaystyle \psi ={\frac {2k_{B}T}{e}}\cdot \ln {\frac {e^{y_{0}/2}+1+(e^{y_{0}/2}-1)\cdot e^{-\mathrm {K} x}}{e^{y_{0}/2}+1-(e^{y_{0}/2}-1)\cdot e^{-\mathrm {K} x}}}} y 0 = e ψ 0 k B T {\displaystyle y_{0}={\frac {e\psi _{0}}{k_{B}T}}} In low potential cases, the high potential equation may be used and will still yield accurate results. As the potential rises, the low potential, linear case overestimates the potential as a function of distance from the surface. This overestimation is visible at distances less than half the Debye length , where the decay is steeper than exponential decay. The following figure employs the linearized equation and the high potential graphing equation derived above. 
It is a potential-versus-distance graph for varying surface potentials of 50, 100, 150, and 200 mV. The equations employed in this figure assume an 80mM NaCl solution. The Poisson–Boltzmann equation can be applied in a variety of fields mainly as a modeling tool to make approximations for applications such as charged biomolecular interactions, dynamics of electrons in semiconductors or plasma, etc. Most applications of this equation are used as models to gain further insight on electrostatics . The Poisson–Boltzmann equation can be applied to biomolecular systems. One example is the binding of electrolytes to biomolecules in a solution. This process is dependent upon the electrostatic field generated by the molecule, the electrostatic potential on the surface of the molecule, as well as the electrostatic free energy. [ 13 ] The linearized Poisson–Boltzmann equation can be used to calculate the electrostatic potential and free energy of highly charged molecules such as tRNA in an ionic solution with different number of bound ions at varying physiological ionic strengths. It is shown that electrostatic potential depends on the charge of the molecule, while the electrostatic free energy takes into account the net charge of the system. [ 14 ] Another example of utilizing the Poisson–Boltzmann equation is the determination of an electric potential profile at points perpendicular to the phospholipid bilayer of an erythrocyte . This takes into account both the glycocalyx and spectrin layers of the erythrocyte membrane. This information is useful for many reasons including the study of the mechanical stability of the erythrocyte membrane. 
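The overestimation by the linearized solution, visible in such potential–distance curves, can be reproduced directly from the high-potential expression derived earlier; a sketch in dimensionless form with y₀ = 4 (about 100 mV at room temperature, where kT/e ≈ 25.7 mV):

```python
import math

def y_exact(y0, kx):
    # Full one-dimensional (high-potential) solution in dimensionless form,
    # with kx = K*x the distance in Debye lengths
    g = math.exp(y0 / 2.0)
    e = math.exp(-kx)
    return 2.0 * math.log((g + 1 + (g - 1) * e) / (g + 1 - (g - 1) * e))

def y_linear(y0, kx):
    # Linearized (low-potential) solution y = y0 * e^{-Kx}
    return y0 * math.exp(-kx)

y0 = 4.0
# Away from the surface the linearized profile always overestimates the exact one
for kx in (0.5, 1.0, 2.0):
    assert y_linear(y0, kx) > y_exact(y0, kx)
```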
[ 15 ] The Poisson–Boltzmann equation can also be used to calculate the electrostatic free energy for hypothetically charging a sphere using the following charging integral: Δ G el = ∫ τ q U ( τ ′ ) d τ ′ {\displaystyle \Delta G^{\text{el}}=\int ^{\tau }qU(\tau ')\,d\tau '} where τ q {\displaystyle \tau q} is the final charge on the sphere. The electrostatic free energy can also be expressed by taking the process of charging the system. The following expression utilizes the chemical potential of solute molecules and implements the Poisson–Boltzmann equation with the Euler–Lagrange functional: Δ G el = ∫ V ( k T ∑ i c i ∞ [ 1 − exp ⁡ ( − z i q U k T ) ] + p f U − ε ( ∇ U ) 2 8 π ) d V {\displaystyle \Delta G^{\text{el}}=\int _{V}\left(kT\sum _{i}c_{i}^{\infty }\left[1-\exp \left({\frac {-z_{i}qU}{kT}}\right)\right]+p^{f}U-{\frac {\varepsilon ({\boldsymbol {\nabla }}U)^{2}}{8\pi }}\right)dV} Note that the free energy is independent of the charging pathway [5c]. The above expression can be rewritten into separate free energy terms based on different contributions to the total free energy Δ G el = Δ G ef + Δ G em + Δ G mob + Δ G solv {\displaystyle \Delta G^{\text{el}}=\Delta G^{\text{ef}}+\Delta G^{\text{em}}+\Delta G^{\text{mob}}+\Delta G^{\text{solv}}} where each term represents a distinct physical contribution to the total free energy. Finally, combining the last three terms gives the following expression for the outer space contribution to the free energy density integral Δ G out = Δ G em + Δ G mob + Δ G solv {\displaystyle \Delta G^{\text{out}}=\Delta G^{\text{em}}+\Delta G^{\text{mob}}+\Delta G^{\text{solv}}} These equations can act as simple geometry models for biological systems such as proteins , nucleic acids , and membranes. [ 13 ] This involves the equations being solved with simple boundary conditions such as constant surface potential. These approximations are useful in fields such as colloid chemistry .
[ 13 ] An analytical solution to the Poisson–Boltzmann equation can be used to describe electron–electron interactions in a metal–insulator–semiconductor (MIS) junction. [ 16 ] This can be used to describe both the time and position dependence of dissipative systems such as mesoscopic systems. This is done by solving the Poisson–Boltzmann equation analytically in the three-dimensional case. Solving it yields expressions for the distribution function of the Boltzmann equation and the self-consistent average potential of the Poisson equation . These expressions are useful for analyzing quantum transport in a mesoscopic system. In metal–insulator–semiconductor tunneling junctions, electrons can build up close to the interface between layers, and as a result the quantum transport of the system will be affected by the electron–electron interactions. [ 16 ] Certain transport properties such as electric current and electronic density can be obtained by solving for the self-consistent Coulombic average potential from the electron–electron interactions, which is related to the electronic distribution. Therefore, it is essential to solve the Poisson–Boltzmann equation analytically in order to obtain the analytical quantities in the MIS tunneling junctions. 
[ 16 ] Applying the following analytical solution of the Poisson–Boltzmann equation (see section 2) to MIS tunneling junctions, the following expression can be formed to express electronic transport quantities such as electronic density and electric current: {\displaystyle f=f_{0}+{\frac {eE_{z}\tau _{0}}{m}}{\frac {\partial f_{0}}{\partial v_{z}}}\left(1-e^{\frac {-t}{\tau _{0}}}\right)-\int _{0}^{t}{\frac {e}{m}}e^{\frac {t-t'}{\tau _{0}}}\nabla \rho [r-v(t-t')]\cdot {\frac {\partial f_{0}}{\partial v}}\,dt'} Applying the equation above to the MIS tunneling junction, electronic transport can be analyzed along the z-axis, which is referenced perpendicular to the plane of the layers. An n-type junction is chosen in this case with a bias V applied along the z-axis. The self-consistent average potential of the system can be found using {\displaystyle \rho =\rho _{1}+\rho _{2}} where λ is called the Debye length . The electronic density and electric current can be found by manipulating equation 16 above as functions of position z. These electronic transport quantities can be used to help understand various transport properties in the system. As with any approximate model, the Poisson–Boltzmann equation is an approximation rather than an exact representation. [ 4 ] Several assumptions were made to approximate the potential of the diffuse layer. The finite size of the ions was considered negligible and ions were treated as individual point charges, and ions were assumed to interact with the average electrostatic field of all their neighbors rather than with each neighbor individually. In addition, non-Coulombic interactions were not considered and certain interactions were unaccounted for, such as the overlap of ion hydration spheres in an aqueous system. 
The permittivity of the solvent was assumed to be constant, resulting in a rough approximation as polar molecules are prevented from freely moving when they encounter the strong electric field at the solid surface. Though the model faces certain limitations, it describes electric double layers very well. The errors resulting from the previously mentioned assumptions cancel each other for the most part. Accounting for non-Coulombic interactions increases the ion concentration at the surface and leads to a reduced surface potential. On the other hand, including the finite size of the ions causes the opposite effect. The Poisson–Boltzmann equation is most appropriate for approximating the electrostatic potential at the surface for aqueous solutions of univalent salts at concentrations smaller than 0.2 M and potentials not exceeding 50–80 mV. In the limit of strong electrostatic interactions, a strong coupling theory is more applicable than the weak coupling assumed in deriving the Poisson-Boltzmann theory. [ 17 ]
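The stated potential limit applies to the full Poisson–Boltzmann model; the commonly used linearized (Debye–Hückel) form requires even smaller potentials. A small sketch (my own illustration, not from the source) of how quickly the linearization sinh(y) ≈ y of the Boltzmann charge-density term degrades as the surface potential grows past the thermal voltage:

```python
import math

KT_OVER_E_mV = 25.693  # thermal voltage k_B*T/e at 298.15 K, in mV

def linearization_error(psi_mV, z=1):
    """Relative error of the Debye-Hueckel linearization sinh(y) ~= y
    in the PB charge-density term, for a surface potential psi in mV."""
    y = z * psi_mV / KT_OVER_E_mV
    return (math.sinh(y) - y) / math.sinh(y)
```

At 10 mV the linearized term is within a few percent of the full expression, while at 100 mV it misses most of the counter-ion response, which is one reason the linearized theory is restricted to low potentials.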
https://en.wikipedia.org/wiki/Poisson–Boltzmann_equation
Poka-yoke ( ポカヨケ , [poka joke] ) is any mechanism in a process that helps an equipment operator to avoid mistakes and defects by preventing, correcting, or drawing attention to human errors as they occur. [ 1 ] It is a Japanese term that means "mistake-proofing" or "error prevention", and is also sometimes referred to as a forcing function or a behavior-shaping constraint . The concept was formalized, and the term adopted, by Shigeo Shingo as part of the Toyota Production System . [ 2 ] [ 3 ] Poka-yoke was originally baka -yoke , but as this means "fool-proofing" (or " idiot-proofing ") the name was changed to the milder poka-yoke . [ 4 ] Poka-yoke is derived from poka o yokeru ( ポカを避ける ), a term in shogi that means avoiding an unthinkably bad move. More broadly, the term can refer to any behavior-shaping constraint designed into a process to prevent incorrect operation by the user. A simple poka-yoke example is demonstrated when the driver of a car equipped with a manual gearbox must press the clutch pedal (a process step, and therefore a poka-yoke) before starting the automobile. [ 5 ] The interlock serves to prevent unintended movement of the car. Another example of poka-yoke is a car equipped with an automatic transmission, which has a switch that requires the car to be in "Park" or "Neutral" before the car can be started (some automatic transmissions require the brake pedal to be depressed as well). These serve as behavior-shaping constraints, as the action of "car in Park (or Neutral)" or "foot depressing the clutch/brake pedal" must be performed before the car is allowed to start. The requirement of a depressed brake pedal to shift most cars with an automatic transmission from "Park" to any other gear is yet another example of a poka-yoke application. Over time, the driver's behavior conforms to the requirements through repetition and habit. 
When automobiles first started shipping with on-board GPS systems, it was not uncommon to use a forcing function that prevented the user from interacting with the GPS (such as entering a destination) while the car was in motion, ensuring that the driver's attention was not distracted by the GPS. However, many drivers found this feature irksome, and the forcing function has largely been abandoned. This reinforces the idea that forcing functions are not always the best approach to shaping behavior. The microwave oven provides another example of a forcing function. [ 6 ] In all modern microwave ovens, it is impossible to start the microwave while the door is still open. Likewise, the microwave will shut off automatically if the door is opened by the user. By forcing the user to close the microwave door while it is in use, it becomes impossible for the user to err by leaving the door open. Forcing functions are very effective in safety-critical situations such as this, but can cause confusion in more complex systems that do not inform the user of the error that has been made. These forcing functions are also being used in the service industry. Call centers concerned with credit card fraud and friendly fraud use agent-assisted automation to prevent the agent from seeing or hearing the credit card information so that it cannot be stolen. The customer punches the information into their phone keypad; the tones are masked to the agent and are not visible in the customer relationship management software. [ 7 ] The term poka-yoke was applied by Shigeo Shingo in the 1960s to industrial processes designed to prevent human errors. [ 1 ] Shingo redesigned a process in which factory workers, while assembling a small switch, would often forget to insert the required spring under one of the switch buttons. 
In the redesigned process, the worker would perform the task in two steps, first preparing the two required springs and placing them in a placeholder, then inserting the springs from the placeholder into the switch. When a spring remained in the placeholder, the workers knew that they had forgotten to insert it and could correct the mistake effortlessly. [ 8 ] Shingo distinguished between the concepts of inevitable human mistakes and defects in production. Defects occur when mistakes are allowed to reach the customer. The aim of poka-yoke is to design the process so that mistakes can be detected and corrected immediately, eliminating defects at the source. Poka-yoke can be implemented at any step of a manufacturing process where something can go wrong or an error can be made. [ 9 ] For example, a fixture that holds pieces for processing might be modified to only allow pieces to be held in the correct orientation, [ 10 ] or a digital counter might track the number of spot welds on each piece to ensure that the worker executes the correct number of welds. [ 10 ] Shingo recognized three types of poka-yoke for detecting and preventing errors in a mass production system: the contact method, the fixed-value method, and the motion-step method. [ 2 ] [ 9 ] Either the operator is alerted when a mistake is about to be made, or the poka-yoke device actually prevents the mistake from being made. In Shingo's lexicon, the former implementation would be called a warning poka-yoke, while the latter would be referred to as a control poka-yoke. [ 2 ] Shingo argued that errors are inevitable in any manufacturing process, but that if appropriate poka-yokes are implemented, then mistakes can be caught quickly and prevented from resulting in defects. By eliminating defects at the source, the cost of mistakes within a company is reduced. [ citation needed ] A typical feature of poka-yoke solutions is that they don't let an error in a process happen. Other advantages include: [ 11 ]
https://en.wikipedia.org/wiki/Poka-yoke
Pokhozhaev's identity is an integral relation satisfied by stationary localized solutions to a nonlinear Schrödinger equation or nonlinear Klein–Gordon equation . It was obtained by S.I. Pokhozhaev [ 1 ] and is similar to the virial theorem . This relation is also known as G.H. Derrick's theorem . Similar identities can be derived for other equations of mathematical physics. Here is a general form due to H. Berestycki and P.-L. Lions . [ 2 ] Let g ( s ) {\displaystyle g(s)} be continuous and real-valued, with g ( 0 ) = 0 {\displaystyle g(0)=0} . Denote G ( s ) = ∫ 0 s g ( t ) d t {\displaystyle G(s)=\int _{0}^{s}g(t)\,dt} . Let be a solution to the equation in the sense of distributions . Then u {\displaystyle u} satisfies the relation There is a form of the virial identity for the stationary nonlinear Dirac equation in three spatial dimensions (and also the Maxwell-Dirac equations ) [ 3 ] and in arbitrary spatial dimension. [ 4 ] Let n ∈ N , N ∈ N {\displaystyle n\in \mathbb {N} ,\,N\in \mathbb {N} } and let α i , 1 ≤ i ≤ n {\displaystyle \alpha ^{i},\,1\leq i\leq n} and β {\displaystyle \beta } be the self-adjoint Dirac matrices of size N × N {\displaystyle N\times N} : Let D 0 = − i α ⋅ ∇ = − i ∑ i = 1 n α i ∂ ∂ x i {\displaystyle D_{0}=-\mathrm {i} \alpha \cdot \nabla =-\mathrm {i} \sum _{i=1}^{n}\alpha ^{i}{\frac {\partial }{\partial x^{i}}}} be the massless Dirac operator . Let g ( s ) {\displaystyle g(s)} be continuous and real-valued, with g ( 0 ) = 0 {\displaystyle g(0)=0} . Denote G ( s ) = ∫ 0 s g ( t ) d t {\displaystyle G(s)=\int _{0}^{s}g(t)\,dt} . Let ϕ ∈ L l o c ∞ ( R n , C N ) {\displaystyle \phi \in L_{\mathrm {loc} }^{\infty }(\mathbb {R} ^{n},\mathbb {C} ^{N})} be a spinor -valued solution that satisfies the stationary form of the nonlinear Dirac equation , in the sense of distributions , with some ω ∈ R {\displaystyle \omega \in \mathbb {R} } . Assume that Then ϕ {\displaystyle \phi } satisfies the relation
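In the notation above, the relation for the scalar case is usually stated as follows; this is a sketch of the standard Berestycki–Lions form, with the precise regularity assumptions on u omitted:

```latex
% If u solves  -\Delta u = g(u)  on  \mathbb{R}^n  in the sense of distributions,
% with  \nabla u \in L^2(\mathbb{R}^n)  and  G(u) \in L^1(\mathbb{R}^n),
% then Pokhozhaev's identity reads:
\frac{n-2}{2}\int_{\mathbb{R}^n}|\nabla u|^{2}\,dx \;=\; n\int_{\mathbb{R}^n}G\bigl(u(x)\bigr)\,dx .
```

For n = 3 this is the familiar Derrick-type balance between the gradient term and the potential term of the action.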
https://en.wikipedia.org/wiki/Pokhozhaev's_identity
Pokkesviricetes is a class of viruses . [ 1 ] The name of the class, Pokkesviricetes , is derived from pockes , the Middle English word for pox, referring to the disease associated with certain members of Poxviridae . The following orders are recognized:
https://en.wikipedia.org/wiki/Pokkesviricetes
In physical chemistry , the Polanyi potential theory, also called Polanyi's potential theory of adsorption or Eucken–Polanyi potential theory , is a model of adsorption proposed independently by Michael Polanyi and Arnold Eucken . Under this model, adsorption can be measured through the equilibrium between the chemical potential of a gas near the surface and the chemical potential of the gas a large distance away. In this model, the attraction of the gas to the surface, largely due to Van der Waals forces , is determined by the position of the gas particle relative to the surface, and the gas behaves as an ideal gas until condensation, where the gas exceeds its equilibrium vapor pressure . While the adsorption theory of William Henry is more applicable at low pressure and the adsorption isotherm equation from Brunauer–Emmett–Teller (BET) theory is more useful from 0.05 to 0.35 P / P 0 , the Polanyi potential theory has much more application at higher P / P 0 (≈ 0.1–0.8). In 1914, Arnold Eucken introduced his theory of adsorption and coined the term adsorption potential ( German : Adsorptionspotential ). [ 1 ] [ 2 ] [ 3 ] A few months later, Michael Polanyi wrote his first paper on adsorption, in which he proposed a model for the adsorption of gas onto a solid surface. [ 1 ] Afterwards, he published a fully developed paper in 1916, which included experimental verification by his students and other authors. During his research at the University of Budapest , his mentor, Georg Bredig , sent his research findings to Albert Einstein . Einstein wrote back to Bredig stating: The papers of your M. Polanyi please me a lot. I have checked over the essentials in them and found them fundamentally correct. Polanyi later described this event by saying: Bang! I was a scientist. Polanyi and Einstein continued to write to each other on and off for the next 20 years. Polanyi's model of adsorption was met with much criticism for several decades after its publication. 
His simplistic model for determining adsorption was formed during the time of the discovery of Peter Debye 's fixed dipoles and Niels Bohr 's atomic model , as well as the developing theory of intermolecular forces and electrostatic forces by key figures in the chemistry world including W.H. Bragg , W.L. Bragg , and Willem Hendrik Keesom . Opponents of his model claimed that Polanyi's theory did not take into account these emerging theories. Criticism included that the model did not take into account the electrical interactions of the gas and the surface, and that the presence of other molecules would screen off the attraction of the gas to the surface. Polanyi's model was furthermore put under scrutiny following the adsorption model developed by Irving Langmuir from 1916 to 1918, research that would eventually win Langmuir the Nobel Prize in Chemistry in 1932. However, Polanyi was not able to participate in many of these discussions because he served as a medical officer for the Austro-Hungarian army on the Serbian front during World War I . Polanyi wrote about this experience saying: I myself was protected for a while against any knowledge of these developments by serving as a medical officer in the Austro-Hungarian Army from August 1914 to October 1918, and by the subsequent revolutions and counter revolutions that lasted until the end of 1919. Members of less-well-informed circles elsewhere continued to be impressed for some time by the simplicity of my theory and its wide experimental verifications. [ 1 ] Polanyi described the “turning point” in the acceptance of his model of adsorption as occurring when Fritz Haber asked him to defend his theory in full at the Kaiser Wilhelm Institute for Physical Chemistry in Berlin, Germany. Many key players in the scientific world were present at this meeting, including Einstein. 
After hearing Polanyi's full explanation of his model, Haber and Einstein claimed that Polanyi “had displayed a total disregard for the scientifically established structure of the matter”. Years later, Polanyi described his ordeal by concluding, Professionally, I survived the occasion only by the skin of my teeth. Polanyi continued to provide supporting evidence for the validity of his model in the years after this meeting. [ 1 ] [ non-primary source needed ] Polanyi considered Eucken's theory erroneous and claimed that Eucken modified his own theory in 1922 to fit with his. [ 1 ] Polanyi's 'deliverance' (as he described it) from these rejections and criticisms of his model occurred in 1930, when Fritz London proposed a new theory of cohesive forces founded on the quantum-mechanical theory of the polarization of electronic systems. Polanyi wrote to London asking, “Are these forces subject to screening by intervening molecules? Would a solid acting by these forces possess a spatially fixed adsorption potential?” After computational analysis, a joint publication was made by Polanyi and London claiming that the adsorptive forces behaved similarly to the model that Polanyi had proposed. [ 1 ] Polanyi's theory has historical significance, having been used as a foundation for other models, such as the theory of volume filling of micropores (TVFM) and the Dubinin–Radushkevich theory . Other research has been performed loosely involving the potential theory of Polanyi, such as the capillary condensation phenomenon discovered by Richard Adolf Zsigmondy . Unlike Polanyi's theory, which involves a flat surface, Zsigmondy's research involves a porous structure like silica materials. His research proved that condensation of vapors can occur in narrow pores below the standard saturated vapour pressure . 
[ 2 ] The Polanyi potential adsorption theory is based on the assumption that the molecules near a surface move according to a potential, similar to that of gravity or electric fields. [ 4 ] This model is applicable in the case of gases at a surface at constant temperature. Gas molecules move closer to that surface when the pressure is higher than the equilibrium vapor pressure. The change in potential relative to the distance from the surface can be calculated using the formula for the difference of the chemical potential, where μ {\displaystyle \mu } is the chemical potential , S m {\displaystyle S_{\rm {m}}} is the molar entropy , V m {\displaystyle V_{\rm {m}}} is the molar volume , and U m {\displaystyle U_{\rm {m}}} is the molar internal energy . At equilibrium, the chemical potential of a gas at a distance r {\displaystyle r} from a surface, μ ( r , p r ) {\displaystyle {\mu (r,p_{r})}} , is equal to the chemical potential of the gas at an infinitely large distance from the surface, μ ( ∞ , p ) {\displaystyle {\mu (\infty ,p)}} . As a result, integration from an infinitely far distance to a distance r from the surface leads to an expression in which p r {\displaystyle p_{r}} is the partial pressure at distance r and p {\displaystyle p} is the partial pressure at infinite distance from the surface. Since the temperature remains constant, the difference in chemical potential formula can be integrated over the pressures p {\displaystyle p} and p r {\displaystyle p_{r}} . By setting U m ( ∞ ) = 0 {\displaystyle U_{\rm {m}}(\infty )=0} , the equation can be simplified, and using the ideal gas law , p V m = R T {\displaystyle pV_{\rm {m}}=RT} , the working formula is obtained. Since gas condenses into a liquid on a surface when the pressure of the gas exceeds the equilibrium vapor pressure, p 0 {\displaystyle p_{0}} , we can assume a liquid film forms over the surface with thickness δ {\displaystyle \delta } . 
The energy at p 0 {\displaystyle p_{0}} follows from the same relation. Considering that the partial pressure of the gases relates to the concentration, the adsorption potential ε s {\displaystyle \varepsilon _{\rm {s}}} can be calculated as ε s = R T ln ⁡ ( c s / c ) {\displaystyle \varepsilon _{\rm {s}}=RT\ln(c_{\rm {s}}/c)} where c s {\displaystyle c_{\rm {s}}} is the saturated concentration of adsorbate and c {\displaystyle c} is the equilibrium concentration of the adsorbate. The potential theory underwent many refinements and changes throughout the years since its first report. One major theory of note developed from Polanyi's theory is the Dubinin theory, comprising the Dubinin–Radushkevich and Dubinin–Astakhov equations. Using the adsorption potential, the degree of filling of the adsorption space, θ {\displaystyle \theta } , can be calculated as θ = a / a 0 = exp ⁡ [ − ( A / E ) b ] {\displaystyle \theta =a/a_{0}=\exp[-(A/E)^{b}]} where a {\displaystyle a} is the value of adsorption at temperature T and equilibrium pressure p , a 0 {\displaystyle a_{0}} is the maximum value of adsorption, E {\displaystyle E} is the characteristic energy of adsorption in kJ/mol, A {\displaystyle A} is the loss in Gibbs free energy in adsorption, equal to Δ G = − R T log ⁡ ( p 0 / p ) {\displaystyle \Delta G=-RT\log(p_{0}/p)} , and b {\displaystyle b} is the fitting coefficient. 
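The adsorption potential above can be evaluated directly. A minimal Python sketch (my own illustration, not from the source), computing ε_s = RT ln(c_s/c) for an adsorbate at a given fraction of its saturation concentration:

```python
import math

R = 8.314462618  # molar gas constant, J/(mol K)

def adsorption_potential(c_eq, c_sat, temp=298.15):
    """Polanyi adsorption potential eps_s = R*T*ln(c_s/c) in J/mol,
    from the equilibrium concentration c_eq and the saturated
    concentration c_sat of the adsorbate (any consistent units)."""
    if c_eq <= 0 or c_sat <= 0:
        raise ValueError("concentrations must be positive")
    return R * temp * math.log(c_sat / c_eq)
```

At one tenth of saturation this gives roughly 5.7 kJ/mol at room temperature; the potential grows logarithmically as the system moves further from saturation.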
[ 5 ] The Dubinin–Radushkevich equation, where b {\displaystyle b} is equal to 2, and the optimized Dubinin–Astakhov equation, where b {\displaystyle b} is fit to experimental data, can be simplified accordingly. Other studies have used the Dubinin–Astakhov equation in the similar form log ⁡ q e = log ⁡ Q 0 + ( ε s w / E ) b {\displaystyle \log q_{\rm {e}}=\log Q^{0}+(\varepsilon _{\rm {sw}}/E)^{b}} , where q e {\displaystyle q_{\rm {e}}} is the equilibrium adsorbed concentration of adsorbate in mg/g, Q 0 {\displaystyle Q^{0}} is the maximum adsorbed concentration of adsorbate in mg/g, and ε s w {\displaystyle \varepsilon _{\rm {sw}}} is the effective adsorption potential, equal to ε s w = − R T ln ⁡ ( c e / c s ) {\displaystyle \varepsilon _{\rm {sw}}=-RT\ln(c_{\rm {e}}/c_{\rm {s}})} , with c e {\displaystyle c_{\rm {e}}} the equilibrium concentration of adsorbate in the solution phase in mg/L and c s {\displaystyle c_{\rm {s}}} the adsorbate solubility in water in mg/L. [ 6 ] The characteristic energy of adsorption can be related to the characteristic energy of adsorption of a standard vapor on the same surface, E 0 {\displaystyle E_{0}} , through the use of an affinity coefficient, β {\displaystyle \beta } . The affinity coefficient is a ratio of the properties of the sample and standard vapors, where α {\displaystyle \alpha } and α 0 {\displaystyle \alpha _{0}} are the polarizabilities of the sample and standard vapors, respectively. Many studies have been performed to determine optimal fitting coefficients, b {\displaystyle b} , and affinity coefficients, β {\displaystyle \beta } , to best describe the adsorption of gases and vapors onto solids. As a result, the Dubinin–Astakhov equation remains in use in adsorption studies due to the accuracy it can obtain when fitted to experimental results. In many modern studies, the Polanyi theory is widely used in the study of activated carbons, or carbon black. 
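The Dubinin-type isotherms are straightforward to evaluate. A hedged sketch (my own illustration; E and the exponent b are illustrative fitting parameters, not values from the source) of the degree of pore filling θ = exp[−(A/E)^b], where A = RT ln(p₀/p):

```python
import math

R = 8.314462618  # molar gas constant, J/(mol K)

def dubinin_astakhov(p_rel, E, b=2.0, temp=298.15):
    """Degree of pore filling theta = exp[-(A/E)^b], where
    A = R*T*ln(p0/p) is the adsorption potential (J/mol) and
    E is the characteristic energy of adsorption (J/mol).
    b = 2 recovers the Dubinin-Radushkevich special case."""
    if not 0 < p_rel <= 1:
        raise ValueError("relative pressure p/p0 must be in (0, 1]")
    A = R * temp * math.log(1.0 / p_rel)
    return math.exp(-(A / E) ** b)
```

The filling fraction rises monotonically toward 1 as the relative pressure approaches saturation, which is the qualitative behavior the Dubinin equations are fitted to reproduce.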
The theory has been successfully used to model a variety of scenarios, such as gas adsorption on activated carbon and the adsorption of nonionic polycyclic aromatic hydrocarbons . [ 10 ] Later on, experiments also showed that it can model ionic aromatic compounds such as phenols and anilines . More recently, the Polanyi adsorption isotherm has been used to model adsorption onto carbon nanoparticles . Historically, the theory was used to model nonuniform adsorbates and multi-component solutes. For certain pairs of adsorbates and adsorbents, the mathematical parameters of the Polanyi theory can be related to the physicochemical properties of both adsorbents and adsorbates. The theory has been used to model adsorption onto carbon nanotubes and carbon nanoparticles. In the study done by Yang and Xing, [ 6 ] the theory was shown to fit the adsorption isotherm better than the Langmuir , Freundlich , and partition models. The experiment studied the adsorption of organic molecules on carbon nanoparticles and carbon nanotubes. According to the Polanyi theory, the surface defect curvatures of carbon nanoparticles could affect their adsorption. Flat surfaces on the particles allow more surface atoms to approach adsorbing organic molecules, which increases the potential, leading to stronger interactions. The theory has been beneficial in trying to understand the adsorption mechanisms of organic compounds on carbon nanoparticles and in estimating adsorption capacity and affinity. Using this theory, researchers hope to be able to design carbon nanoparticles for specific needs, such as using them as sorbents in environmental studies. In one of the earlier studies, conducted by Manes, M., & Hofer, L. J. E., [ 11 ] the Polanyi theory was used to characterize liquid-phase adsorption isotherms on various concentrations of activated carbon using a wide range of organic solvents. The Polanyi theory was shown to be a good fit for these various systems. 
Because of these results, the study introduced the possibility of predicting isotherms for similar systems using minimal data. However, a limitation is that the adsorption isotherms for a large variety of solvents can only be fit over a limited range; the curve was not able to fit the data in the high-capacity range. The study also reported a few anomalies in the results. The adsorption of carbon tetrachloride , cyclohexane , and carbon disulfide onto activated carbon did not fit the curve well, and this remains to be explained. The researchers who conducted the experiment speculated that steric effects of carbon tetrachloride and cyclohexane may have played a role. Such studies have been done with a variety of systems, such as organic liquids from water solutions and organic solids from water solutions. Since a variety of systems had been investigated, a study was done to investigate the individual adsorption of a mixed solution. This phenomenon is also called competitive adsorption, because solutes tend to compete for the same adsorption sites. In the experiment conducted by Rosene and Manes, [ 12 ] the competitive adsorption of glucose , urea , benzoic acid , phthalide , and p-nitrophenol was studied. Using the Polanyi adsorption model, they were able to calculate the relative adsorption of each compound onto the surface of activated carbon.
https://en.wikipedia.org/wiki/Polanyi_potential_theory
Polar Science is a quarterly peer-reviewed scientific journal covering research related to the polar regions of the Earth and other planets. [ 1 ] It is published by Elsevier on behalf of the National Institute of Polar Research (Japan). It covers a wide range of fields, including atmospheric science , oceanography , glaciology and environmental science . The editor-in-chief is Takashi Yamanouchi (National Institute of Polar Research and Sōkendai ). The journal is hybrid open access . By paying a fee, authors can choose to make their papers open access at the time of publication. All papers that are over 24 months old are freely available to the community via ScienceDirect . The journal is abstracted and indexed in the Science Citation Index Expanded , Scopus , EBSCOhost , GEOBASE , GeoRef , ProQuest and Referativnyi Zhurnal ( VINITI Database RAS ). [ 2 ] According to the Journal Citation Reports , the journal has a 2020 impact factor of 1.927. [ 3 ]
https://en.wikipedia.org/wiki/Polar_Science
In chemistry , polarity is a separation of electric charge leading to a molecule or its chemical groups having an electric dipole moment , with a negatively charged end and a positively charged end. Polar molecules must contain one or more polar bonds due to a difference in electronegativity between the bonded atoms. Molecules containing polar bonds have no molecular polarity if the bond dipoles cancel each other out by symmetry. Polar molecules interact through dipole-dipole intermolecular forces and hydrogen bonds . Polarity underlies a number of physical properties including surface tension , solubility , and melting and boiling points. Not all atoms attract electrons with the same force. The amount of "pull" an atom exerts on its electrons is called its electronegativity . Atoms with high electronegativities – such as fluorine , oxygen , and nitrogen – exert a greater pull on electrons than atoms with lower electronegativities such as alkali metals and alkaline earth metals . In a bond, this leads to unequal sharing of electrons between the atoms, as electrons will be drawn closer to the atom with the higher electronegativity. Because electrons have a negative charge, the unequal sharing of electrons within a bond leads to the formation of an electric dipole : a separation of positive and negative electric charge. Because the amount of charge separated in such dipoles is usually smaller than a fundamental charge , they are called partial charges , denoted as δ+ ( delta plus) and δ− (delta minus). These symbols were introduced by Sir Christopher Ingold and Edith Hilda (Usherwood) Ingold in 1926. [ 1 ] [ 2 ] The bond dipole moment is calculated by multiplying the amount of charge separated and the distance between the charges. These dipoles within molecules can interact with dipoles in other molecules, creating dipole-dipole intermolecular forces . Bonds can fall between one of two extremes – completely nonpolar or completely polar. 
A completely nonpolar bond occurs when the electronegativities are identical and therefore possess a difference of zero. A completely polar bond is more correctly called an ionic bond , and occurs when the difference between electronegativities is large enough that one atom actually takes an electron from the other. The terms "polar" and "nonpolar" are usually applied to covalent bonds , that is, bonds where the polarity is not complete. To determine the polarity of a covalent bond using numerical means, the difference between the electronegativity of the atoms is used. Bond polarity is typically divided into three groups that are loosely based on the difference in electronegativity between the two bonded atoms, according to the Pauling scale . Pauling based this classification scheme on the partial ionic character of a bond, which is an approximate function of the difference in electronegativity between the two bonded atoms. He estimated that a difference of 1.7 corresponds to 50% ionic character, so that a greater difference corresponds to a bond which is predominantly ionic. [ 3 ] As a quantum-mechanical description, Pauling proposed that the wave function for a polar molecule AB is a linear combination of wave functions for covalent and ionic molecules: ψ = aψ(A:B) + bψ(A + B − ). The amount of covalent and ionic character depends on the values of the squared coefficients a 2 and b 2 . [ 4 ] The bond dipole moment [ 5 ] uses the idea of electric dipole moment to measure the polarity of a chemical bond within a molecule . It occurs whenever there is a separation of positive and negative charges. The bond dipole μ is given by: μ = δ d {\displaystyle \mu =\delta \,d} . The bond dipole is modeled as δ + — δ – with a distance d between the partial charges δ + and δ – . It is a vector, parallel to the bond axis, pointing from minus to plus, [ 6 ] as is conventional for electric dipole moment vectors. Chemists often draw the vector pointing from plus to minus. 
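Pauling's estimate can be put in computable form. A small sketch (my own illustration): the partial ionic character is commonly approximated as 1 − exp(−ΔEN²/4), which indeed gives about 50% at ΔEN = 1.7; the classification cutoffs below (0.4 and 1.7) are common textbook values, an assumption on my part since the text only says the groups are "loosely based" on the difference:

```python
import math

def percent_ionic_character(delta_en):
    """Pauling's estimate of partial ionic character (%) from the
    electronegativity difference: 100 * (1 - exp(-(dEN)^2 / 4))."""
    return 100 * (1 - math.exp(-delta_en**2 / 4))

def classify_bond(delta_en):
    """Rough textbook classification by electronegativity difference
    (assumed cutoffs; exact values vary between sources)."""
    if delta_en < 0.4:
        return "nonpolar covalent"
    elif delta_en < 1.7:
        return "polar covalent"
    return "ionic"
```

For example, H–H (ΔEN = 0) comes out nonpolar covalent, H–Cl (ΔEN ≈ 0.96) polar covalent, and Na–Cl (ΔEN ≈ 2.1) ionic.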
[ 7 ] This vector can be physically interpreted as the movement undergone by electrons when the two atoms are placed a distance d apart and allowed to interact: the electrons will move from their free-state positions to be localised more around the more electronegative atom. The SI unit for electric dipole moment is the coulomb–meter. This is too large to be practical on the molecular scale. Bond dipole moments are commonly measured in debyes , represented by the symbol D, which is obtained by measuring the charge δ {\displaystyle \delta } in units of 10 −10 statcoulomb and the distance d in angstroms . Since 10 −10 statcoulomb is 0.208 units of elementary charge, 1.0 debye results from an electron and a proton separated by 0.208 Å. A useful conversion factor is 1 D = 3.335 64 × 10 −30 C m. [ 8 ] For diatomic molecules there is only one (single or multiple) bond, so the bond dipole moment is the molecular dipole moment, with typical values in the range of 0 to 11 D. At one extreme, a symmetrical molecule such as bromine , Br 2 , has zero dipole moment, while near the other extreme, gas-phase potassium bromide , KBr, which is highly ionic, has a dipole moment of 10.41 D. [ 9 ] [ page needed ] [ 10 ] [ verification needed ] For polyatomic molecules, there is more than one bond. The total molecular dipole moment may be approximated as the vector sum of the individual bond dipole moments. Often bond dipoles are obtained by the reverse process: a known total dipole of a molecule can be decomposed into bond dipoles. This is done to transfer bond dipole moments to molecules that have the same bonds, but for which the total dipole moment is not yet known. The vector sum of the transferred bond dipoles gives an estimate for the total (unknown) dipole of the molecule. A molecule is composed of one or more chemical bonds between molecular orbitals of different atoms. 
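The vector-sum approximation is easy to demonstrate for a molecule with two equal bond dipoles, such as water. A sketch (my own illustration; the O–H bond dipole of ≈ 1.5 D is an assumed, commonly quoted illustrative value, not from the source):

```python
import math

def molecular_dipole(bond_dipole_D, bond_angle_deg):
    """Vector sum of two equal bond dipoles (in debye) separated by
    the given bond angle: mu_total = 2 * mu_bond * cos(angle/2)."""
    half = math.radians(bond_angle_deg) / 2
    return 2 * bond_dipole_D * math.cos(half)
```

With a bond dipole of about 1.5 D and the 104.5° H–O–H angle, this yields roughly 1.84 D, close to the measured gas-phase value of ≈ 1.86 D; for a linear arrangement (180°, as in CO₂) the two bond dipoles cancel exactly.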
A molecule may be polar either as a result of polar bonds due to differences in electronegativity as described above, or as a result of an asymmetric arrangement of nonpolar covalent bonds and non-bonding pairs of electrons known as a full molecular orbital . While molecules can be described as "polar covalent", "nonpolar covalent", or "ionic", this is often a relative term, with one molecule simply being more polar or more nonpolar than another. However, the following properties are typical of such molecules. When comparing a polar and nonpolar molecule with similar molar masses, the polar molecule in general has a higher boiling point, because the dipole–dipole interaction between polar molecules results in stronger intermolecular attractions. One common form of polar interaction is the hydrogen bond , which is also known as the H-bond. For example, water forms H-bonds and has a molar mass M = 18 and a boiling point of +100 °C, compared to nonpolar methane with M = 16 and a boiling point of −161 °C. Due to the polar nature of the water molecule itself, other polar molecules are generally able to dissolve in water. Most nonpolar molecules are water-insoluble ( hydrophobic ) at room temperature. Many nonpolar organic solvents , such as turpentine , are able to dissolve nonpolar substances. Polar compounds tend to have higher surface tension than nonpolar compounds. [ citation needed ] Polar liquids have a tendency to rise against gravity in a small-diameter tube. [ citation needed ] Polar liquids have a tendency to be more viscous than nonpolar liquids. [ citation needed ] For example, nonpolar hexane is much less viscous than polar water. However, molecule size is a much stronger factor in viscosity than polarity, with compounds of larger molecules being more viscous than compounds of smaller molecules. [ citation needed ] Thus, water (small polar molecules) is less viscous than hexadecane (large nonpolar molecules).
A polar molecule has a net dipole as a result of the opposing charges (i.e. having partial positive and partial negative charges) from polar bonds arranged asymmetrically. Water (H2O) is an example of a polar molecule since it has a slight positive charge on one side and a slight negative charge on the other. The dipoles do not cancel out, resulting in a net dipole. The dipole moment of water depends on its state. In the gas phase the dipole moment is ≈ 1.86 debye (D), [ 11 ] whereas liquid water (≈ 2.95 D) [ 12 ] and ice (≈ 3.09 D) [ 13 ] show higher values due to their differing hydrogen-bonded environments. Other examples include sugars (like sucrose ), which have many polar oxygen–hydrogen (−OH) groups and are overall highly polar. If the bond dipole moments of the molecule do not cancel, the molecule is polar. For example, the water molecule (H2O) contains two polar O−H bonds in a bent (nonlinear) geometry. The bond dipole moments do not cancel, so that the molecule forms a molecular dipole with its negative pole at the oxygen and its positive pole midway between the two hydrogen atoms. In the figure each bond joins the central O atom with a negative charge (red) to an H atom with a positive charge (blue). The hydrogen fluoride , HF, molecule is polar by virtue of polar covalent bonds – in the covalent bond electrons are displaced toward the more electronegative fluorine atom. Ammonia , NH3, is a molecule whose three N−H bonds have only a slight polarity (toward the more electronegative nitrogen atom). The molecule has a lone pair of electrons in an orbital that points towards the fourth apex of an approximately regular tetrahedron, as predicted by the VSEPR theory . This orbital is not participating in covalent bonding; it is electron-rich, which results in a powerful dipole across the whole ammonia molecule. In ozone (O3) molecules, the two O−O bonds are nonpolar (there is no electronegativity difference between atoms of the same element).
However, the distribution of other electrons is uneven – since the central atom has to share electrons with two other atoms, but each of the outer atoms has to share electrons with only one other atom, the central atom is more deprived of electrons than the others (the central atom has a formal charge of +1, while the outer atoms each have a formal charge of −1/2). Since the molecule has a bent geometry, the result is a dipole across the whole ozone molecule. A molecule may be nonpolar either when there is an equal sharing of electrons between the two atoms of a diatomic molecule or because of the symmetrical arrangement of polar bonds in a more complex molecule. For example, boron trifluoride (BF3) has a trigonal planar arrangement of three polar bonds at 120°. This results in no overall dipole in the molecule. Carbon dioxide (CO2) has two polar C=O bonds, but the geometry of CO2 is linear so that the two bond dipole moments cancel and there is no net molecular dipole moment; the molecule is nonpolar. Examples of household nonpolar compounds include fats, oil, and petrol/gasoline. In the methane molecule (CH4) the four C−H bonds are arranged tetrahedrally around the carbon atom. Each bond has polarity (though not very strong). The bonds are arranged symmetrically so there is no overall dipole in the molecule. The diatomic oxygen molecule (O2) does not have polarity in the covalent bond because of equal electronegativity, hence there is no polarity in the molecule. Large molecules that have one end with polar groups attached and another end with nonpolar groups are described as amphiphiles or amphiphilic molecules. They are good surfactants and can aid in the formation of stable emulsions, or blends, of water and fats. Surfactants reduce the interfacial tension between oil and water by adsorbing at the liquid–liquid interface. Determining the point group is a useful way to predict polarity of a molecule.
In general, a molecule will not possess a dipole moment if the individual bond dipole moments of the molecule cancel each other out. This is because dipole moments are Euclidean vector quantities with magnitude and direction, and two equal vectors that oppose each other will cancel out. Any molecule with a centre of inversion ("i") or a horizontal mirror plane ("σh") will not possess a dipole moment. Likewise, a molecule with more than one Cn axis of rotation will not possess a dipole moment because dipole moments cannot lie in more than one dimension . As a consequence of that constraint, all molecules with dihedral symmetry (Dn) will not have a dipole moment because, by definition, D point groups have two or more Cn axes. Since the C1, Cs, C∞v, Cn and Cnv point groups have no centre of inversion, no horizontal mirror plane and no multiple Cn axes, molecules in one of those point groups will have a dipole moment. Contrary to popular misconception, the electrical deflection of a stream of water from a charged object is not based on polarity. The deflection occurs because of electrically charged droplets in the stream, which the charged object induces. A stream of water can also be deflected in a uniform electrical field, which cannot exert force on polar molecules. Additionally, after a stream of water is grounded, it can no longer be deflected. Weak deflection is even possible for nonpolar liquids. [ 14 ]
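The point-group rule above can be sketched as a small classifier. The plain-text labels used below ("Cinfv" for C∞v, "Dinfh" for D∞h) are a naming convention assumed here for illustration:

```python
import re

def is_polar_point_group(group: str) -> bool:
    """True only for point groups compatible with a permanent dipole:
    C1, Cs, C_inf_v, Cn and Cnv. Every other group has an inversion
    centre, a horizontal mirror plane, or more than one Cn axis, any
    of which forces the dipole moment to vanish."""
    return group in ("C1", "Cs", "Cinfv") or re.fullmatch(r"C\d+v?", group) is not None

print(is_polar_point_group("C2v"))    # True  (e.g. bent H2O)
print(is_polar_point_group("Dinfh"))  # False (e.g. linear CO2)
print(is_polar_point_group("Td"))     # False (e.g. tetrahedral CH4)
```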
https://en.wikipedia.org/wiki/Polar_bond
In information theory , polar codes are a class of linear block error-correcting codes . The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. [ 1 ] Polar codes were developed by Erdal Arikan , a professor of electrical engineering at Bilkent University . Notably, polar codes have modest encoding and decoding complexity O(n log n), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(n^ε polylog n) factor for any ε > 0. [ 2 ] Polar codes have some limitations when used in industrial applications. Primarily, the original design of the polar codes achieves capacity when block sizes are asymptotically large with a successive cancellation decoder. However, with the block sizes used in industry, the performance of successive cancellation is poor compared to well-defined and implemented coding schemes such as low-density parity-check code (LDPC) and turbo code . Polar performance can be improved with successive cancellation list decoding, but its usability in real applications is still questionable due to very poor implementation efficiencies caused by the iterative approach. [ 3 ] In October 2016, Huawei announced that it had achieved 27 Gbit/s in 5G field trial tests using polar codes for channel coding.
The improvements have been introduced so that the channel performance has now almost closed the gap to the Shannon limit , which sets the bar for the maximum rate for a given bandwidth and a given noise level. [ 4 ] In November 2016, 3GPP agreed to adopt polar codes for the eMBB (Enhanced Mobile Broadband) control channels for the 5G NR (New Radio) interface. At the same meeting, 3GPP agreed to use LDPC for the corresponding data channel. [ 5 ] [ 6 ] In 2019, Arıkan suggested employing a convolutional pre-transformation before polar coding. These pre-transformed variants of polar codes were dubbed polarization-adjusted convolutional (PAC) codes. [ 7 ] It was shown that the pre-transformation can effectively improve the distance properties of polar codes by reducing the number of minimum-weight and, in general, small-weight codewords, [ 8 ] resulting in improved block error rates under near-maximum-likelihood (ML) decoding algorithms such as Fano decoding and list decoding. [ 9 ] Fano decoding is a tree search algorithm that determines the transmitted codeword by utilizing an optimal metric function to efficiently guide the search process. [ 10 ] PAC codes are also equivalent to post-transforming polar codes with certain cyclic codes. [ 11 ] At short blocklengths , such codes outperform both convolutional codes and CRC-aided list decoding of conventional polar codes. [ 12 ] [ 13 ] Neural Polar Decoders (NPDs) [ 14 ] are an advancement in channel coding that combine neural networks (NNs) with polar codes, providing unified decoding for channels with or without memory, without requiring an explicit channel model. They use four neural networks to approximate the functions of polar decoding: the embedding (E) NN, the check-node (F) NN, the bit-node (G) NN, and the embedding-to-LLR (H) NN. The weights of these NNs are determined by estimating the mutual information of the synthetic channels.
By the end of training, the weights of the NPD are fixed and can then be used for decoding. The computational complexity of NPDs is determined by the parameterization of the neural networks, unlike successive cancellation (SC) trellis decoders, [ 15 ] whose complexity is determined by the channel model and which are typically used for finite-state channels (FSCs). The computational complexity of NPDs is O(kdN log₂ N), where k is the number of hidden units in the neural networks, d is the dimension of the embedding, and N is the block length. In contrast, the computational complexity of SC trellis decoders is O(|S|³N log₂ N), where S is the state space of the channel model. NPDs can be integrated into SC decoding [ 1 ] schemes such as SC list decoding and CRC-aided SC decoding. [ 16 ] They are also compatible with non-uniform and i.i.d. input distributions by integrating them into the Honda-Yamamoto scheme. [ 17 ] This flexibility allows NPDs to be used in various decoding scenarios, improving error correction performance while maintaining manageable computational complexity.
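The recursive kernel construction described earlier can be illustrated with a minimal encoder sketch. It applies Arıkan's 2×2 kernel [[1,0],[1,1]] as an O(N log N) butterfly over GF(2); the reliability-based choice of frozen positions, the heart of a real polar code, is assumed here as a given mask:

```python
def polar_transform(u):
    """Apply the kernel [[1, 0], [1, 1]] recursively as an in-place
    butterfly to a bit block whose length is a power of two."""
    u = list(u)
    n, step = len(u), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                u[j] ^= u[j + step]   # upper branch gets the XOR
        step *= 2
    return u

def encode(data_bits, frozen):
    """Put data on the (assumed pre-chosen) reliable positions,
    zeros on the frozen ones, then apply the polar transform."""
    it = iter(data_bits)
    u = [0 if f else next(it) for f in frozen]
    return polar_transform(u)

print(polar_transform([1, 0, 1, 1]))   # [1, 1, 0, 1]
# Over GF(2) the transform is its own inverse:
print(polar_transform(polar_transform([0, 1, 1, 0])) == [0, 1, 1, 0])  # True
```

The involution property follows from the kernel squaring to the identity modulo 2, which is why the same butterfly can serve both encoding and the hard-decision part of decoding.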
https://en.wikipedia.org/wiki/Polar_code_(coding_theory)
In the celestial equatorial coordinate system Σ(α, δ) in astronomy , polar distance ( PD ) is the angular distance of a celestial object on its meridian measured from the celestial pole , similar to the way declination (dec, δ) is measured from the celestial equator . Polar distance in celestial navigation is the angle between the pole and the position of the body on its declination circle. Referring to the diagram: P is the pole, WQE the equator, Z the zenith of the observer, Y the lower meridian passage of the body, and X the upper meridian passage of the body. The body lies on the declination circle XY, so the distance PY or PX is the polar distance of the body. NP = ZQ = the latitude of the observer, and NY and NX are the true altitudes of the body at those instants. Polar distance (PD) = 90° ± δ. Polar distances are expressed in degrees and cannot exceed 180° in magnitude. An object on the celestial equator has a PD of 90°. Polar distance is affected by the precession of the equinoxes . If the polar distance of the Sun is equal to the observer's latitude , the shadow path of a gnomon 's tip on a sundial will be a parabola ; at higher latitudes it will be an ellipse and lower, a hyperbola .
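The relation PD = 90° ± δ can be sketched as a small helper: measured from the north celestial pole PD = 90° − δ, and from the south pole PD = 90° + δ. The declination of Polaris used below, about +89.3°, is an illustrative figure not taken from this article:

```python
def polar_distance(declination_deg: float, from_north_pole: bool = True) -> float:
    """Angular distance of a body from the chosen celestial pole, in
    degrees; lies in [0, 180] for declinations in [-90, +90]."""
    if from_north_pole:
        return 90.0 - declination_deg
    return 90.0 + declination_deg

print(polar_distance(0.0))                            # 90.0 (on the celestial equator)
print(round(polar_distance(89.3), 1))                 # 0.7 (Polaris, near the pole)
print(polar_distance(-60.0, from_north_pole=False))   # 30.0
```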
https://en.wikipedia.org/wiki/Polar_distance_(astronomy)
Polar ecology is the relationship between plants and animals in a polar environment. Polar environments are in the Arctic and Antarctic regions. The Arctic region is in the Northern Hemisphere , and it contains the land and the islands that surround it. Antarctica is in the Southern Hemisphere and it also contains the land mass, surrounding islands and the ocean. Polar regions also contain the subantarctic and subarctic zones, which separate the polar regions from the temperate regions . Antarctica and the Arctic lie in the polar circles . The polar circles are imaginary lines shown on maps marking the areas that receive less sunlight due to lower solar radiation. These areas either receive sunlight ( midnight sun ) or shade ( polar night ) 24 hours a day because of the Earth's tilt . Plants and animals in the polar regions are able to withstand living in harsh weather conditions but are facing environmental threats that limit their survival. Polar climates are cold, windy and dry. Because of the lack of precipitation and low temperatures, the Arctic and Antarctic are considered the world's largest deserts or Polar deserts . [ 1 ] [ 2 ] Much of the radiation from the Sun that is received is reflected off the snow, making the polar regions cold. [ 3 ] When the radiation is reflected, the heat is also reflected. The polar regions reflect 89–90% of the solar radiation that the Earth receives. [ 4 ] And because Earth is closest to the Sun (perihelion) during the southern summer, Antarctica receives 7% more radiation than the Arctic. [ 5 ] Also, in the polar regions the atmosphere is thin. Because of this, the UV radiation that gets through the atmosphere can cause fast sun tanning and snow blindness . Polar regions are dry areas; there is very little precipitation due to the cold air. There are times when the humidity may be high, but the amount of water vapor present in the air is still low. Wind is also strong in the polar regions. Wind carries snow, creating blizzard-like conditions.
Winds may also move small organisms or vegetation if it is present. [ 6 ] The wind blows the snow, making snowdrifts or snow dunes which may persist even in the spring when the snow is thawing out. [ 6 ] It is hard for meteorologists to measure the amount of precipitation. This is because it is expensive to maintain the stations that collect weather data, and it is hard to measure snowfall amounts because the wind blows the snow around too much for exact totals. The temperatures are similar between the Arctic and Antarctic. The temperatures in the Arctic are different depending on the location. Temperatures in the Arctic have a higher range than in the Antarctic. Temperatures can range as much as 100 °C (180 °F). Along the coast in the Arctic, temperatures average −30 to −40 °C (−22 to −40 °F) in December, January and February. [ 7 ] The ice melts along the coast during the summer months, which are around June, July and August, and the temperature may rise a few degrees above freezing, allowing some vegetation. During these same months in the northern regions, there will be 24 hours of daylight. Arctic regions also receive a lot of snowfall. The Arctic Basin has snow 320 days out of the year while the Arctic Seas have snow cover 260 days a year. [ 8 ] The thickness of the snow averages 30–40 cm (12–16 in). [ 8 ] In Greenland , temperatures average −40 °C (−40 °F) in the winter, and in the summer they reach −12 °C (10 °F). Iceland, on the other hand, is in a subarctic region, meaning it is near the temperate zone. Because of this, the temperatures are above the freezing point throughout much of the year. In Russia, temperatures are extremely cold. Verkhoyansk , Siberia has recorded the coldest temperature in the Northern Hemisphere, −68.8 °C (−91.8 °F). [ 9 ] The temperatures in the summer in Siberia can get to 36 °C (97 °F). In the Antarctic, there are fewer temperature variations.
Temperatures only range by around 30 °C (54 °F). The winter months are May till September while the summer months are October till April. The sun reappears in September, which then starts the 24 hours of daylight. Temperatures differ between the plateaus of Antarctica and the coasts. The plateaus are the coldest regions of Antarctica. [ 10 ] In the summer months there is low precipitation with light winds. Vostok recorded the lowest temperature worldwide, −88.3 °C (−126.9 °F), in 1960. The West Antarctica plateau reaches snow levels of around 30 cm (12 in). This area is also warmer, but it receives the heaviest snow and more wind. Because of the cold desert-like conditions on the plateaus, there are very few plants and animals, though some species of birds have been seen. On the coasts, in the summer there is more wind, and it is cloudier. Coasts at higher latitudes have a temperature of −24 °C (−11 °F) in the winter months whereas lower-latitude coasts get down to −20 °C (−4 °F). Coastal areas may receive 40 cm (16 in) or more of snow. Water is an important part of human survival. Because of its cold temperature, much of the Earth's water is held in the polar regions. About 90% of the world's fresh water is in the Antarctic ice cap, although little of this water is used. [ 11 ] Water environments are important for many species around the world. Many bacteria thrive there, as well as algae and flora. Many of the ponds or lakes in polar regions are frozen over or snow-covered for most of the year. Larger lakes thaw out around the edges during the warmer months while the smaller lakes thaw entirely. There are few rivers in the polar regions. The Arctic has more rivers compared to Antarctica. The regions also have ponds. The ponds that attract birds tend to be rich in nutrients. This is because of the bird droppings or bird feathers.
[ 12 ] There are two different types of lakes in polar regions: Arctic lakes and Antarctic lakes. The Arctic lakes include glacial lakes and permafrost lakes. The polar regions include the Arctic Ocean and the Southern Ocean . The Arctic Ocean covers 14,000,000 km² (5,400,000 sq mi). [ 13 ] In the spring the ice covers an area of 5,000,000–8,000,000 km² (1,900,000–3,100,000 sq mi) and in the winter it is twice that. The ocean is never totally ice-covered. This is due to the winds breaking up the ice. Because of these cracks in the ice there is more biological productivity in the ocean. The Southern Ocean is 28,000,000 km² (11,000,000 sq mi). This ocean contains the Weddell Sea and Ross Sea . The ocean contains large packs of ice that surround Antarctica. Because of the cold weather it is hard for plants to grow. Frozen ground covers most of the polar regions for the majority of the year. Permafrost reaches a thickness of 600–1,000 m (2,000–3,300 ft). Large amounts of permafrost can lead to poor water drainage. Due to the permafrost, the water in the soil remains frozen for most of the year. In the summer the top of the permafrost may be covered with water due to melting in the area. [ 14 ] Weathering is also common in polar regions. Rock rubble is scattered on the land due to the movement of glaciers . Rapid temperature changes also cause weathering. The main type of soil in the polar regions is ahumic soil. [ 15 ] This includes the cold desert soil. This soil consists of sand that is frozen. These soils tend not to have abundant vegetation, but bacteria have been found. The other type of soil is organic soil. This type of soil is found in areas that are warmer and have more moisture. Some vegetation that lives here includes algae , fungi and mosses . One type of organic soil is the brown soils, which have drainage. Due to the harsh weather in the polar regions, there are not many animals.
The animals that do exist in the polar regions are similar between the Antarctic and Arctic regions, though they differ with the temperature. In the Arctic some invertebrates include spiders, mites, mosquitoes and flies. In warmer areas of the polar regions moths, butterflies and beetles can be found. Some of the larger animals that exist are foxes, wolves, rabbits, hares, polar bears and reindeer/caribou . There are various bird species that have been spotted in the Arctic. Eight species of birds reside on the polar tundra year round while 150 breed in the Arctic. [ 16 ] The birds that do breed go to the Arctic between May and July. One of the known birds is the snowy owl , which has enough fat on it to be able to survive in the cold temperatures. In the Antarctic some invertebrates that exist are mites, fleas and ticks. Antarctica is the only continent that does not have a land mammal population. [ 17 ] There are also no birds that reside in Antarctica, although various birds from South America have been spotted there. Two studies have assessed the contributions of soil invertebrates to the polar ecosystem in Antarctica, suggesting that biotic interactions play crucial roles in such a seemingly simple ecosystem. [ 18 ] [ 19 ] For animals to be able to live in the polar regions they have to have adaptations which allow them to live in the cold and windy environments. Animals that live in these regions have accumulated adaptations that enable them to survive in this type of environment. Some of these adaptations include being large and insulated, having a lot of fur, and being darker. Also, many animals live in groups to be able to protect themselves from the cold. Animals also tend to be homeotherms , animals that maintain a high body temperature. [ 20 ] Invertebrates also tend to be smaller in polar regions, which helps them conserve energy.
There are also many different animals that live in the sea water near polar regions. Squids are one animal that lives in both Antarctica and the Arctic. They are the food source for other large animals such as the male sperm whale . [ 21 ] There is also a wide variety of fish in the polar regions. Arctic cod is a major species in the Arctic. Halibut , cod , herring , and Alaska pollock (walleye pollock) are some other types of fish. In Antarctica there is not a lot of diversity among the fish; a small number of species dominate. Antarctic silverfish and lanternfish are some examples of fish that live in Antarctica. [ 22 ] Seals are also found in polar regions and number around 2.5 million. [ 23 ] They are known to breed on land in the polar regions. Whales are also in the polar regions and can be found near the surface of the water where they prey. There are also birds that breed in the polar regions. In the Arctic, 95% of the birds breeding there consist of only four different species. These include the northern fulmar , kittiwake , the little auk and the thick-billed murre . These birds breed here when the ice starts to thaw and when there are cracks in the ice so the birds are able to feed. In the Antarctic there are two different groups of birds that live there: the penguins and the procellariiformes . There is a wide range of vegetation in the polar regions, but there are few species in common between the southern and northern polar regions. The Arctic consists of desert and tundra vegetations. The desert vegetation consists of algae, lichens , and mosses. Lichens are the most dominant plants. The ground is bare with a patchy cover of lichens and mosses. [ 24 ] Flowering plants are also seen but are not as common; the Arctic contains only 60 species of flowering plants. The Arctic tundra vegetation also consists of lichens and mosses, but it includes shrubs, grasses and forbs as well. The amount of vegetation in the tundra depends on how much sun or snow cover is in the area.
The vegetation in this area may grow as tall as 50 cm (20 in). In the southern part of the Arctic there tend to be more shrubs, whereas in the northern parts there is less plant cover. In wet areas of the tundra there are tussock grasses and cotton grasses. In moist areas there are short grasses, mosses, willows , and birches . The Antarctic vegetation consists of algae or lichens, and some bacteria and fungi, although mosses and lichens dominate. The algae and lichens grow where there is moisture, and they hide in cracks to be protected from the wind. The dominant grassland is the tussock. These grasses get to be 2 m (6 ft 7 in) high, so they provide habitat for many mammals . [ 25 ] Of the 14,000,000 km² (5,400,000 sq mi) of land that makes up the Antarctic, less than 2% is free of snow or ice. [ 26 ] One example of a type of vegetation is a crustose lichen. These lichens are found in moist areas that are hidden from the wind. They hide on the surface of rocks in the cracks. They survive off the water that melts from above. These lichens occur in Canada and Alaska , as well as Greenland and Iceland. These lichens can be red or orange colored and are known to exfoliate rocks. [ 27 ] There exist many threats to the polar regions. One threat is whaling. Whaling started in the 16th century. People hunted whales to sell meat. By 1925 the number of whales being killed rose from 14,000 to 40,000 [ clarification needed ] [ citation needed ] . The International Whaling Commission tried to stop whaling in the 20th century, [ 28 ] but was unsuccessful. Overfishing is another threat to the polar regions. In the Bering Sea there is a lot of fishing due to the high populations of halibut and Alaska pollock. Around the 1970s, krill became a popular crustacean to catch. The Soviet Union started advertising food with krill in it and began overfishing krill. It has been estimated that 40 tonnes of krill per hour were caught during this time.
[ 29 ] In 1982, the exclusive economic zone was established. Under this, a country has exclusive fishing rights up to 200 nmi (370 km; 230 mi) off its shore and can control who fishes in its EEZ area. The EEZ, however, has been unsuccessful. Another threat is pollution . There are many land and water areas within the polar regions that are contaminated. This can be due to the transport of oil by large ships. Siberia is one example of a place that has had major pollution in its rivers. [ 30 ] Depletion of the ozone layer is one more threat. An ozone hole has been detected above Antarctica. The depletion of the ozone layer is due to chlorofluorocarbons and other greenhouse gases . The other main reason is man-made gases that are released into the atmosphere. There are many environmental effects because these gases are being released five times faster than they are destroyed. [ 31 ] Global warming is also having an effect on the Arctic and Antarctic regions. Global warming is causing the temperature on the Earth to increase. In Plan B 2.0 , Lester R. Brown talks about how the Arctic is warming twice as fast as the rest of the world. [ 32 ] He goes on to say that the temperature in the Arctic region has increased by 3–4 °C (5–7 °F) within the last half-century. With the increase in temperatures, some worry that this will cause the sea level to rise. Scientists believe that if the Greenland ice sheet melts, the sea level could rise by 23 ft (7.0 m). [ 33 ] The melting of this ice sheet or others could have an effect on ocean currents . It could cause lower temperatures in northern North America. Rising of the sea level will also impact coastal areas. One example is Bangladesh . If there were a 1 m (3 ft 3 in) increase in sea level, millions of people would have to migrate from the coast. Global warming is also affecting Antarctica. The Larsen Ice Shelf or Larsen A is an ice sheet on the Antarctic Peninsula .
The sheet broke in 1995, and then in 2000 an iceberg of 4,250 sq mi (11,000 km²) broke off the Ross Ice Shelf in Antarctica. [ 34 ] In 2002 Larsen B, which was 5,500 km² (2,100 sq mi), broke off. Global warming affects plants and animals. For plants, the warmer temperatures induce stress. [ 35 ] For animals, there has been a decrease in the number of polar bears in the Hudson Bay area. [ 36 ] Since 1981, the polar bear population has been declining. This is because global warming causes the ice to break up faster, so the polar bears go to the coasts, where conditions are poor. [ 36 ] Whoever owns the land is responsible for managing it, and the owners of the land differ between the Antarctic and Arctic and also within the polar regions. In the Arctic, several nations own land above 60°N: Canada, Russia , Finland , the USA , Denmark , Iceland and Norway . [ 37 ] There have been international treaties set up so there are no disputes. These nations have also set up their governments to manage the land properly. They have set up national parks , land for wilderness , and also land for research. In the polar regions, there have been laws set up to manage the number of visitors. There have been rules set up allowing only a certain amount of mining to be done, and other measures to protect the environment from damage. In the Antarctic, the owners of the land are less clear. Some areas of Antarctica are controlled by the French, while other areas are controlled by South Africa , Australia , New Zealand , and the UK . [ 38 ] Ownership of the Antarctic is still unclear, so many other countries have established scientific stations there. The Antarctic Treaty System of 1961 was established to resolve conflicts over who owned the land. This and other treaties have shown interest in helping to conserve the Antarctic region. All of these countries have conservation laws.
These laws manage the amount of hunting in the area, monitor invasive species, and control burning and settlement.
https://en.wikipedia.org/wiki/Polar_ecology
In mathematics , the polar coordinate system specifies a given point in a plane by using a distance and an angle as its two coordinates : the point's distance from a reference point called the pole, and the point's angle relative to a reference direction. The distance from the pole is called the radial coordinate , radial distance or simply radius , and the angle is called the angular coordinate , polar angle , or azimuth . [ 1 ] The pole is analogous to the origin in a Cartesian coordinate system . Polar coordinates are most appropriate in any context where the phenomenon being considered is inherently tied to direction and length from a center point in a plane, such as spirals . Planar physical systems with bodies moving around a central point, or phenomena originating from a central point, are often simpler and more intuitive to model using polar coordinates. The polar coordinate system is extended to three dimensions in two ways: the cylindrical coordinate system adds a second distance coordinate, and the spherical coordinate system adds a second angular coordinate. Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the system's concepts in the mid-17th century, though the actual term polar coordinates has been attributed to Gregorio Fontana in the 18th century. The initial motivation for introducing the polar system was the study of circular and orbital motion . The concepts of angle and radius were already used by ancient peoples of the first millennium BC . The Greek astronomer and astrologer Hipparchus (190–120 BC) created a table of chord functions giving the length of the chord for each angle, and there are references to his using polar coordinates in establishing stellar positions. [ 2 ] In On Spirals , Archimedes describes the Archimedean spiral , a function whose radius depends on the angle. The Greek work, however, did not extend to a full coordinate system.
From the 8th century AD onward, astronomers developed methods for approximating and calculating the direction to Mecca ( qibla )—and its distance—from any location on the Earth. [ 3 ] From the 9th century onward they were using spherical trigonometry and map projection methods to determine these quantities accurately. The calculation is essentially the conversion of the equatorial polar coordinates of Mecca (i.e. its longitude and latitude ) to its polar coordinates (i.e. its qibla and distance) relative to a system whose reference meridian is the great circle through the given location and the Earth's poles and whose polar axis is the line through the location and its antipodal point . [ 4 ] There are various accounts of the introduction of polar coordinates as part of a formal coordinate system. The full history of the subject is described in Harvard professor Julian Lowell Coolidge 's Origin of Polar Coordinates. [ 5 ] Grégoire de Saint-Vincent and Bonaventura Cavalieri independently introduced the concepts in the mid-seventeenth century. Saint-Vincent wrote about them privately in 1625 and published his work in 1647, while Cavalieri published his in 1635 with a corrected version appearing in 1653. Cavalieri first used polar coordinates to solve a problem relating to the area within an Archimedean spiral . Blaise Pascal subsequently used polar coordinates to calculate the length of parabolic arcs . In Method of Fluxions (written 1671, published 1736), Sir Isaac Newton examined the transformations between polar coordinates, which he referred to as the "Seventh Manner; For Spirals", and nine other coordinate systems. [ 6 ] In the journal Acta Eruditorum (1691), Jacob Bernoulli used a system with a point on a line, called the pole and polar axis respectively. Coordinates were specified by the distance from the pole and the angle from the polar axis . Bernoulli's work extended to finding the radius of curvature of curves expressed in these coordinates. 
The actual term polar coordinates has been attributed to Gregorio Fontana and was used by 18th-century Italian writers. The term appeared in English in George Peacock 's 1816 translation of Lacroix 's Differential and Integral Calculus . [ 7 ] [ 8 ] Alexis Clairaut was the first to think of polar coordinates in three dimensions, and Leonhard Euler was the first to actually develop them. [ 5 ] The radial coordinate is often denoted by r or ρ , and the angular coordinate by φ , θ , or t . The angular coordinate is specified as φ by ISO standard 31-11 , now 80000-2:2019 . However, in mathematical literature the angle is often denoted by θ instead. Angles in polar notation are generally expressed in either degrees or radians (2 π rad being equal to 360°). Degrees are traditionally used in navigation , surveying , and many applied disciplines, while radians are more common in mathematics and mathematical physics . [ 9 ] The angle φ is defined to start at 0° from a reference direction , and to increase for rotations in either clockwise (cw) or counterclockwise (ccw) orientation. For example, in mathematics, the reference direction is usually drawn as a ray from the pole horizontally to the right, and the polar angle increases to positive angles for ccw rotations, whereas in navigation ( bearing , heading ) the 0°-heading is drawn vertically upwards and the angle increases for cw rotations. The polar angles decrease towards negative values for rotations in the respectively opposite orientations. Adding any number of full turns (360°) to the angular coordinate does not change the corresponding direction. Similarly, any polar coordinate is identical to the coordinate with the negative radial component and the opposite direction (adding 180° to the polar angle). 
Therefore, the same point ( r , φ ) can be expressed with an infinite number of different polar coordinates ( r , φ + n × 360°) and (− r , φ + 180° + n × 360°) = (− r , φ + (2 n + 1) × 180°) , where n is an arbitrary integer . [ 10 ] Moreover, the pole itself can be expressed as (0, φ ) for any angle φ . [ 11 ] Where a unique representation is needed for any point besides the pole, it is usual to limit r to positive numbers ( r > 0 ) and φ to either the interval [0, 360°) or the interval (−180°, 180°] , which in radians are [0, 2π) or (−π, π] . [ 12 ] Another convention, in reference to the usual codomain of the arctan function , is to allow for arbitrary nonzero real values of the radial component and restrict the polar angle to (−90°, 90°] . In all cases a unique azimuth for the pole ( r = 0) must be chosen, e.g., φ = 0. The polar coordinates r and φ can be converted to the Cartesian coordinates x and y by using the trigonometric functions sine and cosine: x = r cos ⁡ φ , y = r sin ⁡ φ . {\displaystyle {\begin{aligned}x&=r\cos \varphi ,\\y&=r\sin \varphi .\end{aligned}}} The Cartesian coordinates x and y can be converted to polar coordinates r and φ with r ≥ 0 and φ in the interval (− π , π ] by: [ 13 ] r = x 2 + y 2 = hypot ⁡ ( x , y ) φ = atan2 ⁡ ( y , x ) , {\displaystyle {\begin{aligned}r&={\sqrt {x^{2}+y^{2}}}=\operatorname {hypot} (x,y)\\\varphi &=\operatorname {atan2} (y,x),\end{aligned}}} where hypot is the Pythagorean sum and atan2 is a common variation on the arctangent function defined as atan2 ⁡ ( y , x ) = { arctan ⁡ ( y x ) if x > 0 arctan ⁡ ( y x ) + π if x < 0 and y ≥ 0 arctan ⁡ ( y x ) − π if x < 0 and y < 0 π 2 if x = 0 and y > 0 − π 2 if x = 0 and y < 0 undefined if x = 0 and y = 0. 
{\displaystyle \operatorname {atan2} (y,x)={\begin{cases}\arctan \left({\frac {y}{x}}\right)&{\mbox{if }}x>0\\\arctan \left({\frac {y}{x}}\right)+\pi &{\mbox{if }}x<0{\mbox{ and }}y\geq 0\\\arctan \left({\frac {y}{x}}\right)-\pi &{\mbox{if }}x<0{\mbox{ and }}y<0\\{\frac {\pi }{2}}&{\mbox{if }}x=0{\mbox{ and }}y>0\\-{\frac {\pi }{2}}&{\mbox{if }}x=0{\mbox{ and }}y<0\\{\text{undefined}}&{\mbox{if }}x=0{\mbox{ and }}y=0.\end{cases}}} If r is calculated first as above, then this formula for φ may be stated more simply using the arccosine function: φ = { arccos ⁡ ( x r ) if y ≥ 0 and r ≠ 0 − arccos ⁡ ( x r ) if y < 0 undefined if r = 0. {\displaystyle \varphi ={\begin{cases}\arccos \left({\frac {x}{r}}\right)&{\mbox{if }}y\geq 0{\mbox{ and }}r\neq 0\\-\arccos \left({\frac {x}{r}}\right)&{\mbox{if }}y<0\\{\text{undefined}}&{\mbox{if }}r=0.\end{cases}}} Every complex number can be represented as a point in the complex plane , and can therefore be expressed by specifying either the point's Cartesian coordinates (called rectangular or Cartesian form) or the point's polar coordinates (called polar form). In polar form, the distance and angle coordinates are often referred to as the number's magnitude and argument respectively. Two complex numbers can be multiplied by adding their arguments and multiplying their magnitudes. The complex number z can be represented in rectangular form as z = x + i y {\displaystyle z=x+iy} where i is the imaginary unit , or can alternatively be written in polar form as z = r ( cos ⁡ φ + i sin ⁡ φ ) {\displaystyle z=r(\cos \varphi +i\sin \varphi )} and from there, by Euler's formula , [ 14 ] as z = r e i φ = r exp ⁡ i φ . {\displaystyle z=re^{i\varphi }=r\exp i\varphi .} where e is Euler's number , and φ , expressed in radians, is the principal value of the complex number function arg applied to x + iy . To convert between the rectangular and polar forms of a complex number, the conversion formulae given above can be used. 
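These conversion formulas can be exercised directly in code; a minimal Python sketch using the standard library's math.hypot and math.atan2 (the function names here are our own):

```python
import math

def polar_to_cartesian(r, phi):
    """Convert polar (r, phi) to Cartesian (x, y); phi in radians."""
    return r * math.cos(phi), r * math.sin(phi)

def cartesian_to_polar(x, y):
    """Convert Cartesian (x, y) to polar (r, phi), with r >= 0
    and phi in (-pi, pi], matching the atan2 convention above."""
    return math.hypot(x, y), math.atan2(y, x)

# Round trip: the point (3, 4) has r = 5 and phi = atan2(4, 3)
r, phi = cartesian_to_polar(3.0, 4.0)
x, y = polar_to_cartesian(r, phi)
```

Note that atan2 handles the quadrant cases of the piecewise definition automatically, including the x = 0 rays (it raises no error at the pole, returning 0 for atan2(0, 0), so the undefined case must be guarded separately if it matters).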
Equivalent are the cis and angle notations : z = r c i s ⁡ φ = r ∠ φ . {\displaystyle z=r\operatorname {\mathrm {cis} } \varphi =r\angle \varphi .} For the operations of multiplication , division , exponentiation , and root extraction of complex numbers, it is generally much simpler to work with complex numbers expressed in polar form rather than rectangular form. From the laws of exponentiation: The equation defining a plane curve expressed in polar coordinates is known as a polar equation . In many cases, such an equation can simply be specified by defining r as a function of φ . The resulting curve then consists of points of the form ( r ( φ ), φ ) and can be regarded as the graph of the polar function r . Note that, in contrast to Cartesian coordinates, the independent variable φ is the second entry in the ordered pair. Different forms of symmetry can be deduced from the equation of a polar function r : Because of the circular nature of the polar coordinate system, many curves can be described by a rather simple polar equation, whereas their Cartesian form is much more intricate. Among the best known of these curves are the polar rose , Archimedean spiral , lemniscate , limaçon , and cardioid . For the circle, line, and polar rose below, it is understood that there are no restrictions on the domain and range of the curve. The general equation for a circle with a center at ( r 0 , γ ) {\displaystyle (r_{0},\gamma )} and radius a is r 2 − 2 r r 0 cos ⁡ ( φ − γ ) + r 0 2 = a 2 . {\displaystyle r^{2}-2rr_{0}\cos(\varphi -\gamma )+r_{0}^{2}=a^{2}.} This can be simplified in various ways, to conform to more specific cases, such as the equation r ( φ ) = a {\displaystyle r(\varphi )=a} for a circle with a center at the pole and radius a . [ 15 ] When r 0 = a or the origin lies on the circle, the equation becomes r = 2 a cos ⁡ ( φ − γ ) . 
{\displaystyle r=2a\cos(\varphi -\gamma ).} In the general case, the equation can be solved for r , giving r = r 0 cos ⁡ ( φ − γ ) + a 2 − r 0 2 sin 2 ⁡ ( φ − γ ) {\displaystyle r=r_{0}\cos(\varphi -\gamma )+{\sqrt {a^{2}-r_{0}^{2}\sin ^{2}(\varphi -\gamma )}}} The solution with a minus sign in front of the square root gives the same curve. Radial lines (those running through the pole) are represented by the equation φ = γ , {\displaystyle \varphi =\gamma ,} where γ {\displaystyle \gamma } is the angle of elevation of the line; that is, φ = arctan ⁡ m {\displaystyle \varphi =\arctan m} , where m {\displaystyle m} is the slope of the line in the Cartesian coordinate system. The non-radial line that crosses the radial line φ = γ {\displaystyle \varphi =\gamma } perpendicularly at the point ( r 0 , γ ) {\displaystyle (r_{0},\gamma )} has the equation r ( φ ) = r 0 sec ⁡ ( φ − γ ) . {\displaystyle r(\varphi )=r_{0}\sec(\varphi -\gamma ).} Otherwise stated ( r 0 , γ ) {\displaystyle (r_{0},\gamma )} is the point in which the tangent intersects the imaginary circle of radius r 0 {\displaystyle r_{0}} A polar rose is a mathematical curve that looks like a petaled flower, and that can be expressed as a simple polar equation, r ( φ ) = a cos ⁡ ( k φ + γ 0 ) {\displaystyle r(\varphi )=a\cos \left(k\varphi +\gamma _{0}\right)} for any constant γ 0 (including 0). If k is an integer, these equations will produce a k -petaled rose if k is odd , or a 2 k -petaled rose if k is even. If k is rational, but not an integer, a rose-like shape may form but with overlapping petals. Note that these equations never define a rose with 2, 6, 10, 14, etc. petals. The variable a directly represents the length or amplitude of the petals of the rose, while k relates to their spatial frequency. The constant γ 0 can be regarded as a phase angle. The Archimedean spiral is a spiral discovered by Archimedes which can also be expressed as a simple polar equation. 
It is represented by the equation r ( φ ) = a + b φ . {\displaystyle r(\varphi )=a+b\varphi .} Changing the parameter a will turn the spiral, while b controls the distance between the arms, which for a given spiral is always constant. The Archimedean spiral has two arms, one for φ > 0 and one for φ < 0 . The two arms are smoothly connected at the pole. If a = 0 , taking the mirror image of one arm across the 90°/270° line will yield the other arm. This curve is notable as one of the first curves, after the conic sections , to be described in a mathematical treatise, and as a prime example of a curve best defined by a polar equation. A conic section with one focus on the pole and the other somewhere on the 0° ray (so that the conic's major axis lies along the polar axis) is given by: r = ℓ 1 − e cos ⁡ φ {\displaystyle r={\ell \over {1-e\cos \varphi }}} where e is the eccentricity and ℓ {\displaystyle \ell } is the semi-latus rectum (the perpendicular distance at a focus from the major axis to the curve). If e > 1 , this equation defines a hyperbola ; if e = 1 , it defines a parabola ; and if e < 1 , it defines an ellipse . The special case e = 0 of the latter results in a circle of the radius ℓ {\displaystyle \ell } . A quadratrix in the first quadrant ( x, y ) is a curve with y = ρ sin θ equal to the fraction of the quarter circle with radius r determined by the radius through the curve point. Since this fraction is 2 r θ π {\displaystyle {\frac {2r\theta }{\pi }}} , the curve is given by ρ ( θ ) = 2 r θ π sin ⁡ θ {\displaystyle \rho (\theta )={\frac {2r\theta }{\pi \sin \theta }}} . [ 16 ] The graphs of two polar functions r = f ( θ ) {\displaystyle r=f(\theta )} and r = g ( θ ) {\displaystyle r=g(\theta )} have possible intersections of three types: Calculus can be applied to equations expressed in polar coordinates. [ 17 ] [ 18 ] The angular coordinate φ is expressed in radians throughout this section, which is the conventional choice when doing calculus. 
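The conic-section equation above lends itself to a quick numeric check; a Python sketch (the values of ℓ and e are arbitrary choices for illustration):

```python
import math

def conic_radius(ell, e, phi):
    """Distance from the focus at the pole for the conic
    r = ell / (1 - e*cos(phi)), with semi-latus rectum ell
    and eccentricity e."""
    return ell / (1.0 - e * math.cos(phi))

# The special case e = 0 reduces to a circle of radius ell
circle_r = conic_radius(2.0, 0.0, 1.234)

# For an ellipse (e < 1) the extreme radii occur at phi = 0 and
# phi = pi, and their sum equals the major axis 2a, a = ell/(1 - e^2)
ell, e = 1.5, 0.4
r_max = conic_radius(ell, e, 0.0)
r_min = conic_radius(ell, e, math.pi)
a = ell / (1.0 - e**2)
```

The same function covers the parabolic (e = 1) and hyperbolic (e > 1) cases, though for those the denominator vanishes or changes sign at certain angles and r is only meaningful on the corresponding branch.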
Using x = r cos φ and y = r sin φ , one can derive a relationship between derivatives in Cartesian and polar coordinates. For a given function, u ( x , y ), it follows (by computing its total derivatives ) that r d u d r = r ∂ u ∂ x cos ⁡ φ + r ∂ u ∂ y sin ⁡ φ = x ∂ u ∂ x + y ∂ u ∂ y , d u d φ = − ∂ u ∂ x r sin ⁡ φ + ∂ u ∂ y r cos ⁡ φ = − y ∂ u ∂ x + x ∂ u ∂ y . {\displaystyle {\begin{aligned}r{\frac {du}{dr}}&=r{\frac {\partial u}{\partial x}}\cos \varphi +r{\frac {\partial u}{\partial y}}\sin \varphi =x{\frac {\partial u}{\partial x}}+y{\frac {\partial u}{\partial y}},\\[2pt]{\frac {du}{d\varphi }}&=-{\frac {\partial u}{\partial x}}r\sin \varphi +{\frac {\partial u}{\partial y}}r\cos \varphi =-y{\frac {\partial u}{\partial x}}+x{\frac {\partial u}{\partial y}}.\end{aligned}}} Hence, we have the following formulae: r d d r = x ∂ ∂ x + y ∂ ∂ y d d φ = − y ∂ ∂ x + x ∂ ∂ y . {\displaystyle {\begin{aligned}r{\frac {d}{dr}}&=x{\frac {\partial }{\partial x}}+y{\frac {\partial }{\partial y}}\\[2pt]{\frac {d}{d\varphi }}&=-y{\frac {\partial }{\partial x}}+x{\frac {\partial }{\partial y}}.\end{aligned}}} Using the inverse coordinates transformation, an analogous reciprocal relationship can be derived between the derivatives. Given a function u ( r , φ ), it follows that d u d x = ∂ u ∂ r ∂ r ∂ x + ∂ u ∂ φ ∂ φ ∂ x , d u d y = ∂ u ∂ r ∂ r ∂ y + ∂ u ∂ φ ∂ φ ∂ y , {\displaystyle {\begin{aligned}{\frac {du}{dx}}&={\frac {\partial u}{\partial r}}{\frac {\partial r}{\partial x}}+{\frac {\partial u}{\partial \varphi }}{\frac {\partial \varphi }{\partial x}},\\[2pt]{\frac {du}{dy}}&={\frac {\partial u}{\partial r}}{\frac {\partial r}{\partial y}}+{\frac {\partial u}{\partial \varphi }}{\frac {\partial \varphi }{\partial y}},\end{aligned}}} or d u d x = ∂ u ∂ r x x 2 + y 2 − ∂ u ∂ φ y x 2 + y 2 = cos ⁡ φ ∂ u ∂ r − 1 r sin ⁡ φ ∂ u ∂ φ , d u d y = ∂ u ∂ r y x 2 + y 2 + ∂ u ∂ φ x x 2 + y 2 = sin ⁡ φ ∂ u ∂ r + 1 r cos ⁡ φ ∂ u ∂ φ . 
{\displaystyle {\begin{aligned}{\frac {du}{dx}}&={\frac {\partial u}{\partial r}}{\frac {x}{\sqrt {x^{2}+y^{2}}}}-{\frac {\partial u}{\partial \varphi }}{\frac {y}{x^{2}+y^{2}}}\\[2pt]&=\cos \varphi {\frac {\partial u}{\partial r}}-{\frac {1}{r}}\sin \varphi {\frac {\partial u}{\partial \varphi }},\\[2pt]{\frac {du}{dy}}&={\frac {\partial u}{\partial r}}{\frac {y}{\sqrt {x^{2}+y^{2}}}}+{\frac {\partial u}{\partial \varphi }}{\frac {x}{x^{2}+y^{2}}}\\[2pt]&=\sin \varphi {\frac {\partial u}{\partial r}}+{\frac {1}{r}}\cos \varphi {\frac {\partial u}{\partial \varphi }}.\end{aligned}}} Hence, we have the following formulae: d d x = cos ⁡ φ ∂ ∂ r − 1 r sin ⁡ φ ∂ ∂ φ d d y = sin ⁡ φ ∂ ∂ r + 1 r cos ⁡ φ ∂ ∂ φ . {\displaystyle {\begin{aligned}{\frac {d}{dx}}&=\cos \varphi {\frac {\partial }{\partial r}}-{\frac {1}{r}}\sin \varphi {\frac {\partial }{\partial \varphi }}\\[2pt]{\frac {d}{dy}}&=\sin \varphi {\frac {\partial }{\partial r}}+{\frac {1}{r}}\cos \varphi {\frac {\partial }{\partial \varphi }}.\end{aligned}}} To find the Cartesian slope of the tangent line to a polar curve r ( φ ) at any given point, the curve is first expressed as a system of parametric equations . x = r ( φ ) cos ⁡ φ y = r ( φ ) sin ⁡ φ {\displaystyle {\begin{aligned}x&=r(\varphi )\cos \varphi \\y&=r(\varphi )\sin \varphi \end{aligned}}} Differentiating both equations with respect to φ yields d x d φ = r ′ ( φ ) cos ⁡ φ − r ( φ ) sin ⁡ φ d y d φ = r ′ ( φ ) sin ⁡ φ + r ( φ ) cos ⁡ φ . {\displaystyle {\begin{aligned}{\frac {dx}{d\varphi }}&=r'(\varphi )\cos \varphi -r(\varphi )\sin \varphi \\[2pt]{\frac {dy}{d\varphi }}&=r'(\varphi )\sin \varphi +r(\varphi )\cos \varphi .\end{aligned}}} Dividing the second equation by the first yields the Cartesian slope of the tangent line to the curve at the point ( r ( φ ), φ ) : d y d x = r ′ ( φ ) sin ⁡ φ + r ( φ ) cos ⁡ φ r ′ ( φ ) cos ⁡ φ − r ( φ ) sin ⁡ φ . 
{\displaystyle {\frac {dy}{dx}}={\frac {r'(\varphi )\sin \varphi +r(\varphi )\cos \varphi }{r'(\varphi )\cos \varphi -r(\varphi )\sin \varphi }}.} For other useful formulas including divergence, gradient, and Laplacian in polar coordinates, see curvilinear coordinates . The arc length defined by a polar function is found by integration over the curve r ( φ ). Let L denote this length along the curve starting at point A and ending at point B , where these points correspond to φ = a and φ = b such that 0 < b − a < 2 π . The length L is given by the following integral L = ∫ a b [ r ( φ ) ] 2 + [ d r ( φ ) d φ ] 2 d φ {\displaystyle L=\int _{a}^{b}{\sqrt {\left[r(\varphi )\right]^{2}+\left[{\tfrac {dr(\varphi )}{d\varphi }}\right]^{2}}}d\varphi } Let R denote the region enclosed by a curve r ( φ ) and the rays φ = a and φ = b , where 0 < b − a ≤ 2 π . Then, the area of R is 1 2 ∫ a b [ r ( φ ) ] 2 d φ . {\displaystyle {\frac {1}{2}}\int _{a}^{b}\left[r(\varphi )\right]^{2}\,d\varphi .} This result can be found as follows. First, the interval [ a , b ] is divided into n subintervals, where n is some positive integer. Thus Δ φ , the angle measure of each subinterval, is equal to b − a (the total angle measure of the interval), divided by n , the number of subintervals. For each subinterval i = 1, 2, ..., n , let φ i be the midpoint of the subinterval, and construct a sector with the center at the pole, radius r ( φ i ), central angle Δ φ and arc length r ( φ i )Δ φ . The area of each constructed sector is therefore equal to [ r ( φ i ) ] 2 π ⋅ Δ φ 2 π = 1 2 [ r ( φ i ) ] 2 Δ φ . {\displaystyle \left[r(\varphi _{i})\right]^{2}\pi \cdot {\frac {\Delta \varphi }{2\pi }}={\frac {1}{2}}\left[r(\varphi _{i})\right]^{2}\Delta \varphi .} Hence, the total area of all of the sectors is ∑ i = 1 n 1 2 r ( φ i ) 2 Δ φ . 
{\displaystyle \sum _{i=1}^{n}{\tfrac {1}{2}}r(\varphi _{i})^{2}\,\Delta \varphi .} As the number of subintervals n is increased, the approximation of the area improves. Taking n → ∞ , the sum becomes the Riemann sum for the above integral. A mechanical device that computes area integrals is the planimeter , which measures the area of plane figures by tracing them out: this replicates integration in polar coordinates by adding a joint so that the 2-element linkage effects Green's theorem , converting the quadratic polar integral to a linear integral. Using Cartesian coordinates , an infinitesimal area element can be calculated as dA = dx dy . The substitution rule for multiple integrals states that, when using other coordinates, the Jacobian determinant of the coordinate conversion formula has to be considered: J = det ∂ ( x , y ) ∂ ( r , φ ) = | ∂ x ∂ r ∂ x ∂ φ ∂ y ∂ r ∂ y ∂ φ | = | cos ⁡ φ − r sin ⁡ φ sin ⁡ φ r cos ⁡ φ | = r cos 2 ⁡ φ + r sin 2 ⁡ φ = r . {\displaystyle J=\det {\frac {\partial (x,y)}{\partial (r,\varphi )}}={\begin{vmatrix}{\frac {\partial x}{\partial r}}&{\frac {\partial x}{\partial \varphi }}\\[2pt]{\frac {\partial y}{\partial r}}&{\frac {\partial y}{\partial \varphi }}\end{vmatrix}}={\begin{vmatrix}\cos \varphi &-r\sin \varphi \\\sin \varphi &r\cos \varphi \end{vmatrix}}=r\cos ^{2}\varphi +r\sin ^{2}\varphi =r.} Hence, an area element in polar coordinates can be written as d A = d x d y = J d r d φ = r d r d φ . {\displaystyle dA=dx\,dy\ =J\,dr\,d\varphi =r\,dr\,d\varphi .} Now, a function, that is given in polar coordinates, can be integrated as follows: ∬ R f ( x , y ) d A = ∫ a b ∫ 0 r ( φ ) f ( r , φ ) r d r d φ . {\displaystyle \iint _{R}f(x,y)\,dA=\int _{a}^{b}\int _{0}^{r(\varphi )}f(r,\varphi )\,r\,dr\,d\varphi .} Here, R is the same region as above, namely, the region enclosed by a curve r ( φ ) and the rays φ = a and φ = b . The formula for the area of R is retrieved by taking f identically equal to 1. 
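The sector-sum construction above translates directly into a numerical approximation of the polar area integral; a Python sketch (the helper name and the sample curves are our own):

```python
import math

def polar_area(r, a, b, n=100_000):
    """Approximate the area swept by r(phi) between phi = a and
    phi = b using the midpoint-rule sum of sector areas
    (1/2) * r(phi_i)**2 * dphi described in the text."""
    dphi = (b - a) / n
    return 0.5 * sum(r(a + (i + 0.5) * dphi) ** 2 for i in range(n)) * dphi

# Circle of radius 2 centered at the pole: area should approach pi * 2**2
circle = polar_area(lambda phi: 2.0, 0.0, 2.0 * math.pi)

# Cardioid r = 1 + cos(phi): the exact enclosed area is 3*pi/2
cardioid = polar_area(lambda phi: 1.0 + math.cos(phi), 0.0, 2.0 * math.pi)
```

Because the integrand is smooth and periodic over a full turn, the midpoint sum converges very quickly here; for curves traced over a partial interval the error falls off as O(1/n²).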
A more surprising application of this result yields the Gaussian integral : ∫ − ∞ ∞ e − x 2 d x = π . {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.} Vector calculus can also be applied to polar coordinates. For a planar motion, let r {\displaystyle \mathbf {r} } be the position vector ( r cos( φ ), r sin( φ )) , with r and φ depending on time t . We define an orthonormal basis with three unit vectors: radial, transverse, and normal directions . The radial direction is defined by normalizing r {\displaystyle \mathbf {r} } : r ^ = ( cos ⁡ ( φ ) , sin ⁡ ( φ ) ) {\displaystyle {\hat {\mathbf {r} }}=(\cos(\varphi ),\sin(\varphi ))} Radial and velocity directions span the plane of the motion , whose normal direction is denoted k ^ {\displaystyle {\hat {\mathbf {k} }}} : k ^ = v ^ × r ^ . {\displaystyle {\hat {\mathbf {k} }}={\hat {\mathbf {v} }}\times {\hat {\mathbf {r} }}.} The transverse direction is perpendicular to both radial and normal directions: φ ^ = ( − sin ⁡ ( φ ) , cos ⁡ ( φ ) ) = k ^ × r ^ , {\displaystyle {\hat {\boldsymbol {\varphi }}}=(-\sin(\varphi ),\cos(\varphi ))={\hat {\mathbf {k} }}\times {\hat {\mathbf {r} }}\ ,} Then r = ( x , y ) = r ( cos ⁡ φ , sin ⁡ φ ) = r r ^ , r ˙ = ( x ˙ , y ˙ ) = r ˙ ( cos ⁡ φ , sin ⁡ φ ) + r φ ˙ ( − sin ⁡ φ , cos ⁡ φ ) = r ˙ r ^ + r φ ˙ φ ^ , r ¨ = ( x ¨ , y ¨ ) = r ¨ ( cos ⁡ φ , sin ⁡ φ ) + 2 r ˙ φ ˙ ( − sin ⁡ φ , cos ⁡ φ ) + r φ ¨ ( − sin ⁡ φ , cos ⁡ φ ) − r φ ˙ 2 ( cos ⁡ φ , sin ⁡ φ ) = ( r ¨ − r φ ˙ 2 ) r ^ + ( r φ ¨ + 2 r ˙ φ ˙ ) φ ^ = ( r ¨ − r φ ˙ 2 ) r ^ + 1 r d d t ( r 2 φ ˙ ) φ ^ . 
{\displaystyle {\begin{aligned}\mathbf {r} &=(x,\ y)=r(\cos \varphi ,\ \sin \varphi )=r{\hat {\mathbf {r} }}\ ,\\[1.5ex]{\dot {\mathbf {r} }}&=\left({\dot {x}},\ {\dot {y}}\right)={\dot {r}}(\cos \varphi ,\ \sin \varphi )+r{\dot {\varphi }}(-\sin \varphi ,\ \cos \varphi )={\dot {r}}{\hat {\mathbf {r} }}+r{\dot {\varphi }}{\hat {\boldsymbol {\varphi }}}\ ,\\[1.5ex]{\ddot {\mathbf {r} }}&=\left({\ddot {x}},\ {\ddot {y}}\right)\\[1ex]&={\ddot {r}}(\cos \varphi ,\ \sin \varphi )+2{\dot {r}}{\dot {\varphi }}(-\sin \varphi ,\ \cos \varphi )+r{\ddot {\varphi }}(-\sin \varphi ,\ \cos \varphi )-r{\dot {\varphi }}^{2}(\cos \varphi ,\ \sin \varphi )\\[1ex]&=\left({\ddot {r}}-r{\dot {\varphi }}^{2}\right){\hat {\mathbf {r} }}+\left(r{\ddot {\varphi }}+2{\dot {r}}{\dot {\varphi }}\right){\hat {\boldsymbol {\varphi }}}\\[1ex]&=\left({\ddot {r}}-r{\dot {\varphi }}^{2}\right){\hat {\mathbf {r} }}+{\frac {1}{r}}\;{\frac {d}{dt}}\left(r^{2}{\dot {\varphi }}\right){\hat {\boldsymbol {\varphi }}}.\end{aligned}}} This equation can be obtained by taking the derivative of the function and the derivatives of the unit basis vectors. For a curve in 2D where the parameter is θ {\displaystyle \theta } , the previous equations simplify to: r = r ( θ ) e ^ r d r d θ = d r d θ e ^ r + r e ^ θ d 2 r d θ 2 = ( d 2 r d θ 2 − r ) e ^ r + 2 d r d θ e ^ θ {\displaystyle {\begin{aligned}\mathbf {r} &=r(\theta ){\hat {\mathbf {e} }}_{r}\\[1ex]{\frac {d\mathbf {r} }{d\theta }}&={\frac {dr}{d\theta }}{\hat {\mathbf {e} }}_{r}+r{\hat {\mathbf {e} }}_{\theta }\\[1ex]{\frac {d^{2}\mathbf {r} }{d\theta ^{2}}}&=\left({\frac {d^{2}r}{d\theta ^{2}}}-r\right){\hat {\mathbf {e} }}_{r}+2{\frac {dr}{d\theta }}{\hat {\mathbf {e} }}_{\theta }\end{aligned}}} The term r φ ˙ 2 {\displaystyle r{\dot {\varphi }}^{2}} is sometimes referred to as the centripetal acceleration , and the term 2 r ˙ φ ˙ {\displaystyle 2{\dot {r}}{\dot {\varphi }}} as the Coriolis acceleration . For example, see Shankar. 
[ 19 ] Note: these terms, that appear when acceleration is expressed in polar coordinates, are a mathematical consequence of differentiation; they appear whenever polar coordinates are used. In planar particle dynamics these accelerations appear when setting up Newton's second law of motion in a rotating frame of reference. Here these extra terms are often called fictitious forces ; fictitious because they are simply a result of a change in coordinate frame. That does not mean they do not exist, rather they exist only in the rotating frame. For a particle in planar motion, one approach to attaching physical significance to these terms is based on the concept of an instantaneous co-rotating frame of reference . [ 20 ] To define a co-rotating frame, first an origin is selected from which the distance r ( t ) to the particle is defined. An axis of rotation is set up that is perpendicular to the plane of motion of the particle, and passing through this origin. Then, at the selected moment t , the rate of rotation of the co-rotating frame Ω is made to match the rate of rotation of the particle about this axis, dφ / dt . Next, the terms in the acceleration in the inertial frame are related to those in the co-rotating frame. Let the location of the particle in the inertial frame be ( r ( t ), φ ( t )), and in the co-rotating frame be ( r ′(t), φ ′(t)). Because the co-rotating frame rotates at the same rate as the particle, dφ ′/ dt = 0. The fictitious centrifugal force in the co-rotating frame is mr Ω 2 , radially outward. The velocity of the particle in the co-rotating frame also is radially outward, because dφ ′/ dt = 0. The fictitious Coriolis force therefore has a value −2 m ( dr / dt )Ω, pointed in the direction of increasing φ only. 
Thus, using these forces in Newton's second law we find: F + F cf + F Cor = m r ¨ , {\displaystyle \mathbf {F} +\mathbf {F} _{\text{cf}}+\mathbf {F} _{\text{Cor}}=m{\ddot {\mathbf {r} }}\,,} where over dots represent derivatives with respect to time, and F is the net real force (as opposed to the fictitious forces). In terms of components, this vector equation becomes: F r + m r Ω 2 = m r ¨ F φ − 2 m r ˙ Ω = m r φ ¨ , {\displaystyle {\begin{aligned}F_{r}+mr\Omega ^{2}&=m{\ddot {r}}\\F_{\varphi }-2m{\dot {r}}\Omega &=mr{\ddot {\varphi }}\ ,\end{aligned}}} which can be compared to the equations for the inertial frame: F r = m r ¨ − m r φ ˙ 2 F φ = m r φ ¨ + 2 m r ˙ φ ˙ . {\displaystyle {\begin{aligned}F_{r}&=m{\ddot {r}}-mr{\dot {\varphi }}^{2}\\F_{\varphi }&=mr{\ddot {\varphi }}+2m{\dot {r}}{\dot {\varphi }}\ .\end{aligned}}} This comparison, plus the recognition that by the definition of the co-rotating frame at time t it has a rate of rotation Ω = dφ / dt , shows that we can interpret the terms in the acceleration (multiplied by the mass of the particle) as found in the inertial frame as the negative of the centrifugal and Coriolis forces that would be seen in the instantaneous, non-inertial co-rotating frame. For general motion of a particle (as opposed to simple circular motion), the centrifugal and Coriolis forces in a particle's frame of reference commonly are referred to the instantaneous osculating circle of its motion, not to a fixed center of polar coordinates. For more detail, see centripetal force . In the modern terminology of differential geometry , polar coordinates provide coordinate charts for the differentiable manifold R 2 \ {(0,0)} , the plane minus the origin. In these coordinates, the Euclidean metric tensor is given by d s 2 = d r 2 + r 2 d θ 2 . 
{\displaystyle ds^{2}=dr^{2}+r^{2}d\theta ^{2}.} This can be seen via the change of variables formula for the metric tensor, or by computing the differential forms dx , dy via the exterior derivative of the 0-forms x = r cos( θ ) , y = r sin( θ ) and substituting them in the Euclidean metric tensor ds 2 = dx 2 + dy 2 . Let p 1 = ( x 1 , y 1 ) = ( r 1 , θ 1 ) {\displaystyle p_{1}=(x_{1},y_{1})=(r_{1},\theta _{1})} , and p 2 = ( x 2 , y 2 ) = ( r 2 , θ 2 ) {\displaystyle p_{2}=(x_{2},y_{2})=(r_{2},\theta _{2})} be two points in the plane given by their Cartesian and polar coordinates. Then the squared distance between them is d s 2 = d x 2 + d y 2 . {\displaystyle ds^{2}=dx^{2}+dy^{2}.} Since d x 2 = ( r 2 cos ⁡ θ 2 − r 1 cos ⁡ θ 1 ) 2 {\displaystyle dx^{2}=(r_{2}\cos \theta _{2}-r_{1}\cos \theta _{1})^{2}} , and d y 2 = ( r 2 sin ⁡ θ 2 − r 1 sin ⁡ θ 1 ) 2 {\displaystyle dy^{2}=(r_{2}\sin \theta _{2}-r_{1}\sin \theta _{1})^{2}} , we get that d s 2 = r 1 2 + r 2 2 − 2 r 1 r 2 ( cos ⁡ θ 1 cos ⁡ θ 2 + sin ⁡ θ 1 sin ⁡ θ 2 ) . {\displaystyle ds^{2}=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}(\cos \theta _{1}\cos \theta _{2}+\sin \theta _{1}\sin \theta _{2}).} Now we use the trigonometric identity cos ⁡ ( θ 2 − θ 1 ) = cos ⁡ θ 1 cos ⁡ θ 2 + sin ⁡ θ 1 sin ⁡ θ 2 {\displaystyle \cos(\theta _{2}-\theta _{1})=\cos \theta _{1}\cos \theta _{2}+\sin \theta _{1}\sin \theta _{2}} to proceed: writing d r = r 2 − r 1 and d θ = θ 2 − θ 1 , we obtain d s 2 = r 1 2 + r 2 2 − 2 r 1 r 2 cos ⁡ d θ = ( r 2 − r 1 ) 2 + 2 r 1 r 2 ( 1 − cos ⁡ d θ ) . {\displaystyle ds^{2}=r_{1}^{2}+r_{2}^{2}-2r_{1}r_{2}\cos d\theta =(r_{2}-r_{1})^{2}+2r_{1}r_{2}(1-\cos d\theta ).} If the radial and angular quantities are near to each other, and therefore near to a common quantity r {\displaystyle r} and θ {\displaystyle \theta } , we have that r 1 r 2 ≈ r 2 {\displaystyle r_{1}r_{2}\approx r^{2}} . Moreover, the cosine of d θ {\displaystyle d\theta } can be approximated with the Taylor series of the cosine to second order: cos ⁡ d θ ≈ 1 − d θ 2 2 , {\displaystyle \cos d\theta \approx 1-{\frac {d\theta ^{2}}{2}},} so that 1 − cos ⁡ d θ ≈ d θ 2 2 {\displaystyle 1-\cos d\theta \approx {\frac {d\theta ^{2}}{2}}} , and 2 r 1 r 2 ( 1 − cos ⁡ d θ ) ≈ 2 r 2 d θ 2 2 = r 2 d θ 2 {\displaystyle 2r_{1}r_{2}(1-\cos d\theta )\approx 2r^{2}{\frac {d\theta ^{2}}{2}}=r^{2}d\theta ^{2}} . Therefore, around an infinitesimally small domain of any point, d s 2 ≈ d r 2 + r 2 d θ 2 , {\displaystyle ds^{2}\approx dr^{2}+r^{2}\,d\theta ^{2},} as stated. 
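The approximation ds² ≈ dr² + r² dθ² can be compared against the exact chord length for two nearby points; a Python sketch with made-up sample values:

```python
import math

# Two nearby points given in polar coordinates (values chosen for illustration)
r1, t1 = 2.000, 0.5000
r2, t2 = 2.001, 0.5003
dr, dt = r2 - r1, t2 - t1

# Exact chord length via the law of cosines, equivalent to the
# Cartesian distance between the two points
exact = math.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * math.cos(dt))

# Infinitesimal approximation from the metric ds^2 = dr^2 + r^2 dtheta^2
approx = math.sqrt(dr**2 + (r1 * dt)**2)
```

For these values the two lengths agree to several significant figures, and the agreement improves as dr and dθ shrink, consistent with the derivation above.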
An orthonormal frame with respect to this metric is given by e r = ∂ ∂ r , e θ = 1 r ∂ ∂ θ , {\displaystyle e_{r}={\frac {\partial }{\partial r}},\quad e_{\theta }={\frac {1}{r}}{\frac {\partial }{\partial \theta }},} with dual coframe e r = d r , e θ = r d θ . {\displaystyle e^{r}=dr,\quad e^{\theta }=rd\theta .} The connection form relative to this frame and the Levi-Civita connection is given by the skew-symmetric matrix of 1-forms ω i j = ( 0 − d θ d θ 0 ) {\displaystyle {\omega ^{i}}_{j}={\begin{pmatrix}0&-d\theta \\d\theta &0\end{pmatrix}}} and hence the curvature form Ω = dω + ω ∧ ω vanishes. Therefore, as expected, the punctured plane is a flat manifold . The polar coordinate system is extended into three dimensions with two different coordinate systems, the cylindrical and spherical coordinate system . Polar coordinates are two-dimensional and thus they can be used only where point positions lie on a single two-dimensional plane. They are most appropriate in any context where the phenomenon being considered is inherently tied to direction and length from a center point. For instance, the examples above show how elementary polar equations suffice to define curves—such as the Archimedean spiral—whose equation in the Cartesian coordinate system would be much more intricate. Moreover, many physical systems—such as those concerned with bodies moving around a central point or with phenomena originating from a central point—are simpler and more intuitive to model using polar coordinates. The initial motivation for the introduction of the polar system was the study of circular and orbital motion . Polar coordinates are used often in navigation as the destination or direction of travel can be given as an angle and distance from the object being considered. For instance, aircraft use a slightly modified version of the polar coordinates for navigation. 
In this system, the one generally used for any sort of navigation, the 0° ray is generally called heading 360, and the angles continue in a clockwise direction, rather than counterclockwise, as in the mathematical system. Heading 360 corresponds to magnetic north , while headings 90, 180, and 270 correspond to magnetic east, south, and west, respectively. [ 21 ] Thus, an aircraft traveling 5 nautical miles due east will be traveling 5 units at heading 90 (read zero-niner-zero by air traffic control ). [ 22 ] Systems displaying radial symmetry provide natural settings for the polar coordinate system, with the central point acting as the pole. A prime example of this usage is the groundwater flow equation when applied to radially symmetric wells. Systems with a radial force are also good candidates for the use of the polar coordinate system. These systems include gravitational fields , which obey the inverse-square law , as well as systems with point sources , such as radio antennas . Radially asymmetric systems may also be modeled with polar coordinates. For example, a microphone 's pickup pattern illustrates its proportional response to an incoming sound from a given direction, and these patterns can be represented as polar curves. The curve for a standard cardioid microphone, the most common unidirectional microphone, can be represented as r = 0.5 + 0.5sin( ϕ ) at its target design frequency. [ 23 ] The pattern shifts toward omnidirectionality at lower frequencies.
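The aircraft-heading convention described above differs from the mathematical one in both reference direction (north vs. east) and orientation (clockwise vs. counterclockwise); a minimal Python sketch of the conversion (the function names are our own):

```python
def heading_to_math_angle(heading):
    """Compass heading (degrees clockwise from north) to math angle
    (degrees counterclockwise from east)."""
    return (90.0 - heading) % 360.0

def math_angle_to_heading(angle):
    """Inverse conversion; a result of 0 is conventionally read
    as heading 360 (north)."""
    h = (90.0 - angle) % 360.0
    return 360.0 if h == 0.0 else h

# Heading 090 is due east (math angle 0); math angle 90 is north (heading 360)
east = heading_to_math_angle(90)
north = math_angle_to_heading(90)
```

The same formula works in both directions because reflecting about the 45° line and reversing orientation compose to the single map angle ↦ 90° − angle (mod 360°).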
https://en.wikipedia.org/wiki/Polar_graph
A polar metal , metallic ferroelectric , [ 1 ] or ferroelectric metal [ 2 ] is a metal that contains an electric dipole moment . Its components have an ordered electric dipole. Such metals are unexpected, because the free electrons in a metal should conduct charge and neutralize any polarization. However, they do exist. [ 3 ] Probably the first report of a polar metal was in single crystals of the cuprate superconductors YBa2Cu3O7−δ. [ 4 ] [ 5 ] A polarization was observed along one (001) axis by pyroelectric effect measurements, and the sign of the polarization was shown to be reversible, while its magnitude could be increased by poling with an electric field. [ 6 ] The polarization was found to disappear in the superconducting state. [ 7 ] The lattice distortions responsible were considered to be a result of oxygen ion displacements induced by doped charges that break inversion symmetry. [ 8 ] [ 9 ] The effect was utilized for the fabrication of pyroelectric detectors for space applications, which have the advantages of a large pyroelectric coefficient and low intrinsic resistance. [ 10 ] Another substance family that can produce a polar metal is the nickelate perovskites . One example interpreted to show polar metallic behavior is lanthanum nickelate , LaNiO3. [ 11 ] [ 12 ] A thin film of LaNiO3 grown on the (111) crystal face of lanthanum aluminate (LaAlO3) was interpreted to be both a conductor and a polar material at room temperature. [ 11 ] The resistivity of this system, however, shows an upturn with decreasing temperature, and hence does not strictly adhere to the definition of a metal. Also, when grown 3 or 4 unit cells thick (1–2 nm) on the (100) crystal face of LaAlO3, the LaNiO3 can be a polar insulator or a polar metal depending on the atomic termination of the surface. [ 12 ] Lithium osmate , [ 13 ] LiOsO3, also undergoes a ferroelectric transition when it is cooled below 140 K.
The space group changes from the centrosymmetric R-3c to the polar R3c, losing its centrosymmetry. [ 14 ] [ 15 ] At room temperature and below, lithium osmate is an electric conductor, in single-crystal, polycrystalline or powder forms, and the ferroelectric form only appears below 140 K. Above 140 K the material behaves like a normal metal. [ 16 ] An artificial two-dimensional polar metal created by charge transfer to a ferroelectric insulator has been realized in LaAlO3/Ba0.8Sr0.2TiO3/SrTiO3 complex oxide heterostructures. [ 17 ] Native metallicity and ferroelectricity have been observed at room temperature in bulk single-crystalline tungsten ditelluride (WTe2), a transition metal dichalcogenide (TMDC). It has bistable and electrically switchable spontaneous polarization states, indicating ferroelectricity. [ 18 ] The coexistence of metallic behavior and switchable electric polarization in WTe2, which is a layered material , has been observed in the low-thickness limit of two and three layers. [ 19 ] Calculations suggest this originates from vertical charge transfer between the layers, which is switched by interlayer sliding. [ 20 ] In April 2022 another room-temperature polar metal was reported; it was also magnetic, and skyrmions and the Rashba–Edelstein effect were observed in it. [ 21 ] [ 22 ] [ 23 ] P. W. Anderson and E. I. Blount predicted in 1965 that a ferroelectric metal could exist. [ 14 ] They were inspired to make this prediction by superconducting transitions and by the ferroelectric transition in barium titanate . The prediction was that the atoms do not move far and only a slight non-symmetrical crystal deformation occurs, say from cubic to tetragonal. This transition they called martensitic. They suggested looking at sodium tungsten bronze and InTl alloy.
They realised that the free electrons in the metal would neutralise the effect of the polarization at a global level, but that the conduction electrons do not strongly affect transverse optical phonons, or the local electric field inherent in ferroelectricity . [ 24 ]
https://en.wikipedia.org/wiki/Polar_metal
A polar organic chemical integrative sampler (POCIS) is a passive sampling device which allows for the in situ collection of a time-integrated average of hydrophilic organic contaminants, developed by researchers with the United States Geological Survey in Columbia, Missouri . [ 1 ] POCIS provides a means for estimating the toxicological significance of waterborne contaminants. [ 2 ] The POCIS sampler mimics the respiratory exposure of organisms living in the aquatic environment and can provide an understanding of the bioavailable contaminants present in the system. [ 3 ] POCIS can be deployed in a wide range of aquatic environments and is commonly used to assist in environmental monitoring studies. The first passive sampling devices were developed in the 1970s to determine concentrations of contaminants in the air. In 1980 this technology was first adapted for the monitoring of organic contaminants in water. [ 4 ] The initial type of passive sampler developed for aquatic monitoring purposes was the semipermeable membrane device (SPMD). [ 4 ] SPMD samplers are most effective at absorbing hydrophobic pollutants with a log octanol-water partition coefficient (log Kow) ranging from 4 to 8. [ 5 ] As the global emission of bioconcentratable persistent organic pollutants (POPs) was shown to result in adverse ecological effects, industry developed a wide range of increasingly water-soluble, polar hydrophilic organic compounds (HpOCs) to replace them. These compounds generally have lower bioconcentration factors . However, there is evidence that large fluxes of these HpOCs into aquatic environments may be responsible for a number of adverse effects to aquatic organisms, such as altered behavior, neurotoxicity , endocrine disruption , and impaired reproduction. [ 5 ] In the late 1990s research was underway to develop a new passive sampler in order to monitor HpOCs with a log Kow value of less than 3.
[ 4 ] In 1999 the POCIS sampler was under development at the University of Missouri-Columbia. It gathered more support in the early 2000s as concern increased regarding the effects of pharmaceutical and personal care products in surface waters. [ 4 ] The United States Geological Survey (USGS) has been heavily involved in the development of passive samplers and has articles in their database regarding the development of POCIS as early as 2000. The USGS Columbia Environmental Research Center (CERC) is a self-proclaimed international leader in the field of passive sampling. [ 1 ] There have been recent efforts by the USGS to connect people who have an interest in passive sampling. An international workshop and symposium on passive sampling was held by the USGS in 2013 to connect developers, policy makers and end users in order to discuss ways of monitoring environmental pollution. [ 6 ] The POCIS device was developed and patented by Jimmie D. Petty, James N. Huckins, and David A. Alvarez, of the Columbia Environmental Research Center. [ 1 ] Integrative passive samplers are an effective way to monitor the concentration of organic contaminants in aquatic systems over time. Most aquatic monitoring programs rely on collecting individual samples, often called grab samples , at a specific time. [ 7 ] The grab sampling method is associated with many disadvantages that can be resolved by passive sampling techniques. When contaminants are present in trace amounts, grab sampling may require the collection of large volumes of water. Also, lab analysis of the sample can only provide a snapshot of contaminant levels at the time of collection. This approach therefore has drawbacks when monitoring in environments where water contamination varies over time and episodic contamination events occur. [ 4 ] Passive sampling techniques have been able to provide a time-integrated sample of water contamination with low detection limits and in situ extraction of analytes . 
[ 8 ] The POCIS sampler consists of an array of sampling disks mounted on a support rod. Each disk consists of a solid sorbent sandwiched between two polyethersulfone (PES) microporous membranes, which are then compressed between two stainless steel rings that expose a sampling area. [ 8 ] A standard POCIS disk has a sampling-surface-area to sorbent-mass ratio of approximately 180 cm²/g. Because the amount of chemical sampled is directly related to the sampling surface area, it is sometimes necessary to combine extracts from multiple POCIS disks into one sample. Stainless steel rings, or other rigid inert material, are essential to prevent sorbent loss, as the PES membranes cannot be heat-sealed. [ 5 ] The POCIS array is then inserted and deployed within a protective canister. This canister is usually made of stainless steel or PVC and works to deflect debris that may displace the POCIS array during its deployment. [ 3 ] The PES membrane acts as a semipermeable barrier between the sorbent and the surrounding aquatic environment. It allows dissolved contaminants to pass through to the sorbent while excluding any particles larger than 100 nm. [ 5 ] The membrane resists biofouling because polyethersulfone is less prone to fouling than other membrane materials. [ 3 ] The POCIS is versatile in that the sorbents can be changed to target different classes of contaminants. However, only two sorbent classes have been considered standard across POCIS deployments to date. [ 2 ] Each POCIS disk will sample a certain volume of water per day. The volume of water sampled varies from chemical to chemical and is dependent on the physical and chemical properties of the compound as well as the duration of sampling. The sampling rate of POCIS can vary with changes in water flow, turbulence, temperature, and the buildup of solids on the sampler's surface.
[ 3 ] The accumulation of contaminants into a POCIS device is the result of three sequential processes. First, the contaminants have to diffuse across the water boundary layer . The thickness of this layer depends on the water flow and turbulence around the sampler and can significantly alter sampling rates. Second, the contaminant must be transported across the membrane, either through the water-filled pores or through the membrane polymer itself. Finally, contaminants transfer from the membrane into the sorbent material, mainly through adsorption . These last two steps make the modeling, understanding, and prediction of accumulation by a POCIS device challenging. To date, a limited number of chemical sampling rates have been determined. [ 8 ] Accumulation of chemicals by a POCIS device generally follows first-order kinetics. The kinetics are characterized by an initial integrative phase, followed by an equilibrium partitioning phase. During the integrative phase of uptake, a passive sampling device accumulates residues linearly with time, assuming constant exposure concentrations. Based on current results, the POCIS sampler remains in a linear phase for at least 30 days, and linear uptake has been observed for up to 56 days. Therefore, both laboratory and field data justify the use of a linear uptake model for the calculation of sampling rates. [ 5 ] In order to estimate the ambient water concentration of contaminants sampled by a POCIS device, there must be calibration data available that are applicable to the in situ conditions for the target compound. Currently, this information is limited. [ 3 ] POCIS can be deployed in a wide range of aquatic environments including stagnant pools, rivers, springs, estuarine systems, and wastewater streams. [ 9 ] However, there has been little research into the use of POCIS in strictly marine environments.
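Under the linear-uptake model described above, the time-weighted average water concentration can be back-calculated from the mass of analyte recovered from the sorbent, the compound-specific sampling rate, and the deployment time. The sketch below illustrates the arithmetic only; the numbers are hypothetical, and real sampling rates must come from calibration data for the target compound.

```python
def twa_concentration(analyte_mass_ng: float,
                      sampling_rate_l_per_day: float,
                      deployment_days: float) -> float:
    """Time-weighted average water concentration (ng/L) under the
    linear uptake model: Cw = N / (Rs * t).

    Valid only while the sampler is in its integrative (linear)
    phase -- roughly the first 30-56 days, per the text."""
    return analyte_mass_ng / (sampling_rate_l_per_day * deployment_days)

# Hypothetical example: 42 ng of a compound recovered after a 28-day
# deployment, with a calibration-derived sampling rate of 0.2 L/day.
cw = twa_concentration(42.0, 0.2, 28.0)
print(f"{cw:.2f} ng/L")  # 7.50 ng/L
```

Because accumulated mass scales with both sampling rate and time, a longer deployment (within the linear phase) lowers the effective detection limit for trace contaminants.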
[ 8 ] Prior to deployment of a POCIS device, it is essential to select a study site that will maximize the effectiveness of the sampler. Selecting an area that is shaded will help prevent light-sensitive chemicals from being degraded. The site should also allow the sampler to be submerged in the water without being buried in the sediment. [ 9 ] It is ideal to place the sampler in moving water in order to increase sampling rates; however, areas with extremely turbulent water flow should be avoided to prevent damage to the POCIS device. Passive samplers are very vulnerable to vandalism, and it is therefore important to secure the sampler in areas that are not easily visible and that are away from areas frequently used by people. [ 9 ] POCIS samplers can be deployed for a period of time ranging from weeks to months. The shortest deployment lengths are typically 7 days, but deployments average 2–3 months. [ 8 ] It is important to have a long enough deployment period to allow for adequate detection of contaminants at ambient environmental concentrations. Often, the two different types of POCIS devices will be deployed together in order to provide the greatest understanding of contamination. [ 8 ] It is also important to deploy enough POCIS devices to ensure a large enough sample of contaminant is recovered for chemical analysis. An estimate of the number of samplers needed at a given site can be determined by the following equation. [ 10 ] Any compound with a log Kow of less than or equal to 3 can concentrate in a POCIS sampler. [ 5 ] Applicable classes of contaminants measured by POCIS are pharmaceuticals, household and industrial products, hormones, herbicides, and polar pesticides (Table 1). Currently, there are two POCIS configurations that are targeted at different classes of contaminants. The general POCIS design contains a sorbent that is used to collect pesticides, natural as well as synthetic hormones, and wastewater-related chemicals.
The pharmaceutical POCIS configuration contains a sorbent that is designed to specifically target classes of pharmaceuticals. [ 11 ] Before the POCIS is constructed, all of the hardware as well as the sorbents and membranes must be thoroughly cleaned so that any potential interference is removed. During and after sampling, the only cleaning necessary is the removal of any sediment that has adhered to the surface of the sampler. After assembly, and prior to deployment, the samplers are stored frozen in airtight containers to avoid any contamination. The samplers should be kept in airtight containers during transportation both to and from the sampling site so that airborne contaminants do not contaminate the sampler. It is ideal to keep the samplers cold while transporting them in order to preserve the integrity of the samples. [ 3 ] After the POCIS is retrieved from the field, the membrane is gently cleaned to reduce the possibility of any contamination of the sorbent. [ 5 ] The sorbent is placed into a chromatography column so that the sampled chemicals can be recovered using an organic solvent. The solvent used is specifically chosen based on the type of sorbent and the chemicals sampled. The sample can go through further processing, such as cleanup or fractionation, depending on the desired use of the sample. [ 3 ] After the sample has been processed, the extract can be analysed using a variety of data analysis techniques. [ 8 ] The chemical analysis and analytical instrumentation used depend on the goal of the study. Many analyses require multiple samples, although in some cases a single POCIS sample can be used for multiple analyses. [ 10 ] It is vital to use quality control (QC) procedures when using passive samplers. [ 5 ] It is common practice for 10% to 50% of the total number of samples to be used for QC purposes. The number of QC samples depends on the study objectives.
[ 10 ] The QC samples are used to address issues such as sample contamination and analyte recovery. [ 5 ] The types of QC samples commonly used include reagent blanks, field blanks, matrix spikes, and procedural spikes. [ 5 ] A large number of studies have been performed in which POCIS data were combined with bioassays to measure biological endpoints. Testing POCIS extracts in biological assays is useful because a POCIS device samples over its entire deployment period, so biologically active compounds can be effectively monitored. It can also be argued that the use of POCIS is more relevant from an ecotoxicological perspective, as a passive sampler mimics the uptake of compounds by organisms. Another strength in using bioassays to test environmental samples is that they can provide an integrative measure of the toxic potential of a group of chemical compounds, rather than of a single contaminant. [ 8 ] There are many types of passive samplers in use that specialize in absorbing different classes of aquatic contaminants found in the environment. [ 3 ] Chemcatcher and the SPMD are two other types of passive sampler that are commonly used. [ 4 ] Monitoring programs use SPMDs to measure hydrophobic organic contaminants. SPMDs are designed to mimic the bioconcentration of contaminants in fatty tissues (ITRC, 2006). Contaminants applicable to the use of an SPMD include, but are not limited to, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), organochlorine pesticides, dioxins , and furans . [ 3 ] The SPMD consists of a thin-walled, nonporous , polyethylene membrane tube that is filled with a high-molecular-weight lipid . [ 2 ] These tubes are approximately 90 cm long and wrap around the inside of a stainless steel deployment canister. [ 9 ] SPMDs are efficient at absorbing pollutants with a log Kow of 4–8. This slightly overlaps with the range of contaminants absorbed by POCIS.
Because of this, SPMDs and POCIS devices are often used together in monitoring studies to achieve a more representative understanding of contamination. [ 2 ] The POCIS system is continually being evaluated for the potential to sample a wider range of contaminants. Calibration data and analyte recovery methods are currently being generated by researchers around the world. Techniques to combine the POCIS device with bioassays are also under development. [ 3 ] The POCIS sampler already serves as a versatile, economical, and robust tool for monitoring studies and for observing trends in both space and time. However, sampling rates are not yet robust enough to supply reliable contaminant concentrations, particularly with regard to environmental quality standards. [ 8 ] Only a limited number of sampling rates have been determined for chemicals, and the determination of additional sampling rate data is necessary for the advancement of passive sampling technology. [ 3 ]
https://en.wikipedia.org/wiki/Polar_organic_chemical_integrative_sampler
Polar overdominance is a unique form of inheritance originally described in livestock, with relevant examples in humans [ 1 ] and mice being discovered shortly after. The term polar is used to describe this type of overdominance because the phenotype of one heterozygote is more pronounced than that of the other genotypes . This polarity arises because the differential phenotype is present in only one of the two heterozygote configurations, with the recessive allele inherited in a parent-of-origin fashion. [ 2 ] Polar overdominance differs from regular overdominance (also known as heterozygote advantage ), where both heterozygote genotypes display a phenotype that has increased fitness regardless of the parent of origin. Studying this type of inheritance could have practical applications in preventative medicine for humans as well as a variety of other agricultural applications. The first described occurrence of polar overdominance, in sheep, was shown after finding that a mutant allele, called callipyge (after Venus Callipyge ), must be inherited from the father to cause a condition called muscle hypertrophy . Muscle hypertrophy in the offspring is caused by an increase in the size and proportion of muscle fibers , namely the fast-twitch muscle fibers. [ 3 ] This increase is generally located in the hindquarters and torso. Muscle hypertrophy only manifests itself in the offspring approximately one month after birth. [ 4 ] Polar overdominance shows evidence of an imprinted locus, displayed as the difference between the expression of heterozygote phenotypes in a parent-of-origin fashion. It was discovered that a single-nucleotide polymorphism in the DLK1 – DIO3 imprinted gene cluster affects the gene expression of paternal allele-specific genes and several maternal allele-specific long non-coding RNAs and microRNAs .
[ 5 ] Ectopic expression of the Delta-like 1 homologue (DLK1) and Retrotransposon-like 1 (RTL1/PEG11) genes, which encode paternally expressed proteins in skeletal muscle, is a hallmark of these mutant individuals. [ 6 ] A growing number of studies have identified quantitative trait loci (QTL) that show evidence of genomic imprinting in farm animals other than sheep. [ 7 ] After polar overdominant inheritance was discovered to be the cause of muscular hypertrophy in sheep, the ortholog of the human DLK1 gene (the DLK1-GTL2 intergenic region ) was studied in pigs to try to determine the effects of inheritance on ham weight. The original purpose of this study was to find the connection between genetics and ham weight in order to produce pigs that were abnormally large compared to the average. Before conducting this research, it was also hypothesized that the locus for ham weight was related to the ovine callipyge locus in sheep. The research showed that the two regions were likely unrelated, due to the different forms of parental inheritance exhibited in the two cases and a relatively large physical distance between the loci on the chromosome. Unlike the form of paternal polar overdominance that occurs at the ovine callipyge locus, the locus that controls ham weight operates in a maternal polar overdominant fashion. [ 8 ] The term polar is used to describe this type of inheritance because the phenotype of one heterozygote is expressed at a level higher than the other genotypes for the same locus, including both homozygous genotypes. [ 2 ] This unique form of inheritance had largely been studied in non-human mammals since 1996 [ 4 ] until it was first described in humans in 2008. In humans, the inheritance of the alleles for the DLK1 gene (imprinted in eutherian mammals ) is linked to a higher rate of obesity in the F1 generation . [ 1 ] The imprinted DLK1-GTL2 region in sheep is homologous to the DLK1 gene in humans, and includes the callipyge locus.
[ 9 ] There is evidence that screening potential fathers for a mutation at the DLK1 locus could indicate whether their children are at a higher risk of obesity. [ 10 ] Individuals who inherit this mutant allele from their father are more likely to show signs of obesity because the DLK1 gene is key in adipogenesis , or more simply the formation of fat cells. [ 11 ]
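The paternal polar overdominance described above for the callipyge locus can be summarized as a simple rule: of the four possible genotype configurations, only the heterozygote that received the mutant allele from its sire shows the hypertrophy phenotype. A minimal sketch (the allele symbols "CLPG" and "+" are illustrative shorthand):

```python
def callipyge_phenotype(paternal_allele: str, maternal_allele: str) -> str:
    """Paternal polar overdominance at the ovine callipyge locus:
    hypertrophy appears only when the mutant CLPG allele is inherited
    from the father AND the maternal allele is wild type ("+").
    Note that even CLPG/CLPG homozygotes appear normal."""
    if paternal_allele == "CLPG" and maternal_allele == "+":
        return "muscle hypertrophy"
    return "normal"

for sire in ("+", "CLPG"):
    for dam in ("+", "CLPG"):
        print(f"sire {sire} / dam {dam}: {callipyge_phenotype(sire, dam)}")
```

This is what distinguishes polar overdominance from ordinary heterozygote advantage, where both heterozygote configurations would show the phenotype.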
https://en.wikipedia.org/wiki/Polar_overdominance
In geometry, a polar point group is a point group in which there is more than one point that every symmetry operation leaves unmoved. [ 1 ] The unmoved points will constitute a line, a plane, or all of space. While the simplest point group, C 1 , leaves all points invariant, most polar point groups will move some, but not all, points. To describe the points which are unmoved by the symmetry operations of the point group, we draw a straight line joining two unmoved points. This line is called a polar direction. The electric polarization must be parallel to a polar direction. In polar point groups of high symmetry, the polar direction can be a unique axis of rotation, but if the symmetry operations do not allow any rotation at all, such as mirror symmetry alone, there can be an infinite number of such axes: in that case the only restriction on the polar direction is that it must be parallel to any mirror planes. A point group with more than one axis of rotation or with a mirror plane perpendicular to an axis of rotation cannot be polar. Of the 32 crystallographic point groups , 10 are polar: 1, 2, 3, 4, 6, m, mm2, 3m, 4mm, and 6mm. [ 2 ] The space groups associated with a polar point group do not have a discrete set of possible origin points that are unambiguously determined by symmetry elements. [ 1 ] When materials having a polar point group crystal structure are heated or cooled, they may temporarily generate a voltage, called pyroelectricity . Molecular crystals which have symmetry described by one of the polar space groups, such as sucrose , may exhibit triboluminescence . [ 3 ]
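The polarity criterion above (some direction fixed by every symmetry operation) can be tested numerically: averaging the 3×3 matrices of all operations in a group gives the projector onto the subspace of fixed vectors, which is nonzero exactly when the group is polar. The sketch below uses this standard representation-theory fact; the matrices are written for a z-oriented setting of each group, and the group names in the comments follow the usual crystallographic labels.

```python
def average_operator(ops):
    """Average the 3x3 rotation/reflection matrices of a point group.
    The result is the projector onto the subspace of vectors fixed by
    every operation; it is nonzero iff the group is polar."""
    n = len(ops)
    return [[sum(op[i][j] for op in ops) / n for j in range(3)]
            for i in range(3)]

def is_polar(ops):
    avg = average_operator(ops)
    return any(abs(avg[i][j]) > 1e-9 for i in range(3) for j in range(3))

E       = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # identity
C2z     = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]  # 2-fold rotation about z
INV     = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]] # inversion
SIGMAh  = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]   # mirror perpendicular to z
SIGMAxz = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]   # mirror containing z (xz plane)
SIGMAyz = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]   # mirror containing z (yz plane)

print(is_polar([E, C2z, SIGMAxz, SIGMAyz]))  # True  -- mm2 (C2v), polar along z
print(is_polar([E, C2z, INV, SIGMAh]))       # False -- 2/m (C2h), centrosymmetric
```

For mm2 the averaged matrix comes out as diag(0, 0, 1), confirming that the polar direction is the z axis shared by the rotation axis and both mirror planes; adding a mirror perpendicular to the axis (or inversion) kills the fixed direction, as the text explains.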
https://en.wikipedia.org/wiki/Polar_point_group
Polar seas is a collective term for the Arctic Ocean (about 4–5 percent of Earth's oceans ) and the southern part of the Southern Ocean (south of the Antarctic Convergence , about 10 percent of Earth's oceans). In the coldest years, sea ice can cover around 13 percent of the Earth's total surface at its maximum, though out of phase between the two hemispheres. The polar seas contain a huge biome with many organisms . Among the species that inhabit various polar seas and surrounding land areas are polar bear , reindeer (caribou), muskox , wolverine , ermine , lemming , Arctic hare , Arctic ground squirrel , whale , harp seal , and walrus . [ 1 ] These species have unique adaptations to the extreme conditions. Many might be endangered if they cannot adapt to changing conditions. Polar bear numbers are estimated to have risen substantially from their mid-20th-century lows after hunting restrictions were introduced, although the species remains vulnerable to the loss of sea ice. In general, Arctic ecosystems are relatively fragile and slow to recover from serious damage. A large amount of the land in the north polar region is part of Earth's tundra biome . South of the Arctic tundra, where temperatures are a little less cold, are the vast conifer forests of the taiga biome . North of the Arctic tundra are polar bears and the unique marine life of the Arctic Ocean. [ 2 ] The Arctic Ocean has relatively abundant plant life. Nutrients from rivers, along with mixing and upwelling from storms, supply the mixed-layer nutrients that are essential for Arctic phytoplankton development. During summer, nearly continuous insolation encourages phytoplankton blooms . The Arctic Ocean is surrounded by continents and has a few narrow, relatively shallow connections to the large ocean basins to the south. Large amounts of riverine fresh water, as well as abundant nutrients and dissolved organic matter (gelbstoff), flow into the Arctic basin from Siberian rivers.
The widest continental shelf on the planet is found in the Arctic Ocean, extending more than 1000 kilometers outward from Siberia and Alaska. Consequently, much of the basin is very shallow. On the other hand, the Arctic Ocean contains the deepest, slowest-spreading mid-ocean ridge on the planet, which, until 2003, was thought to be volcanically inactive. Since then, however, a dozen active volcanoes have been discovered, illustrating the limited information available for the difficult-to-study Arctic Ocean. The numerous Siberian rivers flowing onto the shallow continental shelf freshen the seawater. These rivers have shown increased flow recently, possibly due to increased global rainfall as a result of climate change. A flow increase may raise the level of riverine nutrients. Severe Siberian drought, as experienced in 2010, could, however, decrease flows. There is interest in the potential release of methane, a potent greenhouse gas, from methane clathrates present in Arctic continental shelf sediments if sufficient ocean warming were to occur. As much as 80% of the ocean surface is covered by ice in winter, declining to about 60% in summer; ice cover has been declining at a steady and rapid pace. A large fraction of the ice is multi-year ice, and in the far north the thickness can be more than 2 m. In summer, ice tends to melt at the air-sea interface. Surface melt ponds are formed, decreasing the albedo . Most of Antarctica is covered with a thick layer of ice , with few species permanently living in the ice-covered areas. There are many species of penguins in the south polar region. Almost all animals in Antarctica find their food in the Southern Ocean surrounding the continent, and there is abundant marine life in the Southern Ocean. [ 2 ] Because of the extreme but constant habitats, small disruptions or damage could alter the polar systems. Although they are remote from the human world, polar seas are not 'pristine' environments.
Compared with the Antarctic region, the Arctic has a long history of interaction with humans. The polar food web structure can be sensitive to human 'top-down' control, especially with the development and growth of industrial fishing . Climate change is a natural phenomenon that influences life in polar areas. There is observational evidence that the Antarctic climate is a bellwether for climate change in the northern hemisphere, leading by about 1000 years. The study of Antarctic ice, its distribution, changes in ice volume, and other indicators of the continental climate in Antarctica is at an early stage of development and an even earlier stage of understanding. Techniques for studying the terrain beneath the ice are just being explored. Lake Vostok, buried under miles of ice, has not been penetrated to date. There is evidence that it has been out of contact with the atmosphere for millions of years, making it a possible treasure trove of information. By contrast, the Antarctic seas and Southern Ocean surround the highest, driest, coldest and windiest continent on Earth: Antarctica . Due to the low mean temperature, there is no riverine input into the Antarctic seas, and little input of dissolved organic matter (DOM) or particulate organic carbon (POC) from land. In addition to relatively thin sea ice, thick and extensive ice shelves (floating glaciers) are present in the Antarctic region. More than 90% of the sea ice is first-year (annual) ice and less than 2 m thick. The first confirmed sighting of Antarctica dates back to 1820. [ 3 ] The polar seas play an important role in the global climate : models predict a latitudinal effect in response to climate change, and these effects would be expected to be first obvious in polar and subpolar regions. The decline of Arctic Ocean summer ice coverage is regarded as one such sign.
Satellite observations, available since 1979, show a long-term decline in Arctic sea-ice extent, although the trend varies from year to year. Developed nations have ceased the production and use of chlorofluorocarbons, and their atmospheric abundance and the consequent ozone depletion (the ozone hole ) are generally on the decrease. The breaking off of polar ice shelves is a continuing process, and estimating changes in Antarctic ice volume remains an active area of study. Extreme oscillations in irradiance occur in these regions: for months it may be totally dark (in winter) or light (in summer). Because of the polar ice, surface reflectance of sunlight is very high; in addition, the solar angle is relatively low, so less light can penetrate into the water and become biologically available to the plants in polar water under the ice cover. Water temperatures are low but do not change much seasonally. The low temperatures and the limited available irradiance together limit primary production . Major nutrients (N, P, Si) are often not what limits primary production . Phytoplankton blooms occur in summer because of the decreased salinity (sea-ice melt-water), lower mixing, higher stratification, higher temperature and more biologically available light. Ice is very important in structuring the environment: it regulates the physics, chemistry and biology of the water column and air-sea exchange, and is also an important habitat. A rise in the exploration and development of the polar seas is expected due to an increasing demand for fuel and commodities.
The emergence of the economies of China and India is considered one of the main drivers of this phenomenon due to an unprecedented appetite for raw materials and fuel. [ 5 ] Presently, there are still no exact figures for the oil and gas reserves in the Polar Seas. Initial explorations, however, have resulted in the identification of their potential. For example, Canada's explorations discovered gas and oil in several Arctic locations such as Beaufort, High Arctic (Arctic Islands), Labrador, and Newfoundland. [ 6 ] The Arctic Islands alone have an estimated 4.3 billion barrels of oil reserves, while the locations that fall within the Alaskan continental shelf have potential recoverable reserves worth $18 billion. [ 7 ] There are now 46 nations that are parties to cooperation and treaties covering the polar regions. Several of these, either individually or with partners, conduct research and explorations in the Arctic and Antarctic seas. [ 8 ] These activities are governed by international environmental protocols. However, countries like China, which faces a future oil shortage, are aggressively exploring the region for the purpose of oil extraction despite an international ban on such activity. [ 9 ]
https://en.wikipedia.org/wiki/Polar_seas
The polar surface area ( PSA ) or topological polar surface area ( TPSA ) of a molecule is defined as the surface sum over all polar atoms, primarily oxygen and nitrogen , including their attached hydrogen atoms. PSA is a commonly used medicinal chemistry metric for the optimization of a drug's ability to permeate cells. Molecules with a polar surface area of greater than 140 angstroms squared (Å 2 ) tend to be poor at permeating cell membranes. [ 1 ] For molecules to penetrate the blood–brain barrier (and thus act on receptors in the central nervous system ), a PSA less than 90 Å 2 is usually needed. [ 2 ] TPSA is a valuable tool in drug discovery and development. By analyzing a drug candidate's TPSA, scientists can predict its potential for oral bioavailability and its ability to reach target sites within the body. This prediction hinges on a drug's ability to permeate biological barriers. Permeating these barriers, such as the blood–brain barrier (BBB), the placental barrier (PB), and the blood–mammary barrier (BM), is crucial for many drugs to reach their intended targets. The BBB, for example, protects the brain from harmful substances. Drugs with a lower TPSA (generally below 90 Å 2 ) tend to permeate the BBB more easily, allowing them to reach the brain and exert their therapeutic effects (Shityakov et al., 2013). [ 3 ] Similarly, for drugs intended to treat the fetus, a lower TPSA (below 60 Å 2 ) is preferred to ensure they can pass through the placenta (Augustiño-Roubina et al., 2019). [ 4 ] Breastfeeding mothers also need consideration. Here, an optimal TPSA for a drug is around 60–80 Å 2 to allow it to reach the breast tissue for milk production, while drugs exceeding 90 Å 2 are less likely to permeate the blood–mammary barrier. [ 5 ]
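The cut-offs quoted above lend themselves to a simple screening heuristic. The sketch below is illustrative only: the function name is invented, the thresholds are taken directly from the text, and in practice TPSA values would come from a cheminformatics package rather than be supplied by hand.

```python
def tpsa_barrier_profile(tpsa: float) -> dict:
    """Rough permeability flags from a molecule's TPSA (in angstroms squared),
    using the threshold values quoted in the text above."""
    return {
        # > 140 A^2: tends to be poor at permeating cell membranes
        "cell_membrane": tpsa <= 140,
        # < 90 A^2 usually needed to cross the blood-brain barrier
        "blood_brain_barrier": tpsa < 90,
        # < 60 A^2 preferred for passage through the placenta
        "placental_barrier": tpsa < 60,
        # ~60-80 A^2 quoted as optimal for the blood-mammary barrier;
        # > 90 A^2 unlikely to permeate it
        "blood_mammary_barrier": tpsa < 90,
    }

# Example: caffeine's TPSA is about 58 A^2, so all flags come out True.
profile = tpsa_barrier_profile(58.4)
```

These are coarse rules of thumb, not a substitute for measured permeability data.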
https://en.wikipedia.org/wiki/Polar_surface_area
Polaris Networks is a networking software company founded in 2003 by former employees of Agilent Technologies of Santa Clara, California . [ 1 ] Its headquarters are in Chicago , and it has an overseas office in Kolkata . [ 2 ] It focuses on developing networking protocol software, and its products primarily include wireless protocol test tools and emulators for 3GPP LTE networks. In 2012, CERN selected the xTCA Test Tools developed by Polaris Networks for the internal testing of their xTCA systems, including those of the Large Hadron Collider . [ 3 ] In April 2013, Polaris Networks announced the cloud-based deployment of their NetEPC, a carrier-grade EPC which combines the functionality of the MME , SGW, PGW, HSS and PCRF into a single high-availability platform. [ 4 ] In June 2013, the Public Safety Communications Research Program used the Polaris Networks NetEPC to demonstrate deployable LTE at the Public Safety Broadband Stakeholder Conference in Westminster, Colorado. [ 5 ] In June 2018, Polaris Networks and Nemergent Solutions completed interoperability tests between Polaris Networks' NetEPC and Nemergent's Mission Critical Services (MCS) application server. [ 6 ] [ 7 ] Polaris was acquired by Motorola Solutions in 2020 [ 1 ] and relocated from California to Chicago.
https://en.wikipedia.org/wiki/Polaris_Networks
Polaris Partners is a venture capital firm that invests in healthcare and biotechnology companies. The company has offices in Boston, Massachusetts , New York, New York , and San Francisco, California . [ 3 ] [ 4 ] Polaris Partners was founded in 1996 by Jon Flint , Terry McGuire , and Steve Arnold . [ 5 ] [ 6 ] [ 7 ] The firm has over $5 billion in committed capital and is now making investments through its tenth fund. [ 1 ] The current managing partners are Amy Schulman and Brian Chee. [ 1 ] [ 8 ] Polaris Partners also has two affiliate funds. Polaris Growth Fund targets investments in profitable, founder-owned technology companies and is led by managing partners Bryce Youngren and Dan Lombard. [ 9 ] [ 10 ] Polaris Innovation Fund focuses on the commercial and therapeutic potential of early-stage academic research and is led by managing partners Amy Schulman and Ellie McGuire. [ 11 ] [ 12 ] [ 13 ] [ 14 ]
https://en.wikipedia.org/wiki/Polaris_Partners
Polariton superfluid is predicted to be a state of the exciton-polariton system that combines the characteristics of lasers with those of excellent electrical conductors. [ 1 ] [ 2 ] Researchers look for this state in a solid-state optical microcavity coupled with quantum well excitons . The idea is to create an ensemble of particles known as exciton-polaritons and trap them. [ 3 ] Wave behavior in this state results in a light beam similar to that from a laser but possibly more energy efficient. Unlike traditional superfluids that need temperatures of approximately 4 K, the polariton superfluid could in principle be stable at much higher temperatures, and might soon be demonstrable at room temperature. [ 4 ] Evidence for polariton superfluidity was reported by Alberto Amo and coworkers, [ 5 ] based on the suppressed scattering of the polaritons during their motion. Although several other researchers are working in the same field, [ 6 ] [ 7 ] the terminology and conclusions are not completely shared by the different groups. In particular, important properties of superfluids , such as zero viscosity , and of lasers , such as perfect optical coherence , are a matter of debate. [ 8 ] [ 9 ] There is, however, clear indication of quantized vortices when the pump beam has orbital angular momentum . [ 10 ] Furthermore, clear evidence has also been demonstrated for superfluid motion of polaritons, in terms of the Landau criterion and the suppression of scattering from defects when the flow velocity is slower than the speed of sound in the fluid. [ 11 ] The same phenomena have been demonstrated in an organic exciton-polariton fluid, representing the first achievement of room-temperature superfluidity of a hybrid fluid of photons and excitons. [ 12 ]
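The Landau criterion mentioned above reduces to a single comparison: dissipationless (superfluid) flow is expected only while the flow speed stays below the speed of sound in the fluid. A minimal sketch, with purely illustrative numbers rather than measured polariton parameters:

```python
def flow_is_superfluid(flow_speed: float, sound_speed: float) -> bool:
    """Landau criterion: scattering from defects is suppressed only
    when the flow is slower than the sound speed in the fluid."""
    return flow_speed < sound_speed

# Illustrative values only (metres per second):
print(flow_is_superfluid(0.5e6, 1.0e6))  # subsonic flow -> True
print(flow_is_superfluid(2.0e6, 1.0e6))  # supersonic flow -> False
```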
https://en.wikipedia.org/wiki/Polariton_superfluid
Polaritonics is an intermediate regime between photonics and sub-microwave electronics (see Fig. 1). In this regime, signals are carried by an admixture of electromagnetic and lattice vibrational waves known as phonon- polaritons , rather than currents or photons . Since phonon-polaritons propagate with frequencies in the range of hundreds of gigahertz to several terahertz , polaritonics bridges the gap between electronics and photonics. A compelling motivation for polaritonics is the demand for high speed signal processing and linear and nonlinear terahertz spectroscopy . Polaritonics has distinct advantages over electronics, photonics, and traditional terahertz spectroscopy in that it offers the potential for a fully integrated platform that supports terahertz wave generation, guidance, manipulation, and readout in a single patterned material. Polaritonics, like electronics and photonics, requires three elements: robust waveform generation, detection, and guidance and control. Without all three, polaritonics would be reduced to just phonon-polaritons, just as electronics and photonics would be reduced to just electromagnetic radiation. These three elements can be combined to enable device functionality similar to that in electronics and photonics. To illustrate the functionality of polaritonic devices, consider the hypothetical circuit in Fig. 2 (right). The optical excitation pulses that generate phonon-polaritons, in the top left and bottom right of the crystal, enter normal to the crystal face (into the page). The resulting phonon-polaritons will travel laterally away from the excitation regions. Entrance into the waveguides is facilitated by reflective and focusing structures. Phonon-polaritons are guided through the circuit by terahertz waveguides carved into the crystal. Circuit functionality resides in the interferometer structure at the top and the coupled waveguide structure at the bottom of the circuit. 
The latter employs a photonic bandgap structure with a defect (yellow) that could provide bistability for the coupled waveguide. Phonon-polaritons generated in ferroelectric crystals propagate nearly laterally to the excitation pulse due to the high dielectric constants of ferroelectric crystals, facilitating easy separation of phonon-polaritons from the excitation pulses that generated them. Phonon-polaritons are therefore available for direct observation, as well as coherent manipulation, as they move from the excitation region into other parts of the crystal. Lateral propagation is paramount to a polaritonic platform in which generation and propagation take place in a single crystal. A full treatment of the Cherenkov-radiation -like terahertz wave response reveals that in general, there is also a forward propagation component that must be considered in many cases. Direct observation of phonon-polariton propagation was made possible by real-space imaging, in which the spatial and temporal profiles of phonon-polaritons are imaged onto a CCD camera using Talbot phase-to-amplitude conversion. This was a significant breakthrough: it was the first time that electromagnetic waves were imaged directly, appearing much like the ripples formed when a rock is dropped into a pond (see Fig. 3). Real-space imaging is the preferred detection technique in polaritonics, though other more conventional techniques like optical Kerr-gating, time-resolved diffraction , interferometric probing, and terahertz field-induced second-harmonic generation are useful in some applications where real-space imaging is not easily employed. For example, patterned materials with feature sizes on the order of a few tens of micrometres cause parasitic scattering of the imaging light. Phonon-polariton detection is then only possible by focusing a more conventional probe, like those mentioned before, into an unblemished region of the crystal. 
The last element requisite to polaritonics is guidance and control. Complete lateral propagation parallel to the crystal plane is achieved by generating phonon-polaritons in crystals of thickness on the order of the phonon-polariton wavelength. This forces propagation to take place in one or more of the available slab waveguide modes. However, dispersion in these modes can be radically different from that in bulk propagation, and in order to exploit this, the dispersion must be understood. Control and guidance of phonon-polariton propagation may also be achieved by guided-wave, reflective, diffractive, and dispersive elements, as well as photonic and effective index crystals that can be integrated directly into the host crystal. However, lithium niobate , lithium tantalate , and other perovskites are impermeable to the standard techniques of material patterning. In fact, the only etchant known to be even marginally successful is hydrofluoric acid (HF), which etches slowly and predominantly in the direction of the crystal optic axis. Femtosecond laser micromachining is used for device fabrication by milling 'air' holes and/or troughs into ferroelectric crystals, moving the crystal through the focal region of a femtosecond laser beam. The advantages of femtosecond laser micromachining for a wide range of materials have been well documented. [ 1 ] In brief, free electrons are created within the beam focus through multiphoton excitation . Because the peak intensity of a femtosecond laser pulse is many orders of magnitude higher than that from longer-pulse or continuous-wave lasers, the electrons are rapidly excited and heated to form a plasma . Particularly in dielectric materials, the electrostatic instability, induced by the plasma , of the remaining lattice ions results in ejection of these ions and hence ablation of the material, [ 2 ] leaving a material void in the laser focus region. 
Also, since the pulse duration and ablation time scales are much faster than the thermalization time, femtosecond laser micromachining does not suffer from the adverse effects of a heat-affected-zone, like cracking and melting in regions neighboring the intended damage region. [ 3 ]
https://en.wikipedia.org/wiki/Polaritonics
In developmental biology , an embryo at the blastula stage is divided into two hemispheres: the animal pole and the vegetal pole . The animal pole consists of small cells that divide rapidly, in contrast with the vegetal pole below it. In some cases, the animal pole is thought to differentiate into the later embryo itself, forming the three primary germ layers and participating in gastrulation . The vegetal pole contains large yolky cells that divide very slowly, in contrast with the animal pole above it. In some cases, the vegetal pole is thought to differentiate into the extraembryonic membranes that protect and nourish the developing embryo, such as the placenta in mammals and the chorion in birds. In amphibians, the development of the animal-vegetal axis occurs prior to fertilization. [ 1 ] Sperm entry can occur anywhere in the animal hemisphere. [ 2 ] The point of sperm entry defines the dorso-ventral axis - cells opposite the region of sperm entry will eventually form the dorsal portion of the body. [ 1 ] [ 3 ] In the frog Xenopus laevis , the animal pole is heavily pigmented while the vegetal pole remains unpigmented. [ 4 ] The pigment pattern provides the oocyte with the features of a radially symmetrical body with a distinct polarity. The animal hemisphere is dark brown, and the vegetal hemisphere is only weakly pigmented. The axis of symmetry passes through the animal pole on one side and the vegetal pole on the other. The two hemispheres are separated by an unpigmented equatorial belt. Polarity has a major influence on the emergence of embryonic structures. In fact, the axis polarity serves as one coordinate of the geometrical system in which early embryogenesis is organized. [ 5 ] The animal pole draws its name from its liveliness relative to the slowly developing vegetal pole, which in turn is named for its relative inactivity.
https://en.wikipedia.org/wiki/Polarity_in_embryogenesis
Polarity symbols are a notation for electrical polarity , found on devices that use direct current (DC) power, when this is or may be provided from an alternating current (AC) source via an AC adapter . The adapter typically supplies power to the device through a thin electrical cord which terminates in a coaxial power connector often referred to as a "barrel plug" (so-named because of its cylindrical shape). The polarity of the adapter cord and plug must match the polarity of the device, meaning that the positive contact of the plug must mate with the positive contact in the receptacle, and the negative plug contact must mate with the negative receptacle contact. Since there is no standardization of these plugs, a polarity symbol is typically printed on the case indicating which type of plug is needed. The commonly used symbol denoting the polarity of a device or adapter consists of a black dot with a line leading to the right and a broken circle (like the letter "C") surrounding the dot and with a line leading to the left. At the ends of the lines leading right and left are found a plus sign (+), meaning positive, also sometimes referred to as "hot", and a minus sign (−), meaning negative, also sometimes referred to as "neutral". The symbol connected to the dot (usually the symbol found to the right) denotes the polarity of the center/tip, whereas the symbol connected to the broken circle denotes the polarity of the barrel/ring. When a device or adapter is described simply as having "positive polarity" or "negative polarity", this denotes the polarity of the center/tip. [ citation needed ]
https://en.wikipedia.org/wiki/Polarity_symbols
Polarizability usually refers to the tendency of matter, when subjected to an electric field , to acquire an electric dipole moment in proportion to that applied field. It is a property of particles with an electric charge . When subject to an electric field, the negatively charged electrons and positively charged atomic nuclei are subject to opposite forces and undergo charge separation . Polarizability is responsible for a material's dielectric constant and, at high (optical) frequencies, its refractive index . The polarizability of an atom or molecule is defined as the ratio of its induced dipole moment to the local electric field; in a crystalline solid, one considers the dipole moment per unit cell . [ 1 ] Note that the local electric field seen by a molecule is generally different from the macroscopic electric field that would be measured externally. This discrepancy is taken into account by the Clausius–Mossotti relation (below), which connects the bulk behaviour ( polarization density due to an external electric field according to the electric susceptibility χ = ε_r − 1) with the molecular polarizability α due to the local field. Magnetic polarizability likewise refers to the tendency for a magnetic dipole moment to appear in proportion to an external magnetic field . Electric and magnetic polarizabilities determine the dynamical response of a bound system (such as a molecule or crystal) to external fields, and provide insight into a molecule's internal structure. [ 2 ] "Polarizability" should not be confused with the intrinsic magnetic or electric dipole moment of an atom, molecule, or bulk substance; these do not depend on the presence of an external field. Electric polarizability is the relative tendency of a charge distribution, like the electron cloud of an atom or molecule , to be distorted from its normal shape by an external electric field . 
The polarizability α in isotropic media is defined as the ratio of the induced dipole moment p of an atom to the electric field E that produces this dipole moment. [ 3 ] Polarizability has the SI units of C·m²·V⁻¹ = A²·s⁴·kg⁻¹, while its cgs unit is cm³. Usually it is expressed in cgs units as a so-called polarizability volume, sometimes given in Å³ = 10⁻²²⁴ cm³. One can convert between SI units (α) and cgs units (α′) using the vacuum permittivity ε₀ ≈ 8.854 × 10⁻¹² F/m: if the polarizability volume is denoted α′, the relation can be expressed generally [ 4 ] (in SI) as α = 4πε₀α′. The polarizability of individual particles is related to the average electric susceptibility of the medium by the Clausius–Mossotti relation, R = (ε_r − 1)/(ε_r + 2) · M/ρ = N_A α_c /(3ε₀), where R is the molar refractivity , N_A is the Avogadro constant , α_c is the electronic polarizability, ρ is the density of molecules, M is the molar mass , and ε_r = ε/ε₀ is the material's relative permittivity or dielectric constant (or in optics, the square of the refractive index ). Polarizability for anisotropic or non-spherical media cannot in general be represented as a scalar quantity. Defining α as a scalar implies both that applied electric fields can only induce polarization components parallel to the field and that the x, y and z directions respond in the same way to the applied electric field. 
For example, an electric field in the x-direction can only produce an x component in p, and if that same electric field were applied in the y-direction the induced polarization would be the same in magnitude but appear in the y component of p. Many crystalline materials have directions that are easier to polarize than others, and some even become polarized in directions perpendicular to the applied electric field [ citation needed ] ; the same thing happens with non-spherical bodies. Some molecules and materials with this sort of anisotropy are optically active , or exhibit linear birefringence of light. To describe anisotropic media, a rank-two polarizability tensor (a 3 × 3 matrix) α is defined, so that p = αE. The elements describing the response parallel to the applied electric field are those along the diagonal. A large value of α_yx here means that an electric field applied in the x-direction would strongly polarize the material in the y-direction. Explicit expressions for α have been given for homogeneous anisotropic ellipsoidal bodies. [ 5 ] [ 6 ] The matrix above can be used with the molar refractivity equation and other data to produce density data for crystallography. Each polarizability measurement, along with the refractive index associated with its direction, will yield a direction-specific density that can be used to develop an accurate three-dimensional assessment of molecular stacking in the crystal. This relationship was first observed by Linus Pauling . [ 1 ] Polarizability, a molecular property, is related to refractive index, a bulk property. In crystalline structures, the interactions between molecules are considered by comparing a local field to the macroscopic field. 
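The anisotropic tensor response described above can be made concrete with a small sketch. The tensor values below are invented to illustrate an off-diagonal coupling, not data for any real material:

```python
def induced_dipole(alpha, E):
    """p_i = sum_j alpha_ij * E_j for a 3x3 polarizability tensor
    (plain Python lists, no external libraries)."""
    return [sum(alpha[i][j] * E[j] for j in range(3)) for i in range(3)]

# Hypothetical anisotropic tensor: the diagonal gives the response
# parallel to the field, while alpha_yx = 1.5 couples an x-field
# to a polarization component along y.
alpha = [
    [2.0, 0.0, 0.0],
    [1.5, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
E_x = [1.0, 0.0, 0.0]           # field applied along x only
p = induced_dipole(alpha, E_x)  # -> [2.0, 1.5, 0.0]: a y-component appears
```

For an isotropic medium the tensor would be a multiple of the identity, and p would stay parallel to E.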
Analyzing a cubic crystal lattice , we can imagine an isotropic spherical region to represent the entire sample. Giving the region the radius a, the field would be given by the volume of the sphere times the dipole moment per unit volume P. We can call our local field F, our macroscopic field E, and the field due to matter within the sphere E_in = −P/(3ε₀). [ 7 ] We can then define the local field as the macroscopic field without the contribution of the internal field: F = E − E_in = E + P/(3ε₀). The polarization is proportional to the macroscopic field by P = ε₀(ε_r − 1)E = χ_e ε₀E, where ε₀ is the electric permittivity constant and χ_e is the electric susceptibility . Using this proportionality, we find the local field as F = (1/3)(ε_r + 2)E, which can be used in the definition of polarization and simplified with ε_r = 1 + Nα/(ε₀V) to get P = ε₀(ε_r − 1)E. These two expressions can be set equal to each other, eliminating the E term and giving us (ε_r − 1)/(ε_r + 2) = Nα/(3ε₀V). We can replace the relative permittivity ε_r with the refractive index n, since ε_r = n² for a low-pressure gas. 
The number density can be related to the molecular weight M and mass density ρ through N/V = N_A ρ/M, adjusting the final form of our equation to include molar refractivity: (n² − 1)/(n² + 2) = N_A ρα/(3ε₀M). This equation allows us to relate a bulk property ( refractive index ) to a molecular property (polarizability) as a function of frequency. [ 8 ] Generally, polarizability increases as the volume occupied by electrons increases. [ 9 ] In atoms, this occurs because larger atoms have more loosely held electrons, in contrast to smaller atoms with tightly bound electrons. [ 9 ] [ 10 ] Across rows of the periodic table , polarizability therefore decreases from left to right. [ 9 ] Polarizability increases down columns of the periodic table. [ 9 ] Likewise, larger molecules are generally more polarizable than smaller ones. Water is a very polar molecule, but alkanes and other hydrophobic molecules are more polarizable. Water, with its permanent dipole, is less likely to change shape due to an external electric field. Alkanes are the most polarizable molecules. [ 9 ] Although alkenes and arenes are expected to have larger polarizability than alkanes because of their higher reactivity, alkanes are in fact more polarizable. [ 9 ] This results from the more electronegative sp 2 carbons of alkenes and arenes compared to the less electronegative sp 3 carbons of alkanes. [ 9 ] Ground state electron configuration models often describe molecular or bond polarization during chemical reactions poorly, because reactive intermediates may be excited, or be the minor, alternate structures in a chemical equilibrium with the initial reactant. [ 9 ] Magnetic polarizability, defined by spin interactions of nucleons , is an important parameter of deuterons and hadrons . In particular, measurement of tensor polarizabilities of nucleons yields important information about spin-dependent nuclear forces. 
[ 11 ] The method of spin amplitudes uses quantum mechanics formalism to more easily describe spin dynamics. Vector and tensor polarization of particles/nuclei with spin S ≥ 1 are specified by the unit polarization vector p and the rank-two polarization tensor P. Additional tensors composed of products of three or more spin matrices are needed only for the exhaustive description of polarization of particles/nuclei with spin S ≥ 3⁄2. [ 11 ]
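The chain of relations derived above for electric polarizability (the SI/cgs unit conversion, the Lorentz local field, and the molar-refractivity form of the Clausius–Mossotti relation) can be checked numerically. The sketch below uses illustrative, roughly air-like input values; it is a consistency check of the algebra, not a reference implementation:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def volume_cm3_to_si(alpha_prime_cm3):
    """Polarizability volume (cm^3) -> SI polarizability via alpha = 4*pi*eps0*alpha'."""
    return 4.0 * math.pi * EPS0 * (alpha_prime_cm3 * 1e-6)  # cm^3 -> m^3 first

def refractive_index(alpha_si, rho, molar_mass):
    """Solve (n^2 - 1)/(n^2 + 2) = N_A*rho*alpha/(3*eps0*M) for n."""
    x = N_A * rho * alpha_si / (3.0 * EPS0 * molar_mass)
    return math.sqrt((1.0 + 2.0 * x) / (1.0 - x))

# Roughly air-like inputs: alpha' ~ 1.7e-24 cm^3, rho = 1.2 kg/m^3, M = 0.029 kg/mol.
alpha_si = volume_cm3_to_si(1.7e-24)
n = refractive_index(alpha_si, 1.2, 0.029)   # close to 1.0003

# Lorentz local field: F = E + P/(3*eps0) should reduce to (eps_r + 2)/3 * E.
eps_r, E = n * n, 100.0
F = E + EPS0 * (eps_r - 1.0) * E / (3.0 * EPS0)
assert abs(F - (eps_r + 2.0) / 3.0 * E) < 1e-9
```

Because n = sqrt((1 + 2x)/(1 − x)) inverts the relation exactly, substituting the computed n back into (n² − 1)/(n² + 2) recovers x to machine precision.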
https://en.wikipedia.org/wiki/Polarizability