Schoenflies (or Schönflies) displacement (or motion), named after Arthur Moritz Schoenflies, is a rigid body motion consisting of a translation in three-dimensional space plus a rotation about an axis of fixed direction. [ 1 ] In robotic manipulation this is a common motion, as many pick-and-place operations require taking an object from one plane and placing it with a different orientation onto another, parallel plane ( e.g. , placement of components on a circuit board). Robots built to generate this motion are commonly called Schoenflies-motion generators. [ 2 ] Because the SCARA manipulator was one of the first manipulators to provide such motion, it is often referred to as SCARA-type motion. [ 3 ] Today, many such robotic manipulators , including some with parallel kinematic architectures , are used in industry for applications ranging from the manufacture of electronics to food processing and packaging. [ 4 ] [ 5 ] [ 6 ]
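Concretely, a Schoenflies motion is described by four parameters (x, y, z, θ). A minimal sketch (assuming, for illustration, that the fixed rotation direction is the z-axis) representing such a motion as a 4×4 homogeneous transform; composing two of them shows that the set is closed under composition, since the rotation part stays a rotation about z:

```python
import numpy as np

def schoenflies(x, y, z, theta):
    """Homogeneous transform: translation (x, y, z) plus rotation by
    theta about the fixed z direction."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, x],
                     [s,  c, 0, y],
                     [0,  0, 1, z],
                     [0,  0, 0, 1.0]])

# Composing two Schoenflies motions yields another one: the angles add,
# and the translations accumulate (t1 + R1 @ t2).
T = schoenflies(1.0, 0.0, 0.5, np.pi / 4) @ schoenflies(0.2, -0.3, 0.1, np.pi / 6)
print(T)
```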
https://en.wikipedia.org/wiki/Schoenflies_displacement
The Schoenflies (or Schönflies ) notation , named after the German mathematician Arthur Moritz Schoenflies , is a notation primarily used to specify point groups in three dimensions . Because a point group alone is completely adequate to describe the symmetry of a molecule , the notation is often sufficient and commonly used for spectroscopy . However, in crystallography , there is additional translational symmetry , and point groups are not enough to describe the full symmetry of crystals, so the full space group is usually used instead. The naming of full space groups usually follows another common convention, the Hermann–Mauguin notation , also known as the international notation. Although Schoenflies notation without superscripts is a pure point group notation, superscripts can optionally be added to specify individual space groups. However, for space groups the connection to the underlying symmetry elements is much clearer in Hermann–Mauguin notation, so the latter is usually preferred for space groups. Symmetry elements are denoted by i for centers of inversion, C for proper rotation axes, σ for mirror planes, and S for improper rotation axes ( rotation-reflection axes ). C and S are usually followed by a subscript number (abstractly denoted n ) giving the order of rotation possible. By convention, the axis of proper rotation of greatest order is defined as the principal axis, and all other symmetry elements are described in relation to it. A vertical mirror plane (containing the principal axis) is denoted σ v ; a horizontal mirror plane (perpendicular to the principal axis) is denoted σ h . In three dimensions there are infinitely many point groups, but all of them can be classified into several families. All groups that do not contain more than one higher-order axis (order 3 or more) can be arranged as shown in a table below; symbols in red are rarely used. In crystallography, due to the crystallographic restriction theorem , n is restricted to the values 1, 2, 3, 4, or 6. The noncrystallographic groups are shown with grayed backgrounds. D 4d and D 6d are also forbidden because they contain improper rotations with n = 8 and 12 respectively. The 27 point groups in the table plus T , T d , T h , O and O h constitute the 32 crystallographic point groups . Groups with n = ∞ are called limit groups or Curie groups . There are two more limit groups, not listed in the table: K (for Kugel , German for ball or sphere), the group of all rotations in three-dimensional space; and K h , the group of all rotations and reflections. In mathematics and theoretical physics they are known respectively as the special orthogonal group and the orthogonal group in three-dimensional space, with the symbols SO(3) and O(3). The space groups with a given point group are numbered 1, 2, 3, ... (in the same order as their international numbers) and this number is added as a superscript to the Schönflies symbol of the corresponding point group. For example, the groups numbered 3 to 5, whose point group is C 2 , have Schönflies symbols C 2 1 , C 2 2 and C 2 3 (the subscript naming the point group, the superscript numbering the space group). While in the case of point groups the Schönflies symbol defines the symmetry elements of the group unambiguously, the additional superscript for a space group carries no information about the translational symmetry of the space group (lattice centering, translational components of axes and planes), so one needs to refer to special tables containing the correspondence between Schönflies and Hermann–Mauguin notation . Such a table is given in the List of space groups article.
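The element symbols just introduced feed a standard assignment flowchart for the axial families of the table. A toy sketch under simplifying assumptions (cubic, icosahedral and limit groups are omitted, and the symmetry elements are assumed to have been detected already):

```python
def schoenflies_symbol(n, perp_C2, sigma_h, sigma_v, S2n):
    """Assign an axial Schoenflies point-group symbol from detected elements:
    n       -- order of the principal rotation axis (n >= 1)
    perp_C2 -- n two-fold axes perpendicular to the principal axis?
    sigma_h -- mirror plane perpendicular to the principal axis?
    sigma_v -- mirror plane(s) containing the principal axis?
    S2n     -- improper rotation axis of order 2n collinear with it?"""
    if perp_C2:                                   # dihedral families
        if sigma_h:
            return f"D{n}h"
        return f"D{n}d" if sigma_v else f"D{n}"
    if sigma_h:                                   # cyclic with horizontal mirror
        return f"C{n}h"
    if sigma_v:
        return f"C{n}v"
    if S2n:
        return f"S{2 * n}"
    return f"C{n}"

print(schoenflies_symbol(2, True, False, True, False))   # D2d, e.g. allene
print(schoenflies_symbol(3, False, False, True, False))  # C3v, e.g. ammonia
```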
https://en.wikipedia.org/wiki/Schoenflies_notation
In mathematics , the Schoenflies problem or Schoenflies theorem of geometric topology is a sharpening of the Jordan curve theorem by Arthur Schoenflies . For Jordan curves in the plane it is often referred to as the Jordan–Schoenflies theorem. The original formulation of the Schoenflies problem states that not only does every simple closed curve in the plane separate the plane into two regions, one (the "inside") bounded and the other (the "outside") unbounded, but also that these two regions are homeomorphic to the inside and outside of a standard circle in the plane. An alternative statement is that if C ⊂ R² is a simple closed curve, then there is a homeomorphism f : R² → R² such that f(C) is the unit circle in the plane. Elementary proofs can be found in Newman (1939), Cairns (1951), Moise (1977) and Thomassen (1992). The result can first be proved for polygons, when the homeomorphism can be taken to be piecewise linear and the identity map off some compact set; the case of a continuous curve is then deduced by approximating by polygons. The theorem is also an immediate consequence of Carathéodory's extension theorem for conformal mappings , as discussed in Pommerenke (1992, p. 25). If the curve is smooth then the homeomorphism can be chosen to be a diffeomorphism . Proofs in this case rely on techniques from differential topology . Although direct proofs are possible (starting for example from the polygonal case), existence of the diffeomorphism can also be deduced by using the smooth Riemann mapping theorem for the interior and exterior of the curve in combination with the Alexander trick for diffeomorphisms of the circle and a result on smooth isotopy from differential topology. [ 1 ] Such a theorem is valid only in two dimensions. In three dimensions there are counterexamples such as Alexander's horned sphere . Although they separate space into two regions, those regions are so twisted and knotted that they are not homeomorphic to the inside and outside of a normal sphere. For smooth or polygonal curves, the Jordan curve theorem can be proved in a straightforward way. Indeed, the curve has a tubular neighbourhood , defined in the smooth case by the field of unit normal vectors to the curve or in the polygonal case by points at a distance of less than ε from the curve. In a neighbourhood of a differentiable point on the curve, there is a coordinate change in which the curve becomes the diameter of an open disk. Taking a point not on the curve, a straight line aimed at the curve starting at the point will eventually meet the tubular neighbourhood; the path can be continued next to the curve until it meets the disk. It will meet it on one side or the other. This proves that the complement of the curve has at most two connected components. On the other hand, using the Cauchy integral formula for the winding number , it can be seen that the winding number is constant on connected components of the complement of the curve, is zero near infinity and increases by 1 when crossing the curve. Hence the curve separates the plane into exactly two components, its "interior" and its "exterior", the latter being unbounded. The same argument works for a piecewise differentiable Jordan curve. [ 2 ]
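The winding-number argument above is easy to carry out numerically for a polygonal curve. A small sketch (the square and the two test points are illustrative assumptions) that sums signed angles, the discrete analogue of the Cauchy integral formula:

```python
import cmath

def winding_number(point, vertices):
    """Total signed angle swept by the closed polygon around `point`,
    in units of full turns; 1 means interior, 0 means exterior."""
    total = 0.0
    n = len(vertices)
    for k in range(n):
        a = complex(*vertices[k]) - complex(*point)
        b = complex(*vertices[(k + 1) % n]) - complex(*point)
        total += cmath.phase(b / a)   # signed angle from a to b, in (-pi, pi]
    return round(total / (2 * cmath.pi))

square = [(0, 0), (2, 0), (2, 2), (0, 2)]   # a simple closed polygon
print(winding_number((1, 1), square))        # 1 -> interior
print(winding_number((3, 1), square))        # 0 -> exterior
```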
Given a simple closed polygonal curve in the plane, the piecewise linear Jordan–Schoenflies theorem states that there is a piecewise linear homeomorphism of the plane, with compact support, carrying the polygon onto a triangle and taking the interior and exterior of one onto the interior and exterior of the other. [ 3 ] The interior of the polygon can be triangulated by small triangles, so that the edges of the polygon form edges of some of the small triangles. Piecewise linear homeomorphisms can be made up from special homeomorphisms obtained by removing a diamond from the plane and taking a piecewise affine map, fixing the edges of the diamond but moving one diagonal into a V shape. Compositions of homeomorphisms of this kind give rise to piecewise linear homeomorphisms of compact support; they fix the outside of a polygon and act in an affine way on a triangulation of the interior. A simple inductive argument shows that it is always possible to remove a free triangle—one for which the intersection with the boundary is a connected set made up of one or two edges—leaving a simple closed Jordan polygon. The special homeomorphisms described above, or their inverses, provide piecewise linear homeomorphisms which carry the interior of the larger polygon onto the polygon with the free triangle removed. Iterating this process, it follows that there is a piecewise linear homeomorphism of compact support carrying the original polygon onto a triangle. [ 4 ] Because the homeomorphism is obtained by composing finitely many homeomorphisms of the plane of compact support, the piecewise linear homeomorphism in the statement of the piecewise linear Jordan–Schoenflies theorem has compact support. As a corollary, it follows that any homeomorphism between simple closed polygonal curves extends to a homeomorphism between their interiors. [ 5 ] For each polygon there is a homeomorphism of a given triangle onto the closure of its interior. The three homeomorphisms yield a single homeomorphism of the boundary of the triangle. By the Alexander trick (see the formula below), this homeomorphism can be extended to a homeomorphism of the closure of the interior of the triangle. Reversing this process, this homeomorphism yields a homeomorphism between the closures of the interiors of the polygonal curves. The Jordan–Schoenflies theorem for continuous curves can be proved using Carathéodory's theorem on conformal mapping . It states that the Riemann mapping between the interior of a simple Jordan curve and the open unit disk extends continuously to a homeomorphism between their closures, mapping the Jordan curve homeomorphically onto the unit circle. [ 6 ] To prove the theorem, Carathéodory's theorem can be applied to the two regions on the Riemann sphere defined by the Jordan curve. This will result in homeomorphisms between their closures and the closed disks |z| ≤ 1 and |z| ≥ 1. The homeomorphisms from the Jordan curve to the circle will differ by a homeomorphism of the circle which can be extended to the unit disk (or its complement) by the Alexander trick . Composition with this homeomorphism will yield a pair of homeomorphisms which match on the Jordan curve and therefore define a homeomorphism of the Riemann sphere carrying the Jordan curve onto the unit circle. The continuous case can also be deduced from the polygonal case by approximating the continuous curve by a polygon. [ 7 ] The Jordan curve theorem is first deduced by this method. The Jordan curve is given by a continuous function on the unit circle.
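For reference, the Alexander trick invoked above admits a one-line formula: a homeomorphism f of the unit circle extends radially, coning off at the origin, to a homeomorphism of the closed disk. A standard statement, recorded here for convenience:

```latex
F(x) \;=\;
\begin{cases}
  \lvert x\rvert \, f\!\left(\dfrac{x}{\lvert x\rvert}\right), & 0 < \lvert x\rvert \le 1,\\[6pt]
  0, & x = 0,
\end{cases}
\qquad F\big|_{S^1} = f .
```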
It and the inverse function from its image back to the unit circle are uniformly continuous . So, dividing the circle up into small enough intervals, there are points on the curve such that the line segments joining adjacent points lie close to the curve, say by ε. Together these line segments form a polygonal curve. If it has self-intersections, these must also create polygonal loops. Erasing these loops results in a polygonal curve without self-intersections which still lies close to the curve; some of its vertices might not lie on the curve, but they all lie within a neighbourhood of the curve. The polygonal curve divides the plane into two regions, one bounded region U and one unbounded region V . Both U and V ∪ ∞ are continuous images of the closed unit disk. Since the original curve is contained within a small neighbourhood of the polygonal curve, the images of slightly smaller concentric open disks entirely miss the original curve, and their union excludes a small neighbourhood of the curve. One of the images is a bounded open set consisting of points around which the curve has winding number one; the other is an unbounded open set consisting of points of winding number zero. Repeating for a sequence of values of ε tending to 0 leads to a union of open path-connected bounded sets of points of winding number one and a union of open path-connected unbounded sets of winding number zero. By construction these two disjoint open path-connected sets fill out the complement of the curve in the plane. [ 8 ] Given the Jordan curve theorem, the Jordan–Schoenflies theorem can be proved as follows. [ 9 ] Proofs in the smooth case depend on finding a diffeomorphism between the interior/exterior of the curve and the closed unit disk (or its complement in the extended plane). This can be solved for example by using the smooth Riemann mapping theorem , for which a number of direct methods are available, for example through the Dirichlet problem on the curve or Bergman kernels . [ 10 ] (Such diffeomorphisms will be holomorphic on the interior and exterior of the curve; more general diffeomorphisms can be constructed more easily using vector fields and flows.) Regarding the smooth curve as lying inside the extended plane or 2-sphere, these analytic methods produce smooth maps up to the boundary between the closure of the interior/exterior of the smooth curve and those of the unit circle. The two identifications of the smooth curve and the unit circle will differ by a diffeomorphism of the unit circle. On the other hand, a diffeomorphism f of the unit circle can be extended to a diffeomorphism F of the unit disk by the Alexander extension F(re^{iθ}) = re^{i[(1 − ψ(r))θ + ψ(r)g(θ)]}, where ψ is a smooth function with values in [0,1], equal to 0 near 0 and 1 near 1, and f ( e^{iθ} ) = e^{ig(θ)} , with g (θ + 2π) = g (θ) + 2π . Composing one of the diffeomorphisms with the Alexander extension allows the two diffeomorphisms to be patched together to give a homeomorphism of the 2-sphere which restricts to a diffeomorphism on the closed unit disk and on the closure of its complement, carrying them onto the closures of the interior and exterior of the original smooth curve. By the isotopy theorem in differential topology, [ 11 ] the homeomorphism can be adjusted to a diffeomorphism on the whole 2-sphere without changing it on the unit circle. This diffeomorphism then provides the smooth solution to the Schoenflies problem. The Jordan–Schoenflies theorem can also be deduced using differential topology .
In fact it is an immediate consequence of the classification up to diffeomorphism of smooth oriented 2-manifolds with boundary, as described in Hirsch (1994) . Indeed, the smooth curve divides the 2-sphere into two parts. By the classification each is diffeomorphic to the unit disk and—taking into account the isotopy theorem—they are glued together by a diffeomorphism of the boundary. By the Alexander trick, such a diffeomorphism extends to the disk itself. Thus there is a diffeomorphism of the 2-sphere carrying the smooth curve onto the unit circle. On the other hand, the diffeomorphism can also be constructed directly using the Jordan–Schoenflies theorem for polygons and elementary methods from differential topology, namely flows defined by vector fields. [ 12 ] When the Jordan curve is smooth (parametrized by arc length) the unit normal vectors give a non-vanishing vector field X 0 in a tubular neighbourhood U 0 of the curve. Take a polygonal curve in the interior of the curve, close to the boundary and transverse to the curve (at the vertices the vector field should be strictly within the angle formed by the edges). By the piecewise linear Jordan–Schoenflies theorem, there is a piecewise linear homeomorphism, affine on an appropriate triangulation of the interior of the polygon, taking the polygon onto a triangle. Take an interior point P in one of the small triangles of the triangulation. It corresponds to a point Q in the image triangle. There is a radial vector field on the image triangle, formed of straight lines pointing towards Q . This gives a series of lines in the small triangles making up the polygon. Each defines a vector field X i on a neighbourhood U i of the closure of the triangle. Each vector field is transverse to the sides, provided that Q is chosen in "general position" so that it is not collinear with any of the finitely many edges in the triangulation. Translating if necessary, it can be assumed that P and Q are at the origin 0. On the triangle containing P the vector field can be taken to be the standard radial vector field. Similarly the same procedure can be applied to the outside of the smooth curve, after applying a Möbius transformation to map it into the finite part of the plane and ∞ to 0. In this case the neighbourhoods U i of the triangles have negative indices. Take the vector fields X i with a negative sign, pointing away from the point at infinity. Together U 0 and the U i 's with i ≠ 0 form an open cover of the 2-sphere. Take a smooth partition of unity ψ i subordinate to the cover U i and set X = Σ i ψ i X i . Then X is a smooth vector field on the 2-sphere vanishing only at 0 and ∞. It has index 1 at 0 and −1 at ∞. Near 0 the vector field equals the radial vector field pointing towards 0. If α t is the smooth flow defined by X , the point 0 is an attracting point and ∞ a repelling point. As t tends to +∞, the flow sends points to 0; while as t tends to −∞ points are sent to ∞. Replacing X by f ⋅ X with f a smooth positive function changes the parametrization of the integral curves of X , but not the integral curves themselves. For an appropriate choice of f , equal to 1 outside a small annulus near 0, the integral curves starting at points of the smooth curve will all reach the smaller circle bounding the annulus at the same time s . The diffeomorphism α s therefore carries the smooth curve onto this small circle. A scaling transformation, fixing 0 and ∞, then carries the small circle onto the unit circle.
Composing these diffeomorphisms gives a diffeomorphism carrying the smooth curve onto the unit circle. There does exist a higher-dimensional generalization due to Morton Brown ( 1960 ) and independently Barry Mazur ( 1959 ) with Morse (1960) , which is also called the generalized Schoenflies theorem . It states that, if an ( n − 1)-dimensional sphere S is embedded into the n -dimensional sphere S n in a locally flat way (that is, the embedding extends to that of a thickened sphere), then the pair ( S n , S ) is homeomorphic to the pair ( S n , S n −1 ), where S n −1 is the equator of the n -sphere. Brown and Mazur received the Veblen Prize for their contributions. Both the Brown and Mazur proofs are considered "elementary" and use inductive arguments. The Schoenflies problem can be posed in categories other than the topologically locally flat category, i.e. does a smoothly (piecewise-linearly) embedded ( n − 1)-sphere in the n -sphere bound a smooth (piecewise-linear) n -ball? For n = 4, the problem is still open for both categories. See Mazur manifold . For n ≥ 5 the question in the smooth category has an affirmative answer, and follows from the h-cobordism theorem.
https://en.wikipedia.org/wiki/Schoenflies_problem
The Scholl reaction is a coupling reaction between two arene compounds with the aid of a Lewis acid and a protic acid . [ 1 ] [ 2 ] It is named after its discoverer, Roland Scholl , a Swiss chemist. In 1910 Scholl reported the synthesis of a quinone [ 3 ] and of perylene from naphthalene , [ 4 ] both with aluminum chloride . Perylene was also synthesised from 1,1'-binaphthalene in 1913. [ 5 ] The synthesis of benzanthrone was reported in 1912. [ 6 ] The protic acid in the Scholl reaction is often an impurity in the Lewis acid and is also formed in the course of a Scholl reaction. Reagents are iron(III) chloride in dichloromethane , copper(II) chloride , PIFA and boron trifluoride etherate in dichloromethane, molybdenum(V) chloride, and lead tetraacetate with BF 3 in acetonitrile . [ 7 ] Given the high reaction temperature and the requirement for strongly acidic catalysts, the chemical yield is often low and the method is not a popular one. Intramolecular reactions fare better than intermolecular ones, for instance in the organic synthesis of 9-phenylfluorene : Or the formation of the pyrene dibenzo-(a.1)-pyrene from the anthracene 1-phenylbenz(a)anthracene (66% yield). [ 8 ] One study showed that the reaction lends itself to cascade reactions to form more complex polycyclic aromatic hydrocarbons. [ 9 ] In certain applications, such as triphenylene synthesis, this reaction is advocated as an alternative to the Suzuki reaction . A recurring problem is oligomerization of the product, which can be prevented by blocking the reactive positions with tert-butyl substituents: [ 7 ] The exact reaction mechanism is not known but could very well proceed through an arenium ion . Just as in electrophilic aromatic substitution , activating groups such as methoxy improve yield and selectivity: [ 7 ] Indeed, oxidative coupling of phenols is a research strategy in modern organic synthesis. Two mechanisms may compete. In step one of a radical cation mechanism, a radical cation is formed from one reaction partner by oxidation; in step two the radical ion attacks the second, neutral partner in a substitution reaction, and a new radical ion is formed with one ring bearing the positive charge and the other the radical position. In step three dihydrogen is split off with rearomatisation to the biaryl compound. In the arenium ion mechanism one reaction partner is protonated to an arenium ion which then attacks the second reaction partner. The arenium ion can also be formed by attack of the Lewis acid. The mechanisms are difficult to distinguish because many Lewis acids can behave as oxidants. Reactions taking place at room temperature with well-known one-electron oxidizing agents likely proceed through a radical cation mechanism, and reactions requiring elevated temperatures likely proceed through an arenium ion mechanism. [ 2 ]
https://en.wikipedia.org/wiki/Scholl_reaction
Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields . The algorithm has applications in elliptic curve cryptography , where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve. The algorithm was published by René Schoof in 1985 and was a theoretical breakthrough: it was the first deterministic polynomial-time algorithm for counting points on elliptic curves . Before Schoof's algorithm, approaches to counting points on elliptic curves, such as the naive and baby-step giant-step algorithms, were for the most part tedious and had an exponential running time. This article explains Schoof's approach, laying emphasis on the mathematical ideas underlying the structure of the algorithm. Let E be an elliptic curve defined over the finite field F_q, where q = p^n for p a prime and n an integer ≥ 1. Over a field of characteristic ≠ 2, 3 an elliptic curve can be given by a (short) Weierstrass equation y² = x³ + Ax + B with A, B ∈ F_q. The set of points defined over F_q consists of the solutions (a, b) ∈ F_q² satisfying the curve equation and a point at infinity O. Using the group law on elliptic curves restricted to this set, one can see that this set E(F_q) forms an abelian group , with O acting as the zero element. In order to count points on an elliptic curve, we compute the cardinality of E(F_q). Schoof's approach to computing the cardinality #E(F_q) makes use of Hasse's theorem on elliptic curves along with the Chinese remainder theorem and division polynomials . Hasse's theorem states that if E/F_q is an elliptic curve over the finite field F_q, then #E(F_q) satisfies |#E(F_q) − (q + 1)| ≤ 2√q. This powerful result, given by Hasse in 1934, simplifies our problem by narrowing down #E(F_q) to a finite (albeit large) set of possibilities. Defining t to be q + 1 − #E(F_q), and making use of this result, we now have that computing the value of t modulo N, where N > 4√q, is sufficient for determining t, and thus #E(F_q). While there is no efficient way to compute t (mod N) directly for general N, it is possible to compute t (mod l) for l a small prime rather efficiently. We choose S = {l_1, l_2, ..., l_r} to be a set of distinct primes such that ∏ l_i = N > 4√q. Given t (mod l_i) for all l_i ∈ S, the Chinese remainder theorem allows us to compute t (mod N), as sketched below.
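A minimal sketch of the final CRT step (the prime q, the moduli and the residues here are hypothetical placeholders; in the real algorithm the residues come from the per-prime computations described below):

```python
from math import prod

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    N = prod(moduli)
    x = 0
    for r, l in zip(residues, moduli):
        m = N // l
        x += r * m * pow(m, -1, l)   # modular inverse (Python 3.8+)
    return x % N

q = 97
moduli = [3, 5, 7]           # N = 105 > 4*sqrt(97) ≈ 39.4
residues = [1, 0, 2]         # hypothetical values of t mod 3, 5, 7
N = prod(moduli)
t = crt(residues, moduli)
if t > N // 2:               # choose the representative in (-N/2, N/2]
    t -= N
print(t, abs(t) <= 2 * q**0.5)   # -5 True: t lies in the Hasse window
```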
In order to compute t (mod l) for a prime l ≠ p, we make use of the theory of the Frobenius endomorphism ϕ and division polynomials . Note that considering primes l ≠ p is no loss, since we can always pick a bigger prime to take its place to ensure the product is big enough. In any case Schoof's algorithm is most frequently used in addressing the case q = p, since there are more efficient, so-called p-adic algorithms for small-characteristic fields. Given the elliptic curve E defined over F_q, we consider points on E over F̄_q, the algebraic closure of F_q; i.e. we allow points with coordinates in F̄_q. The Frobenius endomorphism of F̄_q over F_q extends to the elliptic curve by ϕ : (x, y) ↦ (x^q, y^q). This map is the identity on E(F_q) and one can extend it to the point at infinity O, making it a group morphism from E(F̄_q) to itself. The Frobenius endomorphism satisfies a quadratic polynomial which is linked to the cardinality of E(F_q) by the following theorem: Theorem: The Frobenius endomorphism given by ϕ satisfies the characteristic equation ϕ² − tϕ + q = 0, where t = q + 1 − #E(F_q). Thus we have for all P = (x, y) ∈ E that (x^{q²}, y^{q²}) + q(x, y) = t(x^q, y^q), where + denotes addition on the elliptic curve and q(x, y) and t(x^q, y^q) denote scalar multiplication of (x, y) by q and of (x^q, y^q) by t. One could try to symbolically compute these points (x^{q²}, y^{q²}), (x^q, y^q) and q(x, y) as functions in the coordinate ring F_q[x, y]/(y² − x³ − Ax − B) of E and then search for a value of t which satisfies the equation. However, the degrees get very large and this approach is impractical. Schoof's idea was to carry out this computation restricted to points of order l for various small primes l. Fixing an odd prime l, we now move on to solving the problem of determining t_l, defined as t (mod l), for a given prime l ≠ 2, p. If a point (x, y) is in the l- torsion subgroup E[l] = {P ∈ E(F̄_q) ∣ lP = O}, then qP = q̄P, where q̄ is the unique integer such that q ≡ q̄ (mod l) and |q̄| < l/2.
Note that ϕ(O) = O and that for any integer r we have rϕ(P) = ϕ(rP). Thus ϕ(P) will have the same order as P. Thus for (x, y) belonging to E[l], we also have t(x^q, y^q) = t̄(x^q, y^q) if t ≡ t̄ (mod l). Hence we have reduced our problem to solving the equation (x^{q²}, y^{q²}) + q̄(x, y) = t̄(x^q, y^q), where t̄ and q̄ have integer values in [−(l − 1)/2, (l − 1)/2]. The l-th division polynomial ψ_l is such that its roots are precisely the x-coordinates of points of order l. Thus, to restrict the computation of (x^{q²}, y^{q²}) + q̄(x, y) to the l-torsion points means computing these expressions as functions in the coordinate ring of E and modulo the l-th division polynomial; i.e. we are working in F_q[x, y]/(y² − x³ − Ax − B, ψ_l). This means in particular that the degree of X and Y defined via (X(x, y), Y(x, y)) := (x^{q²}, y^{q²}) + q̄(x, y) is at most 1 in y and at most (l² − 3)/2 in x. The scalar multiplication q̄(x, y) can be done either by double-and-add methods or by using the q̄-th division polynomial. The latter approach gives q̄(x, y) = (x_q̄, y_q̄) = (x − ψ_{q̄−1}ψ_{q̄+1}/ψ_q̄², ψ_{2q̄}/(2ψ_q̄⁴)), where ψ_n is the n-th division polynomial. Note that y_q̄/y is a function in x only; denote it by θ(x). We must split the problem into two cases: the case in which (x^{q²}, y^{q²}) ≠ ±q̄(x, y), and the case in which (x^{q²}, y^{q²}) = ±q̄(x, y). Note that these equalities are checked modulo ψ_l. By using the addition formula for the group E(F_q) we obtain X = ((y^{q²} − y_q̄)/(x^{q²} − x_q̄))² − x^{q²} − x_q̄. Note that this computation fails in case the assumption of inequality was wrong. We are now able to use the x-coordinate to narrow down the choice of t̄ to two possibilities, namely the positive and negative case. Using the y-coordinate one later determines which of the two cases holds. We first show that X is a function in x alone. Consider (y^{q²} − y_q̄)² = y²(y^{q²−1} − y_q̄/y)². Since q² − 1 is even, by replacing y² by x³ + Ax + B we can rewrite (y^{q²} − y_q̄)² as a polynomial in x alone, and hence the same is true of X. Now if X ≡ x_t̄^q mod ψ_l(x) for one t̄ ∈ [0, (l − 1)/2], then t̄ satisfies (x^{q²}, y^{q²}) + q̄(x, y) = t̄(x^q, y^q) for all l-torsion points P.
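The division polynomials used here satisfy a standard doubling recurrence and are easy to generate symbolically. A sketch (sympy is assumed available; the final check confirms, for l = 5, the degree (l² − 1)/2 stated later in the article):

```python
import sympy as sp

# Division polynomials psi_n for y^2 = x^3 + A x + B via the standard
# recurrence. Odd-index psi are polynomials in x alone once y^2 is reduced
# using the curve equation; even-index ones carry a single factor of y.
x, y, A, B = sp.symbols('x y A B')

def division_polys(nmax):
    psi = {0: sp.Integer(0), 1: sp.Integer(1), 2: 2*y,
           3: 3*x**4 + 6*A*x**2 + 12*B*x - A**2,
           4: 4*y*(x**6 + 5*A*x**4 + 20*B*x**3 - 5*A**2*x**2
                   - 4*A*B*x - 8*B**2 - A**3)}
    for n in range(5, nmax + 1):
        m = n // 2
        if n % 2:                      # n = 2m + 1
            p = psi[m + 2]*psi[m]**3 - psi[m - 1]*psi[m + 1]**3
        else:                          # n = 2m
            p = psi[m]*(psi[m + 2]*psi[m - 1]**2
                        - psi[m - 2]*psi[m + 1]**2) / (2*y)
        p = sp.expand(p)               # cancels the 2y factor termwise
        psi[n] = sp.expand(p.subs(y**2, x**3 + A*x + B))
    return psi

psi = division_polys(7)
print(sp.degree(psi[5], x))   # 12 = (5**2 - 1) / 2
```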
As mentioned earlier, using Y and y_t̄^q we are now able to determine which of the two values of t̄ (t̄ or −t̄) works. This gives the value of t ≡ t̄ (mod l). Schoof's algorithm stores the values of t̄ (mod l) in a variable t_l for each prime l considered. We begin with the assumption that (x^{q²}, y^{q²}) = q̄(x, y). Since l is an odd prime it cannot be that q̄(x, y) = −q̄(x, y), and thus t̄ ≠ 0. The characteristic equation yields that t̄ϕ(P) = 2q̄P, and consequently that t̄²q̄ ≡ (2q)² (mod l). This implies that q is a square modulo l. Let q ≡ w² (mod l). Compute wϕ(x, y) in F_q[x, y]/(y² − x³ − Ax − B, ψ_l) and check whether q̄(x, y) = wϕ(x, y). If so, t_l is ±2w (mod l), depending on the y-coordinate. If q turns out not to be a square modulo l, or if the equation does not hold for either of w and −w, our assumption that (x^{q²}, y^{q²}) = +q̄(x, y) is false, and thus (x^{q²}, y^{q²}) = −q̄(x, y). The characteristic equation then gives t_l = 0. Recall that our initial considerations omitted the case l = 2. Since we assume q to be odd, q + 1 − t ≡ t (mod 2), and in particular t_2 ≡ 0 (mod 2) if and only if E(F_q) has an element of order 2. By definition of addition in the group, any element of order 2 must be of the form (x_0, 0). Thus t_2 ≡ 0 (mod 2) if and only if the polynomial x³ + Ax + B has a root in F_q, which holds if and only if gcd(x^q − x, x³ + Ax + B) ≠ 1. Most of the computation is taken by the evaluation of ϕ(P) and ϕ²(P) for each prime l, that is, computing x^q, y^q, x^{q²}, y^{q²} for each prime l. This involves exponentiation in the ring R = F_q[x, y]/(y² − x³ − Ax − B, ψ_l) and requires O(log q) multiplications. Since the degree of ψ_l is (l² − 1)/2, each element in the ring is a polynomial of degree O(l²).
By the prime number theorem , there are around O(log q) primes of size O(log q), so l is O(log q) and thus O(l²) = O(log² q). Thus each multiplication in the ring R requires O(log⁴ q) multiplications in F_q, which in turn requires O(log² q) bit operations. In total, the number of bit operations for each prime l is O(log⁷ q). Given that this computation needs to be carried out for each of the O(log q) primes, the total complexity of Schoof's algorithm turns out to be O(log⁸ q). Using fast polynomial and integer arithmetic reduces this to Õ(log⁵ q). In the 1990s, Noam Elkies , followed by A. O. L. Atkin , devised improvements to Schoof's basic algorithm by restricting the set of primes S = {l_1, …, l_s} considered before to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime l is called an Elkies prime if the characteristic equation ϕ² − tϕ + q = 0 splits over F_l, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm . The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials, which come from the study of modular forms and an interpretation of elliptic curves over the complex numbers as lattices. Once we have determined which case we are in, instead of using division polynomials we are able to work with a polynomial that has lower degree than the corresponding division polynomial: O(l) rather than O(l²). For efficient implementation, probabilistic root-finding algorithms are used, which makes this a Las Vegas algorithm rather than a deterministic algorithm. Under the heuristic assumption that approximately half of the primes up to an O(log q) bound are Elkies primes, this yields an algorithm that is more efficient than Schoof's, with an expected running time of O(log⁶ q) using naive arithmetic, and Õ(log⁴ q) using fast arithmetic. Although this heuristic assumption is known to hold for most elliptic curves, it is not known to hold in every case, even under the GRH . Several algorithms were implemented in C++ by Mike Scott and are available with source code . The implementations are free (no terms, no conditions), and make use of the MIRACL library, which is distributed under the AGPLv3 .
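To make the quantities above concrete, a toy sketch for a small assumed curve (y² = x³ + 2x + 3 over F_97, chosen purely for illustration): it counts points naively, forms t = q + 1 − #E, checks the Hasse bound, and verifies the t mod 2 criterion by direct root search (equivalent to the gcd test, since x^q − x is the product of (x − a) over all a in F_q):

```python
q = 97          # a small odd prime field
A, B = 2, 3     # assumed curve coefficients

# Naive count: point at infinity plus affine solutions of y^2 = x^3 + Ax + B.
squares = {}
for y in range(q):
    squares.setdefault(y * y % q, []).append(y)
count = 1
for x in range(q):
    rhs = (x**3 + A * x + B) % q
    count += len(squares.get(rhs, []))

t = q + 1 - count
assert abs(t) <= 2 * q**0.5            # Hasse bound

# t is even iff #E is even (q + 1 is even) iff E has a point (x0, 0),
# i.e. iff x^3 + Ax + B has a root in F_q.
has_root = any((x**3 + A * x + B) % q == 0 for x in range(q))
assert (t % 2 == 0) == has_root

print(f"#E(F_{q}) = {count}, t = {t}, t mod 2 = {t % 2}")
```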
https://en.wikipedia.org/wiki/Schoof's_algorithm
The Schoof–Elkies–Atkin algorithm (SEA) is an algorithm used for finding the order of or calculating the number of points on an elliptic curve over a finite field . Its primary application is in elliptic curve cryptography . The algorithm is an extension of Schoof's algorithm by Noam Elkies and A. O. L. Atkin to significantly improve its efficiency (under heuristic assumptions). The Elkies–Atkin extension to Schoof's algorithm works by restricting the set of primes S = {l_1, …, l_s} considered to primes of a certain kind. These came to be called Elkies primes and Atkin primes respectively. A prime l is called an Elkies prime if the characteristic equation ϕ² − tϕ + q = 0 splits over F_l, while an Atkin prime is a prime that is not an Elkies prime. Atkin showed how to combine information obtained from the Atkin primes with the information obtained from Elkies primes to produce an efficient algorithm, which came to be known as the Schoof–Elkies–Atkin algorithm. The first problem to address is to determine whether a given prime is Elkies or Atkin. In order to do so, we make use of modular polynomials Φ_l(X, Y) that parametrize pairs of l- isogenous elliptic curves in terms of their j-invariants (in practice alternative modular polynomials may also be used for the same purpose). If the instantiated polynomial Φ_l(X, j(E)) has a root j(E′) in F_q, then l is an Elkies prime, and we may compute a polynomial f_l(X) whose roots correspond to points in the kernel of the l-isogeny from E to E′. The polynomial f_l is a divisor of the corresponding division polynomial used in Schoof's algorithm, and it has significantly lower degree, O(l) versus O(l²). For Elkies primes, this allows one to compute the number of points on E modulo l more efficiently than in Schoof's algorithm. In the case of an Atkin prime, we can gain some information from the factorization pattern of Φ_l(X, j(E)) in F_q[X], which constrains the possibilities for the number of points modulo l, but the asymptotic complexity of the algorithm depends entirely on the Elkies primes. Provided there are sufficiently many small Elkies primes (on average, we expect half the primes l to be Elkies primes), this results in a reduction in the running time. The resulting algorithm is probabilistic (of Las Vegas type), and its expected running time is, heuristically, Õ(log⁴ q), making it more efficient in practice than Schoof's algorithm. Here the Õ notation is a variant of big O notation that suppresses terms that are logarithmic in the main term of an expression. The Schoof–Elkies–Atkin algorithm is implemented in the PARI/GP computer algebra system in the GP function ellap.
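A toy sketch of the Elkies/Atkin dichotomy: l is an Elkies prime exactly when the discriminant t² − 4q of the characteristic equation is a square modulo l. In a real SEA run t is unknown (which is why the modular-polynomial machinery is needed); here an assumed trace for a small curve is used purely for illustration:

```python
# Hypothetical example values: a curve over F_97 with trace t = -5,
# i.e. #E = q + 1 - t. Both numbers are assumptions for this sketch.
q, t = 97, -5

def is_elkies(l, t, q):
    """l is Elkies iff x^2 - t x + q splits over F_l, i.e. iff the
    discriminant t^2 - 4q is a square mod l (Euler's criterion).
    d == 0, a repeated root, is counted as split here."""
    d = (t * t - 4 * q) % l
    return pow(d, (l - 1) // 2, l) in (0, 1)

for l in [3, 5, 7, 11, 13]:
    print(l, "Elkies" if is_elkies(l, t, q) else "Atkin")
```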
https://en.wikipedia.org/wiki/Schoof–Elkies–Atkin_algorithm
Canada's SchoolNet was a federal educational technology project in partnership with provinces, school boards, non-profit organizations, and the private sector, funded primarily by Industry Canada and developed by Ingenia Communications Corporation to promote the effective use of information and communications technologies (ICT) in libraries and schools across the country. [ 1 ] Many important early Canadian ICT programs fell under the SchoolNet umbrella, including Computers for Schools, LibraryNet, First Nations SchoolNet, and Canada's Digital Collections. [ 2 ] By 1997, SchoolNet had brought internet access to all 433 First Nations schools under federal jurisdiction. [ 3 ] Microsoft founder Bill Gates praised the program in the Edmonton Journal on November 26, 1995, stating that "SchoolNet is the leading program in the world in terms of letting kids get out and use computers." [ 4 ] Notable early projects included the SchoolNet MOO and the Special Needs Education (SNE) network. [ 5 ] The MOO was abandoned by Industry Canada in 1998, but a non-profit corporation was set up to continue the site as MOO Canada Eh! [ 6 ] From 1999 to 2001, SchoolNet funded the Project Achieve MOO, developed at the Knowledge Media Design Institute at the University of Toronto . [ 7 ] Although acknowledged by executives at Industry Canada as "one of the most successful websites in terms of the level of interest", funding for the SNE was discontinued, and the project moved with its developer Keenan Wellar from Ingenia Communications Corporation [ 8 ] to the charitable organization LiveWorkPlay in 1997, before the site was discontinued when corporate sponsorship failed to materialize. [ 9 ] The SchoolNet project was active from 1995 to the early 2000s, and the site was taken offline in 2008. [ 10 ]
https://en.wikipedia.org/wiki/SchoolNet
The School of Molecular Sciences is an academic unit of The College of Liberal Arts and Sciences at Arizona State University (ASU). The School of Molecular Sciences (SMS) is responsible for the study and teaching of the academic disciplines of chemistry and biochemistry at ASU. Chemistry instruction at ASU can be traced back to the early 1890s. At that time, the educational institution, a Normal School for the Territory of Arizona , "acquired...a supply of chemicals" for instructional purposes. [ 1 ] Chemistry classes were held in Old Main during the late 1800s and into the early 1900s, taught by Frederick M. Irish . [ 2 ] In 1927, President Arthur John Matthews hired George Bateman, the first faculty member to hold a PhD who was not also a principal or president of the school. [ 3 ] Bateman taught chemistry classes, among other subjects, for forty years. He oversaw the development of the physical sciences at ASU, including new science facilities and degrees. [ 1 ] In 1946, new majors leading to degrees were added, including Physical and Biological Science. In 1947 the State of Arizona designated $525,000 for a new science building. [ 1 ] In 1953 the first college, the College of Arts and Sciences, was established with 14 departments. [ 1 ] In 1954 Arizona State College was restructured into four colleges, which went into effect in the 1955–56 academic year: the College of Liberal Arts, the College of Education, the College of Applied Arts and Sciences, and the College of Business and Public Administration. In 1957, the Department of Chemistry first appeared in the Arizona State College Bulletin [ 4 ] (Vol. LXXII No. 2, April 1957), listed under the Division of Physical Sciences. Early chemists such as LeRoy Eyring helped build ASU's strong science reputation; Roland K. Robins conducted cancer research as early as 1957. [ 2 ] In 1958, Arizona State College was renamed Arizona State University . Chemistry was the first department to be approved to offer a doctoral degree. In 1960, George Boyd, the university's first coordinator of research, helped secure a portion of Harvey H. Nininger 's meteorites for ASU, making it the largest university-based meteorite collection in the world. [ 2 ] [ 5 ] In 1961, geochemist Carleton B. Moore became the first director of the Center for Meteorite Studies, [ 6 ] [ 7 ] which at the time was housed in the Department of Chemistry. [ 7 ] In 1963, Peter R. Buseck joined the faculty; he pioneered high-resolution transmission electron microscopy (TEM) research on meteorites and terrestrial minerals. [ 8 ] Also in 1963, ASU awarded its first doctoral degrees to four students, one of whom, Jesse W. Jones, was the first chemistry PhD of ASU and the first African American to earn a PhD at ASU. [ 9 ] Jones went on to teach chemistry at Baylor University for over 30 years. [ 10 ] [ 11 ] In 1965 Robert Pettit was hired and began developing marine-organism research that led to the creation of anti-cancer drugs and, in 1973, what became the Cancer Research Institute. [ 2 ] Pettit taught at ASU until his retirement in 2021. [ 12 ] In 1967, George Bateman retired after a productive forty-year career at ASU. The Bateman Physical Sciences Complex was named in 1977 to honor his many contributions and years of service. [ 1 ] In 1992 the Department of Chemistry was renamed the Department of Chemistry and Biochemistry.
In 2015 the department became the School of Molecular Sciences to recognize the fact that modern chemical science has impact well beyond the traditional disciplinary boundaries of chemistry and biochemistry. Rather than being discipline-based, the school's mission is to tackle important societal problems in medicine, technology, energy and the environment from an atomic and molecular perspective. The administrative offices of the School of Molecular Sciences are located within the Bateman Science Complex on ASU's Tempe campus. Faculty labs are located in the Bateman Complex, in the Biodesign Institute, and the ISTB1 and ISTB5 buildings. Research in the School of Molecular Sciences is organized around six themes:
https://en.wikipedia.org/wiki/School_of_Molecular_Sciences
The School of Physics and Astronomy at the University of St Andrews is an academic department dedicated to the teaching, research, and dissemination of knowledge in the fields of physics and astronomy. Located on the North Haugh in the historic town of St Andrews , in Fife, Scotland , the school is part of the oldest university in Scotland and the third-oldest in the English-speaking world. [ 2 ] Physics and astronomy have been studied and taught for more than 350 years at the University of St Andrews. Mathematical and astronomical work was integral to the medieval curriculum, and notable figures such as James Gregory , inventor of the Gregorian telescope , held positions at the university. [ 3 ] Over the centuries, the disciplines evolved into formal departments within the university. Sir David Brewster worked at the university on optical materials and the polarisation of light, and became principal of the university. [ 4 ] More recently, John F. Allen held the chair of natural philosophy at the university, [ 5 ] laying the foundations for a still very active group investigating the properties of matter at cryogenic temperatures and installing Scotland's first helium liquefier. The school still operates Scotland's only helium liquefier. During John Allen's time in St Andrews, the university purchased the North Haugh site, where the current building of the school is located. [ 6 ] The physics department moved to this location in 1965; the building is now named after John F. Allen. While physics and astronomy were originally taught in separate departments, they were merged in 1987 into the present School of Physics and Astronomy. [ 7 ] Today, the school continues a long tradition of inquiry as a leading center for physics and astronomy research. In 2017, the school was awarded Juno Champion status by the Institute of Physics , [ 8 ] [ 9 ] and shortly afterwards an Athena SWAN Silver award. The school aims to provide a high-quality education for both undergraduate and postgraduate students, developing the skills and knowledge for a successful career in industry, business or academia. It has modern teaching facilities and a better-than-average student-to-staff ratio, with all undergraduate degrees accredited by the Institute of Physics. The school has regularly been highly placed in university league tables. For example, from 2017 to 2021 the Guardian University league table placed the school four times at number one and once at number two in the UK. [ 7 ] The school's teaching portfolio includes a number of BSc (three to four years) and MPhys (four to five years) degree programmes, [ 10 ] plus an MSc programme in Astrophysics. PhD and EngD students in the school benefit from a wide range of technical and skills courses within the SUPA Graduate School, with some postgraduate students also trained within discipline-specific Doctoral Training Centres. The School of Physics and Astronomy is internationally recognized for its research in its priority areas, [ 11 ] [ 12 ] including: Research groups often collaborate with external partners and participate in national and international consortia, such as the Scottish Universities Physics Alliance (SUPA) . The school maintains telescopes and observing facilities for both research and education, including the Gregory telescope, the largest operating optical telescope in the UK. The school also owns three one-metre robotic telescopes within the Las Cumbres Observatory Global Network.
Collaborative agreements with external observatories and space agencies further expand the reach of the department's astronomical research. For photonics and materials research, the school operates two cleanrooms and specialized laser labs. These environments allow scientists to fabricate and study materials under precisely controlled conditions. As part of the Centre for Designer Quantum Materials , the school hosts an integrated ultra-high vacuum system comprising multiple angle-resolved photoemission systems and molecular beam epitaxy systems, with in-vacuo transfer of samples to dedicated ultra-low vibration laboratories that house a suite of bespoke low-temperature scanning tunneling microscopes.
https://en.wikipedia.org/wiki/School_of_Physics_and_Astronomy,_University_of_St_Andrews
The Schotten–Baumann reaction is a method to synthesize amides from amines and acid chlorides . The name also refers to the conversion of acid chlorides to esters . The reaction was first described in 1883 by German chemists Carl Schotten and Eugen Baumann . [ 1 ] [ 2 ] The term "Schotten–Baumann reaction conditions" often indicates the use of a two-phase solvent system, consisting of water and an organic solvent. The base in the water phase neutralizes the acid generated by the reaction, while the starting materials and product remain in the organic phase, often dichloromethane or diethyl ether . The Schotten–Baumann reaction or its reaction conditions are widely used in organic chemistry . [ 3 ] [ 4 ] [ 5 ] Examples: In the Fischer peptide synthesis ( Emil Fischer , 1903), [ 6 ] an α-chloro acid chloride is condensed with the ester of an amino acid . The ester is then hydrolyzed and the acid converted to the acid chloride, enabling the extension of the peptide chain by another unit. In a final step the chlorine atom is replaced by an amino group, completing the peptide synthesis .
https://en.wikipedia.org/wiki/Schotten–Baumann_reaction
The Schottky anomaly is an effect observed in solid-state physics where the specific heat capacity of a solid at low temperature has a peak. It is called anomalous because the heat capacity usually increases with temperature, or stays constant. It occurs in systems with a limited number of energy levels, so that the energy E(T) increases in sharp steps, one for each energy level that becomes available. Since C_V = dE/dT, the heat capacity shows a large peak as the temperature crosses over from one step to the next. This effect can be explained by looking at the change in entropy of the system. At zero temperature only the lowest energy level is occupied, the entropy is zero, and there is very little probability of a transition to a higher energy level. As the temperature increases, there is an increase in entropy and thus the probability of a transition goes up. As the temperature approaches the difference between the energy levels there is a broad peak in the specific heat corresponding to a large change in entropy for a small change in temperature. At high temperatures all of the levels are populated evenly, so there is again little change in entropy for small changes in temperature, and thus a lower specific heat capacity. For a two-level system the specific heat coming from the Schottky anomaly has the form C = Nk_B (Δ/T)² e^{Δ/T}/(e^{Δ/T} + 1)², where Δ is the splitting between the two levels expressed as a temperature (see the derivation below, in which ε/k_B = Δ). [ 1 ] This anomaly is usually seen in paramagnetic salts or even ordinary glass (due to paramagnetic iron impurities) at low temperature. At high temperature the paramagnetic spins have many spin states available, but at low temperatures some of the spin states are "frozen out" (having too high energy due to crystal field splitting ), and the entropy per paramagnetic atom is lowered. It was named after Walter H. Schottky . In a system where particles can have either a state of energy 0 or ε, the expected value of the energy of a particle in the canonical ensemble is ⟨ε⟩ = ε·e^{−βε}/(1 + e^{−βε}) = ε/(e^{+βε} + 1), with the inverse temperature β = 1/(k_B T) and the Boltzmann constant k_B. The total energy of N independent particles is thus U = N⟨ε⟩ = Nε/(e^{+βε} + 1). The heat capacity is therefore C = (∂U/∂T)_ε = −(1/(k_B T²)) ∂U/∂β = Nk_B (ε/(k_B T))² e^{+ε/(k_B T)}/(e^{+ε/(k_B T)} + 1)². Plotting C as a function of temperature, a peak can be seen at k_B T ≈ 0.417ε. In this section ε/k_B = Δ, for the Δ in the introductory section.
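A quick numerical check of the two-level formula derived above (a sketch; units are chosen so that k_B = ε = 1) reproduces the stated peak position:

```python
import numpy as np

eps = 1.0                       # level splitting, in units where k_B = 1
T = np.linspace(0.05, 2.0, 4000)
x = eps / T                     # = eps / (k_B T)
C = x**2 * np.exp(x) / (np.exp(x) + 1)**2   # heat capacity per particle / k_B

print("peak at k_B T / eps ≈", T[np.argmax(C)])  # ≈ 0.417, as stated above
```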
https://en.wikipedia.org/wiki/Schottky_anomaly
A Schottky defect is an excitation of the site occupations in a crystal lattice leading to point defects , named after Walter H. Schottky . In ionic crystals , this defect forms when oppositely charged ions leave their lattice sites and are incorporated elsewhere, for instance at the surface, creating oppositely charged vacancies . These vacancies are formed in stoichiometric units, to maintain an overall neutral charge in the ionic solid. Schottky defects consist of unoccupied anion and cation sites in a stoichiometric ratio. For a simple ionic crystal of type A − B + , a Schottky defect consists of a single anion vacancy (A) and a single cation vacancy (B), or $v_{\mathrm{A}}^{\bullet} + v_{\mathrm{B}}^{\prime}$ following Kröger–Vink notation . For a more general crystal with formula A x B y , a Schottky cluster is formed of x vacancies of A and y vacancies of B, so that the overall stoichiometry and charge neutrality are conserved. Conceptually, a Schottky defect is generated if the crystal is expanded by one unit cell, whose a priori empty sites are filled by atoms that diffused out of the interior, thus creating vacancies in the crystal. Schottky defects are observed most frequently when there is a small difference in size between the cations and anions that make up a material. Chemical equations in Kröger–Vink notation can be written for the formation of Schottky defects in, for example, TiO 2 and BaTiO 3 . This can be illustrated schematically with a two-dimensional diagram of a sodium chloride crystal lattice. The vacancies that make up the Schottky defects have opposite charge, so they experience a mutually attractive Coulomb force . At low temperature, they may form bound clusters. The degree to which Schottky defects affect the lattice depends on temperature: at higher temperatures, multiple anion vacancies can also be observed around a cation vacancy. Anion vacancies located near a cation vacancy hinder its displacement. The bound clusters are typically less mobile than their dilute counterparts, as multiple species need to move in a concerted motion for the whole cluster to migrate. This has important implications for numerous functional ceramics used in a wide range of applications, including ion conductors , solid oxide fuel cells and nuclear fuel . [ 1 ] This type of defect is typically observed in highly ionic compounds , highly coordinated compounds , and where there is only a small difference in size between the cations and anions of which the compound lattice is composed. Typical salts where Schottky disorder is observed are NaCl , KCl , KBr , CsCl and AgBr . [ citation needed ] For engineering applications, Schottky defects are important in oxides with the fluorite structure , such as CeO 2 , cubic ZrO 2 , UO 2 , ThO 2 and PuO 2 . [ citation needed ] Typically, the formation volume of a vacancy is positive: the lattice contraction due to the strains around the defect does not make up for the expansion of the crystal due to the additional number of sites. Thus, the density of the solid crystal is less than the theoretical density of the material.
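For instance, a textbook-style sketch of the Kröger–Vink equation for Schottky-pair formation in a rock-salt crystal such as NaCl (supplied here as a standard illustration, not taken from the text above; the TiO 2 and BaTiO 3 equations referred to above follow the same pattern with stoichiometric vacancy clusters):

```latex
\varnothing \;\rightleftharpoons\; v_{\mathrm{Na}}^{\prime} \;+\; v_{\mathrm{Cl}}^{\bullet}
```

The cation vacancy carries effective charge −1 (prime) and the anion vacancy +1 (bullet), so the pair is charge-neutral, as required.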
https://en.wikipedia.org/wiki/Schottky_defect
The Schottky effect or field-enhanced thermionic emission is a phenomenon in condensed matter physics named after Walter H. Schottky . In electron emission devices, especially electron guns , the thermionic electron emitter will be biased negative relative to its surroundings. This creates an electric field of magnitude F at the emitter surface. Without the field, the surface barrier seen by an escaping Fermi-level electron has height W equal to the local work function. The electric field lowers the surface barrier by an amount Δ W and increases the emission current. It can be modeled by a simple modification of the Richardson equation , replacing W by ( W − Δ W ). This gives the equation [ 1 ] [ 2 ] $J(F,T,W) = A_{\mathrm{G}}\, T^2\, e^{-(W - \Delta W)/(kT)}, \qquad \Delta W = \sqrt{\frac{q_e^3 F}{4\pi\varepsilon_0}},$ where J is the emission current density , T is the temperature of the metal, W is the work function of the metal, k is the Boltzmann constant , q e is the elementary charge , ε 0 is the vacuum permittivity , and A G is the product of a universal constant A 0 multiplied by a material-specific correction factor λ R which is typically of order 0.5. The expression for the barrier lowering is sometimes written as $[q_e F/(4\pi\varepsilon_0)]^{1/2}$, in which case $\Delta W$ is expressed as a voltage. Electron emission that takes place in the field-and-temperature regime where this modified equation applies is often called Schottky emission . This equation is relatively accurate for electric field strengths lower than about $10^8\ \mathrm{V\,m^{-1}}$. For electric field strengths higher than $10^8\ \mathrm{V\,m^{-1}}$, so-called Fowler–Nordheim (FN) tunneling begins to contribute significant emission current. In this regime, the combined effects of field-enhanced thermionic and field emission can be modeled by the Murphy–Good equation for thermo-field (T-F) emission. [ 3 ] At even higher fields, FN tunneling becomes the dominant electron emission mechanism, and the emitter operates in the so-called "cold field electron emission (CFE)" regime. Thermionic emission can also be enhanced by interaction with other forms of excitation such as light. [ 4 ] For example, excited Cs vapours in thermionic converters form clusters of Cs- Rydberg matter which yield a decrease of the collector emitting work function from 1.5 eV to 1.0–0.7 eV. Due to the long-lived nature of Rydberg matter this low work function persists, which essentially increases the efficiency of the low-temperature converter. [ 5 ]
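To get a feel for the magnitudes, a minimal sketch (CODATA constants hard-coded; the chosen field and temperature are illustrative assumptions) that evaluates the Schottky barrier lowering Δ W and the resulting current-enhancement factor exp(Δ W / kT):

```python
import math

Q_E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
K_B = 1.380649e-23         # Boltzmann constant, J/K

def barrier_lowering_eV(field_V_per_m):
    """Schottky lowering Delta W = sqrt(q_e^3 F / (4 pi eps0)), in eV."""
    dw_joule = math.sqrt(Q_E ** 3 * field_V_per_m / (4 * math.pi * EPS0))
    return dw_joule / Q_E

F = 1e8                    # V/m, near the upper end of the Schottky-emission regime
T = 1800.0                 # K, a typical hot-cathode temperature (assumed)
dw = barrier_lowering_eV(F)
boost = math.exp(dw * Q_E / (K_B * T))
print(f"Delta W = {dw:.3f} eV, current enhancement ~ x{boost:.1f}")
# -> Delta W = 0.379 eV, current enhancement ~ x11.6
```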
https://en.wikipedia.org/wiki/Schottky_effect
In finite group theory , the Schreier conjecture asserts that the outer automorphism group of every finite simple group is solvable . It was proposed by Otto Schreier in 1926, and is now known to be true as a result of the classification of finite simple groups , but no simpler proof is known.
https://en.wikipedia.org/wiki/Schreier_conjecture
In the area of mathematics called combinatorial group theory , the Schreier coset graph is a graph associated with a group G , a generating set of G , and a subgroup of G . The Schreier graph encodes the abstract structure of the group modulo an equivalence relation formed by the cosets of the subgroup. The graph is named after Otto Schreier , who used the term " Nebengruppenbild ". [ 1 ] An equivalent definition was made in an early paper of Todd and Coxeter. [ 2 ] Given a group G , a subgroup H ≤ G , and a generating set S = { s i : i in I } of G , the Schreier graph Sch( G , H , S ) is a graph whose vertices are the right cosets Hg = { hg : h in H } for g in G and whose edges are of the form ( Hg , Hgs ) for g in G and s in S . More generally, if X is any G -set , one can define a Schreier graph Sch( G , X , S ) of the action of G on X (with respect to the generating set S ): its vertices are the elements of X , and its edges are of the form ( x , xs ) for x in X and s in S . This includes the original Schreier coset graph definition, as H \ G is naturally a G -set with respect to multiplication from the right. From an algebraic-topological perspective, the graph Sch( G , X , S ) has no distinguished vertex, whereas Sch( G , H , S ) has the distinguished vertex H , and is thus a pointed graph . The Cayley graph of the group G itself is the Schreier coset graph for H = {1 G } ( Gross & Tucker 1987 , p. 73). A spanning tree of a Schreier coset graph corresponds to a Schreier transversal, as in Schreier's subgroup lemma ( Conder 2003 ). The book "Categories and Groupoids" listed below relates this to the theory of covering morphisms of groupoids . A subgroup H of a group G determines a covering morphism of groupoids $p : K \to G$, and if S is a generating set for G then its inverse image under p is the Schreier graph of ( G , S ). The graph is useful to understand coset enumeration and the Todd–Coxeter algorithm . Coset graphs can be used to form large permutation representations of groups and were used by Graham Higman to show that the alternating groups of large enough degree are Hurwitz groups ( Conder 2003 ). Stallings ' core graphs [ 3 ] are retracts of Schreier graphs of free groups, and are an essential tool for computing with subgroups of a free group. Every vertex-transitive graph is a coset graph.
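As a concrete illustration, a minimal Python sketch (the specific choice G = S3, H generated by the transposition (0 1), and the two generators are assumptions made for the example) that enumerates the vertices Hg and edges ( Hg , Hgs ) of Sch( G , H , S ):

```python
from itertools import permutations

# Group elements are tuples p with p[i] = image of i; the product p*q
# means "apply q first, then p": (p*q)[i] = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))   # the symmetric group S_3
s1 = (1, 0, 2)                     # transposition (0 1)
s2 = (1, 2, 0)                     # 3-cycle (0 1 2)
S = [s1, s2]
H = [(0, 1, 2), s1]                # subgroup generated by (0 1)

def coset(g):
    # label a right coset Hg by the (hashable) set of its elements
    return frozenset(compose(h, g) for h in H)

vertices = {coset(g) for g in G}
edges = {(coset(g), coset(compose(g, s))) for g in G for s in S}
print(len(vertices), "vertices,", len(edges), "directed edges")  # 3 vertices
```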
https://en.wikipedia.org/wiki/Schreier_coset_graph
Schreinemaker's analysis is the use of Schreinemaker's rules to create a phase diagram . After applying Schreinemaker's rules and creating a phase diagram, the resulting geometric figure will be thermodynamically accurate, although the axes will be undetermined. In order to determine the correct orientation of the geometric figure obtained through Schreinemaker's rules, one must have additional information about the given reactions or go through an analytical treatment of the thermodynamics of the relevant phases . Univariant lines are sometimes called reaction lines. The extension of a univariant line through the invariant point is called the metastable extension . Univariant lines are usually drawn as solid lines, while their metastable extensions are drawn as dotted lines. Univariant lines and their metastable extensions are often labeled by putting in square brackets the phase that is absent from the reaction associated with the given univariant line. In other words, since every univariant line represents a chemical equilibrium , these equilibrium curves are named after the phase (or phases) not involved in the equilibrium. Take an example with four phases: A, B, C, D. If a univariant line is defined by the equilibrium reaction A + D ⇌ C, this univariant line would be labeled [B], because the phase B is absent from the reaction A + D ⇌ C. The Morey–Schreinemaker coincidence theorem states that for every univariant line that passes through the invariant point, one side is stable and the other is metastable . The invariant point marks the boundary of the stable and metastable segments of a reaction line. An invariant point is defined as a representation of an invariant system (0 degrees of freedom by Gibbs' phase rule ) by a point on a phase diagram. A univariant line thus represents a univariant system with 1 degree of freedom. Two univariant lines can then define a divariant area with 2 degrees of freedom. From the Morey–Schreinemaker coincidence theorem, Schreinemaker's rules can be determined. These rules can be used in the creation of an accurate phase diagram where both axes are intensive thermodynamic variables. There are many correct collections of "Schreinemaker's rules", and the choice to use a given set of rules depends on the nature of the phase diagrams being created. Due to the phrasing of the Morey–Schreinemaker coincidence theorem, only one rule is essential to Schreinemaker's rules. This is the so-called metastable extensions rule : [ 1 ] the metastable extension of the [phase-absent] reaction must fall in the sector in which that phase is stable in all possible assemblages. This rule is geometrically sound in the construction of phase diagrams, since for every metastable reaction there must be a phase that is relatively stable. This phase must be the one which does not participate in the reaction and is therefore not consumed as a reactant or formed as a product, thus being "stable". Some collections of Schreinemaker's rules contain additional basic statements and definitions. An assemblage is the set of phases on one side of an equilibrium reaction. An assemblage can be either a single phase or a collection of phases. In the example above with the equilibrium reaction A + D ⇌ C, (A+D) is an assemblage, as is (C) on its own.
https://en.wikipedia.org/wiki/Schreinemaker's_analysis
Schreyerite (V 2 Ti 3 O 9 ) is a vanadium titanium oxide mineral found in the Lasamba Hill, Kwale district in Coast Province , Kenya . It is polymorphous with kyzylkumite . The mineral occurs as exsolution lamellae and particles in rutile , coexisting with kyanite , sillimanite , and tourmaline in a highly metamorphosed gneiss . It was named after German mineralogist and petrologist Werner Schreyer , for his research on the mineralogy of rock-forming minerals and the petrology of metamorphic rocks, both in nature and by experiment. Investigation of deposits of green vanadium-bearing kornerupine revealed the presence of a new vanadium mineral through observations in reflected light. Schreyerite was first discovered in the Kwale district, Kenya. Polymorphous with kyzylkumite, it occurs in highly twinned unmixed grains in vanadium-bearing rutile that occurs as idiomorphic crystals in kornerupine-bearing quartz-biotite-sillimanite gneiss. It also occurs in a pyrite deposit at Sartra , Sweden , in a Pb–Zn ore deposit at Rampura Agucha , India , and recently in metamorphic rocks of the Ol'khon complex on the western shore of Lake Baikal , Russia . Instead of the usual intergrowths with rutile, single crystals of schreyerite were found there, associated with titanite . Schreyerite is a reddish-brown, opaque mineral with metallic luster. Its reflectivity is slightly lower than that of rutile, and as a result it appears mostly gray. Pleochroism is weak: yellowish brown to reddish brown. When immersed in oil, the contrast between rutile and schreyerite becomes clearer, and the color becomes more intense. With crossed polarizers, moderate anisotropism becomes evident, so that the very fine lamellar twinning becomes distinct. It has a hardness of 7 and a calculated specific gravity of 4.46.
https://en.wikipedia.org/wiki/Schreyerite
In quantum mechanics , the Schrieffer–Wolff transformation is a unitary transformation used to determine an effective (often low-energy) Hamiltonian by decoupling weakly interacting subspaces. [ 1 ] [ 2 ] Using a perturbative approach , the transformation can be constructed such that the interaction between the two subspaces vanishes up to the desired order in the perturbation. The transformation also perturbatively diagonalizes the system Hamiltonian to first order in the interaction. In this, the Schrieffer–Wolff transformation is an operator version of second-order perturbation theory . The Schrieffer–Wolff transformation is often used to project out the high-energy excitations of a given quantum many-body Hamiltonian in order to obtain an effective low-energy model . [ 1 ] The Schrieffer–Wolff transformation thus provides a controlled perturbative way to study the strong-coupling regime of quantum many-body Hamiltonians. Although commonly attributed to the paper in which the Kondo model was obtained from the Anderson impurity model by J. R. Schrieffer and P. A. Wolff, [ 3 ] Joaquin Mazdak Luttinger and Walter Kohn used this method in an earlier work about non-periodic k·p perturbation theory . [ 4 ] Using the Schrieffer–Wolff transformation, the high-energy charge excitations present in the Anderson impurity model are projected out and a low-energy effective Hamiltonian is obtained which has only virtual charge fluctuations. For the Anderson impurity model case, the Schrieffer–Wolff transformation showed that the Kondo model lies in the strong-coupling regime of the Anderson impurity model. Consider a quantum system evolving under the time-independent Hamiltonian operator $H$ of the form $H = H_0 + V$, where $H_0$ is a Hamiltonian with known eigenstates $|m\rangle$ and corresponding eigenvalues $E_m$, and where $V$ is a small perturbation. Moreover, it is assumed without loss of generality that $V$ is purely off-diagonal in the eigenbasis of $H_0$, i.e., $\langle m|V|m\rangle = 0$ for all $m$. Indeed, this situation can always be arranged by absorbing the diagonal elements of $V$ into $H_0$, thus modifying its eigenvalues to $E'_m = E_m + \langle m|V|m\rangle$. The Schrieffer–Wolff transformation is a unitary transformation which expresses the Hamiltonian in a basis (the "dressed" basis) [ 5 ] where it is diagonal to first order in the perturbation $V$. This unitary transformation is conventionally written as $H' = e^{S} H e^{-S}.$ When $V$ is small, the generator $S$ of the transformation will likewise be small. The transformation can then be expanded in $S$ using the Baker–Campbell–Hausdorff formula: $H' = H + [S, H] + \tfrac{1}{2}[S, [S, H]] + \dots$ Here, $[A, B]$ is the commutator between operators $A$ and $B$.
In terms of $H_0$ and $V$, the transformation becomes $H' = H_0 + V + [S, H_0] + [S, V] + \tfrac{1}{2}[S, [S, H_0]] + \tfrac{1}{2}[S, [S, V]] + \dots$ The Hamiltonian can be made diagonal to first order in $V$ by choosing the generator $S$ such that $V + [S, H_0] = 0.$ This equation always has a definite solution under the assumption that $V$ is off-diagonal in the eigenbasis of $H_0$. Substituting this choice in the previous transformation yields $H' = H_0 + \tfrac{1}{2}[S, V] + O(V^3).$ This expression is the standard form of the Schrieffer–Wolff transformation. Note that all the operators on the right-hand side are now expressed in a new basis "dressed" by the interaction $V$ to first order. In the general case, the difficult step of the transformation is to find an explicit expression for the generator $S$. Once this is done, it is straightforward to compute the Schrieffer–Wolff Hamiltonian by computing the commutator $[S, V]$. The Hamiltonian can then be projected on any subspace of interest to obtain an effective projected Hamiltonian for that subspace. In order for the transformation to be accurate, the eliminated subspaces must be energetically well separated from the subspace of interest, meaning that the strength of the interaction $V$ must be much smaller than the energy difference between the subspaces. This is the same regime of validity as in standard second-order perturbation theory . This section will illustrate how to practically compute the Schrieffer–Wolff (SW) transformation in the particular case of an unperturbed Hamiltonian that is block-diagonal. But first, to properly compute anything, it is important to understand what is actually happening during the whole procedure. The SW transformation $W = e^{S}$ being unitary, it does not change the amount of information or the complexity of the Hamiltonian. The resulting shuffle of the matrix elements creates, however, a hierarchy in the information (e.g. eigenvalues) that can be used afterward for a projection onto the relevant sector. In addition, when the off-diagonal elements coupling the blocks are much smaller than the typical unperturbed energy scales, a perturbative expansion is allowed to simplify the problem. Consider now, for concreteness, the full Hamiltonian $H = H_0 + V$ with an unperturbed part $H_0$ made of independent blocks $H_0^i$. In physics, and in the original motivation for the SW transformation, it is desired that each block corresponds to a distinct energy scale. In particular, all degenerate energy levels should belong to the same block. This well-split Hamiltonian is our starting point $H_0$. A perturbative coupling $V$ now takes on a specific meaning: the typical matrix element coupling different sectors must be much smaller than the eigenvalue differences between those sectors.
The SW transformation will modify each block $H_0^i$ into an effective Hamiltonian $H_{\mathrm{eff}}^i$ incorporating ("integrating out") the effects of the other blocks via the perturbation $V$. In the end, it is sufficient to look at the sector of interest (called a projection) and to work with the chosen effective Hamiltonian to compute, for instance, eigenvalues and eigenvectors. In physics, this would generate effective low- (or high-)energy Hamiltonians. As mentioned in the previous section, the difficult step is the computation of the generator $S$ of the SW transformation. To obtain results comparable to second-order perturbation theory, it is enough to solve the equation $[H_0, S] = V$ (see Derivation). A simple trick in two steps is available when $H_0$ is block-diagonal. The first step consists of finding the unitary transformation $U$ diagonalizing $H_0$. Since each block $H_0^i$ can be diagonalized with a unitary transformation $U^i$ (this is the matrix of right-eigenvectors of $H_0^i$), it is enough to build $U = \operatorname{diag}(U^i)$, composed of the smaller rotations $U^i$ on its diagonal, to transform $H_0$ into a purely diagonal matrix $D_0 = \operatorname{diag}(d_i)$. The application of $U$ to the whole matrix $H$ then yields $H_D = U^{-1} H U = D_0 + V'$ with a transformed perturbation $V' = U^{-1} V U$, which remains off-diagonal. In this new form, the second step to compute $S$ becomes very simple, since we obtain an explicit expression, in components: $S_{ij} = \frac{V'_{ij}}{d_i - d_j},$ where $d_i$ denotes the $i$th element on the diagonal of $D_0$. The reason for this comes from the observation that, for any matrix $A = (A_{ij})$ and diagonal matrix $D = \operatorname{diag}(d_i)$, we have the relation $[D, A]_{ij} = (d_i - d_j) A_{ij}$. Since the generator for $H_D$ is defined by $[D_0, S] = V'$, the above formula follows immediately. As expected, the associated operator $W = e^{S}$ is unitary (it satisfies $W^\dagger = W^{-1}$) because the denominator of $S$ changes sign when transposed, and $V$ is Hermitian.
Using the last formula in the derivation, the second-order Schrieffer–Wolff-transformed Hamiltonian $H' = e^{S} H_D e^{-S}$ now has an explicit form as a function of its elementary terms $D_0$ and $V'$: $H'_{ij} = d_i \delta_{ij} + \frac{1}{2} \sum_k V'_{ik} V'_{kj} \left( \frac{1}{d_i - d_k} + \frac{1}{d_j - d_k} \right) + O(V'^3).$ The "dressed" states have an energy $E'_n = d_n + \sum_k \frac{V'_{nk} V'_{kn}}{d_n - d_k},$ following the recipe for first-order (non-degenerate) perturbation theory. This is applicable since the SW transformation is based on the approximation $|V_{ij}| \ll |d_i - d_j|.$ Note that the unitary rotation $U$ does not affect the eigenvalues, meaning that $E'_n$ is also a meaningful approximation for the original Hamiltonian $H$. The "dressed" states themselves can be derived, in first-order perturbation theory too, as $\psi'_n = \psi_{R,n} + \frac{1}{2} \sum_{k \neq n} \sum_j \psi_{R,k}\, V'_{kj} V'_{jn}\, \frac{2 d_j - d_k - d_n}{(d_j - d_n)(d_j - d_k)(d_n - d_k)}.$ Notice the index $R$ on the unperturbed eigenstate $\psi_R$ of $D_0$, recalling the current rotated basis of $H_D$. To express the eigenstates in the natural basis $\psi$ of $H$ itself, it is necessary to perform the unitary transformation $\psi_R \to U^{-1}\psi$.
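A minimal numerical sketch of this recipe (NumPy assumed; the particular 3×3 matrix and energy scales are invented for illustration). Here $H_0$ is taken already diagonal, so $U$ is the identity and $V' = V$; the code builds $S$ from $S_{ij} = V'_{ij}/(d_i - d_j)$, forms $H' = D_0 + \tfrac{1}{2}[S, V']$, and compares its diagonal with the exact spectrum:

```python
import numpy as np

d = np.array([0.0, 5.0, 6.0])           # well-separated unperturbed levels
D0 = np.diag(d)

# small Hermitian, purely off-diagonal perturbation: |V_ij| << |d_i - d_j|
V = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])

diff = d[:, None] - d[None, :]          # d_i - d_j
np.fill_diagonal(diff, np.inf)          # no diagonal generator components
S = V / diff                            # S_ij = V'_ij / (d_i - d_j)

H_eff = D0 + 0.5 * (S @ V - V @ S)      # H' = H0 + (1/2)[S, V'] + O(V'^3)
E_sw = np.sort(np.diag(H_eff))
E_exact = np.linalg.eigvalsh(D0 + V)

print("SW, 2nd order:", np.round(E_sw, 4))
print("exact        :", np.round(E_exact, 4))   # agreement up to O(V'^3)
```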
https://en.wikipedia.org/wiki/Schrieffer–Wolff_transformation
Schroeder's paradox refers to the phenomenon of certain polymers exhibiting more solvent uptake (observed as swelling ) when exposed to a pure liquid than when exposed to the saturated vapor of the same liquid. [ 1 ] It is named after the German chemist Paul von Schroeder , who first reported the phenomenon in 1903 while working on a sample of gelatin in contact with water. [ 2 ] An equivalent observation has also been independently discovered and discussed within the biophysical community as the vapor pressure paradox . [ 3 ] The phenomenon was recognized as notable due to its occurrence in the Nafion /water system, which is technologically important owing to its application in proton-exchange membrane fuel cells . [ 4 ] [ 5 ] According to phase equilibrium theory, the activity of a chemical species should be equal to its equilibrium partial vapor pressure , so both saturated vapor and pure liquid should exhibit the same equilibrium for absorption into the polymer. For this reason, Schroeder's experimental results were immediately questioned, and the phenomenon has often been attributed to experimental error, such as failure to attain proper water saturation or isothermal conditions between the phases. [ 1 ] However, even exact measurements support the existence of a systematic difference between sorption from saturated vapor and from pure liquid for certain systems. Additional surface effects along the polymer–liquid interface are required to explain the difference. A mechanism based on the action of Maxwell stresses due to formation of an electrical double layer at the polymer's surface, present only where the polymer is submerged in liquid, has been proposed to explain this effect in the case of ion-exchange polymers , [ 6 ] and a similar mechanism involving van der Waals and solvation forces for the case of nonionogenic polymers. [ 7 ] [ 3 ] Mechanistic interpretations based on wetting of micropores in the polymer matrix have also been proposed. [ 1 ] The difference in absorption can in either case be explained by a difference in surface stresses on the interface , which differ between immersion in pure liquid and exposure to saturated vapor, resolving the paradox without requiring a difference in activity between the two. [ 6 ] Schroeder's paradox has been reported for various polymer/solvent pairs.
https://en.wikipedia.org/wiki/Schroeder's_paradox
Schröder's equation , [ 1 ] [ 2 ] [ 3 ] named after Ernst Schröder , is a functional equation with one independent variable : given the function h , find the function Ψ such that $\Psi\big(h(x)\big) = s\,\Psi(x)$ for all $x$. Schröder's equation is an eigenvalue equation for the composition operator C h that sends a function f to f ( h (.)) . If a is a fixed point of h , meaning h ( a ) = a , then either Ψ( a ) = 0 (or ∞ ) or s = 1 . Thus, provided that Ψ( a ) is finite and Ψ′( a ) does not vanish or diverge, the eigenvalue s is given by s = h ′( a ) . For a = 0 , if h is analytic on the unit disk, fixes 0 , and 0 < | h ′(0)| < 1 , then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) Ψ satisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf. Koenigs function . Equations such as Schröder's are suitable for encoding self-similarity , and have thus been extensively utilized in studies of nonlinear dynamics (often referred to colloquially as chaos theory ). They are also used in studies of turbulence , as well as the renormalization group . [ 4 ] [ 5 ] An equivalent transpose form of Schröder's equation for the inverse Φ = Ψ −1 of Schröder's conjugacy function is h (Φ( y )) = Φ( sy ) . The change of variables α( x ) = log(Ψ( x ))/log( s ) (the Abel function ) further converts Schröder's equation to the older Abel equation , α( h ( x )) = α( x ) + 1 . Similarly, the change of variables Ψ( x ) = log(φ( x )) converts Schröder's equation to Böttcher's equation , φ( h ( x )) = (φ( x )) s . Moreover, for the velocity, [ 5 ] β( x ) = Ψ/Ψ′ , Julia 's equation , β( f ( x )) = f ′( x )β( x ) , holds. The n -th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue s n , instead. In the same vein, for an invertible solution Ψ( x ) of Schröder's equation, the (non-invertible) function Ψ( x ) k (log Ψ( x )) is also a solution, for any periodic function k ( x ) with period log( s ) . All solutions of Schröder's equation are related in this manner. Schröder's equation was solved analytically if a is an attracting (but not superattracting) fixed point, that is 0 < | h ′( a )| < 1 , by Gabriel Koenigs (1884). [ 6 ] [ 7 ] In the case of a superattracting fixed point, | h ′( a )| = 0 , Schröder's equation is unwieldy, and had best be transformed to Böttcher's equation . [ 8 ] There are a good number of particular solutions dating back to Schröder's original 1870 paper. [ 1 ] The series expansion around a fixed point and the relevant convergence properties of the solution for the resulting orbit and its analyticity properties are cogently summarized by Szekeres . [ 9 ] Several of the solutions are furnished in terms of asymptotic series , cf. Carleman matrix . It is used to analyse discrete dynamical systems by finding a new coordinate system in which the system (orbit) generated by h ( x ) looks simpler, a mere dilation. More specifically, a system for which a discrete unit time step amounts to x → h ( x ) can have its smooth orbit (or flow ) reconstructed from the solution of the above Schröder's equation, its conjugacy equation . That is, h ( x ) = Ψ −1 ( s Ψ( x )) ≡ h 1 ( x ) .
In general, all of its functional iterates (its regular iteration group , see iterated function ) are provided by the orbit $h_t(x) = \Psi^{-1}\big(s^t\,\Psi(x)\big),$ for t real — not necessarily positive or integer. (Thus a full continuous group .) The set of h n ( x ) , i.e., of all positive integer iterates of h ( x ) ( semigroup ) is called the splinter (or Picard sequence) of h ( x ) . However, all iterates (fractional, infinitesimal, or negative) of h ( x ) are likewise specified through the coordinate transformation Ψ( x ) determined to solve Schröder's equation: a holographic continuous interpolation of the initial discrete recursion x → h ( x ) has been constructed; [ 10 ] in effect, the entire orbit . For instance, the functional square root is h 1/2 ( x ) = Ψ −1 ( s 1/2 Ψ( x )) , so that h 1/2 ( h 1/2 ( x )) = h ( x ) , and so on. For example, [ 11 ] special cases of the logistic map such as the chaotic case h ( x ) = 4 x (1 − x ) were already worked out by Schröder in his original article [ 1 ] (p. 306), with $\Psi(x) = \arcsin^2\!\sqrt{x}$, so that $h_t(x) = \sin^2\!\big(2^t \arcsin\sqrt{x}\big)$. In fact, this solution is seen to result as motion dictated by a sequence of switchback potentials, [ 12 ] V ( x ) ∝ x ( x − 1) ( nπ + arcsin √ x ) 2 , a generic feature of continuous iterates effected by Schröder's equation. A nonchaotic case he also illustrated with his method, h ( x ) = 2 x (1 − x ) , yields $h_t(x) = \tfrac{1}{2}\Big(1 - (1 - 2x)^{2^t}\Big).$ Likewise, for the Beverton–Holt model , h ( x ) = x /(2 − x ) , one readily finds [ 10 ] Ψ( x ) = x /(1 − x ) , so that [ 13 ] $h_t(x) = \frac{x}{2^t - (2^t - 1)\,x}.$
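A minimal sketch of this recipe for the Beverton–Holt case above, checking numerically that the half-iterate composed with itself reproduces h; here Ψ(x) = x/(1 − x) and s = 1/2 (the multiplier h′(0)), as given in the text:

```python
def psi(x):        # Schroeder conjugacy function for h(x) = x / (2 - x)
    return x / (1.0 - x)

def psi_inv(y):
    return y / (1.0 + y)

def h_t(x, t, s=0.5):
    # fractional iterate: h_t(x) = psi^{-1}(s^t * psi(x))
    return psi_inv(s ** t * psi(x))

x = 0.3
half = h_t(x, 0.5)           # functional square root of h
print(h_t(half, 0.5))        # applying the half-iterate twice ...
print(x / (2.0 - x))         # ... matches h(x) = x / (2 - x): 0.17647...
```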
https://en.wikipedia.org/wiki/Schröder's_equation
A Schröder–Bernstein property [ 1 ] is any mathematical property that matches the following pattern: if an object X is similar to a part of an object Y , and also Y is similar to a part of X , then X and Y are similar. The name Schröder–Bernstein (or Cantor–Schröder–Bernstein, or Cantor–Bernstein) property is in analogy to the theorem of the same name (from set theory). In order to define a specific Schröder–Bernstein property one should decide which objects are considered, what "part of" means, and what "similar" means. In the classical (Cantor–)Schröder–Bernstein theorem , objects are sets, "part of" is interpreted as subset, and "similar" means equinumerous (connected by a bijection). Not all statements of this form are true. For example, assume that objects are triangles, "part of" means a triangle contained inside the given triangle, and "similar" has its usual geometric meaning. Then the statement fails badly: every triangle X evidently is similar to some triangle inside Y , and the other way round; however, X and Y need not be similar. A Schröder–Bernstein property is thus a joint property of a class of objects, a relation "be a part of", and a relation "be similar". Instead of the relation "be a part of" one may use a binary relation "be embeddable into" (embeddability), interpreted as "be similar to some part of". Then a Schröder–Bernstein property takes the following form: if X is embeddable into Y and Y is embeddable into X , then X and Y are similar. The same in the language of category theory : if X injects into Y and Y injects into X , then X and Y are isomorphic. The relation "injects into" is a preorder (that is, a reflexive and transitive relation), and "be isomorphic" is an equivalence relation . Also, embeddability is usually a preorder, and similarity is usually an equivalence relation (which is natural, but not provable in the absence of formal definitions). Generally, a preorder leads to an equivalence relation and a partial order between the corresponding equivalence classes . The Schröder–Bernstein property claims that the embeddability preorder (assuming that it is a preorder) leads to the similarity equivalence relation, and a partial order (not just a preorder) between classes of similar objects. The problem of deciding whether a Schröder–Bernstein property (for a given class and two relations) holds or not is called a Schröder–Bernstein problem. A theorem that states a Schröder–Bernstein property (for a given class and two relations), thus solving the Schröder–Bernstein problem in the affirmative, is called a Schröder–Bernstein theorem (for the given class and two relations), not to be confused with the classical (Cantor–)Schröder–Bernstein theorem mentioned above. The Schröder–Bernstein theorem for measurable spaces [ 2 ] states the Schröder–Bernstein property for the case where objects are measurable spaces, embeddability means the existence of an injective bimeasurable map, and similarity means isomorphism of measurable spaces. There is also a Schröder–Bernstein theorem for operator algebras . [ 3 ] Taking into account that commutative von Neumann algebras are closely related to measurable spaces, [ 4 ] one may say that the Schröder–Bernstein theorem for operator algebras is in some sense a noncommutative counterpart of the Schröder–Bernstein theorem for measurable spaces. The Myhill isomorphism theorem can be viewed as a Schröder–Bernstein theorem in computability theory . There is also a Schröder–Bernstein theorem for Borel sets . [ 5 ] Banach spaces violate the Schröder–Bernstein property; [ 6 ] [ 7 ] here objects are Banach spaces, embeddability can be taken as being isomorphic to a complemented subspace, and similarity means being linearly homeomorphic. Many other Schröder–Bernstein problems related to various spaces and algebraic structures (groups, rings, fields etc.) are discussed by informal groups of mathematicians.
https://en.wikipedia.org/wiki/Schröder–Bernstein_property
In set theory , the Schröder–Bernstein theorem states that, if there exist injective functions f : A → B and g : B → A between the sets A and B , then there exists a bijective function h : A → B . In terms of the cardinality of the two sets, this classically implies that if | A | ≤ | B | and | B | ≤ | A | , then | A | = | B | ; that is, A and B are equipotent . This is a useful feature in the ordering of cardinal numbers . The theorem is named after Felix Bernstein and Ernst Schröder . It is also known as the Cantor–Bernstein theorem or Cantor–Schröder–Bernstein theorem , after Georg Cantor , who first published it (albeit without proof). The following proof is attributed to Julius König . [ 1 ] Assume without loss of generality that A and B are disjoint . For any a in A or b in B we can form a unique two-sided sequence of elements that are alternately in A and B , by repeatedly applying $f$ and $g^{-1}$ to go from A to B , and $g$ and $f^{-1}$ to go from B to A (where defined; the inverses $f^{-1}$ and $g^{-1}$ are understood as partial functions ). For any particular a , this sequence may terminate to the left or not, at a point where $f^{-1}$ or $g^{-1}$ is not defined. By the fact that $f$ and $g$ are injective functions, each a in A and b in B is in exactly one such sequence to within identity: if an element occurs in two sequences, all elements to the left and to the right must be the same in both, by the definition of the sequences. Therefore, the sequences form a partition of the (disjoint) union of A and B . Hence it suffices to produce a bijection between the elements of A and B in each of the sequences separately, as follows: call a sequence an A-stopper if it stops at an element of A , or a B-stopper if it stops at an element of B . Otherwise, call it doubly infinite if all the elements are distinct, or cyclic if it repeats. If we assume the axiom of choice, then a pair of surjective functions $f$ and $g$ also implies the existence of a bijection. We construct an injective function h : B → A from $f^{-1}$ by picking a single element from the inverse image of each point in $B$. The surjectivity of $f$ guarantees the existence of at least one element in each such inverse image. We do the same to obtain an injective function k : A → B from $g^{-1}$. The Schröder–Bernstein theorem can then be applied to the injections h and k . The traditional name "Schröder–Bernstein" is based on two proofs published independently in 1898. Cantor is often added because he first stated the theorem in 1887, while Schröder's name is often omitted because his proof turned out to be flawed; the name of Richard Dedekind , who first proved it, is not connected with the theorem. According to Bernstein, Cantor had suggested the name equivalence theorem (Äquivalenzsatz). [ 2 ] Both proofs of Dedekind are based on his famous 1888 memoir Was sind und was sollen die Zahlen? and derive it as a corollary of a proposition equivalent to statement C in Cantor's paper, [ 7 ] which reads A ⊆ B ⊆ C and | A | = | C | implies | A | = | B | = | C | .
Cantor observed this property as early as 1882/83 during his studies in set theory and transfinite numbers and was therefore (implicitly) relying on the axiom of choice . The 1895 proof by Cantor relied, in effect, on the axiom of choice by inferring the result as a corollary of the well-ordering theorem . [ 8 ] [ 9 ] However, König's proof given above shows that the result can also be proved without using the axiom of choice. On the other hand, König's proof uses the principle of excluded middle to draw a conclusion through case analysis. As such, the above proof is not a constructive one. In fact, in a constructive set theory such as intuitionistic set theory $\mathsf{IZF}$, which adopts the full axiom of separation but dispenses with the principle of excluded middle, assuming the Schröder–Bernstein theorem implies the latter. [ 19 ] In turn, there is no proof of König's conclusion in this or weaker constructive theories. Therefore, intuitionists do not accept the statement of the Schröder–Bernstein theorem. [ 20 ] There is also a proof which uses Tarski's fixed point theorem . [ 21 ]
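A minimal computational sketch of the chain analysis in König's proof, for the (assumed) example A = B = ℕ with injections f(n) = 2n and g(n) = 3n. Tracing a chain backwards strictly decreases the value here, so every chain terminates and each element belongs to an A-stopper or a B-stopper; doubly infinite and cyclic chains do not occur in this example:

```python
# h uses f on A-stopper chains and g^{-1} on B-stopper chains.
def h(a):
    # trace the chain backwards from a in A until a preimage is missing
    x, side = a, 'A'
    while True:
        if side == 'A':
            if x % 3 == 0 and x > 0:   # x = g(y) for y = x // 3
                x, side = x // 3, 'B'
            else:
                stopper = 'A'
                break
        else:
            if x % 2 == 0 and x > 0:   # x = f(y) for y = x // 2
                x, side = x // 2, 'A'
            else:
                stopper = 'B'
                break
    return 2 * a if stopper == 'A' else a // 3  # f(a) or g^{-1}(a)

image = {h(a) for a in range(100)}
assert len(image) == 100   # h is injective on this finite sample
```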
https://en.wikipedia.org/wiki/Schröder–Bernstein_theorem
The Cantor–Bernstein–Schroeder theorem of set theory has a counterpart for measurable spaces , sometimes called the Borel Schroeder–Bernstein theorem, since measurable spaces are also called Borel spaces . This theorem, whose proof is quite easy, is instrumental when proving that two measurable spaces are isomorphic. The general theory of standard Borel spaces contains very strong results about isomorphic measurable spaces, see Kuratowski's theorem . However, (a) the latter theorem is very difficult to prove, (b) the former theorem is satisfactory in many important cases (see Examples), and (c) the former theorem is used in the proof of the latter theorem. Let $X$ and $Y$ be measurable spaces. If there exist injective, bimeasurable maps $f : X \to Y$ and $g : Y \to X$, then $X$ and $Y$ are isomorphic (the Schröder–Bernstein property ). The phrase "$f$ is bimeasurable" means that, first, $f$ is measurable (that is, the preimage $f^{-1}(B)$ is measurable for every measurable $B \subset Y$), and second, the image $f(A)$ is measurable for every measurable $A \subset X$. (Thus, $f(X)$ must be a measurable subset of $Y$, not necessarily the whole of $Y$.) An isomorphism (between two measurable spaces) is, by definition, a bimeasurable bijection . If it exists, these measurable spaces are called isomorphic. First, one constructs a bijection $h : X \to Y$ out of $f$ and $g$ exactly as in the proof of the Cantor–Bernstein–Schroeder theorem . Second, $h$ is measurable, since it coincides with $f$ on a measurable set and with $g^{-1}$ on its complement. Similarly, $h^{-1}$ is measurable. The open interval (0, 1) and the closed interval [0, 1] are evidently non-isomorphic as topological spaces (that is, not homeomorphic ). However, they are isomorphic as measurable spaces. Indeed, the closed interval is evidently isomorphic to a shorter closed subinterval of the open interval. Also the open interval is evidently isomorphic to a part of the closed interval (just itself, for instance). The real line $\mathbb{R}$ and the plane $\mathbb{R}^2$ are isomorphic as measurable spaces. It is immediate to embed $\mathbb{R}$ into $\mathbb{R}^2$. The converse, an embedding of $\mathbb{R}^2$ into $\mathbb{R}$ (as measurable spaces, of course, not as topological spaces) can be made by the well-known trick of interspersed digits: for example, the two coordinates are expanded in decimal digits and the digits are interleaved into the expansion of a single number. The map $g : \mathbb{R}^2 \to \mathbb{R}$ so obtained is clearly injective. It is easy to check that it is bimeasurable. (However, it is not bijective; for example, the number $1/11 = 0.090909\dots$ is not of the form $g(x, y)$.)
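A rough illustration of the digit-interleaving trick on truncated decimal expansions (truncating to a fixed number of digits is an assumption made purely for display; the real map acts on full expansions):

```python
def interleave(x, y, digits=6):
    """Interleave the first `digits` decimal digits of x, y in (0, 1)."""
    xs = f"{x:.{digits}f}"[2:]          # fractional digits of x
    ys = f"{y:.{digits}f}"[2:]          # fractional digits of y
    mixed = "".join(a + b for a, b in zip(xs, ys))
    return float("0." + mixed)

print(interleave(0.123456, 0.654321))   # 0.162534435261: digits alternate
```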
https://en.wikipedia.org/wiki/Schröder–Bernstein_theorem_for_measurable_spaces
Schrödinger, Inc. is an international scientific software and biotechnology company that specializes in developing computational tools and software for drug discovery and materials science. Schrödinger's software is used by pharmaceutical companies , biotech firms, and academic researchers to simulate and model the behavior of molecules at the atomic level. This accelerates the design and development of new drugs and materials, reducing the time and cost of bringing them to market. Schrödinger's software tools include molecular dynamics simulations, free energy calculations, quantum mechanics calculations, and virtual screening tools. The company also offers consulting services and collaborates with partners in the industry to advance the field of computational chemistry and drug discovery. Schrödinger's computational platforms evaluate compounds in silico, with experimental accuracy on properties such as binding affinity and solubility. The company's products include molecular modeling programs and an enterprise informatics platform named LiveDesign, which is intended to facilitate communication among interdisciplinary research teams. [ 2 ] In addition to computational platforms, Schrödinger develops custom software for enterprises, as well as training, computer-cluster design and implementation, and research-based drug discovery projects. [ 3 ] [ 4 ] Schrödinger software licenses are available to academic institutions for education and not-for-profit research. [ 5 ] Schrödinger's partners include pharmaceutical companies such as Bayer [ 6 ] [ 7 ] and Takeda . [ 8 ] Nimbus Therapeutics, co-founded by Schrödinger, uses Schrödinger's drug screening and design platform for drug discovery . In 2016, Nimbus Therapeutics sold an Acetyl-CoA carboxylase (ACC) inhibitor designed by Schrödinger to Gilead Sciences in a deal worth up to $1.2 billion. [ 9 ] As of spring 2019 the ACC inhibitor was moving through late-stage clinical trials in non-alcoholic steatohepatitis . [ 10 ] In November 2013, Schrödinger, in collaboration with Cycle Computing and the University of Southern California , set a record for the world's largest and fastest cloud computing run by using 156,000 cores on Amazon Web Services to screen over 205,000 molecules for materials science research. [ 11 ] That work was a follow-up to a 2012 collaboration in which Cycle Computing created a 50,000-core virtual supercomputer using Amazon and Schrödinger's infrastructure; at that time, it was used to analyze 2.1 million compounds in 3 hours. [ 12 ]
https://en.wikipedia.org/wiki/Schrödinger,_Inc.
The Schrödinger group is the symmetry group of the free particle Schrödinger equation . Mathematically, the group SL(2,R) acts on the Heisenberg group by outer automorphisms , and the Schrödinger group is the corresponding semidirect product . The Schrödinger algebra is the Lie algebra of the Schrödinger group. It is not semi-simple . In one space dimension, it can be obtained as a semi-direct sum of the Lie algebra sl(2,R) and the Heisenberg algebra ; similar constructions apply to higher spatial dimensions. It contains a Galilei algebra with central extension, where $J_a, P_a, K_a, H$ are the generators of rotations ( angular momentum operator ), spatial translations ( momentum operator ), Galilean boosts and time translation ( Hamiltonian ), respectively; the hallmark of the central extension is the commutator $[K_a, P_b] = i\,\delta_{ab}\,M$ (in a standard convention). (Notes: $i$ is the imaginary unit, $i^2 = -1$. The specific form of the commutators of the generators of rotation $J_a$ is the one of three-dimensional space, so $a, b, c = 1, \ldots, 3$.) The central extension M has an interpretation as non-relativistic mass and corresponds to the symmetry of the Schrödinger equation under phase transformation (and to the conservation of probability). There are two more generators, which we shall denote by D and C ; their commutation relations are such that the generators H , C and D form the sl(2, R ) algebra. A more systematic notation allows one to cast these generators into the four (infinite) families $X_n, Y_m^{(j)}, M_n$ and $R_n^{(jk)} = -R_n^{(kj)}$, where n ∈ ℤ is an integer, m ∈ ℤ+1/2 is a half-integer, and j,k = 1,...,d label the spatial directions, in d spatial dimensions. The non-vanishing commutators of the Schrödinger algebra take a compact form in these families (euclidean form); in particular $[X_n, X_{n'}] = (n - n')\,X_{n+n'}$. The Schrödinger algebra is finite-dimensional and contains the generators $X_{-1,0,1},\ Y_{-1/2,1/2}^{(j)},\ M_0,\ R_0^{(jk)}$. In particular, the three generators $X_{-1} = H,\ X_0 = D,\ X_1 = C$ span the sl(2,R) sub-algebra. Space translations are generated by $Y_{-1/2}^{(j)}$ and the Galilei transformations by $Y_{1/2}^{(j)}$. In the chosen notation, one clearly sees that an infinite-dimensional extension exists, which is called the Schrödinger–Virasoro algebra . Then the generators $X_n$ with n integer span a loop-Virasoro algebra. An explicit representation as time-space transformations, with n ∈ ℤ and m ∈ ℤ+1/2, is known. [ 1 ] This shows how the central extension $M_0$ of the non-semi-simple and finite-dimensional Schrödinger algebra becomes a component of an infinite family in the Schrödinger–Virasoro algebra. In addition, and in analogy with either the Virasoro algebra or the Kac–Moody algebra , further central extensions are possible. However, a non-vanishing result only exists for the commutator $[X_n, X_{n'}]$, where it must be of the familiar Virasoro form, namely $[X_n, X_{n'}] = (n - n')\,X_{n+n'} + \frac{c}{12}\,n\,(n^2 - 1)\,\delta_{n+n',\,0}$, or for the commutator between the rotations $R_n^{(jk)}$, where it must have a Kac–Moody form. Any other possible central extension can be absorbed into the Lie algebra generators.
Though the Schrödinger group is defined as the symmetry group of the free particle Schrödinger equation , it is realized in some interacting non-relativistic systems (for example, cold atoms at criticality). The Schrödinger group in d spatial dimensions can be embedded into the relativistic conformal group in d + 1 dimensions, SO(2, d + 2) . This embedding is connected with the fact that one can get the Schrödinger equation from the massless Klein–Gordon equation through Kaluza–Klein compactification along null-like dimensions and the Bargmann lift of Newton–Cartan theory . This embedding can also be viewed as the extension of the Schrödinger algebra to the maximal parabolic sub-algebra of SO(2, d + 2) . The Schrödinger group symmetry can give rise to exotic properties in interacting bosonic and fermionic systems, such as superfluids in bosons [ 2 ] [ 3 ] and Fermi liquids and non-Fermi liquids in fermions. [ 4 ] These have applications in condensed matter and cold atoms. The Schrödinger group also arises as a dynamical symmetry in condensed-matter applications: it is the dynamical symmetry of the Edwards–Wilkinson model of kinetic interface growth. [ 5 ] It also describes the kinetics of phase-ordering, after a temperature quench from the disordered to the ordered phase, in magnetic systems.
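The central-extension commutator quoted above can be checked directly in the familiar one-dimensional quantum-mechanical representation P = −i d/dx, K = m x (taken here as an illustrative convention; normalizations may differ elsewhere). A minimal sympy sketch:

```python
import sympy as sp

x, m = sp.symbols('x m')
f = sp.Function('f')(x)

P = lambda g: -sp.I * sp.diff(g, x)   # momentum: generates spatial translations
K = lambda g: m * x * g               # Galilean boost generator at t = 0

# [K, P] f = K(P f) - P(K f); expect i*m*f, i.e. the central element M = m
comm = sp.simplify(K(P(f)) - P(K(f)))
print(comm)                           # -> I*m*f(x)
```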
https://en.wikipedia.org/wiki/Schrödinger_group
Schrödinger logics are a kind of non-classical logic in which the law of identity is restricted. These logics are motivated by the consideration that in quantum mechanics, elementary particles may be indistinguishable, even in principle, on the basis of any measurement. This in turn suggests that such particles cannot be considered as self-identical objects in the way that such things are usually treated within formal logic and set theory. [ 1 ] Schrödinger logics are many-sorted logics in which the expression x = y is not a well-formed formula in general. A formal semantics can be provided using the concept of a quasi-set. Schrödinger logics were introduced by da Costa and Krause. [ 2 ] [ 3 ] Schrödinger logic is not related to quantum logic , which is a propositional logic that rejects the distributivity laws of classical logic .
https://en.wikipedia.org/wiki/Schrödinger_logic
The Schrödinger–Newton equation , sometimes referred to as the Newton–Schrödinger or Schrödinger–Poisson equation , is a nonlinear modification of the Schrödinger equation with a Newtonian gravitational potential, where the gravitational potential emerges from the treatment of the wave function as a mass density, including a term that represents interaction of a particle with its own gravitational field. The inclusion of a self-interaction term represents a fundamental alteration of quantum mechanics. [ 1 ] It can be written either as a single integro-differential equation or as a coupled system of a Schrödinger and a Poisson equation . In the latter case it is also referred to in the plural form. The Schrödinger–Newton equation was first considered by Ruffini and Bonazzola [ 2 ] in connection with self-gravitating boson stars . In this context of classical general relativity it appears as the non-relativistic limit of either the Klein–Gordon equation or the Dirac equation in a curved space-time together with the Einstein field equations . [ 3 ] The equation also describes fuzzy dark matter and approximates classical cold dark matter described by the Vlasov–Poisson equation in the limit that the particle mass is large. [ 4 ] Later on it was proposed as a model to explain the quantum wave function collapse by Lajos Diósi [ 5 ] and Roger Penrose , [ 6 ] [ 7 ] [ 8 ] from whom the name "Schrödinger–Newton equation" originates. In this context, matter has quantum properties, while gravity remains classical even at the fundamental level. The Schrödinger–Newton equation was therefore also suggested as a way to test the necessity of quantum gravity . [ 9 ] In a third context, the Schrödinger–Newton equation appears as a Hartree approximation for the mutual gravitational interaction in a system of a large number of particles. In this context, a corresponding equation for the electromagnetic Coulomb interaction was suggested by Philippe Choquard at the 1976 Symposium on Coulomb Systems in Lausanne to describe one-component plasmas. Elliott H. Lieb provided the proof for the existence and uniqueness of a stationary ground state and referred to the equation as the Choquard equation . [ 10 ] As a coupled system, the Schrödinger–Newton equations are the usual Schrödinger equation with a self-interaction gravitational potential $i\hbar\,\frac{\partial\Psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2\Psi + V\,\Psi + m\,\Phi\,\Psi,$ where V is an ordinary potential, and the gravitational potential $\Phi$, representing the interaction of the particle with its own gravitational field, satisfies the Poisson equation $\nabla^2\Phi = 4\pi G m\,|\Psi|^2.$ Because of the back-coupling of the wave-function into the potential, it is a nonlinear system . Replacing $\Phi$ with the solution of the Poisson equation produces the integro-differential form of the Schrödinger–Newton equation: $i\hbar\,\frac{\partial\Psi}{\partial t} = \left[-\frac{\hbar^2}{2m}\,\nabla^2 + V - G m^2 \int \frac{|\Psi(t,\mathbf{y})|^2}{|\mathbf{x}-\mathbf{y}|}\,\mathrm{d}^3\mathbf{y}\right]\Psi.$ It is obtained from the above system of equations by integration of the Poisson equation under the assumption that the potential must vanish at infinity. Mathematically, the Schrödinger–Newton equation is a special case of the Hartree equation for n = 2 . The equation retains most of the properties of the linear Schrödinger equation. In particular, it is invariant under constant phase shifts, leading to conservation of probability, and exhibits full Galilei invariance . In addition to these symmetries, the simultaneous transformation $m \to \mu m, \quad t \to \mu^{-5} t, \quad \mathbf{x} \to \mu^{-3}\mathbf{x}, \quad \psi(t,\mathbf{x}) \to \mu^{9/2}\,\psi(\mu^5 t, \mu^3\mathbf{x})$ maps solutions of the Schrödinger–Newton equation to solutions. [ 11 ] [ 12 ] The stationary equation, which can be obtained in the usual manner via a separation of variables, possesses an infinite family of normalisable solutions, of which only the stationary ground state is stable. [ 13 ] [ 14 ] [ 15 ] The Schrödinger–Newton equation can be derived under the assumption that gravity remains classical, even at the fundamental level, and that the right way to couple quantum matter to gravity is by means of the semiclassical Einstein equations . In this case, a Newtonian gravitational potential term is added to the Schrödinger equation, where the source of this gravitational potential is the expectation value of the mass density operator or mass flux-current. [ 16 ] In this regard, if gravity is fundamentally classical, the Schrödinger–Newton equation is a fundamental one-particle equation, which can be generalised to the case of many particles (see below). If, on the other hand, the gravitational field is quantised, the fundamental Schrödinger equation remains linear. The Schrödinger–Newton equation is then only valid as an approximation for the gravitational interaction in systems of a large number of particles and has no effect on the centre of mass. [ 17 ] If the Schrödinger–Newton equation is considered as a fundamental equation, there is a corresponding N -body equation that was already given by Diósi [ 5 ] and can be derived from semiclassical gravity in the same way as the one-particle equation: $i\hbar\frac{\partial}{\partial t}\Psi(t,\mathbf{x}_1,\dots,\mathbf{x}_N) = \Big(-\sum_{j=1}^{N}\frac{\hbar^2}{2m_j}\nabla_j^2 + \sum_{j\neq k}V_{jk}\big(|\mathbf{x}_j-\mathbf{x}_k|\big) - G\sum_{j,k=1}^{N}m_j m_k \int \mathrm{d}^3\mathbf{y}_1\cdots\mathrm{d}^3\mathbf{y}_N\,\frac{|\Psi(t,\mathbf{y}_1,\dots,\mathbf{y}_N)|^2}{|\mathbf{x}_j-\mathbf{y}_k|}\Big)\Psi(t,\mathbf{x}_1,\dots,\mathbf{x}_N).$ The potential $V_{jk}$ contains all the mutual linear interactions, e.g.
electrodynamical Coulomb interactions, while the gravitational-potential term is based on the assumption that all particles perceive the same gravitational potential generated by all the marginal distributions for all the particles together. In a Born–Oppenheimer -like approximation, this N -particle equation can be separated into two equations, one describing the relative motion, the other providing the dynamics of the centre-of-mass wave-function. For the relative motion, the gravitational interaction does not play a role, since it is usually weak compared to the other interactions represented by V j k {\displaystyle V_{jk}} . But it has a significant influence on the centre-of-mass motion. While V j k {\displaystyle V_{jk}} only depends on relative coordinates and therefore does not contribute to the centre-of-mass dynamics at all, the nonlinear Schrödinger–Newton interaction does contribute. In the aforementioned approximation, the centre-of-mass wave-function satisfies the following nonlinear Schrödinger equation: i ℏ ∂ ψ c ( t , R ) ∂ t = ( ℏ 2 2 M ∇ 2 − G ∫ d 3 R ′ ∫ d 3 y ∫ d 3 z | ψ c ( t , R ′ ) | 2 ρ c ( y ) ρ c ( z ) | R − R ′ − y + z | ) ψ c ( t , R ) , {\displaystyle i\hbar {\frac {\partial \psi _{c}(t,\mathbf {R} )}{\partial t}}=\left({\frac {\hbar ^{2}}{2M}}\nabla ^{2}-G\int \mathrm {d} ^{3}\mathbf {R'} \,\int \mathrm {d} ^{3}\mathbf {y} \,\int \mathrm {d} ^{3}\mathbf {z} \,{\frac {|\psi _{c}(t,\mathbf {R'} )|^{2}\,\rho _{c}(\mathbf {y} )\rho _{c}(\mathbf {z} )}{\left|\mathbf {R} -\mathbf {R'} -\mathbf {y} +\mathbf {z} \right|}}\right)\psi _{c}(t,\mathbf {R} ),} where M is the total mass, R is the relative coordinate, ψ c {\displaystyle \psi _{c}} the centre-of-mass wave-function, and ρ c {\displaystyle \rho _{c}} is the mass density of the many-body system (e.g. a molecule or a rock) relative to its centre of mass. [ 18 ] In the limiting case of a wide wave-function, i.e. where the width of the centre-of-mass distribution is large compared to the size of the considered object, the centre-of-mass motion is approximated well by the Schrödinger–Newton equation for a single particle. The opposite case of a narrow wave-function can be approximated by a harmonic-oscillator potential, where the Schrödinger–Newton dynamics leads to a rotation in phase space. [ 19 ] In the context where the Schrödinger–Newton equation appears as a Hartree approximation, the situation is different. In this case the full N -particle wave-function is considered a product state of N single-particle wave-functions, where each of those factors obeys the Schrödinger–Newton equation. The dynamics of the centre-of-mass, however, remain strictly linear in this picture. This is true in general: nonlinear Hartree equations never have an influence on the centre of mass. A rough order-of-magnitude estimate of the regime where effects of the Schrödinger–Newton equation become relevant can be obtained by a rather simple reasoning. [ 9 ] For a spherically symmetric Gaussian , Ψ ( t = 0 , r ) = ( π σ 2 ) − 3 / 4 exp ⁡ ( − r 2 2 σ 2 ) , {\displaystyle \Psi (t=0,r)=(\pi \sigma ^{2})^{-3/4}\exp \left(-{\frac {r^{2}}{2\sigma ^{2}}}\right),} the free linear Schrödinger equation has the solution Ψ ( t , r ) = ( π σ 2 ) − 3 / 4 ( 1 + i ℏ t m σ 2 ) − 3 / 2 exp ⁡ ( − r 2 2 σ 2 ( 1 + i ℏ t m σ 2 ) ) . 
{\displaystyle \Psi (t,r)=(\pi \sigma ^{2})^{-3/4}\left(1+{\frac {i\hbar t}{m\sigma ^{2}}}\right)^{-3/2}\exp \left(-{\frac {r^{2}}{2\sigma ^{2}\left(1+{\frac {i\hbar t}{m\sigma ^{2}}}\right)}}\right).} The peak of the radial probability density 4 π r 2 | Ψ | 2 {\displaystyle 4\pi r^{2}|\Psi |^{2}} can be found at r p = σ 1 + ℏ 2 t 2 m 2 σ 4 . {\displaystyle r_{p}=\sigma {\sqrt {1+{\frac {\hbar ^{2}t^{2}}{m^{2}\sigma ^{4}}}}}.} Now we set the acceleration r ¨ p = ℏ 2 m 2 r p 3 {\displaystyle {\ddot {r}}_{p}={\frac {\hbar ^{2}}{m^{2}r_{p}^{3}}}} of this peak probability equal to the acceleration due to Newtonian gravity: r ¨ = − G m r 2 , {\displaystyle {\ddot {r}}=-{\frac {Gm}{r^{2}}},} using that r p = σ {\displaystyle r_{p}=\sigma } at time t = 0 {\displaystyle t=0} . This yields the relation m 3 σ = ℏ 2 G ≈ 1.7 × 10 − 58 m kg 3 , {\displaystyle m^{3}\sigma ={\frac {\hbar ^{2}}{G}}\approx 1.7\times 10^{-58}~{\text{m}}\,{\text{kg}}^{3},} which allows us to determine a critical width for a given mass value and conversely. We also recognise the scaling law mentioned above. Numerical simulations [ 12 ] [ 1 ] show that this equation gives a rather good estimate of the mass regime above which effects of the Schrödinger–Newton equation become significant. For an atom the critical width is around 10 22 metres, while it is already down to 10 −31 metres for a mass of one microgram. The regime where the mass is around 10 10 atomic mass units while the width is of the order of micrometers is expected to allow an experimental test of the Schrödinger–Newton equation in the future. A possible candidate are interferometry experiments with heavy molecules, which currently reach masses up to 10 000 atomic mass units. The idea that gravity causes (or somehow influences) the wavefunction collapse dates back to the 1960s and was originally proposed by Károlyházy . [ 20 ] The Schrödinger–Newton equation was proposed in this context by Diósi. [ 5 ] There the equation provides an estimation for the "line of demarcation" between microscopic (quantum) and macroscopic (classical) objects. The stationary ground state has a width of a 0 ≈ ℏ 2 G m 3 . {\displaystyle a_{0}\approx {\frac {\hbar ^{2}}{Gm^{3}}}.} For a well-localised homogeneous sphere, i.e. a sphere with a centre-of-mass wave-function that is narrow compared to the radius of the sphere, Diósi finds as an estimate for the width of the ground-state centre-of-mass wave-function a 0 ( R ) ≈ a 0 1 / 4 R 3 / 4 . {\displaystyle a_{0}^{(R)}\approx a_{0}^{1/4}R^{3/4}.} Assuming a usual density around 1000 kg/m 3 , a critical radius can be calculated for which a 0 ( R ) ≈ R {\displaystyle a_{0}^{(R)}\approx R} . This critical radius is around a tenth of a micrometer. Roger Penrose proposed that the Schrödinger–Newton equation mathematically describes the basis states involved in a gravitationally induced wavefunction collapse scheme. [ 6 ] [ 7 ] [ 8 ] Penrose suggests that a superposition of two or more quantum states having a significant amount of mass displacement ought to be unstable and reduce to one of the states within a finite time. He hypothesises that there exists a "preferred" set of states that could collapse no further, specifically, the stationary states of the Schrödinger–Newton equation. A macroscopic system can therefore never be in a spatial superposition, since the nonlinear gravitational self-interaction immediately leads to a collapse to a stationary state of the Schrödinger–Newton equation. 
According to Penrose's idea, when a quantum particle is measured, there is an interplay of this nonlinear collapse and environmental decoherence . The gravitational interaction leads to the reduction of the environment to one distinct state, and decoherence leads to the localisation of the particle, e.g. as a dot on a screen. Three major problems occur with this interpretation of the Schrödinger–Newton equation as the cause of the wave-function collapse: First, numerical studies [ 12 ] [ 15 ] [ 1 ] consistently find that when a wave packet "collapses" to a stationary solution, a small portion of it seems to run away to infinity. This would mean that even a completely collapsed quantum system can still be found at a distant location. Since the solutions of the linear Schrödinger equation tend towards infinity even faster, this only indicates that the Schrödinger–Newton equation alone is not sufficient to explain the wave-function collapse. If the environment is taken into account, this effect might disappear and therefore not be present in the scenario described by Penrose. A second problem, also arising in Penrose's proposal, is the origin of the Born rule : To solve the measurement problem , a mere explanation of why a wave-function collapses – e.g., to a dot on a screen – is not enough. A good model for the collapse process also has to explain why the dot appears on different positions of the screen, with probabilities that are determined by the squared absolute value of the wave-function. It might be possible that a model based on Penrose's idea could provide such an explanation, but there is as yet no known reason why Born's rule would naturally arise from it. Thirdly, since the gravitational potential is linked to the wave-function in the picture of the Schrödinger–Newton equation, the wave-function must be interpreted as a real object. Therefore, at least in principle, it becomes a measurable quantity. Making use of the nonlocal nature of entangled quantum systems, this could be used to send signals faster than light, which is generally thought to be in contradiction with causality. It is, however, not clear whether this problem can be resolved by applying the right collapse prescription, yet to be found, consistently to the full quantum system. Also, since gravity is such a weak interaction, it is not clear that such an experiment could actually be performed within the parameters given in our universe (see the referenced discussion [ 21 ] about a similar thought experiment proposed by Eppley & Hannah [ 22 ] ).
https://en.wikipedia.org/wiki/Schrödinger–Newton_equation
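The order-of-magnitude estimate above, m^3 σ = ħ^2/G, is easy to check numerically. A minimal sketch in Python, assuming standard values for ħ, G and the atomic mass unit (the function name is ours); it reproduces the figures quoted in the text for a single atom and for a microgram mass:

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newtonian gravitational constant, m^3 kg^-1 s^-2
u = 1.66053907e-27       # atomic mass unit, kg

const = hbar**2 / G      # ~1.7e-58 m kg^3

def critical_width(m):
    """Width sigma (metres) at which gravitational self-attraction balances
    the free quantum spreading of a Gaussian wave packet of mass m (kg)."""
    return const / m**3

print(f"hbar^2/G       = {const:.2e} m kg^3")                    # ~1.7e-58
print(f"one atom (1 u) : sigma ~ {critical_width(u):.1e} m")     # ~1e22 m
print(f"one microgram  : sigma ~ {critical_width(1e-9):.1e} m")  # ~1e-31 m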
In mathematics, Schubert polynomials are generalizations of Schur polynomials that represent cohomology classes of Schubert cycles in flag varieties . They were introduced by Lascoux & Schützenberger (1982) and are named after Hermann Schubert . Lascoux (1995) described the history of Schubert polynomials. The Schubert polynomials S w {\displaystyle {\mathfrak {S}}_{w}} are polynomials in the variables x 1 , x 2 , … {\displaystyle x_{1},x_{2},\ldots } depending on an element w {\displaystyle w} of the infinite symmetric group S ∞ {\displaystyle S_{\infty }} of all permutations of N {\displaystyle \mathbb {N} } fixing all but a finite number of elements. They form a basis for the polynomial ring Z [ x 1 , x 2 , … ] {\displaystyle \mathbb {Z} [x_{1},x_{2},\ldots ]} in infinitely many variables. The cohomology of the flag manifold Fl ( m ) {\displaystyle {\text{Fl}}(m)} is Z [ x 1 , x 2 , … , x m ] / I , {\displaystyle \mathbb {Z} [x_{1},x_{2},\ldots ,x_{m}]/I,} where I {\displaystyle I} is the ideal generated by homogeneous symmetric functions of positive degree. The Schubert polynomial S w {\displaystyle {\mathfrak {S}}_{w}} is the unique homogeneous polynomial of degree ℓ ( w ) {\displaystyle \ell (w)} representing the Schubert cycle of w {\displaystyle w} in the cohomology of the flag manifold Fl ( m ) {\displaystyle {\text{Fl}}(m)} for all sufficiently large m . {\displaystyle m.} [ citation needed ] Schubert polynomials can be calculated recursively from these two properties. In particular, this implies that S w = ∂ w − 1 w 0 x 1 n − 1 x 2 n − 2 ⋯ x n − 1 1 {\displaystyle {\mathfrak {S}}_{w}=\partial _{w^{-1}w_{0}}x_{1}^{n-1}x_{2}^{n-2}\cdots x_{n-1}^{1}} . Since the Schubert polynomials form a Z {\displaystyle \mathbb {Z} } -basis, there are unique coefficients c β γ α {\displaystyle c_{\beta \gamma }^{\alpha }} such that S β S γ = ∑ α c β γ α S α . {\displaystyle {\mathfrak {S}}_{\beta }{\mathfrak {S}}_{\gamma }=\sum _{\alpha }c_{\beta \gamma }^{\alpha }{\mathfrak {S}}_{\alpha }.} These can be seen as a generalization of the Littlewood−Richardson coefficients described by the Littlewood–Richardson rule . For algebro-geometric reasons ( Kleiman's transversality theorem of 1974 ), these coefficients are non-negative integers and it is an outstanding problem in representation theory and combinatorics to give a combinatorial rule for these numbers. Double Schubert polynomials S w ( x 1 , x 2 , … , y 1 , y 2 , … ) {\displaystyle {\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )} are polynomials in two infinite sets of variables, parameterized by an element w of the infinite symmetric group, that become the usual Schubert polynomials when all the variables y i {\displaystyle y_{i}} are 0 {\displaystyle 0} . The double Schubert polynomials S w ( x 1 , x 2 , … , y 1 , y 2 , … ) {\displaystyle {\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )} are characterized by properties analogous to the ones above, and also admit a direct (non-recursive) definition. Fomin, Gelfand & Postnikov (1997) introduced quantum Schubert polynomials, which have the same relation to the (small) quantum cohomology of flag manifolds that ordinary Schubert polynomials have to the ordinary cohomology. Fulton (1999) introduced universal Schubert polynomials, which generalize classical and quantum Schubert polynomials. He also described universal double Schubert polynomials generalizing double Schubert polynomials.
https://en.wikipedia.org/wiki/Schubert_polynomial
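The recursive computation from the divided difference operators is short enough to script. A minimal sketch in Python with sympy, assuming the formula S_w = ∂_{w^{-1}w_0} x_1^{n-1} x_2^{n-2} ⋯ x_{n-1} quoted above; permutations are written in one-line notation, and the function names are ours:

import sympy as sp

def divided_difference(f, i, x):
    """d_i f = (f - s_i f) / (x_i - x_{i+1}), with s_i swapping x_i, x_{i+1}
    (i is a 0-based position)."""
    t = sp.Symbol('_t')
    swapped = f.subs({x[i]: t, x[i + 1]: x[i]}).subs(t, x[i + 1])
    return sp.cancel((f - swapped) / (x[i] - x[i + 1]))

def schubert(w):
    """Schubert polynomial S_w, via d_{w^{-1}w0} applied to the staircase
    monomial x1^(n-1) x2^(n-2) ... x_{n-1}."""
    n = len(w)
    x = sp.symbols(f'x1:{n + 1}')
    w0 = tuple(range(n, 0, -1))
    winv = tuple(w.index(k) + 1 for k in range(1, n + 1))
    u = [winv[w0[i] - 1] for i in range(n)]        # u = w^{-1} w0
    word = []                                      # reduced word for u
    while any(u[i] > u[i + 1] for i in range(n - 1)):
        i = next(i for i in range(n - 1) if u[i] > u[i + 1])
        u[i], u[i + 1] = u[i + 1], u[i]            # peel off a descent
        word.insert(0, i)
    f = sp.Mul(*[x[i]**(n - 1 - i) for i in range(n)])
    for i in reversed(word):                       # apply rightmost d_i first
        f = divided_difference(f, i, x)
    return sp.expand(f)

print(schubert((2, 1, 3)))   # x1
print(schubert((1, 3, 2)))   # x1 + x2
print(schubert((3, 2, 1)))   # x1**2*x2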
In mathematics , the Schuette–Nesbitt formula is a generalization of the inclusion–exclusion principle . It is named after Donald R. Schuette and Cecil J. Nesbitt . The probabilistic version of the Schuette–Nesbitt formula has practical applications in actuarial science , where it is used to calculate the net single premium for life annuities and life insurances based on the general symmetric status. Consider a set Ω and subsets A 1 , ..., A m . Let N = 1 A 1 + ⋯ + 1 A m ( 1 ) denote the number of subsets to which ω ∈ Ω belongs, where we use the indicator functions of the sets A 1 , ..., A m . Furthermore, for each k ∈ {0, 1, ..., m } , let N k = ∑ J ⊆ {1, ..., m }, | J | = k 1 ∩ j ∈ J A j ( 2 ) denote the number of intersections of exactly k sets out of A 1 , ..., A m , to which ω belongs, where the intersection over the empty index set is defined as Ω , hence N 0 = 1 Ω . Let V denote a vector space over a field R such as the real or complex numbers (or more generally a module over a ring R with multiplicative identity ). Then, for every choice of c 0 , ..., c m ∈ V , ∑ n = 0 m c n 1 { N = n } = ∑ k = 0 m ( ∑ l = 0 k ( − 1 ) k − l ( k l ) c l ) N k , ( 3 ) where 1 { N = n } denotes the indicator function of the set of all ω ∈ Ω with N ( ω ) = n , and ( k l ) {\displaystyle \textstyle {\binom {k}{l}}} is a binomial coefficient . Equality ( 3 ) says that the two V -valued functions defined on Ω are the same. We prove that ( 3 ) holds pointwise. Take ω ∈ Ω and define n = N ( ω ) . Then the left-hand side of ( 3 ) equals c n . Let I denote the set of all those indices i ∈ {1, ..., m } such that ω ∈ A i , hence I contains exactly n indices. Given J ⊂ {1, ..., m } with k elements, then ω belongs to the intersection ∩ j ∈ J A j if and only if J is a subset of I . By the combinatorial interpretation of the binomial coefficient , there are N k = ( n k ) {\displaystyle \textstyle {\binom {n}{k}}} such subsets (the binomial coefficient is zero for k > n ). Therefore the right-hand side of ( 3 ) evaluated at ω equals ∑ k = 0 m ( ∑ l = 0 k ( − 1 ) k − l ( k l ) c l ) ( n k ) = ∑ l = 0 m c l ( ∑ k = l n ( − 1 ) k − l ( k l ) ( n k ) ) , (*) where we used that the first binomial coefficient is zero for k > n . Note that the inner sum (*) is empty and therefore defined as zero for n < l . Using the factorial formula for the binomial coefficients, it follows that ( k l ) ( n k ) = n ! l ! ( k − l ) ! ( n − k ) ! = ( n l ) ( n − l k − l ) . (**) Rewriting (**) with the summation index j = k − l and using the binomial formula for the third equality shows that ∑ k = l n ( − 1 ) k − l ( k l ) ( n k ) = ∑ j = 0 n − l ( − 1 ) j ( j + l l ) ( n j + l ) = ( n l ) ∑ j = 0 n − l ( − 1 ) j ( n − l j ) = ( n l ) ( 1 − 1 ) n − l = ( n l ) δ l n , where δ l n is the Kronecker delta . Substituting this result into the above formula and noting that n choose l equals 1 for l = n , it follows that the right-hand side of ( 3 ) evaluated at ω also reduces to c n . As a special case, take for V the polynomial ring R [ x ] with the indeterminate x . Then ( 3 ) can be rewritten in a more compact way as ∑ n = 0 m 1 { N = n } x n = ∑ k = 0 m N k ( x − 1 ) k . ( 4 ) This is an identity for two polynomials whose coefficients depend on ω , which is implicit in the notation. Proof of ( 4 ) using ( 3 ): Substituting c n = x n for n ∈ {0, ..., m } into ( 3 ) and using the binomial formula shows that ∑ n = 0 m 1 { N = n } x n = ∑ k = 0 m ( ∑ l = 0 k ( − 1 ) k − l ( k l ) x l ) N k = ∑ k = 0 m ( x − 1 ) k N k , which proves ( 4 ). Consider the linear shift operator E and the linear difference operator Δ , which we define here on the sequence space of V by E ( c 0 , c 1 , c 2 , ... ) = ( c 1 , c 2 , c 3 , ... ) and Δ ( c 0 , c 1 , c 2 , ... ) = ( c 1 − c 0 , c 2 − c 1 , c 3 − c 2 , ... ) . Substituting x = E in ( 4 ) shows that ∑ n = 0 m 1 { N = n } E n = ∑ k = 0 m N k Δ k , ( 5 ) where we used that Δ = E – I with I denoting the identity operator . Note that E 0 and Δ 0 equal the identity operator I on the sequence space, E k and Δ k denote the k -fold composition . To prove ( 5 ), we first want to verify the equation ∑ n = 0 m 1 { N = n } E n = ∏ j = 1 m ( 1 A j c I + 1 A j E ) , ( ✳ ) involving indicator functions of the sets A 1 , ..., A m and their complements with respect to Ω . Suppose an ω from Ω belongs to exactly k sets out of A 1 , ..., A m , where k ∈ {0, ..., m } , for simplicity of notation say that ω only belongs to A 1 , ..., A k . Then the left-hand side of ( ✳ ) is E k .
On the right-hand side of ( ✳ ), the first k factors equal E , the remaining ones equal I , their product is also E k , hence the formula ( ✳ ) is true. Note that 1 A j c I + 1 A j E = I + 1 A j ( E − I ) = I + 1 A j Δ . Inserting this result into equation ( ✳ ) and expanding the product gives ∑ n = 0 m 1 { N = n } E n = ∑ J ⊆ {1, ..., m } 1 ∩ j ∈ J A j Δ | J | , because the product of indicator functions is the indicator function of the intersection. Using the definition ( 2 ), the result ( 5 ) follows. Let (Δ k c ) 0 denote the 0th component of the k -fold composition Δ k applied to c = ( c 0 , c 1 , ..., c m , ...) , where Δ 0 denotes the identity. Then ( 3 ) can be rewritten in a more compact way as ∑ n = 0 m c n 1 { N = n } = ∑ k = 0 m N k ( Δ k c ) 0 . ( 6 ) Consider arbitrary events A 1 , ..., A m in a probability space (Ω, F , P {\displaystyle \mathbb {P} } ) and let E denote the expectation operator . Then N from ( 1 ) is the random number of these events which occur simultaneously. Using N k from ( 2 ), define S k := E [ N k ] = ∑ J ⊆ {1, ..., m }, | J | = k P ( ∩ j ∈ J A j ) , k ∈ {0, ..., m } , ( 7 ) where the intersection over the empty index set is again defined as Ω , hence S 0 = 1 . If the ring R is also an algebra over the real or complex numbers, then taking the expectation of the coefficients in ( 4 ) and using the notation from ( 7 ), ∑ n = 0 m P ( N = n ) x n = ∑ k = 0 m S k ( x − 1 ) k ( 4' ) in R [ x ] . If R is the field of real numbers, then this is the probability-generating function of the probability distribution of N . Similarly, ( 5 ) and ( 6 ) yield ∑ n = 0 m P ( N = n ) E n = ∑ k = 0 m S k Δ k ( 5' ) and, for every sequence c = ( c 0 , c 1 , c 2 , c 3 , ..., c m , ...) , E [ c N ] = ∑ k = 0 m S k ( Δ k c ) 0 . ( 6' ) The quantity on the left-hand side of ( 6' ) is the expected value of c N . For textbook presentations of the probabilistic Schuette–Nesbitt formula ( 6' ) and their applications to actuarial science, cf. Gerber (1997) , Chapter 8, or Bowers et al. (1997) , Chapter 18 and the Appendix, pp. 577–578. For independent events, the formula ( 6' ) appeared in a discussion of Robert P. White and T.N.E. Greville's paper by Donald R. Schuette and Cecil J. Nesbitt , see Schuette & Nesbitt (1959) . In the two-page note Gerber (1979) , Hans U. Gerber called it the Schuette–Nesbitt formula and generalized it to arbitrary events. Christian Buchta, see Buchta (1994) , noticed the combinatorial nature of the formula and published the elementary combinatorial proof of ( 3 ). Cecil J. Nesbitt, PhD , F.S.A. , M.A.A.A., received his mathematical education at the University of Toronto and the Institute for Advanced Study in Princeton . He taught actuarial mathematics at the University of Michigan from 1938 to 1980. He served the Society of Actuaries from 1985 to 1987 as Vice-President for Research and Studies. Professor Nesbitt died in 2001. (Short CV taken from Bowers et al. (1997) , page xv.) Donald Richard Schuette was a PhD student of C. Nesbitt; he later became a professor at the University of Wisconsin–Madison . The probabilistic version of the Schuette–Nesbitt formula ( 6' ) generalizes much older formulae of Waring , which express the probability of the events { N = n } and { N ≥ n } in terms of S 1 , S 2 , ..., S m . More precisely, with ( k n ) {\displaystyle \textstyle {\binom {k}{n}}} denoting the binomial coefficient , P ( N = n ) = ∑ k = n m ( − 1 ) k − n ( k n ) S k , n ∈ {0, ..., m } , ( 8 ) and P ( N ≥ n ) = ∑ k = n m ( − 1 ) k − n ( k − 1 n − 1 ) S k , n ∈ {1, ..., m } , ( 9 ) see Feller (1968) , Sections IV.3 and IV.5, respectively. To see that these formulae are special cases of the probabilistic version of the Schuette–Nesbitt formula, note that by the binomial theorem Δ k = ( E − I ) k = ∑ j = 0 k ( − 1 ) k − j ( k j ) E j . Applying this operator identity to the sequence c = (0, ..., 0, 1, 0, 0, ...) with n leading zeros and noting that ( E j c ) 0 = 1 if j = n and ( E j c ) 0 = 0 otherwise, the formula ( 8 ) for { N = n } follows from ( 6' ). Applying the identity to c = (0, ..., 0, 1, 1, 1, ...)
with n leading zeros and noting that ( E j c ) 0 = 1 if j ≥ n and ( E j c ) 0 = 0 otherwise, equation ( 6' ) implies that Expanding (1 – 1) k using the binomial theorem and using equation (11) of the formulas involving binomial coefficients , we obtain Hence, we have the formula ( 9 ) for { N ≥ n } . Problem: Suppose there are m persons aged x 1 , ..., x m with remaining random (but independent) lifetimes T 1 , ..., T m . Suppose the group signs a life insurance contract which pays them after t years the amount c n if exactly n persons out of m are still alive after t years. How high is the expected payout of this insurance contract in t years? Solution: Let A j denote the event that person j survives t years, which means that A j = { T j > t } . In actuarial notation the probability of this event is denoted by t p x j and can be taken from a life table . Use independence to calculate the probability of intersections. Calculate S 1 , ..., S m and use the probabilistic version of the Schuette–Nesbitt formula ( 6' ) to calculate the expected value of c N . Let σ be a random permutation of the set {1, ..., m } and let A j denote the event that j is a fixed point of σ , meaning that A j = { σ ( j ) = j } . When the numbers in J , which is a subset of {1, ..., m } , are fixed points, then there are ( m – | J |)! ways to permute the remaining m – | J | numbers, hence By the combinatorical interpretation of the binomial coefficient , there are ( m k ) {\displaystyle \textstyle {\binom {m}{k}}} different choices of a subset J of {1, ..., m } with k elements, hence ( 7 ) simplifies to Therefore, using ( 4' ), the probability-generating function of the number N of fixed points is given by This is the partial sum of the infinite series giving the exponential function at x – 1 , which in turn is the probability-generating function of the Poisson distribution with parameter 1 . Therefore, as m tends to infinity , the distribution of N converges to the Poisson distribution with parameter 1 .
https://en.wikipedia.org/wiki/Schuette–Nesbitt_formula
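The identity ( 3 ) and its probabilistic form ( 6' ) can be sanity-checked by brute force on a small finite probability space. A minimal sketch in Python; the three events and the weights c_n are arbitrary choices made for this illustration:

from itertools import combinations
from math import comb

Omega = range(8)                                  # uniform space of 8 points
A = [{0, 1, 2, 3}, {1, 3, 5, 7}, {2, 3, 6, 7}]    # three events
m = len(A)
c = [1.0, -2.0, 0.5, 4.0]                         # weights c_0, ..., c_m

# Left-hand side of (6'): E[c_N], with N(w) = number of events containing w.
lhs = sum(c[sum(w in Ai for Ai in A)] for w in Omega) / len(Omega)

def S(k):
    """S_k: sum over k-subsets J of P(intersection of A_j for j in J)."""
    if k == 0:
        return 1.0
    return sum(len(set.intersection(*(A[j] for j in J))) / len(Omega)
               for J in combinations(range(m), k))

def delta0(k):
    """(Delta^k c)_0 = sum_l (-1)^(k-l) C(k,l) c_l, the k-th difference."""
    return sum((-1) ** (k - l) * comb(k, l) * c[l] for l in range(k + 1))

rhs = sum(delta0(k) * S(k) for k in range(m + 1))
print(lhs, rhs)          # both sides agree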
The Schumann–Runge bands are a set of absorption bands of molecular oxygen that occur at wavelengths between 176 and 192.6 nanometres. [ 1 ] [ 2 ] The bands are named for Victor Schumann and Carl Runge .
https://en.wikipedia.org/wiki/Schumann–Runge_bands
In mathematics , Schur's inequality , named after Issai Schur , establishes that for all non-negative real numbers x , y , z and all t > 0 , x t ( x − y ) ( x − z ) + y t ( y − z ) ( y − x ) + z t ( z − x ) ( z − y ) ≥ 0 , with equality if and only if x = y = z or two of them are equal and the other is zero. When t is an even positive integer , the inequality holds for all real numbers x , y and z . When t = 1 {\displaystyle t=1} , the following well-known special case can be derived: x 3 + y 3 + z 3 + 3 x y z ≥ x y ( x + y ) + y z ( y + z ) + z x ( z + x ) . Since the inequality is symmetric in x , y , z {\displaystyle x,y,z} we may assume without loss of generality that x ≥ y ≥ z {\displaystyle x\geq y\geq z} . Then the inequality, rewritten as ( x − y ) ( x t ( x − z ) − y t ( y − z ) ) + z t ( x − z ) ( y − z ) ≥ 0 , clearly holds, since every term on the left-hand side is non-negative. This rearranges to Schur's inequality. A generalization of Schur's inequality is the following: Suppose a , b , c are positive real numbers. If the triples ( a , b , c ) and ( x , y , z ) are similarly sorted , then the following inequality holds: a ( x − y ) ( x − z ) + b ( y − x ) ( y − z ) + c ( z − x ) ( z − y ) ≥ 0. In 2007, the Romanian mathematician Valentin Vornicu showed that a yet further generalized form of Schur's inequality holds: Consider a , b , c , x , y , z ∈ R {\displaystyle a,b,c,x,y,z\in \mathbb {R} } , where a ≥ b ≥ c {\displaystyle a\geq b\geq c} , and either x ≥ y ≥ z {\displaystyle x\geq y\geq z} or z ≥ y ≥ x {\displaystyle z\geq y\geq x} . Let k ∈ Z + {\displaystyle k\in \mathbb {Z} ^{+}} , and let f : R → R 0 + {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} _{0}^{+}} be either convex or monotonic . Then f ( x ) ( a − b ) k ( a − c ) k + f ( y ) ( b − a ) k ( b − c ) k + f ( z ) ( c − a ) k ( c − b ) k ≥ 0. The standard form of Schur's inequality is the case of this inequality where x = a , y = b , z = c , k = 1, ƒ ( m ) = m r . [ 1 ] Another possible extension states that if the non-negative real numbers x ≥ y ≥ z ≥ v {\displaystyle x\geq y\geq z\geq v} and the positive real number t are such that x + v ≥ y + z , then [ 2 ] x t ( x − y ) ( x − z ) ( x − v ) + y t ( y − x ) ( y − z ) ( y − v ) + z t ( z − x ) ( z − y ) ( z − v ) + v t ( v − x ) ( v − y ) ( v − z ) ≥ 0.
https://en.wikipedia.org/wiki/Schur's_inequality
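Both the general inequality and the t = 1 special case are easy to spot-check numerically. A minimal sketch in Python over random non-negative triples (the tolerance guards against floating-point rounding):

import random

def schur_lhs(x, y, z, t):
    """Left-hand side of Schur's inequality."""
    return (x**t * (x - y) * (x - z)
            + y**t * (y - z) * (y - x)
            + z**t * (z - x) * (z - y))

random.seed(0)
for _ in range(100_000):
    x, y, z = (random.uniform(0, 10) for _ in range(3))
    t = random.uniform(0.1, 5.0)
    assert schur_lhs(x, y, z, t) >= -1e-9

# The t = 1 special case in its rearranged form:
x, y, z = 2.0, 3.0, 5.0
assert (x**3 + y**3 + z**3 + 3*x*y*z
        >= x*y*(x + y) + y*z*(y + z) + z*x*(z + x) - 1e-9)
print("all checks passed")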
In mathematics , Schur's lemma [ 1 ] is an elementary but extremely useful statement in representation theory of groups and algebras . In the group case it says that if M and N are two finite-dimensional irreducible representations of a group G and φ is a linear map from M to N that commutes with the action of the group, then either φ is invertible , or φ = 0. An important special case occurs when M = N , i.e. φ is a self-map; in particular, any element of the center of a group must act as a scalar operator (a scalar multiple of the identity ) on M . The lemma is named after Issai Schur who used it to prove the Schur orthogonality relations and develop the basics of the representation theory of finite groups . Schur's lemma admits generalisations to Lie groups and Lie algebras , the most common of which are due to Jacques Dixmier and Daniel Quillen . Representation theory is the study of homomorphisms from a group, G , into the general linear group GL ( V ) of a vector space V ; i.e., into the group of automorphisms of V . (Let us here restrict ourselves to the case when the underlying field of V is C {\displaystyle \mathbb {C} } , the field of complex numbers .) Such a homomorphism is called a representation of G on V . A representation on V is a special case of a group action on V , but rather than permit any arbitrary bijections ( permutations ) of the underlying set of V , we restrict ourselves to invertible linear transformations. Let ρ be a representation of G on V . It may be the case that V has a subspace , W , such that for every element g of G , the invertible linear map ρ ( g ) preserves or fixes W , so that ( ρ ( g ))( w ) is in W for every w in W , and ( ρ ( g ))( v ) is not in W for any v not in W . In other words, every linear map ρ ( g ): V → V is also an automorphism of W , ρ ( g ): W → W , when its domain is restricted to W . We say W is stable under G , or stable under the action of G . It is clear that if we consider W on its own as a vector space, then there is an obvious representation of G on W —the representation we get by restricting each map ρ ( g ) to W . When W has this property, we call W with the given representation a subrepresentation of V . Every representation of G has itself and the zero vector space as trivial subrepresentations. A representation of G with no non-trivial subrepresentations is called an irreducible representation . Irreducible representations – like the prime numbers , or like the simple groups in group theory – are the building blocks of representation theory. Many of the initial questions and theorems of representation theory deal with the properties of irreducible representations. Just as we are interested in homomorphisms between groups, and in continuous maps between topological spaces , we are also interested in certain functions between representations of G . Let V and W be vector spaces, and let ρ V {\displaystyle \rho _{V}} and ρ W {\displaystyle \rho _{W}} be representations of G on V and W respectively. Then we define a G -linear map f from V to W to be a linear map from V to W that is equivariant under the action of G ; that is, for every g in G , ρ W ( g ) ∘ f = f ∘ ρ V ( g ) {\displaystyle \rho _{W}(g)\circ f=f\circ \rho _{V}(g)} . In other words, we require that f commutes with the action of G . G -linear maps are the morphisms in the category of representations of G . Schur's Lemma is a theorem that describes what G -linear maps can exist between two irreducible representations of G . 
Theorem (Schur's Lemma) : Let V and W be vector spaces; and let ρ V {\displaystyle \rho _{V}} and ρ W {\displaystyle \rho _{W}} be irreducible representations of G on V and W respectively. Then: (1) any G -linear map f : V → W is either zero or an isomorphism; in particular, if V and W are not isomorphic, the only G -linear map between them is the zero map. (2) If V = W is finite-dimensional over an algebraically closed field and ρ V = ρ W {\displaystyle \rho _{V}=\rho _{W}} , then every G -linear map f : V → V is a scalar multiple of the identity. [ 2 ] Proof: Suppose f {\displaystyle f} is a nonzero G -linear map from V {\displaystyle V} to W {\displaystyle W} . We will prove that V {\displaystyle V} and W {\displaystyle W} are isomorphic. Let V ′ {\displaystyle V'} be the kernel , or null space, of f {\displaystyle f} in V {\displaystyle V} , described as V ′ = { x ∈ V | f ( x ) = 0 } {\displaystyle V'=\{x\in V|f(x)=0\}} . V ′ {\displaystyle V'} is a subspace of V {\displaystyle V} as it is nonempty (contains the zero vector) and is closed under addition and scalar multiplication. By the assumption that f {\displaystyle f} is G -linear , for every g {\displaystyle g} in G {\displaystyle G} and choice of x {\displaystyle x} in V ′ {\displaystyle V'} , f ( ( ρ V ( g ) ) ( x ) ) = ( ρ W ( g ) ) ( f ( x ) ) = ( ρ W ( g ) ) ( 0 ) = 0 {\displaystyle f((\rho _{V}(g))(x))=(\rho _{W}(g))(f(x))=(\rho _{W}(g))(0)=0} . But saying that f ( ρ V ( g ) ( x ) ) = 0 {\displaystyle f(\rho _{V}(g)(x))=0} is the same as saying that ρ V ( g ) ( x ) {\displaystyle \rho _{V}(g)(x)} is in the null space of f : V → W {\displaystyle f:V\rightarrow W} . So V ′ {\displaystyle V'} is stable under the action of G ; it is a subrepresentation. Since by assumption V {\displaystyle V} is irreducible, V ′ {\displaystyle V'} must be zero; so f {\displaystyle f} is injective . By a similar argument we will show f {\displaystyle f} is also surjective ; since f ( ( ρ V ( g ) ) ( x ) ) = ( ρ W ( g ) ) ( f ( x ) ) {\displaystyle f((\rho _{V}(g))(x))=(\rho _{W}(g))(f(x))} , we can conclude that for arbitrary choice of f ( x ) {\displaystyle f(x)} in the image of f {\displaystyle f} , ρ W ( g ) {\displaystyle \rho _{W}(g)} sends f ( x ) {\displaystyle f(x)} to another element of the image of f {\displaystyle f} ; namely, to the image of ρ V ( g ) x {\displaystyle \rho _{V}(g)x} . So the image of f {\displaystyle f} is a subspace W ′ {\displaystyle W'} of W {\displaystyle W} stable under the action of G {\displaystyle G} , so it is a subrepresentation and f {\displaystyle f} must be zero or surjective. By assumption it is not zero, so it is surjective, in which case it is an isomorphism. In the event that V = W {\displaystyle V=W} is finite-dimensional over an algebraically closed field and ρ V = ρ W {\displaystyle \rho _{V}=\rho _{W}} , let λ {\displaystyle \lambda } be an eigenvalue of f {\displaystyle f} . (An eigenvalue exists for every linear transformation on a finite-dimensional vector space over an algebraically closed field.) Let f ′ = f − λ I {\displaystyle f'=f-\lambda I} . Then if x {\displaystyle x} is an eigenvector of f {\displaystyle f} corresponding to λ {\displaystyle \lambda } , then f ′ ( x ) = 0 {\displaystyle f'(x)=0} . It is clear that f ′ {\displaystyle f'} is a G -linear map, because the sum or difference of G -linear maps is also G -linear . Then we return to the above argument, where we used the fact that a map was G -linear to conclude that the kernel is a subrepresentation, and is thus either zero or equal to all of V {\displaystyle V} ; because it is not zero (it contains x {\displaystyle x} ) it must be all of V and so f ′ {\displaystyle f'} is trivial, so f = λ I {\displaystyle f=\lambda I} .
An important corollary of Schur's lemma follows from the observation that we can often build explicitly G {\displaystyle G} -linear maps between representations by "averaging" over the action of individual group elements on some fixed linear operator. In particular, given any irreducible representation, such objects will satisfy the assumptions of Schur's lemma, hence be scalar multiples of the identity. More precisely: Corollary : Using the same notation from the previous theorem, let h {\displaystyle h} be a linear mapping of V into W , and set h 0 = 1 | G | ∑ g ∈ G ( ρ W ( g ) ) − 1 h ρ V ( g ) . {\displaystyle h_{0}={\frac {1}{|G|}}\sum _{g\in G}(\rho _{W}(g))^{-1}h\rho _{V}(g).} Then: (1) if V and W are not isomorphic, then h 0 = 0 ; and (2) if V = W is finite-dimensional over an algebraically closed field with ρ V = ρ W , then h 0 is a scalar multiple of the identity. Proof: Let us first show that h 0 {\displaystyle h_{0}} is a G-linear map, i.e., ρ W ( g ) ∘ h 0 = h 0 ∘ ρ V ( g ) {\displaystyle \rho _{W}(g)\circ h_{0}=h_{0}\circ \rho _{V}(g)} for all g ∈ G {\displaystyle g\in G} . Indeed, consider that ( ρ W ( g ′ ) ) − 1 h 0 ρ V ( g ′ ) = 1 | G | ∑ g ∈ G ( ρ W ( g ′ ) ) − 1 ( ρ W ( g ) ) − 1 h ρ V ( g ) ρ V ( g ′ ) = 1 | G | ∑ g ∈ G ( ρ W ( g ∘ g ′ ) ) − 1 h ρ V ( g ∘ g ′ ) = h 0 {\displaystyle {\begin{aligned}(\rho _{W}(g'))^{-1}h_{0}\rho _{V}(g')&={\frac {1}{|G|}}\sum _{g\in G}(\rho _{W}(g'))^{-1}(\rho _{W}(g))^{-1}h\rho _{V}(g)\rho _{V}(g')\\&={\frac {1}{|G|}}\sum _{g\in G}(\rho _{W}(g\circ g'))^{-1}h\rho _{V}(g\circ g')\\&=h_{0}\end{aligned}}} Now applying the previous theorem, for case 1, it follows that h 0 = 0 {\displaystyle h_{0}=0} , and for case 2, it follows that h 0 {\displaystyle h_{0}} is a scalar multiple of the identity matrix (i.e., h 0 = μ I {\displaystyle h_{0}=\mu I} ). To determine the scalar multiple μ {\displaystyle \mu } , consider that T r [ h 0 ] = 1 | G | ∑ g ∈ G T r [ ( ρ V ( g ) ) − 1 h ρ V ( g ) ] = T r [ h ] {\displaystyle \mathrm {Tr} [h_{0}]={\frac {1}{|G|}}\sum _{g\in G}\mathrm {Tr} [(\rho _{V}(g))^{-1}h\rho _{V}(g)]=\mathrm {Tr} [h]} It then follows that μ = T r [ h ] / n {\displaystyle \mu =\mathrm {Tr} [h]/n} , where n is the dimension of V . This result has numerous applications. For example, in the context of quantum information science , it is used to derive results about complex projective t-designs . [ 3 ] In the context of molecular orbital theory , it is used to restrict atomic orbital interactions based on the molecular symmetry . [ 4 ] Theorem: Let M , N {\displaystyle M,N} be two simple modules over a ring R {\displaystyle R} . Then any homomorphism f : M → N {\displaystyle f\colon M\to N} of R {\displaystyle R} - modules is either zero or an isomorphism. [ 5 ] In particular, the endomorphism ring of a simple module is a division ring . [ 6 ] Proof: Consider the kernel and image of f {\displaystyle f} : since ker ⁡ ( f ) ⊆ M , i m ( f ) ⊆ N {\displaystyle \ker(f)\subseteq M,\mathrm {im} (f)\subseteq N} are submodules of simple modules, by definition they are either zero or equal to M , N {\displaystyle M,N} respectively. In particular, we have that either ker ⁡ ( f ) = M {\displaystyle \ker(f)=M} , meaning that f {\displaystyle f} is the zero morphism, or that ker ⁡ ( f ) = 0 {\displaystyle \ker(f)=0} , meaning that f {\displaystyle f} is injective. In the latter case, the first isomorphism theorem tells us furthermore that i m ( f ) ≅ M / ker ⁡ ( f ) ≅ M {\displaystyle \mathrm {im} (f)\cong M/\ker(f)\cong M} is not trivial and thus i m ( f ) = N {\displaystyle \mathrm {im} (f)=N} : this shows that f {\displaystyle f} is in addition surjective, hence bijective and thus an isomorphism of R {\displaystyle R} -modules.
The group version is a special case of the module version, since any representation of a group G can equivalently be viewed as a module over the group ring of G . Schur's lemma is frequently applied in the following particular case. Suppose that R is an algebra over a field k and the vector space M = N is a simple module of R . Then Schur's lemma says that the endomorphism ring of the module M is a division algebra over k . If M is finite-dimensional, this division algebra is finite-dimensional. If k is the field of complex numbers, the only option is that this division algebra is the complex numbers. Thus the endomorphism ring of the module M is "as small as possible". In other words, the only linear transformations of M that commute with all transformations coming from R are scalar multiples of the identity. More generally, if R {\displaystyle R} is an algebra over an algebraically closed field k {\displaystyle k} and M {\displaystyle M} is a simple R {\displaystyle R} -module satisfying dim k ⁡ ( M ) < # k {\displaystyle \dim _{k}(M)<\#k} (the cardinality of k {\displaystyle k} ), then End R ⁡ ( M ) = k {\displaystyle \operatorname {End} _{R}(M)=k} . [ 7 ] So in particular, if R {\displaystyle R} is an algebra over an uncountable algebraically closed field k {\displaystyle k} and M {\displaystyle M} is a simple module that is at most countably-dimensional, the only linear transformations of M {\displaystyle M} that commute with all transformations coming from R {\displaystyle R} are scalar multiples of the identity. When the field is not algebraically closed, the case where the endomorphism ring is as small as possible is still of particular interest. A simple module over a k {\displaystyle k} -algebra is said to be absolutely simple if its endomorphism ring is isomorphic to k {\displaystyle k} . This is in general stronger than being irreducible over the field k {\displaystyle k} , and implies the module is irreducible even over the algebraic closure of k {\displaystyle k} . [ citation needed ] Definition: Let R {\displaystyle R} be a k {\displaystyle k} -algebra. An R {\displaystyle R} -module M {\displaystyle M} is said to have central character χ : Z ( R ) → k {\displaystyle \chi :Z(R)\to k} (here, Z ( R ) {\displaystyle Z(R)} is the center of R {\displaystyle R} ) if for every m ∈ M , z ∈ Z ( R ) {\displaystyle m\in M,z\in Z(R)} there is n ∈ N {\displaystyle n\in \mathbb {N} } such that ( z − χ ( z ) ) n m = 0 {\displaystyle (z-\chi (z))^{n}m=0} , i.e. if every m ∈ M {\displaystyle m\in M} is a generalized eigenvector of z {\displaystyle z} with eigenvalue χ ( z ) {\displaystyle \chi (z)} . If End R ⁡ ( M ) = k {\displaystyle \operatorname {End} _{R}(M)=k} , say in the case sketched above, every element of Z ( R ) {\displaystyle Z(R)} acts on M {\displaystyle M} as an R {\displaystyle R} -endomorphism and hence as a scalar. Thus, there is a ring homomorphism χ : Z ( R ) → k {\displaystyle \chi :Z(R)\to k} such that ( z − χ ( z ) ) m = 0 {\displaystyle (z-\chi (z))m=0} for all z ∈ Z ( R ) , m ∈ M {\displaystyle z\in Z(R),m\in M} . In particular, M {\displaystyle M} has central character χ {\displaystyle \chi } . 
If R = U ( g ) , k = C {\displaystyle R=U({\mathfrak {g}}),k=\mathbb {C} } is the universal enveloping algebra of a Lie algebra, a central character is also referred to as an infinitesimal character and the previous considerations show that if g {\displaystyle {\mathfrak {g}}} is finite-dimensional (so that R = U ( g ) {\displaystyle R=U({\mathfrak {g}})} is countable-dimensional), then every simple g {\displaystyle {\mathfrak {g}}} -module has an infinitesimal character. In the case where k = C , R = C [ G ] {\displaystyle k=\mathbb {C} ,R=\mathbb {C} [G]} is the group algebra of a finite group G {\displaystyle G} , the same conclusion follows. Here, the center of R {\displaystyle R} consists of elements of the shape ∑ g ∈ G a ( g ) g {\displaystyle \sum _{g\in G}a(g)g} where a : G → C {\displaystyle a:G\to \mathbb {C} } is a class function , i.e. invariant under conjugation. Since the set of class functions is spanned by the characters χ π {\displaystyle \chi _{\pi }} of the irreducible representations π ∈ G ^ {\displaystyle \pi \in {\hat {G}}} , the central character is determined by what it maps u π := 1 # G ∑ g ∈ G χ π ( g ) g {\displaystyle u_{\pi }:={\frac {1}{\#G}}\sum _{g\in G}\chi _{\pi }(g)g} to (for all π ∈ G ^ {\displaystyle \pi \in {\hat {G}}} ). Since all u π {\displaystyle u_{\pi }} are idempotent, they are each mapped either to 0 or to 1, and since u π u π ′ = 0 {\displaystyle u_{\pi }u_{\pi '}=0} for two different irreducible representations, only one u π {\displaystyle u_{\pi }} can be mapped to 1: the one corresponding to the module M {\displaystyle M} . We now describe Schur's lemma as it is usually stated in the context of representations of Lie groups and Lie algebras . There are three parts to the result. [ 8 ] First, suppose that V 1 {\displaystyle V_{1}} and V 2 {\displaystyle V_{2}} are irreducible representations of a Lie group or Lie algebra over any field and that ϕ : V 1 → V 2 {\displaystyle \phi :V_{1}\rightarrow V_{2}} is an intertwining map . Then ϕ {\displaystyle \phi } is either zero or an isomorphism. Second, if V {\displaystyle V} is an irreducible representation of a Lie group or Lie algebra over an algebraically closed field and ϕ : V → V {\displaystyle \phi :V\rightarrow V} is an intertwining map, then ϕ {\displaystyle \phi } is a scalar multiple of the identity map. Third, suppose V 1 {\displaystyle V_{1}} and V 2 {\displaystyle V_{2}} are irreducible representations of a Lie group or Lie algebra over an algebraically closed field and ϕ 1 , ϕ 2 : V 1 → V 2 {\displaystyle \phi _{1},\phi _{2}:V_{1}\rightarrow V_{2}} are nonzero intertwining maps . Then ϕ 1 = λ ϕ 2 {\displaystyle \phi _{1}=\lambda \phi _{2}} for some scalar λ {\displaystyle \lambda } . A simple corollary of the second statement is that every complex irreducible representation of an abelian group is one-dimensional. Suppose g {\displaystyle {\mathfrak {g}}} is a Lie algebra and U ( g ) {\displaystyle U({\mathfrak {g}})} is the universal enveloping algebra of g {\displaystyle {\mathfrak {g}}} . Let π : g → E n d ( V ) {\displaystyle \pi :{\mathfrak {g}}\rightarrow \mathrm {End} (V)} be an irreducible representation of g {\displaystyle {\mathfrak {g}}} over an algebraically closed field. The universal property of the universal enveloping algebra ensures that π {\displaystyle \pi } extends to a representation of U ( g ) {\displaystyle U({\mathfrak {g}})} acting on the same vector space. 
It follows from the second part of Schur's lemma that if x {\displaystyle x} belongs to the center of U ( g ) {\displaystyle U({\mathfrak {g}})} , then π ( x ) {\displaystyle \pi (x)} must be a multiple of the identity operator. In the case when g {\displaystyle {\mathfrak {g}}} is a complex semisimple Lie algebra , an important example of the preceding construction is the one in which x {\displaystyle x} is the (quadratic) Casimir element C {\displaystyle C} . In this case, π ( C ) = λ π I {\displaystyle \pi (C)=\lambda _{\pi }I} , where λ π {\displaystyle \lambda _{\pi }} is a constant that can be computed explicitly in terms of the highest weight of π {\displaystyle \pi } . [ 9 ] The action of the Casimir element plays an important role in the proof of complete reducibility for finite-dimensional representations of semisimple Lie algebras. [ 10 ] The one-module version of Schur's lemma admits generalizations for modules M {\displaystyle M} that are not necessarily simple. They express relations between the module-theoretic properties of M {\displaystyle M} and the properties of the endomorphism ring of M {\displaystyle M} . Theorem ( Lam 2001 , §19): A module is said to be strongly indecomposable if its endomorphism ring is a local ring . For a module M {\displaystyle M} of finite length , the following properties are equivalent: Schur's lemma cannot be reversed in general, however, since there exist modules that are not simple but whose endomorphism algebra is a division ring . Such modules are necessarily indecomposable and so cannot exist over semisimple rings , such as the complex group ring of a finite group . However, even over the ring of integers , the module of rational numbers has an endomorphism ring that is a division ring, specifically the field of rational numbers. Even for group rings, there are examples when the characteristic of the field divides the order of the group: the Jacobson radical of the projective cover of the one-dimensional representation of the alternating group A 5 over the finite field with three elements F 3 has F 3 as its endomorphism ring. [ citation needed ]
https://en.wikipedia.org/wiki/Schur's_lemma
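The averaging corollary can be watched in action numerically. A minimal sketch in Python/numpy; as an assumption of the example, the two-dimensional irreducible representation of S_3 is realised by the rotation and reflection symmetries of an equilateral triangle, and h is a random operator:

import numpy as np

# 2-dimensional irreducible representation of S_3 (dihedral group D_3).
co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[co, -si], [si, co]])        # rotation by 120 degrees
F = np.array([[1.0, 0.0], [0.0, -1.0]])    # reflection
group = [np.eye(2), R, R @ R, F, R @ F, R @ R @ F]

rng = np.random.default_rng(0)
h = rng.standard_normal((2, 2))            # arbitrary linear map V -> V

# h0 = (1/|G|) sum_g rho(g)^{-1} h rho(g) is G-linear by construction.
h0 = sum(np.linalg.inv(g) @ h @ g for g in group) / len(group)

mu = np.trace(h) / 2                       # predicted scalar Tr[h]/n, n = 2
print(np.allclose(h0, mu * np.eye(2)))     # True: h0 is a scalar matrix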
In Riemannian geometry , Schur's lemma is a result that says, heuristically, whenever certain curvatures are pointwise constant then they are forced to be globally constant. The proof is essentially a one-step calculation, which has only one input: the second Bianchi identity. Suppose ( M , g ) {\displaystyle (M,g)} is a smooth Riemannian manifold with dimension n . {\displaystyle n.} Recall that this defines for each element p {\displaystyle p} of M {\displaystyle M} the sectional curvatures sec p {\displaystyle \operatorname {sec} _{p}} , the Ricci curvature Ric p {\displaystyle \operatorname {Ric} _{p}} (a symmetric bilinear form on the tangent space), and the scalar curvature R ( p ) {\displaystyle R(p)} . The Schur lemma states the following: Suppose that n {\displaystyle n} is not equal to two. If there is a function κ {\displaystyle \kappa } on M {\displaystyle M} such that Ric p = κ ( p ) g p {\displaystyle \operatorname {Ric} _{p}=\kappa (p)g_{p}} for all p ∈ M {\displaystyle p\in M} then d κ ( p ) = 0. {\displaystyle d\kappa (p)=0.} Equivalently, κ {\displaystyle \kappa } is constant on each connected component of M {\displaystyle M} ; this could also be phrased as asserting that each connected component of M {\displaystyle M} is an Einstein manifold . The Schur lemma is a simple consequence of the "twice-contracted second Bianchi identity ," which states that div g ⁡ Ric = 1 2 d R , {\displaystyle \operatorname {div} _{g}\operatorname {Ric} ={\frac {1}{2}}dR,} understood as an equality of smooth 1-forms on M . {\displaystyle M.} Substituting in the given condition Ric p = κ ( p ) g p , {\displaystyle \operatorname {Ric} _{p}=\kappa (p)g_{p},} and using that div g ( κ g ) = d κ while R = tr g ( κ g ) = n κ , one finds that d κ = n 2 d κ . {\displaystyle \textstyle d\kappa ={\frac {n}{2}}d\kappa .} Since n ≠ 2 , this forces d κ = 0 . Let B {\displaystyle B} be a symmetric bilinear form on an n {\displaystyle n} -dimensional inner product space ( V , g ) . {\displaystyle (V,g).} Then | B | g 2 = | B − 1 n ( tr g ⁡ B ) g | g 2 + 1 n ( tr g ⁡ B ) 2 . {\displaystyle |B|_{g}^{2}=\left|B-{\frac {1}{n}}\left(\operatorname {tr} ^{g}B\right)g\right|_{g}^{2}+{\frac {1}{n}}\left(\operatorname {tr} ^{g}B\right)^{2}.} Additionally, note that if B = κ g {\displaystyle B=\kappa g} for some number κ , {\displaystyle \kappa ,} then one automatically has κ = 1 n tr g ⁡ B . {\displaystyle \kappa ={\frac {1}{n}}\operatorname {tr} ^{g}B.} With these observations in mind, one can restate the Schur lemma in the following form: Let ( M , g ) {\displaystyle (M,g)} be a connected smooth Riemannian manifold whose dimension is not equal to two. Then the following are equivalent: If ( M , g ) {\displaystyle (M,g)} is a connected smooth pseudo-Riemannian manifold, then the first three conditions are equivalent, and they imply the fourth condition. Note that the dimensional restriction is important, since every two-dimensional Riemannian manifold which does not have constant curvature would be a counterexample. The following is an immediate corollary of the Schur lemma for the Ricci tensor. Let ( M , g ) {\displaystyle (M,g)} be a connected smooth Riemannian manifold whose dimension n {\displaystyle n} is not equal to two. Then the following are equivalent: Let ( M , g ) {\displaystyle (M,g)} be a smooth Riemannian or pseudo-Riemannian manifold of dimension n . {\displaystyle n.} Let h {\displaystyle h} be a smooth symmetric (0,2)-tensor field whose covariant derivative, with respect to the Levi-Civita connection, is completely symmetric. The symmetry condition is an analogue of the Bianchi identity ; continuing the analogy, one takes a trace to find that div g ⁡ h = d ( tr g ⁡ h ) . 
{\displaystyle \operatorname {div} ^{g}h=d{\big (}\operatorname {tr} ^{g}h{\big )}.} If there is a function κ {\displaystyle \kappa } on M {\displaystyle M} such that h p = κ ( p ) g p {\displaystyle h_{p}=\kappa (p)g_{p}} for all p {\displaystyle p} in M , {\displaystyle M,} then upon substitution one finds d κ = n ⋅ d κ . {\displaystyle d\kappa =n\cdot d\kappa .} Hence n > 1 {\displaystyle n>1} implies that κ {\displaystyle \kappa } is constant on each connected component of M . {\displaystyle M.} As above, one can then state the Schur lemma in this context: Let ( M , g ) {\displaystyle (M,g)} be a connected smooth Riemannian manifold whose dimension is not equal to one. Let h {\displaystyle h} be a smooth symmetric (0,2)-tensor field whose covariant derivative is totally symmetric as a (0,3)-tensor field. Then the following are equivalent: If ( M , g ) {\displaystyle (M,g)} is a connected and smooth pseudo-Riemannian manifold, then the first three are equivalent, and imply the fourth and fifth. The Schur lemmas are frequently employed to prove roundness of geometric objects. A noteworthy example is to characterize the limits of convergent geometric flows . For example, a key part of Richard Hamilton 's 1982 breakthrough on the Ricci flow [ 1 ] was his "pinching estimate" which, informally stated, says that for a Riemannian metric which appears in a 3-manifold Ricci flow with positive Ricci curvature, the eigenvalues of the Ricci tensor are close to one another relative to the size of their sum. If one normalizes the sum, then, the eigenvalues are close to one another in an absolute sense. In this sense, each of the metrics appearing in a 3-manifold Ricci flow of positive Ricci curvature "approximately" satisfies the conditions of the Schur lemma. The Schur lemma itself is not explicitly applied, but its proof is effectively carried out through Hamilton's calculations. In the same way, the Schur lemma for the Riemann tensor is employed to study convergence of Ricci flow in higher dimensions. This goes back to Gerhard Huisken 's extension of Hamilton's work to higher dimensions, [ 2 ] where the main part of the work is that the Weyl tensor and the semi-traceless Riemann tensor become zero in the long-time limit. This extends to the more general Ricci flow convergence theorems, some expositions of which directly use the Schur lemma. [ 3 ] This includes the proof of the differentiable sphere theorem . The Schur lemma for Codazzi tensors is employed directly in Huisken's foundational paper on convergence of mean curvature flow , which was modeled on Hamilton's work. [ 4 ] In the final two sentences of Huisken's paper, it is concluded that one has a smooth embedding S n → R n + 1 {\displaystyle S^{n}\to \mathbb {R} ^{n+1}} with | h | 2 = 1 n H 2 , {\displaystyle |h|^{2}={\frac {1}{n}}H^{2},} where h {\displaystyle h} is the second fundamental form and H {\displaystyle H} is the mean curvature. The Schur lemma implies that the mean curvature is constant, and the image of this embedding then must be a standard round sphere. Another application relates full isotropy and curvature. Suppose that ( M , g ) {\displaystyle (M,g)} is a connected thrice-differentiable Riemannian manifold, and that for each p ∈ M {\displaystyle p\in M} the group of isometries Isom ⁡ ( M , g ) {\displaystyle \operatorname {Isom} (M,g)} acts transitively on T p M . 
{\displaystyle T_{p}M.} This means that for all p ∈ M {\displaystyle p\in M} and all v , w ∈ T p M {\displaystyle v,w\in T_{p}M} there is an isometry φ : ( M , g ) → ( M , g ) {\displaystyle \varphi :(M,g)\to (M,g)} such that φ ( p ) = p {\displaystyle \varphi (p)=p} and d φ p ( v ) = w . {\displaystyle d\varphi _{p}(v)=w.} This implies that Isom ⁡ ( M , g ) {\displaystyle \operatorname {Isom} (M,g)} also acts transitively on Gr ( 2 , T p M ) , {\displaystyle {\text{Gr}}(2,T_{p}M),} that is, for every P , Q ∈ Gr ( 2 , T p M ) {\displaystyle P,Q\in {\text{Gr}}(2,T_{p}M)} there is an isometry φ : ( M , g ) → ( M , g ) {\displaystyle \varphi :(M,g)\to (M,g)} such that φ ( p ) = p {\displaystyle \varphi (p)=p} and d φ p ( P ) = Q . {\displaystyle d\varphi _{p}(P)=Q.} Since isometries preserve sectional curvature, this implies that sec p {\displaystyle \operatorname {sec} _{p}} is constant for each p ∈ M . {\displaystyle p\in M.} The Schur lemma implies that ( M , g ) {\displaystyle (M,g)} has constant curvature. A particularly notable application of this is that any spacetime which models the cosmological principle must be the warped product of an interval and a constant-curvature Riemannian manifold. See O'Neill (1983, page 341). Recent research has investigated the case that the conditions of the Schur lemma are only approximately satisfied. Consider the Schur lemma in the form "If the traceless Ricci tensor is zero then the scalar curvature is constant." Camillo De Lellis and Peter Topping [ 5 ] have shown that if the traceless Ricci tensor is approximately zero then the scalar curvature is approximately constant. Precisely: Next, consider the Schur lemma in the special form "If Σ {\displaystyle \Sigma } is a connected embedded surface in R 3 {\displaystyle \mathbb {R} ^{3}} whose traceless second fundamental form is zero, then its mean curvature is constant." Camillo De Lellis and Stefan Müller [ 6 ] have shown that if the traceless second fundamental form of a compact surface is approximately zero then the mean curvature is approximately constant. Precisely As an application, one can conclude that Σ {\displaystyle \Sigma } itself is 'close' to a round sphere.
https://en.wikipedia.org/wiki/Schur's_lemma_(Riemannian_geometry)
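The pointwise identity for symmetric bilinear forms quoted above, |B|^2 = |B − (1/n)(tr B) g|^2 + (1/n)(tr B)^2, is a quick orthogonality check. A minimal numerical sketch in Python/numpy, taking g to be the standard inner product:

import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
B = (B + B.T) / 2                     # a symmetric bilinear form, g = identity

tr = np.trace(B)
traceless = B - (tr / n) * np.eye(n)  # trace-free part of B

lhs = np.sum(B * B)                   # |B|^2
rhs = np.sum(traceless * traceless) + tr**2 / n
print(np.isclose(lhs, rhs))           # True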
In discrete mathematics , Schur's theorem is any of several theorems of the mathematician Issai Schur . In differential geometry , Schur's theorem is a theorem of Axel Schur . In functional analysis , Schur's theorem is often called Schur's property , also due to Issai Schur. In Ramsey theory , Schur's theorem states that for any partition of the positive integers into a finite number of parts, one of the parts contains three integers x , y , z with x + y = z . For every positive integer c , S ( c ) denotes the smallest number S such that for every partition of the integers { 1 , … , S } {\displaystyle \{1,\ldots ,S\}} into c parts, one of the parts contains integers x , y , and z with x + y = z {\displaystyle x+y=z} . Schur's theorem ensures that S ( c ) is well-defined for every positive integer c . The numbers of the form S ( c ) are called Schur numbers . Folkman's theorem generalizes Schur's theorem by stating that there exist arbitrarily large sets of integers, all of whose nonempty sums belong to the same part. Using this definition, the only known Schur numbers are S ( 1 ) = 2 , S ( 2 ) = 5 , S ( 3 ) = 14 , S ( 4 ) = 45 , and S ( 5 ) = 161 ( OEIS : A030126 ). The proof that S ( 5 ) = 161 was announced in 2017 and required 2 petabytes of space. [ 1 ] [ 2 ] In combinatorics , Schur's theorem describes the asymptotic number of ways of expressing a given number as a (non-negative, integer) linear combination of a fixed set of relatively prime numbers. In particular, if { a 1 , … , a n } {\displaystyle \{a_{1},\ldots ,a_{n}\}} is a set of integers such that gcd ( a 1 , … , a n ) = 1 {\displaystyle \gcd(a_{1},\ldots ,a_{n})=1} , the number of different tuples of non-negative integers ( c 1 , … , c n ) {\displaystyle (c_{1},\ldots ,c_{n})} such that x = c 1 a 1 + ⋯ + c n a n {\displaystyle x=c_{1}a_{1}+\cdots +c_{n}a_{n}} is, as x {\displaystyle x} goes to infinity, asymptotically x n − 1 ( n − 1 ) ! a 1 ⋯ a n . {\displaystyle {\frac {x^{n-1}}{(n-1)!\,a_{1}\cdots a_{n}}}.} As a result, for every set of relatively prime numbers { a 1 , … , a n } {\displaystyle \{a_{1},\ldots ,a_{n}\}} there exists a value of x {\displaystyle x} such that every larger number is representable as a linear combination of { a 1 , … , a n } {\displaystyle \{a_{1},\ldots ,a_{n}\}} in at least one way. This consequence of the theorem can be recast in a familiar context considering the problem of changing an amount using a set of coins. If the denominations of the coins are relatively prime numbers (such as 2 and 5) then any sufficiently large amount can be changed using only these coins. (See Coin problem .) In differential geometry , Schur's theorem compares the distance between the endpoints of a space curve C ∗ {\displaystyle C^{*}} to the distance between the endpoints of a corresponding plane curve C {\displaystyle C} of less curvature. Suppose C ( s ) {\displaystyle C(s)} is a plane curve with curvature κ ( s ) {\displaystyle \kappa (s)} which makes a convex curve when closed by the chord connecting its endpoints, and C ∗ ( s ) {\displaystyle C^{*}(s)} is a curve of the same length with curvature κ ∗ ( s ) {\displaystyle \kappa ^{*}(s)} . Let d {\displaystyle d} denote the distance between the endpoints of C {\displaystyle C} and d ∗ {\displaystyle d^{*}} denote the distance between the endpoints of C ∗ {\displaystyle C^{*}} . If κ ∗ ( s ) ≤ κ ( s ) {\displaystyle \kappa ^{*}(s)\leq \kappa (s)} then d ∗ ≥ d {\displaystyle d^{*}\geq d} . Schur's theorem is usually stated for C 2 {\displaystyle C^{2}} curves, but John M. Sullivan has observed that Schur's theorem applies to curves of finite total curvature (the statement is slightly different). 
In linear algebra , Schur’s theorem is referred to as either the triangularization of a square matrix with complex entries, or of a square matrix with real entries and real eigenvalues . In functional analysis and the study of Banach spaces , Schur's theorem, due to I. Schur , often refers to Schur's property , that for certain spaces, weak convergence implies convergence in the norm. In number theory , Issai Schur showed in 1912 that for every nonconstant polynomial p ( x ) with integer coefficients , if S is the set of all nonzero values { p ( n ) ≠ 0 : n ∈ N } {\displaystyle {\begin{Bmatrix}p(n)\neq 0:n\in \mathbb {N} \end{Bmatrix}}} , then the set of primes that divide some member of S is infinite.
https://en.wikipedia.org/wiki/Schur's_theorem
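The Ramsey-theoretic statement and the value S(2) = 5 can be confirmed by exhaustive search. A minimal sketch in Python (function names are ours; x = y is allowed in the Schur triple x + y = z):

from itertools import product

def has_mono_sum(coloring):
    """coloring[i] is the part assigned to the integer i+1; search for
    x + y = z with x, y, z all in one part (x = y allowed)."""
    n = len(coloring)
    return any(coloring[x - 1] == coloring[y - 1] == coloring[x + y - 1]
               for x in range(1, n + 1)
               for y in range(x, n + 1 - x))

def schur_number(parts):
    """Smallest S such that every partition of {1,...,S} into `parts` parts
    contains a monochromatic solution of x + y = z."""
    n = 1
    while not all(has_mono_sum(c) for c in product(range(parts), repeat=n)):
        n += 1
    return n

print(schur_number(1), schur_number(2))   # 2 5
# schur_number(3) == 14 as well, but enumerating 3^14 colorings is slow.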
In mathematics, a Schur-convex function , also known as S-convex , isotonic function , or order-preserving function , is a function f : R d → R {\displaystyle f:\mathbb {R} ^{d}\rightarrow \mathbb {R} } such that for all x , y ∈ R d {\displaystyle x,y\in \mathbb {R} ^{d}} with x {\displaystyle x} majorized by y {\displaystyle y} , one has f ( x ) ≤ f ( y ) {\displaystyle f(x)\leq f(y)} . Named after Issai Schur , Schur-convex functions are used in the study of majorization . A function f is 'Schur-concave' if its negative, − f , is Schur-convex. Every function that is convex and symmetric (under permutations of the arguments) is also Schur-convex. Every Schur-convex function is symmetric, but not necessarily convex. [ 1 ] If f {\displaystyle f} is (strictly) Schur-convex and g {\displaystyle g} is (strictly) monotonically increasing, then g ∘ f {\displaystyle g\circ f} is (strictly) Schur-convex. If g {\displaystyle g} is a convex function defined on a real interval, then ∑ i = 1 n g ( x i ) {\displaystyle \sum _{i=1}^{n}g(x_{i})} is Schur-convex. If f is symmetric and all first partial derivatives exist, then f is Schur-convex if and only if the Schur–Ostrowski criterion ( x i − x j ) ( ∂ f / ∂ x i − ∂ f / ∂ x j ) ≥ 0 holds for all 1 ≤ i , j ≤ d {\displaystyle 1\leq i,j\leq d} . [ 2 ]
https://en.wikipedia.org/wiki/Schur-convex_function
In mathematics, Schur algebras, named after Issai Schur, are certain finite-dimensional algebras closely associated with Schur–Weyl duality between general linear and symmetric groups. They are used to relate the representation theories of those two groups. Their use was promoted by the influential monograph of J. A. Green first published in 1980. [ 1 ] The name "Schur algebra" is due to Green. In the modular case (over infinite fields of positive characteristic) Schur algebras were used by Gordon James and Karin Erdmann to show that the (still open) problems of computing decomposition numbers for general linear groups and symmetric groups are actually equivalent. [ 2 ] Schur algebras were used by Friedlander and Suslin to prove finite generation of cohomology of finite group schemes. [ 3 ]

The Schur algebra S_k(n, r) can be defined for any commutative ring k and integers n, r ≥ 0. Consider the algebra k[x_ij] of polynomials (with coefficients in k) in n² commuting variables x_ij, 1 ≤ i, j ≤ n. Denote by A_k(n, r) the homogeneous polynomials of degree r. Elements of A_k(n, r) are k-linear combinations of monomials formed by multiplying together r of the generators x_ij (allowing repetition). Thus k[x_ij] = ⊕_{r ≥ 0} A_k(n, r). Now, k[x_ij] has a natural coalgebra structure with comultiplication Δ and counit ε, the algebra homomorphisms given on generators by Δ(x_ij) = Σ_{l=1}^{n} x_il ⊗ x_lj and ε(x_ij) = δ_ij (Kronecker delta). Since comultiplication is an algebra homomorphism, k[x_ij] is a bialgebra. One easily checks that A_k(n, r) is a subcoalgebra of the bialgebra k[x_ij], for every r ≥ 0.

Definition. The Schur algebra (in degree r) is the algebra S_k(n, r) = Hom_k(A_k(n, r), k). That is, S_k(n, r) is the linear dual of A_k(n, r). It is a general fact that the linear dual of a coalgebra A is an algebra in a natural way, where the multiplication in the algebra is induced by dualizing the comultiplication in the coalgebra. To see this, write the comultiplication of an element a ∈ A as Δ(a) = Σ a_i ⊗ b_i and, given linear functionals f, g on A, define their product to be the linear functional given by a ↦ Σ f(a_i) g(b_i). The identity element for this multiplication of functionals is the counit in A.

The symmetric group S_r on r letters acts naturally on the tensor space V^⊗r = kⁿ ⊗ ⋯ ⊗ kⁿ (r factors) by place permutation, and one has an algebra isomorphism S_k(n, r) ≅ End_{S_r}(V^⊗r). In other words, S_k(n, r) may be viewed as the algebra of endomorphisms of tensor space commuting with the action of the symmetric group. The study of generalizations of Schur algebras (such as the q-Schur algebras) forms an active area of contemporary research.
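A small concrete consequence of the definition: A_k(n, r) has the degree-r monomials in the n² generators as a k-basis, so its rank, and hence the dimension of its dual S_k(n, r), is the binomial coefficient C(n² + r − 1, r). A one-function Python illustration:

```python
from math import comb

def schur_algebra_dim(n, r):
    """dim S_k(n, r) = dim A_k(n, r) = number of degree-r monomials in the
    n**2 commuting variables x_ij, i.e. C(n**2 + r - 1, r)."""
    return comb(n * n + r - 1, r)

print(schur_algebra_dim(2, 2))  # 10
print(schur_algebra_dim(2, 3))  # 20
print(schur_algebra_dim(3, 2))  # 45
```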
https://en.wikipedia.org/wiki/Schur_algebra
The Schur complement is a key tool in the fields of linear algebra , the theory of matrices , numerical analysis, and statistics. It is defined for a block matrix . Suppose p , q are nonnegative integers such that p + q > 0 , and suppose A , B , C , D are respectively p × p , p × q , q × p , and q × q matrices of complex numbers. Let M = [ A B C D ] {\displaystyle M={\begin{bmatrix}A&B\\C&D\end{bmatrix}}} so that M is a ( p + q ) × ( p + q ) matrix. If D is invertible, then the Schur complement of the block D of the matrix M is the p × p matrix defined by M / D := A − B D − 1 C . {\displaystyle M/D:=A-BD^{-1}C.} If A is invertible, the Schur complement of the block A of the matrix M is the q × q matrix defined by M / A := D − C A − 1 B . {\displaystyle M/A:=D-CA^{-1}B.} In the case that A or D is singular , substituting a generalized inverse for the inverses on M/A and M/D yields the generalized Schur complement . The Schur complement is named after Issai Schur [ 1 ] who used it to prove Schur's lemma , although it had been used previously. [ 2 ] Emilie Virginia Haynsworth was the first to call it the Schur complement . [ 3 ] The Schur complement is sometimes referred to as the Feshbach map after a physicist Herman Feshbach . [ 4 ] The Schur complement arises when performing a block Gaussian elimination on the matrix M . In order to eliminate the elements below the block diagonal, one multiplies the matrix M by a block lower triangular matrix on the right as follows: M = [ A B C D ] → [ A B C D ] [ I p 0 − D − 1 C I q ] = [ A − B D − 1 C B 0 D ] , {\displaystyle {\begin{aligned}&M={\begin{bmatrix}A&B\\C&D\end{bmatrix}}\quad \to \quad {\begin{bmatrix}A&B\\C&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\-D^{-1}C&I_{q}\end{bmatrix}}={\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}},\end{aligned}}} where I p denotes a p × p identity matrix . As a result, the Schur complement M / D = A − B D − 1 C {\displaystyle M/D=A-BD^{-1}C} appears in the upper-left p × p block. Continuing the elimination process beyond this point (i.e., performing a block Gauss–Jordan elimination ), [ A − B D − 1 C B 0 D ] → [ I p − B D − 1 0 I q ] [ A − B D − 1 C B 0 D ] = [ A − B D − 1 C 0 0 D ] , {\displaystyle {\begin{aligned}&{\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}}\quad \to \quad {\begin{bmatrix}I_{p}&-BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}}={\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}},\end{aligned}}} leads to an LDU decomposition of M , which reads M = [ A B C D ] = [ I p B D − 1 0 I q ] [ A − B D − 1 C 0 0 D ] [ I p 0 D − 1 C I q ] . {\displaystyle {\begin{aligned}M&={\begin{bmatrix}A&B\\C&D\end{bmatrix}}={\begin{bmatrix}I_{p}&BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\D^{-1}C&I_{q}\end{bmatrix}}.\end{aligned}}} Thus, the inverse of M may be expressed involving D −1 and the inverse of Schur's complement, assuming it exists, as M − 1 = [ A B C D ] − 1 = ( [ I p B D − 1 0 I q ] [ A − B D − 1 C 0 0 D ] [ I p 0 D − 1 C I q ] ) − 1 = [ I p 0 − D − 1 C I q ] [ ( A − B D − 1 C ) − 1 0 0 D − 1 ] [ I p − B D − 1 0 I q ] = [ ( A − B D − 1 C ) − 1 − ( A − B D − 1 C ) − 1 B D − 1 − D − 1 C ( A − B D − 1 C ) − 1 D − 1 + D − 1 C ( A − B D − 1 C ) − 1 B D − 1 ] = [ ( M / D ) − 1 − ( M / D ) − 1 B D − 1 − D − 1 C ( M / D ) − 1 D − 1 + D − 1 C ( M / D ) − 1 B D − 1 ] . 
{\displaystyle {\begin{aligned}M^{-1}={\begin{bmatrix}A&B\\C&D\end{bmatrix}}^{-1}={}&\left({\begin{bmatrix}I_{p}&BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\D^{-1}C&I_{q}\end{bmatrix}}\right)^{-1}\\={}&{\begin{bmatrix}I_{p}&0\\-D^{-1}C&I_{q}\end{bmatrix}}{\begin{bmatrix}\left(A-BD^{-1}C\right)^{-1}&0\\0&D^{-1}\end{bmatrix}}{\begin{bmatrix}I_{p}&-BD^{-1}\\0&I_{q}\end{bmatrix}}\\[4pt]={}&{\begin{bmatrix}\left(A-BD^{-1}C\right)^{-1}&-\left(A-BD^{-1}C\right)^{-1}BD^{-1}\\-D^{-1}C\left(A-BD^{-1}C\right)^{-1}&D^{-1}+D^{-1}C\left(A-BD^{-1}C\right)^{-1}BD^{-1}\end{bmatrix}}\\[4pt]={}&{\begin{bmatrix}\left(M/D\right)^{-1}&-\left(M/D\right)^{-1}BD^{-1}\\-D^{-1}C\left(M/D\right)^{-1}&D^{-1}+D^{-1}C\left(M/D\right)^{-1}BD^{-1}\end{bmatrix}}.\end{aligned}}} The above relationship comes from the elimination operations that involve D −1 and M/D . An equivalent derivation can be done with the roles of A and D interchanged. By equating the expressions for M −1 obtained in these two different ways, one can establish the matrix inversion lemma , which relates the two Schur complements of M : M/D and M/A (see "Derivation from LDU decomposition" in Woodbury matrix identity § Alternative proofs ). The Schur complement arises naturally in solving a system of linear equations such as [ 7 ] [ A B C D ] [ x y ] = [ u v ] {\displaystyle {\begin{bmatrix}A&B\\C&D\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}u\\v\end{bmatrix}}} . Assuming that the submatrix A {\displaystyle A} is invertible, we can eliminate x {\displaystyle x} from the equations, as follows. x = A − 1 ( u − B y ) . {\displaystyle x=A^{-1}(u-By).} Substituting this expression into the second equation yields We refer to this as the reduced equation obtained by eliminating x {\displaystyle x} from the original equation. The matrix appearing in the reduced equation is called the Schur complement of the first block A {\displaystyle A} in M {\displaystyle M} : Solving the reduced equation, we obtain Substituting this into the first equation yields We can express the above two equation as: Therefore, a formulation for the inverse of a block matrix is: In particular, we see that the Schur complement is the inverse of the 2 , 2 {\displaystyle 2,2} block entry of the inverse of M {\displaystyle M} . In practice, one needs A {\displaystyle A} to be well-conditioned in order for this algorithm to be numerically accurate. This method is useful in electrical engineering to reduce the dimension of a network's equations. It is especially useful when element(s) of the output vector are zero. For example, when u {\displaystyle u} or v {\displaystyle v} is zero, we can eliminate the associated rows of the coefficient matrix without any changes to the rest of the output vector. If v {\displaystyle v} is null then the above equation for x {\displaystyle x} reduces to x = ( A − 1 + A − 1 B S − 1 C A − 1 ) u {\displaystyle x=\left(A^{-1}+A^{-1}BS^{-1}CA^{-1}\right)u} , thus reducing the dimension of the coefficient matrix while leaving u {\displaystyle u} unmodified. This is used to advantage in electrical engineering where it is referred to as node elimination or Kron reduction . 
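The block-inverse formula above is easy to sanity-check numerically. The NumPy sketch below (illustrative; D is shifted by a multiple of the identity so that D and M are comfortably invertible) compares the Schur-complement formula for M⁻¹ with a direct inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2
A = rng.normal(size=(p, p)); B = rng.normal(size=(p, q))
C = rng.normal(size=(q, p)); D = rng.normal(size=(q, q)) + 5 * np.eye(q)

M = np.block([[A, B], [C, D]])
Dinv = np.linalg.inv(D)
S = np.linalg.inv(A - B @ Dinv @ C)        # (M/D)^{-1}

Minv = np.block([
    [S,             -S @ B @ Dinv],
    [-Dinv @ C @ S,  Dinv + Dinv @ C @ S @ B @ Dinv],
])
assert np.allclose(Minv, np.linalg.inv(M))
print("block-inverse formula via the Schur complement verified")
```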
Suppose the random column vectors X, Y live in ℝⁿ and ℝᵐ respectively, and the vector (X, Y) in ℝ^(n+m) has a multivariate normal distribution whose covariance is the symmetric positive-definite matrix Σ = [[A, B], [Bᵀ, C]] (in block form), where A ∈ ℝ^(n×n) is the covariance matrix of X, C ∈ ℝ^(m×m) is the covariance matrix of Y and B ∈ ℝ^(n×m) is the covariance matrix between X and Y. Then the conditional covariance of X given Y is the Schur complement of C in Σ: [ 8 ] Cov(X | Y) = A − B C⁻¹ Bᵀ. If we take the matrix Σ above to be, not a covariance of a random vector, but a sample covariance, then it may have a Wishart distribution. In that case, the Schur complement of C in Σ also has a Wishart distribution. [ citation needed ]

Let X be a symmetric matrix of real numbers given in block form by X = [[A, B], [Bᵀ, C]]. Then: (1) X is positive definite if and only if A and the Schur complement C − Bᵀ A⁻¹ B are both positive definite; (2) if A is positive definite, then X is positive semi-definite if and only if C − Bᵀ A⁻¹ B is positive semi-definite; (3) X is positive definite if and only if C and A − B C⁻¹ Bᵀ are both positive definite; (4) if C is positive definite, then X is positive semi-definite if and only if A − B C⁻¹ Bᵀ is positive semi-definite. The first and third statements can be derived [ 7 ] by considering the minimizer of the quantity uᵀAu + 2vᵀBᵀu + vᵀCv, as a function of v (for fixed u). Furthermore, since [[A, B], [Bᵀ, C]] ≻ 0 if and only if [[C, Bᵀ], [B, A]] ≻ 0, and similarly for positive semi-definite matrices, the second (respectively fourth) statement is immediate from the first (resp. third) statement. There is also a sufficient and necessary condition for the positive semi-definiteness of X in terms of a generalized Schur complement. [ 2 ] Precisely, X ⪰ 0 if and only if A ⪰ 0, C − Bᵀ A^g B ⪰ 0, and (I − A A^g) B = 0, where A^g denotes a generalized inverse of A.
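A short NumPy sketch of the statistical reading (illustrative): partition a positive-definite covariance matrix into blocks and form the conditional covariance as the Schur complement, which is again positive definite:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 2, 3
L = rng.normal(size=(n + m, n + m))
Sigma = L @ L.T + (n + m) * np.eye(n + m)  # symmetric positive definite

A = Sigma[:n, :n]   # Cov(X)
B = Sigma[:n, n:]   # Cov(X, Y)
C = Sigma[n:, n:]   # Cov(Y)

cond_cov = A - B @ np.linalg.inv(C) @ B.T  # Cov(X | Y): Schur complement of C
assert np.all(np.linalg.eigvalsh(cond_cov) > 0)  # still positive definite
print(cond_cov)
```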
https://en.wikipedia.org/wiki/Schur_complement
In the mathematical discipline of linear algebra , the Schur decomposition or Schur triangulation , named after Issai Schur , is a matrix decomposition . It allows one to write an arbitrary complex square matrix as unitarily similar to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix. The complex Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as [ 1 ] [ 2 ] [ 3 ] A = Q U Q − 1 {\displaystyle A=QUQ^{-1}} for some unitary matrix Q (so that the inverse Q −1 is also the conjugate transpose Q * of Q ), and some upper triangular matrix U . This is called a Schur form of A . Since U is similar to A , it has the same spectrum , and since it is triangular, its eigenvalues are the diagonal entries of U . The Schur decomposition implies that there exists a nested sequence of A -invariant subspaces {0} = V 0 ⊂ V 1 ⊂ ⋯ ⊂ V n = C n , and that there exists an ordered orthonormal basis (for the standard Hermitian form of C n ) such that the first i basis vectors span V i for each i occurring in the nested sequence. Phrased somewhat differently, the first part says that a linear operator J on a complex finite-dimensional vector space stabilizes a complete flag ( V 1 , ..., V n ) . There is also a real Schur decomposition. If A is an n × n square matrix with real entries, then A can be expressed as [ 4 ] A = Q H Q − 1 {\displaystyle A=QHQ^{-1}} where Q is an orthogonal matrix and H is either upper or lower quasi-triangular. A quasi-triangular matrix is a matrix that when expressed as a block matrix of 2 × 2 and 1 × 1 blocks is triangular. This is a stronger property than being Hessenberg . Just as in the complex case, a family of commuting real matrices { A i } may be simultaneously brought to quasi-triangular form by an orthogonal matrix. There exists an orthogonal matrix Q such that, for every A i in the given family, H i = Q A i Q − 1 {\displaystyle H_{i}=QA_{i}Q^{-1}} is upper quasi-triangular. A constructive proof for the Schur decomposition is as follows: every operator A on a complex finite-dimensional vector space has an eigenvalue λ , corresponding to some eigenspace V λ . Let V λ ⊥ be its orthogonal complement. It is clear that, with respect to this orthogonal decomposition, A has matrix representation (one can pick here any orthonormal bases Z 1 and Z 2 spanning V λ and V λ ⊥ respectively) [ Z 1 Z 2 ] ∗ A [ Z 1 Z 2 ] = [ λ I λ A 12 0 A 22 ] : V λ ⊕ V λ ⊥ → V λ ⊕ V λ ⊥ {\displaystyle {\begin{bmatrix}Z_{1}&Z_{2}\end{bmatrix}}^{*}A{\begin{bmatrix}Z_{1}&Z_{2}\end{bmatrix}}={\begin{bmatrix}\lambda \,I_{\lambda }&A_{12}\\0&A_{22}\end{bmatrix}}:{\begin{matrix}V_{\lambda }\\\oplus \\V_{\lambda }^{\perp }\end{matrix}}\rightarrow {\begin{matrix}V_{\lambda }\\\oplus \\V_{\lambda }^{\perp }\end{matrix}}} where I λ is the identity operator on V λ . The above matrix would be upper-triangular except for the A 22 block. But exactly the same procedure can be applied to the sub-matrix A 22 , viewed as an operator on V λ ⊥ , and its submatrices. Continue this way until the resulting matrix is upper triangular. Since each conjugation increases the dimension of the upper-triangular block by at least one, this process takes at most n steps. Thus the space C n will be exhausted and the procedure has yielded the desired result. [ 5 ] The above argument can be slightly restated as follows: let λ be an eigenvalue of A , corresponding to some eigenspace V λ . 
A induces an operator T on the quotient space C n / V λ . This operator is precisely the A 22 submatrix from above. As before, T would have an eigenspace, say W μ ⊂ C n modulo V λ . Notice the preimage of W μ under the quotient map is an invariant subspace of A that contains V λ . Continue this way until the resulting quotient space has dimension 0. Then the successive preimages of the eigenspaces found at each step form a flag that A stabilizes. Although every square matrix has a Schur decomposition, in general this decomposition is not unique. For example, the eigenspace V λ can have dimension > 1, in which case any orthonormal basis for V λ would lead to the desired result. Write the triangular matrix U as U = D + N , where D is diagonal and N is strictly upper triangular (and thus a nilpotent matrix ). The diagonal matrix D contains the eigenvalues of A in arbitrary order (hence its Frobenius norm, squared, is the sum of the squared moduli of the eigenvalues of A , while the Frobenius norm of A , squared, is the sum of the squared singular values of A ). The nilpotent part N is generally not unique either, but its Frobenius norm is uniquely determined by A (just because the Frobenius norm of A is equal to the Frobenius norm of U = D + N ). [ 6 ] It is clear that if A is a normal matrix , then U from its Schur decomposition must be a diagonal matrix and the column vectors of Q are the eigenvectors of A . Therefore, the Schur decomposition extends the spectral decomposition . In particular, if A is positive definite , the Schur decomposition of A , its spectral decomposition, and its singular value decomposition coincide. A commuting family { A i } of matrices can be simultaneously triangularized, i.e. there exists a unitary matrix Q such that, for every A i in the given family, Q A i Q* is upper triangular. This can be readily deduced from the above proof. Take element A from { A i } and again consider an eigenspace V A . Then V A is invariant under all matrices in { A i }. Therefore, all matrices in { A i } must share one common eigenvector in V A . Induction then proves the claim. As a corollary, we have that every commuting family of normal matrices can be simultaneously diagonalized . In the infinite dimensional setting, not every bounded operator on a Banach space has an invariant subspace. However, the upper-triangularization of an arbitrary square matrix does generalize to compact operators . Every compact operator on a complex Banach space has a nest of closed invariant subspaces. The Schur decomposition of a given matrix is numerically computed by the QR algorithm or its variants. In other words, the roots of the characteristic polynomial corresponding to the matrix are not necessarily computed ahead in order to obtain its Schur decomposition. Conversely, the QR algorithm can be used to compute the roots of any given characteristic polynomial by finding the Schur decomposition of its companion matrix . Similarly, the QR algorithm is used to compute the eigenvalues of any given matrix, which are the diagonal entries of the upper triangular matrix of the Schur decomposition. Although the QR algorithm is formally an infinite sequence of operations, convergence to machine precision is practically achieved in O ( n 3 ) {\displaystyle {\mathcal {O}}(n^{3})} operations. [ 7 ] See the Nonsymmetric Eigenproblems section in LAPACK Users' Guide. 
[ 8 ] The Schur decomposition also has applications in Lie theory. Given square matrices A and B, the generalized Schur decomposition factorizes both matrices as A = QSZ* and B = QTZ*, where Q and Z are unitary, and S and T are upper triangular. The generalized Schur decomposition is also sometimes called the QZ decomposition. [ 2 ] : 375 [ 9 ] The generalized eigenvalues λ that solve the generalized eigenvalue problem Ax = λBx (where x is an unknown nonzero vector) can be calculated as the ratio of the diagonal elements of S to those of T. That is, using subscripts to denote matrix elements, the i-th generalized eigenvalue λ_i satisfies λ_i = S_ii / T_ii.
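Both decompositions are available as library routines wrapping the QR/QZ algorithms, for example scipy.linalg.schur and scipy.linalg.qz. The sketch below (random matrices, illustrative only; random inputs almost surely avoid degenerate cases such as zero diagonal entries of T) confirms the factorizations and reads off ordinary and generalized eigenvalues:

```python
import numpy as np
from scipy.linalg import schur, qz

rng = np.random.default_rng(3)

# Complex Schur form: A = Q U Q*, Q unitary, U upper triangular,
# with the eigenvalues of A on the diagonal of U.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, Q = schur(A, output='complex')
assert np.allclose(Q @ U @ Q.conj().T, A)
assert np.allclose(np.tril(U, -1), 0)
print("eigenvalues:", np.diag(U))

# Real Schur form: B = Z T Z^T, Z orthogonal, T quasi-triangular
# (1x1 and 2x2 diagonal blocks, the latter carrying complex pairs).
B = rng.normal(size=(4, 4))
T, Z = schur(B, output='real')
assert np.allclose(Z @ T @ Z.T, B)

# Generalized (QZ) Schur form: A2 = Q S Z*, B2 = Q T2 Z*; the generalized
# eigenvalues of A2 x = lambda B2 x are the ratios S_ii / T2_ii.
A2, B2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
S, T2, Qg, Zg = qz(A2, B2, output='complex')
assert np.allclose(Qg @ S @ Zg.conj().T, A2)
assert np.allclose(Qg @ T2 @ Zg.conj().T, B2)
print("generalized eigenvalues:", np.diag(S) / np.diag(T2))
```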
https://en.wikipedia.org/wiki/Schur_decomposition
In mathematics , Schur polynomials , named after Issai Schur , are certain symmetric polynomials in n variables, indexed by partitions , that generalize the elementary symmetric polynomials and the complete homogeneous symmetric polynomials . In representation theory they are the characters of polynomial irreducible representations of the general linear groups . The Schur polynomials form a linear basis for the space of all symmetric polynomials. Any product of Schur polynomials can be written as a linear combination of Schur polynomials with non-negative integral coefficients; the values of these coefficients is given combinatorially by the Littlewood–Richardson rule . More generally, skew Schur polynomials are associated with pairs of partitions and have similar properties to Schur polynomials. Schur polynomials are indexed by integer partitions . Given a partition λ = ( λ 1 , λ 2 , ..., λ n ) , where λ 1 ≥ λ 2 ≥ ... ≥ λ n , and each λ j is a non-negative integer, the functions a ( λ 1 + n − 1 , λ 2 + n − 2 , … , λ n ) ( x 1 , x 2 , … , x n ) = det [ x 1 λ 1 + n − 1 x 2 λ 1 + n − 1 … x n λ 1 + n − 1 x 1 λ 2 + n − 2 x 2 λ 2 + n − 2 … x n λ 2 + n − 2 ⋮ ⋮ ⋱ ⋮ x 1 λ n x 2 λ n … x n λ n ] {\displaystyle a_{(\lambda _{1}+n-1,\lambda _{2}+n-2,\dots ,\lambda _{n})}(x_{1},x_{2},\dots ,x_{n})=\det \left[{\begin{matrix}x_{1}^{\lambda _{1}+n-1}&x_{2}^{\lambda _{1}+n-1}&\dots &x_{n}^{\lambda _{1}+n-1}\\x_{1}^{\lambda _{2}+n-2}&x_{2}^{\lambda _{2}+n-2}&\dots &x_{n}^{\lambda _{2}+n-2}\\\vdots &\vdots &\ddots &\vdots \\x_{1}^{\lambda _{n}}&x_{2}^{\lambda _{n}}&\dots &x_{n}^{\lambda _{n}}\end{matrix}}\right]} are alternating polynomials by properties of the determinant . A polynomial is alternating if it changes sign under any transposition of the variables. Since they are alternating, they are all divisible by the Vandermonde determinant a ( n − 1 , n − 2 , … , 0 ) ( x 1 , x 2 , … , x n ) = det [ x 1 n − 1 x 2 n − 1 … x n n − 1 x 1 n − 2 x 2 n − 2 … x n n − 2 ⋮ ⋮ ⋱ ⋮ 1 1 … 1 ] = ∏ 1 ≤ j < k ≤ n ( x j − x k ) . {\displaystyle a_{(n-1,n-2,\dots ,0)}(x_{1},x_{2},\dots ,x_{n})=\det \left[{\begin{matrix}x_{1}^{n-1}&x_{2}^{n-1}&\dots &x_{n}^{n-1}\\x_{1}^{n-2}&x_{2}^{n-2}&\dots &x_{n}^{n-2}\\\vdots &\vdots &\ddots &\vdots \\1&1&\dots &1\end{matrix}}\right]=\prod _{1\leq j<k\leq n}(x_{j}-x_{k}).} The Schur polynomials are defined as the ratio s λ ( x 1 , x 2 , … , x n ) = a ( λ 1 + n − 1 , λ 2 + n − 2 , … , λ n + 0 ) ( x 1 , x 2 , … , x n ) a ( n − 1 , n − 2 , … , 0 ) ( x 1 , x 2 , … , x n ) . {\displaystyle s_{\lambda }(x_{1},x_{2},\dots ,x_{n})={\frac {a_{(\lambda _{1}+n-1,\lambda _{2}+n-2,\dots ,\lambda _{n}+0)}(x_{1},x_{2},\dots ,x_{n})}{a_{(n-1,n-2,\dots ,0)}(x_{1},x_{2},\dots ,x_{n})}}.} This is known as the bialternant formula of Jacobi . It is a special case of the Weyl character formula . This is a symmetric function because the numerator and denominator are both alternating, and a polynomial since all alternating polynomials are divisible by the Vandermonde determinant. The degree d Schur polynomials in n variables are a linear basis for the space of homogeneous degree d symmetric polynomials in n variables. For a partition λ = ( λ 1 , λ 2 , ..., λ r ) with r ≤ n {\displaystyle r\leq n} , the Schur polynomial is a sum of monomials, where the summation is over all semistandard Young tableaux T of shape λ using the numbers 1, 2, ..., n . The exponents t 1 , ..., t n give the weight of T , in other words each t i counts the occurrences of the number i in T . 
This can be shown to be equivalent to the definition from the first Giambelli formula using the Lindström–Gessel–Viennot lemma (as outlined on that page). Schur polynomials can be expressed as linear combinations of monomial symmetric functions m μ with non-negative integer coefficients K λμ called Kostka numbers , The Kostka numbers K λμ are given by the number of semi-standard Young tableaux of shape λ and weight μ . The first Jacobi−Trudi formula expresses the Schur polynomial as a determinant in terms of the complete homogeneous symmetric polynomials , where h i := s ( i ) . [ 1 ] The second Jacobi-Trudi formula expresses the Schur polynomial as a determinant in terms of the elementary symmetric polynomials , where e i := s (1 i ) and λ ′ = ( λ 1 ′ , … , λ l ′ ) {\displaystyle \lambda '=(\lambda '_{1},\ldots ,\lambda '_{l})} is the conjugate partition to λ . [ 2 ] In both identities, functions with negative subscripts are defined to be zero. Another determinantal identity is Giambelli's formula , which expresses the Schur function for an arbitrary partition in terms of those for the hook partitions contained within the Young diagram. In Frobenius' notation, the partition is denoted where, for each diagonal element in position ii , a i denotes the number of boxes to the right in the same row and b i denotes the number of boxes beneath it in the same column (the arm and leg lengths, respectively). The Giambelli identity expresses the Schur function corresponding to this partition as the determinant of those for hook partitions. The Cauchy identity for Schur functions (now in infinitely many variables), and its dual state that and where the sum is taken over all partitions λ , and h λ ( x ) {\displaystyle h_{\lambda }(x)} , e λ ( x ) {\displaystyle e_{\lambda }(x)} denote the complete symmetric functions and elementary symmetric functions , respectively. If the sum is taken over products of Schur polynomials in n {\displaystyle n} variables ( x 1 , … , x n ) {\displaystyle (x_{1},\dots ,x_{n})} , the sum includes only partitions of length ℓ ( λ ) ≤ n {\displaystyle \ell (\lambda )\leq n} since otherwise the Schur polynomials vanish. There are many generalizations of these identities to other families of symmetric functions. For example, Macdonald polynomials, Schubert polynomials and Grothendieck polynomials admit Cauchy-like identities. The Schur polynomial can also be computed via a specialization of a formula for Hall–Littlewood polynomials , where S n λ {\displaystyle S_{n}^{\lambda }} is the subgroup of permutations such that λ w ( i ) = λ i {\displaystyle \lambda _{w(i)}=\lambda _{i}} for all i , and w acts on variables by permuting indices. The Murnaghan–Nakayama rule expresses a product of a power-sum symmetric function with a Schur polynomial, in terms of Schur polynomials: where the sum is over all partitions μ such that μ / λ is a rim-hook of size r and ht ( μ / λ ) is the number of rows in the diagram μ / λ . 
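For small partitions the formulas above can be checked against each other symbolically. The SymPy sketch below (helper names are ours) computes s_λ both by the first Jacobi–Trudi determinant and by the bialternant ratio, for λ = (2, 1) in three variables:

```python
import sympy as sp
from itertools import combinations_with_replacement

x = sp.symbols('x1:4')  # x1, x2, x3
n = len(x)

def h(k):
    """Complete homogeneous symmetric polynomial h_k in x1..xn (h_k = 0 for k < 0)."""
    if k < 0:
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*[x[i] for i in c])
                    for c in combinations_with_replacement(range(n), k)])

def schur_jacobi_trudi(lam):
    """s_lambda = det( h_{lambda_i - i + j} ), 1 <= i, j <= len(lambda)."""
    l = len(lam)
    return sp.expand(sp.Matrix(l, l, lambda i, j: h(lam[i] - i + j)).det())

def schur_bialternant(lam):
    """s_lambda = a_{lambda + delta} / a_delta (ratio of alternants)."""
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: x[j] ** (lam[i] + n - 1 - i)).det()
    den = sp.Matrix(n, n, lambda i, j: x[j] ** (n - 1 - i)).det()
    return sp.expand(sp.cancel(num / den))

lam = (2, 1)
assert sp.expand(schur_jacobi_trudi(lam) - schur_bialternant(lam)) == 0
print(schur_jacobi_trudi(lam))
```

Both routes give x1²x2 + x1²x3 + x1x2² + 2x1x2x3 + x1x3² + x2²x3 + x2x3², i.e. m_(2,1) + 2 m_(1,1,1).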
The Littlewood–Richardson coefficients depend on three partitions , say λ , μ , ν {\displaystyle \lambda ,\mu ,\nu } , of which λ {\displaystyle \lambda } and μ {\displaystyle \mu } describe the Schur functions being multiplied, and ν {\displaystyle \nu } gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} such that The Littlewood–Richardson rule states that c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} is equal to the number of Littlewood–Richardson tableaux of skew shape ν / λ {\displaystyle \nu /\lambda } and of weight μ {\displaystyle \mu } . Pieri's formula is a special case of the Littlewood-Richardson rule, which expresses the product h r s λ {\displaystyle h_{r}s_{\lambda }} in terms of Schur polynomials. The dual version expresses e r s λ {\displaystyle e_{r}s_{\lambda }} in terms of Schur polynomials. Evaluating the Schur polynomial s λ in (1, 1, ..., 1) gives the number of semi-standard Young tableaux of shape λ with entries in 1, 2, ..., n . One can show, by using the Weyl character formula for example, that s λ ( 1 , 1 , … , 1 ) = ∏ 1 ≤ i < j ≤ n λ i − λ j + j − i j − i . {\displaystyle s_{\lambda }(1,1,\dots ,1)=\prod _{1\leq i<j\leq n}{\frac {\lambda _{i}-\lambda _{j}+j-i}{j-i}}.} In this formula, λ , the tuple indicating the width of each row of the Young diagram, is implicitly extended with zeros until it has length n . The sum of the elements λ i is d . See also the Hook length formula which computes the same quantity for fixed λ . The following extended example should help clarify these ideas. Consider the case n = 3, d = 4. Using Ferrers diagrams or some other method, we find that there are just four partitions of 4 into at most three parts. We have and so on, where Δ {\displaystyle \Delta } is the Vandermonde determinant a ( 2 , 1 , 0 ) ( x 1 , x 2 , x 3 ) {\displaystyle a_{(2,1,0)}(x_{1},x_{2},x_{3})} . Summarizing: Every homogeneous degree-four symmetric polynomial in three variables can be expressed as a unique linear combination of these four Schur polynomials, and this combination can again be found using a Gröbner basis for an appropriate elimination order. For example, is obviously a symmetric polynomial which is homogeneous of degree four, and we have The Schur polynomials occur in the representation theory of the symmetric groups , general linear groups , and unitary groups . The Weyl character formula implies that the Schur polynomials are the characters of finite-dimensional irreducible representations of the general linear groups, and helps to generalize Schur's work to other compact and semisimple Lie groups . Several expressions arise for this relation, one of the most important being the expansion of the Schur functions s λ in terms of the symmetric power functions p k = ∑ i x i k {\displaystyle p_{k}=\sum _{i}x_{i}^{k}} . If we write χ λ ρ for the character of the representation of the symmetric group indexed by the partition λ evaluated at elements of cycle type indexed by the partition ρ, then where ρ = (1 r 1 , 2 r 2 , 3 r 3 , ...) means that the partition ρ has r k parts of length k . A proof of this can be found in R. Stanley's Enumerative Combinatorics Volume 2, Corollary 7.17.5. The integers χ λ ρ can be computed using the Murnaghan–Nakayama rule . Due to the connection with representation theory, a symmetric function which expands positively in Schur functions are of particular interest. 
For example, the skew Schur functions expand positively in the ordinary Schur functions, and the coefficients are Littlewood–Richardson coefficients. A special case of this is the expansion of the complete homogeneous symmetric functions h λ in Schur functions. This decomposition reflects how a permutation module is decomposed into irreducible representations. There are several approaches to prove Schur positivity of a given symmetric function F . If F is described in a combinatorial manner, a direct approach is to produce a bijection with semi-standard Young tableaux. The Edelman–Greene correspondence and the Robinson–Schensted–Knuth correspondence are examples of such bijections. A bijection with more structure is a proof using so called crystals . This method can be described as defining a certain graph structure described with local rules on the underlying combinatorial objects. A similar idea is the notion of dual equivalence. This approach also uses a graph structure, but on the objects representing the expansion in the fundamental quasisymmetric basis. It is closely related to the RSK-correspondence. Skew Schur functions s λ/μ depend on two partitions λ and μ, and can be defined by the property Here, the inner product is the Hall inner product, for which the Schur polynomials form an orthonormal basis. Similar to the ordinary Schur polynomials, there are numerous ways to compute these. The corresponding Jacobi-Trudi identities are There is also a combinatorial interpretation of the skew Schur polynomials, namely it is a sum over all semi-standard Young tableaux (or column-strict tableaux) of the skew shape λ / μ {\displaystyle \lambda /\mu } . The skew Schur polynomials expands positively in Schur polynomials. A rule for the coefficients is given by the Littlewood-Richardson rule . The double Schur polynomials [ 3 ] can be seen as a generalization of the shifted Schur polynomials. These polynomials are also closely related to the factorial Schur polynomials. Given a partition λ , and a sequence a 1 , a 2 ,... one can define the double Schur polynomial s λ ( x || a ) as s λ ( x | | a ) = ∑ T ∏ α ∈ λ ( x T ( α ) − a T ( α ) − c ( α ) ) {\displaystyle s_{\lambda }(x||a)=\sum _{T}\prod _{\alpha \in \lambda }(x_{T(\alpha )}-a_{T(\alpha )-c(\alpha )})} where the sum is taken over all reverse semi-standard Young tableaux T of shape λ , and integer entries in 1, ..., n . Here T (α) denotes the value in the box α in T and c(α) is the content of the box. A combinatorial rule for the Littlewood-Richardson coefficients (depending on the sequence a ) was given by A.I Molev. [ 3 ] In particular, this implies that the shifted Schur polynomials have non-negative Littlewood-Richardson coefficients. The shifted Schur polynomials s * λ ( y ) can be obtained from the double Schur polynomials by specializing a i = − i and y i = x i + i . The double Schur polynomials are special cases of the double Schubert polynomials . The factorial Schur polynomials may be defined as follows. Given a partition λ, and a doubly infinite sequence ..., a −1 , a 0 , a 1 , ... one can define the factorial Schur polynomial s λ ( x | a ) as s λ ( x | a ) = ∑ T ∏ α ∈ λ ( x T ( α ) − a T ( α ) + c ( α ) ) {\displaystyle s_{\lambda }(x|a)=\sum _{T}\prod _{\alpha \in \lambda }(x_{T(\alpha )}-a_{T(\alpha )+c(\alpha )})} where the sum is taken over all semi-standard Young tableaux T of shape λ, and integer entries in 1, ..., n . Here T (α) denotes the value in the box α in T and c(α) is the content of the box. 
There is also a determinant formula, s_λ(x | a) = det[ (x_j | a)^(λ_i + n − i) ]_{i,j=1}^{l(λ)} / ∏_{i<j} (x_i − x_j), where (y | a)^k = (y − a_1) ⋯ (y − a_k). It is clear that if we let a_i = 0 for all i, we recover the usual Schur polynomial s_λ. The double Schur polynomials and the factorial Schur polynomials in n variables are related via the identity s_λ(x || a) = s_λ(x | u) where a_(n−i+1) = u_i. There are numerous generalizations of Schur polynomials, such as Hall–Littlewood polynomials, shifted and factorial Schur polynomials, flagged Schur polynomials, and Schur P- and Q-functions.
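As a quick consistency check of the specialization formula s_λ(1, …, 1) stated earlier, one can count semistandard Young tableaux directly. A brute-force Python sketch (practical only for tiny shapes; the helper names are ours):

```python
from itertools import product
from math import prod

def count_ssyt(lam, n):
    """Count semistandard Young tableaux of shape lam, entries in 1..n
    (rows weakly increasing, columns strictly increasing)."""
    cells = sum(lam)
    count = 0
    for flat in product(range(1, n + 1), repeat=cells):
        T, pos = [], 0
        for row_len in lam:
            T.append(flat[pos:pos + row_len]); pos += row_len
        count += all(
            (c == 0 or T[r][c - 1] <= T[r][c]) and (r == 0 or T[r - 1][c] < T[r][c])
            for r in range(len(lam)) for c in range(lam[r])
        )
    return count

def weyl_dim(lam, n):
    """s_lambda(1,...,1) = prod_{i<j} (lam_i - lam_j + j - i) / (j - i)."""
    lam = list(lam) + [0] * (n - len(lam))
    num = prod(lam[i] - lam[j] + j - i for i in range(n) for j in range(i + 1, n))
    den = prod(j - i for i in range(n) for j in range(i + 1, n))
    return num // den

for lam, n in [((2, 1), 3), ((2, 2), 3), ((3, 1), 4)]:
    assert count_ssyt(lam, n) == weyl_dim(lam, n)
print("tableau counts match the specialization formula")
```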
https://en.wikipedia.org/wiki/Schur_polynomial
In mathematical analysis, the Schur test, named after German mathematician Issai Schur, is a bound on the L² → L² operator norm of an integral operator in terms of its Schwartz kernel (see Schwartz kernel theorem).

Here is one version. [ 1 ] Let X, Y be two measurable spaces (such as ℝⁿ). Let T be an integral operator with the non-negative Schwartz kernel K(x, y), x ∈ X, y ∈ Y: T f(x) = ∫_Y K(x, y) f(y) dy. If there exist real functions p(x) > 0 and q(y) > 0 and numbers α, β > 0 such that (1) ∫_Y K(x, y) q(y) dy ≤ α p(x) for almost all x, and (2) ∫_X p(x) K(x, y) dx ≤ β q(y) for almost all y, then T extends to a continuous operator T : L² → L² with the operator norm ‖T‖_{L² → L²} ≤ √(αβ). Such functions p(x), q(y) are called the Schur test functions. In the original version, T is a matrix and α = β = 1. [ 2 ]

A common usage of the Schur test is to take p(x) = q(y) = 1. Then we get: ‖T‖²_{L² → L²} ≤ (sup_{x ∈ X} ∫_Y |K(x, y)| dy) (sup_{y ∈ Y} ∫_X |K(x, y)| dx). This inequality is valid no matter whether the Schwartz kernel K(x, y) is non-negative or not.

A similar statement about L^p → L^q operator norms is known as Young's inequality for integral operators: [ 3 ] if sup_x (∫_Y |K(x, y)|^r dy)^(1/r) ≤ C and sup_y (∫_X |K(x, y)|^r dx)^(1/r) ≤ C, where r satisfies 1/r = 1 − (1/p − 1/q), for some 1 ≤ p ≤ q ≤ ∞, then the operator T f(x) = ∫_Y K(x, y) f(y) dy extends to a continuous operator T : L^p(Y) → L^q(X), with ‖T‖_{L^p → L^q} ≤ C.

To prove the L² bound, use the Cauchy–Schwarz inequality and inequality (1): |T f(x)|² ≤ (∫_Y K(x, y) q(y) dy) (∫_Y K(x, y) q(y)⁻¹ |f(y)|² dy) ≤ α p(x) ∫_Y K(x, y) q(y)⁻¹ |f(y)|² dy. Integrating the above relation in x, using Fubini's theorem, and applying inequality (2), we get: ‖T f‖²_{L²} ≤ α ∫_Y (∫_X p(x) K(x, y) dx) q(y)⁻¹ |f(y)|² dy ≤ αβ ∫_Y |f(y)|² dy. It follows that ‖T f‖_{L²} ≤ √(αβ) ‖f‖_{L²} for any f ∈ L²(Y).
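The discrete analogue of the p = q = 1 case says that a matrix with non-negative entries has spectral norm at most the geometric mean of its largest row sum and largest column sum. The NumPy sketch below (the kernel and discretization are only illustrative) checks this for a discretized integral operator on [0, 1]:

```python
import numpy as np

N = 400
h = 1.0 / N
t = (np.arange(N) + 0.5) * h
K = 1.0 / (1.0 + np.abs(t[:, None] - t[None, :]))  # sample non-negative kernel

T = K * h                          # quadrature discretization of the operator
op_norm = np.linalg.norm(T, 2)     # largest singular value

# Schur test with p = q = 1: ||T||^2 <= (sup_x int |K| dy)(sup_y int |K| dx)
alpha = T.sum(axis=1).max()        # max row sum
beta = T.sum(axis=0).max()         # max column sum
print(op_norm, np.sqrt(alpha * beta))
assert op_norm <= np.sqrt(alpha * beta) + 1e-9
```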
https://en.wikipedia.org/wiki/Schur_test
In mathematics , particularly linear algebra , the Schur–Horn theorem , named after Issai Schur and Alfred Horn , characterizes the diagonal of a Hermitian matrix with given eigenvalues . It has inspired investigations and substantial generalizations in the setting of symplectic geometry . A few important generalizations are Kostant's convexity theorem , Atiyah–Guillemin–Sternberg convexity theorem and Kirwan convexity theorem . Schur–Horn theorem — Let d 1 , … , d N {\displaystyle d_{1},\dots ,d_{N}} and λ 1 , … , λ N {\displaystyle \lambda _{1},\dots ,\lambda _{N}} be two sequences of real numbers arranged in a non-increasing order. There is a Hermitian matrix with diagonal values d 1 , … , d N {\displaystyle d_{1},\dots ,d_{N}} (in this order, starting with d 1 {\displaystyle d_{1}} at the top-left) and eigenvalues λ 1 , … , λ N {\displaystyle \lambda _{1},\dots ,\lambda _{N}} if and only if ∑ i = 1 n d i ≤ ∑ i = 1 n λ i n = 1 , … , N − 1 {\displaystyle \sum _{i=1}^{n}d_{i}\leq \sum _{i=1}^{n}\lambda _{i}\qquad n=1,\dots ,N-1} and ∑ i = 1 N d i = ∑ i = 1 N λ i . {\displaystyle \sum _{i=1}^{N}d_{i}=\sum _{i=1}^{N}\lambda _{i}.} The condition on the two sequences is equivalent to the majorization condition: d → ⪯ λ → {\displaystyle {\vec {d}}\preceq {\vec {\lambda }}} . The inequalities above may alternatively be written: d 1 ≤ λ 1 d 2 + d 1 ≤ λ 1 + λ 2 ⋮ ≤ ⋮ d N − 1 + ⋯ + d 2 + d 1 ≤ λ 1 + λ 2 + ⋯ + λ N − 1 d N + d N − 1 + ⋯ + d 2 + d 1 = λ 1 + λ 2 + ⋯ + λ N − 1 + λ N . {\displaystyle {\begin{alignedat}{7}d_{1}&\;\leq \;&&\lambda _{1}\\[0.3ex]d_{2}+d_{1}&\;\leq &&\lambda _{1}+\lambda _{2}\\[0.3ex]\vdots &\;\leq &&\vdots \\[0.3ex]d_{N-1}+\cdots +d_{2}+d_{1}&\;\leq &&\lambda _{1}+\lambda _{2}+\cdots +\lambda _{N-1}\\[0.3ex]d_{N}+d_{N-1}+\cdots +d_{2}+d_{1}&\;=&&\lambda _{1}+\lambda _{2}+\cdots +\lambda _{N-1}+\lambda _{N}.\\[0.3ex]\end{alignedat}}} The Schur–Horn theorem may thus be restated more succinctly and in plain English: Although this theorem requires that d 1 ≥ ⋯ ≥ d N {\displaystyle d_{1}\geq \cdots \geq d_{N}} and λ 1 ≥ ⋯ ≥ λ N {\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}} be non-increasing, it is possible to reformulate this theorem without these assumptions. We start with the assumption λ 1 ≥ ⋯ ≥ λ N . {\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}.} The left hand side of the theorem's characterization (that is, "there exists a Hermitian matrix with these eigenvalues and diagonal elements") depends on the order of the desired diagonal elements d 1 , … , d N {\displaystyle d_{1},\dots ,d_{N}} (because changing their order would change the Hermitian matrix whose existence is in question) but it does not depend on the order of the desired eigenvalues λ 1 , … , λ N . {\displaystyle \lambda _{1},\dots ,\lambda _{N}.} On the right hand right hand side of the characterization, only the values of λ 1 + ⋯ + λ n {\displaystyle \lambda _{1}+\cdots +\lambda _{n}} depend on the assumption λ 1 ≥ ⋯ ≥ λ N . {\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}.} Notice that this assumption means that the expression λ 1 + ⋯ + λ n {\displaystyle \lambda _{1}+\cdots +\lambda _{n}} is just notation for the sum of the n {\displaystyle n} largest desired eigenvalues. 
Replacing the expression λ 1 + ⋯ + λ n {\displaystyle \lambda _{1}+\cdots +\lambda _{n}} with this written equivalent makes the assumption λ 1 ≥ ⋯ ≥ λ N {\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{N}} completely unnecessary: The permutation polytope generated by x ~ = ( x 1 , x 2 , … , x n ) ∈ R n {\displaystyle {\tilde {x}}=(x_{1},x_{2},\ldots ,x_{n})\in \mathbb {R} ^{n}} denoted by K x ~ {\displaystyle {\mathcal {K}}_{\tilde {x}}} is defined as the convex hull of the set { ( x π ( 1 ) , x π ( 2 ) , … , x π ( n ) ) ∈ R n : π ∈ S n } . {\displaystyle \{(x_{\pi (1)},x_{\pi (2)},\ldots ,x_{\pi (n)})\in \mathbb {R} ^{n}:\pi \in S_{n}\}.} Here S n {\displaystyle S_{n}} denotes the symmetric group on { 1 , 2 , … , n } . {\displaystyle \{1,2,\ldots ,n\}.} In other words, the permutation polytope generated by ( x 1 , … , x n ) {\displaystyle (x_{1},\dots ,x_{n})} is the convex hull of the set of all points in R n {\displaystyle \mathbb {R} ^{n}} that can be obtained by rearranging the coordinates of ( x 1 , … , x n ) . {\displaystyle (x_{1},\dots ,x_{n}).} The permutation polytope of ( 1 , 1 , 2 ) , {\displaystyle (1,1,2),} for instance, is the convex hull of the set { ( 1 , 1 , 2 ) , ( 1 , 2 , 1 ) , ( 2 , 1 , 1 ) } , {\displaystyle \{(1,1,2),(1,2,1),(2,1,1)\},} which in this case is the solid (filled) triangle whose vertices are the three points in this set. Notice, in particular, that rearranging the coordinates of ( x 1 , … , x n ) {\displaystyle (x_{1},\dots ,x_{n})} does not change the resulting permutation polytope; in other words, if a point y ~ {\displaystyle {\tilde {y}}} can be obtained from x ~ = ( x 1 , … , x n ) {\displaystyle {\tilde {x}}=(x_{1},\dots ,x_{n})} by rearranging its coordinates, then K y ~ = K x ~ . {\displaystyle {\mathcal {K}}_{\tilde {y}}={\mathcal {K}}_{\tilde {x}}.} The following lemma characterizes the permutation polytope of a vector in R n . {\displaystyle \mathbb {R} ^{n}.} Lemma [ 1 ] [ 2 ] — If x 1 ≥ ⋯ ≥ x n , {\displaystyle x_{1}\geq \cdots \geq x_{n},} and y 1 ≥ ⋯ ≥ y n , {\displaystyle y_{1}\geq \cdots \geq y_{n},} have the same sum x 1 + ⋯ + x n = y 1 + ⋯ + y n , {\displaystyle x_{1}+\cdots +x_{n}=y_{1}+\cdots +y_{n},} then the following statements are equivalent: In view of the equivalence of (i) and (ii) in the lemma mentioned above, one may reformulate the theorem in the following manner. Theorem. Let d 1 , … , d N {\displaystyle d_{1},\dots ,d_{N}} and λ 1 , … , λ N {\displaystyle \lambda _{1},\dots ,\lambda _{N}} be real numbers. There is a Hermitian matrix with diagonal entries d 1 , … , d N {\displaystyle d_{1},\dots ,d_{N}} and eigenvalues λ 1 , … , λ N {\displaystyle \lambda _{1},\dots ,\lambda _{N}} if and only if the vector ( d 1 , … , d n ) {\displaystyle (d_{1},\ldots ,d_{n})} is in the permutation polytope generated by ( λ 1 , … , λ n ) . {\displaystyle (\lambda _{1},\ldots ,\lambda _{n}).} Note that in this formulation, one does not need to impose any ordering on the entries of the vectors d 1 , … , d N {\displaystyle d_{1},\dots ,d_{N}} and λ 1 , … , λ N . {\displaystyle \lambda _{1},\dots ,\lambda _{N}.} Let A = ( a j k ) {\displaystyle A=(a_{jk})} be a n × n {\displaystyle n\times n} Hermitian matrix with eigenvalues { λ i } i = 1 n , {\displaystyle \{\lambda _{i}\}_{i=1}^{n},} counted with multiplicity. 
Denote the diagonal of A {\displaystyle A} by a ~ , {\displaystyle {\tilde {a}},} thought of as a vector in R n , {\displaystyle \mathbb {R} ^{n},} and the vector ( λ 1 , λ 2 , … , λ n ) {\displaystyle (\lambda _{1},\lambda _{2},\ldots ,\lambda _{n})} by λ ~ . {\displaystyle {\tilde {\lambda }}.} Let Λ {\displaystyle \Lambda } be the diagonal matrix having λ 1 , λ 2 , … , λ n {\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{n}} on its diagonal. ( ⇒ {\displaystyle \Rightarrow } ) A {\displaystyle A} may be written in the form U Λ U − 1 , {\displaystyle U\Lambda U^{-1},} where U {\displaystyle U} is a unitary matrix. Then a i i = ∑ j = 1 n λ j | u i j | 2 , i = 1 , 2 , … , n . {\displaystyle a_{ii}=\sum _{j=1}^{n}\lambda _{j}|u_{ij}|^{2},\;i=1,2,\ldots ,n.} Let S = ( s i j ) {\displaystyle S=(s_{ij})} be the matrix defined by s i j = | u i j | 2 . {\displaystyle s_{ij}=|u_{ij}|^{2}.} Since U {\displaystyle U} is a unitary matrix, S {\displaystyle S} is a doubly stochastic matrix and we have a ~ = S λ ~ . {\displaystyle {\tilde {a}}=S{\tilde {\lambda }}.} By the Birkhoff–von Neumann theorem , S {\displaystyle S} can be written as a convex combination of permutation matrices. Thus a ~ {\displaystyle {\tilde {a}}} is in the permutation polytope generated by λ ~ . {\displaystyle {\tilde {\lambda }}.} This proves Schur's theorem. ( ⇐ {\displaystyle \Leftarrow } ) If a ~ {\displaystyle {\tilde {a}}} occurs as the diagonal of a Hermitian matrix with eigenvalues { λ i } i = 1 n , {\displaystyle \{\lambda _{i}\}_{i=1}^{n},} then t a ~ + ( 1 − t ) τ ( a ~ ) {\displaystyle t{\tilde {a}}+(1-t)\tau ({\tilde {a}})} also occurs as the diagonal of some Hermitian matrix with the same set of eigenvalues, for any transposition τ {\displaystyle \tau } in S n . {\displaystyle S_{n}.} One may prove that in the following manner. Let ξ {\displaystyle \xi } be a complex number of modulus 1 {\displaystyle 1} such that ξ a j k ¯ = − ξ a j k {\displaystyle {\overline {\xi a_{jk}}}=-\xi a_{jk}} and U {\displaystyle U} be a unitary matrix with ξ t , t {\displaystyle \xi {\sqrt {t}},{\sqrt {t}}} in the j , j {\displaystyle j,j} and k , k {\displaystyle k,k} entries, respectively, − 1 − t , ξ 1 − t {\displaystyle -{\sqrt {1-t}},\xi {\sqrt {1-t}}} at the j , k {\displaystyle j,k} and k , j {\displaystyle k,j} entries, respectively, 1 {\displaystyle 1} at all diagonal entries other than j , j {\displaystyle j,j} and k , k , {\displaystyle k,k,} and 0 {\displaystyle 0} at all other entries. Then U A U − 1 {\displaystyle UAU^{-1}} has t a j j + ( 1 − t ) a k k {\displaystyle ta_{jj}+(1-t)a_{kk}} at the j , j {\displaystyle j,j} entry, ( 1 − t ) a j j + t a k k {\displaystyle (1-t)a_{jj}+ta_{kk}} at the k , k {\displaystyle k,k} entry, and a l l {\displaystyle a_{ll}} at the l , l {\displaystyle l,l} entry where l ≠ j , k . {\displaystyle l\neq j,k.} Let τ {\displaystyle \tau } be the transposition of { 1 , 2 , … , n } {\displaystyle \{1,2,\dots ,n\}} that interchanges j {\displaystyle j} and k . {\displaystyle k.} Then the diagonal of U A U − 1 {\displaystyle UAU^{-1}} is t a ~ + ( 1 − t ) τ ( a ~ ) . {\displaystyle t{\tilde {a}}+(1-t)\tau ({\tilde {a}}).} Λ {\displaystyle \Lambda } is a Hermitian matrix with eigenvalues { λ i } i = 1 n . 
{\displaystyle \{\lambda _{i}\}_{i=1}^{n}.} Using the equivalence of (i) and (iii) in the lemma mentioned above, we see that any vector in the permutation polytope generated by ( λ 1 , λ 2 , … , λ n ) , {\displaystyle (\lambda _{1},\lambda _{2},\ldots ,\lambda _{n}),} occurs as the diagonal of a Hermitian matrix with the prescribed eigenvalues. This proves Horn's theorem. The Schur–Horn theorem may be viewed as a corollary of the Atiyah–Guillemin–Sternberg convexity theorem in the following manner. Let U ( n ) {\displaystyle {\mathcal {U}}(n)} denote the group of n × n {\displaystyle n\times n} unitary matrices. Its Lie algebra, denoted by u ( n ) , {\displaystyle {\mathfrak {u}}(n),} is the set of skew-Hermitian matrices. One may identify the dual space u ( n ) ∗ {\displaystyle {\mathfrak {u}}(n)^{*}} with the set of Hermitian matrices H ( n ) {\displaystyle {\mathcal {H}}(n)} via the linear isomorphism Ψ : H ( n ) → u ( n ) ∗ {\displaystyle \Psi :{\mathcal {H}}(n)\rightarrow {\mathfrak {u}}(n)^{*}} defined by Ψ ( A ) ( B ) = t r ( i A B ) {\displaystyle \Psi (A)(B)=\mathrm {tr} (iAB)} for A ∈ H ( n ) , B ∈ u ( n ) . {\displaystyle A\in {\mathcal {H}}(n),B\in {\mathfrak {u}}(n).} The unitary group U ( n ) {\displaystyle {\mathcal {U}}(n)} acts on H ( n ) {\displaystyle {\mathcal {H}}(n)} by conjugation and acts on u ( n ) ∗ {\displaystyle {\mathfrak {u}}(n)^{*}} by the coadjoint action . Under these actions, Ψ {\displaystyle \Psi } is an U ( n ) {\displaystyle {\mathcal {U}}(n)} -equivariant map i.e. for every U ∈ U ( n ) {\displaystyle U\in {\mathcal {U}}(n)} the following diagram commutes, Let λ ~ = ( λ 1 , λ 2 , … , λ n ) ∈ R n {\displaystyle {\tilde {\lambda }}=(\lambda _{1},\lambda _{2},\ldots ,\lambda _{n})\in \mathbb {R} ^{n}} and Λ ∈ H ( n ) {\displaystyle \Lambda \in {\mathcal {H}}(n)} denote the diagonal matrix with entries given by λ ~ . {\displaystyle {\tilde {\lambda }}.} Let O λ ~ {\displaystyle {\mathcal {O}}_{\tilde {\lambda }}} denote the orbit of Λ {\displaystyle \Lambda } under the U ( n ) {\displaystyle {\mathcal {U}}(n)} -action i.e. conjugation. Under the U ( n ) {\displaystyle {\mathcal {U}}(n)} -equivariant isomorphism Ψ , {\displaystyle \Psi ,} the symplectic structure on the corresponding coadjoint orbit may be brought onto O λ ~ . {\displaystyle {\mathcal {O}}_{\tilde {\lambda }}.} Thus O λ ~ {\displaystyle {\mathcal {O}}_{\tilde {\lambda }}} is a Hamiltonian U ( n ) {\displaystyle {\mathcal {U}}(n)} -manifold. Let T {\displaystyle \mathbb {T} } denote the Cartan subgroup of U ( n ) {\displaystyle {\mathcal {U}}(n)} which consists of diagonal complex matrices with diagonal entries of modulus 1. {\displaystyle 1.} The Lie algebra t {\displaystyle {\mathfrak {t}}} of T {\displaystyle \mathbb {T} } consists of diagonal skew-Hermitian matrices and the dual space t ∗ {\displaystyle {\mathfrak {t}}^{*}} consists of diagonal Hermitian matrices, under the isomorphism Ψ . {\displaystyle \Psi .} In other words, t {\displaystyle {\mathfrak {t}}} consists of diagonal matrices with purely imaginary entries and t ∗ {\displaystyle {\mathfrak {t}}^{*}} consists of diagonal matrices with real entries. The inclusion map t ↪ u ( n ) {\displaystyle {\mathfrak {t}}\hookrightarrow {\mathfrak {u}}(n)} induces a map Φ : H ( n ) ≅ u ( n ) ∗ → t ∗ , {\displaystyle \Phi :{\mathcal {H}}(n)\cong {\mathfrak {u}}(n)^{*}\rightarrow {\mathfrak {t}}^{*},} which projects a matrix A {\displaystyle A} to the diagonal matrix with the same diagonal entries as A . 
{\displaystyle A.} The set O λ ~ {\displaystyle {\mathcal {O}}_{\tilde {\lambda }}} is a Hamiltonian T {\displaystyle \mathbb {T} } -manifold, and the restriction of Φ {\displaystyle \Phi } to this set is a moment map for this action. By the Atiyah–Guillemin–Sternberg theorem, Φ ( O λ ~ ) {\displaystyle \Phi ({\mathcal {O}}_{\tilde {\lambda }})} is a convex polytope. A matrix A ∈ H ( n ) {\displaystyle A\in {\mathcal {H}}(n)} is fixed under conjugation by every element of T {\displaystyle \mathbb {T} } if and only if A {\displaystyle A} is diagonal. The only diagonal matrices in O λ ~ {\displaystyle {\mathcal {O}}_{\tilde {\lambda }}} are the ones with diagonal entries λ 1 , λ 2 , … , λ n {\displaystyle \lambda _{1},\lambda _{2},\ldots ,\lambda _{n}} in some order. Thus, these matrices generate the convex polytope Φ ( O λ ~ ) . {\displaystyle \Phi ({\mathcal {O}}_{\tilde {\lambda }}).} This is exactly the statement of the Schur–Horn theorem.
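Both directions of the theorem are easy to observe numerically: for a random Hermitian matrix the diagonal is majorized by the spectrum, and the doubly stochastic matrix S = (|u_ij|²) built from a unitary diagonalization carries the eigenvalues to the diagonal, exactly as in the proof of Schur's half above. An illustrative NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2                     # random Hermitian matrix

d = np.sort(np.real(np.diag(H)))[::-1]       # diagonal, non-increasing
lam = np.sort(np.linalg.eigvalsh(H))[::-1]   # eigenvalues, non-increasing

# majorization: partial sums of d dominated by those of lam, equal total sums
assert np.all(np.cumsum(d)[:-1] <= np.cumsum(lam)[:-1] + 1e-9)
assert np.isclose(d.sum(), lam.sum())        # both equal the trace

# diag(H) = S @ eigenvalues with S = |U|^2 doubly stochastic
lam_u, Umat = np.linalg.eigh(H)
S = np.abs(Umat) ** 2
assert np.allclose(S @ lam_u, np.real(np.diag(H)))
assert np.allclose(S.sum(axis=0), 1) and np.allclose(S.sum(axis=1), 1)
print("Schur majorization verified")
```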
https://en.wikipedia.org/wiki/Schur–Horn_theorem
Schur–Weyl duality is a mathematical theorem in representation theory that relates irreducible finite-dimensional representations of the general linear and symmetric groups. Schur–Weyl duality forms an archetypical situation in representation theory involving two kinds of symmetry that determine each other. It is named after two pioneers of representation theory of Lie groups , Issai Schur , who discovered the phenomenon, and Hermann Weyl , who popularized it in his books on quantum mechanics and classical groups as a way of classifying representations of unitary and general linear groups. Schur–Weyl duality can be proven using the double centralizer theorem . [ 1 ] Consider the tensor space The symmetric group S k on k letters acts on this space (on the left) by permuting the factors, The general linear group GL n of invertible n × n matrices acts on it by the simultaneous matrix multiplication , These two actions commute , and in its concrete form, the Schur–Weyl duality asserts that under the joint action of the groups S k and GL n , the tensor space decomposes into a direct sum of tensor products of irreducible modules (for these two groups) that actually determine each other, The summands are indexed by the Young diagrams D with k boxes and at most n rows, and representations π k D {\displaystyle \pi _{k}^{D}} of S k with different D are mutually non-isomorphic, and the same is true for representations ρ n D {\displaystyle \rho _{n}^{D}} of GL n . The abstract form of the Schur–Weyl duality asserts that two algebras of operators on the tensor space generated by the actions of GL n and S k are the full mutual centralizers in the algebra of the endomorphisms E n d C ( C n ⊗ C n ⊗ ⋯ ⊗ C n ) . {\displaystyle \mathrm {End} _{\mathbb {C} }(\mathbb {C} ^{n}\otimes \mathbb {C} ^{n}\otimes \cdots \otimes \mathbb {C} ^{n}).} Suppose that k = 2 and n is greater than one. Then the Schur–Weyl duality is the statement that the space of two-tensors decomposes into symmetric and antisymmetric parts, each of which is an irreducible module for GL n : The symmetric group S 2 consists of two elements and has two irreducible representations, the trivial representation and the sign representation . The trivial representation of S 2 gives rise to the symmetric tensors, which are invariant (i.e. do not change) under the permutation of the factors, and the sign representation corresponds to the skew-symmetric tensors, which flip the sign. First consider the following setup: The proof uses two algebraic lemmas. Lemma 1 — [ 2 ] If W {\displaystyle W} is a simple left A -module, then U ⊗ A W {\displaystyle U\otimes _{A}W} is a simple left B -module. Proof : Since U is semisimple by Maschke's theorem , there is a decomposition U = ⨁ i U i ⊕ m i {\displaystyle U=\bigoplus _{i}U_{i}^{\oplus m_{i}}} into simple A -modules. Then U ⊗ A W = ⨁ i ( U i ⊗ A W ) ⊕ m i {\displaystyle U\otimes _{A}W=\bigoplus _{i}(U_{i}\otimes _{A}W)^{\oplus m_{i}}} . Since A is the left regular representation of G , each simple G -module appears in A and we have that U i ⊗ A W = C {\displaystyle U_{i}\otimes _{A}W=\mathbb {C} } (respectively zero) if and only if U i , W {\displaystyle U_{i},W} correspond to the same simple factor of A (respectively otherwise). Hence, we have: U ⊗ A W = ( U i 0 ⊗ A W ) ⊕ m i 0 = C ⊕ m i 0 . 
{\displaystyle U\otimes _{A}W=(U_{i_{0}}\otimes _{A}W)^{\oplus m_{i_{0}}}=\mathbb {C} ^{\oplus m_{i_{0}}}.} Now, it is easy to see that each nonzero vector in C ⊕ m i 0 {\displaystyle \mathbb {C} ^{\oplus m_{i_{0}}}} generates the whole space as a B -module and so U ⊗ A W {\displaystyle U\otimes _{A}W} is simple. (In general, a nonzero module is simple if and only if each of its nonzero cyclic submodule coincides with the module.) ◻ {\displaystyle \square } Lemma 2 — [ 3 ] When U = V ⊗ d {\displaystyle U=V^{\otimes d}} and G is the symmetric group S d {\displaystyle {\mathfrak {S}}_{d}} , a subspace of U {\displaystyle U} is a B -submodule if and only if it is invariant under GL ⁡ ( V ) {\displaystyle \operatorname {GL} (V)} ; in other words, a B -submodule is the same as a GL ⁡ ( V ) {\displaystyle \operatorname {GL} (V)} -submodule. Proof : Let W = End ⁡ ( V ) {\displaystyle W=\operatorname {End} (V)} . The W ↪ End ⁡ ( U ) , w ↦ w d = d ! w ⊗ ⋯ ⊗ w {\displaystyle W\hookrightarrow \operatorname {End} (U),w\mapsto w^{d}=d!w\otimes \cdots \otimes w} . Also, the image of W spans the subspace of symmetric tensors Sym d ⁡ ( W ) {\displaystyle \operatorname {Sym} ^{d}(W)} . Since B = Sym d ⁡ ( W ) {\displaystyle B=\operatorname {Sym} ^{d}(W)} , the image of W {\displaystyle W} spans B {\displaystyle B} . Since GL ⁡ ( V ) {\displaystyle \operatorname {GL} (V)} is dense in W either in the Euclidean topology or in the Zariski topology, the assertion follows. ◻ {\displaystyle \square } The Schur–Weyl duality now follows. We take G = S d {\displaystyle G={\mathfrak {S}}_{d}} to be the symmetric group and U = V ⊗ d {\displaystyle U=V^{\otimes d}} the d -th tensor power of a finite-dimensional complex vector space V . Let V λ {\displaystyle V^{\lambda }} denote the irreducible S d {\displaystyle {\mathfrak {S}}_{d}} -representation corresponding to a partition λ {\displaystyle \lambda } and m λ = dim ⁡ V λ {\displaystyle m_{\lambda }=\dim V^{\lambda }} . Then by Lemma 1 is irreducible as a GL ⁡ ( V ) {\displaystyle \operatorname {GL} (V)} -module. Moreover, when A = ⨁ λ ( V λ ) ⊕ m λ {\displaystyle A=\bigoplus _{\lambda }(V^{\lambda })^{\oplus m_{\lambda }}} is the left semisimple decomposition, we have: [ 4 ] which is the semisimple decomposition as a GL ⁡ ( V ) {\displaystyle \operatorname {GL} (V)} -module. The Brauer algebra plays the role of the symmetric group in the generalization of the Schur-Weyl duality to the orthogonal and symplectic groups. More generally, the partition algebra and its subalgebras give rise to a number of generalizations of the Schur-Weyl duality.
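The double-centralizer statement can be verified by brute force in small cases: the commutant of the place-permutation action of S_k on (ℂⁿ)^⊗k should have dimension equal to dim Sym^k(End ℂⁿ) = C(n² + k − 1, k), the dimension of the span of the operators g ⊗ ⋯ ⊗ g. An illustrative NumPy sketch (helper names are ours; the vectorization convention does not affect the computed dimension):

```python
import numpy as np
from math import comb
from itertools import product

def transposition_operator(n, k, i):
    """Operator on (C^n)^{tensor k} swapping tensor factors i and i+1."""
    N = n ** k
    P = np.zeros((N, N))
    for idx in product(range(n), repeat=k):
        jdx = list(idx); jdx[i], jdx[i + 1] = jdx[i + 1], jdx[i]
        src = int(np.ravel_multi_index(idx, (n,) * k))
        dst = int(np.ravel_multi_index(tuple(jdx), (n,) * k))
        P[dst, src] = 1.0
    return P

def commutant_dim(n, k):
    """dim { X : X P = P X for all place permutations P }.  Adjacent
    transpositions generate S_k, so their constraints suffice."""
    N = n ** k
    I = np.eye(N)
    rows = [np.kron(I, P) - np.kron(P.T, I)   # encodes vec(P X - X P) = 0
            for P in (transposition_operator(n, k, i) for i in range(k - 1))]
    M = np.vstack(rows)
    return N * N - np.linalg.matrix_rank(M, tol=1e-8)

for n, k in [(2, 2), (2, 3), (3, 2)]:
    assert commutant_dim(n, k) == comb(n * n + k - 1, k)
print("commutant dimensions match Schur-Weyl duality")
```

For instance, for n = k = 2 the commutant is End(Sym² ℂ²) ⊕ End(Λ² ℂ²), of dimension 9 + 1 = 10 = C(5, 2).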
https://en.wikipedia.org/wiki/Schur–Weyl_duality
The Schuylkill and Susquehanna Navigation Company was a limited liability corporation founded in Pennsylvania on September 29, 1791. [ 1 ] [ 2 ] [ 3 ] The company was founded for the purpose of improving river navigation, which in the post-colonial United States era of the 1790s meant improving river systems, not building canals. In this Pennsylvania plan, however, two rivers, the large Susquehanna and the smaller Schuylkill, were to be improved by clearing channels through obstructions and building dams where needed. To connect the two watersheds, the company proposed a 4-mile (6.4 km) summit-level crossing at Lebanon, part of a route of almost 80 miles (130 km) between the two rivers. The completed project was intended to be part of a navigable water route from Philadelphia to Lake Erie and the Ohio Valley.

The original engineering concept developed by the Society and the navigation company was to build a canal up the Schuylkill River to Norristown and improve the Schuylkill River from there to Reading. From Reading, the canal was to extend to the Susquehanna River via Lebanon. This would have required a four-mile summit crossing between the Tulpehocken and the Quittapahilla, an artificial waterway connecting two separate river valleys, namely the Susquehanna and the Schuylkill watersheds. Its successful completion would have made the middle reach the first summit-level canal in the United States; the term refers to a canal that rises and then falls, as opposed to a lateral canal, which has a continuous fall only. In this case, the proposed canal, 80 miles in length, would rise 192 feet (59 m) over the 42 miles (68 km) from the Susquehanna River in the west to the summit, and then fall 311 feet (95 m) over the 34 miles (55 km) down to the Schuylkill River in the east. It was to be the golden link between Philadelphia and the vast interior of Pennsylvania and beyond.

This proposed summit crossing offered a severe test of 18th-century engineering skills, materials and construction techniques, both for designing and for operating a water-conveyance transportation system through an area where sinkholes are common and surface water is scarce. Ultimately, the 1794 engineering concept was flawed, as the water supply for the summit crossing was inadequate and the technology for minimizing supply losses was still another century away. While the 1794 construction was never completed, the company's successor, the Union Canal, faced the same challenge of sealing the canal bed to conserve water, and the summit crossing was never able to handle the canal traffic. Even with two reservoirs constructed at the summit as feeders to the canal, the Union Canal still required pumped water from a waterworks at the junction of Swatara Creek and Clarks Run, and later from a second waterworks on Furnace Creek on the Quitipahilla. At the first works, four pumps were necessary to provide summit water, but only two could be powered by river water; the other two had to be powered by Cornish steam engines, a technology available in 1828 when the canal opened but not in 1791.

Despite all of these problems, in 1791 the enthusiasm for this venture was such that it did not seem at all impossible that Pennsylvania would succeed in securing the commercial prestige which the Erie Canal later captured for New York. By 1795, however, the navigation company's project was a commercial failure.
The result was that with the onset of the Erie Canal still some thirty years into the future, Philadelphia lost the early initiative in water transportation. Despite Philadelphia and Pennsylvania's "heroic efforts" to hold their share of the internal trade, which in 1796 was forty percent more than New York's, by 1825, with the opening of the Erie Canal, Philadelphia's trade was forty-five percent less than New York's. New York City's rise to preeminence among American cities was an important development, but was not a foregone conclusion. At the time the Schuylkill and Susquehanna Navigation Company was chartered, Philadelphia was the leading American city; its residents, as well as others, generally expected it to take on more of a metropolitan role as the nation became independent, and prepared the city for that role. Instead, Philadelphia slid into second place. By 1807, New York was the acknowledged commercial capital of the nation; by 1837, it was the American metropolis. Philadelphia's dismal failure to build the "golden link" thirty years before New York opened the Erie Canal was a major factor in that slide into second place. The idea of uniting the Schuylkill and Susquehanna rivers by a canal was first proposed and discussed by William Penn in 1690. [ 4 ] [ 5 ] Penn's plan, conceived a few years after he had founded Philadelphia, was to make "a second settlement" on the Susquehanna River, similar in size to that of Philadelphia itself. He made this plan, titled "Some Proposals for a Second Settlement in the Province of Pennsylvania", public in England in 1690. [ 6 ] The route envisioned by Penn was a road up the west bank of the Schuylkill to the mouth of French Creek near present-day Phoenixville, heading west to the Susquehanna via present-day Lancaster and a Susquehanna tributary, Conestoga Creek. [ 6 ] Although Penn first proposed the project of continuous water transportation from the Delaware to the Susquehanna, he did not call for the building of a canal. [ 6 ] In 1762, Philadelphia merchants petitioned the Pennsylvania Provincial Assembly to commission a project for the passage by water up the west branch of the Susquehanna River with an intervening portage to a navigable branch of the Ohio River. [ 6 ] In 1769, another petition to the Assembly requested that the then Province make the Juniata River navigable down to the Susquehanna River. Both petitions were unsuccessful, but neither mentioned canals as an essential element of the proposed improvement. [ 6 ] In 1769, the American Philosophical Society, with Benjamin Franklin as its first president, was organized with six standing committees, one of which was on "Husbandry and American Improvements". [ 7 ] One of the first projects the committee looked at, in February 1769, was a canal between the Chesapeake and Delaware bays using the Chester River in Maryland and Duck Creek, near Smyrna, Delaware, some 15 miles (24 km) south of the present location of the Chesapeake and Delaware Canal (C&D Canal). [ 7 ] In March, the committee was tasked with preparing a "scheme of application" for the Philadelphia merchants for defraying the expenses of conducting a route location ("proper levels") for the canal as well as construction costs. [ 7 ] In April, the committee discussed a more northerly route using the Bohemia River, a tributary of the Elk River, with headwaters extending into Delaware, using Drawyers Creek.
[ 7 ] In June, this route was reported as being feasible only with locks, as the cost of constructing a clear passage from river to river was too great. [ 7 ] That same month, Thomas Gilpin, a member of the merchant committee, submitted an alternative "plan of a canal and elevation" using the original southerly route along the Chester River and Duck Creek. [ 7 ] In April 1770, W. T. Fisher produced a map of the several canal routes proposed for connecting the Chesapeake and Delaware bays. [ 7 ] In August 1771, the committee became aware of the prospect of joining the Susquehanna and Schuylkill Rivers by means of a canal, and a survey of such a route was made. [ 7 ] One of the key features of that survey was its emphasis on the middle ground or summit level, roughly 4.5 miles (7.2 km) between the headwaters of the Quitapahilla, near Lebanon, and those of the Tulpehocken, near Myerstown. The survey was conducted by Dr. William Smith, Provost of the College of Philadelphia, John Lukens, Esquire, Surveyor General [ 8 ] of the then Province (now State) of Pennsylvania, and John Sellers. Samuel Rhoads, a Philadelphia architect, vice-president of the Society and colonial mayor of Philadelphia, had also been on the survey with Rittenhouse and company. [ 9 ] Rhoads had been impressed with the "... apparent practicality of a canal on the Tulpehocken-Swatara route. But, he asked Franklin, whether it was better to dig a canal, or just to dam up the rivers and creeks to provide for navigation?" [ 9 ] The same year, the Society recommended the third route for a canal. [ 6 ] [ 10 ] The Pennsylvania Provincial Assembly then appointed a committee of its own to survey the Susquehanna, Schuylkill, and Lehigh Rivers, and in 1773 David Rittenhouse delivered its report. [ 11 ] Nothing became of this work due to the coming of the Revolution. [ 12 ] In total, the Society sponsored studies of three routes to connect Philadelphia with the Susquehanna Valley: one by canal across the Delmarva Peninsula (1769-1771), the second a paved road from the Susquehanna Valley to a river port south of Philadelphia, and the third (1773) a canal using the Schuylkill and Susquehanna Rivers and their tributaries, the Tulpehocken and Swatara creeks. [ 3 ] [ 6 ] The project became the goal of the Society for Promoting the Improvement of Roads and Inland Navigation, [ 5 ] organized in 1789 with preeminent wartime financier Robert Morris [ 10 ] as president, together with David Rittenhouse, William Smith and John Nicolson. [ 3 ] The Society petitioned the General Assembly to again survey the river routes, and this time the State acted upon the recommendations. [ 3 ] In the spring of 1790, the General Assembly passed a resolution on March 31, 1790, that authorized river surveys. [ 13 ] Governor Thomas Mifflin commissioned Timothy Matlack (1736–1829), Samuel Maclay (1741–1811) and John Adlum (1759–1836) to survey the Swatara, the West Branch of the Susquehanna River, the Allegheny River, French Creek with a portage to Lake Erie, and the Kiskiminetas/Conemaugh to Stony Creek, the future site of Johnstown, with a second portage to the Frankstown branch of the Juniata and then down the Juniata to the Susquehanna River and on to Harrisburg. [ 13 ] Mifflin also appointed other survey teams. [ 13 ] In April 1790, Maclay surveyed "...the Swatara Creek and Quitapahilla Creek to Old's Iron Works, then by to Lebanon; (noting that) the Quitapahilla can be made navigable for boats of 5 tons." [ 13 ] On Dec.
14, 1790, Maclay and the other commissioners reported on their recommendations for rivers west of the Allegheny Front or barrier range. They recommended three routes: one via the Juniata and two using the West Branch. The first used the Juniata to go over the barrier range at Poplar Run gap to the Kiskiminetas, a tributary of the Allegheny River. Of the two West Branch of the Susquehanna River routes, one ran via the north branch of Sinnemahoning Creek, a tributary of the West Branch, and thence over the barrier range to the Allegheny River, and the other via the west branch of the Sinnemahoning Creek, thence also over the barrier range to the Allegheny River. They also recommended the Allegheny and French Creek with a portage to Lake Erie. [ 13 ] [ 14 ] Maclay and the other commissioners found that most of the waterways could be improved, but several portages were recommended to reduce costs, such as the Lebanon summit crossing of four miles, a road from French Creek to Presque Isle on Lake Erie, and an 18-mile (29 km) portage over the Allegheny Mountains at Poplar Run. The latter crossing was south of the route eventually selected in 1831 for the Portage Railroad which, when built, was 36 miles (58 km) in length. Both the 1791 and 1831 routes converged on the Little Conemaugh River as the route into Pittsburgh. On February 10, 1791, reports were given on the second round of river surveys regarding improvements to the Delaware River from the bay to the New York state line. Improvements were also recommended for the Schuylkill River, with a portage road or canal from Reading to the Susquehanna River, and improvements for the North and West Branches of the Susquehanna and a second Allegheny portage to reach Lake Erie. [ 13 ] The Society proposed in its 1791 report to use the Schuylkill River from Philadelphia up to "...Tulpehocken Creek, near Reading, continuing on the Tulpehocken as far as practicable." [ 15 ] Critically, the Society had yet to recommend or devise a way over the summit near Lebanon joining the "...Quitapahilla and Swatara creeks, the latter leading to the Susquehanna ..." river. [ 15 ] The proposed mileages were: [ 16 ] The concept of navigation in the context of the post-colonial United States and the 1790 timeframe was predominantly focused on improving river systems. [ 15 ] A contemporary project, the Western Inland Lock Navigation Company in New York, which later became a part of the Erie Canal, was also "... primarily a river system." [ 15 ] In the Pennsylvania scheme, large rivers such as the Susquehanna and, to a lesser extent, the Schuylkill were to be improved by clearing channels through obstructions and building dams where needed. [ 15 ] Most importantly, these larger segments of the scheme were to be connected by short sections of slackwater canals and, in some instances, such as the Allegheny range crossing, by portages. [ 15 ] One author noted that ... While the Society mapped the prospective route with commendable diligence and care, its efforts were of course immeasurably handicapped by a lack of knowledge of canals which at that time were unknown in America but upon which the surveys of the board of commissioners indicated the waterway would have to depend for a short distance in the eastern region and perhaps in the vicinity of the Allegheny Mountains. Descriptions of the two canal connections given in the memorial clearly reflect the prevailing inexperience ... (of the Society).
One of (the canal crossings), "20 feet wide and 7 feet on an average," would be necessary between Tulpehocken and Quitapahilla creeks in order to provide an unbroken water link from the Schuylkill to the Susquehanna, but there was uncertainty about the immediate possibility of building it. ... (More detailed engineering to had to be done) ... to determine whether "a plan of lock navigation" might not be cheaper than a water-level channel. "It is supposed that the canal or lock navigation between the heads of Tulpehocken and Quitapahilla, is to be compleated; but if that work should be thought too great to begin with, it will be only the addition of four miles portage, by an excellent and level road." In point of fact, no estimate could be included for "the canal." (Emphasis added) The Society in its report estimated the total cost of the Schuylkill River improvements and canal connection with the Susquehanna River at £55,540 (£1791) or $8.6 million (in 2018 US dollars). [ 16 ] The Schuylkill Navigation Company and the Union Canal ultimately completed this Society scheme by 1830 for a total reported cost of $2.8 million (in 1830 US dollars) or $73 million in (in 2018 US dollars): [ 17 ] roughly nine times the original estimate. James Brindley (1745-1820), a well-known canal engineer and nephew of the famous British canal engineer James Brindley (1716-1772), was in Delaware in 1791. [ 18 ] Brindley had been originally recruited in 1774 by the Potomac Company for the Little Falls Bypass Canal on the Potomac River . [ 19 ] Subsequently, Brindley worked on the Susquehanna Canal (1783-) in Maryland, Santee Canal in South Carolina (1786) and the James River Canal in Virginia (1787). [ 19 ] In 1791, he was introduced to the Society for the purpose of resurveying the 1771 summit route for the canal between the Tulpehocken and Quittapahilla Creeks. [ 13 ] The Society engaged Brindley to resurvey the 1771 summit route [ 9 ] along with Timothy Matlack (1736-1829) and John Adlum (1759-1836). [ 13 ] Later that year in the summer, they presented a final report and Brindley's map for the summit canal between the creeks. Crucially, they find that there is sufficient water at the summit to feed the canal within a four-mile radius. [ 13 ] The society would later in February 1792 ask the newly incorporated Schuylkill and Susquehanna Navigation company to pay for the expense of this survey. [ 13 ] In that same year of 1791, the Society presented proposals to the State proposing to connect the Atlantic seaboard with Lake Erie . [ 5 ] This Pennsylvania plan was before the creation of New York's Western and Northern Inland Lock Navigation Companies in 1792. The New York plan took the first steps to improve navigation on the Mohawk River by constructing a canal between the Mohawk and Lake Ontario [ 20 ] but that effort with private financing was insufficient. In the Pennsylvania plan, the Society proposed a canal route, 426 miles [ 5 ] in length connecting Philadelphia with Pittsburgh by a canal. One part of this project was a canal segment up to the Schuylkill River to Tulpehocken Creek to a summit-level canal near Lebanon and thence by way of the Quitapahilla and Swatara creeks to the Susquehanna River. [ 3 ] This action resulted in the formation of two companies The first was the Schuylkill and Susquehanna Navigation Company incorporated on September 29, 1791, [ 21 ] [ 22 ] to open a communication between the Schuylkill and Susquehanna rivers from Reading on the Schuylkill to Middletown on the Susquehanna. 
The second was the Delaware and Schuylkill Navigation Company, incorporated in 1792 to open a canal between the Schuylkill River and the Delaware River. [ 23 ] Robert Morris was the president of both companies. [ 3 ] The 1791 Pennsylvania act incorporating the company contained an elaborate process for using Sheriff's juries to assess damages for the taking of lands and waters, becoming "...the model for subsequent Pennsylvania canal statutes." [ 24 ] Up to that point in time, the policy had been to allow damages only for improved lands. [ 24 ] This 1791 act required the company to pay all damages resulting from its use of eminent domain authority to take all lands (improved or unimproved), water, and materials necessary for constructing and operating the canal, including mills, mill ponds, water and water courses. [ 21 ] This caused many canal companies, such as the Schuylkill and Susquehanna Navigation Company, great concern over the amount of damages awarded in these procedures. Charles G. Paleske, an officer of the company, stated in 1807 that the company could not complete the largest branch of its canal because, among other reasons, of "the enormous sums paid for land and water rights." [ 24 ] In early 1792, the company was organized in Philadelphia with noted financier and land speculator Robert Morris as president, Tench Francis as treasurer and noted engrosser of the Declaration of Independence Timothy Matlack as secretary. [ 13 ] The company's directors were also notable Philadelphians, such as Morris' partner, former comptroller general of the State of Pennsylvania and president of the Pennsylvania Population Company, John Nicholson (1757-1800), [ 25 ] Samuel Powel (1738-1793) and University of Pennsylvania provost William Smith (1727-1803). [ 13 ] Henry Drinker (1734-1809), junior founding partner of the notable Philadelphia shipping company James and Drinker, a figure in the Philadelphia tea party incident, and a "substantial provider of credit" in those times, [ 26 ] also was a director. [ 13 ] Other notable directors included Brevet generals Walter Stewart and Samuel Miles, the latter a former mayor of the city of Philadelphia. Philadelphia politician and brewer Robert Hare (1752-1811), father of chemist Robert Hare (1781-1858), [ 27 ] was a director, as were the then treasurer of the United States, Samuel Meredith (1741-1817), and his brother-in-law George Clymer (1739-1813), a signatory to both the Declaration of Independence and the Constitution. [ 13 ] Also among the directors were Pennsylvania State Attorney General and future Attorney General of the United States William Bradford (1755-1795), future Speaker of the Pennsylvania House of Representatives George Lattimer, and light horse cavalry member and quartermaster John Donaldson (1754-1831). [ 28 ] Nicholson eventually held 270 shares, on which $64,300 was paid; Robert Morris, 52 shares and $14,300. [ 13 ] George Washington received one share of stock in the company, issued by Morris in 1792 and worth one pound. [ 29 ] In recruiting stock subscriptions, the Commissioners were required to advertise in three newspapers for a month, one of them in the German language. [ 21 ] They were authorized to sell one thousand shares, and if the stock was oversubscribed, a lottery was to be used to apportion the sales; no one person was to initially own more than ten shares. [ 21 ] At the time that Robert Morris and the others were organizing the company, "(p)oor harvests in Europe brought unprecedented agricultural and commercial prosperity to the Delaware Valley."
[ 30 ] One of the administration's first official acts as part of Hamilton's economic plan was to "...pour thousands of dollars into the pockets of prescient speculators by funding depreciated American bonds at 100 percent of their face value. The resulting ebullience in the investment markets facilitated the flotation of a series of new companies ..." [ 30 ] such as Morris' Schuylkill and Susquehanna Navigation company. While post-revolutionary grain exports from Philadelphia had stagnated through 1788, the Continental subsistence crisis created a demand for American grain that Philadelphia rushed to fill. [ 30 ] "Between 1788 and 1789 the value of Quaker City exports leaped 45 percent to the level of $3,510,765, and they continued to climb to the extraordinary level of $17,513,866 in 1796 ($450 million US in 2018). [ 31 ] With Americans serving as neutral maritime carriers for the warring nations of Europe, the shipping industry also flourished. The amount of tonnage registered for foreign trade increased by 167 percent between 1789 and 1796." Beyond the Delaware Valley lay the vast Susquehanna River Valley, a major export market for Philadelphia despite the gains made by Baltimore in shifting trade to its ports. [ 32 ] "...the essential economic function of Philadelphia's merchant community was to link the city's hinterland with its overseas markets. It was the merchants who shipped flour to Lisbon, lumber to London, flaxseed to Belfast; and it was they who imported vast amounts of cloth and hardware from London and the outports." [ 30 ] The Schuylkill and Susquehanna Navigation company would provide the "golden link" between the two. On December 1, 1791, the company's book was opened for stock subscriptions, and by one o'clock more than the five hundred shares ($200,000) required as a minimum were subscribed, and when the books had been open the required fifteen days no less than forty-six thousand shares were subscribed. [ 33 ] This was acclaimed "another instance of the public spirit of the inhabitants of this state," though in reality it testifies chiefly to the speculative spirit then running riot. [ 33 ] The subscriptions were reduced by lottery to one thousand shares, and canal scrip was soon selling at an advance. [ 33 ] Several months later, the first financial panic in the new United States occurred, the panic of 1792 . This impacted the availability of cash for subscribers to fulfill their obligations from the previous December and the Company agreed to take notes in lieu of cash. [ 13 ] This process of financing the navigation company was managed by Morris in the same time period as large swaths of Northern Pennsylvania were being developed by the managers of the company. [ 34 ] "Pennsylvania's backlands ... (were) ... the stakes in a giant speculative bubble: they were cheap, they could be bought on credit, they could be paid for in depreciated certificates, settlement and improvement requirements were generally overlooked, and those in actual charge of the disposal of lands were very cooperative. Convinced of getting a 10, 20, or 30-fold return, it is little wonder that other assets were converted into land, heavy mortgages taken, and credit stretched to fantastic lengths." The problem was that speculators such as Robert Morris had too much credit. [ 34 ] Often using the land to which "...they had only preliminary claim, either selling, encumbering them with mortgages or using them as collateral for loans." 
[ 34 ] The Schuylkill and Susquehanna Navigation company prospectus promised greater trade and settlement, thus raising the value of the lands. [ 34 ] In addition to the two navigation companies, Robert Morris, and other managers "...established no less than six companies of this type between 1793 and 1797." [ 34 ] These were the Pennsylvania Population Company , Asylum Land Company, North American Land Company , Territorial Land Company, Pennsylvania Land Company, Pennsylvania Property Company. [ 34 ] This speculative bubble burst in 1796 just when the navigation company was trying to mobilize the financing for its operations. [ 34 ] "...speculators had invested in roads, canals, and mills to encourage settlement, but often could not finance these projects to completion. By the late 1790s, most of these speculations failed due to overreaching. Robert Morris, the grandest speculator of them all, went to debtors' prison. "The 'Philadelphia fever' that raged during the era of exploitation of our eastern public lands ruined many of those it infected. It despoiled a great portion of the Commonwealth's landed inheritance. It victimized the actual settler ... (a)nd it retarded the development of one-third of the State for several generations." There were very few trained civil engineers in the new United States when the company was chartered. [ 36 ] The earlier planning for locating the canal commissioned by the Society up through 1791 had been performed by members such as John Lukens , surveyor general of Pennsylvania and the eminent American astronomer and surveyor, David Rittenhouse . [ 23 ] Other than Brindley (1745-1820), no one had any experience with canal location or lockage. [ 9 ] [ 36 ] The original engineering concept developed by the Society as well as the navigation company's charter had been to build a canal up the "...Schuylkill valley to Norristown, and improving the river from there to Reading; while from Reading a canal was to extend to the Susquehanna, via Lebanon." [ 36 ] This would have made the Schuylkill and Susquehanna canal the first summit-level canal in the United States. A four-mile summit crossing between Tulpehocken and the Quitipahilla would be an artificial waterway connecting two separate river valleys; namely the Susquehanna and the Schuylkill watersheds. The term refers to a canal that rises then falls, as opposed to a lateral canal, which has a continuous fall only. [ 37 ] In this case, the proposed canal at 80 miles in length would rise 192 feet over 42 miles from the west at the Susquehanna River to the summit and then fall 311 feet over 34 miles to the Schuylkill River to the east. [ 38 ] Unfortunately, most of the four-mile summit crossing was underlain by the Ontelaunee Formation , a "...dark grayish-brown weathering dolomite ..." or carbonate bedrock. [ 39 ] Other equally important parts of the summit crossing were constructed through the Annville Formation, a "...very thick bedded, finely crystalline, light blue-gray to light pinkish-gray, high-calcium limestone ." [ 39 ] Crucially, that meant the summit traversed highly soluble bedrock with poor surface drainage and where sinkholes were common. [ 39 ] This ... (summit crossing) ... offered a severe test of ... (18th century) ... engineering skills in both designing and operating a water-conveyance transportation system through an area where sinkholes are common, and surface water is scarce. Ultimately, the 1794 engineering concept was flawed. The water supply for the summit crossing was inadequate. 
While the 1794 construction was never "watered", its successor, the Union Canal, was faced with the choice of either "puddling" (packing low-permeability clay on the bottom and sides) or "planking" (lining the sides and bottom of the canal with wood planks) for the summit crossing in order to conserve water supplies. [ 39 ] In the end, "planking" was chosen, which required "...close to 2,000,000 board-feet of lumber ..." to seal the crossing. [ 39 ] Even with two reservoirs constructed at the summit as feeders to the canal, the Union Canal required pumped water from a waterworks at the junction of Swatara Creek and Clarks Run and later from a second waterworks on Furnace Creek on the Quitipahilla. [ 39 ] At the first works, there were four pumps with the capacity to lift about "...15,000 gallons per minute through 3.3 miles of wooden and brick pipes to the summit level, 95 feet above the pumps ..." [ 39 ] Of the four pumps, only two could be powered by water; the other two had to be powered by Cornish steam engines, [ 39 ] a technology available in 1828 when the canal opened but not in 1791. [ 40 ] By 1885, the Union Canal was sold at a sheriff sale, "unable to cope with ... (competition from) ... the railroads, poor planning, and the carbonate bedrock of Lebanon County, Pennsylvania." [ 39 ] Had the Schuylkill and Susquehanna navigation company been successful in completing the canal in 1794-95, it probably would have succumbed to the same poor planning and summit geology as its successor did. While the navigation company was being organized in 1791, the Society asked Brindley to re-evaluate the summit level crossing between Lebanon, Pennsylvania and Myerstown. Brindley was to reexamine the topography of the summit and produce a detailed location for the canal. He was also to ensure that the local supply of water was adequate to operate the locks on both sides of the summit, a point critical to the success of the project, and to make an estimate of the "...lands and waters necessary ..." for the work. [ 23 ] Brindley completed the work that summer, yet Morris still agreed with George Washington's earlier assessment: although Brindley had "more practical knowledge of cuts and locks for the improvement of inland navigation than any man among us ...", in Morris' mind Brindley's skills remained unproven. [ 9 ] Nonetheless, the Navigation company hired Brindley in April 1792 for the construction season as canal engineer, along with Col. Thomas Bull (1744-1837) as superintendent. [ 13 ] In May, the board of directors with Brindley toured the summit crossing between the Quitapahilla and Tulpehocken Creeks as well as the waters to the north, including the Deep Run Branch of the Little Swatara. [ 13 ] From west to east, the route was to follow Swatara Creek upstream from Middletown to Quittapahilla Creek, which it then followed upstream through Lebanon and towards Myerstown. It then crossed overland to the headwaters of Tulpehocken Creek, following Tulpehocken Creek downstream to Reading on the Schuylkill River. It was to follow the Schuylkill downriver to the Delaware River at Philadelphia. [ 13 ] The summit route was fixed by the Board between Kuchner's dam on the Quittapahilla and Loy's springs on the Tulpehocken, west of Myerstown. [ 41 ] In August of that year, the company approved Brindley's engineering concept for crossing the summit. It was to be a cut twenty-five feet deep, thirty feet wide at the bottom, and watered to a depth of four feet.
[ 13 ] Based solely upon Brindley's work, and before their new British engineer, Weston, could review the scheme, in October 1792 the Board authorized Superintendent Bull to purchase a strip of land 100 feet wide for the canal route to the Swatara. [ 13 ] In November 1792, the company purchased the mill of Baltzer Orth on the head of the Quittapahilla Creek for £4,250 and two tracts of Abraham Crow for £2,600. [ 13 ] Superintendent Bull and Timothy Matlack began construction staking for the summit canal using Brindley's route. [ 13 ] The work was met with resistance from the local residents, who "resent(ed) the intrusion of rich Philadelphians into their entirely German community and having their farms cut up ..." [ 13 ] The local residents protested the exercise of eminent domain by the company in cutting up farms to build a straight and regular road or canal, rather than a traditional meandering and undulating one. [ 13 ] During the time that Brindley was acting as canal engineer, the company approached Patrick Colquhoun in London to recruit what the company considered to be a more qualified British engineer for the canal. [ 36 ] In January 1792, Colquhoun initially tried to recruit John Dadford, but he was unavailable. [ 36 ] [ 9 ] [ 13 ] Colquhoun then approached the eminent British civil engineer William Jessop to select "...a properly qualified engineer for North America"; he recommended Weston. [ 36 ] Colquhoun was finally able to secure the services of William Weston, twenty-nine years old at the time and then building canals in Ireland. [ 36 ] Weston signed a contract drafted by Colquhoun [ 36 ] for his services to the company as its "engineer", with an annual salary of £800 in 1792 (worth $120,000 US in 2018) for no more than seven months in any one year. [ 42 ] [ 36 ] At the time that Weston traveled over to the new country of the United States, ... Surveyors' compasses were common in the (United) States, engineers' levels were almost, if not quite, non-existent. (David Rittenhouse doubtless could have made one, but it is quite certain that he had not). In fact, Weston may have brought with him the first leveling instrument used on this side of the Atlantic. It was, according to Weston's own description, a Y-level [ 43 ] with achromatic glasses, and had been made for him by Mr. Troughton, a mathematical instrument maker on Fleet Street, London. [ 36 ] Almost immediately upon his arrival in Pennsylvania, the company attempted to renegotiate Weston's compensation to cover twelve months instead of seven, offering to raise it to £1,500 ($225,000 US in 2018) and increasing the geographical scope of his services to include the states of Pennsylvania, New Jersey, New York, and Delaware. [ 36 ] Although the Board had authorized work for the summit crossing, there was still a question in their minds as of September 1792 over whether to stay with their original concept of river navigation for improving the Tulpehocken, Quitapahilla and Swatara or to go with lock system navigation. [ 13 ] The Board had also been faced with two routes across the summit and on to the Swatara, using either the Quitapahilla to the south or Clarks Run to the north. [ 39 ] The company was pursuing several construction projects during a time in which skilled labor was in short supply and very costly. The presence of several projects could easily drive up labor and material costs.
Much as in the twentieth century, when project labor agreements were used to predetermine wages and working conditions, these eighteenth-century project managers sought to negotiate cooperative agreements with other projects to constrain the growth in wages and control working conditions. In October 1792, the Board of Directors appointed a committee to "... confer with the Delaware & Schuylkill Canal and Philadelphia & Lancaster Turnpike Road on sending a joint agent to New England to recruit labor." [ 13 ] The next month the Board directed Superintendent Bull to limit wages to 3s6d (70 cents) per day, with the company providing tools and provisions. More importantly, the Board also directed Bull to negotiate "... an agreement with the Delaware & Schuylkill Canal and the Philadelphia & Lancaster Turnpike Road to observe a uniform ceiling on the wages to be offered." [ 13 ] The practice even went so far as to have the Boards of several companies meet as a joint committee. [ 13 ] Thus in November 1792, the Schuylkill & Susquehanna, Delaware & Schuylkill, and Conewago Canals and the Philadelphia & Lancaster Turnpike Road met as a joint committee and named Isaac Roberdeau (1763-1829), who had worked under Pierre C. L'Enfant on laying out Washington, D.C. and Paterson, N.J., agent of all three companies at $120 per month; he later became William Weston's assistant. [ 13 ] The joint committee also agreed to "... cooperate with each other and with local employers of day laborers so as not to increase wages by bidding against each other; workers imported from New England are to be excepted." [ 13 ] The joint committee continued to make plans for a coordinated effort over the winter of 1792–1793 to "procure laborers in New England, 400 for each of the main canals, 150 for the Conewago Canal, and 200 for the turnpike, also 10 yokes of oxen, carts, and drivers for the turnpike; maximum wage rates and working conditions were established for moving expenses and the use of company teams." The committee also directed that all member companies were to sell provisions to the men at cost. The labor force was being mobilized in Philadelphia to start the construction season on March 10, 1793. [ 13 ] In January 1793, the Company reported that "... 80 to 100 men are at work and about a half-mile of the canal has been dug; are working on the summit level on land purchased by John Nicholson from Jacob Schaffer." [ 13 ] Brindley's design concept for the summit crossing was a cut twenty-five feet deep, thirty feet wide at the bottom, and watered to a depth of four feet. Brindley had assumed that the cut would be entirely through earth; instead they "... struck rock at a depth of 9 feet." [ 13 ] The next month, roughly 400 men were working on the Tulpehocken Creek side of the summit. [ 13 ] Engineer Weston reviewed Brindley's plans for the summit crossing, including Brindley's scheme for supplying the summit with water. [ 13 ] Weston narrowed the design from thirty feet to twenty at the bottom but increased the depth of water from four to six feet, so that the cut could act as a reservoir. [ 13 ] By March 1793, the company had exhausted its project funding and had accumulated $56,000 in liabilities ($1.5 million in 2018 US dollars). In April, the Conewago Canal was incorporated as a separate company with James Brindley as chief engineer. [ 13 ] That same month the Company Board directed engineer Weston to "make the Tulpehocken side of the summit the priority ..." [ 13 ] as well as to develop more sources of water to supply the summit crossing.
During the same period, the company moved to acquire right of way on the Tulpehocken creekside by legally enforcing its eminent domain rights. [ 13 ] However, the effort was met with "a large force ... armed with clubs who oppose (seizing the land) ... in the meantime, landowners refuse to allow entry onto their land." [ 13 ] The pace of construction slowed, and in that summer of 1793 Superintendent Bull resigned. [ 13 ] The company arranged for some interim financing in the form of a $4,000 loan from Major Edward Burd. That summer was also notable for the first yellow fever epidemic in 30 years, which began in the city of Philadelphia in August 1793. [ 44 ] [ 45 ] It was one of the most severe epidemics in the United States. At the height of the panic from the epidemic in late August 1793, the Company closed its offices, and they would remain closed through November of that year. [ 13 ] This crippled the company's ability to raise additional funding for construction. [ 13 ] The Myerstown Riots occurred at Myerstown, Pennsylvania, in Lebanon County, when "a group of young men from the town crash(ed) a party of canal men at a local tavern and provoke(d) a brawl in response to a recent insult; the canal men (broke into) several houses looking for their assailants; German residents had long opposed the canal for exercising eminent domain, and fights were frequent because of ethnic differences between German residents and canal workers, who were Scots-Irish or Irish." [ 13 ] The riots continued for several days and were further inflamed by a mob of over 100 canal men who, "... armed with clubs and led by an overseer armed with pistols", marched on Myerstown and proceeded to intimidate townspeople while seizing and beating the young men they suspected of starting the brawl the previous night. [ 13 ] In 1794, as part of the federal government's response to the Whiskey Rebellion, George Washington led troops in the field, which, according to historian Joseph Ellis, marked "the first and only time a sitting American president led troops in the field". [ 46 ] Washington left Philadelphia, at that time the capital city of the country, on the 30th of September, first dining at Norristown and then staying the night at what is now Trappe, Pennsylvania. [ 47 ] The next day he traveled to Reading, Pennsylvania on his way to meet up with the rest of the militia he had ordered mobilized at Carlisle. [ 47 ] On the second of October, 1794, Washington left Reading heading west to Womelsdorf in order to "view the canal from Myerstown towards Lebanon and the locks between the two places ...". [ 47 ] Another officer on the march noted that at that time, ten miles of canal had been excavated and five locks constructed, for a total lift of thirty feet in elevation. [ 47 ] By the end of 1793, Weston reported to the board that "... lawsuits and jury awards have slowed the work. ..." [ 13 ] While Weston had over four hundred men working on the project that summer, by the end of the year most of his workforce had left the project. [ 13 ] The remaining workforce was assigned to work on the towpath. [ 13 ] In the end, Weston had completed 4.25 miles of the canal prism through the narrows between the two springs. [ 13 ] Weston, though, had to narrow the summit cut to pass only one boat at a time. [ 13 ] Crucially, Weston also had to acknowledge a problem that none of his predecessors had faced when he was forced to "... line both sides of the canal with drywall stones to reduce leakage."
[ 13 ] Going into 1794, Weston estimated that he needed $231,000 ($4.9 million in 2018 US dollars) for the year's work, requiring the company to raise another $120,000 in capital. [ 13 ] The company was unable to raise the capital or borrow the money, and on May 3, 1794, it reported that its funds were exhausted. [ 13 ] However, the company continued to make attempts to raise funds for the project, and in December 1794 chief engineer Weston reported on the state of the project. [ 13 ] Funds were still insufficient, and at the close of 1794 the Schuylkill and Susquehanna Navigation company made its final payroll and informed Weston that in the future he was solely an employee of the Delaware and Schuylkill Canal company. [ 13 ] The company's efforts were futile, as no additional funds were secured. [ 13 ] Finally, in April 1795, the Board authorized Weston "to sell the company's teams and send the rest to Philadelphia for sale; the company's stock of black powder is to be sent to Norristown for the use of the Delaware & Schuylkill Canal; Weston appoints seven men to take care of the works, which are effectively abandoned and never brought into use." [ 13 ] In the spring of 1796, the Board ordered the disposal of all the bricks Weston had manufactured for construction of the canal's locks, effectively terminating the project. [ 13 ] As the navigation company had exhausted its funding by early 1795, in May of that year the Board terminated Weston's employment contract with the Schuylkill and Susquehanna Navigation company. [ 13 ] Weston, though, was still obligated to work with the Delaware and Schuylkill Canal company. By the spring of 1796, Weston reported that six miles of canal had been completed, three at each side, but that due to lack of funds the work had been terminated. [ 13 ] The Board for the canal company also terminated Weston's employment contract that spring. [ 13 ] Weston went on to work with Gen. Philip Schuyler for the Western Inland Lock Navigation Company for four years. During this period, Benjamin Wright (1770-1842), who was later to become chief engineer of the Erie Canal and other projects, worked under Weston. [ 13 ] Despite the termination of construction and of Weston's employment as canal engineer, the company managed to forestall foreclosure on its property and constructed works. [ 48 ] In 1802, the company had to fend off such an attempt and was only successful in holding onto its property and water rights through the sale of excess property; often whole farms were sold. [ 48 ] Although originally set to expire in 1801, the company's corporate charter was extended in 1806 to 1820. In 1807, Charles Gottfried Paleske (1758-1816) was elected to the Board of Directors of the company and, working with James Milnor, Robert Brooke, Isaac Roberdeau, and John Scott, walked "... the line of the Schuylkill & Susquehanna Navigation Company from Kruitzer's plantation where the canal ends to the end of the summit near Kucher's mill, about 9 miles; find the work in good condition including the five locks at Ley's, and the bridges decayed or collapsed ..." [ 48 ] In 1808, Paleske was elected president and Joseph S. Lewis (1778-1836) treasurer. [ 48 ] In 1809, the company's directors appointed a committee to draft articles for a merger with the Delaware and Schuylkill Canal company, which was submitted to the State legislature.
[ 48 ] In 1810, William John Duane (1780-1865), writing as "Franklin", advocated reviving the Schuylkill and Susquehanna Navigation company as part of a scheme for a canal route to Lake Erie instead of the Ohio Valley. [ 49 ] In July 1811, the two corporations (the Schuylkill & Susquehanna Navigation Company and the Delaware and Schuylkill Canal company) were merged into the Union Canal Company, with Paleske as its first president, and "...authorized to extend to Lake Erie and to build turnpikes along right of way; company is also given monopoly of lotteries in Pennsylvania until $400,000 is raised ..." [ 49 ] By 1885, the successor company, the Union Canal, was sold at a sheriff sale, being unable to cope with railroad competition, poor planning, and the technical challenges posed by a summit crossing underlain by the carbonate bedrock of Lebanon County. Had the Schuylkill and Susquehanna Navigation Company been successful in completing the canal in 1794–95, it probably would have succumbed to the same poor planning and summit geology as its successor did. Much like the Potomac Canal (1785-1828), between the beginning of the Navigation Company in 1791 and its merger and completion by its successor company in 1828, the Union Canal of Pennsylvania (1811-1885), "...civil engineering had come to America and Americans had become civil engineers." [ 50 ] One of George Washington's observations was that ... Although some road projects had been completed by several states, ... the situation was such that it was cheaper in that period (1789) to import ... During this same period (1789-1820), the focus of Philadelphia mercantile interests was central Pennsylvania and its vast Susquehanna watershed, draining two-thirds of Pennsylvania. [ 51 ] Pennsylvania, however, was so situated physiographically that most of its trade was carried out of the State, away from its chief city and the Federal capital, Philadelphia, by its two rivals: Baltimore at the base of the Susquehanna system and Albany on the north. [ 51 ] It was estimated that half of the produce shipped down the Susquehanna river ultimately went to Baltimore, not Philadelphia. [ 51 ] Throughout this period, it was argued by men such as Samuel Breck and William Duane that connecting the Schuylkill and Susquehanna rivers would solve all these problems and assure Philadelphia that its trade would be secured and enlarged. [ 52 ] Breck also advocated for improving the Schuylkill river, but he noted that ... The linchpin of this whole strategy rested on the "golden link" between the Schuylkill and the Susquehanna rivers, the Schuylkill and Susquehanna Navigation Company [ 54 ] with its summit level crossing at Lebanon, Pennsylvania. The State response to this advocacy varied. In 1791, it passed legislation that provided funding in three areas: river navigation, turnpike roads, and corporate canals. [ 15 ] As the later experience would show with the Lebanon summit canal project, these ... The turnpike roads, such as the Philadelphia and Lancaster Turnpike Road Company of 1792, were a more successful, albeit costly, solution for reaching the Susquehanna river at Columbia. [ 15 ] Corporate canals (state-aided joint-stock companies) such as the Schuylkill and Susquehanna Navigation Company, even when fostered and subsidized like turnpike companies, "... were a dismal failure, nothing material being accomplished in this field until about 1821 ...
and its eventual shortcomings as an improvement agency were one of several considerations that pointed the way to state enterprise." [ 15 ] With such enthusiasm prevailing at that time (1789-1820), the chief engineer for the Erie Canal later wrote in 1905 that it didn't seem at all "...impossible that Pennsylvania, had it not been for the Erie canal, would have succeeded ultimately in overcoming natural difficulties and piercing the mountain barrier ... to secure ... the commercial prestige which the (Erie) canal ... captured for New York State." [ 54 ] The result of the failure of the Schuylkill and Susquehanna Navigation Company was that in 1795, with the Erie canal thirty years into the future, Philadelphia lost "...the early initiative in water transportation." [ 15 ] Despite Philadelphia and Pennsylvania's "heroic efforts" to hold their share of the internal trade, which in 1796 was forty percent more than New York's, by 1825, with the opening of the Erie Canal, Philadelphia's trade was forty-five percent less than New York's. [ 54 ] "New York's rise to pre-eminence among American cities was an important development that was neither inevitable nor predictable. At the time of the American Revolution, Philadelphia was the leading American city; its residents as well as others generally expected it to take on more of a metropolitan role as the nation became independent, and prepared the city for that role. Instead, Philadelphia slid into second place. By 1807, New York City was the acknowledged commercial capital of the nation; by 1837, it was clearly the American metropolis." Philadelphia's failure to build the "golden link" thirty years before New York City opened the Erie Canal was a factor in the city's slide into second place.
https://en.wikipedia.org/wiki/Schuylkill_and_Susquehanna_Navigation_Company
Schwartz's reagent is the common name for the organozirconium compound with the formula (C 5 H 5 ) 2 ZrHCl, sometimes called zirconocene hydrochloride or zirconocene chloride hydride, and is named after Jeffrey Schwartz, a chemistry professor at Princeton University. This metallocene is used in organic synthesis for various transformations of alkenes and alkynes. [ 1 ] The complex was first prepared by Wailes and Weigold. [ 2 ] It can be purchased or readily prepared by reduction of zirconocene dichloride with lithium aluminium hydride. This reaction also affords (C 5 H 5 ) 2 ZrH 2 , which is treated with methylene chloride to give Schwartz's reagent. [ 3 ] An alternative procedure that generates Schwartz's reagent from the dihydride has also been reported. [ 4 ] Moreover, it is possible to perform an in situ preparation of (C 5 H 5 ) 2 ZrHCl from zirconocene dichloride by using LiH. This method can also be used to synthesize isotope-labeled molecules, such as olefins, by employing Li²H (lithium deuteride) or Li³H (lithium tritide) as the reducing agent. [ 5 ] Schwartz's reagent has a low solubility in common organic solvents. [ 6 ] The trifluoromethanesulfonate (C 5 H 5 ) 2 ZrH(OTf) is soluble in THF. [ 7 ] The complex adopts the usual "clam-shell" structure seen for other Cp 2 MX n complexes. [ 8 ] The dimetallic structure has been confirmed by microcrystal electron diffraction. [ 9 ] The results are consistent with FT-IR spectroscopy, which established that the hydrides are bridging. Solid state NMR spectroscopy also indicates a dimeric structure. The X-ray crystallographic structure of the methyl compound (C 5 H 5 ) 4 Zr 2 H 2 (CH 3 ) 2 is analogous. [ 10 ] Schwartz's reagent reduces amides to aldehydes. [ 11 ] Vinylation of ketones in high yields is a possible use of Schwartz's reagent. [ 12 ] Schwartz's reagent has been used in the synthesis of some macrolide antibiotics, [ 13 ] [ 14 ] (−)-motuporin, [ 15 ] and antitumor agents. [ 16 ] Hydrozirconation is a form of hydrometalation. Substrates for hydrozirconation are alkenes and alkynes. With terminal alkynes, the terminal vinyl zirconium product is predominantly formed. Secondary reactions are nucleophilic additions, transmetalations, [ 17 ] conjugate additions, [ 18 ] coupling reactions, carbonylation and halogenation. Computational studies indicate that hydrozirconation occurs from the interior portion. [ 19 ] [ 20 ] When treated with one equivalent of Cp 2 ZrClH, diphenylacetylene gives the corresponding alkenylzirconium as a mixture of cis and trans isomers. With two equivalents of hydride, the end product is a mixture of erythro and threo zircono alkanes. In 1974 Hart and Schwartz reported that the organozirconium intermediates react with electrophiles such as hydrochloric acid, bromine and acid chlorides to give the corresponding alkanes, bromoalkanes, and ketones. [ 21 ] The corresponding organoboron and organoaluminum compounds were already known, but these are air-sensitive and/or pyrophoric, whereas organozirconium compounds are not. In one study, the usual regioselectivity of an alkyne hydrozirconation is reversed with the addition of zinc chloride. [ 22 ] [ 23 ] One-pot hydrozirconation-carbonylation-coupling sequences have also been reported. [ 24 ] [ 25 ] With certain allyl alcohols, the alcohol group is replaced by nucleophilic carbon, forming a cyclopropane ring. [ 26 ] The selectivity of the hydrozirconation of alkynes has been studied in detail. [ 27 ] [ 28 ] Generally, the addition of the Zr–H proceeds via syn-addition.
The rate of addition to unsaturated carbon-carbon bonds decreases in the order: terminal alkyne > terminal alkene ≈ internal alkyne > disubstituted alkene. [ 29 ] Acyl complexes can be generated by insertion of CO into the C–Zr bond resulting from hydrozirconation. [ 30 ] Upon alkene insertion into the zirconium hydride bond, the resulting zirconium alkyl undergoes facile rearrangement to the terminal alkyl, and therefore only terminal acyl compounds can be synthesized in this way. The rearrangement most likely proceeds via β-hydride elimination followed by reinsertion.
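A brief sketch of the preparation chemistry described at the start of this entry; the stoichiometry shown is the commonly quoted textbook form, not taken verbatim from the cited papers:
$$\mathrm{(C_{5}H_{5})_{2}ZrCl_{2}+\tfrac{1}{4}\,LiAlH_{4}\longrightarrow (C_{5}H_{5})_{2}ZrHCl+\tfrac{1}{4}\,LiAlCl_{4}}$$
Over-reduction gives the dihydride, which methylene chloride converts back to the chloride hydride:
$$\mathrm{(C_{5}H_{5})_{2}ZrH_{2}+CH_{2}Cl_{2}\longrightarrow (C_{5}H_{5})_{2}ZrHCl+CH_{3}Cl}$$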
https://en.wikipedia.org/wiki/Schwartz's_reagent
In mathematics, a Schwartz–Bruhat function, named after Laurent Schwartz and François Bruhat, is a complex-valued function on a locally compact abelian group, such as the adeles, that generalizes a Schwartz function on a real vector space. A tempered distribution is defined as a continuous linear functional on the space of Schwartz–Bruhat functions. The Fourier transform of a Schwartz–Bruhat function on a locally compact abelian group is a Schwartz–Bruhat function on the Pontryagin dual group. Consequently, the Fourier transform takes tempered distributions on such a group to tempered distributions on the dual group. Given the (additive) Haar measure on the adele ring $\mathbb {A} _{K}$ of a global field $K$, the Schwartz–Bruhat space $\mathcal {S}(\mathbb {A} _{K})$ is dense in the space $L^{2}(\mathbb {A} _{K},dx)$. In algebraic number theory, the Schwartz–Bruhat functions on the adeles can be used to give an adelic version of the Poisson summation formula from analysis, i.e., for every $f\in {\mathcal {S}}(\mathbb {A} _{K})$ one has $$\sum _{x\in K}f(ax)={\frac {1}{|a|}}\sum _{x\in K}{\hat {f}}(a^{-1}x),$$ where $a\in \mathbb {A} _{K}^{\times }$. John Tate developed this formula in his doctoral thesis to prove a more general version of the functional equation for the Riemann zeta function. This involves giving the zeta function of a number field an integral representation in which the integral of a Schwartz–Bruhat function, chosen as a test function, is twisted by a certain character and is integrated over $\mathbb {A} _{K}^{\times }$ with respect to the multiplicative Haar measure of this group. This allows one to apply analytic methods to study zeta functions through these zeta integrals. [ 5 ]
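As a concrete illustration (the standard example from Tate's thesis, stated here with the usual normalizations rather than quoted from the text above): over $K=\mathbb {Q}$ one may take the test function $$f=\prod _{v}f_{v},\qquad f_{\infty }(x)=e^{-\pi x^{2}},\qquad f_{p}=\mathbf {1} _{\mathbb {Z} _{p}}{\text{ for every prime }}p,$$ which is self-dual under the adelic Fourier transform; with a suitably normalized multiplicative Haar measure, the corresponding zeta integral recovers the completed Riemann zeta function, $$Z(f,s)=\int _{\mathbb {A} _{\mathbb {Q} }^{\times }}f(x)\,|x|^{s}\,d^{\times }x=\pi ^{-s/2}\,\Gamma \!\left(\tfrac {s}{2}\right)\zeta (s),\qquad \operatorname {Re} (s)>1.$$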
https://en.wikipedia.org/wiki/Schwartz–Bruhat_function
In mathematics, the Schwartz–Zippel lemma (also called the DeMillo–Lipton–Schwartz–Zippel lemma) is a tool commonly used in probabilistic polynomial identity testing. Identity testing is the problem of determining whether a given multivariate polynomial is the 0-polynomial, the polynomial that ignores all its variables and always returns zero. The lemma states that evaluating a nonzero polynomial on inputs chosen randomly from a large-enough set is likely to find an input that produces a nonzero output. It was discovered independently by Jack Schwartz, [ 1 ] Richard Zippel, [ 2 ] and Richard DeMillo and Richard J. Lipton, although DeMillo and Lipton's version was shown a year prior to Schwartz and Zippel's result. [ 3 ] The finite field version of this bound was proved by Øystein Ore in 1922. [ 4 ] Theorem 1 (Schwartz, Zippel). Let $P\in R[x_{1},x_{2},\ldots ,x_{n}]$ be a non-zero polynomial of total degree $d\geq 0$ over an integral domain $R$. Let $S$ be a finite subset of $R$ and let $r_{1},r_{2},\ldots ,r_{n}$ be selected at random independently and uniformly from $S$. Then $$\Pr[P(r_{1},r_{2},\ldots ,r_{n})=0]\leq {\frac {d}{|S|}}.$$ Equivalently, the lemma states that for any finite subset $S$ of $R$, if $Z(P)$ is the zero set of $P$, then $$|Z(P)\cap S^{n}|\leq d\,|S|^{n-1}.$$ Proof. The proof is by mathematical induction on $n$. For $n=1$, $P$ can have at most $d$ roots, since a nonzero univariate polynomial of degree $d$ over an integral domain has at most $d$ roots. This gives us the base case. Now, assume that the theorem holds for all polynomials in $n-1$ variables. We can then consider $P$ to be a polynomial in $x_{1}$ by writing it as $$P(x_{1},\ldots ,x_{n})=\sum _{i=0}^{d}x_{1}^{i}P_{i}(x_{2},\ldots ,x_{n}).$$ Since $P$ is not identically 0, there is some $i$ such that $P_{i}$ is not identically 0. Take the largest such $i$. Then $\deg P_{i}\leq d-i$, since the degree of $x_{1}^{i}P_{i}$ is at most $d$. Now we randomly pick $r_{2},\ldots ,r_{n}$ from $S$. By the induction hypothesis, $\Pr[P_{i}(r_{2},\ldots ,r_{n})=0]\leq {\frac {d-i}{|S|}}$. If $P_{i}(r_{2},\ldots ,r_{n})\neq 0$, then $P(x_{1},r_{2},\ldots ,r_{n})$ is of degree $i$ (and thus not identically zero), so $$\Pr[P(r_{1},r_{2},\ldots ,r_{n})=0\mid P_{i}(r_{2},\ldots ,r_{n})\neq 0]\leq {\frac {i}{|S|}}.$$ If we denote the event $P(r_{1},r_{2},\ldots ,r_{n})=0$ by $A$, the event $P_{i}(r_{2},\ldots ,r_{n})=0$ by $B$, and the complement of $B$ by $B^{c}$, we have $$\Pr[A]=\Pr[A\cap B]+\Pr[A\cap B^{c}]\leq \Pr[B]+\Pr[A\mid B^{c}]\leq {\frac {d-i}{|S|}}+{\frac {i}{|S|}}={\frac {d}{|S|}}.$$ The importance of the Schwartz–Zippel theorem and of testing polynomial identities follows from the algorithms obtained for problems that can be reduced to polynomial identity testing. One of the most common applications of the Schwartz–Zippel lemma in theoretical computer science is to test whether a polynomial (given in terms of an arithmetic circuit or formula) is identically 0. To decide this deterministically, one can multiply out all the terms and check whether the coefficient of every possible product of variables is 0. However, this can take time exponential in the number of variables $n$, since there may be up to $2^{n}$ product terms. Instead, we can evaluate the polynomial at a random tuple of points over a sufficiently large field (which can be done with linearly many arithmetic operations in the length of the formula); if the result is 0, then by the Schwartz–Zippel lemma the formula is identically 0 with high probability, since a polynomial that is not identically zero would have evaluated to a nonzero value with high probability. A sketch of this test appears below.
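A minimal sketch of this randomized test in Python (illustrative only: the sample polynomials, the prime modulus, and the trial count are arbitrary assumptions, not taken from the cited sources):

import random

def schwartz_zippel_test(f, g, num_vars, degree_bound, trials=20):
    """Probabilistically test whether two polynomial formulas agree.

    f and g are callables evaluating the two formulas modulo p.
    If f - g is not identically zero, each trial detects this with
    probability at least 1 - degree_bound / p (Schwartz-Zippel).
    """
    p = 2**61 - 1  # a large prime, so Z/pZ is a field and d/p is tiny
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if (f(*point) - g(*point)) % p != 0:
            return False  # the polynomials certainly differ
    return True  # identical with high probability

# Example: (x + y)^2 versus x^2 + 2xy + y^2, the same polynomial.
p = 2**61 - 1
f = lambda x, y: pow(x + y, 2, p)
g = lambda x, y: (x * x + 2 * x * y + y * y) % p
print(schwartz_zippel_test(f, g, num_vars=2, degree_bound=2))  # True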
Given a pair of polynomials p 1 ( x ) {\displaystyle p_{1}(x)} and p 2 ( x ) {\displaystyle p_{2}(x)} , is {\displaystyle p_{1}(x)\equiv p_{2}(x)} ? This problem can be solved by reducing it to the problem of polynomial identity testing. It is equivalent to checking if {\displaystyle p_{1}(x)-p_{2}(x)\equiv 0.} Hence if we can determine that {\displaystyle P(x)\equiv 0,} where {\displaystyle P(x)=p_{1}(x)-p_{2}(x),} then we can determine whether the two polynomials are equivalent. Comparison of polynomials has applications for branching programs (also called binary decision diagrams ). A read-once branching program can be represented by a multilinear polynomial which computes (over any field) on {0,1}-inputs the same Boolean function as the branching program, and two branching programs compute the same function if and only if the corresponding polynomials are equal. Thus, identity of Boolean functions computed by read-once branching programs can be reduced to polynomial identity testing. Comparison of two polynomials (and therefore testing polynomial identities) also has applications in 2D-compression, where the problem of finding the equality of two 2D-texts A and B is reduced to the problem of comparing equality of two polynomials p A ( x , y ) {\displaystyle p_{A}(x,y)} and p B ( x , y ) {\displaystyle p_{B}(x,y)} . Given n ∈ N {\displaystyle n\in \mathbb {N} } , is n {\displaystyle n} a prime number ? A simple randomized algorithm developed by Manindra Agrawal and Somenath Biswas can determine probabilistically whether n {\displaystyle n} is prime and uses polynomial identity testing to do so. They propose that all prime numbers n (and only prime numbers) satisfy the following polynomial identity: {\displaystyle (1+z)^{n}=1+z^{n}\;({\mbox{mod}}\;n).} This is a consequence of the Frobenius endomorphism . Let {\displaystyle {\mathcal {P}}_{n}(z)=(1+z)^{n}-1-z^{n}.} Then P n ( z ) = 0 ( mod n ) {\displaystyle {\mathcal {P}}_{n}(z)=0\;({\mbox{mod}}\;n)} iff n is prime . The proof can be found in [4]. However, since this polynomial has degree n {\displaystyle n} , where n {\displaystyle n} may or may not be a prime, the Schwartz–Zippel method would not work. Agrawal and Biswas use a more sophisticated technique, which divides P n {\displaystyle {\mathcal {P}}_{n}} by a random monic polynomial of small degree (a direct coefficient check for small n is sketched after this passage). Prime numbers are used in a number of applications such as hash table sizing, pseudorandom number generators and in key generation for cryptography . Therefore, finding very large prime numbers (on the order of (at least) 10 308 ≈ 2 1024 {\displaystyle 10^{308}\approx 2^{1024}} ) becomes very important and efficient primality testing algorithms are required. Let G = ( V , E ) {\displaystyle G=(V,E)} be a graph of n vertices where n is even. Does G contain a perfect matching ? Theorem 2 ( Tutte 1947 ): A Tutte matrix determinant is not a 0 -polynomial if and only if there exists a perfect matching. A subset D of E is called a matching if each vertex in V is incident with at most one edge in D . A matching is perfect if each vertex in V has exactly one edge that is incident to it in D . Create a Tutte matrix A in the following way: {\displaystyle A_{ij}={\begin{cases}x_{ij}&{\text{if }}(i,j)\in E{\text{ and }}i<j,\\-x_{ji}&{\text{if }}(i,j)\in E{\text{ and }}i>j,\\0&{\text{otherwise.}}\end{cases}}} The Tutte matrix determinant (in the variables x ij , ⁠ i < j {\displaystyle i<j} ⁠ ) is then defined as the determinant of this skew-symmetric matrix, which coincides with the square of the Pfaffian of the matrix A and is non-zero (as polynomial) if and only if a perfect matching exists. One can then use polynomial identity testing to find whether G contains a perfect matching. There exists a deterministic black-box algorithm for graphs with polynomially bounded permanents (Grigoriev & Karpinski 1987). [ 5 ]
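The Agrawal–Biswas identity quoted above can be verified directly for small n, since the coefficient of z^k in (1 + z)^n − 1 − z^n is the binomial coefficient C(n, k). The following sketch tests all coefficients modulo n; it runs in time exponential in the bit length of n, which is exactly the inefficiency the random small-degree modulus in the actual algorithm avoids.

```python
from math import comb

# n is prime iff (1+z)^n = 1 + z^n (mod n), i.e. iff n divides the
# binomial coefficient C(n, k) for every 0 < k < n. This brute-force
# coefficient check is only feasible for small n and serves purely to
# illustrate the identity.

def is_prime_by_identity(n):
    return n >= 2 and all(comb(n, k) % n == 0 for k in range(1, n))

print([m for m in range(2, 30) if is_prime_by_identity(m)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```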
In the special case of a balanced bipartite graph on n = m + m {\displaystyle n=m+m} vertices this matrix takes the form of a block matrix {\displaystyle A={\begin{pmatrix}0&X\\-X^{t}&0\end{pmatrix}}} if the first m rows (resp. columns) are indexed with the first subset of the bipartition and the last m rows with the complementary subset. In this case the Pfaffian coincides with the usual determinant of the m × m matrix X (up to sign). Here X is the Edmonds matrix . Let {\displaystyle Q(x_{11},\ldots ,x_{nn})=\det(A)} be the determinant of the polynomial matrix A . Currently, there is no known sub-exponential time algorithm that can solve this problem deterministically. However, there are randomized polynomial algorithms whose analysis requires a bound on the probability that a non-zero polynomial will have roots at randomly selected test points.
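As one concrete illustration of such a randomized algorithm, the sketch below instantiates the Tutte matrix of a small graph with random values modulo a large prime and tests whether the determinant vanishes, exactly the random-evaluation strategy whose analysis rests on the Schwartz–Zippel bound. The modulus and helper names are illustrative assumptions.

```python
import random

# Randomized perfect-matching test: substitute random field elements
# for the indeterminates x_ij of the Tutte matrix and test whether the
# determinant vanishes over F_p. If a perfect matching exists the
# determinant polynomial is nonzero (degree at most n), so by the
# Schwartz–Zippel lemma a random evaluation is nonzero with
# probability at least 1 - n/p.

def det_mod_p(a, p):
    """Determinant of a square matrix over F_p by Gaussian elimination."""
    a = [row[:] for row in a]
    n, det = len(a), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if a[r][i] % p), None)
        if pivot is None:
            return 0
        if pivot != i:
            a[i], a[pivot] = a[pivot], a[i]
            det = -det
        det = det * a[i][i] % p
        inv = pow(a[i][i], p - 2, p)   # p must be prime
        for r in range(i + 1, n):
            factor = a[r][i] * inv % p
            for c in range(i, n):
                a[r][c] = (a[r][c] - factor * a[i][c]) % p
    return det % p

def probably_has_perfect_matching(n, edges, p=2**31 - 1):
    a = [[0] * n for _ in range(n)]
    for i, j in edges:              # vertices numbered 0..n-1
        x = random.randrange(1, p)  # random value for x_ij
        a[i][j], a[j][i] = x, (-x) % p
    return det_mod_p(a, p) != 0

# A 4-cycle has a perfect matching; a path on 3 vertices plus an
# isolated vertex does not.
print(probably_has_perfect_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(probably_has_perfect_matching(4, [(0, 1), (1, 2)]))                  # False
```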
https://en.wikipedia.org/wiki/Schwartz–Zippel_lemma
In mathematics , the Schwarz lemma , named after Hermann Amandus Schwarz , is a result in complex analysis about holomorphic functions from the open unit disk to itself. The lemma is less celebrated than deeper theorems, such as the Riemann mapping theorem , which it helps to prove. It is, however, one of the simplest results capturing the rigidity of holomorphic functions. Let D = { z : | z | < 1 } {\displaystyle \mathbf {D} =\{z:|z|<1\}} be the open unit disk in the complex plane C {\displaystyle \mathbb {C} } centered at the origin , and let f : D → C {\displaystyle f:\mathbf {D} \rightarrow \mathbb {C} } be a holomorphic map such that f ( 0 ) = 0 {\displaystyle f(0)=0} and | f ( z ) | ≤ 1 {\displaystyle |f(z)|\leq 1} on D {\displaystyle \mathbf {D} } . Then | f ( z ) | ≤ | z | {\displaystyle |f(z)|\leq |z|} for all z ∈ D {\displaystyle z\in \mathbf {D} } , and | f ′ ( 0 ) | ≤ 1 {\displaystyle |f'(0)|\leq 1} . Moreover, if | f ( z ) | = | z | {\displaystyle |f(z)|=|z|} for some non-zero z {\displaystyle z} or | f ′ ( 0 ) | = 1 {\displaystyle |f'(0)|=1} , then f ( z ) = a z {\displaystyle f(z)=az} for some a ∈ C {\displaystyle a\in \mathbb {C} } with | a | = 1 {\displaystyle |a|=1} . [ 1 ] The proof is a straightforward application of the maximum modulus principle to the function {\displaystyle g(z)={\begin{cases}{\frac {f(z)}{z}}&{\text{if }}z\neq 0,\\f'(0)&{\text{if }}z=0,\end{cases}}} which is holomorphic on the whole of D {\displaystyle D} , including at the origin (because f {\displaystyle f} is differentiable at the origin and fixes zero). Now if D r = { z : | z | ≤ r } {\displaystyle D_{r}=\{z:|z|\leq r\}} denotes the closed disk of radius r {\displaystyle r} centered at the origin, then the maximum modulus principle implies that, for r < 1 {\displaystyle r<1} , given any z ∈ D r {\displaystyle z\in D_{r}} , there exists z r {\displaystyle z_{r}} on the boundary of D r {\displaystyle D_{r}} such that {\displaystyle |g(z)|\leq |g(z_{r})|={\frac {|f(z_{r})|}{|z_{r}|}}\leq {\frac {1}{r}}.} As r → 1 {\displaystyle r\rightarrow 1} we get | g ( z ) | ≤ 1 {\displaystyle |g(z)|\leq 1} . Moreover, suppose that | f ( z ) | = | z | {\displaystyle |f(z)|=|z|} for some non-zero z ∈ D {\displaystyle z\in D} , or | f ′ ( 0 ) | = 1 {\displaystyle |f'(0)|=1} . Then, | g ( z ) | = 1 {\displaystyle |g(z)|=1} at some point of D {\displaystyle D} . So by the maximum modulus principle, g ( z ) {\displaystyle g(z)} is equal to a constant a {\displaystyle a} such that | a | = 1 {\displaystyle |a|=1} . Therefore, f ( z ) = a z {\displaystyle f(z)=az} , as desired. A variant of the Schwarz lemma, known as the Schwarz–Pick theorem (after Georg Pick ), characterizes the analytic automorphisms of the unit disc, i.e. bijective holomorphic mappings of the unit disc to itself: Let f : D → D {\displaystyle f:\mathbf {D} \to \mathbf {D} } be holomorphic. Then, for all z 1 , z 2 ∈ D {\displaystyle z_{1},z_{2}\in \mathbf {D} } , {\displaystyle \left|{\frac {f(z_{1})-f(z_{2})}{1-{\overline {f(z_{1})}}f(z_{2})}}\right|\leq \left|{\frac {z_{1}-z_{2}}{1-{\overline {z_{1}}}z_{2}}}\right|} and, for all z ∈ D {\displaystyle z\in \mathbf {D} } , {\displaystyle {\frac {|f'(z)|}{1-|f(z)|^{2}}}\leq {\frac {1}{1-|z|^{2}}}.} The expression {\displaystyle d(z_{1},z_{2})=\tanh ^{-1}\left|{\frac {z_{1}-z_{2}}{1-{\overline {z_{1}}}z_{2}}}\right|} is the distance of the points z 1 {\displaystyle z_{1}} , z 2 {\displaystyle z_{2}} in the Poincaré metric , i.e. the metric in the Poincaré disk model for hyperbolic geometry in dimension two. The Schwarz–Pick theorem then essentially states that a holomorphic map of the unit disk into itself decreases the distance of points in the Poincaré metric. If equality holds throughout in one of the two inequalities above (which is equivalent to saying that the holomorphic map preserves the distance in the Poincaré metric), then f {\displaystyle f} must be an analytic automorphism of the unit disc, given by a Möbius transformation mapping the unit disc to itself.
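The Schwarz–Pick inequality can be checked numerically for a concrete self-map of the disk. The sketch below uses f(z) = z², a holomorphic self-map of D fixing the origin, and verifies that the pseudohyperbolic distance |(z₁ − z₂)/(1 − z̄₁z₂)| never increases; the sampling scheme and tolerance are illustrative choices.

```python
import random

# Numerical illustration of the Schwarz–Pick theorem: a holomorphic
# self-map of the unit disk never increases the pseudohyperbolic
# distance rho(z1, z2) = |(z1 - z2)/(1 - conj(z1) z2)|.

def rho(z1, z2):
    return abs((z1 - z2) / (1 - z1.conjugate() * z2))

def random_disk_point():
    while True:
        z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
        if abs(z) < 1:
            return z

f = lambda z: z * z   # holomorphic D -> D, fixes 0
for _ in range(5):
    z1, z2 = random_disk_point(), random_disk_point()
    print(rho(f(z1), f(z2)) <= rho(z1, z2) + 1e-12)  # always True
```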
An analogous statement on the upper half-plane H {\displaystyle \mathbf {H} } can be made as follows: Let f : H → H {\displaystyle f:\mathbf {H} \to \mathbf {H} } be holomorphic. Then, for all z 1 , z 2 ∈ H {\displaystyle z_{1},z_{2}\in \mathbf {H} } , {\displaystyle \left|{\frac {f(z_{1})-f(z_{2})}{f(z_{1})-{\overline {f(z_{2})}}}}\right|\leq \left|{\frac {z_{1}-z_{2}}{z_{1}-{\overline {z_{2}}}}}\right|.} This is an easy consequence of the Schwarz–Pick theorem mentioned above: One just needs to remember that the Cayley transform W ( z ) = ( z − i ) / ( z + i ) {\displaystyle W(z)=(z-i)/(z+i)} maps the upper half-plane H {\displaystyle \mathbf {H} } conformally onto the unit disc D {\displaystyle \mathbf {D} } . Then, the map W ∘ f ∘ W − 1 {\displaystyle W\circ f\circ W^{-1}} is a holomorphic map from D {\displaystyle \mathbf {D} } onto D {\displaystyle \mathbf {D} } . Using the Schwarz–Pick theorem on this map, and finally simplifying the results by using the formula for W {\displaystyle W} , we get the desired result. Also, for all z ∈ H {\displaystyle z\in \mathbf {H} } , {\displaystyle {\frac {|f'(z)|}{\operatorname {Im} f(z)}}\leq {\frac {1}{\operatorname {Im} z}}.} If equality holds for either the one or the other expressions, then f {\displaystyle f} must be a Möbius transformation with real coefficients. That is, if equality holds, then {\displaystyle f(z)={\frac {az+b}{cz+d}}} with a , b , c , d ∈ R {\displaystyle a,b,c,d\in \mathbb {R} } and a d − b c > 0 {\displaystyle ad-bc>0} . The proof of the Schwarz–Pick theorem follows from Schwarz's lemma and the fact that a Möbius transformation of the form {\displaystyle z\mapsto {\frac {z-z_{0}}{1-{\overline {z_{0}}}z}},\qquad |z_{0}|<1,} maps the unit circle to itself. Fix z 1 {\displaystyle z_{1}} and define the Möbius transformations {\displaystyle M(z)={\frac {z-z_{1}}{1-{\overline {z_{1}}}z}},\qquad \varphi (z)={\frac {z-f(z_{1})}{1-{\overline {f(z_{1})}}z}}.} Since M ( z 1 ) = 0 {\displaystyle M(z_{1})=0} and the Möbius transformation is invertible, the composition φ ( f ( M − 1 ( z ) ) ) {\displaystyle \varphi (f(M^{-1}(z)))} maps 0 {\displaystyle 0} to 0 {\displaystyle 0} and the unit disk is mapped into itself. Thus we can apply Schwarz's lemma, which is to say {\displaystyle \left|\varphi \left(f(M^{-1}(z))\right)\right|\leq |z|.} Now calling z 2 = M − 1 ( z ) {\displaystyle z_{2}=M^{-1}(z)} (which will still be in the unit disk) yields the desired conclusion {\displaystyle \left|{\frac {f(z_{1})-f(z_{2})}{1-{\overline {f(z_{1})}}f(z_{2})}}\right|\leq \left|{\frac {z_{1}-z_{2}}{1-{\overline {z_{1}}}z_{2}}}\right|.} To prove the second part of the theorem, we rearrange the left-hand side into the difference quotient and let z 2 {\displaystyle z_{2}} tend to z 1 {\displaystyle z_{1}} . The Schwarz–Ahlfors–Pick theorem provides an analogous theorem for hyperbolic manifolds. De Branges' theorem , formerly known as the Bieberbach Conjecture, is an important extension of the lemma, giving restrictions on the higher derivatives of f {\displaystyle f} at 0 {\displaystyle 0} in case f {\displaystyle f} is injective ; that is, univalent . The Koebe 1/4 theorem provides a related estimate in the case that f {\displaystyle f} is univalent. This article incorporates material from Schwarz lemma on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
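A similar numerical check works in the half-plane picture. The map f(z) = z − 1/z is holomorphic on H and maps H to itself (its imaginary part is Im z + Im z/|z|², which is positive), so by the theorem above the quantity |(z₁ − z₂)/(z₁ − z̄₂)| cannot increase under f; the code also confirms that the Cayley transform sends sample points of H into D. The choice of f and of sample points is an assumption made for illustration.

```python
import random

# Half-plane form of the Schwarz–Pick theorem for f(z) = z - 1/z,
# a holomorphic self-map of H. The Cayley transform W carries the
# check back to the disk picture above.

def half_plane_ratio(z1, z2):
    return abs((z1 - z2) / (z1 - z2.conjugate()))

def random_h_point():
    return complex(random.uniform(-3, 3), random.uniform(0.1, 3))

f = lambda z: z - 1 / z
W = lambda z: (z - 1j) / (z + 1j)

for _ in range(5):
    z1, z2 = random_h_point(), random_h_point()
    print(half_plane_ratio(f(z1), f(z2)) <= half_plane_ratio(z1, z2) + 1e-12)
    assert abs(W(z1)) < 1   # W maps H into the unit disk
```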
https://en.wikipedia.org/wiki/Schwarz_lemma
In mathematics , the Schwarz reflection principle is a way to extend the domain of definition of a complex analytic function , i.e., it is a form of analytic continuation . It states that if an analytic function is defined on the upper half-plane , and has well-defined (non-singular) real values on the real axis , then it can be extended to the conjugate function on the lower half-plane. In notation, if F ( z ) {\displaystyle F(z)} is a function that satisfies the above requirements, then its extension to the rest of the complex plane is given by the formula F ( z ¯ ) = F ( z ) ¯ . {\displaystyle F({\bar {z}})={\overline {F(z)}}.} That is, we extend F by defining F ( z ) {\displaystyle F(z)} to be F ( z ¯ ) ¯ {\displaystyle {\overline {F({\bar {z}})}}} for z in the lower half-plane, a definition that agrees with F along the real axis. The result proved by Hermann Schwarz is as follows. Suppose that F is a continuous function on the closed upper half plane { z ∈ C ∣ Im ⁡ ( z ) ≥ 0 } {\displaystyle \left\{z\in \mathbb {C} \mid \operatorname {Im} (z)\geq 0\right\}} , holomorphic on the upper half plane { z ∈ C ∣ Im ⁡ ( z ) > 0 } {\displaystyle \left\{z\in \mathbb {C} \mid \operatorname {Im} (z)>0\right\}} , which takes real values on the real axis. Then the extension formula given above is an analytic continuation to the whole complex plane. [ 1 ] In practice it would be better to have a theorem that allows F certain singularities, for example F a meromorphic function . To understand such extensions, one needs a proof method that can be weakened. In fact Morera's theorem is well adapted to proving such statements. Contour integrals involving the extension of F clearly split into two, using part of the real axis. So, given that the principle is rather easy to prove in the special case from Morera's theorem, understanding the proof is enough to generate other results. The principle also adapts to apply to harmonic functions .
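The reflection formula can be observed numerically on a function that is holomorphic near the real axis and real-valued on it. The example f(z) = z/(1 + z²) below (an illustrative choice; its only poles are at ±i) already satisfies F(z̄) = conj(F(z)), which is precisely the symmetry the Schwarz extension imposes on a function initially defined only on the upper half-plane.

```python
# For a function holomorphic near R and real on R, the two sides of
# the reflection formula coincide. This is an illustration of the
# symmetry, not a proof of the continuation theorem.

f = lambda z: z / (1 + z * z)

for z in (0.3 + 0.2j, -1.5 + 0.7j, 2.0 + 0.01j):
    lhs = f(z.conjugate())
    rhs = f(z).conjugate()
    print(abs(lhs - rhs) < 1e-12)  # True in every case
```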
https://en.wikipedia.org/wiki/Schwarz_reflection_principle
In geometry , a Schwarz triangle , named after Hermann Schwarz , is a spherical triangle that can be used to tile a sphere ( spherical tiling ), possibly overlapping, through reflections in its edges. They were classified in Schwarz (1873) . These can be defined more generally as tessellations of the sphere, the Euclidean plane , or the hyperbolic plane . Each Schwarz triangle on a sphere defines a finite group , while on the Euclidean or hyperbolic plane they define an infinite group. A Schwarz triangle is represented by three rational numbers ( p q r ) , each representing the angle at a vertex. The value n ⁄ d means the vertex angle is d ⁄ n of the half-circle. "2" means a right triangle . When these are whole numbers, the triangle is called a Möbius triangle, and corresponds to a non-overlapping tiling, and the symmetry group is called a triangle group . On the sphere there are three Möbius triangles plus one one-parameter family; in the plane there are three Möbius triangles, while in hyperbolic space there is a three-parameter family of Möbius triangles, and no exceptional objects. A fundamental domain triangle ( p q r ) , with vertex angles π ⁄ p , π ⁄ q , and π ⁄ r , can exist in different spaces depending on the value of the sum of the reciprocals of these integers: 1 p + 1 q + 1 r { > 1 ⟹ Sphere = 1 ⟹ Euclidean plane < 1 ⟹ Hyperbolic plane {\displaystyle {\frac {1}{p}}+{\frac {1}{q}}+{\frac {1}{r}}\quad {\begin{cases}>1&\implies {\text{Sphere}}\\[2pt]=1&\implies {\text{Euclidean plane}}\\[2pt]<1&\implies {\text{Hyperbolic plane}}\end{cases}}} This is simply a way of saying that in Euclidean space the interior angles of a triangle sum to π , while on a sphere they sum to an angle greater than π , and in hyperbolic space they sum to less. A Schwarz triangle is represented graphically by a triangular graph . Each node represents an edge (mirror) of the Schwarz triangle. Each edge is labeled by a rational value corresponding to the reflection order, being π/ vertex angle . Order-2 edges represent perpendicular mirrors that can be ignored in this diagram. The Coxeter–Dynkin diagram represents this triangular graph with order-2 edges hidden. A Coxeter group can be used for a simpler notation, as ( p q r ) for cyclic graphs, ( p q 2) = [ p , q ] for right triangles, and ( p 2 2) = [ p ]×[]. Schwarz triangles with whole numbers, also called Möbius triangles , comprise the one-parameter family (2 2 n ) and the three exceptional cases (2 3 3), (2 3 4) and (2 3 5). The Schwarz triangles ( p q r ) are traditionally tabulated grouped by density, the densities occurring being 1, 2 and ∞ in one list and 1, 2, 3, 4, 6 and 10 in the other; the tables are not reproduced here. The (2 3 7) Schwarz triangle is the smallest hyperbolic Schwarz triangle, and as such is of particular interest. Its triangle group (or more precisely the index 2 von Dyck group of orientation-preserving isometries) is the (2,3,7) triangle group , which is the universal group for all Hurwitz groups – maximal groups of isometries of Riemann surfaces . All Hurwitz groups are quotients of the (2,3,7) triangle group, and all Hurwitz surfaces are tiled by the (2,3,7) Schwarz triangle. The smallest Hurwitz group is the simple group of order 168, the second smallest non-abelian simple group , which is isomorphic to PSL(2,7) , and the associated Hurwitz surface (of genus 3) is the Klein quartic . The (2 3 8) triangle tiles the Bolza surface , a highly symmetric (but not Hurwitz) surface of genus 2. The triangles with one noninteger angle were first classified by Anthony W. Knapp in [1].
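The trichotomy above is easy to mechanize. The following sketch classifies an integer triple (p q r) by comparing 1/p + 1/q + 1/r with 1, using exact rational arithmetic; the function name is an illustrative choice.

```python
from fractions import Fraction

# Classify a Schwarz triangle (p q r) by the reciprocal-sum criterion
# quoted above: the sum decides whether the triangle lives on the
# sphere, the Euclidean plane or the hyperbolic plane.

def geometry(p, q, r):
    s = Fraction(1, p) + Fraction(1, q) + Fraction(1, r)
    if s > 1:
        return "sphere"
    if s == 1:
        return "Euclidean plane"
    return "hyperbolic plane"

print(geometry(2, 3, 5))  # sphere (icosahedral Moebius triangle)
print(geometry(2, 3, 6))  # Euclidean plane
print(geometry(2, 3, 7))  # hyperbolic plane (smallest hyperbolic triangle)
```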
A list of triangles with multiple noninteger angles is given in [2]. In this section tessellations of the hyperbolic upper half plane by Schwarz triangles will be discussed using elementary methods. For triangles without "cusps" (angles equal to zero, or equivalently vertices on the real axis), the elementary approach of Carathéodory (1954) will be followed. For triangles with one or two cusps, elementary arguments of Evans (1973) , simplifying the approach of Hecke (1935) , will be used: in the case of a Schwarz triangle with one angle zero and another a right angle, the orientation-preserving subgroup of the reflection group of the triangle is a Hecke group . For an ideal triangle in which all angles are zero, so that all vertices lie on the real axis, the existence of the tessellation will be established by relating it to the Farey series described in Hardy & Wright (2008) and Series (2015) . In this case the tessellation can be considered as that associated with three touching circles on the Riemann sphere , a limiting case of configurations associated with three disjoint non-nested circles and their reflection groups, the so-called " Schottky groups ", described in detail in Mumford, Series & Wright (2015) . Alternatively, by dividing the ideal triangle into six triangles with angles 0, π/2 and π/3, the tessellation by ideal triangles can be understood in terms of tessellations by triangles with one or two cusps. Suppose that the hyperbolic triangle Δ has angles π/ a , π/ b and π/ c with a , b , c integers greater than 1. The hyperbolic area of Δ equals π − π/ a − π/ b − π/ c , so that {\displaystyle {\frac {1}{a}}+{\frac {1}{b}}+{\frac {1}{c}}<1.} The construction of a tessellation will first be carried out for the case when a , b and c are greater than 2. [ 3 ] The original triangle Δ gives a convex polygon P 1 with 3 vertices. At each of the three vertices the triangle can be successively reflected through edges emanating from the vertices to produce 2 m copies of the triangle where the angle at the vertex is π/ m . The triangles do not overlap except at the edges, half of them have their orientation reversed and they fit together to tile a neighborhood of the point. The union of these new triangles together with the original triangle forms a connected shape P 2 . It is made up of triangles which only intersect in edges or vertices, and forms a convex polygon with all angles less than or equal to π and each side being the edge of a reflected triangle. In the case when an angle of Δ equals π/3, a vertex of P 2 will have an interior angle of π , but this does not affect the convexity of P 2 . Even in this degenerate case when an angle of π arises, the two collinear edges are still considered as distinct for the purposes of the construction. The construction of P 2 can be understood more clearly by noting that some triangles or tiles are added twice, namely the three which have a side in common with the original triangle. The rest have only a vertex in common. A more systematic way of performing the tiling is first to add a tile to each side (the reflection of the triangle in that edge) and then fill in the gaps at each vertex. This results in a total of 3 + (2 a − 3) + (2 b − 3) + (2 c − 3) = 2( a + b + c ) − 6 new triangles. The new vertices are of two types. Those which are vertices of the triangles attached to sides of the original triangle are connected to 2 vertices of Δ. Each of these lies in three new triangles which intersect at that vertex.
The remainder are connected to a unique vertex of Δ and belong to two new triangles which have a common edge. Thus there are 3 + (2 a − 4) + (2 b − 4) + (2 c − 4) = 2( a + b + c ) − 9 new vertices. By construction there is no overlapping. To see that P 2 is convex, it suffices to see that the angle between sides meeting at a new vertex is less than or equal to π . But the new vertices lie in two or three new triangles, which meet at that vertex, so the angle at that vertex is no greater than 2π/3 or π , as required. This process can be repeated for P 2 to get P 3 by first adding tiles to each edge of P 2 and then filling in the tiles round each vertex of P 2 . Then the process can be repeated from P 3 , to get P 4 and so on, successively producing P n from P n − 1 . It can be checked inductively that these are all convex polygons, with non-overlapping tiles. Indeed, as in the first step of the process, there are two types of tile in building P n from P n − 1 , those attached to an edge of P n − 1 and those attached to a single vertex. Similarly there are two types of vertex, one in which two new tiles meet and those in which three tiles meet. So provided that no tiles overlap, the previous argument shows that angles at vertices are no greater than π and hence that P n is a convex polygon. [ a ] It therefore has to be verified that in constructing P n from P n − 1 : [ 4 ] (a) the new triangles do not overlap with P n − 1 except as already described; (b) the new triangles do not overlap with each other except as already described; (c) the geodesic from any point in Δ to a vertex of the polygon P n − 1 makes an angle ≤ 2π/3 with each of the edges of the polygon at that vertex. To prove (a), note that by convexity, the polygon P n − 1 is the intersection of the convex half-spaces defined by the full circular arcs defining its boundary. Thus at a given vertex of P n − 1 there are two such circular arcs defining two sectors: one sector contains the interior of P n − 1 , the other contains the interiors of the new triangles added around the given vertex. This can be visualized by using a Möbius transformation to map the upper half plane to the unit disk and the vertex to the origin; the interior of the polygon and each of the new triangles lie in different sectors of the unit disk. Thus (a) is proved. Before proving (c) and (b), a Möbius transformation can be applied to map the upper half plane to the unit disk and a fixed point in the interior of Δ to the origin. The proof of (c) proceeds by induction. Note that the radius joining the origin to a vertex of the polygon P n − 1 makes an angle of less than 2π/3 with each of the edges of the polygon at that vertex if exactly two triangles of P n − 1 meet at the vertex, since each has an angle less than or equal to π/3 at that vertex. To check this is true when three triangles of P n − 1 meet at the vertex, C say, suppose that the middle triangle has its base on a side AB of P n − 2 . By induction the radii OA and OB make angles of less than or equal to 2π/3 with the edge AB . In this case the region in the sector between the radii OA and OB outside the edge AB is convex as the intersection of three convex regions. By induction the angles at A and B are greater than or equal to π/3. Thus the geodesics to C from A and B start off in the region; by convexity, the triangle ABC lies wholly inside the region. The quadrilateral OACB has all its angles less than π (since OAB is a geodesic triangle), so is convex.
Hence the radius OC lies inside the angle of the triangle ABC near C . Thus the angles between OC and the two edges of P n − 1 meeting at C are less than or equal to π/3 + π/3 = 2π/3, as claimed. To prove (b), it must be checked how new triangles in P n intersect. First consider the tiles added to the edges of P n − 1 . Adopting similar notation to (c), let AB be the base of the tile and C the third vertex. Then the radii OA and OB make angles of less than or equal to 2π/3 with the edge AB and the reasoning in the proof of (c) applies to prove that the triangle ABC lies within the sector defined by the radii OA and OB . This is true for each edge of P n − 1 . Since the interiors of sectors defined by distinct edges are disjoint, new triangles of this type only intersect as claimed. Next consider the additional tiles added for each vertex of P n − 1 . Taking the vertex to be A , there are two edges AB 1 and AB 2 of P n − 1 that meet at A . Let C 1 and C 2 be the extra vertices of the tiles added to these edges. Now the additional tiles added at A lie in the sector defined by radii OB 1 and OB 2 . The polygon with vertices C 2 , O , C 1 , and then the vertices of the additional tiles has all its internal angles less than π and hence is convex. It is therefore wholly contained in the sector defined by the radii OC 1 and OC 2 . Since the interiors of these sectors are all disjoint, this implies all the claims about how the added tiles intersect. Finally it remains to prove that the tiling formed by the union of the triangles covers the whole of the upper half plane. Any point z covered by the tiling lies in a polygon P n and hence a polygon P n + 1 . It therefore lies in a copy of the original triangle Δ as well as a copy of P 2 entirely contained in P n + 1 . The hyperbolic distance between Δ and the exterior of P 2 is equal to r > 0. Thus the hyperbolic distance between z and points not covered by the tiling is at least r . Since this applies to all points in the tiling, the set covered by the tiling is closed. On the other hand, the tiling is open since it coincides with the union of the interiors of the polygons P n . By connectivity, the tessellation must cover the whole of the upper half plane. To see how to handle the case when an angle of Δ is a right angle, note that the inequality 1/ a + 1/ b + 1/ c < 1 implies that if one of the angles is a right angle, say a = 2, then both b and c are greater than 2 and one of them, b say, must be greater than 3. In this case, reflecting the triangle across the side AB gives an isosceles hyperbolic triangle with angles π/ c , π/ c and 2π/ b . If 2π/ b ≤ π/3, i.e. b is greater than 5, then all the angles of the doubled triangle are less than or equal to π/3. In that case the construction of the tessellation above through increasing convex polygons adapts word for word to this case, except that around the vertex with angle 2π/ b , only b (and not 2 b ) copies of the triangle are required to tile a neighborhood of the vertex. This is possible because the doubled triangle is isosceles. The tessellation for the doubled triangle yields that for the original triangle on cutting all the larger triangles in half. [ 5 ] It remains to treat the case when b equals 4 or 5. If b = 4, then c ≥ 5: in this case if c ≥ 6, then b and c can be switched and the argument above applies, leaving the case b = 4 and c = 5. If b = 5, then c ≥ 4. The case c ≥ 6 can be handled by swapping b and c , so that the only extra case is b = 5 and c = 5.
This last isosceles triangle is the doubled version of the first exceptional triangle, so only the triangle Δ 1 , with angles π/2, π/4 and π/5 and hyperbolic area π/20, needs to be considered (see below). Carathéodory (1954) handles this case by a general method which works for all right angled triangles for which the two other angles are less than or equal to π/4. The previous method for constructing P 2 , P 3 , ... is modified by adding an extra triangle each time an angle 3π/2 arises at a vertex. The same reasoning applies to prove there is no overlapping and that the tiling covers the hyperbolic upper half plane. [ 5 ] On the other hand, the given configuration gives rise to an arithmetic triangle group. These were first studied in Fricke & Klein (1897) and have given rise to an extensive literature. In 1977 Takeuchi obtained a complete classification of arithmetic triangle groups (there are only finitely many) and determined when two of them are commensurable. The particular example is related to Bring's curve and the arithmetic theory implies that the triangle group for Δ 1 contains the triangle group for the triangle Δ 2 with angles π/4, π/4 and π/5 as a non-normal subgroup of index 6. [ 6 ] Doubling the triangles Δ 1 and Δ 2 , this implies that there should be a relation between 6 triangles Δ 3 with angles π/2, π/5 and π/5 and hyperbolic area π/10 and a triangle Δ 4 with angles π/5, π/10 and π/10 and hyperbolic area 3π/5. Threlfall (1932) established such a relation directly by completely elementary geometric means, without reference to the arithmetic theory: indeed as illustrated in the fifth figure below, the quadrilateral obtained by reflecting across a side of a triangle of type Δ 4 can be tiled by 12 triangles of type Δ 3 . The tessellation by triangles of the type Δ 4 can be handled by the main method in this section; this therefore proves the existence of the tessellation by triangles of type Δ 3 and Δ 1 . [ 7 ] In the case of a Schwarz triangle with one or two cusps, the process of tiling becomes simpler; but it is easier to use a different method going back to Hecke to prove that these exhaust the hyperbolic upper half plane. In the case of one cusp and non-zero angles π/ a , π/ b with a , b integers greater than one, the tiling can be envisaged in the unit disk with the vertex having angle π/ a at the origin. The tiling starts by adding 2 a − 1 copies of the triangle at the origin by successive reflections. This results in a polygon P 1 with 2 a cusps and, between each two adjacent cusps, a vertex with angle π/ b , so 2 a such vertices in all. The polygon is therefore convex. For each non-ideal vertex of P 1 , the unique triangle with that vertex can be similarly reflected around that vertex, thus adding 2 b − 1 new triangles, 2 b − 1 new ideal points and 2 b − 1 new vertices with angle π/ a . The resulting polygon P 2 is thus made up of 2 a (2 b − 1) cusps and the same number of vertices each with an angle of π/ a , so is convex. The process can be continued in this way to obtain convex polygons P 3 , P 4 , and so on. The polygon P n will have vertices having angles alternating between 0 and π/ a for n even and between 0 and π/ b for n odd. By construction the triangles only overlap at edges or vertices, so form a tiling. [ 8 ] The case where the triangle has two cusps and one non-zero angle π/ a can be reduced to the case of one cusp by observing that the triangle is the double of a triangle with one cusp and non-zero angles π/ a and π/ b with b = 2.
The tiling then proceeds as before. [ 9 ] To prove that these give tessellations, it is more convenient to work in the upper half plane. Both cases can be treated simultaneously, since the case of two cusps is obtained by doubling a triangle with one cusp and non-zero angles π/ a and π/2. So consider the geodesic triangle in the upper half plane with angles 0, π/ a , π/ b with a , b integers greater than one. The interior of such a triangle can be realised as the region X in the upper half plane lying outside the unit disk | z | ≤ 1 and between two lines parallel to the imaginary axis through points u and v on the unit circle. Let Γ be the triangle group generated by the three reflections in the sides of the triangle. To prove that the successive reflections of the triangle cover the upper half plane, it suffices to show that for any z in the upper half plane there is a g in Γ such that g ( z ) lies in X . This follows by an argument of Evans (1973) , simplified from the theory of Hecke groups . Let λ = Re u and μ = Re v so that, without loss of generality, λ < 0 ≤ μ. The three reflections in the sides are given by {\displaystyle R_{1}(z)={\frac {1}{\overline {z}}},\qquad R_{2}(z)=2\lambda -{\overline {z}},\qquad R_{3}(z)=2\mu -{\overline {z}}.} Thus T = R 3 ∘ R 2 is translation by 2(μ − λ). It follows that for any z 1 in the upper half plane, there is an element g 1 in the subgroup Γ 1 of Γ generated by R 2 and R 3 such that w 1 = g 1 ( z 1 ) satisfies λ ≤ Re w 1 ≤ μ, i.e. this strip is a fundamental domain for the group Γ 1 . If | w 1 | ≥ 1, then w 1 lies in X and the result is proved. Otherwise let z 2 = R 1 ( w 1 ) and find g 2 in Γ 1 such that w 2 = g 2 ( z 2 ) satisfies λ ≤ Re w 2 ≤ μ. If | w 2 | ≥ 1 then the result is proved. Continuing in this way, either some w n satisfies | w n | ≥ 1, in which case the result is proved; or | w n | < 1 for all n . Now since g n + 1 lies in Γ 1 , which preserves imaginary parts, and | w n | < 1, {\displaystyle \operatorname {Im} w_{n+1}=\operatorname {Im} R_{1}(w_{n})={\frac {\operatorname {Im} w_{n}}{|w_{n}|^{2}}}>\operatorname {Im} w_{n}.} In particular the sequence Im w n is increasing and {\displaystyle \operatorname {Im} w_{n+1}={\frac {\operatorname {Im} w_{1}}{(|w_{1}|\,|w_{2}|\cdots |w_{n}|)^{2}}}.} Thus, from the inequality above, the points ( w n ) lie in the compact set | z | ≤ 1, λ ≤ Re z ≤ μ and Im z ≥ Im w 1 . It follows that | w n | tends to 1; for if not, then there would be an r < 1 such that | w m | ≤ r for infinitely many m and then the last equation above would imply that Im w n tends to infinity, a contradiction. Let w be a limit point of the w n , so that | w | = 1. Thus w lies on the arc of the unit circle between u and v . If w ≠ u , v , then R 1 w n would lie in X for n sufficiently large, contrary to assumption. Hence w = u or v . Hence for n sufficiently large w n lies close to u or v and therefore must lie in one of the reflections of the triangle about the vertex u or v , since these fill out neighborhoods of u and v . Thus there is an element g in Γ such that g ( w n ) lies in X . Since by construction w n is in the Γ-orbit of z 1 , it follows that there is a point in this orbit lying in X , as required. [ 10 ] The tessellation for an ideal triangle with all its vertices on the unit circle and all its angles 0 can be considered as a special case of the tessellation for a triangle with one cusp and two non-zero angles π/3 and π/2. Indeed, the ideal triangle is made up of six copies of the one-cusped triangle, obtained by reflecting the smaller triangle about the vertex with angle π/3. Each step of the tiling, however, is uniquely determined by the positions of the new cusps on the circle, or equivalently the real axis; and these points can be understood directly in terms of Farey series following Series (2015) , Hatcher (2013 , pp. 20–32) and Hardy & Wright (2008 , pp. 23–31).
This starts from the basic step that generates the tessellation, the reflection of an ideal triangle in one of its sides. Reflection corresponds to the process of inversion in projective geometry and taking the projective harmonic conjugate , which can be defined in terms of the cross ratio . In fact if p , q , r , s are distinct points in the Riemann sphere, then there is a unique complex Möbius transformation g sending p , q and s to 0, ∞ and 1 respectively. The cross ratio ( p , q ; r , s ) is defined to be g ( r ) and is given by the formula {\displaystyle (p,q;r,s)={\frac {(r-p)(s-q)}{(r-q)(s-p)}}.} By definition it is invariant under Möbius transformations. If a , b lie on the real axis, the harmonic conjugate of c with respect to a and b is defined to be the unique real number d such that ( a , b ; c , d ) = −1. So for example if a = 1 and b = −1, the conjugate of r is 1/ r . In general Möbius invariance can be used to obtain an explicit formula for d in terms of a , b and c . Indeed, translating the centre t = ( a + b )/2 of the circle with diameter having endpoints a and b to 0, d − t is the harmonic conjugate of c − t with respect to a − t and b − t . The radius of the circle is ρ = ( b − a )/2 so ( d − t )/ρ is the harmonic conjugate of ( c − t )/ρ with respect to 1 and −1. Thus {\displaystyle {\frac {d-t}{\rho }}={\frac {\rho }{c-t}},} so that {\displaystyle d=t+{\frac {\rho ^{2}}{c-t}}.} It will now be shown that there is a parametrisation of such ideal triangles given by rationals {\displaystyle a={\frac {p_{1}}{q_{1}}},\qquad b={\frac {p}{q}},\qquad c={\frac {p_{2}}{q_{2}}}} in reduced form with a and c satisfying the "neighbour condition" p 2 q 1 − q 2 p 1 = 1. The middle term b is called the Farey sum or mediant of the outer terms and written {\displaystyle b=a\oplus c={\frac {p_{1}+p_{2}}{q_{1}+q_{2}}}.} The formula for the reflected triangle gives the new vertex as a ⊕ b . Similarly the reflected triangle in the second semicircle gives a new vertex b ⊕ c . It is immediately verified that a and b satisfy the neighbour condition, as do b and c . Now this procedure can be used to keep track of the triangles obtained by successively reflecting the basic triangle Δ with vertices 0, 1 and ∞. It suffices to consider the strip with 0 ≤ Re z ≤ 1, since the same picture is reproduced in parallel strips by applying reflections in the lines Re z = 0 and 1. The ideal triangle with vertices 0, 1, ∞ reflects in the semicircle with base [0,1] into the triangle with vertices a = 0, b = 1/2, c = 1. Thus a = 0/1 and c = 1/1 are neighbours and b = a ⊕ c . The semicircle is split up into two smaller semicircles with bases [ a , b ] and [ b , c ]. Each of these intervals splits up into two intervals by the same process, resulting in 4 intervals. Continuing in this way results in subdivisions into 8, 16, 32 intervals, and so on. At the n th stage, there are 2^ n adjacent intervals with 2^ n + 1 endpoints. The construction above shows that successive endpoints satisfy the neighbour condition so that new endpoints resulting from reflection are given by the Farey sum formula. To prove that the tiling covers the whole hyperbolic plane, it suffices to show that every rational in [0,1] eventually occurs as an endpoint. There are several ways to see this. One of the most elementary methods is described in Graham, Knuth & Patashnik (1994) in their development, without the use of continued fractions , of the theory of the Stern–Brocot tree , which codifies the new rational endpoints that appear at the n th stage. They give a direct proof that every rational appears. Indeed, starting with {0/1, 1/1}, successive endpoints are introduced at level n + 1 by adding Farey sums or mediants ( p + r )/( q + s ) between all consecutive terms p / q , r / s at the n th level (as described above).
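The mediant subdivision just described is straightforward to reproduce. The sketch below generates the successive endpoint lists starting from {0/1, 1/1}; Python's Fraction type keeps the entries in reduced form, and the neighbour condition is automatically preserved at each level. The helper name and depth are illustrative.

```python
from fractions import Fraction

# Mediant (Farey sum) subdivision: at each level, insert
# (p+r)/(q+s) between every pair of consecutive endpoints p/q, r/s.
# Consecutive entries always satisfy the neighbour condition
# q*r - p*s = 1, so the mediants are automatically in lowest terms.

def farey_levels(n):
    level = [Fraction(0, 1), Fraction(1, 1)]
    for _ in range(n):
        nxt = []
        for a, c in zip(level, level[1:]):
            mediant = Fraction(a.numerator + c.numerator,
                               a.denominator + c.denominator)
            nxt += [a, mediant]
        nxt.append(level[-1])
        level = nxt
    return level

print(farey_levels(3))
# [0, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 1]
```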
Let x = a / b be a rational lying between 0 and 1 with a and b coprime. Suppose that at some level x is sandwiched between successive terms p / q < x < r / s . These inequalities force aq − bp ≥ 1 and br − as ≥ 1 and hence, since qr − ps = 1, {\displaystyle a=r(aq-bp)+p(br-as)\geq p+r,\qquad b=q(br-as)+s(aq-bp)\geq q+s.} This puts an upper bound, a + b , on the sum of the numerators and denominators of the terms sandwiching x . On the other hand, the mediant ( p + r )/( q + s ) can be introduced and either equals x , in which case the rational x appears at this level; or the mediant provides a new interval containing x with strictly larger numerator-and-denominator sum. The process must therefore terminate after at most a + b steps, thus proving that x appears. [ 11 ] A second approach relies on the modular group G = SL(2, Z ). [ 12 ] The Euclidean algorithm implies that this group is generated by the matrices {\displaystyle S={\begin{pmatrix}0&-1\\1&0\end{pmatrix}},\qquad T={\begin{pmatrix}1&1\\0&1\end{pmatrix}}.} In fact let H be the subgroup of G generated by S and T . Let {\displaystyle g={\begin{pmatrix}a&b\\c&d\end{pmatrix}}} be an element of SL(2, Z ). Thus ad − cb = 1, so that a and c are coprime. Let {\displaystyle u={\begin{pmatrix}a\\c\end{pmatrix}},\qquad v={\begin{pmatrix}1\\0\end{pmatrix}}.} Applying S if necessary, it can be assumed that | a | > | c | (equality is not possible by coprimeness). We write a = mc + r with 0 ≤ r < | c |. But then {\displaystyle T^{-m}{\begin{pmatrix}a\\c\end{pmatrix}}={\begin{pmatrix}r\\c\end{pmatrix}}.} This process can be continued until one of the entries is 0, in which case the other is necessarily ±1. Applying a power of S if necessary, it follows that v = h u for some h in H . Hence {\displaystyle hg={\begin{pmatrix}1&q\\0&p\end{pmatrix}}} with p , q integers. Comparing determinants, p = 1, so that h g = T q . Thus g = h −1 T q lies in H as required. To prove that all rationals in [0,1] occur, it suffices to show that G carries Δ onto triangles in the tessellation. This follows by first noting that S and T carry Δ on to such a triangle: indeed as Möbius transformations, S ( z ) = −1/ z and T ( z ) = z + 1, so these give reflections of Δ in two of its sides. But then S and T conjugate the reflections in the sides of Δ into reflections in the sides of S Δ and T Δ, which lie in Γ. Thus G normalizes Γ. Since triangles in the tessellation are exactly those of the form g Δ with g in Γ, it follows that S and T , and hence all elements of G , permute triangles in the tessellation. Since every rational is of the form g (0) for g in G , every rational in [0,1] is the vertex of a triangle in the tessellation. The reflection group and tessellation for an ideal triangle can also be regarded as a limiting case of the Schottky group for three disjoint unnested circles on the Riemann sphere. Again this group is generated by hyperbolic reflections in the three circles. In both cases the three circles have a common circle which cuts them orthogonally. Using a Möbius transformation, it may be assumed to be the unit circle or equivalently the real axis in the upper half plane. [ 13 ] In this subsection the approach of Carl Ludwig Siegel to the tessellation theorem for triangles is outlined. Siegel's less elementary approach does not use convexity, instead relying on the theory of Riemann surfaces , covering spaces and a version of the monodromy theorem for coverings. It has been generalized to give proofs of the more general Poincaré polygon theorem. (Note that the special case of tiling by regular n -gons with interior angles 2π/ n is an immediate consequence of the tessellation by Schwarz triangles with angles π/ n , π/ n and π/2.) [ 14 ] [ 15 ] Let Γ be the free product Z 2 ∗ Z 2 ∗ Z 2 . If Δ = ABC is a Schwarz triangle with angles π/ a , π/ b and π/ c , where a , b , c ≥ 2, then there is a natural map of Γ onto the group generated by reflections in the sides of Δ. Elements of Γ are described by a product of the three generators where no two adjacent generators are equal.
At the vertices A , B and C the products of reflections in the sides meeting at the vertex define rotations by angles 2π/ a , 2π/ b and 2π/ c . Let g A , g B and g C be the corresponding products of generators of Γ = Z 2 ∗ Z 2 ∗ Z 2 . Let Γ 0 be the normal subgroup of index 2 of Γ, consisting of elements that are the product of an even number of generators; and let Γ 1 be the normal subgroup of Γ generated by ( g A ) a , ( g B ) b and ( g C ) c . These act trivially on Δ. Let Γ̄ = Γ/Γ 1 and Γ̄ 0 = Γ 0 /Γ 1 . The disjoint union of copies of Δ indexed by elements of Γ̄ with edge identifications has the natural structure of a Riemann surface Σ. At an interior point of a triangle there is an obvious chart. At a point in the interior of an edge the chart is obtained by reflecting the triangle across the edge. At a vertex of a triangle with interior angle π/ n , the chart is obtained from the 2 n copies of the triangle obtained by reflecting it successively around that vertex. The group Γ̄ acts by deck transformations of Σ, with elements in Γ̄ 0 acting as holomorphic mappings and elements not in Γ̄ 0 acting as antiholomorphic mappings. There is a natural map P of Σ into the hyperbolic plane. The interior of the triangle with label g in Γ̄ is taken onto g (Δ), edges are taken to edges and vertices to vertices. It is also easy to verify that a neighbourhood of an interior point of an edge is taken into a neighbourhood of the image; and similarly for vertices. Thus P is locally a homeomorphism and so takes open sets to open sets. The image P (Σ), i.e. the union of the translates g (Δ), is therefore an open subset of the upper half plane. On the other hand, this set is also closed. Indeed, if a point is sufficiently close to Δ it must be in a translate of Δ. Indeed, a neighbourhood of each vertex is filled out by the reflections of Δ and if a point lies outside these three neighbourhoods but is still close to Δ it must lie in one of the three reflections of Δ in its sides. Thus there is δ > 0 such that if z lies within a distance less than δ from Δ, then z lies in a Γ̄-translate of Δ. Since the hyperbolic distance is Γ̄-invariant, it follows that if z lies within a distance less than δ from Γ̄(Δ) it actually lies in Γ̄(Δ), so this union is closed. By connectivity it follows that P (Σ) is the whole upper half plane. On the other hand, P is a local homeomorphism, so a covering map. Since the upper half plane is simply connected, it follows that P is one-one and hence the translates of Δ tessellate the upper half plane. This is a consequence of the following version of the monodromy theorem for coverings of Riemann surfaces: if Q is a covering map between Riemann surfaces Σ 1 and Σ 2 , then any path in Σ 2 can be lifted to a path in Σ 1 and any two homotopic paths with the same end points lift to homotopic paths with the same end points; an immediate corollary is that if Σ 2 is simply connected, Q must be a homeomorphism. [ 16 ] To apply this, let Σ 1 = Σ, let Σ 2 be the upper half plane and let Q = P . By the corollary of the monodromy theorem, P must be one-one. It also follows that g (Δ) = Δ if and only if g lies in Γ 1 , so that the homomorphism of Γ̄ 0 into the Möbius group is faithful. The tessellation of the Schwarz triangles can be viewed as a generalization of the theory of infinite Coxeter groups , following the theory of hyperbolic reflection groups developed algebraically by Jacques Tits [ 17 ] and geometrically by Ernest Vinberg .
[ 18 ] In the case of the Lobachevsky or hyperbolic plane , the ideas originate in the nineteenth-century work of Henri Poincaré and Walther von Dyck . As Joseph Lehner has pointed out in Mathematical Reviews , however, rigorous proofs that reflections of a Schwarz triangle generate a tessellation have often been incomplete, his own 1964 book Discontinuous Groups and Automorphic Functions being one example. [ 19 ] [ 20 ] Carathéodory's elementary treatment in his 1950 textbook Funktionentheorie , translated into English in 1954, and Siegel's 1954 account using the monodromy principle are rigorous proofs. The approach using Coxeter groups will be summarised here, within the general framework of classification of hyperbolic reflection groups. [ 21 ] Let r, s, t be symbols and let a , b , c ≥ 2 be integers, possibly ∞ , with 1 a + 1 b + 1 c < 1. {\displaystyle {1 \over a}+{1 \over b}+{1 \over c}<1.} Define Γ to be the group with presentation having generators r, s, t that are all involutions and satisfy ( s t ) a = 1 , ( t r ) b = 1 , ( r s ) c = 1. {\displaystyle {\begin{aligned}(st)^{a}&=1,\\[2pt](tr)^{b}&=1,\\[2pt](rs)^{c}&=1.\end{aligned}}} If one of the integers is infinite, then the product has infinite order. The generators r, s, t are called the simple reflections . Set [ 22 ] A = { cos ⁡ π a if a ≥ 2 is finite, cosh ⁡ x , x > 0 otherwise. B = { cos ⁡ π b if b ≥ 2 is finite, cosh ⁡ y , y > 0 otherwise. C = { cos ⁡ π c if c ≥ 2 is finite, cosh ⁡ z , z > 0 otherwise. {\displaystyle {\begin{aligned}A&={\begin{cases}\cos {\frac {\pi }{a}}&{\text{if }}a\geq 2{\text{ is finite,}}\\[2pt]\cosh x,\ x>0&{\text{otherwise.}}\end{cases}}\\[8pt]B&={\begin{cases}\cos {\frac {\pi }{b}}&{\text{if }}b\geq 2{\text{ is finite,}}\\[2pt]\cosh y,\ y>0&{\text{otherwise.}}\end{cases}}\\[8pt]C&={\begin{cases}\cos {\frac {\pi }{c}}&{\text{if }}c\geq 2{\text{ is finite,}}\\[2pt]\cosh z,\ z>0&{\text{otherwise.}}\end{cases}}\end{aligned}}} Let e r , e s , e t be a basis for a 3-dimensional real vector space V with symmetric bilinear form Λ such that Λ ( e s , e t ) = − A , Λ ( e t , e r ) = − B , Λ ( e r , e s ) = − C , {\displaystyle {\begin{aligned}\Lambda (\mathbf {e} _{s},\mathbf {e} _{t})&=-A,\\[2pt]\Lambda (\mathbf {e} _{t},\mathbf {e} _{r})&=-B,\\[2pt]\Lambda (\mathbf {e} _{r},\mathbf {e} _{s})&=-C,\end{aligned}}} with the three diagonal entries equal to one. The symmetric bilinear form Λ is non-degenerate with signature (2, 1) . Define: ρ ( v ) = v − 2 Λ ( v , e r ) e r σ ( v ) = v − 2 Λ ( v , e s ) e s τ ( v ) = v − 2 Λ ( v , e t ) e t {\displaystyle {\begin{aligned}\rho (\mathbf {v} )&=\mathbf {v} -2\Lambda (\mathbf {v} ,\mathbf {e} _{r})\mathbf {e} _{r}\\[2pt]\sigma (\mathbf {v} )&=\mathbf {v} -2\Lambda (\mathbf {v} ,\mathbf {e} _{s})\mathbf {e} _{s}\\[2pt]\tau (\mathbf {v} )&=\mathbf {v} -2\Lambda (\mathbf {v} ,\mathbf {e} _{t})\mathbf {e} _{t}\end{aligned}}} Theorem (geometric representation). The operators ρ, σ, τ are involutions on V , with respective eigenvectors e r , e s , e t with simple eigenvalue −1. The products of the operators have orders corresponding to the presentation above (so στ has order a , etc). The operators ρ, σ, τ induce a representation of Γ on V which preserves Λ . The bilinear form Λ for the basis has matrix M = ( 1 − C − B − C 1 − A − B − A 1 ) , {\displaystyle M={\begin{pmatrix}1&-C&-B\\-C&1&-A\\-B&-A&1\\\end{pmatrix}},} so has determinant det ( M ) = 1 − A 2 − B 2 − C 2 − 2 A B C . 
{\displaystyle \det(M)=1-A^{2}-B^{2}-C^{2}-2ABC.} If c = 2 , say, then the eigenvalues of the matrix are 1 , 1 ± A 2 + B 2 . {\displaystyle 1,\ 1\pm {\sqrt {A^{2}+B^{2}}}.} The condition 1 a + 1 b < 1 2 {\displaystyle {\tfrac {1}{a}}+{\tfrac {1}{b}}<{\tfrac {1}{2}}} immediately forces A 2 + B 2 > 1 , {\displaystyle A^{2}+B^{2}>1,} so that Λ must have signature (2, 1) . So in general it may be assumed that a , b , c ≥ 3 . Clearly the case where all are equal to 3 is impossible. But then the determinant of the matrix is negative while its trace is positive. As a result two eigenvalues are positive and one negative, i.e. Λ has signature (2, 1) . Manifestly ρ, σ, τ are involutions, preserving Λ with the given −1 eigenvectors. To check the order of the products like στ , it suffices to note that: (1) the group generated by σ and τ is dihedral; (2) the 2-dimensional subspace U spanned by e s and e t is invariant under σ and τ , with Λ restricted to U positive-definite; (3) the orthogonal complement W = U ⊥ is fixed pointwise by σ and τ . Claim (1) is clear since γ = στ generates a normal subgroup and σγσ −1 = γ −1 . For (2), U is invariant by definition and the matrix is positive-definite since 0 < cos ⁡ π a < 1. {\displaystyle 0<\cos {\tfrac {\pi }{a}}<1.} For (3), since Λ has signature (2, 1) , a non-zero vector w in W must satisfy Λ( w , w ) < 0 . The (−1)-eigenspace of σ is spanned by e s , which lies in U , so w must be fixed by σ . Similarly w must be fixed by τ , so that (3) is proved. Finally, the explicit action on U is given by σ ( e s ) = − e s , τ ( e s ) = e s + 2 cos ⁡ ( π a ) e t , σ ( e t ) = 2 cos ⁡ ( π a ) e s + e t , τ ( e t ) = − e t , {\displaystyle {\begin{alignedat}{5}\sigma (\mathbf {e} _{s})&=-{\mathbf {e} }_{s},&\quad \tau (\mathbf {e} _{s})&=\mathbf {e} _{s}+2\cos({\tfrac {\pi }{a}})\,\mathbf {e} _{t},\\[2pt]\sigma (\mathbf {e} _{t})&=2\cos({\tfrac {\pi }{a}})\,\mathbf {e} _{s}+\mathbf {e} _{t},&\quad \tau (\mathbf {e} _{t})&=-{\mathbf {e} }_{t},\end{alignedat}}} so that, if a is finite, the eigenvalues of στ are 1, ς and ς −1 , where ς = e 2 π i a ; {\displaystyle \varsigma =e^{\frac {2\pi i}{a}};} and if a is infinite, the eigenvalues are 1, X and X −1 , where X = e 2 x . {\displaystyle X=e^{2x}.} Moreover a straightforward induction argument shows that if θ = π a {\displaystyle \theta ={\tfrac {\pi }{a}}} then [ 23 ] ( σ τ ) m ( e s ) = [ sin ⁡ ( 2 m + 1 ) θ sin ⁡ θ ] e s + [ sin ⁡ 2 m θ sin ⁡ θ ] e t , τ ( σ τ ) m ( e s ) = [ sin ⁡ ( 2 m + 1 ) θ sin ⁡ θ ] e s + [ sin ⁡ ( 2 m + 2 ) θ sin ⁡ θ ] e t , {\displaystyle {\begin{aligned}(\sigma \tau )^{m}({\mathbf {e} }_{s})&=\left[{\frac {\sin(2m+1)\theta }{\sin \theta }}\right]{\mathbf {e} }_{s}+\left[{\frac {\sin 2m\theta }{\sin \theta }}\right]{\mathbf {e} }_{t},\\[4pt]\tau (\sigma \tau )^{m}({\mathbf {e} }_{s})&=\left[{\frac {\sin(2m+1)\theta }{\sin \theta }}\right]{\mathbf {e} }_{s}+\left[{\frac {\sin(2m+2)\theta }{\sin \theta }}\right]{\mathbf {e} }_{t},\end{aligned}}} and if x > 0 then ( σ τ ) m ( e s ) = [ sinh ⁡ ( 2 m + 1 ) x sinh ⁡ x ] e s + [ sinh ⁡ 2 m x sinh ⁡ x ] e t , lim x → 0 ( σ τ ) m ( e s ) = ( 2 m + 1 ) e s + 2 m e t ; τ ( σ τ ) m ( e s ) = [ sinh ⁡ ( 2 m + 1 ) x sinh ⁡ x ] e s + [ sinh ⁡ ( 2 m + 2 ) x sinh ⁡ x ] e t , lim x → 0 τ ( σ τ ) m ( e s ) = ( 2 m + 1 ) e s + ( 2 m + 2 ) e t . 
{\displaystyle {\begin{aligned}(\sigma \tau )^{m}({\mathbf {e} }_{s})&=\left[{\frac {\sinh(2m+1)x}{\sinh x}}\right]{\mathbf {e} }_{s}+\left[{\frac {\sinh 2mx}{\sinh x}}\right]{\mathbf {e} }_{t},\\[4pt]\lim _{x\to 0}\ (\sigma \tau )^{m}(\mathbf {e} _{s})&=(2m+1)\mathbf {e} _{s}+2m\mathbf {e} _{t};\\[12pt]\tau (\sigma \tau )^{m}({\mathbf {e} }_{s})&=\left[{\frac {\sinh(2m+1)x}{\sinh x}}\right]{\mathbf {e} }_{s}+\left[{\frac {\sinh(2m+2)x}{\sinh x}}\right]{\mathbf {e} }_{t},\\[4pt]\lim _{x\to 0}\,\tau (\sigma \tau )^{m}(\mathbf {e} _{s})&=(2m+1)\mathbf {e} _{s}+(2m+2)\mathbf {e} _{t}.\end{aligned}}} Let Γ a be the dihedral subgroup of Γ generated by s and t , with analogous definitions for Γ b and Γ c . Similarly define Γ r to be the cyclic subgroup of Γ given by the 2-group {1, r }, with analogous definitions for Γ s and Γ t . From the properties of the geometric representation, all six of these groups act faithfully on V . In particular Γ a can be identified with the group generated by σ and τ ; as above it decomposes explicitly as a direct sum of the 2-dimensional irreducible subspace U and the 1-dimensional subspace W with a trivial action. Thus there is a unique vector w = e r + λ e s + μ e t {\displaystyle \mathbf {w} =\mathbf {e} _{r}+\lambda \mathbf {e} _{s}+\mu \mathbf {e} _{t}} in W satisfying σ ( w ) = w and τ ( w ) = w . Explicitly, λ = C + A B 1 − A 2 , μ = B + A C 1 − A 2 . {\displaystyle \lambda ={\frac {C+AB}{1-A^{2}}},\quad \mu ={\frac {B+AC}{1-A^{2}}}.} Remark on representations of dihedral groups. It is well known that, for finite-dimensional real inner product spaces, the space can be decomposed under two orthogonal involutions S and T as an orthogonal direct sum of 2-dimensional or 1-dimensional invariant subspaces; for example, this can be deduced from the observation of Paul Halmos and others that the positive self-adjoint operator ( S − T ) 2 commutes with both S and T . In the case above, however, where the bilinear form Λ is no longer a positive definite inner product, different ad hoc reasoning has to be given. Theorem (Tits). The geometric representation of the Coxeter group is faithful. This result was first proved by Tits in the early 1960s and first published in the text of Bourbaki (1968) with its numerous exercises. In the text, the fundamental chamber was introduced by an inductive argument; exercise 8 in §4 of Chapter V was expanded by Vinay Deodhar to develop a theory of positive and negative roots and thus shorten the original argument of Tits. [ 24 ] Let X be the convex cone of sums κ e r + λ e s + μ e t with real non-negative coefficients, not all of them zero. For g in the group Γ , define ℓ( g ) , the word length or length , to be the minimum number of reflections from r, s, t required to write g as an ordered composition of simple reflections. Define a positive root to be a vector g e r , g e s or g e t lying in X , with g in Γ . [ b ] It is routine to check from the definitions that [ 25 ] Proposition. If g is in Γ and ℓ( gq ) = ℓ( g ) ± 1 for a simple reflection q , then g e q lies in ± X , and is therefore a positive or negative root, according to the sign. Replacing g by gq , only the positive sign needs to be considered. The assertion will be proved by induction on ℓ( g ) = m , it being trivial for m = 0 . Assume that ℓ( gs ) = ℓ( g ) + 1 . If ℓ( g ) = m > 0 , without loss of generality it may be assumed that the minimal expression for g ends with ...t .
Since s and t generate the dihedral group Γ a , g can be written as a product g = hk , where k = ( st ) n or t ( st ) n and h has a minimal expression that ends with ...r , but never with s or t . This implies that ℓ( hs ) = ℓ( h ) + 1 and ℓ( ht ) = ℓ( h ) + 1 . Since ℓ( h ) < m , the induction hypothesis shows that both h e s , h e t lie in X . It therefore suffices to show that k e s has the form λ e s + μ e t with λ , μ ≥ 0 , not both 0. But that has already been verified in the formulas above. [ 25 ] Corollary (proof of Tits' theorem). The geometric representation is faithful. It suffices to show that if g fixes e r , e s , e t , then g = 1 . Considering a minimal expression for g ≠ 1 , the conditions ℓ( gq ) = ℓ( g ) + 1 clearly cannot be simultaneously satisfied by the three simple reflections q . Note that, as a consequence of Tits' theorem, the generators (left) satisfy the conditions (right): ( g = s t ) s.t. g a = 1 , ( h = t r ) s.t. h b = 1 , ( k = r s ) s.t. k c = 1 , g h k = 1. {\displaystyle {\begin{alignedat}{5}(g&=st)\ \ {\text{ s.t. }}&g^{a}=1,\\[4pt](h&=tr)\ \ {\text{ s.t. }}&h^{b}=1,\\[4pt](k&=rs)\ \ {\text{ s.t. }}&k^{c}=1,\\[4pt]&&ghk=1.\end{alignedat}}} This gives a presentation of the orientation-preserving index 2 normal subgroup Γ 1 of Γ . The presentation corresponds to the fundamental domain obtained by reflecting two sides of the geodesic triangle to form a geodesic parallelogram (a special case of Poincaré's polygon theorem). [ 26 ] Further consequences. The roots are the disjoint union of the positive roots and the negative roots. The simple reflection q permutes every positive root other than e q . For g in Γ , ℓ( g ) is the number of positive roots made negative by g . Fundamental domain and Tits cone. [ 27 ] Let G be the 3-dimensional closed Lie subgroup of GL( V ) preserving Λ . As V can be identified with a 3-dimensional Lorentzian or Minkowski space with signature (2,1) , the group G is isomorphic to the Lorentz group O(2,1) and therefore to S L ± ( 2 , R ) / { ± I } . {\displaystyle \mathrm {SL} _{\pm }(2,\mathbb {R} )/\{\pm \,I\,\}.} [ c ] Choosing e to be a positive root vector in X , the stabilizer of e is a maximal compact subgroup K of G isomorphic to O(2) . The homogeneous space X = G / K is a symmetric space of constant negative curvature, which can be identified with the 2-dimensional hyperboloid or Lobachevsky plane H 2 {\displaystyle {\mathfrak {H}}^{2}} . The discrete group Γ acts discontinuously on G / K : the quotient space Γ \ G / K is compact if a, b, c are all finite, and of finite area otherwise. Results about the Tits fundamental chamber have a natural interpretation in terms of the corresponding Schwarz triangle, which translates directly into properties of the tessellation of the geodesic triangle through the hyperbolic reflection group Γ . The passage from Coxeter groups to tessellation can first be found in the exercises of §4 of Chapter V of Bourbaki (1968) , due to Tits, and in Iwahori (1966) ; currently numerous other equivalent treatments are available, not always directly phrased in terms of symmetric spaces. Maskit (1971) gave a general proof of Poincaré's polygon theorem in hyperbolic space; a similar proof was given in de Rham (1971) . Specializing to the hyperbolic plane and Schwarz triangles, this can be used to give a modern approach for establishing the existence of Schwarz triangle tessellations, as described in Beardon (1983) and Maskit (1988) .
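The linear-algebra core of the geometric representation is easy to check numerically. For a hyperbolic triple (a, b, c) the sketch below forms the Gram matrix M of Λ from A = cos(π/a), B = cos(π/b), C = cos(π/c), and confirms that its determinant 1 − A² − B² − C² − 2ABC is negative with eigenvalue signs giving signature (2, 1); the use of numpy and the sample triples are assumptions of the illustration.

```python
import math
import numpy as np

# Gram matrix of the bilinear form Lambda for a triple (a, b, c) with
# 1/a + 1/b + 1/c < 1, as in the Coxeter-group discussion above.

def gram(a, b, c):
    A, B, C = (math.cos(math.pi / n) for n in (a, b, c))
    return np.array([[1, -C, -B],
                     [-C, 1, -A],
                     [-B, -A, 1]])

for triple in [(2, 3, 7), (2, 4, 5), (3, 3, 4)]:
    M = gram(*triple)
    eig = np.linalg.eigvalsh(M)
    print(triple, round(float(np.linalg.det(M)), 4),
          [round(float(e), 3) for e in eig])
    # determinant < 0; two positive and one negative eigenvalue,
    # i.e. signature (2, 1)
```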
The Swiss mathematicians de la Harpe (1991) and Haefliger have provided an introductory account, taking geometric group theory as their starting point. [ 28 ]
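The dihedral relations underlying the geometric representation can be checked numerically. The following Python sketch builds the three reflection matrices for a hyperbolic triple ( a , b , c ) = (2, 3, 7) under the standard normalization Λ( e i , e i ) = 1 and Λ( e i , e j ) = −cos(π/ m ij ) (an assumption here, matching the usual convention for the Tits form), and verifies the relations and the Lorentzian signature (2,1) used above.

```python
import numpy as np

# A minimal numerical check of the geometric (Tits) representation of a
# triangle Coxeter group, assuming the standard conventions: the bilinear
# form has Lambda(e_i, e_i) = 1 and Lambda(e_i, e_j) = -cos(pi/m_ij), and
# each generator acts by q(v) = v - 2*Lambda(v, e_q)*e_q.
a, b, c = 2, 3, 7          # a hyperbolic triple: 1/a + 1/b + 1/c < 1

# Gram matrix of Lambda in the basis (e_r, e_s, e_t), with the orders
# m_st = a, m_tr = b, m_rs = c as in the presentation above.
L = np.array([
    [1.0,              -np.cos(np.pi/c), -np.cos(np.pi/b)],
    [-np.cos(np.pi/c),  1.0,             -np.cos(np.pi/a)],
    [-np.cos(np.pi/b), -np.cos(np.pi/a),  1.0             ],
])

def reflection(i):
    """Matrix of v -> v - 2*Lambda(v, e_i)*e_i in the basis (e_r, e_s, e_t)."""
    R = np.eye(3)
    R[i, :] -= 2.0 * L[i, :]
    return R

r, s, t = reflection(0), reflection(1), reflection(2)

I = np.eye(3)
assert np.allclose(r @ r, I) and np.allclose(s @ s, I) and np.allclose(t @ t, I)
assert np.allclose(np.linalg.matrix_power(s @ t, a), I)   # (st)^a = 1
assert np.allclose(np.linalg.matrix_power(t @ r, b), I)   # (tr)^b = 1
assert np.allclose(np.linalg.matrix_power(r @ s, c), I)   # (rs)^c = 1
# The form has Lorentzian signature (2,1) in the hyperbolic case:
print(sorted(np.linalg.eigvalsh(L)))   # one negative, two positive eigenvalues
```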
https://en.wikipedia.org/wiki/Schwarz_triangle
In general relativity , Schwarzschild geodesics describe the motion of test particles in the gravitational field of a central fixed mass M , {\textstyle M,} that is, motion in the Schwarzschild metric. Schwarzschild geodesics have been pivotal in the validation of Einstein's theory of general relativity . For example, they provide accurate predictions of the anomalous precession of the planets in the Solar System and of the deflection of light by gravity. Schwarzschild geodesics pertain only to the motion of particles of masses so small they contribute little to the gravitational field. However, they are highly accurate in many astrophysical scenarios provided that the test mass m {\textstyle m} is many-fold smaller than the central mass M {\textstyle M} , e.g., for planets orbiting their star. Schwarzschild geodesics are also a good approximation to the relative motion of two bodies of arbitrary mass, provided that the Schwarzschild mass M {\textstyle M} is set equal to the sum of the two individual masses m 1 {\textstyle m_{1}} and m 2 {\textstyle m_{2}} . This is important in predicting the motion of binary stars in general relativity.

The Schwarzschild metric is named in honour of its discoverer Karl Schwarzschild , who found the solution in 1915, only about a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution . In 1931, Yusuke Hagihara published a paper showing that the trajectory of a test particle in the Schwarzschild metric can be expressed in terms of elliptic functions . [ 1 ] In 1949, Samuil Kaplan showed that there is a minimum radius below which a circular orbit in the Schwarzschild metric ceases to be stable. [ 2 ]

An exact solution to the Einstein field equations is the Schwarzschild metric , which corresponds to the external gravitational field of an uncharged, non-rotating, spherically symmetric body of mass M {\textstyle M} . The Schwarzschild solution can be written as [ 3 ] c 2 d τ 2 = ( 1 − r s / r ) c 2 d t 2 − d r 2 / ( 1 − r s / r ) − r 2 ( d θ 2 + sin 2 θ d φ 2 ) {\displaystyle c^{2}\,d\tau ^{2}=\left(1-{\frac {r_{\text{s}}}{r}}\right)c^{2}\,dt^{2}-{\frac {dr^{2}}{1-{\frac {r_{\text{s}}}{r}}}}-r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\varphi ^{2}\right),} where τ {\textstyle \tau } is the proper time, t , r , θ , φ {\textstyle t,r,\theta ,\varphi } are the Schwarzschild coordinates, and r s = 2 G M c 2 {\textstyle r_{\text{s}}={\frac {2GM}{c^{2}}}} is the Schwarzschild radius of the massive body, with G {\textstyle G} the gravitational constant . In practice, the ratio r s r {\textstyle {\frac {r_{\text{s}}}{r}}} is almost always extremely small. For example, the Schwarzschild radius r s {\textstyle r_{\text{s}}} of the Earth is roughly 9 mm ( 3 ⁄ 8 inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio r s r {\textstyle {\frac {r_{\text{s}}}{r}}} is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes .

We may simplify the problem by using symmetry to eliminate one variable from consideration. Since the Schwarzschild metric is symmetrical about θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , any geodesic that begins moving in that plane will remain in that plane indefinitely (the plane is totally geodesic ). Therefore, we orient the coordinate system so that the orbit of the particle lies in that plane, and fix the θ {\textstyle \theta } coordinate to be π 2 {\textstyle {\frac {\pi }{2}}} so that the metric (of this plane) simplifies to c 2 d τ 2 = ( 1 − r s / r ) c 2 d t 2 − d r 2 / ( 1 − r s / r ) − r 2 d φ 2 . {\displaystyle c^{2}\,d\tau ^{2}=\left(1-{\frac {r_{\text{s}}}{r}}\right)c^{2}\,dt^{2}-{\frac {dr^{2}}{1-{\frac {r_{\text{s}}}{r}}}}-r^{2}\,d\varphi ^{2}.} Two constants of motion (values that do not change over proper time τ {\displaystyle \tau } ) can be identified (cf. the derivation given below ).
One is the total energy E {\textstyle E} : E = ( 1 − r s r ) m c 2 d t d τ , {\displaystyle E=\left(1-{\frac {r_{\text{s}}}{r}}\right)mc^{2}\,{\frac {dt}{d\tau }},} and the other is the specific angular momentum : h = L μ = r 2 d φ d τ , {\displaystyle h={\frac {L}{\mu }}=r^{2}{\frac {d\varphi }{d\tau }},} where L {\textstyle L} is the total angular momentum of the two bodies, and μ {\textstyle \mu } is the reduced mass . When M ≫ m {\textstyle M\gg m} , the reduced mass is approximately equal to m {\textstyle m} . Sometimes it is assumed that m = μ {\textstyle m=\mu } . In the case of the planet Mercury this simplification introduces an error more than twice as large as the relativistic effect. When discussing geodesics, m {\textstyle m} can be considered fictitious, and what matters are the constants E m {\textstyle {\frac {E}{m}}} and h {\textstyle h} . In order to cover all possible geodesics, we need to consider cases in which E m {\textstyle {\frac {E}{m}}} is infinite (giving trajectories of photons ) or imaginary (for tachyonic geodesics). For the photonic case, we also need to specify a number corresponding to the ratio of the two constants, namely m h E {\textstyle {\frac {mh}{E}}} , which may be zero or a non-zero real number.

Substituting these constants into the definition of the Schwarzschild metric yields an equation of motion for the radius as a function of the proper time τ {\textstyle \tau } : ( d r d τ ) 2 = E 2 m 2 c 2 − ( 1 − r s r ) ( c 2 + h 2 r 2 ) . {\displaystyle \left({\frac {dr}{d\tau }}\right)^{2}={\frac {E^{2}}{m^{2}c^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left(c^{2}+{\frac {h^{2}}{r^{2}}}\right).} The formal solution to this is τ = ∫ d r ± E 2 m 2 c 2 − ( 1 − r s r ) ( c 2 + h 2 r 2 ) . {\displaystyle \tau =\int {\frac {dr}{\pm {\sqrt {{\frac {E^{2}}{m^{2}c^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left(c^{2}+{\frac {h^{2}}{r^{2}}}\right)}}}}.} Note that the square root will be imaginary for tachyonic geodesics. Using the relation given above between d t d τ {\textstyle {\frac {dt}{d\tau }}} and E {\textstyle E} , we can also write t = ∫ E m c 2 ( 1 − r s r ) d r ± E 2 m 2 c 2 − ( 1 − r s r ) ( c 2 + h 2 r 2 ) . {\displaystyle t=\int {\frac {E}{mc^{2}\left(1-{\frac {r_{\text{s}}}{r}}\right)}}\,{\frac {dr}{\pm {\sqrt {{\frac {E^{2}}{m^{2}c^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left(c^{2}+{\frac {h^{2}}{r^{2}}}\right)}}}}.} Since asymptotically the integrand is inversely proportional to r − r s {\textstyle r-r_{\text{s}}} , this shows that in the r , θ , φ , t {\textstyle r,\theta ,\varphi ,t} frame of reference if r {\textstyle r} approaches r s {\textstyle r_{\text{s}}} it does so exponentially without ever reaching it. However, as a function of τ {\textstyle \tau } , r {\textstyle r} does reach r s {\textstyle r_{\text{s}}} . The above solutions are valid while the integrand is finite, but a total solution may involve two or an infinity of pieces, each described by the integral but with alternating signs for the square root.

When E = m c 2 {\textstyle E=mc^{2}} and h = 0 {\textstyle h=0} , we can solve for t {\textstyle t} and τ {\textstyle \tau } explicitly; the proper time is τ = constant ± 2 r 3 / 2 3 c r s , {\displaystyle \tau ={\text{constant}}\pm {\frac {2r^{3/2}}{3c{\sqrt {r_{\text{s}}}}}},} (the corresponding expression for t {\textstyle t} is similar but lengthier), and for photonic geodesics ( m = 0 {\textstyle m=0} ) with zero angular momentum c t = constant ± ( r + r s ln ⁡ | r r s − 1 | ) . {\displaystyle ct={\text{constant}}\pm \left(r+r_{\text{s}}\ln \left|{\frac {r}{r_{\text{s}}}}-1\right|\right).} (Although the proper time is trivial in the photonic case, one can define an affine parameter λ {\textstyle \lambda } , and then the solution to the geodesic equation is r = c 1 λ + c 2 {\textstyle r=c_{1}\lambda +c_{2}} .)

Another solvable case is that in which E = 0 {\textstyle E=0} and t {\textstyle t} and φ {\textstyle \varphi } are constant. In the region where r < r s {\textstyle r<r_{\text{s}}} this gives for the proper time τ = constant ± r s c [ arcsin ⁡ r r s − r r s ( 1 − r r s ) ] . {\displaystyle \tau ={\text{constant}}\pm {\frac {r_{\text{s}}}{c}}\left[\arcsin {\sqrt {\frac {r}{r_{\text{s}}}}}-{\sqrt {{\frac {r}{r_{\text{s}}}}\left(1-{\frac {r}{r_{\text{s}}}}\right)}}\right].} This is close to solutions with E 2 m 2 {\textstyle {\frac {E^{2}}{m^{2}}}} small and positive. Outside of r s {\textstyle r_{\text{s}}} the E = 0 {\textstyle E=0} solution is tachyonic and the "proper time" is space-like; this is close to other tachyonic solutions with E 2 m 2 {\textstyle {\frac {E^{2}}{m^{2}}}} small and negative. The constant t {\textstyle t} tachyonic geodesic outside r s {\textstyle r_{\text{s}}} is not continued by a constant t {\textstyle t} geodesic inside r s {\textstyle r_{\text{s}}} , but rather continues into a "parallel exterior region" (see Kruskal–Szekeres coordinates ). Other tachyonic solutions can enter a black hole and re-exit into the parallel exterior region.
The constant t {\textstyle t} solution inside the event horizon ( r < r s {\textstyle r<r_{\text{s}}} ) is continued by a constant t {\textstyle t} solution in a white hole .

When the angular momentum is not zero we can replace the dependence on proper time by a dependence on the angle φ {\textstyle \varphi } using the definition of h {\textstyle h} , ( d r d φ ) 2 = ( d r d τ ) 2 r 4 h 2 , {\displaystyle \left({\frac {dr}{d\varphi }}\right)^{2}=\left({\frac {dr}{d\tau }}\right)^{2}{\frac {r^{4}}{h^{2}}},} which yields the equation for the orbit ( d r d φ ) 2 = r 4 b 2 − ( 1 − r s r ) ( r 4 a 2 + r 2 ) , {\displaystyle \left({\frac {dr}{d\varphi }}\right)^{2}={\frac {r^{4}}{b^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {r^{4}}{a^{2}}}+r^{2}\right),} where, for brevity, two length-scales, a {\textstyle a} and b {\textstyle b} , have been defined by a = h c , b = c h m E . {\displaystyle a={\frac {h}{c}},\qquad b={\frac {chm}{E}}.} Note that in the tachyonic case, a {\textstyle a} will be imaginary and b {\textstyle b} real or infinite. The same equation can also be derived using a Lagrangian approach [ 4 ] or the Hamilton–Jacobi equation [ 5 ] (see below ). The solution of the orbit equation is φ = ∫ d r ± r 2 1 b 2 − ( 1 − r s r ) ( 1 a 2 + 1 r 2 ) . {\displaystyle \varphi =\int {\frac {dr}{\pm r^{2}{\sqrt {{\frac {1}{b^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {1}{a^{2}}}+{\frac {1}{r^{2}}}\right)}}}}.} This can be expressed in terms of the Weierstrass elliptic function ℘ {\textstyle \wp } . [ 6 ]

Unlike in classical mechanics, in Schwarzschild coordinates d r d τ {\textstyle {\frac {{\rm {d}}r}{{\rm {d}}\tau }}} and r d φ d τ {\textstyle r\ {\frac {{\rm {d}}\varphi }{{\rm {d}}\tau }}} are not the radial v ∥ {\textstyle v_{\parallel }} and transverse v ⊥ {\textstyle v_{\perp }} components of the local velocity v {\textstyle v} (relative to a stationary observer); instead they give the components for the celerity , which are related to v {\textstyle v} by d r d τ = v ∥ 1 − r s r 1 − v 2 c 2 {\displaystyle {\frac {dr}{d\tau }}={\frac {v_{\parallel }{\sqrt {1-{\frac {r_{\text{s}}}{r}}}}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} for the radial and r d φ d τ = v ⊥ 1 − v 2 c 2 {\displaystyle r\,{\frac {d\varphi }{d\tau }}={\frac {v_{\perp }}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} for the transverse component of motion, with v 2 = v ∥ 2 + v ⊥ 2 {\textstyle v^{2}=v_{\parallel }^{2}+v_{\perp }^{2}} . The coordinate bookkeeper far away from the scene observes the Shapiro -delayed velocity v ^ {\textstyle {\hat {v}}} , which is given by the relations v ^ ∥ = v ∥ ( 1 − r s r ) , v ^ ⊥ = v ⊥ 1 − r s r . {\displaystyle {\hat {v}}_{\parallel }=v_{\parallel }\left(1-{\frac {r_{\text{s}}}{r}}\right),\qquad {\hat {v}}_{\perp }=v_{\perp }{\sqrt {1-{\frac {r_{\text{s}}}{r}}}}.} The time dilation factor between the bookkeeper and the moving test-particle can also be put into the form d τ d t = 1 − r s r 1 − v 2 c 2 , {\displaystyle {\frac {d\tau }{dt}}={\sqrt {1-{\frac {r_{\text{s}}}{r}}}}\,{\sqrt {1-{\frac {v^{2}}{c^{2}}}}},} where the first factor is the gravitational and the second the kinematic component of the time dilation. For a particle falling in from infinity the left factor equals the right factor, since the in-falling velocity v {\textstyle v} matches the escape velocity c r s r {\textstyle c{\sqrt {\frac {r_{\text{s}}}{r}}}} in this case.

The two constants angular momentum L {\textstyle L} and total energy E {\textstyle E} of a test-particle with mass m {\textstyle m} are, in terms of v {\textstyle v} , L = γ m r v ⊥ {\displaystyle L=\gamma \,m\,r\,v_{\perp }} and E = γ m c 2 1 − r s r , {\displaystyle E=\gamma \,mc^{2}{\sqrt {1-{\frac {r_{\text{s}}}{r}}}},} where for massive test particles γ {\textstyle \gamma } is the Lorentz factor γ = 1 / 1 − v 2 / c 2 {\textstyle \gamma =1/{\sqrt {1-v^{2}/c^{2}}}} and τ {\textstyle \tau } is the proper time, while for massless particles like photons γ {\textstyle \gamma } is set to 1 {\textstyle 1} and τ {\textstyle \tau } takes the role of an affine parameter. If the particle is massless E r e s t {\textstyle E_{\rm {rest}}} is replaced with E k i n {\textstyle E_{\rm {kin}}} and m c 2 {\textstyle mc^{2}} with h f {\textstyle hf} , where h {\textstyle h} is the Planck constant and f {\textstyle f} the locally observed frequency.

The fundamental equation of the orbit is easier to solve [ note 1 ] if it is expressed in terms of the inverse radius u = 1 r {\textstyle u={\frac {1}{r}}} : ( d u d φ ) 2 = 1 b 2 − ( 1 − u r s ) ( 1 a 2 + u 2 ) . {\displaystyle \left({\frac {du}{d\varphi }}\right)^{2}={\frac {1}{b^{2}}}-\left(1-ur_{\text{s}}\right)\left({\frac {1}{a^{2}}}+u^{2}\right).} The right-hand side of this equation is a cubic polynomial , which has three roots , denoted here as u 1 {\textstyle u_{1}} , u 2 {\textstyle u_{2}} , and u 3 {\textstyle u_{3}} : ( d u d φ ) 2 = r s ( u − u 1 ) ( u − u 2 ) ( u − u 3 ) . {\displaystyle \left({\frac {du}{d\varphi }}\right)^{2}=r_{\text{s}}\left(u-u_{1}\right)\left(u-u_{2}\right)\left(u-u_{3}\right).} The sum of the three roots equals the coefficient of the u 2 {\textstyle u^{2}} term (after dividing out r s {\textstyle r_{\text{s}}} ): u 1 + u 2 + u 3 = 1 r s . {\displaystyle u_{1}+u_{2}+u_{3}={\frac {1}{r_{\text{s}}}}.} A cubic polynomial with real coefficients can either have three real roots, or one real root and two complex conjugate roots. If all three roots are real numbers , the roots are labeled so that u 1 < u 2 < u 3 {\textstyle u_{1}<u_{2}<u_{3}} .
If instead there is only one real root, then that is denoted as u 3 {\textstyle u_{3}} ; the complex conjugate roots are labeled u 1 {\textstyle u_{1}} and u 2 {\textstyle u_{2}} . Using Descartes' rule of signs , there can be at most one negative root; u 1 {\textstyle u_{1}} is negative if and only if b < a {\textstyle b<a} . As discussed below, the roots are useful in determining the types of possible orbits. Given this labeling of the roots, the solution of the fundamental orbital equation is u = u 1 + ( u 2 − u 1 ) s n 2 ( 1 2 φ r s ( u 3 − u 1 ) + δ ; k ) , {\displaystyle u=u_{1}+\left(u_{2}-u_{1}\right)\,\mathrm {sn} ^{2}\!\left({\tfrac {1}{2}}\varphi {\sqrt {r_{\text{s}}\left(u_{3}-u_{1}\right)}}+\delta ;\,k\right),} where s n {\textstyle \mathrm {sn} } represents the sinus amplitudinis function (one of the Jacobi elliptic functions ) and δ {\textstyle \delta } is a constant of integration reflecting the initial position. The elliptic modulus k {\textstyle k} of this elliptic function is given by the formula k = u 2 − u 1 u 3 − u 1 . {\displaystyle k={\sqrt {\frac {u_{2}-u_{1}}{u_{3}-u_{1}}}}.}

To recover the Newtonian solution for the planetary orbits, one takes the limit as the Schwarzschild radius r s {\textstyle r_{\text{s}}} goes to zero. In this case, the third root u 3 {\textstyle u_{3}} becomes roughly 1 r s {\textstyle {\frac {1}{r_{\text{s}}}}} , and much larger than u 1 {\textstyle u_{1}} or u 2 {\textstyle u_{2}} . Therefore, the modulus k {\textstyle k} tends to zero; in that limit, s n {\textstyle \mathrm {sn} } becomes the trigonometric sine function, u = u 1 + ( u 2 − u 1 ) sin 2 ⁡ ( 1 2 φ + δ ) . {\displaystyle u=u_{1}+\left(u_{2}-u_{1}\right)\sin ^{2}\!\left({\tfrac {1}{2}}\varphi +\delta \right).} Consistent with Newton's solutions for planetary motions, this formula describes a focal conic of eccentricity e = u 2 − u 1 u 2 + u 1 . {\displaystyle e={\frac {u_{2}-u_{1}}{u_{2}+u_{1}}}.} If u 1 {\textstyle u_{1}} is a positive real number, then the orbit is an ellipse where 1 u 1 {\textstyle {\frac {1}{u_{1}}}} and 1 u 2 {\textstyle {\frac {1}{u_{2}}}} represent the distances of furthest and closest approach, respectively. If u 1 {\textstyle u_{1}} is zero or a negative real number, the orbit is a parabola or a hyperbola , respectively. In these latter two cases, 1 u 2 {\textstyle {\frac {1}{u_{2}}}} represents the distance of closest approach; since the orbit goes to infinity ( u = 0 {\textstyle u=0} ), there is no distance of furthest approach.

A root represents a point of the orbit where the derivative vanishes, i.e., where d u d ϕ = 0 {\textstyle {\frac {du}{d\phi }}=0} . At such a turning point, u {\textstyle u} reaches a maximum, a minimum, or an inflection point , depending on the value of the second derivative, which is given by the formula d 2 u d φ 2 = r s 2 [ ( u − u 2 ) ( u − u 3 ) + ( u − u 1 ) ( u − u 3 ) + ( u − u 1 ) ( u − u 2 ) ] . {\displaystyle {\frac {d^{2}u}{d\varphi ^{2}}}={\frac {r_{\text{s}}}{2}}\left[\left(u-u_{2}\right)\left(u-u_{3}\right)+\left(u-u_{1}\right)\left(u-u_{3}\right)+\left(u-u_{1}\right)\left(u-u_{2}\right)\right].} If all three roots are distinct real numbers, the second derivative is positive, negative, and positive at u 1 , u 2 , and u 3 , respectively. It follows that a graph of u versus φ may either oscillate between u 1 and u 2 , or it may move away from u 3 towards infinity (which corresponds to r going to zero). If u 1 is negative, only part of an "oscillation" will actually occur. This corresponds to the particle coming from infinity, getting near the central mass, and then moving away again toward infinity, like the hyperbolic trajectory in the classical solution.

If the particle has just the right amount of energy for its angular momentum, u 2 and u 3 will merge. There are three solutions in this case. The orbit may spiral in to r = 1 u 2 = 1 u 3 {\textstyle r={\frac {1}{u_{2}}}={\frac {1}{u_{3}}}} , approaching that radius as (asymptotically) a decreasing exponential in φ, τ {\textstyle \tau } , or t {\textstyle t} . Or one can have a circular orbit at that radius. Or one can have an orbit that spirals down from that radius to the central point. The radius in question is called the inner radius and is between 3 2 {\textstyle {\frac {3}{2}}} and 3 times r s . A circular orbit also results when u 2 {\textstyle u_{2}} is equal to u 1 {\textstyle u_{1}} , and this is called the outer radius.
These different types of orbits are discussed below. If the particle comes in toward the central mass with sufficient energy and sufficiently low angular momentum then only u 1 {\textstyle u_{1}} will be real. This corresponds to the particle falling into a black hole. The orbit spirals in with a finite change in φ.

The function sn and its square sn 2 have periods of 4 K and 2 K , respectively, where K is defined by the equation [ note 2 ] K = ∫ 0 1 d y ( 1 − y 2 ) ( 1 − k 2 y 2 ) . {\displaystyle K=\int _{0}^{1}{\frac {dy}{\sqrt {\left(1-y^{2}\right)\left(1-k^{2}y^{2}\right)}}}.} Therefore, the change in φ over one oscillation of u {\textstyle u} (or, equivalently, one oscillation of r {\textstyle r} ) equals [ 7 ] Δ φ = 4 K r s ( u 3 − u 1 ) . {\displaystyle \Delta \varphi ={\frac {4K}{\sqrt {r_{\text{s}}\left(u_{3}-u_{1}\right)}}}.} In the classical limit, u 3 approaches 1 r s {\textstyle {\frac {1}{r_{\text{s}}}}} and is much larger than u 1 {\textstyle u_{1}} or u 2 {\textstyle u_{2}} . Hence, k 2 {\textstyle k^{2}} is approximately k 2 = u 2 − u 1 u 3 − u 1 ≈ ( u 2 − u 1 ) r s ≪ 1. {\displaystyle k^{2}={\frac {u_{2}-u_{1}}{u_{3}-u_{1}}}\approx \left(u_{2}-u_{1}\right)r_{\text{s}}\ll 1.} For the same reasons, the denominator of Δφ is approximately r s ( u 3 − u 1 ) ≈ 1 − 1 2 ( 2 u 1 + u 2 ) r s . {\displaystyle {\sqrt {r_{\text{s}}\left(u_{3}-u_{1}\right)}}\approx 1-{\tfrac {1}{2}}\left(2u_{1}+u_{2}\right)r_{\text{s}}.} Since the modulus k {\textstyle k} is close to zero, the period K can be expanded in powers of k {\textstyle k} ; to lowest order, this expansion yields K ≈ π 2 ( 1 + k 2 4 ) . {\displaystyle K\approx {\frac {\pi }{2}}\left(1+{\frac {k^{2}}{4}}\right).} Substituting these approximations into the formula for Δφ yields a formula for angular advance per radial oscillation Δ φ ≈ 2 π [ 1 + 3 4 ( u 1 + u 2 ) r s ] . {\displaystyle \Delta \varphi \approx 2\pi \left[1+{\frac {3}{4}}\left(u_{1}+u_{2}\right)r_{\text{s}}\right].} For an elliptical orbit, u 1 {\textstyle u_{1}} and u 2 {\textstyle u_{2}} represent the inverses of the longest and shortest distances, respectively. These can be expressed in terms of the ellipse's semi-major axis A {\textstyle A} and its orbital eccentricity e {\textstyle e} , u 1 = 1 A ( 1 + e ) , u 2 = 1 A ( 1 − e ) , {\displaystyle u_{1}={\frac {1}{A\left(1+e\right)}},\qquad u_{2}={\frac {1}{A\left(1-e\right)}},} giving u 1 + u 2 = 2 A ( 1 − e 2 ) {\displaystyle u_{1}+u_{2}={\frac {2}{A\left(1-e^{2}\right)}}} and a precession per revolution δ φ ≈ 3 π r s A ( 1 − e 2 ) . {\displaystyle \delta \varphi \approx {\frac {3\pi r_{\text{s}}}{A\left(1-e^{2}\right)}}.} Substituting the definition of r s {\textstyle r_{\text{s}}} gives the final equation δ φ ≈ 6 π G M c 2 A ( 1 − e 2 ) . {\displaystyle \delta \varphi \approx {\frac {6\pi GM}{c^{2}A\left(1-e^{2}\right)}}.}

In the limit as the particle mass m goes to zero (or, equivalently if the light is heading directly toward the central mass, as the length-scale a goes to infinity), the equation for the orbit becomes ( d u d φ ) 2 = 1 b 2 − u 2 ( 1 − u r s ) . {\displaystyle \left({\frac {du}{d\varphi }}\right)^{2}={\frac {1}{b^{2}}}-u^{2}\left(1-ur_{\text{s}}\right).} Expanding in powers of r s r {\textstyle {\frac {r_{\text{s}}}{r}}} , the leading order term in this formula gives the approximate angular deflection δ φ for a massless particle coming in from infinity and going back out to infinity: δ φ ≈ 2 r s b = 4 G M c 2 b . {\displaystyle \delta \varphi \approx {\frac {2r_{\text{s}}}{b}}={\frac {4GM}{c^{2}b}}.} Here, b {\textstyle b} is the impact parameter , somewhat greater than the distance of closest approach , r 3 {\textstyle r_{3}} : [ 8 ] b = r 3 r 3 r 3 − r s {\displaystyle b=r_{3}{\sqrt {\frac {r_{3}}{r_{3}-r_{\text{s}}}}}} Although this formula is approximate, it is accurate for most measurements of gravitational lensing , due to the smallness of the ratio r s r {\textstyle {\frac {r_{\text{s}}}{r}}} . For light grazing the surface of the sun, the approximate angular deflection is roughly 1.75 arcseconds , roughly one millionth part of a circle.

More generally, the geodesics of a photon with radial coordinate r ∈ [ r s , ∞ ) {\displaystyle r\in [r_{s},\infty )} can be calculated as follows, by applying the orbit equation above, which gives, with u = 1 r {\displaystyle u={1 \over r}} , ( d u d φ ) 2 = 1 b 2 − u 2 ( 1 − u r s ) . {\displaystyle \left({\frac {du}{d\varphi }}\right)^{2}={\frac {1}{b^{2}}}-u^{2}\left(1-ur_{\text{s}}\right).} Differentiating with respect to φ {\displaystyle \varphi } leads to the second-order equation d 2 u d φ 2 = 3 2 r s u 2 − u . {\displaystyle {\frac {d^{2}u}{d\varphi ^{2}}}={\frac {3}{2}}r_{\text{s}}u^{2}-u.} This equation can be numerically integrated by a 4th-order Runge–Kutta method: rewriting it as a first-order system in u {\displaystyle u} and d u d φ {\displaystyle {du \over d\varphi }} , each step of size Δ φ {\displaystyle \Delta \varphi } advances both u ( φ + Δ φ ) {\displaystyle u(\varphi +\Delta \varphi )} and d u d φ ( φ + Δ φ ) {\displaystyle {du \over d\varphi }(\varphi +\Delta \varphi )} . The step Δ φ {\displaystyle \Delta \varphi } can be chosen to be constant or adaptive, depending on the accuracy required on r = 1 u {\displaystyle r={1 \over u}} .
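A minimal sketch of the integration scheme just described, in Python (the function and parameter names are illustrative, not from any standard library):

```python
import numpy as np

def photon_orbit(r_s, b, u0=0.0, du0=None, dphi=1e-3, n_steps=10_000):
    """Integrate d^2u/dphi^2 = (3/2) r_s u^2 - u for a photon with
    impact parameter b, using a classical 4th-order Runge-Kutta step.
    Here u = 1/r; integration stops if the photon crosses the horizon."""
    def f(state):
        u, du = state
        return np.array([du, 1.5 * r_s * u**2 - u])

    if du0 is None:   # from (du/dphi)^2 = 1/b^2 - u^2 (1 - u r_s) at u = u0
        du0 = np.sqrt(max(1.0 / b**2 - u0**2 * (1.0 - u0 * r_s), 0.0))

    state = np.array([u0, du0])
    trajectory = [state.copy()]
    for _ in range(n_steps):
        k1 = f(state)
        k2 = f(state + 0.5 * dphi * k1)
        k3 = f(state + 0.5 * dphi * k2)
        k4 = f(state + dphi * k3)
        state = state + (dphi / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if state[0] >= 1.0 / r_s:        # r <= r_s: the photon has fallen in
            break
        trajectory.append(state.copy())
    return np.array(trajectory)

# Example: a photon arriving from far away (u ~ 0) with b = 10 r_s.
orbit = photon_orbit(r_s=1.0, b=10.0)
print("closest approach r_min =", 1.0 / orbit[:, 0].max())
```

A constant step is used here for simplicity; an adaptive step would shrink Δφ near the turning point, where u changes fastest.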
The equation of motion for the particle derived above, ( d r d τ ) 2 = E 2 m 2 c 2 − c 2 + r s c 2 r − h 2 r 2 + r s h 2 r 3 , {\displaystyle \left({\frac {dr}{d\tau }}\right)^{2}={\frac {E^{2}}{m^{2}c^{2}}}-c^{2}+{\frac {r_{\text{s}}c^{2}}{r}}-{\frac {h^{2}}{r^{2}}}+{\frac {r_{\text{s}}h^{2}}{r^{3}}},} can be rewritten using the definition of the Schwarzschild radius r s as 1 2 ( d r d τ ) 2 = [ E 2 2 m 2 c 2 − c 2 2 ] + G M r − h 2 2 r 2 + G M h 2 c 2 r 3 , {\displaystyle {\frac {1}{2}}\left({\frac {dr}{d\tau }}\right)^{2}=\left[{\frac {E^{2}}{2m^{2}c^{2}}}-{\frac {c^{2}}{2}}\right]+{\frac {GM}{r}}-{\frac {h^{2}}{2r^{2}}}+{\frac {GMh^{2}}{c^{2}r^{3}}},} which is equivalent to a particle moving in a one-dimensional effective potential V ( r ) = − G M r + h 2 2 r 2 − G M h 2 c 2 r 3 . {\displaystyle V(r)=-{\frac {GM}{r}}+{\frac {h^{2}}{2r^{2}}}-{\frac {GMh^{2}}{c^{2}r^{3}}}.} The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy ; however, the third term is an attractive energy unique to general relativity . As shown below and elsewhere , this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution, δ φ ≈ 6 π G M c 2 A ( 1 − e 2 ) , {\displaystyle \delta \varphi \approx {\frac {6\pi GM}{c^{2}A\left(1-e^{2}\right)}},} where A {\textstyle A} is the semi-major axis and e {\textstyle e} is the eccentricity. The third term is attractive and dominates at small r {\textstyle r} values, giving a critical inner radius r inner at which a particle is drawn inexorably inwards to r = 0 {\textstyle r=0} ; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the a {\textstyle a} length-scale defined above.

The effective potential V {\textstyle V} can be re-written in terms of the length a = h c {\textstyle a={\frac {h}{c}}} : V ( r ) = c 2 2 [ − r s r + a 2 r 2 − r s a 2 r 3 ] . {\displaystyle V(r)={\frac {c^{2}}{2}}\left[-{\frac {r_{\text{s}}}{r}}+{\frac {a^{2}}{r^{2}}}-{\frac {r_{\text{s}}a^{2}}{r^{3}}}\right].} Circular orbits are possible when the effective force is zero, 0 = d V d r ∝ r s r 2 − 2 a 2 r + 3 r s a 2 , {\displaystyle 0={\frac {dV}{dr}}\propto r_{\text{s}}r^{2}-2a^{2}r+3r_{\text{s}}a^{2},} i.e., when the two attractive forces — Newtonian gravity (first term) and the attraction unique to general relativity (third term) — are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as r inner and r outer , r o u t e r = a 2 r s ( 1 + 1 − 3 r s 2 a 2 ) , r i n n e r = a 2 r s ( 1 − 1 − 3 r s 2 a 2 ) , {\displaystyle r_{\mathrm {outer} }={\frac {a^{2}}{r_{\text{s}}}}\left(1+{\sqrt {1-{\frac {3r_{\text{s}}^{2}}{a^{2}}}}}\right),\qquad r_{\mathrm {inner} }={\frac {a^{2}}{r_{\text{s}}}}\left(1-{\sqrt {1-{\frac {3r_{\text{s}}^{2}}{a^{2}}}}}\right),} which are obtained using the quadratic formula . The inner radius r inner is unstable, because the attractive third force strengthens much faster than the other two forces when r becomes small; if the particle slips slightly inwards from r inner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to r = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem .

When a {\textstyle a} is much greater than r s {\textstyle r_{\text{s}}} (the classical case), these formulae become approximately r o u t e r ≈ 2 a 2 r s , r i n n e r ≈ 3 2 r s . {\displaystyle r_{\mathrm {outer} }\approx {\frac {2a^{2}}{r_{\text{s}}}},\qquad r_{\mathrm {inner} }\approx {\tfrac {3}{2}}r_{\text{s}}.} Substituting the definitions of a {\textstyle a} and r s into r outer yields the classical formula for a particle of mass m {\textstyle m} orbiting a body of mass M {\textstyle M} , r o u t e r 3 = G M ω φ 2 , {\displaystyle r_{\mathrm {outer} }^{3}={\frac {GM}{\omega _{\varphi }^{2}}},} where ω φ is the orbital angular speed of the particle. This formula is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force: G M m r 2 = μ ω φ 2 r , {\displaystyle {\frac {GMm}{r^{2}}}=\mu \,\omega _{\varphi }^{2}\,r,} where μ {\textstyle \mu } is the reduced mass . In our notation, the classical orbital angular speed equals ω φ 2 ≈ G M r o u t e r 3 . {\displaystyle \omega _{\varphi }^{2}\approx {\frac {GM}{r_{\mathrm {outer} }^{3}}}.} At the other extreme, when a 2 approaches 3 r s 2 from above, the two radii converge to a single value r o u t e r = r i n n e r = 3 r s . {\displaystyle r_{\mathrm {outer} }=r_{\mathrm {inner} }=3r_{\text{s}}.} The quadratic solutions above ensure that r outer is always greater than 3 r s , whereas r inner lies between 3 ⁄ 2 r s and 3 r s . Circular orbits smaller than 3 ⁄ 2 r s are not possible. For massless particles, a goes to infinity, implying that there is a circular orbit for photons at r inner = 3 ⁄ 2 r s . The sphere of this radius is sometimes known as the photon sphere .
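The circular-orbit radii just derived are easy to evaluate; the following Python sketch (function names are illustrative) checks both the marginally stable case and the classical limit:

```python
import numpy as np

def circular_orbit_radii(a, r_s):
    """Radii of the circular orbits for a given angular-momentum length
    scale a = h/c, from r_s r^2 - 2 a^2 r + 3 r_s a^2 = 0 (see above).
    Returns (r_inner, r_outer); no circular orbits exist if a^2 < 3 r_s^2."""
    disc = 1.0 - 3.0 * r_s**2 / a**2
    if disc < 0:
        raise ValueError("a^2 < 3 r_s^2: no circular orbits")
    root = np.sqrt(disc)
    return (a**2 / r_s) * (1.0 - root), (a**2 / r_s) * (1.0 + root)

r_s = 1.0
# Marginally stable case: a^2 = 3 r_s^2 makes both radii merge at r = 3 r_s.
print(circular_orbit_radii(np.sqrt(3.0) * r_s, r_s))   # (3.0, 3.0)
# Classical regime a >> r_s: r_outer ~ 2 a^2 / r_s and r_inner ~ 1.5 r_s.
print(circular_orbit_radii(100.0 * r_s, r_s))
```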
The orbital precession rate may be derived using this radial effective potential V . A small radial deviation from a circular orbit of radius r outer will oscillate stably with an angular frequency ω r {\textstyle \omega _{r}} which equals ω r 2 = ω φ 2 ( 1 − 3 r s r o u t e r ) . {\displaystyle \omega _{r}^{2}=\omega _{\varphi }^{2}\left(1-{\frac {3r_{\text{s}}}{r_{\mathrm {outer} }}}\right).} Taking the square root of both sides and performing a Taylor series expansion yields ω r ≈ ω φ ( 1 − 3 r s 2 r o u t e r ) . {\displaystyle \omega _{r}\approx \omega _{\varphi }\left(1-{\frac {3r_{\text{s}}}{2r_{\mathrm {outer} }}}\right).} Multiplying by the period T of one revolution gives the precession of the orbit per revolution δ φ = ( ω φ − ω r ) T ≈ 2 π ( 3 r s 2 r o u t e r ) ≈ 3 π r s 2 2 a 2 , {\displaystyle \delta \varphi =\left(\omega _{\varphi }-\omega _{r}\right)T\approx 2\pi \left({\frac {3r_{\text{s}}}{2r_{\mathrm {outer} }}}\right)\approx {\frac {3\pi r_{\text{s}}^{2}}{2a^{2}}},} where we have used ω φ T = 2 π and the definition of the length-scale a (together with the classical approximation r o u t e r ≈ 2 a 2 / r s {\textstyle r_{\mathrm {outer} }\approx 2a^{2}/r_{\text{s}}} ). Substituting the definition of the Schwarzschild radius r s gives δ φ ≈ 3 π 2 a 2 ( 2 G M c 2 ) 2 = 6 π G 2 M 2 c 2 h 2 . {\displaystyle \delta \varphi \approx {\frac {3\pi }{2a^{2}}}\left({\frac {2GM}{c^{2}}}\right)^{2}={\frac {6\pi G^{2}M^{2}}{c^{2}h^{2}}}.} This may be simplified using the elliptical orbit's semiaxis A and eccentricity e related by the formula h 2 ≈ G M A ( 1 − e 2 ) {\displaystyle h^{2}\approx GMA\left(1-e^{2}\right)} to give the precession angle δ φ ≈ 6 π G M c 2 A ( 1 − e 2 ) . {\displaystyle \delta \varphi \approx {\frac {6\pi GM}{c^{2}A\left(1-e^{2}\right)}}.}

The non-vanishing Christoffel symbols for the Schwarzschild metric are: [ 9 ] Γ t r t = r s 2 r ( r − r s ) , Γ t t r = c 2 r s ( r − r s ) 2 r 3 , Γ r r r = − r s 2 r ( r − r s ) , Γ θ θ r = − ( r − r s ) , Γ φ φ r = − ( r − r s ) sin 2 θ , Γ r θ θ = Γ r φ φ = 1 r , Γ φ φ θ = − sin θ cos θ , Γ θ φ φ = cot θ . {\displaystyle \Gamma _{tr}^{t}={\frac {r_{\text{s}}}{2r\left(r-r_{\text{s}}\right)}},\quad \Gamma _{tt}^{r}={\frac {c^{2}r_{\text{s}}\left(r-r_{\text{s}}\right)}{2r^{3}}},\quad \Gamma _{rr}^{r}=-{\frac {r_{\text{s}}}{2r\left(r-r_{\text{s}}\right)}},\quad \Gamma _{\theta \theta }^{r}=-\left(r-r_{\text{s}}\right),\quad \Gamma _{\varphi \varphi }^{r}=-\left(r-r_{\text{s}}\right)\sin ^{2}\theta ,\quad \Gamma _{r\theta }^{\theta }=\Gamma _{r\varphi }^{\varphi }={\frac {1}{r}},\quad \Gamma _{\varphi \varphi }^{\theta }=-\sin \theta \cos \theta ,\quad \Gamma _{\theta \varphi }^{\varphi }=\cot \theta .}

According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In flat space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is [ 10 ] d 2 x λ d q 2 + Γ μ ν λ d x μ d q d x ν d q = 0 , {\displaystyle {\frac {d^{2}x^{\lambda }}{dq^{2}}}+\Gamma _{\mu \nu }^{\lambda }{\frac {dx^{\mu }}{dq}}{\frac {dx^{\nu }}{dq}}=0,} where Γ represents the Christoffel symbol and the variable q {\textstyle q} parametrizes the particle's path through space-time , its so-called world line . The Christoffel symbol depends only on the metric tensor g μ ν {\textstyle g_{\mu \nu }} , or rather on how it changes with position. The variable q {\textstyle q} is a constant multiple of the proper time τ {\textstyle \tau } for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon ), the proper time is zero and, strictly speaking, cannot be used as the variable q {\textstyle q} . Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass m goes to zero while holding its total energy fixed.

Therefore, to solve for the motion of a particle, the most straightforward way is to solve the geodesic equation, an approach adopted by Einstein [ 11 ] and others. [ 12 ] The Schwarzschild metric may be written as c 2 d τ 2 = w ( r ) c 2 d t 2 − v ( r ) d r 2 − r 2 ( d θ 2 + sin 2 θ d φ 2 ) , {\displaystyle c^{2}\,d\tau ^{2}=w(r)\,c^{2}\,dt^{2}-v(r)\,dr^{2}-r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\varphi ^{2}\right),} where the two functions w ( r ) = 1 − r s r {\textstyle w(r)=1-{\frac {r_{\text{s}}}{r}}} and its reciprocal v ( r ) = 1 w ( r ) {\textstyle v(r)={\frac {1}{w(r)}}} are defined for brevity. From this metric, the Christoffel symbols Γ μ ν λ {\textstyle \Gamma _{\mu \nu }^{\lambda }} may be calculated, and the results substituted into the geodesic equations. It may be verified that θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} is a valid solution by substitution into the first of these four equations. By symmetry, the orbit must be planar, and we are free to arrange the coordinate frame so that the equatorial plane is the plane of the orbit. This θ {\textstyle \theta } solution simplifies the second and fourth equations. To solve the second and third equations, it suffices to divide them by d ϕ d q {\textstyle {\frac {d\phi }{dq}}} and d t d q {\textstyle {\frac {dt}{dq}}} , respectively, which yields two constants of motion.

Because test particles follow geodesics in a fixed metric, the orbits of those particles may be determined using the calculus of variations, also called the Lagrangian approach. [ 13 ] Geodesics in space-time are defined as curves for which small local variations in their coordinates (while holding their endpoint events fixed) make no significant change in their overall length s .
This may be expressed mathematically using the calculus of variations, 0 = δ ∫ d s = δ ∫ 2 T d q , {\displaystyle 0=\delta \int ds=\delta \int {\sqrt {2T}}\,dq,} where τ is the proper time , s = cτ is the arc-length in space-time and T is defined as 2 T = c 2 ( d τ d q ) 2 = ( d s d q ) 2 , {\displaystyle 2T=c^{2}\left({\frac {d\tau }{dq}}\right)^{2}=\left({\frac {ds}{dq}}\right)^{2},} in analogy with kinetic energy . If the derivative with respect to proper time is represented by a dot for brevity, T may be written as 2 T = c 2 ( 1 − r s r ) t ˙ 2 − r ˙ 2 1 − r s r − r 2 θ ˙ 2 − r 2 sin 2 θ φ ˙ 2 . {\displaystyle 2T=c^{2}\left(1-{\frac {r_{\text{s}}}{r}}\right){\dot {t}}^{2}-{\frac {{\dot {r}}^{2}}{1-{\frac {r_{\text{s}}}{r}}}}-r^{2}{\dot {\theta }}^{2}-r^{2}\sin ^{2}\theta \,{\dot {\varphi }}^{2}.} Constant factors (such as c or the square root of two) don't affect the answer to the variational problem; therefore, taking the variation inside the integral yields Hamilton's principle, which (for an affine parameter q ) has the same solutions as 0 = δ ∫ T d q . {\displaystyle 0=\delta \int T\,dq.} The solution of the variational problem is given by Lagrange's equations d d q ( ∂ T ∂ x ˙ σ ) = ∂ T ∂ x σ . {\displaystyle {\frac {d}{dq}}\left({\frac {\partial T}{\partial {\dot {x}}^{\sigma }}}\right)={\frac {\partial T}{\partial x^{\sigma }}}.} When applied to t and φ , these equations reveal two constants of motion which may be expressed in terms of two constant length-scales, a {\textstyle a} and b {\textstyle b} : r 2 d φ d τ = a c , ( 1 − r s r ) d t d τ = a b . {\displaystyle r^{2}{\frac {d\varphi }{d\tau }}=ac,\qquad \left(1-{\frac {r_{\text{s}}}{r}}\right){\frac {dt}{d\tau }}={\frac {a}{b}}.} As shown above , substitution of these equations into the definition of the Schwarzschild metric yields the equation for the orbit.

A Lagrangian solution can be recast into an equivalent Hamiltonian form. [ 14 ] In this case, the Hamiltonian H {\displaystyle H} is given by 2 H = p t 2 c 2 ( 1 − r s r ) − ( 1 − r s r ) p r 2 − p θ 2 r 2 − p φ 2 r 2 sin 2 θ . {\displaystyle 2H={\frac {p_{t}^{2}}{c^{2}\left(1-{\frac {r_{\text{s}}}{r}}\right)}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)p_{r}^{2}-{\frac {p_{\theta }^{2}}{r^{2}}}-{\frac {p_{\varphi }^{2}}{r^{2}\sin ^{2}\theta }}.} Once again, the orbit may be restricted to θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} by symmetry. Since t {\textstyle t} and φ {\textstyle \varphi } do not appear in the Hamiltonian, their conjugate momenta are constant; they may be expressed in terms of the speed of light c {\textstyle c} and two constant length-scales a {\textstyle a} and b {\textstyle b} . The derivatives with respect to proper time are given by Hamilton's equations; dividing the first equation by the second yields the orbital equation. The radial momentum p r can be expressed in terms of r using the constancy of the Hamiltonian H = c 2 2 {\textstyle H={\frac {c^{2}}{2}}} ; this yields the fundamental orbital equation ( d r d φ ) 2 = r 4 b 2 − ( 1 − r s r ) ( r 4 a 2 + r 2 ) . {\displaystyle \left({\frac {dr}{d\varphi }}\right)^{2}={\frac {r^{4}}{b^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {r^{4}}{a^{2}}}+r^{2}\right).}

The orbital equation can be derived from the Hamilton–Jacobi equation . [ 15 ] The advantage of this approach is that it equates the motion of the particle with the propagation of a wave, and leads neatly into the derivation of the deflection of light by gravity in general relativity , through Fermat's principle . The basic idea is that, due to gravitational slowing of time, parts of a wave-front closer to a gravitating mass move more slowly than those further away, thus bending the direction of the wave-front's propagation. Using general covariance, the Hamilton–Jacobi equation for a single particle of unit mass can be expressed in arbitrary coordinates as g μ ν ∂ S ∂ x μ ∂ S ∂ x ν = c 2 . {\displaystyle g^{\mu \nu }{\frac {\partial S}{\partial x^{\mu }}}{\frac {\partial S}{\partial x^{\nu }}}=c^{2}.} This is equivalent to the Hamiltonian formulation above, with the partial derivatives of the action taking the place of the generalized momenta. Using the Schwarzschild metric g μν , this equation becomes 1 c 2 ( 1 − r s r ) ( ∂ S ∂ t ) 2 − ( 1 − r s r ) ( ∂ S ∂ r ) 2 − 1 r 2 ( ∂ S ∂ φ ) 2 = c 2 , {\displaystyle {\frac {1}{c^{2}\left(1-{\frac {r_{\text{s}}}{r}}\right)}}\left({\frac {\partial S}{\partial t}}\right)^{2}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {\partial S}{\partial r}}\right)^{2}-{\frac {1}{r^{2}}}\left({\frac {\partial S}{\partial \varphi }}\right)^{2}=c^{2},} where we again orient the spherical coordinate system with the plane of the orbit. The time t and azimuthal angle φ are cyclic coordinates, so that the solution for Hamilton's principal function S can be written S = − p t t + p φ φ + S r ( r ) , {\displaystyle S=-p_{t}t+p_{\varphi }\varphi +S_{r}(r),} where p t {\displaystyle p_{t}} and p φ {\displaystyle p_{\varphi }} are the constant generalized momenta. The Hamilton–Jacobi equation gives an integral solution for the radial part S r ( r ) {\displaystyle S_{r}(r)} : S r ( r ) = ∫ d r 1 − r s r p t 2 c 2 − ( 1 − r s r ) ( c 2 + p φ 2 r 2 ) . {\displaystyle S_{r}(r)=\int {\frac {dr}{1-{\frac {r_{\text{s}}}{r}}}}{\sqrt {{\frac {p_{t}^{2}}{c^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left(c^{2}+{\frac {p_{\varphi }^{2}}{r^{2}}}\right)}}.} Taking the derivative of Hamilton's principal function S with respect to the conserved momentum p φ yields a constant of the motion, ∂ S ∂ p φ = φ + ∂ S r ∂ p φ = c o n s t a n t . {\displaystyle {\frac {\partial S}{\partial p_{\varphi }}}=\varphi +{\frac {\partial S_{r}}{\partial p_{\varphi }}}=\mathrm {constant} .} Taking an infinitesimal variation in φ and r yields the fundamental orbital equation ( d r d φ ) 2 = r 4 b 2 − ( 1 − r s r ) ( r 4 a 2 + r 2 ) , {\displaystyle \left({\frac {dr}{d\varphi }}\right)^{2}={\frac {r^{4}}{b^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {r^{4}}{a^{2}}}+r^{2}\right),} where the conserved length-scales a and b are defined by the conserved momenta by the equations a = p φ c , b = c p φ p t . {\displaystyle a={\frac {p_{\varphi }}{c}},\qquad b={\frac {cp_{\varphi }}{p_{t}}}.}

The action integral for a particle affected only by gravity is S = − m c 2 ∫ d τ = − m c 2 ∫ d τ d q d q , {\displaystyle S=-mc^{2}\int d\tau =-mc^{2}\int {\frac {d\tau }{dq}}\,dq,} where τ {\textstyle \tau } is the proper time and q {\textstyle q} is any smooth parameterization of the particle's world line.
If one applies the calculus of variations to this, one again gets the equations for a geodesic. To simplify the calculations, one first takes the variation of the square of the integrand. For the metric and coordinates of this case and assuming that the particle is moving in the equatorial plane θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , that square is ( c d τ d q ) 2 = ( 1 − r s r ) c 2 ( d t d q ) 2 − 1 1 − r s r ( d r d q ) 2 − r 2 ( d φ d q ) 2 . {\displaystyle \left(c\,{\frac {d\tau }{dq}}\right)^{2}=\left(1-{\frac {r_{\text{s}}}{r}}\right)c^{2}\left({\frac {dt}{dq}}\right)^{2}-{\frac {1}{1-{\frac {r_{\text{s}}}{r}}}}\left({\frac {dr}{dq}}\right)^{2}-r^{2}\left({\frac {d\varphi }{dq}}\right)^{2}.}

Varying with respect to longitude φ {\textstyle \varphi } only gives 2 c 2 d τ d q δ d τ d q = − 2 r 2 d φ d q δ d φ d q . {\displaystyle 2c^{2}{\frac {d\tau }{dq}}\,\delta {\frac {d\tau }{dq}}=-2r^{2}{\frac {d\varphi }{dq}}\,\delta {\frac {d\varphi }{dq}}.} Dividing by 2 c d τ d q {\textstyle 2c{\frac {d\tau }{dq}}} gives the variation of the integrand itself, δ ( c d τ d q ) = − r 2 c d φ d τ δ d φ d q . {\displaystyle \delta \left(c\,{\frac {d\tau }{dq}}\right)=-{\frac {r^{2}}{c}}{\frac {d\varphi }{d\tau }}\,\delta {\frac {d\varphi }{dq}}.} Thus δ ∫ c d τ d q d q = − ∫ r 2 c d φ d τ δ d φ d q d q . {\displaystyle \delta \int c\,{\frac {d\tau }{dq}}\,dq=-\int {\frac {r^{2}}{c}}{\frac {d\varphi }{d\tau }}\,\delta {\frac {d\varphi }{dq}}\,dq.} Integrating by parts gives δ ∫ c d τ d q d q = − r 2 c d φ d τ δ φ + ∫ d d q ( r 2 c d φ d τ ) δ φ d q . {\displaystyle \delta \int c\,{\frac {d\tau }{dq}}\,dq=-{\frac {r^{2}}{c}}{\frac {d\varphi }{d\tau }}\,\delta \varphi +\int {\frac {d}{dq}}\left({\frac {r^{2}}{c}}{\frac {d\varphi }{d\tau }}\right)\delta \varphi \,dq.} The variation of the longitude is assumed to be zero at the end points, so the first term disappears. The integral can be made nonzero by a perverse choice of δ φ {\textstyle \delta \varphi } unless the other factor inside is zero everywhere. So the equation of motion is d d q ( r 2 c d φ d τ ) = 0. {\displaystyle {\frac {d}{dq}}\left({\frac {r^{2}}{c}}{\frac {d\varphi }{d\tau }}\right)=0.}

Varying with respect to time t {\textstyle t} only gives 2 c 2 d τ d q δ d τ d q = 2 ( 1 − r s r ) c 2 d t d q δ d t d q . {\displaystyle 2c^{2}{\frac {d\tau }{dq}}\,\delta {\frac {d\tau }{dq}}=2\left(1-{\frac {r_{\text{s}}}{r}}\right)c^{2}\,{\frac {dt}{dq}}\,\delta {\frac {dt}{dq}}.} Dividing by 2 c d τ d q {\textstyle 2c{\frac {d\tau }{dq}}} gives the variation of the integrand itself, δ ( c d τ d q ) = c ( 1 − r s r ) d t d τ δ d t d q . {\displaystyle \delta \left(c\,{\frac {d\tau }{dq}}\right)=c\left(1-{\frac {r_{\text{s}}}{r}}\right){\frac {dt}{d\tau }}\,\delta {\frac {dt}{dq}}.} Thus δ ∫ c d τ d q d q = ∫ c ( 1 − r s r ) d t d τ δ d t d q d q . {\displaystyle \delta \int c\,{\frac {d\tau }{dq}}\,dq=\int c\left(1-{\frac {r_{\text{s}}}{r}}\right){\frac {dt}{d\tau }}\,\delta {\frac {dt}{dq}}\,dq.} Integrating by parts (the variation of t again vanishing at the end points) leaves an integral that must vanish for every choice of δ t {\textstyle \delta t} . So the equation of motion is d d q ( ( 1 − r s r ) d t d τ ) = 0. {\displaystyle {\frac {d}{dq}}\left(\left(1-{\frac {r_{\text{s}}}{r}}\right){\frac {dt}{d\tau }}\right)=0.}

Integrating these equations of motion determines the constants of integration, giving r 2 d φ d τ = L m , ( 1 − r s r ) d t d τ = E m c 2 . {\displaystyle r^{2}{\frac {d\varphi }{d\tau }}={\frac {L}{m}},\qquad \left(1-{\frac {r_{\text{s}}}{r}}\right){\frac {dt}{d\tau }}={\frac {E}{mc^{2}}}.} These two equations for the constants of motion L {\textstyle L} (angular momentum) and E {\textstyle E} (energy) can be combined to form one equation that is true even for photons and other massless particles for which the proper time along a geodesic is zero. Substituting d φ d τ = L m r 2 {\textstyle {\frac {d\varphi }{d\tau }}={\frac {L}{mr^{2}}}} and d t d τ = E m c 2 ( 1 − r s r ) {\textstyle {\frac {dt}{d\tau }}={\frac {E}{mc^{2}\left(1-{\frac {r_{\text{s}}}{r}}\right)}}} into the metric equation (and using θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} ) gives c 2 = E 2 m 2 c 2 ( 1 − r s r ) − 1 1 − r s r ( d r d τ ) 2 − L 2 m 2 r 2 , {\displaystyle c^{2}={\frac {E^{2}}{m^{2}c^{2}\left(1-{\frac {r_{\text{s}}}{r}}\right)}}-{\frac {1}{1-{\frac {r_{\text{s}}}{r}}}}\left({\frac {dr}{d\tau }}\right)^{2}-{\frac {L^{2}}{m^{2}r^{2}}},} from which one can derive ( d r d τ ) 2 = E 2 m 2 c 2 − ( 1 − r s r ) ( c 2 + L 2 m 2 r 2 ) , {\displaystyle \left({\frac {dr}{d\tau }}\right)^{2}={\frac {E^{2}}{m^{2}c^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left(c^{2}+{\frac {L^{2}}{m^{2}r^{2}}}\right),} which is the equation of motion for r {\textstyle r} . The dependence of r {\textstyle r} on φ {\textstyle \varphi } can be found by dividing this by ( d φ d τ ) 2 = L 2 m 2 r 4 {\textstyle \left({\frac {d\varphi }{d\tau }}\right)^{2}={\frac {L^{2}}{m^{2}r^{4}}}} to get ( d r d φ ) 2 = E 2 r 4 c 2 L 2 − ( 1 − r s r ) ( m 2 c 2 r 4 L 2 + r 2 ) , {\displaystyle \left({\frac {dr}{d\varphi }}\right)^{2}={\frac {E^{2}r^{4}}{c^{2}L^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {m^{2}c^{2}r^{4}}{L^{2}}}+r^{2}\right),} which is true even for particles without mass. If length scales are defined by a = L m c , b = c L E , {\displaystyle a={\frac {L}{mc}},\qquad b={\frac {cL}{E}},} then the dependence of r {\textstyle r} on φ {\textstyle \varphi } simplifies to ( d r d φ ) 2 = r 4 b 2 − ( 1 − r s r ) ( r 4 a 2 + r 2 ) . {\displaystyle \left({\frac {dr}{d\varphi }}\right)^{2}={\frac {r^{4}}{b^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {r^{4}}{a^{2}}}+r^{2}\right).}
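As a numerical illustration of the precession formula δφ ≈ 6πGM/(c²A(1 − e²)) derived above, the following Python sketch evaluates it for Mercury; the constants and orbital elements are approximate values inserted here for illustration:

```python
import math

# Physical constants (SI); approximate values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

# Approximate orbital elements of Mercury (assumed here for illustration).
A = 5.791e10         # semi-major axis, m
e = 0.2056           # eccentricity
T_orbit_days = 87.969

delta_phi = 6 * math.pi * G * M_sun / (c**2 * A * (1 - e**2))  # rad/revolution

# Convert to the conventional arcseconds per Julian century.
revs_per_century = 36525.0 / T_orbit_days
arcsec = math.degrees(delta_phi) * 3600 * revs_per_century
print(f"{arcsec:.1f} arcseconds per century")   # ~43, the classic result
```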
https://en.wikipedia.org/wiki/Schwarzschild_geodesics
In Einstein 's theory of general relativity , the Schwarzschild metric (also known as the Schwarzschild solution ) is an exact solution to the Einstein field equations that describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets , including Earth and the Sun. It was found by Karl Schwarzschild in 1915 and published in early 1916. According to Birkhoff's theorem , the Schwarzschild metric is the most general spherically symmetric vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentum (non-rotating). A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass. The Schwarzschild black hole is characterized by a surrounding spherical boundary, called the event horizon , which is situated at the Schwarzschild radius ( r s {\displaystyle r_{\text{s}}} ), often called the radius of a black hole. The boundary is not a physical surface, and a person who fell through the event horizon (before being torn apart by tidal forces) would not notice any physical surface at that position; it is a mathematical surface which is significant in determining the black hole's properties. Any non-rotating and non-charged mass that is smaller than its Schwarzschild radius forms a black hole. The solution of the Einstein field equations is valid for any mass M , so in principle (within the theory of general relativity) a Schwarzschild black hole of any mass could exist if conditions became sufficiently favorable to allow for its formation. In the vicinity of a Schwarzschild black hole, space curves so much that even light rays are deflected, and very nearby light can be deflected so much that it travels several times around the black hole. [ 1 ] [ 2 ] [ 3 ] The Schwarzschild metric is a spherically symmetric Lorentzian metric (here, with signature convention (+, -, -, -) ), defined on (a subset of) R × ( E 3 − O ) ≅ R × ( 0 , ∞ ) × S 2 {\displaystyle \mathbb {R} \times \left(E^{3}-O\right)\cong \mathbb {R} \times (0,\infty )\times S^{2}} where E 3 {\displaystyle E^{3}} is 3 dimensional Euclidean space, and S 2 ⊂ E 3 {\displaystyle S^{2}\subset E^{3}} is the two sphere. The rotation group S O ( 3 ) = S O ( E 3 ) {\displaystyle \mathrm {SO} (3)=\mathrm {SO} (E^{3})} acts on the E 3 − O {\displaystyle E^{3}-O} or S 2 {\displaystyle S^{2}} factor as rotations around the center O {\displaystyle O} , while leaving the first R {\displaystyle \mathbb {R} } factor unchanged. The Schwarzschild metric is a solution of Einstein's field equations in empty space, meaning that it is valid only outside the gravitating body. That is, for a spherical body of radius R {\displaystyle R} the solution is valid for r > R {\displaystyle r>R} . To describe the gravitational field both inside and outside the gravitating body the Schwarzschild solution must be matched with some suitable interior solution at r = R {\displaystyle r=R} , [ 4 ] such as the interior Schwarzschild metric .
In Schwarzschild coordinates ( t , r , θ , ϕ ) {\displaystyle (t,r,\theta ,\phi )} the Schwarzschild metric (or equivalently, the line element for proper time ) has the form d s 2 = c 2 d τ 2 = ( 1 − r s r ) c 2 d t 2 − ( 1 − r s r ) − 1 d r 2 − r 2 d Ω 2 , {\displaystyle {ds}^{2}=c^{2}\,{d\tau }^{2}=\left(1-{\frac {r_{\mathrm {s} }}{r}}\right)c^{2}\,dt^{2}-\left(1-{\frac {r_{\mathrm {s} }}{r}}\right)^{-1}\,dr^{2}-r^{2}{d\Omega }^{2},} where d Ω 2 {\displaystyle {d\Omega }^{2}} is the metric on the two sphere, i.e. d Ω 2 = ( d θ 2 + sin 2 ⁡ θ d ϕ 2 ) {\displaystyle {d\Omega }^{2}=\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)} . Furthermore, τ {\displaystyle \tau } is the proper time, c {\displaystyle c} is the speed of light, t {\displaystyle t} is the time coordinate, r {\displaystyle r} is the radial coordinate, and r s {\displaystyle r_{\text{s}}} is the Schwarzschild radius of the massive body, related to its mass M {\displaystyle M} by r s = 2 G M c 2 , {\displaystyle r_{\text{s}}={\frac {2GM}{c^{2}}},} where G {\displaystyle G} is the gravitational constant . The Schwarzschild metric has a singularity for r = 0 , which is an intrinsic curvature singularity. It also seems to have a singularity on the event horizon r = r s . Depending on the point of view, the metric is therefore defined only on the exterior region r > r s {\displaystyle r>r_{\text{s}}} , only on the interior region r < r s {\displaystyle r<r_{\text{s}}} or their disjoint union. However, the metric is actually non-singular across the event horizon, as one sees in suitable coordinates (see below). For r ≫ r s {\displaystyle r\gg r_{\text{s}}} , the Schwarzschild metric is asymptotic to the standard Lorentz metric on Minkowski space. For almost all astrophysical objects, the ratio r s R {\displaystyle {\frac {r_{\text{s}}}{R}}} is extremely small. For example, the Schwarzschild radius r s ( Earth ) {\displaystyle r_{\text{s}}^{({\text{Earth}})}} of the Earth is roughly 8.9 mm , while the Sun, which is 3.3 × 10 5 times as massive, [ 6 ] has a Schwarzschild radius r s ( Sun ) {\displaystyle r_{\text{s}}^{({\text{Sun}})}} of approximately 3.0 km. The ratio becomes large only in close proximity to black holes and other ultra-dense objects such as neutron stars . The radial coordinate turns out to have physical significance as the "proper distance between two events that occur simultaneously relative to the radially moving geodesic clocks, the two events lying on the same radial coordinate line". [ 7 ] The Schwarzschild solution is analogous to a classical Newtonian theory of gravity that corresponds to the gravitational field around a point particle. Even at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. [ 8 ] The Schwarzschild solution is named in honour of Karl Schwarzschild , who found the exact solution in 1915 and published it in January 1916, [ 9 ] a little more than a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution . Schwarzschild died shortly after his paper was published, as a result of a disease (thought to be pemphigus ) he developed while serving in the German army during World War I . [ 10 ] Johannes Droste in 1916 [ 11 ] independently produced the same solution as Schwarzschild, using a simpler, more direct derivation. [ 12 ] In the early years of general relativity there was a lot of confusion about the nature of the singularities found in the Schwarzschild and other solutions of the Einstein field equations . In Schwarzschild's original paper, he put what we now call the event horizon at the origin of his coordinate system. In this paper he also introduced what is now known as the Schwarzschild radial coordinate ( r in the equations above), as an auxiliary variable.
In his equations, Schwarzschild was using a different radial coordinate that was zero at the Schwarzschild radius. A more complete analysis of the singularity structure was given by David Hilbert [ 13 ] in the following year, identifying the singularities both at r = 0 and r = r s . Although there was general consensus that the singularity at r = 0 was a 'genuine' physical singularity, the nature of the singularity at r = r s remained unclear. [ 14 ] In 1921, Paul Painlevé and in 1922 Allvar Gullstrand independently produced a metric, a spherically symmetric solution of Einstein's equations, which we now know is a coordinate transformation of the Schwarzschild metric into Gullstrand–Painlevé coordinates , in which there was no singularity at r = r s . They, however, did not recognize that their solutions were just coordinate transforms, and in fact used their solution to argue that Einstein's theory was wrong. In 1924 Arthur Eddington produced the first coordinate transformation ( Eddington–Finkelstein coordinates ) that showed that the singularity at r = r s was a coordinate artifact, although he also seems to have been unaware of the significance of this discovery. Later, in 1932, Georges Lemaître gave a different coordinate transformation ( Lemaître coordinates ) to the same effect and was the first to recognize that this implied that the singularity at r = r s was not physical. In 1939 Howard Robertson showed that a free falling observer descending in the Schwarzschild metric would cross the r = r s singularity in a finite amount of proper time even though this would take an infinite amount of time in terms of coordinate time t . [ 14 ] In 1950, John Synge produced a paper [ 15 ] that showed the maximal analytic extension of the Schwarzschild metric, again showing that the singularity at r = r s was a coordinate artifact and that it represented two horizons. A similar result was later rediscovered by George Szekeres , [ 16 ] and independently by Martin Kruskal . [ 17 ] The new coordinates, nowadays known as Kruskal–Szekeres coordinates , were much simpler than Synge's, but both provided a single set of coordinates that covered the entire spacetime. However, perhaps due to the obscurity of the journals in which the papers of Lemaître and Synge were published, their conclusions went unnoticed, with many of the major players in the field, including Einstein, believing that the singularity at the Schwarzschild radius was physical. [ 14 ] Synge's later derivation of the Kruskal–Szekeres metric solution, [ 18 ] which was motivated by a desire to avoid "using 'bad' [Schwarzschild] coordinates to obtain 'good' [Kruskal–Szekeres] coordinates", has been generally under-appreciated in the literature, but was adopted by Chandrasekhar in his black hole monograph. [ 19 ] Real progress was made in the 1960s when the mathematically rigorous formulation cast in terms of differential geometry entered the field of general relativity, allowing more exact definitions of what it means for a Lorentzian manifold to be singular. This led to definitive identification of the r = r s singularity in the Schwarzschild metric as an event horizon , i.e., a hypersurface in spacetime that can be crossed in only one direction. [ 14 ] The Schwarzschild solution appears to have singularities at r = 0 and r = r s ; some of the metric components "blow up" (entail division by zero or multiplication by infinity) at these radii.
Since the Schwarzschild metric is expected to be valid only for those radii larger than the radius R of the gravitating body, there is no problem as long as R > r s . For ordinary stars and planets this is always the case. For example, the radius of the Sun is approximately 700 000 km , while its Schwarzschild radius is only 3 km . The singularity at r = r s divides the Schwarzschild coordinates into two disconnected patches . The exterior Schwarzschild solution with r > r s is the one that is related to the gravitational fields of stars and planets. The interior Schwarzschild solution with 0 ≤ r < r s , which contains the singularity at r = 0 , is completely separated from the outer patch by the singularity at r = r s . The Schwarzschild coordinates therefore give no physical connection between the two patches, which may be viewed as separate solutions. The singularity at r = r s is an illusion however; it is an instance of what is called a coordinate singularity . As the name implies, the singularity arises from a bad choice of coordinates or coordinate conditions . When changing to a different coordinate system (for example Lemaître coordinates , Eddington–Finkelstein coordinates , Kruskal–Szekeres coordinates , Novikov coordinates, or Gullstrand–Painlevé coordinates ) the metric becomes regular at r = r s , and the external patch can be extended to values of r smaller than r s . Using a different coordinate transformation one can then relate the extended external patch to the inner patch. [ 20 ] The case r = 0 is different, however. If one asks that the solution be valid for all r one runs into a true physical singularity, or gravitational singularity , at the origin. To see that this is a true singularity one must look at quantities that are independent of the choice of coordinates. One such important quantity is the Kretschmann invariant , which is given by K = R α β γ δ R α β γ δ = 12 r s 2 r 6 = 48 G 2 M 2 c 4 r 6 . {\displaystyle K=R_{\alpha \beta \gamma \delta }R^{\alpha \beta \gamma \delta }={\frac {12\,r_{\text{s}}^{2}}{r^{6}}}={\frac {48\,G^{2}M^{2}}{c^{4}r^{6}}}.} At r = 0 the curvature becomes infinite, indicating the presence of a singularity. At this point the metric cannot be extended in a smooth manner (the Kretschmann invariant involves second derivatives of the metric), and spacetime itself is then no longer well-defined. Furthermore, Sbierski [ 21 ] showed the metric cannot be extended even in a continuous manner. For a long time it was thought that such a solution was non-physical. However, a greater understanding of general relativity led to the realization that such singularities were a generic feature of the theory and not just an exotic special case. The Schwarzschild solution, taken to be valid for all r > 0 , is called a Schwarzschild black hole. It is a perfectly valid solution of the Einstein field equations, although (like other black holes) it has rather bizarre properties. For r < r s the Schwarzschild radial coordinate r becomes timelike and the time coordinate t becomes spacelike . [ 22 ] A curve at constant r is no longer a possible worldline of a particle or observer, not even if a force is exerted to try to keep it there; this occurs because spacetime has been curved so much that the direction of cause and effect (the particle's future light cone ) points into the singularity. The surface r = r s demarcates what is called the event horizon of the black hole. It represents the point past which light can no longer escape the gravitational field. Any physical object whose radius R becomes less than or equal to the Schwarzschild radius has undergone gravitational collapse and become a black hole.
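To make these scales concrete, a short Python sketch evaluating the Schwarzschild radius r s = 2GM/c² and the Kretschmann invariant 12 r s ²/r⁶ quoted above (the constants are approximate values inserted for illustration):

```python
# Evaluate the Schwarzschild radius r_s = 2GM/c^2 and the Kretschmann
# invariant K = 12 r_s^2 / r^6 at a given radius, in SI units.
G = 6.674e-11      # m^3 kg^-1 s^-2 (approximate)
c = 2.998e8        # m/s

def schwarzschild_radius(M):
    return 2.0 * G * M / c**2

def kretschmann(M, r):
    r_s = schwarzschild_radius(M)
    return 12.0 * r_s**2 / r**6   # units of m^-4

M_earth, R_earth = 5.972e24, 6.371e6
M_sun, R_sun = 1.989e30, 6.957e8

print(schwarzschild_radius(M_earth))   # ~8.9e-3 m, the ~9 mm quoted above
print(schwarzschild_radius(M_sun))     # ~2.95e3 m
print(kretschmann(M_sun, R_sun))       # finite at the surface; diverges as r -> 0
```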
The Schwarzschild solution can be expressed in a range of different choices of coordinates besides the Schwarzschild coordinates used above. Different choices tend to highlight different features of the solution. The table below shows some popular choices. In the table above, some shorthand has been introduced for brevity. The speed of light c has been set to one . The notation d Ω 2 {\displaystyle d\Omega ^{2}} is used for the metric of a unit radius 2-dimensional sphere. Moreover, in each entry R and T denote alternative choices of radial and time coordinate for the particular coordinates. Note that R and T may vary from entry to entry. The Kruskal–Szekeres coordinates have the form to which the Belinski–Zakharov transform can be applied. This implies that the Schwarzschild black hole is a form of gravitational soliton . The spatial curvature of the Schwarzschild solution for r > r s can be visualized as the graphic shows. Consider a constant time equatorial slice H through the Schwarzschild solution by fixing θ = π / 2 , t = constant, and letting the remaining Schwarzschild coordinates ( r , φ ) vary. Imagine now that there is an additional Euclidean dimension w , which has no physical reality (it is not part of spacetime). Then replace the ( r , φ ) plane with a surface dimpled in the w direction according to the equation ( Flamm's paraboloid ) w = 2 r s ( r − r s ) . {\displaystyle w=2{\sqrt {r_{\text{s}}\left(r-r_{\text{s}}\right)}}.} This surface has the property that distances measured within it match distances in the Schwarzschild metric, because with the definition of w above, d w 2 + d r 2 = ( 1 + r s r − r s ) d r 2 = d r 2 1 − r s r . {\displaystyle dw^{2}+dr^{2}=\left(1+{\frac {r_{\text{s}}}{r-r_{\text{s}}}}\right)dr^{2}={\frac {dr^{2}}{1-{\frac {r_{\text{s}}}{r}}}}.} Thus, Flamm's paraboloid is useful for visualizing the spatial curvature of the Schwarzschild metric. It should not, however, be confused with a gravity well . No ordinary (massive or massless) particle can have a worldline lying on the paraboloid, since all distances on it are spacelike (this is a cross-section at one moment of time, so any particle moving on it would have an infinite velocity ). A tachyon could have a spacelike worldline that lies entirely on a single paraboloid. However, even in that case its geodesic path is not the trajectory one gets through a "rubber sheet" analogy of gravitational well: in particular, if the dimple is drawn pointing upward rather than downward, the tachyon's geodesic path still curves toward the central mass, not away. See the gravity well article for more information. Flamm's paraboloid may be derived as follows. The Euclidean metric in the cylindrical coordinates ( r , φ , w ) is written d s 2 = d w 2 + d r 2 + r 2 d φ 2 . {\displaystyle ds^{2}=dw^{2}+dr^{2}+r^{2}\,d\varphi ^{2}.} Letting the surface be described by the function w = w ( r ) , the Euclidean metric can be written as d s 2 = [ 1 + ( d w d r ) 2 ] d r 2 + r 2 d φ 2 . {\displaystyle ds^{2}=\left[1+\left({\frac {dw}{dr}}\right)^{2}\right]dr^{2}+r^{2}\,d\varphi ^{2}.} Comparing this with the Schwarzschild metric in the equatorial plane ( θ = π/2 ) at a fixed time ( t = constant, dt = 0 ), yields an integral expression for w ( r ) : w ( r ) = ∫ r s r − r s d r = 2 r s ( r − r s ) + c o n s t a n t , {\displaystyle w(r)=\int {\sqrt {\frac {r_{\text{s}}}{r-r_{\text{s}}}}}\,dr=2{\sqrt {r_{\text{s}}\left(r-r_{\text{s}}\right)}}+\mathrm {constant} ,} whose solution is Flamm's paraboloid. A particle orbiting in the Schwarzschild metric can have a stable circular orbit with r > 3 r s . Circular orbits with r between 1.5 r s and 3 r s are unstable, and no circular orbits exist for r < 1.5 r s . The circular orbit of minimum radius 1.5 r s corresponds to an orbital velocity approaching the speed of light. It is possible for a particle to have a constant value of r between r s and 1.5 r s , but only if some force acts to keep it there. Noncircular orbits, such as Mercury 's, dwell longer at small radii than would be expected in Newtonian gravity . This can be seen as a less extreme version of the more dramatic case in which a particle passes through the event horizon and dwells inside it forever.
Intermediate between the case of Mercury and the case of an object falling past the event horizon, there are exotic possibilities such as knife-edge orbits, in which the satellite can be made to execute an arbitrarily large number of nearly circular orbits, after which it flies back outward. The isometry group of the Schwarzschild metric is R × O ( 3 ) × { ± 1 } {\displaystyle \mathbb {R} \times \mathrm {O} (3)\times \{\pm 1\}} , where O ( 3 ) {\displaystyle \mathrm {O} (3)} is the orthogonal group of rotations and reflections in three dimensions, R {\displaystyle \mathbb {R} } comprises the time translations, and { ± 1 } {\displaystyle \{\pm 1\}} is the group generated by time reversal. This is thus the subgroup of the ten-dimensional Poincaré group which takes the time axis (trajectory of the star) to itself. It omits the spatial translations (three dimensions) and boosts (three dimensions). It retains the time translations (one dimension) and rotations (three dimensions). Thus it has four dimensions. Like the Poincaré group, it has four connected components: the component of the identity; the time reversed component; the spatial inversion component; and the component which is both time reversed and spatially inverted. The Ricci curvature scalar and the Ricci curvature tensor are both zero. Non-zero components of the Riemann curvature tensor are given by [ 25 ] from which one can see that R γ α γ β = 0 {\displaystyle R^{\gamma }{}_{\alpha \gamma \beta }=0} . Six of these formulas are Eq. 5.13 in Carroll [ 26 ] and imply the other 6 by R α β γ δ = g α κ g β λ R λ κ δ γ {\displaystyle R^{\alpha }{}_{\beta \gamma \delta }=g^{\alpha \kappa }g_{\beta \lambda }R^{\lambda }{}_{\kappa \delta \gamma }} . Components which are obtainable by other symmetries of the Riemann tensor are not displayed. To understand the physical meaning of these quantities, it is useful to express the curvature tensor in an orthonormal basis. In an orthonormal basis of an observer the non-zero components in geometric units are [ 25 ] Again, components which are obtainable by the symmetries of the Riemann tensor are not displayed. These results are invariant to any Lorentz boost, thus the components do not change for non-static observers. The geodesic deviation equation shows that the tidal acceleration between two observers separated by ξ j ^ {\displaystyle \xi ^{\hat {j}}} is D 2 ξ j ^ / D τ 2 = − R j ^ t ^ k ^ t ^ ξ k ^ {\displaystyle D^{2}\xi ^{\hat {j}}/D\tau ^{2}=-R^{\hat {j}}{}_{{\hat {t}}{\hat {k}}{\hat {t}}}\xi ^{\hat {k}}} , so a body of length L {\displaystyle L} is stretched in the radial direction by an apparent acceleration ( r s / r 3 ) c 2 L {\displaystyle (r_{\text{s}}/r^{3})c^{2}L} and squeezed in the perpendicular directions by − ( r s / ( 2 r 3 ) ) c 2 L {\displaystyle -(r_{\text{s}}/(2r^{3}))c^{2}L} .
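The tidal formula at the end of this section is easy to evaluate numerically. A Python sketch (the body length and masses are illustrative assumptions, not values from the article):

```python
# Radial tidal stretching (r_s / r^3) c^2 L from the geodesic deviation
# formula above, for a 2 m body at the horizon of black holes of two masses.
G = 6.674e-11      # m^3 kg^-1 s^-2 (approximate)
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def radial_tide(M, r, L=2.0):
    r_s = 2 * G * M / c**2
    return (r_s / r**3) * c**2 * L   # m/s^2

for M in (10 * M_sun, 4e6 * M_sun):          # stellar vs. supermassive case
    r_s = 2 * G * M / c**2
    print(f"M = {M:.2e} kg: tide at the horizon = {radial_tide(M, r_s):.2e} m/s^2")
# At the horizon the tide scales as 1/M^2, so it is gentle for supermassive
# black holes and violent for stellar-mass ones.
```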
https://en.wikipedia.org/wiki/Schwarzschild_metric
A knight's tour is a sequence of moves of a knight on a chessboard such that the knight visits every square exactly once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is "closed", or "re-entrant"; otherwise, it is "open". [ 1 ] [ 2 ] The knight's tour problem is the mathematical problem of finding a knight's tour. Creating a program to find a knight's tour is a common problem given to computer science students. [ 3 ] Variations of the knight's tour problem involve chessboards of different sizes than the usual 8 × 8 , as well as irregular (non-rectangular) boards. The knight's tour problem is an instance of the more general Hamiltonian path problem in graph theory . The problem of finding a closed knight's tour is similarly an instance of the Hamiltonian cycle problem . Unlike the general Hamiltonian path problem, the knight's tour problem can be solved in linear time . [ 4 ]

The earliest known reference to the knight's tour problem dates back to the 9th century AD. In Rudrata 's Kavyalankara [ 5 ] (5.15), a Sanskrit work on poetics, the pattern of a knight's tour on a half-board has been presented as an elaborate poetic figure ( citra-alaṅkāra ) called the turagapadabandha or 'arrangement in the steps of a horse'. The same verse in four lines of eight syllables each can be read from left to right or by following the path of the knight on tour. Since the Indic writing systems used for Sanskrit are syllabic, each syllable can be thought of as representing a square on a chessboard. In Rudrata's example (transliterated), the first line can be read from left to right or by moving from the first square to the second line, third syllable (2.3) and then to 1.5 to 2.7 to 4.8 to 3.6 to 4.4 to 3.2.

The Sri Vaishnava poet and philosopher Vedanta Desika , during the 14th century, in his 1,008-verse magnum opus praising the deity Ranganatha 's divine sandals of Srirangam , Paduka Sahasram (in chapter 30: Chitra Paddhati ) has composed two consecutive Sanskrit verses containing 32 letters each (in Anushtubh meter) where the second verse can be derived from the first verse by performing a knight's tour on a 4 × 8 board, starting from the top-left corner. [ 6 ] The knight's-tour numbering of the syllables of the transliterated 19th verse is as follows: (1) (30) (9) (20) (3) (24) (11) (26) (16) (19) (2) (29) (10) (27) (4) (23) (31) (8) (17) (14) (21) (6) (25) (12) (18) (15) (32) (7) (28) (13) (22) (5) The 20th verse, which can be obtained by performing the knight's tour on the above verse, is as follows: sThi thA sa ma ya rA ja thpA ga tha rA mA dha kE ga vi | dhu ran ha sAm sa nna thA dhA sA dhyA thA pa ka rA sa rA || It is believed that Desika composed all 1,008 verses (including the special Chaturanga Turanga Padabandham mentioned above) in a single night as a challenge. [ 7 ]

A tour reported in the fifth book of the Bhagavantabaskara by Bhat Nilakantha, a cyclopedic work in Sanskrit on ritual, law and politics written either about 1600 or about 1700, describes three knight's tours. The tours are not only reentrant but also symmetrical, and the verses are based on the same tour, starting from different squares. [ 8 ] Nilakantha's work is an extraordinary achievement, being a fully symmetric closed tour predating the work of Euler (1759) by at least 60 years. After Nilakantha, one of the first mathematicians to investigate the knight's tour was Leonhard Euler .
The first procedure for completing the knight's tour was Warnsdorf's rule, first described in 1823 by H. C. von Warnsdorf. In the 20th century, the Oulipo group of writers used it, among many others. The most notable example is the 10 × 10 knight's tour which sets the order of the chapters in Georges Perec 's novel Life a User's Manual . The sixth game of the World Chess Championship 2010 between Viswanathan Anand and Veselin Topalov saw Anand making 13 consecutive knight moves (albeit using both knights); online commentators jested that Anand was trying to solve the knight's tour problem during the game.

Schwenk [ 10 ] proved that for any m × n board with m ≤ n , a closed knight's tour is always possible unless one or more of these three conditions are met: (1) m and n are both odd; (2) m = 1, 2, or 4; (3) m = 3 and n = 4, 6, or 8. Cull et al. and Conrad et al. proved that on any rectangular board whose smaller dimension is at least 5, there is a (possibly open) knight's tour. [ 4 ] [ 11 ] For any m × n board with m ≤ n , a (possibly open) knight's tour is always possible unless one or more of these three conditions are met: (1) m = 1 or 2; (2) m = 3 and n = 3, 5, or 6; (3) m = 4 and n = 4.

On an 8 × 8 board, there are exactly 26,534,728,821,064 directed closed tours (i.e. two tours along the same path that travel in opposite directions are counted separately, as are rotations and reflections ). [ 14 ] [ 15 ] [ 16 ] The number of undirected closed tours is half this number, since every tour can be traced in reverse. There are 9,862 undirected closed tours on a 6 × 6 board. [ 17 ] There are several ways to find a knight's tour on a given board with a computer. Some of these methods are algorithms , while others are heuristics . A brute-force search for a knight's tour is impractical on all but the smallest boards. [ 18 ] On an 8 × 8 board, for instance, there are 13,267,364,410,532 knight's tours, [ 14 ] and a much greater number of sequences of knight moves of the same length. It is well beyond the capacity of modern computers (or networks of computers) to perform operations on such a large set. However, the size of this number is not indicative of the difficulty of the problem, which can be solved "by using human insight and ingenuity ... without much difficulty." [ 18 ] By dividing the board into smaller pieces, constructing tours on each piece, and patching the pieces together, one can construct tours on most rectangular boards in linear time – that is, in a time proportional to the number of squares on the board. [ 11 ] [ 19 ]

Warnsdorf's rule is a heuristic for finding a single knight's tour. The knight is moved so that it always proceeds to the square from which the knight will have the fewest onward moves. When calculating the number of onward moves for each candidate square, we do not count moves that revisit any square already visited. It is possible to have two or more choices for which the number of onward moves is equal; there are various methods for breaking such ties, including one devised by Pohl [ 20 ] and another by Squirrel and Cull. [ 21 ] This rule may also more generally be applied to any graph. In graph-theoretic terms, each move is made to the adjacent vertex with the least degree . [ 22 ] Although the Hamiltonian path problem is NP-hard in general, on many graphs that occur in practice this heuristic is able to successfully locate a solution in linear time . [ 20 ] The knight's tour is such a special case. [ 23 ] The heuristic was first described in "Des Rösselsprungs einfachste und allgemeinste Lösung" by H. C. von Warnsdorf in 1823. [ 23 ]
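A compact implementation of Warnsdorf's rule as just described (move to the candidate square of least onward degree, counting only unvisited squares) might look like the following Python sketch; ties are broken arbitrarily here, and the refinements of Pohl or Squirrel and Cull are not implemented:

```python
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def warnsdorf_tour(n=8, start=(0, 0)):
    """Attempt a knight's tour on an n x n board using Warnsdorf's rule:
    always move to the unvisited square with the fewest onward moves."""
    visited = {start}

    def neighbours(sq):
        r, c = sq
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in visited:
                yield (nr, nc)

    tour = [start]
    while len(tour) < n * n:
        options = list(neighbours(tour[-1]))
        if not options:          # the heuristic has dead-ended (rare on 8x8)
            return None
        # Warnsdorf's rule: pick the candidate with the least onward degree.
        nxt = min(options, key=lambda sq: sum(1 for _ in neighbours(sq)))
        visited.add(nxt)
        tour.append(nxt)
    return tour

tour = warnsdorf_tour()
print("found a tour" if tour else "dead end", "-", len(tour or []), "squares")
```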
A computer program that finds a knight's tour for any starting position using Warnsdorf's rule was written by Gordon Horsington and published in 1984 in the book Century/Acorn User Book of Computer Puzzles . [ 24 ] The knight's tour problem also lends itself to being solved by a neural network implementation. [ 25 ] The network is set up such that every legal knight's move is represented by a neuron , and each neuron is initialized randomly to be either "active" or "inactive" (output of 1 or 0), with 1 implying that the neuron is part of the solution. Each neuron also has a state function (described below) which is initialized to 0. When the network is allowed to run, each neuron can change its state and output based on the states and outputs of its neighbors (those exactly one knight's move away) according to the following transition rules: U t + 1 ( N i , j ) = U t ( N i , j ) + 2 − ∑ N ∈ G ( N i , j ) V t ( N ) {\displaystyle U_{t+1}(N_{i,j})=U_{t}(N_{i,j})+2-\sum _{N\in G(N_{i,j})}V_{t}(N)} V t + 1 ( N i , j ) = { 1 if U t + 1 ( N i , j ) > 3 0 if U t + 1 ( N i , j ) < 0 V t ( N i , j ) otherwise {\displaystyle V_{t+1}(N_{i,j})={\begin{cases}1&{\text{if}}\,\,U_{t+1}(N_{i,j})>3\\0&{\text{if}}\,\,U_{t+1}(N_{i,j})<0\\V_{t}(N_{i,j})&{\text{otherwise}}\end{cases}}} where t {\displaystyle t} represents discrete intervals of time, U ( N i , j ) {\displaystyle U(N_{i,j})} is the state of the neuron connecting square i {\displaystyle i} to square j {\displaystyle j} , V ( N i , j ) {\displaystyle V(N_{i,j})} is the output of the neuron from i {\displaystyle i} to j {\displaystyle j} , and G ( N i , j ) {\displaystyle G(N_{i,j})} is the set of neighbors of the neuron. Although divergent cases are possible, the network should eventually converge, which occurs when no neuron changes its state from time t {\displaystyle t} to t + 1 {\displaystyle t+1} . When the network converges, either the network encodes a knight's tour or a series of two or more independent circuits within the same board.
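A direct simulation of these update rules is straightforward; the Python sketch below (helper names are illustrative) treats each legal move as an undirected edge and iterates synchronously until nothing changes or an iteration cap is hit:

```python
import random
from itertools import combinations

N = 6  # board size
MOVES = {(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)}

squares = [(r, c) for r in range(N) for c in range(N)]
edges = [frozenset(e) for e in combinations(squares, 2)
         if (e[0][0] - e[1][0], e[0][1] - e[1][1]) in MOVES]

# G(e): neighbours of edge e, i.e. all other edges sharing one of its endpoints.
G = {e: [f for f in edges if f != e and e & f] for e in edges}

U = {e: 0 for e in edges}                       # states
V = {e: random.randint(0, 1) for e in edges}    # outputs

for _ in range(500):                            # iteration cap (divergence is possible)
    newU = {e: U[e] + 2 - sum(V[f] for f in G[e]) for e in edges}
    newV = {e: 1 if newU[e] > 3 else 0 if newU[e] < 0 else V[e] for e in edges}
    if newU == U and newV == V:                 # converged: no neuron changed
        break
    U, V = newU, newV

active = [e for e in edges if V[e]]
# On convergence every square touches exactly two active edges, so the result
# is either a single closed tour or several disjoint circuits, as noted above.
print(len(active), "active edges on a", N, "x", N, "board")
```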
https://en.wikipedia.org/wiki/Schwenk's_theorem
In quantum electrodynamics (QED), the Schwinger limit is a scale above which the electromagnetic field is expected to become nonlinear. The limit was first derived in one of QED's earliest theoretical successes by Fritz Sauter in 1931 [ 1 ] and discussed further by Werner Heisenberg and his student Hans Heinrich Euler. [ 2 ] The limit, however, is commonly named in the literature [ 3 ] for Julian Schwinger, who derived the leading nonlinear corrections to the fields and calculated the rate of electron–positron pair production in a strong electric field. [ 4 ] The limit is typically reported as a maximum electric field or magnetic field before nonlinearity for the vacuum:

{\displaystyle E_{\text{S}}={\frac {m_{\text{e}}^{2}c^{3}}{q_{\text{e}}\hbar }}\approx 1.32\times 10^{18}\ \mathrm {V/m} ,}

{\displaystyle B_{\text{S}}={\frac {E_{\text{S}}}{c}}={\frac {m_{\text{e}}^{2}c^{2}}{q_{\text{e}}\hbar }}\approx 4.41\times 10^{9}\ \mathrm {T} ,}

where m_e is the mass of the electron, c is the speed of light in vacuum, q_e is the elementary charge, and ħ is the reduced Planck constant. These are enormous field strengths. Such an electric field is capable of accelerating a proton from rest to the maximum energy attained by protons at the Large Hadron Collider in only approximately 5 micrometers. The magnetic field is associated with birefringence of the vacuum and is exceeded on magnetars.

In vacuum, the classical Maxwell's equations are perfectly linear differential equations. This implies – by the superposition principle – that the sum of any two solutions to Maxwell's equations is another solution to Maxwell's equations. For example, two intersecting beams of light should simply add together their electric fields and pass right through each other. Thus Maxwell's equations predict the impossibility of any but trivial elastic photon–photon scattering. In QED, however, non-elastic photon–photon scattering becomes possible when the combined energy is large enough to create virtual electron–positron pairs spontaneously. This creates nonlinear effects that are approximately described by Euler and Heisenberg's nonlinear variant of Maxwell's equations.

A single plane wave is insufficient to cause nonlinear effects, even in QED. [ 4 ] The basic reason for this is that a single plane wave of a given energy may always be viewed in a different reference frame, where it has less energy (the same is the case for a single photon). A single wave or photon does not have a center-of-momentum frame where its energy is at a minimum value. However, two waves or two photons not traveling in the same direction always have a minimum combined energy in their center-of-momentum frame, and it is this energy, and the electric field strengths associated with it, that determine particle–antiparticle creation and the associated scattering phenomena.

Photon–photon scattering and other effects of nonlinear optics in vacuum are an active area of experimental research, with current or planned technology beginning to approach the Schwinger limit. [ 5 ] It has already been observed through inelastic channels in SLAC Experiment 144. [ 6 ] [ 7 ] However, the direct effects in elastic scattering have not been observed. As of 2012, the best constraint on the elastic photon–photon scattering cross section belonged to PVLAS, which reported an upper limit far above the level predicted by the Standard Model. [ 8 ] Proposals were made to measure elastic light-by-light scattering using the strong electromagnetic fields of the hadrons collided at the LHC.
[ 9 ] In 2019, the ATLAS experiment at the LHC announced the first definitive observation of photon–photon scattering, observed in lead ion collisions that produced fields as large as 10^25 V/m, well in excess of the Schwinger limit. [ 10 ] Observation of a cross section larger or smaller than that predicted by the Standard Model could signify new physics such as axions, the search for which is the primary goal of PVLAS and several similar experiments. ATLAS observed more events than expected, potentially evidence that the cross section is larger than predicted by the Standard Model, but the excess is not yet statistically significant. [ 11 ] The planned, funded ELI–Ultra High Field Facility, which will study light at the intensity frontier, is likely to remain well below the Schwinger limit, [ 12 ] although it may still be possible to observe some nonlinear optical effects. [ 13 ] The Station of Extreme Light (SEL) is another laser facility under construction which should be powerful enough to observe the effect. [ 14 ] Such an experiment, in which ultra-intense light causes pair production, has been described in the popular media as creating a "tear" in spacetime. [ 15 ]
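For a quick numerical check of the field strengths quoted above, the expressions can be evaluated directly from physical constants (a minimal sketch assuming SciPy is installed; scipy.constants provides the CODATA values m_e, c, e and hbar):

```python
from scipy.constants import m_e, c, e, hbar

# Schwinger critical electric field: E_S = m_e^2 c^3 / (q_e * hbar)
E_S = m_e**2 * c**3 / (e * hbar)
# Corresponding critical magnetic field: B_S = E_S / c
B_S = E_S / c

print(f"E_S = {E_S:.3e} V/m")  # ~1.32e18 V/m
print(f"B_S = {B_S:.3e} T")    # ~4.41e9 T
```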
https://en.wikipedia.org/wiki/Schwinger_limit
The Schwinger variational principle is a variational principle which expresses the scattering T-matrix as a functional depending on two unknown wave functions. The functional attains a stationary value equal to the actual scattering T-matrix. The functional is stationary if and only if the two functions satisfy the Lippmann–Schwinger equation. The development of the variational formulation of scattering theory can be traced to the works of L. Hulthén and J. Schwinger in the 1940s. [ 1 ]

The T-matrix expressed in the form of a stationary value of the functional reads

{\displaystyle \langle \phi '|T(E)|\phi \rangle =\langle \phi '|V|\psi \rangle +\langle \psi '|V|\phi \rangle -\langle \psi '|V-VG_{0}^{(+)}(E)V|\psi \rangle ,}

where ϕ and ϕ′ are the initial and the final states respectively, V is the interaction potential and G_0^{(+)}(E) is the retarded Green's operator for collision energy E. The condition for the stationary value of the functional is that the functions ψ and ψ′ satisfy the Lippmann–Schwinger equation

{\displaystyle |\psi \rangle =|\phi \rangle +G_{0}^{(+)}(E)V|\psi \rangle }

and

{\displaystyle \langle \psi '|=\langle \phi '|+\langle \psi '|VG_{0}^{(+)}(E).}

A different form of the stationary principle for the T-matrix reads

{\displaystyle \langle \phi '|T(E)|\phi \rangle ={\frac {\langle \phi '|V|\psi \rangle \,\langle \psi '|V|\phi \rangle }{\langle \psi '|V-VG_{0}^{(+)}(E)V|\psi \rangle }}.}

The wave functions ψ and ψ′ must satisfy the same Lippmann–Schwinger equations to get the stationary value. The principle may be used for the calculation of the scattering amplitude in a similar way to the variational principle for bound states: the form of the wave functions ψ, ψ′ is guessed, with some free parameters that are determined from the condition of stationarity of the functional.
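To see why stationarity is equivalent to the Lippmann–Schwinger equation, one can vary the bilinear functional with respect to the bra ⟨ψ′| (a short illustrative derivation in the notation above, not part of the original text):

{\displaystyle \delta _{\psi '}[T]=\langle \delta \psi '|V|\phi \rangle -\langle \delta \psi '|\left(V-VG_{0}^{(+)}(E)V\right)|\psi \rangle =\langle \delta \psi '|V\left(|\phi \rangle +G_{0}^{(+)}(E)V|\psi \rangle -|\psi \rangle \right),}

which vanishes for every variation ⟨δψ′| when |ψ⟩ = |ϕ⟩ + G_0^{(+)}(E)V|ψ⟩, i.e. when ψ satisfies the Lippmann–Schwinger equation; varying |ψ⟩ instead yields the conjugate equation for ⟨ψ′|.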
https://en.wikipedia.org/wiki/Schwinger_variational_principle
The Schwinger–Dyson equations (SDEs) or Dyson–Schwinger equations, named after Julian Schwinger and Freeman Dyson, are general relations between correlation functions in quantum field theories (QFTs). They are also referred to as the Euler–Lagrange equations of quantum field theories, since they are the equations of motion corresponding to the Green's functions. They form a set of infinitely many functional differential equations, all coupled to each other, sometimes referred to as the infinite tower of SDEs.

In his paper "The S-Matrix in Quantum Electrodynamics", [ 1 ] Dyson derived relations between different S-matrix elements, or more specifically "one-particle Green's functions", in quantum electrodynamics, by summing up infinitely many Feynman diagrams, thus working in a perturbative approach. Starting from his variational principle, Schwinger derived a set of equations for Green's functions non-perturbatively, [ 2 ] which generalize Dyson's equations to the Schwinger–Dyson equations for the Green functions of quantum field theories. Today they provide a non-perturbative approach to quantum field theories and applications can be found in many fields of theoretical physics, such as solid-state physics and elementary particle physics. Schwinger also derived an equation for the two-particle irreducible Green functions, [ 2 ] which is nowadays referred to as the inhomogeneous Bethe–Salpeter equation.

Given a polynomially bounded functional F over the field configurations, then, for any state vector |ψ⟩ (which is a solution of the QFT), we have

{\displaystyle \left\langle \psi \left|{\mathcal {T}}\left\{{\frac {\delta }{\delta \varphi }}F[\varphi ]\right\}\right|\psi \right\rangle =-i\left\langle \psi \left|{\mathcal {T}}\left\{F[\varphi ]{\frac {\delta }{\delta \varphi }}S[\varphi ]\right\}\right|\psi \right\rangle ,}

where δ/δφ is the functional derivative with respect to φ, S is the action functional and 𝒯 is the time ordering operation. Equivalently, in the density state formulation, for any (valid) density state ρ, we have

{\displaystyle \rho \left({\mathcal {T}}\left\{{\frac {\delta }{\delta \varphi }}F[\varphi ]\right\}\right)=-i\,\rho \left({\mathcal {T}}\left\{F[\varphi ]{\frac {\delta }{\delta \varphi }}S[\varphi ]\right\}\right).}

This infinite set of equations can be used to solve for the correlation functions nonperturbatively.

To make the connection to diagrammatic techniques (like Feynman diagrams) clearer, it is often convenient to split the action S as

{\displaystyle S[\varphi ]={\tfrac {1}{2}}\varphi ^{i}D_{ij}^{-1}\varphi ^{j}+S_{\text{int}}[\varphi ],}

where the first term is the quadratic part and D^{-1} is an invertible symmetric (antisymmetric for fermions) covariant tensor of rank two in the deWitt notation whose inverse, D, is called the bare propagator, and S_int[φ] is the "interaction action". Then, we can rewrite the SD equations as

{\displaystyle \langle \psi |{\mathcal {T}}\{F\varphi ^{j}\}|\psi \rangle =D^{ji}\left(i\langle \psi |{\mathcal {T}}\left\{{\frac {\delta F}{\delta \varphi ^{i}}}\right\}|\psi \rangle -\langle \psi |{\mathcal {T}}\left\{F{\frac {\delta S_{\text{int}}}{\delta \varphi ^{i}}}\right\}|\psi \rangle \right).}

If F is a functional of φ, then for an operator K, F[K] is defined to be the operator which substitutes K for φ. For example, if

{\displaystyle F[\varphi ]=\varphi (x_{1})\cdots \varphi (x_{n})}

and G is a functional of J, then

{\displaystyle F\left[-i{\frac {\delta }{\delta J}}\right]G[J]=(-i)^{n}{\frac {\delta ^{n}}{\delta J(x_{1})\cdots \delta J(x_{n})}}G[J].}

If we have an "analytic" (a function that is locally given by a convergent power series) functional Z (called the generating functional) of J (called the source field) satisfying

{\displaystyle Z[J]=\int {\mathcal {D}}\varphi \,e^{i\left(S[\varphi ]+J_{i}\varphi ^{i}\right)},}

then, from the properties of the functional integrals, the Schwinger–Dyson equation for the generating functional is

{\displaystyle {\frac {\delta S}{\delta \varphi (x)}}\left[-i{\frac {\delta }{\delta J}}\right]Z[J]+J(x)Z[J]=0.}

If we expand this equation as a Taylor series about J = 0, we get the entire set of Schwinger–Dyson equations. To give an example, suppose

{\displaystyle S[\varphi ]=\int d^{d}x\left({\tfrac {1}{2}}\partial ^{\mu }\varphi (x)\partial _{\mu }\varphi (x)-{\tfrac {1}{2}}m^{2}\varphi (x)^{2}-{\frac {\lambda }{4!}}\varphi (x)^{4}\right)}

for a real field φ.
Then,

{\displaystyle {\frac {\delta S}{\delta \varphi (x)}}=-\partial _{\mu }\partial ^{\mu }\varphi (x)-m^{2}\varphi (x)-{\frac {\lambda }{3!}}\varphi (x)^{3}.}

The Schwinger–Dyson equation for this particular example is:

{\displaystyle i\partial _{\mu }\partial ^{\mu }{\frac {\delta }{\delta J(x)}}Z[J]+im^{2}{\frac {\delta }{\delta J(x)}}Z[J]-{\frac {i\lambda }{3!}}{\frac {\delta ^{3}}{\delta J(x)^{3}}}Z[J]+J(x)Z[J]=0.}

Note that since δ³/δJ(x)³ is not well-defined because

{\displaystyle {\frac {\delta ^{3}}{\delta J(x_{1})\,\delta J(x_{2})\,\delta J(x_{3})}}Z[J]}

is a distribution in x₁, x₂ and x₃, this equation needs to be regularized. In this example, the bare propagator D is the Green's function for −∂^μ∂_μ − m², and so the Schwinger–Dyson set of equations goes as

{\displaystyle \langle \psi |{\mathcal {T}}\{\varphi (x_{0})\varphi (x_{1})\}|\psi \rangle =iD(x_{0},x_{1})+{\frac {\lambda }{3!}}\int d^{d}x_{2}\,D(x_{0},x_{2})\langle \psi |{\mathcal {T}}\{\varphi (x_{1})\varphi (x_{2})^{3}\}|\psi \rangle }

and

{\displaystyle \langle \psi |{\mathcal {T}}\{\varphi (x_{0})\varphi (x_{1})\varphi (x_{2})\varphi (x_{3})\}|\psi \rangle =iD(x_{0},x_{1})\langle \psi |{\mathcal {T}}\{\varphi (x_{2})\varphi (x_{3})\}|\psi \rangle +iD(x_{0},x_{2})\langle \psi |{\mathcal {T}}\{\varphi (x_{1})\varphi (x_{3})\}|\psi \rangle +iD(x_{0},x_{3})\langle \psi |{\mathcal {T}}\{\varphi (x_{1})\varphi (x_{2})\}|\psi \rangle +{\frac {\lambda }{3!}}\int d^{d}x_{4}\,D(x_{0},x_{4})\langle \psi |{\mathcal {T}}\{\varphi (x_{1})\varphi (x_{2})\varphi (x_{3})\varphi (x_{4})^{3}\}|\psi \rangle }

etc. (Unless there is spontaneous symmetry breaking, the odd correlation functions vanish.)
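As a consistency check (an illustrative calculation, not part of the original text), set λ = 0. The generating functional is then Gaussian, and completing the square in the functional integral gives

{\displaystyle Z[J]=Z[0]\,e^{-{\frac {i}{2}}J_{i}D^{ij}J_{j}},}

and differentiating twice with respect to J at J = 0 reproduces ⟨ψ|𝒯{φ(x₀)φ(x₁)}|ψ⟩ = iD(x₀,x₁): in the free theory the two-point function is exactly i times the bare propagator, which is the λ → 0 limit of the first equation of the tower above.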
https://en.wikipedia.org/wiki/Schwinger–Dyson_equation
The Schöllkopf method, or Schöllkopf bis-lactim amino acid synthesis, is a method in organic chemistry for the asymmetric synthesis of chiral amino acids. [ 1 ] [ 2 ] The method was established in 1981 by Ulrich Schöllkopf. [ 3 ] [ 4 ] [ 5 ] In it, glycine is the substrate, valine a chiral auxiliary, and the reaction taking place an alkylation. The dipeptide derived from glycine and (R)-valine is converted into a 2,5-diketopiperazine (a cyclic dipeptide). Double O-methylation gives the bis-lactim. A proton is then abstracted from the prochiral position on glycine with n-BuLi. The next step decides the stereoselectivity of the method: one face of the carbanionic center is shielded by steric hindrance from the isopropyl residue on valine. The reaction of the anion with an alkyl iodide will form the alkylated product with a strong preference for just one enantiomer. In the final step the dipeptide is cleaved by acidic hydrolysis into two amino acid methyl esters, which can be separated from each other. With valine, Schöllkopf selected the natural proteinogenic amino acid with the largest non-reactive and nonchiral residue in order to achieve the largest possible stereoselectivity; generally speaking, an enantiomeric excess of over 95% ee is feasible. With the Schöllkopf method, all amino acids can be synthesised when a suitable R–I reagent is available. R does not need to be an alkyl group but can also be more complicated. The method is limited to the laboratory for the synthesis of exotic amino acids; industrial applications are not known. One disadvantage is limited atom economy.
https://en.wikipedia.org/wiki/Schöllkopf_method
In stellar astrophysics, the Schönberg–Chandrasekhar limit is the maximum mass of a non-fusing, isothermal core that can support an enclosing envelope. It is expressed as the ratio of the core mass to the total mass of the core and envelope. Estimates of the limit depend on the models used and the assumed chemical compositions of the core and envelope; typical values given are from 0.10 to 0.15 (10% to 15% of the total stellar mass). [ 1 ] [ 2 ] This is the maximum to which a helium-filled core can grow, and if this limit is exceeded, as can only happen in massive stars, the core collapses, releasing energy that causes the outer layers of the star to expand to become a red giant. It is named after the astrophysicists Subrahmanyan Chandrasekhar and Mario Schönberg, who estimated its value in a 1942 paper. [ 3 ] They estimated it to be:

{\displaystyle {\left({\frac {M_{\text{c}}}{M}}\right)}_{\text{SC}}=0.37\left({\frac {\mu _{\text{e}}}{\mu _{\text{c}}}}\right)^{2},}

where M is the mass, μ is the mean molecular weight, subscript c denotes the core, and subscript e denotes the envelope. The Schönberg–Chandrasekhar limit comes into play when fusion in a main-sequence star exhausts the hydrogen at the center of the star. The star then contracts until hydrogen fuses in a shell surrounding a helium-rich core, both of which are surrounded by an envelope consisting primarily of hydrogen. The core increases in mass as the shell burns its way outwards through the star. If the star's mass is less than approximately 1.5 solar masses, the core will become degenerate before the Schönberg–Chandrasekhar limit is reached, and, on the other hand, if the mass is greater than approximately 6 solar masses, the star leaves the main sequence with a core mass already greater than the Schönberg–Chandrasekhar limit, so its core is never isothermal before helium fusion. In the remaining case, where the mass is between 1.5 and 6 solar masses, the core will grow until the limit is reached, at which point it will contract rapidly until helium starts to fuse in the core. [ 1 ] [ 4 ]
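To get a feel for the numbers, the fraction can be evaluated for illustrative compositions (the mean molecular weights below are textbook-style assumptions, not values from the cited paper):

```python
def sc_limit(mu_env, mu_core):
    """Schoenberg-Chandrasekhar core mass fraction: 0.37 * (mu_env / mu_core)**2."""
    return 0.37 * (mu_env / mu_core) ** 2

# Assumed values: fully ionized hydrogen-rich envelope (mu_e ~ 0.6)
# surrounding a fully ionized helium core (mu_c ~ 1.3).
print(f"{sc_limit(0.6, 1.3):.3f}")  # ~0.079
```

The result, roughly 0.08, is of the same order as the 0.10 to 0.15 range quoted above; more detailed models shift the coefficient and hence the fraction.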
https://en.wikipedia.org/wiki/Schönberg–Chandrasekhar_limit
In chemistry , the Schöniger oxidation (also known as the Schöniger flask test or the oxygen flask method ) is a method of elemental analysis developed by Wolfgang Schöniger . [ 1 ] The test is conducted in an Erlenmeyer flask , or in a separatory funnel . It involves the combustion of a sample in pure oxygen, followed by the absorption of the combustion products by a solution of sodium hydroxide . [ 2 ] It allows quantitative determination of elemental chlorine , nitrogen and sulfur in a sample.
https://en.wikipedia.org/wiki/Schöniger_oxidation
SciCrunch is a collaboratively edited knowledge base about scientific resources. It is a community portal for researchers and a content management system for data and databases. It is intended to provide a common source of data to the research community, including data about Research Resource Identifiers (RRIDs), which can be used in scientific publications. After starting as a pilot with two journals in 2014, by 2022 over 1,000 journals were using RRIDs and over half a million RRIDs had been quoted in the scientific literature. [ 1 ] In some respects, it is to science and scholarly publishing what Wikidata is to Wikimedia Foundation projects. Hosted by the University of California, San Diego, SciCrunch was also designed to help communities of researchers create their own portals to provide access to resources, databases and tools of relevance to their research areas. [ 2 ]

Research Resource Identifiers (RRIDs) are globally unique and persistent. [ 3 ] They were introduced and are promoted by the Resource Identification Initiative. [ 3 ] Resources in this context are research resources like reagents, tools or materials. [ 3 ] [ 4 ] An example of such a resource would be a cell line used in an experiment or a software tool used in a computational analysis. The Resource Identification Portal ( https://scicrunch.org/resources ) was created in support of this initiative and is a central service where these identifiers can be searched and created. [ 3 ] [ 5 ] Unlike supplementary files, these identifiers are intended to be fully searchable by data mining, and they can be updated to new versions as basic methodology changes over time. For key biological resources, the recommendation is to cite the RRID directly in the text of the publication. The Resource Identification Portal lists existing RRIDs and instructions for creating a new one if an RRID matching the resource does not already exist.

Description: Each RRID contains an ID, a type, a URL, and a name. There are hundreds of other attributes but most are specific to the type; for example, antibody-type RRIDs include an attribute called clonality, denoting whether the reagent is monoclonal or polyclonal, while cell lines have an attribute of "parental cell line" denoting the origin of the cell line being described.

RRID Citations: RRIDs denote those research resources that have been used in the conduct of a study. They are not intended to be casual citations. RRIDs that have been used in scientific papers have been mined from the literature using both automated tools [ 6 ] and semi-automated tools thanks to a partnership with Hypothes.is. The data defining which paper cites a particular RRID is usually available on the resolver page for that RRID; for example, https://scicrunch.org/resolver/CVCL_0038 shows the list of 44 papers (as of April 11, 2023) that have used this cell line in research. Each reference shows how authors have used the RRID by including a short snippet of the sentence in which the resource is defined by the authors.
External Resolver Services for RRIDs: The Name-to-Thing resolver from the California Digital Library can resolve any RRID using the pattern https://n2t.net/ [RRID], for example https://n2t.net/RRID:NXR_1049 . The Identifiers.org resolver can also resolve any RRID using the pattern https://identifiers.org/RRID/ [RRID], for example https://identifiers.org/RRID/RRID:NXR_1049 . A number of publishing houses, initiatives, and research institutions encourage using SciCrunch's RRIDs: Common Citation Format Article in Nature, [ 7 ] Cell Press, eLife, FORCE11, Frontiers Media, [ 8 ] GigaScience, [ 9 ] MIRIAM Registry, [ 10 ] NIH, [ 11 ] PLOS Biology and PLOS Genetics. [ 12 ]
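Because both resolver services above are plain URL templates, an RRID can be looked up programmatically with only the standard library (a minimal sketch; the structure of the returned landing page is not specified here and may change):

```python
import urllib.request

def resolve_rrid(rrid: str, base: str = "https://n2t.net/") -> str:
    """Fetch the resolver landing page for an RRID using the n2t.net
    pattern shown above; identifiers.org works analogously."""
    with urllib.request.urlopen(base + rrid) as resp:
        return resp.read().decode("utf-8", errors="replace")

page = resolve_rrid("RRID:NXR_1049")
print(len(page))  # size in characters of the returned landing page
```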
https://en.wikipedia.org/wiki/SciCrunch
The Scianna blood antigen system consists of seven antigens . [ 1 ] [ 2 ] These include two high frequency antigens Sc1 and Sc3, and two low frequency antigens Sc2 and Sc4. [ 1 ] The very rare null phenotype is characterised by the absence of Sc1, Sc2 and Sc3. [ 1 ] The antigens are caused by changes in the erythroid membrane associated protein ( ERMAP ). [ 3 ] [ 4 ] This blood group system was discovered in 1962 when a high frequency antigen was detected in a young woman (Ms. Scianna) who had experienced several late pregnancy losses due to haemolytic disease of the fetus . [ 3 ]
https://en.wikipedia.org/wiki/Scianna_antigen_system
The Science, Technology, Engineering and Mathematics Network ( STEMNET ) is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and, eventually, related careers. [ 1 ] It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. Its chief aim is to interest children in science, technology, engineering and mathematics: the hope is that primary school children who develop an interest in these subjects will go on, as secondary school pupils, to choose science A levels, which can lead to a science career. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. [ 1 ] To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. [ 2 ] These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, [ 3 ] since STEMNET sits on the chronological dividing point (age 16) between the remits of the two new departments.
https://en.wikipedia.org/wiki/Science,_Technology,_Engineering_and_Mathematics_Network
Science, technology, engineering, and mathematics ( STEM ) is an umbrella term used to group together the distinct but related technical disciplines of science, technology, engineering, and mathematics. The term is typically used in the context of education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area), and immigration policy, with regard to admitting foreign students and tech workers. [ 1 ]

There is no universal agreement on which disciplines are included in STEM; in particular, whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science. In the United States, these are typically included by the National Science Foundation (NSF), [ 1 ] the Department of Labor's O*Net online database for job seekers, [ 2 ] and the Department of Homeland Security. [ 3 ] In the United Kingdom, the social sciences are categorized separately and are instead grouped with humanities and arts to form another counterpart acronym HASS (humanities, arts, and social sciences), [ citation needed ] rebranded in 2020 as SHAPE (social sciences, humanities and the arts for people and the economy). [ 4 ] [ 5 ] Some sources also use HEAL (health, education, administration, and literacy) as the counterpart of STEM. [ citation needed ]

In the early 1990s the acronym STEM was used by a variety of educators. Beverly Schwartz developed a STEM mentoring program in the Capital District of New York State, and was using the acronym as early as November 1991. [ 6 ] Charles E. Vela was the founder and director of the Center for the Advancement of Hispanics in Science and Engineering Education (CAHSEE) [ 7 ] [ 8 ] [ 9 ] and started a summer program for talented under-represented students in the Washington, D.C. area called the STEM Institute. Based on the program's recognized success and his expertise in STEM education, [ 10 ] Charles Vela was asked to serve on numerous NSF and Congressional panels in science, mathematics, and engineering education. [ 11 ] The NSF had previously referred to the fields as SMET, [ 12 ] and it was through Vela's work that the NSF was first introduced to the acronym STEM. One of the first NSF projects to use the acronym was STEMTEC, the Science, Technology, Engineering, and Math Teacher Education Collaborative at the University of Massachusetts Amherst, which was founded in 1998. [ 13 ] In 2001, at the urging of Dr. Peter Faletra, the Director of Workforce Development for Teachers and Scientists at the Office of Science, the acronym was adopted by Rita Colwell and other science administrators in the National Science Foundation (NSF). The Office of Science was also an early adopter of the STEM acronym. [ 14 ]

By the mid-2000s, China surpassed the United States in the number of PhDs awarded and is expected to produce 77,000 PhDs in 2025, compared to 40,000 in the US. [ 33 ]

The Australian Curriculum, Assessment, and Reporting Authority's 2015 report, entitled National STEM School Education Strategy, stated that "A renewed national focus on STEM in school education is critical to ensuring that all young Australians are equipped with the necessary STEM skills and knowledge that they will need to succeed."
[ 34 ] The strategy set out national goals for STEM school education. Events and programs meant to help develop STEM in Australian schools include the Victorian Model Solar Vehicle Challenge, the Maths Challenge (Australian Mathematics Trust), [ 35 ] Go Girl Go Global [ 35 ] and the Australian Informatics Olympiad. [ 35 ]

Canada ranks 12th out of 16 peer countries in the percentage of its graduates who studied in STEM programs, with 21.2%, a number higher than the United States, but lower than France, Germany, and Austria. The peer country with the greatest proportion of STEM graduates, Finland, has over 30% of its university graduates coming from science, mathematics, computer science, and engineering programs. [ 36 ] SHAD is an annual Canadian summer enrichment program for high-achieving high school students in July. The program focuses on academic learning, particularly in STEAM fields. [ 37 ] Scouts Canada has taken similar measures to their American counterpart to promote STEM fields to youth. Their STEM program began in 2015. [ 38 ] In 2011 Canadian entrepreneur and philanthropist Seymour Schulich established the Schulich Leader Scholarships, $100 million in $60,000 scholarships for students beginning their university education in a STEM program at 20 institutions across Canada. Each year 40 Canadian students would be selected to receive the award, two at each institution, with the goal of attracting gifted youth into the STEM fields. [ 39 ] The program also supplies STEM scholarships to five participating universities in Israel. [ 40 ]

To promote STEM in China, the Chinese government issued a guideline in 2016 on national innovation-driven development strategy, "instructing that by 2020, China should become an innovative country; by 2030, it should be at the forefront of innovative countries; and by 2050, it should become a technology innovation power." [ 41 ] "[I]n May 2018, the launching ceremony and press conference for the 2029 Action Plan for China's STEM Education was held in Beijing, China. This plan aims to allow as many students to benefit from STEM education as possible and equip all students with scientific thinking and the ability to innovate." "In response to encouraging policies by the government, schools in both public and private sectors around the country have begun to carry out STEM education programs." [ 42 ] "However, to effectively implement STEM curricula, full-time teachers specializing in STEM education and relevant content to be taught are needed." Currently, "China lacks qualified STEM teachers and a training system is yet to be established." [ 42 ] Several Chinese cities have made programming a mandatory subject for elementary and middle school students; this is the case in Chongqing, for example. [ 43 ] However, most students from small and medium-sized cities are not exposed to the concept of STEM until they enter college. [ 44 ]

Several European projects have promoted STEM education and careers in Europe. For instance, Scientix [ 45 ] is a European cooperation of STEM teachers, education scientists, and policymakers. The SciChallenge [ 46 ] project used a social media contest and student-generated content to increase the motivation of pre-university students for STEM education and careers. The Erasmus programme project AutoSTEM [ 47 ] used automata to introduce STEM subjects to very young children.

In Finland, the LUMA Center is the leading advocate for STEM-oriented education.
Its aim is to promote the instruction and research of natural sciences, mathematics, computer science, and technology across all educational levels in the country. In the native tongue luma stands for "luonnontieteellis-matemaattinen" (lit. adj. "scientific-mathematical"). [ 48 ] The abbreviation is more or less a direct translation of STEM, with engineering fields included by association. However, unlike STEM, the term is also a portmanteau from lu and ma. To address the decline in interest in learning the areas of science, the Finnish National Board of Education launched the LUMA scientific education development program. The project's main goal was to raise the level of Finnish education and to enhance students' competencies, improve educational practices, and foster interest in science. The initiative led to the establishment of 13 LUMA centers at universities across Finland, supervised by the LUMA Center.

The name of STEM in France is industrial engineering sciences (sciences industrielles or sciences de l'ingénieur). The STEM organization in France is the association UPSTI. [ clarification needed ]

STEM education was not promoted among the local schools in Hong Kong until recent years. In November 2015, the Education Bureau of Hong Kong released a document titled Promotion of STEM Education, [ 49 ] which proposes strategies and recommendations for promoting STEM education.

India is second only to China in STEM graduates, with a ratio of STEM graduates to population of 1 to 52. The total number of fresh STEM graduates was 2.6 million in 2016. [ 50 ] STEM graduates have been contributing to the Indian economy with well-paid salaries locally and abroad for the past two decades. The turnaround of the Indian economy with comfortable foreign exchange reserves is mainly attributed to the skills of its STEM graduates. In India, women make up an impressive 43% of STEM graduates, the highest percentage worldwide. However, they hold only 14% of STEM-related jobs. Additionally, among the 280,000 scientists and engineers working in research and development institutes in the country, women represent a mere 14%. [ 51 ] In India, OMOTEC provides an innovative STEM-based curriculum, and its students are also developing products to solve new-age problems. [ 52 ] Two students also won the Microsoft Imagine Cup for developing a non-invasive method to screen for skin cancer using artificial intelligence. [ 53 ]

In Nigeria, the Association of Professional Women Engineers Of Nigeria (APWEN) has involved girls between the ages of 12 and 19 in science-based courses so that they will go on to pursue such courses in higher institutions of learning. The National Science Foundation (NSF) in Nigeria has made conscious efforts to encourage girls to innovate, invent, and build through the "invent it, build it" program sponsored by NNPC. [ 54 ]

STEM subjects are taught in Pakistan as part of electives taken in the 9th and 10th grades, culminating in Matriculation exams. These electives are pure sciences (Physics, Chemistry, Biology), mathematics (Physics, Chemistry, Maths), and computer science (Physics, Chemistry, Computer Science). STEM subjects are also offered as electives taken in the 11th and 12th grades, more commonly referred to as first and second year, culminating in Intermediate exams. These electives are FSc pre-medical (Physics, Chemistry, Biology), FSc pre-engineering (Physics, Chemistry, Maths), and ICS (Physics/Statistics, Computer Science, Maths).
These electives are intended to aid students in pursuing STEM-related careers in the future by preparing them for the study of these courses at university. A STEM education project has been approved by the government [ 55 ] to establish STEM labs in public schools. The Ministry of Information Technology and Telecommunication has collaborated with Google to launch Pakistan's first grassroots-level Coding Skills Development Program, [ 56 ] based on Google's CS First Program, a global initiative aimed at developing coding skills in children. The program aims to develop applied coding skills using gamification techniques for children between the ages of 9 and 14. KPITB's Early Age Programming initiative, [ 57 ] established in the province of Khyber Pakhtunkhwa, has been successfully introduced in 225 Elementary and Secondary Schools. Many private organizations are working in Pakistan to introduce STEM education in schools.

In the Philippines, STEM is a two-year program and strand that is used for Senior High School (Grades 11 and 12), assigned by the Department of Education or DepEd. The STEM strand is under the Academic Track, which also includes other strands like ABM, HUMSS, and GAS. [ 58 ] [ 59 ] The purpose of the STEM strand is to educate students in the field of science, technology, engineering, and mathematics, in an interdisciplinary and applied approach, and to give students advanced knowledge and application in the field. After completing the program, the students will earn a Diploma in Science, Technology, Engineering, and Mathematics. Some colleges and universities require students applying for STEM degrees (like medicine, engineering, computer studies, etc.) to be graduates of STEM; if not, they will need to enter a bridging program.

In Qatar, AL-Bairaq is an outreach program to high-school students with a curriculum that focuses on STEM, run by the Center for Advanced Materials (CAM) at Qatar University. Each year around 946 students, from about 40 high schools, participate in AL-Bairaq competitions. [ 60 ] AL-Bairaq makes use of project-based learning, encourages students to solve authentic problems, and requires them to work with each other as a team to build real solutions. [ 61 ] [ 62 ] Research has so far shown positive results for the program. [ 63 ]

STEM is part of the Applied Learning Programme (ALP) that the Singapore Ministry of Education (MOE) has been promoting since 2013, and currently all secondary schools have such a program. It is expected that by 2023, all primary schools in Singapore will have an ALP. There are no tests or exams for ALPs. The emphasis is for students to learn through experimentation – they try, fail, try, learn from it, and try again. The MOE actively supports schools with ALPs to further enhance and strengthen their capabilities and programs that nurture innovation and creativity. The Singapore Science Centre established a STEM unit in January 2014, dedicated to igniting students' passion for STEM. To further enrich students' learning experiences, its Industrial Partnership Programme (IPP) creates opportunities for students to get early exposure to real-world STEM industries and careers. Curriculum specialists and STEM educators from the Science Centre work hand-in-hand with teachers to co-develop STEM lessons, provide training to teachers, and co-teach such lessons to provide students with early exposure and develop their interest in STEM.
In 2017, Thai Education Minister Teerakiat Jareonsettasin said after the 49th Southeast Asia Ministers of Education Organisation (SEAMEO) Council Conference in Jakarta that the meeting approved the establishment of two new SEAMEO regional centers in Thailand. One would be the STEM Education Centre, while the other would be a Sufficient Economy Learning Centre. [ 64 ] Teerakiat said that the Thai government had already allocated Bt250 million over five years for the new STEM center. The center will be the regional institution responsible for STEM education promotion. It will not only set up policies to improve STEM education, but it will also be the center for information and experience sharing among the member countries and education experts. According to him, "This is the first SEAMEO regional center for STEM education, as the existing science education center in Malaysia only focuses on the academic perspective. Our STEM education center will also prioritize the implementation and adaptation of science and technology." [ 65 ] The Institute for the Promotion of Teaching Science and Technology has initiated a STEM Education Network. Its goals are to promote integrated learning activities, improve student creativity and application of knowledge, and establish a network of organisations and personnel for the promotion of STEM education in the country. [ 66 ]

The Turkish STEM Education Task Force (or FeTeMM—Fen Bilimleri, Teknoloji, Mühendislik ve Matematik) is a coalition of academicians and teachers working to increase the quality of education in STEM fields rather than focussing on increasing the number of STEM graduates. [ 67 ] [ 68 ]

In the United States, the acronym began to be used in education and immigration debates in initiatives to address the perceived lack of qualified candidates for high-tech jobs. It also addresses concern that the subjects are often taught in isolation, instead of as an integrated curriculum. [ 69 ] Maintaining a citizenry that is well-versed in the STEM fields is a key portion of the public education agenda of the United States. [ 70 ] The acronym has been widely used in the immigration debate regarding access to United States work visas for immigrants who are skilled in these fields. It has also become commonplace in education discussions as a reference to the shortage of skilled workers and inadequate education in these areas. [ 71 ] The term tends not to refer to the non-professional and less visible sectors of the fields, such as electronics assembly line work.

Many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field. The NSF uses a broad definition of STEM subjects that includes subjects in the fields of chemistry, computer and information technology science, engineering, geoscience, life sciences, mathematical sciences, physics and astronomy, social sciences (anthropology, economics, psychology, and sociology), and STEM education and learning research. [ 1 ] [ 72 ] The NSF is the only American federal agency whose mission includes support for all fields of fundamental science and engineering, except for medical sciences.
[ 73 ] Its disciplinary program areas include scholarships, grants, and fellowships in fields such as biological sciences, computer and information science and engineering, education and human resources, engineering, environmental research and education, geoscience, international science and engineering, mathematical and physical sciences, social, behavioral and economic sciences, cyberinfrastructure, and polar programs. [ 72 ] Although many organizations in the United States follow the guidelines of the National Science Foundation on what constitutes a STEM field, the United States Department of Homeland Security (DHS) has its own functional definition used for immigration policy. [ 74 ] In 2012, DHS's Immigration and Customs Enforcement (ICE) announced an expanded list of STEM-designated degree programs that qualify eligible graduates on student visas for an optional practical training (OPT) extension. Under the OPT program, international students who graduate from colleges and universities in the United States can stay in the country and receive up to twelve months of training through work experience. Students who graduate from a designated STEM degree program can stay for an additional seventeen months on an OPT STEM extension. [ 75 ] [ 76 ]

As of 2023, the U.S. faces a shortage of high-skilled workers in STEM, and foreign talents must navigate difficult hurdles to immigrate. Meanwhile, some other countries, such as Australia, Canada, and the United Kingdom, have introduced programs to attract talent at the expense of the United States. [ 77 ] In the case of China, the United States risks losing its edge over a strategic rival. [ 78 ]

Cultivating an interest in the natural and social sciences in preschool or immediately following school entry can greatly improve the chances of STEM success in high school. [ 79 ] STEM supports broadening the study of engineering within each of the other subjects and beginning engineering at younger grades, even elementary school. It also brings STEM education to all students rather than only the gifted programs. In his 2012 budget, President Barack Obama renamed and broadened the "Mathematics and Science Partnership (MSP)" to award block grants to states for improving teacher education in those subjects. [ 80 ]

In the 2015 run of the international assessment test, the Program for International Student Assessment (PISA), American students came out 35th in mathematics, 24th in reading, and 25th in science, out of 109 countries. The United States also ranked 29th in the percentage of 24-year-olds with science or mathematics degrees. [ 83 ]

STEM education often uses new technologies such as 3D printers to encourage interest in STEM fields. [ 84 ] STEM education can also leverage the combination of new technologies, such as photovoltaics and environmental sensors, with old technologies such as composting systems and irrigation within land lab environments.

In 2006 the United States National Academies expressed their concern about the declining state of STEM education in the United States. Its Committee on Science, Engineering, and Public Policy developed a list of 10 actions, headed by three top recommendations. The National Aeronautics and Space Administration also has implemented programs and curricula to advance STEM education to replenish the pool of scientists, engineers, and mathematicians who will lead space exploration in the 21st century.
[ 85 ] Individual states, such as California, have run pilot after-school STEM programs to learn what the most promising practices are and how to implement them to increase the chance of student success. [ 86 ] Another state to invest in STEM education is Florida, where Florida Polytechnic University, [ 87 ] Florida's first public university for engineering and technology dedicated to science, technology, engineering, and mathematics (STEM), was established. [ 88 ] During school, STEM programs have been established for many districts throughout the U.S., in states including New Jersey, Arizona, Virginia, North Carolina, Texas, and Ohio. [ 89 ] [ 90 ] Continuing STEM education has expanded to the post-secondary level through master's programs such as the University of Maryland's STEM Program [ 91 ] as well as a similar program at the University of Cincinnati. [ 92 ]

In the United States, the National Science Foundation found that the average science score on the 2011 National Assessment of Educational Progress was lower for black and Hispanic students than for white, Asian, and Pacific Islander students. [ 94 ] In 2011, eleven percent of the U.S. workforce was black, while only six percent of STEM workers were black. [ 95 ] Though STEM in the U.S. has typically been dominated by white males, there have been considerable efforts to create initiatives to make STEM a more racially and gender-diverse field. [ 96 ] Some evidence suggests that all students, including black and Hispanic students, have a better chance of earning a STEM degree if they attend a college or university at which their entering academic credentials are at least as high as the average student's. [ 97 ]

Although women make up 47% of the workforce [ 98 ] in the U.S., they hold only 24% of STEM jobs. Research suggests that exposing girls to female inventors at a young age has the potential to reduce the gender gap in technical STEM fields by half. [ 99 ] Campaigns from organizations like the National Inventors Hall of Fame aimed to achieve a 50/50 gender balance in their youth STEM programs by 2020. The gender gap in Zimbabwe's STEM fields is also significant, with only 28.79% of women holding STEM degrees compared to 71.21% of men. [ 100 ]

STEM fields have been recognized as areas where underrepresentation and exclusion of marginalized groups are prevalent. STEM poses unique challenges related to intersectionality due to rigid norms and stereotypes, both in higher education and professional settings. These norms often prioritize objectivity and meritocracy while overlooking structural inequities, creating environments where individuals with intersecting marginalized identities face compounded barriers. For instance, individuals from traditionally underrepresented groups may experience a phenomenon known as "chilly climates", which refers to incidents of sexism, isolation, and pressure to prove themselves to peers and high-level academics. [ 101 ] For minority populations in STEM, loneliness is experienced due to lack of belonging and social isolation. [ 102 ]

In the State of the Union Address on January 31, 2006, President George W. Bush announced the American Competitiveness Initiative. Bush proposed the initiative to address shortfalls in federal government support of educational development and progress at all academic levels in the STEM fields.
In detail, the initiative called for significant increases in federal funding for advanced R&D programs (including a doubling of federal funding support for advanced research in the physical sciences through DOE) and an increase in U.S. higher education graduates within STEM disciplines. The NASA Means Business competition, sponsored by the Texas Space Grant Consortium, furthers that goal. College students compete to develop promotional plans to encourage students in middle and high school to study STEM subjects and to inspire professors in STEM fields to involve their students in outreach activities that support STEM education.

The National Science Foundation has numerous programs in STEM education, including some for K–12 students such as the ITEST Program that supports The Global Challenge Award ITEST Program. STEM programs have been implemented in some Arizona schools; they foster higher-order cognitive skills and enable students to inquire and to use techniques used by professionals in the STEM fields.

Project Lead The Way (PLTW) is a provider of STEM education curricular programs to middle and high schools in the United States. Programs include a high school engineering curriculum called Pathway To Engineering, a high school biomedical sciences program, and a middle school engineering and technology program called Gateway To Technology. PLTW programs have been endorsed by President Barack Obama and United States Secretary of Education Arne Duncan as well as various state, national, and business leaders. [ citation needed ]

The Science, Technology, Engineering, and Mathematics (STEM) Education Coalition [ 103 ] works to support STEM programs for teachers and students at the U.S. Department of Education, the National Science Foundation, and other agencies that offer STEM-related programs. Activity of the STEM Coalition seems to have slowed since September 2008.

In 2012, the Boy Scouts of America began handing out awards, titled NOVA and SUPERNOVA, for completing specific requirements appropriate to the scouts' program level in each of the four main STEM areas. The Girl Scouts of the USA has similarly incorporated STEM into their program through the introduction of merit badges such as "Naturalist" and "Digital Art". [ 104 ]

SAE is an international organization and provider specializing in supporting education, award, and scholarship programs for STEM matters, from pre-K to college degrees. [ 105 ] It also promotes scientific and technological innovation. [ 106 ]

eCybermission is a free, web-based science, mathematics, and technology competition for students in grades six through nine sponsored by the U.S. Army. Each webinar is focused on a different step of the scientific method and is presented by an experienced eCybermission CyberGuide. CyberGuides are military and civilian volunteers with a strong background in STEM and STEM education, who can provide insight into science, technology, engineering, and mathematics to students and team advisers.

STARBASE is an educational program, sponsored by the Office of the Assistant Secretary of Defense for Reserve Affairs. Students interact with military personnel to explore careers and make connections with the "real world". The program provides students with 20–25 hours of experience at National Guard, Navy, Marine, Air Force Reserve, and Air Force bases across the nation.
SeaPerch is an underwater robotics program that trains teachers to teach their students how to build an underwater remotely operated vehicle (ROV) in an in-school or out-of-school setting. Students build the ROV from a kit composed of low-cost, easily accessible parts, following a curriculum that teaches basic engineering and science concepts with a marine engineering theme.

NASAStem is a program of the U.S. space agency NASA to increase diversity within its ranks, including age, disability, and gender as well as race/ethnicity. [ 107 ]

The America COMPETES Act (P.L. 110–69) became law on August 9, 2007. It is intended to increase the nation's investment in science and engineering research and in STEM education from kindergarten to graduate school and postdoctoral education. The act authorizes funding increases for the National Science Foundation, National Institute of Standards and Technology laboratories, and the Department of Energy (DOE) Office of Science over FY2008–FY2010. Robert Gabrys, Director of Education at NASA's Goddard Space Flight Center, articulated success as increased student achievement, early expression of student interest in STEM subjects, and student preparedness to enter the workforce. In November 2012, a White House announcement before the congressional vote on the STEM Jobs Act put President Obama in opposition to many of the Silicon Valley firms and executives who bankrolled his re-election campaign. [ 108 ]

The Department of Labor identified 14 sectors that are "projected to add substantial numbers of new jobs to the economy or affect the growth of other industries or are being transformed by technology and innovation requiring new sets of skills for workers." [ 109 ] The identified sectors were as follows: advanced manufacturing, automotive, construction, financial services, geospatial technology, homeland security, information technology, transportation, aerospace, biotechnology, energy, healthcare, hospitality, and retail. The Department of Commerce notes that STEM field careers are some of the best-paying and have the greatest potential for job growth in the early 21st century. The report also notes that STEM workers play a key role in the sustained growth and stability of the U.S. economy, and training in STEM fields generally results in higher wages, whether or not they work in a STEM field. [ 110 ] In 2015, there were around 9.0 million STEM jobs in the United States, representing 6.1% of American employment. STEM jobs were increasing by around 9 percent per year. [ 111 ] The Brookings Institution found that the demand for competent technology graduates will surpass the number of capable applicants by at least one million individuals. According to the Pew Research Center, a typical STEM worker earns two-thirds more than those employed in other fields. [ 112 ] According to 2014 US Census Bureau data, "74 percent of those who have a bachelor's degree in science, technology, engineering and math — commonly referred to as STEM — are not employed in STEM occupations." [ 113 ] [ 114 ] In September 2017, several large American technology firms collectively pledged to donate $300 million for computer science education in the U.S. [ 115 ]

Pew findings revealed in 2018 that Americans identified several issues that hound STEM education, including unconcerned parents, disinterested students, obsolete curriculum materials, and too much focus on state parameters.
57 percent of survey respondents pointed out that one main problem of STEM education is students' lack of concentration in learning. [ 116 ]

The recent National Assessment of Educational Progress (NAEP) report card [ 117 ] made public technology and engineering literacy scores, which determine whether students can apply technology and engineering proficiency to real-life scenarios. The report showed a gap of 28 points between low-income students and their high-income counterparts. The same report also indicated a 38-point difference between white and black students. [ 118 ]

The Smithsonian Science Education Center (SSEC) announced the release of a five-year strategic plan by the Committee on STEM Education of the National Science and Technology Council on December 4, 2018. The plan is entitled "Charting a Course for Success: America's Strategy for STEM Education." [ 119 ] The objective is to propose a federal strategy anchored on a vision for the future so that all Americans are given permanent access to premium-quality education in Science, Technology, Engineering, and Mathematics. In the end, the United States can emerge as a world leader in STEM mastery, employment, and innovation. The goals of this plan are building foundations for STEM literacy; enhancing diversity, equality, and inclusion in STEM; and preparing the STEM workforce for the future. [ 120 ]

The 2019 fiscal budget proposal of the White House supported the funding plan in President Donald Trump's Memorandum on STEM Education, which allocated around $200 million (grant funding) for STEM education every year. This budget also supports STEM through a grant program worth $20 million for career as well as technical education programs. [ 121 ]

In Vietnam, beginning in 2012, many private education organizations have developed STEM education initiatives. In 2015, the Ministry of Science and Technology and Liên minh STEM organized the first National STEM Day, followed by many similar events across the country. In 2015, the Ministry of Education and Training included STEM as an area that needed to be encouraged in the national school year program. In May 2017, the Prime Minister signed Directive No. 16 [ 122 ] stating: "Dramatically change the policies, contents, education and vocational training methods to create a human resource capable of receiving new production technology trends, with a focus on promoting training in science, technology, engineering and mathematics (STEM), foreign languages, information technology in general education" and asking the "Ministry of Education and Training (to): Promote the deployment of science, technology, engineering and mathematics (STEM) education in general education program; Pilot organize in some high schools from 2017 to 2018."

Women constitute 47% of the U.S. workforce and perform 24% of STEM-related jobs. [ 123 ] In the UK, women perform 13% of STEM-related jobs (2014). [ 124 ] In the U.S., women with STEM degrees are more likely to work in education or healthcare rather than STEM fields compared with their male counterparts. The gender ratio depends on the field of study. For example, in the European Union in 2012 women made up 47.3% of the total, 51% of the social sciences, business, and law, 42% of the science, mathematics, and computing, 28% of engineering, manufacturing, and construction, and 59% of PhD graduates in Health and Welfare. [ 125 ]

In a study from 2019, it was shown that part of the success of women in STEM depends on the way women in STEM are viewed.
A study comparing grants evaluated primarily on the proposed project with grants evaluated primarily on the project lead found almost no difference between projects from men and women when the project itself was evaluated; but when the project leader was the main criterion, projects headed by women were awarded grants four percent less often. [ 126 ]

Improving the experiences of women in STEM is a major component of increasing the number of women in STEM. One part of this includes the need for role models and mentors who are women in STEM. Along with this, having good resources for information and networking opportunities can improve women's ability to flourish in STEM fields. [ 127 ] Adding to the complexity, global studies indicate that biology may play a significant role in the gender gaps in STEM fields, because the propensity for women to pursue college degrees in STEM fields declines consistently as countries become more wealthy and egalitarian. As women are more free to choose their careers, they are more prone to choose careers that relate to people rather than objects. [ 128 ]

People identifying within the group LGBTQ+ have faced discrimination in STEM fields throughout history. Few people in STEM were openly queer; two well-known examples are Alan Turing, the father of computer science, and Sara Josephine Baker, an American physician and public-health leader. [ 129 ] Despite recent changes in attitudes towards LGBTQ+ people, discrimination still permeates STEM fields. [ 130 ] [ 131 ] A recent study has shown that sexual minority students were less likely to have completed a bachelor's degree in a STEM field, [ 132 ] [ 133 ] having opted to switch their major. Those who remained in a STEM field were, however, more likely to participate in undergraduate research programs. According to the study, sexual minorities did show higher overall retention rates within STEM-related fields as compared to heterosexual women. [ 132 ] [ 131 ] Another study concluded that queer people are more likely to experience exclusion, harassment, and other negative impacts while in a STEM career, while also having fewer opportunities and resources available to them. [ 134 ]

Multiple programs and institutions are working towards increasing the inclusion and acceptance of LGBTQ+ people in STEM. In the US, the National Organization of Gay and Lesbian Scientists and Technical Professionals (NOGLSTP) has organized people to address homophobia since the 1980s and now promotes activism and support for queer scientists. [ 135 ] Other programs, including 500 Queer Scientists and Pride in STEM, function as visibility campaigns for LGBTQ+ people in STEM worldwide. [ 135 ] [ 136 ]

The focus on increasing participation in STEM fields has attracted criticism. In the 2014 article "The Myth of the Science and Engineering Shortage" in The Atlantic, demographer Michael S. Teitelbaum criticized the efforts of the U.S. government to increase the number of STEM graduates, saying that, among studies on the subject, "No one has been able to find any evidence indicating current widespread labor market shortages or hiring difficulties in science and engineering occupations that require bachelor's degrees or higher", and that "Most studies report that real wages in many—but not all—science and engineering occupations have been flat or slow-growing, and unemployment as high or higher than in many comparably-skilled occupations."
Teitelbaum also wrote that the then-current national fixation on increasing STEM participation paralleled previous U.S. government efforts since World War II to increase the number of scientists and engineers, all of which he stated ultimately ended in "mass layoffs, hiring freezes, and funding cuts", including one driven by the Space Race of the late 1950s and 1960s, which he wrote led to "a bust of serious magnitude in the 1970s." [ 137 ] IEEE Spectrum contributing editor Robert N. Charette echoed these sentiments in the 2013 article "The STEM Crisis Is a Myth", also noting that there was a "mismatch between earning a STEM degree and having a STEM job" in the United States, with only around one quarter of STEM graduates working in STEM fields, while less than half of workers in STEM fields have a STEM degree. [ 138 ] Economics writer Ben Casselman , in a 2014 study of post-graduation earnings in the United States for FiveThirtyEight , wrote that, based on the data, science should not be grouped with the other three STEM categories, because, while the other three generally lead to high-paying jobs, "many sciences, particularly the life sciences , pay below the overall median for recent college graduates." [ 139 ] A 2017 article from the University of Leicester concluded that "maintaining accounts of a ‘crisis’ in the supply of STEM workers has usually been in the interests of industry, the education sector and government, as well as the lobby groups that represent them. Concerns about a shortage have meant the allocation of significant additional resources to the sector whose representatives have, in turn, become powerful voices in advocating for further funds and further investment." [ 140 ] A 2022 report from Rutgers University stated: "In the United States, the STEM crisis theme is a perennial policy favorite, appearing every few years as an urgent concern in the nation’s competition with whatever other nation is ascendant, or as the cause of whatever problem is ailing the domestic economy. And the solution is always the same: increase the supply of STEM workers through expanding STEM education. Time and again, serious and empirically grounded studies find little evidence of any systemic failures or an inability of market responses to address whatever supply is required to meet workforce needs." [ 141 ] A study of the UK job market, published in 2022, found problems similar to those reported earlier for the USA: "It is not clear that having a degree in the sciences, rather than in other subjects, provides any sort of advantage in terms of short- or long-term employability... While only a minority of STEM graduates ever work in highly-skilled STEM jobs, we identified three particular characteristics of the STEM labour market that may present challenges for employers: STEM employment appears to be predicated on early entry to the sector; a large proportion of STEM graduates are likely to never work in the sector; and there may be more movement out of HS STEM positions by older workers than in other sectors... " [ 142 ]
https://en.wikipedia.org/wiki/Science,_technology,_engineering,_and_mathematics
Science, technology, society and environment ( STSE ) education originates from the science, technology and society (STS) movement in science education . This is an outlook on science education that emphasizes the teaching of scientific and technological developments in their cultural, economic, social and political contexts. In this view of science education, students are encouraged to engage in issues pertaining to the impact of science on everyday life and to make responsible decisions about how to address such issues (Solomon, 1993 and Aikenhead, 1994). The STS movement has a long history in science education reform, and embraces a wide range of theories about the intersection between science, technology and society (Solomon and Aikenhead, 1994; Pedretti 1997). Over the last twenty years, the work of Peter Fensham, the noted Australian science educator, has been considered a major contribution to reforms in science education. Fensham's efforts included giving greater prominence to STS in the school science curriculum (Aikenhead, 2003). The key aim behind these efforts was to ensure the development of a broad-based science curriculum, embedded in the socio-political and cultural contexts in which it was formulated. From Fensham's point of view, this meant that students would engage with different viewpoints on issues concerning the impact of science and technology on everyday life. They would also understand the relevance of scientific discoveries, rather than just concentrate on learning scientific facts and theories that seemed distant from their realities (Fensham, 1985 & 1988). However, although the wheels of change in science education had been set in motion during the late 1970s, it was not until the 1980s that STS perspectives began to gain a serious footing in science curricula, largely in Western contexts (Gaskell, 1982). This occurred at a time when issues such as animal testing , environmental pollution and the growing impact of technological innovation on social infrastructure were beginning to raise ethical, moral, economic and political dilemmas (Fensham, 1988 and Osborne, 2000). There were also concerns among communities of researchers, educators and governments pertaining to the general public's lack of understanding about the interface between science and society (Bodmer, 1985; Durant et al. 1989 and Millar 1996). In addition, alarmed by the poor state of scientific literacy among school students, science educators began to grapple with the quandary of how to prepare students to be informed and active citizens, as well as the scientists, medics and engineers of the future (e.g. Osborne, 2000 and Aikenhead, 2003). Hence, STS advocates called for reforms in science education that would equip students to understand scientific developments in their cultural, economic, political and social contexts. This was considered important in making science accessible and meaningful to all students—and, most significantly, engaging them in real world issues (Fensham, 1985; Solomon, 1993; Aikenhead, 1994 and Hodson 1998). There is no uniform definition for STSE education. As mentioned before, STSE is a form of STS education, but it places greater emphasis on the environmental consequences of scientific and technological developments. In STSE curricula, scientific developments are explored from a variety of economic, environmental, ethical, moral, social and political perspectives (Kumar and Chubin, 2000 & Pedretti, 2005).
At best, STSE education can be loosely defined as a movement that attempts to bring about an understanding of the interface between science, society, technology and the environment. A key goal of STSE is to help students realize the significance of scientific developments in their daily lives and to foster a voice of active citizenship (Pedretti & Forbes, 2000). Over the last two decades, STSE education has taken a prominent position in the science curricula of different parts of the world, such as Australia, Europe, the UK and the USA (Kumar & Chubin, 2000). In Canada, the inclusion of STSE perspectives in science education has largely come about as a consequence of the Common Framework of science learning outcomes in the Pan-Canadian Protocol for Collaboration on School Curriculum (1997) [2] . This document highlights a need to develop scientific literacy in conjunction with understanding the interrelationships between science, technology and environment. According to Osborne (2000) and Hodson (2003), scientific literacy can be perceived in four different ways. However, many science teachers find it difficult, and even damaging to their professional identities, to teach STSE as part of science education, because traditional science focuses on established scientific facts rather than philosophical, political and social issues, an emphasis that many educators feel devalues the scientific curriculum. [ 1 ] In the context of STSE education, the goals of teaching and learning are largely directed towards engendering cultural and democratic notions of scientific literacy. Advocates of STSE education argue that, in order to broaden students' understanding of science and better prepare them for active and responsible citizenship in the future, the scope of science education needs to go beyond learning about scientific theories, facts and technical skills. Therefore, the fundamental aim of STSE education is to equip students to understand and situate scientific and technological developments in their cultural, environmental, economic, political and social contexts (Solomon & Aikenhead, 1994; Bingle & Gaskell, 1994; Pedretti 1997 & 2005). For example, rather than learning about the facts and theories of weather patterns, students can explore them in the context of issues such as global warming. They can also debate the environmental, social, economic and political consequences of relevant legislation, such as the Kyoto Protocol . This is thought to provide a richer, more meaningful and relevant canvas against which scientific theories and phenomena relating to weather patterns can be explored (Pedretti et al. 2005). In essence, STSE education aims to develop a range of such skills and perspectives. [ 2 ] Since STSE education has multiple facets, there are a variety of ways in which it can be approached in the classroom. This offers teachers a degree of flexibility, not only in the incorporation of STSE perspectives into their science teaching, but also in integrating other curricular areas such as history, geography, social studies and language arts (Richardson & Blades, 2001). A number of different approaches to STSE education have been described in the literature (Ziman, 1994 & Pedretti, 2005). Although advocates of STSE education keenly emphasize its merits in science education, they also recognize inherent difficulties in its implementation.
The opportunities and challenges of STSE education have been articulated by Hughes (2000) and Pedretti & Forbes (2000) at several levels, including the following: Values & beliefs: The goals of STSE education may challenge the values and beliefs of students and teachers—as well as conventional, culturally entrenched views on scientific and technological developments. Students gain opportunities to engage with, and deeply examine, the impact of scientific development on their lives from a critical and informed perspective. This helps to develop students' analytical and problem-solving capacities, as well as their ability to make informed choices in their everyday lives. As they plan and implement STSE education lessons, teachers need to provide a balanced view of the issues being explored. This enables students to formulate their own thoughts, independently explore other opinions and have the confidence to voice their personal viewpoints. Teachers also need to cultivate safe, non-judgmental classroom environments, and must be careful not to impose their own values and beliefs on students. Knowledge & understanding: The interdisciplinary nature of STSE education requires teachers to research and gather information from a variety of sources. At the same time, teachers need to develop a sound understanding of issues from various disciplines—philosophy, history, geography, social studies, politics, economics, environment and science. This is so that students’ knowledge base can be appropriately scaffolded, enabling them to engage effectively in discussions, debates and decision-making processes. This ideal raises difficulties: most science teachers are specialized in a particular field of science, and lack of time and resources may affect how deeply teachers and students can examine issues from multiple perspectives. Nevertheless, a multi-disciplinary approach to science education enables students to gain a more rounded perspective on the dilemmas, as well as the opportunities, that science presents in our daily lives. Pedagogic approach: Depending on teacher experience and comfort levels, a variety of pedagogic approaches based on constructivism can be used to stimulate STSE education in the classroom. The pedagogies used in STSE classrooms need to take students through different levels of understanding to develop their abilities and confidence to critically examine issues and take responsible action. Teachers are often faced with the challenge of transforming classroom practices from task-oriented approaches to those which focus on developing students' understanding and transferring agency for learning to students (Hughes, 2000). A range of pedagogic approaches for STSE education has been described in the literature (e.g. Hodson, 1998; Pedretti & Forbes 2000; Richardson & Blades, 2001). STSE education draws on holistic ways of knowing, learning, and interacting with science. A recent movement in science education has bridged science and technology education with society and environment awareness through critical explorations of place. The project Science and the city, for example, took place during the school years 2006-2007 and 2007-2008, involving an intergenerational group of researchers: 36 elementary students (grades 6, 7 & 8) working with their teachers, 6 university-based researchers, parents and community members.
The goal was to come together, learn science and technology together, and use this knowledge to provide meaningful experiences that make a difference to the lives of friends, families, communities and environments that surround the school. The collective experience allowed students, teachers and learners to foster imagination, responsibility, collaboration, learning and action. The project has led to a series of publications. One collective publication, authored by the students, teachers and researchers together, is Science and the city: A Field Zine , a community zine that offered a format to share possibilities afforded by participatory practices that connect schools with local knowledge, people and places. Alsop, S., Ibrahim, S., & Blimkie, M. (Eds.) (2008) Science and the city: A Field Zine. Toronto: Ontario. [An independent publication written by students and researchers and distributed free to research, student and parent communities]. Tokyo Global Engineering Corporation is an education-services organization that provides capstone STSE education programs free of charge to engineering students and other stakeholders. These programs are intended to complement—but not to replace—STSE coursework required by academic degree programs of study. The programs are educational opportunities, so students are not paid for their participation. All correspondence among members is conducted via e-mail, and all meetings are held via Skype, with English as the language of instruction and publication. Students and other stakeholders are never asked to travel or leave their geographic locations, and are encouraged to publish organizational documents in their personal, primary languages when English is a secondary language. The Council of Ministers of Education, Canada, website is a useful resource for understanding the goals and position of STSE education in Canadian curricula. A number of books are available for information on STS/STSE education, teaching practices in science and issues that may be explored in STS/STSE lessons.
https://en.wikipedia.org/wiki/Science,_technology,_society_and_environment_education
The Science Based Targets initiative ( SBTi ) is a collaboration between its founding partners, CDP , the United Nations Global Compact , World Resources Institute (WRI), the World Wide Fund for Nature (WWF) and the We Mean Business Coalition. [ 1 ] As of 2025, over 10,000 companies have set or committed to set science-based climate targets validated by SBTi. [ 2 ] The Science Based Targets initiative was established in 2015 [ 3 ] to help companies set emission reduction targets in line with climate science [ 4 ] and the goals of the Paris Agreement. [ 5 ] It is funded by the IKEA Foundation , Amazon , the Bezos Earth Fund , the We Mean Business coalition, the Rockefeller Brothers Fund and the UPS Foundation. [ 6 ] In October 2021, SBTi developed and launched the world's first net zero standard, providing the framework and tools for companies to set science-based net zero targets and limit global temperature rise to 1.5 °C above pre-industrial levels. [ 7 ] [ 8 ] Best practice as identified by SBTi is for companies to adopt transition plans covering scope 1, 2 and 3 emissions , set out short-term milestones, ensure effective board-level governance and link executive compensation to the company's adopted milestones. [ 3 ] SBTi is a UK charity with a commercial subsidiary, SBTi Services, which validates climate targets set by companies as science-based targets for a fee. As of 2025, SBTi operates without a central office and has 200 staff who primarily work remotely, including part-time employees. This has led to challenges in meeting the demands of a growing client base of companies seeking SBTi validation for climate targets. [ 2 ] SBTi has developed separate sector-specific methodologies, frameworks and requirements for different industries; as of September 2024, sector guidance [ 9 ] is available for a range of sectors. In April 2024 the SBTi Board of Trustees released a statement [ 10 ] setting out an intention to permit the use of environmental attribute certificates (EACs) for abatement purposes against Scope 3 emissions reduction targets. SBTi had not previously permitted the use of EACs, owing to the difficulties faced in tracing, measuring and validating their impact. [ 11 ] The Bezos Earth Fund, a major funder of the SBTi, exerted influence on SBTi board members to relax the organization's position on carbon offsets . [ 12 ] [ 13 ] [ 14 ] The statement led to a response letter [ 15 ] signed by various teams within the SBTi, and to media speculation about the policy change. The counterargument set out in the response was that carbon offsets are incompatible with the Paris Agreement. [ 16 ] Launched in September 2022, the SBTi's Forestry, Land and Agriculture (FLAG) guidance [ 17 ] allows companies to claim the achievement of their emission reduction targets through ‘insetting’, breaking from the long-held SBTi position that emission reduction targets should only be achieved through emission reductions. [ 18 ] Insetting is a business-driven concept and not a term defined in international standards and guidelines such as ISO 14050 Environmental Vocabulary and IWA 42 Net Zero Guidelines. [ 19 ] On 2 July 2024, CEO Luiz Amaral announced that he would step down for personal reasons. [ 20 ] In January 2025, David Kennedy was announced as the new CEO. [ 2 ] [ 21 ]
https://en.wikipedia.org/wiki/Science_Based_Targets_initiative
The term Science DMZ refers to a computer subnetwork that is structured to be secure, but without the performance limits that would otherwise result from passing data through a stateful firewall . [ 1 ] [ 2 ] The Science DMZ is designed to handle high-volume data transfers, typical of scientific and high-performance computing , by creating a special DMZ to accommodate those transfers. [ 3 ] It is typically deployed at or near the local network perimeter, and is optimized for a moderate number of high-speed flows, rather than for general-purpose business systems or enterprise computing . [ 4 ] The term Science DMZ was coined by collaborators at the US Department of Energy's ESnet in 2010. [ 5 ] A number of universities and laboratories have deployed or are deploying a Science DMZ. In 2012 the National Science Foundation funded the creation or improvement of Science DMZs on several university campuses in the United States. [ 6 ] [ 7 ] [ 8 ] The Science DMZ [ 9 ] is a network architecture to support Big Data . The so-called information explosion has been discussed since the mid-1960s, and more recently the term data deluge [ 10 ] has been used to describe the exponential growth in many types of data sets. These huge data sets often need to be copied from one location to another over the Internet. Moving data sets of this magnitude in a reasonable amount of time should be possible on modern networks: for example, it should take less than 4 hours to transfer 10 terabytes of data over a 10 Gigabit Ethernet network path, assuming disk performance is adequate (see the arithmetic sketched below). [ 11 ] The problem is that this requires networks that are free from packet loss and from middleboxes, such as traffic shapers or firewalls, that slow network performance. Most businesses and other institutions use a firewall to protect their internal network from malicious attacks originating from outside. All traffic between the internal network and the external Internet must pass through the firewall, which discards traffic likely to be harmful. A stateful firewall tracks the state of each logical connection passing through it, and rejects data packets inappropriate for the state of the connection. For example, a website would not be allowed to send a page to a computer on the internal network unless the computer had requested it. This requires the firewall to keep track of the pages recently requested, and to match requests with responses. A firewall must also analyze network traffic in much more detail than other networking components, such as routers and switches. Routers only have to deal with the network layer , but firewalls must also process the transport and application layers. All this additional processing takes time and limits network throughput. While routers and most other networking components can handle speeds of 100 gigabits per second (Gbit/s), firewalls limit traffic to about 1 Gbit/s, [ 12 ] which is unacceptable for passing large amounts of scientific data. Modern firewalls can leverage custom hardware ( ASIC ) to accelerate traffic and inspection in order to achieve higher throughput. This can present an alternative to Science DMZs and allows in-place inspection through existing firewalls, as long as unified threat management (UTM) inspection is disabled.
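The 4-hour figure above is simple arithmetic. The following minimal Python sketch reproduces it under the idealized assumption of a loss-free path running at line rate; real transfers are slower because of protocol overhead, packet loss and disk I/O, which is why the article allows a margin of up to 4 hours.

```python
# Idealized transfer-time arithmetic for a loss-free path at line rate.
# Real-world transfers are slower due to protocol overhead, packet loss
# and disk I/O, so treat this as a lower bound.

def ideal_transfer_hours(data_terabytes: float, link_gbps: float) -> float:
    """Hours needed to move data_terabytes over a link_gbps path at line rate."""
    bits = data_terabytes * 1e12 * 8      # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)    # bits / (bits per second)
    return seconds / 3600

# 10 TB over 10 Gigabit Ethernet: about 2.2 hours at line rate,
# comfortably under the 4-hour figure once overhead is allowed for.
print(f"{ideal_transfer_hours(10, 10):.1f} hours")
```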
While a stateful firewall may be necessary for critical business data, such as financial records, credit cards, employment data, student grades and trade secrets, science data requires less protection, because copies usually exist in multiple locations and there is less economic incentive to tamper with it. [ 4 ] A firewall must restrict access to the internal network but allow external access to services offered to the public, such as web servers on the internal network. This is usually accomplished by creating a separate internal network called a DMZ, a play on the term "demilitarized zone." External devices are allowed to access devices in the DMZ. Devices in the DMZ are usually maintained more carefully to reduce their vulnerability to malware. Hardened devices are sometimes called bastion hosts . The Science DMZ takes the DMZ idea one step further, by moving high-performance computing into its own DMZ. [ 13 ] Specially configured routers pass science data directly to or from designated devices on an internal network, thereby creating a virtual DMZ. Security is maintained by setting access control lists (ACLs) in the routers to allow traffic only to and from particular sources and destinations (a conceptual sketch follows below). Security is further enhanced by using an intrusion detection system (IDS) to monitor traffic and look for indications of attack. When an attack is detected, the IDS can automatically update router tables, resulting in what some call a Remotely Triggered Black Hole (RTBH). [ 1 ] The Science DMZ provides a well-configured location for the networking, systems, and security infrastructure that supports high-performance data movement. In data-intensive science environments, data sets have outgrown portable media, and the default configurations used by many equipment and software vendors are inadequate for high-performance applications. The components of the Science DMZ are specifically configured to support high-performance applications and to facilitate the rapid diagnosis of performance problems. Without dedicated infrastructure, it is often impossible to achieve acceptable performance: simply increasing network bandwidth is usually not enough, as performance problems are caused by many factors, ranging from underpowered firewalls to dirty fiber optics to untuned operating systems. The Science DMZ is the codification of a set of shared best practices—concepts developed over the years by the scientific networking and systems community. The Science DMZ model describes the essential components of high-performance data transfer infrastructure in a way that is accessible to non-experts and scalable across any size of institution or experiment. The model defines a set of primary components, along with several optional ones.
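The ACL mechanism described above is, at its core, an allowlist over (source, destination) pairs. The Python sketch below illustrates the concept only; it is not real router configuration syntax, and all addresses in it are hypothetical documentation ranges.

```python
# Conceptual illustration of Science DMZ-style ACL filtering: traffic is
# permitted only between designated endpoints; everything else is dropped.
# Not router configuration syntax; addresses are RFC 5737 documentation
# ranges, used here purely as placeholders.
from ipaddress import ip_address, ip_network

# Allowlist of (remote collaborator network, local data transfer node).
ALLOWED_FLOWS = [
    (ip_network("198.51.100.0/24"), ip_address("203.0.113.10")),
    (ip_network("192.0.2.0/24"), ip_address("203.0.113.11")),
]

def permit(src: str, dst: str) -> bool:
    """Return True if a packet from src to dst matches an allowlist entry."""
    s, d = ip_address(src), ip_address(dst)
    return any(s in net and d == dtn for net, dtn in ALLOWED_FLOWS)

print(permit("198.51.100.7", "203.0.113.10"))  # True: designated flow
print(permit("198.51.100.7", "203.0.113.99"))  # False: dropped by default
```

In a real deployment the same allow-by-exception policy is expressed in the routers' ACLs, complemented by the IDS-driven black-holing described above rather than by host-level checks.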
https://en.wikipedia.org/wiki/Science_DMZ_Network_Architecture
Science Translational Medicine is an interdisciplinary biomedical journal established in October 2009 by the American Association for the Advancement of Science . [ 1 ] It publishes basic , biomedical , translational , and clinical research about human diseases . [ 2 ] [ 3 ] According to Web of Science , the journal has a 2023 impact factor of 15.8. [ 4 ] [ 5 ] The journal has published articles covering novel tools and technologies that aid in investigating the fundamental mechanisms underlying health and disease, as well as the variability of drug responses in humans, precision medicine, and regulatory science. The journal is abstracted and indexed by the major services with a focus on medicine and biology , including the Science Citation Index [ 6 ] and Web of Science , Index Medicus / MEDLINE / PubMed , [ 7 ] and Scopus . [ 8 ]
https://en.wikipedia.org/wiki/Science_Translational_Medicine
The Science and Engineering Research Council ( SERC ) and its predecessor the Science Research Council ( SRC ) were the UK agencies in charge of publicly funded scientific and engineering research activities, including astronomy, biotechnology and biological sciences, space research and particle physics, between 1965 and 1994. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The SERC also had oversight of the Royal Greenwich Observatory , the Royal Observatory, Edinburgh , the Rutherford Appleton Laboratory and the Daresbury Laboratory . From its formation in 1965 until 1981 it was known as the Science Research Council (SRC). The SRC had been formed in 1965 as a result of the Trend Committee enquiry into the organisation of civil science in the UK. Previously the Minister for Science had been responsible for various research activities in the Department of Scientific and Industrial Research (DSIR) and, more loosely, for a variety of agencies concerned with the formulation of civil scientific policy. One of the main problems addressed by the enquiry was how to decide the priorities for government funding across all areas of scientific research. Previously this task had been the responsibility of the Treasury, without direct scientific advice. The other Research Councils formed in 1965 were the Natural Environment Research Council (NERC) and the Social Science Research Council (SSRC). These bodies joined the Medical Research Council (MRC), which had existed since 1920. In 1981, to reflect the increased emphasis on engineering research, the SRC was renamed the Science and Engineering Research Council. [ 6 ] In 1994, the new Director General of Research Councils was charged with the reorganization of the four existing research councils, and this resulted in the SERC being split into three bodies: the Engineering and Physical Sciences Research Council (EPSRC), the Particle Physics and Astronomy Research Council (PPARC) and the Biotechnology and Biological Sciences Research Council (BBSRC). The two Observatories were moved under the aegis of PPARC, and the Laboratories initially into EPSRC and later into their own organization, the Council for the Central Laboratory of the Research Councils (CCLRC). In 2007 CCLRC and PPARC were merged to form the Science and Technology Facilities Council (STFC), with responsibility for nuclear physics being transferred from EPSRC to STFC. [ citation needed ]
https://en.wikipedia.org/wiki/Science_and_Engineering_Research_Council
Science and Technology of Advanced Materials is a peer-reviewed scientific journal in materials science that was established in 2000. In 2008 it became an open access journal through the sponsorship of the National Institute for Materials Science (NIMS). The journal is international; it is managed by NIMS, which was joined in 2014 by the Swiss Federal Laboratories for Materials Science and Technology (Empa). [ 1 ] STAM is currently an electronic journal; its articles are published continuously online. Its sister journal, STAM Methods , was launched in 2021. [ 2 ] The journal covers all aspects of materials science, including theoretical analysis, synthesis and processing, phase and structure analyses, characterization, properties, engineering, and applications. It covers advances in research on solids, liquids and colloids, with emphasis on the interdisciplinary nature of materials science and on issues at the forefront of the field, such as nano-, bio-, eco- and energy materials. Since March 2014, STAM articles have been published under a Creative Commons CC BY license, while earlier content is either copyrighted or released under a non-commercial CC BY-NC-SA license. [ 3 ] Publication in STAM is free for authors until 31 March 2025. [ 4 ] STAM is indexed by major databases including the Astrophysics Data System , Chemical Abstracts Service , Inspec , PubMed , Science Citation Index , Scopus and Web of Science . According to the Journal Citation Reports , STAM has a 2023 impact factor of 7.4. [ 5 ] STAM has published articles and editorials by the Nobel laureates Ei-ichi Negishi , [ 6 ] Heinrich Rohrer [ 7 ] and Dan Shechtman . [ 8 ]
https://en.wikipedia.org/wiki/Science_and_Technology_of_Advanced_Materials
Pacific Island economies are mostly dependent on natural resources, with a tiny manufacturing sector and no heavy industry. In Fiji and Papua New Guinea, for instance, there is a need to adopt automated machinery and design in forestry and to improve training, in order to add value to exports. [ 1 ] Papua New Guinea experienced the strongest economic growth between 2005 and 2013 (60%), during the commodities boom. Even during the global financial and economic crisis of 2008-2009, its economy grew by 13%. Vanuatu saw the next strongest growth (35%) over this period, including 10% growth in 2008-2009. Growth was more modest in the Marshall Islands (a cumulative 19%), Tuvalu (16%), Samoa (15%), Kiribati (13%), Fiji (12%) and Tonga (8%). The economies of the Federated States of Micronesia and Palau actually shrank over this nine-year period. Samoa, the Marshall Islands and Fiji all experienced recession in 2008 and 2009. [ 1 ] The trade balance is more skewed towards imports than exports, with the exception of Papua New Guinea, which has a mining industry. There is growing evidence that Fiji is becoming a re-export hub in the Pacific; between 2009 and 2013, its re-exports grew threefold, accounting for more than half of all exports by Pacific Island states. Samoa can also expect to become more integrated in global markets from now on, having joined the World Trade Organization in 2012. Fiji, Papua New Guinea and the Solomon Islands are also members of the World Trade Organization. [ 1 ] Pacific Island states account for a very small share of the South Pacific's high-tech exports. These exports receded between 2008 and 2013, by 46% for Fiji and by 41% for Samoa, according to the United Nations' Comtrade database. Fiji's high-tech exports were down from US$5.0 million to US$2.7 million and Samoa's from US$0.3 million to US$0.2 million. [ 1 ] In 2013, the majority of Fijian high-tech exports were pharmaceutical products (84%), whereas Samoa exported mainly scientific instruments (86%) and Kiribati non-electrical machinery (79%). Armaments made up 92% of high-tech exports from the Solomon Islands. [ 1 ] By 2013, one in three inhabitants of Fiji, Tonga and Tuvalu had Internet access. Growth in Internet access since 2010 has levelled out the disparity between countries to some extent, although connectivity remained extremely low in Vanuatu (11%), the Solomon Islands (8%) and Papua New Guinea (7%) in 2013. [ 1 ] Advances in mobile phone technology have clearly been a factor in the provision of Internet access to remote areas. The flow of knowledge and information through the Internet is likely to play an important role in the more effective dissemination and application of knowledge across the vast Pacific Island nations. [ 1 ] Mobile Internet penetration was the lowest (18%) of any region in the world in 2018, but this figure was expected to double by 2023. [ 2 ] In this remote region, high-speed Internet access depends on laying expensive undersea cables. Recent links have been created for Papua New Guinea (2020), the Solomon Islands (2020) and Tonga (2018). [ 3 ] [ 4 ] Pacific countries are reshaping their social and economic environments to meet digital demands. To benefit from modern digital and other technological tools, regulatory bodies have adopted social media platforms and messaging systems in official protocols to disseminate disaster warnings in Samoa, Tonga, Fiji and Niue, as well as weather forecasts and information on climate change.
[ 5 ] In 2015, the Pacific Islands Forum Leaders established an ICT Working Group made up of CROP agencies and co-ordinated by the University of the South Pacific . However, no regional mechanism has since emerged in this area. [ 4 ] In the Boe Declaration on Regional Security , produced during the 2018 Pacific Islands Forum, Pacific leaders expanded the concept of security to include cybersecurity . Efforts are under way to assess cybersecurity capacity in Polynesia, Melanesia and Micronesia, in tandem with the United Nations International Telecommunication Union and other partners. Samoa was the first to develop a National Cyber Security Strategy 2016–2021 . [ 4 ] Both the largest and smallest Pacific nations acknowledge that taking a regional approach to science and technology offers them greater opportunities for institutional development. This approach is encapsulated in the Framework for Pacific Regionalism (2014). All 14 nations have mandated the agencies attached to the Council of Regional Organisations of the Pacific (CROP) to provide technical backstopping. CROP agencies partially fulfill the role that a science council might play in other regions. However, none of these agencies has a specific mandate or policy for science and technology. [ 4 ] Pacific Island states have established a number of regional bodies to address technological issues for sectoral development. Examples are the agencies of the Council of Regional Organisations of the Pacific , such as the Pacific Community (SPC), the Pacific Islands Forum Secretariat and the Secretariat of the Pacific Regional Environment Programme. The Ministers of Education from Pacific Island countries signed a Ministerial communiqué on Pacific Science, Technology and Innovation in 2017, in which they committed to developing regional and national STI policies and roadmaps. However, no policy or roadmap has since been published, for want of resources. [ 4 ] The 2014 Small Island Developing States (SIDS) Accelerated Modalities of Action Pathway (SAMOA Pathway) identified science and technology as being critical to SIDS’ sustainable development. The need for research is being recognized at the regional level. The Pacific Community Centre for Ocean Science was established in New Caledonia in 2015, hosted by SPC. Construction of the Pacific Climate Change Centre was completed in Apia, Samoa in 2019 (see below). The establishment of the Pacific–Europe Network for Science, Technology and Innovation (PACE-Net Plus) goes some way towards filling the void in science policy, at least temporarily. Funded by the European Commission within its Seventh Framework Programme for Research and Innovation (2007–2013), this project spanned the period 2013–2016 and thus overlapped with the European Union's Horizon 2020 programme. [ 1 ] The objectives of PACE-Net Plus are to reinforce the dialogue between the Pacific region and Europe in science, technology and innovation; to support biregional research and innovation through calls for research proposals; and to promote scientific excellence and industrial and economic competitiveness. Ten of its 16 members come from the Pacific region and the remainder from Europe.
[ 1 ] The Pacific partners are the Australian National University, Montroix Pty Ltd (Australia), the University of the South Pacific, the Institut Malardé in French Polynesia, the National Centre for Technological Research into Nickel and its Environment in New Caledonia, the Pacific Community, Landcare Research Ltd in New Zealand, the University of Papua New Guinea, the National University of Samoa and the Vanuatu Cultural Centre. [ 1 ] The other six partners are: the Association of Commonwealth Universities, the Institut de recherche pour le développement in France, the Technical Centre for Agricultural and Rural Cooperation (a joint international institution of the African, Caribbean and Pacific Group of States and the European Union), the Sociedade Portuguesa de Inovação, the United Nations Industrial Development Organization and the Leibniz Centre for Tropical Marine Ecology in Germany. PACE-Net Plus focuses on three societal challenges. [ 1 ] PACE-Net Plus has organized a series of high-level policy dialogue platforms, alternately in the Pacific region and in Brussels, the headquarters of the European Commission . These platforms bring together key government and institutional stakeholders in both regions around STI issues. [ 1 ] A conference held in Suva (Fiji) in 2012 under the umbrella of PACE-Net Plus produced recommendations for a strategic plan for research, innovation and development in the Pacific. The conference report, published in 2013, identified R&D needs in the Pacific in seven areas. [ 1 ] Noting the general absence of regional and national policies and plans for science, technology and innovation in the Pacific, the PACE-Net Plus conference established the Pacific Islands University Research Network to support intra- and inter-regional knowledge creation and sharing and to prepare succinct recommendations for the development of a regional policy framework for science, technology and innovation. [ 1 ] This formal research network will complement the Fiji-based University of the South Pacific , which has campuses in other Pacific Island countries. [ 1 ] It was intended for the policy role of the Pacific Islands University Research Network to be informed by evidence gleaned from measuring capability in science, technology and innovation, but the absence of data presents a formidable barrier. As of 2015, only Fiji had recent data on expenditure on research and development (R&D) and there were no recent data on researchers and technicians for any of the developing Pacific Island countries. [ 1 ] Without relevant data, it will be difficult for developing Pacific Island states to monitor their progress towards Sustainable Development target 9.5 , namely: Enhance scientific research, upgrade the technological capabilities of industrial sectors in all countries, in particular developing countries, including, by 2030, encouraging innovation and substantially increasing the number of research and development workers per 1 million people and public and private research and development spending . The two indicators chosen by the United Nations to measure progress are research and development expenditure as a proportion of GDP (9.5.1) and researchers (in full-time equivalent) per million inhabitants (9.5.2); both are simple ratios, as the sketch below illustrates. Efforts to collate and co-ordinate regional and national data are growing. Such efforts include the PRISM database from the SPC Statistics for Development Division [ 6 ] and the national and regional environmental data portals created by countries with the support of the Inform Project. [ 7 ]
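Both indicators are straightforward to compute once the underlying statistics exist. The minimal Python sketch below shows the arithmetic; the input figures are hypothetical, chosen only to mirror the order of magnitude of a small island economy (the 0.15% output happens to match the GERD/GDP ratio reported for Fiji further on).

```python
# SDG target 9.5 indicators as simple ratios.
# Input figures are hypothetical, for illustration only.

def gerd_to_gdp_ratio(gerd_usd: float, gdp_usd: float) -> float:
    """Indicator 9.5.1: R&D expenditure (GERD) as a percentage of GDP."""
    return 100 * gerd_usd / gdp_usd

def researchers_per_million(researchers_fte: float, population: int) -> float:
    """Indicator 9.5.2: full-time-equivalent researchers per million inhabitants."""
    return researchers_fte / (population / 1_000_000)

# A hypothetical economy: US$6 million GERD, US$4 billion GDP,
# 120 FTE researchers, 900,000 inhabitants.
print(f"{gerd_to_gdp_ratio(6e6, 4e9):.2f}% of GDP")               # 0.15% of GDP
print(f"{researchers_per_million(120, 900_000):.0f} per million")  # 133 per million
```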
Fiji, Papua New Guinea and Samoa all consider education to be one of the key policy tools for driving science, technology and innovation, as well as modernization. Fiji, in particular, has made a determined effort to revisit existing policies, rules and regulations in this sector. The Fijian government allocates a larger portion of its national budget to education than any other Pacific Island country (4.2% of GDP in 2011), although this is down from 6% of GDP in 2000. The proportion allocated to higher education (0.5% of GDP) amounts to 13% of the public education budget. Scholarship schemes like National Toppers, introduced in 2014, and the availability of student loans have made higher education attractive and rewarding in Fiji. [ 1 ] According to an internal investigation into the choice of disciplines in school-leaving examinations (year 13), Fijian students have shown a greater interest in science since 2011. A similar trend can be observed in enrolment figures at all three Fijian universities. [ 1 ] Many Pacific Island countries take Fiji as a benchmark for education. The country draws education leaders from other Pacific Island countries for training and, according to the Ministry of Education, teachers from Fiji are in great demand in these countries. [ 1 ] One important initiative has been the creation of the Fiji Higher Education Commission (FHEC) in 2010, the regulatory body in charge of tertiary education in Fiji. FHEC has embarked on registration and accreditation processes for tertiary-level education providers to improve the quality of higher education in Fiji. In 2014, FHEC allocated research grants to universities with a view to enhancing the research culture among faculty. [ 1 ] Fiji is the only developing Pacific Island country with recent data for gross domestic expenditure on research and development (GERD). The national Bureau of Statistics cites a GERD/GDP ratio of 0.15% in 2012. Private-sector research and development (R&D) is negligible. Between 2007 and 2012, government investment in R&D tended to favour agriculture. Scientists publish much more in the geosciences and medical sciences than in the agricultural sciences, however. [ 1 ] Food security has been given high priority in the Fiji 2020 Agriculture Sector Policy , as part of a shift from subsistence to commercial agriculture and the agro-processing of root crops, tropical fruits, vegetables, spices, horticulture and livestock. [ 1 ] In 2013, the Ministry of Agriculture revived Fiji’s Agricultural Journal , which had been dormant for 17 years. [ 1 ] In 2007, agriculture and primary production accounted for just under half of government expenditure on R&D, according to the Fijian National Bureau of Statistics. By 2012, this had risen to almost 60%. Scientists publish much more in the field of geosciences than in agriculture, though. Between 2008 and 2014, agriculture accounted for only 11 of Fiji's 460 articles catalogued in Thomson Reuters' Web of Science (Science Citation Index Expanded), compared to 85 articles in the geosciences. [ 1 ] The rise in government spending on agricultural research has come at the expense of research in education, which dropped to 35% of total research spending between 2007 and 2012.
Over the six years to 2012, government expenditure on health research remained fairly constant but low in Fiji, at about 5% of the total for research, according to the Fijian National Bureau of Statistics. [ 1 ] This may explain why the medical sciences accounted for only 72 of Fiji's 460 articles catalogued in Thomson Reuters' Web of Science (Science Citation Index Expanded) between 2008 and 2014. [ 1 ] The Fijian Ministry of Health is seeking to develop endogenous research capacity through the Fiji Journal of Public Health , which it launched in 2012. A new set of guidelines is now in place to help build endogenous capacity in health research through training and access to new technology. The guidelines require that all research projects initiated in Fiji with external bodies demonstrate how the project will contribute to local capacity-building in health research. The desire to ensure that fisheries remain sustainable is fuelling the drive to use science and technology to make the transition to value-added production. The fisheries sector in Fiji is currently dominated by the catch of tuna for the Japanese market. The Fijian government plans to diversify this sector through aquaculture, inshore fisheries and offshore fish products such as sunfish and deep-water snapper. Accordingly, many incentives and concessions are being offered to encourage the private sector to invest in these areas. [ 1 ] Fiji has shown substantial growth in access to Internet and mobile phone services. This trend has been supported by its geographical location, service culture, pro-business policies, English-speaking population and well-connected e-society . Relative to many other South Pacific islands, Fiji has a fairly reliable and efficient telecommunications system, with access to the Southern Cross submarine cable linking New Zealand, Australia and North America. A recent move to establish the University of the South Pacific Stathan ICT Park, the Kalabo ICT economic zone and the ATH technology park in Fiji should boost the ICT support service sector in the Pacific region. [ 1 ] In its Higher Education Plan III 2014–2023 , Papua New Guinea sets out a strategy for transforming tertiary education and R&D through the introduction of a quality assurance system and a programme to overcome its limited R&D capacity. [ 1 ] The National Vision 2050 was adopted in 2009. It has led to the establishment of the Research, Science and Technology Council, which, at its gathering in November 2014, re-emphasized the need to focus on sustainable development through science and technology. [ 1 ] By 2016, the share of GDP invested in research and development measured just 0.03%. [ 4 ] Between 2008 and 2014, 82% of scientific articles from Papua New Guinea concerned the biological and medical sciences. Less than 10% of the country's 517 articles catalogued in Thomson Reuters' Web of Science (Science Citation Index Expanded) focused on the geosciences. [ 1 ] In 2016, women represented 33.2% of the scientists in Papua New Guinea, on a par with the global share. [ 4 ] Professor Teatulohi Matainaho was appointed Chief Science Advisor to Papua New Guinea in 2013. Countries around the Pacific Rim are seeking ways to link their national knowledge base to regional and global advances in science.
One motivation for this greater interconnectedness is the region's vulnerability to geohazards such as earthquakes and tsunamis – the Pacific Rim is not known as the Ring of Fire for nothing. In 2009, Samoa suffered a submarine earthquake of magnitude 8.1, the strongest earthquake recorded that year. The subsequent tsunami caused substantial damage and loss of life in Samoa , American Samoa and Tonga . The need for greater disaster resilience is prompting countries to develop collaboration in the geosciences. [ 1 ] Climate change is a parallel concern, as the Pacific Rim is also one of the regions most vulnerable to rising sea levels and increasingly capricious weather patterns. In March 2015, for instance, much of Vanuatu was flattened by Cyclone Pam . [ 1 ] Climate change seems to be the most pressing environmental issue for Pacific Island countries, as it is already affecting almost all socio-economic sectors. The consequences of climate change can be seen in agriculture, food security, forestry and even in the spread of communicable diseases. Climate change mostly concerns marine issues, such as the growing frequency and severity of storms, rising sea levels and the increased salinity of soils and groundwater. [ 1 ] The Secretariat of the Pacific Community has initiated several activities to tackle problems associated with climate change. These cover a great variety of areas, including fisheries, freshwater, agriculture, coastal zone management, disaster management, energy, traditional knowledge, education, forestry, communication, tourism, culture, health, weather, gender implications and biodiversity. Almost all Pacific Island countries are involved in one or more of these activities. [ 1 ] The Seventh Pacific Islands Leaders Meeting with Japan in 2015 pledged to establish a Pacific Climate Change Centre. Construction of the centre was completed in Apia, Samoa, in 2019. A shared regional asset, the centre has four mutually reinforcing functions: knowledge brokerage; applied research; capacity-building; and innovation to promote climate change adaptation and mitigation . The government of Samoa , the Pacific Regional Environment Programme and the Japan International Cooperation Agency are all collaborating to deliver 12 courses for trainees from all Pacific Island countries and territories by 2022. The centre also houses a research node of Australia's University of Newcastle in partnership with the Pacific Regional Environment Programme; it has offered PhD scholarships since 2018 and hosts an ‘innovation incubator’. Research undertaken at the centre aligns with the four priority areas defined by the Pacific leaders, namely: climate change resilience; ecosystems and biodiversity protection; waste management; and environmental governance. The first major scheme focusing on adaptation to climate change and climate variability dates back to 2009: Pacific Adaptation to Climate Change involves 13 Pacific Island nations, with international funding from the Global Environment Facility , as well as from the US and Australian governments. [ 1 ] Several projects related to climate change are also being co-ordinated by the United Nations Environment Programme , within the Secretariat of the Pacific Regional Environment Programme (SPREP).
The aim of SPREP is to help all members improve their ‘capacity to respond to climate change through policy improvement, implementation of practical adaptation measures, enhancing ecosystem resilience to the impacts of climate change and implementing initiatives aimed at achieving low-carbon development’. [ 1 ] The blueprint for the subregion's sustainable development over the coming decade is the Samoa Pathway , the action plan adopted by countries at the third United Nations Conference on Small Island Developing States in Apia (Samoa) in September 2014. The Samoa Pathway focuses on, inter alia, sustainable consumption and production; sustainable energy, tourism and transportation; climate change; disaster risk reduction; forests; water and sanitation; food security and nutrition; chemical and waste management; oceans and seas; biodiversity; desertification, land degradation and drought; and health and non-communicable diseases. [ 1 ] Forestry is an important economic resource for Fiji and Papua New Guinea. However, forestry in both countries uses low and semi-intensive technological inputs. As a result, product ranges are limited to sawn timber, veneer, plywood, block board, moulding, poles and posts, and wood chips. Only a few finished products are exported. Lack of automated machinery, coupled with inadequately trained local technical personnel, is among the obstacles to introducing automated machinery and design. Policy-makers need to turn their attention to eliminating these barriers, in order for forestry to make a more efficient and sustainable contribution to national economic development. [ 1 ] On average, 10% of the GDP of Pacific Island countries funds imports of petroleum products, but in some cases this figure can exceed 30%. In addition to high fuel transport costs, this reliance on fossil fuels leaves Pacific economies vulnerable to volatile global fuel prices and potential spills by oil tankers. [ 1 ] Consequently, many Pacific Island countries are convinced that renewable energy will play a role in their socio-economic development. In Fiji, Papua New Guinea, Samoa and Vanuatu, renewable energy sources already represent significant shares of the total electricity supply: 60%, 66%, 37% and 15% respectively. Tokelau has even become the first territory in the world to generate 100% of its electricity from renewable sources. [ 1 ] According to the Secretariat of the Pacific Community , renewable energy still represented less than 10% of total energy use in the 22 Pacific Island countries and territories in 2015. The Secretariat observed that, 'while Fiji, Papua New Guinea and Samoa are leading the way with large-scale hydropower projects, there is enormous potential to expand the deployment of other renewable energy options such as solar, wind, geothermal and ocean-based energy sources'. [ 8 ] International development partners are participating in several projects to develop renewable energy in the Pacific Island states. In the Cook Islands, for instance, the Asian Development Bank planned to supply electricity from renewable energy to all inhabited islands by 2020, within the Cook Islands Renewable Energy Chart Implementation Plan for 2012–2020 . New solar photovoltaic power plants with lithium-ion batteries were being built on up to six islands of the Southern Group in 2014.
[ 9 ] The Fiji Rural Electrification Fund will bring affordable solar power and battery storage to 300 rural communities that rely on diesel generators or are without access to electricity. Initiated in 2018 and lasting ten years, this fund is a public–private partnership. [ 4 ] To equip its National Energy Road Map 2016–2030 , Vanuatu approved the National Green Energy Fund in 2016, with the goal of mobilising US$20 million to provide all households with access to electricity (primarily through individual solar systems) and to improve energy efficiency by 2030. In off-grid areas, households' access to electricity increased from 9% in 2015 to 64.4% in 2017. The increase was attributed to investments in imported, plug-in solar home systems, supported by the Vanuatu Rural Electrification Project in 2016. However, the share of renewable energy in electricity generation declined from 29% to 18% over the same period, owing in part to a reduction in the use of biofuels in Vanuatu's largest electricity concession in Port Vila. [ 4 ] In April 2014, Pacific Ministers for Energy and Transport agreed to establish the Pacific Centre for Renewable Energy and Energy Efficiency, 'a first for the Pacific'. The centre will become part of the United Nations Industrial Development Organization 's network of regional Sustainable Energy for All Centres of Excellence, along with centres for the Caribbean Community, the Economic Community of West African States , the Southern African Development Community and the East African Community. [ 8 ] The Pacific Centre for Renewable Energy and Energy Efficiency was established in Tonga in 2016 to advise the private sector on related policy matters, provide capacity-building and promote business investment. [ 4 ] The centre facilitates a financial mechanism offering competitive grants to start-ups, to spur the adoption of renewable energy by the business sector. The centre is part of the Global Network of Regional Sustainable Energy Centres and the SIDS DOCK framework designed to attract international investment in the renewable energy sector. [ 4 ] Efforts are under way to improve countries' capacity to produce, conserve and use renewable energy. For example, the European Union has funded the Renewable Energy in Pacific Island Countries Developing Skills and Capacity programme (EPIC). Since its inception in 2013, EPIC has developed two master's programmes in renewable energy management and helped to establish two Centres of Renewable Energy, one at the University of Papua New Guinea and the other at the University of Fiji . Both centres became operational in 2014 and aim to create a regional knowledge hub for the development of renewable energy. [ 1 ] In February 2014, the European Union and the Pacific Islands Forum Secretariat signed an agreement for a programme on Adapting to Climate Change and Sustainable Energy worth €37.26 million, which will benefit 15 Pacific Island states: the Cook Islands, Fiji, Kiribati, the Marshall Islands, the Federated States of Micronesia, Nauru, Niue, Palau, Papua New Guinea, Samoa, the Solomon Islands, Timor-Leste, Tonga, Tuvalu and Vanuatu.
[ 1 ] Limited freedom of expression and, in some cases, religious conservatism discourage research in certain areas, but the experience of Pacific Island countries shows that sustainable development and a green economy can benefit from the inclusion of traditional knowledge in formal science and technology, as underlined by the Sustainable Development Brief prepared by the Secretariat of the Pacific Community in 2013. [ 1 ] As part of their Nationally Determined Contributions under the Paris Agreement, the Pacific Island countries are building national renewable energy systems. All 14 countries now have energy strategies, although some extend only to 2020. Nearly all place a strong emphasis on electricity generation using renewable resources. [ 4 ] According to the Web of Science, Papua New Guinea had the largest number of publications (110) among Pacific Island states in 2014, followed by Fiji (106). Fijian research was concentrated in a handful of scientific disciplines, such as medical sciences, geosciences and biology. Nine out of ten scientific publications from Papua New Guinea focused on immunology, genetics, biotechnology and microbiology. [ 1 ] This pattern contrasts with the trend observed in the French territories of New Caledonia and French Polynesia, where there was a strong emphasis on geosciences: six to eight times the world average for this field. [ 1 ] More than three-quarters of articles published by scientists from Pacific Island nations between 2008 and 2014 were signed by international collaborators, according to Thomson Reuters' Web of Science, Science Citation Index Expanded. International co-authorship was higher for Papua New Guinea and Fiji (90% and 83% respectively) than for New Caledonia and French Polynesia (63% and 56% respectively). All countries counted North American partners among their top five partners. Fijian research collaboration with North American partners even exceeded that with India, although a large proportion of Fijians are of Indian origin. Research partnerships also involved Australia and countries in Europe. Surprisingly, there was little co-authorship with authors based in France, with the notable exception of Vanuatu. Some Pacific Island states counted their neighbours among their closest scientific collaborators, as in the case of the Solomon Islands and Vanuatu. [ 1 ] Many of the smaller Pacific Island states have a near-100% rate of co-authorship. This extremely high rate can be a double-edged sword. According to the Fijian Ministry of Health, research collaboration often results in an article being published in a reputed journal but gives very little back in terms of strengthening health in Fiji. A new set of guidelines is now in place in Fiji to help build endogenous capacity in health research through training and access to new technology. The new policy guidelines require that all research projects initiated in Fiji with external bodies demonstrate how the project will contribute to local capacity-building in health research. [ 1 ] [Figure: Top five foreign collaborators for South Pacific scientists, 2008–2014. Source: UNESCO Science Report: towards 2030 (2015), Figure 27.8; data from Thomson Reuters' Web of Science, Science Citation Index Expanded, data treatment by Science-Metrix.]
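To make the co-authorship statistics above concrete, here is a minimal sketch of how such a rate is computed: the share of a country's publications listing at least one author affiliation outside the home country. The function is generic; the publication records below are hypothetical examples, not data from the report.

```python
def international_coauthorship_rate(publications, home_country):
    """Share of publications with at least one author affiliation
    outside home_country (each publication is a set of country names)."""
    if not publications:
        return 0.0
    international = sum(
        1 for affiliations in publications
        if any(country != home_country for country in affiliations)
    )
    return international / len(publications)

# Hypothetical records for a small island state:
papers = [
    {"Fiji", "Australia"},       # one foreign partner
    {"Fiji"},                    # domestic-only authorship
    {"Fiji", "USA", "India"},    # several foreign partners
    {"Fiji", "New Zealand"},
]
print(f"{international_coauthorship_rate(papers, 'Fiji'):.0%}")  # prints 75%
```

A near-100% value of this metric, as reported for the smaller island states, simply means that almost no publication is authored by domestic researchers alone.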
Countries are struggling to steer their scientific efforts toward sustainable development, at a time when the United Nations’ Sustainable Development Goals took over from the Millennium Development Goals in 2016. It has been suggested that countries could begin by encouraging their scientists to focus more on attaining local goals for sustainable development, rather than on publishing in high-profile international journals on topics that may be of lesser local relevance. The difficulty with this course of action is that the key metrics for recognizing scientific quality are publications and citation data. The answer to this dilemma most likely lies in the need to recognize the global nature of many local development problems. 'We are dealing with problems without boundaries and we underestimate the scale and nature of their consequences at our collective peril. As global citizens, the research and policy communities have an obligation to collaborate and deliver, so arguing for national priorities seems irrelevant'. [ 10 ] In 2012, the Fijian Ministry of Health launched the Fiji Journal of Public Health , in an attempt to develop endogenous research capacity. In parallel, the Ministry of Agriculture revived Fiji’s Agricultural Journal in 2013, after it had lain dormant for 17 years. In addition, two regional journals were launched in 2009 as a focus for Pacific scientific research, the Samoan Medical Journal and the Papua New Guinea Journal of Research, Science and Technology . This article incorporates text from a free content work. Licensed under CC-BY-SA IGO 3.0. Text taken from UNESCO Science Report: towards 2030 , UNESCO.
https://en.wikipedia.org/wiki/Science_and_technology_in_Pacific_Island_countries
Science and technology in Spain relates to the set of policies, plans and programs carried out by the Spanish Ministry of Science and Innovation [ 1 ] and other organizations aimed at research, development and innovation (R&D&I), as well as the reinforcement of Spanish scientific and technological infrastructures and facilities such as universities and commercial laboratories. Spain has become the ninth scientific power in the world, with 2.5% of the total number of scientific publications, thus surpassing Russia in the world ranking of scientific production [ 2 ] and surpassing Switzerland and Australia in scientific quality. Law 13/1986 on the "Promotion and General Coordination of Scientific and Technical Research" placed science for the first time on the Spanish political agenda, laying the foundations for research and its financing, organization and coordination between the State and the autonomous regions. [ 3 ] That regulation also led to the birth of the national research plan as an "instrument for financing science". [ 3 ] It also meant that public research organizations could create companies, as a response to the lack of companies fostering new technologies and to the disconnect between the science–technology system and the productive sector. [ 4 ] The field is regulated by Law 14/2011, of 1 June 2011, on "Science, Technology and Innovation", which entered into force six months after its publication. [ 5 ] According to the Ninth Final Provision of the Law, some of its provisions have the character of basic legislation. [ 6 ] [ 7 ] This provides a mechanism for national, regional and corporative entities to cooperate and optimise their resources. [ 8 ] Article 21 of the Law provides for the pre- doctoral contract. [ 9 ] In 2020, the Ministry published the prior consultation on the reform of the Science Law. Through the 2021 Budget Law, the legal figure of the state agency was reintroduced for the State Research Agency [ 10 ] (AEI) and the Spanish National Research Council (CSIC), which had been transformed into an autonomous body in 2015. [ 11 ] State agencies have greater independence in the management of their budget. A new Science Law is expected to be approved in 2022. In 2020, Spain invested 1.24% of its GDP in scientific research, well below the European average of 2.12%. [ 12 ] Up to 2020, eight editions of the National R&D&I Plan have been published, covering periods from 1988–1991 to 2007–2020, the latter currently in force. [ 13 ] Each year a Work Program of the National R&D&I Plan is approved, which serves as a short-term programming tool and is managed by the Ministries of Science and Innovation (MICINN); Industry, Tourism and Trade; Education (MEFP); and Environment, Rural and Marine Affairs (MARM). At the end of 2020 the Spanish Government officially presented its Digital Plan 2025, which focuses on the recovery, transformation and resilience of scientific endeavour as a significant contributor to the Spanish economy. The Minister of Digital Development Carme Artigas announced that, starting in late 2022, the country proposes to set up a secure environment where a wide range of companies will be able to test their risky AI systems in socially sensitive areas such as law enforcement, medical diagnostics and educational intervention. [ 14 ] The rules proposed by the European Commission in 2021 will be applied with strict oversight, in compliance with Spain's National Artificial Intelligence Strategy (ENIA).
[ 15 ] "Nanoinventum" is a project led by the University of Barcelona to incorporate science and nanotechnology principles into elementary-school curricula. The main objective is to help young people become familiar with scientific language and to cultivate a passion for nanotechnology and science in general. [ 16 ] Public Research Organizations (OPIs) carry out a large part of the R&D&I activities that are financed with public funds and usually manage some of the programs included in the National Plans. Some OPIs are attached to the Ministry of Science and Innovation, while others are attached to other ministerial departments. The Advisory Committee for Singular Infrastructures (until 2006 called the Advisory Committee for Large Scientific Facilities, CAGIC) [ 17 ] distinguishes between two types of Scientific and Technological Facilities: Large Scientific Facilities (GIC) and Medium Size Facilities (ITM). Their recognition as such is the responsibility of the Interministerial Commission for Science and Technology (CICYT). Singular Scientific and Technical Infrastructure (ICTS) refers to a facility that is unique or exceptional in Spain, that requires a relatively high investment cost, and whose importance for research or development justifies its availability. A number of facilities are currently recognized as Spanish ICTS, [ 18 ] some of them with international participation. A Medium Size Installation is defined as an installation that is unique in Spain, requiring an investment cost of between 3 and 8 million euros and a maintenance cost of more than half a million euros per year. Spain participates in several international scientific programs and organizations. The benefit obtained from this participation is twofold: on the one hand, Spanish scientists can use the facilities for the development of their projects; on the other hand, the business network has the opportunity to win important business contracts. Spain was ranked 28th in the Global Innovation Index in 2024. [ 19 ] In 2020 Pablo Jarillo-Herrero was awarded the Wolf Prize in Physics , often considered a precursor to the Nobel Prize. [ 20 ] In 2009 Juan Ignacio Cirac was nominated for the same prestigious award for his research in quantum computing and quantum optics . [ 21 ] Among the Spanish contributions to chemistry is the research of Francisco Mojica that led to the birth of the CRISPR gene-editing technique, a term he personally coined. Mariano Barbacid is one of the most internationally recognized biochemists ; among his contributions, he managed to isolate the human H-ras oncogene in bladder carcinoma , a major breakthrough in the study of the molecular basis of cancer. He currently directs the Spanish National Cancer Research Centre (CNIO). In 2020, Spain ranked seventh in the world in terms of scientific impact in mathematics. [ 22 ] Internationally, centers such as the Institute of Mathematical Sciences (ICMAT), founded in 2007, and the Basque Center for Applied Mathematics (BCAM), founded in 2008, stand out. Carlos Beltrán solved Smale's 17th problem, finding a probabilistic algorithm with polynomial complexity , and published his solution in 2009. [ 23 ] Michael Servetus described the pulmonary circulation of the blood in the 16th century. In 1801, Francisco Romero performed the first heart operation .
[ 24 ] [ 25 ] Spain has a Nobel laureate in Medicine , Santiago Ramón y Cajal (1906), a pioneer in describing the functioning of the nervous system . Others came close to the award, such as Jaime Ferrán y Clúa , discoverer of the cholera vaccine, which put an end to the epidemic that devastated Spain in the 19th century. He would later develop vaccines for tetanus , typhoid , tuberculosis and rabies . [ 26 ] Also nominated were José Gómez Ocaña and August Pi i Sunyer. [ 27 ] In the 19th century, the Balmis Expedition became the first international health expedition in history, with the aim of bringing the smallpox vaccine to all continents; the disease was causing thousands of children's deaths worldwide. In 1921, surgeon Fidel Pagés developed the epidural anesthesia technique. The engineer Manuel Jalón Corominas invented the disposable hypodermic needle . Today Pedro Cavadas is internationally recognized for his milestones in transplant surgery. The galleon , a Spanish invention, enabled the birth of the Spanish Empire and its conquest of the seas. [ 28 ] Narcís Monturiol , inventor of air-independent propulsion , and Isaac Peral were among the creators of the submarine . Juan de la Cierva invented the articulated rotor and the autogyro , precursor of the helicopter . In 1907, Leonardo Torres Quevedo (1852–1936) started up the world's first aerial lift for passengers on Mount Ulía in San Sebastián . [ 29 ] In the biotechnology sector, institutions such as the National Biotechnology Center, companies such as PharmaMar and Zendal, and researchers such as Mariano Esteban stand out. Spain currently has generation II nuclear reactors , while the most advanced countries are developing generation IV reactors . [ 30 ] It can be said that the father of nuclear energy in Spain was José María Otero de Navascués. [ 31 ] Today the Center for Energy, Environmental and Technological Research (CIEMAT) is the main Spanish research center in this area; it operates the TJ-II stellarator and is planning a successor, the TJ-III. Pablo Rodríguez Fernández is a leading researcher in the race for nuclear fusion . [ 32 ] Granada is a candidate to host IFMIF-DONES from 2030 onwards. [ 33 ] [ 34 ] [ 35 ] Ramón Verea (1833–1899) created the first mechanical calculator capable of direct multiplication. Leonardo Torres Quevedo (1852–1936) created modern wireless remote-control operation principles [ 36 ] [ 37 ] and analog calculating machines that could solve algebraic equations. [ 38 ] In 1912, he built an automaton for playing chess endgames, El Ajedrecista , which has been considered the first computer game in history. [ 39 ] He also introduced the idea of floating-point arithmetic to computing for the first time. [ 40 ] [ 41 ] José García Santesmases (1907–1989) built the first Spanish analog computer and the first Spanish-made microprocessor . In 1967 he launched the Factor-P, the first computer manufactured in Spain. [ 42 ] In 2016 and 2017 BQ became the third best-selling smartphone brand in Spain, with phones designed in the country. [ 43 ] [ 44 ] Towards the end of the 1990s and early 2000s several companies manufactured laptops in Spain, most notably Airis [ 45 ] and Inves. [ 46 ] By 2021, Primux, Slimbook, Vant and Mountain were designing and assembling their computers in Spain. [ 47 ] [ 48 ] Between 1987 and 2009 there was a large microchip factory in Tres Cantos , but it closed due to the difficulty of competing with the Asian market.
[ 49 ] Currently there are Spanish companies with smaller-scale microchip production capacity that also have design capacity, such as Televés, a pioneer in Europe in the use of bare-die electronic components (components without encapsulation) [ 50 ] with the capacity to manufacture MMIC circuits as well; [ 51 ] Ikor; and Anafocus, dedicated to the manufacture of CMOS image sensors . Between 1983 and 1992, Spain became one of the largest producers of video games, in what is called the golden age of the Spanish video game . Today FX Interactive , heir to Dinamic Software , is among the most prominent companies. At the end of the 1990s IRC-Hispano was the leading online social community in the Hispanic world. Other software companies that have achieved wide impact include the search engine Olé, Terra Networks and Tuenti . Today, Wallapop , Fotocasa, Cabify and Rakuten TV stand out. The evolution of astronomical navigation, thanks to the contributions of astronomers such as Alonso de Santa Cruz , Juan Arias de Loyola and Jorge Juan y Santacilia , was also key to Spain's preponderance on the oceans. Since 1968 the National Institute for Aerospace Technology has run a succession of scientific satellite programs, starting with the Intasat Program , continuing with the Minisat program, a qualitative leap in the 1990s, and leading up to the current Small Satellite Constellation Program. Many of the instruments used in space missions to Mars and asteroids are developed at the Astrobiology Center (CAB). Among the major contributors in the space area are Emilio Herrera , inventor of the stratonautical space suit , predecessor of the space suit ; Enrique Trillas, promoter of space science programs; and Pedro Duque , the first Spanish astronaut. In Spain there are many science and technology parks, most of them grouped in the Association of Science and Technology Parks of Spain (APTE). The international R&D&I programs in which Spain participates are usually focused on the European area. The Spanish Foundation for Science and Technology (FECYT) is a public foundation under the Ministry of Science and Innovation, [ 52 ] whose mission is to foster science and innovation, promoting their integration into society. The National Museum of Science and Technology (MUNCYT) is dedicated to conservation and to the popularization of science and technology. It has two sites, one in Alcobendas and the other in A Coruña .
https://en.wikipedia.org/wiki/Science_and_technology_in_Spain
Science fiction prototyping ( SFP ) refers to the idea of using science fiction to describe and explore the implications of futuristic technologies and the social structures enabled by them. [ 1 ] [ 2 ] Similar terms are design fiction , speculative design , and critical design . [ 3 ] The idea was introduced by Brian David Johnson in 2010, who, at the time, was a futurist at Intel working on the challenge his company faced in anticipating the market needs for integrated circuits at the end of their 7–10 year design and production cycle. [ 4 ] [ 5 ] The roots of Science Fiction Prototyping can be traced back to two papers. The first, by Callaghan et al., [ 6 ] “ Pervasive Computing and Urban Development: Issues for the individual and Society ”, presented at the 2004 United Nations World Urban Forum, used short stories as a means to convey potential future threats of technology to society. The second, by Egerton et al., [ 7 ] " Using Multiple Personas In Service Robots To Improve Exploration Strategies When Mapping New Environments ", describing multiple personas and irrational thinking for humanoid robots, inspired Brian David Johnson to write the first Science Fiction Prototype, Nebulous Mechanisms , [ 8 ] which went on to become a series of stories that eventually morphed into Intel's 21st Century Robot project. [ 9 ] Together Johnson, Callaghan and Egerton formed the Creative Science Foundation as a vehicle to promote and support the use of Science Fiction Prototyping and its derivatives. The first public Science Fiction Prototyping event was Creative Science 2010 [ 10 ] (not to be confused with Creation Science ), held in Kuala Lumpur , Malaysia on 19 July 2010. This event was also significant as it included the Science Fiction Prototype Tales From a Pod , [ 11 ] which became the first Science Fiction Prototype to be commercialised (by Immersive Displays Ltd, ImmersaVU [ 12 ] ). In 2011, a second Science Fiction Prototyping workshop, Creative Science 2011, was held in Nottingham (UK), [ 13 ] at which Intel made the first documentary about this methodology. Shortly afterwards the Creative Science Foundation was formed as an umbrella organisation to manage Science Fiction Prototyping activity, leading to a proliferation of events and publications; a more detailed account is provided on the Science Fiction Prototyping History web pages. [ 14 ] The core methodology is the use of creative arts as a means to introduce innovations into science, engineering, business and socio-political systems. It does not aim to forecast the future; rather, it focuses on inventing or innovating the future by extrapolating forward trends from research or foresight activities (creating new concepts, schemes, services and products). The main (but not exclusive) methodology is the use of science-fiction stories, grounded in existing practice, which are written for the explicit purpose of acting as prototypes for people to explore a wide variety of futures. These 'science fiction prototypes' (SFPs) can be created by scientists, engineers, business or socio-political professionals to stretch their work or, for example, by writers, film/stage directors, school children and members of the public to influence the work of professionals. In this way these stories act as a way of involving the widest section of the population in helping to set the research agenda.
Johnson advocates a five-step process for writing Science Fiction Prototypes. [ 4 ] Full Science Fiction Prototypes are about 6–12 pages long, with a popular structure being: an introduction, background work, the fictional story (the bulk of the SFP), a short summary and a closing reflection. Most often science fiction prototypes extrapolate current science forward and, therefore, include a set of references at the end. Such prototypes can take several days to write, and for situations where ideas need to be generated faster (e.g. meetings), the concept of micro science fiction prototypes (μSFP) is used. [ 11 ] Generally, μSFPs are the size of a tweet or text message, being around 25–30 words (140–160 characters in standard English). Science fiction prototyping has a number of applications. The most obvious is product innovation , of which the two earliest examples are Intel's 21st Century Robot (an open innovation project to develop a domestic robot ) and Essex University's eDesk (a mixed-reality immersive education desk), [ 15 ] both of which were introduced in the previous section. Beyond product innovation, science fiction prototyping finds itself being applied to many diverse areas. For example, at the University of Washington (USA) it has been used to facilitate broader contextual and societal thinking about computers, computer security risks, and security defense as part of an optional senior-level course in computer security . [ 16 ] In 2014, [ 17 ] these ideas were refined into an SFP methodology called Threatcasting, with early adopters including the United States Air Force Academy , the Government of California , and the Army Cyber Institute at West Point Military Academy . An earlier variation called Futurcasting was used by governments as a tool to influence the direction of society and politics. It did this by using stories about possible futures as a medium to engage the population in conversations about futures they would like to encourage or avoid. Science Fiction Prototyping is also being used in business environments. For example, at Canterbury Christ Church University (UK) Business School it is being used as a vehicle to introduce creative thinking in support of entrepreneurship courses. At the National Taiwan University (Taiwan), it is used to increase business school students' interest in science and technology for business innovation. [ 18 ] Elsewhere, the Business Schools of the universities of Leeds and Manchester (UK) are exploring its use in community development projects. [ 19 ] Finally, it is being applied to education . For example, the Department of Learning Design and Technology at San Diego State University (USA) has explored it as a means of motivating pre-university students to take up STEM studies and careers. [ 20 ] Further afield, in China , a novel use has been identified for the methodology: addressing the mandatory requirement for all science and engineering students to take a course in the English language . In particular, Shijiazhuang University (China) is exploring the potential for Science Fiction Prototyping to overcome the dullness that some science students experience in language learning by using it as an integrated platform for teaching Computer English , combining language and science learning. [ 21 ] China is also keen to improve the creative and innovative capabilities of its graduates, which this approach supports.
https://en.wikipedia.org/wiki/Science_fiction_prototyping
Science of the Total Environment is a weekly international peer-reviewed scientific journal covering environmental science . It was established in 1972 and is published by Elsevier . The editors-in-chief are Damià Barceló ( Consejo Superior de Investigaciones Científicas ), Jay Gan ( University of California, Riverside ) and Philip Hopke ( University of Rochester ). An October 2020 article suggesting that amulets may prevent COVID-19 [ 1 ] was met with skepticism, [ 2 ] even among the listed coauthors. [ 3 ] As of November 2020, the article was under "temporary removal". [ 1 ] It was later withdrawn at the request of the authors. [ 4 ] The editor-in-chief, Damià Barceló, was implicated in a €70,000-per-year scheme to publish articles under the affiliation of King Saud University , Saudi Arabia. [ 5 ] Such schemes are employed to boost a university's rankings and are considered unethical by academics. [ 6 ] [ 7 ] The journal is abstracted and indexed in several major bibliographic databases. According to the Journal Citation Reports , the journal has a 2023 impact factor of 8.2. [ 10 ] As of October 2024, the journal's indexation in the Science Citation Index Expanded is "on hold" and pending re-evaluation, with Web of Science citing concerns about "the quality of the content published in this journal" as a reason for the suspension. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Science_of_the_Total_Environment
In the philosophy of science , the science wars were a series of scholarly and public discussions in the 1990s over the social place of science in making authoritative claims about the world. Encyclopedia.com , citing the Encyclopedia of Science and Religion , offers a definition along these lines. The science wars took place principally in the United States in the 1990s, in the academic and mainstream press. Scientific realists (such as Norman Levitt , Paul R. Gross , Jean Bricmont and Alan Sokal ) accused many writers, whom they described as ' postmodernist ', of having effectively rejected scientific objectivity , the scientific method , empiricism , and scientific knowledge. Though much of the theory associated with 'postmodernism' (see post-structuralism ) did not make any interventions into the natural sciences , the scientific realists took aim at its general influence. The scientific realists argued that large swathes of scholarship, amounting to a rejection of objectivity and realism, had been influenced by major 20th-century post-structuralist philosophers (such as Jacques Derrida , Gilles Deleuze , Jean-François Lyotard and others), whose work they declared to be incomprehensible or meaningless. They implicated a broad range of fields in this trend, including cultural studies , feminist studies , comparative literature , media studies , and especially science and technology studies , which does apply such methods to the study of science. Physicist N. David Mermin understands the science wars as a series of exchanges between scientists and " sociologists , historians and literary critics " whom the scientists "thought ...were ludicrously ignorant of science, making all kinds of nonsensical pronouncements. The other side dismissed these charges as naive, ill-informed and self-serving." [ 2 ] Sociologist Harry Collins wrote that the "science wars" began "in the early 1990s with attacks by natural scientists or ex-natural scientists who had assumed the role of spokespersons for science. The subject of the attacks was the analysis of science coming out of literary studies and the social sciences." [ 3 ] Until the mid-20th century, the philosophy of science had concentrated on the viability of scientific method and knowledge, proposing justifications for the truth of scientific theories and observations and attempting to discover at a philosophical level why science worked. Karl Popper , an early opponent of logical positivism in the 20th century, repudiated the classical observationalist/inductivist form of scientific method in favour of empirical falsification . He is also known for his opposition to the classical justificationist / verificationist account of knowledge, which he replaced with critical rationalism , "the first non justificational philosophy of criticism in the history of philosophy". [ 4 ] His criticisms of scientific method were adopted by several postmodernist critiques. [ 5 ] A number of 20th-century philosophers maintained that logical models of pure science do not apply to actual scientific practice. It was the publication of Thomas Kuhn 's The Structure of Scientific Revolutions in 1962, however, which fully opened the study of science to new disciplines by suggesting that the evolution of science was in part socially determined and that it did not operate under the simple logical laws put forward by the logical positivist school of philosophy.
Kuhn described the development of scientific knowledge not as a linear increase in truth and understanding, but as a series of periodic revolutions which overturned the old scientific order and replaced it with new orders (what he called " paradigms "). Kuhn attributed much of this process to the interactions and strategies of the human participants in science rather than its own innate logical structure (see sociology of scientific knowledge ). Some interpreted Kuhn's ideas to mean that scientific theories were, either wholly or in part, social constructs , a reading many took to diminish science's claim to represent objective reality and to assign reality a lesser or potentially irrelevant role in the formation of scientific theories. In 1971, Jerome Ravetz published Scientific knowledge and its social problems , a book describing the role that the scientific community, as a social construct, plays in accepting or rejecting objective scientific knowledge. [ 6 ] A number of different philosophical and historical schools, often grouped together as " postmodernism ", began reinterpreting scientific achievements of the past through the lens of the practitioners, often positing the influence of politics and economics in the development of scientific theories in addition to scientific observations. Rather than being presented as working entirely from positivistic observations, many scientists of the past were scrutinized for their connection to issues of gender, sexual orientation, race, and class. Some more radical philosophers, such as Paul Feyerabend , argued that scientific theories were themselves incoherent and that other forms of knowledge production (such as those used in religion ) served the material and spiritual needs of their practitioners with as much validity as scientific explanations. Imre Lakatos advanced a midway view between the "postmodernist" and "realist" camps. For Lakatos, scientific knowledge is progressive; however, it progresses not by a strict linear path where every new element builds upon and incorporates every other, but by an approach where a "core" of a "research program" is protected by auxiliary theories, which can themselves be falsified or replaced without compromising the core. Social conditions and attitudes affect how strongly one attempts to resist falsification for the core of a program, but the program has an objective status based on its relative explanatory power. Resisting falsification only becomes ad hoc and damaging to knowledge when an alternate program with greater explanatory power is rejected in favor of another with less. But because accepting a new program changes a theoretical core, with broad ramifications for other areas of study, it is revolutionary as well as progressive. Thus, for Lakatos the character of science is that of being both revolutionary and progressive; both socially informed and objectively justified. In Higher Superstition: The Academic Left and Its Quarrels With Science (1994), scientists Paul R. Gross and Norman Levitt accused postmodernists of anti-intellectualism , presented the shortcomings of relativism , and suggested that postmodernists knew little about the scientific theories they criticized and practiced poor scholarship for political reasons. The authors insist that the "science critics" misunderstood the theoretical approaches they criticized, given their "caricature, misreading, and condescension, [rather] than argument".
[ 7 ] [ 8 ] [ 9 ] [ 10 ] The book sparked the so-called science wars. Higher Superstition inspired a New York Academy of Sciences conference titled The Flight from Science and Reason , organized by Gross, Levitt, and Gerald Holton . [ 11 ] Attendees of the conference were critical of the polemical approach of Gross and Levitt, yet agreed upon the intellectual inconsistency of how laymen, non-scientists, and social studies intellectuals dealt with science. [ 12 ] In 1996, Social Text , a left-wing Duke University publication of postmodern critical theory , compiled a "Science Wars" issue containing brief articles by postmodernist academics in the social sciences and the humanities that emphasized the roles of society and politics in science. In the introduction to the issue, the Social Text editor, activist Andrew Ross , said that the attack upon science studies was a conservative reaction to reduced funding for scientific research. He characterized the Flight from Science and Reason conference as an attempt at "linking together a host of dangerous threats: scientific creationism , New Age alternatives and cults, astrology , UFO-ism , the radical science movement, postmodernism, and critical science studies, alongside the ready-made historical specters of Aryan-Nazi science and the Soviet error of Lysenkoism " that "degenerated into name-calling". [ 13 ] In another Social Text article, the postmodern sociologist Dorothy Nelkin characterised Gross and Levitt's vigorous response as a "call to arms in response to the failed marriage of Science and the State"—in contrast to the scientists' historical tendency to avoid participating in perceived political threats, such as creation science , the animal rights movement , and anti-abortionists' attempts to curb fetal research. At the end of the Soviet–American Cold War (1945–91), military funding of science declined, while funding agencies demanded accountability, and research became directed by private interests. Nelkin suggested that postmodernist critics were "convenient scapegoats" who diverted attention from problems in science. [ 14 ] Also in 1996, physicist Alan Sokal submitted an article to Social Text titled " Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity ", which proposed that quantum gravity is a linguistic and social construct and that quantum physics supports postmodernist criticisms of scientific objectivity . The staff published it in the "Science Wars" issue as a relevant contribution, later claiming that they had held the article back from earlier issues due to Sokal's alleged refusal to consider revisions. [ 15 ] Later, in the May 1996 issue of Lingua Franca , in the article "A Physicist Experiments With Cultural Studies", Sokal exposed his parody article, "Transgressing the Boundaries", as an experiment testing the intellectual rigor of an academic journal that would "publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions". [ 16 ] The matter became known as the " Sokal Affair " and brought greater public attention to the wider conflict. [ 17 ] Jacques Derrida , a frequent target of anti-relativist and anti-postmodern criticism in the wake of Sokal's article, responded to the hoax in "Sokal and Bricmont Aren't Serious", first published in Le Monde .
He called Sokal's action sad ( triste ) for having overshadowed Sokal's mathematical work and ruined the chance to sort out controversies of scientific objectivity in a careful way. Derrida went on to fault him and co-author Jean Bricmont for what he considered an act of intellectual bad faith: they had accused him of scientific incompetence in the English edition of a follow-up book (an accusation several English reviewers noted), but deleted the accusation from the French edition and denied that it had ever existed. He concluded, as the title indicates, that Sokal was not serious in his approach, but had used the spectacle of a "quick practical joke" to displace the scholarship Derrida believed the public deserved. [ 18 ] In the first few years after the 'Science Wars' edition of Social Text , the seriousness and volume of discussion increased significantly, much of it focused on reconciling the 'warring' camps of postmodernists and scientists. One significant event was the 'Science and Its Critics' conference in early 1997; it brought together scientists and scholars who study science and featured Alan Sokal and Steve Fuller as keynote speakers. The conference generated the final wave of substantial press coverage (in both news media and scientific journals), though it by no means resolved the fundamental issues of social construction and objectivity in science. [ 19 ] Other attempts have been made to reconcile the two camps. Mike Nauenberg, a physicist at the University of California, Santa Cruz , organized a small conference in May 1997 that was attended by scientists and sociologists of science alike, among them Alan Sokal , N. David Mermin and Harry Collins . In the same year, Collins organized the Southampton Peace Workshop, which again brought together a broad range of scientists and sociologists. The Peace Workshop gave rise to the idea of a book intended to map out some of the arguments between the disputing parties. The One Culture?: A Conversation about Science , edited by chemist Jay A. Labinger and sociologist Harry Collins, was eventually published in 2001. The book's title is a reference to C. P. Snow 's The Two Cultures . It contains contributions from authors such as Alan Sokal, Jean Bricmont, Steven Weinberg , and Steven Shapin . [ 20 ] Other significant publications related to the science wars include Fashionable Nonsense by Sokal and Jean Bricmont (1998), The Social Construction of What? by Ian Hacking (1999) and Who Rules in Science by James Robert Brown (2004). To John C. Baez , the Bogdanov Affair in 2002 [ 21 ] served as the bookend to the Sokal controversy: the review, acceptance, and publication of papers, later alleged to be nonsense, in peer-reviewed physics journals. Cornell physics professor Paul Ginsparg argued that the cases are not at all similar and that the fact that some journals and scientific institutions have low standards is "hardly a revelation". [ 22 ] The new editor-in-chief of the journal Annals of Physics , who was appointed after the controversy along with a new editorial staff, said that the standards of the journal had been poor in the lead-up to the publication, as the previous editor had become sick and died. [ 21 ] Interest in the science wars has waned considerably in recent years. Though the events of the science wars are still occasionally mentioned in the mainstream press, they have had little effect on either the scientific community or the community of critical theorists.
Both sides continue to maintain that the other does not understand their theories, or mistakes constructive criticisms and scholarly investigations for attacks. In 1999, the French sociologist Bruno Latour —who at the time held that the natural sciences are socially constructed—said, "Scientists always stomp around meetings talking about 'bridging the two-culture gap', but when scores of people from outside the sciences begin to build just that bridge, they recoil in horror and want to impose the strangest of all gags on free speech since Socrates : only scientists should speak about science!" [ 23 ] Subsequently, Latour suggested a re-evaluation of sociology's epistemology based on lessons learned from the Science Wars: "... scientists made us realize that there was not the slightest chance that the type of social forces we use as a cause could have objective facts as their effects". [ 24 ] Reviewing Sokal's Beyond the Hoax , Mermin stated that "As a sign that the science wars are over, I cite the 2008 election of Bruno Latour [...] to Foreign Honorary Membership in that bastion of the establishment, the American Academy of Arts and Sciences " and opined that "we are not only beyond Sokal's hoax, but beyond the science wars themselves". [ 2 ] However, more recently, some of the leading critical theorists have recognized that their critiques have, at times, been counter-productive and are providing intellectual ammunition for reactionary interests. [ 25 ] Writing about these developments in the context of global warming , Latour noted that "dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives. Was I wrong to participate in the invention of this field known as science studies? Is it enough to say that we did not really mean what we said?" [ 26 ] Kendrick Frazier notes that Latour is interested in helping to rebuild trust in science and that Latour has said that some of the authority of science needs to be regained. [ 27 ] In 2016, Shawn Lawrence Otto argued in his book The War on Science: Who's Waging It, Why It Matters, and What We Can Do About It that the winners of the war on science "will chart the future of power, democracy, and freedom itself." [ 28 ]
https://en.wikipedia.org/wiki/Science_wars
Sciensus (formerly Healthcare at Home ) is a London-based global life sciences business established in 1992 by founder and former chairman Charles Walsh. The company supplies a wide range of specialist medications and patient support programmes for chronic, cancer and rare disease patients, with around 1,600 employees dealing with more than 230,000 patients a year. [ 1 ] It works with every NHS trust in the UK. [ 2 ] In spring 2015, Healthcare at Home reverted to using in-house logistics rather than working with a third party for all deliveries. In autumn 2015, the company went through a full corporate rebranding, including new logos, literature, strapline and website, in an attempt to re-establish the company's status in the market. [ 3 ] The company's name was changed in 2022 from "Healthcare at Home LTD" to "Sciensus Pharma Limited", using only "Sciensus" to refer to itself from then on. [ 4 ] In July 2023, one patient died and three were hospitalised after being administered unlicensed versions of cabazitaxel chemotherapy provided by Sciensus. [ 5 ] This followed reports in May 2023 of complaints of poor service and delays raised about the company, sparking a possible CQC review. [ 6 ] [ 7 ] [ 8 ] This resulted in a "partial suspension" of its licence by the Medicines and Healthcare products Regulatory Agency (MHRA). [ 9 ] In 2013 Healthcare at Home took over almost 3,000 patients from another drug delivery company, Medco Health Solutions, which pulled out of the UK market. Subsequently, the company struggled to maintain services. In March 2014 Healthcare at Home said that it was no longer accepting new ‘high risk’ patients, such as those suffering from haemophilia or respiratory diseases like cystic fibrosis, because it could not guarantee getting their drugs to them on time or in full. The company blamed IT issues and a ‘system failure’ relating to the firm's outsourcing of its logistics and warehousing departments. Dave Roberts, chief executive of the National Clinical Homecare Association , said there were questions about whether the rapid growth and low profit margins in the sector had contributed to the situation. ‘There’s been 20% growth in this sector year on year for several years now in terms of numbers of patients – that’s a very rapid expansion. But profits in this sector are just 2-4%. That is a poor return given the extent of capital investment needed and the governance and logistics issues. As NHS budgets have fallen, all the slack has been cut out of contracts,’ Roberts added. In its 2014 accounts Healthcare at Home, which is majority-owned by private equity firm Vitruvian Partners , reported a turnover of £1bn, with pretax profits of just £15m. [ 10 ] The chief executive Mike Gordon stepped down in June 2014 after the revelation that seven per cent of the firm's 150,000 patients had not received their medication on time or in full during the previous six months. [ 11 ] In May 2021 it was reported that the firm had missed almost 10,000 medicine deliveries when it introduced a new IT system in October 2020, and it was put in special measures by the Care Quality Commission. The inspectors found the firm failed to properly investigate some health and safety incidents and "did not always learn lessons when things went wrong". [ 12 ] On 16 October 2023, Sciensus was given 28 days to reform its complaints-handling process due to issues around ignoring patients' concerns. The CEO of Sciensus received a written warning from Rob Behrens, the Parliamentary and Health Service Ombudsman.
[ 13 ]
https://en.wikipedia.org/wiki/Sciensus
The Scientific Committee on Consumer Safety ( SCCS ) is one of the independent scientific committees managed by the Directorate-General for Health and Consumer Protection of the European Commission , which provide scientific advice to the commission on non-food issues. It is the successor to both the Scientific Committee on Consumer Products (SCCP) and the Scientific Committee on Cosmetic Products and Non-Food Products (SCCNFP). The Scientific Committee on Consumer Safety provides the European Commission with scientific advice on the safety of non-food consumer products. The SCCS's advice is intended to enable risk managers to take the adequate and required actions to guarantee consumer protection . The SCCS addresses questions in relation to the safety, allergenic properties and impact on consumer health of products and ingredients such as toys, textiles, clothing, cosmetics , personal care products, domestic products such as detergents , and consumer services such as tattooing . By the end of 2006 the SCCP had adopted close to 100 opinions or position papers on topics such as fragrances , hair dyes, sunbeds , tooth bleaching , preservatives , UV filters, and other substances. The SCCS consists of a maximum of 17 members. There is also a reserve list made up of candidates found suitable for a position in a scientific committee. The members of the SCCS are appointed on the basis of their skills and experience in the fields in question and, consistent with this, a geographical distribution that reflects the diversity of scientific problems and approaches in the European Union . The experts are appointed for three years, renewable a maximum of three consecutive times. In agreement with the commission, the scientific committees may turn to specialised external experts. The SCCS complies with the principles of independence, transparency and confidentiality. The members therefore make a declaration of commitment to act in the public interest and a declaration of interests. Requests for opinions, agendas, minutes and opinions are published. The work and publications respect commercial confidentiality. The scientific committees were originally established by Commission Decision 97/404/EC of 10 June 1997. The Scientific Committee on Consumer Products (SCCP) was established as one of three scientific committees by Commission Decision 2004/210/EC of 3 March 2004, replacing the former Scientific Committee on Cosmetic Products and Non-Food Products (SCCNFP). Commission Decision 2008/721/EC of 5 August 2008 reestablished the committee as the Scientific Committee on Consumer Safety (SCCS). The Directorate-General for Health and Consumer Protection also manages two other independent scientific committees on non-food products. For questions concerning the safety of food products, the European Commission consults the European Food Safety Authority (EFSA).
https://en.wikipedia.org/wiki/Scientific_Committee_on_Consumer_Safety
The Scientific Committee on Food ( SCF ), established in 1974, was the main committee providing the European Commission with scientific advice on food safety . [ 1 ] [ 2 ] Its responsibilities have been transferred to the European Food Safety Authority (EFSA) . The SCF provided the European Commission with independent scientific advice [ 1 ] on issues of public health related to food consumption. The commission was required to consult the SCF in the cases laid down by EU legislation, but could also decide to consult it on any other matter relating to food safety or consumer health, in conjunction with other committees. The SCF could also alert the commission to any specific or emerging concern. Most of the SCF's early activities concerned food additives , [ 1 ] but other issues became increasingly important as the scope of EU legislation expanded, including work with flavourings , food contact materials , nutrition , contaminants , novel foods , food hygiene , and natural mineral waters . The European Commission appointed various independent scientific experts, from across the European Union , [ 1 ] to serve as members of the SCF. The main role of the members was to give objective and authoritative advice, spanning a range of food-related issues. The Commission advertised all the posts for the SCF committee, as well as for its other scientific committees. These included experts in the fields of nutrition , toxicology , food hygiene , food technology , microbiology , biotechnology and molecular genetics . In December 1997, the SCF had 17 members. During each meeting, the SCF members were required to declare [ 1 ] if they had any commercial or other interests in the items being discussed. These declarations were later included in the minutes of each meeting, which are now publicly available online. The SCF established eight working groups covering specialized areas. [ 1 ] Each working group was chaired by an SCF member who regularly reported back [ 1 ] to the SCF about the working group's activities and conclusions. The membership of each working group was selected from among SCF members, then supplemented by ad hoc (specific) experts with appropriate expertise, who participated in some meetings at the request of the European Commission. After reviewing its handling of the BSE crisis, [ 1 ] the European Commission restructured its committees, replacing some and creating new committees. All of these committees were then overseen by the Scientific Steering Committee (formerly the Multi-disciplinary Scientific Committee), which also handled some specific issues such as BSE. The responsibility for all of the advisory scientific committees was also transferred to DG XXIV, which dealt with Consumer Policy and Consumer Health Protection. This transfer was made to preserve the independence of those committees, by separating the tasks of expert advice from the direct policy and legislative parts of the commission. The SCF's opinion papers (about the issues it had studied) [ 1 ] and the minutes of its meetings can be obtained from the European Commission's website at ec.europa.eu. The opinions of the Scientific Committee on Food were also published by the Publications Office of the European Union , as various reports in the series on Food Science and Techniques .
https://en.wikipedia.org/wiki/Scientific_Committee_on_Food
The Scientific Committee on Health and Environmental Risks ( SCHER ) is one of the independent scientific committees managed by the Directorate-General for Health and Consumer Protection of the European Commission , which provide scientific advice to the Commission on issues related to consumer products. The SCHER provides the Commission with scientific advice on questions relating to the toxicity and ecotoxicity of chemicals , biochemicals and biological compounds whose use may have harmful consequences for human health and the environment . In particular, the SCHER addresses questions in relation to new and existing chemicals , the restriction and marketing of dangerous substances, biocides , waste , environmental contaminants, plastics and other materials used for water pipe work (e.g. new organic substances), drinking water , and indoor and ambient air quality . It also addresses questions relating to human exposure to mixtures of chemicals, sensitisation and identification of endocrine disrupters . The SCHER consists of a maximum of 19 members. The members are appointed on the basis of their skills and experience in the fields in question and, consistent with this, a geographical distribution that reflects the diversity of scientific problems and approaches in the European Union. The experts are appointed for three years, renewable for a maximum of three consecutive terms. In agreement with the Commission, the Scientific Committees may turn to specialized external experts. SCHER's scientific advisory procedures are based on the principles of scientific excellence, independence and transparency. The Opinions of the Committees are made available as quickly as possible following a request for advice from the Commission. In addition, the agendas, minutes of plenary meetings and lists of members are published. The Directorate-General for Health and Consumer Protection also manages two other independent Scientific Committees on non-food products. For questions concerning the safety of food products, the European Commission consults the European Food Safety Authority (EFSA).
https://en.wikipedia.org/wiki/Scientific_Committee_on_Health_and_Environmental_Risks
The Scientific Committee on Occupational Exposure Limit Values ( SCOEL ) is a committee of the European Commission established in 1995 to advise on occupational exposure limits for chemicals in the workplace, within the framework of the relevant EU legislation on the protection of workers from chemical risks. It is composed of scientists who are experts in chemistry , toxicology , epidemiology , occupational medicine or industrial hygiene , and it reviews available information, recommending exposure limits where possible. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Scientific_Committee_on_Occupational_Exposure_Limit_Values
A scientific calculator is an electronic calculator , either desktop or handheld, designed to perform calculations using basic ( addition , subtraction , multiplication , division ) and advanced ( trigonometric , hyperbolic , etc.) mathematical operations and functions . They have completely replaced slide rules as well as books of mathematical tables and are used in both educational and professional settings. In some areas of study and professions, scientific calculators have been replaced by graphing calculators and financial calculators , which have the capabilities of a scientific calculator along with the capability to graph input data and functions , as well as by numerical computing , computer algebra , statistical, and spreadsheet software packages running on personal computers . Both desktop and mobile software calculators can also emulate many functions of a physical scientific calculator. Standalone scientific calculators remain popular in secondary and tertiary education because computers and smartphones are often prohibited during exams to reduce the likelihood of cheating. [ 1 ] When electronic calculators were originally marketed, they normally had only four or five capabilities ( addition , subtraction , multiplication , division and square root ). Modern scientific calculators generally have many more capabilities than the original four- or five-function calculator, and the capabilities differ between manufacturers and models; high-end scientific calculators generally offer a range of additional functions beyond the common core. While most scientific calculators have traditionally used a single-line display similar to traditional pocket calculators , many of them have more digits (10 to 12), sometimes with extra digits for the floating-point exponent. A few have multi-line displays, with some models from Hewlett-Packard , Texas Instruments (both US manufacturers), Casio , Sharp , and Canon (all three Japanese makers) using dot matrix displays similar to those found on graphing calculators . Scientific calculators are used widely in situations that require quick access to certain mathematical functions, especially those that were once looked up in mathematical tables , such as trigonometric functions or logarithms . They are also used for calculations of very large or very small numbers, as in some aspects of astronomy , physics , and chemistry . They are very often required for math classes from the junior high school level through college , [ 3 ] and are generally either permitted or required on many standardized tests covering math and science subjects; [ 4 ] as a result, many are sold into educational markets to cover this demand, and some high-end models include features making it easier to translate a problem on a textbook page into calculator input, e.g. by providing a method to enter an entire problem as it is written on the page, using simple formatting tools. The first scientific calculator that included all of these basic features was the programmable Hewlett-Packard HP-9100A , [ 5 ] released in 1968, though the Wang LOCI-2 and the Mathatronics Mathatron [ 6 ] had some features later identified with scientific calculator designs.
The HP-9100 series was built entirely from discrete transistor logic with no integrated circuits, and was one of the first uses of the CORDIC algorithm for trigonometric computation in a personal computing device, as well as the first calculator based on reverse Polish notation (RPN) entry. HP became closely identified with RPN calculators from then on, and even today some of their high-end calculators (particularly the long-lived HP-12C financial calculator and the HP-48 series of graphing calculators) still offer RPN as their default input mode due to having garnered a very large following. The HP-35, introduced on February 1, 1972, was Hewlett-Packard's first pocket calculator and the world's first handheld scientific calculator. [ 7 ] Like some of HP's desktop calculators it used RPN. Introduced at US$395, the HP-35 was available from 1972 to 1975. Texas Instruments (TI), after the production of several units with scientific notation, introduced a handheld scientific calculator on January 15, 1974, in the form of the SR-50. [ 8 ] TI's long-running TI-30 series is one of the most widely used scientific calculators in classrooms. Casio, Canon, and Sharp also produced scientific calculators, with Casio's FX series beginning with the Casio FX-1 in 1972. [ 9 ] Casio was the first company to produce a graphing calculator (the Casio fx-7000G).
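RPN entry works by pushing operands onto a stack and applying each operator to the values on top, so no parentheses are needed. The following Python sketch is purely illustrative (the function name and token handling are mine, not any calculator's firmware), showing how an RPN sequence such as `3 4 + 5 *` evaluates to (3 + 4) × 5:

```python
# Minimal RPN (reverse Polish notation) evaluator: operands are pushed
# onto a stack; each operator pops two arguments and pushes the result.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # second operand sits on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# "(3 + 4) * 5" entered RPN-style, with no parentheses:
print(eval_rpn("3 4 + 5 *".split()))  # 35.0
```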
https://en.wikipedia.org/wiki/Scientific_calculator
A scientific enterprise is a science-based project developed by, or in cooperation with, a private entrepreneur. For example, in the Age of Exploration, leaders like Henry the Navigator founded schools of navigation, from which stemmed voyages of exploration. Such an organization has the ability to conduct scientific research on a sustained basis, involving multiple researchers over an extended period. Generally, the research is funded not only for the science itself, but for some application which shows promise for the enterprise. But the researchers, if left to their own choices, will tend to follow their research interests, which is essential for the long-term health of their chosen field. Note that a successful scientific enterprise is not equivalent to a successful high-tech enterprise or to a successful business enterprise, but that they form an ecology, a food chain.
https://en.wikipedia.org/wiki/Scientific_enterprise
Scientific laws or laws of science are statements, based on repeated experiments or observations , that describe or predict a range of natural phenomena . [ 1 ] The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science ( physics , chemistry , astronomy , geoscience , biology ). Laws are developed from data and can be further developed through mathematics ; in all cases they are directly or indirectly based on empirical evidence . It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented. [ 2 ] Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty, as mathematical laws do. A scientific law may be contradicted, restricted, or extended by future observations. A law can often be formulated as one or several statements or equations , so that it can predict the outcome of an experiment. Laws differ from hypotheses and postulates , which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories , which may entail one or several laws. [ 3 ] Science distinguishes a law or theory from facts. [ 4 ] Calling a law a fact is ambiguous , an overstatement , or an equivocation . [ 5 ] The nature of scientific laws has been much discussed in philosophy , but in essence scientific laws are simply empirical conclusions reached by the scientific method; they are intended to be neither laden with ontological commitments nor statements of logical absolutes . Social sciences such as economics have also attempted to formulate scientific laws, though these generally have much less predictive power. A scientific law always applies to a physical system under repeated conditions, and it implies that there is a causal relationship involving the elements of the system. Factual and well-confirmed statements like "Mercury is liquid at standard temperature and pressure" are considered too specific to qualify as scientific laws. A central problem in the philosophy of science , going back to David Hume , is that of distinguishing causal relationships (such as those implied by laws) from principles that arise due to constant conjunction . [ 6 ] Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, the applicability of a law is limited to circumstances resembling those already observed, and the law may be found to be false when extrapolated. 
Ohm's law only applies to linear networks; Newton's law of universal gravitation only applies in weak gravitational fields; the early laws of aerodynamics, such as Bernoulli's principle, do not apply in the case of compressible flow such as occurs in transonic and supersonic flight; Hooke's law only applies to strain below the elastic limit; Boyle's law applies with perfect accuracy only to the ideal gas, etc. These laws remain useful, but only under the specified conditions where they apply. Many laws take mathematical forms, and thus can be stated as an equation; for example, the law of conservation of energy can be written as $\Delta E = 0$, where $E$ is the total amount of energy in the universe. Similarly, the first law of thermodynamics can be written as $\mathrm{d}U = \delta Q - \delta W$, and Newton's second law can be written as $F = \frac{\mathrm{d}p}{\mathrm{d}t}$. While these scientific laws explain what our senses perceive, they are still empirical (acquired by observation or scientific experiment) and so are not like mathematical theorems which can be proved purely by mathematics. Like theories and hypotheses, laws make predictions; specifically, they predict that new observations will conform to the given law. Laws can be falsified if they are found in contradiction with new data. Some laws are only approximations of other more general laws, and are good approximations with a restricted domain of applicability. For example, Newtonian dynamics (which is based on Galilean transformations) is the low-speed limit of special relativity (since the Galilean transformation is the low-speed approximation to the Lorentz transformation). Similarly, the Newtonian gravitation law is a low-mass approximation of general relativity, and Coulomb's law is an approximation to quantum electrodynamics at large distances (compared to the range of weak interactions). In such cases it is common to use the simpler, approximate versions of the laws, instead of the more accurate general laws. Laws are constantly being tested experimentally to increasing degrees of precision, which is one of the main goals of science. The fact that laws have never been observed to be violated does not preclude testing them at increased accuracy or in new kinds of conditions to confirm whether they continue to hold, or whether they break, and what can be discovered in the process. It is always possible for laws to be invalidated or proven to have limitations, by repeatable experimental evidence, should any be observed. Well-established laws have indeed been invalidated in some special cases, but the new formulations created to explain the discrepancies generalize upon, rather than overthrow, the originals. That is, the invalidated laws have been found to be only close approximations, to which other terms or factors must be added to cover previously unaccounted-for conditions, e.g. very large or very small scales of time or space, enormous speeds or masses, etc. Thus, rather than unchanging knowledge, physical laws are better viewed as a series of improving and more precise generalizations. Scientific laws are typically conclusions based on repeated scientific experiments and observations over many years and which have become accepted universally within the scientific community.
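The claim that Newtonian dynamics is the low-speed limit of special relativity can be checked numerically: the Lorentz factor γ = 1/√(1 − v²/c²) is indistinguishable from 1 at everyday speeds and only departs from it near the speed of light. A minimal sketch (the sample velocities are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Lorentz factor; gamma -> 1 as v/c -> 0 (the Galilean limit)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for v in (30.0, 3.0e5, 0.9 * C):  # car, fast spacecraft, relativistic particle
    print(f"v = {v:.3e} m/s  gamma = {lorentz_gamma(v):.15f}")
# At 30 m/s, gamma differs from 1 by about 5e-15, so Galilean kinematics is
# an excellent approximation; at 0.9c, gamma is roughly 2.29.
```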
A scientific law is "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present". [ 7 ] The production of a summary description of our environment in the form of such laws is a fundamental aim of science. Several general properties of scientific laws, particularly when referring to laws in physics, have been identified. The term "scientific law" is traditionally associated with the natural sciences, though the social sciences also contain laws. [ 11 ] For example, Zipf's law is a law in the social sciences which is based on mathematical statistics. In these cases, laws may describe general trends or expected behaviors rather than being absolutes. In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined. Some examples of widely accepted impossibilities in physics are: perpetual motion machines, which violate the law of conservation of energy; exceeding the speed of light, which violates the implications of special relativity; the uncertainty principle of quantum mechanics, which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle; and Bell's theorem: no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. Some laws reflect mathematical symmetries found in nature (e.g. the Pauli exclusion principle reflects identity of electrons, conservation laws reflect homogeneity of space and time, and Lorentz transformations reflect rotational symmetry of spacetime). Many fundamental physical laws are mathematical consequences of various symmetries of space, time, or other aspects of nature. Specifically, Noether's theorem connects some conservation laws to certain symmetries. For example, conservation of energy is a consequence of the shift symmetry of time (no moment of time is different from any other), while conservation of momentum is a consequence of the symmetry (homogeneity) of space (no place in space is special, or different from any other). The indistinguishability of all particles of each fundamental type (say, electrons, or photons) results in the Dirac and Bose quantum statistics which in turn result in the Pauli exclusion principle for fermions and in Bose–Einstein condensation for bosons. Special relativity uses rapidity to express motion according to the symmetries of hyperbolic rotation, a transformation mixing space and time. Symmetry between inertial and gravitational mass results in general relativity. The inverse square law of interactions mediated by massless bosons is the mathematical consequence of the 3-dimensionality of space.
One strategy in the search for the most fundamental laws of nature is to search for the most general mathematical symmetry group that can be applied to the fundamental interactions. Conservation laws are fundamental laws that follow from the homogeneity of space, time and phase, in other words symmetry. The general continuity equation for a conserved quantity can be written in differential form as $\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0$, where ρ is some quantity per unit volume and J is the flux of that quantity (change in quantity per unit time per unit area). Intuitively, the divergence (denoted ∇⋅) of a vector field is a measure of flux diverging radially outwards from a point, so the negative is the amount piling up at a point; hence the rate of change of density in a region of space must be the amount of flux leaving or collecting in some region (see the main article for details). In the source article, a table collects the fluxes of various physical quantities in transport and their associated continuity equations for comparison; in that table, u denotes the velocity field of the fluid (m s−1) and Ψ the wavefunction of the quantum system. More general equations are the convection–diffusion equation and Boltzmann transport equation, which have their roots in the continuity equation. Classical mechanics, including Newton's laws, Lagrange's equations, Hamilton's equations, etc., can be derived from the principle of stationary action, $\delta \mathcal{S} = 0$ with $\mathcal{S} = \int_{t_1}^{t_2} L\,\mathrm{d}t$, where $\mathcal{S}$ is the action: the integral of the Lagrangian of the physical system between two times t1 and t2. The kinetic energy of the system is T (a function of the rate of change of the configuration of the system), and the potential energy is V (a function of the configuration and its rate of change). The configuration of a system which has N degrees of freedom is defined by generalized coordinates q = (q1, q2, ..., qN). There are generalized momenta conjugate to these coordinates, p = (p1, p2, ..., pN), where $p_i = \partial L/\partial \dot{q}_i$. The action and Lagrangian both contain the dynamics of the system for all times. The term "path" simply refers to a curve traced out by the system in terms of the generalized coordinates in the configuration space, i.e. the curve q(t), parameterized by time (see also parametric equation for this concept). The action is a functional rather than a function, since it depends on the Lagrangian, and the Lagrangian depends on the path q(t), so the action depends on the entire "shape" of the path for all times (in the time interval from t1 to t2). Between two instants of time, there are infinitely many paths, but the one for which the action is stationary (to the first order) is the true path. The stationary value for the entire continuum of Lagrangian values corresponding to some path, not just one value of the Lagrangian, is required (in other words, it is not as simple as "differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima etc."; rather, this idea is applied to the entire "shape" of the function; see calculus of variations for more details on this procedure). [ 12 ] Notice that L is not the total energy E of the system, being a difference rather than a sum: $L = T - V$, while $E = T + V$. The following [ 13 ] [ 14 ] general approaches to classical mechanics are summarized below in the order of establishment. They are equivalent formulations.
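The stationary-action statement can be made concrete with a small numerical experiment: for a free particle (V = 0), the action evaluated on the straight-line path between two fixed endpoints is smaller than on any perturbed path with the same endpoints. A sketch, discretizing the integral on a time grid (all names and the specific perturbation are illustrative):

```python
import numpy as np

m = 1.0                        # particle mass (arbitrary units)
t = np.linspace(0.0, 1.0, 1001)

def action(q):
    """Discretized action S = integral of (1/2) m qdot^2 dt for a free particle."""
    qdot = np.gradient(q, t)
    return np.trapz(0.5 * m * qdot**2, t)

straight = t.copy()                    # q(0)=0, q(1)=1, constant velocity
wiggle = t + 0.1 * np.sin(np.pi * t)   # same endpoints, perturbed shape

print(action(straight))  # ~0.5: the true path is stationary (here, minimal)
print(action(wiggle))    # ~0.525: any perturbation raises the action
```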
Newton's laws are commonly used due to their simplicity, but Hamilton's and Lagrange's equations are more general, and their range can extend into other branches of physics with suitable modifications. In the Lagrangian formulation the dynamics follow from the action $\mathcal{S} = \int_{t_1}^{t_2} L\,\mathrm{d}t$; using the definition of generalized momentum, Hamilton's equations take the symmetric form $\dot{q}_i = \partial H/\partial p_i$, $\dot{p}_i = -\partial H/\partial q_i$, and the Hamiltonian as a function of generalized coordinates and momenta has the general form $H(\mathbf{q},\mathbf{p},t) = \mathbf{p}\cdot\dot{\mathbf{q}} - L$. Newton's laws of motion are low-speed limit solutions of relativity. Alternative formulations of Newtonian mechanics are Lagrangian and Hamiltonian mechanics. The laws can be summarized by two equations (since the 1st is a special case of the 2nd, zero resultant acceleration): $\mathbf{F} = \mathrm{d}\mathbf{p}/\mathrm{d}t$ and $\mathbf{F}_{ij} = -\mathbf{F}_{ji}$, where p = momentum of body, F_ij = force on body i by body j, and F_ji = force on body j by body i. For a dynamical system the two equations (effectively) combine into one, $\mathrm{d}\mathbf{p}/\mathrm{d}t = \mathbf{F}_E + \sum_{i\neq j}\mathbf{F}_{ij}$, in which F_E = resultant external force (due to any agent not part of the system). Body i does not exert a force on itself. From the above, any equation of motion in classical mechanics can be derived. Corollaries follow in mechanics and in fluid mechanics: equations describing fluid flow in various situations can be derived using the above classical equations of motion, and often conservation of mass, energy and momentum. Some of the more famous laws of nature are found in Isaac Newton's theories of (now) classical mechanics, presented in his Philosophiae Naturalis Principia Mathematica, and in Albert Einstein's theory of relativity. Special relativity: the two postulates of special relativity are not "laws" in themselves, but assumptions of their nature in terms of relative motion. They can be stated as "the laws of physics are the same in all inertial frames" and "the speed of light is constant and has the same value in all inertial frames". The said postulates lead to the Lorentz transformations – the transformation law between two frames of reference moving relative to each other. For any 4-vector this replaces the Galilean transformation law from classical mechanics. The Lorentz transformations reduce to the Galilean transformations for velocities much less than the speed of light c. The magnitudes of 4-vectors are invariants – not "conserved", but the same for all inertial frames (i.e. every observer in an inertial frame will agree on the same value); in particular, if A is the four-momentum, the magnitude yields the famous invariant equation for mass–energy and momentum conservation (see invariant mass), $E^2 = (pc)^2 + (mc^2)^2$, in which the (more famous) mass–energy equivalence E = mc² is a special case. General relativity: general relativity is governed by the Einstein field equations, which describe the curvature of space-time due to mass–energy equivalent to the gravitational field. Solving the equation for the geometry of space warped due to the mass distribution gives the metric tensor. Using the geodesic equation, the motion of masses falling along the geodesics can be calculated. Gravitoelectromagnetism: in a relatively flat spacetime due to weak gravitational fields, gravitational analogues of Maxwell's equations, the GEM equations, can be found, describing an analogous gravitomagnetic field. They are well established by the theory, and experimental tests form ongoing research.
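The invariance of four-vector magnitudes mentioned above can be verified numerically: boosting a four-momentum to another inertial frame changes E and p individually but leaves E² − (pc)² = (mc²)² unchanged. A sketch in one spatial dimension with c = 1 (the particle mass and frame velocities are arbitrary illustrative values):

```python
import math

def boost(E, p, v):
    """Lorentz boost of a (1+1)-dimensional four-momentum (E, p), c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (E - v * p), g * (p - v * E)

m, v_particle = 1.0, 0.6
g = 1.0 / math.sqrt(1.0 - v_particle**2)
E, p = g * m, g * m * v_particle  # four-momentum in the lab frame

for v_frame in (0.0, 0.3, -0.8):
    E2, p2 = boost(E, p, v_frame)
    print(f"frame v={v_frame:+.1f}: E={E2:.4f} p={p2:+.4f} "
          f"E^2 - p^2 = {E2 * E2 - p2 * p2:.6f}")  # always m^2 = 1
```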
[ 15 ] The field equations read $R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu\nu}$, where Λ = cosmological constant, R_μν = Ricci curvature tensor, T_μν = stress–energy tensor and g_μν = metric tensor; the geodesic equation is $\frac{\mathrm{d}^2 x^\lambda}{\mathrm{d}\tau^2} + \Gamma^\lambda_{\mu\nu}\frac{\mathrm{d}x^\mu}{\mathrm{d}\tau}\frac{\mathrm{d}x^\nu}{\mathrm{d}\tau} = 0$, where Γ is a Christoffel symbol of the second kind, containing the metric. With g the gravitational field and H the gravitomagnetic field, the solutions in these limits are Maxwell-like field equations in which ρ is the mass density and J is the mass current density or mass flux; the corresponding force law involves the rest mass m of the particle and the Lorentz factor γ. Kepler's laws, though originally discovered from planetary observations (also due to Tycho Brahe), are true for any central forces. [ 16 ] For two point masses, Newton's law of universal gravitation reads $\mathbf{F} = -\frac{Gm_1m_2}{|\mathbf{r}|^2}\hat{\mathbf{r}}$; for a non-uniform mass distribution of local mass density ρ(r) of a body of volume V, this becomes an integral of the same law over the volume. Kepler's first law states that the orbit is an ellipse described in polar form by $r = \frac{\ell}{1 + e\cos\theta}$, where e is the eccentricity of the elliptic orbit, of semi-major axis a and semi-minor axis b, and ℓ is the semi-latus rectum. This equation in itself is nothing physically fundamental; it is simply the polar equation of an ellipse in which the pole (origin of the polar coordinate system) is positioned at a focus of the ellipse, where the orbited star is. Kepler's second law states that equal areas are swept out in equal times, the constant areal velocity being $\mathrm{d}A/\mathrm{d}t = L/2m$, where L is the orbital angular momentum of the particle (i.e. planet) of mass m about the focus of the orbit; Kepler's third law relates the square of the orbital period to the cube of the semi-major axis, $T^2 = \frac{4\pi^2}{GM}a^3$, where M is the mass of the central body (i.e. star). Second law of thermodynamics: there are many statements of this law; perhaps the simplest is "the entropy of isolated systems never decreases", meaning reversible changes have zero entropy change, irreversible processes have positive entropy change, and impossible processes would have negative entropy change. Third law of thermodynamics: as the temperature of a system approaches absolute zero, its entropy approaches a constant minimum. Maxwell's equations give the time-evolution of the electric and magnetic fields due to electric charge and current distributions. Given the fields, the Lorentz force law is the equation of motion for charges in the fields. The four equations are Gauss's law for electricity, Gauss's law for magnetism, Faraday's law, and Ampère's circuital law (with Maxwell's correction). These equations can be modified to include magnetic monopoles, and are consistent with our observations of monopoles either existing or not existing; if they do not exist, the generalized equations reduce to the ones above; if they do, the equations become fully symmetric in electric and magnetic charges and currents. Indeed, there is a duality transformation where electric and magnetic charges can be "rotated into one another", and still satisfy Maxwell's equations. Pre-Maxwell laws: these laws were found before the formulation of Maxwell's equations. They are not fundamental, since they can be derived from Maxwell's equations; Coulomb's law can be found from Gauss's law (electrostatic form) and the Biot–Savart law can be deduced from Ampère's law (magnetostatic form). Lenz's law and Faraday's law can be incorporated into the Maxwell–Faraday equation. Nonetheless, they are still very effective for simple calculations. In optics, laws are classically based on a variational principle: light travels from one point in space to another in the shortest time. In geometric optics, laws are based on approximations in Euclidean geometry (such as the paraxial approximation). In physical optics, laws are based on physical properties of materials. In actuality, optical properties of matter are significantly more complex and require quantum mechanics. Quantum mechanics has its roots in postulates. This leads to results which are not usually called "laws", but hold the same status, in that all of quantum mechanics follows from them.
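Kepler's first law is captured by the polar equation above; given the semi-major axis a and eccentricity e, the semi-latus rectum is ℓ = a(1 − e²) and the orbital radius follows directly. A small sketch (Earth-like values, purely illustrative):

```python
import math

def orbit_radius(theta, a, e):
    """Kepler ellipse in polar form, r = l / (1 + e cos(theta)),
    with the focus (the star) at the origin and l = a (1 - e^2)."""
    l = a * (1.0 - e * e)  # semi-latus rectum
    return l / (1.0 + e * math.cos(theta))

a, e = 1.000, 0.0167                # semi-major axis in AU, Earth's eccentricity
print(orbit_radius(0.0, a, e))      # perihelion: a (1 - e) ~ 0.9833 AU
print(orbit_radius(math.pi, a, e))  # aphelion:   a (1 + e) ~ 1.0167 AU
```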
These postulates imply many other phenomena, e.g., uncertainty principles and the Pauli exclusion principle. Schrödinger equation (general form): describes the time dependence of a quantum mechanical system, $i\hbar\frac{\partial}{\partial t}|\psi\rangle = \hat{H}|\psi\rangle$. The Hamiltonian (in quantum mechanics) H is a self-adjoint operator acting on the state space, |ψ⟩ (see Dirac notation) is the instantaneous quantum state vector at time t, position r, i is the unit imaginary number, and ħ = h/2π is the reduced Planck constant. Planck–Einstein law: the energy of photons is proportional to the frequency of the light, $E = h\nu$ (the constant is the Planck constant, h). De Broglie wavelength: $\lambda = h/p$; this laid the foundations of wave–particle duality, and was the key concept in the Schrödinger equation. Heisenberg uncertainty principle: uncertainty in position multiplied by uncertainty in momentum is at least half of the reduced Planck constant, $\Delta x\,\Delta p \geq \hbar/2$, and similarly for time and energy; the uncertainty principle can be generalized to any pair of observables – see main article. Schrödinger equation (original form): written for the many-particle wavefunction, where r_i is the position of particle i, and s is the spin of the particle. There is no way to keep track of particles physically; labels are only used mathematically to prevent confusion. Applying electromagnetism, thermodynamics, and quantum mechanics to atoms and molecules, some laws of electromagnetic radiation and light follow. Chemical laws are those laws of nature relevant to chemistry. Historically, observations led to many empirical laws, though now it is known that chemistry has its foundations in quantum mechanics. Quantitative analysis: the most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics. Additional laws of chemistry elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation; we now know that the structural arrangement of these elements is also important. Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers; although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. The law of definite composition and the law of multiple proportions are the first two of the three laws of stoichiometry, the proportions by which the chemical elements combine to form chemical compounds. The third law of stoichiometry is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element. More modern laws of chemistry define the relationship between energy and its transformations, covering reaction kinetics and equilibria, thermochemistry, gas laws, and chemical transport. Whether or not natural selection is a "law of nature" is controversial among biologists.
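The Planck–Einstein relation and the de Broglie wavelength are simple enough to evaluate directly. A sketch computing the energy of a green photon and the wavelength of a slow electron (the constants are CODATA/SI values; the chosen wavelength and electron speed are merely illustrative):

```python
h = 6.62607015e-34      # Planck constant, J s (exact by SI definition)
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron rest mass, kg
eV = 1.602176634e-19    # joules per electronvolt

# Planck-Einstein: E = h * nu = h * c / lambda
lam_photon = 532e-9     # green laser light, m
E_photon = h * c / lam_photon
print(f"photon energy: {E_photon:.3e} J = {E_photon / eV:.2f} eV")  # ~2.33 eV

# de Broglie: lambda = h / p = h / (m v), non-relativistic electron
v = 1.0e6               # m/s, ~0.3% of c
lam_electron = h / (m_e * v)
print(f"electron de Broglie wavelength: {lam_electron:.3e} m")  # ~7.3e-10 m
```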
[ 17 ] [ 18 ] Henry Byerly , an American philosopher known for his work on evolutionary theory, discussed the problem of interpreting a principle of natural selection as a law. He suggested a formulation of natural selection as a framework principle that can contribute to a better understanding of evolutionary theory. [ 18 ] His approach was to express relative fitness , the propensity of a genotype to increase in proportionate representation in a competitive environment, as a function of adaptedness (adaptive design) of the organism. Some mathematical theorems and axioms are referred to as laws because they provide logical foundation to empirical laws. Examples of other observed phenomena sometimes described as laws include the Titius–Bode law of planetary positions, Zipf's law of linguistics, and Moore's law of technological growth. Many of these laws fall within the scope of uncomfortable science . Other laws are pragmatic and observational, such as the law of unintended consequences . By analogy, principles in other fields of study are sometimes loosely referred to as "laws". These include Occam's razor as a principle of philosophy and the Pareto principle of economics. The observation and detection of underlying regularities in nature date from prehistoric times – the recognition of cause-and-effect relationships implicitly recognises the existence of laws of nature. The recognition of such regularities as independent scientific laws per se , though, was limited by their entanglement in animism , and by the attribution of many effects that do not have readily obvious causes—such as physical phenomena—to the actions of gods , spirits, supernatural beings , etc. Observation and speculation about nature were intimately bound up with metaphysics and morality. In Europe, systematic theorizing about nature ( physis ) began with the early Greek philosophers and scientists and continued into the Hellenistic and Roman imperial periods, during which times the intellectual influence of Roman law increasingly became paramount. The formula "law of nature" first appears as "a live metaphor" favored by Latin poets Lucretius , Virgil , Ovid , Manilius , in time gaining a firm theoretical presence in the prose treatises of Seneca and Pliny . Why this Roman origin? According to [historian and classicist Daryn] Lehoux's persuasive narrative, [ 19 ] the idea was made possible by the pivotal role of codified law and forensic argument in Roman life and culture. For the Romans ... the place par excellence where ethics, law, nature, religion and politics overlap is the law court . When we read Seneca's Natural Questions , and watch again and again just how he applies standards of evidence, witness evaluation, argument and proof, we can recognize that we are reading one of the great Roman rhetoricians of the age, thoroughly immersed in forensic method. And not Seneca alone. Legal models of scientific judgment turn up all over the place, and for example prove equally integral to Ptolemy 's approach to verification, where the mind is assigned the role of magistrate, the senses that of disclosure of evidence, and dialectical reason that of the law itself. [ 20 ] The precise formulation of what are now recognized as modern and valid statements of the laws of nature dates from the 17th century in Europe, with the beginning of accurate experimentation and the development of advanced forms of mathematics. 
During this period, natural philosophers such as Isaac Newton (1642–1727) were influenced by a religious view – stemming from medieval concepts of divine law – which held that God had instituted absolute, universal and immutable physical laws. [ 21 ] [ 22 ] In chapter 7 of The World , René Descartes (1596–1650) described "nature" as matter itself, unchanging as created by God, thus changes in parts "are to be attributed to nature. The rules according to which these changes take place I call the 'laws of nature'." [ 23 ] The modern scientific method which took shape at this time (with Francis Bacon (1561–1626) and Galileo (1564–1642)) contributed to a trend of separating science from theology , with minimal speculation about metaphysics and ethics. ( Natural law in the political sense, conceived as universal (i.e., divorced from sectarian religion and accidents of place), was also elaborated in this period by scholars such as Grotius (1583–1645), Spinoza (1632–1677), and Hobbes (1588–1679).) The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from physis , the Greek word (translated into Latin as natura ) for nature . [ 24 ]
https://en.wikipedia.org/wiki/Scientific_law
Scientists Under Attack: Genetic Engineering in the Magnetic Field of Money (German: Gekaufte Wahrheit – Gentechnik im Magnetfeld des Geldes) [ 1 ] is a 2009 German documentary film by Bertram Verhaag. It alleges that the biotechnology industry was complicit in ruining the careers of Árpád Pusztai and Ignacio Chapela when they published research critical of genetic engineering. The film premiered at the 2009 International Documentary Film Festival Amsterdam. [ 2 ] The documentary interviewed three scientists (Árpád Pusztai, Nina Fedoroff and Ignacio Chapela) and an attorney (Andrew Kimbrell). Pusztai was a biochemist who went to the media with unpublished research claiming that a type of genetically modified potato suppressed the immune system and stunted growth when fed to rats. The resulting controversy led to him being fired from the Rowett Institute. Fedoroff is a highly decorated molecular biologist who is an external adviser to the US Department of State. Chapela is a professor at the University of California, Berkeley, and Kimbrell is the executive director of the Center for Food Safety, who sued the FDA in 1998 over its regulation of GM foods. The German ARD cultural magazine "titel thesen temperamente" broadcast a 6-minute review of the film. [ 3 ] The Bayerischer Rundfunk described the film as committed, partisan and disputatious. [ 4 ] KinoZeit calls it an ambitious documentary. [ 5 ] The documentary received 8 international prizes, including three for best documentary. [ 4 ] It won 1st prize at Indie Fest 2010 in the feature documentary category. [ 6 ]
https://en.wikipedia.org/wiki/Scientists_Under_Attack:_Genetic_Engineering_in_the_Magnetic_Field_of_Money
Scientists against Nuclear Arms [ 1 ] (SANA) was formed in 1981 by the physicist and peace activist Mike Pentz together with Steven Rose, both academics at the Open University, to oppose nuclear arms. [ citation needed ] SANA was one of the forerunner organisations of Scientists for Global Responsibility (SGR). [ citation needed ]
https://en.wikipedia.org/wiki/Scientists_against_Nuclear_Arms
Scientists for Global Responsibility [ 1 ] (SGR) in the United Kingdom promotes the ethical practice and use of science, design and technology. SGR is affiliated to the International Network of Engineers and Scientists for Global Responsibility (INES). It is an independent UK-based membership organisation of hundreds of natural scientists, social scientists, engineers, IT professionals and architects. In 2017 its partner organization ICAN (International Campaign to Abolish Nuclear Weapons) won the Nobel Peace Prize. [ 2 ] ICAN has promoted a Kurzgesagt YouTube video, endorsed by the International Committee of the Red Cross (ICRC), showing the consequences of a single atomic weapon exploded over a city. SGR's work is focused on four main issues: security and disarmament; climate change and energy, including nuclear power; who controls science and technology?; and emerging technologies. The main areas of concern are arms and arms control, including military involvement in UK universities; the effect of excessive greenhouse gas emissions on climate; the nature of war and reducing barbarity; topsoil and water shortages resulting from modern agricultural methods; depletion of fish species due to over-fishing; the continual spread of nuclear weapons; and reduction of the occurrence of serious nuclear accidents. [ 3 ] In 2019 SGR launched the journal Responsible Science. [ 4 ] SGR evaluates the risk of new science and new technological solutions to older science-based problems and threats, while recognizing the enormous contribution science, design and technology have made to civilisation and human well-being. [ 5 ] SGR promotes science, design and technology that contribute to peace, social justice and environmental sustainability.
https://en.wikipedia.org/wiki/Scientists_for_Global_Responsibility
Scigress , stylised SCiGRESS , is a software suite designed for molecular modeling , computational and experimental chemistry, drug design , and materials science . It is a successor to the Computer Aided Chemistry (CAChe) software and has been used to perform experiments on hazardous or novel biomolecules and proteins in silico . [ 1 ] [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Scigress
Scillitoxin (scillaine) is a chemical substance found in daffodils. [ 1 ] [ 2 ] It is a cardiac glucoside (a type of glycoside) [ 3 ] with effects similar to digitoxin. [ 4 ] The first (1889) edition of the Merck Index lists "Scilli-toxin (Scillain)" under the heading of "Squill (Scilla) preparations". [ 5 ] It was stated in 1929 that "Scillitoxin has not been chemically identified as a definite chemical entity". [ 6 ]
https://en.wikipedia.org/wiki/Scillitoxin
In condensed matter physics, scintillation (/ˈsɪntɪleɪʃən/ SIN-til-ay-shun) is the physical process where a material, called a scintillator, emits ultraviolet or visible light under excitation from high energy photons (X-rays or gamma rays) or energetic particles (such as electrons, alpha particles, neutrons, or ions). [ 1 ] [ 2 ] See scintillator and scintillation counter for practical applications. [ 3 ] [ 4 ] Scintillation is an example of luminescence, whereby light of a characteristic spectrum is emitted following the absorption of radiation. The scintillation process can be summarized in three main stages: conversion, transport and energy transfer to the luminescence center, and luminescence. [ 1 ] [ 2 ] [ 5 ] The emitted radiation is usually less energetic than the absorbed radiation, hence scintillation is generally a down-conversion process. The first stage of scintillation, conversion, is the process where the energy from the incident radiation is absorbed by the scintillator and highly energetic electrons and holes are created in the material. The energy absorption mechanism by the scintillator depends on the type and energy of radiation involved. For highly energetic photons such as X-rays (0.1 keV < $E_\gamma$ < 100 keV) and γ-rays ($E_\gamma$ > 100 keV), three types of interactions are responsible for the energy conversion process in scintillation: photoelectric absorption, [ 6 ] Compton scattering, [ 7 ] and pair production, [ 8 ] which only occurs when $E_\gamma$ > 1022 keV, i.e. when the photon has enough energy to create an electron–positron pair. These processes have different attenuation coefficients, which depend mainly on the energy of the incident radiation, the average atomic number of the material and the density of the material. Generally the absorption of high energy radiation is described by $I = I_0 e^{-\mu d}$, where $I_0$ is the intensity of the incident radiation, $d$ is the thickness of the material, and $\mu$ is the linear attenuation coefficient, which is the sum of the attenuation coefficients of the various contributions: $\mu = \mu_{pe} + \mu_{cs} + \mu_{pp} + \mu_{oc}$. At lower X-ray energies ($E_\gamma \lesssim$ 60 keV), the most dominant process is the photoelectric effect, where the photons are fully absorbed by bound electrons in the material, usually core electrons in the K- or L-shell of the atom, and then ejected, leading to the ionization of the host atom. The linear attenuation coefficient contribution for the photoelectric effect scales approximately as $\mu_{pe} \propto \rho\,Z^{n}/E_\gamma^{3.5}$, [ 6 ] [ 9 ] where $\rho$ is the density of the scintillator, $Z$ is the average atomic number, $n$ is a constant that varies between 3 and 4, and $E_\gamma$ is the energy of the photon. At low X-ray energies, scintillator materials with atoms with high atomic numbers and densities are favored for more efficient absorption of the incident radiation. At higher energies ($E_\gamma \gtrsim$ 60 keV) Compton scattering, the inelastic scattering of photons by bound electrons, often also leading to ionization of the host atom, becomes the more dominant conversion process.
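The exponential attenuation law lends itself to a quick numeric illustration: the transmitted fraction I/I₀ = exp(−μd) falls rapidly with thickness. A sketch with a placeholder attenuation coefficient (the value of μ below is illustrative, not a tabulated datum for any real material):

```python
import math

def transmitted_fraction(mu, d):
    """Fraction of incident photons surviving a thickness d of material
    with linear attenuation coefficient mu: I / I0 = exp(-mu * d)."""
    return math.exp(-mu * d)

mu = 0.3  # 1/cm, hypothetical linear attenuation coefficient at some E_gamma
for d in (1.0, 2.0, 5.0, 10.0):  # scintillator thickness in cm
    f = transmitted_fraction(mu, d)
    print(f"d = {d:4.1f} cm: transmitted {f:6.1%}, absorbed {1 - f:6.1%}")
```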
Unlike the photoelectric effect, the linear attenuation coefficient contribution for Compton scattering is independent of the atomic number of the atoms present in the crystal, but depends linearly on their density. [ 7 ] [ 9 ] At γ-ray energies higher than $E_\gamma$ > 1022 keV, i.e. energies higher than twice the rest-mass energy of the electron, pair production starts to occur. Pair production is the relativistic phenomenon where the energy of a photon is converted into an electron–positron pair. The created electron and positron then further interact with the scintillating material to generate energetic electrons and holes. Pair production contributes to the attenuation coefficient only above the threshold energy $2m_ec^2$, [ 8 ] [ 9 ] where $m_e$ is the rest mass of the electron and $c$ is the speed of light. Hence, at high γ-ray energies, the energy absorption depends both on the density and the average atomic number of the scintillator. In addition, unlike the photoelectric effect and Compton scattering, pair production becomes more probable as the energy of the incident photons increases, and it becomes the most dominant conversion process above $E_\gamma$ ~ 8 MeV. The $\mu_{oc}$ term includes other (minor) contributions, such as Rayleigh (coherent) scattering at low energies and photonuclear reactions at very high energies; however, the contribution from Rayleigh scattering is almost negligible and photonuclear reactions become relevant only at very high energies. After the energy of the incident radiation is absorbed and converted into so-called hot electrons and holes in the material, these energetic charge carriers interact with other particles and quasi-particles in the scintillator (electrons, plasmons, phonons), leading to an "avalanche event", where a great number of secondary electron–hole pairs are produced until the hot electrons and holes have lost sufficient energy. The large number of electrons and holes that result from this process then undergo thermalization, i.e. dissipation of part of their energy; this occurs via interaction with phonons for the electrons and via Auger processes for the holes. The average timescale for conversion, including energy absorption and thermalization, has been estimated to be on the order of 1 ps, [ 5 ] [ 10 ] which is much faster than the average decay time in photoluminescence. The second stage of scintillation is the charge transport of thermalized electrons and holes towards luminescence centers and the energy transfer to the atoms involved in the luminescence process. In this stage, the large number of electrons and holes that have been generated during the conversion process migrate inside the material. This is probably one of the most critical phases of scintillation, since it is generally in this stage where most loss of efficiency occurs due to effects such as trapping or non-radiative recombination. These are mainly caused by the presence of defects in the scintillator crystal, such as impurities, ionic vacancies, and grain boundaries. The charge transport can also become a bottleneck for the timing of the scintillation process.
The charge transport phase is also one of the least understood parts of scintillation and depends strongly on the type of material involved and its intrinsic charge conduction properties. Once the electrons and holes reach the luminescence centers, the third and final stage of scintillation occurs: luminescence. In this stage the electrons and holes are captured by the luminescence center, and then the electrons and holes recombine radiatively. [ 11 ] The exact details of the luminescence phase also depend on the type of material used for scintillation. For photons such as gamma rays, thallium activated NaI crystals (NaI(Tl)) are often used. For a faster response (but only 5% of the output) CsF crystals can be used. [ 12 ] : 211 In organic molecules scintillation is a product of π-orbitals. Organic materials form molecular crystals where the molecules are loosely bound by Van der Waals forces. The ground state of ¹²C is 1s² 2s² 2p². In valence bond theory, when carbon forms compounds, one of the 2s electrons is excited into the 2p state, resulting in a configuration of 1s² 2s¹ 2p³. To describe the different valencies of carbon, the four valence electron orbitals, one 2s and three 2p, are considered to be mixed or hybridized in several alternative configurations. For example, in a tetrahedral configuration the s and p³ orbitals combine to produce four hybrid orbitals. In another configuration, known as the trigonal configuration, one of the p-orbitals (say p_z) remains unchanged and three hybrid orbitals are produced by mixing the s, p_x and p_y orbitals. The orbitals that are symmetrical about the bonding axes and plane of the molecule (sp²) are known as σ-electrons and the bonds are called σ-bonds. The p_z orbital is called a π-orbital. A π-bond occurs when two π-orbitals interact. This occurs when their nodal planes are coplanar. In certain organic molecules π-orbitals interact to produce a common nodal plane. These form delocalized π-electrons that can be excited by radiation. The de-excitation of the delocalized π-electrons results in luminescence. The excited states of π-electron systems can be explained by the perimeter free-electron model (Platt 1949). This model is used for describing polycyclic hydrocarbons consisting of condensed systems of benzenoid rings in which no C atom belongs to more than two rings and every C atom is on the periphery. The ring can be approximated as a circle with circumference l. The wave-function of the electron orbital must satisfy the condition of a plane rotator, $\psi(x) = \psi(x + l)$, and the corresponding solutions to the Schrödinger wave equation are the levels $E_q = \frac{q^2 h^2}{2 m l^2}$, where q is the orbital ring quantum number (the number of nodes of the wave-function) and m is the electron mass. Since the electron can have spin up and spin down, and can rotate about the circle in both directions, all of the energy levels except the lowest are doubly degenerate. In the π-electronic energy level diagram of an organic molecule, absorption of radiation is followed by molecular vibration to the S₁ state. This is followed by a de-excitation to the S₀ state called fluorescence. The population of triplet states is also possible by other means. The triplet states decay with a much longer decay time than singlet states, which results in what is called the slow component of the decay process (the fluorescence process is called the fast component). Depending on the particular energy loss of a certain particle (dE/dx), the "fast" and "slow" states are occupied in different proportions.
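Under the perimeter free-electron model, the periodic boundary condition ψ(x) = ψ(x + l) quantizes the momentum as k = 2πq/l, which gives the level energies E_q = q²h²/(2ml²) stated above. A sketch evaluating the first few levels for a benzene-sized ring (the perimeter value is an illustrative assumption, not a measured quantity):

```python
import math

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # joules per electronvolt

def ring_level(q, l):
    """Perimeter free-electron model: E_q = q^2 h^2 / (2 m l^2)
    for an electron on a ring of circumference l."""
    return (q * h) ** 2 / (2.0 * m_e * l * l)

l = 8.4e-10  # m, roughly the perimeter of a benzene ring (illustrative)
for q in range(4):
    print(f"q = {q}: E = {ring_level(q, l) / eV:6.2f} eV")
# All levels except q = 0 are doubly degenerate (clockwise/anticlockwise
# rotation), and each orbital holds spin-up and spin-down electrons.
```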
The relative intensities in the light output of these states thus differ for different dE/dx. This property of scintillators allows for pulse shape discrimination: it is possible to identify which particle was detected by looking at the pulse shape. Of course, the difference in shape is visible in the trailing side of the pulse, since it is due to the decay of the excited states.
https://en.wikipedia.org/wiki/Scintillation_(physics)
A scintillator (/ˈsɪntɪleɪtər/ SIN-til-ay-ter) is a material that exhibits scintillation, the property of luminescence, [ 1 ] when excited by ionizing radiation. Luminescent materials, when struck by an incoming particle, absorb its energy and scintillate (i.e. re-emit the absorbed energy in the form of light). [ a ] Sometimes, the excited state is metastable, so the relaxation back down from the excited state to lower states is delayed (taking anywhere from a few nanoseconds to hours depending on the material). The process then corresponds to one of two phenomena: delayed fluorescence or phosphorescence. The correspondence depends on the type of transition and hence the wavelength of the emitted optical photon. A scintillation detector or scintillation counter is obtained when a scintillator is coupled to an electronic light sensor such as a photomultiplier tube (PMT), photodiode, or silicon photomultiplier. PMTs absorb the light emitted by the scintillator and re-emit it in the form of electrons via the photoelectric effect. The subsequent multiplication of those electrons (sometimes called photo-electrons) results in an electrical pulse which can then be analyzed and yield meaningful information about the particle that originally struck the scintillator. Vacuum photodiodes are similar but do not amplify the signal, while silicon photodiodes detect incoming photons by the excitation of charge carriers directly in the silicon. Silicon photomultipliers consist of an array of photodiodes which are reverse-biased with sufficient voltage to operate in avalanche mode, enabling each pixel of the array to be sensitive to single photons. [ citation needed ] The first device which used a scintillator was built in 1903 by Sir William Crookes and used a ZnS screen. [ 2 ] [ 3 ] The scintillations produced by the screen were visible to the naked eye if viewed through a microscope in a darkened room; the device was known as a spinthariscope. The technique led to a number of important discoveries but was obviously tedious. Scintillators gained additional attention in 1944, when Curran and Baker replaced the naked-eye measurement with the newly developed PMT. This was the birth of the modern scintillation detector. [ 2 ] Scintillators are used by the American government as Homeland Security radiation detectors. Scintillators can also be used in particle detectors, new energy resource exploration, X-ray security, nuclear cameras, computed tomography and gas exploration. Other applications of scintillators include CT scanners and gamma cameras in medical diagnostics, and screens in older style CRT computer monitors and television sets. Scintillators have also been proposed [ 4 ] as part of theoretical models for the harnessing of gamma-ray energy through the photovoltaic effect, for example in a nuclear battery. The use of a scintillator in conjunction with a photomultiplier tube finds wide use in hand-held survey meters used for detecting and measuring radioactive contamination and monitoring nuclear material. Scintillators generate light in fluorescent tubes, to convert the ultra-violet of the discharge into visible light. Scintillation detectors are also used in the petroleum industry as detectors for gamma ray logs. There are many desired properties of scintillators, such as high density, fast operation speed, low cost, radiation hardness, production capability, and durability of operational parameters.
High density reduces the material size of showers for high-energy γ-quanta and electrons. The range of Compton scattered photons for lower energy γ-rays is also decreased via high density materials. This results in high segmentation of the detector and leads to better spatial resolution. Usually high density materials have heavy ions in the lattice (e.g., lead , cadmium ), significantly increasing the contribution of photoelectric effect (~Z 4 ). The increased photo-fraction is important for some applications such as positron emission tomography . High stopping power for electromagnetic component of the ionizing radiation needs greater photo-fraction; this allows for a compact detector. High operating speed is needed for good resolution of spectra. Precision of time measurement with a scintillation detector is proportional to √ τ sc . Short decay times are important for the measurement of time intervals and for the operation in fast coincidence circuits. High density and fast response time can allow detection of rare events in particle physics. Particle energy deposited in the material of a scintillator is proportional to the scintillator's response. Charged particles, γ-quanta and ions have different slopes when their response is measured. Thus, scintillators could be used to identify various types of γ-quanta and particles in fluxes of mixed radiation. Another consideration of scintillators is the cost of producing them. Most crystal scintillators require high-purity chemicals and sometimes rare-earth metals that are fairly expensive. Not only are the materials an expenditure, but many crystals require expensive furnaces and almost six months of growth and analyzing time. Currently, other scintillators are being researched for reduced production cost. [ 5 ] Several other properties are also desirable in a good detector scintillator: a low gamma output (i.e., a high efficiency for converting the energy of incident radiation into scintillation photons), transparency to its own scintillation light (for good light collection), efficient detection of the radiation being studied, a high stopping power , good linearity over a wide range of energy, a short rise time for fast timing applications (e.g., coincidence measurements), a short decay time to reduce detector dead-time and accommodate high event rates, emission in a spectral range matching the spectral sensitivity of existing PMTs (although wavelength shifters can sometimes be used), an index of refraction near that of glass (≈1.5) to allow optimum coupling to the PMT window. Ruggedness and good behavior under high temperature may be desirable where resistance to vibration and high temperature is necessary (e.g., oil exploration). The practical choice of a scintillator material is usually a compromise among those properties to best fit a given application. Among the properties listed above, the light output is the most important, as it affects both the efficiency and the resolution of the detector (the efficiency is the ratio of detected particles to the total number of particles impinging upon the detector; the energy resolution is the ratio of the full width at half maximum of a given energy peak to the peak position, usually expressed in %). The light output is a strong function of the type of incident particle or photon and of its energy, which therefore strongly influences the type of scintillation material to be used for a particular application. 
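The energy resolution defined above (FWHM over peak position, in percent) is a one-line computation once a spectrum peak has been fitted; for a Gaussian photopeak, FWHM = 2√(2 ln 2)·σ ≈ 2.355σ. A minimal sketch (the fitted σ and peak energy below are hypothetical values, not measurements):

```python
import math

def energy_resolution(sigma, peak):
    """Resolution in percent: FWHM / peak position, with
    FWHM = 2 * sqrt(2 ln 2) * sigma for a Gaussian photopeak."""
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma
    return 100.0 * fwhm / peak

# Hypothetical 662 keV photopeak fitted with sigma = 20 keV:
print(f"{energy_resolution(20.0, 662.0):.1f}%")  # ~7.1%
```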
The presence of quenching effects results in reduced light output (i.e., reduced scintillation efficiency). Quenching refers to all radiationless de-excitation processes in which the excitation is degraded mainly to heat. [ 6 ] The overall signal production efficiency of the detector, however, also depends on the quantum efficiency of the PMT (typically ~30% at peak), and on the efficiency of light transmission and collection (which depends on the type of reflector material covering the scintillator and light guides, the length/shape of the light guides, any light absorption, etc.). The light output is often quantified as a number of scintillation photons produced per keV of deposited energy. Typical numbers are (when the incident particle is an electron): ≈40 photons/keV for NaI(Tl), ~10 photons/keV for plastic scintillators, and ~8 photons/keV for bismuth germanate (BGO). Scintillation detectors are generally assumed to be linear. This assumption is based on two requirements: (1) that the light output of the scintillator is proportional to the energy of the incident radiation; (2) that the electrical pulse produced by the photomultiplier tube is proportional to the emitted scintillation light. The linearity assumption is usually a good rough approximation, although deviations can occur (especially pronounced for particles heavier than the proton at low energies). [ 1 ] Resistance and good behavior under high-temperature, high-vibration environments is especially important for applications such as oil exploration (wireline logging, measurement while drilling). For most scintillators, light output and scintillation decay time depend on the temperature. [ 7 ] This dependence can largely be ignored for room-temperature applications since it is usually weak. The dependence on the temperature is also weaker for organic scintillators than it is for inorganic crystals, such as NaI-Tl or BGO. The strong dependence of decay time on temperature in the BGO scintillator is used for remote monitoring of temperature in vacuum environments. [ 8 ] The coupled PMTs also exhibit temperature sensitivity, and can be damaged if submitted to mechanical shock. Hence, high temperature rugged PMTs should be used for high-temperature, high-vibration applications. The time evolution of the number of emitted scintillation photons N in a single scintillation event can often be described by the linear superposition of one or two exponential decays. For two decays, we have the form [ 1 ] $N = A \exp\left(-\frac{t}{\tau_f}\right) + B \exp\left(-\frac{t}{\tau_s}\right)$, where τ_f and τ_s are the fast (or prompt) and the slow (or delayed) decay constants. Many scintillators are characterized by two time components: one fast (or prompt), the other slow (or delayed). While the fast component usually dominates, the relative amplitudes A and B of the two components depend on the scintillating material. Both of these components can also be a function of the energy loss dE/dx. In cases where this energy loss dependence is strong, the overall decay time constant varies with the type of incident particle. Such scintillators enable pulse shape discrimination, i.e., particle identification based on the decay characteristics of the PMT electric pulse. For instance, when BaF 2 is used, γ rays typically excite the fast component, while α particles excite the slow component: it is thus possible to identify them based on the decay time of the PMT signal.
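The two-component decay law above is easy to evaluate, and it shows why the tail of the pulse carries the particle-identification information: changing the relative weight B/A of the slow component changes the late-time light while leaving the leading edge nearly unchanged. A sketch with illustrative constants (the decay times and amplitudes are placeholders, not data for any particular scintillator):

```python
import math

def scint_photons(t, A, tau_f, B, tau_s):
    """N(t) = A exp(-t/tau_f) + B exp(-t/tau_s): fast + slow components."""
    return A * math.exp(-t / tau_f) + B * math.exp(-t / tau_s)

tau_f, tau_s = 5.0, 100.0  # ns, hypothetical fast and slow decay constants
gamma_like = (1.0, 0.05)   # (A, B): gammas excite mostly the fast component
alpha_like = (1.0, 0.40)   # alphas put more light into the slow component

for t in (2.0, 20.0, 200.0):  # ns
    ng = scint_photons(t, gamma_like[0], tau_f, gamma_like[1], tau_s)
    na = scint_photons(t, alpha_like[0], tau_f, alpha_like[1], tau_s)
    print(f"t = {t:5.1f} ns: gamma-like {ng:.4f}, alpha-like {na:.4f}")
# The late samples (the pulse tail) separate the two cases cleanly,
# which is the basis of pulse shape discrimination.
```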
Organic scintillators are aromatic hydrocarbon compounds which contain benzene ring structures interlinked in various ways. Their luminescence typically decays within a few nanoseconds. [ 9 ] Some organic scintillators are pure crystals. The most common types are anthracene [ 10 ] ( C 14 H 10 , decay time ≈30 ns), stilbene [ 10 ] ( C 14 H 12 , 4.5 ns decay time), and naphthalene ( C 10 H 8 , decay time of a few ns). They are very durable, but their response is anisotropic (which spoils energy resolution when the source is not collimated ), and they cannot be easily machined, nor can they be grown in large sizes; hence they are not very often used. Anthracene has the highest light output of all organic scintillators and is therefore chosen as a reference: the light outputs of other scintillators are sometimes expressed as a percentage of anthracene light output. [ 11 ] Liquid scintillators are liquid solutions of one or more organic scintillators in an organic solvent . The typical solutes are fluors such as p -terphenyl ( C 18 H 14 ), PBD ( C 20 H 14 N 2 O ), butyl PBD ( C 24 H 22 N 2 O ), PPO ( C 15 H 11 NO ), and wavelength shifters such as POPOP ( C 24 H 16 N 2 O ). The most widely used solvents are toluene , xylene , benzene , phenylcyclohexane , triethylbenzene , and decalin . Liquid scintillators are easily loaded with other additives, such as wavelength shifters to match the spectral sensitivity range of a particular PMT, or 10 B to increase the neutron detection efficiency of the scintillation counter itself (since 10 B has a high interaction cross section with thermal neutrons ). Newer approaches combine several solvents or load different metals to achieve identification of incident particles. [ 12 ] [ 13 ] For many liquids, dissolved oxygen can act as a quenching agent and lead to reduced light output, hence the necessity to seal the solution in an oxygen-free, airtight enclosure. [ 6 ] The term "plastic scintillator" typically refers to a scintillating material in which the primary fluorescent emitter, called a fluor, is suspended in the base , a solid polymer matrix. While this combination is typically accomplished through the dissolution of the fluor prior to bulk polymerization, the fluor is sometimes associated with the polymer directly, either covalently or through coordination, as is the case with many 6 Li plastic scintillators. Polyethylene naphthalate has been found to exhibit scintillation by itself without any additives and is expected to replace existing plastic scintillators due to higher performance and lower price. [ 14 ] The advantages of plastic scintillators include fairly high light output and a relatively quick signal, with a decay time of 2–4 nanoseconds, but perhaps the biggest advantage of plastic scintillators is their ability to be shaped, through the use of molds or other means, into almost any desired form with what is often a high degree of durability. [ 15 ] Plastic scintillators are known to show light output saturation when the energy density is large ( Birks' Law ). The most common bases used in plastic scintillators are the aromatic plastics, polymers with aromatic rings as pendant groups along the polymer backbone, amongst which polyvinyltoluene (PVT) and polystyrene (PS) are the most prominent. While the base does fluoresce in the presence of ionizing radiation, its low yield and negligible transparency to its own emission make the use of fluors necessary in the construction of a practical scintillator.
[ 15 ] Aside from the aromatic plastics, the most common base is polymethylmethacrylate (PMMA), which carries two advantages over many other bases: high transparency to ultraviolet and visible light, and mechanical properties with higher durability with respect to brittleness. The lack of fluorescence associated with PMMA is often compensated through the addition of an aromatic co-solvent, usually naphthalene. A plastic scintillator based on PMMA in this way boasts transparency to its own radiation, helping to ensure uniform collection of light. [ 16 ] Other common bases include polyvinyl xylene (PVX), polymethyl, 2,4-dimethyl, and 2,4,5-trimethyl styrenes, polyvinyl diphenyl, polyvinyl naphthalene, polyvinyl tetrahydronaphthalene, and copolymers of these and other bases. [ 15 ] Fluors, also known as luminophors, absorb the scintillation of the base and then emit at longer wavelengths, effectively converting the ultraviolet radiation of the base into the more easily transferred visible light. Further increasing the attenuation length can be accomplished through the addition of a second fluor, referred to as a spectrum shifter or converter, often resulting in the emission of blue or green light. Common fluors include polyphenyl hydrocarbons, oxazole and oxadiazole aryls, especially n-terphenyl (PPP), 2,5-diphenyloxazole (PPO), 1,4-di-(5-phenyl-2-oxazolyl)-benzene (POPOP), 2-phenyl-5-(4-biphenylyl)-1,3,4-oxadiazole (PBD), and 2-(4'-tert-butylphenyl)-5-(4''-biphenylyl)-1,3,4-oxadiazole (B-PBD). [ 17 ] Inorganic scintillators are usually crystals grown in high temperature furnaces , for example, alkali metal halides , often with a small amount of activator impurity. The most widely used is NaI(Tl) ( thallium -doped sodium iodide ); its scintillation light is blue. Other inorganic alkali halide crystals are: CsI(Tl) , CsI(Na) , CsI (pure), CsF , KI(Tl) , LiI(Eu) . Some non-alkali crystals include: BGO , BaF 2 , CaF 2 (Eu) , ZnS(Ag) , CaWO 4 , CdWO 4 , YAG(Ce) ( Y 3 Al 5 O 12 (Ce) ), GSO , LSO , GAGG:Ce . (For more examples, see also phosphors .) [ 18 ] Newly developed products include LaCl 3 (Ce) , lanthanum chloride doped with cerium, as well as cerium-doped lanthanum bromide , LaBr 3 (Ce) . They are both very hygroscopic (i.e., damaged when exposed to moisture in the air) but offer excellent light output and energy resolution (63 photons/keV γ for LaBr 3 (Ce) versus 38 photons/keV γ for NaI(Tl) ), a fast response (16 ns for LaBr 3 (Ce) versus 230 ns for NaI(Tl) [ 10 ] ), excellent linearity, and a very stable light output over a wide range of temperatures. In addition, LaBr 3 (Ce) offers a higher stopping power for γ rays (density of 5.08 g/cm 3 versus 3.67 g/cm 3 for NaI(Tl) [ 10 ] ). LYSO ( Lu 1.8 Y 0.2 SiO 5 (Ce) ) has an even higher density (7.1 g/cm 3 , comparable to BGO ), is non-hygroscopic, and has a higher light output than BGO (32 photons/keV γ), in addition to being rather fast (41 ns decay time versus 300 ns for BGO ). A disadvantage of some inorganic crystals, e.g., NaI, is their hygroscopicity, a property which requires them to be housed in an airtight container to protect them from moisture. CsI(Tl) and BaF 2 are only slightly hygroscopic and do not usually need protection. CsF, NaI(Tl) , LaCl 3 (Ce) , and LaBr 3 (Ce) are hygroscopic, while BGO , CaF 2 (Eu) , LYSO , and YAG(Ce) are not. Inorganic crystals can be cut to small sizes and arranged in an array configuration so as to provide position sensitivity.
Such arrays are often used in medical physics or security applications to detect X-rays or γ rays: high- Z , high-density materials (e.g. LYSO, BGO) are typically preferred for this type of application. Scintillation in inorganic crystals is typically slower than in organic ones, ranging from 1.48 ns for ZnO(Ga) to 9000 ns for CaWO 4 . [ 10 ] Exceptions are CsF (~5 ns), fast BaF 2 (0.7 ns; the slow component is at 630 ns), as well as the newer products ( LaCl 3 (Ce) , 28 ns; LaBr 3 (Ce) , 16 ns; LYSO , 41 ns). For imaging applications, one advantage of inorganic crystals is their very high light yield; yields above 100,000 photons/MeV at 662 keV have recently been reported for LuI 3 (Ce) , SrI 2 (Eu) , and Cs 2 HfCl 6 . Many semiconductor scintillator phosphors are known, such as ZnS(Ag) (mentioned in the history section), CdS(Ag), ZnO(Zn), ZnO(Ga), CdS(In), ZnSe(O), and ZnTe(O), but none of these are available as single crystals. CdS(Te) and ZnSe(Te) have been commercially available in single crystal form, but their luminosity is partially quenched at room temperature. [ 19 ] GaAs(Si,B) is a recently discovered cryogenic semiconductor scintillator with high light output in the infrared and apparently no afterglow. In combination with ultra-low-noise cryogenic photodetectors , it is the target in experiments to detect rare, low-energy electronic excitations from interacting dark matter. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] Gaseous scintillators consist of nitrogen and the noble gases helium , argon , krypton , and xenon , with helium and xenon receiving the most attention. The scintillation process is due to the de-excitation of single atoms excited by the passage of an incoming particle. This de-excitation is very rapid (~1 ns), so the detector response is quite fast. Coating the walls of the container with a wavelength shifter is generally necessary, as those gases typically emit in the ultraviolet while PMTs respond better in the visible blue-green region. In nuclear physics, gaseous detectors have been used to detect fission fragments or heavy charged particles . [ 27 ] The most common glass scintillators are cerium -activated lithium or boron silicates . Since both lithium and boron have large neutron cross-sections , glass detectors are particularly well suited to the detection of thermal (slow) neutrons . Lithium is more widely used than boron since it has a greater energy release on capturing a neutron and therefore greater light output. Glass scintillators are, however, sensitive to electrons and γ rays as well (pulse height discrimination can be used for particle identification). Being very robust, they are also well-suited to harsh environmental conditions. Their response time is ≈10 ns; their light output is, however, low, typically ≈30% of that of anthracene. [ 11 ] Scintillation properties of organic-inorganic methylammonium (MA) lead halide perovskites under proton irradiation were first reported by Shibuya et al. in 2002 [ 28 ] and the first γ-ray pulse height spectrum, although still with poor energy resolution, was reported on ( (C 6 H 5 (CH 2 ) 2 NH 3 ) 2 PbBr 4 ) by van Eijk et al. in 2008 . [ 29 ] Birowosuto et al. [ 30 ] studied the scintillation properties of 3-D and 2-D layered perovskites under X-ray excitation. MAPbBr 3 ( CH 3 NH 3 PbBr 3 ) emits at 550 nm and MAPbI 3 ( CH 3 NH 3 PbI 3 ) at 750 nm, which is attributed to exciton emission near the band gap of the compounds.
In this first generation of Pb-halide perovskites, the emission is strongly quenched at room temperature and fewer than 1,000 ph/MeV survive. At 10 K, however, intense emission is observed, with reported yields of up to 200,000 ph/MeV. [ 30 ] The quenching is attributed to the small e-h binding energy in the exciton, which decreases from Cl to Br to I . [ 31 ] Interestingly, one may replace the organic MA group with Cs+ to obtain fully inorganic CsPbX 3 halide perovskites. Depending on the Cl, Br, I content, the triplet X-ray excited exciton emission can be tuned from 430 nm to 700 nm . [ 32 ] One may also dilute Cs with Rb to obtain similar tuning. These recent developments demonstrate that the organic-inorganic and all-inorganic Pb-halide perovskites have various interesting scintillation properties. However, the recent two-dimensional perovskite single crystals, with light yields between 10,000 and 40,000 ph/MeV and decay times below 10 ns at room temperature, [ 30 ] may be more favorable, as their Stokes shifts of up to 200 nm are much larger than those of CsPbBr 3 quantum dot scintillators; this is essential to prevent self-reabsorption in scintillators. More recently, a new material class called 0D organic metal halide hybrids (OMHHs), an extension of the perovskite materials, was first reported by Professor Biwu Ma's research group. [ 33 ] This class of materials exhibits strong exciton binding of hundreds of meV, resulting in a photoluminescence quantum efficiency of almost unity. Their large Stokes shift and freedom from reabsorption make them desirable. [ 33 ] Their potential applications for scintillators have been reported by the same group, and others. [ 34 ] [ 35 ] In 2020, (C 38 H 34 P 2 )MnBr 4 was reported to have a light yield of up to 80,000 photons/MeV despite its low Z compared to traditional all-inorganic scintillators. [ 34 ] Impressive light yields from other 0D OMHHs have been reported. There is great potential to realize a new generation of scintillators from this material class; however, they are limited by their relatively long response times (on the order of microseconds), which is an area of intense research. Transitions made by the free valence electrons of the molecules are responsible for the production of scintillation light in organic crystals. [ 9 ] These electrons are associated with the whole molecule rather than any particular atom and occupy the so-called π molecular orbitals . The ground state S 0 is a singlet state above which are the excited singlet states (S * , S ** , ...), the lowest triplet state (T 0 ), and its excited levels (T * , T ** , ...). A fine structure corresponding to molecular vibrational modes is associated with each of those electron levels. The energy spacing between electron levels is ≈1 eV; the spacing between the vibrational levels is about 1/10 of that for electron levels. [ 36 ] An incoming particle can excite either an electron level or a vibrational level. The singlet excitations immediately decay (< 10 ps) to the S * state without the emission of radiation (internal degradation). The S * state then decays to the ground state S 0 (typically to one of the vibrational levels above S 0 ) by emitting a scintillation photon . This is the prompt component, or fluorescence . The transparency of the scintillator to the emitted photon is due to the fact that the energy of the photon is less than that required for an S 0 → S * transition (the transition usually being to a vibrational level above S 0 ).
[ 36 ] When one of the triplet states is excited, it immediately decays to the T 0 state with no emission of radiation (internal degradation). Since the T 0 → S 0 transition is very improbable, the T 0 state instead decays by interacting with another T 0 molecule: [ 36 ] $T_0 + T_0 \rightarrow S^* + S_0 + \text{photons}$, leaving one of the molecules in the S * state, which then decays to S 0 with the release of a scintillation photon. Since the T 0 -T 0 interaction takes time, the scintillation light is delayed: this is the slow or delayed component (corresponding to delayed fluorescence). Sometimes a direct T 0 → S 0 transition occurs (also delayed), corresponding to the phenomenon of phosphorescence . Note that the observational difference between delayed fluorescence and phosphorescence is the difference in the wavelengths of the emitted optical photon in an S * → S 0 transition versus a T 0 → S 0 transition. Organic scintillators can be dissolved in an organic solvent to form either a liquid or plastic scintillator. The scintillation process is the same as described for organic crystals (above); what differs is the mechanism of energy absorption: energy is first absorbed by the solvent, then passed onto the scintillation solute (the details of the transfer are not clearly understood). [ 36 ] The scintillation process in inorganic materials is due to the electronic band structure found in crystals and is not molecular in nature, as is the case with organic scintillators. [ 37 ] An incoming particle can excite an electron from the valence band to either the conduction band or the exciton band (located just below the conduction band and separated from the valence band by an energy gap ). This leaves an associated hole behind in the valence band. Impurities create electronic levels in the forbidden gap . The excitons are loosely bound electron-hole pairs which wander through the crystal lattice until they are captured as a whole by impurity centers. The latter then rapidly de-excite by emitting scintillation light (fast component). The activator impurities are typically chosen so that the emitted light is in the visible range or near-UV, where photomultipliers are effective. The holes associated with electrons in the conduction band are independent of the latter. Those holes and electrons are captured successively by impurity centers, exciting certain metastable states not accessible to the excitons. The delayed de-excitation of those metastable impurity states again results in scintillation light (slow component). BGO ( bismuth germanium oxide ) is a pure inorganic scintillator without any activator impurity. There, the scintillation process is due to an optical transition of the Bi 3+ ion, a major constituent of the crystal. [ 6 ] In the tungstate scintillators CaWO 4 and CdWO 4 , the emission is due to radiative decay of self-trapped excitons. The scintillation process in GaAs doped with silicon and boron impurities is different from conventional scintillators in that the silicon n -type doping provides a built-in population of delocalized electrons at the bottom of the conduction band. [ 38 ] Some of the boron impurity atoms reside on arsenic sites and serve as acceptors. [ 39 ] A scintillation photon is produced whenever an acceptor atom such as boron captures an ionization hole from the valence band and that hole recombines radiatively with one of the delocalized electrons.
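Since the delayed component is fed by the bimolecular T 0 + T 0 reaction above, its decay is not exponential. As a minimal kinetic sketch (with arbitrary illustrative rate constants, not values from the article), the rate equation dT/dt = −kT² can be solved in closed form:

```python
# Minimal kinetic sketch of the delayed component from triplet-triplet
# annihilation T0 + T0 -> S* + S0. With a bimolecular rate k, the triplet
# population obeys dT/dt = -k*T^2, so T(t) = T0 / (1 + k*T0*t) and the
# delayed light is proportional to k*T^2. k and T0 are arbitrary demo values.
def triplet_population(t, T0=1.0, k=0.05):
    return T0 / (1.0 + k * T0 * t)

def delayed_intensity(t, T0=1.0, k=0.05):
    T = triplet_population(t, T0, k)
    return k * T * T  # rate of S* production ~ rate of delayed photons

for t in (0, 10, 100, 1000):  # arbitrary time units
    print(f"t={t:5d}  delayed intensity={delayed_intensity(t):.2e}")
# Note the power-law (~1/t^2) tail at long times, unlike a pure exponential --
# this is why the delayed component is "slow".
```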
[ 40 ] Unlike many other semiconductors, the delocalized electrons provided by the silicon are not "frozen out" at cryogenic temperatures. Above the Mott transition concentration of 8 × 10 15 free carriers per cm 3 , the "metallic" state is maintained at cryogenic temperatures because mutual repulsion drives any additional electrons into the next higher available energy level, which is in the conduction band. [ 41 ] The spectrum of photons from this process is centered at 930 nm (1.33 eV), and there are three other emission bands centered at 860, 1070, and 1335 nm from other minor processes. [ 42 ] Each of these emission bands has a different luminosity and decay time. [ 43 ] The high scintillation luminosity is surprising because (1) with a refractive index of about 3.5, escape is inhibited by total internal reflection, and (2) experiments at 90 K report narrow-beam infrared absorption coefficients of several per cm. [ 44 ] [ 45 ] [ 46 ] Recent Monte Carlo and Feynman path integral calculations have shown that the high luminosity could be explained if most of the narrow-beam absorption is actually a novel optical scattering from the conduction electrons, with a cross section of about 5 × 10 −18 cm 2 , that allows scintillation photons to escape total internal reflection. [ 47 ] [ 48 ] This cross section is about 10 7 times larger than that of Thomson scattering but comparable to the optical cross section of the conduction electrons in a metal mirror. In gases, the scintillation process is due to the de-excitation of single atoms excited by the passage of an incoming particle (a very rapid process: ≈1 ns). Scintillation counters are usually not ideal for the detection of heavy ions , one reason being the quenching of light output at the very high ionization densities heavy ions produce. [ 49 ] This reduction in light output is stronger for organics than for inorganic crystals. Therefore, where needed, inorganic crystals, e.g. CsI(Tl) , ZnS(Ag) (typically used in thin sheets as α-particle monitors), CaF 2 (Eu) , should be preferred to organic materials. Typical applications are α- survey instruments , dosimetry instruments, and heavy ion dE / dx detectors. Gaseous scintillators have also been used in nuclear physics experiments. The detection efficiency for electrons is essentially 100% for most scintillators. But because electrons can undergo large-angle scattering (sometimes backscattering ), they can exit the detector without depositing their full energy in it. Backscattering is a rapidly increasing function of the atomic number Z of the scintillator material. Organic scintillators, having a lower Z than inorganic crystals, are therefore best suited for the detection of low-energy (< 10 MeV) beta particles . The situation is different for high-energy electrons: since they mostly lose their energy by bremsstrahlung at the higher energies, a higher- Z material is better suited for the detection of the bremsstrahlung photons and the production of the electromagnetic shower which they can induce. [ 50 ] High- Z materials, e.g. inorganic crystals, are best suited for the detection of gamma rays . The three basic ways that a gamma ray interacts with matter are: the photoelectric effect , Compton scattering , and pair production . The photon is completely absorbed in the photoelectric effect and in pair production, while only partial energy is deposited in any given Compton scattering. The cross section for the photoelectric process is proportional to Z 5 , that for pair production is proportional to Z 2 , whereas Compton scattering goes roughly as Z .
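As a back-of-the-envelope illustration of these Z-scalings (a sketch that uses only the rough proportionalities quoted above with approximate effective atomic numbers; absolute cross sections and energy dependence are ignored entirely):

```python
# Rough illustration of the Z-scalings quoted above: photoelectric ~ Z^5,
# pair production ~ Z^2, Compton ~ Z. Only *relative* ratios between two
# materials are meaningful here; the effective-Z values are approximate
# literature numbers and are assumptions of this sketch.
materials = {"plastic (PVT)": 5.7, "NaI(Tl)": 50, "BGO": 74}

ref = materials["plastic (PVT)"]
for name, z in materials.items():
    photo = (z / ref) ** 5
    pair = (z / ref) ** 2
    compton = z / ref
    print(f"{name:14s} photoelectric x{photo:10.0f}  pair x{pair:6.1f}  Compton x{compton:5.1f}")
# High-Z crystals gain enormously in the full-absorption (photoelectric)
# channel, which is why they are preferred for gamma spectroscopy.
```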
A high- Z material therefore favors the former two processes, enabling the detection of the full energy of the gamma ray. [ 50 ] If the gamma rays are at higher energies (> 5 MeV), pair production dominates. Since the neutron is not charged, it does not interact via the Coulomb force and therefore does not ionize the scintillation material. It must first transfer some or all of its energy via the strong force to a charged atomic nucleus . The positively charged nucleus then produces ionization . Fast neutrons (generally > 0.5 MeV [ 6 ] ) primarily rely on the recoil proton in (n,p) reactions; materials rich in hydrogen , e.g. plastic scintillators, are therefore best suited for their detection. Slow neutrons rely on nuclear reactions such as the (n,γ) or (n,α) reactions to produce ionization. Their mean free path is therefore quite large unless the scintillator material contains nuclides having a high cross section for these nuclear reactions, such as 6 Li or 10 B. Materials such as LiI(Eu) or glass silicates are therefore particularly well-suited for the detection of slow (thermal) neutrons. [ 51 ]
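To see why the mean free path matters, here is a minimal sketch using the standard relation λ = 1/(nσ). The 1% boron loading is an invented illustrative figure; the ~3840 barn thermal capture cross section of 10 B is a standard reference value.

```python
# Sketch of the standard mean-free-path relation lambda = 1/(n*sigma) for
# thermal-neutron capture, illustrating why loading a scintillator with 10B
# shortens the capture length. The loading fraction is an assumption of this
# sketch; the 10B(n,alpha) thermal cross section ~3840 barns is standard.
AVOGADRO = 6.022e23
BARN_CM2 = 1e-24

def capture_length_cm(atoms_per_cm3, sigma_barn):
    return 1.0 / (atoms_per_cm3 * sigma_barn * BARN_CM2)

# Suppose a liquid scintillator (~0.9 g/cm^3) is loaded with 1% boron by
# weight, of which ~19.9% is 10B by natural abundance.
density, w_boron, f_10b, molar_mass_b = 0.9, 0.01, 0.199, 10.8
n_10b = density * w_boron * f_10b * AVOGADRO / molar_mass_b
print(f"10B number density: {n_10b:.2e} /cm^3")
print(f"thermal capture length: {capture_length_cm(n_10b, 3840):.2f} cm")
# Even a small 10B admixture brings the capture length down to a few cm.
```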
https://en.wikipedia.org/wiki/Scintillator
Scintillons are small structures in the cytoplasm that produce light . Marine dinoflagellates can emit blue light at night by bioluminescence , a process also called "the phosphorescence of the seas"; among bioluminescent organisms, only dinoflagellates have scintillons. In the dinoflagellates, the biochemical reaction that produces light involves a luciferase -catalysed oxidation of a linear tetrapyrrole called luciferin . [ 1 ] The dinoflagellate Lingulodinium polyedra (previously called Gonyaulax polyedra ) also contains a second protein called luciferin binding protein (LBP) [ 2 ] that has been proposed to protect luciferin from non-luminescent oxidation. Luciferin is released from LBP by a decrease in pH , and the same decreased pH also activates the luciferase. [ 3 ] Light production can be stimulated by agitation of the surrounding seawater. The name scintillon was first used to describe cytoplasmic particles isolated from a bioluminescent species of dinoflagellate that were able to produce a flash of light in response to a decrease in pH. [ 4 ] Scintillons were first observed in L. polyedra by fluorescence microscopy, [ 5 ] where they appear as small blue dots close to the cell surface. This blue fluorescence is due to the presence of the bioluminescence reaction substrate, a naturally fluorescent molecule called luciferin. [ 6 ] When light production is stimulated by addition of dilute acid to the cells under the microscope, the site of light production corresponds to the location of the scintillons. Furthermore, the natural luciferin fluorescence is reduced after the light-producing reaction. [ 5 ] Cells observed under the electron microscope after a technique involving rapid freezing of the cells followed by substitution of water with a polymer (fast-freeze fixation/freeze substitution) contain a large number of electron-dense bodies around the cell periphery. [ 7 ] These structures correspond in size and location to the fluorescent bodies confirmed to be scintillons by their light emission, and they show colocalization of anti-luciferase and anti-LBP labeling, meaning that both bioluminescence proteins are found in the structures. [ 8 ] Scintillons appear as cytoplasmic drops hanging in the vacuolar space, as they are almost completely surrounded by the vacuolar membrane. This structure led to the proposal that a voltage-gated proton channel in the vacuolar membrane could allow an action potential to be propagated along the vacuolar membrane. [ 7 ] This would in turn let protons enter the cytoplasm around all the scintillons in the cell virtually simultaneously, producing an intense but brief flash of light. Voltage-gated proton channels were subsequently identified in a dinoflagellate, confirming their predicted existence. [ 9 ] Scintillons have been extensively purified from L. polyedra by centrifugation , and these purified scintillon preparations contain luciferase and luciferin binding protein as the only detectable protein components. [ 10 ] The amounts of luciferase, [ 11 ] LBP [ 12 ] and luciferin [ 13 ] all vary over the course of a daily (circadian) period, as does the number of scintillons in the cell.
[ 14 ] These observations suggest that the circadian control of bioluminescence involves a daily synthesis and degradation of luciferase and LBP. When synthesized, these two proteins aggregate together and migrate to the vacuole membrane, where LBP binds luciferin and the scintillons acquire the ability to produce light upon stimulation. Scintillons are not identical in different species. Scintillons isolated from dinoflagellates belonging to the genus Pyrocystis , such as P. lunula (previously Dissodinium lunula ) or P. noctiluca , are less dense than those of L. polyedra and do not contain LBP. [ 15 ] Little is known about the structure or composition of scintillons in species other than L. polyedra .
https://en.wikipedia.org/wiki/Scintillon
Sciography , also spelled sciagraphy or skiagraphy , is a branch of the science of perspective dealing with the projection of shadows, or the delineation of an object in perspective with its gradations of light and shade . The term comes from the Greek σκιά "shadow" and γράφειν graphein , "write". In architectural drawing , sciography is the study of shades and shadows cast by simple architectural forms on plane surfaces. In general sciography, the light source is imagined as the sun, inclined at 45 degrees to both the vertical and horizontal planes and coming from the left-hand side. The resultant shadow is then drawn.
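As a rough geometric sketch of this convention (the coordinate axes and the cube-diagonal ray direction are illustrative assumptions, chosen so that the ray's projections appear at 45 degrees in both plan and elevation):

```python
# Illustrative sketch of the conventional sciography light direction: parallel
# rays along a cube diagonal, whose projections appear at 45 degrees in both
# plan and elevation views. Axes and conventions are assumptions for the demo.
def shadow_of_point(x, y, z):
    """Project (x, y, z) onto the ground plane z = 0 along direction (1, 1, -1)."""
    # The ray reaches z = 0 after travel parameter t = z.
    return (x + z, y + z)

# The top of a 3-unit-high post at the origin casts its shadow 3 units away
# along each plan axis, so measured shadow offsets equal object heights.
print(shadow_of_point(0, 0, 3))  # -> (3, 3)
```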
https://en.wikipedia.org/wiki/Sciography
In molecular biology , a scissile bond is a covalent chemical bond that can be broken by an enzyme . Examples include the cleaved bond in the self-cleaving hammerhead ribozyme [ 1 ] and the peptide bond of a substrate cleaved by a peptidase. [ 2 ]
https://en.wikipedia.org/wiki/Scissile_bond
Scissors Modes are collective excitations in which two particle systems move with respect to each other while conserving their shape. They were first predicted to occur in deformed atomic nuclei by N. LoIudice and F. Palumbo, [ 1 ] who used a semiclassical Two Rotor Model whose solution required a realization of the O(4) algebra that was not known in mathematics. In this model, protons and neutrons were assumed to form two interacting rotors, to be identified with the blades of scissors. Their relative motion generates a magnetic dipole moment whose coupling with the electromagnetic field provides the signature of the mode. Such states were first experimentally observed by A. Richter and collaborators [ 2 ] in a rare earth nucleus, 156 Gd, and have since been systematically investigated, experimentally and theoretically, in all deformed atomic nuclei. Inspired by this, D. Guéry-Odelin and S. Stringari [ 3 ] predicted similar collective excitations in Bose-Einstein condensates in magnetic traps. In this case, one of the blades of the scissors must be identified with the moving cloud of atoms and the other with the trap. This excitation mode was also confirmed experimentally. [ 4 ] In close analogy, similar collective excitations have been predicted in a number of other systems, including metal clusters , [ 5 ] quantum dots , [ 6 ] Fermi condensates [ 7 ] and crystals, [ 8 ] but none of them has yet been experimentally investigated or found.
https://en.wikipedia.org/wiki/Scissors_Modes
Scitex Continuous Tone or Scitex CT is an image file format designed specifically for use on Scitex graphics processing equipment. [ 1 ] Its use is supported by numerous graphics suites and desktop publishing packages, such as Adobe Photoshop , [ 2 ] Adobe InDesign , [ 1 ] and QuarkXPress . The Scitex CT format enables high-quality images to be processed on Scitex computers. The format supports CMYK (Cyan, Magenta, Yellow, Key), RGB and grayscale images, but does not support alpha channels. Scitex CT images typically contain four color separations. [ 3 ] A pixel can be up to 128 bits in size (16 separations). Separations 1 through 4 are for the CMYK colors; separations 5 through 16 are reserved for possible future extensions of the format. The data for the first separation is followed by the data for the second separation, and so on for the rest. This image format is often used in jobs that require accurate color, for example in magazine advertisements and newspapers. [ 2 ]
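Given the sequential (planar) separation layout just described, unpacking the pixel data amounts to slicing the buffer into per-separation planes. The sketch below is a hedged illustration only: it assumes 8 bits per separation and a bare pixel buffer, ignoring the file header and any other structure the real format has.

```python
# Hedged sketch of the planar separation layout described above: all data for
# separation 1, then all data for separation 2, and so on. Assumes 8 bits per
# separation and a headerless buffer -- the real Scitex CT file has additional
# structure not covered here.
def split_separations(data: bytes, width: int, height: int, n_seps: int = 4):
    """Slice a planar buffer into one plane per separation (e.g., C, M, Y, K)."""
    plane_size = width * height
    assert len(data) == plane_size * n_seps, "buffer size mismatch"
    return [data[i * plane_size:(i + 1) * plane_size] for i in range(n_seps)]

# Tiny demo: a 2x2 image with 4 separations, values chosen for readability.
raw = bytes(range(16))  # sep0: 0..3, sep1: 4..7, sep2: 8..11, sep3: 12..15
for name, plane in zip("CMYK", split_separations(raw, 2, 2)):
    print(name, list(plane))
```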
https://en.wikipedia.org/wiki/Scitex_CT
The scleraxis protein is a member of the basic helix-loop-helix (bHLH) superfamily of transcription factors . [ 1 ] Currently, two genes ( SCXA and SCXB respectively) have been identified that code for identical scleraxis proteins. It is thought that early scleraxis-expressing progenitor cells lead to the eventual formation of tendon tissue and other muscle attachments. [ 1 ] Scleraxis is involved in mesoderm formation and is expressed in the syndetome (a collection of embryonic tissue that develops into tendon and blood vessels) of developing somites (primitive segments or compartments of embryos). [ 2 ] The syndetome location within the somite is determined by FGF secreted from the center of the myotome (a collection of embryonic tissue that develops into skeletal muscle ); the FGF then induces the adjacent anterior and posterior sclerotome (a collection of embryonic tissue that develops into the axial skeleton ) to adopt a tendon cell fate. This places future scleraxis-expressing cells between the two tissue types they will ultimately join. [ 3 ] Scleraxis expression is seen throughout the entire sclerotome (rather than just the sclerotome directly anterior and posterior to the myotome) when FGF8 is overexpressed, demonstrating that all sclerotome cells are capable of expressing scleraxis in response to FGF signaling. While the FGF interaction has been shown to be necessary for scleraxis expression, it is still unclear whether the FGF signaling pathway induces the syndetome to secrete scleraxis directly, or indirectly through a secondary signaling pathway. Most likely, the syndetomal cells, through careful reading of the FGF concentration (coming from the myotome), can precisely determine their location and begin expressing scleraxis. [ 3 ] Much of embryonic development follows this model of inducing specific cell fates through the reading of surrounding signaling-molecule concentration gradients. bHLH transcription factors have been shown to have a wide array of functions in developmental processes. [ 4 ] More precisely, they have critical roles in the control of cellular differentiation , proliferation and the regulation of oncogenesis . [ 4 ] [ 5 ] [ 6 ] To date, 242 eukaryotic proteins belonging to the HLH superfamily have been reported. They have varied expression patterns in all eukaryotes from yeast to humans. [ 7 ] Structurally, bHLH proteins are characterised by a "highly conserved domain containing a stretch of basic amino acids adjacent to two amphipathic α-helices separated by a loop". [ 8 ] [ 9 ] These helices have important functional properties, forming part of the DNA-binding and transcription-activating domains. With respect to scleraxis, the bHLH region spans amino acid residues 78 to 131. A proline-rich region is also predicted to lie between residues 161–170. A stretch of basic residues, which aids in DNA binding, is found closer to the N-terminal end of scleraxis. [ 1 ] [ 10 ] HLH proteins that lack this basic domain have been shown to negatively regulate the activities of bHLH proteins and are called inhibitors of differentiation (Id). [ 11 ] Basic HLH proteins normally function as dimers and bind to a specific hexanucleotide DNA sequence (CAANTG) known as an E-box , thus switching on the expression of various genes involved in cellular development and survival.
https://en.wikipedia.org/wiki/Scleraxis
Sclerobionts are organisms that live in or on any kind of hard substrate (Taylor and Wilson, 2003). A few examples of sclerobionts include Entobia borings, Gastrochaenolites borings, Talpina borings, serpulids , encrusting oysters , encrusting foraminiferans , Stomatopora bryozoans , and "Berenicea" bryozoans .
https://en.wikipedia.org/wiki/Sclerobiont
The sclerometer , also known as the Turner-sclerometer (from Ancient Greek : σκληρός meaning "hard"), is an instrument used by metallurgists , material scientists and mineralogists to measure the scratch hardness of materials. It was invented in 1896 by Thomas Turner (1861–1951), the first Professor of metallurgy in Britain, at the University of Birmingham . The Turner-sclerometer test consists of measuring the load required to make a scratch. [ 1 ] [ 2 ] In the test, a weighted diamond point is drawn, once forward and once backward, over the smooth surface of the material to be tested. The hardness number is the weight in grams required to produce a standard scratch. The scratch selected is one which is just visible to the naked eye as a dark line on a bright reflecting surface. It is also the scratch which can just be felt with the edge of a quill when the latter is drawn over the smooth surface at right angles to a series of such scratches produced by regularly increasing weights.
https://en.wikipedia.org/wiki/Sclerometer
A mechanical system is scleronomous if the equations of constraints do not contain time as an explicit variable and the equations of constraints can be described by generalized coordinates. Such constraints are called scleronomic constraints. The opposite of scleronomous is rheonomous . In 3-D space, a particle with mass $m$ and velocity $\mathbf{v}$ has kinetic energy $T = \tfrac{1}{2} m v^2$. Velocity is the derivative of position $\mathbf{r}$ with respect to time $t$. Using the chain rule for several variables, $\mathbf{v} = \frac{d\mathbf{r}}{dt} = \sum_i \frac{\partial \mathbf{r}}{\partial q_i}\,\dot{q}_i + \frac{\partial \mathbf{r}}{\partial t}$, where $q_i$ are the generalized coordinates . Therefore, $T = \frac{1}{2} m \left( \sum_i \frac{\partial \mathbf{r}}{\partial q_i}\,\dot{q}_i + \frac{\partial \mathbf{r}}{\partial t} \right)^2$. Rearranging the terms carefully, [ 1 ] $T = T_0 + T_1 + T_2$, with $T_0 = \frac{1}{2} m \left( \frac{\partial \mathbf{r}}{\partial t} \right)^2$, $T_1 = \sum_i m \,\frac{\partial \mathbf{r}}{\partial t} \cdot \frac{\partial \mathbf{r}}{\partial q_i}\,\dot{q}_i$, and $T_2 = \sum_{i,j} \frac{1}{2} m \,\frac{\partial \mathbf{r}}{\partial q_i} \cdot \frac{\partial \mathbf{r}}{\partial q_j}\,\dot{q}_i \dot{q}_j$, where $T_0$, $T_1$, $T_2$ are respectively homogeneous functions of degree 0, 1, and 2 in the generalized velocities. If the system is scleronomous, the position does not depend explicitly on time, $\frac{\partial \mathbf{r}}{\partial t} = 0$, so only the term $T_2$ survives: $T = T_2$. The kinetic energy is then a homogeneous function of degree 2 in the generalized velocities. A simple pendulum is a system composed of a weight and a string; the string is attached at the top end to a pivot and at the bottom end to a weight. Being inextensible, the string has constant length. Therefore, this system is scleronomous; it obeys the scleronomic constraint $\sqrt{x^2 + y^2} - L = 0$, where $(x, y)$ is the position of the weight and $L$ is the length of the string. For a more complicated example, assume the top end of the string is attached to a pivot point undergoing simple harmonic motion $x_t = x_0 \cos \omega t$, where $x_0$ is the amplitude, $\omega$ the angular frequency, and $t$ time. Although the top end of the string is not fixed, the length of the inextensible string is still constant: the distance between the top end and the weight must stay the same. Therefore, this system is rheonomous, as it obeys a constraint that depends explicitly on time: $\sqrt{(x - x_0 \cos \omega t)^2 + y^2} - L = 0$.
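As a quick check of the claim that a scleronomous system's kinetic energy is homogeneous of degree 2 in the generalized velocities, here is a small SymPy sketch for the simple pendulum (an illustration, not part of the original article):

```python
# SymPy check that the simple pendulum (scleronomic constraint) has a kinetic
# energy that is purely quadratic (degree 2) in the generalized velocity.
import sympy as sp

t, m, L = sp.symbols("t m L", positive=True)
theta = sp.Function("theta")(t)

# Position of the bob in terms of the single generalized coordinate theta;
# no explicit t appears outside of theta(t), so the constraint is scleronomic.
x = L * sp.sin(theta)
y = -L * sp.cos(theta)

v2 = sp.diff(x, t) ** 2 + sp.diff(y, t) ** 2
T = sp.simplify(sp.Rational(1, 2) * m * v2)
print(T)  # -> L**2*m*Derivative(theta(t), t)**2/2, i.e. T = T2 only

# Homogeneity check: scaling theta-dot by a factor k scales T by k**2.
k = sp.symbols("k", positive=True)
thetadot = sp.Derivative(theta, t)
print(sp.simplify(T.subs(thetadot, k * thetadot) / T))  # -> k**2
```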
https://en.wikipedia.org/wiki/Scleronomous
Sclerophyll is a type of vegetation adapted to long periods of dryness and heat. The plants feature hard leaves , short internodes (the distance between leaves along the stem) and leaf orientation that is parallel or oblique to direct sunlight. Sclerophyllous plants occur in many parts of the world, [ 1 ] but are most typical of areas with low rainfall or seasonal droughts, such as Australia, Africa, and western North and South America. They are prominent throughout Australia, parts of Argentina , the Cerrado biogeographic region of Bolivia , Paraguay and Brazil , and in the Mediterranean biomes that cover the Mediterranean Basin , California , Chile , and the Cape Province of South Africa . In the Mediterranean basin , holm oak , cork oak and olives are typical hardwood trees, and there are several species of pine in the vegetation zone. The shrub layer contains numerous herbs such as rosemary , thyme and lavender . In relation to the potential natural vegetation, around 2% of the Earth's land surface is covered by sclerophyll woodlands, and a total of 10% of all plant species on Earth live there. The word comes from the Greek sklēros (hard) and phyllon (leaf). The term was coined by Andreas Franz Wilhelm Schimper in 1898 (translated in 1903), originally as a synonym of xeromorph , but the two words were later differentiated. [ 2 ] Sclerophyll woody plants are characterized by their relatively small, stiff, leathery and long-lasting leaves. The sclerophyll vegetation is the result of an adaptation of the flora to the summer dry period of a Mediterranean-type climate . Plant species with this type of adaptation tend to be evergreen, long-lived and slow-growing, with no loss of leaves during the unfavorable season. As a result, the thickets that make up these ecosystems are of the persistent evergreen type, with a predominance of plants, even herbaceous ones, whose "hard" leaves are covered by a thick leathery layer, the cuticle , which prevents water loss during the dry season. The aerial and underground structures of these plants are modified to make up for water shortages that may affect their survival. The name sclerophyll derives from the highly developed sclerenchyma of the plant, which is responsible for the hardness or stiffness of the leaves. This structure of the leaves inhibits transpiration and thus prevents major water losses during the dry season. Most of the plant species in the sclerophyll zone are not only insensitive to summer drought; they have also evolved various strategies to adapt to frequent wildfires , heavy rainfall and nutrient deficiencies. [ 3 ] Typical sclerophyllous trees of the Palearctic floral region include the holm oak ( Quercus ilex ), myrtle ( Myrtus communis ), strawberry tree ( Arbutus unedo ), wild olive ( Olea europaea ), laurel ( Laurus nobilis ), mock privet ( Phillyrea latifolia ), the Italian buckthorn ( Rhamnus alaternus ), etc. [ 4 ] The sclerophyll regions are located in the outer subtropics bordering the temperate zone (also known as the warm-temperate zone). Accordingly, the annual average temperatures are relatively high at 12–24 °C (54–75 °F); an average of over 18 °C (64 °F) is reached for at least four months, eight to twelve months average over 10 °C (50 °F), and no month is below 5 °C (41 °F) on average. Frost and snow occur only occasionally, and the growing season lasts longer than 150 days and falls in the winter half-year.
[ 9 ] The lower limit of the moderate annual precipitation is 300 mm (12 in) ( semi-arid climate ) and the upper limit 900–1,000 mm (35–39 in). Generally, the summers are dry and hot, with a dry season of at most seven months but at least two to three months. The winters are rainy and cool. However, not all regions with sclerophyll vegetation feature the classic Mediterranean climate ; parts of eastern Italy, eastern Australia and eastern South Africa, which feature sclerophyll woodlands, tend to have uniform rainfall or even more summer-dominant rainfall, thereby falling under the humid subtropical climate zone ( Cfa / Cwa ). Furthermore, other areas with sclerophyll flora grade into the oceanic climate ( Cfb ); particularly the eastern parts of the Eastern Cape province in South Africa, and Tasmania , Victoria and southern New South Wales in Australia. [ 10 ] Sclerophyll plants are also found in areas with nutrient-poor and acidic soils, and soils with heavy concentrations of aluminum and other metals. Sclerophyll leaves transpire less and have a lower CO 2 uptake than malacophyllous or laurophyllous leaves. These lower transpiration rates may reduce the uptake of toxic ions and better provide for C- carboxylation under nutrient-poor conditions, particularly low availability of mineral nitrogen and phosphate. Sclerophyllous plants are found in tropical heath forests, which grow on nutrient-poor sandy soils in humid regions: in the Orinoco and Rio Negro basins of northern South America on quartz sand, in the kerangas forests of Borneo and on the Malay Peninsula , and in coastal sandy areas along the Gulf of Guinea in Gabon, Cameroon, and Côte d'Ivoire, as well as in eastern Australia. Since water drains rapidly through these soils, sclerophylly also protects plants against drought stress during dry periods. [ 11 ] [ 12 ] Sclerophylly's advantages in nutrient-poor conditions may be another factor in the prevalence of sclerophyllous plants in nutrient-poor areas of drier-climate regions, like much of Australia and the Cerrado of Brazil. [ 11 ] The zone of sclerophyll vegetation lies in the border area between the subtropics and the temperate zone , approximately between the 30th and 40th degrees of latitude (in the northern hemisphere up to the 45th degree). Its presence is largely limited to the coastal western sides of the continents, but it can nonetheless be typical of any region of a continent with scarce annual precipitation or frequent seasonal droughts and poor, heavily leached soils. [ 13 ] The sclerophyll zone often merges into temperate deciduous forests towards the poles, on the coasts also into temperate rainforests , and towards the equator into hot semi-deserts or deserts. The Mediterranean areas, which have a very high biodiversity , are under great pressure from the population. This has been especially true of the Mediterranean region since ancient times. Through overexploitation (logging, grazing, agricultural use) and frequent fires caused by people, the original forest vegetation has been converted. In extreme cases, the hard-leaf vegetation disappears completely and is replaced by open rock heaths .
Some sclerophyll areas are closer to the equator than the Mediterranean zone: for example, the interior of Madagascar , the dry half of New Caledonia , the lower edge areas of the Madrean pine-oak woodlands of the Mexican highlands between 800 and 1,800 metres (2,600 and 5,900 ft), or the roughly 2,000 m (6,600 ft) high plateaus of the Asir Mountains on the western edge of the Arabian Peninsula . [ 14 ] While the winter-rain areas of America, South Africa and Australia, with their unusually large variety of food crops , were ideal gathering areas for hunter-gatherers until European colonization , agriculture and cattle breeding had been spreading in the Mediterranean area since the Neolithic , permanently changing the face of the landscape. In the sclerophyll regions near the coast, permanent crops such as olive and wine cultivation established themselves; however, the characteristic degraded shrubland and shrub-heath landscape forms, the maquis and garrigue , are predominantly a result of grazing (especially by goats). In the course of the last millennia, the original vegetation in almost all areas of this vegetation zone has been greatly changed by the influence of humans. Where the plants have not been replaced by vineyards and olive groves , the maquis was the predominant form of vegetation in the Mediterranean. The maquis has in many places been degraded to the low shrub heath, the garrigue. Many plant species that are rich in aromatic oils belong to both vegetation communities. The diversity of the original sclerophyll vegetation in the world is high to extremely high (3,000–5,000 species per hectare). [ 15 ] Most areas of the Australian continent able to support woody plants are occupied by sclerophyll communities as forests , savannas , or heathlands . Common plants include the Proteaceae ( grevilleas , banksias and hakeas ), tea-trees , acacias , boronias , and eucalypts . The most common sclerophyll communities in Australia are savannas dominated by grasses with an overstorey of eucalypts and acacias. Acacia (particularly mulga ) shrublands also cover extensive areas. All the dominant overstorey acacia species and a majority of the understorey acacias have a scleromorphic adaptation in which the leaves have been reduced to phyllodes consisting entirely of the petiole . [ 16 ] Many plants of the sclerophyllous woodlands and shrublands also produce leaves unpalatable to herbivores through the inclusion of toxic and indigestible compounds, which assures the survival of these long-lived leaves. This trait is particularly noticeable in the eucalypt and Melaleuca species, which possess oil glands within their leaves that produce a pungent volatile oil making them unpalatable to most browsers. [ 17 ] These traits make the majority of woody plants in these woodlands largely unpalatable to domestic livestock. [ 18 ] It is therefore important from a grazing perspective that these woodlands support a more or less continuous layer of herbaceous ground cover dominated by grasses. Sclerophyll forests cover a much smaller area of the continent, being restricted to relatively high-rainfall locations. They have a eucalyptus overstory (10 to 30 metres), with the understory also being hard-leaved. Dry sclerophyll forests are the most common forest type on the continent, and although they may seem barren, dry sclerophyll forest is highly diverse. For example, a study of sclerophyll vegetation in Seal Creek, Victoria , found 138 species. [ 19 ] Even less extensive are wet sclerophyll forests.
They have a taller eucalyptus overstory than dry sclerophyll forests, 30 metres (98 ft) or more (typically mountain ash , alpine ash , rose gum , karri , messmate stringybark , or manna gum ), and a soft-leaved, fairly dense understory ( tree ferns are common). They require ample rainfall: at least 1,000 mm (40 inches). Sclerophyllous plants are long-established parts of their environment and are anything but newcomers. By the time of European settlement, sclerophyll forest accounted for the vast bulk of the forested areas. Most of the wooded parts of present-day Australia have become sclerophyll-dominated as a result of the extreme age of the continent combined with Aboriginal fire use. Deep weathering of the crust over many millions of years leached chemicals out of the rock, leaving Australian soils deficient in nutrients, particularly phosphorus . Such nutrient-deficient soils support non-sclerophyllous plant communities elsewhere in the world and did so over most of Australia prior to human arrival. However, such deficient soils cannot support the nutrient losses associated with frequent fires, and their vegetation is rapidly replaced with sclerophyllous species under traditional Aboriginal burning regimens. With the cessation of traditional burning, non-sclerophyllous species have re-colonized sclerophyll habitat in many parts of Australia. The presence of toxic compounds combined with a high carbon:nitrogen ratio makes the leaves and branches of scleromorphic species long-lived in the litter, and can lead to a large build-up of litter in woodlands. [ 20 ] [ 21 ] The toxic compounds of many species, notably Eucalyptus species, are volatile and flammable, and the presence of large amounts of flammable litter, coupled with an herbaceous understorey, encourages fire. [ 22 ] All the Australian sclerophyllous communities are liable to be burnt with varying frequencies, and many of the woody plants of these woodlands have developed adaptations to survive and minimise the effects of fire. [ 23 ] Sclerophyllous plants generally resist dry conditions well, making them successful in areas of seasonally variable rainfall. In Australia, however, they evolved in response to the low level of phosphorus in the soil; indeed, many native Australian plants cannot tolerate higher levels of phosphorus and will die if fertilised incorrectly. The leaves are hard due to lignin , which prevents wilting and allows plants to grow even when there is not enough phosphorus for substantial new cell growth. [ 24 ]
https://en.wikipedia.org/wiki/Sclerophyll
A scleroscope is a device used to measure rebound hardness . It consists of a steel ball dropped from a fixed height. The device was invented in 1907. As an improvement on this rough method, the Leeb Rebound Hardness Test , invented in the 1970s, uses the ratio of impact and rebound velocities (as measured by a magnetic inducer) to determine hardness.
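As a small illustration of the rebound-velocity principle: the standard Leeb hardness number is the rebound-to-impact velocity ratio scaled by 1000; the sample velocities below are invented for the demo.

```python
# Sketch of the Leeb rebound hardness principle: hardness is derived from the
# ratio of rebound to impact velocity. HL = (v_rebound / v_impact) * 1000 is
# the standard Leeb definition; the sample velocities are illustrative only.
def leeb_hardness(v_impact: float, v_rebound: float) -> float:
    return 1000.0 * v_rebound / v_impact

# A harder material dissipates less energy in the impact, so it rebounds faster.
print(leeb_hardness(v_impact=2.0, v_rebound=1.5))  # -> 750.0 HL (hard)
print(leeb_hardness(v_impact=2.0, v_rebound=0.8))  # -> 400.0 HL (softer)
```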
https://en.wikipedia.org/wiki/Scleroscope
Sclerotiorin is an antimicrobial compound isolated from Penicillium frequentans . It is an aldose reductase inhibitor ( IC 50 = 0.4 μM) as well as a reversible lipoxygenase inhibitor ( IC 50 = 4.2 μM). [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Sclerotiorin