Spontaneous emission is the process in which a quantum mechanical system (such as a molecule, an atom or a subatomic particle) transitions from an excited energy state to a lower energy state (e.g., its ground state) and emits a quantized amount of energy in the form of a photon. Spontaneous emission is ultimately responsible for most of the light we see all around us; it is so ubiquitous that there are many names given to what is essentially the same process. If atoms (or molecules) are excited by some means other than heating, the spontaneous emission is called luminescence. For example, fireflies are luminescent. And there are different forms of luminescence depending on how the excited atoms are produced (electroluminescence, chemiluminescence etc.). If the excitation is caused by the absorption of radiation, the spontaneous emission is called fluorescence. Sometimes molecules have a metastable level and continue to fluoresce long after the exciting radiation is turned off; this is called phosphorescence. Figurines that glow in the dark are phosphorescent. Lasers start via spontaneous emission, then during continuous operation work by stimulated emission. Spontaneous emission cannot be explained by classical electromagnetic theory and is fundamentally a quantum process. The first person to correctly predict the phenomenon of spontaneous emission was Albert Einstein in a series of papers starting in 1916, culminating in what is now called the Einstein A coefficient. [ 1 ] [ 2 ] Einstein's quantum theory of radiation anticipated ideas later expressed in quantum electrodynamics and quantum optics by several decades. [ 3 ] Later, after the formal discovery of quantum mechanics in 1926, the rate of spontaneous emission was accurately described from first principles by Dirac in his quantum theory of radiation, [ 4 ] the precursor to the theory which he later called quantum electrodynamics. [ 5 ] Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. [ 6 ] [ 7 ] In 1963, the Jaynes–Cummings model [ 8 ] was developed, describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave the nonintuitive prediction that the rate of spontaneous emission could be controlled depending on the boundary conditions of the surrounding vacuum field. These experiments gave rise to cavity quantum electrodynamics (CQED), the study of the effects of mirrors and cavities on radiative corrections. If a light source ('the atom') is in an excited state with energy E₂, it may spontaneously decay to a lower lying level (e.g., the ground state) with energy E₁, releasing the difference in energy between the two states as a photon. The photon will have angular frequency ω and an energy ℏω: E₂ − E₁ = ℏω, where ℏ is the reduced Planck constant. Note: ℏω = hν, where h is the Planck constant and ν is the linear frequency. The phase of the photon in spontaneous emission is random, as is the direction in which the photon propagates. This is not true for stimulated emission.
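As a quick numerical illustration of the photon-energy relation E₂ − E₁ = ℏω = hν quoted above (not part of the original article), the sketch below converts a transition energy into the frequency and wavelength of the emitted photon. The 2.105 eV energy is an illustrative round number close to the sodium D line, not a value taken from the article.

```python
import math

# Minimal sketch: photon energy E = h*nu = hbar*omega for a spontaneous-emission transition.
# The 2.105 eV transition energy below is an illustrative figure (roughly the sodium D line).
h = 6.626_070_15e-34      # Planck constant, J*s
hbar = h / (2 * math.pi)  # reduced Planck constant, J*s
c = 2.998e8               # speed of light, m/s
eV = 1.602_176_634e-19    # 1 eV in joules

E = 2.105 * eV            # energy released by the transition, E2 - E1
nu = E / h                # linear frequency, Hz
omega = E / hbar          # angular frequency, rad/s
wavelength = c / nu       # wavelength of the emitted photon, m

print(f"nu = {nu:.3e} Hz, omega = {omega:.3e} rad/s, lambda = {wavelength * 1e9:.1f} nm")
```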
An energy level diagram illustrating the process of spontaneous emission is shown below: If the number of light sources in the excited state at time t is given by N(t), the rate at which N decays is: dN(t)/dt = −A₂₁ N(t), where A₂₁ is the rate of spontaneous emission. In the rate equation A₂₁ is a proportionality constant for this particular transition in this particular light source. The constant is referred to as the Einstein A coefficient, and has units s⁻¹. [ 9 ] The above equation can be solved to give: N(t) = N(0) e^(−A₂₁ t) = N(0) e^(−Γ_rad t), where N(0) is the initial number of light sources in the excited state, t is the time and Γ_rad is the radiative decay rate of the transition. The number of excited states N thus decays exponentially with time, similar to radioactive decay. After one lifetime, the number of excited states decays to 36.8% of its original value (the 1/e time). The radiative decay rate Γ_rad is inversely proportional to the lifetime τ₂₁: Γ_rad = 1/τ₂₁. Spontaneous transitions were not explainable within the framework of the Schrödinger equation, in which the electronic energy levels were quantized, but the electromagnetic field was not. Given that the eigenstates of an atom are properly diagonalized, the overlap of the wavefunctions between the excited state and the ground state of the atom is zero. Thus, in the absence of a quantized electromagnetic field, the excited-state atom cannot decay to the ground state. In order to explain spontaneous transitions, quantum mechanics must be extended to a quantum field theory, wherein the electromagnetic field is quantized at every point in space. The quantum field theory of electrons and electromagnetic fields is known as quantum electrodynamics. In quantum electrodynamics (or QED), the electromagnetic field has a ground state, the QED vacuum, which can mix with the excited stationary states of the atom. [ 5 ] As a result of this interaction, the "stationary state" of the atom is no longer a true eigenstate of the combined system of the atom plus electromagnetic field. In particular, the electron transition from the excited state to the electronic ground state mixes with the transition of the electromagnetic field from the ground state to an excited state, a field state with one photon in it. Spontaneous emission in free space depends upon vacuum fluctuations to get started. [ 10 ] [ 11 ] Although there is only one electronic transition from the excited state to the ground state, there are many ways in which the electromagnetic field may go from the ground state to a one-photon state. That is, the electromagnetic field has infinitely more degrees of freedom, corresponding to the different directions in which the photon can be emitted. Equivalently, one might say that the phase space offered by the electromagnetic field is infinitely larger than that offered by the atom. This infinite degree of freedom for the emission of the photon results in the apparent irreversible decay, i.e., spontaneous emission.
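A minimal numerical sketch of the rate equation dN/dt = −A₂₁ N(t) and its exponential solution described above is given below. The A₂₁ value used is purely illustrative (of the order of a strong optical transition) and is not taken from the article.

```python
import math

# Sketch of the spontaneous-emission rate equation dN/dt = -A21 * N(t)
# and its solution N(t) = N(0) * exp(-Gamma_rad * t).
A21 = 6.0e7             # Einstein A coefficient, s^-1 (illustrative value)
gamma_rad = A21         # radiative decay rate for a purely radiative two-level emitter
tau = 1.0 / gamma_rad   # radiative lifetime tau_21, s

def excited_population(t, n0=1.0):
    """Fraction of emitters still in the excited state at time t."""
    return n0 * math.exp(-gamma_rad * t)

# After one lifetime the population has fallen to 1/e, i.e. about 36.8 % of its initial value.
print(f"tau = {tau * 1e9:.2f} ns, N(tau)/N(0) = {excited_population(tau):.3f}")
```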
In the presence of electromagnetic vacuum modes, the combined atom–vacuum system is described by the superposition of the wavefunctions of the excited-state atom with no photon and the ground-state atom with a single emitted photon: |ψ(t)⟩ = a(t) e^(−iω₀t) |e; 0⟩ + Σ_{k,s} b_ks(t) e^(−iω_k t) |g; 1_ks⟩, where |e; 0⟩ and a(t) are the atomic excited state–electromagnetic vacuum wavefunction and its probability amplitude, |g; 1_ks⟩ and b_ks(t) are the ground-state atom with a single photon (of mode ks) wavefunction and its probability amplitude, ω₀ is the atomic transition frequency, and ω_k = c|k| is the frequency of the photon. The sum is over k and s, which are the wavenumber and polarization of the emitted photon, respectively. As mentioned above, the emitted photon has a chance to be emitted with different wavenumbers and polarizations, and the resulting wavefunction is a superposition of these possibilities. To calculate the probability of finding the atom in the ground state (|b(t)|²), one needs to solve the time evolution of the wavefunction with an appropriate Hamiltonian. [ 4 ] To solve for the transition amplitude, one needs to average over (integrate over) all the vacuum modes, since one must consider the probabilities that the emitted photon occupies various parts of phase space equally. The "spontaneously" emitted photon has infinitely many different modes to propagate into, so the probability of the atom re-absorbing the photon and returning to the original state is negligible, making the atomic decay practically irreversible. Such irreversible time evolution of the atom–vacuum system is responsible for the apparent spontaneous decay of an excited atom. If one were to keep track of all the vacuum modes, the combined atom–vacuum system would undergo unitary time evolution, making the decay process reversible. Cavity quantum electrodynamics is one such system where the vacuum modes are modified, resulting in a reversible decay process; see also quantum revival. The theory of spontaneous emission under the QED framework was first calculated by Victor Weisskopf and Eugene Wigner in 1930 in a landmark paper. [ 12 ] [ 13 ] [ 14 ] The Weisskopf–Wigner calculation remains the standard approach to spontaneous radiation emission in atomic and molecular physics. [ 15 ] Dirac had also developed the same calculation a couple of years prior to the paper by Wigner and Weisskopf. [ 16 ] The rate of spontaneous emission (i.e., the radiative rate) can be described by Fermi's golden rule. [ 17 ] The rate of emission depends on two factors: an 'atomic part', which describes the internal structure of the light source, and a 'field part', which describes the density of electromagnetic modes of the environment. The atomic part describes the strength of a transition between two states in terms of transition moments.
In a homogeneous medium, such as free space, the rate of spontaneous emission in the dipole approximation is given by: Γ_rad(ω) = A₂₁ = ω³ n |μ₁₂|² / (3π ε₀ ℏ c³) = (4 α ω³ n / 3c²) |⟨1|r|2⟩|², where ω is the emission frequency, n is the index of refraction, μ₁₂ is the transition dipole moment, ε₀ is the vacuum permittivity, ℏ is the reduced Planck constant, c is the vacuum speed of light, and α is the fine-structure constant. The expression |⟨1|r|2⟩| stands for the definition of the transition dipole moment |μ₁₂| = |⟨1|d|2⟩| for the dipole moment operator d = q r, where q is the elementary charge and r stands for the position operator. (This approximation breaks down in the case of inner-shell electrons in high-Z atoms.) The above equation clearly shows that the rate of spontaneous emission in free space increases proportionally to ω³. In contrast with atoms, which have a discrete emission spectrum, quantum dots can be tuned continuously by changing their size. This property has been used to check the ω³ frequency dependence of the spontaneous emission rate as described by Fermi's golden rule. [ 18 ] In the rate equation above, it is assumed that decay of the number of excited states N only occurs under emission of light. In this case one speaks of full radiative decay, meaning that the quantum efficiency is 100%. Besides radiative decay, which occurs under the emission of light, there is a second decay mechanism: nonradiative decay. To determine the total decay rate Γ_tot, the radiative and nonradiative rates should be summed: Γ_tot = Γ_rad + Γ_nrad, where Γ_tot is the total decay rate, Γ_rad is the radiative decay rate and Γ_nrad the nonradiative decay rate. The quantum efficiency (QE) is defined as the fraction of emission processes in which emission of light is involved: QE = Γ_rad / (Γ_nrad + Γ_rad). In nonradiative relaxation, the energy is released as phonons, more commonly known as heat. Nonradiative relaxation occurs when the energy difference between the levels is very small, and such transitions typically occur on a much faster time scale than radiative transitions. For many materials (for instance, semiconductors), electrons move quickly from a high energy level to a meta-stable level via small nonradiative transitions and then make the final move down to the bottom level via an optical or radiative transition. This final transition is the transition over the bandgap in semiconductors. Large nonradiative transitions do not occur frequently because the crystal structure generally cannot support large vibrations without destroying bonds (which generally does not happen during relaxation). Meta-stable states form a very important feature that is exploited in the construction of lasers. Specifically, since electrons decay slowly from them, they can be deliberately piled up in this state without too much loss and then stimulated emission can be used to boost an optical signal. If emission leaves a system in an excited state, additional transitions can occur, leading to an atomic radiative cascade.
For example, if calcium atoms in a low-pressure atomic beam are excited by ultraviolet light from the 4 ¹S₀ ground state to the 6 ¹P₁ state, they can decay in three steps, first to 6 ¹S₀, then to 4 ¹P₁ and finally to the ground state. The photons from the second and third transitions have correlated polarizations, demonstrating quantum entanglement. [ 19 ] These correlations were used by John Clauser [ 20 ] : 880 [ 21 ] : 592 and Alain Aspect [ 22 ] in work that contributed to their 2022 Nobel Prize in Physics. [ 23 ]
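The following hedged sketch evaluates the free-space dipole formula quoted earlier, Γ_rad(ω) = ω³ n |μ₁₂|² / (3π ε₀ ℏ c³), and shows the ω³ scaling it implies. The transition dipole moment (one atomic unit, e·a₀) and the angular frequency are illustrative assumptions, not values taken from the article.

```python
import math

# Hedged sketch: evaluate the dipole-approximation spontaneous-emission rate
#   A21 = omega**3 * n * |mu12|**2 / (3 * pi * eps0 * hbar * c**3)
# for an assumed transition dipole moment of one atomic unit (e * a0).
eps0 = 8.854e-12           # vacuum permittivity, F/m
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c = 2.998e8                # speed of light, m/s
e = 1.602_176_634e-19      # elementary charge, C
a0 = 5.291_772e-11         # Bohr radius, m

def einstein_A(omega, mu12, n=1.0):
    """Spontaneous-emission rate in s^-1 for a dipole-allowed transition."""
    return omega**3 * n * mu12**2 / (3 * math.pi * eps0 * hbar * c**3)

omega = 1.55e16            # angular frequency, rad/s (deep-UV, illustrative)
mu12 = e * a0              # transition dipole moment, one atomic unit (illustrative)

print(f"A21(omega)   = {einstein_A(omega, mu12):.2e} s^-1")
# The omega**3 dependence: halving the frequency cuts the rate by a factor of 8.
print(f"A21(omega/2) = {einstein_A(omega / 2, mu12):.2e} s^-1")
```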
https://en.wikipedia.org/wiki/Spontaneous_emission
Spontaneous fission (SF) is a form of radioactive decay in which a heavy atomic nucleus splits into two or more lighter nuclei. In contrast to induced fission, there is no inciting particle to trigger the decay; it is a purely probabilistic process. Spontaneous fission is a dominant decay mode for superheavy elements, with nuclear stability generally falling as nuclear mass increases. It thus forms a practical limit to heavy element nucleon number. Heavier nuclides may be created instantaneously by physical processes, both natural (via the r-process) and artificial, though they rapidly decay to more stable nuclides. As such, apart from minor decay branches in primordial radionuclides, spontaneous fission is not observed in nature. Observed fission half-lives range from 60 nanoseconds (²⁵²Rf) to greater than the current age of the universe (²³²Th). [ 1 ] [ 2 ] : 16 Following the discovery of induced fission by Otto Hahn and Fritz Strassmann in 1938, Soviet physicists Georgy Flyorov and Konstantin Petrzhak began conducting experiments to explore the effects of incident neutron energy on uranium nuclei. Their equipment recorded fission fragments even when no neutrons were present to induce the decay, and the effect persisted even after the equipment was moved 60 meters underground into the tunnels of the Moscow Metro's Dinamo station in an effort to insulate it from the effects of cosmic rays. The discovery of induced fission itself had come as a surprise, and no other mechanism was known that could account for the observed decays. Such an effect could only be explained by spontaneous fission of the uranium nuclei without external influence. [ 3 ] Spontaneous fission arises as a result of competition between the attractive properties of the strong nuclear force and the mutual coulombic repulsion of the constituent protons. Nuclear binding energy increases in proportion to atomic mass number (A), while coulombic repulsion increases with the square of the proton number (Z). Thus, at high mass and proton numbers, coulombic repulsion overpowers the nuclear binding forces, and the nucleus is energetically more stable as two separate fragments than as a single bound system. [ 4 ] : 478–9 Spontaneous fission is usually a slow process, as the nucleus cannot simply jump to the lower energy (divided) state. Instead it must tunnel through a potential barrier, with a probability determined by the height of the barrier. Such a barrier is energetically possible for all A ≥ 93, though its height generally decreases with increasing Z, [ 4 ] : 433 and fission is only practically observed for A ≥ 232. [ 5 ] The stability of a nuclide against fission is expressed as the ratio of the Coulomb energy to the surface energy, which can be empirically estimated as the fissility parameter x: x ≈ Z² / (50.88 A (1 − η I²)), with I = (N − Z)/A and η ≈ 1.78. [ 6 ] : 3 For light nuclei, x is small and a sizeable fission barrier exists. As nuclear mass increases, so too does the fissility parameter, eventually approaching and exceeding unity, where stability against fission is lost altogether. [ 7 ] : 4 Shell effects and nucleon pairing effects may further affect observed half-lives. Decays of odd-A nuclides are hindered by 3–5 orders of magnitude compared to even–even nuclides. [ 8 ] : 4
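The following sketch evaluates the empirical fissility parameter quoted above for a few nuclides. The nuclides chosen are illustrative examples and are not taken from the article.

```python
# Sketch of the empirical fissility parameter:
#   x ~= Z**2 / (50.88 * A * (1 - eta * I**2)),  with I = (N - Z) / A and eta ~= 1.78
ETA = 1.78

def fissility(Z, N):
    """Approximate fissility parameter x for a nuclide with Z protons and N neutrons."""
    A = Z + N
    I = (N - Z) / A
    return Z**2 / (50.88 * A * (1 - ETA * I**2))

# Illustrative nuclides (not from the article): lead-208, uranium-238, flerovium-298.
for name, Z, N in [("208Pb", 82, 126), ("238U", 92, 146), ("298Fl", 114, 184)]:
    print(f"{name}: x = {fissility(Z, N):.2f}")

# x well below 1 implies a sizeable fission barrier; x approaching 1 means that
# stability against spontaneous fission is being lost.
```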
The barrier to fission is expected to be zero around A = 300, though an island of stability may exist centred around Z = 114, N = 184. [ 4 ] : 481–2 To date, true ab initio models describing the complete fission process are not possible. [ 8 ] : 3 Computational theories based on Hartree–Fock or density-functional theory approaches have been developed; however, computational complexity makes it difficult to reproduce the full behaviour. [ 2 ] : 35 The semi-classical liquid-drop model provides a primarily qualitative description of the phenomenology by treating the nucleus as a classical drop of liquid to which quantum corrections can be applied. This provides a useful conceptual picture that matches in part with experimental data, but it ignores much of the quantum nature of the system and fails at more rigorous predictions. In this model, as with a classical liquid drop, a "surface tension" term is introduced which promotes the spherical shape of the nucleus. Acting in opposition is a coulombic repulsion term, which acts to increase the distance between repelling proton pairs and thus promotes elongation of the nucleus into an oval shape. [ 6 ] : 3 As the deformation of the nucleus increases, and particularly for large nuclei owing to their stronger coulombic repulsion, the nucleus may find itself in a state where a thin 'neck' develops, forming a bridge between two clusters of nuclear matter; this may exceed the ability of the surface tension to restore the undeformed shape, and the nucleus eventually breaks into two fragments at the "scission point". [ 2 ] : 15 Introducing the effects of quantum tunnelling, the nucleus always has a chance to scission, a chance which increases with increasing deformation, and it may do so even if the deformation is insufficient to trigger rupture of the neck. After separation, both fragments are highly positively charged and therefore gain significant kinetic energy via their mutual repulsion as they accelerate away from each other. Shape isomers (also called fission isomers) are excited nuclear states existing before scission which may deviate from the spherical geometry, increasing nuclear deformation compared to the ground state without undergoing full fission. These states are 'metastable': a nucleus in this state may, on timescales between nanoseconds and microseconds, either decay back to the ground state via gamma emission, or tunnel through the scission barrier and break apart. Should the nucleus find itself in this state, either through quantum tunnelling or via random statistical fluctuation, the barrier for fission is much reduced, as shape isomers are always at a higher energy level than the ground state and therefore are no longer required to tunnel through the entire barrier. The resulting increased probability of fission reduces the effective half-life of the nuclide. [ 4 ] : 494–7 Triple-humped barriers have been suggested for some nuclear species such as ²²⁸Th, further reducing its observed half-life. [ 9 ] Fission fragments are usually neutron-rich and always generated in excited states. [ 2 ] : 3 Thus, daughter decays occur rapidly after scission. Decays occurring within 10⁻¹³ s of scission are termed "prompt" and are initially dominated by a series of neutron emissions, which remain the dominant decay mode until the fragment energy is reduced to the same order of magnitude as the neutron separation energy (approximately 7 MeV), when photon emission becomes competitive.
Below the neutron separation energy, gamma emission is dominant, characterised by a disordered spectrum of gamma energies with characteristic low-energy peaks corresponding to specific decays as the daughter descends the yrast line , [ 2 ] : 53–4 each decay carrying away excess angular momentum. [ 7 ] : 8 Average total prompt gamma emission is 30% higher from the lighter fragment compared to the heavier, implying the heavier fragment is created with higher initial angular momentum. [ 7 ] : 19 Finally, internal conversion and x-ray emission complete the prompt emissions. [ 2 ] : 53–4 Daughter products created by prompt decays are often unstable against beta-decay, and further photon and neutron emissions are also expected. Such emissions are termed 'delayed emissions' and take place with half-lives ranging from picoseconds to years. [ 2 ] : 3 As a result of the large number of decay pathways presented to a fissioning nucleus, there is a large variation in the final products. Fragment masses are normally distributed about two peaks centred at A ≈ 95 and A ≈ 140. [ 4 ] : 484 Spontaneous fission does not favour equal-mass fragments, and no convincing explanation has been found to explain this. [ 4 ] : 484 In rare instances (0.3%), three or more fission fragments may be created. [ 10 ] Ternary products are usually alpha-particles, though can be as massive as oxygen nuclei. [ 2 ] : 46 Total energy release across all products is approximately 200 MeV , [ 6 ] : 4 mostly observed as kinetic energy of the fission fragments, with the lighter fragment receiving the larger proportion of energy. [ 4 ] : 491–2 For a given decay path, the number of emitted neutrons is not consistent, and instead follows a gaussian distribution. The distribution about the average, however, is consistent across all decay paths. [ 4 ] : 486 Prompt neutrons are emitted with energies approximated by (but not precisely fitting) a Maxwell distribution , [ 7 ] : 17–8 peaking between 0.5 and 1 MeV, with an average energy of 2 MeV and maximum energy of approximately 10 MeV . [ 11 ] : 4–5 Prompt gamma emission constitutes a further 8 MeV, while beta decay and delayed-gammas contribute a further 19 MeV and 7 MeV respectively. [ 4 ] : 492 Less than 1% of emitted neutrons are emitted as delayed neutrons. [ 12 ] The most common application for spontaneous fission is as neutron source for further use. These neutrons may be used for applications such as neutron imaging , or may drive additional nuclear reactions, including initiating induced fission of a target as is common in nuclear reactors and nuclear weapons . In crystals containing high proportions of uranium, fission products generated via spontaneous fission produce damage trails as the fragments recoil through the crystal structure. The number of trails, or fission tracks, may be used to estimate the age of a sample via fission track dating .
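A hedged sketch of the Maxwellian shape often used to approximate the prompt-neutron spectrum described above is given below. For this functional form the mean energy is 3T/2 and the most probable energy is T/2, so a quoted average of about 2 MeV implies T ≈ 1.33 MeV and a peak near 0.67 MeV, consistent with the quoted 0.5–1 MeV peak range. The temperature parameter here is a derived assumption, not a value from the article.

```python
import math

# Maxwellian approximation to the prompt-neutron energy spectrum: N(E) ~ sqrt(E) * exp(-E / T).
T = 2.0 * 2.0 / 3.0   # MeV; chosen so that the mean energy 3T/2 equals 2 MeV

def spectrum(E):
    """Unnormalised Maxwellian prompt-neutron spectrum, E in MeV."""
    return math.sqrt(E) * math.exp(-E / T)

peak = spectrum(T / 2)  # intensity at the most probable energy, T/2 ~= 0.67 MeV
for E in (0.1, 0.5, 0.67, 1.0, 2.0, 5.0, 10.0):
    print(f"E = {E:5.2f} MeV  relative intensity = {spectrum(E) / peak:.3f}")
```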
https://en.wikipedia.org/wiki/Spontaneous_fission
Tempered or toughened glass is a type of safety glass processed by controlled thermal or chemical treatments to increase its strength compared with normal glass. Tempering puts the outer surfaces into compression and the interior into tension . Such stresses cause the glass, when broken, to shatter into small granular chunks instead of splintering into large jagged shards as ordinary annealed glass does. These smaller, granular chunks are less likely to cause deep penetration when forced into the surface of an object (e.g. by gravity, by wind, by falling onto them, etc.) compared to larger, jagged shards because the reduction in both the mass and the maximum dimension of a glass fragment corresponds with a reduction in both the momentum and the penetration depth of the glass fragment. Tempered glass is used for its safety and strength in a variety of applications, including passenger vehicle windows (apart from windshield), shower doors, aquariums, architectural glass doors and tables, refrigerator trays, mobile phone screen protectors, bulletproof glass components, diving masks , and plates and cookware. Tempered glass is about four times stronger than annealed glass. [ 1 ] [ 2 ] The more rapid contraction of the outer layer during manufacturing induces compressive stresses in the surface of the glass balanced by tensile stresses in the body of the glass. Fully tempered 6-mm thick glass must have either a minimum surface compression of 69 MPa (10 000 psi) or an edge compression of not less than 67 MPa (9 700 psi). [ 3 ] For it to be considered safety glass , the surface compressive stress should exceed 100 megapascals (15,000 psi). As a result of the increased surface stress, when broken the glass breaks into small rounded chunks as opposed to sharp jagged shards. Compressive surface stresses give tempered glass increased strength. Annealed glass has almost no internal stress and usually forms microscopic cracks on its surface. Tension applied to the glass can drive crack propagation which, once begun, concentrates tension at the tip of the crack driving crack propagation at very high speeds. [ citation needed ] Consequently, annealed glass is fragile and breaks into irregular and sharp pieces. [ 4 ] The compressive stresses on the surface of tempered glass contain flaws, preventing their propagation or expansion. Any cutting or grinding must be done prior to tempering. Cutting, grinding, and sharp impacts after tempering will cause the glass to fracture. The strain pattern resulting from tempering can be observed by viewing through an optical polarizer , such as a pair of polarizing sunglasses. Tempered glass is used when strength, thermal resistance, and safety are important considerations. Passenger vehicles, for example, have all three requirements. Since they are stored outdoors, they are subject to constant heating and cooling as well as dramatic temperature changes throughout the year. Moreover, they must withstand small impacts from road debris such as stones as well as road accidents. Because large, sharp glass shards would present additional and unacceptable danger to passengers, tempered glass is used so that if broken, the pieces are blunt and mostly harmless. The windscreen or windshield is instead made of laminated glass , which will not shatter into pieces when broken while side windows and the rear windshield have historically been made of tempered glass. 
Some newer luxury vehicles have laminated side windows to meet occupant retention regulations, anti-theft purposes, or sound-deadening purposes. Other typical applications of tempered glass include: Tempered glass is also used in buildings for unframed assemblies (such as frameless glass doors), structurally loaded applications, and any other application that would become dangerous in the event of human impact. Building codes in the United States require tempered or laminated glass in several situations including some skylights, glass installed near doorways and stairways, large windows, windows which extend close to floor level, sliding doors, elevators, fire department access panels, and glass installed near swimming pools. [ 5 ] Tempered glass is also used in the home. Some common household furniture and appliances that use tempered glass are frameless shower doors, glass table tops, glass shelves, cabinet glass and glass for fireplaces. "Rim-tempered" indicates that a limited area, such as the rim of the glass or plate, is tempered, and is popular in food service. There are also fully tempered variants for strength and thermal shock resistance. Some countries specify requirements regarding this. Tempered glass has also seen increased usage in bars and pubs, particularly in the United Kingdom and Australia, to prevent broken glass being used as a weapon . [ 6 ] [ 7 ] Some forms of tempered glass are used for cooking and baking . Manufacturers and brands include Glasslock, Pyrex , Corelle , and Arc International . This is also the type of glass used for oven doors. Most touchscreen mobile devices use some form of toughened glass (such as Corning 's Gorilla Glass ), but there are also separate tempered screen protectors for touchscreen devices sold as an accessory. [ 8 ] Tempered glass can be made from annealed glass via a thermal tempering process. The glass is placed onto a roller table, taking it through a furnace that heats it well above its glass transition temperature of 564 °C (1,047 °F) to around 620 °C (1,148 °F). The glass is then rapidly cooled with forced air drafts while the inner portion remains free to flow for a short time. An alternative chemical toughening process involves forcing a surface layer of glass at least 0.1 mm thick into compression by ion exchange of the sodium ions in the glass surface with potassium ions (which are 30% larger), by immersion of the glass into a bath of molten potassium nitrate . Chemical toughening results in increased toughness compared with thermal tempering and can be applied to glass objects of complex shapes. [ 9 ] Tempered glass must be cut to size or pressed to shape before tempering, and cannot be re-worked once tempered. Polishing the edges or drilling holes in the glass is carried out before the tempering process starts. Because of the balanced stresses in the glass, damage to any portion will eventually result in the glass shattering into thumbnail-sized pieces. The glass is most susceptible to breakage due to damage at its edge, where the tensile stress is the greatest, but can also shatter in the event of a hard impact in the middle of the glass pane or if the impact is concentrated (for example, the glass is struck with a hardened point). Using tempered glass can pose a security risk in some situations because of the tendency of the glass to shatter completely upon hard impact rather than leaving shards in the window frame. 
[ 10 ] The surface of tempered glass does exhibit surface waves caused by contact with flattening rollers, if it has been formed using this process. This waviness is a significant problem in the manufacturing of thin-film solar cells. [ 11 ] The float glass process can be used to provide low-distortion sheets with very flat and parallel surfaces as an alternative for different glazing applications. [ 12 ] Spontaneous glass breakage is a phenomenon by which tempered glass may spontaneously break without any apparent reason. The most common causes are: [ 13 ] [ 14 ] Any breakage problem has more severe consequences where the glass is installed overhead or in public areas (such as in high-rise buildings). A safety window film can be applied to tempered panes of glass to prevent the fragments from falling if the glass breaks. An old-fashioned precaution was to install metal screens below skylights. François Barthélémy Alfred Royer de la Bastie (1830–1901) of Paris, France is credited with first developing a method of tempering glass [ 16 ] by quenching almost molten glass in a heated bath of oil or grease in 1874; the method was patented in England on August 12, 1874, as patent number 2783. Tempered glass is sometimes known as Bastie glass after de la Bastie. In 1877 the German Friedrich Siemens developed a different process, sometimes called compressed glass or Siemens glass, producing a tempered glass stronger than the Bastie process by pressing the glass in cool molds. [ 17 ] The first patent on a whole process to make tempered glass was held by chemist Rudolph A. Seiden, who was born in 1900 in Austria and emigrated to the United States in 1935. [ 18 ] Though the underlying mechanism was not known at the time, the effects of "tempering" glass have been known for centuries. In about 1660, Prince Rupert of the Rhine brought the discovery of what are now known as "Prince Rupert's Drops" to the attention of King Charles II. These are teardrop-shaped bits of glass which are produced by allowing a molten drop of glass to fall into a bucket of water, thereby rapidly cooling it. They can withstand a blow from a hammer on the bulbous end without breaking, but the drops will disintegrate explosively into powder if the tail end is even slightly damaged.
https://en.wikipedia.org/wiki/Spontaneous_glass_breakage
Spontaneous human combustion ( SHC ) is the pseudoscientific [ 1 ] concept of the spontaneous combustion of a living (or recently deceased) human body without an apparent external source of ignition on the body. In addition to reported cases, descriptions of the alleged phenomenon appear in literature, and both types have been observed to share common characteristics in terms of circumstances and the remains of the victim. Scientific investigations have attempted to analyze reported instances of SHC and have resulted in hypotheses regarding potential causes and mechanisms, including victim behavior and habits, alcohol consumption, and proximity to potential sources of ignition, as well as the behavior of fires that consume melted fats. Natural explanations, as well as unverified natural phenomena, have been proposed to explain reports of SHC. The current scientific consensus is that purported cases of SHC involve overlooked external sources of ignition. "Spontaneous human combustion" refers to the death from a fire originating without an apparent external source of ignition: a belief that the fire starts within the body of the victim. This idea and the term "spontaneous human combustion" were both first proposed in 1746 by Paul Rolli, a Fellow of the Royal Society , in an article published in the Philosophical Transactions concerning the mysterious death of Countess Cornelia Zangheri Bandi . [ 2 ] Writing in The British Medical Journal in 1938, coroner Gavin Thurston describes the phenomenon as having "apparently attracted the attention not only of the medical profession but of the non-medical professionals one hundred years ago" (referring to a fictional account published in 1834 in the Frederick Marryat cycle). [ 3 ] In his 1995 book Ablaze! , Larry E. Arnold, a director of ParaScience International, wrote that there had been about 200 cited reports of spontaneous human combustion worldwide over a period of around 300 years. [ 4 ] The topic received coverage in the British Medical Journal in 1938. An article by L. A. Parry cited an 1823-published book Medical Jurisprudence , [ 5 ] which stated that commonalities among recorded cases of spontaneous human combustion included the following characteristics: [ 6 ] Alcoholism is a common theme in early SHC literary references, in part because some Victorian era physicians and writers believed spontaneous human combustion was the result of alcoholism. [ 7 ] An extensive two-and-a-half-year research project, involving 30 historical cases of alleged SHC from 1725 to 1982, was conducted by science investigator Joe Nickell and forensic analyst John F. Fischer. [ 8 ] Their lengthy, two-part report was published in 1984 in the journal of the International Association of Arson Investigators , [ 9 ] : 3–11 and incorporated into their 1988 book Secrets of the Supernatural . [ 10 ] Nickell has written frequently on the subject, [ 9 ] [ 10 ] [ 8 ] appeared on television documentaries, conducted additional research, and lectured at the New York State Academy of Fire Science at Montour Falls, New York , as a guest instructor. The Nickell and Fischer investigation, which looked at cases in the 18th, 19th and 20th centuries, showed that the burned cadavers were close to plausible sources for the ignition: candles, lamps, fireplaces, and so on. Such sources were often omitted from published accounts of these incidents, presumably to deepen the aura of mystery surrounding an apparently "spontaneous" death. 
The investigations also found that there was a correlation between alleged SHC deaths and the victim's intoxication (or other forms of incapacitation), which could conceivably have caused them to be careless and unable to respond properly to an accident. Where the destruction of the body was not particularly extensive, a primary source of combustible fuel could plausibly have been the victim's clothing or a covering such as a blanket or comforter. However, where the destruction was extensive, additional fuel sources were involved, such as chair stuffing, floor coverings, the flooring itself, and the like. The investigators described how such materials helped to retain melted fat, which caused more of the body to be burned and destroyed, yielding still more liquefied fat, in a cyclic process known as the "wick effect" or the "candle effect". According to the Nickell and Fischer investigation, nearby objects often remained undamaged because fire tends to burn upwards, but burns laterally with some difficulty. The fires in question are relatively small, achieving considerable destruction by the wick effect, and nearby objects may not be close enough to catch fire themselves (much as one can closely approach a modest campfire without burning). As with other mysteries, Nickell and Fischer cautioned against a "single, simplistic explanation for all unusual burning deaths" but rather urged investigating "on an individual basis". [ 10 ] : 169 Neurologist Steven Novella has said that skepticism about spontaneous human combustion is now bleeding over into becoming popular skepticism about spontaneous combustion. [ 11 ] In a 2002 study, Angi M. Christensen of the University of Tennessee cremated both healthy and osteoporotic samples of human bone and compared the resulting color changes and fragmentation. The study found that osteoporotic bone samples "consistently displayed more discoloration and a greater degree of fragmentation than healthy ones." The same study found that when human tissue is burned, the resulting flame produces a small amount of heat, indicating that fire is unlikely to spread from burning tissue. [ 12 ] The scientific consensus is that incidents which might appear as spontaneous combustion did in fact have an external source of ignition, and that spontaneous human combustion without an external ignition source is extremely implausible. Pseudoscientific hypotheses have been presented which attempt to explain how SHC might occur without an external flame source. [ 13 ] [ 1 ] Benjamin Radford, science writer and deputy editor of the science magazine Skeptical Inquirer, casts doubt on the plausibility of spontaneous human combustion: "If SHC is a real phenomenon (and not the result of an elderly or infirm person being too close to a flame source), why doesn't it happen more often? There are 8 billion people in the world [today in 2024], and yet we don't see reports of people bursting into flame while walking down the street, attending football games, or sipping a coffee at a local Starbucks." [ 14 ] On 2 July 1951, Mary Reeser, a 67-year-old woman, was found burned to death in her house after her landlady realised that the house's doorknob was unusually warm. The landlady notified the police, and upon entering the home they found Reeser's remains completely burned into ash, with only one leg remaining. The chair she was sitting in was also destroyed. Reeser took sleeping pills and was also a smoker.
Despite its proliferation in popular culture, the contemporary FBI investigation ruled out the possibility of SHC. A common theory was that she was smoking a cigarette after taking sleeping pills and then fell asleep while still holding the burning cigarette, which could have ignited her gown, ultimately leading to her death. Her daughter-in-law stated, "The cigarette dropped to her lap. Her fat was the fuel that kept her burning. The floor was cement, and the chair was by itself. There was nothing around her to burn". [ 29 ] [ 30 ] Margaret Hogan, an 89-year-old widow who lived alone in a house on Prussia Street, Dublin , Ireland , was found burned almost to the point of complete destruction on 28 March 1970. Plastic flowers on a table in the centre of the room had been reduced to liquid and a television with a melted screen sat 12 feet from the armchair in which the ashen remains were found; otherwise, the surroundings were almost untouched. Her two feet, and both legs from below the knees, were undamaged. A small coal fire had been burning in the grate when a neighbour left the house the previous day; however, no connection between this fire and that in which Mrs. Hogan died could be found. An inquest, held on 3 April 1970, recorded death by burning, with the cause of the fire listed as "unknown". [ 31 ] On 24 November 1979, during Thanksgiving weekend, Beatrice Oczki, a 51-year-old woman, was found charred to death in her home in the village of Bolingbrook, Illinois , United States. [ 32 ] Henry Thomas, a 73-year-old man, was found burned to death in the living room of his council house on the Rassau estate in Ebbw Vale , South Wales , in 1980. Most of his body was incinerated, leaving only his skull and part of each leg below the knee. The feet and legs were still clothed in socks and trousers. Half of the chair in which he had been sitting was also destroyed. Police forensic officers decided that the incineration of Thomas was due to the wick effect . [ 33 ] In December 2010, the death of Michael Faherty , a 76-year-old man in County Galway , Ireland, was recorded as "spontaneous combustion" by the coroner. The doctor, Ciaran McLoughlin, made this statement at the inquiry into the death: "This fire was thoroughly investigated and I'm left with the conclusion that this fits into the category of spontaneous human combustion, for which there is no adequate explanation." [ 34 ] The Skeptic magazine ascribed to possible SHC the 1899 case of two children from the same family who were burned to death in different places at the same time. The evidence showed that although the coincidence seemed strange, the children both loved to play with fire and had been "whipped" for this behavior in the past. Looking at all the evidence, the coroner and jury ruled that these were both accidental deaths. [ 35 ]
https://en.wikipedia.org/wiki/Spontaneous_human_combustion
In thermodynamics, a spontaneous process is a process which occurs without any external input to the system. A more technical definition is the time-evolution of a system in which it releases free energy and moves to a lower, more thermodynamically stable energy state (closer to thermodynamic equilibrium). [ 1 ] [ 2 ] The sign convention for free energy change follows the general convention for thermodynamic measurements, in which a release of free energy from the system corresponds to a negative change in the free energy of the system and a positive change in the free energy of the surroundings. Depending on the nature of the process, the free energy is determined differently. For example, the Gibbs free energy change is used when considering processes that occur under constant pressure and temperature conditions, whereas the Helmholtz free energy change is used when considering processes that occur under constant volume and temperature conditions. The value and even the sign of both free energy changes can depend upon the temperature and pressure or volume. Because spontaneous processes are characterized by a decrease in the system's free energy, they do not need to be driven by an outside source of energy. For cases involving an isolated system where no energy is exchanged with the surroundings, spontaneous processes are characterized by an increase in entropy. A spontaneous reaction is a chemical reaction which is a spontaneous process under the conditions of interest. In general, the spontaneity of a process only determines whether or not a process can occur and makes no indication as to whether or not the process will occur at an observable rate. In other words, spontaneity is a necessary, but not sufficient, condition for a process to actually occur. Furthermore, spontaneity makes no implication as to the speed at which the spontaneous process may occur: just because a process is spontaneous does not mean it will happen quickly (or at all). As an example, the conversion of a diamond into graphite is a spontaneous process at room temperature and pressure. Despite being spontaneous, this process does not occur at an observable rate, since the high activation energy of the reaction renders it too slow to observe. For a process that occurs at constant temperature and pressure, spontaneity can be determined using the change in Gibbs free energy, which is given by: ΔG = ΔH − TΔS, where the sign of ΔG depends on the signs of the changes in enthalpy (ΔH) and entropy (ΔS). If these two signs are the same (both positive or both negative), then the sign of ΔG will change from positive to negative (or vice versa) at the temperature T = ΔH/ΔS. A negative ΔG means the process is spontaneous as written, a positive ΔG means it is non-spontaneous (the reverse process is spontaneous), and ΔG = 0 means the system is at equilibrium. This set of rules can be used to determine four distinct cases by examining the signs of ΔS and ΔH. For the latter two cases, the temperature at which the spontaneity changes will be determined by the relative magnitudes of ΔS and ΔH. When using the entropy change of a process to assess spontaneity, it is important to carefully consider the definition of the system and surroundings. The second law of thermodynamics states that a process involving an isolated system will be spontaneous if the entropy of the system increases over time. For open or closed systems, however, the statement must be modified to say that the total entropy of the combined system and surroundings must increase: ΔS_total = ΔS_system + ΔS_surroundings ≥ 0.
This criterion can then be used to explain how it is possible for the entropy of an open or closed system to decrease during a spontaneous process. A decrease in system entropy can only occur spontaneously if the entropy change of the surroundings is both positive in sign and larger in magnitude than the entropy change of the system: ΔS_surroundings > 0 and |ΔS_surroundings| > |ΔS_system|. In many processes, the increase in entropy of the surroundings is accomplished via heat transfer from the system to the surroundings (i.e. an exothermic process).
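A minimal sketch of the spontaneity criterion ΔG = ΔH − TΔS is given below. The enthalpy and entropy values are illustrative round numbers for the melting of ice (endothermic and entropy-increasing), so the sign of ΔG flips at T = ΔH/ΔS ≈ 273 K; they are not figures taken from the article.

```python
# Sketch of the spontaneity criterion Delta_G = Delta_H - T * Delta_S.
dH = 6010.0   # J/mol, Delta H of the process (> 0: endothermic); illustrative value for ice melting
dS = 22.0     # J/(mol*K), Delta S of the process (> 0); illustrative value

def gibbs(T):
    """Gibbs free energy change of the process at temperature T (kelvin)."""
    return dH - T * dS

crossover = dH / dS   # temperature at which Delta_G changes sign when dH and dS share a sign
for T in (250.0, crossover, 300.0):
    dG = gibbs(T)
    verdict = "spontaneous" if dG < -1e-6 else ("equilibrium" if abs(dG) <= 1e-6 else "non-spontaneous")
    print(f"T = {T:6.1f} K  Delta_G = {dG:8.1f} J/mol  ->  {verdict}")
```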
https://en.wikipedia.org/wiki/Spontaneous_process
Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state spontaneously ends up in an asymmetric state. [ 1 ] [ 2 ] [ 3 ] In particular, it can describe systems where the equations of motion or the Lagrangian obey symmetries, but the lowest-energy vacuum solutions do not exhibit that same symmetry. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum even though the entire Lagrangian retains that symmetry. Spontaneous symmetry breaking cannot happen in quantum mechanics describing finite-dimensional systems, due to the Stone–von Neumann theorem (which states the uniqueness of the Heisenberg commutation relations in finite dimensions). Spontaneous symmetry breaking can therefore be observed only in infinite-dimensional theories, such as quantum field theories. By definition, spontaneous symmetry breaking requires the existence of physical laws which are invariant under a symmetry transformation (such as translation or rotation), so that any pair of outcomes differing only by that transformation have the same probability distribution. For example, if measurements of an observable at any two different positions have the same probability distribution, the observable has translational symmetry. Spontaneous symmetry breaking occurs when this relation breaks down, while the underlying physical laws remain symmetrical. Conversely, in explicit symmetry breaking, the probability distributions of a pair of outcomes can be different. For example, in an electric field, the forces on a charged particle are different in different directions, so the rotational symmetry is explicitly broken by the electric field, which does not have this symmetry. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions, can be described by spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect. Typically, when spontaneous symmetry breaking occurs, the observable properties of the system change in multiple ways. For example, the density, compressibility, coefficient of thermal expansion, and specific heat will be expected to change when a liquid becomes a solid. Consider a symmetric upward dome with a trough circling the bottom. If a ball is put at the very peak of the dome, the system is symmetric with respect to a rotation around the center axis. But the ball may spontaneously break this symmetry by rolling down the dome into the trough, a point of lowest energy. Afterward, the ball has come to rest at some fixed point on the perimeter. The dome and the ball retain their individual symmetry, but the system does not. [ 4 ] In the simplest idealized relativistic model, the spontaneously broken symmetry is summarized through an illustrative scalar field theory. The relevant Lagrangian of a scalar field φ, which essentially dictates how a system behaves, can be split up into kinetic and potential terms: L = ∂^μφ* ∂_μφ − V(φ). It is in this potential term V(φ) that the symmetry breaking is triggered. An example of such a potential is due to Jeffrey Goldstone. [ 5 ] This potential has an infinite number of possible minima (vacuum states), given by φ = v e^(iθ) for any real θ between 0 and 2π, where v is the radius of the circle of minima. The system also has an unstable vacuum state corresponding to φ = 0. This state has a U(1) symmetry.
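The following hedged sketch illustrates such a "Mexican hat" potential numerically, written here in the common form V(φ) = λ(|φ|² − v²)²; the exact normalisation differs between texts, and the parameter values λ = v = 1 are illustrative assumptions, not taken from the article. It shows that the symmetric point φ = 0 sits above a degenerate circle of minima, any one of which the system may settle into.

```python
import cmath
import math

# Illustrative "Mexican hat" potential V(phi) = lam * (|phi|**2 - v**2)**2 for a
# complex scalar field value phi (assumed form and parameters, for illustration only).
lam, v = 1.0, 1.0

def V(phi):
    """Potential energy for the field value phi."""
    return lam * (abs(phi)**2 - v**2)**2

print(f"V(0) = {V(0):.3f}   (U(1)-symmetric but unstable stationary point)")

# Every point on the circle |phi| = v is a minimum of equal energy; the particular
# choice of theta is what spontaneously breaks the U(1) symmetry.
for theta in (0.0, math.pi / 3, math.pi, 1.5 * math.pi):
    phi = v * cmath.exp(1j * theta)
    print(f"V(v * e^(i*{theta:.2f})) = {V(phi):.3f}")
```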
However, once the system falls into a specific stable vacuum state (amounting to a choice of θ ), this symmetry will appear to be lost, or "spontaneously broken". In fact, any other choice of θ would have exactly the same energy, and the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks the symmetry, implying the existence of a massless Nambu–Goldstone boson , the mode running around the circle at the minimum of this potential, and indicating there is some memory of the original symmetry in the Lagrangian. [ 6 ] [ 7 ] In particle physics , the force carrier particles are normally specified by field equations with gauge symmetry ; their equations predict that certain measurements will be the same at any point in the field. For instance, field equations might predict that the mass of two quarks is constant. Solving the equations to find the mass of each quark might give two solutions. In one solution, quark A is heavier than quark B. In the second solution, quark B is heavier than quark A by the same amount . The symmetry of the equations is not reflected by the individual solutions, but it is reflected by the range of solutions. An actual measurement reflects only one solution, representing a breakdown in the symmetry of the underlying theory. "Hidden" is a better term than "broken", because the symmetry is always there in these equations. This phenomenon is called spontaneous symmetry breaking (SSB) because nothing (that we know of) breaks the symmetry in the equations. [ 8 ] : 194–195 By the nature of spontaneous symmetry breaking, different portions of the early Universe would break symmetry in different directions, leading to topological defects , such as two-dimensional domain walls , one-dimensional cosmic strings , zero-dimensional monopoles , and/or textures , depending on the relevant homotopy group and the dynamics of the theory. For example, Higgs symmetry breaking may have created primordial cosmic strings as a byproduct. Hypothetical GUT symmetry-breaking generically produces monopoles , creating difficulties for GUT unless monopoles (along with any GUT domain walls) are expelled from our observable Universe through cosmic inflation . [ 9 ] Chiral symmetry breaking is an example of spontaneous symmetry breaking affecting the chiral symmetry of the strong interactions in particle physics. It is a property of quantum chromodynamics , the quantum field theory describing these interactions, and is responsible for the bulk of the mass (over 99%) of the nucleons , and thus of all common matter, as it converts very light bound quarks into 100 times heavier constituents of baryons . The approximate Nambu–Goldstone bosons in this spontaneous symmetry breaking process are the pions , whose mass is an order of magnitude lighter than the mass of the nucleons. It served as the prototype and significant ingredient of the Higgs mechanism underlying the electroweak symmetry breaking. The strong, weak, and electromagnetic forces can all be understood as arising from gauge symmetries , which is a redundancy in the description of the symmetry. The Higgs mechanism , the spontaneous symmetry breaking of gauge symmetries, is an important component in understanding the superconductivity of metals and the origin of particle masses in the standard model of particle physics. The term "spontaneous symmetry breaking" is a misnomer here as Elitzur's theorem states that local gauge symmetries can never be spontaneously broken. 
Rather, after gauge fixing, the global symmetry (or redundancy) can be broken in a manner formally resembling spontaneous symmetry breaking. One important consequence of the distinction between true symmetries and gauge symmetries is that the massless Nambu–Goldstone bosons resulting from spontaneous breaking of a gauge symmetry are absorbed in the description of the gauge vector field, providing massive vector field modes, like the plasma mode in a superconductor, or the Higgs mode observed in particle physics. In the standard model of particle physics, spontaneous symmetry breaking of the SU(2) × U(1) gauge symmetry associated with the electroweak force generates masses for several particles, and separates the electromagnetic and weak forces. The W and Z bosons are the elementary particles that mediate the weak interaction, while the photon mediates the electromagnetic interaction. At energies much greater than 100 GeV, all these particles behave in a similar manner. The Weinberg–Salam theory predicts that, at lower energies, this symmetry is broken so that the photon and the massive W and Z bosons emerge. [ 10 ] In addition, fermions develop mass consistently. Without spontaneous symmetry breaking, the Standard Model of elementary particle interactions requires the existence of a number of particles. However, some particles (the W and Z bosons) would then be predicted to be massless, when, in reality, they are observed to have mass. To overcome this, spontaneous symmetry breaking is augmented by the Higgs mechanism to give these particles mass. It also suggests the presence of a new particle, the Higgs boson, detected in 2012. Superconductivity of metals is a condensed-matter analog of the Higgs phenomenon, in which a condensate of Cooper pairs of electrons spontaneously breaks the U(1) gauge symmetry associated with light and electromagnetism. Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking in which the ground state of the system has reduced symmetry properties compared to its theoretical description (i.e., the Lagrangian). Dynamical breaking of a global symmetry is a spontaneous symmetry breaking which happens not at the (classical) tree level (i.e., at the level of the bare action), but due to quantum corrections (i.e., at the level of the effective action). Dynamical breaking of a gauge symmetry is subtler. In conventional spontaneous gauge symmetry breaking, there exists an unstable Higgs particle in the theory, which drives the vacuum to a symmetry-broken phase (as in the electroweak interactions). In dynamical gauge symmetry breaking, however, no unstable Higgs particle operates in the theory; instead, the bound states of the system itself provide the unstable fields that bring about the phase transition. For example, Bardeen, Hill, and Lindner published a paper that attempts to replace the conventional Higgs mechanism in the standard model by a DSB that is driven by a bound state of top-antitop quarks. (Such models, in which a composite particle plays the role of the Higgs boson, are often referred to as "Composite Higgs models".) [ 11 ] Dynamical breaking of gauge symmetries is often due to the creation of a fermionic condensate, e.g., the quark condensate, which is connected to the dynamical breaking of chiral symmetry in quantum chromodynamics.
Conventional superconductivity is the paradigmatic example from the condensed matter side, where phonon-mediated attractions lead electrons to become bound in pairs and then condense, thereby breaking the electromagnetic gauge symmetry. Most phases of matter can be understood through the lens of spontaneous symmetry breaking. For example, crystals are periodic arrays of atoms that are not invariant under all translations (only under a small subset of translations by a lattice vector). Magnets have north and south poles that are oriented in a specific direction, breaking rotational symmetry . In addition to these examples, there are a whole host of other symmetry-breaking phases of matter — including nematic phases of liquid crystals , charge- and spin-density waves, superfluids, and many others. There are several known examples of matter that cannot be described by spontaneous symmetry breaking, including: topologically ordered phases of matter, such as fractional quantum Hall liquids , and spin-liquids . These states do not break any symmetry, but are distinct phases of matter. Unlike the case of spontaneous symmetry breaking, there is not a general framework for describing such states. [ 12 ] The ferromagnet is the canonical system that spontaneously breaks the continuous symmetry of the spins below the Curie temperature and at h = 0 , where h is the external magnetic field. Below the Curie temperature , the energy of the system is invariant under inversion of the magnetization m ( x ) such that m ( x ) = − m (− x ) . The symmetry is spontaneously broken as h → 0 when the Hamiltonian becomes invariant under the inversion transformation, but the expectation value is not invariant. Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under consideration. For example, in a magnet, the order parameter is the local magnetization. Spontaneous breaking of a continuous symmetry is inevitably accompanied by gapless (meaning that these modes do not cost any energy to excite) Nambu–Goldstone modes associated with slow, long-wavelength fluctuations of the order parameter. For example, vibrational modes in a crystal, known as phonons, are associated with slow density fluctuations of the crystal's atoms. The associated Goldstone mode for magnets are oscillating waves of spin known as spin-waves. For symmetry-breaking states, whose order parameter is not a conserved quantity, Nambu–Goldstone modes are typically massless and propagate at a constant velocity. An important theorem, due to Mermin and Wagner, states that, at finite temperature, thermally activated fluctuations of Nambu–Goldstone modes destroy the long-range order, and prevent spontaneous symmetry breaking in one- and two-dimensional systems. Similarly, quantum fluctuations of the order parameter prevent most types of continuous symmetry breaking in one-dimensional systems even at zero temperature. (An important exception is ferromagnets, whose order parameter, magnetization, is an exactly conserved quantity and does not have any quantum fluctuations.) Other long-range interacting systems, such as cylindrical curved surfaces interacting via the Coulomb potential or Yukawa potential , have been shown to break translational and rotational symmetries. [ 13 ] It was shown, in the presence of a symmetric Hamiltonian, and in the limit of infinite volume, the system spontaneously adopts a chiral configuration — i.e., breaks mirror plane symmetry . 
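A minimal numerical sketch can make the ferromagnet example concrete. The snippet below (Python, with an illustrative Curie temperature and a made-up starting bias) iterates the standard mean-field self-consistency relation m = tanh(T_c m / T): below T_c it settles on a nonzero magnetization, and +m and −m are equally valid choices, which is precisely the spontaneously broken symmetry, while above T_c the only solution is m = 0.

```python
import numpy as np

# Mean-field (Curie-Weiss) sketch of a ferromagnet at zero external field:
# the magnetization must satisfy the self-consistency relation m = tanh(Tc * m / T).
# Below Tc a nonzero solution appears, and +m and -m are equally good: the
# up/down symmetry is spontaneously broken by picking one of them.
Tc = 1.0  # illustrative Curie temperature

def magnetization(T, m0=0.9, iters=2000):
    """Solve m = tanh(Tc*m/T) by fixed-point iteration, starting from a small bias m0."""
    m = m0
    for _ in range(iters):
        m = np.tanh(Tc * m / T)
    return m

for T in (0.5, 0.9, 1.1, 1.5):
    m = magnetization(T)
    print(f"T/Tc = {T:.1f}: m = {m:+.3f} (and -m = {-m:+.3f} is equally valid)")
# Above Tc the iteration collapses to m = 0; below Tc it settles at a finite |m|.
```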
For spontaneous symmetry breaking to occur, there must be a system in which there are several equally likely outcomes. The system as a whole is therefore symmetric with respect to these outcomes. However, if the system is sampled (i.e. if the system is actually used or interacted with in any way), a specific outcome must occur. Though the system as a whole is symmetric, it is never encountered with this symmetry, but only in one specific asymmetric state. Hence, the symmetry is said to be spontaneously broken in that theory. Nevertheless, the fact that each outcome is equally likely is a reflection of the underlying symmetry, which is thus often dubbed "hidden symmetry", and has crucial formal consequences. (See the article on the Goldstone boson .) When a theory is symmetric with respect to a symmetry group , but requires that one element of the group be distinct, then spontaneous symmetry breaking has occurred. The theory must not dictate which member is distinct, only that one is . From this point on, the theory can be treated as if this element actually is distinct, with the proviso that any results found in this way must be resymmetrized, by taking the average of each of the elements of the group being the distinct one. The crucial concept in physics theories is the order parameter . If there is a field (often a background field) which acquires an expectation value (not necessarily a vacuum expectation value ) which is not invariant under the symmetry in question, we say that the system is in the ordered phase , and the symmetry is spontaneously broken. This is because other subsystems interact with the order parameter, which specifies a "frame of reference" to be measured against. In that case, the vacuum state does not obey the initial symmetry (which would keep it invariant, in the linearly realized Wigner mode in which it would be a singlet), and, instead changes under the (hidden) symmetry, now implemented in the (nonlinear) Nambu–Goldstone mode . Normally, in the absence of the Higgs mechanism, massless Goldstone bosons arise. The symmetry group can be discrete, such as the space group of a crystal, or continuous (e.g., a Lie group ), such as the rotational symmetry of space. However, if the system contains only a single spatial dimension, then only discrete symmetries may be broken in a vacuum state of the full quantum theory , although a classical solution may break a continuous symmetry. On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. Yoichiro Nambu , of the University of Chicago , won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking . Physicists Makoto Kobayashi and Toshihide Maskawa , of Kyoto University , shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. [ 14 ] This origin is ultimately reliant on the Higgs mechanism, but, so far understood as a "just so" feature of Higgs couplings, not a spontaneously broken symmetry phenomenon.
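To connect this general picture back to the circle of equivalent vacua described at the start of the article, the following sketch (illustrative parameters only, not tied to any particular physical system) evaluates a "Mexican hat" potential V(φ) = λ(|φ|² − v²)² for a complex order parameter: every vacuum angle θ on the circle |φ| = v has the same energy, the radial direction has positive curvature (a massive mode), and the angular direction costs no energy (the Nambu–Goldstone mode).

```python
import numpy as np

# Minimal sketch of a "Mexican hat" potential V(phi) = lam * (|phi|^2 - v^2)^2
# for a complex order parameter phi. Parameter values are illustrative only.
lam, v = 1.0, 1.0

def V(phi):
    return lam * (abs(phi)**2 - v**2)**2

# Every choice of vacuum angle theta on the circle |phi| = v has the same energy:
thetas = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
print([round(V(v * np.exp(1j * t)), 12) for t in thetas])   # eight identical zeros

# Curvature around one chosen vacuum (theta = 0), by finite differences:
eps = 1e-4
radial = (V(v + eps) - 2 * V(v) + V(v - eps)) / eps**2            # > 0: massive mode
angular = (V(v * np.exp(1j * eps)) - 2 * V(v) + V(v * np.exp(-1j * eps))) / eps**2
print(f"radial curvature ~ {radial:.3f}, angular curvature ~ {angular:.3e}")
# The angular curvature vanishes: moving along the circle of vacua costs no
# energy, which is the massless Nambu-Goldstone mode.
```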
https://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking
In solid state physics , spontelectrics is the study and phenomenon of thin films of various materials producing strong electric fields . When laid down as thin films tens to hundreds of molecular layers thick, a range of materials spontaneously generate large electric fields . The electric fields can be greater than 10⁸ V/m. [ 1 ] Spontelectric behaviour is intrinsic to the dipolar nature of the constituent molecules. The detection (in ~2009) of spontaneous electric fields in numerous solid films prepared by vapour deposition raises fundamental questions about the nature of disordered materials. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
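As a rough order-of-magnitude check on fields of this size, one can estimate the field of a uniformly polarized film from its bulk polarization, E ≈ P/ε₀, where P is set by the molecular dipole moment, the number density, and the (small) net degree of dipole orientation. The numbers below are illustrative assumptions, not measured values for any specific spontelectric material.

```python
# Order-of-magnitude estimate of the field inside a uniformly polarized film,
# E ~ P / eps0, with P = (number density) x (molecular dipole) x (net orientation).
# All input values are illustrative assumptions, not data for a specific material.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
DEBYE = 3.336e-30         # 1 debye in C*m

n = 3.0e28                # molecules per m^3 (typical molecular solid)
mu = 1.0 * DEBYE          # molecular dipole moment of ~1 D
orientation = 0.01        # ~1% net alignment of dipoles along the film normal

P = n * mu * orientation  # polarization, C/m^2
E = P / EPS0              # resulting field, V/m
print(f"P = {P:.2e} C/m^2  ->  E ~ {E:.2e} V/m")
# Even ~1% net dipole orientation already gives a field of order 10^8 V/m,
# consistent with the magnitude quoted for spontelectric films.
```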
https://en.wikipedia.org/wiki/Spontelectrics
Spoof surface plasmons , also known as spoof surface plasmon polaritons and designer surface plasmons , [ 1 ] are surface electromagnetic waves in microwave and terahertz regimes that propagate along planar interfaces with sign-changing permittivities . Spoof surface plasmons are a type of surface plasmon polariton , which ordinarily propagate along metal and dielectric interfaces in infrared and visible frequencies. Since surface plasmon polaritons cannot exist naturally in microwave and terahertz frequencies due to dispersion properties of metals, spoof surface plasmons necessitate the use of artificially-engineered metamaterials . Spoof surface plasmons share the natural properties of surface plasmon polaritons, such as dispersion characteristics and subwavelength field confinement. They were first theorized by John Pendry et al. [ 2 ] Surface plasmon polaritons (SPP) result from the coupling of delocalized electron oscillations (" surface plasmon ") to electromagnetic waves (" polariton "). SPPs propagate along the interface between a positive- and a negative-permittivity material. These waves decay perpendicularly from the interface (" evanescent field "). For a plasmonic medium that is stratified along the z-direction in Cartesian coordinates , dispersion relation for SPPs can be obtained from solving Maxwell's equations : [ 3 ] where Per this relation, SPPs have shorter wavelengths than light in free space for a frequency band below surface plasmon frequency; this property, as well as subwavelength confinement, enables new applications in subwavelength optics and systems beyond the diffraction-limit . [ 3 ] Nevertheless, for lower frequency bands such as microwave and terahertz, surface plasmon polariton modes are not supported; metals function approximately as perfect electrical conductors with imaginary dielectric functions in this regime. [ 4 ] Per the effective medium approach, metal surfaces with subwavelength structural elements can mimic the plasma behaviour, resulting in artificial surface plasmon polariton excitations with similar dispersion behaviour. [ 4 ] [ 5 ] [ 6 ] For the canonical case of a metamaterial medium that is formed by thin metallic wires on a periodic square lattice , the effective relative permittivity can be represented by the Drude model formula: [ 4 ] where The use of subwavelength structures to induce low-frequency plasmonic excitations was first theorized by John Pendry et al. in 1996; Pendry proposed that a periodic lattice of thin metallic wires with a radius of 1 μm could be used to support surface-bound modes, with a plasma cut-off frequency of 8.2 GHz. [ 4 ] In 2004, Pendry et al. extended the approach to metal surfaces that are perforated by holes, terming the artificial SPP excitations as "spoof surface plasmons." [ 5 ] [ 6 ] In 2006, terahertz pulse propagation in planar metallic structures with holes were shown via FDTD simulations. [ 8 ] Martin-Cano et al. has realized the spatial and temporal modulation of guided terahertz modes via metallic parallelepiped structures, which they termed as " domino plasmons." [ 9 ] Designer spoof plasmonic structures were also tailored to improve the performance of terahertz quantum cascade lasers in 2010. [ 10 ] Spoof surface plasmons were proposed as a possible solution for decreasing the crosstalk in microwave integrated circuits , transmission lines and waveguides . [ 2 ] In 2013, Ma et al. 
demonstrated a matched conversion from coplanar waveguide with a characteristic impedance of 50Ω to a spoof-plasmonic structure. [ 11 ] In 2014, integration of commercial low-noise amplifier with spoof plasmonic structures was realized; the system reportedly worked from 6 to 20 GHz with a gain around 20 dB . [ 12 ] Kianinejad et al. also reported the design of a slow-wave spoof-plasmonic transmission line; conversion from quasi- TEM microstrip modes to TM spoof plasmon modes were also demonstrated. [ 13 ] Khanikaev et al. reported nonreciprocal spoof surface plasmon modes in structured conductor embedded in an asymmetric magneto-optical medium, which results in one-way transmission. [ 14 ] Pan et al. observed the rejection of certain spoof plasmon modes with an introduction of electrically resonant metamaterial particles to the spoof plasmonic strip. [ 15 ] Localized spoof surface plasmons were also demonstrated for metallic disks in microwave frequencies. [ 16 ] [ 17 ]
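The dispersion behaviour described above can be illustrated with the commonly quoted textbook forms: a lossless Drude permittivity ε(ω) = 1 − ω_p²/ω² for the metal (or for Pendry's effective wire medium, with ω_p then set by the wire geometry) and the flat-interface SPP relation k = (ω/c)√(ε_m ε_d/(ε_m + ε_d)). The sketch below uses purely illustrative numbers, not values from the cited papers, to show why natural SPPs are well confined at optical frequencies but essentially unbound at microwave frequencies.

```python
import numpy as np

# Sketch of the textbook SPP dispersion relation and a lossless Drude
# permittivity, to illustrate why metals at microwave frequencies need "spoof"
# structures. All numbers are illustrative, not taken from the cited papers.
c = 3.0e8                      # speed of light, m/s

def drude_eps(omega, omega_p):
    """Lossless Drude permittivity: eps(w) = 1 - wp^2 / w^2."""
    return 1.0 - (omega_p / omega) ** 2

def spp_wavevector(omega, eps_metal, eps_dielectric=1.0):
    """Flat-interface SPP dispersion: k = (w/c) * sqrt(em*ed / (em + ed))."""
    return (omega / c) * np.sqrt(eps_metal * eps_dielectric /
                                 (eps_metal + eps_dielectric + 0j))

omega_p = 2 * np.pi * 2.0e15   # optical-range plasma frequency (illustrative)
for f in (5.0e14, 1.0e10):     # visible light vs. 10 GHz microwave
    w = 2 * np.pi * f
    eps_m = drude_eps(w, omega_p)
    k_spp = spp_wavevector(w, eps_m)
    k_free = w / c
    print(f"f = {f:.1e} Hz: eps_metal = {eps_m:.3e}, k_spp/k_free = {abs(k_spp)/k_free:.4f}")
# At visible frequencies eps_metal is moderately negative and k_spp exceeds k_free
# (sub-wavelength confinement); at 10 GHz eps_metal is enormous and negative, so
# k_spp ~ k_free and the wave is barely bound -- hence engineered (spoof) structures.
```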
https://en.wikipedia.org/wiki/Spoof_surface_plasmon
A spoolbase is a shore-based facility used to facilitate continuous pipe laying for offshore oil and gas production . [ 1 ] The facility allows the welding of single or double joints (40' or 80') of steel pipe of 4" to 18" diameter, into predetermined lengths for spooling onto a reel lay vessel . Shore based spoolbases serve the oil and gas sector from locations in the USA, UK, Norway, Brazil, and Angola, and portable spoolbases may be set up in any location to suit local requirements. This article about a civil engineering topic is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Spoolbase
In the mathematical classification of finite simple groups , there are a number of groups which do not fit into any infinite family. These are called the sporadic simple groups , or the sporadic finite groups , or just the sporadic groups . A simple group is a group G that does not have any normal subgroups except for the trivial group and G itself. The mentioned classification theorem states that the list of finite simple groups consists of 18 countably infinite families [ a ] plus 26 exceptions that do not follow such a systematic pattern. These 26 exceptions are the sporadic groups. The Tits group is sometimes regarded as a sporadic group because it is not strictly a group of Lie type , [ 1 ] in which case there would be 27 sporadic groups. The monster group , or friendly giant , is the largest of the sporadic groups, and all but six of the other sporadic groups are subquotients of it. [ 2 ] Five of the sporadic groups were discovered by Émile Mathieu in the 1860s and the other twenty-one were found between 1965 and 1975. Several of these groups were predicted to exist before they were constructed. Most of the groups are named after the mathematician(s) who first predicted their existence. The full list is: [ 1 ] [ 3 ] [ 4 ] Various constructions for these groups were first compiled in Conway et al. (1985) , including character tables , individual conjugacy classes and lists of maximal subgroup , as well as Schur multipliers and orders of their outer automorphisms . These are also listed online at Wilson et al. (1999) , updated with their group presentations and semi-presentations. The degrees of minimal faithful representation or Brauer characters over fields of characteristic p ≥ 0 for all sporadic groups have also been calculated, and for some of their covering groups. These are detailed in Jansen (2005) . A further exception in the classification of finite simple groups is the Tits group T , which is sometimes considered of Lie type [ 5 ] or sporadic — it is almost but not strictly a group of Lie type [ 6 ] — which is why in some sources the number of sporadic groups is given as 27, instead of 26. [ 7 ] [ 8 ] In some other sources, the Tits group is regarded as neither sporadic nor of Lie type, or both. [ 9 ] [ citation needed ] The Tits group is the ( n = 0)-member 2 F 4 (2)′ of the infinite family of commutator groups 2 F 4 (2 2 n +1 )′ ; thus in a strict sense not sporadic, nor of Lie type. For n > 0 these finite simple groups coincide with the groups of Lie type 2 F 4 (2 2 n +1 ), also known as Ree groups of type 2 F 4 . The earliest use of the term sporadic group may be Burnside (1911 , p. 504) where he comments about the Mathieu groups: "These apparently sporadic simple groups would probably repay a closer examination than they have yet received." (At the time, the other sporadic groups had not been discovered.) The diagram at right is based on Ronan (2006 , p. 247). It does not show the numerous non-sporadic simple subquotients of the sporadic groups. Of the 26 sporadic groups, 20 can be seen inside the monster group as subgroups or quotients of subgroups ( sections ). These twenty have been called the happy family by Robert Griess , and can be organized into three generations. [ 10 ] [ b ] M n for n = 11, 12, 22, 23 and 24 are multiply transitive permutation groups on n points. They are all subgroups of M 24 , which is a permutation group on 24 points. 
[ 11 ] All the subquotients of the automorphism group of a lattice in 24 dimensions called the Leech lattice : [ 12 ] Consists of subgroups which are closely related to the Monster group M : [ 13 ] (This series continues further: the product of M 12 and a group of order 11 is the centralizer of an element of order 11 in M .) The Tits group , if regarded as a sporadic group, would belong in this generation: there is a subgroup S 4 × 2 F 4 (2)′ normalising a 2C 2 subgroup of B , giving rise to a subgroup 2·S 4 × 2 F 4 (2)′ normalising a certain Q 8 subgroup of the Monster. 2 F 4 (2)′ is also a subquotient of the Fischer group Fi 22 , and thus also of Fi 23 and Fi 24 ′, and of the Baby Monster B . 2 F 4 (2)′ is also a subquotient of the (pariah) Rudvalis group Ru , and has no involvements in sporadic simple groups except the ones already mentioned. The six exceptions are J 1 , J 3 , J 4 , O'N , Ru , and Ly , sometimes known as the pariahs . [ 14 ] [ 15 ]
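Since the five Mathieu groups are subgroups of M 24, Lagrange's theorem requires their orders to divide the order of M 24. The short check below uses the standard published orders of these groups; it is a quick sanity check, not a construction of the groups themselves.

```python
# The five Mathieu groups are subgroups of M24, so by Lagrange's theorem their
# orders must divide |M24|. The orders below are the standard published values.
mathieu_orders = {
    "M11": 7920,
    "M12": 95040,
    "M22": 443520,
    "M23": 10200960,
    "M24": 244823040,
}

m24 = mathieu_orders["M24"]
for name, order in mathieu_orders.items():
    print(f"|{name}| = {order:>11}  divides |M24|: {m24 % order == 0}")
```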
https://en.wikipedia.org/wiki/Sporadic_group
In biology , a spore is a unit of sexual (in fungi) or asexual reproduction that may be adapted for dispersal and for survival, often for extended periods of time, in unfavourable conditions. [ 1 ] Spores form part of the life cycles of many plants , algae , fungi and protozoa . [ 2 ] They were thought to have appeared as early as the mid-late Ordovician period as an adaptation of early land plants. [ 3 ] Bacterial spores are not part of a sexual cycle, but are resistant structures used for survival under unfavourable conditions. [ 4 ] Myxozoan spores release amoeboid infectious germs ("amoebulae") into their hosts for parasitic infection, but also reproduce within the hosts through the pairing of two nuclei within the plasmodium, which develops from the amoebula. [ 5 ] In plants, spores are usually haploid and unicellular and are produced by meiosis in the sporangium of a diploid sporophyte . In some rare cases, a diploid spore is also produced in some algae, or fungi. [ 6 ] Under favourable conditions, the spore can develop into a new organism using mitotic division, producing a multicellular gametophyte , which eventually goes on to produce gametes. Two gametes fuse to form a zygote , which develops into a new sporophyte. This cycle is known as alternation of generations . The spores of seed plants are produced internally, and the megaspores (formed within the ovules) and the microspores are involved in the formation of more complex structures that form the dispersal units, the seeds and pollen grains. The term spore derives from the ancient Greek word σπορά spora , meaning " seed , sowing", related to σπόρος sporos , "sowing", and σπείρειν speirein , "to sow". In common parlance, the difference between a "spore" and a " gamete " is that a spore will germinate and develop into a sporeling , while a gamete needs to combine with another gamete to form a zygote before developing further. The main difference between spores and seeds as dispersal units is that spores are unicellular, the first cell of a gametophyte, while seeds contain within them a developing embryo (the multicellular sporophyte of the next generation), produced by the fusion of the male gamete of the pollen tube with the female gamete formed by the megagametophyte within the ovule. Spores germinate to give rise to haploid gametophytes, while seeds germinate to give rise to diploid sporophytes. Vascular plant spores are always haploid . Vascular plants are either homosporous (also known as isosporous ) or heterosporous . Plants that are homosporous produce spores of the same size and type. Heterosporous plants, such as seed plants , spikemosses , quillworts , and ferns of the order Salviniales produce spores of two different sizes: the larger spore (megaspore) in effect functioning as a "female" spore and the smaller (microspore) functioning as a "male". Such plants typically give rise to the two kind of spores from within separate sporangia, either a megasporangium that produces megaspores or a microsporangium that produces microspores. In flowering plants, these sporangia occur within the carpel and anthers, respectively. Fungi commonly produce spores during sexual and asexual reproduction. Spores are usually haploid and grow into mature haploid individuals through mitotic division of cells ( Urediniospores and Teliospores among rusts are dikaryotic). Dikaryotic cells result from the fusion of two haploid gamete cells. 
Among sporogenic dikaryotic cells, karyogamy (the fusion of the two haploid nuclei) occurs to produce a diploid cell. Diploid cells undergo meiosis to produce haploid spores. [ citation needed ] Spores can be classified in several ways such as by their spore producing structure, function, origin during life cycle, and mobility. Below is a table listing the mode of classification, name, identifying characteristic, examples, and images of different spore species. Under high magnification , spores often have complex patterns or ornamentation on their exterior surfaces. A specialized terminology has been developed to describe features of such patterns. Some markings represent apertures, places where the tough outer coat of the spore can be penetrated when germination occurs. Spores can be categorized based on the position and number of these markings and apertures. Alete spores show no lines. In monolete spores , there is a single narrow line (laesura) on the spore. [ 8 ] Indicating the prior contact of two spores that eventually separated. [ 3 ] In trilete spores , each spore shows three narrow lines radiating from a center pole. [ 8 ] This shows that four spores shared a common origin and were initially in contact with each other forming a tetrahedron. [ 3 ] A wider aperture in the shape of a groove may be termed a colpus . [ 8 ] The number of colpi distinguishes major groups of plants. Eudicots have tricolpate spores (i.e. spores with three colpi). [ 9 ] Envelope-enclosed spore tetrads are taken as the earliest evidence of plant life on land, [ 10 ] dating from the mid-Ordovician (early Llanvirn, ~ 470 million years ago ), a period from which no macrofossils have yet been recovered. [ 11 ] Individual trilete spores resembling those of modern cryptogamic plants first appeared in the fossil record at the end of the Ordovician period. [ 12 ] In fungi, both asexual and sexual spores or sporangiospores of many fungal species are actively dispersed by forcible ejection from their reproductive structures. This ejection ensures exit of the spores from the reproductive structures as well as travelling through the air over long distances. Many fungi thereby possess specialized mechanical and physiological mechanisms as well as spore-surface structures, such as hydrophobins , for spore ejection. These mechanisms include, for example, forcible discharge of ascospores enabled by the structure of the ascus and accumulation of osmolytes in the fluids of the ascus that lead to explosive discharge of the ascospores into the air. [ 13 ] The forcible discharge of single spores termed ballistospores involves formation of a small drop of water ( Buller's drop ), which upon contact with the spore leads to its projectile release with an initial acceleration of more than 10,000 g . [ 14 ] Other fungi rely on alternative mechanisms for spore release, such as external mechanical forces, exemplified by puffballs . Attracting insects, such as flies, to fruiting structures, by virtue of their having lively colours and a putrid odour, for dispersal of fungal spores is yet another strategy, most prominently used by the stinkhorns . In Common Smoothcap moss ( Atrichum undulatum ), the vibration of sporophyte has been shown to be an important mechanism for spore release. [ 15 ] In the case of spore-shedding vascular plants such as ferns, wind distribution of very light spores provides great capacity for dispersal. 
Also, spores are less subject to animal predation than seeds because they contain almost no food reserve; however they are more subject to fungal and bacterial predation. Their chief advantage is that, of all forms of progeny, spores require the least energy and materials to produce. In the spikemoss Selaginella lepidophylla , dispersal is achieved in part by an unusual type of diaspore , a tumbleweed . [ 16 ] Spores have been found in microfossils dating back to the mid-late Ordovician period. [ 3 ] Two hypothesized initial functions of spores relate to whether they appeared before or after land plants. The heavily studied hypothesis is that spores were an adaptation of early land plant species, such as embryophytes , that allowed for plants to easily disperse while adapting to their non-aquatic environment. [ 3 ] [ 17 ] This is particularly supported by the observation of a thick spore wall in cryptospores . These spore walls would have protected potential offspring from novel weather elements. [ 3 ] The second more recent hypothesis is that spores were an early predecessor of land plants and formed during errors in the meiosis of algae , a hypothesized early ancestor of land plants. [ 18 ] Whether spores arose before or after land plants, their contributions to topics in fields like paleontology and plant phylogenetics have been useful. [ 18 ] The spores found in microfossils, also known as cryptospores, are well preserved due to the fixed material they are in as well as how abundant and widespread they were during their respective time periods. These microfossils are especially helpful when studying the early periods of earth as macrofossils such as plants are not common nor well preserved. [ 3 ] Both cryptospores and modern spores have diverse morphology that indicate possible environmental conditions of earlier periods of Earth and evolutionary relationships of plant species. [ 3 ] [ 18 ] [ 17 ]
https://en.wikipedia.org/wiki/Spore
The sporocarp (also known as fruiting body , fruit body or fruitbody ) of fungi is a multicellular structure on which spore-producing structures , such as basidia or asci , are borne. The fruitbody is part of the sexual phase of a fungal life cycle , [ 1 ] while the rest of the life cycle is characterized by vegetative mycelial growth and asexual spore production. The sporocarp of a basidiomycete is known as a basidiocarp or basidiome , while the fruitbody of an ascomycete is known as an ascocarp . Many shapes and morphologies are found in both basidiocarps and ascocarps; these features play an important role in the identification and taxonomy of fungi. Fruitbodies are termed epigeous if they grow on the ground, while those that grow underground are hypogeous . Epigeous sporocarps that are visible to the naked eye, especially fruitbodies of a more or less agaricoid morphology, are often called mushrooms . Epigeous sporocarps have mycelia that extend underground far beyond the mother sporocarp. There is a wider distribution of mycelia underground than sporocarps above ground. [ 2 ] Hypogeous fungi are usually called truffles or false truffles . There is evidence that hypogeous fungi evolved from epigeous fungi. [ 3 ] During their evolution , truffles lost the ability to disperse their spores by air currents, and propagate instead by animal consumption and subsequent defecation. In amateur mushroom hunting , and to a large degree in academic mycology as well, identification of higher fungi is based on the features of the sporocarp. The largest known fruitbody is a specimen of Phellinus ellipsoideus (formerly Fomitiporia ellipsoidea ) found on Hainan Island , part of China . It measures up to 10.85 metres ( 35 + 1 ⁄ 2 feet) in length and is estimated to weigh between 450 and 760 kilograms (990 and 1,680 pounds). [ 4 ] [ 5 ] A wide variety of animals feed on epigeous and hypogeous fungi. The mammals that feed on fungi are as diverse as fungi themselves and are called mycophages. Squirrels and chipmunks eat the greatest variety of fungi, but there are many other mammals that also forage on fungi, such as marsupials , mice , rats , voles , lemmings , deer , shrews , rabbits , weasels , and more. [ 6 ] [ 7 ] [ 8 ] [ 9 ] Some animals feed on fungi opportunistically, while others rely on them as a primary source of food. Hypogeous sporocarps are a highly nutritious primary food source for some small mammals like the Tasmanian bettong . Evidence of this is that the composition of fungi in the diet of Tasmanian bettong was positively correlated with body condition and growth rates of pouch young. [ 10 ] Ectomycorrhizal or hypogeous fungi form a symbiotic relationship with small mycophagous mammals. Hypogeous sporocarps depend on small fungivorous mammals to disperse their spores since they are underground and cannot utilize wind dispersal like epigeous sporocarps. [ 11 ] Underground fungi also play a role in a three-way symbiotic relationship with small marsupials and Australian Eucalyptus forests. In Eucalyptus forests, hypogeous sporocarp dispersal is positively affected by fires. After a fire, most if not all epigeous sporocarps are wiped out, leaving hypogeous sporocarps to be the primary source of fungi for small marsupials. [ 12 ] The ability of hypogeous fungi to resist disasters, such as fire, could be due to their evolved ability to survive the digestive systems of animals in order to distribute. Sporocarps can also serve as a food source for other fungi. 
Sporocarps can be hosts to diverse communities of fungicolous fungi. Short-lived sporocarps are more often hosts to fungicolous fungi than are long-lived sporocarps, which may have evolved more investment in defense mechanisms and tend to have less water content than their short-lived counterparts. [ 1 ] Resupinate sporocarps, sporocarps that have a higher surface area to volume ratio, are hosts to a higher diversity of fungicolous fungi than pileate sporocarps are. [ 1 ] The sporocarps of some species have been observed to mark gravesites and sites of corpse decomposition. [ 13 ]
https://en.wikipedia.org/wiki/Sporocarp_(fungus)
Sporogenesis is the production of spores in biology . The term is also used to refer to the process of reproduction via spores. Reproductive spores were found to be formed in eukaryotic organisms, such as plants , algae and fungi , during their normal reproductive life cycle . Dormant spores are formed, for example by certain fungi and algae, primarily in response to unfavorable growing conditions. Most eukaryotic spores are haploid and form through cell division, though some types are diploids or dikaryons and form through cell fusion. This type of reproduction can also be called single pollination . [ citation needed ] Reproductive spores are generally the result of cell division, most commonly meiosis (e.g. in plant sporophytes ). Sporic meiosis is needed to complete the sexual life cycle of the organisms using it. In some cases, sporogenesis occurs via mitosis (e.g. in some fungi and algae). Mitotic sporogenesis is a form of asexual reproduction . Examples are the conidial fungi Aspergillus and Penicillium , for which mitospore formation appears to be the primary mode of reproduction. Other fungi, such as ascomycetes , utilize both mitotic and meiotic spores. The red alga Polysiphonia alternates between mitotic and meiotic sporogenesis and both processes are required to complete its complex reproductive life cycle. In the case of dormant spores in eukaryotes, sporogenesis often occurs as a result of fertilization or karyogamy forming a diploid spore equivalent to a zygote . Therefore, zygospores are the result of sexual reproduction . Reproduction via spores involves the spreading of the spores by water or air. Algae and some fungi ( chytrids ) often use motile zoospores that can swim to new locations before developing into sessile organisms. Airborne spores are obvious in fungi, for example when they are released from puffballs . Other fungi have more active spore dispersal mechanisms. For example, the fungus Pilobolus can shoot its sporangia towards light. Plant spores designed for dispersal are also referred to as diaspores . Plant spores are most obvious in the reproduction of ferns and mosses . However, they also exist in flowering plants where they develop hidden inside the flower. For example, the pollen grains of flowering plants develop out of microspores produced in the anthers . Reproductive spores grow into multicellular haploid individuals or sporelings . In heterosporous organisms, two types of spores exist: microspores give rise to males and megaspores to females. In homosporous organisms, all spores look alike and grow into individuals carrying reproductive parts of both genders. Sporogenesis occurs in reproductive structures termed sporangia . The process involves sporogenous cells (sporocytes, also called spore mother cells) undergoing cell division to give rise to spores. In meiotic sporogenesis, a diploid spore mother cell within the sporangium undergoes meiosis, producing a tetrad of haploid spores. In organisms that are heterosporous , two types of spores occur: Microsporangia produce male microspores, and megasporangia produce female megaspores. In megasporogenesis, often three of the four spores degenerate after meiosis, whereas in microsporogenesis all four microspores survive. In gymnosperms , such as conifers , microspores are produced through meiosis from microsporocytes in microstrobili or male cones. In flowering plants , microspores are produced in the anthers of flowers. Each anther contains four pollen sacs , which contain the microsporocytes. 
After meiosis, each microspore undergoes mitotic cell division, giving rise to multicellular pollen grains (six nuclei in gymnosperms, three nuclei in flowering plants). Megasporogenesis occurs in megastrobili in conifers (for example a pine cone) and inside the ovule in the flowers of flowering plants. A megasporocyte inside a megasporangium or ovule undergoes meiosis, producing four megaspores. Only one is a functional megaspore whereas the others stay dysfunctional or degenerate. The megaspore undergoes several mitotic divisions to develop into a female gametophyte (for example the seven-cell/eight-nuclei embryo sac in flowering plants). Some fungi and algae produce mitospores through mitotic cell division within a sporangium. In fungi, such mitospores are referred to as conidia . Some algae, and fungi form resting spores made to survive unfavorable conditions. Typically, changes in the environment from favorable to unfavorable growing conditions will trigger a switch from asexual reproduction to sexual reproduction in these organisms. The resulting spores are protected through the formation of a thick cell wall and can withstand harsh conditions such as drought or extreme temperatures. Examples are chlamydospores , teliospores , zygospores , and myxospores . Similar survival structures produced in some bacteria are known as endospores . Chlamydospores are generally multicellular, asexual structures. Teliospores are a form of chlamydospore produced through the fusion of cells or hyphae where the nuclei of the fused cells stay separate. These nuclei undergo karyogamy and meiosis upon germination of the spore. Zygospores are formed in certain fungi ( zygomycota , for example Rhizopus ) and some algae (for example Chlamydomonas ). The zygospore forms through the isogamic fusion of two cells (motile single cells in Chlamydomonas ) or sexual conjugation between two hyphae (in zygomycota). Plasmogamy is followed by karyogamy , therefore zygospores are diploid ( zygotes ). They will undergo zygotic meiosis upon germinating. In oomycetes , the zygote forms through the fertilization of an egg cell with a sperm nucleus and enters a resting stage as a diploid, thick-walled oospore . The germinating oospore undergoes mitosis and gives rise to diploid hyphae which reproduce asexually via mitotic zoospores as long as conditions are favorable. In diatoms , fertilization gives rise to a zygote termed auxospore . Besides sexual reproduction and as a resting stage, the function of an auxospore is the restoration of the original cell size, as diatoms get progressively smaller during mitotic cell division. Auxospores divide by mitosis. The term sporogenesis can also refer to endospore formation in bacteria , which allows the cells to survive unfavorable conditions. Endospores are not reproductive structures and their formation does not require cell fusion or division. Instead, they form through the production of an encapsulating spore coat within the spore-forming cell. There are many parts of the spore 'plant'. The structure enclosing a group of spores is called a sporangium . [ clarification needed ]
https://en.wikipedia.org/wiki/Sporogenesis
Sporoplasm is an infectious material present in the cytoplasm of various fungi-like organisms, such as members of the class Microsporidia . Sporoplasm is defined as a mass of protoplasm that gives rise to or forms a spore; it is the protoplasmic body that is released as an infective amoebula from a cnidosporidian cyst . [ 1 ] It is injected into the host cell through a coiled polar tube, which acts as a spring-like tubular extrusion mechanism. It is mainly involved in the asexual cycle of the organism. Inside the host cell, the sporoplasm multiplies to generate meronts , cells with loosely organized organelles enclosed in a simple plasma membrane . [ 2 ] Multiplication occurs either by merogony (binary fission), schizogony (multiple fission), or plasmotomy (division of the nucleus without corresponding division of the cytoplasm, producing multinucleated offspring).
https://en.wikipedia.org/wiki/Sporoplasm
Sporopollenin is a biological polymer found as a major component of the tough outer (exine) walls of plant spores and pollen grains. It is chemically very stable (one of the most inert among biopolymers) [ 1 ] and is usually well preserved in soils and sediments . The exine layer is often intricately sculptured in species-specific patterns, allowing material recovered from (for example) lake sediments to provide useful information to palynologists about plant and fungal populations in the past. Sporopollenin has found uses in the field of paleoclimatology as well. Sporopollenin is also found in the cell walls of several taxa of green alga , including Phycopeltis (an ulvophycean ) [ 2 ] and Chlorella . [ 3 ] Spores are dispersed by many different environmental factors, such as wind, water or animals. In suitable conditions, the sporopollenin-rich walls of pollen grains and spores can persist in the fossil record for hundreds of millions of years, since sporopollenin is resistant to chemical degradation by organic and inorganic chemicals. [ 4 ] The chemical composition of sporopollenin has long been elusive due to its unusual chemical stability, insolubility and resistance to degradation by enzymes and strong chemical reagents. It was once thought to consist of polymerised carotenoids but the application of more detailed analytical methods since the 1980s has shown that this is not correct. [ 5 ] Analyses have revealed a complex biopolymer , containing mainly long-chain fatty acids , phenylpropanoids , phenolics and traces of carotenoids in a random co-polymer. It is likely that sporopollenin derives from several precursors that are chemically cross-linked to form a rigid structure. [ 4 ] There is also good evidence that the chemical composition of sporopollenin is not the same in all plants, indicating it is a class of compounds rather than having one constant structure. [ 5 ] In 2019, thioacidolysis degradation and solid-state NMR was used to determine the molecular structure of pitch pine sporopollenin, finding it primarily composed of polyvinyl alcohol units alongside other aliphatic monomers, all crosslinked through a series of acetal linkages. Its complex and heterogeneous chemical structure give some protection from the biodegradative enzymes of bacteria, fungi and animals. [ 6 ] Some aromatic structures based on p -coumarate and naringenin were also identified within the sporopollenin polymer. These can absorb ultraviolet light and thus prevent it penetrating further into the spore. This has relevance to the role of pollen and spores in transporting and dispersing the gametes of plants. The DNA of the gametes is readily damaged by the ultraviolet component of daylight. Sporopollenin thus provides some protection from this damage as well as a physically robust container. [ 6 ] Analysis of sporopollenin from the clubmoss Lycopodium in the late 1980s have shown distinct structural differences from that of flowering plants. [ 5 ] In 2020, more detailed analysis of sporopollenin from Lycopodium clavatum provided more structural information. It showed a complete lack of aromatic structures and the presence of a macrocyclic backbone of polyhydroxylated tetraketide-like monomers with pseudo-aromatic 2-pyrone rings. These were crosslinked to a poly(hydroxy acid) chain by ether linkages to form the polymer. [ 7 ] Electron microscopy shows that the tapetal cells that surround the developing pollen grain in the anther have a highly active secretory system containing lipophilic globules. 
[ 8 ] These globules are believed to contain sporopollenin precursors. Tracer experiments have shown that phenylalanine is a major precursor, but other carbon sources also contribute. [ 4 ] The biosynthetic pathway for phenylpropanoid is very active in tapetal cells, supporting the idea that its products are needed for sporopollenin synthesis. Chemical inhibitors of pollen development and many male sterile mutants have effects on the secretion of these globules by the tapetal cells. [ 8 ]
https://en.wikipedia.org/wiki/Sporopollenin
Sporosarcina pasteurii , formerly known as Bacillus pasteurii in older taxonomies, is a gram-positive bacterium with the ability to precipitate calcite and solidify sand given a calcium source and urea , through the process of microbiologically induced calcite precipitation (MICP) or biological cementation . [ 2 ] S. pasteurii has been proposed to be used as an ecologically sound biological construction material. Researchers studied the bacteria in conjunction with plastic and hard mineral, forming a material stronger than bone. [ 3 ] It is commonly used for MICP since it is non-pathogenic and is able to produce high amounts of the enzyme urease , which hydrolyzes urea to carbonate and ammonia . [ 4 ] S. pasteurii is a gram-positive, rod-shaped bacterium. It has the ability to form endospores in the right environmental conditions to enhance its survival, which is a characteristic of its bacillus class. [ 5 ] It has dimensions of 0.5 to 1.2 microns in width and 1.3 to 4.0 microns in length. Because it is an alkaliphile , it thrives in basic environments of pH 9–10. It can survive relatively harsh conditions up to a pH of 11.2. [ 4 ] S. pasteurii are soil-borne facultative anaerobes that are heterotrophic and require urea and ammonium for growth. [ 6 ] The ammonium is utilized in order to allow substrates to cross the cell membrane into the cell. [ 6 ] The urea is used as the nitrogen and carbon source for the bacterium. S. pasteurii are able to induce the hydrolysis of urea and use it as a source of energy by producing and secreting the urease enzyme. The enzyme hydrolyzes the urea to form carbamate and ammonia. During this hydrolysis, a few further spontaneous reactions occur: carbamate is hydrolyzed to carbonic acid and ammonia, which are then further converted to ammonium and bicarbonate . [ 4 ] This process causes the pH of the reaction to increase by 1–2 pH units, making the environment more basic and promoting the conditions in which this bacterium thrives. [ 7 ] Maintaining a medium with this pH can be expensive for large-scale production of this bacterium for biocementation. A wide range of factors can affect the growth rate of S. pasteurii , including temperature, pH, urea concentration, bacterial density, and oxygen levels. [ 7 ] It has been found that the optimal growing temperature is 30 °C, independent of the other environmental factors present. [ 5 ] Since S. pasteurii are halotolerant , they can grow in the presence of aqueous chloride ions at concentrations low enough not to inhibit bacterial cell growth. [ 7 ] This shows promising applications for MICP use. S. pasteurii DSM 33 is described as auxotrophic for L-methionine , L-cysteine , thiamine and nicotinic acid . [ 8 ] The whole genome of S. pasteurii NCTC4822 was sequenced and reported under NCBI Accession Number NZ_UGYZ01000000 . With a chromosome length of 3.3 Mb, it contains 3,036 protein-coding genes and has a GC content of 39.17%. [ 9 ] When the ratio of known functional genes to unknown genes is calculated, the bacterium shows the highest ratios for transport, metabolism, and transcription. The high proportion of these functions allows the conversion of urea to carbonate ions, which is necessary for the bio-mineralization process. 
[ 9 ] The bacterium has seven identified genes that are directly related to urease activity and assembly, which can be further studied to give insight into maximizing urease production and optimizing the use of S. pasteurii in industrial applications. [ 9 ] S. pasteurii has the unique capability of hydrolyzing urea and, through a series of reactions, producing carbonate ions. This is done by secreting copious amounts of urease through the cell membrane . [ 5 ] When the bacterium is placed in a calcium-rich environment, the negatively charged carbonate ions react with positively charged metal ions such as calcium to precipitate calcium carbonate , or bio-cement. [ 4 ] The calcium carbonate can then be used as a precipitate or can be crystallized as calcite to cement sand particles together. When put into a calcium chloride environment, S. pasteurii are able to survive since they are halotolerant and alkaliphiles. Since the bacteria remain intact during harsh mineralization conditions, are robust, and carry a negative surface charge , they serve as good nucleation sites for MICP . [ 9 ] The negatively charged cell wall of the bacterium provides a site of interaction for the positively charged cations to form minerals . The extent of this interaction depends on a variety of factors including the characteristics of the cell surface, the amount of peptidoglycan , the amidation level of free carboxyl groups, and the availability of teichoic acids . [ 7 ] S. pasteurii shows a highly negative surface charge, reflected in its zeta potential of −67 mV, compared with the non-mineralizing bacteria E . coli , S . aureus and B . subtilis at −28, −26 and −40.8 mV, respectively. [ 9 ] Aside from these benefits of using S. pasteurii for MICP, there are limitations such as undeveloped engineering scale-up, undesired by-products, uncontrolled growth, and dependence on growth conditions like urea or oxygen concentrations. [ 9 ] S. pasteurii can be used to improve construction materials such as concrete or mortar. Concrete is one of the most used materials in the world, but it is susceptible to forming cracks which can be costly to fix. One solution is to embed this bacterium in the cracks; once it is activated, MICP forms minerals that repair the gap in a permanent, environmentally friendly way. One disadvantage is that this technique is possible only for external surfaces that are reachable. [ 7 ] Another application is to use S. pasteurii in bio self-healing of concrete, which involves incorporating the bacterium into the concrete matrix during preparation to heal micro-cracks. This has the benefit of minimal human intervention and yields more durable concrete with higher compressive strength . [ 7 ] One limitation of using this bacterium for bio-mineralization is that, although it is a facultative anaerobe , it is unable to synthesize urease in the absence of oxygen. A lack of oxygen also prevents MICP since its initiation relies heavily on oxygen. Therefore, at sites distant from the injection location or at great depths, the likelihood of precipitation decreases. [ 9 ] One potential fix is to couple this bacterium in the biocement with oxygen-releasing compounds (ORCs) that are typically used for bioremediation and removal of pollutants from soil. [ 7 ] With this combination, the lack of oxygen can be diminished and the MICP can be optimized. 
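The overall chemistry described above can be summarized by two textbook reactions: urease-driven hydrolysis, CO(NH₂)₂ + 2H₂O → 2NH₄⁺ + CO₃²⁻, followed by precipitation, Ca²⁺ + CO₃²⁻ → CaCO₃. The sketch below turns this into a back-of-the-envelope upper bound on calcite yield per gram of urea; it assumes complete hydrolysis and excess calcium, which real systems generally do not achieve.

```python
# Back-of-the-envelope MICP stoichiometry sketch (assumes complete hydrolysis and
# excess calcium; real yields depend on growth conditions and are lower).
#   CO(NH2)2 + 2 H2O  ->  2 NH4+ + CO3^2-      (urease-driven hydrolysis)
#   Ca^2+ + CO3^2-    ->  CaCO3 (calcite)      (precipitation)
M_UREA = 60.06    # g/mol
M_CACO3 = 100.09  # g/mol

def theoretical_calcite_yield(urea_grams):
    """Grams of CaCO3 per gram of urea at 1:1 molar conversion."""
    moles_urea = urea_grams / M_UREA
    return moles_urea * M_CACO3   # one carbonate ion (and one CaCO3) per urea

print(f"{theoretical_calcite_yield(10.0):.2f} g CaCO3 from 10 g urea (upper bound)")
# ~16.7 g: each mole of urea can supply at most one mole of carbonate for precipitation.
```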
Further current and potential applications have been proposed. Considerations for using this bacterium in industrial applications include scale-up potential, economic feasibility, long-term viability of the bacteria, adhesion behavior of calcium carbonate, and polymorphism . [ 7 ]
https://en.wikipedia.org/wiki/Sporosarcina_pasteurii
Sports engineering is a sub-discipline of engineering that applies math and science to develop technology, equipment, and other resources as they pertain to sport. Sports engineering was first introduced by Isaac Newton ’s observation of a tennis ball. [ 1 ] In the mid-twentieth century, Howard Head became one of the first engineers to apply engineering principles to improve sports equipment. [ 2 ] Starting in 1999, the biannual international conference for sports engineering was established to commemorate achievements in the field. [ 3 ] Presently, the journal “ Sports Engineering ” details the innovations and research projects that sports engineers are working on. [ 3 ] The study of sports engineering requires an understanding of a variety of engineering topics, including physics , mechanical engineering , materials science , and biomechanics . [ 4 ] Many practitioners hold degrees in those topics rather than in sports engineering specifically. Specific study programs in sports engineering and technology are becoming more common at the graduate level, and also at the undergraduate level in Europe . Sports engineers also employ computational engineering tools like computer-aided design (CAD), computational fluid dynamics (CFD), and finite element analysis (FEA) to design and produce sports equipment, sportswear, and more. [ 1 ] One of the earliest instances of the application of scientific principles in sports context occurred in 1671 when English mathematician Isaac Newton wrote a letter to German theologian and natural philosopher Henry Oldenburg regarding a tennis ball’s flight mechanics. [ 1 ] In the following centuries, German scientist Heinrich Gustav Magnus further examined Newton’s analysis and applied Newtonian theories to the spinning properties of balls. [ 1 ] Around 1760, in the midst of the Industrial Revolution , sports engineering was further explored with the acceleration of the manufacturing of sports equipment. [ 1 ] During this stage, the manufacturers recognized an increase in sales being directly related to better quality of equipment. [ 1 ] As a result, experimentation started to explore new designs and materials for enhanced athletic performance. [ 1 ] In modern times, sports engineers, such as Howard Head , applied engineering principles to sports equipment. [ 2 ] After finding traditional snow skis to be too heavy, Head developed a lighter, more flexible skis in 1947. [ 2 ] He used his knowledge from the aircraft industry to create skis with a metal-sandwich construction. [ 2 ] After 40 iterations and 3 years, he released his skis commercially, and they soon set the standard for skis. [ 2 ] Today, his skis are widely known and recognized under the brand Head , with Head Sportswear International, and the Head Ski Company. [ 2 ] Head also developed the Prince Classic tennis racquet. [ 2 ] He created a much lighter design, with a bigger frame supporting off-center hits, and a grip that did not twist in players' hands. [ 2 ] As with his skis, Head's oversized racquets were embraced by top athletes in the sport. [ 2 ] In 1998, the International Sports Engineering Association (ISEA) was established and the journal “ Sports Engineering ” was published. [ 3 ] In 1999, the first international sports engineering conference was organized by Steve Haake called “The International Conference on the Engineering of Sports” in Sheffield, England. 
[ 3 ] The conference brings world-leading researchers, sports professionals, and industry organizations together to celebrate the profession, showcasing innovations in both research and industry. Sports engineering in the United States is often part of universities' undergraduate mechanical engineering programs, rather than a stand-alone bachelor's degree program. [ 5 ] On the graduate level, research labs often use an interdisciplinary approach to sports engineering, such as the MIT Sports Lab [ 6 ] and the Biosports Lab at UC Davis . [ 7 ] Some graduate opportunities, like the program offered through Purdue, include concentrations in sports engineering within the mechanical engineering or materials engineering department. [ 8 ] Most sports engineering students pursue Bachelor's degrees in other areas within engineering, including mechanical, electrical, and materials engineering; there is no uniform educational path for becoming a sports engineer. Although universities in the United States offer sports engineering courses or concentrations, more extensive degree programs in the subject are more common in the United Kingdom . Sports engineering in academics is more developed in the United Kingdom [ 9 ] with programs at the undergraduate and graduate levels. The Sports Engineering Research Group at Sheffield Hallam University [ 10 ] (regarded as the 'home' of sports engineering as a discipline) and Loughborough University offer one-year, full-time sports engineering postgraduate programs. [ 11 ] Nottingham Trent University offers a three-year, full-time undergraduate program that is based on industry-oriented seminars and activities as well as on-campus research experiences like the Sports Engineering lab. [ 12 ] A full list of courses is available from the ISEA . [ 13 ] Course offerings in sports engineering synthesize content from both engineering and sports science. [ 1 ] Programs in sports engineering encompass engineering-oriented classes such as physics, aerodynamics, and materials science, as well as more sports science-based courses such as biomechanics and anatomy. [ 14 ] Computational modeling is commonly employed across many engineering disciplines and is often applied to sports. Computational fluid dynamics (CFD) can be used in sports engineering education to model flow in both air and water systems. Sports engineers can use computational modeling systems to analyze the behavior of an object without having to physically produce it. For example, CFD has been used to predict fluid patterns around a skier jumping through the air or a swimmer moving through the water, to reduce the drag acting on the athlete. [ 15 ] Finite element analysis (FEA) is another engineering modeling tool applied in sports engineering to simulate the physics of applied forces acting in a system. For example, FEA can be used to analyze the impact of a ball against a tennis racket or the deformation resulting from the impact of a football. [ 1 ] Undergraduate and graduate level programs in sports engineering are more common in Europe than in the United States, and a range of such offerings is currently available. Sports engineering has a variety of applications across the sports industry; some examples of these applications and related technologies follow. Computer-aided design (CAD) and finite element analysis (FEA) can be used to design and test sports equipment. 
Engineers can use FEA to apply different stresses to an object and determine its strengths and weaknesses. For example, FEA can be used to model a tennis racket hitting the ball, including how the racket and ball might deform or vibrate as a result of the strike. [ 1 ] Computational Fluid Dynamics (CFD) can be applied to sports such as cycling to examine the aerodynamics of cycles and riders' body positions. [ 1 ] This information is useful in understanding how to increase cycling speeds and decrease exertion for riders. One notable example of how engineering intersects with sportswear is Speedo’s LZR Racer , a swimsuit made in collaboration with NASA researchers and engineers. [ 28 ] Sports engineers tested different materials and coatings in a wind tunnel to determine how to reduce drag. [ 28 ] Engineers also optimized stability and mobility by using layering and welding techniques specific to particular body parts. [ 29 ] For instance, the abdomen and lower back areas of the suit were made tighter to improve core stability. [ 29 ] The LZR Racer was able to reduce skin friction drag by 24% compared to Speedo’s previously most advanced suit. [ 29 ] These engineering applications helped swimmers who wore Speedo’s LZR Racer to set 93 world records. [ 29 ] Materials science , mechanical engineering , sports science , sports medicine , biomechanics , and physics are some fields that overlap with sports engineering.
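The drag reductions pursued in the CFD and swimsuit examples can be put in rough numerical terms with the standard drag equation F = ½ρv²C_dA. The C_dA values below are illustrative guesses for a cyclist, not measurements, but they show how strongly a lower drag area translates into saved power.

```python
# Quick sketch of the aerodynamic drag that CFD studies try to reduce,
# using the standard drag equation F = 0.5 * rho * v^2 * Cd * A.
# The Cd*A values are purely illustrative, not measured data.
rho = 1.225  # air density, kg/m^3

def drag_power(speed_ms, cda):
    """Power (W) needed to overcome aerodynamic drag at a given speed."""
    force = 0.5 * rho * speed_ms**2 * cda
    return force * speed_ms

v = 12.0  # m/s, roughly 43 km/h
for label, cda in [("upright rider", 0.40), ("aero tuck", 0.25)]:
    print(f"{label}: ~{drag_power(v, cda):.0f} W at {v:.0f} m/s")
# Lowering Cd*A from 0.40 to 0.25 cuts the drag power by ~37% at the same speed.
```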
https://en.wikipedia.org/wiki/Sports_engineering
Spot analysis , spot test analysis , or a spot test is a chemical test , a simple and efficient technique where analytic assays are executed in only one or a few drops of a chemical solution, preferably on a piece of filter paper, without using any sophisticated instrumentation. The development and popularization of the test is credited to Fritz Feigl . [ 1 ] [ 2 ] Spot test or spot assay can also refer to a test often used in microbiology. The foundations of Feigl's work on spot analysis were the works of Hugo Schiff (the earliest publication about a "spot test" was Schiff's detection of uric acid in 1859 [ 3 ] ) and of Christian Friedrich Schönbein and Friedrich Goppelsröder on capillary analysis . [ 2 ] On the occasion of Feigl's 70th birthday, the Chemical Society of Midland sponsored a symposium in 1952, attended by 500 scientists from 24 countries, in which all plenary sessions were related to spot tests. [ 2 ] The test uses the qualitative characteristics of colored compounds to account for the chemical reactions performed. This technique has been used to develop new quantification methods using modern technology. [ 4 ] A spot assay or spot test can also refer to a specific test in microbiology. This test is often used to check the growth rate of bacterial or yeast cells on different media or to perform serial dilution tests of micro-organisms. Usually a 96-pinner (often called a frogger) is used to perform these spot assays. Another application is high-throughput screening , which often uses spot assays to determine the growth of, e.g., mated cells or to check for protein–protein interactions in a yeast two-hybrid test . This is often done with a robot. [ 5 ]
https://en.wikipedia.org/wiki/Spot_analysis
In combinatorial game theory , the Sprague–Grundy theorem states that every impartial game under the normal play convention is equivalent to a one-heap game of nim , or to an infinite generalization of nim. It can therefore be represented as a natural number , the size of the heap in its equivalent game of nim, as an ordinal number in the infinite generalization, or alternatively as a nimber , the value of that one-heap game in an algebraic system whose addition operation combines multiple heaps to form a single equivalent heap in nim. The Grundy value or nim-value of any impartial game is the unique nimber that the game is equivalent to. In the case of a game whose positions are indexed by the natural numbers (like nim itself, which is indexed by its heap sizes), the sequence of nimbers for successive positions of the game is called the nim-sequence of the game. The Sprague–Grundy theorem and its proof encapsulate the main results of a theory discovered independently by R. P. Sprague (1936) [ 1 ] and P. M. Grundy (1939). [ 2 ] For the purposes of the Sprague–Grundy theorem, a game is a two-player sequential game of perfect information satisfying the ending condition (all games come to an end: there are no infinite lines of play) and the normal play condition (a player who cannot move loses). At any given point in the game, a player's position is the set of moves they are allowed to make. As an example, we can define the zero game to be the two-player game where neither player has any legal moves. Referring to the two players as A {\displaystyle A} (for Alice) and B {\displaystyle B} (for Bob), we would denote their positions as ( A , B ) = ( { } , { } ) {\displaystyle (A,B)=(\{\},\{\})} , since the set of moves each player can make is empty. An impartial game is one in which at any given point in the game, each player is allowed exactly the same set of moves. Normal-play nim is an example of an impartial game. In nim, there are one or more heaps of objects, and two players (we'll call them Alice and Bob), take turns choosing a heap and removing 1 or more objects from it. The winner is the player who removes the final object from the final heap. The game is impartial because for any given configuration of pile sizes, the moves Alice can make on her turn are exactly the same moves Bob would be allowed to make if it were his turn. In contrast, a game such as checkers is not impartial because, supposing Alice were playing red and Bob were playing black, for any given arrangement of pieces on the board, if it were Alice's turn, she would only be allowed to move the red pieces, and if it were Bob's turn, he would only be allowed to move the black pieces. Note that any configuration of an impartial game can therefore be written as a single position, because the moves will be the same no matter whose turn it is. For example, the position of the zero game can simply be written { } {\displaystyle \{\}} , because if it's Alice's turn, she has no moves to make, and if it's Bob's turn, he has no moves to make either. A move can be associated with the position it leaves the next player in. Doing so allows positions to be defined recursively. For example, consider the following game of Nim played by Alice and Bob. The special names ∗ 0 {\displaystyle *0} , ∗ 1 {\displaystyle *1} , and ∗ 2 {\displaystyle *2} referenced in our example game are called nimbers . 
In general, the nimber ∗ n {\displaystyle *n} corresponds to the position in a game of nim where there are exactly n {\displaystyle n} objects in exactly one heap. Formally, nimbers are defined inductively as follows: ∗ 0 {\displaystyle *0} is { } {\displaystyle \{\}} , ∗ 1 = { ∗ 0 } {\displaystyle *1=\{*0\}} , ∗ 2 = { ∗ 0 , ∗ 1 } {\displaystyle *2=\{*0,*1\}} and for all n ≥ 0 {\displaystyle n\geq 0} , ∗ ( n + 1 ) = ∗ n ∪ { ∗ n } {\displaystyle *(n+1)=*n\cup \{*n\}} . While the word nim ber comes from the game nim , nimbers can be used to describe the positions of any finite, impartial game, and in fact, the Sprague–Grundy theorem states that every instance of a finite, impartial game can be associated with a single nimber. Two games can be combined by adding their positions together. For example, consider another game of nim with heaps A ′ {\displaystyle A'} , B ′ {\displaystyle B'} , and C ′ {\displaystyle C'} . We can combine it with our first example to get a combined game with six heaps: A {\displaystyle A} , B {\displaystyle B} , C {\displaystyle C} , A ′ {\displaystyle A'} , B ′ {\displaystyle B'} , and C ′ {\displaystyle C'} : To differentiate between the two games, for the first example game , we'll label its starting position S {\displaystyle \color {blue}S} , and color it blue: S = { { ∗ 1 , { ∗ 1 } , ∗ 2 } , { ∗ 2 , { ∗ 1 , { ∗ 1 } , ∗ 2 } } , { { ∗ 1 } , { { ∗ 1 } } , { ∗ 1 , { ∗ 1 } , ∗ 2 } } } {\displaystyle \color {blue}S={\Big \{}{\big \{}*1,\{*1\},*2{\big \}},{\big \{}*2,\{*1,\{*1\},*2\}{\big \}},{\big \{}\{*1\},\{\{*1\}\},\{*1,\{*1\},*2\}{\big \}}{\Big \}}} For the second example game , we'll label the starting position S ′ {\displaystyle \color {red}S'} and color it red: S ′ = { { ∗ 1 } } . {\displaystyle \color {red}S'={\Big \{}\{*1\}{\Big \}}.} To compute the starting position of the combined game , remember that a player can either make a move in the first game, leaving the second game untouched, or make a move in the second game, leaving the first game untouched. So the combined game's starting position is: S + S ′ = { S + { ∗ 1 } } ∪ { S ′ + { ∗ 1 , { ∗ 1 } , ∗ 2 } , S ′ + { ∗ 2 , { ∗ 1 , { ∗ 1 } , ∗ 2 } } , S ′ + { { ∗ 1 } , { { ∗ 1 } } , { ∗ 1 , { ∗ 1 } , ∗ 2 } } } {\displaystyle \color {blue}S\color {black}+\color {red}S'\color {black}={\Big \{}\color {blue}S\color {black}+\color {red}\{*1\}\color {black}{\Big \}}\cup {\Big \{}\color {red}S'\color {black}+\color {blue}\{*1,\{*1\},*2\}\color {black},\color {red}S'\color {black}+\color {blue}\{*2,\{*1,\{*1\},*2\}\}\color {black},\color {red}S'\color {black}+\color {blue}\{\{*1\},\{\{*1\}\},\{*1,\{*1\},*2\}\}\color {black}{\Big \}}} The explicit formula for adding positions is: S + S ′ = { S + s ′ ∣ s ′ ∈ S ′ } ∪ { s + S ′ ∣ s ∈ S } {\displaystyle S+S'=\{S+s'\mid s'\in S'\}\cup \{s+S'\mid s\in S\}} , which means that addition is both commutative and associative. Positions in impartial games fall into two outcome classes : either the next player (the one whose turn it is) wins (an N {\displaystyle {\boldsymbol {\mathcal {N}}}} - position ), or the previous player wins (a P {\displaystyle {\boldsymbol {\mathcal {P}}}} - position ). So, for example, ∗ 0 {\displaystyle *0} is a P {\displaystyle {\mathcal {P}}} -position, while ∗ 1 {\displaystyle *1} is an N {\displaystyle {\mathcal {N}}} -position. Two positions G {\displaystyle G} and G ′ {\displaystyle G'} are equivalent if, no matter what position H {\displaystyle H} is added to them, they are always in the same outcome class. 
Formally, G ≈ G ′ {\displaystyle G\approx G'} if and only if ∀ H {\displaystyle \forall H} , G + H {\displaystyle G+H} is in the same outcome class as G ′ + H {\displaystyle G'+H} . To use our running examples, notice that in both the first and second games above, we can show that on every turn, Alice has a move that forces Bob into a P {\displaystyle {\mathcal {P}}} -position. Thus, both S {\displaystyle \color {blue}S} and S ′ {\displaystyle \color {red}S'} are N {\displaystyle {\mathcal {N}}} -positions. (Notice that in the combined game, Bob is the player with the N {\displaystyle {\mathcal {N}}} -positions. In fact, S + S ′ {\displaystyle \color {blue}S\color {black}+\color {red}S'} is a P {\displaystyle {\mathcal {P}}} -position, which as we will see in Lemma 2, means S ≈ S ′ {\displaystyle \color {blue}S\color {black}\approx \color {red}S'} .) As an intermediate step to proving the main theorem, we show that for every position G {\displaystyle G} and every P {\displaystyle {\mathcal {P}}} -position A {\displaystyle A} , the equivalence G ≈ A + G {\displaystyle G\approx A+G} holds. By the above definition of equivalence, this amounts to showing that G + H {\displaystyle G+H} and A + G + H {\displaystyle A+G+H} share an outcome class for all H {\displaystyle H} . Suppose that G + H {\displaystyle G+H} is a P {\displaystyle {\mathcal {P}}} -position. Then the previous player has a winning strategy for A + G + H {\displaystyle A+G+H} : respond to moves in A {\displaystyle A} according to their winning strategy for A {\displaystyle A} (which exists by virtue of A {\displaystyle A} being a P {\displaystyle {\mathcal {P}}} -position), and respond to moves in G + H {\displaystyle G+H} according to their winning strategy for G + H {\displaystyle G+H} (which exists for the analogous reason). So A + G + H {\displaystyle A+G+H} must also be a P {\displaystyle {\mathcal {P}}} -position. On the other hand, if G + H {\displaystyle G+H} is an N {\displaystyle {\mathcal {N}}} -position, then A + G + H {\displaystyle A+G+H} is also an N {\displaystyle {\mathcal {N}}} -position, because the next player has a winning strategy: choose a P {\displaystyle {\mathcal {P}}} -position from among the G + H {\displaystyle G+H} options, and we conclude from the previous paragraph that adding A {\displaystyle A} to that position is still a P {\displaystyle {\mathcal {P}}} -position. Thus, in this case, A + G + H {\displaystyle A+G+H} must be a N {\displaystyle {\mathcal {N}}} -position, just like G + H {\displaystyle G+H} . As these are the only two cases, the lemma holds. As a further step, we show that G ≈ G ′ {\displaystyle G\approx G'} if and only if G + G ′ {\displaystyle G+G'} is a P {\displaystyle {\mathcal {P}}} -position. In the forward direction, suppose that G ≈ G ′ {\displaystyle G\approx G'} . Applying the definition of equivalence with H = G {\displaystyle H=G} , we find that G ′ + G {\displaystyle G'+G} (which is equal to G + G ′ {\displaystyle G+G'} by commutativity of addition) is in the same outcome class as G + G {\displaystyle G+G} . But G + G {\displaystyle G+G} must be a P {\displaystyle {\mathcal {P}}} -position: for every move made in one copy of G {\displaystyle G} , the previous player can respond with the same move in the other copy, and so always make the last move. 
In the reverse direction, since A = G + G ′ {\displaystyle A=G+G'} is a P {\displaystyle {\mathcal {P}}} -position by hypothesis, it follows from the first lemma, G ≈ G + A {\displaystyle G\approx G+A} , that G ≈ G + ( G + G ′ ) {\displaystyle G\approx G+(G+G')} . Similarly, since B = G + G {\displaystyle B=G+G} is also a P {\displaystyle {\mathcal {P}}} -position, it follows from the first lemma in the form G ′ ≈ G ′ + B {\displaystyle G'\approx G'+B} that G ′ ≈ G ′ + ( G + G ) {\displaystyle G'\approx G'+(G+G)} . By associativity and commutativity, the right-hand sides of these results are equal. Furthermore, ≈ {\displaystyle \approx } is an equivalence relation because equality is an equivalence relation on outcome classes. Via the transitivity of ≈ {\displaystyle \approx } , we can conclude that G ≈ G ′ {\displaystyle G\approx G'} . We prove that all positions are equivalent to a nimber by structural induction . The more specific result, that the given game's initial position must be equivalent to a nimber, shows that the game is itself equivalent to a nimber. Consider a position G = { G 1 , G 2 , … , G k } {\displaystyle G=\{G_{1},G_{2},\ldots ,G_{k}\}} . By the induction hypothesis , all of the options are equivalent to nimbers, say G i ≈ ∗ n i {\displaystyle G_{i}\approx *n_{i}} . So let G ′ = { ∗ n 1 , ∗ n 2 , … , ∗ n k } {\displaystyle G'=\{*n_{1},*n_{2},\ldots ,*n_{k}\}} . We will show that G ≈ ∗ m {\displaystyle G\approx *m} , where m {\displaystyle m} is the mex (minimum exclusion) of the numbers n 1 , n 2 , … , n k {\displaystyle n_{1},n_{2},\ldots ,n_{k}} , that is, the smallest non-negative integer not equal to some n i {\displaystyle n_{i}} . The first thing we need to note is that G ≈ G ′ {\displaystyle G\approx G'} , by way of the second lemma. If k {\displaystyle k} is zero, the claim is trivially true. Otherwise, consider G + G ′ {\displaystyle G+G'} . If the next player makes a move to G i {\displaystyle G_{i}} in G {\displaystyle G} , then the previous player can move to ∗ n i {\displaystyle *n_{i}} in G ′ {\displaystyle G'} , and conversely if the next player makes a move in G ′ {\displaystyle G'} . After this, the position is a P {\displaystyle {\mathcal {P}}} -position by the lemma's forward implication. Therefore, G + G ′ {\displaystyle G+G'} is a P {\displaystyle {\mathcal {P}}} -position, and, citing the lemma's reverse implication, G ≈ G ′ {\displaystyle G\approx G'} . Now let us show that G ′ + ∗ m {\displaystyle G'+*m} is a P {\displaystyle {\mathcal {P}}} -position, which, using the second lemma once again, means that G ′ ≈ ∗ m {\displaystyle G'\approx *m} . We do so by giving an explicit strategy for the previous player. Suppose that G ′ {\displaystyle G'} and ∗ m {\displaystyle *m} are empty. Then G ′ + ∗ m {\displaystyle G'+*m} is the null set, clearly a P {\displaystyle {\mathcal {P}}} -position. Or consider the case that the next player moves in the component ∗ m {\displaystyle *m} to the option ∗ m ′ {\displaystyle *m'} where m ′ < m {\displaystyle m'<m} . Because m {\displaystyle m} was the minimum excluded number, the previous player can move in G ′ {\displaystyle G'} to ∗ m ′ {\displaystyle *m'} . And, as shown before, any position plus itself is a P {\displaystyle {\mathcal {P}}} -position. Finally, suppose instead that the next player moves in the component G ′ {\displaystyle G'} to the option ∗ n i {\displaystyle *n_{i}} . 
If n i < m {\displaystyle n_{i}<m} then the previous player moves in ∗ m {\displaystyle *m} to ∗ n i {\displaystyle *n_{i}} ; otherwise, if n i > m {\displaystyle n_{i}>m} , the previous player moves in ∗ n i {\displaystyle *n_{i}} to ∗ m {\displaystyle *m} ; in either case the result is a position plus itself. (It is not possible that n i = m {\displaystyle n_{i}=m} because m {\displaystyle m} was defined to be different from all the n i {\displaystyle n_{i}} .) In summary, we have G ≈ G ′ {\displaystyle G\approx G'} and G ′ ≈ ∗ m {\displaystyle G'\approx *m} . By transitivity, we conclude that G ≈ ∗ m {\displaystyle G\approx *m} , as desired. If G {\displaystyle G} is a position of an impartial game, the unique integer m {\displaystyle m} such that G ≈ ∗ m {\displaystyle G\approx *m} is called its Grundy value, or Grundy number, and the function that assigns this value to each such position is called the Sprague–Grundy function. R. P. Sprague and P. M. Grundy independently gave an explicit definition of this function, not based on any concept of equivalence to nim positions, and showed that it had the following properties: the Grundy value of a single nim heap of size m {\displaystyle m} is m {\displaystyle m} ; a position is a previous-player win (a P {\displaystyle {\mathcal {P}}} -position) if and only if its Grundy value is zero; and the Grundy value of a sum of positions is the nim-sum (bitwise exclusive or) of the Grundy values of the summands. It follows straightforwardly from these results that if a position G {\displaystyle G} has a Grundy value of m {\displaystyle m} , then G + H {\displaystyle G+H} has the same Grundy value as ∗ m + H {\displaystyle *m+H} , and therefore belongs to the same outcome class, for any position H {\displaystyle H} . Thus, although Sprague and Grundy never explicitly stated the theorem described in this article, it follows directly from their results and is credited to them. [ 3 ] [ 4 ] These results have subsequently been developed into the field of combinatorial game theory , notably by Richard Guy , Elwyn Berlekamp , John Horton Conway and others, where they are now encapsulated in the Sprague–Grundy theorem and its proof in the form described here. The field is presented in the books Winning Ways for your Mathematical Plays and On Numbers and Games .
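The mex construction described above translates directly into code. Below is a minimal sketch, assuming a simple subtraction game in which a player may remove 1, 2 or 3 objects from a single heap; this particular move set is an illustrative choice, not one taken from the article.

```python
# Sprague-Grundy values via mex for an illustrative subtraction game.
from functools import lru_cache

MOVES = (1, 2, 3)   # assumed move set: remove 1, 2 or 3 objects per turn

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    values = set(values)
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy value of a heap of size n: mex of the Grundy values of its options."""
    options = [grundy(n - k) for k in MOVES if k <= n]
    return mex(options)

# A position is a previous-player win (P-position) exactly when its Grundy value is 0,
# and Grundy values of independent games combine by XOR (the nim-sum).
print([grundy(n) for n in range(10)])   # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
print(grundy(5) ^ grundy(7))            # Grundy value of the two heaps played side by side
```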
https://en.wikipedia.org/wiki/Sprague–Grundy_theorem
A spray pond is a reservoir in which warmed water (e.g. from a power plant ) is cooled before reuse by spraying the warm water with nozzles into the cooler air . Cooling takes place by exchange of heat with the ambient air, involving both conductive heat transfer between the water droplets and the surrounding air and evaporative cooling (which provides by far the greatest portion, typically 85 to 90%, of the total cooling). The primary purpose of spray pond design is thus to ensure an adequate degree of contact between the hot injection water and the ambient air, so as to facilitate the process of heat transfer. The spray pond is the predecessor to the natural draft cooling tower , which is much more efficient and takes up less space but has a much higher construction cost. A spray pond requires between 25 and 50 times the area of a cooling tower. However, some spray ponds are still in use today. The height of each spray nozzle above the surface of the pond should be between 1.5 m and 2.0 m. The spray nozzles themselves should be chosen so as to provide the desired spray pattern diameter at the pond surface, while yielding a maximum spray height of 2.5 m or more above the nozzle. This will provide an adequate contact time between the air and water and should be achievable with a delivery pressure of between 50 and 75 kPa across the nozzles. The performance of a spray pond depends to a large degree on the effectiveness of the spray nozzles which are installed. Ideally, the chosen nozzles should provide a fine, evenly distributed spray in conical form, be capable of passing small particles of suspended matter without blocking and be readily dismantled for cleaning. Typical droplet sizes which are achieved by spray pond nozzles vary between 3 mm and 6 mm. While providing better cooling performance because of their increased surface-to-volume ratios, the generation of droplets of smaller size would require an excessive pressure drop across the nozzles and could lead to increased wind-drift losses from the pond. Specific spray pond surface areas tend to range between 1.2 and 1.7 m 2 per m 3 /h of water to be cooled. The width chosen for a drift channel around the active zone of the pond (containing the sprays) is dependent on a number of factors, including the prevailing wind strength, the average size of the spray droplets produced by the nozzles, and the presence of any nearby structures which may be sensitive to fogging or water drift, such as roads, houses, etc. Drift channel widths between 3 and 4 m are typically recommended. In order to be most effective in terms of heat transfer, spray ponds should always be oriented with their longer sides at right angles to the direction of the prevailing wind . Additionally, spray ponds should be made as long and narrow as possible ( i.e. with a width-to-length ratio as low as possible), so as to decrease the path length which the ambient air must travel across the pond. The depth of a spray pond has very little influence on its thermal performance. However, the pond should contain sufficient water to fill all flumes , seal wells and pump suctions during plant startup. Typically, spray pond depths of between 0.9 m and 1.5 m are recommended in the literature, with a depth of 0.9 m being most common. Additionally, sufficient additional volume above the normal operating level should be provided within the spray pond to accept all water drainage from these flumes, seal wells and pump suctions when the plant is stopped. 
Drift and evaporative losses from spray ponds of conventional design range between 3 and 5%. The thermal efficiency of a spray pond may be calculated based on its approach to the saturation ( wet bulb ) temperature of the air: η = ( T H − T C ) / ( T H − T W ) {\displaystyle \eta =(T_{H}-T_{C})/(T_{H}-T_{W})} , where the subscripts H and C refer to the temperatures of the hot and cold water streams, while the subscript W refers to the wet bulb temperature of the air. Typically, spray ponds achieve thermal efficiencies of between 50% and 70%. Further details of performance estimation may be found in the engineering literature. [ 1 ]
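A minimal numerical sketch of this efficiency formula is shown below; the inlet, outlet, and wet-bulb temperatures are invented figures chosen only to illustrate the calculation, not measurements from any real pond.

```python
def spray_pond_efficiency(t_hot, t_cold, t_wet_bulb):
    """Approach-to-wet-bulb thermal efficiency: (T_H - T_C) / (T_H - T_W)."""
    return (t_hot - t_cold) / (t_hot - t_wet_bulb)

# Hot water enters at 38 C, leaves at 29 C, and the ambient wet-bulb temperature is 24 C.
eff = spray_pond_efficiency(38.0, 29.0, 24.0)
print(f"{eff:.0%}")   # about 64%, inside the typical 50-70% range quoted above
```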
https://en.wikipedia.org/wiki/Spray_pond
A spray tower (or spray column or spray chamber ) is a gas-liquid contactor used to achieve mass and heat transfer between a continuous gas phase (that can contain dispersed solid particles) and a dispersed liquid phase. It consists of an empty cylindrical vessel made of steel or plastic, and nozzles that spray liquid into the vessel. The inlet gas stream usually enters at the bottom of the tower and moves upward, while the liquid is sprayed downward from one or more levels. This flow of inlet gas and liquid in opposite directions is called countercurrent flow . This type of technology can be used for example as a wet scrubber for air pollution control. Countercurrent flow exposes the outlet gas with the lowest pollutant concentration to the freshest scrubbing liquid. Many nozzles are placed across the tower at different heights to spray all of the gas as it moves up through the tower. The reason for using many nozzles is to maximize the number of fine droplets impacting the pollutant particles and to provide a large surface area for absorbing gas. Theoretically, the smaller the droplets formed, the higher the collection efficiency achieved for both gaseous and particulate pollutants . However, the liquid droplets must be large enough to not be carried out of the scrubber by the scrubbed outlet gas stream. Therefore, spray towers use nozzles that produce droplets that are usually 500–1000 μm in diameter. Although small in size, these droplets are large compared to those created in venturi scrubbers that are 10–50 μm in size. The gas velocity is kept low, from 0.3 to 1.2 m/s (1–4 ft/s), to prevent excess droplets from being carried out of the tower. In order to maintain low gas velocities, spray towers must be larger than other scrubbers that handle similar gas stream flow rates. Another problem occurring in spray towers is that after the droplets have fallen a short distance, they tend to agglomerate or hit the walls of the tower. Consequently, the total liquid surface area for contact is reduced, reducing the collection efficiency of the scrubber. In addition to a countercurrent-flow configuration, the flow in spray towers can be either a cocurrent or crosscurrent in configuration. In cocurrent -flow spray towers, the inlet gas and liquid flow in the same direction. Because the gas stream does not "push" against the liquid sprays, the gas velocities through the vessels are higher than in countercurrent-flow spray towers. Consequently, cocurrent -flow spray towers are smaller than countercurrent-flow spray towers treating the same amount of exhaust flow. In crosscurrent-flow spray towers, also called horizontal-spray scrubbers, the gas and liquid flow in directions perpendicular to each other. In this vessel, the gas flows horizontally through a number of spray sections. The amount and quality of liquid sprayed in each section can be varied, usually with the cleanest liquid (if recycled liquid is used) sprayed in the last set of sprays. Spray towers are low energy scrubbers . Contacting power is much lower than in venturi scrubbers , and the pressure drops across such systems are generally less than 2.5 cm (1 in) of water. The collection efficiency for small particles is correspondingly lower than in more energy-intensive devices. They are adequate for the collection of coarse particles larger than 10–25 μm in diameter, although with increased liquid inlet nozzle pressures, particles with diameters of 2.0 μm can be collected. 
Smaller droplets can be formed by higher liquid pressures at the nozzle. The highest collection efficiencies are achieved when small droplets are produced and the difference between the velocity of the droplet and the velocity of the upward-moving particles is high. Small droplets, however, have small settling velocities , so there is an optimum range of droplet sizes for scrubbers that work by this mechanism. This range of droplet sizes is between 500 and 1,000 μm for gravity-spray (counter current) towers. [ 1 ] The injection of water at very high pressures – 2070–3100 kPa (300–450 psi) – creates a fog of very fine droplets. Higher particle-collection efficiencies can be achieved in such cases since collection mechanisms other than inertial impaction occur. [ 2 ] However, these spray nozzles may use more power to form droplets than would a venturi operating at the same collection efficiency. Spray towers can be used for gas absorption , but they are not as effective as packed or plate towers. Spray towers can be very effective in removing pollutants if the pollutants are highly soluble or if a chemical reagent is added to the liquid. For example, spray towers are used to remove HCl gas from the tail-gas exhaust in manufacturing hydrochloric acid . In the production of superphosphate used in manufacturing fertilizer , SiF 4 and HF gases are vented from various points in the processes. Spray towers have been used to remove these highly soluble compounds. Spray towers are also used for odor removal in bone meal and tallow manufacturing industries by scrubbing the exhaust gases with a solution of KMnO 4 . Because of their ability to handle large gas volumes in corrosive atmospheres, spray towers are also used in a number of flue-gas desulfurization systems as the first or second stage in the pollutant removal process. In a spray tower, absorption can be increased by decreasing the size of the liquid droplets and/or increasing the liquid-to-gas ratio (L/G). However, to accomplish either of these, an increase in both power consumed and operating cost is required. In addition, the physical size of the spray tower will limit the amount of liquid and the size of droplets that can be used. The main advantage of spray towers over other scrubbers is their completely open design; they have no internal parts except for the spray nozzles . This feature eliminates many of the scale buildup and plugging problems associated with other scrubbers. The primary maintenance problems are spray-nozzle plugging or eroding, especially when using recycled scrubber liquid. To reduce these problems, a settling or filtration system is used to remove abrasive particles from the recycled scrubbing liquid before pumping it back into the nozzles. Spray towers are inexpensive control devices primarily used for gas conditioning (cooling or humidifying) or for first-stage particle or gas removal. They are also used in many flue-gas desulfurization systems to reduce plugging and scale buildup by pollutants. Many scrubbing systems use sprays either prior to or in the bottom of the primary scrubber to remove large particles that could plug it. Spray towers have been used effectively to remove large particles and highly soluble gases. The pressure drop across the towers is very low – usually less than 2.5 cm (1.0 in) of water; thus, scrubber operating costs are relatively low. However, the liquid pumping costs can be very high. 
Spray towers are constructed in various sizes – small ones to handle small gas flows of 0.05 m 3 /s (106 ft 3 /min) or less, and large ones to handle large exhaust flows of 50 m 3 /s (106,000 ft 3 /min) or greater. Because of the low gas velocity required, units handling large gas flow rates tend to be large in size. Operating characteristics of spray towers are tabulated in the engineering literature. [ 3 ]
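To see why large gas flows force large vessels, the sketch below sizes a circular tower from the velocity limits quoted earlier. It is a back-of-the-envelope illustration under the stated flow and velocity assumptions, not a design procedure from the article.

```python
import math

def tower_diameter(gas_flow_m3_s, gas_velocity_m_s):
    """Diameter of a circular tower that gives the chosen superficial gas velocity."""
    area = gas_flow_m3_s / gas_velocity_m_s        # required cross-section, m^2
    return math.sqrt(4.0 * area / math.pi)         # diameter of that circle, m

# 50 m^3/s of gas at the upper (1.2 m/s) and lower (0.3 m/s) velocity limits quoted above.
print(f"{tower_diameter(50.0, 1.2):.1f} m")   # roughly 7.3 m across
print(f"{tower_diameter(50.0, 0.3):.1f} m")   # roughly 14.6 m across
```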
https://en.wikipedia.org/wiki/Spray_tower
Divine Proportions: Rational Trigonometry to Universal Geometry is a 2005 book by the mathematician Norman J. Wildberger on a proposed alternative approach to Euclidean geometry and trigonometry , called rational trigonometry . The book advocates replacing the usual basic quantities of trigonometry, Euclidean distance and angle measure, by squared distance and the square of the sine of the angle, respectively. This is logically equivalent to the standard development (as the replacement quantities can be expressed in terms of the standard ones and vice versa). The author claims his approach holds some advantages, such as avoiding the need for irrational numbers . The book was "essentially self-published" [ 1 ] by Wildberger through his publishing company Wild Egg. The formulas and theorems in the book are regarded as correct mathematics but the claims about practical or pedagogical superiority are primarily promoted by Wildberger himself and have received mixed reviews. The main idea of Divine Proportions is to replace distances by the squared Euclidean distance , which Wildberger calls the quadrance , and to replace angle measures by the squares of their sines, which Wildberger calls the spread between two lines. Divine Proportions defines both of these concepts directly from the Cartesian coordinates of points that determine a line segment or a pair of crossing lines. Defined in this way, they are rational functions of those coordinates, and can be calculated directly without the need to take the square roots or inverse trigonometric functions required when computing distances or angle measures. [ 1 ] For Wildberger, a finitist , this replacement has the purported advantage of avoiding the concepts of limits and actual infinity used in defining the real numbers , which Wildberger claims to be unfounded. [ 2 ] [ 1 ] It also allows analogous concepts to be extended directly from the rational numbers to other number systems such as finite fields using the same formulas for quadrance and spread. [ 1 ] Additionally, this method avoids the ambiguity of the two supplementary angles formed by a pair of lines, as both angles have the same spread. This system is claimed to be more intuitive, and to extend more easily from two to three dimensions. [ 3 ] However, in exchange for these benefits, one loses the additivity of distances and angles: for instance, if a line segment is divided in two, its length is the sum of the lengths of the two pieces, but combining the quadrances of the pieces is more complicated and requires square roots. [ 1 ] Divine Proportions is divided into four parts. Part I presents an overview of the use of quadrance and spread to replace distance and angle, and makes the argument for their advantages. Part II formalizes the claims made in part I, and proves them rigorously. [ 1 ] Rather than defining lines as infinite sets of points, they are defined by their homogeneous coordinates , which may be used in formulas for testing the incidence of points and lines. Like the sine, the cosine and tangent are replaced with rational equivalents, called the "cross" and "twist", and Divine Proportions develops various analogues of trigonometric identities involving these quantities, [ 3 ] including versions of the Pythagorean theorem , law of sines and law of cosines . [ 4 ] Part III develops the geometry of triangles and conic sections using the tools developed in the two previous parts. 
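As a concrete illustration of the two quantities at the heart of the book, the sketch below computes quadrance and spread directly from Cartesian coordinates. The helper names and the example triangle are choices made here for illustration, but the formulas follow the standard rational-trigonometry definitions: quadrance is the squared distance, and spread is the squared sine of the angle between two lines, computed as a rational function of their direction vectors.

```python
from fractions import Fraction

def quadrance(p1, p2):
    """Quadrance: the squared Euclidean distance between two points."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return dx * dx + dy * dy

def spread(d1, d2):
    """Spread between two lines with direction vectors d1 and d2.

    Equal to sin^2 of the angle between them, but computed as a rational
    function of the coordinates, so no square roots or arcsines are needed."""
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    return Fraction(cross * cross,
                    (d1[0] ** 2 + d1[1] ** 2) * (d2[0] ** 2 + d2[1] ** 2))

# A 3-4-5 right triangle: the quadrances are 9, 16 and 25, and the spread
# between the two legs is 1 (a right angle).
print(quadrance((0, 0), (3, 0)), quadrance((0, 0), (0, 4)), quadrance((3, 0), (0, 4)))
print(spread((1, 0), (0, 1)))   # 1  : perpendicular lines
print(spread((1, 0), (1, 1)))   # 1/2: a 45-degree angle, sin^2(45) = 1/2
```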
[ 1 ] Well known results such as Heron's formula for calculating the area of a triangle from its side lengths, or the inscribed angle theorem in the form that the angles subtended by a chord of a circle from other points on the circle are equal, are reformulated in terms of quadrance and spread, and thereby generalized to arbitrary fields of numbers. [ 3 ] [ 5 ] Finally, Part IV considers practical applications in physics and surveying, and develops extensions to higher-dimensional Euclidean space and to polar coordinates . [ 1 ] Divine Proportions does not assume much in the way of mathematical background in its readers, but its many long formulas, frequent consideration of finite fields, and (after part I) emphasis on mathematical rigour are likely to be obstacles to a popular mathematics audience. Instead, it is mainly written for mathematics teachers and researchers. However, it may also be readable by mathematics students, and contains exercises making it possible to use as the basis for a mathematics course. [ 1 ] [ 6 ] The feature of the book that was most positively received by reviewers was its work extending results in distance and angle geometry to finite fields. Reviewer Laura Wiswell found this work impressive, and was charmed by the result that the smallest finite field containing a regular pentagon is F 19 {\displaystyle \mathbb {F} _{19}} . [ 1 ] Michael Henle calls the extension of triangle and conic section geometry to finite fields, in part III of the book, "an elegant theory of great generality", [ 4 ] and William Barker also writes approvingly of this aspect of the book, calling it "particularly novel" and possibly opening up new research directions. [ 6 ] Wiswell raises the question of how many of the detailed results presented without attribution in this work are actually novel. [ 1 ] In this light, Michael Henle notes that the use of squared Euclidean distance "has often been found convenient elsewhere"; [ 4 ] for instance it is used in distance geometry , least squares statistics, and convex optimization . James Franklin points out that for spaces of three or more dimensions, modelled conventionally using linear algebra , the use of spread by Divine Proportions is not very different from standard methods involving dot products in place of trigonometric functions. [ 5 ] An advantage of Wildberger's methods noted by Henle is that, because they involve only simple algebra, the proofs are both easy to follow and easy for a computer to verify. However, he suggests that the book's claims of greater simplicity in its overall theory rest on a false comparison in which quadrance and spread are weighed not against the corresponding classical concepts of distances, angles, and sines, but the much wider set of tools from classical trigonometry. He also points out that, to a student with a scientific calculator, formulas that avoid square roots and trigonometric functions are a non-issue, [ 4 ] and Barker adds that the new formulas often involve a greater number of individual calculation steps. [ 6 ] Although multiple reviewers felt that a reduction in the amount of time needed to teach students trigonometry would be very welcome, [ 3 ] [ 5 ] [ 7 ] Paul Campbell is skeptical that these methods would actually speed learning. [ 7 ] Gerry Leversha keeps an open mind, writing that "It will be interesting to see some of the textbooks aimed at school pupils [that Wildberger] has promised to produce, and ... controlled experiments involving student guinea pigs." 
[ 3 ] However, these textbooks and experiments have not been published. Wiswell is unconvinced by the claim that conventional geometry has foundational flaws that these methods avoid. [ 1 ] While agreeing with Wiswell, Barker points out that there may be other mathematicians who share Wildberger's philosophical suspicions of the infinite, and that this work should be of great interest to them. [ 6 ] A final issue raised by multiple reviewers is inertia: supposing for the sake of argument that these methods are better, are they sufficiently better to make worthwhile the large individual effort of re-learning geometry and trigonometry in these terms, and the institutional effort of re-working the school curriculum to use them in place of classical geometry and trigonometry? Henle, Barker, and Leversha conclude that the book has not made its case for this, [ 3 ] [ 4 ] [ 6 ] but Sandra Arlinghaus sees this work as an opportunity for fields such as her mathematical geography "that have relatively little invested in traditional institutional rigidity" to demonstrate the promise of such a replacement. [ 8 ]
https://en.wikipedia.org/wiki/Spread_(rational_trigonometry)
In mathematics , and more specifically matrix theory , the spread of a matrix is the largest distance in the complex plane between any two eigenvalues of the matrix. Let A {\displaystyle A} be a square matrix with eigenvalues λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} . That is, these values λ i {\displaystyle \lambda _{i}} are the complex numbers such that there exists a nonzero vector v i {\displaystyle v_{i}} on which A {\displaystyle A} acts by scalar multiplication: A v i = λ i v i . {\displaystyle Av_{i}=\lambda _{i}v_{i}.} Then the spread of A {\displaystyle A} is the non-negative number s ( A ) = max i , j | λ i − λ j | . {\displaystyle s(A)=\max _{i,j}|\lambda _{i}-\lambda _{j}|.}
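A minimal numerical sketch of this definition, assuming NumPy is available, is shown below; the example matrix is an arbitrary illustrative choice.

```python
import numpy as np
from itertools import combinations

def spread(matrix):
    """Largest pairwise distance between eigenvalues in the complex plane."""
    eigenvalues = np.linalg.eigvals(matrix)
    return max(abs(a - b) for a, b in combinations(eigenvalues, 2))

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # eigenvalues are i and -i
print(spread(A))               # approximately 2.0, the distance between i and -i
```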
https://en.wikipedia.org/wiki/Spread_of_a_matrix
Spreaders in mining are heavy equipment used in surface mining and mechanical engineering/civil engineering. The primary function of a spreader is to act as a continuous spreading machine in large-scale open pit mining operations. A spreader's superstructure may appear superficially similar to that of a bucket-wheel excavator ; its most striking difference is that, instead of a bucket-wheel at the end of the boom, it carries a discharge boom. The spreader's design can vary, ranging from conventional single-boom spreaders to more modified two-conveyor compact spreaders. A spreader usually consists of four main parts. The first is the receiving boom, with or without a support crawler track. The second is the main body superstructure itself. The third is the sub-structure with crawler tracks. The fourth and final part is the discharge boom itself. The discharge boom can be fixed, liftable or slewable, as determined by specific operational requirements. [ 1 ] Spreaders are therefore extremely large ground vehicles, often approaching the size of large bucket-wheel excavators; some spreaders have capacities ranging up to 20,000 m³/h, with discharge boom lengths reaching 195 m. [ 1 ] Bucket-wheel excavators (BWEs) are used for continuous overburden removal in surface mining applications. They use their cutting wheels to strip away a section of earth (the working block) dictated by the size of the excavator. The overburden is then delivered to the discharge boom, which transfers the cut earth to another machine that carries it to the central collection area, where the material is sorted. The remains of the overburden are then transported to the spreader, which scatters the overburden at the dumping ground. Although it may appear similar in function and appearance to stackers , the purpose of the spreader is to receive overburden from the haulage conveyor coming from the sorting area and dump it in an orderly and efficient manner, whereas a stacker simply piles bulk material onto a stockpile so that a reclaimer can later recover it. Moreover, spreaders usually run on tank tracks whereas stackers exclusively run on rails.
https://en.wikipedia.org/wiki/Spreader_(mining)
A spring is a natural exit point at which groundwater emerges from an aquifer and flows across the ground surface as surface water . It is a component of the hydrosphere , as well as a part of the water cycle . Springs have long been important for humans as a source of fresh water , especially in arid regions which have relatively little annual rainfall . Springs are driven out onto the surface by various natural forces, such as gravity and hydrostatic pressure . A spring produced by the emergence of geothermally heated groundwater is known as a hot spring . The yield of spring water varies widely from a volumetric flow rate of nearly zero to more than 14,000 litres per second (490 cu ft/s) for the biggest springs. [ 1 ] Springs are formed when groundwater flows onto the surface. This typically happens when the water table reaches above the surface level, or if the terrain depresses sharply. Springs may also be formed as a result of karst topography , aquifers or volcanic activity . Springs have also been observed on the ocean floor , spewing warmer, low- salinity water directly into the ocean. [ 2 ] Springs formed as a result of karst topography create karst springs , in which ground water travels through a network of cracks and fissures—openings ranging from intergranular spaces to large caves , later emerging in a spring. The forcing of the spring to the surface can be the result of a confined aquifer in which the recharge area of the spring water table rests at a higher elevation than that of the outlet. Spring water forced to the surface by elevated sources are artesian wells . This is possible even if the outlet is in the form of a 300-foot-deep (91 m) cave. In this case the cave is used like a hose by the higher elevated recharge area of groundwater to exit through the lower elevation opening. Non-artesian springs may simply flow from a higher elevation through the earth to a lower elevation and exit in the form of a spring, using the ground like a drainage pipe. Still other springs are the result of pressure from an underground source in the earth, in the form of volcanic or magma activity. The result can be water at elevated temperature and pressure, i.e. hot springs and geysers . The action of the groundwater continually dissolves permeable bedrock such as limestone and dolomite , creating vast cave systems. [ 3 ] Spring discharge, or resurgence , is determined by the spring's recharge basin. Factors that affect the recharge include the size of the area in which groundwater is captured, the amount of precipitation, the size of capture points, and the size of the spring outlet. Water may leak into the underground system from many sources including permeable earth, sinkholes, and losing streams . In some cases entire creeks seemingly disappear as the water sinks into the ground via the stream bed. Grand Gulf State Park in Missouri is an example of an entire creek vanishing into the groundwater system. The water emerges 9 miles (14 km) away, forming some of the discharge of Mammoth Spring in Arkansas . Human activity may also affect a spring's discharge—withdrawal of groundwater reduces the water pressure in an aquifer, decreasing the volume of flow. [ 13 ] Springs fall into three general classifications: perennial (springs that flow constantly during the year); intermittent (temporary springs that are active after rainfall, or during certain seasonal changes); and periodic (as in geysers that vent and erupt at regular or irregular intervals). 
[ 5 ] Springs are often classified by the volume of the water they discharge. The largest springs are called "first-magnitude", defined as springs that discharge water at a rate of at least 2800 liters or 100 cubic feet (2.8 m 3 ) of water per second. Some locations contain many first-magnitude springs, such as Florida where there are at least 27 known to be that size; the Missouri and Arkansas Ozarks , which contain 10 [ 14 ] [ 13 ] known of first-magnitude; and 11 [ 15 ] more in the Thousand Springs area along the Snake River in Idaho . The scale for spring flow is as follows: Minerals become dissolved in the water as it moves through the underground rocks . This mineral content is measured as total dissolved solids (TDS). This may give the water flavor and even carbon dioxide bubbles, depending on the nature of the geology through which it passes. This is why spring water is often bottled and sold as mineral water , although the term is often the subject of deceptive advertising . Mineral water contains no less than 250 parts per million (ppm) of tds. Springs that contain significant amounts of minerals are sometimes called ' mineral springs '. (Springs without such mineral content, meanwhile, are sometimes distinguished as 'sweet springs'.) Springs that contain large amounts of dissolved sodium salts , mostly sodium carbonate , are called 'soda springs'. Many resorts have developed around mineral springs and are known as spa towns . Mineral springs are alleged to have healing properties. Soaking in them is said to result in the absorption of the minerals from the water. Some springs contain arsenic levels that exceed the 10 ppb World Health Organization (WHO) standard for drinking water . [ 16 ] Where such springs feed rivers they can also raise the arsenic levels in the rivers above WHO limits. [ 16 ] Water from springs is usually clear. However, some springs may be colored by the minerals that are dissolved in the water. For instance, water heavy with iron or tannins will have an orange color. [ 3 ] In parts of the United States a stream carrying the outflow of a spring to a nearby primary stream may be called a spring branch , spring creek , or run. Groundwater tends to maintain a relatively long-term average temperature of its aquifer; so flow from a spring may be cooler than other sources on a summer day, but remain unfrozen in the winter. The cool water of a spring and its branch may harbor species such as certain trout that are otherwise ill-suited to a warmer local climate . Springs have been used for a variety of human needs - including drinking water, domestic water supply, irrigation, mills , navigation, and electricity generation . Modern uses include recreational activities such as fishing, swimming, and floating; therapy ; water for livestock; fish hatcheries; and supply for bottled mineral water or bottled spring water. Springs have taken on a kind of mythic quality in that some people falsely believe that springs are always healthy sources of drinking water. They may or may not be. One must take a comprehensive water quality test to know how to use a spring appropriately, whether for a mineral bath or drinking water. Springs that are managed as spas will already have such a test. Springs are often used as sources for bottled water. [ 22 ] When purchasing bottled water labeled as spring water one can often find the water test for that spring on the website of the company selling it. Springs have been used as sources of water for gravity-fed irrigation of crops. 
[ 23 ] Indigenous people of the American Southwest built spring-fed acequias that directed water to fields through canals. The Spanish missionaries later used this method. [ 24 ] [ 25 ] A sacred spring, or holy well, is a small body of water emerging from underground and revered in some religious context: Christian and/or pagan and/or other. [ 26 ] [ 27 ] The lore and mythology of ancient Greece was replete with sacred and storied springs—notably, the Corycian , Pierian and Castalian springs. In medieval Europe, pagan sacred sites frequently became Christianized as holy wells. The term "holy well" is commonly employed to refer to any water source of limited size (i.e., not a lake or river, but including pools and natural springs and seeps), which has some significance in local folklore . This can take the form of a particular name, an associated legend , the attribution of healing qualities to the water through the numinous presence of its guardian spirit or of a Christian saint , or a ceremony or ritual centered on the well site. Christian legends often recount how the action of a saint caused a spring's water to flow - a familiar theme, especially in the hagiography of Celtic saints. [ citation needed ] The geothermally heated groundwater that flows from thermal springs is greater than human body temperature, usually in the range of 45–50 °C (113–122 °F), but they can be hotter. [ 6 ] Those springs with water cooler than body temperature but warmer than air temperature are sometimes referred to as warm springs. [ 28 ] Hot springs or geothermal springs have been used for balneotherapy , bathing, and relaxation for thousands of years. Because of the folklore surrounding hot springs and their claimed medical value, some have become tourist destinations and locations of physical rehabilitation centers. [ 29 ] [ 30 ] Hot springs have been used as a heat source for thousands of years. In the 20th century, they became a renewable resource of geothermal energy for heating homes and buildings. [ 29 ] The city of Beppu, Japan contains 2,217 hot spring well heads that provide the city with hot water. [ 31 ] Hot springs have also been used as a source of sustainable energy for greenhouse cultivation and the growing of crops and flowers. [ 32 ] Springs have been represented in culture through art, mythology, and folklore throughout history. The Fountain of Youth is a mythical spring which was said to restore youth to anyone who drank from it. [ 34 ] It has been claimed that the fountain is located in St. Augustine, Florida , and was discovered by Juan Ponce de León in 1513. However, it has not demonstrated the power to restore youth, and most historians dispute the veracity of Ponce de León's discovery. [ 35 ] [ 36 ] Pythia, also known as the Oracle at Delphi was the high priestess of the Temple of Apollo . She delivered prophesies in a frenzied state of divine possession that were "induced by vapours rising from a chasm in the rock". It is believed that the vapors were emitted from the Kerna spring at Delphi. [ 37 ] [ 38 ] The Greek myth of Narcissus describes a young man who fell in love with his reflection in the still pool of a spring. Narcissus gazed into "an unmuddied spring, silvery from its glittering waters, which neither shepherds nor she-goats grazing on the mountain nor any other cattle had touched, which neither bird nor beast nor branch fallen from a tree had disturbed." 
(Ovid) [ 39 ] The early 20th century American photographer, James Reuel Smith created a comprehensive series of photographs documenting the historical springs of New York City before they were capped by the city after the advent of the municipal water system. [ 40 ] Smith later photographed springs in Europe leading to his book, Springs and Wells in Greek and Roman Literature, Their Legends and Locations (1922). [ 41 ] The 19th century Japanese artists Utagawa Hiroshige and Utagawa Toyokuni III created a series of wood-block prints , Two Artists Tour the Seven Hot Springs (Sōhitsu shichitō meguri) in 1854. [ 42 ] The Chinese city Jinan is known as "a City of Springs" (Chinese: 泉城), because of its 72 spring attractions and numerous micro spring holes spread over the city centre. [ 43 ] [ 44 ]
https://en.wikipedia.org/wiki/Spring_(hydrology)
The spring bloom is a strong increase in phytoplankton abundance (i.e. stock) that typically occurs in the early spring and lasts until late spring or early summer. This seasonal event is characteristic of temperate North Atlantic, sub-polar, and coastal waters. [ 1 ] [ 2 ] Phytoplankton blooms occur when growth exceeds losses, however there is no universally accepted definition of the magnitude of change or the threshold of abundance that constitutes a bloom. The magnitude, spatial extent and duration of a bloom depends on a variety of abiotic and biotic factors. Abiotic factors include light availability, nutrients, temperature, and physical processes that influence light availability, [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] and biotic factors include grazing , viral lysis , and phytoplankton physiology. [ 6 ] The factors that lead to bloom initiation are still actively debated (see Critical depth ). In the spring, more light becomes available and stratification of the water column occurs as increasing temperatures warm the surface waters (referred to as thermal stratification). As a result, vertical mixing is inhibited and phytoplankton and nutrients are entrained in the euphotic zone . [ 1 ] [ 2 ] This creates a comparatively high nutrient and high light environment that allows rapid phytoplankton growth. [ 1 ] [ 2 ] [ 7 ] Along with thermal stratification, spring blooms can be triggered by salinity stratification due to freshwater input, from sources such as high river runoff. This type of stratification is normally limited to coastal areas and estuaries, including Chesapeake Bay. [ 8 ] Freshwater influences primary productivity in two ways. First, because freshwater is less dense, it rests on top of seawater and creates a stratified water column. [ 1 ] Second, freshwater often carries nutrients [ 3 ] that phytoplankton need to carry out processes, including photosynthesis. Rapid increases in phytoplankton growth, that typically occur during the spring bloom, arise because phytoplankton can reproduce rapidly under optimal growth conditions (i.e., high nutrient levels, ideal light and temperature, and minimal losses from grazing and vertical mixing). In terms of reproduction, many species of phytoplankton can double at least once per day, allowing for exponential increases in phytoplankton stock size. For example, the stock size of a population that doubles once per day will increase 1000-fold in just 10 days. [ 2 ] In addition, there is a lag in the grazing response of herbivorous zooplankton at the start of blooms, which minimize phytoplankton losses. This lag occurs because there is low winter zooplankton abundance and many zooplankton, such as copepods , have longer generation times than phytoplankton. [ 2 ] Spring blooms typically last until late spring or early summer, at which time the bloom collapses due to nutrient depletion in the stratified water column and increased grazing pressure by zooplankton. [ 1 ] [ 2 ] [ 3 ] [ 5 ] The most limiting nutrient in the marine environment is typically nitrogen (N). This is because most organisms are unable to fix atmospheric nitrogen into usable forms (i.e. ammonium , nitrite , or nitrate ). However, with the exception of coastal waters, it can be argued, that iron (Fe) is the most limiting nutrient because it is required to fix nitrogen, but is only available in small quantities in the marine environment, coming from dust storms and leaching from rocks. 
[ 2 ] Phosphorus can also be limiting, particularly in freshwater environments and tropical coastal regions. [ 2 ] During winter, wind-driven turbulence and cooling water temperatures break down the stratified water column formed during the summer. This breakdown allows vertical mixing of the water column and replenishes nutrients from deep water to the surface waters and the rest of the euphotic zone . However, vertical mixing also causes high losses, as phytoplankton are carried below the euphotic zone (so their respiration exceeds primary production). In addition, reduced illumination (intensity and daily duration) during winter limits growth rates. [ citation needed ] Historically, blooms have been explained by Sverdrup's critical depth hypothesis, which says blooms are caused by shoaling of the mixed layer. Similarly, Winder and Cloern (2010) described spring blooms as a response to increasing temperature and light availability. [ 3 ] However, new explanations have been offered recently, including that blooms occur due to: A 2012 study showed that the onset of the North Atlantic bloom is due to eddies. Eddies, or circular currents of water, are ubiquitous throughout the world’s ocean and play an important role in ocean mixing. [ 14 ] In the North Atlantic, surface water is colder and denser farther north and warmer and lighter in the south. This sets up a horizontal density gradient. Earth’s rotation maintains this gradient by preventing the dense water from slipping underneath the light water. Eddies, however, can mix dense water underneath the lighter water, setting up a vertical stratification that limits the depth of vertical mixing (leading to a shallower mixed layer). [ 15 ] Mechanisms that limit the depth of vertical mixing can be referred to as ‘restratifying mechanisms’ (e.g. eddies, solar heating), which compete against mechanisms that increase vertical mixing (and deepen the mixed layer). This includes convection and down-front winds. Convection is strongest in the winter when surface cooling is strongest. Convection increases the depth of vertical mixing, which can move phytoplankton away from the light they need to grow. [ 16 ] When convection weakens and wind switches direction in the spring, the re-stratifying effect of eddies becomes dominant. Phytoplankton are trapped closer to the surface, increasing their exposure to light. This spurs phytoplankton growth, leading to the onset of the North Atlantic spring bloom 20-30 days earlier than would occur with thermal stratification alone. [ 13 ] At greater latitudes , spring blooms take place later in the year. This northward progression is because spring occurs later, delaying thermal stratification and increases in illumination that promote blooms. A study by Wolf and Woods (1988) showed evidence that spring blooms follow the northward migration of the 12 °C isotherm, suggesting that blooms may be controlled by temperature limitations, in addition to stratification. [ 1 ] At high latitudes, the shorter warm season commonly results in one mid-summer bloom. These blooms tend to be more intense than spring blooms of temperate areas because there is a longer duration of daylight for photosynthesis to take place. Also, grazing pressure tends to be lower because the generally cooler temperatures at higher latitudes slow zooplankton metabolism. [ 1 ] The spring bloom often consists of a series of sequential blooms of different phytoplankton species. 
Succession occurs because different species have optimal nutrient uptake at different ambient concentrations and reach their growth peaks at different times. Shifts in the dominant phytoplankton species are likely caused by biological and physical (i.e. environmental) factors. [ 2 ] For instance, diatom growth rate becomes limited when the supply of silicate is depleted. [ 1 ] [ 2 ] [ 17 ] Since silicate is not required by other phytoplankton, such as dinoflagellates , their growth rates continue to increase. [ citation needed ] For example, in oceanic environments, diatoms (cell diameters greater than 10 μm, up to 70 μm or larger) typically dominate first because they are capable of growing faster. Once silicate is depleted in the environment, diatoms are succeeded by smaller dinoflagellates. [ 1 ] [ 2 ] [ 17 ] This scenario has been observed in Rhode Island, [ 18 ] [ 19 ] [ 20 ] as well as Massachusetts and Cape Cod Bay. [ 7 ] By the end of a spring bloom, when most nutrients have been depleted, the majority of the total phytoplankton biomass consists of very small phytoplankton, known as ultraphytoplankton (cell diameter less than about 5–10 μm). [ 2 ] Ultraphytoplankton can sustain low but constant stocks in nutrient-depleted environments because they have a larger surface area to volume ratio , which offers a much more effective rate of diffusion . [ 1 ] [ 2 ] The types of phytoplankton comprising a bloom can be determined by examination of the varying photosynthetic pigments found in the chloroplasts of each species. [ 2 ] Variability in the patterns (e.g., timing of onset, duration, magnitude, position, and spatial extent) of annual spring bloom events has been well documented. [ 3 ] [ 5 ] These variations occur due to fluctuations in environmental conditions, such as wind intensity, temperature, freshwater input, and light. Consequently, spring bloom patterns are likely sensitive to global climate change . [ 21 ] Links have been found between temperature and spring bloom patterns. For example, several studies have reported a correlation between earlier spring bloom onset and temperature increases over time. [ 3 ] Furthermore, in Long Island Sound and the Gulf of Maine, blooms begin later in the year, are more productive, and last longer during colder years, while warmer years exhibit earlier, shorter blooms of greater magnitude. [ 5 ] Temperature may also regulate bloom sizes. In Narragansett Bay, Rhode Island, a study by Durbin et al. (1992) [ 22 ] indicated that a 2 °C increase in water temperature resulted in a three-week shift in the maturation of the copepod Acartia hudsonica , which could significantly increase zooplankton grazing intensity. Oviatt et al. (2002) [ 4 ] noted a reduction in spring bloom intensity and duration in years when winter water temperatures were warmer. Oviatt et al. suggested that the reduction was due to increased grazing pressure, which could potentially become intense enough to prevent spring blooms from occurring altogether. [ citation needed ] Miller and Harding (2007) [ 23 ] suggested that climate change (influencing winter weather patterns and freshwater influxes) was responsible for shifts in spring bloom patterns in the Chesapeake Bay. They found that during warm, wet years (as opposed to cool, dry years), the spatial extent of blooms was larger and was positioned more seaward. Also, during these same years, biomass was higher and peak biomass occurred later in the spring. [ citation needed ]
https://en.wikipedia.org/wiki/Spring_bloom
Sprinkler fitting is a skilled trade that consists of assembling, installing, testing, repairing, inspecting, and certifying automatic fire suppression systems and their associated piping in commercial, industrial and residential buildings. [ 1 ] [ 2 ] Sprinkler systems installed by sprinkler fitters can include the underground supply as well as integrated overhead piping [ ru ] systems and standpipes . The fire suppression piping may contain water, air (in a dry system), antifreeze, gas or chemicals as in a hood system, or a mixture producing fire retardant foam . Sprinkler fitters work with a variety of pipe and tubing materials including several types of plastic , copper, steel, cast iron , and ductile iron . [ 3 ] Many countries have standards or strict guidelines pertaining to the installation and maintenance of fire sprinkler systems . In the US , fire protection systems must adhere to the standards set forth in the installation standards of NFPA 13, (NFPA) 13D,(NFPA) 13R, (NFPA 14) and (NFPA) 25 which are administered, copyrighted, and published by the National Fire Protection Association . [ citation needed ] In the United Kingdom , insurers and building control authorities require that fire sprinkler systems are installed according to BS EN 12845 [ 4 ] for commercial and industrial buildings, and BS 9251 [ 5 ] for domestic and residential buildings. [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Sprinkler_fitting
A spur or track in radiation chemistry is a region of high concentration of chemical products after ionizing radiation passes through. The spur model , proposed by Samuel and Magee in 1953, describes the kinetic behavior of reaction spurs involving one type of radicals in a diffusion -driven environment. [ 1 ] The spurs from gamma rays or X-rays are considered to be spherical, while those from alpha particles are cylindrical, also called tracks . [ 2 ] This nuclear chemistry –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Spur_(chemistry)
In statistics , a spurious relationship or spurious correlation [ 1 ] [ 2 ] is a mathematical relationship in which two or more events or variables are associated but not causally related , due to either coincidence or the presence of a certain third, unseen factor (referred to as a "common response variable", "confounding factor", or " lurking variable "). An example of a spurious relationship can be found in the time-series literature, where a spurious regression is one that provides misleading statistical evidence of a linear relationship between independent non-stationary variables. In fact, the non-stationarity may be due to the presence of a unit root in both variables. [ 3 ] [ 4 ] In particular, any two nominal economic variables are likely to be correlated with each other, even when neither has a causal effect on the other, because each equals a real variable times the price level , and the common presence of the price level in the two data series imparts correlation to them. (See also spurious correlation of ratios .) Another example of a spurious relationship can be seen by examining a city's ice cream sales. The sales might be highest when the rate of drownings in city swimming pools is highest. To allege that ice cream sales cause drowning, or vice versa, would be to imply a spurious relationship between the two. In reality, a heat wave may have caused both. The heat wave is an example of a hidden or unseen variable, also known as a confounding variable . Another commonly noted example is a series of Dutch statistics showing a positive correlation between the number of storks nesting in a series of springs and the number of human babies born at that time. Of course there was no causal connection; they were correlated with each other only because of two independent coincidences. During the Pagan era, which can be traced back at least to medieval times more than 600 years ago, it was common for couples to wed during the annual summer solstice, because summer was associated with fertility. At the same time, storks would commence their annual migration, flying all the way from Europe to Africa. The birds would then return the following spring — exactly nine months later. [ 5 ] In rare cases, a spurious relationship can occur between two completely unrelated variables without any confounding variable, as was the case between the success of the Washington Commanders professional football team in a specific game before each presidential election and the success of the incumbent President's political party in said election. For 16 consecutive elections between 1940 and 2000, the Redskins Rule correctly matched whether the incumbent President's political party would retain or lose the Presidency. The rule eventually failed shortly after Elias Sports Bureau discovered the correlation in 2000; in 2004, 2012 and 2016, the results of the Commanders' game and the election did not match. [ 6 ] [ 7 ] [ 8 ] In a similar spurious relationship involving the National Football League , in the 1970s, Leonard Koppett noted a correlation between the direction of the stock market and the winning conference of that year's Super Bowl , the Super Bowl indicator ; the relationship maintained itself for most of the 20th century before reverting to more random behavior in the 21st. 
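The time-series phenomenon described above can be reproduced in a few lines. The following Python sketch (a minimal illustration, not taken from the cited literature) regresses one random walk on another, independently generated random walk; the fit routinely reports a highly "significant" slope even though the two series are causally unrelated, which is the hallmark of spurious regression between non-stationary variables.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 500

# Two independent random walks (each has a unit root, hence is non-stationary).
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))

result = linregress(x, y)
print(f"slope = {result.slope:.3f}, p-value = {result.pvalue:.3g}")
# The p-value is typically tiny even though x and y share no causal link;
# differencing the series (np.diff) removes the unit roots, after which the
# apparent relationship usually disappears.
```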
[ 9 ] Often one tests a null hypothesis of no correlation between two variables, and chooses in advance to reject the hypothesis if the correlation computed from a data sample would have occurred in less than (say) 5% of data samples if the null hypothesis were true. While a true null hypothesis will be accepted 95% of the time, in the remaining 5% of cases with a true null of no correlation, a zero correlation will be wrongly rejected, causing acceptance of a correlation which is spurious (an event known as a Type I error ). Here the spurious correlation in the sample resulted from random selection of a sample that did not reflect the true properties of the underlying population. The term "spurious relationship" is commonly used in statistics and in particular in experimental research techniques, both of which attempt to understand and predict direct causal relationships (X → Y). A non-causal correlation can be spuriously created by an antecedent which causes both (W → X and W → Y). Mediating variables (X → M → Y), if undetected, lead to estimates of a total effect rather than a direct effect, since there is no adjustment for the mediating variable M. Because of this, experimentally identified correlations do not represent causal relationships unless spurious relationships can be ruled out. In experiments, spurious relationships can often be identified by controlling for other factors, including those that have been theoretically identified as possible confounding factors. For example, consider a researcher trying to determine whether a new drug kills bacteria; when the researcher applies the drug to a bacterial culture, the bacteria die. But to help in ruling out the presence of a confounding variable, another culture is subjected to conditions that are as nearly identical as possible to those facing the first-mentioned culture, but the second culture is not subjected to the drug. If there is an unseen confounding factor in those conditions, this control culture will die as well, so that no conclusion of efficacy of the drug can be drawn from the results of the first culture. On the other hand, if the control culture does not die, then the researcher cannot reject the hypothesis that the drug is efficacious. Disciplines whose data are mostly non-experimental, such as economics , usually employ observational data to establish causal relationships. The body of statistical techniques used in economics is called econometrics . The main statistical method in econometrics is multivariable regression analysis . Typically a linear relationship such as y = a 0 + a 1 x 1 + ⋯ + a k x k + e {\displaystyle y=a_{0}+a_{1}x_{1}+\cdots +a_{k}x_{k}+e} is hypothesized, in which y {\displaystyle y} is the dependent variable (hypothesized to be the caused variable), x j {\displaystyle x_{j}} for j = 1, ..., k is the j th independent variable (hypothesized to be a causative variable), and e {\displaystyle e} is the error term (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the x j {\displaystyle x_{j}} s is caused by y , then estimates of the coefficients a j {\displaystyle a_{j}} are obtained. If the null hypothesis that a j = 0 {\displaystyle a_{j}=0} is rejected, then the alternative hypothesis that a j ≠ 0 {\displaystyle a_{j}\neq 0} and equivalently that x j {\displaystyle x_{j}} causes y cannot be rejected.
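The 5% false-rejection rate discussed above is easy to check by simulation. The sketch below (a minimal illustration; the sample size and number of trials are arbitrary choices) repeatedly draws two independent variables, tests the null hypothesis of zero correlation at the 5% level, and counts how often it is wrongly rejected.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
trials, n, alpha = 10_000, 30, 0.05

false_rejections = 0
for _ in range(trials):
    x = rng.normal(size=n)
    y = rng.normal(size=n)          # independent of x, so the null is true
    _, p_value = pearsonr(x, y)
    if p_value < alpha:
        false_rejections += 1       # a spurious correlation (Type I error)

print(false_rejections / trials)    # close to 0.05 by construction
```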
On the other hand, if the null hypothesis that a j = 0 {\displaystyle a_{j}=0} cannot be rejected, then equivalently the hypothesis of no causal effect of x j {\displaystyle x_{j}} on y cannot be rejected. Here the notion of causality is one of contributory causality : If the true value a j ≠ 0 {\displaystyle a_{j}\neq 0} , then a change in x j {\displaystyle x_{j}} will result in a change in y unless some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in x j {\displaystyle x_{j}} is not sufficient to change y . Likewise, a change in x j {\displaystyle x_{j}} is not necessary to change y , because a change in y could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model). Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid mistaken inference of causality due to the presence of a third, underlying, variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as a spurious effect of the potentially causative variable of interest. In addition, the use of multivariate regression helps to avoid wrongly inferring that an indirect effect of, say x 1 (e.g., x 1 → x 2 → y ) is a direct effect ( x 1 → y ). Just as an experimenter must be careful to employ an experimental design that controls for every confounding factor, so also must the user of multiple regression be careful to control for all confounding factors by including them among the regressors. If a confounding factor is omitted from the regression, its effect is captured in the error term by default, and if the resulting error term is correlated with one (or more) of the included regressors, then the estimated regression may be biased or inconsistent (see omitted variable bias ). In addition to regression analysis, the data can be examined to determine if Granger causality exists. The presence of Granger causality indicates both that x precedes y , and that x contains unique information about y . There are several other relationships defined in statistical analysis as follows.
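The effect of including a confounding variable as a regressor, described above, can be illustrated with simulated data. In the sketch below (a hypothetical example with made-up coefficients, not data from the article), w causes both x and y; regressing y on x alone yields a misleading nonzero slope, while adding w as a control recovers the true direct effect of (approximately) zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

w = rng.normal(size=n)                  # lurking / confounding variable
x = 2.0 * w + rng.normal(size=n)        # x is caused by w
y = 3.0 * w + rng.normal(size=n)        # y is caused by w, not by x

# Regression of y on x alone (plus an intercept): biased, "spurious" slope.
X1 = np.column_stack([np.ones(n), x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Regression of y on x and the confounder w: the slope on x is near zero.
X2 = np.column_stack([np.ones(n), x, w])
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

print("slope on x without control:", round(b1[1], 2))   # roughly 1.2
print("slope on x with w included:", round(b2[1], 2))   # roughly 0.0
```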
https://en.wikipedia.org/wiki/Spurious_relationship
In electronics ( radio in particular), a spurious tone (also known as an interfering tone , a continuous tone or a spur ) denotes a tone in an electronic circuit which interferes with a signal and is often masked underneath that signal. Spurious tones are any tones other than a fundamental tone or its harmonics . [ 1 ] They also include tones generated within the back-to-back connected transmit and receive terminal or channel units , when the fundamental is applied to the transmit terminal or channel-unit input. This article related to radio communications is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Spurious_tone
Spurious trip level (STL) is defined as a discrete level for specifying the spurious trip requirements of safety functions to be allocated to safety systems. An STL of 1 means that this safety function has the highest level of spurious trips. The higher the STL level, the lower the number of spurious trips caused by the safety system. There is no limit to the number of spurious trip levels. Safety functions and systems are installed to protect people, the environment and for asset protection. A safety function should only activate when a dangerous situation occurs. A safety function that activates without the presence of a dangerous situation (e.g., due to an internal failure) causes economic loss. The spurious trip level concept represents the probability that a safety function causes a spurious (unscheduled) trip. The STL is a metric that is used to specify the performance level of a safety function in terms of the spurious trips it potentially causes. Typical safety systems that benefit from an STL level are defined in standards like IEC 61508 , [ 1 ] IEC 61511 , [ 2 ] IEC 62061, [ 3 ] ISA S84, [ 4 ] EN 50204 , [ 5 ] and so on. An STL provides end-users of safety functions with a measurable attribute that helps them define the desired availability of their safety functions. An STL can be specified for a complete safety loop or for individual devices. For end-users there is always a potential conflict between the cost of safety solutions and the loss of profitability caused by spurious trips of these safety solutions. The STL concept helps the end-users to resolve this conflict in a way that safety solutions provide both the desired safety and the desired process availability. The spurious trip level represents asset loss due to an internal failure of the safety function. The more financial damage the safety function can cause due to a spurious trip, the higher the STL level of the safety function should be. Each company needs to decide for itself which level of financial loss it can or is willing to take. This actually depends on many different factors, including the financial strength of the company, the insurance policy it has, the cost of process shutdown and startup, and so on. All these factors are unique to each company. The table below shows an example of how a company can calibrate its spurious trip levels. The STL level achieved by a safety function is determined by the probability of fail safe (PFS) of this safety function. The PFS value is determined by internal failures of the safety system that cause the safety function to be executed without a demand from the process. The table below demonstrates the PFS value and spurious trip reduction (TRV) values of each STL level. Today standards only define the safety integrity level (SIL) for safety functions. Standards do not define STL levels because these in the first instance represent economic loss rather than safety. Despite this, the STL is also a safety attribute, especially for safety functions in the process, oil & gas, chemical and nuclear industries. In those industries an undesired shutdown of the process leads to a dangerous situation, as the plant needs to be started up again. Startup and shutdown of a process plant are considered the two most dangerous operational modes of the plant and should be limited to the absolute minimum. In practice the STL and SIL concepts complement each other. Both factors are attributes of the same safety function. The STL level is determined by the average PFS value of the safety function.
The SIL level is determined by the average probability of failure on demand (PFD) value of the safety function. The STL level expresses the probability of spurious trips by the safety function, i.e., the safety function is executed without a demand from the process. The SIL level expresses the probability that the safety function does not work upon demand from the process. Both parameters are important to end-users in order to achieve safety and asset protection. In order to calculate the PFS or PFD value of a safety loop it is necessary to have a reliability model and reliability data for each component in the safety loop. The best reliability model to use is a Markov model (see Andrey Markov ). Typical data required include, for each component, its failure rates (safe and dangerous) and its repair and test data.
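As a rough illustration of the kind of calculation involved (a full analysis would use a Markov model, as noted above), the following sketch estimates the spurious trip behaviour of a simple one-out-of-one safety loop from assumed safe-failure rates. The component names, failure rates and the constant-rate, no-repair model are all illustrative assumptions, not values from any standard.

```python
import math

# Hedged sketch: spurious trips of a simple 1oo1 safety loop, assuming constant
# (exponential) safe-failure rates and ignoring repair and common-cause effects.
safe_failure_rates_per_hour = {   # made-up illustrative numbers
    "sensor": 2.0e-6,
    "logic_solver": 0.5e-6,
    "final_element": 3.0e-6,
}

# In a series (1oo1) loop, any single safe failure trips the plant, so rates add.
lambda_safe = sum(safe_failure_rates_per_hour.values())

hours_per_year = 8760.0
mean_time_to_spurious_trip_years = 1.0 / lambda_safe / hours_per_year
prob_spurious_trip_in_one_year = 1.0 - math.exp(-lambda_safe * hours_per_year)

print(f"mean time to spurious trip: {mean_time_to_spurious_trip_years:.1f} years")
print(f"probability of a spurious trip within one year: {prob_spurious_trip_in_one_year:.3f}")
```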
https://en.wikipedia.org/wiki/Spurious_trip_level
Sputnik 1 EMC/EMI is a class of full-scale laboratory models of the Soviet Sputnik 1 satellite, made to test ground Electromagnetic Compatibility (EMC) and Electromagnetic Interference (EMI). The models, manufactured by OKB-1 and NII-885 (headed by Mikhail Ryazansky), were introduced on February 15, 1957. [ 1 ] The first testing model Sputnik 1 EMC/EMI – Lab model 001, made on February 15, 1957, is located in Deutsches Technikmuseum Berlin , Germany. It comes from the former collections of Russian institute NII-885. The designer of Sputnik Dr. Mikhail Ryazansky was the director back then. In 2007, on the occasion of 50 years since the launch of the first artificial satellite project Sputnik 1 , was this first test model exposed to the public in Deutsches Technikmuseum Berlin, Germany. Apart from functional laboratory EMC/EMI models 001, 002 and 003 there are two view mockups, which do not contain any active radio or electronics. Of four known models, two reside in private hands, one is located at the Energia Corporate Museum outside Moscow, and one, lacking internal components, is displayed at the Museum of Flight in Seattle , Washington, US. [ 5 ]
https://en.wikipedia.org/wiki/Sputnik-1_EMC/EMI_lab_model
In physics, sputtering is a phenomenon in which microscopic particles of a solid material are ejected from its surface, after the material is itself bombarded by energetic particles of a plasma or gas . [ 2 ] It occurs naturally in outer space , and can be an unwelcome source of wear in precision components. However, the fact that it can be made to act on extremely fine layers of material is utilised in science and industry—there, it is used to perform precise etching , carry out analytical techniques, and deposit thin film layers in the manufacture of optical coatings , semiconductor devices and nanotechnology products. It is a physical vapor deposition technique. [ 3 ] When energetic ions collide with atoms of a target material, an exchange of momentum takes place between them. [ 2 ] [ 4 ] [ 5 ] These ions, known as "incident ions", set off collision cascades in the target. Such cascades can take many paths; some recoil back toward the surface of the target. If a collision cascade reaches the surface of the target, and its remaining energy is greater than the target's surface binding energy , an atom will be ejected. This process is known as "sputtering". If the target is thin (on an atomic scale), the collision cascade can reach through to its back side; the atoms ejected in this fashion are said to escape the surface binding energy "in transmission". The average number of atoms ejected from the target per incident ion is called the "sputter yield". The sputter yield depends on several things: the angle at which ions collide with the surface of the material, how much energy they strike it with, their masses, the masses of the target atoms, and the target's surface binding energy. If the target possesses a crystal structure, the orientation of its axes with respect to the surface is an important factor. The ions that cause sputtering come from a variety of sources—they can come from plasma , specially constructed ion sources , particle accelerators , outer space (e.g. solar wind ), or radioactive materials (e.g. alpha radiation ). A model for describing sputtering in the cascade regime for amorphous flat targets is Thompson's analytical model. [ 6 ] An algorithm that simulates sputtering based on a quantum mechanical treatment including electrons stripping at high energy is implemented in the program TRIM . [ 7 ] Another mechanism of physical sputtering is called "heat spike sputtering". This can occur when the solid is dense enough, and the incoming ion heavy enough, that collisions occur very close to each other. In this case, the binary collision approximation is no longer valid, and the collisional process should be understood as a many-body process. The dense collisions induce a heat spike (also called thermal spike), which essentially melts a small portion of the crystal. If that portion is close enough to its surface, large numbers of atoms may be ejected, due to liquid flowing to the surface and/or microexplosions. [ 8 ] Heat spike sputtering is most important for heavy ions (e.g. Xe or Au or cluster ions) with energies in the keV–MeV range bombarding dense but soft metals with a low melting point (Ag, Au, Pb, etc.). The heat spike sputtering often increases nonlinearly with energy, and can for small cluster ions lead to dramatic sputtering yields per cluster of the order of 10,000. [ 9 ] For animations of such a process see "Re: Displacement Cascade 1" in the external links section. 
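Two quantities mentioned above can be made concrete in a few lines of code: the efficiency of energy transfer between an incident ion and a target atom (which depends on the two masses), and the energy spectrum of sputtered atoms predicted by Thompson's cascade model, which falls off as E/(E + U_b)^3 and peaks near half the surface binding energy U_b. The sketch below is a minimal illustration; the argon-on-copper masses and the binding energy are example values, not recommended constants.

```python
import numpy as np

def max_energy_transfer_fraction(m_ion, m_target):
    """Maximum fraction of the ion energy transferable to a target atom
    in a single elastic binary collision: 4*m1*m2 / (m1 + m2)**2."""
    return 4.0 * m_ion * m_target / (m_ion + m_target) ** 2

def thompson_spectrum(energy_ev, binding_energy_ev):
    """Unnormalised Thompson energy distribution of sputtered atoms,
    f(E) ~ E / (E + U_b)**3, for the linear-cascade regime."""
    return energy_ev / (energy_ev + binding_energy_ev) ** 3

m_ar, m_cu = 39.95, 63.55   # example masses (u): argon ions on a copper target
u_b = 3.5                   # assumed surface binding energy of the target, in eV

gamma = max_energy_transfer_fraction(m_ar, m_cu)
energies = np.linspace(0.01, 50.0, 2000)
peak = energies[np.argmax(thompson_spectrum(energies, u_b))]

print(f"max energy-transfer fraction: {gamma:.2f}")
print(f"spectrum peaks near {peak:.2f} eV (analytically U_b/2 = {u_b / 2:.2f} eV)")
```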
Physical sputtering has a well-defined minimum energy threshold, equal to or larger than the ion energy at which the maximum energy transfer from the ion to a target atom equals the binding energy of a surface atom. That is to say, it can only happen when an ion is capable of transferring more energy into the target than is required for an atom to break free from its surface. This threshold is typically somewhere in the range of ten to a hundred eV . Preferential sputtering can occur at the start when a multicomponent solid target is bombarded and there is no solid state diffusion. If the energy transfer is more efficient to one of the target components, or it is less strongly bound to the solid, it will sputter more efficiently than the other. If in an AB alloy the component A is sputtered preferentially, the surface of the solid will, during prolonged bombardment, become enriched in the B component, thereby increasing the probability that B is sputtered such that the composition of the sputtered material will ultimately return to AB. The term electronic sputtering can mean either sputtering induced by energetic electrons (for example in a transmission electron microscope), or sputtering due to very high-energy or highly charged heavy ions that lose energy to the solid, mostly by electronic stopping power , where the electronic excitations cause sputtering. [ 10 ] Electronic sputtering produces high sputtering yields from insulators , as the electronic excitations that cause sputtering are not immediately quenched, as they would be in a conductor. One example of this is Jupiter's ice-covered moon Europa , where a MeV sulfur ion from Jupiter's magnetosphere can eject up to 10,000 H 2 O molecules. [ 11 ] In the case of multiple charged projectile ions a particular form of electronic sputtering can take place that has been termed potential sputtering . [ 12 ] [ 13 ] In these cases the potential energy stored in multiply charged ions (i.e., the energy necessary to produce an ion of this charge state from its neutral atom) is liberated when the ions recombine during impact on a solid surface (formation of hollow atoms ). This sputtering process is characterized by a strong dependence of the observed sputtering yields on the charge state of the impinging ion and can already take place at ion impact energies well below the physical sputtering threshold. Potential sputtering has only been observed for certain target species [ 14 ] and requires a minimum potential energy. [ 15 ] Removing atoms by sputtering with an inert gas is called ion milling or ion etching . Sputtering can also play a role in reactive-ion etching (RIE), a plasma process carried out with chemically active ions and radicals, for which the sputtering yield may be enhanced significantly compared to pure physical sputtering. Reactive ions are frequently used in secondary ion mass spectrometry (SIMS) equipment to enhance the sputter rates. The mechanisms causing the sputtering enhancement are not always well understood, although the case of fluorine etching of Si has been modeled well theoretically. [ 16 ] Sputtering observed to occur below the threshold energy of physical sputtering is also often called chemical sputtering. [ 2 ] [ 5 ] The mechanisms behind such sputtering are not always well understood, and may be hard to distinguish from chemical etching . 
At elevated temperatures, chemical sputtering of carbon can be understood to be due to the incoming ions weakening bonds in the sample, which then desorb by thermal activation. [ 17 ] The hydrogen-induced sputtering of carbon-based materials observed at low temperatures has been explained by H ions entering between C-C bonds and thus breaking them, a mechanism dubbed swift chemical sputtering . [ 18 ] Sputtering only happens when the kinetic energy of the incoming particles is much higher than conventional thermal energies ( ≫ 1 eV ). When done with direct current (DC sputtering), voltages of 3-5 kV are used. When done with alternating current ( RF sputtering), frequencies are around the 14 MHz range. Surfaces of solids can be cleaned from contaminants by using physical sputtering in a vacuum . Sputter cleaning is often used in surface science , vacuum deposition and ion plating . In 1955 Farnsworth, Schlier, George, and Burger reported using sputter cleaning in an ultra-high-vacuum system to prepare ultra-clean surfaces for low-energy electron-diffraction (LEED) studies. [ 19 ] [ 20 ] [ 21 ] Sputter cleaning became an integral part of the ion plating process. When the surfaces to be cleaned are large, a similar technique, plasma cleaning , can be used. Sputter cleaning has some potential problems such as overheating, gas incorporation in the surface region, bombardment (radiation) damage in the surface region, and the roughening of the surface, particularly if over done. It is important to have a clean plasma in order to not continually recontaminate the surface during sputter cleaning. Redeposition of sputtered material on the substrate can also give problems, especially at high sputtering pressures. Sputtering of the surface of a compound or alloy material can result in the surface composition being changed. Often the species with the least mass or the highest vapor pressure is the one preferentially sputtered from the surface. Sputter deposition is a method of depositing thin films by sputtering that involves eroding material from a "target" source onto a "substrate", e.g. a silicon wafer , solar cell, optical component, or many other possibilities. [ 22 ] Resputtering , in contrast, involves re-emission of the deposited material, e.g. SiO 2 during the deposition also by ion bombardment. Sputtered atoms are ejected into the gas phase but are not in their thermodynamic equilibrium state, and tend to deposit on all surfaces in the vacuum chamber. A substrate (such as a wafer) placed in the chamber will be coated with a thin film. Sputtering deposition usually uses an argon plasma because argon, a noble gas, will not react with the target material. Sputter damage is usually defined during transparent electrode deposition on optoelectronic devices, which is usually originated from the substrate's bombardment by highly energetic species. The main species involved in the process and the representative energies can be listed as (values taken from [ 23 ] ): As seen in the list above, negative ions (e.g., O − and In − for ITO sputtering) formed at the target surface and accelerated toward the substrate acquire the largest energy, which is determined by the potential between target and plasma potentials. Although the flux of the energetic particles is an important parameter, high-energy negative O − ions are additionally the most abundant species in plasma in case of reactive deposition of oxides. 
However, energies of other ions/atoms (e.g., Ar + , Ar 0 , or In 0 ) in the discharge may already be sufficient to dissociate surface bonds or etch soft layers in certain device technologies. In addition, the momentum transfer of high-energy particles from the plasma (Ar, oxygen ions) or sputtered from the target might impinge or even increase the substrate temperature sufficiently to trigger physical (e.g., etching) or thermal degradation of sensitive substrate layers (e.g. thin film metal halide perovskites). This can affect the functional properties of underlying charge transport and passivation layers and photoactive absorbers or emitters, eroding device performance. For instance, due to sputter damage, there may be inevitable interfacial consequences such as pinning of the Fermi level, caused by damage-related interface gap states, resulting in the formation of Schottky-barrier impeding carrier transport. Sputter damage can also impair the doping efficiency of materials and the lifetime of excess charge carriers in photoactive materials; in some cases, depending on its extent, such damage can even lead to a reduced shunt resistance. [ 23 ] In the semiconductor industry sputtering is used to etch the target. Sputter etching is chosen in cases where a high degree of etching anisotropy is needed and selectivity is not a concern. One major drawback of this technique is wafer damage and high voltage use. Another application of sputtering is to etch away the target material. One such example occurs in secondary ion mass spectrometry (SIMS), where the target sample is sputtered at a constant rate. As the target is sputtered, the concentration and identity of sputtered atoms are measured using mass spectrometry . In this way the composition of the target material can be determined and even extremely low concentrations (20 μg/kg) of impurities detected. Furthermore, because the sputtering continually etches deeper into the sample, concentration profiles as a function of depth can be measured. Sputtering is one of the forms of space weathering, a process that changes the physical and chemical properties of airless bodies, such as asteroids and the Moon . On icy moons, especially Europa , sputtering of photolyzed water from the surface leads to net loss of hydrogen and accumulation of oxygen-rich materials that may be important for life. Sputtering is also one of the possible ways that Mars has lost most of its atmosphere and that Mercury continually replenishes its tenuous surface-bounded exosphere . Due to its adaptability with a wide range of materials, Sputtering is used to create various types of coatings that enhance the performance of optical components. [ 24 ] Anti-reflective coatings are applied to lenses and optical instruments to minimize light reflection and increase light transmission, which improves clarity and reduces glare. [ 25 ] Sputtering is also used to deposit reflective coatings on mirrors, ensuring high reflectivity and durability for applications such as telescopes , cameras , and laser systems. [ 26 ]
https://en.wikipedia.org/wiki/Sputtering
Squalene is an organic compound . It is a triterpene with the formula C 30 H 50 . It is a colourless oil, although impure samples appear yellow. It was originally obtained from shark liver oil (hence its name, as Squalus is a genus of sharks). An estimated 12% of bodily squalene in humans is found in sebum . [ 5 ] Squalene has a role in topical skin lubrication and protection. [ 6 ] Most plants, fungi, and animals produce squalene as biochemical precursor in sterol biosynthesis, including cholesterol and steroid hormones in the human body. [ 7 ] [ 8 ] [ 9 ] It is also an intermediate in the biosynthesis of hopanoids in many bacteria . [ 10 ] Squalene is an important ingredient in some vaccine adjuvants : The Novartis and GlaxoSmithKline adjuvants are called MF59 and AS03 , respectively. [ 11 ] Squalene is a biochemical precursor to both steroids and hopanoids . [ 12 ] For sterols, the squalene conversion begins with oxidation (via squalene monooxygenase ) of one of its terminal double bonds, resulting in 2,3-oxidosqualene . It then undergoes an enzyme-catalysed cyclisation to produce lanosterol , which can be elaborated into other steroids such as cholesterol and ergosterol in a multistep process by the removal of three methyl groups, the reduction of one double bond by NADPH and the migration of the other double bond. [ 13 ] In many plants, this is then converted into stigmasterol , while in many fungi, it is the precursor to ergosterol . [ citation needed ] The biosynthetic pathway is found in many bacteria, [ 14 ] and most eukaryotes , though has not been found in Archaea. [ 15 ] Squalene is biosynthesised by coupling two molecules of farnesyl pyrophosphate . The condensation requires NADPH and the enzyme squalene synthase . Click on genes, proteins and metabolites below to link to respective articles. [ § 1 ] Synthetic squalene is prepared commercially from geranylacetone . [ 16 ] In 2020, conservationists raised concerns about the potential slaughter of sharks to obtain squalene for a COVID-19 vaccine . [ 17 ] Environmental and other concerns over shark hunting have motivated its extraction from other sources. [ 18 ] Biosynthetic processes use genetically engineered yeast or bacteria. [ 19 ] [ 20 ] Immunologic adjuvants are substances, administered in conjunction with a vaccine , that stimulate the immune system and increase the response to the vaccine. Squalene is not itself an adjuvant, but it has been used in conjunction with surfactants in certain adjuvant formulations. [ 11 ] An adjuvant using squalene is Seqirus ' proprietary MF59 , which is added to influenza vaccines to help stimulate the human body's immune response through production of CD4 memory cells. It is the first oil-in-water influenza vaccine adjuvant to be commercialised in combination with a seasonal influenza virus vaccine. It was developed in the 1990s by researchers at Ciba-Geigy and Chiron ; both companies were subsequently acquired by Novartis. [ 11 ] The Influenza vaccine business of Novartis was later acquired by CSL Bering and created the company Seqirus. [ 21 ] It is present in the form of an emulsion and is added to make the vaccine more immunogenic. [ 11 ] However, the mechanism of action remains unknown. MF59 is capable of switching on a number of genes that partially overlap with those activated by other adjuvants. [ 22 ] How these changes are triggered is unclear; to date, no receptors responding to MF59 have been identified. 
One possibility is that MF59 affects the cell behaviour by changing the lipid metabolism, namely by inducing accumulation of neutral lipids within the target cells. [ 23 ] An influenza vaccine called FLUAD which used MF59 as an adjuvant was approved for use in the US in people 65 years of age and older, beginning with the 2016–2017 flu season. [ 24 ] A 2009 meta-analysis assessed data from 64 clinical trials of influenza vaccines with the squalene-containing adjuvant MF59 and compared them to the effects of vaccines with no adjuvant. The analysis reported that the adjuvated vaccines were associated with slightly lower risks of chronic diseases, but that neither type of vaccines altered the rate of autoimmune diseases ; the authors concluded that their data "supports the good safety profile associated with MF59-adjuvated influenza vaccines and suggests there may be a clinical benefit over non-MF59-containing vaccines". [ 25 ] Toxicology studies indicate that in the concentrations used in cosmetics , squalene has low acute toxicity, and is not a significant contact allergen or irritant. [ 26 ] [ 27 ] The World Health Organization and the US Department of Defense have both published extensive reports that emphasise that squalene is naturally occurring, even in oils of human fingerprints. [ 11 ] [ 28 ] The WHO goes further to explain that squalene has been present in over 22 million flu vaccines given to patients in Europe since 1997 without significant vaccine-related adverse events. [ 11 ] Attempts to link squalene to Gulf War syndrome have been debunked. [ 29 ] [ 30 ] [ 31 ] [ 32 ]
https://en.wikipedia.org/wiki/Squalene
A squamulose lichen is a lichen that is composed of small, often overlapping "scales" called squamules . [ 1 ] If they are raised from the substrate and appear leafy, the lichen may appear to be a foliose lichen , but the underside does not have a "skin" ( cortex ), as foliose lichens do. [ 2 ] Squamulose lichens are composed of flattish units that are usually tightly clustered. They are like an intermediate between crustose and foliose lichens . Examples of squamulose lichens include Vahliella leucophaea , Cladonia subcervicornis and Lichenomphalia hudsoniana . [ 3 ] This article about lichens or lichenology is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Squamulose_lichen
Within the area of organocatalysis , squaramide catalysis describes the use of squaramides to accelerate and stereochemically alter organic transformations. The effects arise through hydrogen-bonding interactions between the substrate and the squaramide, unlike classic catalysts, and the squaramide is thus a type of hydrogen-bond catalyst . The scope of catalysis by these small-molecule H-bond donors, termed squaramide organocatalysis, covers both non-stereoselective and stereoselective applications. [ 1 ] A squaramide organocatalyst typically contains the squaramide group and a basic group, which is usually a tertiary amine. The 3,5-bis(trifluoromethyl)phenyl group is commonly used for the R group. For enantioselective squaramide catalysis, chirality is induced via the tertiary amine group. There are cases where both sides of the squaramide are tertiary amines. [ 1 ] The interaction between the substrate and the catalyst can be seen in the image above, with the electrophile being bound to the squaramide part and the protic nucleophile activated by the basic amine part (which increases its nucleophilicity). However, it must be noted that the positions of the nucleophile and electrophile switch when the electrophile can only form one hydrogen bond, as in the case of most imines . [ 1 ] Squaramide catalysts are easily prepared from starting materials like methyl squarate and possess high activities under low catalyst loadings. Squaramide catalysis can be a replacement for thiourea organocatalysis in some scenarios. [ 2 ] [ 3 ] Squaramides have a higher affinity for halide ions than thioureas. [ 4 ] Aqueous media can be used. [ 1 ] H-bond accepting substrates include carbonyl compounds, imines, Michael acceptors , and epoxides . The nucleophile can be nitroalkanes , enolates , and even phenols (resulting in electrophilic aromatic substitution ). Subsequent cascade reactions are possible. [ 1 ] [ 5 ] [ 2 ] Squaramides were first synthesized in 1966. [ 1 ] Squaramide catalysts were developed in 2008 by Jeremiah P. Malerich, Koji Hagihara, and Viresh H. Rawal. [ 1 ] [ 3 ] From the general structure of squaramide catalysts, a number of catalysts have been developed, most with the aim of enabling chiral catalysis.
https://en.wikipedia.org/wiki/Squaramide_catalysis
In mathematics , a square-free integer (or squarefree integer ) is an integer which is divisible by no square number other than 1. That is, its prime factorization has exactly one factor for each prime that appears in it. For example, 10 = 2 ⋅ 5 is square-free, but 18 = 2 ⋅ 3 ⋅ 3 is not, because 18 is divisible by 9 = 3 2 . The smallest positive square-free numbers are Every positive integer n {\displaystyle n} can be factored in a unique way as n = ∏ i = 1 k q i i , {\displaystyle n=\prod _{i=1}^{k}q_{i}^{i},} where the q i {\displaystyle q_{i}} different from one are square-free integers that are pairwise coprime . This is called the square-free factorization of n . To construct the square-free factorization, let n = ∏ j = 1 h p j e j {\displaystyle n=\prod _{j=1}^{h}p_{j}^{e_{j}}} be the prime factorization of n {\displaystyle n} , where the p j {\displaystyle p_{j}} are distinct prime numbers . Then the factors of the square-free factorization are defined as q i = ∏ j : e j = i p j . {\displaystyle q_{i}=\prod _{j:e_{j}=i}p_{j}.} An integer is square-free if and only if q i = 1 {\displaystyle q_{i}=1} for all i > 1 {\displaystyle i>1} . An integer greater than one is the k {\displaystyle k} th power of another integer if and only if k {\displaystyle k} is a divisor of all i {\displaystyle i} such that q i ≠ 1. {\displaystyle q_{i}\neq 1.} The use of the square-free factorization of integers is limited by the fact that its computation is as difficult as the computation of the prime factorization. More precisely every known algorithm for computing a square-free factorization computes also the prime factorization. This is a notable difference with the case of polynomials for which the same definitions can be given, but, in this case, the square-free factorization is not only easier to compute than the complete factorization, but it is the first step of all standard factorization algorithms. The radical of an integer is its largest square-free factor, that is ∏ i = 1 k q i {\displaystyle \textstyle \prod _{i=1}^{k}q_{i}} with notation of the preceding section. An integer is square-free if and only if it is equal to its radical. Every positive integer n {\displaystyle n} can be represented in a unique way as the product of a powerful number (that is an integer such that is divisible by the square of every prime factor) and a square-free integer, which are coprime . In this factorization, the square-free factor is q 1 , {\displaystyle q_{1},} and the powerful number is ∏ i = 2 k q i i . {\displaystyle \textstyle \prod _{i=2}^{k}q_{i}^{i}.} The square-free part of n {\displaystyle n} is q 1 , {\displaystyle q_{1},} which is the largest square-free divisor k {\displaystyle k} of n {\displaystyle n} that is coprime with n / k {\displaystyle n/k} . The square-free part of an integer may be smaller than the largest square-free divisor, which is ∏ i = 1 k q i . {\displaystyle \textstyle \prod _{i=1}^{k}q_{i}.} Any arbitrary positive integer n {\displaystyle n} can be represented in a unique way as the product of a square and a square-free integer: n = m 2 k {\displaystyle n=m^{2}k} In this factorization, m {\displaystyle m} is the largest divisor of n {\displaystyle n} such that m 2 {\displaystyle m^{2}} is a divisor of n {\displaystyle n} . In summary, there are three square-free factors that are naturally associated to every integer: the square-free part, the above factor k {\displaystyle k} , and the largest square-free factor. Each is a factor of the next one. 
All are easily deduced from the prime factorization or the square-free factorization: if n = ∏ i = 1 h p i e i = ∏ i = 1 k q i i {\displaystyle n=\prod _{i=1}^{h}p_{i}^{e_{i}}=\prod _{i=1}^{k}q_{i}^{i}} are the prime factorization and the square-free factorization of n {\displaystyle n} , where p 1 , … , p h {\displaystyle p_{1},\ldots ,p_{h}} are distinct prime numbers, then the square-free part is ∏ e i = 1 p i = q 1 , {\displaystyle \prod _{e_{i}=1}p_{i}=q_{1},} The square-free factor such the quotient is a square is ∏ e i odd p i = ∏ i odd q i , {\displaystyle \prod _{e_{i}{\text{ odd}}}p_{i}=\prod _{i{\text{ odd}}}q_{i},} and the largest square-free factor is ∏ i = 1 h p i = ∏ i = 1 k q i . {\displaystyle \prod _{i=1}^{h}p_{i}=\prod _{i=1}^{k}q_{i}.} For example, if n = 75600 = 2 4 ⋅ 3 3 ⋅ 5 2 ⋅ 7 , {\displaystyle n=75600=2^{4}\cdot 3^{3}\cdot 5^{2}\cdot 7,} one has q 1 = 7 , q 2 = 5 , q 3 = 3 , q 4 = 2. {\displaystyle q_{1}=7,\;q_{2}=5,\;q_{3}=3,\;q_{4}=2.} The square-free part is 7 , the square-free factor such that the quotient is a square is 3 ⋅ 7 = 21 , and the largest square-free factor is 2 ⋅ 3 ⋅ 5 ⋅ 7 = 210 . No algorithm is known for computing any of these square-free factors which is faster than computing the complete prime factorization. In particular, there is no known polynomial-time algorithm for computing the square-free part of an integer, or even for determining whether an integer is square-free. [ 1 ] In contrast, polynomial-time algorithms are known for primality testing . [ 2 ] This is a major difference between the arithmetic of the integers, and the arithmetic of the univariate polynomials , as polynomial-time algorithms are known for square-free factorization of polynomials (in short, the largest square-free factor of a polynomial is its quotient by the greatest common divisor of the polynomial and its formal derivative ). [ 3 ] A positive integer n {\displaystyle n} is square-free if and only if in the prime factorization of n {\displaystyle n} , no prime factor occurs with an exponent larger than one. Another way of stating the same is that for every prime factor p {\displaystyle p} of n {\displaystyle n} , the prime p {\displaystyle p} does not evenly divide n / p {\displaystyle n/p} . Also n {\displaystyle n} is square-free if and only if in every factorization n = a b {\displaystyle n=ab} , the factors a {\displaystyle a} and b {\displaystyle b} are coprime . An immediate result of this definition is that all prime numbers are square-free. A positive integer n {\displaystyle n} is square-free if and only if all abelian groups of order n {\displaystyle n} are isomorphic , which is the case if and only if any such group is cyclic . This follows from the classification of finitely generated abelian groups . A integer n {\displaystyle n} is square-free if and only if the factor ring Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } (see modular arithmetic ) is a product of fields . This follows from the Chinese remainder theorem and the fact that a ring of the form Z / k Z {\displaystyle \mathbb {Z} /k\mathbb {Z} } is a field if and only if k {\displaystyle k} is prime. For every positive integer n {\displaystyle n} , the set of all positive divisors of n {\displaystyle n} becomes a partially ordered set if we use divisibility as the order relation. This partially ordered set is always a distributive lattice . It is a Boolean algebra if and only if n {\displaystyle n} is square-free. 
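The relationships just described are straightforward to compute once the prime factorization is known. The sketch below (using sympy's factorint, since, as noted earlier, no faster route than factoring is known) reproduces the worked example n = 75600, returning the square-free factorization together with the square-free part, the square-free factor whose quotient is a square, and the largest square-free factor (the radical).

```python
from sympy import factorint

def square_free_data(n):
    """Square-free factorization of n plus its three square-free factors."""
    exponents = factorint(n)                  # {prime: exponent}
    max_e = max(exponents.values())
    q = [1] * (max_e + 1)                     # q[i] collects primes with exponent i
    for p, e in exponents.items():
        q[e] *= p                             # n = prod(q[i] ** i)

    square_free_part = q[1]                   # primes appearing with exponent 1
    odd_exponent_part = 1                     # the quotient n / odd_exponent_part is a square
    radical = 1                               # largest square-free factor
    for p, e in exponents.items():
        radical *= p
        if e % 2 == 1:
            odd_exponent_part *= p
    return q[1:], square_free_part, odd_exponent_part, radical

print(square_free_data(75600))
# ([7, 5, 3, 2], 7, 21, 210) for 75600 = 2**4 * 3**3 * 5**2 * 7
```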
A positive integer n {\displaystyle n} is square-free if and only if μ ( n ) ≠ 0 {\displaystyle \mu (n)\neq 0} , where μ {\displaystyle \mu } denotes the Möbius function . The absolute value of the Möbius function is the indicator function for the square-free integers – that is, | μ ( n ) | is equal to 1 if n is square-free, and 0 if it is not. The Dirichlet series of this indicator function is ∑ n = 1 ∞ | μ ( n ) | n − s = ζ ( s ) / ζ ( 2 s ) , {\displaystyle \sum _{n=1}^{\infty }|\mu (n)|\,n^{-s}={\frac {\zeta (s)}{\zeta (2s)}},} where ζ ( s ) is the Riemann zeta function . This follows from the Euler product ζ ( s ) ζ ( 2 s ) = ∏ p 1 − p − 2 s 1 − p − s = ∏ p ( 1 + p − s ) , {\displaystyle {\frac {\zeta (s)}{\zeta (2s)}}=\prod _{p}{\frac {1-p^{-2s}}{1-p^{-s}}}=\prod _{p}\left(1+p^{-s}\right),} where the products are taken over the prime numbers. Let Q ( x ) denote the number of square-free integers between 1 and x ( OEIS : A013928 shifting index by 1). For large n , 3/4 of the positive integers less than n are not divisible by 4, 8/9 of these numbers are not divisible by 9, and so on. Because these ratios satisfy the multiplicative property (this follows from the Chinese remainder theorem ), we obtain the approximation Q ( x ) ≈ x ∏ p ( 1 − 1 / p 2 ) = x / ζ ( 2 ) = 6 x / π 2 . {\displaystyle Q(x)\approx x\prod _{p}\left(1-{\frac {1}{p^{2}}}\right)={\frac {x}{\zeta (2)}}={\frac {6x}{\pi ^{2}}}.} This argument can be made rigorous for getting the estimate (using big O notation ) Q ( x ) = 6 x / π 2 + O ( √ x ) . {\displaystyle Q(x)={\frac {6x}{\pi ^{2}}}+O\left({\sqrt {x}}\right).} Sketch of a proof: the above characterization gives Q ( x ) = ∑ n ≤ x | μ ( n ) | = ∑ n ≤ x ∑ d 2 ∣ n μ ( d ) = ∑ d = 1 ∞ μ ( d ) ⌊ x / d 2 ⌋ ; {\displaystyle Q(x)=\sum _{n\leq x}|\mu (n)|=\sum _{n\leq x}\sum _{d^{2}\mid n}\mu (d)=\sum _{d=1}^{\infty }\mu (d)\left\lfloor {\frac {x}{d^{2}}}\right\rfloor ;} observing that the last summand is zero for d > √ x {\displaystyle d>{\sqrt {x}}} , it follows that Q ( x ) = ∑ d ≤ √ x μ ( d ) ⌊ x / d 2 ⌋ = x ∑ d ≤ √ x μ ( d ) / d 2 + O ( √ x ) = 6 x / π 2 + O ( √ x ) . {\displaystyle Q(x)=\sum _{d\leq {\sqrt {x}}}\mu (d)\left\lfloor {\frac {x}{d^{2}}}\right\rfloor =x\sum _{d\leq {\sqrt {x}}}{\frac {\mu (d)}{d^{2}}}+O({\sqrt {x}})={\frac {6x}{\pi ^{2}}}+O({\sqrt {x}}).} By exploiting the largest known zero-free region of the Riemann zeta function Arnold Walfisz improved the approximation to [ 4 ] Q ( x ) = 6 x / π 2 + O ( x 1 / 2 exp ⁡ ( − c ( log ⁡ x ) 3 / 5 ( log ⁡ log ⁡ x ) − 1 / 5 ) ) , {\displaystyle Q(x)={\frac {6x}{\pi ^{2}}}+O\left(x^{1/2}\exp \left(-c(\log x)^{3/5}(\log \log x)^{-1/5}\right)\right),} for some positive constant c . Under the Riemann hypothesis , the error term can be reduced to [ 5 ] In 2015 the error term was further reduced (assuming also Riemann hypothesis) to [ 6 ] The asymptotic/ natural density of square-free numbers is therefore lim x → ∞ Q ( x ) x = 6 π 2 ≈ 0.6079. {\displaystyle \lim _{x\to \infty }{\frac {Q(x)}{x}}={\frac {6}{\pi ^{2}}}\approx 0.6079.} Therefore over 3/5 of the integers are square-free. Likewise, if Q ( x , n ) denotes the number of n -free integers (e.g. 3-free integers being cube-free integers) between 1 and x , one can show [ 7 ] Q ( x , n ) = x ζ ( n ) + O ( x 1 / n ) . {\displaystyle Q(x,n)={\frac {x}{\zeta (n)}}+O\left(x^{1/n}\right).} Since a multiple of 4 must have a square factor 4=2 2 , it cannot occur that four consecutive integers are all square-free. On the other hand, there exist infinitely many integers n for which 4 n +1, 4 n +2, 4 n +3 are all square-free. Otherwise, observing that 4 n and at least one of 4 n +1, 4 n +2, 4 n +3 among four could be non-square-free for sufficiently large n , half of all positive integers minus finitely many must be non-square-free and therefore contrary to the above asymptotic estimate for Q ( x ) {\displaystyle Q(x)} . There exist sequences of consecutive non-square-free integers of arbitrary length. Indeed, for every tuple ( p 1 , ..., p l ) of distinct primes, the Chinese remainder theorem guarantees the existence of an n that satisfies the simultaneous congruence n ≡ − i ( mod p i 2 ) , i = 1 , … , l . {\displaystyle n\equiv -i{\pmod {p_{i}^{2}}},\qquad i=1,\ldots ,l.} Each n + i is then divisible by p i 2 . [ 8 ] On the other hand, the above-mentioned estimate Q ( x ) = 6 x / π 2 + O ( √ x ) {\displaystyle Q(x)=6x/\pi ^{2}+O\left({\sqrt {x}}\right)} implies that, for some constant c , there always exists a square-free integer between x and x + c √ x {\displaystyle x+c{\sqrt {x}}} for positive x . Moreover, an elementary argument allows us to replace x + c √ x {\displaystyle x+c{\sqrt {x}}} by x + c x 1 / 5 log ⁡ x . {\displaystyle x+cx^{1/5}\log x.} [ 9 ] The abc conjecture would allow x + x o ( 1 ) {\displaystyle x+x^{o(1)}} . [ 10 ] The squarefree integers ≤ x can be identified and counted in Õ ( x ) time by using a modified Sieve of Eratosthenes . If only Q ( x ) is desired, and not a list of the numbers that it counts, then ( 1 ) can be used to compute Q ( x ) in Õ ( √ x ) time.
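The density 6/π² quoted above is easy to check numerically. The brute-force sketch below (a simple sieve over squares, not one of the faster counting methods mentioned in this article) counts the square-free integers up to one million and compares the count with 6x/π².

```python
import math

def count_squarefree(x):
    """Count square-free integers in [1, x] by crossing out multiples of d**2."""
    is_squarefree = [True] * (x + 1)
    d = 2
    while d * d <= x:
        for multiple in range(d * d, x + 1, d * d):
            is_squarefree[multiple] = False
        d += 1
    return sum(is_squarefree[1:])

x = 10**6
q_x = count_squarefree(x)
print(q_x, 6 * x / math.pi**2)   # the two values agree to within a few units
```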
The largest known value of Q ( x ) , for x = 10 36 , was computed by Jakub Pawlewicz in 2011 using an algorithm that achieves Õ ( x 2/5 ) time, [ 11 ] and an algorithm taking Õ ( x 1/3 ) time has been outlined but not implemented. [ 12 ] : §5.5 The table shows how Q ( x ) {\displaystyle Q(x)} and 6 π 2 x {\displaystyle {\frac {6}{\pi ^{2}}}x} (with the latter rounded to one decimal place) compare at powers of 10. R ( x ) = Q ( x ) − 6 π 2 x {\displaystyle R(x)=Q(x)-{\frac {6}{\pi ^{2}}}x} , also denoted as Δ ( x ) {\displaystyle \Delta (x)} . R ( x ) {\displaystyle R(x)} changes its sign infinitely often as x {\displaystyle x} tends to infinity. [ 13 ] The absolute value of R ( x ) {\displaystyle R(x)} is astonishingly small compared with x {\displaystyle x} . If we represent a square-free number as the infinite product then we may take those a n {\displaystyle a_{n}} and use them as bits in a binary number with the encoding The square-free number 42 has factorization 2 × 3 × 7 , or as an infinite product 2 1 · 3 1 · 5 0 · 7 1 · 11 0 · 13 0 ··· Thus the number 42 may be encoded as the binary sequence ...001011 or 11 decimal. (The binary digits are reversed from the ordering in the infinite product.) Since the prime factorization of every number is unique, so also is every binary encoding of the square-free integers. The converse is also true. Since every positive integer has a unique binary representation it is possible to reverse this encoding so that they may be decoded into a unique square-free integer. Again, for example, if we begin with the number 42, this time as simply a positive integer, we have its binary representation 101010 . This decodes to 2 0 · 3 1 · 5 0 · 7 1 · 11 0 · 13 1 = 3 × 7 × 13 = 273. Thus binary encoding of squarefree numbers describes a bijection between the nonnegative integers and the set of positive squarefree integers. (See sequences A019565 , A048672 and A064273 in the OEIS .) The central binomial coefficient is never squarefree for n > 4. This was proven in 1985 for all sufficiently large integers by András Sárközy , [ 14 ] and for all integers > 4 in 1996 by Olivier Ramaré and Andrew Granville . [ 15 ] Let us call " t -free" a positive integer that has no t -th power in its divisors. In particular, the 2-free integers are the square-free integers. The multiplicative function c o r e t ( n ) {\displaystyle \mathrm {core} _{t}(n)} maps every positive integer n to the quotient of n by its largest divisor that is a t -th power. That is, The integer c o r e t ( n ) {\displaystyle \mathrm {core} _{t}(n)} is t -free, and every t -free integer is mapped to itself by the function c o r e t . {\displaystyle \mathrm {core} _{t}.} The Dirichlet generating function of the sequence ( c o r e t ( n ) ) n ∈ N {\displaystyle \left(\mathrm {core} _{t}(n)\right)_{n\in \mathbb {N} }} is See also OEIS : A007913 ( t =2), OEIS : A050985 ( t =3) and OEIS : A053165 ( t =4).
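The binary encoding described above is simple to implement. The sketch below (using sympy for prime generation and factorization) reproduces the two worked examples: the square-free number 42 encodes to 11, and the integer 42 decodes to the square-free number 273.

```python
from sympy import factorint, prime, primepi

def encode(squarefree_n):
    """Map a square-free integer to the integer whose binary bit k (from bit 0)
    records whether the (k+1)-th prime divides it."""
    bits = 0
    for p in factorint(squarefree_n):
        bits |= 1 << int(primepi(p) - 1)
    return bits

def decode(m):
    """Inverse map: read the bits of m as 0/1 exponents over successive primes."""
    n, k = 1, 1
    while m:
        if m & 1:
            n *= prime(k)
        m >>= 1
        k += 1
    return n

print(encode(42))   # 11, since 42 = 2 * 3 * 7 corresponds to the bits ...0001011
print(decode(42))   # 273, since 42 = 0b101010 selects the primes 3, 7 and 13
```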
https://en.wikipedia.org/wiki/Square-free_integer
In mathematics , a square-free polynomial is a univariate polynomial (over a field or an integral domain ) that has no multiple root in an algebraically closed field containing its coefficients. In characteristic 0, or over a finite field , a univariate polynomial is square-free if and only if it does not have as a divisor any square of a non-constant polynomial . [ 1 ] In applications in physics and engineering, a square-free polynomial is commonly called a polynomial with no repeated roots . The product rule implies that, if p 2 divides f , then p divides the formal derivative f ′ of f . The converse is also true and hence, f {\displaystyle f} is square-free if and only if 1 {\displaystyle 1} is a greatest common divisor of the polynomial and its derivative. [ 2 ] A square-free decomposition or square-free factorization of a polynomial is a factorization into powers of square-free polynomials f = a 1 a 2 2 a 3 3 ⋯ a n n = ∏ k = 1 n a k k , {\displaystyle f=a_{1}\,a_{2}^{2}\,a_{3}^{3}\cdots a_{n}^{n}=\prod _{k=1}^{n}a_{k}^{k},} where those of the a k that are non-constant are pairwise coprime square-free polynomials (here, two polynomials are said to be coprime if their greatest common divisor is a constant; in other words, it is coprimality over the field of fractions of the coefficients that is considered). [ 1 ] Every non-zero polynomial admits a square-free factorization, which is unique up to the multiplication and division of the factors by non-zero constants. The square-free factorization is much easier to compute than the complete factorization into irreducible factors, and is thus often preferred when the complete factorization is not really needed, as for the partial fraction decomposition and the symbolic integration of rational fractions . Square-free factorization is the first step of the polynomial factorization algorithms that are implemented in computer algebra systems . Therefore, the algorithm of square-free factorization is basic in computer algebra . Over a field of characteristic 0, the quotient of f {\displaystyle f} by its greatest common divisor (GCD) with its derivative is the product of the a i {\displaystyle a_{i}} in the above square-free decomposition. Over a perfect field of non-zero characteristic p , this quotient is the product of the a i {\displaystyle a_{i}} such that i is not a multiple of p . Further GCD computations and exact divisions allow computing the square-free factorization (see square-free factorization over a finite field ). In characteristic zero, a better algorithm is known, Yun's algorithm, which is described below. [ 1 ] Its computational complexity is, at most, twice that of the GCD computation of the input polynomial and its derivative. More precisely, if T n {\displaystyle T_{n}} is the time needed to compute the GCD of two polynomials of degree n {\displaystyle n} and the quotient of these polynomials by the GCD, then 2 T n {\displaystyle 2T_{n}} is an upper bound for the time needed to compute the complete square-free decomposition. There are also known algorithms for square-free decomposition of multivariate polynomials , which proceed generally by considering a multivariate polynomial as a univariate polynomial with polynomial coefficients, and applying recursively a univariate algorithm. [ 3 ] This section describes Yun's algorithm for the square-free decomposition of univariate polynomials over a field of characteristic 0 . [ 1 ] It proceeds by a succession of GCD computations and exact divisions. The input is thus a non-zero polynomial f , and the first step of the algorithm consists of computing the GCD a 0 of f and its formal derivative f' .
If f = a 1 a 2 2 a 3 3 ⋯ a n n {\displaystyle f=a_{1}a_{2}^{2}a_{3}^{3}\cdots a_{n}^{n}} is the desired factorization, we have thus a 0 = a 2 a 3 2 ⋯ a n n − 1 {\displaystyle a_{0}=a_{2}a_{3}^{2}\cdots a_{n}^{n-1}} , f / a 0 = a 1 a 2 ⋯ a n {\displaystyle f/a_{0}=a_{1}a_{2}\cdots a_{n}} and f ′ / a 0 = ∑ i = 1 n i a i ′ ∏ j ≠ i a j . {\displaystyle f'/a_{0}=\sum _{i=1}^{n}i\,a_{i}'\prod _{j\neq i}a_{j}.} If we set b 1 = f / a 0 {\displaystyle b_{1}=f/a_{0}} , c 1 = f ′ / a 0 {\displaystyle c_{1}=f'/a_{0}} and d 1 = c 1 − b 1 ′ {\displaystyle d_{1}=c_{1}-b_{1}'} , we get that b 1 = a 1 a 2 ⋯ a n {\displaystyle b_{1}=a_{1}a_{2}\cdots a_{n}} , d 1 = ∑ i = 2 n ( i − 1 ) a i ′ ∏ j ≠ i a j {\displaystyle d_{1}=\sum _{i=2}^{n}(i-1)\,a_{i}'\prod _{j\neq i}a_{j}} and a 1 = gcd ( b 1 , d 1 ) . {\displaystyle a_{1}=\gcd(b_{1},d_{1}).} Iterating this process until b k + 1 = 1 {\displaystyle b_{k+1}=1} we find all the a i . {\displaystyle a_{i}.} This is formalized into an algorithm as follows: a 0 := gcd ( f , f ′ ) ; b 1 := f / a 0 ; c 1 := f ′ / a 0 ; d 1 := c 1 − b 1 ′ ; i := 1 ; {\displaystyle a_{0}:=\gcd(f,f');\quad b_{1}:=f/a_{0};\quad c_{1}:=f'/a_{0};\quad d_{1}:=c_{1}-b_{1}';\quad i:=1;} repeat a i := gcd ( b i , d i ) ; b i + 1 := b i / a i ; c i + 1 := d i / a i ; i := i + 1 ; d i := c i − b i ′ ; {\displaystyle a_{i}:=\gcd(b_{i},d_{i});\quad b_{i+1}:=b_{i}/a_{i};\quad c_{i+1}:=d_{i}/a_{i};\quad i:=i+1;\quad d_{i}:=c_{i}-b_{i}';} until b i = 1 ; {\displaystyle b_{i}=1;} Output a 1 , … , a i − 1 . {\displaystyle a_{1},\ldots ,a_{i-1}.} The degree of c i {\displaystyle c_{i}} and d i {\displaystyle d_{i}} is one less than the degree of b i . {\displaystyle b_{i}.} As f {\displaystyle f} is the product of the b i , {\displaystyle b_{i},} the sum of the degrees of the b i {\displaystyle b_{i}} is the degree of f . {\displaystyle f.} As the complexity of GCD computations and divisions increases more than linearly with the degree, it follows that the total running time of the "repeat" loop is less than the running time of the first line of the algorithm, and that the total running time of Yun's algorithm is upper bounded by twice the time needed to compute the GCD of f {\displaystyle f} and f ′ {\displaystyle f'} and the quotient of f {\displaystyle f} and f ′ {\displaystyle f'} by their GCD. In general, a polynomial has no polynomial square root . More precisely, most polynomials cannot be written as the square of another polynomial. A polynomial has a square root if and only if all exponents of the square-free decomposition are even. In this case, a square root is obtained by dividing these exponents by 2. Thus the problem of deciding if a polynomial has a square root, and of computing it if it exists, is a special case of square-free factorization.
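The following is a minimal sketch of Yun's algorithm as described above, written with the SymPy library (assumed to be available); the helper name yun_squarefree and the sample polynomial are illustrative only, and SymPy's built-in sqf_list is used purely as an independent cross-check.

from sympy import symbols, gcd, quo, diff, expand, sqf_list

x = symbols('x')

def yun_squarefree(f):
    # Return [a1, a2, a3, ...] with f = a1 * a2**2 * a3**3 * ...
    # (f monic, coefficients in a field of characteristic 0).
    a0 = gcd(f, diff(f, x))              # a0 is constant exactly when f is already square-free
    b = quo(f, a0, x)                    # b1 = f / a0
    d = expand(quo(diff(f, x), a0, x) - diff(b, x))   # d1 = c1 - b1'
    parts = []
    while b.has(x):                      # stop once b_{k+1} is a constant
        a = gcd(b, d)                    # a_i = gcd(b_i, d_i)
        parts.append(a)
        b, c = quo(b, a, x), quo(d, a, x)
        d = expand(c - diff(b, x))
    return parts

f = expand((x + 1) * (x - 2)**2 * (x + 3)**3)
print(yun_squarefree(f))                 # expected: [x + 1, x - 2, x + 3]
print(sqf_list(f))                       # SymPy's own square-free factorization, for comparison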
https://en.wikipedia.org/wiki/Square-free_polynomial
In mathematics , a square-integrable function , also called a quadratically integrable function or L 2 {\displaystyle L^{2}} function or square-summable function , [ 1 ] is a real - or complex -valued measurable function for which the integral of the square of the absolute value is finite. Thus, square-integrability on the real line ( − ∞ , + ∞ ) {\displaystyle (-\infty ,+\infty )} is defined as follows. f : R → C square integrable ⟺ ∫ − ∞ ∞ | f ( x ) | 2 d x < ∞ {\displaystyle f:\mathbb {R} \to \mathbb {C} {\text{ square integrable}}\quad \iff \quad \int _{-\infty }^{\infty }|f(x)|^{2}\,\mathrm {d} x<\infty } One may also speak of quadratic integrability over bounded intervals such as [ a , b ] {\displaystyle [a,b]} for a ≤ b {\displaystyle a\leq b} . [ 2 ] f : [ a , b ] → C square integrable on [ a , b ] ⟺ ∫ a b | f ( x ) | 2 d x < ∞ {\displaystyle f:[a,b]\to \mathbb {C} {\text{ square integrable on }}[a,b]\quad \iff \quad \int _{a}^{b}|f(x)|^{2}\,\mathrm {d} x<\infty } An equivalent definition is to say that the square of the function itself (rather than of its absolute value) is Lebesgue integrable . For this to be true, the integrals of the positive and negative portions of the real part must both be finite, as well as those for the imaginary part. The vector space of (equivalence classes of) square integrable functions (with respect to Lebesgue measure ) forms the L p {\displaystyle L^{p}} space with p = 2. {\displaystyle p=2.} Among the L p {\displaystyle L^{p}} spaces, the class of square integrable functions is unique in being compatible with an inner product , which allows notions like angle and orthogonality to be defined. Along with this inner product, the square integrable functions form a Hilbert space , since all of the L p {\displaystyle L^{p}} spaces are complete under their respective p {\displaystyle p} -norms . Often the term is used not to refer to a specific function, but to equivalence classes of functions that are equal almost everywhere . The square integrable functions (in the sense mentioned in which a "function" actually means an equivalence class of functions that are equal almost everywhere) form an inner product space with inner product given by ⟨ f , g ⟩ = ∫ A f ( x ) g ( x ) ¯ d x , {\displaystyle \langle f,g\rangle =\int _{A}f(x){\overline {g(x)}}\,\mathrm {d} x,} where f and g are square integrable functions, g ( x ) ¯ {\displaystyle {\overline {g(x)}}} is the complex conjugate of g ( x ) {\displaystyle g(x)} , and A is the set over which one integrates. Since | a | 2 = a ⋅ a ¯ {\displaystyle |a|^{2}=a\cdot {\overline {a}}} , square integrability is the same as saying ⟨ f , f ⟩ < ∞ . {\displaystyle \langle f,f\rangle <\infty .\,} It can be shown that square integrable functions form a complete metric space under the metric induced by the inner product defined above. A complete metric space is also called a Cauchy space , because sequences in such metric spaces converge if and only if they are Cauchy . A space that is complete under the metric induced by a norm is a Banach space . Therefore, the space of square integrable functions is a Banach space, under the metric induced by the norm, which in turn is induced by the inner product. As we have the additional property of the inner product, this is specifically a Hilbert space , because the space is complete under the metric induced by the inner product. This inner product space is conventionally denoted by ( L 2 , ⟨ ⋅ , ⋅ ⟩ 2 ) {\displaystyle \left(L_{2},\langle \cdot ,\cdot \rangle _{2}\right)} and many times abbreviated as L 2 . 
{\displaystyle L_{2}.} Note that L 2 {\displaystyle L_{2}} denotes the set of square integrable functions, but no selection of metric, norm or inner product is specified by this notation. The set, together with the specific inner product ⟨ ⋅ , ⋅ ⟩ 2 {\displaystyle \langle \cdot ,\cdot \rangle _{2}} , specifies the inner product space. The space of square integrable functions is the L p {\displaystyle L^{p}} space in which p = 2. {\displaystyle p=2.} The function 1 x n , {\displaystyle {\tfrac {1}{x^{n}}},} defined on ( 0 , 1 ) , {\displaystyle (0,1),} is in L 2 {\displaystyle L^{2}} for n < 1 2 {\displaystyle n<{\tfrac {1}{2}}} but not for n = 1 2 . {\displaystyle n={\tfrac {1}{2}}.} [ 1 ] The function 1 x , {\displaystyle {\tfrac {1}{x}},} defined on [ 1 , ∞ ) , {\displaystyle [1,\infty ),} is square-integrable. [ 3 ] Bounded functions, defined on [ 0 , 1 ] , {\displaystyle [0,1],} are square-integrable. These functions are also in L p , {\displaystyle L^{p},} for any value of p . {\displaystyle p.} [ 3 ] The function 1 x , {\displaystyle {\tfrac {1}{x}},} defined on [ 0 , 1 ] , {\displaystyle [0,1],} where the value at 0 {\displaystyle 0} is arbitrary, is not square-integrable. Furthermore, this function is not in L p {\displaystyle L^{p}} for any value of p {\displaystyle p} in [ 1 , ∞ ) . {\displaystyle [1,\infty ).} [ 3 ]
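To make these examples concrete, the following sketch (using SymPy, assumed to be available) evaluates the integrals of |f|² symbolically; the exponent 2/5 is just an arbitrary value of n < 1/2 chosen for illustration.

from sympy import symbols, integrate, oo, Rational

x = symbols('x', positive=True)

# f(x) = x**(-n) on (0, 1), so |f(x)|**2 = x**(-2*n)
print(integrate(x**(-2 * Rational(2, 5)), (x, 0, 1)))   # n = 2/5 < 1/2: finite (equals 5)
print(integrate(x**(-1), (x, 0, 1)))                    # n = 1/2: the integral diverges

# f(x) = 1/x on [1, oo), so |f(x)|**2 = x**(-2)
print(integrate(x**(-2), (x, 1, oo)))                   # finite (equals 1)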
https://en.wikipedia.org/wiki/Square-integrable_function
In mathematics , a square is the result of multiplying a number by itself. The verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2 , and is denoted by a superscript 2; for instance, the square of 3 may be written as 3 2 , which is the number 9. In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations x ^2 ( caret ) or x **2 may be used in place of x 2 . The adjective which corresponds to squaring is quadratic . The square of an integer may also be called a square number or a perfect square . In algebra , the operation of squaring is often generalized to polynomials , other expressions , or values in systems of mathematical values other than the numbers. For instance, the square of the linear polynomial x + 1 is the quadratic polynomial ( x + 1) 2 = x 2 + 2 x + 1 . One of the important properties of squaring, for numbers as well as in many other mathematical systems, is that (for all numbers x ), the square of x is the same as the square of its additive inverse − x . That is, the square function satisfies the identity x 2 = (− x ) 2 . This can also be expressed by saying that the square function is an even function . The squaring operation defines a real function called the square function or the squaring function . Its domain is the whole real line , and its image is the set of nonnegative real numbers. The square function preserves the order of positive numbers: larger numbers have larger squares. In other words, the square is a monotonic function on the interval [0, +∞) . On the negative numbers, numbers with greater absolute value have greater squares, so the square is a monotonically decreasing function on (−∞,0] . Hence, zero is the (global) minimum of the square function. The square x 2 of a number x is less than x (that is x 2 < x ) if and only if 0 < x < 1 , that is, if x belongs to the open interval (0,1) . This implies that the square of an integer is never less than the original number x . Every positive real number is the square of exactly two numbers, one of which is strictly positive and the other of which is strictly negative. Zero is the square of only one number, itself. For this reason, it is possible to define the square root function, which associates with a non-negative real number the non-negative number whose square is the original number. No square root can be taken of a negative number within the system of real numbers , because squares of all real numbers are non-negative . The lack of real square roots for the negative numbers can be used to expand the real number system to the complex numbers , by postulating the imaginary unit i , which is one of the square roots of −1. The property "every non-negative real number is a square" has been generalized to the notion of a real closed field , which is an ordered field such that every non-negative element is a square and every polynomial of odd degree has a root. The real closed fields cannot be distinguished from the field of real numbers by their algebraic properties: every property of the real numbers, which may be expressed in first-order logic (that is expressed by a formula in which the variables that are quantified by ∀ or ∃ represent elements, not sets), is true for every real closed field, and conversely every property of the first-order logic, which is true for a specific real closed field is also true for the real numbers. There are several major uses of the square function in geometry. 
The name of the square function shows its importance in the definition of the area : it comes from the fact that the area of a square with sides of length l is equal to l 2 . The area depends quadratically on the size: the area of a shape n times larger is n 2 times greater. This holds for areas in three dimensions as well as in the plane: for instance, the surface area of a sphere is proportional to the square of its radius, a fact that is manifested physically by the inverse-square law describing how the strength of physical forces such as gravity varies according to distance. The square function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law . Euclidean distance is not a smooth function : the three-dimensional graph of distance from a fixed point forms a cone , with a non-smooth point at the tip of the cone. However, the square of the distance (denoted d 2 or r 2 ), which has a paraboloid as its graph, is a smooth and analytic function . The dot product of a Euclidean vector with itself is equal to the square of its length: v ⋅ v = v 2 . This is further generalised to quadratic forms in linear spaces via the inner product . The inertia tensor in mechanics is an example of a quadratic form. It demonstrates a quadratic relation of the moment of inertia to the size ( length ). There are infinitely many Pythagorean triples , sets of three positive integers such that the sum of the squares of the first two equals the square of the third. Each of these triples gives the integer sides of a right triangle. The square function is defined in any field or ring . An element in the image of this function is called a square , and the inverse images of a square are called square roots . The notion of squaring is particularly important in the finite fields Z / p Z formed by the numbers modulo an odd prime number p . A non-zero element of this field is called a quadratic residue if it is a square in Z / p Z , and otherwise, it is called a quadratic non-residue. Zero, while a square, is not considered to be a quadratic residue. Every finite field of this type has exactly ( p − 1)/2 quadratic residues and exactly ( p − 1)/2 quadratic non-residues. The quadratic residues form a group under multiplication. The properties of quadratic residues are widely used in number theory . More generally, in rings, the square function may have different properties that are sometimes used to classify rings. Zero may be the square of some non-zero elements. A commutative ring such that the square of a non zero element is never zero is called a reduced ring . More generally, in a commutative ring, a radical ideal is an ideal I such that x 2 ∈ I {\displaystyle x^{2}\in I} implies x ∈ I {\displaystyle x\in I} . Both notions are important in algebraic geometry , because of Hilbert's Nullstellensatz . An element of a ring that is equal to its own square is called an idempotent . In any ring, 0 and 1 are idempotents. There are no other idempotents in fields and more generally in integral domains . However, the ring of the integers modulo n has 2 k idempotents, where k is the number of distinct prime factors of n . A commutative ring in which every element is equal to its square (every element is idempotent) is called a Boolean ring ; an example from computer science is the ring whose elements are binary numbers , with bitwise AND as the multiplication operation and bitwise XOR as the addition operation. In a totally ordered ring , x 2 ≥ 0 for any x . 
Moreover, x 2 = 0 if and only if x = 0 . In a supercommutative algebra where 2 is invertible, the square of any odd element equals zero. If A is a commutative semigroup , then one has ( x y ) 2 = x y x y = x x y y = x 2 y 2 {\displaystyle (xy)^{2}=xyxy=xxyy=x^{2}y^{2}} for all x and y in A . In the language of quadratic forms , this equality says that the square function is a "form permitting composition". In fact, the square function is the foundation upon which other quadratic forms are constructed which also permit composition. The procedure was introduced by L. E. Dickson to produce the octonions out of quaternions by doubling. The doubling method was formalized by A. A. Albert who started with the real number field R {\displaystyle \mathbb {R} } and the square function, doubling it to obtain the complex number field with quadratic form x 2 + y 2 , and then doubling again to obtain quaternions. The doubling procedure is called the Cayley–Dickson construction , and has been generalized to form algebras of dimension 2 n over a field F with involution. The square function z 2 is the "norm" of the composition algebra C {\displaystyle \mathbb {C} } , where the identity function forms a trivial involution to begin the Cayley–Dickson constructions leading to bicomplex, biquaternion, and bioctonion composition algebras. On complex numbers , the square function z → z 2 {\displaystyle z\to z^{2}} is a twofold cover in the sense that each non-zero complex number has exactly two square roots. The square of the absolute value of a complex number is called its absolute square , squared modulus , or squared magnitude . [ 1 ] [ better source needed ] It is the product of the complex number with its complex conjugate , and equals the sum of the squares of the real and imaginary parts of the complex number. The absolute square of a complex number is always a nonnegative real number, that is zero if and only if the complex number is zero. It is easier to compute than the absolute value (no square root), and is a smooth real-valued function . Because of these two properties, the absolute square is often preferred to the absolute value for explicit computations and when methods of mathematical analysis are involved (for example optimization or integration ). For complex vectors , the dot product can be defined involving the conjugate transpose , leading to the squared norm . Squares are ubiquitous in algebra and, more generally, in almost every branch of mathematics, and also in physics where many units are defined using squares and inverse squares: see below . Least squares is the standard method used with overdetermined systems . Squaring is used in statistics and probability theory in determining the standard deviation of a set of values, or a random variable . The deviation of each value x i from the mean x ¯ {\displaystyle {\overline {x}}} of the set is defined as the difference x i − x ¯ {\displaystyle x_{i}-{\overline {x}}} . These deviations are squared, then a mean is taken of the new set of numbers (each of which is nonnegative). This mean is the variance , and its square root is the standard deviation.
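The variance computation described in the last sentences can be spelled out in a few lines; the sketch below does it by hand on an arbitrary sample and cross-checks against Python's standard statistics module (population variance and standard deviation).

import math
import statistics

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mean = sum(values) / len(values)
squared_deviations = [(v - mean) ** 2 for v in values]
variance = sum(squared_deviations) / len(squared_deviations)    # mean of the squared deviations
std_dev = math.sqrt(variance)

print(mean, variance, std_dev)                                  # 5.0 4.0 2.0
print(statistics.pvariance(values), statistics.pstdev(values))  # same values from the library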
https://en.wikipedia.org/wiki/Square_(algebra)
In chemistry , the square antiprismatic molecular geometry describes the shape of compounds where eight atoms , groups of atoms, or ligands are arranged around a central atom, defining the vertices of a square antiprism . [ 1 ] This shape has D 4d symmetry and is one of the three common shapes for octacoordinate transition metal complexes, along with the dodecahedron and the bicapped trigonal prism . [ 2 ] [ 3 ] Like with other high coordination numbers, eight-coordinate compounds are often distorted from idealized geometries, as illustrated by the structure of Na 3 TaF 8 . In this case, with the small Na + ions , lattice forces are strong. With the diatomic cation NO + , the lattice forces are weaker, such as in (NO) 2 XeF 8 , which crystallizes with a more idealized square antiprismatic geometry. Square prismatic geometry (D 4h ) is much less common compared to the square antiprism. An example of a molecular species with square prismatic geometry (a slightly flattened cube) is octafluoroprotactinate(V), [PaF 8 ] 3– , as found in its sodium salt, Na 3 PaF 8 . [ 6 ] While local cubic 8-coordination is common in ionic lattices (e.g., Ca 2+ in CaF 2 ), and some 8-coordinate actinide complexes are approximately cubic, there are no reported examples of rigorously cubic 8-coordinate molecular species. A number of other rare geometries for 8-coordination are also known. [ 2 ]
https://en.wikipedia.org/wiki/Square_antiprismatic_molecular_geometry
In mathematics, specifically abstract algebra , a square class of a field F {\displaystyle F} is an element of the square class group , the quotient group F × / F × 2 {\displaystyle F^{\times }/F^{\times 2}} of the multiplicative group of nonzero elements in the field modulo the square elements of the field. Each square class is a subset of the nonzero elements (a coset of the multiplicative group) consisting of the elements of the form xy 2 where x is some particular fixed element and y ranges over all nonzero field elements. [ 1 ] For instance, if F = R {\displaystyle F=\mathbb {R} } , the field of real numbers , then F × {\displaystyle F^{\times }} is just the group of all nonzero real numbers (with the multiplication operation) and F × 2 {\displaystyle F^{\times 2}} is the subgroup of positive numbers (as every positive number has a real square root ). The quotient of these two groups is a group with two elements, corresponding to two cosets : the set of positive numbers and the set of negative numbers. Thus, the real numbers have two square classes, the positive numbers and the negative numbers. [ 1 ] Square classes are frequently studied in relation to the theory of quadratic forms . [ 2 ] The reason is that if V {\displaystyle V} is an F {\displaystyle F} - vector space and q : V → F {\displaystyle q:V\to F} is a quadratic form and v {\displaystyle v} is an element of V {\displaystyle V} such that q ( v ) = a ∈ F × {\displaystyle q(v)=a\in F^{\times }} , then for all u ∈ F × {\displaystyle u\in F^{\times }} , q ( u v ) = a u 2 {\displaystyle q(uv)=au^{2}} and thus it is sometimes more convenient to talk about the square classes which the quadratic form represents. Every element of the square class group is an involution . It follows that, if the number of square classes of a field is finite, it must be a power of two . [ 2 ]
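A tiny numerical illustration of these definitions, for the finite field F = Z/7Z (the prime 7 is an arbitrary choice): the sketch below computes the subgroup of nonzero squares, lists the cosets that make up F×/F×², and checks that every square class squares back to the identity class.

p = 7  # work in the field Z/pZ

units = set(range(1, p))
squares = {(a * a) % p for a in units}                 # the subgroup of nonzero squares

# square classes are the cosets x * (squares)
classes = {frozenset((x * s) % p for s in squares) for x in units}
print([sorted(c) for c in classes])                    # two classes for p = 7

# every square class is an involution: squaring any class gives back the class of squares
identity = frozenset(squares)
print(all(frozenset((x * x * s) % p for s in squares) == identity for x in units))   # True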
https://en.wikipedia.org/wiki/Square_class
In statistical mechanics , the two-dimensional square lattice Ising model is a simple lattice model of interacting magnetic spins . The model is notable for having nontrivial interactions, yet having an analytical solution . The model was solved by Lars Onsager for the special case that the external magnetic field H = 0. [ 1 ] An analytical solution for the general case for H ≠ 0 {\displaystyle H\neq 0} has yet to be found. Consider a 2D Ising model on a square lattice Λ {\displaystyle \Lambda } with N sites and periodic boundary conditions in both the horizontal and vertical directions, which effectively reduces the topology of the model to a torus . Generally, the horizontal coupling J {\displaystyle J} and the vertical coupling J ∗ {\displaystyle J^{*}} are not equal. With β = 1 k T {\displaystyle \textstyle \beta ={\frac {1}{kT}}} and absolute temperature T {\displaystyle T} and the Boltzmann constant k {\displaystyle k} , the partition function The critical temperature T c {\displaystyle T_{\text{c}}} can be obtained from the Kramers–Wannier duality relation. Denoting the free energy per site as F ( K , L ) {\displaystyle F(K,L)} , one has: where Assuming that there is only one critical line in the ( K , L ) plane, the duality relation implies that this is given by: For the isotropic case J = J ∗ {\displaystyle J=J^{*}} , one finds the famous relation for the critical temperature T c {\displaystyle T_{c}} Consider a configuration of spins { σ } {\displaystyle \{\sigma \}} on the square lattice Λ {\displaystyle \Lambda } . Let r and s denote the number of unlike neighbours in the vertical and horizontal directions respectively. Then the summand in Z N {\displaystyle Z_{N}} corresponding to { σ } {\displaystyle \{\sigma \}} is given by Construct a dual lattice Λ D {\displaystyle \Lambda _{D}} as depicted in the diagram. For every configuration { σ } {\displaystyle \{\sigma \}} , a polygon is associated to the lattice by drawing a line on the edge of the dual lattice if the spins separated by the edge are unlike. Since by traversing a vertex of Λ {\displaystyle \Lambda } the spins need to change an even number of times so that one arrives at the starting point with the same charge, every vertex of the dual lattice is connected to an even number of lines in the configuration, defining a polygon. This reduces the partition function to summing over all polygons in the dual lattice, where r and s are the number of horizontal and vertical lines in the polygon, with the factor of 2 arising from the inversion of spin configuration. At low temperatures, K , L approach infinity, so that as T → 0 , e − K , e − L → 0 {\displaystyle T\rightarrow 0,\ \ e^{-K},e^{-L}\rightarrow 0} , so that defines a low temperature expansion of Z N ( K , L ) {\displaystyle Z_{N}(K,L)} . Since σ σ ′ = ± 1 {\displaystyle \sigma \sigma '=\pm 1} one has Therefore where v = tanh ⁡ K {\displaystyle v=\tanh K} and w = tanh ⁡ L {\displaystyle w=\tanh L} . Since there are N horizontal and vertical edges, there are a total of 2 2 N {\displaystyle 2^{2N}} terms in the expansion. Every term corresponds to a configuration of lines of the lattice, by associating a line connecting i and j if the term v σ i σ j {\displaystyle v\sigma _{i}\sigma _{j}} (or w σ i σ j ) {\displaystyle w\sigma _{i}\sigma _{j})} is chosen in the product. 
Summing over the configurations, using ∑ σ = ± 1 σ n = 2 {\displaystyle \sum _{\sigma =\pm 1}\sigma ^{n}=2} for even n {\displaystyle n} and 0 {\displaystyle 0} for odd n {\displaystyle n} , shows that only configurations with an even number of lines at each vertex (polygons) will contribute to the partition function, giving Z N ( K , L ) = 2 N ( cosh ⁡ K cosh ⁡ L ) N ∑ P v r w s , {\displaystyle Z_{N}(K,L)=2^{N}(\cosh K\cosh L)^{N}\sum _{P}v^{r}w^{s},} where the sum is over all polygons in the lattice. Since tanh K , tanh L → 0 {\displaystyle \rightarrow 0} as T → ∞ {\displaystyle T\rightarrow \infty } , this gives the high temperature expansion of Z N ( K , L ) {\displaystyle Z_{N}(K,L)} . The two expansions can be related using the Kramers–Wannier duality . The free energy per site in the limit N → ∞ {\displaystyle N\to \infty } is given as follows. Define the parameter k {\displaystyle k} as k = 1 sinh ⁡ 2 K sinh ⁡ 2 L . {\displaystyle k={\frac {1}{\sinh 2K\sinh 2L}}.} The Helmholtz free energy per site F {\displaystyle F} can be expressed as For the isotropic case J = J ∗ {\displaystyle J=J^{*}} , from the above expression one finds for the internal energy per site: and the spontaneous magnetization is, for T < T c {\displaystyle T<T_{\text{c}}} , M = [ 1 − sinh − 4 ⁡ ( 2 β J ) ] 1 / 8 , {\displaystyle M=\left[1-\sinh ^{-4}(2\beta J)\right]^{1/8},} and M = 0 {\displaystyle M=0} for T ≥ T c {\displaystyle T\geq T_{\text{c}}} .
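For the isotropic square lattice the critical condition reduces to sinh(2J/(k_B T_c)) = 1, i.e. k_B T_c = 2J/ln(1 + √2) ≈ 2.269 J, and the spontaneous magnetization below T_c is the Onsager–Yang expression quoted above. The short sketch below evaluates both in units with J = k_B = 1; it is a plain numerical check of the formulas, not a simulation of the model.

import math

# isotropic 2D Ising model, in units where J = k_B = 1
T_c = 2.0 / math.log(1.0 + math.sqrt(2.0))       # ~2.2692, from sinh(2 / T_c) = 1
print(T_c, math.sinh(2.0 / T_c))                  # the second value should be exactly 1

def magnetization(T):
    # Spontaneous magnetization per site; zero at and above T_c.
    if T >= T_c:
        return 0.0
    return (1.0 - math.sinh(2.0 / T) ** -4) ** 0.125

for T in (1.0, 2.0, 2.2, 2.26, 2.5):
    print(T, magnetization(T))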
https://en.wikipedia.org/wiki/Square_lattice_Ising_model
In term logic (a branch of philosophical logic ), the square of opposition is a diagram representing the relations between the four basic categorical propositions . The origin of the square can be traced back to Aristotle 's tractate On Interpretation and its distinction between two oppositions: contradiction and contrariety . However, Aristotle did not draw any diagram; this was done several centuries later by Boethius . In traditional logic , a proposition (Latin: propositio ) is a spoken assertion ( oratio enunciativa ), not the meaning of an assertion, as in modern philosophy of language and logic . A categorical proposition is a simple proposition containing two terms, subject ( S ) and predicate ( P ), in which the predicate is either asserted or denied of the subject. Every categorical proposition can be reduced to one of four logical forms , named A , E , I , and O based on the Latin a ff i rmo (I affirm), for the affirmative propositions A and I , and n e g o (I deny), for the negative propositions E and O . These are: A , the universal affirmative ("Every S is P "); E , the universal negative ("No S is P "); I , the particular affirmative ("Some S is P "); and O , the particular negative ("Some S is not P "). * Proposition A may be stated as "All S is P ." However, Proposition E when stated correspondingly as "All S is not P ." is ambiguous [ 2 ] because it can be either an E or O proposition, thus requiring a context to determine the form; the standard form "No S is P " is unambiguous, so it is preferred. Proposition O also takes the forms "Sometimes S is not P ." and "A certain S is not P ." (literally the Latin 'Quoddam S nōn est P .') ** S x {\displaystyle Sx} in the modern forms means that a statement S {\displaystyle S} applies to an object x {\displaystyle x} . It may be simply interpreted as " x {\displaystyle x} is S {\displaystyle S} " in many cases. S x {\displaystyle Sx} can be also written as S ( x ) {\displaystyle S(x)} . Aristotle states (in chapters six and seven of the Peri hermēneias (Περὶ Ἑρμηνείας, Latin De Interpretatione , English 'On Interpretation')), that there are certain logical relationships between these four kinds of proposition. He says that to every affirmation there corresponds exactly one negation, and that every affirmation and its negation are 'opposed' such that always one of them must be true, and the other false. A pair of an affirmative statement and its negation is what he calls a ' contradiction ' (in medieval Latin, contradictio ). Examples of contradictories are 'every man is white' and 'not every man is white' (also read as 'some men are not white'), 'no man is white' and 'some man is white'. The relations below, contrary, subcontrary, subalternation, and superalternation, hold based on the traditional logic assumption that things stated as S (or things satisfying a statement S in modern logic) exist. If this assumption is taken out, then these relations do not hold. ' Contrary ' (medieval: contrariae ) statements are such that both statements cannot be true at the same time. Examples of these are the universal affirmative 'every man is white', and the universal negative 'no man is white'. These cannot be true at the same time. However, these are not contradictories because both of them may be false. For example, it is false that every man is white, since some men are not white. Yet it is also false that no man is white, since there are some white men. 
Since every statement has the contradictory opposite (its negation), and since a contradicting statement is true when its opposite is false, it follows that the opposites of contraries (which the medievals called subcontraries , subcontrariae ) can both be true, but they cannot both be false. Since subcontraries are negations of universal statements, they were called 'particular' statements by the medieval logicians. Another logical relation implied by this, though not mentioned explicitly by Aristotle, is 'alternation' ( alternatio ), consisting of ' subalternation ' and ' superalternation '. Subalternation is a relation between the particular statement and the universal statement of the same quality (affirmative or negative) such that the particular is implied by the universal, while superalternation is a relation between them such that the falsity of the universal (equivalently the negation of the universal) is implied by the falsity of the particular (equivalently the negation of the particular). [ 3 ] (The superalternation is the contrapositive of the subalternation.) In these relations, the particular is the subaltern of the universal, which is the particular's superaltern. For example, if 'every man is white' is true, its contrary 'no man is white' is false. Therefore, the contradictory 'some man is white' is true. Similarly the universal 'no man is white' implies the particular 'not every man is white'. [ 4 ] [ 5 ] In summary: A and E are contraries, I and O are subcontraries, A and O (and likewise E and I ) are contradictories, and I and O are the subalterns of A and E respectively. These relationships became the basis of a diagram originating with Boethius and used by medieval logicians to classify the logical relationships. The propositions are placed in the four corners of a square, and the relations represented as lines drawn between them, whence the name 'The Square of Opposition'. Therefore, the following cases can be made: [ 6 ] To memorize them, the medievals invented the following Latin rhyme: [ 7 ] It affirms that A and E can never both be true, and that I and O can never both be false, in each of the above cases. While the first two are universal statements, the couple I / O refers to particular ones. The Square of Oppositions was used for the categorical inferences described by the Greek philosopher Aristotle: conversion , obversion and contraposition . Each of those three types of categorical inference was applied to the four Boethian logical forms: A , E , I , and O . Subcontraries ( I and O ), which medieval logicians represented in the form 'quoddam A est B ' (some particular A is B ) and 'quoddam A non est B ' (some particular A is not B ) cannot both be false, since their universal contradictory statements (no A is B / every A is B ) cannot both be true. This leads to a difficulty first identified by Peter Abelard (1079 – 21 April 1142). 'Some A is B ' seems to imply 'something is A ', in other words, there exists something that is A . For example, 'Some man is white' seems to imply that at least one thing that exists is a man, namely the man who has to be white, if 'some man is white' is true. But, 'some man is not white' also implies that something that is a man exists, namely the man who is not white, if the statement 'some man is not white' is true. But Aristotelian logic requires that, necessarily, one of these statements (more generally 'some particular A is B ' and 'some particular A is not B ') is true, i.e., they cannot both be false. Therefore, since both statements imply the presence of at least one thing that is a man, the presence of a man or men follows. 
But, as Abelard points out in the Dialectica , surely men might not exist? [ 8 ] Abelard also points out that subcontraries containing subject terms denoting nothing, such as 'a man who is a stone', are both false. Terence Parsons (born 1939) argues that ancient philosophers did not experience the problem of existential import, as only the A (universal affirmative) and I (particular affirmative) forms had existential import. (If a statement includes a term such that the statement is false if the term has no instances, i.e., no thing associated with the term exists, then the statement is said to have existential import with respect to that term.) He goes on to cite the medieval philosopher William of Ockham, and points to Boethius ' translation of Aristotle's work as giving rise to the mistaken notion that the O form has existential import. In the 19th century, George Boole (November 1815 – 8 December 1864) argued for requiring existential import on both terms in particular claims ( I and O ), but allowing all terms of universal claims ( A and E ) to lack existential import. This decision made Venn diagrams particularly easy to use for term logic. The square of opposition, under this Boolean set of assumptions, is often called the modern square of opposition . In the modern square of opposition, A and O claims are contradictories, as are E and I , but all other forms of opposition cease to hold; there are no contraries, subcontraries, subalternations, and superalternations. Thus, from a modern point of view, it often makes sense to talk about 'the' opposition of a claim, rather than insisting, as older logicians did, that a claim has several different opposites, which are in different kinds of opposition with the claim. The Begriffsschrift of Gottlob Frege (8 November 1848 – 26 July 1925) also presents a square of oppositions, organised in an almost identical manner to the classical square, showing the contradictories, subalternates and contraries between four formulae constructed from universal quantification, negation and implication. The semiotic square of Algirdas Julien Greimas (9 March 1917 – 27 February 1992) was derived from Aristotle's work. The traditional square of opposition is now often compared with squares based on inner- and outer-negation. [ 14 ] The square of opposition has been extended to a logical hexagon which includes the relationships of six statements. It was discovered independently by both Augustin Sesmat (April 7, 1885 – December 12, 1957) and Robert Blanché (1898–1975). [ 15 ] It has been proven that both the square and the hexagon, followed by a " logical cube ", belong to a regular series of n-dimensional objects called "logical bi-simplexes of dimension n ." The pattern also extends beyond this. [ 16 ] The logical square, also called square of opposition or square of Apuleius , has its origin in the four marked sentences to be employed in syllogistic reasoning: "Every man is bad," the universal affirmative; its negation "Not every man is bad" (or "Some men are not bad"); "Some men are bad," the particular affirmative; and finally, the negation of the particular affirmative, "No man is bad". 
Robert Blanché published his Structures intellectuelles with Vrin in 1966, and since then many scholars have thought that the logical square (or square of opposition), which represents four values, should be replaced by the logical hexagon, which, by representing six values, is a more potent figure because it can explain more about logic and natural language. In modern mathematical logic , statements containing the words "all", "some" and "no" can be stated in terms of set theory if we assume a set-like domain of discourse. If the set of all A 's is labeled as s ( A ) {\displaystyle s(A)} and the set of all B 's as s ( B ) {\displaystyle s(B)} , then: "All A is B " means that s ( A ) ⊆ s ( B ) {\displaystyle s(A)\subseteq s(B)} ; "No A is B " means that s ( A ) ∩ s ( B ) = ∅ {\displaystyle s(A)\cap s(B)=\emptyset } ; "Some A is B " means that s ( A ) ∩ s ( B ) ≠ ∅ {\displaystyle s(A)\cap s(B)\neq \emptyset } ; and "Some A is not B " means that s ( A ) ∖ s ( B ) ≠ ∅ {\displaystyle s(A)\setminus s(B)\neq \emptyset } . By definition, the empty set ∅ {\displaystyle \emptyset } is a subset of all sets. From this fact it follows that, according to this mathematical convention, if there are no A 's, then the statements "All A is B " and "No A is B " are always true whereas the statements "Some A is B " and "Some A is not B " are always false. This also implies that AaB does not entail AiB, and some of the syllogisms mentioned above are not valid when there are no A 's ( s ( A ) = ∅ {\displaystyle s(A)=\emptyset } ).
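The effect of an empty subject class described above can be checked mechanically. The sketch below encodes the four forms as set conditions and confirms that, with s(A) empty, A and E both come out true while I and O both come out false, so the traditional contrary and subcontrary relations fail; the particular sets used are arbitrary examples.

def forms(A, B):
    # Truth values of the four categorical forms under the set-theoretic reading.
    return {
        "A: all A is B": A <= B,                 # subset
        "E: no A is B": not (A & B),             # empty intersection
        "I: some A is B": bool(A & B),
        "O: some A is not B": bool(A - B),
    }

men = {"socrates", "plato"}
white_things = {"socrates", "snow"}
print(forms(men, white_things))     # non-empty subject: the usual oppositions hold
print(forms(set(), white_things))   # empty subject: A and E true, I and O false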
https://en.wikipedia.org/wiki/Square_of_opposition
In chemistry , the square planar molecular geometry describes the stereochemistry (spatial arrangement of atoms) that is adopted by certain chemical compounds . As the name suggests, molecules of this geometry have their atoms positioned at the corners of a square around a central atom. Numerous compounds adopt this geometry, with examples especially common among transition metal complexes. The noble gas compound xenon tetrafluoride adopts this structure as predicted by VSEPR theory . The geometry is prevalent for transition metal complexes with d 8 configuration, which includes Rh(I), Ir(I), Pd(II), Pt(II), and Au(III). Notable examples include the anticancer drugs cisplatin , [PtCl 2 (NH 3 ) 2 ], and carboplatin . Many homogeneous catalysts are square planar in their resting state, such as Wilkinson's catalyst and Crabtree's catalyst . Other examples include Vaska's complex and Zeise's salt . Certain ligands (such as porphyrins ) stabilize this geometry. A general d-orbital splitting diagram for square planar (D 4h ) transition metal complexes can be derived from the general octahedral (O h ) splitting diagram , in which the d z 2 and the d x 2 − y 2 orbitals are degenerate and higher in energy than the degenerate set of d xy , d xz and d yz orbitals. When the two axial ligands are removed to generate a square planar geometry, the d z 2 orbital is driven lower in energy as electron-electron repulsion with ligands on the z -axis is no longer present. However, for purely σ-donating ligands the d z 2 orbital is still higher in energy than the d xy , d xz and d yz orbitals because of the torus shaped lobe of the d z 2 orbital. It bears electron density on the x - and y -axes and therefore interacts with the filled ligand orbitals. The d xy , d xz and d yz orbitals are generally presented as degenerate but they have to split into two different energy levels with respect to the irreducible representations of the point group D 4h . Their relative ordering depends on the nature of the particular complex. Furthermore, the splitting of d-orbitals is perturbed by π-donating ligands in contrast to octahedral complexes . In the square planar case strongly π-donating ligands can cause the d xz and d yz orbitals to be higher in energy than the d z 2 orbital, whereas in the octahedral case π-donating ligands only affect the magnitude of the d-orbital splitting and the relative ordering of the orbitals is conserved. [ 1 ]
https://en.wikipedia.org/wiki/Square_planar_molecular_geometry
Square pyramidal geometry describes the shape of certain chemical compounds with the formula ML 5 where L is a ligand . If the ligand atoms were connected, the resulting shape would be that of a pyramid with a square base . The point group symmetry involved is of type C 4v . The geometry is common for certain main group compounds that have a stereochemically -active lone pair , as described by VSEPR theory . Certain compounds crystallize in both the trigonal bipyramidal and the square pyramidal structures, notably [Ni(CN) 5 ] 3− . [ 1 ] As a trigonal bipyramidal molecule undergoes Berry pseudorotation , it proceeds via an intermediary stage with the square pyramidal geometry. Thus even though the geometry is rarely seen as the ground state, it is accessed by a low energy distortion from a trigonal bipyramid. Pseudorotation also occurs in square pyramidal molecules. Molecules with this geometry, as opposed to trigonal bipyramidal, exhibit heavier vibration. The mechanism used is similar to the Berry mechanism. Some molecular compounds that adopt square pyramidal geometry are XeOF 4 , [ 2 ] and various halogen pentafluorides (XF 5 , where X = Cl, Br, I). [ 3 ] [ 4 ] Complexes of vanadium (IV), such as vanadyl acetylacetonate , [VO(acac) 2 ], are square pyramidal (acac = acetylacetonate, the deprotonated anion of acetylacetone (2,4-pentanedione)).
https://en.wikipedia.org/wiki/Square_pyramidal_molecular_geometry
The square root of 2 (approximately 1.4142) is the positive real number that, when multiplied by itself or squared, equals the number 2 . It may be written as 2 {\displaystyle {\sqrt {2}}} or 2 1 / 2 {\displaystyle 2^{1/2}} . It is an algebraic number , and therefore not a transcendental number . Technically, it should be called the principal square root of 2, to distinguish it from the negative number with the same property. Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length ; this follows from the Pythagorean theorem . It was probably the first number known to be irrational . [ 1 ] The fraction ⁠ 99 / 70 ⁠ (≈ 1.4142 857) is sometimes used as a good rational approximation with a reasonably small denominator . Sequence A002193 in the On-Line Encyclopedia of Integer Sequences consists of the digits in the decimal expansion of the square root of 2, here truncated to 65 decimal places: [ 2 ] The Babylonian clay tablet YBC 7289 ( c. 1800 –1600 BC) gives an approximation of 2 {\displaystyle {\sqrt {2}}} in four sexagesimal figures, 1 24 51 10 , which is accurate to about six decimal digits, [ 3 ] and is the closest possible three-place sexagesimal representation of 2 {\displaystyle {\sqrt {2}}} , representing a margin of error of only –0.000042%: 1 + 24 60 + 51 60 2 + 10 60 3 = 30547 21600 = 1.41421296 … {\displaystyle 1+{\frac {24}{60}}+{\frac {51}{60^{2}}}+{\frac {10}{60^{3}}}={\frac {30547}{21600}}=1.41421296\ldots } Another early approximation is given in ancient Indian mathematical texts, the Sulbasutras ( c. 800 –200 BC), as follows: Increase the length [of the side] by its third and this third by its own fourth less the thirty-fourth part of that fourth. [ 4 ] That is, 1 + 1 3 + 1 3 ⋅ 4 − 1 3 ⋅ 4 ⋅ 34 = 577 408 ≈ 1.4142157. {\displaystyle 1+{\frac {1}{3}}+{\frac {1}{3\cdot 4}}-{\frac {1}{3\cdot 4\cdot 34}}={\frac {577}{408}}\approx 1.4142157.} This approximation, diverging from the actual value of 2 {\displaystyle {\sqrt {2}}} by approximately +0.00015%, is the seventh in a sequence of increasingly accurate approximations based on the sequence of Pell numbers , which can be derived from the continued fraction expansion of 2 {\displaystyle {\sqrt {2}}} . Despite having a smaller denominator, it is only slightly less accurate than the Babylonian approximation. Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational . Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as an official secret the discovery that the square root of two is irrational, and, according to legend, Hippasus was murdered for divulging it, though there is little substantial evidence for this in traditional historical practice. [ 5 ] [ 6 ] The square root of two is occasionally called Pythagoras's number [ 7 ] or Pythagoras's constant . In ancient Roman architecture , Vitruvius describes the use of the square root of 2 progression or ad quadratum technique. It consists basically in a geometric, rather than arithmetic, method to double a square, in which the diagonal of the original square is equal to the side of the resulting square. Vitruvius attributes the idea to Plato . The system was employed to build pavements by creating a square tangent to the corners of the original square at 45 degrees to it. The proportion was also used to design atria by giving them a length equal to a diagonal taken from a square, whose sides are equivalent to the intended atrium's width. [ 8 ] There are many algorithms for approximating 2 {\displaystyle {\sqrt {2}}} as a ratio of integers or as a decimal. 
The most common algorithm for this, which is used as a basis in many computers and calculators, is the Babylonian method [ 9 ] for computing square roots, an example of Newton's method for computing roots of arbitrary functions. It goes as follows: First, pick a guess, a 0 > 0 {\displaystyle a_{0}>0} ; the value of the guess affects only how many iterations are required to reach an approximation of a certain accuracy. Then, using that guess, iterate through the following recursive computation: a n + 1 = a n + 2 a n 2 = a n 2 + 1 a n . {\displaystyle a_{n+1}={\frac {a_{n}+{\frac {2}{a_{n}}}}{2}}={\frac {a_{n}}{2}}+{\frac {1}{a_{n}}}.} Each iteration improves the approximation, roughly doubling the number of correct digits. Starting with a 0 = 1 {\displaystyle a_{0}=1} , the subsequent iterations yield a 1 = 3/2 = 1.5, a 2 = 17/12 ≈ 1.4167, a 3 = 577/408 ≈ 1.4142157, and a 4 = 665,857/470,832 ≈ 1.4142135623747 (these iterates are checked numerically in the sketch following the irrationality proofs below). A simple rational approximation ⁠ 99 / 70 ⁠ (≈ 1.4142 857) is sometimes used. Despite having a denominator of only 70, it differs from the correct value by less than ⁠ 1 / 10,000 ⁠ (approx. +0.72 × 10 −4 ). The next two better rational approximations are ⁠ 140 / 99 ⁠ (≈ 1.414 1414...) with a marginally smaller error (approx. −0.72 × 10 −4 ), and ⁠ 239 / 169 ⁠ (≈ 1.4142 012) with an error of approx −0.12 × 10 −4 . The rational approximation of the square root of two derived from four iterations of the Babylonian method after starting with a 0 = 1 ( ⁠ 665,857 / 470,832 ⁠ ) is too large by about 1.6 × 10 −12 ; its square is ≈ 2.000 000 000 0045 . In 1997, the value of 2 {\displaystyle {\sqrt {2}}} was calculated to 137,438,953,444 decimal places by Yasumasa Kanada 's team. In February 2006, the record for the calculation of 2 {\displaystyle {\sqrt {2}}} was eclipsed with the use of a home computer. Shigeru Kondo calculated one trillion decimal places in 2010. [ 10 ] Other mathematical constants whose decimal expansions have been calculated to similarly high precision include π , e , and the golden ratio . [ 11 ] Such computations provide empirical evidence of whether these numbers are normal . This is a table of recent records in calculating the digits of 2 {\displaystyle {\sqrt {2}}} . [ 11 ] One proof of the number's irrationality is the following proof by infinite descent . It is also a proof of a negation by refutation : it proves the statement " 2 {\displaystyle {\sqrt {2}}} is not rational" by assuming that it is rational and then deriving a falsehood. Since we have derived a falsehood, the assumption (1) that 2 {\displaystyle {\sqrt {2}}} is a rational number must be false. This means that 2 {\displaystyle {\sqrt {2}}} is not a rational number; that is to say, 2 {\displaystyle {\sqrt {2}}} is irrational. This proof was hinted at by Aristotle , in his Analytica Priora , §I.23. [ 12 ] It appeared first as a full proof in Euclid 's Elements , as proposition 117 of Book X. However, since the early 19th century, historians have agreed that this proof is an interpolation and not attributable to Euclid. [ 13 ] Assume by way of contradiction that 2 {\displaystyle {\sqrt {2}}} were rational. Then we may write 2 + 1 = q p {\displaystyle {\sqrt {2}}+1={\frac {q}{p}}} as an irreducible fraction in lowest terms, with coprime positive integers q > p {\displaystyle q>p} . Since ( 2 − 1 ) ( 2 + 1 ) = 2 − 1 2 = 1 {\displaystyle ({\sqrt {2}}-1)({\sqrt {2}}+1)=2-1^{2}=1} , it follows that 2 − 1 {\displaystyle {\sqrt {2}}-1} can be expressed as the irreducible fraction p q {\displaystyle {\frac {p}{q}}} . However, since 2 − 1 {\displaystyle {\sqrt {2}}-1} and 2 + 1 {\displaystyle {\sqrt {2}}+1} differ by an integer, it follows that the denominators of their irreducible fraction representations must be the same, i.e. q = p {\displaystyle q=p} . 
This gives the desired contradiction. As with the proof by infinite descent, we obtain a 2 = 2 b 2 {\displaystyle a^{2}=2b^{2}} . Being the same quantity, each side has the same prime factorization by the fundamental theorem of arithmetic , and in particular, would have to have the factor 2 occur the same number of times. However, the factor 2 appears an odd number of times on the right, but an even number of times on the left—a contradiction. The irrationality of 2 {\displaystyle {\sqrt {2}}} also follows from the rational root theorem , which states that a rational root of a polynomial , if it exists, must be the quotient of a factor of the constant term and a factor of the leading coefficient . In the case of p ( x ) = x 2 − 2 {\displaystyle p(x)=x^{2}-2} , the only possible rational roots are ± 1 {\displaystyle \pm 1} and ± 2 {\displaystyle \pm 2} . As 2 {\displaystyle {\sqrt {2}}} is not equal to ± 1 {\displaystyle \pm 1} or ± 2 {\displaystyle \pm 2} , it follows that 2 {\displaystyle {\sqrt {2}}} is irrational. This application also invokes the integer root theorem, a stronger version of the rational root theorem for the case when p ( x ) {\displaystyle p(x)} is a monic polynomial with integer coefficients ; for such a polynomial, all roots are necessarily integers (which 2 {\displaystyle {\sqrt {2}}} is not, as 2 is not a perfect square) or irrational. The rational root theorem (or integer root theorem) may be used to show that any square root of any natural number that is not a perfect square is irrational. For other proofs that the square root of any non-square natural number is irrational, see Quadratic irrational number or Infinite descent . A simple proof is attributed to Stanley Tennenbaum when he was a student in the early 1950s. [ 14 ] [ 15 ] Assume that 2 = a / b {\displaystyle {\sqrt {2}}=a/b} , where a {\displaystyle a} and b {\displaystyle b} are coprime positive integers. Then a {\displaystyle a} and b {\displaystyle b} are the smallest positive integers for which a 2 = 2 b 2 {\displaystyle a^{2}=2b^{2}} . Geometrically, this implies that a square with side length a {\displaystyle a} will have an area equal to two squares of (lesser) side length b {\displaystyle b} . Call these squares A and B. We can draw these squares and compare their areas - the simplest way to do so is to fit the two B squares into the A squares. When we try to do so, we end up with the arrangement in Figure 1., in which the two B squares overlap in the middle and two uncovered areas are present in the top left and bottom right. In order to assert a 2 = 2 b 2 {\displaystyle a^{2}=2b^{2}} , we would need to show that the area of the overlap is equal to the area of the two missing areas, i.e. ( 2 b − a ) 2 {\displaystyle (2b-a)^{2}} = 2 ( a − b ) 2 {\displaystyle 2(a-b)^{2}} . In other terms, we may refer to the side lengths of the overlap and missing areas as p = 2 b − a {\displaystyle p=2b-a} and q = a − b {\displaystyle q=a-b} , respectively, and thus we have p 2 = 2 q 2 {\displaystyle p^{2}=2q^{2}} . But since we can see from the diagram that p < a {\displaystyle p<a} and q < b {\displaystyle q<b} , and we know that p {\displaystyle p} and q {\displaystyle q} are integers from their definitions in terms of a {\displaystyle a} and b {\displaystyle b} , this means that we are in violation of the original assumption that a {\displaystyle a} and b {\displaystyle b} are the smallest positive integers for which a 2 = 2 b 2 {\displaystyle a^{2}=2b^{2}} . 
Hence, even in assuming that a {\displaystyle a} and b {\displaystyle b} are the smallest positive integers for which a 2 = 2 b 2 {\displaystyle a^{2}=2b^{2}} , we may prove that there exists a smaller pair of integers p {\displaystyle p} and q {\displaystyle q} which satisfy the relation. This contradiction within the definition of a {\displaystyle a} and b {\displaystyle b} implies that they cannot exist, and thus 2 {\displaystyle {\sqrt {2}}} must be irrational. Tom M. Apostol made another geometric reductio ad absurdum argument showing that 2 {\displaystyle {\sqrt {2}}} is irrational. [ 16 ] It is also an example of proof by infinite descent. It makes use of classic compass and straightedge construction, proving the theorem by a method similar to that employed by ancient Greek geometers. It is essentially the same algebraic proof as Tennenbaum's proof, viewed geometrically in another way. Let △ ABC be a right isosceles triangle with hypotenuse length m and legs n as shown in Figure 2. By the Pythagorean theorem , m n = 2 {\displaystyle {\frac {m}{n}}={\sqrt {2}}} . Suppose m and n are integers. Let m : n be a ratio given in its lowest terms . Draw the arcs BD and CE with centre A . Join DE . It follows that AB = AD , AC = AE and ∠ BAC and ∠ DAE coincide. Therefore, the triangles ABC and ADE are congruent by SAS . Because ∠ EBF is a right angle and ∠ BEF is half a right angle, △ BEF is also a right isosceles triangle. Hence BE = m − n implies BF = m − n . By symmetry, DF = m − n , and △ FDC is also a right isosceles triangle. It also follows that FC = n − ( m − n ) = 2 n − m . Hence, there is an even smaller right isosceles triangle, with hypotenuse length 2 n − m and legs m − n . These values are integers even smaller than m and n and in the same ratio, contradicting the hypothesis that m : n is in lowest terms. Therefore, m and n cannot be both integers; hence, 2 {\displaystyle {\sqrt {2}}} is irrational. While the proofs by infinite descent are constructively valid when "irrational" is defined to mean "not rational", we can obtain a constructively stronger statement by using a positive definition of "irrational" as "quantifiably apart from every rational". Let a and b be positive integers such that 1< ⁠ a / b ⁠ < 3/2 (as 1<2< 9/4, 2 {\displaystyle {\sqrt {2}}} satisfies these bounds). Now 2 b 2 and a 2 cannot be equal, since the first has an odd number of factors 2 whereas the second has an even number of factors 2. Thus | 2 b 2 − a 2 | ≥ 1 . Multiplying the absolute difference | √ 2 − ⁠ a / b ⁠ | by b 2 ( √ 2 + ⁠ a / b ⁠ ) in the numerator and denominator, we get [ 17 ] | 2 − a b | = | 2 b 2 − a 2 | b 2 ( 2 + a b ) ≥ 1 b 2 ( 2 + a b ) ≥ 1 3 b 2 , {\displaystyle \left|{\sqrt {2}}-{\frac {a}{b}}\right|={\frac {|2b^{2}-a^{2}|}{b^{2}\left({\sqrt {2}}+{\frac {a}{b}}\right)}}\geq {\frac {1}{b^{2}\left({\sqrt {2}}+{\frac {a}{b}}\right)}}\geq {\frac {1}{3b^{2}}},} the latter inequality being true because it is assumed that 1< ⁠ a / b ⁠ < 3/2 , giving ⁠ a / b ⁠ + √ 2 ≤ 3 (otherwise the quantitative apartness can be trivially established). This gives a lower bound of ⁠ 1 / 3 b 2 ⁠ for the difference | √ 2 − ⁠ a / b ⁠ | , yielding a direct proof of irrationality in its constructively stronger form, not relying on the law of excluded middle . [ 18 ] This proof constructively exhibits an explicit discrepancy between 2 {\displaystyle {\sqrt {2}}} and any rational. This proof uses the following property of primitive Pythagorean triples : in a primitive Pythagorean triple (three positive integers x , y , z with no common factor and with x 2 + y 2 = z 2 {\displaystyle x^{2}+y^{2}=z^{2}} ), the hypotenuse z is never even. This lemma can be used to show that two identical perfect squares can never be added to produce another perfect square. Suppose the contrary that 2 {\displaystyle {\sqrt {2}}} is rational. Therefore, 2 = a b {\displaystyle {\sqrt {2}}={\frac {a}{b}}} with a and b coprime positive integers, so that b 2 + b 2 = a 2 . {\displaystyle b^{2}+b^{2}=a^{2}.} Here, ( b , b , a ) is a primitive Pythagorean triple, and from the lemma a is never even. However, this contradicts the equation 2 b 2 = a 2 which implies that a must be even. 
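Returning to the Babylonian iteration a_{n+1} = (a_n + 2/a_n)/2 described earlier, here is a minimal sketch using exact fractions, so the convergents 3/2, 17/12, 577/408 and 665,857/470,832 quoted above can be read off directly.

from fractions import Fraction

a = Fraction(1)                      # a0 = 1
for n in range(1, 5):
    a = (a + 2 / a) / 2              # a_{n+1} = (a_n + 2/a_n) / 2
    print(n, a, float(a))
print(float(a * a))                  # square of a4, approximately 2.0000000000045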
The multiplicative inverse (reciprocal) of the square root of two is a widely used constant , with the decimal value: [ 20 ] 0.70710678118654752440… It is often encountered in geometry and trigonometry because the unit vector , which makes a 45° angle with the axes in a plane , has the coordinates ( 2 2 , 2 2 ) . {\displaystyle \left({\tfrac {\sqrt {2}}{2}},{\tfrac {\sqrt {2}}{2}}\right).} Each coordinate satisfies 2 2 = 1 2 = 1 2 = cos ⁡ 45 ∘ = sin ⁡ 45 ∘ . {\displaystyle {\tfrac {\sqrt {2}}{2}}={\sqrt {\tfrac {1}{2}}}={\tfrac {1}{\sqrt {2}}}=\cos 45^{\circ }=\sin 45^{\circ }.} One interesting property of 2 {\displaystyle {\sqrt {2}}} is 1 2 − 1 = 2 + 1 {\displaystyle {\tfrac {1}{{\sqrt {2}}-1}}={\sqrt {2}}+1} since ( 2 + 1 ) ( 2 − 1 ) = 2 − 1 = 1. {\displaystyle \left({\sqrt {2}}+1\right)\left({\sqrt {2}}-1\right)=2-1=1.} This is related to the property of silver ratios . 2 {\displaystyle {\sqrt {2}}} can also be expressed in terms of copies of the imaginary unit i using only the square root and arithmetic operations , if the square root symbol is interpreted suitably for the complex numbers i and − i : i + − i = 2 . {\displaystyle {\sqrt {i}}+{\sqrt {-i}}={\sqrt {2}}.} 2 {\displaystyle {\sqrt {2}}} is also the only real number other than 1 whose infinite tetrate (i.e., infinite exponential tower) is equal to its square. In other words: if for c > 1 , x 1 = c and x n +1 = c x n for n > 1 , the limit of x n as n → ∞ will be called (if this limit exists) f ( c ) . Then 2 {\displaystyle {\sqrt {2}}} is the only number c > 1 for which f ( c ) = c 2 . Or symbolically: 2 2 2 ⋅ ⋅ ⋅ = 2. {\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\cdot ^{\cdot ^{\cdot }}}}}=2.} 2 {\displaystyle {\sqrt {2}}} appears in Viète's formula for π , which is related to the formula [ 21 ] Similar in appearance but with a finite number of terms, 2 {\displaystyle {\sqrt {2}}} appears in various trigonometric constants : [ 22 ] It is not known whether 2 {\displaystyle {\sqrt {2}}} is a normal number , which is a stronger property than irrationality, but statistical analyses of its binary expansion are consistent with the hypothesis that it is normal to base two . [ 23 ] The identity cos ⁠ π / 4 ⁠ = sin ⁠ π / 4 ⁠ = ⁠ 1 / √ 2 ⁠ , along with the infinite product representations for the sine and cosine , leads to products such as and or equivalently, The number can also be expressed by taking the Taylor series of a trigonometric function . For example, the series for cos ⁠ π / 4 ⁠ gives The Taylor series of √ 1 + x with x = 1 and using the double factorial n !! gives The convergence of this series can be accelerated with an Euler transform , producing It is not known whether 2 {\displaystyle {\sqrt {2}}} can be represented with a BBP-type formula . BBP-type formulas are known for π √ 2 and √ 2 ln (1+ √ 2 ) , however. [ 24 ] The number can be represented by an infinite series of Egyptian fractions , with denominators defined by 2 n th terms of a Fibonacci -like recurrence relation a ( n ) = 34 a ( n −1) − a ( n −2), a (0) = 0, a (1) = 6. [ 25 ] The square root of two has the following continued fraction representation: 2 = 1 + 1 2 + 1 2 + 1 2 + ⋱ . {\displaystyle {\sqrt {2}}=1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+\ddots }}}}}}.} The convergents ⁠ p / q ⁠ formed by truncating this representation form a sequence of fractions that approximate the square root of two to increasing accuracy, and that are described by the Pell numbers (i.e., p 2 − 2 q 2 = ±1 ). The first convergents are: ⁠ 1 / 1 ⁠ , ⁠ 3 / 2 ⁠ , ⁠ 7 / 5 ⁠ , ⁠ 17 / 12 ⁠ , ⁠ 41 / 29 ⁠ , ⁠ 99 / 70 ⁠ , ⁠ 239 / 169 ⁠ , ⁠ 577 / 408 ⁠ and the convergent following ⁠ p / q ⁠ is ⁠ p + 2 q / p + q ⁠ (a short numerical check of this recurrence is given at the end of this article). The convergent ⁠ p / q ⁠ differs from 2 {\displaystyle {\sqrt {2}}} by almost exactly ⁠ 1 / 2 √ 2 q 2 ⁠ , which follows from: The following nested square expressions converge to 2 {\textstyle {\sqrt {2}}} : In 1786, German physics professor Georg Christoph Lichtenberg [ 26 ] found that any sheet of paper whose long edge is 2 {\displaystyle {\sqrt {2}}} times longer than its short edge could be folded in half and aligned with its shorter side to produce a sheet with exactly the same proportions as the original. 
This ratio of lengths of the longer over the shorter side guarantees that cutting a sheet in half along a line results in the smaller sheets having the same (approximate) ratio as the original sheet. When Germany standardised paper sizes at the beginning of the 20th century, they used Lichtenberg's ratio to create the "A" series of paper sizes. [ 26 ] Today, the (approximate) aspect ratio of paper sizes under ISO 216 (A4, A0, etc.) is 1: 2 {\displaystyle {\sqrt {2}}} . Proof: Let S = {\displaystyle S=} shorter length and L = {\displaystyle L=} longer length of the sides of a sheet of paper, with Let R ′ = L ′ S ′ {\displaystyle R'={\frac {L'}{S'}}} be the analogous ratio of the halved sheet, then There are some interesting properties involving the square root of 2 in the physical sciences :
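As a numerical illustration of the halving property (using the ISO convention that A0 has an area of one square metre), the following sketch prints successive A-series sheet dimensions and confirms the long-to-short ratio stays at √2; the rounding to millimetres is for display only.

```python
def a_series(levels=5):
    """Print A0..A4 sheet sizes, starting from the ISO definition that A0 has
    an area of one square metre and sides in the ratio 1 : sqrt(2)."""
    long_side = 2 ** 0.25            # metres (~1.189)
    short_side = 2 ** -0.25          # metres (~0.841)
    for k in range(levels):
        print(f"A{k}: {long_side*1000:7.1f} mm x {short_side*1000:6.1f} mm, "
              f"long/short = {long_side/short_side:.6f}")
        # Halving across the long side swaps the roles of the two dimensions.
        long_side, short_side = short_side, long_side / 2

a_series()
```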
https://en.wikipedia.org/wiki/Square_root_of_2
The square root of 3 is the positive real number that, when multiplied by itself, gives the number 3 . It is denoted mathematically as 3 {\textstyle {\sqrt {3}}} or 3 1 / 2 {\displaystyle 3^{1/2}} . It is more precisely called the principal square root of 3 to distinguish it from the negative number with the same property. The square root of 3 is an irrational number . It is also known as Theodorus' constant , after Theodorus of Cyrene , who proved its irrationality. [ citation needed ] In 2013, its numerical value in decimal notation was computed to ten billion digits. [ 1 ] Its decimal expansion , written here to 65 decimal places, is given by OEIS : A002194 : The fraction 97 56 {\textstyle {\frac {97}{56}}} ( 1.732 142 857 ...) can be used as a good approximation. Despite having a denominator of only 56, it differs from the correct value by less than 1 10 , 000 {\textstyle {\frac {1}{10,000}}} (approximately 9.2 × 10 − 5 {\textstyle 9.2\times 10^{-5}} , with a relative error of 5 × 10 − 5 {\textstyle 5\times 10^{-5}} ). The rounded value of 1.732 is correct to within 0.01% of the actual value. [ citation needed ] The fraction 716 , 035 413 , 403 {\textstyle {\frac {716,035}{413,403}}} ( 1.732 050 807 56 ...) is accurate to 1 × 10 − 11 {\textstyle 1\times 10^{-11}} . [ citation needed ] Archimedes reported a range for its value: ( 1351 780 ) 2 > 3 > ( 265 153 ) 2 {\textstyle ({\frac {1351}{780}})^{2}>3>({\frac {265}{153}})^{2}} . [ 2 ] The lower limit 1351 780 {\textstyle {\frac {1351}{780}}} is an accurate approximation for 3 {\displaystyle {\sqrt {3}}} to 1 608 , 400 {\textstyle {\frac {1}{608,400}}} (six decimal places, relative error 3 × 10 − 7 {\textstyle 3\times 10^{-7}} ) and the upper limit 265 153 {\textstyle {\frac {265}{153}}} to 2 23 , 409 {\textstyle {\frac {2}{23,409}}} (four decimal places, relative error 1 × 10 − 5 {\textstyle 1\times 10^{-5}} ). It can be expressed as the simple continued fraction [1; 1, 2, 1, 2, 1, 2, 1, …] (sequence A040001 in the OEIS ). So it is true to say: then when n → ∞ {\displaystyle n\to \infty } : The square root of 3 can be found as the leg length of an equilateral triangle that encompasses a circle with a diameter of 1. If an equilateral triangle with sides of length 1 is cut into two equal halves, by bisecting an internal angle across to make a right angle with one side, the right angle triangle's hypotenuse is length one, and the sides are of length 1 2 {\textstyle {\frac {1}{2}}} and 3 2 {\textstyle {\frac {\sqrt {3}}{2}}} . From this, tan ⁡ 60 ∘ = 3 {\textstyle \tan {60^{\circ }}={\sqrt {3}}} , sin ⁡ 60 ∘ = 3 2 {\textstyle \sin {60^{\circ }}={\frac {\sqrt {3}}{2}}} , and cos ⁡ 30 ∘ = 3 2 {\textstyle \cos {30^{\circ }}={\frac {\sqrt {3}}{2}}} . The square root of 3 also appears in algebraic expressions for various other trigonometric constants , including [ 3 ] the sines of 3°, 12°, 15°, 21°, 24°, 33°, 39°, 48°, 51°, 57°, 66°, 69°, 75°, 78°, 84°, and 87°. It is the distance between parallel sides of a regular hexagon with sides of length 1. It is the length of the space diagonal of a unit cube . The vesica piscis has a major axis to minor axis ratio equal to 1 : 3 {\displaystyle 1:{\sqrt {3}}} . This can be shown by constructing two equilateral triangles within it. In power engineering , the voltage between two phases in a three-phase system equals 3 {\textstyle {\sqrt {3}}} times the line to neutral voltage. 
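The quoted rational approximations, including Archimedes' bounds, can be checked directly; the following sketch prints their absolute and relative errors.

```python
from math import sqrt
from fractions import Fraction

approximations = [Fraction(97, 56), Fraction(716035, 413403),
                  Fraction(1351, 780), Fraction(265, 153)]
for frac in approximations:
    err = abs(sqrt(3) - float(frac))
    print(f"{frac} = {float(frac):.12f}   "
          f"|error| = {err:.2e}   relative error = {err/sqrt(3):.2e}")
```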
This is because any two phases are 120° apart, and two points on a circle 120 degrees apart are separated by 3 {\textstyle {\sqrt {3}}} times the radius (see geometry examples above). [ citation needed ] It is known that most roots of the n th derivatives of J ν ( n ) ( x ) {\displaystyle J_{\nu }^{(n)}(x)} (where n < 18 and J ν ( x ) {\displaystyle J_{\nu }(x)} is the Bessel function of the first kind of order ν {\displaystyle \nu } ) are transcendental . The only exceptions are the numbers ± 3 {\displaystyle \pm {\sqrt {3}}} , which are the algebraic roots of both J 1 ( 3 ) ( x ) {\displaystyle J_{1}^{(3)}(x)} and J 0 ( 4 ) ( x ) {\displaystyle J_{0}^{(4)}(x)} . [ 4 ] [ clarification needed ]
https://en.wikipedia.org/wiki/Square_root_of_3
The square root of 5 is the positive real number that, when multiplied by itself, gives the prime number 5 . It is more precisely called the principal square root of 5 , to distinguish it from the negative number with the same property. This number appears in the fractional expression for the golden ratio . It can be denoted in surd form as 5 {\textstyle {\sqrt {5}}} . It is an irrational algebraic number . [ 1 ] The first sixty significant digits of its decimal expansion are: which can be rounded down to 2.236 to within 99.99% accuracy. The approximation ⁠ 161 / 72 ⁠ (≈ 2.23611) for the square root of five can be used. Despite having a denominator of only 72, it differs from the correct value by less than ⁠ 1 / 10,000 ⁠ (approx. 4.3 × 10 −5 ). As of January 2022, the numerical value in decimal of the square root of 5 has been computed to at least 2,250,000,000,000 digits. [ 2 ] The square root of 5 can be expressed as the simple continued fraction The successive partial evaluations of the continued fraction, which are called its convergents , approach 5 {\displaystyle {\sqrt {5}}} : Their numerators are 2, 9, 38, 161, … (sequence A001077 in the OEIS ), and their denominators are 1, 4, 17, 72, … (sequence A001076 in the OEIS ). Each of these is a best rational approximation of 5 {\displaystyle {\sqrt {5}}} ; in other words, it is closer to 5 {\displaystyle {\sqrt {5}}} than any rational number with a smaller denominator. The convergents, expressed as ⁠ x / y ⁠ , satisfy alternately the Pell's equations [ 3 ] When 5 {\displaystyle {\sqrt {5}}} is approximated with the Babylonian method , starting with x 0 = 2 and using x n +1 = ⁠ 1 / 2 ⁠ ( x n + ⁠ 5 / x n ⁠ ) , the n th approximant x n is equal to the 2 n th convergent of the continued fraction: The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial x 2 − 5 {\displaystyle x^{2}-5} . The Newton's method update, x n + 1 = x n − f ( x n ) / f ′ ( x n ) {\displaystyle x_{n+1}=x_{n}-f(x_{n})/f'(x_{n})} , is equal to ( x n + 5 / x n ) / 2 {\displaystyle (x_{n}+5/x_{n})/2} when f ( x ) = x 2 − 5 {\displaystyle f(x)=x^{2}-5} . The method therefore converges quadratically . The golden ratio φ is the arithmetic mean of 1 and 5 {\displaystyle {\sqrt {5}}} . [ 4 ] The algebraic relationship between 5 {\displaystyle {\sqrt {5}}} , the golden ratio and the conjugate of the golden ratio ( Φ = − ⁠ 1 / φ ⁠ = 1 − φ ) is expressed in the following formulae: (See the section below for their geometrical interpretation as decompositions of a 5 {\displaystyle {\sqrt {5}}} rectangle .) 
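The claim that the n-th Babylonian iterate for √5 equals the 2ⁿ-th convergent (2/1, 9/4, 161/72, ...) can be seen by running the iteration in exact rational arithmetic, as in this sketch (the number of steps is arbitrary).

```python
from fractions import Fraction

def babylonian_sqrt5(steps=4):
    """Babylonian (Newton) iteration for sqrt(5) in exact rational arithmetic,
    starting from x0 = 2; the n-th iterate is the 2**n-th continued-fraction convergent."""
    x = Fraction(2)
    for n in range(steps + 1):
        print(f"x_{n} = {x} = {float(x):.12f}")
        x = (x + 5 / x) / 2        # Newton step for f(x) = x**2 - 5

babylonian_sqrt5()
```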
5 {\displaystyle {\sqrt {5}}} then naturally figures in the closed form expression for the Fibonacci numbers , a formula which is usually written in terms of the golden ratio: The quotient of 5 {\displaystyle {\sqrt {5}}} and φ (or the product of 5 {\displaystyle {\sqrt {5}}} and Φ ), and its reciprocal , provide an interesting pattern of continued fractions and are related to the ratios between the Fibonacci numbers and the Lucas numbers : [ 5 ] The series of convergents to these values feature the series of Fibonacci numbers and the series of Lucas numbers as numerators and denominators, and vice versa, respectively: In fact, the limit of the quotient of the n t h {\displaystyle n^{th}} Lucas number L n {\displaystyle L_{n}} and the n t h {\displaystyle n^{th}} Fibonacci number F n {\displaystyle F_{n}} is directly equal to the square root of 5 {\displaystyle 5} : Geometrically , 5 {\displaystyle {\sqrt {5}}} corresponds to the diagonal of a rectangle whose sides are of length 1 and 2 , as is evident from the Pythagorean theorem . Such a rectangle can be obtained by halving a square, or by placing two equal squares side by side. This can be used to subdivide a square grid into a tilted square grid with five times as many squares, forming the basis for a subdivision surface . [ 6 ] Together with the algebraic relationship between 5 {\displaystyle {\sqrt {5}}} and φ , this forms the basis for the geometrical construction of a golden rectangle from a square, and for the construction of a regular pentagon given its side (since the side-to-diagonal ratio in a regular pentagon is φ ). Since two adjacent faces of a cube would unfold into a 1:2 rectangle, the ratio between the length of the cube's edge and the shortest distance from one of its vertices to the opposite one, when traversing the cube surface , is 5 {\displaystyle {\sqrt {5}}} . By contrast, the shortest distance when traversing through the inside of the cube corresponds to the length of the cube diagonal, which is the square root of three times the edge. [ 7 ] A rectangle with side proportions 1: 5 {\displaystyle {\sqrt {5}}} is called a root-five rectangle and is part of the series of root rectangles, a subset of dynamic rectangles , which are based on 1 {\displaystyle {\sqrt {1}}} (= 1), 2 {\displaystyle {\sqrt {2}}} , 3 {\displaystyle {\sqrt {3}}} , 4 {\displaystyle {\sqrt {4}}} (= 2), 5 {\displaystyle {\sqrt {5}}} ... and successively constructed using the diagonal of the previous root rectangle, starting from a square. [ 8 ] A root-5 rectangle is particularly notable in that it can be split into a square and two equal golden rectangles (of dimensions Φ × 1 ), or into two golden rectangles of different sizes (of dimensions Φ × 1 and 1 × φ ). [ 9 ] It can also be decomposed as the union of two equal golden rectangles (of dimensions 1 × φ ) whose intersection forms a square. All this is can be seen as the geometric interpretation of the algebraic relationships between 5 {\displaystyle {\sqrt {5}}} , φ and Φ mentioned above. The root-5 rectangle can be constructed from a 1:2 rectangle (the root-4 rectangle), or directly from a square in a manner similar to the one for the golden rectangle shown in the illustration, but extending the arc of length 5 / 2 {\displaystyle {\sqrt {5}}/2} to both sides. 
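The limit of Lₙ/Fₙ can be illustrated numerically; this sketch iterates both sequences from their standard seeds and prints the quotient alongside √5.

```python
from math import sqrt

def lucas_over_fibonacci(n_max=30):
    """Print the quotient L_n / F_n of Lucas and Fibonacci numbers, which tends to sqrt(5)."""
    f_prev, f = 0, 1        # F_0, F_1
    l_prev, l = 2, 1        # L_0, L_1
    for n in range(1, n_max + 1):
        if n % 10 == 0:
            print(f"n = {n:3d}   L_n/F_n = {l/f:.12f}   sqrt(5) = {sqrt(5):.12f}")
        f_prev, f = f, f + f_prev
        l_prev, l = l, l + l_prev

lucas_over_fibonacci()
```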
Like 2 {\displaystyle {\sqrt {2}}} and 3 {\displaystyle {\sqrt {3}}} , the square root of 5 appears extensively in the formulae for exact trigonometric constants , including in the sines and cosines of every angle whose measure in degrees is divisible by 3 but not by 15. [ 10 ] The simplest of these are As such, the computation of its value is important for generating trigonometric tables . Since 5 {\displaystyle {\sqrt {5}}} is geometrically linked to half-square rectangles and to pentagons, it also appears frequently in formulae for the geometric properties of figures derived from them, such as in the formula for the volume of a dodecahedron . [ 7 ] Hurwitz's theorem in Diophantine approximations states that every irrational number x can be approximated by infinitely many rational numbers ⁠ m / n ⁠ in lowest terms in such a way that and that 5 {\displaystyle {\sqrt {5}}} is best possible, in the sense that for any larger constant than 5 {\displaystyle {\sqrt {5}}} , there are some irrational numbers x for which only finitely many such approximations exist. [ 11 ] Closely related to this is the theorem [ 12 ] that of any three consecutive convergents ⁠ p i / q i ⁠ , ⁠ p i +1 / q i +1 ⁠ , ⁠ p i +2 / q i +2 ⁠ , of a number α , at least one of the three inequalities holds: And the 5 {\displaystyle {\sqrt {5}}} in the denominator is the best bound possible since the convergents of the golden ratio make the difference on the left-hand side arbitrarily close to the value on the right-hand side. In particular, one cannot obtain a tighter bound by considering sequences of four or more consecutive convergents. [ 12 ] The ring Z [ − 5 ] {\displaystyle \mathbb {Z} [{\sqrt {-5}}]} contains numbers of the form a + b − 5 {\displaystyle a+b{\sqrt {-5}}} , where a and b are integers and − 5 {\displaystyle {\sqrt {-5}}} is the imaginary number i 5 {\displaystyle i{\sqrt {5}}} . This ring is a frequently cited example of an integral domain that is not a unique factorization domain . [ 13 ] The number 6 has two inequivalent factorizations within this ring: On the other hand, the real quadratic integer ring Z [ 5 + 1 2 ] {\displaystyle \mathbb {Z} [{\tfrac {{\sqrt {5}}+1}{2}}]} , adjoining the Golden ratio ϕ = 5 + 1 2 {\displaystyle \phi ={\tfrac {{\sqrt {5}}+1}{2}}} , was shown to be Euclidean , and hence a unique factorization domain, by Dedekind. The field Q [ − 5 ] , {\displaystyle \mathbb {Q} [{\sqrt {-5}}],} like any other quadratic field , is an abelian extension of the rational numbers. The Kronecker–Weber theorem therefore guarantees that the square root of five can be written as a rational linear combination of roots of unity : The square root of 5 appears in various identities discovered by Srinivasa Ramanujan involving continued fractions . [ 14 ] [ 15 ] For example, this case of the Rogers–Ramanujan continued fraction :
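The two factorizations of 6 can be checked with a few lines of arithmetic in ℤ[√−5]; the norm a² + 5b² shows why none of the four factors splits further, since no element of the ring has norm 2 or 3. The helper names below are illustrative.

```python
def norm(a, b):
    """Norm of a + b*sqrt(-5): N = a**2 + 5*b**2."""
    return a * a + 5 * b * b

def mul(x, y):
    """Multiply (a + b*sqrt(-5)) * (c + d*sqrt(-5)) in Z[sqrt(-5)]."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# 6 = 2 * 3 and 6 = (1 + sqrt(-5)) * (1 - sqrt(-5)) are genuinely different factorizations.
print(mul((2, 0), (3, 0)))        # (6, 0)
print(mul((1, 1), (1, -1)))       # (6, 0)
# Norms 4, 9, 6, 6: since no ring element has norm 2 or 3, none of these factors
# can be decomposed further, so the two factorizations of 6 are irreducible and distinct.
print([norm(*z) for z in [(2, 0), (3, 0), (1, 1), (1, -1)]])
```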
https://en.wikipedia.org/wiki/Square_root_of_5
The square root of 6 is the positive real number that, when multiplied by itself, gives the natural number 6 . It is more precisely called the principal square root of 6 , to distinguish it from the negative number with the same property. This number appears in numerous geometric and number-theoretic contexts. It can be denoted in surd form as [ 1 ] 6 {\textstyle {\sqrt {6}}} and in exponent form as 6 1 2 {\textstyle 6^{\frac {1}{2}}} . It is an irrational algebraic number . [ 2 ] The first sixty significant digits of its decimal expansion are: which can be rounded up to 2.45 to within about 99.98% accuracy (about 1 part in 4800); that is, it differs from the correct value by about ⁠ 1 / 2,000 ⁠ . It takes two more digits (2.4495) to reduce the error by about half. The approximation ⁠ 218 / 89 ⁠ (≈ 2.449438...) is nearly ten times better: despite having a denominator of only 89, it differs from the correct value by less than ⁠ 1 / 20,000 ⁠ , or less than one part in 47,000. Since 6 is the product of 2 and 3, the square root of 6 is the geometric mean of 2 and 3, and is the product of the square root of 2 and the square root of 3 , both of which are irrational algebraic numbers. NASA has published more than a million decimal digits of the square root of six. [ 4 ] The square root of 6 can be expressed as the simple continued fraction The successive partial evaluations of the continued fraction, which are called its convergents , approach 6 {\displaystyle {\sqrt {6}}} : Their numerators are 2, 5, 22, 49, 218, 485, 2158, 4801, 21362, 47525, 211462, …(sequence A041006 in the OEIS ), and their denominators are 1, 2, 9, 20, 89, 198, 881, 1960, 8721, 19402, 86329, …(sequence A041007 in the OEIS ). [ 5 ] Each convergent is a best rational approximation of 6 {\displaystyle {\sqrt {6}}} ; in other words, it is closer to 6 {\displaystyle {\sqrt {6}}} than any rational with a smaller denominator. Decimal equivalents improve linearly, at a rate of nearly one digit per convergent: The convergents, expressed as ⁠ x / y ⁠ , satisfy alternately the Pell's equations [ 5 ] When 6 {\displaystyle {\sqrt {6}}} is approximated with the Babylonian method , starting with x 0 = 2 and using x n +1 = ⁠ 1 / 2 ⁠ ( x n + ⁠ 6 / x n ⁠ ) , the n th approximant x n is equal to the 2 n th convergent of the continued fraction: The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial x 2 − 6 {\displaystyle x^{2}-6} . The Newton's method update, x n + 1 = x n − f ( x n ) / f ′ ( x n ) , {\displaystyle x_{n+1}=x_{n}-f(x_{n})/f'(x_{n}),} is equal to ( x n + 6 / x n ) / 2 {\displaystyle (x_{n}+6/x_{n})/2} when f ( x ) = x 2 − 6 {\displaystyle f(x)=x^{2}-6} . The method therefore converges quadratically . In plane geometry , the square root of 6 can be constructed via a sequence of dynamic rectangles , as illustrated here. [ 6 ] [ 7 ] [ 8 ] In solid geometry , the square root of 6 appears as the longest distances between corners ( vertices ) of the double cube, as illustrated above. The square roots of all lower natural numbers appear as the distances between other vertex pairs in the double cube (including the vertices of the included two cubes). [ 8 ] The edge length of a cube with total surface area of 1 is 6 6 {\displaystyle {\frac {\sqrt {6}}{6}}} or the reciprocal square root of 6. The edge lengths of a regular tetrahedron ( t ), a regular octahedron ( o ), and a cube ( c ) of equal total surface areas satisfy t ⋅ o c 2 = 6 {\displaystyle {\frac {t\cdot o}{c^{2}}}={\sqrt {6}}} . 
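A short sketch generating the convergents of √6 from its continued fraction [2; 2, 4, 2, 4, ...] confirms the alternating values −2 and +1 of x² − 6y² mentioned above (the number of terms is arbitrary).

```python
def sqrt6_convergents(n=8):
    """Convergents of sqrt(6) from its continued fraction [2; 2, 4, 2, 4, ...],
    checking the alternating Pell-type equations x**2 - 6*y**2 = -2 and +1."""
    terms = [2] + [2 if i % 2 == 0 else 4 for i in range(n - 1)]
    p_prev, p, q_prev, q = 1, terms[0], 0, 1
    print(f"{p}/{q}   x^2 - 6y^2 = {p*p - 6*q*q}")
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p      # standard convergent recurrence
        q, q_prev = a * q + q_prev, q
        print(f"{p}/{q}   x^2 - 6y^2 = {p*p - 6*q*q}")

sqrt6_convergents()
```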
[ 3 ] [ 9 ] The edge length of a regular octahedron is the square root of 6 times the radius of an inscribed sphere (that is, the distance from the center of the solid to the center of each face). [ 10 ] The square root of 6 appears in various other geometry contexts, such as the side length 6 + 2 2 {\displaystyle {\frac {{\sqrt {6}}+{\sqrt {2}}}{2}}} for the square enclosing an equilateral triangle of side 2 (see figure). The square root of 6, with the square root of 2 added or subtracted, appears in several exact trigonometric values for angles at multiples of 15 degrees ( π / 12 {\displaystyle \pi /12} radians). [ 11 ] Villard de Honnecourt 's 13th century construction of a Gothic "fifth-point arch" with circular arcs of radius 5 has a height of twice the square root of 6, as illustrated here. [ 12 ] [ 13 ]
https://en.wikipedia.org/wiki/Square_root_of_6
The square root of 7 is the positive real number that, when multiplied by itself, gives the prime number 7 . It is more precisely called the principal square root of 7 , to distinguish it from the negative number with the same property. This number appears in various geometric and number-theoretic contexts. It can be denoted in surd form as: [ 1 ] and in exponent form as: It is an irrational algebraic number . The first sixty significant digits of its decimal expansion are: which can be rounded up to 2.646 to within about 99.99% accuracy (about 1 part in 10000); that is, it differs from the correct value by about ⁠ 1 / 4,000 ⁠ . The approximation ⁠ 127 / 48 ⁠ (≈ 2.645833...) is better: despite having a denominator of only 48, it differs from the correct value by less than ⁠ 1 / 12,000 ⁠ , or less than one part in 33,000. More than a million decimal digits of the square root of seven have been published. [ 3 ] The extraction of decimal-fraction approximations to square roots by various methods has used the square root of 7 as an example or exercise in textbooks, for hundreds of years. Different numbers of digits after the decimal point are shown: 5 in 1773 [ 4 ] and 1852, [ 5 ] 3 in 1835, [ 6 ] 6 in 1808, [ 7 ] and 7 in 1797. [ 8 ] An extraction by Newton's method (approximately) was illustrated in 1922, concluding that it is 2.646 "to the nearest thousandth". [ 9 ] For a family of good rational approximations, the square root of 7 can be expressed as the continued fraction The successive partial evaluations of the continued fraction, which are called its convergents , approach 7 {\displaystyle {\sqrt {7}}} : Their numerators are 2, 3, 5, 8, 37, 45, 82, 127, 590, 717, 1307, 2024, 9403, 11427, 20830, 32257…(sequence A041008 in the OEIS ) , and their denominators are 1, 1, 2, 3, 14, 17, 31, 48, 223, 271, 494, 765, 3554, 4319, 7873, 12192,…(sequence A041009 in the OEIS ). Each convergent is a best rational approximation of 7 {\displaystyle {\sqrt {7}}} ; in other words, it is closer to 7 {\displaystyle {\sqrt {7}}} than any rational with a smaller denominator. Approximate decimal equivalents improve linearly (number of digits proportional to convergent number) at a rate of less than one digit per step: Every fourth convergent, starting with ⁠ 8 / 3 ⁠ , expressed as ⁠ x / y ⁠ , satisfies the Pell's equation [ 10 ] When 7 {\displaystyle {\sqrt {7}}} is approximated with the Babylonian method , starting with x 1 = 3 and using x n +1 = ⁠ 1 / 2 ⁠ ( x n + ⁠ 7 / x n ⁠ ) , the n th approximant x n is equal to the 2 n th convergent of the continued fraction: All but the first of these satisfy the Pell's equation above. The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial x 2 − 7 {\displaystyle x^{2}-7} . The Newton's method update, x n + 1 = x n − f ( x n ) / f ′ ( x n ) , {\displaystyle x_{n+1}=x_{n}-f(x_{n})/f'(x_{n}),} is equal to ( x n + 7 / x n ) / 2 {\displaystyle (x_{n}+7/x_{n})/2} when f ( x ) = x 2 − 7 {\displaystyle f(x)=x^{2}-7} . The method therefore converges quadratically (number of accurate decimal digits proportional to the square of the number of Newton or Babylonian steps). In plane geometry , the square root of 7 can be constructed via a sequence of dynamic rectangles , that is, as the largest diagonal of those rectangles illustrated here. [ 11 ] [ 12 ] [ 13 ] The minimal enclosing rectangle of an equilateral triangle of edge length 2 has a diagonal of the square root of 7. 
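Similarly, the convergents of √7 can be generated from the periodic pattern 1, 1, 1, 4, and the sketch below shows that x² − 7y² equals 1 exactly at every fourth convergent starting with 8/3.

```python
def sqrt7_convergents(n=12):
    """Convergents of sqrt(7) from its continued fraction [2; 1, 1, 1, 4, ...];
    every fourth one, starting with 8/3, satisfies the Pell equation x**2 - 7*y**2 = 1."""
    pattern = [1, 1, 1, 4]
    terms = [2] + [pattern[i % 4] for i in range(n - 1)]
    p_prev, p, q_prev, q = 1, terms[0], 0, 1
    for i, a in enumerate(terms[1:], start=1):
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        print(f"convergent {i}: {p}/{q}   x^2 - 7y^2 = {p*p - 7*q*q}")

sqrt7_convergents()
```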
[ 14 ] Due to the Pythagorean theorem and Legendre's three-square theorem , 7 {\displaystyle {\sqrt {7}}} is the smallest square root of a natural number that cannot be the distance between any two points of a cubic integer lattice (or equivalently, the length of the space diagonal of a rectangular cuboid with integer side lengths). 15 {\displaystyle {\sqrt {15}}} is the next smallest such number. [ 15 ] On the reverse of the current US one-dollar bill , the "large inner box" has a length-to-width ratio of the square root of 7, and a diagonal of 6.0 inches, to within measurement accuracy. [ 16 ]
https://en.wikipedia.org/wiki/Square_root_of_7
Philip Deidesheimer was a mining engineer in the Western United States . Deidesheimer was born in 1832, in Darmstadt , Electorate of Hesse before German unification. He attended the prestigious Freiberg University of Mining ( Technische Universität Bergakademie Freiberg ) and emigrated to California in 1852. He died on 21 July 1916 in San Francisco , California . In 1852, at nineteen, the young mining engineer traveled to the California gold fields to work for several years, including in Georgetown . In April 1860 he was hired by W. F. Babcock, a trustee of the Ophir Mine, part of the Comstock Lode silver mining boom in Nevada, and solved one of the Comstock mines' most critical engineering needs. [ 1 ] Deidesheimer invented a system, now known as square set timbering, using heavy timber "cubes" as supports for underground mining tunnels and shafts, that enabled skilled miners to open three-dimensional cavities of any size. In large openings, the cubes could be filled with waste rock, creating a solid pillar of wood and rock from floor to roof ("back" in miner's terminology). [ 2 ] Deidesheimer created the square set timbering system for the Comstock Lode's Ophir Mine in Virginia City, Nevada , in 1860. [ 3 ] The system, which was inspired by the structure of honeycombs , enabled mining of the large silver orebodies of the Comstock Lode , which were in very weak rock—in miner's terms, "heavy ground". Deidesheimer refused to patent the innovation, [ 4 ] [ 5 ] which was easily the most important mining innovation of 1860. [ 2 ] As was common with the Comstock mines, the rock in the Ophir Mine was soft and easily collapsed into the working stopes (cavities where ore is extracted). In addition, the presence of clay that would swell greatly upon exposure to air caused great pressures that the mine timbering of that day could not hold back. The square set timbering method devised by Deideshimer slowed the swelling action long enough for ore extraction, though with time and decay the timbering was crushed by the enormous pressures found in the Comstock mines. Deidesheimer was made superintendent of the Ophir Mine by mine owner William Sharon in early 1875. He was bankrupted by speculation in mining stocks in 1878. [ 3 ] In 1866 Deidesheimer designed and supervised the construction of the Hope Mill and smelter for the St. Louis and Montana Mining Company, to process silver ore from nearby mines in Granite County , Montana . [ 6 ] The town that formed around the Hope Mill was named Philipsburg , in honor of Philip Deidesheimer. [ 7 ] After the decline of the Comstock mines in the late 1870s, Deideshimer continued his successful mining engineer career at the Young America Mine in Sierra City , California, where he was one of the five mine owners made rich over the five years of good production at that mine. [ 3 ] The development of his square-set timbering method was fictionalized in "The Philip Deidesheimer Story", a 1959 first-season episode of the American television series Bonanza , in which John Beal portrayed the title character. [ 8 ] Philip Deidesheimer was the subject of the NPR radio program The Engines of Our Ingenuity in episode 1901 [ 9 ] and was inducted into the (USA) National Mining Hall of Fame. [ 10 ]
https://en.wikipedia.org/wiki/Square_set_timbering
In geometry , a square trisection is a type of dissection problem which consists of cutting a square into pieces that can be rearranged to form three identical squares. The dissection of a square into three congruent parts is a geometrical problem that dates back to the Islamic Golden Age . Craftsmen who mastered the art of zellige needed innovative techniques to produce their elaborate mosaics with complex geometric figures. The first solution to this problem was proposed in the 10th century AD by the Persian mathematician Abu'l-Wafa' (940-998) in his treatise "On the geometric constructions necessary for the artisan" . [ 1 ] Abu'l-Wafa' also used his dissection to demonstrate the Pythagorean theorem . [ 2 ] This geometrical proof of Pythagoras' theorem would be rediscovered in the years 1835 - 1840 [ 3 ] by Henry Perigal and published in 1875. [ 4 ] The beauty of a dissection depends on several parameters, but it is usual to search for solutions with the minimum number of pieces. Far from being minimal, the square trisection proposed by Abu'l-Wafa' uses 9 pieces. In the 14th century Abu Bakr al-Khalil gave two solutions, one of which uses 8 pieces. [ 5 ] In the late 17th century Jacques Ozanam returned to the problem, [ 6 ] and in the 19th century solutions using 8 and 7 pieces were found, including one given by the mathematician Édouard Lucas . [ 7 ] In 1891 Henry Perigal published the first known solution with only 6 pieces [ 8 ] (see illustration below). New dissections are still being found [ 9 ] (see illustration above), and the conjecture that 6 is the minimal number of necessary pieces remains unproved.
https://en.wikipedia.org/wiki/Square_trisection
In number theory , the sum of the first n cubes is the square of the n th triangular number . That is, The same equation may be written more compactly using the mathematical notation for summation : This identity is sometimes called Nicomachus's theorem , after Nicomachus of Gerasa ( c. 60 – c. 120 CE ). Nicomachus, at the end of Chapter 20 of his Introduction to Arithmetic , pointed out that if one writes a list of the odd numbers, the first is the cube of 1, the sum of the next two is the cube of 2, the sum of the next three is the cube of 3, and so on. He does not go further than this, but from this it follows that the sum of the first n {\displaystyle n} cubes equals the sum of the first n ( n + 1 ) 2 {\displaystyle {\tfrac {n(n+1)}{2}}} odd numbers, that is, the odd numbers from 1 to n ( n + 1 ) − 1 {\displaystyle n(n+1)-1} . The average of these numbers is obviously n ( n + 1 ) 2 {\displaystyle {\tfrac {n(n+1)}{2}}} , and there are n ( n + 1 ) 2 {\displaystyle {\tfrac {n(n+1)}{2}}} of them, so their sum is ( n ( n + 1 ) 2 ) 2 {\displaystyle \left({\tfrac {n(n+1)}{2}}\right)^{2}} . Many early mathematicians have studied and provided proofs of Nicomachus's theorem. Stroeker (1995) claims that "every student of number theory surely must have marveled at this miraculous fact". [ 1 ] Pengelley (2002) finds references to the identity not only in the works of Nicomachus in what is now Jordan in the 1st century CE, but also in those of Aryabhata in India in the 5th century, and in those of Al-Karaji c. 1000 in Persia . [ 2 ] Bressoud (2004) mentions several additional early mathematical works on this formula, by Al-Qabisi (10th century Arabia), Gersonides ( c. 1300 , France), and Nilakantha Somayaji ( c. 1500 , India); he reproduces Nilakantha's visual proof. [ 3 ] The sequence of squared triangular numbers is These numbers can be viewed as figurate numbers , a four-dimensional hyperpyramidal generalization of the triangular numbers and square pyramidal numbers . As Stein (1971) observes, these numbers also count the number of rectangles with horizontal and vertical sides formed in an n × n {\displaystyle n\times n} grid . For instance, the points of a 4 × 4 {\displaystyle 4\times 4} grid (or a square made up of three smaller squares on a side) can form 36 different rectangles. The number of squares in a square grid is similarly counted by the square pyramidal numbers. [ 4 ] The identity also admits a natural probabilistic interpretation as follows. Let X , Y , Z , W {\displaystyle X,Y,Z,W} be four integer numbers independently and uniformly chosen at random between 1 and n {\displaystyle n} . Then, the probability that W {\displaystyle W} is the largest of the four numbers equals the probability that Y {\displaystyle Y} is at least as large as X {\displaystyle X} and that W {\displaystyle W} is at least as large as Z {\displaystyle Z} . That is, Pr [ max ( X , Y , Z ) ≤ W ] = Pr [ X ≤ Y ∧ Z ≤ W ] . {\displaystyle \Pr[\max(X,Y,Z)\leq W]=\Pr[X\leq Y\wedge Z\leq W].} For any particular value of W {\displaystyle W} , the combinations of X {\displaystyle X} , Y {\displaystyle Y} , and Z {\displaystyle Z} that make W {\displaystyle W} largest form a cube 1 ≤ X , Y , Z ≤ n {\displaystyle 1\leq X,Y,Z\leq n} so (adding the size of this cube over all choices of W {\displaystyle W} }) the number of combinations of X , Y , Z , W {\displaystyle X,Y,Z,W} for which W {\displaystyle W} is largest is a sum of cubes, the left hand side of the Nichomachus identity. 
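Both the identity and the rectangle-counting interpretation are easy to verify for small n; the sketch below checks that the running sum of cubes, the squared triangular number, and the rectangle count C(n+1, 2)² agree (the upper limit is arbitrary).

```python
from math import comb

def check_nicomachus(n_max=50):
    """Check sum of the first n cubes == (n(n+1)/2)**2 == number of axis-parallel
    rectangles on an (n+1) x (n+1) grid of points (choose 2 of the n+1 lines twice)."""
    running_sum = 0
    for n in range(1, n_max + 1):
        running_sum += n ** 3
        triangular = n * (n + 1) // 2
        rectangles = comb(n + 1, 2) ** 2
        assert running_sum == triangular ** 2 == rectangles
    print("identity verified for all n up to", n_max)

check_nicomachus()
```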
The sets of pairs ( X , Y ) {\displaystyle (X,Y)} with X ≤ Y {\displaystyle X\leq Y} and of pairs ( Z , W ) {\displaystyle (Z,W)} with Z ≤ W {\displaystyle Z\leq W} form isosceles right triangles, and the set counted by the right hand side of the equation of probabilities is the Cartesian product of these two triangles, so its size is the square of a triangular number on the right hand side of the Nichomachus identity. The probabilities themselves are respectively the left and right sides of the Nichomachus identity, normalized to make probabilities by dividing both sides by n 4 {\displaystyle n^{4}} . [ citation needed ] Charles Wheatstone ( 1854 ) gives a particularly simple derivation, by expanding each cube in the sum into a set of consecutive odd numbers. He begins by giving the identity n 3 = ( n 2 − n + 1 ) + ( n 2 − n + 1 + 2 ) + ( n 2 − n + 1 + 4 ) + ⋯ + ( n 2 + n − 1 ) ⏟ n consecutive odd numbers . {\displaystyle n^{3}=\underbrace {\left(n^{2}-n+1\right)+\left(n^{2}-n+1+2\right)+\left(n^{2}-n+1+4\right)+\cdots +\left(n^{2}+n-1\right)} _{n{\text{ consecutive odd numbers}}}.} That identity is related to triangular numbers T n {\displaystyle T_{n}} in the following way: n 3 = ∑ k = T n − 1 + 1 T n ( 2 k − 1 ) , {\displaystyle n^{3}=\sum _{k=T_{n-1}+1}^{T_{n}}(2k-1),} and thus the summands forming n 3 {\displaystyle n^{3}} start off just after those forming all previous values 1 3 {\displaystyle 1^{3}} up to ( n − 1 ) 3 {\displaystyle (n-1)^{3}} . Applying this property, along with another well-known identity: n 2 = ∑ k = 1 n ( 2 k − 1 ) , {\displaystyle n^{2}=\sum _{k=1}^{n}(2k-1),} produces the following derivation: [ 5 ] ∑ k = 1 n k 3 = 1 + 8 + 27 + 64 + ⋯ + n 3 = 1 ⏟ 1 3 + 3 + 5 ⏟ 2 3 + 7 + 9 + 11 ⏟ 3 3 + 13 + 15 + 17 + 19 ⏟ 4 3 + ⋯ + ( n 2 − n + 1 ) + ⋯ + ( n 2 + n − 1 ) ⏟ n 3 = 1 ⏟ 1 2 + 3 ⏟ 2 2 + 5 ⏟ 3 2 + ⋯ + ( n 2 + n − 1 ) ⏟ ( n 2 + n 2 ) 2 = ( 1 + 2 + ⋯ + n ) 2 = ( ∑ k = 1 n k ) 2 . {\displaystyle {\begin{aligned}\sum _{k=1}^{n}k^{3}&=1+8+27+64+\cdots +n^{3}\\&=\underbrace {1} _{1^{3}}+\underbrace {3+5} _{2^{3}}+\underbrace {7+9+11} _{3^{3}}+\underbrace {13+15+17+19} _{4^{3}}+\cdots +\underbrace {\left(n^{2}-n+1\right)+\cdots +\left(n^{2}+n-1\right)} _{n^{3}}\\&=\underbrace {\underbrace {\underbrace {\underbrace {1} _{1^{2}}+3} _{2^{2}}+5} _{3^{2}}+\cdots +\left(n^{2}+n-1\right)} _{\left({\frac {n^{2}+n}{2}}\right)^{2}}\\&=(1+2+\cdots +n)^{2}\\&=\left(\sum _{k=1}^{n}k\right)^{2}.\end{aligned}}} Row (1893) obtains another proof by summing the numbers in a square multiplication table in two different ways. The sum of the i th row is i times a triangular number, from which it follows that the sum of all the rows is the square of a triangular number. Alternatively, one can decompose the table into a sequence of nested gnomons , each consisting of the products in which the larger of the two terms is some fixed value. The sum within each gmonon is a cube, so the sum of the whole table is a sum of cubes. [ 6 ] In the more recent mathematical literature, Edmonds (1957) provides a proof using summation by parts . [ 7 ] Stein (1971) uses the rectangle-counting interpretation of these numbers to form a geometric proof of the identity. [ 8 ] Stein observes that it may also be proved easily (but uninformatively) by induction, and states that Toeplitz (1963) provides "an interesting old Arabic proof". [ 4 ] Kanim (2004) provides a purely visual proof, [ 9 ] Benjamin & Orrison (2002) provide two additional proofs, [ 10 ] and Nelsen (1993) gives seven geometric proofs. 
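Wheatstone's grouping of each cube into consecutive odd numbers can likewise be reproduced directly, as in this short sketch.

```python
def wheatstone_rows(n_max=8):
    """Wheatstone's decomposition: n**3 is the sum of the n consecutive odd numbers
    from n**2 - n + 1 up to n**2 + n - 1; the rows for 1..n together use the first
    n(n+1)/2 odd numbers, whose sum is (n(n+1)/2)**2."""
    for n in range(1, n_max + 1):
        odds = list(range(n * n - n + 1, n * n + n, 2))
        assert len(odds) == n and sum(odds) == n ** 3
        print(f"{n}^3 = {' + '.join(map(str, odds))}")

wheatstone_rows()
```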
[ 11 ] A similar result to Nicomachus's theorem holds for all power sums , namely that odd power sums (sums of odd powers) are a polynomial in triangular numbers. These are called Faulhaber polynomials , of which the sum of cubes is the simplest and most elegant example. However, in no other case is one power sum a square of another. [ 7 ] Stroeker (1995) studies more general conditions under which the sum of a consecutive sequence of cubes forms a square. [ 1 ] Garrett & Hummel (2004) and Warnaar (2004) study polynomial analogues of the square triangular number formula, in which series of polynomials add to the square of another polynomial. [ 12 ]
https://en.wikipedia.org/wiki/Squared_triangular_number
The square–cube law (or cube–square law ) is a mathematical principle, applied in a variety of scientific fields, which describes the relationship between the volume and the surface area as a shape's size increases or decreases. It was first [ dubious – discuss ] described in 1638 by Galileo Galilei in his Two New Sciences as the "...ratio of two volumes is greater than the ratio of their surfaces". [ 1 ] This principle states that, as a shape grows in size, its volume grows faster than its surface area. When applied to the real world, this principle has many implications which are important in fields ranging from mechanical engineering to biomechanics . It helps explain phenomena including why large mammals like elephants have a harder time cooling themselves than small ones like mice, and why building taller and taller skyscrapers is increasingly difficult. The square–cube law can be stated as follows: When an object undergoes a proportional increase in size, its new surface area is proportional to the square of the multiplier and its new volume is proportional to the cube of the multiplier. Represented mathematically: [ 2 ] A 2 = A 1 ( ℓ 2 ℓ 1 ) 2 {\displaystyle A_{2}=A_{1}\left({\frac {\ell _{2}}{\ell _{1}}}\right)^{2}} where A 1 {\displaystyle A_{1}} is the original surface area and A 2 {\displaystyle A_{2}} is the new surface area. V 2 = V 1 ( ℓ 2 ℓ 1 ) 3 {\displaystyle V_{2}=V_{1}\left({\frac {\ell _{2}}{\ell _{1}}}\right)^{3}} where V 1 {\displaystyle V_{1}} is the original volume, V 2 {\displaystyle V_{2}} is the new volume, ℓ 1 {\displaystyle \ell _{1}} is the original length and ℓ 2 {\displaystyle \ell _{2}} is the new length. For example, a cube with a side length of 1 meter has a surface area of 6 m 2 and a volume of 1 m 3 . If the sides of the cube were multiplied by 2, its surface area would be multiplied by the square of 2 and become 24 m 2 . Its volume would be multiplied by the cube of 2 and become 8 m 3 . The original cube (1 m sides) has a surface area to volume ratio of 6 m 2 : 1 m 3 . The larger (2 m sides) cube has a surface area to volume ratio of (24/8) 3 m 2 : 1 m 3 . As the dimensions increase, the volume will continue to grow faster than the surface area. Thus the square–cube law. This principle applies to all solids. [ 3 ] When a physical object maintains the same density and is scaled up, its volume and mass are increased by the cube of the multiplier while its surface area increases only by the square of the same multiplier. This would mean that when the larger version of the object is accelerated at the same rate as the original, more pressure would be exerted on the surface of the larger object. Consider a simple example of a body of mass m {\displaystyle m} , undergoing an acceleration a {\displaystyle a} , with a surface area A {\displaystyle A} , upon which the accelerating force is acting. The force due to acceleration is F = m a {\displaystyle F=ma} and the pressure is P = F A = m a A {\displaystyle P={\frac {F}{A}}={\frac {ma}{A}}} . Now, consider the object to be exaggerated by a multiplier factor x {\displaystyle x} so that it has a new mass m ′ = x 3 m {\displaystyle m'=x^{3}m} , and a new surface area A ′ = x 2 A {\displaystyle A'=x^{2}A} . 
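The cube example generalizes to any scale factor; the sketch below (with illustrative inputs) reports how surface area, volume, and their ratio change when a cube's side is multiplied.

```python
def scale_cube(side, factor):
    """Report how a cube's surface area, volume, and area-to-volume ratio change
    when every side length is multiplied by the given factor."""
    area, volume = 6 * side ** 2, side ** 3
    new_area, new_volume = area * factor ** 2, volume * factor ** 3
    print(f"side x{factor}: area {area:g} -> {new_area:g} m^2, "
          f"volume {volume:g} -> {new_volume:g} m^3, "
          f"area/volume {area/volume:g} -> {new_area/new_volume:g} per metre")

scale_cube(1.0, 2)    # 6 -> 24 m^2, 1 -> 8 m^3, ratio 6 -> 3
scale_cube(1.0, 10)
```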
The new force due to acceleration is F ′ = x 3 m a {\displaystyle F'=x^{3}ma} and the resulting pressure is: P ′ = F ′ A ′ = x 3 m a x 2 A = x m a A = x P {\displaystyle {\begin{aligned}P'&={\frac {F'}{A'}}\\&={\frac {x^{3}ma}{x^{2}A}}\\&=x\ {\frac {ma}{A}}\\&=x\ P\\\end{aligned}}} Thus, just scaling up the size of an object, keeping the same material of construction (density), and same acceleration, would increase the pressure by the same scaling factor. This would indicate that the object would have less ability to resist stress and would be more prone to collapse while accelerating. This is why large vehicles perform poorly in crash tests and why there are theorized limits as to how high buildings can be built. Similarly, the larger an object is, the less other objects would resist its motion, causing its deceleration. If an animal were isometrically scaled up by a considerable amount, its relative muscular strength would be severely reduced, since the cross-section of its muscles would increase by the square of the scaling factor while its mass would increase by the cube of the scaling factor. As a result of this, cardiovascular and respiratory functions would be severely burdened. In the case of flying animals, the wing loading would be increased if they were isometrically scaled up, and they would therefore have to fly faster to gain the same amount of lift . Air resistance per unit mass is also higher for smaller animals (reducing terminal velocity ) which is why a small animal like an ant cannot be seriously injured from impact with the ground after being dropped from any height. As stated by J. B. S. Haldane , large animals do not look like small animals: an elephant cannot be mistaken for a mouse scaled up in size. This is due to allometric scaling : the bones of an elephant are necessarily proportionately much larger than the bones of a mouse because they must carry proportionately higher weight. Haldane illustrates this in his seminal 1928 essay On Being the Right Size in referring to allegorical giants: "...consider a man 60 feet high...Giant Pope and Giant Pagan in the illustrated Pilgrim's Progress: ...These monsters...weighed 1000 times as much as [a normal human]. Every square inch of a giant bone had to support 10 times the weight borne by a square inch of human bone. As the average human thigh-bone breaks under about 10 times the human weight, Pope and Pagan would have broken their thighs every time they took a step." [ 5 ] Consequently, most animals show allometric scaling with increased size, both among species and within a species. The giant creatures seen in monster movies (e.g., Godzilla , King Kong , and Them! , and other kaiju ) are also unrealistic, given that their sheer size would force them to collapse. Robert Wadlow , the documented tallest man to ever live (2.72m), needed leg braces to walk and suffered from numbness in his feet. [ 6 ] However, the buoyancy of water negates to some extent the effects of gravity. Therefore, aquatic animals can grow to very large sizes without the same musculoskeletal structures that would be required of similarly sized terrestrial animals, and it is the primary reason that the largest animals to ever exist on earth are aquatic animals . The metabolic rate of animals scales with a mathematical principle named quarter-power scaling [ 7 ] according to the metabolic theory of ecology . Mass transfer, such as diffusion to smaller objects such as living cells is faster than diffusion to larger objects such as entire animals. 
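The same bookkeeping can be written out for the pressure argument: with mass scaling as x³ and contact area as x², the pressure comes out exactly x times larger. The numbers passed in below are placeholders, not data from the text.

```python
def scaled_pressure(mass, area, acceleration, x):
    """Pressure on the contact surface before and after scaling every length by x,
    keeping density and acceleration fixed: mass grows as x**3, area as x**2,
    so the pressure grows by the factor x."""
    p = mass * acceleration / area
    p_scaled = (x ** 3 * mass) * acceleration / (x ** 2 * area)
    return p, p_scaled, p_scaled / p

print(scaled_pressure(mass=1000.0, area=2.0, acceleration=9.81, x=3))   # ratio 3.0
```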
Thus, in chemical processes that take place on a surface – rather than in the bulk – finer-divided material is more active. For example, the activity of a heterogeneous catalyst is higher when it is divided into finer particles. Heat production from a chemical process scales with the cube of the linear dimension (height, width) of the vessel, but the vessel surface area scales with only the square of the linear dimension. Consequently, larger vessels are much more difficult to cool. Also, large-scale piping for transferring hot fluids is difficult to simulate on a small scale, because heat is transferred faster out from smaller pipes. Failure to take this into account in process design may lead to catastrophic thermal runaway .
https://en.wikipedia.org/wiki/Square–cube_law
The Squarial (a portmanteau of the words square and aerial ) was a satellite antenna used for reception of the now defunct British Satellite Broadcasting television service (BSB). The Squarial was a flat plate satellite antenna, built to be unobtrusive and unique. BSB were counting on the form factor of the antenna to clearly differentiate themselves from their competitors at the time. At the time of development, satellite installations usually required a 90 cm dish in order to receive a clear signal from the transmitting satellite. The smaller antenna was BSB's unique selling point and was heavily advertised in order to attract customers to their service. The Squarial was launched at a high-profile event in Marco Polo House , BSB's headquarters. The media were invited to a demonstration to see how much better MAC pictures could be than PAL . But MAC took a back seat when BSB unveiled the mock up Squarial, to replace the dish aerials usually needed for satellite reception. The Squarial was a surprise to everyone, including the four companies which had signed to manufacture the receivers which would have to work with the new aerial. The Squarial deal, with British company Fortel, had been struck only hours before the London event. BSB was itself surprised at the press reaction. The media were apparently so excited by the new antenna that they failed to ask whether there was a working prototype, and there wasn't. All that existed at this point was a wood-and-plastic dummy. Believing that someone would be able to make the Squarial work as well as a much larger dish, BSB built a whole advertising campaign on the Squarial. STC in Paignton was the first company to make a British Squarial. These were a little bigger, 38 cm across, to provide adequate reception throughout the UK, and more expensive than a dish. [ citation needed ] Due to production delays and limited availability of the STC squarial and to save face at launch, BSB sourced already available Squarials from Matsushita (now called Panasonic ) in Japan who were producing them in quantity for the Japanese market. Industry rumours at the time of launch suggested that BSB were buying the squarials from Matsushita for several hundred pounds each and heavily subsidising the cost to the four manufacturers of DMAC receiver. The Matsushita squarial was of a slightly better quality construction compared to the STC design and was used by Ferguson, Philips and Tatung while ITT-Nokia supplied the STC squarial. However all offered the 30 cm traditional mini dish for a slightly lower price (several dish manufacturers were used including Lenson Heath and Channel Master). The Squarial became obsolete in 1993, when the Marcopolo satellites, which the Squarial received, stopped broadcasting signals from BSkyB , which had carried the Sky channels over the D-Mac system for a period. Unlike a normal satellite dish , which uses a parabolic reflector to focus the radio waves on a single feed horn antenna, the Squarial was a phased array antenna, a common design in which multiple small antennas work together to receive the waves. [ 1 ] The Squarial consisted of a planar array of either 144 or 256 resonant cavity antennas spaced 0.9 wavelength apart, all embedded in plastic. Each antenna element was a tiny open-ended metal box in which the microwave downlink radio waves excited standing waves , with a wire probe projecting in which received the radio waves and conducted them to an integral low-noise block converter (LNB) amplifier. 
The feed network combined the radio currents from the separate elements with the correct phase so that radio waves from the desired direction would be in phase and add together, while radio waves from other directions would be out of phase and cancel. Since the microwaves had to pass through the plastic surface to reach the antennas, special low-loss plastic was used. Three of these plastic sheets were stacked upon each other, padded with polystyrene layers to add rigidity to the unit. All this was engineered into a 38 cm white plastic body with the BSB logo at the bottom. The low-noise block converter mounted in the center, behind the layers, was a standard unit similar to those in other satellite dishes, which converts the frequencies from the satellite down to a lower frequency band around 800 MHz and transmits it through a coaxial cable into the building to the set-top box at the TV. It was manufactured by Matsushita and rated as a 10 GHz standard unit. The Squarial's small size was possible thanks to the high power of the two Marcopolo DBS satellites, which simulcast the same channels on the same frequencies. The broadcast power was 59 dBW, with a 0.05 degree accuracy. [ 2 ] Manufacturers of the DMAC receivers used with the Squarial included, Ferguson, Phillips, Nokia and Tatung. The Squarial was a specialized antenna designed specifically for operation on the Marco Polo satellites' frequency range. The LNB could only tune a limited range of frequencies and when utilised in modern circumstances the frequency is subsequently offset by around 100 MHz. Some owners modified the squarial to operate with the Thor satellite system (formerly BSB's own satellites, Marcopolo) after the decline of BSB. This was due in large part to the highly discounted price of the unit during the final months of BSB's existence. D2-MAC programmes could be picked up from the Scandinavian satellites during the early 1990s and viewed using modified receivers. Once transmissions ceased from these satellites, Squarials could be used to receive broadcasts from the French terrestrial relay satellites at 5.0°W. BSB's alternative dishes were also successfully used to receive analogue transmissions from the Astra and Hot Bird satellites. BSB placed the Squarial at the heart of its advertising campaign, using the diamond shape throughout all of its channel logos and on screen presentation. This square/diamond image extended down to BSB's corporate logo and even printed and televisual advertising mediums. This led to the company's slogan (used throughout the company's existence) "It's smart to be square". [ 3 ] The unique appearance was a design first for satellite antennae, its flat plate measured only a few millimetres thick and the LNB unit protruded another 3 cm from the rear. It was built to a very high standard, featuring good quality plastics, weather resistant coatings and stainless steel mounting arm. Compared with the Amstrad-manufactured dishes offered by Sky — made from cheap metal — the Squarial offered a much more attractive, upmarket appearance. BSB offered two alternatives to the squarial, the cheaper more conventional looking mini-dish format and the rounded-rectangle format dish. The first revision was in the shape of a vertical ellipse of roughly 30 cm in diameter. The design employed a short LNB arm with a 'spike' design LNB operating at a frequency of 10 GHz. Essentially this design could be considered the forerunner to BSkyB's minidish . 
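The in-phase combining described above can be illustrated with a one-row array-factor calculation. The sketch below assumes a uniformly fed row of elements at the 0.9-wavelength spacing mentioned earlier; the element count per row and the sampled angles are illustrative choices, not BSB's actual feed design.

```python
import cmath, math

def array_factor(n_elements=12, spacing_wavelengths=0.9):
    """Broadside array factor for one uniformly fed row of a flat-plate array:
    element signals add in phase toward 0 degrees (the satellite direction)
    and largely cancel off-axis."""
    for theta_deg in range(-90, 91, 30):
        psi = 2 * math.pi * spacing_wavelengths * math.sin(math.radians(theta_deg))
        total = sum(cmath.exp(1j * psi * k) for k in range(n_elements))
        print(f"theta = {theta_deg:4d} deg   normalized |AF| = {abs(total)/n_elements:.3f}")

array_factor()
```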
The second revision took on the appearance of a perfectly circular dish (around 25 cm in diameter), using a standard LNB at 10 GHz. In essence both function like a normal satellite dish, only scaled down. [Table: post-merger fate of the BSB channels Galaxy, The Movie Channel, Now, The Power Station, The Sports Channel and The Computer Channel.]
https://en.wikipedia.org/wiki/Squarial
Squaring the square is the problem of tiling an integral square using only other integral squares. (An integral square is a square whose sides have integer length.) The name was coined in a humorous analogy with squaring the circle . Squaring the square is an easy task unless additional conditions are set. The most studied restriction is that the squaring be perfect , meaning the sizes of the smaller squares are all different. A related problem is squaring the plane , which can be done even with the restriction that each natural number occurs exactly once as a size of a square in the tiling. The order of a squared square is its number of constituent squares. A "perfect" squared square is a square such that each of the smaller squares has a different size. Perfect squared squares were studied by R. L. Brooks , C. A. B. Smith , A. H. Stone and W. T. Tutte (writing under the collective pseudonym " Blanche Descartes ") at Cambridge University between 1936 and 1938. They transformed the square tiling into an equivalent electrical circuit – they called it a "Smith diagram" – by considering the squares as resistors that connected to their neighbors at their top and bottom edges, and then applied Kirchhoff's circuit laws and circuit decomposition techniques to that circuit. The first perfect squared squares they found were of order 69. The first perfect squared square to be published, a compound one of side 4205 and order 55, was found by Roland Sprague in 1939. [ 1 ] Martin Gardner published an extensive article written by W. T. Tutte about the early history of squaring the square in his Mathematical Games column of November 1958. [ 2 ] A "simple" squared square is one where no subset of more than one of the squares forms a rectangle or square. When a squared square has a square or rectangular subset, it is "compound". In 1978, A. J. W. Duijvestijn [ de ] discovered a simple perfect squared square of side 112 with the smallest number of squares using a computer search. His tiling uses 21 squares, and has been proved to be minimal. [ 3 ] This squared square forms the logo of the Trinity Mathematical Society . It also appears on the cover of the Journal of Combinatorial Theory . Duijvestijn also found two simple perfect squared squares of sides 110 but each comprising 22 squares. Theophilus Harding Willcocks, an amateur mathematician and fairy chess composer, found another. In 1999, I. Gambini proved that these three are the smallest perfect squared squares in terms of side length. [ 4 ] The perfect compound squared square with the fewest squares was discovered by T.H. Willcocks in 1946 and has 24 squares; however, it was not until 1982 that Duijvestijn, Pasquale Joseph Federico and P. Leeuw mathematically proved it to be the lowest-order example. [ 5 ] When the constraint of all the squares being different sizes is relaxed, a squared square such that the side lengths of the smaller squares do not have a common divisor larger than 1 is called a "Mrs. Perkins's quilt". In other words, the greatest common divisor of all the smaller side lengths should be 1. The Mrs. Perkins's quilt problem asks for a Mrs. Perkins's quilt with the fewest pieces for a given n × n {\displaystyle n\times n} square. The number of pieces required is at least log 2 ⁡ n {\displaystyle \log _{2}n} , [ 6 ] and at most 6 log 2 ⁡ n {\displaystyle 6\log _{2}n} . [ 7 ] Computer searches have found exact solutions for small values of n {\displaystyle n} (small enough to need up to 18 pieces). 
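Checking a proposed squared square or rectangle is straightforward. The sketch below verifies that a set of placed squares covers every unit cell exactly once and that all sizes are distinct; it is demonstrated on a well-known order-9 perfect squared rectangle of size 32 × 33 (the coordinates are given here for illustration and should be treated as such).

```python
def verify_squared_rectangle(width, height, squares):
    """Check that a list of (x, y, size) squares tiles a width x height rectangle
    exactly once, and report whether the tiling is 'perfect' (all sizes distinct)."""
    grid = [[0] * width for _ in range(height)]
    for x, y, s in squares:
        for i in range(y, y + s):
            for j in range(x, x + s):
                grid[i][j] += 1
    covered_once = all(cell == 1 for row in grid for cell in row)
    sizes = [s for _, _, s in squares]
    return covered_once, len(set(sizes)) == len(sizes)

# An order-9 perfect squared rectangle (33 wide, 32 tall), squares given as (x, y, size).
rectangle = [(0, 0, 18), (18, 0, 15), (18, 15, 7), (25, 15, 8), (0, 18, 14),
             (14, 18, 4), (14, 22, 10), (24, 22, 1), (24, 23, 9)]
print(verify_squared_rectangle(33, 32, rectangle))   # (True, True)
```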
[ 8 ] For n = 1 , 2 , 3 , … {\displaystyle n=1,2,3,\dots } the number of pieces required is: For any integer n {\displaystyle n} other than 2, 3, and 5, it is possible to dissect a square into n {\displaystyle n} squares of one or two different sizes. [ 9 ] In 1975, Solomon Golomb raised the question whether the whole plane can be tiled by squares, one of each integer edge-length, which he called the heterogeneous tiling conjecture . This problem was later publicized by Martin Gardner in his Scientific American column and appeared in several books, but it defied solution for over 30 years. In Tilings and patterns , published in 1987, Branko Grünbaum and G. C. Shephard describe a way of tiling of the plane by integral squares by recursively taking any perfect squared square and enlarging it so that the formerly smallest tile has the size of the original squared square, then replacing this tile with a copy of the original squared square. The recursive scaling process increases the sizes of the squares exponentially – skipping most integers – a feature which they note was true of all perfect integral tilings of the plane known at that time. In 2008 James Henle and Frederick Henle proved Golomb's heterogeneous tiling conjecture: there exists a tiling of the plane by squares, one of each integer size. Their proof is constructive and proceeds by "puffing up" an L-shaped region formed by two side-by-side and horizontally flush squares of different sizes to a perfect tiling of a larger rectangular region, then adjoining the square of the smallest size not yet used to get another, larger L-shaped region. The squares added during the puffing up procedure have sizes that have not yet appeared in the construction and the procedure is set up so that the resulting rectangular regions are expanding in all four directions, which leads to a tiling of the whole plane. [ 10 ] Cubing the cube is the analogue in three dimensions of squaring the square: that is, given a cube C , the problem of dividing it into finitely many smaller cubes, no two congruent. Unlike the case of squaring the square, a hard yet solvable problem, there is no perfect cubed cube and, more generally, no dissection of a rectangular cuboid C into a finite number of unequal cubes. To prove this, we start with the following claim: for any perfect dissection of a rectangle in squares, the smallest square in this dissection does not lie on an edge of the rectangle. Indeed, each corner square has a smaller adjacent edge square, and the smallest edge square is adjacent to smaller squares not on the edge. Now suppose that there is a perfect dissection of a rectangular cuboid in cubes. Make a face of C its horizontal base. The base is divided into a perfect squared rectangle R by the cubes which rest on it. The smallest square s 1 in R is surrounded by larger , and therefore higher , cubes. Hence the upper face of the cube on s 1 is divided into a perfect squared square by the cubes which rest on it. Let s 2 be the smallest square in this dissection. By the claim above, this is surrounded on all 4 sides by squares which are larger than s 2 and therefore higher. The sequence of squares s 1 , s 2 , ... is infinite and the corresponding cubes are infinite in number. This contradicts our original supposition. [ 11 ] If a 4-dimensional hypercube could be perfectly hypercubed then its 'faces' would be perfect cubed cubes; this is impossible. Similarly, there is no solution for all cubes of higher dimensions.
https://en.wikipedia.org/wiki/Squaring_the_square
The squat effect is the hydrodynamic phenomenon by which a vessel moving through shallow water creates an area of reduced pressure that causes the ship to increase its draft (alternatively decrease the underkeel clearance of the vessel in marine terms) and thereby be closer to the seabed than would otherwise be expected. This phenomenon is caused by the water flow which accelerates as it passes between the hull and the seabed in confined waters, the increase in water velocity causing a resultant reduction in pressure . Squat effect from a combination of vertical sinkage and a change of trim may cause the vessel to dip towards the stern or towards the bow. This is understood to be a function of the Block coefficient of the vessel concerned, finer lined vessels Cb <0.7 squatting by the stern and vessels with a Cb >0.7 squatting by the head or bow. [ 1 ] Squat effect is approximately proportional to the square of the speed of the ship. Thus, by reducing speed by half, the squat effect is reduced by a factor of four. [ 2 ] Squat effect is usually felt more when the depth/ draft ratio is less than four [ 2 ] or when sailing close to a bank . It can lead to unexpected groundings and handling difficulties. There are indications of squat which mariners and ship pilots should be aware of such as vibration, poor helm response, shearing off course, change of trim and a change in wash. Squat effect is included by navigators in under keel clearance calculations. [ 3 ] It was a cause of the 7 August 1992 grounding of the Queen Elizabeth 2 (QE2) off Cuttyhunk Island, near Martha's Vineyard . The liner's speed at the time was 24 knots (12 m/s) and the draft was 32 feet (9.8 m). The rock upon which the vessel grounded was an uncharted shoal later determined to be 34.5 feet (10.5 m), which should have given her room to spare, were it not for the "squat effect." [ 4 ] U.S. National Transportation Safety Board investigators found that the QE2's officers significantly underestimated the amount the increase in speed would increase the ship's squat. The officers allowed for 2 feet (0.61 m) of squat in their calculations, but the NTSB concluded that squat at that speed and depth would have been between 4.5 and 8 feet (1.4 and 2.4 m). [ 5 ] Squat is also mentioned as a factor in the collision of the bulk carriers Tecam Sea and Federal Fuji in the port of Sorel , Quebec , in April 2000. [ 1 ] The third largest cruise ship in the world, MS Oasis of the Seas , used this effect to obtain an extra margin of clearance between the vessel and the Great Belt bridge , Denmark , 1 November 2009, on a voyage from the shipyard in Turku , Finland to Florida , USA . [ 6 ] The new cruise liner passed under the bridge at 20 knots (37 km/h) in the shallow channel, giving the ship extra clearance due to a 30 cm squat.
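The quadratic speed dependence stated above can be made concrete with a short sketch. It simply rescales the NTSB squat estimate for the QE2 (4.5 to 8 feet at 24 knots) by the square of the speed ratio; this illustrates only the proportionality quoted in the article and is not a substitute for a proper under-keel-clearance calculation.

```python
# Sketch of the quadratic speed dependence described above: squat ~ v**2,
# so halving the speed reduces squat by a factor of four.  The reference
# figures (24 kn, 4.5-8 ft of squat) are the NTSB estimates quoted in the
# text; rescaling them to other speeds is purely illustrative.
def scaled_squat(speed_kn, ref_speed_kn=24.0, ref_squat_ft=(4.5, 8.0)):
    factor = (speed_kn / ref_speed_kn) ** 2
    lo, hi = ref_squat_ft
    return lo * factor, hi * factor

for v in (24.0, 12.0, 6.0):
    lo, hi = scaled_squat(v)
    print(f"{v:4.0f} kn -> estimated squat {lo:.2f} to {hi:.2f} ft")
# 12 kn gives one quarter of the 24 kn values, as stated in the article.
```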
https://en.wikipedia.org/wiki/Squat_effect
Squawk is a Java micro edition virtual machine for embedded systems and small devices. Most virtual machines for the Java platform are written in low-level native languages such as C / C++ and assembler ; what makes Squawk different is that Squawk's core is mostly written in Java (this is called a meta-circular interpreter ). A Java implementation provides ease of portability, and integration of virtual machine and application resources such as objects, threads, and operating-system interfaces. The research project was inspired by Squeak . Squawk has a Java ME heritage and features a small memory footprint . [ 1 ] It was developed to be simple with minimal external dependencies. Its simplicity made it portable and easy to debug and maintain. Squawk also provides an isolate mechanism by which an application is represented as an object. In Squawk, one or more applications can run in a single JVM. Conceptually, each application is completely isolated from all other applications.
https://en.wikipedia.org/wiki/Squawk_virtual_machine
Squeeze flow (also called squeezing flow, squeezing film flow, or squeeze flow theory) is a type of flow in which a material is pressed out or deformed between two parallel plates or objects. First explored in 1874 by Josef Stefan , [ 1 ] squeeze flow describes the outward movement of a droplet of material, its area of contact with the plate surfaces, and the effects of internal and external factors such as temperature, viscoelasticity , and heterogeneity of the material. [ 2 ] Several squeeze flow models exist to describe Newtonian and non-Newtonian fluids undergoing squeeze flow under various geometries and conditions. Numerous applications across scientific and engineering disciplines including rheometry , welding engineering, and materials science provide examples of squeeze flow in practical use. Conservation of mass (expressed as a continuity equation ), the Navier-Stokes equations for conservation of momentum, and the Reynolds number provide the foundations for calculating and modeling squeeze flow. Boundary conditions for such calculations include assumptions of an incompressible fluid , a two-dimensional system, neglecting of body forces , and neglecting of inertial forces. Relating applied force to material thickness: F = − 4 ∗ L 3 ∗ η ∗ W h 3 d h d t {\displaystyle F=-{\frac {4*L^{3}*\eta *W}{h^{3}}}{dh \over dt}} Where F {\displaystyle F} is the applied squeezing force, 2 L {\displaystyle 2L} is the initial length of the droplet, η {\displaystyle \eta } is the fluid viscosity, W {\displaystyle W} is the width of the assumed rectangular plate, 2 h {\displaystyle 2h} is the final height of the droplet, and d h d t {\displaystyle {dh \over dt}} is the change in droplet height over time. To simplify most calculations, the applied force is assumed to be constant. Several equations accurately model Newtonian droplet sizes under different initial conditions. Consideration of a single asperity , or surface protrusion, allows for measurement of a very specific cross-section of a droplet. To measure macroscopic squeeze flow effects, models exist for two the most common surfaces: circular and rectangular plate squeeze flows. For single asperity squeeze flow: h 0 h = ( 1 + 5 ∗ F ∗ t ∗ h 0 2 4 ∗ η ∗ W ∗ L 0 3 ) 1 / 5 {\displaystyle {\frac {h_{0}}{h}}=\left(1+{\frac {5*F*t*h_{0}^{2}}{4*\eta *W*L_{0}^{3}}}\right)^{1/5}} Where 2 h 0 {\displaystyle 2h_{0}} is the initial height of the droplet, 2 h {\displaystyle 2h} is the final height of the droplet, F {\displaystyle F} is the applied squeezing force, t {\displaystyle t} is the squeezing time, η {\displaystyle \eta } is the fluid viscosity, W {\displaystyle W} is the width of the assumed rectangular plate, and 2 L 0 {\displaystyle 2L_{0}} is the initial length of the droplet. [ 3 ] Based on conservation of mass calculations, the droplet width is inversely proportional to droplet height; as the width increases, the height decreases in response to squeezing forces. [ 3 ] For circular plate squeeze flow: h 0 h = ( 1 + 16 ∗ F ∗ t ∗ h 0 2 3 ∗ π ∗ η ∗ R 4 ) 1 / 2 {\displaystyle {\frac {h_{0}}{h}}=\left(1+{\frac {16*F*t*h_{0}^{2}}{3*\pi *\eta *R^{4}}}\right)^{1/2}} R {\displaystyle R} is the radius of the circular plate. [ 3 ] For rectangular plate squeeze flow: h 0 h = ( 1 + F ∗ t ∗ h 0 2 2 ∗ μ ∗ W ∗ L 3 ) 1 / 2 {\displaystyle {\frac {h_{0}}{h}}=\left(1+{\frac {F*t*h_{0}^{2}}{2*\mu *W*L^{3}}}\right)^{1/2}} These calculations assume a melt layer that has a length much larger than the sample width and thickness. 
[ 3 ] Simplifying calculations for Newtonian fluids allows for basic analysis of squeeze flow, but many polymers can exhibit properties of non-Newtonian fluids , such as viscoelastic characteristics, under deformation . The power law fluid model is sufficient to describe behaviors above the melting temperature for semicrystalline thermoplastics or the glass transition temperature for amorphous thermoplastics, and the Bingham fluid model provides calculations based on variations in yield stress calculations. [ 3 ] [ 4 ] For squeeze flow in a power law fluid : h 0 h = ( 1 + t ∗ ( 2 n + 3 4 n + 2 ) ( ( 4 ∗ h 0 ∗ L 0 ) n + 1 ∗ F ∗ ( n + 2 ) ( 2 ∗ L 0 ) 2 n + 3 ∗ W ∗ m ) 1 / n ) n / 2 n + 3 {\displaystyle {\frac {h_{0}}{h}}=\left(1+t*({\frac {2n+3}{4n+2}})({\frac {(4*h_{0}*L_{0})^{n+1}*F*(n+2)}{(2*L_{0})^{2n+3}*W*m}})^{1/n}\right)^{n/2n+3}} Where m {\displaystyle m} (or K {\displaystyle K} ) is the flow consistency index and n {\displaystyle n} is the dimensionless flow behavior index . [ 3 ] m = m 0 ∗ e x p ( − E a R ∗ T ) {\displaystyle m=m_{0}*exp\left({\frac {-E_{a}}{R*T}}\right)} Where m {\displaystyle m} is the flow consistency index, m 0 {\displaystyle m_{0}} is the initial flow consistency index , E a {\displaystyle E_{a}} is the activation energy , R {\displaystyle R} is the universal gas constant , and T {\displaystyle T} is the absolute temperature . [ 3 ] During experimentation to determine the accuracy of the power law fluid model, observations showed that modeling slow squeeze flow generated inaccurate power law constants ( m {\displaystyle m} and n {\displaystyle n} ) using a standard viscometer , and fast squeeze flow demonstrated that polymers may exhibit better lubrication than current constitutive models will predict. [ 5 ] The current empirical model for power law fluids is relatively accurate for modeling inelastic flows, but certain kinematic flow assumptions and incomplete understanding of polymeric lubrication properties tend to provide inaccurate modeling of power law fluids. [ 5 ] Bingham fluids exhibit uncommon characteristics during squeeze flow. While undergoing compression, Bingham fluids should fail to move and act as a solid until achieving a yield stress; however, as the parallel plates move closer together, the fluid shows some radial movement. One study proposes a “biviscosity” model where the Bingham fluid retains some unyielded regions that maintain solid-like properties, while other regions yield and allow for some compression and outward movement. [ 4 ] τ = { η 2 ∗ d u d y + τ 1 , if τ ≥ τ 1 η 1 ∗ d u d y , if τ < τ 1 {\displaystyle \tau ={\begin{cases}\eta _{2}*{du \over dy}+\tau _{1},&{\text{if }}\tau \geq \tau _{1}\\\eta _{1}*{du \over dy},&{\text{if }}\tau <\tau _{1}\end{cases}}} Where η 2 {\displaystyle \eta _{2}} is the known viscosity of the Bingham fluid, η 1 {\displaystyle \eta _{1}} is the "paradoxical" viscosity of the solid-like state, and τ 1 {\displaystyle \tau _{1}} is the biviscosity region stress . [ 4 ] To determine this new stress: τ 0 = τ 1 ( 1 − ϵ ) {\displaystyle \tau _{0}=\tau _{1}(1-\epsilon )} Where τ 0 {\displaystyle \tau _{0}} is the yield stress and ϵ = η 2 η 1 {\displaystyle \epsilon ={\frac {\eta _{2}}{\eta _{1}}}} is the dimensionless viscosity ratio . If ϵ = 1 {\displaystyle \epsilon =1} , the fluid exhibits Newtonian behavior; as ϵ → 0 {\displaystyle \epsilon \rightarrow 0} , the Bingham model applies. [ 4 ] Squeeze flow application is prevalent in several science and engineering fields. 
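The height-ratio formulas quoted above lend themselves to a direct numerical sketch. The functions below implement the Newtonian circular-plate and single-asperity expressions and the power-law expression exactly as written; the material and loading values are arbitrary assumptions chosen only to exercise the formulas, and the n = 1 case of the power-law model is used as a consistency check against the Newtonian single-asperity result.

```python
import math

# Symbols follow the article: F = squeezing force, t = time, eta = viscosity,
# R = plate radius, W = plate width, 2*h0 = initial droplet height,
# 2*L0 = initial droplet length, m and n = power-law constants.

def newtonian_circular(F, t, h0, eta, R):
    """h0/h for circular-plate squeeze flow (formula quoted in the article)."""
    return (1 + 16*F*t*h0**2 / (3*math.pi*eta*R**4)) ** 0.5

def newtonian_single_asperity(F, t, h0, eta, W, L0):
    """h0/h for single-asperity squeeze flow (formula quoted in the article)."""
    return (1 + 5*F*t*h0**2 / (4*eta*W*L0**3)) ** 0.2

def power_law(F, t, h0, W, L0, m, n):
    """h0/h for a power-law fluid (formula quoted in the article)."""
    inner = ((4*h0*L0)**(n+1) * F * (n+2)) / ((2*L0)**(2*n+3) * W * m)
    return (1 + t * (2*n+3)/(4*n+2) * inner**(1.0/n)) ** (n/(2*n+3))

# Illustrative values only: F in N, t in s, lengths in m, eta (= m) in Pa*s.
F, t, h0, eta = 100.0, 10.0, 1e-3, 1e3
print("circular plate   h0/h =", newtonian_circular(F, t, h0, eta, R=0.05))
print("single asperity  h0/h =", newtonian_single_asperity(F, t, h0, eta, W=0.05, L0=0.01))
# With n = 1 and m = eta, the power-law result collapses to the Newtonian
# single-asperity value, a useful consistency check on the implementation.
print("power law (n=1)  h0/h =", power_law(F, t, h0, W=0.05, L0=0.01, m=eta, n=1.0))
```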
Modeling and experimentation assist with understanding the complexities of squeeze flow during processes such as rheological testing, hot plate welding , and composite material joining. Squeeze flow rheometry allows for evaluation of polymers under wide ranges of temperatures, shear rates, and flow indexes. Parallel plate plastometers provide analysis for high viscosity materials such as rubber and glass, cure times for epoxy resins, and fiber-filled suspension flows. [ 6 ] While viscometers provide useful results for squeeze flow measurements, testing conditions such as applied rotation rates, material composition, and fluid flow behaviors under shear may require the use of rheometers or other novel setups to obtain accurate data. [ 5 ] During conventional hot plate welding, a successful joining phase depends on proper maintenance of squeeze flow to ensure that pressure and temperature create an ideal weld. Excessive pressure causes squeeze out of valuable material and weakens the bond due to fiber realignment in the melt layer, [ 7 ] while failure to allow cooling to room temperature creates weak, brittle welds that crack or break completely during use. [ 3 ] Prevalent in the aerospace and automotive industries, composites serve as expensive, yet mechanically strong, materials in the construction of several types of aircraft and vehicles. While aircraft parts are typically composed of thermosetting polymers , thermoplastics may become an analog to permit increased manufacturing of these stronger materials through their melting abilities and relatively inexpensive raw materials. Characterization and testing of thermoplastic composites experiencing squeeze flow allow for study of fiber orientations within the melt and final products to determine weld strength. [ 7 ] Fiber strand length and size show significant effects on material strength, [ 8 ] and squeeze flow causes fibers to orient along the load direction while being perpendicular to the joining direction to achieve the same final properties as thermosetting composites. [ 7 ]
https://en.wikipedia.org/wiki/Squeeze_flow
Squeeze job , [ 1 ] or squeeze cementing , is a term often used in the oilfield to describe the process of injecting cement slurry into a zone, generally for pressure-isolation purposes. [ 2 ] The term probably originated from the concept that enough water is "squeezed" out of the slurry to render it unflowable, so the portion that has actually entered the zone will stay in place when the squeeze pressure is released. After surface indications (e.g., pressure reaching a predetermined maximum) that a squeeze has been attained, any still-pumpable cement slurry remaining in the drill pipe or tubing ideally can be reverse circulated out before it sets. Usually the zone to be squeezed is isolated from above with a packer (and possibly from below with a bridge plug), but sometimes the squeezing pressure is applied to the entire casing string in what is known as a bradenhead squeeze [ 3 ] (named for an old manufacturer of casing heads ). Even if a drilling rig is on location, pumping operations usually are done by a service company's cementing unit that can easily mix small batches of cement slurry, measure displacement volume accurately to spot the slurry on bottom, then pump at very low rates and high pressures during the squeeze itself, and finally measure volumes accurately again when reversing out any excess slurry. A squeeze manifold is a compact arrangement of valves and pressure gauges that allows monitoring of the drill pipe and casing pressures throughout the job, and facilitates quick switching of the pumping pressure to either side while the fluid returning from the other side of the well is directed to the mud pit or a disposal pit or tank. The generic term "squeeze" also can apply to injection of generally small volumes of other liquids (e.g., treating fluids) into a zone under pressure. Bullhead squeeze (or just plain bullheading ) refers to pumping kill-weight mud down the casing beneath closed blowout preventers in a kick-control situation when it is not feasible to circulate it in from the bottom.
https://en.wikipedia.org/wiki/Squeeze_job
In linear algebra , a squeeze mapping , also called a squeeze transformation , is a type of linear map that preserves the Euclidean area of regions in the Cartesian plane , but is not a rotation or shear mapping . For a fixed positive real number a , the mapping ( x , y ) ↦ ( a x , y / a ) is the squeeze mapping with parameter a . Since each curve xy = constant is a hyperbola , if u = ax and v = y / a , then uv = xy and the points of the image of the squeeze mapping are on the same hyperbola as ( x , y ) is. For this reason it is natural to think of the squeeze mapping as a hyperbolic rotation , as did Émile Borel in 1914, [ 1 ] by analogy with circular rotations , which preserve circles. The squeeze mapping sets the stage for development of the concept of logarithms. The problem of finding the area bounded by a hyperbola (such as xy = 1) is one of quadrature . The solution, found by Grégoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647, required the natural logarithm function, a new concept. Some insight into logarithms comes through hyperbolic sectors that are permuted by squeeze mappings while preserving their area. The area of a hyperbolic sector is taken as a measure of a hyperbolic angle associated with the sector. The hyperbolic angle concept is quite independent of the ordinary circular angle , but shares a property of invariance with it: whereas circular angle is invariant under rotation, hyperbolic angle is invariant under squeeze mapping. Both circular and hyperbolic angle generate invariant measures but with respect to different transformation groups. The hyperbolic functions , which take hyperbolic angle as argument, perform the role that circular functions play with the circular angle argument. [ 2 ] In 1688, long before abstract group theory , the squeeze mapping was described by Euclid Speidell in the terms of the day: "From a Square and an infinite company of Oblongs on a Superficies, each Equal to that square, how a curve is begotten which shall have the same properties or affections of any Hyperbola inscribed within a Right Angled Cone." [ 3 ] If r and s are positive real numbers, the composition of their squeeze mappings is the squeeze mapping of their product. Therefore, the collection of squeeze mappings forms a one-parameter group isomorphic to the multiplicative group of positive real numbers . An additive view of this group arises from consideration of hyperbolic sectors and their hyperbolic angles. From the point of view of the classical groups , the group of squeeze mappings is SO + (1,1) , the identity component of the indefinite orthogonal group of 2×2 real matrices preserving the quadratic form u 2 − v 2 . This is equivalent to preserving the form xy via the change of basis x = u + v , y = u − v , and corresponds geometrically to preserving hyperbolae. The perspective of the group of squeeze mappings as hyperbolic rotation is analogous to interpreting the group SO(2) (the connected component of the definite orthogonal group ) preserving quadratic form x 2 + y 2 as being circular rotations . Note that the " SO + " notation corresponds to the fact that the reflections are not allowed, though they preserve the form (in terms of x and y these are x ↦ y , y ↦ x and x ↦ − x , y ↦ − y ) ; the additional " + " in the hyperbolic case (as compared with the circular case) is necessary to specify the identity component because the group O(1,1) has 4 connected components , while the group O(2) has 2 components: SO(1,1) has 2 components, while SO(2) only has 1.
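The properties stated above (unit determinant, hence area preservation; invariance of each hyperbola xy = constant; and the one-parameter group law) can be verified numerically with a few lines. The matrix form and the sample points below are illustrative choices, not anything taken from the literature.

```python
import numpy as np

def squeeze(a):
    # The squeeze mapping (x, y) -> (a*x, y/a) as a 2x2 matrix.
    return np.array([[a, 0.0],
                     [0.0, 1.0/a]])

a, b = 2.5, 0.8
assert np.isclose(np.linalg.det(squeeze(a)), 1.0)          # area is preserved

p = np.array([3.0, 4.0])                                    # lies on xy = 12
q = squeeze(a) @ p
assert np.isclose(q[0]*q[1], p[0]*p[1])                     # image on the same hyperbola

assert np.allclose(squeeze(a) @ squeeze(b), squeeze(a*b))   # one-parameter group law
print("det =", np.linalg.det(squeeze(a)), " xy before/after:", p[0]*p[1], q[0]*q[1])
```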
The fact that the squeeze transforms preserve area and orientation corresponds to the inclusion of subgroups SO ⊂ SL – in this case SO(1,1) ⊂ SL(2) – of the subgroup of hyperbolic rotations in the special linear group of transforms preserving area and orientation (a volume form ). In the language of Möbius transformations , the squeeze transformations are the hyperbolic elements in the classification of elements . A geometric transformation is called conformal when it preserves angles. Hyperbolic angle is defined using area under y = 1/ x . Since squeeze mappings preserve areas of transformed regions such as hyperbolic sectors , the angle measure of sectors is preserved. Thus squeeze mappings are conformal in the sense of preserving hyperbolic angle. Here some applications are summarized with historic references. Spacetime geometry is conventionally developed as follows: Select (0,0) for a "here and now" in a spacetime. Light radiant left and right through this central event tracks two lines in the spacetime, lines that can be used to give coordinates to events away from (0,0). Trajectories of lesser velocity track closer to the original timeline (0, t ). Any such velocity can be viewed as a zero velocity under a squeeze mapping called a Lorentz boost . This insight follows from a study of split-complex number multiplications and the diagonal basis which corresponds to the pair of light lines. Formally, a squeeze preserves the hyperbolic metric expressed in the form xy ; in a different coordinate system. This application in the theory of relativity was noted in 1912 by Wilson and Lewis, [ 4 ] by Werner Greub, [ 5 ] and by Louis Kauffman . [ 6 ] Furthermore, the squeeze mapping form of Lorentz transformations was used by Gustav Herglotz (1909/10) [ 7 ] while discussing Born rigidity , and was popularized by Wolfgang Rindler in his textbook on relativity, who used it in his demonstration of their characteristic property. [ 8 ] The term squeeze transformation was used in this context in an article connecting the Lorentz group with Jones calculus in optics. [ 9 ] In fluid dynamics one of the fundamental motions of an incompressible flow involves bifurcation of a flow running up against an immovable wall. Representing the wall by the axis y = 0 and taking the parameter r = exp( t ) where t is time, then the squeeze mapping with parameter r applied to an initial fluid state produces a flow with bifurcation left and right of the axis x = 0. The same model gives fluid convergence when time is run backward. Indeed, the area of any hyperbolic sector is invariant under squeezing. For another approach to a flow with hyperbolic streamlines , see Potential flow § Power laws with n = 2 . In 1989 Ottino [ 10 ] described the "linear isochoric two-dimensional flow" as where K lies in the interval [−1, 1]. The streamlines follow the curves so negative K corresponds to an ellipse and positive K to a hyperbola, with the rectangular case of the squeeze mapping corresponding to K = 1. Stocker and Hosoi [ 11 ] described their approach to corner flow as follows: Stocker and Hosoi then recall Moffatt's [ 12 ] consideration of "flow in a corner between rigid boundaries, induced by an arbitrary disturbance at a large distance." 
According to Stocker and Hosoi, The area-preserving property of squeeze mapping has an application in setting the foundation of the transcendental functions natural logarithm and its inverse the exponential function : Definition: Sector( a,b ) is the hyperbolic sector obtained with central rays to ( a , 1/ a ) and ( b , 1/ b ). Lemma: If bc = ad , then there is a squeeze mapping that moves the sector( a,b ) to sector( c,d ). Proof: Take parameter r = c / a so that ( u,v ) = ( rx , y / r ) takes ( a , 1/ a ) to ( c , 1/ c ) and ( b , 1/ b ) to ( d , 1/ d ). Theorem ( Gregoire de Saint-Vincent 1647) If bc = ad , then the quadrature of the hyperbola xy = 1 against the asymptote has equal areas between a and b compared to between c and d . Proof: An argument adding and subtracting triangles of area 1 ⁄ 2 , one triangle being {(0,0), (0,1), (1,1)}, shows the hyperbolic sector area is equal to the area along the asymptote. The theorem then follows from the lemma. Theorem ( Alphonse Antonio de Sarasa 1649) As area measured against the asymptote increases in arithmetic progression, the projections upon the asymptote increase in geometric sequence. Thus the areas form logarithms of the asymptote index. For instance, for a standard position angle which runs from (1, 1) to ( x , 1/ x ), one may ask "When is the hyperbolic angle equal to one?" The answer is the transcendental number x = e . A squeeze with r = e moves the unit angle to one between ( e , 1/ e ) and ( ee , 1/ ee ) which subtends a sector also of area one. The geometric progression corresponds to the asymptotic index achieved with each sum of areas which is a proto-typical arithmetic progression A + nd where A = 0 and d = 1 . Following Pierre Ossian Bonnet 's (1867) investigations on surfaces of constant curvatures, Sophus Lie (1879) found a way to derive new pseudospherical surfaces from a known one. Such surfaces satisfy the Sine-Gordon equation : where ( s , σ ) {\displaystyle (s,\sigma )} are asymptotic coordinates of two principal tangent curves and Θ {\displaystyle \Theta } their respective angle. Lie showed that if Θ = f ( s , σ ) {\displaystyle \Theta =f(s,\sigma )} is a solution to the Sine-Gordon equation, then the following squeeze mapping (now known as Lie transform [ 13 ] ) indicates other solutions of that equation: [ 14 ] Lie (1883) noticed its relation to two other transformations of pseudospherical surfaces: [ 15 ] The Bäcklund transform (introduced by Albert Victor Bäcklund in 1883) can be seen as the combination of a Lie transform with a Bianchi transform (introduced by Luigi Bianchi in 1879.) Such transformations of pseudospherical surfaces were discussed in detail in the lectures on differential geometry by Gaston Darboux (1894), [ 16 ] Luigi Bianchi (1894), [ 17 ] or Luther Pfahler Eisenhart (1909). [ 18 ] It is known that the Lie transforms (or squeeze mappings) correspond to Lorentz boosts in terms of light-cone coordinates , as pointed out by Terng and Uhlenbeck (2000): [ 13 ] This can be represented as follows: where k corresponds to the Doppler factor in Bondi k -calculus , η is the rapidity .
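The correspondence between squeeze mappings and Lorentz boosts mentioned above can be checked directly: in light-cone coordinates u = t + x and v = t − x (units with c = 1), a boost multiplies one coordinate by the Doppler factor and divides the other by it, leaving the product uv = t² − x² unchanged. The identification k = e^η with the rapidity η is the standard one; the sign and direction conventions in this sketch are arbitrary choices for illustration.

```python
import math

def boost(t, x, beta):
    # Standard Lorentz boost in the x-direction, c = 1.
    g = 1.0 / math.sqrt(1.0 - beta*beta)
    return g*(t - beta*x), g*(x - beta*t)

beta = 0.6
eta = math.atanh(beta)          # rapidity
k = math.exp(eta)               # Doppler factor, also sqrt((1+beta)/(1-beta))

t, x = 3.0, 1.2
tp, xp = boost(t, x, beta)
u, v   = t + x,  t - x          # light-cone coordinates
up, vp = tp + xp, tp - xp

print(up, u/k)                  # u -> u/k
print(vp, v*k)                  # v -> k*v   (a squeeze mapping with parameter k)
print(t*t - x*x, tp*tp - xp*xp) # the interval uv = t^2 - x^2 is invariant
```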
https://en.wikipedia.org/wiki/Squeeze_mapping
In calculus , the squeeze theorem (also known as the sandwich theorem , among other names [ a ] ) is a theorem regarding the limit of a function that is bounded between two other functions. The squeeze theorem is used in calculus and mathematical analysis , typically to confirm the limit of a function via comparison with two other functions whose limits are known. It was first used geometrically by the mathematicians Archimedes and Eudoxus in an effort to compute π , and was formulated in modern terms by Carl Friedrich Gauss . The squeeze theorem is formally stated as follows. [ 1 ] Theorem — Let I be an interval containing the point a . Let g , f , and h be functions defined on I , except possibly at a itself. Suppose that for every x in I not equal to a , we have g ( x ) ≤ f ( x ) ≤ h ( x ) {\displaystyle g(x)\leq f(x)\leq h(x)} and also suppose that lim x → a g ( x ) = lim x → a h ( x ) = L . {\displaystyle \lim _{x\to a}g(x)=\lim _{x\to a}h(x)=L.} Then lim x → a f ( x ) = L . {\displaystyle \lim _{x\to a}f(x)=L.} This theorem is also valid for sequences. Let ( a n ), ( c n ) be two sequences converging to ℓ , and ( b n ) a sequence. If ∀ n ≥ N , N ∈ N {\displaystyle \forall n\geq N,N\in \mathbb {N} } we have a n ≤ b n ≤ c n , then ( b n ) also converges to ℓ . According to the above hypotheses we have, taking the limit inferior and superior: L = lim x → a g ( x ) ≤ lim inf x → a f ( x ) ≤ lim sup x → a f ( x ) ≤ lim x → a h ( x ) = L , {\displaystyle L=\lim _{x\to a}g(x)\leq \liminf _{x\to a}f(x)\leq \limsup _{x\to a}f(x)\leq \lim _{x\to a}h(x)=L,} so all the inequalities are indeed equalities, and the thesis immediately follows. A direct proof, using the ( ε , δ ) -definition of limit, would be to prove that for all real ε > 0 there exists a real δ > 0 such that for all x with | x − a | < δ , {\displaystyle |x-a|<\delta ,} we have | f ( x ) − L | < ε . {\displaystyle |f(x)-L|<\varepsilon .} Symbolically, ∀ ε > 0 , ∃ δ > 0 : ∀ x , ( | x − a | < δ ⇒ | f ( x ) − L | < ε ) . {\displaystyle \forall \varepsilon >0,\exists \delta >0:\forall x,(|x-a|<\delta \ \Rightarrow |f(x)-L|<\varepsilon ).} As lim x → a g ( x ) = L {\displaystyle \lim _{x\to a}g(x)=L} means that for every ε > 0 there exists a δ 1 > 0 such that | x − a | < δ 1 implies | g ( x ) − L | < ε (1), and lim x → a h ( x ) = L {\displaystyle \lim _{x\to a}h(x)=L} means that for every ε > 0 there exists a δ 2 > 0 such that | x − a | < δ 2 implies | h ( x ) − L | < ε (2), while for every x in I not equal to a we have g ( x ) ≤ f ( x ) ≤ h ( x ) {\displaystyle g(x)\leq f(x)\leq h(x)} and hence g ( x ) − L ≤ f ( x ) − L ≤ h ( x ) − L , {\displaystyle g(x)-L\leq f(x)-L\leq h(x)-L} we can choose δ := min { δ 1 , δ 2 } {\displaystyle \delta :=\min \left\{\delta _{1},\delta _{2}\right\}} . Then, if | x − a | < δ {\displaystyle |x-a|<\delta } , combining ( 1 ) and ( 2 ), we have − ε < g ( x ) − L ≤ f ( x ) − L ≤ h ( x ) − L < ε , {\displaystyle -\varepsilon <g(x)-L\leq f(x)-L\leq h(x)-L\ <\varepsilon ,} so − ε < f ( x ) − L < ε , {\displaystyle -\varepsilon <f(x)-L<\varepsilon ,} which completes the proof. Q.E.D. The proof for sequences is very similar, using the ε {\displaystyle \varepsilon } -definition of the limit of a sequence. The limit lim x → 0 x 2 sin ⁡ ( 1 x ) {\displaystyle \lim _{x\to 0}x^{2}\sin \left({\tfrac {1}{x}}\right)} cannot be determined through the limit law lim x → a ( f ( x ) ⋅ g ( x ) ) = lim x → a f ( x ) ⋅ lim x → a g ( x ) , {\displaystyle \lim _{x\to a}(f(x)\cdot g(x))=\lim _{x\to a}f(x)\cdot \lim _{x\to a}g(x),} because lim x → 0 sin ⁡ ( 1 x ) {\displaystyle \lim _{x\to 0}\sin \left({\tfrac {1}{x}}\right)} does not exist. However, by the definition of the sine function , − 1 ≤ sin ⁡ ( 1 x ) ≤ 1. 
{\displaystyle -1\leq \sin \left({\tfrac {1}{x}}\right)\leq 1.} It follows that − x 2 ≤ x 2 sin ⁡ ( 1 x ) ≤ x 2 {\displaystyle -x^{2}\leq x^{2}\sin \left({\tfrac {1}{x}}\right)\leq x^{2}} Since lim x → 0 − x 2 = lim x → 0 x 2 = 0 {\displaystyle \lim _{x\to 0}-x^{2}=\lim _{x\to 0}x^{2}=0} , by the squeeze theorem, lim x → 0 x 2 sin ⁡ ( 1 x ) {\displaystyle \lim _{x\to 0}x^{2}\sin \left({\tfrac {1}{x}}\right)} must also be 0. Probably the best-known examples of finding a limit by squeezing are the proofs of the equalities lim x → 0 sin ⁡ x x = 1 , lim x → 0 1 − cos ⁡ x x = 0. {\displaystyle {\begin{aligned}&\lim _{x\to 0}{\frac {\sin x}{x}}=1,\\[10pt]&\lim _{x\to 0}{\frac {1-\cos x}{x}}=0.\end{aligned}}} The first limit follows by means of the squeeze theorem from the fact that [ 2 ] cos ⁡ x ≤ sin ⁡ x x ≤ 1 {\displaystyle \cos x\leq {\frac {\sin x}{x}}\leq 1} for x close enough to 0. The correctness of which for positive x can be seen by simple geometric reasoning (see drawing) that can be extended to negative x as well. The second limit follows from the squeeze theorem and the fact that 0 ≤ 1 − cos ⁡ x x ≤ x {\displaystyle 0\leq {\frac {1-\cos x}{x}}\leq x} for x close enough to 0. This can be derived by replacing sin x in the earlier fact by 1 − cos 2 ⁡ x {\textstyle {\sqrt {1-\cos ^{2}x}}} and squaring the resulting inequality. These two limits are used in proofs of the fact that the derivative of the sine function is the cosine function. That fact is relied on in other proofs of derivatives of trigonometric functions. It is possible to show that d d θ tan ⁡ θ = sec 2 ⁡ θ {\displaystyle {\frac {d}{d\theta }}\tan \theta =\sec ^{2}\theta } by squeezing, as follows. In the illustration at right, the area of the smaller of the two shaded sectors of the circle is sec 2 ⁡ θ Δ θ 2 , {\displaystyle {\frac {\sec ^{2}\theta \,\Delta \theta }{2}},} since the radius is sec θ and the arc on the unit circle has length Δ θ . Similarly, the area of the larger of the two shaded sectors is sec 2 ⁡ ( θ + Δ θ ) Δ θ 2 . {\displaystyle {\frac {\sec ^{2}(\theta +\Delta \theta )\,\Delta \theta }{2}}.} What is squeezed between them is the triangle whose base is the vertical segment whose endpoints are the two dots. The length of the base of the triangle is tan( θ + Δ θ ) − tan θ , and the height is 1. The area of the triangle is therefore tan ⁡ ( θ + Δ θ ) − tan ⁡ θ 2 . {\displaystyle {\frac {\tan(\theta +\Delta \theta )-\tan \theta }{2}}.} From the inequalities sec 2 ⁡ θ Δ θ 2 ≤ tan ⁡ ( θ + Δ θ ) − tan ⁡ θ 2 ≤ sec 2 ⁡ ( θ + Δ θ ) Δ θ 2 {\displaystyle {\frac {\sec ^{2}\theta \,\Delta \theta }{2}}\leq {\frac {\tan(\theta +\Delta \theta )-\tan \theta }{2}}\leq {\frac {\sec ^{2}(\theta +\Delta \theta )\,\Delta \theta }{2}}} we deduce that sec 2 ⁡ θ ≤ tan ⁡ ( θ + Δ θ ) − tan ⁡ θ Δ θ ≤ sec 2 ⁡ ( θ + Δ θ ) , {\displaystyle \sec ^{2}\theta \leq {\frac {\tan(\theta +\Delta \theta )-\tan \theta }{\Delta \theta }}\leq \sec ^{2}(\theta +\Delta \theta ),} provided Δ θ > 0 , and the inequalities are reversed if Δ θ < 0 . Since the first and third expressions approach sec 2 θ as Δ θ → 0 , and the middle expression approaches d d θ tan ⁡ θ , {\displaystyle {\tfrac {d}{d\theta }}\tan \theta ,} the desired result follows. The squeeze theorem can still be used in multivariable calculus but the lower (and upper functions) must be below (and above) the target function not just along a path but around the entire neighborhood of the point of interest and it only works if the function really does have a limit there. 
It can, therefore, be used to prove that a function has a limit at a point, but it can never be used to prove that a function does not have a limit at a point. [ 3 ] lim ( x , y ) → ( 0 , 0 ) x 2 y x 2 + y 2 {\displaystyle \lim _{(x,y)\to (0,0)}{\frac {x^{2}y}{x^{2}+y^{2}}}} cannot be found by taking any number of limits along paths that pass through the point, but since 0 ≤ x 2 x 2 + y 2 ≤ 1 − | y | ≤ y ≤ | y | ⟹ − | y | ≤ x 2 y x 2 + y 2 ≤ | y | lim ( x , y ) → ( 0 , 0 ) − | y | = 0 lim ( x , y ) → ( 0 , 0 ) | y | = 0 ⟹ 0 ≤ lim ( x , y ) → ( 0 , 0 ) x 2 y x 2 + y 2 ≤ 0 {\displaystyle {\begin{array}{rccccc}&0&\leq &\displaystyle {\frac {x^{2}}{x^{2}+y^{2}}}&\leq &1\\[4pt]-|y|\leq y\leq |y|\implies &-|y|&\leq &\displaystyle {\frac {x^{2}y}{x^{2}+y^{2}}}&\leq &|y|\\[4pt]{{\displaystyle \lim _{(x,y)\to (0,0)}-|y|=0} \atop {\displaystyle \lim _{(x,y)\to (0,0)}\ \ \ |y|=0}}\implies &0&\leq &\displaystyle \lim _{(x,y)\to (0,0)}{\frac {x^{2}y}{x^{2}+y^{2}}}&\leq &0\end{array}}} therefore, by the squeeze theorem, lim ( x , y ) → ( 0 , 0 ) x 2 y x 2 + y 2 = 0. {\displaystyle \lim _{(x,y)\to (0,0)}{\frac {x^{2}y}{x^{2}+y^{2}}}=0.}
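A short numerical check of the two squeezes worked out above: x² sin(1/x) is trapped between −x² and x², and x²y/(x² + y²) is trapped between −|y| and |y|. The sample points are arbitrary; the point of the sketch is only that the bounding functions force both quantities toward 0.

```python
import math

# x^2*sin(1/x) is squeezed between -x^2 and x^2 near 0.
for x in (0.1, 0.01, 0.001, 1e-6):
    f = x*x*math.sin(1.0/x)
    assert -x*x <= f <= x*x
    print(f"x={x:g}:  -x^2={-x*x:.3e}  f={f:.3e}  x^2={x*x:.3e}")

# x^2*y/(x^2 + y^2) is squeezed between -|y| and |y| near (0, 0).
for (x, y) in ((0.1, 0.05), (0.01, -0.02), (1e-4, 1e-5)):
    g = x*x*y / (x*x + y*y)
    assert -abs(y) <= g <= abs(y)
    print(f"(x,y)=({x:g},{y:g}):  |g|={abs(g):.3e} <= |y|={abs(y):.3e}")

# Both squeezed quantities shrink to 0 together with their bounding functions,
# which is exactly what the squeeze theorem asserts.
```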
https://en.wikipedia.org/wiki/Squeeze_theorem
In telecommunications , squelch is a circuit function that acts to suppress the audio (or video ) output of a receiver in the absence of a strong input signal . [ 1 ] Essentially, squelch is a specialized type of noise gate designed to suppress weak signals. Squelch is used in two-way radios and VHF/UHF radio scanners to eliminate the sound of noise when the radio is not receiving a desired transmission. In some designs, the squelch threshold is preset. For example, television squelch settings are usually preset. Receivers in base stations , or repeaters at remote mountain top sites, are usually not adjustable remotely from the control point. In two-way radios (also known as radiotelephones ), the received signal level required to unsquelch (un-mute) the receiver may be fixed or adjustable with a knob or a sequence of button presses. Typically the operator will adjust the control until noise is heard, and then adjust in the opposite direction until the noise is squelched. At this point, a weak signal will unsquelch the receiver and be heard by the operator. Further adjustment will increase the level of signal required to unsquelch the receiver. Some applications have the receiver tied to other equipment that uses the audio muting control voltage, as a "signal present" indication; for example, in a repeater the act of the receiver unmuting will switch on the transmitter. Squelch can be opened (turned off), which allows all signals to be heard, including radio frequency noise on the receiving frequency. This can be useful when trying to hear distant or otherwise weak signals, for example in DXing . Carrier squelch is the most simple variant of all. It functions strictly on the signal strength , such as when a television mutes the audio or blanks the video on "empty" channels , or when a walkie-talkie mutes the audio when no signal is present. Carrier squelch uses receiver Automatic gain control (AGC) to determine the squelch threshold. Single-sideband modulation (SSB) typically uses carrier squelch . Noise squelch is more reliable than carrier squelch. A noise squelch circuit is noise-operated and can be used in AM or FM receivers, and relies on the receiver quieting in the presence of an AM or FM carrier. To minimize the effects of voice audio on squelch operation, the audio from the receiver's detector is passed through a high-pass filter , typically passing 4,000 Hz (4kHz) and above, leaving only high frequency noise. The squelch control adjusts the gain of an amplifier which varies the level of the noise coming out of the filter. This noise is rectified , producing a DC voltage when noise is present. The presence of continuous noise on an idle channel creates a DC voltage which turns the receiver audio off. When a signal with little or no noise is received, the noise-derived voltage is reduced and the receiver audio is unmuted. Noise squelch can be defeated by intermodulation present in the high-pass band. For this reason, many receivers with noise squelch will also use a carrier squelch set at a higher threshold than the noise squelch . Tone squelch, or another form of selective calling, is sometimes used to solve interference problems. Where more than one user is on the same channel ( co-channel users), selective calling addresses a subset of all receivers. Instead of turning on the receiver audio for any signal, the audio turns on only in the presence of the correct selective calling code. This is akin to the use of a lock on a door. 
A carrier squelch is unlocked and will let any signal in. Selective calling locks out all signals except ones with the correct key to the lock (the correct code). In non-critical uses, selective calling can also be used to hide the presence of interfering signals such as receiver-produced intermodulation. Receivers with poor specifications—such as inexpensive police scanners or low-cost mobile radios—cannot reject the strong signals present in urban environments. The interference will still be present, and will still degrade system performance, but by using selective calling the user will not have to hear the noises produced by receiving the interference. Four different techniques are commonly used. Selective calling can be regarded as a form of in-band signaling . CTCSS (Continuous Tone-Coded Squelch System) continuously superimposes any one of about 50 low-pitch audio tones on the transmitted signal, ranging from 67 to 254 Hz . The original tone set was 10, then 32 tones, and has been expanded even further over the years. CTCSS is often called PL tone (for Private Line , a trademark of Motorola ), or simply tone squelch . General Electric 's implementation of CTCSS is called Channel Guard (or CG ). RCA Corporation used the name Quiet Channel , or QC . There are many other company-specific names used by radio vendors to describe compatible options. Any CTCSS system that has compatible tones is interchangeable. Old and new radios with CTCSS and radios across manufacturers are compatible. [ citation needed ] For those PMR446 radios with 38 codes, the codes 0 to 38 are CTCSS Tones: Selcall (Selective Calling) transmits a burst of up to five in-band audio tones at the beginning of each transmission. This feature (sometimes called "tone burst") is common in European systems. Early systems used one tone (commonly called "Tone Burst"). Several tones were used, the most common being 1,750 Hz, which is still used in European amateur radio repeater systems. The addressing scheme provided by one tone was not enough, so a two-tone system was devised—one tone followed by a second tone (sometimes called a "1+1" system). Motorola later marketed a system called "Quik-Call" that used two simultaneous tones followed by two more simultaneous tones (sometimes called a "2+2" system) that was heavily used by fire department dispatch systems in the US. Later selective call systems used paging system technology that made use of a burst of five sequential tones. DCS (Digital-Coded Squelch), generically known as CDCSS (Continuous Digital-Coded Squelch System), was designed as the digital replacement for CTCSS. In the same way that a single CTCSS tone would be used on an entire group of radios, the same DCS code is used in a group of radios. DCS is also referred to as Digital Private Line (or DPL ), another trademark of Motorola, and likewise, General Electric's implementation of DCS is referred to as Digital Channel Guard (or DCG ). Despite the fact that it is not a tone, DCS is also called DTCS (Digital Tone Code Squelch) by Icom , and other names by other manufacturers. Radios with DCS options are generally compatible, provided the radio's encoder-decoder will use the same code as radios in the existing system. DCS adds a 134.4 bit/s (sub-audible) bitstream to the transmitted audio. The code word is a 23-bit Golay (23,12) code which has the ability to detect and correct errors of 3 or fewer bits. The word consists of 12 data bits followed by 11 check bits. 
The last 3 data bits are a fixed '001', this leaves 9 code bits (512 possibilities) which are conventionally represented as a 3-digit octal number. Note that the first bit transmitted is the LSB, so the code is "backwards" from the transmitted bit order. Only 83 of the 512 possible codes are available, to prevent falsing due to alignment collisions. DCS codes are standardized by the Telecommunications Industry Association with the following 83 codes being found in their most recent standard, however, some systems use non-standard codes. [ 2 ] For those PMR446 radios with 121 codes, the codes 39 to 121 are DCS codes: [ 3 ] XTCSS is the newest signalling technique, and provides 99 codes with the added advantage of "silent operation". XTCSS-fitted radios are purposed to enjoy more privacy and flexibility of operation. XTCSS is implemented as a combination of CTCSS and in-band signalling. Squelch was invented first and is still in wide use in two-way radio. Squelch of any kind is used to indicate loss of signal, which is used to keep commercial and amateur radio repeaters from continually transmitting . Since a carrier squelch receiver cannot tell a valid carrier from a spurious signal (noise, etc.), CTCSS is often used as well, as it avoids false keyups. Use of CTCSS is especially helpful on congested frequencies or on frequency bands prone to skip and during band openings. Professional wireless microphones use squelch to avoid reproducing noise when the receiver does not receive enough signal from the microphone. Most professional models have adjustable squelch, usually set with a screwdriver adjustment or front-panel control on the receiver.
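Returning to the DCS word structure described above, the following sketch lays out the 23-bit word as stated: 9 code bits taken from a three-digit octal code, the fixed '001' data bits, and 11 Golay (23,12) check bits. The Golay parity computation itself is not reproduced here (the check bits are left as a zero placeholder), and real equipment differs in bit-ordering conventions, so this is an illustration of the field sizes only, not a working encoder.

```python
# Layout-only sketch of a 23-bit DCS word: 12 data bits (9 code bits plus the
# fixed '001') followed by 11 Golay(23,12) check bits.  Whether the fixed
# field reads '001' forwards or backwards over the air depends on the
# LSB-first transmission convention noted in the text and is not pinned down here.
def dcs_data_bits(octal_code: str) -> str:
    value = int(octal_code, 8)                 # e.g. "023" -> 19
    assert 0 <= value < 512                    # 9 code bits -> 512 possibilities
    return format(value, "09b") + "001"        # 12 data bits, fixed '001' at the end

def dcs_word(octal_code: str) -> str:
    data = dcs_data_bits(octal_code)
    check = "0" * 11                           # placeholder for the Golay check bits
    return data + check                        # 23 bits in total

word = dcs_word("023")
print(word, len(word))
```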
https://en.wikipedia.org/wiki/Squelch
Squelching is a biological phenomenon in which a strong transcriptional activator acts to inhibit the expression of another gene . [ 1 ] Squelching has been mostly studied in yeast, and most of the ideas regarding its mechanisms have come from research into modes of transcriptional control in yeast. [ 2 ] One important study of this topic was conducted using the Gal4-VP16 artificial transcription factor system, where it was shown that the activating complex formed by VP-16 was sequestering adapters required for transcription of other targets. [ 3 ] The primary cause of squelching is believed to be the interaction of activator molecules disrupting the biochemical pathways associated with related processes due to structural similarity between the activators and important substrates along that pathway. In particular, the activator binds to transcription factors along alternative biochemical pathways, inhibiting the ability of these transcription factors to bind to their true targets. As in the example above, sequestration of an intermediate in a metabolic pathway is a confounding variable in genetic studies because knowledge of the expected binding targets of the primary molecules involved does not help predict why unexpected behavior results.
https://en.wikipedia.org/wiki/Squelching
Squigonometry or p -trigonometry is a generalization of traditional trigonometry which replaces the circle and Euclidean distance function with the squircle (shape intermediate between a square and circle) and p -norm . While trigonometry deals with the relationships between angles and lengths in the plane using trigonometric functions defined relative to a unit circle , squigonometry focuses on analogous relationships and functions within the context of a unit squircle . The term squigonometry is a portmanteau of square or squircle and trigonometry . It was used by Derek Holton to refer to an analog of trigonometry using a square as a basic shape (instead of a circle) in his 1990 pamphlet Creating Problems . [ 1 ] In 2011 it was used by William Wood to refer to trigonometry with a squircle as its base shape in a recreational mathematics article in Mathematics Magazine . In 2016 Robert Poodiack extended Wood's work in another Mathematics Magazine article. Wood and Poodiack published a book about the topic in 2022. However, the idea of generalizing trigonometry to curves other than circles is centuries older. [ 2 ] The cosquine and squine functions, denoted as cq p ⁡ ( t ) {\displaystyle \operatorname {cq} _{p}(t)} and sq p ⁡ ( t ) , {\displaystyle \operatorname {sq} _{p}(t),} can be defined analogously to trigonometric functions on a unit circle , but instead using the coordinates of points on a unit squircle , described by the equation : where p {\displaystyle p} is a real number greater than or equal to 1. Here x {\displaystyle x} corresponds to cq p ⁡ ( t ) {\displaystyle \operatorname {cq} _{p}(t)} and y {\displaystyle y} corresponds to sq p ⁡ ( t ) {\displaystyle \operatorname {sq} _{p}(t)} Notably, when p = 2 {\displaystyle p=2} , the squigonometric functions coincide with the trigonometric functions. Similarly to how trigonometric functions are defined through differential equations, the cosquine and squine functions are also uniquely determined [ 3 ] by solving the coupled initial value problem [ 4 ] [ 5 ] Where x {\displaystyle x} corresponds to cq p ⁡ ( t ) {\displaystyle \operatorname {cq} _{p}(t)} and y {\displaystyle y} corresponds to sq p ⁡ ( t ) {\displaystyle \operatorname {sq} _{p}(t)} . [ 6 ] The definition of sine and cosine through integrals can be extended to define the squigonometric functions. Let 1 < p < ∞ {\displaystyle 1<p<\infty } and define a differentiable function F p : [ 0 , 1 ] → R {\displaystyle F_{p}:[0,1]\rightarrow {\mathbb {R} }} by: Since F p {\displaystyle F_{p}} is strictly increasing it is a one-to-one function on [ 0 , 1 ] {\displaystyle [0,1]} with range [ 0 , π p / 2 ] {\displaystyle [0,\pi _{p}/2]} , where π p {\displaystyle \pi _{p}} is defined as follows: Let sq p {\displaystyle \operatorname {sq} _{p}} be the inverse of F p {\displaystyle F_{p}} on [ 0 , π p / 2 ] {\displaystyle [0,\pi _{p}/2]} . This function can be extended to [ 0 , π p ] {\displaystyle [0,\pi _{p}]} by defining the following relationship: By this means s q p {\displaystyle sq_{p}} is differentiable in R {\displaystyle {\mathbb {R} }} and, corresponding to this, the function c q p {\displaystyle cq_{p}} is defined by: The tanquent, cotanquent, sequent and cosequent functions can be defined as follows: [ 7 ] [ 8 ] General versions of the inverse squine and cosquine can be derived from the initial value problem above. 
Let x = c q p ( y ) {\displaystyle x=cq_{p}(y)} ; by the inverse function rule , d x d y = − [ sq p ⁡ ( y ) ] p − 1 = ( 1 − x p ) ( p − 1 ) / p {\displaystyle {\frac {dx}{dy}}=-[\operatorname {sq} _{p}(y)]^{p-1}=(1-x^{p})^{(p-1)/p}} . Solving for y {\displaystyle y} gives the definition of the inverse cosquine: Similarly, the inverse squine is defined as: Other parameterizations of squircles give rise to alternate definitions of these functions. For example, Edmunds, Lang, and Gurka [ 9 ] define F ~ p ( x ) {\displaystyle {\tilde {F}}_{p}(x)} as: F ~ p ( x ) = ∫ 0 x ( 1 − t p ) − ( 1 / p ) d t {\displaystyle {\tilde {F}}_{p}(x)=\int _{0}^{x}(1-t^{p})^{-(1/p)}\,dt} . Since F p {\displaystyle F_{p}} is strictly increasing it has a =n inverse which, by analogy with the case p = 2 {\displaystyle p=2} , we denote by sin p {\displaystyle \sin _{p}} . This is defined on the interval [ 0 , π p / 2 ] {\displaystyle [0,\pi _{p}/2]} , where π ~ p {\displaystyle {\tilde {\pi }}_{p}} is defined as follows: π ~ p = 2 ∫ 0 1 ( 1 − t p ) − ( 1 / p ) d t {\displaystyle {\tilde {\pi }}_{p}=2\int _{0}^{1}(1-t^{p})^{-(1/p)}\,dt} . Because of this, we know that sin p {\displaystyle \sin _{p}} is strictly increasing on [ 0 , π ~ p / 2 ] {\displaystyle [0,{\tilde {\pi }}_{p}/2]} , sin p ⁡ ( 0 ) = 0 {\displaystyle \sin _{p}(0)=0} and sin p ⁡ ( π ~ p / 2 ) = 1 {\displaystyle \sin _{p}({\tilde {\pi }}_{p}/2)=1} . We extend sin p {\displaystyle \sin _{p}} to [ 0 , π ~ p ] {\displaystyle [0,{\tilde {\pi }}_{p}]} by defining: sin p ⁡ ( x ) = sin p ⁡ ( π ~ p − x ) {\displaystyle \sin _{p}(x)=\sin _{p}({\tilde {\pi }}_{p}-x)} for x ∈ [ π ~ p / 2 , π ~ p ] {\displaystyle x\in [{\tilde {\pi }}_{p}/2,{\tilde {\pi }}_{p}]} Similarly cos p ⁡ ( x ) = ( 1 − ( sin p ⁡ ( x ) ) p ) 1 p {\displaystyle \cos _{p}(x)=(1-(\sin _{p}(x))^{p})^{\frac {1}{p}}} . Thus cos p {\displaystyle \cos _{p}} is strictly decreasing on [ 0 , π ~ p / 2 ] {\displaystyle [0,{\tilde {\pi }}_{p}/2]} , cos p ⁡ ( 0 ) = 1 {\displaystyle \cos _{p}(0)=1} and cos p ⁡ ( π ~ 2 / 2 ) = 0 {\displaystyle \cos _{p}({\tilde {\pi }}_{2}/2)=0} . Also: | sin p ⁡ x | p + | cos p ⁡ x | p = 1 {\displaystyle |\sin _{p}x|^{p}+|\cos _{p}x|^{p}=1} . This is immediate if x ∈ [ 0 , π ~ / 2 ] {\displaystyle x\in [0,{\tilde {\pi }}/2]} , but it holds for all x ∈ R {\displaystyle x\in \mathbb {R} } in view of symmetry and periodicity. Squigonometric substitution can be used to solve indefinite integrals using a method akin to trigonometric substitution , such as integrals in the generic form [ 7 ] that are otherwise computationally difficult to handle. Squigonometry has been applied to find expressions for the volume of superellipsoids , such as the superegg . [ 7 ] Shelupsky, D. (1959). "A generalization of the trigonometric functions". The American Mathematical Monthly . 66 (10): 879– 884. JSTOR 2309789 .
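A numerical sketch of the initial value problem referred to above. The article quotes the derivative relation for the cosquine; the companion equation for the squine and the initial conditions used below (x′ = −y^(p−1), y′ = x^(p−1), x(0) = 1, y(0) = 0) are the standard coupled system and are an assumption here, consistent with the quoted relation and with the identity |sq_p|^p + |cq_p|^p = 1. The integration is kept in the first quadrant so that fractional powers remain real.

```python
import math

def squigonometric(p, t_end, steps=10000):
    # RK4 integration of the assumed coupled system
    #   x' = -y**(p-1),  y' = x**(p-1),  x(0) = 1, y(0) = 0,
    # returning (cq_p(t_end), sq_p(t_end)).
    x, y = 1.0, 0.0
    h = t_end / steps
    f = lambda x, y: (-(y ** (p - 1)), x ** (p - 1))
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h/2*k1[0], y + h/2*k1[1])
        k3 = f(x + h/2*k2[0], y + h/2*k2[1])
        k4 = f(x + h*k3[0],  y + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, y

cq, sq = squigonometric(p=2, t_end=1.0)
print(cq, math.cos(1.0))                  # p = 2 recovers the cosine
print(sq, math.sin(1.0))                  # ... and the sine
cq4, sq4 = squigonometric(p=4, t_end=0.8)
print(abs(cq4)**4 + abs(sq4)**4)          # stays on the unit squircle |x|^p + |y|^p = 1
```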
https://en.wikipedia.org/wiki/Squigonometry
In fluid dynamics , Squire's theorem states that of all the perturbations that may be applied to a shear flow (i.e. a velocity field of the form U = ( U ( z ) , 0 , 0 ) {\displaystyle \mathbf {U} =(U(z),0,0)} ), the perturbations which are least stable are two-dimensional, i.e. of the form u ′ = ( u ′ ( x , z , t ) , 0 , w ′ ( x , z , t ) ) {\displaystyle \mathbf {u} '=(u'(x,z,t),0,w'(x,z,t))} , rather than the three-dimensional disturbances. [ 1 ] This applies to incompressible flows which are governed by the Navier–Stokes equations . The theorem is named after Herbert Squire , who proved the theorem in 1933. [ 2 ] Squire's theorem allows many simplifications to be made in stability theory . If we want to decide whether a flow is unstable or not, it suffices to look at two-dimensional perturbations. These are governed by the Orr–Sommerfeld equation for viscous flow, and by Rayleigh's equation for inviscid flow.
https://en.wikipedia.org/wiki/Squire's_theorem
The squirmer is a model for a spherical microswimmer swimming in Stokes flow . The squirmer model was introduced by James Lighthill in 1952 and refined and used to model Paramecium by John Blake in 1971. [ 1 ] [ 2 ] Blake used the squirmer model to describe the flow generated by a carpet of beating short filaments called cilia on the surface of Paramecium. Today, the squirmer is a standard model for the study of self-propelled particles , such as Janus particles , in Stokes flow. [ 3 ] Here we give the flow field of a squirmer in the case of a non-deformable axisymmetric spherical squirmer (radius R {\displaystyle R} ). [ 1 ] [ 2 ] These expressions are given in a spherical coordinate system . u r ( r , θ ) = 2 3 ( R 3 r 3 − 1 ) B 1 P 1 ( cos ⁡ θ ) + ∑ n = 2 ∞ ( R n + 2 r n + 2 − R n r n ) B n P n ( cos ⁡ θ ) , {\displaystyle u_{r}(r,\theta )={\frac {2}{3}}\left({\frac {R^{3}}{r^{3}}}-1\right)B_{1}P_{1}(\cos \theta )+\sum _{n=2}^{\infty }\left({\frac {R^{n+2}}{r^{n+2}}}-{\frac {R^{n}}{r^{n}}}\right)B_{n}P_{n}(\cos \theta )\;,} u θ ( r , θ ) = 2 3 ( R 3 2 r 3 + 1 ) B 1 V 1 ( cos ⁡ θ ) + ∑ n = 2 ∞ 1 2 ( n R n + 2 r n + 2 + ( 2 − n ) R n r n ) B n V n ( cos ⁡ θ ) . {\displaystyle u_{\theta }(r,\theta )={\frac {2}{3}}\left({\frac {R^{3}}{2r^{3}}}+1\right)B_{1}V_{1}(\cos \theta )+\sum _{n=2}^{\infty }{\frac {1}{2}}\left(n{\frac {R^{n+2}}{r^{n+2}}}+(2-n){\frac {R^{n}}{r^{n}}}\right)B_{n}V_{n}(\cos \theta )\;.} Here B n {\displaystyle B_{n}} are constant coefficients, P n ( cos ⁡ θ ) {\displaystyle P_{n}(\cos \theta )} are Legendre polynomials , and V n ( cos ⁡ θ ) = − 2 n ( n + 1 ) ∂ θ P n ( cos ⁡ θ ) {\displaystyle V_{n}(\cos \theta )={\frac {-2}{n(n+1)}}\partial _{\theta }P_{n}(\cos \theta )} . One finds P 1 ( cos ⁡ θ ) = cos ⁡ θ , P 2 ( cos ⁡ θ ) = 1 2 ( 3 cos 2 ⁡ θ − 1 ) , … , V 1 ( cos ⁡ θ ) = sin ⁡ θ , V 2 ( cos ⁡ θ ) = 1 2 sin ⁡ 2 θ , … {\displaystyle P_{1}(\cos \theta )=\cos \theta ,P_{2}(\cos \theta )={\tfrac {1}{2}}(3\cos ^{2}\theta -1),\dots ,V_{1}(\cos \theta )=\sin \theta ,V_{2}(\cos \theta )={\tfrac {1}{2}}\sin 2\theta ,\dots } . The expressions above are in the frame of the moving particle. At the interface one finds u θ ( R , θ ) = ∑ n = 1 ∞ B n V n {\displaystyle u_{\theta }(R,\theta )=\sum _{n=1}^{\infty }B_{n}V_{n}} and u r ( R , θ ) = 0 {\displaystyle u_{r}(R,\theta )=0} . By using the Lorentz Reciprocal Theorem , one finds the velocity vector of the particle U = − 1 2 ∫ u ( R , θ ) sin ⁡ θ d θ = 2 3 B 1 e z {\displaystyle \mathbf {U} =-{\tfrac {1}{2}}\int \mathbf {u} (R,\theta )\sin \theta \mathrm {d} \theta ={\tfrac {2}{3}}B_{1}\mathbf {e} _{z}} . The flow in a fixed lab frame is given by u L = u + U {\displaystyle \mathbf {u} ^{L}=\mathbf {u} +\mathbf {U} } : u r L ( r , θ ) = R 3 r 3 U P 1 ( cos ⁡ θ ) + ∑ n = 2 ∞ ( R n + 2 r n + 2 − R n r n ) B n P n ( cos ⁡ θ ) , {\displaystyle u_{r}^{L}(r,\theta )={\frac {R^{3}}{r^{3}}}UP_{1}(\cos \theta )+\sum _{n=2}^{\infty }\left({\frac {R^{n+2}}{r^{n+2}}}-{\frac {R^{n}}{r^{n}}}\right)B_{n}P_{n}(\cos \theta )\;,} u θ L ( r , θ ) = R 3 2 r 3 U V 1 ( cos ⁡ θ ) + ∑ n = 2 ∞ 1 2 ( n R n + 2 r n + 2 + ( 2 − n ) R n r n ) B n V n ( cos ⁡ θ ) . {\displaystyle u_{\theta }^{L}(r,\theta )={\frac {R^{3}}{2r^{3}}}UV_{1}(\cos \theta )+\sum _{n=2}^{\infty }{\frac {1}{2}}\left(n{\frac {R^{n+2}}{r^{n+2}}}+(2-n){\frac {R^{n}}{r^{n}}}\right)B_{n}V_{n}(\cos \theta )\;.} with swimming speed U = | U | {\displaystyle U=|\mathbf {U} |} . 
Note, that lim r → ∞ u L = 0 {\displaystyle \lim _{r\rightarrow \infty }\mathbf {u} ^{L}=0} and u r L ( R , θ ) ≠ 0 {\displaystyle u_{r}^{L}(R,\theta )\neq 0} . The series above are often truncated at n = 2 {\displaystyle n=2} in the study of far field flow, r ≫ R {\displaystyle r\gg R} . Within that approximation, u θ ( R , θ ) = B 1 sin ⁡ θ + 1 2 B 2 sin ⁡ 2 θ {\displaystyle u_{\theta }(R,\theta )=B_{1}\sin \theta +{\tfrac {1}{2}}B_{2}\sin 2\theta } , with squirmer parameter β = B 2 / | B 1 | {\displaystyle \beta =B_{2}/|B_{1}|} . The first mode n = 1 {\displaystyle n=1} characterizes a hydrodynamic source dipole with decay ∝ 1 / r 3 {\displaystyle \propto 1/r^{3}} (and with that the swimming speed U {\displaystyle U} ). The second mode n = 2 {\displaystyle n=2} corresponds to a hydrodynamic stresslet or force dipole with decay ∝ 1 / r 2 {\displaystyle \propto 1/r^{2}} . [ 4 ] Thus, β {\displaystyle \beta } gives the ratio of both contributions and the direction of the force dipole. β {\displaystyle \beta } is used to categorize microswimmers into pushers, pullers and neutral swimmers. [ 5 ] The above figures show the velocity field in the lab frame and in the particle-fixed frame. The hydrodynamic dipole and quadrupole fields of the squirmer model result from surface stresses, due to beating cilia on bacteria, or chemical reactions or thermal non-equilibrium on Janus particles. The squirmer is force-free. On the contrary, the velocity field of the passive particle results from an external force, its far-field corresponds to a "stokeslet" or hydrodynamic monopole. A force-free passive particle doesn't move and doesn't create any flow field.
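The truncated flow field discussed above is easy to evaluate directly. The sketch below keeps only the B1 and B2 modes of the lab-frame expressions quoted earlier, computes the swimming speed U = (2/3)B1 and the squirmer parameter β = B2/|B1|, and samples the field at a few radii. The parameter values are arbitrary, and the pusher/puller sign convention noted in the comments is the usual one rather than something stated in this article.

```python
import math

R  = 1.0          # squirmer radius
B1 = 1.0          # first squirming mode (sets the swimming speed)
B2 = -0.5         # second mode (force-dipole strength)

U    = 2.0/3.0 * B1          # swimming speed, U = (2/3) B1
beta = B2 / abs(B1)          # squirmer parameter; usual convention: <0 pusher, >0 puller

def lab_frame_velocity(r, theta):
    # Lab-frame field truncated at n = 2, following the expressions in the article.
    P1, P2 = math.cos(theta), 0.5*(3*math.cos(theta)**2 - 1)
    V1, V2 = math.sin(theta), 0.5*math.sin(2*theta)
    u_r  = (R**3/r**3)*U*P1 + (R**4/r**4 - R**2/r**2)*B2*P2
    u_th = (R**3/(2*r**3))*U*V1 + (R**4/r**4)*B2*V2
    return u_r, u_th

print("U =", U, " beta =", beta)
for r in (2.0, 5.0, 20.0):
    print(r, lab_frame_velocity(r, math.pi/3))
# Far from the sphere the B2 (force-dipole) contribution, decaying as 1/r^2,
# dominates the source-dipole term, which decays as 1/r^3, as noted in the text.
```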
https://en.wikipedia.org/wiki/Squirmer
Strontium nitrate is an inorganic compound composed of the elements strontium , nitrogen and oxygen with the formula Sr ( NO 3 ) 2 . This colorless solid is used as a red colorant and oxidizer in pyrotechnics . Strontium nitrate is typically generated by the reaction of nitric acid on strontium carbonate . [ 2 ] Like many other strontium salts, strontium nitrate is used to produce a rich red flame in fireworks and road flares . The oxidizing properties of this salt are advantageous in such applications. [ 3 ] Strontium nitrate can aid in eliminating and lessening skin irritations. When mixed with glycolic acid , strontium nitrate reduces the sensation of skin irritation significantly better than using glycolic acid alone. [ 4 ] Because the divalent Sr 2+ ion has an ionic radius similar to that of Ca 2+ (1.13 Å and 0.99 Å respectively), it resembles calcium in its ability to traverse calcium-selective ion channels and trigger neurotransmitter release from nerve endings. It is thus used in electrophysiology experiments. In his short story " A Germ-Destroyer ", Rudyard Kipling refers to strontium nitrate as the main ingredient of the titular fumigant .
https://en.wikipedia.org/wiki/Sr(NO3)2
Strontium hydroxide , Sr(OH) 2 , is a caustic alkali composed of one strontium ion and two hydroxide ions. It is synthesized by combining a strontium salt with a strong base . Sr(OH) 2 exists in anhydrous , monohydrate , or octahydrate form. Because Sr(OH) 2 is slightly soluble in cold water, its preparation can be easily carried out by the addition of a strong base such as NaOH or KOH , drop by drop to a solution of any soluble strontium salt, most commonly Sr(NO 3 ) 2 ( strontium nitrate ). The Sr(OH) 2 will precipitate out as a fine white powder. From here, the solution is filtered, and the Sr(OH) 2 is washed with cold water and dried. [ 3 ] Strontium hydroxide is used chiefly in the refining of beet sugar and as a stabilizer in plastic. It may be used as a source of strontium ions when the chlorine from strontium chloride is undesirable. Strontium hydroxide absorbs carbon dioxide from the air to form strontium carbonate . Strontium hydroxide is a severe skin, eye and respiratory irritant. It is harmful if swallowed.
https://en.wikipedia.org/wiki/Sr(OH)2
Distrontium ruthenate , also known as strontium ruthenate , is an oxide of strontium and ruthenium with the chemical formula Sr 2 RuO 4 . It was the first reported perovskite superconductor that did not contain copper . Strontium ruthenate is structurally very similar to the high-temperature cuprate superconductors, and in particular, is almost identical to the lanthanum doped superconductor (La, Sr) 2 CuO 4 . However, the transition temperature for the superconducting phase transition is 0.93 K (about 1.5 K for the best sample), which is much lower than the corresponding value for cuprates. [ 2 ] Superconductivity in SRO was first observed by Yoshiteru Maeno et al. Unlike the cuprate superconductors, SRO displays superconductivity in the absence of doping . [ 2 ] The superconducting order parameter in SRO exhibits signatures of time-reversal symmetry breaking, [ 3 ] and hence, it can be classified as an unconventional superconductor . Sr 2 RuO 4 is believed to be a fairly two-dimensional system, with superconductivity occurring primarily on the Ru-O plane. The electronic structure of Sr 2 RuO 4 is characterized by three bands derived from the Ru t 2g 4d orbitals, namely, α, β and γ bands, of which the first is hole-like while the other two are electron-like. Among them, the γ band arises mainly from the d xy orbital, while the α and β bands emerge from the hybridization of d xz and d yz orbitals. Due to the two-dimensionality of Sr 2 RuO 4 , its Fermi surface consists of three nearly two-dimensional sheets with little dispersion along the crystalline c-axis and that the compound is nearly magnetic. [ 4 ] Early proposals suggested that superconductivity is dominant in the γ band. In particular, the candidate chiral p-wave order parameter in the momentum space exhibits k-dependence phase winding which is characteristic of time-reversal symmetry breaking. This peculiar single-band superconducting order is expected to give rise to appreciable spontaneous supercurrent at the edge of the sample. Such an effect is closely associated with the topology of the Hamiltonian describing Sr 2 RuO 4 in the superconducting state, which is characterized by a nonzero Chern number . However, scanning probes have so far failed to detect expected time-reversal symmetry breaking fields generated by the supercurrent, off by orders of magnitude. [ 5 ] This has led some to speculate that superconductivity arises dominantly from the α and β bands instead. [ 6 ] Such a two-band superconductor, although having k-dependence phase winding in its order parameters on the two relevant bands, is topologically trivial with the two bands featuring opposite Chern numbers. Therefore, it could possibly give a much reduced if not completely cancelled supercurrent at the edge. However, this naive reasoning was later found not to be entirely correct: the magnitude of edge current is not directly related to the topological property of the chiral state. [ 7 ] In particular, although the non-trivial topology is expected to give rise to protected chiral edge states, due to U(1) symmetry-breaking the edge current is not a protected quantity. In fact, it has been shown that the edge current vanishes identically for any higher angular momentum chiral pairing states which feature even larger Chern numbers, such as chiral d-, f-wave etc. [ 8 ] [ 9 ] T c seems to increase under uniaxial compression [ 10 ] that pushes the van Hove singularity of the d xy orbital across the Fermi level. 
[ 11 ] Evidence was reported for a spin-singlet state, as in cuprates and conventional superconductors, instead of the conjectured more unconventional p -wave triplet state . [ 12 ] [ 13 ] It has also been suggested that strontium ruthenate superconductivity could be due to a Fulde–Ferrell–Larkin–Ovchinnikov phase . [ 14 ] [ 15 ] Strontium ruthenate behaves as a conventional Fermi liquid at temperatures below 25 K. [ 16 ] In 2023, a team of researchers from the University of Illinois Urbana-Champaign confirmed the 67-year-old prediction of Pines' demon excitation in Sr 2 RuO 4 . [ 17 ]
https://en.wikipedia.org/wiki/Sr2RuO4
Strontium nitride , Sr 3 N 2 , is produced by burning strontium metal in air (resulting in a mixture with strontium oxide ) or in nitrogen . Like other metal nitrides , it reacts with water to give strontium hydroxide and ammonia: Sr 3 N 2 + 6 H 2 O → 3 Sr(OH) 2 + 2 NH 3
https://en.wikipedia.org/wiki/Sr3N2
Strontium bromide is a chemical compound with a formula Sr Br 2 . At room temperature it is a white, odourless, crystalline powder. Strontium bromide imparts a bright red colour in a flame test , showing the presence of strontium ions. It is used in flares and also has some pharmaceutical uses. SrBr 2 can be prepared from strontium hydroxide and hydrobromic acid . Alternatively strontium carbonate can also be used as strontium source. These reactions give hexahydrate of strontium bromide ( SrBr 2 ·6H 2 O ), which decomposes to dihydrate ( SrBr 2 ·2H 2 O ) at 89 °C. At 180 °C anhydrous SrBr 2 is obtained. [ 2 ] At room temperature, strontium bromide adopts a crystal structure with a tetragonal unit cell and space group P 4/ n . This structure is referred to as α- SrBr 2 and is isostructural with EuBr 2 and USe 2 . The compound's structure was initially erroneously interpreted as being of the PbCl 2 type, [ 3 ] but this was later corrected. [ 4 ] [ 1 ] Around 920 K (650 °C), α- SrBr 2 undergoes a first-order solid-solid phase transition to a much less ordered phase, β- SrBr 2 , which adopts the cubic fluorite structure. The beta phase of strontium bromide has a much higher ionic conductivity of about 1 S/cm, comparable to that of molten SrBr 2 , due to extensive disorder in the bromide sublattice . [ 1 ] Strontium bromide melts at 930 K (657 °C).
https://en.wikipedia.org/wiki/SrBr2
Strontium chloride (SrCl 2 ) is a salt of strontium and chloride . It is a "typical" salt, forming neutral aqueous solutions. As with all compounds of strontium, this salt emits a bright red colour in flame, and is commonly used in fireworks to that effect. [ citation needed ] Its properties are intermediate between those for barium chloride , which is more toxic, and calcium chloride . Strontium chloride can be prepared by treating aqueous strontium hydroxide or strontium carbonate with hydrochloric acid : Crystallization from cold aqueous solution gives the hexahydrate , SrCl 2 ·6H 2 O. Dehydration of this salt occurs in stages, commencing above 61 °C (142 °F). Full dehydration occurs at 320 °C (608 °F). [ 2 ] In the solid state, SrCl 2 adopts a fluorite structure. [ 3 ] [ 4 ] [ 5 ] In the vapour phase the SrCl 2 molecule is non-linear with a Cl-Sr-Cl angle of approximately 130°. [ 6 ] This is an exception to VSEPR theory which would predict a linear structure. Ab initio calculations have been cited to propose that contributions from d orbitals in the shell below the valence shell are responsible. [ 7 ] Another proposal is that polarisation of the electron core of the strontium atom causes a distortion of the core electron density that interacts with the Sr-Cl bonds. [ 8 ] Strontium chloride is a precursor to other compounds of strontium, such as yellow strontium chromate , strontium carbonate , and strontium sulfate . Exposure of aqueous solutions of strontium chloride to the sodium salt of the desired anion often leads to formation of the solid precipitate: [ 9 ] [ 2 ] Strontium chloride is often used as a red colouring agent in pyrotechnics . [ citation needed ] It imparts a much more intense red colour to the flames than most alternatives. It is employed in small quantities in glass -making and metallurgy . The radioactive isotope strontium-89, used for the treatment of bone cancer , is usually administered in the form of strontium chloride. Seawater aquaria require small amounts of strontium chloride, which is consumed during the growth of certain plankton . SrCl 2 is useful in reducing tooth sensitivity by forming a barrier over microscopic tubules in the dentin containing nerve endings that have become exposed by gum recession. Known in the U.S. as Elecol and Sensodyne , these products are called "strontium chloride toothpastes", although most now use saltpeter (KNO 3 ) instead which works as an analgesic rather than a barrier. [ 10 ] Brief strontium chloride exposure induces parthenogenetic activation of oocytes [ 11 ] which is used in developmental biological research. A commercial company is using a strontium chloride-based artificial solid called AdAmmine as a means to store ammonia at low pressure, mainly for use in NO x emission reduction on Diesel vehicles. They claim that their patented material can also be made from some other salts, but they have chosen strontium chloride for mass production. [ 12 ] Earlier company research also considered using the stored ammonia as a means to store synthetic ammonia fuel under the trademark HydrAmmine and the press name "hydrogen tablet", however, this aspect has not been commercialized. [ 13 ] Their processes and materials are patented. Their early experiments used magnesium chloride , and is also mentioned in that article. Strontium chloride is used with citric acid in soil testing as a universal extractant of plant nutrients. [ 14 ]
https://en.wikipedia.org/wiki/SrCl2
Strontium fluoride , SrF 2 , also called strontium difluoride and strontium(II) fluoride , is a fluoride of strontium . It is a brittle white crystalline solid. In nature, it appears as the very rare mineral strontiofluorite . [ 2 ] [ 3 ] Strontium fluoride is prepared by the action of hydrofluoric acid on strontium carbonate . [ 4 ] The solid adopts the fluorite structure. In the vapour phase the SrF 2 molecule is non-linear with an F−Sr−F angle of approximately 120°. [ 5 ] This is an exception to VSEPR theory which would predict a linear structure. Ab initio calculations have been cited to propose that contributions from d orbitals in the shell below the valence shell are responsible. [ 6 ] Another proposal is that polarization of the electron core of the strontium atom creates an approximately tetrahedral distribution of charge that interacts with the Sr−F bonds. [ 7 ] It is almost insoluble in water (its K sp value is approximately 2.0x10 −10 at 25 degrees Celsius ). It irritates eyes and skin, and is harmful when inhaled or ingested. Similar to CaF 2 and BaF 2 , SrF 2 displays superionic conductivity at elevated temperatures. [ 8 ] Strontium fluoride is transparent to light in the wavelengths from vacuum ultraviolet (150 nm ) to infrared (11 μm ). Its optical properties are intermediate to calcium fluoride and barium fluoride . [ 9 ] Strontium fluoride is used as an optical material for a small range of special applications, for example, as an optical coating on lenses and also as a thermoluminescent dosimeter crystal. Another use is as a carrier of strontium-90 radioisotope in radioisotope thermoelectric generators .
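For a rough sense of what a solubility product of this size means, the following back-of-the-envelope sketch (an illustration added here, neglecting activity effects and fluoride protonation) converts the quoted Ksp into an ideal molar solubility.

```python
# SrF2 dissolves as Sr2+ + 2 F-, so with molar solubility s:
#   Ksp = [Sr2+][F-]^2 = s * (2s)^2 = 4 s^3
Ksp = 2.0e-10                          # quoted value at 25 °C
s = (Ksp / 4.0) ** (1.0 / 3.0)         # ideal molar solubility, mol/L
molar_mass = 87.62 + 2 * 19.00         # g/mol for SrF2
print(f"s ≈ {s:.2e} mol/L ≈ {s * molar_mass * 1000:.0f} mg/L")
# roughly 4e-4 mol/L, i.e. a few tens of milligrams per litre
```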
https://en.wikipedia.org/wiki/SrF2
Strontium iodide is an inorganic compound with the chemical formula Sr I 2 . It is a salt of strontium and iodine . It forms a hexahydrate SrI 2 ·6H 2 O . It is an ionic, water-soluble, and deliquescent compound that can be used in medicine as a substitute for potassium iodide . [ 5 ] It is also used as a scintillation gamma radiation detector, typically doped with europium , due to its optical clarity, relatively high density, high effective atomic number (Z=48), and high scintillation light yield. [ 6 ] In recent years, europium-doped strontium iodide ( SrI 2 : Eu 2+ ) has emerged as a promising scintillation material for gamma-ray spectroscopy with extremely high light yield and proportional response, exceeding that of the widely used high performance commercial scintillator LaBr 3 : Ce 3+ . Large diameter SrI 2 crystals can be grown reliably using vertical Bridgman technique [ 7 ] and are being commercialized by several companies. [ 8 ] [ 9 ] Strontium iodide can be prepared by reacting strontium carbonate with hydroiodic acid : Strontium iodide forms a white powder that slowly changes to a yellowish colour when exposed to air. At high temperatures (in the presence of air) strontium iodide completely decomposes to form strontium oxide and free iodine . [ 10 ]
https://en.wikipedia.org/wiki/SrI2
Strontium oxide or strontia , SrO, is formed when strontium reacts with oxygen . Burning strontium in air results in a mixture of strontium oxide and strontium nitride . It also forms from the decomposition of strontium carbonate SrCO 3 . It is a strongly basic oxide. About 8% by weight of cathode-ray tubes is strontium oxide, which has been the major use of strontium since 1970. [ 3 ] [ 4 ] Color televisions and other devices containing color cathode-ray tubes sold in the United States are required by law to use strontium in the faceplate to block X-ray emission (these X-ray emitting TVs are no longer in production). Lead(II) oxide can be used in the neck and funnel, but causes discoloration when used in the faceplate. [ 5 ] Elemental strontium is formed when strontium oxide is heated with aluminium in a vacuum. [ 1 ]
https://en.wikipedia.org/wiki/SrO
Strontium peroxide is an inorganic compound with the formula Sr O 2 that exists in both anhydrous and octahydrate form, both of which are white solids. The anhydrous form adopts a structure similar to that of calcium carbide . [ 4 ] [ 5 ] It is an oxidizing agent used for bleaching . It is used in some pyrotechnic compositions as an oxidizer and a vivid red pyrotechnic colorant . It can also be used as an antiseptic and in tracer munitions. Strontium peroxide is produced by passing oxygen over heated strontium oxide . Upon heating in the absence of O 2 , it degrades to SrO and O 2 . It is more thermally labile than BaO 2 . [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/SrO2
Monostrontium ruthenate is the inorganic compound with the formula SrRuO 3 . It is one of two main strontium ruthenates , the other having the formula Sr 2 RuO 4 . SrRuO 3 is ferromagnetic. [ 1 ] It has a perovskite structure, as do many complex metal oxides with the ABO 3 formula. The Ru 4+ ions occupy the octahedral sites and the larger Sr 2+ ions are distorted 12-coordinate. [ 2 ]
https://en.wikipedia.org/wiki/SrRuO3
Strontium sulfide is the inorganic compound with the formula Sr S . It is a white solid. The compound is an intermediate in the conversion of strontium sulfate, the main strontium ore called celestite (or, more correctly, celestine), to other more useful compounds. [ 2 ] [ 3 ] [ 4 ] Strontium sulfide is produced by roasting celestine with coke at 1100–1300 °C. [ 5 ] The sulfate is reduced , leaving the sulfide. About 300,000 tons are processed in this way annually. [ 2 ] Both luminous and nonluminous sulfide phases are known; impurities, defects, and dopants are important. [ 6 ] As expected for a sulfide salt of an alkaline earth metal, the sulfide hydrolyzes readily, releasing hydrogen sulfide. For this reason, samples of SrS have an odor of rotten eggs. Similar reactions are used in the production of commercially useful compounds, including the most useful strontium compound, strontium carbonate : a mixture of strontium sulfide with either carbon dioxide gas or sodium carbonate leads to formation of a precipitate of strontium carbonate. [ 2 ] [ 5 ] Strontium nitrate can also be prepared in this way.
https://en.wikipedia.org/wiki/SrS
Strontium sulfate (SrSO 4 ) is the sulfate salt of strontium . It is a white crystalline powder and occurs in nature as the mineral celestine . It is poorly soluble in water to the extent of 1 part in 8,800. It is more soluble in dilute HCl and nitric acid and appreciably soluble in alkali chloride solutions (e.g. sodium chloride ). Strontium sulfate is a polymeric material, isostructural with barium sulfate . Crystallized strontium sulfate is utilized by a small group of radiolarian protozoa , called the Acantharea , as a main constituent of their skeleton . Strontium sulfate is of interest as a naturally occurring precursor to other strontium compounds, which are more useful. In industry it is converted to the carbonate for use as ceramic precursor and the nitrate for use in pyrotechnics. [ 4 ] The low aqueous solubility of strontium sulfate can lead to scale formation in processes where these ions meet. For example, it can form on surfaces of equipment in underground oil wells depending on the groundwater conditions. [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/SrSO4
Tausonite Strontium titanate is an oxide of strontium and titanium with the chemical formula Sr Ti O 3 . At room temperature, it is a centrosymmetric paraelectric material with a perovskite structure. At low temperatures it approaches a ferroelectric phase transition with a very large dielectric constant ~10 4 but remains paraelectric down to the lowest temperatures measured as a result of quantum fluctuations , making it a quantum paraelectric. [ 1 ] It was long thought to be a wholly artificial material, until 1982 when its natural counterpart—discovered in Siberia and named tausonite —was recognised by the IMA . Tausonite remains an extremely rare mineral in nature, occurring as very tiny crystals . Its most important application has been in its synthesized form wherein it is occasionally encountered as a diamond simulant , in precision optics , in varistors , and in advanced ceramics . The name tausonite was given in honour of Lev Vladimirovich Tauson (1917–1989), a Russian geochemist . Disused trade names for the synthetic product include strontium mesotitanate , Diagem , and Marvelite . This product is currently being marketed for its use in jewelry under the name Fabulite . [ 2 ] Other than its type locality of the Murun Massif in the Sakha Republic , natural tausonite is also found in Cerro Sarambi , Concepción department , Paraguay ; and along the Kotaki River of Honshū , Japan . [ 3 ] [ 4 ] SrTiO 3 has an indirect band gap of 3.25 eV and a direct gap of 3.75 eV [ 5 ] in the typical range of semiconductors . Synthetic strontium titanate has a very large dielectric constant (300) at room temperature and low electric field. It has a specific resistivity of over 10 9 Ω-cm for very pure crystals. [ 6 ] It is also used in high-voltage capacitors. Introducing mobile charge carriers by doping leads to Fermi-liquid metallic behavior already at very low charge carrier densities. [ 7 ] At high electron densities strontium titanate becomes superconducting below 0.35 K and was the first insulator and oxide discovered to be superconductive. [ 8 ] Strontium titanate is both much denser ( specific gravity 4.88 for natural, 5.13 for synthetic) and much softer ( Mohs hardness 5.5 for synthetic, 6–6.5 for natural) than diamond . Its crystal system is cubic and its refractive index (2.410—as measured by sodium light, 589.3 nm) is nearly identical to that of diamond (at 2.417), but the dispersion (the optical property responsible for the "fire" of the cut gemstones) of strontium titanate is 4.3× that of diamond, at 0.190 (B–G interval). This results in a shocking display of fire compared to diamond and diamond simulants such as YAG , GAG , GGG , Cubic Zirconia , and Moissanite . [ 3 ] [ 4 ] Synthetics are usually transparent and colourless, but can be doped with certain rare earth or transition metals to give reds, yellows, browns, and blues. Natural tausonite is usually translucent to opaque, in shades of reddish brown, dark red, or grey. Both have an adamantine (diamond-like) lustre . Strontium titanate is considered extremely brittle with a conchoidal fracture ; natural material is cubic or octahedral in habit and streaks brown. Through a hand-held (direct vision) spectroscope , doped synthetics will exhibit a rich absorption spectrum typical of doped stones. Synthetic material has a melting point of ca. 2080 °C (3776 °F) and is readily attacked by hydrofluoric acid . 
[ 3 ] [ 4 ] Under extremely low oxygen partial pressure, strontium titanate decomposes via incongruent sublimation of strontium well below the melting temperature. [ 9 ] At temperatures lower than 105 K, its cubic structure transforms to tetragonal . [ 10 ] Its monocrystals can be used as optical windows and high-quality sputter deposition targets. SrTiO 3 is an excellent substrate for epitaxial growth of high-temperature superconductors and many oxide-based thin films . It is particularly well known as the substrate for the growth of the lanthanum aluminate-strontium titanate interface . Doping strontium titanate with niobium makes it electrically conductive, being one of the only conductive commercially available single crystal substrates for the growth of perovskite oxides. Its bulk lattice parameter of 3.905Å makes it suitable as the substrate for the growth of many other oxides, including the rare-earth manganites, titanates, lanthanum aluminate (LaAlO 3 ), strontium ruthenate (SrRuO 3 ) and many others. Oxygen vacancies are fairly common in SrTiO 3 crystals and thin films. Oxygen vacancies induce free electrons in the conduction band of the material, making it more conductive and opaque. These vacancies can be caused by exposure to reducing conditions, such as high vacuum at elevated temperatures. High-quality, epitaxial SrTiO 3 layers can also be grown on silicon without forming silicon dioxide , thereby making SrTiO 3 an alternative gate dielectric material. This also enables the integration of other thin film perovskite oxides onto silicon. [ 11 ] SrTiO 3 can change its properties when it is exposed to light. [ 12 ] [ 13 ] These changes depend on the temperature and the defects in the material. [ 13 ] [ 12 ] SrTiO 3 has been shown to possess persistent photoconductivity where exposing the crystal to light will increase its electrical conductivity by over 2 orders of magnitude. After the light is turned off, the enhanced conductivity persists for several days, with negligible decay. [ 14 ] [ 15 ] At low temperatures, the main effects of light are electronic, meaning that they involve the creation, movement, and recombination of electrons and holes (positive charges) in the material. [ 13 ] [ 12 ] These effects include photoconductivity, photoluminescence, photovoltage, and photochromism. They are influenced by the defect chemistry of SrTiO 3 , which determines the energy levels, band gap, carrier concentration, and mobility of the material. At high temperatures (>200 °C), the main effects of light are photoionic, meaning that they involve the migration of oxygen vacancies (negative ions) in the material. These vacancies are the main ionic defects in SrTiO 3 , and they can alter the electronic structure, defect chemistry, and surface properties of the material. These effects include photoinduced phase transitions, photoinduced oxygen exchange, and photoinduced surface reconstruction. They are influenced by the oxygen pressure, the crystal structure, and the doping level of SrTiO 3 . [ 13 ] [ 12 ] Due to the significant ionic and electronic conduction of SrTiO 3 , it is potent to be used as the mixed conductor . [ 16 ] Synthetic strontium titanate was one of several titanates patented during the late 1940s and early 1950s; other titanates included barium titanate and calcium titanate . Research was conducted primarily at the National Lead Company (later renamed NL Industries ) in the United States , by Leon Merker and Langtry E. Lynd . 
Merker and Lynd first patented the growth process on February 10, 1953; a number of refinements were subsequently patented over the next four years, such as modifications to the feed powder and additions of colouring dopants. A modification to the basic Verneuil process (also known as flame-fusion) is the favoured method of growth. An inverted oxy-hydrogen blowpipe is used, with feed powder mixed with oxygen carefully fed through the blowpipe in the typical fashion, but with the addition of a third pipe to deliver oxygen—creating a tricone burner. The extra oxygen is required for successful formation of strontium titanate, which would otherwise fail to oxidize completely due to the titanium component. The ratio is ca. 1.5 volumes of hydrogen for each volume of oxygen. The highly purified feed powder is derived by first producing titanyl double oxalate salt (SrTiO( C 2 O 4 ) 2 · 2 H 2 O ) by reacting strontium chloride (Sr Cl 2 ) and oxalic acid ((COO H ) 2 · 2 H 2 O ) with titanium tetrachloride (TiCl 4 ). The salt is washed to eliminate chloride , heated to 1000 °C in order to produce a free-flowing granular powder of the required composition, and is then ground and sieved to ensure all particles are between 0.2 and 0.5 micrometres in size. [ 17 ] The feed powder falls through the oxyhydrogen flame , melts, and lands on a rotating and slowly descending pedestal below. The height of the pedestal is constantly adjusted to keep its top at the optimal position below the flame, and over a number of hours the molten powder cools and crystallises to form a single pedunculated pear or boule crystal. This boule is usually no larger than 2.5 centimetres in diameter and 10 centimetres long; it is an opaque black to begin with, requiring further annealing in an oxidizing atmosphere in order to make the crystal colourless and to relieve strain . This is done at over 1000 °C for 12 hours. [ 17 ] Thin films of SrTiO 3 can be grown epitaxially by various methods, including pulsed laser deposition , molecular beam epitaxy , RF sputtering and atomic layer deposition . As in most thin films, different growth methods can result in significantly different defect and impurity densities and crystalline quality, resulting in a large variation of the electronic and optical properties. Its cubic structure and high dispersion once made synthetic strontium titanate a prime candidate for simulating diamond . Beginning c. 1955 , large quantities of strontium titanate were manufactured for this sole purpose. Strontium titanate was in competition with synthetic rutile ("titania") at the time, and had the advantage of lacking the unfortunate yellow tinge and strong birefringence inherent to the latter material. While it was softer, it was significantly closer to diamond in likeness. Eventually, however, both would fall into disuse, being eclipsed by the creation of "better" simulants: first by yttrium aluminium garnet (YAG) and followed shortly after by gadolinium gallium garnet (GGG); and finally by the (to date) ultimate simulant in terms of diamond-likeness and cost-effectiveness, cubic zirconia . [ 18 ] Despite being outmoded, strontium titanate is still manufactured and periodically encountered in jewellery. It is one of the most costly of diamond simulants, and due to its rarity collectors may pay a premium for large i.e. >2 carat (400 mg) specimens. As a diamond simulant, strontium titanate is most deceptive when mingled with melée i.e. 
<0.20 carat (40 mg) stones and when it is used as the base material for a composite or doublet stone (with, e.g., synthetic corundum as the crown or top of the stone). Under the microscope , gemmologists distinguish strontium titanate from diamond by the former's softness—manifested by surface abrasions—and excess dispersion (to the trained eye), and occasional gas bubbles which are remnants of synthesis. Doublets can be detected by a join line at the girdle ("waist" of the stone) and flattened air bubbles or glue visible within the stone at the point of bonding. [ 19 ] [ 20 ] [ 21 ] Due to its high melting point and insolubility in water, strontium titanate has been used as a strontium-90 -containing material in radioisotope thermoelectric generators (RTGs), such as the US Sentinel and Soviet Beta-M series. [ 22 ] [ 23 ] As strontium-90 has a high fission product yield and is easily extracted from spent nuclear fuel , Sr-90 based RTGs can in principle be produced cheaper than those based on plutonium-238 or other radionuclides which have to be produced in dedicated facilities. However, due to the lower power density (~0.45W thermal per gram of Strontium-90-Titanate) and half life, space based applications, which put a particular premium on low weight, high reliability and longevity prefer Plutonium-238 . Terrestrial off-grid applications of RTGs meanwhile have been largely phased out due to concern over orphan sources and the decreasing price and increasing availability of solar panels, small wind turbines, chemical battery storage and other off-grid power solutions. Strontium titanate's mixed conductivity has attracted attention for use in solid oxide fuel cells (SOFCs). It demonstrates both electronic and ionic conductivity which is useful for SOFC electrodes because there is an exchange of gas and oxygen ions in the material and electrons on both sides of the cell. Strontium titanate is doped with different materials for use on different sides of a fuel cell. On the fuel side (anode), where the first reaction occurs, it is often doped with lanthanum to form lanthanum-doped strontium titanate (LST). In this case, the A-site, or position in the unit cell where strontium usually sits, is sometimes filled by lanthanum instead, this causes the material to exhibit n-type semiconductor properties, including electronic conductivity. It also shows oxygen ion conduction due to the perovskite structure tolerance for oxygen vacancies. This material has a thermal coefficient of expansion similar to that of the common electrolyte yttria-stabilized zirconia (YSZ), chemical stability during the reactions which occur at fuel cell electrodes, and electronic conductivity of up to 360 S/cm under SOFC operating conditions. [ 24 ] Another key advantage of these LST is that it shows a resistance to sulfur poisoning, which is an issue with the currently used nickel - ceramic ( cermet ) anodes. [ 25 ] Another related compound is strontium titanium ferrite (STF) which is used as a cathode (oxygen-side) material in SOFCs. This material also shows mixed ionic and electronic conductivity which is important as it means the reduction reaction which happens at the cathode can occur over a wider area. 
[ 26 ] Building on this material, adding cobalt on the B-site (replacing some of the titanium) alongside the iron gives the material STFC, or cobalt-substituted STF, which shows remarkable stability as a cathode material as well as lower polarization resistance than other common cathode materials such as lanthanum strontium cobalt ferrite . These cathodes also have the advantage of not containing rare earth metals, which makes them cheaper than many of the alternatives. [ 27 ]
https://en.wikipedia.org/wiki/SrTiO3
Srinivasan Chandrasekaran (born 1945) is an Indian organic and organometallic chemist , academic and a former chair of the Department of Organic Chemistry and the Division of Chemical Sciences. He was also a former Dean of the Faculty of Science at the Indian Institute of Science . He was known for his research on organic reaction mechanisms and organic synthesis . [ 1 ] and was an elected fellow of the Indian National Science Academy , [ 2 ] The World Academy of Sciences [ 3 ] and the Indian Academy of Sciences . [ 4 ] The Council of Scientific and Industrial Research , the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology , one of the highest Indian science awards, in 1989, for his contributions to chemical sciences. [ 5 ] S. Chandrasekaran was born on 15 November 1945 in the south Indian state of Tamil Nadu .He did his college studies at the Ramakrishna Mission Vivekananda College of Madras University from where he completed his graduate and master's degrees and secured a PhD from the same university in 1972, studying under the guidance of S. Swaminathan . His thesis was based on Oxy-Cope rearrangement and on the synthesis of novel norbornane derivatives . [ 2 ] Moving to the US, he did his post-doctoral studies in the laboratory of E.J. Corey at Harvard University (1973–75) and on completion of the studies, worked as a scientist at Syntex Research Laboratories during 1975–76. [ 6 ] He stayed in the US for one more year, resuming his research at Corey's laboratory before returning to India in 1977 to join IIT, Kanpur as a lecturer in chemistry. After 12 years of service there, he shifted his base to Bengaluru to continue his service at the Indian Institute of Science . He held several positions at IISc including those of the chair of Department of Organic Chemistry and the Division of Chemical Sciences as well as the Dean of the Faculty of Science. [ 2 ] Chandrasekaran lives in Bengaluru and serves as an honorary professor at the Indian Institute of Science. [ 7 ] During his post-doctoral studies with Corey, Chandrasekaran was able to accomplish the synthesis of gibberellic acid , a plant growth hormone, successfully for the first time. [ 2 ] Later at Syntex , he worked on the synthesis of beta-lactam antibiotics. Subsequently, working on organic reaction mechanisms, he developed a set of new organic synthesis reagents and using them, accomplished the creation of the carbon constellations. [ 8 ] His research has been documented by way of several articles published in peer-reviewed journals [ 9 ] [ 10 ] [ note 1 ] and ResearchGate , an online article repository has listed 318 of them. [ 11 ] Besides, he has contributed chapters to two books; [ note 2 ] 3 chapters to the Encyclopedia of Reagents for Organic Synthesis and one chapter to Particle Swarm Optimization . [ citation needed ] He has also mentored several scholars in their studies and has delivered keynote addresses and plenary speeches. [ 7 ] He was involved with the functioning of many science societies; executive committee membership and chair of International Union of Pure and Applied Chemistry , chair of the national committee of the Indian National Science Academy, secretaryship of the Indian Academy of Sciences and the presidency of the Chemical Research Society of India were some of those responsibilities. 
[ 2 ] Chandrasekaran received the Basudeb Banerjee Memorial Medal of the Indian Chemical Society in 1988, and the Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize , one of the highest Indian science awards, in 1989. [ 12 ] The Indian Academy of Sciences elected him as a fellow the same year, and the Indian National Science Academy and The World Academy of Sciences followed suit in 1992 and 1999 respectively. [ 3 ] The other awards he has received include the Silver Medal of the Chemical Research Society of India , the Golden Jubilee Commemoration Medal (2007) of the Indian National Science Academy and the Alumni Award of Excellence of the Indian Institute of Science. He has also held the J. C. Bose National Fellowship of the Department of Science and Technology and the Distinguished Fellowship of the Science and Engineering Research Board . [ 2 ]
https://en.wikipedia.org/wiki/Srinivasan_Chandrasekaran
In coding theory , Srivastava codes , formulated by Professor J. N. Srivastava , form a class of parameterised error-correcting codes which are a special case of alternant codes . The original Srivastava code over GF( q ) of length n is defined by a parity check matrix H of alternant form, where the α i and z i are elements of GF( q m ). The parameters of this code are length n , dimension ≥ n − ms and minimum distance ≥ s + 1.
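The parity-check matrix itself does not survive in the text above. One form commonly attributed to the original Srivastava code has entries H[i][j] = α_j^μ · z_j / (α_j − w_i), with the α_j and w_i all distinct and the z_j nonzero; treat that formula, and the toy construction below (worked over a small prime field rather than GF(q^m)), as an illustrative assumption rather than a definition taken from this article.

```python
# Hypothetical sketch of a Srivastava-style parity-check matrix over GF(p),
# assuming the alternant form H[i][j] = alpha_j^mu * z_j / (alpha_j - w_i).
p = 13                                      # small prime field GF(13)
inv = lambda x: pow(x, p - 2, p)            # modular inverse (Fermat's little theorem)

def srivastava_parity_check(alphas, ws, zs, mu=0):
    assert len(set(alphas) | set(ws)) == len(alphas) + len(ws), "need distinct elements"
    return [[pow(a, mu, p) * z * inv((a - w) % p) % p
             for a, z in zip(alphas, zs)]
            for w in ws]

H = srivastava_parity_check(alphas=[1, 2, 3, 4, 5, 6], ws=[7, 8], zs=[1] * 6)
for row in H:
    print(row)
# Here n = 6, s = 2 and m = 1, so the stated bounds read:
# dimension >= n - m*s = 4 and minimum distance >= s + 1 = 3.
```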
https://en.wikipedia.org/wiki/Srivastava_code
The St. Petersburg paradox or St. Petersburg lottery [ 1 ] is a paradox involving the game of flipping a coin where the expected payoff of the lottery game is infinite but nevertheless seems to be worth only a very small amount to the participants. The St. Petersburg paradox is a situation where a naïve decision criterion that takes only the expected value into account predicts a course of action that presumably no actual person would be willing to take. Several resolutions to the paradox have been proposed, including the impossible amount of money a casino would need to continue the game indefinitely. The problem was invented by Nicolas Bernoulli , [ 2 ] who stated it in a letter to Pierre Raymond de Montmort on September 9, 1713. [ 3 ] [ 4 ] However, the paradox takes its name from its analysis by Nicolas' cousin Daniel Bernoulli , one-time resident of Saint Petersburg , who in 1738 published his thoughts about the problem in the Commentaries of the Imperial Academy of Science of Saint Petersburg . [ 5 ] A casino offers a game of chance for a single player in which a fair coin is tossed at each stage. The initial stake begins at 2 dollars and is doubled every time tails appears. The first time heads appears, the game ends and the player wins whatever is the current stake. Thus the player wins 2 dollars if heads appears on the first toss, 4 dollars if tails appears on the first toss and heads on the second, 8 dollars if tails appears on the first two tosses and heads on the third, and so on. Mathematically, the player wins 2 k + 1 {\displaystyle 2^{k+1}} dollars, where k {\displaystyle k} is the number of consecutive tails tosses. [ 5 ] What would be a fair price to pay the casino for entering the game? To answer this, one needs to consider what would be the expected payout at each stage: with probability ⁠ 1 / 2 ⁠ , the player wins 2 dollars; with probability ⁠ 1 / 4 ⁠ the player wins 4 dollars; with probability ⁠ 1 / 8 ⁠ the player wins 8 dollars, and so on. Assuming the game can continue as long as the coin toss results in tails and, in particular, that the casino has unlimited resources, the expected value is thus This sum grows without bound so the expected win is an infinite amount of money. Considering nothing but the expected value of the net change in one's monetary wealth, one should therefore play the game at any price if offered the opportunity. Yet, Daniel Bernoulli , after describing the game with an initial stake of one ducat , stated, "Although the standard calculation shows that the value of [the player's] expectation is infinitely great, it has ... to be admitted that any fairly reasonable man would sell his chance, with great pleasure, for twenty ducats." [ 5 ] Robert Martin quotes Ian Hacking as saying, "Few of us would pay even $25 to enter such a game", and he says most commentators would agree. [ 6 ] The apparent paradox is the discrepancy between what people seem willing to pay to enter the game and the infinite expected value. [ 5 ] Several approaches have been proposed for solving the paradox. The classical resolution of the paradox involved the explicit introduction of a utility function , an expected utility hypothesis , and the presumption of diminishing marginal utility of money. According to Daniel Bernoulli: The determination of the value of an item must not be based on the price, but rather on the utility it yields ... There is no doubt that a gain of one thousand ducats is more significant to the pauper than to a rich man though both gain the same amount. 
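A minimal Monte Carlo sketch of the game just described (an illustration added here, not from the cited sources) shows the numerical face of this divergence: the sample-average payout never settles, because rare long runs of tails keep dragging it upward.

```python
import random

def st_petersburg_payout():
    """One play: the stake starts at $2 and doubles on every tails; the player
    is paid the current stake on the first heads."""
    payout = 2
    while random.random() < 0.5:        # tails with probability 1/2
        payout *= 2
    return payout

for n in (10**2, 10**4, 10**6):
    mean = sum(st_petersburg_payout() for _ in range(n)) / n
    print(f"{n:>7} plays: average payout ${mean:.2f}")
```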
A common utility model, suggested by Daniel Bernoulli, is the logarithmic function U ( w ) = ln( w ) (known as log utility ). It is a function of the gambler's total wealth w , and the concept of diminishing marginal utility of money is built into it. The expected utility hypothesis posits that a utility function exists that provides a good criterion for real people's behavior; i.e. a function that returns a positive or negative value indicating if the wager is a good gamble. For each possible event, the change in utility ln(wealth after the event) − ln(wealth before the event) will be weighted by the probability of that event occurring. Let c be the cost charged to enter the game. The expected incremental utility of the lottery now converges to a finite value: This formula gives an implicit relationship between the gambler's wealth and how much he should be willing to pay (specifically, any c that gives a positive change in expected utility). For example, with natural log utility, a millionaire ($1,000,000) should be willing to pay up to $20.88, a person with $1,000 should pay up to $10.95, a person with $2 should borrow $1.35 and pay up to $3.35. Before Daniel Bernoulli's 1738 publication, mathematician Gabriel Cramer from Geneva had already in 1728 found parts of this idea (also motivated by the St. Petersburg paradox), stating that the mathematicians estimate money in proportion to its quantity, and men of good sense in proportion to the usage that they may make of it. He demonstrated in a letter to Nicolas Bernoulli [ 7 ] that a square root function describing the diminishing marginal benefit of gains can resolve the problem. However, unlike Daniel Bernoulli, he did not consider the total wealth of a person, but only the gain by the lottery. This solution by Cramer and Bernoulli, however, is not completely satisfying, as the lottery can easily be changed in a way such that the paradox reappears. To this aim, we just need to change the game so that it gives even more rapidly increasing payoffs. For any unbounded utility function, one can find a lottery that allows for a variant of the St. Petersburg paradox, as was first pointed out by Menger. [ 8 ] Recently, expected utility theory has been extended to arrive at more behavioral decision models . In some of these new theories, as in cumulative prospect theory , the St. Petersburg paradox again appears in certain cases, even when the utility function is concave, but not if it is bounded. [ 9 ] Nicolas Bernoulli himself proposed an alternative idea for solving the paradox. He conjectured that people will neglect unlikely events. [ 4 ] Since in the St. Petersburg lottery only unlikely events yield the high prizes that lead to an infinite expected value, this could resolve the paradox. The idea of probability weighting resurfaced much later in the work on prospect theory by Daniel Kahneman and Amos Tversky . Paul Weirich similarly wrote that risk aversion could solve the paradox. Weirich went on to write that increasing the prize actually decreases the chance of someone paying to play the game, stating "there is some number of birds in hand worth more than any number of birds in the bush". [ 10 ] [ 11 ] However, this has been rejected by some theorists because, as they point out, some people enjoy the risk of gambling and because it is illogical to assume that increasing the prize will lead to more risks. Cumulative prospect theory is one popular generalization of expected utility theory that can predict many behavioral regularities. 
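The willingness-to-pay figures quoted above can be reproduced numerically from the log-utility formulation. The sketch below (added here; the series truncation and bisection bracket are implementation choices) searches for the largest entry price c at which the expected change in ln-utility is still non-negative.

```python
import math

def expected_utility_gain(wealth, price, terms=200):
    """Expected change in ln-utility: sum over k of 2^-k * [ln(w - c + 2^k) - ln(w)]."""
    return sum((math.log(wealth - price + 2**k) - math.log(wealth)) / 2**k
               for k in range(1, terms + 1))

def max_fair_price(wealth):
    lo, hi = 0.0, float(wealth)             # bisection bracket for the break-even price
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if expected_utility_gain(wealth, mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

print(round(max_fair_price(1_000_000), 2))  # ≈ 20.88, as quoted above
print(round(max_fair_price(1_000), 2))      # ≈ 10.95, as quoted above
```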
[ 12 ] However, the overweighting of small probability events introduced in cumulative prospect theory may restore the St. Petersburg paradox. Cumulative prospect theory avoids the St. Petersburg paradox only when the power coefficient of the utility function is lower than the power coefficient of the probability weighting function. [ 13 ] Intuitively, the utility function must not simply be concave, but it must be concave relative to the probability weighting function to avoid the St. Petersburg paradox. One can argue that the formulas for the prospect theory are obtained in the region of less than $400. [ 12 ] This is not applicable for infinitely increasing sums in the St. Petersburg paradox. The classical St. Petersburg game assumes that the casino or banker has infinite resources. This assumption has long been challenged as unrealistic. [ 14 ] [ 15 ] Alexis Fontaine des Bertins pointed out in 1754 that the resources of any potential backer of the game are finite. [ 16 ] More importantly, the expected value of the game only grows logarithmically with the resources of the casino. As a result, the expected value of the game, even when played against a casino with the largest bankroll realistically conceivable, is quite modest. In 1777, Georges-Louis Leclerc, Comte de Buffon calculated that after 29 rounds of play there would not be enough money in the Kingdom of France to cover the bet. [ 17 ] If the casino has finite resources, the game must end once those resources are exhausted. [ 15 ] Suppose the total resources (or maximum jackpot) of the casino are W dollars (more generally, W is measured in units of half the game's initial stake). Then the maximum number of times the casino can play before it no longer can fully cover the next bet is L = ⌊ log 2 ( W ) ⌋ . [ 18 ] [ nb 1 ] Assuming the game ends when the casino can no longer cover the bet, the expected value E of the lottery then becomes: [ 18 ] The following table shows the expected value E of the game with various potential bankers and their bankroll W : Note: Under game rules which specify that if the player wins more than the casino's bankroll they will be paid all the casino has, the additional expected value is less than it would be if the casino had enough funds to cover one more round, i.e. less than $1. For the player to win W he must be allowed to play round L +1 . So the additional expected value is W /2 L +1 . The premise of infinite resources produces a variety of apparent paradoxes in economics. In the martingale betting system , a gambler betting on a tossed coin doubles his bet after every loss so that an eventual win would cover all losses; this system fails with any finite bankroll. The gambler's ruin concept shows that a persistent gambler who raises his bet to a fixed fraction of his bankroll when he wins, but does not reduce his bet when he loses, will eventually and inevitably go broke—even if the game has a positive expected value . Buffon [ 17 ] argued that a theory of rational behavior must correspond to what a rational decision-maker would do in real life, and since reasonable people regularly ignore events that are unlikely enough, a rational decision-maker should also ignore such rare events. As an estimate of the threshold of ignorability, he argued that, since a 56-year-old man ignores the possibility of dying in the next 24 hours, which had a probability of 1/10189 according to the mortality tables of the day, events with less than 1/10,000 probability could be ignored. 
Assuming that, the St Petersburg game has an expected payoff of only ∑ k = 1 13 2 k 1 2 k = 13 {\displaystyle \sum _{k=1}^{13}2^{k}{\frac {1}{2^{k}}}=13} . Various authors, including Jean le Rond d'Alembert and John Maynard Keynes , have rejected maximization of expectation (even of utility) as a proper rule of conduct. [ 23 ] [ 24 ] Keynes, in particular, insisted that the relative risk [ clarification needed ] of an alternative could be sufficiently high to reject it even if its expectation were enormous. [ 24 ] Recently, some researchers have suggested to replace the expected value by the median as the fair value. [ 25 ] [ 26 ] An early resolution containing the essential mathematical arguments assuming multiplicative dynamics was put forward in 1870 by William Allen Whitworth . [ 27 ] An explicit link to the ergodicity problem was made by Peters in 2011. [ 28 ] These solutions are mathematically similar to using the Kelly criterion or logarithmic utility. General dynamics beyond the purely multiplicative case can correspond to non-logarithmic utility functions, as was pointed out by Carr and Cherubini in 2020. [ 29 ] A solution involving sampling was offered by William Feller . [ 30 ] Intuitively Feller's answer is "to perform this game with a large number of people and calculate the expected value from the sample extraction". In this method, when the games of infinite number of times are possible, the expected value will be infinity, and in the case of finite, the expected value will be a much smaller value. Paul Samuelson resolves the paradox [ 31 ] by arguing that, even if an entity had infinite resources, the game would never be offered. If the lottery represents an infinite expected gain to the player, then it also represents an infinite expected loss to the host. No one could be observed paying to play the game because it would never be offered. As Samuelson summarized the argument, "Paul will never be willing to give as much as Peter will demand for such a contract; and hence the indicated activity will take place at the equilibrium level of zero intensity." Many variants of the St Petersburg game are proposed to counter proposed solutions to the game. [ 11 ] For example, the "Pasadena game": [ 32 ] let n {\displaystyle n} be the number of coin-flips; if n {\displaystyle n} is odd, the player gains units of 2 n n {\displaystyle {\frac {2^{n}}{n}}} ; else the player loses 2 n n {\displaystyle {\frac {2^{n}}{n}}} units of utility. The expected utility from the game is then ∑ n = 1 ∞ ( − 1 ) n + 1 n = ln ⁡ 2 {\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}=\ln 2} . However, since the sum is not absolutely convergent , it may be rearranged to sum to any number, including positive or negative infinity. This suggests that the expected utility of the Pasadena game depends on the summation order, but standard decision theory does not provide a principled way to choose a summation order. One approach that is attracting much interest in solving the St Petersburg paradox is to use a parameter related to the cognitive aspect of a strategy. This approach was developed by studying nonergodic systems in finance. There is much research on the non-stationarity of the financial markets. [ 33 ] [ 34 ] From a statistical point of view, knowledge of a phenomenon results in an increase in the probability of prediction. 
In practice, the results generated by a non-random prediction algorithm, which implements useful information, cannot be reproduced randomly (the probability tends to zero as the number of predictions made increases). Consequently, to understand whether a strategy operates cognitively or randomly, one need only calculate the probability of obtaining an equal or better outcome at random. In the case of the St. Petersburg paradox, the doubling strategy was compared with a completely random constant-bet strategy that was equivalent in terms of the total value of the bets. This comparison shows that the random constant-bet strategy obtains better results with a probability that tends to 50% as the number of bets increases. If the doubling strategy exploited some useful information about the system, this probability would tend to zero instead of converging to 50%. This shows that the doubling strategy does not use any useful information. From this point of view, the St. Petersburg paradox teaches us that an expected gain that tends to infinity does not always imply the presence of a cognitive, non-random strategy. Consequently, from the decision-making point of view, we can create a hierarchy of values, in which knowledge turns out to be more important than expected gain.
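To make the two truncation-based resolutions discussed earlier concrete, here is a small sketch (added here; the bankroll figures are illustrative) of the expected payout when the casino's bankroll W caps the jackpot, with L = ⌊log2 W⌋ as above, together with Buffon's expectation when branches rarer than 1/10,000 are ignored.

```python
import math

def finite_bankroll_value(W):
    """Expected payout with casino bankroll W and an initial stake of $2,
    assuming the player is simply paid W if the run of tails outlasts round L."""
    L = math.floor(math.log2(W))        # last round the casino can still cover in full
    return L + W / 2**L                 # sum_{k<=L} 2^-k * 2^k  +  2^-L * W

for W in (100, 1_000_000, 1_000_000_000):
    print(f"bankroll ${W:>13,}: expected value ≈ ${finite_bankroll_value(W):.2f}")

# Buffon's threshold: keep only branches with probability 2^-k >= 1/10,000, i.e. k <= 13.
buffon = sum(2**k * 2**-k for k in range(1, 14))
print(f"Buffon's truncated expectation: ${buffon:.0f}")    # $13, as stated above
```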
https://en.wikipedia.org/wiki/St._Petersburg_paradox
stICQ is an ICQ client for mobile phones running the Symbian OS. StICQ was written by the Russian programmer Sergey Taldykin. StICQ is a native Symbian application (.SIS) for instant messaging over the Internet on the ICQ network (using the OSCAR protocol ). It supports all the main statuses, including "Not Available", "Invisible" etc., contact search by ICQ UID, black lists, multi-user support, sound announcements and even SMS sending through the default ICQ server. Its notable features are its small size, low memory usage and relatively stable operation. One of the key features of the client is its ability to suspend outgoing data until GPRS coverage is available. It also preserves the user's status, whereas other mobile clients usually report a connection problem and drop the user offline. stICQ does not support smiley pictures but has a quick emoticon input using the call button (a special plugin is required). Notably, stICQ supports the yellow "Ready to chat" extended status, while "Depressive", as well as "At home", "At work" etc., are displayed as "Offline"; this has led stICQ to be called an "anti-depressive ICQ". The source code has been sold to the development team of the Quiet Internet Pager messenger, and version 1.01 of QIP for Symbian has since been released. StICQ is free to download, as are a wide variety of mods changing status icons and menu text.
https://en.wikipedia.org/wiki/StICQ
Staballoy is the name of two different classes of metal alloys, one class typically used for munitions and a different class developed for drilling rods. In a military context, staballoys are metal alloys of a high proportion of depleted uranium with other metals, usually titanium or molybdenum , designed for use in kinetic energy penetrator armor-piercing munitions . One formulation has a composition of 99.25% depleted uranium and 0.75% titanium. Other variants can have 3.5% titanium. They are about 65% more dense than lead . An alternative to staballoys in kinetic energy penetrator munitions is tungsten , but it is more expensive, more difficult to machine and is not pyrophoric , so the munition lacks the incendiary effect that enhances its impact. Tungsten penetrators also tend to form a mushroom-shaped tip during armor penetration, while uranium ones tend to be self-sharpening. [ 1 ] An emerging alternative alloy of depleted uranium is stakalloy , formed of niobium (0.01–0.95 wt.%), vanadium (1–4.5 wt.%, between gamma-eutectoid and eutectic ) and depleted uranium (balance). It has improved machinability and can also be used in structural applications. [ 2 ] Staballoy is also a name for a class of commercially used stainless steels used for drilling rods for drilling rigs . An example is Staballoy AG17, which is a different material from military staballoy and contains 20.00% manganese , 17.00% chromium , 0.30% silicon , 0.03% carbon , 0.50% nitrogen , and 0.05% molybdenum , alloyed with iron . It is nonmagnetic. [1]
https://en.wikipedia.org/wiki/Staballoy
In coordination chemistry , a stability constant (also called formation constant or binding constant ) is an equilibrium constant for the formation of a complex in solution. It is a measure of the strength of the interaction between the reagents that come together to form the complex . There are two main kinds of complex: compounds formed by the interaction of a metal ion with a ligand and supramolecular complexes, such as host–guest complexes and complexes of anions. The stability constant(s) provide(s) the information required to calculate the concentration(s) of the complex(es) in solution. There are many areas of application in chemistry, biology and medicine. Jannik Bjerrum (son of Niels Bjerrum ) developed the first general method for the determination of stability constants of metal-ammine complexes in 1941. [ 1 ] The reasons why this occurred at such a late date, nearly 50 years after Alfred Werner had proposed the correct structures for coordination complexes, have been summarised by Beck and Nagypál. [ 2 ] The key to Bjerrum's method was the use of the then recently developed glass electrode and pH meter to determine the concentration of hydrogen ions in solution. Bjerrum recognised that the formation of a metal complex with a ligand was a kind of acid–base equilibrium : there is competition for the ligand, L, between the metal ion, M n+ , and the hydrogen ion, H + . This means that there are two simultaneous equilibria that have to be considered. In what follows electrical charges are omitted for the sake of generality. The two equilibria are Hence by following the hydrogen ion concentration during a titration of a mixture of M and HL with base , and knowing the acid dissociation constant of HL, the stability constant for the formation of ML could be determined. Bjerrum went on to determine the stability constants for systems in which many complexes may be formed. The following twenty years saw a veritable explosion in the number of stability constants that were determined. Relationships, such as the Irving-Williams series were discovered. The calculations were done by hand using the so-called graphical methods. The mathematics underlying the methods used in this period are summarised by Rossotti and Rossotti. [ 3 ] The next key development was the use of a computer program, LETAGROP [ 4 ] [ 5 ] to do the calculations. This permitted the examination of systems too complicated to be evaluated by means of hand-calculations. Subsequently, computer programs capable of handling complex equilibria in general, such as SCOGS [ 6 ] and MINIQUAD [ 7 ] were developed so that today the determination of stability constants has almost become a "routine" operation. Values of thousands of stability constants can be found in two commercial databases. [ 8 ] [ 9 ] The formation of a complex between a metal ion, M, and a ligand, L, is in fact usually a substitution reaction. For example, in aqueous solutions , metal ions will be present as aqua ions , so the reaction for the formation of the first complex could be written as The equilibrium constant for this reaction is given by [L] should be read as "the concentration of L" and likewise for the other terms in square brackets. The expression can be greatly simplified by removing those terms which are constant. The number of water molecules attached to each metal ion is constant. In dilute solutions the concentration of water is effectively constant. 
The expression then becomes K = [ML] / ([M][L]). Following this simplification a general definition can be given: for the general equilibrium p M + q L ⇌ M p L q, the stability constant is β pq = [M p L q ] / ([M]^p [L]^q). The definition can easily be extended to include any number of reagents. The reagents need not always be a metal and a ligand but can be any species which form a complex. Stability constants defined in this way are association constants. This can lead to some confusion as p K a values are dissociation constants. In general purpose computer programs it is customary to define all constants as association constants. The relationship between the two types of constant is given in association and dissociation constants. A cumulative or overall constant, given the symbol β, is the constant for the formation of a complex from reagents. For example, the cumulative constant for the formation of ML 2, M + 2 L ⇌ ML 2, is given by β = [ML 2 ] / ([M][L]^2). The stepwise constants, K 1 and K 2, refer to the formation of the complexes one step at a time. It follows that β = K 1 K 2. A cumulative constant can always be expressed as the product of stepwise constants. Conversely, any stepwise constant can be expressed as a quotient of two or more overall constants. There is no agreed notation for stepwise constants, though a symbol such as K L ML is sometimes found in the literature. It is good practice to specify each stability constant explicitly, as illustrated above. The formation of a hydroxo complex is a typical example of a hydrolysis reaction. A hydrolysis reaction is one in which a substrate reacts with water, splitting a water molecule into hydroxide and hydrogen ions. In this case the hydroxide ion then forms a complex with the substrate. In water the concentration of hydroxide is related to the concentration of hydrogen ions by the self-ionization constant, K w. The expression for the hydroxide concentration, [OH−] = K w / [H+], is substituted into the formation constant expression. In the older literature the value of log K is usually cited for a hydrolysis constant, while the log β* value is usually cited for a hydrolysed complex with the generic chemical formula M p L q (OH) r. A Lewis acid, A, and a Lewis base, B, can be considered to form a complex AB. There are three major theories relating to the strength of Lewis acids and bases and the interactions between them. The thermodynamics of metal ion complex formation provides much significant information. [ 13 ] In particular it is useful in distinguishing between enthalpic and entropic effects. Enthalpic effects depend on bond strengths and entropic effects have to do with changes in the order/disorder of the solution as a whole. The chelate effect, below, is best explained in terms of thermodynamics. An equilibrium constant is related to the standard Gibbs free energy change for the reaction, Δ G ⊖ = −RT ln K, where R is the gas constant and T is the absolute temperature. At 25 °C, Δ G ⊖ = (−5.708 kJ mol −1 ) ⋅ log β. Free energy is made up of an enthalpy term and an entropy term: Δ G ⊖ = Δ H ⊖ − T Δ S ⊖. The standard enthalpy change can be determined by calorimetry or by using the Van 't Hoff equation, though the calorimetric method is preferable. When both the standard enthalpy change and stability constant have been determined, the standard entropy change is easily calculated from the equation above. The fact that stepwise formation constants of complexes of the type ML n decrease in magnitude as n increases may be partly explained in terms of the entropy factor. Take the case of the formation of octahedral complexes. For the first step m = 6, n = 1 and the ligand can go into one of 6 sites.
For the second step m = 5 and the second ligand can go into one of only 5 sites. This means that there is more randomness in the first step than in the second; Δ S ⊖ is more positive, so Δ G ⊖ is more negative and K 1 > K 2. The ratio of the stepwise stability constants can be calculated on this basis, but experimental ratios are not exactly the same because Δ H ⊖ is not necessarily the same for each step. [ 14 ] Exceptions to this rule are discussed below, in the sections on the chelate effect and geometrical factors. The thermodynamic equilibrium constant, K ⊖, for the equilibrium M + L ⇌ ML can be defined [ 15 ] as K ⊖ = {ML} / ({M}{L}), where {ML} is the activity of the chemical species ML, and so forth. K ⊖ is dimensionless since activity is dimensionless. Activities of the products are placed in the numerator, activities of the reactants are placed in the denominator. See activity coefficient for a derivation of this expression. Since activity is the product of concentration and activity coefficient (γ), the definition could also be written as K ⊖ = ([ML] / ([M][L])) × Γ, where [ML] represents the concentration of ML and Γ is a quotient of activity coefficients. This expression can be generalized to any number of reagents. To avoid the complications involved in using activities, stability constants are determined, where possible, in a medium consisting of a solution of a background electrolyte at high ionic strength, that is, under conditions in which Γ can be assumed to be always constant. [ 15 ] For example, the medium might be a solution of 0.1 mol dm −3 sodium nitrate or 3 mol dm −3 sodium perchlorate. When Γ is constant it may be ignored and the general expression in theory, above, is obtained. All published stability constant values refer to the specific ionic medium used in their determination and different values are obtained with different conditions, as illustrated for the complex CuL (L = glycinate). Furthermore, stability constant values depend on the specific electrolyte used as the value of Γ is different for different electrolytes, even at the same ionic strength. There does not need to be any chemical interaction between the species in equilibrium and the background electrolyte, but such interactions might occur in particular cases. For example, phosphates form weak complexes with alkali metals, so, when determining stability constants involving phosphates, such as ATP, the background electrolyte used will be, for example, a tetraalkylammonium salt. Another example involves iron(III), which forms weak complexes with halide and other anions, but not with perchlorate ions. When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion interaction theory (SIT) and other theories. [ 17 ] All equilibrium constants vary with temperature according to the Van 't Hoff equation, [ 18 ] d ln K / dT = Δ H ⊖ / (RT^2), or, alternatively, d ln K / d(1/T) = −Δ H ⊖ / R, where R is the gas constant and T is the thermodynamic temperature. Thus, for exothermic reactions, where the standard enthalpy change, Δ H ⊖, is negative, K decreases with temperature, but for endothermic reactions, where Δ H ⊖ is positive, K increases with temperature. Consider the two equilibria, in aqueous solution, between the copper(II) ion, Cu 2+, and ethylenediamine (en) on the one hand and methylamine, MeNH 2, on the other. In the first reaction the bidentate ligand ethylenediamine forms a chelate complex with the copper ion. Chelation results in the formation of a five-membered ring.
In the second reaction the bidentate ligand is replaced by two monodentate methylamine ligands of approximately the same donor power, meaning that the enthalpy of formation of Cu–N bonds is approximately the same in the two reactions. Under conditions of equal copper concentrations and when the concentration of methylamine is twice the concentration of ethylenediamine, the concentration of the bidentate complex will be greater than the concentration of the complex with two monodentate ligands. The effect increases with the number of chelate rings, so the concentration of the EDTA complex, which has five chelate rings, is much higher than that of a corresponding complex with two monodentate nitrogen donor ligands and four monodentate carboxylate ligands. Thus, the phenomenon of the chelate effect is a firmly established empirical fact: under comparable conditions, the concentration of a chelate complex will be higher than the concentration of an analogous complex with monodentate ligands. The thermodynamic approach to explaining the chelate effect considers the equilibrium constant for the reaction: the larger the equilibrium constant, the higher the concentration of the complex. When the analytical concentration of methylamine is twice that of ethylenediamine and the concentration of copper is the same in both reactions, the concentration of [Cu(en)] 2+ is much higher than the concentration of [Cu(MeNH 2 ) 2 ] 2+ because β 11 ≫ β 12. The difference between the two stability constants is mainly due to the difference in the standard entropy change, Δ S ⊖. In the reaction with the chelating ligand there are two particles on the left and one on the right, whereas in the equation with the monodentate ligand there are three particles on the left and one on the right. This means that less entropy is lost when the chelate complex is formed than when the complex with monodentate ligands is formed. This is one of the factors contributing to the entropy difference. Other factors include solvation changes and ring formation. Some experimental data to illustrate the effect are shown in the following table. [ 19 ] These data show that the standard enthalpy changes are indeed approximately equal for the two reactions and that the main reason why the chelate complex is so much more stable is that the standard entropy term is much less unfavourable; indeed, it is favourable in this instance. In general it is difficult to account precisely for thermodynamic values in terms of changes in solution at the molecular level, but it is clear that the chelate effect is predominantly an effect of entropy. Other explanations, including that of Schwarzenbach, [ 20 ] are discussed in Greenwood and Earnshaw. [ 19 ] The chelate effect increases as the number of chelate rings increases. For example, the complex [Ni(dien) 2 ] 2+ is more stable than the complex [Ni(en) 3 ] 2+; both complexes are octahedral with six nitrogen atoms around the nickel ion, but dien (diethylenetriamine, 1,4,7-triazaheptane) is a tridentate ligand and en is bidentate. The number of chelate rings is one less than the number of donor atoms in the ligand. EDTA (ethylenediaminetetraacetic acid) has six donor atoms, so it forms very strong complexes with five chelate rings. Ligands such as DTPA, which have eight donor atoms, are used to form complexes with large metal ions such as lanthanide or actinide ions, which usually form 8- or 9-coordinate complexes. 5-membered and 6-membered chelate rings give the most stable complexes.
4-membered rings are subject to internal strain because of the small inter-bond angles in the ring. The chelate effect is also reduced with 7- and 8-membered rings, because the larger rings are less rigid, so less entropy is lost in forming them. Removal of a proton from an aliphatic –OH group is difficult to achieve in aqueous solution because the energy required for this process is rather large. Thus, ionization of aliphatic –OH groups occurs in aqueous solution only in special circumstances. One such circumstance is found with compounds containing the H 2 N–C–C–OH substructure. For example, compounds containing the 2-aminoethanol substructure can form metal–chelate complexes with the deprotonated form, H 2 N–C–C–O −. The chelate effect supplies the extra energy needed to break the O–H bond. An important example occurs with the molecule tris. This molecule should be used with caution as a buffering agent as it will form chelate complexes with ions such as Fe 3+ and Cu 2+. It was found that the stability of the complex of copper(II) with the macrocyclic ligand cyclam (1,4,8,11-tetraazacyclotetradecane) was much greater than expected in comparison to the stability of the complex with the corresponding open-chain amine. [ 21 ] This phenomenon was named the macrocyclic effect and it was also interpreted as an entropy effect. However, later studies suggested that both enthalpy and entropy factors were involved. [ 22 ] An important difference between macrocyclic ligands and open-chain (chelating) ligands is that they have selectivity for metal ions, based on the size of the cavity into which the metal ion is inserted when a complex is formed. For example, the crown ether 18-crown-6 forms much stronger complexes with the potassium ion, K +, than with the smaller sodium ion, Na +. [ 23 ] In hemoglobin an iron(II) ion is complexed by a macrocyclic porphyrin ring. The iron(II) in deoxyhemoglobin is a high-spin complex, whereas in oxyhemoglobin it is low-spin. The low-spin Fe 2+ ion fits snugly into the cavity of the porphyrin ring, but high-spin iron(II) is significantly larger and the iron atom is forced out of the plane of the macrocyclic ligand. [ 24 ] This effect contributes to the ability of hemoglobin to bind oxygen reversibly under biological conditions. In Vitamin B12 a cobalt(II) ion is held in a corrin ring. Chlorophyll is a macrocyclic complex of magnesium(II). Successive stepwise formation constants K n in a series such as ML n (n = 1, 2, ...) usually decrease as n increases. Exceptions to this rule occur when the geometry of the ML n complexes is not the same for all members of the series. The classic example is the formation of the diamminesilver(I) complex [Ag(NH 3 ) 2 ] + in aqueous solution. In this case, K 2 > K 1. The reason for this is that, in aqueous solution, the ion written as Ag + actually exists as the four-coordinate tetrahedral aqua species [Ag(H 2 O) 4 ] +. The first step is then a substitution reaction involving the displacement of a bound water molecule by ammonia, forming the tetrahedral complex [Ag(NH 3 )(H 2 O) 3 ] +. In the second step, all the aqua ligands are lost and a linear, two-coordinate product [H 3 N–Ag–NH 3 ] + is formed. Examination of the thermodynamic data [ 25 ] shows that the difference in entropy change is the main contributor to the difference in stability constants for the two complexation reactions.
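The thermodynamic relations used throughout this discussion can be put into numbers with a short script. The sketch below is illustrative only: the stepwise constants, the enthalpy value and the temperatures are hypothetical, not data from the systems discussed here. It computes an overall constant from stepwise constants (β = K 1 K 2), converts log β to Δ G ⊖ at 25 °C, obtains Δ S ⊖ from Δ G ⊖ = Δ H ⊖ − T Δ S ⊖, and applies the integrated Van 't Hoff equation to adjust log β to another temperature.

```python
import math

R = 8.314          # gas constant, J K^-1 mol^-1
T = 298.15         # absolute temperature, K (25 degrees C)

# Hypothetical stepwise constants for M + L = ML and ML + L = ML2
log_K1, log_K2 = 5.0, 3.8
log_beta2 = log_K1 + log_K2          # cumulative constant: beta2 = K1 * K2

# Standard Gibbs free energy change: dG = -RT ln(beta) = -(2.303 RT) log(beta).
# At 25 degrees C the factor is about -5.708 kJ/mol per log unit, as quoted above.
dG = -R * T * math.log(10) * log_beta2 / 1000.0      # kJ mol^-1

# With a calorimetrically determined enthalpy (hypothetical value here), the
# entropy change follows from dG = dH - T*dS.
dH = -45.0                                           # kJ mol^-1, assumed
dS = (dH - dG) * 1000.0 / T                          # J K^-1 mol^-1

# Integrated Van 't Hoff equation (dH assumed temperature-independent):
# ln(K_T2 / K_T1) = -(dH / R) * (1/T2 - 1/T1)
T2 = 310.15
log_beta2_T2 = log_beta2 - (dH * 1000.0 / (R * math.log(10))) * (1.0 / T2 - 1.0 / T)

print(f"log beta2        = {log_beta2:.2f}")
print(f"dG (25 C)        = {dG:.1f} kJ/mol")
print(f"dS               = {dS:.1f} J/(K mol)")
print(f"log beta2 (37 C) = {log_beta2_T2:.2f}  # lower, since the reaction is exothermic")
```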
Other examples exist where the change is from octahedral to tetrahedral, as in the formation of [CoCl 4 ] 2− from [Co(H 2 O) 6 ] 2+. Ahrland, Chatt and Davies proposed that metal ions could be described as class A if they formed stronger complexes with ligands whose donor atoms are nitrogen, oxygen or fluorine than with ligands whose donor atoms are phosphorus, sulfur or chlorine, and class B if the reverse is true. [ 26 ] For example, Ni 2+ forms stronger complexes with amines than with phosphines, but Pd 2+ forms stronger complexes with phosphines than with amines. Later, Pearson proposed the theory of hard and soft acids and bases (HSAB theory). [ 27 ] In this classification, class A metals are hard acids and class B metals are soft acids. Some ions, such as copper(I), are classed as borderline. Hard acids form stronger complexes with hard bases than with soft bases. In general terms hard–hard interactions are predominantly electrostatic in nature whereas soft–soft interactions are predominantly covalent in nature. The HSAB theory, though useful, is only semi-quantitative. [ 28 ] The hardness of a metal ion increases with oxidation state. An example of this effect is given by the fact that Fe 2+ tends to form stronger complexes with N-donor ligands than with O-donor ligands, but the opposite is true for Fe 3+. The Irving–Williams series refers to high-spin, octahedral, divalent metal ions of the first transition series. It places the stabilities of complexes in the order Mn(II) < Fe(II) < Co(II) < Ni(II) < Cu(II) > Zn(II). This order was found to hold for a wide variety of ligands. [ 29 ] There are three strands to the explanation of the series. Another example of the effect of ionic radius is the steady increase in stability of complexes with a given ligand along the series of trivalent lanthanide ions, an effect of the well-known lanthanide contraction. Stability constant values are exploited in a wide variety of applications. Chelation therapy is used in the treatment of various metal-related illnesses, such as iron overload in β-thalassemia sufferers who have been given blood transfusions. The ideal ligand binds to the target metal ion and not to others, but this degree of selectivity is very hard to achieve. The synthetic drug deferiprone achieves selectivity by having two oxygen donor atoms, so that it binds to Fe 3+ in preference to the divalent ions present in the human body, such as Mg 2+, Ca 2+ and Zn 2+. Treatment of poisoning by ions such as Pb 2+ and Cd 2+ is much more difficult since these are both divalent ions and selectivity is harder to accomplish. [ 30 ] Excess copper in Wilson's disease can be removed by penicillamine or triethylenetetramine (TETA). DTPA has been approved by the U.S. Food and Drug Administration for treatment of plutonium poisoning. DTPA is also used as a complexing agent for gadolinium in MRI contrast enhancement. The requirement in this case is that the complex be very strong, as Gd 3+ is very toxic. The large stability constant of the octadentate ligand ensures that the concentration of free Gd 3+ is almost negligible, certainly well below the toxicity threshold. [ 31 ] In addition the ligand occupies only 8 of the 9 coordination sites on the gadolinium ion. The ninth site is occupied by a water molecule which exchanges rapidly with the fluid surrounding it, and it is this mechanism that makes the paramagnetic complex into a contrast reagent. EDTA forms such strong complexes with most divalent cations that it finds many uses.
For example, it is often present in washing powder to act as a water softener by sequestering calcium and magnesium ions. The selectivity of macrocyclic ligands can be used as a basis for the construction of an ion selective electrode . For example, potassium selective electrodes are available that make use of the naturally occurring macrocyclic antibiotic valinomycin . An ion-exchange resin such as chelex 100 , which contains chelating ligands bound to a polymer , can be used in water softeners and in chromatographic separation techniques. In solvent extraction the formation of electrically neutral complexes allows cations to be extracted into organic solvents. For example , in nuclear fuel reprocessing uranium (VI) and plutonium (VI) are extracted into kerosene as the complexes [MO 2 (TBP) 2 (NO 3 ) 2 ] (TBP = tri- n -butyl phosphate ). In phase-transfer catalysis , a substance which is insoluble in an organic solvent can be made soluble by addition of a suitable ligand. For example, potassium permanganate oxidations can be achieved by adding a catalytic quantity of a crown ether and a small amount of organic solvent to the aqueous reaction mixture, so that the oxidation reaction occurs in the organic phase. In all these examples, the ligand is chosen on the basis of the stability constants of the complexes formed. For example, TBP is used in nuclear fuel reprocessing because (among other reasons) it forms a complex strong enough for solvent extraction to take place, but weak enough that the complex can be destroyed by nitric acid to recover the uranyl cation as nitrato complexes, such as [UO 2 (NO 3 ) 4 ] 2− back in the aqueous phase. Supramolecular complexes are held together by hydrogen bonding, hydrophobic forces, van der Waals forces, π-π interactions, and electrostatic effects, all of which can be described as noncovalent bonding . Applications include molecular recognition , host–guest chemistry and anion sensors . A typical application in molecular recognition involved the determination of formation constants for complexes formed between a tripodal substituted urea molecule and various saccharides . [ 32 ] The study was carried out using a non-aqueous solvent and NMR chemical shift measurements. The object was to examine the selectivity with respect to the saccharides. An example of the use of supramolecular complexes in the development of chemosensors is provided by the use of transition-metal ensembles to sense for ATP . [ 33 ] Anion complexation can be achieved by encapsulating the anion in a suitable cage. Selectivity can be engineered by designing the shape of the cage. For example, dicarboxylate anions could be encapsulated in the ellipsoidal cavity in a large macrocyclic structure containing two metal ions. [ 34 ] The method developed by Bjerrum is still the main method in use today, though the precision of the measurements has greatly increased. Most commonly, a solution containing the metal ion and the ligand in a medium of high ionic strength is first acidified to the point where the ligand is fully protonated . This solution is then titrated , often by means of a computer-controlled auto-titrator, with a solution of CO 2 -free base. The concentration, or activity , of the hydrogen ion is monitored by means of a glass electrode. 
The data set used for the calculation has three components: a statement defining the nature of the chemical species that will be present, called the model of the system; details concerning the concentrations of the reagents used in the titration; and finally the experimental measurements in the form of titre and pH (or emf) pairs. Other ion-selective electrodes (ISE) may be used. For example, a fluoride electrode may be used in the determination of stability constants of fluoro-complexes of a metal ion. It is not always possible to use an ISE. If that is the case, the titration can be monitored by other types of measurement. Ultraviolet–visible spectroscopy, fluorescence spectroscopy and NMR spectroscopy are the most commonly used alternatives. Current practice is to take absorbance or fluorescence measurements at a range of wavelengths and to fit these data simultaneously. Various NMR chemical shifts can also be fitted together. The chemical model will include values of the protonation constants of the ligand, which will have been determined in separate experiments, a value for log K w and estimates of the unknown stability constants of the complexes formed. These estimates are necessary because the calculation uses a non-linear least-squares algorithm. The estimates are usually obtained by reference to a chemically similar system. The stability constant databases [ 8 ] [ 9 ] can be very useful in finding published stability constant values for related complexes. In some simple cases the calculations can be done in a spreadsheet. [ 35 ] Otherwise, the calculations are performed with the aid of general-purpose computer programs, several of which are in widespread use. In biochemistry, formation constants of adducts may be obtained from isothermal titration calorimetry (ITC) measurements. This technique yields both the stability constant and the standard enthalpy change for the equilibrium. [ 45 ] It is mostly limited, by availability of software, to complexes of 1:1 stoichiometry. Critical reviews of published stability constants for various classes of ligands have been published by IUPAC, and the full texts are available, free of charge, in pdf format.
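The refinement step described above, in which estimated stability constants are improved by non-linear least squares, can be illustrated with a deliberately simplified sketch. It is not a re-implementation of any of the programs mentioned in the text; it fits a single 1:1 formation constant and a molar absorptivity to hypothetical spectrophotometric titration data, using only the mass-balance equations for M + L ⇌ ML.

```python
import numpy as np
from scipy.optimize import least_squares

def ml_concentration(M_tot, L_tot, K):
    """Equilibrium concentration of ML for M + L = ML with formation constant K,
    obtained from the quadratic that follows from the two mass-balance equations:
    [ML]^2 - [ML]*(M_tot + L_tot + 1/K) + M_tot*L_tot = 0."""
    b = M_tot + L_tot + 1.0 / K
    return (b - np.sqrt(b**2 - 4.0 * M_tot * L_tot)) / 2.0

# Hypothetical titration: fixed total metal, increasing total ligand, measured absorbance
M_tot = 1e-3
L_tot = np.linspace(0.0, 5e-3, 15)
eps_true, K_true = 450.0, 2.0e4                     # molar absorptivity, formation constant
A_obs = eps_true * ml_concentration(M_tot, L_tot, K_true)
A_obs = A_obs + np.random.default_rng(0).normal(0.0, 2e-3, A_obs.size)  # measurement noise

def residuals(params):
    log_K, eps = params
    return eps * ml_concentration(M_tot, L_tot, 10.0**log_K) - A_obs

# Initial estimates, as described in the text, would come from a chemically similar system.
fit = least_squares(residuals, x0=[3.0, 100.0])
print("fitted log K =", fit.x[0], " fitted molar absorptivity =", fit.x[1])
```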
https://en.wikipedia.org/wiki/Stability_constants_of_complexes
In mathematics, the stability radius of an object (system, function, matrix, parameter) at a given nominal point is the radius of the largest ball, centered at the nominal point, all of whose elements satisfy pre-determined stability conditions. The intuitive picture is of a nominal point p̂ in the space P of all possible values of the object p, together with a region P(s) consisting of the points that satisfy the stability conditions; the stability radius is the radius of the largest ball centered at p̂ that lies entirely within P(s). The formal definition of this concept varies, depending on the application area. The following abstract definition is quite useful: [ 1 ] [ 2 ] ρ̂(p̂) = max { ρ ≥ 0 : every p ∈ B(ρ,p̂) satisfies the stability conditions }, where B(ρ,p̂) denotes a closed ball of radius ρ in P centered at p̂. The concept appears to have been invented in the early 1960s. [ 3 ] [ 4 ] In the 1980s it became popular in control theory [ 5 ] and optimization. [ 6 ] It is widely used as a model of local robustness against small perturbations in a given nominal value of the object of interest. It was shown [ 2 ] that the stability radius model is an instance of Wald's maximin model, in which a large penalty (−∞) is used to force the max player not to perturb the nominal value beyond the stability radius of the system. This is an indication that the stability model is a model of local stability/robustness, rather than a global one. Info-gap decision theory is a recent non-probabilistic decision theory. It is claimed to be radically different from all current theories of decision under uncertainty. But it has been shown [ 2 ] that its robustness model is actually a stability radius model characterized by a simple stability requirement of the form r c ≤ R(q,u), where q denotes the decision under consideration, u denotes the parameter of interest, ũ denotes the estimate of the true value of u and U(α,ũ) denotes a ball of radius α centered at ũ. Since stability radius models are designed to deal with small perturbations in the nominal value of a parameter, info-gap's robustness model measures the local robustness of decisions in the neighborhood of the estimate ũ. Sniedovich [ 2 ] argues that for this reason the theory is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space. There are cases where it is more convenient to define the stability radius slightly differently. For example, in many applications in control theory the radius of stability is defined as the size of the smallest destabilizing perturbation in the nominal value of the parameter of interest. [ 7 ] More formally, it is the smallest distance dist(p,p̂) from p̂ to a point p ∈ P that violates the stability conditions, where dist(p,p̂) denotes the distance of p ∈ P from p̂. The stability radius of a continuous function f (in a functional space F ) with respect to an open stability domain D is the distance between f and the set of unstable functions (with respect to D ).
We say that a function is stable with respect to D if its spectrum is in D. Here, the notion of spectrum is defined on a case-by-case basis, as explained below. Formally, if we denote the set of stable functions by S(D) and the stability radius by r(f,D), then r(f,D) = inf { ‖g‖ : g ∈ C, f + g ∉ S(D) }, where C is a subset of F. Note that if f is already unstable (with respect to D ), then r(f,D) = 0 (as long as C contains zero). The notion of stability radius is generally applied to special functions such as polynomials (the spectrum is then the roots) and matrices (the spectrum is the eigenvalues). The case where C is a proper subset of F permits us to consider structured perturbations (e.g. for a matrix, we might only allow perturbations of the last row). It is an interesting measure of robustness, for example in control theory. Let f be a (complex) polynomial of degree n, and let C = F be the set of polynomials of degree less than (or equal to) n, which we identify here with the set ℂ^(n+1) of coefficients. We take for D the open unit disk, which means we are looking for the distance between a polynomial and the set of Schur stable polynomials. Then r(f,D) = min over |z| = 1 of |f(z)| / ‖q(z)‖, where q contains each basis vector (e.g. q(z) = (1, z, …, z^n) when q is the usual power basis). This result means that the stability radius is governed by the minimal value that f attains on the unit circle.
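Assuming the form of the result quoted above, r(f,D) = min over the unit circle of |f(z)| / ‖q(z)‖, and taking the Euclidean norm on the coefficient space (a choice of norm assumed here for illustration), the stability radius of a Schur-stable polynomial can be approximated numerically by sampling points on the unit circle. The example coefficients are hypothetical.

```python
import numpy as np

def schur_stability_radius(coeffs, n_samples=2000):
    """Approximate the stability radius of a polynomial with respect to the open
    unit disk (Schur stability), using r(f, D) = min_{|z| = 1} |f(z)| / ||q(z)||,
    where q(z) = (1, z, ..., z^n) and ||.|| is the Euclidean norm (an assumed
    choice of norm on the coefficient space).  coeffs are (a_0, ..., a_n)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    z = np.exp(1j * theta)
    powers = np.vstack([z**k for k in range(len(coeffs))])   # q(z) at each sample point
    f_vals = np.asarray(coeffs) @ powers                     # f(z) at each sample point
    return np.min(np.abs(f_vals) / np.linalg.norm(powers, axis=0))

# f(z) = z^2 - 0.5 z + 0.06 has roots 0.2 and 0.3, both inside the unit disk,
# so f is Schur stable and the computed radius is strictly positive.
print(schur_stability_radius([0.06, -0.5, 1.0]))
```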
https://en.wikipedia.org/wiki/Stability_radius
In model theory, a branch of mathematical logic, a complete first-order theory T is called stable in λ (an infinite cardinal number) if the Stone space of every model of T of size ≤ λ has itself size ≤ λ. T is called a stable theory if there is no upper bound for the cardinals κ such that T is stable in κ. The stability spectrum of T is the class of all cardinals κ such that T is stable in κ. For countable theories there are only four possible stability spectra. The corresponding dividing lines are those for total transcendentality, superstability and stability. This result is due to Saharon Shelah, who also defined stability and superstability. Theorem. Every countable complete first-order theory T falls into one of the following classes: T is totally transcendental and stable in every infinite cardinal; T is superstable but not totally transcendental and stable exactly in the cardinals λ ≥ 2^ω; T is stable but not superstable and stable exactly in the cardinals λ satisfying λ = λ^ω; or T is unstable, i.e. not stable in any infinite cardinal. The condition on λ in the third case holds for cardinals of the form λ = κ^ω, but not for cardinals λ of cofinality ω (because λ < λ^{cf λ}). A complete first-order theory T is called totally transcendental if every formula has bounded Morley rank, i.e. if RM(φ) < ∞ for every formula φ( x ) with parameters in a model of T, where x may be a tuple of variables. It is sufficient to check that RM( x = x ) < ∞, where x is a single variable. For countable theories total transcendence is equivalent to stability in ω, and therefore countable totally transcendental theories are often called ω-stable for brevity. A totally transcendental theory is stable in every λ ≥ | T |, hence a countable ω-stable theory is stable in all infinite cardinals. Every uncountably categorical countable theory is totally transcendental. This includes complete theories of vector spaces or algebraically closed fields. The theories of groups of finite Morley rank are another important example of totally transcendental theories. A complete first-order theory T is superstable if there is a rank function on complete types that has essentially the same properties as Morley rank in a totally transcendental theory. Every totally transcendental theory is superstable. A theory T is superstable if and only if it is stable in all cardinals λ ≥ 2^| T |. A theory that is stable in one cardinal λ ≥ | T | is stable in all cardinals λ that satisfy λ = λ^| T |. Therefore a theory is stable if and only if it is stable in some cardinal λ ≥ | T |. A theory that is stable in no cardinal is called unstable. Most mathematically interesting theories fall into this last category, including complicated theories such as any complete extension of ZF set theory, and relatively tame theories such as the theory of real closed fields. This shows that the stability spectrum is a relatively blunt tool. To get somewhat finer results one can look at the exact cardinalities of the Stone spaces over models of size ≤ λ, rather than just asking whether they are at most λ. For a general stable theory T in a possibly uncountable language, the stability spectrum is determined by two cardinals κ and λ 0, such that T is stable in λ exactly when λ ≥ λ 0 and λ^μ = λ for all μ < κ. So λ 0 is the smallest infinite cardinal for which T is stable. These invariants satisfy inequalities relating them to the cardinality of T. When | T | is countable, the four possibilities for its stability spectrum correspond to particular values of these two cardinals.
https://en.wikipedia.org/wiki/Stability_spectrum
In civil engineering, stabilization is the retrofitting of platforms or foundations, as constructed, to improve the bearing capacity and levelness of the supported building. Soil failure can occur on a slope, as a slope failure or landslide, or in a flat area due to liquefaction of water-saturated sand or mud. Generally, deep pilings or foundations must be driven into solid soil (typically hard mud or sand) or to underlying bedrock.
https://en.wikipedia.org/wiki/Stabilization_(architecture)
A stabilized liquid membrane device or SLMD is a type of passive sampling device which allows for the in situ, integrative collection of waterborne, labile ionic metal contaminants. [ 1 ] By capturing and sequestering metal ions onto its surface continuously over a period of days to weeks, an SLMD can provide an integrative measurement of bioavailable toxic metal ions present in the aqueous environment. [ 2 ] As such, they have been used in conjunction with other passive samplers in ecological field studies. [ 3 ] [ 4 ] The simple device is composed of nonporous low-density plastic lay-flat tubing, which is filled with a chemical mixture containing a chelating agent (metal-binding agent) and a long chain organic acid. The water-insoluble chelating agent-organic acid mixture diffuses in a controlled manner to the exterior surface of the sampler membrane and binds to environmental metals. In practice, the SLMD provides for continuous sequestration of bioavailable forms of trace metals, such as cadmium (Cd), cobalt (Co), copper (Cu), nickel (Ni), lead (Pb), and zinc (Zn). The SLMD can also be utilized for in-laboratory preconcentration and speciation of bioavailable trace metals from grab water samples. [ 5 ] Passive samplers were first developed in the early 1970s to monitor concentrations of airborne contaminants industrial workers might be exposed to, but by the 1990s researchers had developed and utilized passive samplers to monitor contaminants in the aqueous environment. [ 6 ] The first type of passive sampler made for use in the aqueous environment was the semipermeable membrane device (SPMD). [ 6 ] SPMDs could be used to determine time-weighted average concentrations of hydrophobic organic contaminants, but until the early 2000s passive sampling devices for metal contaminants had not yet emerged. [ 1 ] Metals in the environment can speciate into different forms. Most metals dissolved in the aqueous environment are present as any of several ionic, complex-ion, and organically bound states. [ 1 ] For most toxic metals, bioavailability is greatest for labile metals in their free ionic state. [ 1 ] Recognizing the potential usefulness of a passive sampling device that could be used to measure trace amounts of bioavailable toxic metals, researchers at the United States Geological Survey (USGS) and University of Missouri began development on a counterpart to SPMDs that could be used to sample for labile metals. [ 2 ] The outer portion of an SLMD consists of a section of sealed, flat, semi-permeable polyethylene tubing. Sealed inside this tubing is a 1:1 mixture of a hydrophobic metal complexing agent and a long chain organic acid. [ 1 ] The organic acid diffuses through the tubing to the outer surface, where the carboxylic acid portion can form stable complexes with calcium and magnesium ions in the water. [ 2 ] This allows a waxy layer to slowly accumulate on the outside of the tube. The metal complexing agent continuously mobilizes into this waxy layer, where it can sequester metal ions from the water. [ 1 ] The hydrophobic metal complexing agent most commonly used in SLMDs is an alkylated 8-hydroxyquinoline. [ 2 ] Oleic acid is commonly used as the other half of the 1:1 hydrophobic reagent mixture, as it readily forms calcium oleates in the aqueous sampling media. [ 1 ] In addition to the base device, hydrophobic plastic sheaths are sometimes used to house SLMDs in the field.
[ 1 ] [ 2 ] Variable water flow can alter the sampling rates of metals by SLMDs, making a time-averaged concentration difficult to determine. [ 2 ] By allowing labile metal analytes to diffuse to the SLMD's surface while limiting the diffusion of particulate, colloidal, or humic substances, these hydrophobic sheaths help reduce variability of SLMD uptake in faster moving waters. [ 2 ] After being deployed for a known time interval, SLMDs can be recovered from the field for analysis. Washing with 20% nitric acid allows for the extraction of accumulated metals, and by using analytical techniques like inductively coupled plasma mass spectrometry (ICP-MS) or atomic absorption spectroscopy (flame AAS) to measure the concentration of metal in the extract, the amount of metal accumulated by the SLMD can be determined. [ 1 ] The simple device can be created in the laboratory using a nonporous polymeric tube, such as low-density polyethylene (LDPE) plastic. A sequestration medium within the tube slowly diffuses through the membrane, binding to ionic metals and creating non-mobile metal species that can later be extracted from the outer membrane. The sequestration medium generally consists of a metal binding agent, or chelating agent, and a long chain organic acid, commonly oleic acid. [ 7 ] The SLMD tube is flat, with a membrane thickness that can vary between 2 and 500 μm depending on the application. The approximate width of the SLMD is 2.5 cm and the approximate length is 15 cm (these dimensions may vary based on application). The sequestration medium reagent is typically composed of an equal mixture of oleic acid (cis-9-octadecenoic acid) and Kelex-100 (ethyl-methyl-octyl, 8-quinolinol); however, other chemicals may be used to perform similar functions. After deployment, the immobilized metal species can then be extracted from the outer membrane. The metal species can be identified and analyzed using widely recognized standard techniques (e.g., digestion, atomic absorption spectroscopy, inductively coupled plasma mass spectrometry, etc.). In this regard, any procedure or analytical technique applicable to measuring ionic or complexed metal species is suitable for determining metal concentrations sequestered by the SLMD. [ 7 ] SLMDs are known to accumulate cadmium, cobalt, copper, nickel, lead, and zinc, [ 1 ] [ 2 ] and have been deployed in freshwater monitoring studies by the Washington State Department of Ecology (Ecology) [ 3 ] and the USGS. [ 8 ] Ecology deployed SLMDs in upper and lower Indian Creek for 28 and 27 days respectively. [ 3 ] Metal concentrations on the SLMDs were used to estimate the true concentration of metals in the creek. The estimated concentration was expressed as a range based on the sampling rate of the SLMDs as well as the length of exposure. The purpose of the sampling was to investigate potential causes of sublethal effects on young trout and loss of benthic biodiversity in the creek. [ 3 ] Exposure to ionic metals has been shown to result in deleterious effects for aquatic organisms [ 9 ] and may induce oxidative stress, cause DNA damage, [ 10 ] and decrease enzyme activity. [ 11 ] In contrast, some metals under certain environmental conditions have potential moderating effects on other more toxic metals; one example being zinc (Zn), which has been shown to reduce copper (Cu) toxicity when both metals are present.
[ 12 ] Given that the presence of particular aqueous metals may have a wide array of effects on organisms, aquatic toxicologists have developed various methods for sampling them. Passive, or in situ, environmental sampling is an important tool used by toxicologists for evaluating toxicants that may exist in very small concentrations that are not easily detectable via grab samples. One passive sampler, the semipermeable membrane device, or SPMD, is commonly used to measure organic contaminants in aquatic ecosystems. The SLMD was developed as a counterpart device for sampling metals. [ 13 ] Passive sampling for trace metals is more complex than for organic toxicants, as most dissolved metals can simultaneously exist in any of several ionic, complex-ion, and organically bound states. [ 14 ] Metals can also bind with suspended or dissolved organic matter and exist as ultra-fine colloids, [ 15 ] or lipophilic complexes. [ 16 ] First developed by Petty, Brumbaugh, Huckins, May, and Wiedmeyer, the SLMD is used to monitor ionic metals in aquatic environments. Due to anthropogenic factors such as mining, metal refining, and industrial activity, global emissions of metals have significantly increased within the last 100 years, and will likely continue to increase during the foreseeable future. [ 7 ] Toxic metals can be present in the aqueous environment at trace or ultra-trace concentrations, yet still be toxicologically significant and thus cause harm to humans or the environment. [ 2 ] Because these concentrations are so low, they would fall beyond the detection limits of most analytical instruments if the media had been sampled using traditional grab samples. [ 17 ] Using SLMDs to passively collect metals over an extended period of time allows trace metals to accumulate to detectable levels, which can give a more accurate estimate of aquatic chemistry and contamination. [ 2 ] SLMDs also have the advantage of being able to capture pulses of metal contamination that might otherwise go undetected when using grab samples. [ 3 ] SLMDs are limited to the assessment of labile metals, and cannot be used to monitor for organic contaminants. Further, while the ability of SLMDs to sample copper, zinc, nickel, lead, and cadmium has been repeatedly demonstrated, [ 1 ] [ 2 ] [ 4 ] there has been little laboratory research on their ability to reliably take up other toxic metals. Still, while laboratory studies on the effectiveness of SLMDs have only investigated copper, zinc, nickel, lead, and cadmium, SLMDs have been used with success in field studies to assess a wider range of metals. [ 3 ]
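The back-calculation described above, from the mass of metal accumulated on the sampler to a time-weighted average water concentration, reduces to simple arithmetic once a laboratory-derived sampling rate is assumed. The sketch below is illustrative only: the sampling rate and the accumulated mass are hypothetical values, not data from the cited studies.

```python
def estimate_water_concentration(mass_ng, sampling_rate_ml_per_day, days):
    """Time-weighted average water concentration (ng/mL, i.e. ug/L) implied by a
    passive sampler, assuming linear (integrative) uptake over the deployment:
        C_water = accumulated mass / (sampling rate * deployment time)."""
    return mass_ng / (sampling_rate_ml_per_day * days)

# Hypothetical example: 560 ng of copper accumulated over a 28-day deployment,
# with an assumed laboratory-derived sampling rate of 10 mL/day for this metal.
copper_ug_per_L = estimate_water_concentration(mass_ng=560.0,
                                               sampling_rate_ml_per_day=10.0,
                                               days=28.0)
print(copper_ug_per_L)  # 2.0 ug/L, a time-weighted average estimate
```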
https://en.wikipedia.org/wiki/Stabilized_liquid_membrane_device
A stabilized soil mixing plant is a combination of machines used for mixing stabilized soil, which is used for highway construction, municipal road projects, and airport areas. The plant produces stabilized soil with different gradings in a continuous way. Such a plant usually contains a cement silo, a measuring and conveying system, and mixing devices. Stabilized soil is a mixture of lime, cement, coal ash, soil, sand, and other aggregates. Stabilized soil mixing plants are of two kinds: the portable stabilized soil mixing plant and the stationary stabilized soil mixing plant. The portable plant has wheels on each part and can be towed like a trailer, but has low productivity. The stationary plant has larger productivity but is less flexible, and needs a firm groundwork. All aggregates like lime, sand, soil, coal ash, and other materials are loaded into batching hoppers by a loading machine. After measuring, the belt feeder transports the aggregates into a mixing device. Meanwhile, stabilizing powders like lime or cement are transferred from a powder material warehouse to the batch hopper by a spiral conveyor, and then moved to the belt feeder by a powder material feeder. All ingredients then go into the mixing device for final processing. Finally, the feeding belt conveyor takes the final product and delivers it to the storage warehouse.
https://en.wikipedia.org/wiki/Stabilized_soil_mixing_plant
In industrial chemistry, a stabilizer or stabiliser is a chemical that is used to prevent degradation. [ 1 ] Above all, heat and light stabilizers are added to plastic and rubber materials because they ensure safe processing and protect products against aging and weathering. In particular, polyvinyl chloride (PVC), one of the most important plastics and used for pipes, window frames and many other products, would not be usable without stabilizers. In economic terms the most important product groups on the market for stabilizers are compounds based on calcium (calcium-zinc and organo-calcium), lead, and tin stabilizers, as well as liquid and light stabilizers (HALS, benzophenone, benzotriazole). Cadmium-based stabilizers have largely vanished in recent years due to health and environmental concerns. In 2023, almost half of all polymer stabilizers sold worldwide were based on calcium. [ 2 ] Stabilizing additives for plastics are produced in different forms. The trend is towards fluid systems, pellets, and increased use of masterbatches. There are monofunctional, bifunctional, and polyfunctional stabilizers, and several kinds of stabilizers are in use. [ 3 ] [ 4 ] In foods, stabilizers prevent spoilage. Classes of food stabilizers include emulsifiers, thickeners and gelling agents, foam stabilizers, humectants, anticaking agents, and coating agents. [ 5 ]
https://en.wikipedia.org/wiki/Stabilizer_(chemistry)
In quantum computing and quantum communication , a stabilizer code is a class of quantum codes for performing quantum error correction . The toric code , and surface codes more generally, [ 1 ] are types of stabilizer codes considered very important for the practical realization of quantum information processing. Quantum error-correcting codes restore a noisy, decohered quantum state to a pure quantum state. A stabilizer quantum error-correcting code appends ancilla qubits to qubits that we want to protect. A unitary encoding circuit rotates the global state into a subspace of a larger Hilbert space . This highly entangled , encoded state corrects for local noisy errors. A quantum error-correcting code makes quantum computation and quantum communication practical by providing a way for a sender and receiver to simulate a noiseless qubit channel given a noisy qubit channel whose noise conforms to a particular error model. The first quantum error-correcting codes are strikingly similar to classical block codes in their operation and performance. The stabilizer theory of quantum error correction allows one to import some classical binary or quaternary codes for use as a quantum code. However, when importing the classical code, it must satisfy the dual-containing (or self-orthogonality) constraint. Researchers have found many examples of classical codes satisfying this constraint, but most classical codes do not. Nevertheless, it is still useful to import classical codes in this way (though, see how the entanglement-assisted stabilizer formalism overcomes this difficulty). The stabilizer formalism exploits elements of the Pauli group Π {\displaystyle \Pi } in formulating quantum error-correcting codes. The set Π = { I , X , Y , Z } {\displaystyle \Pi =\left\{I,X,Y,Z\right\}} consists of the Pauli operators : The above operators act on a single qubit – a state represented by a vector in a two-dimensional Hilbert space . Operators in Π {\displaystyle \Pi } have eigenvalues ± 1 {\displaystyle \pm 1} and either commute or anti-commute . The set Π n {\displaystyle \Pi ^{n}} consists of n {\displaystyle n} -fold tensor products of Pauli operators : Elements of Π n {\displaystyle \Pi ^{n}} act on a quantum register of n {\displaystyle n} qubits. We occasionally omit tensor product symbols in what follows so that The n {\displaystyle n} -fold Pauli group Π n {\displaystyle \Pi ^{n}} plays an important role for both the encoding circuit and the error-correction procedure of a quantum stabilizer code over n {\displaystyle n} qubits. Let us define an [ n , k ] {\displaystyle \left[n,k\right]} stabilizer quantum error-correcting code to encode k {\displaystyle k} logical qubits into n {\displaystyle n} physical qubits. The rate of such a code is k / n {\displaystyle k/n} . Its stabilizer S {\displaystyle {\mathcal {S}}} is an abelian subgroup of the n {\displaystyle n} -fold Pauli group Π n {\displaystyle \Pi ^{n}} . S {\displaystyle {\mathcal {S}}} does not contain the operator − I ⊗ n {\displaystyle -I^{\otimes n}} . The simultaneous + 1 {\displaystyle +1} - eigenspace of the operators constitutes the codespace . The codespace has dimension 2 k {\displaystyle 2^{k}} so that we can encode k {\displaystyle k} qubits into it. The stabilizer S {\displaystyle {\mathcal {S}}} has a minimal representation in terms of n − k {\displaystyle n-k} independent generators The generators are independent in the sense that none of them is a product of any other two (up to a global phase ). 
The operators g 1 , … , g n − k {\displaystyle g_{1},\ldots ,g_{n-k}} function in the same way as a parity check matrix does for a classical linear block code . One of the fundamental notions in quantum error correction theory is that it suffices to correct a discrete error set with support in the Pauli group Π n {\displaystyle \Pi ^{n}} . Suppose that the errors affecting an encoded quantum state are a subset E {\displaystyle {\mathcal {E}}} of the Pauli group Π n {\displaystyle \Pi ^{n}} : Because E {\displaystyle {\mathcal {E}}} and S {\displaystyle {\mathcal {S}}} are both subsets of Π n {\displaystyle \Pi ^{n}} , an error E ∈ E {\displaystyle E\in {\mathcal {E}}} that affects an encoded quantum state either commutes or anticommutes with any particular element g {\displaystyle g} in S {\displaystyle {\mathcal {S}}} . The error E {\displaystyle E} is correctable if it anticommutes with an element g {\displaystyle g} in S {\displaystyle {\mathcal {S}}} . An anticommuting error E {\displaystyle E} is detectable by measuring each element g {\displaystyle g} in S {\displaystyle {\mathcal {S}}} and computing a syndrome r {\displaystyle \mathbf {r} } identifying E {\displaystyle E} . The syndrome is a binary vector r {\displaystyle \mathbf {r} } with length n − k {\displaystyle n-k} whose elements identify whether the error E {\displaystyle E} commutes or anticommutes with each g ∈ S {\displaystyle g\in {\mathcal {S}}} . An error E {\displaystyle E} that commutes with every element g {\displaystyle g} in S {\displaystyle {\mathcal {S}}} is correctable if and only if it is in S {\displaystyle {\mathcal {S}}} . It corrupts the encoded state if it commutes with every element of S {\displaystyle {\mathcal {S}}} but does not lie in S {\displaystyle {\mathcal {S}}} . So we compactly summarize the stabilizer error-correcting conditions: a stabilizer code can correct any errors E 1 , E 2 {\displaystyle E_{1},E_{2}} in E {\displaystyle {\mathcal {E}}} if or where Z ( S ) {\displaystyle {\mathcal {Z}}\left({\mathcal {S}}\right)} is the centralizer of S {\displaystyle {\mathcal {S}}} (i.e., the subgroup of elements that commute with all members of S {\displaystyle {\mathcal {S}}} , also known as the commutant). A simple example of a stabilizer code is a three qubit [ [ 3 , 1 , 3 ] ] {\displaystyle \left[[3,1,3\right]]} stabilizer code. It encodes k = 1 {\displaystyle k=1} logical qubit into n = 3 {\displaystyle n=3} physical qubits and protects against a single-bit flip error in the set { X i } {\displaystyle \left\{X_{i}\right\}} . This does not protect against other Pauli errors such as phase flip errors in the set { Y i } {\displaystyle \left\{Y_{i}\right\}} .or { Z i } {\displaystyle \left\{Z_{i}\right\}} . This has code distance d = 3 {\displaystyle d=3} . Its stabilizer consists of n − k = 2 {\displaystyle n-k=2} Pauli operators: If there are no bit-flip errors, both operators g 1 {\displaystyle g_{1}} and g 2 {\displaystyle g_{2}} commute, the syndrome is +1,+1, and no errors are detected. If there is a bit-flip error on the first encoded qubit, operator g 1 {\displaystyle g_{1}} will anti-commute and g 2 {\displaystyle g_{2}} commute, the syndrome is -1,+1, and the error is detected. If there is a bit-flip error on the second encoded qubit, operator g 1 {\displaystyle g_{1}} will anti-commute and g 2 {\displaystyle g_{2}} anti-commute, the syndrome is -1,-1, and the error is detected. 
If there is a bit-flip error on the third encoded qubit, operator g 1 {\displaystyle g_{1}} will commute and g 2 {\displaystyle g_{2}} anti-commute, the syndrome is +1,-1, and the error is detected. An example of a stabilizer code is the five qubit [ [ 5 , 1 , 3 ] ] {\displaystyle \left[[5,1,3\right]]} stabilizer code. It encodes k = 1 {\displaystyle k=1} logical qubit into n = 5 {\displaystyle n=5} physical qubits and protects against an arbitrary single-qubit error. It has code distance d = 3 {\displaystyle d=3} . Its stabilizer consists of n − k = 4 {\displaystyle n-k=4} Pauli operators: The above operators commute. Therefore, the codespace is the simultaneous +1-eigenspace of the above operators. Suppose a single-qubit error occurs on the encoded quantum register. A single-qubit error is in the set { X i , Y i , Z i } {\displaystyle \left\{X_{i},Y_{i},Z_{i}\right\}} where A i {\displaystyle A_{i}} denotes a Pauli error on qubit i {\displaystyle i} . It is straightforward to verify that any arbitrary single-qubit error has a unique syndrome. The receiver corrects any single-qubit error by identifying the syndrome via a parity measurement and applying a corrective operation. A simple but useful mapping exists between elements of Π {\displaystyle \Pi } and the binary vector space ( Z 2 ) 2 {\displaystyle \left(\mathbb {Z} _{2}\right)^{2}} . This mapping gives a simplification of quantum error correction theory. It represents quantum codes with binary vectors and binary operations rather than with Pauli operators and matrix operations respectively. We first give the mapping for the one-qubit case. Suppose [ A ] {\displaystyle \left[A\right]} is a set of equivalence classes of an operator A {\displaystyle A} that have the same phase : Let [ Π ] {\displaystyle \left[\Pi \right]} be the set of phase-free Pauli operators where [ Π ] = { [ A ] | A ∈ Π } {\displaystyle \left[\Pi \right]=\left\{\left[A\right]\ |\ A\in \Pi \right\}} . Define the map N : ( Z 2 ) 2 → Π {\displaystyle N:\left(\mathbb {Z} _{2}\right)^{2}\rightarrow \Pi } as Suppose u , v ∈ ( Z 2 ) 2 {\displaystyle u,v\in \left(\mathbb {Z} _{2}\right)^{2}} . Let us employ the shorthand u = ( z | x ) {\displaystyle u=\left(z|x\right)} and v = ( z ′ | x ′ ) {\displaystyle v=\left(z^{\prime }|x^{\prime }\right)} where z {\displaystyle z} , x {\displaystyle x} , z ′ {\displaystyle z^{\prime }} , x ′ ∈ Z 2 {\displaystyle x^{\prime }\in \mathbb {Z} _{2}} . For example, suppose u = ( 0 | 1 ) {\displaystyle u=\left(0|1\right)} . Then N ( u ) = X {\displaystyle N\left(u\right)=X} . The map N {\displaystyle N} induces an isomorphism [ N ] : ( Z 2 ) 2 → [ Π ] {\displaystyle \left[N\right]:\left(\mathbb {Z} _{2}\right)^{2}\rightarrow \left[\Pi \right]} because addition of vectors in ( Z 2 ) 2 {\displaystyle \left(\mathbb {Z} _{2}\right)^{2}} is equivalent to multiplication of Pauli operators up to a global phase: Let ⊙ {\displaystyle \odot } denote the symplectic product between two elements u , v ∈ ( Z 2 ) 2 {\displaystyle u,v\in \left(\mathbb {Z} _{2}\right)^{2}} : The symplectic product ⊙ {\displaystyle \odot } gives the commutation relations of elements of Π {\displaystyle \Pi } : The symplectic product and the mapping N {\displaystyle N} thus give a useful way to phrase Pauli relations in terms of binary algebra . The extension of the above definitions and mapping N {\displaystyle N} to multiple qubits is straightforward. 
Let A = A 1 ⊗ ⋯ ⊗ A n {\displaystyle \mathbf {A} =A_{1}\otimes \cdots \otimes A_{n}} denote an arbitrary element of Π n {\displaystyle \Pi ^{n}} . We can similarly define the phase-free n {\displaystyle n} -qubit Pauli group [ Π n ] = { [ A ] | A ∈ Π n } {\displaystyle \left[\Pi ^{n}\right]=\left\{\left[\mathbf {A} \right]\ |\ \mathbf {A} \in \Pi ^{n}\right\}} where The group operation ∗ {\displaystyle \ast } for the above equivalence class is as follows: The equivalence class [ Π n ] {\displaystyle \left[\Pi ^{n}\right]} forms a commutative group under operation ∗ {\displaystyle \ast } . Consider the 2 n {\displaystyle 2n} -dimensional vector space It forms the commutative group ( ( Z 2 ) 2 n , + ) {\displaystyle (\left(\mathbb {Z} _{2}\right)^{2n},+)} with operation + {\displaystyle +} defined as binary vector addition. We employ the notation u = ( z | x ) , v = ( z ′ | x ′ ) {\displaystyle \mathbf {u} =\left(\mathbf {z} |\mathbf {x} \right),\mathbf {v} =\left(\mathbf {z} ^{\prime }|\mathbf {x} ^{\prime }\right)} to represent any vectors u , v ∈ ( Z 2 ) 2 n {\displaystyle \mathbf {u,v} \in \left(\mathbb {Z} _{2}\right)^{2n}} respectively. Each vector z {\displaystyle \mathbf {z} } and x {\displaystyle \mathbf {x} } has elements ( z 1 , … , z n ) {\displaystyle \left(z_{1},\ldots ,z_{n}\right)} and ( x 1 , … , x n ) {\displaystyle \left(x_{1},\ldots ,x_{n}\right)} respectively with similar representations for z ′ {\displaystyle \mathbf {z} ^{\prime }} and x ′ {\displaystyle \mathbf {x} ^{\prime }} . The symplectic product ⊙ {\displaystyle \odot } of u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } is or where u i = ( z i | x i ) {\displaystyle u_{i}=\left(z_{i}|x_{i}\right)} and v i = ( z i ′ | x i ′ ) {\displaystyle v_{i}=\left(z_{i}^{\prime }|x_{i}^{\prime }\right)} . Let us define a map N : ( Z 2 ) 2 n → Π n {\displaystyle \mathbf {N} :\left(\mathbb {Z} _{2}\right)^{2n}\rightarrow \Pi ^{n}} as follows: Let so that N ( u ) {\displaystyle \mathbf {N} \left(\mathbf {u} \right)} and Z ( z ) X ( x ) {\displaystyle \mathbf {Z} \left(\mathbf {z} \right)\mathbf {X} \left(\mathbf {x} \right)} belong to the same equivalence class : The map [ N ] : ( Z 2 ) 2 n → [ Π n ] {\displaystyle \left[\mathbf {N} \right]:\left(\mathbb {Z} _{2}\right)^{2n}\rightarrow \left[\Pi ^{n}\right]} is an isomorphism for the same reason given as in the previous case: where u , v ∈ ( Z 2 ) 2 n {\displaystyle \mathbf {u,v} \in \left(\mathbb {Z} _{2}\right)^{2n}} . The symplectic product captures the commutation relations of any operators N ( u ) {\displaystyle \mathbf {N} \left(\mathbf {u} \right)} and N ( v ) {\displaystyle \mathbf {N} \left(\mathbf {v} \right)} : The above binary representation and symplectic algebra are useful in making the relation between classical linear error correction and quantum error correction more explicit. By comparing quantum error correcting codes in this language to symplectic vector spaces , we can see the following. A symplectic subspace corresponds to a direct sum of Pauli algebras (i.e., encoded qubits), while an isotropic subspace corresponds to a set of stabilizers.
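A minimal sketch of the binary (z|x) representation and symplectic product described above is given below. It checks commutation between Pauli strings and reproduces the syndrome table of the three-qubit example from the earlier section. The string-based encoding of Pauli operators is an implementation convenience, and the choice of generators (Z on qubits 1,2 and Z on qubits 2,3) is the standard bit-flip-code choice consistent with that syndrome table, not an equation quoted from the text.

```python
import numpy as np

def pauli_to_zx(pauli_string):
    """Map an n-qubit Pauli string (e.g. 'XZIY') to its binary (z|x) representation,
    using the convention I -> (0|0), Z -> (1|0), X -> (0|1), Y -> (1|1)."""
    table = {'I': (0, 0), 'Z': (1, 0), 'X': (0, 1), 'Y': (1, 1)}
    z, x = zip(*(table[p] for p in pauli_string))
    return np.array(z, dtype=int), np.array(x, dtype=int)

def symplectic_product(u, v):
    """For u = (z|x) and v = (z'|x'), return z.x' + x.z' (mod 2):
    0 when the corresponding Pauli operators commute, 1 when they anticommute."""
    (z, x), (zp, xp) = u, v
    return int((z @ xp + x @ zp) % 2)

# Stabilizer generators for the three-qubit bit-flip example (assumed standard choice)
generators = [pauli_to_zx('ZZI'), pauli_to_zx('IZZ')]

# Single bit-flip errors on each qubit, plus the no-error case
errors = {'no error': 'III', 'X on qubit 1': 'XII',
          'X on qubit 2': 'IXI', 'X on qubit 3': 'IIX'}

for name, err in errors.items():
    e = pauli_to_zx(err)
    # Symplectic product 0 corresponds to measurement outcome +1, product 1 to -1
    syndrome = tuple('+1' if symplectic_product(g, e) == 0 else '-1' for g in generators)
    print(f"{name:13s} -> syndrome {syndrome}")
```

Running this prints the syndromes (+1,+1), (-1,+1), (-1,-1) and (+1,-1), matching the cases enumerated earlier for the three-qubit code.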
https://en.wikipedia.org/wiki/Stabilizer_code
Stabilizing selection (not to be confused with negative or purifying selection [ 1 ] [ 2 ] ) is a type of natural selection in which the population mean stabilizes on a particular non-extreme trait value. This is thought to be the most common mechanism of action for natural selection because most traits do not appear to change drastically over time. [ 3 ] Stabilizing selection commonly uses negative selection (a.k.a. purifying selection) to select against extreme values of the character. Stabilizing selection is the opposite of disruptive selection . Instead of favoring individuals with extreme phenotypes, it favors the intermediate variants. Stabilizing selection tends to remove the more severe phenotypes, resulting in the reproductive success of the norm or average phenotypes. [ 4 ] This means that the most common phenotype in the population is selected for and continues to dominate in future generations . The Russian evolutionary biologist Ivan Schmalhausen founded the theory of stabilizing selection, publishing a paper in Russian titled "Stabilizing selection and its place among factors of evolution" in 1941 and a monograph "Factors of Evolution: The Theory of Stabilizing Selection" in 1945. [ 5 ] [ 6 ] Stabilizing selection causes the narrowing of the phenotypes seen in a population. This is because the extreme phenotypes are selected against, causing reduced survival in organisms with those traits. This results in a population consisting of fewer phenotypes, with most traits representing the mean value of the population. This narrowing of phenotypes causes a reduction in genetic diversity in a population. [ 7 ] Maintaining genetic variation is essential for the survival of a population because it is what allows the population to evolve over time. In order for a population to adapt to changing environmental conditions, it must have enough genetic diversity to select for new traits as they become favorable. [ 8 ] There are four primary types of data used to quantify stabilizing selection in a population. The first type of data is an estimation of the fitness of different phenotypes within a single generation. Quantifying fitness in a single generation creates predictions for the expected outcome of selection. The second type of data is changes in allelic frequencies or phenotypes across different generations. This allows quantification of the change in prevalence of a certain phenotype, indicating the type of selection. The third type of data is differences in allelic frequencies across space. This compares selection occurring in different populations and environmental conditions. The fourth type of data is DNA sequences from the genes contributing to observed phenotypic differences. The combination of these four types of data allows population studies that can identify the type of selection occurring and quantify the extent of selection. [ 9 ] However, a meta-analysis of studies that measured selection in the wild failed to find an overall trend for stabilizing selection. [ 10 ] One reason may be that methods for detecting stabilizing selection are complex. They can involve studying the changes that natural selection causes in the mean and variance of the trait, or measuring fitness for a range of different phenotypes under natural conditions and examining the relationship between these fitness measurements and the trait value, but analysis and interpretation of the results is not straightforward. [ 11 ] The most common form of stabilizing selection is based on the phenotypes of a population. 
In phenotype-based stabilizing selection, the mean value of a phenotype is selected for, resulting in a decrease in the phenotypic variation found in a population. [ 12 ] Stabilizing selection is the most common form of nonlinear (non-directional) selection in humans. [ 13 ] There are few examples of genes with direct evidence of stabilizing selection in humans. However, most quantitative traits (height, birthweight, schizophrenia) are thought to be under stabilizing selection, due to their polygenicity and the distribution of the phenotypes throughout human populations. [ 14 ]
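The narrowing of phenotypic variation described above can be illustrated with a toy simulation (not from the article): individuals whose trait values lie far from an intermediate optimum reproduce less often, and the population variance shrinks over generations while the mean stays near the optimum. All parameter values below are illustrative.

```python
# Toy sketch of stabilizing selection: Gaussian fitness around an optimum
# removes extreme phenotypes, narrowing the trait distribution.
# All numbers are illustrative; the model assumes some individuals survive
# each generation (true in practice with these parameters).

import math
import random

OPTIMUM = 0.0          # intermediate trait value favored by selection
SELECTION_WIDTH = 1.0  # smaller width = stronger stabilizing selection
MUTATION_SD = 0.05     # small inherited deviation per offspring

def fitness(trait):
    return math.exp(-((trait - OPTIMUM) ** 2) / (2 * SELECTION_WIDTH ** 2))

def next_generation(population):
    survivors = [t for t in population if random.random() < fitness(t)]
    # Offspring inherit a surviving parent's trait plus a small mutation.
    return [random.choice(survivors) + random.gauss(0, MUTATION_SD)
            for _ in range(len(population))]

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

population = [random.gauss(0, 2.0) for _ in range(1000)]  # broad initial spread
print("initial variance:", variance(population))
for _ in range(20):
    population = next_generation(population)
print("variance after 20 generations:", variance(population))
```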
https://en.wikipedia.org/wiki/Stabilizing_selection
Stable-isotope probing ( SIP ) is a technique in microbial ecology for tracing the uptake of nutrients in biogeochemical cycling by microorganisms. A substrate is enriched with a heavier stable isotope that is consumed by the organisms to be studied. [ 1 ] [ 2 ] Biomarkers with the heavier isotope incorporated into them can be separated from biomarkers containing the more naturally abundant lighter isotope by isopycnic centrifugation . For example, 13 CO 2 can be used to find out which organisms are actively photosynthesizing or consuming new photosynthate. As the biomarker, DNA with 13 C is then separated from DNA with 12 C by centrifugation. Sequencing the DNA identifies which organisms were consuming existing carbohydrates and which were using carbohydrates more recently produced from photosynthesis. [ 3 ] SIP with 18 O-labeled water can be used to find out which organisms are actively growing, because oxygen from water is incorporated into DNA (and RNA) during synthesis. [ 4 ] When DNA is the biomarker, SIP can be performed using isotopically labeled C, H, O, or N, though 13 C is used most often. The density shift is proportional to the change in mass of the DNA, which depends on the difference in mass between the rare and common isotopes for a given element and on the abundance of that element in the DNA. For example, the difference in mass between 18 O and 16 O (two atomic mass units) is twice that between 13 C and 12 C (one atomic mass unit), so incorporation of 18 O into DNA will cause a larger per-atom density shift than will incorporation of 13 C. Conversely, DNA contains nearly twice as many carbon atoms (11.25 per base, on average) as oxygen atoms (6 per base), so at equivalent labeling (e.g., 50 atom percent 13 C or 18 O), DNA labeled with 18 O will be only slightly more dense than DNA labeled to the same extent with 13 C. Similarly, nitrogen is less abundant in DNA (3.75 atoms per base, on average), so a weaker DNA buoyant density shift is observed with 15 N- versus 13 C-labeled or 18 O-labeled substrates. Larger buoyant density shifts are observed when multiple isotope tracers are used. [ 5 ] Because the density shift is a predictable function of the change in mass caused by isotope assimilation, stable-isotope probing can be modeled to estimate the amount of isotope incorporation, an approach called quantitative stable isotope probing (qSIP), [ 6 ] which has been applied to microbial communities in soils, [ 7 ] marine sediments, [ 8 ] and decomposing leaves [ 9 ] to compare rates of growth and substrate assimilation among different microbial taxa.
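The relative scale of these mass shifts follows from simple arithmetic on the per-base atom counts quoted above. The back-of-envelope sketch below (not from the article) computes the added mass per nucleotide for each label; it deliberately ignores the full buoyant-density calculation used in qSIP and is only meant to show why carbon's abundance nearly offsets oxygen's larger isotope mass gap.

```python
# Back-of-envelope estimate of added DNA mass per base for different labels,
# using the average per-base atom counts given in the text.  Illustrative
# only; real qSIP models map mass change onto buoyant density shifts.

ATOMS_PER_BASE = {"C": 11.25, "O": 6.0, "N": 3.75}   # averages quoted above
MASS_DIFFERENCE = {"C": 1.0, "O": 2.0, "N": 1.0}     # 13C-12C, 18O-16O, 15N-14N (amu)

def extra_mass_per_base(element, atom_fraction_labeled):
    """Added mass (amu) per nucleotide at a given atom-fraction labeling."""
    return ATOMS_PER_BASE[element] * atom_fraction_labeled * MASS_DIFFERENCE[element]

# Fully labeled DNA: 18O adds only slightly more mass per base than 13C,
# while 15N gives a markedly weaker shift.
print(extra_mass_per_base("C", 1.0))   # ~11.25 amu per base
print(extra_mass_per_base("O", 1.0))   # ~12.0  amu per base
print(extra_mass_per_base("N", 1.0))   # ~3.75  amu per base
```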
https://en.wikipedia.org/wiki/Stable-isotope_probing
Stable stratification of fluids occurs when each layer is less dense than the one below it. Unstable stratification is when each layer is denser than the one below it. Buoyancy forces tend to preserve stable stratification; the higher layers float on the lower ones. In unstable stratification, on the other hand, buoyancy forces cause convection . The less-dense layers rise through the denser layers above, and the denser layers sink through the less-dense layers below. Stratifications can become more or less stable if layers change density. The processes involved are important in many science and engineering fields. Stable stratifications can become unstable if layers change density. This can happen due to outside influences (for instance, if water evaporates from a freshwater lens , making it saltier and denser, or if a pot or layered beverage is heated from below, making the bottom layer less dense). However, it can also happen due to internal diffusion of heat (the warmer layer slowly heats the adjacent cooler one) or of other physical properties. This often causes mixing at the interface, creating new diffusive layers. Sometimes, two physical properties diffuse between layers simultaneously; salt and temperature, for instance. This may form diffusive layers or even salt fingering , in which the surfaces of the diffusive layers become so wavy that there are "fingers" of layers reaching up and down. Not all mixing is driven by density changes. Other physical forces may also mix stably stratified layers. Sea spray and whitecaps (foaming whitewater on waves) are examples of water mixed into air, and air into water, respectively. In a fierce storm the air/water boundary may grow indistinct. Some of these wind waves are Kelvin-Helmholtz waves . [ 1 ] Depending on the size of the velocity difference and the size of the density contrast between the layers, Kelvin-Helmholtz waves can look different. For instance, between two layers of air or two layers of water, the density difference is much smaller and the layers are miscible. Stratification is commonly seen in the planetary sciences. Solar energy passes as visible radiation through the air and is absorbed by the ground, to be re-emitted as heat radiation. The lower atmosphere is therefore heated from below (UV absorption in the ozone layer heats that layer from within). Outdoor air is thus usually unstably stratified and convecting, giving rise to wind. Temperature inversions are weather events that happen whenever an area of the lower atmosphere becomes stably stratified and thus stops moving. [ 2 ] [ 3 ] Oceans, on the other hand, are heated from above and are usually stably stratified. Only near the poles does the coldest and saltiest water sink. The deep ocean waters slowly warm and freshen through internal mixing (a form of double diffusion [ 4 ] ), and then rise back to the surface. In engineering applications, stable stratification or convection may or may not be desirable. In either case it may be deliberately manipulated. Stratification can strongly affect the mixing of fluids, [ 5 ] which is important in many manufacturing processes.
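The stability rule above reduces to a simple check on a density profile: a stack of layers is stably stratified only if density increases downward. The minimal sketch below (not from the article; the density values are illustrative) applies that rule to the freshwater-lens example, before and after evaporation makes the surface layer saltier and denser.

```python
# Minimal sketch of the stability rule described above: each layer must be
# less dense than the layer below it.  Density values are illustrative (kg/m^3).

def is_stably_stratified(densities_top_to_bottom):
    """Return True if every layer is less dense than the layer below it."""
    return all(upper < lower
               for upper, lower in zip(densities_top_to_bottom,
                                       densities_top_to_bottom[1:]))

fresh_lens_over_seawater = [1000.0, 1025.0]   # light freshwater over seawater: stable
after_evaporation = [1030.0, 1025.0]          # surface grew saltier and denser:
                                              # unstable, buoyancy drives convection

print(is_stably_stratified(fresh_lens_over_seawater))  # True
print(is_stably_stratified(after_evaporation))         # False
```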
https://en.wikipedia.org/wiki/Stable_and_unstable_stratification
In cellular biology , stable cells are cells that multiply only when needed. They spend most of the time in the quiescent G 0 phase of the cell cycle but can be stimulated to enter the cell cycle when needed. Examples include the liver , the proximal tubules of the kidney and endocrine glands .
https://en.wikipedia.org/wiki/Stable_cell
In model theory , a stable group is a group that is stable in the sense of stability theory . An important class of examples is provided by groups of finite Morley rank (see below). The Cherlin–Zilber conjecture (also called the algebraicity conjecture ), due to Gregory Cherlin (1979) and Boris Zil'ber (1977) , suggests that infinite (ω-stable) simple groups are simple algebraic groups over algebraically closed fields . The conjecture would have followed from Zilber 's trichotomy conjecture. Cherlin posed the question for all ω-stable simple groups, but remarked that even the case of groups of finite Morley rank seemed hard. Progress towards this conjecture has followed Borovik ’s program of transferring methods used in classification of finite simple groups . One possible source of counterexamples is bad groups : nonsoluble connected groups of finite Morley rank all of whose proper connected definable subgroups are nilpotent . (A group is called connected if it has no definable subgroups of finite index other than itself.) A number of special cases of this conjecture have been proved; for example:
https://en.wikipedia.org/wiki/Stable_group