Somatomedins are a group of proteins, produced predominantly by the liver when growth hormone acts on target tissue, that promote cell growth and division in response to stimulation by growth hormone (GH), also known as somatotropin (STH). [ 1 ] Somatomedins have biological effects similar to those of somatotropin. In addition to their growth-stimulating actions, somatomedins inhibit the release of growth hormone by acting directly on the anterior pituitary and by stimulating the secretion of somatostatin from the hypothalamus. Thus, levels of somatomedins are controlled via negative feedback through the intermediates of somatostatin and growth hormone. Somatomedins are produced in many tissues and have autocrine and paracrine actions in addition to their endocrine action. The liver is thought to be the predominant source of circulating somatomedins. [ 2 ] Three forms include somatomedin A (IGF-2), somatomedin B, and somatomedin C (IGF-1).
https://en.wikipedia.org/wiki/Somatomedin
A somatomedin receptor is a receptor which binds the somatomedins (IGFs). Somatomedin is abbreviated to IGF , in reference to insulin-like growth factor . There are two types: the IGF-1 receptor and the IGF-2 receptor.
https://en.wikipedia.org/wiki/Somatomedin_receptor
" Some Remarks on Logical Form " (1929 [ 1 ] ) was the only academic paper ever published by Ludwig Wittgenstein , and contained Wittgenstein's thinking on logic and the philosophy of mathematics immediately before the rupture that divided the early Wittgenstein of the Tractatus Logico-Philosophicus from the late Wittgenstein . [ 2 ] The approach to logical form in the paper reflected Frank P. Ramsey 's critique of Wittgenstein's account of color in the Tractatus , and has been analyzed by G. E. M. Anscombe and Jaakko Hintikka , among others. [ 2 ] In a letter to the editor of Mind in 1933 Wittgenstein referred to it as "a short (and weak) article". [ 3 ] This article about a mathematical publication is a stub . You can help Wikipedia by expanding it . This philosophy of science -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Some_Remarks_on_Logical_Form
Something and anything are concepts of existence in ontology , contrasting with the concept of nothing . Both are used to describe the understanding that what exists is not nothing, without needing to address the existence of everything . The philosopher David Lewis has pointed out that these are necessarily vague terms, asserting that "ontological assertions of common sense are correct if the quantifiers—such words as "something" and "anything"—are restricted roughly to ordinary or familiar things." [ 1 ] The idea that "something" is the opposite of "nothing" has existed at least since it was proposed by the Neoplatonist philosopher Porphyry in the 3rd century. [ 2 ] One of the most basic questions of both science and philosophy is: why is there something rather than nothing at all? [ 3 ] A question that follows from this is whether it is ever actually possible for there to be nothing at all, or whether there must always be something. [ 4 ] Grammatically, " something and anything are commonly classified as pronouns , although they do not stand for another noun so clearly as does thing itself, a word always classified as a noun". [ 5 ] In predicate logic , what is described in layman's terms as "something" can more specifically be regarded as existential quantification , that is, the predication of a property or relation to at least one member of the domain. It is a type of quantifier , a logical constant which is interpreted as "there exists," "there is at least one," or "for some." It expresses that a propositional function can be satisfied by at least one member of a domain of discourse . It asserts that a predicate within the scope of an existential quantifier is true of at least one value of a predicate variable .
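The existential-quantifier reading can be illustrated with a short sketch: over a finite domain of discourse, "something satisfies P" simply asks whether at least one member satisfies the predicate. The `exists` helper, sample domain, and predicates below are illustrative, not from the article.

```python
# "Something" as existential quantification: ∃x P(x) over a finite domain
# is true exactly when the predicate P holds for at least one member.
def exists(predicate, domain):
    """Return True if at least one member of the domain satisfies the predicate."""
    return any(predicate(x) for x in domain)

domain = range(1, 10)                         # a finite domain of discourse
print(exists(lambda x: x % 2 == 0, domain))   # "something is even" → True
print(exists(lambda x: x > 100, domain))      # "something exceeds 100" → False
```

Here `any` plays the role of the existential quantifier restricted to the listed domain, echoing Lewis's point that everyday quantifiers range over a restricted collection of familiar things.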
https://en.wikipedia.org/wiki/Something_(concept)
Somfy is a French group of companies founded in 1969, and is among the largest manufacturers and suppliers of controllers and drives for entrance gates, garage doors, window blinds and awnings. [ 1 ] It also produces other home automation products such as security devices. [ 2 ] Somfy is a member of the home automation committees for Matter , Thread and the Connectivity Standards Alliance .
https://en.wikipedia.org/wiki/Somfy
The Sommelet reaction is an organic reaction in which a benzyl halide is converted to an aldehyde by action of hexamine and water. [ 1 ] [ 2 ] It is named after the French chemist Marcel Sommelet, who first reported the reaction in 1913. [ 3 ] As one example, thiophene-2-carboxaldehyde is prepared by the reaction of hexamine with 2-chloromethylthiophene. [ 4 ] The reaction is formally an oxidation of the benzylic carbon. The benzyl halide 1 reacts with hexamine to give a quaternary ammonium salt 3 , alkylating just one nitrogen atom. The benzylammonium salt then undergoes an acid-catalyzed hydrolysis. Depending on the hydrolysis conditions, the hexamine unit might instead break apart, leaving a benzyl amine (the Delépine reaction ). The reaction can also be applied to the oxidation of benzylic amines. In this way, m -xylylenediamine can be converted to isophthalaldehyde . [ 5 ]
https://en.wikipedia.org/wiki/Sommelet_reaction
The Sommelet–Hauser rearrangement (named after M. Sommelet [ 1 ] and Charles R. Hauser [ 2 ] ) is a rearrangement reaction of certain benzyl quaternary ammonium salts . [ 3 ] [ 4 ] The reagent is sodium amide or another alkali metal amide, and the reaction product is an N , N -dialkylbenzylamine with a new alkyl group in the aromatic ortho position . For example, benzyltrimethylammonium iodide, [(C 6 H 5 CH 2 )N(CH 3 ) 3 ]I, rearranges in the presence of sodium amide to yield the o -methyl derivative of N , N -dimethylbenzylamine . [ 2 ] The benzylic methylene proton is acidic, and deprotonation takes place to produce the benzylic ylide ( 1 ). This ylide is in equilibrium with a second ylide that is formed by deprotonation of one of the ammonium methyl groups ( 2 ). Though the second ylide is present in much smaller amounts, it is more reactive than the first and undergoes a 2,3-sigmatropic rearrangement, with subsequent aromatization forming the final product ( 3 ). [ 5 ] The Stevens rearrangement is a competing reaction.
https://en.wikipedia.org/wiki/Sommelet–Hauser_rearrangement
In mechanics, the Sommerfeld effect is a phenomenon arising from feedback in the energy exchange between vibrating systems: for example, under certain conditions, energy supplied to a motor mounted on a rocking table results not in higher revolutions of the motor but in stronger vibrations of the table. It is named after Arnold Sommerfeld . In 1902, A. Sommerfeld analyzed the vibrations caused by a motor driving an unbalanced weight and wrote that " This experiment corresponds roughly to the case in which a factory owner has a machine set on a poor foundation running at 30 horsepower. He achieves an effective level of just 1/3, however, because only 10 horsepower are doing useful work, while 20 horsepower are transferred to the foundational masonry ". [ 1 ] [ 2 ] The first mathematical descriptions of the Sommerfeld effect were suggested by I. Blekhman [ 3 ] and V. Kononenko. [ 4 ] In the theory of hidden oscillations, the Sommerfeld effect is explained by multistability and the presence, in the phase space of a dynamical model without stationary states, of two coexisting hidden attractors : one attracts trajectories from the vicinity of zero initial data (which corresponds to the typical start-up of the motor), while the other corresponds to the desired mode of operation with a higher frequency of rotation. Depending on the model under consideration, the coexisting hidden attractors may be either periodic or chaotic; such dynamical models with the Sommerfeld effect are the earliest known mechanical example of a system without equilibria and with hidden attractors. [ 5 ] [ 6 ] For example, the Sommerfeld effect with hidden attractors can be observed in dynamic models of drilling rigs, where the electric motor may excite torsional vibrations of the drill. [ 7 ] [ 5 ]
https://en.wikipedia.org/wiki/Sommerfeld_effect
A Sommerfeld expansion is an approximation method developed by Arnold Sommerfeld in 1928 for a certain class of integrals which are common in condensed matter and statistical physics . Physically, the integrals represent statistical averages using the Fermi–Dirac distribution . When the inverse temperature β is a large quantity, the integral can be expanded [ 1 ] [ 2 ] in terms of β as

∫_{−∞}^{∞} H(ε)/(e^{β(ε−μ)} + 1) dε = ∫_{−∞}^{μ} H(ε) dε + (π²/6)(1/β²) H′(μ) + O(1/(βμ)⁴),

where H′(μ) is used to denote the derivative of H(ε) evaluated at ε = μ and where the O(xⁿ) notation refers to limiting behavior of order xⁿ. The expansion is only valid if H(ε) vanishes as ε → −∞ and goes no faster than polynomially in ε as ε → ∞. If the integral is from zero to infinity, then the integral in the first term of the expansion is from zero to μ and the second term is unchanged. Integrals of this type appear frequently when calculating electronic properties, like the heat capacity , in the free electron model of solids. In these calculations the above integral expresses the expected value of the quantity H(ε). For these integrals we can then identify β as the inverse temperature and μ as the chemical potential . Therefore, the Sommerfeld expansion is valid for large β (low temperature ) systems. We seek an expansion that is second order in temperature, i.e., to τ², where β⁻¹ = τ = k_B T is the product of temperature and the Boltzmann constant .
Begin with a change of variables to τx = ε − μ. Divide the range of integration, I = I₁ + I₂, and rewrite I₁ using the change of variables x → −x. Next, employ an algebraic 'trick' on the denominator of I₁. Return to the original variables with −τ dx = dε in the first term of I₁, and combine I = I₁ + I₂. The numerator in the second term can be expressed as an approximation to the first derivative, provided τ is sufficiently small and H(ε) is sufficiently smooth; the remaining definite integral is known in closed form, [ 3 ] which yields the π²/6 coefficient of the second-order term. We can obtain higher order terms in the Sommerfeld expansion by use of a generating function for moments of the Fermi distribution. Here k_B T = β⁻¹, and the Heaviside step function −θ(−ϵ) subtracts the divergent zero-temperature contribution. Expanding in powers of τ gives the higher-order corrections. [ 4 ] A similar generating function exists for the odd moments of the Bose function.
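The leading behaviour of the expansion can be checked numerically. The sketch below uses the illustrative choices H(ε) = ε, μ = 5, β = 10 (arbitrary units) and a simple trapezoidal quadrature, comparing the zero-to-infinity integral against the first two terms of the expansion:

```python
import math

def fermi(eps, mu, beta):
    # Fermi–Dirac distribution f(ε) = 1 / (e^{β(ε−μ)} + 1)
    return 1.0 / (math.exp(beta * (eps - mu)) + 1.0)

def trapezoid(f, a, b, n=200_000):
    # simple trapezoidal quadrature on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

mu, beta = 5.0, 10.0
H = lambda eps: eps  # test function with H'(μ) = 1

# Left side: ∫₀^∞ H(ε) f(ε) dε  (the tail beyond μ + 30/β is exponentially small)
lhs = trapezoid(lambda e: H(e) * fermi(e, mu, beta), 0.0, mu + 30.0 / beta)

# Sommerfeld expansion (zero-to-infinity variant): ∫₀^μ H dε + (π²/6) β⁻² H'(μ)
rhs = mu**2 / 2 + (math.pi**2 / 6) / beta**2

print(lhs, rhs)  # the two values agree to within the quadrature error
```

For a linear H the next correction (which involves higher derivatives of H) vanishes, so the agreement here is limited only by the numerical quadrature and the neglected exponentially small tails.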
https://en.wikipedia.org/wiki/Sommerfeld_expansion
The Sommerfeld identity is a mathematical identity, due to Arnold Sommerfeld , used in the theory of propagation of waves . The square root appearing in the integrand is to be taken with positive real part, to ensure the convergence of the integral and its vanishing in the limit z → ±∞. Here, R is the distance from the origin while r is the distance from the central axis of a cylinder as in the (r, ϕ, z) cylindrical coordinate system . Here the notation for Bessel functions follows the German convention, to be consistent with the original notation used by Sommerfeld. The function I₀(z) is the zeroth-order Bessel function of the first kind, better known by the notation I₀(z) = J₀(iz) in English literature. This identity is known as the Sommerfeld identity. [ 1 ] In alternative notation, the Sommerfeld identity can be more easily seen as an expansion of a spherical wave in terms of cylindrically-symmetric waves. [ 2 ] The notation used here is different from that above: r is now the distance from the origin and ρ is the radial distance in a cylindrical coordinate system defined as (ρ, ϕ, z). The physical interpretation is that a spherical wave can be expanded into a summation of cylindrical waves in the ρ direction, multiplied by a two-sided plane wave in the z direction; see the Jacobi-Anger expansion . The summation has to be taken over all the wavenumbers k_ρ. The Sommerfeld identity is closely related to the two-dimensional Fourier transform with cylindrical symmetry, i.e., the Hankel transform .
It is found by transforming the spherical wave along the in-plane coordinates (x, y, or ρ, ϕ) but not transforming along the height coordinate z. [ 3 ]
https://en.wikipedia.org/wiki/Sommerfeld_identity
In the design of fluid bearings , the Sommerfeld number ( S ) is a dimensionless quantity used extensively in hydrodynamic lubrication analysis. The Sommerfeld number is very important in lubrication analysis because it contains all the variables normally specified by the designer. The Sommerfeld number is named after Arnold Sommerfeld (1868–1951). The Sommerfeld number is typically defined by the following equation: [ 1 ]

S = (r/c)² (μN/P)

where r is the shaft radius, c is the radial clearance, μ is the absolute viscosity of the lubricant, N is the speed of the rotating shaft in rev/s, and P is the load per unit of projected bearing area. The second part of the equation is seen to be the Hersey number . However, an alternative definition for S is used in some texts based on the angular velocity ω = 2πN rather than on N. [ 2 ] It is therefore necessary to check which definition is being used when referring to design data or textbooks, since the value of S will differ by a factor of 2π. Nikolai Pavlovich Petrov 's method of lubrication analysis, which assumes a concentric shaft and bearing, was the first to explain the phenomenon of bearing friction . This method, which ultimately produces the equation known as Petrov's law (or Petroff's law ), is useful because it defines groups of relevant dimensionless parameters, and predicts a fairly accurate coefficient of friction , even when the shaft is not concentric. [ 3 ] Considering a vertical shaft rotating inside a bearing, it can be assumed that the bearing is subjected to a negligible load, the radial clearance space is completely filled with lubricant, and that leakage is negligible. The surface velocity of the shaft is U = 2πrN, where N is the rotational speed of the shaft in rev/s.
The shear stress in the lubricant can be found by assuming a constant rate of shear, from which follows the torque required to shear the film. If a small radial load W acts on the shaft and hence the bearing, the frictional drag force can be considered equal to the product fW , with a corresponding friction torque. If the small radial load W is considered negligible, setting the two expressions for torque equal to one another and solving for the coefficient of friction yields

f = 2π² (μN/P)(r/c),

which is known as Petroff's law or the Petroff equation. It provides a quick and simple means of obtaining reasonable estimates of coefficients of friction of lightly loaded bearings. Shigley, Joseph Edward; Mischke, Charles R. (1989). Mechanical Engineering Design . New York: McGraw-Hill.
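Petroff's law and the Sommerfeld number are straightforward to compute; a minimal sketch with illustrative (not sourced) bearing values:

```python
import math

def petroff_friction(mu, N, P, r, c):
    """Petroff's law: f = 2π² (μN/P)(r/c).
    mu: lubricant viscosity [Pa·s], N: shaft speed [rev/s],
    P: load per unit projected bearing area [Pa],
    r: shaft radius [m], c: radial clearance [m]."""
    return 2 * math.pi**2 * (mu * N / P) * (r / c)

def sommerfeld_number(mu, N, P, r, c):
    """S = (r/c)² (μN/P), using the rev/s convention (no 2π factor)."""
    return (r / c) ** 2 * (mu * N / P)

# Illustrative journal bearing: 50 mm shaft, 25 µm radial clearance
f = petroff_friction(mu=0.03, N=30.0, P=2.0e5, r=0.025, c=2.5e-5)
S = sommerfeld_number(mu=0.03, N=30.0, P=2.0e5, r=0.025, c=2.5e-5)
print(f, S)  # f ≈ 0.089, S = 4.5
```

The alternative angular-velocity convention would multiply S by 2π (since ω = 2πN), which is why the definition in use must be checked against the design data or textbook at hand.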
https://en.wikipedia.org/wiki/Sommerfeld_number
Sommerfeld tracking , named after German expatriate engineer Kurt Joachim Sommerfeld, [ 1 ] [ 2 ] then living in Cambridge, England, was a lightweight wire-mesh type of prefabricated airfield surface. First put into use by the British in 1941, it consisted of wire netting stiffened laterally by steel rods. This gave it load-carrying capacity while staying flexible enough to be rolled up. [ 3 ] Kurt Sommerfeld developed the track in the workshops of D. Mackay engineering, based in East Road, Cambridge. He worked on the design with Donald Mackay. Nicknamed " tin lino ", [ 2 ] Sommerfeld tracking consisted of rolls 3.25 m (10 ft 8 in) wide by 23 m (75 ft 6 in) long. Mild steel rods threaded through at 9-inch intervals gave it strength. The rolls could be joined at the edges by threading flat steel bar through loops in the ends of the rods. [ 3 ] Sommerfeld tracking was used extensively by the Royal Air Force in the Second World War to make runways at their airfields, as it could be deployed quickly. In addition, some 44,500,000 yards of Sommerfeld tracking was supplied to US forces by Britain in Reverse Lend-Lease. [ 4 ] Sommerfeld tracking was used widely on RAF and USAAF Advanced Landing Grounds , both in the UK and elsewhere. The ground was cleared and, if swampy, a layer of coir (also known as coco peat ) or coconut matting laid down. The Sommerfeld tracking was unrolled over the ground, pulled tight by a tractor, bulldozer, or similar vehicle, then fastened to the ground with angle-iron pickets. [ 1 ] A typical runway made of Sommerfeld tracking was 3,000 feet (910 m) by 156 feet (48 m). The method did have some limitations: there are various reports of airfields being out of use during heavy rainfall due to mud, and of the tracking lifting off the ground. There are also anecdotal reports of it causing damage to aircraft, such as wheels being torn off. [ citation needed ]
https://en.wikipedia.org/wiki/Sommerfeld_tracking
The Sommerfeld–Kossel displacement law states that the first spark (singly ionized) spectrum of an element is similar in all details to the arc (neutral) spectrum of the element preceding it in the periodic table . Likewise, the second (doubly ionized) spark spectrum of an element is similar in all details to the first (singly ionized) spark spectrum of the element preceding it, or to the arc (neutral) spectrum of the element with atomic number two less, and so forth. [ 1 ] Hence, the spectra of C I (neutral carbon), N II (singly ionized nitrogen), and O III (doubly ionized oxygen) atoms are similar, apart from shifts of the spectra to shorter wavelengths. [ 1 ] C I, N II, and O III all have the same number of electrons, six, and the same ground-state electron configuration , 1s² 2s² 2p². The law was discovered by and named after Arnold Sommerfeld and Walther Kossel , who set it forth in a paper submitted to Verhandlungen der Deutschen Physikalischen Gesellschaft in early 1919. [ 3 ]
https://en.wikipedia.org/wiki/Sommerfeld–Kossel_displacement_law
Somnium ( Latin for "The Dream") — full title: Somnium, seu opus posthumum De astronomia lunari — is a novel written in Latin in 1608 by Johannes Kepler . It was first published in 1634 by Kepler's son, Ludwig Kepler, several years after the death of his father. In the narrative, an Icelandic boy and his witch mother learn of an island named Levania (the Moon ) from a daemon ("Levana" is the Hebrew word for the moon). Somnium presents a detailed imaginative description of how the Earth might look when viewed from the Moon, and is considered the first serious scientific treatise on lunar astronomy. Carl Sagan and Isaac Asimov have referred to it as one of the earliest works of science fiction . [ 1 ] [ 2 ] The story begins with Kepler reading about a skillful magician named Libussa . He falls asleep while reading about her. He recounts a strange dream he had from reading that book. The dream begins with Kepler reading a book about Duracotus, an Icelandic boy who is 14 years old. Duracotus's mother, Fiolxhilde, makes a living selling bags of herbs and cloth with strange markings on them. After he cuts into one of these bags and ruins her sale, Duracotus is sold by Fiolxhilde to a skipper. He travels with the skipper for a while until a letter is to be delivered to Tycho Brahe on the island of Hven . Since Duracotus is made seasick by the trip there, the skipper leaves Duracotus to deliver the letter and stay with Tycho. Tycho asks his students to teach Duracotus Danish so they can talk. Along with learning Danish, Duracotus learns of astronomy from Tycho and his students. Duracotus is fascinated with astronomy and enjoys the time they spend looking at the night sky. Duracotus spends several years with Tycho before returning home to Iceland . Upon his return to Iceland, Duracotus finds his mother still alive. She is overjoyed to learn that he is well-studied in astronomy as she too possesses knowledge of astronomy. 
One day, Fiolxhilde reveals to Duracotus how she learned of the heavens. She tells him about the daemons she can summon. These daemons can move her anywhere on Earth in an instant. If the place is too far away for them to take her, they describe it in great detail. She then summons her favorite daemon to speak with them. The summoned daemon tells them, "Fifty thousand miles up in the Aether lies the island of Levania," which is Earth's Moon. [ 3 ] According to the daemon, there is a pathway between the island of Levania and Earth. When the pathway is open, daemons can take humans to the island in four hours. The journey is a shock to humans, so they are sedated for the trip. Extreme cold is also a concern on the trip, but the daemons use their powers to ward it off. Another concern is the air, so humans have to have damp sponges placed in their nostrils in order to breathe. The trip is made with the daemons pushing the humans toward Levania with great force. At the Lagrangian point between the Earth and the Moon, [ 4 ] the daemons have to slow the humans down lest they hurtle with great force into the Moon. After describing the trip to Levania, the daemon notes that daemons are overpowered by the Sun. They dwell in the shadows of the Earth, called Volva by the inhabitants of Levania. The daemons can rush to Volva during a solar eclipse ; otherwise they remain hidden in shadows on Levania. After the daemon describes other daemons' behavior, she goes on to describe Levania. Levania is divided into two hemispheres called Privolva and Subvolva, corresponding to the far and near sides of the Moon. Privolva never sees Volva, while Subvolva sees Volva as their moon. Volva goes through the same phases as the actual Moon. The daemon continues the descriptions of Subvolva and Privolva.
Some of these details are scientific in nature, including how eclipses would look from the Moon, the sizes of the planets varying due to the Moon's distance from the Earth, and an idea about the size of the Moon. Other details are fictional in nature, such as descriptions of the creatures that inhabit Subvolva and Privolva, plant growth on each side, and the life and death cycle of Levania. The dream is cut short in the middle of the description of the creatures of Privolva. Kepler wakes up from the dream because of a storm outside. He then realizes that his head is covered and he is wrapped in blankets just like the characters in his story. [ 5 ] Somnium began as a student dissertation in which Kepler defended the Copernican doctrine of the motion of the Earth , suggesting that an observer on the Moon would find the Earth's movements as clearly visible as the Moon's activity is to the Earth's inhabitants. Nearly 20 years later, Kepler added the dream framework, and after another decade, he drafted a series of explanatory notes reflecting upon his turbulent career and the stages of his intellectual development. The book was edited by Ludwig Kepler and Jacob Bartsch , after Kepler's death in 1630. Karl Siegfried Guthke [ de ] notes that this means that the story predates the invention of the telescope . [ 6 ] : 84 There are many similarities to Kepler's real life in Somnium . Duracotus spends a considerable amount of time working for Tycho Brahe. Kepler worked under Tycho Brahe in 1600 before becoming Imperial Mathematician. Kepler's mother, Katharina Kepler , would be arrested on charges of being a witch. Kepler fought for five years to free her. After her death, Kepler wrote extensive notes to explain his narrative. [ 7 ] The book was published posthumously in 1634 by his son, Ludwig Kepler. [ 8 ] Kepler uses a daemon to describe the island of Levania in many scientific ways. The fixed stars are in the same position as the Earth's fixed stars. 
The planets appear larger from Levania than from Earth due to Levania's distance from them. Levania also sees planetary motions in a different way. Unlike the Moon, which traverses the sky from a terrestrial point of view, the Earth remains fixed in the lunar sky as a consequence of tidal locking . The only small movements the Earth makes are due to the Moon's librations . The inhabitants at the divisor see the planets differently from the rest of the Moon: Mercury and Venus in particular seem bigger to them. [ 5 ] On Privolva, a day is around 14 Earth days, sometimes less, and night is 15 or 16 Earth days. During the nights, Privolva experiences intense cold and strong winds. During the day, Privolva experiences extreme heat with no wind. During the night, water is pumped to Subvolva. During the Privolvan day, some of the water is pumped back to Privolva to protect its inhabitants from the intense heat. The inhabitants are described as giants that hide under water to escape from the heat of the day. [ 5 ] On Subvolva, a day and night together last around 30 Earth days. A day on Subvolva mirrors the phases of the Moon seen from Earth: Subvolva sees the Earth as its moon, and the Earth goes through phases just as the Moon does during their night. Kepler notes that Subvolva is inhabited by serpent-like creatures. The Subvolvan terrain is full of fields and towns, just like Earth. At night on Privolva, all of the water is pumped to Subvolva to submerge the land so only a small portion remains above the waves. The Subvolvans are protected from the Sun by almost constant cloud cover and rain. [ 5 ]
https://en.wikipedia.org/wiki/Somnium_(novel)
In mathematical analysis and number theory , Somos' quadratic recurrence constant or simply Somos' constant is a constant defined as an expression of infinitely many nested square roots . It arises when studying the asymptotic behaviour of a certain sequence [ 1 ] and also in connection to the binary representations of real numbers between zero and one . [ 2 ] The constant is named after Michael Somos . It is defined by:

σ = √(1 √(2 √(3 √(4 ⋯))))

which gives a numerical value of approximately: [ 3 ]

σ ≈ 1.6616879496…

Somos' constant can be alternatively defined via the following infinite product:

σ = ∏_{k=1}^∞ k^(1/2^k)

This can be easily rewritten into a far more quickly converging product representation, which can then be compactly represented in infinite product form. Another product representation is given by: [ 4 ] Expressions for ln σ (sequence A114124 in the OEIS ) include: [ 4 ] [ 5 ] Integrals for ln σ are given by: [ 4 ] [ 6 ] The constant σ arises when studying the asymptotic behaviour of the sequence [ 1 ]

g₀ = 1; gₙ = n gₙ₋₁² for n ≥ 1,

with first few terms 1, 1, 2, 12, 576, 1658880, … (sequence A052129 in the OEIS ). This sequence can be shown to have asymptotic behaviour governed by σ: roughly, gₙ grows like σ^(2ⁿ) up to lower-order factors. [ 4 ] Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent Φ(z, s, q). [ 6 ] If one defines the Euler-constant function (which gives Euler's constant for z = 1), one obtains further expressions for σ. [ 7 ] [ 8 ] [ 9 ] One may define a "continued binary expansion" for all real numbers in the set (0, 1], similarly to the decimal expansion or simple continued fraction expansion . This is done by considering the unique base-2 representation for a number x ∈ (0, 1] which does not contain an infinite tail of 0's (for example write one half as 0.01111…₂ instead of 0.1₂).
Then define a sequence (aₖ) ⊆ ℕ which gives the differences in positions of the 1's in this base-2 representation. This expansion for x is now given by: [ 10 ]

x = ⟨a₁, a₂, a₃, …⟩

For example, for the fractional part of π we have:

{π} = 0.14159 26535 89793… = 0.00100 10000 11111…₂ (sequence A004601 in the OEIS )

The first 1 occurs in position 3 after the radix point . The next 1 appears three places after the first one, the third 1 appears five places after the second one, etc. By continuing in this manner, we obtain:

π − 3 = ⟨3, 3, 5, 1, 1, 1, 1, …⟩ (sequence A320298 in the OEIS )

This gives a bijective map (0, 1] ↦ ℕ^ℕ, such that for every real number x ∈ (0, 1] we uniquely can give: [ 10 ]

x = ⟨a₁, a₂, a₃, …⟩ :⇔ x = Σ_{k=1}^∞ 2^−(a₁+…+aₖ)

It can now be proven that for almost all numbers x ∈ (0, 1] the limit of the geometric mean of the terms aₖ converges to Somos' constant. That is, for almost all numbers in that interval we have: [ 2 ]

σ = lim_{n→∞} (a₁a₂…aₙ)^(1/n)

Somos' constant is universal for the "continued binary expansion" of numbers x ∈ (0, 1] in the same sense that Khinchin's constant is universal for the simple continued fraction expansions of numbers x ∈ ℝ. The generalized Somos' constants may be given by an analogous nested-radical expression for t > 1.
Several series for the constant hold, together with a connection to the Euler-constant function [ 8 ] and a limit involving Euler's constant γ.
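The quantities above are easy to experiment with numerically. The sketch below computes σ from its product representation ∏ k^(1/2^k), generates the quadratic-recurrence sequence, and extracts the "continued binary expansion" of π − 3 (function names are illustrative):

```python
import math

def somos_constant(n_terms=60):
    # σ = ∏_{k≥1} k^(1/2^k), computed via its logarithm for stability
    return math.exp(sum(math.log(k) / 2**k for k in range(1, n_terms + 1)))

def somos_sequence(n):
    # g₀ = 1, gₙ = n·gₙ₋₁²  →  1, 1, 2, 12, 576, 1658880, …
    g = [1]
    for k in range(1, n + 1):
        g.append(k * g[-1] ** 2)
    return g

def continued_binary(x, n_terms=7):
    # gaps between successive 1-bits in the base-2 expansion of x ∈ (0, 1]
    gaps, last, pos = [], 0, 0
    while len(gaps) < n_terms:
        pos += 1
        x *= 2
        if x >= 1:
            x -= 1
            gaps.append(pos - last)
            last = pos
    return gaps

print(somos_constant())               # ≈ 1.6616879496…
print(somos_sequence(5))              # [1, 1, 2, 12, 576, 1658880]
print(continued_binary(math.pi - 3))  # [3, 3, 5, 1, 1, 1, 1]
```

Note that the geometric mean of the expansion terms of any single number converges only slowly, so many bits are needed before it visibly approaches σ; the almost-everywhere statement is about the limit, not about short prefixes.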
https://en.wikipedia.org/wiki/Somos'_quadratic_recurrence_constant
Sonali Mukherjee is a woman from Dhanbad , India, whose face was permanently disfigured by an acid attack in 2003, when she was 18. [ 1 ] [ 2 ] [ 3 ] Her family has spent all their savings on her treatment. Mukherjee was born in Dhanbad . She was a National Cadet Corps cadet, but had to quit after her attack. [ citation needed ] In 2003, about a month and a half prior to the incident, three alleged assailants, Tapas Mitra and his two friends Sanjay Paswan and Bhrahmadev Hajra, told her that she was a ghamandi (arrogant) person and that they would teach her a lesson. Her father later complained to the families of the three men. On 22 April, when she was asleep on the roof of her house, she was attacked with acid and left with a burnt face and other severe injuries. Her sister was also injured in the incident. [ 4 ] The perpetrators were sentenced to nine years in jail, but were granted bail when they appealed to the High Court. Mukherjee's family approached the court and various other authorities for justice, including the Chief Minister of Jharkhand and multiple MPs, but she received "aashwasan [assurances] ... nothing else". [ 4 ] Chandidas Mukherjee, Sonali's father, later stated in an interview: "We appealed in the high court... Nothing happened. They were sent to jail, but were released soon after. Now, they are busy enjoying their lives. The law against acid attackers needs to be made tougher. Otherwise, we will have many more Sonalis". [ 5 ] In February 2014, the State Government of Jharkhand appointed Sonali Mukherjee as a Grade III clerk in the welfare department of the Bokaro deputy commissioner's office. Mukherjee drew global attention when she appealed for euthanasia. Her wish to meet Amitabh Bachchan on the sets of Kaun Banega Crorepati season 6 [ 6 ] was granted in 2012. Accompanied by Lara Dutta in the game, she won ₹ 2.5 million (US$30,000). [ 7 ]
https://en.wikipedia.org/wiki/Sonali_Mukherjee
Sonar was a free mobile application which showed the user how they were connected to other individuals in a room via publicly available social media profiles and location information from Foursquare , Twitter , and Facebook . [ 1 ] [ 2 ] Sonar was founded by Ocean City, Maryland , native Brett Martin, [ 3 ] and was launched in 2011 at TechCrunch Disrupt New York. [ 1 ] [ 4 ] Sonar was the fourth business to come out of New York-based mobile incubator K2 Media Labs, the previous three being Fingerprint, Tracks, and MarketSharing. [ 3 ] Sonar went offline in September 2013, at which time Martin stated that he was no longer working on the project. [ 5 ]
https://en.wikipedia.org/wiki/Sonar_(mobile_application)
Sonata was a 3D building design software application developed in the early 1980s and now regarded as the forerunner of today's building information modeling applications. [ 1 ] [ 2 ] Sonata was commercially released in 1986, [ 3 ] having been developed independently by Jonathan Ingram; it was sold to T2 Solutions (renamed from GMW Computers in 1987, [ 4 ] and eventually bought by Alias|Wavefront) [ 5 ] as a successor to GMW's RUCAPS . It ran on workstation computer hardware (by contrast, other 2D computer-aided design (CAD) systems could run on personal computers ). The system was not expensive, according to Michael Phiri: [ 6 ] Reiach Hall purchased "three Sonata workstations on Silicon Graphics machines, at a total cost of approximately £2000 each" [1990 prices]. Approximately 1,000 seats were sold between 1985 and 1992. As a BIM application, in addition to geometric modelling, it could model complete buildings, including complex parametrics, costs and staging of the construction process. [ 7 ] Archicad founder Gábor Bojár has acknowledged that Sonata "was more advanced in 1986 than Archicad at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later". [ 8 ] Many projects were designed and built using Sonata, including Peddle Thorp Architects' Rod Laver Arena in 1987 and the Gatwick Airport North Terminal Domestic Facility by Taylor Woodrow . [ 9 ] The US-based architect HKS used the software in 1992 to design a horse racing facility ( Lone Star Park in Grand Prairie , Texas ) and subsequently purchased the successor product, Reflex. [ 10 ] Target Australia Pty. Ltd., the Australian discount department store retailer, bought two Sonata licences in 1992 to replace two RUCAPS workstations originally from Coles Supermarkets . The software was run on two Silicon Graphics IRIS Indigo workstations, and staff were trained to use the software, including its parametric language.
The simple but powerful parametrics enabled productivity gains in documenting buildings and fixture layouts, and the object-oriented system suited the standard components installed by the retailer. Combined with multiple project access (MPA) networking on the Unix operating system platform, a key selection criterion for continuing with the RUCAPS-Sonata architecture, this enabled the retailer's 50-stores-in-5-years program during the late 1990s to be executed with a small team. More workstations were purchased, including Silicon Graphics IRIS Indigo and Personal IRIS machines from the Queensland University of Technology . Year 2000 funding enabled the purchase of eight Silicon Graphics O2 workstations, bringing the network to 11 workstations. The department continued to follow the development of Reflex and had contact with other users, including Jeff Findlay at Peddle Thorp Architects. The business change to PTC and the product's shift away from building design toward a mechanical engineering system, combined with Silicon Graphics' move to the Intel x86 architecture, led Target to change to the most similar CAD software, Graphisoft's Archicad. The Sonata business was founded in 1984 and, by one account, it "disappeared in a mysterious, corporate black hole, somewhere in eastern Canada in 1992," [ 11 ] after new owner Alias Research discontinued marketing of the product. [ 12 ] Ingram then went on to develop Reflex , bought out by Parametric Technology Corporation ( PTC ) in 1996. [ 11 ] This business software article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sonata_(building_design_software)
A sonic black hole , sometimes called a dumb hole or acoustic black hole , is a phenomenon in which phonons (sound perturbations) are unable to escape from a region of a fluid that is flowing more quickly than the local speed of sound . They are called sonic, or acoustic, black holes because these trapped phonons are analogous to light in astrophysical (gravitational) black holes . Physicists are interested in them because they have many properties similar to astrophysical black holes and, in particular, emit a phononic version of Hawking radiation . [ 1 ] [ 2 ] This Hawking radiation can be spontaneously created by quantum vacuum fluctuations, in close analogy with Hawking radiation from a real black hole. Alternatively, Hawking radiation can be stimulated in a classical process. The boundary of a sonic black hole, at which the flow speed changes from being greater than the speed of sound to less than the speed of sound, is called the event horizon . The study of acoustic black holes as laboratory analogues was first proposed by W. G. Unruh in 1981. [ 3 ] However, the first black hole analogue was not created in a laboratory until 2009. It was created in a rubidium Bose–Einstein condensate using a technique called density inversion. This technique creates a flow by repelling the condensate with a potential minimum. The surface gravity and temperature of the sonic black hole were measured, but no attempt was made to detect Hawking radiation. However, the scientists who created it predicted that the experiment was suitable for detection and suggested a method by which it might be done by lasing the phonons. [ 4 ] In 2014, stimulated Hawking radiation was reported in an analogue black-hole laser by the same researchers. [ 2 ] Quantum, spontaneous Hawking radiation was observed later. [ 5 ] [ 6 ] [ 7 ] A rotating sonic black hole was used in 2010 to give the first laboratory testing of superradiance , a process whereby energy is extracted from a black hole.
[ 8 ] Sonic black holes are possible because phonons in perfect fluids exhibit the same properties of motion as fields, such as gravity, in space and time. [ 1 ] For this reason, a system in which a sonic black hole can be created is called a gravity analogue . Nearly any fluid can be used to create an acoustic event horizon, but the viscosity of most fluids creates random motion [ citation needed ] that makes features like Hawking radiation nearly impossible to detect. The complexity of such a system would make it very difficult to gain any knowledge about such features even if they could be detected. [ 9 ] Many nearly perfect fluids have been suggested for use in creating sonic black holes, such as superfluid helium, one-dimensional degenerate Fermi gases , and Bose–Einstein condensates . Gravity analogues other than phonons in a fluid, such as slow light and systems of ions, have also been proposed for studying black hole analogues. [ 10 ] The fact that so many systems mimic gravity is sometimes used as evidence for the theory of emergent gravity , which could help reconcile relativity and quantum mechanics. [ 11 ] In addition to the above-mentioned sonic or acoustic black holes that can be viewed as analogues of astrophysical black holes, physical objects bearing the same names also exist in acoustic and vibration engineering, where they are used for sound absorption and for damping structural vibrations. [ 12 ] The acoustic black hole effect in such objects can be achieved by creating a gradual reduction of sound velocity in a waveguide, or of elastic wave velocity in a solid structure (e.g. flexural wave velocity in thin plates), with propagation distance. The required velocity reduction should follow a power-law function of the propagation distance, and the velocity at the end of the wave propagation path should be reduced to almost zero.
In addition, a small amount of traditional sound- or vibration-absorbing material should be inserted in the area of very low propagation velocity. Under these conditions, the described sonic or acoustic black holes provide almost 100% absorption of the incident air-borne or structure-borne acoustic waves.
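The power-law taper can be illustrated numerically. Under the thin-plate (Kirchhoff) approximation, the flexural phase velocity scales as the square root of the local thickness, so a thickness profile h(x) = ε·x^m drives the wave speed toward zero at the tip. The sketch below is only an illustration: the steel material constants, the frequency, and the taper parameters are assumptions, not data from the text.

```python
import math

# Flexural (bending) wave phase velocity in a thin plate:
#   c_f(h) = sqrt(omega) * (D / (rho * h)) ** 0.25,  with D = E h^3 / (12 (1 - nu^2))
# Since D/h is proportional to h^2, c_f is proportional to sqrt(h):
# a power-law taper h(x) = eps * x**m therefore sends c_f -> 0 as x -> 0.

E, rho, nu = 210e9, 7800.0, 0.3      # steel, illustrative values
omega = 2 * math.pi * 1000.0         # 1 kHz, illustrative

def flexural_speed(h: float) -> float:
    """Flexural wave phase velocity for plate thickness h (metres)."""
    D = E * h**3 / (12 * (1 - nu**2))
    return math.sqrt(omega) * (D / (rho * h)) ** 0.25

eps, m = 0.002, 2                    # quadratic taper; m >= 2 is typical for ABH wedges
for x in (1.0, 0.1, 0.01):           # positions approaching the tip at x = 0
    h = eps * x**m
    print(f"x = {x:5.2f} m  h = {h:.2e} m  c_f = {flexural_speed(h):8.2f} m/s")
```

The run shows the velocity collapsing by a factor of ten for each factor-of-ten step toward the tip (c_f ∝ sqrt(h) = sqrt(ε)·x for m = 2), which is the mechanism that traps and dissipates the incident wave.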
https://en.wikipedia.org/wiki/Sonic_black_hole
Sonic characteristics of marine species - Noise in the ocean is of great importance for ocean exploration, oceanographic and fisheries studies, sonar operations, and related activities. The wide range of systems used in ocean research demands a characterization of the noise sources in the ocean. Ambient noise in the ocean is composite in nature, with components emanating from a variety of sources, and identifying these sources is of prime importance because of its diverse practical applications. This requires characterizing noise sources that are both man-made and biological in origin. [ 1 ] [ 2 ] This biology article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sonic_characteristics_of_marine_species
Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information , meaning, and aesthetic/emotional qualities in interactive contexts. [ 1 ] [ 2 ] Sonic interaction design is at the intersection of interaction design and sound and music computing . If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, then in sonic interaction design sound mediates interaction, either as a display of processes or as an input medium. Research in this area focuses on experimental scientific findings about human sound reception in interactive contexts. [ 3 ] During closed-loop interactions, the users manipulate an interface that produces sound, and the sonic feedback in turn affects the users' manipulation. In other words, there is a tight coupling between auditory perception and action. [ 4 ] Listening to sounds might not only activate a representation of how the sound was made: it might also prepare the listener to react to the sound. Cognitive representations of sounds might be associated with action-planning schemas, and sounds can also unconsciously cue a further reaction on the part of the listener. [ 5 ] Sonic interactions have the potential to influence the users' emotions : the quality of the sounds affects the pleasantness of the interaction, and the difficulty of the manipulation influences whether the user feels in control or not. [ 6 ] Product design in the context of sonic interaction design deals with methods and experiences for designing interactive products having a salient sonic behaviour. Products, in this context, are either tangible and functional objects that are designed to be manipulated, [ 7 ] [ 8 ] or usable simulations of such objects as in virtual prototyping .
Research and development in this area relies on studies from other disciplines. In design research for sonic products, a set of practices has been inherited from a variety of fields and tested in contexts where research and pedagogy naturally intermix. In the context of sonic interaction design, interactive art and music projects design and research aesthetic experiences where sonic interaction is the focus. The creative and expressive aspects – the aesthetics – are more important than conveying information through sound. Practices include installations, performances, public art and interactions between humans through digitally-augmented objects/environments. These often integrate elements such as embedded technology, gesture-sensitive devices, speakers or context-aware systems. The experience is the focus, addressing how humans are affected by the sound, and vice versa. Interactive art and music allow researchers to question existing paradigms and models of how humans interact with technology and sound, going beyond paradigms of control (a human controlling a machine). Users are part of a loop which includes action and perception. Interactive art and music projects invite explorative actions and playful engagement. There is also a multi-sensory aspect; haptic-audio [ 18 ] and audio-visual projects are especially popular. Amongst many other influences, this field is informed by the merging of the roles of instrument-maker, composer and performer. [ 19 ] Artistic research in sonic interaction design concerns productions in the interactive arts and performing arts , exploiting the role of enactive engagement with sound-augmented interactive objects. [ 20 ] Sonification is the data-dependent generation of sound, if the transformation is systematic, objective and reproducible, so that it can be used as a scientific method.
[ 21 ] For sonic interaction design, sonification provides a set of methods to create interaction sounds that encode relevant data, so that the user can perceive or interpret the conveyed information. Sonification does not necessarily need to represent huge amounts of data in sound, but may convey only one or a few data values in a sound. To give an example, imagine a light switch that, on activation, would create a short sound that depends on the electric power consumed through the cable: more energy-wasting lamps would perhaps systematically result in more annoying switch sounds. This example shows that sonification aims to provide some information by using its systematic transformation into sound. The integration of data-driven elements in interaction sound may serve different purposes. Within the field of sonification, sonic interaction design acknowledges the importance of human interaction for understanding and using auditory feedback . [ 24 ] Within sonic interaction design, sonification can help and offer solutions, methods, and techniques to inspire and guide the design of products or interactive systems.
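The light-switch example can be sketched as a one-value parameter mapping. The wattage and frequency ranges below are illustrative assumptions, and a real design would shape timbre and roughness rather than emit a bare sine tone.

```python
import math

def watts_to_pitch(power_w: float, p_min=5.0, p_max=200.0,
                   f_min=220.0, f_max=880.0) -> float:
    """Parameter-mapping sonification sketch: map one data value, lamp
    power in watts, onto the frequency of a short feedback tone, so
    higher consumption sounds higher and more insistent. All numeric
    ranges are hypothetical design choices."""
    p = min(max(power_w, p_min), p_max)        # clamp to the design range
    t = (p - p_min) / (p_max - p_min)          # normalize to [0, 1]
    return f_min * (f_max / f_min) ** t        # log-scale for perceptual evenness

def tone(freq: float, dur=0.1, sr=8000):
    """Samples of the switch's feedback sound (a pure sine, for brevity)."""
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(int(dur * sr))]

print(watts_to_pitch(5))    # efficient lamp -> low tone (220 Hz here)
print(watts_to_pitch(200))  # energy-hungry lamp -> high tone (880 Hz here)
```

The mapping is systematic and reproducible, which is what distinguishes sonification from merely decorative sound feedback.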
https://en.wikipedia.org/wiki/Sonic_interaction_design
Sonic soot blowers offer a cost-effective and non-destructive means of preventing ash and particulate build-up within the power generation industry. They use high-energy, low-frequency sound waves that provide 360° particulate de-bonding, travelling at a speed in excess of 344 metres per second. Because they employ non-destructive sound waves, unlike steam soot blowers they eliminate any concerns over corrosion , erosion or mechanical damage and do not produce an effluent stream . The sonic soot blower can in some ways be compared to a musical reed instrument such as an oboe , where the ‘base tone’ is created by blowing air over a reed and then converted into a particular high or low note, depending on how far the sound wave has to travel inside the body of the instrument. The sonic soot blower operates in the same manner, the ‘base tone’ being produced by passing compressed air into a wave generator which houses a titanium diaphragm, causing it to oscillate rapidly. This ‘base tone’ is then converted into a range of selected frequencies, from 350 Hz down to 60 Hz, by the design and length of the horn section , producing the desired sound frequency at a sound level approaching 200 dB. The sonic soot blower is usually ‘sounded’ for a few seconds at intervals of between 3 and 10 minutes. This ‘sounding’ pattern is normally controlled via the plant’s PLC. However, it may also be operated by such means as a SCADA system, individual timers on each solenoid valve or via a manual ball valve . Sonic soot blowers are normally constructed from fabricated, 316-grade stainless steel , as opposed to some sonic horns which are manufactured from heavy cast iron . For installations in harsher environments, such as high-temperature or acidic gas streams, other types of stainless steel are used, such as 310, 316 and 825.
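The relationship between horn length and output frequency can be roughed out by treating the horn as a quarter-wavelength resonator. This is an idealization of the flared horn section described above: the 344 m/s sound speed follows the text, but real horn geometries will shift these lengths, so the numbers are order-of-magnitude estimates only.

```python
def quarter_wave_length(freq_hz: float, c: float = 344.0) -> float:
    """Length of an idealized quarter-wave resonator tuned to freq_hz,
    with c the speed of sound in m/s. A real flared horn behaves
    differently; this only gives a rough sense of scale."""
    return c / (4.0 * freq_hz)

# The stated output range of 350 Hz down to 60 Hz:
for f in (350.0, 200.0, 60.0):
    print(f"{f:5.0f} Hz  ->  horn length ~ {quarter_wave_length(f):.2f} m")
```

The estimate suggests why lower-frequency soot blowers need substantially longer horn sections: roughly 0.25 m at 350 Hz versus about 1.4 m at 60 Hz.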
Sonic soot blowers create a rapid series of very powerful sound-induced pressure fluctuations which, when transmitted into the ash or particulate, cause it to de-bond from other particles and from the heat transfer surface to which it is bonded, so that it is carried away in the gas stream. This prevents the ash from building up and sintering onto the boiler tubes, a build-up that would otherwise significantly reduce thermal efficiency . This is in contrast to the operating principles of steam soot blowers, which are usually employed at most once every eight hours, by which time the ash has built up and baked hard onto the heat transfer surfaces. The steam soot blower then tries to blast away this hard deposit, usually only from the leading edge of the steam tubes. Sonic soot blowers are a proven alternative to conventional steam soot blowers in power generation plants which burn a range of fossil fuels and other waste fuels including biofuels . Depending on the application and boiler plant design, sonic soot blowers usually totally replace existing high-maintenance steam soot blowers, whether of the retractable or rotary type. In a few cases, sonic soot blowers can be used to supplement steam soot blowers. Sonic soot blower cleaning technologies can be applied in superheaters , generating sections, economizers , and airheaters as well as downstream equipment such as electrostatic precipitators , baghouse filters and fans. The main advantages of sonic soot blowers over steam soot blowers are:
https://en.wikipedia.org/wiki/Sonic_soot_blowers
Sonication is the act of applying sound energy to agitate particles in a sample, for various purposes such as the extraction of multiple compounds from plants, microalgae and seaweeds. [ 1 ] Ultrasonic frequencies (> 20 kHz) are usually used, leading to the process also being known as ultrasonication or ultra-sonication . [ 2 ] In the laboratory, it is usually applied using an ultrasonic bath or an ultrasonic probe , colloquially known as a sonicator . In a paper machine , an ultrasonic foil can distribute cellulose fibres more uniformly and strengthen the paper. Sonication has numerous effects, both chemical and physical. The scientific field concerned with understanding the effect of sonic waves on chemical systems is called sonochemistry . The chemical effects of ultrasound do not come from a direct interaction with molecular species. Studies have shown that no direct coupling of the acoustic field with chemical species on a molecular level can account for sonochemistry [ 3 ] or sonoluminescence . [ 4 ] Instead, in sonochemistry the sound waves migrate through a medium, inducing pressure variations and cavitations that grow and collapse, transforming the sound waves into mechanical energy. [ 1 ] Sonication can be used for the production of nanoparticles , such as nanoemulsions , [ 5 ] nanocrystals, liposomes and wax emulsions, as well as for wastewater purification, degassing, extraction of seaweed polysaccharides [ 1 ] and plant oil, extraction of anthocyanins and antioxidants, [ 6 ] production of biofuels , crude oil desulphurization, cell disruption , polymer and epoxy processing, adhesive thinning, and many other processes. It is applied in pharmaceutical, cosmetic, water, food, ink, paint, coating, wood treatment, metalworking, nanocomposite, pesticide, fuel, wood product and many other industries. Sonication can be used to speed dissolution, by breaking intermolecular interactions. 
It is especially useful when it is not possible to stir the sample, as with NMR tubes . It may also be used to provide the energy for certain chemical reactions to proceed. Sonication can be used to remove dissolved gases from liquids ( degassing ) by sonicating the liquid while it is under a vacuum. This is an alternative to the freeze-pump-thaw and sparging methods. In biological applications, sonication may be sufficient to disrupt or deactivate a biological material. For example, sonication is often used to disrupt cell membranes and release cellular contents. This process is called sonoporation . Small unilamellar vesicles (SUVs) can be made by sonication of a dispersion of large multilamellar vesicles (LMVs). Sonication is also used to fragment molecules of DNA, in which the DNA subjected to brief periods of sonication is sheared into smaller fragments. Sonication is commonly used in nanotechnology for evenly dispersing nanoparticles in liquids. Additionally, it is used to break up aggregates of micron-sized colloidal particles. Sonication can also be used to initiate crystallisation processes and even control polymorphic crystallisations. [ 7 ] It is used to intervene in anti-solvent precipitations (crystallisation) to aid mixing and isolate small crystals. Sonication is the mechanism used in ultrasonic cleaning —loosening particles adhering to surfaces. In addition to laboratory science applications, sonicating baths have applications including cleaning objects such as spectacles and jewelry . Sonication is used in the food industry as well. Its main applications there are dispersion, to save expensive emulsifiers (e.g. in mayonnaise), and speeding up filtration processes (vegetable oil, etc.). Experiments with sonication for the artificial ageing of liquors and other alcoholic beverages have been conducted [ citation needed ] .
Soil samples are often subjected to ultrasound in order to break up soil aggregates; this allows the study of the different constituents of soil aggregates (especially soil organic matter ) without subjecting them to harsh chemical treatment. [ 8 ] Sonication is also used to extract microfossils from rock. [ 9 ] An ultrasonic bath or an ultrasonic probe system is used for extraction. For instance, this technique has been suggested for removing isoflavones from soybeans and phenolic compounds from wheat bran and coconut shell powder. [ 10 ] The outcomes differ with the raw material, the solvent utilized, and the extraction technique employed. Acoustic or ultrasonic cavitation is the basis for the operation of ultrasound-assisted extraction. [ 11 ] Substantial ultrasound intensity and high ultrasonic vibration amplitudes are required for many processing applications, such as nano-crystallization, nano-emulsification, [ 5 ] deagglomeration, extraction, cell disruption, as well as many others. Commonly, a process is first tested on a laboratory scale to prove feasibility and establish some of the required ultrasonic exposure parameters. After this phase is complete, the process is transferred to a pilot (bench) scale for flow-through pre-production optimization and then to an industrial scale for continuous production. During these scale-up steps, it is essential to make sure that all local exposure conditions (ultrasonic amplitude, cavitation intensity, time spent in the active cavitation zone, etc.) stay the same. If this condition is met, the quality of the final product remains at the optimized level, while the productivity is increased by a predictable "scale-up factor". The productivity increase results from the fact that laboratory, bench and industrial-scale ultrasonic processor systems incorporate progressively larger ultrasonic horns , able to generate progressively larger high-intensity cavitation zones and, therefore, to process more material per unit of time.
This is called "direct scalability". It is important to point out that increasing the power capacity of the ultrasonic processor alone does not result in direct scalability, since it may be (and frequently is) accompanied by a reduction in the ultrasonic amplitude and cavitation intensity. During direct scale-up, all processing conditions must be maintained, while the power rating of the equipment is increased in order to enable the operation of a larger ultrasonic horn. [ 12 ] [ 13 ] [ 14 ] Finding the optimum operation condition for this equipment is a challenge for process engineers and needs deep knowledge about side effects of ultrasonic processors. [ 15 ]
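The notion of a predictable "scale-up factor" can be illustrated with a toy calculation: if the amplitude and all local exposure conditions are held constant, throughput grows roughly with the horn's radiating area. The horn diameters below are hypothetical values chosen only for illustration, not figures from the text.

```python
def scale_up_factor(lab_horn_d_mm: float, plant_horn_d_mm: float) -> float:
    """Toy 'direct scalability' estimate: with amplitude and local
    cavitation conditions fixed, material processed per unit time scales
    roughly with the horn's output (radiating) area, i.e. with the
    square of its diameter. Diameters are illustrative assumptions."""
    return (plant_horn_d_mm / lab_horn_d_mm) ** 2

# e.g. moving from a hypothetical 20 mm laboratory horn to a 65 mm industrial horn
print(scale_up_factor(20.0, 65.0))  # ~10.6x more material per unit time
```

This also shows why merely raising the generator's power rating is not enough: without a larger horn driven at the same amplitude, the extra power does not translate into a larger active cavitation zone.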
https://en.wikipedia.org/wiki/Sonication
Sonidegib ( INN ), sold under the brand name Odomzo , is a medication used to treat cancer. [ 1 ] Sonidegib is a Hedgehog signaling pathway inhibitor (via smoothened antagonism). [ 4 ] [ 5 ] It was approved for medical use in the United States and in the European Union in 2015. [ 6 ] [ 1 ] [ 7 ] [ 8 ] It is indicated for the treatment of adults with locally advanced basal-cell carcinoma that has recurred following surgery or radiation therapy , or those who are not candidates for surgery or radiation therapy. [ 1 ] Sonidegib is administered by mouth . Common side effects include muscle spasms, hair loss, fatigue, abdominal pain, nausea, headache, and weight loss. [ 1 ] Sonidegib binds to and inhibits smoothened , thereby inhibiting activation of the Hedgehog pathway. Sonidegib is primarily metabolized by CYP3A and is eliminated hepatically. [ 1 ] It has also been investigated as a potential treatment for other cancers: it has demonstrated significant efficacy against melanoma in vitro and in vivo , [ 27 ] and has also demonstrated efficacy in a mouse model of pancreatic cancer. [ 28 ]
https://en.wikipedia.org/wiki/Sonidegib
Sonja Louise Barth (21 May 1923 – 10 September 2016, née Skoklefald ) was a Norwegian environmentalist. [ 1 ] [ 2 ] During World War II she was active in XU , the secret Norwegian resistance operation whose activities were kept secret until 1988. In 2008 she described her experiences to Lars Otto Wollum, on condition that he publish nothing of them until after her death. [ 3 ] In 2008 she was appointed to the Royal Norwegian Order of Saint Olav . The citation referred to her work in public education and the dissemination of natural and cultural history, and to her work in the Rondane region. [ 4 ] On 14 November 1945 she married Edvard Kaurin Barth (1913–1996), a photographer and zoologist. [ 2 ] This article about a Norwegian scientist is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sonja_Barth
The Sonnenschein–Mantel–Debreu theorem is an important result in general equilibrium economics , proved by Gérard Debreu , Rolf Mantel [ es ] , and Hugo F. Sonnenschein in the 1970s. [ 1 ] [ 2 ] [ 3 ] [ 4 ] It states that the excess demand curve for an exchange economy populated with utility-maximizing rational agents can take the shape of any function that is continuous , homogeneous of degree zero, and in accordance with Walras's law . [ 5 ] This implies that the excess demand function does not take a well-behaved form even if each agent has a well-behaved utility function. Market processes will not necessarily reach a unique and stable equilibrium point. [ 6 ] More recently, Jordi Andreu, Pierre-André Chiappori , and Ivar Ekeland extended this result to market demand curves , both for individual commodities and for the aggregate demand of an economy as a whole. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ note 1 ] This means that demand curves may take on highly irregular shapes, even if all individual agents in the market are perfectly rational. In contrast with usual assumptions, the quantity demanded of a commodity may not decrease when the price increases. Frank Hahn regarded the theorem as a dangerous critique of mainstream neoclassical economics . [ 11 ] There are several possible versions of the theorem that differ in detailed bounds and assumptions. The following version is formulated in the Arrow–Debreu model of economy . [ 12 ] For the notation, see the Arrow–Debreu model page. Theorem — Let N be a positive integer.
If Z : { p ∈ ℝ^N : Σ_n p_n = 1 and p_n > 0 for all n } → ℝ^N is a continuous function that satisfies Walras's law, then there exists an economy with households indexed by I , with no producers ("pure exchange economy"), and household endowments { r^i } for i ∈ I, such that each household satisfies all assumptions in the "Assumptions" section , and Z is the excess demand function for the economy. Similarly, changing Z to a set-valued function with closed graph, we obtain another Theorem — Let N be a positive integer. If Z : { p ∈ ℝ^N : Σ_n p_n = 1 and p_n > 0 for all n } → ℝ^N is a set-valued function with closed graph that satisfies Walras's law, then there exists an economy with households indexed by I , with no producers ("pure exchange economy"), and household endowments { r^i } for i ∈ I, such that each household satisfies all assumptions in the "Assumptions" section except the "strict convexity" assumption, and Z is the excess demand function for the economy. The concept of an excess demand function is important in general equilibrium theories, because it acts as a signal for the market to adjust prices. [ 13 ] If the value of the excess demand function is positive, then more units of a commodity are being demanded than can be supplied; there is a shortage . If excess demand is negative, then more units are being supplied than are demanded; there is a glut . The assumption is that the rate of change of prices will be proportional to excess demand, so that the adjustment of prices will eventually lead to an equilibrium state in which excess demand for all commodities is zero.
[ 14 ] In the 1970s, mathematical economists worked to establish rigorous microfoundations for widely used equilibrium models, on the basis of the assumption that individuals are utility-maximizing rational agents (the "utility hypothesis"). It was already known that this assumption put certain loose restrictions on the excess demand functions for individuals ( continuity and Walras's law ), and that these restrictions were "inherited" by the market excess demand function. In a 1973 paper, Hugo Sonnenschein posed the question of whether these were the only restrictions that could be placed on a market excess demand function. [ 2 ] He conjectured that the answer was "yes," and made preliminary steps toward proving it. These results were extended by Rolf Mantel, [ 3 ] and then by Gérard Debreu in 1974, [ 4 ] who proved that, as long as there are at least as many agents in the market as there are commodities, the market excess demand function inherits only the following properties of individual excess demand functions: continuity, homogeneity of degree zero, and Walras's law. These inherited properties are not sufficient to guarantee that the excess demand curve is downward-sloping, as is usually assumed. The uniqueness of the equilibrium point is also not guaranteed. There may be more than one price vector at which the excess demand function is zero, which is the standard definition of equilibrium in this context. [ 14 ] In the wake of these initial publications, several scholars have extended the initial Sonnenschein–Mantel–Debreu results in a variety of ways. In a 1976 paper, Rolf Mantel showed that the theorem still holds even if the very strong assumption is added that all consumers have homothetic preferences . [ 15 ] Homothetic preferences are invariant under scaling: if one bundle is preferred to another, the preference is unchanged when both bundles are scaled by the same positive factor, so a consumer's demand expands proportionally with income.
Furthermore, Alan Kirman and Karl-Josef Koch proved in 1986 that the SMD theorem still holds even if all agents are assumed to have identical preferences, and the distribution of income is assumed to be fixed across time and independent of prices. [ 16 ] The only income distribution that is not permissible is a uniform one where all individuals have the same income and therefore, since they have the same preferences, they are all identical. [ 17 ] For a while it was unclear whether SMD-style results also applied to the market demand curve itself, and not just the excess demand curve. But in 1982 Jordi Andreu established an important preliminary result suggesting that this was the case, [ 9 ] and in 1999 Pierre-André Chiappori and Ivar Ekeland used vector calculus to prove that the Sonnenschein–Mantel–Debreu results do indeed apply to the market demand curve. [ 7 ] [ 8 ] [ 18 ] This means that market demand curves may take on highly irregular shapes, quite unlike textbook models, even if all individual agents in the market are perfectly rational. In the 1982 book Handbook of Mathematical Economics , Hugo Sonnenschein explained some of the implications of his theorem for general equilibrium theory: …market demand functions need not satisfy in any way the classical restrictions which characterize consumer demand functions… The importance of the above results is clear: strong restrictions are needed in order to justify the hypothesis that a market demand function has the characteristics of a consumer demand function. Only in special cases can an economy be expected to act as an ‘idealized consumer.’ The utility hypothesis tells us nothing about market demand unless it is augmented by additional requirements. [ 19 ] In other words, it cannot be assumed that the demand curve for a single market, let alone an entire economy, must be smoothly downward-sloping simply because the demand curves of individual consumers are downward-sloping. 
This is an instance of the more general aggregation problem , which deals with the theoretical difficulty of modeling the behavior of large groups of individuals in the same way that an individual is modeled. [ 20 ] Frank Ackerman points out that it is a corollary of Sonnenschein–Mantel–Debreu that a Walrasian auction will not always find a unique and stable equilibrium, even in ideal conditions: In Walrasian general equilibrium, prices are adjusted through a tâtonnement ('groping') process: the rate of change for any commodity’s price is proportional to the excess demand for the commodity, and no trades take place until equilibrium prices have been reached. This may not be realistic, but it is mathematically tractable: it makes price movements for each commodity depend only on information about that commodity. Unfortunately, as the SMD theorem shows, tâtonnement does not reliably lead to convergence to equilibrium. [ 6 ] Léon Walras ' auction model requires that the price of a commodity will always rise in response to excess demand, and that it will always fall in response to an excess supply . But SMD shows that this will not always be the case, because the excess demand function need not be uniformly downward-sloping. [ 14 ] The theorem has also raised concerns about the falsifiability of general equilibrium theory , because it seems to imply that almost any observed pattern of market price and quantity data could be interpreted as being the result of individual utility-maximizing behavior. In other words, Sonnenschein–Mantel–Debreu raises questions about the degree to which general equilibrium theory can produce testable predictions about aggregate market variables. [ 21 ] [ 22 ] For this reason, Andreu Mas-Colell referred to the theorem as the “Anything Goes Theorem” in his graduate-level microeconomics textbook. 
[ 22 ] Some economists have made attempts to address this problem, with Donald Brown and Rosa Matzkin deriving some polynomial restrictions on market variables by modeling the equilibrium state of a market as a topological manifold . [ 23 ] However, Abu Turab Rizvi comments that this result does not practically change the situation very much, because Brown and Matzkin's restrictions are formulated on the basis of individual-level observations about budget constraints and incomes, while general equilibrium models purport to explain changes in aggregate market-level data. [ 24 ] Robert Solow interprets the theorem as showing that, for modelling macroeconomic growth , dynamic stochastic general equilibrium models are no more microfounded than simpler models such as the Solow–Swan model . As long as a macroeconomic growth model assumes an excess demand function satisfying continuity, homogeneity, and Walras's law, it can be microfounded. [ 25 ] The Sonnenschein–Mantel–Debreu results have led some economists, such as Werner Hildenbrand and Alan Kirman, [ 26 ] to abandon the project of explaining the characteristics of the market demand curve on the basis of individual rationality. Instead, these authors attempt to explain the law of demand in terms of the organization of society as a whole, and in particular the distribution of income. [ 27 ] [ 28 ] In mathematical terms, the number of equations that make up a market excess demand function is equal to the number of commodities, which in turn equals the number of prices to be solved for. By Walras's law, if all but one of the excess demands is zero then the last one has to be zero as well. This means that there is one redundant equation and we can normalize one of the prices or a combination of all prices (in other words, only relative prices are determined; not the absolute price level). Having done this, the number of equations equals the number of unknowns and we have a determinate system.
However, because the equations are non-linear there is no guarantee of a unique solution. Furthermore, even though reasonable assumptions can guarantee that the individual excess-demand functions have a unique root, these assumptions do not guarantee that the aggregate demand does as well. There are several things to be noted. First, even though there may be multiple equilibria, every equilibrium is still guaranteed, under standard assumptions, to be Pareto efficient . However, the different equilibria are likely to have different distributional implications and may be ranked differently by any given social welfare function . Second, by the Hopf index theorem , in regular economies the number of equilibria will be finite and all of them will be locally unique. This means that comparative statics , or the analysis of how the equilibrium changes when there are shocks to the economy, can still be relevant as long as the shocks are not too large. But this leaves the question of the stability of the equilibrium unanswered, since a comparative statics perspective does not tell us what happens when the market moves away from an equilibrium. The extension to incomplete markets was first conjectured by Andreu Mas-Colell in 1986. [ 29 ] To do this he remarks that Walras's law and homogeneity of degree zero can be understood as the fact that the excess demand only depends on the budget set itself. Hence, homogeneity is only saying that excess demand is the same if the budget sets are the same. This formulation extends to incomplete markets. So does Walras's law if seen as budget feasibility of excess-demand function. The first incomplete markets Sonnenschein–Mantel–Debreu type of result was obtained by Jean-Marc Bottazzi and Thorsten Hens . [ 30 ] Other works expanded the type of assets beyond the popular real assets structures like Chiappori and Ekland. [ 18 ] All such results are local. In 2003 Takeshi Momi extended the approach by Bottazzi and Hens as a global result. 
[ 31 ]
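The non-uniqueness and instability discussed above can be illustrated numerically. The sketch below is purely illustrative (the cubic excess demand function is invented for this example, not taken from the SMD literature): it exhibits an aggregate excess demand function with three equilibria and shows that Walrasian tâtonnement, where the price moves in proportion to excess demand, converges to different equilibria depending on the starting point and never settles on the middle one.

```python
# Toy illustration: a non-monotone aggregate excess demand z(p) with three
# zeros (equilibria), showing that equilibrium need not be unique and that
# tatonnement (price adjustment proportional to excess demand) selects an
# equilibrium depending on the starting price.

def excess_demand(p):
    # Invented smooth, non-monotone excess demand in the relative price p.
    # Its zeros (p = 1, 2, 3) are the equilibria.
    return (p - 1.0) * (p - 2.0) * (p - 3.0) * -1.0

def find_equilibria(lo, hi, steps=10000):
    # Scan for sign changes, then bisect to locate each root of z(p).
    roots = []
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        if excess_demand(a) == 0.0:
            roots.append(a)
        elif excess_demand(a) * excess_demand(b) < 0:
            for _ in range(60):  # bisection refinement
                m = (a + b) / 2
                if excess_demand(a) * excess_demand(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
    return roots

def tatonnement(p0, rate=0.05, steps=2000):
    # Discrete-time Walrasian auction: price rises with excess demand.
    p = p0
    for _ in range(steps):
        p += rate * excess_demand(p)
    return p

equilibria = find_equilibria(0.0, 4.0)
print(equilibria)        # three equilibria: p = 1, 2, 3
print(tatonnement(1.9))  # starting just below p = 2, converges down to p = 1
print(tatonnement(2.1))  # starting just above p = 2, converges up to p = 3
```

The middle equilibrium (p = 2) is a perfectly valid zero of excess demand, yet tâtonnement started arbitrarily close to it moves away, which is the convergence failure Ackerman describes.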
https://en.wikipedia.org/wiki/Sonnenschein–Mantel–Debreu_theorem
Sono-Seq (Sonication of Cross-linked Chromatin Sequencing ) is a method in molecular biology used for determining the sequences of those DNA regions in the genome near regions of open chromatin of expressed genes . It is also known as "Input" in the ChIP-Seq protocol, since it follows the same steps except that it does not require immunoprecipitation . [ 1 ]
https://en.wikipedia.org/wiki/Sono-Seq
The Sono arsenic filter was invented in 2006 by Abul Hussam , who is a chemistry professor at George Mason University (GMU) in Fairfax , Virginia . It was developed to deal with the problem of arsenic contamination of groundwater . [ 1 ] The filter is now in use in Hussam's native Bangladesh . Farmers had been drinking fresh groundwater from wells, whereas previously they had had to use ponds and mudholes which were contaminated with bacteria and viruses. However, these wells were also contaminated with naturally occurring high concentrations of poisonous arsenic , causing skin ailments and cancers. Awareness of the problem developed through the 1990s. Allan Smith, an epidemiologist at the University of California at Berkeley , observed that the arsenic problem affects millions of people worldwide: You can't see it or taste or smell it. The idea that crystal-clear drinking water would end up causing lung disease in 20 or 30 years is a little weird. It's unbelievable to people. Hussam developed his filter after years of testing hundreds of prototypes. The final version contains 20 pounds (9 kg) of shards of porous iron , which bonds chemically with arsenic. It also includes charcoal, sand and bits of brick. It filters nearly all arsenic from well water. Hussam was awarded the 2007 Grainger Challenge Prize for Sustainability by the National Academy of Engineering . [ 2 ] Hussam plans to use 70% of the $1 million engineering prize to distribute filters to needy communities. [ 3 ]
https://en.wikipedia.org/wiki/Sono_arsenic_filter
Sonocatalysis is a field of sonochemistry based on the use of ultrasound to change the reactivity of a catalyst in homogeneous or heterogeneous catalysis . It is generally used to support catalysis . This method of catalysis has been known since the creation of sonochemistry in 1927 by Alfred Lee Loomis (1887–1975) and Robert Williams Wood (1868–1955). [ 1 ] Sonocatalysis depends on ultrasound, which was discovered in 1794 by the Italian biologist Lazarro Spallanzani (1729–1799). [ 2 ] Sonocatalysis is not a self-sufficient catalysis technique but instead supports a catalyst in the reaction. Sonocatalysis and sonochemistry both arise from a phenomenon called "acoustic cavitation ", which occurs when a liquid is irradiated with ultrasound. Ultrasound creates large local variations of pressure and temperature , lowering the liquid's local density and creating cavitation bubbles wherever the liquid pressure drops below its vapor pressure . When these bubbles collapse, energy is released as the kinetic energy of the inrushing liquid is converted into heat. Sonocatalysis may take place in the homogeneous phase or the heterogeneous phase , depending on whether the catalyst is in the same phase as the reaction mixture. [ 1 ] The collapse of cavitation bubbles can create intense local conditions, with pressures approaching 1000 atm and temperatures approaching 5000 K. [ 1 ] This can generate highly energetic radicals : in an aqueous environment, bubble collapse produces hydroxyl radicals ( HO• ) and hydrogen radicals ( H• ).
Next, these radicals may combine to produce different molecules, such as water (H2O), hydroperoxyl (HO2•), hydrogen peroxide (H2O2) and dioxygen (O2). [ 3 ] The radical-forming reactions due to the decomposition of water by ultrasound (irradiation conventionally denoted ")))" over the arrow) can be described this way:

H2O →))) HO• + H•
HO• + H• → H2O
H• + O2 → HO2•
2 HO• → H2O2
2 HO2• → H2O2 + O2
H2O + HO• → H2O2 + H•

Energy from ultrasonic irradiation differs from heat energy or electromagnetic radiation energy in the duration, pressure, and amount of energy received by a molecule. [ 1 ] For example, a 20 kHz ultrasound quantum carries an energy of about 8.34 × 10⁻¹¹ eV, while a 300 nm laser photon carries 4.13 eV. Ultrasound nevertheless allows shorter reaction times and better yields. There are two types of irradiation in sonocatalysis and sonochemistry: direct irradiation and indirect irradiation . In direct irradiation, the solution is in contact with a sound-wave emitter (generally a transducer ), while in indirect irradiation these two elements are separated by an irradiated bath. The bath transmits the ultrasound to the solution by convection . While indirect irradiation is the more commonly used technique, direct irradiation is possible too, especially when the irradiated bath can also serve as the container for the solution. [ 2 ] Metal carbonyls , such as Fe(CO) 5 , Fe 3 (CO) 12 , Cr(CO) 6 , Mo(CO) 6 and W(CO) 6 , are very often used in homogeneous catalysis, because their structures make them stable species at standard temperature and pressure . [ 4 ] Furthermore, their catalytic capacities are well-known and efficient.
[ 5 ] Carbon-based species like carbon nanotubes , graphene , graphene oxide, activated carbon , biochar , g-C 3 N 4 , carbon-doped materials , Buckminsterfullerene (C60), and mesoporous carbons are very often used in heterogeneous sonocatalysis. These species are good sonocatalysts because they favour the degradation process during sonocatalysis. Furthermore, they show high activity and stability in sonocatalysis, and they act as nucleation sites. These properties stem from features such as their optical activity, electrical resistivity and conductivity , chemical stability, mechanical strength, and porous structure. These species are becoming more frequently used. [ 3 ] Sonocatalysis needs equipment other than catalysts to generate ultrasound, namely transducers that create ultrasound by converting electrical energy into mechanical energy . There are two types of transducers: piezoelectric transducers and magnetostrictive transducers . Piezoelectric transducers are used more often because they are cheaper, lighter, and less bulky. These transducers consist of single crystals or ceramics with two electrodes fixed on their sides. The electrodes are driven by an alternating voltage, at a frequency at most equal to the transducer's resonance frequency; the crystal is then alternately compressed and dilated, creating a sound wave. [ 2 ] The use of sonocatalysis has risen. [ 6 ] Today, sonocatalysis is used in many fields, such as medicine, pharmacology, metallurgy, the environment, nanotechnology, and wastewater treatment . Several studies showed that sonocatalysis could increase the synthesis yield of pyrazoles , compounds that have antimicrobial , antihypertensive , anti-inflammatory and anticonvulsant activities. One study developed a new synthetic route to such molecules that uses ecological and economical reactants under sonocatalysis while keeping a high yield.
[ 7 ] One reported example is the synthesis of 3-methyl-5-phenyl-4,5-dihydro-1H-pyrazole-1-carbothioamide. An example of the use of sonocatalysis is to degrade pollutants. Ultrasound can generate the HO• radical from a water molecule. This radical is a strong oxidizing agent , which can degrade persistent organic pollutants . However, the reaction rate for hydrophobic compounds is low, so ultrasound is often paired with a solid catalyst. The added catalyst provides nucleation sites that amplify the cavitation phenomenon, and thus the ultrasonic efficiency. Near the solid–liquid contact surface, pressure is applied on one side of the bubble, causing a more violent, asymmetric collapse. [ 3 ] This principle can be applied to the oxidative bleaching of cationic red 46 [ 9 ] by zinc oxide supported on bentonite . An estimated 10% to 20% of organic dyes are lost and released into nature. Finding new ways to improve dye bleaching is an important topic, as these dyes may be toxic and carcinogenic. The oxidation comes from the HO• radical, whose oxidizing capacities are known. Indeed, a higher concentration of the HO• radical gives better bleaching of cationic red 46: the bleaching yield is 17.8% without ultrasound and 81.6% with ultrasound. [ 9 ] However, the efficiency of sonocatalysis mainly comes from the combination of both catalyst and ultrasound; for example, applying ultrasound alone bleaches only 25.4% of the cationic red. [ 9 ] Another example of pollutant degradation is the elimination of tetracycline , an antibiotic that is frequently found as a pollutant in wastewater.
When tetracycline is dissolved in aqueous solution, ultrasound alone degrades it inefficiently, because the reaction is kinetically unfavourable. Adding catalysts like titanium dioxide (TiO2) or hydrogen peroxide (H2O2) to the ultrasound can speed up degradation: thirty minutes are enough when ultrasound and both catalysts are used. [ 10 ] Sonocatalysis is used in rhodamine B degradation too. Rhodamine B is a synthetic dye that may be harmful to aquatic plants when released into wastewater. [ 11 ] Sonocatalysis can also be combined with processes like Fenton's reaction : by associating sonocatalysis (at a 20 kHz frequency) with Fenton's reaction, at an iron chloride ( FeCl2 ) mass concentration of 5.0 mg/L and a pH of 4, degradation efficiency is about 80% after 12 minutes. [ 12 ]
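The quantum-energy comparison quoted above (a 20 kHz sound quantum versus a 300 nm photon) follows directly from Planck's relation E = hf. A quick check with standard physical constants reproduces the quoted figures; the script below is illustrative and not part of the source article:

```python
# Verify the ultrasound-vs-light energy comparison using E = h*f for a
# 20 kHz quantum and E = h*c/lambda for a 300 nm photon.

H = 6.62607015e-34      # Planck constant, J*s (CODATA exact value)
C = 2.99792458e8        # speed of light in vacuum, m/s
EV = 1.602176634e-19    # joules per electron-volt

def quantum_energy_ev(frequency_hz):
    """Energy of one quantum at the given frequency, in eV."""
    return H * frequency_hz / EV

e_ultrasound = quantum_energy_ev(20e3)       # 20 kHz ultrasound
e_photon = quantum_energy_ev(C / 300e-9)     # 300 nm light

print(f"20 kHz quantum: {e_ultrasound:.3g} eV")  # ~8.3e-11 eV
print(f"300 nm photon:  {e_photon:.3g} eV")      # ~4.13 eV
print(f"ratio: {e_photon / e_ultrasound:.3g}")
```

The ten-orders-of-magnitude gap is why the chemical effects of ultrasound come from cavitation rather than from the sound quanta acting on molecules directly.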
https://en.wikipedia.org/wiki/Sonocatalysis
Sonochemical synthesis is the process which utilizes the principles of sonochemistry to make molecules undergo a chemical reaction with the application of powerful ultrasound radiation (20 kHz–10 MHz). [ 1 ] [ 2 ] [ 3 ] Sonochemistry generates hot spots that can achieve very high temperatures (5,000–25,000 K), pressures of more than 1,000 atmospheres, and rates of heating and cooling that can exceed 10¹¹ K/s. High intensity ultrasound produces chemical and physical effects that can be used for the production or modification of a wide range of nanostructured materials. The principle that causes the modification of nanostructures in the sonochemical process is acoustic cavitation . [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Sonochemical_synthesis
In chemistry , the study of sonochemistry is concerned with understanding the effect of ultrasound in forming acoustic cavitation in liquids, resulting in the initiation or enhancement of the chemical activity in the solution. [ 1 ] Therefore, the chemical effects of ultrasound do not come from a direct interaction of the ultrasonic sound wave with the molecules in the solution. The influence of sonic waves travelling through liquids was first reported by Robert Williams Wood (1868–1955) and Alfred Lee Loomis (1887–1975) in 1927. Their experiment examined the energy it took for sonic waves to "penetrate" the barrier of water. They concluded that sound does travel faster in water, but that because of the density of water compared to Earth's atmosphere it was incredibly hard to couple the energy of the sonic waves into the water. Due to the sudden density change, much of the energy is lost, similar to shining a flashlight towards a piece of glass; some of the light is transmitted into the glass, but much of it is lost to reflection outwards. Similarly, at an air-water interface, almost all of the sound is reflected off the water instead of being transmitted into it. After much research they decided that the best way to disperse sound into the water was to create bubbles at the same time as the sound. Another issue was the ratio of the time it took for the lower-frequency waves to penetrate the bubbles' walls and access the water around the bubble, compared to the time to travel from that point to the other end of the body of water. Despite its revolutionary ideas, the article was left mostly unnoticed. [ 2 ] Sonochemistry experienced a renaissance in the 1980s with the advent of inexpensive and reliable generators of high-intensity ultrasound, most based around piezoelectric elements.
[ 3 ] Sound waves propagating through a liquid at ultrasonic frequencies have wavelengths many times longer than the molecular dimensions or the bond length between atoms in the molecule. Therefore, the sound wave cannot directly affect the vibrational energy of the bond, and can therefore not directly increase the internal energy of a molecule. [ 4 ] [ 5 ] Instead, sonochemistry arises from acoustic cavitation : the formation, growth, and implosive collapse of bubbles in a liquid. [ 3 ] The collapse of these bubbles is an almost adiabatic process, thereby resulting in the massive build-up of energy inside the bubble, resulting in extremely high temperatures and pressures in a microscopic region of the sonicated liquid. The high temperatures and pressures result in the chemical excitation of any matter within or very near the bubble as it rapidly implodes. A broad variety of outcomes can result from acoustic cavitation including sonoluminescence , increased chemical activity in the solution due to the formation of primary and secondary radical reactions, and increased chemical activity through the formation of new, relatively stable chemical species that can diffuse further into the solution to create chemical effects (for example, the formation of hydrogen peroxide from the combination of two hydroxyl radicals following the dissociation of water vapor within collapsing bubbles when water is exposed to ultrasound). Upon irradiation with high intensity sound or ultrasound, acoustic cavitation usually occurs. Cavitation – the formation, growth, and implosive collapse of bubbles irradiated with sound — is the impetus for sonochemistry and sonoluminescence . [ 6 ] Bubble collapse in liquids produces enormous amounts of energy from the conversion of kinetic energy of the liquid motion into heating the contents of the bubble. The compression of the bubbles during cavitation is more rapid than thermal transport, which generates a short-lived localized hot-spot. 
Experimental results have shown that these bubbles have temperatures around 5000 K, pressures of roughly 1000 atm, and heating and cooling rates above 10¹⁰ K/s. [ 7 ] [ 8 ] These cavitations can create extreme physical and chemical conditions in otherwise cold liquids. With liquids containing solids, similar phenomena may occur with exposure to ultrasound. Once cavitation occurs near an extended solid surface, cavity collapse is nonspherical and drives high-speed jets of liquid to the surface. [ 6 ] These jets and associated shock waves can damage the now highly heated surface. Liquid-powder suspensions produce high velocity interparticle collisions. These collisions can change the surface morphology , composition, and reactivity. [ 9 ] Three classes of sonochemical reactions exist: homogeneous sonochemistry of liquids, heterogeneous sonochemistry of liquid-liquid or solid–liquid systems, and, overlapping with the aforementioned, sonocatalysis (the catalysis or increasing the rate of a chemical reaction with ultrasound). [ 10 ] [ 11 ] [ 12 ] Sonoluminescence is a consequence of the same cavitation phenomena that are responsible for homogeneous sonochemistry. [ 13 ] [ 14 ] [ 5 ] The chemical enhancement of reactions by ultrasound has been explored and has beneficial applications in mixed phase synthesis, materials chemistry, and biomedical uses. Because cavitation can only occur in liquids, chemical reactions are not seen in the ultrasonic irradiation of solids or solid–gas systems. For example, in chemical kinetics , it has been observed that ultrasound can greatly enhance chemical reactivity in a number of systems by as much as a million-fold; [ 15 ] effectively acting to activate heterogeneous catalysts. In addition, in reactions at liquid-solid interfaces, ultrasound breaks up the solid pieces and exposes active clean surfaces through microjet pitting from cavitation near the surfaces and from fragmentation of solids by cavitation collapse nearby.
This gives the solid reactant a larger surface area of active surfaces for the reaction to proceed over, increasing the observed rate of reaction. [ 16 ] [ 17 ] While the application of ultrasound often generates mixtures of products, a paper published in 2007 in the journal Nature described the use of ultrasound to selectively affect a certain cyclobutane ring-opening reaction. [ 18 ] Atul Kumar has reported a multicomponent Hantzsch ester synthesis in aqueous micelles using ultrasound. [ 19 ] Some water pollutants, especially chlorinated organic compounds, can be destroyed sonochemically. [ 20 ] Sonochemistry can be performed by using a bath (usually used for ultrasonic cleaning ) or with a high power probe, called an ultrasonic horn , which funnels and couples a piezoelectric element's energy into the water, concentrated at one (typically small) point. Sonochemistry can also be used to weld metals which are not normally feasible to join, or form novel alloys on a metal surface. This is distantly related to the method of calibrating ultrasonic cleaners using a sheet of aluminium foil and counting the holes. The holes formed are a result of microjet pitting resulting from cavitation near the surface, as mentioned previously. Due to the aluminium foil's thinness and weakness, the cavitation quickly results in fragmentation and destruction of the foil. A new generation of sonochemistry is harnessing the advantages of functional, ferroelectric materials to further enhance chemistry in a sonochemical reactor in an emerging process called piezocatalysis. [ 21 ] [ 22 ]
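The flashlight-and-glass analogy in the history section can be made quantitative. For a plane wave at normal incidence, the fraction of acoustic energy reflected at an interface depends only on the specific acoustic impedances Z = ρc of the two media: R = ((Z2 - Z1)/(Z2 + Z1))². The sketch below uses standard textbook impedance values for air and water, which are assumptions on my part rather than figures from the article:

```python
# Normal-incidence energy reflection coefficient at a two-medium interface:
#     R = ((Z2 - Z1) / (Z2 + Z1))**2
# where Z = rho * c is the specific acoustic impedance of each medium.

def energy_reflection(z1, z2):
    """Fraction of incident acoustic energy reflected at the interface."""
    return ((z2 - z1) / (z2 + z1)) ** 2

Z_AIR = 415.0      # Pa*s/m, air at room temperature (textbook approximation)
Z_WATER = 1.48e6   # Pa*s/m, water (textbook approximation)

r = energy_reflection(Z_AIR, Z_WATER)
print(f"air -> water: {100 * r:.2f}% of the energy is reflected")
# ~99.9% reflected: almost none of the airborne sound enters the water,
# consistent with Wood and Loomis's difficulty coupling sound into liquid.
```

The huge impedance mismatch, not any exotic property of water, is what makes an airborne source nearly useless, and why modern sonochemistry couples the transducer directly to the liquid.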
https://en.wikipedia.org/wiki/Sonochemistry
Sonoelectrochemistry is the application of ultrasound in electrochemistry . Like sonochemistry , sonoelectrochemistry was discovered in the early 20th century. The effects of power ultrasound on electrochemical systems and important electrochemical parameters were originally demonstrated by Moriguchi [ 1 ] and then by Schmid and Ehert [ 2 ] [ 3 ] when the researchers investigated the influence of ultrasound on concentration polarisation, metal passivation and the production of electrolytic gases in aqueous solutions. In the late 1950s, Kolb and Nyborg [ 4 ] showed that the hydrodynamics of the electrochemical solution (or electroanalyte) in an electrochemical cell were greatly enhanced in the presence of ultrasound, and described this phenomenon as acoustic streaming . In 1959, Penn et al. [ 5 ] demonstrated that sonication had a great effect on the electrode surface activity and on the concentration profile of electroanalyte species throughout the solution. In the early 1960s, the electrochemist Allen J. Bard [ 6 ] showed in controlled potential coulometry experiments that ultrasound significantly enhances mass transport of electrochemical species from the bulk solution to the electroactive surface. In the range of ultrasonic frequencies [20 kHz – 2 MHz], ultrasound has been applied to many electrochemical systems, processes and areas of electrochemistry (to name but a few: electroplating, electrodeposition, electropolymerisation, electrocoagulation, organic electrosynthesis, materials electrochemistry, environmental electrochemistry, electroanalytical chemistry, hydrogen energy and fuel cell technology) both in academia and industry, [ 7 ] as this technology offers several benefits over traditional technologies.
[ 8 ] [ 9 ] The advantages are as follows: significant thinning of the diffusion layer thickness (δ) at the electrode surface; increase in electrodeposit/electroplating thickness; increase in electrochemical rates, yields and efficiencies; increase in electrodeposit porosity and hardness; increase in gas removal from electrochemical solutions; increase in electrode cleanliness and hence electrode surface activation; lowering of electrode overpotentials (due to metal depassivation and removal of gas bubbles generated at the electrode surface, induced by cavitation and acoustic streaming); and suppression of electrode fouling (depending on the ultrasonic frequency and power). To date, over 3,500 publications, [ 10 ] including patents and technical, research and review articles, have been written on the subject, with the vast majority published post-1990, after a review paper from Mason et al. [ 11 ] entitled 'Sonoelectrochemistry' highlighted the extraordinary effects of sonication on enhancing mass transport, aiding solution degassing, improving electrode surface cleaning, producing radical species (via sonolysis ) and increasing electrochemical products and yields. [ 12 ]
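The first benefit listed above, thinning of the diffusion layer δ, translates directly into higher currents: for a planar electrode under mass-transport control, the limiting current is i_lim = nFADC/δ, so the current scales as 1/δ. The sketch below uses illustrative values only (none of the numbers come from the article) to show how a tenfold thinning of δ under sonication gives a tenfold larger limiting current:

```python
# Mass-transport-limited current for a planar electrode:
#     i_lim = n * F * A * D * C / delta
# Thinning the diffusion layer delta (as ultrasound does) raises i_lim.

F = 96485.0  # Faraday constant, C/mol

def limiting_current(n, area_cm2, d_cm2_s, conc_mol_cm3, delta_cm):
    """Mass-transport-limited current (A) for a planar electrode."""
    return n * F * area_cm2 * d_cm2_s * conc_mol_cm3 / delta_cm

# Illustrative case: one-electron couple, 1 cm^2 electrode,
# D = 1e-5 cm^2/s, 1 mM (= 1e-6 mol/cm^3) solution.
silent = limiting_current(1, 1.0, 1e-5, 1e-6, delta_cm=5e-3)     # ~50 um layer
sonicated = limiting_current(1, 1.0, 1e-5, 1e-6, delta_cm=5e-4)  # ~5 um layer

print(f"silent:    {silent * 1e3:.3f} mA")
print(f"sonicated: {sonicated * 1e3:.3f} mA")  # ten times larger
```

The particular δ values are invented for illustration; the point is only the 1/δ scaling that underlies the rate enhancements listed above.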
https://en.wikipedia.org/wiki/Sonoelectrochemistry
The Sonogashira reaction is a cross-coupling reaction used in organic synthesis to form carbon–carbon bonds . It employs a palladium catalyst as well as a copper co-catalyst to form a carbon–carbon bond between a terminal alkyne and an aryl or vinyl halide : [ 1 ]

R¹–X + H–C≡C–R² → R¹–C≡C–R²   ([Pd] cat., [Cu] cat., base, room temperature)

The Sonogashira cross-coupling reaction has been employed in a wide variety of areas, due to its usefulness in the formation of carbon–carbon bonds. The reaction can be carried out under mild conditions, such as at room temperature, in aqueous media, and with a mild base, which has allowed for the use of the Sonogashira cross-coupling reaction in the synthesis of complex molecules. Its applications include pharmaceuticals, natural products, organic materials, and nanomaterials . [ 1 ] Specific examples include its use in the synthesis of tazarotene , [ 2 ] which is a treatment for psoriasis and acne , and in the preparation of SIB-1508Y, also known as Altinicline , [ 3 ] a nicotinic receptor agonist . The alkynylation reaction of aryl halides using aromatic acetylenes was reported in 1975 in three independent contributions by Cassar, [ 4 ] Dieck and Heck [ 5 ] as well as Sonogashira , Tohda and Hagihara. [ 6 ] All of the reactions employ palladium catalysts to afford the same reaction products. However, the protocols of Cassar and Heck are performed solely by the use of palladium and require harsh reaction conditions (i.e. high reaction temperatures). The use of a copper co-catalyst in addition to palladium complexes in Sonogashira's procedure enabled the reactions to be carried out under mild reaction conditions in excellent yields.
A rapid development of the Pd/Cu systems followed and enabled myriad synthetic applications, while Cassar-Heck conditions were left, maybe unjustly, all but forgotten. [ 7 ] The reaction's remarkable utility can be evidenced by the amount of research still being done on understanding and optimizing its synthetic capabilities, as well as on employing the procedures to prepare various compounds of synthetic, medicinal or material/industrial importance. [ 7 ] Among cross-coupling reactions it ranks, by number of publications, right after the Suzuki and Heck reactions, [ 8 ] and a search for the term "Sonogashira" in SciFinder provides over 1500 references for journal publications between 2007 and 2010. [ 7 ] The Sonogashira reaction has become so well known that often all reactions that use a modern organometallic catalyst to couple alkyne motifs are termed some variant of "Sonogashira reaction" , despite the fact that these reactions are not carried out under true Sonogashira reaction conditions. [ 7 ] The reaction mechanism is not clearly understood, but the textbook mechanism revolves around a palladium cycle, which is in agreement with the "classical" cross-coupling mechanism, and a copper cycle, which is less well understood. [ 9 ] Although beneficial for the effectiveness of the reaction, the use of copper salts in the "classical" Sonogashira reaction is accompanied by several drawbacks, such as the use of environmentally unfriendly reagents, the formation of undesirable alkyne homocoupling ( Glaser side products), and the necessity of strict oxygen exclusion from the reaction mixture. Thus, with the aim of excluding copper from the reaction, much effort has been devoted to the development of Cu-free Sonogashira reactions. Alongside the development of new reaction conditions, many experimental and computational studies have focused on elucidating the reaction mechanism.
[ 12 ] Until recently, the exact mechanism by which the Cu-free reaction occurs was under debate, with critical mechanistic questions unanswered. [ 7 ] It was shown in 2018 by Košmrlj et al. that the reaction proceeds along two interconnected Pd 0 /Pd II catalytic cycles. [ 13 ] [ 14 ] It was demonstrated that amines compete with the phosphines and can also participate as ligands L in the described reaction species. Depending on the outcome of the competition between amine and phosphines, a dynamic and complex interplay is expected when using different coordinative bases. [ 15 ] [ 16 ] [ 13 ] [ 14 ] The Sonogashira reaction is typically run under mild conditions. [ 17 ] The cross-coupling is carried out at room temperature with a base, typically an amine such as diethylamine , [ 6 ] that also acts as the solvent. The reaction medium must be basic to neutralize the hydrogen halide produced as the byproduct of this coupling reaction, so alkylamine compounds such as triethylamine and diethylamine are sometimes used as solvents , but DMF or ether can also serve as solvents. Other bases such as potassium carbonate or cesium carbonate are occasionally used. In addition, deaerated conditions are formally needed for Sonogashira coupling reactions because the palladium(0) complexes are unstable in air, and oxygen promotes the formation of homocoupled acetylenes. Recently, the development of air-stable organopalladium catalysts has enabled this reaction to be conducted in the ambient atmosphere. In addition, R. M. Al-Zoubi and co-workers successfully developed a method with high regioselectivity for 1,2,3-trihaloarene derivatives in good to high yields under ambient conditions. [ 18 ] Typically, two catalysts are needed for this reaction: a zerovalent palladium complex and a copper(I) halide salt. Common examples of palladium catalysts include those containing phosphine ligands such as [Pd(PPh 3 ) 4 ] .
Another commonly used palladium source is [ Pd(PPh 3 ) 2 Cl 2 ] , but complexes containing bidentate phosphine ligands , such as [Pd( dppe )Cl 2 ] , [Pd( dppp )Cl 2 ] , and [Pd(dppf)Cl 2 ] , have also been used. [ 9 ] The drawback of such catalysts is the need for high loadings of palladium (up to 5 mol %), along with a larger amount of a copper co-catalyst. [ 9 ] Pd II complexes are in fact pre-catalysts, since they must be reduced to Pd 0 before catalysis can begin. Pd II complexes generally exhibit greater stability than Pd 0 complexes and can be stored under normal laboratory conditions for months. [ 19 ] Pd II catalysts are reduced to Pd 0 in the reaction mixture by an amine , a phosphine ligand, or another reactant, allowing the reaction to proceed. [ 20 ] For instance, oxidation of triphenylphosphine to triphenylphosphine oxide can lead to the formation of Pd 0 in situ when [Pd(PPh 3 ) 2 Cl 2 ] is used. Copper(I) salts, such as CuI , react with the terminal alkyne to produce a copper(I) acetylide, which acts as an activated species for the coupling reaction. Cu(I) is a co-catalyst and is used to increase the rate of the reaction. [ 7 ] The choice of aryl halide or pseudohalide substrate (sp 2 -carbon) is one of the main factors influencing the reactivity of the Sonogashira catalytic system. Reactivity of the aryl halides increases in the order Cl < Br < I, and vinyl halides are more reactive than the analogous aryl halides. The coupling of aryl iodides proceeds at room temperature, while aryl bromides require heating. This difference in reactivity can be exploited to selectively couple an aryl iodide but not an aryl bromide by performing the reaction at room temperature. [ 9 ] An example is the symmetrical Sonogashira coupling of two equivalents of 1-bromo-4-iodobenzene with trimethylsilylacetylene (with the trimethylsilyl group removed in situ ) to form bis(4-bromophenyl)acetylene .
[ 21 ] Aryl triflates can also be employed instead of aryl halides. Arenediazonium salts have been reported as an alternative to aryl halides for the Sonogashira coupling reaction. Gold(I) chloride has been used as a co-catalyst combined with palladium(II) chloride in the coupling of arenediazonium salts with terminal alkynes, a process carried out in the presence of bis-2,6-diisopropylphenyl dihydroimidazolium chloride (IPr NHC) (5 mol%) to generate an NHC–palladium complex in situ, with 2,6-di-tert-butyl-4-methylpyridine (DBMP) as base in acetonitrile as solvent at room temperature. [ 22 ] This coupling can also be carried out starting from anilines: the aniline is transformed into the diazonium salt, which then undergoes in situ Sonogashira coupling with, for example, phenylacetylene. Various aromatic alkynes can be employed to yield the desired disubstituted products in satisfactory yields. Aliphatic alkynes are generally less reactive. Because of the crucial role of the base, specific amines must be added in excess or used as solvent for the reaction to proceed. It has been discovered that secondary amines such as piperidine, morpholine, or diisopropylamine in particular can react efficiently and reversibly with trans – RPdX(PPh 3 ) 2 complexes by substituting one PPh 3 ligand. The equilibrium constant of this reaction depends on R, X, the amine's basicity, and its steric hindrance. [ 23 ] The result is competition between the amine and the alkyne for this ligand exchange, which is why the amine is generally added in excess to promote preferential substitution. Trimethylsilylacetylene is a commonly used reagent in Sonogashira couplings. [ 24 ] Being a liquid, it is a more convenient reagent than gaseous acetylene , and the trimethylsilyl group prevents addition onto the other end of the acetylene unit. The trimethylsilyl group can then be removed using TBAF , yielding a monosubstituted acetylene.
It may also be removed using DBU in situ, allowing the monosubstituted acetylene to react further with another aryl halide to form diphenylacetylene and derivatives. [ 21 ] While a copper co-catalyst is added to the reaction to increase reactivity, the presence of copper can result in the formation of alkyne dimers. This leads to the Glaser coupling reaction, the undesired formation of homocoupled acetylene derivatives upon oxidation . As a result, when running a Sonogashira reaction with a copper co-catalyst, it is necessary to work under an inert atmosphere to avoid the unwanted dimerization. Copper-free variations of the Sonogashira reaction have been developed to avoid the formation of the homocoupling products. [ 19 ] [ 25 ] There are other cases in which the use of copper should be avoided, such as coupling reactions involving substrates containing potential copper ligands, for instance free-base porphyrins . [ 9 ] In an inverse Sonogashira coupling the reactants are an aryl or vinyl compound and an alkynyl halide. [ 26 ] In some cases stoichiometric amounts of silver oxide can be used in place of CuI for copper-free Sonogashira couplings. [ 9 ] Recently, a nickel-catalyzed Sonogashira coupling has been developed which allows the coupling of non-activated alkyl halides to acetylenes without the use of palladium, although a copper co-catalyst is still needed. [ 27 ] It has also been reported that gold can be used as a heterogeneous catalyst, as demonstrated in the coupling of phenylacetylene and iodobenzene with an Au/CeO 2 catalyst. [ 28 ] [ 29 ] In this case, catalysis occurs heterogeneously on the Au nanoparticles, [ 29 ] [ 30 ] with Au(0) as the active site. [ 31 ] Selectivity for the desired cross-coupling product was also found to be enhanced by supports such as CeO 2 and La 2 O 3 .
[ 31 ] Additionally, iron-catalyzed Sonogashira couplings have been investigated as relatively cheap and non-toxic alternatives to palladium. Here, FeCl 3 is proposed to act as the transition-metal catalyst and Cs 2 CO 3 as the base, thus theoretically proceeding through a palladium-free and copper-free mechanism. [ 32 ] The net reaction is R−C≡C−H + Ar−X → R−C≡C−Ar (FeCl 3 , DMEDA; Cs 2 CO 3 , toluene, 135 °C, 72 h). While the copper-free mechanism has been shown to be viable, attempts to incorporate the various transition metals mentioned above as less expensive alternatives to palladium catalysts have shown a poor track record of success, owing to contamination of the reagents with trace amounts of palladium; this suggests that the theorized palladium-free pathways are extremely unlikely, if not impossible, to achieve. [ 33 ] Studies have shown that organic and inorganic starting materials can contain enough ( ppb -level) palladium for the coupling. [ 34 ] A highly efficient combined gold and palladium methodology for the Sonogashira coupling of a wide array of electronically and structurally diverse aryl and heteroaryl halides has been reported. [ 35 ] The orthogonal reactivity of the two metals gives high selectivity and extreme functional group tolerance in Sonogashira coupling. A brief mechanistic study revealed that the gold-acetylide intermediate enters the palladium catalytic cycle at the transmetalation step. The issue of recovering the often expensive catalyst after product formation poses a serious drawback for large-scale applications of homogeneous catalysis.
[ 9 ] Structures known as metallodendrimers combine the advantages of homogeneous and heterogeneous catalysts: they are soluble and well defined on the molecular level, yet they can be recovered by precipitation, ultrafiltration, or ultracentrifugation. [ 36 ] There are recent examples of the use of dendritic palladium complex catalysts for the copper-free Sonogashira reaction. Thus, several generations of bidentate phosphine palladium(II) polyamino dendritic catalysts, solubilized in triethylamine, have been used for the coupling of aryl iodides and bromides at 25–120 °C, and of aryl chlorides, albeit in very low yields. [ 37 ] The dendrimeric catalysts could usually be recovered by simple precipitation and filtration and reused up to five times, the diminished activity being attributed to dendrimer decomposition rather than to palladium leaching. These dendrimeric catalysts showed a negative dendritic effect; that is, the catalyst efficiency decreases as the dendrimer generation increases. A recyclable polymeric phosphine ligand, obtained from ring-opening metathesis polymerization of a norbornene derivative, has been used in the copper co-catalyzed Sonogashira reaction of methyl p -iodobenzoate and phenylacetylene with Pd(dba) 2 ·CHCl 3 as the palladium source. [ 38 ] Despite recovery by filtration, the polymer's catalytic activity decreased by approximately 4–8% in each recycling experiment. Pyridines and pyrimidines have shown good complexation properties for palladium and have been employed in the formation of catalysts suitable for Sonogashira couplings. The dipyrimidyl-palladium complex shown below has been employed in the copper-free coupling of iodo-, bromo-, and chlorobenzene with phenylacetylene using n -butylamine as base in THF at 65 °C. Furthermore, all structural features of this complex have been characterized by extensive X-ray analysis, supporting the observed reactivity.
[ 39 ] More recently, the dipyridylpalladium complex has been obtained and used in the copper-free Sonogashira coupling of aryl iodides and bromides in N -methylpyrrolidinone (NMP) with tetra- n -butylammonium acetate (TBAA) as base at room temperature. This complex has also been used for the coupling of aryl iodides and bromides in refluxing water as solvent in the presence of air, using pyrrolidine as base and TBAB as additive, [ 40 ] although its efficiency was higher in NMP as solvent. N -heterocyclic carbenes (NHCs) have become one of the most important classes of ligands in transition-metal catalysis. The success of normal NHCs is largely attributed to their superior σ-donating capability compared with phosphines, which is even greater in their abnormal NHC counterparts. Employed as ligands in palladium complexes, NHCs have contributed greatly to the stabilization and activation of precatalysts and have therefore found application in many areas of organometallic homogeneous catalysis, including Sonogashira couplings. [ 9 ] [ 42 ] [ 43 ] Interesting examples of abnormal NHCs are based on the mesoionic 1,2,3-triazol-5-ylidene structure. An efficient cationic palladium catalyst of the PEPPSI type, i.e., iPEPPSI (internal pyridine-enhanced precatalyst preparation, stabilization, and initiation), was demonstrated to efficiently catalyse the copper-free Sonogashira reaction in water as the only solvent, under aerobic conditions, in the absence of copper, amines, phosphines, and other additives. [ 42 ] Recent developments in heterogeneous catalysis have enabled the use of metal oxide materials such as cuprous oxide nanocatalysts in flow-processing technologies, which can enable the economical production of active pharmaceutical ingredients and various other fine chemicals.
[ 45 ] Sonogashira couplings are employed in a wide array of synthetic reactions, primarily because of their success in facilitating challenging transformations. The coupling of a terminal alkyne and an aromatic ring is the pivotal reaction in applications of the copper-promoted or copper-free Sonogashira reaction. The list of cases in which the typical Sonogashira reaction using aryl halides has been employed is large, and choosing illustrative examples is difficult. A recent use of this methodology is the coupling of iodinated phenylalanine with a terminal alkyne derived from d -biotin using an in situ generated Pd 0 species as catalyst, which allowed the preparation of an alkyne-linked phenylalanine derivative for bioanalytical applications. [ 46 ] There are also examples of both coupling partners being attached to allyl resins, with the Pd 0 catalyst effecting cleavage of the substrates and subsequent Sonogashira coupling in solution. [ 47 ] Many metabolites found in nature contain alkyne or enyne moieties, and the Sonogashira reaction has therefore found frequent utility in their syntheses. [ 48 ] Several of the most recent and promising applications of this coupling methodology toward the total synthesis of natural products exclusively employed the typical copper-cocatalyzed reaction. An example of the coupling of an aryl iodide to an aryl acetylene is the reaction of an iodinated alcohol with tris(isopropyl)silylacetylene, which gave an alkyne intermediate in the total synthesis of the benzindenoazepine alkaloid bulgaramine. [ 49 ] There are other recent examples of the use of aryl iodides for the preparation of intermediates under typical Sonogashira conditions which, after cyclization, yield natural products such as benzylisoquinoline [ 50 ] or indole alkaloids. [ 51 ] An example is the synthesis of the benzylisoquinoline alkaloids (+)-( S )- laudanosine and (–)-( S )-xylopinine.
The synthesis of these natural products involved the use of Sonogashira cross-coupling to build the carbon backbone of each molecule. [ 50 ] The 1,3-enyne moiety is an important structural unit in biologically active and natural compounds. [ citation needed ] It can be derived from vinylic systems and terminal acetylenes by a configuration-retaining stereospecific procedure such as the Sonogashira reaction. Vinyl iodides are the most reactive vinyl halides toward Pd 0 oxidative addition, and they are therefore the most frequently used partners in Sonogashira cross-coupling reactions, owing to the usually milder conditions employed. The versatility of the Sonogashira reaction makes it a widely used reaction in the synthesis of a variety of compounds. One pharmaceutical application is the synthesis of SIB-1508Y, more commonly known as Altinicline . Altinicline is a nicotinic acetylcholine receptor agonist that has shown potential in the treatment of Parkinson's disease, Alzheimer's disease, Tourette's syndrome, schizophrenia, and attention deficit hyperactivity disorder (ADHD). [ 3 ] [ 54 ] As of 2008, Altinicline had undergone Phase II clinical trials. [ 55 ] [ 56 ] The Sonogashira cross-coupling reaction can also be used in the synthesis of imidazopyridine derivatives. [ 57 ]
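The net transformation common to all of these applications is Ar−X + H−C≡C−R → Ar−C≡C−R. As a purely illustrative sketch, this bookkeeping can be expressed on SMILES strings with plain string handling; the helper function and its logic are invented for this example (they are not a cheminformatics API and do no chemistry, only symbol manipulation):

```python
# Toy sketch of the net Sonogashira transformation Ar-X + H-C#C-R -> Ar-C#C-R,
# written over SMILES strings. Purely illustrative: the function only joins
# strings and checks for a terminal halogen / terminal alkyne pattern.
HALOGENS = ("I", "Br", "Cl")

def sonogashira_product(aryl_halide: str, terminal_alkyne: str) -> str:
    """Join an aryl halide and a terminal alkyne at the C(sp2)-X position."""
    for x in HALOGENS:
        if aryl_halide.endswith(x):
            aryl = aryl_halide[: -len(x)]   # drop the halogen leaving group
            break
    else:
        raise ValueError("no terminal halogen found on the aryl partner")
    if not terminal_alkyne.startswith("C#C"):
        raise ValueError("expected a terminal alkyne written as C#C...")
    return aryl + terminal_alkyne

# Iodobenzene + phenylacetylene -> diphenylacetylene:
print(sonogashira_product("c1ccccc1I", "C#Cc1ccccc1"))  # c1ccccc1C#Cc1ccccc1
```

A real workflow would use a cheminformatics toolkit with reaction SMARTS rather than string splicing; the sketch only mirrors the bond reorganization described in the text.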
https://en.wikipedia.org/wiki/Sonogashira_coupling
Sonoluminescence is the emission of light from imploding bubbles in a liquid when excited by sound. Sonoluminescence was first discovered in 1934 at the University of Cologne . It occurs when a sound wave of sufficient intensity induces a gaseous cavity within a liquid to collapse quickly, emitting a burst of light. The phenomenon can be observed in stable single-bubble sonoluminescence (SBSL) and multi-bubble sonoluminescence (MBSL). In 1960, Peter Jarman proposed that sonoluminescence is thermal in origin and might arise from microshocks within collapsing cavities. Later experiments revealed that the temperature inside the bubble during SBSL could reach up to 12,000 kelvins (11,700 °C; 21,100 °F). The exact mechanism behind sonoluminescence remains unknown, with various hypotheses including hotspot, bremsstrahlung , and collision-induced radiation. Some researchers have even speculated that temperatures in sonoluminescing systems could reach millions of kelvins, potentially causing thermonuclear fusion; this idea, however, has been met with skepticism by other researchers. [ 1 ] The phenomenon has also been observed in nature, with the pistol shrimp being the first known instance of an animal producing light through sonoluminescence. [ 2 ] The sonoluminescence effect was first discovered at the University of Cologne in 1934 as a result of work on sonar . [ 3 ] Hermann Frenzel and H. Schultes put an ultrasound transducer in a tank of photographic developer fluid . They hoped to speed up the development process. Instead, they noticed tiny dots on the film after developing and realized that the bubbles in the fluid were emitting light with the ultrasound turned on. [ 4 ] It was too difficult to analyze the effect in early experiments because of the complex environment of a large number of short-lived bubbles. This phenomenon is now referred to as multi-bubble sonoluminescence (MBSL). 
In 1960, Peter Jarman from Imperial College London proposed the most widely accepted theory of the sonoluminescence phenomenon. He concluded that sonoluminescence is basically thermal in origin and that it might arise from microshocks within the collapsing cavities. [ 5 ] In 1990, an experimental advance was reported by Gaitan and Crum, who produced stable single-bubble sonoluminescence (SBSL). [ 6 ] In SBSL, a single bubble trapped in an acoustic standing wave emits a pulse of light with each compression of the bubble within the standing wave . This technique allowed a more systematic study of the phenomenon because it isolated the complex effects into one stable, predictable bubble. It was realized that the temperature inside the bubble was hot enough to melt steel , as seen in an experiment done in 2012 in which the temperature inside the bubble as it collapsed reached about 12,000 K (11,700 °C; 21,100 °F). [ 7 ] Interest in sonoluminescence was renewed when an inner temperature of such a bubble well above 1 MK (999,727 °C; 1,799,540 °F) was postulated. [ 8 ] This temperature is thus far not conclusively proven; rather, recent experiments indicate temperatures around 20,000 K (19,700 °C; 35,500 °F). [ 9 ] Sonoluminescence can occur when a sound wave of sufficient intensity induces a gaseous cavity within a liquid to collapse quickly. This cavity may take the form of a preexisting bubble or may be generated through a process known as cavitation . Sonoluminescence in the laboratory can be made stable, so that a single bubble will expand and collapse over and over again in a periodic fashion, emitting a burst of light each time it collapses. For this to occur, a standing acoustic wave is set up within a liquid, and the bubble sits at a pressure antinode of the standing wave. The frequencies of resonance depend on the shape and size of the container in which the bubble is contained.
Spectral measurements have given bubble temperatures in the range from 2,300 to 5,100 K (2,030 to 4,830 °C; 3,680 to 8,720 °F), the exact temperatures depending on experimental conditions including the composition of the liquid and gas. [ 11 ] Detection of very high bubble temperatures by spectral methods is limited by the opacity of liquids to the short-wavelength light characteristic of very high temperatures. A study describes a method of determining temperatures based on the formation of plasmas . Using argon bubbles in sulfuric acid , the data show the presence of ionized molecular oxygen O + 2 , sulfur monoxide , and atomic argon populating high-energy excited states, which confirms the hypothesis that the bubbles have a hot plasma core. [ 12 ] The ionization and excitation energy of the dioxygenyl cations observed is 18 electronvolts (2.9 × 10 −18 J). From this observation, the authors conclude that the core temperatures reach at least 20,000 K (19,700 °C; 35,500 °F), [ 9 ] hotter than the surface of the Sun . The dynamics of the motion of the bubble are characterized to a first approximation by the Rayleigh–Plesset equation (named after Lord Rayleigh and Milton Plesset ): {\displaystyle R{\ddot {R}}+{\tfrac {3}{2}}{\dot {R}}^{2}={\frac {1}{\rho }}\left(P_{0}(t)-P_{\infty }(t)-{\frac {4\mu {\dot {R}}}{R}}-{\frac {2\gamma }{R}}\right)} This is an approximate equation derived from the Navier–Stokes equations (written in a spherical coordinate system ) and describes the motion of the bubble radius R as a function of time t . Here, μ is the viscosity , P ∞ ( t ) {\displaystyle P_{\infty }(t)} is the external pressure infinitely far from the bubble, P 0 ( t ) {\displaystyle P_{0}(t)} is the internal pressure of the bubble, ρ {\displaystyle \rho } is the liquid density, and γ is the surface tension . The over-dots represent time derivatives. This equation, though approximate, has been shown to give good estimates of the motion of the bubble under an acoustically driven field except during the final stages of collapse.
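The Rayleigh–Plesset dynamics can be explored numerically. Below is a minimal sketch that integrates the equation for an acoustically driven air bubble in water, assuming a polytropic gas core; the parameter values are typical textbook figures rather than values from any specific experiment, and the drive amplitude is kept modest so the integration stays well-behaved (the violent collapses of real SBSL would demand a stiffer treatment):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants for an air bubble in water (SI units).
rho   = 998.0        # liquid density (kg/m^3)
mu    = 1.0e-3       # dynamic viscosity (Pa*s)
gamma = 0.0725       # surface tension (N/m)
P_atm = 101325.0     # ambient pressure (Pa)
R0    = 5.0e-6       # equilibrium bubble radius (m)
kappa = 1.4          # polytropic exponent for the gas core
Pa    = 0.4 * P_atm  # acoustic drive amplitude (Pa)
f     = 26.5e3       # drive frequency (Hz)

# Gas pressure at equilibrium balances ambient plus Laplace pressure.
P_gas0 = P_atm + 2.0 * gamma / R0

def rayleigh_plesset(t, y):
    """Rayleigh-Plesset equation as a first-order system y = (R, dR/dt)."""
    R, Rdot = y
    P_inf = P_atm + Pa * np.sin(2.0 * np.pi * f * t)  # far-field pressure
    P_0   = P_gas0 * (R0 / R) ** (3.0 * kappa)        # polytropic gas core
    Rddot = ((P_0 - P_inf - 2.0 * gamma / R - 4.0 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

# Integrate three acoustic periods, starting from rest at equilibrium.
sol = solve_ivp(rayleigh_plesset, (0.0, 3.0 / f), [R0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-12,
                max_step=1.0 / (200.0 * f))
```

With this sub-resonance drive the radius simply oscillates around R0; raising the amplitude toward the values used in SBSL experiments produces the sharp collapses the text describes.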
Both simulation and experimental measurement show that during the critical final stages of collapse, the bubble wall velocity exceeds the speed of sound of the gas inside the bubble. [ 13 ] Thus a more detailed analysis of the bubble's motion is needed beyond Rayleigh–Plesset to explore the additional energy focusing that an internally formed shock wave might produce. In the static case, the Rayleigh-Plesset equation simplifies, yielding the Young–Laplace equation . The mechanism of the phenomenon of sonoluminescence is unknown. Hypotheses include: hotspot, bremsstrahlung radiation , collision-induced radiation and corona discharges , nonclassical light , proton tunneling , electrodynamic jets and fractoluminescent jets (now largely discredited due to contrary experimental evidence). [ citation needed ] In 2002, M. Brenner, S. Hilgenfeldt, and D. Lohse published a 60-page review that contains a detailed explanation of the mechanism. [ 14 ] An important factor is that the bubble contains mainly inert noble gas such as argon or xenon (air contains about 1% argon, and the amount dissolved in water is too great; for sonoluminescence to occur, the concentration must be reduced to 20–40% of its equilibrium value) and varying amounts of water vapor . Chemical reactions cause nitrogen and oxygen to be removed from the bubble after about one hundred expansion-collapse cycles. The bubble will then begin to emit light. [ 15 ] The light emission of highly compressed noble gas is exploited technologically in the argon flash devices. During bubble collapse, the inertia of the surrounding water causes high pressure and high temperature, reaching around 10,000 kelvins in the interior of the bubble, causing the ionization of a small fraction of the noble gas present. 
The amount ionized is small enough for the bubble to remain transparent, allowing volume emission; surface emission would produce more intense light of longer duration, dependent on wavelength , contradicting experimental results. Electrons from ionized atoms interact mainly with neutral atoms, causing thermal bremsstrahlung radiation. As the wave hits a low-energy trough, the pressure drops, allowing electrons to recombine with atoms; light emission then ceases because of the lack of free electrons. This makes for a 160-picosecond light pulse for argon (even a small drop in temperature causes a large drop in ionization, due to the large ionization energy relative to the photon energy). This description is simplified from the literature above, which details various steps of differing duration, from 15 microseconds (expansion) to 100 picoseconds (emission). Computations based on the theory presented in the review produce radiation parameters (intensity and duration time versus wavelength) that match experimental results [ citation needed ] with errors no larger than expected given some simplifications (e.g., assuming a uniform temperature in the entire bubble), so it seems the phenomenon of sonoluminescence is at least roughly explained, although some details of the process remain obscure. Any discussion of sonoluminescence must include a detailed analysis of metastability. Sonoluminescence in this respect is what is physically termed a bounded phenomenon, meaning that it exists in a bounded region of parameter space for the bubble, a coupled magnetic field being one such parameter. The magnetic aspects of sonoluminescence are very well documented. [ 16 ] An unusually exotic hypothesis of sonoluminescence, which has received much popular attention, is the Casimir energy hypothesis suggested by noted physicist Julian Schwinger [ 17 ] and more thoroughly considered in a paper by Claudia Eberlein [ 18 ] of the University of Sussex .
Eberlein's paper suggests that the light in sonoluminescence is generated by the vacuum within the bubble in a process similar to Hawking radiation , the radiation generated at the event horizon of black holes . According to this vacuum energy explanation, since quantum theory holds that the vacuum contains virtual particles , the rapidly moving interface between water and gas converts virtual photons into real photons. This is related to the Unruh effect and the Casimir effect . The argument has been made that sonoluminescence releases too large an amount of energy, and on too short a time scale, to be consistent with the vacuum energy explanation, [ 19 ] although other credible sources argue the vacuum energy explanation might yet prove to be correct. [ 20 ] Some have argued that the Rayleigh–Plesset equation described above is unreliable for predicting bubble temperatures and that actual temperatures in sonoluminescing systems can be far higher than 20,000 kelvins. Some research claims to have measured temperatures as high as 100,000 kelvins and speculates that temperatures could reach into the millions of kelvins. [ 21 ] Temperatures this high could cause thermonuclear fusion . This possibility is sometimes referred to as bubble fusion and is likened to the implosion design used in the fusion component of thermonuclear weapons . Experiments in 2002 and 2005 by R. P. Taleyarkhan using deuterated acetone showed measurements of tritium and neutron output consistent with fusion. However, the papers were considered of low quality, and a report on the author's scientific misconduct cast further doubt on them, so the claims lost credibility among the scientific community. [ 22 ] [ 23 ] [ 24 ] On January 27, 2006, researchers at Rensselaer Polytechnic Institute claimed to have produced fusion in sonoluminescence experiments.
[ 25 ] [ 26 ] Pistol shrimp (also called snapping shrimp ) produce a type of cavitation luminescence from a bubble that collapses after being created by the rapid snap of the shrimp's claw. The animal snaps a specialized claw shut to create a cavitation bubble that generates acoustic pressures of up to 80 kPa at a distance of 4 cm from the claw. As it extends out from the claw, the bubble reaches speeds of 60 miles per hour (97 km/h) and releases a sound reaching 218 decibels. The pressure is strong enough to kill small fish. The light produced is of lower intensity than that produced by typical sonoluminescence and is not visible to the naked eye. The light and heat produced by the bubble may have no direct significance, as it is the shockwave produced by the rapidly collapsing bubble that these shrimp use to stun or kill prey. However, it is the first known instance of an animal producing light by this effect, and it was whimsically dubbed "shrimpoluminescence" upon its discovery in 2001. [ 27 ] It has subsequently been discovered that another group of crustaceans, the mantis shrimp , contains species whose club-like forelimbs can strike so quickly and with such force as to induce sonoluminescent cavitation bubbles upon impact. [ 2 ] A mechanical device with a 3D-printed snapper claw at five times the actual size was also reported to emit light in a similar fashion; [ 28 ] this bioinspired design was based on the snapper claw molt shed by an Alpheus formosus , the striped snapping shrimp. [ 29 ]
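The quoted snap figures are mutually consistent: underwater sound pressure levels are conventionally referenced to 1 μPa, and on that reference an 80 kPa peak pressure corresponds to roughly 218 dB. A short sketch (the function name is ours) makes the check explicit:

```python
import math

def spl_underwater_db(pressure_pa: float, p_ref_pa: float = 1e-6) -> float:
    """Sound pressure level in dB re 1 uPa, the usual underwater reference."""
    return 20.0 * math.log10(pressure_pa / p_ref_pa)

# The 80 kPa snap pressure quoted above, expressed as an underwater SPL:
print(round(spl_underwater_db(80e3)))  # 218
```

Note that underwater decibels are not directly comparable to in-air figures, which use a 20 μPa reference.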
https://en.wikipedia.org/wiki/Sonoluminescence
Sonoporation , or cellular sonication , is the use of sound in the ultrasonic range to increase the permeability of the cell plasma membrane . This technique is usually used in molecular biology and non-viral gene therapy to allow the uptake of large molecules such as DNA into the cell, in a cell disruption process called transfection or transformation . Sonoporation employs the acoustic cavitation of microbubbles to enhance delivery of these large molecules. [ 1 ] The exact mechanism of sonoporation-mediated membrane translocation remains unclear, with a few different hypotheses currently being explored. Sonoporation is under active study for the introduction of foreign genes into tissue culture cells, especially mammalian cells. Sonoporation is also being studied for use in targeted gene therapy in vivo , in a medical treatment scenario in which a patient is given modified DNA and an ultrasonic transducer targets this modified DNA into specific regions of the patient's body. [ 2 ] The bioactivity of this technique is similar to, and in some cases found superior to, electroporation . Extended exposure to low-frequency (< MHz ) ultrasound has been demonstrated to result in complete cellular death (rupturing), so cellular viability must also be accounted for when employing this technique. Sonoporation is performed with a dedicated sonoporator. It may also be performed with custom-built piezoelectric transducers connected to bench-top function generators and acoustic amplifiers. Standard medical ultrasound devices may also be used in some applications. The acoustic output used in sonoporation is quantified in terms of the mechanical index , which quantifies the likelihood that exposure to diagnostic ultrasound will produce an adverse biological effect through a non-thermal, pressure-based mechanism. [ 3 ] Microbubble contrast agents are generally used in contrast-enhanced ultrasound applications to enhance the acoustic impact of ultrasound.
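The mechanical index mentioned above is defined as the derated peak rarefactional (negative) pressure in MPa divided by the square root of the center frequency in MHz; diagnostic systems are generally limited to MI ≤ 1.9. A minimal sketch of the calculation:

```python
def mechanical_index(peak_negative_pressure_mpa: float,
                     frequency_mhz: float) -> float:
    """Mechanical index: derated peak rarefactional pressure (MPa)
    divided by the square root of the center frequency (MHz)."""
    return peak_negative_pressure_mpa / frequency_mhz ** 0.5

# A 1 MHz beam with 0.5 MPa peak negative pressure gives MI = 0.5:
print(mechanical_index(0.5, 1.0))  # 0.5
```

The inverse dependence on frequency reflects the fact that lower-frequency ultrasound is more likely to induce cavitation at a given pressure, which is why sonoporation often uses frequencies near or below 1 MHz.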
For sonoporation specifically, microbubbles are used to significantly enhance membrane translocation of molecular therapeutics. [ 4 ] The microbubbles used today are composed of a gas core and a surrounding shell. The makeup of these elements may vary depending on the preferred physical and chemical properties. [ 5 ] Microbubble shells have been formed from lipids , galactose , albumin , or polymers . The gas core can be made up of air or heavy gases such as nitrogen or perfluorocarbon . [ 6 ] Microbubble gas cores have high compressibility relative to their liquid environment, making them highly responsive to acoustic excitation. Under ultrasound stimulation, microbubbles undergo expansion and contraction, a phenomenon called stable cavitation . If a microbubble is attached to the cell membrane , the microbubble oscillations produced by ultrasound stimulation may push and pull on the membrane to produce a membrane opening. These rapid oscillations are also responsible for adjacent fluid flow, called microstreaming, which increases pressure on surrounding cells, producing further sonoporation across whole cell populations. [ 7 ] The physical mechanisms thought to be involved in microbubble-enhanced sonoporation have been referred to as push, pull, microstreaming, translation, and jetting. [ 8 ] The mechanism by which molecules cross cellular membrane barriers during sonoporation remains unclear. Different theories exist that may potentially explain barrier permeabilization and molecular delivery. The dominant hypotheses include pore formation, endocytosis , and membrane wounds. Pore formation following ultrasound application was first reported in 1999 in a study that observed cell membrane craters after ultrasound application at 255 kHz. [ 9 ] Later, sonoporation-mediated microinjection of dextran molecules showed that membrane permeability mechanisms differ depending on the size of the dextran molecules.
Dextran molecules of 3 to 70 kDa were reported to have crossed the cellular membrane via transient pores. In contrast, dextran molecules of 155 and 500 kDa were predominantly found in vesicle-like structures, likely indicating a mechanism of endocytosis . [ 10 ] This variability in membrane behavior has led to other studies investigating membrane rupture and resealing characteristics as a function of ultrasound amplitude and duration. Various cellular reactions to ultrasound point to molecular uptake via endocytosis. These observed phenomena include ion exchange , hydrogen peroxide formation, and changes in intracellular calcium concentration. Studies have used patch clamping techniques to monitor membrane-potential ion exchange and probe the role of endocytosis in sonoporation. Ultrasound application to cells and adjacent microbubbles was shown to produce marked cell membrane hyperpolarization along with a progressive intracellular calcium increase, believed to be a consequence of calcium channels opening in response to microbubble oscillations. These findings support the idea that ultrasound application induces the calcium-mediated uncoating of clathrin-coated pits seen in traditional endocytosis pathways. [ 11 ] [ 12 ] Other work reported that sonoporation induces the formation of hydrogen peroxide, a cellular reaction that is also known to be involved in endocytosis. [ 9 ] Mechanically created wounds in the plasma membrane have been observed as a result of sonoporation-produced shear forces . The nature of these wounds may vary with the degree of acoustic cavitation, leading to a spectrum of cell behavior from membrane blebbing to instant cell lysis . Multiple studies examining membrane wounds have observed resealing behavior, a process dependent on recruitment of ATP and intracellular vesicles.
[ 9 ] Following sonoporation-mediated membrane permeabilization, cells can automatically repair the membrane openings through a phenomenon called "reparable sonoporation." [ 13 ] The membrane resealing process has been shown to be calcium-dependent. This property may suggest that membrane repair involves a cell's active repair machinery responding to the cellular influx of calcium. [ 14 ] The first study reporting molecular delivery using ultrasound was a 1987 in vitro study attempting to transfer plasmid DNA to cultured mouse fibroblast cells using sonoporation. [ 15 ] This successful plasmid DNA transfection, conferring G418 antibiotic resistance, ultimately led to further in vitro studies that hinted at the potential for sonoporation transfection of plasmid DNA and siRNA in vivo. In vivo ultrasound-mediated drug delivery was first reported in 1991 [ 15 ] and many other preclinical studies involving sonoporation have followed. The method is being used to deliver therapeutic drugs or genes to treat a variety of diseases, including stroke , cancer , Parkinson's disease , and Alzheimer's disease . [ 13 ] The preclinical utility of sonoporation is well illustrated by past tumor radiation treatments, which have reported a more than 10-fold increase in cellular destruction when ionizing radiation is coupled with ultrasound-mediated microbubble vascular disruption. This increase in delivery efficiency could allow a corresponding reduction in therapeutic dosing. [ 16 ]
https://en.wikipedia.org/wiki/Sonoporation
The Sony Multimedia CD-ROM Player was a portable CD-ROM –based multimedia player produced by Sony and released in 1992. It was used to run reference software, such as electronic publications and encyclopedias . Before its release, both Sony representatives and the press referred to the device as the Sony Bookman ; [ 7 ] [ 8 ] [ 9 ] that name remained in use in later publications. [ 10 ] The player was sold concurrently with Sony's Data Discman e-book players. [ 11 ] Unlike those devices, the MMCD Player could read full-size 120-millimeter CD-ROM discs, including audio CDs . The player's proprietary software format was one of several rich-media CD formats released to the market during the early 1990s. The MMCD Player has a clamshell form factor with an LCD screen and a QWERTY keyboard, complete with a numeric keypad , four-way navigation pad, "yes" and "no" buttons and a set of function keys (F1 to F5). The keyboard is located on top of an inner lid which covers a top-loading CD drive. [ 12 ] [ 13 ] Discs for the player used the CD-ROM XA sector format and a software format proprietary to the player. Software which the player supported was marked with the "MMCD Player Software" logo (not to be confused with MMCD , a high-density disc format proposal by Sony and Philips). Takashi Sugiyama, Sony Corporation of America 's project manager, attributed the MMCD Player's lack of support for established CD-ROM XA–based multimedia formats to its sub- VGA display resolution and the lack of hard drive caching support. [ 7 ] Newsweek chose the Sony MMCD player as a pilot platform for Newsweek InterActive , a quarterly CD-ROM magazine initially published in March 1993. [ 14 ] [ 15 ] The magazine was later released on compact disks for IBM PC compatible computers. No more than "a few thousand of units" of the MMCD version had reportedly shipped by 1995. [ 16 ] Titles by Compton's NewMedia (a CD-ROM publishing arm of Encyclopædia Britannica, Inc.
) and Random House were also available, with some disks including software for both the MMCD Player and computer platforms such as DOS and Windows . [ 4 ] [ 17 ] Several companies marketed Sony MMCD Player–based kits to real estate brokers . Digital Data, a company based in Irving, Texas , adapted Austin multiple listing service data as a weekly CD-ROM publication in 1994. [ 18 ] In 1995, San Diego –based Visual Display Marketing was pitching its MMCD Player–based product to real estate associations, with its owner Gary Ripsco describing the concept of publishing weekly or biweekly home listing discs. [ 19 ] Microsoft announced support for the MMCD Player in its multimedia authoring tool, Multimedia Viewer , upon the player's introduction on September 16, 1992. [ 20 ] The Sony MMCD player was introduced the same month as Kodak's Photo CD format and the Tandy VIS multimedia system. Multimedia & Videodisc Monitor described the interactive multimedia landscape as looking "chaotic" and stated that consumers and commercial end users "probably can't" figure out the then-current format situation. [ 21 ] PC Magazine noted that the introduction of multiple CD-ROM format compatibility logos, along with Sony's MMCD one, made shopping for multimedia titles "anything but simple" and went against the goal of the Multimedia PC program. [ 22 ] Later, PC Magazine advised anyone but corporate purchasers "with a driving need to do away with a paper" against buying the Sony MMCD player, criticizing its high price and its incompatibility with other multimedia formats. [ 6 ] The Washington Post noted the player's ease of use, comparing it favorably to Walkman compact disk players, but criticized the device's speed and the resolution of its built-in screen. [ 4 ] In a 2006 column, Michael Rogers , who was an editor of the Newsweek Interactive division in the 1990s, said the Sony MMCD player was "far ahead of its time" but "slow as molasses."
He noted that, as the device loaded the Newsweek CD-ROM, it took a long time to display the magazine's logo and play the introductory sound bite. [ 23 ]
https://en.wikipedia.org/wiki/Sony_Multimedia_CD-ROM_Player
The Sony SmartWatch is a line of wearable devices developed and marketed by Sony Mobile from 2012 to 2016 through three generations. They connect to Android smartphones and can display information such as Twitter feeds and SMS messages, among other things. The original Sony SmartWatch, model MN2SW, came with a flexible silicone wristband with multiple colors available. It was introduced at CES 2012 and launched later in March 2012. [ 1 ] The Sony SmartWatch 2 , model SW2, was launched in late September 2013. The SW2 worked with any smartphone running Android 4.0 or higher, unlike Samsung's competing Galaxy Gear smartwatch, which only worked with some of Samsung's own Galaxy handsets. The watch featured an aluminum body and came with the option of a silicone or metal wristband, but could be used with any 24 mm wristband. It was 1.65 inches tall by 1.61 inches wide by 0.35 inch thick, weighed 0.8 ounces and sported a transflective LCD screen with a 220×176 resolution. The SW2 connected to the smartphone using Bluetooth, and supported NFC for easy pairing. It was rated IP57, meaning it could be submerged in up to a meter of water for 30 minutes and was dust-resistant. [ 2 ] [ 3 ] At IFA 2014 the company announced the Sony SmartWatch 3. [ 4 ] [ 5 ] Its processor switched from the previous generations' ARM Cortex-M MCU [ 6 ] to an ARM Cortex-A CPU. [ 7 ] As noted by ABI Research, "The SmartWatch 3 has many new features such as waterproof (IP68 rated, not just resistant), improved styling, transition to Android Wear , and introduction of a new wearable platform from Broadcom . ...
[It's] based on the Broadcom system-on-chip (SoC) platform which includes a 1.2GHz Quad-core ARM Cortex A7 processor (BCM23550), an improved GPS and ambient light sensor processing SoC (BCM47531) capable of simultaneously tracking five satellite systems (GPS, GLONASS, QZSS, SBAS, and BeiDou), the now popular Wi-Fi 802.11n /BT/NFC/FM quad-combo connectivity chip (BCM43341), and a highly integrated power management IC (BCM59054)." [ 8 ] Several apps are capable of using the SmartWatch 3's GPS. The watch is also capable of tracking swimming with swim.com and golf swings with vimoGolf. The Sony SmartWatch 3 was not upgraded to version 2.0 of Android Wear . [ 9 ] Display: 128 × 128 pixels, 65k (16-bit) color. [ 10 ] Battery: 270 mAh (42mm), 361 mAh (34mm). Dimensions: 36 mm (1.4 in) wide, 8 mm (0.31 in) deep, 12.8 mm (0.50 in) deep including the clip. Weight: 26 g (0.92 oz) with watchband. [ 11 ]
https://en.wikipedia.org/wiki/Sony_SmartWatch
The Sony Vaio MX series was a series of multimedia-rich desktop PCs in Sony 's Vaio line, first launched in 2000. Sony combined a desktop PC with high-end Hi-Fi features to create an entertainment system. The MX series PCs had a built-in FM radio , MiniDisc player, and an LCD . They also came with speakers driven by a strong bass amplifier, and a remote control. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sony_Vaio_MX_series
The Sony Vaio TP series was a series of living room PCs in Sony's Vaio line, sold from 2007 through 2008. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sony_Vaio_TP_series
The Sony Watchman is a line of portable pocket televisions trademarked and produced by Sony . The line was introduced in 1982 [ 1 ] and discontinued in 2000. Its name is a portmanteau of "Watch" (watching television ) and "man" from Sony's Walkman personal cassette audio players. There were more than 65 models of the Watchman before its discontinuation. As the models progressed, display size increased and new features were added. Due to the switch to digital broadcasting , most models of the Sony Watchman can no longer be used to receive live television broadcasts without the use of a digital converter box . The initial model was introduced in 1982 as the FD-210 (FD-200 in Japan), which had a black & white five-centimeter (2") cathode-ray tube display. The device weighed around 650 grams (23 oz) and measured 87 × 198 × 33 millimeters (3½" × 7¾" × 1¼"). It was sold in Japan at a price of 54,800 yen . Roughly two years later, in 1984, the device was introduced to Europe and North America . [ 2 ] [ 3 ] Sony manufactured more than 65 models of the Watchman before its discontinuation in 2000. As further models were released after the FD-210, the display size increased and new features were introduced. The FD-3, introduced in 1987, had a built-in digital clock. The FD-30, introduced in 1984, had a built-in AM/FM stereo radio. The FD-40/42/44/45 were among the largest Watchmen, utilizing a 4" CRT display. The FD-40 introduced a single composite A/V input. The FD-45, introduced in 1986, was water-resistant. In 1988/1989, the FDL 330S color Watchman TV/Monitor with an LCD display was introduced. In 1990, the FDL-310, a Watchman with a color LCD display, was introduced. The FD-280/285, made from 1990 to 1994, was the last Watchman to use a black and white CRT display.
One of the last Watchmen was the FDL-22 introduced in 1998, which featured an ergonomic body which made it easier to hold, and introduced Sony's Straptenna , where the wrist strap served as the antenna. A model of the Sony Watchman (FD-40A) is seen multiple times in the film Rain Man .
https://en.wikipedia.org/wiki/Sony_Watchman
Rear cameras: 12 MP telephoto (Sony Exmor RS IMX650), f/2.3 at 85 mm and f/2.8 at 125 mm, 1/3.5", Dual Pixel PDAF, 3.5×/5.2× optical zoom, OIS; 12 MP ultrawide (Sony Exmor RS IMX563), f/2.2, 124˚, 16 mm, 1/2.55", Dual Pixel PDAF; 0.3 MP TOF 3D depth sensor (Sony Exmor R IMX316); Zeiss optics, HDR, eye tracking. The Sony Xperia 1 IV [ a ] is an Android smartphone manufactured by Sony . Launched on May 11, 2022, it succeeded the Xperia 1 III as the flagship of Sony's Xperia series . The device was announced along with the mid-range Xperia 10 IV , with expected release dates by June 2022 (Asian markets) and as late as September 2022 for other markets including the US . US shipments were delayed and ultimately began in late October 2022. The Xperia 1 IV is designed with professional users in mind, while refining the now-signature design of its predecessors, the Xperia 1 II and Xperia 1 III. It features a grippier matte frame and a rear frosted-glass finish akin to the Xperia PRO-I's, and a boxier design than the previous flagships. The phone has Corning Gorilla Glass Victus protection both on the front and the back, as well as IP65 and IP68 certifications for water resistance. The display retains symmetrical bezels on the top and the bottom, a hallmark Xperia design, where the front-facing dual stereo speakers and the front camera are placed. The left side of the phone is completely devoid of controls or ports, with only antenna bands present. The microSD/SIM card combo tray is now found at the bottom (or right side if held in landscape) along with the USB-C 3.2 port and the primary microphone, while the right side contains the fingerprint reader embedded into the power button, a volume rocker, and a dedicated two-stage shutter button with an embossed finish; the customisable shortcut button of the Mark 3 has been omitted. The Xperia 1 IV is also the last Xperia 1-series phone to feature an LED notification light, as the Xperia 1 V removed the feature the following year.
[ 4 ] The rear cameras are arranged in a vertical strip like its predecessor's, with the LED flash and color spectrum sensor along the top. The phone is available in three colors: Black, White, and Purple, [ 5 ] with only Black and Purple available in the North American market. [ 6 ] The Xperia 1 IV is powered by the 4 nm (4LPE) Qualcomm Snapdragon 8 Gen 1 SoC and an Adreno 730 GPU, accompanied by 12 GB of LPDDR5 RAM, 256 GB or 512 GB of storage (expandable by up to 1 TB), and a single/dual-hybrid nano- SIM card slot depending on region. The phone features the 21:9 4K CinemaWide HDR 10-bit 120 Hz OLED display first seen in the Xperia 1 III, now improved with 50% more brightness. The Xperia 1 IV's touch sampling rate is 240 Hz. The phone has a larger 5000 mAh battery (up from the 4500 mAh of the 1 III), and supports 30 W fast charging alongside Qi wireless charging with reverse wireless charging support. The phone has front-facing dual stereo speakers with redesigned drivers, and support for 360 Reality Audio . There is also a 3.5 mm stereo audio jack with support for both high-resolution audio output and microphone input for plugged-in peripherals, such as an external microphone for vlogging. [ citation needed ] The Xperia 1 IV has an improved triple camera setup over the 1 III. All three cameras are still 12 megapixels , but sport new sensors and optics for the ultrawide and telephoto. They consist of the main 12 MP Exmor RS IMX557 sensor behind a 24 mm f /1.7 lens with optical image stabilization (OIS) , an ultrawide 12 MP IMX563 sensor with a 16 mm f /2.2 lens, both of which have phase-detection autofocus, and a 0.3 MP IMX316 3D TOF depth sensor. This is also the final time the latter was included in any Sony Xperia device, as the Xperia 1 V removed the 3D TOF depth sensor and the RGBC-IR sensor as well. The highlight of the 1 IV is its continuous-zoom telephoto lens , a major improvement over its predecessor's variable-zoom telephoto.
It is a 12 MP 1/3.5" sensor with 1.0 μm pixels and PDAF, contained in the same periscope design as the 1 III's; it can zoom continuously from 85 mm all the way up to 125 mm without any stepping or digital zoom, just like a true zoom lens on a digital camera. There is no confirmed detail on the specific Sony IMX sensor used in the telephoto, other than some insights by independent reviewers such as GSMArena , [ 7 ] who discovered that it is "presumably" an IMX650, a 40 MP sensor with a 1/1.7-inch optical format that was last used on the Huawei P30 and P30 Pro smartphones. [ 8 ] Whether the phone implements the same 12 MP crop as the Xperia PRO-I on the IMX650, whether the hardware-information app HWiNFO is reporting incorrect data (which, according to Notebookcheck, seems unlikely), or whether it uses a new or unknown IMX sensor altogether remains to be seen. All three cameras of the 1 IV use ZEISS T✻ (T-Star) anti-reflective coating on each lens and support 4K video recording at up to 120 FPS and 2K at up to 120 FPS like its predecessors; the 20 FPS burst feature is improved and now available on all three cameras. Digital zoom on the main camera can reach the equivalent of 300 mm with the "AI super resolution zoom" first featured on the 1 III. It also has improved Real-time Tracking with enhanced Eye AF for humans, animals and birds, instantly locking focus on the subject's eyes without losing track upon sudden loss of focus from the frame. For the first time, a new 12 MP front-facing camera with support for 4K video recording is present in the 1 IV. Surprisingly, it is the Sony IMX663 (in place of the previous Samsung ISOCELL sensor), the same sensor that was first used as the telephoto sensor for the Xperia 1 III and the Xperia PRO-I, making it on par with the likes of Google's Pixel 6 Pro smartphone and marking another improvement over its predecessors' outdated 8 MP front cameras.
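The 3.5×/5.2× zoom figures quoted for this telephoto follow from its 85–125 mm focal range measured against the 24 mm main lens. A quick arithmetic sketch (using the 35 mm-equivalent focal lengths stated above):

```python
# Zoom factor of the continuous telephoto relative to the 24 mm main lens,
# using the 35 mm-equivalent focal lengths quoted for the Xperia 1 IV.
MAIN_MM = 24.0
TELE_RANGE_MM = (85.0, 125.0)

for tele in TELE_RANGE_MM:
    print(f"{tele:.0f} mm -> {tele / MAIN_MM:.1f}x")
# 85 mm -> 3.5x
# 125 mm -> 5.2x
```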
[ citation needed ] The Xperia 1 IV runs on Android 12 , with a promise of two major Android version updates and three years of software support. It is also equipped with three different camera apps made specifically to take advantage of the 1 IV's camera hardware: "Photo Pro", developed by Sony's α (Alpha) camera division, focuses on the full manual control setup and configuration commonly seen on Sony's Alpha line of professional cameras; "Cinema Pro" is a professional movie-oriented app developed by Sony's cinematography division CineAlta ; and the "Basic Mode" first seen on the 1 III replaces the stock camera app while adding controls from "Photo Pro". [ citation needed ]
https://en.wikipedia.org/wiki/Sony_Xperia_1_IV
The Sony Xperia 1 V [ a ] is an Android smartphone manufactured by Sony . Launched on May 11, 2023, it succeeded the Xperia 1 IV as the flagship of Sony's Xperia series . The device was announced along with the mid-range Xperia 10 V , with expected release dates by June 2023 for the Japanese and European markets and July 2023 for the US. [ 3 ] [ 4 ] The Xperia 1 V is the last Xperia to have a 4K, 21:9 CinemaWide display, as its successor, the Xperia 1 VI, opted for an LTPO FHD+ panel and ditched the 21:9 aspect ratio in favor of a more conventional 19.5:9. This mobile phone related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sony_Xperia_1_V
The Sony Xperia 1 VI is a smartphone in Sony's Xperia 1 range. The phone was released on May 15, 2024, powered by the Snapdragon 8 Gen 3 chipset and the Qualcomm Snapdragon X75 modem. The phone's display has a 19.5:9 aspect ratio with FHD+ resolution, unlike the 21:9 4K display of the previous model, and it has an upgraded camera. [ 1 ] The phone will not be released in the United States. [ 2 ] There is a dedicated shutter button . It is equipped with the following camera lenses : The Xperia 1 VI comes in Black, Platinum Silver, Khaki Green, and Scarlet Red. Its display uses a 6.50" 120 Hz LTPO OLED panel with BRAVIA screen technology, which supports the HDR BT.2020 standard. Unlike its predecessor, the Xperia 1 V , it has an FHD+ resolution instead of 4K . [ 4 ] It has a two-day battery life with a 5000 mAh lithium battery . The phone has IP65/IP68-rated dust and water protection. [ 5 ] It has 12 GB of RAM paired with 256 GB or 512 GB of storage . [ 6 ] It runs Xperia UI on Android 14 , with gaming-mode customizations. A unified camera app, replacing the previous two separate apps, has been added. The phone has a specialized music recording app. [ 7 ] The Android 15 update was released on 20 November 2024; it introduced a dedicated Video Pro mode in the camera app and carried the 1 November 2024 security patch level. [ 8 ]
https://en.wikipedia.org/wiki/Sony_Xperia_1_VI
Zeiss optics, HDR, eye tracking; 360 Reality Audio hardware decoding; LDAC . The Sony Xperia 5 IV [ a ] is an Android smartphone manufactured by Sony . Part of Sony's Xperia series, the phone was announced on September 1, 2022. The Xperia 5 IV is built similarly to the Xperia 1 IV , using anodized aluminum for the frame and Corning Gorilla Glass Victus for the screen and back panel, and carries IP65 and IP68 certifications for water resistance. The build has a pair of symmetrical bezels on the top and the bottom, where the front-facing dual stereo speakers are placed. The right side contains a fingerprint reader embedded into the power button, a volume rocker and a shutter button. The earpiece, front-facing camera, notification LED and various sensors are housed in the top bezel. The bottom edge has the primary microphone, USB-C port, and SIM/microSDXC card slot; the rear cameras are arranged in a vertical strip. The phone is available in three colors: Black, Green and White. The Xperia 5 IV is the last Xperia 5-series phone, and the last Xperia overall, to feature a notification LED; its successor, the Sony Xperia 5 V, removed the feature. The Xperia 5 IV is powered by the Qualcomm Snapdragon 8 Gen 1 SoC and an Adreno 730 GPU, accompanied by 8 GB of LPDDR5 RAM. It has 128 or 256 GB of UFS internal storage, which can be expanded by up to 1 TB via the microSD card slot in a hybrid dual-SIM setup. The display is a 6.1-inch 1080p (2520 × 1080) HDR OLED with a 21:9 aspect ratio , resulting in a pixel density of 449 ppi. It features a 120 Hz refresh rate , and is capable of displaying one billion colors. The battery capacity is 5000 mAh; USB Power Delivery 3.0 is supported at 30 W over USB-C, in addition to wireless charging. The device includes a 3.5 mm audio jack as well as an active external amplifier. The Xperia 5 IV has three 12 MP rear-facing cameras and a 12 MP front-facing camera.
The rear cameras consist of a wide-angle lens (24 mm f /1.7), an ultra wide angle lens (16 mm f /2.2), and a telephoto lens (60 mm f /2.4) with 2.5× optical zoom ; each uses ZEISS ' T✻ (T-Star) anti-reflective coating. The Xperia 5 IV runs on Android 12 . Sony has also paired the phone's camera tech with a "Pro" mode developed by Sony's camera division CineAlta , whose features take after Sony's Alpha camera lineup.
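The 449 ppi figure quoted for the 6.1-inch 2520 × 1080 panel, and the 2.5× figure for the 60 mm telephoto against the 24 mm wide-angle lens, both follow from simple arithmetic. A minimal check (diagonal pixel count divided by the diagonal in inches, and the ratio of focal lengths):

```python
import math

# Pixel density of the Xperia 5 IV's 6.1-inch 2520 x 1080 display.
width_px, height_px, diagonal_in = 2520, 1080, 6.1
ppi = math.hypot(width_px, height_px) / diagonal_in
print(round(ppi))  # 449

# Optical zoom of the 60 mm telephoto relative to the 24 mm wide-angle lens.
print(60 / 24)  # 2.5
```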
https://en.wikipedia.org/wiki/Sony_Xperia_5_IV
Hwang Woo-suk ( Korean : 황우석 , born January 29, 1953) [ 1 ] is a South Korean veterinarian and researcher. He was a professor of theriogenology and biotechnology at Seoul National University until he was dismissed on March 20, 2006. He was considered a pioneering expert in stem cell research and was even called the "Pride of Korea". [ 2 ] [ 3 ] However, he became infamous around November 2005 for fabricating a series of stem cell experiments that were published in high-profile journals, a case known as the Hwang affair . Hwang was best known for two articles published in the journal Science in 2004 and 2005, in which he reported he had succeeded in creating human embryonic stem cells by cloning . However, soon after the first paper was released, an article in the journal Nature accused Hwang of having committed ethical violations by using eggs from his graduate students and from the black market . [ 4 ] Although he denied the charges at first, Hwang admitted the allegations were true in November 2005. [ 5 ] Shortly after this, data from his human cloning experiments was revealed to have been falsified. On May 12, 2006, Hwang was charged with embezzlement and bioethics law violations after it emerged that much of his stem cell research had been faked. [ 6 ] The Korea Times reported on June 10, 2007, that Seoul National University fired him, and the South Korean government canceled his financial support and barred him from engaging in stem cell research. [ 7 ] Hwang was given a two-year suspended prison sentence at the Seoul Central District Court on 26 October 2009, after being found guilty of embezzlement and bioethical violations but cleared of fraud . [ 8 ] [ 9 ] On the same day, CNN reported that the scientist had admitted in 2006 to faking his findings after questions of impropriety emerged. [ 10 ] On 15 December 2010, an appeals court in South Korea upheld his conviction but reduced his suspended sentence by six months.
[ 11 ] In 2014, the South Korean Supreme Court upheld its 2010 ruling. [ 12 ] Since the controversy, Hwang has maintained a relatively low profile, but continues to work in scientific fields. As of September 2020, he worked at the Sooam Bioengineering Research Institute in Yongin , Gyeonggi Province , leading research efforts into creating cloned pig embryos and embryonic stem cell lines. [ 13 ] In February 2011, Hwang visited Libya as part of a US$ 133 million project in the North African country to build a stem cell research center and transfer relevant technology. The project was canceled due to the 2011 Libyan civil war . [ 14 ] In November 2015, the Chinese biotech company Boyalife Group announced that it would partner with Hwang's laboratory, Sooam Biotech , to open the world's largest animal cloning factory in Tianjin . The factory would aim to produce up to one million cattle embryos per year to meet the increasing demand for quality beef in China. [ 15 ] Hwang first caught media attention in South Korea when he announced he had successfully created a cloned dairy cow, Yeongrong-i , in February 1999. His alleged success was touted as the fifth instance of cow cloning in the world, with a notable caveat: Hwang failed to provide scientifically verifiable data for the research, giving only media sessions and photo ops . Hwang's next claim came in April 1999, when he announced the cloning of a Korean cow, Jin-i , again without providing any scientifically verifiable data. Despite the notable absence of the scientific data needed to probe the validity of the research, Hwang's claims were well received by the South Korean media and public, who were drawn to the immense economic prospects his research was said to promise. These claims earned him the Scientist of the Year Award from the Korea Science Journalists Association [ 16 ] and the Inchon Award .
Until 2004, Hwang's main area of research remained in creating genetically modified livestock that included cows and pigs. During that period, Hwang claimed to have created a BSE -resistant cow (which has not been verified), and also stated his intention to clone a Siberian tiger . In February 2004, Hwang and his team announced that they had successfully created an embryonic stem cell by the somatic cell nuclear transfer method, and published their paper in the March 12 issue of Science . [ 17 ] Although Hwang had already established himself as an expert in animal cloning and secured celebrity status in South Korea in the late 1990s, his alleged sudden success came as a surprise because this was the first reported success in human somatic cell cloning. Until Hwang's claim, it was generally agreed that creating a human stem cell by cloning was next to impossible due to the complexity of primates. Hwang explained that his team used 242 eggs to create a single cell line. In May, Nature journal published an article stating that Hwang had used eggs taken from two of his graduate students, based on an interview with one of the students. The article raised the question of whether the students might have been pressured to give eggs and thus whether such a donation would have been "voluntary" as Hwang claimed in his scientific paper. At that time, Hwang denied that he had used his students' eggs. [ 4 ] Hwang's team announced an even greater achievement a year later in May 2005, and claimed they had created 11 human embryonic stem cells using 185 eggs. His work, published in the June 17 issue of Science , [ 18 ] was instantly hailed as a breakthrough in biotechnology because the cells were allegedly created with somatic cells from patients of different age and gender, while the stem cell of 2004 was created with eggs and somatic cells from a single female donor. This meant every patient could receive custom-made treatment with no immune reactions. 
In addition, Hwang's claim meant that his team had boosted their success rate by 14 times and that this technology could be medically viable. Hwang made further headlines in May 2005 when he criticized U.S. President George W. Bush 's policy on embryonic stem cell research. Also, Time magazine named Hwang one of its "People Who Mattered 2004", stating that Hwang "has already proved that human cloning is no longer science fiction, but a fact of life." [ 19 ] Following on the earlier success, on August 3, 2005, Hwang announced that his team of researchers had become the first team to successfully clone a dog, which was independently verified through genetic testing. The dog, an Afghan Hound , was named Snuppy . Shortly after his groundbreaking 2005 work, Hwang was appointed to head the new World Stem Cell Hub , a facility that was to be the world's leading stem cell research center. However, in November 2005, Gerald Schatten , a University of Pittsburgh researcher who had worked with Hwang for two years, made the surprise announcement that he had ceased his collaboration with Hwang. In an interview, Schatten commented that "my decision is grounded solely on concerns regarding oocyte (egg) donations in Hwang's research reported in 2004." Following an intense media probe, Roh Sung-il , one of Hwang's close collaborators and head of MizMedi Women's Hospital , held a news conference on November 21. During the conference, Roh admitted that he had paid women US$1,400 each for donating their eggs which were later used in Hwang's research. Roh claimed Hwang was unaware of this, while the South Korean Ministry of Health asserted that no laws or ethical guidelines had been breached as there were no commercial interests involved. Hwang maintained that he was unaware that the eggs had been obtained via these methods, but regardless resigned from his post at the World Stem Cell Hub . 
On November 22, PD Su-cheop ( Producer's Note ), a popular MBC investigative reporting show, raised the possibility of unethical conduct in the egg cell-acquiring process. Despite the factual accuracy of the report, news media, as well as people caught up in nationalistic fervor in their unwavering support for Hwang, asserted that criticism of Hwang's work was "unpatriotic", so much so that the major companies sponsoring the show immediately withdrew their support. On November 24, Hwang held a press conference in Seoul, in which he declared his intention of resigning from most of his official posts. He also apologized for his actions [ which? ] and said, "I was blinded by work and my drive for achievement." He denied coercing his researchers into donating eggs and claimed that he found out about the situation only after it had occurred. He added that he had lied about the source of the donated eggs to protect the privacy of his female researchers, and that he was not aware of the Declaration of Helsinki , which clearly enumerates his actions as a breach of ethical conduct. After the press conference, which was aired on all major South Korean television networks, many of the nation's media outlets, government ministries, and members of the public expressed sympathy for Hwang. In mid-December, a co-author of Hwang's papers came forward, telling the media that Hwang had confessed to fabricating evidence for nine of the eleven cell lines. [ 20 ] He (Dr Roh Il-Sung) reportedly said he had doubts about the remaining two lines. [ 21 ] On December 29, 2005, the university determined that all 11 of Hwang's stem cell lines were fabricated. [ 22 ] The university announced on January 10, 2006, that Hwang's 2004 and 2005 papers in Science were both fabricated. Following the confirmation of scientific misconduct, on January 11, Science unconditionally retracted both of Hwang's papers.
[ 23 ] On January 12, 2006, Hwang held a press conference to apologize for the fiasco, but did not admit to cheating. Instead, he blamed other members of his research project for having deceived him with false data and alleged a conspiracy , saying that his projects had been sabotaged and that materials had been stolen. He said that cloning human stem cells was possible, that he had the technology to do it, and that if he were given six more months he could prove it. This extended the ten days he had asked for on December 16, 2005, to re-create the stem cells. Seoul prosecutors started a criminal investigation and raided Hwang's home that day. On January 20, 2006, Hwang maintained that two of his 11 forged stem cell lines had been maliciously switched for cells from regular, not cloned, embryos. The allegation involves the lines Hwang claims to have created at the Seoul-based MizMedi Hospital. [ 24 ] On November 22, 2016, Hwang received a certificate of patent on NT-1 technology from the Korean Intellectual Property Office. [ citation needed ] In the late 1990s, the method that scientists used in cloning was somatic cell nuclear transfer , the same procedure that was used to create Dolly the sheep . This laboratory technique begins when an egg is taken from a donor and its nucleus is removed, creating an enucleated egg. A cell, which contains DNA , is then taken from the animal being cloned. The enucleated egg is then fused with the nucleus of the cloning subject's cell using electricity. This creates an embryo , which is implanted into a surrogate mother through in vitro fertilization . If the procedure is successful, the surrogate mother gives birth to a baby that is a clone of the cloning subject at the end of a normal gestation period. In 2014, researchers were reporting cloning success rates of seven to eight out of ten, [ 25 ] but in 1996 it took 277 attempts to create Dolly.
Hwang allegedly used this technique at his laboratory in SNU to clone dogs during his experiments throughout the early 2000s. He claimed that it was possible to clone mammals and that the probability of success could be better than the 1 in 277 attempts seen in similar cases such as Dolly. Hwang was the first in the world to clone a dog, an Afghan hound called Snuppy , in 2005. He described his procedure for cloning in the journal Nature . [ 26 ] Researchers from the Seoul National University [ 27 ] and the US National Institutes of Health [ 28 ] confirmed that Snuppy was a clone. Since then Hwang and his associates have cloned many more dogs. [ 29 ] [ 30 ] [ 31 ] In 2015, it was reported that Hwang Woo-suk's company Sooam Biotech had produced 700 cloned puppies since 2005, with their owners paying about $100,000 each to have their dogs cloned. [ 31 ] [ 32 ] [ 33 ] [ 34 ] Hwang's efforts to develop better cloning techniques focused on stem cells because they are still at an early stage of development and retain the potential to turn into many different types of cell; when they divide, each new cell can either remain a stem cell or become another type of cell with a more specialized function. According to stem cell biologists, it might be possible to harness this ability to turn stem cells into a super "repair kit" for the body, theoretically using stem cells to generate healthy tissue to replace tissue either damaged by trauma or compromised by disease. The many conditions and illnesses that may eventually be treated by stem cell therapy include Parkinson's disease , Alzheimer's disease , heart disease , stroke , arthritis , diabetes , burns , and spinal cord damage. In March 2012, it was announced that Hwang would collaborate with Russian scientists in an attempt to clone a woolly mammoth from remains found in Siberia . 
[ 35 ] [ 36 ] He had previously successfully cloned eight coyotes in March 2011 using domestic dogs [ 34 ] [ 36 ] and grey wolves as surrogate mothers. [ 37 ] However, no mammoth sample fit for cloning had been found as of 2015. [ 34 ] [ 38 ] In 2015, the Chinese company BoyaLife [ 39 ] announced that, in partnership with Hwang Woo-suk's company Sooam Biotech, they were planning to build a 200 million RMB (about US$32 million) factory in Tianjin, China to produce 100,000 cloned cattle per year to supply China's growing market for quality beef, starting in 2016. [ 33 ] In 2015, Sooam Biotech cloned a male boxer puppy from a pet dog that had been dead for 12 days. This was the first time they had cloned a dog that had been dead for such a long time. [ 32 ] In 2016, Hwang's company was regularly cloning pigs which were genetically predisposed to certain diseases so that they could be used for testing pharmaceuticals, and cloning cattle which were highly valued for their meat. In total Sooam Biotech was reported to be producing roughly 500 cloned embryos a day from various species. [ 37 ] They were also reported to be attempting to clone the Ethiopian wolf , one of the world's rarest canids, of which there are only 500 in the wild; another endangered canid, the dhole , of which there are only about 2,500 adults; and the Siberian musk deer , which is classified as vulnerable by the IUCN . [ 37 ] Until late November 2005, Hwang was criticized only for unpublicized ethical violations. Colleagues and media outlets asserted that he had paid female donors for egg donations and that he had received donations from two junior researchers, both of which were violations. Later controversies centered around scientific misconduct . His team, which had cloned the first human embryo for use in research, said they had used the same technology to create batches of embryonic stem cells from nine patients. According to Hwang, the result was much more efficient than they had hoped. 
Hwang's integrity as a researcher was again put in doubt when it was revealed that PD Su-cheop had scheduled a follow-up report questioning the achievement published in Science in June 2005, in which he claimed to have cloned 11 lines of embryonic stem cells. This caused a furious backlash among many South Koreans, and the reaction only intensified when it was discovered that Kim Sun-Jong, one of Hwang's researchers from MizMedi, had been coerced by illegal means to testify against Hwang. As a result, the scheduled broadcast was canceled and the network made a public apology to the nation, everyone more or less operating under the assumption that the show was at fault and not Hwang. Yet, other news outlets began to question Hwang's claims. Close scrutiny revealed that several of the photos of purportedly different cells were in fact photos of the same cell. Hwang responded that these additional photos were accidentally included and that there was no such duplication in the original submission to Science . This was later confirmed by the journal. Researchers raised questions about striking similarities between the DNA profiles of the cloned cells. Then collaborator Gerald Schatten asked Science to remove his name from the paper, stating as a reason that there were "allegations from someone involved with the experiments that certain elements of the report may be fabricated." In the midst of national confusion, Hwang disappeared from public sight, to be hospitalized days later for alleged stress-related fatigue, while public opinion gradually began to turn against Hwang, with even the major Korean companies who had withdrawn their support from PD Su-cheop reportedly now less than pleased with him. Days later, Hwang started going to his laboratory while requesting that Seoul National University officially conduct a probe into the allegations surrounding him. 
The scandal took a dramatic turn on December 15, when Roh Sung-il, who had collaborated on the disputed paper, stated to media outlets that nine of those eleven lines had been faked; specifically, DNA tests illustrated that those nine lines shared identical DNA , implying that they had come from the same source. Roh stated that "Professor Hwang admitted to fabrication", and that he, Hwang, and another co-author had asked Science to withdraw the paper. [ 40 ] Adding fuel to the fire, MBC broadcast the content of the canceled PD Su-cheop show, which substantiated Roh's claim. On the same day, The Seattle Times reported that Science had not yet received an official request from Hwang to withdraw the paper, and it had refused to remove Schatten's name from the paper, stating, "No single author, having declared at the time of submission his full and complete confidence in the contents of the paper, can retract his name unilaterally, after publication." [ 41 ] Several prominent scientists, including Ian Wilmut , who cloned Dolly the sheep in 1996, and Bob Lanza , a cloning expert based in Worcester, Massachusetts , did call on Hwang to submit his paper to an outside group for independent analysis. Lanza noted, "You can't fake the results if they're carried out by an independent group. I think this simple test could put the charges to rest." Two major press conferences were held on Korean television networks on December 16, one with Hwang, followed by one with his former colleague, Roh Sung-il. Hwang started his press conference by claiming that the technology to make stem cells existed—not an explicit denial that the stem cell lines he used in his paper to Science were fakes. He, however, acknowledged the falsifications of research data in the paper, attributing them to unrecoverable "artificial mistakes". He said that there was a problem with the original lines caused by contamination , and if he were given ten more days he could re-create the stem cell lines. 
He accused Kim Sun-Jong, a former collaborator, of "switching" some of the stem cell lines. Despite Hwang's claim, in another press conference held only minutes later, Roh Sung-il rebutted Hwang's accusation, saying Hwang was blackmailing MizMedi and Kim Sun-jong. He maintained that at least nine of the eleven stem cell lines were fakes and that Hwang was simply untrustworthy. "Roh Sung-il, chairman of the board at Mizmedi Hospital, told KBS television that Hwang had agreed to ask the journal Science to withdraw the paper, published in June to international acclaim. Roh was one of the co-authors of the article that detailed how individual stem cell colonies were created for 11 patients through cloning. Roh also told MBC television that Hwang had pressured a former scientist at his lab to fake data to make it look like there were 11 stem cell colonies. In a separate report, a former researcher told MBC that Hwang ordered him to fabricate photos to make it appear there were 11 separate colonies from only three. [...] University of Pittsburgh researcher Gerald Schatten has already asked that Science remove him as the senior author of the report, citing questions about the paper's accuracy. Seoul National University announced this week it would conduct an internal probe into Hwang's research." [ 42 ] Some scientists have started questioning Hwang's earlier work published in Science in February 2004, in which he claimed to have cloned embryonic stem cells. Maria Biotech head Park Se-pill said, "Up until now, I have believed Hwang did derive cloned embryonic stem cells although he admitted to misconduct in his follow-up paper on patient-specific stem cells...Now, I am not sure whether the cloned stem cell really existed." [ 43 ] On July 26, 2006, Hwang said in testimony that he spent part of 500 million won in private donations in attempts to clone extinct Russian mammoths and Korean tigers. 
[ 44 ] An internal panel was set up in Seoul National University to investigate the allegations, and the probe was started on December 17, 2005. The panel sealed off Hwang's laboratory and conducted a thorough investigation, collecting testimonies from Hwang, Roh and other people who were involved with the scandal. On December 23, the panel announced its initial finding that Hwang had intentionally fabricated stem cell research results, creating nine fake cell lines out of eleven, and added that the validity of the two remaining cell lines was yet to be confirmed. The panel stated that Hwang's misconduct was "a grave act damaging the foundation of science." [ This quote needs a citation ] Hwang's claim of having used only 185 eggs to create stem cell lines was also rejected by the panel, which indicated that more eggs may have been used in the research process. The panel announced additional findings on December 29, and confirmed that no patient-matched embryonic stem cells existed, and that Hwang's team did not have the scientific data to prove any of the stem cells had ever been made. [ 45 ] In its final report published on January 10, 2006, the panel reaffirmed its previous findings while announcing additional discoveries. [ 46 ] The panel found that, contrary to Hwang's claim of having used 185 eggs for his team's 2005 paper, at least 273 eggs were shown to have been used according to research records kept in Hwang's lab. In addition, the panel discovered that Hwang's team was supplied with 2,061 eggs in the period of November 28, 2002, to December 8, 2005. Hwang's claim of not having known about the donation of eggs by his own female researchers was also rejected by the panel; in fact, it was discovered that Hwang himself had distributed egg donation consent forms to his researchers and personally escorted one to the MizMedi Hospital to perform the egg extraction procedure. 
The panel stated that Hwang's 2004 Science paper was also fabricated and concluded that the stem cell line discussed in the paper may have been generated by parthenogenesis (which would itself be a significant development, as mammals rarely reproduce by parthenogenesis; in addition, this would make Hwang's lab the first ever to successfully generate human stem cells via parthenogenesis, predating other research facilities' successes). [ 47 ] Although Hwang's team did not rule out the possibility of parthenogenesis in the paper, the panel said, the team made no conscientious effort to probe the possibility through the tests available. Chung Myunghee, the head of the panel, said at a news conference that the panel was not in a position to investigate Hwang's claim of his stem cells having been switched with MizMedi's, but added that such a claim was incomprehensible when there were no data to prove any of the stem cells were ever made to begin with. However, the panel confirmed that Hwang's team had actually succeeded in cloning a dog they named Snuppy , [ 48 ] as analyses of 27 markers that allowed distinguishing among extremely inbred animals, along with mitochondrial DNA sequencing, indicated that Snuppy was a somatic cell clone of Tie (the dog that provided the somatic cells for Snuppy, which were then inserted into eggs whose nuclei had been removed and carried to term by surrogate mothers), [ 46 ] making Snuppy the first ever dog to be cloned. The panel, in conclusion, stated that Hwang's team intentionally fabricated the data in both the 2004 and the 2005 papers, as described by Myung Hee Chung (head of Seoul National University's investigation), and that it was an act of "deception of the scientific community and the public at large". [ 49 ] On December 23, 2005, Hwang apologized for "creating a shock and a disappointment" and announced that he was resigning his position as professor at the university. 
[ 50 ] However, Hwang maintained that patient-matched stem cell technology remained in South Korea, and his countrymen would see it. Seoul National University said Hwang's resignation request would not be accepted, citing a university regulation that dictates that an employee under investigation may not resign from a post, thus avoiding full retribution and possibly dismissal if found at fault, while benefiting from an honorable voluntary resignation. On February 9, 2006, the university suspended Hwang's position as a professor, together with six other faculty members who participated in Hwang's team; [ 51 ] Hwang was dismissed on March 20, 2006. On May 12, 2006, Hwang was indicted on charges of embezzlement and breach of the country's bioethics law, without physical detention . Prosecutors also brought fraud charges against the three stem cell researchers. He embezzled 2.8 billion won (US$3 million) out of some 40 billion won in research funds, for personal purposes and the illegal purchase of ova used in his experiments. The prosecution also said Hwang's three associates involved in his stem cell research, Yoon Hyun-soo , Lee Byeong-chun and Kang Sung-keun , also misappropriated tens of millions of won in research money. Investigators have been tracking 24.6 billion won to find out how the research money was spent. It was part of Hwang's 36.9 billion won research funds raised through state support and private donations. Investigators said Hwang used bank accounts held by relatives and subordinates in 2002 and 2003 to receive about 475 million won from private organizations. He allegedly laundered the money by withdrawing it all in cash, breaking it up into smaller amounts and putting it back in various bank accounts. Hwang also withdrew 140 million won in August 2001 to buy gifts for his sponsors, including politicians and other prominent social figures, before Chusok holidays, according to prosecutors. 
He also allegedly misappropriated around 26 million won in research funds in September 2004 to buy a car for his wife. Hwang is suspected of embezzling 600 million won, provided by a private foundation, on multiple occasions from 2001 to 2005 for personal use. Prosecutors are also accusing him of illegally paying some 38 million won to 25 women who provided ova for his research through Hanna Women's Clinic in the first eight months of 2005. They also said Hwang gave several dozen politicians about 55 million won in political funds on numerous occasions from 2001 to 2005. He allegedly provided 14 million won to executives of large companies that provided financial support for his research. The prosecution added Hwang wired about 200 million won to a Korean American, identified only as Kang, in September 2005 and received the equivalent amount in U.S. currency from him when the scientist visited the United States two months later. Also in 2005, Hwang received one billion won each in research funds from SK Group and the National Agricultural Cooperative Federation based on his fabricated stem cell research results. Meanwhile, investigators said Lee Byeong-chun and Kang Sung-keun, both professors of veterinary science at Seoul National University, embezzled about 300 million won and 100 million won each in state funds by inflating research-related expenses. Yoon Hyun-soo, a biology professor at Hanyang University , also embezzled 58 million won from the research fund managed by MizMedi Hospital . [ 52 ] On August 2, 2007, after much independent investigation, it was revealed that Hwang's team succeeded in extracting cells from eggs that had undergone parthenogenesis . Hwang claimed he and his team had extracted stem cells from cloned human embryos. However, further examination of the cells' chromosomes shows the same indicators of parthenogenesis in those extracted stem cells as are found in the mice created by Tokyo scientists in 2004. 
Although Hwang deceived the world about being the first to create artificially cloned human embryos, he did contribute a major breakthrough to the field of stem cell research. The process may offer a way for creating stem cells that are genetically matched to a particular woman for the treatment of degenerative diseases. [ 53 ] The news of the breakthrough came just a month after an announcement from the International Stem Cell Corporation (ISC), a California-based stem cell research company, that they had successfully created the first human embryos through parthenogenesis. Although the actual results of Hwang's work were just published, those embryos were created by him and his team before February 2004, when the fabricated cloning results were announced, which would make them the first to successfully perform the process. Jeffrey Janus, president and director of research for ISC, agrees that "Dr. Hwang's cells have characteristics found in parthenogenetic cells" but remains cautious, saying "it needs more study." [ 54 ] After having acquired a celebrity status in South Korea, Hwang actively sought to establish every possible tie to political and economic institutions in the country. Hwang especially tried to win favor from the Roh Moo-hyun government, which in turn was suffering from a lack of popular support and wanted to demonstrate its competency by creating and promoting an exemplary policy success. Hwang approached Park Ki-young, a former biology professor, then appointed as the Information, Science and Technology Advisor for the President, and put her as one of the co-authors in his 2004 Science paper. Ties with Park yielded a favorable environment for Hwang in the government, as a non-official group consisting of high-ranking government officials was created to support Hwang's research that includes not only Hwang and Park, but also Kim Byung-joon, Chief National Policy Secretary, and Jin Dae-je, Information and Communications Minister. 
The group was dubbed " Hwang-kum-pak-chui ", a loose acronym made from the members' family names, which means "golden bat " in Korean. After Hwang's paper was published in Science in 2005, support for Hwang came in full swing. In June 2005, the Ministry of Science and Technology selected Hwang as the first recipient of the title Supreme Scientist , an honor worth US$15 million. [ 55 ] Hwang, having already claimed the title of POSCO Chair Professor worth US$1.5 million, secured more than US$27 million worth of support in that year. [ 56 ] President Roh had been acquainted with Hwang since 2003, and made a number of comments intended to protect him from potential bioethical issues. On June 18, 2004, Roh awarded Hwang a medal and said, "it is not possible nor desirable to prohibit research, just because there are concerns that it may lead to a direction that is deemed unethical." In another instance, at the opening of the World Stem Cell Hub on October 19, 2005, Roh remarked, "politicians have a responsibility to manage bioethical controversies, not to get in the way of this outstanding research and progress." [ 57 ] On December 5, 2005, after PD Su-cheop stirred a national controversy, Cheong Wa Dae reaffirmed its unflinching support for Hwang and his research team. Roh said, "We'll continue to support Professor Hwang. We hope he will return to his research lab soon for the sake of people with physical difficulties and the public", according to presidential spokesman Kim Man-soo. Alluding to the controversies over MBC-TV's forceful methods of gathering information from Hwang's former junior staff members, Roh said, "The disputes will be resolved gradually and naturally through following scientific research and study. We hope the ongoing disputes over Hwang's achievement will be settled without further trouble." 
[ 58 ] It was alleged that advisor Park Ki-young deliberately avoided reporting to Roh the details of the misconduct allegations against Hwang, while emphasizing MBC's breach of journalistic ethics. Park, after weeks of silence about her role in the controversy, announced her intent to resign from the advisor post on January 10, 2006. On January 11, 2006, the national post office stopped selling post stamps commemorating Hwang's research. The title of Supreme Scientist awarded to Hwang was revoked on March 21, 2006, after Hwang was dismissed from Seoul National University the day before. On December 6, 2005, a group of 43 lawmakers from the ruling and opposition parties inaugurated a body to support Hwang Woo-suk. Members of the group, dubbed the "lawmakers' group supporting Professor Hwang Woo-suk", pledged to help Hwang continue his experiments in pursuit of a scientific breakthrough. "There are many lawmakers who, regardless of party affiliation, want to support Hwang. We will join forces to help Hwang devote himself to his studies", said Rep. Kwon Sun-taik of the ruling Uri Party, the leader of the group, in a news conference at the National Assembly. He said the group would seek to establish bioethics guidelines and come up with supporting measures for biotechnology researchers in the country. Among those who had joined the group were Reps. Kim Hyuk-kyu, Kim Young-choon and Kim Sung-gon of the ruling party, Kim Hyong-o of the main opposition Grand National Party (GNP) and Kim Hak-won, chairman of the United Liberal Democrats . Some female lawmakers participated in a civic group for voluntary egg donations for therapeutic research, which opened in November 2005 following the egg procurement scandal. Reps. Song Young-sun and Chin Soo-hee of the GNP said they would provide their eggs to Hwang's research team. 
Meanwhile, the ruling and opposition parties called on the Korean Broadcasting Commission to thoroughly investigate the staffers of MBC's PD Note , which had broadcast a documentary program critical of Hwang using coercive interview tactics, and to reprimand them. [ 59 ] After most of Hwang's claims were confirmed to be false on January 10, 2006, some lawmakers revealed that Hwang had made several campaign donations to them and other lawmakers. [ 60 ] The MBC investigative journalism show PD Note (Korean: PD수첩) returned on air on January 3, 2006, and summarized the course of Hwang's scandal to date. The show had been cancelled under pressure after its November 22 broadcast accusing Hwang of irregularities in his research. The last show in 2005, aired on November 29, covered other topics. It remained off the air for five weeks. The second show in 2006, on January 10, dealt further with the Hwang affair, focusing on several instances of Hwang's media spinning tactics. It also covered the unwillingness of a significant part of the public in South Korea to believe that someone who had almost achieved the status of a national hero had committed such a disgraceful act. The same day, many South Korean citizens rallied outside Hwang's laboratory, as more than 1,000 women pledged to donate their eggs for the scientist's research. [ ... ] Hwang had been in seclusion since apologizing in November 2005 for ethical lapses in human egg procurement for his research. The symbolic event was a gesture from Hwang's supporters, 1,000 of whose members had pledged online via their website to donate their eggs. "Dr. Hwang will not be able to return to the lab, at least, until at the end of this week because he is extremely exhausted, mentally and physically", a key team member, Ahn Cu Rie, wrote in an e-mail to Reuters . [ ... 
] At Hwang's lab at Seoul National University, women left bouquets of the national flower, a hibiscus called the Rose of Sharon , for the scientist along with notes of encouragement. The stem cell research center that Hwang led before resigning said it hoped he would return, even though his lapses could hurt its efforts to work with other research institutions. "So far more than 700 South Korean women have pledged to donate their eggs and the number is steadily rising", said Lee Sun-min, an official at a private foundation launched last week to promote egg donations. [ ... ] Thousands of patients have applied to participate in the research, hoping the technology could help treat damaged spinal cords or diseases such as Parkinson's. On Tuesday, an official at the lab said it was hoped that Hwang would return. "We're waiting for Hwang to assume the leadership after some rest", Seong Myong-hoon told a news conference. But Seong said the controversy could hurt the lab. That conclusion was reached after one of Hwang's close research partners, Ahn Cu-rie, returned Tuesday after a 10-day trip to meet with scientists in the United States and Japan, Seong said. "The reaction of foreign scientists was that they understand what Dr. Hwang disclosed, but they cannot accept that without criticism", Seong said. "We can never be optimistic about cooperation with foreign institutions." Seong added: "Researchers of our country were newly awakened to the fact that we have to take every precaution to ensure we don't fall behind international ethics (guidelines) while researching." [ 61 ] "The only hope for us is Dr. Hwang. Don't trample on our one shred of hope", a woman whose son suffers from a severe kidney ailment told South Korean broadcaster YTN at the university. The woman also pledged to sell her eggs to Hwang. 
A website backed by Hwang's supporters began taking egg-donation pledges online in late November 2005, after Hwang resigned all his official posts at the World Stem Cell Hub , relaying them to a clinic linked to Hwang's research team. The number of pledges had reached 725 by early December 2005. [ citation needed ] Banners like "Please come back, Doctor Hwang. I'm already dying to see you, Professor Hwang" were put up on the home page . [ 62 ] The site also carried a photo of Hwang and his cloned dog, Snuppy, trimmed with images of the rose of Sharon , South Korea's national flower, in an apparent appeal for patriotism . The national anthem played as background music. Those who applied to donate ova included people with incurable illnesses and their family members, who hoped that Hwang's research would eventually lead to cures, as well as young, healthy women. In June 2023, Netflix released a documentary film, King of Clones , which covered Hwang Woo-suk and the Hwang affair .
https://en.wikipedia.org/wiki/Sooam_Biotech_Research_Foundation
Sooraj Surendran is an Indian technologist and electronic engineering graduate from Anna University . [ 2 ] He has made significant contributions to motorized wheelchair deployment in Tamil Nadu. [ 3 ] Surendran was born in Kollam, Kerala , India. [ 2 ] His mother Sudha was a housewife and his father was K Surendran Pillai. [ 4 ] He completed his schooling at Sree Buddha, a Central Board of Secondary Education school in Karunagappalli, Kerala . [ citation needed ] He graduated from Anna University with a BTech in electronic engineering in 2011. He worked on motorized wheelchair design and nursing care bed electronic unit design, and developed an electronic system for nursing care beds that integrated Bluetooth technology to control the functions of a nursing care bed via an Android application. [ 5 ] Surendran was invited to help develop electronic control units for lightweight motorized wheelchairs as part of a Tamil Nadu program to distribute motorized wheelchairs to 2,000 people. [ 6 ] [ 5 ]
https://en.wikipedia.org/wiki/Sooraj_Surendran
In mathematics, Sophie Germain's identity is a polynomial factorization named after Sophie Germain stating that {\displaystyle {\begin{aligned}x^{4}+4y^{4}&={\bigl (}(x+y)^{2}+y^{2}{\bigr )}\cdot {\bigl (}(x-y)^{2}+y^{2}{\bigr )}\\&=(x^{2}+2xy+2y^{2})\cdot (x^{2}-2xy+2y^{2}).\end{aligned}}} Beyond its use in elementary algebra , it can also be used in number theory to factorize integers of the special form {\displaystyle x^{4}+4y^{4}} , and it frequently forms the basis of problems in mathematics competitions . [ 1 ] [ 2 ] [ 3 ] Although the identity has been attributed to Sophie Germain, it does not appear in her works. Instead, in her works one can find the related identity [ 4 ] [ 5 ] {\displaystyle {\begin{aligned}x^{4}+y^{4}&=(x^{2}-y^{2})^{2}+2(xy)^{2}\\&=(x^{2}+y^{2})^{2}-2(xy)^{2}.\end{aligned}}} Modifying this equation by multiplying {\displaystyle y} by {\displaystyle {\sqrt {2}}} gives {\displaystyle x^{4}+4y^{4}=(x^{2}+2y^{2})^{2}-4(xy)^{2},} a difference of two squares , from which Germain's identity follows. [ 5 ] The inaccurate attribution of this identity to Germain was made by Leonard Eugene Dickson in his History of the Theory of Numbers , which also stated (equally inaccurately) that it could be found in a letter from Leonhard Euler to Christian Goldbach . [ 5 ] [ 6 ] The identity can be proven simply by multiplying the two terms of the factorization together, and verifying that their product equals the right-hand side of the equality. [ 7 ] A proof without words is also possible based on multiple applications of the Pythagorean theorem . 
[ 1 ] One consequence of Germain's identity is that the numbers of the form {\displaystyle n^{4}+4^{n}} cannot be prime for {\displaystyle n>1} . (For {\displaystyle n=1} , the result is the prime number 5.) They are obviously not prime if {\displaystyle n} is even, and if {\displaystyle n} is odd they have a factorization given by the identity with {\displaystyle x=n} and {\displaystyle y=2^{(n-1)/2}} . [ 3 ] [ 7 ] These numbers (starting with {\displaystyle n=0} ) form an integer sequence. Many of the appearances of Sophie Germain's identity in mathematics competitions come from this corollary of it. [ 2 ] [ 3 ] Another special case of the identity with {\displaystyle x=1} and {\displaystyle y=2^{k}} can be used to produce the factorization {\displaystyle {\begin{aligned}\Phi _{4}(2^{2k+1})&=2^{4k+2}+1\\&=(2^{2k+1}-2^{k+1}+1)\cdot (2^{2k+1}+2^{k+1}+1),\end{aligned}}} where {\displaystyle \Phi _{4}(x)=x^{2}+1} is the fourth cyclotomic polynomial . As with the cyclotomic polynomials more generally, {\displaystyle \Phi _{4}} is an irreducible polynomial , so this factorization of infinitely many of its values cannot be extended to a factorization of {\displaystyle \Phi _{4}} as a polynomial, making this an example of an aurifeuillean factorization . [ 8 ] Germain's identity has been generalized to the functional equation {\displaystyle f(x)^{2}+4f(y)^{2}={\bigl (}f(x+y)+f(y){\bigr )}{\bigl (}f(x-y)+f(y){\bigr )},} which by Sophie Germain's identity is satisfied by the square function . [ 4 ]
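Both the identity and this corollary are easy to sanity-check by direct computation. The short Python sketch below (illustrative only; the helper name `germain_factors` is ours, not from the article) verifies the factorization by brute force on a grid of integers and then uses it, with x = n and y = 2^((n−1)/2), to exhibit a nontrivial factorization of n^4 + 4^n for odd n > 1:

```python
# Sophie Germain's identity:
#   x^4 + 4*y^4 = (x^2 + 2*x*y + 2*y^2) * (x^2 - 2*x*y + 2*y^2)
# First, verify it by brute force over a grid of integers.
for x in range(-20, 21):
    for y in range(-20, 21):
        lhs = x**4 + 4 * y**4
        rhs = (x**2 + 2*x*y + 2*y**2) * (x**2 - 2*x*y + 2*y**2)
        assert lhs == rhs, (x, y)

def germain_factors(n):
    """For odd n, factor n**4 + 4**n via the identity with
    x = n and y = 2**((n - 1) // 2), so that 4*y**4 == 4**n."""
    x, y = n, 2 ** ((n - 1) // 2)
    return x*x + 2*x*y + 2*y*y, x*x - 2*x*y + 2*y*y

# For odd n > 1 both factors exceed 1, so n^4 + 4^n is composite.
for n in range(3, 40, 2):
    a, b = germain_factors(n)
    assert a * b == n**4 + 4**n
    assert a > 1 and b > 1
```

For example, `germain_factors(3)` returns `(29, 5)`, matching 3^4 + 4^3 = 145 = 29 × 5.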
https://en.wikipedia.org/wiki/Sophie_Germain's_identity
In number theory , Sophie Germain's theorem is a statement about the divisibility of solutions to the equation x p + y p = z p {\displaystyle x^{p}+y^{p}=z^{p}} of Fermat's Last Theorem for odd prime p {\displaystyle p} . Specifically, Sophie Germain proved that at least one of the numbers x {\displaystyle x} , y {\displaystyle y} , z {\displaystyle z} must be divisible by p 2 {\displaystyle p^{2}} if an auxiliary prime q {\displaystyle q} can be found such that two conditions are satisfied: first, if x p + y p + z p ≡ 0 ( mod q ) {\displaystyle x^{p}+y^{p}+z^{p}\equiv 0{\pmod {q}}} , then q {\displaystyle q} divides x y z {\displaystyle xyz} ; and second, p {\displaystyle p} is not itself a p {\displaystyle p} th power modulo q {\displaystyle q} . Conversely, the first case of Fermat's Last Theorem (the case in which p {\displaystyle p} does not divide x y z {\displaystyle xyz} ) must hold for every prime p {\displaystyle p} for which even one auxiliary prime can be found. Germain identified such an auxiliary prime q {\displaystyle q} for every prime less than 100. The theorem and its application to primes p {\displaystyle p} less than 100 were attributed to Germain by Adrien-Marie Legendre in 1823. [ 1 ] The auxiliary prime q {\displaystyle q} has no direct connection to Fermat's equation itself; it serves only as a tool in the proof of the divisibility criterion. In particular, Germain's theorem establishes only the first case of Fermat's Last Theorem for the primes it covers, and her method does not extend to a proof of the theorem in general.
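The two auxiliary-prime conditions can be tested by brute force for small primes. The sketch below is illustrative (it is not Germain's or Legendre's presentation); it searches candidate primes of the form q = 2Np + 1 and checks directly that no three nonzero p-th power residues sum to zero mod q and that p is not a p-th power residue mod q:

```python
# Brute-force check of the two auxiliary-prime conditions in Sophie Germain's
# theorem: for an odd prime p and a candidate prime q,
#   (1) x^p + y^p + z^p ≡ 0 (mod q) forces q | xyz, and
#   (2) p is not a p-th power residue modulo q.
# Illustrative sketch; q is searched among primes of the form 2*N*p + 1.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_auxiliary(p, q):
    residues = {pow(a, p, q) for a in range(1, q)}  # nonzero p-th powers mod q
    # Condition (2): p itself must not be a p-th power residue mod q.
    if p % q in residues:
        return False
    # Condition (1): no three nonzero p-th power residues may sum to 0 mod q.
    for r in residues:
        for s in residues:
            if (-r - s) % q in residues:
                return False
    return True

def find_auxiliary(p, max_n=50):
    for n in range(1, max_n + 1):
        q = 2 * n * p + 1
        if is_prime(q) and is_auxiliary(p, q):
            return q
    return None

print(find_auxiliary(3))  # → 7  (q = 7 works for p = 3)
print(find_auxiliary(5))  # → 11 (q = 11 works for p = 5)
```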
https://en.wikipedia.org/wiki/Sophie_Germain's_theorem
In algorithmic information theory , sophistication is a measure of complexity related to algorithmic entropy . When K is the Kolmogorov complexity and c is a constant, the sophistication of x can be defined as [ 1 ] The constant c is called significance . The S variable ranges over finite sets. Intuitively, sophistication measures the complexity of a set of which the object is a "generic" member. This theoretical computer science –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sophistication_(complexity_theory)
In mathematics, the sophomore's dream is the pair of identities (especially the first) ∫ 0 1 x − x d x = ∑ n = 1 ∞ n − n ∫ 0 1 x x d x = ∑ n = 1 ∞ ( − 1 ) n + 1 n − n = − ∑ n = 1 ∞ ( − n ) − n {\displaystyle {\begin{alignedat}{2}&\int _{0}^{1}x^{-x}\,dx&&=\sum _{n=1}^{\infty }n^{-n}\\&\int _{0}^{1}x^{x}\,dx&&=\sum _{n=1}^{\infty }(-1)^{n+1}n^{-n}=-\sum _{n=1}^{\infty }(-n)^{-n}\end{alignedat}}} discovered in 1697 by Johann Bernoulli . The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively. The name "sophomore's dream" [ 1 ] is in contrast to the name " freshman's dream " which is given to the incorrect [ note 1 ] identity ( x + y ) n = x n + y n {\textstyle (x+y)^{n}=x^{n}+y^{n}} . The sophomore's dream has a similar too-good-to-be-true feel, but is true. The proofs of the two identities are completely analogous, so only the proof of the second is presented here. The key ingredients of the proof are the expansion of x x {\textstyle x^{x}} as exp ⁡ ( x log ⁡ x ) {\textstyle \exp(x\log x)} , termwise integration of the resulting power series, and evaluation of the termwise integrals via the Gamma function . In detail, x x can be expanded as x x = exp ⁡ ( x log ⁡ x ) = ∑ n = 0 ∞ x n ( log ⁡ x ) n n ! . {\displaystyle x^{x}=\exp(x\log x)=\sum _{n=0}^{\infty }{\frac {x^{n}(\log x)^{n}}{n!}}.} Therefore, ∫ 0 1 x x d x = ∫ 0 1 ∑ n = 0 ∞ x n ( log ⁡ x ) n n ! d x . {\displaystyle \int _{0}^{1}x^{x}\,dx=\int _{0}^{1}\sum _{n=0}^{\infty }{\frac {x^{n}(\log x)^{n}}{n!}}\,dx.} By uniform convergence of the power series, one may interchange summation and integration to yield ∫ 0 1 x x d x = ∑ n = 0 ∞ ∫ 0 1 x n ( log ⁡ x ) n n ! d x . {\displaystyle \int _{0}^{1}x^{x}\,dx=\sum _{n=0}^{\infty }\int _{0}^{1}{\frac {x^{n}(\log x)^{n}}{n!}}\,dx.} To evaluate the above integrals, one may change the variable in the integral via the substitution x = exp ⁡ ( − u n + 1 ) . {\textstyle x=\exp(-{\frac {u}{n+1}}).} With this substitution, the bounds of integration are transformed to 0 < u < ∞ , {\displaystyle 0<u<\infty ,} giving the identity ∫ 0 1 x n ( log ⁡ x ) n d x = ( − 1 ) n ( n + 1 ) − ( n + 1 ) ∫ 0 ∞ u n e − u d u . 
{\displaystyle \int _{0}^{1}x^{n}(\log x)^{n}\,dx=(-1)^{n}(n+1)^{-(n+1)}\int _{0}^{\infty }u^{n}e^{-u}\,du.} By Euler's integral identity for the Gamma function , one has ∫ 0 ∞ u n e − u d u = n ! , {\displaystyle \int _{0}^{\infty }u^{n}e^{-u}\,du=n!,} so that ∫ 0 1 x n ( log ⁡ x ) n n ! d x = ( − 1 ) n ( n + 1 ) − ( n + 1 ) . {\displaystyle \int _{0}^{1}{\frac {x^{n}(\log x)^{n}}{n!}}\,dx=(-1)^{n}(n+1)^{-(n+1)}.} Summing these (and changing indexing so it starts at n = 1 instead of n = 0 ) yields the formula. The original proof, given in Bernoulli, [ 2 ] and presented in modernized form in Dunham, [ 3 ] differs from the one above in how the termwise integral ∫ 0 1 x n ( log ⁡ x ) n d x {\textstyle \int _{0}^{1}x^{n}(\log x)^{n}\,dx} is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms. The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration + C {\displaystyle +C} both because this was done historically, and because it drops out when computing the definite integral. 
Integrating ∫ x m ( log ⁡ x ) n d x {\textstyle \int x^{m}(\log x)^{n}\,dx} by substituting u = ( log ⁡ x ) n {\textstyle u=(\log x)^{n}} and d v = x m d x {\textstyle dv=x^{m}\,dx} yields: ∫ x m ( log ⁡ x ) n d x = x m + 1 ( log ⁡ x ) n m + 1 − n m + 1 ∫ x m + 1 ( log ⁡ x ) n − 1 x d x (for m ≠ − 1 ) = x m + 1 m + 1 ( log ⁡ x ) n − n m + 1 ∫ x m ( log ⁡ x ) n − 1 d x (for m ≠ − 1 ) {\displaystyle {\begin{aligned}\int x^{m}(\log x)^{n}\,dx&={\frac {x^{m+1}(\log x)^{n}}{m+1}}-{\frac {n}{m+1}}\int x^{m+1}{\frac {(\log x)^{n-1}}{x}}\,dx\qquad {\text{(for }}m\neq -1{\text{)}}\\&={\frac {x^{m+1}}{m+1}}(\log x)^{n}-{\frac {n}{m+1}}\int x^{m}(\log x)^{n-1}\,dx\qquad {\text{(for }}m\neq -1{\text{)}}\end{aligned}}} (also in the list of integrals of logarithmic functions ). This reduces the power on the logarithm in the integrand by 1 (from n {\displaystyle n} to n − 1 {\displaystyle n-1} ) and thus one can compute the integral inductively , as ∫ x m ( log ⁡ x ) n d x = x m + 1 m + 1 ⋅ ∑ i = 0 n ( − 1 ) i ( n ) i ( m + 1 ) i ( log ⁡ x ) n − i {\displaystyle \int x^{m}(\log x)^{n}\,dx={\frac {x^{m+1}}{m+1}}\cdot \sum _{i=0}^{n}(-1)^{i}{\frac {(n)_{i}}{(m+1)^{i}}}(\log x)^{n-i}} where ( n ) i {\textstyle (n)_{i}} denotes the falling factorial ; there is a finite sum because the induction stops at 0, since n is an integer. In this case m = n {\textstyle m=n} , and they are integers, so ∫ x n ( log ⁡ x ) n d x = x n + 1 n + 1 ⋅ ∑ i = 0 n ( − 1 ) i ( n ) i ( n + 1 ) i ( log ⁡ x ) n − i . {\displaystyle \int x^{n}(\log x)^{n}\,dx={\frac {x^{n+1}}{n+1}}\cdot \sum _{i=0}^{n}(-1)^{i}{\frac {(n)_{i}}{(n+1)^{i}}}(\log x)^{n-i}.} Integrating from 0 to 1, all the terms vanish except the last term at 1, [ note 2 ] which yields: ∫ 0 1 x n ( log ⁡ x ) n n ! d x = 1 n ! 1 n + 1 n + 1 ( − 1 ) n ( n ) n ( n + 1 ) n = ( − 1 ) n ( n + 1 ) − ( n + 1 ) . 
{\displaystyle \int _{0}^{1}{\frac {x^{n}(\log x)^{n}}{n!}}\,dx={\frac {1}{n!}}{\frac {1^{n+1}}{n+1}}(-1)^{n}{\frac {(n)_{n}}{(n+1)^{n}}}=(-1)^{n}(n+1)^{-(n+1)}.} This is equivalent to computing Euler's integral identity Γ ( n + 1 ) = n ! {\displaystyle \Gamma (n+1)=n!} for the Gamma function on a different domain (corresponding to changing variables by substitution), as Euler's identity itself can also be computed via an analogous integration by parts.
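The identities are easy to corroborate numerically. The following sketch (illustrative only, using a simple midpoint quadrature rather than anything from the sources) compares a numerical estimate of the integral of x^x on (0, 1) with the rapidly converging alternating series:

```python
# Numerically corroborate the second sophomore's-dream identity:
#   ∫₀¹ x^x dx = Σ_{n≥1} (-1)^{n+1} n^{-n}
# using a midpoint-rule quadrature and a truncated series.

# Midpoint rule for ∫₀¹ x^x dx (x^x → 1 as x → 0⁺, so the integrand is tame).
N = 200_000
integral = sum(((i + 0.5) / N) ** ((i + 0.5) / N) for i in range(N)) / N

# The series converges extremely fast: ten terms give ~10 significant digits.
series = sum((-1) ** (n + 1) * n ** (-float(n)) for n in range(1, 11))

print(f"{integral:.8f}")  # ≈ 0.78343051
print(f"{series:.8f}")    # ≈ 0.78343051
assert abs(integral - series) < 1e-6
```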
https://en.wikipedia.org/wiki/Sophomore's_dream
A sophorolipid is a surface-active glycolipid compound that can be synthesized by a selected number of non-pathogenic yeast species. [ 1 ] They are potential bio-surfactants due to their biodegradability and low eco-toxicity. Sophorolipids are glycolipids consisting of a hydrophobic fatty acid tail of 16 or 18 carbon atoms and a hydrophilic carbohydrate head, sophorose , a glucose-derived disaccharide with an unusual β-1,2 bond that can be acetylated on the 6′- and/or 6′′-positions. One terminal or subterminal hydroxylated fatty acid is β-glycosidically linked to the sophorose module. The carboxylic end of this fatty acid is either free (acidic or open form) or internally esterified at the 4′′- or in some rare cases at the 6′- or 6′′-position (lactonic form). [ 2 ] The physicochemical and biological properties of sophorolipids are significantly influenced by the distribution of the lactone vs. acidic forms produced in the fermentative broth. In general, lactone sophorolipids are more efficient in reducing surface tension and are better antimicrobial agents, whereas acidic sophorolipids display better foaming properties. Acetyl groups can also lower the hydrophilicity of sophorolipids and enhance their antiviral and cytokine stimulating effects. [ 3 ] Sophorolipids are produced by various non-pathogenic yeast species such as Candida apicola , Rhodotorula bogoriensis , [ 5 ] Wickerhamiella domercqiae , [ 6 ] and Starmerella bombicola . [ 7 ] [ 8 ] Recent research has shown that sophorolipids can be recovered during a fermentation using a gravity separator in a loop with the bioreactor, enabling the production of >770 g/L sophorolipid at a productivity of 4.24 g/L/h, some of the highest values seen in a fermentation process. [ 9 ] Desirable properties of biosurfactants are biodegradability and low toxicity. 
[ 10 ] [ 11 ] Examples include sophorolipids, produced by several yeasts belonging to Candida and the Starmerella clade, [ 12 ] [ 13 ] and rhamnolipids , produced by Pseudomonas aeruginosa . [ 14 ] Besides biodegradability, low toxicity, and high production potential, sophorolipids have a high surface and interfacial activity. Sophorolipids are reported to lower the surface tension (ST) of water from 72 to 30-35 mN/m and the interfacial tension (IT) of water/hexadecane from 40 to 1 mN/m. [ 15 ] In addition, sophorolipids are reported to function under wide ranges of temperatures, pressures and ionic strengths, and they also possess a number of other useful biological activities, including antimicrobial, [ 5 ] virucidal, [ 3 ] anticancer, and immunomodulatory properties. [ 5 ] A detailed and comprehensive literature review on the various aspects of sophorolipid production (e.g. producing micro-organisms, biosynthetic pathway, effects of medium components and other fermentation conditions, and downstream processing of sophorolipids) is available in the published work of Van Bogaert et al. [ 5 ] [ 16 ] This work also discusses potential applications of sophorolipids (and their derivatives) as well as the potential for genetically engineering strains to enhance sophorolipid yields. Researchers have focused on optimization of sophorolipid production in submerged fermentation, [ 17 ] [ 18 ] but some efforts have also investigated the possibility of sophorolipid production using solid state fermentation (SSF). [ 4 ] The production process can be significantly impacted by the specific properties of the carbon and oil substrates used, and several inexpensive alternatives to more traditional substrates have been investigated. These potential substrates include: biodiesel by-product streams, [ 19 ] waste frying oil, [ 20 ] [ 21 ] restaurant waste oil, [ 22 ] industrial fatty acid residues, [ 23 ] mango seed fat, [ 24 ] and soybean dark oil. 
The use of most of these substrates has resulted in lower yields compared to traditional fermentation substrates. To enhance the surfactant properties of natural sophorolipids, chemical modification methods have been actively pursued. [ 25 ] Recently, researchers demonstrated the possibility of applying sophorolipids as building blocks via ring-opening metathesis polymerization for a new type of polymer, known as polysophorolipids, which shows promising potential in biomaterials applications. [ 26 ]
https://en.wikipedia.org/wiki/Sophorolipid
Sophus Mads Jørgensen (4 July 1837 – 1 April 1914) was a Danish chemist. He is considered one of the founders of coordination chemistry , mainly by being one of the pioneers of chain theory , and is known for the debates which he had with Alfred Werner during 1893–1899. While Jørgensen's theories on coordination chemistry were ultimately proven to be incorrect, his experimental work provided much of the basis for Werner's theories. Jørgensen also made major contributions to the chemistry of platinum and rhodium compounds. Jørgensen was a board member of the Carlsberg Foundation from 1885 until his death in 1914, and was elected a member of the Royal Swedish Academy of Sciences in 1899. His son, Ove Jørgensen , became a classical scholar and later an authority on ballet, and co-edited Jørgensen's posthumously-published monograph, Det kemiske Syrebegrebs Udviklingshistorie indtil 1830 ( Development History of the Chemical Concept of Acid until 1830 ). This article about a Danish scientist is a stub . You can help Wikipedia by expanding it . This biographical article about a chemist is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sophus_Mads_Jørgensen
A sorbent is an insoluble material that either absorbs or adsorbs liquids [ 1 ] or gases. [ 2 ] They are frequently used to remove pollutants and in the cleanup of chemical accidents [ 3 ] and oil spills . [ 4 ] Besides their uses in industry, sorbents are used in commercial products such as diapers [ 5 ] and odor absorbents, [ 6 ] and are researched for applications in environmental air analysis, particularly in the analysis of volatile organic compounds . [ 7 ] The name sorbent is derived from sorption , [ 8 ] which is itself a derivation from adsorption and absorption. [ 9 ] Sorbents collect specific liquids or gases depending on the composition of the material being used in the sorbent. Some of the most common sorbents used to clean oil spills are made from materials that are both oleophilic and hydrophobic , have high surface area through structural designs that include pores and capillaries, and draw in liquid through capillary action . [ 1 ] Sorbents may be used to collect undesirable ions and act like a reusable ion-exchange resin , composed of charged layers of material that can be heated or otherwise treated to remove pollutants. [ 10 ] In this and similar cases, pollutant particles are attracted to the sorbent through electrostatic forces . [ 11 ] Some sorbents chemically bind to particles through chemical adsorption, or chemisorption ; this process is often more difficult to reverse . [ 12 ] Sorbents come in various forms and materials, including:
https://en.wikipedia.org/wiki/Sorbent
Sorbent tubes are the most widely used collection media for sampling hazardous gases and vapors in air, mostly in the context of industrial hygiene . They were developed by the US National Institute for Occupational Safety and Health (NIOSH) for air quality testing of workers. Sorbent tubes are available from CARO Analytical Services, SKC Inc., 7Solutions BV, Uniphos Ltd., SKC Ltd, Zefon International, Sigma-Aldrich/Supelco and Markes International. SKC Inc. manufactured the first commercially available sorbent tubes. Sorbent tubes are typically made of glass and contain various types of solid adsorbent material ( sorbents ). Commonly used sorbents include activated charcoal , silica gel, and organic porous polymers such as Tenax and Amberlite XAD resins. Solid sorbents are selected for sampling specific compounds in air because they: Sorbent tubes are attached to air sampling pumps for sample collection. A pump with a calibrated flow rate in ml/min is normally placed on a worker’s belt and draws a known volume of air through the sorbent tube. Alternatively, pumps and sorbent tubes are placed in areas for fixed-point sampling. Chemicals are trapped onto the sorbent material throughout the sampling period. Occasionally, when desorbing the air sample from the sorbent tube, a large portion of the analyte will fail to go into solution. In these cases, the results will have to be adjusted for desorption efficiency (DE). This article about analytical chemistry is a stub . You can help Wikipedia by expanding it .
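The concentration calculation implied above can be sketched as follows. The numbers are hypothetical examples (not values from any NIOSH method): the sampled air volume is pump flow × sampling time, and the recovered analyte mass is divided by the desorption efficiency before computing the airborne concentration.

```python
# Illustrative airborne-concentration calculation for a sorbent-tube sample.
# All numbers are hypothetical examples, not method-specific values.

def airborne_concentration(mass_ug, flow_ml_min, minutes, desorption_eff):
    """Concentration in mg/m^3, correcting the recovered analyte mass
    for desorption efficiency (DE)."""
    volume_m3 = flow_ml_min * minutes * 1e-6        # mL sampled → m^3
    corrected_mass_mg = (mass_ug / desorption_eff) * 1e-3  # µg → mg
    return corrected_mass_mg / volume_m3

# Example: 480-minute shift at 50 mL/min, 120 µg recovered, DE = 0.96.
c = airborne_concentration(mass_ug=120, flow_ml_min=50, minutes=480,
                           desorption_eff=0.96)
print(round(c, 3))  # → 5.208 (mg/m^3)
```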
https://en.wikipedia.org/wiki/Sorbent_tube
The Sord M23P was a "luggable" Japanese personal computer (weighing about 9 kg), manufactured by Sord Corp. from 1983. It was one of the first machines to use the 3½" disk drive produced by Sony . This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Sord_M23P
Soredia are common reproductive structures of lichens . [ 1 ] Lichens reproduce asexually by employing simple fragmentation and production of soredia and isidia . [ 2 ] Soredia are powdery propagules composed of fungal hyphae wrapped around cyanobacteria or green algae . [ 1 ] These can be either scattered diffusely across the surface of the lichen's thallus , or produced in localized structures called soralia . [ 3 ] Fungal hyphae make up the basic body structure of a lichen. [ 2 ] The soredia are released through openings in the upper cortex of the lichen structure. [ 1 ] After their release, the soredia disperse to establish the lichen in a new location. [ 2 ] This article about lichens or lichenology is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Soredium
Sorin Teodor Popa (born 24 March 1953) is a Romanian American mathematician working on operator algebras . He is a professor at the University of California, Los Angeles . [ 1 ] He was elected a Member of the National Academy of Sciences in 2025. [ 2 ] Popa earned his PhD from the University of Bucharest in 1983 under the supervision of Dan-Virgil Voiculescu , with thesis Studiul unor clase de subalgebre ale C ∗ {\displaystyle C^{*}} -algebrelor . [ 1 ] [ 3 ] He has advised 15 doctoral students at UCLA, including Adrian Ioana . [ 3 ] In 1990, Popa was an invited speaker at the International Congress of Mathematicians (ICM) in Kyoto , where he gave a talk on "Subfactors and Classifications in von Neumann algebras". He was a Guggenheim Fellow in 1995. [ 4 ] In 2006, he gave a plenary lecture at the ICM in Madrid on "Deformation and Rigidity for group actions and Von Neumann Algebras". [ 5 ] In 2009, he was awarded the Ostrowski Prize , [ 1 ] and in 2010 the E. H. Moore Prize . [ 6 ] He is one of the inaugural fellows of the American Mathematical Society . [ 7 ] In 2013, he was elected to the American Academy of Arts and Sciences.
https://en.wikipedia.org/wiki/Sorin_Popa
Soroush Plus ( Persian : سروش پلاس , Messenger angel ) is an Iranian cross-platform , instant messaging (IM), social media and VoIP service developed by Setak Houshmand Sharif. [ 1 ] It is one of the most widely used messaging apps in Iran , with more than 35 million users. [ 2 ] Soroush enables users to send text and voice messages, share images and videos, make voice and video calls , share files and locations, pay bills, and access AI-based services. [ 3 ] [ 4 ] Soroush is available on Android , iOS , macOS , Windows , Linux , and the web. Registration requires a mobile telephone number. [ 5 ] [ 6 ] Soroush supports text and voice messaging, multimedia sharing, voice and video calls, and file sharing. It offers unlimited cloud storage and is usable internationally. The platform also provides payment services for tasks such as paying bills, making purchases, and conducting transactions securely, and has two-step verification for securing accounts. Users can share their real-time location and access a variety of services through bots, including customer support and bill payments. [ 2 ] [ 7 ] [ 8 ] [ 9 ] Soroush is connected to the Message Exchange Bus (MXB) , a technology that connects major Iranian messaging platforms such as Bale , Eitaa , Rubika , Gap and iGap , enabling users to exchange messages and files, make voice and video calls, and reach more than 100 million users across these apps without needing a separate account for each one, regardless of the platform used. [ 10 ] [ 11 ] [ 12 ] [ 13 ] Soroush was removed from both the Google Play Store and the Apple App Store , along with several other Iranian apps. This removal came as part of a broader action taken by these American platforms to restrict access to apps from Iran. Despite this, Soroush continues to be available through other app stores such as Bazaar and Myket and via direct download, maintaining its user base. 
[ 14 ] [ 15 ] Some sources have criticized the app for its potential use by the Iranian government to monitor citizens, [ 16 ] with the BBC stating that the app is part of a wider initiative within the Iranian government to shift users away from apps like Telegram . [ 17 ] In 2018, some large Iranian Telegram channels were allegedly forced by state officials to migrate to Soroush, according to Article 19 , a British human rights advocacy organization. [ 18 ] Both Soroush and the Iranian government continue to maintain that they respect users' privacy and security. [ 19 ]
https://en.wikipedia.org/wiki/Soroush_Plus
Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles: The reverse of sorption is desorption . The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion .
https://en.wikipedia.org/wiki/Sorption
Sorption enhanced water gas shift ( SEWGS ) is a technology that combines a pre-combustion carbon capture process with the water gas shift reaction (WGS) in order to produce a hydrogen rich stream from the syngas fed to the SEWGS reactor. [ 1 ] The water gas shift reaction converts carbon monoxide into carbon dioxide, according to the following chemical reaction: CO + H 2 O ⇌ CO 2 + H 2 . Meanwhile, carbon dioxide is captured and removed through an adsorption process. [ 1 ] The in-situ CO 2 adsorption and removal shifts the water gas shift reaction to the right-hand side, thereby completely converting the CO and maximizing the production of high pressure hydrogen. [ 1 ] Since the beginning of the second decade of the 21st century this technology has started gaining attention, as it shows advantages over conventional carbon capture technologies and because hydrogen is considered the energy carrier of the future. [ 2 ] [ 3 ] The SEWGS technology is the combination of the water gas shift reaction with the adsorption of carbon dioxide on a solid material. Typical temperature and pressure ranges are 350-550 °C and 20-30 bar. The inlet gas of SEWGS reactors is typically a mixture of hydrogen, CO and CO 2 , to which steam is added to convert CO into CO 2 . [ 4 ] The conversion of carbon monoxide into carbon dioxide is enhanced by shifting the reaction equilibrium through the adsorption and removal of CO 2 , the latter being one of the produced species. [ 1 ] The SEWGS technology is based on a multi-bed pressure swing adsorption (PSA) unit in which the vessels are filled with the water gas shift catalyst and the CO 2 adsorbent material.  Each vessel is subjected to a series of processes. In the sorption/reaction step, a high pressure hydrogen-rich stream is produced, while during sorbent regeneration a CO 2 rich stream is generated. [ 5 ] The process starts by feeding syngas to the SEWGS reactor, where CO 2 is adsorbed and a hydrogen-rich stream is produced. 
The regeneration of the first vessel starts when the sorbent material is saturated with CO 2 , at which point the feed stream is directed to another vessel. After the regeneration, the vessels are re-pressurized. A multibed configuration is necessary to guarantee a continuous production of hydrogen and carbon dioxide. The optimal number of beds usually varies between 6 and 8. [ 5 ] The water gas shift reaction is the reaction between carbon monoxide and steam to form hydrogen and carbon dioxide: CO + H 2 O ⇌ CO 2 + H 2 . This reaction was discovered by Felice Fontana and is nowadays adopted in a wide range of industrial applications, such as in the production of ammonia , hydrocarbons , methanol , hydrogen and other chemicals. In industrial practice two water gas shift sections are necessary, one at high temperature and one at low temperature, with intermediate cooling. [ 6 ] Adsorption is the phenomenon of sorption of gases or solutes on solid or liquid surfaces. Adsorption on a solid surface occurs when substances collide with the solid surface and create bonds with the atoms or molecules of the surface. There are two main adsorption processes: physical adsorption and chemical adsorption. The first is the result of the interaction of intermolecular forces. Since weak bonds are formed, the adsorbed substance can be easily separated. In chemical adsorption, chemical bonds are formed, meaning that the absorption or release of adsorption heat and the activation energy are larger than in physical adsorption. These two processes often take place simultaneously. The adsorbent material is then regenerated through desorption , which is the opposite phenomenon of sorption, releasing the captured substance from the adsorbent material. [ 7 ] In SEWGS technology the pressure swing adsorption (PSA) process is employed to regenerate the adsorbent material and produce a CO 2 rich stream. 
The process is similar to the one conventionally used for air separation, hydrogen purification and other gas separations. [ 5 ] The industrially established technology for carbon dioxide removal is amine washing, which is based on chemical absorption of carbon dioxide. In chemical absorption, reactions between the absorbed substance (CO 2 ) and the solvent occur and produce a rich liquid. The rich liquid then enters the desorption column, where carbon dioxide is separated from the solvent, which is reused for CO 2 absorption. Monoethanolamine (C 2 H 7 NO), diethanolamine (C 4 H 11 NO 2 ), triethanolamine (C 6 H 15 NO 3 ) and methyl-diethanolamine (C 5 H 13 NO 2 ) are commonly used for the removal of CO 2 . [ 8 ] SEWGS technology shows some advantages in comparison with traditional technologies adoptable for pre-combustion removal of carbon dioxide. Traditional technologies require two water gas shift reactors (a high temperature and a low temperature stage) in order to achieve high conversions of carbon monoxide into carbon dioxide, with an intermediate cooling stage between the two reactors. In addition, another cooling stage is necessary at the outlet of the second WGS reactor for the CO 2 capture with a solvent. Furthermore, the hydrogen rich stream at the outlet of the SEWGS section can be directly fed into a gas turbine, while the hydrogen rich stream produced by the traditional route needs a further heating stage. [ 2 ] The importance of this technology is directly related to the problem of global warming and the mitigation of carbon dioxide emissions. In the hydrogen economy , hydrogen is considered a clean energy carrier with high energy content and is expected to replace fossil fuels and other energy sources associated with pollution issues. For these reasons, since the beginning of the second decade of the 21st century this technology has attracted public interest. 
[ 3 ] The SEWGS technology enables producing high-purity hydrogen without the need for further purification processes. It furthermore finds potential application in a wide range of industrial processes, such as in the production of electricity from fossil fuels or in the iron and steel industry. [ 2 ] [ 5 ] [ 9 ] The integration of the SEWGS process in natural gas combined cycle (NGCC) and integrated gasification combined cycle (IGCC) power plants has been investigated as a possible way to produce electricity from natural gas or coal with almost-zero emissions. In NGCC power plants the carbon capture ratio achieved is around 95% with a CO 2 purity over 99%, while in IGCC power plants the carbon capture ratio is around 90% with a CO 2 purity of 99%. [ 5 ] [ 9 ] The investigation of SEWGS integration in steel mills started during the second decade of the 21st century. The goal is to reduce the carbon footprint of this industrial process, which is responsible for 6% of total global CO 2 emissions and 16% of the emissions generated by industrial processes. [ 10 ] The captured and removed CO 2 can then be stored or used for the production of high value chemical products. [ 10 ] The reactor vessels are loaded with sorbent pellets. The sorbent must have the following features: [ 5 ] Different sorbent materials have been investigated for the purpose of being employed in SEWGS; among them, potassium promoted hydrotalcite is the most studied sorbent material for SEWGS application. [ 4 ] Its principal features are listed below: [ 9 ] Projects in which SEWGS technology is investigated:
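The equilibrium effect that SEWGS exploits can be illustrated numerically. The sketch below uses a commonly cited empirical correlation (attributed to Moe) for the WGS equilibrium constant, Kp ≈ exp(4577.8/T − 4.33) with T in kelvin; treat the correlation itself as an assumption for illustration. Because the reaction is exothermic, Kp falls as temperature rises, which is why conventional plants need a low-temperature stage and why removing CO 2 in situ lets SEWGS drive CO conversion to completion at a single high temperature.

```python
import math

# Empirical WGS equilibrium constant (Moe correlation, T in kelvin):
#   Kp ≈ exp(4577.8 / T - 4.33)
# Used here only to illustrate that the equilibrium becomes less favorable
# as temperature rises (the reaction is exothermic).

def wgs_kp(T_kelvin):
    return math.exp(4577.8 / T_kelvin - 4.33)

for T in (473, 573, 673, 773):  # 200-500 °C, spanning typical WGS/SEWGS conditions
    print(f"T = {T} K  ->  Kp = {wgs_kp(T):.1f}")

# Kp decreases monotonically with temperature.
assert wgs_kp(473) > wgs_kp(573) > wgs_kp(673) > wgs_kp(773)
```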
https://en.wikipedia.org/wiki/Sorption_enhanced_water_gas_shift
In computing , sort is a standard command line program of Unix and Unix-like operating systems , that prints the lines of its input or concatenation of all files listed in its argument list in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input is taken as sort key. Blank space is the default field separator. The command supports a number of command-line options that can vary by implementation. For instance the " -r " flag will reverse the sort order. Sort ordering is affected by the environment's locale settings. [ 1 ] A sort command that invokes a general sort facility was first implemented within Multics . [ 2 ] Later, it appeared in Version 1 Unix . This version was originally written by Ken Thompson at AT&T Bell Laboratories . By Version 4 Thompson had modified it to use pipes , but sort retained an option to name the output file because it was used to sort a file in place. In Version 5 , Thompson invented "-" to represent standard input . [ 3 ] The version of sort bundled in GNU coreutils was written by Mike Haertel and Paul Eggert. [ 1 ] This implementation employs the merge sort algorithm. Similar commands are available on many other operating systems, for example a sort command is part of ASCII 's MSX-DOS2 Tools for MSX-DOS version 2. [ 4 ] The sort command has also been ported to the IBM i operating system. [ 5 ] With no FILE , or when FILE is - , the command reads from standard input . The -n option makes the program sort according to numerical value. The du command produces output that starts with a number, the file size, so its output can be piped to sort to produce a list of files sorted by (ascending) file size: The find command with the ls option prints file sizes in the 7th field, so a list of the LaTeX files sorted by file size is produced by: Use the -k option to sort on a certain column. For example, use " -k 2 " to sort on the second column. 
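The du and find examples referred to above appear to have been lost in transcription; the following reconstructions show typical pipelines of the kind the text describes (illustrative invocations, run here against a temporary directory so the script is self-contained):

```shell
# Reconstruction of the elided examples: sort files by size.
tmp=$(mktemp -d)
printf 'aaaaaaaaaa' > "$tmp/big.tex"
printf 'a' > "$tmp/small.tex"

# du prints sizes in the first column; sort -n orders the lines numerically,
# producing a list of files sorted by ascending size.
du -a -- "$tmp"/* | sort -n

# With find -ls, the file size is the 7th field, so the LaTeX files are
# sorted by size with a numeric sort on key 7.
find "$tmp" -name '*.tex' -ls | sort -n -k 7

rm -rf "$tmp"
```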
In old versions of sort, the +1 option made the program sort on the second column of data ( +2 for the third, etc.). This usage is deprecated. The -k m,n option lets you sort on a key that is potentially composed of multiple fields (start at column m , end at column n ): Here the first sort is done using column 2. -k2,2n specifies sorting on the key starting and ending with column 2, and sorting numerically. If -k2 is used instead, the sort key would begin at column 2 and extend to the end of the line, spanning all the fields in between. -k1,1 dictates breaking ties using the value in column 1, sorting alphabetically by default. Note that bob, and chad have the same quota and are sorted alphabetically in the final output. Sorting a file with tab separated values requires a tab character to be specified as the column delimiter. This illustration uses the shell's dollar-quote notation [ 6 ] [ 7 ] to specify the tab as a C escape sequence . The -r option just reverses the order of the sort: The GNU implementation has a -R --random-sort option based on hashing; this is not a full random shuffle because it will sort identical lines together. A true random sort is provided by the Unix utility shuf . The GNU implementation has a -V --version-sort option which is a natural sort of (version) numbers within text. Two text strings that are to be compared are split into blocks of letters and blocks of digits. Blocks of letters are compared alpha-numerically, and blocks of digits are compared numerically (i.e., skipping leading zeros, more digits means larger, otherwise the leftmost digits that differ determine the result). Blocks are compared left-to-right and the first non-equal block in that loop decides which text is larger. This happens to work for IP addresses, Debian package version strings and similar tasks where numbers of variable length are embedded in strings.
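The elided examples for `-k m,n`, tab delimiters, `-r`, and `-V` can be reconstructed along these lines (illustrative invocations using GNU sort):

```shell
# Tab-separated input: specify the delimiter with $'\t' (shell dollar-quoting)
# and sort numerically on the second field only.
printf 'alice\t30\nbob\t5\n' | sort -t $'\t' -k 2,2n

# -r reverses the ordering.
printf '1\n3\n2\n' | sort -n -r           # prints 3, 2, 1

# -V compares embedded numbers naturally ("version sort").
printf 'file10\nfile2\nfile1\n' | sort -V  # prints file1, file2, file10
```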
https://en.wikipedia.org/wiki/Sort_(Unix)
In logistics , a sorter is a system which performs sortation of products (goods, luggage, mail, etc.) according to their destinations. [ 1 ] A common type of sorter is a conveyor -based system. While they may be based on other conveyor systems, sorters are usually unique types of conveyors. [ 1 ] Sortation is the process of identifying items on a conveyor system and diverting them to specific destinations. Sorters are applied to different applications depending upon the product and the requested rate. A common element is a feeder system, whose sole purpose is to feed the products into the sorter in the proper orientation and with proper spacing, so that the sorter can operate correctly. [ 1 ] Another common element is a set of receptacles, which receive the products as they leave the sorter toward the proper destination. Receptacles may be as simple as chutes, gravity conveyors, or powered conveyors. [ 1 ] There are a number of typical sorter designs. [ 1 ]
https://en.wikipedia.org/wiki/Sorter_(logistics)
In computer science , comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks . Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. In order to sort larger amounts of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware . Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, [ 1 ] who subsequently patented the idea. [ 2 ] Sorting networks can be implemented either in hardware or in software . Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. [ 1 ] Batcher , in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches . [ 3 ] Since the 2000s, sorting nets (especially bitonic mergesort ) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units . [ 4 ] A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. 
When a pair of values, traveling through a pair of wires, encounters a comparator, the comparator swaps the values if and only if the top wire's value is greater than the bottom wire's value. In a formula, if the top wire carries x and the bottom wire carries y , then after hitting a comparator the wires carry x ′ = min ( x , y ) {\displaystyle x'=\min(x,y)} and y ′ = max ( x , y ) {\displaystyle y'=\max(x,y)} , respectively, so the pair of values is sorted. [ 5 ] : 635 A network of wires and comparators that will correctly sort all possible inputs into ascending order is called a sorting network . By reflecting the network, it is also possible to sort all inputs into descending order. The full operation of a simple sorting network is shown below. It is evident why this sorting network will correctly sort the inputs; note that the first four comparators will "sink" the largest value to the bottom and "float" the smallest value to the top. The final comparator sorts out the middle two wires. The efficiency of a sorting network can be measured by its total size, meaning the number of comparators in the network, or by its depth , defined (informally) as the largest number of comparators that any input value can encounter on its way through the network. Noting that sorting networks can perform certain comparisons in parallel (represented in the graphical notation by comparators that lie on the same vertical line), and assuming all comparisons to take unit time, it can be seen that the depth of the network is equal to the number of time steps required to execute it. [ 5 ] : 636–637 We can easily construct a network of any size recursively using the principles of insertion and selection. Assuming we have a sorting network of size n , we can construct a network of size n + 1 by "inserting" an additional number into the already sorted subnet (using the principle underlying insertion sort ).
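The comparator rule above is easy to simulate in code. The article's figure is not reproduced here; the sketch below uses the standard five-comparator network on four wires, which behaves exactly as described: the first four comparators sink the largest value and float the smallest, and the final comparator orders the middle two wires (wires are numbered 0–3 from top to bottom, an assumption for illustration):

```python
def apply_network(comparators, values):
    """Run a comparator network: each comparator (i, j) with i < j
    places the minimum on wire i and the maximum on wire j."""
    wires = list(values)
    for i, j in comparators:
        if wires[i] > wires[j]:
            wires[i], wires[j] = wires[j], wires[i]
    return wires

# Five-comparator sorting network for four inputs.
NET4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

print(apply_network(NET4, [3, 1, 4, 2]))  # [1, 2, 3, 4]
```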
We can also accomplish the same thing by first "selecting" the lowest value from the inputs and then sorting the remaining values recursively (using the principle underlying bubble sort ). The structure of these two sorting networks is very similar. A construction of the two different variants, which collapses together comparators that can be performed simultaneously, shows that, in fact, they are identical. [ 1 ] The insertion network (or equivalently, bubble network) has a depth of 2 n − 3 , [ 1 ] where n is the number of values. This is better than the O ( n log n ) time needed by random-access machines , but it turns out that there are much more efficient sorting networks with a depth of just O (log 2 n ) , as described below . While it is easy to prove the validity of some sorting networks (like the insertion/bubble sorter), it is not always so easy. There are n ! permutations of numbers in an n -wire network, and to test all of them would take a significant amount of time, especially when n is large. The number of test cases can be reduced significantly, to 2^ n , using the so-called zero-one principle. While still exponential, this is smaller than n ! for all n ≥ 4 , and the difference grows quite quickly with increasing n . The zero-one principle states that, if a sorting network can correctly sort all 2^ n sequences of zeros and ones, then it is also valid for arbitrary ordered inputs. This not only drastically cuts down on the number of tests needed to ascertain the validity of a network, it is of great use in creating many constructions of sorting networks as well. The principle can be proven by first observing the following fact about comparators: when a monotonically increasing function f is applied to the inputs, i.e., x and y are replaced by f ( x ) and f ( y ) , then the comparator produces min( f ( x ), f ( y )) = f (min( x , y )) and max( f ( x ), f ( y )) = f (max( x , y )) .
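The recursive insertion construction described above can be sketched as follows: for each new wire k, a chain of comparators bubbles its value into the already sorted prefix (an illustrative sketch; the function name is hypothetical, not from the article):

```python
def insertion_network(n):
    """Comparator list of the insertion (equivalently, bubble) sorting
    network on n wires, built by inserting wire k into an already
    sorted subnetwork on wires 0..k-1."""
    comparators = []
    for k in range(1, n):
        # Bubble the value on wire k upward into the sorted prefix.
        for i in range(k, 0, -1):
            comparators.append((i - 1, i))
    return comparators

# This construction uses n*(n-1)/2 comparators for n wires.
print(len(insertion_network(5)))  # 10
```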
By induction on the depth of the network, this result can be extended to a lemma stating that if the network transforms the sequence a 1 , ..., a n into b 1 , ..., b n , it will transform f ( a 1 ), ..., f ( a n ) into f ( b 1 ), ..., f ( b n ) . Suppose that some input a 1 , ..., a n contains two items a i < a j , and the network incorrectly swaps these in the output. Then it will also incorrectly sort f ( a 1 ), ..., f ( a n ) for the function f ( x ) = 1 if x > a i , and f ( x ) = 0 otherwise. This function is monotonic, so we have the zero-one principle as the contrapositive . [ 5 ] : 640–641 Various algorithms exist to construct sorting networks of depth O (log 2 n ) (hence size O ( n log 2 n ) ) such as Batcher odd–even mergesort , bitonic sort , Shell sort , and the Pairwise sorting network . These networks are often used in practice. It is also possible to construct networks of depth O (log n ) (hence size O ( n log n ) ) using a construction called the AKS network , after its discoverers Ajtai , Komlós , and Szemerédi . [ 6 ] While an important theoretical discovery, the AKS network has very limited practical application because of the large linear constant hidden by the Big-O notation , [ 5 ] : 653 which is partly due to the construction's use of expander graphs . A simplified version of the AKS network was described by Paterson in 1990, who noted that "the constants obtained for the depth bound still prevent the construction being of practical value". [ 7 ] A more recent construction called the zig-zag sorting network of size O ( n log n ) was discovered by Goodrich in 2014. [ 8 ] While its size is much smaller than that of AKS networks, its depth O ( n log n ) makes it unsuitable for a parallel implementation. For small, fixed numbers of inputs n , optimal sorting networks can be constructed, with either minimal depth (for maximally parallel execution) or minimal size (number of comparators).
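The zero-one principle translates directly into a validity test that checks only the 2^n binary inputs instead of all n! permutations (illustrative code, not from the article):

```python
def is_sorting_network(n, comparators):
    """Check validity via the zero-one principle: the network on n wires
    is a sorting network iff it sorts all 2**n sequences of 0s and 1s."""
    for m in range(2 ** n):
        wires = [(m >> k) & 1 for k in range(n)]
        for i, j in comparators:
            if wires[i] > wires[j]:
                wires[i], wires[j] = wires[j], wires[i]
        if wires != sorted(wires):
            return False
    return True

# The five-comparator network on four wires is valid...
print(is_sorting_network(4, [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]))  # True
# ...but dropping its last comparator breaks it.
print(is_sorting_network(4, [(0, 1), (2, 3), (0, 2), (1, 3)]))  # False
```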
These networks can be used to increase the performance of larger sorting networks resulting from the recursive constructions of, e.g., Batcher, by halting the recursion early and inserting optimal nets as base cases. [ 9 ] The following table summarizes the optimality results for small networks for which the optimal depth is known: For larger networks neither the optimal depth nor the optimal size is currently known. The bounds known so far are provided in the table below: The first sixteen depth-optimal networks are listed in Knuth's Art of Computer Programming , [ 1 ] and have been since the 1973 edition; however, while the optimality of the first eight was established by Floyd and Knuth in the 1960s, this property wasn't proven for the final six until 2014 [ 15 ] (the cases nine and ten having been decided in 1991 [ 9 ] ). For one to twelve inputs, minimal (i.e. size-optimal) sorting networks are known, and for higher values, lower bounds on their sizes S ( n ) can be derived inductively using a lemma due to Van Voorhis [ 1 ] (p. 240): S ( n ) ≥ S ( n − 1) + ⌈log 2 n ⌉ . The first ten optimal networks have been known since 1969, with the first eight again being known as optimal since the work of Floyd and Knuth, but optimality of the cases n = 9 and n = 10 took until 2014 to be resolved. [ 11 ] The optimality of the smallest known sorting networks for n = 11 and n = 12 was resolved in 2020. [ 16 ] [ 1 ] Some work in designing optimal sorting networks has been done using genetic algorithms : D. Knuth mentions that the smallest known sorting network for n = 13 was found by Hugues Juillé in 1995 "by simulating an evolutionary process of genetic breeding" [ 1 ] (p. 226), and that the minimum depth sorting networks for n = 9 and n = 11 were found by Loren Schwiebert in 2001 "using genetic methods" [ 1 ] (p. 229).
Unless P=NP , the problem of testing whether a candidate network is a sorting network is likely to remain difficult for networks of large sizes, due to the problem being co-NP -complete. [ 17 ]
https://en.wikipedia.org/wiki/Sorting_network
SOTIO Biotech [ 1 ] is a Czech biotechnology company focused on clinical-stage research and development of innovative medicines for cancer with operations in Europe, North America, and Asia. [ 2 ] The company has clinical programs which include a superagonist of the immuno-oncology target IL-15 , a new generation of potent and stable antibody-drug conjugates (ADCs), proprietary technology designed to improve on the efficacy of CAR T therapies and a platform to streamline and enhance personalized cell therapies. [ 3 ] The company was founded in 2010 and in 2012 became part of PPF Group , owned by Petr Kellner . [ 4 ] The CEO of the company is Radek Špíšek, who has been with the company since its beginning, initially serving as Chief Scientific Officer. [ 5 ] SOTIO conducts global operations in Europe , the USA , and China . In January 2020, the company announced the founding of its subsidiary SOTIO Biotech AG in Basel , Switzerland . [ 6 ] SOTIO also operates laboratory complexes in Prague, Czech Republic, and Beijing , China, where it produces treatments for people suffering from oncological diseases. [ 7 ] SOTIO is currently testing multiple oncology products at different stages of clinical development. [ 8 ] Immune cytokine IL-15 is an immuno-oncology target that mobilizes cytotoxic T cells and natural killer (NK) cells. Stimulating IL-15 receptors on these cells represents a potent and complementary mechanism to existing cancer treatments. [ 9 ] SOTIO is developing an IL-15 superagonist, SO-C101, which is designed to have significant advantages over other IL-2 and IL-15 based therapies that are currently in development. SO-C101 is fused to the IL-15 alpha chain receptor, which confers specific binding to cytotoxic T cells and NK cells, which may provide a superior efficacy and safety profile. Fusion to the IL-15 alpha chain receptor also optimizes half-life, which may improve efficacy by limiting T cell exhaustion .
[ 10 ] SOTIO is conducting an ongoing Phase 1/1b dose finding study at leading clinical centers in the U.S. and EU to examine SO-C101 as monotherapy and in combination with pembrolizumab in patients with relapsed/refractory advanced/metastatic solid tumors. Interim data demonstrate that SO-C101 has been well tolerated to date, with no dose limiting toxicities observed. [ 11 ] SOTIO's BOXR cell therapy platform is designed to improve the functionality of engineered T cells by discovering novel "bolt-on" transgenes that can be co-expressed with tumor-targeting receptors to overcome resistance and improve the function of T cells in the solid tumor microenvironment . [ 12 ] Lead candidate BOXR1030 uniquely combines the BOXR-discovered GOT2 transgene, which encodes a critical enzyme involved in cellular metabolism, with CAR T technology. BOXR1030 is designed to improve CAR T therapy function by enhancing T cell fitness in the solid tumor microenvironment. Tumor infiltrating lymphocytes isolated from the tumors of treated animals revealed that BOXR1030 cells were more resistant to dysfunction and had fewer markers of exhaustion as compared to the control CAR T cells. [ 13 ] SOTIO, in collaboration with its partner NBE Therapeutics , is developing oncology candidates based on its platform of potent and highly stable ADCs with an unprecedented therapeutic window. Preclinical data show that these ADCs have strong efficacy in direct tumor cell killing and a good tolerability profile, but also induce specific antitumor immunity, thereby providing a dual approach to cancer protection. SOTIO currently has two antibody drug conjugate candidates in preclinical trials, SOT102 and SOT107. [ 14 ] [ 15 ] SOTIO has its own scientific research and development and also collaborates with other partners. In recent years, SOTIO and PPF have focused on investing in a number of biotechnology companies developing innovative anticancer treatments in Europe and the US.
These include collaborations with the Swiss company NBE-Therapeutics on the development of novel antibody-drug conjugate (ADC) products, and with its affiliate Cytune Pharma on the development of novel IL-15-based immunotherapies for the treatment of cancer. [ 16 ] [ 17 ] [ 18 ] [ 19 ] In August 2018, SOTIO acquired Cytune Pharma [ 20 ] and announced the continued development of the company's lead program SO-C101 (RLI-15), a human fusion protein of IL-15 and the high-affinity binding domain of IL-15Ra. It is a novel immunotherapeutic approach to cancer treatment with potential applications in a variety of oncology indications. The first clinical trial for the program was launched in summer 2019. [ 21 ] [ 22 ] [ 23 ] [ 24 ] At the end of 2020, PPF sold its stake in NBE-Therapeutics, a company developing innovative ADC products for the treatment of solid tumours, to the leading global pharmaceutical company Boehringer Ingelheim. This deal, the largest of its kind in Europe in ten years, showed the value of the ADCs that SOTIO maintains in its portfolio and continues to develop. These products could reach their first patients in 2022 in the first phase of clinical trials. SOTIO is also managing PPF's investments in the biotechnology companies Autolus Therapeutics and Cellestia Biotech. [ 25 ]
https://en.wikipedia.org/wiki/Sotio
In chemical engineering , the Souders–Brown equation (named after Mott Souders and George Granger Brown [ 1 ] [ 2 ] ) has been a tool for obtaining the maximum allowable vapor velocity in vapor–liquid separation vessels (variously called flash drums , knockout drums , knockout pots , compressor suction drums and compressor inlet drums ). It has also been used for the same purpose in designing trayed fractionating columns , trayed absorption columns and other vapor–liquid-contacting columns. A vapor–liquid separator drum is a vertical vessel into which a liquid and vapor mixture (or a flashing liquid) is fed and wherein the liquid is separated by gravity, falls to the bottom of the vessel, and is withdrawn. The vapor travels upward at a design velocity which minimizes the entrainment of any liquid droplets in the vapor as it exits the top of the vessel. The diameter of a vapor–liquid separator drum is dictated by the expected volumetric flow rate of vapor and liquid from the drum. The following sizing methodology is based on the assumption that those flow rates are known. Use a vertical pressure vessel with a length–diameter ratio of about 3 to 4, and size the vessel to provide about 5 minutes of liquid inventory between the normal liquid level and the bottom of the vessel (with the normal liquid level being somewhat below the feed inlet). Calculate the maximum allowable vapor velocity in the vessel by using the Souders–Brown equation: {\displaystyle v=k{\sqrt {\frac {\rho _{L}-\rho _{V}}{\rho _{V}}}}} where v is the maximum allowable vapor velocity, k is an empirically determined capacity factor, ρ L is the liquid density and ρ V is the vapor density. Then the cross-sectional area of the drum can be found from: {\displaystyle A={\frac {\dot {V}}{v}}} where A is the drum's cross-sectional area and V̇ is the vapor volumetric flow rate. And the drum diameter is: {\displaystyle D={\sqrt {\frac {4A}{\pi }}}} The drum should have a vapor outlet at the top, liquid outlet at the bottom, and feed inlet at about the half-full level.
At the vapor outlet, provide a de-entraining mesh pad within the drum such that the vapor must pass through that mesh before it can leave the drum. Depending upon how much liquid flow is expected, the liquid outlet line should probably have a liquid level control valve . As for the mechanical design of the drum (materials of construction, wall thickness, corrosion allowance, etc.) use the same criteria as for any pressure vessel. The GPSA Engineering Data Book [ 3 ] recommends the following k values for vertical drums with horizontal mesh pads (at the denoted operating pressures): GPSA notes:
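The three sizing steps above can be chained in a short calculation. The k value and densities below are purely illustrative; in practice k is taken from an appropriate source such as the GPSA data book for the actual operating pressure and mesh-pad arrangement:

```python
import math

def drum_diameter(k, rho_l, rho_v, vapor_flow):
    """Vertical separator sizing via the Souders-Brown equation.
    k: empirical capacity factor, m/s
    rho_l, rho_v: liquid and vapor densities, kg/m^3
    vapor_flow: vapor volumetric flow rate, m^3/s
    Returns (max vapor velocity m/s, cross-sectional area m^2, diameter m)."""
    v = k * math.sqrt((rho_l - rho_v) / rho_v)   # Souders-Brown velocity
    area = vapor_flow / v                        # A = V_dot / v
    diameter = math.sqrt(4 * area / math.pi)     # D from circular area
    return v, area, diameter

# Illustrative numbers: k = 0.107 m/s, water-like liquid, light vapor.
v, area, d = drum_diameter(0.107, 800.0, 20.0, 1.5)
print(round(v, 3), round(d, 2))
```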
https://en.wikipedia.org/wiki/Souders–Brown_equation
In mathematics , the soul theorem is a theorem of Riemannian geometry that largely reduces the study of complete manifolds of non-negative sectional curvature to that of the compact case. Jeff Cheeger and Detlef Gromoll proved the theorem in 1972 by generalizing a 1969 result of Gromoll and Wolfgang Meyer. The related soul conjecture , formulated by Cheeger and Gromoll at that time, was proved twenty years later by Grigori Perelman . Cheeger and Gromoll's soul theorem states: [ 1 ] if ( M , g ) is a complete connected Riemannian manifold with nonnegative sectional curvature, then there exists a compact totally convex, totally geodesic submanifold whose normal bundle is diffeomorphic to M . Such a submanifold is called a soul of ( M , g ) . By the Gauss equation and total geodesicity, the induced Riemannian metric on the soul automatically has nonnegative sectional curvature. Gromoll and Meyer had earlier studied the case of positive sectional curvature, where they showed that a soul is given by a single point, and hence that M is diffeomorphic to Euclidean space . [ 2 ] Very simple examples show that the soul is not uniquely determined by ( M , g ) in general. However, Vladimir Sharafutdinov constructed a 1-Lipschitz retraction from M to any of its souls, thereby showing that any two souls are isometric . This mapping is known as Sharafutdinov's retraction . [ 3 ] Cheeger and Gromoll also posed the converse question of whether there is a complete Riemannian metric of nonnegative sectional curvature on the total space of any vector bundle over a closed manifold of positive sectional curvature. [ 4 ] The answer is now known to be negative, although the existence theory is not fully understood. [ 5 ] As mentioned above, Gromoll and Meyer proved that if g has positive sectional curvature then the soul is a point. Cheeger and Gromoll conjectured that this would hold even if g had nonnegative sectional curvature, with positivity only required of all sectional curvatures at a single point.
[ 8 ] This soul conjecture was proved by Grigori Perelman , who established the more powerful fact that Sharafutdinov's retraction is a Riemannian submersion , and even a submetry . [ 5 ]
https://en.wikipedia.org/wiki/Soul_conjecture
Sound design is the art and practice of creating auditory elements of media. It involves specifying, acquiring and creating audio using production techniques and equipment or software. It is employed in a variety of disciplines including filmmaking , television production , video game development , theatre , sound recording and reproduction , live performance , sound art , post-production , radio , new media and musical instrument development. Sound design commonly involves performing (see e.g. Foley ) and editing of previously composed or recorded audio, such as sound effects and dialogue for the purposes of the medium, but it can also involve creating sounds from scratch through synthesizers. A sound designer is one who practices sound design. The use of sound to evoke emotion, reflect mood and underscore actions in plays and dances began in prehistoric times when it was used in religious practices for healing or recreation. In ancient Japan, theatrical events called kagura were performed in Shinto shrines with music and dance. [ 1 ] Plays were performed in medieval times in a form of theatre called Commedia dell'arte , which used music and sound effects to enhance performances. The use of music and sound in the Elizabethan Theatre followed, in which music and sound effects were produced off-stage using devices such as bells, whistles, and horns. Cues would be written in the script for music and sound effects to be played at the appropriate time. [ 2 ] Italian composer Luigi Russolo built mechanical sound-making devices, called " intonarumori ," for futurist theatrical and music performances starting around 1913. These devices were meant to simulate natural and man-made sounds, such as trains or bombs. Russolo's treatise , The Art of Noises , is one of the earliest written documents on the use of abstract noise in the theatre. After his death, his intonarumori were used in more conventional theatre performances to create realistic sound effects.
Possibly the first use of recorded sound in the theatre was a phonograph playing a baby's cry in a London theatre in 1890. [ 3 ] Sixteen years later, Herbert Beerbohm Tree used recordings in his London production of Stephen Phillips ' tragedy NERO. The event is marked in the Theatre Magazine (1906) with two photographs; one showing a musician blowing a bugle into a large horn attached to a disc recorder, the other with an actor recording the agonizing shrieks and groans of the tortured martyrs. The article states: "these sounds are all realistically reproduced by the gramophone". As cited by Bertolt Brecht , there was a play about Rasputin written in 1927 by Alexej Tolstoi and directed by Erwin Piscator that included a recording of Lenin 's voice. Whilst the term "sound designer" was not yet in use, some stage managers specialised as "effects men", creating and performing offstage sound effects using a mix of vocal mimicry, mechanical and electrical contraptions and gramophone records. A great deal of care and attention was paid to the construction and performance of these effects, both naturalistic and abstract. [ 4 ] Over the twentieth century recorded sound effects began to replace live sound effects, though often it was the stage manager 's duty to find the sound effects , and an electrician played the recordings during performances. Between 1980 and 1988, Charlie Richmond, USITT's first Sound Design Commissioner, oversaw efforts of its Sound Design Commission to define the duties, responsibilities, standards and procedures expected of a theatre sound designer in North America . He summarized his conclusions in a document [ 5 ] which, although somewhat dated, provides a succinct record of what was then expected. It was subsequently provided to the ADC and David Goodman at the Florida USA local when they both planned to represent sound designers in the 1990s.
MIDI and digital audio technology have contributed to the evolution of sound production techniques in the 1980s and 1990s. Digital audio workstations (DAW) and a variety of digital signal processing algorithms applied in them allow more complicated soundtracks with more tracks and auditory effects to be realized. Features such as unlimited undo and sample-level editing allow fine control over the soundtracks. In theatre sound , features of computerized theatre sound design systems have also been recognized as being essential for live show control systems at Walt Disney World and, as a result, Disney utilized systems of that type to control many facilities at their Disney-MGM Studios theme park, which opened in 1989. These features were incorporated into the MIDI Show Control (MSC) specification, an open communications protocol for interacting with diverse devices. The first show to fully utilize the MSC specification was the Magic Kingdom Parade at Walt Disney World 's Magic Kingdom in September 1991. The rise of interest in game audio has also brought more advanced interactive audio tools that are also accessible without a background in computer programming. Some such software tools (termed "implementation tools" or "audio engines") feature a workflow that's similar to that in more conventional DAW programs and can also allow the sound production personnel to undertake some of the more creative interactive sound tasks (that are considered to be part of sound design for computer applications) that previously would have required a computer programmer. Interactive applications have also given rise to many techniques in "dynamic audio" which loosely means sound that's "parametrically" adjusted during the program's run-time. This allows for a broader expression in sounds, more similar to that in films, because this way the sound designer can e.g.
create footstep sounds that vary in a believable and non-repeating way and that also correspond to what's seen in the picture. The digital audio workstation cannot directly "communicate" with game engines, because the game's events often occur in an unpredictable order, whereas traditional digital audio workstations as well as so called linear media (TV, film etc.) have everything occur in the same order every time the production is run. In particular, games have also brought in dynamic or adaptive mixing. The World Wide Web has greatly enhanced the ability of sound designers to acquire source material quickly, easily and cheaply. Nowadays, a designer can preview and download crisper, more "believable" sounds as opposed to toiling through time- and budget-draining "shot-in-the-dark" searches through record stores, libraries and "the grapevine" for (often) inferior recordings. In addition, software innovation has enabled sound designers to take more of a DIY (or "do-it-yourself") approach. From the comfort of their home and at any hour, they can simply use a computer, speakers and headphones rather than renting (or buying) costly equipment or studio space and time for editing and mixing. This provides for faster creation and negotiation with the director. In motion picture production, a Sound Editor/Designer is a member of a film crew responsible for the entirety or some specific parts of a film's soundtrack. [ 6 ] In the American film industry , the title Sound Designer is not controlled by any professional organization , unlike titles such as Director or Screenwriter . The terms sound design and sound designer began to be used in the motion picture industry in 1969. At that time, the title of Sound Designer was first granted to Walter Murch by Francis Ford Coppola in recognition for Murch's contributions to the film The Rain People .
[ 7 ] The original meaning of the title Sound Designer , as established by Coppola and Murch, was "an individual ultimately responsible for all aspects of a film's audio track, from the dialogue and sound effects recording to the re-recording (mix) of the final track". [ 8 ] The term sound designer has replaced monikers like supervising sound editor or re-recording mixer for the same position: the head designer of the final sound track. Editors and mixers like Murray Spivack ( King Kong ), George Groves ( The Jazz Singer ), James G. Stewart ( Citizen Kane ), and Carl Faulkner ( Journey to the Center of the Earth ) served in this capacity during Hollywood's studio era, and are generally considered to be sound designers by a different name. The advantage of calling oneself a sound designer beginning in later decades was two-fold. It strategically allowed for a single person to work as both an editor and mixer on a film without running into issues pertaining to the jurisdictions of editors and mixers, as outlined by their respective unions. Additionally, it was a rhetorical move that legitimised the field of post-production sound at a time when studios were downsizing their sound departments, and when producers were routinely skimping on budgets and salaries for sound editors and mixers. In so doing, it allowed those who called themselves sound designers to compete for contract work and to negotiate higher salaries. The position of Sound Designer therefore emerged in a manner similar to that of Production Designer , which was created in the 1930s when William Cameron Menzies made revolutionary contributions to the craft of art direction in the making of Gone with the Wind . [ 9 ] The audio production team is a principal member of the production staff, with creative output comparable to that of the film editor and director of photography . 
Several factors have led to the promotion of audio production to this level, when previously it was considered subordinate to other parts of film: The contemporary title of sound designer can be compared with the more traditional title of supervising sound editor ; many sound designers use both titles interchangeably. [ 11 ] The role of supervising sound editor , or sound supervisor , developed in parallel with the role of sound designer . The demand for more sophisticated soundtracks was felt both inside and outside Hollywood, and the supervising sound editor became the head of the large sound department, with a staff of dozens of sound editors , that was required to realize a complete sound job with a fast turnaround. [ 12 ] [ 13 ] Sound design, as a distinct discipline, is one of the youngest fields in stagecraft , second only to the use of projection and other multimedia displays, although the ideas and techniques of sound design have been around almost since theatre started. Dan Dugan , working with three stereo tape decks routed to ten loudspeaker zones [ 14 ] during the 1968–69 season of American Conservatory Theater (ACT) in San Francisco, was the first person in the USA to be called a sound designer. [ 15 ] A theatre sound designer is responsible for everything the audience hears in the performance space, including music, sound effects, sonic textures, and soundscapes. These elements are created by the sound designer, or sourced from other sound professionals, such as a composer in the case of music. Pre-recorded music must be licensed from a legal entity that represents the artist's work. This can be the artist themselves, a publisher, record label, performing rights organization or music licensing company. 
[ 16 ] The theatre sound designer is also in charge of choosing and installing the sound system —speakers, sound desks, interfaces and convertors, playout/cueing software, microphones, radio mics, foldback, cables, computers, and outboard equipment like FX units and dynamics processors. [ 17 ] Modern audio technology has enabled theatre sound designers to produce flexible, complex, and inexpensive designs that can be easily integrated into live performance. The influence of film and television on playwriting is seeing plays being written increasingly with shorter scenes, which is difficult to achieve with scenery but easily conveyed with sound. The development of film sound design is giving writers and directors higher expectations and knowledge of sound design. Consequently, theatre sound design is widespread and accomplished sound designers commonly establish long-term collaborations with directors. Sound design for musicals often focuses on the design and implementation of a sound reinforcement system that will fulfill the needs of the production. If a sound system is already installed in the performance venue, it is the sound designer's job to tune the system for the best use for a particular production. Sound system tuning employs various methods including equalization , delay, volume, speaker and microphone placement, and in some cases, the addition of new equipment. In conjunction with the director and musical director, if any, the sound reinforcement designer determines the use and placement of microphones for actors and musicians. The sound reinforcement designer ensures that the performance can be heard and understood by everyone in the audience, regardless of the shape, size or acoustics of the venue, and that performers can hear everything needed to enable them to do their jobs. 
While sound design for a musical largely focuses on the artistic merits of sound reinforcement, many musicals, such as Into the Woods , also require significant sound scores (see Sound Design for Plays). Sound Reinforcement Design was recognized by the American Theatre Wing's Tony Awards with the Tony Award for Best Sound Design of a Musical until the 2014–15 season, [ 18 ] and the award was reinstated in the 2017–18 season. [ 19 ] Sound design for plays often involves the selection of music and sounds (sound score) for a production based on intimate familiarity with the play, and the design, installation, calibration and utilization of the sound system that reproduces the sound score. The sound designer for a play and the production's director work together to decide the themes and emotions to be explored. Based on this, the sound designer for plays, in collaboration with the director and possibly the composer, decides upon the sounds that will be used to create the desired moods. In some productions, the sound designer might also be hired to compose music for the play. The sound designer and the director usually work together to "spot" the cues in the play (i.e., decide when and where sound will be used in the play). Some productions might use music only during scene changes, whilst others might use sound effects. Likewise, a scene might be underscored with music, sound effects or abstract sounds that exist somewhere between the two. Some sound designers are accomplished composers, writing and producing music for productions as well as designing sound. Many sound designs for plays also require significant sound reinforcement (see Sound Design for Musicals). Sound Design for plays was recognized by the American Theatre Wing's Tony Awards with the Tony Award for Best Sound Design of a Play until the 2014–15 season, [ 18 ] and the award was reinstated in the 2017–18 season.
[ 19 ] In the contemporary music business, especially in the production of rock music , ambient music , progressive rock , and similar genres , the record producer and recording engineer play important roles in the creation of the overall sound (or soundscape ) of a recording, and less often, of a live performance. A record producer is responsible for extracting the best performance possible from the musicians and for making both musical and technical decisions about the instrumental timbres, arrangements, etc. On some projects, particularly in electronic music, artists and producers working in more conventional genres have sourced additional help from specialists, often credited as "sound designers", to contribute specific auditory effects, ambiences, etc. to the production. These specialists are usually better versed in, for example, electronic music composition and synthesizers than the other musicians on board. In the application of electroacoustic techniques (e.g. binaural sound) and sound synthesis for contemporary music or film music, a sound designer (often also an electronic musician) sometimes refers to an artist who works alongside a composer to realize the more electronic aspects of a musical production. This reflects a difference in expertise between composers and electronic musicians or sound designers: the latter specialise in electronic music techniques, such as sequencing and synthesizers, while the former are more experienced in writing music in a variety of genres. Since electronic music itself is quite broad in techniques and often separate from techniques applied in other genres, this kind of collaboration can be seen as natural and beneficial.
Notable examples of (recognized) sound design in music are the contributions of Michael Brook to the U2 album The Joshua Tree , George Massenburg to the Jennifer Warnes album Famous Blue Raincoat , Chris Thomas to the Pink Floyd album The Dark Side of the Moon , and Brian Eno to the Paul Simon album Surprise . In 1974, Suzanne Ciani started her own production company, Ciani/Musica, Inc., which became the #1 sound design music house in New York. [ 20 ] In fashion shows, the sound designer often works with the artistic director to create an atmosphere fitting the theme of a collection, commercial campaign or event. [ citation needed ] Sound is widely used in a variety of human–computer interfaces , in computer games and video games . [ 21 ] [ 22 ] Sound production for computer applications carries a few extra requirements, including re-usability, interactivity, and low memory and CPU usage, since most computational resources are usually devoted to graphics. Audio production should account for computational limits on sound playback with audio compression or voice-allocation systems. Sound design for video games requires proficient knowledge of audio recording and editing using a digital audio workstation , and an understanding of game audio integration using audio engine software, audio authoring tools, or middleware to integrate audio into the game engine. Audio middleware is a third-party toolset that sits between the game engine and the audio hardware. [ 23 ] Interactivity with computer sound can involve using a variety of playback systems or logic, using tools that allow the production of interactive sound (e.g. Max/MSP, Wwise). Implementation might require software or electrical engineering of the systems that modify sound or process user input. In interactive applications, a sound designer often collaborates with an engineer (e.g. a sound programmer) who is concerned with designing the playback systems and their efficiency.
Sound designers have been recognized by awards organizations for some time, and new awards have emerged more recently in response to advances in sound design technology and quality. The Motion Picture Sound Editors and the Academy of Motion Picture Arts and Sciences recognize the finest or most aesthetic sound design for a film with the Golden Reel Awards for Sound Editing in the film, broadcast, and game industries, and the Academy Award for Best Sound , respectively. In 2021, the 93rd Academy Awards merged Best Sound Editing and Best Sound Mixing into one general Best Sound category. In 2007, the Tony Award for Best Sound Design was created to honor the best sound design in American theatre on Broadway . [ 24 ] A number of North American theatrical award organizations also recognize sound designers; major British award organizations include the Olivier Awards . The Tony Awards retired the awards for Sound Design as of the 2014–2015 season, [ 25 ] then reinstated the categories in the 2017–18 season. [ 19 ]
https://en.wikipedia.org/wiki/Sound_design
Sound energy density or sound density is the sound energy per unit volume. The SI unit of sound energy density is the pascal (Pa), which is 1 kg⋅m −1 ⋅s −2 in SI base units or 1 joule per cubic metre (J/m 3 ). [ 1 ] : Section 2.3.4: Derived units, Table 4 Sound energy density, denoted w , is defined by w = pv / c , where p is the sound pressure, v is the particle velocity in the direction of propagation, and c is the speed of sound. The terms instantaneous energy density, maximum energy density, and peak energy density have meanings analogous to the related terms used for sound pressure. In speaking of average energy density, it is necessary to distinguish between the space average (at a given instant) and the time average (at a given point). The sound energy density level expresses a sound energy density relative to the reference value of 1 pPa (= 10 −12 pascals). [ 2 ] It is a logarithmic measure of the ratio of two sound energy densities. The unit of the sound energy density level is the decibel (dB), a non-SI unit accepted for use with the SI units. [ 1 ] : Chapter 4: Non-SI units that are accepted for use with the SI, Table 8 The sound energy density level, L ( E ), for a given sound energy density, E 1 , in pascals, is L ( E ) = 10 log 10 ( E 1 / E 0 ) dB, where E 0 is the standard reference sound energy density. [ 3 ]
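The level computation is a simple base-10 logarithmic ratio; a minimal sketch (assuming the usual 10·log10 decibel form, with the 1 pPa reference given above):

```python
import math

E0 = 1e-12  # reference sound energy density, 1 pPa = 10⁻¹² Pa (≡ J/m³)

def energy_density_level_db(E1):
    """Sound energy density level L(E) = 10·log10(E1/E0), in dB."""
    return 10 * math.log10(E1 / E0)

# An energy density of 10⁻⁶ Pa lies 60 dB above the reference
level = energy_density_level_db(1e-6)
```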
https://en.wikipedia.org/wiki/Sound_energy_density
Sound intensity , also known as acoustic intensity , is defined as the power carried by sound waves per unit area in a direction perpendicular to that area, also called the sound power density and the sound energy flux density . [ 2 ] The SI unit of intensity, which includes sound intensity, is the watt per square meter (W/m 2 ). One application is the measurement of noise as sound intensity in the air at a listener's location, a sound energy quantity. [ 3 ] Sound intensity is not the same physical quantity as sound pressure . Human hearing is sensitive to sound pressure, which is related to sound intensity. In consumer audio electronics, the level differences are called "intensity" differences, but sound intensity is a specifically defined quantity and cannot be sensed by a simple microphone. Sound intensity level is a logarithmic expression of sound intensity relative to a reference intensity. Sound intensity, denoted I , is defined by I = p v {\displaystyle \mathbf {I} =p\mathbf {v} } where p is the sound pressure and v is the particle velocity. Both I and v are vectors , which means that both have a direction as well as a magnitude. The direction of sound intensity is the average direction in which energy is flowing. The average sound intensity during time T is given by ⟨ I ⟩ = 1 T ∫ 0 T p ( t ) v ( t ) d t . {\displaystyle \langle \mathbf {I} \rangle ={\frac {1}{T}}\int _{0}^{T}p(t)\mathbf {v} (t)\,\mathrm {d} t.} For a plane wave [ citation needed ] , I = 2 π 2 ν 2 δ 2 ρ c {\displaystyle \mathrm {I} =2\pi ^{2}\nu ^{2}\delta ^{2}\rho c} where ν is the frequency of the sound, δ is the amplitude of the particle displacement, ρ is the density of the medium, and c is the speed of sound. For a spherical sound wave, the intensity in the radial direction as a function of distance r from the centre of the sphere is given by I ( r ) = P A ( r ) = P 4 π r 2 , {\displaystyle I(r)={\frac {P}{A(r)}}={\frac {P}{4\pi r^{2}}},} where P is the sound power and A ( r ) = 4 π r 2 is the surface area of a sphere of radius r . Thus sound intensity decreases as 1/ r 2 from the centre of the sphere: I ( r ) ∝ 1 r 2 . {\displaystyle I(r)\propto {\frac {1}{r^{2}}}.} This relationship is an inverse-square law .
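The spherical-spreading relation is easy to check numerically; a minimal sketch (the 0.1 W source power is an arbitrary illustrative value):

```python
import math

def spherical_intensity(power_w, r_m):
    """Sound intensity of a point source radiating power P
    uniformly through a sphere of radius r: I = P / (4·π·r²)."""
    return power_w / (4 * math.pi * r_m**2)

# Doubling the distance quarters the intensity (inverse-square law)
I1 = spherical_intensity(0.1, 1.0)
I2 = spherical_intensity(0.1, 2.0)
ratio = I1 / I2  # ≈ 4
```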
Sound intensity level (SIL) or acoustic intensity level is the level (a logarithmic quantity ) of the intensity of a sound relative to a reference value. It is denoted L I , expressed in nepers , bels , or decibels , and defined by [ 4 ] L I = 1 2 ln ⁡ ( I I 0 ) N p = log 10 ⁡ ( I I 0 ) B = 10 log 10 ⁡ ( I I 0 ) d B , {\displaystyle L_{I}={\frac {1}{2}}\ln \left({\frac {I}{I_{0}}}\right)\mathrm {Np} =\log _{10}\left({\frac {I}{I_{0}}}\right)\mathrm {B} =10\log _{10}\left({\frac {I}{I_{0}}}\right)\mathrm {dB} ,} where I is the sound intensity and I 0 is the reference sound intensity. The commonly used reference sound intensity in air is [ 5 ] I 0 = 1 p W / m 2 , {\displaystyle I_{0}=1~\mathrm {pW/m^{2}} ,} being approximately the lowest sound intensity hearable by an undamaged human ear under room conditions. The proper notations for sound intensity level using this reference are L I /(1 pW/m 2 ) or L I (re 1 pW/m 2 ) , but the notations dB SIL , dB(SIL) , or dBSIL are very common, even if they are not accepted by the SI. [ 6 ] The reference sound intensity I 0 is defined such that a progressive plane wave has the same value of sound intensity level (SIL) and sound pressure level (SPL), since I ∝ p 2 . {\displaystyle I\propto p^{2}.} The equality of SIL and SPL requires that I I 0 = p 2 p 0 2 , {\displaystyle {\frac {I}{I_{0}}}={\frac {p^{2}}{p_{0}^{2}}},} where p 0 = 20 μPa is the reference sound pressure. For a progressive spherical wave, p v = z 0 , {\displaystyle {\frac {p}{v}}=z_{0},} where v is the particle velocity and z 0 is the characteristic specific acoustic impedance . Thus, I 0 = p 0 2 I p 2 = p 0 2 p v p 2 = p 0 2 z 0 . {\displaystyle I_{0}={\frac {p_{0}^{2}I}{p^{2}}}={\frac {p_{0}^{2}pv}{p^{2}}}={\frac {p_{0}^{2}}{z_{0}}}.} In air at ambient temperature, z 0 = 410 Pa·s/m , hence the reference value I 0 = 1 pW/m 2 . [ 7 ] In an anechoic chamber which approximates a free field (no reflection) with a single source, measurements in the far field in SPL can be considered to be equal to measurements in SIL.
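The stated reference value can be checked directly from z 0 ; a minimal sketch (using z 0 = 410 Pa·s/m as given, and an arbitrary 0.02 Pa test pressure):

```python
import math

p0 = 20e-6   # reference sound pressure, Pa
z0 = 410.0   # characteristic specific acoustic impedance of air, Pa·s/m

# Reference intensity implied by the derivation: I0 = p0²/z0
I0 = p0**2 / z0               # ≈ 9.76e-13 W/m², rounded by convention to 1 pW/m²

def spl_db(p_rms):
    return 20 * math.log10(p_rms / p0)

def sil_db(intensity, I_ref=1e-12):
    return 10 * math.log10(intensity / I_ref)

# For a plane wave with I = p²/z0, SPL and SIL agree to within the rounding of I0
p = 0.02                      # arbitrary sound pressure, Pa
delta = spl_db(p) - sil_db(p**2 / z0)
```

The residual difference of about 0.1 dB is exactly the effect of rounding I 0 to 1 pW/m²; for practical purposes a plane wave has equal SPL and SIL with this reference.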
This fact is exploited to measure sound power in anechoic conditions. Sound intensity is defined as the time averaged product of sound pressure and acoustic particle velocity. [ 8 ] Both quantities can be directly measured by using a sound intensity p-u probe comprising a microphone and a particle velocity sensor , or estimated indirectly by using a p-p probe that approximates the particle velocity by integrating the pressure gradient between two closely spaced microphones. [ 9 ] Pressure-based measurement methods are widely used in anechoic conditions for noise quantification purposes. The bias error introduced by a p-p probe can be approximated by [ 10 ] I ^ n p − p ≃ I n − φ pe p rms 2 k Δ r ρ c = I n ( 1 − φ pe k Δ r p rms 2 / ρ c I r ) , {\displaystyle {\widehat {I}}_{n}^{p-p}\simeq I_{n}-{\frac {\varphi _{\text{pe}}\,p_{\text{rms}}^{2}}{k\Delta r\rho c}}=I_{n}\left(1-{\frac {\varphi _{\text{pe}}}{k\Delta r}}{\frac {p_{\text{rms}}^{2}/\rho c}{I_{r}}}\right),} where I n {\displaystyle I_{n}} is the “true” intensity (unaffected by calibration errors), I ^ n p − p {\displaystyle {\hat {I}}_{n}^{p-p}} is the biased estimate obtained using a p-p probe, p rms {\displaystyle p_{\text{rms}}} is the root-mean-squared value of the sound pressure, k {\displaystyle k} is the wave number, ρ {\displaystyle \rho } is the density of air, c {\displaystyle c} is the speed of sound and Δ r {\displaystyle \Delta r} is the spacing between the two microphones. This expression shows that phase calibration errors are inversely proportional to frequency and microphone spacing and directly proportional to the ratio of the mean square sound pressure to the sound intensity. If the pressure-to-intensity ratio is large then even a small phase mismatch will lead to significant bias errors. 
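The practical consequence of this expression can be illustrated numerically; in the sketch below the phase mismatch, frequency, spacing and field values are arbitrary illustrative assumptions, not values from any standard:

```python
import math

def pp_intensity_estimate(I_true, p_rms, phase_err_rad, freq_hz,
                          spacing_m, rho=1.21, c=343.0):
    """Biased p-p intensity estimate per the expression above:
    I_hat ≈ I − φ·p_rms² / (k·Δr·ρ·c)."""
    k = 2 * math.pi * freq_hz / c          # wave number
    return I_true - phase_err_rad * p_rms**2 / (k * spacing_m * rho * c)

phi = math.radians(0.3)                    # assumed 0.3° phase mismatch
f, dr = 250.0, 0.012                       # 250 Hz, 12 mm microphone spacing

# Plane-wave-like field: p_rms²/(ρc) equals the intensity → small bias
I = 1e-6
p2 = I * 1.21 * 343.0
err_db = 10 * math.log10(pp_intensity_estimate(I, math.sqrt(p2), phi, f, dr) / I)

# Reactive field: mean square pressure 10 dB "hotter" than the intensity → large bias
err_db_reactive = 10 * math.log10(
    pp_intensity_estimate(I, math.sqrt(10 * p2), phi, f, dr) / I)
```

With these illustrative numbers, the same 0.3° mismatch costs a fraction of a decibel in a propagating field but more than 10 dB once the pressure-to-intensity ratio rises by 10 dB.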
In practice, sound intensity measurements cannot be performed accurately when the pressure-intensity index is high, which limits the use of p-p intensity probes in environments with high levels of background noise or reflections. On the other hand, the bias error introduced by a p-u probe can be approximated by [ 10 ] I ^ n p − u = 1 2 Re ⁡ { P V ^ n ∗ } = 1 2 Re ⁡ { P V n ∗ e − j φ ue } ≃ I n + φ ue J n , {\displaystyle {\hat {I}}_{n}^{p-u}={\frac {1}{2}}\operatorname {Re} \left\{{P{\hat {V}}_{n}^{*}}\right\}={\frac {1}{2}}\operatorname {Re} \left\{{PV_{n}^{*}e^{-j\varphi _{\text{ue}}}}\right\}\simeq I_{n}+\varphi _{\text{ue}}J_{n}\,,} where I ^ n p − u {\displaystyle {\hat {I}}_{n}^{p-u}} is the biased estimate obtained using a p-u probe, P {\displaystyle P} and V n {\displaystyle V_{n}} are the Fourier transforms of sound pressure and particle velocity, J n {\displaystyle J_{n}} is the reactive intensity and φ ue {\displaystyle \varphi _{\text{ue}}} is the p-u phase mismatch introduced by calibration errors. Therefore, the phase calibration is critical when measurements are carried out under near-field conditions, but not so relevant if the measurements are performed in the far field. [ 10 ] The “reactivity” (the ratio of the reactive to the active intensity) indicates whether this source of error is of concern or not. Compared to pressure-based probes, p-u intensity probes are unaffected by the pressure-to-intensity index, enabling the estimation of propagating acoustic energy in unfavorable testing environments provided that the distance to the sound source is sufficient.
https://en.wikipedia.org/wiki/Sound_intensity
A sound level meter (also called a sound pressure level (SPL) meter) is used for acoustic measurements. It is commonly a hand-held instrument with a microphone . The best type of microphone for sound level meters is the condenser microphone, which combines precision with stability and reliability. [ 1 ] The diaphragm of the microphone responds to changes in air pressure caused by sound waves; this is why the instrument is sometimes referred to as a sound pressure level (SPL) meter. This movement of the diaphragm, i.e. the sound pressure (unit pascal, Pa ), is converted into an electrical signal (unit volt, V ). When describing sound in terms of sound pressure, a logarithmic conversion is usually applied and the sound pressure level is stated instead, in decibels (dB), with 0 dB SPL equal to 20 micropascals . A microphone is characterised by the voltage it produces when a known, constant root mean square sound pressure is applied; this is known as the microphone sensitivity. The instrument needs to know the sensitivity of the particular microphone being used. Using this information, the instrument is able to accurately convert the electrical signal back to sound pressure, and display the resulting sound pressure level (unit decibel, dB ). Sound level meters are commonly used in noise pollution studies for the quantification of different kinds of noise, especially for industrial, environmental, mining and aircraft noise . [ 2 ] [ 3 ] The current international standard that specifies sound level meter functionality and performance is IEC 61672-1:2013. However, the reading from a sound level meter does not correlate well to human-perceived loudness, which is better measured by a loudness meter. Specific loudness is a compressive nonlinearity that varies with level and frequency, and these metrics can be calculated in a number of different ways.
[ 4 ] The world's first hand-held and transistorized sound level meter was developed by the Danish company Brüel & Kjær and released in 1960. [ 5 ] In 1969, a group of university researchers from California founded Pulsar Instruments Inc., which became the first company to display sound exposure times on the scale of a sound level meter, as well as the sound level. This was to comply with the 1969 Walsh-Healey Act, which demanded that the noise in US workplaces should be controlled. [ 6 ] In 1980, Britain's Cirrus Research introduced the world's first handheld sound level meter to provide integrated L eq and sound exposure level (SEL) measurements. [ 7 ] The IEC 61672-1 specifies "three kinds of sound measuring instruments". [ 8 ] They are the "conventional" sound level meter, the integrating-averaging sound level meter, and the integrating sound level meter. The standard sound level meter [ 9 ] can be called an exponentially averaging sound level meter, as the AC signal from the microphone is converted to DC by a root-mean-square (RMS) circuit and thus it must have a time constant of integration, today referred to as the time-weighting. Three of these time-weightings have been internationally standardized: 'S' (1 s), originally called Slow; 'F' (125 ms ), originally called Fast; and 'I' (35 ms), originally called Impulse. Their names were changed in the 1980s to be the same in any language. I-time-weighting is no longer in the body of the standard because it has little real correlation with the impulsive character of noise events. The output of the RMS circuit is linear in voltage and is passed through a logarithmic circuit to give a readout linear in decibels (dB). This is 20 times the base-10 logarithm of the ratio of a given root-mean-square sound pressure to the reference sound pressure; the root-mean-square sound pressure is obtained with a standard frequency weighting and standard time weighting.
The reference pressure is set by international agreement to be 20 micropascals for airborne sound. It follows that the decibel is, in a sense, not a unit; it is simply a dimensionless ratio, in this case the ratio of two pressures. An exponentially averaging sound level meter, which gives a snapshot of the current noise level, is of limited use for hearing damage risk measurements; an integrating or integrating-averaging meter is usually mandated. An integrating meter simply integrates (or in other words 'sums') the frequency-weighted noise to give sound exposure, and the metric used is pressure squared times time, often Pa²·s, but Pa²·h is also used. However, because sound level was historically described in decibels, the exposure is most often described in terms of sound exposure level (SEL), the logarithmic conversion of sound exposure into decibels. A common variant of the sound level meter is a noise dosemeter (dosimeter in American English). However, this is now formally known as a personal sound exposure meter (PSEM) and has its own international standard, IEC 61252:1993. A noise dosimeter (American) or noise dosemeter (British) is a specialized sound level meter intended specifically to measure the noise exposure of a person integrated over a period of time, usually to comply with health and safety regulations such as the Occupational Safety and Health (OSHA) 29 CFR 1910.95 Occupational Noise Exposure Standard [ 10 ] or EU Directive 2003/10/EC. This is normally intended to be a body-worn instrument and thus has a relaxed technical requirement, as a body-worn instrument—because of the presence of the body—has a poorer overall acoustic performance. A PSEM gives a read-out based on sound exposure, usually in Pa²·h, and the older 'classic' dosimeters giving the metric of 'percentage dose' are no longer used in most countries.
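These decibel conversions are simple to state in code; a minimal sketch using the references given above (20 µPa for pressure and, for sound exposure level, (20 µPa)² · 1 s):

```python
import math

P0 = 20e-6                     # reference sound pressure, 20 µPa

def spl_db(p_rms):
    """Sound pressure level: 20·log10(p/p0)."""
    return 20 * math.log10(p_rms / P0)

def sel_db(exposure_pa2_s):
    """Sound exposure level: 10·log10(E / (p0² · 1 s))."""
    return 10 * math.log10(exposure_pa2_s / (P0**2 * 1.0))

# 1 Pa RMS is about 94 dB SPL (the level of a typical acoustic calibrator);
# an exposure of 1 Pa²·h is passed in as Pa²·s
calib = spl_db(1.0)            # ≈ 94.0 dB
exposure = sel_db(1.0 * 3600)  # ≈ 129.5 dB
```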
The problem with "%dose" is that it relates to the regulatory limit in force, and thus any device can become obsolete if the "100%" value is changed by local laws. Traditionally, noise dosemeters were relatively large devices with a microphone mounted near the ear and a cable going to the instrument body, itself usually belt-worn. These devices had several issues, mainly the reliability of the cable and the disturbance to the user's normal work mode caused by the presence of the cable. In 1997, following a UK research grant, an EU patent was issued for the first of a range of devices that were so small that they resembled a radiation badge; no cable was needed, as the whole unit could be fitted near the ear. The UK designer and manufacturer Cirrus Research introduced the doseBadge personal noise dosimeter , the world's first truly wireless noise dosimeter. [ 7 ] Today these devices measure not only simple noise dose; some even have four separate dosemeters, each with many of the functions of a full-sized sound level meter, including, in the latest models, full octave band analysis. IEC standards divide sound level meters into two "classes". Sound level meters of the two classes have the same functionality, but different tolerances for error. Class 1 instruments have a wider frequency range and a tighter tolerance than a lower-cost Class 2 unit. This applies to both the sound level meter itself as well as the associated calibrator. Most national standards permit the use of "at least a Class 2 instrument". For many measurements, it is not necessary to use a Class 1 unit; these are best employed for research and law enforcement. Similarly, the American National Standards Institute (ANSI) specifies sound level meters as three different Types 0, 1 and 2.
These are described, as follows, in the Occupational Safety and Health OSHA Technical Manual TED01-00-015, Chapter 5, OSHA Noise and Hearing Conservation, Appendix III:A: [ 11 ] "These ANSI standards set performance and accuracy tolerances according to three levels of precision: Types 0, 1, and 2. Type 0 is used in laboratories, Type 1 is used for precision measurements in the field, and Type 2 is used for general-purpose measurements. For compliance purposes, readings with an ANSI Type 2 sound level meter and dosimeter are considered to have an accuracy of ±2 dBA, while a Type 1 instrument has an accuracy of ±1 dBA. A Type 2 meter is the minimum requirement by OSHA for noise measurements and is usually sufficient for general-purpose noise surveys. The Type 1 meter is preferred for the design of cost-effective noise controls. For unusual measurement situations, refer to the manufacturer's instructions and appropriate ANSI standards for guidance in interpreting instrument accuracy." Labels used to describe sound and noise level values are defined in the IEC standard 61672-1:2013. [ 12 ] For labels, the first letter is always an L . This stands for Level , as in the sound pressure level measured through a microphone or the electronic signal level measured at the output from an audio component, such as a mixing desk. Measurement results depend on the frequency weighting (how the sound level meter responds to different sound frequencies) and the time weighting (how the sound level meter reacts to changes in sound pressure with time) applied. [ 1 ] The second letter indicates the frequency weighting. "Pattern approved" sound level meters typically offer noise measurements with A, C and Z frequency weighting. [ 13 ] Z-weighting represents the sound pressure equally at all frequencies. A-weighting weights lower and higher frequencies much less, and has a slight boost in the mid-range, representing the sensitivity of normal human hearing at low (quiet) levels.
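For reference, the analogue A-weighting response has a closed analytical form, given in IEC 61672 as a ratio of pole terms normalised to 0 dB at 1 kHz; a sketch:

```python
import math

def a_weight_db(f):
    """A-weighting relative response in dB (IEC 61672 analogue definition),
    normalised so that A(1000 Hz) = 0 dB."""
    def ra(f):
        f2 = f * f
        return (12194.0**2 * f2 * f2) / (
            (f2 + 20.6**2)
            * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
            * (f2 + 12194.0**2))
    return 20 * math.log10(ra(f) / ra(1000.0))

# Low frequencies are strongly attenuated; the presence region is slightly boosted
low = a_weight_db(100.0)     # ≈ -19.1 dB
mid = a_weight_db(2500.0)    # slightly positive, ≈ +1.3 dB
```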
C-weighting, more sensitive to the lower frequencies, represents what humans hear when the sound is loud (near 100 dB SPL). IEC 61672-1:2013 mandates the inclusion of an A -weighting filter in all sound level meters, and also describes C and Z (zero) frequency weightings. The older B and D frequency weightings are now obsolete and are no longer described in the standard. In almost all countries, the use of A-weighting is mandated for the protection of workers against noise-induced hearing loss. The A-weighting curve was based on the historical equal-loudness contours , and while arguably A-weighting is no longer the ideal frequency weighting on purely scientific grounds, it is nonetheless the legally required standard for almost all such measurements and has the huge practical advantage that old data can be compared with new measurements. It is for these reasons that A-weighting is the only weighting mandated by the international standard, the frequency weightings 'C' and 'Z' being options. Originally, the A-weighting was only meant for quiet sounds in the region of 40 dB sound pressure level (SPL), but is now mandated for all levels. C-weighting is however still used in the measurement of the peak value of a noise in some legislation, but B-weighting (a halfway house between 'A' and 'C') has almost no practical use. D-weighting was designed for use in measuring aircraft noise when non-bypass jets were being measured; after the demise of Concorde, these are all military types. For all civil aircraft noise measurements, A-weighting is used, as is mandated by the ISO and ICAO standards. If the third letter is F , S or I , this represents the time weighting , with F = fast, S = slow, I = impulse. [ 14 ] Time weighting is applied so that levels measured are easier to read on a sound level meter. The time weighting damps sudden changes in level, thus creating a smoother display. The graph indicates how this works.
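The F and S behaviours amount to exponential averaging of the squared sound pressure; a minimal sketch, assuming an ideal first-order averager and a step input from 50 dB to 80 dB:

```python
import math

P0_SQ = (20e-6) ** 2                      # (20 µPa)², reference

def time_weighted_levels(p_sq, dt, tau):
    """Exponentially time-weighted sound level, in dB, for a
    sampled squared-pressure signal with time constant tau (seconds)."""
    alpha = math.exp(-dt / tau)
    m = p_sq[0]
    out = []
    for x in p_sq:
        m = alpha * m + (1 - alpha) * x   # first-order averaging of p²
        out.append(10 * math.log10(m / P0_SQ))
    return out

# 1 s at 50 dB, then a step up to 80 dB, sampled at 100 Hz
dt = 0.01
signal = [10 ** 5.0 * P0_SQ] * 100 + [10 ** 8.0 * P0_SQ] * 1000

fast = time_weighted_levels(signal, dt, tau=0.125)   # 'F'
slow = time_weighted_levels(signal, dt, tau=1.0)     # 'S'

# 0.6 s after the step, 'F' has effectively settled at 80 dB while 'S' lags
f_at_0p6 = fast[100 + 60]
s_at_0p6 = slow[100 + 60]
```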
In this example, the input signal suddenly increases from 50 dB to 80 dB, stays there for 6 seconds, then drops back suddenly to the initial level. A slow measurement (yellow line) will take approximately 5 seconds (attack time) to reach 80 dB and around 6 seconds (decay time) to drop back down to 50 dB. S is appropriate when measuring a signal that fluctuates a lot. [ citation needed ] A fast measurement (green line) is quicker to react. It will take approximately 0.6 seconds to reach 80 dB and just under 1 second to drop back down to 50 dB. F may be more suitable where the signal is less impulsive. [ citation needed ] The decision to use fast or slow is often reached by what is prescribed in a standard or a law. However, the following can be used as a guideline: the slow characteristic is mainly used in situations where the reading with the fast response fluctuates too much (more than about 4 dB) to give a reasonably well-defined value. Modern digital displays largely overcome the problem of fluctuating analogue meters by indicating the maximum r.m.s. value for the preceding second. [ 15 ] An impulse measurement (blue line) will take approximately 0.3 seconds to reach 80 dB and over 9 seconds to drop back down to 50 dB. The impulse response, I , can be used in situations where there are sharp impulsive noises to be measured, such as fireworks or gunshots. [ citation needed ] Here, eq = equivalent. Equivalent values are averaged over a longer time and thus easier to read on a display than sound levels with F, S or I time weighting. In these graphs of sound level over time, the area under the blue curve represents the energy. The horizontal red line, drawn to represent the same area under the blue curve, gives the LAeq: the equivalent value, or average, of the energy over the entire graph. LAeq is not always a straight line.
If the LAeq is plotted as the equivalent from the beginning of the graph to each of the measurement points, the plot is shown in the second graph. Sound exposure level—in decibels—is not much used in industrial noise measurement. Instead, the time-averaged value is used. This time-average sound level, usually called the 'equivalent continuous sound level', has the formal symbol L AT , as described in paragraph 3.9 "Definitions" of IEC 61672-1, where many correct formal symbols and their common abbreviations are given. These mainly follow the formal ISO acoustic definitions. However, for mainly historical reasons, L AT is commonly referred to as L eq . [ 16 ] Formally, L AT is 20 times the base-10 logarithm of the ratio of a root-mean-square A-weighted sound pressure during a stated time interval to the reference sound pressure, and there is no time constant involved. To measure L AT an integrating-averaging meter is needed; this in concept takes the sound exposure, divides it by time, and then takes the logarithm of the result. An important variant of overall L AT is "short L eq ", where very short L eq values are taken in succession, say at 1/8 second intervals, each being stored in a digital memory. These data elements can either be transmitted to another unit or be recovered from the memory and re-constituted into almost any conventional metric long after the data has been acquired. This can be done using either dedicated programs or standard spreadsheets. Short L eq has the advantage that as regulations change, old data can be re-processed to check if a new regulation is met. It also permits data to be converted from one metric to another in some cases. Today almost all fixed airport noise monitoring systems, which are in concept just complex sound level meters, use short L eq as their metric, as a steady stream of the digital one-second L eq values can be transmitted via telephone lines or the Internet to a central display and processing unit.
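Re-constituting stored short- L eq elements into a longer-term level means averaging the underlying mean-square (energy-like) quantities, never the decibel values themselves; a minimal sketch for equal-duration elements:

```python
import math

def combine_leq(levels_db):
    """Overall Leq of consecutive equal-duration short-Leq elements:
    energy-average the elements, then convert back to decibels."""
    mean_sq = sum(10 ** (L / 10.0) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_sq)

# One hour at 85 dB followed by one hour at 95 dB averages to about
# 92.4 dB, not 90 dB: the louder hour dominates the energy
overall = combine_leq([85.0, 95.0])
```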
Short L eq is a feature of most commercial integrating sound level meters—although some manufacturers give it many different names. Short L eq is a very valuable method for acoustic data storage; initially a concept of the French Government's Laboratoire National d'Essais, it has now become the most common method of storing and displaying a true time history of the noise in professional commercial sound level meters. The alternative method, which is to generate a time history by storing and displaying samples of the exponential sound level, displays too many artifacts of the sound level meter to be as valuable, and such sampled data cannot be readily combined to form an overall set of data. Until 2003 there were separate standards for exponential and linear integrating sound level meters (IEC 60651 and IEC 60804—both now withdrawn), but since then the combined standard IEC 61672 has described both types of meter. For short L eq to be valuable, the manufacturer must ensure that each separate L eq element fully complies with IEC 61672. If the words max or min appear in the label, this simply represents the maximum or minimum value measured over a certain period of time. Most national regulations also call for the absolute peak value to be measured to protect workers' hearing against sudden large pressure peaks, using either 'C' or 'Z' frequency weighting. [ citation needed ] 'Peak sound pressure level' should not be confused with 'MAX sound pressure level'. 'Max sound pressure level' is simply the highest RMS reading a conventional sound level meter gives over a stated period for a given time-weighting (S, F, or I) and can be many decibels less than the peak value. [ citation needed ] In the European Union, the maximum permitted value of the peak sound level is 140 dB(C) [ citation needed ] and this equates to 200 Pa pressure. The symbol for the A -frequency and S -time weighted maximum sound level is LAS max .
For the C -frequency weighted peak it is LC pk or L C,peak . The following International Standards define sound level meters, PSEM and associated devices. Most countries' national standards follow these very closely, the exception being the US. In many cases the equivalent European standard, agreed by the EU, is designated for example EN 61672, and the UK national standard then becomes BS EN 61672. These International Standards were prepared by IEC technical committee 29: Electroacoustics, in cooperation with the International Organization of Legal Metrology (OIML). Until 2003 there were separate standards for exponential and linear integrating sound level meters, but since then IEC 61672 has described both types. The classic exponential meter was originally described in IEC 123 for 'industrial' meters, followed by IEC 179 for 'precision' meters. Both of these were replaced by IEC 651, later renamed IEC 60651, while the linear integrating meters were initially described by IEC 804, later renamed IEC 60804. Both IEC 60651 and 60804 included four accuracy classes, called "types". In IEC 61672 these were reduced to just two accuracy classes, 1 and 2. New in IEC 61672 are a minimum 60 dB linear span requirement and Z -frequency-weighting, with a general tightening of limit tolerances, as well as the inclusion of maximum allowable measurement uncertainties for each described periodic test. The periodic-testing part of the standard (IEC 61672-3) also requires that manufacturers provide the testing laboratory with correction factors to allow laboratory electrical and acoustic testing to better mimic free-field responses. Each correction used should be provided with its uncertainties, [ 17 ] which need to be accounted for in the testing laboratory's final measurement uncertainty budget. This makes it unlikely that a sound level meter designed to the older 60651 and 60804 standards will meet the requirements of IEC 61672:2013.
These 'withdrawn' standards should no longer be used, especially for any official purchasing requirements, as they have significantly poorer accuracy requirements than IEC 61672. Combatants in every branch of the United States' military are at risk for auditory impairments from steady-state or impulse noise . While applying double hearing protection helps prevent auditory damage, it may compromise effectiveness by isolating the user from his or her environment. With hearing protection on, a soldier is less likely to be aware of the noise of his or her own movements, which can alert the enemy to the unit's presence. Hearing protection devices (HPDs) could also require higher volume levels for communication, negating their purpose. [ 18 ] A problem in selecting a sound level meter is: how do you know if it complies with its claimed standard? This is a difficult question, and IEC 61672 part 2 [ 24 ] tries to answer it with the concept of "pattern approval". A manufacturer has to supply instruments to a national laboratory, which tests one of them and, if it meets its claims, issues a formal Pattern Approval certificate. [ 25 ] In Europe, the most common approval is often considered to be that from the PTB in Germany ( Physikalisch-Technische Bundesanstalt ). If a manufacturer cannot show at least one model in its range that has such approval, it is reasonable to be wary, but the cost of this approval militates against any manufacturer having its entire range approved. Inexpensive sound level meters (under $200) are unlikely to have Pattern Approval and may produce incorrect measurement results. Even the most accurate approved sound level meter must be regularly checked for sensitivity—what most people loosely call 'calibration'. The procedures for periodic testing are defined within IEC 61672-3:2013.
To ensure accuracy in periodic testing, procedures should be carried out by a facility that can produce results traceable to the International Laboratory Accreditation Cooperation (ILAC) or other local ILAC signatories. For a simple single-level, single-frequency check, units consisting of a computer-controlled generator with additional sensors to correct for humidity, temperature, battery voltage and static pressure can be used. The output of the generator is fed to a transducer in a half-inch cavity into which the sound level meter microphone is inserted. The sound level generated is 94 dB, which corresponds to a root-mean-square sound pressure of 1 pascal, at a frequency of 1 kHz, where all the frequency weightings have the same sensitivity. For a complete sound level meter check, the periodic testing outlined in IEC 61672-3:2013 should be carried out. These tests excite the sound level meter across its entire frequency and dynamic range, ensuring compliance with the design goals defined in IEC 61672-1:2013. Sound level meters are also divided into two types by "the Atlantic divide". Sound level meters meeting the US American National Standards Institute (ANSI) specifications [ 26 ] cannot usually meet the corresponding International Electrotechnical Commission (IEC) specifications [ 27 ] at the same time, as the ANSI standard describes instruments that are calibrated to a randomly incident wave, i.e. a diffuse sound field, while internationally meters are calibrated to a free-field wave, that is, sound coming from a single direction. Further, US dosimeters have an exchange rate of level against time where every 5 dB increase in level halves the permitted exposure time, whereas in the rest of the world a 3 dB increase in level halves the permitted exposure time. The 3 dB doubling method is called the "equal energy" rule, and there is no possible way of converting data taken under one rule to be used under the other.
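The effect of the exchange rate on permitted exposure time can be sketched as follows. The 85 dB and 90 dB criterion levels and the 8-hour base shift used below are typical of the 3 dB and 5 dB regimes respectively, but are assumptions for illustration, not values from this article:

```python
def permitted_hours(level_db, criterion_db, exchange_rate_db, base_hours=8.0):
    """Permitted daily exposure: halves for every `exchange_rate_db`
    increase above the criterion level (assumed 8-hour base shift)."""
    return base_hours / 2 ** ((level_db - criterion_db) / exchange_rate_db)

# 3 dB "equal energy" rule with an assumed 85 dB criterion:
print(permitted_hours(91, 85, 3))   # 2.0 hours
# 5 dB rule with an assumed 90 dB criterion:
print(permitted_hours(100, 90, 5))  # 2.0 hours
```

The same measured level can thus yield very different permitted durations under the two rules, which is why data taken under one rule cannot simply be re-used under the other.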
Despite these differences, many developing countries refer to both US and international specifications within one instrument in their national regulations. Because of this, many commercial PSEM have dual channels with 3 and 5 dB doubling, some even having 4 dB for the U.S. Air Force. Some advanced sound level meters can also include reverberation time (RT60) measurement capabilities—a measure of the time required for sound to "fade away" in an enclosed area after the source has stopped. Measurements can be made using the integrated impulse response or interrupted noise methods. Such sound level meters should comply with the latest ISO 3382-2 and ASTM E2235-04 measurement standards. Measuring building acoustics requires a signal generator that provides pink or white noise through an amplifier and omnidirectional speakers. The omnidirectional speaker, or sound source, should provide an equal dispersion of sound throughout the room; to achieve accurate measurements, sound should radiate evenly. This can be achieved with a spherical distribution, arranging 12 speakers in a so-called dodecahedral configuration, as illustrated by the Brüel & Kjær OmniPower Sound Source Type 4292. All speakers should be connected in a series–parallel network, to achieve in-phase operation and impedance matching to the amplifier. Reverberation-time measurements are often used to calculate wall/partition sound insulation or to quantify and validate building acoustics. [ 28 ] Some applications require the ability to monitor noise continuously on a permanent or semi-permanent basis. Some manufacturers offer permanent and semi-permanent noise monitoring stations for this purpose. [ 30 ] [ 31 ] Such monitoring stations are typically based on a sound level meter at their heart, with added capabilities such as remote communication, GPS, and weather stations. These can often also be powered using solar power.
Applications for such monitoring stations include airport noise, construction noise, mining noise, traffic noise, rail noise, community noise, wind farm noise, industrial noise, etc. Modern monitoring stations can also offer remote communication capabilities using cellular modems, WiFi networks or direct LAN wires. Such devices allow for real-time alerts and notifications via email and text messages upon exceeding a certain dB level. Systems can also remotely email reports on a daily, weekly or monthly basis. Real-time data publication is often also desired, which can be achieved by pushing data to a website. [ 32 ] [ 33 ] The ubiquity of smartphones , their constant network connectivity, their built-in geographic information system functionality and their user-interactivity features present a great opportunity to revolutionize the way we look at noise, its measurement, and its effects on hearing and overall health. The ability to acquire and display real-time noise exposure data raises people's awareness about their work (and off-work) environment and allows them to make informed decisions about hearing hazards and overall well-being. The National Institute for Occupational Safety and Health (NIOSH) conducted a pilot study to select and characterize the functionality and accuracy of smartphone sound measurement applications (apps) as an initial step in a broader effort to determine whether these apps can be relied on to conduct participatory noise monitoring studies in the workplace. [ 35 ] Researchers reported that challenges remain with using smartphones to collect and document noise exposure data, due to privacy concerns around the collection of personal data, participants' motivation to take part in such studies, corrupted or bad data, and the ability to store the data collected. Researchers concluded that smartphone sound apps can serve to empower workers and help them make educated decisions about their workplace environments.
[ 36 ] Although most smartphone sound measurement apps are not accurate enough to be used for legally required measurements, the NIOSH Sound Level Meter app met the requirements of the IEC 61672/ANSI S1.4 sound level meter standards (Electroacoustics - Sound Level Meters - Part 3: Periodic Tests). [ 37 ] Calibrated microphones greatly enhance the accuracy and precision of smartphone-based noise measurements. To calibrate sound level meter apps, one must use an acoustical calibrator rather than relying on pre-defined profiles. This study indicated that the gap between professional instruments and smartphone-based apps is narrowing. [ 38 ] Healthy Hearing, [ 39 ] an organization dedicated to hearing health, reported on the top smartphone sound level meter apps: [ 40 ] NIOSH Sound Level Meter, [ 41 ] Decibel X, [ 42 ] and Too Noisy Pro. [ 43 ]
https://en.wikipedia.org/wiki/Sound_level_meter
SOT is an acronym for the phrase sound on tape . It refers to any audio recorded on analog or digital video formats. It is used in scriptwriting for television productions and filmmaking to indicate the portions of the production that will use room tone or other audio from the time of recording, as opposed to audio recorded later (studio voice-over , Foley , etc.). [ 1 ] In broadcast journalism , SOT is generally considered to be audio captured from an individual who is on camera, such as an interviewee, and may also be referred to as a soundbite . [ 2 ]
https://en.wikipedia.org/wiki/Sound_on_tape
Sound power or acoustic power is the rate at which sound energy is emitted, reflected , transmitted or received, per unit time. [ 1 ] It is defined [ 2 ] as "through a surface, the product of the sound pressure , and the component of the particle velocity , at a point on the surface in the direction normal to the surface, integrated over that surface." The SI unit of sound power is the watt (W). [ 1 ] It relates to the power of the sound force on a surface enclosing a sound source, in air. For a sound source, unlike sound pressure, sound power is neither room-dependent nor distance-dependent. Sound pressure is a property of the field at a point in space, while sound power is a property of a sound source, equal to the total power emitted by that source in all directions. Sound power passing through an area is sometimes called sound flux or acoustic flux through that area. Regulations often specify a method for measurement [ 3 ] that integrates sound pressure over a surface enclosing the source. L WA specifies the power delivered to that surface in decibels relative to one picowatt. Devices (e.g., a vacuum cleaner) often have labeling requirements and maximum amounts they are allowed to produce. The A-weighting scale is used in the calculation as the metric is concerned with the loudness as perceived by the human ear. Measurements [ 4 ] in accordance with ISO 3744 are taken at 6 to 12 defined points around the device in a hemi-anechoic space. The test environment can be located indoors or outdoors. The required environment is on hard ground in a large open space or hemi-anechoic chamber (free-field over a reflecting plane.) Here is a table of some examples, from an on-line source. 
[ 5 ] For omnidirectional point sources in free space, sound power in L WA is equal to sound pressure level in dB above 20 micropascals at a distance of 0.2821 m. [ 6 ] Sound power, denoted P , is defined by [ 8 ] P = f · v = A p v , where f is the sound force, v the particle velocity, A the area of the surface, and p the sound pressure on it. In a medium, the sound power is given by P = A p 2 cos θ / ( ρ c ), where A is the area of the surface, p the RMS sound pressure, ρ the mass density of the medium, c the sound speed, and θ the angle between the direction of propagation and the normal to the surface. For example, a sound at SPL = 85 dB or p = 0.356 Pa in air ( ρ = 1.2 kg⋅m −3 and c = 343 m⋅s −1 ) through a surface of area A = 1 m 2 normal to the direction of propagation ( θ = 0°) has a sound energy flux P = 0.3 mW . This is the parameter one would be interested in when converting noise back into usable energy, along with any losses in the capturing device. Sound power is related to sound intensity I by P = A I . Sound power is related to sound energy density w by P = A c w , where c is the speed of sound. Sound power level (SWL) or acoustic power level is a logarithmic measure of the power of a sound relative to a reference value. Sound power level, denoted L W and measured in dB , [ 9 ] is defined by [ 10 ] L W = 10 log 10 ( P / P 0 ) dB, where P is the sound power and P 0 the reference sound power. The commonly used reference sound power in air is P 0 = 1 pW. [ 11 ] The proper notations for sound power level using this reference are L W /(1 pW) or L W (re 1 pW) , but the suffix notations dB SWL , dB(SWL) , dBSWL, or dB SWL are very common, even if they are not accepted by the SI. [ 12 ] The reference sound power P 0 is defined as the sound power with the reference sound intensity I 0 = 1 pW/m 2 passing through a surface of area A 0 = 1 m 2 , hence the reference value P 0 = 1 pW . The generic calculation of sound power level from sound pressure level is L W = L p + 10 log 10 ( A S / A 0 ) dB, where A S {\displaystyle {A_{S}}} defines the area of a surface that wholly encompasses the source. This surface may be any shape, but it must fully enclose the source. In the case of a sound source located in free field positioned over a reflecting plane (i.e.
the ground), in air at ambient temperature, the sound power level at distance r from the sound source is approximately related to sound pressure level (SPL) by [ 13 ] L W = L p + 10 log 10 ( 2π r 2 / A 0 ) dB, where L p is the SPL measured at distance r and A 0 = 1 m 2 . Derivation of this equation: For a progressive spherical wave, p / v = z 0 , where z 0 is the characteristic specific acoustic impedance . Consequently, I = p v = p 2 / z 0 , and since by definition I 0 = p 0 2 / z 0 , where p 0 = 20 μPa is the reference sound pressure, L W = 10 log 10 ( P / P 0 ) = 10 log 10 ( 2π r 2 I / ( A 0 I 0 ) ) = L p + 10 log 10 ( 2π r 2 / A 0 ) dB. The sound power estimated practically does not depend on distance. The sound pressure used in the calculation may be affected by distance due to viscous effects in the propagation of sound, unless this is accounted for.
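A small worked sketch: the first part reproduces the 85 dB / 1 m² flux example from the text, and the second evaluates the common hemispherical approximation L W ≈ L p + 10 log10(2πr²/A0) for a source over a reflecting plane (function names are illustrative):

```python
import math

rho, c = 1.2, 343.0           # air density (kg/m3) and speed of sound (m/s)
p0 = 20e-6                    # reference sound pressure, Pa

# The text's example: SPL = 85 dB through 1 m2 normal to propagation
p = p0 * 10 ** (85.0 / 20)    # RMS pressure, about 0.356 Pa
P = 1.0 * p ** 2 / (rho * c)  # sound power through the surface, W
print(round(P * 1000, 2))     # 0.31 mW; the text rounds this to 0.3 mW

def swl_from_spl(spl_db, r_m, A0=1.0):
    """Sound power level from SPL at distance r over a reflecting plane,
    assuming hemispherical spreading in a free field."""
    return spl_db + 10 * math.log10(2 * math.pi * r_m ** 2 / A0)

# An 80 dB SPL reading at 1 m implies roughly:
print(round(swl_from_spl(80.0, 1.0), 1))  # 88.0 dB re 1 pW
```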
https://en.wikipedia.org/wiki/Sound_power
Sound pressure or acoustic pressure is the local pressure deviation from the ambient (average or equilibrium) atmospheric pressure , caused by a sound wave . In air, sound pressure can be measured using a microphone , and in water with a hydrophone . The SI unit of sound pressure is the pascal (Pa). [ 1 ] A sound wave in a transmission medium causes a deviation (sound pressure, a dynamic pressure) in the local ambient pressure, a static pressure. Sound pressure, denoted p , is defined by p total = p stat + p , {\displaystyle p_{\text{total}}=p_{\text{stat}}+p,} where In a sound wave, the complementary variable to sound pressure is the particle velocity . Together, they determine the sound intensity of the wave. Sound intensity , denoted I and measured in W · m −2 in SI units, is defined by I = p v , {\displaystyle \mathbf {I} =p\mathbf {v} ,} where Acoustic impedance , denoted Z and measured in Pa·m −3 ·s in SI units, is defined by [ 2 ] Z ( s ) = p ^ ( s ) Q ^ ( s ) , {\displaystyle Z(s)={\frac {{\hat {p}}(s)}{{\hat {Q}}(s)}},} where Specific acoustic impedance , denoted z and measured in Pa·m −1 ·s in SI units, is defined by [ 2 ] z ( s ) = p ^ ( s ) v ^ ( s ) , {\displaystyle z(s)={\frac {{\hat {p}}(s)}{{\hat {v}}(s)}},} where The particle displacement of a progressive sine wave is given by δ ( r , t ) = δ m cos ⁡ ( k ⋅ r − ω t + φ δ , 0 ) , {\displaystyle \delta (\mathbf {r} ,t)=\delta _{\text{m}}\cos(\mathbf {k} \cdot \mathbf {r} -\omega t+\varphi _{\delta ,0}),} where It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by v ( r , t ) = ∂ δ ∂ t ( r , t ) = ω δ m cos ⁡ ( k ⋅ r − ω t + φ δ , 0 + π 2 ) = v m cos ⁡ ( k ⋅ r − ω t + φ v , 0 ) , {\displaystyle v(\mathbf {r} ,t)={\frac {\partial \delta }{\partial t}}(\mathbf {r} ,t)=\omega \delta _{\text{m}}\cos \left(\mathbf {k} \cdot \mathbf {r} -\omega t+\varphi _{\delta ,0}+{\frac {\pi }{2}}\right)=v_{\text{m}}\cos(\mathbf {k} \cdot \mathbf 
{r} -\omega t+\varphi _{v,0}),} p ( r , t ) = − ρ c 2 ∂ δ ∂ x ( r , t ) = ρ c 2 k x δ m cos ⁡ ( k ⋅ r − ω t + φ δ , 0 + π 2 ) = p m cos ⁡ ( k ⋅ r − ω t + φ p , 0 ) , {\displaystyle p(\mathbf {r} ,t)=-\rho c^{2}{\frac {\partial \delta }{\partial x}}(\mathbf {r} ,t)=\rho c^{2}k_{x}\delta _{\text{m}}\cos \left(\mathbf {k} \cdot \mathbf {r} -\omega t+\varphi _{\delta ,0}+{\frac {\pi }{2}}\right)=p_{\text{m}}\cos(\mathbf {k} \cdot \mathbf {r} -\omega t+\varphi _{p,0}),} where Taking the Laplace transforms of v and p with respect to time yields v ^ ( r , s ) = v m s cos ⁡ φ v , 0 − ω sin ⁡ φ v , 0 s 2 + ω 2 , {\displaystyle {\hat {v}}(\mathbf {r} ,s)=v_{\text{m}}{\frac {s\cos \varphi _{v,0}-\omega \sin \varphi _{v,0}}{s^{2}+\omega ^{2}}},} p ^ ( r , s ) = p m s cos ⁡ φ p , 0 − ω sin ⁡ φ p , 0 s 2 + ω 2 . {\displaystyle {\hat {p}}(\mathbf {r} ,s)=p_{\text{m}}{\frac {s\cos \varphi _{p,0}-\omega \sin \varphi _{p,0}}{s^{2}+\omega ^{2}}}.} Since φ v , 0 = φ p , 0 {\displaystyle \varphi _{v,0}=\varphi _{p,0}} , the amplitude of the specific acoustic impedance is given by z m ( r , s ) = | z ( r , s ) | = | p ^ ( r , s ) v ^ ( r , s ) | = p m v m = ρ c 2 k x ω . {\displaystyle z_{\text{m}}(\mathbf {r} ,s)=|z(\mathbf {r} ,s)|=\left|{\frac {{\hat {p}}(\mathbf {r} ,s)}{{\hat {v}}(\mathbf {r} ,s)}}\right|={\frac {p_{\text{m}}}{v_{\text{m}}}}={\frac {\rho c^{2}k_{x}}{\omega }}.} Consequently, the amplitude of the particle displacement is related to that of the acoustic velocity and the sound pressure by δ m = v m ω , {\displaystyle \delta _{\text{m}}={\frac {v_{\text{m}}}{\omega }},} δ m = p m ω z m ( r , s ) . 
{\displaystyle \delta _{\text{m}}={\frac {p_{\text{m}}}{\omega z_{\text{m}}(\mathbf {r} ,s)}}.} When measuring the sound pressure created by a sound source, it is important to measure the distance from the object as well, since the sound pressure of a spherical sound wave decreases as 1/ r from the centre of the sphere (and not as 1/ r 2 , like the sound intensity ): [ 3 ] p ( r ) ∝ 1 r . {\displaystyle p(r)\propto {\frac {1}{r}}.} This relationship is an inverse-proportional law . If the sound pressure p 1 is measured at a distance r 1 from the centre of the sphere, the sound pressure p 2 at another position r 2 can be calculated: p 2 = r 1 r 2 p 1 . {\displaystyle p_{2}={\frac {r_{1}}{r_{2}}}\,p_{1}.} The inverse-proportional law for sound pressure comes from the inverse-square law for sound intensity : I ( r ) ∝ 1 r 2 . {\displaystyle I(r)\propto {\frac {1}{r^{2}}}.} Indeed, I ( r ) = p ( r ) v ( r ) = p ( r ) [ p ∗ z − 1 ] ( r ) ∝ p 2 ( r ) , {\displaystyle I(r)=p(r)v(r)=p(r)\left[p*z^{-1}\right](r)\propto p^{2}(r),} where hence the inverse-proportional law: p ( r ) ∝ 1 r . {\displaystyle p(r)\propto {\frac {1}{r}}.} Sound pressure level ( SPL ) or acoustic pressure level ( APL ) is a logarithmic measure of the effective pressure of a sound relative to a reference value. Sound pressure level, denoted L p and measured in dB , [ 4 ] is defined by: [ 5 ] L p = ln ⁡ ( p p 0 ) Np = 2 log 10 ⁡ ( p p 0 ) B = 20 log 10 ⁡ ( p p 0 ) dB , {\displaystyle L_{p}=\ln \left({\frac {p}{p_{0}}}\right)~{\text{Np}}=2\log _{10}\left({\frac {p}{p_{0}}}\right)~{\text{B}}=20\log _{10}\left({\frac {p}{p_{0}}}\right)~{\text{dB}},} where The commonly used reference sound pressure in air is [ 7 ] which is often considered as the threshold of human hearing (roughly the sound of a mosquito flying 3 m away). 
The proper notations for sound pressure level using this reference are L p /(20 μPa) or L p (re 20 μPa) , but the suffix notations dB SPL , dB(SPL) , dBSPL, and dB SPL are very common, even if they are not accepted by the SI. [ 8 ] Most sound-level measurements will be made relative to this reference, meaning 1 Pa will equal an SPL of 20 log 10 ⁡ ( 1 2 × 10 − 5 ) dB ≈ 94 dB {\displaystyle 20\log _{10}\left({\frac {1}{2\times 10^{-5}}}\right)~{\text{dB}}\approx 94~{\text{dB}}} . In other media, such as underwater , a reference level of 1 μPa is used. [ 9 ] These references are defined in ANSI S1.1-2013 . [ 10 ] The main instrument for measuring sound levels in the environment is the sound level meter . Most sound level meters provide readings in A-, C-, and Z-weighted decibels and must meet international standards such as IEC 61672-2013 . The lower limit of audibility is defined as an SPL of 0 dB , but the upper limit is not as clearly defined. While 1 atm ( 194 dB peak or 191 dB SPL ) [ 11 ] [ 12 ] is the largest pressure variation an undistorted sound wave can have in Earth's atmosphere (i.e., if the thermodynamic properties of the air are disregarded; in reality, sound waves become progressively non-linear above about 150 dB), larger sound waves can be present in other atmospheres or other media, such as underwater or through the Earth. [ 13 ] Ears detect changes in sound pressure. Human hearing does not have a flat spectral sensitivity ( frequency response ) relative to frequency versus amplitude . Humans do not perceive low- and high-frequency sounds as well as they perceive sounds between 3,000 and 4,000 Hz, as shown in the equal-loudness contour . Because the frequency response of human hearing changes with amplitude, three weightings have been established for measuring sound pressure: A, B and C. In order to distinguish the different sound measures, a suffix is used: A-weighted sound pressure level is written either as dB A or L A .
B-weighted sound pressure level is written either as dB B or L B , and C-weighted sound pressure level is written either as dB C or L C . Unweighted sound pressure level is called "linear sound pressure level" and is often written as dB L or just L. Some sound measuring instruments use the letter "Z" as an indication of linear SPL. [ 13 ] The distance of the measuring microphone from a sound source is often omitted when SPL measurements are quoted, making the data useless, due to the inherent effect of the inverse proportional law . In the case of ambient environmental measurements of "background" noise, distance need not be quoted, as no single source is present, but when measuring the noise level of a specific piece of equipment, the distance should always be stated. A distance of one metre (1 m) from the source is a frequently used standard distance. Because of the effects of reflected noise within a closed room, the use of an anechoic chamber allows sound to be comparable to measurements made in a free field environment. [ 13 ] According to the inverse proportional law, when sound level L p 1 is measured at a distance r 1 , the sound level L p 2 at the distance r 2 is L p 2 = L p 1 + 20 log 10 ⁡ ( r 1 r 2 ) dB . {\displaystyle L_{p_{2}}=L_{p_{1}}+20\log _{10}\left({\frac {r_{1}}{r_{2}}}\right)~{\text{dB}}.} The formula for the sum of the sound pressure levels of n incoherent radiating sources is L Σ = 10 log 10 ⁡ ( p 1 2 + p 2 2 + ⋯ + p n 2 p 0 2 ) dB = 10 log 10 ⁡ [ ( p 1 p 0 ) 2 + ( p 2 p 0 ) 2 + ⋯ + ( p n p 0 ) 2 ] dB . 
{\displaystyle L_{\Sigma }=10\log _{10}\left({\frac {p_{1}^{2}+p_{2}^{2}+\dots +p_{n}^{2}}{p_{0}^{2}}}\right)~{\text{dB}}=10\log _{10}\left[\left({\frac {p_{1}}{p_{0}}}\right)^{2}+\left({\frac {p_{2}}{p_{0}}}\right)^{2}+\dots +\left({\frac {p_{n}}{p_{0}}}\right)^{2}\right]~{\text{dB}}.} Inserting the formulas ( p i p 0 ) 2 = 10 L i 10 dB , i = 1 , 2 , … , n {\displaystyle \left({\frac {p_{i}}{p_{0}}}\right)^{2}=10^{\frac {L_{i}}{10~{\text{dB}}}},\quad i=1,2,\ldots ,n} in the formula for the sum of the sound pressure levels yields L Σ = 10 log 10 ⁡ ( 10 L 1 10 dB + 10 L 2 10 dB + ⋯ + 10 L n 10 dB ) dB . {\displaystyle L_{\Sigma }=10\log _{10}\left(10^{\frac {L_{1}}{10~{\text{dB}}}}+10^{\frac {L_{2}}{10~{\text{dB}}}}+\dots +10^{\frac {L_{n}}{10~{\text{dB}}}}\right)~{\text{dB}}.}
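The inverse-proportional distance law and the incoherent-sum formula above are straightforward to evaluate numerically; a minimal sketch (function names are illustrative):

```python
import math

def spl_at_distance(l1_db, r1_m, r2_m):
    """Inverse-proportional law: SPL at r2, given SPL l1_db measured at r1."""
    return l1_db + 20 * math.log10(r1_m / r2_m)

def spl_sum_incoherent(levels_db):
    """Combined level of n incoherent sources: sum energies, not decibels."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

print(round(spl_at_distance(94.0, 1.0, 2.0), 1))   # 88.0; doubling distance costs about 6 dB
print(round(spl_sum_incoherent([70.0, 70.0]), 1))  # 73.0; two equal sources add about 3 dB
```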
https://en.wikipedia.org/wiki/Sound_pressure
Sound recognition is a technology based on both traditional pattern recognition theories and audio signal analysis methods. Sound recognition technologies comprise preliminary data processing, feature extraction and classification algorithms. Sound recognition systems classify feature vectors, which are created as a result of preliminary data processing and linear predictive coding . Sound recognition technologies have several applications. In monitoring and security, an important contribution to alarm detection and alarm verification can be supplied using sound recognition techniques. In particular, these methods could be helpful for intrusion detection in places like offices, stores and private homes, or for the supervision of public premises exposed to personal aggression. In all these cases, a recognition system can report a danger or distress event. It could further identify sounds like glass breaking, doorbells, smoke detector alarms, red alerts, human screams, baby cries and others. Sometimes, the alarm is triggered by other detectors (e.g. temperature- or video-based) and the sound recognizer is associated with these other modalities to verify the alarm, with the purpose of decreasing the global false-alarm rate. Solutions based on sound recognition technology can offer assistance to disabled and elderly people with impaired hearing, helping them to keep or recover some independence in their daily occupations. [ 1 ] Only a handful of companies are working on sound recognition technology.
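As a toy illustration of the pipeline described above (preliminary processing, feature extraction, classification), the sketch below substitutes two deliberately simple features for real linear predictive coding coefficients, and the class centroids are entirely hypothetical:

```python
import math

def features(frame):
    """Toy feature vector: RMS energy and zero-crossing rate.
    (Real systems would use e.g. LPC coefficients instead.)"""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / len(frame)
    return (rms, zcr)

def classify(frame, centroids):
    """Nearest-centroid classification of the frame's feature vector."""
    f = features(frame)
    return min(centroids,
               key=lambda c: sum((x - y) ** 2 for x, y in zip(f, centroids[c])))

# Hypothetical centroids: a loud low-ZCR 'alarm' vs. a quiet high-ZCR 'hiss'
centroids = {"alarm": (0.8, 0.05), "hiss": (0.05, 0.6)}
tone = [0.8 * math.sin(2 * math.pi * 5 * i / 1000) for i in range(1000)]
print(classify(tone, centroids))  # alarm
```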
https://en.wikipedia.org/wiki/Sound_recognition
Sound recording and reproduction is the electrical , mechanical , electronic, or digital inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music , or sound effects . The two main classes of sound recording technology are analog recording and digital recording . Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record (in which a stylus cuts grooves on a record). In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current , which is then converted to a varying magnetic field by an electromagnet , which makes a representation of the sound as magnetized areas on a plastic tape with a magnetic coating on it. Analog sound reproduction is the reverse process, with a larger loudspeaker diaphragm causing changes to atmospheric pressure to form acoustic sound waves. Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling . This lets the audio data be stored and transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers (zeros and ones) representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard . A digital audio signal must be reconverted to analog form during playback before it is amplified and connected to a loudspeaker to produce sound. Long before sound was first recorded, music was recorded—first by written music notation , then also by mechanical devices (e.g., wind-up music boxes , in which a mechanism turns a spindle, which plucks metal tines, thus reproducing a melody ). 
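The sampling-and-quantization step that digital recording performs can be sketched in a few lines. The 44.1 kHz rate and 16-bit depth below are the CD convention, chosen here for illustration, and the function name is illustrative:

```python
import math

SAMPLE_RATE = 44100   # samples per second
BITS = 16             # bits per sample

def record_sine(freq_hz, seconds):
    """Sample a sine wave at equal time intervals and quantize each
    sample to a signed 16-bit integer, as a digital recorder does."""
    n = int(SAMPLE_RATE * seconds)
    full_scale = 2 ** (BITS - 1) - 1  # 32767
    return [round(full_scale * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
            for i in range(n)]

samples = record_sine(440.0, 0.01)  # 10 ms of an A4 tone
print(len(samples))                  # 441 samples
print(max(samples) <= 32767)         # True: every sample fits in 16 bits
```

Playback reverses the process: the integer stream is converted back to a continuous voltage, amplified, and sent to a loudspeaker.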
Automatic music reproduction traces back as far as the 9th century, when the Banū Mūsā brothers invented the earliest known mechanical musical instrument, in this case, a hydropowered (water-powered) organ that played interchangeable cylinders. According to Charles B. Fowler, this "... cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century." [ 1 ] [ 2 ] Carvings in the Rosslyn Chapel from the 1560s may represent an early attempt to record the Chladni patterns produced by sound-in-stone representations, although this theory has not been conclusively proved. [ 3 ] [ 4 ] In the 14th century, a mechanical bell-ringer controlled by a rotating cylinder was introduced in Flanders . [ citation needed ] Similar designs appeared in barrel organs (15th century), musical clocks (1598), barrel pianos (1805), and music boxes ( c. 1800 ). A music box is an automatic musical instrument that produces sounds by the use of a set of pins placed on a revolving cylinder or disc so as to pluck the tuned teeth (or lamellae ) of a steel comb. The fairground organ , developed in 1892, used a system of accordion-folded punched cardboard books. The player piano , first demonstrated in 1876, used a punched paper scroll that could store a long piece of music. The most sophisticated of the piano rolls were hand-played , meaning that they were duplicates from a master roll that had been created on a special piano, which punched holes in the master as a live performer played the song. Thus, the roll represented a recording of the actual performance of an individual, not just the more common method of punching the master roll through transcription of the sheet music. This technology to record a live performance onto a piano roll was not developed until 1904. Piano rolls were in continuous mass production from 1896 to 2008. [ 5 ] [ 6 ] A 1908 U.S. 
Supreme Court copyright case noted that, in 1902 alone, there were between 70,000 and 75,000 player pianos manufactured, and between 1,000,000 and 1,500,000 piano rolls produced. [ 7 ] The first device that could record actual sounds as they passed through the air (but could not play them back—the purpose was only visual study) was the phonautograph , patented in 1857 by Parisian inventor Édouard-Léon Scott de Martinville . The earliest known recordings of the human voice are phonautograph recordings, called phonautograms , made in 1857. [ 8 ] They consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it. An 1860 phonautogram of " Au Clair de la Lune ", a French folk song, was played back as sound for the first time in 2008 by scanning it and using software to convert the undulating line, which graphically encoded the sound, into a corresponding digital audio file. [ 8 ] [ 9 ] Thomas Edison's work on two other innovations, the telegraph and the telephone, led to the development of the phonograph. Edison was working on a machine in 1877 that would transcribe telegraphic signals onto paper tape, which could then be transferred over the telegraph again and again. The phonograph was made in both cylinder and disc forms. [ citation needed ] On April 30, 1877, French poet, humorous writer and inventor Charles Cros submitted a sealed envelope containing a letter to the Academy of Sciences in Paris fully explaining his proposed method, called the paleophone. [ 10 ] Though no trace of a working paleophone was ever found, Cros is remembered by some historians as an early inventor of a sound recording and reproduction machine. [ 11 ] The first practical sound recording and reproduction device was the mechanical phonograph cylinder , invented by Thomas Edison in 1877 and patented in 1878.
[ 12 ] [ 13 ] The invention soon spread across the globe and over the next two decades the commercial recording, distribution, and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1920s. [ 14 ] A process for mass-producing duplicate wax cylinders by molding instead of engraving them was put into effect in 1901. [ 15 ] The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries and the cylinder was the main consumer format from the late 1880s until around 1910. [ citation needed ] The next major technical development was the invention of the gramophone record , generally credited to Emile Berliner [ by whom? ] and patented in 1887, [ 16 ] though others had demonstrated similar disk apparatus earlier, most notably Alexander Graham Bell in 1881. [ 17 ] Discs were easier to manufacture, transport and store, and they had the additional benefit of being marginally louder than cylinders. Sales of the gramophone record overtook the cylinder ca. 1910, and by the end of World War I the disc had become the dominant commercial recording format. Edison, who was the main producer of cylinders, created the Edison Disc Record in an attempt to regain his market. The double-sided (nominally 78 rpm) shellac disc was the standard consumer music format from the early 1910s to the late 1950s. In various permutations, the audio disc format became the primary medium for consumer sound recordings until the end of the 20th century. Although there was no universally accepted speed, and various companies offered discs that played at several different speeds, the major recording companies eventually settled on a de facto industry standard of nominally 78 revolutions per minute. The specified speed was 78.26 rpm in America and 77.92 rpm throughout the rest of the world. 
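These two oddly precise figures fall out of simple stroboscope arithmetic: a neon strobe lamp flashes twice per mains cycle, and a strobe-disc pattern appears to stand still when its bars pass the lamp at exactly the flash rate. A minimal sketch (the bar counts of 92 and 77 are the conventional strobe-disc values, assumed here rather than stated in the text):

```python
def strobe_rpm(mains_hz, bars):
    """RPM at which a strobe-disc pattern appears stationary.

    A neon lamp flashes twice per AC mains cycle, so a pattern of
    `bars` marks freezes when the bars pass at the flash rate.
    """
    flashes_per_minute = 2 * mains_hz * 60
    return flashes_per_minute / bars

print(round(strobe_rpm(60, 92), 2))  # 78.26 rpm on 60 Hz (US) mains
print(round(strobe_rpm(50, 77), 2))  # 77.92 rpm on 50 Hz mains
```

The same turntable speed is thus calibrated to slightly different values depending on the local mains frequency, which is exactly the split between the American and rest-of-world figures.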
The difference in speeds was due to the difference in the cycle frequencies of the AC electricity that powered the stroboscopes used to calibrate recording lathes and turntables. [ 18 ] The nominal speed of the disc format gave rise to its common nickname, the seventy-eight (though not until other speeds had become available). Discs were made of shellac or similar brittle plastic-like materials, played with needles made from a variety of materials including mild steel, thorn, and even sapphire. Discs had a distinctly limited playing life that varied depending on how they were manufactured. Earlier, purely acoustic methods of recording had limited sensitivity and frequency range. Mid-frequency range notes could be recorded, but very low and very high frequencies could not. Instruments such as the violin were difficult to transfer to disc. One technique to deal with this involved using a Stroh violin , which uses a conical horn connected to a diaphragm that in turn is connected to the violin bridge. The horn was no longer needed once electrical recording was developed. The long-playing 33 1⁄3 rpm microgroove LP record was developed at Columbia Records and introduced in 1948. The short-playing but convenient 7-inch (18 cm) 45 rpm microgroove vinyl single was introduced by RCA Victor in 1949. In the US and most developed countries, the two new vinyl formats completely replaced 78 rpm shellac discs by the end of the 1950s, but in some corners of the world, the 78 lingered on far into the 1960s. [ 19 ] Vinyl was much more expensive than shellac, one of the several factors that made its use for 78 rpm records very unusual, but with a long-playing disc the added cost was acceptable. The compact 45 format required very little material. Vinyl offered improved performance, both in stamping and in playback. Vinyl records were, over-optimistically, advertised as "unbreakable". 
They were not, but they were much less fragile than shellac, which had itself once been touted as unbreakable compared to wax cylinders. Sound recording began as a purely mechanical process. Except for a few crude telephone-based recording devices with no means of amplification, such as the telegraphone , [ a ] it remained so until the 1920s. Between the invention of the phonograph in 1877 and the first commercial digital recordings in the early 1970s, arguably the most important milestone in the history of sound recording was the introduction of what was then called electrical recording , in which a microphone was used to convert the sound into an electrical signal that was amplified and used to actuate the recording stylus. This innovation eliminated the horn sound resonances characteristic of the acoustical process, produced clearer and more full-bodied recordings by greatly extending the useful range of audio frequencies, and allowed previously unrecordable distant and feeble sounds to be captured. During this time, several radio-related developments in electronics converged to revolutionize the recording process. These included improved microphones and auxiliary devices such as electronic filters, all dependent on electronic amplification to be of practical use in recording. In 1906, Lee De Forest invented the Audion triode vacuum tube, an electronic valve that could amplify weak electrical signals. By 1915, it was in use in long-distance telephone circuits that made conversations between New York and San Francisco practical. Refined versions of this tube were the basis of all electronic sound systems until the commercial introduction of the first transistor -based audio devices in the mid-1950s. During World War I, engineers in the United States and Great Britain worked on ways to record and reproduce, among other things, the sound of a German U-boat for training purposes. Acoustical recording methods of the time could not reproduce the sounds accurately. 
The earliest results were not promising. The first electrical recording issued to the public, with little fanfare, was of the November 11, 1920, funeral service for The Unknown Warrior in Westminster Abbey , London. The recording engineers used microphones of the type used in contemporary telephones. Four were discreetly set up in the abbey and wired to recording equipment in a vehicle outside. Although electronic amplification was used, the audio was weak and unclear, which was perhaps all that was possible in those circumstances. For several years, this little-noted disc remained the only issued electrical recording. Several record companies and independent inventors, notably Orlando Marsh , experimented with equipment and techniques for electrical recording in the early 1920s. Marsh's electrically recorded Autograph Records were already being sold to the public in 1924, a year before the first such offerings from the major record companies, but their overall sound quality was too low to demonstrate any obvious advantage over traditional acoustical methods. Marsh's microphone technique was idiosyncratic and his work had little if any impact on the systems being developed by others. [ 20 ] Telephone industry giant Western Electric had research laboratories [ b ] with material and human resources that no record company or independent inventor could match. They had the best microphone, a condenser type developed there in 1916 and greatly improved in 1922, [ 21 ] and the best amplifiers and test equipment. They had already patented an electromechanical recorder in 1918, and in the early 1920s, they decided to intensively apply their hardware and expertise to developing two state-of-the-art systems for electronically recording and reproducing sound: one that employed conventional discs and another that recorded optically on motion picture film. 
Their engineers pioneered the use of mechanical analogs of electrical circuits and developed a superior rubber line recorder for cutting the groove into the wax master in the disc recording system. [ 22 ] By 1924, such dramatic progress had been made that Western Electric arranged a demonstration for the two leading record companies, the Victor Talking Machine Company and the Columbia Phonograph Company . Both soon licensed the system and both made their earliest published electrical recordings in February 1925, but neither actually released them until several months later. To avoid making their existing catalogs instantly obsolete, the two long-time archrivals agreed privately not to publicize the new process until November 1925, by which time enough electrically recorded repertory would be available to meet the anticipated demand. During the next few years, the lesser record companies licensed or developed other electrical recording systems. By 1929 only the budget label Harmony was still issuing new recordings made by the old acoustical process. Comparison of some surviving Western Electric test recordings with early commercial releases indicates that the record companies artificially reduced the frequency range of recordings so they would not overwhelm non-electronic playback equipment, which reproduced very low frequencies as an unpleasant rattle and rapidly wore out discs with strongly recorded high frequencies. [ citation needed ] In the 1920s, Phonofilm and other early motion picture sound systems employed optical recording technology, in which the audio signal was graphically recorded on photographic film. The amplitude variations comprising the signal were used to modulate a light source which was imaged onto the moving film through a narrow slit, allowing the signal to be photographed as variations in the density or width of a sound track . 
The projector used a steady light and a photodetector to convert these variations back into an electrical signal, which was amplified and sent to loudspeakers behind the screen. [ c ] Optical sound became the standard motion picture audio system throughout the world and remains so for theatrical release prints despite attempts in the 1950s to substitute magnetic soundtracks. Currently, all release prints on 35 mm movie film include an analog optical soundtrack, usually stereo with Dolby SR noise reduction. In addition, an optically recorded digital soundtrack in Dolby Digital or Sony SDDS form is likely to be present. An optically recorded timecode is also commonly included to synchronize CDROMs that contain a DTS soundtrack. This period also saw several other historic developments including the introduction of the first practical magnetic sound recording system, the magnetic wire recorder , which was based on the work of Danish inventor Valdemar Poulsen . Magnetic wire recorders were effective, but the sound quality was poor, so between the wars, they were primarily used for voice recording and marketed as business dictating machines. In 1924, a German engineer, Kurt Stille, improved the Telegraphone with an electronic amplifier. [ 23 ] The following year, Ludwig Blattner began work that eventually produced the Blattnerphone, [ 24 ] which used steel tape instead of wire. The BBC started using Blattnerphones in 1930 to record radio programs. In 1933, radio pioneer Guglielmo Marconi 's company purchased the rights to the Blattnerphone, and newly developed Marconi-Stille recorders were installed in the BBC's Maida Vale Studios in March 1935. [ 25 ] The tape used in Blattnerphones and Marconi-Stille recorders was the same material used to make razor blades, and not surprisingly the fearsome Marconi-Stille recorders were considered so dangerous that technicians had to operate them from another room for safety. 
Because of the high recording speeds required, they used enormous reels about one meter in diameter, and the thin tape frequently broke, sending jagged lengths of razor steel flying around the studio. Magnetic tape recording uses an amplified electrical audio signal to generate analogous variations of the magnetic field produced by a tape head , which impresses corresponding variations of magnetization on the moving tape. In playback mode, the signal path is reversed, the tape head acting as a miniature electric generator as the varyingly magnetized tape passes over it. [ 26 ] The original solid steel ribbon was replaced by a much more practical coated paper tape, but acetate soon replaced paper as the standard tape base. Acetate has fairly low tensile strength and if very thin it will snap easily, so it was in turn eventually superseded by polyester. This technology, the basis for almost all commercial recording from the 1950s to the 1980s, was developed in the 1930s by German audio engineers who also rediscovered the principle of AC biasing (first used in the 1920s for wire recorders ), which dramatically improved the frequency response of tape recordings. The K1 Magnetophon was the first practical tape recorder, developed by AEG in Germany in 1935. The technology was further improved just after World War II by American audio engineer John T. Mullin with backing from Bing Crosby Enterprises . Mullin's pioneering recorders were modifications of captured German recorders. In the late 1940s, the Ampex company produced the first tape recorders commercially available in the US. Magnetic tape brought about sweeping changes in both radio and the recording industry. Sound could be recorded, erased and re-recorded on the same tape many times, sounds could be duplicated from tape to tape with only minor loss of quality, and recordings could now be very precisely edited by physically cutting the tape and rejoining it. 
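The record/playback symmetry described above can be caricatured in a few lines of code. The tape is modeled as a bare list of magnetization values; on playback, the head acts as a generator and so responds to the rate of change of that magnetization, which is one reason real machines apply playback equalization. This is a toy sketch of the principle, not a model of any particular machine:

```python
# Record: the head impresses a magnetization on the moving tape that
# follows the amplified audio signal, sample for sample.
signal = [0.0, 1.0, 2.0, 2.0, 1.0, 0.0]
tape = list(signal)  # magnetization pattern "on the tape"

# Playback: the head output is proportional to the rate of change of
# flux as the tape passes, approximated here as a discrete difference.
playback = [b - a for a, b in zip(tape, tape[1:])]
print(playback)  # [1.0, 1.0, 0.0, -1.0, -1.0]
```

Note that the playback list is the derivative of the recorded one: a constant magnetization produces no output, which is why the raw head signal must be equalized before it resembles the original audio.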
Within a few years of the introduction of the first commercial tape recorder—the Ampex 200 model, launched in 1948—American musician-inventor Les Paul had invented the first multitrack tape recorder , ushering in another technical revolution in the recording industry. Tape made possible the first sound recordings totally created by electronic means, opening the way for the bold sonic experiments of the Musique Concrète school and avant-garde composers like Karlheinz Stockhausen , which in turn led to the innovative pop music recordings of artists such as the Beatles and the Beach Boys . The ease and accuracy of tape editing, as compared to the cumbersome disc-to-disc editing procedures previously in some limited use, together with tape's consistently high audio quality, finally convinced radio networks to routinely prerecord their entertainment programming, most of which had formerly been broadcast live. Also, for the first time, broadcasters, regulators and other interested parties were able to undertake comprehensive audio logging of each day's radio broadcasts. Innovations like multitracking and tape echo allowed radio programs and advertisements to be produced to a high level of complexity and sophistication. Combined with innovations such as the endless loop broadcast cartridge , these techniques led to significant changes in the pacing and production style of radio program content and advertising. In 1881, it was noted during experiments in transmitting sound from the Paris Opera that it was possible to follow the movement of singers on the stage if earpieces connected to different microphones were held to the two ears. This discovery was commercialized in 1890 with the Théâtrophone system, which operated for over forty years until 1932. In 1931, Alan Blumlein , a British electronics engineer working for EMI , designed a way to make the sound of an actor in a film follow his movement across the screen. 
In December 1931, he submitted a patent application including the idea, and in 1933, this became UK patent number 394,325 . [ 27 ] Over the next two years, Blumlein developed stereo microphones and a stereo disc-cutting head, and recorded a number of short films with stereo soundtracks. In the 1930s, experiments with magnetic tape enabled the development of the first practical commercial sound systems that could record and reproduce high-fidelity stereophonic sound . The experiments with stereo during the 1930s and 1940s were hampered by problems with synchronization. A major breakthrough in practical stereo sound was made by Bell Laboratories , who in 1937 demonstrated a practical system of two-channel stereo, using dual optical sound tracks on film. [ 28 ] Major movie studios quickly developed three-track and four-track sound systems, and the first stereo sound recording for a commercial film was made by Judy Garland for the MGM movie Listen, Darling in 1938. [ citation needed ] The first commercially released movie with a stereo soundtrack was Walt Disney's Fantasia , released in 1940. The film used the Fantasound sound system, which used a separate film for the sound, synchronized with the film carrying the picture. The sound film had four double-width optical soundtracks: three for left, center, and right audio, and a fourth as a control track with three recorded tones that controlled the playback volume of the three audio channels. Because of the complex equipment this system required, Disney exhibited the movie as a roadshow, and only in the United States. Regular releases of the movie used standard mono optical 35 mm stock until 1956, when Disney released the film with a stereo soundtrack that used the Cinemascope four-track magnetic sound system. German audio engineers working on magnetic tape developed stereo recording by 1941. 
Of 250 stereophonic recordings made during WW2, only three survive: Beethoven's 5th Piano Concerto with Walter Gieseking and Arthur Rother, a Brahms Serenade, and the last movement of Bruckner's 8th Symphony with Von Karajan. [ d ] Other early German stereophonic tapes are believed to have been destroyed in bombings. Not until Ampex introduced the first commercial two-track tape recorders in the late 1940s did stereo tape recording become commercially feasible. Despite the availability of multitrack tape, stereo did not become the standard system for commercial music recording for some years, and remained a specialist market during the 1950s. EMI (UK) was the first company to release commercial stereophonic tapes, issuing its first Stereosonic tape in 1954. More quickly followed under its His Master's Voice and Columbia labels; in all, 161 Stereosonic tapes were released, mostly of classical music or lyric recordings. RCA imported these tapes into the USA. Although some British His Master's Voice imports released in the USA cost up to $15, two-track stereophonic tapes were more successful in America during the second half of the 1950s. The history of stereo recording changed after the late 1957 introduction of the Westrex stereo phonograph disc , which used the groove format developed earlier by Blumlein. Decca Records in England came out with FFRR (Full Frequency Range Recording) in the 1940s, which became internationally accepted as a worldwide standard for higher-quality recording on vinyl records. The Ernest Ansermet recording of Igor Stravinsky 's Petrushka was key in the development of full frequency range records and in alerting the listening public to high fidelity in 1946. [ 29 ] Until the mid-1960s, record companies mixed and released most popular music in monophonic sound. From the mid-1960s until the early 1970s, major recordings were commonly released in both mono and stereo. 
Recordings originally released only in mono have been re-rendered and released in stereo using a variety of techniques from remixing to pseudostereo . Magnetic tape transformed the recording industry. By the early 1950s, most commercial recordings were mastered on tape instead of recorded directly to disc. Tape facilitated a degree of manipulation in the recording process that was impractical with mixes and multiple generations of directly recorded discs. An early example is Les Paul 's 1951 recording of How High the Moon , on which Paul played eight overdubbed guitar tracks. [ 30 ] In the 1960s Brian Wilson of The Beach Boys , Frank Zappa , and The Beatles (with producer George Martin ) were among the first popular artists to explore the possibilities of multitrack recording techniques and effects on their landmark albums Pet Sounds , [ 31 ] Freak Out! , and Sgt. Pepper's Lonely Hearts Club Band . [ 32 ] The next important innovation was small cartridge-based tape systems, of which the compact cassette , commercialized by the Philips electronics company in 1964, is the best known. Initially a low-fidelity format for spoken-word voice recording and inadequate for music reproduction, after a series of improvements it eventually displaced competing consumer tape formats such as the larger 8-track tape [ 33 ] (used primarily in cars). The compact cassette became a major consumer audio format and advances in electronic and mechanical miniaturization led to the development of the Sony Walkman , a pocket-sized cassette player introduced in 1979. The Walkman was the first personal music player and it gave a major boost to sales of prerecorded cassettes. [ 34 ] A key advance in audio fidelity came with the Dolby A noise reduction system, invented by Ray Dolby and introduced into professional recording studios in 1966. It suppressed background hiss, which was the only easily audible downside of mastering on tape instead of recording directly to disc. 
[ 35 ] A competing system, dbx , invented by David Blackmer, [ 36 ] also found success in professional audio. [ 37 ] A simpler consumer variant of Dolby's noise reduction system, known as Dolby B, greatly improved the sound of cassette tape recordings by reducing the especially high level of hiss that resulted from the cassette's miniaturized tape format and slow tape speed. The compact cassette format also benefited from improvements to the tape itself as coatings with wider frequency responses and lower inherent noise were developed, often based on cobalt and chrome oxides as the magnetic material instead of the more usual iron oxide. The multitrack audio cartridge had been in wide use in the radio industry, from the late 1950s to the 1980s, but in the 1960s the pre-recorded 8-track tape was launched as a consumer audio format by the Lear Jet aircraft company. [ e ] Aimed particularly at the automotive market, they were the first practical, affordable car hi-fi systems, and could produce sound quality superior to that of the compact cassette. The smaller size and greater durability – augmented by the ability to create home-recorded music mixtapes since 8-track recorders were rare – saw the cassette become the dominant consumer format for portable audio devices in the 1970s and 1980s. [ 38 ] There had been experiments with multi-channel sound for many years – usually for special musical or cultural events – but the first commercial application of the concept came in the early 1970s with the introduction of Quadraphonic sound. This spin-off development from multitrack recording used four tracks (instead of the two used in stereo) and four speakers to create a 360-degree audio field around the listener. [ 39 ] Following the release of the first consumer 4-channel hi-fi systems, a number of popular albums were released in one of the competing four-channel formats; among the best known are Mike Oldfield 's Tubular Bells and Pink Floyd 's The Dark Side of the Moon . 
Quadraphonic sound was not a commercial success, partly because of competing and somewhat incompatible four-channel sound systems (e.g., CBS , JVC , Dynaco and others all had systems) and partly because of the generally poor quality of the released music, even when played as intended on the correct equipment. It eventually faded out in the late 1970s, although this early venture paved the way for the eventual introduction of domestic surround sound systems in home theatre use, which gained popularity following the introduction of the DVD. [ 40 ] The replacement of the relatively fragile vacuum tube by the smaller, rugged and efficient transistor also accelerated the sale of consumer high-fidelity sound systems from the 1960s onward. In the 1950s, most record players were monophonic and had relatively low sound quality. Few consumers could afford high-quality stereophonic sound systems. In the 1960s, American manufacturers introduced a new generation of modular hi-fi components — separate turntables, pre-amplifiers and amplifiers (or the two combined as integrated amplifiers), tape recorders, and other ancillary equipment like the graphic equalizer , which could be connected together to create a complete home sound system. These developments were rapidly taken up by major Japanese electronics companies, which soon flooded the world market with relatively affordable, high-quality transistorized audio components. By the 1980s, corporations like Sony had become world leaders in the music recording and playback industry. The advent of digital sound recording and later the compact disc (CD) in 1982 brought significant improvements in the quality and durability of recordings. The CD initiated another massive wave of change in the consumer music industry, with vinyl records effectively relegated to a small niche market by the mid-1990s. 
The record industry fiercely resisted the introduction of digital systems, fearing wholesale piracy on a medium able to produce perfect copies of original released recordings. The most recent and revolutionary developments have been in digital recording, with the development of various uncompressed and compressed digital audio file formats , processors fast enough to convert the digital data to sound in real time , and inexpensive mass storage . [ 41 ] This generated new types of portable digital audio players . The minidisc player, using ATRAC compression on small, re-writeable discs, was introduced in the 1990s but became obsolescent as solid-state non-volatile flash memory dropped in price. As technologies that increase the amount of data that can be stored on a single medium, such as Super Audio CD , DVD-A , Blu-ray Disc , and HD DVD , became available, longer programs of higher quality fit onto a single disc. Sound files are readily downloaded from the Internet and other sources, and copied onto computers and digital audio players. Digital audio technology is now used in all areas of audio, from casual use of music files of moderate quality to the most demanding professional applications. New applications such as internet radio and podcasting have appeared. Technological developments in recording, editing, and consuming have transformed the record , movie and television industries in recent decades. Audio editing became practicable with the invention of magnetic tape recording , but technologies like MIDI , sound synthesis and digital audio workstations allow greater control and efficiency for composers and artists. Digital audio techniques and mass storage have reduced recording costs such that high-quality recordings can be produced in small studios. [ 42 ] Today, the process of making a recording is separated into tracking, mixing and mastering . 
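As a concrete illustration of the uncompressed digital audio files mentioned above, Python's standard-library wave module can write a playable one-second tone. The CD-style parameters (44.1 kHz sample rate, 16-bit samples) are conventional values assumed for the sketch, not taken from the text:

```python
import math
import struct
import wave

rate, freq, seconds = 44100, 440.0, 1   # CD-style rate, A440 tone

# Each sample is one signed 16-bit number, just as on an audio CD.
samples = bytearray()
for n in range(rate * seconds):
    value = int(32767 * 0.5 * math.sin(2 * math.pi * freq * n / rate))
    samples += struct.pack("<h", value)  # little-endian 16-bit sample

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)      # mono, for brevity
    w.setsampwidth(2)      # 2 bytes = 16 bits per sample
    w.setframerate(rate)
    w.writeframes(bytes(samples))
```

The resulting file is raw PCM in a WAV container; compressed formats such as MP3 instead store a perceptually coded version of the same sample stream.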
Multitrack recording makes it possible to capture signals from several microphones, or from different takes to tape, disc or mass storage, allowing previously unavailable flexibility in the mixing and mastering stages. There are many different digital audio recording and processing programs running under several computer operating systems for all purposes, ranging from casual users and serious amateurs working on small projects to professional sound engineers who are recording albums, film scores and doing sound design for video games . Digital dictation software for recording and transcribing speech has different requirements; intelligibility and flexible playback facilities are priorities, while a wide frequency range and high audio quality are not. The development of analog sound recording in the nineteenth century and its widespread use throughout the twentieth century had a huge impact on the development of music. Before analog sound recording was invented, most music was experienced only in live performance. Throughout the medieval , Renaissance , Baroque , Classical , and through much of the Romantic music era , the main way that songs and instrumental pieces were recorded was through music notation . While notation indicates the pitches of the melody and their rhythm, many aspects of the performance are undocumented. Indeed, in the Medieval era, Gregorian chant did not indicate the rhythm of the chant. In the Baroque era, instrumental pieces often lacked a tempo indication [ 43 ] and usually none of the ornaments were written down. As a result, each performance of a song or piece would be slightly different. With the development of analog sound recording, though, a performance could be permanently fixed, in all of its elements: pitch, rhythm, timbre, ornaments and expression. This meant that many more elements of a performance would be captured and disseminated to other listeners. 
The development of sound recording also enabled a much larger proportion of people to hear famous orchestras, operas, singers and bands, because even if a person could not afford to hear the live concert, they might be able to hear the recording. [ 44 ] The availability of sound recording thus helped to spread musical styles to new regions, countries and continents. The cultural influence went in a number of directions. Sound recordings enabled Western music lovers to hear actual recordings of Asian, Middle Eastern and African groups and performers, increasing awareness of non-Western musical styles. At the same time, sound recordings enabled music lovers outside the West to hear the most famous North American and European groups and singers. [ 45 ] As digital recording developed, so did a controversy commonly known as the analog versus digital controversy. Audio professionals, audiophiles, consumers, and musicians alike contributed to the debate based on their interaction with the media and their preferences for analog or digital processes. [ 46 ] Scholarly discourse on the controversy came to focus on concern for the perception of moving image and sound. [ 47 ] There are individual and cultural preferences for either method. While approaches and opinions vary, some emphasize sound as paramount, while others focus on technology preferences as the deciding factor. Analog enthusiasts may embrace the medium's inherent limitations as strengths in the composition, editing, mixing, and listening phases. [ 48 ] Digital advocates point to the flexibility of the same processes. This debate has fostered a revival of vinyl in the music industry, [ 49 ] as well as of analog electronics and analog-style plug-ins for recording and mixing software. In copyright law, a phonogram or sound recording is a work that results from the fixation of sounds in a medium. 
The notice of copyright in a phonogram uses the sound recording copyright symbol , which the Geneva Phonograms Convention defines as ℗ (the letter P in a full circle). This usually accompanies the copyright notice for the underlying musical composition, which uses the ordinary © symbol. The recording is separate from the song, so copyright for a recording usually belongs to the record company. It is less common for an artist or producer to hold these rights. Copyright for recordings has existed since 1972, while copyright for musical composition, or songs, has existed since 1831. Disputes over sampling and beats [ clarification needed ] are ongoing. [ 42 ] United States copyright law defines "sound recordings" as "works that result from the fixation of a series of musical, spoken, or other sounds" other than an audiovisual work's soundtrack. [ 50 ] Prior to the Sound Recording Amendment (SRA), [ 51 ] which took effect in 1972, copyright in sound recordings was handled at the state level. Federal copyright law preempts most state copyright laws but allows state copyright in sound recordings to continue for one full copyright term after the SRA's effective date, [ 52 ] which means 2067. Since 1934, copyright law in Great Britain has treated sound recordings (or phonograms ) differently from musical works . [ 53 ] The Copyright, Designs and Patents Act 1988 defines a sound recording as (a) a recording of sounds, from which the sounds may be reproduced, or (b) a recording of the whole or any part of a literary, dramatic or musical work, from which sounds reproducing the work or part may be produced, regardless of the medium on which the recording is made or the method by which the sounds are reproduced or produced. It thus covers vinyl records, tapes, compact discs , digital audiotapes, and MP3s that embody recordings.
https://en.wikipedia.org/wiki/Sound_recording_and_reproduction
A sound reinforcement system is the combination of microphones , signal processors , amplifiers , and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. [ 1 ] [ 2 ] In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects , such as reverb , as opposed to simply amplifying the sources unaltered. A sound reinforcement system for a rock concert in a stadium may be very complex, including hundreds of microphones, complex live sound mixing and signal processing systems, tens of thousands of watts of amplifier power, and multiple loudspeaker arrays , all overseen by a team of audio engineers and technicians. On the other hand, a sound reinforcement system can be as simple as a small public address (PA) system, consisting of, for example, a single microphone connected to a 100-watt amplified loudspeaker for a singer-guitarist playing in a small coffeehouse . In both cases, these systems reinforce sound to make it louder or distribute it to a wider audience. [ 3 ] Some audio engineers and others in the professional audio industry disagree over whether these audio systems should be called sound reinforcement (SR) systems or PA systems. Some distinguish between the two terms by technology and capability, while others distinguish by intended use (e.g., SR systems are for live event support and PA systems are for reproduction of speech and recorded music in buildings and institutions). In some regions or markets, the distinction between the two terms is important, though the terms are considered interchangeable in many professional circles.
[ 4 ] A typical sound reinforcement system consists of: input transducers (e.g., microphones ), which convert sound energy such as a person singing into an electric signal; signal processors , which alter the signal characteristics (e.g., equalizers that adjust the bass and treble, compressors that reduce signal peaks, etc.); amplifiers , which produce a powerful version of the resulting signal that can drive a loudspeaker; and output transducers (e.g., loudspeakers in speaker cabinets ), which convert the signal back into sound energy (the sound heard by the audience and the performers). These primary parts involve varying numbers of individual components [ 5 ] to achieve the desired goal of reinforcing and clarifying the sound to the audience, performers, or other individuals. Sound reinforcement in a large format system typically involves a signal path that starts with the signal inputs, which may be instrument pickups (on an electric guitar or electric bass ), a microphone that a vocalist is singing into, or a microphone placed in front of an instrument or guitar amplifier . These signal inputs are plugged into the input jacks of a thick multicore cable (often called a snake ). The snake then delivers the signals of all of the inputs to one or more mixing consoles . In a coffeehouse or small nightclub, the snake may be routed only to a single mixing console, which an audio engineer will use to adjust the sound and volume of the onstage vocals and instruments that the audience hears through the main speakers and adjust the volume of the monitor speakers that are aimed at the performers. Mid- to large-size performing venues typically route the onstage signals to two mixing consoles : the front of house (FOH), and the stage monitor system , which is often a second mixer at the side of the stage. In these cases, at least two audio engineers are required; one to do the main mix for the audience at FOH and another to do the monitor mix for the performers on stage.
Once the signal arrives at an input on a mixing console, this signal can be adjusted in many ways by the sound engineer. A signal can be equalized (e.g., by adjusting the bass or treble of the sound), compressed (to avoid unwanted signal peaks), or panned (that is, sent to the left or right speakers). The signal may also be routed into an external effects processor , such as a reverb effect, which outputs a wet (effected) version of the signal, which is typically mixed in varying amounts with the dry (effect-free) signal. Many electronic effects units are used in sound reinforcement systems, including digital delay and reverb . Some concerts use pitch correction effects (e.g., Auto-Tune ), which electronically correct any out-of-tune singing. Mixing consoles also have additional sends , also referred to as auxes or aux sends (an abbreviation for "auxiliary send"), on each input channel so that a different mix can be created and sent elsewhere for another purpose. One usage for aux sends is to create a mix of the vocal and instrument signals for the monitor mix (this is what the onstage singers and musicians hear from their monitor speakers or in-ear monitors ). Another use of an aux send is to select varying amounts of certain channels (via the aux send knobs on each channel), and then route these signals to an effects processor. A common example of the second use of aux sends is to send all of the vocal signals from a rock band through a reverb effect. While reverb is usually added to vocals in the main mix, it is not usually added to electric bass and other rhythm section instruments. The processed input signals are then mixed to the master faders on the console. The next step in the signal path generally depends on the size of the system in place.
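The aux-send reverb routing described above can be illustrated with a short numerical sketch. This is a minimal, hedged illustration in Python/NumPy, not any console's actual implementation: `simple_reverb` is a crude, hypothetical feedback-delay stand-in for a real reverb processor, and `send_level` plays the role of the aux-send knob on a channel.

```python
import numpy as np

def simple_reverb(x, delay=2205, decay=0.4):
    """Crude feedback-delay stand-in for a real reverb processor."""
    y = np.copy(x)
    for n in range(delay, len(y)):
        y[n] += decay * y[n - delay]
    return y

def aux_send_mix(dry, send_level=0.3):
    """Send a copy of the channel to the effect at the aux-send level,
    then sum the returned wet signal with the untouched dry signal."""
    wet = simple_reverb(send_level * dry)
    return dry + wet

fs = 44100
t = np.arange(fs) / fs
vocal = np.sin(2 * np.pi * 440 * t)   # one second of a 440 Hz "voice"
mixed = aux_send_mix(vocal, send_level=0.25)
```

Turning the send level down to zero leaves the dry signal untouched, mirroring how an aux-send knob at its minimum contributes nothing to the effect return.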
In smaller systems, the main outputs are often sent to an additional equalizer, or directly to a power amplifier , with one or more loudspeakers (typically two, one on each side of the stage in smaller venues, or a large number in big venues) that are connected to that amplifier. In large-format systems, the signal is typically first routed through an equalizer and then to a crossover . A crossover splits the signal into multiple frequency bands, with each band being sent to separate amplifiers and speaker enclosures for low, middle, and high-frequency signals. Low-frequency signals are sent to amplifiers and then to subwoofers , and middle and high-frequency sounds are typically sent to amplifiers which power full-range speaker cabinets. Using a crossover to separate the sound into low, middle and high frequencies can lead to a "cleaner", clearer sound (see bi-amplification ) than routing all of the frequencies through a single full-range speaker system. Nevertheless, many small venues still use a single full-range speaker system, as it is easier to set up and less expensive. Many types of input transducers can be found in a sound reinforcement system, with microphones being the most commonly used input device. Microphones can be classified according to their method of transduction, polar pattern or their functional application. Most microphones used in sound reinforcement are either dynamic or condenser microphones. Cardioid microphones, one type of directional microphone, are widely used in live sound because they reduce pickup from the side and rear, helping to avoid unwanted feedback from the stage monitor system . Microphones used for sound reinforcement are positioned and mounted in many ways, including base-weighted upright stands, podium mounts, tie-clips, instrument mounts, and headset mounts . Microphones on stands are also placed in front of instrument amplifiers to pick up the sound.
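The band-splitting performed by a crossover, described earlier in this section, can be sketched as follows. This is a simplified illustration assuming a first-order complementary split; real crossovers typically use steeper filters (e.g., Linkwitz–Riley alignments), and the function names here are invented for the example.

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, fs):
    """First-order IIR low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    acc = 0.0
    for n in range(len(x)):
        acc += a * (x[n] - acc)
        y[n] = acc
    return y

def crossover(x, cutoff_hz, fs):
    """Split a full-range signal into a low band (for the subwoofer amp)
    and a complementary high band (for the full-range cabinets)."""
    low = one_pole_lowpass(x, cutoff_hz, fs)
    high = x - low          # complementary: low + high reconstructs x
    return low, high

fs = 48000
t = np.arange(fs // 10) / fs
signal = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 5000 * t)
low, high = crossover(signal, 300.0, fs)
```

Because the high band is formed by subtraction, summing the two bands reconstructs the input exactly, which is one simple way to avoid gaps or bumps at the crossover point in this toy model.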
Headset-mounted and tie-clip-mounted microphones are often used with wireless transmission to allow performers or speakers to move freely. Early adopters of headset-mounted microphone technology included country singer Garth Brooks , [ 6 ] Kate Bush , and Madonna . [ 7 ] Other types of input transducers include magnetic pickups used in electric guitars and electric basses, contact microphones used on stringed instruments and pianos, and phonograph pickups (cartridges) used in record players. Electronic instruments such as synthesizers can have their output signal routed directly to the mixing console. A DI unit may be necessary to adapt some of these sources to the inputs of the console. Wireless systems are typically used for electric guitar, bass, handheld microphones and in-ear monitor systems. This lets performers move about the stage during the show or even go out into the audience without the worry of tripping over or disconnecting cables. Mixing consoles are the heart of a sound reinforcement system. This is where the sound engineer can adjust the volume and tone of each input, whether it is a vocalist's microphone or the signal from an electric bass , and mix, equalize and add effects to these sound sources. Doing the mixing for a live show requires a mix of technical and artistic skills. A sound engineer needs to have an expert knowledge of speaker and amplifier set-up, effects units and other technologies and a good "ear" for what the music should sound like in order to create a good mix. Multiple consoles can be used for different purposes in a single sound reinforcement system. The front-of-house (FOH) mixing console is typically located where the operator can see the action on stage and hear what the audience hears. For broadcast and recording applications, the mixing console may be placed within an enclosed booth or outside in an OB van .
Large music productions often use a separate stage monitor mixing console which is dedicated to creating mixes for the performers on-stage. These consoles are typically placed at the side of the stage so that the operator can communicate with the performers on stage. [ 8 ] [ a ] Small PA systems for venues such as bars and clubs are now available with features that were formerly only available on professional-level equipment, such as digital reverb effects, graphic equalizers , and, in some models, feedback prevention circuits which electronically sense and prevent audio feedback when it becomes a problem. Digital effects units may offer multiple pre-set and variable reverb, echo and related effects . Digital loudspeaker management systems offer sound engineers digital delay (to ensure speakers are in sync with each other), limiting, crossover functions, EQ filters, compression and other functions in a single rack-mountable unit. In previous decades, sound engineers typically had to transport a substantial number of rack-mounted analog effects units to accomplish these tasks. Equalizers are electronic devices that allow audio engineers to control the tone and frequencies of the sound in a channel, group (e.g., all the mics on a drumkit) or an entire stage's mix. The bass and treble controls on a home stereo are a simple type of equalizer. Equalizers exist in professional sound reinforcement systems in three forms: shelving equalizers (typically for a whole range of bass and treble frequencies), graphic equalizers and parametric equalizers . Graphic equalizers have faders (vertical slide controls) which together resemble a frequency response curve plotted on a graph. The faders can be used to boost or cut specific frequency bands. Using equalizers, frequencies that are too weak, such as a singer with modest projection in their lower register, can be boosted.
Frequencies that are too loud, such as a "boomy" sounding bass drum , or an overly resonant dreadnought guitar can be cut. Sound reinforcement systems typically use graphic equalizers with one-third octave frequency centers. These are typically used to equalize output signals going to the main loudspeaker system or the monitor speakers on stage. Parametric equalizers are often built into each channel in mixing consoles, typically for the mid-range frequencies. They are also available as separate rack-mount units that can be connected to a mixing board. Parametric equalizers typically use knobs and sometimes buttons. The audio engineer can select which frequency band to cut or boost, and then use additional knobs to adjust how much to cut or boost this frequency range. Parametric equalizers first became popular in the 1970s and have remained the program equalizer of choice for many engineers since then. A high-pass (low-cut) and/or low-pass (high-cut) filter may also be included on equalizers or audio consoles. High-pass and low-pass filters restrict a given channel's bandwidth extremes. Cutting very low-frequency sound signals (termed infrasonic , or subsonic ) reduces the waste of amplifier power which does not produce audible sound and which moreover can be hard on the subwoofer drivers. A low-pass filter to cut ultrasonic energy is useful to prevent interference from radio frequencies, lighting control, or digital circuitry creeping into the power amplifiers. Such filters are often paired with graphic and parametric equalizers to give the audio engineer full control of the frequency range. High-pass filters and low-pass filters used together function as a band-pass filter, eliminating undesirable frequencies both above and below the auditory spectrum. A band-stop filter does the opposite: it allows all frequencies to pass except for one band in the middle.
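A band-stop response can be illustrated with a simple offline FFT-masking sketch. This is a deliberately simplified, non-real-time illustration; actual hardware and plug-in band-stop filters are implemented as IIR or FIR filters rather than FFT masking, and the band edges chosen below are arbitrary example values.

```python
import numpy as np

def band_stop_fft(x, fs, f_lo, f_hi):
    """Pass all frequencies except those between f_lo and f_hi,
    by zeroing the corresponding FFT bins (offline illustration only)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs >= f_lo) & (freqs <= f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 48000
t = np.arange(fs) / fs
# bass + midrange + treble test tones
x = (np.sin(2 * np.pi * 100 * t)
     + np.sin(2 * np.pi * 1000 * t)
     + np.sin(2 * np.pi * 8000 * t))
y = band_stop_fft(x, fs, 500.0, 2000.0)   # the 1 kHz tone is removed
```

The 100 Hz and 8 kHz tones survive unchanged while the 1 kHz tone inside the stop band is eliminated, which is exactly the behaviour the prose describes.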
A feedback suppressor, using a microprocessor , automatically detects the onset of feedback and applies a narrow band-stop filter (a notch filter ) at the specific frequency or frequencies of the feedback. Dynamic range compression is designed to help the audio engineer to manage the dynamic range of audio signals. Prior to the invention of automatic compressors, audio engineers accomplished the same goal by "riding the faders", listening carefully to the mix and lowering the faders of any singer or instrument which was getting too loud. A compressor accomplishes this by reducing the gain of a signal that is above a defined level (the threshold) by a defined amount determined by the ratio setting. Most compressors available are designed to allow the operator to select a ratio within a range typically between 1:1 and 20:1, with some allowing settings of up to ∞:1. A compressor with a high compression ratio is typically referred to as a limiter . The speed at which the compressor adjusts the gain of the signal ( attack and release ) is typically adjustable, as is the final output or make-up gain of the device. Compressor applications vary widely. Some applications use limiters for component protection and gain structure control. Artistic signal manipulation using a compressor is a subjective technique widely utilized by mix engineers to improve clarity or to creatively alter the signal in relation to the program material. An example of artistic compression is the typical heavy compression used on the various components of a modern rock drum kit. The drums are processed to be perceived as sounding more punchy and full. A noise gate mutes signals below a set threshold level. A noise gate's function is, in a sense, opposite to that of a compressor. Noise gates are useful for microphones which will pick up noise that is not relevant to the program, such as the hum of a miked electric guitar amplifier or the rustling of papers on a minister's lectern.
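The threshold-and-ratio behaviour described above can be expressed as a static gain computer. This is a minimal sketch that ignores attack and release smoothing; the function name and default values are illustrative, not taken from any particular device.

```python
def compressor_gain_db(level_db, threshold_db=-10.0, ratio=4.0):
    """Static gain computer: each dB above the threshold produces
    only 1/ratio dB at the output; below it, the signal is untouched."""
    over = level_db - threshold_db
    if over <= 0:
        return level_db                  # below threshold: unity gain
    return threshold_db + over / ratio

# 0 dB in, -10 dB threshold, 4:1 ratio -> -10 + 10/4 = -7.5 dB out
# as ratio -> infinity the device acts as a limiter: output pinned
# at the threshold, matching the ∞:1 setting mentioned in the text
```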
Noise gates are also used to process the microphones placed near the drums of a drum kit in many hard rock and metal bands. Without a noise gate, the microphone for a specific instrument such as the floor tom will also pick up signals from nearby drums or cymbals. With a noise gate, the threshold of sensitivity for each microphone on the drum kit can be set so that only the direct strike and subsequent decay of the drum will be heard, not the nearby sounds. Reverberation and delay effects are widely used in sound reinforcement systems to enhance the sound of the mix and create a desired artistic effect. Reverb and delay add a sense of spaciousness to the sound. Reverb can give the effect of a singing voice or instrument being present in anything from a small room to a massive hall, or even in a space that does not exist in the physical world. The use of reverb often goes unnoticed by the audience, as it often sounds more natural than if the signal were left "dry" (without effects). [ 10 ] Many modern mixing boards designed for live sound include on-board reverb effects. Other effects include modulation effects such as flanger , phaser , and chorus , and spectral manipulation or harmonic effects such as the exciter and harmonizer . The use of effects in the live reproduction of 2010s-era pop music is often an attempt to mimic the sound of the studio version of the artist's music in a concert setting. For example, an audio engineer may use an Auto-Tune effect to produce unusual vocal sound effects that a singer used on their recordings. The appropriate type, variation, and level of effects is quite subjective and is often collectively determined by a production's audio engineer, artists, bandleader , music producer , or musical director. A feedback suppressor detects unwanted audio feedback and suppresses it, typically by automatically inserting a notch filter into the signal path of the system.
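The gating behaviour described above, where only signals above a set threshold pass, can be sketched as follows. The envelope window length and threshold are assumed values for the example, not settings from any real gate.

```python
import numpy as np

def noise_gate(x, threshold=0.05, window=64):
    """Mute samples whose short-term envelope (a moving average of the
    absolute signal over `window` samples) falls below the threshold."""
    env = np.convolve(np.abs(x), np.ones(window) / window, mode="same")
    return np.where(env >= threshold, x, 0.0)

# quiet amplifier hum (amplitude 0.01) is muted; a loud hit (0.8) passes
x = np.concatenate([0.01 * np.ones(1000), 0.8 * np.ones(1000)])
y = noise_gate(x)
```

Setting the threshold per microphone, as the drum-kit example describes, amounts to choosing a `threshold` just above the bleed level from neighbouring drums and cymbals.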
Audio feedback can create unwanted loud, screaming noises that are disruptive to the performance, and can damage speakers and performers' and audience members' ears. Audio feedback from microphones occurs when a microphone is too near a monitor or main speaker and the sound reinforcement system amplifies itself. Although audio feedback through a microphone is almost universally regarded as a negative phenomenon, many electric guitarists use guitar feedback as part of their performance. This type of feedback is intentional, so the sound engineer does not try to prevent it. A power amplifier is an electronic device that uses electrical power and circuitry to boost a line level signal and provides enough electrical power to drive a loudspeaker and produce sound. All loudspeakers, including headphones , require power amplification. Most professional audio power amplifiers also provide protection from clipping , typically as some form of limiting . A power amplifier pushed into clipping can damage loudspeakers. Amplifiers also typically provide protection against short circuits across the output and overheating. Audio engineers select amplifiers that provide enough headroom . Headroom refers to the amount by which the signal-handling capabilities of an audio system exceed a designated nominal level . [ 11 ] Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the nominal level without damaging the system or the audio signal, e.g., via clipping . Standards bodies differ in their recommendations for nominal level and headroom. Selecting amplifiers with enough headroom helps to ensure that the signal will remain clean and undistorted. Like most sound reinforcement equipment, professional power amplifiers are typically designed to be mounted within standard 19-inch racks . Rack-mounted amps are typically housed in road cases to prevent damage to the equipment during transportation.
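The headroom described above is simply the margin, in decibels, between the nominal operating level and the clip point. In the sketch below the example figures (+24 dBu clip point, +4 dBu nominal level) are illustrative values, not a universal standard; as the text notes, standards bodies differ in their recommendations.

```python
def headroom_db(clip_level_dbu, nominal_level_dbu):
    """Headroom: the margin, in dB, between the nominal operating
    level and the level at which the system clips."""
    return clip_level_dbu - nominal_level_dbu

# a console clipping at +24 dBu run at a +4 dBu nominal level
# leaves 20 dB of headroom for transient peaks
```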
Active loudspeakers have internally mounted amplifiers that have been selected by the manufacturer to match the requirements of the loudspeaker. Some active loudspeakers also have equalization, crossover and mixing circuitry built in. Since amplifiers can generate a significant amount of heat, thermal dissipation is an important factor for operators to consider when mounting amplifiers into equipment racks. [ 12 ] Many power amplifiers feature internal fans to draw air across their heat sinks. The heat sinks can become clogged with dust, which can adversely affect the cooling capabilities of the amplifier. In the 1970s and 1980s, most PAs employed heavy class AB amplifiers . In the late 1990s, power amplifiers in PA applications became lighter, smaller, more powerful, and more efficient, with the increasing use of switching power supplies and class D amplifiers , which offered significant weight- and space-savings as well as increased efficiency. Often installed in railroad stations, stadia, and airports, class D amplifiers can run with minimal additional cooling and with higher rack densities, compared to older amplifiers. Digital loudspeaker management systems (DLMS) that combine digital crossover functions, compression, limiting, and other features in a single unit are used to process the mix from the mixing console and route it to the various amplifiers. Systems may include several loudspeakers, each with its own output optimized for a specific range of frequencies (i.e. bass, midrange, and treble). Bi-amping and tri-amping of a sound reinforcement system with the aid of a DLMS results in more efficient use of amplifier power by sending each amplifier only the frequencies appropriate for its respective loudspeaker and eliminating losses associated with passive crossover circuits. A simple and inexpensive PA loudspeaker may have a single full-range loudspeaker driver , housed in a suitable enclosure. 
More elaborate, professional-caliber sound reinforcement loudspeakers may incorporate separate drivers to produce low, middle, and high frequency sounds. A crossover network routes the different frequencies to the appropriate drivers. In the 1960s, horn-loaded theater and PA speakers were commonly columns of multiple drivers mounted in a vertical line within a tall enclosure. The 1970s to early 1980s was a period of innovation in loudspeaker design with many sound reinforcement companies designing their own speakers using commercially available drivers. The areas of innovation were in cabinet design, durability, ease of packing and transport, and ease of setup. This period also saw the introduction of the hanging or flying of main loudspeakers at large concerts. During the 1980s the large speaker manufacturers started producing standard products using the innovations of the 1970s. These were mostly smaller two-way systems with 12", 15", or double 15" woofers and a high-frequency driver attached to a high-frequency horn. The 1980s also saw the start of loudspeaker companies focused on the sound reinforcement market. The 1990s saw the introduction of line arrays , where long vertical arrays of loudspeakers in smaller cabinets are used to increase efficiency and provide even dispersion and frequency response. Trapezoidal -shaped enclosures became popular as this shape allowed many of them to be easily arrayed together. This period also saw the introduction of inexpensive molded plastic speaker enclosures mounted on tripod stands. Many feature built-in power amplifiers which made them practical for non-professionals to set up and operate successfully. The sound quality available from these simple powered speakers varies widely depending on the implementation. Many sound reinforcement loudspeaker systems incorporate protection circuitry to prevent damage from excessive power or operator error.
Resettable fuses , specialized current-limiting light bulbs, and circuit breakers were used alone or in combination to reduce driver failures. During the same period, the professional sound reinforcement industry made the Neutrik Speakon NL4 and NL8 connectors the standard speaker connectors, replacing 1/4" jacks , XLR connectors , and Cannon multipin connectors which are all limited to a maximum of 15 amps of current. XLR connectors are still the standard input connector on active loudspeaker cabinets. To help users avoid overpowering them, loudspeakers have a power rating (in watts ) which indicates their maximum power capacity. Thanks to the efforts of the Audio Engineering Society (AES) and the loudspeaker industry group ALMA in developing the EIA-426 testing standard, power-handling specifications became more trustworthy. Lightweight, portable speaker systems for small venues route the low-frequency parts of the music (electric bass, bass drum, etc.) to a powered subwoofer . Routing the low-frequency energy to a separate amplifier and subwoofer can substantially improve the bass response of the system. Also, clarity may be enhanced because low-frequency sounds can cause intermodulation and other distortion in speaker systems. Professional sound reinforcement speaker systems often include dedicated hardware for safely flying them above the stage area, to provide more even sound coverage and to maximize sightlines within performance venues. Monitor loudspeakers , also called foldback loudspeakers, are speaker cabinets used onstage to help performers to hear their singing or playing. As such, monitor speakers are pointed towards a performer or a section of the stage. They are generally sent a different mix of vocals or instruments than the mix that is sent to the main loudspeaker system. Monitor loudspeaker cabinets are often a wedge shape, directing their output upwards towards the performer when set on the floor of the stage. 
Simple two-way, dual-driver designs with a speaker cone and a horn are common, as monitor loudspeakers need to be smaller to save space on the stage. These loudspeakers typically require less power and volume than the main loudspeaker system, as they only need to provide sound for a few people who are in relatively close proximity to the loudspeaker. Some manufacturers have designed loudspeakers for use either as a component of a small PA system or as a monitor loudspeaker. A number of manufacturers produce powered monitor speakers , which contain an integrated amplifier. Using monitor speakers instead of in-ear monitors typically results in an increase in stage volume, which can lead to more feedback issues and progressive hearing damage for the performers in front of them. [ 13 ] The clarity of the mix for the performer on stage is also typically compromised as they hear more extraneous noise from around them. The use of monitor loudspeakers, active (with an integrated amplifier) or passive, requires more cabling and gear on stage, resulting in a more cluttered stage. These factors, amongst others, have led to the increasing popularity of in-ear monitors. In-ear monitors are headphones that have been designed for use as monitors by a live performer. They are either of a universal-fit or a custom-fit design. Universal-fit in-ear monitors feature rubber or foam tips that can be inserted into virtually anybody's ear. Custom-fit in-ear monitors are created from an impression of the user's ear that has been made by an audiologist . In-ear monitors are almost always used in conjunction with a wireless transmitting system, allowing the performer to freely move about the stage while receiving their monitor mix. In-ear monitors offer considerable isolation for the performer using them: no on-stage sound is heard and the monitor engineer can deliver a much more accurate and clear mix for the performer.
With in-ear monitors, each performer can be sent their own customized mix, as was also possible with monitor speakers; unlike monitor speakers, however, one performer's in-ear mix cannot be heard by the other musicians. A downside of this isolation is that the performer cannot hear the crowd or the comments from other performers on stage that do not have microphones (e.g., if the bass player wishes to communicate with the drummer). This has been remedied in larger productions by setting up microphones facing the audience that can be mixed into the in-ear monitor sends. [ 13 ] Since their introduction in the mid-1980s, in-ear monitors have grown to be the most popular monitoring choice for large touring acts. The reduction or elimination of loudspeakers other than instrument amplifiers on stage has allowed for cleaner and less problematic mixing for both the front of house and monitor engineers. Audio feedback is greatly reduced and there is less sound reflecting off the back wall of the stage out into vocal mics and the audience, which improves the clarity of the front-of-house mix. Sound reinforcement systems are used in a broad range of different settings, each of which poses different challenges. Audio-visual rental systems have to be able to withstand heavy use and even abuse from renters. For this reason, rental companies tend to own speaker cabinets that are heavily braced and protected with steel corners, and electronic equipment such as power amplifiers or effects are often mounted into protective road cases. Rental companies also tend to select gear that have electronic protection features, such as speaker-protection circuitry and amplifier limiters. Rental systems for non-professionals need to be easy to use and set up and they must be easy to repair and maintain for the renting company. From this perspective, speaker cabinets need to have easy-to-access horns, speakers, and crossover circuitry, so that repairs or replacements can be made.
Many touring acts and large venue corporate events will rent large sound reinforcement systems that typically include one or more audio engineers on staff with the renting company. In the case of rental systems for tours, there are typically several audio engineers and technicians from the rental company that tour with the band to set up and calibrate the equipment. The individual that mixes the band is often selected and provided by the band, as they are familiar with the various aspects of the show and understand how the band wants the show to sound. Setting up sound reinforcement for live music clubs and dance events often poses unique challenges, because there is such a large variety of venues that are used as clubs, ranging from former warehouses or music theaters to small restaurants or basement pubs with concrete walls. Dance events may be held in huge warehouses, aircraft hangars or outdoor spaces. In some cases, clubs are housed in multi-story venues with balconies or in L-shaped rooms, which makes it hard to get a consistent sound for all audience members. The solution is to use fill-in speakers to obtain good coverage, using a delay to ensure that the audience does not hear the same reinforced sound at different times. The number of subwoofer speaker cabinets and power amplifiers dedicated to low-frequency sounds used in a club depends on the type of club, the genres of music played there, and the size of the venue. A small coffeehouse where traditional folk, bluegrass or jazz groups are the main performers may have no subwoofers, and instead rely on the full-range main PA speakers to reproduce bass sounds. On the other hand, a club where hard rock or heavy metal music bands play or a nightclub where DJs play dance music may have multiple large subwoofers, as these genres and music styles typically use powerful, deep bass sound. 
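The delay applied to fill-in speakers, mentioned above as a way to keep the audience from hearing the same reinforced sound at different times, can be computed from the path-length difference and the speed of sound. A minimal sketch, assuming sound travels at roughly 343 m/s in air at room temperature:

```python
SPEED_OF_SOUND_M_S = 343.0   # in air at roughly 20 degrees C

def fill_delay_ms(extra_distance_m):
    """Delay for a fill speaker so its output arrives in step with
    sound from the main system, given the path-length difference."""
    return extra_distance_m / SPEED_OF_SOUND_M_S * 1000.0

# mains 34.3 m farther from the listener than the fill -> ~100 ms delay
```

In practice the computed value is a starting point; engineers typically fine-tune the delay by ear or with measurement tools, since temperature and venue geometry shift the true arrival times.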
A challenge with designing sound systems for clubs is that the sound system may need to be used for both prerecorded music played by DJs and live music. A club system designed for DJs needs a DJ mixer and space for record players . In contrast, a live music club needs a mixing board designed for live sound, an onstage monitor system, and a multicore snake cable running from the stage to the mixer. Clubs that feature both types of shows may face challenges providing the desired equipment and set-up for both uses. Clubs can be a hostile environment for sound gear, in that the air may be hot, humid, and smoky. In some clubs, keeping power amplifiers cool may be a challenge. Churches and similar houses of worship often pose design challenges. Speakers may need to be unobtrusive to blend in with antique woodwork and stonework. In some cases, audio designers have designed custom-painted speaker cabinets. Some facilities, such as sanctuaries or chapels are long rooms with low ceilings and additional fill-in speakers are needed throughout the room to give good coverage. Once installed, church systems are often operated by amateur volunteers from the congregation, which means that they must be easy to operate and troubleshoot. To this end, some mixing consoles designed for houses of worship have automatic mixers, which turn down unused channels to reduce noise, and automatic feedback elimination circuits which detect and notch out frequencies that are feeding back. These features may also be available in multi-function consoles used in convention facilities and multi-purpose venues. Touring sound systems are available in many different sizes and shapes as they have to be powerful and versatile enough to cover many different halls and venues. Touring systems range from mid-sized systems for bands playing nightclub and other mid-sized venues to large systems for groups playing stadiums , arenas and outdoor festivals . 
Tour sound systems are often designed with substantial redundancy features, so that in the event of equipment failure or amplifier overheating, the system will continue to function. Touring systems for bands performing for crowds of a few thousand people and up are typically set up and operated by a team of technicians and engineers who travel with the performers to every show. Mainstream bands that are going to perform in mid- to large-sized venues during their tour schedule one to two weeks of technical rehearsal with the entire concert system and production staff, including audio engineers, at hand. This allows the audio and lighting engineers to become familiar with the show and establish presets on their digital equipment (e.g., digital mixers) for each part of the show, if needed. Many modern musical groups work with their front of house and monitor mixing engineers during this time to establish what their general idea is of how the show and mix should sound, both for themselves on stage and for the audience. This often involves programming different effects and signal processing for use on specific songs, to make the songs sound somewhat similar to the studio versions. To manage a show with a lot of effects changes, the mixing engineers for the show often choose to use a digital mixing console so that they can save and automatically recall these many settings in between each song. This time is also used by the system technicians to get familiar with the specific combination of gear that is going to be used on the tour and how it acoustically responds during the show. These technicians remain busy during the show, making sure the SR system is operating properly and that the system is tuned correctly, as the acoustic response of a room or venue will respond differently throughout the day depending on the temperature, humidity, and number of people in the room or space. 
Sound for live theater, operatic theater, and other dramatic applications may pose problems similar to those of churches; theaters may be in heritage buildings where speakers and wiring are required to blend in with the architecture. The need for clear sightlines may make the use of regular speaker cabinets unacceptable; slim, low-profile speakers are often used instead. In live theater and drama, performers move around onstage, which means that wireless microphones may be necessary. Some of the higher-budget theater shows and musicals are mixed in surround sound live, often with the show's sound operator triggering sound effects that are being mixed with music and dialogue by the show's mixing engineer. These systems are usually much more complex to design, typically involving separate sets of speakers for different zones in the theater. A subtle type of sound reinforcement called acoustic enhancement is used in some concert halls where classical music such as symphonies and opera is performed. Acoustic enhancement systems add more sound to the hall and prevent dead spots in the audience seating area by "...augment[ing] a hall's intrinsic acoustic characteristics." The systems use "...an array of microphones connected to a computer [which is] connected to an array of loudspeakers." However, as concertgoers have become aware of the use of these systems, debates have arisen, because "...purists maintain that the natural acoustic sound of [Classical] voices [or] instruments in a given hall should not be altered." [ 14 ] Kai Harada's article Opera's Dirty Little Secret states that opera houses have begun using electronic acoustic enhancement systems "...to compensate for flaws in a venue's acoustical architecture."
Despite the uproar that has arisen amongst operagoers, Harada points out that none of the opera houses using acoustic enhancement systems "...use traditional, Broadway-style sound reinforcement, in which most if not all singers are equipped with radio microphones mixed to a series of unsightly loudspeakers scattered throughout the theatre." Instead, most opera houses use the sound reinforcement system for acoustic enhancement, and for subtle boosting of offstage voices, onstage dialogue, and sound effects (e.g., church bells in Tosca or thunder in Wagnerian operas). [ 15 ] These systems use microphones, computer processing "with delay, phase, and frequency-response changes", and then send the signal "... to a large number of loudspeakers placed in extremities of the performance venue." Another acoustic enhancement system, VRAS uses "...different algorithms based on microphones placed around the room." The Deutsche Staatsoper in Berlin and the Hummingbird Centre in Toronto use a LARES system. The Ahmanson Theatre in Los Angeles, the Royal National Theatre in London, and the Vivian Beaumont Theater in New York City use the SIAP system. [ 16 ] Lecture halls and conference rooms pose the challenge of reproducing speech clearly in a large hall, which may have reflective, echo -producing surfaces. One issue with reproducing speech is that the microphone used to pick up the sound of an individual's voice may also pick up unwanted sounds, such as the rustling of papers on a podium. A more tightly directional microphone may help to reduce unwanted background noises. Another challenge with doing live sound for individuals who are speaking at a conference is that, in comparison with professional singers , individuals who are invited to speak at a forum may not be familiar with how microphones work. Some individuals may accidentally point the microphone towards a speaker or monitor speaker, which may cause audio feedback . 
In some conferences, sound engineers have to provide microphones for a large number of people who are speaking, in the case of a panel conference or debate. In some cases, automatic mixers are used to control the levels of the microphones and turn off the channels for microphones that are not being spoken into, to reduce unwanted background noise and reduce the likelihood of feedback. Systems for sports facilities often have to deal with substantial echo, which can make speech unintelligible. Sports and recreational sound systems often face environmental challenges as well, such as the need for weather-proof outdoor speakers in outdoor stadiums and humidity - and splash-resistant speakers in swimming pools. Another challenge with sports sound reinforcement setups is that in many arenas and stadiums, the spectators are on all four sides of the playing field. This requires 360-degree sound coverage. This is very different from the norm with music festivals and music halls, where the musicians are on stage and the audience is seated in front of the stage. Large-scale sound reinforcement systems are designed, installed, and operated by audio engineers and audio technicians. During the design phase of a newly constructed venue, audio engineers work with architects and contractors, to ensure that the proposed design will accommodate the speakers and provide an appropriate space for sound technicians and the racks of audio equipment. Audio engineers will also provide advice on which audio components would best suit the space and its intended use, and on the correct placement and installation of these components. During the installation phase, audio engineers ensure that high-power electrical components are safely installed and connected and that ceiling or wall-mounted speakers are properly mounted (or "flown") onto rigging . 
When the sound reinforcement components are installed, the audio engineers test and calibrate the system so that its sound production will be even across the frequency spectrum. A sound reinforcement system should be able to accurately reproduce a signal from its input, through any processing, to its output without any coloration or distortion. However, due to inconsistencies in venue sizes, shapes, building materials, and even crowd densities, this is not always possible without prior calibration of the system. This can be done in one of several ways. The oldest method of system calibration involves a set of healthy ears, test program material (i.e. music or speech), a graphic equalizer, and a familiarity with the desired frequency response. One must then listen to the program material through the system, take note of any noticeable frequency deviation or resonances, and correct them using the equalizer. Engineers typically use a familiar playlist to calibrate a new system. This by ear process is still done by many engineers, even when analysis equipment is used, as a final check of how the system sounds with music or speech playing through the system. Another method of manual calibration requires a pair of high-quality headphones patched into the input signal before any processing. [ b ] One can then use this direct signal as a reference with which to identify any differences in frequency response. [ 17 ] Since the development of digital signal processing (DSP), there have been many pieces of equipment and computer software designed to shift the bulk of the work of system calibration from human auditory interpretation to software algorithms that run on microprocessors. One tool for calibrating a sound system is a real-time analyzer (RTA). This tool is usually used by piping pink noise into the system and measuring the result with a special calibrated microphone connected to the RTA. 
Using this information, the system can be adjusted to help achieve the desired frequency response. More recently, sound engineers have seen the introduction of dual fast-Fourier transform (FFT) based audio analysis software, such as Smaart , which allows an engineer to view not only the frequency response information that an RTA provides, but also information in the time domain. This provides the engineer with much more meaningful data than an RTA alone. Dual FFT analysis allows one to compare the source signal with the output signal. A system can be calibrated using normal program material instead of pink noise or other special test signals. Calibration can be monitored during a performance. Professional audio stores sell microphones, speaker enclosures , monitor speakers, mixing boards , rack-mounted effects units and related equipment designed for use by audio engineers and technicians. Stores often use the word professional or pro in their name or the description of their store, to differentiate their stores from consumer electronics stores , which sell consumer-grade loudspeakers , home cinema equipment, and amplifiers, which are designed for private, in-home use.
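The dual-FFT comparison described above, measuring a system's response by comparing its output against the source signal, can be illustrated with a toy example. The naive DFT and the simulated system (a flat 0.5x gain) are illustrative assumptions, not any particular analyzer's method:

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform, adequate for a short illustration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Reference (input) signal: one cycle of a sine across 64 samples.
N = 64
ref = [math.sin(2 * math.pi * n / N) for n in range(N)]
# Measured (output) signal: the same signal after a system with 0.5x gain.
meas = [0.5 * s for s in ref]

REF, MEAS = dft(ref), dft(meas)
k = 1  # frequency bin containing the test tone
gain_db = 20 * math.log10(abs(MEAS[k]) / abs(REF[k]))
print(round(gain_db, 1))  # about -6.0 dB
```

A real analyzer does this continuously across all bins with windowed FFTs of the live program material, which is why calibration can proceed during a performance.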
https://en.wikipedia.org/wiki/Sound_reinforcement_system
Sound scenography (also known as acoustic scenography ) [ 1 ] is the process of staging spaces and environments through sound. [ 2 ] It combines expertise from the fields of architecture , acoustics , communication , sound design and interaction design to convey artistic, historical, scientific, or commercial content or to establish atmospheres and moods. [ 3 ] Initially developed as a sub-discipline of scenography , it is now primarily used in the context of exhibitions, museums, media installations and trade fairs, as well as shops, adventure parks, spas, reception areas, and open-plan offices. [ 4 ] Distinct from other applications in sound design , spatial localisation plays a central role in sound scenography. Sound in contexts such as film soundtracks offers a synchronised and standardised listening experience. The sound experience should be the same for every visitor at every position (and in every cinema). Because exhibition spaces are freely traversable and show audio-visual content at various stations across the room, sound scenography aims at providing every visitor with an individual listening experience with distinct start and end points as well as a distinct progression. Thus, the dramaturgy of the sound experience is no longer determined by the timeline of the soundtrack, but by the position and movement of the visitor. [ 5 ] Spaces can be staged with sound in various ways. Rooms have different tonal properties and acoustics depending on their architecture and interior design. Live musicians can spread across the room or play in motion, which is especially common in spatial music. [ 6 ] The reproduction of sounds via loudspeakers offers a wide range of possibilities for integrating sound into spaces and is therefore the most commonly used method. In that context, sound scenography is influenced by various practices in the wider field of sound design and composition, such as generative music , sonic interaction design , and sound masking .
[ 7 ] Loudspeaker systems used to distribute sound range from standard spatial audio setups to the more customised distributions common in sound installation , such as the Acousmatic Room Orchestration System . [ 8 ] The spatial integration of sound delivered via headphones is a defining feature of interactive soundwalks . Leveraging technologies such as geolocation and head tracking , sounds are used to augment real environments in what the BBC's R&D department calls "Audio AR". [ 9 ] In the more controlled environment of an exhibition, this approach has been used to create fully virtual sound environments. [ 10 ] Sound scenography performs many of the established functions of sound in film soundtracks. It gives emotional connotations to spaces, exhibits or even individual interactions through the use of sound. Soundscapes are used to establish atmospheres and moods with varying degrees of realism. Sound content is also used to evoke memories and associations. Soundscapes and musical accents clarify visual content or re-contextualise it. Content can also be conveyed purely sonically without accompanying visual media. [ 11 ] Especially in connection with large-scale video projection, sound is used to direct the viewer's attention. In all these application areas, sound scenography relates the different sonic components of an exhibition to one another in order to create a coherent overall soundscape. [ 12 ]
https://en.wikipedia.org/wiki/Sound_scenography
Sound Transmission Class (or STC ) is an integer rating of how well a building partition attenuates airborne sound . In the US, it is widely used to rate interior partitions, ceilings, floors, doors, windows and exterior wall configurations. Outside the US, the ISO Sound Reduction Index (SRI) is used. The STC rating very roughly reflects the decibel reduction of noise that a partition can provide. The STC is useful for evaluating annoyance due to speech sounds, but not music or machinery noise as these sources contain more low frequency energy than speech. [ 1 ] There are many ways to improve the sound transmission class of a partition, though the two most basic principles are adding mass and increasing the overall thickness. In general, the sound transmission class of a double wythe wall (e.g. two 4-inch-thick [100 mm] block walls separated by a 2-inch [51 mm] airspace) is greater than that of a single wall of equivalent mass (e.g. homogeneous 8-inch [200 mm] block wall). [ 2 ] The STC or sound transmission class is a single-number method of rating how well wall partitions reduce sound transmission. [ 3 ] The STC provides a standardized way to compare products such as doors and windows made by competing manufacturers. A higher number indicates more effective sound insulation than a lower number. The STC is a standardized rating provided by ASTM E413 based on laboratory measurements performed in accordance with ASTM E90. ASTM E413 can also be used to determine similar ratings from field measurements performed in accordance with ASTM E336. [ 3 ] Sound Isolation and Sound Insulation are used interchangeably, though the term "Insulation" is preferred outside the US. [ 4 ] The term "sound proofing" is typically avoided in architectural acoustics as it is a misnomer and connotes inaudibility. Through research, acousticians have developed tables that pair a given STC rating with a subjective experience.
The table below is used to determine the degree of sound isolation provided by typical multi-family construction. Generally, a difference of one or two STC points between similar constructions is subjectively insignificant. [ 5 ] Tables like the one above are highly dependent on the background noise levels in the receiving room: the louder the background noise, the greater the perceived sound isolation. [ 7 ] Prior to the STC rating, the sound isolation performance of a partition was measured and reported as the average transmission loss over the frequency range 128 to 4096 Hz or 256 to 1021 Hz. [ 8 ] [ 9 ] This method is valuable for comparing homogeneous partitions that follow the mass law, but can be misleading when comparing complex or multi-leaf walls. In 1961, ASTM adopted E90-61T, which served as the basis for the STC method used today. The STC standard curve is based on European studies of multi-family residential construction, and closely resembles the sound isolation performance of a 9-inch-thick (230 mm) brick wall. [ 10 ] The STC number is derived from sound attenuation values tested at sixteen standard frequencies from 125 Hz to 4000 Hz. These transmission loss values are then plotted on a sound pressure level graph and the resulting curve is compared to a standard reference contour provided by the ASTM. [ 11 ] Sound isolation metrics, such as the STC, are measured in specially-isolated and designed laboratory test chambers. There are nearly infinite field conditions that will affect sound isolation on site when designing building partitions and enclosures. Sound travels through both the air and structure, and both paths must be considered when designing sound isolating walls and ceilings. To eliminate airborne sound, all air paths between the areas must be eliminated. This is achieved by making seams airtight and closing all sound leaks.
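The contour-fitting procedure described above can be sketched in a few lines. The reference-contour values and the deficiency rules (total deficiency at most 32 dB, no single band more than 8 dB below the contour) are taken from common descriptions of the ASTM E413 method; treat this as an illustrative assumption rather than the normative text:

```python
# Third-octave bands 125 Hz .. 4000 Hz; contour values relative to the
# rating point at 500 Hz (assumed from common summaries of ASTM E413).
CONTOUR = [-16, -13, -10, -7, -4, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4]

def stc(tl):
    """Return the STC for 16 transmission-loss values (dB).

    The contour is slid down from above until the sum of deficiencies
    (contour minus TL, where positive) is at most 32 dB and no single
    deficiency exceeds 8 dB; the rating is the contour's 500 Hz value.
    """
    for rating in range(120, 0, -1):
        deficiencies = [max(0, rating + c - t) for c, t in zip(CONTOUR, tl)]
        if sum(deficiencies) <= 32 and max(deficiencies) <= 8:
            return rating
    return 0

# A partition with a flat 40 dB transmission loss rates STC 40: the limit
# comes from the 32 dB total-deficiency rule in the bands above 500 Hz.
print(stc([40] * 16))
```

Because the rating is set by the worst-fitting bands, a deep dip at a single frequency (a coincidence or resonance notch) can cap the STC even when the average transmission loss is high, which is exactly the weakness of the older averaging method noted above.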
To eliminate structure-borne noise, one must create isolation systems that reduce mechanical connections between those structures. [ 12 ] Adding mass to a partition reduces the transmission of sound. This is often achieved by adding additional layers of gypsum. It is preferable to have non-symmetrical leaves, for example with different thicknesses of gypsum. [ 13 ] The effect of adding multiple layers of gypsum wallboard to a frame also varies depending on the framing type and configuration. [ 14 ] [ 15 ] Doubling the mass of a partition does not double the STC, as the STC is calculated from a non-linear decibel sound transmission loss measurement. [ 16 ] So, whereas installing an additional layer of gypsum wallboard to a light-gauge (25-ga. or lighter) steel stud partition will result in about a 5 STC-point increase, doing the same on single wood or single heavy-gauge steel will result in only 2 to 3 additional STC points. [ 14 ] [ 15 ] Adding a second additional layer (to the already three-layer system) does not result in as drastic an STC change as the first additional layer. [ 14 ] The effect of additional gypsum wallboard layers on double- and staggered-stud partitions is similar to that of light-gauge steel partitions. Due to increased mass, poured concrete and concrete blocks typically achieve higher STC values (in the mid STC 40s to the mid STC 50s) than equally thick framed walls. [ 17 ] However, the additional weight, added complexity of construction, and poor thermal insulation tend to limit masonry wall partitions as a viable sound isolation solution in many building construction projects. In recent years, gypsum board manufacturers have started to offer lightweight drywall board: Normal-weight gypsum has a nominal density of 43 pounds per cubic foot (690 kg/m³), and lightweight drywall has a nominal density of 36 pounds per cubic foot (580 kg/m³).
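The diminishing returns from added mass noted above follow from the logarithmic character of the mass law. A rough sketch using the common field-incidence approximation TL ≈ 20·log₁₀(m·f) − 47 dB (an approximation for single homogeneous leaves, not the ASTM rating procedure):

```python
import math

def mass_law_tl(surface_density_kg_m2, frequency_hz):
    """Field-incidence mass law estimate (a common approximation):
    TL is about 20*log10(m*f) - 47 dB, with m in kg/m^2 and f in Hz."""
    return 20 * math.log10(surface_density_kg_m2 * frequency_hz) - 47

# Doubling the surface mass adds about 6 dB of transmission loss,
# not a doubling of the rating.
delta = mass_law_tl(20, 500) - mass_law_tl(10, 500)
print(round(delta, 1))  # 6.0
```

This is why each extra layer of wallboard buys fewer STC points than the last, and why decoupling and damping become attractive once mass alone stops paying off.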
This does not have a large effect on the STC rating, though lightweight gypsum can significantly degrade the low frequency performance of a partition as compared to normal weight gypsum. Sound absorption entails turning acoustical energy into some other form of energy, usually heat. [ 18 ] Adding absorptive materials to the interior surfaces of rooms, for example fabric-faced fiberglass panels and thick curtains, will result in a decrease of reverberated sound energy within the room. However, absorptive interior surface treatments of this kind do not significantly improve the sound transmission class. [ 19 ] Installing absorptive insulation, for example fiberglass batts and blow-in cellulose, into the wall or ceiling cavities does increase the sound transmission class significantly. [ 14 ] The presence of insulation in single 2x4 wood stud framing spaced 16 inches (410 mm) on-center results in only a few STC points. This is because a wall with 2x4 wood stud framing spaced 16 inches develops significant resonances which are not mitigated by the cavity insulation. In contrast, adding standard fiberglass insulation to an otherwise empty cavity in light-gauge (25-gauge or lighter) steel stud partitions can result in a nearly 10 STC-point improvement. Other studies have shown that fibrous insulation materials, such as mineral wool, can increase the STC by 5 to 8 points. [ 13 ] The effect of stiffness on sound isolation can relate to either the material stiffness of the sound isolating material or the stiffness caused by framing methods. Structurally decoupling the gypsum wallboard panels from the partition framing can result in a large increase in sound isolation when installed correctly. Examples of structural decoupling in building construction include resilient channels, sound isolation clips and hat channels, and staggered- or double-stud framing. 
The STC results of decoupling in wall and ceiling assemblies vary significantly depending on the framing type, air cavity volume, and decoupling material type. [ 14 ] Great care must be taken in each type of decoupled partition construction, as any fastener that becomes mechanically (rigidly) coupled to the framing can undermine the decoupling and result in drastically lower sound isolation results. [ 20 ] When two leaves are rigidly tied or coupled by a stud, the sound isolation of the system depends on the stiffness of the stud. Light-gauge steel (25-gauge or lighter) provides better sound isolation than 16-20-gauge steel, and noticeably better performance than wood studs. [ 21 ] When heavy gauge steel or wood studs are spaced 16 inches (410 mm) on center, additional resonances form which further lower the sound isolation performance of a partition. For typical gypsum stud walls, this resonance occurs in the 100–160 Hz region and is thought to be a hybrid of the mass-air-mass resonance and a bending mode resonance caused when a plate is closely supported by stiff members. [ 22 ] Single metal stud partitions are more effective than single wood stud partitions, and have been shown to increase the STC rating by up to 10 points. However, there is little difference between metal and wood studs when used in double stud partitions. [ 13 ] Double stud partitions have a higher STC than single stud. [ 13 ] In certain assemblies, increasing the stud spacing from 16 to 24 inches (410–610 mm) increases the STC rating by 2 to 3 points. [ 13 ] Though the terms sound absorption and damping are often used interchangeably when discussing room acoustics , acousticians define these as two distinct properties of sound-isolating walls. Several gypsum manufacturers offer specialty products which use constrained layer damping , which is a form of viscous damping . [ 23 ] [ 24 ] Damping generally increases the sound isolation of partitions, particularly at mid and high frequencies.
Damping is also used to improve the sound isolation performance of glazing assemblies. Laminated glazing, which consists of a polyvinyl butyral (PVB) interlayer, performs better acoustically than a non-laminated glass of equivalent thickness. [ 25 ] All holes and gaps should be filled and the enclosure hermetically sealed for sound isolation to be effective. The table below illustrates sound proofing test results from a wall partition that has a theoretical maximum loss of 40 dB from one room to the next and a partition area of 10 square meters. Even small open gaps and holes in the partition cause a disproportionate reduction in sound proofing. A 5% opening in the partition, which offers unrestricted sound transmission from one room to the next, reduced the transmission loss from 40 dB to 13 dB. A 0.1% open area will reduce the transmission loss from 40 dB to 30 dB, which is typical of walls where caulking has not been applied effectively. [ 26 ] Partitions that are inadequately sealed and contain back-to-back electrical boxes, untreated recessed lighting and unsealed pipes offer flanking paths for sound and significant leakage. [ 27 ] Acoustic joint tapes and caulking have been used to improve sound isolation since the early 1930s. [ 28 ] Although the application of tapes was largely limited to defense and industrial applications such as naval vessels and aircraft in the past, recent research has proven the effectiveness of sealing gaps and thereby improving the sound isolation performance of a partition. [ 29 ] Building codes typically allow for a 5-point tolerance between the lab-tested and field-measured STC rating; however, studies have shown that even in well-built and sealed installations the difference between the lab and field rating is highly dependent on the type of assembly. [ 30 ] By nature, the STC rating is derived from lab testing under ideal conditions. There are other versions of the STC rating to account for real-world conditions.
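The figures quoted above follow from area-weighted averaging of transmission coefficients (power ratios, not decibels). A minimal sketch, treating the opening as a path with 0 dB transmission loss:

```python
import math

def composite_tl(tl_wall_db, open_fraction):
    """Composite transmission loss of a partition in which `open_fraction`
    of the area is an unobstructed opening (TL = 0 dB).

    Transmission coefficients (power ratios) are averaged by area,
    then converted back to decibels."""
    tau_wall = 10 ** (-tl_wall_db / 10)
    tau = (1 - open_fraction) * tau_wall + open_fraction * 1.0
    return -10 * math.log10(tau)

print(round(composite_tl(40, 0.05)))   # 13 dB -- matches the 5% example
print(round(composite_tl(40, 0.001)))  # 30 dB -- matches the 0.1% example
```

Because the coefficients average linearly while the rating is logarithmic, a tiny leak dominates: the 5% opening transmits roughly 500 times more sound power than the entire remaining 95% of the 40 dB wall.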
The net sound isolation performance of a partition containing multiple sound isolating elements such as doors, windows, etc. The sound isolation performance of a partition measured in the field according to ASTM E336, normalized to account for different room finishes and the area of the tested partition (i.e. compare the same wall measured in a bare living room and an acoustically dry recording booth). The sound isolation performance of a partition measured in the field according to ASTM E336, normalized to account for the reverberation time in the room. The sound isolation performance of a partition measured in the field according to ASTM E336, not normalized to the room conditions of the test. The sound isolation performance of specific elements in a partition, as measured in the field and achieved by suppressing the effects of sound flanking paths. This can be useful for measuring walls with doors, when one is interested in removing the influence of the door on the measured field STC. The FSTC testing method was historically prescribed by ASTM E336; however, the latest version of this standard does not include FSTC. [ 31 ] The sound isolation performance of doors when measured according to ASTM E2964. [ 32 ] Section 1206 of the International Building Code 2021 states that separation between dwelling units and public and service areas must achieve STC 50 when tested in accordance with ASTM E90, or NNIC 45 if field tested in accordance with ASTM E336. However, not all jurisdictions use the IBC for their building or municipal code. Interior walls with 1 sheet of 1 ⁄ 2 -inch (13 mm) gypsum wallboard ( drywall ) on either side of 2x4 ( 1 + 1 ⁄ 2 in × 3 + 1 ⁄ 2 in or 38 mm × 89 mm) wood studs spaced 16 inches (410 mm) on-center with no fiberglass insulation filling each stud cavity have an STC of about 33. [ 14 ] When asked to rate their acoustical performance, people often describe these walls as "paper thin." They offer little in the way of privacy.
Double stud partition walls are typically constructed with varying gypsum wallboard panel layers attached to both sides of double 2x4 wood studs spaced 16 inches on-center and separated by a 1-inch (25 mm) airspace. These walls vary in sound isolation performance from the mid STC-40s into the high STC-60s depending on the presence of insulation and the gypsum wallboard type and quantity. [ 14 ] Commercial buildings are typically constructed using steel studs of varying widths, gauges, and on-center spacings. Each of these framing characteristics has an effect on the sound isolation of the partition to varying degrees. [ 15 ] There are several commercially available software packages that predict the STC ratings of partitions using a combination of theoretical models and empirically derived lab data. These programs can predict STC ratings within several points of a tested partition and are an approximation at best. [ 36 ] The Outdoor–Indoor Transmission Class (OITC) is a standard used for indicating the rate of sound transmission from outdoor noise sources into a building. It is based on the ASTM E-1332 Standard Classification for Rating Outdoor-Indoor Sound Attenuation. [ 37 ] Unlike the STC, which is based on a noise spectrum targeting speech sounds, OITC uses a source noise spectrum that considers frequencies down to 80 Hz (aircraft/rail/truck traffic) and is weighted more to lower frequencies. The OITC value is typically used to rate, evaluate, and select exterior glazing assemblies.
https://en.wikipedia.org/wiki/Sound_transmission_class
In logic and deductive reasoning , an argument is sound if it is both valid in form and has no false premises . [ 1 ] Soundness has a related meaning in mathematical logic , wherein a formal system of logic is sound if and only if every well-formed formula that can be proven in the system is logically valid with respect to the logical semantics of the system. In deductive reasoning , a sound argument is an argument that is valid and all of its premises are true (and as a consequence its conclusion is true as well). An argument is valid if, assuming its premises are true, the conclusion must be true. An example of a sound argument is the following well-known syllogism : all men are mortal; Socrates is a man; therefore, Socrates is mortal. Because of the logical necessity of the conclusion, this argument is valid; and because the argument is valid and its premises are true, the argument is sound. However, an argument can be valid without being sound. For example: all birds can fly; ostriches are birds; therefore, ostriches can fly. This argument is valid, as the conclusion must be true assuming the premises are true. However, the first premise is false. Not all birds can fly (for example, ostriches). For an argument to be sound, the argument must be valid and its premises must be true. [ 2 ] Some authors, such as Lemmon , have used the term "soundness" as synonymous with what is now meant by "validity", [ 3 ] which left them with no particular word for what is now called "soundness". But nowadays, this division of the terms is very widespread. In mathematical logic , a logical system has the soundness property if every formula that can be proved in the system is logically valid with respect to the semantics of the system. In most cases, this comes down to its rules having the property of preserving truth . [ 4 ] The converse of soundness is known as completeness . A logical system with syntactic entailment ⊢ and semantic entailment ⊨ is sound if for any sequence A₁, A₂, ..., Aₙ of sentences in its language, if A₁, A₂, ..., Aₙ ⊢ C, then A₁, A₂, ..., Aₙ ⊨ C. In other words, a system is sound when all of its theorems are validities . Soundness is among the most fundamental properties of mathematical logic. The soundness property provides the initial reason for counting a logical system as desirable. The completeness property means that every validity (truth) is provable. Together they imply that all and only validities are provable. Most proofs of soundness are trivial. For example, in an axiomatic system , proof of soundness amounts to verifying the validity of the axioms and that the rules of inference preserve validity (or the weaker property, truth). If the system allows Hilbert-style deduction , it requires only verifying the validity of the axioms and one rule of inference, namely modus ponens (and sometimes substitution). Soundness properties come in two main varieties: weak and strong soundness, of which the former is a restricted form of the latter. Weak soundness of a deductive system is the property that any sentence that is provable in that deductive system is also true on all interpretations or structures of the semantic theory for the language upon which that theory is based. In symbols, where S is the deductive system, L the language together with its semantic theory, and P a sentence of L : if ⊢ S P , then also ⊨ L P . Strong soundness of a deductive system is the property that any sentence P of the language upon which the deductive system is based that is derivable from a set Γ of sentences of that language is also a logical consequence of that set, in the sense that any model that makes all members of Γ true will also make P true. In symbols, where Γ is a set of sentences of L : if Γ ⊢ S P , then also Γ ⊨ L P .
Notice that in the statement of strong soundness, when Γ is empty, we have the statement of weak soundness. If T is a theory whose objects of discourse can be interpreted as natural numbers , we say T is arithmetically sound if all theorems of T are actually true about the standard mathematical integers. For further information, see ω-consistent theory . The converse of the soundness property is the semantic completeness property. A deductive system with a semantic theory is strongly complete if every sentence P that is a semantic consequence of a set of sentences Γ can be derived in the deduction system from that set. In symbols: whenever Γ ⊨ P , then also Γ ⊢ P . Completeness of first-order logic was first explicitly established by Gödel , though some of the main results were contained in earlier work of Skolem . Informally, a soundness theorem for a deductive system expresses that all provable sentences are true. Completeness states that all true sentences are provable. Gödel's first incompleteness theorem shows that for languages sufficient for doing a certain amount of arithmetic, there can be no consistent and effective deductive system that is complete with respect to the intended interpretation of the symbolism of that language. Thus, not all sound deductive systems are complete in this special sense of completeness, in which the class of models (up to isomorphism ) is restricted to the intended one. The original completeness proof applies to all classical models, not some special proper subclass of intended ones.
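The reduction of soundness to checking that the rules of inference preserve truth can be illustrated with a brute-force truth-table check for propositional logic. The sketch below is illustrative (the tuple encoding of formulas and the helper names are not from any standard library): it enumerates all valuations to decide semantic entailment, and confirms that modus ponens is truth-preserving.

```python
from itertools import product

# Formulas are atoms (strings) or implications encoded as ("->", left, right).

def evaluate(formula, valuation):
    """Evaluate a formula under a truth-value assignment to its atoms."""
    if isinstance(formula, str):          # atomic proposition
        return valuation[formula]
    op, left, right = formula
    if op == "->":                        # material implication
        return (not evaluate(left, valuation)) or evaluate(right, valuation)
    raise ValueError(f"unknown connective: {op}")

def atoms(formula):
    """Collect the atomic propositions occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return atoms(formula[1]) | atoms(formula[2])

def entails(premises, conclusion):
    """Semantic entailment: every valuation making all premises true
    also makes the conclusion true."""
    names = sorted(set().union(*(atoms(f) for f in premises + [conclusion])))
    for values in product([False, True], repeat=len(names)):
        v = dict(zip(names, values))
        if all(evaluate(p, v) for p in premises) and not evaluate(conclusion, v):
            return False
    return True

# Modus ponens preserves truth: from A and A -> B, infer B.
assert entails(["p", ("->", "p", "q")], "q")
# A non-entailment for contrast: q alone does not entail p.
assert not entails(["q"], "p")
```

In a Hilbert-style system whose only rule is modus ponens, a check like this (plus validity of each axiom schema) is the entire content of a soundness proof.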
https://en.wikipedia.org/wiki/Soundness
Soundproofing is any means of impeding sound propagation . Several methods are employed, including increasing the distance between the source and receiver, decoupling, using noise barriers to reflect or absorb the energy of the sound waves , using damping structures such as sound baffles for absorption , or using active anti-noise sound generators. [ 1 ] [ 2 ] Acoustic quieting and noise control can be used to limit unwanted noise. Soundproofing can reduce the transmission of unwanted direct sound waves from the source to an involuntary listener through the use of distance and intervening objects in the sound path (see sound transmission class and sound reduction index ). Soundproofing can suppress unwanted indirect sound waves such as reflections that cause echoes and resonances that cause reverberation . Sound-absorbing material controls reverberant sound pressure levels within a cavity, enclosure or room. Synthetic absorption materials are porous, such as open-cell foam (acoustic foam, soundproofing foam). [ 3 ] Fibrous absorption materials such as cellulose, mineral wool, fiberglass, and sheep's wool are more commonly used to deaden resonant frequencies within a cavity (wall, floor, or ceiling insulation), serving a dual purpose along with their thermal insulation properties. Both fibrous and porous absorption materials are used to create acoustic panels , which absorb sound reflections in a room, improving speech intelligibility. [ 4 ] [ 5 ] Porous absorbers, typically open-cell rubber foams or melamine sponges , absorb noise by friction within the cell structure. [ 6 ] Porous open-cell foams are highly effective noise absorbers across a broad range of medium-high frequencies. Performance can be less impressive at lower frequencies. The exact absorption profile of a porous open-cell foam will be determined by a number of factors including cell size, tortuosity , porosity, thickness, and density.
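The active anti-noise generation mentioned above rests on destructive interference: inverting the polarity of a waveform and summing it with the original cancels it exactly in the ideal case. The numerical sketch below uses arbitrary illustration values for the sample rate and tone; real systems must estimate the anti-noise signal adaptively rather than invert it perfectly.

```python
import math

rate = 8000          # samples per second (illustrative)
freq = 100.0         # a low-frequency hum to cancel (illustrative)

# One second of a pure tone, its polarity-inverted copy, and their sum.
noise = [math.sin(2 * math.pi * freq * n / rate) for n in range(rate)]
anti_noise = [-s for s in noise]               # 180-degree phase inversion
residual = [a + b for a, b in zip(noise, anti_noise)]

print(max(abs(s) for s in residual))  # 0.0: perfect cancellation in this idealized case
```

In practice the cancellation is never perfect, because the anti-noise must be generated with finite delay and the acoustic path between speaker and listener alters the waveform.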
The absorption aspect in soundproofing should not be confused with sound-absorbing panels used in acoustic treatments. Absorption in this sense refers to reducing a resonating frequency in a cavity by installing insulation between walls, ceilings or floors. Acoustic panels can play a role in treatment, reducing reflections that make the overall sound in the source room louder, after walls, ceilings, and floors have been soundproofed. Resonant panels, Helmholtz resonators and other resonant absorbers work by damping a sound wave as they reflect it. [ 7 ] Unlike porous absorbers, resonant absorbers are most effective at low-medium frequencies, and the absorption of resonant absorbers is matched to a narrow frequency range. Damping serves to reduce resonance in the room, by absorption or redirection through reflection or diffusion. Absorption reduces the overall sound level, whereas redirection makes unwanted sound harmless or even beneficial by reducing coherence . Damping can be separately applied to reduce the acoustic resonance in the air or to reduce mechanical resonance in the structure of the room itself or things in the room. Decoupling means creating separation between a sound source and any form of adjoining mass, hindering the direct pathway for sound transfer. The energy density of sound waves decreases as they spread out, so increasing the distance between the receiver and source results in a progressively lower intensity of sound at the receiver. In a normal three-dimensional setting, with a point source and point receptor, the intensity of sound waves will be attenuated according to the inverse square of the distance from the source. Adding dense material to a treatment helps stop sound waves from exiting a source wall, ceiling or floor. Materials include mass-loaded vinyl, soundproof sheetrock or drywall, plywood, fibreboard , concrete or rubber. Different widths and densities in soundproofing material reduce sound within a variable frequency range.
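The inverse-square behaviour described above translates directly into decibels: since intensity falls with the square of distance from a point source, the sound level drops by 20·log10(r2/r1) dB, about 6 dB per doubling of distance. A minimal sketch (the function name is illustrative):

```python
import math

def level_drop_db(r_near, r_far):
    """Free-field sound-level reduction (dB) moving from r_near to r_far
    metres from a point source: intensity ~ 1/r^2, so the drop is
    10*log10(r_far^2 / r_near^2) = 20*log10(r_far / r_near)."""
    return 20 * math.log10(r_far / r_near)

print(round(level_drop_db(1.0, 2.0), 1))   # 6.0: doubling the distance loses ~6 dB
print(round(level_drop_db(1.0, 10.0), 1))  # 20.0: ten times the distance loses 20 dB
```

This free-field figure is an upper bound on what distance alone achieves; reflections from ground and walls reduce the effective attenuation indoors.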
When sound waves hit a medium, the reflection of that sound is dependent on the dissimilarity of the material it comes in contact with. [ 8 ] Sound hitting a concrete surface will result in a much different reflection than if the sound were to hit a softer medium such as fiberglass. In an outdoor environment such as highway engineering, embankments or paneling are often used to reflect sound upwards into the sky. If a specular reflection from a hard flat surface is giving a problematic echo then an acoustic diffuser may be applied to the surface. It will scatter sound in all directions. In active noise control , a microphone is used to pick up the sound that is then analyzed by a computer; then, sound waves with opposite polarity (180° phase at all frequencies) are output through a speaker, causing destructive interference and canceling much of the noise. Residential sound programs aim to decrease or eliminate the effects of exterior noise. The main focus of a residential sound program in existing structures is the windows and doors. Solid wood doors are a better sound barrier than hollow doors. [ 9 ] Curtains can be used to dampen sound, either through use of heavy materials or through the use of air chambers known as honeycombs . Single-, double- and triple-honeycomb designs achieve relatively greater degrees of sound damping. The primary soundproofing limit of curtains is the lack of a seal at the edge of the curtain, although this may be alleviated with the use of sealing features, such as hook and loop fastener, adhesive, magnets, or other materials. The thickness of glass will play a role when diagnosing sound leakage. Double-pane windows achieve somewhat greater sound damping than single-pane windows when well-sealed into the opening of the window frame and wall. [ 10 ] Significant noise reduction can also be achieved by installing a second interior window. 
In this case, the exterior window remains in place while a slider or hung window is installed within the same wall openings. [ 11 ] In the US, the FAA offers sound-reduction measures for homes that fall within a noise contour where the average sound level is 65 dB SPL or greater. It is part of their Residential Sound Insulation Program. The program provides solid-core wood entry doors plus windows and storm doors. [ 12 ] Sealing gaps and cracks around electrical wiring, water pipes and ductwork using acoustical caulk or spray foam will significantly reduce unwanted noise as a preliminary step for ceiling soundproofing. Acoustical caulk should be used along the perimeter of the wall and around all fixtures and duct registers to further seal the treatment. Mineral wool insulation is most commonly used in soundproofing for its density and low cost compared to other soundproofing materials. Spray foam insulation should only be used to fill gaps and cracks or as a 1-2 inch layer before installing mineral wool. Cured spray foam and other closed-cell foam can be a sound conductor. Spray foam is not porous enough to absorb sound and is also not dense enough to stop sound. [ citation needed ] An effective method to reduce impact noise is the "resilient isolation channel". [ 13 ] The channels decouple the drywall from the joists, reducing the transfer of vibration. Adding mass is the principal way to block sound; in this context, mass refers to drywall, plywood or concrete. Mass-loaded vinyl (MLV) is used to dampen or weaken sound waves between layers of mass. Use of a viscoelastic damping compound [ 14 ] or MLV converts sound waves into heat, weakening the waves before they reach the next layer of mass. It is important to use multiple layers of mass, in different widths and densities, to optimize any given soundproofing treatment. [ 15 ] Installing soundproof drywall is recommended for its higher sound transmission class (STC) value.
Soundproof drywall in combination with a viscoelastic compound may achieve a noise reduction of STC 60+. Walls are filled with mineral wool insulation. Depending on the desired level of treatment, two layers of insulation may be required. Outlets, light switches, and electrical boxes are weak points in any given soundproofing treatment. Electrical boxes should be wrapped in clay or putty and backed with MLV . After switch plates, outlet covers and lights are installed, acoustical caulking should be applied around the perimeter of the plates or fixtures. Decoupling between the joist and subfloor plywood using neoprene joist tape or u-shaped rubber spacers helps create soundproof flooring. An additional layer of plywood can be installed with a viscoelastic compound. Mass loaded vinyl , in combination with open-cell rubber or a closed-cell foam floor underlayment, will further reduce sound transmission. After applying these techniques, hardwood flooring or carpeting can be installed. Additional area rugs and furniture will help reduce unwanted reflection within the room. A room within a room (RWAR) is one method of isolating sound and preventing it from transmitting to the outside world where it may be undesirable. Most sound transfer from a room to the outside occurs through mechanical means. The vibration passes directly through the brick, woodwork and other solid structural elements. When it meets with an element such as a wall, ceiling, floor or window, which acts as a sounding board , the vibration is amplified and heard in the second space. A mechanical transmission is much faster, more efficient and more readily amplified than an airborne transmission of the same initial strength. The use of acoustic foam and other absorbent means is less effective against this transmitted vibration. The transmission can be stopped by breaking the connection between the room that contains the noise source and the outside world. This is called acoustic decoupling. 
Restaurants, schools, office businesses, and healthcare facilities use architectural acoustics to reduce noise for their customers. In the United States, OSHA has requirements regulating the length of exposure of workers to certain levels of noise. [ 16 ] For educators and students, improving the sound quality of an environment will subsequently improve student learning, concentration, and teacher-student inter-communications. In 2014, a research study conducted by Applied Science revealed 86% of students perceived their instructors more intelligibly, while 66% of students reported experiencing higher concentration levels after sound-absorbing materials were incorporated into the classroom. [ 17 ] Automotive soundproofing aims to decrease or eliminate the effects of exterior noise, primarily engine, exhaust and tire noise across a wide frequency range. A panel damping material is fitted which reduces the vibration of the vehicle's body panels when they are excited by one of the many high-energy sound sources in play when the vehicle is in use. There are many complex noises created within vehicles which change with the driving environment and speed at which the vehicle travels. [ 18 ] Significant noise reductions of up to 8 dB can be achieved by installing a combination of different types of materials. [ 19 ] The automotive environment limits the thickness of materials that can be used, but combinations of dampers, barriers, and absorbers are common. Common materials include felt, foam, polyester, and polypropylene blend materials. Waterproofing may be necessary depending on the materials used. [ 20 ] Acoustic foam can be applied in different areas of a vehicle during manufacture to reduce cabin noise. Foams also have cost and performance advantages in installation since foam material can expand and fill cavities after application and also prevent leaks and some gases from entering the vehicle. Vehicle soundproofing can reduce wind, engine, road , and tire noise. 
Vehicle soundproofing can reduce sound inside a vehicle by five to 20 decibels. [ 21 ] Surface-damping materials are very effective at reducing structure-borne noise. Passive damping materials have been used since the early 1960s in the aerospace industry. Over the years, advances in material manufacturing and the development of more efficient analytical and experimental tools to characterize complex dynamic behaviors enabled the expansion of the use of these materials into the automotive industry. Nowadays, multiple viscoelastic damping pads are usually attached to the body in order to attenuate higher-order structural panel modes that significantly contribute to the overall noise level inside the cabin. Traditionally, experimental techniques are used to optimize the size and location of damping treatments. In particular, laser vibrometer-type tests are often conducted on body-in-white structures, enabling the fast acquisition of a large number of measurement points with good spatial resolution. However, testing a complete vehicle is mostly infeasible, requiring evaluation of every subsystem individually, hence limiting the usability of this technology in a fast and efficient way. Alternatively, structural vibrations can also be acoustically measured using particle velocity sensors located near a vibrating structure. Several studies have revealed the potential of particle velocity sensors for characterizing structural vibrations, which accelerates the entire testing process when combined with scanning techniques. [ 22 ] Since the early 1970s, it has become common practice in the United States and other industrialized countries to engineer noise barriers along major highways to protect adjacent residents from intruding roadway noise .
The Federal Highway Administration (FHWA), in conjunction with the State Highway Administrations (SHA), adopted Federal Regulation (23 CFR 772) requiring each state to adopt its own policy in regard to abatement of highway traffic noise. [ 23 ] Engineering techniques have been developed to predict an effective geometry for the noise barrier design in a particular real-world situation. Noise barriers may be constructed of wood, masonry , earth or a combination thereof.
https://en.wikipedia.org/wiki/Soundproofing
Soundscape ecology is the study of the acoustic relationships between living organisms, human and other, and their environment, whether the organisms are marine or terrestrial. First appearing in the Handbook for Acoustic Ecology edited by Barry Truax in 1978, [ 1 ] the term has since sometimes been used interchangeably with the term acoustic ecology . Soundscape ecologists also study the relationships between the three basic sources of sound that comprise the soundscape: those generated by organisms are referred to as the biophony ; those from non-biological natural categories are classified as the geophony ; and those produced by humans , the anthropophony . Increasingly, soundscapes are dominated by a sub-set of anthropophony (sometimes referred to in older terminology as "anthropogenic noise"), or technophony, the overwhelming presence of electro-mechanical noise. This sub-class of noise pollution or disturbance may produce a negative effect on a wide range of organisms. Variations in soundscapes as a result of natural phenomena and human endeavor may have wide-ranging ecological effects, as many organisms have evolved to respond to acoustic cues that emanate primarily from undisturbed habitats. Soundscape ecologists use recording devices , audio tools, and elements of traditional ecological and acoustic analyses to study soundscape structure. Soundscape ecology has deepened current understandings of ecological issues and established profound visceral connections to ecological data. The preservation of natural soundscapes is now a recognized conservation goal. As an academic discipline, soundscape ecology shares some characteristics with other fields of inquiry but is also distinct from them in significant ways. [ 2 ] For instance, acoustic ecology is also concerned with the study of multiple sound sources. However, acoustic ecology, which derives from the founding work of R.
Murray Schafer and Barry Truax , primarily focuses on human perception of soundscapes. Soundscape ecology seeks a broader perspective by considering soundscape effects on communities of living organisms, human and other, and the potential interactions between sounds in the environment. [ 3 ] Compared to soundscape ecology, the discipline of bioacoustics tends to have a narrower interest in individual species’ physiological and behavioral mechanisms of auditory communication. Soundscape ecology also borrows heavily from some concepts in landscape ecology , which focuses on ecological patterns and processes occurring over multiple spatial scales. [ 2 ] [ 4 ] Landscapes may directly influence soundscapes as some organisms use physical features of their habitat to alter their vocalizations. For example, baboons and other animals exploit specific habitats to generate echoes of the sounds they produce. [ 2 ] [ 3 ] The function and importance of sound in the environment may not be fully appreciated unless one adopts an organismal perspective on sound perception, and, in this way, soundscape ecology is also informed by sensory ecology . [ 2 ] [ 4 ] Sensory ecology focuses on understanding the sensory systems of organisms and the biological function of information obtained from these systems. In many cases, humans must acknowledge that sensory modalities and information used by other organisms may not be obvious from an anthropocentric viewpoint. This perspective has already highlighted many instances where organisms rely heavily on sound cues generated within their natural environments to perform important biological functions. For example, a broad range of crustaceans are known to respond to biophony generated around coral reefs . 
Species that must settle on reefs to complete their developmental cycle are attracted to reef noise while pelagic and nocturnal crustaceans are repelled by the same acoustic signal, presumably as a mechanism to avoid predation (predator densities are high in reef habitats). [ 5 ] Similarly, juvenile fish may use biophony as a navigational cue to locate their natal reefs, [ 6 ] and may also be encouraged to resettle damaged coral reefs by playback of healthy reef sound. [ 7 ] Other species' movement patterns are influenced by geophony, as in the case of the reed frog, which is known to disperse away from the sound of fire. [ 8 ] In addition, a variety of bird and mammal species use auditory cues, such as movement noise, in order to locate prey. [ 9 ] Disturbances created by periods of environmental noise may also be exploited by some animals while foraging. For example, insects that prey on spiders concentrate foraging activities during episodes of environmental noise to avoid detection by their prey. [ 10 ] These examples demonstrate that many organisms are highly capable of extracting information from soundscapes. According to academic Bernie Krause , soundscape ecology serves as a lens into other fields including medicine, music, dance, philosophy, and environmental studies. [ 11 ] [ 2 ] Krause sees the soundscape of a given region as the sum of three separate sound sources (as described by Gage and Krause): biophony, geophony, and anthropophony. According to Krause, various combinations of these acoustic expressions across space and time generate unique soundscapes. [ citation needed ] Soundscape ecologists seek to investigate the structure of soundscapes, explain how they are generated, and study how organisms interrelate acoustically. A number of hypotheses have been proposed to explain the structure of soundscapes, particularly elements of biophony.
For instance, an ecological theory known as the acoustic adaptation hypothesis predicts that acoustic signals of animals are altered in different physical environments in order to maximize their propagation through the habitat. [ 2 ] [ 19 ] In addition, acoustic signals from organisms may be under selective pressure to minimize their frequency (pitch) overlap with other auditory features of the environment. This acoustic niche hypothesis is analogous to the classical ecological concept of niche partitioning . It suggests that acoustic signals in the environment should display frequency partitioning as a result of selection acting to maximize the effectiveness of intraspecific communication for different species. Observations of frequency differentiation among insects , birds , and anurans support the acoustic niche hypothesis. [ 20 ] [ 3 ] Organisms may also partition their vocalization frequencies to avoid overlap with pervasive geophonic sounds. For example, territorial communication in some frog species takes place partially in the high frequency ultrasonic spectrum. [ 21 ] This communication method represents an evolutionary adaptation to the frogs' riparian habitat where running water produces constant low frequency sound. Invasive species that introduce new sounds into soundscapes can disrupt acoustic niche partitioning in native communities, a process known as biophonic invasion. [ 4 ] Although adaptation to acoustic niches may explain the frequency structure of soundscapes, spatial variation in sound is likely to be generated by environmental gradients in altitude , latitude , or habitat disturbance . [ 4 ] These gradients may alter the relative contributions of biophony, geophony, and anthrophony to the soundscape. For example, when compared with unaltered habitats, regions with high levels of urban land-use are likely to have increased levels of anthrophony and decreased physical and organismal sound sources. 
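The frequency partitioning predicted by the acoustic niche hypothesis can be checked mechanically: treat each species' signal as a (low, high) frequency band and test whether any two bands overlap. The bands below are hypothetical values chosen only to illustrate the idea.

```python
# Hypothetical signalling bands (Hz) for three co-occurring species.
bands = {
    "cricket": (4000, 5000),
    "frog": (800, 1600),
    "bird": (2000, 3500),
}

def overlaps(a, b):
    """True if two (low, high) frequency bands share any range."""
    return a[0] < b[1] and b[0] < a[1]

# Check every unordered pair of species for spectral overlap.
pairs = [(x, y) for x in bands for y in bands if x < y]
partitioned = all(not overlaps(bands[x], bands[y]) for x, y in pairs)
print(partitioned)  # True: these three signal ranges are fully partitioned
```

A community whose bands pass this check occupies disjoint acoustic niches; an invasive species whose band overlaps a resident's would make the check fail, the situation described above as biophonic invasion.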
Soundscapes typically exhibit temporal patterns, with daily and seasonal cycles being particularly prominent. [ 4 ] These patterns are often generated by the communities of organisms that contribute to biophony. For example, birds chorus heavily at dawn and dusk while anurans call primarily at night; the timing of these vocalization events may have evolved to minimize temporal overlap with other elements of the soundscape. [ 4 ] [ 22 ] Acoustic information describing the environment is the primary data required in soundscape ecology studies. Technological advances have provided improved methods for the collection of such data. Automated recording systems allow for temporally replicated samples of soundscapes to be gathered with relative ease. Data collected from such equipment can be extracted to generate a visual representation of the soundscape in the form of a spectrogram . [ 2 ] Spectrograms provide information on a number of sound properties that may be subject to quantitative analysis. The vertical axis of a spectrogram indicates the frequency of a sound while the horizontal axis displays the time scale over which sounds were recorded. In addition, spectrograms display the amplitude of sound, a measure of sound intensity . Ecological indices traditionally used with species-level data, such as diversity and evenness , have been adapted for use with acoustic metrics. [ 2 ] These measures provide a method of comparing soundscapes across time or space. For example, automated recording devices have been used to gather acoustic data in different landscapes across yearlong time scales, and diversity metrics were employed to evaluate daily and seasonal fluctuations in soundscapes across sites. The decline of a habitat can also be detected by comparing measurements taken before and after a disturbance such as logging. [ 23 ] [ 2 ] Spatial patterns of sound may also be studied using tools familiar to landscape ecologists such as geographic information systems (GIS).
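Adapting a species-diversity index to acoustic data, as described above, can be as simple as computing Shannon diversity over the proportion of signal energy in each frequency band of a spectrogram. The per-band energies below are hypothetical values for illustration.

```python
import math

def shannon_diversity(band_energies):
    """Shannon diversity H = -sum(p_i * ln p_i) over the proportion of
    acoustic energy in each frequency band (zero-energy bands are ignored),
    by analogy with the species-diversity index."""
    total = sum(band_energies)
    proportions = [e / total for e in band_energies if e > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical per-band energies from two recordings:
even_soundscape = [10, 10, 10, 10]        # energy spread across bands
dominated_soundscape = [37, 1, 1, 1]      # one band (e.g. traffic hum) dominates

# A more even spread of acoustic energy scores higher.
assert shannon_diversity(even_soundscape) > shannon_diversity(dominated_soundscape)
print(round(shannon_diversity(even_soundscape), 3))  # 1.386, i.e. ln(4)
```

Comparing such index values across sites or seasons gives a single number to track, which is how the daily and seasonal fluctuations mentioned above were evaluated.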
[ 4 ] Finally, recorded samples of the soundscape can provide proxy measures for biodiversity inventories in cases where other sampling methods are impractical or inefficient. [ 24 ] These techniques may be especially important for the study of rare or elusive species that are particularly difficult to monitor in other ways. Although soundscape ecology has only recently been defined as an independent academic discipline (it was first described in 2011 and formalized at the first meeting of the International Society of Ecoacoustics, held in Paris in 2014), many earlier ecological investigations have incorporated elements of soundscape ecology theory. For instance, a large body of work has focused on documenting the effects of anthropophony on wildlife . Anthropophony (whose uncontrolled form is often used synonymously with noise pollution ) can emanate from a variety of sources, including transportation networks or industry, and may represent a pervasive disturbance to natural systems even in seemingly remote regions such as national parks . [ 9 ] A major effect of noise is the masking of organismal acoustic signals that contain information. Against a noisy background, organisms may have trouble perceiving sounds that are important for intraspecific communication, foraging, predator recognition , or a variety of other ecological functions. [ 9 ] In this way, anthropogenic noise may represent a soundscape interaction wherein increased anthropophony interferes with biophonic processes. The negative effects of anthropogenic noise impact a wide variety of taxa including fish, amphibians, birds, and mammals. [ 25 ] In addition to interfering with ecologically important sounds, anthropophony can also directly affect the biological systems of organisms. Noise exposure, which may be perceived as a threat, can lead to physiological changes. [ 9 ] For example, noise can increase levels of stress hormones , impair cognition , reduce immune function , and induce DNA damage .
[ 26 ] Although much of the research on anthropogenic noise has focused on behavioral and population-level responses to noise disturbance, these molecular and cellular systems may prove promising areas for future work. Birds have been used as study organisms in much of the research concerning wildlife responses to anthropogenic noise, and the resulting literature documents many effects that are relevant to other taxa affected by anthropophony . Birds may be particularly sensitive to noise pollution given that they rely heavily on acoustic signals for intraspecific communication. Indeed, a wide range of studies demonstrate that birds use altered songs in noisy environments. [ 25 ] Research on great tits in an urban environment revealed that male birds inhabiting noisy territories tended to use higher frequency sounds in their songs. [ 27 ] Presumably these higher-pitched songs allow male birds to be heard above anthropogenic noise, which tends to have high energy in the lower frequency range, thereby masking sounds in that spectrum. A follow-up study of multiple populations confirmed that great tits in urban areas sing with an increased minimum frequency relative to forest-dwelling birds. [ 28 ] In addition, this study suggests that noisy urban habitats host birds that use shorter songs but repeat them more rapidly. In contrast to frequency modulations, birds may simply increase the amplitude (loudness) of their songs to decrease masking in environments with elevated noise. [ 29 ] Experimental work and field observations show that these song alterations may be the result of behavioral plasticity rather than evolutionary adaptations to noise (i.e., birds actively change their song repertoire depending on the acoustic conditions they experience). [ 30 ] In fact, avian vocal adjustments to anthropogenic noise are unlikely to be the products of evolutionary change simply because high noise levels are a relatively recent selection pressure.
[ 22 ] However, not all bird species adjust their songs to improve communication in noisy environments, which may limit their ability to occupy habitats subject to anthropogenic noise. [ 31 ] In some species, individual birds establish a relatively rigid vocal repertoire when they are young, and these sorts of developmental constraints may limit their ability to make vocal adjustments later in life. [ 22 ] Thus, species that do not or cannot modify their songs may be particularly sensitive to habitat degradation as a result of noise pollution. [ 27 ] [ 31 ] Even among birds that are able to alter their songs to be better heard in environments inundated with anthropophony, these behavioral changes may have important fitness consequences. In the great tit, for example, there is a tradeoff between signal strength and signal detection that depends on song frequency. [ 32 ] Male birds that include more low frequency sounds in their song repertoire experience better sexual fidelity from their mates which results in increased reproductive success. However, low frequency sounds tend to be masked when anthropogenic noise is present, and high frequency songs are more effective at eliciting female responses under these conditions. Birds may therefore experience competing selective pressures in habitats with high levels of anthropogenic noise: pressure to call more at lower frequencies in order to improve signal strength and secure good mates versus opposing pressure to sing at higher frequencies in order to ensure that calls are detected against a background of anthrophony. In addition, use of certain vocalizations, including high amplitude sounds that reduce masking in noisy environments, may impose energetic costs that reduce fitness. 
[ 22 ] Because of the reproductive trade-offs and other stresses they impose on some birds, noisy habitats may represent ecological traps , habitats in which individuals have reduced fitness yet are colonized at rates greater than or equal to other habitats. [ 25 ] [ 33 ] Anthropophony may ultimately have population - or community -level impacts on avian fauna . One study focusing on community composition found that habitats exposed to anthropophony hosted fewer bird species than regions without noise, but both areas had similar numbers of nests. [ 34 ] In fact, nests in noisy habitats had higher survival than those laid in control habitats, presumably because noisy environments hosted fewer western scrub jays which are major nest predators of other birds. Thus, anthropophony can have negative effects on local species diversity, but the species capable of coping with noise disturbance may actually benefit from the exclusion of negative species interactions in those areas. Other experiments suggest that noise pollution has the potential to affect avian mating systems by altering the strength of pair bonds . When exposed to high amplitude environmental noise in a laboratory setting, zebra finches , a monogamous species, show a decreased preference for their mated partners. [ 35 ] Similarly, male reed buntings in quiet environments are more likely to be part of a mated pair than males in noisy locations. [ 30 ] Such effects may ultimately result in reduced reproductive output of birds subject to high levels of environmental noise. [ 36 ] In comparison to other taxa, relatively little research has been done on the effects of anthropogenic noise on insects. However, current knowledge indicates that they are likely affected by anthropogenic noise to a greater extent than many other animal groups. [ 37 ] [ 38 ] Insects, like birds, rely heavily on acoustic signals for communication, which can be disrupted by noise. 
However, while birds and other taxa often studied for effects of anthropogenic noise primarily rely on airborne acoustic signals, insects frequently utilize vibrational signals for communication. [ 39 ] The properties of vibrational signals increase the threat posed to them by anthropogenic noise. Furthermore, due to limited dispersal capacity and narrow habitat requirements, insects may be unable to avoid anthropogenic noise by moving to quieter locations. [ 38 ] Certain behavioral responses could allow insects to compensate for the presence of anthropogenic noise, but physiological and environmental constraints limit the efficacy of these strategies. As a result of interference with communication, insects are at a greater risk of experiencing negative fitness consequences due to impacts on mating, foraging, and survival. Noise that masks or distorts signals used for mate location or courtship can prevent mating from taking place. [ 40 ] Similarly, noise that prevents insects from perceiving prey or potential dangers may result in decreased foraging success and survival. [ 41 ] Vibrational signals used by most insects have the majority of their power concentrated below 2 kHz , a frequency range that is lower than most airborne communication but overlaps heavily with many types of anthropogenic noise. [ 37 ] As a result, anthropogenic noise can mask and/or distort the properties of vibrational signals. [ 38 ] Noise that overlaps acoustic signals can prevent insects from identifying intraspecific courtship signals, discerning the meaning of signals, and perceiving signals made by predator or prey species. [ 42 ] Any reduced ability to recognize and locate mates, avoid predation and other dangers, or forage for food is likely to have negative consequences for survival and reproduction. 
[ 38 ] Insects display a variety of responses to noise, such as shifting signal frequency or rate to reduce overlap with noise [ 43 ] and altering signal timing to take advantage of noise gaps. The efficacy of these responses varies depending on insects' ability to plastically modulate their behavior or signals, as well as the characteristics of the anthropogenic noise. [ 44 ] Some insects can modulate the frequencies of their signals, shifting them higher or lower to avoid overlap with other noise. [ 43 ] For example, male Chorthippus biguttulus grasshoppers, which use airborne signals, produce higher frequency signals when living by roads to avoid overlap with low frequency traffic noise. [ 43 ] Similarly, female Nezara viridula stinkbugs, which use vibrational signals, alter the dominant frequency of their calling song to avoid overlap and interference by vibratory disturbances. [ 45 ] The ability of an insect species to modulate signals is constrained by physiological limits to the range of frequencies they are capable of producing. [ 46 ] Additionally, numerous anthropogenic noises occupy a wide range of frequencies that may exceed the frequency range that insects can produce. [ 38 ] Insects may alter the timing or structure of their signals to avoid overlap with noise by changing the rate of signal production, the pacing of signal components, or the length of signal components. [ 38 ] Thermal constraints on signal rates and timing can restrict such modulation to seasons or times of day when the temperature is within an optimal range. [ 47 ] Insects can also alter their behavior in response to noise by signaling within "gaps" of anthropogenic noise, during which there is less noise and less risk of overlap. [ 46 ] This response depends on the ability both to quickly perceive a noise gap and then to initiate a signal. 
Insect species that utilize this technique include the treehopper Enchenopa binotata and the katydid Copiphora brevirostris , both of which identify gaps in wind noise to initiate signaling during short quiet periods. [ 48 ] [ 49 ] In environments where anthropogenic noise is constant, such as gas fields and wind farms , this behavioral modification is unlikely to be an option for insects. [ 47 ] Interference from anthropogenic noise on insect communication can affect mating, foraging, and survival. Disruption of mating by noise masking occurs when noise overlap reduces perception of signals and insects are unable to modulate signaling to avoid it. This can hinder species recognition and mate location, and may preclude courtship and mating altogether. [ 40 ] [ 50 ] [ 51 ] Decreased mating has been observed in multiple species as a result of interfering noise, including Schizocosa ocreata wolf spiders, Graminella nigrifrons leafhoppers, and Dendroctonus pine beetles. [ 52 ] [ 53 ] [ 54 ] Even if insects can alter signaling behavior, they still might suffer reductions in fitness if females do not recognize the altered signals or respond to them as readily as non-altered signals. [ 55 ] Under noisy conditions, females may also choose to mate with the first male encountered rather than sampling and comparing between males. [ 56 ] Noise can also affect interactions among species. When noise masks airborne or vibrational signals made by prey, insects that rely on these cues to locate prey may be unable to do so, or prey species may alter their behavior to compensate for the noise. [ 41 ] These changes can reduce foraging success, thus constraining growth and limiting reproduction. Alternatively, insects that utilize warning signals or that detect potential dangers through predator vibrations may be unable to do so, leading to increased predation rates. 
[ 57 ] While there is little research on community- or ecosystem-level impacts of anthropogenic noise on insects, studies indicate that noise can decrease the diversity and abundance of insect communities. [ 58 ] [ 59 ] Potential consequences of these shifts include cascading effects on higher levels of the food chain , reduced ecological resilience , and the disruption of critical ecosystem services such as pollination . [ 37 ] The discipline of conservation biology has traditionally been concerned with the preservation of biodiversity and the habitats that organisms depend upon. However, soundscape ecology encourages biologists to consider natural soundscapes as resources worthy of conservation efforts. Soundscapes that come from relatively untrammeled habitats have value for wildlife, as demonstrated by the numerous negative effects of anthropogenic noise on various species. [ 9 ] Organisms that use acoustic cues generated by their prey may be particularly impacted by human-altered soundscapes. [ 60 ] In this situation, the (unintentional) senders of the acoustic signals will have no incentive to compensate for masking imposed by anthropogenic sound. In addition, natural soundscapes can have benefits for human wellbeing and may help generate a distinct sense of place, connecting people to the environment and providing unique aesthetic experiences. [ 24 ] Because of the various values inherent in natural soundscapes, they may be considered ecosystem services that are provisioned by intact, functioning ecosystems . [ 2 ] Targets for soundscape conservation may include soundscapes necessary for the persistence of threatened wildlife, soundscapes that are themselves being severely altered by anthrophony, and soundscapes that represent unique places or cultural values. [ 24 ] Some governments and management agencies have begun to consider preservation of natural soundscapes as an environmental priority. 
[ 61 ] [ 62 ] [ 63 ] In the United States, the National Park Service's Natural Sounds and Night Skies Division is working to protect natural and cultural soundscapes.
https://en.wikipedia.org/wiki/Soundscape_ecology
Sour crude oil is crude oil containing a high amount of the impurity sulfur . It is common to find crude oil containing some impurities. When the total sulfur level in the oil is more than 0.5% (by weight), the oil is called "sour". [ 1 ] The impurities need to be removed before this lower-quality crude can be refined into petrol , thereby increasing the cost of processing. This results in a higher-priced gasoline than that made from sweet crude oil . [ 1 ] Current environmental regulations in the United States strictly limit the sulfur content in refined fuels such as diesel and gasoline . The majority of the sulfur in crude oil occurs bonded to carbon atoms, with a small amount occurring as elemental sulfur in solution and as hydrogen sulfide gas. Sour oil can be toxic and corrosive, especially when the oil contains higher levels of hydrogen sulfide, which is a breathing hazard. At low concentrations the gas gives the oil the smell of rotten eggs. For safety reasons, sour crude oil needs to be stabilized by having hydrogen sulfide gas (H 2 S) removed from it before being transported by oil tankers . [ 2 ] Since sour crude is more common than sweet crude in the U.S. part of the Gulf of Mexico , Platts introduced a new sour crude benchmark ( oil marker ) in March 2009 called the "Americas Crude Marker (ACM)". [ 3 ] Dubai Crude and Oman Crude, both sour crude oils, have been used as benchmark oil markers for Middle East crude oils for some time. The major producers of sour crude oil include:
https://en.wikipedia.org/wiki/Sour_crude_oil
In a Unix shell , the dot command ( . ), written as a full stop, is a command that evaluates commands in a computer file in the current execution context. [ 1 ] In the C shell , similar functionality is provided by the source command, [ 2 ] and this name is seen in "extended" POSIX shells as well. [ 3 ] [ 4 ] The dot command is not to be confused with a dot file , which is a dot-prefixed hidden file or hidden directory . Nor is it to be confused with the ./scriptfile notation for running commands, which is simply a relative path pointing to the current directory (notated in Unix as a '.' character, and typically outside of the Path variable ). The filename is the dot command's first argument . When this argument does not contain a slash , the shell searches for the file in all directories defined in the PATH environment variable . Unlike normal commands that are also found in PATH, the file to source does not have to be executable . Otherwise, the filename is treated as a simple path to the file. [ 1 ] In several "extended" shells, including bash, [ 3 ] zsh [ 4 ] and ksh, [ 5 ] one may specify parameters in a second argument. If no parameters are specified, the sourced file will receive the set of positional parameters available in the current context. If parameters are specified, the sourced file will receive only the specified parameters. In either case, parameter $0 will be the $0 of the current context. Since the sourced file is executed in the invoking context, environment [ note 1 ] changes made within it apply to the current process or shell. This is very different from scripts run directly by shebang or as sh foo.sh , which run in a new, separate process space with a separate environment. Therefore, the dot command can be used to split a big script into smaller pieces, potentially enabling modular design. Sourcing is also often done by the shell on session startup for user profile files like .bashrc and .profile . 
source is a shell-builtin command that evaluates the file following the command, as a list of commands, executed in the current context. [ 6 ] Frequently the "current context" is a terminal window into which the user is typing commands during an interactive session. The source command can be abbreviated as just a dot ( . ) in Bash and similar POSIX-ish shells. However, this is not acceptable in C shell , where the command first appeared. Some Bash scripts should be run using the source your-script syntax rather than run as an executable command, e.g., if they contain a change directory ( cd ) command and the user intends that they be left in that directory after the script is complete, or they contain an export command and the user wants to modify the environment of the current shell. Another usage situation is when a script file does not have the "execute" permission . Passing the script filename to the desired shell will run the script in a subshell , not the current context.
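The difference between running a file as a child process and sourcing it can be sketched in a few lines of POSIX shell. The file name env_setup.sh and the variable PROJECT_ROOT are hypothetical, chosen only for illustration:

```shell
# Create a hypothetical setup file that exports a variable.
cat > /tmp/env_setup.sh <<'EOF'
export PROJECT_ROOT=/tmp
EOF

unset PROJECT_ROOT                # start from a clean slate

# Running the file as a child process: the caller's environment is untouched.
sh /tmp/env_setup.sh
echo "${PROJECT_ROOT:-unset}"     # prints "unset"

# Sourcing the file with the dot command: the change applies to this shell.
. /tmp/env_setup.sh
echo "$PROJECT_ROOT"              # prints "/tmp"
```

The same behavior applies to `cd`: a directory change made in a sourced file persists in the interactive session, while one made in a child process disappears when that process exits.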
https://en.wikipedia.org/wiki/Source_(command)
In the field of epidemiology , source attribution refers to a category of methods with the objective of reconstructing the transmission of an infectious disease from a specific source, such as a population, individual, or location. For example, source attribution methods may be used to trace the origin of a new pathogen that recently crossed from another host species into humans, or from one geographic region to another . It may be used to determine the common source of an outbreak of a foodborne infectious disease, such as a contaminated water supply . Finally, source attribution may be used to estimate the probability that an infection was transmitted from one specific individual to another, i.e. , "who infected whom". Source attribution can play an important role in public health surveillance and management of infectious disease outbreaks . In practice, it tends to be a problem of statistical inference , because transmission events are seldom observed directly and may have occurred in the distant past. Thus, there is an unavoidable level of uncertainty when reconstructing transmission events from residual evidence, such as the spatial distribution of the disease. As a result, source attribution models often employ Bayesian methods that can accommodate substantial uncertainty in model parameters. Molecular source attribution is a subfield of source attribution that uses the molecular characteristics of the pathogen — most often its nucleic acid genome — to reconstruct transmission events. Many infectious diseases are routinely detected or characterized through genetic sequencing , which can be faster than culturing isolates in a reference laboratory and can identify specific strains of the pathogen at substantially higher precision than laboratory assays, such as antibody-based assays or drug susceptibility tests . On the other hand, analyzing the genetic (or whole genome ) sequence data requires specialized computational methods to fit models of transmission. 
Consequently, molecular source attribution is a highly interdisciplinary area of molecular epidemiology that incorporates concepts and skills from mathematical statistics and modeling, microbiology , public health and computational biology . There are generally two ways that molecular data are used for source attribution. First, infections can be categorized into different "subtypes" that each corresponds to a unique molecular variety, or a cluster of similar varieties. Source attribution can then be inferred from the similarity of subtypes. Individual infections that belong to the same subtype are more likely to be related epidemiologically , including direct source-recipient transmission, because they have not substantially evolved away from their common ancestor. Similarly, we assume the true source population will have frequencies of subtypes that are more similar to the recipient population, relative to other potential sources. Second, molecular (genetic) sequences from different infections can be directly compared to reconstruct a phylogenetic tree , which represents how they are related by common ancestors. The resulting phylogeny can approximate the transmission history, and a variety of methods have been developed to adjust for confounding factors. Due to the associated stigma and the criminalization of transmission for specific infectious diseases, molecular source attribution at the level of individuals can be a controversial use of data that was originally collected in a healthcare setting, with potentially severe legal consequences for individuals who become identified as putative sources. In these contexts, the development and application of molecular source attribution techniques may involve trade-offs between public health responsibilities and individual rights to data privacy . 
Microbial subtyping or strain typing is the use of laboratory methods to assign microbial samples to subtypes, which are predefined classifications based on distinct characteristics. [ 1 ] The assignment of specimens to subtypes can provide a basis of source attribution, since we assume that a pathogen undergoes minimal change when transmitted to an uninfected host. Therefore, infections of the same subtype are implied to be epidemiologically related, i.e., linked by one or more recent transmission events. The assumption that the pathogen is unchanged when transmitted is generally reasonable if the rate of evolution for the pathogen is slower than the rate of transmission, such that few mutations are observed on an epidemiological time scale. [ 2 ] For example, suppose host A is infected by a pathogen that we have categorized as subtype 1. They are more likely to have been infected by host B, who also carries the subtype 1 pathogen, than host C who carries the subtype 2 pathogen ( Figure 1 ). In other words, transmission from host B is a more parsimonious explanation if there is a relatively small probability that the pathogen population in host C evolved from subtype 1 to subtype 2 after transmission to host A. Today it is more common to use genetic sequencing to characterize the microbial sample at the level of its nucleotide sequence by sequencing the whole genome or proportions thereof. [ 3 ] However, other molecular methods such as restriction length fragment polymorphism [ 1 ] have historically played an important role in microbial subtyping before genetic sequencing became an affordable and ubiquitous technology in reference laboratories. Sequence-based typing methods confer an advantage over other laboratory methods (such as serotyping or pulsed-field gel electrophoresis [ 4 ] ) because there is an enormous number of potential subtypes that can be resolved at the level of the genetic sequence. 
Consider the above example again; however, this time host A carries the same infection subtype as many other hosts. In this case we would have no information to differentiate between these hosts as the potential source of host A's infection. Our ability to identify potential sources, therefore, depends on having a sufficient number of different subtypes. However, defining too many subtypes in the population makes it likely that every individual carries a unique subtype, especially for rapidly-evolving pathogens that can accumulate high levels of genetic diversity in a relatively short period of time. Hence, there exists an intermediate level of subtype resolution that confers the greatest amount of information for source attribution. [ 5 ] When source attribution is considered for a pathogen with high diversity, such that most specimens have unique genetic sequences, it is useful to group multiple unique sequences with a clustering method . Before whole-genome sequencing was cost-effective, targeting a specific part of the pathogen genome (a.k.a. single-locus typing) was an important step to facilitate microbial subtyping. For example, the ribosomal gene 16S is a standard target for identifying bacteria, in part because it is present across all known species and contains a mixture of conserved and variable regions. [ 6 ] Within a pathogen species, sequencing targets tended to be selected on the basis of their length, ubiquity and exposure to diversifying selection, which may be dictated by the function of the gene product for expressed regions. For example, so-called "housekeeping" or core genes have indispensable biological functions, such as copying genetic material or building proteins. These genes are often preferred candidates for microbial subtyping because they are less likely to be absent from a given genome. [ 7 ] Gene presence/absence is particularly relevant for bacteria where genetic material is frequently exchanged through horizontal gene transfer . 
[ citation needed ] Targeting multiple regions ( loci ) of the pathogen genome confers greater precision to distinguish between lineages, since the chance of observing informative genetic differences between infections is increased. This approach is referred to as multi-locus sequence typing (MLST). [ 8 ] Similar to single-locus typing, MLST requires the selection of specific loci to target for sequencing. Moreover, for subtyping to be consistent across laboratories, a reference database must be maintained that maps sequences from single or multiple loci to a fixed notation of allele numbers or designations. [ 9 ] Although single- and multiple-locus subtyping is still predominantly used for molecular epidemiology , ongoing improvements in sequencing technologies and computing power continue to lower the barrier to whole-genome sequencing. Next-generation sequencing (NGS) technologies provide cost-effective methods to generate whole genome sequences from a given sample by individually amplifying and sequencing templates in parallel, using customized technologies such as sequencing-by-synthesis . [ 10 ] Shotgun sequencing applications of NGS generate full-length genome sequences by shearing the nucleic acid extracted from the sample into small fragments that are converted into a sequencing library; the genome sequence is then reconstituted from the sequence fragments (short reads) by a de novo sequence assembler program. [ 11 ] Alternatively, short reads can be mapped to a reference genome sequence that has been converted into an index for efficient lookup of exact substring matches. This approach can be faster than de novo assembly, but relies on having a reference genome that is sufficiently similar to the genome sequence of the sample. While NGS makes it feasible to simultaneously generate full-length genome sequences from hundreds of pathogen samples in a single run, it introduces a number of other challenges. 
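The idea of indexing a reference genome for efficient lookup of exact substring matches can be illustrated with a toy k-mer index. This is only a sketch: real read mappers (such as BWA or Bowtie) use far more compact index structures like the FM-index and tolerate mismatches; the sequences and function names here are illustrative.

```python
def build_kmer_index(reference: str, k: int) -> dict:
    """Map every length-k substring (k-mer) of the reference to its start positions."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def map_read(read: str, index: dict, k: int) -> list:
    """Return candidate alignment positions for a read by exact match of its first k-mer."""
    seed = read[:k]
    return index.get(seed, [])

# Toy reference genome and a short read:
reference = "ACGTACGTTAGC"
index = build_kmer_index(reference, k=4)
print(map_read("TAGC", index, k=4))   # → [8]
print(map_read("ACGT", index, k=4))   # → [0, 4] (repeated k-mer: two candidates)
```

Building the index costs time and memory up front, but each read lookup is then a single dictionary access, which is why mapping can be much faster than de novo assembly when a close reference is available.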
For instance, NGS platforms tend to have higher sequencing error rates than conventional sequencing, and regions of the genome with long stretches of repetitive sequence can be difficult to reassemble. [ citation needed ] Whole genome sequencing (WGS) can confer a significant advantage for source attribution over single- or multiple-locus subtyping. Sequencing the entire genome is the maximal extent of multi-locus typing, in that all possible loci are covered. Having whole genome sequences will tend to make one-to-one subtyping ( Figure 1 ) less useful, since most genomes will be unique by at least one mutation for rapidly evolving pathogens. Consequently, applications of WGS for source attribution at a population level will likely have to cluster similar genomes together . [ 12 ] The breadth of coverage offered by WGS is more advantageous for the epidemiology of bacterial pathogens than viruses. Bacterial genomes tend to be longer, ranging from about 10⁶ to 10⁷ base pairs , whereas virus genomes seldom exceed 10⁶ base pairs. In addition, bacteria tend to evolve at a slower rate than viruses, so mutations tend to be distributed more sparsely throughout a bacterial genome. For example, WGS data revealed differences between isolates of Burkholderia pseudomallei from Australia and Cambodia that had otherwise appeared to be identical by multi-locus subtyping due to convergent evolution. [ 13 ] WGS has also been utilized in several recent studies to resolve transmission networks of Mycobacterium tuberculosis in greater detail, because isolates with identical multi-locus subtypes ( e.g. , MIRU-VNTR profiles targeting 24 loci) were frequently separated by large numbers of nucleotide differences in the full genome sequence, comprising roughly 4.3 million nucleotides encoding over 4,000 genes. 
[ 14 ] [ 15 ] When applied to genetic sequences, a clustering method is a set of rules for assigning the sequences to a smaller number of clusters such that members of the same cluster are more genetically similar to each other than sequences in other clusters. Put another way, a clustering method defines a partition on the set of genetic sequences using some similarity measure . Clustering is inherently subjective and there are usually no formal guidelines for setting the clustering criteria. Consequently, cluster definitions can vary substantially from one study to the next. In addition, clustering is an intuitive process that can be accomplished by a wide variety of approaches; because of this flexibility, numerous different methods of genetic clustering have been described in the literature. [ 16 ] Genetic clustering provides a way of dealing with sequences from rapidly evolving pathogens, or whole genome sequences from pathogens with less divergence. In either case, there can be an enormous number of distinct genetic sequences in the data set. If each subtype must correspond to a unique sequence variant, then one could potentially have to track an unwieldy number of microbial subtypes for these pathogens when subtypes are defined on a one-to-one basis ( Figure 1 ). The number of subtypes can be greatly reduced by expanding the definition of microbial subtypes from individually unique sequence variants to clusters of similar sequences. [ 17 ] For example, pairwise distance clustering is a nonparametric approach in which clusters are assembled from pairs of sequences that fall within a threshold distance of each other. The distance between sequences is computed by a genetic distance measure (a mathematical formula that maps two sequences to a non-negative real number ) that quantifies the evolutionary divergence between the sequences under some model of molecular evolution. 
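Pairwise distance clustering of the kind described above can be sketched in a few lines: compute a p-distance (the proportion of aligned positions at which two sequences differ) for every pair, link pairs that fall within a threshold, and take connected components as clusters. The sequences and the 0.2 threshold below are illustrative, not drawn from any real study.

```python
def p_distance(a: str, b: str) -> float:
    """Proportion of aligned positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster(seqs: list, threshold: float) -> list:
    """Clusters = connected components of the 'distance <= threshold' graph
    (union-find over all sequence pairs)."""
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if p_distance(seqs[i], seqs[j]) <= threshold:
                parent[find(i)] = find(j)   # link the two components
    groups = {}
    for i in range(len(seqs)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

seqs = ["ACGTACGT", "ACGTACGA", "TTTTACGT", "TTTTACGA"]
print(cluster(seqs, threshold=0.2))   # → [[0, 1], [2, 3]]
```

Note that because linkage is transitive, two members of the same cluster can be farther apart than the threshold, which is one reason cluster definitions vary between studies.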
When the potential sources are populations, not individuals, then we are comparing the frequencies of subtypes in the respective populations. The most likely source population should have a subtype frequency distribution that is the most similar to the reference population. Methods that employ this approach have been referred to as "frequency-based" or "frequency-matching" models. [ 18 ] These subtypes are not necessarily derived from molecular data; for instance, these methods were originally applied to microbial strains defined by non-genetic antigenic or resistance profiling. For example, the "Dutch model" [ 19 ] was originally developed to estimate the most likely source of a number of foodborne illnesses due to Salmonella by comparing the relative frequencies of bacterial subtypes (based on phage typing ) in different commercial livestock populations (including poultry, swine and cattle) through routine surveillance programs. For a given subtype, the expected number of human cases attributed to each source is proportional to the relative frequencies of that subtype among sources: λ i j = p i j ∑ j p i j n i {\displaystyle \lambda _{ij}={\frac {p_{ij}}{\sum _{j}p_{ij}}}\,n_{i}} where p i j {\displaystyle p_{ij}} is the proportion of (non-human) cases in the j {\displaystyle j} -th source population associated with subtype i {\displaystyle i} , and n i {\displaystyle n_{i}} is the number of cases of subtype i {\displaystyle i} in the recipient (human) population. For instance, if the frequencies of subtype X among three potential sources were 0.8, 0.5 and 0.1, respectively, then the expected number of cases (out of a total of 100) from the second source is 0.5 / ( 0.8 + 0.5 + 0.1 ) × 100 = 35.7 {\displaystyle 0.5/(0.8+0.5+0.1)\times 100=35.7} . This simple formula is a maximum likelihood estimator when the total force of infection from each source into the human population is uniform, e.g. , the sources have equal population sizes. 
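The "Dutch model" attribution step just described is simple enough to compute directly: the expected number of human cases of a subtype attributed to each source is that source's share of the subtype's total frequency. The function name below is illustrative; the numbers reproduce the worked example in the text.

```python
def attribute_cases(freqs, n_cases):
    """Split n_cases of one subtype across sources in proportion to the
    subtype's relative frequency in each source."""
    total = sum(freqs)
    return [p / total * n_cases for p in freqs]

# Frequencies of subtype X in three potential sources, with 100 human cases:
expected = attribute_cases([0.8, 0.5, 0.1], 100)
print([round(e, 1) for e in expected])   # → [57.1, 35.7, 7.1]
```

The attributed counts always sum to the observed number of cases, so the model partitions cases among sources rather than estimating an absolute force of infection.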
Subsequently, this model was extended by Hald and colleagues [ 20 ] to account for variation among sources and subtypes using Bayesian inference methods. This extension, typically referred to as the Hald model, has become a standard model in source attribution for food-borne illnesses. The observed number of each subtype in the human population was assumed to be a Poisson distributed outcome with a mean λ i {\displaystyle \lambda _{i}} for the i -th subtype, after adjusting for cases related to travel and outbreaks: λ i = ∑ j M j p i j q i a j {\displaystyle \lambda _{i}=\sum _{j}M_{j}\,p_{ij}\,q_{i}\,a_{j}} where q i {\displaystyle q_{i}} is the marginal effect of the i -th subtype ( e.g. , elevated infectiousness of a bacterial variant), M j {\displaystyle M_{j}} is the observed total amount ( mass ) of the j -th food source, a j {\displaystyle a_{j}} is the marginal effect of the j -th food source, and p i j {\displaystyle p_{ij}} is the same observed case proportion as in the original "Dutch" model. This model is visualized in Figure 2 . The addition of a large number of parameters to the "Dutch" model by Hald and colleagues yielded a more realistic model. However, it was too complex to solve for exact maximum likelihood estimates, in contrast to the original model. Many of the parameters could not be directly measured, such as the relative transmission risk associated with a specific food source. Consequently, Hald and colleagues adopted a Bayesian approach to estimate the model parameters. A similar approach has also been used to reconstruct the contribution of different environmental and livestock reservoirs of the bacterium Campylobacter jejuni to an outbreak of food poisoning in England, [ 21 ] where the migration of different subtypes among reservoirs was jointly estimated by Bayesian methods. Although Bayesian inference is discussed extensively elsewhere, it plays an important role in computationally intensive methods of source attribution, so we provide a brief description here. 
In the context of Bayesian inference every parameter is described by a probability distribution that represents our belief about its true value. Thus, the statistical principle that underlies Bayesian inference ( i.e., Bayes' rule ) can be expressed in terms of the model parameters ( θ {\displaystyle \theta } ) and the data ( D {\displaystyle D} ): P ( θ ∣ D ) = P ( D ∣ θ ) P ( θ ) P ( D ) {\displaystyle P(\theta \mid D)={\frac {P(D\mid \theta )\,P(\theta )}{P(D)}}} where P ( θ ∣ D ) {\displaystyle P(\theta \mid D)} , P ( D ∣ θ ) {\displaystyle P(D\mid \theta )} and P ( θ ) {\displaystyle P(\theta )} are known as the posterior , sampling ( likelihood ), and prior distributions, respectively. A simple way to think about Bayesian inference is that our prior belief about the parameters is "updated" once we have seen the data. As a result, our posterior belief becomes a compromise between our prior belief and the data. To update our belief, we need to have a sampling distribution or model that describes the probability of different outcomes of an experiment. We also require a prior distribution that represents our belief in a statistical form. While modern computation allows almost any probability distribution to be used, the uniform distribution is commonly used because it assigns the same probability to every value within some range. After incorporating new information from the data, our updated belief about the model parameters is represented by the posterior distribution . This use of distributions to represent our belief distinguishes Bayesian inference from maximum likelihood, which results in a single combination of parameter values as a point estimate . Hald and colleagues used uniform prior distributions for many of their parameters to express the prior belief that the true value fell within a continuous range with specific upper and lower limits. They constrained some parameters to take the same numerical value as others. For example, the effects of domestic and imported supplies of the same food source were linked in this manner. 
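The "prior updated by data" idea can be made concrete with a toy conjugate example for a Poisson rate, the same kind of rate parameter that appears in the Hald model. With a Gamma(α, β) prior on λ and observed Poisson counts, the posterior is Gamma(α + Σy, β + n), so no sampling is needed. All numbers below are illustrative, not taken from any real surveillance data.

```python
def update_gamma_poisson(alpha, beta, counts):
    """Conjugate update: Gamma(alpha, beta) prior on a Poisson rate,
    given observed counts, returns the posterior (alpha, beta)."""
    return alpha + sum(counts), beta + len(counts)

alpha0, beta0 = 2.0, 1.0      # prior mean = alpha/beta = 2 cases per week
counts = [5, 7, 6, 8]         # hypothetical weekly case counts (sample mean 6.5)
alpha1, beta1 = update_gamma_poisson(alpha0, beta0, counts)
print(alpha1 / beta1)         # posterior mean 28/5 = 5.6
```

The posterior mean (5.6) falls between the prior mean (2) and the sample mean (6.5), illustrating the "compromise between prior belief and data" described above; with more observations, the data increasingly dominate the prior.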
This assumption expressed a strong belief that a given food source carried the same transmission risk irrespective of its origin, and simplified the model so that it was more feasible to fit the data. Other parameters were set to a fixed reference value to further simplify the model. Hald and colleagues employed a Poisson model to describe the probability of observing the number ( Y {\displaystyle Y} ) of rare transmission events that occur at a rate λ {\displaystyle \lambda } . As described above, the rate of cases due to a specific bacterial subtype was the sum of transmission rates across all potential sources. The Hald model was more realistic than the "Dutch" model because it allowed transmission rates to vary between subtypes and food sources. However, it was not feasible to directly measure these different rates — these parameters needed to be estimated from the data. Instead of comparing the frequencies of subtypes to reconstruct the transmission of pathogens between populations, many source attribution methods compare the pathogen sequences at the level of individual hosts. One way of comparing sequences is to calculate some measure of genetic distance or similarity, a concept that we introduced earlier on the topic of pooling sequences into composite subtypes . For example, infections that are grouped into clusters are assumed to be related through one or more recent and rapid transmission events. Short genetic distances imply that limited time has passed for mutations to accumulate in lineages descending from their common ancestor. Consequently, these clusters are often referred to as "transmission clusters". Other studies have used genetic distances that exceed some threshold to rule out host individuals as potential sources of transmission. [ 22 ] [ 14 ] Although this application of clustering is related to source attribution, it is not possible to infer the direction of transmission solely from the genetic distance between infections. 
Furthermore, the genetic distance separating infections is not solely determined by the rate of transmission; for example, it is also strongly influenced by how infections are sampled from the population. [ 23 ] [ 16 ] Sequences can also be compared in the context of their shared evolutionary history. A phylogenetic tree or phylogeny is a hypothesis about the common ancestry of species or populations. In the context of molecular epidemiology , phylogenies are used to relate infections in different hosts and are usually reconstructed from genetic sequences of each pathogen population. To reconstruct the phylogeny, the sequences must cover the same parts of the pathogen genome; for example, sequences that represent multiple copies of the same gene from different infections. It is this residual similarity ( homology ) between diverging populations that implies recent common ancestry. A molecular phylogeny comprises "tips" or "leaves" that represent different genetic sequences and that are connected by branches to a series of common ancestors, eventually converging to a "root". The composition of the ancestral sequence at the root, the order of branching events, and the relative amount of change along each branch are all quantities that must be extrapolated from the observed sequences at the tips. There are multiple approaches to reconstruct a phylogenetic tree from genetic sequence variation. [ 24 ] For example, distance-based methods use a hierarchical clustering method to build up a tree based on the observed genetic distances. A common simplifying assumption in phylogenetic investigations is that the phylogenetic tree reconstructed from the data is the "true" tree — that is, an accurate representation of the common ancestry relating the sampled infections. For instance, a single tree is often used as the input for comparative methods to detect the signature of natural selection in protein-coding sequences .
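The threshold-based grouping of infections into "transmission clusters" described above can be sketched as pairwise distance calculation followed by single-linkage merging. The sequences and cutoff are toy values:

```python
from itertools import combinations

# Toy illustration of grouping sequences into "transmission clusters"
# by pairwise genetic distance below a threshold (hypothetical data).
seqs = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "TTGTACGA", "D": "ACGTACGT"}

def p_distance(s1, s2):
    """Proportion of positions that differ between two aligned sequences."""
    return sum(a != b for a, b in zip(s1, s2)) / len(s1)

# Link any pair of sequences whose distance falls under the threshold,
# then merge linked pairs into connected components (clusters).
threshold = 0.2
links = [(x, y) for x, y in combinations(seqs, 2)
         if p_distance(seqs[x], seqs[y]) < threshold]

clusters = []
for x, y in links:
    merged = [c for c in clusters if x in c or y in c]
    union = {x, y}.union(*merged) if merged else {x, y}
    clusters = [c for c in clusters if c not in merged] + [union]
```

Here sequences A, B and D fall into one cluster while C remains unclustered; as the text emphasizes, such a cluster says nothing about who transmitted to whom.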
On the other hand, if the phylogeny is handled as an uncertain estimate derived from the data (including the sequence alignment), then the analysis becomes a hierarchical model in which the problem of phylogenetic reconstruction is nested within the problem of estimating the other model parameters that are conditional on the phylogeny ( Figure 3 ). Sampling both the phylogeny and other model parameters from their joint posterior distribution using methods such as Markov chain Monte Carlo (MCMC) should confer more accurate parameter estimates. However, the greatly expanded model space also makes it more difficult for MCMC samples to converge to the posterior distribution. Such hierarchical methods are often implemented in the software package BEAST2 [ 25 ] (Bayesian Evolutionary Analysis by Sampling Trees), which provides generic routines for MCMC sampling from tree space, and calculates the likelihood of a time-scaled phylogenetic tree given sequence data and sample collection dates. There are a number of sources of phylogenetic uncertainty. For instance, the common ancestry of lineages can be difficult to reconstruct if there has been little to no evolution along the respective branches. This can occur when the rate of evolution is substantially slower than the time scale of transmission, such that mutations are unlikely to accumulate between the start of one infection and its transmission to the next host ( i.e. , the generation time). It can also arise when existing divergence is not captured due to incomplete sequencing of the respective genomes. Furthermore, reconstructing the common ancestry of lineages is progressively more uncertain as we move deeper into the tree, forcing us to extrapolate the ancestral states at greater distances from the observed data. Reconstructing phylogenies from molecular sequences generally requires a multiple sequence alignment , a table in which homologous residues in different sequences occupy the same position. 
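The Markov chain Monte Carlo sampling mentioned above can be illustrated, in highly simplified form, by a random-walk Metropolis sampler for a single rate parameter. The data and prior bounds are invented for the example; sampling tree space, as BEAST2 does, is far more involved:

```python
import math
import random

random.seed(1)

# Minimal Metropolis sketch: sample a Poisson rate lam from its
# posterior given observed counts, with a flat prior on (0, 20).
data = [4, 6, 5, 7, 5]  # hypothetical case counts

def log_posterior(lam):
    if not 0 < lam < 20:
        return -math.inf
    # log Poisson likelihood (constants dropped) + flat log-prior
    return sum(y * math.log(lam) - lam for y in data)

samples, lam = [], 1.0
for step in range(20000):
    proposal = lam + random.gauss(0, 0.5)   # symmetric random-walk proposal
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(lam):
        lam = proposal                      # accept the proposed value
    if step >= 5000:                        # discard burn-in
        samples.append(lam)

posterior_mean = sum(samples) / len(samples)
```

For this model the exact posterior mean is 5.6, and the MCMC estimate converges to it; in a hierarchical phylogenetic analysis the same accept/reject machinery is applied jointly to the tree and the other parameters.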
Although alignments are often treated as observed data known without ambiguity, the process of aligning sequences is also uncertain and can become more difficult with the rapid accumulation of sequence insertions and deletions among diverging pathogen lineages. While there are Bayesian methods that address uncertainty in alignment by jointly sampling the alignment along with the phylogeny, [ 26 ] this approach is computationally complex and is seldom used in the context of source attribution. Furthermore, sequences are themselves uncertain estimates of the genetic composition of individual pathogens or infecting populations. Next-generation sequencing technologies tend to have substantially higher error rates than conventional Sanger sequencing , [ 27 ] and analysis pipelines must be carefully validated to reduce the effects of sample cross-contamination and adapter contamination. Genetic recombination is the exchange of genetic material between individual genomes. For pathogens, recombination can occur when a cell is infected by multiple copies of the pathogen. If some hosts were infected by two or more divergent variants from different sources ( i.e., superinfection ), then recombination can produce mosaic genomes that complicate the reconstruction of an accurate phylogeny. [ 28 ] In other words, different segments of a recombinant genome may be related to other genomes through discordant phylogenies in such a way that cannot be accurately represented by a single tree. In practice, it is common to screen for recombinant sequences and discard them before reconstructing a phylogeny from an alignment that is assumed to be free of recombination. [ 29 ] The basic premise in applying phylogenetics to source attribution is that the shape of the phylogenetic tree approximates the transmission history, [ 30 ] which can also be represented by a tree where each split into two branches represents the transmission of an infection from one host to another.
In conjunction with reconstructing the transmission tree from other sources of information, such as contact tracing , reconstructing a phylogenetic tree can serve as a useful additional source of information, especially when genetic sequences are already available. Because of the visual and conceptual similarity between phylogenetic and transmission trees, it is a common assumption that the branching points (splits) of the phylogeny represent transmission events. However, this assumption will often be inaccurate. A transmission event may have occurred at any point along the two branches that separate one sampled infection from the other in the virus phylogeny ( Figure 3A ). The transmission tree only constrains the shape of the phylogenetic tree. Thus, even if we can reconstruct the phylogenetic tree without error, there are several reasons why it will not be an accurate representation of the transmission tree, including incomplete sampling, pathogen evolution within hosts, and secondary infection of the same host. Equating the phylogenetic tree with the transmission history implicitly assumes that genetic sequences have been obtained from every infected host in the epidemic. In practice, only a fraction of infected hosts are represented in the sequence data. The existence of an unknown and inevitably substantial number of unsampled infected hosts is a major challenge for source attribution. Even if the phylogenetic tree indicates that two infections are more closely related to each other than to any other sampled infection, one cannot rule out the existence of one or more unsampled hosts who are intermediate links in the "transmission chain" separating the known hosts ( Figure 3B ). Similarly, an unsampled infection may have been the source population for both observed infections at the tips of the tree ( Figure 3C ). By itself, the phylogenetic tree does not explicitly discriminate among these alternative transmission scenarios.
The shape of the phylogenetic tree may diverge from the underlying transmission history because of the evolution of diverse populations of the pathogen within each host. Individual copies of the pathogen genome that are transmitted to the next host are, by definition, no longer in the source population. A split exists in the phylogenetic tree that represents the common ancestor between the transmitted lineages and the other lineages that have remained and persisted in the source population. If we follow both sets of lineages back in time, the time of the transmission event is the most recent possible time that they could converge to a common ancestor. Put another way, the transmission event represents one extreme of a continuous range; the common ancestor may be located at any point further back in time. This process is often modelled by Kingman's coalescent , [ 31 ] which describes the number of generations we expect to follow randomly selected lineages back in time until we encounter a common ancestor. The expected time until two lineages converge to a common ancestor, known as a coalescence event, is proportional to the effective population size , which determines the number of possible ancestors. Put another way, two randomly selected people in a large city are less likely to have a great-grandparent in common than two people in a small rural community. Longer coalescence times in large, diverse within-host pathogen populations are a significant challenge for source attribution, because they uncouple the virus phylogeny from the transmission tree. For example, if a host has transmitted their infection to two others, then there can be as many as three sets of lineages whose ancestry can be traced in the source population in that host ( Figure 3D ). As a result, there is some chance that the branching order in the virus phylogeny implies a different order of transmission events if we interpret the phylogeny as equivalent to a transmission tree.
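The relationship between effective population size and coalescence time described above can be illustrated by drawing waiting times for a pair of lineages under the Kingman coalescent; the population sizes below are arbitrary:

```python
import random

random.seed(42)

# Kingman coalescent sketch for two lineages: the waiting time until
# they reach a common ancestor is exponential with mean equal to the
# effective population size N, so larger (more diverse) populations
# imply older common ancestors.
def time_to_coalescence(n_eff):
    """Generations back until two randomly chosen lineages coalesce."""
    return random.expovariate(1.0 / n_eff)

small = sum(time_to_coalescence(100) for _ in range(5000)) / 5000
large = sum(time_to_coalescence(10000) for _ in range(5000)) / 5000
# On average, coalescence takes ~100x longer in the larger population,
# mirroring the city-versus-village analogy in the text.
```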
For example, in Figure 3D hosts 1 and 3 are more closely related in the transmission history, but not in the phylogeny. Many infections can be spontaneously cleared by the host's immune system. If a host that has cleared a previously diagnosed infection becomes re-infected from another source, then it is possible for the same host to be represented by different infections in the phylogenetic and transmission trees, respectively. For example, roughly one-third of infections by hepatitis C virus are spontaneously cleared within the first six months of infection. [ 32 ] This previous exposure, however, does not confer immunity to re-infection by the same virus. [ 33 ] In addition, some individuals may become infected from multiple different sources: co-infection by multiple strains of hepatitis C virus that persist simultaneously within the same host can occur relatively frequently (ranging from 14% to 39%) in populations with a high rate of transmission, such as people who inject drugs using shared equipment. [ 34 ] The persistence of strains from additional exposures may be missed by conventional genetic sequencing techniques if they are present at low frequencies within the host, necessitating the use of "next-generation" sequencing technologies. For these reasons, the epidemiological linkage of hepatitis C virus infections through genetic similarity may be a transient phenomenon, leading some investigators to recommend using multiple virus sequences sampled from different time points of each infection for molecular epidemiology applications. [ 29 ] Ancestral reconstruction is the application of a model of evolution to a phylogenetic tree to reconstruct character states , such as nucleotide sequences or phenotypes , at the different ancestral nodes of the tree down to the root.
[ 35 ] In the context of source attribution, ancestral reconstruction is frequently used to estimate the geographic location of pathogen lineages as they are carried from one region to another by their hosts. [ 36 ] Drawing this analogy between character evolution and the spatial migration of individuals or populations is known as phylogeography , [ 37 ] where the geographic location of an ancestral population is reconstructed from the current locations of its sampled descendants under some model of migration. Migration models generally fall into two categories of discrete-state and continuous-state models. Discrete-state or island migration models assume that a given lineage is in one of a finite number of locations, and that it changes location at a constant rate over time according to a continuous-time Markov process , analogous to the models used for molecular evolution . Ancestral reconstruction with a discrete-state migration model has also been utilized to reconstruct the early spread of HIV-1 in association with development of transport networks and increasing population density in central Africa. [ 38 ] Discrete models can also be applied to the population-level source attribution of zoonotic transmissions by reconstructing different host species as ancestral character states. For example, a discrete trait model of evolution was used to reconstruct the ancestral host species in a phylogeny relating Staphylococcus aureus specimens from humans and domesticated animals. [ 39 ] Similarly, Faria and colleagues [ 40 ] analyzed the cross-species transmission of rabies virus as a discrete diffusion process along the virus phylogeny, with rates influenced by the evolutionary relatedness and geographic range overlap of the respective host species. Continuous-state migration models are more similar to models of Brownian motion in that a lineage may occupy any point within a defined space. 
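The discrete-state ("island") migration process described above can be sketched as a forward simulation of a single lineage jumping between locations at a constant rate; the locations and rate are hypothetical:

```python
import random

random.seed(7)

# Sketch of a discrete-state migration model: a lineage jumps between
# a finite set of locations as a continuous-time Markov process with a
# constant overall rate. Locations and rate are hypothetical.
locations = ["region_A", "region_B", "region_C"]
rate = 0.5  # expected migrations per unit time

def simulate_lineage(start, total_time):
    """Follow one lineage forward in time, recording each migration."""
    t, here, path = 0.0, start, [start]
    while True:
        t += random.expovariate(rate)   # exponential waiting time to next jump
        if t > total_time:
            return path
        here = random.choice([x for x in locations if x != here])
        path.append(here)

path = simulate_lineage("region_A", total_time=10.0)
```

Phylogeographic inference runs this logic in reverse: given the locations observed at the tips, it estimates the migration rates and the unobserved locations of ancestral lineages.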
Although continuous models can be more realistic than discrete migration models, they may also be more challenging to fit to data. Taken literally, a continuous model requires precise geolocation data for every infection sampled from the population. In many applications, however, these metadata are not available; for example, some studies approximate the true spatial distribution of sampled infections by the centroids of their respective regions. [ 41 ] This can become problematic if the regions vary substantially in area, and host populations are seldom uniformly distributed within regions. Paraphyly is a term that originates from the study of cladistics , an evolutionary approach to systematics that groups organisms on the basis of their common ancestry. A group of infections is paraphyletic if the group includes the most recent common ancestor, but does not include all its descendants. In other words, one group is nested within an ancestral group. For example, birds are descended from a common ancestor that in turn shares a common ancestor with all reptiles ; thus, birds are nested within the phylogeny of reptiles, making the latter a paraphyletic group. Thus, paraphyly is evidence of evolutionary precedence: the ancestor of all birds was a reptile. In the context of source attribution, paraphyly can be used as evidence that one infection preceded another. It does not provide evidence that the infection was directly transmitted from one individual to another, in part because of incomplete sampling . The application of paraphyly for source attribution requires that the phylogenetic tree relates multiple copies of the pathogen from both the putative source and recipient hosts. To elaborate, phylogenetic trees relating different infections are often reconstructed from population-based sequences (direct sequencing of the PCR amplification product), where each sequence represents the consensus of the individual pathogen genomes sampled from the infected host. 
If copies of the pathogen genome are sequenced individually by limiting dilution protocols or next-generation sequencing , then one can reconstruct a tree that represents the genealogy of individual pathogen lineages, rather than the phylogeny of pathogen populations. If sequences from host A form a monophyletic clade (in which members are the complete set of descendants from a common ancestor) that has a nested paraphyletic clade of sequences from host B, then the tree is consistent with the direction of transmission having originated from host A. [ 42 ] Directionality does not imply that host A directly transmitted their infection to host B, because the pathogen may have been transmitted through an unknown number of intermediate unsampled hosts before establishing an infection in host B. The statistical confidence in directionality of transmission from a given tree is usually quantified by the support value associated with the node that is ancestral to the nested monophyletic clade. The support of node X is the estimated probability that if we repeated the phylogenetic reconstruction on an equivalent data set, the new tree would contain exactly the same clade consisting exclusively of all descendants of node X in the original tree. In other words, it quantifies the reproducibility of that node given the data. It should not be interpreted as the probability that the clade below node X appears in the "true" tree. [ 43 ] There are generally three approaches to estimating node support: 1. Bootstrapping. Felsenstein adapted the concept of nonparametric bootstrapping to the problem of phylogenetic reconstruction by maximum likelihood. [ 44 ] Bootstrapping provides a way to characterize the sampling variation associated with the data without having to collect additional, equivalent samples. 
To start, one generates a new data set by sampling an equivalent number of nucleotide or amino acid positions at random with replacement from the multiple sequence alignment – this new data set is referred to as a "bootstrap sample". A tree is reconstructed from the bootstrap sample using the same method as the original tree. Since we are sampling sets of homologous characters (columns) from the alignment, the information on the evolutionary history contained at that position is intact. We record the presence or absence of clades from the original tree in the new tree, and then repeat the entire process until a target number of replicate trees have been processed. The frequency at which a given clade is observed in the bootstrap sample of trees quantifies the reproducibility of that node in the original tree. Non-parametric bootstrapping is a time-consuming process that scales linearly with the number of replicates, since every bootstrap sample is processed by the same method as the original tree, and post-processing steps are required to enumerate clades. The precision of estimating the node support values increases with the number of bootstrap replicates. For instance, it is not possible to obtain a node support of 99% if fewer than 100 bootstrap samples have been processed. Consequently, it is now more common to use faster approximate methods to estimate the support values associated with different nodes of the tree (for instance, see approximate likelihood-ratio testing below). 2. Bayesian sampling. Instead of using bootstrapping to resample the data, one can quantify node support by examining the uncertainty in reconstructing the phylogeny from the given data. Bayesian sampling methods such as Markov chain Monte Carlo (see Hald model ) are designed to generate a random sample of parameters from the posterior distribution given the model and data. In this case, the tree is a collection of parameters. 
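The column-resampling step of the nonparametric bootstrap described above can be sketched as follows; the toy alignment is invented for the example:

```python
import random

random.seed(0)

# Felsenstein-style bootstrap sample: draw alignment columns at random
# with replacement to create a replicate data set of the same size.
alignment = {"seq1": "ACGTAC", "seq2": "ACGTTC", "seq3": "AGGTAC"}
n_sites = len(next(iter(alignment.values())))

def bootstrap_sample(alignment, n_sites):
    """Resample homologous columns and rebuild each sequence."""
    cols = [random.randrange(n_sites) for _ in range(n_sites)]
    return {name: "".join(seq[i] for i in cols)
            for name, seq in alignment.items()}

replicate = bootstrap_sample(alignment, n_sites)
# In a full analysis one would rebuild a tree from each replicate and
# count how often each clade of the original tree reappears.
```

Because whole columns are resampled, every site in the replicate remains a homologous character shared across all sequences, which is what preserves the phylogenetic signal.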
A Bayesian estimate of node support can be extracted from this sample of trees by counting the number of trees in which the monophyletic clade that descends from that specific node appears. [ 45 ] Bayesian sampling is computationally demanding because the space of all possible trees is enormous, making convergence difficult or not feasible to attain for large data sets. [ 46 ] 3. Approximate likelihood-ratio testing. Unlike Bayesian sampling, this method is performed on a single estimate of the tree based on maximum likelihood , where the likelihood is the probability of the observed data given the tree and model of evolution. The likelihood ratio test (LRT) is a method for selecting between two models or hypotheses, where the ratio of their likelihoods is a test statistic that is mapped to a null distribution to assess statistical significance. In this application, the alternative hypothesis is that a branch in the reconstructed tree has a length of zero, which would imply that the descendant clade cannot be distinguished from its background. [ 47 ] This makes the LRT a localized analysis: it evaluates the support of a node when the rest of the tree is assumed to be true. On the other hand, this narrow scope makes the approximate LRT method computationally efficient in comparison to Bayesian sampling and bootstrap sampling. In addition to the LRT method, there are several other methods for fast approximation of bootstrap support and this remains an active area of research. [ 48 ] The interpretation of monophyletic and paraphyletic clades is contingent on whether a sufficient number of infections have been sampled from the host population. Sequences from one host can only become paraphyletic relative to sequences from a second host if the tree contains additional sequences from at least one other host in the population. 
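Counting clade frequencies across a posterior sample of trees, as described above, reduces to a simple tally. Here each sampled tree is represented by its set of tip-label clades, a hypothetical stand-in for parsed tree objects:

```python
# Bayesian node support sketch: the support for a clade is the fraction
# of trees in the posterior sample that contain it. Each "tree" below is
# reduced to its set of clades (sets of tip labels); data are invented.
posterior_trees = [
    {frozenset("AB"), frozenset("ABC")},
    {frozenset("AB"), frozenset("ABD")},
    {frozenset("AC"), frozenset("ABC")},
    {frozenset("AB"), frozenset("ABC")},
]

def clade_support(clade, trees):
    """Fraction of sampled trees that contain the given clade."""
    return sum(clade in tree for tree in trees) / len(trees)

support_AB = clade_support(frozenset("AB"), posterior_trees)    # 3 of 4 trees
support_ABC = clade_support(frozenset("ABC"), posterior_trees)  # 3 of 4 trees
```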
As noted above, there may be unsampled host individuals in a "transmission chain" connecting the putative source to the recipient host ( Figure 3B ). The incorporation of background sequences from additional hosts in the population is similar to the problem of rooting a phylogeny using an outgroup , where the root represents the earliest point in time in the tree. The location of this "root" in the section of the tree relating the sequences from the two hosts determines which host is interpreted to be the potential source. There are no formal guidelines for selecting background sequences. Typically, one incorporates sequences that were collected in the same geographic region as the two hosts under investigation. These local sequences are sometimes augmented with additional sequences that are retrieved from public databases based on their genetic similarity ( e.g. , BLAST ), which were not necessarily collected from the same region. Generally, the background data comprise consensus (bulk) sequences where each host is represented by a single sequence, unlike the putative source and recipient hosts from whom multiple clonal sequences have been sampled. Because clonal sequencing is more labor-intensive, such data are usually not available to use as background sequences. The incorporation of different types of sequences (clonal and bulk) into the same phylogeny may bias the interpretation of results, because it is not possible for sequences to be nested within the consensus sequence from a single background host. In general, phylodynamics is a subdiscipline of molecular epidemiology and phylogenetics that concerns the reconstruction of epidemiological processes, such as the rapid expansion of an epidemic or the emergence of herd immunity in the host population, from the shape of the phylogenetic tree relating infections sampled from the population. 
[ 49 ] A phylodynamic method uses tree shape as the primary data source to parameterize models representing the biological processes that influenced the evolutionary relationships among the observed infections. This process should not be confused with fitting models of evolution (such as a nucleotide substitution model or molecular clock model) to reconstruct the shape of the tree from the observed characteristics of related populations (infections), which originates from the field of phylogenetics . The relatively rapid evolution of viruses and bacteria makes it feasible to reconstruct the recent dynamics of an epidemic from the shape of the phylogeny reconstructed from infections sampled in the present. The use of phylodynamic methods for source attribution involves reconstructing the transmission tree, which cannot be directly observed, from its residual effect on the shape of the phylogenetic tree. Although there are established methods for reconstructing phylogenetic trees from the genetic divergence among pathogen populations sampled from different host individuals, there are several reasons why the phylogeny may be a poor approximation of the transmission tree ( Figure 3 ). In this context, phylodynamic methods attempt to reconcile the discordance between the phylogeny and the transmission tree by modeling one or more of the processes responsible for this discordance, and fitting these models to the data ( Figure 4 ). Given the complexity of phylodynamic models, these methods predominantly use Bayesian inference to sample transmission trees from the posterior distribution, where the transmission tree is an explicit model of "who infected whom". Although these methods can estimate the probability of a direct transmission from one individual to another, this probability is conditional on how well the model (selected from a number of possible models) approximates reality.
Below we describe models that have been implemented to incorporate, but not eliminate, the additional uncertainty caused by the various assumptions required when using the phylogenetic tree as an approximation of the transmission history. A basic simplifying assumption is that every infection in the epidemic is represented by at least one genetic sequence in the data set [ 50 ] [ 51 ] [ 52 ] (complete sampling). Although complete sampling may be feasible in circumstances such as an outbreak of disease transmission among farms in a defined geographic region, [ 53 ] it is generally not possible to rule out unsampled sources in other contexts. This is especially true for infectious diseases that are stigmatized and/or associated with marginalized populations, [ 54 ] that have a long asymptomatic period, [ 55 ] or in the context of a generalized epidemic where disease prevalence may substantially exceed the local capacity for sample collection and genetic sequencing. Several methods attempt to address the presence of unsampled hosts by modeling the growth of the epidemic over time, which predicts the total number of infected hosts at any given time. Put another way, the probability that an infection was transmitted from an unsampled source is determined in part by the total size of the infected population at the time of transmission. These models of epidemic growth are sometimes referred to as demographic models because some are derived from population growth models such as the exponential and logistic growth models. Alternatively, the number of infections can be modeled by a compartmental model that describes the rate that individual hosts switch from susceptible to infected states, and can be extended to incorporate additional states such as recovery from infection or different stages of infection . 
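A minimal compartmental model of the susceptible-infected-recovered (SIR) type, as mentioned above, can be integrated numerically in a few lines. The rates and population size are illustrative only:

```python
# Minimal SIR compartmental model integrated with Euler steps: hosts
# move from susceptible (S) to infected (I) to recovered (R).
# Parameter values are illustrative, not fitted to any data set.
beta, gamma = 0.3, 0.1      # transmission and recovery rates per day
S, I, R = 999.0, 1.0, 0.0   # closed population of 1000 hosts
N = S + I + R
dt = 0.1                    # time step in days

for _ in range(int(300 / dt)):          # simulate 300 days
    new_inf = beta * S * I / N * dt     # S -> I transitions
    new_rec = gamma * I * dt            # I -> R transitions
    S -= new_inf
    I += new_inf - new_rec
    R += new_rec

# With beta/gamma = 3 (the basic reproduction number), the epidemic
# eventually infects most of the population before burning out.
```

In a phylodynamic analysis, a trajectory like I(t) supplies the number of infected hosts at any time, which in turn sets the probability that a transmission came from an unsampled source.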
[ 56 ] [ 51 ] An important distinction between population growth and compartmental models is that the number of uninfected susceptible hosts is tracked explicitly in the latter. A phylodynamic analysis attempts to parameterize the growth model by using the phylogeny as either a direct proxy of the transmission tree, or to account for the discordance between these trees due to within-host diversity using a population genetic model, such as the coalescent ( Figure 4 ). Bayesian methods make it feasible to supplement this task with other data sources, such as the reported case incidence and/or prevalence over time. [ 57 ] The transmission process can be mapped to the size of the infected population using either a coalescent (reverse-time) model or a forward-time model such as birth-death or branching processes . Thus, the coalescent model has two different applications in phylodynamics. First, it can be used to address the confounding effect of diverse pathogen populations within hosts, by explicitly modeling the common ancestry of individual pathogens. [ 31 ] Second, the coalescent can be adapted to model the spread of infections back in time, [ 58 ] drawing an analogy between the common ancestry of individuals within hosts and the transmission of infections among hosts. This parallel has also been explored by phylodynamic models based on the structured coalescent, [ 59 ] where the population can be partitioned into two or more subpopulations ( demes ). Each deme represents an infected host individual. Due to limited migration of pathogen lineages between demes, two pathogen lineages sampled at random are more likely to share a recent common ancestor if they belong to the same deme. 
Birth-death models describe the proliferation of infections forward in time, where a "birth" event represents the transmission of an infection to an uninfected susceptible host, and a "death" event can represent either the diagnosis and treatment of an infection, or its spontaneous clearance by the host. [ 60 ] This class of models was originally formulated to describe the proliferation of species through speciation and extinction. [ 61 ] Similarly, branching processes model the growth of an epidemic forward in time where the number of transmissions from each infected host ("offspring") is described by a discrete probability distribution over non-negative integers, such as the negative binomial distribution . [ 62 ] Branching process models tend to use the simplifying assumption that this offspring distribution remains constant over time, making this class of models more appropriate for the initial stage of an epidemic where most of the population is uninfected. As noted above , the diversification of pathogen populations within each host results in a discordance between the shapes of the pathogen phylogeny and the transmission tree. Phylodynamic methods that treat the phylogeny as equivalent to the transmission tree assume implicitly that the population within each host is small enough to be approximated by a single lineage. [ 53 ] [ 49 ] [ 63 ] If the within-host population is diverse, then the time of a transmission event will tend to underestimate the time since two lineages split from their common ancestor ( Figure 3A ); this phenomenon is analogous to the incomplete lineage sorting affecting gene trees relative to the species tree. [ 64 ] The resulting discordance between the phylogenetic and transmission trees makes it more difficult to reconstruct the latter from the observed data.
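The branching-process description above can be sketched with a geometric offspring distribution (a special case of the negative binomial) whose mean is the basic reproduction number R0; the values are illustrative:

```python
import random

random.seed(3)

# Branching-process sketch: each infected host transmits to a random
# number of "offspring" infections. A geometric offspring distribution
# (a special case of the negative binomial) with mean R0 is used here.
R0 = 2.0

def n_offspring():
    """Geometric number of onward transmissions, with mean R0."""
    k = 0
    while random.random() < R0 / (1.0 + R0):  # success probability R0/(1+R0)
        k += 1
    return k

def simulate_generations(n_gen):
    """Number of infections in each generation, starting from one case."""
    sizes = [1]
    for _ in range(n_gen):
        sizes.append(sum(n_offspring() for _ in range(sizes[-1])))
    return sizes

sizes = simulate_generations(8)
```

Because the offspring distribution is held fixed, the simulated epidemic can grow without bound, which is why such models are best suited to the early phase of an outbreak when susceptible hosts are not yet depleted.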
Moreover, the effect of within-host diversity becomes even greater if there are incomplete transmission bottlenecks — where a new infection is established by more than one lineage transmitted from the source population — because the common ancestor of pathogen lineages may be located in previous hosts further back in time. [ 59 ] Source attribution is an inherently controversial application of molecular epidemiology because it identifies a specific population or individual as being responsible for the onward transmission of an infectious disease. Because molecular source attribution increasingly requires the specialized and computationally intensive analysis of complex data, the underlying model assumptions and level of uncertainty in these analyses are often not made accessible to principal stakeholders, including the key affected populations and community advocates. Outside of a public health context, the concept of source attribution has significant legal and ethical implications for people living with HIV, who may be prosecuted for transmitting their infection to another person. The transmission of HIV-1 without disclosing one's infection status is a criminally prosecutable offense in many countries, [ 65 ] including the United States . For example, defendants in HIV transmission cases in Canada have been charged with aggravated sexual assault , with a "maximum penalty of life imprisonment and mandatory lifetime registration as a sex offender". [ 66 ] Molecular source attribution methods have been utilized as forensic evidence in such criminal cases. One of the earliest and best-known examples of an HIV-1 transmission case was the investigation of the so-called "Florida dentist" , where an HIV-positive dentist was accused of transmitting his infection to a patient.
Although genetic clustering — specifically, clustering in the context of a phylogeny — was applied to these data to demonstrate that HIV-1 particles sampled from the dentist were genetically similar to those sampled from the patient, [ 67 ] clustering alone is not sufficient for source attribution. Clusters can only provide evidence that infections are unlikely to be epidemiologically linked because they are too dissimilar relative to other infections in the population. [ 68 ] For example, similar phylogenetic methods were used in a subsequent case to demonstrate that the HIV-1 sequence obtained from the patient was far more similar to the sequence from their sexual partner than the sequence from a third party under investigation. [ 69 ] Clustering provides no information on the directionality of transmission ( e.g. , whether the infection was transmitted from individual A to individual B, or from B to A; Figure 3 ), nor can it rule out the possibility that one or more other unknown persons (from whom no virus sequences have been obtained) were involved in the transmission history. Despite these known limitations of clustering, statements on the genetic similarity of infections continue to appear in court cases. [ 70 ] On the other hand, clustering can have population-level benefits by enabling public health agencies to rapidly detect elevated rates of transmission in a population, and thereby optimize the allocation of prevention efforts. [ 71 ] The expansion of public health applications of clustering [ 72 ] has raised concerns among people living with HIV that this use of personal health data might also expose them to a greater risk of criminal prosecution for transmission. [ 73 ] [ 74 ] Source attribution methods based on paraphyly have been used in the prosecution of individuals for HIV-1 transmission. 
One of the earliest examples was published in 2002, where a physician was accused of intentionally injecting blood from one patient ( P ) who was HIV-1 positive into another patient ( V ) who had previously been in a relationship with the physician. [ 75 ] This study used maximum likelihood methods to reconstruct a phylogenetic tree relating HIV-1 sequences from both patients. Paraphyly of sequences from P , implying either direct or indirect transmission to V , was reported for the phylogeny reconstructed from RT sequences ( Figure 5 ). However, a second tree reconstructed from the more diverse HIV-1 envelope ( env ) sequences from the same group was inconclusive on the direction of transmission - only that the env sequences from patients P and V clustered respectively into two monophyletic groups that were jointly distinct from the background. The use of paraphyly for source attribution was stimulated by the advent of next-generation sequencing , which made it more cost-effective to rapidly sequence large numbers of individual viruses from multiple host individuals. More recent work [ 42 ] has also developed a formalized framework for interpreting the distribution of sequences in the phylogeny as being consistent with a direction of transmission. Several studies have since applied this framework to re-analyze or develop forensic evidence for HIV transmission cases in Serbia, [ 76 ] Taiwan, [ 77 ] China [ 78 ] and Portugal. [ 79 ] The growing number of such studies has led to controversy on the ethical and legal implications of this type of phylogenetic analysis for HIV-1. [ 80 ] The accuracy of classifying a group of sequences in a phylogeny into monophyletic or paraphyletic groups is highly contingent on the accuracy of tree reconstruction . 
As described above (see Paraphyly ), our statistical confidence in a specific clade in the tree is quantified by the estimated probability that the same clade would be obtained if the tree reconstruction was repeated on an equivalent data set. This support value is not the probability that the clade appears in the "true" tree because this quantity is conditional on the data at hand - however, it is often misinterpreted this way. [ 81 ] If the branch separating a nested monophyletic clade of sequences from host A from the paraphyletic group of sequences from host B has a low support value, then the conventional procedure would be to remove that branch from the tree. This would have the result of collapsing the monophyletic and paraphyletic clades so that the tree is inconclusive about either direction of transmission. However, this procedure has not been consistently used in source attribution investigations. For example, the trees displayed in the 2020 study in Taiwan [ 77 ] do not support transmission from the defendant to the plaintiff when branches with low support (<80%) are collapsed. Moreover, the result can vary with the region of the virus genome targeted for sequencing. [ 82 ] The use of paraphyly to infer the direction of transmission was recently evaluated on a prospective cohort of HIV serodiscordant couples (where one partner was HIV positive at the start of the study). [ 83 ] Applying the paraphyly method to next-generation sequence data generated from samples obtained from 33 pairs where the HIV negative partner became infected over the course of the study, the authors found that the direction of transmission was incorrectly reconstructed in about 13% to 21% of cases, depending on which sequences were analyzed. However, a follow-up study involving many of the same authors [ 84 ] used a more comprehensive sequencing method to cover the full virus genome in depth from all host individuals, lowering the percentage of misclassified cases to 3.1%. 
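The procedure of collapsing low-support branches described above can be sketched in a few lines. The `Node` class and function below are hypothetical, written for illustration rather than taken from any phylogenetics package; a weakly supported internal branch is removed by promoting its children to the parent node, producing a polytomy that is inconclusive about the direction of transmission:

```python
class Node:
    """Minimal rooted-tree node: internal nodes carry a bootstrap support
    value (0-100); leaves carry a label and support=None."""
    def __init__(self, label=None, support=None, children=None):
        self.label = label
        self.support = support
        self.children = children or []

def collapse_low_support(node, threshold=80):
    """Collapse internal branches whose support is below `threshold` by
    absorbing the weakly supported clade's children into its parent."""
    new_children = []
    for child in node.children:
        collapse_low_support(child, threshold)
        if child.children and child.support is not None and child.support < threshold:
            new_children.extend(child.children)  # absorb the weak clade
        else:
            new_children.append(child)
    node.children = new_children
    return node

# A monophyletic clade of host-A sequences nested inside host-B sequences,
# but with only 55% support on the branch that separates them:
tree = Node(children=[
    Node("B1"), Node("B2"),
    Node(support=55, children=[Node("A1"), Node("A2")]),
])
collapse_low_support(tree)
print([c.label for c in tree.children])  # ['B1', 'B2', 'A1', 'A2'] - a polytomy
```

With the 55%-support branch collapsed, the host-A sequences are no longer nested within the host-B sequences, so the paraphyly argument for a B-to-A direction of transmission disappears.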
A common feature of both clustering and paraphyly methods is that neither approach explicitly tests the hypothesis that an infection was directly transmitted from a specific source population or individual to the recipient. Phylodynamic methods attempt to overcome the discordance between the pathogen phylogeny and the underlying transmission history by modeling the processes that contribute to this discordance, such as the evolution of pathogen populations within each host. The development of phylodynamic methods for source attribution has been a rapidly expanding area, with a large number of published studies and associated software released since 2014 (see Software ). Because these methods have tended to be applied to other infectious diseases including influenza A virus, [ 85 ] foot-and-mouth disease virus [ 86 ] and Mycobacterium tuberculosis , [ 51 ] they have so far avoided the ethical issues of stigma and criminalization associated with HIV-1. However, applications of phylodynamic source attribution to HIV-1 have begun to appear in the literature. For example, in a study based in Alberta, Canada, [ 87 ] the investigators used a phylodynamic method (TransPhylo [ 62 ] ) to reconstruct transmission events among patients receiving treatment at their clinic from HIV-1 sequence data. Although the program TransPhylo attempts, by default, to estimate the proportion of infections that are unsampled, the investigators fixed this proportion to 1%. By so doing, their analysis carried the unrealistic assumption that nearly every person living with HIV-1 in their regional epidemic (comprising at least 1,800 people) was represented in their data set of 139 sequences. In the aftermath of a magnitude 7.0 earthquake that struck Haiti in 2010, there was a large-scale outbreak of cholera , a gastrointestinal infection caused by the bacterium Vibrio cholerae . 
Nearly 800,000 Haitians became infected and nearly 10,000 died in one of the most significant outbreaks of cholera in modern history. Initial microbial subtyping using pulsed-field gel electrophoresis indicated that the outbreak was most genetically similar to cholera strains sampled in South Asia. [ 88 ] In order to more comprehensively map the plausible source of infection, cholera strains from Southern Asia and South America were compared to the strains sampled from the Haitian outbreak. Whole genome sequences taken from cases in Haiti shared more sites in common with the sequences taken from South Asia ( i.e. , Nepal and Bangladesh) than those in geographic areas more immediate to Haiti. [ 89 ] Direct comparisons were also made between the cholera strains taken from three Nepalese soldiers and three Haitian locals, which were nearly identical in genome sequence, forming a phylogenetic cluster. [ 90 ] Based on the evidence gathered by phylogenetic source attribution studies, the role of Nepalese soldiers who were part of the United Nations Stabilization Mission to Haiti (MINUSTAH) in this outbreak was officially recognized by the United Nations in 2016. [ 91 ] In December 2019, an outbreak of 27 cases of viral pneumonia was reported in association with a seafood market in Wuhan, China. Known respiratory viruses including influenza A virus, respiratory syncytial virus and SARS coronavirus were soon ruled out by laboratory testing. On January 10, 2020, the genome sequence of the novel coronavirus, most closely related to bat SARS-coronaviruses, was released into the public domain. Despite unprecedented quarantine measures, the virus (eventually named SARS-CoV-2 ) spread to other countries including the United States, with global prevalence exceeding 556 million confirmed cases as of July 15, 2022. 
[ 92 ] This outbreak spurred an unprecedented level of epidemiological and genomic data sharing and real-time analysis, which was often communicated by social media prior to peer review. Much of this knowledge translation was mediated through the open-source project Nextstrain [ 93 ] that performs phylogenetic analyses on pathogen sequence data as they become available on public and access-restricted databases, and uses the results to update web documents in real time. On March 4, 2020, Nextstrain developers released a phylogeny in which a SARS-CoV-2 genome that was isolated from a German patient occupied an ancestral position relative to a monophyletic clade of sequences sampled from Europe and Mexico. Users of the Twitter social media platform soon commented on the related post from Nextstrain that onward transmission from the German individual seemed to have "led directly to some fraction of the widespread outbreak circulating in Europe today". [ 94 ] These comments were soon followed by criticism from other users that attributing the outbreak in Europe to the German patient as the source individual was drawing conclusions about the directionality of transmission from an incompletely sampled tree. [ 95 ] In other words, the tree was reconstructed from a highly incomplete sample of cases from the ongoing outbreak, and the addition of other sequences had a substantial probability of modifying the inferred relationship between the German sequence and the clade in question. Nevertheless, the interpretation attributing the European outbreak to a German source propagated through social media, causing some users to call on Germany to apologize. [ 96 ] There are numerous computational tools for source attribution that have been published, particularly for phylodynamic methods . Table 1 provides a non-exhaustive listing of some of the software in the public domain. 
Several of these programs are implemented within the Bayesian software package BEAST, [ 25 ] including SCOTTI, BadTrIP, and beastlier. This listing does not include clustering methods , which are not designed for the purpose of source attribution, but may be used to develop microbial subtype definitions — clustering methods have previously been reviewed in the molecular epidemiology literature. [ 97 ] [ 16 ] This article incorporates text from a free content work. Licensed under CC BY 4.0 . Text taken from "Molecular source attribution"​ , Chao E, Chato C, Vender R, Olabode AS, Ferreira RC, Poon AFY (2022), PLOS Computational Biology , doi : 10.1371/journal.pcbi.1010649 .
https://en.wikipedia.org/wiki/Source_attribution
Erlang is an open source programming language . Multiple development environments (including IDEs and source code editors with plug-ins adding IDE features) have support for Erlang. [ 1 ]
https://en.wikipedia.org/wiki/Source_code_editors_for_Erlang
Source code viruses are a subset of computer viruses that make modifications to source code located on an infected machine. A source file can be overwritten such that it includes a call to some malicious code. By targeting a generic programming language, such as C , source code viruses can be very portable . Source code viruses are rare, partly due to the difficulty of parsing source code programmatically, but have been reported to exist. One such virus (W32/Induc-A) was identified by anti-virus specialist Sophos as capable of injecting itself into the source code of any Delphi program it finds on an infected computer, so that it is then compiled into the finished executable. [ 1 ] This malware -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Source_code_virus
The source function is a characteristic of a stellar atmosphere, and in the case of no scattering of photons, describes the ratio of the emission coefficient to the absorption coefficient. It is a measure of how photons in a light beam are removed and replaced by new photons by the material it passes through. Its units in the cgs-system are erg s −1 cm −2 sr −1 Hz −1 and in SI are W m −2 sr −1 Hz −1 . The source function can be written

S_\lambda = \frac{j_\lambda}{\kappa_\lambda}

where j_\lambda is the emission coefficient and \kappa_\lambda is the absorption coefficient (also known as the opacity ). Putting this into the equation for radiative transfer we get

-\frac{1}{\kappa_\lambda} \frac{dI_\lambda}{ds} = I_\lambda - S_\lambda

where s is the distance measured along the path traveled by the beam. The minus sign on the left hand side shows that the intensity decreases as the beam travels, due to the absorption of photons. This article about stellar astronomy is a stub . You can help Wikipedia by expanding it .
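As a worked consequence of the transfer equation (an added illustration, not part of the original article): if S_\lambda and \kappa_\lambda are assumed constant along the path, the equation integrates to

I_\lambda(s) = S_\lambda + \left( I_\lambda(0) - S_\lambda \right) e^{-\kappa_\lambda s}

so the intensity relaxes exponentially toward the source function over a characteristic length 1/\kappa_\lambda: a beam brighter than S_\lambda is attenuated, and a beam fainter than S_\lambda is built up by emission.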
https://en.wikipedia.org/wiki/Source_function
Source reduction refers to activities designed to reduce the volume, mass, or toxicity of products throughout the life cycle. It includes the design and manufacture, use, and disposal of products with minimum toxic content, minimum volume of material, and/or a longer useful life . The term is also used to describe measures to reduce or eliminate breeding places for disease-carrying mosquitoes. [ 1 ] [ 2 ] Pollution prevention and toxics use reduction are also called source reduction because they address the use of hazardous substances at the source. Source reduction is achieved through improvements in design, production, use, reuse, recycling, and through environmentally preferable purchasing (EPP). A life-cycle assessment is useful to help choose among several alternatives and options. [ 3 ] [ 4 ] In the United States , the Federal Trade Commission offers guidance for labelling claims: "Source reduction" refers to reducing or lowering the weight, volume or toxicity of a product or package. To avoid being misleading, source reduction claims must qualify the amount of the source reduction and give the basis for any comparison that is made. These principles apply regardless of whether a term like "source reduced" is used. The Massachusetts Toxics Use Reduction Program (TURA) offers six strategies to achieve source reduction: [ citation needed ]
https://en.wikipedia.org/wiki/Source_reduction
Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively. [ 1 ] Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance , and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit. [ 2 ] Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain . In general, the concept of source transformation is an application of Thévenin's theorem to a current source , or Norton's theorem to a voltage source . However, this means that source transformation is bound by the same conditions as Thévenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources. [ 3 ] Source transformations are used to exploit the equivalence of a real current source and a real voltage source, such as a battery . Application of Thévenin's theorem and Norton's theorem gives the quantities associated with the equivalence. Specifically, given a real current source, which is an ideal current source I in parallel with an impedance Z , applying a source transformation gives an equivalent real voltage source, which is an ideal voltage source in series with the impedance. The impedance Z retains its value and the new voltage source V has value equal to the ideal current source's value times the impedance, according to Ohm's law V = IZ . 
In the same way, an ideal voltage source in series with an impedance can be transformed into an ideal current source in parallel with the same impedance, where the new ideal current source has value I = V/Z . Source transformations are easy to compute using Ohm's law . If there is a voltage source in series with an impedance , it is possible to find the value of the equivalent current source in parallel with the impedance by dividing the value of the voltage source by the value of the impedance. The converse also holds: if a current source in parallel with an impedance is present, multiplying the value of the current source with the value of the impedance provides the equivalent voltage source in series with the impedance. A visual example of a source transformation can be seen in Figure 1. The transformation can be derived from the uniqueness theorem . In the present context, it implies that a black box with two terminals must have a unique well-defined relation between its voltage and current. It is readily verified that the above transformation indeed gives the same V-I curve, and therefore the transformation is valid.
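The two transformations above amount to one multiplication or division. The sketch below is illustrative (the helper names are made up, not from any circuit-analysis library), using Python complex numbers so the same arithmetic covers both resistive and reactive impedances:

```python
def voltage_to_current(v, z):
    """Voltage source V in series with impedance Z  ->  equivalent
    current source I = V/Z in parallel with the same Z."""
    return v / z, z

def current_to_voltage(i, z):
    """Current source I in parallel with impedance Z  ->  equivalent
    voltage source V = I*Z in series with the same Z."""
    return i * z, z

# 10 V source in series with (3 + 4j) ohms:
i, z = voltage_to_current(10, 3 + 4j)
v, _ = current_to_voltage(i, z)  # transforming back recovers 10 V
print(i, v)
```

Applying the two functions in sequence is the identity, reflecting the fact that the two source representations are interchangeable as seen from the external terminals.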
https://en.wikipedia.org/wiki/Source_transformation
Source water protection is a planning process conducted by local water utilities, as well as regional or national government agencies, to protect drinking water sources from overuse and contamination. The process includes identification of water sources, assessment of known and potential threats of contamination, notification of the public, and steps to eliminate the contamination. The process is applicable to lakes, rivers and groundwater (aquifers). [ 1 ] Source water protection is part of a multi-barrier approach to protecting municipal sources of drinking water that was recommended by the Canadian Justice Dennis O'Connor in his Walkerton reports. [ 2 ] This study was released in 2002 as a response to the Walkerton Tragedy , in which the town of Walkerton, Ontario's drinking water became contaminated with E. coli bacteria. The Safe Drinking Water Act requires each state to delineate the boundaries of areas that public water systems use for their sources of drinking water—both surface and underground sources. [ 3 ] The U.S. Environmental Protection Agency (EPA) encourages states and local water utilities to conduct source water assessments and take steps to protect the sources. [ 4 ] EPA provides some financial assistance to states and utilities to conduct source water planning, through the Drinking Water State Revolving Fund . Technical and financial assistance is also available through the agency's Water Infrastructure and Resiliency Finance Center. [ 5 ]
https://en.wikipedia.org/wiki/Source_water_protection
Source–sink dynamics is a theoretical model used by ecologists to describe how variation in habitat quality may affect the population growth or decline of organisms . Since quality is likely to vary among patches of habitat, it is important to consider how a low quality patch might affect a population. In this model, organisms occupy two patches of habitat. One patch, the source, is a high quality habitat that on average allows the population to increase. The second patch, the sink, is a very low quality habitat that, on its own, would not be able to support a population. However, if the excess of individuals produced in the source frequently moves to the sink, the sink population can persist indefinitely. Organisms are generally assumed to be able to distinguish between high and low quality habitat, and to prefer high quality habitat. However, ecological trap theory describes the reasons why organisms may actually prefer sink patches over source patches . Finally, the source–sink model implies that some habitat patches may be more important to the long-term survival of the population, and considering the presence of source–sink dynamics will help inform conservation decisions. Although the seeds of a source–sink model had been planted earlier, [ 1 ] Pulliam [ 2 ] is often recognized as the first to present a fully developed source–sink model. He defined source and sink patches in terms of their demographic parameters, or BIDE rates ( birth , immigration , death , and emigration rates). In the source patch, birth rates were greater than death rates, causing the population to grow. The excess individuals were expected to leave the patch, so that emigration rates were greater than immigration rates. In other words, sources were a net exporter of individuals. In contrast, in a sink patch, death rates were greater than birth rates, resulting in a population decline toward extinction unless enough individuals emigrated from the source patch. 
Immigration rates were expected to be greater than emigration rates, so that sinks were a net importer of individuals. As a result, there would be a net flow of individuals from the source to the sink (see Table 1) . Pulliam's work was followed by many others who developed and tested the source–sink model. Watkinson and Sutherland [ 3 ] presented a phenomenon in which high immigration rates could cause a patch to appear to be a sink by raising the patch's population above its carrying capacity (the number of individuals it can support). However, in the absence of immigration, the patches are able to support a smaller population. Since true sinks cannot support any population, the authors called these patches "pseudo-sinks". Definitively distinguishing between true sinks and pseudo-sinks requires cutting off immigration to the patch in question and determining whether the patch is still able to maintain a population. Thomas et al. [ 4 ] were able to do just that, taking advantage of an unseasonable frost that killed off the host plants for a source population of Edith's checkerspot butterfly ( Euphydryas editha ). Without the host plants, the supply of immigrants to other nearby patches was cut off. Although these patches had appeared to be sinks, they did not become extinct without the constant supply of immigrants. They were capable of sustaining a smaller population, suggesting that they were in fact pseudo-sinks. Watkinson and Sutherland's [ 3 ] caution about identifying pseudo-sinks was followed by Dias, [ 5 ] who argued that differentiating between sources and sinks themselves may be difficult. She asserted that a long-term study of the demographic parameters of the populations in each patch is necessary. Otherwise, temporary variations in those parameters, perhaps due to climate fluctuations or natural disasters, may result in a misclassification of the patches. 
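Pulliam's source and sink definitions can be made concrete with a minimal two-patch simulation. The growth rates and carrying capacity below are illustrative assumptions, not values from the literature; the point is only that the sink, which declines on its own, is sustained indefinitely by the source's surplus:

```python
def simulate_source_sink(years=50):
    """Two-patch source-sink sketch with illustrative demographic rates.

    Source: births exceed deaths; surplus above carrying capacity emigrates.
    Sink: deaths exceed births, so it persists only via immigration."""
    K_source = 100          # carrying capacity of the source patch
    source, sink = 10.0, 0.0
    for _ in range(years):
        source *= 1.3       # net growth in the source (b > d)
        sink *= 0.8         # net decline in the sink (d > b)
        if source > K_source:
            sink += source - K_source   # surplus emigrates to the sink
            source = K_source
    return source, sink

src, snk = simulate_source_sink()
print(round(src), round(snk))
```

With these rates the source settles at its carrying capacity while the sink equilibrates at a larger standing population than the source itself, even though the sink population would collapse without the yearly stream of immigrants.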
For example, Johnson [ 6 ] described periodic flooding of a river in Costa Rica which completely inundated patches of the host plant for a rolled-leaf beetle ( Cephaloleia fenestrata ). During the floods, these patches became sinks, but at other times they were no different from other patches. If researchers had not considered what happened during the floods, they would not have understood the full complexity of the system. Dias [ 5 ] also argued that an inversion between source and sink habitat is possible so that the sinks may actually become the sources. Because reproduction in source patches is much higher than in sink patches, natural selection is generally expected to favor adaptations to the source habitat. However, if the proportion of source to sink habitat changes so that sink habitat becomes much more available, organisms may begin to adapt to it instead. Once adapted, the sink may become a source habitat. This is believed to have occurred for the blue tit ( Parus caeruleus ) 7500 years ago as forest composition on Corsica changed, but few modern examples are known. Boughton [ 7 ] described a source—pseudo-sink inversion in butterfly populations of E. editha . [ 4 ] Following the frost, the butterflies had difficulty recolonizing the former source patches. Boughton found that the host plants in the former sources senesced much earlier than in the former pseudo-sink patches. As a result, immigrants regularly arrived too late to successfully reproduce. He found that the former pseudo-sinks had become sources, and the former sources had become true sinks. One of the most recent additions to the source–sink literature is by Tittler et al., [ 8 ] who examined wood thrush ( Hylocichla mustelina ) survey data for evidence of source and sink populations on a large scale. 
The authors reasoned that emigrants from sources would likely be the juveniles produced in one year dispersing to reproduce in sinks in the next year, producing a one-year time lag between population changes in the source and in the sink. Using data from the Breeding Bird Survey , an annual survey of North American birds, they looked for relationships between survey sites showing such a one-year time lag. They found several pairs of sites showing significant relationships 60–80 km apart. Several appeared to be sources to more than one sink, and several sinks appeared to receive individuals from more than one source. In addition, some sites appeared to be a sink to one site and a source to another (see Figure 1). The authors concluded that source–sink dynamics may occur on continental scales. One of the more confusing issues involves identifying sources and sinks in the field. [ 9 ] Runge et al. [ 9 ] point out that in general researchers need to estimate per capita reproduction, probability of survival, and probability of emigration to differentiate source and sink habitats. If emigration is ignored, then individuals that emigrate may be treated as mortalities, thus causing sources to be classified as sinks. This issue is important if the source–sink concept is viewed in terms of habitat quality (as it is in Table 1) because classifying high-quality habitat as low-quality may lead to mistakes in ecological management. Runge et al. [ 9 ] showed how to integrate the theory of source–sink dynamics with population projection matrices [ 10 ] and ecological statistics [ 11 ] in order to differentiate sources and sinks. Why would individuals ever leave high quality source habitat for a low quality sink habitat? This question is central to source–sink theory. Ultimately, it depends on the organisms and the way they move and distribute themselves between habitat patches. 
For example, plants disperse passively, relying on other agents such as wind or water currents to move seeds to another patch. Passive dispersal can result in source–sink dynamics whenever the seeds land in a patch that cannot support the plant's growth or reproduction. Winds may continually deposit seeds there, maintaining a population even though the plants themselves do not successfully reproduce. [ 12 ] Soil protists are another good example: they also disperse passively, relying mainly on wind to colonize other sites. [ 13 ] As a result, source–sink dynamics can arise simply because external agents disperse protist propagules (e.g., cysts, spores), forcing individuals to grow in a poor habitat. [ 14 ] In contrast, many organisms that disperse actively should have no reason to remain in a sink patch, [ 15 ] provided the organisms are able to recognize it as a poor quality patch (see discussion of ecological traps ). The reasoning behind this argument is that organisms are often expected to behave according to the " ideal free distribution ", which describes a population in which individuals distribute themselves evenly among habitat patches according to how many individuals the patch can support. [ 16 ] When there are patches of varying quality available, the ideal free distribution predicts a pattern of "balanced dispersal". [ 15 ] In this model, when the preferred habitat patch becomes crowded enough that the average fitness (survival rate or reproductive success) of the individuals in the patch drops below the average fitness in a second, lower quality patch, individuals are expected to move to the second patch. However, as soon as the second patch becomes sufficiently crowded, individuals are expected to move back to the first patch. Eventually, the patches should become balanced so that the average fitness of the individuals in each patch and the rates of dispersal between the two patches are even. 
In this balanced dispersal model, the probability of leaving a patch is inversely proportional to the carrying capacity of the patch. [ 15 ] In this case, individuals should not remain in sink habitat for very long, where the carrying capacity is zero and the probability of leaving is therefore very high. An alternative to the ideal free distribution and balanced dispersal models is when fitness can vary among potential breeding sites within habitat patches and individuals must select the best available site. This alternative has been called the "ideal preemptive distribution", because a breeding site can be preempted if it has already been occupied. [ 17 ] For example, the dominant, older individuals in a population may occupy all of the best territories in the source so that the next best territory available may be in the sink. As the subordinate, younger individuals age, they may be able to take over territories in the source, but new subordinate juveniles from the source will have to move to the sink. Pulliam [ 2 ] argued that such a pattern of dispersal can maintain a large sink population indefinitely. Furthermore, if good breeding sites in the source are rare and poor breeding sites in the sink are common, it is even possible that the majority of the population resides in the sink. The source–sink model of population dynamics has made contributions to many areas in ecology. For example, a species' niche was originally described as the environmental factors required by a species to carry out its life history, and a species was expected to be found only in areas that met these niche requirements. [ 18 ] This concept of a niche was later termed the "fundamental niche", and described as all of the places a species could successfully occupy. In contrast, the "realized niche", was described as all of the places a species actually did occupy, and was expected to be less than the extent of the fundamental niche as a result of competition with other species. 
[ 19 ] However, the source–sink model demonstrated that the majority of a population could occupy a sink which, by definition, did not meet the niche requirements of the species, [ 2 ] and was therefore outside the fundamental niche (see Figure 2). In this case, the realized niche was actually larger than the fundamental niche, and ideas about how to define a species' niche had to change. Source–sink dynamics has also been incorporated into studies of metapopulations , a group of populations residing in patches of habitat. [ 20 ] Though some patches may go extinct, the regional persistence of the metapopulation depends on the ability of patches to be re-colonized. As long as there are source patches present for successful reproduction, sink patches may allow the total number of individuals in the metapopulation to grow beyond what the source could support, providing a reserve of individuals available for re-colonization. [ 21 ] Source–sink dynamics also has implications for studies of the coexistence of species within habitat patches. Because a patch that is a source for one species may be a sink for another, coexistence may actually depend on immigration from a second patch rather than the interactions between the two species. [ 2 ] Similarly, source–sink dynamics may influence the regional coexistence and demographics of species within a metacommunity , a group of communities connected by the dispersal of potentially interacting species. [ 22 ] Finally, the source–sink model has greatly influenced ecological trap theory , a model in which organisms prefer sink habitat over source habitat. [ 23 ] Beyond acting as ecological traps, sink habitats may also vary in their response to major disturbances, and the colonization of sink habitat may allow a species to survive even if the population in the source habitat goes extinct due to a catastrophic event, [ 24 ] which may substantially increase metapopulation stability. 
[ 25 ] Land managers and conservationists have become increasingly interested in preserving and restoring high quality habitat, particularly where rare, threatened, or endangered species are concerned. As a result, it is important to understand how to identify or create high quality habitat, and how populations respond to habitat loss or change. Because a large proportion of a species' population could exist in sink habitat, [ 26 ] conservation efforts may misinterpret the species' habitat requirements. Similarly, without considering the presence of a trap, conservationists might mistakenly preserve trap habitat under the assumption that an organism's preferred habitat was also good quality habitat. Simultaneously, source habitat may be ignored or even destroyed if only a small proportion of the population resides there. Degradation or destruction of the source habitat will, in turn, impact the sink or trap populations, potentially over large distances. [ 8 ] Finally, efforts to restore degraded habitat may unintentionally create an ecological trap by giving a site the appearance of quality habitat, but which has not yet developed all of the functional elements necessary for an organism's survival and reproduction. For an already threatened species, such mistakes might result in a rapid population decline toward extinction. In considering where to place reserves , protecting source habitat is often assumed to be the goal, although if the cause of a sink is human activity, simply designating an area as a reserve has the potential to convert current sink patches to source patches (e.g. no-take zones ). [ 27 ] Either way, determining which areas are sources or sinks for any one species may be very difficult, [ 28 ] and an area that is a source for one species may be unimportant to others. Finally, areas that are sources or sinks currently may not be in the future as habitats are continually altered by human activity or climate change . 
Few areas can be expected to be universal sources, or universal sinks. [ 27 ] While the presence of source, sink, or trap patches must be considered for short-term population survival, especially for very small populations, long-term survival may depend on the creation of networks of reserves that incorporate a variety of habitats and allow populations to interact. [ 27 ]
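Pulliam's claim that surplus reproduction in a source can maintain a sink population indefinitely can be illustrated with a minimal two-patch sketch. The growth rates, carrying capacity, and dispersal rule below are hypothetical values chosen for illustration, not parameters from the cited studies:

```python
# Minimal two-patch source-sink sketch (hypothetical parameters).
# The source grows (lambda > 1) up to its carrying capacity and exports
# its surplus; the sink declines (lambda < 1) and persists only through
# that immigration, as in Pulliam's model.

def simulate(years=50, k_source=100, lam_source=1.2, lam_sink=0.8):
    source, sink = 10.0, 0.0
    for _ in range(years):
        source *= lam_source
        sink *= lam_sink
        if source > k_source:            # surplus individuals disperse
            sink += source - k_source
            source = k_source
    return source, sink

source, sink = simulate()
print(round(source), round(sink))  # -> 100 100
```

Even though the sink population cannot sustain itself (its growth rate is below replacement), immigration of the source's surplus holds it near the size of the source itself, consistent with the observation that a majority of a population can reside in sink habitat.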
https://en.wikipedia.org/wiki/Source–sink_dynamics
South is one of the cardinal directions or compass points . The direction is the opposite of north and is perpendicular to both west and east . The word south comes from Old English sūþ , from earlier Proto-Germanic *sunþaz ("south"), possibly related to the same Proto-Indo-European root that the word sun derived from. Some languages describe south in the same way, from the fact that it is the direction of the sun at noon (in the Northern Hemisphere), [ 1 ] like Latin meridies 'noon, south' (from medius 'middle' + dies 'day', cf English meridional), while others describe south as the right-hand side of the rising sun, like Biblical Hebrew תֵּימָן teiman 'south' from יָמִין yamin 'right', Aramaic תַּימנַא taymna from יָמִין yamin 'right' and Syriac ܬܰܝܡܢܳܐ taymna from ܝܰܡܝܺܢܳܐ yamina (hence the name of Yemen , the land to the south/right of the Levant [ 2 ] ). South is sometimes abbreviated as S . By convention , the bottom or down-facing side of a map is south, although reversed maps exist that defy this convention. [ 3 ] To go south using a compass for navigation , set a bearing or azimuth of 180°. Alternatively, in the Northern Hemisphere outside the tropics , the Sun will be roughly in the south at midday . [ 4 ] True south is one end of the axis about which the Earth rotates, called the South Pole . The South Pole is located in Antarctica . Magnetic south is the direction towards the south magnetic pole , some distance away from the south geographic pole. [ 5 ] Roald Amundsen , from Norway , was the first person to reach the South Pole , on 14 December 1911, after Ernest Shackleton from the UK was forced to turn back some distance short. [ 6 ] The Global South refers to the socially and economically less-developed southern half of the globe. 95% of the Global North has enough food and shelter, and a functioning education system. [ 7 ] In the South, on the other hand, only 5% of the population has enough food and shelter. 
It "lacks appropriate technology, it has no political stability, the economies are disarticulated, and their foreign exchange earnings depend on primary product exports". [ 7 ] Use of the term "South" may also be country-relative, particularly in cases of noticeable economic or cultural divide. For example, the Southern United States , separated from the Northeastern United States by the Mason–Dixon line , or the South of England , which is politically and economically distinct from the North of England . Southern Cone is the name often given to the southernmost area of South America, which, in the form of an inverted "cone" almost like a large peninsula, encompasses Argentina , Chile , Paraguay , Uruguay and the entire South of Brazil (Brazilian states of Rio Grande do Sul , Santa Catarina , Paraná and São Paulo ). Rarely does the meaning broaden to Bolivia , and in the most restricted sense it only covers Chile , Argentina and Uruguay . The country of South Africa is so named because of its location at the southern tip of Africa. Upon its formation the country was named the Union of South Africa in English, reflecting its origin from the unification of four formerly separate British colonies. Australia derives its name from the Latin Terra Australis ("Southern Land"), a name used for a hypothetical continent in the Southern Hemisphere since ancient times. In the card game bridge , one of the players is known for scoring purposes as South. South partners with North and plays against East and West. [ 8 ] In Greek religion, Notos was the south wind and bringer of the storms of late summer and autumn.
https://en.wikipedia.org/wiki/South
The south-pointing chariot (or carriage ) was an ancient Chinese two-wheeled vehicle that carried a movable pointer to indicate the south , no matter how the chariot turned. Usually, the pointer took the form of a doll or figure with an outstretched arm. The chariot was supposedly used as a non-magnetic compass for navigation and may also have had other purposes. The ancient Chinese invented a mobile armored cart in the 5th century BC called the Dongwu Che ( Chinese : 洞屋车 ). It was used to protect warriors on the battlefield. The Chinese war wagon was designed as a kind of mobile protective cart with a shed-like roof. It could be rolled up to city fortifications to protect sappers digging underneath to weaken a wall's foundation. The early Chinese war wagon became the technological basis for ancient Chinese south-pointing chariots. [ 1 ] [ 2 ] There are legends of earlier south-pointing chariots, but the first reliably documented one was created by the Chinese mechanical engineer Ma Jun ( c. 200 – 265) of Cao Wei during the Three Kingdoms . No ancient chariots still exist, but many extant ancient Chinese texts mention them, saying they were used intermittently until about 1300. Some include information about their inner components and workings. There were probably several types of south-pointing chariot which worked differently. In most or all of them, the rotating road wheels mechanically operated a geared mechanism to keep the pointer aimed correctly. The pointer was aimed southward by hand at the start of a journey. Subsequently, whenever the chariot turned, the mechanism rotated the pointer relative to the body of the chariot to counteract the turn. This kept the pointer aiming in a constant direction, equal to the starting position. Thus the mechanism did a kind of directional dead reckoning , which is inherently prone to cumulative errors and uncertainties.
Some chariots' mechanisms may have had differential gears. The south-pointing chariot, a mechanically geared, wheeled vehicle used to discern the southern cardinal direction (without magnetics), was given a brief description by Ma's contemporary Fu Xuan . [ 3 ] The contemporary 3rd-century Weilüe , written by the Cao Wei historian Yu Huan , also described the south-pointing chariot as belonging to the Chinese mechanical engineer and politician Ma Jun . [ 4 ] The Jin dynasty (266–420) era text of the Shu Zheng Ji (Records of Military Expeditions), written by Guo Yuansheng, recorded that south-pointing chariots were often stored in the northern gatehouse of the Government Workshops (Shang Fang) of the capital city. [ 4 ] However, the later Song Shu ( Book of Song ) (6th century) recorded the south-pointing chariot's design and use in further detail, as well as creating the background legend of the device's (supposed) use long before Ma's time, in the Western Zhou dynasty (1050–771 BC). The book also provided a description of the south-pointing chariot's re-invention and use in times after Ma Jun and the Three Kingdoms. The 6th century text, translated by the British scientist and historian Joseph Needham , reads as follows (the south-pointing chariot is referred to as the south-pointing carriage): The south-pointing carriage was first constructed by the Duke of Zhou (beginning of the 1st millennium BC ) as a means of conducting homewards certain envoys who had arrived from a great distance beyond the frontiers. The country to be traversed was a boundless plain, in which people lost their bearings as to east and west, so (the Duke) caused this vehicle to be made in order that the ambassadors should be able to distinguish north and south. The Gui Gu Zi book says that the people of the State of Zheng , when collecting jade, always carried with them a 'south-pointer', and by means of this were never in doubt (as to their position).
During the Qin and Former Han dynasties, however, nothing more was heard of the vehicle. In the Later Han period, Zhang Heng re-invented it, but owing to the confusion and turmoil at the close of the dynasty it was not preserved. [ 5 ] In the State of Wei, (in the San Guo period) Gaotong Long and Qin Lang were both famous scholars; they disputed about the south-pointing carriage before the court, saying that there was no such thing, and that the story was nonsense. But during the Qing-long reign period (233–237) the emperor Ming Di commissioned the scholar Ma Jun to construct one, and he duly succeeded. This again was lost during the troubles attending the establishment of the Jin dynasty . [ 6 ] Later on, Shi Hu (emperor of the Jie Later Zhao dynasty) had one made by Xie Fei, and again Linghu Sheng made one for Yao Xing (emperor of the Later Qin dynasty). The latter was obtained by emperor An Di of the Jin in the 13th year of the Yi-xi reign-period (417), and it finally came into the hands of emperor Wu Di of the Liu Song dynasty when he took over the administration of Chang'an . Its appearance and construction was like that of a drum-carriage ( odometer ). A wooden figure of a man was placed at the top, with its arm raised and pointing to the south, (and the mechanism was arranged in such a way that) although the carriage turned round and round, the pointer-arm still indicated the south. In State processions, the south-pointing carriage led the way, accompanied by the imperial guard. [ 7 ] These vehicles, constructed as they had been by barbarian ( Qiang ) workmen, did not function particularly well. Though called south-pointing carriages, they very often did not point true, and had to negotiate curves step by step, with the help of someone inside to adjust the machinery. The ingenious man from Fanyang , Zi Zu Chongzhi frequently said, therefore, that a new (and properly automatic) south-pointing carriage ought to be constructed. 
So towards the close of the Sheng-Ming reign period (477–479) the emperor Shun Di , during the premiership of the Prince of Qi, commissioned (Zi Zu Chongzhi) to make one, and when it was completed it was tested by Wang Seng-qian, military governor of Tanyang, and Liu Hsiu, president of the Board of Censors. The workmanship was excellent, and although the carriage was twisted and turned in a hundred directions, the hand never failed to point to the south. Under the Jin, moreover, there had also been a south-pointing ship. [ 7 ] The last sentence of the passage is of great interest for navigation at sea, since the magnetic compass used for seafaring navigation was not used until the time of Shen Kuo (1031–1095). Although the Song Shu text describes earlier precedents of the south-pointing chariot before the time of Ma Jun, this is not entirely credible, as there are no pre-Han or Han dynasty era texts that describe the device. [ 8 ] In fact, the first known source to describe stories of its legendary use during the Zhou period was the Gu Jin Zhu book of Cui Bao (c. 300), written soon after the Three Kingdoms era. [ 4 ] Cui Bao also wrote that the intricate details of construction for the device were once written in the Shang Fang Gu Shi ( Traditions of the Imperial Workshops ), but the book was lost by his time. [ 4 ] The invention of the south-pointing chariot also made its way to Japan by the 7th century. The Nihon Shoki (The Chronicles of Japan) of 720 described the earlier Chinese Buddhist monks Zhi Yu and Zhi You constructing several south-pointing Chariots for Emperor Tenji of Japan in 658. [ 9 ] This was followed up by several more chariot devices built in 666 as well. [ 9 ] The south-pointing chariot was also combined with the earlier Han dynasty era invention of the odometer , a mechanical device used to measure distance traveled, and found in all modern automobiles . 
It was mentioned in the Song dynasty (960–1279) historical text of the Song Shi (compiled in 1345) that the engineers Yan Su (in 1027) and Wu Deren (in 1107) both created south-pointing chariots, which it details as follows. [ 10 ] (In Needham's translation, inches and feet (ft) are used as units of distance. 1 inch is 25.4 millimetres. 1 ft is 12 inches or 304.8 mm.) In the 5th year of the Tian-Sheng reign period of the emperor Renzong (1027), Yan Su, a Divisional Director in the Ministry of Works, made a south-pointing carriage. He memorialised the throne, saying, [after the usual historical introduction]: "Throughout the Five Dynasties and until the reigning dynasty there has been, so far as I know, no one who has been able to construct such a vehicle. But now I have invented a design myself and have succeeded in completing it". [ 10 ] "The method involves using a carriage with a single pole (for two horses). Above the outside framework of the body of the carriage let there be a cover in two stories. Set a wooden image of a xian (immortal) at the top, stretching out its arm to indicate the south. Use 9 wheels, great and small, with a total of 120 teeth, i.e. 2 foot-wheels (i.e. road-wheels, on which the carriage runs) 6 ft. high and 18 ft. in circumference, attached to the foot wheels, 2 vertical subordinate wheels, 2.4 ft. in diameter and 7.2 ft. in circumference, each with 24 teeth, the teeth being at intervals of 3 inches apart. [ 10 ] "... Then below the crossbar at the end of the pole, two small vertical wheels 3 inches in diameter and pierced by an iron axle, to the left 1 small horizontal wheel, 1.2 feet in diameter, with 12 teeth, to the right 1 small horizontal wheel, 1.2 ft. in diameter, with 12 teeth, in the middle 1 large horizontal wheel, of diameter 4.8 ft. and circumference 14.4 ft., with 48 teeth, the teeth at intervals of 3 inches apart; in the middle a vertical shaft piercing the center (of the large horizontal wheel) 8 ft. 
high and 3 inches in diameter; at the top carrying the wooden figure of the xian . [ 10 ] "When the carriage moves (southward) let the wooden figure point south. When it runs (and goes) eastwards, the (back end of the) pole is pushed to the right; the subordinate wheel attached to the right road-wheel will turn forward 12 teeth, drawing with it the right small horizontal wheel one revolution (and so) pushing the central large horizontal wheel to revolve a quarter turn to the left. When it has turned around 12 teeth, the carriage moves eastwards, and the wooden figure stands crosswise and points south. If (instead) it turns (and goes) westwards, the (back end of the) pole is pushed to the left; the subordinate wheel attached to the left road-wheel will turn forward with the road-wheel 12 teeth, drawing with it the left small horizontal wheel one revolution, and pushing the central large horizontal wheel to revolve a quarter turn to the right. When it has turned round 12 teeth, the carriage moves due west, but still the wooden figure stands crosswise and points south. If one wishes to travel northwards, the turning round, whether by east or west, is done in the same way." [ 11 ] After this initial description of Yan Su's device, the text continues to describe the work of Wu Deren, who crafted a wheeled device that would combine the odometer and south-pointing chariot: It was ordered that the method should be handed down to the (appropriate) officials so that the machine might be made. In the first year of the Da-Guan reign period (1107), the Chamberlain Wu Deren presented specifications of the south-pointing carriage and the carriage with the li-recording drum (odometer). The two vehicles were made, and were first used that year at the great ceremony of the ancestral sacrifice. [ 12 ] The body of the south-pointing carriage was 11.15 ft. (long), 9.5 ft. wide, and 10.9 ft. deep. The carriage wheels were 5.7 ft. in diameter, the carriage pole 10.5 ft. 
long, and the carriage body in two stories, upper and lower. In the middle was placed a partition. Above there stood a figure of a xian holding a rod, on the left and right were tortoises and cranes, one each on either side, and four figures of boys each holding a tassel. In the upper story there were at the four corners trip-mechanisms , and also 13 horizontal wheels, each 1.85 ft. in diameter, 5.55 ft. in circumference, with 32 teeth at intervals of 1.8 inches apart. A central shaft, mounted on the partition, pierced downwards. [ 12 ] In the lower story were 13 wheels. In the middle was the largest horizontal wheel, 3.8 ft. in diameter, 11.4 ft. in circumference, and having 100 teeth at intervals of 2.1 inches apart. (On vertical axles) reaching to the top (of the compartment) left and right, were two small horizontal wheels which could rise and fall, having an iron weight (attached to) each. Each of these was 1.1 ft. in diameter and 3.3 ft. in circumference, with 17 teeth, at intervals of 1.9 inches apart. Again, to left and right, were attached wheels, one on each side, in diameter 1.55 ft., in circumference 4.65 ft., and having 24 teeth, at intervals of 2.1 inches. [ 12 ] Left and right, too, were double gear-wheels (lit. tier-wheels), a pair on either side. Each of the lower component gears was 2.1 ft. in diameter and 6.3 ft. in circumference, with 32 teeth, at intervals of 2.1 inches apart. Each of the upper component gears was 1.2 ft. in diameter and 3.6 ft. in circumference, with 32 teeth, at intervals of 1.1 inches apart. On each of the road-wheels of the carriage, left and right, was a vertical wheel 2.2 ft. in diameter, 6.6 ft. in circumference, with 32 teeth at intervals of 2.25 inches apart. Both to left and right at the back end of the pole there were small wheels without teeth ( pulleys ), from which hung bamboo cords, and both were tied above the left and right (ends of the) axle (of the carriage) respectively. 
[ 12 ] If the carriage turns to the right, it causes the small pulley to the left of the back end of the pole to let down the left-hand (small horizontal) wheel. If it turns to the left, it causes the small pulley to the right of the back end of the pole to let down the right (small horizontal) wheel. However, the carriage moves the xian and the boys stand crosswise and point south. The carriage is harnessed with two red horses, bearing frontlets of bronze. [ 12 ] The English engineer George Lanchester proposed that some south-pointing chariots employed differential gears. [ 13 ] A differential is an assembly of gears, nowadays used in almost all automobiles except some electric and hybrid-electric versions, which has three shafts linking it to the external world. They are conveniently labelled A, B, and C. The gears cause the rotation speed of Shaft A to be proportional to the sum of the rotation speeds of Shafts B and C. There are no other limitations on the rotation speeds of the shafts. In an automobile, Shaft A is connected to the engine (through the transmission), and Shafts B and C are connected to two road wheels, one on each side of the vehicle. When the vehicle turns, the wheel going around the outside of the turning curve has to roll further and rotate faster than the wheel on the inside. The differential permits this to happen while both wheels are being driven by the engine. If the sum of the speeds of the wheels is constant, the speed of the engine does not change. In a south-pointing chariot, according to the hypothesis, Shaft B was connected to one road wheel and Shaft C was connected through a direction-reversing gear to the other road wheel. This made Shaft A rotate at a speed that was proportional to the difference between the rotation speeds of the two wheels. The pointing doll was connected (possibly through intermediate gears) to Shaft A. 
When the chariot moved in a straight line, the two wheels turned at equal speeds, and the doll did not rotate. When the chariot turned, the wheels rotated at different speeds (for the same reason as in an automobile), so the differential caused the doll to rotate, compensating for the turning of the chariot. The hypothesis that there were south-pointing chariots with differential gears originated in the 20th century. People who were familiar with modern (e.g. automotive) uses of differentials interpreted some of the ancient Chinese descriptions in ways that agreed with their own ideas. Essentially, they re-invented the south-pointing chariot, as it had previously been re-invented several times in antiquity. Working chariots that use differentials have been constructed in recent decades. Whether any such chariots existed previously is not known with certainty. Although the Antikythera mechanism is believed to have used differential gears, the first true differential gear known with certainty to have been used was built by Joseph Williamson in 1720. [ 14 ] He used a differential to correct for the equation of time in a clock that displayed both mean and solar time . [ 14 ] If the south-pointing chariot were built perfectly accurately, using a differential gear, and if it travelled on an Earth that was perfectly smooth, it would have interesting properties. It would be a mechanical compass that transports a direction, given by the pointer, along the path it travels. Mathematically, the device performs parallel transport of the pointer's direction along its path. The chariot can be used to detect straight lines or geodesics . A path on a surface the chariot travels along is a geodesic if and only if the pointer does not rotate with respect to the base of the chariot. Because of the curvature of the Earth's surface, the chariot would generally not continue to point due south as it moves.
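A numerical sketch of the differential hypothesis follows. The track width and test path are illustrative values, not dimensions from any historical source; the point is only that gearing the doll to the difference of the wheel rotations exactly cancels the chariot's heading changes on flat ground:

```python
import math

# Toy model of the differential-gear hypothesis. The doll turns, relative
# to the chariot body, by -(s_right - s_left) / track, where s_left and
# s_right are the distances rolled by the two road wheels. On flat ground
# that difference equals track * (heading change), so the doll's bearing
# in the ground frame never changes. Track width and path are illustrative.

TRACK = 2.0  # metres between the road wheels (assumed value)

def drive(segments):
    """segments: (distance, turn_radius) pairs; radius=None means straight.
    Returns (heading, doll) in radians, doll measured in the ground frame."""
    heading = 0.0
    doll_body = 0.0  # doll angle relative to the chariot body
    for dist, radius in segments:
        if radius is None:
            s_left = s_right = dist          # straight: wheels roll equally
        else:
            dtheta = dist / radius           # heading change along this arc
            s_left = (radius - TRACK / 2) * dtheta
            s_right = (radius + TRACK / 2) * dtheta
            heading += dtheta
        doll_body -= (s_right - s_left) / TRACK   # what the gearing does
    return heading, heading + doll_body

# straight, a left arc, straight, a right arc: the doll's ground bearing
# stays where it started, whatever the chariot's final heading
heading, doll = drive([(100, None), (7.85, 5.0), (50, None), (15.7, -10.0)])
print(abs(doll) < 1e-9)  # True
```

Each turning arc changes the heading by dtheta while the wheels' rolled distances differ by exactly track × dtheta, so the two contributions cancel segment by segment.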
For example, if the chariot moves along a geodesic (as approximated by any great circle ) the pointer should instead stay at a fixed angle to the path. Also, if two chariots travel by different routes between the same starting and finishing points, their pointers, which were aimed in the same direction at the start, usually do not point in the same direction at the finish. Likewise, if a chariot goes around a closed loop, starting and finishing at the same point on the Earth's surface, its pointer generally does not aim in the same direction at the finish as it did at the start. The difference is the holonomy of the path, and is proportional to the enclosed area. If the journeys are short compared with the radius of the Earth, these discrepancies are small and may have no practical importance. Nevertheless, they show that this type of chariot, based on differential gears, would be an imperfect compass even if constructed exactly and used in ideal conditions. Real machines are never built perfectly accurately. Simple geometry shows that if the chariot's mechanism is based on a differential gear and if, for example, the width of the track of the chariot (the separation between its wheels) is three metres, and if the wheels are intended to be identical but actually differ in diameter by one part in a thousand, then if the chariot travels one kilometre in a straight line, the "south-pointing" figure will rotate nearly twenty degrees. If it initially points exactly to the south, at the end of the one-kilometre trip it will point almost to the south-southeast or south-southwest , depending on which wheel is the larger. If the chariot travels nine kilometres, the figure will end up pointing almost due north. Obviously, this would make it useless as a south-pointing compass. 
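The arithmetic behind these figures can be checked directly, using the values assumed in the text (a three-metre track and wheels differing in diameter by one part in a thousand):

```python
import math

# The mechanism reads the mismatch in rolled distance between the two
# wheels as a turn of (distance difference) / track radians. A relative
# diameter error of 0.001 therefore produces a spurious rotation of
# 0.001 radian per metre travelled, divided by the track width.

track = 3.0       # metres, as assumed in the text
mismatch = 1e-3   # relative difference in wheel diameter

def spurious_rotation_deg(distance_m):
    return math.degrees(mismatch * distance_m / track)

print(round(spurious_rotation_deg(1000)))  # 19 -- "nearly twenty degrees"
print(round(spurious_rotation_deg(9000)))  # 172 -- nearly a half turn
```

This reproduces the text's estimates: after one kilometre the pointer has drifted to roughly south-southeast or south-southwest, and after nine kilometres it points almost due north.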
To be a useful navigational tool, the figure would have to rotate no more than a couple of degrees over a journey of a hundred kilometres, but this would require the chariot's wheels to be equal in diameter to within one part in a million. Even if the process of manufacturing the wheels were capable of this precision (which would not be possible with ancient Chinese methods), it is doubtful that the equality of the wheels could be maintained for long as they are subjected to the wear and tear of travelling across open country. Irregularity of the ground would add further errors to the device's functioning. Considerable scepticism is therefore warranted as to whether this type of south-pointing chariot, using a differential gear for the whole time, was used in practice to navigate over long distances. Conceivably, the south-pointing doll was fixed to the body of the chariot while it was travelling in straight lines, and coupled to the differential only when the chariot was turning. The charioteer could have operated a control to do this just before and after making each turn, or maybe shouted commands to someone inside the chariot who connected and disconnected the doll and the differential. This could have been done without stopping the chariot. If turns were brief and rare, this would have greatly reduced the pointing errors, since they would have accumulated only during the short periods when the doll and differential were connected. However, it raises the problem of how the chariot could have been kept travelling in straight lines with sufficient accuracy without using the pointing doll. If the real purposes of the chariot and the accounts of it were amusement and impressing visiting foreigners, rather than actual long-distance navigation, then its inaccuracy might not have been important. Considering that a large mechanical wagon or chariot would be obligated to travel on roads, the destination in question would typically not be in an unknown direction. 
The fact that the sources cited above mention that the chariot was placed at the front of processions, its high level of mechanical complexity and fragility, and that it was 'reinvented' several times contribute to the conclusion that it was not used for navigation, as a truly practical and useful navigational tool would not be forgotten or left unused. Although the hypothesis that the south-pointing chariot used differential gears has gained wide acceptance, it should be recognized that functional south-pointing chariots without differential gears are possible. The ancient descriptions are often unclear, but they suggest that the Chinese implemented several different designs, at least some of which did not include differentials. Some of the ancient descriptions suggest that some south-pointing chariots could move in only three ways: straight ahead, or turning left or right with a fixed radius of curvature. A third wheel might have been used to fix the turning radius. If the chariot was turning, the pointing doll was connected by gears to one or other of the two main road wheels (e.g. whichever was on the outside of the curve around which the chariot was moving) so the doll rotated at a fixed speed, relative to the rate of the chariot's movement, to compensate for the predetermined rate of turn. The doll turned in opposite directions depending on which road wheel was connected to it, so its rotation compensated for the chariot turning left or right. This design would have been simpler than using a differential gear. The chariots of Yan Su and Wu Deren appear to have used this type of mechanism. (See descriptions quoted from the Song Shi , above.) Apart from the presence of components in Wu Deren's vehicle to make it function as an odometer , there were only minor differences between them. In each chariot, the two main road wheels were attached to vertical gear wheels. 
A large horizontal gear wheel was linked (possibly via intermediate gearing) to the pointing doll, and was positioned so a diameter almost spanned the space between the uppermost points of the vertical gear wheels. When the chariot was moving straight ahead, there was no connection between these gears, but when the chariot turned, a small gear wheel was lowered into contact with the horizontal gear and one of the vertical gears, thus linking the doll to one of the road wheels. Two small gear wheels were available, one to connect the horizontal gear to each of the vertical ones. Of course, they were not used simultaneously. The small gear wheels were raised and lowered by an arrangement of weights, pulleys and cords which were attached to the pole to which the horses that pulled the chariot were harnessed. When the horses moved to one side or the other, in order to turn the chariot, the pole moved and the cords lowered the appropriate small gear wheel into its working position. When the horses returned to walking straight ahead, the small gear wheel was raised out of contact with the main horizontal and vertical gears. Thus the system functioned automatically. The mirror-symmetry of the vertical gears being linked by the small gears to the horizontal gear at diametrically opposite points caused the horizontal gear to rotate in opposite directions depending on which road wheel was linked to it, thus rotating the pointing doll in opposite directions when the chariot turned left and right. The description does not mention a third road wheel to fix the turning radius, but it is possible that such a wheel was present. No gears would have been attached to it, so perhaps the author of the description did not mention it because he did not realize that it was an important part of the mechanism. Putting such a wheel on the chariot and making it function properly would not have been difficult. It might have been attached to the pole to which the horses were harnessed. 
Stops would have been provided to limit the motions of the pole to left and right. If a third road wheel was included, this type of south-pointing chariot could have worked quite accurately as a compass when used for short journeys under good conditions, but if used for long journeys it would have been subject to cumulative errors, like chariots using the differential mechanism. If in fact there was no third road wheel, the chariot might have functioned as a compass if turns were always made so that one of the two wheels was stationary and only the other rotated, with the pointing doll connected to it by gears. The charioteer could have kept the stationary wheel from turning by controlling the horses appropriately. (A brake would have helped, but there is no mention of one in the description.) The radius of the curve around which the rotating wheel moved would have equalled the track-width of the chariot, and the gears turning the doll would have been chosen accordingly. This design would have worked as a compass for short journeys, but would have suffered from cumulative errors if used for long ones. Also, the chariot would have been slow and awkward to turn. This might not have mattered if turns were rarely executed. The Song Shi description of the gears in Yan Su's chariot, and the numbers of teeth on them, suggests that it worked this way, without a third road wheel. The gear ratios would have been correct if the pointing doll was attached directly to the large horizontal gear wheel, and the track-width of the chariot equalled the diameter of the road wheels. Wu Deren's chariot also appears to have been designed to work this way. The width of the chariot, and therefore presumably the track-width, was greater than the diameter of the wheels. The gear ratios were appropriate for these dimensions. The charioteer would have had to use great skill to ensure that the radius of each turn of the chariot was correct to make one of the wheels exactly stop rotating. 
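The tooth counts quoted from the Song Shi make this claim easy to verify. The sketch below assumes, as the text does, that the doll sits directly on the 48-tooth horizontal wheel, that one road wheel is held stationary during a turn, and that the track width equals the 6 ft road-wheel diameter:

```python
import math

# Doll rotation vs. chariot heading change for Yan Su's gearing
# (tooth counts from the Song Shi passage quoted above).

road_diam = 6.0      # ft, road-wheel diameter
track = road_diam    # assumed equal to the wheel diameter
subordinate, small, large = 24, 12, 48   # tooth counts of the gear train

# One revolution of the moving road wheel advances all 24 subordinate
# teeth, turning the 12-tooth small wheel twice, which drives 24 teeth
# of the 48-tooth large wheel: the doll makes half a revolution.
doll_deg = (subordinate / small) * small / large * 360

# Pivoting about the stopped wheel, one wheel revolution rolls pi * diam,
# changing the heading by (pi * diam) / track radians.
heading_deg = math.degrees(math.pi * road_diam / track)

print(doll_deg, round(heading_deg, 1))  # -> 180.0 180.0
```

With the track equal to the wheel diameter, each revolution of the rolling wheel turns the chariot through half a circle while the doll makes exactly half a revolution in the opposite sense, so the pointer holds its bearing; any other track width would leave a residual error on every turn.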
Unless he did this correctly, the pointing doll would not have kept aiming to the south. He would have been able to adjust the direction in which it aimed by making turns that were more or less sharp. This would sometimes have given him opportunities to use the chariot dishonestly. If it was being demonstrated to spectators, for example, and was being driven around in front of them, making many turns, the charioteer, who would have known which way was south, would have been able to make the chariot appear to work extremely accurately as a compass for long periods. The spectators could have been shown the machinery, and would have seen that the charioteer could not manipulate the doll. They would presumably have been impressed by the apparent accuracy of the mechanism. It is possible that this type of chariot was sometimes constructed with the prime purpose of fraudulently impressing spectators. Possibly, people who built these chariots deceived their own employers with them, which could have gained them fame and fortune provided nobody tried using the chariots for real navigation.

Other mechanical designs for the south-pointing chariot are also possible, including ones that employ a device that is used today, the gyrocompass . However, there is no indication that the ancient Chinese knew of these.

Some south-pointing chariots may not have been purely mechanical devices. Someone riding inside the chariot may have used some non-mechanical method of determining the compass directions, and turned the doll on top of the chariot accordingly. Several methods could have been used. Unlike mechanisms that rely on the rotation of road wheels, most of these methods can be used at sea. This may account for the mention (see "Earliest sources" above) that a marine version of the south-pointing chariot existed. These methods can work accurately over long distances, unlike the mechanical designs for the chariot. 
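The single-wheel turning geometry described above can be sketched numerically. The following is a minimal illustration, not drawn from any historical source: the function name and parameters are hypothetical, and it assumes a chariot whose track-width equals its wheel diameter (so the gearing halves and reverses the rotating wheel's motion), as the Song Shi gear counts suggest. When one road wheel is held stationary, the other sweeps an arc of radius equal to the track-width, and the fixed gear ratio makes the doll counter-rotate by exactly the chariot's turn angle, preserving its absolute bearing.

```python
import math

def pivot_turn(heading, doll_angle, turn_angle, track_width=1.0, wheel_diameter=1.0):
    """One turn about a stationary road wheel (all angles in radians).

    doll_angle is the doll's direction relative to the chariot body.
    The moving wheel travels an arc of length track_width * turn_angle,
    so it rotates by that arc divided by its radius.  Gearing scales this
    by wheel_radius / track_width and reverses it, so the doll turns back
    by exactly turn_angle and its absolute bearing is unchanged.
    """
    wheel_radius = wheel_diameter / 2.0
    wheel_rotation = (track_width * turn_angle) / wheel_radius
    gear_ratio = wheel_radius / track_width  # 1/2 when track-width == wheel diameter
    return heading + turn_angle, doll_angle - wheel_rotation * gear_ratio

# Chariot initially faces north (0); the doll points south (pi) in ground terms.
heading, doll_angle = 0.0, math.pi
for angle in (0.5, -1.2, 0.7):  # a sequence of left and right turns
    heading, doll_angle = pivot_turn(heading, doll_angle, angle)

print(heading + doll_angle)  # absolute bearing of the doll: still pi (south)
```

As the sketch shows, the cancellation is exact only when each turn pivots about a truly stationary wheel; any slip or wrong turning radius leaves a residual error that accumulates, which is the cumulative-error problem noted above.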
While none of the historic south-pointing chariots remain, [ citation needed ] full-sized replicas can be found. The History Museum [ clarification needed ] in Beijing , China, holds a replica based on the mechanism of Yen Su (1027). [ citation needed ] The National Palace Museum in Taipei , Taiwan, holds a replica based on the Lanchester mechanism of 1932. [ citation needed ] Two replicas, referred to as the "southern pointing man", can also be seen (and physically experimented with) at the Ontario Science Centre in Toronto, Canada. [ citation needed ]
https://en.wikipedia.org/wiki/South-pointing_chariot
The South African Academy of Engineering (S.A.A.E.) is a non-profit , independent institution with some 151 fellows (as of June 2009), drawn mainly from the engineering sector of the Republic of South Africa . The aim of the academy is to promote excellence in the science and application of engineering for the benefit of South Africa. [ citation needed ] The academy's membership is drawn from South Africa's engineers and related professions. [ citation needed ]
https://en.wikipedia.org/wiki/South_African_Academy_of_Engineering
The South African Chemical Workers' Union (SACWU) is a trade union representing workers in the chemical industry in South Africa . The union was founded in 1973 and affiliated to the Consultative Committee, a loose grouping of trade unions. It was initially very small, and had grown to only 960 members by 1979. In 1980, it affiliated to the new Council of Unions of South Africa , and grew rapidly, with 9,479 members by the end of the year. [ 1 ] In 1986, it transferred to the new National Council of Trade Unions (NACTU), at which point it had 30,000 members. [ 2 ] By 1994, SACWU was NACTU's largest affiliate, its membership being similar to that of the rival Chemical Workers' Industrial Union . [ 3 ] In 2011, its membership was about 30,000. [ 4 ]
https://en.wikipedia.org/wiki/South_African_Chemical_Workers'_Union
The South African Institute of Electrical Engineers (SAIEE) is a professional association representing electrical and electronic engineers, technologists and technicians in Southern Africa. The organisation is listed as a recognised Voluntary Association [ 1 ] by the Engineering Council of South Africa (ECSA), the statutory body that registers professional engineers, professional certificated engineers, professional engineering technologists and professional engineering technicians in South Africa . For over a century, [ 2 ] the activities of the SAIEE have included publication, education, the promotion of electrical engineering , professional development of its members, public events, and participation in public debate affecting the profession, industry and society. The SAIEE has sections covering several aspects of electrical engineering. The SAIEE administers a number of university bursaries and scholarships in the field of electrical and electronic engineering in South Africa. Through its marketing and outreach activities, the organisation promotes engineering and encourages young people to enter the profession. The SAIEE also provides accreditation of courses for Continuing Professional Development (CPD) points, as required by ECSA for renewal of professional registration. The SAIEE runs regular seminars, lectures and other events for its members and the public. Notable annual events include the Bernard Price Memorial Lecture , arranged jointly with the University of the Witwatersrand since 1951, and the President's Invitation Lecture.
https://en.wikipedia.org/wiki/South_African_Institute_of_Electrical_Engineers
The South African National Biodiversity Institute (SANBI) is an organisation tasked with research and dissemination of information on biodiversity, and legally mandated to contribute to the management of the country's biodiversity resources. [ 3 ] It was established in 2004 in terms of the National Environmental Management: Biodiversity Act, No 10 of 2004 , under the South African Department of Environmental Affairs (later named Department of Forestry, Fisheries and the Environment ). SANBI was established on 1 September 2004 in terms of the National Environmental Management: Biodiversity Act, No 10 of 2004. [ 3 ] Previously, in 1989, the autonomous statutory National Botanical Institute (NBI) had been formed from the National Botanic Gardens and the Botanical Research Institute, which had been founded in the early 20th century to study and conserve the South African flora. The mandate of the National Botanical Institute was expanded by the act to include the full diversity of the South African ecosystems. The NBI had its head office at Kirstenbosch in Cape Town, and gardens and research centres throughout South Africa. [ 4 ] Functions include providing knowledge, information, policy support and advice, managing botanical gardens for research, education and public enjoyment, and engaging in ecosystem restoration and rehabilitation programmes and providing models of best practice for biodiversity management. [ 3 ] Core activities include research into conservation and sustainable use, garden development and horticulture, education and provision of biodiversity information systems, ecosystems rehabilitation and development of bioregional planning programmes and policies. 
[ 3 ] SANBI contributes to the reduction of poverty by providing training and creating sustainable employment in programmes for rehabilitating ecosystems, and programmes to encourage participation in biodiversity science at school level and to strengthen the quality of biodiversity teaching and learning. [ 3 ] Research is a primary component of SANBI's agenda, and includes research into climate change and bio-adaptation. The research is intended to inform climate change policy development and decision making. [ 3 ] SANBI is legally mandated to contribute to the management of the country's biodiversity resources. [ 3 ] The Institute hosts the Red List of South African Plants , a database with descriptions of the country's indigenous plants and their national conservation status. [ 5 ] SANBI also maintains the website PlantZAfrica , which contains over 1,850 Plant of the Week articles, with two new Plant of the Week articles added every week. The site also contains some basic information on the vegetation of South Africa and related topics. Content is developed by the horticultural and scientific staff of SANBI to provide easy access to popular information. SANBI conducts nationwide biodiversity conservation assessments of various classes of animals, which generally involve field trips for the collection of data. Interested members of the public can participate in several citizen science projects. [ 3 ] A biodiversity knowledge and information management system is provided which integrates existing information resources for easy access for both internal and external end-users. [ 3 ] Since 2014, Ronell Renett Klopper has been the coordinator for the South African National Plant Checklist in the Fundamental Biodiversity Sciences Division of SANBI.
https://en.wikipedia.org/wiki/South_African_National_Biodiversity_Institute
The South Carolina Aeronautics Commission ( SCAC ) is a government agency in the U.S. state of South Carolina . The SCAC, in conjunction with the Federal Aviation Administration , is "responsible for collecting, validating, and distributing the operational status of all aspects of the state’s air traffic facilities, in addition to the safety of the people in these locations." [ 1 ] The agency also promulgates rules and regulations for airports and administers airport grants in the state. [ 2 ] [ 3 ] The SCAC was created in 1935 by an act of the South Carolina General Assembly . [ 4 ] Unlike most state aeronautics agencies in the Southeastern United States, the SCAC is not a part of the state's department of transportation . [ 4 ]
https://en.wikipedia.org/wiki/South_Carolina_Aeronautics_Commission
The South Turkmenistan Complex Archaeological Expedition (STACE), also called the South Turkmenistan Archaeological Inter-disciplinary Expedition of the Academy of Sciences of the Turkmen Soviet Socialist Republic (YuTAKE), was endorsed by the Turkmenistan Academy of Sciences. It was initially organized by the orientalist Mikhail Evgenievich Masson in 1946. [ 1 ] [ 2 ] [ 3 ] The expedition had several excavations or "Brigades", based on sites and periods, spread over many years.

The Chalcolithic settlements of southern Turkmenistan, according to Masson, date to the late 5th millennium – early 3rd millennium BC, as assessed by carbon dating and paleomagnetic studies of the findings from the excavations carried out by STACE at Altyndepe and Tekkendepe . The foothills of the Kopetdag mountains have revealed the earliest village cultures of Central Asia in the areas of Namazga-Tepe (more than 50 ha), Altyndepe (26 ha), Ulug Depe (20 ha), Kara Depe (15 ha), and Geok-Syur (12 ha). [ 4 ] In 1952, Boris Kuftin established the basic Chalcolithic to Late Bronze Age sequence based on the excavations carried out at Namazga-Tepe (termed Namazga (NMG) I-VI). [ 5 ] However, the Chalcolithic period ended about 2700 BC due to natural factors of ecology, with the Geok-Syur oasis becoming desertified. This resulted in the migration of people to the ancient delta of the Tedzhen River , and led to the Early Bronze Age settlements at Khapuz-depe . [ 4 ]

The geographical locations of the explorations in southern Turkmenistan were not marked with precision. The Merv Oasis is one of the regions explored by Soviet archeologists of the YuTAKE; Antiochia is the primary site in this oasis. [ 3 ] Namazga-Tepe ('tepe' means "hill") is 100–120 kilometers from Ashgabat in Turkmenistan at the border with Iran, southeast of the Caspian Sea . Excavations at this site have provided a chronological framework for Central Asia. 
[ 6 ] Namazga-Tepe was the largest settlement found in the Kopetdag foothills, a range of hills extending up to the border with Iran. The Merv Oasis had been extensively explored in 1904 by an American team; however, the reports published were of a preliminary nature. During the period 1940–1950, the Asian republics started establishing archaeological institutions in their respective countries. Among them, the South Turkmenistan Complex Archaeological Expedition was established in 1946 by Masson under the aegis of the Turkmenistan Academy of Sciences to carry out explorations at several locations. Since 1992, excavations have been carried out by a joint project titled "Turkmen-British-Merv Project". This has yielded historical data on fortifications and a residential complex of the Hellenistic , Parthian and Sassanian periods. [ 3 ]

The V Brigade uncovered peculiarities in metal composition during the Palaeo-Metallic epoch at the Altyn-Depe settlement. [ 7 ] The VII Brigade, led by Kuftin, carried out the Namazgadepe explorations, which revealed six sequentially deposited phases, referred to as Namazga I to VI. It established the Chalcolithic ( Eneolithic ) to late Bronze Age sequence. Between 1951 and 1961, the VII Brigade explored the Bronze Age site of Altyndepe (which had been discovered earlier by A.A. Semenov in 1929), the Iron Age Yaz I complex in Margiana (old delta of the Morghab River ), 11 sites at the ancient delta of the Geoksyur oasis, and Bronze Age piedmont sites in the Sumbar Valley, with a noteworthy discovery of the Early Bronze Age cemetery of Parkhai II. [ 1 ] In the Margiana archaeological expeditions undertaken during the second phase, work was continued at Auchindepe and Takirbaidepe, which revealed 100 Bronze Age sites and the settlement of Gomur I. Also explored were the sites at the southern and eastern Togolok and Gomur, and in the northern part, the Kalleli group of sites. 
The Jeitun Culture Neolithic sites of the Kopetdag were explored from 1963 to 1973. The survey covered the Jeitun Culture as a whole, and particular credit is given to the Turkmen archaeologist O. K. Berdiev, who died in an accident at a young age; his 10 years of explorations have been published under the title "The Most Ancient Agriculturalists of Southern Turkmenistan." Neolithic piedmont sites of the Jeitun Culture extended from Bami in the west to the Meana Chacha district in the east. In the explorations at the north mound of Anau , excavations in the Komanov trench at the north end were subject to deep sounding, which revealed consecutive layers of buildings. From this, a stratigraphic sequence of developments evolved with "craft production and social stratification". [ 1 ]

The IX Brigade, led by Okladnikov, worked in the Greater Balkan region of Turkmenistan and on the plateau of Krasnovodsk . The finds at the Jebel rock shelter site near the Caspian Sea on the southwestern end of the Uly Balkan massif formed a stratigraphic sequence of Mesolithic and Neolithic deposits, considered a model for the Turkmenistan Caspian Mesolithic period. Two other sites, located in the southern escarpments of the Greater Balkan, were examined in great detail by G. E. Markov of Moscow State University ; these were the Mesolithic sites of Dam-Dam Cheshme 1 and 2. [ 1 ]

The XIV Brigade worked in 1952 and researched primitive settled-agriculturalist settlements attributed to the Copper and Bronze periods. [ 8 ] The explorations in the foothills of the Kopetdag revealed well-developed irrigation systems with water control arrangements, which resulted in prosperous, well-settled large regional centres. [ 6 ] The largest of these settlements is Namazga-Tepe, with an area of 50 ha. The excavations done at this site led to the discovery of six distinct periods. Named Namazgadepe I to VI, the periods extended over the late 5th millennium to early 3rd millennium BC. 
In the process of development over these centuries, the transition observed was from the Chalcolithic period to the Early Bronze Age, with urban characteristics in the settlements. Dwelling houses also developed from chaotically planned one-room houses to larger houses with many rooms, with the interiors painted (lac paintings) and with a hearth. Defensive forts were part of the settlements. Chalcolithic stone amulets with geometric shapes, pottery traditions with two-tiered furnaces for firing ceramics, terracotta figurines, stamp seals of clay and stone, and centres of metallurgical production were uncovered. Rosette and zoomorphic patterns were unearthed, representing various periods, both at Namazga-Tepe and at other settlements in the foothills of the Kopetdag mountains. These are clearly indicative of the village cultures of Central Asia. [ 6 ]

Kuftin was invited to Central Asia to carry out explorations in 1949. He first reconnoitered Turkmenistan and selected a very large tepe (hill), Altyndepe (in the Turkmen language meaning "Golden Hill"). This tepe overlooks the Tedzen delta at the foot of the Kopetdag. He found a Neolithic settlement extending into the Bronze Age in southern Turkmenistan near the village of Miana , a settlement of 25 ha area with a total stratification thickness of 30 metres (98 ft), with an 8 metres (26 ft) strip of human habitation. This excavated tepe turned out to be a large settlement, 2.5 kilometres (1.6 mi) in length and 0.5 kilometres (0.31 mi) in width, and was identified as a major Bronze Age town. From the highest point of this tepe, a trench was dug to a depth of 30 metres (98 ft) and the section was logged, revealing layers of the Bronze Age and of the Neolithic and Eneolithic periods. Ceramics collected from the different layers of the trench enabled Kuftin to establish the sequence and chronology of the findings. 
One year after he started sequencing the site, he died suddenly and was replaced by Vadim Mikhailovich Masson, who published a book on the Bronze Age sequence of this site. The settlement of Ilgynly had also shifted to Altyndepe. Early Bronze period fort walls with decorated towers and a huge entrance had encircled this settlement, though when found, they were in ruins. [ 6 ] [ 9 ] Discoveries by Soviet archeologists dated the finds at this place, in chronological order, to the later half of the third millennium BC. [ 10 ] Altyn-Depe also provided a link to the several Bronze Age cultures of Eurasia . [ 11 ] The most notable findings in the burial ground of the elite, located on the outskirts of Altyndepe, were "a disk-like stone 'weight', a miniature column, more than 1500 beads, a steatite plate with an image of cross and half-moon, a moulded clay wolf, as well as a golden head of a bull with a turquoise sickle inlaid in the forehead". [ 4 ]

Excavations revealed bone and copper artifacts of the fifth millennium BC (Neolithic period), female figurines painted with ornaments, and necklaces of the fourth millennium BC. Brick walls of 1.5–2 metres (4 ft 11 in – 6 ft 7 in) thickness, with brick kilns and a hearth in the middle of the house, dated to the early third millennium, and small temple buildings and rectangular hearths of Namazga V type of the middle third millennium were also found. In the period from the late third millennium to the early second millennium, the excavations revealed an urban habitation with artisans' houses. Also unearthed were 62 double-tiered kilns, beads and seals. Four stepped ziggurats were found. Further finds included female terracotta figurines with plaited hair, stone vessels, hafted bronze and copper daggers with flat blades, and tabbed silver and bronze seals . 
However, the settlement gradually disappeared (it was deserted around 1600 BC) as a result of climatic changes; people migrated to the Mugrab region, another area of South Uzbekistan ( Sapali ), and Northern Afghanistan ( Dashli ). [ 10 ] Further, these findings confirmed the Middle Asian interaction from the north to the Oxus civilization . [ 6 ]

The Geoksyur Oasis, located in the foothills of the Kopetdag to the east of Altyndepe, is at the center of a cluster of tepes in the desert region on the northern Iranian border. It extends over an area of 12 ha and is 20 kilometres (12 mi) to the east of the city of Tedzhen . Although in the Eneolithic period (4th – early 3rd millennium BC) the space between houses was used for burials, the site was not a cemetery but a settlement, one affected by shifting sand dunes and scarcity of water. Geoksyur was revealed to contain "adobe multi-room houses and group burial chambers". Ceramics with dichromatic paintings and many female terracotta figurines were also found. The culture of Geoksyurtepe was correlated with an eastern Anau group of tribes linked to Elam and Mesopotamia . [ 6 ]

According to the Greek - Russian archaeologist Viktor Sarianidi, who explored the tepes, Gonurtepe was the "capital or the imperial city" of a complex Bronze Age state, one that stretched at least a thousand square miles and encompassed hundreds of satellite settlements. He also called it the "world's fifth center of ancient civilization", with its refined society called "Turkmenistan's Morghab River society", formally the " Bactria-Margiana Archaeological Complex ". It is said to be in league with the "cultural cradles of antiquity" of Egypt , Mesopotamia , India , and China . [ 12 ] The Morghab civilization developed along the meandering Morghab River, near Gonurdepe and Merv , once an important place along the Silk Route . 
The river flows through the regional capital city of Mary , about 40 miles away from the exploration site of Gonurtepe. The site is dated to the late 3rd millennium BC. Excavations have taken place for more than 35 years and still continue, at a slow pace due to a lack of adequate funding. The main finding of the excavations is that the site held "an agricultural and herding community who grew grain, raised sheep, built sophisticated irrigation and sewage systems, and produced ceramics in the many kilns that dot the landscape." A fort had been built with thick walls, and the enclosed area within the fort had single-storied houses, as well as a palace, two observatories and cremation grounds. The excavation of the cemeteries revealed many objects, both local and imported (from the Indus Valley and Egypt). Religious practices indicated that it was the birthplace of the Zoroastrian religion , a monotheistic religion. The practices of sheep sacrifices, temples dedicated to fire and water, and the drinking of soma-haoma (a brew presumed to be made of opium , ephedra , and a local narcotic ) have been deduced as practices followed by Zoroastrians . [ 12 ]
https://en.wikipedia.org/wiki/South_Turkmenistan_Complex_Archaeological_Expedition
Southeast is a compass point. Southeast , south-east , south east , southeastern , south-eastern , or south eastern may also refer to various places and other topics in the United Kingdom and elsewhere.
https://en.wikipedia.org/wiki/Southeast_(disambiguation)