**Monoidal category**

In mathematics, a monoidal category (or tensor category) is a category C equipped with a bifunctor ⊗ : C × C → C that is associative up to a natural isomorphism, and an object I that is both a left and right identity for ⊗, again up to a natural isomorphism. The associated natural isomorphisms are subject to certain coherence conditions, which ensure that all the relevant diagrams commute.

The ordinary tensor product makes vector spaces, abelian groups, R-modules, or R-algebras into monoidal categories. Monoidal categories can be seen as a generalization of these and other examples. Every (small) monoidal category may also be viewed as a "categorification" of an underlying monoid, namely the monoid whose elements are the isomorphism classes of the category's objects and whose binary operation is given by the category's tensor product.

A rather different application, of which monoidal categories can be considered an abstraction, is that of a system of data types closed under a type constructor that takes two types and builds an aggregate type; the types are the objects and ⊗ is the aggregate constructor. The associativity up to isomorphism is then a way of expressing that different ways of aggregating the same data—such as ((a,b),c) and (a,(b,c))—store the same information even though the aggregate values need not be the same. Aggregation may be analogous to the operation of addition (type sum) or of multiplication (type product). For type product, the identity object is the unit type (), which has exactly one inhabitant; that is why a product with it is always isomorphic to the other operand. For type sum, the identity object is the void type, which stores no information and has no addressable inhabitant.
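The "associativity up to isomorphism" of product types can be made concrete in a short sketch (the helper names `assoc` and `unassoc` are illustrative, not from the text): the two nestings are different values, but each can be converted to the other without losing information.

```python
# Minimal sketch: the product type on pairs is associative only up to
# isomorphism -- ((a, b), c) and (a, (b, c)) are different values that
# nevertheless store the same information.

def assoc(t):
    """Re-associate ((a, b), c) -> (a, (b, c))."""
    (a, b), c = t
    return (a, (b, c))

def unassoc(t):
    """Re-associate (a, (b, c)) -> ((a, b), c)."""
    a, (b, c) = t
    return ((a, b), c)

# The two shapes are distinct values...
left = ((1, "x"), 3.0)
right = (1, ("x", 3.0))
assert left != right
# ...but assoc / unassoc witness the isomorphism between them.
assert assoc(left) == right
assert unassoc(assoc(left)) == left
```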
The concept of monoidal category does not presume that values of such aggregate types can be taken apart; on the contrary, it provides a framework that unifies classical and quantum information theory. In category theory, monoidal categories can be used to define the concept of a monoid object and an associated action on the objects of the category. They are also used in the definition of an enriched category.

Monoidal categories have numerous applications outside of category theory proper. They are used to define models for the multiplicative fragment of intuitionistic linear logic. They also form the mathematical foundation for the topological order in condensed matter physics. Braided monoidal categories have applications in quantum information, quantum field theory, and string theory.

Formal definition: A monoidal category is a category C equipped with a monoidal structure. A monoidal structure consists of the following:

- a bifunctor ⊗ : C × C → C called the monoidal product, or tensor product,
- an object I called the monoidal unit, unit object, or identity object,
- three natural isomorphisms subject to certain coherence conditions expressing the fact that the tensor operation is associative: there is a natural (in each of three arguments A, B, C) isomorphism α, called the associator, with components αA,B,C : A ⊗ (B ⊗ C) ≅ (A ⊗ B) ⊗ C; and has I as left and right identity: there are two natural isomorphisms λ and ρ, respectively called the left and right unitor, with components λA : I ⊗ A ≅ A and ρA : A ⊗ I ≅ A.

A good way to remember how λ and ρ act is by alliteration: lambda, λ, cancels the identity on the left, while rho, ρ, cancels the identity on the right. The coherence conditions for these natural transformations are: for all A, B, C and D in C, the pentagon diagram commutes; for all A and B in C, the triangle diagram commutes. A strict monoidal category is one for which the natural isomorphisms α, λ and ρ are identities.
Every monoidal category is monoidally equivalent to a strict monoidal category.

Examples: Any category with finite products can be regarded as monoidal with the product as the monoidal product and the terminal object as the unit. Such a category is sometimes called a cartesian monoidal category. For example: Set, the category of sets with the Cartesian product, any particular one-element set serving as the unit; Cat, the category of small categories with the product category, where the category with one object and only its identity map is the unit. Dually, any category with finite coproducts is monoidal with the coproduct as the monoidal product and the initial object as the unit. Such a monoidal category is called cocartesian monoidal. R-Mod, the category of modules over a commutative ring R, is a monoidal category with the tensor product of modules ⊗R serving as the monoidal product and the ring R (thought of as a module over itself) serving as the unit. As special cases one has K-Vect, the category of vector spaces over a field K, with the one-dimensional vector space K serving as the unit, and Ab, the category of abelian groups, with the group of integers Z serving as the unit. For any commutative ring R, the category of R-algebras is monoidal with the tensor product of algebras as the product and R as the unit. The category of pointed spaces (restricted to compactly generated spaces for example) is monoidal with the smash product serving as the product and the pointed 0-sphere (a two-point discrete space) serving as the unit. The category of all endofunctors on a category C is a strict monoidal category with the composition of functors as the product and the identity functor as the unit. Just as, for any category E, the full subcategory spanned by any given object is a monoid, so, for any 2-category E and any object C in Ob(E), the full 2-subcategory of E spanned by {C} is a monoidal category.
In the case E = Cat, we get the endofunctors example above. Bounded-above meet semilattices are strict symmetric monoidal categories: the product is meet and the identity is the top element. Any ordinary monoid (M, ⋅, 1) is a small monoidal category with object set M, only identities for morphisms, ⋅ as tensor product and 1 as its identity object. Conversely, the set of isomorphism classes (if such a thing makes sense) of a monoidal category is a monoid with respect to the tensor product. Any commutative monoid (M, ⋅, 1) can be realized as a monoidal category with a single object. Recall that a category with a single object is the same thing as an ordinary monoid. By an Eckmann–Hilton argument, adding another monoidal product on M requires the product to be commutative.

Monoidal preorders: Monoidal preorders, also known as "preordered monoids", are special cases of monoidal categories. This sort of structure comes up in the theory of string rewriting systems, but it is plentiful in pure mathematics as well. For example, the set N of natural numbers has both a monoid structure (using + and 0) and a preorder structure (using ≤), which together form a monoidal preorder, basically because m ≤ n and m′ ≤ n′ imply m + m′ ≤ n + n′. We now present the general case. It is well known that a preorder can be considered as a category C such that for every two objects c, c′ ∈ Ob(C), there exists at most one morphism c → c′ in C. If there happens to be a morphism from c to c′, we could write c ≤ c′, but in the current section we find it more convenient to express this fact in arrow form c → c′. Because there is at most one such morphism, we never have to give it a name, such as f : c → c′. The reflexivity and transitivity properties of an order are respectively accounted for by the identity morphism and the composition formula in C. We write c ≅ c′ iff c ≤ c′ and c′ ≤ c, i.e. if they are isomorphic in C. Note that in a partial order, any two isomorphic objects are in fact equal.
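The natural-numbers example above is easy to spot-check mechanically. A minimal sketch (finite grid only, not a proof): the functoriality condition for the monoidal preorder (N, +, 0, ≤) is exactly monotonicity of + in both arguments.

```python
from itertools import product

# Minimal sketch: (N, +, 0, <=) is a monoidal preorder because + is
# monotone in both arguments: m <= n and m2 <= n2 imply m + m2 <= n + n2.
# We spot-check this functoriality condition on a finite grid.
N = range(10)
for m, n, m2, n2 in product(N, repeat=4):
    if m <= n and m2 <= n2:
        assert m + m2 <= n + n2

# The unit laws hold on the nose (equalities, not just isomorphisms),
# since (N, +, 0) is an ordinary monoid:
assert all(0 + n == n == n + 0 for n in N)
```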
Moving forward, suppose we want to add a monoidal structure to the preorder C. To do so means we must choose an object I ∈ C, called the monoidal unit, and a functor C × C → C, which we will denote simply by the dot " ⋅ ", called the monoidal multiplication. Thus for any two objects c1, c2 we have an object c1 ⋅ c2. We must choose I and ⋅ to be associative and unital, up to isomorphism. This means we must have (c1 ⋅ c2) ⋅ c3 ≅ c1 ⋅ (c2 ⋅ c3) and I ⋅ c ≅ c ≅ c ⋅ I. Furthermore, the fact that ⋅ is required to be a functor means—in the present case, where C is a preorder—nothing more than the following: if c1 → c1′ and c2 → c2′ then (c1 ⋅ c2) → (c1′ ⋅ c2′). The additional coherence conditions for monoidal categories are vacuous in this case because every diagram commutes in a preorder. Note that if C is a partial order, the above description is simplified even more, because the associativity and unitality isomorphisms become equalities. Another simplification occurs if we assume that the set of objects is the free monoid on a generating set Σ. In this case we could write Ob(C) = Σ∗, where ∗ denotes the Kleene star and the monoidal unit I stands for the empty string. If we start with a set R of generating morphisms (facts about ≤), we recover the usual notion of a semi-Thue system, where R is called the set of "rewriting rules". To return to our example, let N be the category whose objects are the natural numbers 0, 1, 2, ..., with a single morphism i → j if i ≤ j in the usual ordering (and no morphisms from i to j otherwise), and a monoidal structure with the monoidal unit given by 0 and the monoidal multiplication given by the usual addition, i ⋅ j := i + j. Then N is a monoidal preorder; in fact it is the one freely generated by a single object 1 and a single morphism 0 ≤ 1, where again 0 is the monoidal unit.

Properties and associated notions: It follows from the three defining coherence conditions that a large class of diagrams (i.e.
diagrams whose morphisms are built using α, λ, ρ, identities and the tensor product) commute: this is Mac Lane's "coherence theorem". It is sometimes inaccurately stated that all such diagrams commute. There is a general notion of a monoid object in a monoidal category, which generalizes the ordinary notion of a monoid from abstract algebra. Ordinary monoids are precisely the monoid objects in the cartesian monoidal category Set. Further, any (small) strict monoidal category can be seen as a monoid object in the category of categories Cat (equipped with the monoidal structure induced by the cartesian product). Monoidal functors are the functors between monoidal categories that preserve the tensor product, and monoidal natural transformations are the natural transformations, between those functors, which are "compatible" with the tensor product. Every monoidal category can be seen as the category B(∗, ∗) of a bicategory B with only one object, denoted ∗. The concept of a category C enriched in a monoidal category M replaces the notion of a set of morphisms between pairs of objects in C with the notion of an M-object of morphisms between every two objects in C.

Free strict monoidal category: For every category C, the free strict monoidal category Σ(C) can be constructed as follows: its objects are lists (finite sequences) A1, ..., An of objects of C; there are arrows between two objects A1, ..., Am and B1, ..., Bn only if m = n, and then the arrows are lists (finite sequences) of arrows f1 : A1 → B1, ..., fn : An → Bn of C; the tensor product of two objects A1, ..., An and B1, ..., Bm is the concatenation A1, ..., An, B1, ..., Bm of the two lists, and, similarly, the tensor product of two morphisms is given by the concatenation of lists. The identity object is the empty list. This operation Σ, mapping category C to Σ(C), can be extended to a strict 2-monad on Cat.
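The object part of the free strict monoidal category can be sketched in a few lines (a toy model, not a full construction: objects are tuples of labels, and morphisms are omitted). Strictness means associativity and the unit laws hold as literal equalities.

```python
# Minimal sketch of the free strict monoidal category on objects:
# objects are finite tuples, the tensor product is concatenation, and
# the unit is the empty tuple. Strictness: the monoidal laws hold as
# equalities, not merely up to isomorphism.

def tensor(xs, ys):
    """Tensor product of two objects (lists of C-objects) is concatenation."""
    return xs + ys

unit = ()  # the empty list: the identity object

A, B, C = ("A1", "A2"), ("B1",), ("C1", "C2")

# Strict associativity: (A (x) B) (x) C equals A (x) (B (x) C) on the nose.
assert tensor(tensor(A, B), C) == tensor(A, tensor(B, C))
# Strict unit laws: I (x) A == A == A (x) I.
assert tensor(unit, A) == A == tensor(A, unit)
```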
Specializations: If, in a monoidal category, A ⊗ B and B ⊗ A are naturally isomorphic in a manner compatible with the coherence conditions, we speak of a braided monoidal category. If, moreover, this natural isomorphism is its own inverse, we have a symmetric monoidal category. A closed monoidal category is a monoidal category where the functor X ↦ X ⊗ A has a right adjoint, which is called the "internal Hom-functor" X ↦ HomC(A, X). Examples include cartesian closed categories such as Set, the category of sets, and compact closed categories such as FdVect, the category of finite-dimensional vector spaces. Autonomous categories (or compact closed categories or rigid categories) are monoidal categories in which duals with nice properties exist; they abstract the idea of FdVect. Dagger symmetric monoidal categories are equipped with an extra dagger functor, abstracting the idea of FdHilb, the category of finite-dimensional Hilbert spaces; these include the dagger compact categories. Tannakian categories are monoidal categories enriched over a field, which are very similar to representation categories of linear algebraic groups.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Isoperimetric ratio**

In analytic geometry, the isoperimetric ratio of a simple closed curve in the Euclidean plane is the ratio L²/A, where L is the length of the curve and A is its area. It is a dimensionless quantity that is invariant under similarity transformations of the curve. According to the isoperimetric inequality, the isoperimetric ratio has its minimum value, 4π, for a circle; any other curve has a larger value. Thus, the isoperimetric ratio can be used to measure how far from circular a shape is. The curve-shortening flow decreases the isoperimetric ratio of any smooth convex curve so that, in the limit as the curve shrinks to a point, the ratio becomes 4π. For higher-dimensional bodies of dimension d, the isoperimetric ratio can similarly be defined as Bᵈ/Vᵈ⁻¹, where B is the surface area of the body (the measure of its boundary) and V is its volume (the measure of its interior). Other related quantities include the Cheeger constant of a Riemannian manifold and the (differently defined) Cheeger constant of a graph.
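A minimal numerical sketch of the two claims above: the circle attains the minimum value 4π, any other shape (here a square) scores higher, and the ratio is invariant under scaling.

```python
import math

# Minimal sketch: the isoperimetric ratio L^2 / A is scale-invariant and
# minimized (value 4*pi) by the circle; any other curve scores higher.

def isoperimetric_ratio(length, area):
    return length ** 2 / area

r = 2.5
circle = isoperimetric_ratio(2 * math.pi * r, math.pi * r ** 2)
assert math.isclose(circle, 4 * math.pi)   # the minimum, ~12.566

s = 3.0
square = isoperimetric_ratio(4 * s, s ** 2)
assert math.isclose(square, 16.0)          # larger: a square is not a circle
assert square > circle

# Invariance under similarity: scaling the curve leaves the ratio unchanged.
k = 7.0
assert math.isclose(isoperimetric_ratio(k * 4 * s, k ** 2 * s ** 2), square)
```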
**Security characteristic line**

The security characteristic line (SCL) is a regression line, plotting the performance of a particular security or portfolio against that of the market portfolio at every point in time. The SCL is plotted on a graph where the Y-axis is the excess return on a security over the risk-free return and the X-axis is the excess return of the market in general. The slope of the SCL is the security's beta, and the intercept is its alpha.

Formula:

SCL: Ri,t − Rf = αi + βi(RM,t − Rf) + εi,t

where:
- αi is the asset's alpha (abnormal return)
- βi(RM,t − Rf) is the nondiversifiable or systematic risk
- εi,t is the non-systematic, diversifiable, non-market or idiosyncratic risk
- RM,t is the return of the market portfolio
- Rf is the risk-free rate
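Fitting the SCL is an ordinary least-squares regression of security excess returns on market excess returns. A minimal sketch with synthetic data (the alpha, beta, and noise values below are invented for illustration): the fitted slope recovers beta and the intercept recovers alpha.

```python
import numpy as np

# Minimal sketch with synthetic data: fitting the SCL
#   R_i - R_f = alpha_i + beta_i * (R_M - R_f) + eps
# by ordinary least squares; the slope estimates beta, the intercept alpha.
rng = np.random.default_rng(0)

true_alpha, true_beta = 0.002, 1.3                      # assumed values
market_excess = rng.normal(0.005, 0.04, size=5000)      # R_M - R_f per period
noise = rng.normal(0.0, 0.01, size=5000)                # idiosyncratic risk eps
security_excess = true_alpha + true_beta * market_excess + noise

# np.polyfit with deg=1 returns [slope, intercept].
beta_hat, alpha_hat = np.polyfit(market_excess, security_excess, deg=1)
assert abs(beta_hat - true_beta) < 0.05    # slope ~ beta
assert abs(alpha_hat - true_alpha) < 0.005  # intercept ~ alpha
```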
**Uncertainty principle**

In quantum mechanics, the uncertainty principle (also known as Heisenberg's uncertainty principle) is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position, x, and momentum, p. Such paired variables are known as complementary variables or canonically conjugate variables.

Introduced first in 1927 by the German physicist Werner Heisenberg, the uncertainty principle states that the more precisely the position of some particle is determined, the less precisely its momentum can be predicted from initial conditions, and vice versa. In the published 1927 paper, Heisenberg originally concluded that the uncertainty principle was ΔpΔq ≈ h, using the full Planck constant. The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928: σx σp ≥ ħ/2, where ħ is the reduced Planck constant, h/2π.

Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system. Heisenberg utilized such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.
Indeed, the uncertainty principle has its roots in how we apply calculus to write the basic equations of mechanics. It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer. Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.

Introduction: It is vital to illustrate how the principle applies to relatively intelligible physical situations, since it is indiscernible on the macroscopic scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time.
A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: a pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B; in that case a measurement of B does not have a unique associated value, as the system is not in an eigenstate of that observable.

Visualization: The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension. The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized, so the possible momentum components the particle could have are more widespread.
Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform.

Wave mechanics interpretation: According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond, is subject to the uncertainty principle. The time-independent wave function of a single-mode plane wave of wavenumber k0 or momentum p0 is ψ(x) ∝ e^{ik0x} = e^{ip0x/ħ}. The Born rule states that this should be interpreted as a probability amplitude, in the sense that the probability of finding the particle between a and b is ∫ab |ψ(x)|² dx. In the case of the single-mode plane wave, |ψ(x)|² is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. On the other hand, consider a wave function that is a sum of many waves, which we may write as ψ(x) ∝ Σn An e^{ipn x/ħ}, where An represents the relative contribution of the mode pn to the overall total. The figures to the right show how, with the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes, with φ(p) representing the amplitude of these modes; φ(p) is called the wave function in momentum space. In mathematical terms, we say that φ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables.
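The Fourier tradeoff can be checked numerically (a sketch with ħ = 1 and an arbitrarily chosen width): a Gaussian position wavefunction and its discrete Fourier transform have standard deviations whose product sits at the Kennard bound 1/2.

```python
import numpy as np

# Numerical sketch (hbar = 1): for a Gaussian wave packet, the standard
# deviations of the position and momentum distributions saturate the
# Kennard bound sigma_x * sigma_p = 1/2.
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 1.3                                    # chosen width (assumption)
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize in position space

prob_x = np.abs(psi)**2
sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)  # <x> = 0 by symmetry

phi = np.fft.fftshift(np.fft.fft(psi))                     # momentum amplitude
p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))   # momenta (hbar = 1)
dp = p[1] - p[0]
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp                  # normalize in momentum space
sigma_p = np.sqrt(np.sum(p**2 * prob_p) * dp)  # <p> = 0 for a real packet

assert np.isclose(sigma_x * sigma_p, 0.5, rtol=1e-3)
```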
Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta. One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ(x)|² is a probability density function for position, we calculate its standard deviation. The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound.

Matrix mechanics interpretation: In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators Â and B̂, one defines their commutator as [Â, B̂] = ÂB̂ − B̂Â. In the case of position and momentum, the commutator is the canonical commutation relation [x̂, p̂] = iħ. The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |ψ⟩ be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that x̂|ψ⟩ = x0|ψ⟩. Applying the commutator to |ψ⟩ yields [x̂, p̂]|ψ⟩ = iħ Î|ψ⟩, where Î is the identity operator. Suppose, for the sake of proof by contradiction, that |ψ⟩ is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then one could write (x̂p̂ − p̂x̂)|ψ⟩ = (x0p0 − p0x0)|ψ⟩ = 0. On the other hand, the above canonical commutation relation requires that [x̂, p̂]|ψ⟩ = iħ|ψ⟩ ≠ 0. This implies that no quantum state can simultaneously be both a position and a momentum eigenstate. When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable.
For example, if a particle's position is measured, then the state collapses to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations σx and σp. As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.

Heisenberg limit: In quantum metrology, and especially interferometry, the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter) and the energy is given by the number of photons used in an interferometer. Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource. Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten.

Robertson–Schrödinger uncertainty relations: The most common general form of the uncertainty principle is the Robertson uncertainty relation. For an arbitrary Hermitian operator Ô we can associate a standard deviation σO = √(⟨Ô²⟩ − ⟨Ô⟩²), where the brackets ⟨O⟩ indicate an expectation value. For a pair of operators Â and B̂, we may define their commutator as [Â, B̂] = ÂB̂ − B̂Â. In this notation, the Robertson uncertainty relation is given by σA σB ≥ (1/2)|⟨[Â, B̂]⟩|. The Robertson uncertainty relation immediately follows from a slightly stronger inequality, the Schrödinger uncertainty relation, σA² σB² ≥ |(1/2)⟨{Â, B̂}⟩ − ⟨Â⟩⟨B̂⟩|² + |(1/2i)⟨[Â, B̂]⟩|², where we have introduced the anticommutator {Â, B̂} = ÂB̂ + B̂Â.

Mixed states: The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.
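The Robertson relation is easy to verify numerically for a small system. A minimal sketch using the Pauli observables σx and σy on random qubit states (ħ plays no role here, since the Paulis are dimensionless):

```python
import numpy as np

# Numerical sketch: checking the Robertson uncertainty relation
#   sigma_A * sigma_B >= (1/2) |<[A, B]>|
# for the Pauli observables A = sigma_x, B = sigma_y on random qubit states.
rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
comm = sx @ sy - sy @ sx                          # [sigma_x, sigma_y] = 2i sigma_z

def stdev(op, psi):
    """Standard deviation of observable `op` in normalized state `psi`."""
    mean = (psi.conj() @ op @ psi).real
    mean_sq = (psi.conj() @ (op @ op) @ psi).real
    return np.sqrt(max(mean_sq - mean**2, 0.0))

for _ in range(200):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)                    # random pure qubit state
    lhs = stdev(sx, psi) * stdev(sy, psi)
    rhs = 0.5 * abs(psi.conj() @ comm @ psi)
    assert lhs >= rhs - 1e-12                     # Robertson bound holds
```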
The Maccone–Pati uncertainty relations: The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Maccone and Pati give non-trivial bounds on the sum of the variances for two incompatible observables. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., Ref. due to Huang.) For two non-commuting observables A and B, the first stronger uncertainty relation is given by σA² + σB² ≥ ±i⟨Ψ|[A, B]|Ψ⟩ + |⟨Ψ|(A ± iB)|Ψ̄⟩|², where σA² = ⟨Ψ|A²|Ψ⟩ − ⟨Ψ|A|Ψ⟩², σB² = ⟨Ψ|B²|Ψ⟩ − ⟨Ψ|B|Ψ⟩², |Ψ̄⟩ is a normalized vector that is orthogonal to the state of the system |Ψ⟩, and one should choose the sign of ±i⟨Ψ|[A, B]|Ψ⟩ to make this real quantity a positive number. The second stronger uncertainty relation is given by σA² + σB² ≥ (1/2)|⟨Ψ̄A+B|(A + B)|Ψ⟩|², where |Ψ̄A+B⟩ is a state orthogonal to |Ψ⟩. The form of |Ψ̄A+B⟩ implies that the right-hand side of the new uncertainty relation is nonzero unless |Ψ⟩ is an eigenstate of (A + B). One may note that |Ψ⟩ can be an eigenstate of (A + B) without being an eigenstate of either A or B. However, when |Ψ⟩ is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless |Ψ⟩ is an eigenstate of both.

Improving the Robertson–Schrödinger uncertainty relation based on decompositions of the density matrix: The Robertson–Schrödinger uncertainty can be improved by noting that it must hold for all components ϱk in any decomposition of the density matrix given as ϱ = Σk pk ϱk. Here the probabilities satisfy pk ≥ 0 and Σk pk = 1. Then, using the relation √(Σk ak) √(Σk bk) ≥ Σk √(ak bk) for ak, bk ≥ 0, it follows that the bound can be replaced by an average over the decomposition components. The resulting relation very often has a bound larger than that of the original Robertson–Schrödinger uncertainty relation.
Thus, we need to calculate the bound of the Robertson–Schrödinger uncertainty for the mixed components of the quantum state rather than for the quantum state itself, and compute an average of their square roots. The resulting expression is stronger than the Robertson–Schrödinger uncertainty relation: on its right-hand side there is a concave roof over the decompositions of the density matrix. The improved relation above is saturated by all single-qubit quantum states. With similar arguments, one can derive a relation with a convex roof on the right-hand side, where FQ[ϱ, B] denotes the quantum Fisher information and the density matrix is decomposed into pure states as ϱ = Σk pk |Ψk⟩⟨Ψk|. The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four. A simpler inequality follows without a convex roof, which is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have FQ[ϱ, B] ≤ 4σB², while for pure states the equality holds.

Phase space: In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function W(x, p) with star product ★ and a function f, the following is generally true: ⟨f̄ ★ f⟩ ≥ 0. Choosing f = a + bx + cp, we arrive at a positivity condition; since it is true for all a, b, and c, it follows that all the eigenvalues of the associated matrix are non-negative. The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant, which, after algebraic manipulation, yields the Robertson–Schrödinger relation.

Examples: Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below.
For position and linear momentum, the canonical commutation relation [x̂, p̂] = iħ implies the Kennard inequality from above: σx σp ≥ ħ/2. For two orthogonal components of the total angular momentum operator of an object, σJi σJj ≥ (ħ/2)|⟨Jk⟩|, where i, j, k are distinct and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for [Jx, Jy] = iħεxyzJz, a choice Â = Jx, B̂ = Jy, in angular momentum multiplets ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, ⟨Jx² + Jy² + Jz²⟩) from below and thus yields useful constraints such as j(j + 1) ≥ m(m + 1), and hence j ≥ m, among others.

In non-relativistic mechanics, time is privileged as an independent variable. Nevertheless, in 1945, L. I. Mandelstam and I. E. Tamm derived a non-relativistic time–energy uncertainty relation, as follows. For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator B̂, the following formula holds: σE (σB / |d⟨B̂⟩/dt|) ≥ ħ/2, where σE is the standard deviation of the energy operator (Hamiltonian) in the state ψ, and σB stands for the standard deviation of B. Although the second factor on the left-hand side has dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state ψ with respect to the observable B: in other words, this is the time interval (Δt) after which the expectation value ⟨B̂⟩ changes appreciably. An informal, heuristic meaning of the principle is the following: a state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy.
For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width). A similar relation holds for the number of electrons in a superconductor and the phase of its Ginzburg–Landau order parameter.

A counterexample: Suppose we consider a quantum particle on a ring, where the wave function depends on an angular variable θ, which we may take to lie in the interval [0, 2π]. Define "position" and "momentum" operators Â and B̂ by Âψ(θ) = θψ(θ) and B̂ψ(θ) = −iħ dψ/dθ, where we impose periodic boundary conditions on B̂. The definition of Â depends on our choice to have θ range from 0 to 2π. These operators satisfy the usual commutation relations for position and momentum operators, [Â, B̂] = iħ. Now let ψ be any of the eigenstates of B̂, which are given by ψ(θ) = e^{inθ}. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator Â is bounded, since θ ranges over a bounded interval. Thus, in the state ψ, the uncertainty of B is zero and the uncertainty of A is finite, so that σA σB = 0. Although this result appears to violate the Robertson uncertainty principle, the paradox is resolved when we note that ψ is not in the domain of the operator B̂Â, since multiplication by θ disrupts the periodic boundary conditions imposed on B̂.
Thus, the derivation of the Robertson relation, which requires ÂB̂ψ and B̂Âψ to be defined, does not apply. (These also furnish an example of operators satisfying the canonical commutation relations but not the Weyl relations.) For the usual position and momentum operators X̂ and P̂ on the real line, no such counterexamples can occur. As long as σx and σp are defined in the state ψ, the Heisenberg uncertainty principle holds, even if ψ fails to be in the domain of X̂P̂ or of P̂X̂. Examples: Quantum harmonic oscillator stationary states Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators: x̂ = √(ℏ/(2mω)) (a + a†) and p̂ = i√(mωℏ/2) (a† − a). Using the standard rules for creation and annihilation operators on the energy eigenstates, the variances may be computed directly: σx² = (ℏ/(mω))(n + 1/2) and σp² = mℏω(n + 1/2). The product of these standard deviations is then σxσp = ℏ(n + 1/2). In particular, the above Kennard bound is saturated for the ground state n = 0, for which the probability density is just the normal distribution. Examples: Quantum harmonic oscillators with Gaussian initial condition In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0, where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the full time-dependent solution. After many cancelations, the probability densities reduce to normal distributions, where we have used the notation N(μ, σ²) to denote a normal distribution of mean μ and variance σ².
Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as a function of time; from these relations we can conclude the following (the rightmost equality holds only when Ω = ω): σxσp ≥ ℏ/2. Coherent states A coherent state is a right eigenstate of the annihilation operator, which may be represented in terms of Fock states as |α⟩ = e^{−|α|²/2} Σn (αⁿ/√(n!)) |n⟩. In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, σx² = ℏ/(2mω) and σp² = mℏω/2. Therefore, every coherent state saturates the Kennard bound, σxσp = ℏ/2, with position and momentum each contributing an amount √(ℏ/2) in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound, although the individual contributions of position and momentum need not be balanced in general. Examples: Particle in a box Consider a particle in a one-dimensional box of length L. The eigenfunctions in position and momentum space can be written down explicitly, where ωn = π²ℏn²/(8L²m) and we have used the de Broglie relation p = ℏk. The variances of x and p can be calculated explicitly: σx² = (L²/12)(1 − 6/(n²π²)) and σp² = (ℏnπ/L)². The product of the standard deviations is therefore σxσp = (ℏ/2)√(n²π²/3 − 2). For all n = 1, 2, 3, …, the quantity √(n²π²/3 − 2) is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when n = 1, in which case σxσp = (ℏ/2)√(π²/3 − 2) ≈ 0.568ℏ. Constant momentum Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0, where we have introduced a reference scale x0 = √(ℏ/(mω0)), with ω0 > 0 describing the width of the distribution—cf. nondimensionalization.
If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions remain Gaussian. Since ⟨p(t)⟩ = p0 and σp(t) = ℏ/(2x0), this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position grows, such that the uncertainty product can only increase with time. Additional uncertainty relations: Systematic and statistical errors The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation σ. Heisenberg's original version, however, dealt with the systematic error: a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect. Additional uncertainty relations: If we let εA represent the error (i.e., inaccuracy) of a measurement of an observable A and ηB the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Ozawa (encompassing both systematic and statistical errors) holds: εAηB + εAσB + σAηB ≥ (1/2)|⟨[Â, B̂]⟩|. Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as εAηB ≥ (1/2)|⟨[Â, B̂]⟩|. The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years. Additional uncertainty relations: Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors σA and σB. There is increasing experimental evidence that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality.
Using the same formalism, it is also possible to introduce another kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time): the two simultaneous measurements on A and B are necessarily unsharp or weak. Additional uncertainty relations: It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding the Robertson and Ozawa relations we obtain a four-term inequality. Defining one combined term as the inaccuracy in the measured values of the variable A and another as the resulting fluctuation in the conjugate variable B, Fujikawa established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors. Quantum entropic uncertainty principle For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period. Other examples include highly bimodal distributions, or unimodal distributions with divergent variance. Additional uncertainty relations: A solution that overcomes these issues is an uncertainty relation based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic uncertainty.
This conjecture, also studied by Hirschman and proven in 1975 by Beckner and by Iwo Bialynicki-Birula and Jerzy Mycielski, is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b), where f(a) = ∫_{−∞}^{∞} g(b) e^{2πiab} db and g(b) = ∫_{−∞}^{∞} f(a) e^{−2πiab} da, the Shannon information entropies H_a = −∫ |f(a)|² log |f(a)|² da and H_b = −∫ |g(b)|² log |g(b)|² db are subject to the following constraint: H_a + H_b ≥ log(e/2), where the logarithms may be in any base. Additional uncertainty relations: The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(p) have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by measuring x in units of x0 and p in units of p0, where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wave function φ(p), the above constraint can be written for the corresponding entropies as H_x + H_p ≥ log(eh/(2x0p0)), where h is Planck's constant. Additional uncertainty relations: Depending on one's choice of the x0p0 product, the expression may be written in many ways. If x0p0 is chosen to be h, then H_x + H_p ≥ log(e/2). If, instead, x0p0 is chosen to be ħ, then H_x + H_p ≥ log(eπ). If x0 and p0 are chosen to be unity in whatever system of units are being used, then H_x + H_p ≥ log(eh/2), where h is interpreted as a dimensionless number equal to the value of Planck's constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wave functions in more than one spatial dimension. The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle.
From the inverse logarithmic Sobolev inequalities (equivalently, from the fact that normal distributions maximize the entropy among all distributions with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations. In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall that the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance. Additional uncertainty relations: A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is then the integral of the position probability density over that interval. To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as a sum over bins. Under this definition, the entropic uncertainty relation becomes a bound involving δx δp/h; here we note that δx δp/h is a typical infinitesimal phase-space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research.
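To illustrate, one can discretize the position and momentum densities of a Gaussian state and check numerically that the continuous entropic bound (in the ħ convention, H_x + H_p ≥ log(eπ), here in natural logarithms with ħ = 1) is saturated, in line with the maximum-entropy remark above; a sketch with NumPy:

```python
import numpy as np

# Discretize the position and momentum densities of a Gaussian state
# (hbar = 1) and check H_x + H_p >= log(e*pi), with saturation for the
# Gaussian. sigma is an arbitrary position width; sigma_p = 1/(2*sigma)
# is the matching momentum width of a minimum-uncertainty state.
sigma = 1.3
sigma_p = 1.0 / (2.0 * sigma)

grid = np.linspace(-30.0, 30.0, 2**14)
d = grid[1] - grid[0]
rho_x = np.exp(-grid**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
rho_p = np.exp(-grid**2 / (2 * sigma_p**2)) / np.sqrt(2 * np.pi * sigma_p**2)

def shannon_entropy(rho, step):
    """Differential Shannon entropy (nats) by Riemann sum."""
    mask = rho > 0
    return -np.sum(rho[mask] * np.log(rho[mask])) * step

total = shannon_entropy(rho_x, d) + shannon_entropy(rho_p, d)
bound = 1.0 + np.log(np.pi)        # log(e*pi) in nats
assert abs(total - bound) < 1e-4   # Gaussian saturates the entropic bound
```

The result is independent of the chosen sigma, as the analytic entropies ½log(2πeσ²) + ½log(2πeσp²) = log(πe) show.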
Uncertainty relation with three angular momentum components: For a particle of spin j the following uncertainty relation holds: σ²(Jx) + σ²(Jy) + σ²(Jz) ≥ ℏ²j, where Jl are angular momentum components. The relation can be derived from ⟨Jx² + Jy² + Jz²⟩ = ℏ²j(j + 1) and ⟨Jx⟩² + ⟨Jy⟩² + ⟨Jz⟩² ≤ ℏ²j². The relation can be strengthened in terms of FQ[ϱ, Jz], the quantum Fisher information. Harmonic analysis: In the context of harmonic analysis, a branch of mathematics, the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds: (∫ x²|f(x)|² dx)(∫ ξ²|f̂(ξ)|² dξ) ≥ ‖f‖₂⁴/(16π²). Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function f and its Fourier transform f̂. Signal processing In the context of signal processing, and in particular time–frequency analysis, uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded support)—see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies σ_{energy,t} · σ_{energy,f} ≥ 1/(4π), where σ_{energy,t} and σ_{energy,f} are the standard deviations of the time and frequency energy or power (i.e. squared-magnitude) representations respectively. The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet). [For the un-squared Gaussian (i.e. signal amplitude) and its un-squared Fourier transform magnitude, σtσf = 1/(2π); squaring reduces each σ by a factor of √2.] Another common measure is the product of the time and frequency full widths at half maximum (of the power/energy), which for the Gaussian equals 2 ln 2/π ≈ 0.44 (see bandwidth-limited pulse). Harmonic analysis: Stated alternatively, "One cannot simultaneously sharply localize a signal (function f) in both the time domain and frequency domain (f̂, its Fourier transform)".
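The Gabor limit for the energy densities, σ_{energy,t}·σ_{energy,f} ≥ 1/(4π), can be verified numerically for a Gaussian pulse, which saturates it; a sketch using NumPy's FFT (frequency in hertz, consistent with the e^{−2πift} convention that np.fft uses):

```python
import numpy as np

# Gaussian pulse sampled on a long, fine time grid.
N = 2**16
T = 200.0
t = (np.arange(N) - N // 2) * (T / N)
s = np.exp(-t**2 / 2.0)          # amplitude Gaussian with sigma_t = 1

def weighted_std(axis, weights):
    """Standard deviation of `axis` under the (unnormalized) `weights`."""
    w = weights / weights.sum()
    mu = np.sum(axis * w)
    return np.sqrt(np.sum((axis - mu) ** 2 * w))

# Standard deviations of the ENERGY (squared-magnitude) densities.
sigma_t = weighted_std(t, np.abs(s) ** 2)

S = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(s)))
f = np.fft.fftshift(np.fft.fftfreq(N, d=T / N))
sigma_f = weighted_std(f, np.abs(S) ** 2)

product = sigma_t * sigma_f
assert abs(product - 1.0 / (4 * np.pi)) < 1e-5   # Gaussian saturates the bound
```

Replacing the Gaussian by, say, a rectangular pulse gives a strictly larger product, as the limit requires.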
When applied to filters, the result implies that one cannot achieve high temporal resolution and frequency resolution at the same time; a concrete example is the resolution issue of the short-time Fourier transform: if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off. Harmonic analysis: Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other. Harmonic analysis: As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier transform. Discrete Fourier transform Let x := (x0, x1, …, xN−1) be a sequence of N complex numbers and X := (X0, X1, …, XN−1) its discrete Fourier transform. Harmonic analysis: Denote by ‖x‖₀ the number of non-zero elements in the time sequence x0, x1, …, xN−1 and by ‖X‖₀ the number of non-zero elements in the frequency sequence X0, X1, …, XN−1. Then ‖x‖₀ · ‖X‖₀ ≥ N. This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa). Harmonic analysis: More generally, if T and W are subsets of the integers modulo N, let LT, RW : ℓ²(Z/NZ) → ℓ²(Z/NZ) denote the time-limiting and band-limiting operators, respectively.
Then ‖LT RW‖ can be bounded in terms of the sizes of T and W, where the norm is the operator norm of operators on the Hilbert space ℓ²(Z/NZ) of functions on the integers modulo N. This inequality has implications for signal reconstruction. When N is a prime number, a stronger inequality holds: ‖x‖₀ + ‖X‖₀ ≥ N + 1. Discovered by Terence Tao, this inequality is also sharp. Harmonic analysis: Benedicks's theorem The Amrein–Berthier and Benedicks's theorem intuitively says that the set of points where f is non-zero and the set of points where f̂ is non-zero cannot both be small. Specifically, it is impossible for a function f in L²(R) and its Fourier transform f̂ to both be supported on sets of finite Lebesgue measure. A more quantitative version bounds ‖f‖₂ by the mass of f outside S and of f̂ outside Σ, up to a constant factor Ce^{C|S||Σ|}. One expects that the factor Ce^{C|S||Σ|} may be replaced by Ce^{C(|S||Σ|)^{1/d}}, which is only known if either S or Σ is convex. Harmonic analysis: Hardy's uncertainty principle The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for f and f̂ to both be "very rapidly decreasing". Specifically, if f in L²(R) is such that |f(x)| ≤ C(1 + |x|)^N e^{−aπx²} and |f̂(ξ)| ≤ C(1 + |ξ|)^N e^{−bπξ²} (C > 0, N an integer), then, if ab > 1, f = 0, while if ab = 1, then there is a polynomial P of degree ≤ N such that f(x) = P(x) e^{−aπx²}. This was later improved as follows: if f ∈ L²(R^d) satisfies a corresponding joint decay condition, then f(x) = P(x) e^{−π⟨Ax, x⟩}, where P is a polynomial of degree (N − d)/2 and A is a real d × d positive definite matrix. Harmonic analysis: This result was stated in Beurling's complete works without proof and proved in Hörmander (the case d = 1, N = 0) and by Bonami, Demange, and Jaming for the general case. Note that the Hörmander–Beurling version implies the case ab > 1 in Hardy's theorem, while the version by Bonami–Demange–Jaming covers the full strength of Hardy's theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref. A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in ref.
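The discrete inequality ‖x‖₀ · ‖X‖₀ ≥ N and its equality case for Dirac combs are easy to check numerically; a sketch with NumPy, using N = 12 and a comb supported on the subgroup {0, 4, 8}:

```python
import numpy as np

# Discrete uncertainty principle ||x||_0 * ||X||_0 >= N, with equality
# for a Dirac comb on a subgroup of Z/NZ.
N = 12
x = np.zeros(N)
x[::4] = 1.0                  # Dirac comb supported on {0, 4, 8}
X = np.fft.fft(x)

nnz_x = np.count_nonzero(~np.isclose(x, 0))
nnz_X = np.count_nonzero(~np.isclose(X, 0))

assert nnz_x * nnz_X >= N     # the general inequality
assert nnz_x * nnz_X == N     # equality for the Dirac comb

# A generic dense vector also satisfies the inequality (trivially so,
# since all N of its entries are non-zero).
rng = np.random.default_rng(0)
y = rng.standard_normal(N)
Y = np.fft.fft(y)
assert np.count_nonzero(~np.isclose(y, 0)) * np.count_nonzero(~np.isclose(Y, 0)) >= N
```

Here X is itself a Dirac comb on the complementary subgroup {0, 3, 6, 9}, so ‖x‖₀ = 3 and ‖X‖₀ = 4 give exactly 3 · 4 = 12 = N, as the sharpness statement above describes.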
History: Werner Heisenberg formulated the uncertainty principle at Niels Bohr's institute in Copenhagen, while working on the mathematical foundations of quantum mechanics. History: In 1925, following pioneering work with Hendrik Kramers, Heisenberg developed matrix mechanics, which replaced the ad hoc old quantum theory with modern quantum mechanics. The central premise was that the classical concept of motion does not fit at the quantum level, as electrons in an atom do not travel on sharply defined orbits. Rather, their motion is smeared out in a strange way: the Fourier transform of its time dependence only involves those frequencies that could be observed in the quantum jumps of their radiation. History: Heisenberg's paper did not admit any unobservable quantities like the exact position of the electron in an orbit at any time; he only allowed the theory to talk about the Fourier components of the motion. Since the Fourier components were not defined at the classical frequencies, they could not be used to construct an exact trajectory, so that the formalism could not answer certain overly precise questions about where the electron was or how fast it was going, without losing some information about one or the other variable.According to one account: "Heisenberg's paper marked a radical departure from previous attempts to solve atomic problems by making use of observable quantities only. 'My entire meagre efforts go toward killing off and suitably replacing the concept of the orbital paths that one cannot observe,' he wrote in a letter dated 9 July 1925."It was actually Einstein who first raised the problem to Heisenberg in 1926 upon their first real discussion. Einstein had invited Heisenberg to his home for a discussion of matrix mechanics upon its introduction. As Heisenberg describes the discussion: "On the way home, he questioned me about my background, my studies with Sommerfeld. 
But, on arrival, he at once began with a central question about the philosophical foundation of the new quantum mechanics. He pointed out to me that in my mathematical description the notion of 'electron path' did not occur at all, but that in a cloud-chamber the track of the electron can of course be observed directly. It seemed to him absurd to claim that there was indeed an electron path in the cloud-chamber, but none in the interior of the atom." "In this situation, of course, we [Heisenberg and Bohr] had many discussions, difficult discussions, because we all felt that the mathematical scheme of quantum or wave mechanics was already final. It could not be changed, and we would have to do all our calculations from this scheme. On the other hand, nobody knew how to represent in this scheme such a simple case as the path of an electron through a cloud chamber." In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. This implication provided a clear physical interpretation for the non-commutativity, and it laid the foundation for what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg showed that the commutation relation implies an uncertainty, or in Bohr's language a complementarity. Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote: "It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant."
History: In his celebrated 1927 paper, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture he refined his principle. Kennard in 1927 first proved the modern inequality σxσp ≥ ℏ/2, where ħ = h/2π and σx, σp are the standard deviations of position and momentum. Heisenberg only proved relation (A2) for the special case of Gaussian states. History: Terminology and translation Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit" ("indeterminacy") to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit" ("uncertainty"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, the translation "uncertainty" was used, and it became the more commonly used term in the English language thereafter. History: Heisenberg's microscope The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements intended to violate it were always bound to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by utilizing the observer effect of an imaginary microscope as a measuring device. He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it. Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately.
But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely. History: Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum, and hence the new momentum of the electron is poorly resolved. If a small aperture is used, the two resolutions trade off the other way around. The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to Planck's constant. Heisenberg did not care to formulate the uncertainty principle as an exact limit, and preferred to use it instead as a heuristic, quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable. Critical reactions: The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, seen as twin targets by detractors who believed in an underlying determinism and realism. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.
Critical reactions: Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years. Critical reactions: The ideal of the detached observer Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German): "Like the moon has a definite position" Einstein said to me last winter, "whether or not we look at the moon, the same must also hold for the atomic objects, as there is no sharp distinction possible between these and macroscopic objects. Observation cannot create an element of reality like a position, there must be something contained in the complete description of physical reality which corresponds to the possibility of observing a position, already before the observation has been actually made." I hope, that I quoted Einstein correctly; it is always difficult to quote somebody out of memory with whom one does not agree. It is precisely this kind of postulate which I call the ideal of the detached observer. Critical reactions: Einstein's slit The first of Einstein's thought experiments challenging the uncertainty principle went as follows: Consider a particle passing through a slit of width d. The slit introduces an uncertainty in momentum of approximately h/d because the particle passes through the wall. But let us determine the momentum of the particle by measuring the recoil of the wall. In doing so, we find the momentum of the particle to arbitrary accuracy by conservation of momentum. Critical reactions: Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy Δp, the momentum of the wall must be known to this accuracy before the particle passes through. 
This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to h/Δp, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement. Critical reactions: A similar analysis with particles diffracting through multiple slits is given by Richard Feynman. Critical reactions: Einstein's box Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to Planck's constant." Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock," because of Einstein's own theory of gravity's effect on time. 
Critical reactions: "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape." EPR paradox for entangled particles Bohr was compelled to modify his understanding of the uncertainty principle after another thought experiment by Einstein. In 1935, Einstein, Podolsky and Rosen published an analysis of widely separated entangled particles (EPR paradox). Measuring one particle, Einstein realized, would alter the probability distribution of the other, yet here the other particle could not possibly be disturbed. This example led Bohr to revise his understanding of the principle, concluding that the uncertainty was not caused by a direct interaction. But Einstein came to much more far-reaching conclusions from the same thought experiment. He believed the "natural basic assumption" that a complete description of reality would have to predict the results of experiments from "locally changing deterministic quantities" and therefore would have to include more information than the maximum possible allowed by the uncertainty principle. Critical reactions: In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out Einstein's basic assumption that led him to the suggestion of his hidden variables. These hidden variables may be "hidden" because of an illusion that occurs during observations of objects that are too large or too small. This illusion can be likened to rotating fan blades that seem to pop in and out of existence at different locations and sometimes seem to be in the same place at the same time when observed. This same illusion manifests itself in the observation of subatomic particles.
Both the fan blades and the subatomic particles are moving so fast that the illusion is seen by the observer. Therefore, it is possible that there would be predictability of the subatomic particles' behavior and characteristics to a recording device capable of very high-speed tracking. Ironically, this fact is one of the best pieces of evidence supporting Karl Popper's philosophy of invalidating a theory by falsification experiments. That is to say, here Einstein's "basic assumption" became falsified by experiments based on Bell's inequalities. For the objections of Karl Popper to the Heisenberg inequality itself, see below. Critical reactions: While it is possible to assume that quantum mechanical predictions are due to nonlocal hidden variables, and in fact David Bohm invented such a formulation, this resolution is not satisfactory to the vast majority of physicists. The question of whether a random outcome is predetermined by a nonlocal theory can be philosophical, and it can be potentially intractable. If the hidden variables were not constrained, they could just be a list of random digits that are used to produce the measurement outcomes. To make it sensible, the assumption of nonlocal hidden variables is sometimes augmented by a second assumption—that the size of the observable universe puts a limit on the computations that these variables can do. A nonlocal theory of this sort predicts that a quantum computer would encounter fundamental obstacles when attempting to factor numbers of approximately 10,000 digits or more; a potentially achievable task in quantum mechanics. Critical reactions: Popper's criticism Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist. He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations".
In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. This directly contrasts with the Copenhagen interpretation of quantum mechanics, which is non-deterministic but lacks local hidden variables. Critical reactions: In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften, and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum Theory and the Schism in Physics, writing: [Heisenberg's] formulae are, beyond all doubt, derivable statistical formulae of the quantum theory. But they have been habitually misinterpreted by those quantum theorists who said that these formulae can be interpreted as determining some upper limit to the precision of our measurements. [original emphasis] Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Weizsäcker, Heisenberg, and Einstein; this experiment may have influenced the formulation of the EPR experiment. Critical reactions: Many-worlds uncertainty The many-worlds interpretation originally outlined by Hugh Everett III in 1957 is partly meant to reconcile the differences between Einstein's and Bohr's views by replacing Bohr's wave function collapse with an ensemble of deterministic and independent universes whose distribution is governed by wave functions and the Schrödinger equation. Thus, uncertainty in the many-worlds interpretation follows from each observer within any universe having no knowledge of what goes on in the other universes.
Critical reactions: Free will Some scientists, including Arthur Compton and Martin Heisenberg, have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature. Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells. Critical reactions: Thermodynamics There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics. See Gibbs paradox.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zaid crop** Zaid crop: Zaid crops are summer season crops. They grow for a short period between the kharif and rabi crops, mainly from March to June, during a period called the zaid crop season. They require warm, dry weather during their major growth period and longer day lengths for flowering; they need some summer months and part of the rainy season. These crops also mature early. Zaid crop: In between the rabi and the kharif seasons, there is a short season during the summer months known as the zaid season. Some of the crops produced during the zaid season are watermelon, muskmelon, cucumber, vegetables and fodder crops. Sugarcane takes almost a year to grow. Zaid crops: Watermelon Muskmelon Cucumber Bitter gourd Fodder Pumpkin Guar (Cluster Beans) Strawberry Arhar (Pigeon pea) Masur (Lentil) Sugarcane
**Meatball soup** Meatball soup: Meatball soup is a soup made using meatballs, simmered with various other ingredients. The classic meatball soup consists of a clear broth, often with whole meatballs or pieces of them along with vegetables; common additions are pasta (e.g., noodles, although almost any form can be used), dumplings, or grains such as rice and barley. Various types of meat are used, such as beef, lamb, pork and poultry. Varieties: Bakso Ciorbă de perișoare Sulu köfte
**Rotation map** Rotation map: In mathematics, a rotation map is a function that represents an undirected edge-labeled graph, where each vertex enumerates its outgoing neighbors. Rotation maps were first introduced by Reingold, Vadhan and Wigderson (“Entropy waves, the zig-zag graph product, and new constant-degree expanders”, 2002) in order to conveniently define the zig-zag product and prove its properties. Given a vertex v and an edge label i, the rotation map returns the i th neighbor of v and the edge label that would lead back to v. Definition: For a D-regular graph G, the rotation map RotG:[N]×[D]→[N]×[D] is defined as follows: RotG(v,i)=(w,j) if the i th edge leaving v leads to w, and the j th edge leaving w leads to v. Basic properties: From the definition we see that RotG is a permutation, and moreover RotG∘RotG is the identity map ( RotG is an involution). Special cases and properties: A rotation map is consistently labeled if all the edges leaving each vertex are labeled in such a way that at each vertex, the labels of the incoming edges are all distinct. Every regular graph has some consistent labeling. A consistent rotation map can be used to encode a coined discrete-time quantum walk on a (regular) graph. A rotation map is π-consistent if for every vertex v, RotG(v,i)=(v[i],π(i)). From the definition, a π-consistent rotation map is consistently labeled.
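The definition can be sketched in code. Below is a minimal illustration, assuming a small 3-regular graph (the complete graph K4) with the edge label i at a vertex meaning "the i-th entry of that vertex's adjacency list"; the graph and helper names are illustrative assumptions, not taken from the article.

```python
# Sketch of a rotation map for a D-regular graph (illustrative assumption:
# the complete graph K4, with edge label i at v meaning "the i-th entry
# of v's adjacency list").

def make_rotation_map(adj):
    """Return Rot(v, i) = (w, j): the i-th edge leaving v leads to w,
    and the j-th edge leaving w leads back to v."""
    def rot(v, i):
        w = adj[v][i]
        j = adj[w].index(v)  # label of the edge from w back to v
        return (w, j)
    return rot

# K4: every vertex is adjacent to the other three (3-regular).
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
rot = make_rotation_map(adj)

print(rot(0, 2))  # edge 2 leaving vertex 0 goes to 3; edge 0 of 3 leads back

# Basic property from the article: Rot∘Rot is the identity (an involution).
assert all(rot(*rot(v, i)) == (v, i) for v in adj for i in range(3))
```

Because each edge {v, w} appears exactly once in each endpoint's list, applying the map twice always returns the original (vertex, label) pair, which is the involution property stated above.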
**Apolipoprotein L** Apolipoprotein L: Apolipoprotein L (Apo L) is found in high-density lipoprotein complexes, which play a central role in cholesterol transport. The cholesterol content of membranes is important in cellular processes such as modulating gene transcription and signal transduction, both in the adult brain and during neurodevelopment. There are six apo L genes located in close proximity to each other on chromosome 22q12 in humans. 22q12 is a confirmed high-susceptibility locus for schizophrenia and close to the region associated with velocardiofacial syndrome, which includes symptoms of schizophrenia. Human proteins containing this domain: APOL1; APOL2; APOL3; APOL4; APOL5; APOL6; APOLD1
**Conceptual photography** Conceptual photography: Conceptual photography is a type of photography that illustrates an idea. There have been illustrative photographs made since the medium's invention, for example in the earliest staged photographs, such as Hippolyte Bayard's Self Portrait as a Drowned Man (1840). However, the term conceptual photography derives from conceptual art, a movement of the late 1960s. Today the term is used to describe either a methodology or a genre. Conceptual photography as a methodology: As a methodology, conceptual photography is a type of photography that is staged to represent an idea. The 'concept' is both preconceived and, if successful, understandable in the completed image. It is most often seen in advertising and illustration, where the picture may reiterate a headline or catchphrase that accompanies it. Photographic advertising and illustration commonly derive from stock photography, which is often produced in response to current trends in image usage as determined by the research of picture agencies like Getty Images or Corbis. These photographs are therefore produced to visualize a predetermined concept. The advent of picture-editing software like Adobe Photoshop has allowed the greater manipulation of images to seamlessly combine elements that could previously be combined only in graphic illustration. Conceptual photography as a genre: The term 'conceptual photography' used to describe a genre may refer to the use of photography in conceptual art or in contemporary art photography. In either case, the term is not widely used or consistently applied. Conceptual photography and conceptual art: Conceptual art of the late 1960s and early 1970s often involved photography to document performances, ephemeral sculpture or actions. The artists did not describe themselves as photographers; for example, Edward Ruscha said "Photography's just a playground for me. I'm not a photographer at all."
These artists are sometimes referred to as conceptual photographers, but those who used photography extensively, such as John Hilliard, John Baldessari and Pedram Mousavi, are more often described as photoconceptualists or "artists using photography". Conceptual photography and fine-art photography: Since the 1970s, artists using photography like Cindy Sherman and latterly Thomas Ruff and Thomas Demand have been described as conceptual. Although their work does not generally resemble the lo-fi aesthetic of 1960s conceptual art, they may use certain methods in common, such as documenting performance (Sherman), typological or serial imagery (Ruff) or the restaging of events (Demand). In fact, the indebtedness to these and other approaches from conceptual art is so widespread in contemporary fine-art photography that almost any work might be described as conceptual. The term has perhaps been used most specifically in a negative sense, to distinguish some contemporary art photography from documentary photography or photojournalism. This distinction has been made in the coverage of the Deutsche Börse Photography Prize. Conceptual photography is often used interchangeably with fine-art photography, and there has been some dispute about whether there is a difference between the two. However, the central school of thought is that conceptual photography is a type of fine-art photography. Fine-art photography is inclusive of conceptual photography: while all conceptual photography is fine art, not all fine art is conceptual.
**HIPK1** HIPK1: Homeodomain-interacting protein kinase 1 is an enzyme that, in humans, is encoded by the HIPK1 gene. Function: The protein encoded by this gene belongs to the Ser/Thr family of protein kinases and the HIPK subfamily. It phosphorylates homeodomain transcription factors and may also function as a co-repressor for homeodomain transcription factors. Alternative splicing results in four transcript variants encoding four distinct isoforms. Interactions: HIPK1 has been shown to interact with p53.
**Hydroperoxide dehydratase** Hydroperoxide dehydratase: The enzyme hydroperoxide dehydratase (EC 4.2.1.92) catalyzes the chemical reaction (9Z,11E,14Z)-(13S)-hydroperoxyoctadeca-9,11,14-trienoate ⇌ (9Z)-(13S)-12,13-epoxyoctadeca-9,11-dienoate + H2O. This enzyme belongs to the family of lyases, specifically the hydro-lyases, which cleave carbon-oxygen bonds. The systematic name of this enzyme class is (9Z,11E,14Z)-(13S)-hydroperoxyoctadeca-9,11,14-trienoate 12,13-hydro-lyase [(9Z)-(13S)-12,13-epoxyoctadeca-9,11-dienoate-forming]. Other names in common use include hydroperoxide isomerase, linoleate hydroperoxide isomerase, linoleic acid hydroperoxide isomerase, HPI, and (9Z,11E,14Z)-(13S)-hydroperoxyoctadeca-9,11,14-trienoate 12,13-hydro-lyase. Structural studies: As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1U5U.
**Use of drugs in warfare** Use of drugs in warfare: Use of mind-altering substances in warfare has included drugs used for relaxation and stimulation. Historically, drug use was often sanctioned and encouraged by militaries through the inclusion of alcohol and tobacco in troop rations. Stimulants like cocaine and amphetamines were widely used in both World Wars to increase alertness and suppress appetite. Drug use can negatively affect combat readiness and reduce the performance of troops. Drug use also poses additional expenses to the health care systems of militaries. Drugs: Alcohol Alcohol has a long association with military use, and has been called "liquid courage" for its role in preparing troops for battle. It has also been used to anaesthetize injured soldiers, celebrate military victories, and cope with the emotions of defeat. In the Russo-Japanese War, alcohol was implicated as a factor contributing to the Russian Empire's loss; Russian commanders, sailors, and soldiers were said to be drunk more than sober. Countries often enabled alcohol use by their troops by providing alcohol in their rations. The British Royal Navy and other Commonwealth navies once maintained a rum ration for sailors until Britain retired it in 1970. The Royal Canadian Navy followed suit in 1972, as did the Royal New Zealand Navy in 1990. The United States Navy similarly provided a distilled spirits ration between 1794 and 1862, when Secretary of the Navy Gideon Welles removed most non-medicinal alcohol from U.S. naval vessels, with all alcohol consumption aboard ship banned in 1914. There is a strong association between military service and alcohol use disorder. In 1862, British soldiers in India responded to the threat of problematic alcohol use by establishing the Soldiers' Total Abstinence Association, which became the Army Temperance Association in 1888. Similar organizations formed in other branches of the military and for British troops stationed in other colonies.
Members of these abstinence associations were encouraged to sign pledges to avoid alcohol entirely. Medals were awarded to individuals who remained abstinent. Studies show that Australian Defence Force veterans of the Gulf War had a prevalence of alcohol use disorder higher than that of any other psychological disorder; British Armed Forces veterans of modern conflicts in Iraq and Afghanistan had higher rates of alcohol use disorder than servicemembers who were not deployed. Reports from the Russian invasion of Ukraine in 2022 and since have suggested that Russian soldiers are drinking significant amounts of alcohol (as well as consuming harder drugs), which increases their losses. Some reports suggest that on occasion, alcohol and drugs have been provided to lower-quality troops by their commanders in order to facilitate their use as expendable cannon fodder. Drugs: Amphetamines Amphetamines were given to troops to increase alertness. They had the added benefits of reducing appetite and fatigue. Nazi Germany, in particular, embraced amphetamines during World War II. From April to July 1940, German service members on the Western Front received more than 35 million methamphetamine pills. German troops would go as many as three days without sleep during the invasion of France. In contrast, Britain distributed 72 million amphetamine tablets during the entire war. A 2023 report by a British military think tank cited evidence that the Russian military had been giving amphetamines, most likely in liquid form, to its soldiers during the Russian invasion of Ukraine. Drugs: Anabolic steroids Although the usage of anabolic steroids is illegal in the United States military, their usage among service members has increased in the 21st century, particularly among the special forces. Because anabolic steroids increase muscle mass, boost physical endurance, and speed up recovery from injury, they are used by service members to meet the physical demands of their duties.
An anonymous survey of US Army Rangers in 2007 found that a quarter of those surveyed admitted to using anabolic steroids and other performance-enhancing drugs. The abuse of anabolic steroids can cause heart attacks, strokes, tumors in the liver, renal failure, and psychiatric episodes. Following the death of a sailor during SEAL training in 2022, the navy opened an investigation into the program and discovered that sailors were using anabolic steroids to pass the course and that their usage was tacitly endorsed by instructors. As a result of the investigation, the navy recommended more stringent drug testing, and three commanding officers were pulled from their jobs at the US Navy Special Warfare Center. Drugs: Caffeine Military use has contributed to the rise of caffeine as the world's most popular drug. During the American Civil War, each Union soldier received a coffee ration of 36 lb (16 kg) annually. World War I saw the dramatic rise of instant coffee: by the end of the conflict, daily production was 42,500 lb (19,300 kg), a 3,000% increase from pre-war production. Drugs: Cannabis In the 1910s, U.S. Army soldiers stationed in the Panama Canal Zone and in the Pancho Villa Expedition began using cannabis. Although the drug became illegal to use on bases, the U.S. Army Medical Corps prepared the 1933 report Mariajuana Smoking in Panama for the Panama Canal Department, recommending no further restrictions. Between 1948 and 1975, the U.S. Army Chemical Corps also tested chemical agents, including cannabinoids considered as "nonlethal incapacitating agents", on volunteering soldiers in the Edgewood Arsenal human experiments. During the Vietnam War, American soldiers frequently used cannabis. A 1971 U.S. Department of Defense report claimed that over half of U.S. Armed Forces personnel had used the drug.
Beginning in 1968, this led to a political scandal in America that led the Nixon administration to more tightly restrict drug use in the military as part of the War on Drugs, requiring all returning soldiers to pass a clinical urine test in Operation Golden Flow. After the passage of the Cannabis Act legalizing the recreational use of the drug in Canada in 2018, its use became legal for most active-duty Canadian Armed Forces personnel, with restrictions against its use eight hours before duty, 24 hours before handling a loaded weapon, and 28 days before entering an aircraft or submarine. Drugs: Cocaine World War I saw the greatest use of cocaine amongst militaries. It was used for medical purposes and as a performance enhancer. At the time, it was not a controlled substance and was readily available to troops. The British Army distributed cocaine-containing pills under Tabloid's brand name "Forced March", which were advertised to suppress appetite and increase endurance. In response to a moral panic about the effects of cocaine on society, the British Army Council passed an order in 1916 that prohibited the unauthorized sale of psychoactive drugs like cocaine and opiates to service members. The German Army, for its part, produced during the closing days of World War II a combination of 5 mg of cocaine, 3 mg of methamphetamine and 5 mg of oxycodone in a compound named D-IX; the compound was reportedly tested on prisoners at the Sachsenhausen concentration camp, and it was found that an individual who had consumed the compound could march 90 kilometers per day without rest while carrying 20 kilograms of equipment. The doctors and military authorities testing the compound were enthusiastic about the results, but the war ended before the compound could be mass-produced and distributed.
Drugs: Hallucinogens The 16th-century Spanish Franciscan scholar Bernardino de Sahagún wrote that the Chichimeca people of Mexico consumed the root of the peyote, a cactus, to stimulate themselves for battle. In his 1887 Historia del Nayarit (English: History of Nayarit), José Ortega also wrote that it was a favorite stimulant in warfare. It has been speculated that berserkers, who were Old Norse warriors, used the hallucinogenic mushroom Amanita muscaria to enter a trance-like state before battle. Karsten Fatur thought it plausible that, instead of A. muscaria, berserkers consumed the plant Hyoscyamus niger (known as henbane or stinking nightshade). While both A. muscaria and H. niger can result in delirious behavior, twitching, increased strength, and redness of the face, H. niger is additionally known to have pain-numbing properties. Drugs: Khat Khat may cause a feeling of invincibility and an increased tendency toward violent behavior. As a result, its use is popular among combatants in countries where it is traditional, such as Somalia and Yemen, and it is considered a contributing factor to violent conflicts in these countries. Drugs: Opiates During the American Civil War, opiates were the most effective painkillers available to military surgeons. They were also used to treat diarrhea, muscle spasms of amputees, gangrene, dysentery, and inflammation from gunshot wounds, and to sedate agitated troops. The Union Army requisitioned 5.3–10 million opium pills throughout the war, and a further 2.8 million ounces of opiate preparations (such as laudanum). Many veterans of the war had opiate addictions. Opiate addiction became known as "soldier's disease" and "army disease", though the precise effect of the American Civil War on the overall prevalence of opiate addiction is unknown. As a result of World War I, hundreds of thousands of soldiers developed severe opiate addictions, as morphine was commonly used to treat injuries.
Drugs: Tobacco Tobacco has been viewed as essential to maintaining the morale of troops. Starting with the Thirty Years' War in 17th century Europe, major military encounters caused a surge in tobacco usage, mostly stemming from soldiers' use. During World War I, US Army General John J. Pershing noted, "You ask me what we need to win this war. I answer tobacco as much as bullets. Tobacco is as indispensable as the daily ration; we must have thousands of tons without delay." This sentiment was echoed by US Army General George Goethals, who noted tobacco was as important as food, and US medical officer William Gorgas who said that tobacco promoted contentment and morale, and the benefits outweighed any potential health risks. Such was the tobacco consumption of its troops that the US Government became the single-largest purchaser of cigarettes, including cigarettes in troops' rations and at discounted prices at post exchanges. Health and social impacts: Heavy drinking, tobacco use, and use of illegal drugs are common in the US military. Alcohol consumption in the US Military is higher than any other profession, according to CDC data from 2013–2017. American troops spend more days per year consuming alcohol than those in other professions (130 days), and additionally spend more days binge-drinking than those of other professions (41 days). Substance-use disorders were often attributed to moral failure, with the US Supreme Court ruling as recently as 1988 that the Department of Veterans Affairs did not have to pay benefits to alcoholics, as drinking was a result of "willful misconduct". Substance use can adversely affect combat readiness, with tobacco use negatively impacting troop performance and readiness. 
It can also be costly: in 2006, the cost of tobacco use to the Military Health System was $564 million. The Department of Defense Survey of Health Related Behaviors among Active Duty Military Personnel reported that 47% of active-duty members engaged in binge drinking, with another 20% engaging in heavy drinking in the past 30 days. 11% of respondents engaged in prescription drug misuse. Lastly, 30% reported smoking tobacco, and 10% smoked one or more packs of cigarettes daily. Health and social impacts: After service members come back from deployments, they go through a post-deployment screening for alcohol, drugs, and mental health disorders. They have to repeat the screening a couple of weeks later to make sure they are stable afterwards. No prolonged treatment is given to service members whose screenings come back clear, so many service members' disorders go unnoticed because they are able to hide their issues during the screening and then carry on afterwards. The number of substance use disorders diagnosed in the military is significantly lower than that of any other mental health disorder. This is partly because many of the clinicians providing these screenings are also service members, and they are aware of the stigma and consequences of a SUD or AUD diagnosis in the military. It is also because of the lack of screening and of available clinicians, who usually only catch a SUD or AUD when the veteran is coming in for another mental illness, such as PTSD.
**Tautology (language)** Tautology (language): In literary criticism and rhetoric, a tautology is a statement that repeats an idea, using near-synonymous morphemes, words or phrases, effectively "saying the same thing twice". Tautology and pleonasm are not consistently differentiated in literature. Like pleonasm, tautology is often considered a fault of style when unintentional. Intentional repetition may emphasize a thought or help the listener or reader understand a point. Sometimes logical tautologies like "Boys will be boys" are conflated with language tautologies, but a language tautology is not inherently true, while a logical tautology always is. Etymology: The word was coined in Hellenistic Greek from ταὐτός ('the same') plus λόγος ('word' or 'idea'), and transmitted through 3rd-century Latin tautologia and French tautologie. It first appeared in English in the 16th century. The use of the term logical tautology was introduced in English by Wittgenstein in 1919, perhaps following Auguste Comte's usage in 1835. Examples: "Only time will tell if we stand the test of time", from the Van Halen song "Why Can't This Be Love" "After we change the game it won't remain the same." from the Blackalicious song "Blazing Arrow" "That tautological statement has repeated an idea." "There once was a fellow from Perth / Who was born on the day of his birth. / He got married, they say, / On his wife's wedding day, / And died when he quitted the earth." "...A forget-me-not, to remind me to remember not to forget." from the Benny Hill song "My Garden of Love" Assless chaps. Chaps by definition are separate leg-coverings; a similar garment joined at the seat would instead be called a pair of trousers. "...und wenn sie nicht gestorben sind, dann leben sie noch heute" ("and if they have not died, then they are still alive today"), a traditional German formula to end a fairy tale (like "they lived happily ever after").
"Former alumni" - alumni means those who are former members of an institution, group, school etc. "Wandering planet" - the word planet comes from the Greek word πλανήτης (planḗtēs), which itself means "wanderer". "If you know, you know", a common English phrase. "What's for you won't go by you", a tautological Scottish proverb. Abbreviations: Abbreviations whose last abbreviated word is often redundantly included anyway. ATM machine COVID disease DC Comics DVD disc EDM music GPS system HIV virus ICBM missile ISBN number LCD display PAT test (Photographic Activity Test or Paddington alcohol test) PIN number Please R.S.V.P. RAS syndrome RAT test SARS syndrome SAT test SSD drive UPC code VIN number VIP person Discussion: Intentional repetition of meaning intends to amplify or emphasize a particular, usually significant fact about what is being discussed. For example, a gift is, by definition, free of charge; using the phrase "free gift" might emphasize that there are no hidden conditions or fine print (such as the expectation of money or reciprocation) or that the gift is being given by volition. Discussion: This is related to the rhetorical device of hendiadys, where one concept is expressed through the use of two descriptive words or phrases: for example, using "goblets and gold" to mean wealth, or "this day and age" to refer to the present time. Superficially, these expressions may seem tautological, but they are stylistically sound because the repeated meaning is just a way to emphasize the same idea. Discussion: The use of tautologies, however, is usually unintentional. For example, the phrases "mental telepathy", "planned conspiracies", and "small dwarfs" imply that there are such things as physical telepathy, spontaneous conspiracies, and giant dwarfs, which are oxymorons. Parallelism is not tautology, but rather a particular stylistic device.
Much Old Testament poetry is based on parallelism: the same thing said twice, but in slightly different ways (Fowler describes this as pleonasm). However, modern biblical study emphasizes that there are subtle distinctions and developments between the two lines, such that they are usually not truly the "same thing". Parallelism can be found wherever there is poetry in the Bible: Psalms, the Books of the Prophets, and in other areas as well.
**Peterson's algorithm** Peterson's algorithm: Peterson's algorithm (or Peterson's solution) is a concurrent programming algorithm for mutual exclusion that allows two or more processes to share a single-use resource without conflict, using only shared memory for communication. It was formulated by Gary L. Peterson in 1981. While Peterson's original formulation worked with only two processes, the algorithm can be generalized for more than two. The algorithm: The algorithm uses two variables: flag and turn. A flag[n] value of true indicates that the process n wants to enter the critical section. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section or if P1 has given priority to P0 by setting turn to 0. The algorithm: The algorithm satisfies the three essential criteria to solve the critical-section problem. The while condition works even with preemption.The three criteria are mutual exclusion, progress, and bounded waiting.Since turn can take on one of two values, it can be replaced by a single bit, meaning that the algorithm requires only three bits of memory.: 22 Mutual exclusion P0 and P1 can never be in the critical section at the same time. If P0 is in its critical section, then flag[0] is true. In addition, either flag[1] is false (meaning that P1 has left its critical section), or turn is 0 (meaning that P1 is just now trying to enter the critical section, but graciously waiting), or P1 is at label P1_gate (trying to enter its critical section, after setting flag[1] to true but before setting turn to 0 and busy waiting). So if both processes are in their critical sections, then we conclude that the state must satisfy flag[0] and flag[1] and turn = 0 and turn = 1. No state can satisfy both turn = 0 and turn = 1, so there can be no state where both processes are in their critical sections. The algorithm: (This recounts an argument that is made rigorous in.) 
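The entry and exit protocols described above can be sketched in Python. This is an illustration of the logic only: CPython's global interpreter lock hides the memory-reordering hazards that arise on real hardware, so a production lock would need memory barriers or atomic instructions. The worker/iteration scaffolding and the shortened thread-switch interval are assumptions for the demo, not part of the algorithm.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # shorter GIL slices so the busy-wait hand-off stays fast

# Shared state, as in the article's description.
flag = [False, False]  # flag[n] is True when process n wants the critical section
turn = 0               # which process yields when both want to enter
counter = 0            # shared resource; increments could be lost without the lock

def worker(me, iters):
    global turn, counter
    other = 1 - me
    for _ in range(iters):
        # --- entry protocol ---
        flag[me] = True
        turn = other                        # give priority to the other process
        while flag[other] and turn == other:
            pass                            # busy-wait
        # --- critical section ---
        counter += 1
        # --- exit protocol ---
        flag[me] = False

ITERS = 500
threads = [threading.Thread(target=worker, args=(n, ITERS)) for n in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 2 * ITERS  # no increment was lost: mutual exclusion held
```

Setting turn to the other process before waiting is what gives both progress and bounded waiting: when both threads contend, whichever wrote turn last is the one that waits.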
Progress Progress is defined as the following: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in making the decision as to which process will enter its critical section next. Note that for a process or thread, the remainder sections are parts of the code that are not related to the critical section. This selection cannot be postponed indefinitely. A process cannot immediately re-enter the critical section if the other process has set its flag to say that it would like to enter its critical section. The algorithm: Bounded waiting Bounded waiting, or bounded bypass, means that the number of times a process is bypassed by another process after it has indicated its desire to enter the critical section is bounded by a function of the number of processes in the system.: 11  In Peterson's algorithm, a process will never wait longer than one turn for entrance to the critical section. The algorithm: Filter algorithm: Peterson's algorithm for more than two processes The filter algorithm generalizes Peterson's algorithm to N > 2 processes. Instead of a boolean flag, it requires an integer variable per process, stored in a single-writer/multiple-reader (SWMR) atomic register, and N − 1 additional variables in similar registers. The registers can be represented in pseudocode as arrays:

level : array of N integers
last_to_enter : array of N − 1 integers

The level variables take on values up to N − 1, each representing a distinct "waiting room" before the critical section. Processes advance from one room to the next, finishing in room N − 1, which is the critical section.
Specifically, to acquire a lock, process i executes:

i ← ProcessNo
for ℓ from 0 to N − 1 exclusive
    level[i] ← ℓ
    last_to_enter[ℓ] ← i
    while last_to_enter[ℓ] = i and there exists k ≠ i such that level[k] ≥ ℓ
        wait

To release the lock upon exiting the critical section, process i sets level[i] to −1.: 22  The algorithm: That this algorithm achieves mutual exclusion can be proven as follows. Process i exits the inner loop when there is either no process with a higher level than level[i], so the next waiting room is free; or, when i ≠ last_to_enter[ℓ], so another process joined its waiting room. At level zero, then, even if all N processes were to enter waiting room zero at the same time, no more than N − 1 will proceed to the next room, the final one finding itself the last to enter the room. Similarly, at the next level, N − 2 will proceed, etc., until at the final level, only one process is allowed to leave the waiting room and enter the critical section, giving mutual exclusion.: 22–24  Unlike the two-process Peterson algorithm, the filter algorithm does not guarantee bounded waiting.: 25–26  Note: When working at the hardware level, Peterson's algorithm is typically not needed to achieve atomic access. Some processors have special instructions, like test-and-set or compare-and-swap, which, by locking the memory bus, can be used to provide mutual exclusion in SMP systems. Note: Most modern CPUs reorder memory accesses to improve execution efficiency (see memory ordering for types of reordering allowed). Such processors invariably give some way to force ordering in a stream of memory accesses, typically through a memory barrier instruction. Implementation of Peterson's and related algorithms on processors that reorder memory accesses generally requires use of such operations to work correctly, to keep sequential operations from happening in an incorrect order.
Note that reordering of memory accesses can happen even on processors that don't reorder instructions (such as the PowerPC processor in the Xbox 360). Most such CPUs also have some sort of guaranteed atomic operation, such as XCHG on x86 processors and load-link/store-conditional on Alpha, MIPS, PowerPC, and other architectures. These instructions are intended to provide a way to build synchronization primitives more efficiently than can be done with pure shared memory approaches.
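As a concrete sketch (not part of the source), the acquire/release pseudocode above can be transliterated into Python. CPython's global interpreter lock stands in for the sequentially consistent SWMR registers the algorithm assumes; on real hardware, the memory barriers discussed in the notes above would be required.

```python
import threading
import time

N = 3            # number of competing threads
ITERS = 200      # critical-section entries per thread

level = [-1] * N                 # level[i]: waiting room thread i has reached
last_to_enter = [-1] * (N - 1)   # last thread to enter each waiting room
counter = 0                      # shared state protected by the lock

def lock(i):
    for l in range(N - 1):           # "for l from 0 to N - 1 exclusive"
        level[i] = l
        last_to_enter[l] = i
        # Spin while we were the last into this room and some other
        # thread is at this level or higher.
        while last_to_enter[l] == i and any(
                level[k] >= l for k in range(N) if k != i):
            time.sleep(0)            # yield so other threads make progress

def unlock(i):
    level[i] = -1

def worker(i):
    global counter
    for _ in range(ITERS):
        lock(i)
        counter += 1                 # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # N * ITERS = 600 if mutual exclusion held
```

Without the lock, concurrent `counter += 1` updates can be lost; with it, every increment survives, matching the mutual-exclusion argument above.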
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sarvatobhadra Chakra** Sarvatobhadra Chakra: Sarvatobhadra Chakra (Sanskrit: सर्वतोभद्र) in Hindu astrology (abbrev. SBC) is a unique technique for prediction based on the Nakshatras. It is an ancient system, for it takes into account Abhijit nakshatra, which is no longer referred to in the methods generally employed for making astrological predictions. Janardan Harji in his Mansagari has described it as संप्रवक्ष्यामि चक्रं त्रिलोक्यदीपिकम् – the trustworthy, quickly-revealing Trilokyadeepika Chakra. The term Sarvatobhadra, derived from Sarva (सर्व) meaning "all" and Bhadra (भद्र) meaning "good" or "auspicious", means overall auspiciousness. Abhijit nakshatra is located between Uttarashada and Sravana; it spans the last quarter of Uttarashada and the first half of Sravana nakshatra. Methodology: Sarvatobhadra Chakra is based on the 28 nakshatras, including Abhijit nakshatra forming part of the sidereal Capricorn sign, along with the twelve Zodiac signs, the thirty Tithis, the seven days of the week (Vara) and the fifty Aksaras or letters of the alphabet. It is used to judge the transit-effects of the Sun, the Moon, Mercury, Venus, Mars, Jupiter, Saturn, and the two Lunar nodes, Rahu and Ketu. Abhijit nakshatra is not used in fixing the benefic nakshatras. The available standard texts do not deal with the construction of this Chakra; however, in his commentary on Chapter XXVI of Mantreswara's Phaladeepika, Gopesh Kumar Ojha has given the required method. The main source of the Sarvatobhadra Chakra method of prognostication is the Brhmayamal Grantha. Sarvatobhadra Chakra is drawn as a grid of 9 x 9 = 81 sections, and the counting is clockwise, starting from the auspicious north-east corner (the top-right side), called the Ishana kona (a variant SBC drawn starting from the top-left side is also in use).
The outer ring of 32 boxes depicts the 28 nakshatras (seven on each side), with the four corners allotted to four vowels; the two immediately inner rings hold the aksaras, i.e. the vowels and consonants, where the first swara or aksara of a name is to be placed; the next inner ring holds the signs, and the inner-most the week-days. In this manner the Chakra covers 28 sections or vargas pertaining to the nakshatras, 12 to signs, 5 to tithis/weekdays, 16 to vowels and 20 to consonants, a total of 81 vargas. Its use involves the concept of Vedha. In this method the nakshatras aspect each other, i.e. each nakshatra aspects the nakshatra directly opposite it and the nakshatras to its left and right. These 28 nakshatras also represent the Sapta Nadis. Vedha: Vedha (वेध), meaning obstruction, is the power of a planet to influence favourably (by benefic planets) or unfavourably through obstruction (by malefic planets); the influence is stronger when the planets influenced and influencing are exalted or retrograde, but weaker if they are debilitated; a favourable vedha can nullify an adverse vedha. Vedha: Mansagari tells us that one vedha results in conflict, two vedhas result in loss of wealth, three vedhas indicate defeat or failure, and four vedhas indicate death; one vedha of a papa-graha results in misunderstanding, jealousy, ill-feeling etc.; two vedhas indicate loss, three vedhas indicate illness and four vedhas indicate death. In case the Janam nakshatra suffers vedha, travel or meaningless wandering is indicated; if it suffers Akshara-vedha, loss; if Swara-vedha, illness; if Tithi-vedha, fear; if Rasi-vedha, great opposition or grave calamity; and if it simultaneously suffers from all five vedhas, death is certain.
The vedha caused by the Sun indicates grief; by Mars, loss of wealth; by Saturn, pain and ailments; by Rahu and Ketu, obstructions; by the Moon, good and bad happenings; by Venus, fear from foes; by Mercury, sharpening of intellect; and by Jupiter, many good happenings and gains. At the commencement of a journey or the initiation of an auspicious work, if the Janam nakshatra (the nakshatra occupied by the Moon at birth) suffers vedha it indicates death or failure; if the Karma nakshatra (the 10th counted from the Janam nakshatra) suffers vedha it indicates difficulty and suffering; if the Adhana nakshatra (the 21st so counted) suffers vedha it indicates affliction of the mind; if the Vinasa nakshatra (the 23rd) suffers vedha it indicates strife with relatives and friends; if the Samudayika nakshatra (the 18th) suffers vedha it indicates various difficulties; if the Samghatika nakshatra (the 16th) suffers vedha it indicates losses; if the Jati nakshatra suffers vedha it indicates destruction of family; and if Abhijit nakshatra suffers vedha it indicates incarceration; this is with reference to the Nadis. The planets in direct motion at normal speed possess the front-vedha (or opposite-vedha) aspect, at accelerated motion the left-vedha, and in retrograde motion the backward-vedha (relative to their normal direction of movement). The Sun and the two lunar nodes have fixed front-vedha and vedha to the right and the left. Application: Sarvatobhadra Chakra is used for determining an auspicious muhurta for good luck and to ensure success in undertakings and happiness. All manner of queries at individual and national levels can be answered using this method, which is based on astrological norms, whenever a horoscope is not available; the method can also be used to verify the correctness of predictions made using other methods.
**Endometrium** Endometrium: The endometrium is the inner epithelial layer, along with its mucous membrane, of the mammalian uterus. It has a basal layer and a functional layer: the basal layer contains stem cells which regenerate the functional layer. The functional layer thickens and then is shed during menstruation in humans and some other mammals, including apes, Old World monkeys, some species of bat, the elephant shrew and the Cairo spiny mouse. In most other mammals, the endometrium is reabsorbed in the estrous cycle. During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus. The speculated presence of an endometrial microbiota has been argued against. Structure: The endometrium consists of a single layer of columnar epithelium plus the stroma on which it rests. The stroma is a layer of connective tissue that varies in thickness according to hormonal influences. In the uterus, simple tubular glands reach from the endometrial surface through to the base of the stroma, which also carries a rich blood supply provided by the spiral arteries. In women of reproductive age, two layers of endometrium can be distinguished. These two layers occur only in the endometrium lining the cavity of the uterus, and not in the lining of the fallopian tubes. Structure: The functional layer is adjacent to the uterine cavity. This layer is built up after the end of menstruation, during the first part of the menstrual cycle. Proliferation is induced by estrogen (follicular phase of the menstrual cycle), and later changes in this layer are engendered by progesterone from the corpus luteum (luteal phase). It is adapted to provide an optimum environment for the implantation and growth of the embryo. This layer is completely shed during menstruation.
Structure: The basal layer, adjacent to the myometrium and below the functional layer, is not shed at any time during the menstrual cycle. It contains stem cells that regenerate the functional layer, which develops on top of it. In the absence of progesterone, the arteries supplying blood to the functional layer constrict, so that cells in that layer become ischaemic and die, leading to menstruation. Structure: It is possible to identify the phase of the menstrual cycle, by reference to either the ovarian cycle or the uterine cycle, by observing microscopic differences at each phase. Gene and protein expression: About 20,000 protein-coding genes are expressed in human cells, and some 70% of these genes are expressed in the normal endometrium. Just over 100 of these genes are more specifically expressed in the endometrium, with only a handful of genes being highly endometrium-specific. The corresponding specific proteins are expressed in the glandular and stromal cells of the endometrial mucosa. The expression of many of these proteins varies with the menstrual cycle; for example, the progesterone receptor and thyrotropin-releasing hormone are both expressed in the proliferative phase, and PAEP is expressed in the secretory phase. Other proteins, such as the HOX11 protein that is required for female fertility, are expressed in endometrial stromal cells throughout the menstrual cycle. Certain specific proteins, such as the estrogen receptor, are also expressed in other female tissue types, such as the cervix, fallopian tubes, ovaries and breast. Structure: Microbiome speculation: The uterus and endometrium were for a long time thought to be sterile. The cervical plug of mucosa was seen to prevent the entry of any microorganisms ascending from the vagina. In the 1980s this view was challenged when it was shown that uterine infections could arise from weaknesses in the barrier of the cervical plug.
Organisms from the vaginal microbiota could enter the uterus during uterine contractions in the menstrual cycle. Further studies sought to identify microbiota specific to the uterus, which would be of help in identifying cases of unsuccessful IVF and miscarriages. Their findings were seen to be unreliable due to the possibility of cross-contamination in the sampling procedures used. The well-documented presence of Lactobacillus species, for example, was easily explained by an increase in the vaginal population being able to seep into the cervical mucus. Another study highlighted the flaws of the earlier studies, including cross-contamination. It was also argued that the evidence from studies using the germ-free offspring of axenic animals clearly showed the sterility of the uterus. The authors concluded that in light of these findings there was no evidence for the existence of a microbiome. The normal dominance of Lactobacilli in the vagina is seen as a marker for vaginal health. However, in the uterus this much lower population is seen as invasive in a closed environment that is highly regulated by female sex hormones, and could have unwanted consequences. In studies of endometriosis, Lactobacillus is not the dominant type, and there are higher levels of Streptococcus and Staphylococcus species. Half of the cases of bacterial vaginosis showed a polymicrobial biofilm attached to the endometrium. Function: The endometrium is the innermost lining layer of the uterus, and functions to prevent adhesions between the opposed walls of the myometrium, thereby maintaining the patency of the uterine cavity. During the menstrual cycle or estrous cycle, the endometrium grows to a thick, blood-vessel-rich, glandular tissue layer. This represents an optimal environment for the implantation of a blastocyst upon its arrival in the uterus. The endometrium is central, echogenic (detectable using ultrasound scanners), and has an average thickness of 6.7 mm.
Function: During pregnancy, the glands and blood vessels in the endometrium further increase in size and number. Vascular spaces fuse and become interconnected, forming the placenta, which supplies oxygen and nutrition to the embryo and fetus. Function: Cycle The functional layer of the endometrial lining undergoes cyclic regeneration from stem cells in the basal layer. Humans, apes, and some other species display the menstrual cycle, whereas most other mammals are subject to an estrous cycle. In both cases, the endometrium initially proliferates under the influence of estrogen. However, once ovulation occurs, the ovary (specifically the corpus luteum) will produce much larger amounts of progesterone. This changes the proliferative pattern of the endometrium to a secretory lining. Eventually, the secretory lining provides a hospitable environment for one or more blastocysts. Function: Upon fertilization, the egg may implant into the uterine wall and provide feedback to the body with human chorionic gonadotropin (hCG). hCG provides continued feedback throughout pregnancy by maintaining the corpus luteum, which will continue its role of releasing progesterone and estrogen. In case of implantation, the endometrial lining remains as decidua. The decidua becomes part of the placenta; it provides support and protection for the gestation. Function: Without implantation of a fertilized egg, the endometrial lining is either reabsorbed (estrous cycle) or shed (menstrual cycle). In the latter case, the process of shedding involves the breaking down of the lining, the tearing of small connective blood vessels, and the loss of the tissue and blood that had constituted it through the vagina. The entire process occurs over a period of several days. Menstruation may be accompanied by a series of uterine contractions; these help expel the menstrual endometrium. 
Function: If there is inadequate stimulation of the lining, due to lack of hormones, the endometrium remains thin and inactive. In humans, this will result in amenorrhea, or the absence of a menstrual period. After menopause, the lining is often described as being atrophic. In contrast, endometrium that is chronically exposed to estrogens, but not to progesterone, may become hyperplastic. Long-term use of oral contraceptives with highly potent progestins can also induce endometrial atrophy. In humans, the cycle of building and shedding the endometrial lining lasts an average of 28 days. The endometrium develops at different rates in different mammals. Various factors, including the seasons, climate, and stress, can affect its development. The endometrium itself produces certain hormones at different stages of the cycle, and this affects other parts of the reproductive system. Diseases related with endometrium: Chorionic tissue can result in marked endometrial changes, known as an Arias-Stella reaction, that have an appearance similar to cancer. Historically, this change was diagnosed as endometrial cancer, and it is important only insofar as it should not be misdiagnosed as cancer. Adenomyosis is the growth of the endometrium into the muscle layer of the uterus (the myometrium). Endometriosis is the growth of tissue similar to the endometrium outside the uterus. Endometrial hyperplasia is an overgrowth of the lining. Endometrial cancer is the most common cancer of the human female genital tract. Diseases related with endometrium: Asherman's syndrome, also known as intrauterine adhesions, occurs when the basal layer of the endometrium is damaged by instrumentation (e.g., D&C) or infection (e.g., endometrial tuberculosis), resulting in endometrial sclerosis and adhesion formation partially or completely obliterating the uterine cavity. Thin endometrium may be defined as an endometrial thickness of less than 8 mm. It usually occurs after menopause.
Treatments that can improve endometrial thickness include vitamin E, L-arginine and sildenafil citrate. Gene expression profiling using cDNA microarray can be used for the diagnosis of endometrial disorders. Diseases related with endometrium: The European Menopause and Andropause Society (EMAS) released guidelines with detailed information on how to assess the endometrium. Embryo transfer: An endometrial thickness (EMT) of less than 7 mm decreases the pregnancy rate in in vitro fertilization by an odds ratio of approximately 0.4 compared to an EMT of over 7 mm. However, such low thickness rarely occurs, and any routine use of this parameter is regarded as not justified. The optimal endometrial thickness is 10 mm. Diseases related with endometrium: Observation of the endometrium by transvaginal ultrasonography is used when administering fertility medication, such as in in vitro fertilization. At the time of embryo transfer, it is favorable to have an endometrium of a thickness of between 7 and 14 mm with a triple-line configuration, meaning that the endometrium contains a hyperechoic (usually displayed as light) line in the middle surrounded by two more hypoechoic (darker) lines. A triple-line endometrium reflects the separation of the basal layer and the functional layer, and is also observed in the periovulatory period secondary to rising estradiol levels; it disappears after ovulation. Endometrial thickness is also associated with live births in IVF: the live birth rate in a normal endometrium is halved when the thickness is less than 5 mm. Endometrial protection: Estrogens stimulate endometrial proliferation and carcinogenesis. Conversely, progestogens inhibit endometrial proliferation and carcinogenesis caused by estrogens, and stimulate differentiation of the endometrium into decidua, which is termed endometrial transformation or decidualization. This is mediated by the progestogenic and functional antiestrogenic effects of progestogens in this tissue.
These effects of progestogens, and their protection against endometrial hyperplasia and endometrial cancer caused by estrogens, are referred to as endometrial protection.
**Musical road** Musical road: A musical road is a road, or section of a road, which when driven over causes a tactile vibration and audible rumbling that can be felt through the wheels and body of the vehicle. This rumbling is heard within the car as well as the surrounding area, in the form of a musical tune. Musical roads are known to currently exist in Hungary, Japan, South Korea, the United States, China, Iran, Taiwan, Indonesia, the United Arab Emirates and Argentina. In the past, they could be found in France, Denmark and the Netherlands as well. Musical road: Each note is produced by varying the spacing of strips in, or on, the road. For example, an E note requires a frequency of around 330 vibrations a second. Therefore, strips 2.4 in (61 mm) apart will produce an E note in a vehicle travelling at 45 mph (72 km/h). By country: Denmark The first known musical road, the Asfaltofon (English: Asphaltophone), was created in October 1995 in Gylling, Denmark, by Steen Krarup Jensen and Jakob Freud-Magnus, two Danish artists. The Asphaltophone was made from a series of raised pavement markers, similar to Botts' dots, spaced out at intermittent intervals so that as a vehicle passed over the markers, the vibrations caused by the wheels could be heard inside the car. The song played was an arpeggio in the key of F major. By country: France In 2000, a musical road with a 28-note melody composed by Gaellic Guillerm was built in the suburb of Villepinte, Seine-Saint-Denis, France. It was located on boulevard Laurent and Danielle Casanova and was supposedly paved over in 2002. However, as of 2006, subsequent visits to the site of this musical road claimed that the song could still be heard faintly. By country: Hungary In 2019, Hungary installed a musical road in memoriam of László Bódi (better known by his stage name Cipő), lead singer from the band Republic. 
When driving on the side of the road, one can hear an approximately 30-second snippet of their song 67-es út (Road 67). It is located at 46.530547°N 17.817368°E on Road 67 between Mernyeszentmiklós and Mernye, in the southbound direction. By country: Indonesia In 2019, Indonesia installed a musical road along the Ngawi–Kertosono section of the Solo–Kertosono Toll Road in Java. The song played is the first six notes of "Happy Birthday To You," but the fifth note is off-key by a half-step. It was installed to reduce the number of traffic accidents, and the song was chosen because it is familiar to the community. By country: Japan In Japan, Shizuo Shinoda accidentally scraped some markings into a road with a bulldozer, drove over them, and realized that it was possible to create tunes depending on the depth and spacing of the grooves. In 2007, the Hokkaido National Industrial Research Institute, which had previously worked on a system using infra-red lights to detect dangerous road surfaces, refined Shinoda's designs to create the Melody Road. They used the same concept of cutting grooves into the concrete at specific intervals and found that the closer the grooves are, the higher the pitch of the sound, while grooves spaced farther apart create lower-pitched sounds. There are multiple permanently paved 250-meter (820 ft) Melody Road sections throughout Japan. The first ones built included one in Hokkaido in Shibetsu, Nemuro, which plays the "Shiretoko Love Song" on the site where Shinoda's first bulldozer scrapings were made; another in the town of Kimino in Wakayama Prefecture, where a car can produce the Japanese ballad "Miagete goran yoru no hoshi wo" by Kyu Sakamoto; one in Shizuoka Prefecture on the ascending drive to Mount Fuji; and a fourth in the village of Katashina in Gunma, which consists of 2,559 grooves cut into a 175-meter (574 ft) stretch of existing roadway and produces the tune of "Memories of Summer".
A 320-meter (1050 ft) stretch of the Ashinoko Skyline in Hakone plays "A Cruel Angel's Thesis", the theme song from the anime Neon Genesis Evangelion, when driven over at 40 km/h. Yet another can be found on the road between Nakanojo town and Shima Onsen, which plays "Always With Me" (Japanese title: いつも何度でも, Itsumo nando demo) from the feature animation Spirited Away. The roads work by creating sequences of variable-width groove intervals to create specific low- and high-frequency vibrations. Some of these roads, such as one in Okinawa that produces the Japanese folk song "Futami Jowa", as well as one in Hiroshima Prefecture, are polyphonic, with different sequences of rumble strips for the left and right tires so that a melody and harmony can be heard. As of 2016, there are over 30 Melody Roads in Japan. By country: Netherlands A singing road was installed near the village of Jelsum in Friesland. The Friesland provincial anthem (De Alde Friezen) would play if drivers obeyed the speed limit; otherwise the song would play off-key. After complaints from villagers, the singing road was removed. By country: South Korea The Singing Road can be found close to Anyang, Gyeonggi, South Korea, and was created using grooves cut into the ground, similar to the Japanese Melody Roads. Unlike the Japanese roads, however, which were designed to attract tourists, the Singing Road is intended to help motorists stay alert and awake – 68% of traffic accidents in South Korea are caused by inattentive, sleeping or speeding drivers. The tune played is "Mary Had a Little Lamb", and the road took four days to construct. It is likely that the song was chosen because the road leads to an airport - in Korean, the melody of "Mary Had a Little Lamb" is known as "Airplane," with lyrics describing an airplane flying. As of 2022, however, it was paved over and the song can no longer be heard. By country: As of 2022, there are five singing roads in South Korea.
There were formerly six, but the first was paved over. The second one, built at an unknown date, plays a traditional folk tune called "Mountain Wind, River Wind" for guests exiting the ski resort Kangwonland. The third is located on the way from Osan to Chinhae and plays a song called "Bicycle." The fourth was constructed in 2019 and plays the first verse of "Twinkle Twinkle Little Star". It was constructed inside the Inje-Yangyang Tunnel on the Seoul-Yangyang Expressway, the longest tunnel in Korea. The fifth is located on the Donghae Expressway, inside a tunnel, and plays a well-known Korean children's folk song called "Cheer Up, Dad." The sixth one was constructed inside the Marae tunnel on route 17, but the title of the song played by the road is unknown. By country: China A 300-meter stretch of asphalt road in Beijing's south-western Fengtai district, in the Qianlingshan Mountain Scenic Area, has been made into a singing road and will play the tune "Ode to the Motherland", as long as drivers follow the speed limit of 40 km/h. Construction was completed in 2016. "We have small grooves built into the road surface, positioned apart with different sizes of gap according to the melody of the song. These 'rumble strips' cause the car tires to play music and then make a singing road," said Lin Zhong, general manager of Beijing Luxin Dacheng landscape architecture company. "Our first idea is to get cars moving at a constant speed. Because only in that way can you enjoy good musical effect. We use it as a reminder of speed limit," added Lin. Two other musical roads in China exist: the first at a nature reserve in Henan that plays the national anthem and "Mo Li Hua", and the second near Yangma Dao in Yantai, which plays the overture from "Carmen" and "Ode to Joy." One song is paved into each side of the road at both locations, so drivers can experience a song traveling both one way and the other.
By country: In June 2021, a 587-meter portion of G108 in Xiayunling Township, Fangshan, Beijing, was made into a musical road which plays the tune of "Without the Communist Party, There Would Be No New China". Xiayunling was the birthplace of this song. By country: United States The Civic Musical Road was built on Avenue K in Lancaster, California, on 5 September 2008. Covering a quarter-mile stretch of road between 60th Street West and 70th Street West, the Civic Musical Road used grooves cut into the asphalt to replicate part of the finale of the William Tell overture. It was paved over on 23 September after nearby residents complained to the city council about noise levels. After further complaints from city residents about its removal, work began to re-create it on 15 October 2008 on Avenue G between 30th Street West and 40th Street West, this time two miles away from any residence. The road, named after the Honda Civic, opened two days later. The new section on Avenue G is only in the far left lane of the westbound side of the road, and appears in Honda Civic commercials. The rhythm is recognizable, but the intervals are so far off that the melody bears only a slight resemblance to the William Tell overture, regardless of the car's speed. It is likely the designers made a systematic miscalculation, failing to include the width of the groove itself in the spacing between grooves; this error was made on both Avenue K and Avenue G. In October 2014, the village of Tijeras, New Mexico, installed a musical road on a two-lane stretch of U.S. Route 66 which plays "America the Beautiful" when a vehicle drives over it at 45 mph. This highway is labelled NM 333, between Miles 4 and 5, eastbound. Funded by the National Geographic Society, the project was coordinated with the New Mexico Department of Transportation, who described the project as a way to get drivers to slow down, "and to bring a little excitement to an otherwise monotonous highway."
By 2020, however, the tune was fading and most of the ridges were even paved over. A spokesperson for New Mexico's Department of Transportation said, "...there are no plans to restore the musical highway. The cost is outrageous, and they have since restored portions of the roadway and removed all of the signs. Unfortunately, this was part of a previous administration and never set in stone to keep up with the maintenance of this singing highway."In October 2019, Tim Arnold, an alumnus of Auburn University's College of Engineering, created and installed a musical road that plays the first seven notes of the Auburn Tigers fight song, "War Eagle". Inspired by previous musical roads, the short section of South Donahue Drive has been dubbed "War Eagle Road" and was created with a revolutionary process utilizing a surface-application material which does not damage the road. Working with support from Auburn University and the National Center for Asphalt Technology, Arnold developed the War Eagle Road to be a work of public art welcoming fans and rivals as they approach campus. The project was approved by Office of the University Architect within Facilities Management and completed to coordinate with the final three home games of the Auburn Tigers football season. The musical road has enjoyed a positive public reaction and seems to be welcomed as a permanent fixture. By country: United Arab Emirates On January 13, 2023, a musical road was built in the city of Al Ain in the United Arab Emirates, playing the national anthem of the country, Ishy Bilady, when driven over. However, it is being used as an experiment; the strips on the road are temporary and will be removed in the future to study the possibility of a better implementation.
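The strip-spacing rule quoted at the top of this article (an E note of about 330 vibrations a second from strips 2.4 in / 61 mm apart at 45 mph / 72 km/h) follows from spacing = speed / frequency; a quick check:

```python
def strip_spacing_m(speed_m_per_s, freq_hz):
    # One strip passes under the tire per vibration period,
    # so spacing = speed / frequency.
    return speed_m_per_s / freq_hz

speed = 72 * 1000 / 3600                           # 72 km/h (about 45 mph) in m/s
spacing_mm = strip_spacing_m(speed, 330) * 1000    # an E note is ~330 Hz
print(round(spacing_mm, 1))                        # -> 60.6, matching the quoted 61 mm (2.4 in)
```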
**European Magnetic Field Laboratory** European Magnetic Field Laboratory: The European Magnetic Field Laboratory (EMFL) gathers the efforts of three laboratories in Germany, France, and the Netherlands: the Dresden High Magnetic Field Laboratory (HLD), the Laboratoire National des Champs Magnétiques Intenses (LNCMI) in Grenoble and Toulouse, and the High Field Magnet Laboratory (HFML) in Nijmegen. EMFL was an initiative of Prof. Jan Kees Maan, former director of HFML in Nijmegen. European Magnetic Field Laboratory: Research in the high-field magnets of the EMFL leads to new insights into material properties. Any kind of material can be explored in a high magnetic field, for example superconductors, biological molecules and nanostructures. This project is financially supported by the European Commission. The "ISABEL" project is a four-year project (2020-2024) of eighteen partners, funded within Horizon 2020 and coordinated by LNCMI. The principal goal of this project is to ensure the long-term sustainability of the EMFL and define a roadmap for its future development. The "SuperEMFL" project runs for four years (2021-2024), likewise funded within Horizon 2020 and coordinated by LNCMI. The project is a design study aiming at the development of high-temperature superconductor (HTS) technology, providing the EMFL with much higher superconducting fields and novel superconducting magnet geometries. Mission: It is the mission of the EMFL to generate the highest possible magnetic fields for use in scientific research and make them available to the scientific community. The EMFL provides pulsed (in Toulouse and Dresden) as well as static magnetic fields (Grenoble and Nijmegen). A call for proposals is launched twice a year for users to get access to the facilities. The best projects are chosen by a European selection committee, composed of experts from around the world.
The three laboratories:

HFML (Nijmegen, The Netherlands) is operated by the Radboud University (RU) and the Dutch Research Council (NWO). Its director is Prof. Dr. Peter Christianen.
HLD (Dresden, Germany) is part of the Helmholtz-Zentrum Dresden-Rossendorf. Its director is Prof. Dr. Joachim Wosnitza.
LNCMI (Grenoble & Toulouse, France) is a laboratory of the CNRS, associated with several universities: UJF, INSA and UPS. Continuous fields are available in Grenoble and pulsed fields in Toulouse. The LNCMI is managed by Dr. Charles Simon.

Klaus von Klitzing discovered the quantum Hall effect at the LNCMI (Nobel Prize in Physics, 1985). Andre Geim and Konstantin Novoselov were awarded the Nobel Prize in Physics 2010 for the discovery of graphene, the thinnest material in the world; they explored the material in the High Field Magnet Laboratory.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fuse (electrical)** Fuse (electrical): In electronics and electrical engineering, a fuse is an electrical safety device that operates to provide overcurrent protection of an electrical circuit. Its essential component is a metal wire or strip that melts when too much current flows through it, thereby stopping or interrupting the current. It is a sacrificial device; once a fuse has operated it is an open circuit, and must be replaced or rewired, depending on its type. Fuse (electrical): Fuses have been used as essential safety devices from the early days of electrical engineering. Today there are thousands of different fuse designs which have specific current and voltage ratings, breaking capacity, and response times, depending on the application. The time and current operating characteristics of fuses are chosen to provide adequate protection without needless interruption. Wiring regulations usually define a maximum fuse current rating for particular circuits. Short circuits, overloading, mismatched loads, and device failure are among the prime reasons for fuse operation. When a damaged live wire makes contact with a metal case that is connected to ground, a short circuit will form and the fuse will melt. Fuse (electrical): A fuse is an automatic means of removing power from a faulty system, often abbreviated ADS (Automatic Disconnection of Supply). Circuit breakers can be used as an alternative to fuses, but have significantly different characteristics. History: Breguet recommended the use of reduced-section conductors to protect telegraph stations from lightning strikes; by melting, the smaller wires would protect apparatus and wiring inside the building. A variety of wire or foil fusible elements were in use to protect telegraph cables and lighting installations as early as 1864. A fuse was patented by Thomas Edison in 1890 as part of his electric distribution system. 
Construction: A fuse consists of a metal strip or wire fuse element, of small cross-section compared to the circuit conductors, mounted between a pair of electrical terminals, and (usually) enclosed by a non-combustible housing. The fuse is arranged in series to carry all the current passing through the protected circuit. The resistance of the element generates heat due to the current flow. The size and construction of the element is (empirically) determined so that the heat produced for a normal current does not cause the element to attain a high temperature. If too high a current flows, the element rises to a higher temperature and either directly melts, or else melts a soldered joint within the fuse, opening the circuit. Construction: The fuse element is made of zinc, copper, silver, aluminum, or alloys among these or other various metals to provide stable and predictable characteristics. The fuse ideally would carry its rated current indefinitely, and melt quickly on a small excess. The element must not be damaged by minor harmless surges of current, and must not oxidize or change its behavior after possibly years of service. Construction: The fuse elements may be shaped to increase heating effect. In large fuses, current may be divided between multiple strips of metal. A dual-element fuse may contain a metal strip that melts instantly on a short circuit, and also contain a low-melting solder joint that responds to long-term overload of low values compared to a short circuit. Fuse elements may be supported by steel or nichrome wires, so that no strain is placed on the element, but a spring may be included to increase the speed of parting of the element fragments. Construction: The fuse element may be surrounded by air, or by materials intended to speed the quenching of the arc. Silica sand or non-conducting liquids may be used. Characteristics: Rated current IN: the maximum current that the fuse can continuously conduct without interrupting the circuit. 
Characteristics: Time vs current characteristics The speed at which a fuse blows depends on how much current flows through it and the material of which the fuse is made. Manufacturers can provide a plot of current vs time, often plotted on logarithmic scales, to characterize the device and to allow comparison with the characteristics of protective devices upstream and downstream of the fuse. Characteristics: The operating time is not a fixed interval but decreases as the current increases. Fuses are designed to have particular characteristics of operating time compared to current. A standard fuse may require twice its rated current to open in one second, a fast-blow fuse may require twice its rated current to blow in 0.1 seconds, and a slow-blow fuse may require twice its rated current for tens of seconds to blow. Characteristics: Fuse selection depends on the load's characteristics. Semiconductor devices may use a fast or ultrafast fuse as semiconductor devices heat rapidly when excess current flows. The fastest blowing fuses are designed for the most sensitive electrical equipment, where even a short exposure to an overload current could be damaging. Normal fast-blow fuses are the most general purpose fuses. A time-delay fuse (also known as an anti-surge or slow-blow fuse) is designed to allow a current which is above the rated value of the fuse to flow for a short period of time without the fuse blowing. These types of fuse are used on equipment such as motors, which can draw larger than normal currents for up to several seconds while coming up to speed. Characteristics: The I2t value The I2t rating is related to the amount of energy let through by the fuse element when it clears the electrical fault. This term is normally used in short circuit conditions and the values are used to perform co-ordination studies in electrical networks. I2t parameters are provided by charts in manufacturer data sheets for each fuse family. 
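The inverse-time behaviour described above can be sketched with a simple power-law model; the exponent and the curve shape below are illustrative assumptions, not manufacturer data, and the one-second figure at twice rated current echoes the "standard fuse" example in the text:

```python
# Hypothetical inverse-time model of a fuse's time-current curve:
# t(I) = t2 * (2 * I_n / I) ** m, where t2 is the blow time at twice
# the rated current I_n and m is a fitted steepness exponent.
# Illustrative only; real curves come from manufacturer datasheets.

def blow_time(current, rated, t2=1.0, m=4.0):
    """Estimated time (s) for the fuse to open at a given current."""
    if current <= rated:
        return float("inf")  # idealized: rated current carried indefinitely
    return t2 * (2 * rated / current) ** m

# A "standard" 5 A fuse (1 s at 10 A) opens much faster at 20 A:
t_2x = blow_time(10.0, rated=5.0)   # 1.0 s
t_4x = blow_time(20.0, rated=5.0)   # 0.0625 s
```

In this toy model, a slow-blow fuse would simply have a much larger `t2` (tens of seconds at twice rated current) and a fast-blow fuse a smaller one, matching the comparisons given above.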
For coordination of fuse operation with upstream or downstream devices, both melting I2t and clearing I2t are specified. The melting I2t is proportional to the amount of energy required to begin melting the fuse element. The clearing I2t is proportional to the total energy let through by the fuse when clearing a fault. The energy is mainly dependent on current and time for fuses as well as the available fault level and system voltage. Since the I2t rating of the fuse is proportional to the energy it lets through, it is a measure of the thermal damage from the heat and magnetic forces that will be produced by a fault. Characteristics: Breaking capacity The breaking capacity is the maximum current that can safely be interrupted by the fuse. This should be higher than the prospective short-circuit current. Miniature fuses may have an interrupting rating only 10 times their rated current. Fuses for small, low-voltage, usually residential, wiring systems are commonly rated, in North American practice, to interrupt 10,000 amperes. Fuses for commercial or industrial power systems must have higher interrupting ratings, with some low-voltage current-limiting high interrupting fuses rated for 300,000 amperes. Fuses for high-voltage equipment, up to 115,000 volts, are rated by the total apparent power (megavolt-amperes, MVA) of the fault level on the circuit. Characteristics: Some fuses are designated high rupture capacity (HRC) or high breaking capacity (HBC) and are usually filled with sand or a similar material. Low-voltage high rupture capacity (HRC) fuses are used in the area of main distribution boards in low-voltage networks where there is a high prospective short circuit current. They are generally larger than screw-type fuses, and have ferrule cap or blade contacts. High rupture capacity fuses may be rated to interrupt current of 120 kA. HRC fuses are widely used in industrial installations and are also used in the public power grid, e.g. 
in transformer stations, main distribution boards, or in building junction boxes and as meter fuses. In some countries, because of the high fault current available where these fuses are used, local regulations may permit only trained personnel to change these fuses. Some varieties of HRC fuse include special handling features. Characteristics: Rated voltage The voltage rating of the fuse must be equal to or greater than what would become the open-circuit voltage. For example, a glass tube fuse rated at 32 volts would not reliably interrupt current from a voltage source of 120 or 230 V. If a 32 V fuse attempts to interrupt the 120 or 230 V source, an arc may result. Plasma inside the glass tube may continue to conduct current until the current diminishes to the point where the plasma becomes a non-conducting gas. Rated voltage should be higher than the maximum voltage source it would have to disconnect. Connecting fuses in series does not increase the rated voltage of the combination, nor of any one fuse. Characteristics: Medium-voltage fuses rated for a few thousand volts are never used on low voltage circuits, because of their cost and because they cannot properly clear the circuit when operating at very low voltages. Voltage drop: The manufacturer may specify the voltage drop across the fuse at rated current. There is a direct relationship between a fuse's cold resistance and its voltage drop value. Once current is applied, resistance and voltage drop of a fuse will constantly grow with the rise of its operating temperature until the fuse finally reaches thermal equilibrium. The voltage drop should be taken into account, particularly when using a fuse in low-voltage applications. Voltage drop often is not significant in more traditional wire type fuses, but can be significant in other technologies such as resettable (PPTC) type fuses. Temperature derating: Ambient temperature will change a fuse's operational parameters. 
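Such ambient-temperature effects are usually published as a derating curve; a minimal sketch, assuming hypothetical anchor points (nominal at 25 °C, extra capacity when cold, reduced capacity when hot) and linear interpolation between them:

```python
# Hypothetical ambient-temperature derating anchor points.
# Real anchor points come from the fuse family's datasheet.
DERATING_POINTS = [(-40, 1.10), (25, 1.00), (100, 0.80)]  # (deg C, factor)

def derated_current(rated_amps, ambient_c, points=DERATING_POINTS):
    """Usable continuous current at a given ambient temperature."""
    if ambient_c <= points[0][0]:
        return rated_amps * points[0][1]
    if ambient_c >= points[-1][0]:
        return rated_amps * points[-1][1]
    # Linear interpolation between adjacent anchor points.
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if t0 <= ambient_c <= t1:
            frac = (ambient_c - t0) / (t1 - t0)
            return rated_amps * (f0 + (f1 - f0) * frac)

# derated_current(1.0, 25) -> 1.0 A; derated_current(1.0, 100) -> 0.8 A
```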
A fuse rated for 1 A at 25 °C may conduct up to 10% or 20% more current at −40 °C and may open at 80% of its rated value at 100 °C. Operating values will vary with each fuse family and are provided in manufacturer data sheets. Markings: Most fuses are marked on the body or end caps with markings that indicate their ratings. Surface-mount technology "chip type" fuses feature few or no markings, making identification very difficult. Similar appearing fuses may have significantly different properties, identified by their markings. Fuse markings will generally convey the following information, either explicitly as text, or else implicit with the approval agency marking for a particular type: Current rating of the fuse. Voltage rating of the fuse. Time-current characteristic; i.e. fuse speed. Approvals by national and international standards agencies. Manufacturer/part number/series. Interrupting rating (breaking capacity) Packages and materials: Fuses come in a vast array of sizes and styles to serve in many applications, manufactured in standardised package layouts to make them easily interchangeable. Fuse bodies may be made of ceramic, glass, plastic, fiberglass, molded mica laminates, or molded compressed fibre depending on application and voltage class. Cartridge (ferrule) fuses have a cylindrical body terminated with metal end caps. Some cartridge fuses are manufactured with end caps of different sizes to prevent accidental insertion of the wrong fuse rating in a holder, giving them a bottle shape. Fuses for low voltage power circuits may have bolted blade or tag terminals which are secured by screws to a fuseholder. Some blade-type terminals are held by spring clips. Blade type fuses often require the use of a special purpose extractor tool to remove them from the fuse holder. Renewable fuses have replaceable fuse elements, allowing the fuse body and terminals to be reused if not damaged after a fuse operation. 
Fuses designed for soldering to a printed circuit board have radial or axial wire leads. Surface mount fuses have solder pads instead of leads. High-voltage fuses of the expulsion type have fiber or glass-reinforced plastic tubes and an open end, and can have the fuse element replaced. Semi-enclosed fuses are fuse wire carriers in which the fusible wire itself can be replaced. The exact fusing current is not as well controlled as an enclosed fuse, and it is extremely important to use the correct diameter and material when replacing the fuse wire, and for these reasons these fuses are slowly falling from favour. These are still used in consumer units in some parts of the world, but are becoming less common. Packages and materials: While glass fuses have the advantage of a fuse element visible for inspection purposes, they have a low breaking capacity (interrupting rating), which generally restricts them to applications of 15 A or less at 250 VAC. Ceramic fuses have the advantage of a higher breaking capacity, facilitating their use in circuits with higher current and voltage. Filling a fuse body with sand provides additional cooling of the arc and increases the breaking capacity of the fuse. Medium-voltage fuses may have liquid-filled envelopes to assist in the extinguishing of the arc. Some types of distribution switchgear use fuse links immersed in the oil that fills the equipment. Packages and materials: Fuse packages may include a rejection feature such as a pin, slot, or tab, which prevents interchange of otherwise similar appearing fuses. For example, fuse holders for North American class RK fuses have a pin that prevents installation of similar-appearing class H fuses, which have a much lower breaking capacity and a solid blade terminal that lacks the slot of the RK type. Dimensions: Fuses can be built with different sized enclosures to prevent interchange of different ratings of fuse. 
For example, bottle style fuses distinguish between ratings with different cap diameters. Automotive glass fuses were made in different lengths, to prevent high-rated fuses being installed in a circuit intended for a lower rating. Special features: Glass cartridge and plug fuses allow direct inspection of the fusible element. Other fuses have other indication methods including: Indicating pin or striker pin — extends out of the fuse cap when the element is blown. Indicating disc — a coloured disc (flush mounted in the end cap of the fuse) falls out when the element is blown. Element window — a small window built into the fuse body to provide visual indication of a blown element. External trip indicator — similar function to striker pin, but can be externally attached (using clips) to a compatible fuse. Some fuses allow a special purpose micro switch or relay unit to be fixed to the fuse body. When the fuse element blows, the indicating pin extends to activate the micro switch or relay, which, in turn, triggers an event. Some fuses for medium-voltage applications use two or three separate barrels and two or three fuse elements in parallel. Fuse standards: IEC 60269 fuses The International Electrotechnical Commission publishes standard 60269 for low-voltage power fuses. The standard is in four volumes, which describe general requirements, fuses for industrial and commercial applications, fuses for residential applications, and fuses to protect semiconductor devices. The IEC standard unifies several national standards, thereby improving the interchangeability of fuses in international trade. All fuses of different technologies tested to meet IEC standards will have similar time-current characteristics, which simplifies design and maintenance. 
Fuse standards: UL 248 fuses (North America) In the United States and Canada, low-voltage fuses to 1 kV AC rating are made in accordance with Underwriters Laboratories standard UL 248 or the harmonized Canadian Standards Association standard C22.2 No. 248. This standard applies to fuses rated 1 kV or less, AC or DC, and with breaking capacity up to 200 kA. These fuses are intended for installations following Canadian Electrical Code, Part I (CEC), or the National Electrical Code, NFPA 70 (NEC). Fuse standards: The standard ampere ratings for fuses (and circuit breakers) in USA/Canada are considered 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100, 110, 125, 150, 175, 200, 225, 250, 300, 350, 400, 450, 500, 600, 700, 800, 1000, 1200, 1600, 2000, 2500, 3000, 4000, 5000, and 6000 amperes. Additional standard ampere ratings for fuses are 1, 3, 6, 10, and 601. Fuse standards: UL 248 currently has 19 "parts". UL 248-1 sets the general requirements for fuses, while the latter parts are dedicated to specific fuse sizes (ex: 248-8 for Class J, 248-10 for Class L), or for categories of fuses with unique properties (ex: 248-13 for semiconductor fuses, 248-19 for photovoltaic fuses). The general requirements (248-1) apply except as modified by the supplemental part (248-x). For example, UL 248-19 allows photovoltaic fuses to be rated up to 1500 volts, DC, versus 1000 volts under the general requirements. Fuse standards: IEC and UL nomenclature varies slightly. IEC standards refer to a "fuse" as the assembly of a fusible link and a fuse holder. In North American standards, the fuse is the replaceable portion of the assembly, and a fuse link would be a bare metal element for installation in a fuse. Automotive fuses: Automotive fuses are used to protect the wiring and electrical equipment for vehicles. There are several different types of automotive fuses and their usage is dependent upon the specific application, voltage, and current demands of the electrical circuit. 
Automotive fuses can be mounted in fuse blocks, inline fuse holders, or fuse clips. Some automotive fuses are occasionally used in non-automotive electrical applications. Standards for automotive fuses are published by SAE International (formerly known as the Society of Automotive Engineers). Automotive fuses: Automotive fuses can be classified into four distinct categories: Blade fuses Glass tube or Bosch type Fusible links Fuse limiters. Most automotive fuses rated at 32 volts are used on circuits rated 24 volts DC and below. Some vehicles use a dual 12/42 V DC electrical system that will require a fuse rated at 58 V DC. High voltage fuses: Fuses are used on power systems up to 115,000 volts AC. High-voltage fuses are used to protect instrument transformers used for electricity metering, or for small power transformers where the expense of a circuit breaker is not warranted. A circuit breaker at 115 kV may cost up to five times as much as a set of power fuses, so the resulting saving can be tens of thousands of dollars. In medium-voltage distribution systems, a power fuse may be used to protect a transformer serving 1–3 houses. Pole-mounted distribution transformers are nearly always protected by a fusible cutout, which can have the fuse element replaced using live-line maintenance tools. High voltage fuses: Medium-voltage fuses are also used to protect motors, capacitor banks and transformers and may be mounted in metal enclosed switchgear, or (rarely in new designs) on open switchboards. High voltage fuses: Expulsion fuses Large power fuses use fusible elements made of silver, copper or tin to provide stable and predictable performance. High voltage expulsion fuses surround the fusible link with gas-evolving substances, such as boric acid. When the fuse blows, heat from the arc causes the boric acid to evolve large volumes of gases. The associated high pressure (often greater than 100 atmospheres) and cooling gases rapidly quench the resulting arc. 
The hot gases are then explosively expelled out of the end(s) of the fuse. Such fuses can only be used outdoors. High voltage fuses: These types of fuses may have an impact pin to operate a switch mechanism, so that all three phases are interrupted if any one fuse blows. High-power fuses can interrupt several kiloamperes. Some manufacturers have tested their fuses for up to 63 kA short-circuit current. Comparison with circuit breakers: Fuses have the advantages of often being less costly and simpler than a circuit breaker for similar ratings. The blown fuse must be replaced with a new device, which is less convenient than simply resetting a breaker, and is therefore likely to discourage people from ignoring faults. On the other hand, replacing a fuse without isolating the circuit first (most building wiring designs do not provide individual isolation switches for each fuse) can be dangerous in itself, particularly if the fault is a short circuit. Comparison with circuit breakers: High rupturing capacity fuses can be rated to safely interrupt up to 300,000 amperes at 600 V AC. Special current-limiting fuses are applied ahead of some molded-case breakers to protect the breakers in low-voltage power circuits with high short-circuit levels. Current-limiting fuses operate so quickly that they limit the total "let-through" energy that passes into the circuit, helping to protect downstream equipment from damage. These fuses open in less than one cycle of the AC power frequency; circuit breakers cannot match this speed. Some types of circuit breakers must be maintained on a regular basis to ensure their mechanical operation during an interruption. This is not the case with fuses, which rely on melting processes where no mechanical operation is required for the fuse to operate under fault conditions. 
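The speed advantage of a current-limiting fuse can be made concrete with the I2t measure introduced earlier; the fault current and clearing times below are hypothetical figures for illustration:

```python
# Sketch: comparing let-through energy (I^2 * t, in A^2*s) for a
# constant prospective fault current. All figures are hypothetical.

def let_through_i2t(fault_current_amps, clearing_time_s):
    """Approximate let-through energy for a constant fault current."""
    return fault_current_amps ** 2 * clearing_time_s

fault = 10_000.0  # A, assumed prospective fault current

# A fuse clearing within a quarter-cycle at 50 Hz (5 ms) vs a breaker
# taking two cycles (40 ms) lets through far less energy:
fuse_i2t = let_through_i2t(fault, 0.005)
breaker_i2t = let_through_i2t(fault, 0.04)
```

In reality a current-limiting fuse also caps the peak fault current before it reaches its prospective value, so its advantage is larger than this constant-current approximation suggests.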
In a multi-phase power circuit, if only one fuse opens, the remaining phases will have higher than normal currents, and unbalanced voltages, with possible damage to motors. Fuses only sense overcurrent, or to a degree, over-temperature, and cannot usually be used independently with protective relaying to provide more advanced protective functions, for example, ground fault detection. Some manufacturers of medium-voltage distribution fuses combine the overcurrent protection characteristics of the fusible element with the flexibility of relay protection by adding a pyrotechnic device to the fuse operated by external protective relays. For domestic applications, miniature circuit breakers (MCBs) are widely used as an alternative to fuses. Their rated current depends on the load current of the equipment to be protected and the ambient operational temperature. They are available in the following ratings: 6A, 10A, 16A, 20A, 25A, 32A, 45A, 50A, 63A, 80A, 100A, 125A. Fuse boxes: United Kingdom In the UK, older electrical consumer units (also called fuse boxes) are fitted either with semi-enclosed (rewirable) fuses (BS 3036) or cartridge fuses (BS 1361). (Fuse wire is commonly supplied to consumers as short lengths of 5 A-, 15 A- and 30 A-rated wire wound on a piece of cardboard.) Modern consumer units usually contain miniature circuit breakers (MCBs) instead of fuses, though cartridge fuses are sometimes still used, as in some applications MCBs are prone to nuisance tripping. Fuse boxes: Renewable fuses (rewirable or cartridge) allow user replacement, but this can be hazardous as it is easy to put a higher-rated or double fuse element (link or wire) into the holder (overfusing), or simply to fit copper wire or even a totally different type of conducting object (coins, hairpins, paper clips, nails, etc.) to the existing carrier. One form of fuse box abuse was to put a penny in the socket, which defeated overcurrent protection and resulted in a dangerous condition. 
Such tampering will not be visible without full inspection of the fuse. Fuse wire was never used in North America for this reason, although renewable fuses continue to be made for distribution boards. Fuse boxes: UK fuse boxes and rewirable fuses The Wylex standard consumer unit was very popular in the United Kingdom until the wiring regulations started demanding residual-current devices (RCDs) for sockets that could feasibly supply equipment outside the equipotential zone. The design does not allow for fitting of RCDs or RCBOs. Some Wylex standard models were made with an RCD instead of the main switch, but (for consumer units supplying the entire installation) this is no longer compliant with the wiring regulations as alarm systems should not be RCD-protected. There are two styles of fuse base that can be screwed into these units: one designed for rewirable fusewire carriers and one designed for cartridge fuse carriers. Over the years MCBs have been made for both styles of base. In both cases, higher rated carriers had wider pins, so a carrier couldn't be changed for a higher rated one without also changing the base. Cartridge fuse carriers are also now available for DIN-rail enclosures. Fuse boxes: North America In North America, fuses were used in buildings wired before 1960. These Edison base fuses would screw into a fuse socket similar to Edison-base incandescent lamps. Ratings were 5, 10, 15, 20, 25, and 30 amperes. To prevent installation of fuses with an excessive current rating, later fuse boxes included rejection features in the fuse-holder socket, commonly known as Rejection Base (Type S fuses) which have smaller diameters that vary depending on the rating of the fuse. This means that fuses can only be replaced by the preset (Type S) fuse rating. This is a North American, tri-national standard (UL 4248-11; CAN/CSA-C22.2 NO. 4248.11-07 (R2012); and, NMX-J-009/4248/11-ANCE). 
Existing Edison fuse boards can easily be converted to only accept Rejection Base (Type S) fuses, by screwing-in a tamper-proof adapter. This adapter screws into the existing Edison fuse holder, and has a smaller diameter threaded hole to accept the designated Type S rated fuse. Fuse boxes: Some companies manufacture resettable miniature thermal circuit breakers, which screw into a fuse socket. Some installations use these Edison-base circuit breakers. However, any such breaker sold today has one flaw: if it is installed in a circuit-breaker box with a door, the closed door may hold down the breaker's reset button. While in this state, the breaker is effectively useless: it does not provide any overcurrent protection. In the 1950s, fuses in new residential or industrial construction for branch circuit protection were superseded by low voltage circuit breakers. Fuse boxes: Fuses are widely used for protection of electric motor circuits; for small overloads, the motor protection circuit will open the controlling contactor automatically, and the fuse will only operate for short circuits or extreme overload. Coordination of fuses in series: Where several fuses are connected in series at the various levels of a power distribution system, it is desirable to blow (clear) only the fuse (or other overcurrent device) electrically closest to the fault. This process is called "coordination" or "discrimination" and may require the time-current characteristics of two fuses to be plotted on a common current basis. Fuses are selected so that the minor branch fuse disconnects its circuit well before the supplying (major) fuse starts to melt. In this way, only the faulty circuit is interrupted with minimal disturbance to other circuits fed by a common supplying fuse. 
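A minimal sketch of such a discrimination check, comparing two hypothetical time-current tables on a common current basis (all values assumed for illustration):

```python
# Hypothetical time-current points (multiple of branch rating -> seconds).
branch_clear = {2: 1.0, 5: 0.05, 10: 0.005}   # minor (branch) fuse
supply_melt = {2: 30.0, 5: 2.0, 10: 0.2}      # major (supply) fuse

def discriminates(branch, supply, margin=2.0):
    """True if, at every tabulated current, the supply fuse's melting
    time exceeds the branch fuse's clearing time by `margin` or more."""
    return all(supply[i] >= margin * branch[i] for i in branch)

ok = discriminates(branch_clear, supply_melt)  # True for these tables
```

Real coordination studies use full melting and clearing curves (and I2t values at high currents) rather than a few tabulated points, but the pass/fail logic is the same: the downstream device must finish clearing before the upstream one begins to melt.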
Coordination of fuses in series: Where the fuses in a system are of similar types, simple rule-of-thumb ratios between ratings of the fuse closest to the load and the next fuse towards the source can be used. Other circuit protectors: Resettable fuses So-called self-resetting fuses use a thermoplastic conductive element known as a polymeric positive temperature coefficient (PPTC) thermistor that impedes the circuit during an overcurrent condition (by increasing device resistance). The PPTC thermistor is self-resetting in that when current is removed, the device will cool and revert to low resistance. These devices are often used in aerospace/nuclear applications where replacement is difficult, or on a computer motherboard so that a shorted mouse or keyboard does not cause motherboard damage. Other circuit protectors: Thermal fuses A thermal fuse is often found in consumer equipment such as coffee makers, hair dryers or transformers powering small consumer electronics devices. They contain a fusible, temperature-sensitive composition which holds a spring contact mechanism normally closed. When the surrounding temperature gets too high, the composition melts and allows the spring contact mechanism to break the circuit. The device can be used to prevent a fire in a hair dryer for example, by cutting off the power supply to the heater elements when the air flow is interrupted (e.g., the blower motor stops or the air intake becomes accidentally blocked). Thermal fuses are a 'one shot', non-resettable device which must be replaced once they have been activated (blown). Other circuit protectors: Cable limiter A cable limiter is similar to a fuse but is intended only for protection of low voltage power cables. It is used, for example, in networks where multiple cables may be used in parallel. It is not intended to provide overload protection, but instead protects a cable that is exposed to a short circuit. 
The characteristics of the limiter are matched to the size of cable so that the limiter clears a fault before the cable insulation is damaged. Unicode symbol: The Unicode character for the fuse's schematic symbol, found in the Miscellaneous Technical block, is U+23DB (⏛).
**Sensorimotor rhythm** Sensorimotor rhythm: The sensorimotor rhythm (SMR) is a brain wave. It is an oscillatory idle rhythm of synchronized electric brain activity. It appears in spindles in recordings of EEG, MEG, and ECoG over the sensorimotor cortex. For most individuals, the frequency of the SMR is in the range of 13 to 15 Hz. Meaning: The meaning of SMR is not fully understood. Phenomenologically, a person produces a stronger SMR amplitude when the corresponding sensorimotor areas are idle, e.g. during states of immobility. SMR typically decreases in amplitude when the corresponding sensory or motor areas are activated, e.g. during motor tasks and even during motor imagery. Conceptually, SMR is sometimes mixed up with alpha waves of occipital origin, the strongest source of neural signals in the EEG. One reason might be that, without appropriate spatial filtering, the SMR is very difficult to detect, because it is usually flooded by the stronger occipital alpha waves. The feline SMR has been noted as being analogous to the human mu rhythm. Relevance in research: Neurofeedback Neurofeedback training can be used to gain control over the SMR activity. Neurofeedback practitioners believe that this feedback enables the subject to learn the regulation of their own SMR. People with learning difficulties, ADHD, epilepsy, and autism may benefit from an increase in SMR activity via neurofeedback. Furthermore, in the sport domain, SMR neurofeedback training has been found useful for enhancing golf putting performance. In the field of Brain–Computer Interfaces (BCI), the deliberate modification of the SMR amplitude during motor imagery can be used to control external applications.
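One common way to quantify SMR amplitude in a recording is band power in the 13–15 Hz range. A minimal single-channel sketch on a synthetic signal (the 250 Hz sampling rate and the signal mix are assumptions; real pipelines add the spatial filtering noted above to separate SMR from occipital alpha):

```python
import numpy as np

# Synthetic channel: a 14 Hz "SMR" component plus a stronger 10 Hz
# "occipital alpha" component, illustrating why alpha can flood the
# spectrum of an unfiltered channel.
fs = 250                          # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)       # 4 s of data
sig = 1.0 * np.sin(2 * np.pi * 14 * t) + 2.0 * np.sin(2 * np.pi * 10 * t)

# Plain FFT periodogram.
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size

smr_power = psd[(freqs >= 13) & (freqs <= 15)].sum()
alpha_power = psd[(freqs >= 8) & (freqs <= 12)].sum()
# Alpha dominates the broadband signal, but limiting the band to
# 13-15 Hz still isolates the SMR component in this toy example.
```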
**Dust bathing** Dust bathing: Dust bathing (also called sand bathing) is an animal behavior characterized by rolling or moving around in dust, dry earth or sand, with the likely purpose of removing parasites from fur, feathers or skin. Dust bathing is a maintenance behavior performed by a wide range of mammalian and avian species. For some animals, dust baths are necessary to maintain healthy feathers, skin, or fur, similar to bathing in water or wallowing in mud. In some mammals, dust bathing may be a way of transmitting chemical signals (or pheromones) to the ground which marks an individual's territory. Birds: Birds crouch close to the ground while taking a dust bath, vigorously wriggling their bodies and flapping their wings. This disperses loose substrate into the air. The birds spread one or both wings which allows the falling substrate to fall between the feathers and reach the skin. The dust bath is often followed by thorough shaking to further ruffle the feathers which may be accompanied with preening using the bill. Birds: The California quail is a highly sociable bird; one of their daily communal activities is a dust bath. A group of quail will select an area where the ground has been freshly turned or is soft. Using their underbellies, they burrow downward into the soil about 2–5 cm (1–2 in). They then wriggle about in the indentations, flapping their wings and ruffling their feathers, causing dust to rise in the air. They seem to prefer sunny places in which to create these dust baths. An ornithologist is able to detect the presence of quail in an area by spotting the circular indentations left behind in the soft dirt, some 7–15 cm (3–6 in) in diameter. Birds: Birds without a uropygial gland (e.g., the emu, kiwi, ostrich and bustard) rely on dust bathing to keep their feathers healthy and dry. Domestic chicken: Dust bathing has been extensively studied in the domestic hen. 
In normal dust bathing, the hen initially scratches and bill-rakes at the ground, then erects her feathers and squats. Once lying down, the behavior contains four main elements: vertical wing-shaking, head rubbing, bill-raking and scratching with one leg. The dust collects between the feathers and is then shaken off, which may reduce the amount of feather lipids and so help the plumage maintain good insulating capacity, and may help control ectoparasites. Domestic chicken: Preferences for substrate Hens exhibit preferences for dust bathing substrate. When given a choice between wood shavings, lignocellulose (soft wood fibre, pelleted), Astroturf mat without substrate, or food particles, the time spent dust bathing and number of dust baths were higher in lignocellulose compared with wood shavings, food particles, and Astroturf. The average duration of a single dust bath was longer in food particles compared with lignocellulose and wood shavings. Most vertical wing shakes and scratching bouts within a single dust bath were observed in lignocellulose. Bill raking occurred more frequently in wood shavings and lignocellulose in comparison to the other substrates. No differences in the relative durations of behavioral patterns within a single dust bath were found. In contrast, other research shows that straw or wood shavings were no more attractive than feathers as a substrate for dust bathing. Domestic chicken: Motivation Dust bathing is motivated by complex interactions between internal factors which build up over time, peripheral factors relating to the skin and feathers, and external factors, such as the sight of a dusty substrate. Internal factors The tendency to dust bathe fluctuates according to time of day, with more dust bathing occurring in the middle of the day, which suggests some type of endogenous circadian rhythm of motivation. 
If birds are denied the opportunity to dust bathe, the tendency to dust bathe increases with time, suggesting a Lorenzian build-up of motivation. Domestic chicken: Peripheral factors Peripheral factors seem relatively unimportant in controlling dust bathing. Deprivation of dust bathing results in an increase in lipids on the feathers and a subsequent increase in dust bathing activity when this is allowed. However, although it has been speculated that the function of dust bathing is probably removal of excess lipids on the feathers, lipid accumulation as a major cause of dust bathing has not been proven. A 1991 experiment by Van Liere, et al. of the Wageningen Agricultural University of the Netherlands could only increase the duration of dust bathing bouts marginally by spreading lipids, equivalent to 1–2 months' accumulation, on birds' feathers. Moreover, removal of the oil gland in chicks, which eliminated the main source of lipids, had no effect on subsequent dust bathing. It therefore seems that the main effects of deprivation of dust bathing in hens act through a central mechanism and not a peripheral one. Domestic chicken: External factors Environmental temperature is an important external factor; the frequency of dust bathing is greater at 22 °C (72 °F) than at 10 °C (50 °F). Addition of supplementary visible light also increases components of dust bathing, and when hens are individually housed, the presence of a group of hens dust bathing in an adjoining pen with a dust bath increased dust bathing compared with the amount occurring when the hens were absent from the pen, i.e. there is a strong influence of social facilitation. Wrens and House Sparrows frequently follow a water bath with a dust bath (one reason to suspect an anti-parasite function for dusting). Overall, the amount of time and effort birds put into bathing and dusting indicates how critical feather maintenance may be. Keeping feathers functional requires constant care. 
Domestic chicken: Sham dustbathing Battery cages for domestic egg-laying hens usually have no dust bathing substrate. This is considered to be a welfare concern and, as a consequence, dust bathing has been closely studied in domestic egg-laying hens. In the absence of substrate in cages, hens often perform sham dust bathing, a behavior during which the birds perform all the elements of normal dust bathing, but in the complete absence of any substrate. Mammals: Many mammals roll in sand or dirt, presumably to keep parasites away or to help dry themselves after exercise or becoming wet. A sand roll, which is a stall or yard covered with deep sand, is traditionally included as part of stable complexes for use by racehorses after exercise. Dust bathing has been suggested to have a communicatory function in several mammals such as the common degu (Octodon degus), the long-eared jerboa (Euchoreutes naso), and possibly in Belding's ground squirrel, as they leave a "pungent" odor in the dust bathing areas. It has been suggested that wallowing (a behavior similar to dust bathing) may serve functions such as thermoregulation, providing a sunscreen, ectoparasite control and scent-marking. Mammals that perform dust bathing include: Bison Cape ground squirrel Chinchilla Domestic cat Domestic dog Degu Elephant Gerbil Hamster Horse Jerboa Kangaroo rat Llama Pig Prairie dog
**Urethroplasty** Urethroplasty: Urethroplasty is the surgical repair of an injury or defect within the walls of the urethra. Trauma, iatrogenic injury and infections are the most common causes of urethral injury/defect requiring repair. Urethroplasty is regarded as the gold standard treatment for urethral strictures and offers better outcomes in terms of recurrence rates than dilatations and urethrotomies. It is probably the only useful modality of treatment for long and complex strictures, though recurrence rates are higher for this difficult treatment group. There are four commonly used types of urethroplasty performed: anastomotic, buccal mucosal onlay graft, scrotal or penile island flap, and Johansen's urethroplasty. With an average operating room time of between three and eight hours, urethroplasty is not considered a minor operation. Patients who undergo a shorter-duration procedure may have the convenience of returning home that same day (between 20% and 30% of urethroplasty patients). Hospital stays of two or three days' duration are the average. More complex procedures may require a hospitalization of seven to ten days. Phases of the operation: These parts of the operation are common to all specific operations. Phases of the operation: Preoperative Ideally, the patient will have undergone urethrography to visualize the positioning and length of the defect. 
The normal pre-surgical testing/screening (per the policies of the admitting hospital, anesthesiologist, and urological surgeon) will be performed, and the patient will be advised to ingest nothing by mouth, "NPO", for a predetermined period of time (usually 8 to 12 hours) prior to the appointed time. Upon arrival to the preoperative admitting area, the patient will be instructed to don a surgical gown and be placed into a receiving bed, where monitoring of vital signs, initiation of a normal saline IV drip, and pre-surgical medication including IV antibiotics and a benzodiazepine-class sedative, usually diazepam or midazolam, will be started/administered. Phases of the operation: Operative The patient will be transported to the operating room and the procedures for induction of the type of anesthesia chosen by both the patient and medical staff will be started. The subject area will be prepped by shaving, application of an antiseptic wash (usually povidone iodine or chlorhexidine gluconate - if sensitive or allergic to the former), surgically draped and placed in the Lloyd-Davies position. Note: throughout the duration of the procedure, the patient's legs will be massaged and manipulated at predetermined intervals in an attempt to prevent compartment syndrome, a complication from circulatory and nerve compression resultant from the lithotomy positioning. Some hospitals utilize the Allen Medical Stirrup System, which automatically inflates a compression sleeve applied to the thigh-portion of the stirrup device at predetermined intervals. This system is designed to prevent compartment syndrome in surgeries lasting more than six hours. At this time the surgical team will perform testing to determine if the anesthesia has taken effect. 
Upon satisfactory finding(s), a suprapubic catheter (with drainage system) will be inserted into the urinary bladder (to create urinary diversion during the procedure), and the chosen procedure will then be initiated. Note: The surgical procedures listed below may have small variances in the methodology used from surgeon to surgeon. Consider the following as a generalized description of each individual procedure, although every precaution was taken to ensure the accuracy of the information. Types of operations: The choice of procedure is dependent on factors including: physical condition of the patient; overall condition of the remainder of the urethra (not affected by the stricture); the length of the defect (best determined by urethrography); multiple or misaligned strictures; anatomical positioning of the defect with regard to the prostate gland, urinary sphincter, and ejaculatory duct; position of the most patent area of the urethral wall (necessary for determination of the location of the onlay/graft site, most often dorsal or ventral); complications and scarring from previous surgery(ies), stent explantation (if applicable), and the condition of the urethral wall; availability of autograft tissue from the buccal cavity (buccal mucosa) (primary selection); availability of autograft tissue from the penis and scrotum (secondary selection); and the skill level and training of the surgeon performing the procedure. Note: in more complex cases, more than one type of procedure may be performed, especially where longer strictures exist. Types of operations: Anastomotic urethroplasty In this single-stage procedure the urethra will be visualized (in the area of the defect), and the incision will be started at its mid-line (usually) using a bovie knife to dissect the dermal and sub-dermal layers until the associated musculature, corpus cavernosum, corpus spongiosum, and ventral urethral aspects are exposed. 
Particular care is used during the dissection to prevent damage to nerves and blood vessels (which could result in erectile dysfunction or loss of tactile sensation of the penis). The area of the defect is evaluated and marked both mid-line (laterally), and at the distal and proximal borders (transversely). Marked/labeled positioning sutures are secured (one, each) at the proximal and distal ends of the mid-line area of urethra closest to the bisection points. Using an index finger, the urethra is gently separated from the cavernosum, and a specially designed retractor is then placed behind the urethra (to protect vulnerable areas from damage during the transecting and removal of the urethral defect). The now patent ends of the urethra are prepared using a technique called "spatulation", which (essentially) allows for the end-to-end anastomosis to adjust to the differing diameters of the urethra. A silicone catheter is inserted through the penis and (temporary) distal-urethral end, and threaded into the (temporary) proximal-urethral end, leaving a wide loop for the surgeon to have access to the dorsal urethral aspect for micro-suturing, and start of the anastomosis. The dorsal one-third of the urethral anastomosis is begun, completed, and the catheter is retracted slightly to allow for its positioning within the pre-anastomosed urethra. At this time, using micro surgical technique, the anastomosis is completed and fibrin glue is applied to the anastomotic suture line to help prevent leakage and fistula formation. The silicone guide catheter will then be withdrawn from the penis and (a) replaced by an appropriately sized Foley catheter (and urinary drainage system), and the incision closed (layer by layer). 
Some surgeons will inject a local anesthetic such as 2% plain lidocaine or 0.5% bupivacaine into the areas to allow the patient an additional period of relief from discomfort. Micro-Doppler circulatory measurement of the penile vasculature is performed at way points throughout the procedure, and a final assessment is taken and recorded. The incision is inspected and dressed, and the patient is discharged to recovery. (a) Some surgeons prefer the use of a suprapubic catheter, as they believe insertion of an in-dwelling urethral catheter may damage the anastomosed area. Expected average success rate: The success rate for this procedure is above 95%; anastomotic urethroplasty is considered the "gold standard" of surgical repair options. It is generally used when strictures are less than 2 cm in length; however, some surgeons have had success with defects approaching 3 cm in length. Types of operations: Buccal mucosal onlay graft of the ventral urethra In this single-stage procedure the urethra will be visualized (in the area of the defect), and the incision will be started at its mid-line (usually) using a bovie knife to dissect the dermal and sub-dermal layers until the associated musculature, corpus cavernosum, corpus spongiosum, and ventral urethral aspects are exposed. (a) Particular care is used during the dissection to prevent damage to nerves and blood vessels (which could result in erectile dysfunction or loss of tactile sensation of the penis). The area of the defect is evaluated and marked laterally mid-line, and (marked) positioning sutures are positioned (one, each) at the proximal and distal ends of the area of urethra closest to the border of the defective area. 
Simultaneously, a urological surgeon who is specifically trained in buccal mucosal harvesting techniques will begin harvest and repair of a section of the inside cheek of the patient, corresponding to the dimension/shape calculated and requested by the surgeon performing the urethral aspect of the procedure. When available, an oral/maxillofacial surgeon or ENT specialist will harvest the buccal mucosa in accordance with those requested specifications. Upon retrieval, the buccal graft is presented to the urethral surgeon, who will then prepare the graft by trimming and removal of extraneous tissue. The surgeon will create an incised opening laterally between the known outer borders of the defect, retract the incised opening to the desired diameter, and position the graft to cover the incision. This will form a tunnel, or diversion through the stricture, which is 10 mm (optimally) in estimated diameter, to allow for the flow of urine. Using micro surgical techniques, the buccal graft will be sutured in place and fibrin glue applied to the suture line to prevent leakage and formation of a fistula. At this time an appropriately sized (a) Foley catheter will be inserted through the repair and into the bladder (and connected to a urinary drainage system), and the incision closed (layer by layer). Some surgeons will inject a local anesthetic such as 2% plain lidocaine or 0.5% bupivacaine into the areas to allow the patient an additional period of relief from discomfort. Micro-Doppler circulatory measurement of the penile vasculature is performed at way points throughout the procedure, and a final assessment is taken and recorded. The incision is inspected and dressed, and the patient is discharged to recovery. Types of operations: (a) At this time, some surgeons prefer to insert a safety guide (as used in urethrotomy) from the urinary meatus, through the stricture, and into the bladder for purposes of maintaining positioning. 
(b) Some surgeons prefer the use of a suprapubic catheter, as they believe insertion of an in-dwelling urethral catheter may damage the surgically repaired area. Types of operations: Expected average success rate: The success rate for this procedure is between 87% and 98%; buccal mucosal onlay urethroplasty is considered the best of the repair options for strictures greater than 2 cm in length. Within recent years, surgeons have been applying the onlay to the dorsal aspect of the urethra with great success. Buccal mucosa best approximates the tissue which composes the urethra. Types of operations: Scrotal or penile island flap (graft) of the ventral urethra In this single-stage procedure the urethra will be visualized (in the area of the defect), and the incision will be started at its mid-line (usually) using a bovie knife to dissect the dermal and sub-dermal layers until the associated musculature, corpus cavernosum, corpus spongiosum, and ventral urethral aspects are exposed. (a) Particular care is used during the dissection to prevent damage to nerves and blood vessels (which could result in erectile dysfunction or loss of tactile sensation of the penis). The area of the defect is evaluated and marked laterally mid-line, and (marked) positioning sutures are positioned (one, each) at the proximal and distal ends of the area of urethra closest to the border of the defective area. The surgeon will then harvest a section of tissue from the scrotum or penile foreskin (or what remains in circumcised males) corresponding to the previously determined dimension/shape. Upon retrieval, the graft is prepared for attachment by trimming and removal of extraneous tissue. The surgeon will create an incised opening laterally between the known outer borders of the defect, retract the incised opening to the desired diameter, and position the graft to cover the incision. 
This will form a tunnel, or diversion through the stricture, which is 10 mm (optimally) in estimated diameter, to allow for the flow of urine. Using micro surgical techniques, the scrotal graft or penile island flap will be sutured in place and fibrin glue applied to the suture line to help prevent leakage and formation of a fistula. At this time an appropriately sized (b) Foley catheter will be inserted through the repair and into the bladder (and connected to a urinary drainage system), and the incision closed (layer by layer). Some surgeons will inject a local anesthetic such as 2% plain lidocaine or 0.5% bupivacaine into the areas to allow the patient an additional period of relief from discomfort. Micro-Doppler circulatory measurement of the penile vasculature is performed at way points throughout the procedure, and a final assessment is taken and recorded. The incision is inspected and dressed, and the patient is discharged to recovery. Types of operations: (a) At this time, some surgeons prefer to insert a safety guide (as used in urethrotomy) from the urinary meatus, through the stricture, and into the bladder for purposes of maintaining positioning. Types of operations: (b) Some surgeons prefer the use of a suprapubic catheter, as they believe insertion of an in-dwelling urethral catheter may damage the surgically repaired area. Expected average success rate: The success rate for this procedure is between 70% and 85%; scrotal or penile island flap urethroplasty is considered the least attractive of the repair options for urethral defects. It is, however, the standard procedure used in the repair of strictures greater than 4 cm in length. As with the buccal mucosal onlay, surgeons have been performing the dorsal aspect procedure since the late 1990s, with an estimated success rate approaching 90%. 
Types of operations: Johansen's urethroplasty The Johansen's procedure, sometimes referred to as "Johanson's urethroplasty", is a two-stage procedure which was developed during the 1950s and 1960s by Swedish surgeon Dr. Bengt Johansen, and was originally designed as a surgical repair for hypospadias. Over the years, the surgery has evolved into a fairly complex operation whereby the damaged area of the urethra is opened ventrally and left open as a buried skin strip, with a deep diversion created from scrotal or penile skin covering the area of the repair. An appropriately sized in-dwelling catheter is inserted, and the repaired area is temporarily closed (sutured in some locations, with packing and dressings in others) until the newly created diversion forms completely, usually within six months. Upon the confirmation of completed healing, the catheter is withdrawn and the surgical site closed permanently. There are numerous methods attributed to the name "Johansen's". Most severe urethral trauma is reconstructed using the Johansen's urethroplastic procedure. It is also the procedure normally utilized in the repair of damage caused by lichen sclerosus, also referred to as balanitis xerotica obliterans. The Johansen's procedure is used in the most difficult of traumatic reconstruction cases. Because of the variations of practice within this procedure, an estimated success rate is not available. Post-procedural care: Constant monitoring of vital signs including pulse oximetry, cardiac monitoring (ECG), body temperature and blood pressure are carried out by the anesthesia practitioner until the patient is discharged post-operatively to the post-surgical recovery unit. 
After sufficient awakening from the anesthetic agent has taken place, and if the patient is a candidate for same-day discharge, he (and the person responsible for his transport home) will be instructed in the care and emptying of the catheter and its drainage system, cleansing of the involved area(s) and methods/intervals for dressing change, monitoring for signs of infection and for signs of catheter blockage. The patient will be given prescriptions for an antibiotic or anti-infective agent, a urinary anti-spasmodic, and a mild to moderate pain medication (no more than a few days' worth of pain is expected). The patient will be instructed to maximize bed rest for the first two days after the operation, be limited to absolutely no lifting, and instructed to consume a high fiber diet and use a stool softener such as polyethylene glycol to help in avoiding straining during evacuation. After days 1 and 2, the patient will be instructed to sensibly increase physical activity, and avoid becoming sedentary. Adequate hydration is essential during the post-recovery phase of the procedure. In accordance with the preference of the surgeon, a retrograde urethrogram will be scheduled to coincide with the anticipated removal date of the suprapubic or Foley catheter (usually 7 to 14 days post-procedure; however, some surgeons will attempt removal in 3 to 5 days). 
At 10 days post procedure, the suture line(s) will be evaluated, and the sutures removed if applicable (in many cases, the surgeon will utilize absorbable sutures, which do not require removal). The length of hospitalization is usually determined by: the status/condition of the patient; post-recovery after-effects of the anesthesia/sedation/spinal anesthesia utilized during the procedure; anticipated post-surgical care, per care plan (dressing changes, packing changes, and monitoring of any surgical drains, if used); monitoring of the newly established urethral cystostomy (Johansen's urethroplasty), if applicable; monitoring of the suprapubic catheter or Foley catheter for signs of infection and proper urine output, if applicable; titration of palliative and anti-spasmodic medication(s), if applicable; and post-surgical complications, if any. Possible post surgical complications: Note: Urethroplasty is generally well tolerated with a high rate of success; serious complications occur in fewer than ten percent of patients, though complications, particularly recurrences, are more common in long and complex strictures. Possible post surgical complications: recurrence of the stricture; infection; urinary incontinence (symptoms of incontinence often improve over time with strengthening exercises); urinary retention requiring intermittent catheterization to completely empty the urinary bladder; erectile dysfunction; loss of penile sensation, decreased tactile sensation of the penile shaft and corona; retrograde ejaculation, changes in ejaculation, and decrease in intensity of orgasm; referred pain; urinary fistula; urinary urgency; urine spraying; hematoma; external bleeding (from the suture line(s)); bleeding from the internal suture lines (seen as bloody discharge from the urethra). Research: Urethrotomy vs. urethroplasty Comparing the two surgical procedures, a UK trial found that both urethrotomy and urethroplasty are effective in treating urethral stricture in the bulbar region. 
At the same time, the more invasive urethroplasty had longer-lasting benefit and was associated with fewer re-interventions. The results were integrated into the new UK guidelines on the treatment of urethral narrowing issued by the British Association of Urological Surgeons.
**Frozen banana** Frozen banana: Frozen bananas are desserts made by placing a banana upon a stick, freezing it, and usually dipping it in melted chocolate or yogurt. They may be covered with toppings such as chopped nuts, sprinkles, sugar and crushed cookies. History: Don Phillips, also known as the frozen banana king, opened the first frozen banana stand - The Original Frozen Banana - on Balboa Peninsula, located in Orange County, California, circa 1940. In 1963, Bob Teller, who moved to the area with plans to manufacture car seat belts, instead opened a frozen banana stand. Teller had sold frozen bananas at the Arizona State Fair, and opened his stand - The Original Banana Rolla Rama - right across the street from Phillips' original shop. In the winter months, Teller hauled the trailer to various county fairs. When Phillips died a few years later, Teller bought the business and used it to expand to other locations. Frozen bananas are a tradition on Balboa Island to this day. Bob Fitch created the Frozen Banana on Balboa Island in the 1950s, possibly earlier. He purchased the spot where the Sugar and Spice sits today from Don Phillips and renamed it the Sugar and Spice. Bob Fitch sold the building around 1978-1979 to Mrs. Banto for $110,000. Cultural references: In the Fox television series Arrested Development, set in Orange County, the Bluth Company owns and runs a frozen banana stand.
**Building maintenance unit** Building maintenance unit: A building maintenance unit (BMU) is an automatic, remote-controlled, or mechanical device, usually suspended from the roof, which moves systematically over some surface of a structure while carrying human window washers or mechanical robots to maintain or clean the covered surfaces. It can also be used on interior surfaces such as large ceilings (e.g. in stadiums or train stations) or atrium walls.
**Logical relations** Logical relations: Logical relations are a proof method employed in programming language semantics to show that two denotational semantics are equivalent. Logical relations: To describe the process, let us denote the two semantics by [[−]]i, where i=1,2. For each type A, there is a particular associated relation ∼ between [[A]]1 and [[A]]2. This relation is defined such that for each program phrase M, the two denotations are related: [[M]]1 ∼ [[M]]2. Another property of this relation is that related denotations for ground types are equivalent in some sense, usually equal. The conclusion is then that both denotations exhibit equivalent behavior on ground terms, hence are equivalent.
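The idea can be sketched on a small hypothetical example (the mini-language, the two denotations, and the names `sem1`, `sem2`, `related_nat`, `related_fun` are illustrative assumptions, not taken from the article or any library): one semantics denotes natural-number expressions as Python ints, the other in unary notation, and the relation ∼ at the ground type Nat holds exactly when both denote the same number.

```python
# A minimal sketch of a logical relation over a hypothetical mini-language.
# Terms are nested tuples: ("lit", n) and ("add", t1, t2).

def sem1(term):
    """[[-]]1: denote naturals as Python ints."""
    tag = term[0]
    if tag == "lit":
        return term[1]
    if tag == "add":
        return sem1(term[1]) + sem1(term[2])
    raise ValueError(f"unknown term: {term!r}")

def sem2(term):
    """[[-]]2: denote naturals in unary, as strings of '|'."""
    tag = term[0]
    if tag == "lit":
        return "|" * term[1]
    if tag == "add":
        return sem2(term[1]) + sem2(term[2])
    raise ValueError(f"unknown term: {term!r}")

def related_nat(d1, d2):
    """The relation ~ at ground type Nat: an int is related to a
    unary string when they denote the same number."""
    return d1 == len(d2)

def related_fun(f, g, rel_arg, rel_res, samples):
    """At a function type A -> B, f ~ g holds when related arguments
    map to related results.  (Checked here only on sample pairs; the
    real definition quantifies over all related arguments.)"""
    return all(rel_res(f(a1), g(a2))
               for (a1, a2) in samples if rel_arg(a1, a2))

# For a program phrase M, the two denotations are related: [[M]]1 ~ [[M]]2.
prog = ("add", ("lit", 2), ("add", ("lit", 1), ("lit", 3)))
print(related_nat(sem1(prog), sem2(prog)))  # prints: True
```

Because relatedness at the ground type forces both denotations to describe the same number, establishing [[M]]1 ∼ [[M]]2 for every phrase M yields the equivalence on ground terms described above; the type-directed definition at function types is what makes the relation "logical".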
**Gap wedge** Gap wedge: In golf, a gap wedge, also known as an approach wedge, is a wedge used to hit a shot with higher and shorter trajectory than a pitching wedge and lower and longer trajectory than a sand wedge. The name derives from the club's design to fill the "gap" between sand and pitching wedges. History: Over time the loft angle on irons in matched sets has been reduced for multiple reasons. Manufacturers, always wanting to advertise longer distances than their competitors, sometimes "cheat" by de-lofting their iron sets by a degree or two compared to their competitors' sets, producing 2–5 yards of extra distance per degree of "strengthening". In addition, several significant advances in clubhead design, most notably the 1970s development of investment-cast "cavity-back" designs, and the 1990s introduction of clubfaces that increased backspin to improve "bite", resulted in clubs with higher launch angles and flight paths for the same loft angle than their predecessors. Clubmakers then compensated for this in both cases by reducing loft, to translate that higher flight path into greater distance. Currently, the pitching wedge of a typical matched iron set has a loft similar to a 9-iron from the 1980s, at about 46 degrees, and much stronger lofts are found in game improvement sets. However, sand wedges generally have not received this same reduction in loft, even as they were designed with similar weight-distribution and backspin-improving features. This is because the sand wedge is typically not used with distance in mind; its eponymous purpose requires the traditional 54–56° loft angle in order to dig into the soft sand surrounding the ball and lift it out. The sand wedge's nominal loft and "bounce" angles have not changed appreciably from Gene Sarazen's original concept based on the niblick. 
Cavity-backed, perimeter-weighted sets may de-loft this club by a degree or two compared to a forged set, but this is nowhere near the amount of loft reduction seen in the numbered irons. This leaves a "gap" in loft angle between the pitching and sand wedges of up to 10 degrees, causing a distance difference with a full swing of up to 30 yards, both of which are differences normally seen between irons two or more loft numbers apart (e.g., between a 7 and 9 iron) instead of "adjacent" lofts as the PW and SW traditionally are. History: As a result, some players who had upgraded to these newer de-lofted iron sets began carrying the pitching wedge of an older set, lofted around 50-52°, to "fill the gap". This additional wedge, with a full swing, falls between the average distances of a sand and pitching wedge, allowing the player to fine-tune their approach shot's distance without needing excessive variations in swing speed. Clubmakers, sensing an opportunity, began to offer a purpose-built wedge in this general loft range starting in the early to middle 1990s. These have become known colloquially as "gap wedges" due to their origin, despite various proprietary names applied to wedges in this class by their manufacturers. Design: Gap wedges are loosely defined, but typically have the loft between that of a pitching wedge and sand wedge, between 50 and 54 degrees. At the extremes there is redundancy with either the pitching wedge (typically 48°) or the sand wedge (typically 56°), however some players will "fine-tune" the lofts of these other wedges to their play style, leading to alternate loft choices for a gap wedge. Most players look for a separation of 4 degrees between clubs, and so with the standard pitching and sand wedge lofts, the complementary gap wedge would be 52°. Design: Within the range of lofts seen in gap wedges, the angle that the sole makes to the ground at address, also known as the club's "bounce angle", varies from 0° up to 12° or more. 
Lower lofts typically benefit from a lower bounce angle, suiting their use as effectively an "11-iron" for shots from firmer lies such as grass. Higher lofts, generally used from softer lies where the ball may have dug itself in more, require a higher bounce similar to the sand wedge to dig in and then lift back out of the ground. The most common 52° wedge is sold in a wide range of bounce angles; 8° is a common "medium bounce" choice, allowing the golfer to use the club in a variety of lies, from the fairway or rough to "fried egg" semi-embedded sand or mud situations. Design: The relation between actual loft and bounce can change based on how the player addresses the ball; the more forward the ball, and the more open the clubface, the higher the effective loft and bounce angles. Some clubmakers will vary the amount of bounce that the sole has from toe to heel, allowing the player to fine-tune the club to the specific situation by opening it. When square at address, such a club behaves more like a lower-bounce pitching wedge, while when opened, the club behaves more like a sand wedge (without the bounce becoming too high and making a "skulled" shot more likely, as it would with a constant bounce angle from toe to heel). A few manufacturers call attention to this by labeling the wedge "D" for "dual wedge", indicating it can be used as either a pitching or sand wedge (or anything in between). Design: There is little consistency in labeling gap wedges; most manufacturers simply label the wedge with its angle, optionally including additional information about the amount of bounce (sometimes the angle measure, more often a series of one to three dots indicating "low", "medium" and "high" bounce). Some manufacturers call it an "Approach", "Attack" or "All" wedge, labeling it in these cases with "A". 
The Karsten Manufacturing Company, maker of the Ping brand of golf clubs, favored the use of "U" for "utility wedge", but currently only uses this label on wedges sold in matched sets; most individual Ping wedges are currently labelled with their angle. It is actually uncommon to find a gap wedge labeled "G"; Adams Golf, Cobra, Mizuno, and Wilson are among the few manufacturers that do so. Controversy: The necessity of the gap wedge is contested by some golfers and clubfitters, who assert that this additional wedge would not be necessary if clubmakers had not de-lofted the clubs in the first place as a marketing move to attract amateurs looking for more distance from each number. In addition, many matched sets do not include a gap wedge, even though most clubmakers have included the pitching wedge since irons first began to be offered as matched numbered sets in the 1930s. The end result, critics claim, is that the 3- and 4-irons of a matched set have become just as hard to hit as the 1- and 2-irons of the 1970s, and with the average golfer carrying a set numbered between 4-iron and gap wedge, clubmakers might as well simply reduce all their labelled loft numbers by one, making the pitching wedge a 9-iron and the gap wedge a pitching wedge. Instead, most clubmakers continue to include the 3-iron and exclude the gap wedge from matched sets, which forces golfers to buy a 3-iron they are extremely unlikely to ever use, while not getting the much more important gap wedge and so having to buy it individually. Controversy: For their part, clubmakers contend that the modern golfer demands customization; wedges (including gap wedges) are available in many combinations of loft and bounce angle, allowing the player to choose exactly the combination they find most useful. Including a 52-degree mid-bounce wedge in a matched set may prove useless to a golfer who prefers a different loft or bounce than this standard offering.
They also contend that, adjusting for inflation ($1.00 in 1970 would have the same buying power as $6.13 in 2014), the price of a matched iron set has decreased over this time period, even as design and manufacturing advances have genuinely increased the distance and accuracy that the average golfer can expect compared to a club of an older design but similar launch angle. They also assert that these design advances have made modern long irons easier to hit well than those of older generations, so golfers should at least try these long irons before removing them from the bag. Lastly, some manufacturers do in fact include a gap wedge as part of a matched iron set, often removing the 3-iron to provide the same overall number of clubs in the set (and in response to the concern about including such a difficult-to-use club). Ping, for instance, offers its G-series iron sets in two loft ranges, 3-PW and 4-UW.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ladinin 1** Ladinin 1: Ladinin-1 is a protein that in humans is encoded by the LAD1 gene. The protein encoded by this gene may be an anchoring filament that is a component of basement membranes. It may contribute to the stability of the association of the epithelial layers with the underlying mesenchyme.
**Trichilemmal cyst** Trichilemmal cyst: A trichilemmal cyst (or pilar cyst) is a common cyst that forms from a hair follicle, most often on the scalp, and is smooth, mobile, and filled with keratin, a protein component found in hair, nails, skin, and horns. Trichilemmal cysts are clinically and histologically distinct from trichilemmal horns, hard tissue that is much rarer and not limited to the scalp. Rarely, these cysts may grow more extensively and form rapidly multiplying trichilemmal tumors, also called proliferating trichilemmal cysts, which are benign but may grow aggressively at the cyst site. Very rarely, trichilemmal cysts can become cancerous. Classification: Trichilemmal cysts may be classified as sebaceous cysts, although technically speaking they are not sebaceous. "True" sebaceous cysts, which originate from sebaceous glands and which contain sebum, are relatively rare and are known as steatocystoma simplex or, if multiple, as steatocystoma multiplex. Medical professionals have suggested that the term "sebaceous cyst" be avoided since it can be misleading. In practice, however, the term is still often used for epidermoid and pilar cysts. Pathogenesis: Trichilemmal cysts are derived from the outer root sheath of the hair follicle. Their origin is currently unknown, but they may be produced by budding from the external root sheath as a genetically determined structural aberration. They arise preferentially in areas of high hair follicle concentration, so 90% of cases occur on the scalp. They are solitary in 30% and multiple in 70% of cases. Histologically, they are lined by stratified squamous epithelium that lacks a granular cell layer and are filled with compact "wet" keratin. Areas consistent with proliferation can be found in some cysts. In rare cases, this leads to formation of a tumor, known as a proliferating trichilemmal cyst. The tumor is clinically benign, although it may display nuclear atypia, dyskeratotic cells, and mitotic figures.
These features can be misleading, and a diagnosis of squamous cell carcinoma may be mistakenly rendered. Treatment: Surgical excision is required to treat a trichilemmal cyst. The method of treatment varies depending on the physician's training. Most physicians perform the procedure under local anesthetic. Others prefer a more conservative approach. This involves the use of a small punch biopsy about one-fourth the diameter of the cyst. The punch biopsy is used to enter the cyst cavity. The contents of the cyst are emptied, leaving an empty sac. As the pilar cyst wall is the thickest and most durable of the many varieties of cysts, it can be grabbed with forceps and pulled out of the small incision. This method is best performed on cysts larger than a pea that have formed a thick enough wall to be easily identified after the sac is emptied. Small cysts have thin walls, so are easily fragmented on traction. This increases the likelihood of cyst recurrence. This method often results in only a small scar, and very little if any bleeding.
**North-south traffic** North-south traffic: In computer networking, north-south traffic is network traffic flowing into and out of a data center. Traffic: Based on the most commonly deployed network topology of systems within a data center, north-south traffic typically indicates data flow that either enters or leaves the data center from/to a system physically residing outside the data center, such as user to server. Southbound traffic is data entering the data center (through a firewall and/or other networking infrastructure). Data exiting the data center is northbound traffic, commonly routed through a firewall to Internet space. The other direction of traffic flow is east-west traffic which typically indicates data flow within a data center.
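The distinction described above can be sketched as a tiny flow classifier that checks whether each endpoint lies inside the data center's address space. This is an illustrative sketch only: the data-center prefixes, the function names, and the "transit" label for flows touching neither endpoint are assumptions made for the example, not part of any standard.

```python
from ipaddress import ip_address, ip_network

# Hypothetical data-center prefixes; a real deployment would use its own.
DATA_CENTER_NETS = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    """True if the address lies inside the data center's prefixes."""
    ip = ip_address(addr)
    return any(ip in net for net in DATA_CENTER_NETS)

def classify_flow(src: str, dst: str) -> str:
    """Label a flow by its direction relative to the data center boundary."""
    src_in, dst_in = is_internal(src), is_internal(dst)
    if src_in and dst_in:
        return "east-west"                    # stays inside the data center
    if dst_in:
        return "north-south (southbound)"     # entering the data center
    if src_in:
        return "north-south (northbound)"     # leaving the data center
    return "transit"                          # neither endpoint is internal

print(classify_flow("10.1.2.3", "10.4.5.6"))     # east-west
print(classify_flow("203.0.113.9", "10.1.2.3"))  # north-south (southbound)
```

In practice the boundary is enforced by firewalls and routing policy rather than by address prefixes alone, but the prefix check captures the basic server-to-server versus user-to-server distinction.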
**Medical device hijack** Medical device hijack: A medical device hijack (also called medjack) is a type of cyber attack that targets the medical devices of a hospital. It was covered extensively in the press in 2015 and in 2016. Medical device hijacking received additional attention in 2017, a function both of an increase in identified attacks globally and of research released early in the year. These attacks endanger patients by allowing hackers to alter the functionality of critical devices such as implants, exposing a patient's medical history, and potentially granting access to the prescription infrastructure of many institutions for illicit activities. MEDJACK.3 appears to have additional sophistication and is designed not to reveal itself as it searches for the older, more vulnerable operating systems found only embedded within medical devices. Further, it has the ability to hide from sandboxes and other defense tools until it is in a safe (non-VM) environment. Medical device hijack: There was considerable discussion and debate on this topic at the RSA 2017 event during a special session on MEDJACK.3. Debate ensued between various medical device suppliers, hospital executives in the audience, and some of the vendors over ownership of the financial responsibility to remediate the massive installed base of vulnerable medical device equipment. Further, notwithstanding this discussion, FDA guidance, while well intended, may not go far enough to remediate the problem. Mandatory legislation as part of a new national cyber security policy may be required to address the threat of medical device hijacking, other sophisticated attacker tools used in hospitals, and the new variants of ransomware which seem targeted at hospitals. Overview: In such a cyberattack the attacker places malware within the networks through a variety of methods (malware-laden website, targeted email, infected USB stick, socially engineered access, etc.)
and then the malware propagates within the network. Most of the time existing cyber defenses clear the attacker tools from standard servers and IT workstations (IT endpoints), but the cyber defense software cannot access the embedded processors within medical devices. Most of the embedded operating systems within medical devices run Microsoft Windows 7 or Windows XP, for which security support has ended, so they are relatively easy targets in which to establish attacker tools. Inside these medical devices, the cyber attacker now finds safe harbor in which to establish a backdoor (command and control). Since medical devices are FDA certified, hospital and cybersecurity team personnel cannot access the internal software without perhaps incurring legal liability, impacting the operation of the device, or violating the certification. From this protected position, once the medical devices are penetrated, the attacker is free to move laterally to discover targeted resources such as patient data, which is then quietly identified and exfiltrated. Overview: Organized crime targets healthcare networks in order to access and steal patient records. Impacted devices: Virtually any medical device can be impacted by this attack. In one of the earliest documented examples, testing identified malware tools in a blood gas analyzer, a magnetic resonance imaging (MRI) system, a computed tomography (CT) scanner, and X-ray machines. In 2016, case studies became available that showed attacker presence in the centralized PACS imaging systems, which are vital and important to hospital operations. In August 2011, representatives from IBM demonstrated at the annual BlackHat conference how an infected USB device can be used to identify the serial numbers of devices within close range and facilitate fatal dosage injections to patients with an insulin pump. Impacted institutions: This attack primarily centers on the largest 6,000 hospitals on a global basis.
Healthcare data has the highest value of any stolen identity data, and given the weakness of the security infrastructure within hospitals, this creates an accessible and highly valuable target for cyber thieves. Besides hospitals, this can impact large physician practices such as accountable care organizations (ACOs) and independent physician associations (IPAs), skilled nursing facilities (SNFs) both for acute care and long-term care, surgical centers, and diagnostic laboratories. Instances: There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks, Windows XP exploits, viruses, and data breaches of sensitive data stored on hospital servers. Instances: Community Health Systems, June 2014 In an official filing to the United States Securities and Exchange Commission, Community Health Systems declared that their network of 206 hospitals in 28 states was the target of a cyber-attack between April and June 2014. The breached data included sensitive personal information of 4.5 million patients, including social security numbers. The FBI determined that the attacks were facilitated by a group in China and issued a broad warning to the industry, advising companies to strengthen their network systems and follow legal protocols to help the FBI restrain future attacks. Instances: Medtronic, March 2019 In 2019 the FDA submitted an official warning concerning security vulnerabilities in devices produced by Medtronic, ranging from insulin pumps to various models of cardiac implants. The agency concluded that CareLink, the primary mechanism used for software updates in addition to monitoring patients and transferring data during implantation and follow-up visits, did not possess a satisfactory security protocol to prevent potential hackers from gaining access to these devices.
The FDA recommended that health care providers restrict software access to established facilities while unifying the digital infrastructure in order to maintain full control throughout the process. Scope: Various informal assessments have estimated that medical device hijacking currently impacts a majority of hospitals worldwide and remains undetected in the bulk of them. The technologies necessary to detect medical device hijacking, and the lateral movement of attackers from command and control within the targeted medical devices, were not installed in the great majority of hospitals as of February 2017. By one estimate, a hospital with 500 beds has roughly fifteen medical devices (usually internet of things (IoT) connected) per bed. That is in addition to centralized administration systems, the hospital diagnostic labs that utilize medical devices, EMR/EHR systems, and CT/MRI/X-ray centers within the hospital. Detection and remediation: These attacks are very hard to detect and even harder to remediate. Deception technology (the evolution and automation of honeypot or honey-grid networks) can trap or lure the attackers as they move laterally within the networks. The medical devices typically must have all of their software reloaded by the manufacturer. Hospital security staff are neither equipped nor permitted to access the internals of these FDA-approved devices. Devices can become reinfected very quickly, as it takes only one compromised medical device to potentially re-infect the rest in the hospital. Countermeasures: On 28 December 2016 the US Food and Drug Administration released recommendations, which are not legally enforceable, for how medical device manufacturers should maintain the security of Internet-connected devices.
The United States Government Accountability Office studied the issue and concluded that the FDA must become more proactive in minimizing security flaws by guiding manufacturers with specific design recommendations instead of exclusively focusing on protecting the networks that are utilized to collect and transfer data between medical devices. The following table provided in the report highlights the design aspects of medical implants and how they affect the overall security of the device in focus.
**DoDonPachi DaiOuJou** DoDonPachi DaiOuJou: DoDonPachi DaiOuJou is the fourth arcade game in Cave's DonPachi series. The history section of DoDonPachi Resurrection on iPhone gives its localised title as DoDonPachi Blissful Death. CAVE later ported the game to iOS under this localised name. Gameplay: DaiOuJou follows the conventions of the previous game with only a few changes. The chaining system is intact and works in much the same way. Causing an enemy to explode fills a meter, and every enemy destroyed before the meter depletes adds to the current chain and again refills the meter. Holding the laser weapon over a large enemy will hold the meter steady and slowly accumulate hits. In this way it is possible to create a single chain out of any of the 5 stages. The controls in DaiOuJou are identical to the previous games in the series, and the same shot-laser dynamic as seen in Donpachi and Dodonpachi is also present, with spread bombs and laser bombs also making a return. However, there are only 2 ships, a narrow shot ship (Type-A) and a wide shot ship (Type-B). In addition, the game introduces Element Dolls, which power up the player's weapons. The element dolls are: Shotia: Powers up standard shots and has a less powerful laser. Can carry 3 bombs at the start, and holds a maximum of 6. Gameplay: Leinyan: Powers up laser shots and has a weaker shot attack. Can carry 2 bombs at the start, and holds a maximum of 4. EXY: Powers up both shot and laser weapons, giving them intense power. Can carry 1 bomb at the start, and holds a maximum of 2. Gameplay: Piper: Exclusive to the Xbox 360 in X-mode, Piper will fire hyper versions of the shot and laser, rapidly increasing the combo gauge. The player can still gain and activate hypers, but they make the shots green and give the player the ability to cancel bullets with their shots.
To compensate, the player cannot use bombs (although the MAXIMUM bonus for obtaining bomb items is still intact). Introduced in this game and carried on in later entries is the Hyper system. By gaining large combos, attacking enemies at close range, and collecting bee medals while a large combo is active (bee medals now give score based on how large a combo has been racked up), the player builds up a hyper meter at the top of the screen. When the meter fills up, a hyper powerup drops from the bottom of the screen. Activating the hyper cancels all bullets currently onscreen and increases the firepower of the shot and laser. Hypers also increase the rate at which combos are built up: small enemies give more hits on the combo gauge, larger enemies add to the current combo chain much faster, and combos on bosses build up at a much faster rate. In addition, if a hyper meter is filled during a boss fight, all bullets on screen are cancelled and turned into score bonuses. Plot: After the events of Dodonpachi, in which the pilot stops Colonel Schwarlitz Longhener's plan to annihilate humanity, the remains of the Donpachi Corps are sealed away in the moon, never to be heard from again. 1000 years later, the robotic army, led by Hibachi, has reawakened and is slowly rebuilding itself in order to wipe out humanity again, taking the defenseless moon colony of Lunapolis. The world quickly resurrects the Donpachi Corps, whose pilots, aided by Element Dolls (sentient androids meant to increase a ship's power), are deployed on the moon to destroy Hibachi's army. Plot: Depending on the element doll the player chose at the beginning of the game, the ending will differ upon defeating Hibachi; Shotia: In a last resort attack, Hibachi's program turns into a virus and invades Shotia's system, slowly deleting her memories bit by bit. Shotia's last memories are those with her pilot, and she dies with a smile. Leinyan: After defeating Hibachi, Leinyan has developed feelings for her pilot.
Upon returning to Earth, Leinyan is taken away to be experimented on, but escapes and reunites with her pilot. EXY: EXY shuts down Hibachi's computer network, but is driven insane by the overflow of data. EXY kills her pilot and does not return to Earth. The events of this game will eventually lead to Dodonpachi Daifukkatsu. Development: Black Label This variant was a limited edition release. The arcade board includes the original and Black Label games, which can be selected during boot time. The Black Label game can be identified by the black title screen. After the release of the Black Label, the original version came to be called the White Label for clarification. A prototype export/overseas version of the Black Label edition named DoDonPachi III was discovered in 2016. Music The music tracks are puns on the names of shooting game companies. Mukei, Toua, Takimi, Torejya, Saikyou, Seibu, Sakusetsu, Taitou, Raijin, and Awaremu are named after NMK, Toaplan, Takumi Corporation, Treasure, Psikyo, Seibu Kaihatsu, Success Corporation, Taito, 8ing/Raizing, and Irem respectively. As often pointed out by fans, and as Manabu Namiki confirmed, the tracks are named after the shooting game companies above, to which he wanted to show respect. Graphics With Junya Inoue still a graphical designer, the serene steampunk world of Progear has been replaced with hard sci-fi. The graphics, especially the ships, were drawn so as to resemble the original Dodonpachi. Bullets are drawn in blue and pink, and many of the backgrounds are deliberately flat so as not to distract from the on-screen action. Releases: PlayStation 2 release This version added the following modes/features: Death Label arcade mode. No bullets mode Simulation (training) mode, with a replay feature. Gallery. High score DVD video from 4 players who completed the second loop of the game. Releases: Player : 長田仙人, KTL-NAL (A.K.A.
Homestay Akira), Clover-TAC Score : 1.89 Billion Death Label mode Death Label mode sets the player against a boss rush, with maximum firepower at all times and a full stock of Hypers granted before each boss. Death Label's difficulty is roughly equivalent to that of the normal game's second loop, with a number of alterations made to the bosses and their attack patterns. The most notable change is made at the final boss fight of Death Label, where the player faces two Hibachis simultaneously. According to top players, this is the most difficult iteration in the DoDonPachi series, taking 7 years (from 2003 until 2010-09-18) to clear. Releases: Tamashii This edition is aimed at the Taiwan-Chinese market, and some in-game text has been translated into Chinese. It features an easy mode for beginners (not Black Label). It was published by IGS on April 20, 2010. Releases: Black Label EXTRA release The 2008-03-07 issue of Famitsu Weekly magazine reported that 5pb. Inc.'s 5pb.Games Division #2 would bring this game to the Xbox 360 platform as an Xbox Live Arcade title. However, 5pb representative Masaki Sakari claimed that Microsoft rejected 5pb's proposals and 'decided to cut down faithful arcade ports'. 5pb considered releasing Black Label and Ketsui on a retail DVD instead. Releases: On 2008-09-26, Famitsu announced the official title of the Xbox 360 version of the game, dodonpachi DAI-OU-JOU Black Label EXTRA (怒首領蜂 大往生 ブラックレーベル EXTRA), scheduled for a release on Christmas Day of 2008. The port includes the original and Black Label editions of the game, as well as online score ranking, replay saving, enhanced graphics, and Xbox Live Marketplace content. There is an Xbox 360 original mode for beginners named the "X Mode", where a new Element Doll named Piper is introduced. The pre-order also includes a guidebook. Releases: Arcade mode - Old Version - is a port of the original "White Label" arcade release.
Arcade mode - New Version - is a port of the newer "Black Label" arcade release. The X Mode features a 1-loop, 5-stage layout with a new game system. The game's music can be set to Mono (from the arcade), Stereo, or X Mode, which features rearranged music. Xbox achievements feature 50 categories for 1,000 points in total. The player can get extra credits, X Mode, and unlock Config EX options that alter gameplay mechanics by playing the game for a specific amount of time or earning achievement points. Releases: The Xbox 360 version was plagued at release with bugs and problems that rendered the game highly inaccurate and glitchy. It was eventually found that 5pb had, without permission, lifted the source code from the PS2 version of the game and slotted it in for the 360 version while making adjustments as needed. Patches were eventually made, with Cave and Microsoft stepping in to aid the patching process. Reception: In Japan, Game Machine listed DoDonPachi DaiOuJou in their June 1, 2002 issue as being the third most-popular arcade game during the previous two weeks. DaiOuJou was met with positive reception from critics since its release in arcades and on other platforms. According to review aggregator site Metacritic, the iOS version received "generally favorable" reviews. Famitsu reported that the PlayStation 2 and Xbox 360 versions sold over 19,593 and 10,526 copies respectively in their first week on the market. Both the PlayStation 2 and Xbox 360 versions sold approximately 53,881 copies combined during their lifetime in Japan.
**Farampator** Farampator: Farampator (developmental code names CX-691, ORG-24448, SCH-900460) is an ampakine drug. It was developed by Cortex Pharmaceuticals, and licensed to Organon BioSciences for commercial development. Following the purchase of Organon by Schering-Plough in 2007, the development license to farampator was transferred. The development of farampator was eventually terminated, reportedly due to concerns about cardiac toxicity. Farampator has been investigated for its effect on AMPA receptors and researched for potential use in the treatment of schizophrenia and Alzheimer's disease. It was found to improve short-term memory, but impaired episodic memory. It produced side effects such as headache, somnolence and nausea. Subjects reporting side effects had significantly higher plasma levels of farampator than subjects without. Additional analyses revealed that in the farampator condition the group without side effects showed a significantly superior memory performance relative to the group with side effects.
**Theistic rationalism** Theistic rationalism: Theistic rationalism is a hybrid of natural religion, Christianity, and rationalism, in which rationalism is the predominant element. According to Henry Clarence Thiessen, the concept of theistic rationalism first developed during the eighteenth century as a form of English and German Deism. The term "theistic rationalism" occurs as early as 1856, in the English translation of a German work on recent religious history. Some scholars have argued that the term properly describes the beliefs of some of the prominent Founding Fathers of the United States, including George Washington, John Adams, Benjamin Franklin, James Wilson, and Thomas Jefferson. Definition: Theistic rationalists believe natural religion, Christianity, and rationalism typically coexist compatibly, with rational thought balancing the conflicts between the first two aspects. They often assert that the primary role of a person's religion should be to bolster morality, a fixture of daily life. Theistic rationalists believe that God plays an active role in human life, rendering prayer effective. They accept parts of the Bible as divinely inspired, using reason as their criterion for what to accept or reject. Their belief that God intervenes in human affairs and their approving attitude toward parts of the Bible distinguish theistic rationalists from Deists. Definition: Anthony Ashley-Cooper, 3rd Earl of Shaftesbury (1671–1713), has been described as an early theistic rationalist. According to Stanley Grean, both Shaftesbury and the Deists wanted to preserve theology while freeing it from supernaturalism; both denied the occurrence of miracles; both called for free criticism of the Bible and questioned the absoluteness of its authority; both shared a distrust of sacramental and priestly religion; and both stressed the importance of morality in religion.
However, despite this broad area of agreement, Shaftesbury did not identify himself unreservedly with the developing Deistic movement, and he expressed some serious doubts about certain aspects of it... The Deists were wrong if they relegated God to the status of a Prime Mover without subsequent contact with the universe; Deity must be conceived as being in constant and living interaction with the creation; otherwise the concept is "dry and barren." Moral Law: Moral law of Theistic Rationalism chooses the highest good of being in general. It accepts, as a first truth of reason, that Man is a subject of moral obligation. Men are to be judged by their motives, that is, by their designs, intentions. Moral Law: If a man intend evil, though, perchance, he may do us good, we do not excuse him, but hold him guilty of the crime which he intended. So if he intends to do us good, and, perchance, do us evil, we do not, and cannot condemn him. He may have been to blame for many things connected with the transaction, but for a sincere, and of course hearty endeavour to do us good, he is not culpable, nor can he be, however it may result. If he honestly intended to do us good, it is impossible that he should not have used the best means in his power, at the time: this is implied in honesty of intention. And if he did this, reason cannot pronounce him guilty, for it must judge him by his intentions. Courts of criminal law have always in every enlightened country assumed this as a first truth. They always inquire into the quo animo, that is, the intention, and judge accordingly. Moral Law: The universally acknowledged truth that lunatics are not moral agents and responsible for their conduct, is but an illustration of the fact that the truth we are considering, is regarded, and assumed, as a first truth of reason. Moral Law: Moral law is a pure and simple idea of the reason. 
It is the idea of perfect, universal, and constant consecration of the whole being, to the highest good of being. Just this is, and nothing more nor less can be, moral law; for just this, and nothing more nor less, is a state of heart and a course of life exactly suited to the nature and relations of moral agents, which is the only true definition of moral law. Moral Law: Thus, whatever is plainly inconsistent with the highest good of the universe is illegal, unwise, inexpedient, and must be prohibited by the spirit of moral law. Civil and family governments are indispensable to the securing of this end.
**Pan–tilt–zoom camera** Pan–tilt–zoom camera: A pan–tilt–zoom camera (PTZ camera) is a robotic camera capable of panning horizontally (from left to right), tilting vertically (up and down), and zooming (for magnification). PTZ cameras are often positioned at guard posts, where on-duty staff may manage them using a remote camera controller. Their primary function is to monitor expansive open regions that need views in the range of 180 or 360 degrees. Depending on the camera or software being used, they may also be set up to automatically monitor motion-activated activities or adhere to a defined schedule. Functions: A pan-tilt-zoom camera may be operated by remote control, by computer software, or by an operator physically present, for purposes such as recognizing patterns and people. PTZ cameras may zoom in very close, pan widely in all directions, and tilt up and down to get a good look at whatever is in the frame. These cameras have strong motors that move the lenses. Functions: The camera's focus triggers a corresponding motorized zoom-in on the subject of the camera's attention. In addition, the lens's versatility allows the camera to catch unique perspectives and details that would otherwise be missed. Use: In television production, PTZ controls are used with professional video cameras in television studios, sporting events, and other spaces. They are commonly referred to as robos, an abbreviation of robotic camera. These systems can be remotely controlled by automation systems. PTZ cameras are in high demand as a solution because of the diverse range of applications that they can support. Some examples of these applications are provided below. Live-streaming A PTZ camera is a crucial piece of equipment for live-streaming, as it gives the streaming material an extra dose of realism. It is a great option for a permanent installation since it can be mounted on a tripod, a table, or even the ceiling.
In addition to taking a panoramic view, the remote controllability of a PTZ camera is a major advantage. This allows for a multi-cam arrangement in which many PTZ cameras may be placed at various locations and used simultaneously. They also have fantastic zoom and video frame speeds. Use: Churches PTZ cameras are deployed at churches, which tend to be massive buildings with elaborate designs. The compact, unobtrusive nature of PTZ cameras allows churches to keep their buildings' aesthetic value intact while yet maintaining security. Instead of taking up valuable floor space with a camera operator stationed at each camera, PTZ cameras are strategically placed to record and broadcast events in high definition on projection screens and social media platforms. Use: Broadcast The introduction of pan-tilt-zoom cameras has revolutionized television production facilities by allowing producers to consolidate their staff in one place. By using PTZ camera presets, smaller studios may record more camera perspectives with the same number of cameras. Operators in the video production room may control the cameras from afar and use the different teleprompters to provide instructions to the performer. Use: Auto tracking PTZ cameras are useful in fields such as sports broadcasting and newsgathering because of their auto-tracking and zoom capabilities. It is significantly easier to cover sporting events and competitions. Movements from all directions may be followed using PTZ cameras. It's capable of recording both on-field and in-stadium activity. Use: Video surveillance Modern surveillance systems cannot function without pan-tilt-zoom cameras. Because of their mobility, these cameras can always keep an eye on a certain region while also focusing on any questionable activity. Using PTZ cameras for video surveillance is helpful in a wide variety of settings, including but not limited to guard posts, courtrooms, supermarkets, airports, museums, stores, and restaurants. 
Use: Video conferencing Video conferences may benefit from the use of PTZ cameras. The recent health crisis has increased the popularity of online events in particular. Remote attendees may tune in to events in real time thanks to pan-tilt-zoom cameras that stream live video to displays at the venue. The main auditorium and the other training rooms are both suitable locations for the installation of PTZ cameras, where they may be utilized during seminars and conferences. With this simple arrangement, the whole event may be seen. Types: Outdoor PTZ Camera: Can withstand the elements better than indoor counterparts. Waterproof housings with an IP certification mean that outdoor PTZ cameras can withstand the effects of water, such as rain and snow. Wireless PTZ Camera: For video security, wireless PTZ cameras are a common means to transfer data over long distances when installing cable would be costly or inconvenient. In confined areas, wireless PTZ cameras may even provide superior viewing angles to wired alternatives. IP PTZ Camera: Internet Protocol (IP) pan-tilt-zoom cameras are preferable to wired analog models for a number of reasons. PoE PTZ Camera: A PoE (Power over Ethernet) PTZ camera may be powered and connected to the internet with a single Ethernet cable, eliminating the need for additional wiring. Analog PTZ Camera: Analog pan-tilt-zoom cameras are used to record surveillance footage, which is then stored in a DVR. DVR functionality is essential for converting, processing, and storing video data. Differences between PTZ and IP cameras: Both cameras have the same purpose, but they operate in different ways to meet various needs in terms of safety. Installing: Positioning a PTZ camera takes time and attention. If you make a mistake installing PTZ cameras, they may not work, and reinstalling them will take a long time. Area/Movement: PTZ cameras can see more than IP cameras. IP cameras can't pan, tilt, and zoom like PTZ cameras.
A PTZ camera leaves no blind spots. Resolution: PTZ cameras move and zoom on their own, so zooming may blur the image. IP cameras are recommended in this case. Shaking doesn't blur these fixed cameras' photos or recordings. Operation/Control: A PTZ camera cannot be operated unless someone is present at the controls. IP cameras give greater control over location and timing. They may be linked to any internet connection and controlled through IP. Cost: PTZ cameras are more expensive than IP cameras. Due to their mobility, PTZ cameras are more prone to damage, requiring regular maintenance that adds to expenses. Mounting: Installation of a pan-tilt-zoom camera is crucial. PTZ cameras may be mounted or set up in a variety of ways. The cameras installed on walls provide unique, eye-level perspectives. This setup is ideal for use on a veranda to survey large areas. It is possible to find PTZ cameras installed in the ceiling. Disadvantages: Although pan-tilt-zoom cameras have numerous uses and advantages, they also have certain drawbacks. One drawback of PTZ cameras is the blind spots and coverage holes they might leave behind. The camera can pan, tilt, and zoom, but cannot do all three at once for comprehensive monitoring. Disadvantages: The lengthy delay between a command being sent and the camera responding is a common complaint about PTZ cameras. The delay between making a change to the camera's field of vision and seeing that change reflected on the screen is known as "command latency." Another common criticism of PTZ cameras is that they are fragile. Problems are often encountered, according to some people. The costs of ignoring the abovementioned problems might add up over time. Cost: PTZ cameras have more complex internal technology, allowing them to move and adapt while keeping most or all of the advanced capabilities of traditional security cameras.
They function similarly to traditional security cameras, but the owner has significantly more mobility and autonomy thanks to advanced features. Mechanically, PTZ cameras differ little from fixed cameras that serve the same purpose, such as license plate readers, but they need extra hardware to allow for panning, tilting, and zooming.
**Electron energy loss spectroscopy** Electron energy loss spectroscopy: Electron energy loss spectroscopy (EELS) is a form of electron microscopy in which a material is exposed to a beam of electrons with a known, narrow range of kinetic energies. Some of the electrons will undergo inelastic scattering, which means that they lose energy and have their paths slightly and randomly deflected. The amount of energy loss can be measured via an electron spectrometer and interpreted in terms of what caused the energy loss. Inelastic interactions include phonon excitations, inter- and intra-band transitions, plasmon excitations, inner shell ionizations, and Cherenkov radiation. The inner-shell ionizations are particularly useful for detecting the elemental components of a material. For example, one might find that a larger-than-expected number of electrons comes through the material with 285 eV less energy than they had when they entered the material. This is approximately the amount of energy needed to remove an inner-shell electron from a carbon atom, which can be taken as evidence that there is a significant amount of carbon present in the sample. With some care, and looking at a wide range of energy losses, one can determine the types of atoms, and the numbers of atoms of each type, being struck by the beam. The scattering angle (that is, the amount that the electron's path is deflected) can also be measured, giving information about the dispersion relation of whatever material excitation caused the inelastic scattering. History: The technique was developed by James Hillier and RF Baker in the mid-1940s but was not widely used over the next 50 years, only becoming more widespread in research in the 1990s due to advances in microscope instrumentation and vacuum technology. With modern instrumentation becoming widely available in laboratories worldwide, the technical and scientific developments from the mid-1990s have been rapid. 
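The inner-shell identification described above amounts to matching measured energy-loss onsets against tabulated edge energies, as in the carbon example near 285 eV. A minimal sketch in Python; the edge table below lists a few approximate onset energies and the 5 eV tolerance is an assumption for illustration, whereas real analysis uses full tabulated references and careful background subtraction:

```python
# Illustrative (approximate) inner-shell edge onset energies in eV.
# These values and the matching tolerance are assumptions for this sketch.
EDGE_TABLE = {
    "C K": 284.0,
    "N K": 401.0,
    "O K": 532.0,
    "Fe L3": 708.0,
}

def identify_edges(measured_losses_eV, tolerance_eV=5.0):
    """Match measured energy-loss onsets against known ionization edges.

    Returns a list of (measured loss, edge name) pairs for every edge
    whose tabulated onset lies within the tolerance of a measurement.
    """
    matches = []
    for loss in measured_losses_eV:
        for name, onset in EDGE_TABLE.items():
            if abs(loss - onset) <= tolerance_eV:
                matches.append((loss, name))
    return matches

# A loss near 285 eV points to the carbon K edge, one near 530 eV to oxygen.
print(identify_edges([285.0, 530.0]))
```

In practice the edge onset is found from the spectrum itself (after background removal), and quantification integrates the edge intensity rather than just locating it, but the lookup step is the core of elemental identification.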
The technique is able to take advantage of modern aberration-corrected probe forming systems to attain spatial resolutions down to ~0.1 nm, while with a monochromated electron source and/or careful deconvolution the energy resolution can be 0.1 eV or better. This has enabled detailed measurements of the atomic and electronic properties of single columns of atoms, and in a few cases, of single atoms. Comparison with EDX: EELS is spoken of as being complementary to energy-dispersive x-ray spectroscopy (variously called EDX, EDS, XEDS, etc.), which is another common spectroscopy technique available on many electron microscopes. EDX excels at identifying the atomic composition of a material, is quite easy to use, and is particularly sensitive to heavier elements. EELS has historically been a more difficult technique but is in principle capable of measuring atomic composition, chemical bonding, valence and conduction band electronic properties, surface properties, and element-specific pair distance distribution functions. EELS tends to work best at relatively low atomic numbers, where the excitation edges tend to be sharp, well-defined, and at experimentally accessible energy losses (the signal being very weak beyond about 3 keV energy loss). EELS is perhaps best developed for the elements ranging from carbon through the 3d transition metals (from scandium to zinc). For carbon, an experienced spectroscopist can tell at a glance the differences between diamond, graphite, amorphous carbon, and "mineral" carbon (such as the carbon appearing in carbonates). The spectra of 3d transition metals can be analyzed to identify the oxidation states of the atoms. Cu(I), for instance, has a different so-called "white-line" intensity ratio than Cu(II) does. This ability to "fingerprint" different forms of the same element is a strong advantage of EELS over EDX. 
The difference is mainly due to the difference in energy resolution between the two techniques (~1 eV or better for EELS, perhaps a few tens of eV for EDX). Variants: There are several basic flavors of EELS, primarily classified by the geometry and by the kinetic energy of the incident electrons (typically measured in kiloelectron-volts, or keV). Probably the most common today is transmission EELS, in which the kinetic energies are typically 100 to 300 keV and the incident electrons pass entirely through the material sample. Usually this occurs in a transmission electron microscope (TEM), although some dedicated systems exist which enable extreme resolution in terms of energy and momentum transfer at the expense of spatial resolution. Variants: Other flavors include reflection EELS (including reflection high-energy electron energy-loss spectroscopy (RHEELS)), typically at 10 to 30 keV, and aloof EELS (sometimes called near-field EELS), in which the electron beam does not in fact strike the sample but instead interacts with it via the long-ranged Coulomb interaction. Aloof EELS is particularly sensitive to surface properties but is limited to very small energy losses such as those associated with surface plasmons or direct interband transitions. Variants: Within transmission EELS, the technique is further subdivided into valence EELS (which measures plasmons and interband transitions) and inner-shell ionization EELS (which provides much the same information as x-ray absorption spectroscopy, but from much smaller volumes of material). The dividing line between the two, while somewhat ill-defined, is in the vicinity of 50 eV energy loss. Instrumental developments have opened up the ultra-low energy loss part of the EELS spectrum, enabling vibrational spectroscopy in the TEM. Both IR-active and non-IR-active vibrational modes are present in EELS. 
EEL spectrum: The electron energy loss (EEL) spectrum can be roughly split into two different regions: the low-loss spectrum (up to about 50 eV in energy loss) and the high-loss spectrum. The low-loss spectrum contains the zero-loss peak as well as the plasmon peaks, and contains information about the band structure and dielectric properties of the sample. The high-loss spectrum contains the ionisation edges that arise due to inner shell ionisations in the sample. These are characteristic of the species present in the sample, and as such can be used to obtain accurate information about the chemistry of a sample. Thickness measurements: EELS allows quick and reliable measurement of local thickness in transmission electron microscopy. The most efficient procedure is the following: Measure the energy loss spectrum in the energy range of about −5 to 200 eV (the wider the better). Such a measurement is quick (milliseconds) and thus can be applied to materials normally unstable under electron beams. Analyse the spectrum: (i) extract the zero-loss peak (ZLP) using standard routines; (ii) calculate the integrals under the ZLP (I0) and under the whole spectrum (I). Thickness measurements: The thickness t is calculated as t = mfp·ln(I/I0). Here mfp is the mean free path of electron inelastic scattering, which has been tabulated for most elemental solids and oxides. The spatial resolution of this procedure is limited by the plasmon localization and is about 1 nm, meaning that spatial thickness maps can be measured in scanning transmission electron microscopy with ~1 nm resolution. Pressure measurements: The intensity and position of low-energy EELS peaks are affected by pressure. This fact allows mapping local pressure with ~1 nm spatial resolution. Pressure measurements: The peak-shift method is reliable and straightforward. The peak position is calibrated by an independent (usually optical) measurement using a diamond anvil cell.
However, the spectral resolution of most EEL spectrometers (0.3–2 eV, typically 1 eV) is often too crude for the small pressure-induced shifts. Therefore, the sensitivity and accuracy of this method are relatively poor. Nevertheless, pressures as small as 0.2 GPa inside helium bubbles in aluminum have been measured. Pressure measurements: The peak-intensity method relies on the pressure-induced change in the intensity of dipole-forbidden transitions. Because this intensity is zero for zero pressure, the method is relatively sensitive and accurate. However, it requires the existence of allowed and forbidden transitions of similar energies and thus is only applicable to specific systems, e.g., Xe bubbles in aluminum. Use in confocal geometry: Scanning confocal electron energy loss microscopy (SCEELM) is a new analytical microscopy tool that enables a double-corrected transmission electron microscope to achieve sub-10 nm depth resolution in depth-sectioning imaging of nanomaterials. It was previously termed energy-filtered scanning confocal electron microscopy due to the lack of full-spectrum acquisition capability (only a small energy window on the order of 5 eV can be used at a time). SCEELM takes advantage of the newly developed chromatic aberration corrector, which allows electrons of more than 100 eV of energy spread to be focused to roughly the same focal plane. Simultaneous acquisition of the zero-loss, low-loss, and core-loss signals up to 400 eV has been demonstrated in the confocal geometry with depth discrimination capability.
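The log-ratio thickness procedure described above reduces to a short numerical recipe: integrate the zero-loss peak (I0) and the whole spectrum (I), then evaluate t = mfp·ln(I/I0). A minimal sketch in Python on a synthetic spectrum; the fixed ±2 eV ZLP window, the Gaussian peak shapes, and the 100 nm mean free path are assumptions for illustration, whereas dedicated EELS software extracts the ZLP by fitting:

```python
import numpy as np

def log_ratio_thickness(energy_eV, counts, mfp_nm, zlp_window_eV=2.0):
    """Estimate local thickness (nm) by the EELS log-ratio method.

    t = mfp * ln(I / I0), where I0 is the integral under the zero-loss
    peak (ZLP) and I the integral under the whole spectrum. The ZLP is
    crudely taken as a fixed window around 0 eV here.
    """
    energy_eV = np.asarray(energy_eV, dtype=float)
    counts = np.asarray(counts, dtype=float)
    de = energy_eV[1] - energy_eV[0]          # uniform channel width
    i_total = counts.sum() * de               # I: whole spectrum
    i0 = counts[np.abs(energy_eV) <= zlp_window_eV].sum() * de  # I0: ZLP
    return mfp_nm * np.log(i_total / i0)

# Synthetic spectrum over -5..200 eV: sharp ZLP at 0 eV plus a broad
# plasmon peak near 20 eV (both Gaussian, amplitudes chosen arbitrarily).
e = np.linspace(-5.0, 200.0, 4096)
spectrum = (1e5 * np.exp(-e**2 / (2 * 0.5**2))
            + 2e4 * np.exp(-(e - 20.0)**2 / (2 * 5.0**2)))
t = log_ratio_thickness(e, spectrum, mfp_nm=100.0)  # assumed mean free path
```

Because the result scales linearly with the assumed mean free path, accurate thickness mapping hinges on using the tabulated mfp for the actual material and beam energy.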
**Snowball fight** Snowball fight: A snowball fight is a physical game in which balls of snow are thrown with the intention of hitting somebody else. The game is similar to dodgeball in its major factors, though typically less organized. This activity is primarily played during winter when there is sufficient snowfall. Two examples of organized games involving snowball fights are Yukigassen and SheenAab Jung. Yukigassen (雪合戦) is a snowball fighting-competition from Japan. SheenAab Jung (aka 'Snow Fighting') is played in Jammu and Kashmir of India. History: Legal prohibition In 1472, the city council of Amsterdam allegedly prohibited snowball fights for reasons of public safety, a prohibition which occasionally finds its way into lists of strange laws. The law, if it ever existed, is not presently enforced.Several localities have passed ordinances prohibiting snowball fights, typically as part of a larger prohibition on thrown missiles. In 2018, the town council of Severance, Colorado unanimously overturned one such ban after hearing from a local youth. Similarly, after its "snowball ordinance" became the subject of national news coverage, the city of Wausau, Wisconsin chose to remove the word "snowball" from a list of dangerous objects specifically prohibited from being thrown on public property. History: Large snowball fights During the American Civil War, on January 29, 1863, the largest military snow exchange occurred in the Rappahannock Valley in Northern Virginia. What began as a few hundred men from Texas plotting a friendly fight against their Arkansas camp mates soon escalated into a brawl that involved 9,000 soldiers of the Army of Northern Virginia.In his memoir of the American Civil War, Samuel H. Sprott describes a snowball battle that occurred early in 1864 involving the Army of Tennessee. 
Sprott states that the fight started when Strahl’s Brigade was attacked by a brigade of Breckenridge’s Division, but soon other brigades became involved, and ultimately five or six thousand men were engaged. History: On January 29, 2005, a crowd of 3,027 people gathered in the town of Wauconda, Illinois for a snowball fight organized by Bill Lutz, with the town receiving a mention in the 2006 Guinness Book of World Records. On October 14, 2009, 5,768 people in Leuven, Belgium took part in a University of Pennsylvania-funded snowball fight and broke the world record for the largest snowball fight. On December 9, 2009, an estimated crowd of over 4,000 students at the University of Wisconsin–Madison participated in a snowball fight on Bascom Hill. There were reports of several injuries, mainly broken noses, and a few incidents of vandalism, mainly stolen lunch trays from Memorial Union. The snowball fight was scheduled weeks in advance, and was helped by the fact that the University canceled all classes due to 12–16 inches of snow that fell the night before. However, this snowball fight failed to break the record set in October of the same year in Leuven. History: On January 22, 2010, 5,387 people in Taebaek, South Korea, set the world record for most people engaged in a snowball fight. On February 6, 2010, some 2,000 people met at Dupont Circle in Washington, D.C. for a snowball fight organized over the internet after over two feet of snow fell in the region during the North American blizzards of 2010. The event was promoted via Facebook and Twitter. At least a half-dozen D.C. and U.S. Park police cars were positioned around Dupont Circle throughout the snowball fight. Minor injuries were reported. History: On January 12, 2013, 5,834 people officially took part in a snowball fight in Seattle, Washington, setting the Guinness World Records record for the world's largest snowball fight, during Seattle's Snow Day.
On February 8, 2013, nearly 2,500 students of Boston University participated in a snowball fight on Boston's Esplanade facilitated by historic winter storm "Nemo". Yukigassen (雪合戦) is a snowball-fighting competition originating in Japan. There are annual Yukigassen tournaments in Japan, Finland, Norway, Russia, Sweden, the United States and Canada. History: Seattle's world record was broken on January 31, 2016 in Saskatoon, Canada, where more than 20,000 participants came to Victoria park to attempt the Guinness World Record. Rick Mercer was one of the participants who came to shoot an episode about Team Canada's Yukigassen team and compete in the world record. Having underestimated the number of participants, the event ran out of its 8,200 wristbands hours before the competition took place. The official Guinness record of 7,681 participants was achieved by the City of Saskatoon thanks to Yukigassen Team Canada. The event was organized to send off Team Canada for the Showa Shinzan International Yukigassen World Championships, an annual professional snowball fighting competition.
**Live scan** Live scan: Live scan fingerprinting refers to both the technique and the technology used by law enforcement agencies and private facilities to capture fingerprints and palm prints electronically, without the need for the more traditional method of ink and paper. In the United States, most law enforcement agencies use live scan as their primary tool for identifying individuals. Live scan is commonly used for criminal booking, sexual offender registration, civil applicant processing, and background checks. Live scan: In the UK, many major police custody suites are now equipped with Live Scan machines, which allow for suspects' fingerprints to be instantly compared with a national database, IDENT1, with results usually reported in less than ten minutes. A live scan package includes a PC workstation (desktop or laptop), a fingerprint capture device, a digital camera and a signature pad. The product and technology have been around for over twenty years. Applications: Level 2 background check A Level 2 background check is a fingerprint-based criminal background check that includes an in-depth investigation of an individual’s criminal history, credit report and other relevant public records, processed by the FBI's National Crime Information Center (NCIC). In the vast majority of cases, level 2 background checks are required for contract personnel who are permitted access to school grounds when students are present, or if they will have direct contact with students or have access to or control of school funds. By law, only certified law enforcement personnel can fulfill the fingerprinting process, as they must attach their police ID number to the fingerprint card. Often, school districts have their own set of requirements implemented into each contractor's RFP bids. They may require additional criminal background checks from local police departments where a subject has resided in their past.
Applications: Out-of-State Live Scan As a result of many different procedures and processes for each state, Live Scan previously had to be completed in person because states do not communicate with other states. In 2013, many states started allowing applicants to submit FD-258 fingerprinting cards to registered live scan providers because the cost was found to be astronomically high, especially for travelling medical professionals who are required by law to be registered in each state they work in.
**Roland Fraïssé** Roland Fraïssé: Roland Fraïssé (French: [ʁɔlɑ̃ fʁajse]; 12 March 1920 – 30 March 2008) was a French mathematical logician. Life: Fraïssé received his doctoral degree from the University of Paris in 1953. In his thesis, Fraïssé used the back-and-forth method to determine whether two model-theoretic structures were elementarily equivalent. This method of determining elementary equivalence was later formulated as the Ehrenfeucht–Fraïssé game. Fraïssé worked primarily in relation theory. Another of his important works was the Fraïssé construction of a Fraïssé limit of finite structures. He also formulated Fraïssé's conjecture on order embeddings, and introduced the notion of compensor in the theory of posets.Most of his career was spent as Professor at the University of Provence in Marseille, France. Selected publications: Sur quelques classifications des systèmes de relations, thesis, University of Paris, 1953; published in Publications Scientifiques de l'Université d'Alger, series A 1 (1954), 35–182. Cours de logique mathématique, Paris: Gauthier-Villars Éditeur, 1967; second edition, 3 vols., 1971–1975; tr. into English and ed. by David Louvish as Course of Mathematical Logic, 2 vols., Dordrecht: Reidel, 1973–1974. Theory of relations, tr. into English by P. Clote, Amsterdam: North-Holland, 1986; rev. ed. 2000.
**HIVE (virtual environment)** HIVE (virtual environment): The H.I.V.E. (Huge Immersive Virtual Environment) is a joint research project between the departments of Psychology, Computer Science, and Systems Analysis at Miami University. The project is funded by a grant from the U.S. Army Research Office and is currently the world's largest virtual environment in terms of navigable floor area (over 1,200 m²). The goal of the research project is to conduct experiments in human spatial cognition. System Components: The H.I.V.E. platform consists of several components, including a position-tracking camera array and a wearable rendering system.
**Whirlwind mill** Whirlwind mill: A whirlwind mill is a beater mill for pulverising and micro-pulverising in process engineering. Construction: Whirlwind mills essentially consist of a mill base, a mill cover and a rotor. The inner side of the cover is equipped with wear protection elements. The top of the rotor is equipped with precrushing tools, and its side is covered with numerous U-shaped grinding tools. Function: The grinding stock is fed to the mill via an inlet box and is pre-crushed by the tools on top of the rotor. The precrushing tools also carry the product into the milling zone at the side of the rotor. There the grinding stock is fluidised in the air stream between rotor and stator caused by rotation and the U-shaped grinding tools. The special design of these tools creates massive air whirls in the grinding zone (this is where the name of the mill comes from). These air whirls cause the main grinding effect. The particles collide with each other in these whirlwinds. The final particle size can be adjusted by changing the clearance between rotor and stator, air flow and rotor speed. Applications: Whirlwind mills are basically used for pulverisation and micro-pulverisation of soft to medium hard products. In addition they can be used for cryogenic grinding, combined grinding/drying, combined drying/blending and defibration of organic substances (such as paper, cellulose, etc.). Whirlwind Mills can be found in different industries, such as chemical, plastic, building material, and food industry.
**Thus have I heard** Thus have I heard: Thus have I heard (Pali: Evaṃ me sutaṃ; Sanskrit: Evaṃ mayā śrūtaṃ) is the common translation of the first line of the standard introduction (Pāli and Sanskrit: nidāna) of Buddhist discourses. This phrase serves to confirm that the discourse is coming from the Buddha himself, as a "seal of authenticity". Buddhist tradition maintains that the disciple Ānanda used the formula for the first time, as a form of personal testimony, but this is disputed by some scholars. It is also disputed how the phrase relates to the words that follow, and several theories have been developed with regard to how the text was originally intended to be read. The formula has also been used in later Mahāyāna and Vajrayāna discourses. History and function: According to Buddhist tradition—based on the commentary to the Dīgha Nikāya—the formula was first used by the disciple Ānanda during the First Buddhist Council held at Rājagṛha (present-day Rajgir). At this gathering, the Buddhist Canon was established, and Ānanda was given the role of rapporteur (Sanskrit: saṃgītakāra) of the Buddha's teachings, being the personal attendant of the Buddha.The formula is usually followed by the place where the discourse is given, as well as the names and numbers of those it is given to. In the Chinese exegetical tradition, the formula is known as the generic preface (Chinese: 通序; pinyin: tōngxù), as opposed to the subsequent part that differs between discourses, introducing the specifics, known as the specific preface (Chinese: 別序; pinyin: biéxù). In some Early Buddhist Texts, other similar constructions are used, such as 'This was said by the Blessed One' (Pali: Vutaṃ hetaṃ bhagavatā) in the Itivuttaka. Interpretation and translation: The formula is glossed by the 5th-century Indian commentator Buddhaghosa as "received in the Buddha's presence". 
Indologist Jean Filliozat (1906–82) disagreed with the traditional explanation that Ānanda was the one who invented the formula, arguing that the formula is an odd way to describe a first-hand witness account, as it sounds as though what follows is hearsay. He argued instead that it was a later compiler who added it. However, comparing Buddhist with Jain texts, Sanskrit scholar John Brough (1917–84) concluded the formula indicates personal testimony as opposed to hearsay.Indologist Jean Przyluski (1885–1944) argued that the formula originally may also have meant that the Buddhist discourses were presented as part of sacred revelation (śruti). This was intended to prove that the Buddhist texts were on the same level with, or superior than, the Vedas in the Brahmanical tradition. Brough concurred with Przyluski that this may have played some role in the development of the phrase, but concluded that the motivation of declaring oneself a witness of the Buddha's teaching "could by itself quite adequately explain it". Brough relates a traditional account in which the Buddha's disciples weep when they hear Ānanda say the words Thus have I heard for the first time, "marvelling that they should hear again the very words of their dead master". Indologist Konrad Klaus disagrees with Brough, however, citing two discourses from the Dīgha Nikāya and Majjhima Nikāya in which the formula refers to what "... was acquired through communication by others", as opposed to personal experience. Klaus also points at another expression which does mean that a discourse has been directly received from someone, that is samukkhā me taṃ ... samukkhā paṭiggahitaṃ, meaning 'I heard and learned this from ...'s own lips': an expression often used with regard to the Buddha. He proposes that the formula Thus have I heard does mark a discourse as the Buddha's word, but not because the discourse has been heard from the Buddha's own lips by the speaker. 
He does admit that the early Sanskrit texts contain a later interpretation of the formula, which does refer to personal experience. Indologist Étienne Lamotte (1903–83) argued it was the Buddha who had the formula placed at the beginning of the Buddhist discourses, conveying this through Ānanda. In addition, the formula may have been used by editors to standardize the discourses, as it is even used in discourses given by Ānanda himself. Punctuation: There has been considerable debate as to how the first sentences of the preface of Buddhist discourses should be translated, especially with regard to punctuation. There are three main opinions. The first possible and most common translation is Thus have I heard. At one time the Blessed One was at ... in ... Buddhist studies scholar Mark Allon has defended this translation based on metrical and rhyme patterns. The words of the Pāli formula indicate the oral tradition through which the discourses were passed down. As with many parts of the discourses, the preface consists of rhymes to help memorization of the text, such as repetition of initial consonant sounds (alliteration; evaṃ, ekaṃ) and final sounds (homoioteleuton; evaṃ, sutaṃ, ekaṃ and samayaṃ). These rhyme patterns show that the two phrases, the first phrase starting with 'thus' (evaṃ me sutaṃ) and the second phrase, ekaṃ samayaṃ (Pāli; Sanskrit: ekasmin samaye), 'at one time', were seen as two separate units. On a similar note, the first phrase has a vedha-type metrical pattern, which is repeated by the second phrase, ekaṃ samayaṃ, 'at one time'. Buddhist studies scholars Fernando Tola and Carmen Dragonetti have also argued for this translation with a three-word preamble (the three words being evaṃ me sutaṃ), on the grounds that it gives the best meaning to the context. However, numerous scholars read the words 'at one time' (Pali: ekaṃ samayaṃ; Sanskrit: ekasmin samaye) as combined with the first phrase, making for a five-word preamble.
In their opinion, the first lines should be translated to Thus have I heard at one time. The Blessed One was staying at ... in ... This translation is often attributed to Brough, but was first proposed by Orientalist Alexander von Staël-Holstein (1877–1937). Von Staël-Holstein preferred this translation, basing himself on Indian commentaries, while Brough based himself on Tibetan translations, common usage in Avadānas and Early Buddhist Texts, as well as Pāli and Sanskrit commentators. Indologist Oskar von Hinüber rejects Von Staël-Holstein's and Brough's interpretation, however. He argues that although in Sanskrit it may be possible to connect the two phrases in one sentence, in Pāli this is highly unusual. Von Hinüber further states that in the early Pāli texts, as well as the Pāli commentaries, separating the two phrases is actually quite common. Konrad Klaus agrees with von Hinüber's arguments. Buddhist studies scholar Brian Galloway further states that many Tibetan and Indian commentators such as Vimalamitra (8th century) supported not a five-word but rather a three-word preamble, reading at one time with the text following it. Religious Studies scholar Mark Tatz disagrees with Galloway's interpretation, however, providing several reasons. In response, Galloway rejects most of Tatz's arguments. A third group of scholars believe that the details of the place should also be mentioned within the same sentence, with no punctuation: Thus have I heard at the one time when the Blessed One was staying at ... in ... This type of translation, called the "double-jointed construction", has been proposed by Religious Studies scholar Paul Harrison and Buddhologist Tilmann Vetter. Harrison bases himself on Tibetan translations and discussion in Sanskrit commentaries. Usage in Buddhist history: Prior to the 5th century, Chinese translations of Buddhist texts would often translate the standard formula as Heard like this (Chinese: 聞如是), leaving out the I for stylistic reasons.
During the 5th century, translator Kumārajīva (344–413 CE) started rendering the formula as Rushi wowen (Chinese: 如是我聞; lit. 'Like this I hear'), which became the standard Chinese translation, despite its unnatural construction. Mahāyāna and Vajrayāna traditions considered many later discourses the Buddha's word, and also included the formula at the beginning of those. Indeed, the 5th-century Chinese commentary Dazhidulun recommends that editors do so. Often, Mahāyāna commentaries state that the formula can refer not only to Ānanda, but also to certain bodhisattvas, such as Mañjuśrī. Modern scholarship has called into question the historical value of most of these introductions to Mahāyāna discourses, though some scholars do not exclude the possibility that some of the content of the discourses themselves goes back to the Buddha.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Floor hockey** Floor hockey: Floor hockey is a broad term for several indoor floor game codes which involve two teams using a stick and a type of ball or disk. Disks are either open or closed, but both designs are usually referred to as "pucks". These games are played either on foot or with wheeled skates. Variants typically reflect the style of ice hockey, field hockey, bandy or some other combination of sports. Games are commonly known by various names including cosom hockey, ball hockey, floorball, or simply floor hockey. Floor hockey: Two floor hockey variants involve the use of wheeled skates and are categorized as roller sports under the title of roller hockey. Quad hockey uses quad skates, commonly known as roller skates, and appears similar to bandy, while inline hockey uses inline skates and is of the ice hockey variation. All styles and codes are played on dry, flat floor surfaces such as a gymnasium or basketball court. As in other hockey codes, players on each team attempt to shoot a ball, disk or puck into a goal using sticks, some with a curved end and others straight and bladeless. Floor hockey: Floor hockey games differ from street hockey in that the games are more structured and have a codified set of rules. The variants which do not involve wheeled skates and use a closed puck are sometimes used as a form of dryland training to help teach and train children to play ice hockey, while the floorball variant is sometimes used as a dryland training program for bandy. History: Floor hockey was originally a physical fitness sport developed for physical education classes in many public schools, but it has since developed several variants played in a variety of ways and is no longer restricted to educational institutions.
History: Canada Floor hockey codes derived from ice hockey were first officially played in Montreal, Quebec, Canada in 1875, but the game's official creation is credited to Canada's Sports Hall of Fame inductee, Samuel Perry Jacks, better known as "Sam Jacks". Jacks is the individual who codified floor hockey's first set of rules in 1936. However, his version did not involve either a closed disk (puck) or a ball, but an open disk (a disk with a hole in the center). At the time, Jacks was working as assistant physical director at the West End YMCA in Toronto. His achievement was later recognized by the Youth Branch of the United Nations. In 1947, Sam Jacks became the head coach of the Canadian Floor Hockey Team, which competed in the AAU Junior Olympic Games (Amateur Athletic Union) in the USA, where the Canadian team finished in third place. It is unclear whether the style of play was the one of his own making or some other format. In 1991, the Canadian Ball Hockey Association (CBHA) was formed to provide more formal leagues of ball-based floor hockey. The CBHA runs leagues for men, women, and juniors, and organizes National Championships for each division. History: United States of America In 1947, Canada's Sam Jacks was the head coach of the Canadian Floor Hockey Team which travelled from Canada to compete in the AAU Junior Olympic Games in the USA. The Canadians finished in third place. It is unclear if the style of play was the one he codified in 1936 or another variant. In 1962, one of the first variants of organized indoor hockey games was created in Battle Creek, Michigan in the United States by Tom Harter, who used plastic sticks and pucks. It is unclear whether other floor hockey codes using a ball or a felt puck were in existence in the USA at the time or if this marked a new emerging variant in the country. In 1974, Barbara Walters and Ethel Kennedy played "Sam Jacks" floor hockey (incorrectly labelled "Floor Ringette") at Margaret Chapman School.
A photograph was taken of one of the school's students, Maria, stick handling by Ethel Kennedy during the game. The game involved disabled children and was organized by the Joseph P. Kennedy Foundation. This was during a period when this particular variant was being changed and adapted from its initial form in order to make it playable for the Special Olympics. In 2003, the National Intramural-Recreational Sports Association Hockey Committee released a baseline set of rules for a specific intramural floor hockey variant for college campuses across the United States. History: Special Olympics One version of floor hockey was introduced as a sport in the Winter Special Olympics in 1932. In 1970, the Special Olympics World Winter Games added team floor hockey as an event, with the distinction of it being the only team sport under its purview. Equipment: Floor hockey equipment differs from code to code. The types of checking and protective equipment allowed also vary. It is also important to note that many floor hockey games today use some type of plastic equipment; the first synthetic plastic was not invented until 1907, by Leo Baekeland. Object of play Various objects can be used for play depending on the code, but they fall into three main types: a ball, a closed disk called a puck, or an open disk with a hole in the middle. These objects are variously constructed of either plastic or a felt-like material. Sticks Sticks used for play depend on the game codes.
Some codes require standard ice hockey, field hockey or bandy sticks, while others use lightweight plastic sticks. The Special Olympics version of floor hockey uses blade-less wooden sticks. Types of sticks The type of floor hockey game that is played and the object of play that is used often determine the type of stick. The material used to make floor hockey sticks varies and can include plastic or some type of composite. Shafts are either rectangular or rounded like a broomstick. Equipment: Ball and puck Games which use a ball, such as quad hockey, will typically use a stick ending in a type of hook, though this is not always the case, as can be seen in ball hockey and road hockey. Games which use a type of puck (closed disk), such as cosom hockey and inline hockey, will typically use a stick ending in a blade set at a sharp angle to the shaft, with the blade generally lying flat along the floor. Equipment: In the case of floorball, the end of the stick involves a design that is a mix between a blade and a hook. Exceptions Three exceptions in regard to sticks can be found in floor hockey. These games use either an open disk or a ring. Equipment: The first is the case of Sam Jacks's floor hockey, the Canadian variant developed during the Great Depression of the 1930s. The second can be found in the Special Olympics variant, which was developed in the 1960s. The third can be found in gym ringette, which was developed in the 1990s, but gym ringette itself is not in fact a direct variant of floor hockey and was more heavily influenced by the ice sport of ringette. Equipment: In the first two examples the puck used is in fact an open disk, a type of felt disk with a hole in the middle. As a result, a straight stick is used as a handle and does not include any type of blade or hook. The end, however, may include a type of drag-tip. Shafts are either rectangular or round like a broomstick handle.
Equipment: In the third example, gym ringette uses a plastic shaft with a plastic drag-tip. Gym ringette does not use any type of puck. Instead, gym ringette uses a ring made of a type of rubber foam. The shaft is rectangular in shape. Variants: All floor hockey variants can be separated into four general categories: ball games, puck games (closed disk), disk games (open disk), and a separate category for wheeled skates called 'roller games'. The first three categories are floor hockey variants played on foot, while the latter involves the use of wheeled skates. All four categories can have their own subdivisions to help categorize the existing floor hockey variants even further. Variants: Ball games (on foot) Ball hockey Ball hockey is an indoor game using a lightweight ball. Outdoor variants exist, such as street hockey and dek hockey. Floorball One variation, which is especially popular in Europe, is floorball. Floorball uses a lightweight plastic ball and sticks made of plastic and carbon fiber. Limited checking is permitted. Puck games (on foot) This section refers to floor hockey games using a closed disk, often referred to as a "puck". Cosom hockey Another variation, cosom hockey, uses plastic sticks and pucks. Disk games (on foot) This section refers to floor hockey games using an open disk, which is in some cases referred to as a puck and has sometimes been referred to as a ring. Variants: Sam Jacks floor hockey "Sam Jacks" floor hockey is an early Canadian design of floor hockey whose rules were created and codified by Canada's Sam Jacks in 1936. It is sometimes mistaken for ringette or gym ringette. The game uses straight, bladeless sticks and a disk made of felt with a hole in the middle. Several public schools in Canada used the game in physical education and gym classes, but the game is far less commonly played today. Variants: Jacks would later create the ice team skating sport of ringette in Canada in 1963.
Today ringette only loosely resembles floor hockey, with ringette having been influenced variously by rules from basketball, ice hockey, and broomball when its first rules were designed. Though ringette's first experimental ring was a felt floor hockey puck (sometimes referred to as a "ring"), it was quickly replaced by deck tennis rings due to the felt puck accumulating snow and sticking to the ice. Variants: Special Olympics The Special Olympics variant of floor hockey uses a wide disc with a hole in the middle and a blade-less stick. Floor hockey pucks are donut-shaped felt pucks with a center hole of 10 cm (4 inches), a diameter of 20 cm (8 inches), a thickness of 2.5 cm (1 inch) and a weight of 140 to 225 grams (5 to 8 ounces). Variants: Protective equipment is required. It is believed to have been derived from a much earlier floor hockey variant from early 20th century Canada whose rules were codified by Sam Jacks. Roller games (wheeled skates) There are two variants of floor hockey which use wheeled skates: quad hockey, which is also known by other names like rink hockey and more closely resembles bandy and field hockey, and in-line hockey, which is a wheeled variant of ice hockey. Quad hockey Quad hockey is a wheeled floor hockey variant also known by various names including roller hockey and rink hockey. In-line hockey In-line hockey is a wheeled floor hockey variant derived from the ice sport of ice hockey. Variants: Gym Ringette Gym ringette is the off-ice variant of the winter team skating sport of ringette and today is only distantly related to floor hockey.
While the sport of ringette was initially influenced by the rules of basketball, ice hockey, broomball, and a variety of floor hockey games played during the early part of the 20th century, particularly the floor hockey style codified by Sam Jacks, gym ringette was developed in Canada near the end of the 20th century and was designed as an off-ice variant of the ice game of ringette rather than floor hockey. Rules: Although there are different codes of floor hockey rules, there are some basic rules which are typically followed regardless of code, with the exception of gym ringette. Start of play Floor hockey games start with a face-off, in which a player from each team has an equal chance to gain possession. The face-off is also used to resume play after goals, and to start each period. Scoring A goal is scored when the entire puck or ball crosses the plane of the goal line, unless it is intentionally kicked in by the attacking team. The team with the most goals at the end of the game is declared the winner. If the game is tied, play usually proceeds into golden goal period(s) in order to determine a winner. Overtime rules vary, but typically include extra time and/or a penalty shootout. Rules: Penalties Penalties for illegal actions are enforced. A player committing a major infraction is required to sit out of the game for two minutes, resulting in a power play, but a minor infraction may result in a free hit.
Penalties are typically given for the following actions:
Tripping – Using the body or stick to intentionally cause a player to fall
Hooking – Using the curved end of the stick to impede a player's forward progress by pulling him or her back
Slashing – Using the stick to hit an opposing player's body
Interference – Using the body to move a player from his current position on the floor or preventing him from playing the ball or puck
High sticking – Allowing the curved end of the stick to come above the waist
Pushing down – Using the stick to push an opponent down
Checking from behind – Hitting a player from behind
Cross-checking – Ramming an opponent with the stick using both hands
Too many players on court – To be served by a designated player
Spearing – Stabbing an opponent with the stick blade (game misconduct)
Deliberate intent to injure opponents (game misconduct)
Due to the limited padding worn by players, body checking is typically disallowed in floor hockey games, although shoulder-to-shoulder checking is allowed. Common misconceptions: The term "floor hockey" has at times been incorrectly applied to ringette and vice versa. Ringette is not a floor sport, but an ice skating sport. Another common mistake is to confuse gym ringette with floor hockey. Though one of the two floor hockey variants which use a disc with a hole in the center was codified by the Canadian Sam Jacks in the 1930s, gym ringette should not be confused with floor hockey variants, because gym ringette was designed in Canada in the late 20th century as the off-ice variant of the ice skating sport of ringette, a sport which was also created by Sam Jacks in Canada in the 1960s.
**Web traffic** Web traffic: Web traffic is the data sent and received by visitors to a website. Since the mid-1990s, web traffic has been the largest portion of Internet traffic. Sites monitor the incoming and outgoing traffic to see which parts or pages of their site are popular and if there are any apparent trends, such as one specific page being viewed mostly by people in a particular country. There are many ways to monitor this traffic, and the gathered data is used to help structure sites, highlight security problems or indicate a potential lack of bandwidth. Not all web traffic is welcomed. Web traffic: Some companies offer advertising schemes that, in return for increased web traffic (visitors), pay for screen space on the site. Sites also often aim to increase their web traffic through inclusion on search engines and through search engine optimization. Analysis: Web analytics is the measurement of the behavior of visitors to a website. In a commercial context, it especially refers to the measurement of which aspects of the website work towards the business objectives of Internet marketing initiatives; for example, which landing pages encourage people to make a purchase. Control: The amount of traffic seen by a website is a measure of its popularity. By analyzing the statistics of visitors, it is possible to see shortcomings of the site and look to improve those areas. It is also possible to increase the popularity of a site and the number of people that visit it. Limiting access It is sometimes important to protect some parts of a site by password, allowing only authorized people to visit particular sections or pages. Control: Some site administrators have chosen to block their page to specific traffic, such as by geographic location. The re-election campaign site for U.S. President George W. Bush (GeorgeWBush.com) was blocked to all internet users outside of the U.S. 
on 25 October 2004 after a reported attack on the site. It is also possible to limit access to a web server based both on the number of connections and on the bandwidth expended by each connection. Sources: From search engines The majority of website traffic is driven by search engines. Millions of people use search engines every day to research various topics, buy products, and go about their daily surfing activities. Search engines use keywords to help users find relevant information, and each of the major search engines has developed a unique algorithm to determine where websites are placed within the search results. When a user clicks on one of the listings in the search results, they are directed to the corresponding website, and data is transferred from the website's server, thus counting the visitor towards the overall flow of traffic to that website. Sources: Search engine optimization (SEO) is the ongoing practice of optimizing a website to help improve its rankings in the search engines. Several internal and external factors are involved which can help improve a site's listing within the search engines. The higher a site ranks within the search engines for a particular keyword, the more traffic it will receive. Sources: Increasing traffic Web traffic can be increased by the placement of a site in search engines and the purchase of advertising, including bulk e-mail, pop-up ads, and in-page advertisements. Web traffic can also be purchased through web traffic providers that can deliver targeted traffic. However, buying traffic may negatively affect a site's search engine rank. Web traffic can be increased not only by attracting more visitors to a site, but also by encouraging individual visitors to "linger" on the site, viewing many pages in a visit.
(see Outbrain for an example of this practice.) If a web page is not listed in the first pages of any search, the odds of someone finding it diminish greatly (especially if there is other competition on the first page). Very few people go past the first page, and the percentage that go to subsequent pages is substantially lower. Consequently, getting proper placement on search engines, a practice known as SEO, is as important as the website itself. Traffic overload: Too much web traffic can dramatically slow down or prevent all access to a website. This is caused by more file requests going to the server than it can handle and may be an intentional attack on the site or simply caused by over-popularity. Large-scale websites with numerous servers can often cope with the traffic required, and it is more likely that smaller services are affected by traffic overload. A sudden traffic load may also hang the server or result in a shutdown of its services. Traffic overload: Denial of service attacks Denial-of-service attacks (DoS attacks) have forced websites to close after a malicious attack, flooding the site with more requests than it could cope with. Viruses have also been used to coordinate large-scale distributed denial-of-service attacks. Sudden popularity A sudden burst of publicity may accidentally cause a web traffic overload. A news item in the media, a quickly propagating email, or a link from a popular site may cause such a boost in visitors (sometimes called a flash crowd or the Slashdot effect). Fake traffic: The Interactive Advertising Bureau estimated in 2014 that around one third of Web traffic is generated by Internet bots and malware. Traffic encryption: According to Mozilla, since January 2017 more than half of Web traffic has been encrypted with HTTPS. Hypertext Transfer Protocol Secure (HTTPS) is the secure version of HTTP, and it secures information and data transfer between a user's browser and a website.
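The per-connection limiting mentioned under "Limiting access" is commonly implemented with a token-bucket scheme. A minimal sketch follows; the class name, parameters, and byte-based accounting are illustrative assumptions, not any particular web server's API:

```python
import time

class TokenBucket:
    """Cap the bandwidth a single connection may consume.
    `rate` is the refill speed in bytes per second and `burst` is the
    maximum number of bytes that may be sent in one uninterrupted run."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst          # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Return True if `nbytes` may be sent now, spending tokens."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# A connection limited to a 100-byte burst: the first send fits, while an
# immediate second send of the same size must wait for the bucket to refill.
bucket = TokenBucket(rate=10, burst=100)
first_ok = bucket.allow(60)
second_ok = bucket.allow(60)
```

Connection-count limiting works the same way with "one connection" in place of "one byte"; real servers expose both knobs as configuration rather than application code.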
**Progress in Physical Geography** Progress in Physical Geography: Progress in Physical Geography is a peer-reviewed academic journal that publishes papers in the fields of multidisciplinary geosciences and physical geography. The journal's editors are Nicholas Clifford (King's College London) and George Malanson (University of Iowa). It has been in publication since 1977 and is currently published by SAGE Publications. Scope: Progress in Physical Geography is an international, interdisciplinary journal which publishes papers that focus on developments and debates within physical geography. The bi-monthly journal, edited by Nicholas Clifford and George Malanson, also covers interrelated fields across the Earth, biological and ecological system sciences. Abstracting and indexing: Progress in Physical Geography is abstracted and indexed in, among other databases, SCOPUS and the Social Sciences Citation Index. According to the Journal Citation Reports, its 2016 impact factor is 3.375, ranking it 35th out of 188 journals in the category 'Geosciences, Multidisciplinary' and 11th out of 49 journals in the category 'Geography, Physical'.
**Morphing** Morphing: Morphing is a special effect in motion pictures and animations that changes (or morphs) one image or shape into another through a seamless transition. Traditionally such a depiction would be achieved through dissolving techniques on film. Since the early 1990s, this has been replaced by computer software to create more realistic transitions. A similar method is applied to audio recordings, for example, by changing voices or vocal lines. Early transformation techniques: Long before digital morphing, several techniques were used for similar image transformations. Some of those techniques are closer to a matched dissolve - a gradual change between two pictures without warping the shapes in the images - while others did change the shapes in between the start and end phases of the transformation. Tabula scalata Known since at least the end of the 16th century, tabula scalata is a type of painting with two images divided over a corrugated surface. Each image is only correctly visible from a certain angle. If the pictures are matched properly, a primitive type of morphing effect occurs when changing from one viewing angle to the other. Early transformation techniques: Mechanical transformations Around 1790, French shadow play showman François Dominique Séraphin used a metal shadow figure with jointed parts to have the face of a young woman change into that of a witch. Some 19th-century mechanical magic lantern slides produced changes to the appearance of figures. For instance, a nose could grow to enormous size, simply by slowly sliding away a piece of glass with black paint that masked part of another glass plate with the picture. Early transformation techniques: Matched dissolves In the first half of the 19th century, "dissolving views" were a popular type of magic lantern show, mostly showing landscapes gradually dissolving from a day to a night version or from summer to winter.
Other uses are known; for instance, Henry Langdon Childe showed groves transforming into cathedrals. The 1910 short film Narren-grappen shows a dissolve transformation of the clothing of a female character. Maurice Tourneur's 1915 film Alias Jimmy Valentine featured a subtle dissolve transformation of the main character from respected citizen Lee Randall into his criminal alter ego Jimmy Valentine. Early transformation techniques: The Peter Tchaikovsky Story in a 1959 TV-series episode of Disneyland features a swan automaton transforming into a real ballet dancer. In 1985, Godley & Creme created a "morph" effect using analogue cross-fades on parts of different faces in the video for "Cry". Animation In animation, the morphing effect was created long before the introduction of cinema. A phenakistiscope designed by its inventor Joseph Plateau was printed around 1835 and shows the head of a woman changing into a witch and then into a monster. Émile Cohl's 1908 animated film Fantasmagorie featured much morphing of characters and objects drawn in simple outlines. Digital morphing: In the early 1990s, computer techniques capable of more convincing results saw increasing use. These involved distorting one image at the same time that it faded into another, through marking corresponding points and vectors on the "before" and "after" images used in the morph. For example, one would morph one face into another by marking key points on the first face, such as the contour of the nose or location of an eye, and marking where these same points existed on the second face. The computer would then distort the first face to have the shape of the second face at the same time that it faded the two faces. To compute the transformation of image coordinates required for the distortion, the algorithm of Beier and Neely can be used.
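The two ingredients just described, interpolating the marked key points and fading the pixel values, can be sketched in a few lines. This is only an illustration of the idea: the actual warping of each image toward the interpolated points (the Beier-Neely field-warping step) is omitted, and the function names and flat point/pixel lists are assumptions of the sketch:

```python
def lerp_points(pts_a, pts_b, t):
    """Linearly interpolate corresponding key points (e.g. eye corners,
    nose contour) between the "before" and "after" images at time t."""
    return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(pts_a, pts_b)]

def cross_dissolve(pix_a, pix_b, t):
    """Fade between the pixel values of the two (already warped) images."""
    return [(1 - t) * a + t * b for a, b in zip(pix_a, pix_b)]

# Halfway through the morph, key points and pixel values both sit at
# their midpoints between the "before" and "after" states.
mid_pts = lerp_points([(0, 0), (10, 20)], [(4, 2), (14, 24)], 0.5)
mid_pix = cross_dissolve([0, 255], [255, 0], 0.5)
```

Running the morph for t from 0 to 1 and warping each frame toward `lerp_points(...)` before the dissolve is what produces the seamless transition described above.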
Digital morphing: Early examples In or before 1986, computer graphics company Omnibus created a digital animation for a Tide commercial with a Tide detergent bottle smoothly morphing into the shape of the United States. The effect was programmed by Bob Hoffman. Omnibus re-used the technique in the movie Flight of the Navigator (1986). It featured scenes with a computer-generated spaceship that appeared to change shape. The plaster cast of a model of the spaceship was scanned and digitally modified with techniques that included a reflection mapping technique that was also developed by programmer Bob Hoffman. The 1986 movie The Golden Child implemented early digital morphing effects from animal to human and back. Digital morphing: Willow (1988) featured a more detailed digital morphing sequence with a person changing into different animals. A similar process was used a year later in Indiana Jones and the Last Crusade to create Walter Donovan's gruesome demise. Both effects were created by Industrial Light & Magic, using software developed by Tom Brigham and Doug Smythe (AMPAS). In 1991, morphing appeared notably in the Michael Jackson music video "Black or White" and in the movies Terminator 2: Judgment Day and Star Trek VI: The Undiscovered Country. The first application for personal computers to offer morphing was Gryphon Software Morph on the Macintosh. Other early morphing systems included ImageMaster, MorphPlus and CineMorph, all of which premiered for the Amiga in 1992. Other programs became widely available within a year, and for a time the effect became common to the point of cliché. For high-end use, Elastic Reality (based on MorphPlus) saw its first feature film use in In the Line of Fire (1993) and was used in Quantum Leap (work performed by the Post Group). At VisionArt, Ted Fay used Elastic Reality to morph Odo for Star Trek: Deep Space Nine. In the Snoop Dogg music video "Who Am I? (What's My Name?)", Snoop Dogg and the others morph into dogs.
Elastic Reality was later purchased by Avid, having already become the de facto system of choice, used in many hundreds of films. The technology behind Elastic Reality earned two Academy Awards in 1996 for Scientific and Technical Achievement going to Garth Dickie and Perry Kivolowitz. The effect is technically called a "spatially warped cross-dissolve". The first social network designed for user-generated morph examples to be posted online was Galleries by Morpheus (morphing software). Digital morphing: In Taiwan, Aderans, a hair loss solutions provider, did a TV commercial featuring a morphing sequence in which people with lush, thick hair morph into one another, reminiscent of the end sequence of the "Black or White" video. Digital morphing: Present use Morphing algorithms continue to advance and programs can automatically morph images that correspond closely enough with relatively little instruction from the user. This has led to the use of morphing techniques to create convincing slow-motion effects where none existed in the original film or video footage by morphing between each individual frame using optical flow technology. Morphing has also appeared as a transition technique between one scene and another in television shows, even if the contents of the two images are entirely unrelated. The algorithm in this case attempts to find corresponding points between the images and distort one into the other as they crossfade. Digital morphing: While perhaps less obvious than in the past, morphing is used heavily today. Whereas the effect was initially a novelty, today, morphing effects are most often designed to be seamless and invisible to the eye. Digital morphing: A particular use for morphing effects is modern digital font design. 
Using morphing technology, called interpolation or multiple master technology, a designer can create an intermediate between two styles, for example generating a semibold font by compromising between a bold and a regular style, or extend a trend to create an ultra-light or ultra-bold. The technique is commonly used by font design studios. Software: After Effects, Elastic Reality, FantaMorph, Gryphon Software Morph, Morpheus, MorphThing, Nuke, SilhouetteFX
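The weight interpolation described above can be sketched as a straight linear blend of corresponding outline control points between two masters. This is a simplification of real multiple-master and variable-font machinery, and the flat point-list representation is an assumption of the sketch:

```python
def interpolate_glyph(regular, bold, weight):
    """Blend glyph outline control points between two masters.
    weight=0.0 reproduces the regular master and 1.0 the bold master;
    0.5 approximates a semibold, and values beyond 1.0 extrapolate the
    trend toward an ultra-bold."""
    return [((1 - weight) * rx + weight * bx, (1 - weight) * ry + weight * by)
            for (rx, ry), (bx, by) in zip(regular, bold)]

# A stem drawn 10 units wide in the regular master and 14 in the bold:
masters = ([(0, 0), (10, 0)], [(0, 0), (14, 0)])
semibold = interpolate_glyph(*masters, 0.5)    # 12-unit stem
ultrabold = interpolate_glyph(*masters, 1.5)   # extrapolated 16-unit stem
```

Interpolation only works when the two masters have the same number of points in the same order, which is why compatible outlines are a hard requirement in multiple-master design.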
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chills** Chills: Chills is a feeling of coldness occurring during a high fever, but it sometimes occurs on its own in certain people. It occurs during fever due to the release of cytokines and prostaglandins as part of the inflammatory response, which increases the set point for body temperature in the hypothalamus. The increased set point causes the body temperature to rise (pyrexia), but also makes the patient feel cold or chills until the new set point is reached. Shivering also occurs along with chills because the patient's body produces heat during muscle contraction in a physiological attempt to increase body temperature to the new set point. When it does not accompany a high fever, it is normally a light chill. Sometimes a moderate, short-lived chill may occur during a sudden fright, and is commonly interpreted as, or confused with, trembling. Chills: Severe chills with violent shivering are called rigors. Causes: Chills are commonly caused by inflammatory diseases, such as influenza. Malaria is one of the common reasons for chills and rigors. In malaria, the parasites enter the liver, grow there and then attack the red blood cells, causing these cells to rupture and release the toxic substance hemozoin, which triggers chills recurring every 3 to 4 days. In some people, mild chills occur almost constantly; less commonly, chills occur in a generally healthy person.
**176 (number)** 176 (number): 176 (one hundred [and] seventy-six) is the natural number following 175 and preceding 177. In mathematics: 176 is an even number and an abundant number. It is an odious number, a self number, a semiperfect number, and a practical number. 176 is a cake number, a happy number, a pentagonal number, and an octagonal number. 15 can be partitioned in 176 ways. The Higman–Sims group can be constructed as a doubly transitive permutation group acting on a geometry containing 176 points, and it is also the symmetry group of the largest possible set of equiangular lines in 22 dimensions, which contains 176 lines. In astronomy: 176 Iduna is a large main belt asteroid with a composition similar to that of the largest main belt asteroid, 1 Ceres. Gliese 176 is a red dwarf star in the constellation of Taurus. Gliese 176 b is a super-Earth exoplanet that orbits close to its parent star Gliese 176. In the Bible: Minuscule 176 (in the Gregory-Aland numbering) is a Greek minuscule manuscript of the New Testament. In the military: Attack Squadron 176, a United States Navy squadron during the Vietnam War; USS General J. C.
Breckinridge (AP-176) was a United States Navy troop transport during World War II, the Korean War and Vietnam War; USS Kershaw (APA-176) was a United States Navy Haskell-class attack transport during World War II; USS Micka (DE-176) was a United States Navy Cannon-class destroyer escort during World War II; USS Perch (SS-176) was a United States Navy Porpoise-class submarine during World War II; USS Peregrine (AG-176) was a United States Navy Auk-class minesweeper during World War II; USS Renshaw (DD-176) was a United States Navy Wickes-class destroyer following World War I; USS Tonkawa (AT-176) was a United States Navy Sonoma-class fleet tug during World War II; the 176th Wing is the largest unit of the Alaska Air National Guard. In transportation: the Heinkel He 176 was a German rocket-powered aircraft; London Buses route 176; 176th Street, Bronx, an elevated station on the IRT Jerome Avenue Line of the New York City Subway. In other fields: 176 is also: the year AD 176 or 176 BC; 176 AH, a year in the Islamic calendar that corresponds to 792–793 CE; the atomic number of an element temporarily called Unsepthexium
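The claim that 15 can be partitioned in 176 ways is easy to verify with a short dynamic program (a sketch; the function name is ours):

```python
def partition_count(n):
    """Count partitions of n: the number of ways to write n as a sum of
    positive integers, ignoring order (coin-change style dynamic program)."""
    ways = [1] + [0] * n              # ways[0] = 1: the empty sum
    for part in range(1, n + 1):      # allow parts 1, 2, ..., n in turn
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

n_partitions = partition_count(15)   # the article states this is 176
```

Processing parts in increasing order counts each multiset of parts exactly once, which is what distinguishes partitions from compositions.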
**Noisemaker** Noisemaker: A noisemaker is something intended to make a loud noise, usually for fun. Instruments or devices commonly considered "noisemakers" include: pea whistles; air horns, composed of a pressurized air source coupled to a horn, designed to create an extremely loud noise; fireworks, such as firecrackers, bottle rockets, bang snaps and others; party horns, paper tubes often flattened and rolled into a coil, which unrolls when blown into, producing a horn-like noise; and ratchets, orchestral musical instruments played by percussionists. See also derkach and rapach. Noisemaker: Other examples include sirens; vuvuzelas, plastic horns that produce a loud monotone note; the head joint of recorders; couesnophones; Groan Tubes; moo boxes; whirly tubes; and firecrackers. Noisemakers are popular with children as toy musical instruments. They fit well in loud rhythm bands and in music education for young children.
**Successive linear programming** Successive linear programming: Successive Linear Programming (SLP), also known as Sequential Linear Programming, is an optimization technique for approximately solving nonlinear optimization problems. It is related to, but distinct from, quasi-Newton methods. Successive linear programming: Starting at some estimate of the optimal solution, the method is based on solving a sequence of first-order approximations (i.e. linearizations) of the model. The linearizations are linear programming problems, which can be solved efficiently. As the linearizations need not be bounded, trust regions or similar techniques are needed to ensure convergence in theory. SLP has been used widely in the petrochemical industry since the 1970s. Sources: Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-30303-1. Bazaraa, Mokhtar S.; Sherali, Hanif D.; Shetty, C.M. (1993). Nonlinear Programming, Theory and Applications (2nd ed.). John Wiley & Sons. ISBN 0-471-55793-5. Palacios-Gomez, F.; Lasdon, L.; Enquist, M. (October 1982). "Nonlinear Optimization by Successive Linear Programming". Management Science. 28 (10): 1106–1120. doi:10.1287/mnsc.28.10.1106.
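The loop described above can be sketched in a few lines. This is our own toy illustration, not code from the cited sources: it linearizes only the objective (no nonlinear constraints), uses a box trust region whose LP subproblem has a closed-form corner solution, accepts a step only when the true objective improves, and shrinks the trust region otherwise.

```python
def solve_box_lp(grad, delta):
    # Minimize grad . d subject to |d_i| <= delta. The optimum of a linear
    # objective over a box sits at a corner: move each coordinate against
    # the sign of its gradient component.
    return [-delta if g > 0 else (delta if g < 0 else 0.0) for g in grad]

def slp(f, grad, x, delta=1.0, shrink=0.5, tol=1e-8, max_iter=200):
    fx = f(x)
    for _ in range(max_iter):
        d = solve_box_lp(grad(x), delta)           # LP over the trust region
        trial = [xi + di for xi, di in zip(x, d)]
        ft = f(trial)
        if ft < fx:                                # true objective improved
            x, fx = trial, ft
        else:                                      # linearization overreached
            delta *= shrink
        if delta < tol:
            break
    return x

# Toy problem: minimize (x - 1)^2 + (y + 2)^2, optimum at (1, -2).
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
g = lambda v: [2.0 * (v[0] - 1.0), 2.0 * (v[1] + 2.0)]
x_star = slp(f, g, [0.0, 0.0])
```

Industrial SLP implementations also linearize the constraints and hand each subproblem to a real LP solver, but the accept/shrink trust-region logic above is the mechanism that gives the convergence guarantee mentioned in the text.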
**Engine shaft** Engine shaft: For mine construction, an engine shaft is a mine shaft used for the purpose of pumping, irrespective of the prime mover.
**V1191 Cygni** V1191 Cygni: V1191 Cygni is the variable star designation for an overcontact binary star system in the constellation Cygnus. First found to be variable in 1965, it is a W Ursae Majoris variable with a maximum apparent magnitude 10.82. It drops by 0.33 magnitudes during primary eclipses with a period of 0.3134 days, while dropping by 0.29 magnitudes during secondary eclipses. The primary star, which is also the cooler star, appears to have a spectral type of F6V, while the secondary is slightly cooler with a spectral type of G5V. With a mass of 1.29 solar masses and a luminosity of 2.71 solar luminosities, the primary is slightly more massive and luminous than the Sun, while the secondary is only around 1/10 as massive and less than half as luminous. With a separation of 2.20 solar radii, the mass transfer of about 2×10−7 solar masses per year from the secondary to the primary is one of the highest known for a system of its type. V1191 Cygni is a W-type W UMa variable, meaning that the primary eclipse occurs when the less-massive component is eclipsed by the larger, more massive component, although the masses are unusually different for such a system. The current period is very short for a system of its spectral type, suggesting that the stars are relatively small for their mass and age, which is likely around 3.85 billion years. The pair's orbital period is increasing at a rate of over 4×10−7 days per year, one of the fastest known rates among contact binary systems, likely due to the high rate of mass transfer. In addition to the period increase, there is a cyclic period change of 0.023 days over 26.7 years, caused by either a third body with a mass of 0.77 solar masses or magnetic activity cycles. The mass transfer will likely eventually cause the system to evolve into a single star with a very high rotation rate.
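As a back-of-envelope check on the quoted rates (our own order-of-magnitude arithmetic, assuming the observed instantaneous rates stay constant, which they will not over such timescales):

```python
period = 0.3134        # orbital period, days (from the text)
period_rate = 4e-7     # quoted period increase, days per year
m_secondary = 0.13     # assumed secondary mass, solar masses (~1/10 of 1.29)
transfer_rate = 2e-7   # quoted mass transfer, solar masses per year

# Years for the period to lengthen by 1% at the current rate.
years_per_1pct = 0.01 * period / period_rate

# Crude timescale to strip the secondary entirely at the current rate,
# consistent with the predicted merger into a single rapidly rotating star.
strip_timescale = m_secondary / transfer_rate
```

Both figures come out in the thousands to hundreds of thousands of years, i.e. short compared with the system's estimated 3.85-billion-year age, which is why the configuration is considered transient.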
**Stellation diagram** Stellation diagram: In geometry, a stellation diagram or stellation pattern is a two-dimensional diagram in the plane of some face of a polyhedron, showing lines where other face planes intersect with this one. The lines cause 2D space to be divided up into regions. Regions not intersected by any further lines are called elementary regions. Usually unbounded regions are excluded from the diagram, along with any portions of the lines extending to infinity. Each elementary region represents a top face of one cell, and a bottom face of another. Stellation diagram: A collection of these diagrams, one for each face type, can be used to represent any stellation of the polyhedron, by shading the regions which should appear in that stellation. A stellation diagram exists for every face of a given polyhedron. In face transitive polyhedra, symmetry can be used to require all faces have the same diagram shading. Semiregular polyhedra like the Archimedean solids will have different stellation diagrams for different kinds of faces.
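The core computation behind a stellation diagram is plane–plane intersection: for the chosen face's plane, intersect every other face plane with it and record the resulting lines. A minimal sketch of that step (our own helper, with planes written as n·x = d):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plane_intersection(n1, d1, n2, d2):
    """Line where the planes n1.x = d1 and n2.x = d2 meet,
    returned as (point_on_line, direction_vector)."""
    u = cross(n1, n2)          # the line direction is normal to both normals
    denom = dot(u, u)
    if denom == 0:
        raise ValueError("planes are parallel: no single intersection line")
    # A particular point lying on both planes (standard two-plane formula).
    point = tuple((d1 * c2 + d2 * c1) / denom
                  for c1, c2 in zip(cross(u, n1), cross(n2, u)))
    return point, u

# Example: the perpendicular planes z = 1 and x = 1 (two faces of a cube)
# meet in the line through (1, 0, 1) running along the y axis.
point, direction = plane_intersection((0, 0, 1), 1, (1, 0, 0), 1)
```

Repeating this for every other face plane, projecting the lines into the face's 2D coordinates, and taking the arrangement of regions they cut out yields the elementary regions described above.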
**Hugh Spikes** Hugh Spikes: Hugh Alexander Spikes is a British mechanical engineer. He is emeritus professor of tribology at Imperial College London. He is the former head of the Tribology Group at Imperial College. Tribology is the science and engineering of friction, lubrication and wear. Early life and education: Spikes was born in 1945. He studied the Natural Sciences Tripos at the University of Cambridge, graduating in 1968. He obtained his Doctor of Philosophy for research in tribology from Imperial College London in 1972. His PhD research was performed under the supervision of Professor Alastair Cameron. Research and career: Spikes has published over 300 peer-reviewed papers and patents in the field of tribology, spanning many aspects of liquid lubricant behaviour ranging from boundary to hydrodynamic lubrication. Another focus of his research has been lubricant additives, particularly antiwear additives and friction modifiers. As of September 2021, Spikes' work had been cited on more than 17,500 occasions and he had an h-index of 74 and an i10-index of 262 (Google Scholar). He is a member of the Distinguished Advisory Board for the International Tribology Council. He is Editor Emeritus for the Wiley journal Lubrication Science and he is a member of the editorial board of the Springer Nature journals Tribology Letters and Friction. Spikes was the head of the Tribology Group at Imperial College; he was succeeded in this role by Professor Daniele Dini. Research and career: Honours and awards In 2004, Spikes was awarded the three major medals that are bestowed internationally for contributions to tribology: the International Award of the Society of Tribologists and Lubrication Engineers (STLE), the Mayo D. Hersey Award of the American Society of Mechanical Engineers (ASME), and the Tribology Trust Gold Medal of the Institution of Mechanical Engineers (IMechE). 
With his research students, he has also received ten best paper awards, from the STLE, the ASME and the IMechE. In 2019, he received The Tribochemistry Award from the Japanese Society of Tribologists. In 2012, he was elected as a Fellow of the Royal Academy of Engineering (FREng). He is also a Fellow of the Institution of Mechanical Engineers (FIMechE), a Chartered Engineer (CEng) and a Fellow of the Society of Tribologists and Lubrication Engineers.
**VLDLR-associated cerebellar hypoplasia** VLDLR-associated cerebellar hypoplasia: VLDLR-associated cerebellar hypoplasia (VLDLRCH) is a rare autosomal recessive condition caused by a disruption of the VLDLR gene. First described as a form of cerebral palsy in the 1970s, it is associated with parental consanguinity and is found in secluded communities, with a number of cases described in Hutterite families.
**Diverticular disease** Diverticular disease: Diverticular disease is when problems occur due to diverticulosis, a condition defined by the presence of pouches in the wall of the large intestine (diverticula). This includes diverticula becoming inflamed (diverticulitis) or bleeding. Colonic perforation due to diverticular disease may be classified using the Hinchey Classification. Signs and symptoms: The signs and symptoms of diverticulitis include nausea and vomiting, fever, abdominal tenderness, constipation, diarrhea, and pain that may persist for days. In people of Asian descent, the pain is most often felt on the right side of the abdomen. Causes: The causes of diverticular disease can be classified into: Diverticula: this occurs when the weak tissues around the colon tear under pressure, and marble-sized pouches protrude through the colon wall as a result. Diverticulitis: this is a direct result of the tear in diverticula, causing inflammation. Risk Factors: Several factors contribute to an increased risk of diverticulitis: a diet that is low in fibre and high in fat may increase the risk of developing diverticular disease. Other factors are obesity, lack of exercise or physical inactivity, smoking, and genetic disorders. Diagnosis: Tenderness can be noticed over the inflamed tissues and an elevation of white blood cell counts is also observed after an examination by a physician. This inflammation is typically in the lower left abdomen. CT scan is the most appropriate method used for diagnosis.
**Kamen Rider Stronger** Kamen Rider Stronger: Kamen Rider Stronger (仮面ライダーストロンガー, Kamen Raidā Sutorongā) is a Japanese Tokusatsu television show. It is the fifth entry in the Kamen Rider Series; the show was broadcast on TBS and MBS from April 5, 1975 to December 27, 1975. Stronger is a co-production between Ishinomori Productions and Toei, and was created by Shōtarō Ishinomori. Story: Following the death of his close friend and mentor Gorō Numata (沼田 五郎, Numata Gorō), Shigeru Jo joins the evil organization Black Satan. With the promise of great power, and fueled by a desire for revenge, Shigeru undergoes surgery to become one of Black Satan's super soldiers. Secretly, Shigeru knows that Black Satan was responsible for Numata's murder, and he uses the organization as a means to gain the power he needs to exact his vengeance. The newly transformed Shigeru escapes from the Black Satan headquarters before they can brainwash him, and becomes Kamen Rider Stronger. Shortly after his escape, Stronger meets Yuriko Misaki, another cyborg soldier created by Black Satan who can transform into Electro-Wave Human Tackle. Together they combat Black Satan and later the Delza Army, to restore peace in Japan. During a battle against the Delza Army, Tackle sacrifices herself to protect Stronger, and Stronger performs a special procedure on himself to achieve his "Charge-Up Form". Later, the six previous Kamen Riders return to Japan and join Stronger to finally topple the evil Delza Army. Characters: Shigeru Jo (城 茂, Jō Shigeru)/Kamen Rider Stronger is the protagonist and eponymous character of the series. When he transforms into his rider form, he shouts "Henshin... Stronger!". In episode 31, he gains the ability to power up into a new "Charge Up" form after undergoing surgery. However, he has to use this power up in less than a minute or else he'll explode. 
Characters: Yuriko Misaki/Electro-Wave Human Tackle (岬 ユリ子/電波人間タックル, Misaki Yuriko/Denpa Ningen Takkuru) is the first attempt at a female Kamen Rider. She was to be used by Black Satan like Shigeru, but was saved by him while he was trying to find a way out of Black Satan's secret lair. After some talk, he convinced her to come with him and together with a good friend to many Kamen Riders, Tobei Tachibana they fought Black Satan as Kamen Rider Stronger and the Electro-Wave Human Tackle for quite some time. Over time, Yuriko fell in love with Shigeru, but sadly, in episode 30, she gave her life to defeat the evil Doctor Kate in order to save him from being poisoned. This would forever haunt Shigeru and push him to undergo surgery to attain the even more powerful (and dangerous) Charge Up form. Shotaro Ishinomori and Kamen Rider co-creator Toru Hirayama conceived the idea of Tackle after receiving fan letters from young girls who said they wanted a hero to pretend to be when playing Kamen Rider with other boys. Characters: Tōbei Tachibana (立花 藤兵衛, Tachibana Tōbee): The mentor of the previous Kamen Riders. Black Satan Black Satan (ブラックサタン, Burakku Satan) is a terrorist organization formed by the remnants of Shocker. Great Boss of Black Satan (ブラックサタン大首領, Burakku Satan Dai Shuryō, 26): The leader of Black Satan who is the boss of Satan Bug (サタン虫, Satan Mushi). Destroyed by Stronger's Stronger Electro Kick. Characters: Titan (Mr. Titan)/One-Eyed Titan (タイタン/一つ目タイタン, Taitan/Hitotsume Taitan, 1-13): A high-ranking officer of Black Satan. He and General Shadow are rivals. He can assume a human form. He was thrown into the sea by Stronger and destroyed. However, Black Satan learns of his demise and performs a ritual that revives Titan into Hundred-Eyed Titan (百目タイタン, Hyakume Taitan, 17-23). He was then destroyed by Stronger's Stronger Double Kick. 
Characters: Dead Lion (デッドライオン, Deddo Raion, 24-26): A lionlike monster from Egypt who is known as the strongest great commander of Black Satan. He took over after Mr. Titan's death. After Black Satan's destruction, his whereabouts are unknown. Black Satan Soldiers (ブラックサタン戦闘員, Burakku Satan Sentōin): Black owl foot soldiers of Black Satan. Scientists wear white gowns. General Shadow General Shadow (ジェネラルシャドウ, Jeneraru Shadō, 13-38) is a high-ranking officer who is a rival of Titan. Armed with the Shadow Sword, his trademark attack is Trump Shoot. After abandoning Black Satan, he forms the Delza Army. Destroyed by Stronger (Charge Up)'s Super Electro Lightning Kick. Characters: Delza Army The Delza Army (デルザー軍団, Deruzā Gundan) is created by General Shadow after he leaves Black Satan, following that organization's destruction, to fight Kamen Rider Stronger. The first eight of his officers schemed against General Shadow but never openly challenged him; eventually he was overthrown by Marshal Machine. The organization was destroyed by the combined efforts of Kamen Rider Stronger and the six previous Riders. Characters: Great Leader of Delza Army (デルザー軍団大首領, Deruzā Gundan Dai Shuryō): A stone giant controlled by the one-eyed brain-like Great Leader; later revealed to be the true form of the Great Leader of Shocker from the original series. He takes on the form of a giant rock, impervious to the attacks of the Riders. He then self-destructs in an attempt to kill all seven Riders. Characters: Delza Army Corps (デルザー軍団戦闘員, Deruzā Gundan Sentōin, 26-39): The foot soldiers of the Delza Army. Each foot soldier wears a different mask depending on the one who leads them. Staff Officer Steel (鋼鉄参謀, Kōtetsu Sanbō, 27-29): An armor monster. Destroyed by Stronger's Stronger Electro Kick. Division Commander Wild Eagle (荒ワシ師団長, Arawashi Shidanchō, 27, 28, 39 & All Together! Seven Kamen Riders!!): An eaglelike monster. Destroyed by Stronger's Underwater Electro Fire. 
Doctor Kate (ドクターケイト, Dokutā Keito, 27-30): A plumed cockscomblike monster that shoots poisonous liquid. She is destroyed by the poisoned Tackle's Ultra Cyclone. Major Skull (ドクロ少佐, Dokuro Shōsa, 27-31): A skulllike monster. Destroyed by Stronger (Charge Up)'s Super Electron Drill Kick. Baron Rock (岩石男爵, Ganseki Danshaku, 27-32): A rocklike monster. Destroyed by Stronger (Charge Up)'s Super Electron Drill Kick (head) and Super Electro Three-step Kick (body). General Wolf (狼長官, Ōkami Chōkan, 27-33 & All Together! Seven Kamen Riders!!): A wolflike monster. Destroyed by Stronger (Charge Up)'s Super Electro Lightning Kick. Commanding Officer Brank (隊長ブランク, Taichō Buranku, 27-34): A Frankenstein's monster-inspired monster. Destroyed by Stronger (Charge Up)'s Super Electro Speed Diving Punch. Snake Woman (ヘビ女, Hebi Onna, 34, 35 & All Together! Seven Kamen Riders!!): A snakelike monster who has the ability to suck blood and make humans who look into her eyes follow her orders. Destroyed by Stronger (Charge Up)'s Super Electro Big Wheel Kick. Marshal Machine (マシーン大元帥, Mashīn Dai Gensui, 35-39): A mummy monster from Egypt who later along with his follower Commanders Jishaku and Armored Knight overthrew General Shadow when he failed to capture Kamen Rider Stronger while Commander Jishaku and Armored Knight were able to capture Kamen Rider V3 and Riderman. Destroyed by Stronger's Electro Punch. Commander Magnet (磁石団長, Jishaku Danchō, 36-39): A magnet monster who becomes one of Marshal Machine's followers and has the ability to throw magnets and make the target move to whatever direction he's aiming. He was destroyed with the revived Kaijin Corps. Armored Knight (ヨロイ騎士, Yoroi Kishi, 36-39): An armor monster who becomes one of Marshal Machine's followers and has the ability to make fire from his two swords but later one of his two swords is destroyed by Kamen Rider X's Ridol with help from Kamen Rider Amazon. He was destroyed with the revived Kaijin Corps. 
Special guest stars Takeshi Hongo/Kamen Rider 1 from Kamen Rider Hayato Ichimonji/Kamen Rider 2 from Kamen Rider Shiro Kazami/Kamen Rider V3 from Kamen Rider V3 Joji Yuki/Riderman from Kamen Rider V3 Keisuke Jin/Kamen Rider X from Kamen Rider X Daisuke Yamamoto/Kamen Rider Amazon from Kamen Rider Amazon List of episodes: I am the Electric Human Stronger!! (おれは電気人間ストロンガー!!, Ore wa Denki Ningen Sutorongā!!) (Original Airdate: April 5, 1975) The Secret of Stronger and Tackle! (ストロンガーとタックルの秘密!, Sutorongā to Takkuru no Himitsu!) (Original Airdate: April 12, 1975) The Thriller House Calls for Children!! (スリラーハウスが子どもを呼ぶ!!, Surirā Hausu ga Kodomo o Yobu!!) (Original Airdate: April 19, 1975) The Demonic Motorbike Reckless Driving Operation! (悪魔のオートバイ暴走作戦!!, Akuma no Ōtobai Bōsō Sakusen!!) (Original Airdate: April 26, 1975) Black Satan's School Lunch!? (ブラックサタンの学校給食!?, Burakku Satan no Gakkō Kyūshoku!?) (Original Airdate: May 3, 1975) The Jellyfish Kikkaijin Who Took the Form of a Teacher! (先生に化けたクラゲ奇械人!, Sensei ni Baketa Kurage Kikkaijin!) (Original Airdate: May 10, 1975) Rider Great Reversal!! (ライダー大逆転!!, Raidā Dai Gyakuten!!) (Original Airdate: May 17, 1975) Don't Melt, Rider! The Final Blow, Electro Kick!! (溶けるなライダー!とどめの電キック!!, Tokeru na Raidā! Todome no Den Kikku!!) (Original Airdate: May 24, 1975) The Band of Demons Has Come!! (悪魔の音楽隊がやって来た!!, Akuma no Ongakutai ga Yattekita!!) (Original Airdate: May 31, 1975) The Frightful Gummer Bug! It Targets Humans!! (恐怖のガンマー虫!人間を狙う!!, Kyōfu no Ganmā Mushi! Ningen o Nerau!!) (Original Airdate: June 7, 1975) Chameleorn! Demonic Film!? (カメレオーン!悪魔のフィルム!?, Kamereōn! Akuma no Firumu!?) (Original Airdate: June 14, 1975) Duel! Stronger's Grave!? (決闘!ストロンガーの墓場!?, Kettō! Sutorongā no Hakaba!?) (Original Airdate: June 21, 1975) The One-Eyed Titan! The Final Counter Attack!! (一ツ目タイタン!最後の逆襲!!, Hitotsume Taitan! Saigo no Gyakushū!!) (Original Airdate: June 28, 1975) The Appearance of Enigmatic Chief Executive Shadow! 
(謎の大幹部シャドウの出現!, Nazo no Dai Kanbu Shadō no Shutsugen!) (Original Airdate: July 5, 1975) Shadow's Trump That Calls Death!! (死を呼ぶシャドウのトランプ!!, Shi o Yobu Shadō no Toranpu!!) (Original Airdate: July 12, 1975) The Bloodsucking Bubunger's Demonic Present! (吸血ブブンガー悪魔のプレゼント!, Kyūketsu Bubungā Akuma no Purezento!) (Original Airdate: July 19, 1975) Ghost Story, The Demonic Easter (怪談 悪魔の復活祭, Kaidan Akuma no Fukkatsusai) (Original Airdate: July 26, 1975) Ghost Story, The Bottomless Swamp (怪談 底なし沼, Kaidan Sokonashi Numa) (Original Airdate: August 2, 1975) Ghost Story: The Cursed Old Castle! (怪談 呪われた古城!, Kaidan Norowareta Kojō!) (Original Airdate: August 9, 1975) The Great Scary Desert! Two Tōbeis?! (恐怖の大砂漠!二人の藤兵衛!?, Kyōfu no Dai Sabaku! Futari no Tōbei!?) (Original Airdate: August 16, 1975) Samegashima, Decisive Battle in the Sea! (鮫ヶ島海中大決戦!, Samegashima Kaichū Dai Kessen!) (Original Airdate: August 23, 1975) Rider Execution at 12:00!? (12時00分ライダー処刑!?, Jūniji Zerofun Raidā Shokei!?) (Original Airdate: August 30, 1975) The Devil of the Underground Kingdom!! (地底王国の魔王!!, Chitei Ōkoku no Maō!!) (Original Airdate: September 6, 1975) Bizarre! The Unmanned Train Runs!! (怪奇!無人電車が走る!!, Kaiki! Mujin Densha ga Hashiru!!) (Original Airdate: September 13, 1975) Don't Die!! Shigeru Jō in the Electric Chair (死ぬな!!電気椅子の城茂, Shinu na!! Denki Isu no Jō Shigeru) (Original Airdate: September 20, 1975) Seen!! The Great Leader's True Identity!! (見た!!大首領の正体!!, Mita!! Dai Shuryō no Shōtai!!) (Original Airdate: September 27, 1975) Remodelled Majin! The Delza Army Appears!! (改造魔人!デルザー軍団現わる!!, Kaizō Majin! Deruzā Gundan Arawareru!!) (Original Airdate: October 4, 1975) Oh! Stronger...into Small Pieces?! (あ!ストロンガーがこなごなに…?!, A! Sutorongā ga Konagona ni...?!) (Original Airdate: October 11, 1975) The Curse of Majin Kate's Blood! (魔人ケイト血ののろい!, Majin Keito Chi no Noroi!) (Original Airdate: October 18, 1975) Goodbye, Tackle! Her Last Activity!! (さようならタックル!最後の活躍!!, Sayōnara Takkuru! Saigo no Katsuyaku!!) 
(Original Airdate: October 25, 1975) Stronger's Great Remodelling!! (ストロンガー大改造!!, Sutorongā Dai Kaizō!!) (Original Airdate: November 1, 1975) Deadly! Super Electro Three-step Kick!! (必殺!超電三段キック!!, Hissatsu! Chō Den Sandan Kikku!!) (Original Airdate: November 8, 1975) Stronger Dies in the Full Moon!? (ストロンガー満月に死す!?, Sutorongā Mangetsu ni Shisu!?) (Original Airdate: November 15, 1975) The Snake Woman's Bloodsucking Hell! (ヘビ女の吸血地獄!, Hebi Onna no Kyūketsu Jigoku!) (Original Airdate: November 22, 1975) The Man Who Returned! The Name is V3!! (帰って来た男!その名はV3!!, Kaettekita Otoko! Sono Na wa Bui Surī!!) (Original Airdate: November 29, 1975) Three Riders Vs. The Powerful Delza Army! (三人ライダー対強力デルザー軍団!, Sannin Raidā Tai Kyōryoku Deruzā Gundan!) (Original Airdate: December 6, 1975) Riders Captured! Long Live Delza!! (ライダー捕らわる!デルザー万才!!, Raidā Torawaru! Deruzā Banzai!!) (Original Airdate: December 13, 1975) Appearance! Riders 1, 2!! (出現!ライダー1号2号!!, Shutsugen! Raidā Ichigō Nigō!!) (Original Airdate: December 20, 1975) Goodbye! The Glorious Seven Riders! (さようなら!栄光の七人ライダー!, Sayōnara! Eikō no Shichinin Raidā!) (Original Airdate: December 27, 1975) Movie: Kamen Rider Stronger A reedited episode 7. Waniida from Black Satan attacks a young boy and long-time Kamen Rider ally Tōbei Tachibana, but Kamen Rider Stronger and Electro-Wave Human Tackle run in to make the save. However, Shigeru and Yuriko fall into separate traps and are captured by Black Satan. Yuriko fights for her life as Shigeru is brainwashed by the organization. Special: The 45-minute All Together! Seven Kamen Riders!! (集合!7人の仮面ライダー!! (Zen'in Shūgō! Shichinin no Kamen Raidā!!)), first broadcast on January 3, 1976 (one week after the series' finale) opens with Tobei Tachibana taking some children to a Kamen Rider roadshow. 
Just as he's reminiscing about all the heroic modified humans he's lived alongside: 1, 2, V3, Riderman, X, Amazon, Stronger and Tackle, the first seven Riders gradually show up to greet him in their human guises, unrecognized by the crowds. When it's revealed that the monsters onstage are real and not actors, the Riders transform to save the crowd and Rider actors, uniting their power to defeat the Delza Army's true leader, Great General Darkness (暗黒大将軍, Ankoku Daishōgun) in his hideout beneath the stadium. Cast: Shigeru Araki as Shigeru Jō/Kamen Rider Stronger Kyōko Okada as Yuriko Misaki Akiji Kobayashi as Tōbei Tachibana Hiroshi Ogasawara as Yōichirō Masaki Akira Hamada as Titan Mahito Tsujimura as Dead Lion (voice) Gorō Naya as The Great Leader of Black Satan, The Great Leader of Delza Army, Great Leader Rock (voice) Hidekatsu Shibata as General Shadow (voice) Osamu Ichikawa as Marshal Machine (voice) Shinji Nakae as Narrator Songs: Opening theme"Kamen Rider Stronger no Uta" (仮面ライダーストロンガーの歌, Kamen Raidā Sutorongā no Uta, "The Song of Kamen Rider Stronger") Lyrics: Saburō Yatsude Composition: Shunsuke Kikuchi Artist: Ichirou MizukiEnding theme"Kyō mo Tatakau Stronger" (今日もたたかうストロンガー, Kyō mo Tatakau Sutorongā, "Stronger Fights Today") Lyrics: Saburō Yatsude Composition: Shunsuke Kikuchi Artists: Masato Shimon & Mitsuko Horie (episodes 1 & 2), Ichirou Mizuki & Mitsuko Horie (episodes 3-31) Episodes: 1-31 "Stronger Action" (ストロンガーアクション, Sutorongā Akushon) Lyrics: Shotaro Ishinomori Composition: Shunsuke Kikuchi Artists: Ichirou Mizuki & Mitsuko Horie Episodes 32-39
**Pole mass** Pole mass: In quantum field theory, the pole mass of an elementary particle is the limiting value of the rest mass of a particle, as the energy scale of measurement increases. Running mass: In quantum field theory, quantities like coupling constants and masses "run" with the energy scale of high energy physics. The running mass of a fermion or massive boson depends on the energy scale at which the observation occurs, in a way described by a renormalization group equation (RGE) and calculated by a renormalization scheme such as the on-shell scheme or the minimal subtraction scheme. The running mass refers to a Lagrangian parameter whose value changes with the energy scale at which the renormalization scheme is applied. The calculation relating the running mass to the pole mass is typically performed numerically, since it is intractable by hand; it typically relies on a perturbative calculation of the self energy. Propagator pole: A loop in a Feynman diagram requires an integral over a continuum of possible energies and momenta. In general, the integrals of products of Feynman propagators diverge at propagator poles, and the divergences must be removed by renormalization. The process of renormalization might be thought of as a theory of cancellations of virtual particle paths, thus revealing the "bare" or renormalized physics, such as the pole mass.
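For orientation, the best-known instance of such a relation is the one-loop QCD formula connecting a heavy quark's pole mass to its MS-bar running mass, a standard textbook result quoted here as an illustration (higher-order terms are what make the full conversion a numerical task):

```latex
M_{\text{pole}} = \bar m(\bar m)\left[\,1 + \frac{4}{3}\,\frac{\alpha_s(\bar m)}{\pi} + \mathcal{O}\!\left(\alpha_s^2\right)\right]
```

Here \(\bar m(\bar m)\) is the running mass evaluated at its own scale and \(\alpha_s\) is the strong coupling; the perturbative series on the right is the self-energy expansion mentioned above.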
**Coffee ground vomiting** Coffee ground vomiting: Coffee ground vomitus refers to a particular appearance of vomit. Within organic heme molecules of red blood cells is the element iron, which oxidizes following exposure to gastric acid. This reaction causes the vomitus to look like ground coffee. Causes: Esophagitis, esophageal varices, gastritis, cirrhosis or gastric ulcers, for example, may bleed and produce coffee-ground vomitus. When unaccompanied by melena, hematemesis, or a fall in hemoglobin with a corresponding rise in urea, other causes of coffee ground vomitus need to be elucidated; for example, gastric stasis, bowel obstruction or ileus, which can cause oxidised food material to be vomited. Vomiting iron supplements can also mimic coffee grounds to the untrained eye. Diseases such as Ebola, yellow fever, viral hepatitis, haemophilia B, fatty liver disease and cancers of the stomach, pancreas and esophagus, and, rarely, retrograde jejunogastric intussusception, might also be the reason behind coffee-ground vomitus. When attributed to peptic inflammation, use of nonsteroidal anti-inflammatory drugs (NSAIDs) is commonly implicated. These drugs can interfere with the stomach's natural defenses against the strongly acidic environment, causing damage to the mucosa that can result in bleeding. Therefore, it is recommended that this class of drugs be taken with food or on a full stomach. Other causes of inflammation may be due to severe gastroesophageal reflux disease, Helicobacter pylori gastritis, portal hypertensive gastropathy or malignancy. Causes: When bright red blood is vomited, it is termed hematemesis. Hematemesis, in contrast to coffee ground vomitus, suggests that upper gastrointestinal bleeding is more acute or more severe, for example due to a Mallory–Weiss tear, gastric ulcer or Dieulafoy's lesion, or esophageal varices. 
This condition may be a medical emergency and urgent care may be required. Oxidized blood from an upper gastrointestinal bleed can also be excreted in stool. It produces blackened, "tarry" stools known as melena. Evaluation: Upper endoscopy can be used to locate bleeding in the upper gastrointestinal system. In this method a camera is inserted through the mouth to visualize the esophagus, stomach, and duodenum. Evaluation: Numerous studies have suggested that urgent endoscopy is not required for coffee ground emesis alone. Other factors, such as hemodynamic stability, hemoglobin concentration, and various elements of the patient's history may guide clinicians to obtain or defer urgent endoscopy. Additionally, nasogastric aspirates can be used to predict the likelihood that endoscopy will reveal high-risk bleeding. While endoscopic visualization may be sufficient for diagnosis, a biopsy may also be taken during endoscopy, aiding in the diagnosis of H. pylori infections and in differentiating tumors. Evaluation: CT angiography may also be used to locate the source of upper-GI bleeding. Management: Treatment of coffee ground emesis depends on underlying etiology. Patient history and initial labs, especially hemoglobin, can help stratify patients by need for immediate intervention. Management: Bleeding ulcers believed to be caused by H. pylori infections are typically treated with a combination of medications. Medications used in the treatment of such ulcers fall into two categories. First, medications are used to decrease pain associated with ulcers by limiting acid exposure to sensitive ulcers. This can be accomplished by medications that reduce stomach acid production, such as proton pump inhibitors (PPI) or H2-blockers, or through conventional antacids. Sucralfate is also effective in this role, as it coats the ulcer, thus protecting it from caustic stomach acid. Second, antibiotic therapy is used to eliminate the underlying bacterial infection.
Clarithromycin and amoxicillin are commonly used in tandem, but antibiotic regimens may vary based on organism susceptibility, side effects, and patient allergies. Gastric ulcers caused by NSAID use can be treated with NSAID cessation, or a proton pump inhibitor if cessation is not possible. Non-healing ulcers should be examined for other causes, such as cancer or Zollinger-Ellison syndrome. Esophageal bleeding is predominantly caused by gastroesophageal reflux disease (GERD). PPI medications are preferred to H2-blocking medication due to increased rates of patient improvement, though both medications are commonly used. Severe cases of GERD may be refractory to these medications and require fundoplication, a surgery in which the gastroesophageal junction is surgically reinforced. While lifestyle modifications, diet modification, and antacid use may reduce GERD symptoms such as heartburn, these methods are not sufficient to heal esophageal ulcers. Variceal bleeding may be treated through a variety of medications and interventions, depending on underlying causes and severity. Severe cases are unlikely to present as coffee ground emesis, and are more likely to present as bright red vomitus. Esophageal lacerations (Mallory-Weiss tears) are mostly self-limiting, though the majority require blood transfusions to compensate for blood loss. Endoscopic interventions, including epinephrine injections, clipping, and cauterization may be utilized if needed.
**Foil (fencing)** Foil (fencing): A foil is one of the three weapons used in the sport of fencing. It is a flexible sword of total length 110 cm or under, rectangular in cross section, weighing under 500 g, with a blunt tip. As with the épée, points are only scored by making contact with the tip. The foil is the most commonly used weapon in fencing. Non-electric and electric foils: Background There are two types of foil used in modern fencing. Both types are made with the same basic parts: the pommel, grip, guard, and blade. The difference between them is that one is electric, and the other is known as "steam" or "dry". The blades of both varieties are capped with a plastic or rubber piece, with a button at the tip in electric blades, that provides information when the blade tip touches the opponent. (There are also a range of plastic swords made by varying manufacturers for use by juniors.) Lacking the button and associated electrical mechanism, a judge is required to determine the scoring and the victor in a tournament with non-electric foils. Non-electric ones are primarily used for practice. The Fédération Internationale d'Escrime and most national organizations have required electric scoring apparatus since the 1956 Olympics, although some organizations still fence competitively with non-electric swords. Non-electric and electric foils: Blade Foils have standardized, tapered blades of rectangular cross-section, made of tempered and annealed low-carbon steel, or maraging steel as required for international competitions. To prevent the blade from breaking or causing harm to an opponent, the blade is made to bend upon impact with its target. The maximum length of the blade must be 90 cm.
The length of the assembled weapon at maximum is 110 cm, and the maximum weight must be less than 500 g; however, most competition foils are lighter, closer to 350 g. The blade of a foil has two sections: the forte (strong), which is the one third of the blade near the guard, and the foible (weak), which is the two thirds of the blade near the tip. There is a part of the blade contained within the grip called a tang. It extends past the grip enough to be fastened to the pommel and to hold the rest of the foil together. When an Italian grip is used (see below), a ricasso extends from under the guard, inside of the grip's quillons, into the tang. Non-electric and electric foils: Guard assembly The guard is fastened to the blade, plug, and grip. Then the pommel, a type of fastener, is attached to the grip and holds the rest together. The type of pommel used depends on the type of grip. Two grips are used in foil: straight traditional grips with external pommels (Italian, French, Spanish, and orthopedic varieties); and the newer design of pistol grips, which fix the hand in a specific, ergonomic position, and which have pommels that fit into a countersink in the grip. Non-electric and electric foils: Electric foils Beginning with the 1956 Olympics, scoring in foil has been accomplished by means of registering the touch with an electric circuit. A switch at the tip of the foil registers the touch, and a metallic foil vest, or lamé, verifies that the touch is on valid target. Cord The cord of any type of electric fencing weapon goes through the fencing gear, coming out behind the fencer. The cord of a foil has one end that connects to the back of the fencing strip and another that attaches to the foil. The two ends are not interchangeable with one another. Non-electric and electric foils: Socket The electric foil contains a socket underneath the guard that connects to the scoring apparatus via the body cord and a wire that runs down a channel cut into the top of the blade.
Electric foil sockets are fixed so that the body cord plugs into the weapon at the fencer's wrist. There are two main sockets in use today: the "bayonet", which has a single prong and twist-locks into the foil, and the two-prong, which has a different diameter for each prong and is held in place by a clip. Non-electric and electric foils: Tip The tip of the electric foil terminates in a button assembly that generally consists of a barrel, plunger, spring, and retaining screws. The circuit is a "normally closed" one, meaning that at rest there is always a complete power circuit; depressing the tip breaks this circuit, and the scoring apparatus illuminates an appropriate light. Color-coding is used: white or yellow indicates hits not on the valid target area, and red or green indicates hits on the valid target area (red for one fencer, green for the other). History: The modern foil is the training weapon for the small-sword, the common sidearm of the 18th-century gentleman. Rapier and even longsword foils are also known to have been used, but their weight and use were very different. History: Although the foil as a blunted weapon for sword practice goes back to the 16th century (for example, in Hamlet, Shakespeare writes "let the foils be brought"), the use as a weapon for sport is more recent. The foil was used in France as a training weapon in the middle of the 18th century in order to practice fast and elegant thrust fencing. Fencers blunted the point by wrapping a foil around the blade or fastening a knob on the point ("blossom", French fleuret). In addition to practicing, some fencers took away the protection and used the sharp foil for duels. German students took up that practice in academic fencing and developed the Pariser ("Parisian") thrusting small sword for the Stoßmensur ("thrusting mensur"). The target area for modern foil is said to come from a time when fencing was practiced with limited safety equipment.
Another factor in the target area is that foil rules are derived from a period when dueling to the death was the norm. Hence, the favored target area is the torso, where the vital organs are. In 1896, foil (and saber) were included as events in the first Olympic Games in Athens. History: Women's foil Women's foil was first competed at the Olympics in 1924 in Paris, and was the only Olympic fencing event in which women competed until women's épée was introduced at the 1996 Olympics. Nowadays, women's fencing is just as popular as men's, and consists of all weapons (foil, épée, and sabre). Ratings Ratings/rankings are generally run by national fencing federations and use varying scales based on that particular federation's system. These ratings are used as the basis for initial seeding into the pool rounds of tournaments and vary from country to country. History: Groups Age groups are necessary to separate skill and body maturity levels in order to create a level playing field. The current age groups for foil (and also épée and sabre) are Y10 (age 10 and under), Y12 (age 12 and under), Y14 (age 14 and under), cadet (age 16 and under), junior (age 19 and under), and senior (anything over 19). While an older competitor cannot compete in a younger category, the contrary is allowed and encouraged, in order to expedite learning. History: The veteran age group consists of 40 and over, 60 and over, and 70 and over sub-groups. Rules: The rules for the sport of fencing are regulated by national sporting associations—in the United States, the United States Fencing Association (USFA), and internationally by the International Fencing Federation, or Fédération Internationale d'Escrime (FIE). The detailed rules for foil are listed in the USFA Rulebook. Rules for the sport of fencing date back to the 19th century. The current international rules for foil were adopted by the FIE Committee for Foil on 12 June 1914. They are based on previous sets of rules adopted by national associations.
The rules governing the use of electrical judging apparatus were adopted in 1957 and have been amended several times. Rules: Scoring The foil is used as a thrusting (or point) weapon only. Contact with the side of the blade (a slap or slash) does not result in a score. The tip of the foil must be depressed for at least 15 (± .5) milliseconds while in contact with the opponent's lamé (wire-mesh jacket which covers valid target area) to score a touch. The foil lamé only covers the torso, while in saber it covers the whole upper body. The tip must be able to support a minimum force of 4.90 newtons (500 grams-force) without the circuit breaking. This is tested with a 500 g (± 3 g) weight. Rules: Target area In foil the valid target area includes the torso (including the lower part of the bib of the mask) and the groin. The head (except the lower part of the bib of the mask), arms, and legs are considered off target. Touches made off-target do not count for points, but do stop play. Touches to the guard are the only touches that do not stop play. The target area has been changed multiple times, with the latest change consisting of adding the bottom half of the bib to the target zone. Rules: Priority (right of way) Foil competition and scoring is governed by the rules of priority, also known as right of way. Originally meant to indicate which competitor would have scored the touch (or lethally injured the other), it is now a main contributor to the appeal of the sport of fencing. In essence, it decides who receives the point (there can only be one competitor that receives a point per engagement) when both competitors hit. Rules: The basic rule is that whoever the referee judges to be the attacking fencer has "priority". This "priority" can be changed in several ways. The first is when the defending fencer deflects the attack of the fencer with "priority" using the forte (strong) of their blade (a "parry"). This switches the "priority" to the fencer who just parried.
The second way priority can be switched is if the attacking fencer's attack misses (this is generally judged off of the attacking fencer's arm extension). The final major way "priority" can be shifted is if the defending fencer "beats" their opponent's blade, which involves striking the foible (weak) of the opponent's blade with their own (a beat can also be used by the attacking fencer to make it clear to the referee that they are continuing their attack). If both fencers are judged by the referee to be seeking to beat each other's blades, then the fencer who is on the attack is favored.
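As a toy illustration of the priority bookkeeping described above (not an official rule engine — real right-of-way calls are the referee's judgment, and the event names here are invented for the sketch):

```python
def priority_holder(events):
    """Track which fencer holds priority, starting with the attacker.

    Each recognized event transfers priority to the other fencer: a
    successful parry by the defender, a missed attack by the priority
    holder, or a beat performed by the fencer without priority.
    Unrecognized events are ignored.
    """
    transfers = {"parry", "missed attack", "beat"}
    holder = "attacker"
    for event in events:
        if event in transfers:
            holder = "defender" if holder == "attacker" else "attacker"
    return holder
```

For example, an attack that is parried and then re-taken with a beat ends with the original attacker holding priority again.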
**Carbon dating the Dead Sea Scrolls** Carbon dating the Dead Sea Scrolls: Carbon dating the Dead Sea Scrolls refers to a series of radiocarbon dating tests performed on the Dead Sea Scrolls, first by the AMS (Accelerator Mass Spectrometry) lab of the Zurich Institute of Technology in 1991 and then by the AMS Facility at the University of Arizona in Tucson in 1994–95. There was also a historical test of a piece of linen performed in 1946 by Willard Libby, the inventor of the dating method. Testing: One of the earliest carbon dating tests was carried out on November 14, 1950. This was on a piece of linen from Qumran Cave 1, the resulting date range being 167 BCE – 233 CE. Libby had first started using the dating method in 1946 and the early testing required relatively large samples, so testing on scrolls themselves only became feasible when methods used in the dating process were improved upon. F.E. Zeuner carried out tests on date palm wood from the Qumran site yielding a date range of 70 BCE – 90 CE. In 1963 Libby tested a sample from the Isaiah Scroll, which provided a range of 200 BCE – 1 CE. In 1991, Robert Eisenman and Philip R. Davies made a request to date a number of scrolls, which led to a series of tests carried out in Zurich on samples from fourteen scrolls. Among these were samples from other sites around the Dead Sea, which contained date indications within the text to supply a control for the carbon dating results. A similar battery of tests was carried out in 1994–95 in Tucson, this time with samples from twenty-two scrolls as well as another piece of linen. 14C Test results: The following table shows all the Qumran-related samples that were tested by Zurich (Z), Tucson (T) and Libby (L). The column headed "14C Age" provides a raw age before 1950 for each sample tested. This represents the ideal date for the amount of 14C measured for the sample.
However, as the quantity of 14C absorbed by all life fluctuates from year to year, the figure must be calibrated based on known fluctuation. This calibrated range of dates is represented in the last column, given with a 2-sigma error rating, which corresponds to 95% confidence. With the exception of the first text from Wadi-ed-Daliyeh, the texts in the table below are only those from the caves around Qumran. The table orders them chronologically, based on 14C age. 14C Test results: Non-scroll material tested: Many of the date ranges provided are actually two date ranges, for example the Habakkuk Commentary (#13), which is given as 160–148 or 111–2 BCE. The section of the calibration curve for the 14C age of the Habakkuk Commentary is complex, so that the 14C age of 2054 cuts through a few spikes on the curve, providing two date ranges. Observations: The Great Isaiah Scroll 1QIsaa has been tested three times, once by Libby, once at Zurich and once at Tucson. The results from the latter two were almost identical, which is a good indicator of the basic accuracy of this dating method. 1QS (#15), tested at Zurich, and 4QSamc (#8), tested at Tucson, provide overlapping date ranges, which is expected when both texts are attributed to the same scribe. When 4Q258 (#24) was tested at Tucson its result was so anomalous (129–255 or 303–318 CE) that the laboratory was asked to retest another sample from the same document. The second test (#21) yielded a result (50 BCE–130 CE) that was deemed more satisfactory.
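The relation between a raw 14C age and the measured 14C fraction can be sketched as follows (a minimal illustration using the conventional Libby mean life of 8033 years; calibration against tree-ring curves, which produces the split date ranges discussed above, requires lookup data such as the IntCal curves and is not shown):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years: Libby half-life of 5568 y divided by ln 2

def conventional_age(fraction_modern):
    """Raw (uncalibrated) 14C age in years before 1950 ("BP")."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def fraction_modern(age_bp):
    """14C fraction (relative to the modern standard) implied by a raw age."""
    return math.exp(-age_bp / LIBBY_MEAN_LIFE)

# The Habakkuk Commentary's raw 14C age of 2054 BP corresponds to a sample
# retaining roughly 77% of the modern 14C level:
habakkuk_fraction = fraction_modern(2054)
```

The two functions are inverses by construction; the raw "14C Age" column in the table is exactly this conventional age.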
**Langton's ant** Langton's ant: Langton's ant is a two-dimensional universal Turing machine with a very simple set of rules but complex emergent behavior. It was invented by Chris Langton in 1986 and runs on a square lattice of black and white cells. The universality of Langton's ant was proven in 2000. The idea has been generalized in several different ways, such as turmites, which add more colors and more states. Rules: Squares on a plane are colored variously either black or white. We arbitrarily identify one square as the "ant". The ant can travel in any of the four cardinal directions at each step it takes. The "ant" moves according to the rules below: at a white square, turn 90° clockwise, flip the color of the square, and move forward one unit; at a black square, turn 90° counter-clockwise, flip the color of the square, and move forward one unit. Langton's ant can also be described as a cellular automaton, where the grid is colored black or white and the "ant" square has one of eight different colors assigned to encode the combination of black/white state and the current direction of motion of the ant. Modes of behavior: These simple rules lead to complex behavior. Three distinct modes of behavior are apparent when starting on a completely white grid. Simplicity. During the first few hundred moves it creates very simple patterns which are often symmetric. Chaos. After a few hundred moves, a large, irregular pattern of black and white squares appears. The ant traces a pseudo-random path until around 10,000 steps. Modes of behavior: Emergent order. Finally the ant starts building a recurrent "highway" pattern of 104 steps that repeats indefinitely. All finite initial configurations tested eventually converge to the same repetitive pattern, suggesting that the "highway" is an attractor of Langton's ant, but no one has been able to prove that this is true for all such initial configurations.
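The two rules above translate directly into a short simulation (a minimal sketch: the grid is stored sparsely as the set of black cells, and the ant starts at the origin facing north):

```python
def langtons_ant(steps):
    """Simulate Langton's ant on an initially all-white grid.

    Returns (black, position): the set of (x, y) cells that are black
    after `steps` moves, and the ant's final position.
    """
    black = set()
    x = y = 0
    dx, dy = 0, 1                # facing north
    for _ in range(steps):
        if (x, y) in black:      # black square: turn 90° counter-clockwise
            dx, dy = -dy, dx
            black.remove((x, y))  # flip the square to white
        else:                    # white square: turn 90° clockwise
            dx, dy = dy, -dx
            black.add((x, y))    # flip the square to black
        x, y = x + dx, y + dy    # move forward one unit
    return black, (x, y)
```

Running this long enough reproduces the phases described above: once the ant is on the highway (after roughly 10,000 steps), each 104-step period translates it two cells diagonally.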
It is only known that the ant's trajectory is always unbounded regardless of the initial configuration – this is known as the Cohen-Kong theorem. Computational universality: In 2000, Gajardo et al. showed a construction that calculates any boolean circuit using the trajectory of a single instance of Langton's ant. Additionally, it would be possible to simulate an arbitrary Turing machine using the ant's trajectory for computation. This means that the ant is capable of universal computation. Extension to multiple colors: Greg Turk and Jim Propp considered a simple extension to Langton's ant where instead of just two colors, more colors are used. The colors are modified in a cyclic fashion. A simple naming scheme is used: for each of the successive colors, a letter "L" or "R" is used to indicate whether a left or right turn should be taken. Langton's ant has the name "RL" in this naming scheme. Extension to multiple colors: Some of these extended Langton's ants produce patterns that become symmetric over and over again. One of the simplest examples is the ant "RLLR". One sufficient condition for this to happen is that the ant's name, seen as a cyclic list, consists of consecutive pairs of identical letters "LL" or "RR" (the term "cyclic list" indicates that the last letter may pair with the first one). The proof involves Truchet tiles. Extension to multiple colors: Multiple-color ants can also be run on a hexagonal grid, which permits up to six different rotations, notated N (no change), R1 (60° clockwise), R2 (120° clockwise), U (180°), L2 (120° counter-clockwise), L1 (60° counter-clockwise). Extension to multiple states: A further extension of Langton's ants is to consider multiple states of the Turing machine – as if the ant itself has a color that can change. These ants are called turmites, a contraction of "Turing machine termites".
Common behaviours include the production of highways, chaotic growth and spiral growth. Extension to multiple ants: Multiple Langton's ants can co-exist on the 2D plane, and their interactions give rise to complex, higher-order automata that collectively build a wide variety of organized structures. There are different ways of modelling their interaction and the results of the simulation may strongly depend on the choices made. One may choose that all the ants sitting on the same square simultaneously make the same change to the tape. There is a YouTube video showing the simulation of such multiple ant interactions. There also exists a family of colonies that forms an absolute oscillator with linear period 4(8n+3). Extension to multiple ants: Multiple turmites can co-exist on the 2D plane as long as there is a rule that defines what happens when they meet. Ed Pegg, Jr. considered ants that can turn, for example, both left and right, splitting in two and annihilating each other when they meet.
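The Turk–Propp "L"/"R" naming scheme drops straight into code as a generalization of the two-color ant (a sketch: colors cycle 0 → 1 → … → n−1 → 0, and `rule[c]` gives the turn taken on a cell of color c):

```python
def multicolor_ant(rule, steps):
    """Generalized Langton's ant in the "L"/"R" naming scheme.

    `rule` is a string such as "RL" (the classic ant) or "RLLR"; the
    cell's color indexes into it, and each visit advances the color
    cyclically. Returns the color grid and the ant's final position.
    """
    n = len(rule)
    grid = {}                       # absent cells have color 0
    x = y = 0
    dx, dy = 0, 1                   # facing north
    for _ in range(steps):
        c = grid.get((x, y), 0)
        if rule[c] == "R":
            dx, dy = dy, -dx        # 90° clockwise
        else:
            dx, dy = -dy, dx        # 90° counter-clockwise
        grid[(x, y)] = (c + 1) % n  # cycle the cell's color
        x, y = x + dx, y + dy       # move forward one unit
    return grid, (x, y)
```

With `rule="RL"` this reproduces the classic ant exactly, color 1 playing the role of black.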
**Promoter (genetics)** Promoter (genetics): In genetics, a promoter is a sequence of DNA to which proteins bind to initiate transcription of a single RNA transcript from the DNA downstream of the promoter. The RNA transcript may encode a protein (mRNA), or can have a function in and of itself, such as tRNA or rRNA. Promoters are located near the transcription start sites of genes, upstream on the DNA (towards the 5' region of the sense strand). Promoter (genetics): Promoters can be about 100–1000 base pairs long, the sequence of which is highly dependent on the gene and product of transcription, type or class of RNA polymerase recruited to the site, and species of organism. Promoters control gene expression in bacteria and eukaryotes. RNA polymerase must attach to DNA near a gene for transcription to occur, and promoter DNA sequences provide an enzyme binding site. In bacteria, the -10 consensus sequence is TATAAT; the -35 sequence is conserved on average but is not found intact in most promoters. Promoter (genetics): Artificial promoters with fully conserved -10 and -35 elements transcribe more slowly than natural ones. "Closely spaced promoters" occur in the DNA of all life forms; divergent, tandem, and convergent orientations are possible, and two closely spaced promoters will likely interfere with each other. Regulatory elements can be several kilobases away from the transcriptional start site in gene promoters (enhancers). Promoter (genetics): In eukaryotes, the transcriptional complex can bend DNA, allowing regulatory sequences to be placed far from the transcription site. The distal promoter is upstream of the gene and may contain additional regulatory elements with a weaker influence. RNA polymerase II (RNAP II) bound at the promoter's transcription start site can start mRNA synthesis; an RNAP II promoter typically contains CpG islands, a TATA box, and TFIIB recognition elements. Promoter (genetics): Hypermethylation of a promoter downregulates the associated gene, while demethylation upregulates it. Non-coding RNAs are linked to mRNA promoter regions, according to research.
Subgenomic promoters range from 24 to 100 nucleotides (as in Beet necrotic yellow vein virus). Gene expression depends on promoter binding, and unwanted changes to promoters can increase a cell's cancer risk. MicroRNA promoters often contain CpG islands. DNA methylation forms 5-methylcytosine at carbon 5 of the pyrimidine ring of CpG cytosine residues. Some cancer genes are silenced by mutation, but most are silenced by DNA methylation of their promoters; others are controlled by regulated promoters. Selection may favor less energetic transcriptional binding. Variations in promoters or transcription factors cause some diseases, and describing a promoter by its canonical sequence alone can be misleading. Overview: For transcription to take place, the enzyme that synthesizes RNA, known as RNA polymerase, must attach to the DNA near a gene. Promoters contain specific DNA sequences such as response elements that provide a secure initial binding site for RNA polymerase and for proteins called transcription factors that recruit RNA polymerase. These transcription factors have specific activator or repressor sequences of corresponding nucleotides that attach to specific promoters and regulate gene expression. Overview: In bacteria The promoter is recognized by RNA polymerase and an associated sigma factor, which in turn are often brought to the promoter DNA by an activator protein's binding to its own DNA binding site nearby. In eukaryotes The process is more complicated, and at least seven different factors are necessary for the binding of an RNA polymerase II to the promoter. Promoters represent critical elements that can work in concert with other regulatory regions (enhancers, silencers, boundary elements/insulators) to direct the level of transcription of a given gene. A promoter is induced in response to changes in abundance or conformation of regulatory proteins in a cell, which enable activating transcription factors to recruit RNA polymerase.
Identification of relative location: As promoters are typically immediately adjacent to the gene in question, positions in the promoter are designated relative to the transcriptional start site, where transcription of DNA begins for a particular gene (i.e., positions upstream are negative numbers counting back from -1, for example -100 is a position 100 base pairs upstream). Relative location in the cell nucleus: In the cell nucleus, it seems that promoters are distributed preferentially at the edge of the chromosomal territories, likely for the co-expression of genes on different chromosomes. Furthermore, in humans, promoters show certain structural features characteristic for each chromosome. Elements: Bacterial In bacteria, the promoter contains two short sequence elements approximately 10 (Pribnow Box) and 35 nucleotides upstream from the transcription start site. The sequence at -10 (the -10 element) has the consensus sequence TATAAT. The sequence at -35 (the -35 element) has the consensus sequence TTGACA. Elements: The above consensus sequences, while conserved on average, are not found intact in most promoters. On average, only 3 to 4 of the 6 base pairs in each consensus sequence are found in any given promoter. Few natural promoters have been identified to date that possess intact consensus sequences at both the -10 and -35; artificial promoters with complete conservation of the -10 and -35 elements have been found to transcribe at lower frequencies than those with a few mismatches with the consensus. Elements: The optimal spacing between the -35 and -10 sequences is 17 bp. Elements: Some promoters contain one or more upstream promoter element (UP element) subsites (consensus sequence 5'-AAAAAARNR-3' when centered in the -42 region; consensus sequence 5'-AWWWWWTTTTT-3' when centered in the -52 region; W = A or T; R = A or G; N = any base). The above promoter sequences are recognized only by RNA polymerase holoenzyme containing sigma-70.
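The consensus matching described above (on average only 3 to 4 of the 6 bases match in a natural promoter) can be made concrete with a small sketch (illustrative only; the helper names and the example candidate site are our own):

```python
# Bacterial core-promoter consensus hexamers recognized by sigma-70.
MINUS10 = "TATAAT"
MINUS35 = "TTGACA"

def consensus_matches(site, consensus):
    """Count positions where a candidate 6-mer matches the consensus base."""
    return sum(1 for a, b in zip(site.upper(), consensus) if a == b)

# A fully consensus -10 box matches 6/6; a typical natural element
# matches only 3-4 of 6:
perfect = consensus_matches("TATAAT", MINUS10)   # 6
typical = consensus_matches("TACGAT", MINUS10)   # 4 (hypothetical site)
```

This mirrors why artificial promoters with perfect consensus elements behave atypically: natural promoters almost never score 6/6 at both elements.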
RNA polymerase holoenzymes containing other sigma factors recognize different core promoter sequences. Elements:
← upstream                                         downstream →
5'-XXXXXXXPPPPPPXXXXXXPPPPPPXXXXGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGXXXX-3'
          -35          -10     gene to be transcribed
Probability of occurrence of each nucleotide:
-10 sequence:  T 77%  A 76%  T 60%  A 61%  A 56%  T 82%
-35 sequence:  T 69%  T 79%  G 61%  A 56%  C 54%  A 54%
Bidirectional (prokaryotic) Promoters can be very closely located in the DNA. Such "closely spaced promoters" have been observed in the DNAs of all life forms, from humans to prokaryotes, and are highly conserved. Therefore, they may provide some (presently unknown) advantages. Elements: These pairs of promoters can be positioned in divergent, tandem, and convergent directions. They can also be regulated by transcription factors and differ in various features, such as the nucleotide distance between them, the two promoter strengths, etc. The most important aspect of two closely spaced promoters is that they will, most likely, interfere with each other. Several studies have explored this using both analytical and stochastic models. There are also studies that measured gene expression in synthetic genes or from one to a few genes controlled by bidirectional promoters. Elements: More recently, one study measured most genes controlled by tandem promoters in E. coli. That study measured and then modeled two main forms of interference. One is when an RNAP is on the downstream promoter, blocking the movement of RNAPs elongating from the upstream promoter. The other is when the two promoters are so close that when an RNAP sits on one of the promoters, it blocks any other RNAP from reaching the other promoter. These events are possible because the RNAP occupies several nucleotides when bound to the DNA, including in transcription start sites. Elements: Similar events occur when the promoters are in divergent and convergent formations.
The possible events also depend on the distance between them. Eukaryotic Eukaryotic promoters are diverse and can be difficult to characterize; however, recent studies show that they are divided into more than 10 classes. Elements: Gene promoters are typically located upstream of the gene and can have regulatory elements several kilobases away from the transcriptional start site (enhancers). In eukaryotes, the transcriptional complex can cause the DNA to bend back on itself, which allows for placement of regulatory sequences far from the actual site of transcription. Eukaryotic RNA-polymerase-II-dependent promoters can contain a TATA box (consensus sequence TATAAA), which is recognized by the general transcription factor TATA-binding protein (TBP); and a B recognition element (BRE), which is recognized by the general transcription factor TFIIB. The TATA element and BRE typically are located close to the transcriptional start site (typically within 30 to 40 base pairs). Elements: Eukaryotic promoter regulatory sequences typically bind proteins called transcription factors that are involved in the formation of the transcriptional complex. An example is the E-box (sequence CACGTG), which binds transcription factors in the basic helix-loop-helix (bHLH) family (e.g. BMAL1-Clock, cMyc). Some promoters that are targeted by multiple transcription factors might achieve a hyperactive state, leading to increased transcriptional activity. Elements: Core promoter – the minimal portion of the promoter required to properly initiate transcription. It includes the transcription start site (TSS) and elements directly upstream; a binding site for RNA polymerase (RNA polymerase I transcribes genes encoding 18S, 5.8S and 28S ribosomal RNAs; RNA polymerase II transcribes genes encoding messenger RNA and certain small nuclear RNAs and microRNA; RNA polymerase III transcribes genes encoding transfer RNA, 5S ribosomal RNA and other small RNAs); and general transcription factor binding sites, e.g.
TATA box, B recognition element. Elements: Many other elements/motifs may be present. There is no such thing as a set of "universal elements" found in every core promoter. Elements: Proximal promoter – the proximal sequence upstream of the gene that tends to contain primary regulatory elements; approximately 250 base pairs upstream of the start site; specific transcription factor binding sites. Distal promoter – the distal sequence upstream of the gene that may contain additional regulatory elements, often with a weaker influence than the proximal promoter; anything further upstream (but not an enhancer or other regulatory region whose influence is positional/orientation independent); specific transcription factor binding sites. Mammalian promoters: Up-regulated expression of genes in mammals is initiated when signals are transmitted to the promoters associated with the genes. Promoter DNA sequences may include different elements such as CpG islands (present in about 70% of promoters), a TATA box (present in about 24% of promoters), initiator (Inr) (present in about 49% of promoters), upstream and downstream TFIIB recognition elements (BREu and BREd) (present in about 22% of promoters), and downstream core promoter element (DPE) (present in about 12% of promoters). The presence of multiple methylated CpG sites in CpG islands of promoters causes stable silencing of genes. However, the presence or absence of the other elements has relatively small effects on gene expression in experiments. Two sequences, the TATA box and Inr, caused small but significant increases in expression (45% and 28%, respectively).
The BREu and BREd elements significantly decreased expression, by 35% and 20% respectively, and the DPE element had no detected effect on expression. Cis-regulatory modules that are localized in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes undergoing up to 100-fold increased expression due to such a cis-regulatory module. These cis-regulatory modules include enhancers, silencers, insulators and tethering elements. Among this constellation of elements, enhancers and their associated transcription factors have a leading role in the regulation of gene expression. Enhancers are regions of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come into physical proximity with the promoters of their target genes. In a study of brain cortical neurons, 24,937 loops were found bringing enhancers to promoters. Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control expression of their common target gene. The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. a dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell-function-specific transcription factors (there are about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer, and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, governs the level of transcription of the target gene.
Mediator (a coactivator complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter. Enhancers, when active, are generally transcribed from both strands of DNA, with RNA polymerases acting in two different directions and producing two eRNAs, as illustrated in the Figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it, and that activated transcription factor may then activate the enhancer to which it is bound (see the small red star representing phosphorylation of a transcription factor bound to an enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating a promoter to initiate transcription of messenger RNA from its target gene. Elements: Bidirectional (mammalian): Bidirectional promoters are short (<1 kbp) intergenic regions of DNA between the 5' ends of the genes in a bidirectional gene pair. A "bidirectional gene pair" refers to two adjacent genes coded on opposite strands, with their 5' ends oriented toward one another. The two genes are often functionally related, and modification of their shared promoter region allows them to be co-regulated and thus co-expressed. Bidirectional promoters are a common feature of mammalian genomes. About 11% of human genes are bidirectionally paired. Bidirectionally paired genes in the Gene Ontology database shared at least one database-assigned functional category with their partners 47% of the time. Microarray analysis has shown bidirectionally paired genes to be co-expressed to a higher degree than random genes or neighboring unidirectional genes. Although co-expression does not necessarily indicate co-regulation, methylation of bidirectional promoter regions has been shown to downregulate both genes, and demethylation to upregulate both genes.
There are exceptions to this, however. In some cases (about 11%), only one gene of a bidirectional pair is expressed. In these cases, the promoter is implicated in suppression of the non-expressed gene. The mechanism behind this could be competition for the same polymerases, or chromatin modification. Divergent transcription could shift nucleosomes to upregulate transcription of one gene, or remove bound transcription factors to downregulate transcription of one gene. Some functional classes of genes are more likely to be bidirectionally paired than others. Genes implicated in DNA repair are five times more likely to be regulated by bidirectional promoters than by unidirectional promoters. Chaperone proteins are three times more likely, and mitochondrial genes are more than twice as likely. Many basic housekeeping and cellular metabolic genes are regulated by bidirectional promoters. Elements: The overrepresentation of bidirectionally paired DNA repair genes associates these promoters with cancer. Forty-five percent of human somatic oncogenes seem to be regulated by bidirectional promoters – significantly more than non-cancer-causing genes. Hypermethylation of the promoters between the gene pairs WNT9A/CD558500, CTDSPL/BC040563, and KCNK15/BF195580 has been associated with tumors. Certain sequence characteristics have been observed in bidirectional promoters, including a lack of TATA boxes, an abundance of CpG islands, and a symmetry around the midpoint of dominant Cs and As on one side and Gs and Ts on the other. A motif with the consensus sequence TCTCGCGAGA, also called the CGCG element, was recently shown to drive PolII-driven bidirectional transcription in CpG islands. CCAAT boxes are common, as they are in many promoters that lack TATA boxes. In addition, the motifs NRF-1, GABPA, YY1, and ACTACAnnTCCC are represented in bidirectional promoters at significantly higher rates than in unidirectional promoters.
The absence of TATA boxes in bidirectional promoters suggests that TATA boxes play a role in determining the directionality of promoters, but counterexamples (bidirectional promoters that do possess TATA boxes, and unidirectional promoters without them) indicate that they cannot be the only factor. Although the term "bidirectional promoter" refers specifically to promoter regions of mRNA-encoding genes, luciferase assays have shown that over half of human genes do not have a strong directional bias. Research suggests that non-coding RNAs are frequently associated with the promoter regions of mRNA-encoding genes. It has been hypothesized that the recruitment and initiation of RNA polymerase II usually begins bidirectionally, but divergent transcription is halted at a checkpoint later during elongation. Possible mechanisms behind this regulation include sequences in the promoter region, chromatin modification, and the spatial orientation of the DNA. Subgenomic: A subgenomic promoter is a promoter added to a virus for a specific heterologous gene, resulting in the formation of mRNA for that gene alone. Many positive-sense RNA viruses produce these subgenomic mRNAs (sgRNA) as one of the common infection techniques used by these viruses, and they generally transcribe late viral genes. Subgenomic promoters range from 24 nucleotides (Sindbis virus) to over 100 nucleotides (Beet necrotic yellow vein virus) and are usually found upstream of the transcription start. Detection: A wide variety of algorithms have been developed to facilitate detection of promoters in genomic sequence, and promoter prediction is a common element of many gene prediction methods. A promoter region is located before the -35 and -10 consensus sequences. The closer the promoter region is to the consensus sequences, the more often transcription of that gene will take place. There is not a set pattern for promoter regions as there is for consensus sequences.
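As a toy illustration of consensus-based detection, the -10 (TATAAT) nucleotide frequencies tabulated earlier can be turned into a simple position-scoring scheme. This is a minimal sketch, not any of the published promoter-prediction algorithms, and the example sequence is invented:

```python
# Minimal sketch of consensus-based promoter scoring, using the -10 (TATAAT)
# nucleotide frequencies quoted in the Elements section. Real promoter-prediction
# tools use richer models (position weight matrices with background correction,
# hidden Markov models, etc.).

# Per-position probability of the consensus base in the -10 element (TATAAT).
MINUS10_FREQS = [
    {"T": 0.77}, {"A": 0.76}, {"T": 0.60},
    {"A": 0.61}, {"A": 0.56}, {"T": 0.82},
]

def score_hexamer(hexamer, freqs):
    """Sum the tabulated frequency for each base matching the consensus."""
    return sum(pos.get(base, 0.0) for base, pos in zip(hexamer.upper(), freqs))

def best_minus10(sequence):
    """Slide a 6-bp window along the sequence; return (offset, score) of the best hit."""
    hits = [(i, score_hexamer(sequence[i:i + 6], MINUS10_FREQS))
            for i in range(len(sequence) - 5)]
    return max(hits, key=lambda h: h[1])

offset, score = best_minus10("GGCGCTATAATGCGC")
print(offset, round(score, 2))  # the perfect TATAAT at offset 5 scores 4.12
```

A perfect consensus match scores the sum of all six tabulated frequencies; degenerate but still functional promoters score lower, mirroring the observation that stronger consensus matches are transcribed more often.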
Evolutionary change: Changes in promoter sequences are critical in evolution, as indicated by the relatively stable number of genes in many lineages. For instance, most vertebrates have roughly the same number of protein-coding genes (about 20,000), which are often highly conserved in sequence; hence much of evolutionary change must come from changes in gene expression. De novo origin of promoters: Given the short sequences of most promoter elements, promoters can rapidly evolve from random sequences. For instance, in E. coli, ~60% of random sequences can evolve expression levels comparable to the wild-type lac promoter with only one mutation, and ~10% of random sequences can serve as active promoters even without evolution. Binding: The initiation of transcription is a multistep sequential process that involves several mechanisms: promoter location, initial reversible binding of RNA polymerase, conformational changes in RNA polymerase, conformational changes in DNA, binding of nucleoside triphosphate (NTP) to the functional RNA polymerase-promoter complex, and nonproductive and productive initiation of RNA synthesis. The promoter binding process is crucial to understanding gene expression. Tuning synthetic genetic systems relies on precisely engineered synthetic promoters with known transcription rates. Binding: Location: Although RNA polymerase holoenzyme shows high affinity for non-specific sites on the DNA, this characteristic does not by itself explain how the polymerase locates promoters. The process of promoter location has been attributed to the structures of the holoenzyme-DNA and sigma 4-DNA complexes. Diseases associated with aberrant function: Most diseases are heterogeneous in cause, meaning that one "disease" is often many different diseases at the molecular level, though the symptoms exhibited and response to treatment may be identical.
How diseases of different molecular origin respond to treatments is partially addressed in the discipline of pharmacogenomics. Diseases associated with aberrant function: Not listed here are the many kinds of cancers involving aberrant transcriptional regulation owing to the creation of chimeric genes through pathological chromosomal translocation. Importantly, intervention in the number or structure of promoter-bound proteins is one key to treating a disease without affecting expression of unrelated genes sharing elements with the target gene. Some genes whose change is not desirable are capable of influencing the potential of a cell to become cancerous. CpG islands in promoters: In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island. CpG islands are generally 200 to 2000 base pairs long, have a C:G base pair content >50%, and are regions of DNA where a cytosine nucleotide followed by a guanine nucleotide occurs frequently in the linear sequence of bases along the 5' → 3' direction. CpG islands in promoters: Distal promoters also frequently contain CpG islands, such as the promoter of the DNA repair gene ERCC1, where the CpG island-containing promoter is located about 5,400 nucleotides upstream of the coding region of the ERCC1 gene. CpG islands also occur frequently in promoters for functional noncoding RNAs such as microRNAs. Methylation of CpG islands stably silences genes: In humans, DNA methylation occurs at the 5 position of the pyrimidine ring of cytosine residues within CpG sites to form 5-methylcytosines. The presence of multiple methylated CpG sites in CpG islands of promoters causes stable silencing of genes. Silencing of a gene may be initiated by other mechanisms, but this is often followed by methylation of CpG sites in the promoter CpG island to cause the stable silencing of the gene.
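The CpG-island characteristics described above (200 to 2000 bp length, C:G content >50%, frequent CpG dinucleotides) can be sketched as a simple classifier. The observed/expected CpG ratio threshold of 0.6 used below is the commonly cited Gardiner-Garden and Frommer cutoff, an assumption beyond the text above:

```python
# Sketch of classic CpG-island criteria: a window of at least 200 bp with
# G+C content > 50% and frequent CpG dinucleotides. The observed/expected
# CpG ratio threshold of 0.6 follows the commonly used Gardiner-Garden and
# Frommer definition (an assumption, not stated in the text above).

def gc_content(seq):
    """Fraction of G and C bases in the sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def cpg_obs_exp(seq):
    """Observed CpG dinucleotide count over the count expected from base composition."""
    seq = seq.upper()
    observed = seq.count("CG")
    expected = seq.count("C") * seq.count("G") / len(seq)
    return observed / expected if expected else 0.0

def looks_like_cpg_island(seq):
    return len(seq) >= 200 and gc_content(seq) > 0.5 and cpg_obs_exp(seq) >= 0.6

island_like = "CGGCGCGA" * 25   # 200 bp, CpG-rich toy sequence
depleted    = "CATGGATC" * 25   # 200 bp with no CpG dinucleotides
print(looks_like_cpg_island(island_like), looks_like_cpg_island(depleted))
```

The observed/expected ratio matters because bulk genomic DNA is CpG-depleted by methylation-driven C→T mutation, so raw GC content alone would miss the distinguishing feature of islands.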
Promoter CpG hyper/hypo-methylation in cancer: Generally, in progression to cancer, hundreds of genes are silenced or activated. Although silencing of some genes in cancers occurs by mutation, a large proportion of carcinogenic gene silencing is a result of altered DNA methylation (see DNA methylation in cancer). DNA methylation causing silencing in cancer typically occurs at multiple CpG sites in the CpG islands that are present in the promoters of protein coding genes. Promoter CpG hyper/hypo-methylation in cancer: Altered expressions of microRNAs also silence or activate many genes in progression to cancer (see microRNAs in cancer). Altered microRNA expression occurs through hyper/hypo-methylation of CpG sites in CpG islands in promoters controlling transcription of the microRNAs. Silencing of DNA repair genes through methylation of CpG islands in their promoters appears to be especially important in progression to cancer (see methylation of DNA repair genes in cancer). Canonical sequences and wild-type: The usage of the term canonical sequence to refer to a promoter is often problematic, and can lead to misunderstandings about promoter sequences. Canonical implies perfect, in some sense. In the case of a transcription factor binding site, there may be a single sequence that binds the protein most strongly under specified cellular conditions. This might be called canonical. However, natural selection may favor less energetic binding as a way of regulating transcriptional output. In this case, we may call the most common sequence in a population the wild-type sequence. It may not even be the most advantageous sequence to have under prevailing conditions. Recent evidence also indicates that several genes (including the proto-oncogene c-myc) have G-quadruplex motifs as potential regulatory signals. 
Synthetic promoter design and engineering: Promoters are important gene regulatory elements used in tuning synthetically designed genetic circuits and metabolic networks. For example, to overexpress an important gene in a network and yield higher production of a target protein, synthetic biologists design promoters to upregulate its expression. Automated algorithms can be used to design neutral DNA or insulators that do not trigger gene expression of downstream sequences. Diseases that may be associated with variations: Some cases of many genetic diseases are associated with variations in promoters or transcription factors. Examples include asthma, beta thalassemia, and Rubinstein-Taybi syndrome. Constitutive vs regulated: Some promoters are called constitutive, as they are active in all circumstances in the cell, while others are regulated, becoming active in the cell only in response to specific stimuli. Tissue-Specific Promoter: Slc6a4, PDGF, PDGFRb, CX3CR1, TRPA1, Krt5, actb, aMHC, 1a1kin, Cck, zDC, cFos, Hand1, Rosa, Insl5, Cart, Sctr, Ins1, Nrsn1, Foxp3, Tph, Cnr1, Pzp, CD23, Cx40, Foxn1, Rspo3, Krt13, Pnoc, ChAT, MMTV, Myh6, Sftpc, Mlc2a, Atf3, Pirt, Dbh, Villin, CD4, Vav1, Sox2, Dat, Pdx1, Cal, Gfral, Cr, BAF53b, Cntn2, Nav1.8, ObRb, Krt5, Advillin, Mrgprd, PV, Pax7, Calb1, Mx1, Nmu, Aldh1, CAG, CD19, Krt14, Vil1, Stra8, E8i, BAF53b, Pf4, UBC, Vip, TCF21, Cart, Htr3b, Pdx1, Mgarp, Mx1, Nmu, GFAP, vGlut2. Use of the term: When referring to a promoter, some authors actually mean promoter + operator; i.e., the lac promoter is IPTG-inducible, meaning that besides the lac promoter, the lac operator is also present. If the lac operator were not present, IPTG would not have an inducible effect. Another example is the Tac-Promoter system (Ptac). Note that tac is written as the tac promoter, while in fact tac is both a promoter and an operator.
**Striker (video game)** Striker (video game): Striker is a soccer video game series first released by Rage Software in 1992, later also appearing on the Commodore Amiga, Amiga CD32, Atari ST, PC, Mega Drive/Genesis, and Super NES. It was bundled in one of the Amiga 1200 launch packs. It was one of the first soccer games to feature a 3D viewpoint, after Simulmondo's I Play 3D Soccer. Striker (video game): In 1993 it was released in Japan by Coconuts Japan for the Super Famicom as World Soccer (ワールドサッカー, Wārudo Sakkā), while the French Super NES version of Striker is known as Eric Cantona Football Challenge, playing on the popularity of French forward Eric Cantona, and the North American Super NES release of Striker was known as World Soccer '94: Road to Glory. The Mega Drive and Game Gear versions were branded as Sega Sports Striker. They were published by SEGA and developed by Rage Software in 1994 and released in 1995. Critical reaction: The game received a mixed reaction from the gaming press, with some condemning and others praising its extreme speed. For example, CU Amiga Magazine awarded the game 94% in its June 1992 issue along with the CU Amiga Screenstar award, while German magazine Amiga Joker awarded the game 64% in its September 1992 edition. By 1995, Striker had sold 700,000 copies. Ports/Sequels: Ports: Striker was ported to several consoles between 1992 and 1999. World Soccer '94: Road to Glory (SNES): The Super NES port World Soccer '94: Road to Glory (known as Striker in Europe, Eric Cantona Football Challenge in France and World Soccer in Japan) was released in North America by Atlus Software, in Europe directly by Rage Software and in Japan by Coconuts Japan. Ports/Sequels: The game lets the player choose from five different modes, including indoor soccer, and then pick from 128 different international teams, all with different strengths and weaknesses.
Unlike in the original game, where the strongest or most well-known teams had real names, in World Soccer '94: Road to Glory all the footballers' names are fictitious. Every UEFA (Europe), CAF (Africa), CONCACAF (North America, Central America & the Caribbean), AFC (Asia) and OFC (Oceania) team of that time appears in the game except for Yugoslavia, which was banned from international competition from 1992 to 1994 because of the war in the country. World Soccer '94: Road to Glory doesn't have a language-select prompt at the opening screen. Ports/Sequels: There are many options and features. For example, the pitch surface changes field conditions in outdoor friendlies; wet surfaces are slower than drier ones. Wind strength can affect the flight of the ball, and new FIFA rules affect whether or not extra time will use the "Golden Goal" (sudden death) rules, since abolished. Auto Keeper will, when turned on, make the goalkeeper kick the ball upfield automatically after saved shots on target; after saves, the goalkeeper takes control automatically unless Auto Keeper is turned off. Ports/Sequels: Sequels: A sequel, World Cup Striker (known in North America as Elite Soccer), was released for the Super NES in 1994. It was essentially a repackaged, slightly improved version of Striker. It was published in Japan by Coconuts Japan and in Europe by Elite. A Game Boy game developed by Denton Designs was also released at the same time; in Europe it was released as Soccer, in North America as Elite Soccer (both published by GameTek), and in Japan as World Cup Striker (published by Coconuts Japan and endorsed by Yasutaro Matsuki). Ports/Sequels: Also, Striker Pro was released in Europe and North America for the CD-i. In 1995, Striker: World Cup Special was released for the 3DO. A version of Striker '95 was in development for the Atari Jaguar but was never released.
An entry in the Striker franchise was in the works for the Panasonic M2, but it never materialized due to the system's cancellation. A year later Striker '96 (known in Japan as Striker: World Cup Premiere Stage) was released for the PlayStation, Sega Saturn and MS-DOS. Striker '96 is known for being the first soccer game on the original PlayStation. In 1999 UEFA Striker, known in North America as Striker Pro 2000, was released for the Dreamcast and PlayStation. Ports/Sequels: A follow-up, UEFA 2001, was announced for the Dreamcast in 2000, but was cancelled in October 2000 when Infogrames was re-evaluating its Dreamcast support, and the game was never released on any platform.
**Chicken as food** Chicken as food: Chicken is the most common type of poultry in the world. Owing to the relative ease and low cost of raising chickens—in comparison to mammals such as cattle or hogs—chicken meat (commonly called just "chicken") and chicken eggs have become prevalent in numerous cuisines. Chicken as food: Chicken can be prepared in a vast range of ways, including baking, grilling, barbecuing, frying, and boiling. Since the latter half of the 20th century, prepared chicken has become a staple of fast food. Chicken is sometimes cited as being more healthful than red meat, with lower concentrations of cholesterol and saturated fat. The poultry farming industry that accounts for chicken production takes on a range of forms across different parts of the world. In developed countries, chickens are typically subject to intensive farming methods, while less-developed areas raise chickens using more traditional farming techniques. The United Nations estimates there to be 19 billion chickens on Earth today, making them outnumber humans more than two to one. History: The modern chicken is a descendant of red junglefowl hybrids along with the grey junglefowl, first raised thousands of years ago in the northern parts of the Indian subcontinent. Chicken as a meat has been depicted in Babylonian carvings from around 600 BC. Chicken was one of the most common meats available in the Middle Ages. For thousands of years, a number of different kinds of chicken have been eaten across most of the Eastern hemisphere, including capons, pullets, and hens. It was one of the basic ingredients in blancmange, a stew usually consisting of chicken and fried onions cooked in milk and seasoned with spices and sugar. In the United States in the 1800s, chicken was more expensive than other meats and was "sought by the rich because [it is] so costly as to be an uncommon dish." Chicken consumption in the U.S. increased during World War II due to a shortage of beef and pork.
In Europe, consumption of chicken overtook that of beef and veal in 1996, linked to consumer awareness of bovine spongiform encephalopathy (mad cow disease). Breeding: Modern varieties of chicken, such as the Cornish Cross, are bred specifically for meat production, with an emphasis placed on the ratio of feed to meat produced by the animal. The most common breeds of chicken consumed in the U.S. are Cornish and White Rock. Chickens raised specifically for food are called broilers. In the U.S., broilers are typically butchered at a young age. Modern Cornish Cross hybrids, for example, are butchered as early as 8 weeks for fryers and 12 weeks for roasting birds. Capons (castrated cocks) produce more and fattier meat. For this reason, they are considered a delicacy and were particularly popular in the Middle Ages. Edible components: Main: Breast: These are white meat and are relatively dry. The breast has two segments which are sold together on bone-in breasts, but separated on boneless breasts: the "breast", when sold as boneless, and two "tenderloins", located on each side between the breast meat and the ribs. These are removed from boneless breasts and sold separately as tenderloins. Leg: Comprises two segments: the "drumstick", which is dark meat and is the lower part of the leg, and the "thigh", also dark meat, which is the upper part of the leg. Wing: Often served as a light meal or bar food. Buffalo wings are a typical example. Comprises three segments: the "drumette", shaped like a small drumstick, which is white meat; the middle "flat" segment, containing two bones; and the tip, often discarded. Other: Chicken feet: These contain relatively little meat, and are eaten mainly for the skin and cartilage. Although considered exotic in Western cuisine, the feet are common fare in other cuisines, especially in the Caribbean, China and Vietnam. Giblets: Organs such as the heart, gizzards, and liver may be included inside a butchered chicken or sold separately.
Head: Considered a delicacy in China, the head is split down the middle, and the brains and other tissues are eaten. Kidneys: Normally left in when a broiler carcass is processed, they are found in deep pockets on each side of the vertebral column. Neck: This is served in various Asian dishes. It is stuffed to make helzel among Ashkenazi Jews. Oysters: Located on the back, near the thigh, these small, round pieces of dark meat are often considered to be a delicacy. Pygostyle (chicken's buttocks) and testicles: These are commonly eaten in East Asia and some parts of South East Asia. By-products: Blood: Immediately after slaughter, blood may be drained into a receptacle, which is then used in various products. In many Asian countries, the blood is poured into low, cylindrical forms and left to congeal into disc-like cakes for sale. These are commonly cut into cubes and used in soup dishes. Carcass: After the removal of the flesh, this is used for soup stock. Chicken eggs: The most well-known and widely consumed byproduct. Heart and gizzard: In Brazilian churrascos, chicken hearts are often seen as a delicacy. Liver: This is the largest organ of the chicken, and is used in such dishes as pâté and chopped liver. Schmaltz: This is produced by rendering the fat, and is used in various dishes. Health: Chicken meat contains about two to three times as much polyunsaturated fat as most types of red meat when measured as weight percentage. Chicken generally has low fat in the meat itself (castrated roosters excluded). The fat is highly concentrated on the skin. A 100 g serving of baked chicken breast contains 4 grams of fat and 31 grams of protein, compared to 10 grams of fat and 27 grams of protein for the same portion of broiled, lean skirt steak.
Health: Use of roxarsone in chicken production: In factory farming, chickens are routinely administered the feed additive roxarsone, an organoarsenic compound which partially decomposes into inorganic arsenic compounds in the flesh of chickens, and in their feces, which are often used as a fertilizer. The compound is used to control stomach pathogens and promote growth. In 2013 sampling conducted by the Johns Hopkins School of Public Health of chicken meat from poultry producers that did not prohibit roxarsone, 70% of the samples in the US had levels which exceeded the safety limits set by the FDA. The FDA has since revised its stance on safe limits of inorganic arsenic in animal feed by stating that "any new animal drug that contributes to the overall inorganic arsenic burden is of potential concern". Health: Antibiotic resistance: Information obtained by the Canadian Integrated Program for Antimicrobial Resistance (CIPARS) "strongly indicates that cephalosporin resistance in humans is moving in lockstep with the use of the drug in poultry production". According to the Canadian Medical Association Journal, the unapproved antibiotic ceftiofur is routinely injected into eggs in Quebec and Ontario to discourage infection of hatchlings. Although the data are contested by the industry, antibiotic resistance in humans appears to be directly related to the antibiotic's use in eggs. A recent study by the Translational Genomics Research Institute showed that nearly half (47%) of the meat and poultry in US grocery stores was contaminated with S. aureus, with more than half (52%) of those bacteria resistant to antibiotics. Furthermore, according to the FDA, more than 25% of retail chicken in the United States is resistant to 5 or more different classes of antibiotic treatment drugs. An estimated 90–100% of conventional chicken contains at least one form of antibiotic-resistant microorganism, while organic chicken has been found to have a lower incidence, at 84%.
Health: Fecal matter contamination: In random surveys of chicken products across the United States in 2012, the Physicians Committee for Responsible Medicine found 48% of samples to contain fecal matter. On most commercial chicken farms, the chickens spend their entire life standing in, lying on, and living in their own manure, which is somewhat mixed in with the bedding material (e.g., sawdust, wood shavings, chopped straw, etc.). Health: During shipping from the concentrated animal feeding operation farm to the abattoir, the chickens are usually placed inside shipping crates that usually have slatted floors. Those crates are then piled 5 to 10 rows high on the transport truck to the abattoir. During shipment, the chickens tend to defecate, and that chicken manure tends to sit inside the crowded cages, contaminating the feathers and skin of the chickens, or rains down upon the chickens and crates on the lower levels of the transport truck. By the time the truck gets to the abattoir, most chickens have had their skin and feathers contaminated with feces. Health: There is also fecal matter in the intestines. While the slaughter process removes the feathers and intestines, only visible fecal matter is removed. The high-speed automated processes at the abattoir are not designed to remove this fecal contamination from the feathers and skin. The high-speed processing equipment tends to spray the contamination around to the birds going down the processing line, and to the equipment on the line itself. At one or more points in most abattoirs, chemical sprays and baths (e.g., bleach, acids, peroxides, etc.) are used to partially rinse off or kill this bacterial contamination. However, the fecal contamination, once it has occurred, especially in the various membranes between the skin and muscle, is impossible to completely remove. Marketing and sales: Chicken is sold both as whole birds and broken down into pieces.
In the United Kingdom, juvenile chickens of less than 28 days of age at slaughter are marketed as poussin. Mature chicken is sold as small, medium or large. Marketing and sales: In the United States, whole mature chickens are marketed as fryers, broilers, and roasters. Fryers are the smallest size (2.5-4 lbs dressed for sale), and the most common, as chickens reach this size quickly (about 7 weeks). Broilers are larger than fryers. Roasters, or roasting hens, are the largest chickens commonly sold (3–5 months and 6-8 lbs) and are typically more expensive. Even larger and older chickens are called stewing chickens, but these are no longer usually found commercially. The names reflect the most appropriate cooking method for the surface area to volume ratio. As the size increases, the volume (which determines how much heat must enter the bird for it to be cooked) increases faster than the surface area (which determines how fast heat can enter the bird). For a fast method of cooking, such as frying, a small bird is appropriate: frying a large piece of chicken results in the inside being undercooked when the outside is ready. Marketing and sales: Chicken is also sold broken down into pieces. Such pieces usually come from smaller birds that would qualify as fryers if sold whole. Pieces may include quarters, or fourths of the chicken. A chicken is typically cut into two leg quarters and two breast quarters. Each quarter contains two of the commonly available pieces of chicken. A leg quarter contains the thigh, drumstick and a portion of the back; a leg has the back portion removed. A breast quarter contains the breast, wing and a portion of the back; a breast has the back portion and wing removed. Pieces may be sold in packages of all the same pieces, or in combination packages. "Whole chicken cut up" refers to the entire bird cut into 8 individual pieces (an 8-piece cut), sometimes without the back.
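The surface-area-to-volume argument above can be made concrete with a toy calculation that models the bird as a sphere (a deliberate oversimplification, purely illustrative):

```python
# Toy illustration of the surface-area-to-volume argument above, modeling a
# bird as a sphere. For a sphere, area grows as r^2 while volume grows as r^3,
# so the area/volume ratio (roughly, how fast heat can get in per unit of meat
# to be heated) drops as the bird gets bigger.
import math

def sphere_area_to_volume(volume):
    """Surface area divided by volume for a sphere of the given volume."""
    radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * radius ** 2 / volume

fryer, roaster = sphere_area_to_volume(1.0), sphere_area_to_volume(2.0)
print(fryer > roaster)  # True: the larger bird has less surface per unit volume
```

Doubling the volume increases surface area by only 2^(2/3), about 1.59 times, which is why slow roasting suits large birds while fast frying suits small ones.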
A 9-piece cut (usually for fast food restaurants) has the tip of the breast cut off before splitting. Pick of the chicken, or similar titles, refers to a package with only some of the chicken pieces, typically the breasts, thighs, and legs without wings or back. Thighs and breasts are also sold boneless or skinless. Chicken livers and gizzards are commonly available packaged separately. Other parts of the chicken, such as the neck, feet, combs, etc., are not widely available except in countries where they are in demand, or in cities that cater to ethnic groups who favor these parts. Marketing and sales: Worldwide, there are many fast food restaurant chains that sell exclusively or primarily poultry products, including KFC (global), Red Rooster (Australia), Hector Chicken (Belgium) and CFC (Indonesia). Most of the products on the menus in such eateries are fried or breaded and are served with french fries. Cooking: Raw chicken may contain Salmonella. The safe minimum cooking temperature recommended by the U.S. Department of Health & Human Services is 165 °F (74 °C) to prevent foodborne illness caused by bacteria and parasites. However, in Japan raw chicken is sometimes consumed in a dish called torisashi, which is sliced raw chicken served sashimi style. Another preparation is toriwasa, which is lightly seared on the outside while the inside remains raw. Chicken can be cooked in many ways. It can be made into sausages, skewered, put in salads, grilled traditionally or on an electric grill, breaded and deep-fried, or used in various curries. There is significant variation in cooking methods amongst cultures. Historically common methods include roasting, baking, broasting, and frying. Western cuisine frequently has chicken prepared by deep frying for fast foods such as fried chicken, chicken nuggets, chicken lollipops or Buffalo wings. Chicken is also often grilled for salads or tacos. 
Cooking: Chickens often come with labels such as "roaster", which suggest a method of cooking based on the type of chicken. While these labels are only suggestions, ones labeled for stewing often do not do well when cooked with other methods. Some chicken breast cuts and processed chicken breast products include the moniker "with rib meat". This is a misnomer, as it refers to the small piece of white meat that overlays the scapula, removed along with the breast meat. The breast is cut from the chicken and sold as a solid cut, while the leftover breast and true rib meat is stripped from the bone through mechanical separation for use in chicken franks, for example. Breast meat is often sliced thinly and marketed as chicken slices, an easy filling for sandwiches. Often, the tenderloin (pectoralis minor) is marketed separately from the breast (pectoralis major). In the US, "tenders" can be either tenderloins or strips cut from the breast. In the UK, the strips of pectoralis minor are called "chicken mini-fillets". Cooking: Chicken bones are hazardous to health as they tend to break into sharp splinters when eaten, but they can be simmered with vegetables and herbs for hours or even days to make chicken stock. Cooking: In Asian countries it is possible to buy bones alone, as they are very popular for making chicken soups, which are said to be healthy. In Australia, the rib cages and backs of chickens left after the other cuts have been removed are frequently sold cheaply in supermarket delicatessen sections as either "chicken frames" or "chicken carcasses" and are purchased for soup or stock purposes. Freezing: Raw chicken maintains its quality in the freezer longer than cooked chicken, as moisture is lost during cooking. There is little change in the nutrient value of chicken during freezer storage. 
For optimal quality, however, a maximum storage time in the freezer of 12 months is recommended for uncooked whole chicken, 9 months for uncooked chicken parts, 3 to 4 months for uncooked chicken giblets, and 4 months for cooked chicken. Freezing does not usually cause color changes in poultry, but the bones and the meat near them can become dark. This bone darkening results when pigment seeps through the porous bones of young poultry into the surrounding tissues when the poultry meat is frozen and thawed. It is safe to freeze chicken directly in its original packaging, but this type of wrap is permeable to air and quality may diminish over time. Therefore, for prolonged storage, it is recommended to overwrap these packages. It is recommended to freeze unopened vacuum packages as is. If a package has accidentally been torn or has opened while food is in the freezer, the food is still safe to use, but it is still recommended to overwrap or rewrap it. Chicken should be kept away from other foods, so that if it begins to thaw, its juices will not drip onto them. If previously frozen chicken is purchased at a retail store, it can be refrozen if it has been handled properly. Bacteria survive but do not grow at freezing temperatures. However, if frozen cooked foods are not defrosted properly and are not reheated to temperatures that kill bacteria, the chances of getting a foodborne illness greatly increase.
**Phosphatidylcholine—retinol O-acyltransferase** Phosphatidylcholine—retinol O-acyltransferase: In enzymology, a phosphatidylcholine—retinol O-acyltransferase (EC 2.3.1.135) is an enzyme that catalyzes the chemical reaction phosphatidylcholine + retinol—[cellular-retinol-binding-protein] ⇌ 2-acylglycerophosphocholine + retinyl-ester—[cellular-retinol-binding-protein]. Thus, the two substrates of this enzyme are phosphatidylcholine and retinol—[cellular-retinol-binding-protein], whereas its two products are 2-acylglycerophosphocholine and retinyl-ester—[cellular-retinol-binding-protein]. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is phosphatidylcholine:retinol—[cellular-retinol-binding-protein] O-acyltransferase. Other names in common use include lecithin—retinol acyltransferase and phosphatidylcholine:retinol-(cellular-retinol-binding-protein) O-acyltransferase.
**ShelXle** ShelXle: The program ShelXle is a graphical user interface for the structure refinement program SHELXL. ShelXle combines an editor with syntax highlighting for the SHELXL-associated .ins (input) and .res (output) files with an interactive graphical display for visualization of a three-dimensional structure, including the electron density (Fo) and difference density (Fo-Fc) maps. Overview: ShelXle can display electron density maps like the macromolecular program Coot, but is intended more for smaller molecules. Overview: A number of excellent graphical user interfaces (GUIs) exist for small molecule crystal structure refinement with SHELX (e.g., WINGX, Olex2, XSEED, PLATON and SYSTEM-S, and the Bruker programs XP and XSHELL). ShelXle is free software, distributed under the GNU LGPL. It is available from the ShelXle website or from SourceForge. Binaries are available for Windows, macOS and the Linux distributions SuSE, Debian and Ubuntu. Overview: The Windows binary is distributed with the NSIS installer. Features: Editor featuring syntax highlighting and code completion for the SHELX instructions. Clicking on an atom in the structure view sets the text cursor to the line that contains this atom. Locating atoms in the structure view from the editor. Rename mode with support for residues, disordered parts and free variables. Program architecture: ShelXle uses the Qt framework. It is written entirely in C++ and does not use any scripting language. For the refinement it calls the external binary of SHELXL, which might also be SHELXH or SHELXLMP from George M. Sheldrick, or XL from Bruker. SHELX: SHELX has been developed by George M. Sheldrick since the late 1960s. Important releases are SHELX76 and SHELX97. It is still being developed, but releases usually come after about ten years of testing. Academic users can download the SHELX programs freely after registration.
**Glicko rating system** Glicko rating system: The Glicko rating system and Glicko-2 rating system are methods of assessing a player's strength in games of skill, such as chess and Go. The Glicko rating system was invented by Mark Glickman in 1995 as an improvement on the Elo rating system, and was initially intended primarily for use as a chess rating system. Glickman's principal contribution to measurement is "ratings reliability", called RD, for ratings deviation. Overview: Mark Glickman created the Glicko rating system in 1995 as an improvement on the Elo rating system. Both the Glicko and Glicko-2 rating systems are in the public domain and have been implemented on online game servers such as Pokémon Showdown, Pokémon Go, Lichess, Free Internet Chess Server, Chess.com, Online Go Server (OGS), Counter-Strike: Global Offensive, Quake Live, Team Fortress 2, Dota 2, Dota Underlords, Guild Wars 2, Splatoon 2 and 3, Dominion Online, TETR.IO, and competitive programming competitions. Overview: The ratings deviation (RD) measures the accuracy of a player's rating, where the RD is equal to one standard deviation. For example, a player with a rating of 1500 and an RD of 50 has a real strength between 1400 and 1600 (two standard deviations from 1500) with 95% confidence. Twice (more precisely, 1.96 times) the RD is added to and subtracted from their rating to calculate this range. After a game, the amount the rating changes depends on the RD: the change is smaller when the player's RD is low (since their rating is already considered accurate), and also when their opponent's RD is high (since the opponent's true rating is not well known, so little information is being gained). The RD itself decreases after playing a game, but it increases slowly over periods of inactivity. Overview: The Glicko-2 rating system improves upon the Glicko rating system and further introduces the rating volatility σ. 
A very slightly modified version of the Glicko-2 rating system is implemented by the Australian Chess Federation. The algorithm of Glicko: Step 1: Determine ratings deviation The new rating deviation (RD) is found from the old rating deviation (RD₀):

RD = min(√(RD₀² + c²t), 350)

where t is the amount of time (rating periods) since the last competition and 350 is assumed to be the RD of an unrated player. If several games have occurred within one rating period, the method treats them as having happened simultaneously. The rating period may be as long as several months or as short as a few minutes, according to how frequently games are arranged. The constant c is based on the uncertainty of a player's skill over a certain amount of time. It can be derived from thorough data analysis, or estimated by considering the length of time that would have to pass before a player's rating deviation would grow to that of an unrated player. If it is assumed that it would take 100 rating periods for a player's rating deviation to return to an initial uncertainty of 350, and a typical player has a rating deviation of 50, then the constant can be found by solving

350 = √(50² + 100c²)

for c, giving

c = √((350² − 50²) / 100) ≈ 34.6

Step 2: Determine new rating The new rating, after a series of m games, is determined by the following equation:

r = r₀ + (q / (1/RD² + 1/d²)) ∑ᵢ₌₁ᵐ g(RDᵢ)(sᵢ − E(s|r₀, rᵢ, RDᵢ))

where:

g(RDᵢ) = 1 / √(1 + 3q²RDᵢ²/π²)
E(s|r₀, rᵢ, RDᵢ) = 1 / (1 + 10^(−g(RDᵢ)(r₀ − rᵢ)/400))
q = ln(10)/400 ≈ 0.00575646273
d² = [q² ∑ᵢ₌₁ᵐ (g(RDᵢ))² E(s|r₀, rᵢ, RDᵢ)(1 − E(s|r₀, rᵢ, RDᵢ))]⁻¹

rᵢ represents the ratings of the individual opponents. The algorithm of Glicko: RDᵢ represents the rating deviations of the individual opponents. sᵢ represents the outcome of the individual games. A win is 1, a draw is ½, and a loss is 0. Step 3: Determine new ratings deviation The function of the prior RD calculation was to increase the RD appropriately to account for the increasing uncertainty in a player's skill level during a period of non-observation by the model. 
Now, the RD is updated (decreased) after the series of games:

RD′ = √((1/RD² + 1/d²)⁻¹)

Glicko-2 algorithm: Glicko-2 works in a similar way to the original Glicko algorithm, with the addition of a rating volatility σ which measures the degree of expected fluctuation in a player's rating, based on how erratic the player's performances are. For instance, a player's rating volatility would be low when they performed at a consistent level, and would increase if they had exceptionally strong results after that period of consistency. A simplified explanation of the Glicko-2 algorithm is presented below: Step 1: Compute ancillary quantities Across one rating period, a player with a current rating μ and ratings deviation φ plays against m opponents, with ratings μ₁,...,μₘ and RDs φ₁,...,φₘ, resulting in scores s₁,...,sₘ. We first need to compute the ancillary quantities v and Δ:

v = [∑ⱼ₌₁ᵐ g(φⱼ)² E(μ, μⱼ, φⱼ){1 − E(μ, μⱼ, φⱼ)}]⁻¹
Δ = v ∑ⱼ₌₁ᵐ g(φⱼ){sⱼ − E(μ, μⱼ, φⱼ)}

where

g(φⱼ) = 1 / √(1 + 3φⱼ²/π²)
E(μ, μⱼ, φⱼ) = 1 / (1 + exp{−g(φⱼ)(μ − μⱼ)})

Glicko-2 algorithm: Step 2: Determine new rating volatility We then need to choose a small constant τ which constrains the volatility over time, for instance 0.2 (smaller values of τ prevent dramatic rating changes after upset results). Then, for

f(x) = eˣ(Δ² − φ² − v − eˣ) / (2(φ² + v + eˣ)²) − (x − a)/τ², where a = ln(σ²),

we need to find the value A which satisfies f(A) = 0. An efficient way of solving this would be to use the Illinois algorithm, a modified version of the regula falsi procedure (see Regula falsi § The Illinois algorithm for details on how this would be done). Once this iterative procedure is complete, we set the new rating volatility σ′ as

σ′ = exp{A/2}

Glicko-2 algorithm: Step 3: Determine new ratings deviation and rating We then get the new RD

φ′ = 1 / √(1/(φ² + σ′²) + 1/v)

and new rating

μ′ = μ + φ′² ∑ⱼ₌₁ᵐ g(φⱼ){sⱼ − E(μ, μⱼ, φⱼ)}

These ratings and RDs are on a different scale than in the original Glicko algorithm, and would need to be converted to properly compare the two.
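The three steps of the original Glicko algorithm can be sketched in Python. This is a minimal illustration, not an official implementation; function and variable names are chosen for readability, and the sample numbers follow the standard worked example from Glickman's Glicko paper (a 1500-rated player with RD 200 who beats a 1400 (RD 30) and loses to a 1550 (RD 100) and a 1700 (RD 300)).

```python
import math

Q = math.log(10) / 400  # Glicko constant q ≈ 0.00575646273


def g(rd):
    """Attenuation factor: discounts games against uncertain opponents."""
    return 1 / math.sqrt(1 + 3 * Q**2 * rd**2 / math.pi**2)


def expected(r, r_i, rd_i):
    """Expected score E(s | r, r_i, RD_i) against a single opponent."""
    return 1 / (1 + 10 ** (-g(rd_i) * (r - r_i) / 400))


def glicko_update(r, rd, opponents, c=34.6, t=1):
    """One rating-period update of the original Glicko system.

    r, rd     -- pre-period rating and rating deviation
    opponents -- list of (r_i, rd_i, s_i) with s_i in {0, 0.5, 1}
    c, t      -- uncertainty growth constant and idle rating periods
    """
    # Step 1: inflate RD for elapsed time, capped at the unrated value 350.
    rd = min(math.sqrt(rd**2 + c**2 * t), 350)
    if not opponents:
        return r, rd

    # d^-2 = q^2 * sum of g^2 * E * (1 - E) over opponents.
    d2_inv = Q**2 * sum(
        g(rd_i) ** 2 * expected(r, r_i, rd_i) * (1 - expected(r, r_i, rd_i))
        for r_i, rd_i, _ in opponents
    )
    # Step 2: new rating from the weighted sum of score surprises.
    delta = sum(
        g(rd_i) * (s_i - expected(r, r_i, rd_i)) for r_i, rd_i, s_i in opponents
    )
    denom = 1 / rd**2 + d2_inv
    new_r = r + (Q / denom) * delta
    # Step 3: new (decreased) rating deviation.
    new_rd = math.sqrt(1 / denom)
    return new_r, new_rd


r, rd = glicko_update(
    1500, 200, [(1400, 30, 1), (1550, 100, 0), (1700, 300, 0)], t=0
)
# → approximately (1464.1, 151.4), matching Glickman's worked example
```

Note that despite scoring one win and two losses, the player loses only about 36 points: the opponent with RD 300 contributes little information, so that loss is heavily discounted by g.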
**Radiometric dating** Radiometric dating: Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. The method compares the abundance of a naturally occurring radioactive isotope within the material to the abundance of its decay products, which form at a known constant rate of decay. The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials. Radiometric dating: Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. By allowing the establishment of geological timescales, it provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change. Radiometric dating is also used to date archaeological materials, including ancient artifacts. Radiometric dating: Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied. Fundamentals: Radioactive decay All ordinary matter is made up of combinations of chemical elements, each with its own atomic number, indicating the number of protons in the atomic nucleus. Additionally, elements may exist in different isotopes, with each isotope of an element differing in the number of neutrons in the nucleus. A particular isotope of a particular element is called a nuclide. Some nuclides are inherently unstable. 
That is, at some point in time, an atom of such a nuclide will undergo radioactive decay and spontaneously transform into a different nuclide. This transformation may be accomplished in a number of different ways, including alpha decay (emission of alpha particles) and beta decay (electron emission, positron emission, or electron capture). Another possibility is spontaneous fission into two or more nuclides.While the moment in time at which a particular nucleus decays is unpredictable, a collection of atoms of a radioactive nuclide decays exponentially at a rate described by a parameter known as the half-life, usually given in units of years when discussing dating techniques. After one half-life has elapsed, one half of the atoms of the nuclide in question will have decayed into a "daughter" nuclide or decay product. In many cases, the daughter nuclide itself is radioactive, resulting in a decay chain, eventually ending with the formation of a stable (nonradioactive) daughter nuclide; each step in such a chain is characterized by a distinct half-life. In these cases, usually the half-life of interest in radiometric dating is the longest one in the chain, which is the rate-limiting factor in the ultimate transformation of the radioactive nuclide into its stable daughter. Isotopic systems that have been exploited for radiometric dating have half-lives ranging from only about 10 years (e.g., tritium) to over 100 billion years (e.g., samarium-147).For most radioactive nuclides, the half-life depends solely on nuclear properties and is essentially constant. This is known because decay constants measured by different techniques give consistent values within analytical errors and the ages of the same materials are consistent from one method to another. It is not affected by external factors such as temperature, pressure, chemical environment, or presence of a magnetic or electric field. 
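The exponential decay law described above is straightforward to state numerically. A short sketch (the function name is illustrative):

```python
import math


def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radioactive nuclide remaining after a given time:
    N(t)/N0 = exp(-lambda * t), with lambda = ln(2) / half-life."""
    lam = math.log(2) / half_life_years
    return math.exp(-lam * elapsed_years)


# After one half-life, half the parent atoms remain; after two, a quarter.
print(remaining_fraction(5730, 5730))   # carbon-14, one half-life → 0.5
print(remaining_fraction(11460, 5730))  # two half-lives → 0.25
```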
The only exceptions are nuclides that decay by the process of electron capture, such as beryllium-7, strontium-85, and zirconium-89, whose decay rate may be affected by local electron density. For all other nuclides, the proportion of the original nuclide to its decay products changes in a predictable way as the original nuclide decays over time. This predictability allows the relative abundances of related nuclides to be used as a clock to measure the time from the incorporation of the original nuclides into a material to the present. Fundamentals: Decay constant determination The radioactive decay constant, the probability that an atom will decay per year, is the solid foundation of the common measurement of radioactivity. The accuracy and precision of the determination of an age (and a nuclide's half-life) depends on the accuracy and precision of the decay constant measurement. The in-growth method is one way of measuring the decay constant of a system, which involves accumulating daughter nuclides. Unfortunately for nuclides with high decay constants (which are useful for dating very old samples), long periods of time (decades) are required to accumulate enough decay products in a single sample to accurately measure them. A faster method involves using particle counters to determine alpha, beta or gamma activity, and then dividing that by the number of radioactive nuclides. However, it is challenging and expensive to accurately determine the number of radioactive nuclides. Alternatively, decay constants can be determined by comparing isotope data for rocks of known age. This method requires at least one of the isotope systems to be very precisely calibrated, such as the Pb-Pb system. Fundamentals: Accuracy of radiometric dating The basic equation of radiometric dating requires that neither the parent nuclide nor the daughter product can enter or leave the material after its formation. 
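The particle-counting approach mentioned above amounts to dividing a measured activity by the number of radioactive atoms, since activity A = λN. A sketch with hypothetical measurement values (the counts below are invented for illustration, not real data):

```python
import math


def decay_constant_from_activity(decays_per_year, n_atoms):
    """Activity A = lambda * N, so lambda = A / N (per year)."""
    return decays_per_year / n_atoms


def half_life(lam):
    """t_half = ln(2) / lambda."""
    return math.log(2) / lam


# Hypothetical sample: 1e20 parent atoms producing 1.21e16 decays per year.
lam = decay_constant_from_activity(1.21e16, 1e20)
print(half_life(lam))  # ≈ 5730 years, i.e. a carbon-14-like nuclide
```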
The possible confounding effects of contamination of parent and daughter isotopes have to be considered, as do the effects of any loss or gain of such isotopes since the sample was created. It is therefore essential to have as much information as possible about the material being dated and to check for possible signs of alteration. Precision is enhanced if measurements are taken on multiple samples from different locations of the rock body. Alternatively, if several different minerals can be dated from the same sample and are assumed to be formed by the same event and were in equilibrium with the reservoir when they formed, they should form an isochron. This can reduce the problem of contamination. In uranium–lead dating, the concordia diagram is used, which also decreases the problem of nuclide loss. Finally, correlation between different isotopic dating methods may be required to confirm the age of a sample. For example, the age of the Amitsoq gneisses from western Greenland was determined to be 3.60 ± 0.05 Ga (billion years ago) using uranium–lead dating and 3.56 ± 0.10 Ga (billion years ago) using lead–lead dating, results that are consistent with each other. Accurate radiometric dating generally requires that the parent has a long enough half-life that it will be present in significant amounts at the time of measurement (except as described below under "Dating with short-lived extinct radionuclides"), the half-life of the parent is accurately known, and enough of the daughter product is produced to be accurately measured and distinguished from the initial amount of the daughter present in the material. The procedures used to isolate and analyze the parent and daughter nuclides must be precise and accurate. This normally involves isotope-ratio mass spectrometry. The precision of a dating method depends in part on the half-life of the radioactive isotope involved. For instance, carbon-14 has a half-life of 5,730 years. 
After an organism has been dead for 60,000 years, so little carbon-14 is left that accurate dating cannot be established. On the other hand, the concentration of carbon-14 falls off so steeply that the age of relatively young remains can be determined precisely to within a few decades. Fundamentals: Closure temperature The closure temperature or blocking temperature represents the temperature below which the mineral is a closed system for the studied isotopes. If a material that selectively rejects the daughter nuclide is heated above this temperature, any daughter nuclides that have been accumulated over time will be lost through diffusion, resetting the isotopic "clock" to zero. As the mineral cools, the crystal structure begins to form and diffusion of isotopes is less easy. At a certain temperature, the crystal structure has formed sufficiently to prevent diffusion of isotopes. Thus an igneous or metamorphic rock or melt, which is slowly cooling, does not begin to exhibit measurable radioactive decay until it cools below the closure temperature. The age that can be calculated by radiometric dating is thus the time at which the rock or mineral cooled to closure temperature. This temperature varies for every mineral and isotopic system, so a system can be closed for one mineral but open for another. Dating of different minerals and/or isotope systems (with differing closure temperatures) within the same rock can therefore enable the tracking of the thermal history of the rock in question with time, and thus the history of metamorphic events may become known in detail. These temperatures are experimentally determined in the lab by artificially resetting sample minerals using a high-temperature furnace. This field is known as thermochronology or thermochronometry. 
Fundamentals: The age equation The mathematical expression that relates radioactive decay to geologic time is

t = (1/λ) ln(1 + D*/N(t))

where t is the age of the sample, D* is the number of atoms of the radiogenic daughter isotope in the sample, D₀ is the number of atoms of the daughter isotope in the original or initial composition, N(t) is the number of atoms of the parent isotope in the sample at time t (the present), given by N(t) = N₀e^(−λt), and λ is the decay constant of the parent isotope, equal to the inverse of the radioactive half-life of the parent isotope times the natural logarithm of 2. The equation is most conveniently expressed in terms of the measured quantity N(t) rather than the constant initial value N₀. To calculate the age, it is assumed that the system is closed (neither parent nor daughter isotopes have been lost from the system), D₀ either must be negligible or can be accurately estimated, λ is known to high precision, and one has accurate and precise measurements of D* and N(t). The above equation makes use of information on the composition of parent and daughter isotopes at the time the material being tested cooled below its closure temperature. This is well established for most isotopic systems. However, construction of an isochron does not require information on the original compositions, using merely the present ratios of the parent and daughter isotopes to a standard isotope. An isochron plot is used to solve the age equation graphically and calculate the age of the sample and the original composition. Modern dating methods: Radiometric dating has been carried out since 1905, when it was invented by Ernest Rutherford as a method by which one might determine the age of the Earth. In the century since then the techniques have been greatly improved and expanded. Dating can now be performed on samples as small as a nanogram using a mass spectrometer. The mass spectrometer was invented in the 1940s and began to be used in radiometric dating in the 1950s. 
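The age equation can be applied directly once the daughter/parent ratio is measured. A minimal sketch, assuming a closed system with negligible initial daughter (D₀ ≈ 0); the measured ratio below is an invented illustrative value, not a real measurement:

```python
import math


def radiometric_age(daughter_parent_ratio, half_life_years):
    """Solve t = (1/lambda) * ln(1 + D*/N) with lambda = ln(2)/half-life."""
    lam = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / lam


# Rb-87 -> Sr-87, half-life ~50 billion years (from the text): a hypothetical
# mineral with 0.0425 atoms of radiogenic Sr-87 per remaining Rb-87 atom.
age = radiometric_age(0.0425, 50e9)
print(f"{age / 1e9:.2f} billion years")  # ≈ 3.0 billion years
```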
It operates by generating a beam of ionized atoms from the sample under test. The ions then travel through a magnetic field, which diverts them into different sampling sensors, known as "Faraday cups", depending on their mass and level of ionization. On impact in the cups, the ions set up a very weak current that can be measured to determine the rate of impacts and the relative concentrations of different atoms in the beams. Modern dating methods: Uranium–lead dating method Uranium–lead radiometric dating involves using uranium-235 or uranium-238 to date a substance's absolute age. This scheme has been refined to the point that the error margin in dates of rocks can be as low as less than two million years in two-and-a-half billion years. An error margin of 2–5% has been achieved on younger Mesozoic rocks.Uranium–lead dating is often performed on the mineral zircon (ZrSiO4), though it can be used on other materials, such as baddeleyite and monazite (see: monazite geochronology). Zircon and baddeleyite incorporate uranium atoms into their crystalline structure as substitutes for zirconium, but strongly reject lead. Zircon has a very high closure temperature, is resistant to mechanical weathering and is very chemically inert. Zircon also forms multiple crystal layers during metamorphic events, which each may record an isotopic age of the event. In situ micro-beam analysis can be achieved via laser ICP-MS or SIMS techniques.One of its great advantages is that any sample provides two clocks, one based on uranium-235's decay to lead-207 with a half-life of about 700 million years, and one based on uranium-238's decay to lead-206 with a half-life of about 4.5 billion years, providing a built-in crosscheck that allows accurate determination of the age of the sample even if some of the lead has been lost. This can be seen in the concordia diagram, where the samples plot along an errorchron (straight line) which intersects the concordia curve at the age of the sample. 
Modern dating methods: Samarium–neodymium dating method This involves the alpha decay of 147Sm to 143Nd with a half-life of 1.06 × 10¹¹ years. Accuracy levels of within twenty million years in ages of two-and-a-half billion years are achievable. Potassium–argon dating method This involves electron capture or positron decay of potassium-40 to argon-40. Potassium-40 has a half-life of 1.3 billion years, so this method is applicable to the oldest rocks. Radioactive potassium-40 is common in micas, feldspars, and hornblendes, though the closure temperature is fairly low in these materials, about 350 °C (mica) to 500 °C (hornblende). Modern dating methods: Rubidium–strontium dating method This is based on the beta decay of rubidium-87 to strontium-87, with a half-life of 50 billion years. This scheme is used to date old igneous and metamorphic rocks, and has also been used to date lunar samples. Closure temperatures are so high that they are not a concern. Rubidium–strontium dating is not as precise as the uranium–lead method, with errors of 30 to 50 million years for a 3-billion-year-old sample. Application of in situ analysis (laser-ablation ICP-MS) within single mineral grains in faults has shown that the Rb–Sr method can be used to decipher episodes of fault movement. Modern dating methods: Uranium–thorium dating method A relatively short-range dating technique is based on the decay of uranium-234 into thorium-230, a substance with a half-life of about 80,000 years. It is accompanied by a sister process, in which uranium-235 decays into protactinium-231, which has a half-life of 32,760 years. While uranium is water-soluble, thorium and protactinium are not, and so they are selectively precipitated into ocean-floor sediments, from which their ratios are measured. The scheme has a range of several hundred thousand years. A related method is ionium–thorium dating, which measures the ratio of ionium (thorium-230) to thorium-232 in ocean sediment. 
Modern dating methods: Radiocarbon dating method Radiocarbon dating is also simply called carbon-14 dating. Carbon-14 is a radioactive isotope of carbon, with a half-life of 5,730 years (which is very short compared with the above isotopes), and decays into nitrogen. In other radiometric dating methods, the heavy parent isotopes were produced by nucleosynthesis in supernovas, meaning that any parent isotope with a short half-life should be extinct by now. Carbon-14, though, is continuously created through collisions of neutrons generated by cosmic rays with nitrogen in the upper atmosphere and thus remains at a near-constant level on Earth. The carbon-14 ends up as a trace component in atmospheric carbon dioxide (CO2).A carbon-based life form acquires carbon during its lifetime. Plants acquire it through photosynthesis, and animals acquire it from consumption of plants and other animals. When an organism dies, it ceases to take in new carbon-14, and the existing isotope decays with a characteristic half-life (5730 years). The proportion of carbon-14 left when the remains of the organism are examined provides an indication of the time elapsed since its death. This makes carbon-14 an ideal dating method to date the age of bones or the remains of an organism. The carbon-14 dating limit lies around 58,000 to 62,000 years.The rate of creation of carbon-14 appears to be roughly constant, as cross-checks of carbon-14 dating with other dating methods show it gives consistent results. However, local eruptions of volcanoes or other events that give off large amounts of carbon dioxide can reduce local concentrations of carbon-14 and give inaccurate dates. The releases of carbon dioxide into the biosphere as a consequence of industrialization have also depressed the proportion of carbon-14 by a few percent; in contrast, the amount of carbon-14 was increased by above-ground nuclear bomb tests that were conducted into the early 1960s. 
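The carbon-14 reasoning above inverts the decay law: measure the fraction of ¹⁴C remaining relative to the atmospheric level and solve for the elapsed time. A sketch with illustrative fractions:

```python
import math

HALF_LIFE_C14 = 5730  # years


def c14_age(fraction_remaining):
    """Elapsed time since death: t = -(t_half / ln 2) * ln(N/N0)."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(fraction_remaining)


# A sample retaining 25% of its original carbon-14 died two half-lives ago.
print(c14_age(0.25))   # → 11460 years
# Near the dating limit almost nothing is left: 0.1% remaining gives
print(c14_age(0.001))  # ≈ 57,100 years, close to the ~58,000-year limit
```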
Also, an increase in the solar wind or the Earth's magnetic field above the current value would depress the amount of carbon-14 created in the atmosphere. Modern dating methods: Fission track dating method This involves inspection of a polished slice of a material to determine the density of "track" markings left in it by the spontaneous fission of uranium-238 impurities. The uranium content of the sample has to be known, but that can be determined by placing a plastic film over the polished slice of the material, and bombarding it with slow neutrons. This causes induced fission of 235U, as opposed to the spontaneous fission of 238U. The fission tracks produced by this process are recorded in the plastic film. The uranium content of the material can then be calculated from the number of tracks and the neutron flux.This scheme has application over a wide range of geologic dates. For dates up to a few million years micas, tektites (glass fragments from volcanic eruptions), and meteorites are best used. Older materials can be dated using zircon, apatite, titanite, epidote and garnet which have a variable amount of uranium content. Because the fission tracks are healed by temperatures over about 200 °C the technique has limitations as well as benefits. The technique has potential applications for detailing the thermal history of a deposit. Modern dating methods: Chlorine-36 dating method Large amounts of otherwise rare 36Cl (half-life ~300ky) were produced by irradiation of seawater during atmospheric detonations of nuclear weapons between 1952 and 1958. The residence time of 36Cl in the atmosphere is about 1 week. Thus, as an event marker of 1950s water in soil and ground water, 36Cl is also useful for dating waters less than 50 years before the present. 36Cl has seen use in other areas of the geological sciences, including dating ice and sediments. 
Modern dating methods: Luminescence dating methods Luminescence dating methods are not radiometric dating methods in that they do not rely on abundances of isotopes to calculate age. Instead, they measure the effects of accumulated background radiation on certain minerals. Over time, ionizing radiation is absorbed by mineral grains in sediments and archaeological materials such as quartz and potassium feldspar. The radiation causes charge to remain within the grains in structurally unstable "electron traps". Exposure to sunlight or heat releases these charges, effectively "bleaching" the sample and resetting the clock to zero. The trapped charge accumulates over time at a rate determined by the amount of background radiation at the location where the sample was buried. Stimulating these mineral grains using either light (optically stimulated luminescence or infrared stimulated luminescence dating) or heat (thermoluminescence dating) causes a luminescence signal to be emitted as the stored unstable electron energy is released, the intensity of which varies depending on the amount of radiation absorbed during burial and specific properties of the mineral. These methods can be used to date the age of a sediment layer, as layers deposited on top would prevent the grains from being "bleached" and reset by sunlight. Pottery shards can be dated to the last time they experienced significant heat, generally when they were fired in a kiln. Modern dating methods: Other methods Other methods include: Argon–argon (Ar–Ar) Iodine–xenon (I–Xe) Lanthanum–barium (La–Ba) Lead–lead (Pb–Pb) Lutetium–hafnium (Lu–Hf) Hafnium–tungsten (Hf–W) Potassium–calcium (K–Ca) Rhenium–osmium (Re–Os) Uranium–uranium (U–U) Krypton–krypton (Kr–Kr) Beryllium (10Be–9Be) Dating with decay products of short-lived extinct radionuclides: Absolute radiometric dating requires a measurable fraction of parent nucleus to remain in the sample rock.
For rocks dating back to the beginning of the solar system, this requires extremely long-lived parent isotopes, making measurement of such rocks' exact ages imprecise. To be able to distinguish the relative ages of rocks from such old material, and to get a better time resolution than that available from long-lived isotopes, short-lived isotopes that are no longer present in the rock can be used. At the beginning of the solar system, there were several relatively short-lived radionuclides like 26Al, 60Fe, 53Mn, and 129I present within the solar nebula. These radionuclides—possibly produced by the explosion of a supernova—are extinct today, but their decay products can be detected in very old material, such as that which constitutes meteorites. By measuring the decay products of extinct radionuclides with a mass spectrometer and using isochron plots, it is possible to determine relative ages of different events in the early history of the solar system. Dating methods based on extinct radionuclides can also be calibrated with the U-Pb method to give absolute ages. Thus both the approximate age and a high time resolution can be obtained. Generally a shorter half-life leads to a higher time resolution at the expense of timescale. Dating with decay products of short-lived extinct radionuclides: The 129I – 129Xe chronometer 129I beta-decays to 129Xe with a half-life of 16 million years. The iodine-xenon chronometer is an isochron technique. Samples are exposed to neutrons in a nuclear reactor. This converts the only stable isotope of iodine (127I) into 128Xe via neutron capture followed by beta decay (of 128I). After irradiation, samples are heated in a series of steps and the xenon isotopic signature of the gas evolved in each step is analysed.
When a consistent 129Xe/128Xe ratio is observed across several consecutive temperature steps, it can be interpreted as corresponding to a time at which the sample stopped losing xenon. Samples of a meteorite called Shallowater are usually included in the irradiation to monitor the conversion efficiency from 127I to 128Xe. The difference between the measured 129Xe/128Xe ratios of the sample and Shallowater then corresponds to the different ratios of 129I/127I when they each stopped losing xenon. This in turn corresponds to a difference in age of closure in the early solar system. Dating with decay products of short-lived extinct radionuclides: The 26Al – 26Mg chronometer Another example of short-lived extinct radionuclide dating is the 26Al – 26Mg chronometer, which can be used to estimate the relative ages of chondrules. 26Al decays to 26Mg with a half-life of 720,000 years. The dating is simply a question of finding the deviation from the natural abundance of 26Mg (the product of 26Al decay) in comparison with the ratio of the stable isotopes 27Al/24Mg. The excess of 26Mg (often designated 26Mg*) is found by comparing the 26Mg/24Mg ratio to that of other Solar System materials. The 26Al – 26Mg chronometer gives an estimate of the time period for formation of primitive meteorites of only a few million years (1.4 million years for chondrule formation). Dating with decay products of short-lived extinct radionuclides: A terminology issue In a July 2022 paper in the journal Applied Geochemistry, the authors proposed that the terms "parent isotope" and "daughter isotope" be avoided in favor of the more descriptive "precursor isotope" and "product isotope", analogous to "precursor ion" and "product ion" in mass spectrometry.
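The closure-age arithmetic that underlies such comparisons can be sketched as follows. This is an illustrative reduction of the idea to simple decay math (the function name and the example numbers are ad-hoc assumptions; real I–Xe ages are derived from isochrons as described above):

```python
import math

def closure_age_difference(ratio_a: float, ratio_b: float,
                           half_life: float) -> float:
    """Time by which sample A closed earlier than sample B, given each
    sample's parent-isotope ratio (e.g. 129I/127I) at closure.  A higher
    ratio means earlier closure, since the parent decays away over time.
    The result is in the same time units as half_life."""
    mean_life = half_life / math.log(2)
    # R_a = R_b * exp(dt / mean_life)  =>  dt = mean_life * ln(R_a / R_b)
    return mean_life * math.log(ratio_a / ratio_b)

# 129I half-life: 16 million years.  A sample that closed with twice the
# 129I/127I ratio of a reference closed one half-life earlier:
print(closure_age_difference(2.0, 1.0, 16.0))  # ~16 (million years)
```

This also illustrates the trade-off stated above: a shorter half-life makes the ratio change faster, giving finer time resolution, but the parent becomes unmeasurable sooner.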
**Pendentive** Pendentive: In architecture, a pendentive is a constructional device permitting the placing of a circular dome over a square room or of an elliptical dome over a rectangular room. The pendentives, which are triangular segments of a sphere, taper to points at the bottom and spread at the top to establish the continuous circular or elliptical base needed for a dome. In masonry the pendentives thus receive the weight of the dome, concentrating it at the four corners where it can be received by the piers beneath. Pendentive: Prior to the pendentive's development, builders used the device of corbelling or squinches in the corners of a room. Pendentives commonly occurred in Orthodox, Renaissance, and Baroque churches, with a drum with windows often inserted between the pendentives and the dome. The first experimentation with pendentives began with Roman dome construction in the 2nd–3rd century AD, while full development of the form came in the 6th-century Eastern Roman Hagia Sophia at Constantinople. Sources: Heinle, Erwin; Schlaich, Jörg (1996), Kuppeln aller Zeiten, aller Kulturen, Stuttgart, ISBN 3-421-03062-6. Rasch, Jürgen (1985), "Die Kuppel in der römischen Architektur. Entwicklung, Formgebung, Konstruktion", Architectura, vol. 15, pp. 117–139
**Strong perfect graph theorem** Strong perfect graph theorem: In graph theory, the strong perfect graph theorem is a forbidden graph characterization of the perfect graphs as being exactly the graphs that have neither odd holes (odd-length induced cycles of length at least 5) nor odd antiholes (complements of odd holes). It was conjectured by Claude Berge in 1961. A proof by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas was announced in 2002 and published by them in 2006. Strong perfect graph theorem: The proof of the strong perfect graph theorem won for its authors a $10,000 prize offered by Gérard Cornuéjols of Carnegie Mellon University and the 2009 Fulkerson Prize. Statement: A perfect graph is a graph in which, for every induced subgraph, the size of the maximum clique equals the minimum number of colors in a coloring of the graph; perfect graphs include many well-known graph classes including the bipartite graphs, chordal graphs, and comparability graphs. In his 1961 and 1963 works defining for the first time this class of graphs, Claude Berge observed that it is impossible for a perfect graph to contain an odd hole, an induced subgraph in the form of an odd-length cycle graph of length five or more, because odd holes have clique number two and chromatic number three. Similarly, he observed that perfect graphs cannot contain odd antiholes, induced subgraphs complementary to odd holes: an odd antihole with 2k + 1 vertices has clique number k and chromatic number k + 1, which is again impossible for perfect graphs. The graphs having neither odd holes nor odd antiholes became known as the Berge graphs. Statement: Berge conjectured that every Berge graph is perfect, or equivalently that the perfect graphs and the Berge graphs define the same class of graphs. This became known as the strong perfect graph conjecture, until its proof in 2002, when it was renamed the strong perfect graph theorem. 
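Berge's observation about odd antiholes can be verified directly for small cases by brute force. The following script is illustrative only (the helper names are ad-hoc); it builds the complement of an odd cycle and checks the clique and chromatic numbers exhaustively:

```python
from itertools import combinations, product

def odd_antihole(n):
    """Complement of the n-cycle: edges join vertices NOT adjacent on the cycle."""
    return {frozenset((i, j)) for i, j in combinations(range(n), 2)
            if (j - i) % n not in (1, n - 1)}

def clique_number(n, edges):
    # Largest subset of vertices in which every pair is an edge.
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(p) in edges for p in combinations(subset, 2)):
                return size

def chromatic_number(n, edges):
    # Smallest number of colors admitting a proper coloring.
    for colors in range(1, n + 1):
        for assignment in product(range(colors), repeat=n):
            if all(assignment[i] != assignment[j] for i, j in map(tuple, edges)):
                return colors

n = 7  # odd antihole with 2k + 1 = 7 vertices, so k = 3
edges = odd_antihole(n)
print(clique_number(n, edges), chromatic_number(n, edges))  # 3 4
```

The output matches the statement above: for 2k + 1 = 7 vertices the clique number is k = 3 while the chromatic number is k + 1 = 4, so the antihole cannot be perfect.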
Relation to the weak perfect graph theorem: Another conjecture of Berge, proved in 1972 by László Lovász, is that the complement of every perfect graph is also perfect. This became known as the perfect graph theorem, or (to distinguish it from the strong perfect graph conjecture/theorem) the weak perfect graph theorem. Because Berge's forbidden graph characterization is self-complementary, the weak perfect graph theorem follows immediately from the strong perfect graph theorem. Proof ideas: The proof of the strong perfect graph theorem by Chudnovsky et al. follows an outline conjectured in 2001 by Conforti, Cornuéjols, Robertson, Seymour, and Thomas, according to which every Berge graph either forms one of five types of basic building block (special classes of perfect graphs) or it has one of four different types of structural decomposition into simpler graphs. A minimally imperfect Berge graph cannot have any of these decompositions, from which it follows that no counterexample to the theorem can exist. This idea was based on previous conjectured structural decompositions of similar type that would have implied the strong perfect graph conjecture but turned out to be false. The five basic classes of perfect graphs that form the base case of this structural decomposition are the bipartite graphs, line graphs of bipartite graphs, complementary graphs of bipartite graphs, complements of line graphs of bipartite graphs, and double split graphs. It is easy to see that bipartite graphs are perfect: in any induced subgraph with at least one edge, the clique number and chromatic number are both two, while in an edgeless induced subgraph both are at most one, so the two numbers are always equal. The perfection of complements of bipartite graphs, and of complements of line graphs of bipartite graphs, are both equivalent to Kőnig's theorem relating the sizes of maximum matchings, maximum independent sets, and minimum vertex covers in bipartite graphs.
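Kőnig's theorem itself (in a bipartite graph, the maximum matching and the minimum vertex cover have the same size) is easy to check exhaustively on a small example. The graph and the brute-force helpers below are illustrative, not an efficient algorithm:

```python
from itertools import combinations

# A small bipartite graph: left vertices 'a'..'c', right vertices 'x'..'z'.
edges = [('a', 'x'), ('a', 'y'), ('b', 'y'), ('c', 'y'), ('c', 'z')]
vertices = {v for e in edges for v in e}

def max_matching(edges):
    # Largest set of pairwise vertex-disjoint edges.
    for size in range(len(edges), 0, -1):
        for subset in combinations(edges, size):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):  # no vertex repeated
                return size
    return 0

def min_vertex_cover(edges, vertices):
    # Smallest vertex set touching every edge.
    for size in range(len(vertices) + 1):
        for subset in combinations(sorted(vertices), size):
            cover = set(subset)
            if all(u in cover or w in cover for u, w in edges):
                return size

print(max_matching(edges), min_vertex_cover(edges, vertices))  # 3 3
```

The two numbers agree, as Kőnig's theorem guarantees for any bipartite graph; on non-bipartite graphs (e.g. a triangle) they can differ.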
The perfection of line graphs of bipartite graphs can be stated equivalently as the fact that bipartite graphs have chromatic index equal to their maximum degree, proven by Kőnig (1916). Thus, all four of these basic classes are perfect. The double split graphs are a relative of the split graphs that can also be shown to be perfect. The four types of decompositions considered in this proof are 2-joins, complements of 2-joins, balanced skew partitions, and homogeneous pairs. Proof ideas: A 2-join is a partition of the vertices of a graph into two subsets, with the property that the edges spanning the cut between these two subsets form two vertex-disjoint complete bipartite graphs. When a graph has a 2-join, it may be decomposed into induced subgraphs called "blocks", by replacing one of the two subsets of vertices by a shortest path within that subset that connects one of the two complete bipartite graphs to the other; when no such path exists, the block is formed instead by replacing one of the two subsets of vertices by two vertices, one for each complete bipartite subgraph. A graph with a 2-join is perfect if and only if both of its blocks are perfect. Therefore, if a minimally imperfect graph has a 2-join, it must equal one of its blocks, from which it follows that it must be an odd cycle and not Berge. For the same reason, a minimally imperfect graph whose complement has a 2-join cannot be Berge. A skew partition is a partition of a graph's vertices into two subsets, one of which induces a disconnected subgraph and the other of which has a disconnected complement; Chvátal (1985) had conjectured that no minimal counterexample to the strong perfect graph conjecture could have a skew partition. Chudnovsky et al. introduced some technical constraints on skew partitions, and were able to show that Chvátal's conjecture is true for the resulting "balanced skew partitions".
The full conjecture is a corollary of the strong perfect graph theorem. A homogeneous pair is related to a modular decomposition of a graph. It is a partition of the graph into three subsets V1, V2, and V3 such that V1 and V2 together contain at least three vertices, V3 contains at least two vertices, and for each vertex v in V3 and each i in {1,2} either v is adjacent to all vertices in Vi or to none of them. It is not possible for a minimally imperfect graph to have a homogeneous pair. Subsequent to the proof of the strong perfect graph conjecture, Chudnovsky (2006) simplified it by showing that homogeneous pairs could be eliminated from the set of decompositions used in the proof. Proof ideas: The proof that every Berge graph falls into one of the five basic classes or has one of the four types of decomposition follows a case analysis, according to whether certain configurations exist within the graph: a "stretcher", a subgraph that can be decomposed into three induced paths subject to certain additional constraints, the complement of a stretcher, and a "proper wheel", a configuration related to a wheel graph, consisting of an induced cycle together with a hub vertex adjacent to at least three cycle vertices and obeying several additional constraints. For each possible choice of whether a stretcher or its complement or a proper wheel exists within the given Berge graph, the graph can be shown to be in one of the basic classes or to be decomposable. This case analysis completes the proof.
**Hat-trick (cricket)** Hat-trick (cricket): In cricket, a hat-trick occurs when a bowler takes three wickets with consecutive deliveries. The deliveries may be interrupted by an over bowled by another bowler from the other end of the pitch or the other team's innings, but must be three consecutive deliveries by the individual bowler in the same match. Only wickets attributed to the bowler count towards a hat-trick; run outs do not count, although they can contribute towards a so-called team hat-trick, which is ostensibly a normal hat-trick except that the three successive deliveries can be wickets from any bowler in the team and with any mode of dismissal. Hat-trick (cricket): Hat-tricks are rare, and as such are treasured by bowlers. The term is also sometimes used to mean winning the same competition three times in a row. Examples include Australia winning the Cricket World Cup in 1999, 2003 and 2007, and Lancashire winning the County Championship in 1926, 1927 and 1928. Test cricket: In Test cricket history there have been just 46 hat-tricks, the first achieved by Fred Spofforth for Australia against England in 1879. In 1912, Australian Jimmy Matthews achieved the feat twice in one game against South Africa. The only other players to achieve two hat-tricks are Australia's Hugh Trumble, against England in 1902 and 1904, Pakistan's Wasim Akram, in separate games against Sri Lanka in 1999, and England's Stuart Broad in 2011 and 2014. Test cricket: Nuwan Zoysa of Sri Lanka is the first bowler to achieve a hat-trick with his first three balls in a Test, against Zimbabwe in 1999. Irfan Pathan of India achieved a hat-trick in the first over of a Test match against Pakistan. Australian fast bowler Peter Siddle took a hat-trick in an Ashes Test match against England on 25 November 2010, Siddle's 26th birthday.
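The definition above (three consecutive deliveries by the same bowler, with run outs excluded) can be expressed as a small check over a match's ball-by-ball record. The bowler names, the record format and the dismissal set below are illustrative assumptions:

```python
BOWLER_CREDITED = {"bowled", "caught", "lbw", "stumped", "hit wicket"}

def has_hat_trick(deliveries):
    """deliveries: ordered (bowler, outcome) pairs for one match, where
    outcome is a dismissal type or None.  Returns the first bowler whose
    three consecutive deliveries *by that bowler* all took bowler-credited
    wickets (run outs do not count), or None if there is no hat-trick."""
    per_bowler = {}
    for bowler, outcome in deliveries:
        seq = per_bowler.setdefault(bowler, [])
        seq.append(outcome in BOWLER_CREDITED)
        if seq[-3:] == [True, True, True]:
            return bowler
    return None

# Wickets either side of another bowler's over still count, since only the
# hat-trick taker's own deliveries need to be consecutive:
match = [("Smith", "bowled"), ("Jones", None), ("Jones", None),
         ("Smith", "caught"), ("Smith", "lbw")]
print(has_hat_trick(match))  # Smith
```

Because the check is per bowler rather than per over, it naturally handles hat-tricks spread across overs or innings, as in the unusual cases described below.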
One Day Internationals: In One Day International cricket there have been 50 hat-tricks, the first by Jalal-ud-Din for Pakistan against Australia in 1982, and the most recent by Wesley Madhevere of Zimbabwe against the Netherlands in March 2023. Sri Lanka's Lasith Malinga is the only bowler to take three hat-tricks in a single form of international cricket with his three in ODIs. Four players have taken at least two ODI hat-tricks in their careers: Wasim Akram and Saqlain Mushtaq of Pakistan, Chaminda Vaas of Sri Lanka and Kuldeep Yadav of India. (Akram therefore has four international hat-tricks in total). One Day Internationals: Chaminda Vaas is the only player to achieve a hat-trick with the first three deliveries in a One Day International, against Bangladesh in the tenth match of the 2003 ICC World Cup at City Oval, Pietermaritzburg. He also took a fourth wicket with the fifth ball of the same over, just missing the double hat-trick (see the section "More than three dismissals" below). T20 Internationals: As of 28 July 2023, there have been 52 hat-tricks in T20Is. The first Twenty20 hat-trick was taken by Brett Lee of Australia, playing against Bangladesh in Cape Town in September 2007. Lasith Malinga is the only bowler with multiple T20I hat-tricks. His first hat-trick was achieved against Bangladesh in 2017 and his second hat-trick occurred in September 2019 against New Zealand. Rashid Khan, Lasith Malinga, Curtis Campher and Jason Holder are the only bowlers to take four wickets in four balls in T20Is, Khan achieving this feat against Ireland in February 2019, and Malinga repeating the achievement against New Zealand in September 2019.
On 18 October 2021, at the 2021 ICC Men's T20 World Cup, Campher achieved the feat against the Netherlands. On 30 January 2022, Holder achieved this feat against England. On 6 August 2021, Nathan Ellis picked up three wickets off the last three balls of the Bangladesh innings to become the first male cricketer to take a hat-trick on his debut in a T20I match. Unusual hat-tricks: Some hat-tricks are particularly extraordinary. On 2 December 1988, Merv Hughes, playing for Australia, dismissed Curtly Ambrose with the last ball of his penultimate over and Patrick Patterson with the first ball of his next over, wrapping up the West Indies first innings. When Hughes returned to bowl in the West Indies second innings, he trapped Gordon Greenidge lbw with his first ball, completing a hat-trick over two different innings and becoming the only player in Test cricket history to achieve the three wickets of a hat-trick in three different overs. Unusual hat-tricks: In 1844, underarm bowler William Clarke, playing for "England" against Kent, achieved a hat-trick spread over two innings, dismissing Kent batsman John Fagge twice within the hat-trick. Fagge batted at number 11 in the first innings and at number 3 in the second. This event is believed to be unique in first-class cricket. The most involved hat-trick was perhaps when Melbourne club cricketer Stephen Hickman, playing for Power House in March 2002, achieved a hat-trick spread over three overs, two days, two innings, involving the same batsman twice, and observed by the same non-striker, with the hat-trick ball being bowled from the opposite end to the first two. In the Mercantile Cricket Association C-Grade semi-final at Fawkner Park, South Yarra, Gunbower United Cricket Club were 8 for 109 when Hickman came on to bowl his off spin.
He took a wicket with the last ball of his third over and then bowled number 11 batsman Richard Higgins with the first ball of his next over to complete the Gunbower innings, leaving Chris Taylor the not out batsman. Power House scored 361, putting the game out of reach of Gunbower. In the second innings, opener Taylor was joined by Higgins at the fall of the fourth wicket as Hickman returned to the attack. With his first ball, observed by an incredulous Taylor at the non-striker's end, he clean bowled Higgins, leaving Higgins with a pair of golden ducks. More than three dismissals: The feat of taking four wickets in four balls has never occurred in Test cricket and has occurred only once in One Day International cricket, in the 2007 World Cup, when Lasith Malinga managed the feat against South Africa. Malinga then repeated this triumph in a T20I against New Zealand during their 2019 tour of Sri Lanka. Afghanistan's Rashid Khan, Ireland's Curtis Campher and West Indian Jason Holder took four wickets in four balls in T20Is, against Ireland, the Netherlands and England respectively. It has also occurred on other occasions in first-class cricket. Kevan James of Hampshire took four wickets in four balls and scored a century in the same county game against India in 1996. The Cricinfo report on the game claimed that this was unique in cricket. It is sometimes claimed that the first cricketer to achieve this feat was Joseph Wells (father of novelist H. G. Wells): in 1862 he dismissed Sussex's James Dean, Spencer Leigh, Charles Ellis and Richard Fillery with successive balls. (Spencer Leigh was the great-nephew of Jane Austen.) Albert Trott and Joginder Rao are the only known bowlers credited with two hat-tricks in the same innings in first-class cricket (double hat-tricks notwithstanding). One of Trott's two hat-tricks, for Middlesex against Somerset at Lord's in 1907, was a four in four (i.e. a double hat-trick).
Similarly, there are at least two known instances of first-class hat-tricks from two innings in the same match. Amin Lakhani achieved this feat for the Combined XI side against India in Multan in 1979, while Mitchell Starc's hat-tricks occurred in 2017 in a Sheffield Shield clash between New South Wales and Western Australia. For Gloucestershire against Yorkshire in 1922, Charlie Parker had a hat-trick that was nearly five wickets in five balls: he actually struck the stumps with five successive deliveries, but the second was a no-ball. More than three dismissals: Five wickets in five balls was achieved by Scott Babot of Wainuiomata Cricket Club playing in the Senior 3 competition in New Zealand in 2008; it happened across two innings separated by seven days, as the match took place on consecutive Saturdays. During Brazil's national T20 in 2017, the spectators witnessed a triple hat-trick when Carioca Cricket Club's off spinner Rafi ur Rahman claimed 5 wickets with 5 consecutive balls. The feat came against Brasilia Federal District when the unorthodox off spinner claimed a leg before, two players clean bowled and two caught. The moment was declared "Best of the year" in the 2017 national awards by the club. A 'perfect over' of 6 wickets taken with 6 consecutive balls was achieved by Australian Aled Carey on 21 January 2017 while bowling for Golden Point Cricket Club against East Ballarat Cricket Club in the Ballarat Cricket Association competition. This very rare feat consisted of 2 catches, an LBW and 3 bowled. This feat was also achieved by Matt Rowe, aged 17, playing for Palmerston North Boys' High School 1st XI on 22 March 2023 in a match against Rotorua Boys' High School in Tauranga, New Zealand. Rowe's first delivery of the 'perfect over' netted a catch in the slips, followed by 4 clean-bowled and the 6th LBW.
Rowe finished with bowling figures of 9 for 12 and PNBHS subsequently chased the total of 26 in 2.1 overs. Taking two wickets in two consecutive deliveries is occasionally known as a brace; more commonly, a bowler who has done so is said to be on a hat-trick until the next delivery has been bowled. In Australia, four wickets in four balls is sometimes referred to as a double hat-trick on the basis that there are two ways of compiling the three-in-three sequence (i.e. wickets 1, 2 and 3 or wickets 2, 3 and 4). Three dismissals by fielders: There are very few cases of a fielder or wicket-keeper taking a hat-trick of dismissals off consecutive deliveries in first-class cricket, and none in international cricket. The first such instance is the only known hat-trick of stumpings by a wicket-keeper: W. H. Brain for Gloucestershire against Somerset in 1893, all off the bowling of C. L. Townsend. There has never been a first-class wicket-keeping hat-trick that mixes catches and stumpings, but four other wicket-keepers have taken a hat-trick of catches: KR Meherhomji for Railways vs Freelooters at Secunderabad (the only instance outside England) in 1931, GO Dawkes for Derbyshire vs Worcestershire at Kidderminster in 1958, RC Russell for Gloucestershire against Surrey at The Oval in 1986, and T Frost for Warwickshire against Surrey at Edgbaston in 2003. (In Russell and Frost's cases, no bowler took a hat-trick, since their catches were taken off different bowlers in successive overs; Meherhomji's and Dawkes's feats were hat-tricks for the bowlers as well, L Ramji and HL Jackson.) There are only two recorded cases of a hat-trick of catches by a non-wicket-keeper, both of which were also hat-tricks for the bowler: GJ Thompson, for Northants against Warwickshire at Edgbaston in 1914 (all off SG Smith), and Marcus Trescothick for Somerset against Notts in 2018 at Trent Bridge (all off Craig Overton).
Trescothick – though more famous as a batsman and only an occasional bowler – has also taken a hat-trick as a bowler, in 1995 against the Young Australians.
**Quarterback** Quarterback: The quarterback (commonly abbreviated "QB"), colloquially known as the "signal caller", is a position in gridiron football. Quarterbacks are members of the offensive platoon and mostly line up directly behind the offensive line. In modern American football, the quarterback is usually considered the leader of the offense, and is often responsible for calling the play in the huddle. The quarterback also touches the ball on almost every offensive play, and is almost always the offensive player that throws forward passes. When the QB is tackled behind the line of scrimmage, it is called a sack. Overview: In modern American football, the starting quarterback is usually the leader of the offense, and their successes and failures can have a significant impact on the fortunes of their team. Accordingly, the quarterback is among the most glorified, scrutinized, and highest-paid positions in team sports. Bleacher Report describes the signing of a starting quarterback as a catch-22, where "NFL teams cannot maintain success without excellent quarterback play. But excellent quarterback play is usually so expensive that it prevents NFL teams from maintaining success"; a star quarterback's high salary may prevent the signing of other expensive star players as the team has to stay under the hard salary cap. The majority of the highest-paid players in the NFL are quarterbacks. Teams often use their top draft picks to select a quarterback.The quarterback touches the ball on almost every offensive play. Depending on the play calling system, prior to each play the quarterback will usually gather the rest of his team together in a huddle to tell them which play the team will run. However, when there isn't much time left, or when an offense simply wants to increase the tempo of their plays, teams will forgo the huddle and the quarterback may call plays while the other offensive players get into position or at the line of scrimmage. 
After the team is lined up, the center will pass the ball back to the quarterback (a process called the snap). Usually on a running play, the quarterback will then hand or pitch the ball backwards to a halfback or fullback. On a passing play, the quarterback is almost always the player responsible for trying to throw the ball downfield to an eligible receiver. Additionally, the quarterback may run with the football himself, as part of a designed play like the option run or quarterback sneak, or the quarterback could make an impromptu run on his own (called a "scramble") to avoid being sacked by the defense. Overview: Depending on the offensive scheme used by their team, the quarterback's role can vary. In systems like the triple option, the quarterback will only pass the ball a few times per game, if at all, while the pass-heavy spread offense, as run by schools like Texas Tech, requires quarterbacks to throw the ball on most plays. The passing game is emphasized heavily in the Canadian Football League (CFL), where there are only three downs (as opposed to the four downs used in American football), a larger field of play and an extra eligible receiver. Different skillsets are required of the quarterback depending upon the offensive system. Quarterbacks who perform well in a pass-heavy spread offense system, a popular offensive scheme in the NCAA and NFHS, rarely perform well in the National Football League (NFL), as the fundamentals of the pro-style offense used in the NFL are very different from those in the spread system, while quarterbacks in Canadian football need to be able to throw the ball often and accurately. In general, quarterbacks need to have physical skills such as arm strength, mobility and a quick throwing motion, in addition to intangibles such as competitiveness, leadership, intelligence and downfield vision. In the NFL, quarterbacks are required to wear a uniform number between 1 and 19.
In the National Collegiate Athletic Association (NCAA) and National Federation of State High School Associations (NFHS), quarterbacks are required to wear a uniform number between 1 and 49; in the NFHS, the quarterback can also wear a number between 80 and 89. In the CFL, the quarterback can wear any number from 0 to 49 and 70 to 99. Because of their numbering, quarterbacks are eligible receivers in the NCAA, NFHS and CFL; in the NFL, quarterbacks are eligible receivers if they are not lined up directly under center. Leadership: Often compared to captains in other team sports, the starting quarterback was, before the implementation of NFL team captains in 2007, usually the de facto team leader and a well-respected player on and off the field. Since 2007, when the NFL allowed teams to designate several captains to serve as on-field leaders, the starting quarterback has usually been one of the team captains as the leader of the team's offense. Leadership: In the NFL, while the starting quarterback has no other responsibility or authority, they may, depending on the league or individual team, have various informal duties, such as participation in pre-game ceremonies, the coin toss or other events outside the game. For instance, the starting quarterback is the first player (and third person after the team owner and head coach) to be presented with the Lamar Hunt Trophy/George Halas Trophy (after winning the AFC/NFC Conference title) and the Vince Lombardi Trophy (after a Super Bowl victory). The starting quarterback of the victorious Super Bowl team is often chosen for the "I'm going to Disney World!" campaign (which includes a trip to Walt Disney World for them and their families), whether they are the Super Bowl MVP or not; examples include Joe Montana (XXIII), Trent Dilfer (XXXV), Peyton Manning (50) and Tom Brady (LIII).
Dilfer was chosen even though teammate Ray Lewis was the MVP of Super Bowl XXXV, due to the bad publicity from Lewis' murder trial the previous year. Being able to rely on a quarterback is vital to team morale. San Diego Chargers safety Rodney Harrison called the 1998 season a "nightmare" because of poor play by Ryan Leaf and Craig Whelihan and, from the rookie Leaf, obnoxious behavior toward teammates. Although their 1999 season replacements Jim Harbaugh and Erik Kramer were not stars, linebacker Junior Seau said, "You can't imagine the security we feel as teammates knowing we have two quarterbacks who have performed in this league and know how to handle themselves as players and as leaders". Commentators have noted the "disproportionate importance" of the quarterback, describing it as the "most glorified—and scrutinized—position" in team sports. It is believed that "there is no other position in sports that 'dictates the terms' of a game the way quarterback does", whether that impact is positive or negative, as "Everybody feeds off of what the quarterback can and cannot do...Defensively, offensively, everybody reacts to what threats or non-threats the quarterback has. Everything else is secondary". "An argument can be made that quarterback is the most influential position in team sports, considering he touches the ball on virtually every offensive play of a far shorter season than baseball, basketball or hockey—a season in which every game is vitally important". Most consistently successful NFL teams (for instance, multiple Super Bowl appearances within a short period of time) have been centered around a single starting quarterback; the one exception was the Washington Redskins under head coach Joe Gibbs, who won three Super Bowls with three different starting quarterbacks from 1982 to 1991.
Many of these NFL dynasties ended with the departure of their starting quarterback. On a team's defense, the middle linebacker is regarded as "quarterback of the defense" and is often the defensive leader, since he must be as smart as he is athletic. The middle linebacker (MLB), sometimes known as the "Mike", is the only inside linebacker in the 4–3 scheme. Backup: Compared to other positions in gridiron football, the backup quarterback gets considerably less playing time than the starting quarterback. While players at many other positions may rotate in and out during a game, and even a starter at most other positions rarely plays every snap, a team's starting quarterback often remains in the game for every play, which means that a team's primary backup may go an entire season without taking a meaningful offensive snap. While their primary role is to be available in case of injury to the starter, the backup quarterback may also have additional roles such as a holder on placekicks or as a punter, and will often play a key role in practice, serving as the upcoming opponent's quarterback during the preceding week's practices. A backup quarterback may also be put in during "garbage time" (when the score is so lopsided and the time left in the game is so short that the final outcome cannot realistically be changed), or start a meaningless late-season game (either the team has been eliminated from the postseason, or the playoff seeding cannot be affected), in order to ensure the starting quarterback does not needlessly risk an injury. Backup quarterbacks typically have journeyman careers marked by short stints with multiple teams, a notable exception being Frank Reich, who backed up Jim Kelly for nine years with the Buffalo Bills in the 1980s and 1990s. Backup: A quarterback controversy results when a team has two capable quarterbacks competing for the starting position.
Dallas Cowboys head coach Tom Landry alternated Roger Staubach and Craig Morton on each play, sending in the quarterbacks with the play call from the sideline; Morton started in Super Bowl V, which his team lost, while Staubach started in Super Bowl VI the following year and won. Although Morton played most of the 1972 season due to an injury to Staubach, Staubach took back the starting job when he rallied the Cowboys in a come-from-behind win in the playoffs and Morton was subsequently traded; Staubach and Morton faced each other in Super Bowl XII. Another notable quarterback controversy involved the San Francisco 49ers, who had three capable starters: Joe Montana, Steve Young and Steve Bono. Montana suffered a season-ending injury that cost him the 1991 NFL season and was supplanted by Young. Young was injured midway through the season, but Bono held the starting job (despite Young's recovery) until Bono's own injury let Young reclaim it. Montana also missed most of the 1992 NFL season, making only one appearance, then was traded away at his request to take over as the starter for the Kansas City Chiefs; upon retirement, he was succeeded by Bono as the Chiefs' starting quarterback. Teams often bring in a capable backup quarterback via the draft or a trade, as competition or a potential replacement, which can threaten the starting quarterback's place on the team (see Two-quarterback system below). For instance, Drew Brees began his career with the San Diego Chargers but the team also drafted Philip Rivers; despite Brees initially retaining his starting job and being the Comeback Player of the Year, he was not re-signed due to an injury and joined the New Orleans Saints as a free agent. Brees and Rivers both retired in 2021, each having been a starter for the Saints and Chargers, respectively, for over a decade.
Aaron Rodgers was drafted by the Green Bay Packers as the eventual successor to Brett Favre, though Rodgers served in a backup role for a few years to develop sufficiently for the team to give him the starting job; Rodgers would himself encounter a similar situation in 2020 when the Packers drafted quarterback Jordan Love. Similarly, Patrick Mahomes was selected by the Kansas City Chiefs to eventually supplant Alex Smith, with the latter willingly serving as a mentor. Trends and other roles: In addition to their main role, quarterbacks are occasionally used in other roles. Most teams utilize a backup quarterback as their holder on placekicks. A benefit of using quarterbacks as holders is that it would be easier to pull off a fake field goal attempt, but many coaches prefer to use punters as holders because a punter will have far more time in practice sessions to work with the kicker than any quarterback would. In the Wildcat formation, where a halfback lines up behind the center and the quarterback lines up out wide, the quarterback can be used as a receiving target or a blocker. A rarer use for a quarterback is to punt the ball themselves, a play known as a quick kick. Denver Broncos quarterback John Elway was known to perform quick kicks occasionally, typically when the Broncos were facing a third-and-long situation. Philadelphia Eagles quarterback Randall Cunningham, an All-America punter in college, was also known to punt the ball occasionally, and was assigned as the team's default punter for certain situations, such as when the team was backed up inside their own five-yard line. As Roger Staubach's backup, Dallas Cowboys quarterback Danny White was also the team's punter, opening strategic possibilities for coach Tom Landry. Ascending to the starting role upon Staubach's retirement, White held his position as the team's punter for several seasons—a double duty he performed to All-American standard at Arizona State University.
White also had two touchdown receptions as a Dallas Cowboy, both from the halfback option. Trends and other roles: Special tactics If quarterbacks are uncomfortable with the formation the defense is using, they may call an audible to change the play. For example, if a quarterback receives the call to execute a running play, but he notices that the defense is ready to blitz—that is, to send additional defenders across the line of scrimmage in an attempt to tackle the quarterback or disrupt his ability to pass—the quarterback may want to change the play. To do this, the quarterback yells a special code, like "Blue 42" or "Texas 29", which tells the offense to switch to a specific play or formation. Trends and other roles: Quarterbacks can also "spike" (throw the football at the ground) to stop the official game clock. For example, if a team is down by a field goal with only seconds remaining, a quarterback may spike the ball to prevent the game clock from running out. This usually allows the field goal unit to come onto the field, or the offense to attempt a final "Hail Mary" pass. However, if a team is winning, a quarterback can keep the clock running by kneeling after the snap. This is normally done when the opposing team has no timeouts and there is little time left in the game, as it allows a team to burn up the remaining time on the clock without risking a turnover or injury. Trends and other roles: Dual-threat quarterbacks A dual-threat quarterback possesses the skills and physique to run with the ball if necessary. With the rise of several blitz-heavy defensive schemes and increasingly faster defensive players, the importance of a mobile quarterback has been redefined.
While arm power, accuracy, and pocket presence—the ability to successfully operate from within the "pocket" formed by his blockers—are still the most important quarterback virtues, the ability to elude or run past defenders creates an additional threat that allows greater flexibility in a team's passing and running game. Trends and other roles: Dual-threat quarterbacks have historically been more prolific at the college level. Typically, a quarterback with exceptional quickness is used in an option offense, which allows the quarterback to hand the ball off, run it themselves or pitch it to a running back shadowing them to the outside. This type of offense forces defenders to commit to the running back up the middle, the quarterback around the end or the running back trailing the quarterback. It is then that the quarterback has the "option" to identify which matchup is most favorable to the offense as the play unfolds and exploit that defensive weakness. In the college game, many schools employ several plays that are designed for the quarterback to run with the ball. This is much less common in professional football, except for a quarterback sneak, a play that involves the quarterback diving forward behind the offensive line to gain a small amount of yardage, but there is still an emphasis on being mobile enough to escape a heavy pass rush. Historically, high-profile dual-threat quarterbacks in the NFL were uncommon—among the notable exceptions were Steve Young and John Elway, who led their teams to one and five Super Bowl appearances respectively; and Michael Vick, whose rushing ability was a rarity in the early 2000s, although he never led his team to a Super Bowl. In the 2010s, quarterbacks with dual-threat capabilities have become more popular. Current NFL quarterbacks considered to be dual-threats include Russell Wilson, Lamar Jackson, and Josh Allen.
Trends and other roles: Two-quarterback system Some teams employ a strategy that involves the use of more than one quarterback during the course of a game. This is more common at lower levels of football, such as high school or small college, but rare in major college or professional football. There are four circumstances in which a two-quarterback system may be used. Trends and other roles: The first is when a team is in the process of determining which quarterback will eventually be the starter, and may choose to use each quarterback for part of the game in order to compare the performances. For instance, the Seattle Seahawks' Pete Carroll used the preseason games in 2012 to select Russell Wilson as the starting quarterback over Matt Flynn and Tarvaris Jackson. Trends and other roles: The second is a starter–reliever system, in which the starting quarterback splits the regular season playing time with the backup quarterback, although the former will start playoff games. This strategy is rare, and was last seen in the NFL in the "WoodStrock" combination of Don Strock and David Woodley, which took the Miami Dolphins to the Epic in Miami in 1982 and Super Bowl XVII the following year. The starter–reliever system is distinct from a one-off situation in which a starter is benched in favor of the backup (usually when the starter is playing poorly that game): in the starter–reliever system, the switch is part of the game plan, and the expectation is that the two players will assume the same roles game after game. Trends and other roles: The third is if a coach decides that the team has two quarterbacks who are equally effective and proceeds to rotate the quarterbacks at predetermined intervals, such as after each quarter or after each series. Southern California high school football team Corona Centennial operated this model during the 2014 football season, rotating quarterbacks after every series.
In a game against the Chicago Bears in week 7 of the 1971 season, Dallas Cowboys head coach Tom Landry alternated Roger Staubach and Craig Morton on each play, sending in the quarterbacks with the play call from the sideline. Trends and other roles: The fourth, still occasionally seen in major-college football, is the use of different quarterbacks in different game or down-and-distance situations. Generally this involves a running quarterback and a passing quarterback in an option or wishbone offense. In Canadian football, quarterback sneaks or other runs in short-yardage situations tend to be successful as a result of the distance between the offensive and defensive lines being one yard. Drew Tate, a quarterback for the Calgary Stampeders, was primarily used in short-yardage situations and led the CFL in rushing touchdowns during the 2014 season with 10 scores as the backup to Bo Levi Mitchell. This strategy had all but disappeared from professional American football, but returned to some extent with the advent of the "wildcat" offense. There is debate within football circles as to the effectiveness of the so-called "two-quarterback system". Many coaches and media personnel remain skeptical of the model. Teams such as USC (Southern California), OSU (Oklahoma State), Northwestern and smaller West Georgia have utilized the two-quarterback system; West Georgia, for example, uses the system due to the skillsets of its quarterbacks. As recently as 2020, Oregon, who had two quarterbacks capable of starting (Boston College transfer Anthony Brown and sophomore Tyler Shough), utilized a similar tactic in the 2020 Pac-12 Football Championship Game, giving Shough the start but inserting the dual-threat Brown on short-yardage plays, red zone situations and the final drive of the game. Teams use this approach because of the advantage it gives against opposing defenses, which are unable to adjust to a single game plan.
History: The quarterback position dates to the late 1800s, when American Ivy League schools playing a form of rugby union imported from the United Kingdom began to put their own spin on the game. Walter Camp, a prominent athlete and rugby player at Yale University, pushed through a change in rules at a meeting in 1880 that established a line of scrimmage and allowed for the football to be snapped to a quarterback. The change was meant to allow for teams to strategize their play more thoroughly and retain possession more easily than was possible in the chaos of a scrummage in rugby. In Camp's formulation, the "quarter-back" was the person who received a ball snapped back with another player's foot. Originally he was not allowed to run forward of the line of scrimmage: A scrimmage takes place when the holder of the ball puts it on the ground before him and puts it in play while on-side either by kicking the ball or by snapping it back with his foot. The man who first receives the ball from the snap-back shall be called the quarter-back and shall not rush forward with the ball under penalty of foul. History: In the primary formation of Camp's time, there were four "back" positions, with the tailback playing furthest back, followed by the fullback, the halfback, and the quarterback closest to the line. As the quarterback was not allowed to run past the line of scrimmage, and the forward pass had not yet been invented, their primary role was to receive the snap from the center, and immediately hand or toss the ball backwards to the fullback or halfback to run. By the early 1900s, their role had been further reduced, as teams began to employ longer, direct snaps to one of the other backs (who by rule were allowed to run) and the quarterback became the primary "blocking back", leading the way through the defense but rarely carrying the ball themselves. This was the primary strategy of the single wing offense which was popular during the early decades of the 20th century. 
After the growth of the forward pass, the role of the quarterback changed again. The quarterback would later be returned to his role as the primary receiver of the snap after the advent of the T-formation offense, especially under the success of former single wing tailback, and later T-formation quarterback, Sammy Baugh. History: The requirement to stay behind the line of scrimmage was soon rescinded, but it was later reimposed in six-man football. The exchange between the person snapping the ball (typically the center) and the quarterback was initially an awkward one because it involved a kick. At first, centers gave the ball a small boot, and then picked it up and handed it to the quarterback. By 1889, Yale center Bert Hanson was bouncing the ball on the ground to the quarterback between his legs. The following year, a rule change officially made snapping the ball using the hands between the legs legal. Several years later, Amos Alonzo Stagg at the University of Chicago invented the lift-up snap: the center passed the ball off the ground and between his legs to a standing quarterback. A similar set of changes was later adopted in Canadian football as part of the Burnside rules, a set of rules proposed by John Meldrum "Thrift" Burnside, the captain of the University of Toronto's football team. The change from a scrummage to a "scrimmage" made it easier for teams to decide what plays they would run before the snap. At first, the captains of college teams were put in charge of play calling, indicating with shouted codes which players would run with the ball and how the men on the line were supposed to block. Yale later used visual signals, including adjustments of the captain's knit hat, to call plays. Centers could also signal plays based on the alignment of the ball before the snap. In 1888, however, Princeton University began to have its quarterback call plays using number signals.
That system caught on and quarterbacks began to act as directors and organizers of offensive play. Early on, quarterbacks were used in a variety of formations. Harvard's team put seven men on the line of scrimmage, with three halfbacks who alternated at quarterback and a lone fullback. Princeton put six men on the line and had one designated quarterback, while Yale used seven linemen, one quarterback and two halfbacks who lined up on either side of the fullback. This was the origin of the T-formation, an offensive set that remained in use for many decades afterward and gained popularity in professional football starting in the 1930s. History: In 1906, the forward pass was legalized in American football; Canadian football did not adopt the forward pass until 1929. Despite the legalization of the forward pass, the most popular formations of the early 20th century focused mostly on the rushing game. The single-wing formation, a run-oriented offensive set, was invented by football coach Glenn "Pop" Warner around the year 1908. In the single-wing, the quarterback was positioned behind the line of scrimmage and was flanked by a tailback, fullback and wingback. He served largely as a blocking back; the tailback typically took the snap, either running forward with the ball or making a lateral pass to one of the other players in the backfield. The quarterback's job was usually to make blocks upfield to help the tailback or fullback gain yards. Passing plays were rare in the single-wing, an unbalanced power formation where four linemen lined up to one side of the center and two lined up to the other. The tailback was the focus of the offense, and was often a triple-threat man who would either pass, run or kick the ball. Offensive playcalling continued to focus on rushing up through the 1920s, when professional leagues began to challenge the popularity of college football.
In the early days of the professional National Football League (NFL), which was founded in 1920, games were largely low-scoring affairs. Two-thirds of all games in the 1920s were shutouts, and quarterbacks/tailbacks usually passed only out of desperation. In addition to a reluctance to risk turnovers by passing, various rules existed that limited the effectiveness of the forward pass: passers were required to drop back five yards behind the line of scrimmage before they could attempt a pass, and incomplete passes in the end zone resulted in a change of possession and a touchback. Additionally, the rules required the ball to be snapped from the location on the field where it was ruled dead; if a play ended with a player going out of bounds, the center had to snap the ball from the sideline, an awkward place to start a play. Despite these constraints, player-coach Curly Lambeau of the Green Bay Packers, along with several other NFL figures of his era, was a consistent proponent of the forward pass. The Packers found success in the 1920s and 1930s using variations on the single-wing that emphasized the passing game. Packers quarterback Red Dunn and New York Giants and Brooklyn Dodgers quarterback Benny Friedman were the leading passers of their era, but passing remained a relative rarity among other teams; between 1920 and 1932, there were three times as many running plays as there were passing plays. Early NFL quarterbacks typically were responsible for calling the team's offensive plays with signals before the snap. The use of the huddle to call plays originated with Stagg in 1896, but only began to be used regularly in college games in 1921. In the NFL, players were typically assigned numbers, as were the gaps between offensive linemen. One player, usually the quarterback, would call signals indicating which player was to run the ball and which gap he would run toward.
Playcalling (or any other kind of coaching from the sidelines) was not permitted during this period, leaving the quarterback to devise the offensive strategy (often, the quarterback doubled as head coach during this era). Substitutions were limited and quarterbacks often played on both offense and defense. History: The period between 1933 and 1945 was marked by numerous changes for the quarterback position. The rule requiring a quarterback/tailback to be five yards behind the line of scrimmage to pass was abolished, and hash marks were added to the field that established a limited zone between which the ball was placed before snaps, making offensive formations more flexible. Additionally, incomplete passes in the end zone were no longer counted as turnovers and touchbacks. The single-wing continued to be in wide use throughout this period, and a number of forward-passing tailbacks became stars, including Sammy Baugh of the Washington Redskins. In 1939, University of Chicago head football coach Clark Shaughnessy made modifications to the T-formation, a formation that put the quarterback behind the center and had him receive the snap directly. Shaughnessy altered the formation by having the linemen be spaced further apart, and he began having players go in motion behind the line of scrimmage before the snap to confuse defenses. These changes were picked up by Chicago Bears coach George Halas, a close friend of Shaughnessy, and they quickly caught on in the professional ranks. Utilizing the T-formation and led by quarterback Sid Luckman, the Bears reached the NFL championship game in 1940 and beat the Redskins by a score of 73–0. The blowout led other teams across the league to adopt variations on the T-formation, including the Philadelphia Eagles, Cleveland Rams and Detroit Lions.
Baugh and the Redskins converted to the T-formation and continued to succeed. Thanks in part to the emergence of the T-formation and changes in the rulebooks to liberalize the passing game, passing from the quarterback position became more common in the 1940s, and as teams switched to the T-formation, passing tailbacks, such as Sammy Baugh, would line up as quarterbacks instead. Over the course of the decade, passing yards began to exceed rushing yards for the first time in the history of football. The Cleveland Browns of the late 1940s in the All-America Football Conference (AAFC), a professional league created to challenge the NFL, were one of the teams of that era that relied most on passing. Quarterback Otto Graham helped the Browns win four AAFC championships in the late 1940s in head coach Paul Brown's T-formation offense, which emphasized precision timing passes. Cleveland, along with several other AAFC teams, was absorbed by the NFL in 1950 after the dissolution of the AAFC that same year. By the end of the 1940s, all NFL teams aside from the Pittsburgh Steelers used the T-formation as their primary offensive formation. History: As late as the 1960s, running plays occurred more frequently than passes. NFL quarterback Milt Plum later stated that during his career (1957–1969) passes typically only occurred on third downs and sometimes on first downs. Quarterbacks only increased in importance as rules changed to favor passing and higher scoring and as football gained popularity on television after the 1958 NFL Championship Game, often referred to as "The Greatest Game Ever Played". Early modern offenses evolved around the quarterback as a passing threat, boosted by rules changes in 1978 and 1979 that made it a penalty for defensive backs to interfere with receivers downfield and allowed offensive linemen to pass-block using their arms and open hands; the rules had previously limited them to blocking with their hands held to their chests.
Average passing yards per game rose from 283.3 in 1977 to 408.7 in 1979. History: The NFL continues to be a pass-heavy league, in part due to further rule changes that prescribed harsher penalties for hitting the quarterback and for hitting defenseless receivers as they awaited passes. Passing in wide-open offenses has also been an emphasis at the high school and college levels, and professional coaches have devised schemes to fit the talents of new generations of quarterbacks. While quarterbacks and team captains usually called plays in football's early years, today coaches often decide which plays the offense will run. Some teams use an offensive coordinator, an assistant coach whose duties include offensive game-planning and often play-calling. In the NFL, coaches are allowed to communicate with quarterbacks and call plays using audio equipment built into the player's helmet. Quarterbacks are allowed to hear, but not talk to, their coaches until there are fifteen seconds left on the play clock. Once the quarterback receives the call, he may relay it to other players via signals or in a huddle. History: Dallas Cowboys head coach Tom Landry was an early advocate of taking play calling out of the quarterback's hands. Although quarterbacks calling their own plays remained a common practice in the NFL through the 1970s, fewer QBs were doing it by the 1980s and even Hall of Famers like Joe Montana did not call their own plays. Buffalo Bills QB Jim Kelly was one of the last to regularly call plays. Peyton Manning, formerly of the Indianapolis Colts and Denver Broncos, was the best modern example of a quarterback who called his own plays, primarily using an uptempo, no-huddle-based attack. Manning had almost complete control over the offense. Former Baltimore Ravens quarterback Joe Flacco retained a high degree of control over the offense as well, particularly when running a no-huddle scheme, as did Ben Roethlisberger of the Pittsburgh Steelers.
Race: Throughout football history, the racial makeup of quarterbacks did not reflect the racial makeup of the sport. Black quarterbacks especially faced barriers in breaking into the starting job at the highest levels. The first black starting quarterback in the Super Bowl era was Marlin Briscoe in 1968, who started for the American Football League's Denver Broncos during part of one season; he was later converted to wide receiver. James Harris started several games for the Buffalo Bills after the AFL-NFL merger, and later started games for the Los Angeles Rams. Other early NFL black starting quarterbacks include Joe Gilliam of the Pittsburgh Steelers, who was the first black quarterback to start a season for any NFL team, though he was benched after the first six games. The New York Giants became the last team to field a black starting QB during an NFL season when Geno Smith filled in for Eli Manning in 2017. During the 2013 NFL season, 67 percent of NFL players were African American yet only 17 percent of quarterbacks were; 82 percent of quarterbacks were white, with just one percent of quarterbacks from other races. Since the inception of the game, only three quarterbacks with known black ancestry have led their team to a Super Bowl victory: Doug Williams in 1988, Russell Wilson, who is multiracial, in 2014, and Patrick Mahomes (biracial) in 2020 and 2023. However, numerous quarterbacks with African ancestry have started in the Super Bowl since the 2010s, including four in a row (Super Bowl XLVII, Super Bowl XLVIII, Super Bowl XLIX, Super Bowl 50). Quarterbacks with known black ancestry have also won the Associated Press NFL Most Valuable Player Award in recent years, including Cam Newton, Patrick Mahomes, and Lamar Jackson. Race: Some black quarterbacks claim to have experienced bias towards or against them due to their race.
Despite his ability to both pass and run effectively, current Cleveland Browns signal-caller Deshaun Watson despises being called a dual-threat quarterback because he believes the term is often used to stereotype black quarterbacks. Super Bowl LVII was the first Super Bowl in history in which both starting quarterbacks (Jalen Hurts and Patrick Mahomes) were black.
**Helene Langevin** Helene Langevin: Helene Langevin is Director of the National Center for Complementary and Integrative Health (NCCIH) at the National Institutes of Health (NIH). Helene Langevin: She was a professor in the University of Vermont College of Medicine's Department of Neurological Sciences. She is best known for characterizing certain cellular and mechanical effects of acupuncture. She was also a Professor in Residence of Medicine at Harvard Medical School, Brigham and Women's Hospital. Prior to working at NIH, Langevin was the Director of the Osher Center for Integrative Medicine, jointly owned by Brigham and Women's Hospital and Harvard Medical School. Helene Langevin: Langevin was principal investigator of studies funded by the National Institutes of Health. The Boston Globe describes her as a "celebrity" in the world of acupuncture. Biography: Langevin received an MD degree from McGill University in 1978. She did a postdoctoral research fellowship in Neurochemistry at the MRC Neurochemical Pharmacology Unit in Cambridge, England, a residency in Internal Medicine and a fellowship in Endocrinology and Metabolism at Johns Hopkins Hospital. She was a Professor in Residence of Medicine at Harvard Medical School, Brigham and Women's Hospital. She was also a part-time Professor of Neurology, Orthopedics and Rehabilitation at the University of Vermont College of Medicine. She was the Principal Investigator of two NIH-funded studies investigating the role of connective tissue in low back pain and the mechanisms of manual and movement based therapies. Her previous studies in humans and animal models have found that "needle grasp", the biomechanical component of de qi, may be caused by connective tissue winding around the needle. Helene Langevin was appointed as Director of the Osher Center for Integrative Medicine at Harvard Medical School and Brigham and Women's Hospital in November 2012.
Research related to acupuncture: In December 2001, a study by Langevin and several other researchers at the University of Vermont College of Medicine regarding the "Biomechanical response to acupuncture needling in humans" was published by the peer-reviewed Journal of Applied Physiology, which examined the effects of mechanical tissue stimulation during tissue stretch and during acupuncture.
**Contactin 2** Contactin 2: Contactin-2 is a protein that in humans is encoded by the CNTN2 gene. Function: The protein encoded by this gene is a member of the immunoglobulin superfamily. It is a glycosylphosphatidylinositol (GPI)-anchored neuronal membrane protein that functions as a cell adhesion molecule. It may play a role in the formation of axon connections in the developing nervous system. It may also be involved in glial tumorigenesis and may provide a potential target for therapeutic intervention. Interactions: CNTN2 has been shown to interact with CNTNAP2 and NFYB.
**Thermal expansion valve** Thermal expansion valve: A thermal expansion valve or thermostatic expansion valve (often abbreviated as TEV, TXV, or TX valve) is a component in vapor-compression refrigeration and air conditioning systems that controls the amount of refrigerant released into the evaporator and is intended to regulate the superheat of the refrigerant that flows out of the evaporator to a steady value. Although often described as a "thermostatic" valve, an expansion valve is not able to regulate the evaporator's temperature to a precise value. The evaporator's temperature will vary only with the evaporating pressure, which will have to be regulated through other means (such as by adjusting the compressor's capacity). Thermal expansion valve: Thermal expansion valves are often referred to generically as "metering devices", although this may also refer to any other device that releases liquid refrigerant into the low-pressure section but does not react to temperature, such as a capillary tube or a pressure-controlled valve. Theory of operation: A thermal expansion valve is a key element of a heat pump; this is the cycle that makes air conditioning, or air cooling, possible. A basic refrigeration cycle consists of four major elements: a compressor, a condenser, a metering device and an evaporator. As a refrigerant passes through a circuit containing these four elements, air conditioning occurs. Theory of operation: The cycle starts when refrigerant enters the compressor in a low-pressure, moderate-temperature, gaseous form. The refrigerant is compressed by the compressor to a high-pressure and high-temperature gaseous state. The high-pressure and high-temperature gas then enters the condenser. The condenser cools the high-pressure and high-temperature gas allowing it to condense to a high-pressure liquid by transferring heat to a lower temperature medium, usually ambient air.
In order to produce a cooling effect from the higher pressure liquid, the flow of refrigerant entering the evaporator is restricted by the expansion valve, reducing the pressure and allowing isenthalpic expansion back into the vapor phase to take place, which absorbs heat and results in cooling. Theory of operation: A TXV type expansion device has a sensing bulb that is filled with a liquid whose thermodynamic properties are similar to those of the refrigerant. This bulb is thermally connected to the output of the evaporator so that the temperature of the refrigerant that leaves the evaporator can be sensed. The gas pressure in the sensing bulb provides the force to open the TXV, and as the temperature drops this force will decrease, therefore dynamically adjusting the flow of refrigerant into the evaporator. Theory of operation: The superheat is the excess temperature of the vapor above its boiling point at the evaporating pressure. No superheat indicates that the refrigerant is not being fully vaporized within the evaporator and liquid may end up recirculated to the compressor which is inefficient and can cause damage. On the other hand, excessive superheat indicates that there is insufficient refrigerant flowing through the evaporator coil, and thus a significant portion toward the end is not providing cooling. Therefore, by regulating the superheat to a small value, typically only a few °C, the heat transfer of the evaporator will be near optimal, without excess liquid refrigerant being returned to the compressor.In order to provide an appropriate superheat, a spring force is often applied in the direction that would close the valve, meaning that the valve will close when the bulb is at a higher temperature than the refrigerant is evaporating at. Spring-type valves may be fixed, or adjustable, although other methods to ensure a superheat also exist, such as the sensing bulb having a different vapor composition to the rest of the system. 
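The superheat regulation described above can be sketched numerically. This is an illustrative toy model, not a real valve design: the saturation curve, the pressures, and the opening-scale factor below are all hypothetical values chosen for the sketch.

```python
# Illustrative sketch of TXV superheat regulation. All numbers are
# hypothetical; a real refrigerant has a nonlinear saturation curve.

def saturation_temp_c(pressure_kpa):
    """Toy saturation curve for a hypothetical refrigerant (linear fit)."""
    return 0.1 * pressure_kpa - 40.0

def superheat_c(evap_outlet_temp_c, evap_pressure_kpa):
    """Superheat = vapor temperature above its boiling point at the
    evaporating pressure."""
    return evap_outlet_temp_c - saturation_temp_c(evap_pressure_kpa)

def valve_opening(bulb_pressure_kpa, evap_pressure_kpa, spring_kpa):
    """Force balance: bulb pressure opens the valve; evaporator pressure
    plus the spring force close it. Returns an opening fraction in [0, 1],
    using a hypothetical scaling of 100 kPa per full stroke."""
    net = bulb_pressure_kpa - (evap_pressure_kpa + spring_kpa)
    return max(0.0, min(1.0, net / 100.0))

# -5 degC vapor at 300 kPa (saturation -10 degC) -> 5 K of superheat.
print(superheat_c(-5.0, 300.0))
print(valve_opening(350.0, 300.0, 20.0))
```

As the bulb cools, its pressure drops, the net opening force shrinks, and the valve throttles the refrigerant flow, which is the dynamic adjustment described above.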
Theory of operation: Some thermal expansion valves are also specifically designed to ensure that a certain minimum flow of refrigerant can always flow through the system, while others can also be designed to control the evaporator's pressure so that it never rises above a maximum value. Description: Flow control, or metering, of the refrigerant is accomplished by use of a temperature sensing bulb, filled with a gas or liquid charge similar to the one inside the system, which causes the orifice in the valve to open against the spring pressure in the valve body as the temperature on the bulb increases. As the suction line temperature decreases, so does the pressure in the bulb, allowing the spring to close the valve. An air conditioning system with a TX valve is often more efficient than designs that do not use one. Also, TX valve air conditioning systems do not require an accumulator (a refrigerant tank placed downstream of the evaporator's outlet), since the valves reduce the liquid refrigerant flow when the evaporator's thermal load decreases, so that all the refrigerant completely evaporates inside the evaporator (in normal operating conditions such as a proper evaporator temperature and airflow). However, a liquid refrigerant receiver tank needs to be placed in the liquid line before the TX valve so that, in low evaporator thermal load conditions, any excess liquid refrigerant can be stored inside it, preventing any liquid from backflowing inside the condenser coil from the liquid line. Description: At heat loads that are very low compared to the valve's power rating, the orifice can become oversized for the heat load, and the valve can begin to repeatedly open and close in an attempt to control the superheat to the set value, making the superheat oscillate.
Cross charges are sensing bulb charges composed of a mixture of different refrigerants, or of non-refrigerant gases such as nitrogen, as opposed to a parallel charge, which consists solely of the same refrigerant used in the system. A cross charge is formulated so that its vapor pressure vs. temperature curve "crosses" that of the system's refrigerant at a chosen temperature: below that refrigerant temperature, the vapor pressure of the bulb charge exceeds that of the system's refrigerant, forcing the metering pin to stay in an open position. By preventing the valve orifice from closing completely during system operation, cross charges help to reduce the superheat hunting phenomenon. The same result can be attained through various kinds of bleed passages that maintain a minimum refrigerant flow at all times. The cost, in either case, is a certain flow of refrigerant that will not reach the suction line in a fully evaporated state while the heat load is particularly low, and that the compressor must be designed to handle. By carefully selecting the amount of a liquid sensing bulb charge, a so-called MOP (maximum operating pressure) effect can also be attained: above a precise refrigerant temperature, the sensing bulb charge is entirely evaporated, so the valve begins restricting flow irrespective of the sensed superheat, rather than increasing flow to bring the evaporator superheat down to the target value. The evaporator pressure is therefore kept from increasing above the MOP value. This feature helps to limit the compressor's maximum operating torque to a value that is acceptable for the application, such as a small-displacement car engine.
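The crossing behaviour described above can be illustrated with two hypothetical, linearized vapor-pressure curves; real curves are nonlinear, and the numbers here are invented purely for the sketch.

```python
# Toy illustration of a "cross charge": the bulb charge's vapor-pressure
# curve crosses the refrigerant's at one temperature. Both curves are
# hypothetical linear fits (kPa as a function of temperature in degC).

def refrigerant_pressure(t_c):
    return 300.0 + 10.0 * t_c   # steeper hypothetical curve

def bulb_charge_pressure(t_c):
    return 320.0 + 6.0 * t_c    # flatter hypothetical curve

# Solve 300 + 10t = 320 + 6t for the crossing temperature: t = 5 degC.
crossing_t = (320.0 - 300.0) / (10.0 - 6.0)

# Below the crossing, the bulb pressure exceeds the refrigerant pressure,
# holding the metering pin open; above it, normal superheat control resumes.
print(crossing_t)
print(bulb_charge_pressure(0.0) > refrigerant_pressure(0.0))
```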
Description: When the compressor is operational, a low refrigerant charge condition is often accompanied by a loud whooshing sound from the thermal expansion valve and the evaporator, caused by the lack of a liquid head immediately before the valve's moving orifice: the orifice ends up metering vapor, or a vapor/liquid mixture, instead of liquid. Types: There are two main types of thermal expansion valves: internally or externally equalized. The difference between externally and internally equalized valves is how the evaporator pressure affects the position of the needle. In internally equalized valves, the evaporator pressure against the diaphragm is the pressure at the inlet of the evaporator (typically via an internal connection to the outlet of the valve), whereas in externally equalized valves, it is the pressure at the outlet of the evaporator. Externally equalized thermostatic expansion valves therefore compensate for any pressure drop through the evaporator. For internally equalized valves, a pressure drop in the evaporator will have the effect of increasing the superheat. Types: Internally equalized valves can be used on single-circuit evaporator coils having a low pressure drop. If a refrigerant distributor is used for multiple parallel evaporators (rather than a valve on each evaporator), then an externally equalized valve must be used. Externally equalized TXVs can be used on all applications; however, an externally equalized TXV cannot be replaced with an internally equalized one. For automotive applications, a type of externally equalized thermal expansion valve known as the block type valve is often used.
In this type, either a sensing bulb is located within the suction line connection inside the valve body, in constant contact with the refrigerant flowing out of the evaporator's outlet, or a heat transfer means is provided so that the refrigerant can exchange heat with the sensing charge contained in a chamber above the diaphragm as it flows to the suction line. Types: Although the bulb/diaphragm type is used in most systems that control the refrigerant superheat, electronic expansion valves are becoming more common in larger systems, or in systems with multiple evaporators that must be adjusted independently. Although electronic valves offer greater control range and flexibility than bulb/diaphragm types, they add complexity and points of failure to a system, as they require additional temperature and pressure sensors and an electronic control circuit. Most electronic valves use a stepper motor hermetically sealed inside the valve to actuate a needle valve with a screw mechanism; on some units only the stepper rotor is within the hermetic body, driven magnetically through the sealed valve body by stator coils on the outside of the device.
**Crouton** Crouton: A crouton is a piece of toasted or fried bread, normally cubed and seasoned. Croutons are used to add texture and flavor to salads—notably the Caesar salad— as an accompaniment to soups and stews, or eaten as a snack food. Etymology: The word crouton is derived from the French croûton, itself a diminutive of croûte, meaning "crust". Croutons are often seen in the shape of small cubes, but they can be of any size and shape, up to a very large slice. Many people now use crouton for croute, so the usage has changed. Historically, however, a croute was a slice of a baguette lightly brushed with oil or clarified butter and baked. In English descriptions of French cooking, croûte is not only a noun but also has a verb form that describes the cooking process that transforms the bread into the crust. Preparation: The preparation of croutons is relatively simple. Typically the cubes of bread are lightly coated in oil or butter (which may be seasoned or flavored for variety) and then baked. Croutons can also be cut into sticks. Some commercial preparations use machinery to sprinkle various seasonings on them. Alternatively, they may be fried lightly in butter or vegetable oil, until crisp and brown, to give them a buttery flavor and crunchy texture. Some croutons are prepared with the addition of cheese.Nearly any type of bread—in a loaf or pre-sliced, with or without crust—may be used to make croutons. Dry or stale bread or leftover bread is usually used instead of fresh bread. Once prepared, the croutons will remain fresh far longer than unprepared bread. List of possible ingredients: bread garlic butter or oil (e.g., olive oil or rapeseed oil) Parmesan herbs spices (ground or whole) Gastronomy: A dish prepared à la Grenobloise (in the Grenoble manner) has a garnish of small croutons along with beurre noisette, capers, parsley, and lemon. 
Dried and cubed bread is commonly sold in large bags in North America to make Thanksgiving holiday stuffing or dressing. However, these are generally different from salad croutons, being only dry bread instead of buttered or oiled and with other seasonings.
**Independent hardware vendor** Independent hardware vendor: An independent hardware vendor (IHV) is a company specializing in making or selling computer hardware, usually for niche markets.
**Sulforhodamine B** Sulforhodamine B: Sulforhodamine B or Kiton Red 620 (C27H30N2O7S2) is a fluorescent dye with uses spanning from laser-induced fluorescence (LIF) to the quantification of cellular proteins of cultured cells. This red solid dye is very water-soluble. Spectroscopy: The dye has maximal absorbance at 565 nm and maximal fluorescence emission at 586 nm. It does not exhibit pH-dependent absorption or fluorescence over the pH range of 3 to 10. Applications: Sulforhodamine B is often used as a membrane-impermeable polar tracer or used for cell density determination via determination of cellular proteins (cytotoxicity assay).
**X-ray photoelectron spectroscopy** X-ray photoelectron spectroscopy: X-ray photoelectron spectroscopy (XPS) is a surface-sensitive quantitative spectroscopic technique based on the photoelectric effect that can identify the elements that exist within a material (elemental composition) or are covering its surface, as well as their chemical state, and the overall electronic structure and density of the electronic states in the material. XPS is a powerful measurement technique because it not only shows what elements are present, but also what other elements they are bonded to. The technique can be used in line profiling of the elemental composition across the surface, or in depth profiling when paired with ion-beam etching. It is often applied to study chemical processes in the materials in their as-received state or after cleavage, scraping, exposure to heat, reactive gases or solutions, ultraviolet light, or during ion implantation. X-ray photoelectron spectroscopy: XPS belongs to the family of photoemission spectroscopies in which electron population spectra are obtained by irradiating a material with a beam of X-rays. Chemical states are inferred from the measurement of the kinetic energy and the number of the ejected electrons. XPS requires high vacuum (residual gas pressure p ~ 10−6 Pa) or ultra-high vacuum (p < 10−7 Pa) conditions, although a current area of development is ambient-pressure XPS, in which samples are analyzed at pressures of a few tens of millibar. X-ray photoelectron spectroscopy: When laboratory X-ray sources are used, XPS easily detects all elements except hydrogen and helium. The detection limit is in the parts per thousand range, but parts per million (ppm) are achievable with long collection times and concentration at top surface.
X-ray photoelectron spectroscopy: XPS is routinely used to analyze inorganic compounds, metal alloys, polymers, elements, catalysts, glasses, ceramics, paints, papers, inks, woods, plant parts, make-up, teeth, bones, medical implants, bio-materials, coatings, viscous oils, glues, ion-modified materials and many others. Somewhat less routinely XPS is used to analyze the hydrated forms of materials such as hydrogels and biological samples by freezing them in their hydrated state in an ultrapure environment, and allowing multilayers of ice to sublime away prior to analysis. Basic physics: Because the energy of an X-ray with particular wavelength is known (for Al Kα X-rays, Ephoton = 1486.7 eV), and because the emitted electrons' kinetic energies are measured, the electron binding energy of each of the emitted electrons can be determined by using the photoelectric effect equation, Ebinding = Ephoton − (Ekinetic + ϕ), where Ebinding is the binding energy (BE) of the electron measured relative to the chemical potential, Ephoton is the energy of the X-ray photons being used, Ekinetic is the kinetic energy of the electron as measured by the instrument, and ϕ is a work function-like term for the specific surface of the material, which in real measurements includes a small correction by the instrument's work function because of the contact potential. This equation is essentially a conservation of energy equation. The work function-like term ϕ can be thought of as an adjustable instrumental correction factor that accounts for the few eV of kinetic energy given up by the photoelectron as it gets emitted from the bulk and absorbed by the detector. It is a constant that rarely needs to be adjusted in practice. History: In 1887, Heinrich Rudolf Hertz discovered but could not explain the photoelectric effect, which was later explained in 1905 by Albert Einstein (Nobel Prize in Physics 1921). Two years after Einstein's publication, in 1907, P.D.
Innes experimented with a Röntgen tube, Helmholtz coils, a magnetic field hemisphere (an electron kinetic energy analyzer), and photographic plates, to record broad bands of emitted electrons as a function of velocity, in effect recording the first XPS spectrum. Other researchers, including Henry Moseley, Rawlinson and Robinson, independently performed various experiments to sort out the details in the broad bands. After WWII, Kai Siegbahn and his research group in Uppsala (Sweden) developed several significant improvements in the equipment, and in 1954 recorded the first high-energy-resolution XPS spectrum of cleaved sodium chloride (NaCl), revealing the potential of XPS. A few years later in 1967, Siegbahn published a comprehensive study of XPS, bringing instant recognition of the utility of XPS and also the first hard X-ray photoemission experiments, which he referred to as Electron Spectroscopy for Chemical Analysis (ESCA). In cooperation with Siegbahn, a small group of engineers (Mike Kelly, Charles Bryson, Lavier Faye, Robert Chaney) at Hewlett-Packard in the US, produced the first commercial monochromatic XPS instrument in 1969. Siegbahn received the Nobel Prize for Physics in 1981, to acknowledge his extensive efforts to develop XPS into a useful analytical tool. In parallel with Siegbahn's work, David Turner at Imperial College London (and later at Oxford University) developed ultraviolet photoelectron spectroscopy (UPS) for molecular species using helium lamps. Measurement: A typical XPS spectrum is a plot of the number of electrons detected at a specific binding energy. Each element produces a set of characteristic XPS peaks. These peaks correspond to the electron configuration of the electrons within the atoms, e.g., 1s, 2s, 2p, 3s, etc. The number of detected electrons in each peak is directly related to the amount of element within the XPS sampling volume. 
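The photoelectric-effect relation from the basic physics section, Ebinding = Ephoton − (Ekinetic + ϕ), can be illustrated numerically. The kinetic energy and instrument term below are hypothetical example values; only the Al Kα photon energy comes from the text.

```python
# Numerical illustration of the XPS binding-energy relation:
# E_binding = E_photon - (E_kinetic + phi). Example values only.

AL_KALPHA_EV = 1486.7  # Al K-alpha photon energy in eV (from the text)

def binding_energy(kinetic_ev, phi_ev, photon_ev=AL_KALPHA_EV):
    """Binding energy of a photoelectron relative to the chemical potential."""
    return photon_ev - (kinetic_ev + phi_ev)

# A photoelectron measured at 1197.5 eV kinetic energy with a 4.5 eV
# instrument term (both hypothetical) gives a binding energy near the
# C 1s region.
print(round(binding_energy(1197.5, 4.5), 1))  # -> 284.7
```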
To generate atomic percentage values, each raw XPS signal is corrected by dividing the intensity by a relative sensitivity factor (RSF), and normalized over all of the elements detected. Since hydrogen is not detected, these atomic percentages exclude hydrogen. Measurement: Quantitative accuracy and precision XPS is widely used to generate an empirical formula because it readily yields excellent quantitative accuracy from homogeneous solid-state materials. Absolute quantification requires the use of certified (or independently verified) standard samples, and is generally more challenging, and less common. Relative quantification involves comparisons between several samples in a set for which one or more analytes are varied while all other components (the sample matrix) are held constant. Quantitative accuracy depends on several parameters such as: signal-to-noise ratio, peak intensity, accuracy of relative sensitivity factors, correction for electron transmission function, surface volume homogeneity, correction for energy dependence of electron mean free path, and degree of sample degradation due to analysis. Under optimal conditions, the quantitative accuracy of the atomic percent (at%) values calculated from the major XPS peaks is 90-95% for each peak. The quantitative accuracy for the weaker XPS signals, which have peak intensities 10-20% of the strongest signal, is 60-80% of the true value, and depends upon the amount of effort used to improve the signal-to-noise ratio (for example by signal averaging). Quantitative precision (the ability to repeat a measurement and obtain the same result) is an essential consideration for proper reporting of quantitative results. Measurement: Detection limits Detection limits may vary greatly with the cross section of the core state of interest and the background signal level. In general, photoelectron cross sections increase with atomic number.
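The relative-sensitivity-factor normalization described above can be sketched as follows; the peak areas and RSF values here are hypothetical.

```python
# Sketch of converting raw XPS peak areas to atomic percentages:
# divide each intensity by its relative sensitivity factor (RSF),
# then normalize over all detected elements. Hydrogen is excluded
# because XPS does not detect it. All numbers are hypothetical.

def atomic_percent(intensities, rsfs):
    """intensities, rsfs: dicts keyed by element symbol -> float."""
    corrected = {el: intensities[el] / rsfs[el] for el in intensities}
    total = sum(corrected.values())
    return {el: 100.0 * v / total for el, v in corrected.items()}

raw = {"C": 12000.0, "O": 31920.0}  # hypothetical raw peak areas
rsf = {"C": 1.0, "O": 2.66}         # hypothetical sensitivity factors
print(atomic_percent(raw, rsf))     # roughly 50 at% C and 50 at% O
```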
The background increases with the atomic number of the matrix constituents as well as the binding energy, because of secondary emitted electrons. For example, in the case of gold on silicon, where the high cross section Au4f peak is at a higher kinetic energy than the major silicon peaks, it sits on a very low background, and detection limits of 1 ppm or better may be achieved with reasonable acquisition times. Conversely, for silicon on gold, where the modest cross section Si2p line sits on the large background below the Au4f lines, detection limits would be much worse for the same acquisition time. Detection limits are often quoted as 0.1–1.0 atomic percent (0.1% = 1 part per thousand = 1000 ppm) for practical analyses, but lower limits may be achieved in many circumstances. Measurement: Degradation during analysis Degradation depends on the sensitivity of the material to the wavelength of X-rays used, the total dose of the X-rays, the temperature of the surface and the level of the vacuum. Metals, alloys, ceramics and most glasses are not measurably degraded by either non-monochromatic or monochromatic X-rays. Some, but not all, polymers, catalysts, certain highly oxygenated compounds, various inorganic compounds and fine organics are. Non-monochromatic X-ray sources produce a significant amount of high energy Bremsstrahlung X-rays (1–15 keV of energy) which directly degrade the surface chemistry of various materials. Non-monochromatic X-ray sources also produce a significant amount of heat (100 to 200 °C) on the surface of the sample because the anode that produces the X-rays is typically only 1 to 5 cm (2 in) away from the sample. This level of heat, when combined with the Bremsstrahlung X-rays, acts to increase the amount and rate of degradation for certain materials. Monochromatised X-ray sources, because they are farther away (50–100 cm) from the sample, do not produce noticeable heat effects.
In monochromatised sources, a quartz monochromator system diffracts the Bremsstrahlung X-rays out of the X-ray beam, which means the sample is only exposed to one narrow band of X-ray energy. For example, if aluminum K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.43 eV, centered on 1,486.7 eV (E/ΔE = 3,457). If magnesium K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.36 eV, centered on 1,253.7 eV (E/ΔE = 3,483). These are the intrinsic X-ray line widths; the range of energies to which the sample is exposed depends on the quality and optimization of the X-ray monochromator. Because the vacuum removes various gases (e.g., O2, CO) and liquids (e.g., water, alcohol, solvents, etc.) that were initially trapped within or on the surface of the sample, the chemistry and morphology of the surface will continue to change until the surface achieves a steady state. This type of degradation is sometimes difficult to detect. Measurement: Measured area Measured area depends on instrument design. The minimum analysis area ranges from 10 to 200 micrometres. The largest size for a monochromatic beam of X-rays is 1–5 mm. Non-monochromatic beams are 10–50 mm in diameter. Spectroscopic image resolution levels of 200 nm or below have been achieved on the latest imaging XPS instruments using synchrotron radiation as the X-ray source. Sample size limits Instruments accept small (mm range) and large samples (cm range), e.g. wafers. The limiting factor is the design of the sample holder, the sample transfer, and the size of the vacuum chamber. Large samples are laterally moved in the x and y directions to analyze a larger area.
Measurement: Analysis time Analysis time typically ranges from 1–20 minutes for a broad survey scan that measures the amount of all detectable elements; 1–15 minutes for a high-resolution scan that reveals chemical state differences (a high signal/noise ratio for the peak-area result often requires multiple sweeps of the region of interest); and 1–4 hours for a depth profile that measures 4–5 elements as a function of etched depth (this process time can vary the most, as many factors play a role). Surface sensitivity: XPS detects only electrons that have actually escaped from the sample into the vacuum of the instrument. In order to escape from the sample, a photoelectron must travel through the sample. Photo-emitted electrons can undergo inelastic collisions, recombination, excitation of the sample, recapture or trapping in various excited states within the material, all of which can reduce the number of escaping photoelectrons. These effects appear as an exponential attenuation function as the depth increases, making the signals detected from analytes at the surface much stronger than the signals detected from analytes deeper below the sample surface. Thus, the signal measured by XPS is an exponentially surface-weighted signal, and this fact can be used to estimate analyte depths in layered materials. Chemical states and chemical shift: The ability to produce chemical state information, i.e. the local bonding environment of an atomic species in question, from the topmost few nanometers of the sample makes XPS a unique and valuable tool for understanding the chemistry of the surface. The local bonding environment is affected by the formal oxidation state, the identity of its nearest-neighbor atoms, and its bonding hybridization to the nearest-neighbor or next-nearest-neighbor atoms.
For example, while the nominal binding energy of the C1s electron is 284.6 eV, subtle but reproducible shifts in the actual binding energy, the so-called chemical shift (analogous to NMR spectroscopy), provide the chemical state information. Chemical-state analysis is widely used for carbon. It reveals the presence or absence of the chemical states of carbon, in approximate order of increasing binding energy, as: carbide (-C2−), silane (-Si-CH3), methylene/methyl/hydrocarbon (-CH2-CH2-, CH3-CH2-, and -CH=CH-), amine (-CH2-NH2), alcohol (-C-OH), ketone (-C=O), organic ester (-COOR), carbonate (-CO32−), monofluoro-hydrocarbon (-CFH-CH2-), difluoro-hydrocarbon (-CF2-CH2-), and trifluorocarbon (-CH2-CF3), to name but a few. Chemical state analysis of the surface of a silicon wafer reveals chemical shifts due to different formal oxidation states, such as: n-doped silicon and p-doped silicon (metallic silicon), silicon suboxide (Si2O), silicon monoxide (SiO), Si2O3, and silicon dioxide (SiO2). An example of this is seen in the figure "High-resolution spectrum of an oxidized silicon wafer in the energy range of the Si 2p signal". Instrumentation: The main components of an XPS system are the source of X-rays, an ultra-high vacuum (UHV) chamber with mu-metal magnetic shielding, an electron collection lens, an electron energy analyzer, an electron detector system, a sample introduction chamber, sample mounts, a sample stage with the ability to heat or cool the sample, and a set of stage manipulators. Instrumentation: The most prevalent electron spectrometer for XPS is the hemispherical electron analyzer, which offers high energy resolution and spatial selection of the emitted electrons. Sometimes, however, much simpler electron energy filters (cylindrical mirror analyzers) are used, most often for checking the elemental composition of the surface. These represent a trade-off between the need for high count rates and high angular/energy resolution.
This type consists of two co-axial cylinders placed in front of the sample, the inner one being held at a positive potential, while the outer cylinder is held at a negative potential. Only the electrons with the right energy can pass through this setup and are detected at the end. The count rates are high but the resolution (both in energy and angle) is poor. Instrumentation: Electrons are detected using electron multipliers: a single channeltron for single energy detection, or arrays of channeltrons and microchannel plates for parallel acquisition. These devices consist of a glass channel with a resistive coating on the inside. A high voltage is applied between the front and the end. An incoming electron is accelerated to the wall, where it removes more electrons, in such a way that an electron avalanche is created, until a measurable current pulse is obtained. Instrumentation: Laboratory based XPS In laboratory systems, either 10–30 mm beam diameter non-monochromatic Al Kα or Mg Kα anode radiation is used, or a focused 20–500 micrometer diameter beam of single wavelength Al Kα monochromatised radiation. Monochromatic Al Kα X-rays are normally produced by diffracting and focusing a beam of non-monochromatic X-rays off of a thin disc of natural, crystalline quartz with a <1010> orientation. The resulting wavelength is 8.3386 angstroms (0.83386 nm), which corresponds to a photon energy of 1486.7 eV. Aluminum Kα X-rays have an intrinsic full width at half maximum (FWHM) of 0.43 eV, centered on 1486.7 eV (E/ΔE = 3457). For a well-optimized monochromator, the energy width of the monochromated aluminum Kα X-rays is 0.16 eV, but energy broadening in common electron energy analyzers (spectrometers) produces an ultimate energy resolution on the order of FWHM=0.25 eV which, in effect, is the ultimate energy resolution of most commercial systems.
When working under practical, everyday conditions, high energy-resolution settings will produce peak widths (FWHM) between 0.4 and 0.6 eV for various pure elements and some compounds. For example, in a spectrum obtained in 1 minute at a pass energy of 20 eV using monochromated aluminum Kα X-rays, the Ag 3d5/2 peak for a clean silver film or foil will typically have a FWHM of 0.45 eV. Non-monochromatic magnesium X-rays have a wavelength of 9.89 angstroms (0.989 nm), which corresponds to a photon energy of 1253 eV. The energy width of the non-monochromated X-ray is roughly 0.70 eV, which, in effect, is the ultimate energy resolution of a system using non-monochromatic X-rays. Non-monochromatic X-ray sources do not use any crystals to diffract the X-rays, which allows all primary X-ray lines and the full range of high-energy Bremsstrahlung X-rays (1–12 keV) to reach the surface. The ultimate energy resolution (FWHM) when using a non-monochromatic Mg Kα source is 0.9–1.0 eV, which includes some contribution from spectrometer-induced broadening. Instrumentation: Synchrotron based XPS A breakthrough has been brought about in the last decades by the development of large scale synchrotron radiation facilities. Here, bunches of relativistic electrons kept in orbit inside a storage ring are accelerated through bending magnets or insertion devices like wigglers and undulators to produce a high brilliance and high flux photon beam. The beam is orders of magnitude more intense and better collimated than that typically produced by anode-based sources. Synchrotron radiation is also tunable over a wide wavelength range, and can be made polarized in several distinct ways. This way, the photon energy can be selected to yield the optimum photoionization cross-section for probing a particular core level. The high photon flux, in addition, makes it possible to perform XPS experiments even on low-density atomic species, such as molecular and atomic adsorbates.
Data processing: Peak identification The number of peaks produced by a single element varies from 1 to more than 20. Tables of binding energies that identify the shell and spin-orbit of each peak produced by a given element are included with modern XPS instruments, and can be found in various handbooks and websites. Because these experimentally determined energies are characteristic of specific elements, they can be directly used to identify experimentally measured peaks of a material with unknown elemental composition. Data processing: Before beginning the process of peak identification, the analyst must determine if the binding energies of the unprocessed survey spectrum (0-1400 eV) have or have not been shifted due to a positive or negative surface charge. This is most often done by looking for two peaks that are due to the presence of carbon and oxygen. Data processing: Charge referencing insulators Charge referencing is needed when a sample suffers a charge induced shift of experimental binding energies to obtain meaningful binding energies from both wide-scan, high sensitivity (low energy resolution) survey spectra (0-1100 eV), and also narrow-scan, chemical state (high energy resolution) spectra. Charge induced shifting is normally due to a modest excess of low voltage (-1 to -20 eV) electrons attached to the surface, or a modest shortage of electrons (+1 to +15 eV) within the top 1-12 nm of the sample caused by the loss of photo-emitted electrons. If, by chance, the charging of the surface is excessively positive, then the spectrum might appear as a series of rolling hills, not sharp peaks as shown in the example spectrum. Data processing: Charge referencing is performed by adding a Charge Correction Factor to each of the experimentally measured peaks. 
Since various hydrocarbon species appear on all air-exposed surfaces, the binding energy of the hydrocarbon C (1s) XPS peak is used to charge-correct all energies obtained from non-conductive samples or conductors that have been deliberately insulated from the sample mount. The peak is normally found between 284.5 eV and 285.5 eV. The 284.8 eV binding energy is routinely used as the reference binding energy for charge referencing insulators, so that the charge correction factor is the difference between 284.8 eV and the experimentally measured C (1s) peak position. Data processing: Conductive materials and most native oxides of conductors should never need charge referencing; conductive materials should only be charge referenced if the topmost layer of the sample has a thick non-conductive film. If needed, the charging effect can also be compensated by supplying suitable low-energy charges to the surface, for example by using a low-voltage (1-20 eV) electron beam from an electron flood gun, UV light, a low-voltage argon ion beam together with a low-voltage (1-10 eV) electron beam, aperture masks, or a mesh screen with low-voltage electron beams.
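The charge-correction arithmetic described above is simple to sketch; the peak positions used here are made-up illustrative numbers, not reference data:

```python
# Illustrative charge referencing: every measured peak is shifted by the
# difference between the 284.8 eV reference and the measured adventitious
# C (1s) position. The example peak positions are hypothetical.
C1S_REFERENCE_EV = 284.8

def charge_correct(peaks_ev, measured_c1s_ev):
    """Apply one charge-correction factor to all measured binding energies."""
    correction = C1S_REFERENCE_EV - measured_c1s_ev
    return [round(e + correction, 2) for e in peaks_ev]

# A surface charged so that C (1s) appears at 286.5 eV shifts every peak
# down by the same 1.7 eV once corrected.
print(charge_correct([286.5, 534.2], measured_c1s_ev=286.5))  # [284.8, 532.5]
```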
Peak fitting results are affected by overall peak widths (at half maximum, FWHM), possible chemical shifts, peak shapes, instrument design factors and experimental settings, as well as sample properties: The full width at half maximum (FWHM) values are useful indicators of chemical state changes and physical influences. An increase in FWHM may indicate a change in the number of chemical bonds, a change in the sample condition (X-ray damage) or differential charging of the surface (localised differences in the charge state of the surface). The FWHM also depends on the detector, and can increase as the sample becomes charged. When using high energy resolution experiment settings on an XPS instrument equipped with a monochromatic Al K-alpha X-ray source, the FWHM of the major XPS peaks ranges from 0.3 eV to 1.7 eV. The following is a simple summary of FWHM from major XPS signals:
Main metal peaks (e.g. 1s, 2p3, 3d5, 4f7) from pure metals have FWHMs that range from 0.30 eV to 1.0 eV
Main metal peaks (e.g. 1s, 2p3, 3d5, 4f7) from binary metal oxides have FWHMs that range from 0.9 eV to 1.7 eV
The O (1s) peak from binary metal oxides has a FWHM that, in general, ranges from 1.0 eV to 1.4 eV
The C (1s) peak from adventitious hydrocarbons has a FWHM that, in general, ranges from 1.0 eV to 1.4 eV
Chemical shift values depend on the degree of electron bond polarization between nearest-neighbor atoms. A specific chemical shift is the difference in BE values of one specific chemical state versus the BE of one form of the pure element, or of a particular agreed-upon chemical state of that element. Component peaks derived from peak-fitting a raw chemical state spectrum can be assigned to the presence of different chemical states within the sampling volume of the sample. Data processing: Peak shapes depend on instrument parameters, experimental parameters and sample characteristics.
Instrument design factors include the linewidth and purity of the X-rays used (monochromatic Al, non-monochromatic Mg, synchrotron, Ag, Zr), as well as the properties of the electron analyzer. Experimental settings include those of the electron analyzer (e.g. pass energy, step size). Sample factors that affect the peak fitting are the number of physical defects within the analysis volume (from ion etching, or laser cleaning) and the physical form of the sample (single crystal, polished, powder, corroded). Theoretical aspects: Quantum mechanical treatment When a photoemission event takes place, the following energy conservation rule holds: hν = |Ebv| + Ekin, where hν is the photon energy, |Ebv| is the electron BE (binding energy with respect to the vacuum level) prior to ionization, and Ekin is the kinetic energy of the photoelectron. If reference is taken with respect to the Fermi level (as is typically done in photoelectron spectroscopy), |Ebv| must be replaced by the sum of the binding energy (BE) relative to the Fermi level, |EbF|, and the sample work function, Φ0. From the theoretical point of view, the photoemission process from a solid can be described with a semiclassical approach, where the electromagnetic field is still treated classically, while a quantum-mechanical description is used for matter. Theoretical aspects: The one-particle Hamiltonian for an electron subjected to an electromagnetic field is given by: iℏ ∂ψ/∂t = [(1/2m)(p^ − (e/c)A^)² + V^]ψ = H^ψ, where ψ is the electron wave function, A is the vector potential of the electromagnetic field and V is the unperturbed potential of the solid. In the Coulomb gauge (∇⋅A = 0), the vector potential commutes with the momentum operator ([p^, A^] = 0), so that the expression in brackets in the Hamiltonian simplifies to: (p^ − (e/c)A^)² = p^² − (2e/c)A^⋅p^ + (e/c)²A^². Actually, neglecting the ∇⋅A term in the Hamiltonian, we are disregarding possible photocurrent contributions. Such effects are generally negligible in the bulk, but may become important at the surface.
The quadratic term in A can instead be safely neglected, since its contribution in a typical photoemission experiment is about one order of magnitude smaller than that of the first term. Theoretical aspects: In the first-order perturbation approach, the one-electron Hamiltonian can be split into two terms, an unperturbed Hamiltonian H^0, plus an interaction Hamiltonian H^′, which describes the effects of the electromagnetic field: H^′ = −(e/mc) A^⋅p^. In time-dependent perturbation theory, for a harmonic or constant perturbation, the transition rate between the initial state ψi and the final state ψf is expressed by Fermi's Golden Rule: dω/dt ∝ (2π/ℏ)|⟨ψf|H^′|ψi⟩|² δ(Ef − Ei − hν), where Ei and Ef are the eigenvalues of the unperturbed Hamiltonian in the initial and final state, respectively, and hν is the photon energy. Fermi's Golden Rule uses the approximation that the perturbation acts on the system for an infinite time. This approximation is valid when the time that the perturbation acts on the system is much larger than the time needed for the transition. It should be understood that this equation needs to be integrated with the density of states ρ(E), which gives: dω/dt ∝ (2π/ℏ)|⟨ψf|H^′|ψi⟩|² ρ(Ef) = |Mfi|² ρ(Ef). In a real photoemission experiment the ground state core electron BE cannot be directly probed, because the measured BE incorporates both initial state and final state effects, and the spectral linewidth is broadened owing to the finite core-hole lifetime (τ). Theoretical aspects: Assuming an exponential decay probability for the core hole in the time domain (exp(−t/τ)), the spectral function will have a Lorentzian shape, with a FWHM (Full Width at Half Maximum) Γ given by: IL(E) = (I0/π) (Γ/2)/[(E − Eb)² + (Γ/2)²]. From the theory of Fourier transforms, Γ and τ are linked by the indeterminacy relation: Γτ ≥ ℏ. The photoemission event leaves the atom in a highly excited core-ionized state, from which it can decay radiatively (fluorescence) or non-radiatively (typically by Auger decay).
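Taking the indeterminacy relation at equality, Γ ≈ ℏ/τ, gives a quick feel for lifetime broadening; a rough numerical sketch (the 1 fs lifetime is an illustrative value, not one quoted above):

```python
# Lifetime broadening, taking the indeterminacy relation at equality:
# Gamma = hbar / tau, so a shorter core-hole lifetime gives a wider line.
HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

def lorentzian_fwhm_ev(lifetime_s: float) -> float:
    """Lorentzian FWHM Gamma (eV) for a core-hole lifetime tau (s)."""
    return HBAR_EV_S / lifetime_s

# An illustrative 1 fs core-hole lifetime corresponds to ~0.66 eV of width.
print(round(lorentzian_fwhm_ev(1e-15), 2))  # 0.66
```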
Theoretical aspects: Besides Lorentzian broadening, photoemission spectra are also affected by a Gaussian broadening, whose contribution can be expressed by exp(−(E − Eb)²/2σ²). Three main factors enter the Gaussian broadening of the spectra: the experimental energy resolution, vibrational broadening and inhomogeneous broadening. Theoretical aspects: The first effect is caused by the imperfect monochromaticity of the photon beam (which results in a finite bandwidth) and by the limited resolving power of the analyzer. The vibrational component is produced by the excitation of low-energy vibrational modes both in the initial and in the final state. Finally, inhomogeneous broadening can originate from the presence of unresolved core level components in the spectrum. Theoretical aspects: Theory of core level photoemission of electrons Inelastic mean free path In a solid, inelastic scattering events also contribute to the photoemission process, generating electron-hole pairs which show up as an inelastic tail on the high BE side of the main photoemission peak. In fact this allows the calculation of the electron inelastic mean free path (IMFP). This can be modeled based on the Beer–Lambert law, which states I(z) = I0 e^(−z/λ), where λ is the IMFP and z is the axis perpendicular to the sample. In fact it is generally the case that the IMFP is only weakly material dependent, but rather strongly dependent on the photoelectron kinetic energy. Quantitatively, Ekin can be related to the IMFP by λ(nm) = 538a/Ekin² + 0.41a(aEkin)^(1/2), where a is the mean atomic diameter as calculated from the density, a = ρ^(−1/3). The above formula was developed by Seah and Dench. Theoretical aspects: Plasmonic effects In some cases, energy loss features due to plasmon excitations are also observed.
This can either be a final state effect caused by core hole decay, which generates quantized electron wave excitations in the solid (intrinsic plasmons), or it can be due to excitations induced by photoelectrons travelling from the emitter to the surface (extrinsic plasmons). Due to the reduced coordination number of first-layer atoms, the plasma frequencies of bulk and surface atoms are related by ωsurface = ωbulk/√2, so that surface and bulk plasmons can be easily distinguished from each other. Plasmon states in a solid are typically localized at the surface, and can strongly affect the IMFP. Theoretical aspects: Vibrational effects Temperature-dependent atomic lattice vibrations, or phonons, can broaden the core level components and attenuate the interference patterns in an X-ray photoelectron diffraction (XPD) experiment. The simplest way to account for vibrational effects is by multiplying the scattered single-photoelectron wave function ϕj by the Debye–Waller factor: exp(−Δkj² Ūj²), where Δkj² is the squared magnitude of the wave vector variation caused by scattering, and Ūj² is the temperature-dependent one-dimensional vibrational mean squared displacement of the jth emitter. In the Debye model, the mean squared displacement is calculated in terms of the Debye temperature, ΘD, as: Ūj²(T) = 9ℏ²T/(m kB ΘD²).
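The Seah–Dench relation and the Beer–Lambert attenuation quoted earlier can be sketched numerically; a rough illustration in which the 0.25 nm atomic diameter and 1000 eV kinetic energy are example values, not figures from the text:

```python
import math

# Seah-Dench estimate of the IMFP (nm) for elements, as quoted above:
# lambda = 538*a/E^2 + 0.41*a*sqrt(a*E), with a in nm and E in eV.
def imfp_nm(e_kin_ev: float, a_nm: float) -> float:
    return 538.0 * a_nm / e_kin_ev**2 + 0.41 * a_nm * math.sqrt(a_nm * e_kin_ev)

# Beer-Lambert attenuation of the photoelectron signal with depth z.
def attenuation(z_nm: float, imfp: float) -> float:
    return math.exp(-z_nm / imfp)

# Example values: a ~0.25 nm atomic diameter and 1000 eV electrons give an
# IMFP of ~1.6 nm, and ~95% of the detected signal comes from within 3*lambda.
lam = imfp_nm(1000.0, 0.25)
print(round(lam, 2), round(1.0 - attenuation(3.0 * lam, lam), 2))  # 1.62 0.95
```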
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LZ77 and LZ78** LZ77 and LZ78: LZ77 and LZ78 are two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978. They are also known as LZ1 and LZ2 respectively. These two algorithms form the basis for many variations including LZW, LZSS, LZMA and others. Besides their academic influence, these algorithms formed the basis of several ubiquitous compression schemes, including GIF and the DEFLATE algorithm used in PNG and ZIP. They are both theoretically dictionary coders. LZ77 maintains a sliding window during compression. This was later shown to be equivalent to the explicit dictionary constructed by LZ78; however, they are only equivalent when the entire data is intended to be decompressed. LZ77 and LZ78: Since LZ77 encodes and decodes from a sliding window over previously seen characters, decompression must always start at the beginning of the input. Conceptually, LZ78 decompression could allow random access to the input if the entire dictionary were known in advance. However, in practice the dictionary is created during encoding and decoding by creating a new phrase whenever a token is output. The algorithms were named an IEEE Milestone in 2004. In 2021 Jacob Ziv was awarded the IEEE Medal of Honor for his involvement in their development. Theoretical efficiency: In the second of the two papers that introduced these algorithms, they are analyzed as encoders defined by finite-state machines. A measure analogous to information entropy is developed for individual sequences (as opposed to probabilistic ensembles). This measure gives a bound on the data compression ratio that can be achieved. It is then shown that there exist finite lossless encoders for every sequence that achieve this bound as the length of the sequence grows to infinity. In this sense an algorithm based on this scheme produces asymptotically optimal encodings. This result can be proven more directly, as for example in notes by Peter Shor.
LZ77: LZ77 algorithms achieve compression by replacing repeated occurrences of data with references to a single copy of that data existing earlier in the uncompressed data stream. A match is encoded by a pair of numbers called a length-distance pair, which is equivalent to the statement "each of the next length characters is equal to the characters exactly distance characters behind it in the uncompressed stream". (The distance is sometimes called the offset instead.) To spot matches, the encoder must keep track of some amount of the most recent data, such as the last 2 KB, 4 KB, or 32 KB. The structure in which this data is held is called a sliding window, which is why LZ77 is sometimes called sliding-window compression. The encoder needs to keep this data to look for matches, and the decoder needs to keep this data to interpret the matches the encoder refers to. The larger the sliding window is, the further back the encoder may search for matches to reference. LZ77: It is not only acceptable but frequently useful to allow length-distance pairs to specify a length that actually exceeds the distance. As a copy command, this is puzzling: "Go back four characters and copy ten characters from that position into the current position". How can ten characters be copied over when only four of them are actually in the buffer? Tackling one byte at a time, there is no problem serving this request, because as a byte is copied over, it may be fed again as input to the copy command. When the copy-from position makes it to the initial destination position, it is consequently fed data that was pasted from the beginning of the copy-from position. The operation is thus equivalent to the statement "copy the data you were given and repetitively paste it until it fits". As this type of pair repeats a single copy of data multiple times, it can be used to incorporate a flexible and easy form of run-length encoding.
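The byte-at-a-time copy described above can be sketched in a few lines; the overlapping case (length greater than distance) works without any special handling:

```python
# Decoding a length-distance pair one byte at a time naturally handles
# length > distance: each copied byte becomes available to the same copy.
def copy_pair(out: bytearray, distance: int, length: int) -> None:
    for _ in range(length):
        out.append(out[-distance])

buf = bytearray(b"abcd")
copy_pair(buf, distance=4, length=10)  # "go back 4 characters, copy 10"
print(buf.decode())  # abcdabcdabcdab
```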
LZ77: Another way to see things is as follows: While encoding, for the search pointer to continue finding matched pairs past the end of the search window, all characters from the first match at offset D and forward to the end of the search window must have matched input, and these are the (previously seen) characters that comprise a single run unit of length LR, which must equal D. Then as the search pointer proceeds past the search window and forward, as far as the run pattern repeats in the input, the search and input pointers will be in sync and match characters until the run pattern is interrupted. Then L characters have been matched in total, L > D, and the code is [D, L, c]. LZ77: Upon decoding [D, L, c], again, D = LR. When the first LR characters are read to the output, this corresponds to a single run unit appended to the output buffer. At this point, the read pointer could be thought of as only needing to return int(L/LR) + (1 if L mod LR ≠ 0) times to the start of that single buffered run unit, read LR characters (or maybe fewer on the last return), and repeat until a total of L characters are read. But mirroring the encoding process, since the pattern is repetitive, the read pointer need only trail in sync with the write pointer by a fixed distance equal to the run length LR until L characters have been copied to output in total. LZ77: Considering the above, especially if the compression of data runs is expected to predominate, the window search should begin at the end of the window and proceed backwards, since run patterns, if they exist, will be found first and allow the search to terminate, absolutely if the current maximal matching sequence length is met, or judiciously, if a sufficient length is met, and finally for the simple possibility that the data is more recent and may correlate better with the next input. LZ77: Pseudocode The pseudocode is a reproduction of the LZ77 compression algorithm sliding window. 
LZ77:
while input is not empty do
    match := longest repeated occurrence of input that begins in window
    if match exists then
        d := distance to start of match
        l := length of match
        c := char following match in input
    else
        d := 0
        l := 0
        c := first char of input
    end if
    output (d, l, c)
    discard l + 1 chars from front of window
    s := pop l + 1 chars from front of input
    append s to back of window
repeat
Implementations: Even though all LZ77 algorithms work by definition on the same basic principle, they can vary widely in how they encode their compressed data to vary the numerical ranges of a length–distance pair, alter the number of bits consumed for a length–distance pair, and distinguish their length–distance pairs from literals (raw data encoded as itself, rather than as part of a length–distance pair). A few examples: The algorithm illustrated in Lempel and Ziv's original 1977 article outputs all its data three values at a time: the length and distance of the longest match found in the buffer, and the literal that followed that match. If two successive characters in the input stream could be encoded only as literals, the length of the length–distance pair would be 0. LZ77: LZSS improves on LZ77 by using a 1-bit flag to indicate whether the next chunk of data is a literal or a length–distance pair, and using literals if a length–distance pair would be longer. LZ77: In the PalmDoc format, a length–distance pair is always encoded by a two-byte sequence. Of the 16 bits that make up these two bytes, 11 bits go to encoding the distance, 3 go to encoding the length, and the remaining two are used to make sure the decoder can identify the first byte as the beginning of such a two-byte sequence.
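The pseudocode above can be rendered as a minimal, brute-force Python sketch (illustrative only: a linear window scan with no entropy coding), together with the matching decoder:

```python
# Emit (distance, length, next-char) triples as in the pseudocode above.
def lz77_compress(data: bytes, window_size: int = 4096):
    out, pos = [], 0
    while pos < len(data):
        best_d, best_l = 0, 0
        for cand in range(max(0, pos - window_size), pos):
            l = 0
            # Matches may run past pos, which yields length > distance.
            while pos + l < len(data) - 1 and data[cand + l] == data[pos + l]:
                l += 1
            if l > best_l:
                best_d, best_l = pos - cand, l
        out.append((best_d, best_l, data[pos + best_l]))
        pos += best_l + 1
    return out

def lz77_decompress(triples) -> bytes:
    out = bytearray()
    for d, l, c in triples:
        for _ in range(l):
            out.append(out[-d])  # byte-at-a-time copy handles overlap
        out.append(c)
    return bytes(out)

sample = b"aacaacabcabaaac"
assert lz77_decompress(lz77_compress(sample)) == sample
```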
LZ77: In the implementation used for many games by Electronic Arts, the size in bytes of a length–distance pair can be specified inside the first byte of the length–distance pair itself; depending on whether the first byte begins with a 0, 10, 110, or 111 (when read in big-endian bit orientation), the length of the entire length–distance pair can be 1 to 4 bytes. LZ77: As of 2008, the most popular LZ77-based compression method is DEFLATE; it combines LZSS with Huffman coding. Literals, lengths, and a symbol to indicate the end of the current block of data are all placed together into one alphabet. Distances can be safely placed into a separate alphabet; because a distance only occurs just after a length, it cannot be mistaken for another kind of symbol or vice versa. LZ78: The LZ78 algorithms compress sequential data by building a dictionary of token sequences from the input, and then replacing the second and subsequent occurrences of a sequence in the data stream with a reference to the dictionary entry. The observation is that the number of repeated sequences is a good measure of the non-random nature of a sequence. The algorithms represent the dictionary as an n-ary tree where n is the number of tokens used to form token sequences. Each dictionary entry is of the form dictionary[...] = {index, token}, where index is the index to a dictionary entry representing a previously seen sequence, and token is the next token from the input that makes this entry unique in the dictionary. Note how the algorithm is greedy: nothing is added to the table until a token that makes the current entry unique is found. The algorithm is to initialize last matching index = 0 and next available index = 1 and then, for each token of the input stream, search the dictionary for a match: {last matching index, token}. If a match is found, then last matching index is set to the index of the matching entry, nothing is output, and last matching index is left representing the input so far.
Input is processed until a match is not found. Then a new dictionary entry is created, dictionary[next available index] = {last matching index, token}, and the algorithm outputs last matching index, followed by token, then resets last matching index = 0 and increments next available index. As an example, consider the sequence of tokens AABBA, which would assemble the dictionary:
0 {0,_}
1 {0,A}
2 {1,B}
3 {0,B}
The output sequence of the compressed data would be 0A1B0B. Note that the last A is not represented yet, as the algorithm cannot know what comes next. In practice an EOF marker is added to the input, AABBA$ for example. Note also that in this case the output 0A1B0B1$ is longer than the original input, but the compression ratio improves considerably as the dictionary grows, and in binary the indexes need not be represented by any more than the minimum number of bits. Decompression consists of rebuilding the dictionary from the compressed sequence. From the sequence 0A1B0B1$, the first entry is always the terminator 0 {...}, and the first pair from the sequence gives 1 {0,A}. The A is added to the output. The second pair from the input is 1B and results in entry number 2 in the dictionary, {1,B}. The token "B" is output, preceded by the sequence represented by dictionary entry 1. Entry 1 is an 'A' (followed by "entry 0" - nothing), so AB is added to the output. Next 0B is added to the dictionary as the next entry, 3 {0,B}, and B (preceded by nothing) is added to the output. Finally a dictionary entry for 1$ is created and A$ is output, resulting in A AB B A$, or AABBA once the spaces and EOF marker are removed. LZW: LZW is an LZ78-based algorithm that uses a dictionary pre-initialized with all possible characters (symbols) or emulation of a pre-initialized dictionary.
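The AABBA$ walk-through above can be reproduced with a small sketch (function names are illustrative; the coder emits (index, token) pairs and assumes the input ends with a unique terminator such as $):

```python
# LZ78: build the dictionary while emitting (last matching index, token) pairs.
def lz78_compress(tokens):
    dictionary = {}  # (index, token) -> dictionary index
    out, last, nxt = [], 0, 1
    for t in tokens:
        if (last, t) in dictionary:
            last = dictionary[(last, t)]  # keep extending the current phrase
        else:
            dictionary[(last, t)] = nxt
            out.append((last, t))
            nxt += 1
            last = 0
    return out

def lz78_decompress(pairs):
    phrases = {0: ""}  # index -> decoded phrase; entry 0 is the terminator
    out = []
    for i, (index, token) in enumerate(pairs, start=1):
        phrases[i] = phrases[index] + token
        out.append(phrases[i])
    return "".join(out)

pairs = lz78_compress("AABBA$")
print(pairs)                   # [(0, 'A'), (1, 'B'), (0, 'B'), (1, '$')]
print(lz78_decompress(pairs))  # AABBA$
```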
The main improvement of LZW is that when a match is not found, the current input stream character is assumed to be the first character of an existing string in the dictionary (since the dictionary is initialized with all possible characters), so only the last matching index is output (which may be the pre-initialized dictionary index corresponding to the previous (or the initial) input character). Refer to the LZW article for implementation details. LZW: BTLZ is an LZ78-based algorithm that was developed for use in real-time communications systems (originally modems) and standardized by CCITT/ITU as V.42bis. When the trie-structured dictionary is full, a simple re-use/recovery algorithm is used to ensure that the dictionary can keep adapting to changing data. A counter cycles through the dictionary. When a new entry is needed, the counter steps through the dictionary until a leaf node is found (a node with no dependents). This is deleted and the space re-used for the new entry. This is simpler to implement than LRU or LFU and achieves equivalent performance.
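For contrast with LZ78, a minimal LZW encoder sketch: byte-oriented, with the dictionary pre-initialized with all 256 single bytes, so there is always a match and only indices are output (an illustration of the idea, not a production coder):

```python
# LZW: because every single byte is already in the dictionary, there is
# always a match, and the output is a sequence of dictionary indices only.
def lzw_compress(data: bytes):
    dictionary = {bytes([i]): i for i in range(256)}
    out, phrase = [], b""
    for b in data:
        candidate = phrase + bytes([b])
        if candidate in dictionary:
            phrase = candidate  # keep growing the current match
        else:
            out.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)  # next free index
            phrase = bytes([b])
    if phrase:
        out.append(dictionary[phrase])
    return out

print(lzw_compress(b"ABABABA"))  # [65, 66, 256, 258]
```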
**Voiced epiglottal tap** Voiced epiglottal tap: The voiced epiglottal or pharyngeal tap or flap is not known to exist as a phoneme in any language. However, it exists as the intervocalic voiced allophone of the otherwise voiceless epiglottal stop /ʡ/ of Dahalo and perhaps of other languages. Voiced epiglottal tap: It may also exist in Iraqi Arabic, where the consonant 'ayn is too short to be an epiglottal stop, but has too much of a burst to be a fricative or approximant. There is no dedicated symbol for this sound in the IPA, but it can be transcribed by adding an "extra short" diacritic to the symbol for the stop, ⟨ʡ̆⟩. Features: Its manner of articulation is tap or flap, which means it is produced with a single contraction of the muscles so that one articulator (usually the tongue) is thrown against another. Its place of articulation is epiglottal, which means it is articulated with the aryepiglottic folds against the epiglottis. Its phonation is voiced, which means the vocal cords vibrate during the articulation. It is an oral consonant, which means air is allowed to escape through the mouth only. It is a central consonant, which means it is produced by directing the airstream along the center of the tongue, rather than to the sides. The airstream mechanism is pulmonic, which means it is articulated by pushing air solely with the intercostal muscles and diaphragm, as in most sounds.
**Entropy (energy dispersal)** Entropy (energy dispersal): In thermodynamics, the interpretation of entropy as a measure of energy dispersal has been exercised against the background of the traditional view, introduced by Ludwig Boltzmann, of entropy as a quantitative measure of disorder. The energy dispersal approach avoids the ambiguous term 'disorder'. An early advocate of the energy dispersal conception was Edward A. Guggenheim in 1949, using the word 'spread'. In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature. Entropy (energy dispersal): Some educators propose that the energy dispersal idea is easier to understand than the traditional approach. The concept has been used to facilitate teaching entropy to students beginning university chemistry and biology. Comparisons with traditional approach: The term "entropy" has been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels. Comparisons with traditional approach: Such descriptions have tended to be used together with commonly used terms such as disorder and randomness, which are ambiguous, and whose everyday meaning is the opposite of what they are intended to mean in thermodynamics. Not only does this situation cause confusion, but it also hampers the teaching of thermodynamics.
Students were being asked to grasp meanings directly contradicting their normal usage, with equilibrium being equated to "perfect internal disorder" and the mixing of milk in coffee from apparent chaos to uniformity being described as a transition from an ordered state into a disordered state. The description of entropy as the amount of "mixedupness" or "disorder," as well as the abstract nature of the statistical mechanics grounding this notion, can lead to confusion and considerable difficulty for those beginning the subject. Even though courses emphasised microstates and energy levels, most students could not get beyond simplistic notions of randomness or disorder. Many of those who learned by practising calculations did not understand well the intrinsic meanings of equations, and there was a need for qualitative explanations of thermodynamic relationships. Arieh Ben-Naim recommends abandonment of the word entropy, rejecting both the 'dispersal' and the 'disorder' interpretations; instead he proposes the notion of "missing information" about microstates as considered in statistical mechanics, which he regards as commonsensical. Description: Increase of entropy in a thermodynamic process can be described in terms of "energy dispersal" and the "spreading of energy," while avoiding mention of "disorder" except when explaining misconceptions. All explanations of where and how energy is dispersing or spreading have been recast in terms of energy dispersal, so as to emphasise the underlying qualitative meaning. In this approach, the second law of thermodynamics is introduced as "Energy spontaneously disperses from being localized to becoming spread out if it is not hindered from doing so," often in the context of common experiences such as a rock falling, a hot frying pan cooling down, iron rusting, air leaving a punctured tyre and ice melting in a warm room.
Entropy is then depicted as a sophisticated kind of "before and after" yardstick — measuring how much energy is spread out over time as a result of a process such as heating a system, or how widely spread out the energy is after something happens in comparison with its previous state, in a process such as gas expansion or fluids mixing (at a constant temperature). The equations are explored with reference to the common experiences, with emphasis that in chemistry the energy that entropy measures as dispersing is the internal energy of molecules. Description: The statistical interpretation is related to quantum mechanics in describing the way that energy is distributed (quantized) amongst molecules on specific energy levels, with all the energy of the macrostate always in only one microstate at one instant. Entropy is described as measuring the energy dispersal for a system by the number of accessible microstates, the number of different arrangements of all its energy at the next instant. Thus, an increase in entropy means a greater number of microstates for the final state than for the initial state, and hence more possible arrangements of a system's total energy at any one instant. Here, the greater 'dispersal of the total energy of a system' means the existence of many possibilities. Continuous movement and molecular collisions visualised as being like bouncing balls blown by air as used in a lottery can then lead on to showing the possibilities of many Boltzmann distributions and continually changing "distribution of the instant", and on to the idea that when the system changes, dynamic molecules will have a greater number of accessible microstates.
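The claim that spreading energy over more of a system increases the number of accessible microstates can be made concrete with a toy count. For an Einstein solid, q energy quanta shared among N oscillators can be arranged in C(q+N−1, q) ways; the model and numbers below are illustrative, not taken from the text:

```python
from math import comb

# Number of microstates for q energy quanta shared among n oscillators.
def microstates(q: int, n: int) -> int:
    return comb(q + n - 1, q)

# Six quanta localized in one 10-oscillator block (with an empty second
# block) versus the same six quanta free to spread over all 20 oscillators:
localized = microstates(6, 10) * microstates(0, 10)
spread = microstates(6, 20)
print(localized, spread)  # 5005 177100
```

Letting the energy disperse over the whole system multiplies the count of accessible microstates, which is exactly the sense in which dispersal and entropy increase go together.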
In this approach, all everyday spontaneous physical happenings and chemical reactions are depicted as involving some type of energy flows from being localized or concentrated to becoming spread out to a larger space, always to a state with a greater number of microstates. This approach provides a good basis for understanding the conventional approach, except in very complex cases where the qualitative relation of energy dispersal to entropy change can be so inextricably obscured that it is moot. Thus in situations such as the entropy of mixing when the two or more different substances being mixed are at the same temperature and pressure so there will be no net exchange of heat or work, the entropy increase will be due to the literal spreading out of the motional energy of each substance in the larger combined final volume. Each component's energetic molecules become more separated from one another than they would be in the pure state, when in the pure state they were colliding only with identical adjacent molecules, leading to an increase in its number of accessible microstates. Current adoption: Variants of the energy dispersal approach have been adopted in a number of undergraduate chemistry texts, mainly in the United States. One respected text states: The concept of the number of microstates makes quantitative the ill-defined qualitative concepts of 'disorder' and the 'dispersal' of matter and energy that are used widely to introduce the concept of entropy: a more 'disorderly' distribution of energy and matter corresponds to a greater number of micro-states associated with the same total energy. — Atkins & de Paula (2006): 81 History: The concept of 'dissipation of energy' was used in Lord Kelvin's 1852 article "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy." He distinguished between two types or "stores" of mechanical energy: "statical" and "dynamical."
He discussed how these two types of energy can change from one form to the other during a thermodynamic transformation. When heat is created by any irreversible process (such as friction), or when heat is diffused by conduction, mechanical energy is dissipated, and it is impossible to restore the initial state. Using the word 'spread', an early advocate of the energy dispersal concept was Edward Armand Guggenheim. In the mid-1950s, with the development of quantum theory, researchers began speaking about entropy changes in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels, such as by the reactants and products of a chemical reaction. In 1984, the Oxford physical chemist Peter Atkins, in his book The Second Law, written for laypersons, presented a nonmathematical interpretation of what he called the "infinitely incomprehensible entropy" in simple terms, describing the Second Law of thermodynamics as "energy tends to disperse". His analogies included an imaginary intelligent being called "Boltzmann's Demon," who runs around reorganizing and dispersing energy, in order to show how the W in Boltzmann's entropy formula relates to energy dispersion. This dispersion is transmitted via atomic vibrations and collisions. Atkins wrote: "each atom carries kinetic energy, and the spreading of the atoms spreads the energy…the Boltzmann equation therefore captures the aspect of dispersal: the dispersal of the entities that are carrying the energy.": 78, 79 In 1997, John Wrigglesworth described spatial particle distributions as represented by distributions of energy states. According to the second law of thermodynamics, isolated systems will tend to redistribute the energy of the system into a more probable arrangement or a maximum probability energy distribution, i.e. from that of being concentrated to that of being spread out.
By virtue of the first law of thermodynamics, the total energy does not change; instead, the energy tends to disperse over the space to which it has access. In his 1999 Statistical Thermodynamics, M.C. Gupta defined entropy as a function that measures how energy disperses when a system changes from one state to another. Other authors defining entropy in a way that embodies energy dispersal are Cecie Starr and Andrew Scott. In a 1996 article, the physicist Harvey S. Leff set out what he called "the spreading and sharing of energy." Another physicist, Daniel F. Styer, published an article in 2000 showing that "entropy as disorder" was inadequate. In a 2002 article in the Journal of Chemical Education, Frank L. Lambert argued that portraying entropy as "disorder" is confusing and should be abandoned. He has gone on to develop detailed resources for chemistry instructors, equating entropy increase with the spontaneous dispersal of energy, namely how much energy is spread out in a process, or how widely dispersed it becomes – at a specific temperature.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Torpex Games** Torpex Games: Torpex Games was a game development studio located in Bellevue, Washington, United States. The studio was notable because its video game Schizoid was the first Xbox Live Arcade title to utilize the Microsoft framework XNA Game Studio Express. Torpex Games was founded by industry veterans Bill Dugan and Jamie Fristrom. Games: Schizoid (2008) Bejeweled Blitz LIVE (2011)
**Compound of five small cubicuboctahedra** Compound of five small cubicuboctahedra: This uniform polyhedron compound is a composition of 5 small cubicuboctahedra, in the same vertex arrangement as the compound of 5 small rhombicuboctahedra.
**Low-head hydro power** Low-head hydro power: Low-head hydropower refers to the development of hydroelectric power where the head is typically less than 20 metres, although precise definitions vary. Head is the vertical height measured between the hydro intake water level and the water level at the point of discharge. Using only a low head drop in a river or tidal flows to create electricity may provide a renewable energy source that will have a minimal impact on the environment. Since the generated power (calculated as for general hydropower) is a function of the head, these systems are typically classed as small-scale hydropower, which have an installed capacity of less than 5 MW. Comparison to conventional hydro: Most current hydroelectric projects use a large hydraulic head to power turbines to generate electricity. The hydraulic head either occurs naturally, such as a waterfall, or is created by constructing a dam in a river valley, creating a reservoir. A controlled release of water from the reservoir drives the turbines. The costs and environmental impacts of constructing a dam can make traditional hydroelectric projects unpopular in some countries. From 2010 onwards, innovative, ecologically friendly technologies have evolved and become economically viable. Comparison to conventional hydro: Within low-head hydropower there are several standard situations: Run-of-the-river: Low-head small hydropower can be produced from rivers, often described as run-of-river or run-of-the-river projects. Suitable locations include weirs, streams, locks, rivers and wastewater outfalls. Weirs are common in rivers across Europe, as well as rivers that are canalized or have groynes. Generating significant power from low-head locations using conventional technologies typically requires large volumes of water.
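The statement that generated power is a function of head is conventionally written P = η ρ g Q H, with overall efficiency η, water density ρ, gravitational acceleration g, flow rate Q and head H. A minimal sketch; the 85% overall efficiency is an illustrative assumption, not a value from this article:

```python
def hydro_power_kw(head_m: float, flow_m3s: float, efficiency: float = 0.85) -> float:
    """Hydropower P = eta * rho * g * Q * H, returned in kilowatts.

    Assumes fresh water (rho = 1000 kg/m^3) and g = 9.81 m/s^2; the default
    efficiency of 85% is only illustrative.
    """
    rho, g = 1000.0, 9.81
    return efficiency * rho * g * flow_m3s * head_m / 1000.0

# A 3 m head needs ten times the flow of a 30 m head for the same output,
# which is why low-head sites require large volumes of water:
low_head_kw = hydro_power_kw(head_m=3.0, flow_m3s=20.0)
high_head_kw = hydro_power_kw(head_m=30.0, flow_m3s=2.0)
assert abs(low_head_kw - high_head_kw) < 1e-9  # both about 500 kW
```

Because power scales linearly with both Q and H, halving the head doubles the flow needed for the same output, which drives the large equipment sizes discussed below.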
Due to the low rotational speeds produced, gearboxes are required to efficiently drive generators, which can result in large and expensive equipment and civil infrastructure. Comparison to conventional hydro: Tidal power: In combination with a lagoon or barrage, the tides can be used to create a head difference. The largest tidal range is at the Bay of Fundy, between the Canadian provinces of New Brunswick and Nova Scotia, where it can reach 13.6 m. The first tidal range installation was opened in 1966 at La Rance, France. Low-head pumped seawater storage: These technologies are currently at very low technology readiness levels (TRLs), but in the coming decade they could become part of the energy system. Dynamic tidal power: Another potentially promising type of low-head hydro power is dynamic tidal power, a novel and unapplied method to extract power from tidal movements. Although a dam-like structure is required, no area is enclosed, and therefore most of the benefits of 'damless hydro' are retained, while providing for vast amounts of power generation. Low-head hydro is not to be confused with "free flow" or "stream" technologies, which work solely with the kinetic energy and the velocity of the water. Types of low-head turbines: Turbines suitable for use in very-low-head applications are different from the Francis, propeller, Kaplan, or Pelton types used in more conventional large hydro. Different types of low-head turbines are: Venturi-enhanced turbine: This type of turbine uses venturi principles to achieve a pressure amplification for the turbine so that smaller, faster, no-gearbox turbines can be deployed in low-head hydro settings, without the need for large infrastructure or large watercourses. Water passing through a venturi (a constriction) creates an area of low pressure. A turbine discharging into this area of low pressure then experiences a higher pressure differential, i.e. a higher head. Only ca.
20% of the flow passes through the propeller turbine and therefore requires screening but fish and aquatic life can pass safely through the venturi (80% of the flow), preventing the need for large screens. Venturi turbines can be used at low heads (1.5–5 metres) and medium to high flows (1m3/s–20 m3/s). Multiple turbines can be installed in parallel. Types of low-head turbines: Archimedes screw: Water is fed into the top of the screw forcing it to rotate. The rotating shaft can then be used to drive an electric generator. A gear box is required, since the rotational speed is very low. The screw is used at low heads (1.5–5 metres) and medium to high flows (1 to 20 m3/s). For higher flows, multiple screws are used. Due to the construction and slow movement of the blades of the turbine, the turbine tends to be very large but is considered to be friendly to aquatic wildlife. Types of low-head turbines: Kaplan turbine: This turbine is a propeller-type turbine which has adjustable blades to achieve efficiency over a wide range of heads and flows. The Kaplan can be used at low to medium heads (1.5–20 metres) and medium to high flows (3 m3/s–30 m3/s). For higher flows multiple turbines can be used. They present a risk to aquatic life and in most situations require complete screening. Types of low-head turbines: Cross-flow turbine: Also known as Banki-Mitchell or Ossberger turbines, these devices are used for a large range of hydraulic heads (from 2 to 100 meters) and flow rates (from 0.03 to 20 m3/s), but are more efficient for low heads and low power outputs. They are considered "impulse" turbines, since they get energy from water by reducing its velocity (all hydraulic energy is converted into kinetic energy). They present a high risk to aquatic life and require complete screening. Types of low-head turbines: Water wheel: Water wheels can be used at low heads (1–5 metres) and medium flows (0.3–1.5 m3/s) and are considered safe for aquatic life. 
Gravitation water vortex power plant: This type of hydro power plant uses the power of a gravitation water vortex, which only exists at low head. Environmental impact of low-head hydropower: A number of concerns have been raised about the environmental impacts of river current and tidal devices. Among the most important of these are: Aquatic life. Concerns have been raised about the danger of rotating blades to aquatic life, such as seals and fish. Installations within watercourses can be screened to ensure marine life does not come into contact with any moving parts. After extensive testing and auditing by environmental regulators, a technology can gain certification showing it is safe for smolts, mature fish, eels and marine ecosystems. Environmental impact of low-head hydropower: Bathymetry. By altering wave patterns and tidal streams, devices will undoubtedly have an effect, for example, on the deposition of sediment. Research carried out to date would seem to indicate that the effects would not be significant, and may even be positive, for example by helping to slow down coastal erosion. (This is particularly pertinent in light of evidence that waves have steadily increased in size in the recent past.) The sea in the lee of devices would almost certainly be calmer than normal, but, it has been suggested, this would help in creating more areas for activities such as water sports or yachting. Environmental impact of low-head hydropower: Landscape. In rivers or similar watercourses, sensitive environmental parameters can make planning permissions for hydropower installations difficult. Large infrastructure, and infrastructure visible above water such as Archimedes screw systems and turbine houses, can incur objections. In addition, vibrations and noise levels from gearboxes can cause environmental objections due to feared impact on local wildlife such as otters or birds (for example, at Balmoral Estate, Scotland).
The main impact would probably be from the extensive transmission lines needed to take the energy from the shoreline to final users. This problem would have to be addressed, possibly by using underground transmission lines. Weirs and groynes have historically been used for water management and to permit upstream riverine transportation. Weirs and groynes can have negative effects on river bathymetry and prevent upstream fish migration, which affects local ecology and water levels. By installing low-head hydropower turbines on historic structures, sediment transport can be increased and new fish migration passages created, either through the turbine itself or by installing fish ladders. Environmental impact of low-head hydropower: Where large sites aren't cleared "the vegetation overwhelmed by the rising water decays to form methane – a far worse greenhouse gas than carbon dioxide", particularly in the tropics. Low-head dams and weirs do not produce harmful methane. Both groynes and weirs prevent the transport of silt (sediment) downstream to fertilize fields and to move sediment towards the oceans. Low-head hydropower is typically installed close to areas where the energy is needed, preventing the need for large electrical transmission lines. Implementation and regulations: Government regulation Most government regulation comes from the use of waterways. Most low-head water turbine systems are smaller engineering projects than traditional water turbines. Even so, one needs to obtain permission from state and federal government institutions before implementing these systems [1]. Constraints in larger waterways include ensuring that waterways can still be used by boats and that fish migration routes are not disturbed.
Implementation and regulations: Government subsidies US government subsidies can be obtained for implementation of small-scale hydro facilities most easily through federal grants, namely green energy grants [2]. A specific example is the Renewable Electricity Production Tax Credit. This is a federal tax credit aimed at promoting renewable energy resources. To qualify, the hydro source must have a minimum capacity of 150 kW. This subsidy is given for the first ten years of production. Organizations receive $0.011/kWh [3]. For hydroelectric projects, this subsidy expired on December 31, 2017 [4]. Implementation and regulations: Public perception Since these systems are a sustainable energy source, are not detrimental to the water sources they utilize, and are not an eyesore, they are well regarded within the public sphere [5]. However, there is little public and industrial knowledge of these systems as they are still being tested to "answer real-world questions". As such, proponents and manufacturers of these systems have tried to bring them into public knowledge [6].
**Cetacaine** Cetacaine: Cetacaine is a topical anesthetic that contains the active ingredients benzocaine (14%), butamben (2%), and tetracaine hydrochloride (2%). Cetacaine also contains small amounts of benzalkonium chloride (0.5%) and cetyl dimethyl ethyl ammonium bromide (0.005%), all in a bland water-soluble base. Although Cetacaine has been widely used in the medical and dental fields, it has yet to be officially approved by the FDA. Cetacaine is produced by the company Cetylite Industries, Inc., which provides it in three forms: liquid, gel, and spray. Medical uses: Cetacaine is a benzocaine-based anesthetic that also contains other active ingredients, namely butamben and tetracaine hydrochloride. The main use for this drug is to produce anesthesia of mucous membranes to numb and help control pain in that area. The spray form of Cetacaine is also used to help prevent gagging in the patient. The anesthetic effect of Cetacaine can be expected to take effect in about 30 seconds and last between 30 and 60 minutes depending on location and application amount. Cetacaine can be and has been used for bronchial, ear, esophageal, laryngeal, oral, nasal, pharyngeal, rectal, and vaginal procedures. These procedures can include periodontal treatment, pre-probing, pre-scaling/root planing procedures, pre-injection, and laser dentistry. Medical uses: Available forms The dosage should be applied directly to the site where anesthesia is required. The dosage should be modified according to the patient, and no dosage has been specified for children. Spray: Cetacaine spray should be applied for only one second, and the dosage should not exceed a spray of 2 seconds. Gel: Use a cotton swab to apply 200 mg to the needed area; the dosage should not exceed 400 mg. Liquid: Apply 200 mg either directly or by using a cotton applicator; the dosage should not exceed 400 mg.
Adverse effects: Cetacaine has been known to cause adverse effects in the patients it has been administered to. These include hypersensitivity in the form of anaphylaxis, dermatitis, erythema, and pruritus, which can lead to oozing and vesiculation. There have also been accounts of rashes, edema, urticaria and other allergic symptoms, as well as methemoglobinemia. Other adverse effects can include: tremors, twitching, dizziness, confusion, hypotension, vomiting, euphoria, and blurred or double vision. Adverse effects: Pregnancy and breastfeeding It has not been determined whether Cetacaine has any adverse effects on the formation of the fetus or whether it is transferred through breastfeeding. It is recommended that professional advice be sought in these regards. Pharmacology: Mechanism of action Cetacaine acts quickly, in about 30 seconds, and can last between 30 and 60 minutes. This is due to benzocaine causing the immediate anesthetic effect, while butamben and tetracaine hydrochloride cause the extended effect of Cetacaine. The actual mechanism for the onset of anesthesia is unknown, but it is believed that the active ingredients reversibly block nerve conduction, causing the numbing sensation. This stabilizes the neuron and prevents signals from being transferred. Pharmacology: Pharmacokinetics After absorption through the skin and diffusion into and back out of the nerve membrane, the drug is metabolized by plasma cholinesterase and then excreted in urine. Contraindications: This product should not be used to cover a large area, as this can cause an adverse reaction. The liquid and other forms of Cetacaine should not be administered via injection or used under dentures, on the eyes, or in patients with a cholinesterase deficiency.
Interactions: Cetacaine can interact with other drugs being taken by patients. Interactions with sodium nitrate and with prilocaine can lead to methemoglobinemia, which can cause severe illness or death; other interactions are listed in the referenced source. History: The marketing start date for Cetacaine was January 1, 1960, but benzocaine was first produced in 1890 by the German scientist Eduard Ritsert. Cetacaine is mainly used in the dental field but has seen use as well in the medical field when dealing with small surgeries on or around mucous membranes. Benzocaine-based anesthetics (which include Cetacaine) have come under scrutiny by the FDA. In 2006 the FDA announced that benzocaine-based anesthetics can cause methemoglobinemia and listed warnings and precautions to take when dealing with benzocaine-based drugs. During this time the FDA also began taking many unapproved benzocaine-based drugs off the market and fining the companies behind them. Research: A search of the patent bank shows that the only research around Cetacaine has concerned medical procedures that use it as an anesthetic, or new dispensing containers and methods. One of the few current studies of Cetacaine is the one the FDA is conducting on patients contracting methemoglobinemia from its use. In these studies, 319 cases were reported; of the 319, 32 were considered life-threatening and 3 resulted in death. Economics: Cost effectiveness Cetacaine has been used in the medical and dental fields for a long time. Its main competitors have been benzocaine and other benzocaine-based drugs.
The use of Cetacaine has allowed for faster in and out times for patients, cheaper costs, easier use for the doctors or dentists needing to apply an anesthetic, and better patient compliance (less anxiety). Compared to some of the leading competitors, Cetacaine is considered by most a cheaper option. For the spray option, the bottle containing 56 g can dispense 100 doses and costs the dentist only $0.79 per dose. The liquid Cetacaine that comes in the 30 g bottle can dose 73 full mouths at a cost of about $0.75 per dose. Economics: Manufacturer The company that makes Cetacaine is called Cetylite Industries, Inc. This company is based out of Pennsauken, NJ and has a total of 75 employees. Cetylite brings in total revenue of around $7,500,000, with its main products being topical anesthetics and infection-prevention products.
**Scientific pitch** Scientific pitch: Scientific pitch, also known as philosophical pitch, Sauveur pitch or Verdi tuning, is an absolute concert pitch standard which is based on middle C (C4) being set to 256 Hz rather than 261.62 Hz, making it approximately 37.6 cents lower than the common A440 pitch standard. It was first proposed in 1713 by French physicist Joseph Sauveur, promoted briefly by Italian composer Giuseppe Verdi in the 19th century, then advocated by the Schiller Institute beginning in the 1980s with reference to the composer, but naming a pitch slightly lower than Verdi's preferred 432 Hz for A, and making controversial claims regarding the effects of this pitch. Scientific pitch: Scientific pitch is not used by concert orchestras but is still sometimes favored in scientific writings for the convenience of all the octaves of C being an exact round number in the binary system when expressed in hertz (symbol Hz). The octaves of C remain a whole number in Hz all the way down to 1 Hz in both binary and decimal counting systems. Instead of A above middle C (A4) being set to the widely used standard of 440 Hz, scientific pitch assigns it a frequency of 430.54 Hz.Since 256 is a power of 2, only octaves (factor 2:1) and, in just tuning, higher-pitched perfect fifths (factor 3:2) of the scientific pitch standard will have a frequency of a convenient integer value. With a Verdi pitch standard of A4 = 432 Hz = 24 × 33, in just tuning all octaves (factor 2), perfect fourths (factor 4:3) and fifths (factor 3:2) will have pitch frequencies of integer numbers, but not the major thirds (factor 5:4) nor major sixths (factor 5:3) which have a prime factor 5 in their ratios. 
However scientific tuning implies an equal temperament tuning where the frequency ratio between each half tone in the scale is the same, being the 12th root of 2 (a factor of approximately 1.059463), which is not a rational number: therefore in scientific pitch only the octaves of C have a frequency of a whole number in hertz. History: Concert tuning pitches tended to vary from group to group, and by the 17th century the pitches had been generally creeping upward (i.e. becoming "sharper"). The French acoustic physicist Joseph Sauveur, a non-musician, researched musical pitches and determined their frequencies. He found several frequency values for A4 as presented to him by musicians and their instruments, with A4 ranging from 405 to 421 Hz. (Other contemporary researchers such as Christiaan Huygens, Vittorio Francesco Stancari and Brook Taylor were finding similar and lower values for A4, as low as 383 Hz.) In 1701, Sauveur proposed that all musical pitches should be based on a son fixe (fixed sound), that is, one unspecified note set to 100 Hz, from which all others would be derived. In 1713, Sauveur changed his proposal to one based on C4 set to 256 Hz; this was later called "philosophical pitch" or "Sauveur pitch". Sauveur's push to standardize a concert pitch was strongly resisted by the musicians with whom he was working, and the proposed standard was not adopted. The notion was revived periodically, including by mathematician Sir John Herschel and composer John Pyke Hullah in the mid-19th century, but never established as a standard.In the 19th century, Italian composer Giuseppe Verdi tried to stop the increase in pitch to which orchestras were tuned. In 1874 he wrote his Requiem using the official French standard diapason normal pitch of A4 tuned to 435 Hz. Later, he indicated that 432 Hz would be slightly better for orchestras. One solution he proposed was scientific pitch. 
He had little success. In 1988, Lyndon LaRouche's Schiller Institute initiated a campaign to establish scientific pitch as the classical music concert pitch standard. The Institute called this pitch "Verdi tuning" because of the connection to the famous composer. Even though Verdi tuning uses 432 Hz for A4 and not 430.54, it is said by the Schiller Institute to be derived from the same mathematical basis: 256 Hz for middle C. The Institute's arguments for the tuning included points about historical accuracy and references to Johannes Kepler's treatise on the movement of planetary masses. The Schiller Institute initiative was opposed by opera singer Stefan Zucker. According to Zucker, the Institute offered a bill in Italy to impose scientific pitch on state-sponsored musicians that included provisions for fines and confiscation of all other tuning forks. Zucker has written that he believes the Schiller Institute claims about Verdi tuning are historically inaccurate. Institute followers are reported by Tim Page of Newsday to have stood outside concert halls with petitions to ban the music of Antonio Vivaldi and even to have disrupted a concert conducted by Leonard Slatkin in order to pass out pamphlets titled "Leonard Slatkin Serves Satan".
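The numbers quoted in this article (A4 = 430.54 Hz under scientific pitch, roughly 37.6 cents below A440) follow from equal temperament built on C4 = 256 Hz, and can be checked in a few lines of Python:

```python
import math

C4 = 256.0  # scientific pitch middle C, in Hz

def equal_tempered(semitones_above_c4: int) -> float:
    """Frequency n equal-tempered semitones above C4 (ratio 2**(1/12) per step)."""
    return C4 * 2 ** (semitones_above_c4 / 12)

a4 = equal_tempered(9)  # A above middle C is 9 semitones above C4
cents_below_a440 = 1200 * math.log2(440.0 / a4)

print(round(a4, 2))                # 430.54
print(round(cents_below_a440, 1))  # 37.6
```

Because both pitch standards are related by the same frequency ratio throughout, the 37.6-cent gap between C4 = 256 Hz and C4 = 261.62 Hz is the same as the gap between A4 = 430.54 Hz and A4 = 440 Hz.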
**Dragsaw** Dragsaw: A dragsaw or drag saw is a large reciprocating saw using a long steel crosscut saw to buck logs to length. Prior to the popularization of the chainsaw during World War II, the dragsaw was a popular means of taking the hard work out of cutting wood. They only worked on logs lying on the ground. Dragsaws are known as the first mechanical saws to be used in timber industry operations. These tools were most useful in the logging business, because they were efficient and very resilient. Not to be confused with a steam donkey. History: Early dragsaws of the modern era were powered by levers or foot pedals, then by steam, and later by gasoline. The later steam-powered dragsaw was most commonly used in the logging industry, rather than merely for clearing land, due to its versatility. Many of the basic design principles from early dragsaws still apply to current products in the industry today. The inventor Robert G. Moores hypothesized that early non-mechanical versions of dragsaws may have been used to cut stone in the Fourth Dynasty, Egypt, with copper saws suspended from ropes and advancing into the stone blocks by gravity. Types: Human-powered The human-powered dragsaw was much more commonly used among the general population due to its relatively low cost for the efficiency it offered. Human-powered dragsaws were often driven by a lever that the operator used to move the saw blade with much less effort. Other common formats included foot pedals or treadles. These allowed for greater maneuverability when clearing a tree. Types: Engine-powered Gasoline- or kerosene-powered drag saws were popular from the 1910s to the 1940s, when chainsaws became preferable. They usually ran at about 90 saw strokes per minute. Most gasoline-engine-powered dragsaws were made in Portland, Oregon, United States. Steam-powered dragsaws utilized a piston hooked directly to the saw blade. The boiler was separate for easier portability.
"They were very reliable and very rugged and were significantly more efficient than cutting (bucking) by hand." Some engine-powered dragsaws used a separate engine and were geared to a pulley. Types: Manufacturers: Ottawa: Among the first engine-powered drag saw companies. Saws used a four-cycle hit 'n miss engine, usually equipped with an angled water-hopper. Direct gear drive. Wolf Iron Works: Small machine shop that made saws under its own Timber Wolf name, as well as the Ward Sawer for Montgomery Wards. Saws were two-cycle, chain driven and had a round gas tank that contained the radiator. The factory today resides at Powerland Heritage Park. Vaughn: Vaughn made steam-powered drag saws. They also made drag saws of similar design to the Timber Wolf between 1909 and 1948. After drag saws lost popularity, Vaughn made tracked tractors. Multnomah: Named for the county that Portland, Oregon is in. Wee McGregor: R.M. Wade. Preservation: Engine enthusiasts and vintage logging machinery collectors have restored many examples of engine-powered dragsaws. Restored saws can be seen at some steam fairs.
**Recombination detection program** Recombination detection program: The Recombination detection program (RDP) is a computer program used to analyse nucleotide sequence data and identify evidence of genetic recombination. Besides applying a large number of different recombination detection methods it also implements various phylogenetic tree construction methods and recombination hotspot tests. The latest version is RDP4.
**Ship model basin** Ship model basin: A ship model basin is a basin or tank used to carry out hydrodynamic tests with ship models, for the purpose of designing a new (full sized) ship, or refining the design of a ship to improve the ship's performance at sea. It can also refer to the organization (often a company) that owns and operates such a facility. An engineering firm acts as a contractor to the relevant shipyards, and provides hydrodynamic model tests and numerical calculations to support the design and development of ships and offshore structures. History: The eminent English engineer William Froude published a series of influential papers on ship designs for maximising stability in the 1860s. The Institution of Naval Architects eventually commissioned him to identify the most efficient hull shape. He validated his theoretical models with extensive empirical testing, using scale models for the different hull dimensions. He established a formula (now known as the Froude number) by which the results of small-scale tests could be used to predict the behaviour of full-sized hulls. He built a sequence of 3, 6 and 12 foot scale models and used them in towing trials to establish resistance and scaling laws. His experiments were later vindicated in full-scale trials conducted by the Admiralty and as a result the first ship model basin was built, at public expense, at his home in Torquay. Here he was able to combine mathematical expertise with practical experimentation to such good effect that his methods are still followed today. Inspired by Froude's successful work, shipbuilding company William Denny and Brothers completed the world's first commercial example of a ship model basin in 1883. The facility was used to test models of a variety of vessels and explored various propulsion methods, including propellers, paddles and vane wheels.
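Froude's scaling law mentioned above can be sketched numerically: a model and the full-sized hull behave comparably when towed at the same Froude number Fn = V / sqrt(g L), which fixes the required towing speed for a given scale. The ship dimensions and speeds below are illustrative assumptions, not figures from this article:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(speed_ms: float, length_m: float) -> float:
    """Froude number Fn = V / sqrt(g * L)."""
    return speed_ms / math.sqrt(G * length_m)

def model_speed(ship_speed_ms: float, scale: float) -> float:
    """Towing speed at which a 1:scale model matches the full-scale Froude number."""
    return ship_speed_ms / math.sqrt(scale)

# Illustrative: a 100 m ship at 10 m/s and its 1:25 (4 m long) model towed
# at 2 m/s share the same Froude number, so towing-tank resistance results
# can be scaled up to the full-sized hull.
ship_fn = froude_number(10.0, 100.0)
model_fn = froude_number(model_speed(10.0, 25.0), 100.0 / 25.0)
assert abs(ship_fn - model_fn) < 1e-9
```

This is why towing carriages (described under Test facilities below) run at much lower speeds than the ships they represent: the required model speed falls with the square root of the scale factor.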
Experiments were carried out on models of the Denny-Brown stabilisers and the Denny hovercraft to gauge their feasibility. Tank staff also carried out research and experiments for other companies: Belfast-based Harland & Wolff decided to fit a bulbous bow on the liner Canberra after successful model tests in the Denny Tank. Test facilities: The hydrodynamic test facilities present at a model basin site include at least a towing tank, a cavitation tunnel and workshops. Some ship model basins have further facilities such as a maneuvering and seakeeping basin and an ice tank. Test facilities: Towing tank A towing tank is a basin, several metres wide and hundreds of metres long, equipped with a towing carriage that runs on two rails on either side. The towing carriage can either tow the model or follow the self-propelled model, and is equipped with computers and devices to register or control, respectively, variables such as speed, propeller thrust and torque, rudder angle etc. The towing tank serves for resistance and propulsion tests with towed and self-propelled ship models to determine how much power the engine will have to provide to achieve the speed laid down in the contract between shipyard and ship owner. The towing tank also serves to determine the maneuvering behaviour at model scale. For this, the self-propelled model is exposed to a series of zig-zag maneuvers at different rudder angle amplitudes. Post-processing of the test data by means of system identification results in a numerical model to simulate any other maneuver, such as the Dieudonné spiral test or turning circles. Additionally, a towing tank can be equipped with a PMM (planar motion mechanism) or a CPMC (computerized planar motion carriage) to measure the hydrodynamic forces and moments on ships or submerged objects under the influence of oblique inflow and enforced motions.
The towing tank can also be equipped with a wave generator to carry out seakeeping tests, either by simulating natural (irregular) waves or by exposing the model to a wave packet. These tests yield a set of statistics known as response amplitude operators (RAOs), which determine the ship's likely real-life sea-going behavior when operating in seas with varying wave amplitudes and frequencies (these parameters being known as sea states). Modern seakeeping test facilities can determine these RAO statistics, with the aid of appropriate computer hardware and software, in a single test. Test facilities: Cavitation tunnel A cavitation tunnel is used to investigate propellers. This is a vertical water circuit with large-diameter pipes. At the top, it carries the measuring facilities. A parallel inflow is established. With or without a ship model, the propeller, attached to a dynamometer, is brought into the inflow, and its thrust and torque are measured at different ratios of propeller speed (number of revolutions) to inflow velocity. A stroboscope synchronized with the propeller speed serves to visualize cavitation, making the cavitation bubbles appear stationary. This allows one to observe whether the propeller would be damaged by cavitation. To ensure similarity to the full-scale propeller, the pressure is lowered and the gas content of the water is controlled. Test facilities: Workshops Ship model basins manufacture their ship models from wood or paraffin with a computerized milling machine. Some of them also manufacture their model propellers. Equipping the ship models with all drives and gauges, and manufacturing equipment for non-standard model tests, are the main tasks of the workshops. Test facilities: Maneuvering and seakeeping basin This is a test facility that is wide enough to investigate arbitrary angles between waves and the ship model, and to perform maneuvers like turning circles, for which the towing tank is too narrow.
However, some important maneuvers like the spiral test still require even more space and still have to be simulated numerically after system identification. Test facilities: Ice tank An ice tank is used to develop ice-breaking vessels; it fulfills similar purposes for these as the towing tank does for open-water vessels. Resistance and required engine power, as well as maneuvering behaviour, are determined depending on the ice thickness. Ice forces on offshore structures can also be determined. Ice layers are frozen with a special procedure to scale down the ice crystals to model scale. Software: Additionally, these companies or authorities have CFD software and the experience to simulate the complicated flow around ships and their rudders and propellers numerically. The current state of the art does not yet allow CFD calculations to replace model tests entirely. One reason, though not the only one, is that elementization (mesh generation) is still expensive. The lines design of some ships is also carried out by the specialists of the ship model basin, either from the beginning or by optimizing the initial design obtained from the shipyard. The same applies to the design of propellers. Examples: The ship model basins worldwide are organized in the ITTC (International Towing Tank Conference) to standardize their model test procedures. Some of the most significant ship model basins are: Denny Tank in Dumbarton, Scotland. The Denny Tank was the first commercial ship model basin. Current Meter Rating Trolly, CMC Division, CWPRS Pune, India SINTEF Ocean, towing tank, ocean basin, cavitation tunnel in Trondheim, Norway High speed towing tank - Wolfson Unit MTIA - specialists in high performance power and sail.
Examples: David Taylor Model Basin and the Davidson Laboratory at the Carderock Division of the Naval Surface Warfare Center in the United States High Speed Towing Tank facility at Naval Science and Technological Labs at Vizag, India The Institute for Ocean Technology in St. John's, Canada FORCE Technology in Lyngby, Denmark SSPA in Gothenburg, Sweden Laboratory of Naval and Oceanic Engineering of the Institute for Technological Research of São Paulo in São Paulo, Brazil. Examples: Maritime Research Institute Netherlands (MARIN) in Wageningen, the Netherlands CNR-INSEAN in Rome, Italy University of Naples Federico II in Naples, Italy SVA Potsdam in Potsdam, Germany HSVA in Hamburg, Germany "Bassin d'essai des carènes" in Val de Reuil, France CEHIPAR in Madrid, Spain CTO S.A. in Gdansk, Poland FloWaveTT in Edinburgh, Scotland Krylov State Research Centre (Крыловский государственный научный центр) in Saint Petersburg, Russia National Maritime Research Institute (NMRI) in Tokyo, Japan China Ship Scientific Research Center (CSSRC) in Wuxi, China Rosa Röhre in Berlin, Germany
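The thrust and torque measured in the cavitation tunnel at different ratios of propeller speed to inflow velocity are conventionally reduced to non-dimensional open-water coefficients. A minimal sketch of those standard definitions (the water density and the sample figures are assumptions for illustration, not values from the text):

```python
import math

RHO = 998.0  # fresh water density, kg/m^3 (assumed)

def advance_ratio(v_inflow, n_rps, diameter):
    """Advance ratio J = V / (n * D)."""
    return v_inflow / (n_rps * diameter)

def thrust_coefficient(thrust, n_rps, diameter):
    """KT = T / (rho * n^2 * D^4)."""
    return thrust / (RHO * n_rps**2 * diameter**4)

def torque_coefficient(torque, n_rps, diameter):
    """KQ = Q / (rho * n^2 * D^5)."""
    return torque / (RHO * n_rps**2 * diameter**5)

def open_water_efficiency(j, kt, kq):
    """Open-water efficiency eta_0 = (J / 2 pi) * (KT / KQ)."""
    return j * kt / (2 * math.pi * kq)

# Hypothetical model propeller: D = 0.25 m, n = 15 rps, V = 2.0 m/s,
# measured thrust 200 N and torque 8 N·m on the dynamometer.
j = advance_ratio(2.0, 15.0, 0.25)
eta = open_water_efficiency(
    j,
    thrust_coefficient(200.0, 15.0, 0.25),
    torque_coefficient(8.0, 15.0, 0.25),
)
```

Plotting KT, KQ, and eta_0 against J over a sweep of speed ratios gives the familiar open-water diagram from which the full-scale propeller's operating point is chosen.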
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Orgy** Orgy: In modern usage, an orgy is a sex party where guests freely engage in open and unrestrained sexual activity or group sex. Orgy: Swingers' parties do not always conform to this designation, because at many swinger parties the sexual partners may all know each other or at least have some commonality among economic class, educational attainment or other shared attributes. Some swingers contend that an orgy, as opposed to a sex party, requires some anonymity of sexual partners in complete sexual abandon. Other kinds of "sex party" may fare less well with this labeling. Orgy: Participation in an "orgy" is a common sexual fantasy, and group sex targeting such consumers is a subgenre in pornographic films. The term is also used metaphorically in expressions, such as an "orgy of colour" or an "orgy of destruction" to indicate excess, overabundance. The term "orgiastic" does not generally connote group sex and is closer to the classical roots and this metaphorical usage. Ancient orgia: In ancient Greek religion, orgia (ὄργια, sing. ὄργιον, orgion) were ecstatic rites characteristic of the Greek and Hellenistic mystery religions. Unlike public religion, or the private religious practices of a household, the mysteries were open only to initiates, and were thus "secret". Some rites were held at night. Orgia were part of the Eleusinian Mysteries, the Dionysian Mysteries, and the cult of Cybele, which involved the castration of her priests in a frenzied trance. Because of their secret, nocturnal, and unscripted nature, the orgia were subject to prurient speculation and regarded with suspicion, particularly by the Romans, who attempted to suppress the Bacchanals in 186 BC. Orgia are popularly thought to have involved sex, but, while sexuality and fertility were cultic concerns, the primary goal of the orgia was to achieve an ecstatic union with the divine. The Adamites were also accused of participating in orgies. 
In films: Orgy scenes are featured in various films, including Caligula, Bachelor Party, Zoolander 2, Eyes Wide Shut, Sausage Party, and The Wolf of Wall Street.
**Google Charts** Google Charts: Google Charts is an interactive Web service that creates graphical charts from user-supplied information. The user supplies data and a formatting specification expressed in JavaScript embedded in a Web page; in response the service sends an image of the chart.
**Spekesild** Spekesild: Spekesild (Norwegian for raw herring pickled in salt) is Atlantic herring preserved using salt. Salt curing: The preservation works by the salt extracting water from the herring, creating poorer growth conditions for microbes. Until the 1960s, herring was an important export item for Norway, but the decline in the herring fisheries led these exports to stagnate sharply. In the 1990s, exports picked up somewhat, and Russia, Sweden and Poland are important markets. Desalination: The salted herring is soaked in freshwater for one and a half to two hours for desalination before eating. Important food resource: In Norway, spekesild was for hundreds of years considered a poor man's diet that kept hunger away. A traditional Norwegian dish serves salted herring (spekesild) with boiled potatoes, raw onions, dill, pickled beetroots, butter or crème fraîche, and flatbrød.
**Disbarment** Disbarment: Disbarment, also known as striking off, is the removal of a lawyer from a bar association or the practice of law, thus revoking their law license or admission to practice law. Disbarment is usually a punishment for unethical or criminal conduct but may also be imposed for incompetence or incapacity. Procedures vary depending on the law society; temporary disbarment may be referred to as suspension. Australia: In Australia, states regulate the Legal Profession under state law, despite many participating in a uniform scheme. Admission as a lawyer is the business of the admissions board and the Supreme Court. Disciplinary proceedings may be commenced by the Bar Association or the Law Society of which one is a member, or the board itself. Germany: In Germany, a Berufsverbot is a ban on practicing a profession, which the government can issue to a lawyer for misconduct, Volksverhetzung or for serious mismanagement of personal finances. In April 1933, the Nazi government issued a Berufsverbot forbidding the practice of law by Jews, Communists, and other political opponents, except for those protected by the Frontkämpferprivileg. United Kingdom: In the United Kingdom, the removal of the licence to practise of a barrister or Scottish advocate is called being "disbarred", whilst the removal of a solicitor from the rolls in England and Wales, Scotland, or Northern Ireland is called being "struck off". United States: Overview Generally, disbarment is imposed as a sanction for conduct indicating that an attorney is not fit to practice law, willfully disregarding the interests of a client, commingling funds, or engaging in fraud which impedes the administration of justice. In some states, any lawyer who is convicted of a felony is automatically suspended pending further disciplinary proceedings, or, in New York, automatically disbarred. 
Automatic disbarment, although opposed by the American Bar Association, has been described as a convicted felon's just deserts. In the United States legal system, disbarment is specific to regions; one can be disbarred from some courts, while still being a member of the bar in another jurisdiction. However, under the American Bar Association's Model Rules of Professional Conduct, which have been adopted in most states, disbarment in one state or court is grounds for disbarment in a jurisdiction which has adopted the Model Rules. United States: Disbarment is quite rare: in 2011, only 1,046 lawyers were disbarred. Instead, lawyers are usually sanctioned by their own clients through civil malpractice proceedings, or via fine, censure, suspension, or other punishments from the disciplinary boards. To be disbarred is considered a great embarrassment and shame, even if one no longer wishes to continue a career in law. Because disbarment rules vary by area, different rules can apply depending on where a lawyer is disbarred. Notably, the majority of US states have no procedure for permanently disbarring a person. Depending on the jurisdiction, a lawyer may reapply to the bar immediately, after five to seven years, or be banned for life. Notable U.S. disbarments The 20th and the 21st centuries have seen one former U.S. president and one former U.S. vice president disbarred, and another former president suspended from one bar and caused to resign from another bar rather than face disbarment. Former vice president Spiro Agnew, having pleaded no contest (which subjects a person to the same criminal penalties as a guilty plea, but is not an admission of guilt for a civil suit) to charges of bribery and tax evasion, was disbarred from Maryland, the state of which he had previously been governor. United States: Former president Richard Nixon was disbarred from New York in 1976 for obstruction of justice related to the Watergate scandal.
He had attempted to resign from the New York bar, as he had done with California and the Supreme Court, but his resignation was not accepted as he would not acknowledge that he was unable to defend himself from the charges brought against him. In 2001, following a 5-year suspension by the Arkansas bar, the United States Supreme Court suspended Bill Clinton, providing 40 days for him to contest disbarment. He resigned before the end of the 40-day period, thus avoiding disbarment. Alger Hiss was disbarred for a felony conviction, but later became the first person reinstated to the bar in Massachusetts after disbarment. In 2007, Mike Nifong, the District Attorney of Durham County, North Carolina who presided over the 2006 Duke University lacrosse case, was disbarred for prosecutorial misconduct related to his handling of the case. In April 2012, a three-member panel appointed by the Arizona Supreme Court voted unanimously to disbar Andrew Thomas, former County Attorney of Maricopa County, Arizona, and a former close confederate of Maricopa County Sheriff Joe Arpaio. According to the panel, Thomas "outrageously exploited power, flagrantly fostered fear, and disgracefully misused the law" while serving as Maricopa County Attorney. The panel found "clear and convincing evidence" that Thomas brought unfounded and malicious criminal and civil charges against political opponents, including four state judges and the state attorney general. "Were this a criminal case," the panel concluded, "we are confident that the evidence would establish this conspiracy beyond a reasonable doubt." Jack Thompson, the Florida lawyer noted for his activism against Howard Stern, video games, and rap music, was permanently disbarred for various charges of misconduct. The action was the result of several grievances claiming that Thompson had made defamatory, false statements and attempted to humiliate, embarrass, harass or intimidate his opponents.
The order was made on September 25, 2008, effective October 25. However, Thompson attempted to appeal to the higher courts in order to avoid the penalty actually taking effect. Neither the US District Court nor the US Supreme Court would hear his appeal, rendering the judgment of the Florida Supreme Court final. United States: Ed Fagan, a New York lawyer who prominently represented Holocaust victims against Swiss banks, was disbarred in New York (in 2008) and New Jersey (in 2009) for failing to pay court fines and fees, and for misappropriating client and escrow trust funds. F. Lee Bailey, noted criminal defense attorney, was disbarred by the state of Florida in 2001, with reciprocal disbarment in Massachusetts in 2002. The Florida disbarment was the result of his handling of stock in the DuBoc marijuana case. Bailey was found guilty of 7 counts of attorney misconduct by the Florida Supreme Court. Bailey had transferred a large portion of DuBoc's assets into his own accounts, using the interest gained on those assets to pay for personal expenses. In March 2005, Bailey filed to regain his law license in Massachusetts. The book Florida Pulp Nonfiction details the peculiar facts of the DuBoc case along with extended interviews with Bailey that include his own defense. Bailey is perhaps best known for representing murder suspect O. J. Simpson in 1994.
**Decorticator** Decorticator: A decorticator (from Latin: cortex, bark) is a machine for stripping the skin, bark, or rind off nuts, wood, plant stalks, grain, etc., in preparation for further processing. History: In 1933, a farmer named Bernagozzi from Bologna manufactured a machine called a "scavezzatrice", a decorticator for hemp. A working hemp decorticator from 1890, manufactured in Germany, is preserved in a museum in Bologna. In Italy, the "scavezzatrice" faded in the 1950s because of pressure from fossil-fuel, paper, and synthetic-materials interests, and competition from other crops. History: Many types of decorticators have been developed since 1890. In 1919, George Schlichten received a U.S. patent on his improvements of the decorticator for treating fiber-bearing plants. Schlichten failed to find investors for production of his decorticator and died in 1923, a broken man. His business was revived in 1933, a decade after his death. Newer high-speed kinematic decorticators use a different mechanism, enabling separation into three streams: bast fibre, hurd, and green microfiber. Current usage: In some decorticators, the operation is "semi-automatic", featuring several stops during operation, while more modern systems, such as high-speed kinematic decorticators, are fully automatic. There are companies who produce and sell decorticators for different crops.
**Sports visor** Sports visor: A sports visor, also called a sun visor or visor cap, is a type of crownless hat consisting simply of a visor or brim with a strap or buckle encircling the head. The top of the head is not covered, and the visor protects only the face, including the eyes, nose, and cheeks, from the sun. Sports visor: The visor portion of a sun visor may be either curved or flat, and the strap is often equipped with an adjustable Velcro fastener at the back. The strap can function as a sweatband, although it usually does not. This type of headgear was designed for use in outdoor sports (especially golf, tennis, volleyball, and softball) where eye protection from direct sunlight is desirable, while the missing crown allows for ventilation. It is now often used by non-athletes at beach and other sunny outdoor events.
**Heathkit** Heathkit: Heathkit is the brand name of kits and other electronic products produced and marketed by the Heath Company. The products over the decades have included electronic test equipment, high fidelity home audio equipment, television receivers, amateur radio equipment, robots, electronic ignition conversion modules for early model cars with point style ignitions, and the influential Heath H-8, H-89, and H-11 hobbyist computers, which were sold in kit form for assembly by the purchaser. Heathkit: Heathkit manufactured electronic kits from 1947 until 1992. After closing that business, the Heath Company continued with its products for education, and motion-sensor lighting controls. The lighting control business was sold around 2000. The company announced in 2011 that they were reentering the kit business after a 20-year hiatus but then filed for bankruptcy in 2012, and under new ownership began restructuring in 2013. As of 2022, the company has a live website with newly designed products, services, vintage kits, and replacement parts for sale. Founding: The Heath Company was founded as an aircraft company in 1911 by Edward Bayard Heath with the purchase of Bates Aeroplane Co, soon renamed to E.B. Heath Aerial Vehicle Co. Starting in 1926 it sold a light aircraft, the Heath Parasol, in kit form. Heath died during a 1931 test flight. The company reorganized and moved from Chicago to Niles, Michigan. In 1935, Howard Anthony purchased the then-bankrupt Heath Company, and focused on selling accessories for small aircraft. After World War II, Anthony decided that entering the electronics industry was a good idea, and bought a large stock of surplus wartime electronic parts with the intention of building kits with them.
In 1947, Heath introduced its first electronic kit, the O1 oscilloscope, with a 5-inch cathode-ray tube (CRT) display, that sold for US$39.50 (equivalent to $518 in 2022) – the price was unbeatable at the time, and the oscilloscope went on to be a huge seller. Heathkit product concept: After the success of the oscilloscope kit, Heath went on to produce dozens of Heathkit products. Heathkits were influential in shaping two generations of electronic hobbyists. The Heathkit sales premise was that by investing the time to assemble a Heathkit, the purchasers could build something comparable to a factory-built product at a significantly lower cash cost and, if it malfunctioned, could repair it themselves. During those decades, the premise was basically valid.: 141 Commercial factory-built electronic products were constructed from generic, discrete components such as vacuum tubes, tube sockets, capacitors, inductors, and resistors, mostly hand-wired and assembled using point-to-point construction technology. The home kit-builder could perform these labor-intensive assembly tasks himself, and if careful, attain at least the same standard of quality. In the case of Heathkit's most expensive product at the time, the Thomas electronic organ, building the kit version represented substantial savings. Heathkit product concept: One category in which Heathkit enjoyed great popularity was amateur radio. Ham radio operators had frequently been forced to build their equipment from scratch before the advent of kits, with the difficulty of procuring all the parts separately and relying on often-experimental designs. Kits brought the convenience of all parts being supplied together, with the assurance of a predictable finished product; many Heathkit model numbers became well known in the ham radio community.
The HW-101 HF transceiver became so ubiquitous that even today the "Hot Water One-Oh-One" can be found in use, or purchased as used equipment at hamfests, decades after it went out of production. Heathkit product concept: In the case of electronic test equipment, Heathkits often filled a low-end entry-level niche, giving hobbyists access at an affordable price. The instruction books were regarded as among the best in the kit industry, being models of clarity, beginning with basic lessons on soldering technique, and proceeding with explicit step-by-step directions, illustrated with numerous line drawings; the drawings could be folded out to be visible next to the relevant text (which might be bound several pages away) and were aligned with the assembler's viewpoint. Also in view was a checkbox to mark with a pencil as each task was accomplished.: 146–147  The instructions usually included complete schematic diagrams, block diagrams depicting different subsystems and their interconnections, and a "Theory of Operation" section that explained the basic function of each section of the electronics.: 146–147 Heathkits as education: No knowledge of electronics was needed to assemble a Heathkit. The assembly process itself did not teach much about electronics, but provided a great deal of what could have been called basic "electronics literacy", such as the ability to identify tube pin numbers or to read a resistor color code. Many hobbyists began by assembling Heathkits, became familiar with the appearance of components like capacitors, transformers, resistors, and tubes, and were motivated to understand just what these components actually did. For those builders who had a deeper knowledge of electronics (or for those who wanted to be able to troubleshoot/repair the product in the future), the assembly manuals usually included a detailed "Theory of Operation" chapter, which explained the functioning of the kit's circuitry, section by section.
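The "electronics literacy" described above, such as reading a resistor color code, amounts to a small lookup-and-multiply rule. A toy decoder, purely illustrative (not from any Heathkit manual):

```python
# Digit value of each standard resistor color band.
COLOR_DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9,
}

def decode_resistor(band1, band2, multiplier):
    """Value in ohms from the first three color bands:
    two significant digits followed by a power-of-ten multiplier."""
    digits = COLOR_DIGITS[band1] * 10 + COLOR_DIGITS[band2]
    return digits * 10 ** COLOR_DIGITS[multiplier]

# yellow-violet-red reads as 47 x 10^2 = 4700 ohms (4.7 kilohm)
value = decode_resistor("yellow", "violet", "red")
```

A fourth band, when present, gives the tolerance; that is omitted here for brevity.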
Heath developed a business relationship with electronics correspondence schools (e.g., NRI and Bell & Howell), and supplied electronic kits to be assembled as part of their courses, with the schools basing their texts and lessons around the kits. In the 1960s, Heathkit marketed a line of its electronic instruments which had been modified for use in teaching physics at the high school (Physical Science Study Committee, PSSC) and college levels (Berkeley Physics Course).: 149 Heathkits could teach deeper lessons. "The kits taught Steve Jobs that products were manifestations of human ingenuity, not magical objects dropped from the sky", writes a business author, who goes on to quote Jobs as saying "It gave a tremendous level of self-confidence, that through exploration and learning one could understand seemingly very complex things in one's environment." Diversification: After the death of Howard Anthony in a 1954 airplane crash, his widow sold the company to Daystrom Company, a management holding company that also owned several other electronics companies.: 147  Daystrom was absorbed by oilfield service company Schlumberger Limited in 1962, and the Daystrom/Schlumberger days were to be among Heathkit's most successful.: 148 Those years saw some "firsts" in the general consumer market. The early 1960s saw the introduction of the AA-100 integrated amplifier. The early 1970s saw Heath introduce the AJ-1510, an FM tuner using digital synthesis, the GC-1005 digital clock, and the GR-2000 color television set. In 1974, Heathkit started "Heathkit Educational Systems", which expanded their manuals into general electronics and computer training materials. Heathkit also expanded their expertise into digital and, eventually, computerized equipment, producing among other things digital clocks and weather stations with the new technology. Kits were compiled in small batches mostly by hand, using roller conveyor lines. These lines were put up and taken down as needed.
Some kits were sold completely "assembled and tested" in the factory. These models were differentiated with a "W" suffix after the model number, indicating that they were factory-wired. Diversification: For much of Heathkit's history, there were competitors. In electronic kits: Allied Radio, an electronic parts supply house, had its KnightKits, Lafayette Radio offered some kits, Radio Shack made a few forays into this market with its Archerkit line, Dynaco made its audio products available in kit form (Dynakits), as did H. H. Scott, Inc., Fisher, and Eico; and later such companies as Southwest Technical Products and the David Hafler Company. Personal computers: Before entering the burgeoning home computer market, Heathkit marketed and sold microprocessor-based systems aimed at learning about this technology. The ET-3400, for example, was released in 1976 and was based on the Motorola 6800 microprocessor. This system included 256 bytes of RAM, a 1 KB monitor in ROM, and a keypad for easy entry and modification of programs. Despite being a small trainer kit, it was powerful and flexible enough to be used in rudimentary control systems. In 1978, Heathkit introduced the Heathkit H8 home computer. The H8 was very successful, as were the H19 and H29 terminals, and the H89 "All in One" computer. The H8 and H89 ran the Heathkit custom operating system HDOS as well as the popular CP/M operating system. The H89 contained two Zilog Z80 8-bit processors, one for the computer and one for the built-in H-19 terminal. The H11, a low-end DEC LSI-11 16-bit computer, was less successful, probably because it was substantially more expensive than the 8-bit computer line. Personal computers: Seeing the potential in personal computers, Zenith Radio Company bought Heath Company from Schlumberger in 1979 for $63 million, renaming the computer division Zenith Data Systems (ZDS). Zenith purchased Heath for the flexible assembly line infrastructure at the nearby St.
Joseph facility as well as the R&D assets.: 151 Heath/Zenith was among the first companies to sell personal computers to small businesses. The H-89 kit was re-branded as the Zenith Z-89/Z-90, an assembled all-in-one system with a monitor and a floppy disk drive. They had agreements with Peachtree Software to sell a customized "turn-key" version of their accounting, CPA, and real estate management software. Shortly after the release of the Z-90, they released a 5MB hard disk unit and double-density external floppy disk drives, which were much more practical for business data storage than punched paper tapes. Personal computers: While the H11 was popular with hard-core hobbyists, Heath engineers realized that DEC's low-end PDP-11 microprocessors would not be able to get Heath up the road to more powerful systems at an affordable price. Heath/Zenith then designed a dual Intel 8085/8088-based system dubbed the H100 (or Z-100, in preassembled form, sold by ZDS). The machine featured advanced (for the day) bit-mapped video that allowed up to 640 × 225 pixels of 8-color graphics. The H100 was interesting in that it could run either the CP/M operating system, or their OEM version of MS-DOS named Z-DOS, which were the two leading business PC operating systems at the time. Although the machine had to be rebooted to change modes, the competing operating systems could read each other's disks. Personal computers: In 1982 Heath introduced the Hero-1 robot kit to teach principles of industrial robotics. The robot included a Motorola 6808 processor, ultrasonic sensor, and optionally a manipulator arm; the complete robot could be purchased assembled for $2495 or a basic kit without the arm purchased for $999. This was the first in a popular series of HeathKit robot kits sold to educational and hobbyist users.
Kit era comes to a close: While Heath/Zenith's computer business was successful, the growing popularity of home computers as a hobby hurt the company because many customers began writing computer programs instead of assembling Heathkits. Also, while their assembly was still an interesting and educational hobby, kits were no longer less expensive than preassembled products; BYTE reported in 1984 that the kit version of the Z-150 IBM PC compatible cost $100 more than the preassembled computer from some dealers, but needed about 20 hours and soldering skills to assemble. The continuation of the integration trend (printed circuit boards, integrated circuits, etc.), and mass production of electronics (especially computer manufacturing overseas and plug-in modules) eroded the basic Heathkit business model. Assembling a kit might still be fun, but it could no longer save much money. The switch to surface mount components and LSI ICs finally made it impossible for the home assembler to construct an electronic device for significantly less money than assembly line factory products.: 152–153 As sales of its kits dwindled during the decade, Heath relied on its training materials and a new venture in home automation and lighting products to stay afloat. When Zenith eventually sold ZDS to Groupe Bull in 1989, Heathkit was included in the deal.: 153 In March 1992, Heath announced that it was discontinuing electronic kits after 45 years. The company had been the last sizable survivor of a dozen kit manufacturers from the 1960s. In 1995, Bull sold Heathkit to a private investor group called HIG, which then sold it to another investment group in 1998. Wanting to only concentrate on the educational products, this group sold the Heath/Zenith name and products to DESA International,: 154  a maker of specialty tools and heaters. 
In late 2008, Heathkit Educational Systems sold a large portion of its physical collection of legacy kit schematics and manuals along with permission to make reproductions to Don Peterson, though it still retained the copyrights and trademarks, and had pointers to people who could help with the older equipment. Kit era comes to a close: DESA filed for bankruptcy in December 2008. The Heathkit company existed for a few years as Heathkit Educational Systems located in Saint Joseph, Michigan, concentrating on the educational market. The Heathkit company filed for bankruptcy in 2012. Revival: In May 2013, Heathkit's corporate restructuring was announced on their website. An extensive FAQ accessible from their homepage stated clearly that Heathkit was back, and that they would resume electronic kit production and sales. On October 8, 2015, Heathkit circulated an email to its "insiders", who had indicated an interest in the company's progress by completing its online marketing survey. It had now secured the rights to all Heathkit designs and trademarks; secured several new patents; established new offices, warehouse space, and a factory in Santa Cruz, California; and had introduced the renewed company's first new electronic kit in decades. Since then, Heathkit has announced and sold further kits in its new lineup of products. In addition, limited repair service on vintage products, reprints of manuals and schematics, remaining inventories of original parts, and upgrades of some vintage models are available. Amateur radio: Heathkit made amateur radio kits almost from the beginning. In addition to their low prices compared with commercially manufactured equipment, Heathkits appealed to amateurs who had an interest in building their own equipment, but did not necessarily have the expertise or desire to design it and obtain all the parts themselves. They expanded and enhanced their line of amateur radio gear through nearly four decades.
By the late 1960s, Heathkit had as large a selection of ham equipment as any company in the field. Amateur radio: Beginnings They entered the market in 1954 with the AT-1, a simple, three-tube, crystal-controlled transmitter. It was capable of operating CW on the six most popular amateur short wave bands and sold for $29.50 (equivalent to $320 in 2022). The 39-page catalog contained only two pages of “ham gear”. An antenna coupler was the only other piece of equipment specifically intended for amateur radio use. The other two items were a general coverage short wave receiver, the AR-2, and an impedance meter. A VFO for the AT-1, the model VF-1, came out the following year. Amateur radio: Early DX-series transmitters The company's first full-featured transmitter, the DX-100, appeared in 1956. It filled two facing catalog pages, indicating Heathkit's seriousness in building kits for amateurs. The description noted that it was “amateur designed” – meant to convey expertise in designing specifically for the amateur radio operator – not the usual sense of the term amateur. And it stated that “amateurs in the field are enthusiastic about praising its performance under actual operating conditions”, indicating that it had been through what we would today call beta testing. Amateur radio: Heathkit had been including schematic diagrams of nearly every major kit in its catalog since 1954. In addition, the DX-100's listing contained two interior pictures and a block diagram. The 15-tube design could transmit either CW or AM (voice) with 100 to 140 watts output on all seven short wave amateur bands. It had a built-in power supply and VFO, and weighed 100 pounds. Priced at $189.50, it was expensive for the time (equivalent to $2,070 in 2022), yet undercut other amateur transmitters having similar features. It became quite popular.
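The 2022-dollar equivalents quoted throughout this article follow the standard consumer-price-index adjustment: the historical price is scaled by the ratio of the 2022 CPI to the CPI of the original year. A minimal sketch, with the caveat that the CPI values below are illustrative assumptions rather than figures taken from this article:

```python
def to_2022_dollars(price, cpi_then, cpi_2022=292.7):
    """Scale a historical price by the CPI ratio (assumed CPI values)."""
    return price * cpi_2022 / cpi_then

# The AT-1's $29.50 price in 1954, with an assumed 1954 CPI of 26.9,
# lands near the article's "$320 in 2022" figure.
print(round(to_2022_dollars(29.50, 26.9)))
```

The same scaling, with the appropriate year's index, reproduces the other equivalents quoted for the DX-100, the Apache/Mohawk pair, and the SB-300/SB-400 pair.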
Amateur radio: The following year they introduced two scaled-down transmitters: the CW-only DX-20 model, meant for beginners, and the DX-35, capable of both CW and AM phone. Both models covered six bands, lacking only the DX-100's coverage of the 160m (1.8 MHz) band. Although they resembled the DX-100 in appearance, they lacked many of its features. But at $35.95 and $56.95, they were much more affordable. The DX-35 was superseded a year later by the improved DX-40. Amateur radio: The DX-100 was upgraded in 1959 to the DX-100B (there apparently was no DX-100A) and sold for the same price. By 1960, the catalog advertised it as the “best watts per dollar value” and called the 5-year-old design “classic”. Amateur radio: Heathkit tribes Apache, Mohawk, Chippewa, Seneca In 1959, a year before the last DX-100 was sold, a new deluxe line of amateur equipment was introduced. The TX-1 Apache transmitter and the RX-1 Mohawk receiver were about the same size and weight as the DX-100 but had updated styling and a new cabinet (to which the DX-100 also changed). The transmitter had many more features than its predecessor, and the RX-1 was Heathkit's first full-featured amateur-band receiver. Amateur radio: Both units used a "slide rule dial" with a scale on a rotating drum that changed with the band selection and provided more accurate tuning. Together, Heath's top-of-the-line pair sold for $504.45 (equivalent to $5,060 in 2022). The SB-10 SSB adapter was introduced in 1959 to enable both the Apache and the DX-100 to operate on the new mode. The next year, a matching kilowatt linear amplifier, the KL-1 Chippewa, was added to the line. Completing the line, the model VHF-1 Seneca covered the 6 meter (50 MHz) and 2 meter (144 MHz) bands. Amateur radio: Cheyenne, Comanche The MT-1 Cheyenne transmitter and MR-1 Comanche receiver were considerably smaller and lighter than the Apache-Mohawk pair.
Used with either an AC or DC external power supply, they could be operated in fixed or mobile service. Without transceive capability, this pair was probably challenging to operate while driving. A year later these units were reborn as the HX-20 transmitter and HR-20 receiver (no longer given names), capable of SSB operation. Amateur radio: Marauder, Warrior The HX-10 Marauder was a redesigned replacement for the Apache, operating on SSB without an external adapter. It appeared in the 1962–63 catalog along with a new linear amplifier, the HA-10 Warrior. Amateur radio: VHF The last new entry in the tribes generation was the HX-30 transmitter and HA-20 linear amplifier, both capable of SSB operation on the six meter (50 MHz) band. Heathkit also brought out a pair of single-band, low-power, CW and AM phone VHF transceivers – the HW-10 and HW-20 for the 6 meter and 2 meter bands, respectively. Designed primarily for mobile use, they were much smaller than the tribes but bore a strong family resemblance, down to their chrome knobs. Amateur radio: In 1961 they also brought out a distinctive set of low-cost, compact, single-band transceivers for 6 and 2 meters, the HW-29 and HW-30, also called the Sixer and Twoer. Completely self-contained, with a built-in speaker and a matching microphone, they could operate from AC or DC power. Somewhat limited in features, they were designed for AM phone operation only, and the transmit frequency was crystal-controlled. Amateur radio: These portable transceivers looked distinctly different from other Heathkit gear. Tan and brown rather than the pervasive green, they were roughly rectangular with rounded corners and had a handle on top. That particular shape and appearance would lead to them being dubbed the “Benton Harbor Lunchboxes” in the 1966 catalog.
Amateur radio: New novice station To succeed the DX-series that started in the 1950s, Heathkit designed an entirely new novice station consisting of the DX-60 transmitter, HR-10 receiver, and HG-10 VFO. These matching units were smaller and lighter than the tribes, covered five bands, and were much lower priced. They would go through incremental improvement and sell for more than a decade. In 1969 Heathkit added the HW-16 to its beginner-level line – a transceiver designed specifically for the Novice licensee. It covered the three HF Novice bands, CW only, and was crystal-controlled but could be used with the HG-10 VFO. Amateur radio: SB-series and HW-series By the early 1960s, a large majority of amateurs had adopted SSB as their primary mode of voice communication on the HF bands. This led to the development of equipment that was specifically designed for transceive operation on SSB and was also much smaller and lighter than the previous generation of ham gear. As with other manufacturers, such as Drake and Collins, Heathkit began in 1964 by introducing a transceiver. It covered only one band and came in three models: the HW-12, -22, and -32, covering the 20m (14 MHz), 40m (7 MHz), and 75m (3.8 MHz) bands, respectively. Amateur radio: Influenced heavily by the S/Line from Collins, Heathkit designed the SB-series to become its top-line set of amateur radio equipment. Like the S/Line, these new products were designed to operate together in various combinations as a system. The first models appeared in the 1965 catalog, displacing the large, heavy units of the tribes generation (except for the Marauder and Warrior, and the 6 meter units, which remained for one year). Amateur radio: When used together, the SB-300 receiver and SB-400 transmitter could transceive and had many other features of the S/Line, including crystal bandwidth filters and 1 kHz tuning dial resolution.
They could also operate separately (if the optional crystal pack was installed in the transmitter), giving the operator more flexibility in communicating with foreign stations, aka "DX stations", who were not authorized to transmit within the same frequency ranges as U.S. stations. The S/Line influence was also easy to see in the cabinet styling, tuning mechanism, and knobs. But by designing them as kits and using less expensive construction, Heathkit could offer these units at much lower prices. The pair sold for $590 that same year (equivalent to $5,480 in 2022). The matching SB-200 linear amplifier, with 1,200 watts input and 700–800 watts output, completed the line for 1965. Amateur radio: The following year two more units were added: the SB-110 transceiver for the 6 meter band and the HA-14 “Kompact Kilowatt”, a smaller kilowatt linear amplifier based on the SB-200. The HA-14 also used grounded-grid 572Bs but with external AC and DC power supplies. At 7 lbs, the amplifier itself was very small, matching the HW mono-banders in style and usable in both mobile and desktop service. Like the SB-200 from which it derived, its input was designed to match the 100-watt output of the Heathkit SB and HW series, as well as that of Collins and others. Amateur radio: In a last-minute, four-page center insert to the 1966 catalog titled “New Product News”, Heathkit announced the SB-100 five-band SSB transceiver. Like the other transceivers of this time, the SB-100 (and the later improved models SB-101 and SB-102) would become one of Heathkit's best selling amateur radio products. This included a scaled-back, lower-priced version of the SB-100 called the HW-100 (later updated to the HW-101), introduced in 1969. While the SB series was affectionately nicknamed the "Sugar Baker" series, the HW series was affectionately nicknamed the "Hot Water" series.
Amateur radio: In the next three years, Heathkit brought out several more SB-series accessories, including the SB-220, a linear amplifier with 2 kilowatts input and 1.4 kilowatts output. The SB-400 transmitter was slightly updated as the newer SB-401 model. The final model in the original SB-series was the SB-303 receiver, a solid state replacement (in a smaller case) for the SB-301 and its earlier sibling, the SB-300 receiver. The 2000 film Frequency, starring Dennis Quaid and Jim Caviezel, featured a Heathkit SB-301 receiver used with artistic license as a transceiver (a film studio prop), although the SB-301 had no transmitter stages and was not a transceiver. An SB-302 receiver was never produced (no reason was ever given for skipping the 302 model number), and some hams who worked at Heath hinted that there was talk of a solid state SB-103 transceiver, but it never made it past the proposal stage. Amateur radio: The SB-series would continue to be improved and sell well until 1974 and the arrival of solid state and digital design, with the SB-104 transceiver, its accessories, and a new generation of amateur radio gear. Though somewhat redesigned physically, it had a similar appearance to the earlier SB-series generation. The SB-670 was a short-lived antenna tuner that matched the SB-104; unfortunately, only a few were produced, and those were considered "prototypes". Technical issues with the first production run of the SB-104 led Heath to quickly update it with the SB-104A. By that time, amateurs were buying transceivers made overseas that offered more features for the same money (including the AM and FM modes that the Heathkit SB and HW series lacked) and required no user assembly. Amateur radio: In 1983 Heathkit came out with its last ham radio kit, the HW-5400 transceiver. It was all solid state, with 100 watts output on 160 through 10 meters, including the newly available WARC bands.
Also available was a matching power supply. Amateur radio: Solid state and digital In the late 1970s Heathkit redesigned the line again, bringing out a series of transceivers and separates with more advanced digital features and new styling (abandoning the green motif, a distinguishing feature of Heathkits for more than two decades). Amateur radio: During the 1980s, with increasing competition primarily from Japanese equipment makers, wide use of automated manufacturing techniques, and increasingly complex designs, it became much more difficult to produce kits that were both easy to construct and feature-rich at a competitive price. Heathkit began to introduce models that were unavailable in kit form, such as the SS-9000, an all solid-state, synthesized transceiver covering 160 through 10 meters (including the WARC bands) with 100 watts output. A total of 375 were produced, according to the Yahoo Heath user group. This continued until the company left the electronic kit business in 1992. As of 2022, the Heathkit company has been revived and is offering both newly designed and vintage products for the amateur radio market.
**Next Generation Multiple Warhead System** Next Generation Multiple Warhead System: The Next Generation Multiple Warhead System, or NGMWS, is a weapon developed by MBDA to defeat hard and deeply buried targets (hence an alternative name, HARDBUT). The system includes a precursor charge and a follow-through bomb. Development was funded by the British and French ministries of defence.
**Small modular immunopharmaceutical** Small modular immunopharmaceutical: Small modular immunopharmaceuticals, or SMIPs for short, are artificial proteins intended for use as pharmaceutical drugs. They are largely built from parts of antibodies (immunoglobulins) and, like them, have a binding site for antigens that could be used for monoclonal antibody therapy. SMIPs have a similar biological half-life and, being smaller than antibodies, are expected to have better tissue penetration properties. They were invented by Trubion and are now being developed by Emergent BioSolutions, which acquired Trubion in 2010. Structure: SMIPs are single-chain proteins that comprise one binding region, one hinge region as a connector, and one effector domain. The binding region is a modified single-chain variable fragment (scFv), and the rest of the protein can be constructed from the fragment crystallizable region (Fc) and the hinge region of an immunoglobulin G1 (IgG1). Genetically modified cells produce SMIPs as antibody-like dimers, which are about 30% smaller than real antibodies. Like ordinary monoclonal antibodies, SMIPs are monospecific, meaning they recognize and attach to a single antigen target to initiate their biological activity. SMIP drug candidates are intended to target antigens with the same specificity and predictable biological activity as monoclonal antibodies. Examples are TRU-015, a CD20-targeting SMIP under research for rheumatoid arthritis, and TRU-016, a CD37-targeting potential treatment for chronic lymphocytic leukemia and other B-cell cancers. Production: A monoclonal antibody targeting the desired antigen can be developed the classical way, using hybridoma technology. The scFv is then constructed from the antibody's variable regions.
A large number of different hinge regions and effector domains are taken from libraries of immunoglobulins, and the combined proteins are produced in genetically modified (transfected) cells and screened for clones with useful properties like high binding specificity. The selected protein is multiplied in transfected cells suitable for medium- or large-scale production, for example Chinese hamster ovary cells, and purified by chromatography.
**TB6Cs1H4 snoRNA** TB6Cs1H4 snoRNA: TB6Cs1H4 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecules, which guide the modification of uridines to pseudouridines at specific sites in substrate RNAs. It is known as a small nucleolar RNA (snoRNA), so named because of its localization in the nucleolus of the eukaryotic cell. TB6Cs1H4 is predicted to guide the pseudouridylation of LSU5 ribosomal RNA (rRNA) at residue Ψ824.