**Two shot** Two shot: A two shot (or short two) is a type of shot in which the frame encompasses two people (the subjects). The subjects do not have to be next to each other, and there are many common two shots which have one subject in the foreground and the other subject in the background. Overview: Classic two shots are shot with a medium lens, head to knees or closer (the term two shot is shorthand for "medium two shot"), and show the characters so that both of their faces can be clearly seen. Common variations include two people in profile, one person in profile and the other 3/4 or full towards camera, two people looking towards camera either side by side or with one behind the other, one person with their back to the other while the other looks at them, either profile, 3/4, or full face, or the mirror two shot. Overview: An "American two shot" shows the two heads facing each other in profile to the camera. Overview: In a "two shot west," two characters will begin a conversation face-to-face, then one character will turn 180° away from the other character while the other character keeps looking at them, and they will continue with the conversation. This enables both characters to appear together in a single shot directly facing the audience. It is rather unrealistic, and is primarily seen in American soap operas. In a "full two shot," the two characters are shown from head to toe. A "wide two shot" is a master shot showing two people using a wider lens, including an overview of their surroundings. A "close two shot" is a close-up with two people's heads in the frame, shot with a long lens. This framing is often used for shots of two people kissing or in moments of great dramatic tension. Overview: In classic movies, long takes were often used in which several types of shots were used without cutting. For instance, if two people are talking facing the camera in a medium shot and the foreground character turns their back to the camera, the shot turns into an "over the shoulder" or "OTS" shot. If that character then walks towards the character in the background with both characters in profile, the shot turns into a full two shot. If the camera moves closer, the shot becomes a medium two shot again, and so on. Similarly, a three shot has three people featured prominently in the composition of the frame. In contrast, the term "one shot" has another meaning: it is used to describe a whole film, sequence or scene captured in one continuous take, usually footage without actual or noticeable cuts. Overview: Shots that frame only one actor are called single shots (or short singles).
**Chern–Simons theory** Chern–Simons theory: The Chern–Simons theory is a 3-dimensional topological quantum field theory of Schwarz type developed by Edward Witten. It was first discovered by mathematical physicist Albert Schwarz. It is named after mathematicians Shiing-Shen Chern and James Harris Simons, who introduced the Chern–Simons 3-form. In the Chern–Simons theory, the action is proportional to the integral of the Chern–Simons 3-form. Chern–Simons theory: In condensed-matter physics, Chern–Simons theory describes the topological order in fractional quantum Hall effect states. In mathematics, it has been used to calculate knot invariants and three-manifold invariants such as the Jones polynomial. In particular, Chern–Simons theory is specified by a choice of simple Lie group G known as the gauge group of the theory and also a number referred to as the level of the theory, which is a constant that multiplies the action. The action is gauge dependent; however, the partition function of the quantum theory is well-defined when the level is an integer and the gauge field strength vanishes on all boundaries of the 3-dimensional spacetime. Chern–Simons theory: It is also the central mathematical object in theoretical models for topological quantum computers (TQC). Specifically, an SU(2) Chern–Simons theory describes the simplest non-abelian anyonic model of a TQC, the Yang–Lee–Fibonacci model. The dynamics of Chern–Simons theory on the 2-dimensional boundary of a 3-manifold is closely related to fusion rules and conformal blocks in conformal field theory, and in particular WZW theory. The classical theory: Mathematical origin In the 1940s S. S. Chern and A. Weil studied the global curvature properties of smooth manifolds M as de Rham cohomology (Chern–Weil theory), which is an important step in the theory of characteristic classes in differential geometry. Given a flat G-principal bundle P on M there exists a unique homomorphism, called the Chern–Weil homomorphism, from the algebra of G-adjoint invariant polynomials on g (Lie algebra of G) to the cohomology $H^*(M,\mathbb{R})$. If the invariant polynomial is homogeneous, one can concretely write down the corresponding closed 2k-form in terms of the curvature form Ω of the connection ω. The classical theory: In 1974 S. S. Chern and J. H. Simons concretely constructed a (2k − 1)-form $Tf(\omega)$ such that $\mathrm{d}Tf(\omega)=f(\Omega^k)$, where T is the Chern–Weil homomorphism. This form is called the Chern–Simons form. If $Tf(\omega)$ is closed, one can integrate it over a (2k − 1)-dimensional cycle C on M to obtain the invariant $\int_C Tf(\omega)$. This invariant is called the Chern–Simons invariant. As pointed out in the introduction of the Chern–Simons paper, the Chern–Simons invariant CS(M) is the boundary term that cannot be determined by any pure combinatorial formulation. It also can be defined as $\operatorname{CS}(M)=\int_{s(M)}\tfrac{1}{2}Tp_1\in\mathbb{R}/\mathbb{Z}$, where $p_1$ is the first Pontryagin number and s(M) is the section of the normal orthogonal bundle P. Moreover, the Chern–Simons term is described as the eta invariant defined by Atiyah, Patodi and Singer. The classical theory: The gauge invariance and the metric invariance can be viewed as the invariance under the adjoint Lie group action in the Chern–Weil theory. The action integral (path integral) of the field theory in physics is viewed as the Lagrangian integral of the Chern–Simons form and Wilson loop, the holonomy of the vector bundle on M. These explain why the Chern–Simons theory is closely related to topological field theory.
The classical theory: Configurations Chern–Simons theories can be defined on any topological 3-manifold M, with or without boundary. As these theories are Schwarz-type topological theories, no metric needs to be introduced on M. The classical theory: Chern–Simons theory is a gauge theory, which means that a classical configuration in the Chern–Simons theory on M with gauge group G is described by a principal G-bundle on M. The connection of this bundle is characterized by a connection one-form A which is valued in the Lie algebra g of the Lie group G. In general the connection A is only defined on individual coordinate patches, and the values of A on different patches are related by maps known as gauge transformations. These are characterized by the assertion that the covariant derivative, which is the sum of the exterior derivative operator d and the connection A, transforms in the adjoint representation of the gauge group G. The square of the covariant derivative with itself can be interpreted as a g-valued 2-form F called the curvature form or field strength. It also transforms in the adjoint representation. The classical theory: Dynamics The action S of Chern–Simons theory is proportional to the integral of the Chern–Simons 3-form: $S=\frac{k}{4\pi}\int_M\operatorname{tr}\left(A\wedge \mathrm{d}A+\tfrac{2}{3}A\wedge A\wedge A\right)$. The constant k is called the level of the theory. The classical physics of Chern–Simons theory is independent of the choice of level k. Classically the system is characterized by its equations of motion which are the extrema of the action with respect to variations of the field A. In terms of the field curvature $F=\mathrm{d}A+A\wedge A$ the field equation is explicitly $0=\frac{\delta S}{\delta A}=\frac{k}{2\pi}F$. The classical theory: The classical equations of motion are therefore satisfied if and only if the curvature vanishes everywhere, in which case the connection is said to be flat. Thus the classical solutions to G Chern–Simons theory are the flat connections of principal G-bundles on M. Flat connections are determined entirely by holonomies around noncontractible cycles on the base M. More precisely, they are in one-to-one correspondence with equivalence classes of homomorphisms from the fundamental group of M to the gauge group G up to conjugation. The classical theory: If M has a boundary N then there is additional data which describes a choice of trivialization of the principal G-bundle on N. Such a choice characterizes a map from N to G. The dynamics of this map is described by the Wess–Zumino–Witten (WZW) model on N at level k. Quantization: To canonically quantize Chern–Simons theory one defines a state on each 2-dimensional surface Σ in M. As in any quantum field theory, the states correspond to rays in a Hilbert space. There is no preferred notion of time in a Schwarz-type topological field theory and so one cannot require that Σ be a Cauchy surface; in fact, a state can be defined on any surface. Quantization: Σ is of codimension one, and so one may cut M along Σ. After such a cutting M will be a manifold with boundary and in particular classically the dynamics of Σ will be described by a WZW model. Witten has shown that this correspondence holds even quantum mechanically. More precisely, he demonstrated that the Hilbert space of states is always finite-dimensional and can be canonically identified with the space of conformal blocks of the G WZW model at level k. Quantization: For example, when Σ is a 2-sphere, this Hilbert space is one-dimensional and so there is only one state.
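A short derivation connects the action above to the flatness condition. The sketch below is standard textbook manipulation (integration by parts on a closed M, with the total-derivative term dropped), not something spelled out in the text:

```latex
% Vary the Chern-Simons action with respect to A (M closed, so the total
% derivative term integrates to zero):
\delta S \;=\; \frac{k}{4\pi}\int_M \operatorname{tr}\big(\delta A\wedge \mathrm{d}A
   + A\wedge \mathrm{d}\,\delta A + 2\,\delta A\wedge A\wedge A\big)
 \;=\; \frac{k}{2\pi}\int_M \operatorname{tr}\big(\delta A\wedge F\big),
\qquad F=\mathrm{d}A+A\wedge A .

% Stationarity for arbitrary \delta A therefore forces a flat connection,
% and the classical moduli space is the one described in the text:
F=0,\qquad
\mathcal{M}_{\mathrm{flat}}\;\cong\;\operatorname{Hom}\big(\pi_1(M),\,G\big)\,/\,\text{conjugation}.
```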
When Σ is a 2-torus the states correspond to the integrable representations of the affine Lie algebra corresponding to g at level k. Characterizations of the conformal blocks at higher genera are not necessary for Witten's solution of Chern–Simons theory. Observables: Wilson loops The observables of Chern–Simons theory are the n-point correlation functions of gauge-invariant operators. The most often studied class of gauge invariant operators are Wilson loops. A Wilson loop is the holonomy around a loop in M, traced in a given representation R of G. As we will be interested in products of Wilson loops, without loss of generality we may restrict our attention to irreducible representations R. Observables: More concretely, given an irreducible representation R and a loop K in M, one may define the Wilson loop $W_R(K)$ by $\operatorname{Tr}\exp\left(i\oint_K A\right)$, where A is the connection 1-form and we take the Cauchy principal value of the contour integral and exp is the path-ordered exponential. Observables: HOMFLY and Jones polynomials Consider a link L in M, which is a collection of ℓ disjoint loops. A particularly interesting observable is the ℓ-point correlation function formed from the product of the Wilson loops around each disjoint loop, each traced in the fundamental representation of G. One may form a normalized correlation function by dividing this observable by the partition function Z(M), which is just the 0-point correlation function. Observables: In the special case in which M is the 3-sphere, Witten has shown that these normalized correlation functions are proportional to known knot polynomials. For example, in G = U(N) Chern–Simons theory at level k the normalized correlation function is, up to a phase, equal to $\sin(\pi N/(k+N))$ times the HOMFLY polynomial. In particular when N = 2 the HOMFLY polynomial reduces to the Jones polynomial. In the SO(N) case, one finds a similar expression with the Kauffman polynomial. Observables: The phase ambiguity reflects the fact that, as Witten has shown, the quantum correlation functions are not fully defined by the classical data. The linking number of a loop with itself enters into the calculation of the partition function, but this number is not invariant under small deformations and, in particular, is not a topological invariant. This number can be rendered well defined if one chooses a framing for each loop, which is a choice of preferred nonzero normal vector at each point along which one deforms the loop to calculate its self-linking number. This procedure is an example of the point-splitting regularization procedure introduced by Paul Dirac and Rudolf Peierls to define apparently divergent quantities in quantum field theory in 1934. Observables: Sir Michael Atiyah has shown that there exists a canonical choice of 2-framing, which is generally used in the literature today and leads to a well-defined linking number. With the canonical framing the above phase is the exponential of 2πi/(k + N) times the linking number of L with itself. Observables: Problem (Extension of the Jones polynomial to general 3-manifolds) "The original Jones polynomial was defined for 1-links in the 3-sphere (the 3-ball, the 3-space R3). Can you define the Jones polynomial for 1-links in any 3-manifold?" See section 1.1 of this paper for the background and the history of this problem. Kauffman submitted a solution in the case of the product manifold of a closed oriented surface and the closed interval, by introducing virtual 1-knots. It is open in the other cases.
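The observables just described can be restated compactly in display form. The only notation added beyond the text is the path-ordering symbol P; the link-invariant statement is the one attributed to Witten above:

```latex
% Wilson loop: trace, in representation R, of the path-ordered holonomy of A
% around the loop K:
W_R(K) \;=\; \operatorname{Tr}_R\,\mathcal{P}\exp\!\Big(i\oint_K A\Big).

% For a link L = K_1 \cup \dots \cup K_\ell, each component taken in the
% fundamental representation, the normalized observable is
\langle W(L)\rangle \;=\;
\frac{\big\langle W(K_1)\,W(K_2)\cdots W(K_\ell)\big\rangle_M}{Z(M)} ,
% which for M = S^3 is (up to phase and framing) proportional to the HOMFLY
% polynomial for G = U(N), reducing to the Jones polynomial for N = 2.
```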
Witten's path integral for the Jones polynomial is formally written for links in any compact 3-manifold, but the calculation has not been carried out, even at the physics level of rigor, in any case other than the 3-sphere (the 3-ball, the 3-space R3). The problem is also open at the physics level. In the case of the Alexander polynomial, this problem has been solved. Relationships with other theories: Topological string theories In the context of string theory, a U(N) Chern–Simons theory on an oriented Lagrangian 3-submanifold M of a 6-manifold X arises as the string field theory of open strings ending on a D-brane wrapping M in the A-model topological string theory on X. The B-model topological open string field theory on the spacefilling worldvolume of a stack of D5-branes is a 6-dimensional variant of Chern–Simons theory known as holomorphic Chern–Simons theory. Relationships with other theories: WZW and matrix models Chern–Simons theories are related to many other field theories. For example, if one considers a Chern–Simons theory with gauge group G on a manifold with boundary then all of the 3-dimensional propagating degrees of freedom may be gauged away, leaving a two-dimensional conformal field theory known as a G Wess–Zumino–Witten model on the boundary. In addition the U(N) and SO(N) Chern–Simons theories at large N are well approximated by matrix models. Relationships with other theories: Chern–Simons gravity theory In 1982, S. Deser, R. Jackiw and S. Templeton proposed the Chern–Simons gravity theory in three dimensions, in which the Einstein–Hilbert action in gravity theory is modified by adding the Chern–Simons term. (Deser, Jackiw & Templeton (1982)) In 2003, R. Jackiw and S. Y. Pi extended this theory to four dimensions (Jackiw & Pi (2003)), and Chern–Simons gravity theory has considerable effects not only on fundamental physics but also on condensed matter theory and astronomy. Relationships with other theories: The four-dimensional case is very analogous to the three-dimensional case. In three dimensions, the gravitational Chern–Simons term is $\operatorname{CS}(\Gamma)=\frac{1}{2\pi^2}\int d^3x\,\varepsilon^{ijk}\left(\Gamma^p_{iq}\partial_j\Gamma^q_{kp}+\tfrac{2}{3}\Gamma^p_{iq}\Gamma^q_{jr}\Gamma^r_{kp}\right)$. Varying this term gives the Cotton tensor $C^{mn}=-\frac{1}{2\sqrt{g}}\left(\varepsilon^{mij}D_iR^{n}_{\,j}+\varepsilon^{nij}D_iR^{m}_{\,j}\right)$. Then, the Chern–Simons modification of three-dimensional gravity is made by adding the above Cotton tensor to the field equation, which can be obtained as the vacuum solution by varying the Einstein–Hilbert action. Relationships with other theories: Chern–Simons matter theories In 2013 Kenneth A. Intriligator and Nathan Seiberg solved such 3d Chern–Simons gauge theories and their phases using monopoles carrying extra degrees of freedom. The Witten index of the many vacua discovered was computed by compactifying the space by turning on mass parameters and then computing the index. In some vacua, supersymmetry was computed to be broken. These monopoles were related to condensed matter vortices. (Intriligator & Seiberg (2013)) The N = 6 Chern–Simons matter theory is the holographic dual of M-theory on $AdS_4\times S^7$. Four-dimensional Chern–Simons theory In 2013 Kevin Costello introduced a closely related theory defined on a four-dimensional manifold consisting of the product of a two-dimensional 'topological plane' and a two-dimensional (or one complex dimensional) complex curve.
He later studied the theory in more detail together with Witten and Masahito Yamazaki, demonstrating how the gauge theory could be related to many notions in integrable systems theory, including exactly solvable lattice models (like the six-vertex model or the XXZ spin chain), integrable quantum field theories (such as the Gross–Neveu model, principal chiral model and symmetric space coset sigma models), the Yang–Baxter equation and quantum groups such as the Yangian, which describe symmetries underpinning the integrability of the aforementioned systems. Relationships with other theories: The action on the 4-manifold $M=\Sigma\times C$, where Σ is a two-dimensional manifold and C is a complex curve, is proportional to $\int_{\Sigma\times C}\omega\wedge\operatorname{CS}(A)$, where ω is a meromorphic one-form on C and $\operatorname{CS}(A)$ is the Chern–Simons 3-form. Chern–Simons terms in other theories: The Chern–Simons term can also be added to models which aren't topological quantum field theories. In 3D, this gives rise to a massive photon if this term is added to the action of Maxwell's theory of electrodynamics. This term can be induced by integrating over a massive charged Dirac field. It also appears for example in the quantum Hall effect. The addition of the Chern–Simons term to various theories gives rise to vortex- or soliton-type solutions. Ten- and eleven-dimensional generalizations of Chern–Simons terms appear in the actions of all ten- and eleven-dimensional supergravity theories. Chern–Simons terms in other theories: One-loop renormalization of the level If one adds matter to a Chern–Simons gauge theory then, in general, it is no longer topological. However, if one adds n Majorana fermions then, due to the parity anomaly, when integrated out they lead to a pure Chern–Simons theory with a one-loop renormalization of the Chern–Simons level by −n/2; in other words the level k theory with n fermions is equivalent to the level k − n/2 theory without fermions.
**Microemulsion** Microemulsion: Microemulsions are clear, thermodynamically stable isotropic liquid mixtures of oil, water and surfactant, frequently in combination with a cosurfactant. The aqueous phase may contain salt(s) and/or other ingredients, and the "oil" may actually be a complex mixture of different hydrocarbons. In contrast to ordinary emulsions, microemulsions form upon simple mixing of the components and do not require the high shear conditions generally used in the formation of ordinary emulsions. The three basic types of microemulsions are direct (oil dispersed in water, o/w), reversed (water dispersed in oil, w/o) and bicontinuous. Microemulsion: In ternary systems such as microemulsions, where two immiscible phases (water and ‘oil’) are present with a surfactant, the surfactant molecules may form a monolayer at the interface between the oil and water, with the hydrophobic tails of the surfactant molecules dissolved in the oil phase and the hydrophilic head groups in the aqueous phase. Uses: Microemulsions have many commercially important uses, including water-in-oil microemulsions for some dry cleaning processes, floor polishers and cleaners, personal care products, pesticide formulations, cutting oils and drugs. Much of the work done on these systems has been motivated by their possible use to mobilize petroleum trapped in porous sandstone for enhanced oil recovery. A fundamental reason for the uses of these systems is that a microemulsion phase sometimes has an ultralow interfacial tension with a separate oil or aqueous phase, which may release or mobilize them from solid phases even in conditions of slow flow or low pressure gradients. Uses: Microemulsions also have industrial applications, one of them being the synthesis of polymers. Microemulsion polymerization is a complex heterogeneous process where transport of monomers, free radicals and other species (such as chain transfer agents, co-surfactants and inhibitors) between the aqueous and organic phases takes place. Compared with other heterogeneous polymerization processes (suspension or emulsion), microemulsion polymerization is a more complicated system. The polymerization rate is controlled by monomer partitioning between the phases, particle nucleation, and adsorption and desorption of radicals. Particle stability is affected by the amount and type of surfactant and the pH of the dispersing medium. Uses: Microemulsions are also used in the process of creating nanoparticles. The kinetics of microemulsion polymerization has much in common with emulsion polymerization kinetics, the most characteristic feature of which is the compartmentalization, where the radicals growing inside the particles are separated from each other, thus suppressing termination to a high extent and, as a consequence, providing high rates of polymerization. Theory: Various theories concerning microemulsion formation, stability and phase behavior have been proposed over the years. For example, one explanation for their thermodynamic stability is that the oil/water dispersion is stabilized by the surfactant present and their formation involves the elastic properties of the surfactant film at the oil/water interface, which involves as parameters, the curvature and the rigidity of the film. These parameters may have an assumed or measured pressure and/or temperature dependence (and/or the salinity of the aqueous phase), which may be used to infer the region of stability of the microemulsion, or to delineate the region where three coexisting phases occur, for example.
Calculations of the interfacial tension of the microemulsion with a coexisting oil or aqueous phase are also often of special focus and may sometimes be used to guide their formulation. History and terminology: The term microemulsion was first used by T. P. Hoar and J. H. Shulman, professors of chemistry at Cambridge University, in 1943. Alternative names for these systems are often used, such as transparent emulsion, swollen micelle, micellar solution, and solubilized oil. More confusingly still, the term microemulsion can refer to the single isotropic phase that is a mixture of oil, water and surfactant, or to one that is in equilibrium with coexisting predominantly oil and/or aqueous phases, or even to other non-isotropic phases. As in the binary systems (water/surfactant or oil/surfactant), self-assembled structures of different types can be formed, ranging, for example, from (inverted) spherical and cylindrical micelles to lamellar phases and bicontinuous microemulsions, which may coexist with predominantly oil or aqueous phases. Phase diagrams: Microemulsion domains are usually characterized by constructing ternary-phase diagrams. Phase diagrams: Three components are the basic requirement to form a microemulsion: two immiscible liquids and a surfactant. The majority of microemulsions use oil and water as immiscible liquid pairs. If a cosurfactant is used, it may sometimes be represented at a fixed ratio to the surfactant as a single component, and treated as a single "pseudo-component". The relative amounts of these three components can be represented in a ternary phase diagram. Gibbs phase diagrams can be used to show the influence of changes in the volume fractions of the different phases on the phase behavior of the system. Phase diagrams: The three components composing the system are each found at an apex of the triangle, where their corresponding volume fraction is 100%. Moving away from that corner reduces the volume fraction of that specific component and increases the volume fraction of one or both of the two other components. Each point within the triangle represents a possible composition of a mixture of the three components or pseudo-components, which may consist (ideally, according to the Gibbs' phase rule) of one, two or three phases. These points combine to form regions with boundaries between them, which represent the "phase behavior" of the system at constant temperature and pressure. Phase diagrams: The Gibbs phase diagram, however, is an empirical visual observation of the state of the system and may or may not express the true number of phases within a given composition. Apparently clear single phase formulations can still consist of multiple isotropic phases (e.g. the apparently clear heptane/AOT/water microemulsions consist of multiple phases). Since these systems can be in equilibrium with other phases, many systems, especially those with high volume fractions of both immiscible phases, can be easily destabilised by anything that changes this equilibrium, e.g. high or low temperature or the addition of surface tension modifying agents. Phase diagrams: However, examples of relatively stable microemulsions can be found. It is believed that the mechanism for removing acid build up in car engine oils involves low water phase volume, water-in-oil (w/o) microemulsions.
Theoretically, transport of the aqueous acid droplets through the engine oil to microdispersed calcium carbonate particles in the oil should be most efficient when the aqueous droplets are small enough to transport a single hydrogen ion (the smaller the droplets, the greater the number of acid water droplets, the faster the neutralisation). Such microemulsions are probably very stable across a reasonably wide range of elevated temperatures.
**Krabbe disease** Krabbe disease: Krabbe disease (KD) (also known as globoid cell leukodystrophy or galactosylceramide lipidosis) is a rare and often fatal lysosomal storage disease that results in progressive damage to the nervous system. KD involves dysfunctional metabolism of sphingolipids and is inherited in an autosomal recessive pattern. The disease is named after the Danish neurologist Knud Krabbe (1885–1961). Signs and symptoms: Symptoms of infantile-onset (<12 months after birth) and later-onset Krabbe disease present differently. Of individuals with infantile-onset Krabbe disease, 85–90% display progressive neurologic deterioration in infancy and death before the age of two. Symptoms include irritability, fevers, limb stiffness, seizures, feeding difficulties (like GERD), vomiting, staring episodes, and slowing of mental and motor development. In the first stages of the disease, doctors often mistake the symptoms for those of cerebral palsy. Other symptoms include muscle weakness, spasticity, deafness, optic atrophy, optic nerve enlargement, blindness, paralysis, and difficulty when swallowing. Prolonged weight loss may also occur. 10–15% of individuals with later-onset Krabbe disease have a much slower disease progression. These individuals may also display symptoms such as esotropia, slurred speech, and slow development or loss of motor milestones. Causes: Krabbe disease is caused by mutations in the GALC gene located on chromosome 14 (14q31), which is inherited in an autosomal recessive manner. Mutations in the GALC gene cause a deficiency of an enzyme called galactosylceramidase. In rare cases, it may be caused by a lack of active saposin A (a derivative of prosaposin). The buildup of unmetabolized lipids adversely affects the growth of the nerve's protective myelin sheath (the covering that insulates many nerves) resulting in demyelination and severe progressive degeneration of motor skills. As part of a group of disorders known as leukodystrophies, Krabbe disease results from the imperfect growth and development of myelin. Galactosylceramidase deficiency also results in a buildup of a glycosphingolipid called psychosine, which is toxic to oligodendrocytes, a type of non-neuronal cell found in the nervous system, collectively termed neuroglia. Diagnosis: There are a few ways to help pinpoint the presence of Krabbe disease. Newborn screening for Krabbe disease includes assaying dried blood cells for GALC enzyme activity and molecular analysis for evidence of GALC enzyme mutations. Infants displaying low enzyme activity and/or enzyme mutations should be referred for additional diagnostic testing and neurological examination. 0–5% GALC enzyme activity is observed in all symptomatic individuals with Krabbe disease. High concentration of psychosine in dried blood spots may also be identified as a marker for Krabbe disease. A 2011 study discovered that individuals with Krabbe disease, more so in later-onset individuals, tend to have an abnormal increase in CSF protein concentration. The disease may be diagnosed by its characteristic grouping of certain cells (multinucleated globoid cells), nerve demyelination and degeneration, and destruction of brain cells. Special stains for myelin (e.g., luxol fast blue) may be used to aid diagnosis. Diagnosis: New York, Missouri and Kentucky include Krabbe in the newborn screening panel. Indiana started screening in 2020.
Treatment: Although there is no known cure for Krabbe disease, bone marrow transplantation or hematopoietic stem cell transplantation (HSCT) has been shown to benefit cases early in the course of the disease. Generally, treatment for the disorder is symptomatic and supportive. Physical therapy may help maintain or increase muscle tone and circulation. A 15-year study on the developmental outcomes of children with Krabbe disease who underwent HSCT in the first seven weeks after birth found that patients have a better prognosis for both lifespan and functionality, with a slower progression of the disease. Even symptomatic individuals with later-onset Krabbe disease may benefit from HSCT if diagnosed early enough. Umbilical-cord blood is typically used as the source for the transplant stem cells. Clinical trials for gene therapy are currently enrolling patients. Treatment: Management Symptom management can be particularly difficult for individuals with infantile onset, as symptoms tend to progress rapidly. Because there is no cure for Krabbe disease, management of the condition is typically supportive and aimed at alleviating symptoms. Frequent evaluation is encouraged in order to anticipate the onset of, and prepare for, certain symptoms. Physical therapy can help to alleviate motor difficulties and increase strength, mobility, and flexibility. Gastrostomy tubes are used to circumvent feeding difficulties and prevent aspiration. A simultaneous gastrostomy tube insertion and Nissen fundoplication procedure is commonly performed to prevent the need for a secondary surgical procedure. Individuals with Krabbe disease with severe motor deficits tend to be more susceptible to overfeeding, as they require fewer calories than caretakers may expect. There is also evidence that routine vaccines may accelerate disease progression; many individuals with Krabbe disease tend not to follow traditional vaccination procedures. Prognosis: In infantile Krabbe disease, death usually occurs in early childhood. A 2011 study found one-, two-, and three-year survival rates of 60%, 26%, and 14%, respectively, with a few surviving longer. Patients with late-onset Krabbe disease tend to have a slower progression of the disease and live significantly longer. Epidemiology: This disease does not only impact humans, but other animals such as monkeys, mice, and dogs have been observed to develop Krabbe disease as well. While certain gene deletions are more frequent than others, novel mutations resulting in Krabbe disease have been discovered worldwide. Most commonly, the underlying cause of the disease is a deletion of a GALC gene, which causes a deficiency in the GALC enzyme. This is the circumstance in 80% of patients who have European and Mexican origins. The mortality rate of early infantile Krabbe disease is 90% before the age of two. Later onset of symptoms is associated with longer life expectancy, with older children generally surviving two to seven years after the initial diagnosis. Krabbe disease occurs in about one in 100,000 births. Because the disease is genetic, incidence rates vary widely from population to population. The incidence rate is extremely low in Japan, with between 5 and 10 cases per 1,000,000 live births. In the United States, Krabbe disease occurs in approximately 1 out of every 100,000 live births. Scandinavian countries report incidence rates of one in 50,000 births.
In certain communities Krabbe disease is much more frequent, such as the Druze community in Israel, which has an incidence rate of 6 out of every 1,000 live births. This higher rate is thought to be due in part to a high frequency of consanguineous marriages. Almost 35% of all Druze marriages were found to be between first-cousin familial relations. There have been no reported cases of Krabbe disease among the Jewish community. Time of onset also varies in frequency by location. Early infantile Krabbe disease is the most common form of the disease overall, but Nordic communities tend to have even higher rates of early infantile onset Krabbe disease, while Southern European countries have higher incidences of late-onset cases. It is difficult to estimate the incidence of adult-onset Krabbe disease, due to discrepancies in classifying cases as late-onset versus adult-onset. Society and culture: Former Buffalo Bills quarterback Jim Kelly has been a leader in gaining recognition and research funding for Krabbe disease following the diagnosis of his son, Hunter, in 1997. Hunter Kelly died of the disease on August 5, 2005, at the age of eight. They created Hunter's Hope, a foundation that seeks to advance newborn screening, research and treatments, and provides support to families of children with leukodystrophy. Family advocacy is a critical part of advancing newborn screening, and many Krabbe families have made significant advocacy progress in their states. As an example, Cove Ellis is a child from Georgia, United States who was diagnosed with the disease in early 2016. Ellis' family, along with her community, has worked to raise awareness of the disease and helped pass "Cove's Law", which provides parents the option to have prenatal screening for the disease, which can potentially save the child from the morbidity and mortality of Krabbe disease. Other animals: Krabbe disease is found in mice and may also be found in cats and dogs, particularly West Highland White Terriers and Cairn Terriers.
**Virophysics** Virophysics: Virophysics is a branch of biophysics in which the theoretical concepts and experimental techniques of physics are applied to study the mechanics and dynamics driving the interactions between virions and cells. Overview: Research in virophysics typically focuses on resolving the physical structure and structural properties of viruses, the dynamics of their assembly and disassembly, their population kinetics over the course of an infection, and the emergence and evolution of various strains. The common aim of these efforts is to establish a set of models (expressions or laws) that quantitatively describe the details of all processes involved in viral infections with reliable predictive power. Having such a quantitative understanding of viruses would not only rationalize the development of strategies to prevent, guide, or control the course of viral infections, but could also be used to exploit virus processes and put viruses to work in areas such as nanoscience, materials, and biotechnology. Overview: Traditionally, in vivo and in vitro experimentation has been the only way to study viral infections. This approach for deriving knowledge based solely on experimental observations relies on common-sense assumptions (e.g., a higher virus count means a fitter virus). These assumptions often go untested due to difficulties controlling individual components of these complex systems without affecting others. The use of mathematical models and computer simulations to describe such systems, however, makes it possible to deconstruct an experimental system into individual components and determine how the pieces combine to create the infection we observe. Overview: Virophysics has large overlaps with other fields. For example, the modelling of infectious disease dynamics is a popular research topic in mathematics, notably in applied mathematics or mathematical biology. While most modelling efforts in mathematics have focused on elucidating the dynamics of spread of infectious diseases at an epidemiological scale (person-to-person), there is also important work being done at the cellular scale (cell-to-cell). Virophysics focuses almost exclusively on the single-cell or multi-cellular scale, utilizing physical models to resolve the temporal and spatial dynamics of viral infection spread within a cell culture (in vitro), an organ (ex vivo or in vivo) or an entire host (in vivo).
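As a concrete illustration of the within-host population-kinetics models described above, the sketch below integrates the standard target-cell-limited (TIV) model, a common starting point in this field. The model choice and all parameter values are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of a within-host viral kinetics model of the kind used in
# virophysics: the standard target-cell-limited (TIV) model. Parameter values
# are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

beta = 3e-5    # infection rate (per virion per day)      -- assumed
delta = 4.0    # death rate of infected cells (per day)   -- assumed
p = 1e-2       # virion production per infected cell      -- assumed
c = 3.0        # virion clearance rate (per day)          -- assumed

def tiv(t, y):
    """Target cells T, infected cells I, free virus V."""
    T, I, V = y
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

y0 = [4e8, 0.0, 10.0]                        # initial T, I, V
sol = solve_ivp(tiv, (0.0, 12.0), y0, t_eval=np.linspace(0, 12, 121))
print("peak viral load:", sol.y[2].max())    # crude summary of the infection curve
```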
**Modem** Modem: A modulator-demodulator or modem is a computer hardware device that converts data from a digital format into a format suitable for an analog transmission medium such as telephone or radio. A modem transmits data by modulating one or more carrier wave signals to encode digital information, while the receiver demodulates the signal to recreate the original digital information. The goal is to produce a signal that can be transmitted easily and decoded reliably. Modems can be used with almost any means of transmitting analog signals, from light-emitting diodes to radio. Modem: Early modems were devices that used audible sounds suitable for transmission over traditional telephone systems and leased lines. These generally operated at 110 or 300 bits per second (bit/s), and the connection between devices was normally manual, using an attached telephone handset. By the 1970s, higher speeds of 1,200 and 2,400 bit/s for asynchronous dial connections, 4,800 bit/s for synchronous leased line connections and 35 kbit/s for synchronous conditioned leased lines were available. By the 1980s, less expensive 1,200 and 2,400 bit/s dialup modems were being released, and modems working on radio and other systems were available. As device sophistication grew rapidly in the late 1990s, telephone-based modems quickly exhausted the available bandwidth, reaching 56 kbit/s. Modem: The rise of public use of the internet during the late 1990s led to demands for much higher performance, leading to the move away from audio-based systems to entirely new encodings on cable television lines and short-range signals in subcarriers on telephone lines. The move to cellular telephones, especially in the late 1990s and the emergence of smartphones in the 2000s led to the development of ever-faster radio-based systems. Today, modems are ubiquitous and largely invisible, included in almost every mobile computing device in one form or another, and generally capable of speeds on the order of tens or hundreds of megabits per second. Speeds: Modems are frequently classified by the maximum amount of data they can send in a given unit of time, usually expressed in bits per second (symbol bit/s, sometimes abbreviated "bps") or rarely in bytes per second (symbol B/s). Modern broadband modem speeds are typically expressed in megabits per second (Mbit/s). Speeds: Historically, modems were often classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU-T V.21 standard used audio frequency-shift keying with two possible frequencies, corresponding to two distinct symbols (or one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU-T V.22 standard, which could transmit and receive four distinct symbols (two bits per symbol), transmitted 1,200 bit/s by sending 600 symbols per second (600 baud) using phase-shift keying. Speeds: Many modems are variable-rate, permitting them to be used over a medium with less than ideal characteristics, such as a telephone line that is of poor quality or is too long. This capability is often adaptive so that a modem can discover the maximum practical transmission rate during the connect phase, or during operation. Overall history: Modems grew out of the need to connect teleprinters over ordinary phone lines instead of the more expensive leased lines which had previously been used for current loop–based teleprinters and automated telegraphs.
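The relationship between baud (symbols per second) and bit rate described in the Speeds section above is simple arithmetic. The short sketch below reproduces the V.21 and V.22 figures quoted in the text; the helper function name is just for illustration.

```python
# Bit rate = symbol rate (baud) * bits carried per symbol,
# where bits per symbol = log2(number of distinct symbols).
import math

def bit_rate(baud: int, num_symbols: int) -> float:
    """Gross bit rate for a modem sending `baud` symbols/s drawn from `num_symbols` symbols."""
    return baud * math.log2(num_symbols)

print(bit_rate(300, 2))   # ITU-T V.21: 300 baud, 2 tones  -> 300 bit/s
print(bit_rate(600, 4))   # ITU-T V.22: 600 baud, 4 phases -> 1200 bit/s
```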
The earliest devices that satisfy the definition of a modem may be the multiplexers used by news wire services in the 1920s. In 1941, the Allies developed a voice encryption system called SIGSALY which used a vocoder to digitize speech, then encrypted the speech with a one-time pad and encoded the digital data as tones using frequency shift keying. This was also a digital modulation technique, making this an early modem. Commercial modems largely did not become available until the late 1950s, when the rapid development of computer technology created demand for a method of connecting computers together over long distances, resulting in the Bell Company and then other businesses producing an increasing number of computer modems for use over both switched and leased telephone lines. Overall history: Later developments would produce modems that operated over cable television lines, power lines, and various radio technologies, as well as modems that achieved much higher speeds over telephone lines. Dial-up: A dial-up modem transmits computer data over an ordinary switched telephone line that has not been designed for data use. It was once a widely known technology, since it was mass-marketed to consumers in many countries for dial-up internet access. In the 1990s, tens of millions of people in the United States used dial-up modems for internet access. Dial-up service has since been largely supplanted by broadband internet, such as DSL. Dial-up: History 1950s Mass production of telephone line modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the United States and Canada. Shortly afterwards in 1959, the technology in the SAGE modems was made available commercially as the Bell 101, which provided 110 bit/s speeds. Bell called this and several other early modems "datasets". Dial-up: 1960s Some early modems were based on touch-tone frequencies, such as Bell 400-style touch-tone modems. The Bell 103A standard was introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz. The 103 modem would eventually become a de facto standard once third-party (non-AT&T) modems reached the market, and throughout the 1970s, independently made modems compatible with the Bell 103 de facto standard were commonplace. Example models included the Novation CAT and the Anderson-Jacobson. A lower-cost option was the Pennywhistle modem, designed to be built using readily available parts. Teletype machines were granted access to remote networks such as the Teletypewriter Exchange using the Bell 103 modem. AT&T also produced reduced-cost units, the originate-only 113D and the answer-only 113B/C modems. Dial-up: 1970s The 201A Data-Phone was a synchronous modem using two-bit-per-symbol phase-shift keying (PSK) encoding, achieving 2,000 bit/s half-duplex over normal phone lines. In this system the two tones for any one side of the connection are sent at similar frequencies as in the 300 bit/s systems, but slightly out of phase. In early 1973, Vadic introduced the VA3400 which performed full-duplex at 1,200 bit/s over a normal phone line. In November 1976, AT&T introduced the 212A modem, similar in design, but using the lower frequency set for transmission.
It was not compatible with the VA3400, but it would operate with a 103A modem at 300 bit/s. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200-bit/s mode, AT&T's 212A mode, and 103A operation. Dial-up: 1980s A significant advance in modems was the Hayes Smartmodem, introduced in 1981. The Smartmodem was an otherwise standard 103A 300 bit/s direct-connect modem, but it introduced a command language which allowed the computer to make control requests, such as commands to dial or answer calls, over the same RS-232 interface used for the data connection. The command set used by this device became a de facto standard, the Hayes command set, which was integrated into devices from many other manufacturers. Dial-up: Automatic dialing was not a new capability – it had been available via separate Automatic Calling Units, and via modems using the X.21 interface – but the Smartmodem made it available in a single device that could be used with even the most minimal implementations of the ubiquitous RS-232 interface, making this capability accessible from virtually any system or language. The introduction of the Smartmodem made communications much simpler and more easily accessed. This provided a growing market for other vendors, who licensed the Hayes patents and competed on price or by adding features. This eventually led to legal action over use of the patented Hayes command language. Dial modems generally remained at 300 and 1,200 bit/s (eventually becoming standards such as V.21 and V.22) into the mid-1980s. Dial-up: Commodore's 1982 VicModem for the VIC-20 was the first modem to be sold for under $100, and the first modem to sell a million units. In 1984, V.22bis was created, a 2,400-bit/s system similar in concept to the 1,200-bit/s Bell 212. This bit rate increase was achieved by defining four or eight distinct symbols, which allowed the encoding of two or three bits per symbol instead of only one. By the late 1980s, many modems could support improved standards like this, and 2,400-bit/s operation was becoming common. Dial-up: Increasing modem speed greatly improved the responsiveness of online systems and made file transfer practical. This led to rapid growth of online services with large file libraries, which in turn gave more reason to own a modem. The rapid update of modems led to a similar rapid increase in BBS use. Dial-up: The introduction of microcomputer systems with internal expansion slots made small internal modems practical. This led to a series of popular modems for the S-100 bus and Apple II computers that could directly dial out, answer incoming calls, and hang up entirely from software, the basic requirements of a bulletin board system (BBS). The seminal CBBS, for instance, was created on an S-100 machine with a Hayes internal modem, and a number of similar systems followed. Dial-up: Echo cancellation became a feature of modems in this period, which improved the bandwidth available to both modems by allowing them to ignore their own reflected signals. Additional improvements were introduced by quadrature amplitude modulation (QAM) encoding, which increased the number of bits per symbol to four through a combination of phase shift and amplitude. Transmitting at 1,200 baud produced the 4,800 bit/s V.27ter standard, and at 2,400 baud the 9,600 bit/s V.32. The carrier frequency was 1,650 Hz in both systems.
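The Hayes command language described above is still the basis for talking to most dial-up and cellular modems. The sketch below shows the general shape of such an exchange using the pyserial library; the serial port name and phone number are placeholders, and the exact response strings depend on the modem.

```python
# Minimal sketch of driving a Hayes-compatible modem with AT commands over a
# serial port. Assumes the pyserial package; "/dev/ttyUSB0" and the phone
# number are placeholders, not values taken from the text.
import time
import serial

def send(port: serial.Serial, cmd: str) -> str:
    """Send one AT command terminated by carriage return and read the reply."""
    port.write((cmd + "\r").encode("ascii"))
    time.sleep(0.5)
    return port.read(port.in_waiting or 64).decode("ascii", errors="replace")

with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as modem:
    print(send(modem, "AT"))           # attention check; a working modem answers "OK"
    print(send(modem, "ATDT5551234"))  # dial using tone dialing (placeholder number)
    # ... data would flow here once a carrier is negotiated ...
    time.sleep(1.2)
    modem.write(b"+++")                # escape back to command mode: guard time, no CR
    time.sleep(1.2)
    print(send(modem, "ATH"))          # hang up
```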
The introduction of these higher-speed systems also led to the development of the digital fax machine during the 1980s. While early fax technology also used modulated signals on a phone line, digital fax used the now-standard digital encoding used by computer modems. This eventually allowed computers to send and receive fax images. 1990s In the early 1990s, V.32 modems operating at 9,600 bit/s were introduced, but were expensive and were only starting to enter the market when V.32bis was standardized, which operated at 14,400 bit/s. Dial-up: Rockwell International's chip division developed a new driver chip set incorporating the V.32bis standard and aggressively priced it. Supra, Inc. arranged a short-term exclusivity arrangement with Rockwell, and developed the SupraFAXModem 14400 based on it. Introduced in January 1992 at $399 (or less), it was half the price of the slower V.32 modems already on the market. This led to a price war, and by the end of the year V.32 was dead, never having been really established, and V.32bis modems were widely available for $250. Dial-up: V.32bis was so successful that the older high-speed standards had little advantage. USRobotics (USR) fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method they referred to as V.32ter, but neither non-standard modem sold well. Consumer interest in these proprietary improvements waned during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to release hardware and introduced modems they referred to as V.Fast. In order to guarantee compatibility with V.34 modems once a standard was ratified (1994), manufacturers used more flexible components, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips. This would allow later firmware updates to conform with the standards once ratified. Dial-up: The ITU standard V.34 represents the culmination of these joint efforts. It employed the most powerful coding techniques available at the time, including channel encoding and shape encoding. From the mere four bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increasing baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s modems. This rate is near the theoretical Shannon limit of a phone line. Dial-up: 56 kbit/s technologies While 56 kbit/s speeds had been available for leased-line modems for some time, they did not become available for dial up modems until the late 1990s. In the late 1990s, technologies to achieve speeds above 33.6 kbit/s began to be introduced. Several approaches were used, but all of them began as solutions to a single fundamental problem with phone lines.
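The claim above that 33.6 kbit/s is near the Shannon limit of a phone line can be checked with the Shannon–Hartley formula. The bandwidth and signal-to-noise figures below are typical textbook assumptions for an analog voice channel, not numbers given in the text.

```python
# Shannon-Hartley capacity: C = B * log2(1 + S/N).
# B ~ 3,100 Hz (300-3,400 Hz voice channel) and SNR ~ 35 dB are assumed,
# typical textbook values for a good analog phone line.
import math

bandwidth_hz = 3100.0
snr_db = 35.0
snr_linear = 10 ** (snr_db / 10)

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"approximate channel capacity: {capacity_bps / 1000:.1f} kbit/s")
# prints roughly 36 kbit/s, which is why 33.6 kbit/s was effectively the ceiling
```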
Dial-up: The first problem was that the process of analog-to-digital conversion is intrinsically lossy, but second, and more importantly, the digital signals used by the telcos were not "linear": they did not encode all signal amplitudes the same way, instead utilizing a nonlinear encoding (μ-law and a-law) meant to favor the nonlinear response of the human ear to voice signals. This made it very difficult to find a 56 kbit/s encoding that could survive the digitizing process. Dial-up: Modem manufacturers discovered that, while the analog to digital conversion could not preserve higher speeds, digital-to-analog conversions could. Because it was possible for an ISP to obtain a direct digital connection to a telco, a digital modem – one that connects directly to a digital telephone network interface, such as T1 or PRI – could send a signal that utilized every bit of bandwidth available in the system. While that signal still had to be converted back to analog at the subscriber end, that conversion would not distort the signal in the same way that the opposite direction did. Dial-up: Early 56k dial-up products The first 56k (56 kbit/s) dial-up option was a proprietary design from USRobotics, which they called "X2" because 56k was twice the speed (×2) of 28k modems. At that time, USRobotics held a 40% share of the retail modem market, while Rockwell International held an 80% share of the modem chipset market. Concerned with being shut out, Rockwell began work on a rival 56k technology. They joined with Lucent and Motorola to develop what they called "K56Flex" or just "Flex". Dial-up: Both technologies reached the market around February 1997; although problems with K56Flex modems were noted in product reviews through July, within six months the two technologies worked equally well, with variations dependent largely on local connection characteristics. The retail price of these early 56k modems was about US$200, compared to $100 for standard 33k modems. Compatible equipment was also required at the Internet service providers (ISPs) end, with costs varying depending on whether their current equipment could be upgraded. About half of all ISPs offered 56k support by October 1997. Consumer sales were relatively low, which USRobotics and Rockwell attributed to conflicting standards. Dial-up: Standardized 56k (V.90/V.92) In February 1998, the International Telecommunication Union (ITU) announced the draft of a new 56 kbit/s standard, V.90, with strong industry support. Incompatible with either existing standard, it was an amalgam of both, but was designed to allow both types of modem to be upgraded to it by a firmware upgrade. The V.90 standard was approved in September 1998 and widely adopted by ISPs and consumers. The ITU-T V.92 standard was approved by ITU in November 2000 and utilized digital PCM technology to increase the upload speed to a maximum of 48 kbit/s. Dial-up: The high upload speed was a tradeoff. A 48 kbit/s upstream rate would reduce the downstream to as low as 40 kbit/s due to echo effects on the line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a plain 33.6 kbit/s analog connection in order to maintain a high digital downstream of 50 kbit/s or higher. V.92 also added two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods of time while they answer a call.
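The μ-law companding mentioned above (standardized in ITU-T G.711) compresses amplitudes logarithmically so that quiet samples keep more resolution than loud ones. A minimal sketch of the continuous μ-law formula, with μ = 255 as used in North American telephony:

```python
# Continuous mu-law companding (ITU-T G.711 uses mu = 255).
# Quiet samples keep more resolution than loud ones after quantization,
# which suits voice but distorts arbitrary modem waveforms.
import math

MU = 255.0

def mulaw_compress(x: float) -> float:
    """Map a sample in [-1, 1] to a companded value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y: float) -> float:
    """Inverse mapping back to the linear sample."""
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

for sample in (0.01, 0.1, 0.5, 1.0):
    c = mulaw_compress(sample)
    print(f"{sample:5.2f} -> companded {c:5.3f} -> restored {mulaw_expand(c):5.3f}")
```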
The second feature is the ability to quickly connect to one's ISP, achieved by remembering the analog and digital characteristics of the telephone line and using this saved information when reconnecting. Dial-up: Evolution of dial-up speeds These values are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines). For a complete list see the companion article list of device bandwidths. A baud is one symbol per second; each symbol may encode one or more data bits. Dial-up: Compression Many dial-up modems implement standards for data compression to achieve higher effective throughput for the same bitrate. V.44 is an example used in conjunction with V.92 to achieve speeds greater than 56k over ordinary phone lines. As telephone-based 56k modems began losing popularity, some Internet service providers such as Netzero/Juno, Netscape, and others started using pre-compression to increase apparent throughput. This server-side compression can operate much more efficiently than the on-the-fly compression performed within modems, because the compression techniques are content-specific (JPEG, text, EXE, etc.). The drawback is a loss in quality, as they use lossy compression which causes images to become pixelated and smeared. ISPs employing this approach often advertised it as "accelerated dial-up". These accelerated downloads are integrated into the Opera and Amazon Silk web browsers, using their own server-side text and image compression requiring all data to pass through their own servers before reaching the user. Dial-up: Methods of attachment Dial-up modems can attach in two different ways: with an acoustic coupler, or with a direct electrical connection. Dial-up: Directly connected modems The case Hush-A-Phone Corp. v. United States, which legalized acoustic couplers, applied only to mechanical connections to a telephone set, not electrical connections to the telephone line. The Carterfone decision of 1968, however, permitted customers to attach devices directly to a telephone line as long as they followed stringent Bell-defined standards for non-interference with the phone network. This opened the door to independent (non-AT&T) manufacture of direct-connect modems, that plugged directly into the phone line rather than via an acoustic coupler. Dial-up: While Carterfone required AT&T to permit connection of devices, AT&T successfully argued that they should be allowed to require the use of a special device to protect their network, placed in between the third-party modem and the line, called a Data Access Arrangement or DAA. The use of DAAs was mandatory from 1969 to 1975 when the new FCC Part 68 rules allowed the use of devices without a Bell-provided DAA, subject to equivalent circuitry being included in the third-party device. Virtually all modems produced after the 1980s are direct-connect. Dial-up: Acoustic couplers While Bell (AT&T) provided modems that attached via direct wire connection to the phone network as early as 1958, their regulations at the time did not permit the direct electrical connection of any non-Bell device to a telephone line. However, the Hush-a-Phone ruling allowed customers to attach any device to a telephone set as long as it did not interfere with its functionality. This allowed third-party (non-Bell) manufacturers to sell modems utilizing an acoustic coupler. With an acoustic coupler, an ordinary telephone handset was placed in a cradle containing a speaker and microphone positioned to match up with those on the handset.
The tones used by the modem were transmitted and received into the handset, which then relayed them to the phone line. Because the modem was not electrically connected, it was incapable of picking up, hanging up or dialing, all of which required direct control of the line. Touch-tone dialing would have been possible, but touch-tone was not universally available at this time. Consequently, the dialing process was executed by the user lifting the handset, dialing, then placing the handset on the coupler. To accelerate this process, a user could purchase a dialer or Automatic Calling Unit. Dial-up: Automatic calling units Early modems could not place or receive calls on their own, but required human intervention for these steps. Dial-up: As early as 1964, Bell provided automatic calling units that connected separately to a second serial port on a host machine and could be commanded to open the line, dial a number, and even ensure the far end had successfully connected before transferring control to the modem. Later on, third-party models would become available, sometimes known simply as dialers, offering features such as the ability to automatically sign in to time-sharing systems. Eventually this capability would be built into modems and no longer require a separate device. Dial-up: Controller-based modems vs. soft modems Prior to the 1990s, modems contained all the electronics and intelligence to convert data in discrete form to an analog (modulated) signal and back again, and to handle the dialing process, as a mix of discrete logic and special-purpose chips. This type of modem is sometimes referred to as controller-based. In 1993, Digicom introduced the Connection 96 Plus, a modem which replaced the discrete and custom components with a general-purpose digital signal processor, which could be reprogrammed to upgrade to newer standards. Subsequently, USRobotics released the Sportster Winmodem, a similarly upgradable DSP-based design. Later in the 1990s, software-based modems became available. These are essentially sound cards, and in fact a common design uses the AC'97 audio codec, which provides multichannel audio to a PC and includes three audio channels for modem signals. As this design trend spread, both terms – soft modem and Winmodem – obtained a negative connotation in non-Windows-based computing circles because the drivers were either unavailable for non-Windows platforms, or were only available as unmaintainable closed-source binaries, a particular problem for Linux users. Dial-up: The audio sent and received on the line by a modem of this type is generated and processed entirely in software, often in a device driver. There is little functional difference from the user's perspective, but this design reduces the cost of a modem by moving most of the processing power into inexpensive software instead of expensive hardware DSPs or discrete components. Dial-up: Soft modems of both types are either internal cards or connect over external buses such as USB. They never utilize RS-232 because they require high-bandwidth channels to the host computers to carry the raw audio signals generated (sent) or analyzed (received) by software. Since the interface is not RS-232, there is no standard for communication with the device directly. Instead, soft modems come with drivers which create an emulated RS-232 port, with which standard modem software (such as an operating system dialer application) can communicate.
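Because that emulated port behaves like an ordinary serial modem interface, generic dialer software can drive it with Hayes-style AT commands. The following is a minimal sketch of that interaction, assuming the pyserial library and a hypothetical device node exposed by the soft-modem driver; it illustrates the command flow only and is not any particular vendor's implementation.

```python
# Minimal sketch (not from the source text): driving a modem -- hardware or a soft
# modem's emulated serial port -- with standard Hayes AT commands via pyserial.
# The port name and phone number below are hypothetical placeholders.
import serial

PORT = "/dev/ttyACM0"   # hypothetical device node created by the soft-modem driver

def send_at(link, command, wait=2.0):
    """Send one AT command and return the modem's reply as text."""
    link.write((command + "\r").encode("ascii"))
    link.timeout = wait
    return link.read(256).decode("ascii", errors="replace")

with serial.Serial(PORT, baudrate=57600) as link:
    print(send_at(link, "AT"))            # basic attention check, expect "OK"
    print(send_at(link, "ATDT5551234"))   # dial a (hypothetical) number using tone dialing
    # ... data exchange would happen here ...
    print(send_at(link, "ATH"))           # hang up
```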
Voice/fax modems "Voice" and "fax" are terms added to describe any dial-up modem that is capable of recording/playing audio or transmitting/receiving faxes. Some modems are capable of all three functions. Voice modems are used for computer telephony integration applications as simple as placing/receiving calls directly through a computer with a headset, and as complex as fully automated robocalling systems. Fax modems can be used for computer-based faxing, in which faxes are sent and received without inbound or outbound faxes ever needing to be printed on paper. This differs from efax, in which faxing occurs over the internet, in some cases involving no phone lines whatsoever. Dial-up: Modem Over IP (Modem Relay) The ITU-T V.150.1 Recommendation defines procedures for the inter-operation of PSTN-to-IP gateways. In a classic example of this setup, each dial-up modem would connect to a modem relay gateway. The gateways are then connected to an IP network (such as the Internet). The analog connection from the modem is terminated at the gateway and the signal is demodulated. The demodulated control signals are transported over the IP network in an RTP packet type defined as State Signaling Events (SSEs). The data from the demodulated signal is sent over the IP network via a transport protocol (also defined as an RTP payload) called Simple Packet Relay Transport (SPRT). Both the SSE and SPRT packet formats are defined in the V.150.1 Recommendation (Annex C and Annex B respectively). The gateway at the remote end that receives the packets uses the information to re-modulate the signal for the modem connected at that end. Dial-up: While the V.150.1 Recommendation is not widely deployed, a pared-down version of the recommendation called "Minimum Essential Requirements (MER) for V.150.1 Gateways" (SCIP-216) is used in Secure Telephony applications. Dial-up: Cloud-based Modems While modems are traditionally hardware devices, fully software-based modems that can be deployed in a cloud environment (such as Microsoft Azure or AWS) do exist. These leverage a Voice-over-IP (VoIP) connection through a SIP trunk: the modulated audio samples are generated and sent over an IP network via RTP, using an uncompressed audio codec such as G.711 μ-law or A-law (a simplified companding sketch appears below). Dial-up: Popularity A 1994 Software Publishers Association study found that although 60% of computers in US households had a modem, only 7% of households went online. A CEA study in 2006 found that dial-up Internet access was declining in the US. In 2000, dial-up Internet connections accounted for 74% of all US residential Internet connections. The United States demographic pattern for dial-up modem users per capita has been more or less mirrored in Canada and Australia for the past 20 years. Dial-up: Dial-up modem use in the US had dropped to 60% by 2003, and stood at 36% in 2006. Voiceband modems were once the most popular means of Internet access in the US, but with the advent of new ways of accessing the Internet, the traditional 56K modem was losing popularity. The dial-up modem is still widely used by customers in rural areas where DSL, cable, wireless broadband, satellite, or fiber optic service is either not available or they are unwilling to pay what the available broadband companies charge. In its 2012 annual report, AOL showed it still collected around $700 million in fees from about three million dial-up users.
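The G.711 μ-law codec mentioned above compands audio so that quiet samples keep more resolution than loud ones. Below is a simplified sketch of that companding curve in Python; it uses the textbook μ-law formula rather than the bit-exact piecewise-linear tables of the real G.711 standard, and is only meant to illustrate the kind of transform a cloud-hosted soft modem would apply to its audio samples before packing them into RTP.

```python
# Illustrative sketch (assumption: simplified math, not the bit-exact G.711 codec):
# mu-law companding of linear samples in [-1.0, 1.0] to 8-bit codes and back.
import math

MU = 255.0

def mulaw_compress(sample):
    """Map a linear sample in [-1.0, 1.0] to an 8-bit mu-law code (0..255)."""
    sample = max(-1.0, min(1.0, sample))
    magnitude = math.log1p(MU * abs(sample)) / math.log1p(MU)
    signed = math.copysign(magnitude, sample)          # companded value, still in [-1, 1]
    return int(round((signed + 1.0) / 2.0 * 255))      # quantize to one byte

def mulaw_expand(code):
    """Invert the companding: 8-bit code back to a linear sample in [-1.0, 1.0]."""
    signed = (code / 255.0) * 2.0 - 1.0
    return math.copysign((math.pow(1.0 + MU, abs(signed)) - 1.0) / MU, signed)

# Quiet samples keep much finer resolution than loud ones after the round trip.
for x in (0.01, 0.5, 0.99):
    code = mulaw_compress(x)
    print(x, code, round(mulaw_expand(code), 4))
```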
Dial-up: TTY/TDD TDD devices are a subset of the teleprinter intended for use by the deaf or hard of hearing, essentially a small teletype with a built-in dial-up modem and acoustic coupler. The first models produced in 1964 utilized FSK modulation much like early computer modems. Leased-line modems: A leased line modem also uses ordinary phone wiring, like dial-up and DSL, but does not use the same network topology. While dial-up uses a normal phone line and connects through the telephone switching system, and DSL uses a normal phone line but connects to equipment at the telco central office, leased lines do not terminate at the telco. Leased lines are pairs of telephone wire that have been connected together at one or more telco central offices so that they form a continuous circuit between two subscriber locations, such as a business' headquarters and a satellite office. They provide no power or dialtone - they are simply a pair of wires connected at two distant locations. Leased-line modems: A dialup modem will not function across this type of line, because it does not provide the power, dialtone and switching that those modems require. However, a modem with leased-line capability can operate over such a line, and in fact can have greater performance because the line is not passing through the telco switching equipment, the signal is not filtered, and therefore greater bandwidth is available. Leased-line modems: Leased-line modems can operate in 2-wire or 4-wire mode. The former uses a single pair of wires and can only transmit in one direction at a time, while the latter uses two pairs of wires and can transmit in both directions simultaneously. When two pairs are available, bandwidth can be as high as 1.5 Mbit/s, a full data T1 circuit. Broadband: The term broadband was previously used to describe communications faster than what was available on voice grade channels. The term broadband gained widespread adoption in the late 1990s to describe internet access technology exceeding the 56 kilobit/s maximum of dialup. There are many broadband technologies, such as various DSL (digital subscriber line) technologies and cable broadband. Broadband: DSL technologies such as ADSL, HDSL, and VDSL use telephone lines (wires that were installed by a telephone company and originally intended for use by a telephone subscriber) but do not utilize most of the rest of the telephone system. Their signals are not sent through ordinary phone exchanges, but are instead received by special equipment (a DSLAM) at the telephone company central office. Broadband: Because the signal does not pass through the telephone exchange, no "dialing" is required, and the bandwidth constraints of an ordinary voice call are not imposed. This allows much higher frequencies, and therefore much faster speeds. ADSL in particular is designed to permit voice calls and data usage over the same line simultaneously. Similarly, cable modems use infrastructure originally intended to carry television signals, and like DSL, typically permit receiving television signals at the same time as broadband internet service. Other broadband modems include FTTx modems, satellite modems, and power line modems. Terminology Different terms are used for broadband modems, because they frequently contain more than just a modulation/demodulation component. Broadband: Because high-speed connections are frequently used by multiple computers at once, many broadband modems do not have direct (e.g. USB) PC connections. 
Rather, they connect over a network such as Ethernet or Wi-Fi. Early broadband modems offered Ethernet handoff, allowing the use of one or more public IP addresses, but no other services such as NAT and DHCP that would allow multiple computers to share one connection. This led to many consumers purchasing separate "broadband routers," placed between the modem and their network, to perform these functions. Eventually, ISPs began providing residential gateways which combined the modem and broadband router into a single package that provided routing, NAT, security features, and even Wi-Fi access in addition to modem functionality, so that subscribers could connect their entire household without purchasing any extra equipment. Even later, these devices were extended to provide "triple play" features such as telephony and television service. Nonetheless, these devices are still often referred to simply as "modems" by service providers and manufacturers. Consequently, the terms "modem", "router", and "gateway" are now used interchangeably in casual speech, but in a technical context "modem" may carry a specific connotation of basic functionality with no routing or other features, while the others describe a device with features such as NAT. Broadband modems may also handle authentication such as PPPoE. While it is often possible to authenticate a broadband connection from a user's PC, as was the case with dial-up internet service, moving this task to the broadband modem allows it to establish and maintain the connection itself, which makes sharing access between PCs easier since each one does not have to authenticate separately. Broadband modems typically remain authenticated to the ISP as long as they are powered on. Radio: Any communication technology sending digital data wirelessly involves a modem. This includes direct broadcast satellite, WiFi, WiMax, mobile phones, GPS, Bluetooth and NFC. Modern telecommunications and data networks also make extensive use of radio modems where long-distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fiber optic service is not economical. Radio: Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information that is modulated onto a carrier frequency to allow many wireless communication links to work simultaneously on different frequencies. Transparent modems operate in a manner similar to their phone line modem cousins. Typically, they were half duplex, meaning that they could not send and receive data at the same time. Transparent modems are typically polled in a round-robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. Transparent modems are most commonly used by utility companies for data collection. Radio: Smart modems come with media access controllers inside, which prevents random data from colliding and resends data that is not correctly received. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short-range modulation scheme that is used on a large scale throughout the world. Mobile broadband Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, 5G etc.) are known as mobile broadband modems (sometimes also called wireless modems).
Wireless modems can be embedded inside a laptop, mobile phone or other device, or be connected externally. External wireless modems include connect cards, USB modems, and cellular routers. Most GSM wireless modems come with an integrated SIM cardholder (e.g., Huawei E220, Sierra 881). Some models are also provided with a microSD memory slot and/or a jack for an additional external antenna (e.g., Huawei E1762, Sierra Compass 885). The CDMA (EVDO) versions do not typically use R-UIM cards, but use an Electronic Serial Number (ESN) instead. Radio: Until the end of April 2011, worldwide shipments of USB modems surpassed embedded 3G and 4G modules by 3:1 because USB modems can be easily discarded. Embedded modems may overtake separate modems as tablet sales grow and the incremental cost of the modems shrinks, so by 2016, the ratio may change to 1:1. Like mobile phones, mobile broadband modems can be SIM locked to a particular network provider. Unlocking a modem is achieved the same way as unlocking a phone, by using an 'unlock code'. Optical modem: A modem that connects to a fiber optic network is known as an optical network terminal (ONT) or optical network unit (ONU). These are commonly used in fiber to the home installations, installed inside or outside a house to convert the optical medium to a copper Ethernet interface, after which a router or gateway is often installed to perform authentication, routing, NAT, and other typical consumer internet functions, in addition to "triple play" features such as telephony and television service. Optical modem: Fiber optic systems can use quadrature amplitude modulation to maximize throughput. 16QAM uses a 16-point constellation to send four bits per symbol, with speeds on the order of 200 or 400 gigabits per second. 64QAM uses a 64-point constellation to send six bits per symbol, with speeds up to 65 terabits per second (the relationship between constellation size, bits per symbol, and bit rate is sketched at the end of this entry). Although this technology has been announced, it may not yet be commonly used. Home networking: Although the name modem is seldom used, some high-speed home networking applications do use modems, such as powerline Ethernet. The G.hn standard, for instance, developed by ITU-T, provides a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines, and coaxial cables). G.hn devices use orthogonal frequency-division multiplexing (OFDM) to modulate a digital signal for transmission over the wire. Home networking: As described above, technologies like Wi-Fi and Bluetooth also use modems to communicate over radio at short distances. Null modem: A null modem cable is a specially wired cable connected between the serial ports of two devices, with the transmit and receive lines reversed. It is used to connect two devices directly without a modem. The same software or hardware typically used with modems (such as Procomm or Minicom) could be used with this type of connection. A null modem adapter is a small device with plugs on both ends which is placed on the end of a normal "straight-through" serial cable to convert it into a null-modem cable. Short-haul modem: A "short-haul modem" is a device that bridges the gap between leased-line and dial-up modems. Like a leased-line modem, they transmit over "bare" lines with no power or telco switching equipment, but are not intended for the same distances that leased lines can achieve.
Ranges of up to several miles are possible but, significantly, short-haul modems can be used for medium distances, greater than the maximum length of a basic serial cable but still relatively short, such as within a single building or campus. This allows a serial connection to be extended for perhaps only several hundred to several thousand feet, a case where obtaining an entire telephone or leased line would be overkill. Short-haul modem: While some short-haul modems do in fact use modulation, low-end devices (for reasons of cost or power consumption) are simple "line drivers" that increase the level of the digital signal but do not modulate it. These are not technically modems, but the same terminology is used for them.
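As a quick illustration of the QAM figures above (and of the earlier note that a baud is one symbol per second), the number of bits carried per symbol is the base-2 logarithm of the constellation size, and the bit rate is the symbol rate multiplied by that figure. The sketch below works through this arithmetic; the symbol rates used are illustrative assumptions, not values taken from the text.

```python
# Back-of-envelope sketch: bits per symbol = log2(constellation size), and
# bit rate = symbol rate (baud) x bits per symbol. Symbol rates are hypothetical.
import math

def bits_per_symbol(constellation_points):
    return int(math.log2(constellation_points))

def bit_rate(symbol_rate_baud, constellation_points):
    return symbol_rate_baud * bits_per_symbol(constellation_points)

print(bits_per_symbol(16))            # 16QAM -> 4 bits per symbol
print(bits_per_symbol(64))            # 64QAM -> 6 bits per symbol
print(bit_rate(8_000, 16))            # e.g. an 8,000-baud line with 16QAM -> 32,000 bit/s
print(bit_rate(50_000_000_000, 16))   # a hypothetical 50 Gbaud optical carrier -> 200 Gbit/s
```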
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Girt** Girt: In architecture or structural engineering, a girt, also known as a sheeting rail, is a horizontal structural member in a framed wall. Girts provide lateral support to the wall panel, primarily to resist wind loads. A comparable element in roof construction is a purlin. Stability in steel building construction: The girt is commonly used as a stabilizing element to the primary structure (e.g. column, post). Wall cladding fastened to the girt, or a discrete bracing system which includes the girt, can provide shear resistance, in the plane of the wall, along the length of the primary member. Since the girts are normally fastened to, or near, the exterior flange of a column, stability braces may be installed at a girt to resist rotation of the unsupported, inner flange of the primary member. The girt system must be competent and adequately stiff to provide the required stabilizing resistance in addition to its role as a wall panel support. Girts are stabilized by (sag) rods/angles/straps and by the wall cladding. Stabilizing rods are discrete brace members to prevent rotation of an unsupported flange of the girt. Sheet metal wall panels are usually considered to provide lateral bracing to the connected (typically exterior) flange along the length of the girt. Under restricted circumstances, sheet metal wall panels are also capable of providing rotational restraint to the girt section. In general: girt supports panel, panel stabilizes girt; column supports girt, girt stabilizes column. The building designer should be knowledgeable in the complexities of this interactive design condition to ensure competent design of the complete structure. An article of clothing: Reference to wearing a girt and having to bleach it often was made by British soldiers during World War II.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kepler-296e** Kepler-296e: Kepler-296e (also known by its Kepler Object of Interest designation KOI-1422.05) is a confirmed Earth-sized exoplanet orbiting within the habitable zone of Kepler-296. The planet was discovered by NASA's Kepler spacecraft using the transit method, in which the dimming effect that a planet causes as it crosses in front of its star is measured. NASA announced the discovery of the exoplanet on 26 February 2014. Confirmed exoplanet: Kepler-296e is a super-Earth with a radius 1.75 times that of Earth. The planet orbits Kepler-296 once every 34.1 days. Habitability: The planet was announced as being located within the habitable zone of Kepler-296. In this region, liquid water could exist on the surface of the planet. As of 2017, with an ESI of 0.85, it is the fifth-most Earth-like planet after Kepler-438b, TRAPPIST-1 d, and two Gliese-designated planets, GJ 3323 b and GJ 273 b, which were both discovered in 2017.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tampon** Tampon: A tampon is a menstrual product designed to absorb blood and vaginal secretions by insertion into the vagina during menstruation. Unlike a pad, it is placed internally, inside of the vaginal canal. Once inserted correctly, a tampon is held in place by the vagina and expands as it soaks up menstrual blood. However, in addition to menstrual blood, the tampon also absorbs the vagina's natural lubrication and bacteria, which can change the normal pH, increasing the risk of infections from the bacterium Staphylococcus aureus, which can lead to toxic shock syndrome (TSS). TSS is a rare but life-threatening infection that requires immediate medical attention. The majority of tampons sold are made of rayon, or a blend of rayon and cotton, along with synthetic fibers. Some tampons are made out of organic cotton. Tampons are available in several absorbency ratings. Brands include (but are not limited to) Kotex, Playtex, Tampax (Always), O.B., Cora, Lola, Sustain, Honest Company, Seventh Generation, Solimo, and Rael Tampons. Tampon: Several countries regulate tampons as medical devices. In the United States, they are considered to be a Class II medical device by the Food and Drug Administration (FDA). They are sometimes used for hemostasis in surgery. Design and packaging: Tampon design varies between companies and across product lines in order to offer a variety of applicators, materials and absorbencies. There are two main categories of tampons based on the method of insertion: digital tampons, inserted by finger, and applicator tampons. Tampon applicators may be made of plastic or cardboard, and are similar in design to a syringe. The applicator consists of two tubes, an "outer", or barrel, and an "inner", or plunger. The outer tube has a smooth surface to aid insertion and sometimes comes with a rounded end that is petaled. Differences exist in the way tampons expand when in use: applicator tampons generally expand axially (increase in length), while digital tampons will expand radially (increase in diameter). Most tampons have a cord or string for removal. The majority of tampons sold are made of rayon, or a blend of rayon and cotton. Organic cotton tampons are made from only 100% cotton. Tampons may also come in scented or unscented varieties. Design and packaging: Absorbency ratings In the US, tampons are available in several absorbency ratings, which are consistent across manufacturers in the U.S. These differ in the amount of cotton in each product and are measured based on the amount of fluid they are able to absorb. The U.S. Food and Drug Administration (FDA) requires manufacturers to label each product with its standardized absorbency range. In Europe, absorbency ratings outside the US may be different. The majority of non-US manufacturers use the absorbency rating and Code of Practice recommended by EDANA (European Disposables and Nonwovens Association). Design and packaging: In the UK In the UK, the Absorbent Hygiene Product Manufacturers Association (AHPMA) has written a Tampon Code of Practice which companies can follow on a voluntary basis. According to this code, UK manufacturers should follow the (European) EDANA code (see above). Design and packaging: Testing A piece of test equipment referred to as a Syngyna (short for synthetic vagina) is usually used to test absorbency.
The machine uses a condom into which the tampon is inserted, and synthetic menstrual fluid is fed into the test chamber. A novel way of testing was developed by feminist medical experts after the toxic shock syndrome (TSS) crisis, and used blood - rather than the industry standard blue saline - as a test material. Design and packaging: Labeling The FDA requires the manufacturer to perform absorbency testing to determine the absorbency rating using the Syngyna method or other methods that are approved by the FDA. The manufacturer is also required to include on the package label the absorbency rating and a comparison to other absorbency ratings in an attempt to help consumers choose the right product and avoid complications of TSS. In addition, the following statement of the association between tampons and TSS is required by the FDA to be on the package label as part of the labeling requirements: "Attention: Tampons are associated with Toxic Shock Syndrome (TSS). TSS is a rare but serious disease that may cause death. Read and save the enclosed information." Such guidelines for package labeling are more lenient when it comes to tampons bought from vending machines. For example, tampons sold in vending machines are not required by the FDA to include labeling such as absorbency ratings or information about TSS. Design and packaging: Costs The average woman may use approximately 11,400 tampons in her lifetime (if using only tampons). Generally, a box of tampons can cost from $6 to $10 USD and has 12 to 40 tampons per box. Thus, women could use around 9 boxes a year, leading to a total cost between $54 and $90 USD a year (around $0.20-$0.40 a tampon); this arithmetic is worked through in a short sketch at the end of this article. Activists use the term "period poverty" for the problem faced by some women who cannot afford these products. Health aspects: Toxic shock syndrome Menstrual toxic shock syndrome (mTSS) is a life-threatening disease most commonly caused by infection of superantigen-producing Staphylococcus aureus. The superantigen toxin secreted in S. aureus infections is TSS Toxin-1, or TSST-1. Incidence ranges from 0.03 to 0.50 cases per 100,000 people, with an overall mortality around 8%. mTSS signs and symptoms include fever (greater than or equal to 38.9 °C), rash, desquamation, hypotension (systolic blood pressure less than 90 mmHg), and multi-system organ involvement with at least three systems, such as gastrointestinal complications (vomiting), central nervous system (CNS) effects (disorientation), and myalgia. Toxic shock syndrome was named by James K. Todd in 1978. Philip M. Tierno Jr., Director of Clinical Microbiology and Immunology at the NYU Langone Medical Center, helped determine that tampons were behind toxic shock syndrome (TSS) cases in the early 1980s. Tierno blames the introduction of higher-absorbency tampons made with rayon in 1978, as well as the relatively recent decision by manufacturers to recommend that tampons can be worn overnight, for the surge in cases of TSS. However, a later meta-analysis found that the material composition of tampons is not directly correlated to the incidence of toxic shock syndrome, whereas the oxygen and carbon dioxide content of menstrual fluid uptake is associated more strongly. In 1982, a liability case called Kehm v. Procter & Gamble took place, in which the family of Patricia Kehm sued Procter & Gamble for her death on September 6, 1982, from TSS, while using Rely brand tampons. It was the first successful lawsuit against the company. Procter & Gamble paid $300,000 in compensatory damages to the Kehm family.
This case contributed to the increase in regulations and safety protocol testing behind current FDA requirements. Some risk factors identified for developing TSS include recent labor and delivery, tampon use, recent staphylococcus infection, recent surgery, and foreign objects inside the body. The FDA suggests the following guidelines for decreasing the risk of contracting TSS when using tampons: choose the lowest absorbency needed for one's flow (absorbency testing methods are approved by the FDA); follow package directions and guidelines for insertion and tampon usage (located on the box's label); change the tampon at least every 6 to 8 hours, or more often if needed; alternate usage between tampons and pads; avoid tampon usage overnight or when sleeping; and increase awareness of the warning signs of Toxic Shock Syndrome and other tampon-associated health risks (and remove the tampon as soon as a risk factor is noticed). The FDA also advises those with a history of TSS not to use tampons and instead turn to other feminine hygiene products to control menstrual flow. Other menstrual hygiene products available include pads, menstrual cups, menstrual discs, and reusable period underwear. Cases of tampon-connected TSS are very rare in the United Kingdom and United States. A controversial study by Tierno found that all-cotton tampons were less likely than rayon tampons to produce the conditions in which TSS can grow. This was done using a direct comparison of 20 brands of tampons, including conventional cotton/rayon tampons and 100% organic cotton tampons. In a series of studies conducted after this initial claim, it was shown that all tampons (regardless of composition) are similar in their effect on TSS and that tampons made with rayon do not have an increased incidence of TSS. Instead, tampons should be selected based on the minimum absorbency rating necessary to absorb the individual's flow. Sea sponges are also marketed as menstrual hygiene products. A 1980 study by the University of Iowa found that commercially sold sea sponges contained sand, grit, and bacteria. Hence, sea sponges could also potentially cause toxic shock syndrome. Studies have shown non-significantly higher mean levels of mercury in tampon users compared to non-tampon users. No evidence showed an association between tampon use and inflammation biomarkers. Health aspects: Other considerations Bleached products According to the Women's Environmental Network research briefing on menstrual products made from wood pulp: The basic ingredient for menstrual pads is wood pulp, which begins life as a brown coloured product. Various ‘purification’ processes can be used to bleach it white. Measurable levels of dioxin have been found near paper pulping mills, where chlorine has been used to bleach the wood pulp. Dioxin is one of the most persistent and toxic chemicals, and can cause reproductive disorders, damage to the immune system and cancer (26). There are no safe levels and it builds up in our fat tissue and in our environment. Health aspects: Marine pollution In the UK, the Marine Conservation Society has researched the prevalence and problem of plastic tampon applicators found on beaches. Disposal and flushing Disposal of tampons, especially flushing (which manufacturers warn against), may lead to clogged drains and waste management problems. Health aspects: Tampon-drug interactions There are multiple cases in which the use of tampons may require medical advice from a healthcare professional.
For example, as part of the National Institutes of Health, the U.S. National Library of Medicine and its branch MedlinePlus advise against using tampons while being treated with any of several medications taken by the vaginal route, such as vaginal suppositories and creams, as tampons may decrease the absorbance of these drugs by the body. Examples of these medications include clindamycin, terconazole, miconazole, and clotrimazole, when used as a vaginal cream or vaginal suppository, as well as butoconazole vaginal cream. Health aspects: Increased risk for infections According to the American Society for Blood and Marrow Transplantation (ASBMT), tampons may be responsible for an increased risk of infection due to the erosions they cause in the tissue of the cervix and vagina, leaving the skin prone to infections. Thus, ASBMT advises hematopoietic stem-cell transplantation recipients against using tampons while undergoing therapy. Other uses: Clinical use Tampons are currently being used and tested to restore and/or maintain the normal microbiota of the vagina to treat bacterial vaginosis. Some of these are available to the public but come with disclaimers. The efficacy of the use of these probiotic tampons has not been established. Other uses: Tampons have also been used in cases of tooth extraction to reduce post-extraction bleeding. Tampons are currently being investigated as a possible means of detecting endometrial cancer. Endometrial cancer does not currently have effective cancer screening methods if an individual is not showing symptoms. Tampons not only absorb menstrual blood, but also vaginal fluids. The vaginal fluids absorbed in the tampons would also contain the cancerous DNA, and possibly contain precancerous material, allowing for earlier detection of endometrial cancer. Clinical trials are currently being conducted to evaluate the use of tampons as a screening method for early detection of endometrial cancer. Environment and waste: Appropriate disposal of used tampons is still lacking in many countries. Because of the lack of menstrual management practices in some countries, many sanitary pads or other menstrual products are disposed of in domestic solid waste or garbage bins, eventually becoming part of the solid waste stream. The issue that underlies the governance or implementation of menstrual waste management is how a country categorizes menstrual waste. This waste could be considered common household waste, hazardous household waste (which would be required to be segregated from routine household waste), biomedical waste given the amount of blood it contains, or plastic waste given the plastic content of many commercial disposable pads (in some cases only the outer casing of the tampon or pad). Ecological impact varies according to disposal method (whether a tampon is flushed down the toilet or placed in a garbage bin - the latter is the recommended option). Factors such as tampon composition will likewise impact sewage treatment plants or waste processing. The average use of tampons in menstruation may add up to approximately 11,400 tampons in someone's lifetime (if they use only tampons rather than other products). Tampons are made of cotton, rayon, polyester, polyethylene, polypropylene, and fiber finishes. Aside from the cotton, rayon and fiber finishes, these materials are not biodegradable. Organic cotton tampons are biodegradable, but must be composted to ensure they break down in a reasonable amount of time.
Rayon was found to be more biodegradable than cotton. Environmentally friendly alternatives to using tampons are the menstrual cup, reusable sanitary pads, menstrual sponges, reusable tampons, and reusable absorbent underwear. The Royal Institute of Technology in Stockholm carried out a life-cycle assessment (LCA) comparison of the environmental impact of tampons and sanitary pads. They found that the main environmental impact of the products was in fact caused by the processing of raw materials, particularly LDPE (low-density polyethylene) – the plastics used in the backing of pads and tampon applicators – and cellulose production. As production of these plastics requires a lot of energy and creates long-lasting waste, the main impact from the life cycle of these products is fossil fuel use, though the waste produced is significant in its own right. The menstrual material was disposed of according to the type of product, and even based on cultural beliefs. This was often done without regard to the location or proper techniques of disposal. In some areas of the world, menstrual waste is disposed of in pit latrines, as burning and burial were difficult due to limited private space. History: Women have used tampons during menstruation for thousands of years. In her book Everything You Must Know About Tampons (1981), Nancy Friedman writes, [T]here is evidence of tampon use throughout history in a multitude of cultures. The oldest printed medical document, Ebers Papyrus, refers to the use of soft papyrus tampons by Egyptian women in the fifteenth century B.C. Roman women used wool tampons. Women in ancient Japan fashioned tampons out of paper, held them in place with a bandage, and changed them 10 to 12 times a day. Traditional Hawaiian women used the furry part of a native fern called hapu'u; and grasses, mosses and other plants are still used by women in parts of Asia and Africa. R. G. Mayne defined a tampon in 1860 as: "a less inelegant term for the plug, whether made up of portions of rag, sponge, or a silk handkerchief, where plugging the vagina is had recourse to in cases of hemorrhage." Earle Haas patented the first modern tampon, Tampax, with the tube-within-a-tube applicator. Gertrude Schulte Tenderich (née Voss) bought the patent rights to her company trademark Tampax and started as a seller, manufacturer, and spokesperson in 1933. Tenderich hired women to manufacture the item and then hired two sales associates to market the product to drugstores in Colorado and Wyoming, and nurses to give public lectures on the benefits of the creation, and was also instrumental in inducing newspapers to run advertisements. History: In 1945, Tampax presented a number of studies to prove the safety of tampons. A 1965 study by the Rock Reproductive Clinic stated that the use of tampons "has no physiological or clinical undesired side effects". During her study of female anatomy, German gynecologist Judith Esser-Mittag developed a digital-style tampon, which was made to be inserted without an applicator. In the late 1940s, Carl Hahn and Heinz Mittag worked on the mass production of this tampon. Hahn sold his company to Johnson & Johnson in 1974. In 1992, Congress found an internal FDA memo about the presence of dioxin, a known carcinogen, in tampons. Dioxin is one of the toxic chemicals produced when wood pulp is bleached with chlorine. Congressional hearings were held and tampon manufacturers assured Congress that the trace levels of dioxin in tampons were well below the EPA level.
The EPA has stated there is no acceptable level of dioxin. Following this, major commercial tampon brands began switching from dioxin-producing chlorine gas bleaching methods to either "elemental chlorine-free" or "totally chlorine-free" bleaching processes. In the United States, the Tampon Safety and Research Act was introduced to Congress in 1997 in an attempt to create transparency between tampon manufacturers and consumers. The bill would mandate the conduct or support of research on the extent to which additives in feminine hygiene products pose any risks to the health of women or to the children of women who use those products during or before the pregnancies involved. Although yet to be passed, the bill has been continually reintroduced, most recently in 2019 as the Robin Danielson Feminine Hygiene Product Safety Act. Data would also be required from manufacturers regarding the presence of dioxins, synthetic fibers, chlorine, and other components (including contaminants and substances used as fragrances, colorants, dyes, and preservatives) in their feminine hygiene products. Society and culture: Tampon tax "Tampon tax" refers to tampons lacking the tax-exempt status that is often in place for other basic-need products. Several political statements have been made in regard to tampon use. In 2000, a 10% goods and services tax (GST) was introduced in Australia. While lubricant, condoms, incontinence pads and numerous medical items were regarded as essential and exempt from the tax, tampons continue to be charged GST. Prior to the introduction of GST, several states also applied a luxury tax to tampons at a higher rate than GST. Specific petitions such as "Axe the Tampon Tax" have been created to oppose this tax, and the tax was removed in 2019. Society and culture: In the UK, tampons are subject to a zero rate of value added tax (VAT), as opposed to the standard rate of 20% applied to the vast majority of products sold in the country. The UK was previously bound by the EU VAT directive, which required a minimum of 5% VAT on sanitary products. Since 1 January 2021, VAT applied to menstrual sanitary products has been 0%. Society and culture: In Canada, the federal government has removed the goods and services tax (GST) and harmonized sales tax (HST) from tampons and other menstrual hygiene products as of July 1, 2015. In the US, access to menstrual products such as pads and tampons, and the taxes added on these products, have also been controversial topics, especially when it comes to people with low income. Laws for exempting such taxes differ vastly from state to state. The American Civil Liberties Union (ACLU) has published a report discussing these laws and listing the different guidelines followed by institutions such as schools, shelters, and prisons when providing menstrual goods. The report by the ACLU also discusses the case of Kimberly Haven, a former prisoner who had a hysterectomy after experiencing toxic shock syndrome (TSS) caused by using tampons handmade from toilet paper in prison. Her testimony supported a Maryland bill intended to increase access to menstrual products for imprisoned women. Society and culture: Etymology Historically, the word "tampon" originated from the medieval French word "tampion", meaning a piece of cloth to stop a hole, a stamp, plug, or stopper. Virginity Tampon use may stretch or break the hymen of individuals who have never been sexually active.
Some cultures regard preservation of the hymen as supposed evidence of virginity, which may discourage some people from using tampons. In popular culture In Stephen King's novel Carrie, the title character is bullied for menstruating and is bombarded with tampons and pads by her peers. Society and culture: In 1985, Tampon Applicator Creative Klubs International (TACKI) was established to develop creative uses for discarded, non-biodegradable, plastic feminine hygiene products, commonly referred to as "beach whistles". TACKI President Jay Critchley launched his corporation in order to develop a global folk art movement and cottage industry, promote awareness of these throwaway objects washed up on beaches worldwide from faulty sewage systems, create the world's largest collection of discarded plastic tampon applicators, and ban their manufacture and sale through legislative action. The project and artwork were carried out during numerous site-specific performances and installations.
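The back-of-envelope figures in the "Design and packaging: Costs" passage above can be reproduced with a few lines of Python. The box prices, boxes per year, and lifetime total come from the text; the number of menstruating years is an assumption added only to show how the per-year usage figure can be reached.

```python
# Worked version of the arithmetic in the "Design and packaging: Costs" passage above.
# From the text: ~11,400 tampons over a lifetime, $6-$10 USD per box, 12-40 tampons per
# box, ~9 boxes a year, ~$54-$90 a year. The 38 menstruating years below is an assumption
# used only for illustration; it is not stated in the text.
LIFETIME_TAMPONS = 11_400
MENSTRUATING_YEARS = 38            # assumption, roughly ages 13 to 51
BOX_PRICE_USD = (6.0, 10.0)        # low and high box prices from the text
BOXES_PER_YEAR = 9                 # from the text

tampons_per_year = LIFETIME_TAMPONS / MENSTRUATING_YEARS
annual_cost = (BOXES_PER_YEAR * BOX_PRICE_USD[0], BOXES_PER_YEAR * BOX_PRICE_USD[1])

print(round(tampons_per_year))     # ~300 tampons per year
print(annual_cost)                 # (54.0, 90.0) -- the $54-$90 range quoted in the text
```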
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gyaku Jūji-jime** Gyaku Jūji-jime: Gyaku Jūji-jime (逆十字絞), or gyakujujijime, is a chokehold in judo. It is one of the twelve constriction techniques of Kodokan Judo in the Shime-waza list. Danzan Ryu includes this technique in the Shimete list under the name Namijujijime. Ura-Juji-Jime is described in the Canon of Judo and demonstrated in The Essence of Judo by Kyuzo Mifune. Gyaku Jūji-jime: The technique is called 'reverse' because the palms of the person applying the choke are facing the person who is being choked. The thumbs are on top, outside of the clothing, and the fingers grab inside, underneath the gi or clothing. The hands are high up each side of the neck. Scissoring the hands applies pressure to the carotid arteries, reducing blood flow and rapidly resulting in loss of consciousness. In judo, this technique is always taught under strict supervision and is similarly closely observed by referees in competition. An example of a contest finished this way is the bronze medal match in the men's 90 kg division at the 2018 World Judo Championships in Baku, in which Eduard Trippel (GER) lost to Axel Clerget (FRA) by Kata-Juji-Jime at 1:55; highlights are available on the IJF Judo channel on YouTube ("Highlights Judo For The World - BAKU WORLD JUDO CHAMPIONSHIPS 2018").
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CDH17** CDH17: Cadherin-17 is a protein that in humans is encoded by the CDH17 gene.This gene is a member of the cadherin superfamily, genes encoding calcium-dependent, membrane-associated glycoproteins. The encoded protein is cadherin-like, consisting of an extracellular region, containing 7 cadherin domains, and a transmembrane region but lacking the conserved cytoplasmic domain. The protein is a component of the gastrointestinal tract and pancreatic ducts, acting as an intestinal proton-dependent peptide transporter in the first step in oral absorption of many medically important peptide-based drugs. The protein may also play a role in the morphological organization of liver and intestine.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NGC 5544** NGC 5544: NGC 5544 is a barred spiral galaxy in the constellation Boötes. It is interacting with spiral galaxy NGC 5545.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Interpretation of Music** The Interpretation of Music: The Interpretation of Music is a book by Thurston Dart. It is described by the Encyclopædia Britannica as "the best direct and concise account of the issues of performance".This book deals with correct performance conventions and procedures relevant to different periods and styles (for example Gregorian intonation, divisions upon parts, French baroque over-dotting, etc.). It covers these various topics in a chronological order, also giving descriptions of period instruments and their uses. It is a book useful for those wishing to compose in a more authentic antiquated style, and for those wishing to make performances more historically "correct".
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alimentary Pharmacology &amp; Therapeutics** Alimentary Pharmacology &amp; Therapeutics: Alimentary Pharmacology & Therapeutics is a bimonthly peer-reviewed medical journal concerned with the effects of drugs on the human gastrointestinal and hepato-biliary systems, particularly with relevance to clinical practice. The journal publishes original papers concerned with all aspects of basic and clinical pharmacology, pharmacokinetics, and the therapeutic use of drugs in the alimentary tract including the liver, gall bladder, and pancreas. Its editors are J. M. Rhodes and C. W. Howden.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DAZN** DAZN: DAZN (pronounced "da zone") is an international over-the-top sports streaming service owned by DAZN Group, which is majority owned by Access Industries. DAZN: DAZN Group began operations in 2016 under the ownership of Perform Group, anchored by its purchase of global media rights to Japan's J.League football. It launched in Austria, Germany, and Switzerland on 10 August 2016, followed by Japan on 23 August 2016. The service expanded to Canada in 2017 (with a focus on streaming rights to the National Football League). After hiring former ESPN president John Skipper, DAZN expanded into the United States in 2018 with a focus on boxing, including major broadcast agreements with promoter Eddie Hearn and Mexican boxer Canelo Álvarez. DAZN: In 2018, DAZN and Perform Group's content business was spun off as DAZN Group, with its sports data business later sold to Vista Equity Partners, who merged it with STATS LLC to form Stats Perform. In 2020, DAZN announced plans to expand worldwide, with a focus on its portfolio of global broadcast rights such as boxing, and original content. History: In July 2016, prior to DAZN's launch, Perform Group announced its acquisition of exclusive worldwide media rights to Japanese J.League football under a 10-year, ¥210 billion (US$2 billion) contract, succeeding the league's ¥5 billion deal with SKY Perfect. Under the new contract, all matches from the three J.League divisions (J1, J2, and J3) would be broadcast by DAZN beginning in 2017. The league described the contract as the largest broadcast rights deal in the history of Japanese sport. DAZN first launched in Austria, Germany, and Switzerland on 10 August 2016, closely followed by Japan on 23 August 2016. In February 2018, DAZN sub-licensed Japanese rights to the B.League, Nippon Professional Baseball, La Liga, and the Premier League from Softbank, after the company announced that it would shut down its Sportsnavi Live service at the end of May. As part of the arrangement, DAZN offered a promotion for former Sportsnavi customers. History: Canadian launch In July 2017, DAZN announced that it would expand into Canada, after having acquired over-the-top streaming rights to the National Football League in Canada, including NFL Game Pass and access to NFL RedZone; as a result of the deal, television providers would no longer sell the out-of-market sports package NFL Sunday Ticket to residential customers. The DAZN deal does not affect the NFL's newly extended linear television rights deal with Bell Media. On 8 August 2017, DAZN reached a deal to sublicense content from beIN Sports Canada, including selected UEFA Champions League and UEFA Europa League matches (themselves sub-licensed from TSN), as well as other international sports rights. The Canadian launch was met with technical issues; DAZN apologized for the "inadequate service" that it delivered, and stated that it was working to rectify the problems. However, users still reported problems, including inconsistent stream qualities, buffering, and latency between the streams and television broadcasts. As a result, DAZN began to distribute NFL Sunday Ticket to television providers in October 2017, as had been the case before.
On 20 November 2017, DAZN acquired Canadian rights to International Basketball Federation (FIBA) events. In February 2018, DAZN acquired Canadian broadcast rights to the 2018 Commonwealth Games (later sub-licensing portions of the coverage to CBC Sports), and subsumed Major League Soccer's digital out-of-market service MLS Live — with live and on-demand streaming of matches featuring U.S. teams (matches with Canadian teams will only be available after a 48-hour delay to protect the league's main rightsholders TSN and TVA Sports). Roku support was also added that month. In March, DAZN reached a syndication deal to carry content from Pac-12 Network on the service in Canada. On 25 May 2018, DAZN announced that it had acquired exclusive Canadian rights to the UEFA Champions League and Europa League, beginning in the 2018–19 season and replacing TSN. In April 2019, DAZN announced that it had acquired Canadian rights to the Premier League, replacing Sportsnet and TSN, under a three-year deal. History: U.S. launch & combat sports On 8 May 2018, DAZN announced that it had hired former ESPN president John Skipper as executive chairman. Skipper stated that he wanted DAZN eventually to compete directly with traditional U.S. cable sports networks (such as ESPN). Two days later, DAZN announced that it would launch in the United States, and that it had reached a major broadcasting deal with Eddie Hearn's Matchroom Sport. Under the deal, DAZN streams 32 cards per-year, including 16 British Sky Sports Box Office cards, as well as 16 held in the United States (described by Hearn as being "twelve massive shows and four absolute monsters"). Hearn claimed that the deal, which would last for at least two years, with an option for a six-year extension (totalling US$1 billion over the life of the contract if realised), was a "groundbreaking deal in the history of boxing". On 26 June 2018, DAZN announced a five-year streaming rights deal with the Viacom-owned mixed martial arts promotion Bellator, which began with Bellator 206 on 29 September 2018, and includes the U.S. and all other regions currently served by DAZN. The rights include seven exclusive events per-year, as well as all events televised by Paramount Network. DAZN officially launched in the U.S. in September 2018, ahead of its first boxing event—Anthony Joshua vs. Alexander Povetkin, on 22 September. Its launch content also included the World Boxing Super Series, as well as the AFC Champions League, the Chilean Primera Division, J-League and other content. DAZN's broadcast team for its U.S. boxing events is led by "Sugar" Ray Leonard and Brian Kenny on play-by-play, with LZ Granderson as ringside reporter, and Michael Buffer as ring announcer. Buffer appeared in a U.S. marketing campaign for the service, contrasting its business model to pay-per-views. In September 2018, DAZN's parent company Perform Group underwent a reorganization, with its sports data business spun into a second company known as Perform Content (which was later sold to Vista Equity Partners and merged with STATS LLC in 2019 to form Stats Perform), and its consumer properties (including DAZN itself, as well as several co-owned sports news websites) retained as DAZN Group. On 17 October 2018, DAZN announced that it had signed a five-year, 11-fight deal with Mexican boxer Canelo Álvarez valued at a minimum of $365 million, beginning with his then-upcoming bout against Rocky Fielding in December for the WBA super middleweight title.
Álvarez was previously aligned with HBO, which had announced that it would discontinue boxing broadcasts. The contract overtook Giancarlo Stanton's $325 million contract with the Miami Marlins as the highest-valued contract with a single athlete in sport known at the time (since overtaken by Mike Trout's 10-year, near-$430 million contract with the Los Angeles Angels in 2019, Patrick Mahomes' 12-year, near-$503 million contract with the Kansas City Chiefs in 2020, and Lionel Messi's 4-year, near-$673 million contract with FC Barcelona in 2017, revealed in 2021). In December 2018, DAZN was estimated to be worth £3 billion: it was described by the Evening Standard as one of the United Kingdom's few tech "unicorns". In November 2018, Major League Baseball announced a three-year content partnership with DAZN, which includes on-demand highlights, and ChangeUp—a live nightly studio program featuring look-ins and analysis. Hosted by former ESPN Baseball Tonight anchor Adnan Virk, it was described by executive producer Logan Swaim as stylistically mixing elements of NFL RedZone and his previous program Good Morning Football, and was considered part of a goal to offer more content relating to mainstream, non-combat sports. Just before the start of the 2020 season, DAZN canceled MLB-related programming due to financial stresses caused by the COVID-19 pandemic. History: European & Asian expansion DAZN launched in Italy in August 2018, with an acquisition of exclusive rights to 114 Serie A matches beginning in the 2018–19 season (with Sky Italia holding rights to 266), and other domestic rights on launch including the European Rugby Champions Cup, Showtime Championship Boxing, UFC programming, and the World Rally Championship, alongside DAZN's global rights portfolio. The following September, DAZN announced that in order to improve the accessibility of its Serie A rights (especially in regions where internet service quality is insufficient for using DAZN), it would begin to offer a subscription-based linear channel on Sky Italia's satellite service, carrying selected content from the service (including its Serie A rights). In January 2019, DAZN acquired the rights to broadcast the 2019 AFC Asian Cup in Canada and the United States, beginning with the quarter-finals. In March 2019, DAZN doubled its U.S. monthly cost, but also introduced a new yearly option at a discount. DAZN launched in Spain in February 2019, becoming its eighth market. The service went live with a roster of exclusive premium sport content including MotoGP, Moto2 and Moto3 (2019–2022), EuroLeague (2019/20–2022/23), EuroCup and the Premier League (2019/20 to 2024/25). Other rights included the FA Cup, EFL Cup, Coppa Italia and Supercoppa Italiana, EFL Championship, UFC, Golden Boy, Matchroom Boxing, and PDC Darts. History: On 8 March 2019, DAZN signed a three-year, six-fight deal with Gennady Golovkin, under which it would broadcast two fights per-year. The contract also includes two cards per-year from Golovkin's GGG Promotions beginning in 2020. The deal began with his June 2019 bout against Canadian boxer Steve Rolls: Golovkin's promoter explained that the choice of a Canadian boxer was intended to help encourage DAZN subscriptions in the country. Golovkin cited the broadcaster's "global vision" as an influence on the decision. In May 2019, former ESPN and Fox Sports executive Jamie Horowitz (who was known for having placed a large focus on debate-driven studio programs during his tenures at both networks) became DAZN's head of content.
Also that month, the service announced an expansion into Brazil as its ninth market (and first in South America), acquiring rights to the Copa Sudamericana and Campeonato Brasileiro Série C, and other international football competitions among other properties. In July 2019, DAZN's then-CEO Simon Denyer told Bloomberg News that the company was interested in pursuing rights to the NFL in the United States to some degree (the NFL Sunday Ticket out-of-market package is currently exclusive to DirecTV). That month, DAZN also reached a syndication deal with Eurosport in Austria, Germany, Italy, and Spain, allowing DAZN subscribers to access live and on-demand sports programming from Eurosport in these regions. In addition, DAZN sub-licensed 45 Bundesliga matches from Eurosport in Germany and Austria over the next two seasons — with 39 exclusive to the service. In October 2019, mobile analytics firm SensorTower listed DAZN as having overtaken Major League Baseball as the highest-grossing sports-related mobile app in the first half of 2019 in terms of worldwide revenue on application storefronts. History: Global launch On 2 March 2020, DAZN announced that it would expand into 200 additional countries worldwide, with an initial focus on giving wider distribution to its boxing and original content portfolio. The launch was originally scheduled for 2 May 2020, but was rescheduled to 24 July, before ultimately launching on 1 December (coinciding with Ryan Garcia vs. Luke Campbell). DAZN EVP of Global Brand Joe Markowski told Finder that the global launch was about "getting a foot in the door", and that further investments (such as domestic broadcast rights) would depend on the service being "taken up in high numbers, and exceeding our expectations. Then [when] we have local content opportunities that make sense for us and our board economically, we're going to really get started." With the COVID-19 pandemic resulting in widespread suspension of international sport, DAZN stated in late March 2020 that it would not pay rightsholders for content that had not been delivered under their contracts. In September 2020, DAZN extended their carriage agreement with Eurosport through August 2023, and added Switzerland to the agreement. In January 2021, the DAZN linear channels were added to Spanish television service Movistar+. That month, former Entain CEO Shay Segev was named the new CEO of DAZN, after having acted alongside founder James Rushton for the previous six months. In March 2021, senior adviser to Access Industries and former Walt Disney Direct-to-Consumer & International executive Kevin A. Mayer became the chairman of DAZN, replacing John Skipper. In June 2021, DAZN announced a five-year agreement with Matchroom Boxing in the UK and Ireland beginning 31 July, ending the promotion's long-term association with Sky Sports. It also announced a four-year global broadcasting deal for the UEFA Women's Champions League (outside of China, the Middle East, and North Africa) under which it will partner with YouTube to simulcast 61 matches during the 2021–22 and 2022–23 seasons (after which YouTube will stream 19 matches per-season, with the remainder exclusive to DAZN). In February 2022, DAZN began a foray into non-fungible tokens (NFTs) known as "DAZN Moments", in partnership with Mixi and the J.League.
In July 2022, DAZN expanded this operation into a global "DAZN Boxing" NFT marketplace, which is based on highlights from DAZN boxing matches. In April 2022, DAZN announced a partnership with Pragmatic Group to operate sports betting services under the DAZN Bet banner. In May 2022, DAZN signed a deal to carry Red Bull TV, including live and on-demand content. DAZN also signed a four-event deal with KSI's Misfits Boxing, carrying cards under the branding "MF & DAZN: X Series": their first PPV was held on 27 August 2022. In June 2022, DAZN announced a global broadcasting deal with British boxer Anthony Joshua, beginning with his 20 August rematch against Oleksandr Usyk in Saudi Arabia; the deal was reported to be valued at £100 million per year, with Joshua also becoming a brand ambassador for DAZN. In July 2022, Segev stated that there were plans to add more interactive features to the platform, such as "watch parties", alternative broadcasts of events, and sports betting integration. In September 2022, after an attempted pursuit of British sports network BT Sport, DAZN announced that it would acquire sports broadcaster Eleven Group, expanding its position in parts of Asia and Europe, and in global sports streaming rights and technologies. The acquisition was finalized on February 16, 2023. In January 2023, DAZN signed a five-year deal with KSI and Misfits Boxing (covering all KSI bouts, and six X Series cards per year, with two on PPV), and a multi-year agreement with All Elite Wrestling (AEW) to carry its programming in 42 Asian and European territories. In February 2023, DAZN announced that it had acquired the global rights to the NFL's Game Pass service outside of the U.S. and China under a 10-year deal beginning in the 2023 NFL season; it will be sold as a standalone subscription service on the DAZN platform. The launch was widely criticized, with many features of the previous NFL Game Pass service missing from DAZN's version. In August 2023, it was announced that DAZN had acquired the US-based women's football streaming platform Ata Football. Programming: Sports rights Noted sports rights held by DAZN include football, American football, archery, baseball, basketball, beach soccer, bowling, chess, combat sports, cricket, cue sports, cycling, darts, esports, field hockey, fishing, five-a-side football, golf, handball, historic football, indoor football, motorsport, multi-sport, netball, padel, professional wrestling, rugby sevens, rugby union, sailing, skateboarding, squash, tennis, triathlon, weightlifting, winter sports, woodchopping, and volleyball. Linear channels: Linear channels on DAZN include Red Bull TV, Unbeaten, NFL Network, DAZN Combat and DAZN Women's Football, which are available globally. Programming: Original content In April 2019, DAZN premiered a new candid-camera-style show, Da Pull Up, hosted by Akin "Ak" Reyes and Barak Bess, and premiered the first episode of 40 Days, a docuseries chronicling the lead-up to Canelo Álvarez's bout against Daniel Jacobs. In July 2019, former Indianapolis Colts punter and WWE personality Pat McAfee signed a content deal with DAZN, which added television simulcasts of his podcast and his new syndicated daily radio show to the service, as well as contributions to shoulder content for DAZN's NFL rights. DAZN and McAfee terminated their broadcast partnership in May 2020. Device support: DAZN supports online streaming on personal computers, as well as mobile apps on Android and iOS, digital media players and smart TVs, and video game consoles.
In September 2019, Comcast reached a deal with DAZN to offer an app for the service on its Xfinity X1 cable boxes, becoming the first U.S. television provider to offer support for the service within its platform.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Manchamanteles** Manchamanteles: Manchamanteles (literally, "tablecloth stainer"), in Mexican cuisine, is a stew of assorted meat, chili peppers, vegetables, and fruits. A typical recipe for manchamanteles contains chicken and/or pork, chorizo, pineapple, apple, banana, chili peppers, almonds, cinnamon, lard and tomatoes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pressure flow hypothesis** Pressure flow hypothesis: The pressure flow hypothesis, also known as the mass flow hypothesis, is the best-supported theory to explain the movement of sap through the phloem. It was proposed by Ernst Münch, a German plant physiologist in 1930. Pressure flow hypothesis: A high concentration of organic substances, particularly sugar, inside cells of the phloem at a source, such as a leaf, creates a diffusion gradient (osmotic gradient) that draws water into the cells from the adjacent xylem. This creates turgor pressure, also known as hydrostatic pressure, in the phloem. Movement of phloem sap occurs by bulk flow (mass flow) from sugar sources to sugar sinks. The movement in phloem is bidirectional, whereas, in xylem cells, it is unidirectional (upward). Because of this multi-directional flow, coupled with the fact that sap cannot move with ease between adjacent sieve-tubes, it is not unusual for sap in adjacent sieve-tubes to be flowing in opposite directions. Sources and sinks: A sugar source is any part of the plant that is producing or releasing sugar. During the plant's growth period, usually during the spring, storage organs such as the roots are sugar sources, and the plant's many growing areas are sugar sinks. After the growth period, when the meristems are dormant, the leaves are sources, and storage organs are sinks. Developing seed-bearing organs (such as fruit) are always sinks. Mechanisms: While movement of water and minerals through the xylem is driven by negative pressures (tension) most of the time, movement through the phloem is driven by positive hydrostatic pressure. This process is termed translocation, and is accomplished by a process called phloem loading and unloading. Cells in a sugar source "load" a sieve-tube element by actively transporting solute molecules into it. This causes water to move into the sieve-tube element by osmosis, creating pressure that pushes the sap down the tube. In sugar sinks, cells actively transport solutes out of the sieve-tube elements, producing the exactly opposite effect. The gradient of sugar from source to sink causes pressure flow through the sieve tube toward the sink. Mechanisms: The mechanisms are as follows: Glucose is produced by photosynthesis in the mesophyll cells of green leaves. Some glucose is used within the cells during respiration. The rest of the glucose is converted into non-reducing sugar i.e. sucrose. It has been shown that the sucrose concentration in sieve tubes in leaves is commonly between 10 and 30 percent whereas it forms only 0.5% solution in the photosynthesis cells. Mechanisms: The sucrose is actively transported to the companion cells of the smallest veins in the leaves. The sucrose diffuses through the plasmodesmata from the companion cells to the sieve tube elements. As a result, concentration of sucrose increases in the sieve tube elements. Water moves by osmosis from the nearby xylem in the same leaf vein. This increases the hydrostatic pressure of the sieve tube elements. Hydrostatic pressure moves the sucrose and other substances through the sieve tube cells, towards a sink. In the storage sinks, such as sugar beet root and sugar cane stem, sucrose is removed into apoplast prior to entering the symplast of the sink. Mechanisms: Water moves out of the sieve tube cells by osmosis, lowering the hydrostatic pressure within them. Thus the pressure gradient is established as a consequence of entry of sugars in sieve elements at the source and removal of sucrose at the sink. 
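To put the sucrose concentrations quoted above into rough pressure terms, the following is a minimal numerical sketch (not part of the original hypothesis text) using the van 't Hoff relation Π = MRT. It treats the solutions as ideal and dilute, which concentrated sieve-tube sap only approximates, so the figures are order-of-magnitude estimates of the osmotic gradient driving pressure flow.

```python
# Rough osmotic-pressure estimates for sucrose solutions via van 't Hoff (Pi = M*R*T).
# Assumptions: ideal dilute solution at 25 degrees C; real sieve-tube sap deviates from ideality.
R = 0.08314        # gas constant, L*bar/(mol*K)
T = 298.0          # temperature, K
M_SUCROSE = 342.3  # molar mass of sucrose, g/mol

def osmotic_pressure_bar(percent_w_v):
    """Osmotic pressure (bar) of a sucrose solution given as % w/v (g per 100 mL)."""
    molarity = (percent_w_v * 10.0) / M_SUCROSE   # % w/v -> g/L -> mol/L
    return molarity * R * T

for label, pct in [("photosynthesising cell (~0.5%)", 0.5),
                   ("sieve tube, low end (10%)", 10.0),
                   ("sieve tube, high end (30%)", 30.0)]:
    print(f"{label:32s} ~{osmotic_pressure_bar(pct):5.1f} bar")
```

On these assumptions the sieve-tube sap sits some 7 to 21 bar above the surrounding cells, which is the kind of osmotic difference that draws water in from the xylem and generates the turgor pressure pushing sap toward the sink.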
The presence of sieve plates greatly increases the resistance along the pathway and results in the generation and maintenance of substantial pressure gradients in the sieve elements between source and sink. Mechanisms: The phloem sugar is removed by the cortex of both stem and root, and is consumed by cellular respiration or else converted into starch. Starch is insoluble and exerts no osmotic effect. Consequently, the osmotic pressure of the contents of phloem decreases. Finally, relatively pure water is left in the phloem and this is thought to leave by osmosis or be drawn back into nearby xylem vessels by suction of the transpiration pull. The pressure flow mechanism depends upon turgor pressure and the difference in osmotic pressure along the direction of flow between the source and the sink. Evidence: There are several pieces of evidence that support the hypothesis. Firstly, there is an exudation of solution from the phloem when the stem is cut or punctured by the stylet of an aphid, a classical experiment demonstrating the translocation function of phloem, indicating that the phloem sap is under pressure. Secondly, concentration gradients of organic solutes have been shown to be present between the sink and the source. Thirdly, when viruses or growth chemicals are applied to a well-illuminated (actively photosynthesising) leaf, they are translocated downwards to the roots. Yet, when applied to shaded leaves, such downward translocation of chemicals does not occur, hence showing that diffusion is not a possible process involved in translocation. Criticisms: Opposition or criticisms against the hypothesis are often voiced. Some argue that mass flow is a passive process while the sieve tube vessels are supported by companion cells; hence, the hypothesis neglects the living nature of phloem. Moreover, it is found that amino acids and sugars (examples of organic solutes) are translocated at different rates, which is contrary to the assumption in the hypothesis that all materials being transported would travel at uniform speed. Bi-directional movement of solutes in the translocation process, as well as the fact that translocation is heavily affected by changes in environmental conditions like temperature and metabolic inhibitors, are two defects of the hypothesis. Criticisms: An objection leveled against the pressure flow mechanism is that it does not explain the phenomenon of bidirectional movement, i.e. movement of different substances in opposite directions at the same time. The phenomenon of bidirectional movement can be demonstrated by applying two different substances at the same time to the phloem of a stem at two different points, and following their longitudinal movement along the stem. If the mechanism of translocation operates according to the pressure flow hypothesis, bidirectional movement in a single sieve tube is not possible. Experiments to demonstrate bidirectional movement in a single sieve tube are technically very difficult to perform. Some experiments indicate that bidirectional movement may occur in a single sieve tube, whereas others do not.
Now they are unable to move back, but can proceed through wider plasmodesmata into the sieve tube element. Other theories: The symplastic phloem loading is confined mostly to plants in tropical rain forests and is seen as more primitive. The actively transported apoplastic phloem loading is viewed as more advanced, as it is found in the later-evolved plants, and particularly in those in temperate and arid conditions. This mechanism may therefore have allowed plants to colonise the cooler locations. Organic molecules such as sugars, amino acids, certain hormones, and even messenger RNAs are transported in the phloem through sieve tube elements.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Forensic arts** Forensic arts: Forensic art is any art used in law enforcement or legal proceedings. Forensic art is used to assist law enforcement with the visual aspects of a case, often using witness descriptions and video footage.It is a highly specialized field that covers a wide range of artistic skills, such as composite drawing, crime scene sketching, image modification and identification, courtroom drawings, demonstrative evidence, and postmortem and facial approximation aids. It is rare for a forensic artist to specialize in more than one of these skills.Many forensic artists do the job as a collateral duty to their "regular" job in law enforcement, such as police officer, crime scene tech, etc. Such forensic artists perform their work while on a fixed salary and are not additionally compensated for artistic duties. There are few full-time forensic artist jobs available. Most full-time artists work in large cities, or in state or federal agencies. "Freelancing" in forensic art is a difficult career path, as ties to law enforcement are a necessary part of the job, and agencies have limited budgets to pay outside contractors. Forensic arts: The skill of facial approximation is closely associated and related to forensic anthropology in that an artist specializes in the reconstruction of the remains of a human body. Generally this discipline focuses on the human face for identification purposes. The forensic artist can create a facial approximation in a number of ways to include 2D (drawings), 3D (sculptures) and other methods using new computer technology. Forensic artists generally can add greater character and make their subjects come back to "life". Types of Forensic Arts and Methods: The following is a quick description of different forensic arts skills and what they involve: Composite Sketching: the main objective is to help investigators generate leads based on physical descriptions. This is usually drawn by hand; an artist who is trained in interviewing victims and witnesses uses the information provided to draw what is described. Composite art produces a single, graphic image that is designed to be a likeness or similarity of the individual. Types of Forensic Arts and Methods: Image Modification: this is used to change and enhance a photograph in order to help an investigator and/or trial attorney. Examples of this include age progression/regression (see below), and the clarifying of images, such as from CCTV footage, to identify an individual. Image Identification: this is the recording a person's distinguishing features for future reference. Investigators can use this tool to identify suspects who attempt to change their appearance to evade capture, as well as in the study of cold cases in which individuals may have changed their appearance since the event. Crime Scene Sketching: the drawing of a crime scene; in the sketch, an investigator includes measurements and dimensions to aid in displaying the layout of the scene. This helps support the information shown in photographs of the scene. Demonstrative Evidence: any visible, physical evidence used in legal proceedings. These are used to demonstrate aspects of the case, reconstruct an event, and illustrate what happened. There are two categories of demonstrative evidence; court displays and investigative aids. Postmortem Drawing: when an artist either looks at a deceased person's photograph or their remains to help identify who the person is and what they looked like prior to their death. 
This helps most in cases where the body is too damaged by an accident or decomposition. Age Progression/Regression: useful for determining a person's appearance before or after a period of time. This is most commonly used in missing persons cases or during cold cases when investigators need an idea of what an individual looked like years prior to or following the investigation. Types of Forensic Arts and Methods: Forensic Sculpture: this is used to create three-dimensional models, usually using the victim's skull. Other features are added, such as fake eyes and wigs, to add realism. This process can also inform the investigators of certain characteristics of the victim, such as age, race, and gender, through a detailed knowledge of the intricacies of skeletal structure and other corresponding features such as dental records. It is later photographed and used like postmortem and composite drawings. However, because forensic sculpture relies heavily on assumptions made by the artist, it is not considered a legally recognized technique for positive identification, and is thus used in an advisory capacity only. Types of Forensic Arts and Methods: Collaboration: forensic artists, anthropologists and other professionals work together to help determine the age, sex, and race of an unidentified skull. Composite Sketching: Composite sketching is arguably the most fundamental example of forensic art. Lois Gibson, the most successful forensic artist, whose work has helped identify more than 750 criminals, does composite drawings of perpetrators using witnesses' descriptions. The first step in making a sketch is to talk to a witness or victim. Interviewing the witness is half the job because they often want to forget the event due to trauma, so the forensic artist must be gentle enough to coax descriptions out of the witness. When drawing, the artist asks for details, such as the hair color and style, eye shape and color, the shape and proportion of the nose and the mouth, and any particular facial expression. The artist usually will have a catalogue of visual aids that show individual parts of a person's face, with the most common being the FBI Facial Identification Catalogue. Next are any hairstyles, tattoos, scars, and clothes from the shoulders up. Clothing is usually remembered more than the face, and sometimes unique accessories like glasses or a bright hoodie can lead to a person's arrest. While the artist asks for some specifics, they tend to leave the drawing rather vague, as more calls and tips come in when the sketch roughly resembles a person than when it is an exact match to the person. Throughout the process, a suggestion about the look of the person being drawn must never be made. Some common drawing mistakes made by beginners are the shading of the nose, giving it depth, and the shape of the eyes. While it started out with a pencil and piece of paper, which some people still use, there is now also the option of using tablets or touchpads with a wireless pen. Around 10–30% of sketches actually lead to a suspect's capture. While composite sketching may be helpful in identifying some people, it cannot be used in court as a piece of evidence, and the same goes for other facial recreations. Methods of Manual 3D Construction: Anthropometry, the study of measurements and proportions of the human body, was a method developed by Alphonse Bertillon. A good start to all manual 3D construction is the shape of the person's actual skull.
It is ideal to work from a copy of the skull, because of the small structural differences between individuals and the fragility of the person's actual skull. Holes are covered to prevent damage to the actual skull, and silicone is gently applied in layers, letting each layer dry so as not to damage the previous one. The Manchester method, also known as the anatomical method, is the most widely accepted form of 3D facial construction. It was started by Richard Neave, who sculpts the facial muscles first, then adds a layer of clay as the skin, while also using tissue depth markers. The American method was invented by Betty Pat Gatliff. The reconstruction is straightforward: the tissue depths are replicated using clay, skipping the facial muscles, and the method is as successful as those of Mikhail Gerasimov and Richard Neave. Now, there is also computerized 3D forensic facial reconstruction. Manual model clay techniques are used within this method, but the computer systems vary, in that some computerized systems use 3D animation software to model the face onto the skull, while other systems use a virtual sculpture system with haptic feedback. Haptic feedback is the ability to feel the surface of the skull during analysis; it also provides important skeletal details for facial reconstruction, such as muscle attachment strength, position of the eye, position of the malar tubercle, etc. Face Model Creation: Face model creation is the process of obtaining 2D or 3D scans or pictures and using them to build a computer model of a person's face. When creating a face model, the forensic artist looks at whether the person is masculine or feminine, as well as their skin tone, age, wrinkles, freckles, the shadow of the beard, and attractiveness. There are three segments they examine, the first being face shape. Some common features being checked are whether the head is round, narrow or heart-shaped, has high or low cheeks, has a high or low chin, possibly a double chin, and the distance between the lips and nose. The second segment is the eyes: whether they are round or point upwards or downwards, eye color, thickness or thinness of the eyebrows, whether the arch is high or low, the distance between the eyeballs, the distance between the eyebrows, and the distance between the eyebrows and the eyes. The third segment is the hair. The 3D models have not been programmed to work with hair; therefore, hair cannot be handled the same way as facial attributes. A profile and a frontal image of the person are used, from which five to 15 key features of each hairstyle are selected. These hairstyle features come from a database used by these professionals. The styles are put into the system, where the algorithm automatically estimates the structure of the face. The system is also able to depict different shades of color.
Becoming a Forensic Artist: To qualify for a professional certification for forensic arts - such as the Forensic Art Certification, which is provided through the International Association for Identification - applicants can focus on one or more of the three available categories of professional designation: composite imaging, facial reconstruction, and image enhancement/age progression. They must: have at least 80 hours of IAI-approved forensic art training; have at least 40 hours of related workshops, lectures, and short program training; have at least two years of experience as a forensic artist; provide at least 30 forensic art examples that include age progressions, composites, and reconstructions; and have a portfolio that demonstrates their forensic art techniques (which must include at least 10 forensic art images that were prepared for law enforcement investigation cases). Sources: Gibson, Lois (2008). Forensic Art Essentials. doi:10.1016/B978-0-12-370898-4.X5001-5. ISBN 978-0-12-370898-4.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Clifford parallel** Clifford parallel: In elliptic geometry, two lines are Clifford parallel or paratactic lines if the perpendicular distance between them is constant from point to point. The concept was first studied by William Kingdon Clifford in elliptic space and appears only in spaces of at least three dimensions. Since parallel lines have the property of equidistance, the term "parallel" was appropriated from Euclidean geometry, although the "lines" of elliptic geometry are geodesic curves and, unlike the lines of Euclidean geometry, are of finite length. Clifford parallel: The algebra of quaternions provides a descriptive geometry of elliptic space in which Clifford parallelism is made explicit. Introduction: The lines through 1 in elliptic space are described by versors with a fixed axis r: $\{e^{ar} : 0 \le a < \pi\}$. For an arbitrary point u in elliptic space, two Clifford parallels to this line pass through u. The right Clifford parallel is $\{u e^{ar} : 0 \le a < \pi\}$, and the left Clifford parallel is $\{e^{ar} u : 0 \le a < \pi\}$. Generalized Clifford parallelism: Clifford's original definition was of curved parallel lines, but the concept generalizes to Clifford parallel objects of more than one dimension. In 4-dimensional Euclidean space Clifford parallel objects of 1, 2, 3 or 4 dimensions are related by isoclinic rotations. Clifford parallelism and isoclinic rotations are closely related aspects of the SO(4) symmetries which characterize the regular 4-polytopes. Clifford surfaces: Rotating a line about another, to which it is Clifford parallel, creates a Clifford surface. The Clifford parallels through points on the surface all lie in the surface. A Clifford surface is thus a ruled surface since every point is on two lines, each contained in the surface. Given two square roots of minus one in the quaternions, written r and s, the Clifford surface through them is given by $\{e^{ar} e^{bs} : 0 \le a, b < \pi\}$. History: Clifford parallels were first described in 1873 by the English mathematician William Kingdon Clifford. In 1900 Guido Fubini wrote his doctoral thesis on Clifford's parallelism in elliptic spaces. In 1931 Heinz Hopf used Clifford parallels to construct the Hopf map. In 2016 Hans Havlicek showed that there is a one-to-one correspondence between Clifford parallelisms and planes external to the Klein quadric.
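Because the quaternion description above is fully explicit, the equidistance property can be checked numerically. The sketch below (the axis r, the point u and the sampling grid are arbitrary illustrative choices, not taken from the text) samples points $e^{ar}$ on the base line and measures the elliptic distance, taken as the arccosine of the absolute inner product with antipodes identified, to the nearest point of the right Clifford parallel $\{u e^{br}\}$; the distance comes out the same for every a.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def versor(r, a):
    """e^{a r} = cos(a) + r*sin(a) for a unit pure-quaternion axis r (3-vector)."""
    return np.array([np.cos(a), *(np.sin(a) * r)])

def elliptic_dist(p, q):
    """Distance between elliptic-space points represented by unit quaternions."""
    return np.arccos(np.clip(abs(np.dot(p, q)), 0.0, 1.0))

r = np.array([0.0, 0.0, 1.0])                                        # axis of the line through 1
u = np.array([np.cos(0.4), 0.6*np.sin(0.4), 0.8*np.sin(0.4), 0.0])   # arbitrary unit quaternion
bs = np.linspace(0.0, np.pi, 4001)
right_parallel = [qmul(u, versor(r, b)) for b in bs]                 # the set {u e^{br}}

for a in np.linspace(0.0, np.pi, 5, endpoint=False):
    p = versor(r, a)                                                 # point e^{ar} on the base line
    d = min(elliptic_dist(p, q) for q in right_parallel)
    print(f"a = {a:4.2f}   distance to right parallel ~ {d:.4f}")
```

With these choices the printed distance is the same (about 0.4000) for every sampled point, which is exactly the constant point-to-point distance that defines Clifford parallelism.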
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tetrode** Tetrode: A tetrode is a vacuum tube (called valve in British English) having four active electrodes. The four electrodes in order from the centre are: a thermionic cathode, first and second grids, and a plate (called anode in British English). There are several varieties of tetrodes, the most common being the screen-grid tube and the beam tetrode. In screen-grid tubes and beam tetrodes, the first grid is the control grid and the second grid is the screen grid. In other tetrodes one of the grids is a control grid, while the other may have a variety of functions. Tetrode: The tetrode was developed in the 1920s by adding an additional grid to the first amplifying vacuum tube, the triode, to correct limitations of the triode. During the period 1913 to 1927, three distinct types of tetrode valves appeared. All had a normal control grid whose function was to act as a primary control for current passing through the tube, but they differed according to the intended function of the other grid. In order of historical appearance these are: the space-charge grid tube, the bi-grid valve, and the screen-grid tube. The last of these appeared in two distinct variants with different areas of application: the screen-grid valve proper, which was used for medium-frequency, small signal amplification, and the beam tetrode which appeared later, and was used for audio or radio-frequency power amplification. The former was quickly superseded by the RF pentode, while the latter was initially developed as an alternative to the pentode as an audio power amplifying device. The beam tetrode was also developed as a high power radio transmitting tube. Tetrode: Tetrodes were widely used in many consumer electronic devices such as radios, televisions, and audio systems until transistors replaced valves in the 1960s and 70s. Beam tetrodes have remained in use until quite recently in power applications such as audio amplifiers and radio transmitters. How it works: The tetrode functions in a similar way to the triode, from which it was developed. A current through the heater or filament heats the cathode, which causes it to emit electrons by thermionic emission. A positive voltage is applied between the plate and cathode, causing a flow of electrons from the cathode to plate through the two grids. A varying voltage applied to the control grid can control this current, causing variations in the plate current. With a resistive or other load in the plate circuit, the varying current will result in a varying voltage at the plate. With proper biasing, this voltage will be an amplified (but inverted) version of the AC voltage applied to the control grid, providing voltage gain. In the tetrode, the function of the other grid varies according to the type of tetrode; this is discussed below. Space charge grid tube: The space charge grid tube was the first type of tetrode to appear. In the course of his research into the action of the audion triode tube invented by Lee de Forest, Irving Langmuir found that the action of the heated thermionic cathode was to create a space charge, or cloud of electrons, around the cathode. This cloud acted as a virtual cathode. With low applied anode voltage, many of the electrons in the space charge returned to the cathode, and did not contribute to the anode current; only those at its outer limit would be affected by the electric field due to the anode, and would be accelerated towards it.
However, if a grid bearing a low positive applied potential (about 10V) were inserted between the cathode and the control grid, the space charge could be made to extend further away from the cathode. This had two advantageous effects, both related to the influence of the electric fields of the other electrodes (anode and control grid) on the electrons of the space charge. First, a significant increase in anode current could be achieved with low anode voltage; the valve could be made to work well with lower applied anode voltage. Second, the transconductance (rate of change of anode current with respect to control grid voltage) of the tube was increased. The latter effect was particularly important since it increased the voltage gain available from the valve. Space-charge valves remained useful devices throughout the valve era, and were used in applications such as car radios operating directly from a 12V supply, where only a low anode voltage was available. The same principle was applied to other types of multi-grid tubes such as pentodes. As an example, the Sylvania 12K5 is described as "a tetrode designed for space-charge operation. It is intended for service as a power amplifier driver where the potentials are obtained directly from a 12V automobile battery." The space-charge grid was operated at +12V, the same as the anode supply voltage. Another important application of the space-charge tetrode was as an electrometer tube for detecting and measuring extremely small currents. For example, the General Electric FP54 was described as a "space-charge grid tube ... designed to have a very high input impedance and a very low grid current. It is designed particularly for amplification of direct currents smaller than about 10⁻⁹ amperes, and has been found capable of measuring currents as small as 5 × 10⁻¹⁸ amperes. It has a current amplification factor of 250,000, and operates with an anode voltage of 12V, and space-charge grid voltage of +4V." The mechanism by which the space-charge grid lowers control-grid current in an electrometer tetrode is that it prevents positive ions originating in the cathode from reaching the control grid. Note that when a space-charge grid is added to a triode, the first grid in the resulting tetrode is the space-charge grid, and the second grid is the control grid. Bi-grid valve: In the bi-grid type of tetrode, both grids are intended to carry electrical signals, so both are control grids. The first example to appear in Britain was the Marconi-Osram FE1, which was designed by H. J. Round, and became available in 1920. The tube was intended to be used in a reflex circuit (for example the single-valve ship receiver Type 91) where the same valve performed the combined functions of RF amplifier, AF amplifier, and diode detector. The RF signal was applied to one control grid, and the AF signal to the other. This type of tetrode was used in many imaginative ways in the period before the appearance of the screen-grid valve revolutionised receiver design. Bi-grid valve: One application is shown in the illustration. This is recognisable as an AM telephony transmitter in which the second grid and the anode form a power oscillator, and the first grid acts as a modulating electrode. The anode current in the valve, and hence the RF output amplitude, is modulated by the voltage on G1, which is derived from a carbon microphone. Bi-grid valve: A tube of this type could also be used as a direct conversion CW (radiotelegraphy) receiver.
Here the valve oscillates as a consequence of coupling between the first grid and the anode, while the second grid is coupled to the antenna. The AF beat frequency is audible in the headphones. The valve acts as a self-oscillating product detector. Bi-grid valve: Another, very similar application of the bi-grid valve was as a self-oscillating frequency mixer in early superhet receivers. One control grid carried the incoming RF signal, while the other was connected into an oscillator circuit which generated the local oscillation within the same valve. Since the anode current of the bi-grid valve was proportional both to the signal on the first grid, and also to the oscillator voltage on the second grid, the required multiplication of the two signals was achieved, and the intermediate frequency signal was selected by a tuned circuit connected to the anode. In each of these applications, the bi-grid tetrode acted as an unbalanced analogue multiplier in which the plate current, in addition to passing both input signals, includes the product of the two signals applied to the grids. The superheterodyne receiver: The principle of the modern superheterodyne (or superhet) receiver (originally named the super-sonic heterodyne receiver, because the intermediate frequency was at an ultrasonic frequency) was invented in France by Lucien Levy in 1917 (p 66), though credit is usually also given to Edwin Armstrong. The original reason for the invention of the superhet was that before the appearance of the screen-grid valve, amplifying valves (then triodes) had difficulty amplifying radio frequencies (i.e. frequencies much above 100 kHz) due to the Miller effect. In the superheterodyne design, rather than amplifying the incoming radio signal, it was first mixed with a constant RF oscillator (the so-called local oscillator) to produce a heterodyne of typically 30 kHz. This intermediate frequency (IF) signal had an envelope identical to that of the incoming signal but a much lower carrier frequency, so it could be efficiently amplified using triodes. When detected, the original modulation of the higher frequency radio signal is obtained. A somewhat complicated technique, it went out of favor when screen-grid tetrodes made tuned radio frequency (TRF) receivers practical. However, the superheterodyne principle resurfaced in the early 1930s when its other advantages, such as greater selectivity, became appreciated, and almost all modern receivers operate on this principle but with a higher IF frequency (sometimes higher than the original RF), with amplifiers (such as the tetrode) having surpassed the triode's limitation in amplifying high (radio) frequency signals. The superheterodyne receiver: The superheterodyne concept could be implemented using a valve as the local oscillator and a separate valve as the mixer which takes the antenna signal and the local oscillator as input signals. But for economy, those two functions could also be combined in a single bi-grid tetrode which would both oscillate and frequency-mix the RF signal from the antenna. In later years this was similarly accomplished by the pentagrid converter tube, a similar two-input amplifying/oscillating valve, but which (like pentode tubes) incorporated a suppressor grid and in this case two screen grids in order to electrostatically isolate the plate and both signal grids from each other. In today's receivers, based on inexpensive semiconductor technology (transistors), there is no cost benefit in combining the two functions in one active device.
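The mixing described in this section is, at bottom, multiplication of two sinusoids, which creates components at the sum and difference frequencies; the difference component is the intermediate frequency. The toy figures below (a 1000 kHz antenna signal and a 1030 kHz local oscillator, hence a 30 kHz IF) are assumptions chosen to match the 30 kHz IF mentioned above, not values from any particular receiver.

```python
import numpy as np

fs = 8_000_000.0                          # sample rate, Hz
t = np.arange(0, 0.002, 1 / fs)           # 2 ms of signal
rf = np.cos(2 * np.pi * 1_000_000 * t)    # incoming antenna signal
lo = np.cos(2 * np.pi * 1_030_000 * t)    # local oscillator
mixed = rf * lo                           # the product term present in the anode current

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
strongest = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(strongest)                          # ~[30000.0, 2030000.0]: difference (IF) and sum
```

A tuned circuit on the anode selects the 30 kHz difference component and rejects the sum, giving the intermediate-frequency signal that the following stages amplify.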
Screen grid valve: The screen grid tube provides much smaller control grid to anode capacitance and much greater amplification factor than a triode. Radio frequency amplifier circuits using triodes were prone to oscillation due to the grid to anode capacitance of the triode. In the screen grid tube, a grid referred to as the screen grid, shield grid or sometimes accelerating grid is inserted between the control grid and the anode. The screen grid provides an electrostatic shield between the control grid and the anode, reducing the capacitance between them to a very small amount. To reduce the influence of the anode's electric field on the cathode space charge and on the control grid, during 1915 - 1916 physicist Walter H. Schottky developed the first tubes having a grid positioned between the anode and the control grid to provide an electrostatic shield. Schottky patented these screen grid tubes in Germany in 1916 and in the U.S. in 1919. These tubes were produced in Germany and known as Siemens-Schottky tubes. In Japan, Hiroshi Ando patented improvements to the construction of the screen grid in 1919. During the latter half of the 1920s, Neal H. Williams and Albert Hull at General Electric, H. J. Round at MOV and Bernard Tellegen at Phillips developed improved screen grid tubes. These improved screen grid tubes were first marketed in 1927.Feedback through the anode to grid capacitance (Miller effect) of the triode could cause oscillation, especially when both anode and grid were connected to tuned resonant circuits as is usual in a radio frequency (RF) amplifier. For frequencies above about 100 kHz, neutralizing circuitry was necessary. A typical triode used for small-signal amplification had a grid to anode capacitance of 8 pF, while the corresponding figure for a typical screen grid valve was 0.025 pF. Neutralizing circuits were not required for a well designed screen grid tube RF amplifier stage.The screen grid is connected to a positive DC voltage and at AC ground as insured by a bypass capacitor to ground. The useful region of operation of the screen grid tube as an amplifier is limited to anode voltages greater than the screen grid voltage. At anode voltages greater than the screen grid voltage some electrons from the cathode will hit the screen grid, producing screen current, but most will pass through the open spaces of the screen and continue to the anode. As the anode voltage approaches and falls below that of the screen grid, screen current will increase as shown in the plate characteristics image. Screen grid valve: An additional advantage of the screen grid became apparent when it was added. The anode current becomes almost completely independent of the anode voltage, as long as the anode voltage is greater than the screen voltage. This corresponds to a very high anode dynamic resistance, thus allowing for a much larger voltage gain when the anode load impedance is large. The anode current is controlled by the control grid and screen grid voltages. Consequently, tetrodes are mainly characterized by their transconductance (change in anode current relative to control grid voltage) whereas triodes are characterized by their amplification factor (mu), their maximum possible voltage gain. At the time of the introduction of screen grid valves, a typical triode used in radio receivers had an anode dynamic resistance of 20 kΩ or less while the corresponding figure for a typical screen grid valve was 500 kΩ. 
A typical triode medium wave RF amplifier stage produced voltage gain of around 14, but screen grid tube RF amplifier stages produced voltage gains of 30 to 60. Screen grid valve: To take full advantage of the very low grid-anode capacitance, the shielding between anode and grid circuits was observed in the construction of the radio. The S625 valve was mounted in a grounded, plane, metal shield aligned to correspond with the position of the internal screen grid. The input, or control-grid circuit was on one side of the shield, while the anode, or output circuit was on the other. In the receiver shown using S23 tubes, each entire stage of the 2-stage rf amplifier, as well as the tuned detector stage, was enclosed in an individual large metallic box for electrostatic shielding. These boxes have been removed in the illustration, but the up-turned edges of the bases of the boxes can be seen. Screen grid valve: Thus screen grid valves permitted better radio frequency amplification in the medium and high frequency ranges in radio equipment. They were commonly used in the design of radio-frequency amplification stage(s) of radio receivers from late 1927 through 1931, then were superseded by the pentode tube. Anode characteristic of screen-grid valves: The reason for the limited applicability of the screen-grid valve, and its rapid replacement by the RF pentode (introduced around 1930) was the peculiar anode characteristic (i.e. variation of anode current with respect to anode voltage) of the former type of tube. Anode characteristic of screen-grid valves: In normal applications, the anode voltage was about 150 V, while that of the screen-grid was about 60 V (Thrower p 183). As the screen grid is positive with respect to the cathode, it collects a certain fraction (perhaps a quarter) of the electrons which would otherwise pass from the grid region to the anode. This causes current to flow in the screen grid circuit. Usually, the screen current due to this cause is small, and of little interest. However, if the anode voltage should be below that of the screen, the screen grid can also collect secondary electrons ejected from the anode by the impact of the energetic primary electrons. Both effects tend to reduce the anode current. If the anode voltage is increased from a low value, with the screen grid at its normal operating voltage (60V, say) the anode current initially increases rapidly because more of those electrons which pass through the screen-grid are collected by the anode rather than passing back to the screen grid. This part of the tetrode anode characteristic resembles the corresponding part of that of a triode or pentode. However, when the anode voltage is increased further, the electrons arriving at the anode have sufficient energy to cause copious secondary emission, and many of these secondary electrons will be captured by the screen, which is at a higher positive voltage than the anode. This causes the anode current to fall rather than increase when the anode voltage is increased. In some cases the anode current can actually become negative (current flows out of the anode); this is possible since each primary electron may produce more than one secondary. Falling positive anode current accompanied by rising anode voltage gives the anode characteristic a region of negative slope, and this corresponds to a negative resistance which can cause instability in certain circuits. 
In a higher range of anode voltage, the anode voltage sufficiently exceeds that of the screen for an increasing proportion of the secondary electrons to be attracted back to the anode, so the anode current increases once more, and the slope of the anode characteristic becomes positive again. In a yet higher range of anode voltages, the anode current becomes substantially constant, since all of the secondary electrons now return to the anode, and the main control of current through the tube is the voltage of the control grid. This is the normal operating mode of the tube. Anode characteristic of screen-grid valves: The anode characteristic of a screen-grid valve is thus quite unlike that of a triode. Where the anode voltage is less than that of the screen grid, there is a distinctive negative resistance characteristic, called the dynatron region or tetrode kink. The approximately constant-current region of low slope at anode voltages greater than the screen grid voltage is also markedly different from that of the triode, and provides the useful region of operation of the screen grid tube as an amplifier. The low slope is highly desirable, since it greatly enhances the voltage gain which the device can produce. Early screen-grid valves had amplification factors (i.e. the product of transconductance and anode slope resistance, Ra) fifty times or more greater than that of comparable triode. The high anode resistance in the normal operating range is a consequence of the electrostatic shielding action of the screen grid, since it prevents the electric field due to the anode from penetrating to the control grid region, where it might otherwise influence the passage of electrons, increasing the electron current when the anode voltage is high, reducing it when low. The negative resistance operating region of the tetrode is exploited in the dynatron oscillator, which is an example of a negative resistance oscillator.(Eastman, p431) Beam tetrode: The beam tetrode eliminates the dynatron region or tetrode kink of the screen grid tube by utilizing partially collimated electron beams to develop a dense low potential space charge region between the screen grid and anode that returns anode secondary emission electrons to the anode. The anode characteristic of the beam tetrode is less rounded at lower anode voltages than the anode characteristic of the power pentode, resulting in greater power output and less third harmonic distortion with the same anode supply voltage. Beam tetrodes are usually used for power amplification, from audio frequency to radio frequency. The beam tetrode was patented in Britain in 1933 by three EMI engineers, Isaac Shoenberg, Cabot Bull and Sidney Rodda. Critical-distance tetrode: The High Vacuum Valve company of London, England (Hivac) introduced a line of power output tetrodes in August 1935 that utilized J. H. Owen Harries' critical distance effect to eliminate the dynatron region of the anode voltage - anode current characteristic. The critical distance tubes utilized space charge return of anode secondary electrons to the anode. Distinctive physical characteristics of the critical distance tetrode were large screen grid to anode distance and elliptical grid structure. The large screen grid to anode distance facilitated formation of the low potential space charge to return anode secondary electrons to the anode when the anode potential was less than that of the screen grid. 
The elliptical grids permitted the control grid support rods to be farther away from the cathode so as to reduce their effect on amplification factor with control grid voltage. At zero and negative control grid voltage, the control grid support rods and control grid formed the electron stream from the cathode into two major regions of space current, 180 degrees apart, directed toward two wide sectors of the anode circumference. These features resulted in somewhat greater output power and lower distortion than a comparable power pentode, due to saturation occurring at lower anode voltage and increased curvature (smaller radius) of the anode voltage - anode current characteristic at low anode voltages. Critical-distance tetrode: A range of tetrodes of this type were introduced, aimed at the domestic receiver market, some having filaments rated for two volts direct current, intended for low-power battery-operated sets; others having indirectly heated cathodes with heaters rated for four volts or higher for mains operation. Output power ratings ranged from 0.5 watts to 11.5 watts. Confusingly, several of these new valves bore the same type number as existing pentodes with almost identical characteristics. Examples include Y220 (0.5W, 2V filament), AC/Y (3W, 4V heater), AC/Q (11.5W, 4V heater).
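To make the earlier screen-grid figures concrete: the voltage gain of a simple common-cathode stage is approximately the transconductance times the parallel combination of the anode resistance and the external load. The sketch below uses an assumed transconductance of 1 mA/V and an assumed 50 kΩ anode load (illustrative values, not datasheet data); with the 20 kΩ and 500 kΩ anode resistances quoted in the screen-grid section it gives gains of roughly 14 and 45, in line with the figures stated there.

```python
def stage_gain(gm_a_per_v, ra_ohms, load_ohms):
    """Small-signal voltage gain of a common-cathode stage: gm * (ra || R_load)."""
    return gm_a_per_v * (ra_ohms * load_ohms) / (ra_ohms + load_ohms)

GM = 1e-3      # assumed transconductance, A/V (1 mA/V)
LOAD = 50e3    # assumed dynamic anode load, ohms

print(f"triode, ra = 20 kohm:        gain ~ {stage_gain(GM, 20e3, LOAD):.1f}")
print(f"screen-grid, ra = 500 kohm:  gain ~ {stage_gain(GM, 500e3, LOAD):.1f}")
```

The point of the comparison is that with a low anode resistance the valve itself limits the achievable gain, whereas the screen-grid valve's high anode resistance lets the external load set the gain almost entirely.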
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Equal-loudness contour** Equal-loudness contour: An equal-loudness contour is a measure of sound pressure level, over the frequency spectrum, for which a listener perceives a constant loudness when presented with pure steady tones. The unit of measurement for loudness levels is the phon and is arrived at by reference to equal-loudness contours. By definition, two sine waves of differing frequencies are said to have equal-loudness level measured in phons if they are perceived as equally loud by the average young person without significant hearing impairment. Equal-loudness contour: The Fletcher–Munson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by Harvey Fletcher and Wilden A. Munson, and reported in a 1933 paper entitled "Loudness, its definition, measurement and calculation" in the Journal of the Acoustical Society of America. Fletcher–Munson curves have been superseded and incorporated into newer standards. The definitive curves are those defined in ISO 226 from the International Organization for Standardization, which are based on a review of modern determinations made in various countries. Equal-loudness contour: Amplifiers often feature a "loudness" button, known technically as loudness compensation, that boosts low and high-frequency components of the sound. These are intended to offset the apparent loudness fall-off at those frequencies, especially at lower volume levels. Boosting these frequencies produces a flatter equal-loudness contour that appears to be louder even at low volume, preventing the perceived sound from being dominated by the mid-frequencies where the ear is most sensitive. Fletcher–Munson curves: The first research on the topic of how the ear hears different frequencies at different levels was conducted by Fletcher and Munson in 1933. Until recently, it was common to see the term Fletcher–Munson used to refer to equal-loudness contours generally, even though a re-determination was carried out by Robinson and Dadson in 1956, which became the basis for an ISO 226 standard. Fletcher–Munson curves: It is now better to use the generic term equal-loudness contours, of which the Fletcher–Munson curves are now a sub-set, and especially since a 2003 survey by ISO redefined the curves in a new standard. Experimental determination: The human auditory system is sensitive to frequencies from about 20 Hz to a maximum of around 20,000 Hz, although the upper hearing limit decreases with age. Within this range, the human ear is most sensitive between 2 and 5 kHz, largely due to the resonance of the ear canal and the transfer function of the ossicles of the middle ear. Experimental determination: Fletcher and Munson first measured equal-loudness contours using headphones (1933). In their study, test subjects listened to pure tones at various frequencies and over 10 dB increments in stimulus intensity. For each frequency and intensity, the listener also listened to a reference tone at 1000 Hz. Fletcher and Munson adjusted the reference tone until the listener perceived that it was the same loudness as the test tone. Loudness, being a psychological quantity, is difficult to measure, so Fletcher and Munson averaged their results over many test subjects to derive reasonable averages. The lowest equal-loudness contour represents the quietest audible tone—the absolute threshold of hearing. The highest contour is the threshold of pain. 
Experimental determination: Churcher and King carried out a second determination in 1937, but their results and Fletcher and Munson's showed considerable discrepancies over parts of the auditory diagram.In 1956 Robinson and Dadson produced a new experimental determination that they believed was more accurate. It became the basis for a standard (ISO 226) that was considered definitive until 2003 when ISO revised the standard on the basis of recent assessments by research groups worldwide. Recent revision aimed at more precise determination – ISO 226:2003: Perceived discrepancies between early and more recent determinations led the International Organization for Standardization (ISO) to revise the standard curves in ISO 226. They did this in response to recommendations in a study coordinated by the Research Institute of Electrical Communication, Tohoku University, Japan. The study produced new curves by combining the results of several studies—by researchers in Japan, Germany, Denmark, UK, and the US. (Japan was the greatest contributor with about 40% of the data.) This has resulted in the recent acceptance of a new set of curves standardized as ISO 226:2003. The report comments on the surprisingly large differences, and the fact that the original Fletcher–Munson contours are in better agreement with recent results than the Robinson–Dadson, which appear to differ by as much as 10–15 dB especially in the low-frequency region, for reasons not explained.According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than did the Fletcher–Munson curves. The report states that it is fortunate that the 40-phon Fletcher–Munson curve on which the A-weighting standard was based turns out to have been in agreement with modern determinations.The report also comments on the large differences apparent in the low-frequency region, which remain unexplained. Possible explanations are: The equipment used was not properly calibrated. Recent revision aimed at more precise determination – ISO 226:2003: The criteria used for judging equal loudness at different frequencies had differed. Subjects were not properly rested for days in advance, or were exposed to loud noise in traveling to the tests which tensed the tensor tympani and stapedius muscles controlling low-frequency mechanical coupling. Side versus frontal presentation: Real-life sounds from a reasonably distant source arrive as planar wavefronts. If the source of sound is directly in front of the listener, then both ears receive equal intensity, but at frequencies above about 1 kHz the sound that enters the ear canal is partially reduced by the head shadow, and also highly dependent on reflection off the pinna (outer ear). Off-centre sounds result in increased head masking at one ear, and subtle changes in the effect of the pinna, especially at the other ear. This combined effect of head-masking and pinna reflection is quantified in a set of curves in three-dimensional space referred to as head-related transfer functions (HRTFs). Frontal presentation is now regarded as preferable when deriving equal-loudness contours, and the latest ISO standard is specifically based on frontal and central presentation. Because no HRTF is involved in normal headphone listening, equal-loudness curves derived using headphones are valid only for the special case of what is called side-presentation, which is not how we normally hear. 
The Robinson–Dadson determination used loudspeakers, and for a long time the difference from the Fletcher–Munson curves was explained partly on the basis that the latter used headphones. However, the ISO report actually lists the latter as using compensated headphones, though it doesn't make clear how Robinson–Dadson achieved compensation. Headphones versus loudspeaker testing: Good headphones, well sealed to the ear, provide a flat low-frequency pressure response to the ear canal, with low distortion even at high intensities. At low frequencies, the ear is purely pressure-sensitive, and the cavity formed between headphones and ear is too small to introduce modifying resonances. Headphone testing is, therefore, a good way to derive equal-loudness contours below about 500 Hz, though reservations have been expressed about the validity of headphone measurements when determining the actual threshold of hearing, based on the observation that closing off the ear canal produces increased sensitivity to the sound of blood flow within the ear, which the brain appears to mask in normal listening conditions. At high frequencies, headphone measurement becomes unreliable, and the various resonances of pinnae (outer ears) and ear canals are severely affected by proximity to the headphone cavity. Headphones versus loudspeaker testing: With speakers, the opposite is true. A flat low-frequency response is hard to obtain—except in free space high above ground, or in a very large and anechoic chamber that is free from reflections down to 20 Hz. Until recently, it was not possible to achieve high levels at frequencies down to 20 Hz without high levels of harmonic distortion. Even today, the best speakers are likely to generate around 1 to 3% of total harmonic distortion, corresponding to 30 to 40 dB below fundamental. This is not good enough, given the steep rise in loudness (rising to as much as 24 dB per octave) with frequency revealed by the equal-loudness curves below about 100 Hz. A good experimenter must ensure that trial subjects really hear the fundamental and not harmonics—especially the third harmonic, which is especially strong as a speaker cone's travel becomes limited as its suspension reaches the limit of compliance. A possible way around the problem is to use acoustic filtering, such as by resonant cavity, in the speaker setup. A flat free-field high-frequency response up to 20 kHz, on the other hand, is comparatively easy to achieve with modern speakers on-axis. These effects must be considered when comparing results of various attempts to measure equal-loudness contours. Relevance to sound level and noise measurements: The A-weighting curve—in widespread use for noise measurement—is said to have been based on the 40-phon Fletcher–Munson curve. However, research in the 1960s demonstrated that determinations of equal-loudness made using pure tones are not directly relevant to our perception of noise. This is because the cochlea in our inner ear analyzes sounds in terms of spectral content, each "hair-cell" responding to a narrow band of frequencies known as a critical band. The high-frequency bands are wider in absolute terms than the low-frequency bands, and therefore "collect" proportionately more power from a noise source. However, when more than one critical band is stimulated, the signals to the brain add the various bands to produce the impressions of loudness. 
For these reasons, equal-loudness curves derived using noise bands show an upward tilt above 1 kHz and a downward tilt below 1 kHz when compared to the curves derived using pure tones. Relevance to sound level and noise measurements: Various weighting curves were derived in the 1960s, in particular as part of the DIN 4550 standard for audio quality measurement, which differed from the A-weighting curve, showing more of a peak around 6 kHz. These gave a more meaningful subjective measure of noise on audio equipment, especially on the newly invented compact cassette tape recorders with Dolby noise reduction, which were characterized by a noise spectrum dominated by the higher frequencies. Relevance to sound level and noise measurements: BBC Research conducted listening trials in an attempt to find the best weighting curve and rectifier combination for use when measuring noise in broadcast equipment, examining the various new weighting curves in the context of noise rather than tones, confirming that they were much more valid than A-weighting when attempting to measure the subjective loudness of noise. This work also investigated the response of human hearing to tone-bursts, clicks, pink noise and a variety of other sounds that, because of their brief impulsive nature, do not give the ear and brain sufficient time to respond. The results were reported in BBC Research Report EL-17 1968/8 entitled The Assessment of Noise in Audio Frequency Circuits. Relevance to sound level and noise measurements: The ITU-R 468 noise weighting curve, originally proposed in CCIR recommendation 468 but later adopted by numerous standards bodies (IEC, BSI, JIS, ITU), was based on this research, and incorporates a special quasi-peak detector to account for our reduced sensitivity to short bursts and clicks. It is widely used by broadcasters and audio professionals when they measure noise on broadcast paths and audio equipment, so they can subjectively compare equipment types with different noise spectra and characteristics.
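The A-weighting curve referred to throughout this section has a simple closed form, so a short numerical sketch may be useful. The pole frequencies below (20.6 Hz, 107.7 Hz, 737.9 Hz and 12194 Hz) and the +2.00 dB normalisation are the values commonly quoted for IEC 61672 A-weighting; they are quoted from memory here rather than taken from the text above, so treat the sketch as illustrative.

```python
import math

def a_weighting_db(f_hz: float) -> float:
    """Approximate A-weighting gain in dB at frequency f_hz.

    Uses the analogue-prototype magnitude response commonly quoted for
    IEC 61672 A-weighting; the +2.00 dB term normalises the curve to
    0 dB at 1 kHz.
    """
    f2 = f_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00

if __name__ == "__main__":
    # Print the weighting at standard octave-band centre frequencies.
    for f in (31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000):
        print(f"{f:7.1f} Hz  {a_weighting_db(f):+6.1f} dB")
```

The strong attenuation this gives at low frequencies is exactly why A-weighted readings understate the audibility of low-frequency noise relative to the noise-band curves discussed above.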
**Spin wave** Spin wave: In condensed matter physics, a spin wave is a propagating disturbance in the ordering of a magnetic material. These low-lying collective excitations occur in magnetic lattices with continuous symmetry. From the equivalent quasiparticle point of view, spin waves are known as magnons, which are bosonic modes of the spin lattice that correspond roughly to the phonon excitations of the nuclear lattice. As temperature is increased, the thermal excitation of spin waves reduces a ferromagnet's spontaneous magnetization. The energies of spin waves are typically only μeV, in keeping with typical Curie points at room temperature and below. Theory: The simplest way of understanding spin waves is to consider the Hamiltonian $\mathcal{H}$ for the Heisenberg ferromagnet: $\mathcal{H} = -\tfrac{1}{2} J \sum_{i,j} \mathbf{S}_i \cdot \mathbf{S}_j - g \mu_B \sum_i \mathbf{H} \cdot \mathbf{S}_i$, where $J$ is the exchange energy, the operators $\mathbf{S}$ represent the spins at Bravais lattice points, $g$ is the Landé g-factor, $\mu_B$ is the Bohr magneton and $\mathbf{H}$ is the internal field which includes the external field plus any "molecular" field. Note that in the classical continuum case and in 1 + 1 dimensions the Heisenberg ferromagnet equation has the form $\mathbf{S}_t = \mathbf{S} \times \mathbf{S}_{xx}$. Theory: In 1 + 1, 2 + 1 and 3 + 1 dimensions this equation admits several integrable and non-integrable extensions like the Landau-Lifshitz equation, the Ishimori equation and so on. For a ferromagnet $J > 0$ and the ground state of the Hamiltonian $|0\rangle$ is that in which all spins are aligned parallel with the field $\mathbf{H}$. That $|0\rangle$ is an eigenstate of $\mathcal{H}$ can be verified by rewriting it in terms of the spin-raising and spin-lowering operators given by $S^{\pm} = S^x \pm i S^y$, resulting in $\mathcal{H} = -\tfrac{1}{2} J \sum_{i,j} S_i^z S_j^z - g \mu_B H \sum_i S_i^z - \tfrac{1}{4} J \sum_{i,j} \left( S_i^+ S_j^- + S_i^- S_j^+ \right)$, where $z$ has been taken as the direction of the magnetic field. The spin-lowering operator $S^-$ annihilates the state with minimum projection of spin along the z-axis, while the spin-raising operator $S^+$ annihilates the ground state with maximum spin projection along the z-axis. Since $S_i^z |0\rangle = s |0\rangle$ for the maximally aligned state, we find $\mathcal{H} |0\rangle = \left( -J s^2 - g \mu_B H s \right) N |0\rangle$, where $N$ is the total number of Bravais lattice sites. The proposition that the ground state is an eigenstate of the Hamiltonian is confirmed. Theory: One might guess that the first excited state of the Hamiltonian has one randomly selected spin at position $i$ rotated so that $S_i^z |1\rangle = (s-1) |1\rangle$, but in fact this arrangement of spins is not an eigenstate. The reason is that such a state is transformed by the spin raising and lowering operators. The operator $S_i^+$ will increase the z-projection of the spin at position $i$ back to its low-energy orientation, but the operator $S_j^-$ will lower the z-projection of the spin at position $j$. The combined effect of the two operators is therefore to propagate the rotated spin to a new position, which is a hint that the correct eigenstate is a spin wave, namely a superposition of states with one reduced spin. The exchange energy penalty associated with changing the orientation of one spin is reduced by spreading the disturbance over a long wavelength. The degree of misorientation of any two near-neighbor spins is thereby minimized. From this explanation one can see why the Ising model magnet with discrete symmetry has no spin waves: the notion of spreading a disturbance in the spin lattice over a long wavelength makes no sense when spins have only two possible orientations.
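The hint that the true eigenstate is a delocalized spin flip can be made quantitative with the standard textbook one-magnon calculation. The text above does not spell this out, so the normalisation and the nearest-neighbour geometry in the sketch below are assumptions made for illustration.

```latex
% Standard one-magnon calculation (textbook sketch, not reproduced from the text above).
% Form a plane-wave superposition of single spin flips over the N lattice sites:
\[
  |\mathbf{k}\rangle \;=\; \frac{1}{\sqrt{2sN}} \sum_{j} e^{\,i\mathbf{k}\cdot\mathbf{r}_j}\, S_j^{-}\,|0\rangle .
\]
% Acting with the Hamiltonian above (nearest-neighbour exchange J, neighbour vectors \delta)
% shows that |k> is an exact eigenstate with excitation energy
\[
  \varepsilon(\mathbf{k}) \;=\; g\mu_B H \;+\; J s \sum_{\boldsymbol{\delta}}
  \bigl(1-\cos \mathbf{k}\cdot\boldsymbol{\delta}\bigr)
  \;\approx\; g\mu_B H + J s a^2 k^2 \qquad (ka \ll 1,\ \text{simple cubic lattice of spacing } a),
\]
% i.e. the parabolic magnon dispersion quoted later in the article, gapless as k -> 0
% when H = 0, with spin stiffness A proportional to J s a^2.
```

Because the excitation energy vanishes as the wavelength grows, arbitrarily low-energy spin-wave states exist, which is the point taken up next.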
The existence of low-energy excitations is related to the fact that in the absence of an external field, the spin system has an infinite number of degenerate ground states with infinitesimally different spin orientations. The existence of these ground states can be seen from the fact that the state $|0\rangle$ does not have the full rotational symmetry of the Hamiltonian $\mathcal{H}$, a phenomenon which is called spontaneous symmetry breaking. Theory: Magnetization In this model the magnetization is $M = \frac{N g \mu_B s}{V}$, where $V$ is the volume. The propagation of spin waves is described by the Landau-Lifshitz equation of motion: $\frac{d\mathbf{M}}{dt} = -\gamma\, \mathbf{M} \times \mathbf{H} - \frac{\lambda}{M^2}\, \mathbf{M} \times (\mathbf{M} \times \mathbf{H})$, where $\gamma$ is the gyromagnetic ratio and $\lambda$ is the damping constant. The cross-products in this forbidding-looking equation show that the propagation of spin waves is governed by the torques generated by internal and external fields. (An equivalent form is the Landau-Lifshitz-Gilbert equation, which replaces the final term by a more "simple-looking" equivalent one.) The first term on the right hand side of the equation describes the precession of the magnetization under the influence of the applied field, while the above-mentioned final term describes how the magnetization vector "spirals in" towards the field direction as time progresses. In metals the damping forces described by the constant $\lambda$ are in many cases dominated by the eddy currents. Theory: One important difference between phonons and magnons lies in their dispersion relations. The dispersion relation for phonons is to first order linear in wavevector $k$, namely $\omega = ck$, where $\omega$ is frequency, and $c$ is the velocity of sound. Magnons have a parabolic dispersion relation: $\omega = A k^2$, where the parameter $A$ represents a "spin stiffness." The $k^2$ form is the third term of a Taylor expansion of a cosine term in the energy expression originating from the $\mathbf{S}_i \cdot \mathbf{S}_j$ dot product. The underlying reason for the difference in dispersion relation is that the order parameter (magnetization) for the ground-state in ferromagnets violates time-reversal symmetry. Two adjacent spins in a solid with lattice constant $a$ that participate in a mode with wavevector $k$ have an angle between them equal to $ka$. Experimental observation: Spin waves are observed through four experimental methods: inelastic neutron scattering, inelastic light scattering (Brillouin scattering, Raman scattering and inelastic X-ray scattering), inelastic electron scattering (spin-resolved electron energy loss spectroscopy), and spin-wave resonance (ferromagnetic resonance). In the first method the energy loss of a beam of neutrons that excite a magnon is measured, typically as a function of scattering vector (or equivalently momentum transfer), temperature and external magnetic field. Inelastic neutron scattering measurements can determine the dispersion curve for magnons just as they can for phonons. Important inelastic neutron scattering facilities are present at the ISIS neutron source in Oxfordshire, UK, the Institut Laue-Langevin in Grenoble, France, the High Flux Isotope Reactor at Oak Ridge National Laboratory in Tennessee, USA, and at the National Institute of Standards and Technology in Maryland, USA. Brillouin scattering similarly measures the energy loss of photons (usually at a convenient visible wavelength) reflected from or transmitted through a magnetic material. Brillouin spectroscopy is similar to the more widely known Raman scattering, but probes a lower energy and has a superior energy resolution in order to be able to detect the meV energy of magnons.
Ferromagnetic (or antiferromagnetic) resonance instead measures the absorption of microwaves, incident on a magnetic material, by spin waves, typically as a function of angle, temperature and applied field. Ferromagnetic resonance is a convenient laboratory method for determining the effect of magnetocrystalline anisotropy on the dispersion of spin waves. One group at the Max Planck Institute of Microstructure Physics in Halle, Germany proved that by using spin polarized electron energy loss spectroscopy (SPEELS), very high energy surface magnons can be excited. This technique allows one to probe the dispersion of magnons in the ultrathin ferromagnetic films. The first experiment was performed for a 5 ML Fe film. With momentum resolution, the magnon dispersion was explored for an 8 ML fcc Co film on Cu(001) and an 8 ML hcp Co on W(110), respectively. The maximum magnon energy at the border of the surface Brillouin zone was 240 meV. Practical significance: When magnetoelectronic devices are operated at high frequencies, the generation of spin waves can be an important energy loss mechanism. Spin wave generation limits the linewidths and therefore the quality factors Q of ferrite components used in microwave devices. The reciprocal of the lowest frequency of the characteristic spin waves of a magnetic material gives a time scale for the switching of a device based on that material.
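Returning to the Landau–Lifshitz equation quoted in the theory section, a minimal numerical illustration may help; the field, damping and time-step values below are arbitrary dimensionless choices, so this is a sketch rather than a production micromagnetics code. It shows the two behaviours described there: precession of the magnetization about the field, and the damped "spiralling in" towards the field direction.

```python
import numpy as np

def llg_step(m, h, gamma, lam, dt):
    """One explicit Euler step of the Landau-Lifshitz equation
    dm/dt = -gamma m x h - (lam/|m|^2) m x (m x h)."""
    mxh = np.cross(m, h)
    dmdt = -gamma * mxh - (lam / np.dot(m, m)) * np.cross(m, mxh)
    return m + dt * dmdt

# Illustrative, dimensionless parameters (assumed values, not material data).
gamma, lam, dt = 1.0, 0.05, 1e-3
h = np.array([0.0, 0.0, 1.0])          # field along z
m = np.array([1.0, 0.0, 0.2])          # magnetization initially mostly in-plane
m /= np.linalg.norm(m)

for _ in range(100_000):
    m = llg_step(m, h, gamma, lam, dt)

# The precession term alone would keep m circling the field forever;
# the damping term pulls the direction of m towards the field over time.
print("final magnetization direction:", m / np.linalg.norm(m))
```

With the damping constant set to zero the vector simply precesses, which is the undamped spin-wave limit; increasing it makes the "spiral in" faster, mirroring the role of eddy-current damping in metals mentioned above.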
**Horn function** Horn function: In the theory of special functions in mathematics, the Horn functions (named for Jakob Horn) are the 34 distinct convergent hypergeometric series of order two (i.e. having two independent variables), enumerated by Horn (1931) (corrected by Borngässer (1933)). They are listed in (Erdélyi et al. 1953, section 5.7.1). B. C. Carlson revealed a problem with the Horn function classification scheme. Horn function: The total 34 Horn functions can be further categorised into 14 complete hypergeometric functions and 20 confluent hypergeometric functions. The complete functions, with their domain of convergence, are:
$F_1(\alpha;\beta,\beta';\gamma;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m+n}(\beta)_m(\beta')_n}{(\gamma)_{m+n}} \frac{z^m w^n}{m!\,n!}; \quad |z|<1 \wedge |w|<1$
$F_2(\alpha;\beta,\beta';\gamma,\gamma';z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m+n}(\beta)_m(\beta')_n}{(\gamma)_m(\gamma')_n} \frac{z^m w^n}{m!\,n!}; \quad |z|+|w|<1$
$F_3(\alpha,\alpha';\beta,\beta';\gamma;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_m(\alpha')_n(\beta)_m(\beta')_n}{(\gamma)_{m+n}} \frac{z^m w^n}{m!\,n!}; \quad |z|<1 \wedge |w|<1$
$F_4(\alpha;\beta;\gamma,\gamma';z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m+n}(\beta)_{m+n}}{(\gamma)_m(\gamma')_n} \frac{z^m w^n}{m!\,n!}; \quad \sqrt{|z|}+\sqrt{|w|}<1$
$G_1(\alpha;\beta,\beta';z,w) \equiv \sum_{m,n=0}^{\infty} (\alpha)_{m+n}(\beta)_{n-m}(\beta')_{m-n} \frac{z^m w^n}{m!\,n!}; \quad |z|+|w|<1$
$G_2(\alpha,\alpha';\beta,\beta';z,w) \equiv \sum_{m,n=0}^{\infty} (\alpha)_m(\alpha')_n(\beta)_{n-m}(\beta')_{m-n} \frac{z^m w^n}{m!\,n!}; \quad |z|<1 \wedge |w|<1$
$G_3(\alpha,\alpha';z,w) \equiv \sum_{m,n=0}^{\infty} (\alpha)_{2n-m}(\alpha')_{2m-n} \frac{z^m w^n}{m!\,n!}$
$H_1(\alpha;\beta;\gamma;\delta;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}(\beta)_{m+n}(\gamma)_n}{(\delta)_m} \frac{z^m w^n}{m!\,n!}; \quad 4|z||w|+2|w|-|w|^2<1$
$H_2(\alpha;\beta;\gamma;\delta;\epsilon;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}(\beta)_m(\gamma)_n(\delta)_n}{(\epsilon)_m} \frac{z^m w^n}{m!\,n!}; \quad 1/|w|-|z|<1$
$H_3(\alpha;\beta;\gamma;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m+n}(\beta)_n}{(\gamma)_{m+n}} \frac{z^m w^n}{m!\,n!}; \quad |z|+|w|^2-|w|<0$
$H_4(\alpha;\beta;\gamma;\delta;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m+n}(\beta)_n}{(\gamma)_m(\delta)_n} \frac{z^m w^n}{m!\,n!}; \quad 4|z|+2|w|-|w|^2<1$
$H_5(\alpha;\beta;\gamma;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m+n}(\beta)_{n-m}}{(\gamma)_n} \frac{z^m w^n}{m!\,n!}$
$H_6(\alpha;\beta;\gamma;z,w) \equiv \sum_{m,n=0}^{\infty} (\alpha)_{2m-n}(\beta)_{n-m}(\gamma)_n \frac{z^m w^n}{m!\,n!}; \quad |z||w|^2+|w|<1$
$H_7(\alpha;\beta;\gamma;\delta;z,w) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m-n}(\beta)_n(\gamma)_n}{(\delta)_m} \frac{z^m w^n}{m!\,n!}; \quad 4|z|+2/|w|-1/|w|^2<1$
while the confluent functions include:
$\Phi_1(\alpha;\beta;\gamma;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m+n}(\beta)_m}{(\gamma)_{m+n}} \frac{x^m y^n}{m!\,n!}$
$\Phi_2(\beta,\beta';\gamma;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\beta)_m(\beta')_n}{(\gamma)_{m+n}} \frac{x^m y^n}{m!\,n!}$
$\Phi_3(\beta;\gamma;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\beta)_m}{(\gamma)_{m+n}} \frac{x^m y^n}{m!\,n!}$
$\Psi_1(\alpha;\beta;\gamma,\gamma';x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m+n}(\beta)_m}{(\gamma)_m(\gamma')_n} \frac{x^m y^n}{m!\,n!}$
$\Psi_2(\alpha;\gamma,\gamma';x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m+n}}{(\gamma)_m(\gamma')_n} \frac{x^m y^n}{m!\,n!}$
$\Xi_1(\alpha,\alpha';\beta;\gamma;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_m(\alpha')_n(\beta)_m}{(\gamma)_{m+n}} \frac{x^m y^n}{m!\,n!}$
$\Xi_2(\alpha;\beta;\gamma;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_m(\beta)_m}{(\gamma)_{m+n}} \frac{x^m y^n}{m!\,n!}$
$\Gamma_1(\alpha;\beta,\beta';x,y) \equiv \sum_{m,n=0}^{\infty} (\alpha)_m(\beta)_{n-m}(\beta')_{m-n} \frac{x^m y^n}{m!\,n!}$
$\Gamma_2(\beta,\beta';x,y) \equiv \sum_{m,n=0}^{\infty} (\beta)_{n-m}(\beta')_{m-n} \frac{x^m y^n}{m!\,n!}$
$H_1(\alpha;\beta;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}(\beta)_{m+n}}{(\delta)_m} \frac{x^m y^n}{m!\,n!}$
$H_2(\alpha;\beta;\gamma;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}(\beta)_m(\gamma)_n}{(\delta)_m} \frac{x^m y^n}{m!\,n!}$
$H_3(\alpha;\beta;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}(\beta)_m}{(\delta)_m} \frac{x^m y^n}{m!\,n!}$
$H_4(\alpha;\gamma;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}(\gamma)_n}{(\delta)_n} \frac{x^m y^n}{m!\,n!}$
$H_5(\alpha;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}}{(\delta)_m} \frac{x^m y^n}{m!\,n!}$
$H_6(\alpha;\gamma;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m+n}}{(\gamma)_{m+n}} \frac{x^m y^n}{m!\,n!}$
$H_7(\alpha;\gamma;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m+n}}{(\gamma)_m(\delta)_n} \frac{x^m y^n}{m!\,n!}$
$H_8(\alpha;\beta;x,y) \equiv \sum_{m,n=0}^{\infty} (\alpha)_{2m-n}(\beta)_{n-m} \frac{x^m y^n}{m!\,n!}$
$H_9(\alpha;\beta;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m-n}(\beta)_n}{(\delta)_m} \frac{x^m y^n}{m!\,n!}$
$H_{10}(\alpha;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{2m-n}}{(\delta)_m} \frac{x^m y^n}{m!\,n!}$
$H_{11}(\alpha;\beta;\gamma;\delta;x,y) \equiv \sum_{m,n=0}^{\infty} \frac{(\alpha)_{m-n}(\beta)_n(\gamma)_n}{(\delta)_m} \frac{x^m y^n}{m!\,n!}$
Notice that some of the complete and confluent functions share the same notation.
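Because the definitions above are just double power series in Pochhammer symbols, they are straightforward to evaluate numerically inside the stated convergence region by truncating the sums. The sketch below is illustrative only (the truncation order and test values are arbitrary choices); it evaluates the first function, F1, directly from its series.

```python
from math import factorial

def poch(a: float, k: int) -> float:
    """Pochhammer symbol (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1."""
    result = 1.0
    for i in range(k):
        result *= a + i
    return result

def horn_f1(alpha, beta, beta_p, gamma, z, w, terms=40):
    """Truncated double series for F1(alpha; beta, beta'; gamma; z, w),
    usable inside |z| < 1 and |w| < 1 where the series converges."""
    total = 0.0
    for m in range(terms):
        for n in range(terms):
            total += (
                poch(alpha, m + n) * poch(beta, m) * poch(beta_p, n)
                / poch(gamma, m + n)
                * z**m * w**n / (factorial(m) * factorial(n))
            )
    return total

# Spot check: for w = 0 the double series collapses to the Gauss series
# 2F1(alpha, beta; gamma; z), which gives a quick sanity test of the code.
print(horn_f1(0.5, 1.0, 2.0, 1.5, 0.3, 0.0))
print(horn_f1(0.5, 1.0, 2.0, 1.5, 0.3, 0.2))
```

The other complete and confluent functions can be evaluated the same way by swapping in their Pochhammer factors, as long as the chosen (z, w) lie inside the corresponding convergence domain.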
**1,2,3-Tribromopropane** 1,2,3-Tribromopropane: 1,2,3-Tribromopropane (TBP) is a toxic organic compound. It is a clear colorless to light yellow liquid.
**Janina Kneipp** Janina Kneipp: Janina Kneipp is a German scientist who is Professor of Physical Chemistry at the Humboldt University of Berlin. Her research considers surface enhanced Raman scattering and plasmonic enhancement in multi-modal micro spectroscopy. Early life and education: Kneipp was an undergraduate student at the Free University of Berlin, where she specialised in biology and physics. She remained in Berlin for graduate studies, where she worked on Fourier-transform infrared spectroscopy at RKI. After earning her doctorate, she moved to the Erasmus University Rotterdam, where she worked on optical spectroscopies. She was a postdoctoral researcher at Princeton University. Research and career: In 2005, Kneipp joined the BAM Federal Institute for Materials Research and Testing. She moved to the Humboldt University of Berlin in 2008. Her research develops multi-photon spectroscopy for bioanalysis. She was supported by the European Research Council to develop Multiphoton Processes Using Plasmonics. As part of her work, Kneipp developed multi-functional nanosensors, which can be combined with plasmonic nanoparticles and provide multiple surface-enhanced spectroscopic signatures. Plasmonic structures can enhance local optical fields. In particular, Kneipp is interested in surface-enhanced Raman scattering (SERS) of complex samples. She uses SERS to better understand how molecules interact with nanostructures, for applications in biospectroscopy and in plasmonic catalysis. Beyond SERS, Kneipp has shown that a combination of Raman spectroscopy with other methods can be used to study plant samples. Vibrational spectra of plants can provide information about the biochemical composition of structures like pollen, and can give information on plant-climate interactions. From 2015 to 2020, Kneipp served on the German Research Foundation (DFG) review board for chemistry. She is a member of the excellence cluster UniSysCat and the Einstein Center of Catalysis. She is a co-founder of the School of Analytical Sciences Adlershof (SALSA), a graduate program at HU. Awards and honours: 2010 European Research Council Starting Grant 2010 Bunsen-Kirchhoff Award for Analytical Spectroscopy 2013 Wilhelm Ostwald Fellow 2018 Caroline von Humboldt Professorship Selected publications: Janina Kneipp; Harald Kneipp; Katrin Kneipp (20 March 2008). "SERS--a single-molecule and nanoscale tool for bioanalytics". Chemical Society Reviews. 37 (5): 1052–1060. doi:10.1039/B708459P. ISSN 0306-0012. PMID 18443689. Wikidata Q37149997. Fani Madzharova; Zsuzsanna Heiner; Janina Kneipp (22 May 2017). "Surface enhanced hyper Raman scattering (SEHRS) and its applications". Chemical Society Reviews. 46 (13): 3980–3999. doi:10.1039/C7CS00137A. ISSN 0306-0012. PMID 28530726. Wikidata Q38680877. Katrin Kneipp; Harald Kneipp; Janina Kneipp (1 July 2006). "Surface-enhanced Raman scattering in local optical fields of silver and gold nanoaggregates-from single-molecule Raman spectroscopy to ultrasensitive probing in live cells". Accounts of Chemical Research. 39 (7): 443–450. doi:10.1021/AR050107X. ISSN 0001-4842. PMID 16846208. Wikidata Q36538471.
**Becampanel** Becampanel: Becampanel (INN) (code name AMP397) is a quinoxalinedione derivative drug which acts as a competitive antagonist of the AMPA receptor (IC50 = 11 nM). It was investigated as an anticonvulsant for the treatment of epilepsy by Novartis, and was also looked at as a potential treatment for neuropathic pain and cerebral ischemia, but never completed clinical trials.
**Analog signature analysis** Analog signature analysis: Analog signature analysis is an electronic component and circuit-board troubleshooting technique that applies a current-limited AC sine wave across two points of an electronic component or circuit. The resulting current/voltage waveform is shown on a signature display using vertical deflection for current and horizontal deflection for voltage. This unique analog signature represents the overall health of the part being analyzed. By comparing the signatures of known good circuit boards to those of suspect boards, faulty nets and components can be quickly identified. Analog signature analysis: Analog signature analysis relies on a change in electrical characteristics to detect problems on a circuit board.
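A rough illustration of why these signatures distinguish component types follows; it is a simplified model not tied to any particular tester: the source is idealized as a sine voltage behind a series current-limiting resistor, and the diode uses the ideal Shockley equation. The sketch computes the voltage-versus-current trace for a resistor (a straight line) and for a diode (a clipped curve with a conduction knee).

```python
import numpy as np

# Idealized signature-analysis source: sine voltage behind a current-limiting resistor.
V_PEAK = 5.0        # volts, arbitrary test amplitude
R_SOURCE = 1000.0   # ohms, series current-limiting resistance

def resistor_signature(r_dut, n=400):
    """Voltage across and current through a resistive device under test over one cycle."""
    vs = V_PEAK * np.sin(np.linspace(0.0, 2.0 * np.pi, n))
    i = vs / (R_SOURCE + r_dut)
    v = i * r_dut
    return v, i          # straight line: slope set by r_dut

def diode_signature(i_s=1e-12, v_t=0.02585, n=400):
    """Same drive applied to an ideal Shockley diode; solved per sample by bisection."""
    vs = V_PEAK * np.sin(np.linspace(0.0, 2.0 * np.pi, n))
    v = np.empty(n)
    for k, source in enumerate(vs):
        lo, hi = -V_PEAK, V_PEAK
        for _ in range(60):   # bisection on KCL: (source - v)/R = Is*(exp(v/Vt) - 1)
            mid = 0.5 * (lo + hi)
            f = (source - mid) / R_SOURCE - i_s * (np.exp(mid / v_t) - 1.0)
            lo, hi = (mid, hi) if f > 0 else (lo, mid)
        v[k] = 0.5 * (lo + hi)
    i = (vs - v) / R_SOURCE
    return v, i          # conduction knee forward, essentially open circuit in reverse

v_r, i_r = resistor_signature(2200.0)
v_d, i_d = diode_signature()
print("resistor trace is linear:", np.allclose(i_r * 2200.0, v_r))
print("diode reverse current ~ 0:", abs(i_d.min()) < 1e-9)
```

Plotting voltage on the horizontal axis against current on the vertical axis, as a signature display does, would show the familiar line, ellipse, or clipped shapes; a short, a leaky junction, or an open net each changes the trace, which is the change in electrical characteristics the technique relies on.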
**Digital do MaiN** Digital do MaiN: Digital do MaiN (Japanese: デジタルドメイン株式会社, Dezitaru do MēiN Kabushiki Gaisha) is a Japanese audio engineering company headquartered in Chiyoda, Tokyo, Japan. The company name emphasises symbiosis of analog and digital technologies (implemented, for example, in a volume control subsystem); the logo symbolizes an input pin jack (left square), output pin jack (right square) and an innovative signal processing unit in between. Technology: Digital do MaiN's power amplifiers use V-FET technology transistors. Initially developed by Nippon Gakki Seizo K.K. in the 1970s (US Patent 4,216,038), the technology was improved, and the 2SK77B transistor was released. As V-FET devices are no longer manufactured, Digital do MaiN builds them itself. Original design and usage of the 2SK77B V-FET transistor give amplifiers characteristics similar to vacuum tube devices and Triode class A amplifiers which feature very high quality of output sound and cancellation of most of the even distortion harmonics, and allow noise distortion to be less than 0.005% and no loss of original harmonics. Digital do MaiN also uses technologies and complementary products from its partners: MSB Technology's (USA) DACs, Cabasse (France) loudspeakers, Denon (Japan) waveform reproduction technology. Awards: Japanese Audio Excellence Award 2009, Separate Digital Players category (D-1a D/A converter) and Main Amplifiers category (B-1a power amplifier)
**WinFS** WinFS: WinFS (short for Windows Future Storage) was the code name for a canceled data storage and management system project based on relational databases, developed by Microsoft and first demonstrated in 2003 as an advanced storage subsystem for the Microsoft Windows operating system, designed for persistence and management of structured, semi-structured and unstructured data. WinFS: WinFS includes a relational database for storage of information, and allows any type of information to be stored in it, provided there is a well defined schema for the type. Individual data items could then be related together by relationships, which are either inferred by the system based on certain attributes or explicitly stated by the user. As the data has a well defined schema, any application can reuse the data; and using the relationships, related data can be effectively organized as well as retrieved. Because the system knows the structure and intent of the information, it can be used to make complex queries that enable advanced searching through the data and aggregating various data items by exploiting the relationships between them. WinFS: While WinFS and its shared type schema make it possible for an application to recognize the different data types, the application still has to be coded to render the different data types. Consequently, it would not allow development of a single application that can view or edit all data types; rather, what WinFS enables applications to do is understand the structure of all data and extract the information that they can use further. When WinFS was introduced at the 2003 Professional Developers Conference, Microsoft also released a video presentation, named IWish, showing mockup interfaces that showed how applications would expose interfaces that take advantage of a unified type system. The concepts shown in the video ranged from applications using the relationships of items to dynamically offer filtering options to applications grouping multiple related data types and rendering them in a unified presentation. WinFS: WinFS was billed as one of the pillars of the "Longhorn" wave of technologies, and would ship as part of the next version of Windows. It was subsequently decided that WinFS would ship after the release of Windows Vista, but those plans were shelved in June 2006, with some of its component technologies being integrated into ADO.NET and Microsoft SQL Server. Motivation: Many filesystems found on common operating systems, including the NTFS filesystem which is used in modern versions of Microsoft Windows, store files and other objects only as a stream of bytes, and have little or no information about the data stored in the files. Such file systems also provide only a single way of organizing the files, namely via directories and file names.Because a file system has no knowledge about the data it stores, applications tend to use their own, often proprietary, file formats. This hampers sharing of data between multiple applications. It becomes difficult to create an application which processes information from multiple file types, because the programmers have to understand the structure and semantics of all the files. Using common file formats is a workaround to this problem but not a universal solution; there is no guarantee that all applications will use the format. 
Data with standardized schema, such as XML documents and relational data fare better, as they have a standardized structure and run-time requirements.Also, a traditional file system can retrieve and search data based only on the filename, because the only knowledge it has about the data is the name of the files that store the data. A better solution is to tag files with attributes that describe them. Attributes are metadata about the files such as the type of file (such as document, picture, music, creator, etc.). This allows files to be searched for by their attributes, in ways not possible using a folder hierarchy, such as finding "pictures which have person X". The attributes can be recognizable by either the file system natively, or via some extension. Desktop search applications take this concept a step further. They extract data, including attributes, from files and index it. To extract the data, they use a filter for each file format. This allows for searching based on both the file's attributes and the data in it.However, this still does not help in managing related data, as disparate items do not have any relationships defined. For example, it is impossible to search for "the phone numbers of all persons who live in Acapulco and each have more than 100 appearances in my photo collection and from whom I have had e-mail within the last month". Such a search could not be done unless it is based on a data model which has both the semantics as well as relationships of data defined. WinFS aims to provide such a data model and the runtime infrastructure that can be used to store the data, as well as the relationships between data items according to the data model, doing so at a satisfactory level of performance. Overview: WinFS natively recognizes different types of data, such as picture, e-mail, document, audio, video, calendar, contact, rather than just leaving them as raw unanalyzed bytestreams (as most file systems do). Data stored and managed by the system are instances of the data type recognized by the WinFS runtime. The data are structured by means of properties. For example, an instance of a résumé type will surface the data by exposing properties, such as Name, Educational Qualification, Experience. Each property may be a simple type (strings, integers, dates) or complex types (contacts). Different data types expose different properties. Besides that, WinFS also allows different data instances to be related together; such as a document and a contact can be related by an Authored By relationship. Relationships are also exposed as properties; for example if a document is related to a contact by a Created By relationship, then the document will have a Created By property. When it is accessed, the relationship is traversed and the related data returned. By following the relations, all related data can be reached. WinFS promotes sharing of data between applications by making the data types accessible to all applications, along with their schemas. When an application wants to use a WinFS type, it can use the schema to find the data structure and can use the information. So, an application has access to all data on the system even though the developer did not have to write parsers to recognize the different data formats. It can also use relationships and related data to create dynamic filters to present the information the application deals with. The WinFS API further abstracts the task of accessing data. 
All WinFS types are exposed as .NET objects with the properties of the object directly mapping to the properties of the data type. Also, by letting different applications that deal with the same data share the same WinFS data instance rather than storing the same data in different files, the hassles of synchronizing the different stores when the data change are removed. Thus WinFS can reduce redundancies.Access to all the data in the system allows complex searches for data across all the data items managed by WinFS. In the example used above ("the phone numbers of all persons who live in Acapulco and each have more than 100 appearances in my photo collection and with whom I have had e-mail within last month"), WinFS can traverse the subject relationship of all the photos to find the contact items. Similarly, it can filter all emails in last month and access the communicated with relation to reach the contacts. The common contacts can then be figured out from the two sets of results and their phone number retrieved by accessing the suitable property of the contact items. Overview: In addition to fully schematized data (like XML and relational data), WinFS supports semi-structured data (such as images, which have an unstructured bitstream plus structured metadata) as well as unstructured data (such as files) as well. It stores the unstructured components as files while storing the structured metadata in the structured store. Internally, WinFS uses a relational database to manage data. It does not limit the data to belonging to any particular data model. The WinFS runtime maps the schema to a relational modality, by defining the tables it will store the types in and the primary keys and foreign keys that would be required to represent the relationships. WinFS includes mappings for object and XML schemas by default. Mappings for other schemas must be specified. Object schemas are specified in XML; WinFS generates code to surface the schemas as .NET classes. ADO.NET can be used to directly specify the relational schema, though a mapping to the object schema must be provided to surface it as classes. Relationship traversals are performed as joins on these tables. WinFS also automatically creates indexes on these tables, to enable fast access to the information. Indexing speeds up joins significantly, and traversing relationships to retrieve related data is performed very fast. Indexes are also used during information search; searching and querying use the indexes to quickly complete the operations, much like desktop search systems. Development: The development of WinFS is an extension to a feature that was initially planned in the early 1990s. Dubbed Object File System, it was supposed to be included as part of Cairo. OFS was supposed to have powerful data aggregation features, but the Cairo project was shelved, and with it OFS. However, later during the development of COM, a storage system, called Storage+, based on then-upcoming SQL Server 8.0, was planned, which was slated to offer similar aggregation features. This, too, never materialized, and a similar technology, Relational File System (RFS), was conceived to be launched with SQL Server 2000. However, SQL Server 2000 ended up being a minor upgrade to SQL Server 7.0 and RFS was not implemented. Development: The concept was not scrapped, and served as the base for WinFS. 
WinFS was initially planned for inclusion in Windows Vista, and build 4051 of Windows Vista, then called by its codename "Longhorn", given to developers at the Microsoft Professional Developers Conference in 2003, included WinFS, but it suffered from significant performance issues. In August 2004, Microsoft announced that WinFS would not ship with Windows Vista; it would instead be available as a downloadable update after Vista's release. On August 29, 2005, Microsoft quietly made Beta 1 of WinFS available to MSDN subscribers. It worked on Windows XP, and required the .NET Framework to run. The WinFS API was included in the System.Storage namespace. The beta was refreshed on December 1, 2005, to be compatible with version 2.0 of the .NET Framework. WinFS Beta 2 was planned for some time later in 2006, and was supposed to include integration with Windows Desktop Search, so that search results would include results from both regular files and WinFS stores, as well as to allow access to WinFS data using ADO.NET. On June 23, 2006, the WinFS team at Microsoft announced that WinFS would no longer be delivered as a separate product, and some components would be brought under the umbrella of other technologies. Many of the principal features Microsoft intended to provide with WinFS included a pane for metadata property editing, breadcrumb-based property navigation, filtering or stacking items over properties, incremental search, and saved searches; these features were incorporated in Windows Vista. Query composition, a feature of WinFS that allowed users to perform additional searches that reuse the results of a previous query, was later incorporated in Windows Vista. Examples of uses of the technology are the object-relational mapping components, which went into the ADO.NET Entity Framework; support for unstructured data, an adminless mode of operation, support for file system objects via the FILESTREAM data type, and hierarchical data in SQL Server 2008, then codenamed Katmai, as well as integration with Win32 APIs and the Windows Shell and support for traversal of hierarchies by traversing relationships in later releases of Microsoft SQL Server; and the synchronization components, which went into the Microsoft Sync Framework. In 2013, Bill Gates cited WinFS as his greatest disappointment at Microsoft, saying that the idea of WinFS was ahead of its time and would re-emerge. Data storage: Architecture WinFS uses a relational engine, which derives from SQL Server 2005, to provide the data-relations mechanism. WinFS stores are simply SQL Server database (.MDF) files with the FILESTREAM attribute set. These files are stored in the access-restricted folder named "System Volume Information" (placed in the volume root), in folders under the folder "WinFS" with names of GUIDs of these stores. At the bottom of the WinFS stack lies WinFS Core, which interacts with the filesystem and provides file-access and -addressing capabilities. The relational engine leverages the WinFS core services to present a structured store and other services such as locking, which the WinFS runtime uses to implement the functionality.
The WinFS runtime exposes services such as Synchronization and Rules that can be used to synchronize WinFS stores or perform certain actions on the occurrence of certain events. WinFS runs as a service made up of three processes: WinFS.exe, which hosts the relational datastore; WinFSSearch.exe, which hosts the indexing and querying engine; and WinFPM.exe (WinFS File Promotion Manager), which interfaces with the underlying file system. It allows programmatic access to its features via a set of .NET Framework APIs. These enable applications to define custom-made data types, define relationships among data, store and retrieve information, and allow advanced searches. The applications can then aggregate the data and present the aggregated data to the user. Data storage: Data store WinFS stores data in relational stores, which are exposed as virtual locations called stores. A WinFS store is a common repository where any application can store data along with its metadata, relationships and schema. The WinFS runtime can apply certain relationships itself; for example, if the values of the subject property of a picture and the name property of a contact are the same, then WinFS can relate the contact with the picture. Relations can also be specified by other applications or the user. WinFS provides unified storage, but stops short of defining the format that is to be stored in the data stores. Instead it supports writing data in application-specific formats. But applications must provide a schema that defines how the file format should be interpreted. For example, a schema could be added to allow WinFS to understand how to read, and thus be able to search and analyze, (say) a PDF file. By using the schema, any application can read data from any other application, and this also allows different applications to write in each other's format by sharing the schema. Multiple WinFS stores can be created on a single machine. This allows different classes of data to be kept segregated; for example, official documents and personal documents can be kept in different stores. WinFS, by default, provides only one store, named "DefaultStore". WinFS stores are exposed as shell objects, akin to Virtual folders, which dynamically generate a list of all items present in the store and present them in a folder view. The shell object also allows searching information in the datastore. A data unit that has to be stored in a WinFS store is called a WinFS Item. A WinFS item, along with the core data item, also contains information on how the data item is related to other data. This Relationship is stored in terms of logical links. Links specify which other data items the current item is related with. In other words, links specify the relationship of the data with other data items. Links are physically stored using a link identifier, which specifies the name and intent of the relationship, such as type of or consists of. The link identifier is stored as an attribute of the data item. All the objects that have the same link ID are considered to be related. An XML schema, defining the structure of the data items that will be stored in WinFS, must be supplied to the WinFS runtime beforehand. In Beta 1 of WinFS, the schema assembly had to be added to the GAC before it could be used. Data storage: Data model WinFS models data using the data items, along with their relationships, extensions and rules governing their usage.
WinFS needs to understand the type and structure of the data items, so that the information stored in the data item can be made available to any application that requests it. This is done by the use of schemas. For every type of data item that is to be stored in WinFS, a corresponding schema needs to be provided to define the type, structure and associations of the data. These schemas are defined using XML.Predefined WinFS schemas include schemas for documents, e-mail, appointments, tasks, media, audio, video, and also includes system schemas that include configuration, programs, and other system-related data. Custom schemas can be defined on a per-application basis, in situations where an application wants to store its data in WinFS, but not share the structure of that data with other applications, or they can be made available across the system. Data storage: Type system The most important difference between a file system and WinFS is that WinFS knows the type of each data item that it stores. And the type specifies the properties of the data item. The WinFS type system is closely associated with the .NET framework's concept of classes and inheritance. A new type can be created by extending and nesting any predefined types.WinFS provides four predefined base types – Items, Relationships, ScalarTypes and NestedTypes. An Item is the fundamental data object which can be stored, and a Relationship is the relation or link between two data items. Since all WinFS items must have a type, the type of item stored defines its properties. The properties of an Item may be a ScalarType, which defines the smallest unit of information a property can have, or a NestedType, which is a collection of more than one ScalarTypes and/or NestedTypes. All WinFS types are made available as .NET CLR classes.Any object represented as a data unit, such as contact, image, video, document etc., can be stored in a WinFS store as a specialization of the Item type. By default, WinFS provides Item types for Files, Contact, Documents, Pictures, Audio, Video, Calendar, and Messages. The File Item can store any generic data, which is stored in file systems as files. But unless an advanced schema is provided for the file, by defining it to be a specialized Item, WinFS will not be able to access its data. Such a file Item can only support being related to other Items. Data storage: A developer can extend any of these types, or the base type Item, to provide a type for their custom data. The data contained in an Item is defined in terms of properties, or fields that hold the actual data. For example, an Item Contact may have a field Name that is a ScalarType, and one field Address, a NestedType, which is further composed of two ScalarTypes. To define this type, the base class Item is extended and the necessary fields are added to the class. A NestedType field can be defined as another class that contains the two ScalarType fields. Once the type is defined, a schema has to be defined, which denotes the primitive type of each field, for example, the Name field is a String, the Address field is a custom defined Address class, both the fields of which are Strings. Other primitive types that WinFS supports are Integer, Byte, Decimal, Float, Double, Boolean and DateTime, among others. The schema will also define which fields are mandatory and which are optional. The Contact Item defined in this way will be used to store information regarding the Contact, by populating the properties field and storing it. 
Only those fields marked as mandatory need to be filled in during the initial save. Other fields may be populated later by the user, or not populated at all. If more property fields, such as a last-conversed date, need to be added, this type can be extended to accommodate them. Item types for other data can be defined similarly. Data storage: WinFS creates tables for all defined Items. All the fields defined for the Item form the columns of the table and all instances of the Item are stored as rows in the table for the respective Items. Whenever some field in the table refers to data in some other table, it is considered a relationship. The schema of the relationship specifies which tables are involved and what the kind and name of the relationship is. The WinFS runtime manages the relationship schemas. All Items are exposed as .NET CLR objects, with a uniform interface providing access to the data stored in the fields. Thus any application can retrieve an object of any Item type and can use the data in the object, without being aware of the physical structure the data was stored in. WinFS types are exposed as .NET classes, which can be instantiated as .NET objects. Data are stored in these type instances by setting their properties. Once done, they are persisted into the WinFS store. A WinFS store is accessed using an ItemContext class (see Data retrieval section for details). ItemContext allows transactional access to the WinFS store; i.e., all the operations performed between binding an ItemContext object to a store and closing it either all succeed or are all rolled back. As changes are made to the data, they are not written to the disc; rather they are written to an in-memory log. Only when the connection is closed are the changes written to the disc in a batch. This helps to optimize disc I/O. A typical use of the API would be to create a contact object in C#, set its properties, and persist it to the WinFS store within an ItemContext. Data storage: Relationships A data item can be related to one other item, giving rise to a one-to-one relationship, or to more than one item, resulting in a one-to-many relationship. The related items, in turn, may be related to other data items as well, resulting in a network of relationships, which is called a many-to-many relationship. Creating a relationship between two Items creates another field in the data of the Items concerned which refers to the row in the other Item's table where the related object is stored.
Data storage: Reference Relationships provide linkage between two Items, but do not have any lifetime associated, i.e., each Item will continue to be stored even without the other. Data storage: Embedding Relationships give order to the two Items that are linked by the Relationship, such as the Relationship between a Parent Item and a Child Item.Relationships between two Items can either be set programmatically by the application creating the data, or the user can use the WinFS Item Browser to manually relate the Items. A WinFS item browser can also graphically display the items and how they are related, to enable the user to know how their data are organized. Data storage: Rules WinFS includes Rules, which are executed when a certain condition is met. WinFS rules work on data and data relationships. For example, a rule can be created that states that whenever an Item is created which contains field "Name" and if the value of that field is some particular name, a relationship should be created that relates the Item with some other Item. WinFS rules can also access any external application. For example, a rule can be built which launches a Notify application whenever a mail is received from a particular contact. WinFS rules can also be used to add new properties fields to existing data Items.WinFS rules are also exposed as .NET CLR objects. As such any rule can be used for any purpose. A rule can even be extended by inheriting from it to form a new rule that consists of the condition and action of the parent rule plus something more. Data storage: RAV WinFS supports creating Rich Application Views (RAV) by aggregating different data in a virtual table format. Unlike database view, where each individual element can only be a scalar value, RAVs can have complex Items or even collections of Items. The actual data can be across multiple data types or instances and can even be retrieved by traversing relationships. RAVs are intrinsically paged (dividing the entire set of data into smaller pages containing disconnected subsets of the data) by the WinFS runtime. The page size is defined during creation of the view and the WinFS API exposes methods to iterate over the pages. RAVs also supports modification of the view according to different grouping parameters. Views can also be queried against. Data storage: Access control Even though all data are shared, everything is not equally accessible. WinFS uses the Windows authentication system to provide two data protection mechanisms. First, there is share-level security that controls access to your WinFS share. Second, there is item level security that supports NT compatible security descriptors. The process accessing the item must have enough privileges to access it. Also in Vista there is the concept of "integrity level" for an application. Higher integrity data cannot be accessed by a lower integrity process. Data retrieval: The primary mode of data retrieval from a WinFS store is querying the WinFS store according to some criteria, which returns an enumerable set of items matching the criteria. The criteria for the query is specified using the OPath query language. The returned data are made available as instances of the type schemas, conforming to the .NET object model. The data in them can be accessed by accessing the properties of individual objects.Relations are also exposed as properties. 
Each WinFS Item has two properties, named IncomingRelationships and OutgoingRelationships, which provide access to the set of relationship instances the item participates in. The other item participating in a given relationship instance can be reached through that instance. The fact that the data can be accessed using its description, rather than location, can be used to provide end-user organizational capabilities without being limited to the hierarchical organization used in file systems. In a file system, each file or folder is contained in only one folder. But WinFS Items can participate in any number of holding relationships, with any other items. As such, end users are not limited to only file/folder organization. Rather, a contact can become a container for documents; a picture a container for contacts, and so on. For legacy compatibility, WinFS includes a pseudo-type called Folder, which is present only to participate in holding relationships and emulate file/folder organization. Since any WinFS Item can be related to more than one Folder item, from an end user perspective, an item can reside in multiple folders without duplicating the actual data. Applications can also analyze the relationship graphs to present various filters. For example, an email application can analyze the related contacts and the relationships of the contacts with restaurant bills and dynamically generate filters like "Emails sent to people I had lunch with". Data retrieval: Searches The WinFS API provides a class called ItemContext, which is bound to a WinFS store. The ItemContext object can be used to scope the search to the entire store or a subset of it. It also provides transactional access to the store. An object of this class can then spawn an ItemSearcher object, which takes the type (an object representing the type) of the item or relationship to be retrieved, along with the OPath query string representing the criteria for the search. A set of all matches is returned, which can then be bound to a UI widget for displaying en masse or enumerating individually. The properties of items can also be modified and then stored back to the data store to update the data. The ItemContext object is closed (which marks the end of association of the object with the store) when the queries are made or changes merged into the store. Data retrieval: Related items can also be accessed through the items. The IncomingRelationships and OutgoingRelationships properties give access to the full set of relationship instances, typed to the name of the relationship. These relationship objects expose the other item via a property. So, for example, if a picture is related to another item, the related item can be accessed by traversing the relationship. An OPath query string allows the parameters that will be queried for to be specified using Item properties, embedded Items, as well as Relationships. It can specify a single search condition, such as "title = 'Something'", or a compound condition such as "title = 'Title 1' || title = 'Title 2' && author = 'Someone'". These boolean and relational operations can be specified using C#-like operators such as &&, ||, = and !=, as well as their English-like equivalents such as EQUAL and NOT EQUAL. SQL-like operators such as LIKE, GROUP BY and ORDER BY are also supported, as are wildcard conditions. So, "title LIKE 'any*'" is a valid query string.
These operators can be used to execute complex searches. For example, a query can create an ItemSearcher object that searches on the OutContactRelationship instance that relates pictures and contacts, in effect searching all pictures related with a contact, and then run the query Name LIKE 'A*' on all contacts reachable through OutContactRelationship, returning the list of "contacts whose names start with A and whose pictures I have". Similarly, more relationships could be taken into account to further narrow down the results. Further, a natural language query processor, which parses queries in natural language and creates a well-formed OPath query string to search via proper relationships, can allow users to make searches such as "find the name of the wine I had with person X last month", provided financial management applications are using WinFS to store bills. Data retrieval: Different relations specify different sets of data. So when a search is made that encompasses multiple relations, the different sets of data are retrieved individually and a union of the different sets is computed. The resulting set contains only those data items that correspond to all the relations. Data retrieval: Notifications WinFS includes better support for handling data that changes frequently. Using WinFS Notifications, applications can choose to be notified of changes to selected data Items. WinFS will raise an ItemChangedEvent, using the .NET Event model, when a subscribed-to Item changes, and the event will be published to the applications. Data retrieval: Information Agent WinFS includes an Information Agent feature for the management, retrieval, and storage of end-user notification rules and preferences for changes to items in the data store. Using Information Agent, it is possible to automatically define relations to new items based on events such as appointments, with an example being that appointments can be related to photos based on the dates the photos were taken, enabling queries for birthdays or holidays without needing to know the actual dates of such events ("find all photos taken on this birthday"). Other examples include automatically moving new items to specific folders based on a rule as determined by appointment times and dates the photos were taken ("when I import a photo taken during a business event, move it to the Business Events folder") or more complex possibilities. Information Agent can also forward notifications to other devices ("if I receive a high priority email from my boss, send a notification to my phone") and is similar to the Rules and Alerts functionality of Microsoft Outlook. Data sharing: WinFS allows easy sharing of data between applications, and among multiple WinFS stores, which may reside on different computers, by copying to and from them. A WinFS item can also be copied to a non-WinFS file system, but unless that data item is put back into the WinFS store, it will not support the advanced services provided by WinFS.
Non-WinFS file formats can be stored in WinFS stores, using the File Item provided by WinFS. Importers can be written to convert specific file formats to WinFS Item types. In addition, WinFS provides services to automatically synchronize items in two or more WinFS stores, subject to some predefined condition, such as "share only photos" or "share photos that have an associated contact X". The stores may be on different computers. Synchronization is done in a peer-to-peer fashion; there is no central authority. A synchronization can be manual, automatic, or scheduled. During synchronization, WinFS finds the new and modified Items, and updates accordingly. If two or more changes conflict, WinFS can either resort to automatic resolution based on predefined rules, or defer the synchronization for manual resolution. WinFS also updates the schemas, if required. Application support: Shell namespace WinFS Beta 1 includes a shell namespace extension, which surfaces WinFS stores as top level objects in My Computer view. Files can be copied into and out of the stores, and applications can save directly into them. Even folders such as My Documents can be redirected to the stores. WinFS uses Importer plug-ins to analyze the files as they are being imported into the store and create proper WinFS schemas and objects, and when taking the objects out, re-pack them into files. If importers for certain files are not installed, they are stored as generic File types. Application support: Microsoft Rave Microsoft Rave is an application that shipped with WinFS Beta 1. It allows synchronization of two or more WinFS stores, and supports synchronization in full mesh mode as well as the central hub topology. While synchronizing, Microsoft Rave will determine the changes made to each store since the last sync, and update accordingly. When applying the changes, it also detects whether there is any conflict, i.e., whether the same data has been changed on both stores since the last synchronization. It will either log the conflicting data for later resolution or have it resolved immediately. Microsoft Rave uses peer-to-peer technology to communicate and transfer data. Application support: StoreSpy With WinFS Beta 1, Microsoft included an unsupported application called StoreSpy, which allowed one to browse WinFS stores by presenting a hierarchical view of WinFS Items. It automatically generated virtual folders based on access permissions, date and other metadata, and presented them in a hierarchical tree view, akin to the way traditional folders are presented. The application generated tabs for different Item types. StoreSpy allowed viewing Items, Relationships, MultiSet, Nested Elements, Extensions and other types in the store along with their full metadata. It also presented a search interface to perform manual searches and save them as virtual folders. The application also presented a graphical view of WinFS Rules. However, it did not allow editing of Items or their properties, though this was slated for inclusion in a future release. But the WinFS project was cut back before it could materialize. Application support: Type Browser WinFS also includes another application, named WinFS Type Browser, which can be used to browse the WinFS types, as well as visualize the hierarchical relationship between WinFS types. A WinFS type, both built-in types as well as custom schemas, can be visualized along with all the properties and methods that it supports.
It also shows the types that it derives from as well as other types that extend the type schema. However, while it was included with WinFS, it was released as an unsupported tool. Application support: OPather WinFS Beta 1 also includes an unsupported application, named OPather. It presents a graphical interface for writing OPath queries. It can be used by selecting a target object type and specifying the parameters of the query. It also includes an IntelliSense-like parameter completion feature. It can then be used to perform visualization tasks such as binding the results of a query to a DataGrid control, creating views of the data in WinFS itself, or just extracting the query string. Application support: Project "Orange" Microsoft launched a project to build a data visualization application for WinFS. It was codenamed "Project Orange" and was supposedly built using Windows Presentation Foundation. It was supposed to provide exploration of Items stored in WinFS stores, and data relationships were supposed to be a prominent part of the navigation model. It was also supposed to let people organize the WinFS stores graphically, productizing many of the concepts shown in the IWish concept video (WMV file). However, since the WinFS project went dark, the status of this project is unknown.
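To make the Item/Relationship data model described above concrete, here is a minimal, purely conceptual Python sketch. The class and method names are illustrative only; this is not the actual WinFS or System.Storage API, which was C#-based and never shipped. It models the three ideas the article keeps returning to: typed items, named relationships between items, and search by description rather than by location.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Item:
    item_type: str                      # e.g. "Contact", "Picture", "Document"
    properties: Dict[str, object] = field(default_factory=dict)

@dataclass
class Relationship:
    name: str                           # e.g. "Created By", "Subject"
    source: Item
    target: Item

class Store:
    """An in-memory stand-in for a WinFS store: typed items plus named links."""
    def __init__(self) -> None:
        self.items: List[Item] = []
        self.relationships: List[Relationship] = []

    def add(self, item: Item) -> Item:
        self.items.append(item)
        return item

    def relate(self, name: str, source: Item, target: Item) -> None:
        self.relationships.append(Relationship(name, source, target))

    def outgoing(self, item: Item, name: str) -> List[Item]:
        return [r.target for r in self.relationships if r.source is item and r.name == name]

    def find(self, item_type: str, **criteria) -> List[Item]:
        """Search by description (type and property values) rather than by location."""
        return [
            i for i in self.items
            if i.item_type == item_type
            and all(i.properties.get(k) == v for k, v in criteria.items())
        ]

store = Store()
alice = store.add(Item("Contact", {"Name": "Alice"}))
photo = store.add(Item("Picture", {"Subject": "Alice"}))
store.relate("Subject", photo, alice)

# "Pictures whose subject is the contact named Alice", via relationship traversal.
for contact in store.find("Contact", Name="Alice"):
    pics = [r.source for r in store.relationships if r.target is contact and r.name == "Subject"]
    print(len(pics), "picture(s) related to", contact.properties["Name"])
```

The same item can participate in any number of relationships, which is why, in the WinFS model, a contact can "contain" documents and a picture can "contain" contacts without any data being duplicated.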
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bell pattern** Bell pattern: A bell pattern is a rhythmic pattern of striking a hand-held bell or other instrument of the idiophone family, to make it emit a sound at desired intervals. It is often a key pattern (also known as a guide pattern, phrasing referent, timeline, or asymmetrical timeline). In most cases the instrument is a metal bell, such as an agogô, gankoqui, or cowbell, or a hollowed piece of wood, or wooden claves. In band music, bell patterns are also played on the metal shell of the timbales, and on drum kit cymbals. Sub-Saharan African music: Gerhard Kubik notes that key patterns are not universally found in sub-Saharan Africa: "Their geographical distribution mainly covers those parts of Africa where I.A.4 (Kwa languages) and the 'western stream' of the I.A.5 (Benue–Congo languages), or 'Bantu' languages are spoken, with offshoots into the Lower Zambezi valley and the Nyasa/Ruvuma area in southeast Africa" [within the larger Niger–Congo-B group]. Use of the patterns has since spread throughout the greater Niger–Congo language family. The use of iron bells (gongs) in sub-Saharan African music is linked to the early iron-making technology spread by the great Bantu migrations. The spread of the African bell patterns is probably similarly linked. Sub-Saharan African music: Throughout Africa, wherever these gongs have occurred they have been manufactured by the same process of welding the two halves together along a wide flange. This indicates a common origin. Sub-Saharan African music: Kubik observes that "at the broadest level," the various key patterns "are all interrelated." Key patterns exist in their own right, as well as in relation to the three inner reference levels of elementary pulsation, main reference beat, and primary cycle. Kubik further states that key patterns represent the structural core of a musical piece, something like a condensed and extremely concentrated expression of the motional possibilities open to the participants (musicians and dancers). Sub-Saharan African music: [Key patterns] express the rhythm’s organizing principle, defining rhythmic structure, as scales or tonal modes define harmonic structure . . . Put simply, key patterns epitomize the complete rhythmic matrix. Sub-Saharan African music: Key patterns are generated through cross-rhythm. They typically consist of 12 or 16 pulses, and have a bipartite structure, which evenly divides the pattern into two rhythmically opposed cells of 6 or 8 pulses each. The key pattern defines the musical period; the first cell is antecedent, and the second is consequent. The asymmetrical array of attack-points contradicts the metrical symmetry of the two cells. Sub-Saharan African music: Standard pattern The most commonly used key pattern in sub-Saharan Africa is the seven-stroke figure known in ethnomusicology as the standard pattern, or bembé. The standard pattern is expressed in both a triple-pulse (12/8 or 6/8) and a duple-pulse (4/4 or 2/2) structure. Many North American percussionists refer to the triple-pulse form as the 6/8 bell. The standard pattern has strokes on: 1, 1a, 2&, 2a, 3&, 4, 4a. Sub-Saharan African music: In 12/8: 1 & a 2 & a 3 & a 4 & a || X . X . X X . X . X . X || In 4/4: 1 e & a 2 e & a 3 e & a 4 e & a || X . . X . . X X . . X . X . . X || The axatse (Ghanaian beaded gourd instrument) part which typically accompanies the 12-pulse standard pattern in Ewe music is verbalized as: "pa ti pa pa ti pa ti pa ti pa pa". The "pa"s sound the standard pattern by striking the gourd against the knee.
The "ti"s sound pulses in between the bell strokes, by raising the gourd in an upward motion and striking it with the free hand. As is common with many African rhythms, the axatse part begins (first "pa") on the second stroke of the bell (1a), and the last "pa" coincides with 1. By ending at the beginning of the cycle, the axatse part contributes to the cyclic nature of the overall rhythm. Sub-Saharan African music: See: standard bell with accompanying axatse part. Atsiagbekor. Sub-Saharan African music: 12/8 bell patterns There are many different triple-pulse bell patterns found in sub-Saharan Africa. These are but a small sample. Bell patterns 1 and 2 are considered by A. M. Jones to be the two simplified forms of the standard pattern. Pattern 2 was the first African bell pattern to be transcribed. Pattern 2 contains exactly the same pattern of attack-points as Pattern 1, but begins on a different stroke, has a different relationship to the main beats, and is therefore a related but different key pattern. Pattern 3 is another variant of the standard pattern, one which contains exactly the same pattern of attack-points as the standard pattern, but in a different relationship to the main beats. The geographical border of Pattern 3 seems to be the Niger River. Kubik states that east of the Niger, Pattern 3 is used "among the Igbo, and the large group of Benue-Congo speakers from eastern Nigeria through western Cameroon, down to southern Democratic Republic of the Congo, eastern Angola and northern Zambia." The pattern is also used in Cuba and Haiti. Pattern 4 is a bell pattern used by the Hausa people of Nigeria. It is also used in the Cuban-Congolese rhythm palo. The figure is sometimes referred to as a horizontal hemiola. Three-beat cycle bell patterns There is a category of 12/8 bell patterns based on "slow" cycles of three cross-beats across four or eight main beats. Three-over-eight (3:8) is one of the most metrically contradictive and extraordinarily dynamic cross-rhythms found in African music. Within the context of a single four-beat cycle (single measure or musical period), the cross-rhythmic ratio is 1.5:4. The three cross-beats, spanning 24 pulses, are represented as whole-notes below for visual emphasis. Sub-Saharan African music: The following 24-pulse bell pattern is used in the Ewe rhythm kadodo. The three single strokes are muted. The kadodo bell pattern is an embellishment of three "slow" cross-beats spanning two measures, or three-over-eight (3:8). Sub-Saharan African music: 4/4 bell patterns Pattern 1 (the 4/4 standard pattern) is played on the head of a small Yoruba bata drum in Benin. Pattern 2 is used by the Yoruba and Igbo people of Nigeria. Pattern 3 is the bell part in fufume (Ghana). Pattern 4 is used by the Ga people (Ghana) for the rhythm gahu. Patterns 3 and 5 are used in the Ghanaian rhythm kpanlogo. Patterns 2 and 3 are known in Cuba as rumba clave and son clave respectively. Sub-Saharan African music: Single-celled bell patterns Some bell patterns are single-celled and therefore not key patterns. A single-celled pattern cycles over two main beats, while a two-celled key pattern cycles over four main beats. The most basic single-celled pattern in duple-pulse structure consists of three strokes, known in Cuban music as tresillo. Metric structure: Divisive rhythm versus additive rhythm Sub-Saharan African rhythm is divisive rhythm. However, perhaps because of their seemingly asymmetric structure, bell patterns are sometimes perceived in an additive rhythmic form.
For example, Justin London describes the five-stroke version of the standard pattern as "2-2-3-2-3", while Godfried Toussaint describes the seven-stroke form as "2-2-1-2-2-2-1." The following example of the five-stroke standard pattern is represented within an additive structure: 2+2+3+2+3. Metric structure: The bell pattern, and every aspect of the overall rhythm, is considered divisive both within cultural understanding and by most contemporary music theoreticians. Novotney states: "The African rhythmic structure which generates the standard pattern is a divisive structure and not an additive one . . . the standard pattern represents a series of attack points, . . . not a series of durational values." Kubik concurs: "Although on the level of structural analysis it cannot be denied that different 'distances' of strokes, combining two or three elementary pulses, are 'added up' within the cycle, performers do not think of time-line patterns as 'additive rhythms,' . . . 'Additive rhythms' are the analytic construction of the musicologist." Agawu states: "Additive rhythm . . . is a highly problematic concept for African music . . . it is not in sync with indigenous conceptions of musical structure. It arises as a kind of default grouping mechanism for those transcribers who either disregard the choreography or fail to accord it foundational status." Tresillo is often interpreted as an additive rhythm because of the irregular grouping of its strokes: 3+3+2. However, tresillo is generated through cross-rhythm: 8 pulses ÷ 3 = 2 cross-beats (consisting of three pulses each), with a remainder of a partial cross-beat (spanning two pulses). In other words, 8 ÷ 3 = 2, r2. Tresillo is a cross-rhythmic fragment. It contains the first three cross-beats of the four-over-three cross-rhythm. Metric structure: Although the difference between the two ways of notating this rhythm may seem small, they stem from fundamentally different conceptions. Those who wish to convey a sense of the rhythm’s background [main beats], and who understand the surface morphology in relation to a regular subsurface articulation, will prefer the divisive format. Those who imagine the addition of three, then three, then two sixteenth notes will treat the well-formedness of 3+3+2 as fortuitous, a product of grouping rather than of metrical structure. They will be tempted to deny that African music has a bona fide metrical structure because of its frequent departures from normative grouping structure—Agawu (2003: 87). Metric structure: In divisive form, the strokes of tresillo contradict the beats. In additive form, the strokes of tresillo are the beats. Metric structure: Counter-meter versus polymeter A. M. Jones correctly identified the importance of this key pattern, but he mistook its accents as indicators of meter rather than the counter-metric (cross-rhythmic) phenomena they actually are. Similarly, while Anthony King identified this five-stroke figure as the ‘standard pattern’ in its simplest and most basic form, he did not correctly identify its metric structure. King represented the pattern in a polymetric 7/8 + 5/8 time signature. Metric structure: Because this triple-pulse pattern is generated from cross-rhythm, it is possible to count or feel it in several different ways, and divide it by several different beat schemes. In the diagram below the five-stroke bell pattern is shown on top and a beat cycle is shown below it.
Any or all of these structures may be the emphasis at a given point in a piece of music using the bell pattern. Metric structure: The example on the left (6/8) represents the correct count and ground of the bell pattern. The four dotted quarter-notes across the two bottom measures are the main beats. All key patterns are built upon four main beats. The bottom measures on the other two examples (3/2 and 6/4) show cross-beats. Observing the dancer's steps almost always reveals the main beats of the music. Because the main beats are usually emphasized in the steps and not the music, it is often difficult for an "outsider" to feel the proper metric structure without seeing the dance component. Kubik states: "In order to understand the motional structure of any music in Africa, one has to look at the dancers as well and see how they relate to the instrumental background" (2010: 78). Metric structure: For cultural insiders, identifying the . . . ‘dance feet’ occurs instinctively and spontaneously. Those not familiar with the choreographic supplement, however, sometimes have trouble locating the main beats and expressing them in movement. Hearing African music on recordings alone without prior grounding in its dance-based rhythms may not convey the choreographic supplement. Not surprisingly, many misinterpretations of African rhythm and meter stem from a failure to observe the dance—Agawu (2003). Afro-Cuban music: Standard pattern The method of constructing iron bells in Cuba is identical to how it is done in Africa. Not surprisingly, many African bell patterns are played in Cuba as well. The standard pattern is the most widely used bell pattern in Cuba. Some of the Afro-Cuban rhythms that use the standard pattern are: Congolese (Bantu): palo, triallo; Lucumí (Yoruba): iyesá (12/8 form), bembé, agbe; Arará (Fon): sabalú, egbado; "Haitiano" (Fon, Yoruba): vodú-radá, yanvalú, nagó; the rumba form columbia. Afro-Cuban music: In the Yoruba-based Afro-Cuban rhythms agbe (toque güiro) and bembé, standard pattern variations are used spontaneously. The following 24-pulse bell pattern is used in the arará rhythm afrekete. The first measure simply sounds the four main beats. Notice that the first five strokes of the second measure are identical to the first five strokes of the standard pattern. Three-beat cycle bell patterns There are several 12/8 bell patterns based on "slow" cycles of three beats across four or eight main beats. The three-beat cycle is represented as half-notes in the following example for visual emphasis. This bell pattern, an embellishment of the three-beat cycle, is used in the Afro-Cuban rhythm abakuá. It consists of three sets of three strokes each. The bell pattern is also played in a displaced position, beginning on 4a, the pulse immediately preceding beat 1. Afro-Cuban music: The following 24-pulse bell pattern is used in the arará rhythm afrekete. The Arará are Cuban descendants of the Fon/Ewe ethnic group, so it's perhaps not surprising that it is the same pattern as the bell part used in the Ewe rhythm kadodo, shown earlier in this article. However, as used in afrekete, the part begins in the second measure of the 12/8 cycle. Notice that the first five strokes are identical to the first five strokes of the standard pattern. Like the kadodo bell, this pattern is an embellishment of the 3:8, or 1+1⁄2:4, cross-rhythm. Afro-Cuban music: 4/4 Cuban bell patterns A variety of Cuban 4/4 bell patterns have spread worldwide due to the global success of Cuban-based popular music.
Afro-Cuban music: Pattern 1 is son clave, usually played on wooden claves. Pattern 2 is the baqueteo, the key pattern used in danzón and the first expression of clave in written music. The baqueteo consists of the son clave strokes, plus four additional strokes. Not technically a bell pattern, the baqueteo is played on the güiro and on the heads of the timbales. The slashed noteheads are muted tones and the regular noteheads are open tones. Afro-Cuban music: In the 1940s the cowbell was added to the timbales in the first danzón-mambos of the charanga orchestras. Arcaño y sus Maravillas introduced this development. Later, multiple cowbells, a cymbal and the occasional woodblock were added to the timbale setup. Patterns 3 and 4 are guaguancó cáscara patterns adopted as mambo bell parts. During the mambo era of the 1940s, bongo players began regularly using a large hand-held cowbell during the montuno section in son groups. This bongo bell role was introduced in the son conjunto of Arsenio Rodríguez. Pattern 5 is the basic bongo bell pattern. Afro-Cuban music: The rhythmic basis for one of the most enduring Latin jazz tunes comes from a cáscara variant adopted as a mambo bell pattern. "Manteca," co-written by Dizzy Gillespie and Chano Pozo in 1947, is the first jazz standard to be rhythmically based on clave. The rhythm of the melody in the A section is identical to a common mambo bell pattern. Afro-Cuban music: Timbale bell and bongo bell interplay Patterns 3 and 4 are timbale bell parts that were introduced in mambo big bands. During the early 1940s Machito and his Afro-Cubans was the first band to employ the triumvirate of congas, bongos and timbales, the standard battery of percussion used in contemporary salsa. In the montuno section the bongo bell and the timbale bell parts are sounded simultaneously in a contrapuntal interplay. Afro-Cuban music: In the 1970s José Luis Quintana "Changuito" developed the technique of simultaneously playing timbale and bongo bell parts when he held the timbales chair in the songo band Los Van Van. The example below shows the combined bell patterns (written in a 2-3 clave sequence). Afro-Brazilian music: Afro-Brazilian music uses a variety of bell patterns, many of which are different from the patterns used in Cuba. Afro-Brazilian music: Bell pattern 1 is used in maculelê and some Candomblé and Macumba rhythms. Pattern 1 is known in Cuba as son clave. Bell 2 is used in afoxê and can be thought of as pattern 1 embellished with four additional strokes. Bell 3 is used in batucada. Pattern 4 is the maracatu bell and can be thought of as pattern 1 embellished with four additional strokes.
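The additive groupings quoted in the "Metric structure" discussion above (2-2-1-2-2-2-1 for the seven-stroke standard pattern, 2-2-3-2-3 for the five-stroke form, 3-3-2 for tresillo) can be recovered mechanically from the strokes' positions on the underlying pulse grid. The following Python sketch is illustrative only and is not from the source article; the pattern data follow the grids shown earlier, and the helper names (inter_onset_intervals, as_grid) are hypothetical.

```python
# Minimal sketch (not from the source article): representing key patterns as
# attack points on a pulse grid and deriving the "additive" inter-onset
# groupings quoted earlier. Pattern data follow the grids shown above.

def inter_onset_intervals(attacks, cycle_length):
    """Distances (in pulses) between successive strokes, wrapping around the cycle."""
    ordered = sorted(attacks)
    return [(ordered[(i + 1) % len(ordered)] - ordered[i]) % cycle_length
            for i in range(len(ordered))]

def as_grid(attacks, cycle_length):
    """Render a pattern the way the article does: 'X' for a stroke, '.' for a silent pulse."""
    return " ".join("X" if p in attacks else "." for p in range(cycle_length))

patterns = {
    # 12-pulse (12/8) seven-stroke standard pattern: strokes on 1, 1a, 2&, 2a, 3&, 4, 4a
    "standard pattern (7-stroke, 12/8)": ([0, 2, 4, 5, 7, 9, 11], 12),
    # Five-stroke form discussed in the additive-rhythm passage
    "standard pattern (5-stroke, 12/8)": ([0, 2, 4, 7, 9], 12),
    # Duple-pulse tresillo over 8 pulses
    "tresillo (8 pulses)": ([0, 3, 6], 8),
}

for name, (attacks, cycle) in patterns.items():
    print(name)
    print("  grid     :", as_grid(attacks, cycle))
    print("  intervals:", "-".join(str(n) for n in inter_onset_intervals(attacks, cycle)))
```

Because the intervals are computed from stroke positions on a fixed pulse grid, readings such as "2-2-1-2-2-2-1" fall out as a by-product of the attack points rather than being built up from added durations, which is consistent with the divisive interpretation argued for by Novotney, Kubik and Agawu above.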
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Scene description language** Scene description language: A scene description language is any description language used to describe a scene to a 3D renderer, such as a ray tracer. The scene is written in a text editor (which may include syntax highlighting), as opposed to being modeled in a graphical way, but a 3D modelling program may allow for a scene to be exported to a specified scene description language. Scene description language: Some scene description languages may include variables, constants, conditional statements, and while and for loops. For example, 3DMLW and X3D are XML-based scene description languages; YafaRay also employs an XML-based language. Tao Presentations uses XL as a dynamic document description language. POV-Ray has its own Turing-complete language.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SIR proteins** SIR proteins: Silent Information Regulator (SIR) proteins are involved in regulating gene expression. SIR proteins organize heterochromatin near telomeres, ribosomal DNA (rDNA), and at silent loci including hidden mating type loci in yeast. The SIR family of genes encodes catalytic and non-catalytic proteins that are involved in de-acetylation of histone tails and the subsequent condensation of chromatin around a SIR protein scaffold. Some SIR family members are conserved from yeast to humans. History: SIR proteins have been identified in many screens, and have historically been known as SIR (silent information regulator), MAR (mating-type regulator), STE (sterile), CMT (change of mating type) or SSP (sterile suppressor) according to which screen led to their identification. Ultimately, the name SIR had the most staying power, because it most accurately describes the function of the encoded proteins. One of the early yeast screens to identify SIR genes was performed by Anita Hopper and Benjamin Hall, who used mutagenesis to screen for alleles that allow sporulation in a normally sporulation-deficient heterothallic α/α diploid (ho/ho MATα/MATα). Their screen identified a mutation in a novel gene, not linked to HO, that allowed the α/α diploid to sporulate as if it were an α/a diploid, and they inferred that the mutation effected a change in mating type by an HO-independent mechanism. Later, it was discovered that the CMT allele identified by Hopper & Hall did not cause a mating type conversion at the MAT locus, but rather allowed the expression of cryptic mating type genes that are silenced in wild-type yeast. In their paper clarifying the mechanism of the CMT mutation, Haber and colleagues acknowledge the contribution of Amar Klar, who presented his MAR mutant strains, which had properties similar to those of the CMT mutants, at the Cold Spring Harbor Laboratory yeast genetics meeting; this led them to consider the hypothesis that the cmt mutants may act by de-repressing silent information. In the same year that Haber and colleagues demonstrated that the cmt mutant restores sporulation by de-repressing hidden mating type loci, two other groups published screens for genes involved in the regulation of silent mating type cassettes. The first study, performed by Amar Klar, Seymour Fogel and Kathy Macleod, identified a mutation in a spontaneous a/a diploid that caused the products of sporulation to be haploids with an apparent diploid phenotype, as assayed by their ability to mate. The authors reasoned that the mutation caused the de-repression of the then-recently appreciated silent mating type loci HMa and HMα, which would allow an a/a diploid to sporulate and would cause haploid segregants inheriting the mutant allele to behave as a/α diploids despite being haploid. The authors named the mutation MAR for its apparent role in mating type regulation, mapped the mutation to chromosome IV, and determined that it was located 27.3 cM from a commonly used trp1 marker. A few months later, Jasper Rine and Ira Herskowitz published a different screen for genes that affect the ability of yeast to mate, and ultimately discovered the gene family that they called SIR, a name that remains in the modern parlance. Unlike the Klar et al. screen, which identified a mutant by its inability to mate, Rine & Herskowitz took a more directed approach towards discovering factors responsible for mating type silencing.
Specifically, Rine & Herskowitz reasoned that a haploid yeast cell with a recessive mutation in matα1 could be complemented if the silent copy of MATα were de-repressed. Starting in a ho matα1 haploid strain, Rine & Herskowitz screened mutants arising from mutagenesis and identified five mutants that restored a MATα phenotype in matα cells, but were not linked to the MAT locus and did not cause a gene conversion between the HMα locus and matα. These mutants, they reasoned, were specifically defective in silencing the cryptic mating type genes. History: Eventually, all of the mutants resulting from the original Hopper & Hall screen as well as the later Rine & Herskowitz screen and the Klar et al. screen were characterized and mapped, and it was shown that the causative genes were the same. In fact, the genes that are now referred to as SIR1-4 have at one time been referred to as MAR, CMT or STE according to the screen that identified the mutants. History: Although Klar, Hartwell and Hopper identified mutations in SIR genes and applied other names to the genes before Rine performed his screen, the SIR name was eventually adopted because Rine eventually identified the most complete set of functionally related genes (SIR1-4), and because the work by Rine and Herskowitz most accurately described the function of the SIR family genes. Later it would be shown that in yeast and in higher organisms, SIR proteins are important for transcriptional regulation of many chromatin domains. Molecular mechanism: In budding yeast, SIR proteins are found at the silent mating type loci, telomeres, and at the rDNA locus. At the silent mating type loci and at the telomeres, SIR proteins participate in transcriptional silencing of genes within their domain of localization. At the rDNA locus, SIR proteins are thought to primarily be important for repressing recombination between rDNA repeats rather than for suppressing transcription. Molecular mechanism: Transcriptional silencing in budding yeast In transcriptional silencing, SIR2,3,4 are required in stoichiometric amounts to silence specific chromosomal regions. In yeast, SIR proteins bind sites on nucleosome tails and form a multimeric compound of SIR2,3,4 that condenses chromatin and is thought to physically occlude promoters in the silenced interval, preventing their interaction with transcription machinery. The establishment of SIR-repressed heterochromatin domains is a complicated process that involves different subsets of proteins and regulatory proteins depending on the locus in the genome. At the silent mating type loci and at yeast telomeres, the transcription factors Abf1 (ARS binding factor) and Rap1 (repressor-activator protein) associate with specific nucleotide sequences in the silencers that flank heterochromatic regions. Rap1 contains a Sir3-binding domain that recruits SIR3 to the silencers. Once at the silencers, Sir3 recruits Sir4-Sir2 dimers to the chromatin nucleation site. Sir2 then deacetylates histone H3 and H4 tails, and free Sir3 binds the now-deacetylated lysine residues H4K16,79, and recruits additional Sir4-Sir2 dimers to promote the further spreading of the heterochromatin domain.Once it has spread to cover a genomic locus, the SIR2,3,4 effectively prevents transcription from the region it occupies, in a process that is thought to depend on the physical occlusion of DNA by SIR proteins. Recently, it has been shown that certain promoters are capable of directing transcription inside regions that are otherwise silenced by SIR proteins. 
Specifically, if an inducible promoter is induced inside a silent chromatin domain, it can achieve ~200x increase in expression levels with little detectable change in covalent histone modifications. Molecular mechanism: Roles and interactions between SIR proteins SIR2 SIR2 is an NAD-dependent lysine deacetylase. It was the first-discovered member of the Sirtuin protein family and it is highly conserved, with homologs found in organisms ranging from humans to bacteria and archaea. It interacts with a variety of protein substrates, but does not exhibit strong affinity for DNA, chromatin, or other silencer-binding factors. Instead, it relies on other SIR proteins to find its appropriate silencing target.In the SIR protein complex, SIR2 removes acetyl groups from the lysine on histone tails H3 and H4, 'priming' the nucleosome for chromatin packaging by the SIR3 component of the complex. Molecular mechanism: Stabilization of rDNA in budding yeast Beyond its canonical role in the SIR complex, SIR2 also plays a role in rDNA repression. As part of the cell's regulation mechanism, rDNA repeats are excised from the chromosome so they cannot be expressed. SIR2 forms a complex with NET1 (a nuclear protein) and CDC14 (a phosphatase) to form the regulator of nucleolar silencing and telophase (RENT) complex. The RENT complex sequesters excised rDNA in 'extrachromosomal circles,' preventing recombination. Accumulation of these circles has been linked to premature aging. Sirtuin 2 (SIRT2), SIR2's human analog, has also been linked to age-related disease. Molecular mechanism: SIR3 SIR3 is principally involved in heterochromatin spreading, the silencing activity of the SIR protein complex. When overexpressed, SIR3 leads to spreading beyond the normal nucleation site. SIR3 can continue to operate at very low levels of SIR2 and SIR4, but not without them. It preferentially binds to unmodified nucleosomes (no acetylation at H4K16 or methylation at H3K79), and relies on SIR2's deacetylation of H4K16 to enhance silencing. H3K79 methylation by DOT1 methyltransferase inhibits SIR3, resulting in an unsilenced chromatin region. SIR3 is recruited to target sequence by the transcription factors RAP1 or ABF1. Molecular mechanism: SIR4 SIR4 is involved in scaffolding the assembly of silenced chromatin. It binds to DNA with high affinity, but low specificity. It is most stable when co-expressed with SIR2, but neither SIR2 nor SIR3 are required for it to operate at the telomeres. Each half of the SIR4 protein has distinct responsibilities in heterochromatin spreading. SIR4's N-terminus is required for telomeric silencing, but not for homothallic mating-type (HM) silencing. Conversely, its C-terminus supports HM but not telomeric repression. The N-terminus is positively charged and can be recruited to the telomeric repression site by SIR1 and YKU80. The C-terminus contains the coiled-coil region, which interacts with SIR3 in the heterotrimeric SIR complex and can also interact with RAP1 and YKU70 for recruitment to the telomeric region of the chromosome. The C-terminus also contains the SIR2-interacting domain (SID), where SIR4 can bind to the extended N-terminus of SIR2. SIR2 can catalyze reactions without being bound to SIR4, but SIR2's catalytic activity is enhanced when interacting with SIR4. Conservation: SIR proteins are conserved from yeast to humans, and lend their name to a class of mammalian histone deacetylases (Sirtuins, homologs of Sir2). 
Sirtuins have been implicated in myriad human traits including Alzheimer's disease and diabetes, and have been proposed to regulate lifespan.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Brutos Framework** Brutos Framework: The Brutos Application Framework is an MVC controller written in Java, designed to reduce the complexity of web development, with configurable mapping and view resolution as well as support for uploading and downloading files. It can be configured using XML, annotations, and convention over configuration (CoC). The framework follows these principles: flexibility, loose coupling, and productivity. Release bundle downloads: The Brutos team provides release bundles hosted on the SourceForge File Release System, in ZIP format. Each release bundle contains JARs, documentation, source code, and other information. You can download releases of Brutos from the list at sourceforge.net/projects/brutos/files/brutos/. Maven repository artifacts: A number of artifacts are produced, all under the org.brandao directory. brutos-core: The main artifact; it is needed to build applications using the Brutos native APIs. brutos-annotation: An optional artifact that allows building applications using annotations. This artifact depends on brutos-core. brutos-web: An optional artifact that allows building web applications. This artifact depends on brutos-core. The official repository is www.brutosframework.com.br/maven/2. How to configure?: Register the listener in web.xml. Register the filter in web.xml. Attention: if you are using a container that supports the Servlet 3.0 specification, registering ContextLoadListener and DispatcherServlet or BrutosRequestFilter is not necessary; they will be registered automatically. Register the artifacts in pom.xml. Create the file brutos-config.xml in /WEB-INF. Examples: Web Service Methods: Controller: Exception Handler Controller Level Action Level Method Build Action Result Controller Polymorphic Mapping Methods: Controller Beans Abstract action URI mapping: Controller Using URI template URI mapping: Controller File upload and download Form and Session
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fences (software)** Fences (software): Fences is a utility for Windows that helps to organize icons on the desktop. It is developed by Stardock and distributed as part of their Object Desktop suite. Version 1 was freeware; since then it has been a commercial product. Functionality: Fences defines translucent areas on the desktop that contain groups of icons. These fences can be individually created, named, moved, and resized — they will also display a scroll bar if necessary. Double-clicking on the desktop hides all non-excluded fences and icons, while another double-click causes them to reappear. Snapshots can restore fences to a particular configuration after use. Reception: A PC World reviewer praised the free edition of Fences, saying that "it wasn't five minutes after installing this program that I realized I'll be using it for the rest of my computing life. It's that good." A preview edition was listed as TechSpot's download of the week in February 2009. Download.com approved of the snapshots, and the ability to change colour schemes, but criticized the process of fence creation and the inability to sort icons into fences by type — a feature added in the Pro edition. Some ZDNet readers also noticed visual similarities to the Folderview feature introduced in KDE 4.1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Exatron** Exatron: Exatron manufactures a series of automated handling, testing, programming, and marking equipment for the packaged integrated circuit industry. Products: Exatron designs, develops, manufactures, markets, and services a wide variety of I.C. component handling and testing equipment. Exatron products are used in the testing and programming of PLDs and other integrated circuits. Stringy Floppy and Entrepo: In the late 1970s and early 1980s, Exatron designed and manufactured the Exatron Stringy Floppy (ESF) tape storage device for a variety of microcomputers. Coleco also planned to use the ESF in their Colecovision Super Game Module; however, it ultimately proved to be unsuitable for the amounts and types of accesses that games inflict (the Super Game Module ended up being shelved in favor of the Adam computer, which used a different type of tape drive that Coleco developed internally).Around this same time, Exatron announced that it was changing its name to Entrepo. From Video Games magazine, June 1983, page 49: "Last February the Exatron Corporation changed its name to Entrepo (meaning "a storage place")."This name change caused some people to believe that there were two manufacturers of Stringy Floppy tape drives, when in fact there were not. Note that since the company's present name is still Exatron, it is unclear if this name change never actually took place or if it was changed back at some point. Stringy Floppy and Entrepo: The ESF was available by mail-order through A&J Micro Drive, a related company.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Clinical Information Access Portal** Clinical Information Access Portal: The Clinical Information Access Portal, commonly referred to as CIAP, is a project of the New South Wales Department of Health that provides online clinical resources for health professionals working within the New South Wales public health system (NSW Health). Major resources available through CIAP include: Australian Medicines Handbook Harrison's Online Journal databases – Medline, EMBASE, PsycINFO MD Consult MIMS Online Therapeutic Guidelines Micromedex BMJ Best Practice Various full text journals and eBooks
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ensemble (mathematical physics)** Ensemble (mathematical physics): In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a set of systems of particles used in statistical mechanics to describe a single system. The concept of an ensemble was introduced by J. Willard Gibbs in 1902.A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics. Physical considerations: The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes. Physical considerations: The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function. Physical considerations: The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can be said to be in statistical equilibrium. Physical considerations: Terminology The word "ensemble" is also used for a smaller set of possibilities sampled from the full set of possible states. For example, a collection of walkers in a Markov chain Monte Carlo iteration is called an ensemble in some of the literature. The term "ensemble" is often used in physics and the physics-influenced literature. In probability theory, the term probability space is more prevalent. Main types: The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics. "We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing in not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. 
Gibbs (1903)Three important thermodynamic ensembles were defined by Gibbs:Microcanonical ensemble (or NVE ensemble) —a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each of the members of the ensemble are required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium. Main types: Canonical ensemble (or NVT ensemble)—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of the energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium, the system must remain totally closed (unable to exchange particles with its environment) and may come into weak thermal contact with other systems that are described by ensembles with the same temperature. Main types: Grand canonical ensemble (or μVT ensemble)—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.The calculations that can be made using each of these ensembles are explored further in their respective articles. Main types: Other thermodynamic ensembles can be also defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived. For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions which are present in the system. Representations: The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables. In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily. Representations: Requirements for representations Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles A, B of the same system: Test whether A, B are statistically equivalent. If p is a real number such that 0 < p < 1, then produce a new ensemble by probabilistic sampling from A with probability p and from B with probability 1 – p.Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set. 
Representations: Quantum mechanical A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by ρ̂. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable X in quantum mechanics can be written as an operator, X̂. The expectation value of this operator on the statistical ensemble ρ̂ is given by the trace ⟨X⟩ = Tr(X̂ρ̂). Representations: This can be used to evaluate averages (operator X̂), variances (using operator X̂²), covariances (using operator X̂Ŷ), etc. The density matrix must always have a trace of 1: Tr ρ̂ = 1 (this essentially is the condition that the probabilities must add up to one). In general, the ensemble evolves over time according to the von Neumann equation. Representations: Equilibrium ensembles (those that do not evolve over time, dρ̂/dt = 0) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator Ĥ (Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator N̂. Such equilibrium ensembles are a diagonal matrix in the orthogonal basis of states that simultaneously diagonalize each conserved variable. In bra–ket notation, the density matrix is ρ̂ = ∑i Pi |ψi⟩⟨ψi|, where the |ψi⟩, indexed by i, are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.) Classical mechanical In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space. While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation. Representations: In a mechanical system with a defined number of parts, the phase space has n generalized coordinates called q1, ... qn, and n associated canonical momenta called p1, ... pn. The ensemble is then represented by a joint probability density function ρ(p1, ... pn, q1, ... qn). Representations: If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers N1 (first kind of particle), N2 (second kind of particle), and so on up to Ns (the last kind of particle; s is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function ρ(N1, ... Ns, p1, ... pn, q1, ... qn). The number of coordinates n varies with the numbers of particles. Representations: Any mechanical quantity X can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by ρ: ⟨X⟩ = ∑N1=0∞ … ∑Ns=0∞ ∫…∫ ρ X dp1 … dqn. The condition of probability normalization applies, requiring ∑N1=0∞ … ∑Ns=0∞ ∫…∫ ρ dp1 … dqn = 1. Representations: Phase space is a continuous space containing an infinite number of distinct physical states within any small region.
In order to connect the probability density in phase space to a probability distribution over microstates, it is necessary to somehow partition the phase space into blocks that represent the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume. In particular, the probability density function in phase space, ρ, is related to the probability distribution over microstates, P, by a factor ρ = (1/(hⁿC)) P, where h is an arbitrary but predetermined constant with the units of energy×time, setting the extent of the microstate and providing correct dimensions to ρ. Representations: C is an overcounting correction factor (see below), generally dependent on the number of particles and similar concerns. Since h can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of h influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of h when comparing different systems. Representations: Correcting overcounting in phase space Typically, the phase space contains duplicates of the same physical state in multiple distinct locations. This is a consequence of the way that a physical state is encoded into mathematical coordinates; the simplest choice of coordinate system often allows a state to be encoded in multiple ways. An example of this is a gas of identical particles whose state is written in terms of the particles' individual positions and momenta: when two particles are exchanged, the resulting point in phase space is different, and yet it corresponds to an identical physical state of the system. It is important in statistical mechanics (a theory about physical states) to recognize that the phase space is just a mathematical construction, and to not naively overcount actual physical states when integrating over phase space. Overcounting can cause serious problems: Dependence of derived quantities (such as entropy and chemical potential) on the choice of coordinate system, since one coordinate system might show more or less overcounting than another. Representations: Erroneous conclusions that are inconsistent with physical experience, as in the mixing paradox. Foundational issues in defining the chemical potential and the grand canonical ensemble. It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting. Representations: A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' x coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor C introduced above would be set to C = 1, and the integral would be restricted to the selected subregion of phase space.)
A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor C introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates, so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, C does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers. Representations: As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using C = N1!N2!…Ns!. Representations: This is known as "correct Boltzmann counting". Ensembles in statistics: The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like. Ensembles in statistics: In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks. Ensemble average: In statistical mechanics, the ensemble average is defined as the mean of a quantity that is a function of the microstate of a system, according to the distribution of the system on its micro-states in this ensemble. Since the ensemble average is dependent on the ensemble chosen, its mathematical expression varies from ensemble to ensemble. However, the mean obtained for a given physical quantity does not depend on the ensemble chosen in the thermodynamic limit. The grand canonical ensemble is an example of an open system. Ensemble average: Classical statistical mechanics For a classical system in thermal equilibrium with its environment, the ensemble average takes the form of an integral over the phase space of the system: A¯ = ∫ A e−βH(q1, q2, … qN, p1, p2, … pN) dτ / ∫ e−βH(q1, q2, … qN, p1, p2, … pN) dτ, where: A¯ is the ensemble average of the system property A, β is 1/(kT), known as thermodynamic beta, H is the Hamiltonian of the classical system in terms of the set of coordinates qi and their conjugate generalized momenta pi, and dτ is the volume element of the classical phase space of interest. The denominator in this expression is known as the partition function, and is denoted by the letter Z.
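As a rough numerical illustration of the classical formula above (not taken from the source), the two phase-space integrals can be evaluated on a grid for a one-dimensional harmonic oscillator with unit mass and spring constant, H(q, p) = p²/2 + q²/2. Equipartition then predicts an average energy of kT = 1/β, and the constant factor 1/(hⁿC) cancels between numerator and denominator. The helper name canonical_average is illustrative only.

```python
# Minimal numerical sketch (not from the source) of the classical canonical
# ensemble average A¯ = ∫ A e^{-βH} dτ / ∫ e^{-βH} dτ for a 1D harmonic
# oscillator H(q, p) = p²/2 + q²/2 (unit mass and spring constant assumed).
import numpy as np

def canonical_average(A, H, beta, q, p):
    """Phase-space average of A(q, p) with Boltzmann weight exp(-beta * H(q, p))."""
    Q, P = np.meshgrid(q, p, indexing="ij")
    weight = np.exp(-beta * H(Q, P))                       # unnormalized ensemble density
    Z = np.trapz(np.trapz(weight, p, axis=1), q)           # partition function (up to 1/(h^n C))
    num = np.trapz(np.trapz(A(Q, P) * weight, p, axis=1), q)
    return num / Z                                         # the constant prefactor cancels here

H = lambda q, p: 0.5 * p**2 + 0.5 * q**2
q = np.linspace(-10, 10, 801)
p = np.linspace(-10, 10, 801)

for beta in (0.5, 1.0, 2.0):
    mean_energy = canonical_average(H, H, beta, q, p)
    print(f"beta = {beta}:  <H> = {mean_energy:.4f}   (equipartition predicts {1/beta:.4f})")
```

The same ratio structure applies to any observable A(q, p); only the Boltzmann weight and the phase-space measure enter.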
Ensemble average: Quantum statistical mechanics In quantum statistical mechanics, for a quantum system in thermal equilibrium with its environment, the weighted average takes the form of a sum over quantum energy states, rather than a continuous integral: A¯ = ∑i Ai e−βEi / ∑i e−βEi. Canonical ensemble average The generalized version of the partition function provides the complete framework for working with ensemble averages in thermodynamics, information theory, statistical mechanics and quantum mechanics. Ensemble average: The microcanonical ensemble represents an isolated system in which energy (E), volume (V) and the number of particles (N) are all constant. The canonical ensemble represents a closed system which can exchange energy (E) with its surroundings (usually a heat bath), but the volume (V) and the number of particles (N) are all constant. The grand canonical ensemble represents an open system which can exchange energy (E) as well as particles with its surroundings, but the volume (V) is kept constant. Operational interpretation: The discussion given so far, while rigorous, has taken for granted that the notion of an ensemble is valid a priori, as is commonly done in physical contexts. What has not been shown is that the ensemble itself (not the consequent results) is a precisely defined object mathematically. For instance, it is not clear where this very large set of systems exists (for example, is it a gas of particles inside a container?), and it is not clear how to physically generate an ensemble. In this section, we attempt to partially answer this question. Operational interpretation: Suppose we have a preparation procedure for a system in a physics lab: for example, the procedure might involve a physical apparatus and some protocols for manipulating the apparatus. As a result of this preparation procedure, some system is produced and maintained in isolation for some small period of time. By repeating this laboratory preparation procedure we obtain a sequence of systems X1, X2, …, Xk, which in our mathematical idealization we assume is an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble. In a laboratory setting, each one of these prepped systems might be used as input for one subsequent testing procedure. Again, the testing procedure involves a physical apparatus and some protocols; as a result of the testing procedure we obtain a yes or no answer. Given a testing procedure E applied to each prepared system, we obtain a sequence of values Meas(E, X1), Meas(E, X2), …, Meas(E, Xk). Each one of these values is a 0 (or no) or a 1 (yes). Operational interpretation: Assume that the following time average exists: lim N→∞ (1/N) ∑k=1…N Meas(E, Xk). For quantum mechanical systems, an important assumption made in the quantum logic approach to quantum mechanics is the identification of yes–no questions with the lattice of closed subspaces of a Hilbert space. With some additional technical assumptions one can then infer that states are given by density operators S, so that this limiting average equals Tr(ES). Operational interpretation: We see this reflects the definition of quantum states in general: a quantum state is a mapping from the observables to their expectation values.
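For a finite-dimensional quantum system, the canonical average can be evaluated either as the Boltzmann-weighted sum over energy eigenstates given above or as the trace Tr(X̂ρ̂) with the equilibrium density matrix ρ̂ = e^(−βĤ)/Tr(e^(−βĤ)) from the Representations section. The sketch below uses illustrative assumptions only (a random 4×4 Hermitian matrix standing in for Ĥ, a diagonal matrix standing in for the observable); it is not from the source, and simply checks that the two routes agree.

```python
# Minimal sketch (illustrative assumptions, not from the source): the canonical
# ensemble average of an observable A, computed once as Tr(A ρ) with
# ρ = e^{-βH}/Tr(e^{-βH}), and once as the Boltzmann-weighted sum over
# energy eigenstates. The two results coincide.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                    # a random Hermitian "Hamiltonian"
A = np.diag([0.0, 1.0, 2.0, 3.0])           # some Hermitian observable
beta = 1.3

# Route 1: density-matrix form, <A> = Tr(A ρ)
rho = expm(-beta * H)
rho /= np.trace(rho)
avg_trace = np.trace(A @ rho).real

# Route 2: explicit Boltzmann-weighted sum over energy eigenstates
E, V = np.linalg.eigh(H)                    # eigenvalues E_i, eigenvectors as columns
weights = np.exp(-beta * E)
A_diag = np.einsum("ij,jk,ki->i", V.conj().T, A, V).real   # <ψ_i| A |ψ_i>
avg_sum = (A_diag * weights).sum() / weights.sum()

print(avg_trace, avg_sum)                   # agree to numerical precision
```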
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Eberhard's theorem** Eberhard's theorem: In mathematics, and more particularly in polyhedral combinatorics, Eberhard's theorem partially characterizes the multisets of polygons that can form the faces of simple convex polyhedra. It states that, for given numbers of triangles, quadrilaterals, pentagons, heptagons, and other polygons other than hexagons, there exists a convex polyhedron with those given numbers of faces of each type (and an unspecified number of hexagonal faces) if and only if those numbers of polygons obey a linear equation derived from Euler's polyhedral formula. The theorem is named after Victor Eberhard, a blind German mathematician, who published it in 1888 in his habilitation thesis and in expanded form in an 1891 book on polyhedra. Definitions and statement: For an arbitrary convex polyhedron, one can define numbers p3, p4, p5, etc., where pi counts the faces of the polyhedron that have exactly i sides. A three-dimensional convex polyhedron is defined to be simple when every vertex of the polyhedron is incident to exactly three edges. In a simple polyhedron, every vertex is incident to three angles of faces, and every edge is incident to two sides of faces. Since the numbers of angles and sides of the faces are given, one can calculate the three numbers v (the total number of vertices), e (the total number of edges), and f (the total number of faces), by summing over all faces and multiplying by an appropriate factor: v = (1/3)∑i i·pi, e = (1/2)∑i i·pi, and f = ∑i pi. Definitions and statement: Plugging these values into Euler's polyhedral formula v − e + f = 2 and clearing denominators leads to the equation ∑i (6 − i)pi = 12, which must be satisfied by the face counts of every simple polyhedron. However, this equation is not affected by the value of p6 (as its multiplier 6 − i is zero), and, for some choices of the other face counts, changing p6 can change whether or not a polyhedron with those face counts exists. That is, obeying this equation on the face counts is a necessary condition for the existence of a polyhedron, but not a sufficient condition, and a complete characterization of which face counts are realizable would need to take into account the value of p6. Eberhard's theorem implies that the equation above is the only necessary condition that does not depend on p6. It states that, if an assignment of numbers to p3, p4, p5, p7, … (omitting p6) obeys the equation ∑i (6 − i)pi = 12, then there exists a value of p6 and a simple convex polyhedron with exactly pi i-sided faces for all i. Examples: There are three simple Platonic solids, the tetrahedron, cube, and dodecahedron. The tetrahedron has p3 = 4, the cube has p4 = 6, and the dodecahedron has p5 = 12, with all other values of pi being zero. These three assignments of numbers to pi all obey the equation that Eberhard's theorem requires them to obey. The existence of these polyhedra shows that, for these three assignments of numbers to pi, there exists a polyhedron with p6 = 0. The case of the dodecahedron, with p5 = 12 and all others except p6 zero, describes more generally the fullerenes. There is no fullerene with p6 = 1, but these graphs are realizable for any other value of p6; see, for instance, the 26-fullerene graph, with p6 = 3. There is no simple convex polyhedron with three triangle faces, three pentagon faces, and no other faces. That is, it is impossible to have a simple convex polyhedron with p3 = p5 = 3, and pi = 0 for i ∉ {3, 5}.
However, Eberhard's theorem states that it should be possible to form a simple polyhedron by adding some number of hexagons, and in this case one hexagon suffices: bisecting a cube on a regular hexagon passing through six of its faces produces two copies of a simple roofless polyhedron with three triangle faces, three pentagon faces, and one hexagon face. That is, setting p6=1 suffices in this case to produce a realizable combination of face counts. Related results: An analogous result to Eberhard's theorem holds for the existence of polyhedra in which all vertices are incident to exactly four edges. In this case the equation derived from Euler's formula is not affected by the number p4 of quadrilaterals, and for every assignment to the numbers of faces of other types that obeys this equation it is possible to choose a number of quadrilaterals that allows a 4-regular polyhedron to be realized. A strengthened version of Eberhard's theorem states that, under the same conditions as the original theorem, there exists a number m such that all choices of p6 that are greater than or equal to m and have the same parity as m are realizable by simple convex polyhedra. A theorem of David W. Barnette provides a lower bound on the number of hexagons that are needed, whenever the number of faces of order seven or higher is at least three. It states that, in these cases, $p_6 \geq 2 + \frac{p_3}{2} - \frac{p_5}{2} - \sum_{i>6} p_i.$ Related results: For polyhedra with few pentagons and many high-order faces, this inequality can force the number of hexagons to be arbitrarily large. More strongly, it can be used to find assignments to the numbers of faces for which the required number of hexagons cannot be bounded by any function of the maximum number of sides of a face. Analogues of Eberhard's theorem have also been studied for other systems of faces and face counts than simple convex polyhedra, for instance for toroidal graphs and for tessellations.
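The counting identities and the equation above are easy to check numerically. The sketch below is a minimal illustration; the face-count vectors are the ones named in the text (tetrahedron, cube, dodecahedron, and the half-cube example with one hexagon).

```python
from fractions import Fraction

def simple_polyhedron_counts(p):
    """Given face counts p[i] = number of i-sided faces of a simple polyhedron,
    return (v, e, f) and the value of sum_i (6 - i) * p_i, which must equal 12."""
    total_sides = sum(i * pi for i, pi in p.items())
    v = Fraction(total_sides, 3)   # each vertex meets exactly three face angles
    e = Fraction(total_sides, 2)   # each edge is shared by exactly two faces
    f = sum(p.values())
    return v, e, f, sum((6 - i) * pi for i, pi in p.items())

examples = {
    "tetrahedron": {3: 4},
    "cube": {4: 6},
    "dodecahedron": {5: 12},
    "half cube (p3=3, p5=3, p6=1)": {3: 3, 5: 3, 6: 1},
}
for name, faces in examples.items():
    v, e, f, s = simple_polyhedron_counts(faces)
    assert v - e + f == 2 and s == 12   # Euler's formula and the derived equation
    print(f"{name}: v={v}, e={e}, f={f}, sum(6-i)p_i={s}")
```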
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cube root** Cube root: In mathematics, a cube root of a number x is a number y such that $y^3 = x$. All nonzero real numbers have exactly one real cube root and a pair of complex conjugate cube roots, and all nonzero complex numbers have three distinct complex cube roots. For example, the real cube root of 8, denoted $\sqrt[3]{8}$, is 2, because $2^3 = 8$, while the other cube roots of 8 are $-1+i\sqrt{3}$ and $-1-i\sqrt{3}$. The three cube roots of −27i are $3i$, $\tfrac{3\sqrt{3}}{2}-\tfrac{3}{2}i$, and $-\tfrac{3\sqrt{3}}{2}-\tfrac{3}{2}i$. Cube root: In some contexts, particularly when the number whose cube root is to be taken is a real number, one of the cube roots (in this particular case the real one) is referred to as the principal cube root, denoted with the radical sign $\sqrt[3]{\ }$. Cube root: The cube root is the inverse function of the cube function if considering only real numbers, but not if considering also complex numbers: although one always has $\left(\sqrt[3]{x}\right)^3=x$, the cube of a nonzero number has more than one complex cube root and its principal cube root may not be the number that was cubed. For example, $(-1+i\sqrt{3})^3=8$, but $\sqrt[3]{8}=2$. Formal definition: The cube roots of a number x are the numbers y which satisfy the equation $y^3=x$. Properties: Real numbers For any real number x, there is one real number y such that $y^3 = x$. The cube function is increasing, so does not give the same result for two different inputs, and it covers all real numbers. In other words, it is a bijection, or one-to-one. Then we can define an inverse function that is also one-to-one. For real numbers, we can define a unique cube root of all real numbers. If this definition is used, the cube root of a negative number is a negative number. Properties: If x and y are allowed to be complex, then there are three solutions (if x is non-zero) and so x has three cube roots. A real number has one real cube root and two further cube roots which form a complex conjugate pair. For instance, the cube roots of 1 are: $1,\ -\tfrac{1}{2}+\tfrac{\sqrt{3}}{2}i,\ -\tfrac{1}{2}-\tfrac{\sqrt{3}}{2}i.$ The last two of these roots lead to a relationship between all roots of any real or complex number. If a number is one cube root of a particular real or complex number, the other two cube roots can be found by multiplying that cube root by one or the other of the two complex cube roots of 1. Complex numbers For complex numbers, the principal cube root is usually defined as the cube root that has the greatest real part, or, equivalently, the cube root whose argument has the least absolute value. It is related to the principal value of the natural logarithm by the formula $\sqrt[3]{x} = \exp\!\left(\tfrac{1}{3}\ln x\right)$. If we write x as $r\exp(i\theta)$, where r is a non-negative real number and θ lies in the range $-\pi<\theta\leq\pi$, then the principal complex cube root is $\sqrt[3]{x} = \sqrt[3]{r}\,\exp\!\left(\tfrac{i\theta}{3}\right)$. This means that in polar coordinates, we are taking the cube root of the radius and dividing the polar angle by three in order to define a cube root. With this definition, the principal cube root of a negative number is a complex number, and for instance $\sqrt[3]{-8}$ will not be −2, but rather $1 + i\sqrt{3}$. This difficulty can also be solved by considering the cube root as a multivalued function: if we write the original complex number x in three equivalent forms, namely $x = r\exp(i\theta),\quad x = r\exp(i\theta+2i\pi),\quad x = r\exp(i\theta-2i\pi).$ The principal complex cube roots of these three forms are then respectively $\sqrt[3]{r}\exp\!\left(\tfrac{i\theta}{3}\right),\quad \sqrt[3]{r}\exp\!\left(\tfrac{i\theta}{3}+\tfrac{2i\pi}{3}\right),\quad \sqrt[3]{r}\exp\!\left(\tfrac{i\theta}{3}-\tfrac{2i\pi}{3}\right).$ Unless x = 0, these three complex numbers are distinct, even though the three representations of x were equivalent. For example, $\sqrt[3]{-8}$ may then be calculated to be −2, $1 + i\sqrt{3}$, or $1 - i\sqrt{3}$.
This is related to the concept of monodromy: if one follows the cube root function by continuity along a closed path around zero, after a turn the value of the cube root is multiplied (or divided) by $e^{2i\pi/3}$. Impossibility of compass-and-straightedge construction: Cube roots arise in the problem of finding an angle whose measure is one third that of a given angle (angle trisection) and in the problem of finding the edge of a cube whose volume is twice that of a cube with a given edge (doubling the cube). In 1837 Pierre Wantzel proved that neither of these can be done with a compass-and-straightedge construction. Numerical methods: Newton's method is an iterative method that can be used to calculate the cube root. For real floating-point numbers this method reduces to the following iterative algorithm to produce successively better approximations of the cube root of a: $x_{n+1}=\frac{1}{3}\left(\frac{a}{x_n^2}+2x_n\right).$ The method is simply averaging three factors chosen such that $x_n \times x_n \times \frac{a}{x_n^2}=a$ at each iteration. Halley's method improves upon this with an algorithm that converges more quickly with each iteration, albeit with more work per iteration: $x_{n+1}=x_n\left(\frac{x_n^3+2a}{2x_n^3+a}\right).$ This converges cubically, so two iterations do as much work as three iterations of Newton's method. Each iteration of Newton's method costs two multiplications, one addition and one division, assuming that $\tfrac{1}{3}a$ is precomputed, so three iterations plus the precomputation require seven multiplications, three additions, and three divisions. Each iteration of Halley's method requires three multiplications, three additions, and one division, so two iterations cost six multiplications, six additions, and two divisions. Thus, Halley's method has the potential to be faster if one division is more expensive than three additions. Numerical methods: With either method a poor initial approximation of x0 can give very poor algorithm performance, and coming up with a good initial approximation is somewhat of a black art. Some implementations manipulate the exponent bits of the floating-point number; i.e. they arrive at an initial approximation by dividing the exponent by 3. Also useful is this generalized continued fraction, based on the nth root method: If x is a good first approximation to the cube root of a and $y = a - x^3$, then: $\sqrt[3]{a} = x+\cfrac{y}{3x^2+\cfrac{2y}{2x+\cfrac{4y}{9x^2+\cfrac{5y}{2x+\cfrac{7y}{15x^2+\cfrac{8y}{2x+\ddots}}}}}} = x+\cfrac{2x\cdot y}{3(2x^3+y)-y-\cfrac{2\cdot 4y^2}{9(2x^3+y)-\cfrac{5\cdot 7y^2}{15(2x^3+y)-\cfrac{8\cdot 10y^2}{21(2x^3+y)-\ddots}}}}.$ Numerical methods: The second equation combines each pair of fractions from the first into a single fraction, thus doubling the speed of convergence. Appearance in solutions of third and fourth degree equations: Cubic equations, which are polynomial equations of the third degree (meaning the highest power of the unknown is 3), can always be solved for their three solutions in terms of cube roots and square roots (although simpler expressions only in terms of square roots exist for all three solutions, if at least one of them is a rational number). If two of the solutions are complex numbers, then all three solution expressions involve the real cube root of a real number, while if all three solutions are real numbers then they may be expressed in terms of the complex cube root of a complex number. Appearance in solutions of third and fourth degree equations: Quartic equations can also be solved in terms of cube roots and square roots. History: The calculation of cube roots can be traced back to Babylonian mathematicians from as early as 1800 BCE.
In the fourth century BCE Plato posed the problem of doubling the cube, which required a compass-and-straightedge construction of the edge of a cube with twice the volume of a given cube; this required the construction, now known to be impossible, of the length $\sqrt[3]{2}$. History: A method for extracting cube roots appears in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BCE and commented on by Liu Hui in the 3rd century CE. The Greek mathematician Hero of Alexandria devised a method for calculating cube roots in the 1st century CE. His formula is again mentioned by Eutokios in a commentary on Archimedes. In 499 CE Aryabhata, a mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, gave a method for finding the cube root of numbers having many digits in the Aryabhatiya (section 2.5).
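A minimal sketch of the Newton and Halley iterations given in the Numerical methods section above; the fixed iteration counts and the starting guess are arbitrary illustrative choices rather than part of the article.

```python
def cbrt_newton(a, x0=1.0, iterations=25):
    """Newton's method for the cube root: x_{n+1} = (a / x_n**2 + 2*x_n) / 3."""
    x = x0
    for _ in range(iterations):
        x = (a / (x * x) + 2.0 * x) / 3.0
    return x

def cbrt_halley(a, x0=1.0, iterations=12):
    """Halley's method: x_{n+1} = x_n*(x_n**3 + 2a)/(2*x_n**3 + a); cubic convergence."""
    x = x0
    for _ in range(iterations):
        x = x * (x ** 3 + 2.0 * a) / (2.0 * x ** 3 + a)
    return x

print(cbrt_newton(8.0), cbrt_halley(27.0))   # approximately 2.0 and 3.0
```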
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Indigenous horticulture** Indigenous horticulture: Indigenous horticulture is practised in various ways across all inhabited continents. Indigenous refers to the native peoples of a given area and horticulture is the practice of small-scale intercropping. Africa: North Africa In North Africa, one such example is the farming practice of the Eggon, a Nigerian hill farming community. The Eggon live in the Mada hills, between Lafia and Akwanga. The hills lie between two rivers, the Mada and Arikya. The altitude helps crops retain moisture on the hills, due to early morning mists and fogs; this also makes for earlier and longer crop cultivation. They practice bush fallow agriculture as well as mixed farming land management styles. They focus on growing yams, cassava, maize, beans, and African rice; much of what is produced is exported as a cash crop and is their primary source of cash income. The Eggon use a terraced agricultural system to maximize space on the hills. The goats they raise are kept mostly for fertilizers used in farming. They are only killed on special occasions, such as weddings. The Eggon use the diversity in their environment to maximize their crop production. Africa: West Africa In West Africa, the Kissidougou live on the savannah, dotted by dense areas of "forest islands" created by them. The Kissidougou practice intercropping within the forested areas. However, they also operate farms maintained on slopes or plateaus located between the forest islands. They prepare the savannah lands for forestation through farming and burning of the grasses to fertilize the soils. The Kissidougou graze their cattle on the savannah to help to maintain flammable grasses around the farms and the villages. The Kissidougou create diversity in their environment by farming and transforming savannah into lush, dense forest. Africa: The prevalence of wetlands in West Africa has helped to support local indigenous horticulture. Seasonal flooding of major rivers in the region, such as the Niger, the Sudd, and the Senegal, has made flood-cropping possible in many areas. Indigenous people have utilized a variety of irrigation techniques in order to take advantage of this flooding. Additionally, they plan their plantings and harvests specifically around the flooding of local rivers. For example, some farmers choose to plant on rising floods and harvest as the flooding diminishes. This technique is utilized when cultivating rice. Africa: East Africa East Africa is one of the areas most affected by corporate farming. In western Uganda, there is a farming society called the Banyankore. Their land is part of the Ryampara hill country, between two flat, dry, and less-populated areas of land. On the hills, the average monthly rainfall is 970mm, with two short dry months in June and July and a short, less severe, dry month in January. Seven months out of the year the rainfall is over 1000mm per month. Their primary crop productions are bananas and coffee; they use these as cash crops. Farmers still use substantial areas for millet farming. This is their main food crop. They have intensive home gardens to produce for the families' needs and the outlying farming is used mostly for cash crop production, where coffee and bananas are cultivated using intercropping methods. Africa: Southern Africa In Southern Africa, conglomerates of farming companies have primarily written off the lands in Northwest Zambia.
The land is mostly made up of plateaus, with lower-lying lands in the Kabompo and Zambezi river valleys. Much of the area is in the Congo–Zambezi watershed, the area of land where all the sources of water run into the same river basin. The land has several types of soil. There are areas of yellow clay, with higher concentrations of sand, where higher rainfall causes soil erosion; this makes farming in those areas difficult. There are also areas with fertile red clay; these are rich soils good for cultivation of crops. Farmers in the area grow sorghum, millet, sweet potatoes, pumpkin, and maize. There is marginal maize production, used as a cash crop for the community. The area is usually covered by thick forestation that must be cleared before crops can be cultivated; they do not practice agroforestry. Traditionally, spouses must maintain their own gardens. The produce is shared communally, but the practice leads to intensive home gardens that are maintained by the women of the villages. South Pacific: Highlander horticulture The Enga of the Western Highlands Province in New Guinea receive most of their food from growing sweet potatoes, Ipomoea batatas, which they plant in mulch mounds at elevations up to 2,700 m or higher. The mounds that the Enga make to plant their crops of potatoes are formed by piling large amounts of grass taken from fallow, or unplanted, plots and then covering the grass with dirt. The size of the mounds depends on elevation; the higher the elevation, the bigger the mounds will be. Mounds above 2,500 m in altitude can reach a height of 0.85 m, while crops below 1,500 m are not mounded at all. The function of the mound is to protect the crops from the frequent frosts that occur at the high altitudes of the Enga. Because sweet potatoes have a very long maturation period of nine months, the Enga also invest their time and space on the mounds by planting other crops that have much shorter maturation periods, such as peas, in case a heavy frost does claim the crop. The planting of the mounds is done so that the plants which have a higher frost tolerance, such as the Irish potatoes, are planted casually throughout the mound and the less frost-tolerant sweet potatoes are planted in the best position to avoid the frost. Peas, beans, and cabbage, which are all highly tolerant to frost, will be planted outside the circle of sweet potatoes and lower on the mound, placing them closer to the cold temperatures of the ground. The Enga practice fallow rotation, in which a garden will be in crop for about four years followed by about four years of fallow grassland to let the soil replenish. Garden size for an average Enga garden is about 0.21 hectare, or about 2,100 square meters, and can contain a few hundred mounds. Another gardening strategy the Enga have implemented is the use of kin lands that are usually within one to two days' walk from the farmer's normal planting grounds. The use of multiple gardens at differing elevations and the ability to access clan lands in different areas for gardening have allowed the Enga to adapt to their environment and survive under harsh conditions. South Pacific: Lowland Swidden cultivation Swidden cultivation is an extensive agricultural practice that is also known as slash-and-burn agriculture.
The process is extensive because it requires a vast amount of land divided into several plots, with one plot planted for a period of years while the other plots lie fallow for a number of years. For the Bine-speaking peoples of the New Guinea lowlands, swidden cultivation is a main practice for crop propagation. The main crop the Bine grow is the taro root, although they grow about 15 subsidiary crops including: sweet potato, banana, manioc, maize, yam, pawpaw, sugar cane, pineapple, and others. The swiddens, which can be placed in either savannas or forests, are created by cutting down all the vegetation in the area where the swidden will be. The farmers then pile all of the cut vegetation on the swidden plot and leave it to dry out through the dry season. Right before the wet season begins, the piles are burned and the soil and ash are tilled together. The process of tilling the soil and ash mixes the carbon- and nitrogen-rich ash into the soil, thereby fertilizing the soil for the coming crop. After the soil is tilled, the crops are planted. South Pacific: There are two planting years for a single swidden for the Bine farmers. In the first year the Bine plant primarily taro root with a few subsidiary crops like bananas and sweet potatoes. In the second year taro root makes up about 50 percent of the swidden and the rest of the swidden is mixed with about 15 other plants. After the second year the Bine farmers move on to an adjacent swidden and allow the previous swidden to lie fallow, or unplanted, for a period of 5 to 10 years in order to repopulate the vegetation. The number of years that a swidden will lie fallow is determined by the plants' demand for the nitrogen in the soil. Some plants will leach the soil of nitrogen in a few years and require four or five times that period of fallow, while other plants can be planted for many years and lie fallow for only one or two times the planting period. Swidden cultivation requires a lot of land in order to feed only a few people, but the Bine, whose numbers are low, make good use of their land through swidden farming. South Pacific: Island horticulture For most South Pacific Island cultures the main subsistence techniques are hunting and gathering. Fishing and the gathering of sago, banana, and other tropical foods are the norm, with very little organized agriculture. The Tabalu of Kiriwina, located in the Trobriand Islands, practice a form of agriculture called Kaylu’ebila, a form of garden magic. The main crop for the Tabalu is the yam and there is a definite division of labour according to sex when it comes to gardening. Heavy work is done by the men and it includes clearing the vegetation, carrying the yam supports, and planting the yam tubers in the ground. The women aid by weeding the gardens. South Pacific: Gardening for the Tabalu is a very long and in-depth magical process, with special magicians and magical ingredients which have been handed down from family member to family member over time. Garden fields, which are called Kwabila, are fenced in on all sides to keep out the swine that are bred by the Tabalu. Kwabila are then divided into many smaller plots called baleko; these are the individual gardens that the crops will be planted in. South America: South America consists of modern-day Venezuela, Colombia, Ecuador, Suriname, Brazil, Peru, Bolivia, Paraguay, Uruguay, Argentina, and Chile. South America has historically been a land exploited not only for its natural resources, but also for its indigenous knowledge and labor force.
The environmental diversification of South America has been at the foundation of its presence in the global economy as a resource for agriculture, forestry, fishing, hunting, livestock, mining and quarrying. South America: South America can be seen as cultural regions inhabited by marginal tribes, tropical forest tribes, and the circum-Caribbean tribes, each with its own distinct way of agricultural cultivation. South America's geographic regions are inhabited by regional tribes which include: the Chocó in the Northern Colombian area, the Kayapó in the Eastern Para area, the Chono in the Southern Fuegian area, and the Quechuas in the Western Peruvian area. Each of these regions has developed its own cultural identity and agricultural style. The Eastern region of South America is known as the Para area in what is now Brazil, and has for millennia been home to the tropical forest Kayapó tribe. The Kayapó lived in sedentary villages and were proficient in pottery and loom-weaving, yet they did not domesticate animals or possess knowledge of metallurgy. These tropical forest tribes can be characterized through their farming, dugout canoes, woven baskets, loom weaving, and pole and thatch houses. In the Para area the Kayapó, like most tribes of this region, practiced intensive agriculture or clearing cultivation. Their agricultural year begins with the low-water season, a period of intensified fishing. The low-water season is then followed by the high-water season, or harvest season. It is during this harvest season that the Kayapó are able to enjoy their leisure time before the cycle ends with low water levels and a return to intense fishing. Each changing season commences ceremonies for the Kayapó that are directly tied to agriculture, hunting, or fishing. Unlike the Chocó, the Kayapó used an agricultural method known as the slash-and-burn method (shifting agriculture). The Kayapó cut the forest in April to September (the dry season) and time their burns just before the rainy season. The Kayapó used circular plots for agricultural cultivation consisting of five rings (characterized as cultivation zones). The first circle, or inner circle, was used for taro and sweet potatoes that thrive in the hotter soil found in the center of the plot. The second circle cultivated maize, manioc, and rice, an area that needed various ash enrichment treatments and would experience short-term growth. The third zone was an area of rich soil and best served mixed crops including banana, urucu, and papaya. The fourth zone consisted of shade-loving plants and was used for medicinal purposes, yet evidence of beans and yams has also been found here. The fifth zone, or outer ring, was left as a protective zone that included plants to trap animals and attract protective insects and birds. This form of agriculture requires not only intense physical labor but also knowledge of the land, of the various types of ground cover, of the shades and temperatures of local soils, and of cloud formations used to time careful burning.
When the Kayapó manage their agricultural plots they must work with a variety of interacting factors including the background soil fertility, the heterogeneous quality of ash and its distribution, crop nutrient requirements, cropping cycles, management requirements, and pest and disease control, clearly dispelling the common misconception that this form of agriculture is primitive and inefficient. It has often been thought that the slash-and-burn plots are abandoned after one or two years because of unproductive soil, but this is a common misconception. The Kayapó revisit abandoned fields because plants can offer direct and indirect benefits. One direct benefit would be the ability to eat that which has been produced, and an indirect benefit would be that open fields attract game for hunting and can produce long after they have been tended. The Southern region of South America is known as the Fuegian area and is occupied by the Chono, Alacaluf and Yahgan. These marginal tribes differed greatly from the other regions in that they were expert in making bark or plank canoes and in domesticating dogs, hunting, fishing and gathering. They were nomads with simple socio-religious patterns, yet they completely lacked the technology of pottery, loom-weaving, metallurgy and even agriculture. Since there was a lack of agriculture in this region the Chono ate native berries, roots, fruits, and wild celery. The source of nutrition for the Chono, Yahgans, and Alacaluf thus mainly consisted of seafood such as whales, seals, porpoises, guanacos, and otters. The Southern tribes of South America distinguished themselves not by having a rare form of agriculture like the North's slash-mulch method, or intensive agriculture like the East's sophisticated slash-and-burn method, but by the absence of this trait altogether. North America: Farming methods developed by Native Americans include terracing, irrigation, mound building, crop rotation and fertilization. They also used extensive companion planting (see the Three Sisters). North America: Terracing is an effective technique in a steep-sloped, semi-arid climate. The Indigenous farmers stair-stepped the hills so that soil erosion was minimal and the land surface was better suited for farming. In the Southwest, including parts of New Mexico, Arizona, and parts of Northern Mexico, terracing was extensive. Terraces were constructed by placing rock dams to redirect runoff water to canals that evenly dispersed rain water. The terraced field transformed the terrain into land suitable for farming maize. There is evidence that terracing has been used in the Southwest for about 2,500 years. The Anasazi people from this region built reservoirs and directed rain water through ditches to water the crops in the terraces. The natives grew corn, squash, and beans, along with other crops in the terraced fields. North America: Corn, squash, and beans were staple crops for Native Americans and were grown throughout much of the North American continent. This trio is known as the Three Sisters. Ancient folklore belief says that the Three Sisters represented three goddesses. Each sister protected the other two, and therefore the Three Sisters were never to be separated and instead were to be planted, cooked, and consumed together. In reality, this triad was an example of symbiotic planting. The corn stalks functioned as a support for the beans.
The beans fixed nitrogen into a usable form for the corn and squash, and the broad squash leaves provided shade for the soil, which aided in preventing evaporation and controlling weeds. As the success of the Three Sisters spread, many cultures turned away from hunting and gathering and relied much more on farming. Geographically, native cultures in the Woodlands, Prairie, Plains, Great Basin, and Plateau regions of North America all utilized the Three Sisters to some extent. Where they were not grown, the locals traded for them. Nomadic tribes, such as the Dakotas, would trade meat for these vegetable staples. The Three Sisters were usually eaten together, as they provide fairly balanced nutrition when consumed in combination (for example, beans and corn together provide a complete set of the essential amino acids). North America: Native American farmers also employed irrigation. This technique was utilized throughout much of the Southwest and is useful where water is scarce. Irrigation was and is still used today throughout much of the world. Native Americans controlled the amount of water that reached their fields by building long irrigation canals to redirect water from a source to water their crops. The Hohokam people constructed about 600 miles of irrigation canals from AD 50 to 1450 near Phoenix, Arizona. Part of the canal system is used by the City of Phoenix today. The Olmecs of Mesoamerica built canals over 4,000 years ago. Chinampas, artificial islands constructed in swamplands and lakes, were invented by Mayan farmers, and the technique became used extensively throughout Mesoamerica and was later used by the Aztecs as part of the land reclamation process of the city of Tenochtitlan. This technique increased arable land and provided additional farming plots as the population of Mesoamerica grew. In the Northeast Woodlands and the Great Lakes region, an advanced society known as the Moundbuilders emerged. This society lived in the flood plains of the Mississippi river basin. This culture farmed mainly maize. They had little need for foraging and grew into an advanced civilization due to food surplus. They were the largest civilization north of the Rio Grande. North America: Native Americans also developed storage systems, such as storage containers, which allowed them to store seeds to plant during the next planting season. They also stored food in dug-out pits or holes in hillsides. Native Americans developed corn cribs. These were storage bins that were elevated off the ground. This technique prevented moisture and animal intrusion. Selective crop breeding was also employed. Corn is a domesticated plant and cannot grow on its own. The first corn grown by Native Americans had small ears, and only produced a few kernels per ear. By 2,000 years ago, single stalks with large ears were being produced. Native Americans created over 700 varieties of corn by 1500 AD. "Hands-off" hunter and gatherer misconception: Anderson breaks down common misconceptions surrounding the way in which Native Americans lived among nature and shows that their true impact was often one that attracted and invited others to the land they inhabited. Native Americans, regardless of location, practiced wildlife and land management techniques such as "coppicing, pruning, harrowing, sowing, weeding, burning, digging, thinning, and selective harvesting."
These practices were acted out at calculated intervals according to season and by witnessing the signals conveyed by nature and, "on the whole, allowed for sustainable harvest of plants over centuries." Anderson's writing focuses specifically on Natives established in California and emphasizes that the techniques they applied were essential in maintaining, and even creating, the rich Californian landscapes settlers later happened upon, such as the coastal prairies, valley grasslands, and oak savannas. Overall, the Natives' practices were nature-conserving and sustaining due to the specificity of their environmental knowledge, which was learned through more than twelve thousand years of trial and error. Anderson also establishes that the Natives' practices were most likely not always beneficial or environmentally friendly. Though he asserts that there is no real evidence of their destruction available, it is possible that the Indigenous peoples in California may have been responsible for the extinction of early regional species. In discussing California Natives' "tempered" land tenure practices, Anderson deconstructs the idea of Native Americans as hunter-gatherers whose sparse population and nomadic ways left little to no mark on the land they traveled. When European and Asian farmers, ranchers, and entrepreneurs established themselves in that same land, the concept of their surroundings that they perpetuated was one of an uninhabited wilderness, untouched by man. Though as Anderson points out, this was never the case. This "wilderness" was carefully tended to by the Natives for hundreds of years, who had both negative and positive impacts on its conservation. It is theorized by Anderson that without the Natives' calculated intervention, real wilderness in the form of thickets, dense understory, and wildfire would have rendered the land uninhabitable. In establishing this belief, the foreign settlers not only erased the Natives' long history of masterful cultivation from the land, but they pushed their people out of the land as well, establishing a new, Eurocentric construction of the American continent. This constructed history has not only diminished Native ancestry and culture, but also the land, which is no longer maintained or protected by strategic resource management. The European way of mitigating resource and land depletion follows a "hands-off" model, which comes from the misconceptions of "leave no trace" mentioned previously.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Image and object order rendering** Image and object order rendering: In computer graphics, image order algorithms iterate over the pixels in the image to be produced, rather than the elements in the scene to be rendered. Object order algorithms are those that iterate over the elements in the scene to be rendered, rather than the pixels in the image to be produced. For typical rendering applications, the scene contains many fewer elements (e.g. geometric primitives) than image pixels. In those cases, object order algorithms are usually more efficient (e.g. scan conversion or shear warp). But when the scene complexity exceeds that of the image, as is often the case in volume rendering, image order algorithms (e.g., ray casting) may be more efficient.
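The distinction can be sketched as two loop structures; the scene, color_at_pixel, and rasterize names below are placeholders for illustration, not an actual rendering API.

```python
# Image-order: outer loop over pixels; each pixel asks the scene what is
# visible there (for example, by casting a ray through that pixel).
def render_image_order(scene, width, height):
    return [[scene.color_at_pixel(x, y) for x in range(width)]
            for y in range(height)]

# Object-order: outer loop over scene elements; each primitive is
# scan-converted into whichever pixels it covers (depth test omitted).
def render_object_order(scene, width, height):
    image = [[scene.background] * width for _ in range(height)]
    for primitive in scene.primitives:
        for x, y, color in primitive.rasterize(width, height):
            image[y][x] = color
    return image
```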
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ZBED3** ZBED3: Zinc finger BED domain-containing protein 3 also known as axin-interacting protein is a protein in humans that is encoded by the ZBED3 gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nearest-neighbor chain algorithm** Nearest-neighbor chain algorithm: In the theory of cluster analysis, the nearest-neighbor chain algorithm is an algorithm that can speed up several methods for agglomerative hierarchical clustering. These are methods that take a collection of points as input, and create a hierarchy of clusters of points by repeatedly merging pairs of smaller clusters to form larger clusters. The clustering methods that the nearest-neighbor chain algorithm can be used for include Ward's method, complete-linkage clustering, and single-linkage clustering; these all work by repeatedly merging the closest two clusters but use different definitions of the distance between clusters. The cluster distances for which the nearest-neighbor chain algorithm works are called reducible and are characterized by a simple inequality among certain cluster distances. Nearest-neighbor chain algorithm: The main idea of the algorithm is to find pairs of clusters to merge by following paths in the nearest neighbor graph of the clusters. Every such path will eventually terminate at a pair of clusters that are nearest neighbors of each other, and the algorithm chooses that pair of clusters as the pair to merge. In order to save work by re-using as much as possible of each path, the algorithm uses a stack data structure to keep track of each path that it follows. By following paths in this way, the nearest-neighbor chain algorithm merges its clusters in a different order than methods that always find and merge the closest pair of clusters. However, despite that difference, it always generates the same hierarchy of clusters. Nearest-neighbor chain algorithm: The nearest-neighbor chain algorithm constructs a clustering in time proportional to the square of the number of points to be clustered. This is also proportional to the size of its input, when the input is provided in the form of an explicit distance matrix. The algorithm uses an amount of memory proportional to the number of points, when it is used for clustering methods such as Ward's method that allow constant-time calculation of the distance between clusters. However, for some other clustering methods it uses a larger amount of memory in an auxiliary data structure with which it keeps track of the distances between pairs of clusters. Background: Many problems in data analysis concern clustering, grouping data items into clusters of closely related items. Hierarchical clustering is a version of cluster analysis in which the clusters form a hierarchy or tree-like structure rather than a strict partition of the data items. In some cases, this type of clustering may be performed as a way of performing cluster analysis at multiple different scales simultaneously. In others, the data to be analyzed naturally has an unknown tree structure and the goal is to recover that structure by performing the analysis. Both of these kinds of analysis can be seen, for instance, in the application of hierarchical clustering to biological taxonomy. In this application, different living things are grouped into clusters at different scales or levels of similarity (species, genus, family, etc.). This analysis simultaneously gives a multi-scale grouping of the organisms of the present age, and aims to accurately reconstruct the branching process or evolutionary tree that in past ages produced these organisms. The input to a clustering problem consists of a set of points.
A cluster is any proper subset of the points, and a hierarchical clustering is a maximal family of clusters with the property that any two clusters in the family are either nested or disjoint. Background: Alternatively, a hierarchical clustering may be represented as a binary tree with the points at its leaves; the clusters of the clustering are the sets of points in subtrees descending from each node of the tree. In agglomerative clustering methods, the input also includes a distance function defined on the points, or a numerical measure of their dissimilarity. The distance or dissimilarity should be symmetric: the distance between two points does not depend on which of them is considered first. However, unlike the distances in a metric space, it is not required to satisfy the triangle inequality. Background: Next, the dissimilarity function is extended from pairs of points to pairs of clusters. Different clustering methods perform this extension in different ways. For instance, in the single-linkage clustering method, the distance between two clusters is defined to be the minimum distance between any two points from each cluster. Given this distance between clusters, a hierarchical clustering may be defined by a greedy algorithm that initially places each point in its own single-point cluster and then repeatedly forms a new cluster by merging the closest pair of clusters. The bottleneck of this greedy algorithm is the subproblem of finding which two clusters to merge in each step. Background: Known methods for repeatedly finding the closest pair of clusters in a dynamic set of clusters either require superlinear space to maintain a data structure that can find closest pairs quickly, or they take greater than linear time to find each closest pair. The nearest-neighbor chain algorithm uses a smaller amount of time and space than the greedy algorithm by merging pairs of clusters in a different order. In this way, it avoids the problem of repeatedly finding closest pairs. Nevertheless, for many types of clustering problem, it can be guaranteed to come up with the same hierarchical clustering as the greedy algorithm despite the different merge order. The algorithm: Intuitively, the nearest neighbor chain algorithm repeatedly follows a chain of clusters A → B → C → ... where each cluster is the nearest neighbor of the previous one, until reaching a pair of clusters that are mutual nearest neighbors. In more detail, the algorithm performs the following steps: Initialize the set of active clusters to consist of n one-point clusters, one for each input point. The algorithm: Let S be a stack data structure, initially empty, the elements of which will be active clusters. While there is more than one cluster in the set of clusters: If S is empty, choose an active cluster arbitrarily and push it onto S. Let C be the active cluster on the top of S. Compute the distances from C to all other clusters, and let D be the nearest other cluster. If D is already in S, it must be the immediate predecessor of C. Pop both clusters from S and merge them. The algorithm: Otherwise, if D is not already in S, push it onto S. When it is possible for one cluster to have multiple equal nearest neighbors, then the algorithm requires a consistent tie-breaking rule. For instance, one may assign arbitrary index numbers to all of the clusters, and then select (among the equal nearest neighbors) the one with the smallest index number.
This rule prevents certain kinds of inconsistent behavior in the algorithm; for instance, without such a rule, the neighboring cluster D might occur earlier in the stack than as the predecessor of C. Time and space analysis: Each iteration of the loop performs a single search for the nearest neighbor of a cluster, and either adds one cluster to the stack or removes two clusters from it. Every cluster is only ever added once to the stack, because when it is removed again it is immediately made inactive and merged. There are a total of 2n − 2 clusters that ever get added to the stack: n single-point clusters in the initial set, and n − 2 internal nodes other than the root in the binary tree representing the clustering. Therefore, the algorithm performs 2n − 2 pushing iterations and n − 1 popping iterations. Each of these iterations may spend time scanning as many as n − 1 inter-cluster distances to find the nearest neighbor. Time and space analysis: The total number of distance calculations it makes is therefore less than 3n2. For the same reason, the total time used by the algorithm outside of these distance calculations is O(n2). Since the only data structure is the set of active clusters and the stack containing a subset of the active clusters, the space required is linear in the number of input points. Correctness: For the algorithm to be correct, it must be the case that popping and merging the top two clusters from the algorithm's stack preserves the property that the remaining clusters on the stack form a chain of nearest neighbors. Correctness: Additionally, it should be the case that all of the clusters produced during the algorithm are the same as the clusters produced by a greedy algorithm that always merges the closest two clusters, even though the greedy algorithm will in general perform its merges in a different order than the nearest-neighbor chain algorithm. Both of these properties depend on the specific choice of how to measure the distance between clusters. The correctness of this algorithm relies on a property of its distance function called reducibility. This property was identified by Bruynooghe (1977) in connection with an earlier clustering method that used mutual nearest neighbor pairs but not chains of nearest neighbors. A distance function d on clusters is defined to be reducible if, for every three clusters A, B and C in the greedy hierarchical clustering such that A and B are mutual nearest neighbors, the following inequality holds: d(A ∪ B, C) ≥ min(d(A,C), d(B,C)). If a distance function has the reducibility property, then merging two clusters C and D can only cause the nearest neighbor of E to change if that nearest neighbor was one of C and D. This has two important consequences for the nearest neighbor chain algorithm. First, it can be shown using this property that, at each step of the algorithm, the clusters on the stack S form a valid chain of nearest neighbors, because whenever a nearest neighbor becomes invalidated it is immediately removed from the stack. Second, and even more importantly, it follows from this property that, if two clusters C and D both belong to the greedy hierarchical clustering, and are mutual nearest neighbors at any point in time, then they will be merged by the greedy clustering, for they must remain mutual nearest neighbors until they are merged.
It follows that each mutual nearest neighbor pair found by the nearest neighbor chain algorithm is also a pair of clusters found by the greedy algorithm, and therefore that the nearest neighbor chain algorithm computes exactly the same clustering (although in a different order) as the greedy algorithm. Application to specific clustering distances: Ward's method Ward's method is an agglomerative clustering method in which the dissimilarity between two clusters A and B is measured by the amount by which merging the two clusters into a single larger cluster would increase the average squared distance of a point to its cluster centroid. That is, $d(A,B)=\frac{\sum_{x\in A,\,y\in B} d^2(x,y)}{|A|+|B|}-\frac{\sum_{x,y\in A} d^2(x,y)}{|A|}-\frac{\sum_{x,y\in B} d^2(x,y)}{|B|}.$ Expressed in terms of the centroids $c_A$, $c_B$ and cardinalities $n_A$, $n_B$ of the two clusters, it has the simpler formula $d(A,B)=\frac{d^2(c_A,c_B)}{1/n_A+1/n_B},$ allowing it to be computed in constant time per distance calculation. Application to specific clustering distances: Although highly sensitive to outliers, Ward's method is the most popular variation of agglomerative clustering both because of the round shape of the clusters it typically forms and because of its principled definition as the clustering that at each step has the smallest variance within its clusters. Alternatively, this distance can be seen as the difference in k-means cost between the new cluster and the two old clusters. Application to specific clustering distances: Ward's distance is also reducible, as can be seen more easily from a different formula for calculating the distance of a merged cluster from the distances of the clusters it was merged from: $d(A\cup B,C)=\frac{n_A+n_C}{n_A+n_B+n_C}\,d(A,C)+\frac{n_B+n_C}{n_A+n_B+n_C}\,d(B,C)-\frac{n_C}{n_A+n_B+n_C}\,d(A,B).$ Distance update formulas such as this one are called formulas "of Lance–Williams type" after the work of Lance & Williams (1967). Application to specific clustering distances: If d(A,B) is the smallest of the three distances on the right hand side (as would necessarily be true if A and B are mutual nearest neighbors) then the negative contribution from its term is cancelled by the nC coefficient of one of the two other terms, leaving a positive value added to the weighted average of the other two distances. Therefore, the combined distance is always at least as large as the minimum of d(A,C) and d(B,C), meeting the definition of reducibility. Application to specific clustering distances: Because Ward's distance is reducible, the nearest-neighbor chain algorithm using Ward's distance calculates exactly the same clustering as the standard greedy algorithm. For n points in a Euclidean space of constant dimension, it takes time O(n2) and space O(n). Application to specific clustering distances: Complete linkage and average distance Complete-linkage or furthest-neighbor clustering is a form of agglomerative clustering that defines the dissimilarity between clusters to be the maximum distance between any two points from the two clusters. Similarly, average-distance clustering uses the average pairwise distance as the dissimilarity. Like Ward's distance, these two forms of clustering obey a formula of Lance–Williams type. In complete linkage, the distance d(A∪B,C) is the maximum of the two distances d(A,C) and d(B,C). Therefore, it is at least equal to the minimum of these two distances, the requirement for being reducible. For average distance, d(A∪B,C) is just a weighted average of the distances d(A,C) and d(B,C). Again, this is at least as large as the minimum of the two distances.
Thus, in both of these cases, the distance is reducible. Unlike Ward's method, these two forms of clustering do not have a constant-time method for computing distances between pairs of clusters. Instead it is possible to maintain an array of distances between all pairs of clusters. Whenever two clusters are merged, the formula can be used to compute the distance between the merged cluster and all other clusters. Maintaining this array over the course of the clustering algorithm takes time and space O(n2). The nearest-neighbor chain algorithm may be used in conjunction with this array of distances to find the same clustering as the greedy algorithm for these cases. Its total time and space, using this array, is also O(n2). The same O(n2) time and space bounds can also be achieved in a different way, by a technique that overlays a quadtree-based priority queue data structure on top of the distance matrix and uses it to perform the standard greedy clustering algorithm. Application to specific clustering distances: This quadtree method is more general, as it works even for clustering methods that are not reducible. However, the nearest-neighbor chain algorithm matches its time and space bounds while using simpler data structures. Application to specific clustering distances: Single linkage In single-linkage or nearest-neighbor clustering, the oldest form of agglomerative hierarchical clustering, the dissimilarity between clusters is measured as the minimum distance between any two points from the two clusters. With this dissimilarity, $d(A\cup B,C)=\min(d(A,C),d(B,C)),$ meeting as an equality rather than an inequality the requirement of reducibility. (Single-linkage also obeys a Lance–Williams formula, but with a negative coefficient from which it is more difficult to prove reducibility.) As with complete linkage and average distance, the difficulty of calculating cluster distances causes the nearest-neighbor chain algorithm to take time and space O(n2) to compute the single-linkage clustering. Application to specific clustering distances: However, the single-linkage clustering can be found more efficiently by an alternative algorithm that computes the minimum spanning tree of the input distances using Prim's algorithm, and then sorts the minimum spanning tree edges and uses this sorted list to guide the merger of pairs of clusters. Within Prim's algorithm, each successive minimum spanning tree edge can be found by a sequential search through an unsorted list of the smallest edges connecting the partially constructed tree to each additional vertex. This choice saves the time that the algorithm would otherwise spend adjusting the weights of vertices in its priority queue. Using Prim's algorithm in this way would take time O(n2) and space O(n), matching the best bounds that could be achieved with the nearest-neighbor chain algorithm for distances with constant-time calculations. Application to specific clustering distances: Centroid distance Another distance measure commonly used in agglomerative clustering is the distance between the centroids of pairs of clusters, also known as the weighted group method. It can be calculated easily in constant time per distance calculation. However, it is not reducible. For instance, if the input forms the set of three points of an equilateral triangle, merging two of these points into a larger cluster causes the inter-cluster distance to decrease, a violation of reducibility.
Therefore, the nearest-neighbor chain algorithm will not necessarily find the same clustering as the greedy algorithm. Nevertheless, Murtagh (1983) writes that the nearest-neighbor chain algorithm provides "a good heuristic" for the centroid method. Application to specific clustering distances: A different algorithm by Day & Edelsbrunner (1984) can be used to find the greedy clustering in O(n2) time for this distance measure. Application to specific clustering distances: Distances sensitive to merge order The above presentation explicitly disallowed distances sensitive to merge order. Indeed, allowing such distances can cause problems. In particular, there exist order-sensitive cluster distances which satisfy reducibility, but for which the above algorithm will return a hierarchy with suboptimal costs. Therefore, when cluster distances are defined by a recursive formula (as some of the ones discussed above are), care must be taken that they do not use the hierarchy in a way which is sensitive to merge order. History: The nearest-neighbor chain algorithm was developed and implemented in 1982 by Jean-Paul Benzécri and J. Juan. They based this algorithm on earlier methods that constructed hierarchical clusterings using mutual nearest neighbor pairs without taking advantage of nearest neighbor chains.
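A minimal sketch of the stack-based procedure from "The algorithm" section above, using the constant-time centroid form of Ward's distance; the cluster bookkeeping and the smallest-index tie-breaking are illustrative choices, and no attempt is made to handle non-reducible distances.

```python
import numpy as np

def ward_distance(c1, n1, c2, n2):
    """Ward dissimilarity d(A,B) = d^2(c_A, c_B) / (1/n_A + 1/n_B)."""
    return float(np.sum((c1 - c2) ** 2)) / (1.0 / n1 + 1.0 / n2)

def nn_chain_ward(points):
    """Return the list of merges (cluster_a, cluster_b, new_cluster) produced by
    the nearest-neighbor chain algorithm with Ward's distance.  Each cluster is
    tracked only by its centroid and size, which is enough for Ward's method."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    centroids = {i: points[i].copy() for i in range(n)}
    sizes = {i: 1 for i in range(n)}
    active = set(range(n))
    stack, merges, next_id = [], [], n
    while len(active) > 1:
        if not stack:
            stack.append(min(active))          # arbitrary starting cluster
        c = stack[-1]
        # Nearest other active cluster, breaking distance ties by smallest index.
        _, nearest = min((ward_distance(centroids[c], sizes[c],
                                        centroids[o], sizes[o]), o)
                         for o in active if o != c)
        if len(stack) >= 2 and nearest == stack[-2]:
            # Mutual nearest neighbors: pop both from the stack and merge them.
            stack.pop(); stack.pop()
            a, b = c, nearest
            m = sizes[a] + sizes[b]
            centroids[next_id] = (sizes[a] * centroids[a] + sizes[b] * centroids[b]) / m
            sizes[next_id] = m
            active -= {a, b}
            active.add(next_id)
            merges.append((a, b, next_id))
            next_id += 1
        else:
            stack.append(nearest)
    return merges

print(nn_chain_ward([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0]]))
```

Each merge pops a mutual nearest-neighbor pair off the stack, exactly as in the description above; the remaining stack entries stay valid as a chain because Ward's distance is reducible.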
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Imix video cube** Imix video cube: The Imix (also known as ImMix) Video Cube is one of the first computer non-linear editing systems that was a full broadcast quality online video finishing machine. After its release in 1994, Imix released a more advanced version, the Imix Turbo Cube, which boasted 4 channels of real time layered visual effects. It was a hardware computer system controlled by an Apple Macintosh computer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oberstarzt** Oberstarzt: Oberstarzt (OTA) is a military rank in German-speaking armed forces. It denotes a medical staff officer surgeon or medical staff officer dentist and is comparable to Colonel (de: Oberst) or Captain (naval) (de: Kapitän zur See), NATO rank code OF5, in anglophone armed forces. Germany: Bundeswehr In the Joint Medical Service of the German Bundeswehr, Oberstarzt, Oberstapotheker, and Oberstveterinär are comparable in NATO to the OF-5 rank Oberst; Flottenarzt and Flottenapotheker are equivalent to Kapitän zur See, OF-5 as well. Address: The manner of formal addressing of military surgeons/dentists with the rank Oberstarzt is "Herr/Frau Oberstarzt"; with the rank Oberstapotheker, "Herr/Frau Oberstapotheker". Military surgeons/dentists with the rank Flottenarzt are addressed "Herr/Frau Flottenarzt"; with the rank Flottenapotheker, "Herr/Frau Flottenapotheker". The "Inspector of veterinary medicine of the Bundeswehr" (de: Inspizient Veterinärmedizin der Bundeswehr) holds the rank Oberstveterinär. Address: Rank insignias On the shoulder straps (Heer, Luftwaffe) there are three silver stars in silver oak leaves and the career insignia (de: Laufbahnabzeichen) as symbol of the medical standing, or course of studies. The piping on shoulder straps shows the Waffenfarbe (en: corps- or troop-function colour), corresponding to the appropriate military service, branch, or special force. The corps colour of the Bundeswehr Joint Medical Service is dark blue. Address: In the navy, the career insignia is in the middle of both sleeves, 3 cm above the cuff strips, and on the shoulder straps between strips and button. Address: Wehrmacht Oberstarzt of the German Wehrmacht was comparable to the Oberst (OF-5), as well as to the Standartenführer and Oberst of the Waffen-SS. In line with the so-called Reichsbesoldungsordnung (en: Reich's salary order), appendixes to the Salary law of the German Empire (de: Besoldungsgesetz des Deutschen Reiches) of 1927 (changes 1937 – 1940), the comparative ranks in salary group C 4 were as follows: Oberst (Heer and Luftwaffe); Kapitän zur See (Kriegsmarine); Oberstarzt from 1934 (medical service of the Wehrmacht); Flottenarzt, introduced June 26, 1935 (medical service of the Kriegsmarine); Oberstveterinär from 1934 (veterinarian service of the Wehrmacht). During wartime, a regular assignment of an Oberstarzt was Division surgeon – IVb (de: Divisionsarzt – IVb). Address: The corps colour of the military Health Service Support (HSS) in German armed forces was traditionally dark blue, and that of the veterinarian service carmine red. This tradition was continued by the medical service corps in Heer and Luftwaffe of the Reichswehr and Wehrmacht. However, the corps colour of the Waffen-SS and Kriegsmarine HSS was cornflower blue. Address: Rank insignias Address The manner of formal addressing of military surgeons/dentists with the rank Oberstarzt was "Herr Oberstarzt"; with the rank Flottenarzt, "Herr Flottenarzt". Austria-Hungary: In the Imperial & Royal Common Army of Austria-Hungary (de: Gemeinsame Armee or k.u.k. Armee) there was the rank Oberstabsarzt 1. Klasse (en: Senior staff surgeon 1st class) until 1918, equivalent to Oberstarzt in Germany. That particular rank was comparable to the Oberst OF5-rank (en: colonel) as well. There was also a rank Oberstabsarzt 2. Klasse (or másodosztályú fõtörzsorvos in Hungarian). Austria-Hungary: Officers with that rank: Andreas Mollat (1802–1891), k. k. Oberststabsarzt
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Laryngotracheal groove** Laryngotracheal groove: The laryngotracheal groove is a precursor for the larynx and trachea. Laryngotracheal groove: The rudiment of the respiratory organs appears as a median longitudinal groove in the ventral wall of the pharynx. The groove deepens and its lips fuse to form a septum which grows from below upward and converts the groove into a tube, the laryngotracheal tube. The cephalic end opens into the pharynx by a slit-like aperture formed by the persistent anterior part of the groove. Initially the cephalic end is in open communication with the foregut, but eventually it becomes separated by indentations of mesoderm, the tracheoesophageal folds. Laryngotracheal groove: When the tracheoesophageal folds fuse in the midline to form the tracheoesophageal septum, the foregut is divided into the trachea ventrally and the esophagus dorsally. The tube is lined by endoderm, from which the epithelial lining of the respiratory tract is developed. The cephalic part of the tube becomes the larynx, and its next succeeding part the trachea, while from its caudal end a respiratory diverticulum appears as the lung bud. The lung bud branches into two lateral outgrowths known as the bronchial buds, one on each side of the trachea. The right and left bronchial buds branch into main (primary), lobar (secondary), segmental (tertiary), and subsegmental bronchi and lead to the development of the lungs. The Hox complex, FGF-10 (fibroblast growth factor), BMP-4 (bone morphogenetic protein), N-myc (a proto-oncogene), syndecan (a proteoglycan), tenascin (an extracellular matrix protein) and epimorphin (a protein) appear to play a role in development of the respiratory system.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Muscular atrophy-ataxia-retinitis pigmentosa-diabetes mellitus syndrome** Muscular atrophy-ataxia-retinitis pigmentosa-diabetes mellitus syndrome: Muscular atrophy-ataxia-retinitis pigmentosa-diabetes mellitus syndrome, also known as Kurukawa-Takagi-Nakao syndrome is a very rare genetic disorder which is characterized by muscular atrophy, cerebellar ataxia, reduced sense of touch, retinal degeneration, and diabetes mellitus beginning in late childhood-early adolescence. It is inherited in an autosomal dominant manner. It has been described in 10 members from a large 4-generation Japanese family (1986).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Completeness (order theory)** Completeness (order theory): In the mathematical area of order theory, completeness properties assert the existence of certain infima or suprema of a given partially ordered set (poset). The most familiar example is the completeness of the real numbers. A special use of the term refers to complete partial orders or complete lattices. However, many other interesting notions of completeness exist. Completeness (order theory): The motivation for considering completeness properties derives from the great importance of suprema (least upper bounds, joins, " ∨ ") and infima (greatest lower bounds, meets, " ∧ ") to the theory of partial orders. Finding a supremum means to single out one distinguished least element from the set of upper bounds. On the one hand, these special elements often embody certain concrete properties that are interesting for the given application (such as being the least common multiple of a set of numbers or the union of a collection of sets). On the other hand, the knowledge that certain types of subsets are guaranteed to have suprema or infima enables us to consider the evaluation of these elements as total operations on a partially ordered set. For this reason, posets with certain completeness properties can often be described as algebraic structures of a certain kind. In addition, studying the properties of the newly obtained operations yields further interesting subjects. Types of completeness properties: All completeness properties are described along a similar scheme: one describes a certain class of subsets of a partially ordered set that are required to have a supremum or required to have an infimum. Hence every completeness property has its dual, obtained by inverting the order-dependent definitions in the given statement. Some of the notions are usually not dualized while others may be self-dual (i.e. equivalent to their dual statements). Types of completeness properties: Least and greatest elements The easiest example of a supremum is the empty one, i.e. the supremum of the empty set. By definition, this is the least element among all elements that are greater than each member of the empty set. But this is just the least element of the whole poset, if it has one, since the empty subset of a poset P is conventionally considered to be both bounded from above and from below, with every element of P being both an upper and lower bound of the empty subset. Other common names for the least element are bottom and zero (0). The dual notion, the empty lower bound, is the greatest element, top, or unit (1). Types of completeness properties: Posets that have a bottom are sometimes called pointed, while posets with a top are called unital or topped. An order that has both a least and a greatest element is bounded. However, this should not be confused with the notion of bounded completeness given below. Types of completeness properties: Finite completeness Further simple completeness conditions arise from the consideration of all non-empty finite sets. An order in which all non-empty finite sets have both a supremum and an infimum is called a lattice. It suffices to require that all suprema and infima of two elements exist to obtain all non-empty finite ones; a straightforward induction argument shows that every finite non-empty supremum/infimum can be decomposed into a finite number of binary suprema/infima. Thus the central operations of lattices are binary suprema ∨ and infima ∧ . 
It is in this context that the terms meet for ∧ and join for ∨ are most common. Types of completeness properties: A poset in which only non-empty finite suprema are known to exist is therefore called a join-semilattice. The dual notion is the meet-semilattice. Further completeness conditions: The strongest form of completeness is the existence of all suprema and all infima. The posets with this property are the complete lattices. However, using the given order, one can restrict to further classes of (possibly infinite) subsets that do not yield this strong completeness at once. If all directed subsets of a poset have a supremum, then the order is a directed-complete partial order (dcpo). These are especially important in domain theory. The seldom-considered dual notion to a dcpo is the filtered-complete poset. Dcpos with a least element ("pointed dcpos") are one of the possible meanings of the phrase complete partial order (cpo). Types of completeness properties: If every subset that has some upper bound also has a least upper bound, then the respective poset is called bounded complete. The term is widely used with this definition, which focuses on suprema, and there is no common name for the dual property. However, bounded completeness can be expressed in terms of other completeness conditions that are easily dualized (see below). Although concepts with the names "complete" and "bounded" were already defined, confusion is unlikely to occur since one would rarely speak of a "bounded complete poset" when meaning a "bounded cpo" (which is just a "cpo with greatest element"). Likewise, "bounded complete lattice" is almost unambiguous, since one would not state the boundedness property for complete lattices, where it is implied anyway. Also note that the empty set usually has upper bounds (if the poset is non-empty) and thus a bounded-complete poset has a least element. Types of completeness properties: One may also consider the subsets of a poset which are totally ordered, i.e. the chains. If all chains have a supremum, the order is called chain complete. Again, this concept is rarely needed in the dual form. Relationships between completeness properties: It was already observed that binary meets/joins yield all non-empty finite meets/joins. Likewise, many other (combinations) of the above conditions are equivalent. Relationships between completeness properties: The best-known example is the existence of all suprema, which is in fact equivalent to the existence of all infima. Indeed, for any subset X of a poset, one can consider its set of lower bounds B. The supremum of B is then equal to the infimum of X: since each element of X is an upper bound of B, sup B is less than or equal to every element of X, i.e. sup B is itself a lower bound of X and hence belongs to B. It is the greatest element of B and hence the infimum of X. In a dual way, the existence of all infima implies the existence of all suprema. Relationships between completeness properties: Bounded completeness can also be characterized differently. By an argument similar to the above, one finds that the supremum of a set with upper bounds is the infimum of the set of upper bounds. Consequently, bounded completeness is equivalent to the existence of all non-empty infima. Relationships between completeness properties: A poset is a complete lattice if and only if it is a cpo and a join-semilattice. Indeed, for any subset X, the set of suprema (joins) of all finite subsets of X is directed and the supremum of this set (which exists by directed completeness) is equal to the supremum of X.
Thus every set has a supremum and by the above observation we have a complete lattice. The other direction of the proof is trivial. Relationships between completeness properties: Assuming the axiom of choice, a poset is chain complete if and only if it is a dcpo. Completeness in terms of universal algebra: As explained above, the presence of certain completeness conditions allows one to regard the formation of certain suprema and infima as total operations of a partially ordered set. It turns out that in many cases it is possible to characterize completeness solely by considering appropriate algebraic structures in the sense of universal algebra, which are equipped with operations like ∨ or ∧ . By imposing additional conditions (in the form of suitable identities) on these operations, one can then indeed derive the underlying partial order exclusively from such algebraic structures. Details on this characterization can be found in the articles on the "lattice-like" structures for which this is typically considered: see semilattice, lattice, Heyting algebra, and Boolean algebra. Note that the latter two structures extend the application of these principles beyond mere completeness requirements by introducing an additional operation of negation. Completeness in terms of adjunctions: Another interesting way to characterize completeness properties is provided through the concept of (monotone) Galois connections, i.e. adjunctions between partial orders. In fact, this approach offers additional insights both into the nature of many completeness properties and into the importance of Galois connections for order theory. The general observation on which this reformulation of completeness is based is that the construction of certain suprema or infima provides left or right adjoint parts of suitable Galois connections. Completeness in terms of adjunctions: Consider a partially ordered set (X, ≤). As a first simple example, let 1 = {*} be a specified one-element set with the only possible partial ordering. There is an obvious mapping j: X → 1 with j(x) = * for all x in X. X has a least element if and only if the function j has a lower adjoint j*: 1 → X. Indeed, the definition of Galois connections yields that in this case j*(*) ≤ x if and only if * ≤ j(x), where the right-hand side obviously holds for any x. Dually, the existence of an upper adjoint for j is equivalent to X having a greatest element. Completeness in terms of adjunctions: Another simple mapping is the function q: X → X × X given by q(x) = (x, x). Naturally, the intended ordering relation for X × X is just the usual product order. q has a lower adjoint q* if and only if all binary joins in X exist. Conversely, the join operation ∨ : X × X → X can always provide the (necessarily unique) lower adjoint for q. Dually, q allows for an upper adjoint if and only if X has all binary meets. Thus the meet operation ∧ , if it exists, is always an upper adjoint. If both ∨ and ∧ exist and, in addition, ∧ is also a lower adjoint, then the poset X is a Heyting algebra—another important special class of partial orders. Completeness in terms of adjunctions: Further completeness statements can be obtained by exploiting suitable completion procedures. For example, it is well known that the collection of all lower sets of a poset X, ordered by subset inclusion, yields a complete lattice D(X) (the downset-lattice). Furthermore, there is an obvious embedding e: X → D(X) that maps each element x of X to its principal ideal {y in X | y ≤ x}.
A little reflection now shows that e has a lower adjoint if and only if X is a complete lattice. In fact, this lower adjoint will map any lower set of X to its supremum in X. Composing this lower adjoint with the function that maps any subset of X to its lower closure (again an adjunction for the inclusion of lower sets in the powerset), one obtains the usual supremum map from the powerset 2^X to X. As before, another important situation occurs whenever this supremum map is also an upper adjoint: in this case the complete lattice X is constructively completely distributive. See also the articles on complete distributivity and distributivity (order theory). Completeness in terms of adjunctions: The considerations in this section suggest a reformulation of (parts of) order theory in terms of category theory, where properties are usually expressed by referring to the relationships (morphisms, more specifically: adjunctions) between objects, instead of considering their internal structure. For more detailed considerations of this relationship see the article on the categorical formulation of order theory.
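To make the argument that "all suprema imply all infima" concrete, here is a small worked illustration (an editorial addition, not part of the original article; the chosen poset and all helper names are illustrative only). It takes the divisors of 12 ordered by divisibility, a finite complete lattice, and computes the infimum of a subset as the supremum of its set of lower bounds, exactly as in the proof sketched above.

```python
# Toy poset: the divisors of 12 ordered by divisibility (a complete lattice).
elements = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    """a <= b in this poset means "a divides b"."""
    return b % a == 0

def lower_bounds(subset):
    """All elements that lie below every member of the subset."""
    return [x for x in elements if all(leq(x, s) for s in subset)]

def supremum(subset):
    """Least upper bound of a subset, or None if it does not exist."""
    uppers = [x for x in elements if all(leq(s, x) for s in subset)]
    for u in uppers:
        if all(leq(u, v) for v in uppers):
            return u
    return None

def infimum_via_suprema(subset):
    """inf X computed as sup of the set of lower bounds of X."""
    return supremum(lower_bounds(subset))

print(infimum_via_suprema({4, 6}))  # 2, the meet (here: gcd) of 4 and 6
print(supremum({4, 6}))             # 12, the join (here: lcm) of 4 and 6
```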
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Screen tearing** Screen tearing: Screen tearing is a visual artifact in video display where a display device shows information from multiple frames in a single screen draw. The artifact occurs when the video feed to the device is not synchronized with the display's refresh rate. That can be caused by non-matching refresh rates, and the tear line then moves as the phase difference changes (with speed proportional to the difference of frame rates). It can also occur simply from a lack of synchronization between two equal frame rates, and the tear line is then at a fixed location that corresponds to the phase difference. During video motion, screen tearing creates a torn look as the edges of objects (such as a wall or a tree) fail to line up. Screen tearing: Tearing can occur with most common display technologies and video cards and is most noticeable in horizontally moving visuals, such as in slow camera pans in a movie or classic side-scrolling video games. Screen tearing is less noticeable when more than two frames finish rendering during the same refresh interval since that means the screen has several narrower tears, instead of a single wider one. Prevention: Ways to prevent video tearing depend on the display device and video card technology, the software in use, and the nature of the video material. The most common solution is to use multiple buffering. Most systems use multiple buffering and some means of synchronization of display and video memory refresh cycles. Prevention: In the X Window System, some display drivers offer a boolean "TearFree" option that disables or enables tear-free updates. This option forces X to perform all rendering to a back buffer before updating the actual display. It requires an extra memory allocation the same size as a framebuffer, the occasional extra copy, and requires Damage tracking. Thus, enabling TearFree requires more memory and is slower (reduced throughput) and introduces a small amount of output latency, but it should not impact input latency. However, the update to the screen is then performed synchronously with the vertical refresh of the display so that the entire update is completed before the display starts its refresh. That is, only one frame is ever visible, preventing an unsightly tear between two visible and differing frames. This replicates what the compositing manager should be doing; however, TearFree will redirect the compositor updates (and those of fullscreen games) directly onto the scanout, thus incurring no additional overhead in the composited case. Not all compositing managers prevent tearing, and if the outputs are rotated, there will still be tearing without TearFree enabled. Prevention: Vertical synchronization Vertical synchronization is an option in most systems in which the video card is prevented from doing anything visible to the display memory until after the monitor finishes its current refresh cycle. During the vertical blanking interval, the driver orders the video card to either rapidly copy the off-screen graphics area into the active display area (double buffering), or treat both memory areas as displayable, and simply switch back and forth between them (page flipping). Prevention: Nvidia and AMD video adapters provide an 'Adaptive Vsync' option, which will turn on vertical synchronization only when the frame rate of the software exceeds the display's refresh rate, disabling it otherwise.
That eliminates the stutter that occurs as the rendering engine frame rate drops below the display's refresh rate. Alternatively, technologies like FreeSync and G-Sync reverse the concept and adapt the display's refresh rate to the content coming from the computer. Such technologies require specific support from both the video adapter and the display. Prevention: Complications When vertical synchronization is used, the frame rate of the rendering engine is limited to the video signal frame rate. That feature normally improves video quality but involves trade-offs in some cases. Prevention: Judder Vertical synchronization can also cause artifacts in video and movie presentations, since they are generally recorded at frame rates (24–30 frames/s) significantly lower than typical monitor refresh rates. When such a movie is played on a monitor set for a typical 60 Hz refresh rate, the video player misses the monitor's deadline fairly frequently, and the interceding frames are displayed slightly faster than intended, resulting in an effect similar to judder. (See Telecine: Frame rate differences.) Input lag Video games, which use a wide variety of rendering engines, tend to benefit visually from vertical synchronization, since a rendering engine is normally expected to build each frame in real time, based on whatever the engine's variables specify at the moment a frame is requested. However, because vertical synchronization causes input lag, it interferes with the interactive nature of games, and particularly interferes with games that require precise timing or fast reaction times. Prevention: Benchmarking Lastly, benchmarking a video card or rendering engine generally implies that the hardware and software render the display as fast as possible, without regard to monitor capabilities or resultant video tearing. Otherwise, the monitor and video card throttle the benchmarking program, causing invalid results. Prevention: Other techniques Some graphics systems let the software perform its memory accesses so that they stay at the same time point relative to the display hardware's refresh cycle, known as raster interrupt or racing the beam. In that case, the software writes to the areas of the display that have just been updated, staying just behind the monitor's active refresh point. That allows for copy routines or rendering engines with less predictable throughput, as long as the rendering engine can "catch up" with the monitor's active refresh point when it falls behind. Prevention: Alternatively, the software can instead stay just ahead of the active refresh point. Depending on how far ahead one chooses to stay, that method may demand code that copies or renders the display at a fixed, constant speed. Too much latency causes the monitor to overtake the software on occasion, leading to rendering artifacts, tearing, etc. Demo software on classic systems such as the Commodore 64 and ZX Spectrum frequently exploited those techniques, because of the predictable nature of their respective video systems, to achieve effects that might otherwise be impossible.
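As a rough sketch of the double-buffering and page-flipping approach described above (an editorial illustration only: the Framebuffer class, render_scene, and wait_for_vblank are invented stand-ins, not calls from any real graphics API), a tear-free render loop draws each frame off-screen and only swaps buffers once the display has finished its refresh:

```python
import time

class Framebuffer:
    """Stand-in for a block of display memory."""
    def __init__(self, width, height):
        self.pixels = bytearray(width * height * 4)  # RGBA

def render_scene(buffer, frame):
    """Placeholder renderer: fill the buffer with a frame-dependent value."""
    buffer.pixels[:] = bytes([frame % 256]) * len(buffer.pixels)

def wait_for_vblank(refresh_hz=60):
    """Placeholder for blocking until the display's vertical blanking interval."""
    time.sleep(1.0 / refresh_hz)

def render_loop(frames=3, width=64, height=64):
    front = Framebuffer(width, height)  # buffer currently being scanned out
    back = Framebuffer(width, height)   # hidden buffer the program draws into
    for frame in range(frames):
        render_scene(back, frame)   # draw the next frame entirely off-screen
        wait_for_vblank()           # wait until the current refresh is done
        front, back = back, front   # "page flip": the finished frame becomes visible
        # Because the flip happens between refreshes, the display never scans
        # out a half-drawn frame, so no tear line can appear.

if __name__ == "__main__":
    render_loop()
```

In practice the windowing system or driver (for example, the vertical-synchronization options discussed above) performs the flip at the correct moment on the program's behalf.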
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bleomycin hydrolase** Bleomycin hydrolase: Bleomycin hydrolase is an enzyme that in humans is encoded by the BLMH gene.Bleomycin hydrolase (BMH) is a cytoplasmic cysteine peptidase that is highly conserved through evolution. Its biological function is hydrolysis of the reactive electrophile homocysteine thiolactone. Another of its activities is metabolic inactivation of the glycopeptide bleomycin (BLM), an essential component of combination chemotherapy regimens for cancer. The protein contains the signature active site residues of the cysteine protease papain superfamily. Interactions: BLMH has been shown to interact with RPL29, RPL11, UBE2I and Amyloid precursor protein.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Atomic fountain** Atomic fountain: An atomic fountain is a cloud of atoms that is tossed upwards in the Earth's gravitational field by lasers. If it were visible, it would resemble the water in a fountain. While weightless in the toss, the atoms are measured to set the frequency of an atomic clock.The primary motivation behind the development of the atomic fountain derives from the Ramsey method of measuring the frequency of atomic transitions. In broad strokes, the Ramsey method involves exposing a cloud of atoms to a brief radiofrequency (rf) electromagnetic field; waiting a time T; briefly exposing the cloud to the rf field again; and then measuring what fraction of the atoms in the cloud have transitioned. If the frequency of the rf field is identical to the atomic transition frequency, 100% of the atoms will have transitioned; if the frequency of the field differs slightly from the transition frequency, some of the atoms will not have transitioned. By repeatedly sending clouds of atoms through such an apparatus, the frequency of the field can be adjusted to match the atomic transition frequency.The precision of the Ramsey method can be increased by increasing the wait time T of the cloud. The use of an atomic fountain with a cooled atomic cloud allows for wait times on the order of one second, which is vastly greater than what can be achieved by performing the Ramsey method on a hot atomic beam. This is one reason why NIST-F1, a caesium fountain clock, can keep time more precisely than NIST-7, a caesium beam clock. History: The idea of the atomic fountain was first proposed in the 1950s by Jerrold Zacharias. Zacharias attempted to implement an atomic fountain using a thermal beam of atoms, under the assumption that the atoms at the low-velocity end of the Maxwell–Boltzmann distribution would be of sufficiently low energy to execute a reasonably sized parabolic trajectory. However, the attempt was not successful because fast atoms in a thermal beam struck the low-velocity atoms and scattered them.
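To illustrate why a longer free-evolution time T sharpens the measurement, the following sketch uses the textbook idealization of Ramsey interrogation with two instantaneous π/2 pulses (pulse duration and decoherence are neglected; this simplified formula and the code are an editorial illustration, not taken from the article). The transition probability as a function of detuning δ = f − f₀ is P(δ) = cos²(π δ T), so the central fringe has a full width at half maximum of 1/(2T): a one-second fountain toss gives fringes far narrower than a ~10 ms beam transit.

```python
import math

def ramsey_probability(detuning_hz, T_s):
    """Idealized Ramsey transition probability for two instantaneous pi/2
    pulses separated by a free-evolution time T; pulse duration and
    decoherence are neglected."""
    return math.cos(math.pi * detuning_hz * T_s) ** 2

def central_fringe_fwhm(T_s):
    """Full width at half maximum of the central Ramsey fringe: 1 / (2T)."""
    return 1.0 / (2.0 * T_s)

# Compare a hot-beam transit time (~10 ms) with a fountain toss (~1 s).
for T in (0.01, 1.0):
    print(f"T = {T:5.2f} s  ->  central fringe FWHM = {central_fringe_fwhm(T):6.2f} Hz")

# On resonance every atom makes the transition; half a fringe away, none do.
print(ramsey_probability(0.0, 1.0))  # 1.0
print(ramsey_probability(0.5, 1.0))  # ~0.0
```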
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cruise missile** Cruise missile: A cruise missile is a guided missile used against terrestrial or naval targets that remains in the atmosphere and flies the major portion of its flight path at an approximately constant speed. Cruise missiles are designed to deliver a large warhead over long distances with high precision. Modern cruise missiles are capable of traveling at high subsonic, supersonic, or hypersonic speeds, are self-navigating, and are able to fly on a non-ballistic, extremely low-altitude trajectory. History: The idea of an "aerial torpedo" was shown in the British 1909 film The Airship Destroyer, in which flying torpedoes controlled wirelessly are used to bring down airships bombing London. In 1916, the American aviator Lawrence Sperry built and patented an "aerial torpedo", the Hewitt-Sperry Automatic Airplane, a small biplane carrying a TNT charge, a Sperry autopilot and barometric altitude control. Inspired by the experiments, the United States Army developed a similar flying bomb called the Kettering Bug. Germany had also flown trials with remote-controlled aerial gliders (Torpedogleiter) built by Siemens-Schuckert beginning in 1916. In the Interwar Period, Britain's Royal Aircraft Establishment developed the Larynx (Long Range Gun with Lynx Engine), which underwent a few flight tests in the 1920s. In the Soviet Union, Sergei Korolev headed the GIRD-06 cruise missile project from 1932 to 1939, which used a rocket-powered boost-glide bomb design. The 06/III (RP-216) and 06/IV (RP-212) contained gyroscopic guidance systems. The vehicle was designed to boost to 28 km altitude and glide a distance of 280 km, but test flights in 1934 and 1936 only reached an altitude of 500 meters. History: In 1944, during World War II, Germany deployed the first operational cruise missiles. The V-1, often called a flying bomb, contained a gyroscope guidance system and was propelled by a simple pulsejet engine, the sound of which gave it the nickname of "buzz bomb" or "doodlebug". Accuracy was sufficient only for use against very large targets (the general area of a city), while the range of 250 km was significantly lower than that of a bomber carrying the same payload. The main advantages were speed (although not sufficient to outperform contemporary propeller-driven interceptors) and expendability. The production cost of a V-1 was only a small fraction of that of a V-2 supersonic ballistic missile with a similar-sized warhead. Unlike the V-2, the initial deployments of the V-1 required stationary launch ramps which were susceptible to bombardment. Nazi Germany, in 1943, also developed the Mistel composite aircraft program, which can be seen as a rudimentary air-launched cruise missile, where a piloted fighter-type aircraft was mounted atop an unpiloted bomber-sized aircraft that was packed with explosives to be released while approaching the target. Bomber-launched variants of the V-1 saw limited operational service near the end of the war, with the pioneering V-1's design reverse-engineered by the Americans as the Republic-Ford JB-2 cruise missile. History: Immediately after the war, the United States Air Force had 21 different guided missile projects, including would-be cruise missiles. All but four were cancelled by 1948: the Air Materiel Command Banshee, the SM-62 Snark, the SM-64 Navaho, and the MGM-1 Matador. The Banshee design was similar to Operation Aphrodite; like Aphrodite, it failed, and was cancelled in April 1949.
Concurrently, the US Navy's Operation Bumblebee was conducted at Topsail Island, North Carolina, from c. 1 June 1946 to 28 July 1948. Bumblebee produced proof-of-concept technologies that influenced the US military's other missile projects. History: During the Cold War, both the United States and the Soviet Union experimented further with the concept, deploying early cruise missiles from land, submarines, and aircraft. The main outcome of the United States Navy submarine missile project was the SSM-N-8 Regulus missile, based upon the V-1. History: The United States Air Force's first operational surface-to-surface missile was the winged, mobile, nuclear-capable MGM-1 Matador, also similar in concept to the V-1. Deployment overseas began in 1954, first to West Germany and later to the Republic of China and South Korea. On 7 November 1956, the U.S. Air Force deployed Matador units in West Germany, whose missiles were capable of striking targets in the Warsaw Pact, from their fixed day-to-day sites to unannounced dispersed launch locations. This alert was in response to the crisis posed by the Soviet attack on Hungary which suppressed the Hungarian Revolution of 1956. History: Between 1957 and 1961 the United States pursued an ambitious and well-funded program to develop a nuclear-powered cruise missile, the Supersonic Low Altitude Missile (SLAM). It was designed to fly below the enemy's radar at speeds above Mach 3 and carry hydrogen bombs that it would drop along its path over enemy territory. Although the concept was proven sound and the 500-megawatt engine finished a successful test run in 1961, no airworthy device was ever completed. The project was finally abandoned in favor of ICBM development. History: While ballistic missiles were the preferred weapons for land targets, heavy nuclear and conventional weapon-tipped cruise missiles were seen by the USSR as a primary weapon to destroy United States naval carrier battle groups. Large submarines (for example, Echo and Oscar classes) were developed to carry these weapons and shadow United States battle groups at sea, and large bombers (for example, Backfire, Bear, and Blackjack models) were equipped with the weapons in their air-launched cruise missile (ALCM) configuration. Categories: Cruise missiles can be categorized by size, speed (subsonic or supersonic), range, and whether launched from land, air, surface ship, or submarine. Often versions of the same missile are produced for different launch platforms; sometimes air- and submarine-launched versions are a little lighter and smaller than land- and ship-launched versions. Guidance systems can vary across missiles. Some missiles can be fitted with any of a variety of navigation systems (inertial navigation, TERCOM, or satellite navigation). Larger cruise missiles can carry either a conventional or a nuclear warhead, while smaller ones carry only conventional warheads. Hypersonic: A hypersonic cruise missile travels at least five times the speed of sound (Mach 5). 14-X, a scramjet engine currently under development by Brazil. 3M22 Zircon (>1000–1500 km) hypersonic anti-ship cruise missile. ASN4G (Air-Sol Nucléaire de 4e Génération), a scramjet-powered hypersonic cruise missile being developed by France. BrahMos-II (≈800–1500 km), a hypersonic missile under development as of 2011 in India and Russia. FC/ASW (300 km) (under development) – Franco-British stealth hypersonic cruise missile concept.
HSTDV - hypersonic scramjet demonstration a carrier vehicle for hypersonic and long-range cruise missiles is being developed by Defence Research and Development Organisation (DRDO). Categories: Hyfly-2 - hypersonic air-launched cruise missile first displayed at Sea Air Space 2021, developed by Boeing Hypersonic Air-breathing Weapon Concept (HAWC, pronounced Hawk) - a scramjet powered hypersonic air-launched cruise missile without a warhead that uses its own kinetic energy upon impact to destroy the target, developed by DARPA Kh-90 (3,000–4,000 km) / - a hypersonic air-to-surface cruise missile developed in 1990 by the USSR and later by Russia. This missile was designed to cruise from Mach 4 to Mach 6, eventually being able to travel at speeds lower than Mach 10–15. But this cruise-missile system did not enter service. Categories: Hypersonic Air Launched Offensive Anti-Surface (HALO) - air-launched anti-ship missile under Offensive Anti-Surface Warfare Increment 2 (OASuW Inc 2) program for the US Navy (Navy) Hypersonic Attack Cruise Missile (HACM) - planned for use by the United States Air Force. SCIFiRE / - Southern Cross Integrated Flight Research Experiment (SCIFiRE) is a joint program between the US Department of Defense and the Australian Department of Defence for a Mach 5 scramjet powered missile. In September 2021, the US Department of Defense awarded Preliminary Design Review contracts to Boeing, Lockheed Martin and Raytheon Missiles & Defense. Supersonic These missiles travel faster than the speed of sound, usually using ramjet engines. The range is typically 100–500 km, but can be greater. Guidance systems vary. Categories: Examples: 3M-54 Kalibr (range: up to 4,500 km, max speed: Mach 3) Russia (the "Sizzler" variant is capable of supersonic speed at the terminal stage only) 3M-51 Alfa (250 km, Mach 2.5) Air-Sol Moyenne Portée (300–500 km+, Mach 3) France – supersonic stand-off nuclear missile ASM-3 (400 km, Mach 3+) Japan BrahMos (block-I 290 km, Block-II 500 & Block-IIA 600 km, Mach 3.2) / India / Russia – the only one to complete the tactical cruise missile triad Blyskavka Artem Luch Pivdenmash 100 – 370 km C-101 (50 km, Mach 2) China C-301 (100+ km, Mach ) China C-803 (230 km, Mach 1.4) China – supersonic terminal stage only C-805 China CX-1 (280 km, Mach 3) China CJ-100 / DF-100 ((2000-3000 km, Mach 5) China Hsiung Feng III (400 km, Mach 3.5) Taiwan Hyunmoo-3 (1500 km, Mach 1.2) South Korea KD-88 (200 km, Mach 0.85) China Kh-20 (380–600 km, Mach 2) USSR Kh-31 (25–110 km, Mach 3.5) Russia Kh-32 (600–1,000 km, Mach 4.6) Russia Kh-80 (3,000–5,000 km, Mach 3) / P-270 Moskit (120–250 km, Mach 2-3) / USSR / Russia P-500 Bazalt (550 km, Mach 3+) / USSR / Russia P-700 Granit (625 km, Mach 2.5+) / USSR / Russia P-800 Oniks / Kh-61 (600–800 km, Mach 2.6) / USSR / Russia P-1000 Vulkan (800 km, Mach 3+) / USSR / Russia YJ-12 (250–400 km, Mach 4) China YJ-18 (220–540 km, Mach 3) China YJ-91 (15-120 km, Mach 3.5) China Yun Feng (1200-2,000 km, Mach 3) Taiwan SSM-N-9 Regulus II (1,852 km, Mach 2) United States Intercontinental-range supersonic 9M730 Burevestnik (Unlimited Range) Russia Burya (8,500 km) USSR MKR (8,000 km) USSR RSS-40 Buran (8,500 km) USSR SLAM (cancelled in 1964) United States SM-62 Snark (10,200 km) United States SM-64 Navaho (canceled in 1958) United States Long-range subsonic The United States, Russia, North Korea, India, Iran, South Korea, Israel, France, China and Pakistan have developed several long-range subsonic cruise missiles. 
These missiles have a range of over 1,000 kilometres (620 mi) and fly at about 800 kilometres per hour (500 mph). They typically have a launch weight of about 1,500 kilograms (3,300 lb) and can carry either a conventional or a nuclear warhead. Earlier versions of these missiles used inertial navigation; later versions use much more accurate TERCOM and DSMAC systems. Most recent versions can use satellite navigation. Categories: Examples: 3M-54 Kalibr (up to 4,500 km) Russia AGM-86 ALCM (from 1100 to >2400 km) United States AGM-129 ACM (from 3450 to 3700 km) United States AGM-181 LRSO (>2500 km) United States BGM-109 Tomahawk (up to 1,700 km) United States BGM-109G Ground Launched Cruise Missile (2,500 km) Kh-55 (3,000 km) and Kh-65 Russia Kh-101 (4500–5500 km) Russia Iskander-K not less than 3 500 km Hwasal-2 North Korea > 2000 km RK-55 (3,000 km) Soviet Union Nirbhay India (up to 1500 km) Meshkat Iran (Range 2000 km) MdCN (up to 1,400 km) France Soumar (Range allegedly 2,000–3,000 km) Hoveyzeh (Range 1,350 km) Iran Quds 1 Houthi Hsiung Feng IIE Taiwan (600 - 1,200 km) Hyunmoo III South Korea (Hyunmoo IIIA-500 km, Hyunmoo IIIB-1000 km, Hyunmoo IIIC-1500 km) Type 12 SSM (1,500 km under development) Japan MGM-13 Mace United States DF-10/CJ-10 China (CJ-10K - 1500 km, CJ-20 - 2000 km) Popeye Turbo SLCM Israel GEZGİN (800-1,200 km) Turkey Medium-range subsonic These missiles are about the same size and weight and fly at similar speeds to the above category. Guidance systems vary. Categories: Examples: AGM-158 JASSM (370–1900 km) United States AGM-158C LRASM (370 km+-560 km+) United States Babur missile 1, 1A, 1B, 2, 3 (450-600 km) Pakistan (300 km) Harbah (250–450 km) Pakistan Hatf-VIII / Ra'ad Mark-2 ALCM (400 km) Pakistan Hsiung Feng IIE (600-2000 km) Taiwan Hyunmoo-3 (within 1500 km) South Korea Type 12 SSM (within 1000 km under development) Japan KD-63 China Taurus KEPD 350 (500+ km) // Germany / Sweden / Spain Kh-50 (Kh-SD) and Kh-101 Kh-65 variants Russia MGM-1 Matador (700 km) United States Ra'ad ALCM (350 km) Pakistan Raad Iran (360 km) SOM (SOM B Block I) Turkey (350 km range under serial production, 500 km + range under development) – 500 km, 1500 km and 2500 km versions SSM-N-8 Regulus (926 km) United States P-5 Pyatyorka (450–750 km) Russia, North Korea Storm Shadow / SCALP-EG (560 km, Mach 0.65) / France/UK Ya-Ali (700 km) Iran Zarb (320 km) Pakistan Short-range subsonic These are subsonic missiles that weigh around 500 kilograms (1,102 lb) and have a range of up to 300 km (190 mi). Categories: Examples: Apache (100–140 km) France AVMT-300 (300 km) Brazil MICLA-BR (300 km) Brazil Hyunmoo-3 (over 300 km) shorter range South Korea SSM-700K Haeseong (180+ km) South Korea Kh-35 (130–300 km) Russia, KN-19 Ks3/4 North Korea Kh-59 (115–550 km) Russia P-15 (40–80 km) Russia, KN-1 North Korea Nasr-1 Iran Zafar (25 km) Iran Noor Iran Qader Iran Paveh (1650 km) Iran Naval Strike Missile (185–555 km) Norway RBS-15 Sweden Korshun a local derivative of Kh-55 and RK-55, made by Artem Luch Vizar (ZhMZ), KhAZ, Yuzhnoe Pivdenmash, powered by an AI Progress Motor Sich MS-400 like Neptun missile and same builders designer. 
Categories: Neptune Ukraine V-1 flying bomb (250 km) Nazi Germany Hsiung Feng II Taiwan Wan Chien Taiwan VCM-01 Vietnam 100–300 km Aist Belarus 100 200 – 300 km Marte ER 100+ km Sea Killer export variant Otomat (180 km) France / Italy Otomat Mk2 E / Teseo Mk2/E 360 km new turbofan C-801 (40 km) China C-802 (120–230 km) China C-803 China C-805 China C-602 China CM-602G China Delilah missile (250 km) Israel Gabriel IV (200 km) Israel Popeye turbo ALCM (78 km) Israel RGM-84 Harpoon (124–310 km) United States AGM-84E Standoff Land Attack Missile (110 km) United States AGM-84H/K SLAM-ER (270 km) United States Silkworm (100–500 km) China SOM Turkey Atmaca Turkey Çakır Turkey Deployment: The most common mission for cruise missiles is to attack relatively high-value targets such as ships, command bunkers, bridges and dams. Modern guidance systems permit accurate attacks. Deployment: As of 2001, the BGM-109 Tomahawk missile model has become a significant part of the United States naval arsenal. It gives ships and submarines a somewhat accurate, long-range, conventional land attack weapon. Each costs about US$1.99 million. Both the Tomahawk and the AGM-86 were used extensively during Operation Desert Storm. On 7 April 2017, during the Syrian Civil War, U.S. warships fired more than 50 cruise missiles at a Syrian air base in retaliation for a Syrian chemical attack on a rebel-held town. The United States Air Force (USAF) deploys an air-launched cruise missile, the AGM-86 ALCM. The Boeing B-52 Stratofortress is the exclusive delivery vehicle for the AGM-86 and AGM-129 ACM. Both missile types are configurable for either conventional or nuclear warheads. Deployment: The USAF adopted the AGM-86 for its bomber fleet, while the AGM-109 was adapted to launch from trucks and ships and adopted by the USAF and Navy. The truck-launched versions, and also the Pershing II and SS-20 Intermediate Range Ballistic Missiles, were later destroyed under the bilateral INF (Intermediate-Range Nuclear Forces) treaty with the USSR. Deployment: The British Royal Navy (RN) also operates cruise missiles, specifically the U.S.-made Tomahawk, used by the RN's nuclear submarine fleet. UK conventional warhead versions were first fired in combat by the RN in 1999, during the Kosovo War (the United States fired cruise missiles in 1991). The Royal Air Force uses the Storm Shadow cruise missile on its Typhoon and previously its Tornado GR4 aircraft. It is also used by France, where it is known as SCALP EG, and carried by the Armée de l'Air's Mirage 2000 and Rafale aircraft. Deployment: India and Russia have jointly developed the supersonic cruise missile BrahMos. There are three versions of the BrahMos: ship/land-launched, air-launched, and sub-launched. The ship/land-launched version was operational as of late 2007. The BrahMos has the capability to attack targets on land. Russia also continues to operate other cruise missiles: the SS-N-12 Sandbox, SS-N-19 Shipwreck, SS-N-22 Sunburn and SS-N-25 Switchblade. Germany and Spain operate the Taurus missile, while Pakistan has made the Babur missile. Both the People's Republic of China and the Republic of China (Taiwan) have designed several cruise missile variants, such as the well-known C-802, some of which are capable of carrying biological, chemical, nuclear, and conventional warheads. Deployment: Nuclear warhead versions China: China has the CJ-10 land attack cruise missile, which is capable of carrying a nuclear warhead.
Additionally, China appears to have tested a hypersonic cruise missile in August 2021, a claim it denies. France: The French Force de Frappe nuclear forces include both land- and sea-based bombers with Air-Sol Moyenne Portée (ASMP) high-speed medium-range nuclear cruise missiles. Two models are in use, the ASMP and the newer ASMP-Amélioré (ASMP-A), which was developed in 1999. An estimated 40 to 50 were produced. India: India in 2017 successfully flight-tested its indigenous Nirbhay ('Fearless') land-attack cruise missile, which can deliver nuclear warheads to a strike range of 1,000 km. Israel: The Israel Defense Forces reportedly deploy the medium-range air-launched Popeye Turbo ALCM and the Popeye Turbo SLCM medium-to-long range cruise missile with nuclear warheads on Dolphin class submarines. Deployment: Pakistan: Pakistan currently has four cruise missile systems: the air-launched Ra'ad and its enhanced version Ra'ad II; the ground- and underwater-launched Babur; the ship-launched Harbah missile; and the surface-launched Zarb missile. Both Ra'ad and Babur can carry nuclear warheads of between 10 and 25 kt, and deliver them to targets at a range of up to 300 km (190 mi) and 450 km (280 mi) respectively. Babur has been in service with the Pakistan Army since 2010. Deployment: Russia: Russia has Kh-55SM cruise missiles, with a range similar to the 3,000 km of the United States' AGM-129, but able to carry a more powerful warhead of 200 kt. They are equipped with a TERCOM system which allows them to cruise at an altitude lower than 110 meters at subsonic speeds while obtaining a CEP accuracy of 15 meters with an inertial navigation system. They are air-launched from Tupolev Tu-95s (which can carry 16 of the missiles), Tupolev Tu-160s (12), or Tupolev Tu-22Ms (4). A stealth version of the missile, the Kh-101, is in development. It has similar qualities to the Kh-55, except that its range has been extended to 5,000 km, it is equipped with a 1,000 kg conventional warhead, and it has stealth features which reduce its probability of intercept. After the collapse of the Soviet Union, the most recent cruise missile developed was the Kalibr missile, which entered production in the early 1990s and was officially inducted into the Russian arsenal in 1994. However, it only saw its combat debut on 7 October 2015, as part of the Russian military campaign in Syria. The missile has been used 14 more times in combat operations in Syria since its debut. Deployment: In the late 1950s and early 1960s, the Soviet Union was attempting to develop cruise missiles. In this short time frame, the Soviet Union was working on nearly ten different types of cruise missiles. However, due to resource constraints, most of the initial types of cruise missiles developed by the Soviet Union were Sea-Launched Cruise Missiles or Submarine-Launched Cruise Missiles (SLCMs). The SS-N-1 cruise missile was developed to have different configurations to be fired from a submarine or a ship. However, as time progressed, the Soviet Union began to work on air-launched cruise missiles (ALCMs) as well. These ALCMs were typically delivered via bombers designated "Blinder" or "Backfire". The missiles in this configuration were called the AS-1 and AS-2, with new variants eventually following as development continued.
The main purpose of Soviet cruise missiles was to provide defensive and offensive capability against enemy ships; in other words, most Soviet cruise missiles were anti-ship missiles. By the 1980s the Soviet Union had developed an arsenal of cruise missiles nearing 600 platforms, consisting of land, sea, and air delivery systems. Deployment: United States: The United States has deployed nine nuclear cruise missiles at one time or another. Deployment: MGM-1 Matador ground-launched missile, out of service MGM-13 Mace ground-launched missile, out of service SSM-N-8 Regulus submarine-launched missile, out of service SM-62 Snark ground-launched missile, out of service AGM-28 Hound Dog air-launched missile, out of service BGM-109G Ground Launched Cruise Missile, out of service AGM-86 ALCM air-launched cruise missile, 350 to 550 missiles and W80 warheads still in service BGM-109 Tomahawk cruise missile in nuclear submarine-, surface ship-, and ground-launched models, nuclear models out of service but warheads kept in reserve. Deployment: AGM-129 ACM air-launched missile, out of service Efficiency in modern warfare: Currently, cruise missiles are among the most expensive of single-use weapons, up to several million dollars apiece. One consequence of this is that their users face difficult choices in target allocation, to avoid expending the missiles on targets of low value. For instance, during the 2001 strikes on Afghanistan the United States attacked targets of very low monetary value with cruise missiles, which led many to question the efficiency of the weapon. However, proponents of the cruise missile counter that the weapon cannot be blamed for poor target selection, and the same argument applies to other types of UAVs: they are cheaper than human pilots when total training and infrastructure costs are taken into account, not to mention the risk of loss of personnel. As demonstrated in Libya in 2011 and prior conflicts, cruise missiles are much more difficult to detect and intercept than other aerial assets (reduced radar cross-section, infrared and visual signature due to smaller size), suiting them to attacks against static air defense systems.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benjamin Munson (professor)** Benjamin Munson (professor): Benjamin Munson is a professor and chair of Speech-Language-Hearing Sciences at the University of Minnesota. His research concerns the relationships among speech perception, speech production, and vocabulary growth in children. The bulk of his research has examined how speech perception, production, and word knowledge interact during development in typically developing children, in children with Speech Sound Disorder, in children with Developmental Language Disorder, in adult second-language learners, and in adults with age-related hearing impairment. He has also studied how people convey and perceive sexuality through phonetic variation. In research presented at the American Association for the Advancement of Science in 2018, he revealed that the voices of boys and girls were identifiably different even before puberty, with boys' voices being lower and boys who were gender dysphoric showing traits more associated with women. Munson received his BA in Russian and Political Science from the State University of New York at Buffalo (1992), his MA in speech-language pathology from Ohio State University (1997) and his PhD in speech and hearing science also from Ohio State. Prior to entering academia, he was a political activist. He was arrested at the 1992 Republican National Convention while protesting President George H.W. Bush with the group ACT-UP (he can be seen yelling "what about AIDS?" at the 2:40 mark of a C-SPAN video of the event). Following this event, he served a brief sentence at the Harris County Jail.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Syntactic sugar** Syntactic sugar: In computer science, syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language "sweeter" for human use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer. Syntactic sugar is usually a shorthand for a common operation that could also be expressed in an alternate, more verbose form: the programmer has a choice of whether to use the shorter form or the longer form, but will usually use the shorter form since it is shorter and easier to type and read. Syntactic sugar: For example, many programming languages provide special syntax for referencing and updating array elements. Abstractly, an array reference is a procedure of two arguments: an array and a subscript vector, which could be expressed as get_array(Array, vector(i,j)). Instead, many languages provide syntax such as Array[i,j]. Similarly, an array element update is a procedure of three arguments, for example set_array(Array, vector(i,j), value), but many languages also provide syntax such as Array[i,j] = value. Syntactic sugar: A construct in a language is syntactic sugar if it can be removed from the language without any effect on what the language can do: functionality and expressive power will remain the same. Language processors, including compilers and static analyzers, often expand sugared constructs into their more verbose equivalents before processing, a process sometimes called "desugaring". Origins: The term syntactic sugar was coined by Peter J. Landin in 1964 to describe the surface syntax of a simple ALGOL-like programming language which was defined semantically in terms of the applicative expressions of lambda calculus, centered on lexically replacing λ with "where". Later programming languages, such as CLU, ML and Scheme, extended the term to refer to syntax within a language which could be defined in terms of a language core of essential constructs; the convenient, higher-level features could be "desugared" and decomposed into that subset. This is, in fact, the usual mathematical practice of building up from primitives. Building on Landin's distinction between essential language constructs and syntactic sugar, in 1991, Matthias Felleisen proposed a codification of "expressive power" to align with "widely held beliefs" in the literature. He defined "more expressive" to mean that without the language constructs in question, a program would have to be completely reorganized. Notable examples: In COBOL, many of the intermediate keywords are syntactic sugar that may optionally be omitted. For example, the sentence MOVE A B. and the sentence MOVE A TO B. perform exactly the same function, but the second makes the action to be performed clearer. Augmented assignment or compound assignment operators: For example, a += b is equivalent to a = a + b in C and similar languages, assuming the evaluation of a has no side effects, as when a is a regular variable. Some languages, such as Python, may allow the overloading of augmented assignment operators, so they may behave differently from the standard ones. In Perl, unless (condition) {...} is syntactic sugar for if (not condition) {...}. Additionally, any statement can be followed by a condition, so statement if condition is equivalent to if (condition) {statement}, but the former is more naturally formatted on a single line. In the C language, the a[i] notation is syntactic sugar for *(a + i).
Likewise, the a->x notation is syntactic sugar for accessing members using the dereference operator, (*a).x. The using statement in C# ensures that certain objects are disposed of correctly. The compiler expands the statement into a try-finally block. The C# language allows variables to be declared as var x = expr, which allows the compiler to infer the type of x from the expression expr, instead of requiring an explicit type declaration. Similarly, C++ allows auto x = expr since C++11, and Java allows var x = expr since Java 11. Python list comprehensions (such as [x*x for x in range(10)] for a list of squares) and decorators (such as @staticmethod). In Haskell, a string, denoted in quotation marks, is semantically equivalent to a list of characters. In the tidyverse collection of R packages, the pipe, denoted by %>%, declares that the data (or output of the function) preceding the pipe will serve as the first argument for the function following the pipe. So, x %>% f(y) is equivalent to f(x,y). In SQL, a mere JOIN is equivalent to an INNER JOIN, the latter clarifying that the join statement is specifically an inner join operation as opposed to an outer join operation. Likewise, one may omit the OUTER from LEFT OUTER JOIN, RIGHT OUTER JOIN and FULL OUTER JOIN. Method calling in OOP languages in the form of myObject.myMethod(parameter1, parameter2, parameter3) is syntactic sugar for calling a global function as myMethod(myObject, parameter1, parameter2, parameter3). The reference to the object is passed as a hidden argument, usually accessible from within the method as this. A parameter passed by reference is syntactic sugar for technically passing a pointer as the parameter, but syntactically handling it as the variable itself, to avoid constant pointer de-referencing in the code inside the function. In Java, an import declaration enables the compiler to find classes that are not otherwise specified with fully qualified names. For example, import javax.swing.*; allows the programmer to reference a Swing object such as javax.swing.JButton using the shorter name JButton. In the ES6 version of JavaScript, arrow functions have a short form, (x) => x + 1, which is equivalent to the longer (x) => { return x + 1; }. Criticism: Some programmers feel that these syntax usability features are either unimportant or outright frivolous. Notably, special syntactic forms make a language less uniform and its specification more complex, and may cause problems as programs become large and complex. This view is particularly widespread in the Lisp community, as Lisp has very simple and regular syntax, and the surface syntax can easily be modified. Criticism: For example, Alan Perlis once quipped in "Epigrams on Programming", in a reference to bracket-delimited languages, that "Syntactic sugar causes cancer of the semi-colons". Derivative terms: Syntactic salt: The metaphor has been extended by coining the term syntactic salt, which indicates a feature designed to make it harder to write bad code. Specifically, syntactic salt is a hoop that programmers must jump through just to prove that they know what is going on, rather than to express a program action. For example, in Java and Pascal, assigning a float value to a variable declared as an int without additional syntax explicitly stating that intention will result in a compile error, while C and C++ will automatically truncate any floats assigned to an int. However, this is not syntax, but semantics.
Derivative terms: In C#, when hiding an inherited class member, a compiler warning is issued unless the new keyword is used to specify that the hiding is intentional. To avoid potential bugs owing to the similarity of the switch statement syntax with that of C or C++, C# requires a break for each non-empty case label of a switch (unless goto, return, or throw is used) even though it does not allow implicit fall-through. (Using goto and specifying the subsequent label produces a C/C++-like fall-through.) Syntactic salt may defeat its purpose by making the code unreadable and thus worsen its quality – in extreme cases, the essential part of the code may be shorter than the overhead introduced to satisfy language requirements. Derivative terms: An alternative to syntactic salt is generating compiler warnings when there is high probability that the code is a result of a mistake – a practice common in modern C/C++ compilers. Syntactic saccharin Other extensions are syntactic saccharin and syntactic syrup, meaning gratuitous syntax that does not make programming any easier. Sugared types Data types with core syntactic support are said to be "sugared types". Common examples include quote-delimited strings, curly braces for object and record types, and square brackets for arrays.
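To make the idea of desugaring concrete, here is a short Python illustration of how the sugared forms mentioned above (list comprehensions, decorators, and index notation) correspond to more verbose equivalents. This is an editorial sketch added for illustration, not part of the original article.

```python
# Sugared form: a list comprehension.
squares = [x * x for x in range(10)]

# Desugared equivalent: an explicit loop building the same list.
squares_desugared = []
for x in range(10):
    squares_desugared.append(x * x)
assert squares == squares_desugared

# Sugared form: index notation calls a special method under the hood.
assert squares[3] == squares.__getitem__(3)

# Sugared form: a decorator...
def logged(func):
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

# ...is (approximately) shorthand for re-assigning the decorated name.
def add_desugared(a, b):
    return a + b
add_desugared = logged(add_desugared)

assert add(1, 2) == add_desugared(1, 2) == 3
```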
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dynamic pricing** Dynamic pricing: Dynamic pricing, also referred to as surge pricing, demand pricing, or time-based pricing, is a revenue management pricing strategy in which businesses set flexible prices for products or services based on current market demands. Businesses are able to change prices based on algorithms that take into account competitor pricing, supply and demand, and other external factors in the market. Dynamic pricing is a common practice in several industries such as hospitality, tourism, entertainment, retail, electricity, and public transport. Each industry takes a slightly different approach to dynamic pricing based on its individual needs and the demand for the product. History of dynamic pricing: Dynamic pricing has been the norm for most of human history. Traditionally, two parties would negotiate a price for a product based on a variety of factors, including who was involved, stock levels, time of day, and more. Store owners relied heavily on experienced shopkeepers to manage this process, and these shopkeepers would negotiate the price for every single product in a store. Shopkeepers needed to know everything they could about a product, including the purchase price, stock levels, market demand, and more, to succeed in their jobs and bring profit to the store. History of dynamic pricing: As retail expanded in the Industrial Revolution, storeowners faced the challenge of scaling this traditional haggling system. As assortments expanded and the number of stores grew, it quickly became impossible for shopkeepers to keep up with the store. The negotiation model quickly proved inefficient within an economy of scale. History of dynamic pricing: The invention of the price tag in the 1870s presented a solution: one price for every person. This idea harkened back to a traditional Quaker idea of fairness: Quaker store owners had long employed a fixed-price system in the name of egalitarianism. By charging the same price of all shoppers, Quakers created a system that was fair for all, regardless of shoppers' wealth or status. History of dynamic pricing: Unlike the Quakers, who used fixed pricing as a way to maintain fairness, retailers used fixed pricing to reduce the need for highly skilled shopkeepers and smooth out the shopping experience within a store. The price tag made it easier to train shopkeepers, reduced wait time at checkout, and improved the overall customer experience. This fixed-price model with price tags would dominate retail and commerce for years to come. The current concept of dynamic pricing would emerge anew in the 1980s, aided by innovations in technology and computerized automation. History of dynamic pricing: Dynamic pricing in air transportation Dynamic pricing re-appeared in the market at large in the 1980s airline industry in the United States. Before the 1980s, the airline industry's seat prices were heavily regulated by the United States government, but a change in legislation during the decade gave airlines control over their prices. Companies invested millions of dollars to develop computer programs that would adjust prices automatically based on known variables like departure time, destination, season, events, and more. History of dynamic pricing: After seeing the success of dynamic pricing in selling airline seats, many other verticals within the travel and tourism industry adopted the practice. Dynamic pricing is now the norm for hotels, car rentals, and more, and consumers have largely accepted the practice as commonplace. 
The practice is now moving beyond the travel and tourism industry into other fields. History of dynamic pricing: Dynamic pricing in rideshare services The most recent innovation in dynamic pricing—and the one felt most by consumers—is the rise of dynamic pricing in rideshare apps like Uber. Uber's “Surge Pricing” model, where riders pay more for a trip during peak travel times, began as a way to incentivize drivers to stay out later in Boston, according to Bill Gurley, former board member of Uber. The incentive worked, and the number of drivers on the road in the early morning hours increased by 70%-80%, and the number of unfilled Uber requests plummeted. History of dynamic pricing: Dynamic pricing in entertainment and sports events Dynamic pricing has also made its way into other industries, such as entertainment and sports events, where prices for tickets can vary depending on factors like demand, seat location, and time of purchase. E-commerce platforms, such as Amazon, also utilize dynamic pricing strategies to optimize sales and profits by adjusting prices based on competitors, stock levels, and customer demand. History of dynamic pricing: Dynamic pricing in realm of utilities Dynamic pricing has entered the realm of utilities, particularly with the advent of smart grid technology. Electricity providers, for instance, can now employ dynamic pricing to encourage customers to reduce consumption during periods of high demand by charging higher rates, while offering lower rates during periods of low demand. This approach can help utilities manage their resources more efficiently, benefiting both the providers and the consumers. Dynamic pricing today: The extent to which dynamic pricing has become popular differs across countries. While it is popular in Western countries, it is rarely observed in developing countries, as well as China and Japan. In the West, dynamic pricing has become commonplace in many industries for a variety of reasons. Dynamic pricing today: Hospitality Time-based pricing is the standard method of pricing in the tourism industry. Higher prices are charged during the peak season, or during special event periods. In the off-season, hotels may charge only the operating costs of the establishment, whereas investments and any profit are gained during the high season (this is the basic principle of long-run marginal cost pricing: see also long run and short run). Dynamic pricing today: Hotels and other players in the hospitality industry use dynamic pricing to adjust the cost of rooms and packages based on the supply and demand needs at a particular moment. The goal of dynamic pricing in this industry is to find the highest price that consumers are willing to pay. Another name for dynamic pricing in the industry is demand pricing. This form of price discrimination is used to try to maximize revenue based on the willingness to pay of different market segments. It features price increases when demand is high and decreases to stimulate demand when it is low. Having a variety of prices based on the demand at each point in the day makes it possible for hotels to generate more revenue by bringing in customers at the different price points they are willing to pay. Dynamic pricing today: Transportation Airlines change prices often depending on the day of the week, time of day, and the number of days before the flight. For airlines, dynamic pricing factors in different components such as: how many seats a flight has, departure time, and average cancellations on similar flights. 
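The kind of automated fare adjustment described above can be illustrated with a small sketch. The following Python snippet is a toy model, not any airline's actual algorithm; the function name, weighting factors, and thresholds are invented purely for the example, which simply raises fares as the cabin fills and as departure approaches.

```python
# Toy rule-based airline fare adjustment. All names and coefficients are
# hypothetical; real systems weigh many more variables (season, events,
# cancellations on similar flights, competitor fares, and so on).

def dynamic_fare(base_fare: float, seats_sold: int, capacity: int,
                 days_to_departure: int) -> float:
    load_factor = seats_sold / capacity            # fraction of seats already sold
    load_multiplier = 1.0 + 0.8 * load_factor      # up to +80% when nearly full
    if days_to_departure <= 3:                     # last-minute premium
        time_multiplier = 1.5
    elif days_to_departure <= 14:
        time_multiplier = 1.2
    else:
        time_multiplier = 1.0
    return round(base_fare * load_multiplier * time_multiplier, 2)

print(dynamic_fare(200.0, seats_sold=30, capacity=180, days_to_departure=60))   # early booking, empty plane
print(dynamic_fare(200.0, seats_sold=150, capacity=180, days_to_departure=2))   # late booking, nearly full
```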
A 2022 study in Econometrica estimated that dynamic pricing was beneficial for "early-arriving, leisure consumers at the expense of late-arriving, business travelers. Although dynamic pricing ensures seat availability for business travelers, these consumers are then charged higher prices. When aggregated over markets, welfare is higher under dynamic pricing than under uniform pricing."Congestion pricing is often used in public transportation and road pricing, where a higher price at peak periods is used to encourage more efficient use of the service or time-shifting to cheaper or free off-peak travel. For example, the San Francisco Bay Bridge charges a higher toll during rush hour and on the weekend, when drivers are more likely to be traveling. This is an effective way to boost revenue when demand is high, while also managing demand since drivers unwilling to pay the premium will avoid those times. The London congestion charge discourages automobile travel to Central London during peak periods. The Washington Metro and Long Island Rail Road charge higher fares at peak times. Dynamic pricing today: The tolls on the Custis Memorial Parkway vary automatically according to the actual number of cars on the roadway, and at times of severe congestion can reach almost $50. Dynamic pricing today: Dynamic pricing is also used by Uber and Lyft. Uber's system for "dynamically adjusting prices for service" measures supply (Uber drivers) and demand (passengers hailing rides by use of smartphones), and prices fares accordingly.In recent times, ride-sharing companies such as Uber and Lyft have increasingly incorporated dynamic pricing into their operations. This strategy enables these businesses to offer the best prices for both drivers and passengers by adjusting prices in real-time in response to supply and demand. When there is a strong demand for rides, rates go up to encourage more drivers to offer their services, and when there is a low demand, prices go down to draw in more passengers. The ride-sharing industry has been credited with cutting waiting times and improving efficiency, which has benefited both passengers and drivers. Dynamic pricing today: Professional sports Some professional sports teams use dynamic pricing structures to boost revenue. Dynamic pricing is particularly important in baseball because MLB teams play around twice as many games as some other sports and in much larger venues.Sports that are outdoors have to factor weather into pricing strategy, in addition to the date of the game, date of purchase, and opponent. Tickets for a game during inclement weather will sell better at a lower price; conversely, when a team is on a winning streak, fans will be willing to pay more. Dynamic pricing today: Dynamic pricing was first introduced to sports by a start-up software company from Austin, Texas, Qcue and Major League Baseball club San Francisco Giants. The San Francisco Giants implemented a pilot of 2,000 seats in the View Reserved and Bleachers and moved on to dynamically pricing the entire venue for the 2010 season. Qcue currently works with two-thirds of Major League Baseball franchises, not all of which have implemented a full dynamic pricing structure, and for the 2012 postseason, the San Francisco Giants, Oakland Athletics, and St. Louis Cardinals became the first teams to dynamically price postseason tickets. While behind baseball in terms of adoption, the National Basketball Association, National Hockey League, and NCAA have also seen teams implement dynamic pricing. 
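The rideshare surge mechanism described above, in which fares rise when ride requests outstrip available drivers, can be sketched in a few lines. This is a deliberately simplified illustration rather than Uber's or Lyft's actual formula; the cap and base fare are invented for the example.

```python
# Toy surge multiplier: fares scale with the ratio of open ride requests to
# available drivers, clamped to a maximum cap (mirroring the kind of
# emergency caps mentioned above). All constants are hypothetical.

def surge_multiplier(open_requests: int, available_drivers: int,
                     cap: float = 3.0) -> float:
    if available_drivers == 0:
        return cap
    ratio = open_requests / available_drivers
    return min(max(1.0, ratio), cap)     # never below base fare, never above cap

def fare(base_fare: float, open_requests: int, available_drivers: int) -> float:
    return round(base_fare * surge_multiplier(open_requests, available_drivers), 2)

print(fare(10.0, open_requests=40, available_drivers=50))    # quiet period: 10.0
print(fare(10.0, open_requests=120, available_drivers=50))   # busy period: 24.0
```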
Outside of the U.S., dynamic sports ticket pricing has since been adopted on a trial basis by some clubs in the Football League. Scottish Premier League club Heart of Midlothian introduced dynamic pricing for the sale of their season tickets in 2012, but supporters complained that they were being charged significantly more than the advertised price. Dynamic pricing today: Retail Retail is the next frontier for dynamic pricing. As e-commerce grows in importance and the size of assortments expands, retailers are turning to software to help track product prices and make pricing updates. Dynamic pricing today: Retailers, and online retailers in particular, adjust the price of their products according to competitors, time, traffic, conversion rates, and sales goals. Dynamic pricing is quickly becoming a best practice within the retail industry to help stores manage these factors in a fast-paced market. Dynamic pricing software allows retailers to understand what is happening in their assortments at a glance and act proactively on market changes. Dynamic pricing today: Some retailers will build their own dynamic pricing software, but many more will outsource to a software vendor. Retailers in all categories use dynamic pricing software, including sporting goods, beauty, fashion, do-it-yourself and hardware, baby and family, auto parts, home care, fast-moving consumer goods (FMCGs), and more. Dynamic pricing can even be used by brick-and-mortar stores with the help of electronic shelf labels (ESLs). Dynamic pricing today: Another notable application of dynamic pricing in retail is within the grocery sector. Supermarkets often use dynamic pricing strategies to manage perishable inventory, such as fresh produce and meat products, that have a limited shelf life. By adjusting prices based on factors like expiration dates and current inventory levels, retailers can minimize waste and maximize revenue. Additionally, the widespread adoption of electronic shelf labels (ESLs) in grocery stores has made it easier to implement dynamic pricing strategies in real time, enabling retailers to respond quickly to changing market conditions and consumer preferences. Dynamic pricing today: Theme parks Theme parks have also recently adopted this pricing model in the hope of boosting sales. Disneyland and Disney World adopted the practice in 2016, and Universal Studios followed suit. This pricing model resembles price discrimination more than dynamic pricing, but it is included here for the sake of uniformity. Since the supply of parks is limited and new rides cannot be added in response to a surge in demand, the model theme parks follow with regard to dynamic pricing resembles that of the hotel industry. During summertime, when demand is rather inelastic, the parks charge high prices due to the holiday season, whereas during 'off-peak' times such as winter, low prices are charged. 'Off-peak' pricing makes the term 'cheap holiday' come to life, as it encourages ticket sales at times when these parks experience a fall in demand, resulting in a win-win situation for both parties involved. Dynamic pricing today: Brands and dynamic pricing In recent years, more brands have launched direct-to-consumer sales channels to capture more consumer data and control brand perception. Many brands turn to dynamic pricing to help manage this sales channel and follow the market. With dynamic pricing, brands can more easily control their market perception and create a direct relationship with consumers.
However, the most interesting benefit of a direct-to-consumer strategy is the market data that brands can collect on their customers. Some third-party sellers in the Amazon Marketplace use software to change prices more frequently than would be feasible for people to do, in order to compete for business on price. Dynamic pricing methods: There are a number of ways to execute a pricing strategy with dynamic pricing software, and they can all be combined to match any commercial strategy. This section details some of the most well-known and popular pricing methods and explains how they change in a dynamic pricing engine. These pricing mechanisms are described from the seller's point of view rather than the consumer's, meaning that the seller plays an active role in price setting under the assumption that sellers have high bargaining power. Dynamic pricing methods: Cost-plus pricing Cost-plus pricing is the most basic method of pricing. A store will simply charge consumers the cost required to produce a product plus a predetermined amount of profit. Dynamic pricing methods: Cost-plus pricing is simple to execute, but it only considers internal information when setting the price and does not factor in external influences like market reactions, the weather, or changes in consumer value. A dynamic pricing tool can make it easier to update prices, but will not make the updates often if the user does not account for external information like competitor market prices. Dynamic pricing methods: Due to its simplicity, this is the most widely used method of pricing, with around 74% of companies in the United States employing this strategy. Its usage is skewed, however: companies facing a high degree of competition use this strategy the most, while companies that deal with manufacturing tend to use it the least. Dynamic pricing methods: Pricing based on competitors Businesses that want to price competitively will monitor their competitors' prices and adjust accordingly. This is called competitor-based pricing. In retail, the competitor that many companies watch is Amazon, which changes prices frequently throughout the day. Amazon is a market leader in retail that changes prices often, which encourages other retailers to alter their prices to stay competitive. Such online retailers use price-matching mechanisms like price trackers: the retailer offers the end user a price-match option, and when the customer selects it, an online bot searches for the lowest price across various websites and offers a price below the lowest one found. Such pricing behavior depends on market conditions, as well as a firm's planning. Although a firm in a highly competitive market is often compelled to cut prices, that is not always the case. Under high competition, a stable market, and a long-term view, firms are predicted to cooperate on price rather than undercut each other; all three conditions are necessary for firms to forgo competitive pricing. Dynamic pricing methods: Pricing based on value or elasticity Ideally, companies should charge a price for a product equal to the value the consumer attaches to it. This is called value-based pricing. As this value can differ from person to person, it is difficult to uncover the perfect value and have a differentiated price for every person.
However, consumers' willingness to pay can be used as a proxy for the perceived value. With the price elasticity of a product, companies can estimate how many consumers are willing to buy it at each price point. Products with high elasticities are highly sensitive to changes in price, while products with low elasticities are less sensitive to price changes (ceteris paribus). Consequently, products with low elasticity are typically valued more by consumers, all else being equal. The dynamic aspect of this pricing method is that elasticities change with respect to the product, category, time, location, and retailer. With the price elasticity and the margin of the product, retailers can use this method to aim for volume, revenue, or profit maximization. Dynamic pricing methods: Bundle pricing There are two types of bundle pricing strategies: one from the consumer's point of view, and one from the seller's point of view. From the seller's point of view, an end product's price depends on whether it is bundled with something else; which bundle it belongs to; and sometimes on which customers it is offered to. This strategy is adopted by print-media houses and other subscription-based services. The Wall Street Journal, for example, offers a standalone price if an electronic mode of delivery is purchased, and a discount when it is bundled with print delivery. Software companies and music-streaming sites offer student discounts as part of their bundle-pricing tactics. Dynamic pricing methods: Time-based dynamic pricing Time-based dynamic pricing is popular in industries in which demand changes throughout the day or where suppliers want to offer customers an incentive to use a product at a certain time of day. Time-based retail pricing Many industries, especially online retailers, change prices depending on the time of day. Most retail customers shop during weekly office hours (between 9 AM and 5 PM), so many retailers will raise prices during the morning and afternoon, then lower prices during the evening. Dynamic pricing methods: Time-based utility pricing Time-based pricing of services such as the provision of electric power includes: Time-of-use pricing (TOU pricing), whereby electricity prices are set for a specific time period on an advance or forward basis, typically not changing more often than twice a year; prices paid for energy consumed during these periods are pre-established and known to consumers in advance, allowing them to vary their usage in response to such prices and manage their energy costs by shifting usage to a lower-cost period or reducing their consumption overall (demand response). Critical peak pricing, whereby time-of-use prices are in effect except for certain peak days, when prices may reflect the costs of generating and/or purchasing electricity at the wholesale level. Dynamic pricing methods: Real-time pricing, whereby electricity prices may change as often as hourly (exceptionally more often).
Prices may be signaled to a user on an advance or forward basis, reflecting the utility's cost of generating and/or purchasing electricity at the wholesale level; and Peak-load reduction credits, for consumers with large loads who enter into pre-established peak-load-reduction agreements that reduce a utility's planned capacity obligations. Peak pricing is best used for products that are inelastic in supply, where suppliers are fully able to anticipate demand growth and can thus charge differently for service during systematic periods of time. Dynamic pricing methods: A utility with regulated prices may develop a time-based pricing schedule based on an analysis of its long-run costs, such as operation and investment costs. A utility such as electricity (or another service), operating in a market environment, may be auctioned on a competitive market; time-based pricing will then typically reflect price variations on the market. Such variations include regular oscillations due to the demand patterns of users; supply issues (such as the availability of intermittent natural resources like water flow or wind); and exceptional price peaks. Dynamic pricing methods: Price peaks reflect strained conditions in the market (possibly augmented by market manipulation, as during the California electricity crisis) and convey a possible lack of investment. Extreme events include the default by Griddy after the 2021 Texas power crisis. Conversion-rate pricing Conversion rates measure how many browsers on a website turn into buyers. When conversion rates of viewers to buyers are low, dropping the price to increase conversions is standard with a dynamic pricing strategy. A good conversion rate generally means a good return on investment (ROI); if the conversion rate decreases, ROI decreases and the cost per conversion increases, which signals that the marketing or pricing strategy needs to change. Controversy: Some critics of dynamic pricing, also known as 'surge pricing', say it is a form of price gouging. Dynamic pricing is widely unpopular among consumers, as some feel it tends to favour particular buyers. While surge pricing is generally driven by demand-supply dynamics, some instances have proven otherwise. Some businesses utilise modern technologies (big data and the IoT) to adopt dynamic pricing strategies, where collection and analysis of real-time private data occur almost instantaneously. As data-analysis technology develops rapidly, enabling firms to detect a person's browsing history, age, gender, location, and preferences, some consumers fear "unwanted privacy invasions and data fraud", as the extent to which their information is used is often undisclosed or ambiguous. Even with firms' disclaimers stating that private information will be used strictly for data collection and promising that no third-party distribution will occur, a few cases of company misconduct can disrupt consumers' perceptions. Some consumers are simply skeptical of information collection outright because of the potential for "data leakages and misuses", which can harm suppliers' long-term profitability through reduced customer loyalty. Consumers can also develop price fairness or unfairness perceptions, whereby different prices offered to individuals for the same products can affect customers' perceptions of price fairness. Studies have found that the ease of learning what other individuals paid led consumers to perceive price unfairness and report lower satisfaction when others paid less than they did.
However, when consumers were price-advantaged, development of trust and increased repurchase intentions were observed. Other research indicated that price fairness perceptions varied depending on consumers' privacy sensitivity and on the nature of the dynamic pricing used, such as individual pricing, segment pricing, location-data pricing, and purchase-history pricing. Cases of internet giants suffering severe consumer backlash for their dynamic pricing practices include: Amazon.com Amazon.com engaged in price discrimination for some customers in the year 2000, showing different prices at the same time for the same item to different customers, potentially violating the Robinson–Patman Act. When this incident was criticised, Amazon issued a public apology with refunds to almost 7000 customers but did not cease the practice. During the COVID-19 pandemic, prices of certain items in high demand were reported to have quadrupled, garnering negative attention. Although Amazon denied claims of any such manipulation and blamed a few sellers for driving up prices of essentials such as sanitizers and masks, essential products 'sold by Amazon' had also seen a hefty rise in price. Amazon claimed this was the result of a software malfunction. Controversy: Uber Uber's surge pricing has also created controversy. In 2013, when New York was in the midst of a storm, Uber users saw fares go up to eight times the usual rate. This incident attracted public backlash, even from celebrities, with Salman Rushdie among others publicly criticising the move. After this incident, starting in 2015 the company began placing caps on how high surge pricing can go during emergencies. Drivers have been known to hold off on accepting rides in an area until surge pricing forces fares up to a level satisfactory to them. Controversy: Coca Cola As the largest beverage company in the world, Coca-Cola sells nearly 20,000 bottles of Coca-Cola every second, yet its experiments with dynamic pricing have proven unacceptable to consumers. Controversy: In 1999, Coca-Cola launched a new interactive vending machine that could detect the surrounding temperature and charge more for a Coca-Cola when the temperature was higher. The incident sparked reports in The New York Times and elsewhere. From a customer's perspective, it did not seem fair to pay more simply because the weather was hotter. Controversy: Price discrimination of this kind, where prices may be higher all year round in warm countries and lower in countries with cooler average temperatures, is likely to reduce sales in warmer countries and can harm Coca-Cola's brand image. Controversy: Bruce Springsteen In 2022, ticket prices for the famous American singer-songwriter Bruce Springsteen's 2023 U.S. concerts raised questions. Springsteen partnered with Ticketmaster to distribute the tickets, and Ticketmaster adopted a dynamic pricing strategy to capture the strong demand. As a result, some seats' prices soared above $5000. The ticketing giant explained that only 1% of the tickets were sold above $1000 and 18% were in fact sold below $99. It also outlined that dynamic pricing is utilised to "capture more value for the artist at the initial onsale"; if tickets were sold below a certain level while strong demand exists, the potential value would instead favour resellers in the secondary market.
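To make two of the methods from the earlier "Dynamic pricing methods" discussion concrete, the sketch below pairs a cost-plus calculation with the textbook elasticity-based rule: for a constant-elasticity demand curve, the profit-maximizing price satisfies the Lerner condition (P − MC)/P = −1/e, i.e. P = MC·e/(e + 1) for elasticity e < −1. The numbers are invented for illustration; this is not any vendor's pricing engine.

```python
# Miniature versions of two pricing methods discussed above.
# Costs, markups, and elasticities are made-up example values.

def cost_plus_price(unit_cost: float, markup: float = 0.25) -> float:
    """Cost-plus: charge the production cost plus a predetermined profit margin."""
    return round(unit_cost * (1.0 + markup), 2)

def elasticity_price(unit_cost: float, elasticity: float) -> float:
    """Profit-maximizing price under constant-elasticity demand, from the
    Lerner condition (P - MC)/P = -1/elasticity; requires elasticity < -1."""
    if elasticity >= -1:
        raise ValueError("demand must be elastic (elasticity < -1) at the optimum")
    return round(unit_cost * elasticity / (elasticity + 1.0), 2)

print(cost_plus_price(8.00))             # 10.0
print(elasticity_price(8.00, -5.0))      # 10.0  (price-sensitive product, thin markup)
print(elasticity_price(8.00, -3.0))      # 12.0  (less price-sensitive, larger markup)
```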
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**PANC-1** PANC-1: PANC-1 is a human pancreatic cancer cell line isolated from a pancreatic carcinoma of ductal cell origin.PANC-1 was derived from the tissue of a 56-year-old male. The cells can metastasize but have poor differentiation abilities. PANC-1 cells take 52 hours to double in population, have a modal chromosome number of 63, and show G6PD of the slow mobility type. PANC-1 cells are known to have an epithelial morphology and are adherent in cell culture flasks. The cells can be frozen and regrown in culture, provided that they are appropriately warmed. Additionally, PANC-1 cells have a tendency to clump, a feature which can be avoided with trypsinization.PANC-1 cells have been used to study the role of keratin reorganization during the migration of cancer cells, along with calcium-mediated actin reset in response to physiological changes.
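As a worked illustration of the 52-hour doubling time quoted above, ideal exponential growth gives N(t) = N0 · 2^(t/52 h). The seeding density and time points in this sketch are arbitrary examples, and the calculation ignores real-culture effects such as plating efficiency and confluence.

```python
# Expected PANC-1 cell count under ideal exponential growth with a 52 h
# doubling time. Seeding density and time points are arbitrary examples.

DOUBLING_TIME_H = 52.0

def expected_cells(seeded: float, hours: float) -> float:
    return seeded * 2 ** (hours / DOUBLING_TIME_H)

for day in (1, 3, 7):
    print(f"day {day}: ~{expected_cells(1e5, 24 * day):,.0f} cells")
```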
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sony Ericsson Elm** Sony Ericsson Elm: The Sony Ericsson Elm (J10 and J10i2) is a cell phone released in 2010. It is a compact handset noted for its environmentally friendly features. Features: The Elm has a 5-megapixel camera, a microSD memory card slot, and an FM radio. The phone also came with social media widgets built in, such as Facebook, Twitter and MySpace. Upon release, the phone was available in two color schemes: black/silver (matte-black frame with silver-coloured battery lid), and pink. Features: Apart from the form factor and the very minor software changes required by the difference in form, the Elm is feature-wise identical to the Sony Ericsson Hazel (J20i). The J10 variant has a second video-call camera on the display side, whereas the J10i2 features A-GPS and Wi-Fi. However, the J10i2 was more widely available on the market, making the J10 variant extremely rare to find. Greenheart: The phone is one of the earliest models in Sony Ericsson's environmentally friendly "Greenheart" range, featuring devices made of recycled materials, longer battery life and low-energy chargers, as well as minimal use of paper through reduced packaging and the replacement of the traditional printed user manual with one stored on the phone. The Elm was made from recycled plastic and was free of toxic chemicals. The device also shipped with "eco-aware" applications such as the "green calculator", which showed the CO2 saved by walking, and "Walk Mate", a walking navigation app, and featured minimized packaging and the use of waterborne paint. Eco-rating, an organization providing sustainability scores for mobile devices, gave the Elm a 4.3 out of 5 rating, the highest among the handsets it evaluated in 2010.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Landspout** Landspout: Landspout is a term created by atmospheric scientist Howard B. Bluestein in 1985 for a tornado not associated with a mesocyclone. The Glossary of Meteorology defines a landspout as "Colloquial expression describing tornadoes occurring with a parent cloud in its growth stage and with its vorticity originating in the boundary layer. The parent cloud does not contain a preexisting mid-level mesocyclone. The landspout was so named because it looks like "a weak Florida Keys waterspout over land."Landspouts are typically weaker than mesocyclone-associated tornadoes spawned within supercell thunderstorms, in which the strongest tornadoes form. Characteristics: Landspouts are a type of tornado that forms during the growth stage of a cumulus congestus or occasionally a cumulonimbus cloud when an updraft stretches boundary layer vorticity upward into a vertical axis and tightens it into a strong vortex. These generally are smaller and weaker than supercell tornadoes and do not form from a mesocyclone or pre-existing rotation in the cloud. Because of this lower depth, smaller size, and weaker intensity, landspouts are rarely detected by Doppler weather radar (NWS).Landspouts share a strong resemblance and development process to that of waterspouts, usually taking the form of a translucent and highly laminar helical tube. "They are typically narrow, rope-like condensation funnels that form while the thunderstorm cloud is still growing and there is no rotating updraft", according to the National Weather Service. Landspouts are considered tornadoes since a rapidly rotating column of air is in contact with both the surface and a cumuliform cloud. Not all landspouts are visible, and many are first sighted as debris swirling at the surface before eventually filling in with condensation and dust. Characteristics: Orography can influence landspout (and even mesocyclone tornado) formation. A notable example is the propensity for landspout occurrence in the Denver Convergence Vorticity Zone (DCVZ). Life cycle: Forming in relation to misocyclones and under updrafts, a landspout generally lasts for less than 15 minutes; however, they can persist substantially longer, and produce significant damage. Landspouts tend to progress through recognizable stages of formation, maturation, and dissipation, and usually decay when a downdraft or significant precipitation (outflow) occur nearby. They may form in lines or groups of multiple landspouts. Damage: Landspouts are commonly weak; however, on rare occasions, a landspout can be as strong as an EF2 or EF3 tornado.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Photofragment-ion imaging** Photofragment-ion imaging: Photofragment ion imaging or, more generally, Product Imaging is an experimental technique for making measurements of the velocity of product molecules or particles following a chemical reaction or the photodissociation of a parent molecule. The method uses a two-dimensional detector, usually a microchannel plate, to record the arrival positions of state-selected ions created by resonantly enhanced multi-photon ionization (REMPI). The first experiment using photofragment ion imaging was performed by David W. Chandler and Paul L. Houston in 1987 on the photodissociation dynamics of methyl iodide (iodomethane, CH3I). Background: Many problems in molecular reaction dynamics demand the simultaneous measurement of a particle's speed and angular direction; the most demanding require the measurement of this velocity in coincidence with internal energy. Studies of molecular reactions, energy transfer processes and photodissociation can only be understood completely if the internal energies and velocities of all products can be specified. Product imaging approaches this goal by determining the three-dimensional velocity distribution of one state-selected product of the reaction. For a reaction producing two products, because the speed of the unobserved sibling product is related to that of the measured product through conservation of momentum and energy, the internal state of the sibling can often be inferred. Background: Example A simple example illustrates the principle. Ozone (O3) dissociates following ultraviolet excitation to yield an oxygen atom and an oxygen molecule. Although there are (at least) two possible channels, the principal products are O(1D) and O2(1Δ); that is, both the atom and the molecule are in their first excited electronic state (see atomic term symbol and molecular term symbol for further explanation). At a wavelength of 266 nm, the photon has enough energy to dissociate ozone to these two products, to excite the O2(1Δ) vibrationally to a maximum level of v = 3, and to provide some energy to the recoil velocity between the two fragments. Of course, the more energy that is used to excite the O2 vibrations, the less will be available for the recoil. REMPI detection of the O(1D) atom, combined with the product imaging technique, yields an image that can be used to calculate the O(1D) three-dimensional velocity distribution. A slice through this cylindrically symmetric distribution is shown in the figure, where an O(1D) atom that has zero velocity in the center-of-mass frame would arrive at the center of the figure. Background: Note that there are four rings, corresponding to four main groups of O(1D) speeds. These correspond to O2(1Δ) production at vibrational levels v = 0, 1, 2, and 3. The ring corresponding to v = 0 is the outer one, since production of the O2(1Δ) in this level leaves the most energy for recoil between the O(1D) and O2(1Δ). Thus, the product imaging technique immediately shows the vibrational distribution of the O2(1Δ). Background: Note that the angular distribution of the O(1D) is not uniform – more of the atoms fly toward the north or south pole than to the equator. In this case, the north-south axis is parallel to the polarization direction of the light that dissociated the ozone. Ozone molecules that absorb the polarized light are those in a particular alignment distribution, with a line connecting the end oxygen atoms in O3 roughly parallel to the polarization.
Because the ozone dissociates more rapidly than it rotates, the O and O2 products recoil predominantly along this polarization axis. But there is more detail as well. A close examination shows that the peak in the angular distribution is not actually exactly at the north or south pole, but rather at an angle of about 45 degrees. This has to do with the polarization of the laser that ionizes the O(1D), and can be analyzed to show that the angular momentum of this atom (which has 2 units) is aligned relative to the velocity of recoil. More detail can be found elsewhere.There are other dissociation channels available to ozone following excitation at this wavelength. One produces O(3P) and O2(3Σ), indicating that both the atom and molecule are in their ground electronic state. The image above has no information on this channel, since only the O(1D) is probed. However, by tuning the ionization laser to the REMPI wavelength of O(3P) one finds a completely different image that provides information about the internal energy distribution of O2(3Σ). The Product Imaging Technique: In the original product imaging paper, the positions of the ions are imaged onto a two-dimensional detector. A photolysis laser dissociates methyl iodide (CH3I), while an ionization laser is used REMPI to ionize a particular vibrational level of the CH3 product. Both lasers are pulsed, and the ionization laser is fired at a delay short enough that the products have not moved appreciably. Because ejection of an electron by the ionization laser does not change the recoil velocity of the CH3 fragment, its position at any time following the photolysis is nearly the same as it would have been as a neutral. The advantage of converting it to an ion is that, by repelling it with a set of grids (represented by the vertical solid lines in the figure), one can project it onto a two-dimensional detector. The detector is a double microchannel plate consisting of two glass discs with closely packed open channels (several micrometres in diameter). A high voltage is placed across the plates. As an ion hits inside a channel, it ejects secondary electrons that are then accelerated into the walls of the channel. Since multiple electrons are ejected for each one that hits the wall, the channels act as individual particle multipliers. At the far end of the plates approximately 107 electrons leave the channel for each ion that entered. Importantly, they exit from a spot right behind where the ion entered. The electrons are then accelerated to a phosphor screen, and the spots of light are recorded with a gated charge-coupled device (CCD) camera. The image collected from each pulse of the lasers is then sent to a computer, and the results of many thousands of laser pulses are accumulated to provide an image such as the one for ozone shown previously. The Product Imaging Technique: In this position-sensing version of product imaging, the position of the ions as they hit the detector is recorded. One can imagine the ions produced by the dissociation and ionization lasers as expanding outward from the center-of-mass with a particular distribution of velocities. It is this three-dimensional object that we wish to detect. Since the ions created should be of the same mass, they will all be accelerated uniformly toward the detector. 
It takes very little time for the whole three-dimensional object to be crushed into the detector, so the position of an ion on the detector relative to the center position is given simply by v Δt, where v is its velocity and Δt is the time between when the ions were made and when they hit the detector. The image is thus a two-dimensional projection of the desired three-dimensional velocity distribution. Fortunately, for systems with an axis of cylindrical symmetry parallel to the surface of the detector, the three-dimensional distribution may be recovered from the two-dimensional projection by the use of the inverse Abel transform. The cylindrical axis is the axis containing the polarization direction of the dissociating light. It is important to note that the image is taken in the center-of-mass frame; no transformation, other than from time to speed, is needed. The Product Imaging Technique: A final advantage of the technique should also be mentioned: ions of different masses arrive at the detector at different times. This differential arises because each ion is accelerated to the same total energy, E, as it traverses the electric field, but the acceleration speed, vz, varies as E = ½ mvz2. Thus, vz varies as the reciprocal of the square root of the ion mass, or the arrival time is proportional to the square root of the ion mass. In a perfect experiment, the ionization laser would ionize only the products of the dissociation, and those only in a particular internal energy state. But the ionization laser, and perhaps the photolysis laser, can create ions from other material, such as pump oil or other impurities. The ability to selectively detect a single mass by gating the detector electronically is thus an important advantage in reducing noise. Improvements to the Product Imaging Technique: Velocity Map Imaging A major improvement to the product imaging technique was achieved by Eppink and Parker. A difficulty that limits the resolution in the position-sensing version is that the spot on the detector is no smaller than the cross-sectional area of the ions excited. For example, if the volume of interaction of the molecular beam, photolysis laser, and ionization laser is, say 1 mm x 1 mm x 1 mm, then the spot for an ion moving with a single velocity would still span 1mm x 1mm at the detector. This dimension is much larger than the limit of a channel width (10 μm) and is substantial compared to the radius of a typical detector (25 mm). Without some further improvement, the velocity resolution for a position-sensing apparatus would be limited to about one part in twenty-five. Eppink and Parker found a way around this limit. Their version of the product imaging technique is called velocity map imaging. Improvements to the Product Imaging Technique: Velocity map imaging is based on the use of an electrostatic lens to accelerate the ions toward the detector. When the voltages are properly adjusted, this lens has the advantage that it focuses ions with the same velocity to a single spot on the detector regardless where the ion was created. This technique thus overcomes the blurring caused by the finite overlap of the laser and molecular beams. Improvements to the Product Imaging Technique: In addition to ion imaging, velocity map imaging is also used for electron kinetic energy analysis in photoelectron photoion coincidence spectroscopy. 
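The inverse Abel transform mentioned above can be carried out numerically in several ways; dedicated packages and more sophisticated algorithms are normally used in practice. The sketch below uses the simple "onion peeling" discretization on a synthetic half-profile, purely to show how a cylindrically symmetric 3D distribution is recovered from its 2D projection. The array size and the synthetic ring are arbitrary.

```python
import numpy as np

def chord_matrix(n: int) -> np.ndarray:
    """L[i, j] = length of the chord at distance i pixels from the symmetry
    axis through the annulus of radii [j, j+1] (unit pixel spacing)."""
    i = np.arange(n, dtype=float)[:, None]
    j = np.arange(n, dtype=float)[None, :]
    outer = np.sqrt(np.maximum((j + 1.0) ** 2 - i ** 2, 0.0))
    inner = np.sqrt(np.maximum(j ** 2 - i ** 2, 0.0))
    return np.triu(2.0 * (outer - inner))        # only annuli with j >= i contribute

def onion_peel(projection: np.ndarray) -> np.ndarray:
    """Recover the radial distribution f(r) from one half-row of an Abel
    projection, assuming cylindrical symmetry, by back-substitution from the
    outermost annulus inward."""
    n = len(projection)
    L = chord_matrix(n)
    f = np.zeros(n)
    for k in range(n - 1, -1, -1):
        f[k] = (projection[k] - L[k, k + 1:] @ f[k + 1:]) / L[k, k]
    return f

# Synthetic check: project a single "ring" of speeds, then invert the projection.
r = np.arange(100, dtype=float)
f_true = np.exp(-((r - 40.0) / 6.0) ** 2)
projection = chord_matrix(r.size) @ f_true
print(np.allclose(onion_peel(projection), f_true))   # True
```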
Improvements to the Product Imaging Technique: Three-Dimensional (3D) Ion Imaging Chichinin, Einfeld, Maul, and Gericke replaced the phosphor screen with a time-resolving delay-line anode in order to measure all three components of the initial product momentum vector simultaneously for each individual product particle arriving at the detector. This technique allows one to measure the three-dimensional product momentum vector distribution without having to rely on mathematical reconstruction methods, which require the investigated systems to be cylindrically symmetric. Later, velocity mapping was added to 3D imaging. 3D techniques have been used to characterize several elementary photodissociation processes and bimolecular chemical reactions. Improvements to the Product Imaging Technique: Centroiding Chang et al. realized that a further increase in resolution could be gained by carefully analyzing each spot detected by the CCD camera. Under the microchannel plate amplification typical in most laboratories, each such spot was 5-10 pixels in diameter. By programming a microprocessor to examine each of up to 200 spots per laser shot and determine the center of each spot's intensity distribution, Chang et al. were able to further increase the velocity resolution to the equivalent of one pixel out of the 256-pixel radius of the CCD chip. Improvements to the Product Imaging Technique: DC Slice Imaging DC slice imaging is an extension of the traditional velocity map imaging technique developed in the Suits group. In DC slicing, a weaker field in the ionization region allows the ion cloud to expand, stretching the arrival time to several hundred nanoseconds. A fast transistor switch is then used to select the central part of the ion cloud (the Newton sphere). This central slice carries the full velocity and angular distribution, so no mathematical reconstruction is necessary. (D. Townsend, S. K. Lee and A. G. Suits, "Orbital polarization from DC slice imaging: S(1D) alignment in the photodissociation of ethylene sulfide," Chem. Phys., 301, 197 (2004).) Electron Imaging Product imaging of positive ions formed by REMPI detection is only one of the areas where charged particle imaging has become useful. Another is the detection of electrons. Ideas along these lines have an early history. Demkov et al. were perhaps the first to propose a "photoionization microscope". They realized that trajectories of an electron emitted from an atom in different directions may intersect again at a large distance from the atom and create an interference pattern. They proposed building an apparatus to observe the predicted rings. Blondel et al. eventually realized such a "microscope" and used it to study the photodetachment of Br−. It was Helm and co-workers, however, who were the first to create an electron imaging apparatus. The instrument is an improvement on previous photoelectron spectrometers in that it provides information on all energies and all angles of the photoelectrons for each shot of the laser. Helm and his co-workers have used this technique to investigate the ionization of Xe, Ne, H2, and Ar. In more recent examples, Suzuki, Hayden, and Stolow have pioneered the use of femtosecond excitation and ionization to follow excited-state dynamics in larger molecules.
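The centroiding step described above, locating the centre of each 5-10 pixel ion spot on a camera frame, amounts to labelling connected bright regions and taking an intensity-weighted mean of their pixel coordinates. The sketch below is a generic illustration built on SciPy's image-labelling routines, not the original authors' microprocessor code; the threshold and the synthetic frame are invented.

```python
import numpy as np
from scipy import ndimage

def spot_centroids(frame, threshold):
    """Intensity-weighted centroids of connected bright spots in one frame,
    giving sub-pixel positions as in centroiding mode."""
    mask = frame > threshold
    labels, n_spots = ndimage.label(mask)                     # connected regions
    return ndimage.center_of_mass(frame, labels, list(range(1, n_spots + 1)))

# Synthetic frame: two blurred "ion spots" on a weak noise background.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.05, size=(64, 64))
yy, xx = np.mgrid[0:64, 0:64]
for cy, cx in [(20.3, 41.7), (45.6, 12.2)]:
    frame += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 2.0 ** 2))

print(spot_centroids(frame, threshold=0.3))   # approx. [(20.3, 41.7), (45.6, 12.2)]
```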
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SOD3** SOD3: Extracellular superoxide dismutase [Cu-Zn] is an enzyme that in humans is encoded by the SOD3 gene. SOD3: This gene encodes a member of the superoxide dismutase (SOD) protein family. SODs are antioxidant enzymes that catalyze the dismutation of two superoxide radicals into hydrogen peroxide and oxygen. The product of this gene is thought to protect the brain, lungs, and other tissues from oxidative stress. The protein is secreted into the extracellular space and forms a glycosylated homotetramer that is anchored to the extracellular matrix (ECM) and cell surfaces through an interaction with heparan sulfate proteoglycan and collagen. A fraction of the protein is cleaved near the C-terminus before secretion to generate circulating tetramers that do not interact with the ECM.Among black garden ants (Lasius niger), the lifespan of queens is an order of magnitude greater than of workers despite no systematic nucleotide sequence difference between them. The SOD3 gene was found to be the most differentially over-expressed gene in the brains of queen vs worker ants. This finding raises the possibility that SOD3 antioxidant activity plays a key role in the striking longevity of social insect queens.
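Written out explicitly, the dismutation reaction described above follows the textbook two-step copper redox cycle common to Cu/Zn superoxide dismutases in general (rendered here in LaTeX; the mechanism shown is the standard one for this enzyme family rather than something this article states specifically for SOD3):

```latex
% Overall reaction catalyzed by superoxide dismutases:
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \;\longrightarrow\; \mathrm{H_2O_2} + \mathrm{O_2}

% Standard two-step cycle at the copper centre of Cu/Zn SODs:
\mathrm{Cu^{2+}\!-\!SOD} + \mathrm{O_2^{\bullet-}} \;\longrightarrow\; \mathrm{Cu^{+}\!-\!SOD} + \mathrm{O_2}
\mathrm{Cu^{+}\!-\!SOD} + \mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \;\longrightarrow\; \mathrm{Cu^{2+}\!-\!SOD} + \mathrm{H_2O_2}
```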
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bunce–Deddens algebra** Bunce–Deddens algebra: In mathematics, a Bunce–Deddens algebra, named after John W. Bunce and James A. Deddens, is a certain type of AT algebra, a direct limit of matrix algebras over the continuous functions on the circle, in which the connecting maps are given by embeddings between families of shift operators with periodic weights. Bunce–Deddens algebra: Each inductive system defining a Bunce–Deddens algebra is associated with a supernatural number, which is a complete invariant for these algebras. In the language of K-theory, the supernatural number corresponds to the K0 group of the algebra. Also, Bunce–Deddens algebras can be expressed as the C*-crossed product of the Cantor set with a certain natural minimal action known as an odometer action. They also admit a unique tracial state. Together with the fact that they are AT, this implies they have real rank zero. Bunce–Deddens algebra: In the broader context of the classification program for simple separable nuclear C*-algebras, AT-algebras of real rank zero were shown to be completely classified by their K-theory, the Choquet simplex of tracial states, and the natural pairing between K0 and traces. The classification of Bunce–Deddens algebras is thus a precursor to the general result. It is also known that, in general, crossed products arising from minimal homeomorphisms on the Cantor set are simple AT-algebras of real rank zero. Definition and basic properties: Definition Let $C(\mathbb{T})$ denote the continuous functions on the circle and $M_r(C(\mathbb{T}))$ the C*-algebra of $r \times r$ matrices with entries in $C(\mathbb{T})$. For a supernatural number $\{n_k\}$, the corresponding Bunce–Deddens algebra $B(\{n_k\})$ is the direct limit

$$\varinjlim \Bigl( \cdots \longrightarrow M_{n_k}(C(\mathbb{T})) \xrightarrow{\ \beta_k\ } M_{n_{k+1}}(C(\mathbb{T})) \longrightarrow \cdots \Bigr).$$

One needs to define the embeddings $\beta_k : M_{n_k}(C(\mathbb{T})) \to M_{n_{k+1}}(C(\mathbb{T}))$. Definition and basic properties: These embedding maps arise from the natural embeddings between C*-algebras generated by shifts with periodic weights. For integers $n$ and $m$, we define an embedding $\beta : M_n(C(\mathbb{T})) \to M_{nm}(C(\mathbb{T}))$ as follows. On a separable Hilbert space $H$, consider the C*-algebra $W(n)$ generated by weighted shifts of fixed period $n$ with respect to a fixed basis. $W(n)$ embeds into $W(nm)$ in the obvious way: any $n$-periodic weighted shift is also an $nm$-periodic weighted shift. $W(n)$ is isomorphic to $M_n(C^*(T_z))$, where $C^*(T_z)$ denotes the Toeplitz algebra. Therefore, $W(n)$ contains the compact operators as an ideal, and modulo this ideal it is $M_n(C(\mathbb{T}))$. Because the map from $W(n)$ into $W(nm)$ preserves the compact operators, it descends to an embedding $\beta : M_n(C(\mathbb{T})) \to M_{nm}(C(\mathbb{T}))$. It is this embedding that is used in the definition of Bunce–Deddens algebras. Definition and basic properties: The connecting maps The $\beta_k$'s can be computed more explicitly, and we now sketch this computation. This will be useful in obtaining an alternative characterization of the Bunce–Deddens algebras, and also in their classification. Definition and basic properties: The C*-algebra $W(n)$ is in fact singly generated. A particular generator of $W(n)$ is the weighted shift $T$ of period $n$ with periodic weights ½, …, ½, 1, ½, …, ½, 1, …. In the appropriate basis of $H$, $T$ is represented by the $n \times n$ operator matrix

$$T = \begin{bmatrix} 0 & & & T_z \\ \tfrac{1}{2}I & \ddots & & \\ & \ddots & \ddots & \\ & & \tfrac{1}{2}I & 0 \end{bmatrix},$$

where $T_z$ is the unilateral shift. A direct calculation using functional calculus shows that the C*-algebra generated by $T$ is $M_n(C^*(T_z))$, where $C^*(T_z)$ denotes the Toeplitz algebra, the C*-algebra generated by the unilateral shift. Since it is clear that $M_n(C^*(T_z))$ contains $W(n)$, this shows $W(n) = M_n(C^*(T_z))$.
Definition and basic properties: From the Toeplitz short exact sequence

$$0 \to \mathcal{K} \to C^*(T_z) \to C(\mathbb{T}) \to 0,$$

one has

$$0 \to M_n(\mathcal{K}) \xrightarrow{\ i\ } M_n(C^*(T_z)) \xrightarrow{\ j\ } M_n(C(\mathbb{T})) \to 0,$$

where $i$ is the entrywise embedding map and $j$ the entrywise quotient map on the Toeplitz algebra. So the C*-algebra $M_{n_k}(C(\mathbb{T}))$ is singly generated by

$$\tilde{T} = \begin{bmatrix} 0 & & & z \\ \tfrac{1}{2} & \ddots & & \\ & \ddots & \ddots & \\ & & \tfrac{1}{2} & 0 \end{bmatrix},$$

where the scalar entries denote constant functions on the circle and $z$ is the identity function. For integers $n_k$ and $n_{k+1}$, where $n_k$ divides $n_{k+1}$, the natural embedding of $W(n_k)$ into $W(n_{k+1})$ descends to a (unital) embedding from $M_{n_k}(C(\mathbb{T}))$ into $M_{n_{k+1}}(C(\mathbb{T}))$. This is the connecting map $\beta_k$ from the definition of the Bunce–Deddens algebra that we need to analyze. For simplicity, assume $n_k = n$ and $n_{k+1} = 2n_k$. The image of the above operator $T \in W(n)$ under the natural embedding is the following $2n \times 2n$ operator matrix in $W(2n)$:

$$T \;\mapsto\; \begin{bmatrix}
0 & & & & & & T_z \\
\tfrac{1}{2}I & 0 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & \tfrac{1}{2}I & 0 & & & \\
 & & & I & 0 & & \\
 & & & & \tfrac{1}{2}I & \ddots & \\
 & & & & & \tfrac{1}{2}I & 0
\end{bmatrix}.$$

Therefore, the action of $\beta_k$ on the generator is

$$\beta_k(\tilde{T}) = \begin{bmatrix}
0 & & & & & & z \\
\tfrac{1}{2} & 0 & & & & & \\
 & \ddots & \ddots & & & & \\
 & & \tfrac{1}{2} & 0 & & & \\
 & & & 1 & 0 & & \\
 & & & & \tfrac{1}{2} & \ddots & \\
 & & & & & \tfrac{1}{2} & 0
\end{bmatrix}.$$

A computation with matrix units yields that $\beta_k(E_{ij}) = E_{ij} \otimes I_2$ and $\beta_k(z\,E_{11}) = E_{11} \otimes Z_2$, where

$$Z_2 = \begin{bmatrix} 0 & z \\ 1 & 0 \end{bmatrix} \in M_2(C(\mathbb{T})).$$

So $\beta_k(f_{ij}(z)) = f_{ij}(Z_2)$. Definition and basic properties: In this particular instance, $\beta_k$ is called a twice-around embedding. The reason for the terminology is as follows: as $z$ varies on the circle, the eigenvalues of $Z_2$ trace out the two disjoint arcs connecting 1 and −1. An explicit computation of eigenvectors shows that the circle of unitaries implementing the diagonalization of $Z_2$ in fact connects the beginning and end points of each arc. So in this sense the circle gets wrapped around twice by $Z_2$. In general, when $n_{k+1} = m \cdot n_k$, one has a similar $m$-times-around embedding. K-theory and classification: Bunce–Deddens algebras are classified by their K0 groups. Because all finite-dimensional vector bundles over the circle are homotopically trivial, the K0 of $M_r(C(\mathbb{T}))$, as an ordered abelian group, is the integers $\mathbb{Z}$ with canonical order unit $r$. According to the above calculation of the connecting maps, given a supernatural number $\{n_k\}$, the K0 of the corresponding Bunce–Deddens algebra is precisely the corresponding dense subgroup of the rationals $\mathbb{Q}$. K-theory and classification: Since it follows from the definition that two Bunce–Deddens algebras with the same supernatural number (in the sense that the two supernatural numbers formally divide each other) are isomorphic, K0 is a complete invariant of these algebras. It also follows from the previous section that the K1 group of any Bunce–Deddens algebra is $\mathbb{Z}$. As a crossed product: C*-crossed product A C*-dynamical system is a triple $(A, G, \sigma)$, where $A$ is a C*-algebra, $G$ a group, and $\sigma$ an action of $G$ on $A$ via C*-automorphisms. A covariant representation of $(A, G, \sigma)$ is a representation $\pi$ of $A$ and a unitary representation $t \mapsto U_t$ of $G$ on the same Hilbert space, such that

$$U_t\, \pi(a)\, U_t^{*} = \pi(\sigma_t(a)) \quad \text{for all } a, t.$$

As a crossed product: Assume now $A$ is unital and $G$ is discrete. The (C*-)crossed product given by $(A, G, \sigma)$, denoted by $A \rtimes_\sigma G$, is defined to be the C*-algebra with the following universal property: for any covariant representation $(\pi, U)$, the C*-algebra generated by its image is a quotient of $A \rtimes_\sigma G$. As a crossed product: Odometer action on Cantor set The Bunce–Deddens algebras are in fact crossed products of the Cantor set with a natural action by the integers $\mathbb{Z}$. Consider, for example, the Bunce–Deddens algebra of type $2^\infty$. Write the Cantor set $X$ as the space of sequences of 0's and 1's, $X = \prod_{1}^{\infty} \{0,1\}$, with the product topology. Define a homeomorphism $\alpha : X \to X$ by $\alpha(x) = x + (1, 0, 0, \ldots)$, where $+$ denotes addition with carryover.
This is called the odometer action. The homeomorphism α induces an action on C(X) by pre-composition with α. The Bunce–Deddens algebra of type 2∞ is isomorphic to the resulting crossed product.
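The odometer map described above is easy to experiment with on finite truncations of the sequence space. The sketch below uses the convention of adding 1 in the first coordinate and carrying to the right (one reading of "addition with carryover"); the truncation length is arbitrary, and a genuinely infinite sequence is of course only approximated.

```python
def odometer(x: tuple) -> tuple:
    """One step of the 2-adic odometer on a finite truncation of the Cantor
    set {0,1}^N: add 1 in the first coordinate and carry to the right. If
    every digit is 1, the truncation rolls over to all zeros (the carry
    leaves the finite window)."""
    digits = list(x)
    for i in range(len(digits)):
        if digits[i] == 0:
            digits[i] = 1
            break
        digits[i] = 0           # 1 + 1 = 0, carry to the next coordinate
    return tuple(digits)

# Orbit of the all-zero point: every word of length 3 appears exactly once
# before the orbit returns, a finite shadow of the minimality of the action.
x = (0, 0, 0)
for _ in range(8):
    print(x)
    x = odometer(x)
```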
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Time Machine (macOS)** Time Machine (macOS): Time Machine is the backup mechanism of macOS, the desktop operating system developed by Apple. The software is designed to work with both local storage devices and network-attached disks, and is most commonly used with external disk drives connected using either USB or Thunderbolt. It was first introduced in Mac OS X 10.5 Leopard, which appeared in October 2007 and incrementally refined in subsequent releases of macOS. Time Machine was revamped in macOS 11 Big Sur to support APFS, thereby enabling "faster, more compact, and more reliable backups" than were possible previously. Overview: Time Machine creates incremental backups of files that can be restored at a later date. It allows the user to restore the whole system or specific files. It also works within a number of applications such as Mail and iWork, making it possible to restore individual objects (e.g. emails, contacts, text documents, presentations) without leaving the application. According to an Apple support statement: “Time Machine is a backup utility, not an archival utility, it is not intended as offline storage. Time Machine captures the most recent state of your data on your disk. As snapshots age, they are prioritized progressively lower compared to your more recent ones.” For backups to a network drive, Time Machine allows the user to back up Mac computers over the network, and supports backing up to certain network attached storage devices or servers, depending on the version of Time Machine. Earlier versions worked with a wide variety of NAS servers, but later versions require the server to support a recent version of Apple's Apple Filing Protocol (AFP) or a recent version of the Server Message Block (SMB) protocol, and Time Machine no longer works with servers using earlier versions of SMB. Some of the legacy support can be re-enabled by using hand-tuned configuration options, accessed through the Terminal. Apple's Time Capsule, which was introduced in 2008 and discontinued in 2018, acted as a network storage device specifically for Time Machine backups, allowing both wired and wireless backups to the Time Capsule's internal hard drive. Time Machine may also be used with other external or internal volumes. Overview: Time Machine saves hourly backups for the past 24 hours, daily backups for the past month, and weekly backups for everything older than a month until the volume runs out of space. At that point, Time Machine deletes the oldest weekly backup. Revamp in macOS Big Sur: Time Machine was overhauled in macOS 11 Big Sur to utilize APFS, Apple's modern file system first introduced in 2016. Specifically, the new version of Time Machine makes use of APFS's snapshot technology. According to Apple, this enables "faster, more compact, and more reliable backups" than were possible previously with HFS+-formatted drives. An independent evaluation of this claim found that macOS 11's Time Machine implementation in conjunction with APFS was 2.75-fold faster upon initial local backup and 4-fold faster upon subsequent backups relative to macOS 10.15's Time Machine implementation using HFS+. A more modest yet nevertheless significant advantage was noted as well for backups to network-attached disks.New local (i.e. USB- or Thunderbolt-connected) and network-connected Time Machine backup destinations are formatted as APFS by default, though Time Machine can continue backing up to existing HFS+ backup volumes." 
There is no option to convert existing, HFS+-based backups to APFS; instead, users who want to benefit from the advantages of the new, APFS-based implementation of Time Machine need to start with a fresh volume.At least in some circumstances, encryption appears to be required (instead of merely optional) in the new version of Time Machine. User interface: Time Machine's user interface when retrieving a file uses Apple's Core Animation API. Upon its launch, Time Machine "floats" the active Finder or application window from the user's desktop to a backdrop depicting the user's blurred desktop wallpaper. Behind the current active window are stacked windows, with each window representing a snapshot of how that folder or application looked on the given date and time in the past. When toggling through the previous snapshots, the stacked windows extend backwards, giving the impression of flying through a "time tunnel." While paging through these "windows from the past", a previous version of the data (or currently deleted data) may be retrieved. Storage: Time Machine works with locally connected storage disks, which must be formatted in the APFS or HFS+ volume formats. Support for backing up to APFS volumes was added with macOS 11 Big Sur and since then APFS is the default volume format. Storage: Time Machine also works with remote storage media shared from other systems, including Time Capsule, via the network. When using remote storage, Time Machine uses sparse bundles. This acts as an isolation layer, which makes the storage neutral to the actual file system used by the network server, and also permits the replication of the backup from one storage medium to another. Sparse bundles are mounted by macOS like any other device, presenting their content as a HFS+ formatted volume, functionally similar to a local storage. Storage: Requirements Time Machine places strict requirements on the backup storage medium. The only officially supported configurations are: A storage drive or partition connected directly to the computer, either internally or by a bus like USB or Thunderbolt and formatted as APFS or journaled HFS+. If the volume format is not correct, Time Machine will prompt the user to reformat it. Storage: A folder on another Mac on the same network. A drive shared by an Apple Time Capsule on the same network. Storage: A drive connected to an Apple AirPort Extreme 802.11ac model on the same network. (Earlier generations of the AirPort Extreme are not supported.) Local network volumes connected using the Apple Filing Protocol or via an SMB3 share that advertises a number of capabilities.On a Time Capsule, the backup data is stored in an HFS+ disk image and accessed via Apple Filing Protocol. Although it is not officially supported, users and manufacturers have also configured FreeBSD and Linux servers and network-attached storage systems to serve Time Machine-enabled Macs. There are also a few software tools available on the market that can copy files inside Time Machine backups in Windows machines. Operation: Time Machine creates a folder on the designated Time Machine volume (local or inside a remote sparse image) into which it copies the directory tree of all locally attached storage drives, except for files and directories that the user has specified to omit, including the Time Machine volume itself. 
Every hour thereafter, it creates a new subordinate folder and copies only files that have changed since the last backup and creates (in the case of HFS+ volumes) hard links to files that already exist on the backup drive. A user can browse the directory hierarchy of these copies as if browsing the primary disk.Some other backup utilities save deltas for file changes, much like version control systems. Such an approach permits more frequent backups of minor changes, but can often complicate the interaction with the backup volume. By contrast, it is possible to manually browse a Time Machine backup volume without using the Time Machine interface; Time Machine presents each backup to the user as a complete disk copy.Time Machine on HFS+ volumes creates multiple hard links to unmodified directories. Multiple linking of directories is a peculiar feature for HFS+, and is not supported on modern Unix file systems including Apple's own APFS. As a result, tools like rsync cannot be used to replicate a Time Machine volume; replication can only reliably be done by imaging the entire filesystem. Operation: Apple system events record when each directory is modified on the hard drive. This means that instead of examining every file's modification date when it is activated, Time Machine only needs to scan the directories that changed for files to copy. This differs from the approach taken by similar backup utilities rsync and FlyBack, which examine modification dates of all files during backup. Operation: Time Machine is also available in the macOS installation process. One of the features in the Migration Assistant interface is to restore the contents of a Time Machine backup. In other words, a hard drive can be restored from a Time Machine backup in the event of a catastrophic crash. OS X Mountain Lion introduced the ability to use multiple volumes simultaneously for Time Machine operations. When the user specifies more than one volume to use, macOS rotates among the desired volumes each time it does a backup. Exclusion: Time Machine supports two forms of exclusion: one based on a user-configured list of paths (plus a set of system defaults), the other based on the extended file attribute com.apple.metadata:com_apple_backup_excludeItem dependencies. Since the attribute is applied to the file or directory directly, moving or copying will not affect the exclusion. The attribute should contain the string com.apple.backup in any property list format. Writing com.apple.MobileBackup instead sets the exclusion for iOS backups.Google Chrome is known to use the attribute to exclude its histories. Third-party backup applications that respect this setting include CrashPlan and Arq. Apple wraps the attribute into the tmutil command-line utility as well as a CoreServices API.
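The exclusion mechanics described above can be scripted. Below is a minimal, illustrative sketch (not Apple's implementation) that shells out from Python to the real tmutil and xattr utilities mentioned in the article; the helper names, the exact output format checked for, and the example path are assumptions for illustration.

```python
import subprocess

def add_sticky_exclusion(path: str) -> None:
    """Ask Time Machine to skip `path` using the sticky, attribute-based
    exclusion (stored on the item itself, so it survives moves and copies)."""
    subprocess.run(["tmutil", "addexclusion", path], check=True)

def is_excluded(path: str) -> bool:
    """Check whether Time Machine currently excludes `path`.
    `tmutil isexcluded` is assumed to print a line starting with [Excluded] or [Included]."""
    out = subprocess.run(["tmutil", "isexcluded", path],
                         capture_output=True, text=True, check=True).stdout
    return out.strip().startswith("[Excluded]")

def show_exclusion_attribute(path: str) -> str:
    """Dump the raw extended attribute described in the article, if present."""
    out = subprocess.run(
        ["xattr", "-p", "com.apple.metadata:com_apple_backup_excludeItem", path],
        capture_output=True, text=True)
    return out.stdout.strip()

if __name__ == "__main__":
    target = "/Users/example/Downloads"   # illustrative path only
    add_sticky_exclusion(target)
    print(is_excluded(target))
    print(show_exclusion_attribute(target))
```

Because the attribute travels with the file, a tool written this way does not need to track renames; the path-list form of exclusion, by contrast, would break if the excluded item were moved.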
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mundane reason** Mundane reason: The basic premise of the concept of mundane reason is that the standard assumptions about reality that people typically make as they go about day to day, including the very fact that they experience their reality as perfectly natural, are actually the result of social, cultural, and historical processes that make a particular perception of the world readily available. It is the reasoning about the world, self, and others which presupposes the world and its relationship to the observer; according to Steven Shapin (Shapin 1994:31), it is a set of presuppositions about the subject, the object, and the nature of their relations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aseptic processing** Aseptic processing: Aseptic processing is a processing technique wherein commercially thermally sterilized liquid products (typically food or pharmaceutical) are packaged into previously sterilized containers under sterile conditions to produce shelf-stable products that do not need refrigeration. Aseptic processing has almost completely replaced in-container sterilization of liquid foods, including milk, fruit juices and concentrates, cream, yogurt, salad dressing, liquid egg, and ice cream mix. There has been an increasing popularity for foods that contain small discrete particles, such as cottage cheese, baby foods, tomato products, fruit and vegetables, soups, and rice desserts.Aseptic processing involves three primary steps: thermal sterilization of the product, sterilization of the packaging material, and conservation of sterility during packaging. To ensure commercial sterility, aseptic processing facilities are required to maintain proper documentation of production operations, showing that commercially sterile conditions were achieved and maintained in all areas of the facility. Any breach of a scheduled process for the processing or packaging system means that the affected product must be destroyed, reprocessed or segregated and held for further evaluation. In addition, the processing and packaging system must be cleaned and re-sterilized before processing and/or packaging operations can resume. Packaging equipment and packaging materials are sterilized with various media or combinations thereof (i.e., saturated steam, superheated steam, hydrogen peroxide and heat and other treatments). Historical development in foods: Aseptic processing was derived from Olin Ball's heat-cool-fill (HCF) machine that was developed in 1927. While HCF was successful in improving the sensory quality of the processed chocolate milk as compared to canned product, the use of the equipment was hindered by its cost, maintenance, and inflexibility to process various container sizes, rendering the machine a failure.In the 1940s, the Avoset process was developed by George Grindrod. Food products processed using the Avoset process were packaged under ultraviolet lamps and sterilized air inside a positive-pressurized room to keep the contaminants out of the processing room. Sterilization was achieved through the use of direct steam injection of 126–137 °C (260–280 °F) and then cooled. The food treated using this technique was described as an "excellent cream product" and 75–100 containers were produced each minute.Later in the 1940s, the Dole Aseptic Process was developed by McKinley Martin. The foods processed ranged from soups to specialty sauces, fruits, and dairy products. This process involved four steps: Sterilization of product by heating and immediate cooling Sterilization of containers and lids using steam Filling of cooled products aseptically into previously sterilized containers Sealing of lids at an atmosphere of saturated or super heated steam The Dole aseptic machine overcame the hindrances that caused HCF's failure, since it was able to process various container sizes, needed less maintenance time and cost less. The quality of products processed was consistent regardless of container size, an important characteristic for heat sensitive foods, due to its short processing time. 
Split pea soup was treated using the Dole aseptic machine at the following dosage: heat time of 140–146 °C (280–290 °F) for 3.53 seconds, hold time of 8.8 seconds, and cooling to 32 °C (90 °F) in 14.0 – 17.0 seconds, compared to the normal processing time of 40–70 minutes at 115–121 °C (240–250 °F). The lack of consumer interest drove foods that were processed in the Dole aseptic machine to be discontinued. Roy Graves began sterilizing milk in the 1940s. The milk that was drawn from the cow went through a pipeline, into a vacuum tank, which was then heated to 285 °F, then cooled to room temperature. The product, packaged in metal cans, was widely accepted by consumers lacking access to fresh milk, including the U.S. military.In 1959, the food industry saw the advent of the use of paper-foil-plastic laminated containers called tetrahedron. In 1962, the Swedish company Tetra Pak, introduced this container to the United States market. They sold pasteurized milk and beverages in the containers. Roy Graves' company started sterilizing this container with chlorine and were able to aseptically fill and hermetically seal the container. The use of these containers was not accepted by the American consumers due to their difficulty in opening. It was widely used by the U.S. Navy.In 1981, hydrogen peroxide was approved by the FDA to be used to sterilize containers.Today, ships used for continental food transport are equipped with aseptic tanks to transport fruit juices. Another means of transporting aseptically processed food is the use of aseptic bags. Processing: Aseptic processing allows for the food to be properly sterilized outside the container and then placed into a previously sterilized container, which is then sealed in a sterile environment. Most systems use ultra-high temperature (UHT) sterilization to sterilize the food product before it is packaged. UHT sterilizes food at high temperatures usually above 135 C for 1–2 seconds. This is advantageous because it allows for faster processing, usually a few seconds at high temperatures (130–150 °C) and better retention of sensory and nutritional characteristics. Aseptic products have a non-refrigerated shelf-life of a few months to several years. Processing: Sterilization of aseptic packaging material is a crucial step in aseptic food processing. These containers are sterilized to kill microorganisms present on the container during forming and transport and prior to filling. There are numerous methods used to sterilize the containers, the most commonly used methods include: heat, hot water, chemical sterilants (hydrogen peroxide or peracetic acid), and radiation or a combination of methods.Aseptically processed food products can be sterilized using either direct or indirect methods of heat transfer. Direct heat transfer can be achieved through steam injection and steam infusion. Food products processed with a steam injector go through an injection chamber, where steam (150 °C) is injected into the product, then the product is flash cooled to 70 °C. Direct heat transfer is suitable for heat-sensitive foods such as milk. However, only low viscosity liquids can be processed using steam injection, and high-quality steam is required to ensure sterilization. Steam infused food products involves food free-falling into highly pressurized steam which heats the food to approximately 145 °C and then its flash cooled to 65–70 °C. 
Steam infusion provides processors with greater control than steam injection, and the risks of burn-on and overheating are reduced. It can process higher viscosity foods compared to steam injection, but risks the blockage of nozzles in machinery. Indirect forms of heat transfer include plate heat exchangers, tubular heat exchangers, and scraped-surface heat exchangers. Plate heat exchangers are mostly used because they are inexpensive and allow for easy changes during production. Tubular and scraped-surface exchangers can heat viscous food with particulates or high pulp content with minimal damage. Processing: Equipment and systems Equipment used in aseptic processing of food and beverages must be sterilized before processing and remain sterile during processing. When designing aseptic processing equipment there are six basic requirements to consider: the equipment must have the capability of being cleaned thoroughly; it must be able to be sterilized with steam, chemicals, or high-temperature water; sterilization media should be able to contact all surfaces of the equipment, meaning the equipment does not contain any cracks, crevices or dead spots; the equipment must be able to be kept in a sterile state; it must have the ability to be used continuously; and lastly, the equipment must comply with regulations. Aseptic packaging systems are generally placed in the following categories: fill, erect, form, thermoform, blow mold, and bulk packaging and storage systems. Processing: Fill and seal. The containers are filled and sealed in a sterile environment to avoid contamination. Erect, fill and seal. A plastic container is erected, then sterilized, filled and sealed. Form, fill and seal. In this system, a roll of film is first sterilized. After sterilization it is formed into the desired shape, filled and sealed. Thermoform, fill and seal. A roll of film is heated and thermoformed on a sterile surface or in a sterile environment. It is then filled and sealed, also in a sterile environment. Blow mold, fill and seal. The process requires an extrudable material to be first blow-molded into a sterile package before filling and sealing. This process is usually used to produce bottle products like juices and sodas. Bulk packaging and storage systems. Packaging used for bulk storage (drums, totes, bags, etc.) is sterilized using either heat or disinfectants. After sterilization the containers are able to be filled and sealed. Packaging material: Aseptic packaging consists of filling and sealing a sterilized packaging material with a sterilized product. Aseptic packaging material not only has to assure sterile conditions within the package and protect the product from physical damage, but also maintain the quality of the product inside the packaging. To achieve this, a laminate material is formed from the following components: semi-rigid paper, aluminum, and plastic. Paper (70%) provides the stiffness, strength, and the efficient brick shape of the package; the potential for bacteria in the paper layer needs to be addressed. Low-density polyethylene (24%), the most common plastic used for aseptic packaging, is located on the innermost layer and forms the seals that make the package liquid-tight. Aluminum (6%) is located on the inside of the aseptic package, forming a barrier against light and oxygen, thereby eliminating the need for refrigeration and preventing spoilage without using preservatives.
Most packaging material used in aseptic packaging is made from plastics instead of metal or glass containers due to the relatively low cost of producing plastic material when compared to metal and glass. Plastics are lighter than metal or glass making them cheaper and easier to transport. Plastics also required much less energy to produce than metal and glass. These factors have made plastic the packaging material of choice for use in aseptic processing. Packaging material: Selection of aseptic containers There are a lot of factors that can influence the type of aseptic container chosen for a product. The following factors may influence the choice of packaging material for aseptically processed products: functional properties of the plastic polymer (gas and water vapor barrier properties, chemical inertness, and flavor and odor absorption or scalping), potential interactions between plastic polymer and food product, desired shelf life, economical costs, mechanical characteristics of the packaging material (molding properties, material handling characteristics, and compatibility with packaging and sterilization methods), shipping and handling conditions (toughness, compression), compliance with regulation, and targeted consumer group.There are a range of different types of containers to choose from depending on the product. The table below offers a few container types and examples. Effects on food quality: Aseptic processing preserves food quality through fast heat treatment followed by a short holding time and rapid cooling. Compared to canning where food products are subjected to high temperature processing, the fast heat treatment provided by aseptic processing enables heat-sensitive characteristics of the food to be better retained. Flavor The flavor of aseptically processed food products is minimally changed. Dairy products could have a cooked flavor because of exposure to sulfhydryl groups. The flavor is reduced during storage as the sulfhydryl groups oxidize. Severely treated milk could have a bitter flavor because of proteolysis. Color Dairy products could have changes in color, an effect caused by Maillard browning. This depends on the amount of reducing sugar, the formation of pyralysins and melanoidins, the severity of the treatment, and the storage temperature.Plant pigments, carotene and betanin, are not affected, while chlorophyll and anthocyanins are minimally reduced. Texture Meat is less likely to toughen when aseptically processed, compared to canned products.Fruit juice viscosity is unaffected. Processed sliced fruit and vegetable pieces are softer compared to unprocessed pieces as a result of the solubilization of pectic materials and loss of cell turgor. Effects on food quality: Nutritional value Aseptic Processing achieves sterility through a flash-heating process with temperatures ranging from 91 °C to 146 °C and is minimally processed. Due to the significantly lower processing time and temperature range used in aseptic processing compared to conventional sterilization, such as canning, products that are aseptically processed are able to retain more nutrients. Riboflavin, pantothenic acid, biotin, niacin, and vitamin B6 are unaffected. Approximately 10% of thiamine and vitamin B12, approximately 15% of folic acid and pyridoxine, and approximately 25% of vitamin C are lost during aseptic processing. 
Advantages and limitations: Advantages Foods that are processed aseptically have better nutritional, vitamin, and natural pigment retention (chlorophyll, anthocyanins, betalains, carotenoids) compared to canned food products because of the lower temperature the foods are subjected to during processing. Aseptic processing provides flexibility in using various container sizes, as well as the possibility of adding bioactive and heat-sensitive components after processing (probiotics, omega-3 fatty acids, conjugated linoleic acids). Advantages and limitations: Limitations Aseptic processing costs more than canning because sterilization of the packaging materials requires different machinery and can be complex. In addition, maintaining air sterility in the processing room is difficult. FDA inspection and regulation for aseptic processing: Inspection of aseptic processing is one of the most complex inspections of food manufacturing operations. Process authorities are required to establish a process that ensures commercial sterility for the following: the product; all equipment, including the hold tube and any equipment downstream from the holding tube, such as the filler; the packaging equipment; and the packaging material. Documentation of production operations must be maintained by the facility, showing that commercially sterile conditions were achieved in all areas of the facility. The general regulatory requirements for all U.S. Food and Drug Administration (FDA) regulated foods are found in Title 21 of the U.S. Code of Federal Regulations (CFR), Part 117. Section 113.40 lists specific requirements for aseptic processing and packaging systems, including specifications for equipment and instrumentation. One requirement of the FDA regulations is that all thermal processing operations must be conducted under the operating supervision of an individual who has completed an FDA-approved course of instruction on control of thermal processing systems, container closures, and acidification procedures. The Better Process Control School provides a section on aseptic processing and packaging systems, and will meet the FDA requirement for supervisors of aseptic operations. Processing authorities responsible for aseptic systems must be aware of certain factors unique to aseptic processing and packaging operations; therefore specific knowledge in this area is essential. Neither the FDA nor any other regulatory agency maintains a list of recognized processing authorities; however, certain organizations are widely recognized within government agencies and the industry as having the experience and expertise. The FDA regulations rely upon aseptic processing and packaging authorities to establish parameters for sterilization of product, packages, and equipment so that commercial sterility of the end product is assured. The form presently used to file aseptic processes for low-acid foods with the FDA is Form 2541c. Processes for acidified foods that are aseptically processed and packaged are filed under Form 2541a. Additionally, processing plants must be registered with the FDA using Form 2541. The FDA has also developed a Low-acid Canned Food (LACF) Electronic Process Filing System that facilitates the completion and submission of the forms. The FDA does exert authority over the types of aseptic processing and packaging systems that can be utilized to produce foods for distribution in U.S. commerce by reviewing and either accepting or rejecting process filing forms from individual processing firms.
The FDA may request sufficient technical information from the processor to evaluate adequacy of the equipment and the procedures used to produce a commercially sterile product. Until the FDA finds no further objections to a process filing, the company is prevented from distributing product produced on that system in interstate commerce.Final aseptic products must undergo an incubation test before the product is released into distribution. The firm must determine the time and temperature of incubation as well as how many containers are incubated. It is generally accepted to incubate at 20–25 °C for a minimum of 7 days followed immediately, or after a first reading, by incubation at 30–35 °C for a total minimum incubation time of 14 days. Other incubation schedules should be based on supporting validation data. It is important to note that prior to incubation, the containers with the microbial growth medium must be inverted to ensure all surfaces are thoroughly wetted by the medium.The FDA relies on periodic inspections of processing plants to monitor compliance with its regulatory requirements. Inspection frequency for an individual plant may vary significantly depending upon products packed, occurrence of potential hazardous processing problems at the plant, and availability of FDA inspection personnel.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Methylarsonic acid** Methylarsonic acid: Methylarsonic acid is an organoarsenic compound with the formula CH3AsO3H2. It is a colorless, water-soluble solid. Salts of this compound, e.g. disodium methyl arsonate, have been widely used as herbicides and fungicides in growing cotton and rice. Reactions: Near physiological pH, methanearsonic acid converts to its conjugate bases, the methylarsonates. These include CH3AsO3H− and CH3AsO3²−. Synthesis and biosynthesis: Reaction of arsenous acid with methyl iodide gives methylarsonic acid. This historically significant conversion is called the Meyer reaction: As(OH)3 + CH3I + NaOH → CH3AsO(OH)2 + NaI + H2O. The then-novel aspect of the reaction was that alkylation occurs at arsenic, leading to oxidation of arsenic from oxidation state +3 to +5. The biomethylation of arsenic compounds is thought to start with the formation of methanearsonates. Thus, trivalent arsenic compounds are methylated to give methanearsonate. S-Adenosylmethionine is the methyl donor. The methanearsonates are the precursors to cacodylates, again by the cycle of reduction (to methylarsonous acid) followed by a second methylation. Safety: Like most arsenic compounds, it is highly toxic.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gomo (video game)** Gomo (video game): Gomo is a video game developed by Slovak company Fishcow Studio. It is a Point-and-click adventure game. Gameplay: The game is a typical point-and-click adventure game with hand-drawn graphics. It also includes some puzzles to solve. The game is inspired by Amanita Design titles such as Machinarium. It also means that the story is told without words. Plot: The game follows Gomo whose dog Dingo is kidnapped by an alien. The alien demands a mystical red crystal artifact that is imbued with power. Reception: The game received middling reviews. It holds 50% on Metacritic. The game was criticised for its length, low difficulty, story and a low diversity of worlds. On the other hand, it was praised for its visuals.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radiosity (radiometry)** Radiosity (radiometry): In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiosity is the watt per square metre (W/m2), while that of spectral radiosity in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral radiosity in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (W·m−2·nm−1). The CGS unit erg per square centimeter per second (erg·cm−2·s−1) is often used in astronomy. Radiosity is often called intensity in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity. Mathematical definitions: Radiosity Radiosity of a surface, denoted Je ("e" for "energetic", to avoid confusion with photometric quantities), is defined as Je = ∂Φe/∂A = Je,em + Je,r + Je,tr, where ∂ is the partial derivative symbol, Φe is the radiant flux leaving (emitted, reflected and transmitted), A is the area, Je,em = Me is the emitted component of the radiosity of the surface, that is to say its exitance, Je,r is the reflected component of the radiosity of the surface, and Je,tr is the transmitted component of the radiosity of the surface. For an opaque surface, the transmitted component of radiosity Je,tr vanishes and only two components remain: Je = Me + Je,r. Mathematical definitions: In heat transfer, combining these two factors into one radiosity term helps in determining the net energy exchange between multiple surfaces. Spectral radiosity Spectral radiosity in frequency of a surface, denoted Je,ν, is defined as Je,ν = ∂Je/∂ν, where ν is the frequency. Spectral radiosity in wavelength of a surface, denoted Je,λ, is defined as Je,λ = ∂Je/∂λ, where λ is the wavelength. Radiosity method: The radiosity of an opaque, gray and diffuse surface is given by Je = Me + Je,r = εσT⁴ + (1 − ε)Ee, where ε is the emissivity of that surface, σ is the Stefan–Boltzmann constant, T is the temperature of that surface, and Ee is the irradiance of that surface. Normally, Ee is the unknown variable and will depend on the surrounding surfaces. So, if some surface i is being hit by radiation from some other surface j, then the radiation energy incident on surface i is Ee,ji Ai = Fji Aj Je,j, where Fji is the view factor, or shape factor, from surface j to surface i. So, the irradiance of surface i is the sum of radiation energy from all other surfaces per unit area Ai: Ee,i = (1/Ai) ∑j=1..N Fji Aj Je,j. Radiosity method: Now, employing the reciprocity relation for view factors, Fji Aj = Fij Ai, this becomes Ee,i = ∑j=1..N Fij Je,j, and substituting the irradiance into the equation for radiosity produces Je,i = εiσTi⁴ + (1 − εi) ∑j=1..N Fij Je,j. For an N surface enclosure, this summation for each surface will generate N linear equations with N unknown radiosities, and N unknown temperatures. For an enclosure with only a few surfaces, this can be done by hand. But, for a room with many surfaces, linear algebra and a computer are necessary. Once the radiosities have been calculated, the net heat transfer Q̇i at a surface can be determined by finding the difference between the incoming and outgoing energy: Q̇i = Ai (Je,i − Ee,i).
Using the equation for radiosity, Je,i = εiσTi⁴ + (1 − εi)Ee,i, the irradiance can be eliminated from the above to obtain Q̇i = [Aiεi/(1 − εi)] (σTi⁴ − Je,i) = [Aiεi/(1 − εi)] (Me,i° − Je,i), where Me,i° is the radiant exitance of a black body. Circuit analogy: For an enclosure consisting of only a few surfaces, it is often easier to represent the system with an analogous circuit rather than solve the set of linear radiosity equations. To do this, the heat transfer at each surface is expressed as Q̇i = (Me,i° − Je,i)/Ri, where Ri = (1 − εi)/(Aiεi) is the resistance of the surface. Likewise, Me,i° − Je,i is the blackbody exitance minus the radiosity and serves as the 'potential difference'. These quantities are formulated to resemble those from an electrical circuit V = IR. Now performing a similar analysis for the heat transfer from surface i to surface j, Q̇ij = Ai Fij (Je,i − Je,j) = (Je,i − Je,j)/Rij, where Rij = 1/(Ai Fij). Because the above is between surfaces, Rij is the resistance of the space between the surfaces and Je,i − Je,j serves as the potential difference. Combining the surface elements and space elements, a circuit is formed. The heat transfer is found by using the appropriate potential difference and equivalent resistances, similar to the process used in analyzing electrical circuits. Other methods: In the radiosity method and circuit analogy, several assumptions were made to simplify the model. The most significant is that the surface is a diffuse emitter. In such a case, the radiosity does not depend on the angle of incidence of reflecting radiation and this information is lost on a diffuse surface. In reality, however, the radiosity will have a specular component from the reflected radiation. So, the heat transfer between two surfaces relies on both the view factor and the angle of reflected radiation. Other methods: It was also assumed that the surface is a gray body, that is to say its emissivity is independent of radiation frequency or wavelength. However, if the range of the radiation spectrum is large, this will not be the case. In such an application, the radiosity must be calculated spectrally and then integrated over the range of the radiation spectrum. Yet another assumption is that the surface is isothermal. If it is not, then the radiosity will vary as a function of position along the surface. However, this problem is solved by simply subdividing the surface into smaller elements until the desired accuracy is obtained.
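As a concrete illustration of the radiosity method described above, the sketch below (an illustrative example, not part of the original article) assembles and solves the N linear equations Je,i = εiσTi⁴ + (1 − εi) ∑j Fij Je,j with NumPy, then recovers the net heat transfer at each surface from Q̇i = Ai (Je,i − Ee,i).

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def solve_radiosity(F, eps, T, A):
    """Solve the gray, diffuse, opaque enclosure radiosity equations.

    F   : (N, N) view-factor matrix, F[i, j] = fraction of radiation leaving i that reaches j
    eps : (N,) surface emissivities
    T   : (N,) surface temperatures in kelvin
    A   : (N,) surface areas in m²
    Returns (J, Q): radiosities in W/m² and net heat transfer rates in W.
    """
    F, eps, T, A = map(np.asarray, (F, eps, T, A))
    n = len(eps)
    Me = eps * SIGMA * T**4                              # emitted part, ε σ T⁴
    # J = Me + (1 - ε) F J   ->   (I - diag(1 - ε) F) J = Me
    J = np.linalg.solve(np.eye(n) - np.diag(1 - eps) @ F, Me)
    E = F @ J                                            # irradiance of each surface
    Q = A * (J - E)                                      # net heat leaving each surface
    return J, Q

# Example: two large parallel plates facing each other (F12 = F21 = 1).
F = [[0.0, 1.0],
     [1.0, 0.0]]
J, Q = solve_radiosity(F, eps=[0.8, 0.5], T=[600.0, 300.0], A=[1.0, 1.0])
print(J, Q)
```

For the two-plate example this reproduces the textbook result q = σ(T1⁴ − T2⁴)/(1/ε1 + 1/ε2 − 1), roughly 3.06 kW/m² with the values shown, which is a quick consistency check on the assembled system.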
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**NE-100** NE-100: NE-100 or 4-methoxy-3-(2-phenylethoxy)-N,N-dipropylbenzeneethanamine is a selective sigma-1 receptor antagonist, with a reported binding affinity of Ki = 1.03 ± 0.01 nM, and more than 205 times selectivity over the sigma-2 receptor.NE-100 was one of the earliest selective sigma-1 receptor ligands reported and has been widely used as a pharmacological tool. The original, eight step synthesis of NE-100 was reported by Atsuro Nakazato and colleagues of Taisho Pharmaceutical Company in 1999. More recently, Michael Kassiou and co-workers have reported a more expedient synthesis of NE-100 that proceeds in 56% unoptimized yield over 4 steps.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Babcock Model** Babcock Model: The Babcock Model describes a mechanism which can explain magnetic and sunspot patterns observed on the Sun. History: The modern understanding of sunspots starts with George Ellery Hale, who linked magnetic fields and sunspots. Hale suggested that the sunspot cycle period is 22 years, covering two polar reversals of the solar magnetic dipole field. Horace W. Babcock proposed in 1961 a qualitative model for solar dynamics. On the largest scale, the Sun supports an oscillatory magnetic field, with a quasi-steady periodicity of 22 years. This oscillation is known as the Babcock-Leighton dynamo cycle, amounting to the oscillatory exchange of energy between poloidal and toroidal solar magnetic field ingredients. Babcock-Leighton dynamo cycle: A half dynamo cycle corresponds to a single sunspot solar cycle. At solar maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convective zone forces emergence of a toroidal magnetic field through the photosphere, giving rise to patches of concentrated magnetic field corresponding to sunspots. Babcock-Leighton dynamo cycle: During the solar cycle’s declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number. At solar-cycle minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are few in number, and the poloidal field is at its maximum strength. With the rise of the next 11 year sunspot cycle, magnetic energy shifts back from the poloidal to the toroidal field, but with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11 year sunspot cycle corresponds to a change in the overall polarity of the Sun's large-scale magnetic field.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Webcal** Webcal: Webcal is a uniform resource identifier (URI) scheme for accessing iCalendar files. WebCal allows you to create and maintain an interactive events calendar or scheduling system on a Web site or app.The webcal scheme was devised for use with the Apple iCal application and has become a common de facto standard for accessing iCalendar formatted files via WebDAV, usually using GET method. It is not an official URI scheme, such as http and ftp, as registered with IANA. As of 23 September 2012, the webcal scheme has provisional status with IANA. The Webcal protocol prefix is used to trigger an external protocol handler which is passed the URL of the .ics file rather than being passed the downloaded contents of the file, in much the same way feed is sometimes used to trigger external RSS readers. The idea is that with this protocol prefix the target file should be subscribed to rather than imported into the calendar application as would happen with a simple download. Handlers: Notable software packages and web applications supporting the webcal protocol include: Google Calendar Microsoft Outlook Mozilla Lightning Alternative protocols: CalDAV and GroupDAV are both efforts to provide WebDAV-based access to calendar stores with finer granularity. The CalDAV Access protocol has been standardized by the IETF and published as RFC 4791. Extensions to CalDAV for automated scheduling are also standardized, as RFC 6638. Neither of those protocols call for using DAV style URIs. Instead, both drafts call for using the HTTP OPTIONS feature to return that the server supports calendaring extensions.
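In practice, a handler usually just swaps the webcal scheme for http or https and fetches the .ics file with an ordinary GET, then re-polls it to keep the subscription current. The following is a minimal, illustrative sketch of that scheme-rewriting step in Python; whether a given server expects http or https is an assumption the handler has to make or probe, and the example URL is hypothetical.

```python
from urllib.parse import urlsplit, urlunsplit
from urllib.request import urlopen

def webcal_to_http(url: str, secure: bool = True) -> str:
    """Rewrite a webcal:// URL to http(s):// so the iCalendar file can be fetched with GET."""
    parts = urlsplit(url)
    if parts.scheme.lower() != "webcal":
        return url  # already a plain http(s) URL, leave it alone
    return urlunsplit(("https" if secure else "http",) + tuple(parts)[1:])

def fetch_icalendar(webcal_url: str) -> str:
    """Download the .ics text a webcal link points at (a single 'subscription poll')."""
    with urlopen(webcal_to_http(webcal_url)) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Illustrative URL only:
# print(fetch_icalendar("webcal://example.com/calendars/holidays.ics")[:200])
```

The point of the scheme swap is exactly what the article describes: the prefix signals "subscribe to this feed" to the handler, while the actual transfer is plain HTTP.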
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Volleyball offensive systems** Volleyball offensive systems: Volleyball offensive systems are the ways in which a coach can personalize and tweak his or her team's offense based on each player's skill level to make the team as competitive as possible. This is done by using different formations that allow a team to use a variety of volleyball attacks. A team on offense will try to increase the probability of winning a point on a hit by confusing the opposing blockers and disguising the setter's intended receiver as much as possible. This is done keeping in mind that the goal is to score a point and that running a successful offense is executed differently for every team. Teams use offensive systems in whatever way suits the team best. Preview: Volleyball offense is how a team can attempt to score a point by causing the ball to land on the opposing teams side of the court. Generally, this is done by first receiving the ball from the other side in the form of either an attack or serve, having the ball set to an attacker, and then having a player jump and attack the ball. Once the ball is received, the goal is to get the ball where it can be hit most effectively. This is usually close to the net where an attacker can jump and hit the ball. Based on a teams skill level, they will be able to run their offense smoothly with a unique arsenal of attacks and formations. Attacks: Basic The basics sets are used by teams that do not yet have the experience to run more complicated plays. Any more advanced set, relative to the team skill level, should be used sparingly so the opposing block is tricked by thinking the set is merely a basic set. 4: A 4 is a high set to the left antenna where an outside (left front) attacker may hit it. 2: A 2 is a high set to the middle (middle front) attacker about 2 feet above the tape of the net. 5: A 5 is a high set to the right antenna where an opposite (right front) attacker may hit it. Pipe: A pipe is a high set to the middle back right behind the 10' line where a back row attacker may hit it. 3: A 3 (also referred to as a 32 or 33) is a shoot set between the outside and middle hitters. Much like a combination of a 1 and a shoot. Attacks: 1: A 1 (also referred to as a quick) is a low set that is set about 1' above the tape of the net for the middle attacker Advanced These more complicated sets are meant to fool the opposition and get your attacker with 1 blocker or less. At higher levels, teams will use a basic set if it is a last resort, meaning that the ball was received poorly.Shoot: A shoot or Go-ball is a quick, low set to the left-front hitter near the antenna.32: A 32 (pronounced three-two) is a set to the left-front hitter halfway in between the middle of the net and the antenna about the height of a two ball.Flare: A flare is when an attacker uses an inside-out path to attack an outside set. 
A teammate commonly runs a quick fake to trick the opponents, then the attacker flares out to attack. Slide: A slide is a set to any attacker who runs parallel to the net and jumps off of one foot. Iso: An isolation play is a play where you use an attacker, usually the middle, as a decoy to leave another attacker with a weaker opposing block. Tandem: A tandem is when one attack follows another and hits the ball right after the first one lands, using the first attacker as a decoy. Double quick: A double quick is when two attackers take an approach towards the setter so that he or she may set either a 1 or a back 1, which is a 1 set over the shoulder of the setter. X: An x is when a middle goes up for a 1 and the right side attacker comes from the other side to hit a 2, making the two paths of the hitters cross. Formations: In volleyball, teams must have their players in a specific formation. The players then rotate around the court clockwise whenever the team performs a side-out. There is a penalty for being out of rotation and the opposing team receives a point. There are three formations that are widely used in the sport, each having advantages and disadvantages. Formations: 4-2 This offense takes its name from the fact that it uses 4 attackers and two setters. This is a basic formation generally used by less experienced teams to avoid confusion on the court. At any given time, one of the setters is front row and the other is back row. They are always opposite of each other on the court. This allows for 2 attackers front row at any given time, and the setter is able to dump the ball as the setter will always be in the front court. This basic offensive formation allows for any of the basic sets to be run, as well as a 32, shoot, or possibly a tandem. Teams that use a 4-2 will rarely set anything other than the basic sets. The positive aspects of the 4-2 include its simplicity, so a team can gain experience and later move on to a more complicated formation. The negative aspect of using a 4-2 is its limits regarding your offense. Some think that having two setters takes away from your team as the setter is generally the team leader. Some coaches opt to start their team out running a more complicated system and just having the players adopt it. Formations: 5-1 A 5-1 takes its name from using 1 setter and having 5 attackers on the court. The secondary setter is replaced by an opposite hitter who is always opposite the setter on the court. This formation allows the setter to be able to dump the ball for half the rotations and have 3 front row attackers to set the ball to on the other three rotations. This system allows the setter to set any possible set he or she wants to depending on whether he or she is front row or back row. Many coaches prefer this system, having one setter as the team leader. It also helps having only one setter so that the setting does not change. One setter may set the ball differently from another giving a different feel for the attackers. It helps when the attackers are used to one setter in particular. The negative points of this offense are that the setter needs to transition from defense to set the ball. This creates situations where the setter has the first contact and someone else has to set the ball. Formations: 6-2 A 6-2 is similar to a 4-2, but has 6 attackers and 2 setters. This is possible by having the back row setter always set the ball, making the setter only a hitter when he or she is front row.
This formation allows any possible set to be made not including a dump by the setter because he or she is always back row when setting the ball. This formation is good for a team in which the setters are also very good attackers where coach does not want to waste that talent. Some young teams also use it so that the player can increase a broad range of skills and not be sentenced to being a setter for his or her career. Unfortunately, this formation has the problems of 5-1 and 4-2. Having two setters, and always having one of them be back row. The setter always has to transition from defense and the leadership is lacking. Most teams at the highest level, including the USA Olympic team use the 5-1 rather than this for leadership purposes. A 6-2 offense is very common when a team has two setters of equal ability and at a younger age.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Squeaky toy** Squeaky toy: A squeaky toy, squeak toy, squeaker toy, or squeeze toy, is a soft, hollow toy made from flexible materials such as rubber or vinyl, and usually equipped with a small device known as a squeaker. How it works: When the toy is squeezed, air is forced through the squeaker, resulting in a high-pitched sound, such as a squeak, or the sound of a toy horn or whistle. The tone and duration of the sound may depend on the size of the squeaker, the amount of air squeezed out of the toy, and the speed with which it is squeezed. When the toy is not being squeezed, it resumes its normal shape and re-inflates. Air returning into the toy through the squeaker may or may not make a sound, depending on the design of the squeaker and the speed at which air re-enters. How it works: The high-pitched noise produced by squeaky toys quickly attracts the attention of infants and small children, while their soft, squeezable nature makes them safe for young children to handle. Squeaky toys are also popular with pets, and examples shaped like bones or small furry animals are commonly marketed for dogs. History: The first squeaky toys were simple rubber balls which produced a high pitched noise when air was squeezed through a hole, without a special noise maker. Later examples contained a metal noisemaker known as a "whistle disk." Brightly colored rubber squeaky toys molded in various shapes became common during the 1940s. Later examples were molded from durable vinyl, and plastic squeakers replaced metal whistles.Squeaky toys may be modeled after popular cartoon characters, or used as promotional advertising. There are squeaky toy collectors, and published guides with typical selling prices. Nature's squeaky toys: Small animals are sometimes compared with squeaky toys. A particularly apt example is the desert rain frog, the subject of a widely viewed video titled "World's Cutest Frog," regularly described as making a noise like a squeaky toy. The resemblance is enhanced by the fact that the frog vocalizes by inflating its body, and then exhaling (relatively) large quantities of air, as if being squeezed. The calls of certain birds have also been compared to squeaky toys; in particular those of the western kingbird, Mississippi kite, and sulphur-bellied flycatcher of North America, and the blue nuthatch of southeast Asia. In popular culture: Several squeaky toys play prominent roles in Pixar's Toy Story movies. The three-eyed alien toys first encountered in the claw machine at Pizza Planet are squeaky toys; they appear in all four films, and rescue the other toys from an incinerator in Toy Story 3. Another squeaky toy character is Wheezy, a penguin with a broken squeaker in Toy Story 2. Consigned to a yard sale, his rescue by Woody sets in motion the remainder of the movie's plot. In popular culture: Henry Dagg used squeaky toys in the shape of cats to build a "katklavier" (cat organ). This unusual instrument came to public attention in 2010, when Dagg used it to perform "Over the Rainbow" at a charity event held by Prince Charles. Both the Prince and the Duchess of Cornwall were reduced to tears of laughter by the performance.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Siege (software)** Siege (software): Siege is a Hypertext Transfer Protocol (HTTP) and HTTPS load testing and web server benchmarking utility developed by Jeffrey Fulmer. It was designed to let web developers measure the performance of their code under stress, to see how it will stand up to load on the internet. It is licensed under the GNU General Public License (GNU GPL) open-source software license, which means it is free to use, modify, and distribute. Siege can stress a single URL or it can read many URLs into memory and stress them simultaneously. It supports basic authentication, cookies, HTTP, HTTPS and FTP protocols. Performance measures: Performance measures include elapsed time of the test, the amount of data transferred (including headers), the response time of the server, its transaction rate, its throughput, its concurrency and the number of times it returned OK. These measures are quantified and reported at the end of each run. This is a sample of siege output:

Ben: $ siege -u shemp.whoohoo.com/Admin.jsp -d1 -r10 -c25
..Siege 2.65 2006/05/11 23:42:16
..Preparing 25 concurrent users for battle.

Performance measures: The server is now under siege...done

Transactions: 250 hits
Elapsed time: 14.67 secs
Data transferred: 448,000 bytes
Response time: 0.43 secs
Transaction rate: 17.04 trans/sec
Throughput: 30538.51 bytes/sec
Concurrency: 7.38
Status code 200: 250
Successful transactions: 250
Failed transactions: 0

Siege has essentially three modes of operation: regression, internet simulation and brute force. It can read a large number of URLs from a configuration file and run through them incrementally (regression) or randomly (internet simulation). Or the user may simply pound a single URL with a runtime configuration at the command line (brute force). Platform support: Siege was written on Linux and has been successfully ported to AIX, BSD, HP-UX, and Solaris. It compiles on most UNIX System V variants and on most newer BSD systems.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Terminal crossbreeding** Terminal crossbreeding: Terminal crossbreeding is a breeding system used in animal production. It involves two (different) breeds of animal that have been crossbred. The female offspring of that cross is then mated with a male (the terminal male) of a third breed, producing the terminal crossbred animal.The first crossbreeding may produce a superior animal due to hybrid vigor. Often, this crossbreed is part of a rotational crossbreeding scheme; if it also incorporates terminal crossbreeding, it is then called a rotaterminal system. By mating the crossbreed with a third breed, hybrid vigor may be further enhanced.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Selection cutting** Selection cutting: Selection cutting, also known as selection system, is the silvicultural practice of harvesting trees in a way that moves a forest stand towards an uneven-aged or all-aged condition, or 'structure'. Using stocking models derived from the study of old growth forests, selection cutting, also known as 'selection system', or 'selection silviculture', manages the establishment, continued growth and final harvest of multiple age classes (usually three, but 5 or even 10 are possible) of trees within a stand. A closely related approach to forest management is Continuous Cover Forestry (CCF), which makes use of selection systems to achieve a permanently irregular stand structure.Selection cutting or systems are generally considered to be more challenging to implement and maintain than even-aged management, due to the difficulty of managing multiple age classes in a shared space, but there are significant ecological benefits associated with it. Uneven-aged stands generally exhibit higher levels of vertical structure (key for many species of birds and mammals), have higher levels of carbon sequestration, and produce a more constant flow of market and non-market forest resources than even-aged stands. Although a forest composed of many stands with varied maturity ages maybe comparable, this would be at the forest rather than the stand level. This silvicultural method also protects forest soils from the adverse effects of many types of even-aged silviculture, including nutrient loss, erosion and soil compaction and the rapid loss of organic material from a forested system. Selection silviculture is especially adept at regenerating shade-tolerant species of trees (those able to function under conditions of low solar energy, both cooler and less light), but can also be modified to suit the regeneration and growth of intolerant and mid-tolerant species. This is one of many different ways of harvesting trees. Selection cutting as a silvicultural system can be modified in many ways and would be so done be a forester to take into account varied ownership goals, local site conditions and the species mix found from past forest conditions. Confusing term: Selection cutting is often (sometimes deliberately) confused with "selective" cutting, a term synonymous with the practice of highgrading (the removal of the most economically profitable trees in a forest, often with a disregard for the future of the residual stand). Often the latter term is used by foresters or loggers to imply the former (which has a generally positive connotation in forestry circles) and mislead landowners into stripping their woodlot of its most valuable timber. Used correctly, the term 'selection cutting', 'selection system', or 'selection silviculture' implies the implementation of specific silvicultural techniques—usually either 'single tree selection', 'group selection' or a combination of the two—to create an uneven-aged or all-aged condition in a forest stand, one more akin to a late successional or 'climax' condition.Partly as a result of such confusion, the term Plenterwald, which is the German term for selection cutting, is being more commonly used as the standard term in English. Increasingly, especially in Britain, Ireland and elsewhere in Europe, the term Continuous Cover Forestry (CCF) has been adopted to embrace an approach to stand management that most often employs selection systems to achieve a permanently irregular stand structure. 
Single-tree selection: The most common type of selection system is single-tree selection, in which scattered individual trees of multiple age classes, whose canopies are not touching, are harvested. This type of selection system generally produces small canopy openings especially conducive to the establishment and growth of shade tolerant tree species. Group selection: Another variation of selection silviculture is group selection. Under this system, a number of 'groups', or small openings created by the removal of several adjacent trees, are created in complement to the harvest of scattered individual trees. If the groups created are large enough, and if seed-bed conditions are favorable, this can allow species which are intolerant of shade to regenerate. Group selection is designed to mimic larger, multi-tree mortality events, which in some environments may represent natural disturbance regimes. Group selection: The maximum size of a group (before it becomes a patch, or clearcut) is debatable. Some say it may be up to 2 acres (0.8 hectares) in size, whereas others limit it to a maximum of 0.5 acres (0.1 hectares). Group selection: In any case Plenterwald can operate in a small areas of 1/3 - 1/2 hectare, whereas other systems need a bigger area. Behind this is the philosophical idea that a stand should be balanced (that is equal amounts of land cover for each age class) in the same way that a forest would be balanced under a clear cut régime (that is stands collectively are balanced on yield flow). The reasoning is based on the Normalwald concept, which is a model of a forest over 100 years that will produce an amount of money that is consistent over time with treatments being consistent over time rather than big expenses or big profits at one time and low expenses and low profits at another.Care need to be taken to avoid epicormic shoots growing on trunks of surrounding trees such that they lead to knotty wood, if timber production is desired. It is also challenging to visualize the groups with cuttings over time. Implementing A Selection System: In North America, trees are selected for harvest in a selection system with reference to the Arbogast Method (named after the method's creator). This is also known as the BDq method. Under this method, a harvest is specified by defining a residual basal area (B), a maximum diameter (D), and a q-ratio (q). The q-ratio is the ratio of the number of trees in a diameter class to the number of trees in the next larger class. Typically diameter classes are either 4 centimeters or 2 inches. Implementing A Selection System: When the Q is plotted on semi-log paper it gives a straight slope for uneven aged stands. However, in reality this slope can be seen to vary from what is called an S-curve in old growth forests to cut off the older trees giving a reverse-J curve in a managed stand. The curve is also an ideal curve and there may be variations to some extent, particularly in earlier number of trees where there are many more seedlings and saplings than the model Q-ratio would suggest. Implementing A Selection System: Given the BDq, a curve representing the state of the residual stand is computed. This curve is compared to the inventory data from a stand, specifically the curve of the diameter classes of the trees in the stand against the number of trees in each diameter (age) class. Diameter is used as a surrogate for age and thus called an age class even though strictly it should be a size class. 
The comparison of these two curves tells the forester how many trees of each age-class should remain in the stand. Surplus trees are marked for harvest. If there are too few trees in a class, the forester will determine if it is necessary to reduce the removal of trees from neighboring classes to maintain an ideal q-ratio.The goal of the use of a BDq curve is to ensure the continued development of trees in each age class, and the continued availability of mature timber to harvest on a relatively short cutting cycle (8–15 years). Longer cutting cycles may be used depending on species mix, silvicultural goals and if the aim is amenity or economic forestry in respect to the land. Implementing A Selection System: Following this method with well performed forest inventories should see the right amount of cutting. However, reality has shown about a third of forests are overcut and a third are undercut. It appears that the model also departs from reality in many cases, and so cannot be solely relied on. The judgement of an experienced forester is also needed.
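The BDq bookkeeping described above is straightforward to mechanize. The sketch below is an illustrative example (not the Arbogast tables themselves and not from the original article): given a residual basal area B, a maximum retained diameter D and a q-ratio q, it generates the reverse-J target curve of stems per diameter class that a forester would then compare against inventory counts to decide which trees are surplus. The class width, minimum diameter and example values are assumptions.

```python
import math

def bdq_target_curve(B, D, q, class_width=4.0, min_diameter=10.0):
    """Target stems per hectare in each diameter class for a BDq residual stand.

    B            : residual basal area to retain (m²/ha)
    D            : maximum retained diameter (cm)
    q            : ratio of stems in a class to stems in the next larger class
    class_width  : width of each diameter class (cm); 4 cm and 2 in are typical
    min_diameter : midpoint of the smallest class tracked (cm)
    """
    # Class midpoints from smallest to largest retained diameter.
    mids = []
    d = min_diameter
    while d <= D:
        mids.append(d)
        d += class_width
    # Stem counts fall by a factor of q per step up in size (reverse-J curve):
    # the smallest class carries the most stems. Basal area of one stem of
    # diameter d cm is pi * (d/200)**2 m².
    weights = [q ** (len(mids) - 1 - i) for i in range(len(mids))]
    ba_per_stem = [math.pi * (d / 200.0) ** 2 for d in mids]
    scale = B / sum(w * ba for w, ba in zip(weights, ba_per_stem))
    return {d: scale * w for d, w in zip(mids, weights)}

# Example values (illustrative only): B = 18 m²/ha, D = 60 cm, q = 1.3.
for d, n in bdq_target_curve(18.0, 60.0, 1.3).items():
    print(f"{d:>4.0f} cm class: {n:6.1f} stems/ha")
```

Plotting the resulting counts on a logarithmic axis against diameter gives the straight-line slope mentioned above, so the same routine can also be used to visualize how far an inventoried stand departs from its ideal q-ratio.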
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Double Arena** Double Arena: Double Arena is a 1984 supplement for the simulation game Car Wars, published by Steve Jackson Games. Gameplay: Double Arena is a supplement in which two gigantic arenas can be constructed by matching up the two double-sided 21" x 32" mapsheets. The arenas represent the Buffalo Municipal Coliseum in Buffalo, New York and the Dumbarton Slalom Arena in Oakland, California. The supplement also includes 48 counters and an instruction sheet. Reception: Craig Sheeley reviewed Double Arena in Space Gamer No. 70. Sheeley commented, "Double Arena is a nice switch from the Armadillo A.A., or Midville, or whatever area you've been using. If your arena fighters need a new place to shed blood, or if you want new (and good) counters, or if you love the idea of a slalom, then Double Arena is for you."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**National Design & Research Forum** National Design & Research Forum: National Design & Research Foundation (NDRF) is a premier Indian research and development organization promoting collaborative interdisciplinary research and development. It is actively involved in Technological Research, Design, Development, Productization, Innovation, and Program Management of large, interdisciplinary Indian Technological Research Programs. NDRF was established in June 1967 by the Institution of Engineers (India), as an autonomous forum for technological research and development. It was formerly known as the National Design Engineering Forum. It is located in Bengaluru, Karnataka, India. NDRF's collaborative research ecosystem, the NDRF Consortium, has over 80 partners from academia, industry, and research organizations, and has progressed 32 research projects. Currently, Space Scientist Dr. Annadurai M serves as the chairman, board of governors, NDRF. Defence Scientist Dr. V. Dillibabu has been the Director of NDRF since June 2019. NDRF Consortium partner: Agargami Applied Aeronautics. Activities:
• Research and Development: Micro, Nano, and Bio Systems; Medical Device Development for Affordable Healthcare; Interfacing Biology and Engineering; Sensors, and Sensors for Societal applications including Chemical, Biomimetic and Biosensors; National Programme on Micro air vehicles (NP-MICAV); System Identification Group - Flapping Wing MICAVs; Development of Fuel Cells; Rapid Prototyping and Tooling
• Research Infrastructure and Laboratories as National Resource Facilities: ProtoLab (Rapid Prototyping Laboratory); Central Integrated Systems Laboratory (systems engineering and design of multidisciplinary systems by simulation); Technical Facility for NP-MICAV
• Knowledge-sharing and Networking: National and International Flying Competitions for MICAVs; organizing premier technological networking and collaborative sessions to bring professionals and researchers from different disciplines of engineering on a common platform
• Monographs on Contemporary Technologies
• Design Awards: National Design Awards and Student Design Awards
Contribution: NDRF success stories span
• Development of Interdisciplinary Technologies: biosensors for detection of explosives and gases, bio-fuel cells and folding wing technologies for micro air vehicles, innovative technologies for developing affordable blood pressure measurement devices
• Products: micro brushless outrunner motors for micro air vehicles, non-invasive glucometer using innovative technologies
• Patents and Intellectual Property
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Opium replacement** Opium replacement: Opium replacement or opium substitution refers to the process of substituting opium poppy cash crops with non-narcotic alternatives. Concept: The concept of opium replacement was first developed within an agricultural framework, most notably in Thailand. Agricultural engineers sought to identify crops that would generate more income than the opium poppy. In the late 1970s and early 1980s, rural development projects caused the terms opium replacement and opium substitution to be superseded by integrated rural development. In the 1990s, the term shifted to alternative development. This term and its minor variants are still used in Latin America (where crop-replacement approaches are used for coca). The United Nations refers to these crop replacement projects as sustainable alternative livelihoods; in Afghanistan, development agencies use the term sustainable livelihoods. Worldwide: Opium has been grown in Turkey, Iran, Pakistan, Afghanistan, India, Nepal, Myanmar (formerly Burma), Thailand, Laos, China, and Vietnam. It is also believed to be grown in the central post-Soviet states, including Kazakhstan and Kyrgyzstan, Mexico (allegedly imported by immigrant Chinese opium users), and Colombia (reportedly as part of a collaboration between South-East Asian and Colombian drug traffickers). According to a United Nations Office on Drugs and Crime report published in the mid-2000s, large amounts of opium are only cultivated in Myanmar, Afghanistan, and Colombia. Small to intermediate amounts were produced by Laos, Mexico, and Pakistan, while Thailand and Vietnam produced negligible amounts. Of these countries, opium replacement has been implemented in Thailand, Laos, Myanmar, Vietnam, Pakistan, Mexico, and Afghanistan. Worldwide: In Colombia, much of the opium cultivation takes place under the protection of armed groups opposing the government, limiting the success of opium replacement attempts. Laos has experienced steep declines in cultivation, but former opium farmers are often left destitute due to the scarcity of legal, alternative crops. A similar situation has been observed in Pakistan, which is now experiencing an increase in cultivation due to spillover from Afghanistan. The opium replacement project in Afghanistan is slow, due to the large scale of cultivation, the size of the country, poor security, destruction of infrastructure, and weakness of government institutions. Worldwide: Myanmar Myanmar has had some attempts at opium replacement: the United Nations has one project in the Wa State (in the north-east), and the Doi Tung project of Thailand also initiated some activities. The areas covered by such projects were too small to have a significant effect on national production. While opium production has been falling, this is attributed to Myanmar warlords' new focus on methamphetamines rather than to replacement projects. Worldwide: Thailand Thailand is widely considered the most successful example of opium replacement policies. Although peak production in Thailand was relatively low (150–200 tonnes annually), Thailand's approach to opium replacement is considered the broadest attempt to replace opium cultivation with cultivation of legal crops. More than 150 crops have been introduced to farmers, especially crops suited to temperate climates (the same climates in which opium is grown). The crops include cabbage, lettuce, kidney beans, tea, coffee, peaches, apples, herbs, and decorative flowers. In general, these crops were cash crops of medium to high value.
While many are not native to Thailand, they have been integrated into Thai cooking and culture. Two particularly successful opium replacement projects are still in operation: the Royal Project (established in 1969) and the Doi Tung Project (established in 1988). Both have eliminated opium cultivation from their project areas and have helped farmers improve living conditions. They are used as models and are studied by practitioners of opium replacement from other countries. Scepticism: Despite the success of Thailand and, to a lesser extent, Pakistan and Vietnam, many people claim that opium replacement is ineffective, arguing that Thailand is the only "real" success and that its success is due to unique and non-replicable factors. Development activities may cause opium cultivators to simply relocate (in what is known as the balloon effect). Despite the presence of opium replacement projects, the world's supply of illicit drugs is continually rising, while prices are falling. Organisations: Opium replacement projects are typically implemented by national government agencies with the support of an international donor. A contractor implements the project in partnership with the national agency. At the moment, the largest providers of funding are the United States Agency for International Development and the European Union. Major contractors include the German Technical Cooperation Agency and several for-profit firms from the United States. The United Nations Office on Drugs and Crime helps coordinate different efforts, and also funds a few projects. Organisations: Opium replacement projects are no longer planned and executed over long time frames, as was the case with Thailand's Royal Project and the Doi Tung project. Rather, they take place over two or three years. Effectiveness: There are three reasons why opium replacement was so successful in Doi Tung, Thailand. One is that Alternative Development (AD) was preceded by violent intimidation of people living in northern Thailand (Ref: Race, Jeffrey (1974), "The War in Northern Thailand", Modern Asian Studies, Vol. 8, No. 1, p. 105, Cambridge University Press, https://www.jstor.org/stable/311628). The second is that it was such a small project that a well-financed (by foreign countries) NGO could handle it; it has no universal application. In 1988, when the AD project started in Doi Tung, the total area under illicit poppy cultivation in Thailand was only 2,811 hectares (p. 23 of UNODC's World Drug Report 1999). The same year, 1,740 hectares were eradicated. The third reason is that the AD project required eradication to succeed. Eradication still continues: 264 hectares of illicit poppy fields were eradicated in 2013, according to the World Drug Report of 2014. If Alternative Development or opium replacement had been so successful in Thailand, why does eradication continue, and why, for that matter, does opium use persist, with 96,284 consumers according to Table I of UNODC's South East Asian Opium Survey? The Alternative Development lobby, in its anxiety to push this concept, has ignored these hard facts, as well as the fact that all resolutions on Alternative Development recommend it as a means to assist eradication. Thus AD is not all that peaceful either.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pine Hill Formation** Pine Hill Formation: The Pine Hill Formation is a geologic formation in New York. It preserves fossils dating back to the Devonian period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cyclic nucleotide-gated channel alpha 3** Cyclic nucleotide-gated channel alpha 3: Cyclic nucleotide-gated cation channel alpha-3 is a protein that in humans is encoded by the CNGA3 gene. Function: This gene encodes a member of the cyclic nucleotide-gated cation channel protein family, which is required for normal vision and olfactory signal transduction. CNGA3 is expressed in cone photoreceptors and is necessary for color vision. Missense mutations in this gene are associated with rod monochromacy and segregate in an autosomal recessive pattern. Two alternatively-spliced transcripts encoding different isoforms have been described. Clinical relevance: Variants in this gene have been shown to cause achromatopsia and colour blindness.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Selection shadow** Selection shadow: The selection shadow is a concept involved with the evolutionary theories of aging that states that selection pressures on an individual decrease as the individual ages and passes sexual maturity, resulting in a "shadow" of time over which selective fitness is not considered. Over generations, this results in maladaptive mutations accumulating later in life, because aging is non-adaptive with respect to reproductive fitness. The concept was first worked out by J. B. S. Haldane and Peter Medawar in the 1940s, with Medawar creating the first graphical model. Model: The model developed by Medawar states that, due to the dangerous conditions and pressures from the environment, including predators and diseases, most individuals in the wild die not long after sexual maturity. Therefore, there is a low probability that individuals survive to an advanced age and suffer the effects of aging. In conjunction with this, the effects of natural selection decrease as age increases, so that later individual performance is ignored by selection forces. This results in beneficial mutations not being selected for if they only have a positive effect later in life, along with late-life deleterious mutations not being selected against. Because the fitness of an individual is not affected once it is past its reproductive prime, later mutations and effects are considered to be in the "shadow" of selection. This concept would later be adapted into Medawar's 1952 mutation accumulation hypothesis, which was itself expanded upon by George C. Williams in his 1957 antagonistic pleiotropy hypothesis. A classical requirement and constraint of the model is that the number of individuals within a population that live to reach senescence must be small. If this is not true for a population, then the effects of old age will not be under a selection shadow and will instead affect the adaptation and evolution of the population as a whole. At the same time, however, this requirement has been challenged by increasing evidence that senescence is more common in wild populations than previously expected, especially among birds and mammals, while the effects of the selection shadow remain present. Medawar's Test Tube model: Medawar developed a thought experiment to illustrate that most animals die from external causes, such as storms, drought, fires, and predation, before aging becomes the ultimate cause of death. In the model, a population of test tubes represents a population of animals, and a test tube breaking represents an individual dying. Test tubes are broken at random, regardless of their age, and each broken tube is replaced with a new one, representing a new animal being born into the population. Over time, the model shows that tubes above a certain age become increasingly rare as new tubes are added. The overall result is an exponential decline in the survivorship curve, giving the population a characteristic half-life, so that few individuals reach old age. Medawar used this model to illustrate what realistically happens in natural populations.
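Medawar's test-tube argument can be illustrated numerically. The sketch below is not from the source; it simulates a fixed-size population in which each "test tube" has the same constant chance of breaking at every time step and is immediately replaced, and then reports the resulting age distribution, which declines roughly exponentially even though no tube ages intrinsically.

```python
import random
from collections import Counter

def simulate_test_tubes(population=1000, breakage_prob=0.1, steps=200, seed=42):
    """Sketch of Medawar's test-tube model.

    Every step each tube breaks with the same probability, regardless of age,
    and a broken tube is replaced by a new one of age zero.  Even with no
    intrinsic aging, old tubes become rare: the age distribution decays
    geometrically, so selection "sees" very few old individuals.
    """
    random.seed(seed)
    ages = [0] * population
    for _ in range(steps):
        ages = [0 if random.random() < breakage_prob else age + 1 for age in ages]
    return Counter(ages)

if __name__ == "__main__":
    distribution = simulate_test_tubes()
    for age in sorted(distribution)[:10]:
        print(f"age {age:3d}: {distribution[age]:4d} tubes")
```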
Criticism: Some scientists, however, have criticized the idea that aging is non-adaptive, instead adopting the theory of "death by design". This theory follows the work of August Weismann, which states that aging specifically evolved as an adaptation, and it disagrees with Medawar's model, regarding it as an oversimplification of the impact older organisms have on evolution. It is also claimed that older organisms have a higher reproductive capacity, because they must have been fitter in order to reach their age, rather than their capacity being equal across ages as in Medawar's calculations.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DE-9IM** DE-9IM: The Dimensionally Extended 9-Intersection Model (DE-9IM) is a topological model and a standard used to describe the spatial relations of two regions (two geometries in two dimensions, R2), in geometry, point-set topology, geospatial topology, and fields related to computer spatial analysis. The spatial relations expressed by the model are invariant to rotation, translation and scaling transformations. DE-9IM: The matrix provides an approach for classifying geometry relations. Roughly speaking, with a true/false matrix domain, there are 512 possible 2D topological relations, which can be grouped into binary classification schemes. The English language contains about 10 schemes (relations), such as "intersects", "touches" and "equals". When testing two geometries against a scheme, the result is a spatial predicate named by the scheme. DE-9IM: The model was developed by Clementini and others based on the seminal works of Egenhofer and others. It has been used as a basis for standards of queries and assertions in geographic information systems (GIS) and spatial databases. Matrix model: The DE-9IM model is based on a 3×3 intersection matrix of the form DE-9IM(a,b) = [ dim(I(a) ∩ I(b)), dim(I(a) ∩ B(b)), dim(I(a) ∩ E(b)); dim(B(a) ∩ I(b)), dim(B(a) ∩ B(b)), dim(B(a) ∩ E(b)); dim(E(a) ∩ I(b)), dim(E(a) ∩ B(b)), dim(E(a) ∩ E(b)) ], where dim is the dimension of the intersection (∩) of the interior (I), boundary (B), and exterior (E) of geometries a and b. Matrix model: The terms interior and boundary in this article are used in the sense used in algebraic topology and manifold theory, not in the sense used in general topology: for example, the interior of a line segment is the line segment without its endpoints, and its boundary is just the two endpoints (in general topology, the interior of a line segment in the plane is empty and the line segment is its own boundary). Matrix model: In the notation of topological space operators, the matrix elements can also be expressed using the interior and boundary operators applied to a and b. The dimension of an empty set (∅) is denoted as −1 or F (false). The dimension of a non-empty set (¬∅) is denoted by the maximum number of dimensions of the intersection, specifically 0 for points, 1 for lines, 2 for areas. The domain of the model is therefore {0,1,2,F}. Matrix model: A simplified version of the dim(x) values is obtained by mapping the values {0,1,2} to T (true), giving the boolean domain {T,F}. The nine elements of the matrix are named II, IB, IE, BI, BB, BE, EI, EB and EE, after the interior/boundary/exterior pair that each intersection involves. Both matrix forms, with dimensional and boolean domains, can be serialized as "DE-9IM string codes", which represent them in a single-line string pattern. Since 1999 the string codes have had a standard format. Matrix model: For output checking or pattern analysis, a matrix value (or a string code) can be checked by a "mask": a desired output value with optional asterisk symbols as wildcards, that is, "*" indicating output positions that the designer does not care about (free values or "don't-care positions"). The domain of the mask elements is {0,1,2,F,*}, or {T,F,*} for the boolean form. The simpler models 4-Intersection and 9-Intersection were proposed before DE-9IM for expressing spatial relations (and originated the terms 4IM and 9IM). They can be used instead of the DE-9IM to optimize computation when input conditions satisfy specific constraints. Illustration: For two overlapping polygonal geometries, the matrix returned by DE-9IM(a,b) can be serialized. Reading from left to right and top to bottom, the result is II=2, IB=1, IE=2, BI=1, BB=0, BE=1, EI=2, EB=1, EE=2, so the compact representation as a string code is '212101212'.
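As an illustration of the string codes and wildcard masks just described, here is a small sketch (not part of the standard or of any particular library; the function names are illustrative) that serializes a 3×3 matrix into a DE-9IM string code and checks it against a mask over the domain {0,1,2,F} with 'T' and '*' wildcards.

```python
def serialize(matrix):
    """Flatten a 3x3 DE-9IM matrix (values 0, 1, 2 or 'F'), read row by row
    (interior, boundary, exterior of a against those of b), into a
    9-character string code such as '212101212'."""
    return "".join(str(cell) for row in matrix for cell in row)

def matches(code, mask):
    """Check a DE-9IM string code against a mask.

    Mask characters: '*' matches anything, 'T' matches any non-empty
    intersection (0, 1 or 2), 'F' matches only an empty intersection,
    and '0', '1', '2' must match exactly.
    """
    for value, wanted in zip(code, mask):
        if wanted == "*":
            continue
        if wanted == "T" and value in "012":
            continue
        if wanted == value:
            continue
        return False
    return True

# The overlapping-polygons example from the text serializes to '212101212'.
overlap = [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
code = serialize(overlap)                  # '212101212'
print(code, matches(code, "T*T***T**"))    # True: interiors and exteriors intersect
```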
Spatial predicates: Any topological property based on a DE-9IM binary spatial relation is a spatial predicate. For ease of use, "named spatial predicates" have been defined for some common relations, which later became standard predicates. Spatial predicates: The spatial predicate functions that can be derived from DE-9IM fall into three groups: predicates defined with masks of domain {T,F,*}; predicates that can be obtained from those by logical negation or parameter inversion (matrix transposition); and predicates that use the input dimensions and are defined with masks of domain {0,1,T,*}. Notice the following. The topologically-equal definition does not imply that the geometries have the same points or even that they are of the same class. The output of DE-9IM(a,b) contains the information found in a list of all interpretable predicates about geometries a and b. All predicates are computed by masks; only Crosses and Overlaps have additional conditions about dim(a) and dim(b). All mask string codes end with *, because EE is trivially true and thus provides no useful information. The Equals mask, T*F**FFF*, is the "merge" of Contains (T*****FF*) and Within (T*F**F***): (II ∧ ~EI ∧ ~EB) ∧ (II ∧ ~IE ∧ ~BE). The mask T*****FF* occurs in the definition of both Contains and Covers. Covers is a more inclusive relation; in particular, unlike Contains, it does not distinguish between points in the boundary and in the interior of geometries. For most situations, Covers should be used in preference to Contains. Similarly, the mask T*F**F*** occurs in the definition of both Within and CoveredBy. For most situations, CoveredBy should be used in preference to Within. Historically, other terms and other formal approaches have been used to express spatial predicates; for example, region connection calculus was introduced in 1992 by Randell, Cui and Cohn. Spatial predicates: Properties: The spatial predicates have the following properties as binary relations: reflexive (Equals, Contains, Covers, CoveredBy, Intersects, Within); anti-reflexive (Disjoint); symmetric (Equals, Intersects, Crosses, Touches, Overlaps); transitive (Equals, Contains, Covers, CoveredBy, Within). Interpretation: The choice of terminology and semantics for the spatial predicates is based on reasonable conventions and the tradition of topological studies. Spatial predicates: Relationships such as Intersects, Disjoint, Touches, Within and Equals (between two geometries a and b) have obvious semantics: Equals means a = b, that is (a ∩ b = a) ∧ (a ∩ b = b); Within means a ∩ b = a; Intersects means a ∩ b ≠ ∅; Touches means (a ∩ b ≠ ∅) ∧ (aο ∩ bο = ∅). The predicates Contains and Within have subtle aspects to their definition which are contrary to intuition. Spatial predicates: For example, a line L which is completely contained in the boundary of a polygon P is not considered to be contained in P. This quirk can be expressed as "polygons do not contain their boundary". This issue is caused by the final clause of the Contains definition above: "at least one point of the interior of B lies in the interior of A". For this case, the predicate Covers has more intuitive semantics (see its definition), avoiding boundary considerations. Spatial predicates: For better understanding, the dimensionality of the inputs can be used to justify a gradual introduction of semantic complexity. Coverage of possible matrix results: The number of possible results in a boolean 9IM matrix is 2⁹ = 512, and in a DE-9IM matrix it is 3⁹ = 6561.
The percentage of these results that satisfy a specific predicate can be determined by enumeration. In usual applications the geometries intersect a priori, and the other relations are then checked. Spatial predicates: The composite predicates "Intersects OR Disjoint" and "Equals OR Different" sum to 100% (they are always-true predicates), but "Covers OR CoveredBy" covers 41%, which is not the sum of its parts, because the two are neither logical complements nor independent relations; the same holds for "Contains OR Within", at 21%. The sum 25% + 12.5% = 37.5% is obtained for "Crosses OR Overlaps" when overlapping of lines is ignored, because the valid input sets are disjoint. Queries and assertions: The DE-9IM offers a full descriptive assertion about the two input geometries. It is a mathematical function that represents a complete set of all possible relations about two entities, like a truth table, a three-way comparison, a Karnaugh map or a Venn diagram. Each output value is like a truth-table line that represents relations of specific inputs. Queries and assertions: As illustrated above, the output '212101212' of DE-9IM(a,b) is a complete description of all topological relations between specific geometries a and b. It tells us that II=2, IB=1, IE=2, BI=1, BB=0, BE=1, EI=2, EB=1, EE=2. On the other hand, if we check predicates like Intersects(a,b) or Touches(a,b) (for the same example we have "Intersects=true and Touches=true"), we obtain an incomplete description of "all topological relations". Queries and assertions: Predicates also do not say anything about the dimensionality of the geometries (it does not matter whether a and b are lines, areas or points). This independence from geometry type, together with the lack of completeness, makes predicates useful for general queries about two geometries. For usual applications, the use of spatial predicates is also justified by being more human-readable than DE-9IM descriptions: a typical user has better intuition about predicates than about a set of interior/boundary/exterior intersections. Predicates have useful semantics in usual applications, so it is useful to translate a DE-9IM description into the list of all associated predicates, which is like a casting process between the two different semantic types. Examples: The string codes "0F1F00102" and "0F1FF0102" have the semantics of "Intersects & Crosses & Overlaps". The string code "1FFF0FFF2" has the semantics of "Equals". The string codes "F01FF0102", "FF10F0102", "FF1F00102", "F01FFF102", and "FF1F0F1F2" have the semantics of "Intersects & Touches". Standards: The Open Geospatial Consortium (OGC) has standardized the typical spatial predicates (Contains, Crosses, Intersects, Touches, etc.) as boolean functions, and the DE-9IM model as a function that returns a string (the DE-9IM code) with domain {0,1,2,F}, meaning 0 = point, 1 = line, 2 = area, and F = "empty set". This DE-9IM string code is a standardized format for data interchange. The Simple Feature Access (ISO 19125) standard, in chapter 7.2.8, "SQL routines on type Geometry", recommends as supported routines the SQL/MM Spatial (ISO 13249-3 Part 3: Spatial) ST_Dimension, ST_GeometryType, ST_IsEmpty, ST_IsSimple and ST_Boundary for all geometry types. The same standard, consistent with the definitions of relations in "Part 1, Clause 6.1.2.3" of the SQL/MM, recommends (as "shall be supported") the function labels ST_Equals, ST_Disjoint, ST_Intersects, ST_Touches, ST_Crosses, ST_Within, ST_Contains, ST_Overlaps and ST_Relate.
The DE-9IM in the OGC standards uses specific definitions of interior and boundary for the main OGC standard geometry types. Implementation and practical use: Most spatial databases, such as PostGIS, implement the DE-9IM model through the standard functions ST_Relate, ST_Equals, ST_Intersects, etc. The function ST_Relate(a,b) outputs the standard OGC DE-9IM string code. Examples: two geometries, a and b, that intersect and touch at a point (for instance with dim(B(a)∩I(b))=0 and dim(I(a)∩I(b))=F), can have ST_Relate(a,b)='FF1F0F1F2' or ST_Relate(a,b)='FF10F0102'. They also satisfy ST_Intersects(a,b)=true and ST_Touches(a,b)=true. When ST_Relate(a,b)='0FFFFF212', the returned DE-9IM code has the semantics of "Intersects(a,b) & Crosses(a,b) & Within(a,b) & CoveredBy(a,b)"; that is, the boolean expression ST_Intersects(a,b) AND ST_Crosses(a,b) AND ST_Within(a,b) AND ST_CoveredBy(a,b) returns true. Queries and assertions: The use of ST_Relate() is faster than directly computing a set of corresponding predicates. There are cases where using ST_Relate() is the only way to compute a complex predicate; see the example of the code 0FFFFF0F2, of a point that does not "cross" a multipoint (an object that is a set of points), yet the Crosses predicate (when defined by a mask) returns true. Queries and assertions: It is usual to overload ST_Relate() by adding a mask parameter, or to pass the string returned by ST_Relate(a,b) to the ST_RelateMatch() function. When used as ST_Relate(a,b,mask), it returns a boolean. Examples: ST_Relate(a,b,'*FF*FF212') returns true when ST_Relate(a,b) is 0FFFFF212 or 01FFFF212, and returns false when it is 01FFFF122 or 0FF1FFFFF. ST_RelateMatch('0FFFFF212','*FF*FF212') and ST_RelateMatch('01FFFF212','TTF*FF212') are true, while ST_RelateMatch('01FFFF122','*FF*FF212') is false. Synonyms: "Egenhofer-Matrix" is a synonym for the 9IM 3x3 matrix of boolean domain. "Clementini-Matrix" is a synonym for the DE-9IM 3x3 matrix of {0,1,2,F} domain. "Egenhofer operators" and "Clementini operators" sometimes refer to the matrix elements (II, IE, etc.), which can be used in boolean operations. Example: the predicate "G1 contains G2" can be expressed by "⟨G1| II ∧ ~EI ∧ ~EB |G2⟩", which can be translated into the mask syntax T*****FF*. Among predicates, "meets" is a synonym for touches and "inside" is a synonym for within. Oracle's "ANYINTERACT" is a synonym for intersects and "OVERLAPBDYINTERSECT" is a synonym for overlaps; its "OVERLAPBDYDISJOINT" does not have a corresponding named predicate. Region connection calculus operators offer some synonyms for predicates: disjoint is DC (disconnected), touches is EC (externally connected), equals is EQ. Others, like Overlaps as PO (partially overlapping), need context analysis or composition.
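To make the mask-based predicates concrete, here is a small self-contained sketch, in the spirit of ST_RelateMatch but not taken from any standard or library. It uses only the three masks quoted earlier in the text (Equals T*F**FFF*, Contains T*****FF*, Within T*F**F***) and lists which of those named predicates a DE-9IM string code satisfies.

```python
def _matches(code, mask):
    """Wildcard match for DE-9IM codes: '*' = anything, 'T' = 0/1/2, else exact."""
    return all(m == "*" or (m == "T" and c in "012") or m == c
               for c, m in zip(code, mask))

# Masks quoted in the text; other predicates (Touches, Crosses, ...) need
# their own masks or extra dimension checks and are omitted here.
NAMED_MASKS = {
    "Equals":   "T*F**FFF*",
    "Contains": "T*****FF*",
    "Within":   "T*F**F***",
}

def predicates_from_code(code):
    """List the named predicates satisfied by a DE-9IM string code."""
    return [name for name, mask in NAMED_MASKS.items() if _matches(code, mask)]

# '1FFF0FFF2' is given in the text as having the semantics of "Equals";
# it also satisfies Contains and Within, as expected, since the Equals
# mask is the merge of those two masks.
print(predicates_from_code("1FFF0FFF2"))   # ['Equals', 'Contains', 'Within']
```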
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Accident analysis** Accident analysis: Accident analysis is carried out in order to determine the cause or causes of an accident (which can result in single or multiple outcomes) so as to prevent further accidents of a similar kind. It is part of accident investigation or incident investigation. These analyses may be performed by a range of experts, including forensic scientists, forensic engineers or health and safety advisers. Accident investigators, particularly those in the aircraft industry, are colloquially known as "tin-kickers". Health and safety and patient safety professionals prefer using the term "incident" in place of the term "accident". Its retrospective nature means that accident analysis is primarily an exercise in directed explanation, conducted using the theories or methods the analyst has to hand, which direct the way in which the events, aspects, or features of accident phenomena are highlighted and explained. Sequence: Accident analysis is performed in four steps: Fact gathering: After an accident, a forensic process is started to gather all possibly relevant facts that may contribute to understanding the accident. Fact analysis: After the forensic process has been completed or at least delivered some results, the facts are put together to give a "big picture." The history of the accident is reconstructed and checked for consistency and plausibility. Conclusion drawing: If the accident history is sufficiently informative, conclusions can be drawn about causation and contributing factors. Counter-measures: In some cases, counter-measures are developed or recommendations are made to prevent further accidents of the same kind. Methods: There are numerous accident analysis methods. These can be divided into three categories: Causal Analysis (root cause analysis) uses the principle of causality to determine the course of events. Though people casually speak of a "chain of events", results from Causal Analysis usually have the form of directed acyclic graphs – the nodes being events and the edges the cause-effect relations. Methods of Causal Analysis differ in their respective notions of causation. Methods: Expert Analysis relies on the knowledge and experience of field experts. This form of analysis usually lacks a rigorous (formal or semiformal) methodological approach, which usually affects the falsifiability and objectivity of the analyses. This is of importance when conclusions are heavily disputed among experts. Methods: Organizational Analysis relies on systemic theories of organization. Most theories imply that if a system's behaviour stays within the bounds of the ideal organization, then no accidents can occur. Organizational Analysis can be falsified, and results from analyses can be checked for objectivity. Choosing an organizational theory for accident analysis rests on the assumption that the system to be analysed conforms to that theory. Models: Many models have been described to characterise and analyse accidents. Using photographs to extract evidence: Once all available data has been collected by accident scene investigators and law enforcement officers, camera matching, photogrammetry or rectification can be used to determine the exact location of physical evidence shown in the accident scene photos. Using photographs to extract evidence: Camera matching: Camera matching uses accident scene photos that show various points of evidence. The technique uses CAD software to create a 3-dimensional model of the accident site and roadway surface.
All survey data and photos are then imported into a three-dimensional software package like 3D Studio Max. A virtual camera can then be positioned relative to the 3D roadway surface. Physical evidence is then mapped from the photos onto the 3D roadway to create a three-dimensional accident scene drawing. Using photographs to extract evidence: Photogrammetry: Photogrammetry is used to determine the three-dimensional geometry of an object at the accident scene from the original two-dimensional photos. The photographs can be used to extract evidence that may be lost after the accident scene is cleared. Photographs from several viewpoints are imported into software like PhotoModeler. The forensic engineer can then choose points common to each photo. The software will calculate the location of each point in a three-dimensional coordinate system. Using photographs to extract evidence: Rectification: Photographic rectification is also used to analyze evidence that may not have been measured at the accident scene. Two-dimensional rectification transforms a single photograph into a top-down view. Software like PC-Rect can be used to rectify a digital photograph.
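Two-dimensional rectification of the kind described above amounts to estimating a planar homography from known ground control points and re-projecting image coordinates into a top-down coordinate system. The sketch below is illustrative only and is not the PC-Rect algorithm; it assumes four hypothetical image points with measured real-world positions on the road plane and uses numpy's singular value decomposition.

```python
import numpy as np

def estimate_homography(image_pts, world_pts):
    """Estimate the 3x3 homography mapping image points (pixels) to points on
    the road plane (e.g. metres), using the direct linear transform with
    four or more correspondences."""
    rows = []
    for (x, y), (X, Y) in zip(image_pts, world_pts):
        rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    A = np.asarray(rows, dtype=float)
    # The homography is the null vector of A; take the smallest singular vector.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def rectify_point(H, x, y):
    """Map a single image point into the top-down road coordinate system."""
    X, Y, w = H @ np.array([x, y, 1.0])
    return X / w, Y / w

# Hypothetical correspondences: four pavement marks visible in the photo
# whose real-world positions were measured at the scene (metres).
image_pts = [(100, 700), (900, 690), (300, 400), (750, 395)]
world_pts = [(0.0, 0.0), (3.5, 0.0), (0.0, 20.0), (3.5, 20.0)]
H = estimate_homography(image_pts, world_pts)
print(rectify_point(H, 520, 540))   # e.g. position of a skid-mark end point
```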
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**A♯ (musical note)** A♯ (musical note): A♯ (A-sharp; also called la dièse) is the eleventh semitone of the solfège. In some countries (where B is known as H), it is also called B. This note lies a chromatic semitone above A and a diatonic semitone below B, thus being enharmonic to si bémol or B♭ (B-flat). When calculated in equal temperament with a reference of A above middle C as 440 Hz, the frequency of the A♯ above middle C is approximately 466.164 Hz. See pitch (music) for a discussion of historical variations in frequency. Scales: Common scales beginning on A♯
A♯ major: A♯ B♯ C𝄪 D♯ E♯ F𝄪 G𝄪 A♯
A♯ natural minor: A♯ B♯ C♯ D♯ E♯ F♯ G♯ A♯
A♯ harmonic minor: A♯ B♯ C♯ D♯ E♯ F♯ G𝄪 A♯
A♯ melodic minor ascending: A♯ B♯ C♯ D♯ E♯ F𝄪 G𝄪 A♯
A♯ melodic minor descending: A♯ G♯ F♯ E♯ D♯ C♯ B♯ A♯
Diatonic scales
A♯ Ionian: A♯ B♯ C𝄪 D♯ E♯ F𝄪 G𝄪 A♯
A♯ Dorian: A♯ B♯ C♯ D♯ E♯ F𝄪 G♯ A♯
A♯ Phrygian: A♯ B C♯ D♯ E♯ F♯ G♯ A♯
A♯ Lydian: A♯ B♯ C𝄪 D𝄪 E♯ F𝄪 G𝄪 A♯
A♯ Mixolydian: A♯ B♯ C𝄪 D♯ E♯ F𝄪 G♯ A♯
A♯ Aeolian: A♯ B♯ C♯ D♯ E♯ F♯ G♯ A♯
A♯ Locrian: A♯ B C♯ D♯ E F♯ G♯ A♯
Jazz melodic minor
A♯ ascending melodic minor: A♯ B♯ C♯ D♯ E♯ F𝄪 G𝄪 A♯
A♯ Dorian ♭2: A♯ B C♯ D♯ E♯ F𝄪 G♯ A♯
A♯ Lydian augmented: A♯ B♯ C𝄪 D𝄪 E𝄪 F𝄪 G𝄪 A♯
A♯ Lydian dominant: A♯ B♯ C𝄪 D𝄪 E♯ F𝄪 G♯ A♯
A♯ Mixolydian ♭6: A♯ B♯ C𝄪 D♯ E♯ F♯ G♯ A♯
A♯ Locrian ♮2: A♯ B♯ C♯ D♯ E F♯ G♯ A♯
A♯ altered: A♯ B C♯ D E F♯ G♯ A♯
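The quoted frequency follows directly from the equal-temperament relation f = 440 × 2^(n/12), where n is the number of semitones above A4 (the A above middle C). A minimal check, not taken from the source:

```python
def equal_temperament_freq(semitones_above_a4, a4=440.0):
    """Frequency of a note n equal-tempered semitones above A4 (A = 440 Hz)."""
    return a4 * 2 ** (semitones_above_a4 / 12)

# A-sharp above middle C is one semitone above A4.
print(round(equal_temperament_freq(1), 3))   # 466.164
```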
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antecedent (behavioral psychology)** Antecedent (behavioral psychology): An antecedent is a stimulus that cues an organism to perform a learned behavior. When an organism perceives an antecedent stimulus, it behaves in a way that maximizes reinforcing consequences and minimizes punishing consequences. This might be part of complex, interpersonal communication. The definition of an antecedent is a preceding event or a cause; in this case it is the event that causes the learned behavior to happen. Learned behavior and conditioning: A learned behavior is one that does not come from instinct; it is created by practice or experience. Learned behavior can be controlled by two systems, reflective or reflexive, which in turn create cognitive learning and habitual learning. Cognitive learning is influenced directly by the environment and evaluates it in order to acquire a particular behavior. An example of cognitive learning is riding a bike, where the environment (the changing road path, weather, turns, etc.) is constantly changing and you have to adjust to it. Learned behavior and conditioning: Habitual learning is formed through conditioning, whether voluntary or involuntary. Classical conditioning denotes when an organism creates reflexes based on past events. A reflex is a stimulus response that happens due to a biological response and is mediated by the nervous system. Habitual learning can then be a result of this reflex happening time after time, as we get used to the stimuli; this is where the antecedent comes in. Habitual learning uses strategies from past experiences to dictate how to behave in the present, e.g., continuing to ride a bike after initially learning how to. Both of these learning strategies can be a result of an antecedent. Learned behavior and conditioning: Classical conditioning was first discovered by Pavlov, who studied digestive reflexes in dogs; the results showed that different stimuli (different types of food) elicit different reflexes and responses (different compositions of saliva). He then discovered that the dogs salivated before they received the food, due to the antecedent. The antecedent became the bell that Pavlov rang before he fed the dogs, and the learned behavior became the salivation. Learned behavior and conditioning: On the other hand, operant conditioning is when we respond for stimuli, not to them. It is another form of social learning in which the consequence of a response makes us respond more, or more often. Learned behavior and conditioning: Variables: Antecedent stimuli (paired with reinforcing consequences) activate centers of the brain involved in motivation, while antecedent stimuli that have been paired with punishing consequences activate brain centers involved in fear. Antecedents play a different role in attempts to trigger positive and negative outcomes. The latter is particularly important when it comes to antecedents, as bad stimuli in the environment lead to behavioral consequences. It has been suggested that these stimuli that lead to learned behavior can be described by behavioral science principles. Reinforcement theory states that the consequences of behavior drive the behavior itself: positive behaviors are rewarded and negative behaviors are either ignored or punished. Learned behavior and conditioning: Evidence: Some scientific papers argue that there are two different types of antecedent variables. These two types of antecedent variables are referred to as discriminative stimuli and setting events.
Setting events differ from discriminative stimuli in that setting events are believed to have an effect on the stimulus-response relationship. It has been suggested that setting events focus on three categories of stimuli (biological, physical and social variables). Discriminative stimuli are found to be present "when a behavior is reinforced". The discriminative stimulus is believed to be the identifying event alerting the mind that a reinforcement will occur in exchange for a specific behavior. Another scientific paper states that antecedent variables can be proximal (things like financial stressors or job satisfaction), and conducted an experiment to see if these stimuli could induce relapse into alcohol problems. The theory here is that the learned behavior is the continuance of drinking, and this is performed in response to a stimulus such as losing a job. The antecedent here is a setting event, as it happens due to social variables in order to effect a response. Similarly, a scientific book states that culture is antecedent to behavior, but that culture can also have a direct or indirect effect on the behavior. A direct effect lines up with the theory of setting events as an antecedent variable, as the culture is a direct social stimulus that has an effect on the stimulus-response relationship. An indirect effect reinforces the theory of discriminative stimuli, as it is an identifying event that is one reason behind the learned behavior being performed. Stimuli that activate the "motivation" part of the brain have been tested through areas of competition in certain categories, for example tourism destinations. There are a few factors that can lead to competition changes in tourism, like hospitality, food selection, cleanliness, and more. These areas of concentration (resources, facilities, etc.) are the stimuli that would be considered the second variable, setting events. This type of competitiveness affects not only where tourists plan on visiting but also the employees who work in tourist towns. Businesses like gift shops, hotels, and restaurants depend on the flow of tourism to keep thriving. This makes businesses continuously improve and change their business practices to meet consumer demands. All of these variables change the behavior of all parties involved. Learned behavior and conditioning: Interventions: A number of studies have been done on preventing past learned behaviors using antecedent variables. One intervention discussed preventing bad behavior in classrooms as a positive alternative to punishment. This goes against reinforcement theory, which states that the consequence of the behavior drives the behavior. When it comes to behaviors in schools, the antecedent here (without intervention) could be a number of things: attention from the teacher or peers; an instruction from peers or teachers that the child does not want to follow; or communication from staff and students when the child in question has limited or no vocal language. Each of these antecedents caused a learned behavior that is unfavourable, and this article suggests some interventions to overcome the bad behavior. For example, in order to override antecedent 2, gain the student's attention and immediately request something (e.g., a high five), before praising them and providing positive reinforcement. This intervention fits in with the idea of classical conditioning, as the child is rewarded with positive affirmation when they complete a task.
Learned behavior and conditioning: A different study agrees that these antecedent interventions do not rely on reinforcement theory, and that they aim to reduce the probability of unwanted behavior occurring rather than punishing unwanted behavior with consequences. This article similarly agrees with another that setting events and discriminative stimuli are the two antecedent variables, and that both of these can be used in different ways in interventions. For example, behavior that happens due to a discriminative stimulus (like a hard mathematics test leading to a student destroying it and being sent to the principal's office) is likely to reoccur again and again (as the child got out of doing the test by performing the behavior). To counter this, the article suggests that the environment should be rearranged in some way so as not to provoke the individual. Changing the antecedent from a hard maths test to an easier or shorter one, or warning the child beforehand, had a positive effect on the behavior observed. There are still questions surrounding the role of antecedent interventions within society, as they are relatively new and not a lot is known about their applicability cross-culturally. However, it is evident that there is potential for antecedents to be used in behavioral interventions, and they have been shown to positively influence behaviors like self-injury and aggression.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Service-oriented architecture implementation framework** Service-oriented architecture implementation framework: Service-oriented architectures (SOA) are based on the notion of software services, which are high-level software components that include web services. Implementation of an SOA requires tools as well as run-time infrastructure software. This is collectively referred to as a service-oriented architecture implementation framework, or SOAIF. The SOAIF envisions a comprehensive framework that provides all the technology that an enterprise might need to build and run an SOA. An SOAIF includes both design-time and run-time capabilities as well as all the software functionality an enterprise needs to build and operate an SOA, including service-oriented tools, management, integration, modeling, security, and processes. As vendors race to provide possible solutions, three different approaches are emerging for integrating disparate, heterogeneous information and systems in the enterprise. These implementation frameworks should meet the requirements for loosely coupled, coarse-grained, asynchronous Services. Efficiency: Most packaged enterprise applications perform well in streamlining processes related to standard tasks. However, performance rapidly deteriorates when automating and streamlining customized processes that encompass multiple enterprise applications. The process is difficult, time-consuming, and expensive to implement and maintain. The SOAIF infrastructure addresses this issue by allowing the definition of any process in any network topology, spanning multiple enterprise boundaries. This is accomplished via a peer-to-peer messaging infrastructure with distributed security mechanisms that allow efficient data exchanges for easy implementation, while enabling each enterprise to enforce its own security policies. This allows an SOAIF to increase operational efficiency across the entire value chain. Application integration: Existing packaged application integration solutions are complex and require significant implementation effort, often including extensive manual coding for deployment purposes. An SOAIF provides native support for run-time deployment of services across the network and dramatically reduces the overall costs of application integration and deployment by automating these time-consuming processes. It also allows integration to be extended across business boundaries. Application development and deployment: In the traditional software development process, translating requirements into working distributed systems is both time-consuming and difficult, requiring several stages of manual development and deployment. This complex, error-prone task can be effectively streamlined using a higher-level, component-based SOAIF. The SOAIF incorporates tools that let processes developed using standards such as Business Process Execution Language (BPEL) be easily translated into distributed, high-level services, which are easier to develop, manipulate, and debug. These services are easily composed into implementation-level data flows without the user or developer having to track complex middleware concepts, such as topics or queues. Further, the implementation-level services can run on any machine across the network by virtue of the built-in dynamic deployment support the SOAIF provides.
The combination of service-oriented tools and built-in support for distributed debugging, run-time tracing and logging, and dynamic deployment allows the SOAIF to dramatically reduce the time taken to implement and deliver working processes. SOAIF requirements: An SOAIF is a general-purpose infrastructure platform that lets developers and business analysts create, deploy, manage, and change processes within and across the enterprise. SOAIFs have unique requirements at both the tools and infrastructure levels that are not typically provided by any single current technology or platform. These include: a distributed event-enabled architecture, flexibility via service-enabled processes, enterprise standards support (fault tolerance, reliability, and scalability), security in a distributed environment, visual process composition and monitoring, and rapid process changes. By addressing these requirements, an SOAIF lets users quickly respond to changes and integrate operations efficiently, regardless of platform, language, database, or application. Distributed event-enabled architecture: Enterprise processes are usually distributed across multiple applications and hardware/software systems. These processes are also event-based, in the sense that the subprocesses are linked by a series of events. For example, the depletion of inventory at a manufacturer may lead to an event-trigger that is automatically generated and propagated to one or more suppliers to replenish the depleted inventory items. Distributed event-enabled architecture: Most current BPM solutions control the processes through a centralized hub. Changes to applications, or additions of new applications, require modifications at the centralized hub. Further, all data exchanged between applications needs to traverse the central hub. This type of topology restriction is inefficient, inflexible, and leads to bottlenecks. To overcome this limitation, a framework that tries to integrate enterprise processes needs to be fully distributed across the network within the enterprise. The framework must also be symmetric, which implies that the same event-based infrastructure software and tools need to run on all machines within the enterprise. Enterprise standards support: Support for data exchange, messaging, and existing enterprise standards becomes essential in an SOAIF. Since content needs to be exchanged between partners, XML messages and documents will be the desired format. Further, since most businesses want to leverage existing infrastructures, an SOAIF needs to easily support multiple standards. Fault tolerance, reliability and scalability: An SOAIF should be able to offer a very high degree of reliability. The platform should support a broad range of processes that span an increasing number of applications, corporations, and partners. To eliminate single points of failure and to maximize performance, a fully distributed architecture becomes essential. Security in a distributed environment: An SOAIF needs to be fully distributed for maximum performance and scalability. In such a distributed computing environment, it becomes necessary to restrict the scope of interactions that partners can conduct with the corporate IT infrastructure. It becomes necessary to allow customization of the interactions of each partner by providing different security roles on a per-user and per-service basis.
This requires a security model that incorporates users, Web services and more general enterprise services, and that is fully distributed and fault-tolerant, like the SOAIF infrastructure itself. This security model needs to be based on existing standards and tools and should support certificate authentication at both the user and services level. Visual process composition: An SOAIF needs to provide a single dashboard with visibility into an organization's entire distributed computing environment. The platform should incorporate visual implementation-process-composition tools, together with infrastructure-level support to instantly deploy the modeled implementation-level processes across a distributed enterprise network. The visual composition tools need to be service-oriented in the sense of being able to directly manipulate higher-level, coarse-grained implementation processes as first-class objects. They should also provide a visual display of programming constructs and be able to map directly (and naturally) to deployable processes. A critical problem in deploying distributed systems is monitoring and debugging concurrently running processes. An SOAIF should provide native support for tracing, logging, and monitoring any process or service across the distributed environment. Process changes: Another challenge is responding to changing requirements. An SOAIF should provide support for incremental, on-the-fly modification of the service-based flows that implement processes. This is among the most critical features expected from an SOAIF, since it lets analysts visually change and instantly redeploy processes to address dynamic requirements. Such changes are implemented within an SOAIF by abstracting all concepts relating to lower-level middleware at the tools and applications levels. Users simply specify that a service be replaced by another running service (often on another machine); the SOAIF dynamically reroutes data to the new service by setting up new underlying middleware constructs (such as topics and queues) on the fly. This allows the implementation to be changed without stopping the current process, in much the same way as hardware is upgraded on a mainframe system without interruption of operations. SOAIF components: Essential elements of an SOAIF include design-time and run-time infrastructure, together with service-oriented tools for deploying distributed processes and implementation flows. Enterprise service bus: The core infrastructure of an SOAIF is typically provided by an enterprise service bus (ESB), which addresses the challenges of composing, deploying, and managing distributed, service-based enterprise applications. The ESB incorporates a standards-based, enterprise-class messaging backbone, together with enhanced systems connectivity using web services, Java EE, .NET Framework, and other standards. SOAIF components: One approach that contributes to an optimal SOA implementation is the use of an enterprise service bus (ESB) to provide an infrastructural element to distributed Services on the network. The ESB approach to integration considers systems as discrete, distributed Services that connect to each other via an asynchronous, message-oriented communications infrastructure. The message-oriented infrastructure allows loosely coupled, document-oriented exchanges between independent systems. SOAIF components: ESBs provide the critical infrastructure components that simplify and scale integration approaches.
ESBs do not, however, provide the required integration to meet high-level business requirements. ESBs also do not provide guarantees of loose coupling and coarse granularity to meet evolving Service-oriented needs. Implementing ESBs to meet SOA requirements requires the addition of extra functionality to compose fine-grained atomic Services into coarse-grained business Services and to provide policy-driven, managed, and secure Service interactions. An ESB links individual enterprises together for extended process efficiency across the supply chain, allowing them to become more flexible and adaptable to rapidly changing requirements. The ESB lets an enterprise leverage its previous investments by supporting the deployment of processes over existing software and hardware infrastructure. As the core, underlying infrastructure of an SOAIF, ESBs offer several unique business and technical advantages: support for enterprise standards; fault tolerance, scalability, and reliability; service-based tools; easy process deployment and changes; component-level security; and run-time monitoring, tracing, and logging. SOAIF components: Business process management Business process management (BPM) considers systems and IT assets as activities or tasks that participate in well-coordinated and centrally orchestrated business processes. Traditionally, the challenge of BPM is that while it is possible to construct processes that achieve integration goals, enterprises typically use BPM tools only at design time, modeling processes as they used to be or processes as they should be, but rarely processes as they actually are in the IT environment. SOAIF components: So, while BPM solutions can craft orchestrated processes that are composed of fine-grained Services, they do not contain the runtime environment necessary for loosely coupled, asynchronous Service interactions. At the very least, a BPM solution must be used in conjunction with a loosely coupled integration approach to make the business processes runtime activities that coordinate integration. Thus, by themselves, BPM solutions are not sufficient to meet SOA requirements. SOAIF components: Service-oriented integration The service-oriented integration (SOI) approach uses the architectural guiding principles of service orientation to construct an ecosystem of Services that business users can dynamically combine and compose into higher-level processes that meet continuously evolving and changing business requirements. SOI approaches transcend brittle, tightly coupled EAI and business-to-business integration approaches by mandating a separation of the consumer of each Service from the producer of that Service, thus enforcing the critical aspect of loose coupling that is required to allow an integration scenario to evolve automatically to meet business requirements. SOAIF components: SOI provides no guidance on how to build the right Services to meet current business requirements, nor does it provide a means to execute Services in the most effective, scalable manner to guarantee long-running interactions. Enterprise standards support: ESBs implement standardized interfaces for communication, connectivity, transformation, security, and portability. Supported standards include JMS for communication; web services, Java EE, and .NET for connectivity to various systems; XSLT and XQuery for transformation; and LDAP and TLS for security. Modern ESB implementations typically support development in multiple languages.
This, combined with the inherently portable ESB infrastructure, makes the ESB a true multi-language, multiplatform enterprise backbone and an ideal foundation for an SOAIF. Fault tolerance, scalability and reliability: Several modern ESBs implement a symmetric, distributed architecture in which peer-messaging servers run on multiple nodes of an enterprise network, providing a highly scalable, reliable distributed messaging platform with no single point of failure. Modern ESB architectures combine the benefits of centralized control with distributed, parallel data flow, giving application developers the ultimate flexibility in defining the network topology of choice to route data directly and optimally between services. Ensuring that data flowing between services does not always have to traverse a central point in the network optimizes peer-to-peer network performance. For instance, if one has a process that requires data exchanges between New York and Boston, as well as between San Francisco and Los Angeles, then the two flows of data don’t necessarily have to traverse a messaging hub located in Chicago (which is often the case in most enterprise or cross-enterprise deployments). Instead, efficiency dictates setting up direct data flow connections between peer nodes on a network. Service-based tools: Service-oriented tools enable composition of distributed applications from one or more services (Web services and more general enterprise services), each of which typically runs in a separate process. Services may be written in any language and communicate with each other via XML messages. This allows service-oriented tools within an SOAIF to compose flexible, easy-to-modify systems. Easy process deployment and changes: Service-oriented processes deployed in an SOAIF are composed of coarse-grained Web services ideally suited for easy change and replacement. By abstracting the details of message routing from service implementations, service-oriented tools decouple and enable running processes to be modified on-the-fly by simple service replacement or addition. The tools framework within an SOAIF supports the run-time deployment of services, allowing changed processes to be deployed instantly across the network. Our experience is that this significantly reduces solution deployment costs compared with traditional, broker-based solutions. Component-level security: The ESB defines a comprehensive security system, giving administrators full control over which services are executed where. ESBs provide the ability to set several security attributes for each service and provide administrative tools to configure security settings on the distributed ESB infrastructure across the network. Run-time monitoring, tracing and logging: ESBs include native service-level support for run-time monitoring, tracing, and logging. All services can be monitored instantly, using visual tools within the SOAIF. Trace levels can be dynamically changed within existing services running across the network and debug logs can be routed to software tools on any node. These features greatly simplify the development, deployment, and debugging of distributed applications running across the SOAIF.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Link aggregation** Link aggregation: In computer networking, link aggregation is the combining (aggregating) of multiple network connections in parallel by any of several methods. Link aggregation increases total throughput beyond what a single connection could sustain, and provides redundancy where all but one of the physical links may fail without losing connectivity. A link aggregation group (LAG) is the combined collection of physical ports. Link aggregation: Other umbrella terms used to describe the concept include trunking, bundling, bonding, channeling or teaming. Implementation may follow vendor-independent standards such as Link Aggregation Control Protocol (LACP) for Ethernet, defined in IEEE 802.1AX or the previous IEEE 802.3ad, but also proprietary protocols. Motivation: Link aggregation increases the bandwidth and resilience of Ethernet connections. Motivation: Bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased tenfold each generation: 10 Mbit/s, 100 Mbit/s, 1,000 Mbit/s, 10,000 Mbit/s. If one started to bump into bandwidth ceilings, then the only option was to move to the next generation, which could be cost-prohibitive. An alternative solution, introduced by many of the network manufacturers in the early 1990s, is to use link aggregation to combine two physical Ethernet links into one logical link. Most of these early solutions required manual configuration and identical equipment on both sides of the connection. There are three single points of failure inherent to a typical port-cable-port connection, in either a computer-to-switch or a switch-to-switch configuration: the cable itself or either of the ports the cable is plugged into can fail. Multiple logical connections can be made, but many of the higher-level protocols were not designed to fail over completely seamlessly. Combining multiple physical connections into one logical connection using link aggregation provides more resilient communications. Architecture: Network architects can implement aggregation at any of the lowest three layers of the OSI model. Examples of aggregation at layer 1 (physical layer) include power line (e.g. IEEE 1901) and wireless (e.g. IEEE 802.11) network devices that combine multiple frequency bands. OSI layer 2 (data link layer, e.g. Ethernet frame in LANs or multi-link PPP in WANs, Ethernet MAC address) aggregation typically occurs across switch ports, which can be either physical ports or virtual ones managed by an operating system. Aggregation at layer 3 (network layer) in the OSI model can use round-robin scheduling, hash values computed from fields in the packet header, or a combination of these two methods. Architecture: Regardless of the layer on which aggregation occurs, it is possible to balance the network load across all links. However, in order to avoid out-of-order delivery, not all implementations take advantage of this. Most methods provide failover as well. Combining can either occur such that multiple interfaces share one logical address (i.e. IP) or one physical address (i.e. MAC address), or it allows each interface to have its own address. The former requires that both ends of a link use the same aggregation method, but has performance advantages over the latter.
Channel bonding is differentiated from load balancing in that load balancing divides traffic between network interfaces on a per-network-socket (layer 4) basis, while channel bonding implies a division of traffic between physical interfaces at a lower level, either on a per-packet (layer 3) or a per-data-link (layer 2) basis. IEEE link aggregation: Standardization process By the mid-1990s, most network switch manufacturers had included aggregation capability as a proprietary extension to increase bandwidth between their switches. Each manufacturer developed its own method, which led to compatibility problems. The IEEE 802.3 working group formed a study group to create an interoperable link layer standard (i.e. encompassing both the physical and data-link layers) in a November 1997 meeting. The group quickly agreed to include an automatic configuration feature which would add in redundancy as well. This became known as Link Aggregation Control Protocol (LACP). IEEE link aggregation: 802.3ad As of 2000, most gigabit channel-bonding schemes used the IEEE standard of link aggregation, which was formerly clause 43 of the IEEE 802.3 standard added in March 2000 by the IEEE 802.3ad task force. Nearly every network equipment manufacturer quickly adopted this joint standard over their proprietary standards. IEEE link aggregation: 802.1AX The 802.3 maintenance task force report for the 9th revision project in November 2006 noted that certain 802.1 layers (such as 802.1X security) were positioned in the protocol stack below link aggregation, which was defined as an 802.3 sublayer. To resolve this discrepancy, the 802.3ax (802.1AX) task force was formed, resulting in the formal transfer of the protocol to the 802.1 group with the publication of IEEE 802.1AX-2008 on 3 November 2008. IEEE link aggregation: Link Aggregation Control Protocol Within the IEEE Ethernet standards, the Link Aggregation Control Protocol (LACP) provides a method to control the bundling of several physical links together to form a single logical link. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to its peer, a directly connected device that also implements LACP. IEEE link aggregation: LACP features and practical examples: the maximum number of bundled ports allowed in the port channel usually ranges from 1 to 8; LACP packets are sent with the multicast group MAC address 01:80:C2:00:00:02; during the LACP detection period, LACP packets are transmitted every second; a keep-alive mechanism for link members defaults to slow (30 s), with a fast option (1 s); and a selectable load-balancing mode is available in some implementations. LACP mode: Active enables LACP unconditionally; Passive enables LACP only when an LACP device is detected (this is the default state). Advantages over static configuration Failover occurs automatically: When a link has an intermediate failure, for example in a media converter between the devices, a peer system may not perceive any connectivity problems. With static link aggregation, the peer would continue sending traffic down the link, causing the connection to fail. IEEE link aggregation: Dynamic configuration: The device can confirm that the configuration at the other end can handle link aggregation. With static link aggregation, a cabling or configuration mistake could go undetected and cause undesirable network behavior. IEEE link aggregation: Practical notes LACP works by sending frames (LACPDUs) down all links that have the protocol enabled.
If it finds a device on the other end of a link that also has LACP enabled, that device will independently send frames along the same links in the opposite direction enabling the two units to detect multiple links between themselves and then combine them into a single logical link. LACP can be configured in one of two modes: active or passive. In active mode, LACPDUs are sent 1 per second along the configured links. In passive mode, LACPDUs are not sent until one is received from the other side, a speak-when-spoken-to protocol. Proprietary link aggregation: In addition to the IEEE link aggregation substandards, there are a number of proprietary aggregation schemes including Cisco's EtherChannel and Port Aggregation Protocol, Juniper's Aggregated Ethernet, AVAYA's Multi-Link Trunking, Split Multi-Link Trunking, Routed Split Multi-Link Trunking and Distributed Split Multi-Link Trunking, ZTE's Smartgroup, Huawei's Eth-Trunk, and Connectify's Speedify. Most high-end network devices support some form of link aggregation. Software-based implementations – such as the *BSD lagg package, Linux bonding driver, Solaris dladm aggr, etc. – exist for many operating systems. Linux drivers: The Linux bonding driver provides a method for aggregating multiple network interface controllers (NICs) into a single logical bonded interface of two or more so-called (NIC) slaves. The majority of modern Linux distributions come with a Linux kernel which has the Linux bonding driver integrated as a loadable kernel module and the ifenslave (if = [network] interface) user-level control program pre-installed. Donald Becker programmed the original Linux bonding driver. It came into use with the Beowulf cluster patches for the Linux kernel 2.0. Linux drivers: Modes for the Linux bonding driver (network interface aggregation modes) are supplied as parameters to the kernel bonding module at load time. They may be given as command-line arguments to the insmod or modprobe commands, but are usually specified in a Linux distribution-specific configuration file. The behavior of the single logical bonded interface depends upon its specified bonding driver mode. The default parameter is balance-rr. Linux drivers: Round-robin (balance-rr) Transmit alternate network packets in sequential order from the first available NIC slave through the last. This mode provides load balancing and fault tolerance. This mode can cause congestion control issues due to the packet reordering it can introduce. Active-backup (active-backup) Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to simplify forwarding in the network switch. This mode provides fault tolerance. Linux drivers: XOR (balance-xor) Transmit network packets based on a hash of the packet's source and destination. The default algorithm only considers MAC addresses (layer2). Newer versions allow selection of additional policies based on IP addresses (layer2+3) and TCP/UDP port numbers (layer3+4). This selects the same NIC slave for each destination MAC address, IP address, or IP address and port combination, respectively. Single connections will have guaranteed in order packet delivery and will transmit at the speed of a single NIC. This mode provides load balancing and fault tolerance. Linux drivers: Broadcast (broadcast) Transmit network packets on all slave network interfaces. This mode provides fault tolerance. 
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP) Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification. This mode is similar to the XOR mode above and supports the same balancing policies. The link is set up dynamically between two LACP-supporting peers. Linux drivers: Adaptive transmit load balancing (balance-tlb) Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Linux drivers: Adaptive load balancing (balance-alb) includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface, such that different network peers use different MAC addresses for their network packet traffic. The Linux Team driver provides an alternative to the bonding driver. The main difference is that the Team driver's kernel part contains only essential code, while the rest of the code (link validation, LACP implementation, decision making, etc.) is run in userspace as part of the teamd daemon. Usage: Network backbone Link aggregation offers an inexpensive way to set up a high-capacity backbone network that transfers multiple times more data than any single port or device can deliver. Link aggregation also allows the network's backbone speed to grow incrementally as demand on the network increases, without having to replace everything and deploy new hardware. Usage: Most backbone installations install more cabling or fiber optic pairs than is initially necessary. This is done because labor costs are higher than the cost of the cable, and running extra cable reduces future labor costs if networking needs change. Link aggregation can allow the use of these extra cables to increase backbone speeds for little or no extra cost if ports are available. Usage: Order of frames When balancing traffic, network administrators often wish to avoid reordering Ethernet frames. For example, TCP suffers additional overhead when dealing with out-of-order packets. This goal is approximated by sending all frames associated with a particular session across the same link. Common implementations use L2 or L3 hashes (i.e. based on the MAC or the IP addresses), ensuring that the same flow is always sent via the same physical link, as illustrated in the sketch below. However, this may not provide even distribution across the links in the trunk when only a single or very few pairs of hosts communicate with each other, i.e. when the hashes provide too little variation. It effectively limits the client bandwidth in aggregate. In the extreme, one link is fully loaded while the others are completely idle, and aggregate bandwidth is limited to this single member's maximum bandwidth. For this reason, even load balancing and full utilization of all trunked links are almost never reached in real-life implementations.
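The hash-based slave selection described above (balance-xor and the layer2, layer2+3 and layer3+4 policies) can be illustrated with a simplified sketch. This is not the Linux kernel's exact formula; the field choices and the final modulo are only meant to show why one flow always lands on one link, which preserves frame order but can leave other links idle when few flows exist.

```python
# Simplified illustration of hash-based slave selection in a bonded link.
# Not the Linux bonding driver's exact algorithm; it only shows flow pinning.
import hashlib


def select_slave(src_mac: str, dst_mac: str, n_slaves: int,
                 src_ip: str = "", dst_ip: str = "",
                 src_port: int = 0, dst_port: int = 0,
                 policy: str = "layer2") -> int:
    """Map a flow to a slave index according to a hash policy."""
    if policy == "layer2":
        key = f"{src_mac}{dst_mac}"
    elif policy == "layer2+3":
        key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}"
    elif policy == "layer3+4":
        key = f"{src_ip}{dst_ip}{src_port}{dst_port}"
    else:
        raise ValueError(f"unknown policy: {policy}")
    digest = hashlib.sha256(key.encode()).digest()
    return digest[0] % n_slaves  # same flow -> same slave, preserving frame order


if __name__ == "__main__":
    # With a layer2 policy and a single pair of hosts, every packet of the
    # flow is pinned to one slave, so the other links in the trunk stay idle.
    for _ in range(3):
        print(select_slave("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", n_slaves=4))
```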
Usage: Use on network interface cards NICs trunked together can also provide network links beyond the throughput of any one single NIC. For example, this allows a central file server to establish an aggregate 2-gigabit connection using two 1-gigabit NICs teamed together. Note that the data signaling rate will still be 1 Gbit/s, which can be misleading depending on methodologies used to test throughput after link aggregation is employed. Usage: Microsoft Windows Microsoft Windows Server 2012 supports link aggregation natively. Previous Windows Server versions relied on manufacturer support of the feature within their device driver software. Intel, for example, released Advanced Networking Services (ANS) to bond Intel Fast Ethernet and Gigabit cards. Nvidia supports teaming with their Nvidia Network Access Manager/Firewall Tool. HP has a teaming tool for HP-branded NICs which supports several modes of link aggregation including 802.3ad with LACP. In addition, there is a basic layer-3 aggregation that allows servers with multiple IP interfaces on the same network to perform load balancing, and for home users with more than one internet connection, to increase connection speed by sharing the load on all interfaces. Broadcom offers advanced functions via Broadcom Advanced Control Suite (BACS), via which the teaming functionality of BASP (Broadcom Advanced Server Program) is available, offering 802.3ad static LAGs, LACP, and "smart teaming" which doesn't require any configuration on the switches to work. It is possible to configure teaming with BACS with a mix of NICs from different vendors as long as at least one of them is from Broadcom and the other NICs have the required capabilities to support teaming. Usage: Linux and UNIX Linux, FreeBSD, NetBSD, OpenBSD, macOS, OpenSolaris and commercial Unix distributions such as AIX implement Ethernet bonding at a higher level and, as long as the NIC is supported by the kernel, can deal with NICs from different manufacturers or using different drivers. Virtualization platforms Citrix XenServer and VMware ESX have native support for link aggregation. XenServer offers both static LAGs as well as LACP. vSphere 5.1 (ESXi) supports both static LAGs and LACP natively with their virtual distributed switch. Microsoft's Hyper-V does not offer link aggregation support from the hypervisor level, but the above-mentioned methods for teaming under Windows apply to Hyper-V. Limitations: Single switch With the modes balance-rr, balance-xor, broadcast and 802.3ad, all physical ports in the link aggregation group must reside on the same logical switch, which, in most common scenarios, will leave a single point of failure when the physical switch to which all links are connected goes offline. The modes active-backup, balance-tlb, and balance-alb can also be set up with two or more switches. But after failover (like all other modes), in some cases, active sessions may fail (due to ARP problems) and have to be restarted. Limitations: However, almost all vendors have proprietary extensions that resolve some of this issue: they aggregate multiple physical switches into one logical switch. Nortel's split multi-link trunking (SMLT) protocol allows multiple Ethernet links to be split across multiple switches in a stack, preventing any single point of failure and additionally allowing all switches to be load balanced across multiple aggregation switches from the single access stack.
These devices synchronize state across an Inter-Switch Trunk (IST) such that they appear to the connecting (access) device to be a single device (switch block) and prevent any packet duplication. SMLT provides enhanced resiliency with sub-second failover and sub-second recovery for all speed trunks while operating transparently to end-devices. Limitations: Multi-chassis link aggregation group provides similar features in a vendor-nonspecific manner. To the connected device, the connection appears as a normal link aggregated trunk. The coordination between the multiple sources involved is handled in a vendor-specific manner. Limitations: Same link speed In most implementations, all the ports used in an aggregation consist of the same physical type, such as all copper ports (10/100/1000BASE-T), all multi-mode fiber ports, or all single-mode fiber ports. However, all the IEEE standard requires is that each link be full duplex and all of them have an identical speed (10, 100, 1,000 or 10,000 Mbit/s). Limitations: Many switches are PHY independent, meaning that a switch could have a mixture of copper, SX, LX, LX10 or other GBIC/SFP modular transceivers. While maintaining the same PHY is the usual approach, it is possible to aggregate a 1000BASE-SX fiber for one link and a 1000BASE-LX (longer, diverse path) for the second link. One path may have a longer propagation time, but since most implementations keep a single traffic flow on the same physical link (using a hash of either MAC addresses, IP addresses, or IP/transport-layer port combinations as index) this doesn't cause problematic out-of-order delivery. Limitations: Ethernet aggregation mismatch Aggregation mismatch refers to not matching the aggregation type on both ends of the link. Some switches do not implement the 802.1AX standard but support static configuration of link aggregation. Therefore, link aggregation between similarly statically configured switches may work but will fail between a statically configured switch and a device that is configured for LACP. Examples: Ethernet On Ethernet interfaces, channel bonding requires assistance from both the Ethernet switch and the host computer's operating system, which must stripe the delivery of frames across the network interfaces in the same manner that I/O is striped across disks in a RAID 0 array. For this reason, some discussions of channel bonding also refer to Redundant Array of Inexpensive Nodes (RAIN) or to redundant array of independent network interfaces. Examples: Modems In analog modems, multiple dial-up links over POTS may be bonded. Throughput over such bonded connections can come closer to the aggregate bandwidth of the bonded links than can throughput under routing schemes which simply load-balance outgoing network connections over the links. DSL Similarly, multiple DSL lines can be bonded to give higher bandwidth; in the United Kingdom, ADSL is sometimes bonded to give, for example, 512 kbit/s upload bandwidth and 4 Mbit/s download bandwidth in areas that only have access to 2 Mbit/s bandwidth. DOCSIS Under the DOCSIS 3.0 and 3.1 specifications for data over cable TV (CATV) systems, multiple channels may be bonded. Under DOCSIS 3.0, up to 32 downstream and 8 upstream channels may be bonded. These are typically 6 or 8 MHz wide. DOCSIS 3.1 defines more complicated arrangements involving aggregation at the level of subcarriers and larger notional channels.
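As a rough back-of-the-envelope check of what DOCSIS 3.0 channel bonding buys, the snippet below multiplies the bonded channel count by an assumed per-channel throughput. The figure of roughly 38 Mbit/s of usable downstream throughput per 6 MHz 256-QAM channel is a commonly cited approximation, not a value taken from the text above.

```python
# Rough DOCSIS 3.0 downstream capacity estimate (illustrative only).
# Assumes ~38 Mbit/s usable per 6 MHz 256-QAM channel, a commonly cited figure.
USABLE_MBIT_PER_CHANNEL = 38
BONDED_CHANNELS = 32  # DOCSIS 3.0 allows bonding up to 32 downstream channels

aggregate_mbit = USABLE_MBIT_PER_CHANNEL * BONDED_CHANNELS
print(f"~{aggregate_mbit} Mbit/s (~{aggregate_mbit / 1000:.1f} Gbit/s) downstream")
```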
Wireless Broadband Broadband bonding is a type of channel bonding that refers to aggregation of multiple channels at OSI layer four or above. Channels bonded can be wired links such as a T-1 or DSL line. Additionally, it is possible to bond multiple cellular links for an aggregated wireless bonded link. Examples: Previous bonding methodologies resided at lower OSI layers, requiring coordination with telecommunications companies for implementation. Broadband bonding, because it is implemented at higher layers, can be done without this coordination. Commercial implementations of broadband channel bonding include: Wistron AiEdge Corporation's U-Bonding Technology; Mushroom Networks' Broadband Bonding Service; Connectify's Speedify fast bonding VPN, a software app for multiple platforms (PC, Mac, iOS and Android); Peplink's SpeedFusion Bonding Technology; Viprinet's Multichannel VPN Bonding Technology; Elsight's Multichannel Secure Data Link; Synopi's Natiply Internet Bonding Technology; and ComBOX Networks' multi-WAN bonding as a service. Wi-Fi On 802.11 (Wi-Fi), channel bonding is used in Super G technology, referred to as 108 Mbit/s. It bonds two channels of standard 802.11g, which has a 54 Mbit/s data signaling rate. Examples: On IEEE 802.11n, a mode with a channel width of 40 MHz is specified. This is not channel bonding, but a single channel with double the older 20 MHz channel width, thus using two adjacent 20 MHz bands. This allows direct doubling of the PHY data rate from a single 20 MHz channel, but the MAC and user-level throughput also depend on other factors, so throughput may not double.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Infantile acne** Infantile acne: Infantile acne is a form of acneiform eruption that occurs in infants from 6 weeks to 1 year of age. Typical symptoms include inflammatory and noninflammatory lesions, papules and pustules, most commonly present on the face. No cause of infantile acne has been established, but it may be caused by increased sebaceous gland secretions due to elevated androgens, genetics and the fetal adrenal gland causing increased sebum production. Infantile acne can resolve by itself by age 1 or 2. However, treatment options include topical benzoyl peroxide, topical retinoids and topical antibiotics in most cases. Signs and symptoms: Infantile acne has a later onset and is less commonly seen than neonatal acne, occurring between 6 weeks and 1 year of age. It is also more commonly seen in boys than girls. Infantile acne tends to be more inflammatory and widespread than neonatal acne. It presents with both open and closed comedones, papules and pustules. Cystic lesions are uncommon. Scarring can occur in severe cases. Very rarely, facial conglobate acne, a severe form of acne that involves extensive inflammation and nodule formation, can develop and lead to extensive scarring. Lesions occur most commonly on the cheeks but can also appear on the chest and back. More severe occurrences may lead to development of more severe forms of acne in adolescence. Causes: The cause of infantile acne is not known for certain. Research into its higher occurrence in boys than girls implies that higher than normal levels of testicular androgens can cause increased sebaceous gland secretions. During the first 6–12 months of age, there is increased sebum production stimulated by luteinizing hormone (LH) and testosterone of testicular origin that stops after this period until adrenarche. Girls do not experience this. Genetics and family history play a role in influencing sebaceous gland size and activity, pore size and inflammation, which can increase the risk of onset and presentation of infantile acne. It is suggested that the fetal adrenal gland, along with testicular androgen, could be the cause of infantile acne. During the neonatal period, there is increased sebum production through an enlarged zona reticularis (an androgen-producing area) on the fetal adrenal gland that gradually decreases to very low levels at around 1 year of age, coinciding with when infantile acne tends to resolve. The fetal adrenal gland produces androgens such as dehydroepiandrosterone (DHEA) and dehydroepiandrosterone sulfate (DHEAS) that stimulate sebaceous glands. Diagnosis: Diagnosis is based on presentation of comedones primarily on the face of an infant of 6–12 months of age. Severity can be mild, moderate or severe and can be determined from the presence and distribution of comedones and inflammatory lesions such as papules and nodules. A physical examination may follow, with particular attention paid to signs of an endocrine disorder, including normal growth and weight, testicular growth, breast development, hirsutism or growth of pubic hair. A hormonal workup may not be necessary unless one of these abnormalities is present, in which case a workup of testosterone, DHEA, DHEAS, LH and FSH, or referral to a pediatric endocrinologist, may be recommended. Diagnosis: Differential diagnosis It is important to differentiate infantile acne from other forms of acneiform eruptions. Acne venenata infantum is a form of acne characterized by comedone formation and induced by chemical irritants on the skin.
This can include comedogenic products such as lotions, ointments, creams and oils. Upon discontinuation of these products, lesions will heal within 6–8 weeks. Other conditions that should be considered include periorificial dermatitis, keratosis pilaris, sebaceous hyperplasia and infections. Pyodermas and panniculitis should be considered in severe cases of inflammatory acne or in cases of acne conglobata, while hyperandrogenism should be ruled out in cases of persistent infantile acne. Treatment: Infantile acne is a self-limiting condition that resolves by itself within 6–12 months of occurrence, or occasionally by ages 4–5, and does not require treatment in most cases, but topical therapies can be used, especially in more severe cases. The goals of treatment are to reduce sebum production, prevent formation of microcomedones, suppress the growth of bacteria and reduce inflammation. As there are no US FDA-approved medications for treatment of infantile acne, due to a lack of high-quality trials in patients under 9, recommendations for treatment are based on observations in adult and adolescent populations. Treatment: Benzoyl peroxide Benzoyl peroxide (BPO) is first-line for mild cases of infantile acne due to its safety and effectiveness. BPO concentrates within cells of sebaceous follicles, where it generates free radicals to oxidize proteins in bacteria such as P. acnes. This leads to bacterial death. It additionally works as a mild comedolytic and anti-inflammatory. No bacterial resistance has been reported from BPO usage, and in fact it can help limit resistance to antibiotics. Common side effects include burning, stinging, scaling and dryness at the site of application, and can be managed by reducing the quantity applied or the frequency of application, using a less potent product, and using non-comedogenic moisturizers. Treatment: Retinoids Topical retinoids, both alone and in combination, are also first-line for treating mild to moderate cases of acne. Safety and efficacy have been demonstrated in individuals 12–18 years of age. Retinoids prevent formation of comedones and promote comedolysis by binding to retinoic receptors and normalizing growth of keratinocytes. Tretinoin and adapalene have demonstrated efficacy in reducing inflammation. Adapalene, as a newer retinoid, is thought to be more effective and better tolerated than others in this class. As a topical product, it has side effects similar to those of topical BPO, namely burning, stinging, drying and scaling, which can be managed in much the same way. Reducing the potency of initial treatment, using a non-comedogenic moisturizer, and applying a small amount on the whole face rather than spot-treating may reduce the severity of side effects. In severe cases, oral isotretinoin may be recommended to prevent scarring. Dosing ranges from 0.2 mg/kg/day to 2 mg/kg/day for several months to over a year, with careful monitoring. Monitoring may include a complete blood count with differential and baseline liver function and lipid tests, followed by routine liver function and lipid tests while on treatment. Treatment: Antibiotics Topical antibiotics Topical antibiotics are often used in cases of inflammatory infantile acne, in combination with another topical treatment to prevent the emergence of antibiotic resistance, especially for periods longer than a few weeks. Clindamycin and erythromycin are the most commonly prescribed topical antibiotics for acne, with coverage for S. aureus and P. acnes.
These bacteriostatic antibiotics interfere with bacterial protein synthesis, preventing the formation by these bacteria of free fatty acids that cause inflammation. Treatment: Oral antibiotics In severe cases of infantile acne, especially with the presence of nodules and cysts with risks of scarring, oral antibiotics may be used. First-line therapy is erythromycin, with sulfamethoxazole-trimethoprim as a secondary choice in cases of P. acnes resistance. It is suggested not to use tetracyclines due to the risk of permanently staining teeth in children under the age of 7. Side effects of erythromycin include gastrointestinal upset. There have, however, been concerns about resistant P. acnes due to widespread usage of antibiotics, and therefore steps to minimize resistance, such as use in combination with BPO, are highly recommended by experts. Epidemiology: Infantile acne affects around 2% of children, with a higher occurrence in males than females. Of around 9.2 million visits to outpatient care for pediatric acne in the United States from 2000 to 2010, 3%, or about 276,000 visits, were due to neonatal and infantile acne.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Catch and release** Catch and release: Catch and release is a practice within recreational fishing where, after capture, a quick measurement and weighing of the fish is often performed, followed by posed photography as proof of the catch, and then the fish is unhooked and returned live to the water. Using barbless hooks, it is often possible to release the fish without removing it from the water (a slack line is frequently sufficient). Catch and release: Catch and release is a conservation practice developed to prevent overharvest of fish stocks in the face of growing human populations, mounting ecological pressure, increasingly effective fishing tackle and techniques, inadequate fishing regulations and enforcement, and habitat degradation. Sports fishers have been practicing catch and release for decades, including with some highly pressured fish species. History: In the United Kingdom, catch and release has been performed for more than a century by coarse fishermen in order to prevent target species from disappearing in heavily fished waters. Since the latter part of the 20th century, many salmon and sea trout rivers have been converted to complete or partial catch and release. In Scotland, the River Dee operates a full catch and release policy for salmon, grilse and sea trout. In the United States, catch and release was first introduced as a management tool in the state of Michigan in 1952 as an effort to reduce the cost of stocking hatchery-raised trout. Anglers fishing for fun rather than for food accepted the idea of releasing the fish while fishing in "no-kill" zones. Conservationists have advocated catch and release as a way to ensure sustainability and to avoid overfishing of fish stocks. Lee Wulff, a New York-based fly angler, author and film maker, promoted catch and release as early as 1936 with the phrase "Game fish are too valuable to be caught only once." Don Martinez, a West Yellowstone, Montana fly shop owner, promoted catch and release in his 1930s–40s newsletters sent to Eastern anglers. In Australia, catch and release caught on slowly, with some pioneers practicing it in the 1960s, and the practice gradually became more widespread in the 1970s and 1980s. Catch and release is now widely used to conserve—and indeed is critical in conserving—vulnerable fish species like the large, long-lived native freshwater Murray Cod and the prized, slowly growing, heavily fished Australian bass, heavily fished coastal species like Dusky Flathead, and prized gamefish like striped marlin. In Ireland, catch and release has been used as a conservation tool for Atlantic salmon and sea trout fisheries since 2003. A number of fisheries now have mandatory catch and release regulations. Catch and release for coarse fish has been used by sport anglers for as long as these species have been fished for in Ireland. However, catch and release for Atlantic salmon has required a huge turnabout in how many anglers viewed the salmon angling resource. To encourage anglers to practice catch and release in all fisheries, a number of government-led incentives have been implemented. In Canada, catch and release is mandatory for some species. Canada also requires in some cases the use of barbless hooks to facilitate release and minimize injury. In Switzerland and Germany, catch and release fishing is considered inhumane and is now banned. In Germany, the Animal Welfare Act states that "no-one may cause an animal pain, suffering or harm without good reason".
This leaves no legal basis for catch and release due to its argued inherent lack of "good reason", and thus personal fishing is solely allowed for immediate food consumption. Additionally, it is against the law to release fish back into the water if they are above minimum size requirements and aren't a protected species or in closed season. History: In 2011, the National Park Service in Yellowstone National Park began reversing decades of regulation that promoted catch and release and other techniques that protected fish populations. In the name of native fish conservation, they began mandatory kill regulations on rainbow and brook trout in the Lamar River drainage and encouraged unlimited taking and disposal of non-native species, including brown trout in some park waters. Techniques: Into the 21st century, there has been an emphasis on the development and refinement of science-based practices to increase the likelihood that released fish will survive (e.g., see research by Steven J. Cooke). That work led to the development of the UN FAO Technical Guidelines for Recreational Fisheries. Effective catch and release fishing techniques avoid excessive fish fighting and handling times by using sufficiently strong tackle and barbless hooks; avoid damage to fish skin, scale and slime layers from nets, dry hands and dry, hot or rough surfaces (which leave fish vulnerable to oomycete skin infections); and avoid damage to jaw ligaments and vertebrae caused by suspending fish from the jaws or gills for weighing or handling. If a net must be used, it is important that it is pre-wetted and is not abrasive to the fish (such as a rubber-coated net or very dense lightweight mesh), because fish can easily damage themselves in a normal net while thrashing. The use of barbless hooks is an important aspect of catch and release; barbless hooks reduce injury and handling time, increasing survival. Frequently, fish caught on barbless hooks can be released without being removed from the water, and the hook(s) effortlessly slipped out with a single flick of the pliers or leader. Barbless hooks can be purchased from several major manufacturers or can be created from a standard hook by crushing the barb(s) flat with needle-nosed pliers. Some anglers avoid barbless hooks because of the belief that too many fish will escape. Concentrating on keeping the line tight at all times while fighting fish, equipping lures that do not have them with split rings, and using recurved point or "Triple Grip" style hooks on lures will keep catch rates with barbless hooks as high as those achieved with barbed hooks. One study looking at brook trout found that barbless hooks had no statistically significant effect on mortality rates when fish were hooked in the mouth, but observed that they did reduce mortalities compared to barbed hooks if fish were hooked deeper. The study also suggested bait fishing does not have a significantly higher mortality when utilized in an active style, rather than a passive manner that allows the fish to swallow the bait. The effects of catch and release vary from species to species. A study of fish caught in shallow water on the Great Barrier Reef showed high survival rates (97%+) for released fish if handled correctly, and particularly if caught on artificial baits such as lures. Fish caught on lures are usually hooked cleanly in the mouth, minimizing injury and aiding release.
Other studies have shown somewhat lower survival rates for fish gut-hooked on bait if the line is cut and the fish is released without trying to remove the hook. Debate over pain in released fish: Opponents of catch and release argue that fish are highly evolved vertebrates that share many of the same neurological structures that, in humans, are associated with pain perception. They cite studies showing that, neurologically, fish are quite similar to higher vertebrates and that blood chemistry reveals that hormones and blood metabolites associated with stress are quite high in fish struggling against hook and line. The idea that fish do not feel pain in their mouths has been studied at the University of Edinburgh and the Roslin Institute by injecting bee venom and acetic acid into the lips of rainbow trout; the fish responded by rubbing their lips along the sides and floors of their tanks in an effort to relieve themselves of the sensation. Lead researcher Lynne Sneddon wrote, "Our research demonstrates nociception and suggests that noxious stimulation in the rainbow trout has adverse behavioral and physiological effects. This fulfills the criteria for animal pain." A 2014 paper provides a critique of existing studies that purport to demonstrate that fish feel pain. James D. Rose of the University of Wyoming argues this may demonstrate a chemical sensitivity rather than pain and that the evidence for pain sensation in fish is ambiguous. Injury and mortality in released fish: A meta-study in 2005 found that the average catch and release mortality rate was 18%, but varied greatly by species. During an Oklahoma Department of Wildlife Conservation study, up to 43 percent of fish released after being caught died within six days as a result of inadequate holding and weigh-in procedures during tournaments. More recent studies reported in Montana estimate that approximately 20% of released trout die from injuries or stress, and for those that do not die, their injuries may significantly reduce their ability to feed and grow. Emerging research suggests catch and release does not work very well with fish caught when deep-sea fishing. Most deep-sea fish species suffer from the sudden pressure change when wound to the surface from great depths; these species cannot adjust their body's physiology quickly enough to follow the pressure change. The result is called "barotrauma". Fish with barotrauma will have their enormously swollen swim bladder protruding from their mouths and bulging eyeballs, and they often sustain other, more subtle but still very serious injuries. Upon release, fish with barotrauma will be unable to swim or dive due to the swollen swim bladder. The common practice has been to deflate the swim bladder by pricking it with a thin sharp object before attempting to release the fish. Emerging research also indicates that both barotrauma and the practice of deflating the swim bladder are highly damaging to fish, and that survival rates of caught-and-released deep-sea fish are extremely low. Barotrauma requires that fish be caught at least 10–15 m (30–50 ft) below the surface. Many surface-caught fish, such as billfish, and all fish caught from shore do not meet this criterion and thus do not suffer barotrauma.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Long drum** Long drum: Long drums are a loose category of tubular membranophones, characterized by their extreme length. They are most common in Africa, Thailand, and in Native American traditions. Long drums can be made out of entire tree trunks.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Data degradation** Data degradation: Data degradation is the gradual corruption of computer data due to an accumulation of non-critical failures in a data storage device. The phenomenon is also known as data decay, data rot or bit rot. Example: Below are several digital images illustrating data degradation, all consisting of 326,272 bits. The original photo is displayed first. In the next image, a single bit was changed from 0 to 1. In the next two images, two and three bits were flipped. On Linux systems, the binary difference between files can be revealed using the cmp command (e.g. cmp -b bitrot-original.jpg bitrot-1bit-changed.jpg). Primary storages: Data degradation in dynamic random-access memory (DRAM) can occur when the electric charge of a bit in DRAM disperses, possibly altering program code or stored data. DRAM may be altered by cosmic rays or other high-energy particles. Such data degradation is known as a soft error. ECC memory can be used to mitigate this type of data degradation. Secondary storages: Data degradation results from the gradual decay of storage media over the course of years or longer. Causes vary by medium: Solid-state media EPROMs, flash memory and other solid-state drives store data using electrical charges, which can slowly leak away due to imperfect insulation. Modern flash controller chips account for this leak by trying several lower threshold voltages (until ECC passes), prolonging the life of the data. Multi-level cells, with much lower distance between voltage levels, cannot be considered stable without this functionality. Secondary storages: The chip itself is not affected by this, so reprogramming it approximately once per decade prevents decay. An undamaged copy of the master data is required for the reprogramming. A checksum can be used to verify that the on-chip data is not yet damaged and is still suitable for reprogramming. Secondary storages: Magnetic media Magnetic media, such as hard disk drives, floppy disks and magnetic tapes, may experience data decay as bits lose their magnetic orientation. Higher temperature speeds up the rate of magnetic loss. As with solid-state media, re-writing is useful as long as the medium itself is not damaged (see below). Modern hard drives use giant magnetoresistance and have a higher magnetic lifespan on the order of decades. They also automatically correct any errors detected by ECC through rewriting. However, reliance on a factory-written servo track can complicate data recovery if that track itself becomes unreadable. Secondary storages: Floppy disks and tapes are poorly protected against ambient air. In warm/humid conditions, they are prone to the physical decomposition of the storage medium. Secondary storages: Optical media Optical media such as CD-R, DVD-R and BD-R may experience data decay from the breakdown of the storage medium. This can be mitigated by storing discs in a dark, cool, low-humidity location. "Archival quality" discs are available with an extended lifetime, but are still not permanent. However, data integrity scanning that measures the rates of various types of errors is able to predict data decay on optical media well ahead of uncorrectable data loss occurring. Secondary storages: Both the disc dye and the disc backing layer are potentially susceptible to breakdown. Early cyanine-based dyes used in CD-R were notorious for their lack of UV stability. Early CDs also suffered from CD bronzing, which is related to a combination of bad lacquer material and failure of the aluminum reflection layer.
Later discs use more stable dyes or forgo them for an inorganic mixture. The aluminum layer is also commonly swapped out for a gold or silver alloy. Secondary storages: Paper media Paper media, such as punched cards and punched tape, may literally rot. Mylar punched tape is another approach that does not rely on electromagnetic stability. Degradation of books and printing paper is primarily driven by acid hydrolysis of glycosidic bonds within the cellulose molecule as well as by oxidation; degradation of paper is accelerated by high relative humidity and high temperature, as well as by exposure to acids, oxygen, light, and various pollutants, including various volatile organic compounds and nitrogen dioxide. Hardware failures: Most disk, disk controller and higher-level systems are subject to a slight chance of unrecoverable failure. With ever-growing disk capacities, file sizes, and increases in the amount of data stored on a disk, the likelihood of the occurrence of data decay and other forms of uncorrected and undetected data corruption increases. Low-level disk controllers typically employ error correction codes (ECC) to correct erroneous data. Higher-level software systems may be employed to mitigate the risk of such underlying failures by increasing redundancy and implementing integrity checking, error correction codes and self-repairing algorithms. The ZFS file system was designed to address many of these data corruption issues. The Btrfs file system also includes data protection and recovery mechanisms, as does ReFS.
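The single-bit-flip example and the checksum-based integrity checking mentioned above can be reproduced in a few lines. This is a generic sketch (the payload stands in for a file such as the example image), not the checking scheme of ZFS or Btrfs, but it shows why a stored checksum detects silent corruption that would otherwise go unnoticed.

```python
# Sketch: simulate bit rot by flipping one bit, then detect it with a checksum.
# Generic illustration; not the integrity scheme of any particular file system.
import hashlib


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def flip_bit(data: bytes, bit_index: int) -> bytes:
    """Return a copy of data with a single bit inverted."""
    corrupted = bytearray(data)
    corrupted[bit_index // 8] ^= 1 << (bit_index % 8)
    return bytes(corrupted)


if __name__ == "__main__":
    original = b"example payload standing in for bitrot-original.jpg"
    stored_checksum = sha256(original)           # recorded when the data was known good
    degraded = flip_bit(original, bit_index=42)  # one flipped bit, as in the images above
    print("match after decay:", sha256(degraded) == stored_checksum)  # False
```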
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Epoxy moisture control system** Epoxy moisture control system: Epoxy moisture control systems are chemical barriers that are used to prevent moisture damage to flooring. Excessive moisture vapor emissions in concrete slabs can mean significant, expensive damage to a flooring installation. Hundreds of millions of dollars are spent annually just in the United States to correct moisture-related problems in flooring. These problems include failure of the flooring adhesive; damage to the floor covering itself, such as blistering; the formation of efflorescence salts; and the growth of mold and mildew. Epoxy moisture control system: In 2013 the ASTM F3010-13 "Standard Practice for Two-Component Resin Based Membrane-Forming Moisture Mitigation Systems for Use Under Resilient Floor Coverings" was adopted to establish performance criteria required for two component membranes employed as concrete moisture control systems. Excess moisture in concrete is defined by that amount of moisture emitting from the concrete subfloor that exceeds the amount allowed by the flooring manufacturer. This condition occurs when the flooring is installed before the water in the concrete mix that is not needed for hydration (strengthening) has had adequate time to evaporate. Causes of this condition include a construction schedule that does not allow at least 28 days for the slab to dry; using too much water in the concrete mix; installing the slab without a puncture- and tear-resistant, low-permeability vapor barrier beneath it; rewetting of the slab due to precipitation; inadequate drying conditions, which can include air temperatures that are lower than 50°F, high humidity in the surrounding air and poor airflow; and liquid water infiltration due to external sources, such as broken pipes, irrigation, improper sloping of the landscape, condensation, cleaning and maintenance, and moisture from flooring adhesives. Epoxy moisture control system: There are two industry standards for measuring moisture vapor emissions in concrete: calcium chloride testing (ASTM F1869) and relative humidity testing (ASTM F2170). Epoxy moisture control systems can be used when these tests determine that the moisture vapor emissions need to be remediated in order to install the selected floor covering within the timeframe allotted by the construction schedule. Epoxy moisture control system: Epoxy moisture control systems are roller-applied and are available in one-coat and two-coat varieties. One-coat systems allow for a faster installation time, while two-coat systems address the potential for pinholes or voids in the system, which can cause future failures. Epoxy moisture control systems can be applied over concrete with relative humidity levels up to 100%, and there are systems available on the market today that can be applied over concrete that is physically damp. In some cases, with the use of an epoxy moisture control system, floor coverings can be installed just 7 days after the slab is poured. Epoxy moisture control system: When applied correctly, epoxy moisture control systems are designed to bring moisture emission rates to acceptable levels for the flooring being installed, which combats flooring failures, microbiological activity (mold and mildew) and other problems associated with excess moisture in the slab.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wool measurement** Wool measurement: A micron (micrometre) is the measurement used to express the diameter of wool fibre. Fine wool fibres have a low micron value. Fibre diameter is the most important characteristic of wool in determining its value. Wool measurement: Every fleece comprises a very wide range of fibre diameters—for example, a typical Merino fleece will contain fibres as low as 10 microns in diameter, and there could be fibres with diameters exceeding 25 microns, depending on the age and health (or nutrition) of the sheep. What is usually referred to as wool's "micron" is the mean of the fibre diameters, or average diameter. This may be measured in a number of different ways. Wool measurement: Small samples can be taken from the side or fleece of a sheep and measured using a portable instrument such as an OFDA2000 (Optical Fibre Diameter Analyser) or a mobile instrument system called a Fleecescan. Both these systems have been studied extensively and, if used correctly, they should give reasonably reliable results. Micron test results taken before wool classing are a useful guide for classers in determining lines of wool to be made. Samples of fleece can also be shorn from the animal and sent to a laboratory for measurement ("midside sampling"). Most modern fleece-testing laboratories use instruments related to those mentioned—either the OFDA models or the Laserscan. Merino stud rams are mid-side sampled and the test results are displayed in the sale catalogues. Wool measurement: Once the fleeces are baled and prepared for sale as lots, they are commonly sampled by coring in the broker store and the samples sent to certification laboratories. Here the core samples are cleaned, dried and prepared for measurement under strict test methods. Merino wools are normally measured on Laserscan instruments in Australia, New Zealand and South Africa, although OFDA instruments may also be used in some cases (the results from these two types of instrument are quite similar). The "coefficient of variation of fibre diameter" (CVD) is a measure of the variation in fibre fineness within the sample fleece, relative to the average fibre diameter. Crossbred and coarse wools are often measured for mean fibre diameter by older instruments—"Airflow" in many parts of the world, and even a projection microscope in some cases. Wool measurement: Weaner and hogget wool is finer and generally more valuable than the wool from older sheep. Most wool between 11.5 and 24 microns in fibre diameter is made into clothing. The remainder is used for other textiles such as blankets, insulation and furnishings. Wool measurement: The finest bale of wool ever auctioned sold for a seasonal record of 269,000 Australian cents per kilogram during June 2008. This bale was produced by the Hillcreston Pinehill Partnership and measured 11.6 microns, 72.1% yield, and had a 43-newton-per-kilotex strength measurement. The bale realised $247,480 and was exported to India. In 2010, a soft ultra-fine, 10-micron fleece from Windradeen, near Pyramul, New South Wales, Australia, set a new world record in the fineness of wool fleeces when it won the Ermenegildo Zegna Vellus Aureum International Trophy.
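The two figures discussed above, the mean fibre diameter (the "micron") and the coefficient of variation of fibre diameter (CVD), are straightforward to compute from individual fibre measurements. The sketch below uses invented sample values purely for illustration; laboratory instruments such as the Laserscan or OFDA report these statistics directly.

```python
# Mean fibre diameter and coefficient of variation (CVD) from sample fibres.
# The measurements below are invented values used only for illustration.
from statistics import mean, pstdev

fibre_microns = [14.8, 16.2, 17.5, 18.1, 19.0, 20.3, 21.7, 24.9]

mean_diameter = mean(fibre_microns)                         # the fleece's "micron"
cvd_percent = pstdev(fibre_microns) / mean_diameter * 100   # CVD, as a percentage

print(f"mean fibre diameter: {mean_diameter:.1f} micron")
print(f"coefficient of variation: {cvd_percent:.1f}%")
```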
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aircraft** Aircraft: An aircraft (PL: aircraft) is a vehicle that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil, or, in a few cases, direct downward thrust from its engines. Common examples of aircraft include airplanes, helicopters, airships (including blimps), gliders, paramotors, and hot air balloons. The human activity that surrounds aircraft is called aviation. The science of aviation, including designing and building aircraft, is called aeronautics. Crewed aircraft are flown by an onboard pilot, whereas unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion (if any), usage and others. History: Flying model craft and stories of manned flight go back many centuries; however, the first manned ascent — and safe descent — in modern times took place in larger hot-air balloons developed in the 18th century. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras: Pioneers of flight, from the earliest experiments to 1914. History: First World War, 1914 to 1918. Aviation between the World Wars, 1918 to 1939. Second World War, 1939 to 1945. Postwar era, also called the Jet Age, 1945 to the present day. Methods of lift: Lighter than air – aerostats Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large cells or canopies, filled with a relatively low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of this is added to the weight of the aircraft structure, it adds up to the same weight as the air that the craft displaces. Methods of lift: Small hot-air balloons, called sky lanterns, were first invented in ancient China prior to the 3rd century BC and used primarily in cultural celebrations, and were only the second type of aircraft to fly, the first being kites, which were first invented in ancient China over two thousand years ago (see Han Dynasty). Methods of lift: A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs — usually fixed-wing — though none had yet been built. In 1919, Frederick Handley Page was reported as referring to "ships of the air," with smaller passenger types as "Air yachts." In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by a rigid outer framework and separate aerodynamic skin surrounding the gas bags, were produced, the Zeppelins being the largest and most famous. There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships, so "airship" came to be synonymous with these aircraft. Then several accidents, such as the Hindenburg disaster in 1937, led to the demise of these airships. Nowadays a "balloon" is an unpowered aerostat and an "airship" is a powered one. Methods of lift: A powered, steerable aerostat is called a dirigible.
Sometimes this term is applied only to non-rigid balloons, and sometimes dirigible balloon is regarded as the definition of an airship (which may then be rigid or non-rigid). Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back. These soon became known as blimps. During World War II, this shape was widely adopted for tethered balloons; in windy weather, this both reduces the strain on the tether and stabilizes the balloon. The nickname blimp was adopted along with the shape. In modern times, any small dirigible or airship is called a blimp, though a blimp may be unpowered as well as powered. Methods of lift: Heavier-than-air – aerodynes Heavier-than-air aircraft, such as airplanes, must find some way to push air or gas downwards so that a reaction occurs (by Newton's laws of motion) to push the aircraft upwards. This dynamic movement through the air is the origin of the term. There are two ways to produce dynamic upthrust — aerodynamic lift, and powered lift in the form of engine thrust. Methods of lift: Aerodynamic lift involving wings is the most common, with fixed-wing aircraft being kept in the air by the forward movement of wings, and rotorcraft by spinning wing-shaped rotors sometimes called "rotary wings." A wing is a flat, horizontal surface, usually shaped in cross-section as an aerofoil. To fly, air must flow over the wing and generate lift. A flexible wing is a wing made of fabric or thin sheet material, often stretched over a rigid frame. A kite is tethered to the ground and relies on the speed of the wind over its wings, which may be flexible or rigid, fixed, or rotary. Methods of lift: With powered lift, the aircraft directs its engine thrust vertically downward. V/STOL aircraft, such as the Harrier jump jet and Lockheed Martin F-35B take off and land vertically using powered lift and transfer to aerodynamic lift in steady flight. Methods of lift: A pure rocket is not usually regarded as an aerodyne because it does not depend on the air for its lift (and can even fly into space); however, many aerodynamic lift vehicles have been powered or assisted by rocket motors. Rocket-powered missiles that obtain aerodynamic lift at very high speed due to airflow over their bodies are a marginal case. Methods of lift: Fixed-wing The forerunner of the fixed-wing aircraft is the kite. Whereas a fixed-wing aircraft relies on its forward speed to create airflow over the wings, a kite is tethered to the ground and relies on the wind blowing over its wings to provide lift. Kites were the first kind of aircraft to fly and were invented in China around 500 BC. Much aerodynamic research was done with kites before test aircraft, wind tunnels, and computer modelling programs became available. Methods of lift: The first heavier-than-air craft capable of controlled free-flight were gliders. A glider designed by George Cayley carried out the first true manned, controlled flight in 1853. The first powered and controllable fixed-wing aircraft (the airplane or aeroplane) was invented by Wilbur and Orville Wright. Besides the method of propulsion (if any), fixed-wing aircraft are in general characterized by their wing configuration. The most important wing characteristics are: Number of wings — monoplane, biplane, etc. Wing support — Braced or cantilever, rigid or flexible. Wing planform — including aspect ratio, angle of sweep, and any variations along the span (including the important class of delta wings). 
Location of the horizontal stabilizer, if any. Dihedral angle — positive, zero, or negative (anhedral). A variable-geometry aircraft can change its wing configuration during flight. A flying wing has no fuselage, though it may have small blisters or pods. The opposite of this is a lifting body, which has no wings, though it may have small stabilizing and control surfaces. Methods of lift: Wing-in-ground-effect vehicles are generally not considered aircraft. They "fly" efficiently close to the surface of the ground or water, like conventional aircraft during takeoff. An example is the Russian ekranoplan nicknamed the "Caspian Sea Monster". Man-powered aircraft also rely on ground effect to remain airborne with minimal pilot power, but this is only because they are so underpowered—in fact, the airframe is capable of flying higher. Methods of lift: Rotorcraft Rotorcraft, or rotary-wing aircraft, use a spinning rotor with aerofoil cross-section blades (a rotary wing) to provide lift. Types include helicopters, autogyros, and various hybrids such as gyrodynes and compound rotorcraft. Methods of lift: Helicopters have a rotor turned by an engine-driven shaft. The rotor pushes air downward to create lift. Tilting the rotor forward tilts the downward flow backward, producing thrust for forward flight. Some helicopters have more than one rotor and a few have rotors turned by gas jets at the tips. Some have a tail rotor to counteract the rotation of the main rotor, and to aid directional control. Methods of lift: Autogyros have unpowered rotors, with a separate power plant to provide thrust. The rotor is tilted backward. As the autogyro moves forward, air blows upward across the rotor, making it spin. This spinning increases the speed of airflow over the rotor, to provide lift. Rotor kites are unpowered autogyros, which are towed to give them forward speed or tethered to a static anchor in high wind for kited flight. Methods of lift: Compound rotorcraft have wings that provide some or all of the lift in forward flight. They are nowadays classified as powered lift types and not as rotorcraft. Tiltrotor aircraft (such as the Bell Boeing V-22 Osprey), tiltwing, tail-sitter, and coleopter aircraft have their rotors/propellers horizontal for vertical flight and vertical for forward flight. Methods of lift: Other methods of lift A lifting body is an aircraft body shaped to produce lift. If there are any wings, they are too small to provide significant lift and are used only for stability and control. Lifting bodies are not efficient: they suffer from high drag, and must also travel at high speed to generate enough lift to fly. Many of the research prototypes, such as the Martin Marietta X-24, which led up to the Space Shuttle, were lifting bodies, though the Space Shuttle is not, and some supersonic missiles obtain lift from the airflow over a tubular body. Methods of lift: Powered lift types rely on engine-derived lift for vertical takeoff and landing (VTOL). Most types transition to fixed-wing lift for horizontal flight. Classes of powered lift types include VTOL jet aircraft (such as the Harrier jump jet) and tiltrotors, such as the Bell Boeing V-22 Osprey, among others. A few experimental designs rely entirely on engine thrust to provide lift throughout the whole flight, including personal fan-lift hover platforms and jetpacks. VTOL research designs include the Rolls-Royce Thrust Measuring Rig. 
Methods of lift: Some rotor wings employ horizontal-axis wings, in which airflow across a spinning rotor generates lift. The Flettner airplane uses a rotating cylinder, obtaining lift from the Magnus effect. The FanWing uses a cross-flow fan, while the mechanically more complex cyclogyro comprises multiple wings which rotate together around a central axis. The ornithopter obtains thrust by flapping its wings. Size and speed extremes: Size The smallest aircraft are toys/recreational items and nano aircraft. Size and speed extremes: The largest aircraft by dimensions and volume (as of 2016) is the 302 ft (92 m) long British Airlander 10, a hybrid blimp with helicopter and fixed-wing features, reportedly capable of speeds up to 90 mph (140 km/h; 78 kn) and an airborne endurance of two weeks with a payload of up to 22,050 lb (10,000 kg). The largest aircraft by weight and largest regular fixed-wing aircraft ever built, as of 2016, was the Antonov An-225 Mriya. That Soviet-built (Ukrainian SSR) six-engine transport of the 1980s was 84 m (276 ft) long, with an 88 m (289 ft) wingspan. It holds the world payload record, after transporting 428,834 lb (194,516 kg) of goods, and has flown 100 t (220,000 lb) loads commercially. With a maximum loaded weight of 550–700 t (1,210,000–1,540,000 lb), it was also the heaviest aircraft built to date. It could cruise at 500 mph (800 km/h; 430 kn). The aircraft was destroyed during the Russo-Ukrainian War. The largest military airplanes are the Ukrainian Antonov An-124 Ruslan (world's second-largest airplane, also used as a civilian transport) and the American Lockheed C-5 Galaxy transport, weighing, loaded, over 380 t (840,000 lb). The 8-engine, piston/propeller Hughes H-4 Hercules "Spruce Goose" — an American World War II wooden flying boat transport with a greater wingspan (about 98 m/320 ft) than any current aircraft and a tail height equal to the tallest (Airbus A380-800 at 24.1 m/78 ft) — flew only one short hop in the late 1940s and never flew out of ground effect. The largest civilian airplanes, apart from the above-noted An-225 and An-124, are the Airbus Beluga cargo transport derivative of the Airbus A300 jet airliner, the Boeing Dreamlifter cargo transport derivative of the Boeing 747 jet airliner/transport (the 747-200B was, at its creation in the 1960s, the heaviest aircraft ever built, with a maximum weight of over 400 t (880,000 lb)), and the double-decker Airbus A380 "super-jumbo" jet airliner (the world's largest passenger airliner). Size and speed extremes: Speeds The fastest fixed-wing aircraft and fastest glider is the Space Shuttle, which re-entered the atmosphere at nearly Mach 25 or 17,500 mph (28,200 km/h). The fastest recorded powered aircraft flight, and fastest recorded aircraft flight of an air-breathing powered aircraft, was that of the NASA X-43A Pegasus, a scramjet-powered, hypersonic, lifting body experimental research aircraft, at Mach 9.68 or 6,755 mph (10,870 km/h) on 16 November 2004. Prior to the X-43A, the fastest recorded powered airplane flight, and still the record for the fastest manned powered airplane, was that of the North American X-15, a rocket-powered airplane, at Mach 6.7 or 7,274 km/h (4,520 mph) on 3 October 1967. The fastest manned, air-breathing powered airplane is the Lockheed SR-71 Blackbird, a U.S. reconnaissance jet fixed-wing aircraft, having reached 3,530 km/h (2,193 mph) on 28 July 1976. Propulsion: Unpowered aircraft Gliders are heavier-than-air aircraft that do not employ propulsion once airborne. 
Take-off may be by launching forward and downward from a high location, or by pulling into the air on a tow-line, either by a ground-based winch or vehicle, or by a powered "tug" aircraft. For a glider to maintain its forward air speed and lift, it must descend in relation to the air (but not necessarily in relation to the ground). Many gliders can "soar", i.e., gain height from updrafts such as thermal currents. The first practical, controllable example was designed and built by the British scientist and pioneer George Cayley, whom many recognise as the first aeronautical engineer. Common examples of gliders are sailplanes, hang gliders and paragliders. Propulsion: Balloons drift with the wind, though normally the pilot can control the altitude, either by heating the air or by releasing ballast, giving some directional control (since the wind direction changes with altitude). A wing-shaped hybrid balloon can glide directionally when rising or falling; but a spherically shaped balloon does not have such directional control. Propulsion: Kites are aircraft that are tethered to the ground or other object (fixed or mobile) that maintains tension in the tether or kite line; they rely on virtual or real wind blowing over and under them to generate lift and drag. Kytoons are balloon-kite hybrids that are shaped and tethered to obtain kiting deflections, and can be lighter-than-air, neutrally buoyant, or heavier-than-air. Propulsion: Powered aircraft Powered aircraft have one or more onboard sources of mechanical power, typically aircraft engines although rubber and manpower have also been used. Most aircraft engines are either lightweight reciprocating engines or gas turbines. Engine fuel is stored in tanks, usually in the wings but larger aircraft also have additional fuel tanks in the fuselage. Propeller aircraft Propeller aircraft use one or more propellers (airscrews) to create thrust in a forward direction. The propeller is usually mounted in front of the power source in tractor configuration but can be mounted behind in pusher configuration. Variations of propeller layout include contra-rotating propellers and ducted fans. Propulsion: Many kinds of power plant have been used to drive propellers. Early airships used man power or steam engines. The more practical internal combustion piston engine was used for virtually all fixed-wing aircraft until World War II and is still used in many smaller aircraft. Some types use turbine engines to drive a propeller in the form of a turboprop or propfan. Human-powered flight has been achieved, but has not become a practical means of transport. Unmanned aircraft and models have also used power sources such as electric motors and rubber bands. Propulsion: Jet aircraft Jet aircraft use airbreathing jet engines, which take in air, burn fuel with it in a combustion chamber, and accelerate the exhaust rearwards to provide thrust. Propulsion: Different jet engine configurations include the turbojet and turbofan, sometimes with the addition of an afterburner. Those with no rotating turbomachinery include the pulsejet and ramjet. These mechanically simple engines produce no thrust when stationary, so the aircraft must be launched to flying speed using a catapult, like the V-1 flying bomb, or a rocket, for example. Other engine types include the motorjet and the dual-cycle Pratt & Whitney J58. Propulsion: Compared to engines using propellers, jet engines can provide much higher thrust, higher speeds and, above about 40,000 ft (12,000 m), greater efficiency. 
They are also much more fuel-efficient than rockets. As a consequence nearly all large, high-speed or high-altitude aircraft use jet engines. Propulsion: Rotorcraft Some rotorcraft, such as helicopters, have a powered rotary wing or rotor, where the rotor disc can be angled slightly forward so that a proportion of its lift is directed forwards. The rotor may, like a propeller, be powered by a variety of methods such as a piston engine or turbine. Experiments have also used jet nozzles at the rotor blade tips. Propulsion: Other types of powered aircraft Rocket-powered aircraft have occasionally been experimented with, and the Messerschmitt Me 163 Komet fighter even saw action in the Second World War. Since then, they have been restricted to research aircraft, such as the North American X-15, which traveled up into space where air-breathing engines cannot work (rockets carry their own oxidant). Rockets have more often been used as a supplement to the main power plant, typically for the rocket-assisted take off of heavily loaded aircraft, but also to provide high-speed dash capability in some hybrid designs such as the Saunders-Roe SR.53. Propulsion: The ornithopter obtains thrust by flapping its wings. It has found practical use in a model hawk used to freeze prey animals into stillness so that they can be captured, and in toy birds. Design and construction: Aircraft are designed according to many factors such as customer and manufacturer demand, safety protocols and physical and economic constraints. For many types of aircraft the design process is regulated by national airworthiness authorities. The key parts of an aircraft are generally divided into three categories: The structure ("airframe") comprises the main load-bearing elements and associated equipment, as well as flight controls. The propulsion system ("powerplant") (if it is powered) comprises the power source and associated equipment, as described above. The avionics comprise the electrical and electronic control, navigation and communication systems. Design and construction: Structure The approach to structural design varies widely between different types of aircraft. Some, such as paragliders, comprise only flexible materials that act in tension and rely on aerodynamic pressure to hold their shape. A balloon similarly relies on internal gas pressure, but may have a rigid basket or gondola slung below it to carry its payload. Early aircraft, including airships, often employed flexible doped aircraft fabric covering to give a reasonably smooth aeroshell stretched over a rigid frame. Later aircraft employed semi-monocoque techniques, where the skin of the aircraft is stiff enough to share much of the flight loads. In a true monocoque design there is no internal structure left. Design and construction: The key structural parts of an aircraft depend on what type it is. Aerostats Lighter-than-air types are characterised by one or more gasbags, typically with a supporting structure of flexible cables or a rigid framework called its hull. Other elements such as engines or a gondola may also be attached to the supporting structure. Design and construction: Aerodynes Heavier-than-air types are characterised by one or more wings and a central fuselage. The fuselage typically also carries a tail or empennage for stability and control, and an undercarriage for takeoff and landing. Engines may be located on the fuselage or wings. 
On a fixed-wing aircraft the wings are rigidly attached to the fuselage, while on a rotorcraft the wings are attached to a rotating vertical shaft. Smaller designs sometimes use flexible materials for part or all of the structure, held in place either by a rigid frame or by air pressure. The fixed parts of the structure comprise the airframe. Design and construction: Power The source of motive power for an aircraft is normally called the powerplant, and includes engine or motor, propeller or rotor (if any), jet nozzles and thrust reversers (if any), and accessories essential to the functioning of the engine or motor (e.g.: starter, ignition system, intake system, exhaust system, fuel system, lubrication system, engine cooling system, and engine controls). Powered aircraft are typically powered by internal combustion engines (piston or turbine) burning fossil fuels, typically gasoline (avgas) or jet fuel. A very few are powered by rockets, ramjet propulsion, electric motors, or internal combustion engines of other types or fuels. A very few have been powered, for short flights, by human muscle energy (e.g.: Gossamer Condor). Design and construction: Avionics The avionics comprise any electronic aircraft flight control systems and related equipment, including electronic cockpit instrumentation, navigation, radar, monitoring, and communications systems. Flight characteristics: Flight envelope The flight envelope of an aircraft refers to its approved design capabilities in terms of airspeed, load factor and altitude. The term can also refer to other assessments of aircraft performance such as maneuverability. When an aircraft is abused, for instance by diving it at too high a speed, it is said to be flown outside the envelope, something considered foolhardy since it has been taken beyond the design limits which have been established by the manufacturer. Going beyond the envelope may have a known outcome such as flutter or entry to a non-recoverable spin (possible reasons for the boundary). Flight characteristics: Range The range is the distance an aircraft can fly between takeoff and landing, as limited by the time it can remain airborne. For a powered aircraft the time limit is determined by the fuel load and rate of consumption. For an unpowered aircraft, the maximum flight time is limited by factors such as weather conditions and pilot endurance. Many aircraft types are restricted to daylight hours, while balloons are limited by their supply of lifting gas. The range can be seen as the average ground speed multiplied by the maximum time in the air, as the short sketch below illustrates. The Airbus A350-900ULR is now the longest-range airliner. Flight dynamics Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation around three axes which pass through the vehicle's center of gravity, known as pitch, roll, and yaw. Roll is a rotation about the longitudinal axis (equivalent to the rolling or heeling of a ship) giving an up-down movement of the wing tips measured by the roll or bank angle. Pitch is a rotation about the sideways horizontal axis giving an up-down movement of the aircraft nose measured by the angle of attack. Yaw is a rotation about the vertical axis giving a side-to-side movement of the nose known as sideslip. Flight dynamics is concerned with the stability and control of an aircraft's rotation about each of these axes. 
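The range relation mentioned above, average ground speed multiplied by the maximum time in the air, can be made concrete with a short back-of-envelope sketch. The numbers, function names, and units below are purely illustrative assumptions rather than figures for any particular aircraft, and the calculation ignores fuel reserves, climb and descent, and wind.

```python
# Hypothetical back-of-envelope range estimate for a powered aircraft.
# Illustrates "range ~ average ground speed x maximum time in the air";
# all figures are invented for illustration only.

def endurance_hours(fuel_load_kg: float, burn_rate_kg_per_hour: float) -> float:
    """Maximum time airborne, limited by the fuel load and rate of consumption."""
    return fuel_load_kg / burn_rate_kg_per_hour

def range_km(ground_speed_km_per_hour: float, fuel_load_kg: float,
             burn_rate_kg_per_hour: float) -> float:
    """Range as average ground speed multiplied by the maximum time in the air."""
    return ground_speed_km_per_hour * endurance_hours(fuel_load_kg, burn_rate_kg_per_hour)

if __name__ == "__main__":
    # Invented example: 1,200 kg of fuel burned at 150 kg/h while averaging 250 km/h.
    print(f"Endurance: {endurance_hours(1200, 150):.1f} h")   # -> 8.0 h
    print(f"Range:     {range_km(250, 1200, 150):.0f} km")    # -> 2000 km
```

The same proportionality explains why either a larger fuel fraction or a lower consumption rate extends range, while headwinds reduce the achievable ground speed and therefore the distance covered in the same airborne time.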
Flight characteristics: Stability An aircraft that is unstable tends to diverge from its intended flight path and so is difficult to fly. A very stable aircraft tends to stay on its flight path and is difficult to maneuver. Therefore, it is important for any design to achieve the desired degree of stability. Since the widespread use of digital computers, it is increasingly common for designs to be inherently unstable and rely on computerised control systems to provide artificial stability. Flight characteristics: A fixed wing is typically unstable in pitch, roll, and yaw. Pitch and yaw stabilities of conventional fixed wing designs require horizontal and vertical stabilisers, which act similarly to the feathers on an arrow. These stabilizing surfaces allow equilibrium of aerodynamic forces and stabilise the flight dynamics of pitch and yaw. They are usually mounted on the tail section (empennage), although in the canard layout the main aft wing, rather than the foreplane, acts as the pitch stabilizer. Tandem wing and tailless aircraft rely on the same general rule to achieve stability, the aft surface being the stabilising one. Flight characteristics: A rotary wing is typically unstable in yaw, requiring a vertical stabiliser. A balloon is typically very stable in pitch and roll due to the way the payload is slung underneath the center of lift. Control Flight control surfaces enable the pilot to control an aircraft's flight attitude and are usually part of the wing or mounted on, or integral with, the associated stabilizing surface. Their development was a critical advance in the history of aircraft, which had until that point been uncontrollable in flight. Flight characteristics: Aerospace engineers develop control systems for a vehicle's orientation (attitude) about its center of mass. The control systems include actuators, which exert forces in various directions, and generate rotational forces or moments about the aerodynamic center of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw. For example, a pitching moment is a vertical force applied at a distance forward or aft from the aerodynamic center of the aircraft, causing the aircraft to pitch up or down. Control systems are also sometimes used to increase or decrease drag, for example to slow the aircraft to a safe speed for landing. Flight characteristics: The two main aerodynamic forces acting on any aircraft are lift supporting it in the air and drag opposing its motion. Control surfaces or other techniques may also be used to affect these forces directly, without inducing any rotation. Environmental impact: Aircraft permit long-distance, high-speed travel and may be a more fuel-efficient mode of transportation in some circumstances. Aircraft have environmental and climate impacts beyond fuel efficiency considerations, however. They are also relatively noisy compared to other forms of travel, and high-altitude aircraft generate contrails, which experimental evidence suggests may alter weather patterns. Uses for aircraft: Aircraft are produced in several different types optimized for various uses: military aircraft, which include not just combat types but many types of supporting aircraft, and civil aircraft, which include all non-military types, including experimental and model aircraft. Uses for aircraft: Military A military aircraft is any aircraft that is operated by a legal or insurrectionary armed service of any type. 
Military aircraft can be either combat or non-combat: Combat aircraft are aircraft designed to destroy enemy equipment using their own armament. Combat aircraft divide broadly into fighters and bombers, with several in-between types, such as fighter-bombers and attack aircraft, including attack helicopters. Uses for aircraft: Non-combat aircraft are not designed for combat as their primary function, but may carry weapons for self-defense. Non-combat roles include search and rescue, reconnaissance, observation, transport, training, and aerial refueling. These aircraft are often variants of civil aircraft. Most military aircraft are powered heavier-than-air types. Other types, such as gliders and balloons, have also been used as military aircraft; for example, balloons were used for observation during the American Civil War and World War I, and military gliders were used during World War II to land troops. Uses for aircraft: Civil Civil aircraft divide into commercial and general types; however, there are some overlaps. Commercial aircraft include types designed for scheduled and charter airline flights, carrying passengers, mail and other cargo. The larger passenger-carrying types are the airliners, the largest of which are wide-body aircraft. Some of the smaller types are also used in general aviation, and some of the larger types are used as VIP aircraft. General aviation is a catch-all covering other kinds of private (where the pilot is not paid for time or expenses) and commercial use, and involving a wide range of aircraft types such as business jets (bizjets), trainers, homebuilts, gliders, warbirds, and hot air balloons, to name a few. The vast majority of aircraft today are general aviation types. Experimental An experimental aircraft is one that has not been fully proven in flight, or that carries a Special Airworthiness Certificate, called an Experimental Certificate in United States parlance. This often implies that the aircraft is testing new aerospace technologies, though the term also refers to amateur-built and kit-built aircraft, many of which are based on proven designs. Model A model aircraft is a small unmanned type made to fly for fun, for static display, for aerodynamic research or for other purposes. A scale model is a replica of some larger design.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pre-Bötzinger complex** Pre-Bötzinger complex: The preBötzinger complex, often abbreviated as preBötC, is a functionally and anatomically specialized site in the ventral-lateral region of the lower medulla oblongata (i.e., lower brainstem). The preBötC is part of the ventral respiratory group of respiratory-related interneurons. Its foremost function is to generate the inspiratory breathing rhythm in mammals. In addition, the preBötC is widely and paucisynaptically connected to higher brain centers that regulate arousal and excitability more generally such that respiratory brain function is intimately connected with many other rhythmic and cognitive functions of the brain and central nervous system. Further, the preBötC receives mechanical sensory information from the airways that encode lung volume as well as pH, oxygen, and carbon dioxide content of circulating blood and the cerebrospinal fluid. Pre-Bötzinger complex: The preBötC is approximately colocated with the hypoglossal (XII) cranial motor nucleus as well as the ‘loop’ portion of the inferior olive in the anterior-posterior axis. The caudal border of the preBötC is slightly caudal to the obex, where the brainstem merges with the cervical spinal cord. Discovery: The initial description of the preBötC was widely disseminated in a 1991 paper in Science, but its discovery predates that paper by one year. The team was led by Jack L. Feldman and Jeffrey C. Smith at the University of California, Los Angeles (UCLA), but the Science paper also included UCLA coauthor Howard Ellenberger, as well as Klaus Ballanyi and Diethelm W. Richter from Göttingen University in Germany. The region derives its name from a neighboring medullary region involved in expiratory breathing rhythm dubbed the Bötzinger complex, which was named after the Silvaner (Bötzinger) variety of wine featured at the conference at which that region was named. Functional definition of the preBötC: The first definition of the preBötC was based largely on functional criteria. If the central neuraxis from pons to lumbar spinal cord is removed from a newborn rodent, then basic neural motor patterns can be generated and recorded using microelectrodes in vitro. The breathing rhythm emerges spontaneously with robust and continuous motor activity measurable on any cranial or spinal motor nerve that innervates breathing-related musculature. By isolating a rhythmically active newborn rat brainstem-spinal cord in a microsectioning vibratome, Smith and colleagues performed a series of 75 µm-thick transverse sections while monitoring inspiratory-related motor rhythms. The preBötC represented the portion of the ventral-lateral lower brainstem that was necessary and sufficient to generate inspiratory-related rhythm and motor output in vitro. Surprisingly, if microsections were applied from the anterior and posterior regions of the neuraxis simultaneously, a transverse section of thickness ~500 µm – which retained the preBötC and XII motoneurons – generated a rhythm and motor pattern that was almost identical to the rhythm and pattern in the full brainstem-spinal cord preparation. Perturbations that elevated excitability in preBötC sped up respiratory rhythm, whereas perturbations that depressed its excitability slowed the rhythm down. 
The authors concluded that these preBötC-retaining slice preparations preserved the core network generating inspiratory rhythm as well as premotor and motor neurons that define a minimal breathing-related circuit suitable for studies under controlled conditions in vitro. Breathing slices became a widely exploited preparation for such studies and continue to be used by laboratories worldwide to the present day. Anatomic definition of the preBötC: Anatomical observations advanced understanding of the preBötC by providing specific markers expressed by its constituent neurons, which helped delineate its approximate borders. The superset of markers is based largely on neuropeptides and peptide receptors, whose expression patterns have come to define the borders of preBötC and its constituent rhythm-generating and output pattern-related interneurons. preBötC neurons selectively express neurokinin-1 receptors (NK1Rs), µ-opioid receptors (µORs), as well as somatostatin (SST) and SST2a-type receptors. Of course, selectively does not mean exclusively or entirely. Each marker has limitations as a defining feature of the preBötC core, but generally speaking, the neuropeptide-related markers below have proved to be both reliable and of great utility in the quest to define preBötC structure and function. Anatomic definition of the preBötC: Peptide markers have been used to probe preBötC function. Substance P (SP) accelerated inspiratory rhythms in vitro by depolarizing putatively rhythmogenic preBötC neurons. SP also depolarized preBötC neurons whose function is premotor-related, i.e., those neurons transmit the nascent inspiratory rhythm to motoneurons outside the preBötC. The net result was that SP sped up the rhythm and elevated the baseline level of neural activity in XII nerve recordings in vitro. Anatomic definition of the preBötC: The expression of NK1Rs by preBötC neurons was used to test its inspiratory rhythm-generating role. SP, conjugated to the ribosomal toxin saporin, was injected into the preBötC of adult rats. Over the course of a week, this intervention caused progressive breathing deficits that ultimately resulted in severely pathological (i.e., ataxic) breathing. SP-saporin-injected rats also experienced sleep deficits and extraordinary sensitivity to anesthesia. Expression of µORs appears to be less widespread than NK1Rs among constituent preBötC neurons. Although expressed somewhat more sparsely, the application of µOR agonists like [D-Ala2, NMe-Phe4, Gly-ol5]-enkephalin (i.e., DAMGO) potently slowed the inspiratory rhythm. Note, this observation in vitro presaged the 2010s–2020s crisis of opioid drug-related deaths by respiratory failure, which are attributable in large part to depression of rhythm-generating function in the preBötC. Anatomic definition of the preBötC: In the late 1980s and early 1990s, following discovery of the preBötC, in vitro preparations from neonates were not yet widely accepted as experimental models of the respiratory neural control system in adults. Some groups argued that in vitro rhythms reflected gasping rather than breathing, despite the fact that in vitro preparations show physiological levels of oxygen and pH even several hundred micrometers below the surface of the tissue. Thus, the SP-saporin experiments were critical for showing that the preBötC was necessary for normal breathing in un-anesthetized adult animals. Anatomic definition of the preBötC: Nevertheless, one is confronted with a disparity of motor patterns. 
The pattern of phrenic or XII nerve activity in vitro shows an abrupt onset followed by a decremental pattern, whereas in vivo the inspiratory motor nerves typically show an incremental onset followed by a more precipitous offset. The differences in the motor patterns measured in adults in vivo and those of in vitro preparations can be explained by age- and development-related differences, the loss of mechanical sensory feedback in vitro, and the temperature (in vitro preparations are typically maintained ~10 °C lower than physiological temperature). SST and SST2a receptors are expressed by neurons in the preBötC. Unlike NK1R expression, which remains rather strong in regions caudal to the preBötC within the cervical spinal cord, SST expression appears to peak in the anterior-posterior axis at the region recognized as the preBötC. Could SST-expressing preBötC neurons be markers for the preBötC core? Investigators installed in the preBötC a peptide receptor from the fruit fly, adapted for expression in mammals, that activates potassium channels. Whether awake or anesthetized, activation of those potassium channel-linked receptors in SST-expressing neurons of the preBötC reduced breathing movements, both their amplitude and frequency, and ultimately caused apnea, i.e., a lack of breathing. The exogenous peptide that activates the fly receptor was ultimately cleared from the central nervous system: injected rats nonetheless needed mechanical ventilation until they recovered from the experiment. Subsequent studies examined the underlying cellular mechanisms and have come to the conclusion that preBötC neurons expressing SST are related to transmission of the rhythm from core rhythmogenic neurons to inspiratory premotor neurons. The SST “output” neurons are intermingled in the preBötC with rhythm-generating neurons, and their function is to coactivate and pass on inspiratory rhythm to dedicated premotor populations outside of the preBötC. Other markers for the preBötC include the peptide hormone thyrotropin-releasing hormone (TRH) and the glycoprotein reelin. In summary, the preBötC is the source of rhythmic activity that – once distributed to premotor and motoneurons of respiratory muscles – produces inspiratory breathing movements. The neurons that comprise the preBötC express NK1Rs, µORs, SST2a receptors, and SST. Each of these markers holds functional significance for modulation of preBötC rhythmicity, and their expression delineates the borders of the preBötC. SP accelerated inspiratory rhythms measured in vitro and ablation of NK1R-expressing preBötC neurons caused severe pathologies of breathing that were ultimately fatal. The µORs also map the preBötC and opioid drugs depress breathing rhythms, which is further evidence of the preeminent rhythmogenic role of the preBötC. SST is a peptide transmitter rather than a receptor, but its expression also maps the preBötC. SST-expressing neurons are essential for breathing, but their role is linked to the production of motor output rather than generation of rhythm per se. Cellular composition of the preBötC: Excitatory (glutamatergic) neurons The rhythm-generating core of preBötC incorporates glutamatergic interneurons that express the gene Slc17a6 (i.e., Vglut2). preBötC glutamatergic neurons also express NK1Rs and µORs, but probably not SST. 
Pharmacological studies showed that excitatory transmission, predominantly via AMPA- and kainate-type ionotropic glutamate receptors, was essential for rhythm generation as well as transmission to premotor neurons and ultimately motor output. Furthermore, Vglut2-knockout mice fail to breathe at birth. Transverse slices from late-stage embryos of Vglut2-knockout mice fail to generate rhythmic activity in the preBötC. Nevertheless, the cellular composition of the preBötC appears relatively unperturbed and constituent neurons express electrical properties associated with the preBötC in early postnatal mice, which emphasizes the importance of excitatory synaptic interactions for rhythm generation. Cellular composition of the preBötC: Dbx1-derived neurons A subset of preBötC glutamatergic neurons is derived from progenitor cells that express the transcription factor Dbx1 (developing brain homeobox 1) during embryonic development. In slices from early postnatal Dbx1 reporter mice, Dbx1-derived preBötC neurons are rhythmically active in vitro in sync with inspiratory rhythm and motor output. Examined histologically, Dbx1-derived preBötC neurons express NK1Rs, µORs, SST2a receptors, as well as SST. Also in slices from postnatal Dbx1 reporter mice, the selective photonic ablation of Dbx1-derived preBötC neurons diminishes XII motor output magnitude and decelerates and then irreversibly stops the XII rhythm. In adult mice that express light-sensitive cation channels (channelrhodopsin 2) in Dbx1-derived neurons, optogenetic photostimulation speeds up breathing and increases tidal volume of the breaths. In mice expressing proton pumps (archaerhodopsin) in Dbx1-derived preBötC neurons, photoinhibition slows or stops breathing movements. When the breathing is slowed via photoinhibition of Dbx1-derived preBötC neurons, the tidal volume of the breaths is diminished. Dbx1 is a useful marker for the core preBötC neurons, but with caveats. First, Dbx1 is expressed during embryonic development, which makes it more challenging (though far from impossible) to use as a marker or a tool to manipulate neuronal function compared to genes like Vglut2 that are expressed throughout life. Second, Dbx1, like Vglut2, marks output-related preBötC neurons as well as premotor neurons in the reticular formation that transmit to the hypoglossal motoneurons and phrenic premotor neurons in the upper cervical spinal cord. Third, Dbx1 is an embryonic transcription factor that governs the development of many populations in the brain and central nervous system, notably the V0 interneuron class involved in locomotion. Nevertheless, Dbx1 expression patterns can be mapped using Cre-Lox recombination in genetically modified mice to find and record preBötC core rhythmogenic interneurons. Cellular composition of the preBötC: Inhibitory (GABA- and glycinergic) neurons Approximately half of preBötC interneurons are inhibitory, glycinergic or GABAergic. Inhibitory preBötC neurons modulate the amplitude as well as the frequency of the rhythmic inspiratory bursts. These inhibitory populations receive sensorimotor information from the nucleus of the solitary tract (NTS), located in the dorsomedial medulla near the XII motor nucleus and the dorsal motor nucleus of vagus. Inhibitory neurons project to core rhythmogenic preBötC neurons. During normal breathing, inhibitory neurons in the preBötC are recruited periodically during each breath to hasten inspiratory termination. 
That role profoundly influences the phase transition from inspiration to post-inspiration, then expiration, which speeds up breathing cycles. Without preBötC inhibitory microcircuits, the breathing rhythm is slower overall and 'stiff' in the sense that its oscillation remains fixed even when faced with normally effective respiratory drives such as CO2 or SP. Inhibitory preBötC neurons also inhibit neurons involved in generating expiratory (exhale-related) rhythm to enforce an exclusively inspiratory phase when the preBötC is active. Eupnea and sigh: The preBötC produces two types of breathing rhythm in the presence of physiological levels of oxygen and carbon dioxide. In eupnea, or normal resting breathing, the preBötC generates a rhythm that is relatively fast (~2–4 Hz in rodents, ~0.1-0.2 Hz in humans), with each breath moving a normal tidal volume of air. Sigh breaths, on the other hand, are much slower (cycle periods of roughly 1–4 minutes in mammals), with breath amplitudes two- to three-fold larger than tidal volume. Each type of rhythm is generated within the pre-Bötzinger complex and sigh bursts can be measured in rhythmically active slices. Robust sigh rhythmicity in slices requires that the slice retain some tissue immediately rostral to preBötC, which contains the cut axons from a rostral site at the level of the facial (VII) cranial nucleus that projects to preBötC and delivers bombesin-like peptides, namely gastrin-releasing peptide (GRP) and neuromedin B (NMB). Producing both inspiratory (eupnea-related) and sigh bursts appears to involve the majority of excitatory neurons in the preBötC. However, each type of rhythmic activity appears to depend on different mechanisms. The sigh rhythm depends on synaptic mechanisms that involve P/Q-type calcium channels, suggesting either a subset of neurons with specialized synapses for this type of rhythm generation, since only a very small number of respiratory neurons receive glutamatergic inputs that depend on P/Q-type calcium currents, or emphasizing the need for calcium influx to produce sighs. The sigh burst rhythm also depends on mGluR8 receptor activation. A subset of preBötC neurons active during sighs but not eupnea, so-called 'sigh-only' neurons, has been identified (PMID 18287547). Additionally, a different subset of preBötC neurons has been identified with rhythmogenic bursting properties that, even after the neurons are synaptically isolated, appear to generate both eupneic and sigh-like rhythms intrinsically (PMID 18287547). The above studies suggest both intrinsic and synaptic mechanisms contribute to eupneic and sigh rhythmogenesis. Eupnea and sigh: Gasping Under low levels of oxygen, the preBötC rearranges its activity, which may require the assistance of other brain structures like the pons, to generate a rhythmic gasping-related pattern. Gasping-related bursts are characterized by faster rise time and shorter duration, and emerge at a lower frequency than eupnea. Under a low-oxygen state (hypoxia), the respiratory network responds by transitioning through an augmentation phase followed by a depression phase, controlled in the pre-BötC. During the depression phase, the inspiratory burst changes from an augmenting bell-shaped burst to a decrementing burst, a primary feature of gasping. 
Neuronal discharge patterns are altered as synaptic inhibition is depressed, evidence of a rearrangement of the network, presumably attributable to changes in synaptic connectivity strengths as well as modifications in the intrinsic properties of rhythmogenic preBötC neurons. In summary, the preBötC gives rise to more than one breathing-related rhythm: inspiratory (eupnea), sigh, and gasping. This single neuronal network can create multiple respiratory rhythmic patterns and is by itself both necessary and sufficient to generate these respiratory rhythms. Neighboring respiratory sites and nuclei: Located within the ventrolateral medulla, the pre-Bötzinger complex contains subnetworks with distinct synaptic and intrinsic membrane properties. In mammals, the respiratory network system and the nuclei controlling breathing modulation are found along the neuraxis. The neuronal networks involved in respiratory function are located in the ventral respiratory column (VRC). From rostral to caudal, these networks include the retrotrapezoid nucleus/parafacial respiratory group complex (RTN/pFRG), the Bötzinger complex, the preBötzinger complex (preBötC), as well as the rostral and the caudal divisions of the ventral respiratory group (rVRG and cVRG). The dorsal pons, including the Kölliker-Fuse and the parabrachial nuclei, plays an important role in respiratory control and rhythm generation. Other areas that aid in breathing control are the cerebellum, neocortex, and the periaqueductal gray (speech and breathing), although the mechanisms are not yet well explained. Monosynaptic projections to the preBötC have been mapped. Efferent projections from the preBötC to other respiratory and non-respiratory sites throughout the brain and central nervous system have been mapped too. Mechanism of rhythm generation: The exact mechanism of the rhythm generation and transmission to motor nuclei remains controversial and the topic of much research. Ionic currents: Persistent sodium current (INaP) There are several inward currents that are proposed to help produce action potentials and bursts in pacemaker neurons. There are two main voltage-dependent sodium currents that contribute to the depolarization and firing of action potentials in neurons. The fast and transient sodium current produces a large depolarization that fires the initial action potential in neurons; however, this current is quickly inactivated and does not help maintain bursting activity in neurons. To achieve bursts, a persistent sodium current provides enough depolarization to facilitate the firing of action potentials during a burst. Unlike the fast and transient sodium current, the persistent sodium current (INaP) is activated at very low membrane potentials and has a much slower inactivation, which allows neurons to intrinsically fire action potentials at sub-threshold membrane potentials. Studies have shown that the inactivation of this persistent sodium current helps end bursts in pacemaker neurons. The amount of time it takes for INaP to recover from inactivation establishes the timeframe between each burst. The neuron can receive synaptic inputs and different amounts of inward and outward currents to regulate the time between each burst, which ultimately helps generate a specific breathing pattern. Ionic currents: NALCN NALCN sodium leak channels have been hypothesized to give rise to an inward current that may play an important role in the modulation of bursting and spiking activity. 
These nonselective cation channels may provide a voltage-independent sodium current that also helps slightly depolarize neurons. The channels are regulated by G protein–coupled receptors that can activate or inhibit the NALCN channels depending on the neurotransmitter that binds the receptor and the specific signaling pathway that is involved. Activation of M3 muscarinic receptors by acetylcholine and of NK1 receptors by substance P significantly increases NALCN currents, while activation of CaSR by calcium stops the flow of the currents. Since NALCN sodium leak channels may contribute to the depolarization of neurons, their regulation by G protein-coupled receptors may be vital for the alteration of bursting and breathing rhythms. Ionic currents: Calcium-activated non-specific cation current (ICAN) Other inward currents that help generate intrinsic spiking and bursting in pacemaker neurons are the calcium current and calcium-activated nonspecific currents (ICAN). When a neuron becomes depolarized, voltage-gated calcium channels become activated and calcium is able to flow into the cell, which usually leads to the release of neurotransmitters. Calcium-sensitive dyes have shown that internal concentrations of calcium increase during bursts. The activation of different calcium channels has distinct effects on the activity of neurons in the pre-Bötzinger complex. L-type calcium channels are known to increase the frequency of action potentials in some neurons, which might be the reason calcium influx through these channels has been observed during the augmentation phase when tissues have low levels of oxygen. P/Q-type calcium channels are mainly responsible for the release of neurotransmitters that excite, or activate, postsynaptic neurons. Studies have shown that blockage of these channels leads to the inhibition of sighs, which indicates calcium flow through these channels is necessary for sighs. Other research has also suggested that calcium flow through N-type calcium channels is essential for normal breathing, and is responsible for the activation of calcium-dependent potassium channels. Calcium-activated nonselective cation currents are important for the intrinsic spiking and bursting activity in CS (cadmium-sensitive) pacemaker neurons. Metabotropic glutamate 1/5 receptors appear to be important for the increase in intracellular calcium that activates ICAN. The initial burst in a neuron usually leads to the activation of the transient sodium current and several types of calcium currents. These currents depolarize the cell enough to activate NMDA receptors and ICAN, which helps the cell regenerate its bursts. Ionic currents: The ratio between inward and outward currents helps determine the activity of pacemaker neurons in the pre-Bötzinger complex. The major outward currents involved in the regulation of neuron activity are potassium currents. Although the exact role of potassium currents is still being investigated, it appears that potassium and sodium leak currents are crucial for the rhythmicity of the pre-Bötzinger complex. Transient A-type potassium currents are more common in neurons that are involved in the inspiration process. When A-type potassium currents were blocked with 4-AP in slices of the pre-Bötzinger complex, synchronized bursting in inspiratory neurons was affected, as was communication with hypoglossal motor pools that help regulate breathing. This suggests that transient A-type potassium currents are needed for the synchronized bursts in inspiratory neurons and for effective respiratory control. 
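The interplay described above, a persistent sodium current whose slow inactivation ends each burst and whose recovery sets the interburst interval, balanced against potassium and leak currents, can be made concrete with a minimal quantitative sketch. The model below is a single-compartment conditional pacemaker written in the spirit of published preBötC models (for example, the Butera, Rinzel, and Smith formulation); all parameter values, the applied current used as a crude stand-in for tonic excitatory or neuromodulatory drive, and the burst-counting helper are illustrative assumptions rather than a reproduction of any specific published model.

```python
import math

def x_inf(v, theta, sigma):
    """Steady-state gating value, 1 / (1 + exp((v - theta) / sigma))."""
    return 1.0 / (1.0 + math.exp((v - theta) / sigma))

def simulate(i_app_pa=0.0, t_ms=20000.0, dt=0.1):
    """Forward-Euler integration; returns spike times (ms) detected as upward crossings of 0 mV."""
    c = 21.0                                        # membrane capacitance, pF
    g_na, g_k, g_nap, g_l = 28.0, 11.2, 2.8, 2.8    # conductances, nS (illustrative values)
    e_na, e_k, e_l = 50.0, -85.0, -58.0             # reversal potentials, mV
    v, n, h = -60.0, 0.0, 0.6                       # voltage, K activation, NaP inactivation
    spikes, above = [], False
    for step in range(int(t_ms / dt)):
        m_inf = x_inf(v, -34.0, -5.0)               # fast Na activation (instantaneous)
        mp_inf = x_inf(v, -40.0, -6.0)              # persistent Na activation (instantaneous)
        n_inf = x_inf(v, -29.0, -4.0)
        h_inf = x_inf(v, -48.0, 6.0)
        tau_n = 10.0 / math.cosh((v + 29.0) / -8.0)      # ms
        tau_h = 10000.0 / math.cosh((v + 48.0) / 12.0)   # slow: paces the burst cycle
        i_nap = g_nap * mp_inf * h * (v - e_na)          # persistent sodium current (INaP)
        i_na = g_na * m_inf ** 3 * (1.0 - n) * (v - e_na)
        i_k = g_k * n ** 4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt * (i_app_pa - (i_nap + i_na + i_k + i_l)) / c
        n += dt * (n_inf - n) / tau_n
        h += dt * (h_inf - h) / tau_h                    # slow inactivation ends each burst
        if v > 0.0 and not above:
            spikes.append(step * dt)
        above = v > 0.0
    return spikes

def bursts_per_minute(spikes, gap_ms=300.0, t_ms=20000.0):
    """Group spikes separated by less than gap_ms into bursts and report bursts per minute."""
    if not spikes:
        return 0.0
    bursts = 1 + sum(1 for a, b in zip(spikes, spikes[1:]) if b - a > gap_ms)
    return bursts / (t_ms / 60000.0)

if __name__ == "__main__":
    # Increasing i_app_pa is a crude stand-in for tonic depolarizing drive (e.g., from
    # neuromodulators); in such models it typically shortens the interburst interval
    # until firing eventually becomes tonic.
    for drive in (0.0, 5.0, 10.0):
        print(f"drive = {drive:4.1f} pA -> {bursts_per_minute(simulate(drive)):.1f} bursts/min")
```

In models of this kind, removing the persistent current (setting g_nap to zero) silences bursting, while stronger tonic drive compresses the burst period, mirroring the qualitative dependence on INaP and on excitability described in this section.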
Other potassium channels, such as large-conductance calcium-dependent potassium channels and sodium- and chloride-dependent potassium channels, appear to end burst potentials in neurons. Moreover, ATP-dependent potassium channels help neurons detect changes in energy or oxygen levels to modify breathing patterns. These channels are activated by decreases in ATP, which suggests they provide the needed hyperpolarization during hypoxia. Neuromodulation of preBötC rhythmicity: Several synthetic compounds have been shown to act on neurons specific to the preBötC, most being selective agonists or antagonists to receptor subtypes on neurons in the vicinity. Since many of these neurons express GABA, glutamate, serotonin and adenosine receptors, chemicals custom-tailored to bind at these sites are most effective at altering respiratory rhythm. Neuromodulation of preBötC rhythmicity: Adenosine modulates the preBötC output via activation of the A1 and A2A receptor subtypes. An adenosine A1 receptor agonist has been shown to depress preBötC rhythmogenesis independent of the neurotransmitters GABA and glycine in in vitro preparations from 0- to 7-day-old mice. Another synthetic drug specific to the adenosine A2A receptor subtype is CGS-21680, which has been shown to cause apneas in 14- to 21-day-old rat pups in vivo. For this reason, it has been used as a model to study pathological conditions such as apnea of prematurity and sudden infant death syndrome. Neuromodulation of preBötC rhythmicity: The complex regulation of respiratory rhythm involves the integration of multiple signaling molecules and the activation of numerous diverse metabotropic and ionotropic receptors. These include norepinephrine, serotonin, acetylcholine, substance P, ATP, TRH, somatostatin, dopamine, endorphins, and adenosine, which in turn activate G protein-coupled receptors to produce the diverse responses mediated by the pre-Bötzinger complex. Neuromodulation of preBötC rhythmicity: Nonpacemaker and pacemaker neurons involved in inspiration are stimulated by norepinephrine (NE). They are found within the pre-BötC and act via alpha-1, alpha-2, and beta-noradrenergic mechanisms. NE induces ICAN-dependent bursting in active nonpacemakers and depolarizes CI (cadmium-insensitive) pacemakers, increasing the frequency of their bursting. In CS pacemakers, NE increases only the amplitude of the depolarizing drive potential and the number of action potentials during the burst, but does not affect their burst frequency, unlike in CI pacemakers. Neuromodulation of preBötC rhythmicity: Serotonergic neurons are also involved in breathing systems. Their actions are diverse and dependent upon the activity level and species of the animal. Serotonin plays a critical role in altering the pacemaker neurons involved in gasping and normal respiratory activity. Blocking of the 5-HT2 receptor eliminates the bursts occurring in the pacemaker neurons and leads to the abolition of gasps. The blocking of this receptor is therefore problematic, especially in SIDS, because gasping is an important mechanism involved in autoresuscitation. A lack of serotonin binding to the serotonin receptor 2 leads to an inability to autoresuscitate owing to the lack of drive for gasping. Neuromodulation of preBötC rhythmicity: Substance P, a peptidergic modulator, also plays a role in neuromodulation of the pre-BötC. It is often coreleased with other neurotransmitters. Substance P increases the inspiratory frequency at the level of the network and behavioral systems. 
Cellularly, substance P slowly depolarizes nonpacemaker neurons, causing an increase in action potential firing rate. The neuropeptide can also activate CS pacemakers and, less dramatically, CI pacemakers. This leads to an increase in burst amplitude, frequency, and duration. When substance P is coreleased with serotonin, it plays a crucial role in the hypoxic response. This occurs because substance P stabilizes the respiratory rhythm through depolarization of neurons and activation of pacemaker neurons. Neuromodulation of preBötC rhythmicity: Acetylcholine plays an important modulatory role in the respiratory system by acting on nicotinic and muscarinic receptors. The suppression of muscarinic receptors and the activation of nicotinic receptors due to prenatal exposure to nicotine have been linked to SIDS. This is due to the reduction of excitatory synaptic transmission in a nucleus and increased excitability in motor neurons caused by nicotinic activation. Neuromodulation of preBötC rhythmicity: Many other neuromodulators have roles in respiration. The aforementioned are simply a few examples. Homeostatic changes in preBötC rhythmicity: Investigation of the respiratory response to acute intermittent hypoxia (AIH), that is, repeated episodes of hypoxia, reveals connections to various breathing disorders, such as Rett syndrome and obstructive sleep apnea. AIH leads to persistent increases in respiratory frequency and amplitude of integrated motor neuronal bursts in vivo. These changes, lasting for 90 minutes or longer, are termed long-term facilitation (LTF). AIH causes homeostatic changes in multiple sites of the respiratory system; the pre-BötC is likely the site of LTF, since intermittent hypoxia causes a persistent increase in frequency that outlasts the hypoxia. The respiratory system is regulated by multiple forms of long-term synaptic plasticity. The role of synaptic inhibition has been shown to be widespread and critical within the expiratory Bötzinger complex respiratory network, through cross-correlation and antidromic mapping techniques. The inhibitory connections discovered indicate their ability to connect different classes of neurons, their importance in regulating the interval of inspiration, and their ability to control the driving potential of respiratory neurons. These characteristics show the interaction between the parafacial respiratory group and the pre-Bötzinger complex, which allows for active expiration to be produced by synaptic inhibition within the respiratory network. Synaptic inhibition is critical for allowing the pre-Bötzinger complex to communicate with other respiratory centers in order to generate respiratory activity. Homeostatic changes in preBötC rhythmicity: Glycinergic and GABAergic inhibitory neurons make up half of all inspiratory neurons. Exposure of the pre-Bötzinger complex to these inhibitory neurotransmitters results in the rhythmic nature associated with respiration. Blocking this inhibition from glycine or GABA leaves the neurons incapable of switching from the active phase to the inspiratory phase, demonstrated by shorter inspiratory activity (as seen in vivo). However, rhythmic respiratory activity persisted in the absence of inhibitory synaptic transmission in vitro and in situ. This is largely due to the fact that respiratory rhythm results from numerous aspects, with synaptic inhibition playing only a single part. 
Homeostatic changes in preBötC rhythmicity: In addition to the inhibitory synaptic regulation of respiratory rhythm within the pre-Bötzinger complex, there is also an excitatory component utilizing mostly AMPA receptors. The generation of inspirations is due to a signaling cascade involving transient Ca2+ influx as a result of glutamate activating a postsynaptic receptor. In addition to glutamate's role in activating the synaptic drive of inspiration, it is also understood that pacemaker neurons, with autonomous voltage-dependent properties, are also responsible for the generation of respiratory rhythm. Evidence of this is seen when neurons within the pre-Bötzinger complex are isolated, which still results in rhythmic bursts arising from synaptically coupled micronetworks. Homeostatic changes in preBötC rhythmicity: However, the generation of respiratory rhythm requires other excitatory components, such as glutamate, in order to produce a wide range of behavioral functions including eupneic and sigh activity. The pre-Bötzinger complex is responsible for generating the wide variety of components that make up the respiratory rhythm. The accomplishment of these precise activities requires distinct neuron populations that overlap to allow the generation of different respiratory actions. Eupneic activity is generated using the excitatory mechanism through the NMDA glutamate receptor. Sighs are generated differently, originating from pacemaker neurons. The pre-Bötzinger complex is capable of generating differential rhythmic activities due to the intricate integration of modulatory, synaptic, and intrinsic properties of the neurons involved. Oxygen sensing: In addition to its involvement in generating respiratory rhythm, the pre-Bötzinger complex is also capable of integrating sensory information from changes in the biochemical environment, particularly oxygen. The capability to detect focal hypoxia causes an excitatory response in the motor output responsible for respiration, which causes alterations in the firing pattern of neurons within the pre-Bötzinger complex. Among these changes is the transition from a fully integrated network, involving complex synaptic interactions and autonomous mechanisms, to a system dependent on the activity of pacemaker neurons driven by sodium current activation. Hypoxia results in gasping due to the increased dependence on the sodium current and the overlap in networks between the generation of respiratory rhythm and intrinsic oxygen sensitization. Pathologies and the preBötC: Disturbances in neuromodulatory processes acting on ion channels, receptors, and second messengers have been associated with numerous pathophysiological conditions, such as Rett syndrome and sudden infant death syndrome. Pathologies and the preBötC: Rhythmic breathing continuously adapts to posture, activity level, speech, and can reveal whether someone is calm, agitated, or scared. Plasticity of the mechanisms involved in respiratory behavior is modulated in part by the preBötC. Disruption of the preBötC causes irreversible loss or major disruption of breathing in vivo. Breathing frequency and amplitude change according to the behavioral and metabolic demands of the organism. Breathing is thus extremely sensitive to the internal state of the organism. Pathologies and the preBötC: Associated diseases include Rett syndrome and sleep apnea.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fratricidins** Fratricidins: Fratricidins are antibodies that employ receptor pleiotropism to transform tumor cells into natural killer cells that destroy other tumor cells. In 2015 the effect was demonstrated by Dr. Richard A. Lerner of The Scripps Research Institute on human acute myeloid leukemia (AML) cells. Mode of action: Most AML cells have the thrombopoietin (TPO) receptor. Fratricidins activate this receptor, turning as many as 80% of the cells first into dendritic cells and then into cells that resemble natural killer (NK) cells. NK cells are capable of rapidly attacking potentially dangerous pathogens and tumors even if they do not carry the biomarkers normally recognized by other immune cells. These induced NK cells possessed extended tendrils that had made their way through the outer membranes of nearby AML cells. The antibody-induced killer cells make large amounts of perforin, IFN-γ and granzyme B, in a different cascade from that caused by the "normal" antibody for this receptor. In lab tests, a "modest" number of NK cells eliminated about 15 percent of the surrounding leukemic cell population in 24 hours. These cells attacked only related leukemia cells, not unrelated breast cancer cells.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Games and learning** Games and learning: Games and learning is a field of education research that studies what is learned by playing video games, and how the design principles, data and communities of video game play can be used to develop new learning environments. Video games create new social and cultural worlds – worlds that help people learn by integrating thinking, social interaction, and technology, all in service of doing things they care about. Computers and other technologies have already changed the way students learn. Integrating games into education has the potential to create new and more powerful ways to learn in schools, communities and workplaces. Games and learning researchers study how the social and collaborative aspects of video game play can create new kinds of learning communities. Researchers also study how the data generated by game play can be used to design the next generation of learning assessments. Research: The games and learning research world studies how new digital media tools shift the topic of education research from recalling and repeating information to being able to find it, evaluate it and use it compellingly at the right time and in the right context. Games and learning research explores how games and game communities can lead to 21st-century educational skills such as higher order thinking, the ability to solve complex problems, think independently, collaborate, communicate and apply digital tools to effectively gather information. Research: Research shows the educational and social benefits of digital games. Games do not need to be specifically geared towards education to be educational tools. Games can bring together ways of knowing, ways of doing, ways of being, and ways of caring. As John Dewey argued, schools are built on an obsession with facts. Students need to learn by doing, and with gaming, students can learn by doing something as a part of a larger community of people who share common goals and ways of achieving those common goals, making gaming a benefit for social reasons as well. Gaming has also changed the look of content-driven curriculum in schools. In content-driven media, people learn by being told and reflecting on what they are told. In gaming, game designers create digital environments and game levels that shape, facilitate and even teach problem solving. Games also teach students that failure is inevitable, but not irrevocable. In school, failure is a big deal. In games, players can just start over from the last save. A low-cost failure ensures that players will take risks, explore and try new things. Much of the debate about digital games for education was based on whether or not games are good for education. But that question is overly simplistic. The National Research Council's report on laboratory activities and simulations makes clear that the design and not merely the medium of a physical or virtual learning activity determines its efficacy. Digital games are a medium with certain affordances and constraints, just as physical labs and virtual simulations are media with certain affordances and constraints. Simulations and digital games actually share many similarities in this regard. Although there are multiple definitions for games, the key characteristics differentiating games from simulations involve the explicit inclusion of (a) rules for engaging with the simulation, (b) goals for players to pursue, and (c) means for indicating players' progress toward those goals. 
Properly designed, features of games can provide powerful affordances for motivation and learning. Individual studies have shown, for example, that well designed games can promote conceptual understanding and process skills, can foster a deeper epistemological understanding of the nature and processes through which science knowledge is developed and can produce gains in players' willingness and ability to engage in scientific practices and discourse. In his book What Video Games Have to Teach Us About Learning and Literacy, James Paul Gee talks about the application and principles of digital learning. Gee has focused on the learning principles in video games and how these learning principles can be applied to the K-12 classroom. Successful video games are good at challenging players. They motivate players to persevere and teach players how to play. Gee's video game learning theory includes his identification of thirty-six learning principles, including: 1) Active Control, 2) Design Principle, 3) Semiotic Principle, 4) Semiotic Domain, 5) Meta-level Thinking, 6) Psychosocial Moratorium Principle, 7) Committed Learning Principle, 8) Identity Principle, 9) Self-knowledge Principle, 10) Amplification of Input Principle, 11) Achievement Principle, 12) Practice Principle, 13) Ongoing Learning Principle, and 14) Regime of Competence Principle, among others. Within these learning principles Gee shows the reader the various ways in which games and learning are linked and how each principle supports learning through gaming. One example would be Learning Principle 6, the "Psychosocial Moratorium" Principle, where Gee explains that in games, learners can take risks in a space where real-world consequences are lowered. Another of Gee's principles, #8, which shows the importance of games and learning, states that learning involves taking on and playing with identities in such a way that the learner has real choices (in developing the virtual identity) and ample opportunity to meditate on the relationship between new identities and old ones. There is a tripartite play of identities as learners relate to, and reflect on, their multiple real-world identities, a virtual identity, and a projective identity. Other research takes the position that these standards and testing methods are not conducive to teaching methods that incorporate video games. Games alone will not make schools more efficient, cannot replace teachers or serve as an educational resource that can reach an infinite number of students. The extent of the roles games will play in learning remains to be seen. More research in this area is needed to determine the impact of games and learning. Research: Peter Gray, who has conducted research on early childhood learning, states that gaming is a purely beneficial activity in young children. He states that children are able to choose how to most effectively use their time and that extensive use of a particular medium of learning shows they are taking something valuable from it. He goes on to state the significance of the computer in the modern age and that not utilizing it as a learning tool is simply foolish. Video gaming has shown positive levels of improvement in areas of cognitive function. In their study "Improving Multi-Tasking Ability through Action Videogames", Chiappe and colleagues determined that 50 hours of gaming significantly improved results on a performance test modeled after skills used when piloting an aircraft. 
Aside from this, attention and vigilance, as well as basic visual processes, have been shown to improve with allotted video game time. Application: Digital learning tools have the potential of being customized to fit the abilities of individual students and can engage them with interactive tasks and simulate real-life situations. Games can create new social and cultural worlds that may not have been available to everyone in the past. These worlds can help people learn by integrating thinking, social interaction, and technology, all in service of doing things they care about. Video games are important because they let people participate in and experience new worlds. They let players think, talk, and act in new ways. Indeed, players inhabit roles that are otherwise inaccessible to them. One example of a game where players are learning while playing would be The Sims, a life simulation game where players need to make decisions that alter their character's life. They can manipulate the scenario to create digital lives where they can experience the struggles of single parenthood or poverty. Players in this game are not allowed to modify a previous decision to alter the outcome, even if the outcome is unpleasant. The goal is to survive to the best of their abilities. The game is complicated and difficult, just as it would be to live a real life. Regarding a more traditional approach to education, The Sims has been used as a platform for students to learn a language and explore world history while developing skills such as reading, math, logic and collaboration. While not all researchers agree, some recent studies have shown the positive effects of using games for learning. A study carried out by Professor Traci Sitzmann at the University of Oregon among 6,476 students states that "trainees in the game group had 11 percent higher factual knowledge levels, 14 percent higher skill-based knowledge levels, and 9 percent higher retention levels than trainees in the comparison group". Some other aggregated studies also show an increase in learning performance thanks to the use of videogames. Controversy: Critics suggest that lessons people learn from playing video games are not always desirable. Douglas Gentile, an associate professor of psychology at Iowa State University, found that children who repeatedly play violent video games are learning thought patterns that will stick with them and influence behaviors as they grow older. Researchers from this study found that over time children started to think more aggressively, and when provoked at home, school or in other situations, children reacted much like they did when playing a violent video game. But even the harshest critics agree that people can learn something from playing video games. While research on the behavioral and cognitive impacts of video games with violence has shown mixed outcomes, games with little or no violence have shown promising results. Elizabeth Zelinski, a professor of gerontology and psychology at the University of Southern California, states that some digital games have been shown to improve the function of the brain, while others have the potential to reverse cognitive loss associated with aging. Some games require players to make decisions ranging from simple to quite complex to drive their progress. 
Controversy: Some researchers, such as Emma Blakey, a PhD researcher in developmental psychology at the University of Sheffield in England, question whether a greater reliance on video games is in students' best interests, indicating there is little proof that skillful game play translates into better test scores or broader cognitive development. Blakey notes that very few studies have examined whether video games improve classroom performance and academic achievement.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**G-Man (Half-Life)** G-Man (Half-Life): This is a list of characters in the Half-Life video game series, which comprises Half-Life, Half-Life 2, Half-Life: Alyx, and their respective expansion packs and episodes. Introduced in Half-Life and expansion packs: This section deals with characters that appear in Half-Life, Opposing Force, Blue Shift, and Decay. Introduced in Half-Life and expansion packs: Gordon Freeman Gordon Freeman, PhD, is the silent protagonist of the Half-Life series and the playable character in Half-Life and all games in the Half-Life 2 series. He is a theoretical physicist and holds a PhD from MIT in that field. At the time of Half-Life, he works at the Black Mesa Research Facility, a facility in New Mexico, conducting nuclear and subatomic research. Introduced in Half-Life and expansion packs: G-Man The G-Man (voiced by Michael Shapiro) is a mysterious recurring character. He is known to display peculiar behavior and capabilities beyond those of an ordinary human, and his identity and motives remain almost entirely unexplained. He plays the role of an overseer and employer, both observing the player as the games progress and pulling strings to control the outcome of specific events throughout the Half-Life saga. The G-Man's constant appearances in the Half-Life games, as well as his revealing monologues with series protagonist Gordon Freeman, imply that he is of great importance and somewhat anchors the efforts of the player. His mysterious nature has made him an icon of the Half-Life series. Introduced in Half-Life and expansion packs: Barney Calhoun Barney Calhoun is the player character in Half-Life: Blue Shift and a major character in Half-Life 2 as well as Half-Life 2: Episode One. Michael Shapiro provided Barney's voice in the games of the Half-Life series. Scott Lynch, Valve's chief operating officer, lent his face for use in-game as Barney in Half-Life 2. Introduced in Half-Life and expansion packs: Barney's name stemmed from the earlier alpha versions of Half-Life, in which the model for the security guards bore a resemblance to actor Don Knotts, inspiring comparisons with Knotts's character Barney Fife from The Andy Griffith Show, a name which in the United States has long been a disparaging term for an inept policeman or security guard. Initially, the "Barneys" were intended to be hostile NPCs who would attack the player. Introduced in Half-Life and expansion packs: In Half-Life: Blue Shift, the playable Barney progresses through Black Mesa to escape the events of the Resonance Cascade and is able to do so, in contrast to Gordon Freeman and Adrian Shephard, who are held in stasis. In Half-Life 2, Barney works as a mole for the Lambda Resistance in the Combine Civil Protection Forces. He provides the player information in the first chapter, leading him to Kleiner and Vance, and at the end of the second chapter, he provides the player with his crowbar. The fact that Barney owes Gordon Freeman a beer is a running gag in the series. Introduced in Half-Life and expansion packs: Adrian Shephard Adrian Shephard is the protagonist of Half-Life: Opposing Force. He is a 22-year-old corporal in the United States Marine Corps (USMC) stationed at the fictional Santego Military Base in Arizona who is mysteriously transferred to the Hazardous Environment Combat Unit, a special USMC unit. Three months after his transfer, he is sent to the Black Mesa Research Facility (BMRF) to defeat the Xenian invasion and summarily execute all BMRF personnel. 
However, the Bell Boeing V-22 Osprey transporting him is hit by a Xenian energy blast and crashes; he is rescued by a group of Black Mesa scientists, and due to never making it to his designated landing zone, Shephard remains unaware of the secret orders to kill all BMRF employees. Making his way through the facility while being observed by the G-Man, he eventually comes across a thermonuclear weapon brought in by the Central Intelligence Agency and deactivates it, but the G-Man later reactivates it, leading to the eventual destruction of Black Mesa. In the end, the G-Man reveals that he has successfully argued for Shephard's life, detaining him in some unknown void. The G-Man expresses a degree of respect for Shephard, offering praise for his ability to "adapt and survive against all odds" which "rather reminds [the G-Man] of [himself]". Introduced in Half-Life and expansion packs: Shephard is briefly mentioned in Half-Life: Blue Shift, where a HECU marine grumbles about taking over some of Shephard's squad's duties. Shephard was planned to be the player character of Arkane Studios' Ravenholm spinoff game, developed around 2007 to 2008, a project which Valve later cancelled. Valve also affirmed that Shephard had no connection to Portal after players found that the keyboard images in game showed the lit characters "ASHPD" and believed that hinted at Shephard's return; the letters instead referred to the long name of the "Aperture Science Handheld Portal Device" a.k.a. the "portal gun", with the nearness to Shephard's name a "total freak coincidence" according to Valve's Doug Lombardi. Introduced in Half-Life and expansion packs: Rosenberg Dr. Rosenberg (voiced by Jon St. John) is a scientist and a survivor of the Black Mesa incident. He first appears in Half-Life: Decay. When Gina Cross and Colette Green first arrive at the test chamber's control room and are receiving instructions from Dr. Keller, Rosenberg interrupts and voices his concern to Keller over having the anti-mass spectrometer run above 90% capacity, which is past the safety buffer zone for the equipment. Dr. Keller, however, dismisses his concern and states that the administrator's orders for this were clear. He tells Rosenberg that he can either stay and watch the experiment or return to his labs by the train yards. Rosenberg remains, and shortly thereafter the Resonance Cascade occurs. Introduced in Half-Life and expansion packs: Immediately after the disaster, Rosenberg converses with Dr. Keller and makes it clear that he believes their greatest responsibility should be the safety of the people at Black Mesa. Although Keller thinks that they should attempt to reset the displacement fields first, he eventually agrees with Rosenberg, and they come up with a plan to contact the military, so that they can help and evacuate the facility as soon as possible. Gina and Colette escort Rosenberg through the Hazard Course to a satellite communications center on the surface, where he is able to transmit a distress signal. Dr. Rosenberg decides to wait there for the military, and this is the last time he is seen in Decay as Gina and Colette return below to assist Dr. Keller. However, his voice is heard once more in the game later on. Introduced in Half-Life and expansion packs: In Half-Life: Blue Shift, Rosenberg makes his first appearance during the Hazard Course tutorial, long before Calhoun encounters him in the train yards. He can be seen behind the observer's window during the duck-jump portion of the training. 
Introduced in Half-Life and expansion packs: Sometime between Gina and Colette's last sight of Rosenberg in Decay and Calhoun's eventual rescue of the scientist in Blue Shift, he tries to enact an escape plan to get out of Black Mesa with the help of several other scientists. During this time, he is captured by soldiers and held captive in a freight car for questioning, while a colleague, Harold, is cornered and fatally wounded. Before Harold dies, Barney Calhoun discovers him, and Harold instructs Calhoun to find Dr. Rosenberg and help him with his plan. Calhoun is able to reach the train yards and free Dr. Rosenberg. Rosenberg informs him that their plan is to use the equipment in the prototype labs to teleport to safety. Introduced in Half-Life and expansion packs: He leads Calhoun to the unused part of the complex where two other scientists, Walter Bennet and Simmons, are already preparing the machine. Rosenberg instructs Calhoun that he must activate and align a relay device on Xen in order for them to be able to accurately set their destination. Calhoun travels to Xen and is successful in accomplishing this task, but after returning through the portal back to Earth (it is here that Gina and Colette in Decay, temporarily caught in a harmonic reflux, hear Rosenberg's voice calling Calhoun through the portal), they discover that they need another power cell to replenish the teleporter's power for their escape. Calhoun acquires a newly charged power cell from the lab's sub-basement and delivers it to Rosenberg and the others. Dr. Rosenberg then initiates the system and brings it online. They all narrowly avoid the military's invasion of the prototype labs, teleporting to the safety of an unnoticed access tunnel. They get into an SUV and leave Black Mesa. Introduced in Half-Life and expansion packs: Rosenberg's fate remains unknown. Gina Cross Dr. Gina Cross (voiced by Kathy Levin) is a Black Mesa scientist who first appears as the Holographic Assistant for Gordon Freeman in Black Mesa's Hazard Course and then later as one half of the protagonists in Half-Life: Decay. Introduced in Half-Life and expansion packs: In Decay, Cross is the one who delivers the GG-3883 crystal sample to the delivery system and then heads to an area below the test chamber, where Dr. Colette Green is stationed, to fix a jam in the lift that allows the specimen to be delivered up to Gordon. After the Resonance Cascade occurs, Cross teams up with Dr. Green to battle their way through the now alien-infested facility. They first escort Rosenberg to the surface to contact the military, and then under the guidance of Dr. Richard Keller, they succeed in starting a resonance reversal to help lessen the effects of the dimensional rift. Introduced in Half-Life and expansion packs: In Half-Life: Blue Shift, Cross can briefly be seen on a security camera in the surveillance room, delivering the GG-3883 crystal. In Half-Life: Opposing Force, Adrian Shephard finds Cross's corpse in Xen after being teleported there by the Displacer Cannon, which implies that she died sometime after the events of Decay. Randy Pitchford, the president and CEO of Gearbox Software, has since confirmed this fate. Cross was originally planned to be Gordon Freeman's spouse as well as another playable character in the original Half-Life, but this idea was cut from the final game. Introduced in Half-Life and expansion packs: Colette Green Dr. 
Colette Green (voiced by Lani Minella) is a Black Mesa scientist and one half of the protagonist team in Half-Life: Decay. Introduced in Half-Life and expansion packs: In Decay, Dr. Green's role in the experiment is to make preparations in a room below the test chamber and initiate the Anti-Mass Spectrometer to run at 105%. Dr. Gina Cross also enters the same room to fix a jam in the specimen delivery system's lift mechanism, meaning they are both in the same place when the Resonance Cascade finally occurs. Following the disaster, the two team up to fight their way through the facility for survival. They escort Dr. Rosenberg to the surface to call the military for help and then, with the help of Dr. Richard Keller, manage to start a resonance reversal to prevent the dimensional rift from becoming too large to be repaired. Introduced in Half-Life and expansion packs: The outcome for Dr. Green, along with the rest of the survivors in Decay (with the exception of Dr. Cross, who later died in Xen), is unknown to the other Black Mesa survivors. Introduced in Half-Life and expansion packs: Richard Keller Dr. Richard Keller (voiced by Brice Armstrong) is a Black Mesa scientist, working with Colette and Gina. He appears in Half-Life: Decay. He is a 55-year-old senior scientist in a wheelchair. He gives missions to Colette and Gina during the game. Keller also condemns Gordon Freeman and asks himself what Kleiner sees in him. His final fate is unknown. Introduced in Half-Life and expansion packs: Keller was originally going to be an antagonist who lied about his inability to walk, but the idea was scrapped. Walter Bennet Dr. Walter Bennet (voiced by Harry S. Robins) is a Black Mesa scientist. He is seen in Half-Life: Blue Shift. Introduced in Half-Life and expansion packs: In Blue Shift, Dr. Bennet is seen fixing a battery in Dr. Rosenberg's office, along with Dr. Simmons. The three scientists soon get it fixed with the help of Barney Calhoun, and they start their teleportation out of Black Mesa. The four successfully make it out of the facility, making Dr. Bennet one of the few known survivors of the incident. They open the gates and start their journey to the outside world in an SUV. Introduced in Half-Life and expansion packs: Dr. Bennet is briefly mentioned in Half-Life: Opposing Force. As Adrian Shephard travels within Sector E of Black Mesa, he enters a testing laboratory where Xen specimens were being experimented on prior to the Resonance Cascade. He opens up a transmission intended for Dr. Bennet, revealing a hologram of a scientist talking about the results of an experiment conducted on a Barnacle, one of the Xen creatures being examined. Following the transmission, Shephard takes a nearby Barnacle specimen that was intended for Dr. Bennet to experiment on before the Resonance Cascade. Introduced in Half-Life and expansion packs: Dr. Bennet's final fate is unknown. Simmons Dr. Simmons is a Black Mesa scientist. He is seen in Half-Life: Blue Shift. Introduced in Half-Life and expansion packs: In Blue Shift, Dr. Simmons is seen fixing a battery in Dr. Rosenberg's office, along with Dr. Walter Bennet. The three scientists soon get it fixed with the help of Barney Calhoun, and they start their teleportation out of Black Mesa. The four successfully make it out of the facility, making Dr. Simmons one of the few known survivors of the incident. They open the gates and start their journey to the outside world in an SUV. 
Introduced in Half-Life and expansion packs: Simmons does not talk at all in the game, and his first name is unknown. Furthermore, his final fate, like that of his colleagues, is unknown. Introduced in Half-Life 2 and episodes: This section deals with characters that appear in Half-Life 2, Episode One, and Episode Two. Introduced in Half-Life 2 and episodes: Alyx Vance Alyx Vance (voiced by Merle Dandridge in Half-Life 2 and its episodes and by Ozioma Akagha in the prequel Half-Life: Alyx) is a prominent figure in the human resistance against the rule of the Combine over Earth and their human representative, Dr. Wallace Breen. Alyx is the daughter of Dr. Eli Vance and his deceased wife Azian, and she becomes a close friend and ally of Gordon Freeman over the course of Half-Life 2. Introduced in Half-Life 2 and episodes: The 2020 VR title Half-Life: Alyx, which takes place between the events of Half-Life and Half-Life 2, focuses on Alyx and Eli Vance as they fight against the Combine's occupation of Earth. Isaac Kleiner Dr. Isaac Kleiner (voiced by Harry S. Robins), a Black Mesa survivor, is one of the leading scientists in the human resistance to the Combine. His character design is based on the generic bald, bespectacled scientist model (named "Walter") from the original Half-Life. Dr. Kleiner was one of Gordon Freeman's professors at MIT, recommending him to Black Mesa's Civilian Recruitment Division for employment and working with him as part of the facility's Anomalous Materials team. He managed to survive the Resonance Cascade disaster of the first game with the aid of Eli Vance. In Half-Life 2, he operates an underground lab in an abandoned Northern Petrol building. A teleportation system, developed jointly by Kleiner and Eli Vance, connects to Vance's facility, several miles away. As a pet, Dr. Kleiner keeps a debeaked headcrab he calls 'Lamarr' (after the 1930s actress and inventor Hedy Lamarr). In Episode One, Kleiner appears on the video screens previously reserved for Dr. Breen's propaganda and instructs survivors to evacuate City 17, also encouraging them to procreate. He rallies people to prepare for the Combine's retaliation, stating that several new technologies developed during their occupation would be deployed as soon as possible to help fight the Combine. Introduced in Half-Life 2 and episodes: In Episode Two, Kleiner is working out of the White Forest Rocket Facility with Eli Vance and Arne Magnusson on a device intended to close the Combine Superportal created by the Citadel's destruction. He mostly appears during radio transmissions while guiding Alyx and Gordon to White Forest, and argues bitterly with Magnusson, who Vance states was Kleiner's rival for grant money at Black Mesa. Upon the discovery of the Borealis in Judith Mossman's decoded message, Kleiner expresses a wish to use the technology residing in the ship against the Combine, opposing Eli's vehement desire to destroy it in order to prevent "another Black Mesa". Introduced in Half-Life 2 and episodes: Eli Vance Dr. Eli Vance (voiced by Robert Guillaume in Half-Life 2 and its episodes and by James Moses Black in the prequel Half-Life: Alyx) is a physicist, researcher, and Harvard University graduate who worked with Gordon Freeman at Black Mesa. He wears a prosthetic that replaces his left leg beneath the knee, which he lost when he was attacked by a Bullsquid while helping Dr. Isaac Kleiner climb over a wall into a Combine city. 
He is Alyx Vance's father; his late wife, Azian, died in the aftermath of the resonance cascade. The leader of the Lambda Resistance, Dr. Vance was the first human being to make peaceful contact with the Vortigaunt species and thus the "first collaborator", quickly persuading the alien race to ally with humanity against the Combine invasion of Earth. In Episode Two, Eli Vance works at the White Forest base before being killed by a Combine Advisor. Introduced in Half-Life 2 and episodes: The 2020 VR game Half-Life: Alyx, which takes place between the events of Half-Life and Half-Life 2, focuses on Eli and Alyx Vance as they fight against the Combine's occupation of Earth. As a result of the events of the game, his death is prevented, albeit at the cost of his daughter Alyx becoming a willing agent of the G-Man. Upon learning the truth, Eli seeks to rescue her. Introduced in Half-Life 2 and episodes: Arne Magnusson In Episode Two, Dr. Arne Magnusson (voiced by John Aylward) runs the White Forest base and is described as a Black Mesa survivor. He gets on poorly with Dr. Kleiner due to their clashing personalities, as spelled out by their very names: 'Magnus' means 'great' in Latin, while 'klein' means 'small' in German and Dutch. Magnusson's peculiar personality seems to have gained him much respect from the Vortigaunts, such as his assistant Uriah, who makes awed references to him. Introduced in Half-Life 2 and episodes: Magnusson also makes a remark to Freeman saying that if he successfully defends White Forest, then he will forgive Freeman for an earlier incident in Black Mesa, involving his 'Microwave Casserole', a reference to a scene in the first Half-Life. Dog Dog is a hulking gorilla-like robot belonging to Alyx Vance, built by her father Eli to provide both companionship and protection. Alyx subsequently upgraded the robot into its current form. Despite its name, Dog is anthropomorphic in appearance. Dog provides support to Freeman during training with the Gravity Gun, and makes appearances several times after. Introduced in Half-Life 2 and episodes: Judith Mossman Dr. Judith Mossman (voiced by Michelle Forbes) is introduced in Half-Life 2 as a physicist working with Eli Vance at the Black Mesa East Research Facility. Although she is apparently friendly with other scientists, her condescending attitude toward laypeople annoys Alyx. Over the course of the game, she is revealed as a triple agent who betrays the resistance in an attempt to form an alliance with Dr. Breen, then betrays him in turn. In the follow-on episodes, she is again working for the resistance in a remote location. Introduced in Half-Life 2 and episodes: Odessa Cubbage Colonel Odessa Cubbage (voiced by John Patrick Lowrie) is a member of the Resistance against the Combine who speaks in distinct Received Pronunciation. He wears a jacket with emblems on it indicating that he was possibly once a security officer as part of the University of Rochester Security Services. According to Raising the Bar, his model was based on the martial arts instructor for one of the game's developers, and the name was found in a spam filter. Introduced in Half-Life 2 and episodes: Odessa Cubbage leads a small Resistance base and town, dubbed "New Little Odessa", in a coastal region outside City 17. Before arriving at New Little Odessa, the player can see Cubbage speaking with the G-Man by looking through a binocular spotting-scope device. 
When Gordon Freeman arrives at New Little Odessa en route to Nova Prospekt, Cubbage is briefing members on the use of the rocket launcher against Combine gunships. Cubbage entrusts the rocket launcher to Gordon and never turns up to fight himself, instead staying behind to attempt to contact another Resistance settlement. Introduced in Half-Life 2 and episodes: Grigori Father Grigori (voiced by Jim French) is an Eastern Orthodox Christian priest who appears throughout the Ravenholm chapter of Half-Life 2. He is the only human survivor encountered in Ravenholm. Introduced in Half-Life 2 and episodes: He speaks enthusiastically about "tending to his flock", i.e. dispatching the remaining zombie inhabitants of the city with a Winchester Model 1886 and homemade traps while offering them consolatory words. He helps Gordon Freeman intermittently in Ravenholm, giving him a shotgun, combat tips and advice mingled with biblical quotations. Eventually, Grigori escorts Freeman through a cemetery infested with zombies to show him a hidden passage to the mines out of the haunted town. After waving Gordon off, Grigori continues fighting the hordes of enemies until he retreats into a nearby tomb, ignites a wall of fire around it and disappears, laughing maniacally. Introduced in Half-Life 2 and episodes: Wallace Breen Dr. Wallace Breen (voiced by Robert Culp) was the administrator of the Black Mesa Research Facility at the time of the "Black Mesa Incident," the events depicted in Half-Life, but he was neither seen nor mentioned by name (he was instead always referred to as "the Administrator"). After the Seven Hour War, he "negotiated" a peace agreement with the Combine that saved humanity at the cost of enslavement. Dr. Breen was appointed as ruler of Earth – a puppet of the Combine, who have little physical presence on the planet. In his propaganda messages to the people in City 17 (dubbed "Breencasts"), he often refers to the Combine as "our Benefactors". Introduced in Half-Life 2 and episodes: The Half-Life 2 art book, Raising the Bar, has information that indicates Breen used, at least at one point of the planned story if not in the final version, a radio transmitter tower on the surface (i.e., not in Black Mesa) to communicate directly to the Combine and negotiate a surrender. Draft scripts for Half-Life 2 indicate that this would have been shown in an introductory segment to the game carried out through a series of projector slides. One of the slides would have shown Breen at the foot of a tower wearing a headset linked directly to it, with arms held wide and speaking to the skies. Introduced in Half-Life 2 and episodes: Breen is alerted to the return of Gordon Freeman in Half-Life 2 when Gordon is temporarily teleported, by accident, to his office in the Citadel. Dr. Breen informs the Combine and immediately dispatches the forces at his disposal to capture Freeman and break the associated Resistance movement in City 17. Introduced in Half-Life 2 and episodes: During Gordon Freeman's raid on the Citadel, Freeman is temporarily in the custody of Breen, until Judith Mossman turns against the administrator. During this period, Breen makes a very notable statement while in the presence of Alyx Vance and her father, Eli (who are also in his custody). He claims that Gordon "has proven a fine pawn to those who control him." He also comments that Gordon's services are "open to the highest bidder," and says he would understand if Gordon doesn't want to discuss it in front of his friends. 
These remarks imply that Breen may be aware of the mysterious G-Man and his influence over Freeman. This is also hinted at in one of the "Breencasts" to the Sector Seventeen Overwatch in Nova Prospekt: "I have good reason to believe that in the intervening years, he was in a state that precluded further development of covert skills." When Judith Mossman frees Gordon Freeman and Alyx Vance in his office, Dr. Breen attacks Gordon by firing at him with the supercharged Gravity Gun; however, the charge does not kill Gordon, and Breen leaves the weapon behind while escaping. Gordon manages to stop him by destroying the Citadel's dark fusion reactor, which destroys the teleporter Breen attempted to use to escape in a massive explosion; the platform Breen was standing on collapses, dropping Breen from the Citadel to his death. Introduced in Half-Life: Alyx: This section deals with characters that appear in Half-Life: Alyx. Russell Russell (voiced by Rhys Darby) is a member of the Resistance who serves as a mechanic.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MTA2** MTA2: Metastasis-associated protein MTA2 is a protein that in humans is encoded by the MTA2 gene. MTA2 is the second member of the MTA family of genes. MTA2 protein localizes to the nucleus and is a component of the nucleosome remodeling and deacetylation complex (NuRD). Similar to the founding family member MTA1, MTA2 functions as a chromatin remodeling factor and regulates gene expression. MTA2 is overexpressed in human cancer and its dysregulated level correlates well with cancer invasiveness and aggressive phenotypes. Discovery: MTA2 was initially recognized as an MTA1-like 1 gene, named MTA1-L1, from a large-scale sequencing of randomly selected clones from human cDNA libraries in 1999. Clues about the role of MTA2 in gene expression came from the association of MTA2 polypeptides with the NuRD complex in a proteomic study. This was followed by targeted cloning of murine Mta2 in 2001. Gene and spliced variants: MTA2 is localized on chromosome 11q12-q13.1 in humans and on 19B in mice. The 8.6-kb-long human MTA2 gene contains 20 exons and seven transcripts, including three protein-coding transcripts, which are predicted to code for two polypeptides of 688 amino acids and 495 amino acids. The remaining four MTA2 transcripts are non-coding RNA transcripts ranging from 532 bp to 627 bp. The murine Mta2 gene consists of a 3.1-kb protein-coding transcript, coding for a protein of 668 amino acids, and five non-coding RNA transcripts, ranging from 620 bp to 839 bp. Structure: The amino acid sequence of MTA2 shares 68.2% homology with that of MTA1. MTA2 domains include a BAH (bromo-adjacent homology) domain, an ELM2 (egl-27 and MTA1 homology) domain, a SANT domain (SWI, ADA2, N-CoR, TFIIIB-B), and a GATA-like zinc finger. MTA2 is acetylated at lysine 152 within the BAH domain. Function: This gene encodes a protein that has been identified as a component of NuRD, a nucleosome remodeling deacetylase complex identified in the nucleus of human cells. It shows a very broad expression pattern and is strongly expressed in many tissues. It may represent one member of a small gene family that encode different but related proteins involved either directly or indirectly in transcriptional regulation. Their indirect effects on transcriptional regulation may include chromatin remodeling. MTA2 inhibits estrogen receptor transactivation functions and participates in the development of hormone independence in breast cancer cells. MTA2 also participates in the circadian rhythm through the CLOCK-BMAL1 complex. MTA2 inhibits the expression of target genes owing to its ability to interact with chromatin remodeling complexes, and modulates pathways involved in cellular functions, including invasion, apoptosis, epithelial-to-mesenchymal transition, and growth of normal and cancer cells. Regulation: Expression of MTA2 is stimulated by the Sp1 transcription factor and repressed by Kaiso. Growth regulatory activity of MTA2 is modulated through its acetylation by the histone acetylase p300 [12]. The expression of MTA2 is inhibited by Rho GDIα in breast cancer cells and by human β-defensins in colon cancer cells. MicroRNAs miR-146a and miR-34a also regulate the levels of MTA2 mRNA through a post-transcriptional mechanism. Targets: MTA2 deacetylates estrogen receptor alpha and p53 and inhibits their transactivation functions. MTA2 represses the expression of E-cadherin in non-small-cell lung cancer cells, but stimulates the expression of IL-11 in gastric cancer cells. The MTA2-containing chromatin remodeling complex targets the CLOCK-BMAL1 complex. 
Interactions: MTA2 has been shown to interact with:
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Board representation (computer chess)** Board representation (computer chess): Board representation in computer chess is a data structure in a chess program representing the position on the chessboard and associated game state. Board representation is fundamental to all aspects of a chess program including move generation, the evaluation function, and making and unmaking moves (i.e. search) as well as maintaining the state of the game during play. Several different board representations exist. Chess programs often utilize more than one board representation at different times, for efficiency. Execution efficiency and memory footprint are the primary factors in choosing a board representation; secondary considerations are the effort required to code, test and debug the application. Board representation (computer chess): Early programs used piece lists and square lists, both array based. Most modern implementations use a more elaborate but more efficient bit array approach called bitboards which map bits of a 64-bit word or double word to squares of the board. Board state: A full description of a chess position, i.e. the position "state", must contain the following elements: the location of each piece on the board; whose turn it is to move; and the status of the 50-move draw rule. The name of this rule is sometimes a bit confusing, as it refers to 50 moves by each player, and therefore 100 half-moves, or ply. For example, if the previous 80 half-moves passed without a capture or a pawn move, the fifty-move rule will kick in after another twenty half-moves. Board state: The state must also record whether either player has permanently lost the right to castle, both kingside and queenside. Board state: Finally, it must record whether an en passant capture is possible. Board representation typically does not include the status of the threefold repetition draw rule. To determine this rule, a complete history of the game from the last irreversible action (capture, pawn movement, or castling) needs to be maintained, and so is generally tracked in separate data structures. Without this information, a program may repeat positions despite having a winning advantage, resulting in an excessive number of draws. The board state may also contain secondary derived information like which pieces attack a square; for squares containing pieces, which spaces are attacked or guarded by that piece; which pieces are pinned; and other convenient or temporary state. Board state: The board state is associated with each node of the game tree, representing a position arrived at by a move, whether that move was played over the board, or generated as part of the program's search. It is conceptually local to the node, but may be defined globally, and incrementally updated from node to node as the tree is traversed. Types: Array based Piece lists Some of the very earliest chess programs, working with extremely limited amounts of memory, maintained serial lists (arrays) of the pieces in a conveniently searchable order, like largest to smallest; associated with each piece was its location on the board as well as other information, such as squares representing its legal moves. There were several lists, one set for white pieces and another for black pieces. The lists were usually divided into pieces and pawns. This was a compact representation because most squares of the board are unoccupied, but inefficient because acquiring information about the relationship of pieces to the board or to each other was tedious. 
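As a rough illustration of the piece-list idea just described, here is a minimal sketch in C; the type names, the 16-entry limit, and the square numbering (a1 = 0 ... h8 = 63) are assumptions for the example, not taken from any particular engine. It shows why a board-relative query such as "what occupies this square?" needs a linear scan, which is the inefficiency noted above.

```c
#include <stdio.h>

/* Hypothetical piece-list layout: each side keeps a small array of
   (piece type, square) pairs instead of a full 64-square board. */
typedef enum { KING, QUEEN, ROOK, BISHOP, KNIGHT, PAWN } PieceType;

typedef struct {
    PieceType type;
    int       square;            /* 0..63, a1 = 0, h8 = 63 */
} PieceEntry;

typedef struct {
    PieceEntry entries[16];      /* at most 16 pieces per side */
    int        count;
} PieceList;

/* Answering "what stands on square sq?" requires scanning the list;
   a square-centric representation would answer this with one array lookup. */
static const PieceEntry *piece_on(const PieceList *list, int sq) {
    for (int i = 0; i < list->count; i++)
        if (list->entries[i].square == sq)
            return &list->entries[i];
    return NULL;
}

int main(void) {
    PieceList white = { { { KING, 4 }, { PAWN, 12 } }, 2 };   /* Ke1, pawn on e2 */
    printf("e2 occupied: %s\n", piece_on(&white, 12) ? "yes" : "no");
    return 0;
}
```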
Piece lists are still used by many of today's programs in conjunction with a separate board representation structure, to give serial access to the pieces without searching the board. Types: Square list One of the simplest ways to represent a board is to create an 8x8 two-dimensional array (or, equivalently, a 64 element one-dimensional array). Each array element would identify what piece occupied the given square, or alternatively, if the square is empty. A common encoding is to consider 0 as empty, positive as white, and negative as black, e.g., white pawn +1, black pawn −1, white knight +2, black knight −2, white bishop +3, and so on. This scheme is called mailbox addressing. Types: A problem with this approach arises during move generation. Each move has to be checked to ensure it is on the board, significantly slowing down the process. One solution is to use a 12x12 array instead, with the outer edges filled with, say, the value 99. During move generation, the operation to check for a piece on the destination square will also indicate whether the destination square is off the board.Better memory usage can be achieved with a 10x12 array, which provides the same functionalities as a 12x12 one by overlapping the leftmost and rightmost edge files (which are marked as off-the-board). Some chess engines use 16x16 arrays to improve the speed of the rank and file number conversion and allow some special coding tricks for attacks etc. Types: 0x88 method The 0x88 method takes advantage of the fact that a chessboard's 8x8 dimensions are an even power of two (i.e. 8 squared). The board uses a one-dimensional array of size 16x8 = 128, numbered 0 to 127 rather than an array of size 64. It is basically two boards next to each other, the actual board on the left while the board on the right would contain illegal territory. The binary layout for a legal board coordinate's rank and file within the array is 0rrr0fff (The r's are the 3 bits used to represent the rank. The f's for the file). For example, 0x71 (binary 01110001) would represent the square b8 (in Algebraic notation). When generating moves from the main board, one can check that a destination square is on the main board before consulting the array simply by ANDing the square number with hexadecimal 0x88 (binary 10001000). A non-zero result indicates that the square is off the main board. In addition, the difference between two squares' coordinates uniquely determines whether those two squares are along the same row, column, or diagonal (a common query used for determining check). Types: Bitboards A more efficient but more elaborate board representation than the array-based structures is the bitboard. A bitboard is a 64-bit sequence of bits (0 or 1), which indicates the absence or presence (false or true) of some state of each space on the board. A board position can then be represented using a series of bitboards. For example, a series of bitboards for each piece type, for each side, can represent the board position. Types: The advantage to this representation is the ability to use bit parallel operations upon the 64-bit entities instead of iteration to manipulate and derive information about the state of the board. This makes maximal use of the hardware available, especially as 64-bit processors have become mainstream. 
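To make the bit-parallel style concrete, here is a minimal C sketch of one bitboard operation, assuming the common little-endian rank-file mapping (bit 0 = a1, bit 63 = h8); the constant and function names are illustrative rather than taken from any specific engine. It computes the squares attacked by every white pawn on the board at once, using two shifts and two masks instead of iterating over the pawns:

```c
#include <stdint.h>
#include <stdio.h>

typedef uint64_t Bitboard;                       /* bit 0 = a1, bit 7 = h1, bit 63 = h8 */

#define FILE_A 0x0101010101010101ULL             /* all squares on the a-file */
#define FILE_H 0x8080808080808080ULL             /* all squares on the h-file */

/* Attack map of all white pawns at once: shift one rank up and one file
   left or right, masking off pawns that would wrap around the board edge. */
static Bitboard white_pawn_attacks(Bitboard pawns) {
    Bitboard up_left  = (pawns & ~FILE_A) << 7;  /* captures toward the a-file side */
    Bitboard up_right = (pawns & ~FILE_H) << 9;  /* captures toward the h-file side */
    return up_left | up_right;
}

int main(void) {
    Bitboard pawns   = (1ULL << 8) | (1ULL << 11);  /* pawns on a2 and d2 */
    Bitboard attacks = white_pawn_attacks(pawns);   /* b3, c3 and e3 */
    printf("attack map: 0x%016llx\n", (unsigned long long)attacks);
    return 0;
}
```

The same masking trick covers knight and king moves via precomputed tables; the sliding pieces need the rotated-bitboard or magic-bitboard lookups discussed next.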
Types: A substantive advantage of bitboards is that it enables maps of the spaces attacked by each type of piece on each space of the board to be pre-collated and stored in a table, so that the possible moves of a piece can be retrieved in a single memory fetch of the attack map for the square on which the piece resides; excluding spaces occupied by friendly pieces (one bitwise operation) then yields the legal moves of the piece. But the moves of the sliding pieces (rooks, bishops, queens) are indeterminate because the moves of these pieces depend on the configuration of other pieces on the board. So special and complex data structures have been devised to represent their moves. Types: Rotated bitboards Rotated bitboards is a move generation technique for the sliding pieces that uses rotated copies of a bitboard to place spaces (bits) in a file or diagonal into adjacent bits, analogous to the bits representing a rank. These bits can be extracted and used as an index into a table to obtain the map of spaces attacked by these pieces. The bitboard is rotated 90° for file indexing and either 45° or -45° for diagonal indexing. Rotating a chessboard is conceptually challenging, and rotating a bitboard is computationally inelegant, but the transformation avoids serially enumerating the piece moves, or a lengthy sequence of shifting and masking a bitboard of the attack map of the piece to take into account the board configuration. Types: Direct lookup The masked ranks, files and diagonals of sliding pieces can be used via a hash function to directly index a table of precomputed attack vectors based on the occupancy bits in the masked portion. One such scheme, which uses a perfect hash function along with tricks to minimize the potential size of the table that must be stored in memory, is called "magic bitboards". Types: Transposition table A transposition table is a cache of previously seen positions, and associated evaluations, in a game tree generated by a computer game playing program. For fast searching of the table, a hash function may be used, such as Zobrist hashing, to speed finding matching boards. Other methods Other methods such as Compact Chessboard Representation (CCR) have been proposed, but none has gained acceptance. Types: CCR uses 4 bits per square to represent the occupancy of the square, so an entire rank can be represented in 32 bits, and the board in 8 registers (with an additional one for the remaining position information). The occupancy code for a square can be dialed out of a register and added to the program counter to index a jump table, branching directly to code to generate moves for the type of piece on this square (if any). Although the program is longer than for conventional move generation methods, no checks for the edge of the board are required, and no moves off the board are possible, increasing move generation speed. Types: The drawbacks of CCR are: 1) dependency on a 32-bit word size; 2) the availability of at least 9 free registers to the API; 3) the necessity of assembly programming on a CISC architecture to access the registers; and 4) the non-portability of the assembly application.
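As an illustration of the Zobrist hashing mentioned above for transposition tables, the following C sketch computes and incrementally updates a 64-bit position key; the piece encoding (12 piece kinds), the xorshift generator, and the function names are assumptions for the example. A full implementation would also key castling rights, the en passant file, and any captured piece.

```c
#include <stdint.h>
#include <stdio.h>

enum { PIECE_KINDS = 12, SQUARES = 64 };         /* 6 piece types x 2 colours */

static uint64_t zobrist[PIECE_KINDS][SQUARES];   /* one random key per piece-square pair */
static uint64_t zobrist_side_to_move;

/* Simple xorshift64 generator, used only to fill the key tables deterministically. */
static uint64_t rnd64(uint64_t *state) {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    return *state;
}

static void zobrist_init(void) {
    uint64_t seed = 0x9E3779B97F4A7C15ULL;
    for (int p = 0; p < PIECE_KINDS; p++)
        for (int sq = 0; sq < SQUARES; sq++)
            zobrist[p][sq] = rnd64(&seed);
    zobrist_side_to_move = rnd64(&seed);
}

/* XOR toggles a key in or out, so making (or unmaking) a quiet move
   updates the hash in three operations instead of rehashing the whole board. */
static uint64_t hash_move(uint64_t h, int piece, int from, int to) {
    h ^= zobrist[piece][from];   /* remove the piece from its origin square   */
    h ^= zobrist[piece][to];     /* place the piece on its destination square */
    h ^= zobrist_side_to_move;   /* flip the side to move                     */
    return h;
}

int main(void) {
    zobrist_init();
    uint64_t h = 0;                                   /* key of some hypothetical position */
    h = hash_move(h, 0 /* e.g. white king */, 4, 12); /* illustrative move e1-e2 */
    printf("updated key: 0x%016llx\n", (unsigned long long)h);
    return 0;
}
```

The resulting key can then index the transposition table (typically key modulo table size), with the full key stored in the entry to detect collisions.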
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded