**Tesetaxel** Tesetaxel is an orally administered taxane being investigated as a chemotherapy agent for various types of cancer, including breast cancer, gastric cancer, colorectal cancer, and other solid tumors. It differs from other members of the taxane class (e.g., paclitaxel or docetaxel) in that it is administered orally rather than intravenously.
**Y Centauri** Y Centauri or Y Cen (HD 127233, HIP 70969) is a semiregular variable star in the constellation of Centaurus. The variability of the star was discovered by Williamina Fleming in 1895 and published in the Third Catalogue of Variable Stars. The photographic magnitude range was given as 7.7–8.8, but the variability was described as "somewhat doubtful". It was later given the designation HV 52 in the Harvard Catalogue of Variable Stars. The General Catalogue of Variable Stars lists it as a semiregular variable star with a period of 180 days and a visual magnitude range of 8.0–9.1. A study of Hipparcos satellite photometry found a small amplitude range of 0.2 magnitudes at a visual magnitude of 8.53. The distance of the star is poorly known. The revised Hipparcos annual parallax of 3.50 mas gives a distance of about 900 light years. A study taking into account the variability of the star found a parallax of 5.57 mas, corresponding to a distance of 585 light years. Both estimates have a margin of error over 20%. The Gaia Data Release 2 parallax lies between these two values and appears more accurate, with a margin of error around 5%, but with a large value for astrometric noise. Gaia EDR3 does not list a parallax for this star. Y Centauri is an asymptotic giant branch star 330 times as luminous as the Sun. Its spectral type varies between M4 and M7 as it pulsates. The star has been observed to produce 22 GHz water maser emission, although later searches did not find any maser emission.
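The quoted distances follow directly from the parallaxes via the standard relation d[pc] = 1/p[arcsec]; a minimal sketch to verify the arithmetic, using the values from the paragraph above:

```python
# Distance from annual parallax: d[pc] = 1 / p[arcsec], with 1 pc ~ 3.2616 ly.
PC_TO_LY = 3.2616

def parallax_mas_to_ly(p_mas: float) -> float:
    """Convert an annual parallax in milliarcseconds to a distance in light years."""
    return (1000.0 / p_mas) * PC_TO_LY

print(round(parallax_mas_to_ly(3.50)))  # Hipparcos value: ~932 ly ("about 900 light years")
print(round(parallax_mas_to_ly(5.57)))  # variability-corrected value: ~586 ly ("585 light years")
```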
**Philatelic auction** A philatelic auction, or stamp auction, is a sale of stamps, covers and other philatelic material, usually run by stamp dealers or specialist collectibles auctioneers such as David Feldman, Christie's and Sotheby's, where prospective purchasers place bids in an attempt to obtain the desired items. The highest bidder for each lot (described item or items) makes the purchase. Auctions are generally divided into mail sales, where bids are accepted by mail, and public sales, where mail bids are combined with live bidding from individuals present at the auction or participating by telephone. Auctions usually allow prospective purchasers to view the items beforehand, either in a catalogue, in the auction house, or both.
**Quantum gravity** Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics. It deals with environments in which neither gravitational nor quantum effects can be ignored, such as the vicinity of black holes and similar compact astrophysical objects (for example, neutron stars), as well as the early stages of the universe moments after the Big Bang. Three of the four fundamental forces of nature are described within the framework of quantum mechanics and quantum field theory: the electromagnetic interaction, the strong force, and the weak force; this leaves gravity as the only interaction that has not been fully accommodated. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which incorporates his theory of special relativity and deeply modifies the understanding of concepts like time and space. Although general relativity is highly regarded for its elegance, it is not without limitations: the gravitational singularities inside black holes, the ad hoc postulation of dark matter, and dark energy and its relation to the cosmological constant are among the current unsolved mysteries regarding gravity; all of these signal the breakdown of the general theory of relativity at different scales and highlight the need for a gravitational theory that goes into the quantum realm. At distances close to the Planck length, like those near the center of a black hole, quantum fluctuations of spacetime are expected to play an important role. The breakdown of general relativity at galactic and cosmological scales also points to the necessity of a more robust theory. Finally, the discrepancies between the predicted and observed values of the vacuum energy (which, depending on the considerations, can be 60 or 120 orders of magnitude) highlight the necessity of a quantum theory of gravity. The field of quantum gravity is actively developing, and theorists are exploring a variety of approaches to the problem, the most popular being M-theory and loop quantum gravity. All of these approaches aim to describe the quantum behavior of the gravitational field, which does not necessarily include unifying all fundamental interactions into a single mathematical framework. However, many approaches to quantum gravity, such as string theory, try to develop a framework that describes all fundamental forces. Such a theory is often referred to as a theory of everything. Some approaches, such as loop quantum gravity, make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. Other lesser-known but no less important theories include causal dynamical triangulation, noncommutative geometry, and twistor theory. One of the difficulties of formulating a quantum gravity theory is that direct observation of quantum gravitational effects is thought to appear only at length scales near the Planck scale, around 10⁻³⁵ meters, a scale far smaller, and hence only accessible with far higher energies, than those currently available in high energy particle accelerators.
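For orientation, the Planck scale quoted above is fixed by the fundamental constants alone; a standard one-line check:

$$\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, \qquad E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.2 \times 10^{19}\ \mathrm{GeV},$$

some fifteen orders of magnitude above the energies reached at present-day colliders, which is why the experimental situation described next is so difficult.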
Therefore, physicists lack experimental data which could distinguish between the competing theories that have been proposed. Thought experiment approaches have been suggested as a testing tool for quantum gravity theories. In the field of quantum gravity there are several open questions; for example, it is not known how the spin of elementary particles sources gravity, and thought experiments could provide a pathway to explore possible resolutions to these questions, even in the absence of lab experiments or physical observations. In the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades. This field of study is called phenomenological quantum gravity. Overview: Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that these theories make about how the universe works. General relativity models gravity as curvature of spacetime: in the slogan of John Archibald Wheeler, "Spacetime tells matter how to move; matter tells spacetime how to curve." On the other hand, quantum field theory is typically formulated in the flat spacetime used in special relativity. No theory has yet proven successful in describing the general situation where the dynamics of matter, modeled with quantum mechanics, affect the curvature of spacetime. If one attempts to treat gravity as simply another quantum field, the resulting theory is not renormalizable. Even in the simpler case where the curvature of spacetime is fixed a priori, developing quantum field theory becomes more mathematically challenging, and many ideas physicists use in quantum field theory on flat spacetime are no longer applicable. It is widely hoped that a theory of quantum gravity would allow us to understand problems of very high energy and very small dimensions of space, such as the behavior of black holes and the origin of the universe. Quantum mechanics and general relativity: Graviton: The observation that all fundamental forces except gravity have one or more known messenger particles leads researchers to believe that at least one must exist for gravity. This hypothetical particle is known as the graviton. It would act as a force particle similar to the photon of the electromagnetic interaction. Under mild assumptions, the structure of general relativity requires gravitons to follow the quantum mechanical description of interacting theoretical spin-2 massless particles. Many of the accepted notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton. The Weinberg–Witten theorem places some constraints on theories in which the graviton is a composite particle. While gravitons are an important theoretical step in a quantum mechanical description of gravity, they are generally believed to be undetectable because they interact too weakly. Nonrenormalizability of gravity: General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, the gravitational force should also have a corresponding quantum field theory. However, gravity is perturbatively nonrenormalizable. For a quantum field theory to be well defined according to this understanding of the subject, it must be asymptotically free or asymptotically safe. The theory must be characterized by a choice of finitely many parameters, which could, in principle, be set by experiment. For example, in quantum electrodynamics these parameters are the charge and mass of the electron, as measured at a particular energy scale.
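The power-counting reason gravity behaves differently can be stated in one line: in natural units Newton's constant carries negative mass dimension,

$$G_N = \frac{\hbar c}{M_P^2}, \qquad [G_N] = -2 \quad \text{(mass dimension, natural units)},$$

so the effective expansion parameter grows with energy and each order of perturbation theory demands new counterterms, which is exactly the situation described next.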
On the other hand, in quantizing gravity there are, in perturbation theory, infinitely many independent parameters (counterterm coefficients) needed to define the theory. For a given choice of those parameters, one could make sense of the theory, but since it is impossible to conduct infinite experiments to fix the values of every parameter, it has been argued that one does not, in perturbation theory, have a meaningful physical theory. At low energies, the logic of the renormalization group tells us that, despite the unknown choices of these infinitely many parameters, quantum gravity will reduce to the usual Einstein theory of general relativity. On the other hand, if we could probe very high energies where quantum effects take over, then every one of the infinitely many unknown parameters would begin to matter, and we could make no predictions at all. It is conceivable that, in the correct theory of quantum gravity, the infinitely many unknown parameters will reduce to a finite number that can then be measured. One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, finding a reliable answer is difficult; this question is pursued in the asymptotic safety program. Another possibility is that there are new, undiscovered symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries. Quantum gravity as an effective field theory: In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is a predictive quantum field theory. Furthermore, many theorists argue that the Standard Model should be regarded as an effective field theory itself, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally. Works pioneered by Barvinsky and Vilkovisky suggest as a starting point, up to second order in curvature, the following action, consisting of local and non-local terms:

$$\Gamma = \int d^4x\,\sqrt{|g|}\left[\frac{R}{16\pi G_N} + c_1(\mu)R^2 + c_2(\mu)R_{\mu\nu}R^{\mu\nu} + c_3(\mu)R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} - \alpha\,R\ln\!\left(\frac{\Box}{\mu^2}\right)R - \beta\,R_{\mu\nu}\ln\!\left(\frac{\Box}{\mu^2}\right)R^{\mu\nu} - \gamma\,R_{\mu\nu\rho\sigma}\ln\!\left(\frac{\Box}{\mu^2}\right)R^{\mu\nu\rho\sigma}\right],$$

where $\mu$ is an energy scale. The exact values of the local coefficients $c_1, c_2, c_3$ are unknown, as they depend on the nature of the ultraviolet theory of quantum gravity. Here $\ln(\Box/\mu^2)$ is an operator with the integral representation

$$\ln\!\left(\frac{\Box}{\mu^2}\right) = \int_0^{+\infty} ds\left(\frac{1}{\mu^2+s} - \frac{1}{\Box+s}\right).$$

By treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses. Moreover, one can compute the quantum gravitational corrections to classical thermodynamic properties of black holes, most importantly the entropy. A rigorous derivation of the quantum gravitational corrections to the entropy of Schwarzschild black holes was provided by Calmet and Kuipers. A generalisation for charged (Reissner–Nordström) black holes was subsequently carried out by Campos Delgado.
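The correction to the Newtonian potential mentioned above is frequently quoted in the following form (from the effective-field-theory calculation of Bjerrum-Bohr, Donoghue and Holstein; the coefficient of the classical term varies with conventions between papers):

$$V(r) = -\frac{G m_1 m_2}{r}\left[1 + 3\,\frac{G(m_1+m_2)}{r c^2} + \frac{41}{10\pi}\,\frac{G\hbar}{r^2 c^3}\right],$$

where the second term is a classical relativistic correction and the third, proportional to $\hbar$, is the genuine quantum contribution; at any macroscopic distance it is fantastically small.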
Spacetime background dependence: A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While simple to grasp in principle, this is a complex idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in spacetime. On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamic) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory. String theory: String theory can be seen as a generalization of quantum field theory where, instead of point particles, string-like objects propagate in a fixed spacetime background, although the interactions among closed strings give rise to spacetime in a dynamic way. Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory: it may exhibit a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence), which is a weak form of background dependence. Background independent theories: Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory. Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions, which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks. Semi-classical quantum gravity: Quantum field theory on curved (non-Minkowskian) backgrounds, while not a full quantum theory of gravity, has shown many promising early results. In an analogous way to the development of quantum electrodynamics in the early part of the 20th century (when physicists considered quantum mechanics in classical electromagnetic fields), the consideration of quantum field theory on a curved background has led to predictions such as black hole radiation. Phenomena such as the Unruh effect, in which particles exist in certain accelerating frames but not in stationary ones, do not pose any difficulty when considered on a curved background (the Unruh effect occurs even in flat Minkowskian backgrounds).
The vacuum state is the state with the least energy (and may or may not contain particles). Problem of time: A conceptual difficulty in combining quantum mechanics with general relativity arises from the contrasting role of time within these two frameworks. In quantum theories, time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time. In contrast, general relativity treats time as a dynamical variable which relates directly with matter and moreover requires the Hamiltonian constraint to vanish. Because this variability of time has been observed macroscopically, it removes any possibility of employing a fixed notion of time, similar to the conception of time in quantum theory, at the macroscopic level. Candidate theories: There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments become available. String theory: The central idea of string theory is to replace the classical concept of a point particle in quantum field theory with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price of this success is unusual features such as six extra dimensions of space in addition to the usual three for space and one for time. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity. As presently understood, however, string theory admits a very large number (10⁵⁰⁰ by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge. Loop quantum gravity: Loop quantum gravity seriously considers general relativity's insight that spacetime is a dynamical field and is therefore a quantum object. Its second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) also affects the structure of space. The main result of loop quantum gravity is the derivation of a granular structure of space at the Planck length. This is derived from the following considerations: In the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum.
Thus the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Thus the area and volume of any portion of space are also quantized, where the quanta are elementary quanta of space. It follows, then, that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory. The quantum state of spacetime is described in the theory by means of a mathematical structure called spin networks. Spin networks were initially introduced by Roger Penrose in abstract form, and later shown by Carlo Rovelli and Lee Smolin to derive naturally from a non-perturbative quantization of general relativity. Spin networks do not represent quantum states of a field in spacetime: they represent directly quantum states of spacetime. The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields. In the quantum theory, space is represented by a network structure called a spin network, evolving over time in discrete steps. The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity. The analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined within the theory. In the covariant, or spinfoam, formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks. Other theories: There are a number of other approaches to quantum gravity. The theories differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified. Examples include asymptotic safety in quantum gravity, Euclidean quantum gravity, the integral method, causal dynamical triangulation, causal fermion systems, causal set theory, the covariant Feynman path integral approach, dilatonic quantum gravity, double copy theory, group field theory, the Wheeler–DeWitt equation, geometrodynamics, Hořava–Lifshitz gravity, the MacDowell–Mansouri action, noncommutative geometry, path-integral based models of quantum cosmology, Regge calculus, shape dynamics, string-nets and quantum graphity, supergravity, twistor theory, and canonical quantum gravity. Experimental tests: As was emphasized above, quantum gravitational effects are extremely weak and therefore difficult to test. For this reason, the possibility of experimentally testing quantum gravity had not received much attention prior to the late 1990s. However, in the past decade, physicists have realized that evidence for quantum gravitational effects can guide the development of the theory.
Since theoretical development has been slow, the field of phenomenological quantum gravity, which studies the possibility of experimental tests, has obtained increased attention. The most widely pursued possibilities for quantum gravity phenomenology include gravitationally mediated entanglement, violations of Lorentz invariance, imprints of quantum gravitational effects in the cosmic microwave background (in particular its polarization), and decoherence induced by fluctuations in the space-time foam. ESA's INTEGRAL satellite measured the polarization of photons of different wavelengths and was able to place a limit on the granularity of space of less than 10⁻⁴⁸ m, or 13 orders of magnitude below the Planck scale. The BICEP2 experiment detected what was initially thought to be primordial B-mode polarization caused by gravitational waves in the early universe. Had the signal in fact been primordial in origin, it could have been an indication of quantum gravitational effects, but it soon transpired that the polarization was due to interstellar dust interference.
**Catechol estrogen** A catechol estrogen is a steroidal estrogen that contains catechol (1,2-dihydroxybenzene) within its structure. The catechol estrogens are endogenous metabolites of estradiol and estrone. The most abundant catechol estrogen in serum and urine is 2-hydroxyestrone, with 2-hydroxyestradiol and 2-hydroxyestriol also being formed, while the principal 4-hydroxy catechol estrogen, 4-hydroxyestrone, is present in only small amounts in urine. 4-Hydroxyestriol has been detected in the urine of pregnant women. The catechol estrogens are formed from estradiol and estrone by cytochrome P450 enzymes, predominantly in the liver but also in extrahepatic tissues, and are metabolized by catechol O-methyltransferase (COMT) into methoxylated estrogens such as 2-methoxyestradiol and 4-methoxyestrone, as well as by conjugation via other phase II enzymes. Under poor conditions of inactivation by phase II enzymes, catechol estrogens can undergo oxidation to reactive quinones and semiquinones, and this has been hypothesized to contribute to estrogen-induced carcinogenesis. Similarly to estradiol and estrone, catechol estrogens possess estrogenic activity. 2-Hydroxylated catechol estrogens are weak and possibly antiestrogenic estrogens, whereas their 4-hydroxylated counterparts are more potent in their estrogenic activity. For instance, 2-hydroxyestrone reportedly shows a negligible uterotrophic effect in animals, whereas 4-hydroxy catechol estrogens show moderate effects in stimulating uterine weight. In addition to being substrates for COMT, similarly to catecholamines like dopamine, norepinephrine, and epinephrine, catechol estrogens are potent competitive inhibitors of COMT as well as of tyrosine hydroxylase, and may affect both catecholamine biosynthesis and metabolism.
**Hydrogel fiber** Hydrogel fiber is a hydrogel made into a fibrous state, where its width is significantly smaller than its length. The specific surface area of a hydrogel in fibrous form is larger than that of the bulk hydrogel, and its mechanical properties change accordingly. As a result of these changes, hydrogel fiber has a faster matter exchange rate and can be woven into different structures. As a water-swollen network with usually low toxicity, hydrogel fiber can be used in a variety of biomedical applications, such as drug carriers, optical sensors, and actuators. But the production of hydrogel fiber can be challenging, as the hydrogel is crosslinked and cannot be shaped into a fibrous state after polymerization. To make hydrogel into a fibrous state, the pregel solution must be made into fibrous form and then crosslinked while maintaining this shape. Production methods: To produce hydrogel fiber, the solidification of the pregel solution is the most important step. The pregel solution needs to be solidified while maintaining its fibrous shape. To achieve this, several methods based on chemical crosslinking, phase change, or rheological property change have been developed. Physical solidification: Changes in physical interactions can be utilized for the solidification process, and the fibrous state is usually achieved outside of the extrusion nozzle. Due to the reversibility of those physical interactions, subsequent crosslinking is typically required. Electrospinning: Hydrogel fiber can be produced by electrospinning, with solidification achieved by evaporation of the solvent. The fibrous state is created by the combination of electrostatic repulsion and the surface tension of the solution, but subsequent crosslinking is usually needed to form a crosslinked network. One advantage of electrospun hydrogel fiber is that its diameter is on the order of nanometers to micrometers, which is desirable for fast matter exchange. However, utilization of a single fiber can be hard to achieve due to the weak mechanical strength of the microscopic fiber and its entanglement after production. An example of this method would be the production of a polyacrylamide (PAAM) semi-interpenetrating network developed by Tahchi et al., where linear PAAM (providing solidification) was mixed with AAM monomer (forming the subsequent network) and the crosslinker N,N′-methylenebisacrylamide (MBA). During the electrospinning process, the linear PAAM provided the physical properties required for electrospinning, while the AAM monomer and MBA crosslinker were used to form a second crosslinked network inside the PAAM fiber. Although no crosslinks form between the first and second networks, physical entanglement prevents the linear PAAM from leaching out. Drawspinning: Through supramolecular chemistry, a pregel solution can solidify through reversible supramolecular interactions such as host-guest interactions. Such interactions can be manipulated through mechanical force or temperature. When the energy exerted on the network is high enough, the physical crosslinking points break and the polymer is in a liquid state; after leaving the nozzle, the crosslinks rapidly re-form and solidify the solution. A case would be the host-guest chemistry reported by Scherman et al.,
where the formation of an inclusion complex between cucurbit[8]uril and 1-benzyl-3-vinylimidazolium bromide (BVIm) provided the physical crosslinking points for the network. The formation of these crosslinking points is controlled by the temperature of the solution: by heating the solution and cooling it rapidly at the extrusion nozzle, the hydrogel fiber is formed. Subsequent crosslinking is then performed to form a permanent network. Meltspinning: Some hydrophilic polymers can be made into hydrogel fiber via meltspinning, where solidification is achieved by the phase transition from the molten state. As in electrospinning, the pregel solution is kept liquid in the container; after leaving the nozzle as a filament, the fiber solidifies on encountering cool ambient air and maintains its shape. An example would be the meltspinning apparatus built by Long et al., with which meltspinning of polylactic acid (PLA) and polycaprolactone (PCL) fibers was achieved. Direct ink writing: Similar to drawspinning, the direct ink writing technique utilizes reversible physical solidification to produce hydrogel fibers. The pregel solution is liquefied through a shear-thinning process, which can be induced by adding microscopic particles such as microgels. After leaving the nozzle, the hydrogel solidifies and retains its shape, and the network is made permanent by crosslinking. An example would be the fiber developed by Lewis et al., where silk fibroin was used to generate the desired shear-thinning properties, and the network was formed when the solvent was subsequently changed. Chemical crosslinking: Similar to physical solidification, several chemical crosslinking methods have been developed to produce hydrogel fibers. The key to producing hydrogel fiber through chemical crosslinking is effective separation between the formed network and the tube wall. Microfluidic spinning: Many microfluidic-device-based methods have been developed to produce hydrogel fibers. Crosslinking of alginate: One of the most commonly used fiber production methods is the crosslinking of sodium alginate by CaCl2, where the formed calcium alginate acts as the crosslinking points that link the alginate chains together to form the network and solidify the polymer. Afterward, this alginate hydrogel fiber can be used as a template for the polymerization of secondary networks. Additionally, by controlling the fluid dynamics inside the microfluidic device, the diameter and the shape of the resulting fiber can be tuned without modifying the device. An example would be the production of alginate fiber reported by Yang et al., who used sodium alginate as the core fluid and CaCl2 as the sheath fluid; the crosslinked network (hydrogel fiber) formed once these two fluids met, and the laminar flow kept its tubular shape during the reaction. Photoinitiated crosslinking: Photoinitiated free-radical polymerization reactions can also be used for fiber production. In this case, the sheath fluid is only used to separate the core fluid from the tube wall. Also, to achieve sufficiently rapid solidification, a more concentrated monomer solution is usually used. An example would be the production of 4-hydroxybutyl acrylate fiber reported by Beebe et al.
The microfluidic device they used was built with an ethylene-vinyl acetate capillary and PDMS rubber. The core fluid was a mixture of 4-hydroxybutyl acrylate, acrylic acid, ethylene glycol dimethacrylate (crosslinker), and 2,2′-dimethoxy-2-phenylacetophenone (photoinitiator). The sheath fluid was only for separation. The crosslinked network was formed by free radical polymerization when the UV light met the core fluid. Polymerization in tubular molds: Hydrogel fiber can also be produced by polymerizing the hydrogel network inside a tubular mold and then forcing the fiber out. However, the friction increases with increasing length, so only short hydrogel fibers are feasible with this method. A case would be the production of poly(acrylamide-co-poly(ethylene glycol) diacrylate) fiber reported by Yun et al. The pregel solution was a mixture of AAM, poly(ethylene glycol) diacrylate (PEGDA, crosslinker), and 2-hydroxy-2-methylpropiophenone (photoinitiator). The mixture was injected into a tubular mold and extracted through hydrostatic force afterwards. Self-lubricated spinning: An interesting phenomenon called self-lubricated spinning can facilitate the demolding of the fiber and enables the continuous production of hydrogel fiber from a tubular mold. During the polymerization process, if an inert second polymer is present, it will be preferentially expelled from the formed network and able to move with relative ease. The linear polymer on the surface of the crosslinked network also retains water due to osmotic pressure; thus, a lubrication layer is formed. Therefore, the solidified polymer fiber can exit the tube with decreased friction, and continuous production can be achieved. An example would be the production of the PAAM/PAMPS semi-interpenetrating network hydrogel fiber reported by Zhao et al. The pregel solution was a mixture of PAMPS, AAM, PEGDA (crosslinker), and 2-hydroxy-4′-(2-hydroxyethoxy)-2-methylpropiophenone (photoinitiator). The pregel solution was fed into a PTFE tube at a constant speed, with UV light being used to initiate the reaction. Characterization methods: Surface morphology: The surface morphology and the shape of the cross-section can be observed via scanning electron microscope (SEM) imaging after removal of the solvent. Environmental scanning electron microscopy (ESEM) can also be used to observe wet hydrogel fibers. Different treatments will affect the observed surface morphology of the hydrogel fiber drastically. If the hydrogel fiber is dried directly, a smooth surface is obtained because the polymer network collapses after removal of the solvent. If the hydrogel fiber is lyophilized, a porous surface will usually be found due to the pore-forming effect of the ice crystals. ESEM can observe the surface morphology directly; the resulting image usually shows a smooth surface with some wrinkles formed by the gradual loss of water. Mechanical properties: The mechanical properties of the fibers are tested with a universal testing machine by fixing the hydrogel fibers between two holders, but the process can be tricky for practical reasons. Due to the compression of the holders, hydrogel fibers tend to break at the holding point. Also, the loss of water during the test will affect the resulting data, and precautions need to be taken to mitigate the loss.
The tensile strength of hydrogel fiber is usually below 1 MPa. Optical properties: Optical properties are tested for optical sensing-related applications. These can include light attenuation, refractive index, transmission, etc. Such optical properties are significantly influenced by the composition of the hydrogel. Biocompatibility: Cell toxicity tests are performed for applications such as cell growth scaffolds. By growing cells engineered to produce a fluorescent protein, cell growth can be monitored with fluorescence imaging techniques. Applications: Optical fiber sensors: Transparent hydrogel fibers can be used as optical fibers, and stimuli-responsive functional groups can be grafted on to create optical sensors. For example, in the research done by Yun et al., the glucose-sensitive phenylboronic acid was grafted onto the polymer network. When the glucose concentration changes, the absorption of the phenylboronic acid changes accordingly and can be recorded as the light intensity at a certain wavelength. Additive manufacturing: Although they suffer from poor mechanical strength, some approaches have been made to construct hydrogel fiber structures with textile methods. The electrospinning, meltspinning, and direct ink writing methods can also produce hydrogel fiber structures of higher dimensions directly. Biomedical scaffolds: Hydrogel fiber can be used to fabricate scaffolds for cell growth and drug release. Actuators: Stimuli-responsive hydrogel fibers can be used as actuators and soft robots. By braiding hydrogel fibers together, the force of a single fiber can be magnified. Also, due to the slipping between hydrogel fibers, the bending strain can be reduced to further enhance performance.
**Body fat redistribution syndrome** Body fat redistribution (BFR) syndrome, sometimes called fat derangement, is a medical condition characterized by fat loss (or occasionally fat gain), often in the cheeks or face. BFR most often occurs in HIV/AIDS patients undergoing antiretroviral therapy. Symptoms: The most common manifestations of body fat redistribution are accumulations of fat in the central body, in the form of a fat pad on the back of the neck and an accumulation of visceral fat in the abdomen or belly. This fat accumulation is accompanied by a loss of subcutaneous fat in the face, arms, legs, and buttocks. Adverse effects: Cosmetic concerns may cause patients to refuse or stop treatment. If severe enough, the fat accumulation may result in sleep apnea or other sleep disorders, migraines, decreased range of motion, discomfort due to pressure on internal organs, and general loss of condition. Fat loss may result in pain in the buttocks when seated. Other potential complications resulting from BFR include high cholesterol, high levels of triglycerides, insulin resistance, hyperglycemia, diabetes, gout, and cardiovascular disease. BFR is also associated with certain metabolic abnormalities such as elevations of plasma sugar and fats, but the precise relationship is unknown. Diagnosis: No firm definition of body fat redistribution syndrome exists as yet. At least four syndromes have been described that are characterized by the accumulation of fat, and one by the loss of fat; combinations of these may occur in an individual. Gender, age, and pre-therapy body weight appear to influence the severity of BFR in patients. BFR is distinct from lipodystrophy, which simply refers to fat loss. Treatment: Treatment of symptoms may include cosmetic surgery such as collagen implants; treatment of the underlying syndrome may include changing from protease inhibitors to an NNRTI.
**Third-order intercept point** In telecommunications, the third-order intercept point (IP3 or TOI) is a specific figure of merit associated with the more general third-order intermodulation distortion (IMD3), which is a measure for weakly nonlinear systems and devices, for example receivers, linear amplifiers and mixers. It is based on the idea that the device nonlinearity can be modeled using a low-order polynomial, derived by means of a Taylor series expansion. The third-order intercept point relates nonlinear products caused by the third-order nonlinear term to the linearly amplified signal, in contrast to the second-order intercept point, which uses second-order terms. The intercept point is a purely mathematical concept and does not correspond to a practically occurring physical power level. In many cases, it lies far beyond the damage threshold of the device. Definitions: Two different definitions for intercept points are in use. Based on harmonics: The device is tested using a single input tone. The nonlinear products caused by nth-order nonlinearity appear at n times the frequency of the input tone. Based on intermodulation products: The device is fed with two sine tones, one at f1 and one at f2. Cubing the sum of these sine waves yields sine waves at various frequencies, including 2f2 − f1 and 2f1 − f2. If f1 and f2 are large but very close together, then 2f2 − f1 and 2f1 − f2 will be very close to f1 and f2. This two-tone approach has the advantage that it is not restricted to broadband devices, and it is commonly used for radio receivers. The intercept point is obtained graphically by plotting the output power versus the input power, both on logarithmic scales (e.g., decibels). Two curves are drawn: one for the linearly amplified signal at an input tone frequency, and one for a nonlinear product. On a logarithmic scale, the function xⁿ translates into a straight line with a slope of n. Therefore, the linearly amplified signal will exhibit a slope of 1, while a third-order nonlinear product will increase by 3 dB in power when the input power is raised by 1 dB. Both curves are extended with straight lines of slope 1 and n (3 for a third-order intercept point). The point where the curves intersect is the intercept point. It can be read off from the input or output power axis, leading to the input (IIP3) or output (OIP3) intercept point, respectively. Input and output intercept points differ by the small-signal gain of the device. Practical considerations: The concept of intercept point is based on the assumption of a weakly nonlinear system, meaning that higher-order nonlinear terms are small enough to be negligible. In practice, the weakly nonlinear assumption may not hold for the upper end of the input power range, be it during measurement or during use of the amplifier. As a consequence, measured or simulated data will deviate from the ideal slope of n. The intercept point according to its basic definition should be determined by drawing the straight lines with slope 1 and n through the measured data at the smallest possible power level (possibly limited towards lower power levels by instrument or device noise). It is a frequent mistake to derive intercept points by either changing the slope of the straight lines, or fitting them to points measured at too high power levels. In certain situations such a measure can be useful, but it is not an intercept point according to the definition.
The value of such a fitted quantity depends on the measurement conditions, which need to be documented, whereas the IP according to the definition is mostly unambiguous, although there is some dependency on frequency and tone spacing, depending on the physics of the device under test. One of the useful applications of the third-order intercept point is as a rule-of-thumb measure to estimate nonlinear products. When comparing systems or devices for linearity, a higher intercept point is better. It can be seen that the spacing between two straight lines with slopes of 3 and 1 closes with a slope of 2. For example, assume a device with an input-referred third-order intercept point of 10 dBm is driven with a test signal of −5 dBm. This power is 15 dB below the intercept point, therefore nonlinear products will appear at approximately 2×15 dB below the test signal power at the device output (in other words, 3×15 dB below the output-referred third-order intercept point). A rule of thumb that holds for many linear radio-frequency amplifiers is that the 1 dB compression point falls approximately 10 dB below the third-order intercept point. Theory: The third-order intercept point (TOI) is a property of the device transfer function O. This transfer function relates the output signal voltage level to the input signal voltage level. We assume a "linear" device having a transfer function whose small-signal form may be expressed in terms of a power series containing only odd terms, making the transfer function an odd function of input signal voltage, i.e., O(−s) = −O(s). Where the signals passing through the actual device are modulated sinusoidal voltage waveforms (e.g., RF amplifier), device nonlinearities can be expressed in terms of how they affect individual sinusoidal signal components. For example, say the input voltage signal is the sine wave $s(t) = V\cos(\omega t)$, and the device transfer function produces an output of the form $O(s) = Gs - D_3 s^3 + \dots$, where $G$ is the amplifier gain and $D_3$ is the cubic distortion coefficient. We may substitute the first equation into the second and, using the trigonometric identity $\cos^3(x) = \tfrac{3}{4}\cos(x) + \tfrac{1}{4}\cos(3x)$, obtain the device output voltage waveform as

$$O(t) = \left(GV - \tfrac{3}{4}D_3V^3\right)\cos(\omega t) - \tfrac{1}{4}D_3V^3\cos(3\omega t).$$

The output waveform contains the original waveform, cos(ωt), plus a new harmonic term, cos(3ωt), the third-order term. The coefficient of the cos(ωt) harmonic has two terms, one that varies linearly with V and one that varies with the cube of V. In fact, the coefficient of cos(ωt) has nearly the same form as the transfer function, except for the factor 3/4 on the cubic term. In other words, as the signal level V is increased, the level of the cos(ωt) term in the output eventually levels off, similar to how the transfer function levels off. Of course, the coefficients of the higher-order harmonics will increase (with increasing V) as the coefficient of the cos(ωt) term levels off (the power has to go somewhere). If we now restrict our attention to the portion of the cos(ωt) coefficient that varies linearly with V, and then ask at what input voltage level V the coefficients of the first- and third-order terms have equal magnitudes (i.e., where the magnitudes intersect), we find that this happens when

$$V^2 = \frac{4G}{3D_3},$$

which is the third-order intercept point (TOI). So the TOI input power level is simply 4/3 times the ratio of the gain and the cubic distortion term in the device transfer function.
The smaller the cubic term is in relation to the gain, the more linear the device is, and the higher the TOI is. The TOI, being related to the magnitude squared of the input voltage waveform, is a power quantity, typically measured in milliwatts (mW). The TOI is always beyond operational power levels because the output power saturates before reaching this level. The TOI is closely related to the amplifier's "1 dB compression point", which is defined as the point at which the total coefficient of the cos(ωt) term is 1 dB below the linear portion of that coefficient. We can relate the 1 dB compression point to the TOI as follows. Since $1\ \mathrm{dB} = 20\log_{10}(1.122)$, we may say, in a voltage sense, that the 1 dB compression point occurs when

$$1.122\left(GV - \tfrac{3}{4}D_3V^3\right) = GV,$$

that is, when

$$V^2 = 0.10875 \times \frac{4G}{3D_3} = 0.10875 \times \mathrm{TOI}.$$

In a power sense ($V^2$ is a power quantity), a factor of 0.10875 corresponds to −9.636 dB, so by this approximate analysis, the 1 dB compression point occurs roughly 9.6 dB below the TOI. Recall: decibel figure = 10 dB × log10(power ratio) = 20 dB × log10(voltage ratio). Notes: The third-order intercept point is an extrapolated convergence, not directly measurable, of intermodulation distortion products in the desired output. It indicates how well a device (for example, an amplifier) or a system (for example, a receiver) performs in the presence of strong signals. It is sometimes used (interchangeably with the 1 dB compression point) to define the upper limit of the dynamic range of an amplifier. Determination of the third-order intercept point of a superheterodyne receiver is accomplished by using two test frequencies that fall within the first intermediate-frequency mixer passband. Usually, the test frequencies are about 20–30 kHz apart. The concept of intercept point has no meaning for strongly nonlinear systems, such as when an output signal is clipped due to limited supply voltage.
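To make the two-tone picture above concrete, here is a minimal numerical sketch (the gain and distortion values are hypothetical, chosen only for illustration): it passes two closely spaced tones through the weakly nonlinear model y = Gx − D3·x³ and reads off the fundamental and the 2f2 − f1 product from the spectrum. Raising the input level in 1 dB steps should raise the fundamental by about 1 dB and the IMD3 product by about 3 dB, which is exactly the pair of slopes used in the graphical construction of the intercept point.

```python
import numpy as np

G, D3 = 10.0, 0.1            # hypothetical gain and cubic distortion coefficient
fs = 100_000.0               # sample rate, Hz
f1, f2 = 10_000.0, 11_000.0  # two closely spaced test tones, Hz (bin-centered)
t = np.arange(0, 0.1, 1 / fs)

def level_db(y: np.ndarray, f: float) -> float:
    """Level (dB, arbitrary reference) of the spectral line nearest frequency f."""
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f))])

for in_db in (-40, -39, -38):             # sweep the input level in 1 dB steps
    v = 10 ** (in_db / 20)                # per-tone amplitude
    x = v * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))
    y = G * x - D3 * x**3                 # weakly nonlinear device model
    print(f"in {in_db} dB: fundamental {level_db(y, f1):7.2f} dB, "
          f"IMD3 {level_db(y, 2 * f2 - f1):7.2f} dB")
```

Extrapolating the two resulting straight lines to their crossing recovers the intercept point of this model, consistent with $V^2 = 4G/(3D_3)$ from the derivation above.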
**Action camera** An action camera or action cam is a digital camera designed for recording action while being immersed in it. Action cameras are therefore typically compact, rugged, and waterproof at the surface level. They typically use CMOS image sensors, and can take photos in burst mode and time-lapse mode as well as record high-definition video (as of 2019, mid-range to high-end action cameras can record 4K video at 60 fps). Slow-motion video recording at 120 or 240 fps is also a common feature. Overview: The camera is typically worn or mounted in such a way that it can shoot from the point of view of the shooter. Some examples of common places to mount an action camera are on a hat or helmet, on the chest, or on the handlebars of a bike or similar vehicle. They may also be mounted on a tripod or on a monopod for handheld use. An action camera is usually designed to require minimal interaction once recording has begun, as this allows continuous capture of the action without having to interact with the camera. A typical action camera records onto a microSD card and has either a Micro-USB or a USB-C connector. Action cameras are associated with outdoor sports, and, often attached to helmets, surfboards or handlebars, are an integral part of many extreme sports such as base jumping and wingsuit flying. Sometimes several cameras are used to capture specific perspectives, such as a helmet camera that sees the perspective of the actor in combination with a second camera attached to the environment of the rider, such as a board, wing, handlebar or wrist, that looks back onto the rider and records their reactions. The category is commonly associated with the GoPro range of cameras, and many action cameras come with a GoPro mount adapter to take advantage of the accessories available for these cameras. However, many GoPro alternatives have entered the action camera market in recent years. In 2014, worldwide action camera sales increased by 44 percent from the previous year, and half of the cameras sold had the capability to shoot Ultra High Definition at 4K resolution. Action camera sales have surpassed traditional camcorder and compact camera sales, and it is predicted that in 2019 action camera sales will surpass those of all other camera types, whose sales are declining or stabilizing. By 2021, the Ultra HD category of the action camera market is expected to reach $3.3 billion; the Full HD category, meanwhile, is expected to reach $2.2 billion, with the surveillance/security industry driving growth. In 2018, Sony launched a shockproof and waterproof camera with a 1-inch sensor in a body similar in size to an action camera. However, Sony is not marketing it as an action camera, but rather as a professional video camera with the capability to shoot with up to 15 cameras at the same time. Product lines: Besides the GoPro line, other models of action cams include the Sony HDR-AS10, HDR-AS15 and HDR-AS30V, Garmin VIRB, Panasonic HX-A500E, Toshiba Camileo X-Sports, Polaroid Cube, TomTom Bandit, Xiaomi Yi Camera, Ricoh WG-M1, Insta360, and DJI Osmo Action.
**Gasket** A gasket is a mechanical seal which fills the space between two or more mating surfaces, generally to prevent leakage from or into the joined objects while under compression. It is a deformable material that is used to create a static seal and maintain that seal under various operating conditions in a mechanical assembly. Gaskets allow for "less-than-perfect" mating surfaces on machine parts, where they can fill irregularities. Gaskets are commonly produced by cutting from sheet materials. Given the potential cost and safety implications of faulty or leaking gaskets, it is critical that the correct gasket material is selected to fit the needs of the application. Gaskets for specific applications, such as high-pressure steam systems, may contain asbestos; however, due to the health hazards associated with asbestos exposure, non-asbestos gasket materials are used when practical. It is usually desirable that the gasket be made from a material that is to some degree yielding, such that it is able to deform and tightly fill the space it is designed for, including any slight irregularities. Some types of gaskets require a sealant be applied directly to the gasket surface to function properly. Some (piping) gaskets are made entirely of metal and rely on a seating surface to accomplish the seal; the metal's own spring characteristics are utilized (up to but not exceeding σy, the material's yield strength). This is typical of some "ring joints" (RTJ) or some other metal gasket systems. These joints are known as R-con and E-con compressive type joints. Some gaskets are dispensed and cured in place; these materials are called formed-in-place gaskets. Properties: Gaskets are normally made from a flat material, a sheet such as paper, rubber, silicone, metal, cork, felt, neoprene, nitrile rubber, fiberglass, polytetrafluoroethylene (otherwise known as PTFE or Teflon) or a plastic polymer (such as polychlorotrifluoroethylene). One of the more desirable properties of an effective gasket in industrial applications, for compressed-fiber gasket material, is the ability to withstand high compressive loads. Most industrial gasket applications involve bolts exerting compression well into the 14 MPa (2000 psi) range or higher. Generally speaking, there are several truisms that allow for better gasket performance, one of the more tried and tested being: "The more compressive load exerted on the gasket, the longer it will last". There are several ways to measure a gasket material's ability to withstand compressive loading. The "hot compression test" is probably the most accepted of these tests, and most manufacturers of gasket materials will provide or publish the results of these tests. Gasket design: Gaskets come in many different designs based on industrial usage, budget, chemical contact and physical parameters. Sheet gaskets: Gaskets can be produced by punching the required shape out of a sheet of flat, thin material, resulting in a sheet gasket. Sheet gaskets are fast and cheap to produce, and can be made from a variety of materials, among them fibrous materials and matted graphite (and, in the past, compressed asbestos). These gaskets can fill various chemical requirements based on the inertness of the material used. Non-asbestos gasket sheet is durable, available in multiple materials, and thick in nature.
Material examples are mineral fiber, carbon, or synthetic rubbers such as EPDM, nitrile, neoprene, natural rubber and SBR insertion, each of which has unique properties suitable for different applications. Applications using sheet gaskets involve acids, corrosive chemicals, steam or mild caustics. Flexibility and good recovery prevent breakage during installation of a sheet gasket. Solid material gaskets: The idea behind solid material gaskets is to use metals which cannot be punched out of sheets but are still cheap to produce. These gaskets generally have a much higher level of quality control than sheet gaskets and generally can withstand much higher temperatures and pressures. The key downside is that a solid metal must be greatly compressed in order to become flush with the flange head and prevent leakage. The material choice is also more difficult: because metals are primarily used, process contamination and oxidation are risks. An additional downside is that the metal used must be softer than the flange, in order to ensure that the flange does not warp and thereby prevent sealing with future gaskets. Even so, these gaskets have found a niche in industry. Spiral-wound gaskets: Spiral-wound gaskets comprise a mix of metallic and filler material. Generally, the gasket has a metal (normally carbon-rich or stainless steel) wound outwards in a circular spiral (other shapes are possible), with the filler material (generally a flexible graphite) wound in the same manner but starting from the opposing side. This results in alternating layers of filler and metal. The filler material in these gaskets acts as the sealing element, with the metal providing structural support. These gaskets have proven to be reliable in most applications, and allow lower clamping forces than solid gaskets, albeit with a higher cost. Constant seating stress gaskets: The constant seating stress gasket consists of two components: a solid carrier ring of a suitable material, such as stainless steel, and two sealing elements of some compressible material installed within two opposing channels, one channel on either side of the carrier ring. The sealing elements are typically made from a material (expanded graphite, expanded polytetrafluoroethylene (PTFE), vermiculite, etc.) suitable to the process fluid and application. Constant seating stress gaskets derive their name from the fact that the carrier ring profile takes flange rotation (deflection under bolt preload) into consideration. With all other conventional gaskets, as the flange fasteners are tightened, the flange deflects radially under load, resulting in the greatest gasket compression, and the highest gasket stress, at the outer gasket edge. Since the carrier ring used in constant seating stress gaskets takes this deflection into account for a given flange size, pressure class, and material, the carrier ring profile can be adjusted to make the gasket seating stress radially uniform across the entire sealing area. Further, because the sealing elements are fully confined by the flange faces in opposing channels on the carrier ring, any in-service compressive forces acting on the gasket are transmitted through the carrier ring and avoid any further compression of the sealing elements, thus maintaining a 'constant' gasket seating stress while in service.
Thus, the gasket is immune to common gasket failure modes, including creep relaxation, high system vibration, and system thermal cycles. Properties: The fundamental concept underlying the improved sealability of constant seating stress gaskets is that if (i) the flange sealing surfaces are capable of attaining a seal, (ii) the sealing elements are compatible with the process fluid and application, and (iii) sufficient gasket seating stress is achieved on installation to effect a seal, then the possibility of the gasket leaking in-service is greatly reduced or eliminated altogether. Properties: Double-jacketed gaskets Double-jacketed gaskets are another combination of filler material and metallic materials. In this application, a tube with ends that resemble a "C" is made of the metal, with an additional piece made to fit inside the "C", making the tube thickest at the meeting points. The filler is pumped between the shell and the piece. When in use, the compressed gasket has a larger amount of metal at the two tips where contact is made (due to the shell/piece interaction), and these two places bear the burden of sealing the process. Since all that is needed is a shell and a piece, these gaskets can be made from almost any material that can be made into a sheet, and a filler can then be inserted. Properties: Kammprofile gaskets Kammprofile gaskets (sometimes spelled Camprofile) are used in many older seals since they have both a flexible nature and reliable performance. Kammprofiles work by having a solid corrugated core with a flexible covering layer. This arrangement allows for very high compression and an extremely tight seal along the ridges of the gasket. Since generally the graphite layer will fail before the metal core, a Kammprofile gasket can be repaired during later plant downtime. Kammprofile has a high capital cost for most applications, but this is countered by long life and increased reliability. Properties: Fishbone gaskets Fishbone gaskets are direct replacements for Kammprofile and spiral-wound gaskets. They are fully CNC-machined from similar materials, but the design of the gasket eliminates inherent shortcomings. Fishbone gaskets do not unwind in storage or in the plant. The rounded edges do not cause flange damage. The added "stop step" prevents the Fishbone gasket from being over-compressed or crushed, often caused by hot torque techniques on plant start-up. The bones of the gasket remain ductile and adjust to thermal cycling and system pressure spikes, resulting in a durable and reliable flange seal that significantly outperforms other gaskets of this nature. Properties: Flange gasket A flange gasket is a type of gasket made to fit between two sections of pipe that are flared to provide higher surface area. Flange gaskets come in a variety of sizes and are categorized by their inside diameter and their outside diameter. Properties: There are many standards for the gaskets used on pipe flanges. These gaskets can be divided into four major categories: sheet gaskets, corrugated metal gaskets, ring gaskets, and spiral wound gaskets. Sheet gaskets are simple: they are cut to size, with or without bolt holes, in standard sizes, with thickness and material suited to the medium and to the temperature and pressure of the pipeline. Properties: Ring gaskets, also known as RTJ (ring type joint) gaskets, are mostly used in offshore oil and gas pipelines and are designed to work under extremely high pressure.
They are solid rings of metal with different cross-sections, such as oval, round, and octagonal. Sometimes they come with a hole in the center for pressure equalization. Properties: Spiral wound gaskets are also used in high pressure pipelines and are made with stainless steel outer and inner rings and a center filled with spirally wound stainless steel tape wound together with graphite and PTFE, formed in a V shape. Internal pressure acts upon the faces of the V, forcing the gasket to seal against the flange faces. Most spiral wound gasket applications use two standard gasket thicknesses: 1/8 inch and 3/16 inch. With 1/8 inch thick gaskets, compression to a 0.100 inch thickness is recommended. With 3/16 inch gaskets, compression to a 0.130 inch thickness is recommended. Properties: Soft cut gasket Soft gasket is a term that refers to a gasket that is cut from a soft (flexible) sheet material and can easily conform to surface irregularities even when the bolt load is low. Soft gaskets are used in applications such as heat exchangers, compressors, valve bonnets and pipe flanges. Ring type joint gasket (RTJ gasket) The annular seal (RTJ seal) is a high-integrity, high-temperature, high-pressure seal for applications in the oil industry, oilfield drilling, pressure vessel connections, pipes, valves and more. Properties: The movement of the ring packing (RTJ) under the axial compressive load can be described as an irregular flow within the groove of the deformed sealing flange. The RTJ seal has a small load-bearing area, which leads to a high surface pressure between the sealing surface and the groove; its maintenance properties are poor, and it is not suitable for reuse. Improvements: Many gaskets incorporate minor improvements to extend or ensure acceptable operating conditions: A common improvement is an inner compression ring. A compression ring allows for higher flange compression while preventing gasket failure. The effects of a compression ring are minimal, and one is generally used only when the standard design experiences a high rate of failure. Another common improvement is an outer guiding ring. A guiding ring allows for easier installation and serves as a minor compression inhibitor. In some alkylation services, these can be fitted to double-jacketed gaskets to show when the first seal has failed, through an inner lining system coupled with alkylation paint. Reasons for failure: Unevenly distributed pressing force Uneven pressure can be caused by a variety of factors. First is the human factor: asymmetric application of the bolt preload can cause uneven pressure. Second, while in theory the sealing surfaces are perfectly parallel when the flanges are pressed together, in practice the centerline of a pipeline cannot be absolutely concentric, and uneven bolt-tightening moments distort the flange. With such asymmetric connections, the sealing surfaces are more or less deformed, the pressure is reduced under the running load, and the joint becomes prone to leakage. Third, the density of the bolt arrangement has an obvious impact on the pressure distribution: the closer the bolts, the more uniform the pressure. Reasons for failure: Stress relaxation and torque loss After the bolts on the flange are tightened, vibration, temperature changes, and other factors such as stress relaxation in spiral wound gaskets cause the bolt tension to gradually decrease, resulting in a loss of torque and eventually a leak. In general, longer bolts and bolts of smaller diameter are better at preventing the loss of torque; a long thin bolt is an effective way to prevent torque loss.
Heating the bolt for a certain period of time to stretch it, and then maintaining a given torque, is very effective in preventing the loss of torque. When the gasket is thinner and smaller, there will be a greater loss of torque. In addition, strong vibration of the machine and the pipe itself should be prevented, and both should be isolated from the vibration of adjacent equipment. Impacts on the sealing surface are also significant: avoiding impacts on the tightened bolts helps prevent the loss of torque. Reasons for failure: Surface not smooth It is important that the sealing surface be finished properly, otherwise it will cause leakage. A surface that is too smooth can allow the gasket material to blow out under pressure. A surface that is not machined flat can provide leak paths. A good rule of thumb is a surface machined to 32 RMS. This ensures the surface is flat, but with enough surface finish to bite into the gasket under compression. Reasons for failure: Metal reinforced gasket With metal-core coated gaskets, both sides of the core are covered with a flexible, malleable sealant. Reinforced metal seals are available in pressure classes up to 300. The strong metal core resists the pressure while the soft coating ensures exceptional sealing. Sources: Bickford, John H.: An Introduction to the Design and Behavior of Bolted Joints, 3rd ed., Marcel Dekker, 1995, pg. 5 Latte, Dr. Jorge and Rossi, Claudio: High Temperature Behavior of Compressed Fiber Gasket Materials, and an Alternative Approach to the Prediction of Gasket Life, FSA presented Paper, 1995, pg. 16
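As a worked illustration of the seating-stress arithmetic mentioned above (bolt compression in the 14 MPa / 2000 psi range acting over the gasket's contact area), the following minimal sketch estimates the total and per-bolt load needed to seat an annular gasket. The gasket dimensions and bolt count are hypothetical values chosen for illustration, not figures from the text.

```c
/* Illustrative gasket seating-load estimate: load = seating stress x contact area.
 * The gasket dimensions and bolt count below are hypothetical. */
#include <stdio.h>

int main(void)
{
    const double PI = 3.141592653589793;
    double od = 0.150;             /* gasket outside diameter, m (hypothetical) */
    double id = 0.100;             /* gasket inside diameter, m (hypothetical)  */
    double seating_stress = 14e6;  /* target seating stress, Pa (~2000 psi, per the text) */
    int    bolts = 8;              /* hypothetical bolt count */

    double area = PI / 4.0 * (od * od - id * id);  /* annular contact area, m^2 */
    double load = seating_stress * area;           /* total bolt load, N */

    printf("Contact area: %.4f m^2\n", area);
    printf("Total bolt load: %.1f kN, per bolt: %.1f kN\n",
           load / 1e3, load / bolts / 1e3);
    return 0;
}
```

For this hypothetical flange the result is roughly 137 kN in total, or about 17 kN per bolt, which is why bolt preload, bolt count, and bolt spacing dominate the failure modes discussed above.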
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Warbits** Warbits: Warbits is a turn-based tactics video game developed and published by Risky Lab. It was released on April 13, 2016 for iOS. A remaster, Warbits+, was announced in 2021 for iOS, Windows and Android, with its release TBA. The game received positive reception from critics for its gameplay, graphics and humor. Gameplay: Warbits is based heavily on Advance Wars, and pits several armies against each other. The game has a single-player campaign as well as local and online multiplayer. The objective is usually to wipe out the enemy's troops or capture their headquarters. Neutral cities can be captured by soldiers to generate money, which is used to manufacture new units at factories. Each type of troop has its own strengths and weaknesses. For example, anti-aircraft guns are extremely powerful against bombers. Terrain bonuses also factor in, with areas such as forests and towns giving units defensive boosts for standing on them. Units largely correspond to those in Advance Wars, but the game adds the Ranger unit, a sniper squad that can either move or attack and is most effective on mountains. Additionally, scout probes are able to hover and cross shallow water. Plot: Warbits takes place in a formerly war-torn world that has agreed to use a military simulation to decide real-life political disputes rather than actual combat, saving "billions of lives". In the game's single-player campaign, the player controls the Red Bear Republic, which responds to mysterious provocations from other nations. Strange structures also start appearing within the simulation that act as obstacles. Finally, it is revealed that the artificial intelligence controlling the simulation has rebelled, seeking to control the nations. The factions, realizing they have been set up, band together to destroy the digital core of the AI. Development: Warbits was developed by the two-man indie team of programmer Joseph Borghetti and artist Reilly Stroope, who worked remotely and never met in person. They characterized it as a "dumb idea" due to the potential failure of a "niche strategy game" launching on only a single platform, saying that, while it targeted an under-served market, it was nonetheless a "gargantuan task" for first-time game developers. After being introduced to each other on a small community forum, they commenced development in 2012, seeking to make a mobile game. The inspiration behind the game was the fact that Advance Wars was not on a mobile platform. The developers assumed it would be finished in six months, but realized the large amount of depth and complexity in the Wars games would be more difficult to emulate than they believed. Keeping their day jobs, they developed the game as a hobby, spending the first two years learning how to develop a game from scratch, and the next two years completing the game. Much of the development time was spent simply learning how to program as opposed to creating the game itself. The game was ultimately developed in the Cocos2d engine using Objective-C. The developers spent about US$11,000 hiring freelancers to provide assets such as sound effects, music and a trailer. About half the money was spent on a complex backend system and map editor that ultimately went unused. Additional money was spent purchasing an Apple developer license and Dropbox Pro.
The developers regretted not starting with smaller games, noting that such a large game could have easily failed and never recouped the time or investment. Upon launch, the game was made the App Store's Editor's Choice for two weeks straight, attributed to launching during an Earth Day promotion that prevented larger developers from launching their apps. The game sold the majority of its units within these two weeks, with sales decreasing drastically afterwards. By late 2016, it had made lifetime sales of US$173,000, earning the developers US$116,000. Most of its reviews were from mobile gaming sites rather than more major outlets. Reception: The game was well received by critics, with an aggregate score of 92/100 on Metacritic. Nadia Oxford of Gamezebo rated the game 4.5/5 stars, saying its gameplay was "deep" and that it should "appease starving Advance Wars fans", and commending the game's sense of humor. She criticized the fact that the player cannot preview enemy movement range, and that troops' strengths and weaknesses were "difficult to remember". Carter Dodson of TouchArcade also rated the game 4.5/5 stars, saying it "lacks originality", but is "slick and well-constructed". Harry Slater of Pocket Gamer UK rated the game 90/100, praising the game for not being "dumbed down" for a mobile audience, and calling it "wonderfully balanced", also saying that "fans of the genre have been screaming for" such a game. Legacy: In 2021, the game's developers announced that Warbits+, an updated version of the original, would receive a multi-platform release on iOS, Android and Windows. Warbits+ would include quality-of-life features, cross-platform play, and the ability to create community maps, among other additions. Its release date remains TBA.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Applications of cybernetics in economics** Applications of cybernetics in economics: Economics is one domain in which cybernetics has had application and influence. In the Soviet Union: The Great Soviet Encyclopaedia defines economic cybernetics as a scientific field wherein cybernetic approaches are applied to economics. It facilitates a dialogue between microsystems and macrosystems. The design of self-regulating control systems for a real-time planned economy was explored by economist Oskar Lange, cyberneticist Viktor Glushkov, and other Soviet cyberneticists during the 1960s. By the time information technology was developed enough to enable feasible economic planning based on computers, the Soviet Union and eastern bloc countries began moving away from planning and eventually collapsed. Hayek: Friedrich Hayek attended the 1960 Symposium on Principles of Self-Organization, organised by Heinz von Foerster. Hayek: Hayek mentions cybernetics as a discipline that could help economists understand the "self-organizing or self-generating systems" called markets. Markets being "complex phenomena", the best way to examine their functions is by using the feedback mechanism explained by cybernetic theorists. That way, economists could make "pattern predictions". Therefore, the market for Hayek is a "communication system", an "efficient mechanism for digesting dispersed information". The economist and the cyberneticist are like gardeners who are "providing the appropriate environment". Hayek's definition of information is idiosyncratic and precedes the information theory used in cybernetics and the natural sciences. Hayek: Finally, Hayek also considers Adam Smith's idea of the invisible hand an anticipation of the operation of the feedback mechanism in cybernetics. In the same book, Law, Legislation and Liberty, Hayek mentions, along with cybernetics, that economists should rely on the scientific findings of Ludwig von Bertalanffy's general systems theory, along with information and communication theory and semiotics. Towards a new socialism: A proposal for a "New Socialism" was outlined by the computer scientists Paul Cockshott and Allin Cottrell in 1995 (Towards a New Socialism), where computers determine and manage the flows and allocation of resources among socially owned enterprises.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bachelor of Business Information Systems** Bachelor of Business Information Systems: Bachelor of Business Information Systems (BBIS) is an IT-focused undergraduate program designed to produce graduates who understand the needs of business and industry and are well equipped to meet those needs. It blends core concepts from a traditional business administration degree and a technology-related degree. International variations: Australia It is offered by Asia Pacific International College (APIC), Kent Institute Australia, Swinburne University of Technology, RMIT University, La Trobe University, Melbourne Institute of Technology, Monash University, Open Universities Australia, Sydney International School of Technology and Commerce and Torrens University Australia. In Australia, it is a three-year program. Germany Furtwangen University of Applied Sciences in Germany offers International Business Information Systems, a leading bachelor's program in Germany, educating students in three fields of competence: Applied Computer Science, Digital Business Management, and Data Science & Artificial Intelligence. International variations: Pakistan The Bachelor of Business Information Systems (BBIS) degree is offered by several institutions in Pakistan, including the Institute of Business Administration (IBA) and the Shaheed Zulfikar Ali Bhutto Institute of Science and Technology (SZABIST), both located in Karachi, and, in Lahore, the Lahore University of Management Sciences (LUMS), the Lahore School of Economics (LSE), the University of the Punjab, and the University of Management and Technology (UMT). In the capital city of Islamabad, students can pursue a BBIS degree at the NUST Business School (NBS), the COMSATS Institute of Information Technology, the Pakistan Institute of Engineering and Applied Sciences (PIEAS), and Quaid-i-Azam University. Other notable institutions offering the program include Sukkur IBA University in Sukkur, the Institute of Management Sciences (IMS) in Peshawar, the Karachi School for Business and Leadership (KSBL) in Karachi, the University of Karachi, and the University of Engineering and Technology (UET) in Lahore. International variations: India In India, it is known as Bachelor of Science (Business Information System), which is offered by Hindustan Institute of Technology and Management and FTMS Global Academy India Private Limited. International variations: Nepal The BBIS program is offered by Kathmandu University under the Department of Management Informatics and Communication, School of Management, and by Little Angels College of Management (LACM). LACM is affiliated to Kathmandu University. It is a four-year, 141-credit-hour comprehensive bachelor's degree program, designed by blending domain knowledge of information systems and information technology with that of business and management. International variations: Singapore Murdoch University offers a Bachelor of Science in Business Information Systems in Singapore. United States Ashford University offers an online Bachelor of Arts in Business Information Systems. Scope and career prospects: A typical role for BBIS graduates is systems analyst, where graduates can apply their expertise in analyzing business information system requirements and system deployment methods.
The program prepares graduates to be information systems professionals; they can work as a database administrator (DBA) and, with additional work experience and professional development, as a Chief Information Officer (CIO) or in other senior management positions. They can also work as a Business Analyst, IT Project Manager, IT Consultant, IT Technical Support Officer, Programmer and Designer, Applications Developer, Network Administrator, Computer Engineer, Computer System Auditor, Computer System Engineer, Data Modeller, Database Designer and Administrator, Electronic Commerce Administrator, Hardware Technician, Business Process Analyst, Enterprise System Analyst, Information and Data Manager, Information Management Administrator, Information Manager, Management Consultant, Sales Representative, Specialist Consultant, System Designer, Training and Support Leader or Training Manager.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Peptide hormone** Peptide hormone: Peptide hormones are hormones whose molecules are peptides. Peptide hormones have shorter amino acid chain lengths than protein hormones. These hormones have an effect on the endocrine system of animals, including humans. Most hormones can be classified as either amino acid–based hormones (amine, peptide, or protein) or steroid hormones. The former are water-soluble and act on the surface of target cells via second messengers; the latter, being lipid-soluble, move through the plasma membranes of target cells (both cytoplasmic and nuclear) to act within their nuclei. Peptide hormone: Like all peptides, peptide hormones are synthesized in cells from amino acids according to mRNA transcripts, which are synthesized from DNA templates inside the cell nucleus. Preprohormones, peptide hormone precursors, are then processed in several stages, typically in the endoplasmic reticulum, including removal of the N-terminal signal sequence and sometimes glycosylation, resulting in prohormones. The prohormones are then packaged into membrane-bound secretory vesicles, which can be secreted from the cell by exocytosis in response to specific stimuli (e.g. an increase in Ca2+ and cAMP concentration in cytoplasm). These prohormones often contain superfluous amino acid residues that were needed to direct folding of the hormone molecule into its active configuration but have no function once the hormone folds. Specific endopeptidases in the cell cleave the prohormone just before it is released into the bloodstream, generating the mature hormone form of the molecule. Mature peptide hormones then travel through the blood to all of the cells of the body, where they interact with specific receptors on the surfaces of their target cells. Peptide hormone: Some neurotransmitters are secreted and released in a similar fashion to peptide hormones, and some "neuropeptides" may be used as neurotransmitters in the nervous system in addition to acting as hormones when released into the blood. When a peptide hormone binds to a receptor on the surface of the cell, a second messenger appears in the cytoplasm, which triggers signal transduction leading to the cellular responses. Some peptides (angiotensin II, basic fibroblast growth factor-2, parathyroid hormone-related protein) also interact with intracellular receptors located in the cytoplasm or nucleus by an intracrine mechanism. List of peptide hormones in humans: adrenocorticotropic hormone (ACTH) adropin amylin angiotensin atrial natriuretic peptide (ANP) calcitonin cholecystokinin (CCK) gastrin ghrelin glucagon growth hormone follicle-stimulating hormone (FSH) insulin leptin luteinizing hormone (LH) melanocyte-stimulating hormone (MSH) oxytocin parathyroid hormone (PTH) prolactin renin somatostatin thyroid-stimulating hormone (TSH) thyrotropin-releasing hormone (TRH) vasopressin, also called arginine vasopressin (AVP) or anti-diuretic hormone (ADH) vasoactive intestinal peptide (VIP)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Xdelta** Xdelta: xdelta is a command-line program for delta encoding, which generates the difference between two files. This is similar to diff and patch, but it is targeted at binary files and does not generate human-readable output. It was first released in 1997. The developer of xdelta is Joshua MacDonald, who currently maintains the program. The algorithm of xdelta1 was based on the algorithm of rsync, developed by Andrew Tridgell, though it uses a smaller block size. xdelta3 can generate the standardized VCDIFF format, making it compatible with other delta encoding software that supports VCDIFF. It runs on Unix-like operating systems and Microsoft Windows. xdelta can handle files up to 2^64 bytes, making it suitable for large backups.
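As a sketch of the typical workflow, xdelta3 encodes a delta from an old and a new file, and later applies that delta to reconstruct the new file (-e encode, -d decode, -s source file). The example below invokes the tool from C via system(); the file names are hypothetical, and xdelta3 is assumed to be installed and on the PATH.

```c
/* Sketch: generating and applying a binary delta with xdelta3.
 * Assumes xdelta3 is installed and on PATH; file names are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Encode: produce backup.vcdiff describing how to turn old.bin into new.bin */
    if (system("xdelta3 -e -s old.bin new.bin backup.vcdiff") != 0) {
        fprintf(stderr, "encoding failed\n");
        return 1;
    }
    /* Decode: reconstruct the new file (as restored.bin) from old.bin plus the delta */
    if (system("xdelta3 -d -s old.bin backup.vcdiff restored.bin") != 0) {
        fprintf(stderr, "decoding failed\n");
        return 1;
    }
    puts("delta round-trip complete");
    return 0;
}
```

Because only the (usually small) VCDIFF delta needs to be stored alongside the original, this pattern is what makes xdelta attractive for the large incremental backups mentioned above.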
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cable-ready** Cable-ready: Cable-ready is a designation which indicates that a TV set or other television-receiving device (such as a VCR or DVR) is capable of receiving cable TV without a set-top box. The term originated with analog TV, which uses different frequencies for cable versus over-the-air. This gives more channels, and at lower frequencies, so that early systems did not have to be so broadband and were therefore less expensive to build. For North American cable television frequencies, the VHF channels 2 to 13 are the same, while an extra 51 cable channels exist between there and over-the-air UHF channel 14. Thus, over-the-air channel 14 can be seen on cable channel 65. Conversely, those 51 extra channels (plus an additional five inserted at 95 to 99) cannot be seen at all on a device which is not cable-ready. A "181-channel" tuner receives 125 channels on cable (1 to 125) plus the 56 over-the-air channels (14 to 69) which have no identical cable counterpart (channels 2 to 13 being the same on both); digital cable ready TVs add 10 more cable channels (126 to 135). Other cable channels (0, 00 and 1), which along with channels 136 to 158 are ill-defined and thus rarely used, are often not included in otherwise cable-ready tuners. Those "lowest numbered" channels often reside between VHF channels four and five on HRC (harmonically related carrier) and IRC (incrementally related carrier) systems, where the normally four MHz gap is increased to six MHz, wide enough for one NTSC channel. Similar situations exist in the rest of the world as well. Cable-ready: Another use of a cable-ready tuner is for receiving amateur television (ATV) in North America, where the main ATV band appears on cable channels 56 to 59, 57 being the most popular. Most repeaters output on these channels, while input from amateur operators is often in another band. Digital cable: Digital cable-ready or DCR is a label used by manufacturers on new televisions which feature built-in technology that allows consumers to receive SDTV and HDTV digital cable programs. Usually this is a QAM tuner, since over-the-air broadcasts are either COFDM (DVB-T and ISDB-T) or 8VSB (ATSC-T). Some cable TV systems in North America use 16VSB instead of 256QAM, for which there are no cable-ready devices. Only channels that are left unencrypted can be received using this method; however, encrypted channels can be viewed without a set-top box using systems such as a CableCard or a Downloadable Conditional Access System. Digital cable: Interactive digital cable ready or iDCR extends DCR. Unlike the DCR standard, iDCR supports interactive customer features such as electronic program guides, pay-per-view and video on demand. Consumer devices which support iDCR also support the new OpenCable Application Platform (OCAP) standard developed by CableLabs. Digital cable: In practice, however, the rental of cable converter boxes (or, since the late 2010s, the rent-to-own arrangement of digital media players with a provider's app if the customer prefers) has remained a lucrative business line for most cable providers, and they have preferred to phase out support of analog or digital cable-ready televisions which are not CableCard compliant and rent converter boxes out instead, with prevention of cable theft another reason for the arrangement.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BBCH-scale (root and stem vegetable)** BBCH-scale (root and stem vegetable): The BBCH-scale for root and stem vegetables identifies the phenological development stages of the root and stem vegetables such as carrot, celeriac, kohlrabi, chicory, radish and swede, using the BBCH-scale.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DNA barcoding in diet assessment** DNA barcoding in diet assessment: DNA barcoding in diet assessment is the use of DNA barcoding to analyse the diet of organisms and further detect and describe their trophic interactions. This approach is based on the identification of consumed species by characterization of the DNA present in dietary samples, e.g. individual food remains, regurgitates, gut and fecal samples, or the homogenized body of the host organism that is the target of the diet study (for example, the whole body of an insect). DNA barcoding in diet assessment: The DNA sequencing approach to be adopted depends on the diet breadth of the target consumer. For organisms feeding on one or only a few species, traditional Sanger sequencing techniques can be used. For polyphagous species with diet items more difficult to identify, all consumed species can be determined using NGS methodology. The barcode markers utilized for amplification differ depending on the diet of the target organism. For herbivore diets, the standard DNA barcode loci differ significantly depending on the plant taxonomic level sought. For identifying plant tissue at the taxonomic family or genus level, the markers rbcL and the trnL intron are used; these differ from the loci ITS2, matK and trnH-psbA (a noncoding intergenic spacer) used to identify diet items to genus and species level. For animal prey, the most broadly used DNA barcode markers for diet identification are the mitochondrial cytochrome c oxidase I (COI) and cytochrome b (cytb). When the diet is broad and diverse, DNA metabarcoding is used to identify most of the consumed items. Advantages: A major benefit of using DNA barcoding in diet assessment is the ability to provide high taxonomic resolution of consumed species. Indeed, when compared to traditional morphological analysis, DNA barcoding enables a more reliable separation of closely related taxa, reducing the observed bias. Moreover, DNA barcoding enables the detection of soft and highly digested items that are not recognisable through morphological identification. For example, arachnids feed on the pre-digested bodies of insects or other small animals, and their stomach contents are too decomposed and morphologically unrecognizable for traditional methods such as microscopy. When investigating herbivore diets, DNA metabarcoding enables detection of highly digested plant items, with a higher number of taxa identified compared to microhistology and macroscopic analysis. For instance, Nichols et al. (2016) highlighted the taxonomic precision of metabarcoding on rumen contents, with on average 90% of DNA sequences identified to genus or species level, in comparison to 75% of plant fragments recognised with macroscopy. Moreover, another empirically tested advantage of metabarcoding over traditional, time-consuming methods is its higher cost efficiency. Finally, with its fine resolution, DNA barcoding represents a crucial tool in wildlife management for identifying the feeding habits of endangered species and of animals that can cause feeding damage to the environment. Challenges: With DNA barcoding it is not possible to retrieve information about the sex or age of prey species, which can be crucial. This limitation can nonetheless be overcome with an additional step in the analysis, using microsatellite polymorphism and Y-chromosome amplification. Moreover, DNA provides detailed information only on the most recent feeding events (e.g. 24–48 hr); it cannot provide a longer dietary perspective unless continuous sampling is conducted.
Additionally, when using generic primers that amplify "barcode" regions from a broad range of food species, the amplifiable host DNA may largely outnumber the prey DNA, complicating prey detection. However, a strategy to prevent host DNA amplification is the addition of a predator-specific blocking primer. Indeed, blocking primers that suppress amplification of predator DNA allow the amplification of the other vertebrate groups and produce amplicon mixes that are predominately food DNA. Despite the improvement of diet assessment via DNA barcoding, secondary consumption (prey of the prey, parasites, etc.) still represents a confounding factor. In fact, some secondary prey may appear in the analysis as primary prey items, introducing a bias. However, due to a much lower total biomass and a higher level of degradation, DNA of secondary prey might represent only a minor part of the sequences recovered compared to primary prey. The quantitative interpretation of DNA barcoding results is not straightforward. There have been attempts to use the number of sequences recovered to estimate the abundance of prey species in diet contents (e.g. gut, faeces). For example, if wolves ate more moose than wild boar, there should be more moose DNA in their gut, and thus more moose sequences recovered. Despite the evidence for general correlations between sequence number and biomass, actual evaluations of this method have been unsuccessful. This can be explained by the fact that tissues originally contain different densities of DNA and can be digested differently. Examples: Mammals Mammal diets are widely studied using DNA barcoding and metabarcoding. Some differences in methodology can be observed depending on the feeding strategy of the target mammal species, i.e. whether it is a herbivore, carnivore, or omnivore. Examples: For herbivorous mammal species, DNA is usually extracted from faecal samples or from rumen contents collected from road kills or from animals killed during regular hunting. Within DNA barcoding, the trnL approach can be used to identify plant species by using a very short but informative fragment of chloroplast DNA (the P6 loop of the chloroplast trnL (UAA) intron). Potentially, this approach is applicable to all herbivorous species feeding on angiosperms and gymnosperms. As an alternative to the trnL approach, the markers rbcL, ITS2, matK and trnH-psbA can be used to amplify plant species. Examples: When studying small herbivores with a cryptic lifestyle, such as voles and lemmings, DNA barcoding of ingested plants can be a crucial tool, giving an accurate picture of food utilization. Additionally, the fine resolution in plant identification obtained with DNA barcoding allows researchers to understand changes in diet composition over time and variability among individuals, as observed in the alpine chamois (Rupicapra rupicapra). Between October and November, analysis of faeces composition via DNA barcoding showed a shift in the alpine chamois' diet preferences. Also, different diet categories were observed amongst individuals within each month. Examples: For carnivores, the use of non-invasive approaches is crucial, especially when dealing with elusive and endangered species. Diet assessment through DNA barcoding of faeces can have a greater efficiency in prey species detection compared to traditional diet analysis, which mostly relies upon the morphological identification of undigested hard remains in the faeces.
Estimating the vertebrate diet diversity of the leopard cat (Prionailurus bengalensis) in Pakistan, Shehzad et al. (2012) identified a total of 18 prey taxa using DNA barcoding on faeces. Eight distinct bird taxa were reported, while previous studies based on conventional methods had not identified any bird species in the leopard cat diet. Another example is the use of DNA barcoding to identify soft remains of prey in the stomach contents of predators, e.g. grey seals (Halichoerus grypus) and harbour porpoises (Phocoena phocoena). DNA metabarcoding is a game changer for the study of complex diets, such as those of omnivorous predators, which feed on many different species of both plant and animal origin. This methodology does not require prior knowledge of the food consumed by animals in the habitat they occupy. In a study of brown bear (Ursus arctos) diet, DNA metabarcoding allowed accurate reconstruction of a wide range of taxonomically different items present in faecal samples collected in the field.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HathiTrust** HathiTrust: HathiTrust Digital Library is a large-scale collaborative repository of digital content from research libraries, including content digitized via the Google Books and Internet Archive digitization initiatives, as well as content digitized locally by libraries. Etymology: Hathi (pronounced "hah-tee"), derived from the Sanskrit hastin, is the Hindi/Urdu word for 'elephant', an animal famed for its long-term memory. History: HathiTrust was founded in October 2008 by the twelve universities of the Committee on Institutional Cooperation and the eleven libraries of the University of California. The partnership includes over 60 research libraries across the United States, Canada, and Europe, and is based on a shared governance structure. Costs are shared by the participating libraries and library consortia. The repository is administered by the University of Michigan. The executive director of HathiTrust is Mike Furlough, who succeeded founding director John Wilkin after Wilkin stepped down in 2013. The HathiTrust Shared Print Program is a distributed collective collection whose participating libraries have committed to retaining almost 18 million monograph volumes for 25 years, representing three-quarters of HathiTrust digital book holdings. In September 2011, the Authors Guild sued HathiTrust (Authors Guild, Inc. v. HathiTrust), alleging massive copyright violation. A federal court ruled against the Authors Guild in October 2012, finding that HathiTrust's use of books scanned by Google was fair use under US law. The court's opinion relied on the transformativeness doctrine of federal copyright law, holding that the Trust had transformed the copyrighted works without infringing on the copyright holders' rights. That decision was largely affirmed by the Second Circuit on June 10, 2014, which found that providing search and accessibility for the visually impaired were grounds to consider the service transformative and fair use, and remanded to the lower court to reconsider whether the plaintiffs had standing to sue regarding HathiTrust's library preservation copies. In October 2015, HathiTrust comprised over 13.7 million volumes, including 5.3 million in the public domain in the United States. HathiTrust provides a number of discovery and access services, notably full-text search across the entire repository. In 2016, over 6.17 million users located in the United States and in 236 other nations used HathiTrust in 10.92 million sessions. As of 2021, the copyright policy states that "many works in our collection are protected by copyright law, so we cannot ordinarily publicly display large portions of those protected works unless we have permission from the copyright holder", and thus "if we cannot determine the copyright or permission status of a work, we restrict access to that work until we can establish its status. Because of differences in international copyright laws, access is also restricted for users outside the United States to works published outside the United States after and including 1896." PageTurner: PageTurner is the web application on the HathiTrust website for viewing publications. From PageTurner readers can navigate through a publication, download a PDF version of it, and view pages in different ways, such as one page at a time, scrolling, flipping, or thumbnail views.
Emergency Temporary Access Service: The Emergency Temporary Access Service (ETAS) is a service provided by HathiTrust that makes it possible, in certain special situations such as the closure of a library for a public health emergency, for users of HathiTrust member libraries to obtain lawful access to copyrighted digital materials in place of the corresponding physical books held by the same library, through the controlled digital lending model.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rain sensor** Rain sensor: A rain sensor or rain switch is a switching device activated by rainfall. There are two main applications for rain sensors. The first is a water conservation device connected to an automatic irrigation system that causes the system to shut down in the event of rainfall. The second is a device used to protect the interior of an automobile from rain and to support the automatic mode of windscreen wipers. How Does a Rain Sensor Work?: Operation The rain sensor works on the principle of total internal reflection. An infrared beam is aimed at a 45-degree angle at a clear area of the windshield; the light is reflected back and sensed by a sensor inside the car. When it rains, the wet glass causes the light to scatter, and a smaller amount of light is reflected back to the sensor. An additional application, in professional satellite communications antennas, is to trigger a rain blower on the aperture of the antenna feed, to remove water droplets from the mylar cover that keeps pressurized and dry air inside the wave-guides. Irrigation sensors: Rain sensors for irrigation systems are available in both wireless and hard-wired versions, most employing hygroscopic disks that swell in the presence of rain and shrink back down again as they dry out; an electrical switch is in turn depressed or released by the hygroscopic disk stack, and the rate of drying is typically adjusted by controlling the ventilation reaching the stack. However, some electrical type sensors are also marketed that use tipping bucket or conductance type probes to measure rainfall. Wireless and wired versions both use similar mechanisms to temporarily suspend watering by the irrigation controller: specifically, they are connected to the irrigation controller's sensor terminals, or are installed in series with the solenoid valve common circuit such that they prevent the opening of any valves when rain has been sensed. Irrigation sensors: Some irrigation rain sensors also contain a freeze sensor to keep the system from operating in freezing temperatures, particularly where irrigation systems are still used over the winter. Some type of sensor is required on new lawn sprinkler systems in Florida, New Jersey, Minnesota, Connecticut and most parts of Texas. Automotive sensors: In 1958, the Cadillac Motor Car Division of General Motors experimented with a water-sensitive switch that triggered various electric motors to close the convertible top and raise the open windows of a specially-built Eldorado Biarritz model, in case of rain. The first such device appears to have been used for that same purpose in a concept vehicle designated Le Sabre and built around 1950–51. Automotive sensors: General Motors' automatic rain sensor for convertible tops was available as a dealer-installed option during the 1950s for vehicles such as the Chevrolet Bel Air. For the 1996 model year, Cadillac once again equipped cars with an automatic rain sensor, this time to automatically trigger the windshield wipers and adjust their speed to conditions as necessary. In December 2017, Tesla started rolling out an OTA update (2017.52.3) enabling their AP2.x cars to use the onboard cameras to passively detect rain without a dedicated sensor. Automotive sensors: Most vehicles with this feature have an "AUTO" position on the control column. Physics of rain sensor: The most common modern rain sensors are based on the principle of total internal reflection.
At all times, an infrared beam is directed at a 45-degree angle into the windshield from the interior. If the glass is dry, the critical angle for total internal reflection is around 42°. This value is obtained from the critical-angle formula sin(θc) = n1/n2, where n1 ≈ 1 is the approximate refractive index of air for infrared light and n2 ≈ 1.5 is the approximate refractive index of the glass, also for infrared. In that case, since the incident angle of the light (45°) exceeds the critical angle, all the light is reflected and the detector receives maximum intensity. Physics of rain sensor: If the glass is wet, the critical angle changes to around 60°, because the refractive index of water (about 1.3) is higher than that of air. In that case, because the incident angle of 45° is now below the critical angle, total internal reflection is not obtained. Part of the light beam is transmitted through the glass, and the intensity measured on reflection is lower: the system detects water and the wipers turn on.
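The two critical angles quoted above follow directly from sin(θc) = n1/n2; a short numerical check, using the approximate refractive indices given in the text:

```c
/* Critical angles for total internal reflection at the windshield.
 * Refractive indices are the approximate values from the text. */
#include <stdio.h>
#include <math.h>

static double critical_angle_deg(double n_outside, double n_glass)
{
    /* sin(theta_c) = n_outside / n_glass, valid when n_outside < n_glass */
    return asin(n_outside / n_glass) * 180.0 / acos(-1.0);
}

int main(void)
{
    double n_glass = 1.5;
    printf("Dry glass (air, n = 1.0):   theta_c = %.1f deg\n",
           critical_angle_deg(1.0, n_glass));  /* ~41.8: the 45-degree beam reflects */
    printf("Wet glass (water, n = 1.3): theta_c = %.1f deg\n",
           critical_angle_deg(1.3, n_glass));  /* ~60.1: the 45-degree beam escapes */
    return 0;
}
```

Since 45° lies between the two computed angles (roughly 41.8° and 60.1°), the same fixed-geometry beam reflects fully off dry glass but leaks through wet glass, which is exactly the intensity drop the detector uses as its rain signal.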
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nios II** Nios II: Nios II is a 32-bit embedded processor architecture designed specifically for the Altera family of field-programmable gate array (FPGA) integrated circuits. Nios II incorporates many enhancements over the original Nios architecture, making it more suitable for a wider range of embedded computing applications, from digital signal processing (DSP) to system control. Nios II is a successor to Altera's first configurable 16-bit embedded processor Nios, introduced in 2000. Key features: Like the original Nios, the Nios II architecture is a RISC soft-core architecture which is implemented entirely in the programmable logic and memory blocks of Altera FPGAs. Unlike its predecessor, it is a full 32-bit design: 32 general-purpose 32-bit registers, Full 32-bit instruction set, data path, and address space, Single-instruction 32 × 32 multiply and divide producing a 32-bit result. The soft-core nature of the Nios II processor lets the system designer specify and generate a custom Nios II core, tailored to the specific application requirements. System designers can extend the Nios II's basic functionality by, for example, adding a predefined memory management unit, or defining custom instructions and custom peripherals. Key features: Custom instructions Similar to native Nios II instructions, user-defined instructions accept values from up to two 32-bit source registers and optionally write back a result to a 32-bit destination register. By using custom instructions, system designers can fine-tune the hardware to meet performance goals, and the designer can easily invoke the instruction from C code as a macro. Key features: Custom peripherals For performance-critical systems that spend most CPU cycles executing a specific section of code, a user-defined peripheral can potentially offload part or all of the execution of a software algorithm to user-defined hardware logic, improving power efficiency or application throughput. Memory Management Unit Introduced with Quartus 8.0, the optional MMU enables Nios II to run operating systems which require hardware-based paging and protection, such as the Linux kernel. Without an MMU, Nios is restricted to operating systems which use a simplified protection and virtual memory model: e.g., µClinux and FreeRTOS. Memory Protection Unit Introduced with Quartus 8.0, the optional MPU provides memory protection similar to that provided by an MMU but with a simpler programming model and without the performance overhead associated with an MMU. Nios II CPU family: Nios II classic is offered in 3 different configurations: Nios II/f (fast), Nios II/s (standard), and Nios II/e (economy). Nios II gen2 is offered in 2 different configurations: Nios II/f (fast), and Nios II/e (economy). Nios II CPU family: Nios II/f The Nios II/f core is designed for maximum performance at the expense of core size. Features of Nios II/f include: Separate instruction and data caches (512 B to 64 KB) Optional MMU or MPU Access to up to 2 GB of external address space Optional tightly coupled memory for instructions and data Six-stage pipeline to achieve maximum DMIPS/MHz Single-cycle hardware multiply and barrel shifter Optional hardware divide option Dynamic branch prediction Up to 256 custom instructions and unlimited hardware accelerators JTAG debug module Optional JTAG debug module enhancements, including hardware breakpoints, data triggers, and real-time trace Nios II/s The Nios II/s core is designed to maintain a balance between performance and cost.
This core implementation is no longer supported in Altera Quartus II v17 and newer. Features of Nios II/s include: Instruction cache Up to 2 GB of external address space Optional tightly coupled memory for instructions Five-stage pipeline Static branch prediction Hardware multiply, divide, and shift options Up to 256 custom instructions JTAG debug module Optional JTAG debug module enhancements, including hardware breakpoints, data triggers, and real-time trace Nios II/e The Nios II/e core is designed for the smallest possible logic utilization of FPGAs. This is especially efficient for low-cost Cyclone II FPGA applications. Features of Nios II/e include: Up to 2 GB of external address space JTAG debug module Complete systems in fewer than 700 LEs Optional debug enhancements Up to 256 custom instructions Free, no license required Avalon switch fabric interface: Nios II uses the Avalon switch fabric as the interface to its embedded peripherals. Compared to a traditional bus in a processor-based system, which lets only one bus master access the bus at a time, the Avalon switch fabric, using a slave-side arbitration scheme, lets multiple masters operate simultaneously. Development processes: Development for Nios II consists of two separate steps: hardware generation and software creation. Development processes: Development is hosted inside an Altera application called the Embedded Design Suite (EDS). The EDS contains a complete integrated development environment to manage both hardware and software in two separate steps: Hardware generation process Nios II hardware designers use the Qsys system integration tool, a component of the Quartus-II package, to configure and generate a Nios system. The configuration graphical user interface (GUI) allows users to choose the Nios II's feature set, and to add peripheral and I/O blocks (timers, memory controllers, serial interfaces, etc.) to the embedded system. When the hardware specification is complete, Quartus-II performs synthesis and place & route to implement the entire system on the selected FPGA target. Development processes: Qsys is replacing the older SOPC (System-on-a-Programmable-Chip) Builder, which could also be used to build a Nios II system, and is recommended for new projects. Software creation process A separate package, called the Embedded Design Suite (EDS), manages the software development. Based on the Eclipse IDE, the EDS includes a C/C++ compiler (based on the GNU toolchain), debugger, and an instruction-set simulator. EDS allows programmers to test their application in simulation, or download and run their compiled application on the actual FPGA host. Because the C/C++ development chain is based on GCC, the vast majority of open source software for Linux compiles and runs with minimal or no modification. Third-party operating systems have also been ported to Nios II. These include Micrium MicroC/OS-II, eCos, Segger Microcontroller embOS, ChibiOS/RT, µClinux and FreeRTOS. Licensing Nios II is comparable to MicroBlaze, a competing soft-core CPU for the Xilinx family of FPGAs. Unlike MicroBlaze, Nios II is licensable for standard-cell ASICs through a third-party IP provider, Synopsys DesignWare. Through the DesignWare license, designers can port Nios-based designs from an FPGA platform to a mass-production ASIC device.
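To illustrate how a custom instruction is "invoked from C as a macro", as described above: a minimal sketch following the usual pattern is below. It assumes the Nios II GCC toolchain's __builtin_custom_* intrinsics (here __builtin_custom_inii, the integer-result, two-integer-operand form); the selector value CRC_STEP_N and the CRC-step behaviour of the hardware are hypothetical, standing in for whatever a real design's generated system header would define. It compiles only with the Nios II cross-compiler, not a host GCC.

```c
/* Sketch: wrapping a Nios II custom instruction as a C macro.
 * Assumes the Nios II GCC intrinsic __builtin_custom_inii (integer
 * result, two integer operands); the selector CRC_STEP_N and the
 * CRC-step semantics of the hardware are hypothetical. */
#define CRC_STEP_N 0  /* hypothetical custom-instruction selector from the hardware design */

/* Feed one 32-bit word into a hardware CRC unit; returns the updated CRC. */
#define CRC_STEP(crc, word) \
    ((unsigned int)__builtin_custom_inii(CRC_STEP_N, (int)(crc), (int)(word)))

unsigned int crc_of_buffer(const unsigned int *buf, int nwords)
{
    unsigned int crc = 0xFFFFFFFFu;   /* hypothetical initial value */
    for (int i = 0; i < nwords; i++)
        crc = CRC_STEP(crc, buf[i]);  /* one custom-instruction issue per word */
    return crc;
}
```

The appeal of this route is that the loop body collapses to a single instruction executed by dedicated logic; on a core configured without the instruction, the same function would have to fall back to a pure-software CRC.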
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Equid alphaherpesvirus 4** Equid alphaherpesvirus 4: Equid alphaherpesvirus 4, formerly Equine herpesvirus 4 (EHV-4), is a virus of the family Herpesviridae that causes rhinopneumonitis in horses. It is the most important viral cause of respiratory infection in foals. Like other herpes viruses, EHV-4 causes a lifelong latent infection in affected animals. These horses are usually the source of new infection for foals over two months old, weanlings, and yearlings. Symptoms include fever, loss of appetite, and discharge from the nose. Most infected animals recover in one to three weeks, but death can occur in environments with overcrowding and other stress factors. There are several vaccines available (ATCvet codes: QI05AA03 (WHO) inactivated, QI05AD01 (WHO) live, plus various combinations). Description: EHV-4 causes an upper respiratory disease restricted to infection of the respiratory tract epithelium and its associated lymph nodes. EHV-4 and its close relative EHV-1 are clinically and pathologically indistinguishable and are the primary pathogens that cause respiratory tract disease in young horses from weanling age to 2 years. The incubation period of equine herpesvirus is 2–10 days. Symptoms include fever (38.9–41.7 °C), loss of appetite, and a nasal discharge, giving it the nickname "snots". Without antibiotic treatment, the damage to the respiratory mucosal barrier predisposes infected horses to secondary infections and to involvement of the lower airways (e.g. bronchiolitis or pneumonia), increasing the duration, severity and mortality of the disease. EHV-4 rarely causes abortion in infected pregnant mares, unlike its EHV-1 counterpart. Although there is no specific treatment for the disease once a horse is infected, vaccination against EHV-1 and EHV-4 is recommended as part of preventative herd health for those at high risk of infection. Multiple vaccines are available (Duvaxyn EHV1,4, EquiGuard, EquiVac EHV-1/4, etc.), most in an inactivated virus form. Equine herpesvirus occupies the horse in such a way that the virus persists over the lifetime of the animal after infection. These carrier horses may comprise up to half of a given horse population. Therefore, recommended management practices for controlling EHV include isolating incoming horses for 3–4 weeks before co-mingling with resident horses and pregnant mares, and reducing stress to prevent the reappearance of a latent virus; if EHV appears, affected horses should be isolated and disinfection of the contaminated premises should commence. (EHV has a large genome (150 kb) which is enclosed in a relatively fragile capsule. This limits the virus's survival in the external environment and makes it highly susceptible to common disinfectants.) After an outbreak, no horse should leave the premises for three weeks after the final clinical case recovers. Effective prevention measures, quick diagnosis, therapeutic intervention and the ability to control the spread in the case of an outbreak all allow for the management of EHV.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hayashi rearrangement** Hayashi rearrangement: The Hayashi rearrangement is the chemical reaction of ortho-benzoylbenzoic acids catalyzed by sulfuric acid or phosphorus pentoxide. This reaction proceeds through electrophilic acylium ion attack with a spiro intermediate.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**.ve** .ve: .ve is the Internet country code top-level domain (ccTLD) for Venezuela. On 3 March 2009, the ISO 3166-1 code for Venezuela changed to reflect the VE used for the ccTLD. .ve: Registrations are allowed without restrictions, only at the third level: .arts.ve - artistic and cultural institutions .co.ve - a website originally ".com" ported to Venezuelan Spanish .com.ve - Venezuelan commercial entities .info.ve - informational sites .net.ve - network service providers .org.ve - non-profit organizations .radio.ve - radio stations .web.ve - individuals. The following second-level domains allow restricted third-level domain registrations: .gob.ve / .gov.ve - government-related websites .edu.ve - Venezuela-based educational institutions .int.ve - international institutions .mil.ve - Venezuelan military institutions .tec.ve - University of Technology. Internationalized domain names are available using the following Spanish characters: á, é, í, ó, ú, ü, and ñ. A number of second-level domain names are in place, e.g., cha.ve, internet.ve, ipv6.ve, nic.ve
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Triethylammonium acetate** Triethylammonium acetate: Triethylammonium acetate is a volatile salt, which is often used as an ion-pairing reagent in high-performance liquid chromatography separations of oligonucleotides. Since unadjusted triethylammonium acetate salt solutions contain neither conjugate acid nor conjugate base, they are not buffers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Earth battery** Earth battery: An earth battery is a pair of electrodes made of two dissimilar metals, such as iron and copper, which are buried in the soil or immersed in the sea. Earth batteries act as water-activated batteries. If the plates are sufficiently far apart, they can tap telluric currents. Earth batteries are sometimes referred to as telluric power sources and telluric generators. History: One of the earliest examples of an earth battery was built by Alexander Bain in 1841 in order to drive a prime mover, a device that transforms the flow or changes in pressure of a fluid into mechanical energy. Bain buried plates of zinc and copper in the ground about one meter apart and used the resulting voltage, of about one volt, to operate a clock. Carl Friedrich Gauss, who had researched Earth's magnetic field, and Carl August von Steinheil, who built one of the first electric clocks and developed the idea of an "Earth return" (or "ground return"), had previously investigated such devices. History: Daniel Drawbaugh received U.S. Patent 211,322 for an Earth battery for electric clocks (with several improvements in the art of Earth batteries). Another early patent was obtained by Emil Jahr (U.S. Patent 690,151, Method of utilizing electrical Earth currents). In 1875, James C. Bryan received U.S. Patent 160,152 for his Earth Battery. In 1885, George Dieckmann received US patent U.S. Patent 329,724 for his Electric Earth battery. In 1898, Nathan Stubblefield received U.S. Patent 600,457 for his electrolytic coil battery, which was a combination of an earth battery and a solenoid. (For more information see US patents 155209, 182802, 495582, 728381, 3278335, 3288648, 4153757 and 4457988.) The Earth battery, in general, generated power for early telegraph transmissions and formed part of a tuned circuit that amplified the signalling voltage over long distances. Operation and use: The simplest earth batteries consist of conductive plates of different metals of the electropotential series, buried in the ground so that the soil acts as the electrolyte in a voltaic cell. As such, the device acts as a primary cell. When operated only as electrolytic devices, the devices were not continuously reliable, owing to drought conditions. These devices were used by early experimenters as energy sources for telegraphy. However, in the process of installing long telegraph wires, engineers discovered that there were electrical potential differences between most pairs of telegraph stations, resulting from natural electrical currents (called telluric currents) flowing through the ground. Some early experimenters did recognize that these currents were, in fact, partly responsible for the earth batteries' high outputs and long lifetimes. Later, experimenters would utilize these currents alone and, in these systems, the plates became polarized. Operation and use: It had long been known that continuous electric currents flowed through the solid and liquid portions of the Earth, and the collection of current from an electrically conductive medium in the absence of electrochemical changes (and in the absence of a thermoelectric junction) was established by Lord Kelvin. Lord Kelvin's "sea battery" was not a chemical battery. Lord Kelvin observed that such variables as the placement of the electrodes in the magnetic field and the direction of the medium's flow affected the current output of his device. Such variables do not affect battery operation.
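For a sense of scale, the roughly one volt Bain obtained from buried zinc and copper plates is what the electropotential series predicts for an ideal zinc-copper cell. A minimal sketch using standard electrode potentials (textbook standard-state values; actual soil chemistry will shift them):

```c
/* Ideal open-circuit voltage of a zinc-copper earth battery, from
 * standard electrode potentials (25 C, standard-state values; real
 * soil conditions will shift these figures). */
#include <stdio.h>

int main(void)
{
    double e_copper = 0.34;   /* Cu2+/Cu standard electrode potential, V */
    double e_zinc   = -0.76;  /* Zn2+/Zn standard electrode potential, V */

    /* Cell EMF = cathode potential - anode potential */
    double emf = e_copper - e_zinc;
    printf("Ideal Zn-Cu cell EMF: %.2f V\n", emf);  /* ~1.10 V, close to Bain's ~1 V */
    return 0;
}
```

The further apart two metals sit in the electropotential series, the larger this difference, which is the same rule the article states below for maximizing the current of an earth battery.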
When metal plates are immersed in a liquid medium, energy can be obtained by several methods, including (but not limited to) those used in magnetohydrodynamic generators. In the various experiments by Lord Kelvin, metal plates were symmetrically perpendicular to the direction of the medium's flow and were carefully placed with respect to a magnetic field, which differentially deflected electrons from the flowing stream. The electrodes can be asymmetrically oriented with respect to the source of energy, though. Operation and use: To obtain the natural electricity, experimenters would thrust two metal plates into the ground at a certain distance from each other in the direction of a magnetic meridian, or astronomical meridian. The stronger currents flow from south to north. This phenomenon possesses a considerable uniformity of current strength and voltage. As the Earth currents flow from south to north, electrodes are positioned, beginning in the south and ending in the north, to increase the voltage at as large a distance as possible. In many early implementations, the cost was prohibitive because of an over-reliance on extreme spacing between electrodes. Operation and use: It has been found that all the common metals behave relatively similarly. The two spaced electrodes, having a load in an external circuit connected between them, are disposed in an electrical medium, and energy is imparted to the medium in such a manner that "free electrons" in the medium are excited. The free electrons then flow into one electrode to a greater degree than into the other electrode, thereby causing electric current to flow in the external circuit through the load. The current flows from that plate whose position in the electropotential series is near the negative end (such as palladium). The current produced is highest when the two metals are most widely separated from each other in the electropotential series, and when the material nearer the positive end is to the north, while that at the negative end is towards the south. The plates, one of copper and the other of iron or carbon, are connected above ground by means of a wire with as little resistance as possible. In such an arrangement, the electrodes are not appreciably chemically corroded, even when they are in earth saturated with water, and are connected together by a wire for a long time. It had been found that to strengthen the current, it was most advantageous to drive the northerly electropositive electrode deeper into the medium than the southerly electrode. The greatest currents and voltages were obtained when the difference in depth was such that a line joining the two electrodes was in the direction of the magnetic dip, or magnetic inclination. When the previous methods were combined, the current was tapped and utilized in any well-known manner. In some cases, a pair of plates with differing electrical properties, and with suitable protective coatings, were buried below the ground. A protective or other coating covered each entire plate. A copper plate could be coated with powdered coke, a processed carbonaceous material. To a zinc plate, a layer of felt could be applied. To use the natural electricity, earth batteries fed electromagnets (the load) that were part of a motor mechanism. References and articles: General information Park Benjamin and Melvin L. Severy, The Voltaic Cell: Its Construction and Its Capacity. Wiley, 1893. 562 pages. pp. 317–319. George Milton Hopkins, Experimental Science: Elementary, Practical and Experimental Physics.
Munn & Co., 1902. pp. 437–451. Frederick Collier Bakewell, Electric science, its history, phenomena and applications. 1853. pp. 182–184. James Napier, A manual of electro-metallurgy. 1876. pp. 48–49. William Edward Armytage Axon, The Mechanic's Friend. Trübner, 1875. 339 pages. pp. 303–304. Adolph A. Fesquet, Oliver Byrne, and John Percy, The Practical Metal-worker's Assistant. H.C. Baird & Co., 1878. 683 pages. pp. 529–530. Eugenii Katz, "Alexander Bain". The history of electrochemistry, electricity and electronics; Biosensors & Bioelectronics. Vassilatos, Gerry, "An Introduction to the Mysteries of Ground Radio". Burns, R. W., "Alexander Bain, a most ingenious and meritorious inventor". Engineering Science and Education Journal, Volume 2, Issue 2, Apr 1993. pp. 85–93. ISSN 0963-7346 R. J. Edwards G4FGQ, Measurement of Soil Resistivity & Calculation of Earth Electrode Resistance. 15 February 1998 The Gentleman's magazine. (1731). London: [s.n.]. p. 587. Spencer W. Richardson, "The Flow of Electricity through Dielectrics". Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, Vol. 92, No. 635 (Nov. 1, 1915), pp. 101–107. John Patterson Abernethy, The Modern Service of Commercial and Railway Telegraphy. 1887. 423 pages. p. 72. William Dwight, Whitney Dictionary: An Encyclopedic Lexicon of the English Language. p. 1405. Thomas Dixon Lockwood, Electricity, Magnetism, and Electric Telegraphy. D. Van Nostrand Co., 1883. 375 pages. p. 42. Eliot, Samuel (1911). "'Thomas Dixon Lockwood' (and 'John Davis Long', 'Henry Cabot Lodge')". Biographical Massachusetts; Biographies and Autobiographies of the Leading Men in the State, Volume 1. Boston: Massachusetts Biographical Society. OCLC 8185704. Edwin James Houston, A Dictionary of Electrical Words, Terms and Phrases. P.F. Collier & Son, 1903. p. 756. Henry Minchin, Student's Text-book of Electricity. Lockwood, 1867. 519 pages. pp. 477–485. (Alternative copy) Vassilatos, G. (2000). Lost science. Kempton, Ill: Adventures Unlimited. "Telluric Currents: The Natural Environment and Interactions with Man-made Systems". The Earth's Electrical Environment (1986), Commission on Physical Sciences, Mathematics, and Applications. Prescott, G. B. (1860). History, theory, and practice of the electric telegram. Boston: Ticknor and Fields. 468 pages. Citations and notes Patents A. Bain, "U.S. Patent 5,957 Copying surfaces by electricity". A. Bain, "U.S. Patent 6,328 Improvements in electric telegraphs". W. P. Piggot, "U.S. Patent 050,314 Telegraph cable". W. D. Snow, "U.S. Patent 155,209 Earth-batteries for generating electricity". J. Cerpaux, "U.S. Patent 182,802 Electric piles". Daniel Drawbaugh, "U.S. Patent 211,322 Earth battery for electric clocks". M. Emme, "U.S. Patent 495,582 Ground generator of electricity". M. Emme, "U.S. Patent 728,381 Storage Battery". Jahr, Emil, "U.S. Patent 690,151 Method of utilizing electrical earth currents". Bryan, James C., "U.S. Patent 160,151 Improvements in lightning rods". Bryan, James C., "U.S. Patent 160,152 Earth Battery". February 23, 1875. Bryan, James C., "U.S. Patent 160,154 Improvements in lightning rods". James M. Dices, "U.S. Patent 2,806,895 Immersion type battery". Dieckmann, George F., "U.S. Patent 329,724 Electric Earth Battery". November 3, 1885. Stubblefield, Nathan, "U.S. Patent 600,457 Electric battery". May 8, 1898. William T. Clark, "U.S. Patent 4,153,757 Method and apparatus for generating electricity". Ryeczek, "U.S. 
Patent 4,457,988 Earth battery". July 3, 1984. Further reading Lamont, J. V., Der Erdstrom und der Zusammenhang desselben mit dem Erdmagnetismus. Leopold-Voss-Verlag, Leipzig und Muenchen, 1862. (Tr., Telluric currents and their relationship to geomagnetism) Weinstein, Electrotechnische Zeitschrift. 1898, p. 794. (Tr., Electrotechnical magazine) John Timbs, The Year-book of Facts in Science and Art. 1868. p. 130. Journal of the Telegraph. Western Union Telegraph Co., 1914.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Flag of Chicago** Flag of Chicago: The flag of Chicago consists of two light blue horizontal bars, or stripes, on a field of white, each bar one-sixth the height of the full flag, and placed slightly less than one-sixth of the way from the top and bottom. Four bright red stars, with six sharp points each, are set side by side, close together, in the middle third of the flag's surface.Chicago is a city in Illinois, United States. Its flag was adopted in 1917 after the design by Wallace Rice won a City Council sponsored competition. It initially had two stars until 1933, when a third was added. The four-star version has existed since 1939. The three sections of the white field and the two bars represent geographical features of the city, the stars symbolize historical events, and the points of the stars represent important virtues or concepts. The historic events represented by the stars are the establishment of Fort Dearborn, Great Chicago Fire of 1871, World's Columbian Exposition of 1893, and Century of Progress Exposition of 1933–34. Flag of Chicago: In a review by the North American Vexillological Association of 150 American city flags, the Chicago city flag was ranked second-best with a rating of 9.03 out of 10, behind only the flag of Washington, D.C. Symbolism: Bars The three white background areas of the flag represent, from top to bottom, the North, West, and South sides of the city. The top blue bar represents Lake Michigan and the North Branch of the Chicago River. The bottom blue bar represents the South Branch of the river and the "Great Canal", over the Chicago Portage. The light blue of the flag's two bars is variously called sky blue or pale blue; in a 1917 article of a speech by designer Wallace Rice, it was called "the color of water". Symbolism: Stars There are four red six-pointed stars on the center white bar. Six-pointed stars are used because five-pointed stars represent sovereign states and because the star as designed was found on no other known flags as of 1917. From the hoist outwards, the stars represent: Added in 1939: Commemorates Fort Dearborn, and its six points stand for political entities the Chicago region has belonged to and the flags that have flown over the area: France, 1693; Great Britain, 1763; Virginia, 1778; the Northwest Territory, 1789; Indiana Territory, 1802; and Illinois (territory, 1809, and state, since 1818). Symbolism: Original to the 1917 flag: This star stands for the Great Chicago Fire of 1871. Its six points represent the virtues of religion, education, aesthetics, justice, beneficence, and civic pride. Original to the 1917 flag: This star symbolizes the World's Columbian Exposition of 1893. Its six points symbolize transportation, labor, commerce, finance, populousness, and salubrity (health). Symbolism: Added in 1933: This star represents the Century of Progress Exposition (1933–34). Its points refer to: Chicago's status as the United States' second largest city at the time of the star's addition (Chicago became third largest in a 1990 census when passed by Los Angeles); Chicago's Latin motto, Urbs in horto ("City in a garden"); Chicago's "I Will" motto; the Great Central Marketplace; Wonder City; and Convention City.Additional stars have been proposed, with varying degrees of seriousness. 
The following reasons have been suggested for the possible addition of a fifth star: A fifth star could represent Chicago's contribution to the nuclear age (see Metallurgical Laboratory), an idea first suggested in a 1940s letter published by the Chicago Tribune and later championed by Mayor Richard J. Daley in the 1960s. Symbolism: In the 1980s, a star was proposed in honor of Harold Washington, the first African-American mayor of Chicago. The 1992 Chicago flood was suggested as an additional disaster deserving of a star, in line with the existing star for the 1871 Great Chicago Fire. In the early 1990s, a group of Chicago real estate professionals proposed another fifth star to represent Chicago's entrepreneurial spirit. When Chicago was bidding to host the 2016 Summer Olympics, the Bid Committee proposed that a fifth star be added to the flag in commemoration, but the bid was won instead by Rio de Janeiro, Brazil. Anne Burke, Tim Shriver, and others have proposed adding a fifth star to commemorate the Special Olympics, which were founded in Chicago. Other sports-related suggestions include recognizing the Chicago Bulls' dominance of the National Basketball Association in the 1990s and a proposal for a fifth star if the Chicago Cubs should ever win the World Series, which did not happen during their long championship drought between 1908 and 2016. The Chicago History Museum has an ongoing exhibition where the public is encouraged to vote for a potential fifth star. Chicago Mayor Lori Lightfoot suggested that Chicago's response to the COVID-19 pandemic could warrant adding a fifth star to Chicago's flag. Unlawful private use: Per the Municipal Code of Chicago, it is unlawful to use the flag, or any imitation or design thereof, except for the usual and customary purposes of decoration or display. Causing any letter, word, legend, or device not provided for in the Code to be displayed on the flag is also prohibited. Violators are subject to fines between $5.00 and $25.00 for each offense. However, the First Amendment to the United States Constitution prohibits this section from being enforced (U.S. v. Eichman). History: In 1915, Mayor William Hale Thompson appointed a municipal flag commission chaired by Alderman James A. Kearnes. Among the commission members were wealthy industrialist Charles Deering and impressionist painter Lawton S. Parker. Parker asked lecturer and poet Wallace Rice to develop the rules for an open public competition for the best flag design. Over a thousand entries were received. The 318th Cavalry Regiment incorporated the flag into its insignia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Whipping knot** Whipping knot: A whipping knot or whipping is a binding of marline twine or whipcord around the end of a rope to prevent its natural tendency to fray. Some whippings are finished cleanly, as by drawing the bitter end of the cordage beneath the whipping itself. Others are tied off or have the end(s) of the twine sewn through the rope. According to The Ashley Book of Knots, "The purpose of a whipping is to prevent the end of a rope from fraying ... A whipping should be, in width, about equal to the diameter of the rope on which it is put ... [Two sailmaker's whippings], a short distance apart, are put in the ends of every reef point, where the constant "whipping" against the sail makes the wear excessive; this is said to be the source of the name whipping." The other type of stopping knot is a seizing knot. Whipping knot: Whipping is suitable for synthetic and natural stranded and braided lines, including 3-strand rope, 4-strand cable and 8-strand multiplait, as well as concentric and braided constructions. Tying: Multiple turns of twine (sometimes called small stuff for smaller lines) or heavier whipcord (for large diameter cables and ropes) are tightly wrapped around a rope's cut end to prevent its fibers from unlaying. Tying: Usually one end of the whipping cord is looped along the rope to be whipped, and the remaining cord wound tightly over the loop. Finally the loose end of the wound whipping is passed through the loop so that both ends may be drawn securely inside the winding.Whippings may also be applied by hand or using a palm and needle, and either simply tied off or made neat and permanent by reeving the twine's cut ends into or behind the whipping, sewing them to adjacent strands, or through the rope itself. Tying: In applications where a lot of flexing is expected, the whipping may be impregnated with dilute spar varnish or superglue. Types: French whipping French whipping is merely a series of half hitches. Start with a running eye and finish up with the end tucked back under the last few hitches. The ridge of the hitches should follow the lay of the rope. French whipping is a whipping knot that consists of a series of half hitches. It is used to stop unraveling of rope ends as well as to provide a grip over railings. Portuguese whipping Portuguese whipping is the quickest of all to apply; the ends are merely reef knotted together. It is given by Esparteiro in his Dicionario de Marinharia (Lisboa, 1936). The Portuguese whipping is a type of whipping knot. To make it you take the small diameter string and lay one end against the rope. Wrap backwards up the rope until you have both ends side by side, finish by tying a reef knot. This is the quickest of the seizings, but is not as secure as some. Alternatives: Constrictor knot A constrictor knot can be used temporarily to hold the fibres of a cut line until a final whipping can be applied. Tape Several turns of self-adhesive plastic tape may form a temporary or emergency substitute for whipping. Alternatives: Fusion The ends of some man-made fibers such as Dacron, Nylon, polyethylene, polyester, and polypropylene (but not aramid fibers) may be melted to fuse their fibers to prevent fraying. However, the rope and knotting expert Geoffrey Budworth warns against this practice for boat operators thus: Sealing rope ends this way is lazy and dangerous. 
A tugboat operator once sliced the palm of his hand open down to the sinews after the hardened (and obviously sharp) end of a rope that had been heat-sealed pulled through his grasp. There is no substitute for a properly made whipping. Alternatives: Among the methods of fusing are using an electrically heated rope cutter, heating the blade of a knife, or melting cut ends in a flame. The cool (transparent) part of a butane lighter flame works best. It is helpful to wrap the end of a line to be fused with several turns of plastic tape first. The finished end will be neater and narrower if a cut is made through the tape. Back splice Back splicing uses a stranded rope's own fibres to prevent fraying. A back splice adds extra thickness to the rope end, preventing it from running through blocks and sheaves. It can also be of benefit when a user needs to feel the end of the rope, as on a bucket lanyard. Liquid Liquid whipping is a semi-permanent rubbery coating applied by dipping the cut end of a line into a container of the product. When the coating sets it is flexible but solid enough to keep the rope together. Liquid whipping can be used on both natural and synthetic fibers. Aglet An aglet is a permanent ending applied mechanically to bind the end of the rope. A typical example is the plastic aglet at the end of a shoelace. Metal aglets may be crimped onto ropes or cables. Aglets may also be made by melting a softer metal to cap the end of the cable.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chain Bridge** Chain Bridge: Chain bridges are suspension bridges built with chains. Some chain bridges built using this design have retained the name Chain Bridge. Thus as a proper noun, it may refer to: In Hungary: Chain Bridge (Budapest), a bridge over the Danube in Budapest, Hungary (completed 1849)In Germany: Chain Bridge (Nuremberg), a pedestrian bridge over the river Pegnitz in Nuremberg, Bavaria (opened 1924)In the United Kingdom: Union Bridge (Tweed), a bridge over the River Tweed between England and Scotland (opened 1820) Chain Bridge, a bridge over the River Usk in Monmouthshire, Wales (opened 1829) Chain Bridge (Berwyn), a bridge over the River Dee at Berwyn, Llangollen, Denbighshire, North Wales (completed 1818)In the United States: Chain Bridge (Easton, Pennsylvania), a historic change bridge spanning the Lehigh River (completed 1857) Chain Bridge (Potomac River) a bridge at the Little Falls of the Potomac River in Washington, D.C. (completed 1808) Chain Bridge (Massachusetts), a bridge which crosses the Merrimack River, connecting Amesbury and Newburyport, Massachusetts (completed 1810)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mod n cryptanalysis** Mod n cryptanalysis: In cryptography, mod n cryptanalysis is an attack applicable to block and stream ciphers. It is a form of partitioning cryptanalysis that exploits unevenness in how the cipher operates over equivalence classes (congruence classes) modulo n. The method was first suggested in 1999 by John Kelsey, Bruce Schneier, and David Wagner and applied to RC5P (a variant of RC5) and M6 (a family of block ciphers used in the FireWire standard). These attacks used the properties of binary addition and bit rotation modulo a Fermat prime. Mod 3 analysis of RC5P: For RC5P, analysis was conducted modulo 3. It was observed that the operations in the cipher (rotation and addition, both on 32-bit words) were somewhat biased over congruence classes mod 3. To illustrate the approach, consider left rotation by a single bit: X <<< 1 = 2X if X < 2³¹, and X <<< 1 = 2X + 1 − 2³² if X ≥ 2³¹. Then, because 2³² ≡ 1 (mod 3), it follows that X <<< 1 ≡ 2X (mod 3). Mod 3 analysis of RC5P: Thus left rotation by a single bit has a simple description modulo 3. Analysis of other operations (data dependent rotation and modular addition) reveals similar, notable biases. Although there are some theoretical problems analysing the operations in combination, the bias can be detected experimentally for the entire cipher. In (Kelsey et al., 1999), experiments were conducted up to seven rounds, and based on this they conjecture that as many as 19 or 20 rounds of RC5P can be distinguished from random using this attack. There is also a corresponding method for recovering the secret key. Mod 3 analysis of RC5P: Against M6 there are attacks mod 5 and mod 257 that are even more effective.
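The congruence above is easy to check numerically. The following short C program (an illustrative check, not part of the original analysis; the sampling loop and names are ours) verifies that a one-bit left rotation of a 32-bit word agrees with multiplication by 2 modulo 3:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Left-rotate a 32-bit word by one bit. */
static uint32_t rotl1(uint32_t x) {
    return (x << 1) | (x >> 31);
}

int main(void) {
    /* Spot-check rot(x) ≡ 2x (mod 3) on pseudo-random words.  Because
       2^32 ≡ 1 (mod 3), the bit that wraps around does not change the
       residue class of the result. */
    srand(1);
    for (long i = 0; i < 5000000; i++) {
        uint32_t x = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
        uint32_t lhs = rotl1(x) % 3;
        uint32_t rhs = (uint32_t)(((uint64_t)x * 2u) % 3);
        if (lhs != rhs) {
            printf("counterexample: x = %u\n", x);
            return 1;
        }
    }
    printf("rot(x) mod 3 matched 2x mod 3 for all sampled words\n");
    return 0;
}
```

The same residue bookkeeping is what the attack extends, with more care, to data-dependent rotations and modular additions across the full cipher.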
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heinz Billing Prize** Heinz Billing Prize: In 1993, the Heinz Billing Prize for the advancement of scientific computation was presented for the first time. The aim of this award is to honour the achievements of those who have spent time and effort developing the hardware and software crucial for scientific advances. It is the purpose of the award to honour outstanding scientific contributions in all areas of computational science, specifically: Modelling and computer simulation Design of user interfaces based on new scientific findings Data handling and data analysis procedures Scientific visualization of data and processes Previous Winners: 1993 Dr. Hans Thomas Janka, Dr. Ewald Müller, Dr. Maximilian Ruffert 1994 Dr. Rainer Goebel 1995 Dr. Ralf Giering 1996 Dr. Klaus Heumann 1997 Dr. Florian Mueller 1998 Prof. Dr. Edward Seidel 1999 Dr. Alexander Pukhov 2000 Dr. Oliver Kohlbacher 2001 Dr. Jörg Haber 2002 Dipl. Ing. Daan Broeder, Dr. Hennie Brugman and Dipl. Ing. Reiner Dirksmeyer 2003 Dipl. Phys. Roland Chrobok, Dr. Sigurður F. Hafstein and Dipl. Phys. Andreas Pottmeier 2004 Dr. Markus Rampp and Dr. Thomas Soddemann 2005 Dr. Patrick Jöckel and Dr. Rolf Sander 2006 Rafal Mantiuk 2007 Axel Fingerle and Klaus Röller/ Hannah Bast and Stefan Funke 2011 Peter Wittenburg 2013 Thomas Hrabe 2015 Andreas Brandmeier 2017 Dr. Christian Schulz 2019 Tim Dietrich 2021 Adam Runions
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Adjusted Peak Performance** Adjusted Peak Performance: Adjusted Peak Performance (APP) is a metric introduced by the U.S. Department of Commerce's Bureau of Industry and Security (BIS) to more accurately predict the suitability of a computing system to complex computational problems, specifically those used in simulating nuclear weapons. This is used to determine the export limitations placed on certain computer systems under the Export Administration Regulations 15 CFR. Adjusted Peak Performance: Further details can be found in the document "Practitioner's Guide To Adjusted Peak Performance".The (simplified) algorithm used to calculate APP consists of the following steps: Determine how many 64 bit (or better) floating point operations every processor in the system can perform per clock cycle (best case). This is FPO(i). Determine the clock frequency of every processor. This is F(i). Choose the weighting factor for each processor: 0.9 for vector processors and 0.3 for non-vector processors. This is W(i). Adjusted Peak Performance: Calculate the APP for the system as follows: APP = FPO(1) * F(1) * W(1) + ... + FPO(n) * F(n) * W(n).The metric was introduced in April 2006 to replace the Composite Theoretical Performance (CTP) metric which was introduced in 1993. APP was itself replaced in November 2007 when the BIS amended 15 CFR to include the December 2006 Wassenaar Arrangement Plenary Agreement Implementation's new metric - Gigaflops (GFLOPS), one billion floating point operations per second, or TeraFLOPS, one trillion floating point operations per second. Adjusted Peak Performance: The unit of measurement is Weighted TeraFLOPS (WT) to specify Adjusted Peak Performance (APP). The weighting factor is 0.3 for non-vector processors and 0.9 for vector processors. For example, a PowerPC 750 running at 800 MHz would be rated at 0.00024 WT due to being able to execute one floating point instruction per cycle and not having a vector unit. Note that only 64 bit (or wider) floating point instructions count. Notes: Processors without 64 bit (or better) floating point support have an FPO of zero. The current APP limit is 0.75 WT.
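As a sketch of the simplified algorithm above (the struct layout and function names here are illustrative and not taken from the BIS documents), the calculation can be written as:

```c
#include <stdio.h>

/* One entry per processor in the system. */
struct proc {
    double fpo;   /* 64-bit (or wider) FP operations per clock cycle */
    double f_hz;  /* clock frequency in Hz                           */
    double w;     /* weighting: 0.9 for vector, 0.3 for non-vector   */
};

/* Simplified APP in Weighted TeraFLOPS (WT):
   sum of FPO(i) * F(i) * W(i), converted from FLOPS to TeraFLOPS. */
static double adjusted_peak_performance(const struct proc *p, int n) {
    double flops = 0.0;
    for (int i = 0; i < n; i++)
        flops += p[i].fpo * p[i].f_hz * p[i].w;
    return flops / 1e12;
}

int main(void) {
    /* The PowerPC 750 example from the text: one 64-bit FP operation
       per cycle, 800 MHz, non-vector weighting 0.3. */
    struct proc ppc750 = { 1.0, 800e6, 0.3 };
    printf("APP = %.5f WT\n", adjusted_peak_performance(&ppc750, 1));
    return 0;
}
```

Running this reproduces the 0.00024 WT figure quoted for the PowerPC 750.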
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**XHP** XHP: XHP is an augmentation of PHP and Hack developed at Meta (formerly known as Facebook) to allow XML syntax for the purpose of creating custom and reusable HTML elements. It is available as an open-source software GitHub project and as a Homebrew module for PHP 5.3, 5.4, and 5.5. Meta also developed a similar augmentation for JavaScript, named JSX. Origins: XHP was loosely inspired by ECMAScript for XML and created by Marcel Laverdet. It was first developed for Facebook Lite as a new UI rendering layer but was later ported over to Facebook's www and mobile web stack as well as incorporated into HipHop for PHP. It was made available to the public in February 2010 and until 2020 accounted for nearly all of Facebook app's server-side generated HTML.In 2020, Facebook redesigned its primary web app to run mostly on React components, rendered both server and client-side. XHP is still used in parts of Facebook but is a legacy technology now being phased out. Benefits: XHP offers a much cleaner interface to UI programming when outputting HTML in PHP, but has some engineering advantages as well. Parse-time validation of HTML syntax XHP validates the syntax and structure of the entire document tree on render and will throw an exception if an element was not closed properly, has invalid children, has an invalid attribute, or is missing required children or attributes. Automatic XSS protection Because all rendering to the page is done inside XHP, and it knows what is HTML and what is content, XHP escapes all content without any special effort from the programmer. Object mutation XHP objects are stored as standard PHP objects, so they can be manipulated through a DOM-like API, which includes methods such as setAttribute(), getAttribute(), appendChild(), and several others prior to or during render. Custom HTML Instead of writing functions to generate HTML, or switching in and out of PHP, custom XHP elements can be defined and mixed in with standard HTML elements that will abstract out common HTML structures.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Table of organization and equipment** Table of organization and equipment: A table of organization and equipment (TOE or TO&E) is the specified organization, staffing, and equipment of units. Also used in acronyms as 'T/O' and 'T/E'. It also provides information on the mission and capabilities of a unit as well as the unit's current status. Table of organization and equipment: A general TOE is applicable to a type of unit (for instance, an infantry battalion) rather than a specific unit (the 2nd Battalion, 4th Infantry Regiment). Sometimes, all units of the same branch (such as Infantry) follow the same structural guidelines; much more often, there are a wide variety of TOEs to suit specific circumstances (Modified Tables of Organization and Equipment (MTOEs), in the United States Army, for example). Soviet Union and Russia: In the Soviet and the Russian Armed Forces the term used for TO&E since the 1930s is "Shtatnoe raspisanie" (Штатное расписание, literally translated as Shtat Prescription). It originates from the term "Shtat" (штат) which is used primarily to denote manpower and in a secondary meaning as the synonym for TO&E itself. Note that in the Soviet Union and modern day Russia the term "Shtatnoe raspisanie" applied not only to military unit, but also to state organisations such as ministries, agencies, universities, hospitals etc. and even to the corporate structure of private companies. Soviet Union and Russia: Many of the Red Army's rifle divisions at the beginning of Operation Barbarossa were operating on Shtat 04/400 of 5 April 1941. This Shtat stipulated that an infantry division should consist of three infantry regiments, a light and a howitzer artillery regiment, other artillery units, a reconnaissance battalion, a combat engineer battalion, signals, chemical company (decontamination/flamethrower), transport, medical, and logistics train units, an aviation flight, and a division staff seemingly consisting of the division commander (1/0/0), division staff (70, including 12 horses and 13 vehicles), a quartermaster section of five officers (5/0/0), a military tribunal (military justice) of two officers, and a political section of 11 officers. Soviet Union and Russia: Soviet rifle divisions were often forced to operate at far below their authorised strengths. For example, in the middle of the fighting on the Eastern Front, on July 20, 1942, a report on the 284th Rifle Division lamented: In the division there are 3,172 military servicemen; a batch of replacements numbering 1,312 men has arrived and another 2,000... are expected, but in the division there are only a total of 1,921 rifles, 98 [semi-]automatic rifles and 202 PPSh submachine guns... There are 21 motorized vehicles in the division, but according to the shtat there should be 114. There are just 7 heavy machine guns, but according to the shtat 108 are necessary. 47 light machine guns, but according to the shtat there should be 350. 36 anti-tank rifles, but 277 according to the shtat. The division's separation from its supply base extends up to 100 kilometres and aggravates the supply [of] food. Soviet Union and Russia: The commissar, Tkachenko, went on to urgently request vehicles (including ambulances, of which there were none), small arms and support weapons, draught horses, and a closer supply base. 
After the first day of fighting he further reported that the lack of high-explosive shells forced the artillery to fire armor-piercing rounds at enemy firing points and troops; there were no cartridges for the submachine guns; many of the men's uniforms and footwear were worn out; and it was impossible to commit the replacements into the fighting because of the lack of weapons. United States: Army In the U.S. Army, there are four basic types of TOEs: The Base Table of Organization and Equipment (BTOE) An organizational design document based on current doctrine and available equipment. It shows the basics of a unit's structure and their wartime requirements (both for personnel and equipment). The Objective Table of Organization and Equipment (OTOE) An updated form of the BTOE, usually formed within the last year. It is a fully modern document and is up to date with current policies and initiatives. A Modified Table of Organization and Equipment (MTOE) A document that modifies a BTOE in regard to a specific unit. Used when a unit's needs are substantially different from the BTOE. A Table of Distribution and Allowances (TDA) A type of temporary TOE that is applicable to a specific mission. Used in an instance when there is no applicable TOE.Each TOE has a unique number that identifies it. When changes are needed, a table is not modified, instead, a new table is drafted from scratch. United States: An example of an overall T/O change can be seen when the "Pentomic" organization was superseded by the Reorganization Objective Army Division (ROAD). During the 1950s, the Pentomic reorganization shifted the basic tactical unit from the regiment to the five-company battle group. Instead of brigades, an armored division had three Combat Commands designated: CCA, CCB, and CCC. On 16 December 1960, the Army Chief of Staff directed a reappraisal of division organization. Resulting studies were carried out between January and April 1961, and fully implemented by 1965. The resulting Reorganization of Army Divisions (ROAD) changed all division types (Mechanized, Airborne, Armor, Infantry and Cavalry) to an identical structure of three brigades of three (sometimes four) battalions. The ROAD division consisted of a mix of nine to twelve armor and infantry battalions based on its Mission, the likely Enemy, the Terrain/weather, and other forces available or Troops (METT). Each brigade would be assigned or attached the mix of battalions and companies based on the division commanders estimate based on METT. As operations continued, the division commander could task organize subordinate units as needed by the flow of the battle. United States: Marine Corps Marine T/O&Es are based on a generic template for each specific type and size of unit, for example, a weapons company of an infantry battalion, or a heavy helicopter squadron. These templates are then modified as needed by the individual unit. The Marine Corps also relies on other documents to report what personnel and equipment a unit actually possesses. United States: The T/O section denotes every authorized billet within a unit by rank and Military Occupational Specialty required to fulfill the necessary duties. The T/E section denotes authorized equipment by Line Item Number and quantity. External links and sources: U.S. Army TOE TRADOC Regulation 71-15 What is a TOE (WWII example)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**6-Chloronicotine** 6-Chloronicotine: 6-Chloronicotine is a drug which acts as an agonist at neural nicotinic acetylcholine receptors. It substitutes for nicotine in animal studies with around twice the potency, and shows antinociceptive effects.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Range of a function** Range of a function: In mathematics, the range of a function may refer to either of two closely related concepts: the codomain of the function, or the image of the function. Given two sets X and Y, a binary relation f between X and Y is a (total) function (from X to Y) if for every x in X there is exactly one y in Y such that f relates x to y. The sets X and Y are called domain and codomain of f, respectively. The image of f is then the subset of Y consisting of only those elements y of Y such that there is at least one x in X with f(x) = y. Terminology: As the term "range" can have different meanings, it is considered a good practice to define it the first time it is used in a textbook or article. Older books, when they use the word "range", tend to use it to mean what is now called the codomain. More modern books, if they use the word "range" at all, generally use it to mean what is now called the image. To avoid any confusion, a number of modern books don't use the word "range" at all. Elaboration and example: Given a function f : X → Y with domain X, the range of f, sometimes denoted ran(f) or Range(f), may refer to the codomain or target set Y (i.e., the set into which all of the output of f is constrained to fall), or to f(X), the image of the domain of f under f (i.e., the subset of Y consisting of all actual outputs of f). The image of a function is always a subset of the codomain of the function. As an example of the two different usages, consider the function f(x) = x² as it is used in real analysis (that is, as a function that inputs a real number and outputs its square). In this case, its codomain is the set of real numbers ℝ, but its image is the set of non-negative real numbers ℝ⁺, since x² is never negative if x is real. For this function, if we use "range" to mean codomain, it refers to ℝ; if we use "range" to mean image, it refers to ℝ⁺. In many cases, the image and the codomain can coincide. For example, consider the function f(x) = 2x, which inputs a real number and outputs its double. For this function, the codomain and the image are the same (both being the set of real numbers), so the word range is unambiguous.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Water filling algorithm** Water filling algorithm: The water filling algorithm is a general name given to ideas in communication systems design and practice for equalization strategies on communication channels. As the name suggests, just as water poured into one part of a vessel with multiple openings finds its level, as a consequence of Pascal's law, the amplifier systems in communication network repeaters or receivers amplify each channel up to the required power level, compensating for the channel impairments. See, for example, channel power allocation in MIMO systems. Single channel systems: In a single-channel communication system, the loss on the channel can be taken, simplistically, as attenuation by a fraction g; amplifiers then restore the signal power to its level at the transmitter by operating at a gain of 1/(1 − g). For example, if we experience 6 dB of attenuation in transmission, i.e. roughly 75% loss, then we have to amplify the signal by a factor of about 4 to restore it to transmitter levels (see the sketch at the end of this entry). Multichannel systems: The same ideas can be carried out in the presence of impairments and in a multiple-channel system. Amplifier nonlinearity, crosstalk and power budgets prevent the use of these waterfilling algorithms to restore all channels, and only a subset can benefit from them.
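The single-channel arithmetic can be illustrated with a short C sketch (the helper names are ours, purely for illustration); it converts an attenuation figure in decibels into the fractional power loss g and the restoring gain 1/(1 − g):

```c
#include <math.h>
#include <stdio.h>

/* Fraction of power lost for a given attenuation in dB (power ratio). */
static double loss_fraction(double atten_db) {
    return 1.0 - pow(10.0, -atten_db / 10.0);
}

/* Power gain needed to restore the original level: 1 / (1 - g). */
static double restore_gain(double atten_db) {
    return 1.0 / (1.0 - loss_fraction(atten_db));
}

int main(void) {
    double db = 6.0;  /* the 6 dB example from the text */
    printf("loss = %.0f%%\n", 100.0 * loss_fraction(db)); /* about 75% */
    printf("gain = %.2fx\n", restore_gain(db));           /* about 3.98x */
    return 0;
}
```

The exact gain for 6 dB is about 3.98; the text's factor of 4 is the usual engineering round-off.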
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Outpost (military)** Outpost (military): A military outpost is a detachment of troops stationed at a distance from the main force or formation, usually at a station in a remote or sparsely populated location, positioned to stand guard against unauthorized intrusions and surprise attacks; the term also refers to the station occupied by such troops, usually a small military base or settlement on an outlying frontier, limit, or political boundary, or in another country. Depending on its size and the number of troops it houses, an outpost may amount to a miniature military base. Recent military use: Military outposts, most recently referred to as combat outposts (COPs), served as a cornerstone of counterinsurgency doctrine in Iraq and Afghanistan. These permanent or semi-permanent structures, often located in or near populated areas, enabled military forces to secure key lines of communication or infrastructure, secure and co-opt the populace, assist the government in restoring essential services, and force insurgents to operate elsewhere. Combat outposts were almost unanimously described in positive terms by defense analysts and military officers as a means through which to carry out counterinsurgency efforts.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**So What chord** So What chord: In jazz harmony, a So What chord is a particular 5-note chord voicing. From the bottom note upwards, it consists of three perfect fourth intervals followed by a major third interval. It was employed by Bill Evans in the "'amen' response figure" to the head of the Miles Davis tune "So What". So What chord: For example, an "E minor" So What chord is an Em11 voicing; from the bottom note upwards it is spelled E, A, D, G, B. The So What chord is often used as an alternative to quartal voicings and may be used in diatonic and chromatic planing. It is identical to the standard tuning of a guitar's five lowest strings. It is essentially a minor eleventh chord, arranged as it would be played on a guitar (1, 4, ♭7, ♭3, 5). So What chord: It may also be thought of as a five-note quartal chord (built from fourths) with the top note lowered by a semitone. More modern sounding than "tertial chords" (built from thirds), it is useful in comping; since the structure of quartal harmony is usually vague, many roots may be applied to the So What chord and it may work well in various contexts including "a major scale context; a Mixolydian mode context; or a minor context". For example, without changing the keys that are played, the same Em11 chord described above can also function as a C6Δ9, Asus47(9), G69, Dsus24, 6 [no 7], F Lydian (FΔ9♯1113 [no 5]) or F♯ Phrygian (F♯m7♭911♭13 [no 5]). So What chord: Other jazz recordings that make extensive use of the chord include McCoy Tyner's "Peresina" and Gary Burton's "Gentle Wind and Falling Tear". Tyner's use of similar voicings was an early influence on Chick Corea; it can be heard in tunes such as "Steps" and "Matrix" (both featured on his landmark album Now He Sings, Now He Sobs). The term "So What chord" is used extensively in Mark Levine's landmark work The Jazz Piano Book, wherein he describes a range of uses for which the voicing might be employed. Frank Mantooth dedicated two chapters to the chord under the name "Miracle voicing" in his work Voicings for Jazz Keyboard.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Society for Brain Mapping and Therapeutics** Society for Brain Mapping and Therapeutics: The International Brain Mapping and Intraoperative Surgical Planning Society (IBMISPS; Tax ID 20-2793206), DBA The Society for Brain Mapping and Therapeutics (SBMT), is a non-profit biomedical association (501c6) principally concerned with Brain Mapping and Intra-operative Surgical Planning. The International Brain Mapping and Intraoperative Surgical Planning Foundation (IBMISPF), DBA The Brain Mapping Foundation, provides funding to members of the society. In 2013 the SBMT Board and Members defined Brain Mapping as the study of the anatomy and function of the brain and spinal cord through the use of imaging (including intra-operative, microscopic, endoscopic and multi-modality imaging), immunohistochemistry, molecular & optogenetics, stem cell and cellular biology, engineering (material, electrical and biomedical), neurophysiology and nanotechnology. History: The Society for Brain Mapping and Therapeutics (SBMT) was founded in 2004 to break boundaries in healthcare. The society promotes policies that support rapid, safe, and cost-effective translation of new technology into medicine. SBMT played a significant role in the formulation, planning and execution of Obama's BRAIN Initiative and in 2013 pioneered the G20+ World Brain Mapping & Therapeutic Initiative, which is aimed at creating a global consortium focusing on integration of nanotechnology, imaging, cellular/stem cell therapeutics, Information Technology (IT) and devices (an approach called NanoBioElectronics) in Brain Mapping. Collaborating partners also include the Australian Government and Australian Bio Tech (a collection of 650 Australasian biotech firms), the Canadian government and scientists, Turkish scientists, more than 200 universities and research institutions across the world, and many of the US government agencies. SBMT has nearly 3,000 contacts with industry around the globe. Board Members: Board appointments are for one year, with the possibility of extension depending on how active the member is. Previous conferences: 2004 (15 Nov) Keck School of Medicine, USC 2005 (17–19 Nov) Pasadena, CA, USA 2006 (5–8 Sep) Clermont Ferrand, France 2007 (6–8 Sep) Washington DC, USA 2008 (26–29 Aug) Los Angeles, CA, USA 2009 (26–29 Aug) Harvard Medical School, Boston, USA 2010 (24-27 May) Uniformed Services University of the Health Sciences, Bethesda, USA 2011 (8-10 Jun) Mission Bay Conference Center, San Francisco, CA, USA 2012 (2-4 Jun) Metro Toronto Conference Center, Toronto, ON, Canada 2013 (12-14 May) Baltimore Convention Center, Baltimore, MD, USA 2014 (17-19 Mar) Four Seasons Hotel, Sydney, Australia 2015 (6-8 Mar) L.A. Convention Center, Los Angeles, CA, USA 2016 (8-10 April) Miami Convention Center, Miami, FL, USA 2017 (18-20 April) Millennium Biltmore Hotel, Los Angeles, CA, USA 2018 (13-15 April) Millennium Biltmore Hotel, Los Angeles, CA, USA 2019 (15-17 March) L.A. Convention Center, Los Angeles, CA, USA 2020 - Had to be postponed due to the COVID-19 pandemic 2021 (8-11 July) L.A. Convention Center, Los Angeles, CA, USA 2022 (10-13 March) L.A. Convention Center, Los Angeles, CA, USA 2023 (16-19 February) L.A. Convention Center, Los Angeles, CA, USA
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Canto (gene curation tool)** Canto (gene curation tool): Canto is a web-based tool to support the curation of gene-specific scientific data by both professional biocurators and publication authors. Canto was developed as part of the PomBase project, and is funded by the Wellcome Trust. Canto (gene curation tool): Canto enables experts (biocurators and publication authors) to provide detailed, standardized, sharable annotation from research publications and was originally created for the fission yeast community. Canto is a generic tool that can be readily configured for use with other organisms and other databases; it now supports pathogen-host interactions for PHI-base (Rothamsted Research), the curation of phenotypes and genetic interactions at FlyBase (University of Cambridge), and all gene-specific datatypes for the emerging model species Schizosaccharomyces japonicus in JaponicusDB. Curation using ontology terms: Canto supports the use of bio-ontologies (including the Gene Ontology, the Protein Ontology, the Fission Yeast Phenotype Ontology (FYPO), and the Sequence Ontology) to describe attributes of gene products. Complex ontology structures are hidden by an intuitive search, browse, and drill-down workflow. The Canto workflow guides the user through the curation process with prompts for required qualifiers and metadata (for example evidence (provenance), annotation extensions, and experimental conditions). Prompts are tailored to different data types and their individual specific domains and ranges. Community Curation: Canto has been successful in supporting community curation, and most of the new curation in PomBase is provided by the community of researchers who use the fission yeast Schizosaccharomyces pombe as a model organism. The PomBase team demonstrate that co-curation by publication authors and professional curators provides higher-quality curation to maximise the value and impact of scientific research.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Taenia of fourth ventricle** Taenia of fourth ventricle: In the brain, the taenia of the fourth ventricle (lingula, tenia of fourth ventricle) are two narrow bands of white matter, one on either side, which complete the lower part of the roof of the fourth ventricle. Each consists of a vertical and a horizontal part. The vertical part is continuous below the obex with the gracile nucleus, to which it is adherent by its lateral border. Taenia of fourth ventricle: The horizontal portion extends transversely across the inferior peduncle, below the striae medullares, and roofs in the lower and posterior part of the lateral recess; it is attached by its lower margin to the inferior peduncle, and partly encloses the choroid plexus, which, however, projects beyond it like a cluster of grapes; and hence this part of the tænia has been termed the cornucopia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BK Stacker** BK Stacker: The BK Stacker sandwiches are a family of cheeseburgers sold by the international fast-food restaurant chain Burger King. History: In 2002, Burger King changed ownership when its parent company, Diageo, sold its interest in the company to a group of investment firms led by TPG Capital. After assuming ownership, TPG's newly appointed management team began focusing menu development and advertising on a very narrow demographic group: young men aged 20–34 who routinely ate at fast food restaurants several times per month, a group the chain identified as the "super fan". Amid this new super-fan-focused menu expansion, the chain introduced its new BK Stacker sandwich in late 2006, a family of sandwiches featuring the same set of toppings served as a single, double, triple or quadruple hamburger. The Stacker line was part of a series of larger, more calorie-laden products introduced by the company to entice the super-fan into the chain's restaurants. These new additions helped propel same-store profits for more than sixteen quarters. The Stacker consisted of anywhere from one to four 1.7 oz (48 g) beef patties, American cheese, bacon and a Thousand Island dressing variant called Stacker sauce served on a sesame seed bun. The new sandwiches had a muted reaction in several reviews—Chowhound.com readers rated the Quad Stacker as one of the most over-the-top gluttonous burgers in a poll, while the Impulsive Buy stated that the sandwich was much like any other bacon cheeseburger but meatier. Despite its lukewarm reception, an internet meme relating to the sandwich developed rather quickly. Customers would create an "Octo-Stacker" sandwich by purchasing two quad Stackers and mashing the two sandwiches together to create a sandwich with eight patties, eight slices of cheese and sixteen half pieces of bacon. They would then film themselves trying to eat the 1 lb (0.45 kg) sandwich in under five minutes. With the onset of the Great Recession in 2008–2009, this narrowly defined demographic-based sales plan faltered and sales and profits for the chain declined; Burger King's same-store comparable sales in the United States and Canada declined 4.6% in the three months ended September 30, while McDonald's posted same-store comparable sales growth of 2.5% within the United States. The Stacker line underwent a minor reformulation in 2011 that involved deleting the top layer of cheese and changing the amount of bacon in the sandwiches, and moving the sandwiches from the core section of its menu to the company's value menu. The changed ingredient list and pricing structure created a situation in which the ingredients did not scale at the same rate as the number of burger patties. Consumer Reports' blog The Consumerist noted that two single Stackers at $1.00 included more cheese and more bacon than one double Stacker for $2.00. Three single Stackers had 50% more cheese and double the bacon of one triple Stacker. The Stacker line and other related calorie-heavy menu items were dropped in 2012 when 3G Capital of Brazil bought the company and initiated a menu restructuring focusing on a broader demographic base. Since then, the Stacker line has been reintroduced under its 2005–2011 formulation and with a new name: the "Stacker King" sandwiches. Canadian locations serve both the 2005 and the 2011 formulations of the Stacker sandwiches.
The 2005 formulations are branded as the "Stacker King" line, while the 2011 formulations are branded as simply the "Stacker" line. Product description: The BK Stacker is a hamburger consisting of anywhere from one to four 2.0 ounces (57 g) grilled beef patties, American cheese, bacon and Stacker sauce (a Thousand Island dressing variant) served on a sesame seed bun. Product description: Notable variants The standard variants of the BK Stacker sandwich are: The Single Stacker - 1 patty, 2 half pieces of bacon and 1 slice of cheese The Double Stacker - 2 patties, 3 half pieces of bacon and 1 slice of cheese The Triple Stacker - 3 patties, 3 half pieces of bacon and 2 slices of cheese The Quad Stacker - 4 patties, 3 half pieces of bacon and 3 slices of cheese BK Stackticon - A summer 2009 variation that replaces the stacker sauce with BBQ Sauce. Sold as product tie-in with Transformers: Revenge of the Fallen BBQ Beef Stack - A similar sandwich offered by Hungry Jack's that features single, double and triple sized burgers along with a fried egg and a proprietary BBQ sauce called "Jack Sauce." The Quintuple Stacker, (a limited edition version offered in Argentina) - 5 patties, 3 half pieces of bacon and 5 slices of cheese Advertising: The BK Stacker was introduced using commercials that employed groups of little people in the roles of members of the "Stackers Union". The characters were "Vin," played by Danny Woodburn, "the new guy," and various members of the "Stackers Union" construction team that work in a BK kitchen assembling the sandwiches. The tag line was "Meat, Cheese and Bacon- Stacked High". As exemplified in the advertising campaign, part of the sandwich's concept revolves around not having vegetables like lettuce, onions, or tomatoes.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Childhood arthritis** Childhood arthritis: Childhood arthritis (also known as juvenile arthritis or pediatric rheumatic disease) is an umbrella term used to describe any rheumatic disease or chronic arthritis-related condition which affects individuals under the age of 16. Most types are autoimmune disorders. Signs and symptoms: Several types of childhood arthritis exist, including juvenile idiopathic arthritis, juvenile myositis, juvenile lupus, juvenile scleroderma, vasculitis, and fibromyalgia. Signs and symptoms: General signs of childhood arthritis disorders include: Joints: Swollen, stiff, red, warm, and/or painful joints Eyes: Painful/dry eyes, sensitivity to light and/or difficulty seeing caused by uveitis Skin: Scaly red rash (psoriatic), light spotted pink rash (systemic), butterfly-shaped rash across the bridge of the nose and cheeks (lupus) or thick, hardened patches of skin (scleroderma) Organs: Digestive tract (diarrhea and bloating), lungs (shortness of breath) and heart Other: Fatigue, appetite loss, and/or high, spiking fever The most common type of childhood arthritis, juvenile idiopathic arthritis (previously known as juvenile rheumatoid arthritis (JRA) or juvenile chronic arthritis (JCA)), can be divided into three main forms. The classification is based upon symptoms, the number of joints involved, and the presence of certain antibodies in the blood. Signs and symptoms: Polyarticular arthritis is the first type of arthritis, which affects about 30–40% of children with arthritis and is more common in girls than boys. Typically five or more joints are affected (usually smaller joints such as those of the hands and feet, but the hips, neck, shoulders and jaw may also be affected). Signs and symptoms: Oligoarticular (aka pauciarticular) arthritis can be early or late onset and is the second type of arthritis, affecting about 50% of children with juvenile arthritis. This type affects four or fewer joints (usually large joints such as the knees, ankles or wrists) and may cause eye inflammation in girls with positive anti-nuclear antibodies (ANA). Girls younger than eight are more likely to develop this type of arthritis. Signs and symptoms: Systemic disease is the least common form, with 10–20% of children (boys and girls equally) being affected with limited movement, swelling and pain in at least one joint. A common symptom of this type is a high, spiking fever of 103 °F (39.4 °C) or higher, lasting for weeks or months, and a rash of pale red spots on the chest, thighs or other parts of the body may be visible. Cause: In most cases, juvenile arthritis is caused by the body attacking its own healthy cells and tissues, i.e. autoimmunity, causing the joint to become inflamed and stiff. Once the joint has become inflamed and stiff, damage is done to the joint and the growth of the joint may be changed or impaired. The underlying cause of the immune system's malfunction is unknown; dietary habits and emotional state seem to have no effect on the disease. Diagnosis: Early diagnosis and treatment by a pediatric rheumatologist or a rheumatologist can help manage inflammation, relieve pain, and prevent joint damage. However, it is difficult for doctors to diagnose the disease. Careful examination, laboratory tests (blood and urine), and various forms of imaging like X-rays may be some of the tests conducted by a doctor.
Doctors may perform some of the following tests to diagnose the condition: an ANA (antinuclear antibody) test, joint aspiration, and a rheumatoid factor (RF) test. Treatment: The treatment of most types of juvenile arthritis includes medications, physical therapy, splints and, in severe cases, surgery. Methotrexate is commonly prescribed to children with juvenile arthritis. These treatments are focused on reducing swelling, relieving pain and maintaining full movement of joints. Children are encouraged to be involved in extracurricular activities and physical activity when possible, and to live a "normal" life. Epidemiology: In the US it affects about 250,000-294,000 children, making it one of the most common groups of childhood diseases.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Load-Hit-Store** Load-Hit-Store: A Load-Hit-Store, sometimes abbreviated as LHS, is a data dependency in a CPU in which a memory location that has just been the target of a store operation is loaded from. The CPU may then need to wait until the store finishes, so that the correct value can be retrieved. This involves e.g. an L1 cache roundtrip, during which most or all of the pipeline will be stalled, causing a significant decrease in performance. For example, consider a C/C++ function like the one sketched at the end of this entry. Here, the language rules do not allow the compiler to assume that the pointers a and b refer to different memory locations. Therefore, it cannot, in general, keep the stored values in a register for the final addition (or, in this simple example, precalculate the return value to 12), but instead has to emit code that reloads at least the value from the first memory location, *a. The only realistic alternatives are a test-and-branch to see whether a and b are equal (in which case the correct return value is 14), which adds significant overhead if the pointers are not equal, and optimizations enabled by function inlining. Load-Hit-Store: Now if a call to slow is made with the same address for a and b, there is a data dependency between the memory stores and the memory load(s) in the final statement of slow. Some CPU designs (like general purpose processors for desktop or notebook computers) dedicate a significant amount of die space to complex store-to-load forwarding, which, under suitable circumstances such as native alignment of the operands, can avert having to wait for the cache roundtrip. Other CPUs (e.g. for embedded devices or video game consoles) may use a less elaborate or even minimalistic approach, and rely on the software developer to avoid frequent load-hit-stores in performance-critical code, or remove them during performance optimization. In the minimalistic approach, a store-to-load dependency forces a flush of the store buffers and a stall of the pipeline. This ensures that the computation has the correct result, at a high performance cost.
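The code sample the text refers to is not reproduced in this copy. A minimal function consistent with the description (the store values 5 and 7 are chosen so that the result is 12 when the pointers differ and 14 when they alias) might look like this:

```c
/* A store-then-load pattern that can trigger a load-hit-store stall. */
int slow(int *a, int *b)
{
    *a = 5;          /* store to *a                               */
    *b = 7;          /* store to *b, which may alias *a           */
    return *a + *b;  /* the load of *a must wait for both stores: */
                     /* 12 if a != b, 14 if a == b                */
}
```

Because the compiler cannot prove that a and b are distinct, the final load of *a generally has to go back to memory, which is exactly the store-to-load dependency described above.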
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Photo-Carnot engine** Photo-Carnot engine: A photo-Carnot engine is a Carnot cycle engine in which the working medium is a photon gas inside a cavity with perfectly reflecting walls. Radiation is the working fluid, and the piston is driven by radiation pressure. Photo-Carnot engine: A quantum Carnot engine is one in which the atoms in the heat bath are given a small bit of quantum coherence. The phase of the atomic coherence provides a new control parameter. The deep physics behind the second law of thermodynamics is not violated; nevertheless, the quantum Carnot engine has certain features that are not possible in a classical engine. Derivation: The internal energy of the photo-Carnot engine is proportional to the volume (unlike the ideal-gas equivalent) as well as to the 4th power of the temperature (see Stefan–Boltzmann law); using $a = 4\sigma/c$: $U = \varepsilon a V T^4$. The radiation pressure is proportional only to this 4th power of temperature, and to no other variables, meaning that for this photo-Carnot engine an isotherm is equivalent to an isobar: $P = \frac{U}{3V} = \frac{\varepsilon a T^4}{3}$. Using the first law of thermodynamics ($dU = dW + dQ$) we can determine the work done through an adiabatic ($dQ = 0$) expansion by using the chain rule ($dU = \varepsilon a T^4\,dV + 4\varepsilon a V T^3\,dT$) and setting it equal to $dW_V = -P\,dV = -\frac{1}{3}\varepsilon a T^4\,dV$. Combining these via $dW_V = dU$ gives us $-\frac{1}{3}T\,dV = V\,dT$, which we can solve to find $T V^{1/3} = \text{const}$, or equivalently $T^3 V = \text{const}$. Since the photo-Carnot engine needs a quantum coherence in the gas which is lost during the process, rebuilding the coherence takes more energy than the machine produces. The efficiency of this reversible engine, including the rebuilding of the coherence, must be at most the Carnot efficiency, regardless of the mechanism, and so $\eta \le \frac{T_H - T_C}{T_H} = 1 - \frac{T_C}{T_H}$.
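For clarity, the integration step compressed into the sentence above can be written out explicitly (an added worked step, using the same symbols as the derivation):

```latex
\[
  -\tfrac{1}{3}\,T\,dV = V\,dT
  \;\Longrightarrow\;
  \frac{dT}{T} = -\frac{1}{3}\,\frac{dV}{V}
  \;\Longrightarrow\;
  \ln T = -\tfrac{1}{3}\ln V + \text{const}
  \;\Longrightarrow\;
  T\,V^{1/3} = \text{const}
  \;\Longleftrightarrow\;
  T^{3}\,V = \text{const}.
\]
```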
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bachelor of Science in Human Biology** Bachelor of Science in Human Biology: Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is wide variation in emphasis, ranging across business, social studies, public policy, healthcare and pharmaceutical research. Americas: Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major, and alumni have gone on to post-graduate education, medical school, law, business and government. Americas: Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC), which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on the structure and functioning (anatomy, physiology, biochemistry) of the human body and the relevance to human health, with Caribbean-specific experience. The syllabus is organized under five main sections: living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, and the impact of human activities on the environment. Americas: Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology, jointly run by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia: BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (Honours) Human Biology at AIIMS (New Delhi, India) was started in 1980 with the goal of bridging the gap between research performed at the molecular/cellular level (in vitro or in lower animal models) and clinical research, creating a pool of scientists with a greater understanding of human physiology and the tools to incorporate the big picture while still taking a reductionist approach to scientific research. As such, it provided a stepping stone for an advanced career in research and allied fields. Candidates for the maximum of 25 open seats were selected via a nationwide entrance exam. Asia: Origin Prof. Prakash Chandra (1952–2006), who was Dean at AIIMS from 1979 to 1984, undertook the task of revising the undergraduate curriculum and initiated the BSc (Hons.) courses in Nursing and in Human Biology. Asia: Among the founders of the Human Biology program was Dr. B. S. Narang, a well-renowned educator at that time, whose contributions are now recognized by the Dr. B.S. Narang Memorial Prize, awarded to the Best Undergraduate in Biochemistry at AIIMS. Coursework FIRST PHASE: During the first phase, students were exposed to the basic medical sciences: human anatomy, physiology and biochemistry. This phase was run together with first-year M.B.B.S. medical students, who followed the same curriculum.
Asia: Human Anatomy: gross anatomy (cadaver dissections, osteology and kinesiology), microanatomy of all systems of the body, neuroanatomy (of the brain and spinal cord, its connections and function, with demonstration of cut brain sections), embryology (normal and abnormal development of the human embryo, studied at various stages at the microscopic and gross level) and genetics. Physiology: a complete review of the functional aspects of human physiology, neurophysiology, respiratory physiology, GIT, the special senses, skeletal and smooth muscles, the cardiovascular system, and the excretory and reproductive systems. Asia: Biochemistry: an introduction to biochemistry and allied fields at the molecular, cellular and system level: biomolecules, enzymology, metabolism and specialization in the tissues, immunology, biochemical genetics. SECOND PHASE: Phase II was designed to provide as broad an exposure as possible to scientific fundamentals and to the various facets of research areas and basic concepts. Each chosen subject was spread over three to five weeks and included lectures, tutorials (discussion sessions) and labs. Asia: Biomathematics: differentiation and integration, partial differential equations, special functions, integral transforms. Biostatistics: measures of location and dispersion, sampling, probability, statistical distributions, tests of significance, correlation and regression, analysis of variance. Chemical basis of biology: concepts in organic and physical chemistry, nature of bonds, non-bonded interactions, quantum chemistry. Physical basis of biology: laws of inertia, gravitation, relativity, electrodynamics, quantum physics. Biochemical basis of biology: organization of genes, viruses and plasmids, biochemical evolution, the organization of DNA, replication, the code, molecular basis of differentiation and morphogenesis, molecular basis of cancer. Biophysical basis of biology: principles of structure and function of macromolecules, nucleic acids and proteins, small molecules, organization of macromolecular assemblies, lipids and membranes, phase diagrams, structure-activity relationships in drugs, computer modeling. Principles of genetics and evolution: heredity and variation, multifactorial inheritance, molecular genetics, chromosomal disorders. Pharmacology, Microbiology, and Pathology (concepts). Instrumentation and techniques in experimental biology: principles of animal care, anesthetic agents, surgical skills, perfusion techniques, experimental design, bioassay techniques. Ecology and environmental biology: types of ecosystems, adaptation to the environment, physiological changes in response to hypo/hyperthermia in humans. Reproductive biology and experimental endocrinology: the endocrinology of reproduction, human contraceptives, animal models for studying hormonal control of reproduction. Bioenergetics and Biocybernetics: basic thermodynamics, chemical kinetics, far-from-equilibrium thermodynamics, introduction to biocybernetics. Techniques in experimental biochemistry: colorimetry, spectrophotometry, spectrofluorometry, pH determination, gel electrophoresis, chromatography. THIRD PHASE: The third year was devoted to specialization in a chosen field of anatomy, biochemistry, biophysics, physiology or pharmacology. It involved in-depth instruction in the chosen field, reviews of relevant research literature, seminars and term papers.
Since the first phase of the coursework was taken together with the incoming class of medical students, this program was unique in its similarity to present-day MD/PhD programs at US universities. The experience of this unique program format has inspired similar programs at Jayewardenepura and in Singapore. Asia: History The first class graduated in 1983. However, with the establishment of new Master's-level programs, such as the MSc Biotechnology in 1986, and the shifting focus of the institute in the late 80s and early 90s, the undergraduate "Human Biology" program lost its core support. The last batch of students was accepted for this course in 2002. Several components of the second phase are now incorporated into the MSc or M.Biotech curriculum at AIIMS. Asia: BSc (Honours) Human Biology at University of Sri Jayewardenepura, Nugegoda, Sri Lanka (since 1994) The BSc Human Biology degree program was initiated in 1994 at the Faculty of Medical Sciences with the intention of providing human resources for medical and health sciences faculties, particularly those departments finding it difficult to recruit MBBS graduates with practical skills, and especially for Pharmacy, Medical Laboratory Technology and Medical Laboratory Sciences courses. It also aimed to provide research institutes, private-sector institutions involved in food and nutrition, public and private diagnostic institutions and health policy-making institutions with individuals possessing the necessary knowledge and skills. The Human Biology special degree course involves 3.5 years (9½ terms) of study in the Faculty of Medical Sciences and comprises three parts. Similar to the program at AIIMS, Human Biology students follow Part I and Part II of their degree course with the medical undergraduates. Part I and Part II comprise 12 units from Anatomy, Physiology and Biochemistry, and four units from General Pathology, Parasitology, Microbiology and Pharmacology, respectively. Part III of the course is the specialization year in a chosen field (Biochemistry, Food & Nutrition, Microbiology, Genetics or Pharmacology); the curriculum is designed so that students gain advanced knowledge in the selected specialization. Asia: B.S. Human Biology at Kathmandu University, Nepal (since 2006) Kathmandu University has offered a Human Biology course since 2006. B.S. Life Sciences at National University of Singapore Australia and New Zealand: BBiomedSc (Bachelor of Biomedical Sciences) at University of Otago, Christchurch (since 2002) A degree in Biomedical Sciences (BBiomedSc) at Otago is complementary to traditional discipline-based majors (e.g. Anatomy, Biochemistry, Genetics, Human Nutrition, Microbiology, Pharmacology, Physiology) currently offered within the Bachelor of Science (BSc) degree, but allows a wider diversity of health-related papers to be taken. It provides a suitable qualification for graduate entry into medicine and other professional health science programmes, as well as increasing the range of career opportunities accessible to graduates. This degree aims at producing graduates with a sound and comprehensive grounding in the key principles underpinning modern biological and medical research and their potential applications in biotechnology. Students completing the First Year Health Sciences course will have met the requirements to advance to 200-level in the Biomedical Sciences degree.
Australia and New Zealand: The degree structure is a standard three-year bachelor's degree programme enabling students to graduate with a Bachelor of Biomedical Science degree. Biomedical Sciences is a combination of subject areas that promotes understanding of the scientific basis of health and disease in humans. Students have the opportunity of majoring in one of six subject areas: Drugs and Human Health; Functional Human Biology; Infection and Immunity; Molecular Basis of Health & Disease; Nutrition and Metabolism in Human Health; and Reproduction, Genetics and Development. Australia and New Zealand: A postgraduate Honours degree in Biomedical Sciences is also available to students who have completed the requirements for the BBiomedSc degree with an average grade of at least B+ in the appropriate 300-level papers. Australia and New Zealand: Bachelor of Human Biology (Honours), University of Auckland, New Zealand. Programs at Australian universities: Bachelor of Science, Major in Human Biology, University of Notre Dame; Bachelor of Applied Science (Chinese Medicine)/Bachelor of Applied Science (Human Biology), Royal Melbourne Institute of Technology University (RMIT); Bachelor of Science (Human Biology), Edith Cowan University; Bachelor of Science (Human Biology Preclinical), Curtin University of Technology; Bachelor of Applied Science in Human Biology, University of Canberra; Bachelor of Science (Anatomy and Human Biology), University of Western Australia. United Kingdom: British universities are at the forefront of providing a broad-based undergraduate curriculum in all major aspects of human biology, although courses differ widely in their emphasis. For example, Sheffield Hallam combines business studies with biosciences, healthcare and pharmaceutical sciences. United Kingdom: BSc (Honours) Human Biology at Kingston University; BSc (Honours) Human Biology at Queen Margaret University; B.A./BSc (Honours) Human Biology and Psychology at University of Hertfordshire; BSc (Honours) Anatomy and Human Biology at University of Liverpool; BSc Human Biosciences at Roehampton University, London; BSc Human Biosciences at Northumbria University, Newcastle; BSc (Honours) Human Biology at Sheffield Hallam University; BSc (Honours) Human Biosciences at Plymouth University.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Space Telescope Science Data Analysis System** Space Telescope Science Data Analysis System: The Space Telescope Science Data Analysis System (STSDAS) is an IRAF-based suite of astronomical software for reducing and analyzing astronomical data. It contains general purpose tools and packages for processing data from the Hubble Space Telescope. STSDAS is produced by Space Telescope Science Institute (STScI). The STSDAS software is in the public domain and the source code is available.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kap Stanton Formation** Kap Stanton Formation: The Kap Stanton Formation is a geologic formation in Greenland. It preserves fossils dating back to the Cambrian period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Technology trajectory** Technology trajectory: Technology trajectory refers to a single branch in the evolution of a technological design of a product or service, with nodes representing separate designs. Because a trajectory is a single branch, the development of new technologies is expected to build on earlier uses and to enable future technologies, and the development of future technologies in turn allows for the innovation of new ideas, research, and much more. Technology trajectory: It can also be defined as the path by which innovations in a given field occur. Technology trajectory: Movement along the technology trajectory is associated with research and development. Due to the institutionalization of ideas, markets, and professions, technology development can get 'stuck' (locked in) within one trajectory, and firms and engineers become unable to adapt to ideas and innovation from the outside. Studying how development breaks out of a trajectory leads to three questions: 1) when technology will lock in to a trajectory, 2) when technology may break out of lock-in, and 3) when competing technologies may co-exist in a balance. A lock-in occurs when a technology develops along a certain trajectory and that development gets stuck due to certain circumstances; not all trajectories are permanently locked in. Take, for example, the technological trajectory of increasing resource use. In 1929, a man who worked for the USGS, wanting to make sure there would be enough materials and technological capacity for metal production after the war, considered four important factors for metal production: geology, technology, economics, and politics. There are technical factors that go into mining, treatment, and refining. "The history of sulfur extraction and production technology also reflects continuous improvement upon processes developed from other industries to meet changing materials use requirements and societal needs". Sulfur is extracted from deep underground or underwater. The Clean Air Act of 1970 made rules for recovering sulfur from oil refining, the processing of sulfide ores, and even the combustion involved in electricity generation; this required new technologies to be developed to comply with the Act. Technology trajectory: The continuous improvement of sulfur extraction over the years shows how this technological trajectory has developed. Technology trajectory: Technology trajectory does not focus only on firms or engineers; it also bears on healthcare, schools, everyone's daily life, and much more. It also poses the question of whether innovations are integrated into systems nationally, regionally, or sectorally, which raises further questions about environmental issues and about how technology trajectories affect everyone. Technology is all around us in this day and age, and with that said, we must have a trajectory for where we want to advance in order to take technology beyond our imagination. Technology shapes how we learn, gather information, move forward, and change. Technology is like a policy because it tells us how we are supposed to do things, and makes some ways of doing things more rational and practical than others. Technology trajectory: See also: Innovation, Thomas Samuel Kuhn, Social shaping of technology, Technological paradigm.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Data pre-processing** Data pre-processing: Data preprocessing can refer to manipulation or dropping of data before it is used, in order to ensure or enhance performance, and is an important step in the data mining process. The phrase "garbage in, garbage out" is particularly applicable to data mining and machine learning projects. Data collection methods are often loosely controlled, resulting in out-of-range values, impossible data combinations, and missing values, amongst other issues. Data pre-processing: Analyzing data that has not been carefully screened for such problems can produce misleading results. Thus, the representation and quality of the data must be assured before running any analysis. Often, data preprocessing is the most important phase of a machine learning project, especially in computational biology. If there is a high proportion of irrelevant and redundant information, or noisy and unreliable data, then knowledge discovery during the training phase may be more difficult. Data preparation and filtering steps can take a considerable amount of processing time. Examples of methods used in data preprocessing include cleaning, instance selection, normalization, one-hot encoding, data transformation, feature extraction and feature selection. Applications: Data mining The origins of data preprocessing lie in data mining: the idea is to aggregate existing information and search the content. It was later recognized that machine learning and neural networks need a data preprocessing step too, so it has become a universal technique used in computing in general. Applications: Data preprocessing allows for the removal of unwanted data through data cleaning, leaving the user with a dataset containing more valuable information for data manipulation later in the data mining process. Editing a dataset to correct data corruption or human error is a crucial step for obtaining accurate quantifiers, such as the true positives, true negatives, false positives and false negatives found in a confusion matrix, which are commonly used for medical diagnosis. Users are able to join data files together and use preprocessing to filter unnecessary noise from the data, which can allow for higher accuracy. Users often write Python scripts with the pandas library, which lets them import data from a comma-separated values (CSV) file as a data-frame; the data-frame is then used to manipulate data in ways that can be challenging in Excel. pandas is a powerful tool for data analysis and manipulation that makes data visualizations, statistical operations and much more a lot easier (a short sketch of typical pandas preprocessing steps appears at the end of this entry). Many also use the R programming language for such tasks. A user transforms existing files into new ones for many reasons: data preprocessing aims to add missing values, aggregate information, label data with categories (data binning) and smooth a trajectory. More advanced techniques like principal component analysis and feature selection work with statistical formulas and are applied to complex datasets recorded by GPS trackers and motion capture devices. Applications: Semantic data preprocessing Semantic data mining is a subset of data mining that specifically seeks to incorporate domain knowledge, such as formal semantics, into the data mining process. Domain knowledge is the knowledge of the environment the data was processed in.
Domain knowledge can have a positive influence on many aspects of data mining, such as filtering out redundant or inconsistent data during the preprocessing phase. Domain knowledge also works as a constraint: it acts as a set of prior knowledge that reduces the space required for searching and guides the exploration of the data. Simply put, semantic preprocessing seeks to filter data using the original environment of that data more correctly and efficiently. Applications: There are increasingly complex problems which call for more elaborate techniques to better analyze existing information. Instead of creating a simple script for aggregating different numerical values into a single value, it makes sense to focus on semantics-based data preprocessing. The idea is to build a dedicated ontology, which explains at a higher level what the problem is about. In regard to semantic data mining and semantic pre-processing, ontologies are a way to conceptualize and formally define semantic knowledge and data. The Protégé software is the standard tool for constructing an ontology. In general, the use of ontologies bridges the gaps between data, applications, algorithms, and results that occur from semantic mismatches. As a result, semantic data mining combined with ontology has many applications where semantic ambiguity can impact the usefulness and efficiency of data systems. Applications include the medical field, language processing, banking, and even tutoring, among many more. Applications: There are various strengths to using a semantic data mining and ontology-based approach. As previously mentioned, these tools can help during the pre-processing phase by filtering out non-desirable data from the data set. Additionally, well-structured formal semantics integrated into well-designed ontologies can return powerful data that can be easily read and processed by machines. A specifically useful example exists in the medical use of semantic data processing. As an example, a patient is having a medical emergency and is being rushed to hospital. The emergency responders are trying to figure out the best medicine to administer to help the patient. Under normal data processing, scouring all the patient's medical data to ensure they are getting the best treatment could take too long and risk the patient's health or even life. However, using semantically processed ontologies, the first responders could save the patient's life. Tools like a semantic reasoner can use an ontology to infer what the best medicine to administer is, based on the patient's medical history, such as whether they have a certain cancer or other conditions, simply by examining the natural language used in the patient's medical records. This would allow the first responders to search for medicine quickly and efficiently without having to worry about the patient's medical history themselves, as the semantic reasoner would already have analyzed this data and found solutions. In general, this illustrates the strength of using semantic data mining and ontologies: they allow for quicker and more efficient data extraction on the user side, as the user has fewer variables to account for, since the semantically pre-processed data and the ontology built for the data have already accounted for many of these variables. However, there are some drawbacks to this approach. Namely, it requires a high amount of computational power and complexity, even with relatively small data sets.
This could result in higher costs and increased difficulty in building and maintaining semantic data processing systems. This can be mitigated somewhat if the data set is already well organized and formatted, but even then the complexity is still higher when compared to standard data processing. Below is a simple diagram combining some of the processes, in particular semantic data mining and its use of ontology. Applications: The diagram depicts a data set being broken up into two parts: the characteristics of its domain, or domain knowledge, and the actual acquired data. The domain characteristics are processed to become user-understood domain knowledge that can be applied to the data. Meanwhile, the data set is processed and stored so that the domain knowledge can be applied to it, so that the process may continue. This application forms the ontology. From there, the ontology can be used to analyze data and process results. Applications: Fuzzy preprocessing is another, more advanced technique for solving complex problems. Fuzzy preprocessing and fuzzy data mining make use of fuzzy sets. These data sets are composed of two elements: a set and a membership function for the set, taking values between 0 and 1. Fuzzy preprocessing uses this fuzzy data set to ground numerical values with linguistic information; raw data is then transformed into natural language. Ultimately, fuzzy data mining's goal is to help deal with inexact information, such as an incomplete database. Currently fuzzy preprocessing, as well as other fuzzy-based data mining techniques, sees frequent use with neural networks and artificial intelligence.
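To make the pandas workflow described earlier in this entry concrete, here is a minimal illustrative sketch of typical preprocessing steps; the file name and column names are invented for the example, not taken from the text:

```python
import pandas as pd

# Import a CSV file as a DataFrame (file and column names are hypothetical).
df = pd.read_csv("patients.csv")

# Cleaning: drop rows with out-of-range values.
df = df[(df["age"] >= 0) & (df["age"] <= 120)]

# Handle missing values: impute with the column mean.
df["weight"] = df["weight"].fillna(df["weight"].mean())

# Normalization: rescale a numeric column to [0, 1].
w = df["weight"]
df["weight_norm"] = (w - w.min()) / (w.max() - w.min())

# One-hot encoding of a categorical column.
df = pd.get_dummies(df, columns=["blood_type"])
```

Likewise, the fuzzy-set idea above can be sketched with a single membership function taking values in [0, 1]; the linguistic label and the thresholds below are invented for illustration:

```python
def membership_warm(temp_c: float) -> float:
    """Degree to which a temperature counts as 'warm':
    0 below 15 C, 1 above 30 C, linear in between."""
    if temp_c <= 15.0:
        return 0.0
    if temp_c >= 30.0:
        return 1.0
    return (temp_c - 15.0) / 15.0

# Grounding a numeric value with linguistic information:
print(membership_warm(24.0))  # 0.6 -> "24 C is fairly warm"
```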
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CodeCharge Studio** CodeCharge Studio: CodeCharge Studio is a rapid application development (RAD) and integrated development environment (IDE) for creating database-driven web applications. It is a code generator and templating engine that separates the presentation layer from the coding layer, with the aim of allowing designers and programmers to work cohesively on a web application (the model-view-controller design pattern). CodeCharge is the first product released by Yes Software, Inc., after two years of development. Software: CodeCharge utilizes point-and-click wizards for creating record and search forms, grids, and editable grids without the need for programming. The databases it supports include MySQL, MS SQL Server, MS Access, PostgreSQL, and Oracle, as well as any other database that supports web connectivity. CodeCharge can export code to all major programming languages and platforms, such as ASP.NET, ASP, Java, ColdFusion, PHP, and Perl. CodeCharge employs an interactive user interface (UI) designed for the creation of web applications. When generating code, CodeCharge automatically structures the code, using naming conventions and comments to describe the code's purpose. Moreover, CodeCharge keeps the application definition separate from the code it generates, so that projects may be converted to any language at any time. Without additional programming, a CodeCharge-generated project is not a routed web site (where everything is routed through, for example, index.asp); rather, every page is accessible by reference to its own name or URL. Software: Technologies The following technologies are used once the application is ready and running. OOP - The generated application is object-oriented: every structural element, such as the database connection, grids, navigation bars, and the visible page itself, is an object. The application uses the Microsoft .NET 2 Framework and will also install when the .NET 3.5 framework is detected on the host computer. Templating - CodeCharge uses HTML template pages to generate the visible internet sites, and templates of web pages may be previewed before going "live": there are xxxx.html files, corresponding xxxx.asp (xxxx.php, etc.) code files, and, for server-side events, separate xxxx_events.asp (xxxx_events.php, etc.) files. Customization - CodeCharge provides its users a standard way to set up custom code for handling events not fully addressed by the built-in features. Application-generating technologies: PHP, Perl, .NET, Java, ASP, ColdFusion, XML. Reception: In 2003, regarding the original version of CodeCharge Studio, Arbi Arzoumani of PHP Architect wrote: "For its price tag this code generation application is well worth it. One great application that I can see this being used for is creating prototypes of web applications in very short periods of time. In other words, last minute proposals." Kevin Yank of SitePoint Tech Times was impressed "by the many ways in which experienced developers could draw added power out of the software, instead of being limited by it, as is the case with most RAD tools for Web development." In his review of CodeCharge Studio 2.0, Troy Dreier wrote in Intranet Journal, "CodeCharge Studio [allows] Web application developers [to] shave literally months off their development times." CodeCharge Studio 3.0 received a rating of 3.5 out of 5 from Peter B. MacIntyre of php|architect.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metric temporal logic** Metric temporal logic: Metric temporal logic (MTL) is a special case of temporal logic. It is an extension of temporal logic in which temporal operators are replaced by time-constrained versions like until, next, since and previous operators. It is a linear-time logic that assumes both the interleaving and fictitious-clock abstractions. It is defined over a point-based weakly-monotonic integer-time semantics. MTL has been described as a prominent specification formalism for real-time systems. Full MTL over infinite timed words is undecidable. Syntax: The full metric temporal logic is defined similarly to linear temporal logic, where a set of non-negative real numbers is added to the temporal modal operators U and S. Formally, MTL is built up from: a finite set of propositional variables AP; the logical operators ¬ and ∨; and the temporal modal operator $U_I$ (pronounced "$\phi$ until in $I$ $\psi$"), with $I$ an interval of non-negative numbers. Syntax: There is also the temporal modal operator $S_I$ (pronounced "$\phi$ since in $I$ $\psi$"), with $I$ as above. When the subscript is omitted, it is implicitly equal to $[0,\infty)$. Note that the next operator N is not considered to be a part of MTL syntax; it will instead be defined from other operators. Past and Future The past fragment of metric temporal logic, denoted past-MTL, is defined as the restriction of the full metric temporal logic without the until operator. Similarly, the future fragment of metric temporal logic, denoted future-MTL, is defined as the restriction of the full metric temporal logic without the since operator. Depending on the authors, MTL is either defined as the future fragment of MTL, in which case full-MTL is called MTL+Past, or MTL is defined as full-MTL. In order to avoid ambiguity, this article uses the names full-MTL, past-MTL and future-MTL; when a statement holds for all three logics, MTL is simply used. Model: Let $T \subseteq \mathbb{R}^+$ intuitively represent a set of points in time, and let $\gamma : T \to A$ be a function which associates a letter to each moment $t \in T$. A model of an MTL formula is such a function $\gamma$. Usually, $\gamma$ is either a timed word or a signal; in those cases, $T$ is either a discrete subset or an interval containing 0. Semantics: Let $T$ and $\gamma$ be as above and let $t \in T$ be some fixed time. We now explain what it means for an MTL formula $\phi$ to hold at time $t$, which is denoted $\gamma, t \models \phi$. Let $I \subseteq \mathbb{R}^+$ and $\phi, \psi \in \mathrm{MTL}$. We first consider the formula $\phi\, U_I\, \psi$. We say that $\gamma, t \models \phi\, U_I\, \psi$ if and only if there exists some time $t' \in t + I$ such that: $\gamma, t' \models \psi$, and for each $t'' \in T$ with $t < t'' < t'$, $\gamma, t'' \models \phi$. We now consider the formula $\phi\, S_I\, \psi$ (pronounced "$\phi$ since in $I$ $\psi$"). We say that $\gamma, t \models \phi\, S_I\, \psi$ if and only if there exists some time $t' \in t - I$ such that: $\gamma, t' \models \psi$, and for each $t'' \in T$ with $t' < t'' < t$, $\gamma, t'' \models \phi$. The definition of $\gamma, t \models \phi$ for the values of $\phi$ not considered above is similar to the definition in the LTL case. Operators defined from basic MTL operators: Some formulas are so often used that a new operator is introduced for them. These operators are usually not considered to belong to the definition of MTL, but are syntactic sugar denoting more complex MTL formulas. We first consider operators which also exist in LTL. In this section, we fix MTL formulas $\phi, \psi$ and $I \subseteq \mathbb{R}^+$. Operators similar to the ones of LTL Release and Back to We denote by $\phi\, R_I\, \psi$ (pronounced "$\phi$ release in $I$, $\psi$") the formula $\neg(\neg\phi\, U_I\, \neg\psi)$.
This formula holds at time $t$ if either: there is some time $t' \in t + I$ such that $\phi$ holds at $t'$ and $\psi$ holds in the interval $(t, t') \cap (t + I)$; or at each time $t' \in t + I$, $\psi$ holds. The name "release" comes from the LTL case, where this formula simply means that $\psi$ should always hold, unless $\phi$ releases it. Operators defined from basic MTL operators: The past counterpart of release is denoted by $\phi\, B_I\, \psi$ (pronounced "$\phi$ back to in $I$, $\psi$") and is equal to the formula $\neg(\neg\phi\, S_I\, \neg\psi)$. Finally and Eventually We denote by $\Diamond_I \phi$ or $F_I \phi$ (pronounced "finally in $I$, $\phi$" or "eventually in $I$, $\phi$") the formula $\top\, U_I\, \phi$. Intuitively, this formula holds at time $t$ if there is some time $t' \in t + I$ such that $\phi$ holds. Operators defined from basic MTL operators: We denote by $\Box_I \phi$ or $G_I \phi$ (pronounced "globally in $I$, $\phi$") the formula $\neg\Diamond_I\neg\phi$. Intuitively, this formula holds at time $t$ if for all times $t' \in t + I$, $\phi$ holds. We denote by $\overleftarrow{\Box}_I \phi$ and $\overleftarrow{\Diamond}_I \phi$ the formulas similar to $\Box_I \phi$ and $\Diamond_I \phi$ in which $U$ is replaced by $S$. Both formulas have intuitively the same meaning, but with respect to the past instead of the future. Next and previous This case is slightly different from the previous ones, because the intuitive meaning of the "next" and "previously" formulas differs depending on the kind of function $\gamma$ considered. We denote by $\bigcirc_I \phi$ or $N_I \phi$ (pronounced "next in $I$, $\phi$") the formula $\bot\, U_I\, \phi$. Similarly, we denote by $\ominus_I \phi$ (pronounced "previously in $I$, $\phi$") the formula $\bot\, S_I\, \phi$. The following discussion about the next operator also holds for the previously operator, by exchanging the past and the future. When this formula is evaluated over a timed word $\gamma : T \to A$, it means that both: at the next time in the domain of definition $T$, the formula $\phi$ will hold; Operators defined from basic MTL operators: and furthermore, the distance between this next time and the current time belongs to $I$. In particular, this next time exists, and thus the current time is not the end of the word. When this formula is evaluated over a signal $\gamma$, the notion of a next time does not make sense. Instead, "next" means "immediately after". More precisely, $\gamma, t \models \bigcirc_I \phi$ means: $I$ contains an interval of the form $(0, \epsilon)$, and for each $t' \in (t, t + \epsilon)$, $\gamma, t' \models \phi$. Other operators We now consider operators which are not similar to any standard LTL operator. Operators defined from basic MTL operators: Fall and Rise We denote by $\uparrow\phi$ (pronounced "rise $\phi$") a formula which holds when $\phi$ becomes true. More precisely, either $\phi$ did not hold in the immediate past and holds at this time, or it does not hold now and holds in the immediate future. Formally, $\uparrow\phi$ is defined as $(\phi \wedge (\neg\phi\, S\, \top)) \vee (\neg\phi \wedge (\phi\, U\, \top))$. Over timed words, this formula always holds. Indeed, $\phi\, U\, \top$ and $\neg\phi\, S\, \top$ always hold, so the formula is equivalent to $\phi \vee \neg\phi$, hence true. Operators defined from basic MTL operators: By symmetry, we denote by $\downarrow\phi$ (pronounced "fall $\phi$") a formula which holds when $\phi$ becomes false. It is thus defined as $(\neg\phi \wedge (\phi\, S\, \top)) \vee (\phi \wedge (\neg\phi\, U\, \top))$. History and Prophecy We now introduce the prophecy operator, denoted $\triangleright$. We denote by $\triangleright_I \phi$ the formula $\neg\phi\, U_I\, \phi$. This formula asserts that there exists a first moment in the future at which $\phi$ holds, and that the time to wait for this first moment belongs to $I$. We now consider this formula over timed words and over signals, starting with timed words. Assume that $I = |a, b|'$, where $|$ and $|'$ each represent either an open or a closed bound. Let $\gamma$ be a timed word and $t$ a time in its domain of definition. Over timed words, $\gamma, t \models \triangleright_I \phi$ holds if and only if $\gamma, t \models \Box_{]0,b[\,\setminus I}\,\neg\phi \wedge \Diamond_I \phi$ also holds.
That is, this formula simply asserts that, in the future, $\phi$ should not hold until the interval $t + I$ is reached, and furthermore that $\phi$ should hold sometime in the interval $t + I$. Indeed, given any time $t'' \in t + I$ such that $\gamma, t'' \models \phi$ holds, there exist only finitely many times $t' \in t + I$ with $t' < t''$ and $\gamma, t' \models \phi$. Thus, there necessarily exists a smallest such $t''$. Let us now consider signals. The equivalence mentioned above no longer holds over signals. This is due to the fact that, using the variables introduced above, there may exist an infinite number of correct values for $t'$, since the domain of definition of a signal is continuous. Thus, the formula $\triangleright_I \phi$ also ensures that the first interval in which $\phi$ holds is closed on the left. Operators defined from basic MTL operators: By temporal symmetry, we define the history operator, denoted $\triangleleft$. We define $\triangleleft_I \phi$ as $\neg\phi\, S_I\, \phi$. This formula asserts that there exists a last moment in the past at which $\phi$ held, and that the time since this moment belongs to $I$. Non-strict operators The semantics of the until and since operators introduced above do not consider the current time. That is, in order for $\phi_1\, U\, \phi_2$ to hold at some time $t$, neither $\phi_1$ nor $\phi_2$ has to hold at time $t$. This is not always wanted: for example, in the sentence "there is no bug until the system is turned off", it may actually be wanted that there is no bug at the current time. Thus, we introduce another until operator, called the non-strict until, denoted $\bar{U}$, which considers the current time. Operators defined from basic MTL operators: We denote by $\phi_1\, \bar{U}_I\, \phi_2$ and $\phi_1\, \bar{S}_I\, \phi_2$ either: the formulas $\phi_2 \vee (\phi_1 \wedge (\phi_1\, U_I\, \phi_2))$ and $\phi_2 \vee (\phi_1 \wedge (\phi_1\, S_I\, \phi_2))$ if $0 \in I$; or the formulas $\phi_1 \wedge (\phi_1\, U_I\, \phi_2)$ and $\phi_1 \wedge (\phi_1\, S_I\, \phi_2)$ otherwise. For any of the operators $O$ introduced above, we denote by $\bar{O}$ the formula in which non-strict untils and sinces are used. For example, $\bar\Diamond p$ is an abbreviation for $\top\, \bar{U}\, p$. Strict operators cannot be defined using non-strict operators. That is, there is no formula equivalent to $\bigcirc_I p$ which uses only non-strict operators: this formula is defined as $\bot\, U_I\, p$, and it could never hold at a time $t$ if $\bot$ were required to hold at time $t$. Example: We now give examples of MTL formulas; some more examples can be found in articles on fragments of MTL, such as metric interval temporal logic. $\Box(p \implies \Diamond_{\{1\}} q)$ states that each letter $p$ is followed exactly one time unit later by a letter $q$. $\Box(p \implies \neg\Diamond_{\{1\}} p)$ states that no two occurrences of $p$ can occur at exactly one time unit from each other. Comparison with LTL: A standard (untimed) infinite word $w = a_0, a_1, \dots$ is a function from $\mathbb{N}$ to $A$. We can consider such a word using the set of times $T = \mathbb{N}$ and the function $\gamma(i) = a_i$. In this case, for $\phi$ an arbitrary LTL formula, $w, i \models \phi$ if and only if $\gamma, i \models \phi$, where $\phi$ is considered as an MTL formula with non-strict operators and $[0,\infty)$ subscripts. In this sense, MTL is an extension of LTL. For this reason, a formula using only non-strict operators with the $[0,\infty)$ subscript is called an LTL formula: such a formula has the same semantics over untimed words as its LTL counterpart. Algorithmic complexity: The satisfiability of ECL over signals is EXPSPACE-complete. Fragments of MTL: We now consider some fragments of MTL. MITL An important subset of MTL is Metric Interval Temporal Logic (MITL). This is defined similarly to MTL, with the restriction that the sets $I$ used in $U_I$ and $S_I$ are intervals which are not singletons and whose bounds are natural numbers or infinity. Some other subsets of MITL are defined in the article MITL. Future Fragments Future-MTL was already introduced above.
Both over timed words and over signals, it is less expressive than full-MTL.: 3  Event-Clock Temporal Logic The fragment Event-Clock Temporal Logic of MTL, denoted EventClockTL or ECL, allows only the following operators: the Boolean operators and, or, not; the untimed until and since operators; and the timed prophecy and history operators. Over signals, ECL is as expressive as MITL and as MITL0. The equivalence between the two last logics is explained in the article MITL0. We sketch the equivalence of those logics with ECL. Fragments of MTL: If $I$ is not a singleton and $\phi$ is a MITL formula, $\triangleright_I \phi$ is by definition a MITL formula. If $I = \{i\}$ is a singleton, then $\triangleright_I \phi$ is equivalent to $\Box_{]0,i[}\,\neg\phi \wedge \Diamond_{]0,i]}\,\phi$, which is a MITL formula. Reciprocally, for $\psi$ an ECL formula and $I$ an interval whose lower bound is 0, $\Box_I \psi$ is equivalent to the ECL formula $\neg\triangleright_I\neg\psi$. The satisfiability of ECL over signals is PSPACE-complete. Fragments of MTL: Positive normal form An MTL formula in positive normal form is defined almost as any MTL formula, with the two following changes: the operators release and back to are introduced into the logical language and are no longer considered to be notations for other formulas; and negations can only be applied to letters. Any MTL formula is equivalent to a formula in positive normal form. This can be shown by an easy induction on formulas. For example, the formula $\neg(\phi\, U_S\, \psi)$ is equivalent to the formula $(\neg\phi)\, R_S\, (\neg\psi)$. Similarly, conjunctions and disjunctions can be handled using De Morgan's laws. Strictly speaking, the set of formulas in positive normal form is not a fragment of MTL.
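As an added worked illustration of the first example formula above, consider the following timed word (the word itself is hypothetical, not taken from the original article):

```latex
% The timed word gamma satisfies the first example formula:
\[
  \gamma = (p,0)\,(q,1)\,(p,2.5)\,(q,3.5)
  \;\models\;
  \Box\bigl(p \Rightarrow \Diamond_{\{1\}}\, q\bigr)
\]
% Each p (at times 0 and 2.5) is followed by a q exactly one time unit
% later (at times 1 and 3.5). By contrast, the timed word (p,0)(q,1.5)
% violates the formula, since no q occurs at time exactly 1.
```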
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interactive television (narrative technique)** Interactive television (narrative technique): Interactive television or interactive TV, sometimes also called pseudo-interactive television to distinguish it from technologically enabled interactive television, is a term used to refer to television programs in which it is pretended that the characters and the viewing audience can interact, while in actuality they cannot. This narrative technique is often used in children's television. It is a simulated form of audience participation. When employed, characters will often break the fourth wall and ask the viewers to give them advice or the solution to a problem. Characters typically provide a short period of time for the viewers to react, and then proceed as though the viewers have given them the correct answer. Examples and history: Winky Dink and You Airing from 1953 to 1957, the Winky Dink and You program was perhaps the first interactive TV show; Microsoft mogul Bill Gates praised it as "the first interactive TV show". Its central gimmick was the use of a "magic drawing screen": a piece of vinyl plastic that stuck to the television screen via static electricity. A kit containing the screen and various Winky Dink crayons could be purchased for 50 cents. At a climactic scene in every Winky Dink short film, Winky would arrive on a scene that contained a connect-the-dots picture that could be navigated only with the help of viewers. Winky Dink then would prompt the children at home to complete the picture, and the finished result would help him continue the story. Examples included drawing a bridge to cross a river, using an axe to chop down a tree, or creating a cage to trap a dangerous lion. Another use of the interactive screen was to decode messages: an image would be displayed, showing only the vertical lines of the letters of the secret message; viewers would then quickly trace onto their magic screen, and a second image would display the horizontal lines, completing the text. A final use of the screen was to create the outline of a character with whom Jack Barry would have a conversation, which would seem meaningless to viewers without the screen, further encouraging its purchase. Examples and history: Blue's Clues Premiering in 1996, Blue's Clues was perhaps the most influential interactive TV show. It used pauses that were "long enough to give the youngest time to think, short enough for the oldest not to get bored". The length of the pauses, which was estimated from formative research, gave children enough time to process the information and solve the problem. After pausing, child voice-overs provided the answers, so that the answers reached children who had not come up with the solution, and viewer participation was encouraged. Researcher Alisha M. Crawley and her colleagues stated that although earlier programs sometimes invited overt audience participation, Blue's Clues was "unique in making overt involvement a systematic research-based design element". In 2004, Daniel Anderson said that Blue's Clues "raised the bar" for educational television; he and Variety reported that audience participation became an important part of other educational preschool TV programs such as Dora the Explorer and Sesame Street.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CNMa** CNMa: CNMamide (CNMa) is a cyclic neuropeptide identified by computational analysis of Drosophila melanogaster protein sequences and named after its C-terminal ending motif. A gene encoding CNMa was found in most arthropods, and comparison among the precursor sequences of several representative species revealed high conservation, particularly in the region of the predicted mature peptide. Two conserved cysteine residues enveloping four amino acids form a disulfide bond and were shown to be important for binding of the peptide to its receptor. Expression of CNMa was confirmed in the larval and adult brain of D. melanogaster, but the function of the peptide has not yet been elucidated. Sequences: CNMa is cleaved from a larger protein to form a mature peptide at two flanking dibasic (K or R) cleavage sites. The sequences of the final peptides are: DROME: Gln-Tyr-Met-Ser-Pro-Cys-His-Phe-Lys-Ile-Cys-Asn-Met-amide APIME: Thr-Met-Ile-Ser-Tyr-Met-Thr-Leu-Cys-His-Phe-Lys-Ile-Cys-Asn-Met-amide DAPPU: Asp-Ser-Tyr-Leu-Ser-Met-Cys-His-Phe-Lys-Leu-Cys-Asn-Leu-amide The overall sequence motif, in ProSite format, is [LPM]-C-[HI]-F-K-[IL]-C-N-[ML]-G-[RK](2). Some species encode two forms of CNMa via alternative splicing. Receptor: The receptor for CNMa (CG33696) is a G protein-coupled receptor. Phylogenetic analysis identified two separate clades of CNMaRs in arthropod species, but many taxa retain only one. The existence of two paralogous CNMaRs suggests that CNMaR has additional ligands in some insect species. This assumption is also supported by the absence of the gene for CNMa in the genomes of Lepidopteran species (such as Bombyx mori and Danaus plexippus) that retain the CNMaR.
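As an illustration of how the ProSite motif above can be used, the following Python sketch converts it to a regular expression and scans a precursor fragment. The fragment is an assumed example (the DROME mature peptide followed by an amidation glycine and a dibasic cleavage site), not a sequence quoted from the text:

```python
import re

# ProSite motif quoted above.
PROSITE = "[LPM]-C-[HI]-F-K-[IL]-C-N-[ML]-G-[RK](2)"

def prosite_to_regex(pattern: str) -> str:
    """Convert a simple ProSite pattern to a Python regex:
    drop the '-' separators and turn '(n)' repeats into '{n}'."""
    regex = pattern.replace("-", "")
    return re.sub(r"\((\d+)\)", r"{\1}", regex)

motif = re.compile(prosite_to_regex(PROSITE))

# Hypothetical precursor fragment: QYMSPCHFKICNM (mature peptide)
# + G (amidation signal) + RR (dibasic cleavage site).
fragment = "QYMSPCHFKICNMGRR"
m = motif.search(fragment)
print(m.group() if m else "no match")  # -> PCHFKICNMGRR
```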
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Passive smoking** Passive smoking: Passive smoking is the inhalation of tobacco smoke, commonly called secondhand smoke (SHS) or environmental tobacco smoke (ETS), by persons other than the active smoker. It occurs when tobacco smoke diffuses into the surrounding atmosphere as an aerosol pollutant, which leads to its inhalation by nearby bystanders within the same environment. Exposure to secondhand tobacco smoke causes many of the same diseases caused by active tobacco smoking, although to a lower prevalence due to the reduced concentration of smoke that enters the airway. The health risks of secondhand smoke are a matter of scientific consensus, and have been a major motivation for smoke-free laws in workplaces and indoor venues, including restaurants, bars and night clubs, as well as some open public spaces. Concerns around secondhand smoke have played a central role in the debate over the harms and regulation of tobacco products. Since the early 1970s, the tobacco industry has viewed public concern over secondhand smoke as a serious threat to its business interests. Harm to bystanders was perceived as a motivator for stricter regulation of tobacco products. Despite the industry's awareness of the harms of secondhand smoke as early as the 1980s, the tobacco industry coordinated a scientific controversy with the purpose of stopping regulation of their products.: 1242 Terminology: As of 2003, "secondhand smoke" was the term most used to refer to other people's smoke in the English-language media. Other terms used include "environmental tobacco smoke", while "involuntary smoking" and "passive smoking" are used to refer to exposure to secondhand smoke. The term "environmental tobacco smoke" can be traced back to a 1974 industry-sponsored meeting held in Bermuda, while the term "passive smoking" was first used in the title of a scientific paper in 1970. The Surgeon General of the United States prefers to use the phrase "secondhand smoke" rather than "environmental tobacco smoke", stating that "The descriptor 'secondhand' captures the involuntary nature of the exposure, while 'environmental' does not.": 9  Most researchers consider the term "passive smoking" to be synonymous with "secondhand smoke". In contrast, a 2011 commentary in Environmental Health Perspectives argued that research into "thirdhand smoke" renders it inappropriate to refer to passive smoking with the term "secondhand smoke", which the authors stated constitutes a pars pro toto. The term "sidestream smoke" is sometimes used to refer to smoke which goes into the air directly from a burning cigarette, cigar, or pipe, while "mainstream smoke" refers to smoke that a smoker exhales. Effects: Secondhand smoke causes many of the same diseases as direct smoking, including cardiovascular diseases, lung cancer, and respiratory diseases. These include: Cancer: General: overall increased risk; reviewing the evidence accumulated on a worldwide basis, the International Agency for Research on Cancer concluded in 2004 that "Involuntary smoking (exposure to secondhand or 'environmental' tobacco smoke) is carcinogenic to humans." The Centers for Disease Control and Prevention reports that about 70 chemicals present in secondhand smoke are carcinogenic. Effects: Lung cancer: Passive smoking is a risk factor for lung cancer. In the United States, secondhand smoke is estimated to cause more than 7,000 deaths from lung cancer a year among non-smokers. A quarter of all cases occur in people who have never smoked.
Effects: Breast cancer: The California Environmental Protection Agency concluded in 2005 that passive smoking increases the risk of breast cancer in younger, primarily premenopausal females by 70%, and the US Surgeon General has concluded that the evidence is "suggestive", but still insufficient to assert such a causal relationship. In contrast, the International Agency for Research on Cancer concluded in 2004 that there was "no support for a causal relation between involuntary exposure to tobacco smoke and breast cancer in never-smokers." A 2015 meta-analysis found that the evidence that passive smoking moderately increased the risk of breast cancer had become "more substantial than a few years ago". Effects: Cervical cancer: A 2015 overview of systematic reviews found that exposure to secondhand smoke increased the risk of cervical cancer. Bladder cancer: A 2016 systematic review and meta-analysis found that secondhand smoke exposure was associated with a significant increase in the risk of bladder cancer. Circulatory system: risk of heart disease and reduced heart rate variability. Epidemiological studies have shown that both active and passive cigarette smoking increase the risk of atherosclerosis. Passive smoking is strongly associated with an increased risk of stroke, and this increased risk is disproportionately high at low levels of exposure. Lung problems: Risk of asthma. Risk of chronic obstructive pulmonary disease (COPD). According to a 2015 review, passive smoking may increase the risk of tuberculosis infection and accelerate the progression of the disease, but the evidence remains weak. The majority of studies on the association between secondhand smoke exposure and sinusitis have found a significant association between the two. Cognitive impairment and dementia: Exposure to secondhand smoke may increase the risk of cognitive impairment and dementia in adults 50 and over. Children exposed to secondhand smoke show reduced vocabulary and reasoning skills when compared with non-exposed children, as well as more general cognitive and intellectual deficits. Mental health: Exposure to secondhand smoke is associated with an increased risk of depressive symptoms. During pregnancy: Miscarriage: a 2014 meta-analysis found that maternal secondhand smoke exposure increased the risk of miscarriage by 11%. Low birth weight, part B, ch. 3. Premature birth, part B, ch. 3 (evidence of the causal link is described only as "suggestive" by the US Surgeon General in his 2006 report); laws limiting smoking decrease premature births. Stillbirth and congenital malformations in children: recent studies comparing females exposed to secondhand smoke and non-exposed females demonstrate that females exposed while pregnant have higher risks of delivering a child with congenital abnormalities, longer lengths, smaller head circumferences, and neural tube defects. General: Worsening of asthma, allergies, and other conditions. A 2014 systematic review and meta-analysis found that passive smoking was associated with a slightly increased risk of allergic diseases among children and adolescents; the evidence for an association was weaker for adults. Type 2 diabetes: it remains unclear whether the association between passive smoking and diabetes is causal. Risk of carrying Neisseria meningitidis or Streptococcus pneumoniae. A possible increased risk of periodontitis. Effects: Overall increased risk of death in both adults, where it was estimated to kill 53,000 nonsmokers per year in the U.S. in 1991, and in children.
The World Health Organization states that passive smoking causes about 600,000 deaths a year, and about 1% of the global burden of disease. As of 2017, passive smoking causes about 900,000 deaths a year, which is about 1/8 of all deaths caused by smoking. Effects: Skin conditions: A 2016 systematic review and meta-analysis found that passive smoking was associated with a higher rate of atopic dermatitis. Risk to children: Sudden infant death syndrome (SIDS). In his 2006 report, the US Surgeon General concludes: "The evidence is sufficient to infer a causal relationship between exposure to secondhand smoke and sudden infant death syndrome." Secondhand smoking has been estimated to be associated with 430 SIDS deaths in the United States annually. Asthma. Secondhand smoke exposure is also associated with an almost doubled risk of hospitalization for asthma exacerbation among children with asthma. Effects: Lung infections, also including more severe illness with bronchiolitis and bronchitis, and worse outcome, as well as increased risk of developing tuberculosis if exposed to a carrier. In the United States, it is estimated that secondhand smoke has been associated with between 150,000 and 300,000 lower respiratory tract infections in infants and children under 18 months of age, resulting in between 7,500 and 15,000 hospitalizations each year. Effects: Impaired respiratory function and slowed lung growth. Allergies. Maternal passive smoking increases the risk of non-syndromic orofacial clefts by 50% among their children. Learning difficulties, developmental delays, executive function problems, and neurobehavioral effects; animal models suggest a role for nicotine and carbon monoxide in neurocognitive problems. An increase in tooth decay (as well as related salivary biomarkers) has been associated with passive smoking in children. Increased risk of middle ear infections. Invasive meningococcal disease. Anesthesia complications and some negative surgical outcomes. Sleep disordered breathing: Most studies have found a significant association between passive smoking and sleep disordered breathing in children, but further studies are needed to determine whether this association is causal. Adverse effects on the cardiovascular system of children. Evidence: Epidemiological studies show that non-smokers exposed to secondhand smoke are at risk for many of the health problems associated with direct smoking. Evidence: In 1992, a review estimated that secondhand smoke exposure was responsible for 35,000 to 40,000 deaths per year in the United States in the early 1980s. The absolute risk increase of heart disease due to ETS was 2.2%, while the attributable risk percent was 23%. A 1997 meta-analysis found that secondhand smoke exposure increased the risk of heart disease by a quarter, and two 1999 meta-analyses reached similar conclusions. Evidence shows that inhaled sidestream smoke, the main component of secondhand smoke, is about four times more toxic than mainstream smoke. This fact has been known to the tobacco industry since the 1980s, though it kept its findings secret. Some scientists believe that the risk of passive smoking, in particular the risk of developing coronary heart diseases, may have been substantially underestimated. In 1997, a meta-analysis on the relationship between secondhand smoke exposure and lung cancer concluded that such exposure caused lung cancer. The increase in risk was estimated to be 24% among non-smokers who lived with a smoker.
In 2000, Copas and Shi reported that there was clear evidence of publication bias in the studies included in this meta-analysis. They further concluded that after correcting for publication bias, and assuming that 40% of all studies are unpublished, this increased risk decreased from 24% to 15%. This conclusion has been challenged on the basis that the assumption that 40% of all studies are unpublished was "extreme". In 2006, Takagi et al. reanalyzed the data from this meta-analysis to account for publication bias and estimated that the relative risk of lung cancer among those exposed to secondhand smoke was 1.19, slightly lower than the original estimate. A 2000 meta-analysis found a relative risk of 1.48 for lung cancer among men exposed to secondhand smoke, and a relative risk of 1.16 among those exposed to it at work. The following year, another meta-analysis confirmed the finding of an increased risk of lung cancer among women with spousal exposure to secondhand smoke, reporting a relative risk of 1.29 for women exposed to secondhand smoke from their spouses. A 2014 meta-analysis noted that "the association between exposure to secondhand smoke and lung cancer risk is well established." A minority of epidemiologists have found it hard to understand how secondhand smoke, which is more diluted than actively inhaled smoke, could have an effect that is such a large fraction of the added risk of coronary heart disease among active smokers. One proposed explanation is that secondhand smoke is not simply a diluted version of "mainstream" smoke, but has a different composition, with more toxic substances per gram of total particulate matter. Passive smoking appears to be capable of precipitating the acute manifestations of cardiovascular disease (atherothrombosis) and may also have a negative impact on the outcome of patients who have acute coronary syndromes. In 2004, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) reviewed all significant published evidence related to tobacco smoking and cancer. It concluded: These meta-analyses show that there is a statistically significant and consistent association between lung cancer risk in spouses of smokers and exposure to second-hand tobacco smoke from the spouse who smokes. The excess risk is of the order of 20% for women and 30% for men and remains after controlling for some potential sources of bias and confounding. Evidence: Subsequent meta-analyses have confirmed these findings. The National Asthma Council of Australia cites studies showing that secondhand smoke is probably the most important indoor pollutant, especially around young children: Smoking by either parent, particularly by the mother, increases the risk of asthma in children. The outlook for early childhood asthma is less favourable in smoking households. Children with asthma who are exposed to smoking in the home generally have more severe disease. Many adults with asthma identify ETS as a trigger for their symptoms. Evidence: Doctor-diagnosed asthma is more common among non-smoking adults exposed to ETS than among those not exposed. Among people with asthma, higher ETS exposure is associated with a greater risk of severe attacks. In France, exposure to secondhand smoke has been estimated to cause between 3,000 and 5,000 premature deaths per year, with the larger figure cited by Prime Minister Dominique de Villepin during his announcement of a nationwide smoke-free law: "That makes more than 13 deaths a day.
It is an unacceptable reality in our country in terms of public health." There is good observational evidence that smoke-free legislation reduces the number of hospital admissions for heart disease. Evidence: Exposure and risk levels: The International Agency for Research on Cancer of the World Health Organization concluded in 2004 that there was sufficient evidence that secondhand smoke caused cancer in humans. Those who work in environments where smoke is not regulated are at higher risk. Workers particularly at risk of exposure include those in installation repair and maintenance, construction and extraction, and transportation. Much research has come from studies of nonsmokers who are married to a smoker. The US Surgeon General, in his 2006 report, estimated that living or working in a place where smoking is permitted increases the non-smoker's risk of developing heart disease by 25–30% and lung cancer by 20–30%. Similarly, children who are exposed to environmental tobacco smoke are shown to experience a range of adverse effects and a higher risk of becoming smokers later in life. The WHO has identified reduction of exposure to environmental tobacco smoke as a key element of actions to encourage healthy child development. The US Centers for Disease Control and Prevention monitors the extent of, and trends in, exposure to environmental tobacco smoke by measuring serum cotinine in national health surveys. The prevalence of secondhand smoke exposure among U.S. nonsmokers declined from 87.5% in 1988 to 25.2% in 2014. However, nearly half of blacks and the poor were still exposed in 2014. Evidence: Interventions to reduce environmental tobacco smoke: A systematic review compared smoking control programmes and their effects on smoke exposure in children. The review distinguishes between community-based, ill-child, and healthy-child settings, and the most common types of interventions were counselling or brief advice during clinical visits. The review did not find superior outcomes for any intervention, and the authors caution that evidence from adult settings may not generalise well to children. Evidence: Biomarkers: Environmental tobacco smoke can be evaluated either by directly measuring tobacco smoke pollutants found in the air or by using biomarkers, an indirect measure of exposure. Carbon monoxide monitored through breath, nicotine, cotinine, thiocyanates, and proteins are the most specific biological markers of tobacco smoke exposure. Biochemical tests are a much more reliable biomarker of secondhand smoke exposure than surveys. Certain groups of people are reluctant to disclose their smoking status and exposure to tobacco smoke, especially pregnant women and parents of young children, because their smoking is socially unacceptable. Also, it may be difficult for individuals to recall their exposure to tobacco smoke. A 2007 study in the journal Addictive Behaviors found a positive correlation between secondhand tobacco smoke exposure and concentrations of nicotine and/or biomarkers of nicotine in the body. Significant biological levels of nicotine from secondhand smoke exposure were equivalent to nicotine levels from active smoking, and to levels associated with behaviour changes due to nicotine consumption. Evidence: Cotinine: Cotinine, the metabolite of nicotine, is a biomarker of secondhand smoke exposure. Typically, cotinine is measured in the blood, saliva, and urine. Hair analysis has recently become a new, noninvasive measurement technique.
Cotinine accumulates in hair during hair growth, which results in a measure of long-term, cumulative exposure to tobacco smoke. Urinary cotinine levels have been a reliable biomarker of tobacco exposure and have been used as a reference in many epidemiological studies. However, cotinine levels found in the urine reflect exposure only over the preceding 48 hours. Cotinine levels in keratinized tissues such as hair and nails reflect tobacco exposure over the previous three months and are a more reliable biomarker. Evidence: Carbon monoxide (CO): Carbon monoxide monitored via breath is also a reliable biomarker of secondhand smoke exposure, as well as of tobacco use. With high sensitivity and specificity, it not only provides an accurate measure, but the test is also non-invasive, highly reproducible, and low in cost. Breath CO monitoring measures the concentration of CO in an exhalation in parts per million, and this can be directly correlated to the blood CO concentration (carboxyhemoglobin). Breath CO monitors can also be used by emergency services to identify patients who are suspected of having CO poisoning. Pathophysiology: A 2004 study by the International Agency for Research on Cancer of the World Health Organization concluded that non-smokers are exposed to the same carcinogens as active smokers. Sidestream smoke contains more than 4,000 chemicals, including 69 known carcinogens. Of special concern are polynuclear aromatic hydrocarbons, tobacco-specific N-nitrosamines, and aromatic amines, such as 4-aminobiphenyl, all known to be highly carcinogenic. Mainstream smoke, sidestream smoke, and secondhand smoke contain largely the same components; however, the concentrations vary depending on the type of smoke. Several well-established carcinogens have been shown by the tobacco companies' own research to be present at higher concentrations in sidestream smoke than in mainstream smoke. Secondhand smoke has been shown to produce more particulate-matter (PM) pollution than an idling low-emission diesel engine. In an experiment conducted by the Italian National Cancer Institute, three cigarettes were left smoldering, one after the other, in a 60 m³ garage with limited air exchange. The cigarettes produced PM pollution exceeding outdoor limits, as well as PM concentrations up to 10-fold that of the idling engine. Secondhand tobacco smoke exposure has immediate and substantial effects on blood and blood vessels in a way that increases the risk of a heart attack, particularly in people already at risk. Exposure to tobacco smoke for 30 minutes significantly reduces coronary flow velocity reserve in healthy nonsmokers. Secondhand smoke is also associated with impaired vasodilation among adult nonsmokers. Secondhand smoke exposure also affects platelet function, vascular endothelium, and myocardial exercise tolerance at levels commonly found in the workplace. Pulmonary emphysema can be induced in rats through acute exposure to sidestream tobacco smoke (30 cigarettes per day) over a period of 45 days. Degranulation of mast cells contributing to lung damage has also been observed. The term "third-hand smoke" was recently coined to identify the residual tobacco smoke contamination that remains after the cigarette is extinguished and secondhand smoke has cleared from the air. Preliminary research suggests that by-products of third-hand smoke may pose a health risk, though the magnitude of risk, if any, remains unknown. In October 2011, it was reported that Christus St.
Frances Cabrini Hospital in Alexandria, Louisiana, would seek to eliminate third-hand smoke beginning in July 2012, and that employees whose clothing smelled of smoke would not be allowed to work. This prohibition was enacted because third-hand smoke poses a special danger for the developing brains of infants and small children. In 2008, there were more than 161,000 deaths attributed to lung cancer in the United States. Of these deaths, an estimated 10% to 15% were caused by factors other than first-hand smoking, equivalent to 16,000 to 24,000 deaths annually. Slightly more than half of the lung cancer deaths caused by factors other than first-hand smoking were found in nonsmokers. Lung cancer in non-smokers may thus be considered one of the most common causes of cancer mortality in the United States. Clinical epidemiology has identified the primary factors tied to lung cancer in non-smokers as exposure to secondhand tobacco smoke, carcinogens such as radon, and other indoor air pollutants. Opinion of public health authorities: There is widespread scientific consensus that exposure to secondhand smoke is harmful. The link between passive smoking and health risks is accepted by every major medical and scientific organisation, including: the World Health Organization; the U.S. National Institutes of Health; the Centers for Disease Control; the United States Surgeon General; the U.S. National Cancer Institute; the United States Environmental Protection Agency; the California Environmental Protection Agency; the American Heart Association, American Lung Association, and American Cancer Society; the American Medical Association; the American Academy of Pediatrics; the Australian National Health and Medical Research Council; and the United Kingdom Scientific Committee on Tobacco and Health. Public opinion: Recent major surveys conducted by the U.S. National Cancer Institute and Centers for Disease Control have found widespread public awareness that secondhand smoke is harmful. In both 1992 and 2000 surveys, more than 80% of respondents agreed with the statement that secondhand smoke was harmful. A 2001 study found that 95% of adults agreed that secondhand smoke was harmful to children, and 96% considered tobacco-industry claims that secondhand smoke was not harmful to be untruthful. A 2007 Gallup poll found that 56% of respondents felt that secondhand smoke was "very harmful", a number that has held relatively steady since 1997. Another 29% believed that secondhand smoke is "somewhat harmful"; 10% answered "not too harmful", while 5% said "not at all harmful". Controversy over harm: As part of its attempt to prevent or delay tighter regulation of smoking, the tobacco industry funded a number of scientific studies and, where the results cast doubt on the risks associated with secondhand smoke, sought wide publicity for those results. The industry also funded libertarian and conservative think tanks, such as the Cato Institute in the United States and the Institute of Public Affairs in Australia, which criticised both scientific research on passive smoking and policy proposals to restrict smoking. New Scientist and the European Journal of Public Health have identified these industry-wide coordinated activities as one of the earliest expressions of corporate denialism. Further, they state that the disinformation spread by the tobacco industry has created a tobacco denialism movement, sharing many characteristics of other forms of denialism, such as HIV-AIDS denialism.
Controversy over harm: Industry-funded studies and critiques: Enstrom and Kabat: A 2003 study by James Enstrom and Geoffrey Kabat, published in the British Medical Journal, argued that the harms of passive smoking had been overstated. Their analysis reported no statistically significant relationship between passive smoking and lung cancer, coronary heart disease (CHD), or chronic obstructive pulmonary disease, though the accompanying editorial noted that "they may overemphasise the negative nature of their findings." This paper was widely promoted by the tobacco industry as evidence that the harms of passive smoking were unproven. The American Cancer Society (ACS), whose database Enstrom and Kabat used to compile their data, criticized the paper as "neither reliable nor independent", stating that scientists at the ACS had repeatedly pointed out serious flaws in Enstrom and Kabat's methodology prior to publication. Notably, the study had failed to identify a comparison group of "unexposed" persons. Enstrom's ties to the tobacco industry also drew scrutiny; in a 1997 letter to Philip Morris, Enstrom requested a "substantial research commitment... in order for me to effectively compete against the large mountain of epidemiologic data and opinions that already exist regarding the health effects of ETS and active smoking." In a US racketeering lawsuit against tobacco companies, the Enstrom and Kabat paper was cited by the US District Court as "a prime example of how nine tobacco companies engaged in criminal racketeering and fraud to hide the dangers of tobacco smoke." The Court found that the study had been funded and managed by the Center for Indoor Air Research, a tobacco industry front group tasked with "offsetting" damaging studies on passive smoking, as well as by Philip Morris, which stated that Enstrom's work was "clearly litigation-oriented". A 2005 paper in Tobacco Control argued that the disclosure section in the Enstrom and Kabat BMJ paper, although it met the journal's requirements, "does not reveal the full extent of the relationship the authors had with the tobacco industry." In 2006, Enstrom and Kabat published a meta-analysis of studies regarding passive smoking and coronary heart disease in which they reported a very weak association between passive smoking and heart disease mortality. They concluded that exposure to secondhand smoke increased the risk of death from CHD by only 5%, although this analysis has been criticized for including two previous industry-funded studies that suffered from widespread exposure misclassification. Controversy over harm: Gori: Gio Batta Gori, a tobacco industry spokesman and consultant and an expert on risk utility and scientific research, wrote in the libertarian Cato Institute's magazine Regulation that "...of the 75 published studies of ETS and lung cancer, some 70% did not report statistically significant differences of risk and are moot. Roughly 17% claim an increased risk and 13% imply a reduction of risk." Milloy: Steven Milloy, the "junk science" commentator for Fox News and a former Philip Morris consultant, claimed that "of the 19 studies" on passive smoking "only 8— slightly more than 42%— reported statistically significant increases in heart disease incidence." Another component of criticism cited by Milloy focused on relative risk and epidemiological practices in studies of passive smoking.
Milloy, who has a master's degree from the Johns Hopkins School of Hygiene and Public Health, argued that studies yielding relative risks of less than 2 were meaningless "junk science". This approach to epidemiological analysis was criticized in the American Journal of Public Health: A major component of the industry attack was the mounting of a campaign to establish a "bar" for "sound science" that could not be fully met by most individual investigations, leaving studies that did not meet the criteria to be dismissed as "junk science." The tobacco industry and affiliated scientists also put forward a set of "Good Epidemiology Practices", which would have had the practical effect of obscuring the link between secondhand smoke and lung cancer; the privately stated goal of these standards was to "impede adverse legislation". However, this effort was largely abandoned when it became clear that no independent epidemiological organization would agree to the standards proposed by Philip Morris et al. Controversy over harm: Levois and Layard: In 1995, Levois and Layard, both tobacco industry consultants, published two analyses in the journal Regulatory Toxicology and Pharmacology regarding the association between spousal exposure to secondhand smoke and heart disease. Both of these papers reported no association between secondhand smoke and heart disease. These analyses have been criticized for failing to distinguish between current and former smokers, despite the fact that former smokers, unlike current ones, are not at a significantly increased risk of heart disease. Controversy over harm: World Health Organization controversy: A 1998 study by the International Agency for Research on Cancer (IARC) on environmental tobacco smoke (ETS) found "weak evidence of a dose–response relationship between risk of lung cancer and exposure to spousal and workplace ETS." In March 1998, before the study was published, reports appeared in the media alleging that the IARC and the World Health Organization (WHO) were suppressing information. The reports, appearing in the British Sunday Telegraph and The Economist, among other sources, alleged that the WHO withheld from publication its own report, which supposedly failed to prove an association between passive smoking and a number of diseases (lung cancer in particular). Controversy over harm: In response, the WHO issued a press release stating that the results of the study had been "completely misrepresented" in the popular press and were in fact very much in line with similar studies demonstrating the harms of passive smoking. The study was published in the Journal of the National Cancer Institute in October of the same year; its authors concluded that they had found "no association between childhood exposure to ETS and lung cancer risk" but "did find weak evidence of a dose–response relationship between risk of lung cancer and exposure to spousal and workplace ETS." An accompanying editorial summarized: When all the evidence, including the important new data reported in this issue of the Journal, is assessed, the inescapable scientific conclusion is that ETS is a low-level lung carcinogen.
Controversy over harm: With the release of formerly classified tobacco industry documents through the Tobacco Master Settlement Agreement, it was found (by Elisa Ong and Stanton Glantz) that the controversy over the WHO's alleged suppression of data had been engineered by Philip Morris, British American Tobacco, and other tobacco companies in an effort to discredit scientific findings which would harm their business interests. A WHO inquiry, conducted after the release of the tobacco-industry documents, found that this controversy was generated by the tobacco industry as part of its larger campaign to cut the WHO's budget, distort the results of scientific studies on passive smoking, and discredit the WHO as an institution. This campaign was carried out using a network of ostensibly independent front organizations and international and scientific experts with hidden financial ties to the industry. Controversy over harm: EPA lawsuit: In 1993, the United States Environmental Protection Agency (EPA) issued a report estimating that 3,000 lung-cancer-related deaths in the United States were caused by passive smoking annually. Philip Morris, R.J. Reynolds Tobacco Company, and groups representing growers, distributors, and marketers of tobacco took legal action, claiming that the EPA had manipulated this study and ignored accepted scientific and statistical practices. Controversy over harm: The United States District Court for the Middle District of North Carolina ruled in favor of the tobacco industry in 1998, finding that the EPA had failed to follow proper scientific and epidemiologic practices and had "cherry picked" evidence to support conclusions to which it had committed in advance. The court stated in part, "EPA publicly committed to a conclusion before research had begun… adjusted established procedure and scientific norms to validate the Agency's public conclusion... In conducting the ETS Risk Assessment, disregarded information and made findings on selective information; did not disseminate significant epidemiologic information; deviated from its Risk Assessment Guidelines; failed to disclose important findings and reasoning…" In 2002, the EPA successfully appealed this decision to the United States Court of Appeals for the Fourth Circuit. The EPA's appeal was upheld on the preliminary grounds that its report had no regulatory weight, and the earlier finding was vacated. In 1998, the U.S. Department of Health and Human Services, through the publication by its National Toxicology Program of the 9th Report on Carcinogens, listed environmental tobacco smoke among the known carcinogens, observing of the EPA assessment that "The individual studies were carefully summarized and evaluated." Tobacco-industry funding of research: The tobacco industry's role in funding scientific research on secondhand smoke has been controversial. A review of published studies found that tobacco-industry affiliation was strongly correlated with findings exonerating secondhand smoke; researchers affiliated with the tobacco industry were 88 times more likely than independent researchers to conclude that secondhand smoke was not harmful. In a specific example which came to light with the release of tobacco-industry documents, Philip Morris executives successfully encouraged an author to revise his industry-funded review article to downplay the role of secondhand smoke in sudden infant death syndrome. The 2006 U.S.
Surgeon General's report criticized the tobacco industry's role in the scientific debate: The industry has funded or carried out research that has been judged to be biased, supported scientists to generate letters to editors that criticized research publications, attempted to undermine the findings of key studies, assisted in establishing a scientific society with a journal, and attempted to sustain controversy even as the scientific community reached consensus. Controversy over harm: This strategy was outlined at an international meeting of tobacco companies in 1988, at which Philip Morris proposed to set up a team of scientists, organized by company lawyers, to "carry out work on ETS to keep the controversy alive." All scientific research was subject to oversight and "filtering" by tobacco-industry lawyers: Philip Morris then expected the group of scientists to operate within the confines of decisions taken by PM scientists to determine the general direction of research, which apparently would then be 'filtered' by lawyers to eliminate areas of sensitivity. Controversy over harm: Philip Morris reported that it was putting "...vast amounts of funding into these projects... in attempting to coordinate and pay so many scientists on an international basis to keep the ETS controversy alive." Tobacco industry response: Measures to tackle secondhand smoke pose a serious economic threat to the tobacco industry, having broadened the definition of smoking beyond a personal habit to something with a social impact. In a confidential 1978 report, the tobacco industry described increasing public concerns about secondhand smoke as "the most dangerous development to the viability of the tobacco industry that has yet occurred." In United States of America v. Philip Morris et al., the District Court for the District of Columbia found that the tobacco industry "... recognized from the mid-1970s forward that the health effects of passive smoking posed a profound threat to industry viability and cigarette profits," and that the industry responded with "efforts to undermine and discredit the scientific consensus that ETS causes disease." Accordingly, the tobacco industry has developed several strategies to minimise the impact on its business: The industry has sought to position the secondhand smoke debate as essentially concerned with civil liberties and smokers' rights rather than with health, by funding groups such as FOREST. Controversy over harm: Funding bias in research; in all reviews of the effects of secondhand smoke on health published between 1980 and 1995, the only factor associated with concluding that secondhand smoke is not harmful was whether an author was affiliated with the tobacco industry. However, not all studies that failed to find evidence of harm were by industry-affiliated authors. Controversy over harm: Delaying and discrediting legitimate research (examples include the industry's attempts to discredit Takeshi Hirayama's landmark study, and to delay and discredit a major Australian report on passive smoking). Promoting "good epidemiology" and attacking so-called junk science (a term popularised by industry lobbyist Steven Milloy): attacking as flawed the methodology behind research showing health risks, and attempting to promote "sound science". Ong & Glantz (2001) cite an internal Philip Morris memo giving evidence of this as company policy. Controversy over harm: Creation of outlets for favourable research.
In 1989, the tobacco industry established the International Society of the Built Environment, which published the peer-reviewed journal Indoor and Built Environment. This journal did not require conflict-of-interest disclosures from its authors. With documents made available through the Master Settlement, it was found that the executive board of the society and the editorial board of the journal were dominated by paid tobacco-industry consultants. The journal published a large amount of material on passive smoking, much of which was "industry-positive". Citing the tobacco industry's production of biased research and efforts to undermine scientific findings, the 2006 U.S. Surgeon General's report concluded that the industry had "attempted to sustain controversy even as the scientific community reached consensus... industry documents indicate that the tobacco industry has engaged in widespread activities... that have gone beyond the bounds of accepted scientific practice." The U.S. District Court, in U.S.A. v. Philip Morris et al., found that "...despite their internal acknowledgment of the hazards of secondhand smoke, Defendants have fraudulently denied that ETS causes disease." Position of major tobacco companies: The positions of major tobacco companies on the issue of secondhand smoke are somewhat varied. In general, tobacco companies have continued to focus on questioning the methodology of studies showing that secondhand smoke is harmful. Some (such as British American Tobacco and Philip Morris) acknowledge the medical consensus that secondhand smoke carries health risks, while others continue to assert that the evidence is inconclusive. Several tobacco companies advocate the creation of smoke-free areas within public buildings as an alternative to comprehensive smoke-free laws. Controversy over harm: US racketeering lawsuit against tobacco companies: On September 22, 1999, the U.S. Department of Justice filed a racketeering lawsuit against Philip Morris and other major cigarette manufacturers. Almost 7 years later, on August 17, 2006, U.S. District Court Judge Gladys Kessler found that the Government had proven its case and that the tobacco company defendants had violated the Racketeer Influenced and Corrupt Organizations Act (RICO). In particular, Judge Kessler found that PM and other tobacco companies had: conspired to minimize, distort, and confuse the public about the health hazards of smoking; publicly denied, while internally acknowledging, that secondhand tobacco smoke is harmful to nonsmokers; and destroyed documents relevant to litigation. The ruling found that tobacco companies undertook joint efforts to undermine and discredit the scientific consensus that secondhand smoke causes disease, notably by controlling research findings via paid consultants. The ruling also concluded that tobacco companies were fraudulently continuing to deny the health effects of ETS exposure. On May 22, 2009, a three-judge panel of the U.S. Court of Appeals for the District of Columbia Circuit unanimously upheld the lower court's 2006 ruling. Smoke-free laws: As a consequence of the health risks associated with secondhand smoke, many national and local governments have outlawed smoking in indoor public places, including restaurants, cafés, and nightclubs, as well as some outdoor open areas. Ireland was the first country in the world to institute a comprehensive national ban on smoking in all indoor workplaces, on 29 March 2004. Since then, many others have followed suit.
The countries which have ratified the WHO Framework Convention on Tobacco Control (FCTC) have a legal obligation to implement effective legislation "for protection from exposure to tobacco smoke in indoor workplaces, public transport, indoor public places and, as appropriate, other public places" (Article 8 of the FCTC). The parties to the FCTC have further adopted Guidelines on the Protection from Exposure to Secondhand Smoke, which state that "effective measures to provide protection from exposure to tobacco smoke ... require the total elimination of smoking and tobacco smoke in a particular space or environment in order to create a 100% smoke-free environment." Opinion polls have shown considerable support for smoke-free laws. In June 2007, a survey of 15 countries found 80% approval for such laws. A survey in France, reputedly a nation of smokers, showed 70% support. Smoke-free laws: Effects: Smoking bans by governments result in decreased harm from secondhand smoke, including fewer admissions for acute coronary syndrome. In the first 18 months after the town of Pueblo, Colorado, enacted a smoke-free law in 2003, hospital admissions for heart attacks dropped 27%. Admissions in neighbouring towns without smoke-free laws showed no change, and the decline in heart attacks in Pueblo was attributed to the resulting reduction in secondhand smoke exposure. A 2004 smoking ban instituted in Massachusetts workplaces decreased workers' secondhand smoke exposure from 8% of workers in 2003 to 5.4% of workers in 2010. A 2016 review also found that bans and policy changes in specific locations such as hospitals or universities can lead to reduced smoking rates. In prison settings, bans might lead to reduced mortality and to lower exposure to secondhand smoke. In 2001, a systematic review for the Guide to Community Preventive Services acknowledged strong evidence of the effectiveness of smoke-free policies and restrictions in reducing exposure to secondhand smoke. A follow-up to this review examined whether smoking bans also reduced the prevalence of tobacco use, drawing on articles published up to 2005. The examined studies provided sufficient evidence that smoke-free policies reduce tobacco use among workers when implemented in worksites or by communities. While a number of studies funded by the tobacco industry have claimed a negative economic impact from smoke-free laws, no independently funded research has shown any such impact. A 2003 review reported that independently funded, methodologically sound research consistently found either no economic impact or a positive impact from smoke-free laws. Air nicotine levels were measured in Guatemalan bars and restaurants before and after a smoke-free law implemented in 2009. Nicotine concentrations decreased significantly in both the bars and the restaurants measured. Also, the employees' support for a smoke-free workplace substantially increased in the post-implementation survey compared with the pre-implementation survey. Smoke-free laws: Public opinion: Recent surveys taken by the Society for Research on Nicotine and Tobacco demonstrate supportive public attitudes towards smoke-free policies in outdoor areas. A vast majority of the public supports restricting smoking in various outdoor settings.
Respondents supported the policies for varying reasons, such as litter control, establishing positive smoke-free role models for youth, reducing youth opportunities to smoke, and avoiding exposure to secondhand smoke. Smoke-free laws: Alternative forms: Alternatives to smoke-free laws have also been proposed as a means of harm reduction, particularly in bars and restaurants. For example, critics of smoke-free laws cite studies suggesting ventilation as a means of reducing tobacco smoke pollutants and improving air quality. Ventilation has also been heavily promoted by the tobacco industry as an alternative to outright bans, via a network of ostensibly independent experts with often undisclosed ties to the industry. However, not all critics have connections to the industry. Smoke-free laws: The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) officially concluded in 2005 that while completely isolated smoking rooms do eliminate the risk to nearby non-smoking areas, smoking bans are the only means of eliminating the health risks associated with indoor exposure. It further concluded that no system of dilution or cleaning was effective at eliminating risk. The U.S. Surgeon General and the European Commission Joint Research Centre have reached similar conclusions. The implementation guidelines for the WHO Framework Convention on Tobacco Control state that engineering approaches, such as ventilation, are ineffective and do not protect against secondhand smoke exposure. However, this does not necessarily mean that such measures are useless in reducing harm, only that they fall short of the goal of reducing exposure completely to zero. Smoke-free laws: Others have suggested a system of tradable smoking pollution permits, similar to the cap-and-trade pollution permit systems used by the United States Environmental Protection Agency in recent decades to curb other types of pollution. This would guarantee that a portion of bars and restaurants in a jurisdiction will be smoke-free, while leaving the decision to the market. In animals: Multiple studies have been conducted to determine the carcinogenicity of environmental tobacco smoke to animals. These studies typically fall under the categories of simulated environmental tobacco smoke, administration of condensates of sidestream smoke, or observational studies of cancer among pets. In animals: To simulate environmental tobacco smoke, scientists expose animals to sidestream smoke (that which emanates from the cigarette's burning cone and through its paper) or a combination of mainstream and sidestream smoke. The IARC monographs conclude that mice with prolonged exposure to simulated environmental tobacco smoke (that is, six hours a day, five days a week, for five months, with a subsequent four-month interval before dissection) have significantly higher incidence and multiplicity of lung tumors than control groups. In animals: The IARC monographs concluded that sidestream smoke condensates had a significantly higher carcinogenic effect on mice than did mainstream smoke condensates. In animals: Observational studies: Secondhand smoke is popularly recognised as a risk factor for cancer in pets. A study conducted by the Tufts University School of Veterinary Medicine and the University of Massachusetts Amherst linked the occurrence of feline oral cancer to exposure to environmental tobacco smoke through an overexpression of the p53 gene.
Another study conducted at the same universities concluded that cats living with a smoker were more likely to get feline lymphoma; the risk increased with the duration of exposure to secondhand smoke and the number of smokers in the household. A study by Colorado State University researchers, looking at cases of canine lung cancer, was generally inconclusive, though the authors reported a weak relation for lung cancer in dogs exposed to environmental tobacco smoke. The number of smokers within the home, the number of packs smoked in the home per day, and the amount of time that the dog spent within the home had no effect on the dog's risk for lung cancer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Heun's method** Heun's method: In mathematics and computational science, Heun's method may refer to the improved or modified Euler's method (that is, the explicit trapezoidal rule), or a similar two-stage Runge–Kutta method. It is named after Karl Heun and is a numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. Both variants can be seen as extensions of the Euler method into two-stage second-order Runge–Kutta methods. Heun's method: The procedure for calculating the numerical solution to the initial value problem

$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0,$$

by way of Heun's method is to first calculate the intermediate value $\tilde{y}_{i+1}$ and then the final approximation $y_{i+1}$ at the next integration point:

$$\tilde{y}_{i+1} = y_i + h f(t_i, y_i),$$
$$y_{i+1} = y_i + \frac{h}{2}\left[f(t_i, y_i) + f(t_{i+1}, \tilde{y}_{i+1})\right],$$

where $h$ is the step size and $t_{i+1} = t_i + h$. Description: Euler's method is used as the foundation for Heun's method. Euler's method uses the line tangent to the function at the beginning of the interval as an estimate of the slope of the function over the interval, assuming that if the step size is small, the error will be small. However, even when extremely small step sizes are used, over a large number of steps the error starts to accumulate and the estimate diverges from the actual functional value. Description: Where the solution curve is concave up, its tangent line will underestimate the vertical coordinate of the next point, and vice versa for a concave-down solution. The ideal prediction line would hit the curve at its next predicted point. In reality, there is no way to know whether the solution is concave up or concave down, and hence whether the next predicted point will overestimate or underestimate its vertical value. The concavity of the curve cannot be guaranteed to remain consistent either, and the prediction may overestimate and underestimate at different points in the domain of the solution. Description: Heun's method addresses this problem by considering the interval spanned by the tangent line segment as a whole. Taking a concave-up example, the left tangent prediction line underestimates the slope of the curve for the entire width of the interval from the current point to the next predicted point. If the tangent line at the right end point is considered (which can be estimated using Euler's method), it has the opposite problem. Description: The points along the tangent line of the left end point have vertical coordinates which all underestimate those that lie on the solution curve, including the right end point of the interval under consideration. The solution is to make the slope greater by some amount. Heun's method considers the tangent lines to the solution curve at both ends of the interval, one of which overestimates and one of which underestimates the ideal vertical coordinates. A prediction line constructed from the right end point's tangent slope alone (approximated using Euler's method) and passed through the left end point of the interval is evidently too steep to serve as an ideal prediction line, and overestimates the ideal point. Therefore, the ideal point lies approximately halfway between the erroneous overestimation and underestimation, at the average of the two slopes. Description: Euler's method is used to roughly estimate the coordinates of the next point in the solution, and with this knowledge, the original estimate is re-predicted or corrected.
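Before the derivation below, a quick worked step shows the predictor-corrector pattern in action; the test problem is an added illustration, not from the source text. Take $y' = y$ with $y(0) = 1$, exact solution $e^t$, and step size $h = 0.1$:

$$\tilde{y}_1 = y_0 + h f(t_0, y_0) = 1 + 0.1 \cdot 1 = 1.1 \quad \text{(Euler predictor)},$$
$$y_1 = y_0 + \frac{h}{2}\left[f(t_0, y_0) + f(t_1, \tilde{y}_1)\right] = 1 + 0.05\,(1 + 1.1) = 1.105 \quad \text{(trapezoidal corrector)}.$$

The exact value is $e^{0.1} \approx 1.10517$, so the corrected step is off by about $1.7 \times 10^{-4}$, while the uncorrected Euler step $1.1$ is off by about $5.2 \times 10^{-3}$.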
Assuming that the quantity $f(x, y)$ on the right-hand side of the equation can be thought of as the slope of the solution sought at any point $(x, y)$, this can be combined with the Euler estimate of the next point to give the slope of the tangent line at the right end point. Next, the average of both slopes is used to find the corrected coordinates of the right end of the interval. Derivation:

$$\text{Slope}_{\text{left}} = f(x_i, y_i),$$
$$\text{Slope}_{\text{right}} = f\big(x_i + h,\ y_i + h f(x_i, y_i)\big),$$
$$\text{Slope}_{\text{ideal}} = \tfrac{1}{2}\left(\text{Slope}_{\text{left}} + \text{Slope}_{\text{right}}\right).$$

Using the principle that the slope of a line equates to rise over run, the coordinates at the end of the interval can be found using the following formulas:

$$\text{Slope}_{\text{ideal}} = \frac{\Delta y}{h}, \qquad \Delta y = h \cdot \text{Slope}_{\text{ideal}},$$
$$x_{i+1} = x_i + h, \qquad y_{i+1} = y_i + \Delta y = y_i + \frac{h}{2}\left(\text{Slope}_{\text{left}} + \text{Slope}_{\text{right}}\right),$$
$$y_{i+1} = y_i + \frac{h}{2}\Big(f(x_i, y_i) + f\big(x_i + h,\ y_i + h f(x_i, y_i)\big)\Big).$$

The accuracy of the Euler method improves only linearly as the step size is decreased, whereas Heun's method improves accuracy quadratically. The scheme can be compared with the implicit trapezoidal method, but with $f(t_{i+1}, y_{i+1})$ replaced by $f(t_{i+1}, \tilde{y}_{i+1})$ in order to make it explicit. $\tilde{y}_{i+1}$ is the result of one step of Euler's method on the same initial value problem. So, Heun's method is a predictor-corrector method with the forward Euler method as predictor and the trapezoidal rule as corrector. Runge–Kutta method: The improved Euler's method is a two-stage Runge–Kutta method, and can be written using the Butcher tableau (after John C. Butcher):

$$\begin{array}{c|cc} 0 & & \\ 1 & 1 & \\ \hline & \tfrac{1}{2} & \tfrac{1}{2} \end{array}$$

The other method referred to as Heun's method (also known as Ralston's method) has the Butcher tableau:

$$\begin{array}{c|cc} 0 & & \\ \tfrac{2}{3} & \tfrac{2}{3} & \\ \hline & \tfrac{1}{4} & \tfrac{3}{4} \end{array}$$

This method minimizes the truncation error.
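A minimal runnable sketch of the scheme as a predictor-corrector loop; the function names and the convergence check are added here and are not part of the source text. On $y' = y$, halving the step size should cut the error by roughly a factor of four, reflecting the quadratic accuracy described above.

```python
import math

def heun(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y) from (t0, y0) with fixed step h using Heun's method."""
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)                  # slope at the left end point
        y_tilde = y + h * k1          # predictor: one forward Euler step
        k2 = f(t + h, y_tilde)        # slope at the predicted right end point
        y += (h / 2.0) * (k1 + k2)    # corrector: trapezoidal average of the two slopes
        t += h
    return y

if __name__ == "__main__":
    f = lambda t, y: y                # test problem y' = y, y(0) = 1, exact solution e^t
    for h in (0.1, 0.05):
        n = round(1.0 / h)            # integrate to t = 1
        err = abs(heun(f, 0.0, 1.0, h, n) - math.e)
        print(f"h = {h:5.3f}  error = {err:.2e}")
    # Expected: the error at h = 0.05 is roughly a quarter of the error at h = 0.1.
```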
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Emphasis (typography)** Emphasis (typography): In typography, emphasis is the strengthening of words in a text with a font in a different style from the rest of the text, to highlight them. It is the equivalent of prosodic stress in speech. Methods and use: The most common methods in Western typography fall under the general technique of emphasis through a change or modification of font: italics, boldface and SMALL CAPS. Methods and use: A means of emphasis that does not have much effect on blackness is the use of italics, where the text is written in a script style, or oblique, where the vertical orientation of each letter of the text is slanted to the left or right. With one or the other of these techniques (usually only one is available for any typeface), words can be highlighted without making them stand out much from the rest of the text (inconspicuous stressing). This is used for marking passages that have a different context, such as book titles, words from foreign languages, or internal dialogue. Methods and use: By contrast, a bold font weight makes the letters of a text thicker than the surrounding text. Bold strongly stands out from regular text, and is often used to highlight keywords important to the text's content. For example, printed dictionaries often use boldface for their keywords, and the names of entries can conventionally be marked in bold. Small capitals (THUS) are also used for emphasis, especially for the first line of a section, sometimes accompanied by or instead of a drop cap, or for personal names, as in bibliographies. Methods and use: If the text body is typeset in a serif typeface, it is also possible to highlight words by setting them in a sans serif face. This practice is often considered archaic in Latin script, and on computers is complicated since fonts are no longer issued by foundries with a standard baseline, so switching font may distort line spacing. It is still possible using some font super families, which come with matching serif and sans-serif variants, though these are not generally supplied with modern computers as system fonts. In Japanese typography, due to the reduced legibility of heavier Minchō type, the practice remains common. Methods and use: Of these methods, italics, small capitals and capitalization are the oldest, with bold type and sans-serif typefaces not arriving until the nineteenth century. Capitalization: The house styles of many publishers in the United States use all-caps text for: chapter and section headings; newspaper headlines; publication titles; warning messages; and words of important meaning. Capitalization is used much less frequently by British publishers, and usually only for book titles. All-uppercase letters are a common substitute form of emphasis where the medium lacks support for boldface, such as old typewriters, plain-text email, SMS and other text-messaging systems. Methods and use: Socially, the use of all-caps text in languages using the Roman script has become an indicator of shouting when quoting speech. It was also often used in the past by American lawyers to flag important points in a legal text. Coinciding with the era of typewriter use, the practice became unnecessary with the advent of computerized text formatting, although it is still found on occasion in documents created by older lawyers. Methods and use: Letter-spacing: Another means of emphasis is to increase the spacing between the letters, rather than making them darker, while still achieving a distinction in blackness.
This results in an effect opposite to boldface: the emphasized text becomes lighter than its environment. This is often used in blackletter typesetting and typewriter manuscripts, but is by no means restricted to those situations. This letter-spacing is referred to as sperren in German, which could be translated as "spacing out": in typesetting with letters of lead, the spacing would be achieved by inserting additional non-printing slices of metal between the types, usually about an eighth of an em wide. On typewriters, a full space was used between the letters of an emphasized word, and also one before and one after the word. Methods and use: For blackletter type, boldface was not feasible, since the letters were very dark in their standard format, and on (most) typewriters only a single typeface was available. Although letter-spacing was common, different typefaces (e.g. Schwabacher inside Fraktur), underlining, or colored ink (usually red) were sometimes used instead. Methods and use: Since blackletter type remained in use in German-speaking parts of Europe much longer than anywhere else, the custom of letter-spacing is sometimes seen as specific to German, although it has been used with other languages, including English. Especially in German, however, this kind of emphasis may also be used within modern type, e.g. where italics already serve another semantic purpose (as in linguistics) and where no further means of emphasis (e.g. small caps) are easily available or feasible. Its professional use today is very limited in German. This use of spacing is also traditionally found in Polish. German orthographic (or rather typographic) rules require that the mandatory blackletter ligatures are retained. That means ſt, ch, ck, and tz remain joined, just like the letter ß, whereas optional, additional ligatures like ff and ſi are broken up with a (small) space in between. Other writing systems did not develop such sophisticated rules, since letter-spacing was so uncommon in them. Methods and use: In Cyrillic typography, it also used to be common to emphasize words using letter-spaced type. This practice for Cyrillic has become obsolete with the availability of Cyrillic italic and small capital fonts. Methods and use: Underlining: Professional Western typesetting usually does not employ lines under letters for emphasis within running text. In proofreading, underlining (or underscoring) is a convention that says "set this text in italic type", traditionally used on manuscript or typescript as an instruction to the printer. Its use to add emphasis in modern documents is a deprecated practice. In web pages, hyperlinks are often displayed with underlines – to identify them as such rather than to emphasize them. Underlining is also used for secondary emphasis, i.e. marks added to a printed text by the reader. Methods and use: Overlining: In Arabic, it is traditional to emphasize text by drawing a line over the letters. This is seen in the Quran, where the word at which Sujud Tilawa is performed is overlined. Methods and use: Punctuation marks: Sometimes quotation marks are used for emphasis. However, this clashes with the general understanding of how the marks are properly used, particularly scare quotes, and can leave the reader with a different impression than intended. In Chinese, emphasis in body text is supposed to be indicated by using an "emphasis mark" (着重號/着重号), which is a dot placed under each character to be emphasized.
This is still taught in schools, but in practice it is not usually done, probably due to the difficulty of doing so in most computer software. Consequently, methods used for emphasis in Western text are often used instead, even though they are considered inappropriate for Chinese (for example, the use of underlining or setting text in oblique type). Methods and use: In Japanese texts, when katakana would be inappropriate, emphasis is indicated by "emphasis dots" (圏点 or 傍点) placed above the kanji and any accompanying furigana in horizontal writing, and to the right in vertical writing. Japanese also has an "emphasis line" (傍線) used in a similar manner, but less frequently. In Korean texts, a dot is placed above each Hangul syllable block or Hanja to be emphasized. In Armenian, the շեշտ (šešt) sign (՛) is used. Methods and use: On websites and other Internet services, as with typewriters, rich text is not always available. Asterisks are sometimes used for emphasis (as in "That was *really* bad"). Less commonly, underscores may be used, resembling underlining ("That was _really_ bad"). Periods can be used between words (as in "That. was. really. bad.") to emphasize whole sentences, mimicking when somebody slows down their speech for impact. In some cases, the engine parsing the text area will render the text and the asterisks in bold automatically after the text is submitted. Markdown is a common formalization of this concept. Methods and use: Color: Colors are important for emphasis. Important words in a text may be colored differently from others. For example, many dictionaries use a different color for headwords, and some religious texts color the words of deities red, commonly referred to as rubric. In Ethiopic script, red is used analogously to italics in Latin text. Post-print emphasis added by a reader is often done with highlighters, which add a bright background color to the usual black-on-white text. Design: Emphasis styles are separate designs in their own right. With both italics and boldface, the emphasis is correctly achieved by swapping into a different font of the same family; for example, by replacing body text in Arial with its bold or italic style. Professional typographic systems, including most modern computers, would therefore not simply tilt letters to the right to achieve italics (that is instead referred to as slanting or oblique), print them twice or darker for boldface, or scale majuscules to the height of middle-chamber minuscules (like x and o) for small caps, but instead use entirely different typefaces that achieve the effect. The letter 'w', for example, looks quite different in italic compared to upright. Design: As a result, typefaces have to be supplied at least fourfold (with computer systems, usually as four font files): as regular, bold, italic, and bold italic, to provide for all combinations. Professional typefaces sometimes offer even more variations for popular fonts, with varying degrees of blackness. Only if such fonts are not available should the effect of italic or boldface be imitated by algorithmically altering the original font. Design: The modern Latin-alphabet system of fonts appearing in two standard weights, with the styles being regular (or "Roman"), italic, bold and bold italic, is a relatively recent development, dating to the early twentieth century. Modern "Roman" type was developed around the 1470s, while italic type was developed around 1500 and was commonly used for emphasis by the early 17th century.
Bold type did not arrive until the nineteenth century, and at first fonts did not have matching bold weights; instead, a generic bold, sometimes a Clarendon or other kind of slab serif, would be swapped in. In some books printed before bold type existed, emphasis could be shown by switching to blackletter. Some font families intended for professional use in documents such as business reports may also make the bold-style numbers take up the same width as the regular (non-bold) numbers, so that a bold-style total lines up below the digits of the sum in regular style. Recommendations and requirements: Linguistics professor Larry Trask stated that "It is possible to write an entire word or phrase in capital letters in order to emphasize it", but added that "On the whole, though, it is preferable to express emphasis, not with capital letters, but with italics." Many university researchers and academic journal editors advise against using italics, or other approaches to emphasizing a word, unless essential; for example, the Modern Language Association "discourages the use of italics in academic prose to emphasize or point, because they are unnecessary—most often, the unadorned words do the job without typographic assistance". Although emphasis is useful in speech, and so has a place in informal or journalistic writing, in academic traditions it is often suggested that italics be used only where there is a danger of misunderstanding the meaning of the sentence, and even in that case that rewriting the sentence is preferable; in formal writing the reader is expected to interpret and understand the text themselves, without relying on typography to signal the author's precise intended interpretation. Italics are principally used in academic writing for texts that have been referenced, and for foreign-language words. Similarly, capitals and underlining have particular meanings, and are rarely used in formal writing for emphasis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neuropsychiatric systemic lupus erythematosus** Neuropsychiatric systemic lupus erythematosus: Neuropsychiatric systemic lupus erythematosus, or NPSLE, refers to the neurological and psychiatric manifestations of systemic lupus erythematosus. SLE is a disease in which the immune system attacks the body's own cells and tissues. It can affect various organs or systems of the body. It is estimated that over half of people with SLE have neuropsychiatric involvement. Classification: The American College of Rheumatology (ACR) has outlined 19 syndromes that are seen in NPSLE. These syndromes encompass disorders of the central and peripheral nervous systems. Each of the 19 syndromes is also a stand-alone diagnosis, which can occur with or without lupus. Classification: The majority of cases involve the central nervous system (CNS), which consists of the brain and spinal cord. The most common CNS syndromes are headache and mood disorder. Though neuropsychiatric lupus is sometimes referred to as "CNS lupus", it can also affect the peripheral nervous system (PNS). Between 10% and 15% of people with NPSLE have PNS involvement. Mononeuropathy and polyneuropathy are the most common PNS syndromes. Classification: Other syndromes: Some neurological syndromes outside of the ACR classification may also be considered NPSLE manifestations. These include neuromyelitis optica, posterior reversible encephalopathy syndrome, small fiber neuropathy, and Lambert–Eaton myasthenic syndrome. Pathogenesis: There are several possible mechanisms that underlie the nervous system manifestations of lupus. Specific syndromes may be vasculopathic, autoantibody-mediated, or inflammatory in nature. There is evidence that the blood–brain barrier, which protects the central nervous system, is compromised in patients with NPSLE. As a result, autoantibodies are able to infiltrate the CNS and cause damage. Diagnosis: For a diagnosis of NPSLE, it must be determined whether neuropsychiatric symptoms are indeed caused by SLE, whether they constitute a separate comorbid condition, or whether they are an adverse effect of disease treatment. In addition, the onset of neuropsychiatric symptoms may occur prior to the diagnosis of lupus. Due to the lack of uniform diagnostic standards, statistics about NPSLE vary widely. Tests which aid in diagnosis include MRI, electrophysiological studies, psychiatric evaluation, and autoantibody tests. Treatment: Management of neuropsychiatric lupus is similar to the management of neuropsychiatric disease in patients without lupus. Treatment depends on the underlying causes of a patient's disease, and may include immunosuppressants, anticoagulants, and symptomatic therapy.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**DiFMDA** DiFMDA: Difluoromethylenedioxyamphetamine (DiFMDA) is a substituted derivative of 3,4-methylenedioxyamphetamine (MDA), which was developed by Daniel Trachsel and coworkers, along with the corresponding fluorinated derivatives of MDMA, MDEA, BDB and MBDB, with the aim of finding a non-neurotoxic drug able to be used as a less harmful substitute for entactogenic drugs such as MDMA. Since a major route of the normal metabolism of these compounds is scission of the methylenedioxy ring, producing neurotoxic metabolites such as alpha-methyldopamine, it was hoped that the difluoromethylenedioxy bioisostere would show increased metabolic stability and less toxicity.These compounds have not yet been tested in animals to verify whether they show similar pharmacological activity to the non-fluorinated parent compounds, although in vitro binding studies show DiFMDA to have a SERT affinity in between that of MDA and MDMA. It is also now generally accepted that MDMA neurotoxicity results from a variety of different causes and is not solely due to accumulation of alpha-methyldopamine, making it unclear how much less neurotoxic DiFMDA and related drugs would be in practice.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GLCCI1** GLCCI1: Glucocorticoid-induced transcript 1 protein is a protein that in humans is encoded by the GLCCI1 gene.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chromosome engineering** Chromosome engineering: Chromosome engineering is "the controlled generation of chromosomal deletions, inversions, or translocations with defined endpoints." By combining chromosomal translocation, chromosomal inversion, and chromosomal deletion, chromosome engineering has been shown to identify the underlying genes that cause certain diseases in mice. In coming years, it is very likely that chromosomal engineering will be able to do the same identification for diseases in humans, as well as all other organisms. Experiments of Chromosome Engineering: In an experiment pertaining to chromosome engineering that was conducted in 2006, it was found that chromosome engineering can be effectively used as a method of identifying the causes of genetic disorders such as the contiguous gene and aneuploidy syndromes. The experiment was conducted by infecting mice with the human disease ES to assess the effectiveness of chromosome engineering in identifying the genes underlying such diseases. After much experimenting, it was found that manipulating chromosomes, or chromosome engineering, is an excellent and efficient method of determining underlying genes in genetic disorders and diseases. In the future, chromosome engineering will be applied in efforts to address more common disorders such as asthma, diabetes, and cancer. If it can be recognized by the medical community as effective and safe, it should be able to be used regularly in the near future.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Silver tetrafluoroborate** Silver tetrafluoroborate: Silver tetrafluoroborate is an inorganic compound with the chemical formula AgBF4. It is a white solid that dissolves in polar organic solvents as well as water. In its solid state, the Ag+ centers are bound to fluoride. Preparation: Silver tetrafluoroborate is prepared by the reaction between boron trifluoride and silver oxide in the presence of benzene. Laboratory uses: In the inorganic and organometallic chemistry laboratory, silver tetrafluoroborate, sometimes referred to as "silver BF-4", is a useful reagent. In dichloromethane, silver tetrafluoroborate is a moderately strong oxidant. Similar to silver hexafluorophosphate, it is commonly used to replace halide anions or ligands with the weakly coordinating tetrafluoroborate anions. The abstraction of the halide is driven by the precipitation of the appropriate silver halide.
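The preparation and the halide-abstraction use described above can be written schematically. The balanced equation below is a sketch only: it assumes boron trioxide as the boron-containing by-product, which the text does not state, and M–Cl stands for a generic metal chloride complex rather than any specific substrate.

```latex
% Sketch: preparation (B2O3 by-product assumed) and a generic halide abstraction
\[
8\,\mathrm{BF_3} \;+\; 3\,\mathrm{Ag_2O} \;\longrightarrow\; 6\,\mathrm{AgBF_4} \;+\; \mathrm{B_2O_3}
\]
\[
\mathrm{AgBF_4} \;+\; \mathrm{M\!-\!Cl} \;\longrightarrow\; [\mathrm{M}]^{+}[\mathrm{BF_4}]^{-} \;+\; \mathrm{AgCl}\downarrow
\]
```
In the second equation, the driving force is the precipitation of insoluble AgCl, as noted above.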
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Maxalding** Maxalding: Maxalding is an exercise system of muscle control using a form of isometrics. Books and pamphlets teaching the system were first published in 1909 and continued until Maxalding ceased to trade in the late 1970s. System: The Maxalding system, like the "dynamic tension" system of Charles Atlas and those of others, did not use weights. Where the other systems concentrated on muscle development, Maxalding went one stage further and taught muscle control. The methods taught had been around since the early 1900s and indeed many of the photos used in the instruction leaflets, even those sold in the 1970s, date from that period. Some exercises of Maxalding, involving isolating the muscles of the abdominal region, are similar to the yoga exercise of nauli. Founders: Maxalding (originally called Maxaldo) was a name created from those of the founders, Maxick (Max Sick) and Monte Saldo (Alfred Montague Woollaston), and first came into being in 1909. Founders: Maxick was an Austrian strongman. He was born in Bregenz in Austria on 28 June 1882, and moved to Britain in 1909, where he met Saldo. He died in Buenos Aires on 10 May 1961 after a wrist-wrestling match. The Maxalding principles are based mainly on exercises and techniques which appeared in his book Muscle Control, written in 1911. Saldo was apprenticed to Eugen Sandow in 1897. He took his stage name at the turn of the 20th century while touring Europe demonstrating strength and gymnastics. He was also an artist's model and in 1914 published a book called How to Pose. He provided the financial means of promoting Maxick's methods and starting the Maxalding postal course. His son F. H. C. Woollaston took over, using the professional name of Courtlandt Saldo. He carried on the business until sometime in the late 1970s. Courtlandt Saldo died in 1983 at the age of 72.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Finished goods** Finished goods: Finished goods are goods that have completed the manufacturing process but have not yet been sold or distributed to the end user. Manufacturing: Manufacturing has three classes of inventory: raw material, work in process, and finished goods. A good purchased as a "raw material" goes into the manufacture of a product. A good only partially completed during the manufacturing process is called "work in process". When the good is completed as to manufacturing but not yet sold or distributed to the end user, it is called a "finished good". This is the last stage in the processing of goods; the goods are ready to be consumed or distributed. Manufacturing: No further processing of the goods is required by the seller after this stage, though there may be instances in which a seller's finished goods become a buyer's raw materials. Finished goods is a relative term: in a supply chain management flow, the finished goods of a supplier can constitute the raw material of a buyer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Announcement chime** Announcement chime: An announcement chime is a sound similar to chimes, which is played before or after a manual or automated message to let people know when the announcement begins or ends. Description: Announcement chimes are sounds of chimes or similar instruments, which are played before or after a manual or automated announced message. The sound may be created by various methods, including striking chimes, playing an analog recording, or sounding a digital chime. Used before an announcement, the chime alerts listeners that a statement is forthcoming. When played after an announcement, the sound of the chime denotes the end of the statement. Use in transport: Air At airports, chimes (usually three or four-tone) play before an automated announcement to inform people of the next flight to depart. On aeroplanes, a two-tone chime plays before a safety announcement (e.g., for fastening seatbelts) or a crew call. Rail United Kingdom At most stations managed by the train operating companies (TOCs) Great Western Railway and South Western Railway, as well as at most stations now managed by the Elizabeth line that were formerly managed by GWR, and at Carlisle, Llandaf, Shrewsbury and Atherstone, a two-tone chime is played before any automated announcement is made. Use in transport: In the latter part of the British Rail era, stations with manual announcements were fitted with three or four-tone bell chimes. They still remain this way at Bolton. Even after KeTech automated announcements were fitted across the north in the 2010s and across the west in the 2000s, some stations retained the chime (such as East Didsbury and Morpeth), and Plymouth kept its chime until around 2013, when it was replaced by an Atos installation; however, all chimes at Northern stations with automated announcements had been removed by 2019. Use in transport: On most trains, a short two-tone chime is played before an announcement.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Arborite** Arborite: Arborite is the leading Canadian manufacturer of high-pressure decorative plastic laminates (HPL). Best known as a counter top surfacing material, this laminate is a durable decorative veneer applied to cabinetry, furniture, and other horizontal and vertical surfaces. The original Arborite material was developed in 1942 by the Howard Smith Paper Company as an innovative way to utilize waste by-products of the Canadian papermaking industry, and to this day any laminate used for the same purpose is commonly referred to in Canada by the trade name Arborite. What is laminate? (HPL): Laminate is a material made by bonding layers of material or materials. Laminate, in technical terminology, is referred to as High Pressure Laminate (HPL) or even more accurately as High Pressure Decorative Plastic Laminate since there are also industrial high pressure laminates which are not decorative.The decorative high pressure laminates in our homes and offices, etc. consist of sheets of paper that have been coated or impregnated with two types of resin, stacked on top of each other and placed into a press where they are cooked at a minimum of 265 degrees F. at a pressure of approximately 1,200 pounds per square inch (psi) for about an hour. Under this pressure and heat the resins flow, transforming the stack and the resins into a single sheet of homogeneous composite material. "Plastic" laminate is a misleading term because the material is approximately 70% paper and 30% polymer (phenolic and melamine) resin. History: The Howard Smith Paper Company was founded in 1912 by C. Howard Smith (1873 – 1931) in an abandoned cotton mill in Beauharnois, Québec, Canada on the shores of Lake St. Louis. By 1914, this one-machine mill was in high production, churning out rag paper. In 1916, Howard Smith acquired the newsprint business of Edwin Crabtree in Crabtree Mills, Quebec, and by 1919, they had also purchased the Toronto Paper Company Limited of Cornwall, Ontario. Over the next 20 years, Howard Smith would acquire an additional four paper companies in various locations across Canada, and expand the operations at each of the facilities.Howard Smith Paper Company was committed to the conservation of Canada's forests and the sustainability of their source material. In 1937, for their 25th anniversary, the company published a history called "25 Years of Progress"; in it, President Harold Crabtree's mission statement states, "Our aim, primarily, is that of serving the Canadian trade with quality papers at fair prices, conserving the forest wealth of Canada, from which we draw our raw materials, not only to the end that our vast operations may be served for the immediate future, but that future generations, too, may have the same privileges and enjoyment of these forests as ourselves."Edmund Howard Smith, the son of C. Howard and Alice Young Day, followed his father in the family business. He was born and raised in Montreal, Québec and trained as an industrialist at McGill University. After graduation, he worked his way up in his father's company, from a business clerk to president of the Howard Smith Paper Company in 1946. Both Edmund Howard and his father held the position of President of the Canadian Pulp and Paper Association at various points in their careers.Edmund was convinced that waste from the paper making process could be transformed into a useful product in its own right. He began working with fellow McGill graduate Dr. 
George Tomlinson II, the chief of research and development at Howard Smith Paper; his father, Dr. George Tomlinson Sr, had previously been in the same position at Howard Smith and while there had patented the ingenious Tomlinson recovery boiler. For four years, these two young men spearheaded experiments to develop a process for separating and extracting lignin from kraft black liquor, a by-product of paper making; in 1946 Smith and Tomlinson were awarded a patent for the resulting material, which they named "Arborite". Though it is not recorded how they arrived at that name, likely is because ‘arbor’ is the Latin word for tree, and the fact that the parent company was a paper manufacturer concerned about the welfare of the Canadian forests from which their trees were sourced. History: Production presses were established and a company was formed. Edmund Howard Smith went on to become Arborite's first president, with George Tomlinson Jr. as his chief engineer. Arborite was the first commercial decorative melamine laminate. The manufacturing facility was opened in 1948 in LaSalle, Quebec, where it still is to this day. By early 1949, Arborite was being advertised as the "only all-Canadian" laminate on the market, available in 35 "solid colors, as well as a series of five colored fabric designs, two tones of "marble" and a wide variety of simulated wood grains." Residential: Arborite was originally marketed not to design or construction firms, but directly to housewives looking for a "modern surfacing material". One of the new material's first marketing platforms was the popular Chatelaine ladies’ home magazine, where it was touted as being "tested and approved by the Chatelaine Institute".By the early 1950s, Arborite was available in more than 60 colors and patterns, mostly solid colors and wood grains. In 1954, Western Woods built 10 trend houses across Canada, representing the epitome in modern design and materials. Arborite was chosen for kitchen and bathroom surfaces in many of these model homes. 1958 saw the introduction of new lines of pastel Glitter and Metallic Tone laminates, closely followed by Stardust (a random breakup pattern) and Fantasy (abstract mid-century stars). Woodgrain patterns at this time included Sliced Walnut, Fawn English Walnut and Blond Persian Walnut.By 1962, Arborite had branched into the United Kingdom. This is from Design magazine in 1965: "Arborite decorative laminates only appeared in Britain in 1960, but already they have radically effected the decorative laminates scene here. The company established its name with its woodgrains and marbles, and has recently launched the most comprehensive plain colour range on the British market, as well as issuing an architectural manual."Then, in 1963, came one of the most pivotal changes in the history of the company. Howard Smith Paper Mills Ltd. was acquired by Domtar Inc, one of the largest manufacturing enterprises in Canada at the time; Arborite was now a division of Domtar Construction materials. Residential: In the merger, Dr. George Tomlinson II was retained by Domtar as the Director of Research. He went on to have an over-thirty-year history with the company, and won the TAPPI (Technical Association of the Pulp and Paper Industry) medal in 1969 for his outstanding contributions to lignin chemistry and pulping technology. By the 1980s, Dr. 
Tomlinson was semi-retired but was still an advisor/consultant for Domtar, publishing articles and books about the effects of acid rain on the forests of North America—still concerned about environmental responsibility in the paper industry. Commercial: The 1970s saw a shift in marketing, from the residential market to a more corporate focus. Arborite was advertised as "An excellent choice for architects, designers and furniture manufacturers alike." Over 140 patterns and colors were available at this point, including East Indian Teak and Black Leather, with "new Metallic and Fabric laminates". Many of the 52 solid colors could be seen as epitomizing the decade, from Bitter Lemon and Dusty Olive, to Pale Avocado and Minton Blue. Application locations of Arborite laminate included McGill University, high-end hotels, corporate offices and private, architect-designed residences, and Canadian Pacific rail car interiors.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Myriad Colors Phantom World** Myriad Colors Phantom World: Myriad Colors Phantom World (無彩限のファントム・ワールド, Musaigen no Fantomu Wārudo) is a Japanese fantasy light novel series written by Sōichirō Hatano and illustrated by Shirabi. The series, set in the future Kamigyo ward, an accidental release of an unstable virus caused an epidemic that alters the human brain, leading to the creation of the beings called “phantoms”. An anime television adaptation by Kyoto Animation aired from January to March 2016. It is licensed by Crunchyroll outside of Asia. Plot: In the near future, the accidental release of an experimental virus causes an outbreak that changes the brain chemistry of every person in the world, allowing them to perceive extra-dimensional beings called "Phantoms". In addition, some children born after the outbreak have developed special powers that allow them to battle and seal Phantoms. Even though the vast majority of phantoms are harmless, many of these gifted children are placed in clubs, schools, and organizations dedicated to dealing with Phantoms that prove to be nuisances or threats to humanity. The story revolves around Haruhiko Ichijo and his friends in the Phantom-hunting Club of Hosea Academy, a private school for children with special abilities to seal Phantoms, and their everyday life and struggles, dealing with Phantoms. Characters: Haruhiko Ichijo (一条 晴彦, Ichijō Haruhiko) Voiced by: Hiro Shimono, Omi Minami (child) (Japanese); Micah Solusod, Apphia Yu (child) (English) A first-year high school student and the main character. His special ability is called The Book of Thoth, which consists of sealing or summoning Phantoms by drawing them in a sketchbook. Due to the library in his house, he has a lot of knowledge about numerous different subjects, but many times his facts are seen as useless by his teammates. His parents are separated but he hopes for them to be reunited as a family again. In the anime, most of the episodes begin with Haruhiko giving a brief explanation about certain topics.Mai Kawakami (川神 舞, Kawakami Mai) Voiced by: Sumire Uesaka (Japanese); Amber Lee Connors (English) A second-year high school student, Haruhiko's senior and original partner. She specializes in close combat. Her special ability is called Spirit of the Five Elements, which consists of channeling elemental powers through her body, such as fire from her heart, earth from her spleen, metal from her lungs, water from her kidneys, and wood from her armpit. Mai has been known by Haruhiko to be hot-headed and violent ever since she was a child. She seems to harbor feelings for Haruhiko.Reina Izumi (和泉 玲奈, Izumi Reina) Voiced by: Saori Hayami (Japanese); Natalie Hoover (English) A first-year high school student and a new member of Haruhiko's team. Her special ability is called Phantom Eater, an unusual power that allows her to seal Phantoms by consuming them. She has also been trained in basic self-defense, as seen when she assaults Haruhiko when he touches her. She has a large appetite and constantly struggles with getting enough money to eat, despite coming from a wealthy household. She has an older sister who ran away from home due to their parents being very strict as well as having a strong dislike towards Phantoms. She strongly admires Mai who she claims to resemble her older sister. She later develops feelings for Haruhiko.Koito Minase (水無瀬 小糸, Minase Koito) Voiced by: Maaya Uchida (Japanese); Jeannie Tirado (English) A newly transferred student who is always wearing headphones. 
Her special ability is a powerful sound attack using her voice, which can stun or seal Phantoms. This first manifested when she was in elementary school when a Phantom attacked the rabbits that she was assigned to care for in the schoolyard. She managed to seal the Phantom with her special ability, but in the process damaged a large portion of the school. This caused her friends and even her parents to fear her and she eventually developed the anti-social personality that she has today. She tends to use a lot of sugar in her drinks. She is hinted to have feelings for Haruhiko.Ruru (ルル) Voiced by: Azusa Tadokoro (Japanese); Jad Saxton (English) A friendly Phantom in the form of a small fairy. She always follows Haruhiko and enjoys making fun of him and the other characters. This character is original to the anime. Her full name is Rururaruri Rurararirararururirirari Rirararururararururararirari.Kurumi Kumamakura (熊枕 久瑠美, Kumamakura Kurumi) Voiced by: Misaki Kuno (Japanese); Tia Ballard (English) An anime-original character, she is a shy fourth-grade student from the primary school division of Hosea Academy who looks up to Haruhiko's group. She always carries a teddy bear named Albrecht (named after Albert the Bear) and has a very strong affinity with bears as almost everything associated with her has "bear" ("kuma") in its name, including the animal itself, her birthplace (Kumamoto Prefecture), her favorite food (bear claw) and even her surname (Kumamakura). Her special ability enlarges Albrecht's size considerably and allows him to move on his own and fight. Like Koito, Kurumi's ability manifested at a very young age. She's quite fond of Haruhiko.Shosuke Morohashi (諸橋 翔介, Morohashi Shōsuke) Voiced by: Daisuke Sakaguchi (Japanese); Dallas Reid (English) Haruhiko's friend and classmate who is usually envious of him because all of his teammates are beautiful girls.Arisu Himeno (姫野 アリス, Himeno Arisu) Voiced by: Kikuko Inoue (Japanese); Carli Mosier (English) Haruhiko's teacher, who's responsible for assigning jobs to students with powers in order to deal with troublesome Phantoms in exchange for a reward. Media: Novel The light novel was written by Sōichirō Hatano and illustrated by Shirabi. It was published by Kyoto Animation's novel imprint KA Esuma Bunko on 20 December 2013. The book received an honorable mention in the novel category of the fourth Kyoto Animation Award on 5 April 2013. Previous works to be featured in the awards have received anime adaptations. A second novel was released on 30 October 2015. A third novel was released on 11 February 2016. Media: Anime An anime television series aired between 7 January and 31 March 2016 on ABC Asahi, Tokyo MX, TV Aichi, and BS11. The series was directed by Tatsuya Ishihara and written by Fumihiko Shimo, with animation produced by Kyoto Animation. Kazumi Ikeda handled the series' character designs, and also served as the chief animation director. Shinpei Sawa provided the designs for the Phantoms. The series' music was composed by Effy. Additionally, Ryuuta Nakagami served as director of photography; Mikiko Watanabe was the series' art director; Kana Miyata provided the color key; Hiroshi Karata was in charge of accessories planning; and Yota Tsuruoka was the sound director. The opening theme song is "Naked Dive" by Screen Mode, while the ending theme is "Junshin Always" (純真Always, "Innocence Always") by Azusa Tadokoro. 
The anime was released on seven Blu-ray and DVD compilation volumes containing two episodes and one picture drama each between 6 April and 5 October 2016. An original video animation was bundled with the seventh volume. Funimation released the series in North America on home video, and Madman Entertainment distributes the title in Australia and New Zealand on behalf of Funimation. Reception: Jonah Welland of CBR.com writes in November 2022 that the series has received poor criticism as it was "generic". He also described that the elements of the series had felt "half-baked and rushed, lacking the usual creativity the studio is known for." This led to Kyoto Animation to produce Miss Kobayashi's Dragon Maid and Violet Evergarden.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Structural chemistry** Structural chemistry: Structural chemistry is a part of chemistry and deals with spatial structures of molecules (in the gaseous, liquid or solid state) and solids (with extended structures that cannot be subdivided into molecules). The main tasks are: the formulation of general laws for structure-property relationships; and the derivation of general rules on how the chemical and physical properties of the constituents of matter determine the resulting structures (e.g. the relationship between the electron configuration of the crystal building blocks and the symmetry of the resulting crystal lattice). For structure elucidation a range of different methods is used. One has to distinguish between methods that elucidate solely the connectivity between atoms (constitution) and those that provide precise three-dimensional information such as atom coordinates, bond lengths, bond angles and torsion angles. The latter methods include, mainly: for the gaseous state, gas electron diffraction and microwave spectroscopy; for the liquid state, NMR spectroscopy (note that obtaining precise structural information from liquids and solutions is still rather difficult compared with gases and crystalline solids); and for the solid state, X-ray, electron and neutron diffraction. To identify connectivity and the presence of functional groups, a variety of methods of molecular spectroscopy and solid-state spectroscopy can be used.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The point (ice hockey)** The point (ice hockey): The point is a term in ice hockey to indicate a position inside the opposition's blue line along the edges of the rink. Description: A player in the opponent's end zone at the junction of the blue line with the boards is said to be at the point. Usually the players at the two points are the defencemen. On the power play the players playing at these positions are always known as the points, though one of the positions is sometimes played by a forward. Description: The point's responsibilities include attempting to keep the puck in the offensive zone when the defensive team attempts to clear (see also Offside (ice hockey)), receiving a pass from the forwards to allow the play to reset, and taking slapshots at the goal, hoping to score, create a rebound or a deflection. On the power play, one of the players playing the point is typically the "quarterback" - that is, the one who controls (through passing) where the puck goes, and also takes many shots. Description: Given the difficulty of scoring directly from the point due to the distance to the goal, goals scored from the point are typically either on screens, or are tipped goals. Point and cover point: In the early years of ice hockey, the two defencemen were known as the "point" and "cover-point" players. The term the point may have been derived from that early terminology. The point played further back, while the cover-point was allowed more latitude to roam forward.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CDMF** CDMF: In cryptography, CDMF (Commercial Data Masking Facility) is an algorithm developed at IBM in 1992 to reduce the security strength of the 56-bit DES cipher to that of 40-bit encryption, at the time a requirement of U.S. restrictions on export of cryptography. Rather than a separate cipher from DES, CDMF constitutes a key generation algorithm, called key shortening. It is one of the cryptographic algorithms supported by S-HTTP. Algorithm: Like DES, CDMF accepts a 64-bit input key, but not all bits are used. The algorithm consists of the following steps: (1) Clear bits 8, 16, 24, 32, 40, 48, 56, 64 (ignoring these bits as DES does). (2) XOR the result with its encryption under DES using the key 0xC408B0540BA1E0AE. (3) Clear bits 1, 2, 3, 4, 8, 16, 17, 18, 19, 20, 24, 32, 33, 34, 35, 36, 40, 48, 49, 50, 51, 52, 56, 64. (4) Encrypt the result under DES using the key 0xEF2C041CE6382FE6. The resulting 64-bit data is to be used as a DES key. Due to step 3, a brute force attack needs to test only 2^40 possible keys.
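The four steps above can be transcribed almost directly into code. The following is a minimal sketch, not IBM's implementation: it assumes the pycryptodome library for the single-block DES encryptions and the DES convention that bit 1 is the most significant bit of the 64-bit block.

```python
# Minimal sketch of CDMF key shortening (assumptions: pycryptodome for DES,
# and bit 1 = most significant bit of the 64-bit block, as in the DES spec).
from Crypto.Cipher import DES

KEY1 = bytes.fromhex("C408B0540BA1E0AE")
KEY2 = bytes.fromhex("EF2C041CE6382FE6")

def _clear_bits(block: int, positions) -> int:
    """Zero the given 1-indexed bit positions (bit 1 = MSB of the 64-bit value)."""
    for p in positions:
        block &= ~(1 << (64 - p))
    return block

def cdmf_shorten(key64: bytes) -> bytes:
    """Shorten a 64-bit input key to a DES key with 40 bits of effective strength."""
    assert len(key64) == 8
    x = int.from_bytes(key64, "big")
    # Step 1: clear the DES parity bits 8, 16, ..., 64.
    x = _clear_bits(x, range(8, 65, 8))
    # Step 2: XOR the value with its DES encryption under the first fixed key.
    enc = DES.new(KEY1, DES.MODE_ECB).encrypt(x.to_bytes(8, "big"))
    x ^= int.from_bytes(enc, "big")
    # Step 3: clear 24 listed bit positions, leaving 40 free key bits.
    x = _clear_bits(x, [1, 2, 3, 4, 8, 16, 17, 18, 19, 20, 24, 32, 33, 34,
                        35, 36, 40, 48, 49, 50, 51, 52, 56, 64])
    # Step 4: encrypt under the second fixed key; the output is the shortened DES key.
    return DES.new(KEY2, DES.MODE_ECB).encrypt(x.to_bytes(8, "big"))

# Example: derive the masked key for an arbitrary 64-bit input key.
print(cdmf_shorten(bytes.fromhex("0123456789ABCDEF")).hex())
```
Because step 3 fixes 24 bit positions, only 40 bits of the intermediate value remain free, which is why an exhaustive search over the derived keys costs at most 2^40 trials.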
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hostal** Hostal: A hostal is a type of lodging very common in Spain and Hispanic America. Hostals tend to be cheaper than hotels. They normally have a bar, restaurant or cafeteria where drinks and food are sold to guests and locals alike. Accommodations typically include private bedrooms, and sometimes apartments, available for either short or long term rent. Linens and towels are usually provided, unless it is a long term apartment rental in which case the guest is considered a resident and does not receive cleaning and other services. Guests sometimes share a common bathroom, but a number of rooms with en suite bathrooms may also be available. Hostals are common in Spain and are also found in Mexico, Central and South America and California. They are often family-run, independent businesses, with a strong involvement with the local community. Hostal-residencias are the same as hostales, but generally without a cafetería or other place where guests can eat. Difference from hostels: Though the word hostal is similar to hostel, the two words refer to different types of accommodation. Hostels refers to properties that offer shared accommodation, typically in dormitories, while hostal refers to a type of family-run pension typically common only in Spain and a few other Spanish-speaking countries. Difference from hostels: In Mexico, hostal is just the Spanish word for hostel: A cheap hotel-like accommodation that will normally have one or two dormitory rooms with bunk beds and a few individual or shared with other rooms. They are ideal for backpackers, youth, and those with little funds for accommodations. Some regular hotels will however add the word hostal to their names to try to increase business. Difference from hostels: Hostals are classified from one to three stars, contrary from hostels, which are not classified under the star rating, and from hotels which are classified from one to five stars.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Palette (video game)** Palette (video game): Palette (パレット, Paretto) is a psychological horror adventure game that was made with RPG Tsukūru 95 by Nishida Yoshitaka (西田好孝). The game was highly acclaimed in the Fourth ASCII Entertainment Software Contest, awarded a Grand Prix of 10,000,000 yen, which resulted in remaking the game for PlayStation by Enterbrain. That version, entitled Forget me not -Palette-, saw the release on April 26, 2001 exclusively in Japan. Gameplay: In Palette, the player controls Dr. Shianosu B. Shian (シアノス・B・シアン), a renowned psychologist specializing in memory, and his patient, an amnesiac girl known only as "B.D." Communicating using the telephone in Shian's office, Shian guides B.D. as she explores a maze-like dreamscape of her traumatic memories. At first, each memory is missing crucial details, which are filled in as B.D. investigates the memory. As B.D. fills in her memories, new paths open in the dreamscape, allowing her to explore new memories. Gameplay: Some fragments of her memory are in hidden in the dreamscape and must be taken to the correct one. With each of these pieces B.D. discovers, the maximum length of a gauge on the right of the screen is increased by one. This gauge essentially represents her mental health. As she travels down paths of her memory and breaks down barriers, it decreases. If it reaches zero, she gets a painful headache and the telephone call ends. When Shian redials B.D., she must restart the journey from the first room, but the details in her memories she fills in, the fragments she collects, and the maximum length of her gauge are permanent. Gameplay: There are also circles of light in the dreamscape that recover B.D.'s gauge when she steps into them. These will usually only appear if her gauge is low enough, so in some cases the player must find a way to intentionally damage B.D. so that the circle will appear, allowing her to heal and make further progress. Some rooms also have hidden passageways that the player can discover. Gameplay: The other character, Dr. Shian, is trapped in his office, but can use his library and other objects in the room to gain new information about topics that B.D. remembers. Plot: Shian is closing down his office for the night when a mysterious figure asks for his help. When Shian refuses, they shoot through his office door. Shian complies with the figure and is instructed to help B.D. over the telephone. Plot: B.D. is initially completely incapable of remembering anything at all. She doesn't know her own name, where she lives, the names of her family, or even if she has any family. She can't even remember her own face. She slowly begins to recall some details, but they are scattered across all periods of her life, and it is difficult to connect them logically or gain much meaning from them. Plot: Each memory is related to a traumatic moment of her life, which B.D. associates with the color red. The first memories seen are the last chronologically, and appear to be of a violent murder taking place in B.D.'s home. Earlier memories depict scenes from her young childhood, then of her life with a series of caretakers. B.D. slowly begins to be able to differentiate these caretakers, but still can't remember their names or faces. One of them, a woman with a red silhouette, is present in the murder memory. Plot: Even before losing her memory, B.D. struggled with horrible loneliness. Many of her memories are scenes of her isolated and alone. B.D. 
is very introspective, and often thinks about how many of the details in her memories (an empty birdcage, a lone apple separated from the rest, a toy telephone that can't call anyone, a clothing store display of a happy family of mannequins) ironically reflect on herself. Plot: Slowly B.D. and Shian begin to piece together fragments of her traumatic past. Two families, each horribly murdered, each with only the father and youngest daughter surviving. A strange city called Zebul (ゼブル). A drug that causes amnesia, and a society dedicated to the elimination of crime. Perhaps most interesting is a medical term, "Born of Disorder", which is abbreviated to B.D. Plot: As they make progress, Shian finds books containing relevant information, books that he can't remember ever owning. Slowly, he begins to find other details from B.D.'s dreams in his office. Scratches on his wall, a newspaper from a company that doesn't exist, and a music box. The more B.D. fills in the gaps in her memory, the more real her dreams feel, and the less real Shian's office feels. At one point, Shian discovers that his telephone line has been cut, but he is still able to call and help B.D. despite this. Plot: The two finally discover one of B.D.'s happy memories for the first time. It's her favorite television show, about a genius psychologist who goes on exciting adventures and helps people recover their memories. That psychologist's name is also Shianosu B. Shian, and the two both have the same face. Plot: The mysterious figure enters Shian's office, and it is revealed that the entire room is imaginary. The whole time B.D. has been pretending to be Shian as a way to deal with her trauma. She's been pretending to call herself on her toy telephone, and the room is full of details from her dreams because it is her room. The mysterious figure is revealed to be the same woman as the red silhouette, who worked for the anti-crime society that originally took away B.D.'s memories, which she has now betrayed. That society is now hunting both of them down; the woman for her betrayal, and B.D. because they believe she is destined to be a criminal, as well as for knowing too many of the society's secrets. Rather than give herself up and either have her memories taken away again or killed, B.D. resolves to run away with the woman. B.D. recovers one last memory, a truly happy moment of being held by her parents as a baby. Plot: Shian shares some closing thoughts with the audience, saying that all memories are worth keeping. A painful memory is just as important as the color red on a palette.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Potential theory** Potential theory: In mathematics and mathematical physics, potential theory is the study of harmonic functions. The term "potential theory" was coined in 19th-century physics when it was realized that two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation. Potential theory: There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation. For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result on how the solution depends on the boundary data would be said to belong to the theory of the Laplace equation. This is not a hard and fast distinction, and in practice there is considerable overlap between the two fields, with methods and results from one being used in the other. Potential theory: Modern potential theory is also intimately connected with probability and the theory of Markov chains. In the continuous case, this is closely related to analytic theory. In the finite state space case, this connection can be introduced by introducing an electrical network on the state space, with resistance between points inversely proportional to transition probabilities and densities proportional to potentials. Even in the finite case, the analogue I-K of the Laplacian in potential theory has its own maximum principle, uniqueness principle, balance principle, and others. Symmetry: A useful starting point and organizing principle in the study of harmonic functions is a consideration of the symmetries of the Laplace equation. Although it is not a symmetry in the usual sense of the term, we can start with the observation that the Laplace equation is linear. This means that the fundamental object of study in potential theory is a linear space of functions. This observation will prove especially important when we consider function space approaches to the subject in a later section. Symmetry: As for symmetry in the usual sense of the term, we may start with the theorem that the symmetries of the n -dimensional Laplace equation are exactly the conformal symmetries of the n -dimensional Euclidean space. This fact has several implications. First of all, one can consider harmonic functions which transform under irreducible representations of the conformal group or of its subgroups (such as the group of rotations or translations). Proceeding in this fashion, one systematically obtains the solutions of the Laplace equation which arise from separation of variables such as spherical harmonic solutions and Fourier series. By taking linear superpositions of these solutions, one can produce large classes of harmonic functions which can be shown to be dense in the space of all harmonic functions under suitable topologies. Symmetry: Second, one can use conformal symmetry to understand such classical tricks and techniques for generating harmonic functions as the Kelvin transform and the method of images. 
Third, one can use conformal transforms to map harmonic functions in one domain to harmonic functions in another domain. The most common instance of such a construction is to relate harmonic functions on a disk to harmonic functions on a half-plane. Symmetry: Fourth, one can use conformal symmetry to extend harmonic functions to harmonic functions on conformally flat Riemannian manifolds. Perhaps the simplest such extension is to consider a harmonic function defined on the whole of Rn (with the possible exception of a discrete set of singular points) as a harmonic function on the n -dimensional sphere. More complicated situations can also happen. For instance, one can obtain a higher-dimensional analog of Riemann surface theory by expressing a multi-valued harmonic function as a single-valued function on a branched cover of Rn or one can regard harmonic functions which are invariant under a discrete subgroup of the conformal group as functions on a multiply connected manifold or orbifold. Two dimensions: From the fact that the group of conformal transforms is infinite-dimensional in two dimensions and finite-dimensional for more than two dimensions, one can surmise that potential theory in two dimensions is different from potential theory in other dimensions. This is correct and, in fact, when one realizes that any two-dimensional harmonic function is the real part of a complex analytic function, one sees that the subject of two-dimensional potential theory is substantially the same as that of complex analysis. For this reason, when speaking of potential theory, one focuses attention on theorems which hold in three or more dimensions. In this connection, a surprising fact is that many results and concepts originally discovered in complex analysis (such as Schwarz's theorem, Morera's theorem, the Weierstrass-Casorati theorem, Laurent series, and the classification of singularities as removable, poles and essential singularities) generalize to results on harmonic functions in any dimension. By considering which theorems of complex analysis are special cases of theorems of potential theory in any dimension, one can obtain a feel for exactly what is special about complex analysis in two dimensions and what is simply the two-dimensional instance of more general results. Local behavior: An important topic in potential theory is the study of the local behavior of harmonic functions. Perhaps the most fundamental theorem about local behavior is the regularity theorem for Laplace's equation, which states that harmonic functions are analytic. There are results which describe the local structure of level sets of harmonic functions. There is Bôcher's theorem, which characterizes the behavior of isolated singularities of positive harmonic functions. As alluded to in the last section, one can classify the isolated singularities of harmonic functions as removable singularities, poles, and essential singularities. Inequalities: A fruitful approach to the study of harmonic functions is the consideration of inequalities they satisfy. Perhaps the most basic such inequality, from which most other inequalities may be derived, is the maximum principle. Another important result is Liouville's theorem, which states the only bounded harmonic functions defined on the whole of Rn are, in fact, constant functions. In addition to these basic inequalities, one has Harnack's inequality, which states that positive harmonic functions on bounded domains are roughly constant. 
Inequalities: One important use of these inequalities is to prove convergence of families of harmonic functions or sub-harmonic functions, see Harnack's theorem. These convergence theorems are used to prove the existence of harmonic functions with particular properties. Spaces of harmonic functions: Since the Laplace equation is linear, the set of harmonic functions defined on a given domain is, in fact, a vector space. By defining suitable norms and/or inner products, one can exhibit sets of harmonic functions which form Hilbert or Banach spaces. In this fashion, one obtains such spaces as the Hardy space, Bloch space, Bergman space and Sobolev space.
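For readers who want the objects discussed above in symbols, the following is a compact, standard formulation (not tied to any particular source cited here): Laplace's equation, the mean value property, and the qualitative form of Harnack's inequality.

```latex
% Harmonic functions and two of the basic facts discussed above (standard statements)
\[
\Delta u \;=\; \sum_{i=1}^{n} \frac{\partial^2 u}{\partial x_i^2} \;=\; 0
\qquad \text{($u$ is harmonic on a domain } \Omega \subseteq \mathbb{R}^n\text{)}
\]
\[
u(x_0) \;=\; \frac{1}{|\partial B_r(x_0)|} \int_{\partial B_r(x_0)} u \, dS
\qquad \text{(mean value property, for any ball with } \overline{B_r(x_0)} \subset \Omega\text{)}
\]
\[
\sup_{K} u \;\le\; C(K,\Omega)\, \inf_{K} u
\qquad \text{(Harnack's inequality, } u > 0 \text{ harmonic, } K \subset \Omega \text{ compact, } \Omega \text{ connected)}
\]
```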
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**General surgery** General surgery: General surgery is a surgical specialty that focuses on the alimentary canal and abdominal contents including the esophagus, stomach, small intestine, large intestine, liver, pancreas, gallbladder, appendix and bile ducts, and often the thyroid gland. General surgeons also deal with diseases involving the skin, breast, soft tissue, trauma, peripheral artery disease and hernias, and perform endoscopic procedures such as gastroscopy and colonoscopy as well as laparoscopic procedures. Scope: General surgeons may sub-specialize into one or more of the following disciplines: Trauma surgery In many parts of the world including North America, Australia and the United Kingdom, the overall responsibility for trauma care falls under the auspices of general surgery. Some general surgeons obtain advanced training in this field (most commonly surgical critical care) and specialty certification in surgical critical care. General surgeons must be able to deal initially with almost any surgical emergency. Often, they are the first port of call for critically ill or gravely injured patients, and must perform a variety of procedures to stabilize such patients, such as thoracostomy, cricothyroidotomy, compartment fasciotomies and emergency laparotomy or thoracotomy to stanch bleeding. They are also called upon to staff surgical intensive care units or trauma intensive care units. All general surgeons are trained in emergency surgery. Bleeding, infections, bowel obstructions and organ perforations are the main problems they deal with. Cholecystectomy, the surgical removal of the gallbladder, is one of the most common surgical procedures done worldwide. This is most often done electively, but the gallbladder can become acutely inflamed and require an emergency operation. Infections and rupture of the appendix and small bowel obstructions are other common emergencies. Scope: Laparoscopic surgery This is a relatively new specialty dealing with minimal access techniques using cameras and small instruments inserted through 3- to 15-mm incisions. Robotic surgery is now evolving from this concept (see below). Gallbladders, appendices, and colons can all be removed with this technique. Hernias are also able to be repaired laparoscopically. Bariatric surgery can be performed laparoscopically, and there are benefits to doing so in reducing wound complications in obese patients. General surgeons that are trained today are expected to be proficient in laparoscopic procedures. Scope: Colorectal surgery General surgeons treat a wide variety of major and minor colon and rectal diseases including inflammatory bowel diseases (such as ulcerative colitis or Crohn's disease), diverticulitis, colon and rectal cancer, gastrointestinal bleeding and hemorrhoids. Breast surgery General surgeons perform a majority of all non-cosmetic breast surgery from lumpectomy to mastectomy, especially pertaining to the evaluation, diagnosis and treatment of breast cancer. Vascular surgery General surgeons can perform vascular surgery if they receive special training and certification in vascular surgery. Otherwise, these procedures are typically performed by vascular surgery specialists. However, general surgeons are capable of treating minor vascular disorders. Scope: Endocrine surgery General surgeons are trained to remove all or part of the thyroid and parathyroid glands in the neck and the adrenal glands just above each kidney in the abdomen. In many communities, they are the only surgeons trained to do this.
In communities that have a number of subspecialists, other subspecialty surgeons may assume responsibility for these procedures. Scope: Transplant surgery Responsible for all aspects of pre-operative, operative, and post-operative care of abdominal organ transplant patients. Transplanted organs include liver, kidney, pancreas, and more rarely small bowel. Scope: Surgical oncology Surgical oncologist refers to a general surgical oncologist (a specialty of a general surgeon), but thoracic surgical oncologists, gynecologist and so forth can all be considered surgeons who specialize in treating cancer patients. The importance of training surgeons who sub-specialize in cancer surgery lies in evidence, supported by a number of clinical trials, that outcomes in surgical cancer care are positively associated to surgeon volume (i.e., the more cancer cases a surgeon treats, the more proficient he or she becomes, and his or her patients experience improved survival rates as a result). This is another controversial point, but it is generally accepted, even as common sense, that a surgeon who performs a given operation more often, will achieve superior results when compared with a surgeon who rarely performs the same procedure. This is particularly true of complex cancer resections such as pancreaticoduodenectomy for pancreatic cancer, and gastrectomy with extended (D2) lymphadenectomy for gastric cancer. Surgical oncology is generally a 2-year fellowship following completion of a general surgery residency (5–7 years). Scope: Cardiothoracic surgery Most cardiothoracic surgeons in the U.S. (D.O. or M.D.) first complete a general surgery residency (typically 5–7 years), followed by a cardiothoracic surgery fellowship (typically 2–3 years). However, new programmes are currently offering cardiothoracic surgery as a residency (6–8 years). Pediatric surgery Pediatric surgery is a subspecialty of general surgery. Pediatric surgeons do surgery on patients under age 18. Pediatric surgery is 5–7 years of residency and a 2-3 year fellowship. Trends: In the 2000s, minimally invasive surgery became more prevalent. Considerable enthusiasm has been built around robot-assisted surgery (also known as robotic surgery), despite a lack of data suggesting it has significant benefits that justify its cost. Training: In Canada, Australia, New Zealand, and the United States general surgery is a five to seven year residency and follows completion of medical school, either MD, MBBS, MBChB, or DO degrees. In Australia and New Zealand, a residency leads to eligibility for Fellowship of the Royal Australasian College of Surgeons. In Canada, residency leads to eligibility for certification by and Fellowship of the Royal College of Physicians and Surgeons of Canada, while in the United States, completion of a residency in general surgery leads to eligibility for board certification by the American Board of Surgery or the American Osteopathic Board of Surgery which is also required upon completion of training for a general surgeon to have operating privileges at most hospitals in the United States. Training: In the United Kingdom, surgical trainees enter training after five years of medical school and two years of the Foundation Programme. During the two to three-year core training programme, doctors will sit the Membership of the Royal College of Surgeons (MRCS) examination. On award of the MRCS examination, surgeons may hold the title 'Mister' or 'Miss/Ms./Mrs' rather than doctor. 
This is a tradition dating back hundreds of years in the United Kingdom from when only physicians attended medical school and surgeons did not, but were rather associated with barbers in the Barber Surgeon's Guild. The tradition is also present in many Commonwealth countries including New Zealand and some states of Australia. Trainees will then go onto Higher Surgical Training (HST), lasting a further five to six years. During this time they may choose to subspecialise. Before the end of HST, the examination of Fellow of the Royal College of Surgeons (FRCS) must be taken in general surgery plus the subspeciality. Upon completion of training, the surgeon will become a consultant surgeon and will be eligible for entry on the GMC Specialist Register and may work both in the NHS and independent sector as a consultant general surgeon. The implementation of the European Working Time Directive limited UK surgical residents to a 48-hour working week. The introduction of a sub-consultant grade to enable those who have recently received a UK Certificate of Completion of Training may be necessary.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chorale setting** Chorale setting: Chorale settings refer to a wide variety of musical compositions, almost entirely of Protestant origin, which use a chorale as their basis. A chorale is a simple melody, often based on Gregorian chant, written for congregations to sing hymns. Chorale settings can be vocal, instrumental, or both. Chorale setting: Although the bulk of them are German in origin, and predominantly baroque in style, chorale settings span many countries and musical periods. At their simplest and most common, chorale settings are plain chordal harmonisations with little or no localised ornamentation—typically one chord for each note of the chorale, although quicker passing and neighbour notes are almost never harmonised with a separate chord. Chorale setting: The Protestant Reformation resulted in a significant change in musical practice in northern Europe. Plainchant, associated with the Catholic Church, was largely replaced with choral music sung in the vernacular language—usually German—and the corresponding musical forms from Catholic countries, such as the motet, were replaced with forms that used as their basis the chorales instead of the plainsong from which much of the motet repertory was derived. Chorale setting: Not only the musical forms, but the individual tunes of the Catholic Church were replaced by reformers, although there was often a close relation between the original and the replacement. Composers, including Martin Luther himself, both composed new tunes for the German chorale texts and adapted specific plainchant melodies. These chorales were set musically in an extraordinary number of ways, from the time of the Protestant Reformation to the present day. Chorale setting: Chorale settings are of the following principal types: Chorale cantata Chorale canzona (usually called a Chorale ricercare) Chorale concerto Chorale fantasia Chorale fugue Chorale mass Chorale monody Chorale motet Chorale partita (usually interchangeable with chorale variations) Chorale prelude Chorale ricercare Chorale variations (usually interchangeable with chorale partita)Boundaries between different items on this list can be vague, especially in the early Baroque. Some of these forms are exclusively instrumental (such as the chorale prelude, chorale fugue, chorale fantasia, chorale partita or variations, and chorale ricercare/canzona) while the others are a cappella vocal (some chorale motets) or for voices and instruments (chorale cantata, chorale concerto, chorale mass, chorale monody, some chorale motets). Many of the instrumental forms are almost exclusively for organ, the single most important liturgical instrument in Protestant church music from the Reformation until recent times. These organ settings can be called organ chorales.Some of these forms continue to be used by composers up to the present day, particularly the chorale prelude and the chorale mass.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Accarezzevole** Accarezzevole: Accarezzevole (Italian: [akkaretˈtseːvole], "caressingly") is a musical term marked on sheet music to indicate that a piece is to be played in an expressive and caressing manner. Alexander Scriabin was one of the first composers to use this term in his music.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sonotone 1010** Sonotone 1010: The Sonotone 1010 hearing aid was introduced on 29 December 1952. It was the first commercial product to use transistors, which had been invented five years earlier in 1947.It was a hybrid design, using two miniature vacuum tubes as input stages and a single transistor as the output stage; this was required because the transistors at the time produced too much electrical noise. Even using one transistor considerably extended battery life, lowering the operating cost of the unit. As transistors improved, this model was replaced by all-transistor hearing aids.The Sonotone company had its headquarters in New York City and was established in 1929. The company was bought by various other companies and was no longer in business by 2005.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Modality (theology)** Modality (theology): Modality, in Protestant and Catholic Christian theology, is the structure and organization of the local or universal church. In Catholic theology, the modality is the universal Catholic church. In Protestant theology, the modality is variously described as either the universal church (that is, all believers) or the local church. By contrast, parachurch organizations are sodalities. These include missionary organizations and Christian charities not linked to specific churches. Some theologians consider denominations, schools of theology, and other multi-congregational bodies to be sodalities. Catholic sodalities include orders, monasteries and convents. The modality versus sodality parachurch dispute: In some Christian circles, particularly among non-denominational evangelicals, there is conflict over whether parachurch organizations, including Christian not-for-profit organizations, are a biblical model for ministry. A minority of pastors and theologians assert that only the modality is a valid model for ministry, and they typically equate modality with the local church structure. Central to the dispute is whether the missionary travels of Paul the Apostle should be categorized as an expression of modality or sodality. The modality versus sodality parachurch dispute: A practical consideration in the modality/sodality dispute is that certain Christian efforts, like translating the Bible into different languages, are difficult to organize and fund solely by local congregations in the absence of parachurch organizations. Ralph D. Winter of the US Center for World Mission has argued that modes of modality and sodality are both necessary and will be most effective if they are supportive of one another.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Interval order** Interval order: In mathematics, especially order theory, the interval order for a collection of intervals on the real line is the partial order corresponding to their left-to-right precedence relation—one interval, I1, being considered less than another, I2, if I1 is completely to the left of I2. Interval order: More formally, a countable poset P = (X, ≤) is an interval order if and only if there exists a bijection from X to a set of real intervals, x_i ↦ (ℓ_i, r_i), such that for any x_i, x_j ∈ X we have x_i < x_j in P exactly when r_i < ℓ_j. Such posets may be equivalently characterized as those with no induced subposet isomorphic to the pair of two-element chains, in other words as the (2+2)-free posets. Fully written out, this means that for any two pairs of elements a > b and c > d, one must have a > d or c > b. The subclass of interval orders obtained by restricting the intervals to those of unit length, so that they all have the form (ℓ_i, ℓ_i + 1), is precisely the semiorders. Interval order: The complement of the comparability graph of an interval order (X, ≤) is the interval graph (X, ∩). Interval orders should not be confused with the interval-containment orders, which are the inclusion orders on intervals on the real line (equivalently, the orders of dimension ≤ 2). Interval orders and dimension: An important parameter of partial orders is order dimension: the dimension of a partial order P is the least number of linear orders whose intersection is P. For interval orders, dimension can be arbitrarily large. And while the problem of determining the dimension of general partial orders is known to be NP-hard, determining the dimension of an interval order remains a problem of unknown computational complexity. A related parameter is interval dimension, which is defined analogously, but in terms of interval orders instead of linear orders. Thus, the interval dimension of a partially ordered set P = (X, ≤) is the least integer k for which there exist interval orders ⪯_1, …, ⪯_k on X with x ≤ y exactly when x ⪯_1 y, …, and x ⪯_k y. The interval dimension of an order is never greater than its order dimension. Combinatorics: In addition to being isomorphic to (2+2)-free posets, unlabeled interval orders on [n] are also in bijection with a subset of fixed-point-free involutions on ordered sets with cardinality 2n. These are the involutions with no so-called left- or right-neighbor nestings, where, for any involution f on [2n], a left nesting is an i ∈ [2n] such that i < i+1 < f(i+1) < f(i), and a right nesting is an i ∈ [2n] such that f(i) < f(i+1) < i < i+1. Such involutions, counted according to semi-length, have the ordinary generating function F(t) = ∑_{n≥0} ∏_{i=1}^{n} (1 − (1−t)^i). Combinatorics: The coefficient of t^n in the expansion of F(t) gives the number of unlabeled interval orders of size n. The sequence of these numbers (sequence A022493 in the OEIS) begins 1, 2, 5, 15, 53, 217, 1014, 5335, 31240, 201608, 1422074, 10886503, 89903100, 796713190, 7541889195, 75955177642, …
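To make the generating function concrete, here is a minimal sketch in plain Python (no dependencies) that expands F(t) as a truncated power series and reads off the counts of unlabeled interval orders. The printed coefficients are indexed from n = 0, so they should match the sequence quoted above with its two leading 1s restored (OEIS A022493 begins 1, 1, 2, 5, 15, …).

```python
# Expand F(t) = sum_{n>=0} prod_{i=1}^{n} (1 - (1-t)^i) as a truncated
# power series; the coefficient of t^n counts unlabeled interval orders
# of size n. Summands with n > N start at degree n, so truncating the
# outer sum at N loses nothing below t^(N+1).

N = 12  # recover coefficients of t^0 .. t^N

def poly_mul(a, b, n):
    """Multiply two coefficient lists, discarding terms past degree n."""
    out = [0] * (n + 1)
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            if i + j > n:
                break
            out[i + j] += ai * bj
    return out

one_minus_t = [1, -1]          # the polynomial (1 - t)
F = [0] * (N + 1)
term = [1] + [0] * N           # running product; the empty product is 1
power = [1] + [0] * N          # running power (1 - t)^i, starting at i = 0

for n in range(N + 1):
    # add the current summand prod_{i=1}^{n} (1 - (1-t)^i) to F
    F = [f + t for f, t in zip(F, term)]
    # extend the running product by the next factor 1 - (1-t)^(n+1)
    power = poly_mul(power, one_minus_t, N)   # now (1-t)^(n+1)
    factor = [-c for c in power]
    factor[0] += 1                            # 1 - (1-t)^(n+1)
    term = poly_mul(term, factor, N)

print(F)  # [1, 1, 2, 5, 15, 53, 217, 1014, 5335, 31240, 201608, 1422074, 10886503]
```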
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spreader (railroad)** Spreader (railroad): A spreader is a type of maintenance equipment designed to spread or shape ballast profiles. The spreader spreads gravel along the railroad ties. The various ploughs, wings and blades of specific spreaders allow them to remove snow, build banks, clean and dig ditches, evenly distribute gravel, and trim embankments of brush along the side of the track. Spreaders quickly proved themselves an extremely economical tool for maintaining trackside drainage ditches and spreading fill dumped beside the track. The operation of the wings was originally performed by compressed air and later by hydraulics. Besides maintenance-of-way (MoW) operations, spreaders are also used in open-cast mines to clear the tracks of overburden tipped from dump cars. Jordan spreader history: The Jordan spreader was the creation of Oswald F. Jordan, a Canadian road master who worked in the Niagara, Ontario area on the Canada Southern Railway, later a subsidiary of the New York Central Railroad. He supervised a crew at the St. Thomas Canada Southern shop in the early 1890s. Jordan's first patent, filed in 1890 and listing Robert Potts as co-inventor, covered a single-blade mechanism with the blade height adjustable with a hand crank and gearing. Jordan formed his own company, O.F. Jordan Company, in 1898 and continued construction of Jordan spreaders. By 1906, the company had moved to Chicago, Jordan was a U.S. citizen, and the spreader was a far more sophisticated device, with blades on both sides of the car, pneumatic power for raising and lowering each blade, and considerably more rugged construction. By 1909, the spreader was being built on a steel-framed car body instead of the wood used in earlier models, and a plow was mounted on the front, with an extension in front of that for shifting material across the track from side to side. Shortly after this, Jordan added a pneumatic system for rapidly and automatically extending and retracting the side blades. At this point, the primary purpose of the Jordan spreader was spreading ballast along the tracks. Jordan spreader history: Following Jordan's death in 1910, Walter Riley took over management of the company and directed it for the next 50 years. Over the years that followed, the Jordan spreader was developed into a multi-purpose MoW vehicle with adjustable blades and ploughs added to the wings. New uses included trackside ditch maintenance and spreading fill dumped beside the track. Over 1,400 spreaders were built. Jordan spreaders are available by special order from Harsco Rail. In 2001, the Jordan spreader was inducted into the North America Railway Hall of Fame in the "Local:Technical Innovation" category. It shared this selection with another technical innovation, the rotary snowplow.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**HD 153201** HD 153201: HD 153201 is a Bp star in the southern constellation of Ara. It is a chemically peculiar star that displays an anomalous abundance of the element silicon in its spectrum. This is a suspected variable star of the type known as Alpha² Canum Venaticorum. There is a magnitude 9.86 companion star at an angular separation of 2.30″ along a position angle of 131°.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Osmotic-controlled release oral delivery system** Osmotic-controlled release oral delivery system: The osmotic-controlled release oral delivery system (OROS) is an advanced controlled release oral drug delivery system in the form of a rigid tablet with a semi-permeable outer membrane and one or more small laser drilled holes in it. As the tablet passes through the body, water is absorbed through the semipermeable membrane via osmosis, and the resulting osmotic pressure is used to push the active drug through the laser drilled opening(s) in the tablet and into the gastrointestinal tract. OROS is a trademarked name owned by ALZA Corporation, which pioneered the use of osmotic pumps for oral drug delivery. Rationale: Pros and cons Osmotic release systems have a number of major advantages over other controlled-release mechanisms. They are significantly less affected by factors such as pH, food intake, GI motility, and differing intestinal environments. Using an osmotic pump to deliver drugs has additional inherent advantages regarding control over drug delivery rates. This allows for much more precise drug delivery over an extended period of time, which results in much more predictable pharmacokinetics. However, osmotic release systems are relatively complicated, somewhat difficult to manufacture, and may cause irritation or even blockage of the GI tract due to prolonged release of irritating drugs from the non-deformable tablet. Oral osmotic release systems: Single-layer The Elementary Osmotic Pump (EOP) was developed by ALZA in 1974, and was the first practical example of an osmotic pump based drug release system for oral use. It was introduced to the market in the early 1980s in Osmosin (indomethacin) and Acutrim (phenylpropanolamine), but unexpectedly severe issues with GI irritation and cases of GI perforation led to the withdrawal of Osmosin.Merck & Co. later developed the Controlled-Porosity Osmotic Pump (CPOP) with the intention of addressing some of the issues that led to Osmosin's withdrawal via a new approach to the final stage of the release mechanism. Unlike the EOP, the CPOP had no pre-formed hole in the outer shell for the drug to be expelled out of. Instead, the CPOP's semipermeable membrane was designed to form numerous small pores upon contact with water through which the drug would be expelled via osmotic pressure. The pores were formed via the use of a pH insensitive leachable or dissolvable additive such as sorbitol. Oral osmotic release systems: Multi-layer Both the EOP and CPOP were relatively simple designs, and were limited by their inability to deliver poorly soluble drugs. This led to the development of an additional internal "push layer" composed of material (a swellable polymer) that would expand as it absorbed water, which then pushed the drug layer (which incorporates a viscous polymer for suspension of poorly soluble drugs) out of the exit hole at a controlled rate. Osmotic agents such as sodium chloride, potassium chloride, or xylitol are added to both the drug and push layers to increase the osmotic pressure. The initial design developed in 1982 by ALZA researchers was designated the Push-Pull Osmotic Pump (PPOP), and Procardia XL (nifedipine) was one of the first drugs to utilize this PPOP design. Oral osmotic release systems: In the early 1990s, an ALZA-funded research program began to develop a new dosage form of methylphenidate for the treatment of children with attention deficit hyperactivity disorder (ADHD). 
Methylphenidate's short half-life required multiple doses to be administered each day to attain long-lasting coverage, which made it an ideal candidate for the OROS technology. Multiple candidate pharmacokinetic profiles were evaluated and tested in an attempt to determine the optimal way to deliver the drug, which was especially important given the puzzling failure of an existing extended-release formulation of methylphenidate (Ritalin SR) to act as expected. The zero-order (flat) release profile that the PPOP was optimized to deliver failed to maintain its efficacy over time, which suggested that acute tolerance to methylphenidate formed over the course of the day. This explained why Ritalin SR was inferior to twice-daily Ritalin IR, and led to the hypothesis that an ascending pattern of drug delivery was necessary to maintain clinical effect. Trials designed to test this hypothesis were successful, and ALZA subsequently developed a modified PPOP design that utilized an overcoat of methylphenidate designed to release immediately and rapidly raise serum levels, followed by 10 hours of first-order (ascending) drug delivery from the modified PPOP design. This design was called the Push-Stick Osmotic Pump (PSOP), and utilized two separate drug layers with different concentrations of methylphenidate in addition to the (now quite robust) push layer.
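To make the contrast between these release profiles concrete, here is a toy numeric sketch: a zero-order (flat) hourly delivery next to an ascending profile with an immediate-release overcoat in the first hour. All milligram values and the linear ramp are invented purely to show the shape difference and are not actual OROS release data.

```python
# Hypothetical hourly release amounts over a 10-hour window: a flat
# (zero-order) profile versus an ascending profile whose first hour also
# includes an immediate-release overcoat, as discussed above.

HOURS = range(1, 11)

def zero_order(total_mg=20.0, hours=10):
    rate = total_mg / hours
    return [rate] * hours            # the same amount every hour

def ascending(total_mg=20.0, hours=10, overcoat_mg=4.0):
    # hourly amounts ramp up linearly; the overcoat is added to hour 1
    weights = list(range(1, hours + 1))
    scale = (total_mg - overcoat_mg) / sum(weights)
    profile = [w * scale for w in weights]
    profile[0] += overcoat_mg
    return profile

for h, flat, asc in zip(HOURS, zero_order(), ascending()):
    print(f"hour {h:2d}: flat {flat:4.1f} mg, ascending {asc:4.1f} mg")
```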
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Glutathione reductase** Glutathione reductase: Glutathione reductase (GR) also known as glutathione-disulfide reductase (GSR) is an enzyme that in humans is encoded by the GSR gene. Glutathione reductase (EC 1.8.1.7) catalyzes the reduction of glutathione disulfide (GSSG) to the sulfhydryl form glutathione (GSH), which is a critical molecule in resisting oxidative stress and maintaining the reducing environment of the cell. Glutathione reductase functions as a dimeric disulfide oxidoreductase and utilizes an FAD prosthetic group and NADPH to reduce one molar equivalent of GSSG to two molar equivalents of GSH: GSSG + NADPH + H+ → 2 GSH + NADP+. Glutathione reductase is conserved across all kingdoms. In bacteria, yeasts, and animals, one glutathione reductase gene is found; however, in plant genomes, two GR genes are encoded. Drosophila and trypanosomes do not have any GR at all; in these organisms, glutathione reduction is performed by the thioredoxin or the trypanothione system, respectively. Function: Glutathione plays a key role in maintaining proper function and preventing oxidative stress in human cells. It can act as a scavenger for hydroxyl radicals, singlet oxygen, and various electrophiles. Reduced glutathione reduces the oxidized form of the enzyme glutathione peroxidase, which in turn reduces hydrogen peroxide (H2O2), a dangerously reactive species within the cell. In addition, it plays a key role in the metabolism and clearance of xenobiotics, acts as a cofactor in certain detoxifying enzymes, participates in transport, and regenerates antioxidants such as vitamins E and C to their reactive forms. The ratio of GSSG/GSH present in the cell is a key factor in properly maintaining the oxidative balance of the cell; that is, it is critical that the cell maintains high levels of reduced glutathione and low levels of oxidized glutathione disulfide. This narrow balance is maintained by glutathione reductase, which catalyzes the reduction of GSSG to GSH. Structure: Glutathione reductase from human erythrocytes is a homodimer consisting of 52 kDa monomers, each containing 3 domains: an NADPH-binding domain, FAD-binding domain(s), and a dimerization domain. GR exhibits a single-sheet, double-layered topology, where an anti-parallel beta-sheet is largely exposed to the solvent on one face while being covered by random coils on the other face. Each monomer contains 478 residues and one FAD molecule. GR is a thermostable protein, retaining function up to 65 °C. Reaction mechanism: Steps: Reductive half The action of GR proceeds through two distinct half reactions, a reductive half followed by an oxidative half. In the first half, NADPH reduces the FAD present in GSR to produce a transient FADH− anion. This anion then quickly breaks the disulfide bond between Cys58 and Cys63, forming a short-lived covalent bond and then a stable charge-transfer complex between the flavin and Cys63. The now oxidized NADP+ is released and is subsequently replaced by a new molecule of NADPH. This is the end of the so-called reductive half of the mechanism. Reaction mechanism: Oxidative half In the oxidative half of the mechanism, Cys63 nucleophilically attacks the nearest sulfide unit in the GSSG molecule (promoted by His467), which creates a mixed disulfide bond (GS-Cys58) and a GS− anion. His467 of GSR then protonates the GS− anion to release the first molecule of GSH.
Next, Cys63 nucleophilically attacks the sulfide of Cys58, releasing a GS− anion, which, in turn, picks up a solvent proton and is released from the enzyme, thereby creating the second GSH. So, for every GSSG and NADPH, two reduced GSH molecules are gained, which can again act as antioxidants scavenging reactive oxygen species in the cell. Inhibition: In vitro, glutathione reductase is inhibited by low concentrations of sodium arsenite and methylated arsenate metabolites, but in vivo, significant glutathione reductase inhibition by sodium arsenate has only been observed at 10 mg/kg/day. Glutathione reductase is also inhibited by some flavonoids, a class of pigments produced by plants. Clinical significance: GSH is a key cellular antioxidant and plays a major role in the phase 2 metabolic clearance of electrophilic xenobiotics. The importance of the GSH pathway, and of the enzymes that affect this delicate balance, has been gaining increased attention in recent years. Although glutathione reductase has been an attractive target for many pharmaceuticals, there have been no successful glutathione reductase related therapeutic compounds created to date. In particular, glutathione reductase appears to be a good target for anti-malarials, as the glutathione reductase of the malaria parasite Plasmodium falciparum has a significantly different protein fold than that of mammalian glutathione reductase. By designing drugs specific to P. falciparum it may be possible to selectively induce oxidative stress in the parasite, while not affecting the host. Clinical significance: There are two main classes of GR-targeting compounds: Inhibitors of GSSG binding, or dimerization: reactive electrophiles such as gold compounds and fluoronaphthoquinones. Clinical significance: Drugs which use glutathione reductase to regenerate, such as redox cyclers. Two examples of these types of compounds are methylene blue and naphthoquinone. Clinical trials performed in Burkina Faso have revealed mixed results when treating malaria with naphthoquinones. In cells exposed to high levels of oxidative stress, like red blood cells, up to 10% of glucose consumption may be directed to the pentose phosphate pathway (PPP) for production of the NADPH needed for this reaction. In the case of erythrocytes, if the PPP is non-functional, then the oxidative stress in the cell will lead to cell lysis and anemia. Lupus is an autoimmune disorder in which patients produce an elevated quantity of antibodies that attack DNA and other cell components. In a recent study, a single nucleotide polymorphism (SNP) in the glutathione reductase gene was found to be highly associated with lupus in African Americans. African Americans with lupus have also been shown to express less reduced glutathione in their T cells. The study's authors believe that reduced glutathione reductase activity may contribute to the increased production of reactive oxygen species in African Americans with lupus. In mice, glutathione reductase has been implicated in the oxidative burst, a component of the immune response. The oxidative burst is a defense mechanism in which neutrophils produce and release reactive oxidative species in the vicinity of bacteria or fungi to destroy the foreign cells. Glutathione-reductase-deficient neutrophils were shown to produce a more transient oxidative burst in response to bacteria than neutrophils that express GR at ordinary levels. The mechanism by which glutathione reductase sustains the oxidative burst is still unknown.
Clinical significance: Deficiency Glutathione reductase deficiency is a rare disorder in which glutathione reductase activity is absent from erythrocytes, leukocytes, or both. In one study this disorder was observed in only two cases out of 15,000 tests for glutathione reductase deficiency performed over the course of 30 years. In the same study, glutathione reductase deficiency was associated with cataracts and favism in one patient and their family, and with severe unconjugated hyperbilirubinemia in another patient. It has been proposed that the glutathione redox system (of which glutathione reductase is a part) is almost exclusively responsible for protecting eye lens cells from hydrogen peroxide, because these cells are deficient in catalase, an enzyme which catalyzes the breakdown of hydrogen peroxide; this would explain the high rate of cataract incidence in glutathione-reductase-deficient individuals. Some patients exhibit deficient levels of glutathione reductase activity as a result of not consuming enough riboflavin in their diets. Riboflavin is a precursor of FAD, whose reduced form donates two electrons to the disulfide bond present in the oxidized form of glutathione reductase in order to begin the enzyme's catalytic cycle. In 1999, a study found that 17.8% of males and 22.4% of females examined in Saudi Arabia suffered from low glutathione reductase activity due to riboflavin deficiency. Clinical significance: Connection to favism In favism, patients lack glucose-6-phosphate dehydrogenase, an enzyme in the pentose phosphate pathway that reduces NADP+ to NADPH while catalyzing the conversion of glucose-6-phosphate to 6-phosphoglucono-δ-lactone. Glucose-6-phosphate dehydrogenase deficient individuals have less NADPH available for the reduction of oxidized glutathione via glutathione reductase. Thus their basal ratio of oxidized to reduced glutathione is significantly higher than that of patients who express glucose-6-phosphate dehydrogenase normally, leaving them unable to respond effectively to high levels of reactive oxygen species, which causes cell lysis. Monitoring glutathione reductase activity: The activity of glutathione reductase is used as an indicator of oxidative stress. The activity can be monitored through NADPH consumption, measured as a decrease in absorbance at 340 nm, or the GSH formed can be visualized with Ellman's reagent. Alternatively, the activity can be measured using roGFP (redox-sensitive Green Fluorescent Protein). In plants: As it does in human cells, glutathione reductase helps to protect plant cells from reactive oxygen species. In plants, reduced glutathione participates in the glutathione-ascorbate cycle, in which reduced glutathione reduces dehydroascorbate, a reactive byproduct of the reduction of hydrogen peroxide. In particular, glutathione reductase contributes to plants' response to abiotic stress. The enzyme's activity has been shown to be modulated in response to metals, metalloids, salinity, drought, UV radiation and heat-induced stress. History: Glutathione reductase was first purified in 1955 at Yale University by P. Janmeda, who also identified NADPH as the primary electron donor for the enzyme. Later groups confirmed the presence of FAD and the thiol group, and an initial mechanism was suggested in 1965. The initial (low resolution) structure of glutathione reductase was solved in 1977. This was quickly followed by a 3 Å structure by Shulze et al. in 1978.
Glutathione reductase has been studied exhaustively since these early experiments and is consequently one of the most well-characterized enzymes to date.
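As a concrete illustration of the 340 nm assay described above, here is a short back-of-the-envelope calculation: the extinction coefficient is the standard literature value for NADPH at 340 nm, while every sample number below is hypothetical, chosen only to show the arithmetic.

```python
# Glutathione reductase activity from the rate of NADPH consumption,
# observed as a falling absorbance at 340 nm (Beer-Lambert law).
# All sample values are hypothetical; only the arithmetic is the point.

EXT_NADPH_340 = 6.22  # mM^-1 cm^-1, molar absorptivity of NADPH at 340 nm

def gr_activity_u_per_ml(delta_a340_per_min, total_vol_ml, sample_vol_ml,
                         path_cm=1.0, dilution=1.0):
    """One unit (U) oxidizes 1 umol of NADPH per minute under assay conditions."""
    # rate of NADPH loss in mM/min, from A = eps * c * l
    rate_mM_per_min = delta_a340_per_min / (EXT_NADPH_340 * path_cm)
    # scale from the cuvette contents back to the enzyme sample added
    return rate_mM_per_min * total_vol_ml * dilution / sample_vol_ml

# e.g. absorbance falling by 0.12 per minute in a 1 mL reaction
# containing 0.05 mL of enzyme sample (made-up numbers):
print(f"{gr_activity_u_per_ml(0.12, 1.0, 0.05):.2f} U/mL")  # ~0.39 U/mL
```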
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Verilog-to-Routing** Verilog-to-Routing: Verilog-to-Routing (VTR) is an open source CAD flow for FPGA devices. VTR's main purpose is to map a given circuit described in Verilog, a Hardware Description Language, onto a given FPGA architecture for research and development purposes; the FPGA architecture targeted could be a novel architecture that a researcher wishes to explore, or it could be an existing commercial FPGA whose architecture has been captured in the VTR input format. The VTR project has many contributors, with the lead collaborating universities being the University of Toronto, the University of New Brunswick, and the University of California, Berkeley. Additional contributors include Google, the University of Utah, Princeton University, Altera, Intel, Texas Instruments, and MIT Lincoln Lab. VTR Flow: The VTR design flow usually consists of three main component applications: ODIN II, which compiles Verilog code to a circuit in Berkeley Logic Interchange Format (BLIF), a human-readable graph representation of the circuit; ABC, which optimizes the BLIF circuit produced by ODIN II; and VPR, which packs, places and routes the optimized circuit on the given FPGA architecture. There are some additional optional tools that can process the VTR output further. For example, the FASM FPGA assembly tool can produce programming bitstreams for some commercial FPGAs (Xilinx Artix and Lattice iCE40) at the end of the VTR flow, while the OpenFPGA tool integrates with VTR to produce a standard cell layout of a novel (proposed) FPGA. It is also possible to use different tools for the first (HDL synthesis) stage of the VTR flow; for example, the Titan flow uses Quartus to perform the HDL-to-logic synthesis stage and then VPR to perform placement and routing, while SymbiFlow uses the Yosys synthesis tool followed by VPR placement and routing. VTR Flow: ODIN II ODIN II is the HDL compiler of the VTR flow. It transforms given Verilog code into a BLIF circuit, performs code and circuit optimizations, visualizes circuits, and performs partial mapping of logic to the available hard blocks of the given architecture. It can also simulate the execution of circuits, both for validation and for power, performance and heat analysis. ODIN II is maintained by the University of New Brunswick. VTR Flow: ABC ABC optimizes BLIF circuits by performing logic optimization and technology mapping. ABC is maintained by the University of California, Berkeley. VPR Versatile Place and Route (VPR) is the final component of VTR. Its input is a BLIF circuit, which it packs, places and routes on an input FPGA architecture. VTR Flow: During packing, neighboring and related logic elements of the circuit are clustered together into logic blocks matching the hardware of the FPGA. During placement, these logic blocks as well as hard blocks are assigned to the available hardware resources of the FPGA. Finally, during routing, the signal connections between blocks are made. VPR is primarily developed by the University of Toronto, with contributions from many other universities and companies. VTR Flow: FASM The FPGA assembly (genfasm) tool will produce a programming bitstream from a VTR implementation (placement and routing of a circuit) on commercial architectures for which complete VTR architecture files describing the FPGA device have been produced. Currently this includes the Xilinx Artix and Lattice iCE40 FPGA families. This tool is primarily developed by Google.
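For a sense of what the BLIF handed between these stages looks like, here is a minimal sketch: a single two-input AND gate expressed as a .names truth table. The netlist is wrapped in Python only to keep this document's examples in one language; the module name and file path are arbitrary choices, not anything produced by VTR itself.

```python
# A tiny BLIF netlist of the general kind ODIN II emits and ABC/VPR consume:
# one .model, two primary inputs, one output, and a .names truth table
# whose single cover line "11 1" makes y the AND of a and b.
blif_netlist = """\
.model and_gate
.inputs a b
.outputs y
.names a b y
11 1
.end
"""

# write it out so a downstream tool could pick it up (path is arbitrary)
with open("and_gate.blif", "w") as f:
    f.write(blif_netlist)
print(blif_netlist)
```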
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fascia iliaca block** Fascia iliaca block: The fascia iliaca block (FIC, FICB) is a local anesthetic nerve block, a type of regional anesthesia technique, used to provide analgesia or anaesthesia to the hip and thigh. FICB can be performed using ultrasound or with a loss-of-resistance technique, the latter sometimes referred to as the "two-pop method". FICB works by affecting the femoral, obturator and lateral cutaneous nerves with a local anesthetic. Technique: When FICB is performed with the loss-of-resistance technique, the injection site is found by drawing an imaginary line from the pubic tubercle to the anterior superior iliac spine. The injection site is 1 cm below the junction of the lateral one-third and the medial two-thirds of this line. Two losses of resistance are felt as the fascia lata and the fascia iliaca are penetrated by a semi-blunt cannula. Aspiration (pulling back on the syringe plunger) is performed, after which a local anaesthetic is injected while compressing the skin distally to increase cranial distribution. Technique: FICB can generally be performed with minimal training, including by non-physician practitioners. Medical uses: FICB can be used to offer pain relief for hip fractures in adults and femoral fractures in children. Adverse effects: FICB is generally safe to use and has few adverse effects. There is a 0.09-3.2% risk of hematoma at the injection site and a 0.18% risk of local anaesthetic intoxication. There are also case reports of pneumoretroperitoneum with continuous infusion, bladder puncture with a modified block under very special conditions, and postoperative neuropathy. History: The block was first described in 1989 as an alternative to the 3-in-1 nerve block in children.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cell surface receptor** Cell surface receptor: Cell surface receptors (membrane receptors, transmembrane receptors) are receptors that are embedded in the plasma membrane of cells. They act in cell signaling by receiving (binding to) extracellular molecules. They are specialized integral membrane proteins that allow communication between the cell and the extracellular space. The extracellular molecules may be hormones, neurotransmitters, cytokines, growth factors, cell adhesion molecules, or nutrients; they react with the receptor to induce changes in the metabolism and activity of a cell. In the process of signal transduction, ligand binding effects a cascading chemical change through the cell membrane. Structure and mechanism: Many membrane receptors are transmembrane proteins. There are various kinds, including glycoproteins and lipoproteins. Hundreds of different receptors are known and many more have yet to be studied. Transmembrane receptors are typically classified based on their tertiary (three-dimensional) structure. If the three-dimensional structure is unknown, they can be classified based on membrane topology. In the simplest receptors, polypeptide chains cross the lipid bilayer once, while others, such as the G-protein coupled receptors, cross as many as seven times. Each cell membrane can have several kinds of membrane receptors, with varying surface distributions. A single receptor may also be differently distributed at different membrane positions, depending on the sort of membrane and cellular function. Receptors are often clustered on the membrane surface, rather than evenly distributed. Structure and mechanism: Mechanism Two models have been proposed to explain transmembrane receptors' mechanism of action. Dimerization: The dimerization model suggests that prior to ligand binding, receptors exist in a monomeric form. When agonist binding occurs, the monomers combine to form an active dimer. Rotation: Ligand binding to the extracellular part of the receptor induces a rotation (conformational change) of part of the receptor's transmembrane helices. The rotation alters which parts of the receptor are exposed on the intracellular side of the membrane, altering how the receptor can interact with other proteins within the cell. Domains: Transmembrane receptors in the plasma membrane can usually be divided into three parts. Domains: Extracellular domains The extracellular domain lies just external to the cell or organelle. If the polypeptide chain crosses the bilayer several times, the external domain comprises loops entwined through the membrane. By definition, a receptor's main function is to recognize and respond to a type of ligand. For example, a neurotransmitter, hormone, or atomic ion may bind to the extracellular domain as a ligand coupled to the receptor. Klotho is an enzyme that enables a receptor to recognize its ligand (FGF23). Domains: Transmembrane domains The two most abundant classes of transmembrane receptors are GPCRs and single-pass transmembrane proteins. In some receptors, such as the nicotinic acetylcholine receptor, the transmembrane domain forms a protein pore through the membrane, or around the ion channel. Upon activation of an extracellular domain by binding of the appropriate ligand, the pore becomes accessible to ions, which then diffuse through it. In other receptors, the transmembrane domains undergo a conformational change upon binding, which affects intracellular conditions.
In some receptors, such as members of the 7TM superfamily, the transmembrane domain includes a ligand binding pocket. Domains: Intracellular domains The intracellular (or cytoplasmic) domain of the receptor interacts with the interior of the cell or organelle, relaying the signal. There are two fundamental paths for this interaction: The intracellular domain communicates via protein-protein interactions with effector proteins, which in turn pass a signal to the destination. With enzyme-linked receptors, the intracellular domain has enzymatic activity. Often, this is tyrosine kinase activity. The enzymatic activity can also be due to an enzyme associated with the intracellular domain. Signal transduction: Signal transduction processes through membrane receptors involve external reactions, in which the ligand binds to a membrane receptor, and internal reactions, in which an intracellular response is triggered. Signal transduction through membrane receptors requires four parts: Extracellular signaling molecule: an extracellular signaling molecule is produced by one cell and is at least capable of traveling to neighboring cells. Receptor protein: cells must have cell surface receptor proteins which bind to the signaling molecule and communicate inward into the cell. Intracellular signaling proteins: these pass the signal to the organelles of the cell. Binding of the signal molecule to the receptor protein will activate intracellular signaling proteins that initiate a signaling cascade. Target proteins: the conformations or other properties of the target proteins are altered when a signaling pathway is active and changes the behavior of the cell. Membrane receptors are mainly divided by structure and function into 3 classes: the ion channel-linked receptor, the enzyme-linked receptor, and the G protein-coupled receptor. Ion channel-linked receptors have ion channels for anions and cations, and constitute a large family of multipass transmembrane proteins. They participate in rapid signaling events usually found in electrically active cells such as neurons. They are also called ligand-gated ion channels. Opening and closing of ion channels is controlled by neurotransmitters. Enzyme-linked receptors are either enzymes themselves, or directly activate associated enzymes. These are typically single-pass transmembrane receptors, with the enzymatic component of the receptor kept intracellular. The majority of enzyme-linked receptors are, or associate with, protein kinases. G protein-coupled receptors are integral membrane proteins that possess seven transmembrane helices. These receptors activate a G protein upon agonist binding, and the G-protein mediates receptor effects on intracellular signaling pathways. Signal transduction: Ion channel-linked receptor During the signal transduction event in a neuron, the neurotransmitter binds to the receptor and alters the conformation of the protein. This opens the ion channel, allowing extracellular ions into the cell. Ion permeability of the plasma membrane is altered, and this transforms the extracellular chemical signal into an intracellular electric signal which alters the cell's excitability. The acetylcholine receptor is a receptor linked to a cation channel. The protein consists of four subunits: alpha (α), beta (β), gamma (γ), and delta (δ) subunits. There are two α subunits, with one acetylcholine binding site each. This receptor can exist in three conformations. The closed and unoccupied state is the native protein conformation.
As two molecules of acetylcholine bind to the binding sites on the α subunits, the conformation of the receptor is altered and the gate is opened, allowing for the entry of many ions and small molecules. However, this open and occupied state only lasts briefly; the gate then closes, producing the closed and occupied state. The two molecules of acetylcholine will soon dissociate from the receptor, returning it to the native closed and unoccupied state. Signal transduction: Enzyme-linked receptors As of 2009, there were 6 known types of enzyme-linked receptors: receptor tyrosine kinases; tyrosine kinase associated receptors; receptor-like tyrosine phosphatases; receptor serine/threonine kinases; receptor guanylyl cyclases; and histidine kinase associated receptors. Receptor tyrosine kinases comprise the largest population and have the widest application. The majority of these molecules are receptors for growth factors such as epidermal growth factor (EGF), platelet-derived growth factor (PDGF), fibroblast growth factor (FGF), hepatocyte growth factor (HGF), nerve growth factor (NGF) and hormones such as insulin. Signal transduction: Most of these receptors will dimerize after binding with their ligands, in order to activate further signal transduction. For example, after the epidermal growth factor (EGF) receptor binds with its ligand EGF, the two receptors dimerize and then undergo phosphorylation of the tyrosine residues in the enzyme portion of each receptor molecule. This activates the tyrosine kinase and catalyzes further intracellular reactions. Signal transduction: G protein-coupled receptors G protein-coupled receptors comprise a large protein family of transmembrane receptors. They are found only in eukaryotes. The ligands which bind and activate these receptors include photosensitive compounds, odors, pheromones, hormones, and neurotransmitters; these vary in size from small molecules to peptides and large proteins. G protein-coupled receptors are involved in many diseases, and thus are the targets of many modern medicinal drugs. There are two principal signal transduction pathways involving the G protein-coupled receptors: the cAMP signaling pathway and the phosphatidylinositol signaling pathway. Both are mediated via G protein activation. The G-protein is a trimeric protein, with three subunits designated as α, β, and γ. In response to receptor activation, the α subunit releases bound guanosine diphosphate (GDP), which is displaced by guanosine triphosphate (GTP), thus activating the α subunit, which then dissociates from the β and γ subunits. The activated α subunit can further affect intracellular signaling proteins or target functional proteins directly. Membrane receptor-related disease: If the membrane receptors are denatured or deficient, signal transduction can be hindered, causing disease. Some diseases are caused by disorders of membrane receptor function, due to deficiency or degradation of the receptor via changes in the genes that encode and regulate the receptor protein. The membrane receptor TM4SF5 influences the migration of hepatic cells and hepatoma. Also, the cortical NMDA receptor influences membrane fluidity, and is altered in Alzheimer's disease. When a cell is infected by a non-enveloped virus, the virus first binds to specific membrane receptors and then passes itself or a subviral component to the cytoplasmic side of the cellular membrane.
In the case of poliovirus, it is known in vitro that interactions with receptors cause conformational rearrangements which release a virion protein called VP4. The N terminus of VP4 is myristylated and thus hydrophobic (myristic acid, CH3(CH2)12COOH). It is proposed that the conformational changes induced by receptor binding result in the attachment of the myristic acid on VP4 and the formation of a channel for RNA. Structure-based drug design: Through methods such as X-ray crystallography and NMR spectroscopy, the information about 3D structures of target molecules has increased dramatically, and so has structural information about the ligands. This drives rapid development of structure-based drug design. Some of these new drugs target membrane receptors. Current approaches to structure-based drug design can be divided into two categories. The first category involves determining ligands for a given receptor. This is usually accomplished through database queries, biophysical simulations, and the construction of chemical libraries. In each case, a large number of potential ligand molecules are screened to find those fitting the binding pocket of the receptor. This approach is usually referred to as ligand-based drug design. The key advantage of searching a database is that it saves time and resources in obtaining new effective compounds. The other approach to structure-based drug design involves the combinatorial mapping of ligands, which is referred to as receptor-based drug design. In this case, ligand molecules are engineered within the constraints of a binding pocket by assembling small pieces in a stepwise manner. These pieces can be either atoms or molecules. The key advantage of such a method is that novel structures can be discovered. Structure-based drug design: Other examples Adrenergic receptor Olfactory receptors Receptor tyrosine kinases Epidermal growth factor receptor Insulin receptor Fibroblast growth factor receptors High affinity neurotrophin receptors Ephrin receptors Integrins Low affinity nerve growth factor receptor NMDA receptor Several immune receptors Toll-like receptor T cell receptor CD28 SCIMP protein
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hamiltonian coloring** Hamiltonian coloring: Hamiltonian coloring, named after William Rowan Hamilton, is a type of graph coloring. Hamiltonian coloring uses a concept called the detour distance between two vertices of the graph. It has many applications in different areas of science and technology. Terminologies: Radio coloring A graph G with diameter D and n nodes that is colored (i.e. has a positive integer assigned to each vertex) with k colors is called a radio k-coloring of G if, for every pair of vertices a and b, the sum of the distance between them and the difference between their labels ("colors") is greater than k. For example, two nodes labelled 3 and 7 at distance 5 are acceptable for a radio 8-coloring, but not for a radio 9-coloring, since (7−3)+5=9, which is not greater than 9. Terminologies: Antipodal coloring A radio (D−1)-coloring, that is, one where k is equal to one less than the graph's diameter, is known as an antipodal coloring because antipodal vertices may be colored the same, but all nodes between them must be different. Terminologies: Detour distance The distance between two vertices in a graph is defined as the minimum of the lengths of the paths connecting those vertices. The detour distance between two vertices, say u and v, is defined as the length of the longest u-v path in the graph. In the case of a tree, the detour distance between any two vertices is the same as the distance between them. Terminologies: Hamiltonian coloring Hamiltonian colorings are a variation on antipodal colorings where, instead of the regular distance between nodes, the detour distance is considered. Specifically, a Hamiltonian coloring's nodes have the property that the detour distance plus the difference in colors is greater than or equal to one less than n, the number of nodes in the graph. If the graph G is a path, then any Hamiltonian coloring is also an antipodal coloring, which is the inspiration for the definition of Hamiltonian coloring.
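A brute-force sketch of these definitions in Python (exponential-time, so only suitable for tiny graphs): detour distance is computed by exhaustive DFS over simple paths, and the condition D(u, v) + |c(u) − c(v)| ≥ n − 1 is then checked for every pair of vertices. The example path graph and coloring are chosen by hand for illustration.

```python
# Check a candidate Hamiltonian coloring against the definition above.

from itertools import combinations

def detour_distance(adj, u, v):
    """Length (edge count) of the longest simple path from u to v."""
    best = -1
    def dfs(node, visited, length):
        nonlocal best
        if node == v:
            best = max(best, length)
            return  # a simple u-v path must end at v, so don't go past it
        for nxt in adj[node]:
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, length + 1)
    dfs(u, {u}, 0)
    return best

def is_hamiltonian_coloring(adj, color):
    n = len(adj)
    return all(
        detour_distance(adj, u, v) + abs(color[u] - color[v]) >= n - 1
        for u, v in combinations(range(n), 2)
    )

# Path graph P4: 0 - 1 - 2 - 3; on a path (a tree) the detour distance
# equals the ordinary distance, so this is also an antipodal coloring.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_hamiltonian_coloring(adj, {0: 1, 1: 4, 2: 2, 3: 5}))  # True
```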
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Vivisection** Vivisection: Vivisection (from Latin vivus 'alive', and sectio 'cutting') is surgery conducted for experimental purposes on a living organism, typically animals with a central nervous system, to view living internal structure. The word is, more broadly, used as a pejorative catch-all term for experimentation on live animals by organizations opposed to animal experimentation, but the term is rarely used by practising scientists. Human vivisection, such as live organ procurement, has been perpetrated as a form of torture. Animal vivisection: Research requiring vivisection techniques that cannot be met through other means is often subject to an external ethics review in conception and implementation, and in many jurisdictions use of anesthesia is legally mandated for any surgery likely to cause pain to any vertebrate.In the United States, the Animal Welfare Act explicitly requires that any procedure that may cause pain use "tranquilizers, analgesics, and anesthetics" with exceptions when "scientifically necessary". The act does not define "scientific necessity" or regulate specific scientific procedures, but approval or rejection of individual techniques in each federally funded lab is determined on a case-by-case basis by the Institutional Animal Care and Use Committee, which contains at least one veterinarian, one scientist, one non-scientist, and one other individual from outside the university.In the United Kingdom, any experiment involving vivisection must be licensed by the Home Secretary. The Animals (Scientific Procedures) Act 1986 "expressly directs that, in determining whether to grant a licence for an experimental project, 'the Secretary of State shall weigh the likely adverse effects on the animals concerned against the benefit likely to accrue.'"In Australia, the Code of Practice "requires that all experiments must be approved by an Animal Experimentation Ethics Committee" that includes a "person with an interest in animal welfare who is not employed by the institution conducting the experiment, and an additional independent person not involved in animal experimentation."Anti-vivisectionists have played roles in the emergence of the animal welfare and animal rights movements, arguing that animals and humans have the same natural rights as living creatures, and that it is inherently immoral to inflict pain or injury on another living creature, regardless of the purpose or potential benefit to mankind. Animal vivisection: Vivisection and anti-vivisection in the 19th century At the turn of the 19th century, medicine was undergoing a transformation. The emergence of hospitals and the development of more advanced medical tools such as the stethoscope are but a few of the changes in the medical field. There was also an increased recognition that medical practices needed to be improved, as many of the current therapeutics were based on unproven, traditional theories that may or may not have helped the patient recover. The demand for more effective treatment shifted emphasis to research with the goal of understanding disease mechanisms and anatomy. This shift had a few effects, one of which was the rise in patient experimentation, leading to some moral questions about what was acceptable in clinical trials and what was not. An easy solution to the moral problem was to use animals in vivisection experiments, so as not to endanger human patients. This, however, had its own set of moral obstacles, leading to the anti-vivisection movement. 
Animal vivisection: François Magendie (1783–1855) One polarizing figure in the anti-vivisection movement was François Magendie. Magendie was a physiologist at the Académie Royale de Médecine in France, established in the first half of the 19th century. Magendie made several groundbreaking medical discoveries, but was far more aggressive than some of his contemporaries in his use of animal experimentation. For example, the discovery of the different functionalities of the dorsal and ventral spinal nerve roots was achieved by both Magendie and a Scottish anatomist named Charles Bell. Bell used an unconscious rabbit because of "the protracted cruelty of the dissection", which caused him to miss that the dorsal roots were also responsible for sensory information. Magendie, on the other hand, used conscious, six-week-old puppies for his own experiments. While Magendie's approach was more of an infringement on what would today be referred to as animal rights, both Bell and Magendie used the same rationalization for vivisection: the cost of animal lives and experimentation was well worth it for the benefit of humanity. Many viewed Magendie's work as cruel and unnecessarily torturous. Notably, Magendie carried out many of his experiments before the advent of anesthesia, but even after ether was discovered it was not used in any of his experiments or classes. Even during the period before anesthesia, other physiologists expressed their disgust with how he conducted his work. One visiting American physiologist described the animals as "victims" and the apparent sadism that Magendie displayed when teaching his classes. The cruelty of such experiments also made Magendie an important figure in early animal-rights legislation: his experiments were cited in the drafting of the British Cruelty to Animals Act 1876 and of the Cruel Treatment of Cattle Act 1822, otherwise known as Martin's Act. The act's namesake, the Irish MP and well-known anti-cruelty campaigner Richard Martin, described Magendie as a "disgrace to Society" after one of Magendie's widely discussed public vivisections (performances Martin likened to "anatomical theatres"), which reportedly involved the dissection of a greyhound, possibly over two days. Magendie faced widespread opposition in British society, not only among the general public but also among his contemporaries, including William Sharpey, who described his experiments as not only cruel but "purposeless" and "without sufficient object", a sentiment he claimed was shared by other physiologists. Animal vivisection: David Ferrier and the Cruelty to Animals Act 1876 The Cruelty to Animals Act 1876 in Britain determined that one could only conduct vivisection on animals with the appropriate license from the state, and that the work the physiologist was doing had to be original and absolutely necessary. The stage was set for such legislation by physiologist David Ferrier. Ferrier was a pioneer in understanding the brain and used animals to show, in 1873, that certain locales of the brain corresponded to bodily movement elsewhere in the body. He put these animals to sleep, and caused them to move unconsciously with a probe. Ferrier was successful, but many decried his use of animals in his experiments. Some of these arguments came from a religious standpoint. Some were concerned that Ferrier's experiments would separate God from the mind of man in the name of science.
Some of the anti-vivisection movement in England had its roots in Evangelicalism and Quakerism. These religions already had a distrust of science, only intensified by the recent publication of Darwin's theory of evolution in 1859. Neither side was pleased with how the Cruelty to Animals Act 1876 was passed. The scientific community felt as though the government was using the new regulations to restrict its ability to compete with the quickly advancing France and Germany. The anti-vivisection movement was also unhappy, because they believed the act was a concession to scientists for allowing vivisection to continue at all. Ferrier would continue to vex the anti-vivisection movement in Britain with his experiments when he had a debate with his German opponent, Friedrich Goltz. They would effectively enter the vivisection arena, with Ferrier presenting a monkey and Goltz presenting a dog, both of which had already been operated on. Ferrier won the debate, but did not have a license, leading the anti-vivisection movement to sue him in 1881. Ferrier was not found guilty, as his assistant was the one operating, and his assistant did have a license. Ferrier and his practices gained public support, leaving the anti-vivisection movement scrambling. They made the moral argument that, given recent developments, scientists would venture into more extreme practices, operating on "the cripple, the mute, the idiot, the convict, the pauper, to enhance the 'interest' of [the physiologist's] experiments". Human vivisection: It is possible that human vivisection was practised by some Greek anatomists in Alexandria in the 3rd century BC. Celsus in De Medicina states that Herophilos of Alexandria vivisected some criminals sent by the King, and the early Christian writer Tertullian states that Herophilos vivisected at least 600 live prisoners, although the accuracy of this claim is disputed by many historians. The Andalusian Arab Ibn Tufail elaborated on human vivisection in the 12th century in his treatise Hayy ibn Yaqzan, and Nadia Maftouni, discussing the subject in an extensive article, believes him to be among the early supporters of autopsy and vivisection. Unit 731, a biological and chemical warfare research and development unit of the Imperial Japanese Army, undertook lethal human experimentation during the period that comprised both the Second Sino-Japanese War and the Second World War (1937–1945). On the Filipino island of Mindanao, Moro Muslim prisoners of war were subjected to various forms of vivisection by the Japanese, in many cases without anesthesia. Nazi human experimentation involved many medical experiments on live subjects, such as the vivisections of Josef Mengele, usually without anesthesia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dimethyldioxirane** Dimethyldioxirane: Dimethyldioxirane (DMDO), also referred to as Murray's reagent in reference to Robert W. Murray, is a dioxirane derived from acetone and can be considered a monomer of acetone peroxide. It is a powerful yet selective oxidizing agent which finds use in organic synthesis. It is known only in the form of a dilute solution, usually in acetone, and hence the properties of the pure material are largely unknown. Synthesis: DMDO is not commercially available because of its instability. DMDO can be prepared as dilute solutions (~0.1 M) by treatment of acetone with potassium peroxymonosulfate, KHSO5, usually in the form of Oxone (2KHSO5·KHSO4·K2SO4). Synthesis: The preparation of DMDO is rather inefficient (typical yields < 3%) and typically only yields a relatively dilute solution in acetone (only up to approximately 0.1 M). This is tolerable because the preparation uses inexpensive substances: acetone, sodium bicarbonate, and potassium peroxymonosulfate (sold commercially as "Oxone"). The solution can be stored at low temperatures and its concentration may be assayed immediately prior to its use. Synthesis: The more active compound methyl(trifluoromethyl)dioxirane, (H3C)(F3C)CO2, can be similarly prepared from methyl trifluoromethyl ketone. Stability: Solutions are stable under refrigeration (−10 to −20 °C) for up to a week. The rate of decomposition increases upon exposure to light or heavy metals. Uses: The most common use for DMDO is the oxidation of alkenes to epoxides. One particular advantage of using DMDO is that the only byproduct of oxidation is acetone, a fairly innocuous and volatile compound. DMDO oxidations are particularly mild, sometimes allowing oxidations which might not otherwise be possible. In fact, DMDO is considered the reagent of choice for epoxidation, and in nearly all circumstances is as good as or better than peroxyacids such as meta-chloroperoxybenzoic acid (mCPBA). Despite its high reactivity, DMDO displays good selectivity for olefins. Typically, electron-deficient olefins are oxidized more slowly than electron-rich ones. DMDO will also oxidize several other functional groups. For example, DMDO will oxidize primary amines to nitro compounds and sulfides to sulfoxides. In some cases, DMDO will even oxidize unactivated C-H bonds. DMDO can also be used to convert nitro compounds to carbonyl compounds (Nef reaction).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biological pathway** Biological pathway: A biological pathway is a series of interactions among molecules in a cell that leads to a certain product or a change in the cell. Such a pathway can trigger the assembly of new molecules, such as a fat or protein. Pathways can also turn genes on and off, or spur a cell to move. Some of the most common biological pathways are involved in metabolism, the regulation of gene expression and the transmission of signals. Pathways play a key role in advanced studies of genomics. Biological pathway: Most common types of biological pathways: Metabolic pathway Genetic pathway Signal transduction pathway Pathways databases: KEGG Pathway database is a popular pathway search database widely used by biologists. WikiPathways is a community-curated pathway database using the "wiki" concept. All pathways have an open license and can be freely used. Reactome is a free and manually curated online database of biological pathways. NCI-Nature Pathway Interaction Database is a free biomedical database of human cellular signaling pathways (new official name: NCI Nature Pathway Interaction Database: Pathway, synonym: PID). PhosphoSitePlus is a database of observed post-translational modifications in human and mouse proteins; an online systems biology resource providing comprehensive information and tools for the study of protein post-translational modifications (PTMs) including phosphorylation, ubiquitination, acetylation and methylation. BioCyc database collection is an assortment of organism-specific Pathway/Genome Databases. Human Protein Reference Database is a centralized platform to visually depict and integrate information pertaining to domain architecture, post-translational modifications, interaction networks and disease association for each protein in the human proteome (the last release was #9 in 2010). PANTHER (Protein ANalysis THrough Evolutionary Relationships) is a large curated biological database of gene/protein families and their functionally related subfamilies that can be used to classify and identify the function of gene products. TRANSFAC (TRANScription FACtor database) is a manually curated database of eukaryotic transcription factors, their genomic binding sites and DNA binding profiles (provided by geneXplain GmbH). MiRTarBase is a curated database of microRNA-target interactions. DrugBank is a comprehensive, high-quality, freely accessible, online database containing information on drugs and drug targets. esyN is a network viewer and builder that allows users to import pathways from the BioModels database or from BioGRID, FlyBase and PomBase, and to see which drugs interact with the proteins in a network. Comparative Toxicogenomics Database (CTD) is a public website and research tool that curates scientific data describing relationships between chemicals/drugs, genes/proteins, diseases, taxa, phenotypes, GO annotations, pathways, and interaction modules; CTD illuminates how environmental chemicals affect human health. Pathway Commons is a project and database that uses the BioPAX language to convert, integrate and query other biological pathway and interaction databases.
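Several of the databases listed above expose public programmatic interfaces. As one hedged example, here is a minimal sketch against KEGG's REST service at rest.kegg.jp; the endpoint layout and the tab-separated response format are assumptions based on KEGG's public documentation at the time of writing and should be verified there before relying on them.

```python
# Fetch a few human pathway entries from the KEGG REST interface.
# Assumes: the /list/pathway/<org> endpoint exists and returns one
# tab-separated "id<TAB>description" record per line (verify against
# the current KEGG REST docs). Standard library only.

from urllib.request import urlopen

def kegg_list_pathways(organism="hsa", limit=5):
    """Return up to `limit` (pathway_id, description) pairs for an organism."""
    with urlopen(f"https://rest.kegg.jp/list/pathway/{organism}") as resp:
        lines = resp.read().decode().strip().splitlines()
    return [tuple(line.split("\t", 1)) for line in lines[:limit]]

for pathway_id, name in kegg_list_pathways():
    print(pathway_id, "->", name)
```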
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Network allocation vector** Network allocation vector: The network allocation vector (NAV) is a virtual carrier-sensing mechanism used with wireless network protocols such as IEEE 802.11 (Wi-Fi) and IEEE 802.16 (WiMAX). Virtual carrier-sensing is a logical abstraction that limits the need for physical carrier-sensing at the air interface in order to save power. The MAC-layer frame headers contain a duration field that specifies the transmission time required for the frame, during which the medium will be busy. Stations listening on the wireless medium read the duration field and set their NAV, which is an indicator for a station of how long it must defer from accessing the medium. Network allocation vector: The NAV may be thought of as a counter, which counts down to zero at a uniform rate. When the counter is zero, the virtual carrier-sensing indication is that the medium is idle; when nonzero, the indication is busy. The medium is also determined to be busy when the station (STA) is itself transmitting. In IEEE 802.11, the NAV represents the number of microseconds the sending STA intends to hold the medium busy (maximum of 32,767 microseconds). When the sender transmits a Request to Send (RTS) frame, the receiver waits one SIFS (short interframe space) before replying with a Clear to Send (CTS) frame. The sender then waits another SIFS before transmitting the data, and the receiver in turn waits one SIFS before sending the ACK. The NAV thus spans the interval from the first SIFS to the end of the ACK, and during this time the medium is considered busy. Wireless stations are often battery-powered, so to conserve power the stations may enter a power-saving mode. A station decrements its NAV counter until it becomes zero, at which time it is awakened to sense the medium again. Network allocation vector: The NAV virtual carrier-sensing mechanism is a prominent part of the CSMA/CA MAC protocol used with IEEE 802.11 WLANs. NAV is used in DCF, PCF and HCF.
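The duration arithmetic is simple enough to show in a few lines. The sketch below computes the NAV that stations overhearing an RTS would set for a full RTS/CTS/DATA/ACK exchange; the SIFS value matches 802.11a, while the CTS, data and ACK transmission times are made-up illustrative figures, not values from the article.

```python
# NAV set by an RTS for an RTS/CTS exchange (sketch; the frame
# transmission times below are illustrative assumptions).
SIFS_US = 16      # short interframe space in 802.11a, microseconds
CTS_US = 44       # assumed CTS transmission time
DATA_US = 1400    # assumed data-frame transmission time
ACK_US = 44       # assumed ACK transmission time

# The RTS duration field covers: SIFS + CTS + SIFS + DATA + SIFS + ACK.
nav = 3 * SIFS_US + CTS_US + DATA_US + ACK_US
assert nav <= 32767           # NAV is capped at 32,767 microseconds
print(f"Stations hearing the RTS defer for {nav} microseconds")
```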
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**7-Methylxanthine** 7-Methylxanthine: 7-Methylxanthine (7-MX), also known as heteroxanthine, is an active metabolite of caffeine (1,3,7-trimethylxanthine) and theobromine (3,7-dimethylxanthine). It is a non-selective antagonist of the adenosine receptors. The compound may slow the progression of myopia (nearsightedness) and is under investigation for this purpose in children with myopia. 7-Methylxanthine: Studies indicate that systemic treatment with 7-MX appears to be effective in retarding axial elongation and myopia progression among myopic children. The treatment appears safe and without side effects, and may be continued until 18–20 years of age, when age-related cross-linking of collagen prevents further elongation of the eye. Additionally, further studies show that oral intake of 7-MX was associated with reduced myopia progression and reduced axial elongation in a sample of myopic children from Denmark; randomised controlled trials are needed to determine whether the association is causal. 7-Methylxanthine: Further clinical trials will be conducted to ascertain the full efficacy of this drug for controlling the progression of myopia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stock catalyst** Stock catalyst: A stock catalyst is an event that causes the price of a security to move, often significantly. In a simplified sense, it can be either bad news that unnerves investors or good news that gets investors interested in the stock again. Stock catalysts often change investor sentiment and can mark the beginning or end of stock trends. The most common catalysts arise from unexpected information that triggers the market to reconsider a company's business prospects. Some investors and traders use catalysts in short-term trading strategies to generate a profit. Types: A stock catalyst can be either a sudden catalyst or an anticipated catalyst. Sudden catalysts cannot be anticipated and are announced without warning by the company in a press release; an example is a company partnership, which is announced without prior notice to investors. Anticipated catalysts are catalysts that investors are aware of before the event happens. They are generally pre-scheduled and can strongly affect a company's stock price during the days leading up to and including the event. Examples: Examples of stock catalysts include earnings releases, investor conferences, product releases, FDA/CDC approvals, economic events, metric reveals, court decisions, corporate actions, IPOs, IPO lockup expirations, partnerships, contracts, and analyst revisions. Trading strategies: Buy the Rumor, Sell the News: This trading strategy revolves around buying (or selling) the stock during the roughly three weeks leading up to the catalyst event and closing the position before the event actually occurs. The strategy can be predictable because the market tends to price in rumors around the catalyst in the days leading up to the event: if the catalyst is expected to be positive, the company's stock is also expected to rise in the weeks leading up to it. Trading strategies: Trade the Catalyst: Another trading strategy related to stock catalysts is buying or selling the stock and maintaining that position through the catalyst event. This is a highly speculative and risky strategy due to the unpredictability of catalyst events such as earnings releases.
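As a toy quantification of the "buy the rumor" window, the sketch below computes the cumulative run-up over the 15 trading sessions (roughly three weeks) before a known event date. The price series is fabricated purely for illustration.

```python
# Sketch of the "buy the rumor, sell the news" window: cumulative
# return over the 15 trading days before a known event date.
# The price series below is hypothetical illustration data.
prices = [100, 101, 101.5, 103, 102, 104, 105, 107, 106, 108,
          109, 111, 110, 112, 114, 115]     # last entry = event day

window = prices[-16:-1]                     # 15 sessions before the event
pre_event_return = window[-1] / window[0] - 1
print(f"Run-up into the event: {pre_event_return:.1%}")  # exit before it hits
```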
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MED24** MED24: Mediator of RNA polymerase II transcription subunit 24 is a protein that in humans is encoded by the MED24 gene. Function: This gene encodes a component of the mediator complex (also known as TRAP, SMCC, DRIP, or ARC), a transcriptional coactivator complex thought to be required for the expression of almost all genes. The mediator complex is recruited by transcriptional activators or nuclear receptors to induce gene expression, possibly by interacting with RNA polymerase II and promoting the formation of a transcriptional pre-initiation complex. Multiple transcript variants encoding different isoforms have been found for this gene. Interactions: MED24 has been shown to interact with estrogen receptor alpha, cyclin-dependent kinase 8, the calcitriol receptor and BRCA1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aiyara cluster** Aiyara cluster: An Aiyara cluster is a low-powered computer cluster specially designed to process Big Data. The Aiyara cluster model can be considered a specialization of the Beowulf cluster in the sense that Aiyara is also built from commodity hardware — though not from inexpensive personal computers, but from system-on-chip computer boards. Unlike Beowulf, applications of an Aiyara cluster are scoped only to the Big Data area, not to scientific high-performance computing. Another important property of an Aiyara cluster is that it is low-power: it must be built with a class of processing units that produces less heat. Aiyara cluster: The name Aiyara originally referred to the first ARM-based cluster built by Wichai Srisuruk and Chanwit Kaewkasi at Suranaree University of Technology. The name "Aiyara" comes from a Thai word meaning "elephant", chosen to reflect the cluster's underlying software stack, Apache Hadoop. Like Beowulf, an Aiyara cluster does not define a particular software stack to run atop it. A cluster normally runs a variant of the Linux operating system; commonly used Big Data software stacks are Apache Hadoop and Apache Spark. Development: A report of the Aiyara hardware successfully processing a non-trivial amount of Big Data was published in the Proceedings of ICSEC 2014. Aiyara Mk-I, the second Aiyara cluster, consists of 22 Cubieboards. It is the first known SoC-based ARM cluster able to process Big Data successfully using the Spark and HDFS stack. The Aiyara cluster model, a technical description explaining how to build an Aiyara cluster, was later published by Chanwit Kaewkasi in DZone's 2014 Big Data Guide. Development: Further results and cluster optimization techniques, which boosted the cluster's processing rate to 0.9 GB/min while still preserving low power consumption, were reported in the Proceedings of IEEE's TENCON 2014. The whole architecture of the software stack, including the runtime, data integrity verification and data compression, was studied and improved. The work reported in this paper achieved a processing rate of almost 0.9 GB/min, successfully processing the same benchmark from the previous work in roughly 38 minutes.
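The cited papers do not publish their code, but a Spark job of the kind such a cluster runs might look like the following minimal sketch; the application name, hostname and HDFS paths are hypothetical.

```python
# Minimal PySpark word-count sketch of the Spark-on-HDFS workload an
# Aiyara-style cluster runs (hostname and paths are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("aiyara-wordcount").getOrCreate()
sc = spark.sparkContext

counts = (sc.textFile("hdfs://aiyara-master:9000/data/input")
            .flatMap(lambda line: line.split())    # tokenize each line
            .map(lambda word: (word, 1))           # emit (word, 1) pairs
            .reduceByKey(lambda a, b: a + b))      # sum counts per word

counts.saveAsTextFile("hdfs://aiyara-master:9000/data/wordcounts")
spark.stop()
```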
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Derivation of the Schwarzschild solution** Derivation of the Schwarzschild solution: The Schwarzschild solution describes spacetime under the influence of a massive, non-rotating, spherically symmetric object. It is considered by some to be one of the simplest and most useful solutions to the Einstein field equations. Assumptions and notation: Working in a coordinate chart with coordinates $(r, \theta, \phi, t)$ labelled 1 to 4 respectively, we begin with the metric in its most general form (10 independent components, each of which is a smooth function of 4 variables). The solution is assumed to be spherically symmetric, static and vacuum. For the purposes of this article, these assumptions may be stated as follows (see the relevant links for precise definitions): A spherically symmetric spacetime is one that is invariant under rotations and taking the mirror image. Assumptions and notation: A static spacetime is one in which all metric components are independent of the time coordinate $t$ (so that $\frac{\partial}{\partial t} g_{\mu\nu} = 0$) and the geometry of the spacetime is unchanged under a time-reversal $t \to -t$. A vacuum solution is one that satisfies the equation $T_{ab} = 0$. From the Einstein field equations (with zero cosmological constant), this implies that $R_{ab} = 0$, since contracting $R_{ab} - \frac{R}{2} g_{ab} = 0$ yields $R = 0$. The metric signature used here is $(+,+,+,-)$. Diagonalising the metric: The first simplification to be made is to diagonalise the metric. Under the coordinate transformation $(r,\theta,\phi,t) \to (r,\theta,\phi,-t)$, all metric components should remain the same. The metric components $g_{\mu 4}$ ($\mu \neq 4$) change under this transformation as: $g'_{\mu 4} = \frac{\partial x^{\alpha}}{\partial x'^{\mu}} \frac{\partial x^{\beta}}{\partial x'^{4}} g_{\alpha\beta} = -g_{\mu 4}$ ($\mu \neq 4$). But, as we expect $g'_{\mu 4} = g_{\mu 4}$ (metric components remain the same), this means that: $g_{\mu 4} = 0$ ($\mu \neq 4$). Similarly, the coordinate transformations $(r,\theta,\phi,t) \to (r,\theta,-\phi,t)$ and $(r,\theta,\phi,t) \to (r,-\theta,\phi,t)$ respectively give: $g_{\mu 3} = 0$ ($\mu \neq 3$) and $g_{\mu 2} = 0$ ($\mu \neq 2$). Putting all these together gives: $g_{\mu\nu} = 0$ ($\mu \neq \nu$), and hence the metric must be of the form: $ds^2 = g_{11}\,dr^2 + g_{22}\,d\theta^2 + g_{33}\,d\phi^2 + g_{44}\,dt^2$, where the four metric components are independent of the time coordinate $t$ (by the static assumption). Simplifying the components: On each hypersurface of constant $t$, constant $\theta$ and constant $\phi$ (i.e., on each radial line), $g_{11}$ should only depend on $r$ (by spherical symmetry). Hence $g_{11}$ is a function of a single variable: $g_{11} = A(r)$. A similar argument applied to $g_{44}$ shows that: $g_{44} = B(r)$. On the hypersurfaces of constant $t$ and constant $r$, it is required that the metric be that of a 2-sphere: $dl^2 = r_0^2 (d\theta^2 + \sin^2\theta\, d\phi^2)$. Choosing one of these hypersurfaces (the one with radius $r_0$, say), the metric components restricted to this hypersurface (which we denote by $\tilde g_{22}$ and $\tilde g_{33}$) should be unchanged under rotations through $\theta$ and $\phi$ (again, by spherical symmetry). Comparing the forms of the metric on this hypersurface gives: $\tilde g_{22}\, d\theta^2 + \tilde g_{33}\, d\phi^2 = r_0^2 (d\theta^2 + \sin^2\theta\, d\phi^2)$, which immediately yields: $\tilde g_{22} = r_0^2$ and $\tilde g_{33} = r_0^2 \sin^2\theta$. But this is required to hold on each hypersurface; hence, $g_{22} = r^2$ and $g_{33} = r^2 \sin^2\theta$. An alternative intuitive way to see that $g_{22}$ and $g_{33}$ must be the same as for a flat spacetime is that stretching or compressing an elastic material in a spherically symmetric manner (radially) will not change the angular distance between two points. Simplifying the components: Thus, the metric can be put in the form: $ds^2 = A(r)\,dr^2 + r^2\,d\theta^2 + r^2 \sin^2\theta\, d\phi^2 + B(r)\,dt^2$, with $A$ and $B$ as yet undetermined functions of $r$. Note that if $A$ or $B$ is equal to zero at some point, the metric would be singular at that point. Calculating the Christoffel symbols: Using the metric above, we find the Christoffel symbols, where the indices are $(1,2,3,4) = (r,\theta,\phi,t)$.
The sign $'$ denotes a total derivative of a function. $\Gamma^{1}_{ik} = \begin{bmatrix} A'/(2A) & 0 & 0 & 0 \\ 0 & -r/A & 0 & 0 \\ 0 & 0 & -r\sin^2\theta/A & 0 \\ 0 & 0 & 0 & -B'/(2A) \end{bmatrix}$, $\Gamma^{2}_{ik} = \begin{bmatrix} 0 & 1/r & 0 & 0 \\ 1/r & 0 & 0 & 0 \\ 0 & 0 & -\sin\theta\cos\theta & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$, $\Gamma^{3}_{ik} = \begin{bmatrix} 0 & 0 & 1/r & 0 \\ 0 & 0 & \cot\theta & 0 \\ 1/r & \cot\theta & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$, $\Gamma^{4}_{ik} = \begin{bmatrix} 0 & 0 & 0 & B'/(2B) \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ B'/(2B) & 0 & 0 & 0 \end{bmatrix}$. Using the field equations to find A(r) and B(r): To determine $A$ and $B$, the vacuum field equations are employed: $R_{\alpha\beta} = 0$. Hence: $\Gamma^{\rho}_{\beta\alpha,\rho} - \Gamma^{\rho}_{\rho\alpha,\beta} + \Gamma^{\rho}_{\rho\lambda}\Gamma^{\lambda}_{\beta\alpha} - \Gamma^{\rho}_{\beta\lambda}\Gamma^{\lambda}_{\rho\alpha} = 0$, where a comma is used to set off the index that is being used for the derivative. The Ricci curvature is diagonal in the given coordinates: $R_{tt} = -\frac{1}{4}\frac{B'}{A}\left(\frac{A'}{A} - \frac{B'}{B} + \frac{4}{r}\right) - \frac{1}{2}\left(\frac{B'}{A}\right)'$, $R_{rr} = -\frac{1}{2}\left(\frac{B'}{B}\right)' - \frac{1}{4}\left(\frac{B'}{B}\right)^2 + \frac{1}{4}\frac{A'}{A}\left(\frac{B'}{B} + \frac{4}{r}\right)$, $R_{\theta\theta} = 1 - \left(\frac{r}{A}\right)' - \frac{r}{2A}\left(\frac{A'}{A} + \frac{B'}{B}\right)$, $R_{\phi\phi} = \sin^2\theta \, R_{\theta\theta}$, where the prime means the $r$ derivative of the functions. Using the field equations to find A(r) and B(r): Only three of the field equations are nontrivial and upon simplification become: $4A'B^2 - 2rB''AB + rA'B'B + rB'^2A = 0$, $rA'B + 2A^2B - 2AB - rB'A = 0$, $-2rB''AB + rA'B'B + rB'^2A - 4B'AB = 0$ (the fourth equation is just $\sin^2\theta$ times the second equation). Subtracting the first and third equations produces: $A'B + AB' = 0 \Rightarrow A(r)B(r) = K$, where $K$ is a non-zero real constant. Substituting $A(r)B(r) = K$ into the second equation and tidying up gives: $rA' = A(1-A)$, which has general solution: $A(r) = \left(1 + \frac{1}{Sr}\right)^{-1}$ for some non-zero real constant $S$. Hence, the metric for a static, spherically symmetric vacuum solution is now of the form: $ds^2 = \left(1 + \frac{1}{Sr}\right)^{-1} dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\phi^2) + K\left(1 + \frac{1}{Sr}\right) dt^2$. Note that the spacetime represented by the above metric is asymptotically flat, i.e. as $r \to \infty$, the metric approaches that of the Minkowski metric and the spacetime manifold resembles that of Minkowski space. Using the weak-field approximation to find K and S: The geodesics of the metric (obtained where $ds$ is extremised) must, in some limit (e.g., toward infinite speed of light), agree with the solutions of Newtonian motion (e.g., obtained by Lagrange equations). (The metric must also limit to Minkowski space when the mass it represents vanishes.) $0 = \delta \int \frac{ds}{dt}\, dt = \delta \int (KE + PE_g)\, dt$ (where $KE$ is the kinetic energy and $PE_g$ is the potential energy due to gravity). The constants $K$ and $S$ are fully determined by some variant of this approach; from the weak-field approximation one arrives at the result: $g_{44} = K\left(1 + \frac{1}{Sr}\right) \approx -c^2 + \frac{2Gm}{r} = -c^2\left(1 - \frac{2Gm}{c^2 r}\right)$, where $G$ is the gravitational constant, $m$ is the mass of the gravitational source and $c$ is the speed of light. It is found that: $K = -c^2$ and $\frac{1}{S} = -\frac{2Gm}{c^2}$. Hence: $A(r) = \left(1 - \frac{2Gm}{c^2 r}\right)^{-1}$ and $B(r) = -c^2\left(1 - \frac{2Gm}{c^2 r}\right)$. So, the Schwarzschild metric may finally be written in the form: $ds^2 = \left(1 - \frac{2Gm}{c^2 r}\right)^{-1} dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\phi^2) - c^2\left(1 - \frac{2Gm}{c^2 r}\right) dt^2$. Note that: $\frac{2Gm}{c^2} = r_s$ is the definition of the Schwarzschild radius for an object of mass $m$, so the Schwarzschild metric may be rewritten in the alternative form: $ds^2 = \left(1 - \frac{r_s}{r}\right)^{-1} dr^2 + r^2(d\theta^2 + \sin^2\theta\, d\phi^2) - c^2\left(1 - \frac{r_s}{r}\right) dt^2$, which shows that the metric becomes singular approaching the event horizon (that is, $r \to r_s$). The metric singularity is not a physical one (although there is a real physical singularity at $r = 0$), as can be shown by using a suitable coordinate transformation (e.g. the Kruskal–Szekeres coordinate system). Alternate derivation using known physics in special cases: The Schwarzschild metric can also be derived using the known physics for a circular orbit and a temporarily stationary point mass. Start with the metric with coefficients that are unknown coefficients of $r$ (working in the equatorial plane, so the $\theta$ terms drop out): $-c^2 = \left(\frac{ds}{d\tau}\right)^2 = A(r)\left(\frac{dr}{d\tau}\right)^2 + r^2\left(\frac{d\phi}{d\tau}\right)^2 + B(r)\left(\frac{dt}{d\tau}\right)^2$. Now apply the Euler–Lagrange equation to the arc length integral $J = \int_{\tau_1}^{\tau_2} \sqrt{-(ds/d\tau)^2}\, d\tau$. Since $ds/d\tau$ is constant, the integrand can be replaced with $(ds/d\tau)^2$, because the E–L equation is exactly the same if the integrand is multiplied by any constant.
Applying the E–L equation to $J$ with the modified integrand yields: $A'(r)\dot r^2 + 2r\dot\phi^2 + B'(r)\dot t^2 = 2A'(r)\dot r^2 + 2A(r)\ddot r$, $0 = 2r\dot r\dot\phi + r^2\ddot\phi$, $0 = B'(r)\dot r\dot t + B(r)\ddot t$, where the dot denotes differentiation with respect to $\tau$. In a circular orbit $\dot r = \ddot r = 0$, so the first E–L equation above is equivalent to $2r\dot\phi^2 + B'(r)\dot t^2 = 0 \Leftrightarrow B'(r) = -2r\dot\phi^2/\dot t^2 = -2r(d\phi/dt)^2$. Kepler's third law of motion is $\frac{T^2}{r^3} = \frac{4\pi^2}{G(M+m)}$. In a circular orbit, the period $T$ equals $2\pi/(d\phi/dt)$, implying $\left(\frac{d\phi}{dt}\right)^2 = GM/r^3$, since the point mass $m$ is negligible compared to the mass of the central body $M$. So $B'(r) = -2GM/r^2$, and integrating this yields $B(r) = 2GM/r + C$, where $C$ is an unknown constant of integration. $C$ can be determined by setting $M = 0$, in which case the spacetime is flat and $B(r) = -c^2$. So $C = -c^2$ and $B(r) = 2GM/r - c^2 = c^2(2GM/c^2 r - 1) = c^2(r_s/r - 1)$. When the point mass is temporarily stationary, $\dot r = 0$ and $\dot\phi = 0$. The original metric equation becomes $\dot t^2 = -c^2/B(r)$, and the first E–L equation above becomes $A(r) = B'(r)\dot t^2/(2\ddot r)$. When the point mass is temporarily stationary, $\ddot r$ is the acceleration of gravity, $-MG/r^2$. So $A(r) = \left(-\frac{2MG}{r^2}\right)\left(\frac{-c^2}{2MG/r - c^2}\right)\left(-\frac{r^2}{2MG}\right) = \frac{1}{1 - 2MG/(rc^2)} = \frac{1}{1 - r_s/r}$. Alternative form in isotropic coordinates: The original formulation of the metric uses anisotropic coordinates, in which the velocity of light is not the same in the radial and transverse directions. Arthur Eddington gave alternative forms in isotropic coordinates. For isotropic spherical coordinates $r_1$, $\theta$, $\phi$, the coordinates $\theta$ and $\phi$ are unchanged, and then (provided $r \geq \frac{2Gm}{c^2}$): $r = r_1\left(1 + \frac{Gm}{2c^2 r_1}\right)^2$, $dr = dr_1\left(1 - \frac{(Gm)^2}{4c^4 r_1^2}\right)$, and $\left(1 - \frac{2Gm}{c^2 r}\right) = \left(1 - \frac{Gm}{2c^2 r_1}\right)^2 \Big/ \left(1 + \frac{Gm}{2c^2 r_1}\right)^2$. Then for isotropic rectangular coordinates $x$, $y$, $z$: $x = r_1 \sin\theta\cos\phi$, $y = r_1 \sin\theta\sin\phi$, $z = r_1 \cos\theta$. The metric then becomes, in isotropic rectangular coordinates: $ds^2 = \left(1 + \frac{Gm}{2c^2 r_1}\right)^4 (dx^2 + dy^2 + dz^2) - c^2 dt^2 \left(1 - \frac{Gm}{2c^2 r_1}\right)^2 \Big/ \left(1 + \frac{Gm}{2c^2 r_1}\right)^2$. Dispensing with the static assumption – Birkhoff's theorem: In deriving the Schwarzschild metric, it was assumed that the metric was vacuum, spherically symmetric and static. The static assumption is unneeded, as Birkhoff's theorem states that any spherically symmetric vacuum solution of Einstein's field equations is stationary; the Schwarzschild solution thus follows. Birkhoff's theorem has the consequence that any pulsating star that remains spherically symmetric does not generate gravitational waves, as the region exterior to the star remains static.
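As a lightweight sanity check of the derivation, the sketch below verifies symbolically (via SymPy) that $A(r) = \left(1 + \frac{1}{Sr}\right)^{-1}$ solves the field equation $rA' = A(1-A)$ obtained above, and then evaluates the Schwarzschild radius $r_s = 2Gm/c^2$ for the Sun with rounded constants.

```python
# Sanity checks on the derivation above (a sketch; constants rounded).
import sympy as sp

# 1. Verify symbolically that A(r) = (1 + 1/(S r))**-1 solves r A' = A (1 - A).
r, S = sp.symbols('r S', positive=True)
A = (1 + 1/(S*r))**-1
print(sp.simplify(r*sp.diff(A, r) - A*(1 - A)))   # prints 0

# 2. Evaluate the Schwarzschild radius r_s = 2 G m / c^2 for the Sun.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
m_sun = 1.989e30     # solar mass, kg
r_s = 2 * G * m_sun / c**2
print(f"Schwarzschild radius of the Sun: {r_s/1e3:.2f} km")   # ~2.95 km
```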
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Distributed file system for cloud** Distributed file system for cloud: A distributed file system for cloud is a file system that allows many clients to have access to data and supports operations (create, delete, modify, read, write) on that data. Each data file may be partitioned into several parts called chunks. Each chunk may be stored on different remote machines, facilitating the parallel execution of applications. Typically, data is stored in files in a hierarchical tree, where the nodes represent directories. There are several ways to share files in a distributed architecture: each solution must be suitable for a certain type of application, depending on how complex the application is. Meanwhile, the security of the system must be ensured. Confidentiality, availability and integrity are the main keys for a secure system. Distributed file system for cloud: Users can share computing resources through the Internet thanks to cloud computing, which is typically characterized by scalable and elastic resources — such as physical servers, applications and any services that are virtualized and allocated dynamically. Synchronization is required to make sure that all devices are up-to-date. Distributed file systems enable many big, medium, and small enterprises to store and access their remote data as they do local data, facilitating the use of variable resources. Overview: History: Today, there are many implementations of distributed file systems. The first file servers were developed by researchers in the 1970s. Sun Microsystems' Network File System became available in the 1980s. Before that, people who wanted to share files used the sneakernet method, physically transporting files on storage media from place to place. Once computer networks started to proliferate, it became obvious that the existing file systems had many limitations and were unsuitable for multi-user environments. Users initially used FTP to share files. FTP first ran on the PDP-10 at the end of 1973. Even with FTP, files needed to be copied from the source computer onto a server and then from the server onto the destination computer. Users were required to know the physical addresses of all computers involved with the file sharing. Overview: Supporting techniques: Modern data centers must support large, heterogeneous environments, consisting of large numbers of computers of varying capacities. Cloud computing coordinates the operation of all such systems, with techniques such as data center networking (DCN), the MapReduce framework, which supports data-intensive computing applications in parallel and distributed systems, and virtualization techniques that provide dynamic resource allocation, allowing multiple operating systems to coexist on the same physical server. Overview: Applications: Cloud computing provides large-scale computing thanks to its ability to provide the needed CPU and storage resources to the user with complete transparency. This makes cloud computing particularly suited to support different types of applications that require large-scale distributed processing. This data-intensive computing needs a high-performance file system that can share data between virtual machines (VMs). Cloud computing dynamically allocates the needed resources, releasing them once a task is finished, requiring users to pay only for needed services, often via a service-level agreement.
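To make the chunking idea from the opening paragraph concrete, here is a minimal sketch (not any particular system's implementation) that splits a file into fixed-size chunks and stripes them round-robin across storage servers; the 64 MB chunk size mirrors the GFS default discussed later, and the server names are illustrative assumptions.

```python
# Sketch: split a file into fixed-size chunks and stripe them across
# servers round-robin (chunk size and server names are illustrative).
CHUNK_SIZE = 64 * 1024 * 1024                  # 64 MB, as in GFS
SERVERS = ["server-0", "server-1", "server-2", "server-3"]

def chunk_and_place(path):
    """Yield (chunk_index, server, data) for each chunk of the file."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:                       # end of file reached
                break
            yield index, SERVERS[index % len(SERVERS)], data
            index += 1
```

Because consecutive chunks land on different servers, a client can fetch several parts of the same file in parallel, which is the point of striping.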
Cloud computing and cluster computing paradigms are becoming increasingly important to industrial data processing and scientific applications such as astronomy and physics, which frequently require the availability of large numbers of computers to carry out experiments. Architectures: Most distributed file systems are built on the client-server architecture, but other, decentralized, solutions exist as well. Architectures: Client-server architecture: Network File System (NFS) uses a client-server architecture, which allows sharing of files between a number of machines on a network as if they were located locally, providing a standardized view. The NFS protocol allows heterogeneous clients' processes, probably running on different machines and under different operating systems, to access files on a distant server, ignoring the actual location of files. Relying on a single server results in the NFS protocol suffering from potentially low availability and poor scalability. Using multiple servers does not solve the availability problem, since each server is working independently. The model of NFS is a remote file service. This model is also called the remote access model, which is in contrast with the upload/download model: Remote access model: provides transparency; the client has access to the file and sends requests to the remote file while the file remains on the server. Architectures: Upload/download model: the client can access the file only locally; the client has to download the file, make modifications, and upload it again so that it can be used by other clients. The file system used by NFS is almost the same as the one used by Unix systems. Files are hierarchically organized into a naming graph in which directories and files are represented by nodes. Architectures: Cluster-based architectures: A cluster-based architecture ameliorates some of the issues in client-server architectures, improving the execution of applications in parallel. The technique used here is file striping: a file is split into multiple chunks, which are "striped" across several storage servers. The goal is to allow access to different parts of a file in parallel. If the application does not benefit from this technique, then it would be more convenient to store different files on different servers. However, when it comes to organizing a distributed file system for large data centers, such as Amazon and Google, that offer services to web clients allowing multiple operations (reading, updating, deleting, ...) on a large number of files distributed among a large number of computers, then cluster-based solutions become more beneficial. Note that having a large number of computers may mean more hardware failures. Two of the most widely used distributed file systems (DFS) of this type are the Google File System (GFS) and the Hadoop Distributed File System (HDFS). The file systems of both are implemented by user-level processes running on top of a standard operating system (Linux in the case of GFS). Architectures: Design principles: Goals: Google File System (GFS) and Hadoop Distributed File System (HDFS) are specifically built for handling batch processing on very large data sets.
Architectures: For that, the following hypotheses must be taken into account: High availability: the cluster can contain thousands of file servers and some of them can be down at any time. A server belongs to a rack, a room, a data center, a country, and a continent, in order to precisely identify its geographical location. The size of a file can vary from many gigabytes to many terabytes, and the file system should be able to support a massive number of files. The need to support append operations and allow file contents to be visible even while a file is being written. Communication is reliable among working machines: TCP/IP is used with a remote procedure call (RPC) communication abstraction; TCP allows the client to know almost immediately when there is a problem and a need to make a new connection. Architectures: Load balancing: Load balancing is essential for efficient operation in distributed environments. It means distributing work among different servers, fairly, in order to get more work done in the same amount of time and to serve clients faster. In a system containing N chunkservers in a cloud (N being 1000, 10000, or more), where a certain number of files are stored, each file is split into several parts or chunks of fixed size (for example, 64 megabytes), the load of each chunkserver being proportional to the number of chunks hosted by the server. In a load-balanced cloud, resources can be used efficiently while maximizing the performance of MapReduce-based applications. Architectures: Load rebalancing: In a cloud computing environment, failure is the norm, and chunkservers may be upgraded, replaced, and added to the system. Files can also be dynamically created, deleted, and appended. That leads to load imbalance in a distributed file system, meaning that the file chunks are not distributed equitably between the servers. Architectures: Distributed file systems in clouds such as GFS and HDFS rely on central or master servers or nodes (Master for GFS and NameNode for HDFS) to manage the metadata and the load balancing. The master rebalances replicas periodically: data must be moved from one DataNode/chunkserver to another if free space on the first server falls below a certain threshold. However, this centralized approach can become a bottleneck for those master servers if they become unable to manage a large number of file accesses, as it increases their already heavy loads. The load-rebalance problem is NP-hard. In order to get a large number of chunkservers to work in collaboration, and to solve the problem of load balancing in distributed file systems, several approaches have been proposed, such as reallocating file chunks so that the chunks can be distributed as uniformly as possible while reducing the movement cost as much as possible. Architectures: Google File System: Description: Google, one of the biggest internet companies, has created its own distributed file system, named Google File System (GFS), to meet the rapidly growing demands of Google's data processing needs, and it is used for all cloud services. GFS is a scalable distributed file system for data-intensive applications. It provides fault-tolerant, high-performance data storage to a large number of clients accessing it simultaneously. Architectures: GFS uses MapReduce, which allows users to create programs and run them on multiple machines without thinking about parallelization and load-balancing issues.
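Before looking at GFS's architecture in detail, the sketch below illustrates the two bookkeeping tasks just described: a master tracking which server hosts which chunk, and a greedy rebalancing pass. This is an illustrative heuristic only; the real rebalancing problem is NP-hard, and neither GFS nor HDFS works exactly this way.

```python
# Toy master-node bookkeeping: chunk placement plus a greedy
# rebalancing pass (an illustrative heuristic, not GFS/HDFS code).
chunk_locations = {}                    # chunk id -> hosting server
loads = {"cs1": 0, "cs2": 0, "cs3": 0}  # server -> number of chunks

def place(chunk_id):
    """Assign a new chunk to the least-loaded server."""
    server = min(loads, key=loads.get)
    chunk_locations[chunk_id] = server
    loads[server] += 1

def rebalance():
    """Move chunks from the most- to the least-loaded server until
    the spread is within one chunk."""
    while max(loads.values()) - min(loads.values()) > 1:
        src = max(loads, key=loads.get)
        dst = min(loads, key=loads.get)
        chunk_id = next(c for c, s in chunk_locations.items() if s == src)
        chunk_locations[chunk_id] = dst
        loads[src] -= 1
        loads[dst] += 1
```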
GFS architecture is based on having a single master server for multiple chunkservers and multiple clients. The master server, running on a dedicated node, is responsible for coordinating storage resources and managing the files' metadata (the equivalent of, for example, inodes in classical file systems). Architectures: Each file is split into multiple chunks of 64 megabytes. Each chunk is stored on a chunkserver. A chunk is identified by a chunk handle, a globally unique 64-bit number that is assigned by the master when the chunk is first created. Architectures: The master maintains all of the files' metadata, including file names, directories, and the mapping of files to the list of chunks that contain each file's data. The metadata is kept in the master server's main memory, along with the mapping of files to chunks. Updates to this data are logged to an operation log on disk. This operation log is replicated onto remote machines. When the log becomes too large, a checkpoint is made and the main-memory data is stored in a B-tree structure to facilitate mapping back into the main memory. Architectures: Fault tolerance: To facilitate fault tolerance, each chunk is replicated onto multiple (by default, three) chunkservers. A chunk is available on at least one chunkserver. The advantage of this scheme is simplicity. The master is responsible for allocating the chunkservers for each chunk and is contacted only for metadata information. For all other data, the client has to interact with the chunkservers. Architectures: The master keeps track of where a chunk is located. However, it does not attempt to maintain the chunk locations precisely but only occasionally contacts the chunkservers to see which chunks they have stored. This allows for scalability and helps prevent bottlenecks due to increased workload. In GFS, most files are modified by appending new data rather than overwriting existing data. Once written, the files are usually only read sequentially rather than randomly, and that makes this DFS the most suitable for scenarios in which many large files are created once but read many times. Architectures: File processing: When a client wants to write to or update a file, the master will assign a replica, which will be the primary replica if it is the first modification. The process of writing is composed of two steps: Sending: First, and by far the most important, the client contacts the master to find out which chunkservers hold the data. The client is given a list of replicas identifying the primary and secondary chunkservers. The client then contacts the nearest replica chunkserver and sends the data to it. This server will send the data to the next closest one, which then forwards it to yet another replica, and so on. The data is then propagated and cached in memory but not yet written to a file. Architectures: Writing: When all the replicas have received the data, the client sends a write request to the primary chunkserver, identifying the data that was sent in the sending phase. The primary server will then assign a sequence number to the write operations that it has received, apply the writes to the file in serial-number order, and forward the write requests in that order to the secondaries. Meanwhile, the master is kept out of the loop. Consequently, we can differentiate two types of flows: the data flow and the control flow. Data flow is associated with the sending phase and control flow is associated with the writing phase.
This assures that the primary chunkserver takes control of the write order. Architectures: Note that when the master assigns the write operation to a replica, it increments the chunk version number and informs all of the replicas containing that chunk of the new version number. Chunk version numbers allow for update error-detection: if a replica wasn't updated because its chunkserver was down, its stale version number reveals this. Some new Google applications did not work well with the 64-megabyte chunk size. To solve that problem, GFS started, in 2004, to implement the Bigtable approach. Architectures: Hadoop Distributed File System: HDFS, developed by the Apache Software Foundation, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes). Its architecture is similar to GFS, i.e. a server/client architecture. HDFS is normally installed on a cluster of computers. Architectures: The design concept of Hadoop is informed by Google's, with Google File System, Google MapReduce and Bigtable being implemented by Hadoop Distributed File System (HDFS), Hadoop MapReduce, and Hadoop Base (HBase) respectively. Like GFS, HDFS is suited for scenarios with write-once-read-many file access, and supports file appends and truncates in lieu of random reads and writes, to simplify data-coherency issues. An HDFS cluster consists of a single NameNode and several DataNode machines. The NameNode, a master server, manages and maintains the metadata of the storage DataNodes in its RAM. DataNodes manage storage attached to the nodes that they run on. NameNode and DataNode are software designed to run on everyday-use machines, which typically run under a Linux OS. HDFS can be run on any machine that supports Java and therefore can run either the NameNode or the DataNode software. On an HDFS cluster, a file is split into one or more equal-size blocks, except for the possibility of the last block being smaller. Each block is stored on multiple DataNodes, and each may be replicated on multiple DataNodes to guarantee availability. By default, each block is replicated three times, a process called "block-level replication". The NameNode manages the file-system namespace operations such as opening, closing, and renaming files and directories, and regulates file access. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for servicing read and write requests from the file system's clients, managing block allocation or deletion, and replicating blocks. When a client wants to read or write data, it contacts the NameNode and the NameNode checks where the data should be read from or written to. After that, the client has the location of the DataNode and can send read or write requests to it. Architectures: HDFS is typically characterized by its compatibility with data-rebalancing schemes. In general, managing the free space on a DataNode is very important. Data must be moved from one DataNode to another if free space is not adequate; and in the case of creating additional replicas, data should be moved to assure system balance. Architectures: Other examples: Distributed file systems can be optimized for different purposes. Some, such as those designed for internet services, including GFS, are optimized for scalability. Other designs for distributed file systems support performance-intensive applications usually executed in parallel.
Some examples include: MapR File System (MapR-FS), Ceph-FS, Fraunhofer File System (BeeGFS), Lustre File System, IBM General Parallel File System (GPFS), and Parallel Virtual File System. Architectures: MapR-FS is a distributed file system that is the basis of the MapR Converged Platform, with capabilities for distributed file storage, a NoSQL database with multiple APIs, and an integrated message-streaming system. MapR-FS is optimized for scalability, performance, reliability, and availability. Its file-storage capability is compatible with the Apache Hadoop Distributed File System (HDFS) API but with several design characteristics that distinguish it from HDFS. Among the most notable differences are that MapR-FS is a fully read/write filesystem with metadata for files and directories distributed across the namespace, so there is no NameNode. Ceph-FS is a distributed file system that provides excellent performance and reliability. It answers the challenges of dealing with huge files and directories, coordinating the activity of thousands of disks, providing parallel access to metadata on a massive scale, manipulating both scientific and general-purpose workloads, authenticating and encrypting on a large scale, and increasing or decreasing dynamically due to frequent device decommissioning, device failures, and cluster expansions. BeeGFS is the high-performance parallel file system from the Fraunhofer Competence Centre for High Performance Computing. The distributed metadata architecture of BeeGFS has been designed to provide the scalability and flexibility needed to run HPC and similar applications with high I/O demands. Lustre File System has been designed and implemented to deal with the issue of bottlenecks traditionally found in distributed systems. Lustre is characterized by its efficiency, scalability, and redundancy. GPFS was also designed with the goal of removing such bottlenecks. Communication: High performance of distributed file systems requires efficient communication between computing nodes and fast access to the storage systems. Operations such as open, close, read, write, send, and receive need to be fast to ensure that performance. For example, each read or write request accesses disk storage, which introduces seek, rotational, and network latencies. The data communication (send/receive) operations transfer data from the application buffer to the machine kernel, with TCP controlling the process and being implemented in the kernel. However, in case of network congestion or errors, TCP may not send the data directly. While transferring data from a buffer in the kernel to the application, the machine does not read the byte stream from the remote machine; in fact, TCP is responsible for buffering the data for the application. Choosing the buffer size, for file reading and writing, or file sending and receiving, is done at the application level. The buffer is maintained using a circular linked list. It consists of a set of BufferNodes. Each BufferNode has a DataField. The DataField contains the data and a pointer called NextBufferNode that points to the next BufferNode. To find the current position, two pointers are used, CurrentBufferNode and EndBufferNode, which represent the positions of the last write and read in the buffer. Communication: If a BufferNode has no free space, it will send a wait signal to the client to wait until there is available space.
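A minimal sketch of the circular buffer structure just described follows — BufferNode objects linked in a ring, with current and end pointers. The names follow the article's terminology, but the implementation details are assumptions, not the actual system's code.

```python
# Circular linked-list buffer sketch in the spirit of the description
# above (BufferNode / DataField / NextBufferNode; details assumed).
class BufferNode:
    def __init__(self):
        self.data = None     # the DataField holding this node's bytes
        self.next = None     # NextBufferNode: pointer to the next node

def make_ring(n):
    """Link n BufferNodes into a ring and return the first node."""
    nodes = [BufferNode() for _ in range(n)]
    for i, node in enumerate(nodes):
        node.next = nodes[(i + 1) % n]   # last node points back to first
    return nodes[0]

# CurrentBufferNode and EndBufferNode both start at the ring's head;
# writes advance one pointer around the ring, reads advance the other.
current = end = make_ring(4)
```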
Cloud-based Synchronization of Distributed File System: More and more users have multiple devices with ad hoc connectivity. The data sets replicated on these devices need to be synchronized among an arbitrary number of servers. This is useful for backups and also for offline operation. Indeed, when user network conditions are not good, the user device will selectively replicate a part of the data that will be modified later, offline. Once the network conditions become good, the device is synchronized. Two approaches exist to tackle the distributed synchronization issue: user-controlled peer-to-peer synchronization and cloud master-replica synchronization. Cloud-based Synchronization of Distributed File System: User-controlled peer-to-peer: software such as rsync must be installed on all users' computers that contain their data. The files are synchronized by peer-to-peer synchronization in which users must specify network addresses and synchronization parameters; this is thus a manual process. Cloud master-replica synchronization: widely used by cloud services, in which a master replica is maintained in the cloud, and all updates and synchronization operations are applied to this master copy, offering a high level of availability and reliability in case of failures. Security keys: In cloud computing, the most important security concepts are confidentiality, integrity, and availability ("CIA"). Confidentiality becomes indispensable in order to keep private data from being disclosed. Integrity ensures that data is not corrupted. Security keys: Confidentiality: Confidentiality means that data and computation tasks are confidential: neither the cloud provider nor other clients can access the client's data. Much research has been done on confidentiality, because it is one of the crucial points that still presents challenges for cloud computing. A lack of trust in cloud providers is also a related issue. The infrastructure of the cloud must ensure that customers' data will not be accessed by unauthorized parties. Security keys: The environment becomes insecure if the service provider can do all of the following: locate the consumer's data in the cloud; access and retrieve the consumer's data; and understand the meaning of the data (types of data, functionalities and interfaces of the application, and format of the data). The geographic location of data helps determine privacy and confidentiality. The location of clients should be taken into account. For example, clients in Europe may not be interested in using datacenters located in the United States, because that affects the guarantee of the confidentiality of data. In order to deal with that problem, some cloud computing vendors have included the geographic location of the host as a parameter of the service-level agreement made with the customer, allowing users to choose for themselves the locations of the servers that will host their data. Security keys: Another approach to confidentiality involves data encryption; otherwise, there will be serious risk of unauthorized use. A variety of solutions exists, such as encrypting only sensitive data, and supporting only some operations, in order to simplify computation. Furthermore, cryptographic techniques and tools such as FHE (fully homomorphic encryption) are used to preserve privacy in the cloud. Integrity: Integrity in cloud computing implies data integrity as well as computing integrity. Such integrity means that data has to be stored correctly on cloud servers and, in case of failures or incorrect computing, that problems have to be detected.
Security keys: Data integrity can be affected by malicious events or by administration errors (e.g. during backup and restore, data migration, or changing memberships in P2P systems). Integrity is easy to achieve using cryptography (typically through message authentication codes, or MACs, on data blocks). There exist checking mechanisms that verify data integrity. For instance: HAIL (High-Availability and Integrity Layer) is a distributed cryptographic system that allows a set of servers to prove to a client that a stored file is intact and retrievable. Security keys: PORs (proofs of retrievability for large files) is a method based on a symmetric cryptographic system, where there is only one verification key that must be stored in a file to improve its integrity. This method serves to encrypt a file F and then generate random strings named "sentinels" that must be added at the end of the encrypted file. The server cannot locate the sentinels, which are impossible to differentiate from other blocks, so a small change would indicate whether the file has been changed or not. Security keys: PDP (provable data possession) checking is a class of efficient and practical methods that provide an efficient way to check data integrity on untrusted servers: PDP: before storing the data on a server, the client must store, locally, some metadata. At a later time, and without downloading the data, the client is able to ask the server to check that the data has not been falsified. This approach is used for static data. Security keys: Scalable PDP: this approach is premised upon a symmetric key, which is more efficient than public-key encryption. It supports some dynamic operations (modification, deletion, and append) but it cannot be used for public verification. Dynamic PDP: this approach extends the PDP model to support several update operations such as append, insert, modify, and delete, which is well suited for intensive computation. Security keys: Availability: Availability is generally achieved through replication. Meanwhile, consistency must be guaranteed. However, consistency and availability cannot be achieved at the same time; each is prioritized at some sacrifice of the other. A balance must be struck. Data must have an identity to be accessible. For instance, Skute is a mechanism based on key/value storage that allows dynamic data allocation in an efficient way. Each server must be identified by a label in the form continent-country-datacenter-room-rack-server. The server can reference multiple virtual nodes, with each node having a selection of data (or multiple partitions of multiple data). Each piece of data is identified by a key space which is generated by a one-way cryptographic hash function (e.g. MD5) and is localised by the hash function value of this key. The key space may be partitioned into multiple partitions, with each partition referring to a piece of data. To perform replication, virtual nodes must be replicated and referenced by other servers. To maximize data durability and data availability, the replicas must be placed on different servers and every server should be in a different geographical location, because data availability increases with geographical diversity. The process of replication includes an evaluation of space availability, which must be above a certain minimum threshold on each chunkserver. Otherwise, data are replicated to another chunkserver.
Each partition, $i$, has an availability value represented by the following formula: $avail_i = \sum_{i=0}^{|s_i|} \sum_{j=i+1}^{|s_i|} conf_i \cdot conf_j \cdot diversity(s_i, s_j)$, where $s_i$ are the servers hosting the replicas, $conf_i$ and $conf_j$ are the confidence levels of servers $i$ and $j$ (relying on technical factors such as hardware components and non-technical ones like the economic and political situation of a country), and the diversity is the geographical distance between $s_i$ and $s_j$. Replication is a great solution to ensure data availability, but it costs too much in terms of memory space. DiskReduce is a modified version of HDFS that is based on RAID technology (RAID-5 and RAID-6) and allows asynchronous encoding of replicated data. Indeed, there is a background process which looks for widely replicated data and deletes extra copies after encoding it. Another approach is to replace replication with erasure coding. In addition, to ensure data availability there are many approaches that allow for data recovery. In fact, data must be coded, and if it is lost, it can be recovered from fragments which were constructed during the coding phase. Some other approaches that apply different mechanisms to guarantee availability are: the Reed–Solomon code of Microsoft Azure and RaidNode for HDFS. Google is also still working on a new approach based on an erasure-coding mechanism. There is no RAID implementation for cloud storage. Economic aspects: The cloud computing economy is growing rapidly. US government cloud spending has been projected to grow at a 40% compound annual growth rate (CAGR), reaching roughly 7 billion dollars by 2015. More and more companies have been utilizing cloud computing to manage massive amounts of data and to overcome the lack of storage capacity, and because it enables them to use such resources as a service, ensuring that their computing needs will be met without having to invest in infrastructure (the pay-as-you-go model). Every application provider has to periodically pay the cost of each server where replicas of data are stored. The cost of a server is determined by the quality of the hardware, the storage capacities, and its query-processing and communication overhead. Cloud computing allows providers to scale their services according to client demands. Economic aspects: The pay-as-you-go model has also eased the burden on startup companies that wish to benefit from compute-intensive business. Cloud computing also offers an opportunity to many third-world countries that wouldn't otherwise have such computing resources. Cloud computing can lower IT barriers to innovation. Despite the wide utilization of cloud computing, efficient sharing of large volumes of data in an untrusted cloud is still a challenge.
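Read literally, the availability formula sums pairwise products of replica-server confidences weighted by geographical diversity. A direct transcription in code follows (a sketch; the confidence values and the diversity function are illustrative assumptions):

```python
# Direct transcription of the availability formula above (sketch; the
# confidence values and the diversity function are illustrative).
def availability(servers, conf, diversity):
    """servers: replica hosts; conf: host -> confidence in [0, 1];
    diversity(a, b): geographical distance between two hosts."""
    total = 0.0
    for i in range(len(servers)):
        for j in range(i + 1, len(servers)):
            total += (conf[servers[i]] * conf[servers[j]]
                      * diversity(servers[i], servers[j]))
    return total

conf = {"eu-1": 0.9, "us-1": 0.8, "asia-1": 0.85}
# Toy diversity: 1.0 across regions, 0.1 within the same region.
dist = lambda a, b: 1.0 if a.split("-")[0] != b.split("-")[0] else 0.1
print(availability(list(conf), conf, dist))
```

Spreading replicas across regions maximizes the diversity factors, which is exactly the "geographical diversity" argument the text makes.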
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Aggressive NK-cell leukemia** Aggressive NK-cell leukemia: Aggressive NK-cell leukemia is a disease with an aggressive, systemic proliferation of natural killer cells (NK cells) and a rapidly declining clinical course. It is also called aggressive NK-cell lymphoma. Signs and symptoms: Patients usually present with constitutional symptoms (malaise, weight loss, fatigue), and hepatosplenomegaly is commonly found on physical exam. Lymphadenopathy is also found, to a lesser extent. Due to the aggressive nature of the disease, patients may initially present at a more advanced stage, with coagulopathies, hemophagocytic syndrome, and multi-organ failure. Rarely, individuals who have an aggressive NK-cell lymphoma that is associated with latent infection with the Epstein–Barr virus (see next section) present with or develop extensive allergic reactions to mosquito bites. The symptoms of these reactions range from a greatly enlarged bite site that may be painful and involve necrosis, to systemic symptoms (e.g. fever, swollen lymph nodes, abdominal pain, and diarrhea), or, in extremely rare cases, to life-threatening anaphylaxis. Cause: This disease has a strong association with the Epstein–Barr virus (EBV), but the true pathogenesis of this disease has yet to be described. The cell of origin is believed to be an NK cell. Blastoid NK-cell lymphoma appears to be a different entity and shows no association with EBV. Sites of involvement: This disease is typically found in, and diagnosed from, peripheral blood, and while it can involve any organ, it is usually found in the spleen, liver, and bone marrow. Diagnosis: Leukemic cells are invariably present in samples of peripheral blood to a variable extent. Pancytopenia (anemia, neutropenia, thrombocytopenia) is commonly seen as well. Peripheral blood: The leukemic cells have a diameter slightly greater than that of a large granular lymphocyte (LGL), with azurophilic granules and nucleoli of varying prominence. Nuclei may be irregular and hyperchromatic. Bone marrow: Bone marrow involvement runs the spectrum from an inconspicuous infiltrate to extensive marrow replacement by leukemic cells. Reactive histiocytes displaying hemophagocytosis can be seen interspersed in the neoplastic infiltrate. Other organs: Leukemic involvement of organs is typically destructive on tissue sections, with necrosis and possibly angioinvasion, and the monotonous infiltrate may be diffuse or patchy. Immunophenotype: The immunophenotype of this disease is the same as that of extranodal NK/T-cell lymphoma, nasal type, and is shown in the table below. CD11b and CD16 show variable expression. Genetic findings: Due to the NK lineage, clonal rearrangements of lymphoid (T-cell receptor; B-cell receptor) genes are not seen. The genome of the Epstein–Barr virus (EBV) is detected in many cases, along with a variety of chromosomal abnormalities. Treatment: Currently, aggressive NK-cell leukemia, being a subtype of PTCL, is treated similarly to B-cell lymphomas. However, in recent years, scientists have developed techniques to better recognize the different types of lymphomas, such as PTCL. It is now understood that PTCL behaves differently from B-cell lymphomas, and therapies are being developed that specifically target these types of lymphoma. Currently, however, there are no therapies approved by the U.S. Food and Drug Administration (FDA) specifically for PTCL. Anthracycline-containing chemotherapy regimens are commonly offered as the initial therapy. Some patients may receive a stem cell transplant.
Novel approaches to the treatment of PTCL in the relapsed or refractory setting are under investigation. Epidemiology: This rare form of leukemia is more common among Asians in comparison to other ethnic groups. It is typically diagnosed in adolescents and young adults, with a slight predominance in males. Research directions: Pralatrexate is one compound currently under investigation for the treatment of PTCL.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**FR-4** FR-4: FR-4 (or FR4) is a NEMA grade designation for glass-reinforced epoxy laminate material. FR-4 is a composite material composed of woven fiberglass cloth with an epoxy resin binder that is flame resistant (self-extinguishing). "FR" stands for "flame retardant", and does not denote that the material complies with the standard UL94V-0 unless testing is performed per UL 94 Section 8 (vertical flame testing) at a compliant lab. The designation FR-4 was created by NEMA in 1968. FR-4: FR-4 glass epoxy is a popular and versatile high-pressure thermoset plastic laminate grade with a good strength-to-weight ratio. With near-zero water absorption, FR-4 is most commonly used as an electrical insulator possessing considerable mechanical strength. The material is known to retain its high mechanical values and electrical insulating qualities in both dry and humid conditions. These attributes, along with good fabrication characteristics, lend utility to this grade for a wide variety of electrical and mechanical applications. FR-4: Grade designations for glass epoxy laminates are: G-10, G-11, FR-4, FR-5 and FR-6. Of these, FR-4 is the grade most widely in use today. G-10, the predecessor to FR-4, lacks FR-4's self-extinguishing flammability characteristics; hence, FR-4 has since replaced G-10 in most applications. FR-4 epoxy resin systems typically employ bromine, a halogen, to facilitate flame-resistant properties in FR-4 glass epoxy laminates. Some applications, in which thermal destruction of the material is a desirable trait, still use non-flame-resistant G-10. Properties: Which materials fall into the "FR-4" category is defined by the NEMA LI 1-1998 standard. The abbreviations LW (lengthwise, warp yarn direction) and CW (crosswise, fill yarn direction) refer to the conventional perpendicular fiber orientations in the XY plane of the board (in-plane); PF denotes perpendicular to the laminate face. In terms of Cartesian coordinates, lengthwise is along the x-axis, crosswise is along the y-axis, and the z-axis is referred to as the through-plane direction. Typical physical and electrical property values vary from one manufacturer's material to another's, so checking the actual values for any particular material from the manufacturer's datasheet can be very important, for example in high-frequency applications. Applications: FR-4 is a common material for printed circuit boards (PCBs). A thin layer of copper foil is typically laminated to one or both sides of an FR-4 glass epoxy panel. These are commonly referred to as copper-clad laminates. The copper thickness or copper weight can vary and so is specified separately. FR-4 is also used in the construction of relays, switches, standoffs, busbars, washers, arc shields, transformers and screw terminal strips.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded