**Semigroup with three elements** Semigroup with three elements: In abstract algebra, a semigroup with three elements is an object consisting of three elements and an associative operation defined on them. The basic example would be the three integers 0, 1, and −1, together with the operation of multiplication. Multiplication of integers is associative, and the product of any two of these three integers is again one of these three integers. Semigroup with three elements: There are 18 inequivalent ways to define an associative operation on three elements: while there are, altogether, a total of 3^9 = 19683 different binary operations that can be defined, only 113 of these are associative, and many of these are isomorphic or anti-isomorphic, so that there are essentially only 18 possibilities. One of these is C3, the cyclic group with three elements. The others all have a semigroup with two elements as a subsemigroup. In the example above, the set {−1, 0, 1} under multiplication contains both {0, 1} and {−1, 1} as subsemigroups (the latter is a subgroup, C2). Six of these are bands, meaning that all three elements are idempotent, so that the product of any element with itself is itself again. Two of these bands are commutative and therefore semilattices (one of them is the three-element totally ordered set, and the other is a three-element semilattice that is not a lattice). The other four come in anti-isomorphic pairs. One of these non-commutative bands results from adjoining an identity element to LO2, the left zero semigroup with two elements (or, dually, to RO2, the right zero semigroup). It is sometimes called the flip-flop monoid, referring to flip-flop circuits used in electronics: the three elements can be described as "set", "reset", and "do nothing". This semigroup occurs in the Krohn–Rhodes decomposition of finite semigroups. The irreducible elements in this decomposition are the finite simple groups plus this three-element semigroup and its subsemigroups. Semigroup with three elements: There are two cyclic semigroups, one described by the equation x^4 = x^3, which has O2, the null semigroup with two elements, as a subsemigroup. The other is described by x^4 = x^2 and has C2, the group with two elements, as a subgroup. (The equation x^4 = x describes C3, the group with three elements, already mentioned.) There are seven other non-cyclic non-band commutative semigroups, including the initial example {−1, 0, 1} and O3, the null semigroup with three elements. There are also two other anti-isomorphic pairs of non-commutative non-band semigroups.
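As a quick sanity check on the opening example, the minimal Python sketch below (plain standard library, nothing specific to the article) verifies that {−1, 0, 1} is closed under multiplication, that the operation is associative, and that {0, 1} and {−1, 1} are the two-element subsemigroups mentioned above.

```python
from itertools import product

# The three-element semigroup from the example: {-1, 0, 1} under multiplication.
S = [-1, 0, 1]

def op(a, b):
    return a * b

# Closure: the product of any two elements stays in S.
assert all(op(a, b) in S for a, b in product(S, repeat=2))

# Associativity: (a*b)*c == a*(b*c) for every triple.
assert all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(S, repeat=3))

# The two-element subsemigroups mentioned in the text.
assert all(op(a, b) in {0, 1} for a, b in product([0, 1], repeat=2))    # {0, 1}
assert all(op(a, b) in {-1, 1} for a, b in product([-1, 1], repeat=2))  # {-1, 1}, the subgroup C2
```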
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benzenetricarboxylic acid** Benzenetricarboxylic acid: Benzenetricarboxylic acid is any of a group of chemical compounds which are tricarboxylic derivatives of benzene. Benzenetricarboxylic acid comes in three isomers; all share the molecular weight 210.14 g/mol and the chemical formula C9H6O6.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Comparison of machine translation applications** Comparison of machine translation applications: Machine translation is the use of algorithms to translate text or speech from one natural language to another. General information: Basic general information for popular machine translation applications. Languages features comparison: The following table compares the number of languages which the following machine translation programs can translate between. Languages features comparison: (Moses and Moses for Mere Mortals allow you to train translation models for any language pair, though collections of translated texts (parallel corpora) need to be provided by the user. The Moses site provides links to training corpora.) This is not an all-encompassing list. Some applications have many more language pairs than those listed below. This is a general comparison of key languages only. A full and accurate list of language pairs supported by each product can be found on that product's website.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Golden Arena for Best Production Design** Golden Arena for Best Production Design: List of winners: The following is a list of winners of the Golden Arena for Best Production Design (also known as Scenography or Scenic design) at the Pula Film Festival.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oxoglutarate dehydrogenase (NADP+)** Oxoglutarate dehydrogenase (NADP+): In enzymology, an oxoglutarate dehydrogenase (NADP+) (EC 1.2.1.52) is an enzyme that catalyzes the chemical reaction 2-oxoglutarate + CoA + NADP+ ⇌ succinyl-CoA + CO2 + NADPH. The 3 substrates of this enzyme are 2-oxoglutarate, CoA, and NADP+, whereas its 3 products are succinyl-CoA, CO2, and NADPH. This enzyme belongs to the family of oxidoreductases, specifically those acting on the aldehyde or oxo group of donors with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is 2-oxoglutarate:NADP+ 2-oxidoreductase (CoA-succinylating). This enzyme is also called oxoglutarate dehydrogenase (NADP+).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**112 (number)** 112 (number): 112 (one hundred [and] twelve) is the natural number following 111 and preceding 113. Mathematics: 112 is an abundant number, a heptagonal number, and a Harshad number. 112 is the number of connected graphs on 6 unlabeled nodes. If an equilateral triangle has sides of length 112, then it contains an interior point at integer distances 57, 65, and 73 from its vertices. This is the smallest possible side length of an equilateral triangle that contains a point at integer distances from the vertices.
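A quick way to check the triangle claim numerically is the classical identity 3(p⁴ + q⁴ + r⁴ + a⁴) = (p² + q² + r² + a²)², which any point at distances p, q, r from the vertices of an equilateral triangle of side a must satisfy; the short Python sketch below confirms that it holds for 57, 65, 73 and 112.

```python
# Identity satisfied by a point at distances p, q, r from the vertices of an
# equilateral triangle with side a: 3*(p^4 + q^4 + r^4 + a^4) == (p^2 + q^2 + r^2 + a^2)^2
p, q, r, a = 57, 65, 73, 112

lhs = 3 * (p**4 + q**4 + r**4 + a**4)
rhs = (p**2 + q**2 + r**2 + a**2) ** 2
print(lhs, rhs, lhs == rhs)  # 642470409 642470409 True
```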
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Beta-silicon effect** Beta-silicon effect: The beta-silicon effect in organosilicon chemistry, also called silicon hyperconjugation, is a special type of hyperconjugation that describes the stabilizing influence of a silicon atom on the development of positive charge at a carbon atom one position removed (β) from the silicon atom. The C-Si σ orbital is said to partially overlap with the σ* anti-bonding orbital of the C-leaving group bond, lowering the energy of the transition state leading to the formation of a carbocation. A prerequisite for the hyperconjugation to occur is an antiperiplanar relationship between the Si group and the leaving group. This allows for the maximum overlap between the C-Si σ orbital and the σ* anti-bonding orbital of the leaving group. Silicon hyperconjugation explains specific observations regarding the chemical kinetics and stereochemistry of organic reactions with reactants containing silicon. Beta-silicon effect: The picture below shows the partial overlap of the C-Si σ orbital with the C-X (leaving group) σ* orbital (2b). This donation of electron density into the anti-bonding orbital weakens the C-X bonding orbital, lowering the energy barrier to breakage of the C-X bond as indicated in transition state 3. This stabilization of the transition state leads to favorable formation of carbenium ion 4. This becomes manifest in the increased rates of reactions that have positive charge developing on carbon atoms β to the silicon. Beta-silicon effect: The alpha-silicon effect is the destabilizing effect a silicon atom has on the development of positive charge on a carbon atom α to the silicon (i.e., directly attached to the silicon). As a corollary, development of negative charge on this atom is stabilized, as seen in the increased rates of reactions that develop negative charge here, such as metalations. This is explained by partial overlap of the C-M σ orbital with the C-Si σ* anti-bonding orbital, which stabilizes the C-M bond. Beta-silicon effect: In a pioneering study by Frank C. Whitmore, ethyltrichlorosilane (scheme 2) was chlorinated by sulfuryl chloride as chlorine donor and benzoyl peroxide as radical initiator in a radical substitution, resulting in chloride monosubstitution to some extent in the α-position (28%, due to steric hindrance of the silyl group) and predominantly in the β-position. By adding sodium hydroxide to the α-substituted compound, only the silicon chlorine groups are replaced but not the carbon chlorine group. Addition of alkali to the β-substituted compound, on the other hand, leads to an elimination reaction with liberation of ethylene. In another set of experiments (scheme 3), the chlorination is repeated with n-propyltrichlorosilane. The α-adduct and the γ-adduct are resistant to hydrolysis, but the chlorine group in the β-adduct gets replaced by a hydroxyl group. The silicon effect is also manifest in certain compound properties. Trimethylsilylmethylamine (Me3SiCH2NH2) is a stronger base, with a pKa of 10.96 for the conjugate acid, than the carbon analogue neopentylamine with pKa 10.21. In the same vein, trimethylsilylacetic acid (pKa 5.22) is a poorer acid than trimethylacetic acid (pKa 5.00).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Saccade** Saccade: A saccade ( sə-KAHD, French for jerk) is a quick, simultaneous movement of both eyes between two or more phases of fixation in the same direction. In contrast, in smooth pursuit movements, the eyes move smoothly instead of in jumps. The phenomenon can be associated with a shift in frequency of an emitted signal or a movement of a body part or device. Controlled cortically by the frontal eye fields (FEF), or subcortically by the superior colliculus, saccades serve as a mechanism for fixation, rapid eye movement, and the fast phase of optokinetic nystagmus. The word appears to have been coined in the 1880s by French ophthalmologist Émile Javal, who used a mirror on one side of a page to observe eye movement in silent reading, and found that it involves a succession of discontinuous individual movements. Function: Humans and many animals do not look at a scene in fixed steadiness; instead, the eyes move around, locating interesting parts of the scene and building up a mental, three-dimensional 'map' corresponding to the scene (as opposed to the graphical map of avians, that often relies upon detection of angular movement on the retina).When scanning immediate surroundings or reading, human eyes make saccadic movements and stop several times, moving very quickly between each stop. The speed of movement during each saccade cannot be controlled; the eyes move as fast as they are able. One reason for the saccadic movement of the human eye is that the central part of the retina—known as the fovea—which provides the high-resolution portion of vision is very small in humans, only about 1–2 degrees of vision, but it plays a critical role in resolving objects. By moving the eye so that small parts of a scene can be sensed with greater resolution, body resources can be used more efficiently. Timing and kinematics: Saccades are one of the fastest movements produced by the human eye (blinks may reach even higher peak velocities). The peak angular speed of the eye during a saccade reaches up to 700°/s in humans for great saccades (25° of visual angle); in some monkeys, peak speed can reach 1000°/s. Saccades to an unexpected stimulus normally take about 200 milliseconds (ms) to initiate, and then last from about 20–200 ms, depending on their amplitude (20–30 ms is typical in language reading). Under certain laboratory circumstances, the latency of, or reaction time to, saccade production can be cut nearly in half (express saccades). These saccades are generated by a neuronal mechanism that bypasses time-consuming circuits and activates the eye muscles more directly. Specific pre-target oscillatory (alpha rhythms) and transient activities occurring in posterior-lateral parietal cortex and occipital cortex also characterise express saccades. Timing and kinematics: The amplitude of a saccade is the angular distance the eye travels during the movement. For amplitudes up to 15 or 20°, the velocity of a saccade linearly depends on the amplitude (the so-called saccadic main sequence, a term borrowed from astrophysics; see Figure). For amplitudes larger than 20°, the peak velocity starts to plateau (nonlinearly) toward the maximum velocity attainable by the eye at around 60°. For instance, a 10° amplitude is associated with a velocity of 300°/s, and 30° is associated with 500°/s. 
Therefore, for larger amplitude ranges, the main sequence can best be modeled by an inverse power law function. The high peak velocities and the main sequence relationship can also be used to distinguish saccades and microsaccades from other eye movements (like ocular tremor, ocular drift, and smooth pursuit). Velocity-based algorithms are a common approach for saccade detection in eye tracking, although, depending on the demands on timing accuracy, acceleration-based methods can be more precise. Saccades may rotate the eyes in any direction to relocate gaze direction (the direction of sight that corresponds to the fovea), but normally saccades do not rotate the eyes torsionally. (Torsion is clockwise or counterclockwise rotation around the line of sight when the eye is at its central primary position; defined this way, Listing's law says that, when the head is motionless, torsion is kept at zero.) Head-fixed saccades can have amplitudes of up to 90° (from one edge of the oculomotor range to the other), but in normal conditions saccades are far smaller, and any shift of gaze larger than about 20° is accompanied by a head movement. During such gaze saccades, first, the eye produces a saccade to get gaze on target, whereas the head follows more slowly and the vestibulo-ocular reflex (VOR) causes the eyes to roll back in the head to keep gaze on the target. Since the VOR can actually rotate the eyes around the line of sight, combined eye and head movements do not always obey Listing's law. Types: Saccades can be categorized by intended goal in four ways: In a visually guided saccade, the eyes move toward a visual transient, or stimulus. The parameters of visually guided saccades (amplitude, latency, peak velocity, and duration) are frequently measured as a baseline when measuring other types of saccades. Visually guided saccades can be further subcategorized: A reflexive saccade is triggered exogenously by the appearance of a peripheral stimulus, or by the disappearance of a fixation stimulus. Types: A scanning saccade is triggered endogenously for the purpose of exploring the visual environment. In an antisaccade, the eyes move away from the visual onset. They are more delayed than visually guided saccades, and observers often make erroneous saccades in the wrong direction. A successful antisaccade requires inhibiting a reflexive saccade to the onset location, and voluntarily moving the eye in the other direction. In a memory guided saccade, the eyes move toward a remembered point, with no visual stimulus. Types: In a sequence of predictive saccades, the eyes are kept on an object moving in a temporally and/or spatially predictive manner. In this instance, saccades often coincide with (or anticipate) the predictable movement of an object. As referenced above, it is also useful to categorize saccades by latency (time between go-signal and movement onset). In this case the categorization is binary: either a given saccade is an express saccade or it is not. The latency cut-off is approximately 200 ms; any longer than this is outside the express saccade range. Microsaccades are a related type of fixational eye movement that are small, jerk-like, involuntary eye movements, similar to miniature versions of voluntary saccades. They typically occur during visual fixation, not only in humans, but also in animals with foveal vision (primates, cats, etc.). Microsaccade amplitudes vary from 2 to 120 arcminutes. In depth: When exploring the visual environment with the gaze, humans make two to three fixations a second.
Each fixation involves binocularly coordinated movements of the eyes to acquire the new target in three dimensions: horizontal and vertical, but also in depth. It has been shown in the literature that an upward saccade is generally accompanied by a divergence of the eyes, while a downward saccade is accompanied by a convergence. The amount of this intra-saccadic vergence has a strong functional significance for the effectiveness of binocular vision. When making an upward saccade, the eyes diverge to align with the most probable uncrossed disparity in that part of the visual field; conversely, when making a downward saccade, the eyes converge to enable alignment with crossed disparity in that part of the field. The phenomenon can be interpreted as an adaptation of rapid binocular eye movements to the statistics of the 3D environment, in order to minimize the need for corrective vergence movements at the end of saccades. Pathophysiologic saccades: Saccadic oscillations not serving normal function are a deviation from a healthy or normal condition. Nystagmus is characterised by the combination of 'slow phases', which usually take the eye off the point of regard, interspersed with saccade-like "quick phases" that serve to bring the eye back on target. Pathological slow phases may be due to either an imbalance in the vestibular system or damage to the brainstem "neural integrator" that normally holds the eyes in place. On the other hand, opsoclonus and ocular flutter are composed purely of fast-phase saccadic eye movements. Without the use of objective recording techniques, it may be very difficult to distinguish between these conditions. Pathophysiologic saccades: Eye movement measurements are also used to investigate psychiatric disorders. For example, ADHD is characterized by an increase in antisaccade errors and an increase in delays for visually guided saccades. Paroxysmal eye–head movements, termed aberrant gaze saccades, are an early symptom of GLUT1 deficiency syndrome in infancy. Saccade adaptation: When the brain is led to believe that the saccades it is generating are too large or too small (by an experimental manipulation in which a saccade target steps backward or forward contingent on the eye movement made to acquire it), saccade amplitude gradually decreases (or increases), an adaptation (also termed gain adaptation) widely seen as a simple form of motor learning, possibly driven by an effort to correct visual error. This effect was first observed in humans with ocular muscle palsy. In these cases, it was noticed that the patients would make hypometric (small) saccades with the affected eye, and that they were able to correct these errors over time. This led to the realization that visual or retinal error (the difference between the post-saccadic point of regard and the target position) played a role in the homeostatic regulation of saccade amplitude. Since then, much scientific research has been devoted to various experiments employing saccade adaptation. Reading: Saccadic eye movement allows the mind to read quickly, but it comes with disadvantages. It can cause the reader to skip over words that the mind does not see as important to the sentence, either omitting them entirely or replacing them with the wrong word. This can be seen in "Paris in the the Spring". This is a common psychological test, in which the mind will often skip the second "the", especially when there is a line break between the two.
Reading: When speaking, the mind plans what will be said before it is said. Sometimes the mind is not able to plan in advance and the speech is rushed out. This is why there are errors like mispronunciation, stuttering, and unplanned pauses. The same thing happens when reading. The mind does not always know what will come next. This is another reason that the second "the" can be missed. Vision: Saccadic masking. It is a common but false belief that during the saccade, no information is passed through the optic nerve to the brain. Whereas low spatial frequencies (the 'fuzzier' parts) are attenuated, higher spatial frequencies (an image's fine details) that would otherwise be blurred by the eye movement remain unaffected. This phenomenon, known as saccadic masking or saccadic suppression, is known to begin prior to saccadic eye movements in every primate species studied, implying neurological reasons for the effect rather than simply the image's motion blur. This phenomenon leads to the so-called stopped-clock illusion, or chronostasis. Vision: A person may observe the saccadic masking effect by standing in front of a mirror and looking from one eye to the other (and vice versa). The subject will not experience any movement of the eyes or any evidence that the optic nerve has momentarily ceased transmitting. Due to saccadic masking, the eye/brain system not only hides the eye movements from the individual but also hides the evidence that anything has been hidden. Of course, a second observer watching the experiment will see the subject's eyes moving back and forth. The function's main purpose is to prevent an otherwise significant smearing of the image. (You can experience your own saccade movements by using your cellphone's front-facing camera as a mirror: hold the screen a couple of inches away from your face as you saccade from one eye to the other; the cellphone's signal processing delay allows you to see the end of the saccade movement.) Spatial updating: When a visual stimulus is seen before a saccade, subjects are still able to make another saccade back to that image, even if it is no longer visible. This shows that the brain is somehow able to take into account the intervening eye movement. It is thought that the brain does this by temporarily recording a copy of the command for the eye movement, and comparing this to the remembered image of the target. This is called spatial updating. Neurophysiologists, having recorded from cortical areas for saccades during spatial updating, have found that memory-related signals get remapped during each saccade. Vision: Trans-saccadic perception. It is also thought that perceptual memory is updated during saccades so that information gathered across fixations can be compared and synthesized. However, the entire visual image is not updated during each saccade. Some scientists believe that this is the same as visual working memory, but as in spatial updating the eye movement has to be accounted for. The process of retaining information across a saccade is called trans-saccadic memory, and the process of integrating information from more than one fixation is called trans-saccadic integration. Comparative physiology: Saccades are a widespread phenomenon across animals with image-forming visual systems. They have been observed in animals across three phyla, including animals that do not have a fovea (most vertebrates do not) and animals that cannot move their eyes independently of their head (such as insects).
Therefore, while saccades serve in humans and other primates to increase the effective visual resolution of a scene, there must be additional reasons for the behavior. The most frequently suggested of these reasons is to avoid blurring of the image, which would occur if the response time of a photoreceptor cell is longer than the time a given portion of the image is stimulating that photoreceptor as the image drifts across the eye. Comparative physiology: In birds, saccadic eye movements serve a further function. The avian retina is highly developed. It is thicker than the mammalian retina, has a higher metabolic activity, and has less vasculature obstruction, for greater visual acuity. Because of this, the retinal cells must obtain nutrients via diffusion through the choroid and from the vitreous humor. The pecten is a specialised structure in the avian retina. It is a highly vascular structure that projects into the vitreous humor. Experiments show that, during saccadic eye oscillations (which occupy up to 12% of avian viewing time), the pecten oculi acts as an agitator, propelling perfusate (natural lubricants) toward the retina. Thus, in birds, saccadic eye movements appear to be important in retinal nutrition and cellular respiration.
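The velocity-based saccade detection mentioned under Timing and kinematics above can be sketched roughly as follows; this is a minimal illustration under assumed conditions (gaze samples in degrees at a fixed sampling rate, a simple 30°/s threshold), not any specific published algorithm.

```python
import numpy as np

def detect_saccades(gaze_deg, fs_hz, vel_threshold=30.0):
    """Flag samples whose angular speed exceeds a velocity threshold.

    gaze_deg: (N, 2) array of horizontal/vertical gaze positions in degrees.
    fs_hz: sampling rate in Hz.
    vel_threshold: speed threshold in deg/s (an assumed ballpark value; the
                   appropriate threshold depends on the tracker and the task).
    """
    velocity = np.gradient(gaze_deg, axis=0) * fs_hz  # deg/s along each axis
    speed = np.linalg.norm(velocity, axis=1)          # angular speed
    return speed > vel_threshold                      # boolean saccade mask

# Example: a synthetic one-second trace at 500 Hz containing one rapid 10-degree shift.
fs = 500
trace = np.zeros((fs, 2))
trace[250:255, 0] = np.linspace(0.0, 10.0, 5)  # fast horizontal jump
trace[255:, 0] = 10.0
print(detect_saccades(trace, fs).sum(), "samples flagged as saccadic")
```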
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Key server (cryptographic)** Key server (cryptographic): In computer security, a key server is a computer that receives and then serves existing cryptographic keys to users or other programs. The users' programs can be running on the same network as the key server or on another networked computer. Key server (cryptographic): The keys distributed by the key server are almost always provided as part of cryptographically protected public key certificates containing not only the key but also 'entity' information about the owner of the key. The certificate is usually in a standard format, such as the OpenPGP public key format, the X.509 certificate format, or the PKCS format. Further, the key is almost always a public key for use with an asymmetric key encryption algorithm. History: Key servers play an important role in public key cryptography. History: In public key cryptography an individual is able to generate a key pair, where one of the keys is kept private while the other is distributed publicly. Knowledge of the public key does not compromise the security of public key cryptography. An individual holding the public key of a key pair can use that key to carry out cryptographic operations that allow secret communications with strong authentication of the holder of the matching private key. The need to have the public key of a key pair in order to start communication or verify signatures is a bootstrapping problem. Locating keys on the web or writing to the individual asking them to transmit their public keys can be time consuming and insecure. Key servers act as central repositories to alleviate the need to individually transmit public keys and can act as the root of a chain of trust. History: The first web-based PGP keyserver was written for a thesis by Marc Horowitz, while he was studying at MIT. Horowitz's keyserver was called the HKP Keyserver after a web-based OpenPGP HTTP Keyserver Protocol (HKP), used to allow people to interact with the keyserver. Users were able to upload, download, and search keys either through HKP on TCP port 11371, or through web pages which ran CGI scripts. Before the creation of the HKP Keyserver, keyservers relied on email processing scripts for interaction. History: A separate key server, known as the PGP Certificate Server, was developed by PGP, Inc. and was used as the software (through version 2.5.x for the server) for the default key server in PGP through version 8.x (for the client software), keyserver.pgp.com. Network Associates was granted a patent co-authored by Jon Callas (United States Patent 6336186) on the key server concept. History: To replace the aging Certificate Server, an LDAP-based key server was redesigned at Network Associates in part by Randy Harmon and Len Sassaman, called PGP Keyserver 7. With the release of PGP 6.0, LDAP was the preferred key server interface for Network Associates' PGP versions. This LDAP and LDAPS key server (which also spoke HKP for backwards compatibility, though the protocol was (arguably correctly) referred to as "HTTP" or "HTTPS") also formed the basis for the PGP Administration tools for private key servers in corporate settings, along with a schema for Netscape Directory Server. PGP Keyserver 7 was later replaced by the new PGP Corporation PGP Global Directory, which allows PGP keys to be published and downloaded using HTTPS or LDAP.
Public versus private keyservers: Many publicly accessible key servers, located around the world, are computers which store and provide OpenPGP keys over the Internet for users of that cryptosystem. In this instance, the computers can be, and mostly are, run by individuals as a pro bono service, facilitating the web of trust model PGP uses. Several publicly accessible S/MIME key servers are available to publish or retrieve certificates used with the S/MIME cryptosystem. There are also multiple proprietary public key infrastructure systems which maintain key servers for their users; those may be private or public, and only the participating users are likely to be aware of those keyservers at all. Privacy concerns: For many individuals, the purpose of using cryptography is to obtain a higher level of privacy in personal interactions and relationships. It has been pointed out that allowing a public key to be uploaded to a key server when using decentralized web-of-trust-based cryptographic systems, like PGP, may reveal a good deal of information that an individual may wish to have kept private. Since PGP relies on signatures on an individual's public key to determine the authenticity of that key, potential relationships can be revealed by analyzing the signers of a given key. In this way, models of entire social networks can be developed. Problems with keyservers: The OpenPGP keyservers have, since their development in the 1990s, suffered from a few problems. Once a public key has been uploaded, it was purposefully made difficult to remove, as servers auto-synchronize with each other (this was done in order to fight government censorship). Some users stop using their public keys for various reasons, such as when they forget their pass phrase, or if their private key is compromised or lost. In those cases, it was hard to delete a public key from the server, and even if it were deleted, someone else could upload a fresh copy of the same public key to the server. This leads to an accumulation of old fossil public keys that never go away, a form of "keyserver plaque". As a consequence, anyone can upload a bogus public key to the keyserver, bearing the name of a person who in fact does not own that key, or, even worse, use it as a vulnerability: the Certificate Spamming Attack. The keyserver had no way to check whether a key was legitimate (belonged to its true owner). Problems with keyservers: To solve these problems, PGP Corp developed a new generation of key server, called the PGP Global Directory. This keyserver sent an email confirmation request to the putative key owner, asking that person to confirm that the key in question is theirs. If they confirm it, the PGP Global Directory accepts the key. This can be renewed periodically, to prevent the accumulation of keyserver plaque. The result is a higher quality collection of public keys, and each key has been vetted by email with the key's apparent owner. But as a consequence, another problem arises: because the PGP Global Directory allows key account maintenance and verifies only by email, not cryptographically, anybody having access to the email account could, for example, delete a key and upload a bogus one. Problems with keyservers: The last Internet Engineering Task Force draft for HKP also defines a distributed key server network, based on DNS SRV records: to find the key of someone@example.com, one can ask for it by requesting example.com's key server. Keyserver examples: These are some keyservers that are often used for looking up keys with gpg --recv-keys.
These can be queried via https:// (HTTPS) or hkps:// (HKP over TLS) respectively: keys.openpgp.org, keys.mailvelope.com/manage.html, pgp.mit.edu, keyring.debian.org, keyserver.ubuntu.com, and pgp.surf.nl
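For illustration, the sketch below performs an HKP-style lookup against one of the servers listed above using the protocol's standard /pks/lookup endpoint (op=get plus a search parameter); the key fingerprint shown is a placeholder, and in practice one would normally just run gpg --recv-keys.

```python
import urllib.parse
import urllib.request

def fetch_key_hkp(server, key_id):
    """Fetch an ASCII-armored OpenPGP key from a keyserver via HKP over HTTPS.

    HKP exposes a /pks/lookup endpoint; op=get returns the key material and
    search=0x<key id or fingerprint> selects which key to return.
    """
    query = urllib.parse.urlencode({"op": "get", "search": "0x" + key_id})
    url = f"https://{server}/pks/lookup?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Placeholder fingerprint; any of the servers listed above could be substituted.
# print(fetch_key_hkp("keyserver.ubuntu.com", "0123456789ABCDEF0123456789ABCDEF01234567"))
```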
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jacobsen epoxidation** Jacobsen epoxidation: The Jacobsen epoxidation, sometimes also referred to as the Jacobsen–Katsuki epoxidation, is a chemical reaction which allows enantioselective epoxidation of unfunctionalized alkyl- and aryl-substituted alkenes. It is complementary to the Sharpless epoxidation (used to form epoxides from the double bond in allylic alcohols). The Jacobsen epoxidation gains its stereoselectivity from a C2-symmetric manganese(III) salen-like ligand, which is used in catalytic amounts. The manganese atom transfers an oxygen atom from chlorine bleach or a similar oxidant. The reaction takes its name from its inventor, Eric Jacobsen, with Tsutomu Katsuki sometimes being included. Chiral-directing catalysts are useful to organic chemists trying to control the stereochemistry of biologically active compounds and develop enantiopure drugs. Jacobsen epoxidation: Several improved procedures have been developed. A general reaction scheme follows: History: In the early 1990s, Jacobsen and Katsuki independently released their initial findings about their catalysts for the enantioselective epoxidation of isolated alkenes. In 1991, Jacobsen published work in which he attempted to perfect the catalyst. He was able to obtain ee values above 90% for a variety of ligands. Also, the amount of catalyst used was no more than 15% of the amount of alkene used in the reaction. General features: The degree of enantioselectivity depends on numerous factors, namely the structure of the alkene, the nature of the axial donor ligand on the active oxomanganese species and the reaction temperature. Cyclic and acyclic cis-1,2-disubstituted alkenes are epoxidized with almost 100% enantioselectivity, whereas trans-1,2-disubstituted alkenes are poor substrates for Jacobsen's catalysts but give higher enantioselectivities when Katsuki's catalysts are used. Furthermore, the enantioselectivity in the epoxidation of conjugated dienes is much higher than that of nonconjugated dienes. The enantioselectivity is explained by either a "top-on" approach (Jacobsen) or a "side-on" approach (Katsuki) of the alkene. Mechanism: The mechanism of the Jacobsen–Katsuki epoxidation is not fully understood, but most likely a manganese(V) species (similar to the ferryl intermediate of cytochrome P450) is the reactive intermediate, which is formed upon the oxidation of the Mn(III)-salen complex. There are three major pathways: the concerted pathway, the metallaoxetane pathway and the radical pathway. The most accepted mechanism is the concerted pathway. After the formation of the Mn(V) complex, the catalyst is activated and can therefore form epoxides with alkenes. The alkene comes in from the "top-on" approach (above the plane of the catalyst) and the oxygen atom is now bonded to the two carbon atoms (previously the C=C bond) and is still bonded to the manganese metal. Then, the Mn–O bond breaks and the epoxide is formed. The Mn(III)-salen complex is regenerated, which can then be oxidized again to form the Mn(V) complex. Mechanism: The radical intermediate accounts for the formation of mixed epoxides when conjugated dienes are used as substrates.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**R-Y** R-Y: R−Y indicates a color difference signal between Red (R) and a Luminance component, as part of a Luminance (Y) and Chrominance (C) color model. It has different meanings depending on the exact model used: V in YUV, a generic model used for analog and digital image formats; Cr in YCbCr, used for digital images and video; Pr in YPbPr, used in analog component video; and Dr in YDbDr, used in analog SECAM and PAL-N.
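As a concrete illustration, the sketch below computes Y and the raw R−Y difference from an RGB triple using the BT.601 luma weights; note that each model listed above (YUV, YCbCr, YPbPr, YDbDr) then applies its own scaling and offset to turn R−Y into V, Cr, Pr or Dr, which is omitted here.

```python
def luma_and_r_minus_y(r, g, b):
    """Return (Y, R-Y) for an RGB triple with components in the range 0.0..1.0.

    Uses the BT.601 luma weights; the per-format scaling that converts the raw
    R-Y difference into V, Cr, Pr or Dr is intentionally left out.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y

# Pure red: Y = 0.299, so the unscaled difference R-Y = 0.701.
print(luma_and_r_minus_y(1.0, 0.0, 0.0))
```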
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Versant** Versant: The Versant suite of tests comprises computerized tests of spoken language available from Pearson PLC. Versant tests were the first fully automated tests of spoken language to use advanced speech processing technology (including speech recognition) to assess the spoken language skills of non-native speakers. The Versant language suite includes tests of English, Spanish, Dutch, French, and Arabic. Versant technology has also been applied to the assessment of Aviation English, children's oral reading assessment, and adult literacy assessment. History: In 1996, Jared Bernstein and Brent Townsend founded Ordinate Corporation to develop a system that would use speech processing technology and linguistic and test theory to provide an automatically delivered and automatically scored spoken language test. The first English test was called PhonePass. It was the first fully computerized test of spoken language using speech recognition technology. In 2002, the name PhonePass was changed to PhonePass SET-10 (Spoken English Test) or simply SET-10. In 2003, Ordinate was acquired by Harcourt Assessment, and later, in 2005, the name of the test changed to its current name, Versant. In January 2008, Harcourt Assessment (including Ordinate Corporation) was acquired by Pearson, and Ordinate Corporation became part of the Knowledge Technologies group of Pearson. In June 2010, the Versant Pro Speaking Test and Versant Pro Writing Test were launched. Product description: Versant tests are typically fifteen-minute tests of speaking and listening skills for adult language learners. (Test length varies slightly depending on the test.) The test is delivered over the telephone or on a computer and is scored by computer using pre-determined data-driven algorithms. During the test, the system presents a series of recorded prompts at a conversational pace and elicits oral responses from the test-taker. The Versant tests are available as several products: Versant English Test, Versant English - Placement Test, Versant English - Writing Test, Versant Spanish Test, Versant Arabic Test, Versant French Test, and Versant Aviation English. Additionally, several domain-specific tests have been created using the Versant framework in collaboration with other organizations. These tests include the Versant Aviation English Test (for aviation personnel), the Versant Junior English Test (for learners of English, ages 5 to 12), and the Dutch immigration test (exclusively available through Dutch Embassies). The Versant scoring system also provides automated scoring of the spoken portion of the four-skills test, Pearson Test of English, available in late 2009. Product description: Versant test construct. Versant tests measure "facility in a spoken language", defined as the ability to understand spoken language on everyday topics and to respond appropriately at a native-like conversational pace. While keeping up with the conversational pace, a person has to track what is being said, extract meaning as speech continues, and formulate and produce a relevant and intelligible response. The Versant tests are designed to measure these real-time psycholinguistic aspects of spoken performance in a second language. Product description: Test format and tasks. Versant tests typically have six tasks: Reading, Repeats, Short Answer Questions, Sentence Builds, Story Retelling, and Open Questions. Versant technology: Automated administration. Versant tests can be administered over the telephone or on a computer.
Test takers can access and complete the tests from any location where there is a landline telephone or an internet connection. Versant technology: Test takers are given a Test Identification Number and listen to a recorded examiner's voice for instructions, which are also printed verbatim on the test paper or computer screen. Throughout the test, test takers listen to recorded item prompts read by a variety of native speakers. Because the test is automated, large numbers of tests can be administered and scored very rapidly. Versant technology: Automated scoring technology. Versant test scores are posted on-line within minutes of the completed test. Test administrators and test takers can view and print out their test results by entering their Test Identification Number on the Versant website. The Versant score report is composed of an Overall score (a weighted combination of the subscores) and four diagnostic subscores: Sentence Mastery (i.e., grammar), Vocabulary, Fluency, and Pronunciation. The Overall score and subscores are reported on a scale from 20 to 80. Versant technology: The automated scoring technology is optimized using a large number of speech samples from both native and non-native speakers. Extensive data collection is typically carried out to collect a sufficient amount of such speech samples. These spoken responses are then transcribed to train an automatic speech recognition system. Versant technology: Each incoming response is then processed automatically by the speech recognizer that has been optimized for non-native speech. The words, pauses, syllables and phones are located in the recorded signal. The content of the response is scored according to the presence or absence of expected correct words in correct sequences, as well as the pace, fluency, and pronunciation of those words in phrases and sentences. Base measures are then derived from the segments, syllables and words based on statistical models of native and non-native speakers. Much documentation has been produced regarding the accuracy of Versant's automated scoring system. Versant technology: Score use. Versant tests are currently used by academic institutions, corporations, and government agencies around the world. Versant tests provide information that can be used to determine if employees or students have the necessary spoken language skills to interact effectively. For example, the Versant English Test was used in the 2002 World Cup Korea/Japan to measure the English skills of over 15,000 volunteers and assign the appropriate workers to the most English-intensive tasks. The Versant Spanish Test was used in a study by Blake et al. (2008) to evaluate whether distance-learning courses are as valid a way to start learning a foreign language, with respect to oral proficiency, as traditional face-to-face classes that meet five times a week. Validation: Relationship to other tests. Versant test scores have been aligned with the Common European Framework of Reference (CEFR). Below are the mappings of Versant scores and other tests' scores to the CEFR. Versant English overall scores can be used to predict CEFR levels on the CEFR scale of Oral Interaction Skills with reasonable accuracy. A series of validation studies has found that the Versant English Test correlates reasonably with other measures of spoken English skills. For example, the correlation between the Versant English Test and TOEFL iBT Speaking is r=0.75, and the correlation between the Versant English Test and IELTS Speaking is r=0.77.
Validation: Machine-human correlation. One of the common criticisms of the Versant tests is that a machine cannot evaluate speaking skills as well as a human can. Knowledge Technologies, the company that produces and administers the test, claims that the Versant English Test's machine-generated scores are virtually indistinguishable from scores given by repeated independent human raters at the Overall level. Another criticism is that the Versant tests do not measure communicative abilities because there are no interactive exchanges between live participants. Versant, in Downey et al. (2008), claims that the psycholinguistic competencies assessed in its tests underlie larger spoken language performance. This claim is supported by concurrent validity data showing that Versant test scores correlate highly with other well-known oral proficiency interview tests such as ACTFL OPIs or ILR OPIs. Validation: The usefulness of Versant products has been challenged by a third party. Management: Alistair Van Moere, President; Ryan Down, Director, Product Management.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**BeerXML** BeerXML: BeerXML is a free, fully defined XML data description standard designed for the exchange of beer brewing recipes and other brewing data. Tables of recipes as well as other records such as hop schedules and malt bills can be represented using BeerXML for use by brewing software. BeerXML is an open standard and a subset of Extensible Markup Language (XML). BeerXML is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. BeerXML: BeerXML is supported by a number of web sites, computer programmes and an increasing number of Android, Windows Phone and iOS apps. Plugins and extensions supporting BeerXML have been written for a variety of platforms including Ruby via RubyGems, WordPress, PHP and JavaScript. Many brewing hardware manufacturers incorporate BeerXML into their systems, and third party plugins and patches are being developed for brewery control hardware and embedded systems, allowing the automation and fine control and timing of processes such as mashing and potentially fermentation. Common applications and examples of usage: BeerXML is used in both amateur and professional brewing and facilitates the sharing of brewing data over the internet. Users of different applications such as the open-source software Brewtarget (with more than 52,000 downloads) can share data via XML with users of popular proprietary software such as Beersmith and ORRTIZ: BMS 4 Breweries, or upload their data to share on BeerXML compatible sharing sites and cloud platforms such as Brewtoad (over 50,000 registered users) or the Beersmith Recipe Cloud (with 43,000 registered users). A user of a recipe design and sharing site such as Brewersfriend.com can import and export BeerXML to and from mobile apps or enter it into a brewing competition database such as The Brew Competition Online Entry & Management (BCOE&M) system. Common applications and examples of usage: The adoption of BeerXML as a standard is leading to new developments such as ingredients databases which attempt to standardise ingredient definitions and characteristics. Brewers can use platforms like Brewblogger.com to create recipes and log their brewday for publication as a blog and for export to databases and common spreadsheet applications. JavaScript applications such as brauhaus.js (developed from the Malt.io recipe sharing site) allow users to run them on a local machine or through any standards-compliant web browser. Supported fields: The following fields form the core information of the BeerXML structure. Recipes: Recipe name, Brewer, Brewing method (All grain, Partial Mash, Extract), Recipe Type (Ale, Lager, Hybrid, etc.), Recipe volume (Run length), Boil volume (Wort size), Boil time (duration), Recipe efficiency. Estimated values: OG (Original Gravity), FG (Final Gravity), Color (SRM), Bitterness (IBU), Alcohol content (%abv). Hops: Name, Origin, Description, Alpha acids, Beta acids, Storageability (HSI), Humulene, Caryophyllene, Cohumulone, Myrcene, Farnesene (not explicitly included in BeerXML v1), Total oil (not explicitly included in BeerXML v1); Recipe specific: When added (Boil, Mash, First Wort, Dry, etc.), Amount, Time (duration). Fermentables: Name, Origin, Description, Type (Grain, Sugar, etc.),
Potential, Recommend Mash (true or false), IBU gal/lb (for hopped extract), Color (°Lovibond), Moisture content, Protein content, Diastatic power (°Lintner), Maximum used (% of grist); Recipe specific: Amount, Late Addition (true or false). Additives (called MISC, for miscellaneous, in BeerXML v1): Name, Description, Type (Fining, Spice, Herb, etc.); Recipe specific: When added (Boil, Primary, etc.), Amount, Time (duration). Yeasts: Name, Supplier, Catalog number, Description, Type (Ale, Lager, etc.), Form (Dry, Liquid, etc.), Best for, Temperature range, Flocculation, Attenuation, Max reuse; Recipe specific: Amount, Added to secondary (true or false), Time cultured. Limitations: BeerXML 1.0 supports no more than three fermentation steps. While this is not a real-world limitation for many brewers, it does introduce a discrepancy where a software tool or web service that allows several or unlimited fermentation steps wishes to implement BeerXML as an import/export mechanism. For example, a fermentation schedule instructing the brewer to pitch at 21 degrees Celsius, allow the wort to drop to 17 degrees over three days, then decrease the temperature by 1 degree per day until the wort reaches 10 degrees, and hold for 12 days before racking for maturation could not be accommodated within the formal structure, requiring the use of informal/optional, non-machine-readable fields. Limitations: All units are converted to SI units internally. As a result, there is loss of precision when converting non-SI units, whether they be Imperial, US customary or metric. Hop oil contributions in the copper are not explicitly supported in the current definition. Farnesene levels are not explicitly supported in the current definition. No distinction is made between weight and mass. Development: A second version of the BeerXML standard has been proposed and is under development. It has not been validated or published, as its feature set is still under discussion. XML Header: As in XML, all files begin with a header line as the first line. After the XML header, a record set should start (for example <RECIPES>…</RECIPES> or <HOPS> … </HOPS>). Required XML Header Example with Recipes tag: Tag Names: Tag names are always uppercase. For example, "HOP" is acceptable, but "hop" and "Hop" are not. Version: All records have a required <VERSION> tag that denotes the version of the XML standard. At present, all are set to the integer 1 for this version of the standard. It is intended that future versions of the standard will be backward compatible with older versions, but the VERSION tag allows newer programmes to check for a higher version of the standard or do conversions if required to be backward compatible. Data Formats: Record Set - A special tag that starts a particular set of data. For example, an XML table that consists of a set of hops records might start with a <HOPS> tag to denote that this is the start of hops records. After the last record, a </HOPS> tag would be used. Record - Denotes a tag that starts or ends a particular record, for example "HOP" might start a hops record or "FERMENTABLE" might start a fermentable record. Data Formats: Percentage - Denotes a percentage; all percentages are expressed as percent out of 100. For example, 10.4% is written as "10.4" and not "0.104". List - The data has only a fixed number of values that are selected from the list in the description table for the tag. These items are case sensitive, and no other values are allowed. Data Formats: Text - The data is free format text.
For multiline entries, line breaks will be preserved where possible, and the text may be truncated on import if it is too long for the importing program to store. Multiline entries may be split with either a newline (Unix format) or a carriage return plus newline combination (DOS format); importing programmes should accept either. Data Formats: Boolean - The Boolean data type may be either TRUE or FALSE, with TRUE and FALSE in capitals. A default value should be specified for optional fields; the default is used if the value is not present. Integer - An integer number with no decimal point. May include negative values; examples include ... -3, -2, -1, 0, 1, 2, 3, ... Floating Point - A floating point number, usually expressed in its simplest form with a decimal point, as in "1.2", "0.004", etc. Programmes should endeavor to store as many significant digits as possible to avoid truncating or losing small values. Units: All units are fixed. It is the responsibility of the importing or exporting programme to convert to and from the units below if needed. Units: Weight Units - All weights are measured in kilograms (kg). For small values the exporting programme will make an effort to preserve as many significant digits as possible. Volume Units - All volumes are measured in litres (l). For small values the exporting programme will make an effort to preserve as many significant digits as possible. Temperature Units - All temperatures are measured in degrees Celsius. Time Units - All times are given in minutes or fractions thereof, unless otherwise specified in the tag description. Specific Gravity Units - Specific gravity is measured relative to the weight of the same size sample of water, for example "1.035", "1.060", and so on. Pressure Units - Pressures are measured in kilopascals (kPa). Non-Standard Tags: As per the XML standard, all non-standard tags should be ignored by the importing program. This allows an implementation to store additional information if desired by using its own tags. Any tags not defined as part of this standard may safely be ignored by the importing program. Optional tags: The optional 'Appendix A' adds tags for use in the display of brewing data using XML style sheets or XML-compatible report generators. As the tags in the appendix are for display only, they may include rounded values and varying units; these appendix tags are intended for display and not for data import.
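To make the conventions above concrete, here is a minimal sketch (Python with the standard xml.etree.ElementTree module) that builds a one-record HOPS set using uppercase tag names, a VERSION of 1, and SI units (kilograms, minutes); the hop values themselves are purely illustrative.

```python
import xml.etree.ElementTree as ET

# Build a minimal BeerXML-style HOPS record set: uppercase tags, VERSION = 1,
# weight in kilograms and time in minutes, per the unit conventions above.
hops = ET.Element("HOPS")
hop = ET.SubElement(hops, "HOP")
ET.SubElement(hop, "NAME").text = "Cascade"   # illustrative values only
ET.SubElement(hop, "VERSION").text = "1"
ET.SubElement(hop, "ALPHA").text = "5.5"      # percentage written as 5.5, not 0.055
ET.SubElement(hop, "AMOUNT").text = "0.028"   # kilograms
ET.SubElement(hop, "USE").text = "Boil"
ET.SubElement(hop, "TIME").text = "60.0"      # minutes

# Serialize the record set; a real file would start with the XML declaration
# described in the XML Header section above.
print(ET.tostring(hops, encoding="unicode"))
```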
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Beta-Naphthoflavone** Beta-Naphthoflavone: β-Naphthoflavone, also known as 5,6-benzoflavone, is a potent agonist of the aryl hydrocarbon receptor and as such is an inducer of such detoxification enzymes as cytochromes P450 (CYPs) and uridine 5'-diphospho-glucuronosyltransferases (UGTs). β-Naphthoflavone is a putative chemopreventive agent.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Null device** Null device: In some operating systems, the null device is a device file that discards all data written to it but reports that the write operation succeeded. This device is called /dev/null on Unix and Unix-like systems, NUL: (see TOPS-20) or NUL on CP/M and DOS (internally \DEV\NUL), nul on OS/2 and newer Windows systems (internally \Device\Null on Windows NT), NIL: on Amiga operating systems, and NL: on OpenVMS. In Windows PowerShell, the equivalent is $null. It provides no data to any process that reads from it, yielding EOF immediately. In IBM operating systems DOS/360 and successors, and also in OS/360 and successors, such files would be assigned in JCL to DD DUMMY. Null device: In programmer jargon, especially Unix jargon, it may also be called the bit bucket or black hole. History: According to the Berkeley UNIX man page, Version 4 Unix, which AT&T released in 1973, included a null device. Usage: The null device is typically used for disposing of unwanted output streams of a process, or as a convenient empty file for input streams. This is usually done by redirection. The /dev/null device is a special file, not a directory, so one cannot move a whole file or directory into it with the Unix mv command. References in computer culture: This entity is a common inspiration for technical jargon expressions and metaphors by Unix programmers, e.g. "please send complaints to /dev/null", "my mail got archived in /dev/null", and "redirect to /dev/null"—being jocular ways of saying, respectively: "don't bother sending complaints", "my mail was deleted", and "go away". The iPhone Dev Team commonly uses the phrase "send donations to /dev/null", meaning they do not accept donations. The fictitious person name "Dave (or Devin) Null" is sometimes similarly used (e.g., "send complaints to Dave Null"). In 1996, Dev Null was an animated virtual reality character created by Leo Laporte for MSNBC's computer and technology TV series The Site. Dev/null is also the name of a vampire hacker in the computer game Vampire: The Masquerade – Redemption. A 2002 advertisement for the Titanium PowerBook G4 reads "The Titanium Powerbook G4 Sends other UNIX boxes to /dev/null." The null device is also a favorite subject of technical jokes, such as warning users that the system's /dev/null is already 98% full. The 1995 April Fool's issue of the German magazine c't reported on an enhanced /dev/null chip that would efficiently dispose of the incoming data by converting it to a flicker on an internal glowing LED.
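As a small illustration of the usage described above (discarding unwanted output, serving as an empty input), the Python sketch below sends a placeholder command's output to the null device and shows that reading from it yields end-of-file immediately; subprocess.DEVNULL and os.devnull are the portable spellings.

```python
import os
import subprocess

# Discard a command's standard output entirely (the command itself is a placeholder).
subprocess.run(["echo", "this goes nowhere"], stdout=subprocess.DEVNULL)

# Reading from the null device yields EOF immediately, i.e. an empty result.
with open(os.devnull, "rb") as null_in:
    assert null_in.read() == b""
```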
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benzocycloheptene** Benzocycloheptene: Benzocycloheptenes are cycloheptenes with additional benzene rings attached. Most have two benzene rings, and are called dibenzocycloheptenes. Some benzocycloheptenes and substituted benzocycloheptenes have medical uses as antihistamines, anticholinergics, antidepressants, and antiserotonergics. Examples include: antihistamines and antiserotonergics (azatadine, desloratadine, loratadine, rupatadine, cyproheptadine, ketotifen, pizotifen); anticholinergics (deptropine); anticonvulsants (oxitriptyline); antidepressants and anticholinergics (amineptine, amitriptyline, nortriptyline, noxiptyline, octriptyline, protriptyline); various (cyclobenzaprine, intriptyline).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Circumnavigation** Circumnavigation: Circumnavigation is the complete navigation around an entire island, continent, or astronomical body (e.g. a planet or moon). This article focuses on the circumnavigation of Earth. Circumnavigation: The first circumnavigation of the Earth was the Magellan Expedition, which sailed from Sanlucar de Barrameda, Spain in 1519 and returned in 1522, after crossing the Atlantic, Pacific, and Indian oceans. Since the rise of commercial aviation in the late 20th century, circumnavigating Earth is straightforward, usually taking days instead of years. Today, the challenge of circumnavigating Earth has shifted towards human and technological endurance, speed, and less conventional methods. Etymology: The word circumnavigation is a noun formed from the verb circumnavigate, from the past participle of the Latin verb circumnavigare, from circum "around" + navigare "to sail". Definition: A person walking completely around either pole will cross all meridians, but this is not generally considered a "circumnavigation". The path of a true (global) circumnavigation forms a continuous loop on the surface of Earth separating two regions of comparable area. A basic definition of a global circumnavigation would be a route which covers roughly a great circle, and in particular one which passes through at least one pair of points antipodal to each other. In practice, people use different definitions of world circumnavigation to accommodate practical constraints, depending on the method of travel. Since the planet is quasispheroidal, a trip from one Pole to the other, and back again on the other side, would technically be a circumnavigation. There are practical difficulties (namely, the Arctic ice pack and the Antarctic ice sheet) in such a voyage, although it was successfully undertaken in the early 1980s by Ranulph Fiennes. History: The first circumnavigation was that of the ship Victoria between 1519 and 1522, now known as the Magellan–Elcano expedition. It was a Castilian (Spanish) voyage of discovery. The voyage started in Seville, crossed the Atlantic Ocean, and—after several stops—rounded the southern tip of South America, where the expedition named the Strait of Magellan. It then continued across the Pacific, discovering a number of islands on its way (including Guam), before arriving in the Philippines. The voyage was initially led by the Portuguese Ferdinand Magellan but he was killed on Mactan in the Philippines in 1521. The remaining sailors decided to circumnavigate the world instead of making the return voyage—no passage east across the Pacific would be successful for four decades—and continued the voyage across the Indian Ocean, round the southern cape of Africa, north along Africa's Atlantic coasts, and back to Spain in 1522. Only 18 men were still with the expedition at the end, including its surviving captain, the Spaniard Juan Sebastián Elcano. History: The next to circumnavigate the globe were the survivors of the Castilian/Spanish expedition of García Jofre de Loaísa between 1525 and 1536. None of the seven original ships of the Loaísa expedition nor its first four leaders—Loaísa, Elcano, Salazar, and Íñiguez—survived to complete the voyage. The last of the original ships, the Santa María de la Victoria, was sunk in 1526 in the East Indies (now Indonesia) by the Portuguese. 
Unable to press forward or retreat, Hernando de la Torre erected a fort on Tidore, received reinforcements under Alvaro de Saavedra that were similarly defeated, and finally surrendered to the Portuguese. In this way, a handful of survivors became the second group of circumnavigators when they were transported under guard to Lisbon in 1536. A third group came from the 117 survivors of the similarly failed Villalobos Expedition in the next decade; similarly ruined and starved, they were imprisoned by the Portuguese and transported back to Lisbon in 1546. History: In 1577, Elizabeth I sent Francis Drake to start an expedition against the Spanish along the Pacific coast of the Americas. Drake set out from Plymouth, England in November 1577, aboard Pelican, which he renamed Golden Hind mid-voyage. In September 1578, the ship passed south of Tierra del Fuego, the southern tip of South America, through the area now known as the Drake Passage. In June 1579, Drake landed somewhere north of Spain's northernmost claim in Alta California, presumably Drakes Bay. Drake completed the second complete circumnavigation of the world in a single vessel in September 1580, becoming the first commander to survive the entire circumnavigation. History: Thomas Cavendish completed his circumnavigation between 1586 and 1588 in record time—in two years and 49 days, nine months faster than Drake. It was also the first deliberately planned voyage of the globe. For the wealthy, long voyages around the world, such as the one made by Ulysses S. Grant, became possible in the 19th century, and the two World Wars moved vast numbers of troops around the planet. However, it was the rise of commercial aviation in the late 20th century that made circumnavigation, when compared to the Magellan–Elcano expedition, quicker and safer. Nautical: The fastest nautical global circumnavigation record is currently held by a wind-powered vessel, the trimaran IDEC 3. The record was established by six sailors: Francis Joyon, Alex Pella, Clément Surtel, Gwénolé Gahinet, Sébastien Audigane and Bernard Stamm, who wrote themselves into the history books on 26 January 2017 by circumnavigating the globe in 40 days, 23 hours, 30 minutes and 30 seconds. The absolute speed sailing record around the world followed the North Atlantic Ocean, Equator, South Atlantic Ocean, Southern Ocean, South Atlantic Ocean, Equator, North Atlantic Ocean route in an easterly direction. Nautical: Wind powered The map on the right shows, in red, a typical, non-competitive, route for a sailing circumnavigation of the world by the trade winds and the Suez and Panama canals; overlaid in yellow are the points antipodal to all points on the route. It can be seen that the route roughly approximates a great circle, and passes through two pairs of antipodal points. This is a route followed by many cruising sailors, going in the western direction; the use of the trade winds makes it a relatively easy sail, although it passes through a number of zones of calms or light winds. Nautical: In yacht racing, a round-the-world route approximating a great circle would be quite impractical, particularly in a non-stop race where use of the Panama and Suez Canals would be impossible. Yacht racing therefore defines a world circumnavigation to be a passage of at least 21,600 nautical miles (40,000 km) in length which crosses the equator, crosses every meridian and finishes in the same port as it starts.
The second map on the right shows the route of the Vendée Globe round-the-world race in red; overlaid in yellow are the points antipodal to all points on the route. It can be seen that the route does not pass through any pairs of antipodal points. Since the winds in the higher southern latitudes predominantly blow west-to-east, it can be seen that there are an easier route (west-to-east) and a harder route (east-to-west) when circumnavigating by sail; this difficulty is magnified for square-rig vessels due to the square rig's dramatic lack of upwind ability when compared to a more modern Bermuda rig. For around-the-world sailing records, there is a rule saying that the length must be at least 21,600 nautical miles calculated along the shortest possible track from the starting port and back that does not cross land and does not go below 63°S. It is allowed to have one single waypoint to lengthen the calculated track. The equator must be crossed. The solo wind-powered circumnavigation record of 42 days, 16 hours, 40 minutes and 35 seconds was established by François Gabart on the maxi-multihull sailing yacht MACIF and completed on 7 December 2017. The voyage followed the North Atlantic Ocean, Equator, South Atlantic Ocean, Southern Ocean, South Atlantic Ocean, Equator, North Atlantic Ocean route in an easterly direction. Nautical: Mechanically powered Since the advent of world cruises in 1922, by Cunard's Laconia, thousands of people have completed circumnavigations of the globe at a more leisurely pace. Typically, these voyages begin in New York City or Southampton, and proceed westward. Routes vary, either travelling through the Caribbean and then into the Pacific Ocean via the Panama Canal, or around Cape Horn. From there ships usually make their way to Hawaii, the islands of the South Pacific, Australia, New Zealand, then northward to Hong Kong, South East Asia, and India. At that point, again, routes may vary: one way is through the Suez Canal and into the Mediterranean; the other is around Cape of Good Hope and then up the west coast of Africa. These cruises end in the port where they began. In 1960, the American nuclear-powered submarine USS Triton circumnavigated the globe in 60 days, 21 hours for Operation Sandblast. Nautical: The current circumnavigation record in a powered boat of 60 days, 23 hours and 49 minutes was established by a voyage of the wave-piercing trimaran Earthrace which was completed on 27 June 2008. The voyage followed the North Atlantic Ocean, Panama Canal, Pacific Ocean, Indian Ocean, Suez Canal, Mediterranean Sea route in a westerly direction. Aviation: In 1922, the RAF officer Norman Macmillan, Major W T Blake and Geoffrey Malins made an unsuccessful attempt to fly a Daily News-sponsored round-the-world flight. The first aerial circumnavigation of the planet was flown in 1924 by aviators of the U.S. Army Air Service in a quartet of Douglas World Cruiser biplanes. The first non-stop aerial circumnavigation of the planet was flown in 1949 by Lucky Lady II, a United States Air Force Boeing B-50 Superfortress. Aviation: Since the development of commercial aviation, there are regular routes that circle the globe, such as Pan American Flight One (and later United Airlines Flight One). Today planning such a trip through commercial flight connections is simple.
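The sailing-record criteria above lend themselves to a rough numerical check. The following Python sketch is purely illustrative and is not any governing body's official method: it assumes a route supplied as latitude/longitude waypoints (the coordinates below are invented), measures it with the haversine formula, and tests the 21,600-nautical-mile, equator-crossing and 63°S conditions at the waypoints only, ignoring land crossings and the single lengthening-waypoint allowance.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in nautical miles."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_NM * asin(sqrt(a))

def check_record_route(waypoints):
    """waypoints: (lat, lon) pairs in degrees, first and last being the same port.
    Returns the leg-summed track length and three of the record criteria,
    checked only at the waypoints themselves (a simplification)."""
    total_nm = sum(haversine_nm(*a, *b) for a, b in zip(waypoints, waypoints[1:]))
    lats = [lat for lat, _ in waypoints]
    return {
        "length_nm": round(total_nm),
        "long_enough": total_nm >= 21600,
        "crosses_equator": min(lats) < 0 < max(lats),
        "stays_above_63S": min(lats) > -63.0,
    }

# Hypothetical eastward loop from a European port (coordinates are invented).
route = [(46.5, -1.8), (0.0, -25.0), (-40.0, 20.0), (-45.0, 115.0),
         (-50.0, -170.0), (-45.0, -75.0), (0.0, -30.0), (46.5, -1.8)]
print(check_record_route(route))
```

A real adjudication would evaluate the shortest non-land track between successive positions rather than straight waypoint-to-waypoint legs, so this sketch only conveys the arithmetic behind the rule.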
Aviation: The first lighter-than-air aircraft of any type to circumnavigate under its own power was the rigid airship LZ 127 Graf Zeppelin, which did so in 1929. Aviation records take account of the wind circulation patterns of the world; in particular the jet streams, which circulate in the northern and southern hemispheres without crossing the equator. There is therefore no requirement to cross the equator, or to pass through two antipodal points, in the course of setting a round-the-world aviation record. Thus, for example, Steve Fossett's global circumnavigation by balloon was entirely contained within the southern hemisphere. For powered aviation, the course of a round-the-world record must start and finish at the same point and cross all meridians; the course must be at least 36,770 kilometres (19,850 nmi) long (which is approximately the length of the Tropic of Cancer). The course must include set control points at latitudes outside the Arctic and Antarctic circles. In ballooning, which is at the mercy of the winds, the requirements are even more relaxed. The course must cross all meridians, and must include a set of checkpoints which are all outside of two circles, chosen by the pilot, having radii of 3,335.85 kilometres (2,072.80 mi) and enclosing the poles (though not necessarily centred on them). Astronautics: The first person to fly in space, Yuri Gagarin, also became the first person to complete an orbital spaceflight in the Vostok 1 spaceship within 2 hours in 1961. The flight started at 63° E and ended at 45° E longitude; thus Gagarin did not circumnavigate Earth completely. Gherman Titov in the Vostok 2 was the first human to circumnavigate Earth in spaceflight and made 17.5 orbits. Human-powered: According to adjudicating bodies Guinness World Records and Explorersweb, Jason Lewis completed the first human-powered circumnavigation of the globe on 6 October 2007. This was part of a thirteen-year journey entitled Expedition 360. Human-powered: In 2012, Turkish-born American adventurer Erden Eruç completed the first entirely solo human-powered circumnavigation, travelling by rowboat, sea kayak, foot and bicycle from 10 July 2007 to 21 July 2012, crossing the equator twice, passing over 12 antipodal points, and logging 66,299 kilometres (41,196 mi) in 1,026 days of travel time, excluding breaks. National Geographic lists Colin Angus as being the first to complete a global circumnavigation. However, his journey did not cross the equator or hit the minimum of two antipodal points as stipulated by the rules of Guinness World Records and AdventureStats by Explorersweb. People have both bicycled and run around the world, but the oceans have had to be covered by air or sea travel, making the distance shorter than the Guinness guidelines. To go from North America to Asia on foot is theoretically possible but very difficult. It involves crossing the Bering Strait on the ice, and around 3,000 kilometres (1,900 mi) of roadless, swampy or freezing-cold areas in Alaska and eastern Russia. No one has so far travelled all of this route by foot. David Kunst was the first verified person to walk around the world between 20 June 1970 and 5 October 1974. Notable circumnavigations: Maritime The Castilian ('Spanish') Magellan-Elcano expedition of August 1519 to 8 September 1522, started by Portuguese navigator Fernão de Magalhães (Ferdinand Magellan) and completed by Spanish Basque navigator Juan Sebastián Elcano after Magellan's death, was the first global circumnavigation (see Victoria).
The survivors of García Jofre de Loaísa's Spanish expedition 1525–1536, including Andrés de Urdaneta and Hans von Aachen, who was also one of the 18 survivors of Magellan's expedition, making him the first to circumnavigate the world twice. Francis Drake carried out the second circumnavigation of the world in a single expedition (and on a single independent voyage), from 1577 to 1580. Jeanne Baret was the first woman to complete a voyage of circumnavigation, in 1766–1769. John Hunter commanded the first ship to circumnavigate the world starting from Australia, between 2 September 1788 and 8 May 1789, with one stop in Cape Town to load supplies for the colony of New South Wales. HMS Driver completed the first circumnavigation by a steam ship in 1845–1847. The Spanish frigate Numancia, commanded by Juan Bautista Antequera y Bobadilla, completed the first circumnavigation by an ironclad in 1865–1867. Joshua Slocum completed the first single-handed circumnavigation in 1895–1898. In 1942, Vito Dumas became the first person to single-handedly circumnavigate the globe along the Roaring Forties. In 1960, the U.S. Navy nuclear-powered submarine USS Triton (SSRN-586) completed the first submerged circumnavigation. In 1969, Robin Knox-Johnston became the first person to complete a single-handed non-stop circumnavigation. In 1999, Jesse Martin became the youngest recognized person to complete an unassisted, non-stop, circumnavigation, at the age of 18. In 2001, the U.S. Coast Guard USCGC Sherman (WHEC-720) became the first Coast Guard vessel to circumnavigate the globe. In 2012, PlanetSolar became the first ever solar electric vehicle to circumnavigate the globe. In 2012, Laura Dekker became the youngest person to circumnavigate the globe single-handed, with stops, at the age of 16. Notable circumnavigations: In 2017, the trimaran IDEC 3, with sailors Francis Joyon, Alex Pella, Clément Surtel, Gwénolé Gahinet, Sébastien Audigane and Bernard Stamm, completed the fastest circumnavigation of the globe ever, in 40 days, 23 hours, 30 minutes and 30 seconds. The voyage followed the North Atlantic Ocean, Equator, South Atlantic Ocean, Southern Ocean, South Atlantic Ocean, Equator, North Atlantic Ocean route in an easterly direction. Notable circumnavigations: In 2022, the MV Astra, a former Swedish Sea Rescue Society ship, became the first sub-24m motor-powered vessel to circumnavigate the globe via the southern capes. Aviation United States Army Air Service, 1924, first aerial circumnavigation, 175 days, covering 44,360 kilometres (27,560 mi), with examples of the Douglas World Cruiser biplane. In 1949, the Lucky Lady II, a Boeing B-50 Superfortress of the U.S. Air Force, commanded by Captain James Gallagher, became the first aeroplane to circle the world non-stop (by refueling the plane in flight). Total time airborne was 94 hours and 1 minute. In 1957, three United States Air Force Boeing B-52 Stratofortresses made the first non-stop jet-aircraft circumnavigation in 45 hours and 19 minutes, with two in-air refuelings. In 1964, Geraldine "Jerrie" Mock was the first woman to fly solo around the world. In 1986, Dick Rutan and Jeana Yeager made the first non-refueled circumnavigation in an airplane (Rutan Voyager), in 9 days, 3 minutes and 44 seconds. In 1999, Bertrand Piccard and Brian Jones achieved the first non-stop balloon circumnavigation in Breitling Orbiter 3.
In 2002, Steve Fossett, flying the Spirit of Freedom balloon, became the first person to fly around the world alone, nonstop, in any kind of aircraft. Fossett's sole source of aid was a control center in Brookings Hall of Washington University in St. Louis. In 2005, Steve Fossett, flying the Virgin Atlantic GlobalFlyer, set the current record for fastest aerial circumnavigation (first non-stop, non-refueled solo circumnavigation in an airplane) in 67 hours, covering 37,000 kilometres. In 2014, Matt Guthmiller became the youngest person to solo circumnavigate by air at age 19 years, 7 months, and 15 days. In 2016, Bertrand Piccard and André Borschberg completed the first solar-powered aircraft circumnavigation of the world in Solar Impulse 2. In 2020, One More Orbit completed the fastest circumnavigation via both geographic poles in a Gulfstream G650ER. In 2020, Robert DeLaurentis and his twin-engine aircraft "Citizen of the World" became the first pilot and plane to successfully use biofuels over the North and South poles. Land In 1841–1842, Sir George Simpson made the first "land circumnavigation", crossing Canada and Siberia and returning to London. Ranulph Fiennes and Charlie Burton are credited with the first north–south circumnavigation of the Earth. Human On 13 June 2003, Robert Garside completed the first recognized run around the world, taking 5½ years; the run was authenticated in 2007 by Guinness World Records after five years of verification. On 6 October 2007, Jason Lewis completed the first human-powered circumnavigation of the globe (including human-powered sea crossings). On 21 July 2012, Erden Eruç completed the first entirely solo human-powered circumnavigation of the globe.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Echogenic intracardiac focus** Echogenic intracardiac focus: Echogenic intracardiac focus (EIF) is a small bright spot seen in the baby's heart on an ultrasound exam. This is thought to represent mineralization, or small deposits of calcium, in the muscle of the heart. EIFs are found in about 3–5% of normal pregnancies and cause no health problems. EIFs themselves have no impact on health or heart function. Often the EIF is gone by the third trimester. If there are no problems or chromosome abnormalities, EIFs are considered normal changes, or variants. Association with birth defects: Researchers have noted an association between an EIF and a chromosome problem in the baby. Types of chromosome problems that are occasionally seen include trisomy 13 (Patau syndrome) and trisomy 21 (Down syndrome). In the case of an isolated EIF, and no other ultrasound findings, some studies show that the risk for a chromosome abnormality is approximately two times a woman's background risk. Other studies report up to a 1% risk for Down syndrome when an EIF is seen on a second trimester fetal ultrasound exam. A clue to chromosome problems: An EIF is one clue which can contribute to the chances of a chromosome problem existing. Generally the risks are low if there are no other risk factors. Many babies with chromosome problems do not show any signs on ultrasound. Other factors discussed in counseling include: the mother's age at the expected date of delivery, the results of the Expanded AFP blood triple test, and evidence of other "fetal findings" seen on the ultrasound that suggest a chromosome problem. Options: The best available evidence suggests that an isolated echogenic intracardiac focus in the fetus of an otherwise low risk woman does not confer an increased risk of fetal aneuploidy. Although some studies have reported that the number or location of echogenic foci affects the risk of fetal aneuploidy (higher risk with biventricular or right ventricular involvement), the general consensus is that these factors have not been proven to matter. When an echogenic intracardiac focus is identified in an otherwise normal second trimester fetus, a normal cell-free DNA test can be very reassuring and obviate the need for invasive testing. Options: Amniocentesis is a test to check a baby's chromosomes. A small amount of amniotic fluid, which contains some fetal cells, is removed and tested. Amniocentesis is very accurate; however, there is a risk of miscarriage, which occurs in 0.5–1% of women who have amniocentesis. Results take about two weeks. A normal amniocentesis result means the EIF is not significant and there would be no other concerns about it. The test is usually done between 15 and 20 weeks of pregnancy (during the second trimester). Summary: An EIF in the fetal heart may indicate an increased chance of the baby having a chromosome problem. It does not affect the development of the baby or the function of the heart. If the baby has normal chromosomes, there would be no associated problems to be concerned about. No special treatment or tests are needed at delivery. It is important to remember that with an isolated EIF, chances are strongly in favor of a normal pregnancy outcome, but the patient is entitled to further counseling and testing options.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Baily's beads** Baily's beads: The Baily's beads effect and diamond ring effect are features of total and annular solar eclipses. Although caused by the same phenomenon, they are two distinct events during these types of solar eclipses. As the Moon covers the Sun during a solar eclipse, the rugged topography of the lunar limb allows beads of sunlight to shine through in some places while not in others. The effect is named after Francis Baily, who explained the phenomenon in 1836. The diamond ring effect is seen when only one bead is left, appearing as a shining "diamond" set in a bright ring around the lunar silhouette. Lunar topography has considerable relief because of the presence of mountains, craters, valleys, and other topographical features. The irregularities of the lunar limb profile (the "edge" of the Moon, as seen from a distance) are known accurately from observations of grazing occultations of stars. Astronomers thus have a fairly good idea which mountains and valleys will cause the beads to appear in advance of the eclipse. While Baily's beads are seen briefly for a few seconds at the center of the eclipse path, their duration is maximized near the edges of the path of the umbra, lasting 1–2 minutes. Baily's beads: After the diamond ring effect has diminished, the subsequent Baily's beads effect and totality phase are safe to view without the solar filters used during the partial phases. By then, less than 0.001% of the Sun's photosphere is visible. Baily's beads: Observers in the path of totality of a solar eclipse see first a gradual covering of the Sun by the lunar silhouette for just a small duration of time from around one minute to four minutes, followed by the diamond ring effect (visible without filters) as the last bit of photosphere disappears. As the burst of light from the ring fades, Baily's beads appear as the last bits of the bright photosphere shine through valleys aligned at the edge of the Moon. As the Baily's beads disappear behind the advancing lunar edge (the beads also reappear at the end of totality), a thin reddish edge called the chromosphere (the Greek chrōma meaning "color") appears. Though the reddish hydrogen radiation is most visible to the unaided eye, the chromosphere also emits thousands of additional spectral lines. Observational history: Although Baily is often said to have discovered the cause of the feature which bears his name, Sir Edmond Halley made the first recorded observations of Baily's beads during the solar eclipse of 3 May 1715. Halley described and correctly ascertained the cause of the effect in his "Observations of the late Total Eclipse of the Sun [...]" in the Philosophical Transactions of the Royal Society: About two Minutes before the Total Immersion, the remaining part of the Sun was reduced to a very fine Horn, whose Extremeties seemed to lose their Acuteness, and to become round like Stars ... which Appearance could proceed from no other Cause but the Inequalities of the Moon's Surface, there being some elevated parts thereof near the Moon's Southern Pole, by whose Interposition part of that exceedingly fine Filament of Light was intercepted. Observational history: The term "Baily's beads" then came into use after Baily described the phenomenon to the Royal Astronomical Society in December 1836.
Having observed the solar eclipse of 15 May 1836 from Jedburgh in the Scottish Borders, he reported that: ...when the cusps of the sun were about 40 degrees asunder, a row of lucid points, like a string of beads, irregular in size, and distance from each other, suddenly formed around that part of the circumference of the moon that was about to enter on the sun's disc. In media: Cosmas Damian Asam was probably the earliest realistic painter to depict a total solar eclipse and diamond ring. His painting was finished in 1735. The Baily's beads phenomenon is seen during the credit opening sequence of the NBC TV show Heroes, while the Diamond Ring effect is seen during the credit opening sequence of Star Trek: Voyager, albeit from a fictitious extrasolar body, seen from space.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Benzylideneacetone** Benzylideneacetone: Benzylideneacetone is the organic compound described by the formula C6H5CH=CHC(O)CH3. Although both cis- and trans-isomers are possible for the α,β-unsaturated ketone, only the trans isomer is observed. Its original preparation demonstrated the scope of condensation reactions to construct new, complex organic compounds. Benzylideneacetone is used as a flavouring ingredient in food and perfumes. Preparation: Benzylideneacetone can be efficiently prepared by the base-induced condensation of acetone and benzaldehyde: CH3C(O)CH3 + C6H5CHO → C6H5CH=CHC(O)CH3 + H2O. However, the benzylideneacetone formed via this reaction can undergo another Claisen-Schmidt condensation with another molecule of benzaldehyde to form dibenzylideneacetone. Because relatively weak bases such as NaOH make very little of the enolate ion at equilibrium, there is still a lot of unreacted base left in the reaction mixture, which can go on and remove protons from the alpha carbon of benzylideneacetone, allowing it to undergo another Claisen-Schmidt condensation and make dibenzylideneacetone. If, on the other hand, lithium diisopropylamide (LDA) is used as the base, all of the acetone will be deprotonated, forming the enolate ion quantitatively. Therefore, a more efficient, but more expensive, way to make benzylideneacetone is to combine equimolar amounts of LDA (in THF), acetone, and benzaldehyde. Reactions: As with most methyl ketones, benzylideneacetone is moderately acidic at the alpha position, and it can be readily deprotonated to form the corresponding enolate. The compound undergoes the reactions expected for its collection of functional groups: e.g., the double bond adds bromine, the heterodiene adds electron-rich alkenes in Diels-Alder reactions to give dihydropyrans, the methyl group undergoes further condensation with benzaldehyde to give dibenzylideneacetone, and the carbonyl forms hydrazones. It reacts with Fe2(CO)9 to give (benzylideneacetone)Fe(CO)3, a reagent for transferring the Fe(CO)3 unit to other organic substrates. Reactions: Hydrogenation of benzylideneacetone results in a preparation of benzylacetone. The reaction of 4-hydroxycoumarin with this compound yields warfarin.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Fosfestrol** Fosfestrol: Fosfestrol, sold under the brand name Honvan and also known as diethylstilbestrol diphosphate (DESDP), is an estrogen medication which is used in the treatment of prostate cancer in men. It is given by slow intravenous infusion once per day to once per week or by mouth once per day. Side effects of fosfestrol include nausea and vomiting, cardiovascular complications, blood clots, edema, and genital skin reactions, among others. Fosfestrol is an estrogen, and hence is an agonist of the estrogen receptor, the biological target of estrogens like estradiol. It acts as a prodrug of diethylstilbestrol. Fosfestrol was patented in 1941 and was introduced for medical use in 1955. It was previously marketed widely throughout the world, but now remains available in only a few countries. Medical uses: Fosfestrol is used as a form of high-dose estrogen therapy in the treatment of castration-resistant prostate cancer. It is added once progression of metastases has occurred following therapy with other interventions such as orchiectomy, gonadotropin-releasing hormone modulators, and nonsteroidal antiandrogens. Fosfestrol has also been used to prevent the testosterone flare at the start of gonadotropin-releasing hormone agonist therapy in men with prostate cancer. Fosfestrol sodium is given at a dosage of 600 to 1200 mg/day by slow intravenous infusion over a period of 1 hour for a treatment duration of 5 to 10 days in men with prostate cancer. Following this, it is given at a dose of 300 mg/day for 10 to 20 days. Maintenance doses of fosfestrol sodium of 300 to 600 mg may be given four times per week. This may be gradually reduced to one 300 to 600-mg dose per week over a period of several months. Fosfestrol sodium is also used to a lesser extent by oral administration initially at a dosage of 360 to 480 mg three times per day in the treatment of prostate cancer. Maintenance doses of 120 to 240 mg three times per day may be used and can be gradually reduced to 240 mg/day. Medical uses: Available forms Fosfestrol is available in the form of solutions for intravenous administration and tablets for oral administration. Side effects: Side effects of fosfestrol include nausea and vomiting in 80% of patients (with 1 in 25 cases, or 4%, resulting in death), cardiovascular complications (18% with fosfestrol plus adriamycin relative to 2% with adriamycin alone) such as thrombosis (2 in 25 cases, or 8%), edema (44% requiring diuretic therapy), and skin reactions such as burning, itching, or pain in the genital area (40%). In addition, weight gain, feminization, and gynecomastia may occur. Pharmacology: Pharmacodynamics Fosfestrol is an estrogen, or an agonist of the estrogen receptors. It is inactive itself and acts as a prodrug of diethylstilbestrol. Similarly to diethylstilbestrol, fosfestrol has powerful antigonadotropic effects and strongly suppresses testosterone levels in men. It decreases testosterone levels into the castrate range within 12 hours of the initiation of therapy. Fosfestrol may also act by other mechanisms, such as via direct cytotoxic effects in the prostate gland. Pharmacology: Pharmacokinetics The pharmacokinetics of fosfestrol have been studied. Chemistry: Fosfestrol is a synthetic nonsteroidal estrogen of the stilbestrol group. It is an estrogen ester; specifically, it is the diphosphate ester of diethylstilbestrol. Fosfestrol is provided both as the free base and as a tetrasodium salt.
In terms of dose equivalence, 300 mg anhydrous fosfestrol sodium is equal to about 250 mg fosfestrol. A polymer of fosfestrol, polydiethylstilbestrol phosphate, was developed as a long-acting estrogen for potential use in veterinary medicine, but was never marketed. History: Fosfestrol was first patented in 1941 and was mentioned in the literature by Huggins. Conjugated estrogens and diethylstilbestrol sulfate, which are water-soluble estrogens, were first reported to be effective in the treatment of prostate cancer via intravenous administration in 1952. Starting in October 1952, Flocks and colleagues studied intravenous fosfestrol in the treatment of prostate cancer, publishing their findings in 1955. Fosfestrol was first introduced for medical use in 1955 under the brand names Stilphostrol and ST 52 in the United States and France, respectively. Society and culture: Generic names Fosfestrol is the generic name of the drug and its INN, BAN, and JAN, while diethylstilbestrol diphosphate is its USAN and fosfestrolo is its DCIT. It is also known as stilbestrol diphosphate. Fosfestrol sodium is its INNM and BANM. Brand names Brand names of fosfestrol include Cytonal, Difostilben, Honovan, Honvan, Honvol, Honvon, Fosfostilben, Fostrolin, ST 52, Stilbetin, Stilbol, Stilbostatin, Stilphostrol, and Vagestrol, among others. Availability Fosfestrol has been marketed widely throughout the world, including in the United States, Canada, Europe, Asia, Latin America, and South Africa, among other areas of the world. However, today, it appears to remain available only in a few countries, including Bangladesh, Egypt, India, Oman, and Tunisia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Magnetosonic wave** Magnetosonic wave: In physics, magnetosonic waves, also known as magnetoacoustic waves, are low-frequency compressive waves driven by mutual interaction between an electrically conducting fluid and a magnetic field. They are associated with compression and rarefaction of both the fluid and the magnetic field, as well as with an effective tension that acts to straighten bent magnetic field lines. The properties of magnetosonic waves are highly dependent on the angle between the wavevector and the equilibrium magnetic field and on the relative importance of fluid and magnetic processes in the medium. They only propagate with frequencies much smaller than the ion cyclotron or ion plasma frequencies of the medium, and they are nondispersive at small amplitudes. Magnetosonic wave: There are two types of magnetosonic waves, fast magnetosonic waves and slow magnetosonic waves, which—together with Alfvén waves—are the normal modes of ideal magnetohydrodynamics. The fast and slow modes are distinguished by magnetic and gas pressure oscillations that are either in-phase or anti-phase, respectively. This results in the phase velocity of any given fast mode always being greater than or equal to that of any slow mode in the same medium, among other differences. Magnetosonic wave: Magnetosonic waves have been observed in the Sun's corona and provide an observational foundation for coronal seismology. Characteristics: Magnetosonic waves are a type of low-frequency wave present in electrically conducting, magnetized fluids, such as plasmas and liquid metals. They exist at frequencies far below the cyclotron and plasma frequencies of both ions and electrons in the medium (see Plasma parameters § Frequencies). In an ideal, homogeneous, electrically conducting, magnetized fluid of infinite extent, there are two magnetosonic modes: the fast and slow modes. They form, together with the Alfvén wave, the three basic linear magnetohydrodynamic (MHD) waves. In this regime, magnetosonic waves are nondispersive at small amplitudes. Dispersion relation The fast and slow magnetosonic waves are defined by a bi-quadratic dispersion relation that can be derived from the linearized MHD equations, $\omega^4 - (c_s^2 + v_A^2)\,k^2\omega^2 + c_s^2 v_A^2 k^4 \cos^2\theta = 0$, where $c_s$ is the sound speed and $v_A$ is the Alfvén speed. Characteristics: Phase and group velocities The phase velocities of the fast and slow magnetosonic waves depend on the angle θ between the wavevector k and the equilibrium magnetic field B0 as well as the equilibrium density, pressure, and magnetic field strength. From the roots of the magnetosonic dispersion relation, the associated phase velocities can be expressed as $v_\pm^2 = \tfrac{1}{2}\left[(c_s^2 + v_A^2) \pm \sqrt{(c_s^2 + v_A^2)^2 - 4\,c_s^2 v_A^2 \cos^2\theta}\right]$, where the upper sign gives the phase velocity v+ of the fast mode and the lower sign gives the phase velocity v− of the slow mode. Characteristics: The phase velocity of the fast mode is always greater than or equal to that of the slow mode, v+ ≥ v−. This is due to the differences in the signs of the thermal and magnetic pressure perturbations associated with each mode. The magnetic pressure perturbation $p_{m1} = \mathbf{B}_0 \cdot \mathbf{B}_1 / \mu_0$ can be expressed in terms of the thermal pressure perturbation p1 and phase velocity as $p_{m1} = \left(1 - \frac{c_s^2 \cos^2\theta}{v_\pm^2}\right) p_1$. Characteristics: For the fast mode $v_+^2 > c_s^2 \cos^2\theta$, so magnetic and thermal pressure perturbations have matching signs. Conversely, for the slow mode $v_-^2 < c_s^2 \cos^2\theta$, so magnetic and thermal pressure perturbations have opposite signs. In other words, the two pressure perturbations reinforce one another in the fast mode, but oppose one another in the slow mode.
As a result, the fast mode propagates at a faster speed than the slow mode. The group velocity $\mathbf{v}_{g\pm}$ of fast and slow magnetosonic waves is defined by $\mathbf{v}_{g\pm} = \frac{d\omega}{d\mathbf{k}} = \hat{\mathbf{k}}\,v_\pm + \hat{\boldsymbol{\theta}}\,\frac{\partial v_\pm}{\partial \theta}$, where $\hat{\mathbf{k}}$ and $\hat{\boldsymbol{\theta}}$ are local orthogonal unit vectors in the direction of k and in the direction of increasing θ, respectively. In a spherical coordinate system with a z-axis along the unperturbed magnetic field, these unit vectors correspond to those in the direction of increasing radial distance and increasing polar angle. Characteristics: Limiting cases Incompressible fluid In an incompressible fluid, the density and pressure perturbations vanish, ρ1 = 0 and p1 = 0, resulting in the sound speed tending to infinity, cs → ∞. In this case, the slow mode propagates with the Alfvén speed, $\omega_{sl}^2 = \omega_A^2$, and the fast mode disappears from the system, $\omega_f^2 \to \infty$. Characteristics: Cold limit Under the assumption that the background temperature is zero, it follows from the ideal gas law that the thermal pressure is also zero, p0 = 0, and, as a result, that the sound speed vanishes, cs = 0. In this case, the slow mode disappears from the system, $\omega_{sl}^2 = 0$, and the fast mode propagates isotropically with the Alfvén speed, $\omega_f^2 = k^2 v_A^2$. In this limit, the fast mode is sometimes referred to as a compressional Alfvén wave. Characteristics: Parallel propagation When the wavevector and the equilibrium magnetic field are parallel, θ → 0, the fast and slow modes propagate as either a pure sound wave or a pure Alfvén wave, with the fast mode identified with the larger of the two speeds and the slow mode identified with the smaller. Perpendicular propagation When the wavevector and the equilibrium magnetic field are perpendicular, θ → π/2, the fast mode propagates as a longitudinal wave with phase velocity equal to the magnetosonic speed, and the slow mode propagates as a transverse wave with phase velocity approaching zero. Inhomogeneous fluid: In the case of an inhomogeneous fluid (that is, a fluid where at least one of the background quantities is not constant), the MHD waves lose their defining nature and acquire mixed properties. In some setups, such as the axisymmetric waves in a straight cylinder with a circular base (one of the simplest models for a coronal loop), the three MHD waves can still be clearly distinguished. But in general, the pure Alfvén and fast and slow magnetosonic waves don't exist, and the waves in the fluid are coupled to each other in intricate ways. Observations: Both fast and slow magnetosonic waves have been observed in the solar corona, providing an observational foundation for the coronal plasma diagnostic technique of coronal seismology.
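As a quick numerical check of the phase-velocity expression above, the Python sketch below (with arbitrary illustrative values for the sound and Alfvén speeds, not taken from any observation) evaluates the fast and slow speeds and reproduces the parallel and perpendicular limiting behaviour described in this section.

```python
import numpy as np

def magnetosonic_phase_speeds(c_s, v_A, theta):
    """Fast (+) and slow (-) magnetosonic phase speeds for sound speed c_s,
    Alfven speed v_A, and angle theta (radians) between k and B0."""
    s = c_s**2 + v_A**2
    disc = np.sqrt(s**2 - 4.0 * c_s**2 * v_A**2 * np.cos(theta)**2)
    v_fast = np.sqrt(0.5 * (s + disc))
    v_slow = np.sqrt(0.5 * (s - disc))
    return v_fast, v_slow

# Illustrative numbers only: c_s = 1, v_A = 2, at three propagation angles.
for theta_deg in (0, 45, 90):
    vf, vs = magnetosonic_phase_speeds(1.0, 2.0, np.radians(theta_deg))
    print(f"theta={theta_deg:3d} deg  fast={vf:.3f}  slow={vs:.3f}")
```

With c_s = 1 and v_A = 2, the printout shows the fast and slow speeds collapsing to the Alfvén and sound speeds at θ = 0, and reaching √(c_s² + v_A²) and 0 at θ = 90°, matching the parallel and perpendicular limiting cases above.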
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Alma Problem** Alma Problem: The Alma Problem is an issue of concern to certain musicologists, historians and biographers who deal with the lives and works of Gustav Mahler and his wife Alma. Alma Problem: Alma Mahler (ultimately Alma Mahler Gropius Werfel), an articulate, well-connected and influential woman and a composer herself, outlived him by more than 50 years, during which time she was the principal authority on the mature Mahler's values, character and day-to-day behaviour. Her two books quickly became the central source material for Mahler scholars and music-lovers alike. Unfortunately, as scholarship has investigated the picture she sought to paint of Mahler and her relationship with him, some writers painted her accounts as unreliable, false, and misleading, and found evidence of deliberate manipulation and falsification. The fact that these deeply flawed accounts have nevertheless had a massive influence — leaving their mark upon several generations of scholars, interpreters and music-lovers, and becoming a foundation of the critical and popular literature on Mahler — constitutes the 'Alma Problem'. In contrast, other scholars see the creation of the 'Alma Problem' as based on outmoded perspectives on gender relations and artistic aspirations. Letters: The 'Alma Problem' manifests itself in several dimensions. To begin with, there is her treatment of the couple's correspondence. Of the more than 350 written communications Mahler is known to have written to her, Alma suppressed almost 200 — and of the 159 that she did choose to publish, she is now known to have made unacknowledged alterations to no fewer than 122. On three occasions Alma even manufactured items by joining together separate letters. She also appears to have systematically destroyed everything that she wrote to her husband: the text of only a single one of her own letters, written before they were married, is known to survive. Letters: As for the changes she secretly made in his letters before publication, a clear pattern can be discerned: Alma seems determined to present herself as a powerful, potent person whose tremendous gifts and personal allure placed her at the very centre of events — at the same time as insisting that her selfless devotion to her husband made her the powerless, guiltless victim of his unreasonableness. Thus her deletion of Mahler's references to the presents he bought or offered her protected her claims that he hardly ever gave her gifts; while her deletion of his references to the plentiful sums of money he handed over to her allowed her to maintain that he had kept her short of housekeeping money. Her deletion of references to people close to Mahler but not liked by her permitted her to minimise their apparent role in his life, compared to hers. And on other occasions she seems to have been anxious to create the impression that Mahler thought she might be merely unwilling to do or be something, rather than actually unable: his "Answer ... if you are able to follow me" is secretly modified to "Answer ... if you are willing to follow me". Letters: On this subject, Jonathan Carr has noted: "If the text [of a letter] offended Alma's self-esteem or predilections then it had to be 'corrected' with some judicious deletion or insertion before the world could be allowed to see it". In some cases her deletions have actually proved impossible to correct: her distinctive violet ink has obliterated the original word, line or passage. 
Memories: Alma's re-writing of history extends back beyond the start of her life with Mahler. She describes her father as coming "from old patrician stock", and her mother as having been sent to Vienna to take voice-lessons with a highly regarded teacher at a private academy. It is now known, however, that Alma's father was the great-grandson of a scythe-smith from the Steyr Valley — and that her mother became a singer only after an early life that had seen her family flee to escape bankruptcy and the young girl herself working as a ballet dancer (at the age of eleven), a nanny, an au pair girl, and a cashier at the public baths. Memories: Alma's story of her 'first meeting' with Mahler (in November 1901, at a dinner party given by Berta Zuckerkandl and attended by other glittering personalities such as Gustav Klimt and Max Burckhard) is one of her most famous, but it departs from the truth in at least one major respect: it was not, in fact, their first meeting. Alma is now known to have met Mahler two years earlier in the more humdrum context of a bicycle ride in the lake region of the Salzkammergut. (In her diaries, she wrote: "He soon overtook us, and we met four or five times. Each time he struck up a conversation, staring hard at me"). It is now known that Alma, deeply infatuated with the famous and distant figure, had previously sought (and eventually obtained) Mahler's autograph on a postcard, and that on their actual first meeting she was embarrassed that he appeared to have "perceived the connection" between her and the card he had signed. (This story is instructive in that it not only casts light on Alma's motivations in expunging an important fact from the record, but also reveals the value of her original diaries in correcting her later accounts. The diaries were published only in the 1990s, having remained in almost unreadable manuscript during her lifetime.) Many of Alma's submissions concern purely private experiences that plainly can have left no documentary evidence; nor is there any 'balancing' material from the other side of the marriage — for, in contrast to Alma, Mahler never wrote or spoke (except, perhaps, to Freud) about their relationship. In such circumstances, it is important to remember that the picture we have of Mahler as the typical fin-de-siècle artist — an 'ascetic'; a morbid and tormented neurotic; a despairing and sickly man for whom all pleasures were suspect; and a man whose constant overwork undermined an already weak physical constitution — derives entirely from Alma's writings, and is not corroborated by others. For most of his adult life, in fact, Mahler actively enjoyed putting his strength and endurance to the test: he loved to swim long distances, climb mountains, take endless walks, and go on strenuous bicycle tours. Even in the winter of 1910–11, when the shock of Alma's infidelity had threatened to overwhelm him, he was still planning for his old age, and making decisions about the construction and decoration of a new house in the Semmering mountains — while in 1911, in what was probably his last interview, he made the following statement: "I have worked really hard for decades, and have born [sic] the exertion wonderfully well". Other evident manipulations and falsifications concern the people with whom the couple came into contact.
Alma and the Fifth and Sixth Symphonies: Alma met Mahler during the period in which the Fifth Symphony was being composed (1901-2); her various remarks and recollections concerning this and the Sixth Symphony (1903-4, rev. 1906) provide a concise demonstration of the 'Alma Problem'. Alma and the Fifth and Sixth Symphonies: Fifth Symphony In 'Memories and Letters', Alma writes of attending a 1904 'reading rehearsal' of the as-yet unperformed Fifth Symphony: "I had heard each theme in my head while copying the score, but now I could not hear them at all! Mahler had overscored the percussion instruments and side-drum so madly and persistently that little beyond the rhythm was recognizable. I hurried home sobbing. [...] For a long time I refused to speak. At last I said between my sobs: 'You've written it for percussion and nothing else'. He laughed, and produced the score. He crossed out the side drum in red chalk and half the percussion instruments too. He had felt the same thing himself, but my passionate protest turned the scale." (Alma Mahler-Werfel, 'Memories and Letters', p.73) Speaking of what he calls 'this engaging story' — which is found quoted in countless books and programme-notes — Colin Matthews explains that "the evidence of the manuscript and the printed scores does not, unfortunately, bear it out. In fact, the first edition of the score actually has very slightly more percussion in the first movement [...] than the manuscript..." (Colin Matthews, 'Mahler at Work', p.59) Sixth Symphony First movement's 'second subject' Alma claims that Mahler told her in 1904 that he had tried to 'capture' her (the word she reports him using is 'festzuhalten') in the F-major theme that is the 'second subject' of the symphony's first movement. The story has become canonic — to the extent that no commentator can fail to repeat it, and few listeners can hear the theme without thinking of Alma's report. The report may of course be true (in that Mahler may actually have attempted to describe her in music, or may merely have chosen to claim that he had); but her statement is not corroborated. Alma and the Fifth and Sixth Symphonies: Scherzo/children Alma asserts that in the Scherzo movement Mahler represented the unrhythmic games of the two little children, tottering in zig-zags over the sand. Ominously, the childish voices became more and more tragic, and died out in a whimper. This memorable (and interpretatively potent) revelation is still encountered in writings about the symphony — in spite of the fact that it is not merely uncorroborated, but is conclusively refuted by the chronology: the movement was composed in the Summer of 1903, when Maria Anna Mahler (born November 1902) was less than a year old, and when Anna Justine Mahler (born July 1904) had not even been conceived. Alma and the Fifth and Sixth Symphonies: Order of the middle movements The order of the symphony's two middle movements — Scherzo/Andante or Andante/Scherzo — has been the subject of extensive discussion. Mahler's original score (1904 manuscript and first published edition, as well as Zemlinsky's piano duet arrangement) placed the Scherzo second and the Andante third. During rehearsals for the work's first performance in 1906, the composer decided that the slow movement should precede the scherzo, and he instructed his publishers C.F. Kahnt to begin production of a 'second edition' of the work with the movements in that order, and meanwhile to insert a printed instruction in all existing scores.
Mahler conducted the Sixth Symphony with the middle movements in the Andante-Scherzo order three times during his lifetime. This revised, 'second thoughts' ordering was observed by Mahler thereafter; it is how the second edition of the symphony was published; and it is how the work was performed by others in the three additional performances that the work received during the composer's lifetime. Alma and the Fifth and Sixth Symphonies: In 1919, following an inquiry from Mengelberg to Alma about the order of the middle movements in anticipation of a large-scale Mahler festival in Amsterdam scheduled for 1920, Alma replied: Erst Scherzo dann Andante herzlichst Alma ("First Scherzo then Andante affectionately Alma"). In 1955, in her later years, following a comparable inquiry from Eduard van Beinum to Alma on the same question, Alma replied with the reverse of her communication to Mengelberg 36 years earlier. The matter remains under debate, with conductors and musicians having the latitude to choose either option for performance. Alma and the Fifth and Sixth Symphonies: Third hammer blow Alma also claims that Mahler described the three hammer-blows of the finale as 'three blows of fate, the last of which fells [the hero] as a tree is felled'. Deciding that the hero was Mahler himself, and that the symphony was 'prophetic', she then identified these three blows with three later events in her husband's life: his 'forced resignation' from the Vienna State Opera; the death of his eldest daughter; and the diagnosis of a fatal heart condition. In addition, she claims that Mahler eventually deleted the third hammer-blow from the score out of sheer superstition, in an (unsuccessful) attempt to stave off a third disaster in his own life. Again, the story has become canonic; but the difficulties it presents are several. First, Alma's programmatic interpretation is not corroborated by the composer or any other source. Second, Mahler's resignation from the Opera was not, in reality, 'forced', and was not necessarily even a 'disaster'. Third, Alma exaggerates the seriousness of her husband's 'heart condition', which was not inevitably fatal. Fourth, she neglects to mention that Mahler's discovery of her own infidelity was a 'blow' of far greater weight than at least one (and possibly two) of the other events she does mention. Fifth, her story once again falls foul of the known chronology: Mahler revised the symphony in the Summer of 1906 — whereas all three of the events reported by Alma took place after this time: Mahler requested release from his Vienna Opera contract in May 1907, and it was in July of that year that his daughter died and his heart condition was diagnosed. Sixth, her report of Mahler's 'superstitious' reason for removing the third hammer-blow not only has no corroboration of any kind, but also betrays an ignorance of the musical sources. Mahler originally notated no fewer than five large percussive impacts in the score of his finale (b.9, b.336, b.479, b.530, b.783); these five were later reduced to a 'classically' dramatic three and specifically allotted to a 'Hammer' — though with one of these blows (the last) occurring in a structural and gestural context that makes it very different from the other two (and equivalent to the two that were removed). It was this anomalous blow that Mahler, in revising the work, chose to delete — making the important question not 'Why did he finally take it out?', but 'Why did he first leave it in?'
Selected further examples: Alma claims that on 24 February 1901 she attended two different musical events conducted by her future husband. "I heard him conduct twice that day", she reports. She then gives an eye-witness account of the second of these events, supposedly a performance of Die Meistersinger: "He looked like Lucifer: white in the face, his eyes like black coals. I felt profoundly sorry for him, and said to the people sitting near me: 'This is more than the man can endure'. [...] It was the unique intensity of his interpretative art that enabled him to create two such miracles in one day without destroying himself". This entire story is pure invention, however. The work that Mahler is known to have conducted on that occasion was actually Mozart's The Magic Flute; and, in any case, Alma's diaries show that she remained at home all that evening. Alma claims that Mahler 'feared women', and that he had almost no sexual experience right up to his forties (he was 41 when they met). In fact, Mahler's long record of prior romantic entanglements — including a lengthy one with Anna von Mildenburg — suggests that this was not the case. Selected further examples: Alma claims that her new husband was 50,000 gold crowns in debt due to the extravagance of his sister (and housekeeper) Justine, and that only her own careful budgeting allowed this to be repaid. In fact, no amount of wifely thrift could ever have paid off a debt of such a size, as the sum was far in excess of Mahler's gross income as opera director, salary and 'fringe benefits' combined. Selected further examples: Alma claims that Mahler intensely disliked Richard Strauss's opera 'Feuersnot', that he 'had a horror of the work', and avoided conducting it. In fact, Feuersnot is the only Strauss opera that Mahler is known to have conducted {see 'Gustav Mahler - Richard Strauss Correspondence, 1888–1911', Ed. Herta Blaukopf (London, 1984)}. Selected further examples: Describing a 1904 concert in Amsterdam in which Mahler's Fourth Symphony was performed twice, Alma claims that Mahler, after conducting the work in the first half, handed the baton to Mengelberg for the evening's second performance. "Mahler took a seat in the stalls and listened to his work", she claimed. "Later, when he came home, he told me it had been as if he himself had conducted. Mengelberg had grasped his intentions down to the last nuance". Her claim is entirely false. From the contents of a postcard that Mahler wrote to her before the performance, from the printed programme for the event, and from the various newspaper reviews, we know that Mengelberg did not conduct at the concert: the two performances that were given were both conducted by Mahler. Problems in translation: An important aspect of the 'Alma Problem' for which Alma herself might not have been responsible concerns the 'standard' English translations of her books, which frequently differ significantly from the German originals. Problems in translation: 'Memories and Letters' (Basil Creighton's 1946 version of 'Erinnerungen und Briefe') incorporates material that was apparently added at that time and is not found in the German edition, and also shows a tendency to abridge and revise (especially where the original was frank about sexual matters). For example, the words Alma recalls as her invitation to the dinner at which she claims she met Mahler for the first time can be literally translated as follows: "Mahler will be coming to us today. Don't you want to be there too?
— I know you are interested in him'. Creighton, however, merely renders it as: 'We've got Mahler coming in tonight — won't you come?' Recounting the story of the couple's journey to St Petersburg, Alma writes in German of her husband suffering a 'frightful migraine' [furchtbare Migräne] on the train, and describes the condition as 'one of those auto-intoxications [Autointoxicationen] from which he suffered all his life'. Yet this is rendered by Creighton as Mahler catching 'a severe feverish chill', and the statement that he 'suffered all his life from these infections'. Problems in translation: Describing the discovery of Mahler's heart condition, Alma speaks of the diagnosis of 'hereditary, although compensated, valve defects on both sides'. Creighton's English translation (along with all the commentaries that derive from it) omits the reference to the defects being 'compensated'. Faced with this and other problematic translations, Peter Franklin has been moved to ask whether there might not be 'a special, English readers' Mahler, idiosyncratically marked and defined by textual tradition'. Relevant quotations: Jonathan Carr: "It is now plain that Alma did not just make chance mistakes and 'see things through her own eyes'. She also doctored the record". Relevant quotations: Hugh Wood: "Often she is the only witness, and the biographer has to depend on her while doubting with every sentence her capacity for telling the truth. Everything that passed through her hands must be regarded as tainted". Nancy Newman's 2022 study provides a "theoretical foundation" that "grounds extensive critique of both the conventions of fin-de-siècle Vienna and the chauvinism of late twentieth-century scholars." Henry-Louis de la Grange has provided a more nuanced analysis on the reliability of Alma in his 4-volume biography of Gustav Mahler: "The subject of Alma's reliability deserves more than passing remarks, and it has unfortunately nowadays become a universal habit in most writings about Mahler to question each and every word she wrote in her Erinnerungen. This writer was one of the very first ever to cast doubt on many of her statements and to point out the errors, distortions, and fabrications that can be detected in her books...." "Alma's natural tendency was always to exaggerate and dramatize events rather than describe them, yet she rarely falsified or distorted them without a definite purpose, except towards the end of her life when old age and years of alcohol abuse took their toll on her memory. A careful scrutiny can in many cases enable us to detect the passages in which she deliberately falsified the truth, to point out the reasons why she did so, but also to believe her when there can be no serious cause for doubt..." "When other witnesses were at hand, and this is the case for the rehearsals for the premiere of the Sixth, her version of facts can often be verified. An objective study of her writings promises to yield more reliable results than ceaselessly doubting what she wrote, all the more because she remains, after all, our main source of information regarding the nine-year period during which she was married to Mahler."
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Pram** Pram: Pram or PRAM may refer to: Places: Pram, Austria, a municipality in the district of Grieskirchen in the Austrian state of Upper Austria Dorf an der Pram, a municipality in the district of Schärding in the Austrian state of Upper Austria Zell an der Pram, a municipality in the district of Schärding in the Austrian state of Upper Austria People: Christen Pram (1756–1821), Norwegian/Danish economist, civil servant, poet, novelist, playwright, diarist and editor Arts and entertainment: Pram (band), a musical group The Pram Factory, an Australian alternative theatre venue in the Melbourne suburb of Carlton Pram, a character in the video game Makai Kingdom: Chronicles of the Sacred Tome Science: Parallel RAM, an abstract computer for designing parallel algorithms Phase-change RAM, a chalcogenide glass type of non-volatile random-access memory Parameter RAM, an area of non-volatile random-access memory used to store system settings on Apple's Macintosh computers PRAM1, or PML-RARA-regulated adapter molecule 1, a protein that in humans is encoded by the PRAM1 gene Transportation: Prams Air (Puerto Rico Air Management Services), an air charter and cargo operator, Miami International Airport, US Pram (boat), a small utility dinghy with a transom bow rather than a pointed bow Optimist (dinghy), with a pram hull Pram (ship), a type of shallow-draught, flat-bottomed ship (large watercraft) Pram (baby), a type of wheeled baby transport Others: Pram suit, a one-piece garment for infants, designed as cold-weather outerwear, and typically enclosing the entire body except for the face
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anaxonic neuron** Anaxonic neuron: An anaxonic neuron is a type of neuron in which there is no axon, or in which the axon cannot be differentiated from the dendrites. Consistent with the etymology of 'anaxonic', there are two types of anaxonic neurons in the human nervous system: the undifferentiated anaxonic neuron, in which the axon cannot be differentiated from the dendrites, and the unipolar brush cell (UBC), which has no axon and only a dendritic arbour. Location: They are found in the brain and retina; in the retina they occur as amacrine cells and retinal horizontal cells. They are also found in invertebrates.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**APOA5** APOA5: Apolipoprotein A-V is a protein that in humans is encoded by the APOA5 gene on chromosome 11. It is significantly expressed in the liver. The protein encoded by this gene is an apolipoprotein and an important determinant of plasma triglyceride levels, a major risk factor for coronary artery disease. It is a component of several lipoprotein fractions including VLDL, HDL, and chylomicrons. It is believed that apoA-V affects lipoprotein metabolism by interacting with LDL-R gene family receptors. Considering its association with lipoprotein levels, APOA5 is implicated in metabolic syndrome. The APOA5 gene also contains one of 27 SNPs associated with increased risk of coronary artery disease. Discovery: The gene for apolipoprotein A5 (APOA5, gene ID 116519, OMIM accession number – 606368) was originally found by comparative sequencing of human and mouse DNA as the last member of the gene cluster of apolipoproteins APOA1/APOC3/APOA4/APOA5, located on human chromosome 11 at position 11q23. The creation of two mouse models (APOA5 transgenic and APOA5 knock-out) confirmed the important role of this gene in plasma triglyceride determination. The transgenic mice had lower and the knock-out mice higher levels of plasma triglycerides, while plasma cholesterol levels remained unchanged in both animal models. A Dutch group simultaneously described the identical gene as an apolipoprotein associated with the early phase of liver regeneration, but failed to recognise its important role in the determination of plasma triglyceride levels. Structure: Gene The APOA5 gene resides on chromosome 11 at the band 11q23 and contains 4 exons and 3 introns. This gene uses alternate polyadenylation sites and is located proximal to the apolipoprotein gene cluster on chromosome 11q23. Structure: Protein This protein belongs to the apolipoprotein A1/A4/E family and contains 2 coiled-coil domains. Overall, APOA5 is predicted to have approximately 60% α-helical content. The APOA5 protein spans a length of 366 amino acid residues, of which 23 form the signal peptide. The molecular mass of the precursor was calculated to be 41 kDa, while the mature APOA5 protein was calculated to be 39 kDa. Tissue distribution: In humans, APOA5 is expressed almost exclusively in the liver tissue; some minor expression has also been detected in the small intestine. Nothing is known about the existence of potential alternative splicing variants of this gene. In comparison with other apolipoproteins, the plasma concentration of APOA5 is very low (less than 1 μg/mL). This suggests that it has more catalytic than structural functions, since there is less than one APOA5 molecule per lipoprotein particle. APOA5 is associated predominantly with TG-rich lipoproteins (chylomicrons and VLDL) and has also been detected on HDL particles. Function: APOA5 mainly functions to influence plasma triglyceride levels. The first suggested mechanism supposes that APOA5 functions as an activator of lipoprotein lipase (which is a key enzyme in triglyceride catabolism) and, through this process, enhances the metabolism of TG-rich particles. The second is the possible effect of APOA5 on the secretion of VLDL particles, since APOA5 reduces hepatic VLDL-particle production and assembly by binding to cellular membranes and lipids.
Finally, the third possibility relates to the acceleration of the hepatic uptake of lipoprotein remnants, and it has been shown that APOA5 binds to different members of the low-density lipoprotein receptor family. In addition to its TG-lowering effect, APOA5 also plays a significant role in modulating HDL maturation and cholesterol metabolism. Increased APOA5 levels were associated with skewed cholesterol distribution from VLDL to large HDL particles. APOA5 mRNA is upregulated during liver regeneration and this suggests that APOA5 serves a function in hepatocyte proliferation. It has also been reported that APOA5 could enhance insulin secretion in beta-cells and that cell-surface midkine could be involved in APOA5 endocytosis. Gene variability: Within the APOA5 gene, a couple of important SNPs with a widely confirmed effect on plasma TG levels as well as rare mutations have been described. In Caucasians, the common variants are inherited mostly in three haplotypes, which are characterised by two SNPs, namely rs662799 (T-1131>C; in almost complete LD with A-3>G, where the minor allele is associated with about 50% lower gene expression) and rs3135506 (Ser19>Trp; C56>G; alters the signal peptide and influences APOA5 secretion into plasma). There are also a further three common variants (A-3>G, IVS+476 G>A and T1259>C) which are not necessary for haplotype characterisation. Gene variability: Population frequencies of common APOA5 alleles exhibit large interethnic differences. For example, about 15% of Caucasians are carriers of the rs662799(C) allele, but the carrier frequency can reach 40% to 50% among Asians. In contrast, the Trp19 allele is very rare in the Asian population (less than 1% of carriers) but is common in Caucasians (about 15% of carriers). Conversely, an important SNP (rs2075291, G553T, Gly185>Cys) with a population frequency of about 5% has been detected among Asians, but it is extremely rare among Caucasians. Sporadic publications refer to some other common polymorphisms, e.g. Val153>Met (rs3135507, G457A) and also suggest significant sex-dependent associations with plasma lipids. Rare variants within the APOA5 gene have been described in a couple of different populations. Among the “common mutations/rare SNPs”, one of the most characterised on a population level is the Ala315>Val exchange. Originally detected in patients with extreme TG levels over 10 mmol/L, it was also found in about 0.7% of the general population (mostly in individuals with normal TG values), which suggests a low penetrance of this variant. More than twenty other rare variants (mutations) have been described within the human APOA5 gene. They cover a wide spectrum that includes premature stop codons, amino acid changes as well as insertions and deletions. These mutations are generally associated with hypertriglyceridaemia, but penetrance is usually not 100%. Individual mutations have been found mostly in one pedigree only. But not all SNPs have a detrimental effect on TG levels. A recent report showed that, in the Sardinian population, the missense mutation Arg282Ser in the APOA5 gene correlates with a decrease in TG levels. The authors believe that this point mutation is a major modulator of TG values in this population. Clinical significance: In humans, plasma triglycerides (triacylglycerols) have long been debated as an important risk factor not only for cardiovascular disease but also for other relevant morbidities, such as cancer, renal disease, suicide, and all-cause mortality.
The APOA5 gene was found by comparative sequencing of ~200 kbp of human and mouse DNA as the last member of the gene cluster of apolipoproteins located on human chromosome 11 at 11q23. Two transgenic mouse models (APOA5 transgenic and APOA5 knockout) confirmed the important role of this gene in determining plasma triglyceride levels. Obesity and metabolic syndrome are both closely related to plasma triglyceride levels and APOA5. Recent meta-analyses suggest that the effect on metabolic syndrome development is more profound for rs662799 in Asian populations and for rs3135506 in Europeans. Moreover, a meta-analysis that focused on rs662799 and the risk of type 2 diabetes mellitus has suggested a significant association in Asian populations, but not in European populations. Clinical significance: As a risk factor Even though the plasma concentration of APOA5 is very low, some studies have focused on the analysis of the potential association of this biochemical parameter with cardiovascular disease (CVD). This relationship remains controversial, as higher plasma levels of APOA5 in individuals with CVD have been found in some, but not all, studies. Clinical significance: Plasma lipids and cardiovascular disease The major effect of the apolipoprotein A5 gene (and its variants) is on plasma triglyceride levels. Minor alleles (C1131 and Trp19) are primarily associated with the elevation of plasma triglyceride levels. The most extensive information available has been drawn from Caucasian populations, particularly in relation to the rs662799 SNP. Here, one minor allele is associated with an approximate 0.25 mmol/L increase in plasma TG levels. A similar effect is associated with the Trp19 allele, even though it has not been confirmed by as many studies. Original studies have further described that the strongest effect of APOA5 polymorphisms on plasma TG levels is observed among Hispanics, with only minor effects detected among Africans. Among Asians, the effect on plasma TG levels is similar to that found among Caucasians. Generally, studies have suggested significant interethnic differences and in some cases sex-dependent associations as well. Sporadic publications have also mentioned a weak but nonetheless significant effect of APOA5 variants on plasma HDL-cholesterol and non-HDL cholesterol levels. Clinical significance: Myocardial infarction A large meta-analysis of 101 studies confirmed an association between the minor APOA5 allele -1131C and coronary heart disease risk. The odds ratio was 1.18 for every C allele. There are far fewer studies on the second common APOA5 polymorphism, Ser19>Trp, even though available studies have detected that its effect on plasma triglycerides is similar to that of T-1131>C. Nevertheless, the minor Trp allele is also associated with increased risk of CVD, and it seems that especially homozygotes and carriers of more minor alleles (both -1131C and 19Trp) are at higher risk of CVD. Clinical significance: Clinical marker A multi-locus genetic risk score study based on a combination of 27 loci, including the APOA5 gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmo Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22).
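To make the idea of a multi-locus genetic risk score concrete, here is a minimal sketch of the usual weighted-sum construction. The SNP list, genotype coding and the rs3135506 weight are illustrative assumptions rather than the scoring scheme of the 27-locus study cited above; only the 1.18 odds ratio per rs662799 C allele comes from the text.

```python
# Illustrative sketch (not the scoring scheme of the cited 27-locus study):
# a weighted genetic risk score is usually the sum, over risk loci, of the
# number of risk alleles carried (0, 1 or 2) times a per-allele weight,
# typically the log odds ratio reported for that locus.
import math

RISK_LOCI = {
    "rs662799":  {"risk_allele": "C", "odds_ratio": 1.18},  # OR from the meta-analysis cited above
    "rs3135506": {"risk_allele": "G", "odds_ratio": 1.20},  # APOA5 Ser19>Trp, weight assumed for illustration
}

def risk_allele_count(genotype: str, risk_allele: str) -> int:
    """Count copies of the risk allele in a genotype string such as 'C/T'."""
    return sum(1 for allele in genotype.split("/") if allele == risk_allele)

def weighted_grs(genotypes: dict) -> float:
    """Sum of (risk-allele count) * log(OR) over the loci defined above."""
    score = 0.0
    for snp, info in RISK_LOCI.items():
        if snp in genotypes:
            score += risk_allele_count(genotypes[snp], info["risk_allele"]) * math.log(info["odds_ratio"])
    return score

print(weighted_grs({"rs662799": "C/T", "rs3135506": "G/G"}))  # ~0.53 with these assumed weights
```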
Clinical significance: BMI, metabolic syndrome Obesity and metabolic syndrome are both closely related to plasma triglyceride levels. Therefore, the focus on an association between APOA5 and BMI or metabolic syndrome is understandable. Available studies show that minor APOA5 alleles could be associated with an enhanced risk of obesity or metabolic syndrome development. However, genome wide studies have failed to prove that APOA5 is a gene associated with BMI values and/or obesity, so the effect could be far from clinically significant or at least significantly context-dependent. Clinical significance: Nutri-, acti- and pharmacogenetic associations Several studies have focused on changes of anthropometrical (body weight, BMI, WHR,…) or biochemical parameters (mostly plasma lipid levels) as a result of the interactions between common APOA5 variants and dietary habits (polyunsaturated fatty acid intake, n-3 and n-6 fatty acid intake, total fat and total energy intake, alcohol intake), dietary (lowering the energy intake) and/or physical activity interventions or dyslipidaemic (using statins or fenofibrate) treatment. Due to the high heterogeneity of the examined populations, differences in protocol and/or interventions used, the studies are difficult to directly compare and draw definitive conclusions. However, with caution, it could be concluded that carriers of the minor C-1131, Trp19, or T553 alleles are in some cases less prone to the positive effects of environmental and/or pharmacological interventions. Some papers suggest the importance of the interactions between APOA5 and other genes, especially with common APOE (OMIM acc. No. 107741) three allelic (E2, E3, and E4) polymorphism, in the modulation of plasma lipids. In these cases, the interaction between minor alleles of both genes seems to be of importance. In the general population, APOE4 seems to have the potential to diminish the effect of minor APOA5 rs662799 and rs3135506 alleles, especially in females. Interaction between APOE and APOA5 Ser19˃Trp has been suggested to play some role in the development of type III hyperlipidaemia. Further studies, in which interaction with APOA5 has been described, have included, for example, variants within FTO, lipoprotein lipase, USF-1 and FEN-1. They have also focused not only on plasma lipids, but on BMI values or hypertension as well. Clinical significance: Other roles Some other possible roles of APOA5 variants have been discussed, but generally these reports comprise only one or two papers – and first original papers with positive findings are usually not confirmed in second publications. These papers focus on the possible effect of different APOA5 variants on maternal height, longer foetal birth length, putative associations with plasma levels of C-reactive protein, LDL particle size and haemostatic markers. Despite the very low plasma concentration, variants within apolipoprotein A5 are potent determinants of plasma triglyceride levels. Minor alleles of three SNPs (rs662799, rs3135506, rs3135507) are associated with the higher risk of cardiovascular disease. Interactive pathway map: Click on genes, proteins and metabolites below to link to respective articles.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Arbitration inter-frame spacing** Arbitration inter-frame spacing: Arbitration inter-frame spacing (AIFS), in wireless LAN communications, is a method of prioritizing one Access Category (AC) over the other, such as giving voice or video priority over email. AIFS functions by shortening or expanding the period a wireless node has to wait before it is allowed to transmit its next frame. A shorter AIFS period means a message has a higher probability of being transmitted with low latency, which is particularly important for delay-critical data such as voice or streaming video. Arbitration inter-frame spacing: AIFS is a time interval between frames being transmitted under the IEEE 802.11e EDCA MAC. It depends on the Access Category and generally depends on the AIFSN, or AIFS-number. AIFS is defined by the formula AIFSN[AC] * ST + SIFS, where the AIFSN depends on the Access Category. Slot time ST (also denoted by σ ) is dependent on the physical layer. Short Interframe Space (SIFS) is the time between a DATA and ACK frame. Arbitration inter-frame spacing: AIFSN[AC] will be set by the AP in the EDCA Parameter set in beacon and probe response frames. If it is not set then the STA has to use the default values. The IEEE 802.11e EDCA MAC has been adopted as part of the IEEE 802.11p standard for Wireless Access in Vehicular Environments (WAVE).
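As a rough illustration of the AIFS formula above, the sketch below computes the per-AC waiting times. The slot time and SIFS values assume an OFDM physical layer (9 µs and 16 µs, as in 802.11a/g), and the AIFSN values are commonly quoted EDCA defaults rather than figures taken from this article; in practice the AP advertises AIFSN in the EDCA Parameter Set.

```python
# Minimal sketch of the AIFS formula: AIFS[AC] = AIFSN[AC] * SlotTime + SIFS.
# Slot time and SIFS below assume an OFDM PHY (e.g. 802.11a/g: 9 us and 16 us);
# the AIFSN values are commonly used EDCA defaults, but the AP normally sets
# them in the EDCA Parameter Set of beacon/probe response frames.
SLOT_TIME_US = 9
SIFS_US = 16

DEFAULT_AIFSN = {
    "AC_VO": 2,  # voice
    "AC_VI": 2,  # video
    "AC_BE": 3,  # best effort
    "AC_BK": 7,  # background
}

def aifs_us(access_category: str, aifsn: dict = DEFAULT_AIFSN) -> int:
    """Return the arbitration inter-frame space in microseconds for one AC."""
    return aifsn[access_category] * SLOT_TIME_US + SIFS_US

for ac in DEFAULT_AIFSN:
    print(ac, aifs_us(ac), "us")
# AC_VO/AC_VI wait 34 us, AC_BE 43 us, AC_BK 79 us: higher-priority traffic
# gets earlier access to the medium.
```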
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dousing** Dousing: Dousing is the practice of making something or someone wet by throwing liquid over them, e.g., by pouring water, generally cold, over oneself. A related practice is ice swimming. Some consider cold water dousing to be a form of asceticism. Cold water dousing: Cold water dousing is used to "shock" the body into a kind of fever. The body's reaction is similar to the mammalian diving reflex or possibly temperature biofeedback. Several meditative and awareness techniques seem to share similar effects with elevated temperature, such as Tummo. Compare cold water dousing with ice swimming. The effects of dousing are usually more intense and longer-lasting than just a cold shower. Ending a shower with cold water is an old naturopathic tradition. There are those who believe that this fever is helpful in killing harmful bacteria and leaving the hardier beneficial bacteria in the body. National traditions: Burma Thingyan (Water Festival) was celebrated from 13 to 17 April in 2001 and the rituals included dousing. Japan Some Japanese ascetic practices, as with Shinto misogi practices, include dousing. This is seen, for example, with some Aikido martialists. Morihei Ueshiba was known to practice cold water misogi. Kamakura, Japan has a temple whose Nichiren Buddhist priests in training practice a ritual of 100 days of fasting, meditation and walking which ends with stripping to loincloths and dousing with ice cold water. Poland Śmigus-dyngus (or Dyngus Day) is a festival held on Easter Monday, traditionally celebrated by boys throwing water over girls. National traditions: Russia Jumping in freezing lakes, ice swimming, is an old Russian tradition that goes hand in hand with going to banya, a sauna-like bath. Some douse with a bucket of cold water. The bucket is filled with water and left out overnight. They then walk with it outside and spill it over themselves. Preferences include being barefoot outside on the earth, and performing dousing at certain times and more frequently when ill. National traditions: For some, dousing accompanies fasting (absence of all food and water) as an alternate means for the body to obtain water. Some follow cold water dousing with air-drying outside or in wintertime taking a "snow bath" by rubbing handfuls of snow on the body or lying/moving in it. Porfiry Ivanov's health system includes cold water dousing. Cold water dousing is practiced by some Systema martialists. Thailand Songkran is a popular festival in April which includes dousing using water. It is also celebrated by the Dai people in Yunnan Province in China, and in Laos, Cambodia and Myanmar during the traditional New Year.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Constraint graph (layout)** Constraint graph (layout): In some tasks of integrated circuit layout design, it is necessary to optimize the placement of non-overlapping objects in the plane. In general this problem is extremely hard, and to tackle it with computer algorithms, certain assumptions are made about admissible placements and about operations allowed in placement modifications. Constraint graphs capture the restrictions on the relative movements of the objects placed in the plane. These graphs, while sharing a common idea, have different definitions depending on the particular design task or its model. Floorplanning: In floorplanning, the model of a floorplan of an integrated circuit is a set of isothetic rectangles called "blocks" within a larger rectangle called "boundary" (e.g., "chip boundary", "cell boundary"). Floorplanning: A possible definition of constraint graphs is as follows. The constraint graph for a given floorplan is a directed graph whose vertex set is the set of floorplan blocks; there is an edge from block b1 to b2 (called a horizontal constraint) if b1 is completely to the left of b2, and an edge from block b1 to b2 (called a vertical constraint) if b1 is completely below b2. Floorplanning: If only horizontal constraints are considered, one obtains the horizontal constraint graph. If only vertical constraints are considered, one obtains the vertical constraint graph. Floorplanning: Under this definition, the constraint graph can have as many as O(n²) edges, where n is the number of blocks. Therefore, other, less dense constraint graphs are considered. The horizontal visibility graph is a horizontal constraint graph in which the horizontal constraint between two blocks exists only if there is a horizontal line segment which connects the two blocks and does not intersect any other blocks. In other words, one block is a potential "immediate obstacle" for moving another one horizontally. The vertical visibility graph is defined in a similar way. Channel routing: Channel routing is the problem of routing a set of nets N which have fixed terminals on two opposite sides of a rectangle ("channel"). In this context, the horizontal constraint graph is the undirected graph with vertex set N in which two nets are connected by an edge if and only if their horizontal routing segments must overlap. In the given example, only nets 5 and 6 do not have a horizontal constraint between them. The vertical constraint graph is the directed graph with vertex set N in which two nets are connected by an edge if and only if there are two pins from different nets on the same vertical line; the edge is directed from the net whose pin is on the upper edge of the channel. This direction means that this net must be routed on a horizontal track above the horizontal tracks of the second net. In the given example, only nets 1 and 3 have a vertical constraint.
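A minimal sketch of the O(n²) floorplanning constraint-graph construction described above; the block names and coordinates are invented for illustration.

```python
# Sketch of the O(n^2) constraint-graph construction described above.
# Each block is an axis-aligned rectangle (x1, y1, x2, y2); an edge b1 -> b2 is
# added to the horizontal graph if b1 lies completely to the left of b2, and to
# the vertical graph if b1 lies completely below b2.
from typing import Dict, List, Tuple

Rect = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def constraint_graphs(blocks: Dict[str, Rect]):
    horizontal: List[Tuple[str, str]] = []
    vertical: List[Tuple[str, str]] = []
    for a, (ax1, ay1, ax2, ay2) in blocks.items():
        for b, (bx1, by1, bx2, by2) in blocks.items():
            if a == b:
                continue
            if ax2 <= bx1:          # a is completely to the left of b
                horizontal.append((a, b))
            if ay2 <= by1:          # a is completely below b
                vertical.append((a, b))
    return horizontal, vertical

# Hypothetical three-block floorplan.
h, v = constraint_graphs({"A": (0, 0, 2, 2), "B": (3, 0, 5, 2), "C": (0, 3, 5, 4)})
print(h)  # [('A', 'B')]
print(v)  # [('A', 'C'), ('B', 'C')]
```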
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**CAT RNA-binding domain** CAT RNA-binding domain: In molecular biology, the CAT RNA-binding domain (Co-AntiTerminator RNA-binding domain) is a protein domain found at the amino terminus of a family of transcriptional antiterminator proteins. This domain forms a dimer in the crystal structure. Transcriptional antiterminators of the BglG/SacY family are regulatory proteins that mediate the induction of sugar metabolizing operons in Gram-positive and Gram-negative bacteria. Upon activation, these proteins bind to specific targets in nascent mRNAs, thereby preventing abortive dissociation of the RNA polymerase from the DNA template.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**The Science of Survival** The Science of Survival: The Science Museum is a major museum on Exhibition Road in South Kensington, London. It was founded in 1857 and is one of the city's major tourist attractions, attracting 3.3 million visitors annually in 2019.Like other publicly funded national museums in the United Kingdom, the Science Museum does not charge visitors for admission, although visitors are requested to make a donation if they are able. Temporary exhibitions may incur an admission fee. The Science of Survival: It is one of the five museums in the Science Museum Group. Founding and history: The museum was founded in 1857 under Bennet Woodcroft from the collection of the Royal Society of Arts and surplus items from the Great Exhibition as part of the South Kensington Museum, together with what is now the Victoria and Albert Museum. It included a collection of machinery which became the Museum of Patents in 1858, and the Patent Office Museum in 1863. This collection contained many of the most famous exhibits of what is now the Science Museum. Founding and history: In 1883, the contents of the Patent Office Museum were transferred to the South Kensington Museum. In 1885, the Science Collections were renamed the Science Museum and in 1893 a separate director was appointed. The Art Collections were renamed the Art Museum, which eventually became the Victoria and Albert Museum. Founding and history: When Queen Victoria laid the foundation stone for the new building for the Art Museum, she stipulated that the museum be renamed after herself and her late husband. This was initially applied to the whole museum, but when that new building finally opened ten years later, the title was confined to the Art Collections and the Science Collections had to be divorced from it. On 26 June 1909 the Science Museum, as an independent entity, came into existence.The Science Museum's present quarters, designed by Sir Richard Allison, were opened to the public in stages over the period 1919–28. This building was known as the East Block, construction of which began in 1913 and was temporarily halted by World War I. As the name suggests it was intended to be the first building of a much larger project, which was never realized. However, the museum buildings were expanded over the following years; a pioneering Children's Gallery with interactive exhibits opened in 1931, the Centre Block was completed in 1961–3, the infill of the East Block and the construction of the Lower & Upper Wellcome Galleries in 1980, and the construction of the Wellcome Wing in 2000 result in the museum now extending to Queen's Gate. Founding and history: Centennial volume: Science for the Nation The leading academic publisher, Palgrave Macmillan, published the official centenary history of the Science Museum on 14 April 2010. The first complete history of the Science Museum since 1957, Science for the Nation: Perspectives on the History of the Science Museum is a series of individual views by Science Museum staff and external academic historians of different aspects of the Science Museum's history. While it is not a chronological history in the conventional sense, the first five chapters cover the history of the museum from the Brompton Boilers in the 1860s to the opening of the Wellcome Wing in 2000. The remaining eight chapters cover a variety of themes concerning the museum's development. 
Collections: Objects not on display at one of the Science Museum Group's five museums will generally be stored at the National Collections Centre near Swindon. Library and archive material is also stored at the Library and Archives at the National Collections Centre. Access to the collections, library and archives is arranged by appointment through the Dana Research Centre and Library located in Queens Gate, South Kensington. Over 380,000 of the objects in the Science Museum Group's collections are available to view online at the Science Museum Group's Search Our Collection web page. Galleries: The Science Museum consists of two buildings – the main building and the Wellcome Wing. Visitors enter the main building from Exhibition Road, while the Wellcome Wing is accessed by walking through the Energy Hall, Exploring Space and then the Making the Modern World galleries (see below) at ground floor level. Main building – Level 0 The Energy Hall The Energy Hall is the first area that most visitors see as they enter the building. On the ground floor, the gallery contains a variety of steam engines, including the oldest surviving James Watt beam engine, which together tell the story of the British industrial revolution. Also on display is a recreation of James Watt's garret workshop from his home, Heathfield Hall, using over 8,300 objects removed from the room, which was sealed after his 1819 death, when the hall was demolished in 1927. Exploring Space Exploring Space is a historical gallery, filled with rockets and exhibits that tell the story of human space exploration and the benefits that space exploration has brought us (particularly in the world of telecommunications). Making the Modern World Making the Modern World displays some of the museum's most remarkable objects, including Puffing Billy (the oldest surviving steam locomotive), Crick's double helix, and the command module from the Apollo 10 mission, which are displayed along a timeline chronicling man's technological achievements. Galleries: A V-2 rocket, designed by German rocket scientist Wernher von Braun, is displayed in this gallery. Doug Millard, space historian and curator of space technology at the museum, states: "We got to the Moon using V-2 technology but this was technology that was developed with massive resources, including some particularly grim ones. The V-2 programme was hugely expensive in terms of lives, with the Nazis using slave labour to manufacture these rockets".Stephenson's Rocket used to be displayed in this gallery. After a short UK tour, since 2019 Rocket is on permanent display at the Railway Museum in York, in the Art Gallery. Galleries: Main Building – Level minus 1 The Secret Life of the Home The Secret Life of the Home shows the development of household appliances mostly from the late 19th and early 20th century, although some are earlier. Galleries: Main Building – Level 1 Medicine: The Wellcome Galleries The Medicine: The Wellcome Galleries is a five-gallery medical exhibition which spans ancient history to modern times with over 3000 exhibits and specially commissioned artworks. Many of the objects on display come from the Wellcome Collection started by Henry Wellcome. One of the commissioned artworks is a large bronze sculpture of Rick Genest titled Self-Conscious Gene by Marc Quinn. The galleries occupy the museum's entire first floor and opened on 16 November 2019. 
Galleries: Main Building – Level 2 The Clockmakers Museum The Clockmakers Museum is the world's oldest clock and watch museum, which was originally assembled by the Worshipful Company of Clockmakers in London's Guildhall. Galleries: Science City 1550 – 1800: The Linbury Gallery The Science City 1550 – 1800: The Linbury Gallery shows how London grew to be a global hub for trade, commerce and scientific enquiry. Mathematics: The Winton Gallery The Mathematics: The Winton Gallery examines the role that mathematicians have had in building our modern world. In the landing area to access the gallery (stair C) is a working example of Charles Babbage's Difference Engine No. 2. This was built by the Science Museum and its main part was completed in 1991, to celebrate 200 years since Babbage's birth. Galleries: Information Age The Information Age gallery has exhibits covering the development of communications and computing over the last two centuries. It explores the six networks that have transformed global communications: The Cable, The Telephone Exchange, Broadcast, The Constellation, The Cell and The Web. It was opened on 24 October 2014 by the Queen, Elizabeth II, who sent her first tweet from here. Galleries: Main Building – Level 3 Wonderlab: The Equinor Gallery One of the most popular galleries in the museum is the interactive Wonderlab: The Equinor Gallery, formerly called Launch Pad. The gallery is staffed by Explainers who demonstrate how exhibits work, conduct live experiments and perform shows for schools and the visiting public. Flight The Flight gallery charts the development of flight in the 20th century. Contained in the gallery are several full-sized aeroplanes and helicopters, including Alcock and Brown's transatlantic Vickers Vimy (1919), Spitfire and Hurricane fighters, as well as numerous aero-engines and a cross-section of a Boeing 747. It opened in 1963 and was refurbished in the 1990s. Galleries: Wellcome Wing Tomorrow's World (Level 0) The Tomorrow's World gallery hosts topical science stories and free exhibitions including: Our Future Planet: Can Carbon Capture help us fight climate change? Mission to Mercury: BepiColombo. Driverless: Who's in control? (exhibition ended January 2021) IMAX: The Ronson Theatre (Entrance from Level 0) The IMAX: The Ronson Theatre is an IMAX cinema which shows educational films (some in 3-D), as well as blockbusters and live events. It features a screen measuring 24.3 by 16.8 metres, with both a dual IMAX with Laser projection system and a traditional IMAX 15/70mm film projector, and an IMAX 12-channel sound system. Galleries: Who Am I? (Level 1) Visitors to the Who Am I? gallery can explore the science of who they are through intriguing objects, provocative artworks and hands-on exhibits. Atmosphere Gallery (Level 2) The Atmosphere gallery explores the science of climate. Engineer your Future (Level 3) The Engineer your Future gallery explores whether you have the problem-solving and team-working skills to succeed in a career in engineering. Temporary and touring exhibitions: The museum has some dedicated spaces for temporary exhibitions (both free and paid-for) and displays, on level −1 (Basement Gallery), level 0 (inside the Exploring Space Gallery and Tomorrow's World), level 1 (Special Exhibition Gallery 1) and level 2 (Special Exhibition Gallery 2 and The Studio). Most of these travel to other Science Museum Group sites, as well as nationally and internationally.
Temporary and touring exhibitions: Past exhibitions have included: Cosmonauts: Birth of the Space Age (ended 2016). Wounded – Conflict, Casualties and Care (2016–2018) – timed to commemorate the centenary of the Battle of the Somme; explored the development of medical treatment for wounded soldiers during the First World War. Robots (ended 2017). The Sun: Living with our Star (ended 2019). The Last Tsar: Blood and Revolution (ended 2019). Top Secret: From Cyphers to Cyber Security (ended 2020, closed at the Science and Industry Museum on 31 August 2021). Art of Innovation – from Enlightenment to Dark Matter (2019–2020) – explored the interaction between science, the arts and society; included artworks by Boccioni, Constable, Hepworth, Hockney, Lowry and Turner. Science Fiction: Voyage to the Edge of Imagination (2022-2023). Codebreaker, on the life of Alan Turing. Unlocking Lovelock, which explored the archive of James Lovelock. Temporary and touring exhibitions: The Science Box contemporary science series toured various venues in the UK and Europe in the 1990s, and from 1995 The Science of Sport appeared in various incarnations and venues around the world. In 2005 The Science Museum teamed up with Fleming Media to set up The Science of... to develop and tour exhibitions including The Science of Aliens, The Science of Spying and The Science of Survival. Temporary and touring exhibitions: In 2008, The Science of Survival exhibition opened to the public and allowed visitors to explore what the world might be like in 2050 and how humankind will meet the challenges of climate change and energy shortages. In 2014 the museum launched the family science Energy Show, which toured the country. The same year it began a new programme of touring exhibitions which opened with Collider: Step inside the world's greatest experiment to much critical acclaim. The exhibition takes visitors behind the scenes at CERN and explores the science and engineering behind the discovery of the Higgs Boson. The exhibition toured until early 2017. Media Space exhibitions also go on tour, notably Only in England which displays works by the photographers Tony Ray-Jones and Martin Parr. Events: 'Astronights' for Children The Science Museum organises "Astronights", an "all-night extravaganza with a scientific twist". Up to 380 children aged between 8 and 11, accompanied by adults, are invited to spend an evening performing fun "science based" activities and then spend the night sleeping in the museum galleries amongst the exhibits. In the morning, they are woken for breakfast and more science, watching a show before the end of the event. Events: 'Lates' for Adults On the evening of the last Wednesday of every month (except December) the museum organises an adults-only evening with up to 30 events, from lectures to silent discos. Previous Lates have seen conversations with the actress and activist Lily Cole and Biorevolutions with the Francis Crick Institute, which attracted around 7,000 people, mostly under the age of 35. Events: Cancellation of James D. Watson talk In October 2007, the Science Museum cancelled a talk by the co-discoverer of the structure of DNA, James D. Watson, because he claimed that IQ test results showed black people to have lower intelligence than white people. The decision was criticised by some scientists, including Richard Dawkins, but supported by other scientists, including Steven Rose. Former galleries: The museum has undergone many changes in its history, with older galleries being replaced by new ones.
The Children's Gallery 1931–1995. Located in the basement, it was replaced by the under-fives area called The Garden. Agriculture 1951–2017. Located on the first floor, it looked at the history and future of farming in the 20th century. It featured model dioramas and object displays. It was replaced by Medicine: The Wellcome Galleries in 2019. Shipping 1963–2012. Located on the second floor, its contents were 3D scanned and made available online. It was replaced by Information Age. Land Transport 1967–1996. Located on the ground floor, it displayed vehicles and objects associated with transport on land, including rail and road. It was replaced by the Making the Modern World gallery in 2000. Glimpses of Medical History 1981–2015. Located on the fourth floor, it contained reconstructions and dioramas of the history of practised medicine. It was not replaced, but subsumed into Medicine: The Wellcome Galleries which opened on the museum's first floor in November 2019. Science and the Art of Medicine 1981–2015. Located on the fifth floor, it featured exhibits of medical instruments and practices from ancient days and from many countries. It was not replaced, but subsumed into Medicine: The Wellcome Galleries which opened on the museum's first floor in November 2019. Launchpad 1986–2015. Originally opening on the ground floor, it moved in 1989 to the first floor, replacing Textiles, and then in 2000 to the basement of the newly built Wellcome Wing. In 2007, it moved to its final location on the third floor, replacing the George III gallery. It was replaced by Wonderlab in 2016. Challenge of Materials 1997–2019. Located on the first floor, it explored the diversity and properties of materials. It was designed by WilkinsonEyre and featured the exhibit Materials House by Thomas Heatherwick. Cosmos and Culture 2009–2017. Located on the first floor, it featured astronomical objects showing the study of the night sky. It was replaced by Medicine: The Wellcome Galleries in 2019. Storage, library and archives: Blythe House, 1979–2019, the museum's former storage facility in West Kensington, while not a gallery, offered tours of the collections housed there. Objects formerly housed there are being transferred to the National Collections Centre, at the Science Museum Wroughton, in Wiltshire. The Science Museum has a dedicated library, and until the 1960s was Britain's National Library for Science, Medicine and Technology. It holds runs of periodicals, early books and manuscripts, and is used by scholars worldwide. It was, for a number of years, run in conjunction with the library of Imperial College, but in 2007 the library was divided over two sites. Histories of science and biographies of scientists were kept at the Imperial College Library until February 2014 when the arrangement was terminated, the shelves were cleared and the books and journals shipped out, joining the rest of the collection, which includes original scientific works and archives at the National Collections Centre. Storage, library and archives: The Dana Research Centre and Library, previously an event space and café, reopened in its current form in 2015. Open to researchers and members of the public, it allows free access to almost 7,000 volumes, which can be consulted on site. Sponsorship: The Science Museum has been sponsored by major organisations including Shell, BP, Samsung and GlaxoSmithKline. Some of these sponsorships have been controversial. The museum declined to give details of how much it receives from oil and gas sponsors.
Equinor is also the title sponsor of "Wonderlab: The Equinor Gallery", an exhibition for children, while BP is the funding partner of the museum's STEM Training Academy. Equinor's sponsorship of the Wonderlab exhibit was on the basis that the Science Museum would not make any statement to damage the oil firm's reputation.Shell has influenced how the museum presents climate change in its programme sponsored by the oil company. The museum has signed a gagging clause in its agreement with Shell not to "make any statement or issue any publicity or otherwise be involved in any conduct or matter that may reasonably be foreseen as discrediting or damaging the goodwill or reputation" of Shell.The museum signed a sponsorship contract with the Norwegian oil and gas company Equinor which contained a gagging clause, stating the museum would not say anything that could damage the fossil fuel company's reputation. Sponsorship: Reactions to sponsorship by fossil fuel companies The museum's director, Ian Blatchford, defended the museum's sponsorship policy, saying: "Even if the Science Museum were lavishly publicly funded I would still want to have sponsorship from the oil companies."Scientists for Global Responsibility called the museum's move "staggeringly out-of-step and irresponsible". Some presenters, including George Monbiot, pulled out of climate talks on finding they were sponsored by BP and the Norwegian oil company Equinor. Bob Ward of the Grantham Research Institute on Climate Change and the Environment said the "carbon capture exhibition is not 'greenwash'".There have been protests against the sponsorship; in May 2021, a group calling themselves 'Scientists for XR' (Extinction Rebellion) locked themselves to a mechanical tree inside the museum. The UK Student Climate Network carried out an overnight occupation in June 2021, and were threatened with arrest. In August 2021, members of Extinction Rebellion held a protest inside and outside the museum with a 12 ft (3.7 m) pink dodo.In 2021, Chris Rapley, a climate scientist, resigned from the museum's advisory board because of oil and gas company sponsorship.In 2021, more than 40 senior academics and scientists said they would not work with the Science Museum due to its financial relationships with the fossil fuel industry.In 2022, more than 400 teachers signed an open letter to the museum promising to boycott it following sponsorship of the museum's Energy Revolution exhibition by the coal mining company Adani. Directors of the Science Museum: The directors of the South Kensington Museum were: Henry Cole CB (1857–1873) Sir Philip Cunliffe-Owen KCB KCMG CIE (1873–1893)The directors of the Science Museum have been: Major-General Edward R. Festing CB FRS (1893–1904) William I. Last (1904–1911) Sir Francis Grant Ogilvie CB (1911–1920) Colonel Sir Henry Lyons FRS (1920–1933) Colonel E. E. B. Mackintosh DSO (1933–1945) Herman Shaw (1945–1950) F. 
Sherwood Taylor (1950–1956) Sir Terence Morrison-Scott DSc FMA (1956–1960) Sir David Follett FMA (1960–1973) Dame Margaret Weston DBE FMA (1973–1986) Neil Cossons OBE FSA FMA (1986–2000) Lindsay Sharp (2000–2002)The following have been head/director of the Science Museum in London, not including its satellite museums: Jon Tucker (2002–2007, Head) Chris Rapley CBE (2007–2010)The following have been directors of the National Museum of Science and Industry, (since April 2012 renamed the Science Museum Group) which oversees the Science Museum and other related museums, from 2002: Lindsay Sharp (2002–2005) Jon Tucker (2005–06, Acting Director) Martin Earwicker FREng (2006–2009) Molly Jackson (2009) Andrew Scott CBE (2009–10) Ian Blatchford (2010–)
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biochemistry of body odor** Biochemistry of body odor: The biochemistry of body odor pertains to the chemical compounds in the body responsible for body odor and their kinetics. Causes: Body odor encompasses axillary (underarm) odor and foot odor. It is caused by a combination of sweat gland secretions and normal skin microflora. In addition, androstane steroids and the ABCC11 transporter are essential for most axillary odor. Body odor is a complex phenomenon, with numerous compounds and catalysts involved in its genesis. Secretions from sweat glands are initially odorless, but preodoriferous compounds or malodor precursors in the secretions are transformed by skin surface bacteria into volatile odorous compounds that are responsible for body malodor. Water and nutrients secreted by sweat glands also contribute to body odor by creating an ideal environment for supporting the growth of skin surface bacteria. Types: There are three types of sweat glands: eccrine, apocrine, and apoeccrine. Apocrine glands are primarily responsible for body malodor and, along with apoeccrine glands, are mostly expressed in the axillary (underarm) regions, whereas eccrine glands are distributed throughout virtually all of the rest of the skin in the body, although they are also particularly expressed in the axillary regions, and contribute to malodor to a relatively minor extent. Sebaceous glands, another type of secretory gland, are not sweat glands but instead secrete sebum (an oily substance), and may also contribute to body odor to some degree.The main odorous compounds that contribute to axillary odor include: Unsaturated or hydroxylated branched fatty acids, with the key ones being (E)-3-methyl-2-hexenoic acid (3M2H) and 3-hydroxy-3-methylhexanoic acid (HMHA) Sulfanylalkanols, particularly 3-methyl-3-sulfanylhexan-1-ol (3M3SH) Odoriferous androstane steroids, namely the pheromones androstenone (5α-androst-16-en-3-one) and androstenol (5α-androst-16-en-3α-ol)These malodorous compounds are formed from non-odoriferous precursors that are secreted from apocrine glands and converted by various enzymes expressed in skin surface bacteria. The specific skin surface bacteria responsible are mainly Staphylococcus and Corynebacterium species.The androstane steroids dehydroepiandrosterone sulfate (DHEA-S) and androsterone sulfate have been detected in an extract of axillary hairs together with high concentrations of cholesterol. Apocrine sweat contains relatively high amounts of androgens, for instance dehydroepiandrosterone (DHEA), androsterone, and testosterone, and the androgen receptor (AR), the biological target of androgens, is strongly expressed in the secretory cells of apocrine glands. In addition, 5α-reductase type I, an enzyme which converts testosterone into the more potent androgen dihydrotestosterone (DHT), has been found to be highly expressed in the apocrine glands of adolescents, and DHT has been found to specifically contribute to malodor as well. Starting at puberty, males have higher levels of androgens than do females and produce comparatively more axillary malodor. As such, it has been proposed that the higher axillary malodor seen in males is due to greater relative stimulation of axillary apocrine sweat glands by androgens. Genetics: ABCC11 is a gene encoding an apical ATP-driven efflux transporter that has been found to transport a variety of lipophilic anions including cyclic nucleotides, estradiol glucuronide, steroid sulfates such as DHEA-S, and monoanionic bile acids. 
It is expressed and localized in apocrine glands, including in the axilla, the ceruminous glands in the auditory canal, and in the mammary gland. A single-nucleotide polymorphism (SNP) 538G→A in ABCC11 that leads to a G180R substitution in the encoded protein has been found to result in loss-of-function via affecting N-linked glycosylation and in turn causing proteasomal degradation of the protein. This polymorphism has been found to be responsible for the dry and white earwax phenotype, and is considered to be unique as it has been described as the only human SNP that has been found to determine a visible genetic trait. In addition to earwax phenotype, the ABCC11 genotype has been found to be associated with colostrum secretion from the breasts as well as normal axillary odor and osmidrosis (excessive axillary malodor).A functional ABCC11 protein has been found to be essential for the presence of the characteristic strong axillary odor, with the 538G→A SNP leading to a loss of secretion of axillary malodorous precursors and a nearly complete loss of axillary odor in those who are homozygous for the polymorphism. Specifically, the secretion of the amino-acid conjugates 3M2H-Gln, HMHA-Gln, and Cys-Gly-(S) 3M3SH, which are precursors of key axillary malodorous compounds including the unsaturated or hydroxylated branched-chain fatty acids 3M2H and HMHA and the sulfanylalkanol 3M3SH, has been found to be abolished in homozygotic carriers of the SNP, and the odoriferous androstane steroids androstenone and androstenol and their precursors DHEA and DHEA-S have been found to be significantly reduced as well. Patients with axillary osmidrosis (538G/G or 538G/A genotype) were found to have significantly more numerous and relatively large axillary apocrine glands compared to controls with the A/A genotype. Fatty acids: In contrast to the aforementioned odoriferous compounds, the levels of long straight-chain fatty acids such as hexadecanoic acid, octadecanoic acid, and linolic acid and short straight-chain fatty acids such as butyric acid, hexanoic acid, and octanoic acid in axillary sweat have not been found to be affected by the ABCC11 genotype, which suggests that their secretion is independent of ABCC11. These straight-chain fatty acids are odoriferous, but differently and to a much lesser extent compared to branched-chain fatty acids. In accordance, it has been said that it is very likely that these aliphatic straight-chain fatty acids are responsible for the faint sour acidic axillary odor that has previously been observed in most Japanese individuals. In addition to the secretion of straight-chain fatty acids, axillary microflora did not appear to differ between homozygous carriers of the 538G→A SNP and non-carriers.The ABCC11 transporter appears to be involved both in the transport of androstane steroids into the secretory cells of apocrine glands and in the secretion of preodoriferous compounds from axillary apocrine glands. Specific steroids that ABCC11 has been found to transport include steroid sulfates like DHEA-S and estrone sulfate and steroid glucuronides like estradiol glucuronide. In accordance with its transport of compounds involved in axillary odor, ABCC11 alleles are strongly associated with axillary odor. Asians have little or faint axillary odor, whereas Caucasians and Africans have strong axillary odor, and this has been found to be due to genetic differences in the ABCC11 gene.
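As a small illustration of the genotype-phenotype relationship described above, the sketch below maps ABCC11 538G→A genotypes to the earwax/odor phenotype and, under a Hardy-Weinberg assumption, estimates the fraction of low-odor A/A homozygotes for two illustrative allele frequencies. The frequencies are rough assumed figures, not values given in this article.

```python
# Sketch of the genotype-phenotype relationship described above for the
# ABCC11 538G->A SNP: one functional (G) allele is enough for wet earwax and
# strong axillary odor, while A/A homozygotes have dry earwax and little odor.
# The allele frequencies are rough illustrative figures (the text only notes
# large interethnic differences), and Hardy-Weinberg equilibrium is assumed.

def phenotype(genotype: str) -> str:
    return "dry earwax / faint odor" if genotype == "A/A" else "wet earwax / strong odor"

def fraction_dry_earwax(freq_a: float) -> float:
    """Expected fraction of A/A homozygotes given the 538A allele frequency."""
    return freq_a ** 2

for population, freq_a in [("illustrative East Asian", 0.9), ("illustrative European", 0.2)]:
    print(population, f"~{fraction_dry_earwax(freq_a):.0%} dry-earwax / low-odor phenotype")

print(phenotype("G/A"))  # wet earwax / strong odor
```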
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Neber rearrangement** Neber rearrangement: The Neber rearrangement is an organic reaction in which a ketoxime is converted into an alpha-aminoketone via a rearrangement reaction. The oxime is first converted to an O-sulfonate, for example a tosylate by reaction with tosyl chloride. Added base forms a carbanion which displaces the tosylate group in a nucleophilic displacement to an azirine and added water subsequently hydrolyses it to the aminoketone. The Beckmann rearrangement is a side reaction.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bistability** Bistability: In a dynamical system, bistability means the system has two stable equilibrium states. A bistable structure can be resting in either of two states. An example of a mechanical device which is bistable is a light switch. The switch lever is designed to rest in the "on" or "off" position, but not between the two. Bistable behavior can occur in mechanical linkages, electronic circuits, nonlinear optical systems, chemical reactions, and physiological and biological systems. Bistability: In a conservative force field, bistability stems from the fact that the potential energy has two local minima, which are the stable equilibrium points. These rest states need not have equal potential energy. By mathematical arguments, a local maximum, an unstable equilibrium point, must lie between the two minima. At rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy. The maximum can be visualized as a barrier between them. Bistability: A system can transition from one state of minimal energy to the other if it is given enough activation energy to penetrate the barrier (compare activation energy and Arrhenius equation for the chemical case). After the barrier has been reached, assuming the system has damping, it will relax into the other minimum state in a time called the relaxation time. Bistability: Bistability is widely used in digital electronics devices to store binary data. It is the essential characteristic of the flip-flop, a circuit which is a fundamental building block of computers and some types of semiconductor memory. A bistable device can store one bit of binary data, with one state representing a "0" and the other state a "1". It is also used in relaxation oscillators, multivibrators, and the Schmitt trigger. Bistability: Optical bistability is an attribute of certain optical devices where two resonant transmission states are possible and stable, dependent on the input. Bistability can also arise in biochemical systems, where it creates digital, switch-like outputs from the constituent chemical concentrations and activities. It is often associated with hysteresis in such systems. Mathematical modelling: In the mathematical language of dynamic systems analysis, one of the simplest bistable systems is dy/dt = y(1 − y²). Mathematical modelling: This system describes a ball rolling down a curve with shape y⁴/4 − y²/2, and has three equilibrium points: y = 1, y = 0, and y = −1. The middle point y = 0 is unstable (a solution starting exactly at y = 0 stays there, but solutions starting arbitrarily close to it move away towards one of the other equilibria), while the other two points are stable. The direction of change of y(t) over time depends on the initial condition y(0). If the initial condition is positive (y(0) > 0), then the solution y(t) approaches 1 over time, but if the initial condition is negative (y(0) < 0), then y(t) approaches −1 over time. Thus, the dynamics are "bistable". The final state of the system can be either y = 1 or y = −1, depending on the initial conditions. The appearance of a bistable region can be understood for the model system dy/dt = y(r − y²), which undergoes a supercritical pitchfork bifurcation with bifurcation parameter r. In biological and chemical systems: Bistability is key for understanding basic phenomena of cellular functioning, such as decision-making processes in cell cycle progression, cellular differentiation, and apoptosis.
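Before turning to the biological examples, here is a minimal numerical sketch of the bistable model dy/dt = y(1 − y²) from the Mathematical modelling section: a forward-Euler integration from a few initial conditions shows trajectories settling at +1 or −1 according to the sign of y(0). The step size and horizon are arbitrary illustrative choices.

```python
# Forward-Euler sketch of dy/dt = y*(1 - y**2): trajectories starting at
# positive initial conditions settle at y = +1, negative ones at y = -1,
# illustrating the two stable equilibria discussed above.
def simulate(y0: float, dt: float = 0.01, steps: int = 2000) -> float:
    y = y0
    for _ in range(steps):
        y += dt * y * (1.0 - y * y)
    return y

for y0 in (0.1, 0.5, -0.1, -2.0):
    print(f"y(0) = {y0:+.1f}  ->  y(final) ~ {simulate(y0):+.3f}")
# y(0) = +0.1 -> about +1.000, y(0) = -0.1 -> about -1.000, and so on.
```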
It is also involved in loss of cellular homeostasis associated with early events in cancer onset and in prion diseases as well as in the origin of new species (speciation). Bistability can be generated by a positive feedback loop with an ultrasensitive regulatory step. Positive feedback loops, such as the simple X activates Y and Y activates X motif, essentially link output signals to their input signals and have been noted to be an important regulatory motif in cellular signal transduction because positive feedback loops can create switches with an all-or-nothing decision. Studies have shown that numerous biological systems, such as Xenopus oocyte maturation, mammalian calcium signal transduction, and polarity in budding yeast, incorporate multiple positive feedback loops with different time scales (slow and fast). Having multiple linked positive feedback loops with different time scales ("dual-time switches") allows for (a) increased regulation: two switches that have independent changeable activation and deactivation times; and (b) noise filtering. Bistability can also arise in a biochemical system only for a particular range of parameter values, where the parameter can often be interpreted as the strength of the feedback. In several typical examples, the system has only one stable fixed point at low values of the parameter. A saddle-node bifurcation gives rise to a pair of new fixed points emerging, one stable and the other unstable, at a critical value of the parameter. The unstable solution can then form another saddle-node bifurcation with the initial stable solution at a higher value of the parameter, leaving only the higher fixed solution. Thus, at values of the parameter between the two critical values, the system has two stable solutions. An example of a dynamical system that demonstrates similar features is dx/dt = r + x⁵/(1 + x⁵) − x, where x is the output and r is the parameter, acting as the input (see the numerical sketch below). Bistability can be modified to be more robust and to tolerate significant changes in concentrations of reactants, while still maintaining its "switch-like" character. Feedback on both the activator of a system and its inhibitor makes the system able to tolerate a wide range of concentrations. An example of this in cell biology is that activated CDK1 (Cyclin Dependent Kinase 1) activates its activator Cdc25 while at the same time inactivating its inactivator, Wee1, thus allowing for progression of a cell into mitosis. Without this double feedback, the system would still be bistable, but would not be able to tolerate such a wide range of concentrations. Bistability has also been described in the embryonic development of Drosophila melanogaster (the fruit fly). Examples are anterior-posterior and dorso-ventral axis formation and eye development. A prime example of bistability in biological systems is that of Sonic hedgehog (Shh), a secreted signaling molecule, which plays a critical role in development. Shh functions in diverse processes in development, including the patterning of limb bud tissue differentiation. The Shh signaling network behaves as a bistable switch, allowing the cell to abruptly switch states at precise Shh concentrations. gli1 and gli2 transcription is activated by Shh, and their gene products act as transcriptional activators for their own expression and for targets downstream of Shh signaling. Simultaneously, the Shh signaling network is controlled by a negative feedback loop wherein the Gli transcription factors activate the enhanced transcription of a repressor (Ptc).
This signaling network illustrates the simultaneous positive and negative feedback loops whose exquisite sensitivity helps create a bistable switch. In biological and chemical systems: Bistability can only arise in biological and chemical systems if three necessary conditions are fulfilled: positive feedback, a mechanism to filter out small stimuli and a mechanism to prevent increase without bound.Bistable chemical systems have been studied extensively to analyze relaxation kinetics, non-equilibrium thermodynamics, stochastic resonance, as well as climate change. In bistable spatially extended systems the onset of local correlations and propagation of traveling waves have been analyzed.Bistability is often accompanied by hysteresis. On a population level, if many realisations of a bistable system are considered (e.g. many bistable cells (speciation)), one typically observes bimodal distributions. In an ensemble average over the population, the result may simply look like a smooth transition, thus showing the value of single-cell resolution. In biological and chemical systems: A specific type of instability is known as modehopping, which is bi-stability in the frequency space. Here trajectories can shoot between two stable limit cycles, and thus show similar characteristics as normal bi-stability when measured inside a Poincare section. In mechanical systems: Bistability as applied in the design of mechanical systems is more commonly said to be "over centre"—that is, work is done on the system to move it just past the peak, at which point the mechanism goes "over centre" to its secondary stable position. The result is a toggle-type action- work applied to the system below a threshold sufficient to send it 'over center' results in no change to the mechanism's state. In mechanical systems: Springs are a common method of achieving an "over centre" action. A spring attached to a simple two position ratchet-type mechanism can create a button or plunger that is clicked or toggled between two mechanical states. Many ballpoint and rollerball retractable pens employ this type of bistable mechanism. An even more common example of an over-center device is an ordinary electric wall switch. These switches are often designed to snap firmly into the "on" or "off" position once the toggle handle has been moved a certain distance past the center-point. In mechanical systems: A ratchet-and-pawl is an elaboration—a multi-stable "over center" system used to create irreversible motion. The pawl goes over center as it is turned in the forward direction. In this case, "over center" refers to the ratchet being stable and "locked" in a given position until clicked forward again; it has nothing to do with the ratchet being unable to turn in the reverse direction.
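As a brief numerical illustration of the toy model dy/dt = y(1 − y²) discussed under Mathematical modelling above, the sketch below integrates the equation with a simple forward Euler scheme. The step size, duration, and initial conditions are arbitrary choices made for this illustration and are not part of the original description.

```python
# Minimal numerical sketch of the bistable toy model dy/dt = y(1 - y^2).
# Step size, duration, and initial conditions are arbitrary illustrative choices.

def simulate(y0, dt=0.01, steps=2000):
    """Integrate dy/dt = y*(1 - y**2) with the forward Euler method."""
    y = y0
    for _ in range(steps):
        y += dt * y * (1.0 - y * y)
    return y

if __name__ == "__main__":
    for y0 in (0.2, -0.2, 1.5, -0.7):
        print(f"y(0) = {y0:+.2f}  ->  y(t_final) ~ {simulate(y0):+.4f}")
    # Trajectories started at y(0) > 0 settle near +1 and those started at
    # y(0) < 0 settle near -1, the two stable equilibria separated by the
    # unstable point y = 0.
```

Initial conditions on either side of y = 0 end up at different stable states, which is exactly the bistable behaviour described above.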
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spatial-numerical association of response codes** Spatial-numerical association of response codes: The spatial-numerical association of response codes (SNARC) is an example of the spatial organisation of magnitude information. Put simply, when presented with smaller numbers (0 to 4), people tend to respond faster if those stimuli are associated with the left extrapersonal hemiside of their perceived surroundings; when presented with larger numbers (6 to 9), people respond faster if those stimuli are instead associated with the right extrapersonal hemiside of their perceived surroundings. The SNARC effect is this automatic association that occurs between the location of the response hand and the semantic magnitude of a modality-independent number.Even for tasks in which magnitude is irrelevant, like parity judgement or phoneme detection, larger numbers are faster responded to with the right response key while smaller numbers are faster responded to with the left. This also occurs when the hands are crossed, with the right hand activating the left response key and vice versa. The explanation given by Dehaene and colleagues is that the magnitude of a number on an oriented mental number line is automatically activated. The mental number line is assumed to be oriented from left to right in populations with a left-to-right writing system (e.g. English), and oriented from right to left in populations with a right-to-left writing system (e.g. Iranian) Effects: The SNARC has been observed primarily in two scenarios: attentional and oculomotor. The first of these involves people being faster to detect left probes after smaller numbers are shown and right probes after large numbers, whereas the oculomotor effects are seen when participants look at greater speeds towards the left after detecting small numbers and to the right after detecting large ones.Newer research shows a motor bias to also be associated with the SNARC effect. In an experiment conducted into random number generation, participants tended to generate numbers of a larger magnitude when turning their heads to the right, and numbers of a smaller magnitude when turning their heads to the left. This has been replicated using hand sizes: smaller distances between the index finger and thumb when generating a random number evoked smaller numbers, and larger spaces evoked larger numbers.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**MIR1271** MIR1271: MicroRNA 1271 is a microRNA that in humans is encoded by the MIR1271 gene. Function: microRNAs (miRNAs) are short (20-24 nt) non-coding RNAs that are involved in post-transcriptional regulation of gene expression in multicellular organisms by affecting both the stability and translation of mRNAs. miRNAs are transcribed by RNA polymerase II as part of capped and polyadenylated primary transcripts (pri-miRNAs) that can be either protein-coding or non-coding. The primary transcript is cleaved by the Drosha ribonuclease III enzyme to produce an approximately 70-nt stem-loop precursor miRNA (pre-miRNA), which is further cleaved by the cytoplasmic Dicer ribonuclease to generate the mature miRNA and antisense miRNA star (miRNA*) products. The mature miRNA is incorporated into a RNA-induced silencing complex (RISC), which recognizes target mRNAs through imperfect base pairing with the miRNA and most commonly results in translational inhibition or destabilization of the target mRNA. The RefSeq represents the predicted microRNA stem-loop.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Oxygen toxicity** Oxygen toxicity: Oxygen toxicity is a condition resulting from the harmful effects of breathing molecular oxygen (O2) at increased partial pressures. Severe cases can result in cell damage and death, with effects most often seen in the central nervous system, lungs, and eyes. Historically, the central nervous system condition was called the Paul Bert effect, and the pulmonary condition the Lorrain Smith effect, after the researchers who pioneered the discoveries and descriptions in the late 19th century. Oxygen toxicity is a concern for underwater divers, those on high concentrations of supplemental oxygen, and those undergoing hyperbaric oxygen therapy. Oxygen toxicity: The result of breathing increased partial pressures of oxygen is hyperoxia, an excess of oxygen in body tissues. The body is affected in different ways depending on the type of exposure. Central nervous system toxicity is caused by short exposure to high partial pressures of oxygen at greater than atmospheric pressure. Pulmonary and ocular toxicity result from longer exposure to increased oxygen levels at normal pressure. Symptoms may include disorientation, breathing problems, and vision changes such as myopia. Prolonged exposure to above-normal oxygen partial pressures, or shorter exposures to very high partial pressures, can cause oxidative damage to cell membranes, collapse of the alveoli in the lungs, retinal detachment, and seizures. Oxygen toxicity is managed by reducing the exposure to increased oxygen levels. Studies show that, in the long term, a robust recovery from most types of oxygen toxicity is possible. Oxygen toxicity: Protocols for avoidance of the effects of hyperoxia exist in fields where oxygen is breathed at higher-than-normal partial pressures, including underwater diving using compressed breathing gases, hyperbaric medicine, neonatal care and human spaceflight. These protocols have resulted in the increasing rarity of seizures due to oxygen toxicity, with pulmonary and ocular damage being largely confined to the problems of managing premature infants. Oxygen toxicity: In recent years, oxygen has become available for recreational use in oxygen bars. The US Food and Drug Administration has warned those who have conditions such as heart or lung disease not to use oxygen bars. Scuba divers use breathing gases containing up to 100% oxygen, and should have specific training in using such gases. Classification: The effects of oxygen toxicity may be classified by the organs affected, producing three principal forms: Central nervous system, characterised by convulsions followed by unconsciousness, occurring under hyperbaric conditions; Pulmonary (lungs), characterised by difficulty in breathing and pain within the chest, occurring when breathing increased pressures of oxygen for extended periods; Ocular (retinopathic conditions), characterised by alterations to the eyes, occurring when breathing increased pressures of oxygen for extended periods.Central nervous system oxygen toxicity can cause seizures, brief periods of rigidity followed by convulsions and unconsciousness, and is of concern to divers who encounter greater than atmospheric pressures. Pulmonary oxygen toxicity results in damage to the lungs, causing pain and difficulty in breathing. Oxidative damage to the eye may lead to myopia or partial detachment of the retina. 
Pulmonary and ocular damage are most likely to occur when supplemental oxygen is administered as part of a treatment, particularly to newborn infants, but are also a concern during hyperbaric oxygen therapy. Classification: Oxidative damage may occur in any cell in the body but the effects on the three most susceptible organs will be the primary concern. It may also be implicated in damage to red blood cells (haemolysis), the liver, heart, endocrine glands (adrenal glands, gonads, and thyroid), or kidneys, and general damage to cells. In unusual circumstances, effects on other tissues may be observed: it is suspected that during spaceflight, high oxygen concentrations may contribute to bone damage. Hyperoxia can also indirectly cause carbon dioxide narcosis in patients with lung ailments such as chronic obstructive pulmonary disease or with central respiratory depression. Hyperventilation of atmospheric air at atmospheric pressures does not cause oxygen toxicity, because sea-level air has a partial pressure of oxygen of 0.21 bar (21 kPa) whereas toxicity does not occur below 0.3 bar (30 kPa). Signs and symptoms: Central nervous system Central nervous system oxygen toxicity manifests as symptoms such as visual changes (especially tunnel vision), ringing in the ears (tinnitus), nausea, twitching (especially of the face), behavioural changes (irritability, anxiety, confusion), and dizziness. This may be followed by a tonic–clonic seizure consisting of two phases: intense muscle contraction occurs for several seconds (tonic phase); followed by rapid spasms of alternate muscle relaxation and contraction producing convulsive jerking (clonic phase). The seizure ends with a period of unconsciousness (the postictal state). The onset of seizure depends upon the partial pressure of oxygen in the breathing gas and exposure duration. However, exposure time before onset is unpredictable, as tests have shown a wide variation, both amongst individuals, and in the same individual from day to day. In addition, many external factors, such as underwater immersion, exposure to cold, and exercise will decrease the time to onset of central nervous system symptoms. Decrease of tolerance is closely linked to retention of carbon dioxide. Other factors, such as darkness and caffeine, increase tolerance in test animals, but these effects have not been proven in humans. Signs and symptoms: Lungs Exposure to oxygen pressures greater than 0.5 bar, such as during diving, oxygen prebreathing prior to flight, or hyperbaric therapy, is associated with the onset of pulmonary toxicity symptoms. Pulmonary toxicity symptoms result from an inflammation that starts in the airways leading to the lungs and then spreads into the lungs (tracheobronchial tree). The symptoms appear in the upper chest region (substernal and carinal regions). This begins as a mild tickle on inhalation and progresses to frequent coughing. If breathing increased partial pressures of oxygen continues, subjects experience a mild burning on inhalation along with uncontrollable coughing and occasional shortness of breath (dyspnea). Physical findings related to pulmonary toxicity have included bubbling sounds heard through a stethoscope (bubbling rales), fever, and increased blood flow to the lining of the nose (hyperaemia of the nasal mucosa). Initially, there is an exudative phase that results in pulmonary oedema. An increase in the width of the interstitial space may be seen in histological examination. 
X-rays of the lungs show little change in the short term, but extended exposure leads to increasing diffuse shadowing throughout both lungs. Pulmonary function measurements are reduced, as indicated by a reduction in the amount of air that the lungs can hold (vital capacity) and changes in expiratory function and lung elasticity. Lung diffusing capacity decreases leading eventually to hypoxaemia. Tests in animals have indicated a variation in tolerance similar to that found in central nervous system toxicity, as well as significant variations between species. When the exposure to oxygen above 0.5 bar (50 kPa) is intermittent, it permits the lungs to recover and delays the onset of toxicity. A similar progression is common to all mammalian species. If death from hypoxaemia has not occurred after exposure for several days a proliferative phase occurs, developing a chronic thickening of the alveolar membrane and a decrement in lung diffusing capacity. These changes are mostly reversible on return to normoxia, but the time required for complete recovery is not known. Signs and symptoms: Eyes In premature babies, signs of damage to the eye (retinopathy of prematurity, or ROP) are observed via an ophthalmoscope as a demarcation between the vascularised and non-vascularised regions of an infant's retina. The degree of this demarcation is used to designate four stages: (I) the demarcation is a line; (II) the demarcation becomes a ridge; (III) growth of new blood vessels occurs around the ridge; (IV) the retina begins to detach from the inner wall of the eye (choroid). Causes: Oxygen toxicity is caused by hyperoxia, exposure to oxygen at partial pressures greater than those to which the body is normally exposed. This occurs in three principal settings: underwater diving, hyperbaric oxygen therapy, and the provision of supplemental oxygen, in critical care, and for long term treatment of chronic disorders, and particularly to premature infants. In each case, the risk factors are markedly different. Causes: Under normal or reduced ambient pressures, the effects of hyperoxia are initially restricted to the lungs, which are directly exposed, but after prolonged exposure or at hyperbaric pressures, other organs can be at risk. At normal partial pressures of inhaled oxygen, most of the oxygen transported in the blood is carried by haemoglobin, but the amount of dissolved oxygen will increase at partial pressures of arterial oxygen exceeding 100 millimetres of mercury (0.13 bar), when oxyhemoglobin saturation is nearly complete. At higher concentrations the effects of hyperoxia are more widespread in the body tissues beyond the lungs. Causes: Central nervous system toxicity Exposures, from minutes to a few hours, to partial pressures of oxygen above about 1.6 bars (160 kPa)—about eight times normal atmospheric partial pressure—are usually associated with central nervous system oxygen toxicity and are most likely to occur among patients undergoing hyperbaric oxygen therapy and divers. Since sea level atmospheric pressure is about 1 bar (100 kPa), central nervous system toxicity can only occur under hyperbaric conditions, where ambient pressure is above normal. Divers breathing air at depths beyond 60 m (200 ft) face an increasing risk of an oxygen toxicity "hit" (seizure). Divers breathing a gas mixture enriched with oxygen, such as nitrox, similarly increase the risk of a seizure at shallower depths, should they descend below the maximum operating depth accepted for the mixture. 
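The maximum operating depth mentioned above follows directly from the relationship between depth, ambient pressure, and the oxygen fraction of the breathing gas. The sketch below is an illustration rather than any training agency's official formula; it assumes 1 bar of ambient pressure at the surface, roughly 1 bar of additional pressure per 10 metres of sea water, and a caller-supplied partial pressure limit such as the commonly taught 1.4 bar.

```python
# Sketch of a maximum operating depth (MOD) calculation for an oxygen-enriched mix.
# Assumes 1 bar at the surface and about 1 bar per 10 m of sea water; the partial
# pressure limit (for example 1.4 bar) is supplied by the caller.

def maximum_operating_depth(oxygen_fraction: float, ppo2_limit_bar: float = 1.4) -> float:
    """Return the depth in metres of sea water at which the oxygen partial
    pressure of the mix reaches ppo2_limit_bar."""
    ambient_limit_bar = ppo2_limit_bar / oxygen_fraction  # total pressure at the limit
    return (ambient_limit_bar - 1.0) * 10.0               # subtract the surface bar, convert to metres

if __name__ == "__main__":
    print(maximum_operating_depth(0.32))  # 32% nitrox at a 1.4 bar limit: about 33.8 m
    print(maximum_operating_depth(0.21))  # air at a 1.4 bar limit: about 56.7 m
```

The air result of roughly 57 m is consistent with the figure quoted later in this article for the depth at which air exceeds a 1.4 bar oxygen partial pressure.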
Causes: CNS toxicity is aggravated by a high partial pressure of carbon dioxide, stress, fatigue and cold, all of which are much more likely in diving than in hyperbaric therapy. Causes: Lung toxicity The lungs and the remainder of the respiratory tract are exposed to the highest concentration of oxygen in the human body and are therefore the first organs to show toxicity. Pulmonary toxicity occurs only with exposure to partial pressures of oxygen greater than 0.5 bar (50 kPa), corresponding to an oxygen fraction of 50% at normal atmospheric pressure. The earliest signs of pulmonary toxicity begin with evidence of tracheobronchitis, or inflammation of the upper airways, after an asymptomatic period between 4 and 22 hours at greater than 95% oxygen, with some studies suggesting symptoms usually begin after approximately 14 hours at this level of oxygen. At partial pressures of oxygen of 2 to 3 bar (200 to 300 kPa)—100% oxygen at 2 to 3 times atmospheric pressure—these symptoms may begin as early as 3 hours into exposure to oxygen. Experiments on rats breathing oxygen at pressures between 1 and 3 bars (100 and 300 kPa) suggest that pulmonary manifestations of oxygen toxicity may not be the same for normobaric conditions as they are for hyperbaric conditions. Evidence of decline in lung function as measured by pulmonary function testing can occur as quickly as 24 hours of continuous exposure to 100% oxygen, with evidence of diffuse alveolar damage and the onset of acute respiratory distress syndrome usually occurring after 48 hours on 100% oxygen. Breathing 100% oxygen also eventually leads to collapse of the alveoli (atelectasis), while—at the same partial pressure of oxygen—the presence of significant partial pressures of inert gases, typically nitrogen, will prevent this effect. Preterm newborns are known to be at higher risk for bronchopulmonary dysplasia with extended exposure to high concentrations of oxygen. Other groups at higher risk for oxygen toxicity are patients on mechanical ventilation with exposure to levels of oxygen greater than 50%, and patients exposed to chemicals that increase risk for oxygen toxicity such as the chemotherapeutic agent bleomycin. Therefore, current guidelines for patients on mechanical ventilation in intensive care recommend keeping oxygen concentration less than 60%. Likewise, divers who undergo treatment of decompression sickness are at increased risk of oxygen toxicity as treatment entails exposure to long periods of oxygen breathing under hyperbaric conditions, in addition to any oxygen exposure during the dive. Causes: Ocular toxicity Prolonged exposure to high inspired fractions of oxygen causes damage to the retina. Damage to the developing eye of infants exposed to high oxygen fraction at normal pressure has a different mechanism and effect from the eye damage experienced by adult divers under hyperbaric conditions. Hyperoxia may be a contributing factor for the disorder called retrolental fibroplasia or retinopathy of prematurity (ROP) in infants. In preterm infants, the retina is often not fully vascularised. Retinopathy of prematurity occurs when the development of the retinal vasculature is arrested and then proceeds abnormally. Associated with the growth of these new vessels is fibrous tissue (scar tissue) that may contract to cause retinal detachment. Supplemental oxygen exposure, while a risk factor, is not the main risk factor for development of this disease. 
Restricting supplemental oxygen use does not necessarily reduce the rate of retinopathy of prematurity, and may raise the risk of hypoxia-related systemic complications. Hyperoxic myopia has occurred in closed circuit oxygen rebreather divers with prolonged exposures. It also occurs frequently in those undergoing repeated hyperbaric oxygen therapy. This is due to an increase in the refractive power of the lens, since axial length and keratometry readings do not reveal a corneal or length basis for a myopic shift. It is usually reversible with time. A possible side effect of hyperbaric oxygen therapy is the initial or further development of cataracts, which are an increase in opacity of the lens of the eye that reduces visual acuity, and can eventually result in blindness. This is a rare event, associated with lifetime exposure to raised oxygen concentration, and may be under-reported as it develops very slowly. The cause is not fully understood, but evidence suggests that raised oxygen levels may cause accelerated deterioration of the vitreous humour due to degradation of lens crystallins by cross-linking, forming aggregates capable of scattering light. This may be an end-state development of the more commonly observed myopic shift associated with hyperbaric treatment. Mechanism: The biochemical basis for the toxicity of oxygen is the partial reduction of oxygen by one or two electrons to form reactive oxygen species, which are natural by-products of the normal metabolism of oxygen and have important roles in cell signalling. One species produced by the body, the superoxide anion (O2−), is possibly involved in iron acquisition. Higher than normal concentrations of oxygen lead to increased levels of reactive oxygen species. Oxygen is necessary for cell metabolism, and the blood supplies it to all parts of the body. When oxygen is breathed at high partial pressures, a hyperoxic condition will rapidly spread, with the most vascularised tissues being most vulnerable. During times of environmental stress, levels of reactive oxygen species can increase dramatically, which can damage cell structures and produce oxidative stress. While all the reaction mechanisms of these species within the body are not yet fully understood, one of the most reactive products of oxidative stress is the hydroxyl radical (·OH), which can initiate a damaging chain reaction of lipid peroxidation in the unsaturated lipids within cell membranes. High concentrations of oxygen also increase the formation of other free radicals, such as nitric oxide, peroxynitrite, and trioxidane, which harm DNA and other biomolecules. Although the body has many antioxidant systems such as glutathione that guard against oxidative stress, these systems are eventually overwhelmed at very high concentrations of free oxygen, and the rate of cell damage exceeds the capacity of the systems that prevent or repair it. Cell damage and cell death then result. Diagnosis: Diagnosis of central nervous system oxygen toxicity in divers prior to seizure is difficult as the symptoms of visual disturbance, ear problems, dizziness, confusion and nausea can be due to many factors common to the underwater environment such as narcosis, congestion and coldness. However, these symptoms may be helpful in diagnosing the first stages of oxygen toxicity in patients undergoing hyperbaric oxygen therapy. 
In either case, unless there is a prior history of epilepsy or tests indicate hypoglycaemia, a seizure occurring in the setting of breathing oxygen at partial pressures greater than 1.4 bar (140 kPa) suggests a diagnosis of oxygen toxicity. Diagnosis of bronchopulmonary dysplasia in newborn infants with breathing difficulties is difficult in the first few weeks. However, if the infant's breathing does not improve during this time, blood tests and x-rays may be used to confirm bronchopulmonary dysplasia. In addition, an echocardiogram can help to eliminate other possible causes such as congenital heart defects or pulmonary arterial hypertension. The diagnosis of retinopathy of prematurity in infants is typically suggested by the clinical setting. Prematurity, low birth weight, and a history of oxygen exposure are the principal indicators, while no hereditary factors have been shown to yield a pattern. Diagnosis: Differential diagnosis Clinical diagnosis can be confirmed with arterial oxygen levels. A number of other conditions can be confused with oxygen toxicity; these include: carbon monoxide poisoning, cerebrovascular event (stroke), envenomation or toxin ingestion, hypercapnia (carbon dioxide narcosis), hyperventilation, hypoglycemia, infection, migraine, multiple sclerosis, and seizure disorder (epilepsy). Prevention: The prevention of oxygen toxicity depends entirely on the setting. Both underwater and in space, proper precautions can eliminate the most pernicious effects. Premature infants commonly require supplemental oxygen to treat complications of preterm birth. In this case prevention of bronchopulmonary dysplasia and retinopathy of prematurity must be carried out without compromising a supply of oxygen adequate to preserve the infant's life. Prevention: Underwater Oxygen toxicity is a catastrophic hazard in scuba diving, because a seizure results in high risk of death by drowning. The seizure may occur suddenly and with no warning symptoms. The effects are sudden convulsions and unconsciousness, during which victims can lose their regulator and drown. One of the advantages of a full-face diving mask is prevention of regulator loss in the event of a seizure. Mouthpiece retaining straps are a relatively inexpensive alternative with a similar but less effective function. As there is an increased risk of central nervous system oxygen toxicity on deep dives, long dives and dives where oxygen-rich breathing gases are used, divers are taught to calculate a maximum operating depth for oxygen-rich breathing gases, and cylinders containing such mixtures should be clearly marked with that depth. The risk of seizure appears to be a function of dose – a cumulative combination of partial pressure and duration. The threshold for oxygen partial pressure below which seizures never occur has not been established, and may depend on many variables, some of them personal. The risk to a specific person can vary considerably depending on individual sensitivity, level of exercise, and carbon dioxide retention, which is influenced by work of breathing. In some diver training courses for modes of diving in which exposure may reach levels with significant risk, divers are taught to plan and monitor what is called the 'oxygen clock' of their dives. This is a notional alarm clock, which ticks more quickly at increased oxygen pressure and is set to activate at the maximum single exposure limit recommended in the National Oceanic and Atmospheric Administration Diving Manual. 
For the following partial pressures of oxygen the limits are: 45 minutes at 1.6 bar (160 kPa), 120 minutes at 1.5 bar (150 kPa), 150 minutes at 1.4 bar (140 kPa), 180 minutes at 1.3 bar (130 kPa) and 210 minutes at 1.2 bar (120 kPa), but it is impossible to predict with any reliability whether or when toxicity symptoms will occur. Many nitrox-capable dive computers calculate an oxygen loading and can track it across multiple dives. The aim is to avoid activating the alarm by reducing the partial pressure of oxygen in the breathing gas or by reducing the time spent breathing gas of greater oxygen partial pressure. As the partial pressure of oxygen increases with the fraction of oxygen in the breathing gas and the depth of the dive, the diver obtains more time on the oxygen clock by diving at a shallower depth, by breathing a less oxygen-rich gas, or by shortening the duration of exposure to oxygen-rich gases. This function is provided by some technical diving decompression computers and rebreather control and monitoring hardware. Diving below 56 m (184 ft) on air would expose a diver to increasing danger of oxygen toxicity as the partial pressure of oxygen exceeds 1.4 bar (140 kPa), so a gas mixture should be used which contains less than 21% oxygen (termed a hypoxic mixture). Increasing the proportion of nitrogen is not viable, since it would produce a strongly narcotic mixture. However, helium is not narcotic, and a usable mixture may be blended either by completely replacing nitrogen with helium (the resulting mix is called heliox), or by replacing part of the nitrogen with helium, producing a trimix. Pulmonary oxygen toxicity is an entirely avoidable event while diving. The limited duration and naturally intermittent nature of most diving makes this a relatively rare (and even then, reversible) complication for divers. Established guidelines enable divers to calculate when they are at risk of pulmonary toxicity. In saturation diving it can be avoided by limiting the oxygen content of gas in living areas to below 0.4 bar. Prevention: Screening The intention of screening using an oxygen tolerance test is to identify divers with low tolerance to high partial pressures of hyperbaric oxygen who may be more prone to oxygen convulsions during diving operations or during hyperbaric treatment for decompression sickness. The value of this test has been questioned, and statistical studies have shown low incidence of seizures during standard hyperbaric treatment schedules, so some navies have discontinued its use, though others continue to require the test for all candidate divers. The variability in tolerance and other variable factors such as workload have resulted in the U.S. Navy abandoning screening for oxygen tolerance. Of the 6,250 oxygen-tolerance tests performed between 1976 and 1997, only 6 episodes of oxygen toxicity were observed (0.1%). The oxygen tolerance test used by the Indian Navy, which follows recommendations of the US Navy and US National Oceanic and Atmospheric Administration, is to breathe 100% oxygen delivered by BIBS mask at an ambient pressure of 2.8 bar absolute (18 msw) for 30 minutes, at rest in a dry hyperbaric chamber. The test is passed if the attendant observes no symptoms of CNS oxygen toxicity. Prevention: Hyperbaric setting The presence of a fever or a history of seizure is a relative contraindication to hyperbaric oxygen treatment. 
The schedules used for treatment of decompression illness allow for periods of breathing air rather than 100% oxygen (air breaks) to reduce the chance of seizure or lung damage. The U.S. Navy uses treatment tables based on periods alternating between 100% oxygen and air. For example, USN table 6 requires 75 minutes (three periods of 20 minutes oxygen/5 minutes air) at an ambient pressure of 2.8 standard atmospheres (280 kPa), equivalent to a depth of 18 metres (60 ft). This is followed by a slow reduction in pressure to 1.9 atm (190 kPa) over 30 minutes on oxygen. The patient then remains at that pressure for a further 150 minutes, consisting of two periods of 15 minutes air/60 minutes oxygen, before the pressure is reduced to atmospheric over 30 minutes on oxygen.Vitamin E and selenium were proposed and later rejected as a potential method of protection against pulmonary oxygen toxicity. There is however some experimental evidence in rats that vitamin E and selenium aid in preventing in vivo lipid peroxidation and free radical damage, and therefore prevent retinal changes following repetitive hyperbaric oxygen exposures. Prevention: Normobaric setting Bronchopulmonary dysplasia is reversible in the early stages by use of break periods on lower pressures of oxygen, but it may eventually result in irreversible lung injury if allowed to progress to severe damage. One or two days of exposure without oxygen breaks are needed to cause such damage.Retinopathy of prematurity is largely preventable by screening. Current guidelines require that all babies of less than 32 weeks gestational age or having a birth weight less than 1.5 kg (3.3 lb) should be screened for retinopathy of prematurity at least every two weeks. The National Cooperative Study in 1954 showed a causal link between supplemental oxygen and retinopathy of prematurity, but subsequent curtailment of supplemental oxygen caused an increase in infant mortality. To balance the risks of hypoxia and retinopathy of prematurity, modern protocols now require monitoring of blood oxygen levels in premature infants receiving oxygen.Careful titration of dosage to minimise delivered concentration while achieving the desired level of oxygenation will both minimise the risk of oxygen toxicity damage and the amount of oxygen used for long term therapy. Prevention: Hypobaric setting In low-pressure environments oxygen toxicity may be avoided since the toxicity is caused by high partial pressure of oxygen, not by high oxygen fraction. This is illustrated by the use of pure oxygen in spacesuits, which must operate at low pressure, and a high oxygen fraction and cabin pressure lower than normal atmospheric pressure in early spacecraft, for example, the Gemini and Apollo spacecraft. In such applications as extra-vehicular activity, high-fraction oxygen is non-toxic, even at breathing mixture fractions approaching 100%, because the oxygen partial pressure is not allowed to chronically exceed 0.3 bar (4.4 psi). Management: During hyperbaric oxygen therapy, the patient will usually breathe 100% oxygen from a mask while inside a hyperbaric chamber pressurised with air to about 2.8 bar (280 kPa). Seizures during the therapy are managed by removing the mask from the patient, thereby dropping the partial pressure of oxygen inspired below 0.6 bar (60 kPa).A seizure underwater requires that the diver be brought to the surface as soon as practicable. 
Although for many years the recommendation has been not to raise the diver during the seizure itself, owing to the danger of arterial gas embolism (AGE), there is some evidence that the glottis does not fully obstruct the airway. This has led to the current recommendation by the Diving Committee of the Undersea and Hyperbaric Medical Society that a diver should be raised during the seizure's clonic (convulsive) phase if the regulator is not in the diver's mouth—as the danger of drowning is then greater than that of AGE—but the ascent should be delayed until the end of the clonic phase otherwise. Rescuers ensure that their own safety is not compromised during the convulsive phase. They then ensure that where the victim's air supply is established it is maintained, and carry out a controlled buoyant lift. Lifting an unconscious body is taught by most recreational diver training agencies as an advanced skill, and for professional divers it is a basic skill, as it is one of the primary functions of the standby diver. Upon reaching the surface, emergency services are always contacted as there is a possibility of further complications requiring medical attention. If symptoms develop other than a seizure underwater the diver should immediately switch to a gas with a lower oxygen fraction or ascend to a shallower depth if decompression obligations allow. If a chamber is available at the surface, surface decompression is a recommended option. The U.S. Navy has published procedures for completing decompression stops where a recompression chamber is not immediately available.The occurrence of symptoms of bronchopulmonary dysplasia or acute respiratory distress syndrome is treated by lowering the fraction of oxygen administered, along with a reduction in the periods of exposure and an increase in the break periods where normal air is supplied. Where supplemental oxygen is required for treatment of another disease (particularly in infants), a ventilator may be needed to ensure that the lung tissue remains inflated. Reductions in pressure and exposure will be made progressively, and medications such as bronchodilators and pulmonary surfactants may be used.Divers manage the risk of pulmonary damage by limiting exposure to levels shown to be generally acceptable by experimental evidence, using a system of accumulated oxygen toxicity units which are based on exposure time at specified partial pressures. In the event of emergency treatment for decompression illness, it may be necessary to exceed normal exposure limits to manage more critical symptoms.Retinopathy of prematurity may regress spontaneously, but should the disease progress beyond a threshold (defined as five contiguous or eight cumulative hours of stage 3 retinopathy of prematurity), both cryosurgery and laser surgery have been shown to reduce the risk of blindness as an outcome. Where the disease has progressed further, techniques such as scleral buckling and vitrectomy surgery may assist in re-attaching the retina. Management: Repetitive exposure Repeated exposure to potentially toxic oxygen concentrations in breathing gas is fairly common in hyperbaric activity, particularly in hyperbaric medicine, saturation diving, underwater habitats, and repetitive decompression diving. Research at the National Oceanic and Atmospheric Administration (NOAA) by R.W. Hamilton and others determined acceptable levels of exposure for single and repeated exposures. 
A distinction is made between acceptable exposure for acute and chronic toxicity, but these are really the extremes of a possible continuous range of exposures. A further distinction can be made between routine exposure and exposure required for emergency treatment, where a higher risk of oxygen toxicity may be justified to achieve a reduction of a more critical injury, particularly when in a relatively safe controlled and monitored environment. Management: The Repex (repetitive exposure) method, developed in 1988, allows oxygen toxicity dosage to be calculated using a single dose value equivalent to 1 minute of 100% oxygen at atmospheric pressure, called an Oxygen Tolerance Unit (OTU), and is used to avoid toxic effects over several days of operational exposure. Some dive computers will automatically track the dosage based on measured depth and selected gas mixture. The limits allow a greater exposure when the person has not been exposed recently, and daily allowable dose decreases with an increase in consecutive days with exposure. These values may not be fully supported by current data. Management: A more recent proposal uses a simple power equation, Toxicity Index (TI) = t² × PO2^c, where t is time, PO2 is the oxygen partial pressure, and c is the power term. This was derived from the chemical reactions producing reactive oxygen or nitrogen species, and has been shown to give good predictions for CNS toxicity with c = 6.8 and for pulmonary toxicity with c = 4.57. For pulmonary toxicity, time is in hours, PO2 is in atmospheres absolute, and TI should be limited to 250. Management: For CNS toxicity, time is in minutes, PO2 in atmospheres absolute, and a TI of 26,108 indicates a 1% risk. Prognosis: Although the convulsions caused by central nervous system oxygen toxicity may lead to incidental injury to the victim, it remained uncertain for many years whether damage to the nervous system following the seizure could occur and several studies searched for evidence of such damage. An overview of these studies by Bitterman in 2004 concluded that following removal of breathing gas containing high fractions of oxygen, no long-term neurological damage from the seizure remains. The majority of infants who have survived following an incidence of bronchopulmonary dysplasia will eventually recover near-normal lung function, since lungs continue to grow during the first 5–7 years and the damage caused by bronchopulmonary dysplasia is to some extent reversible (even in adults). However, they are likely to be more susceptible to respiratory infections for the rest of their lives and the severity of later infections is often greater than that in their peers. Retinopathy of prematurity (ROP) in infants frequently regresses without intervention and eyesight may be normal in later years. Where the disease has progressed to the stages requiring surgery, the outcomes are generally good for the treatment of stage 3 ROP, but are much worse for the later stages. Although surgery is usually successful in restoring the anatomy of the eye, damage to the nervous system by the progression of the disease leads to comparatively poorer results in restoring vision. The presence of other complicating diseases also reduces the likelihood of a favourable outcome. Provision of supplementary oxygen remains of life-saving importance in critical care, and can increase survival in some chronic conditions, but hyperoxia and the formation of reactive oxygen species is involved in the pathogenesis of several life-threatening diseases. 
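The Toxicity Index described under Management above amounts to a very small calculation. The following sketch simply evaluates TI = t² × PO2^c with the exponents and units quoted in the text (hours and a limit of about 250 for the pulmonary form; minutes and a value of about 26,108 for a 1% CNS risk); the function names and example inputs are illustrative only.

```python
# Sketch of the power-law Toxicity Index described above: TI = t^2 * PO2^c.
# Exponents and thresholds are the values quoted in the text; units differ by
# endpoint (hours for pulmonary, minutes for CNS), with PO2 in atmospheres absolute.

def toxicity_index(time: float, po2_ata: float, c: float) -> float:
    return (time ** 2) * (po2_ata ** c)

def pulmonary_ti(hours: float, po2_ata: float) -> float:
    return toxicity_index(hours, po2_ata, c=4.57)  # text suggests keeping this below about 250

def cns_ti(minutes: float, po2_ata: float) -> float:
    return toxicity_index(minutes, po2_ata, c=6.8)  # about 26,108 corresponds to a 1% risk per the text

if __name__ == "__main__":
    # Example values only; compare the results against the thresholds quoted in the text.
    print(round(pulmonary_ti(hours=4, po2_ata=1.3), 1))
    print(round(cns_ti(minutes=45, po2_ata=1.6)))
```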
The toxic effects of hyperoxia are particularly prevalent in the pulmonary compartment, and cerebral and coronary circulations are at risk when vascular changes occur. Long-term hyperoxia harms the immune response, and susceptibility to infectious complications and tissue injury is increased. Epidemiology: The incidence of central nervous system toxicity among divers has decreased since the Second World War, as protocols have developed to limit exposure and partial pressure of oxygen inspired. In 1947, Donald recommended limiting the depth allowed for breathing pure oxygen to 7.6 m (25 ft), which equates to an oxygen partial pressure of 1.8 bar (180 kPa). Over time this limit has been reduced, until today a limit of 1.4 bar (140 kPa) during a recreational dive and 1.6 bar (160 kPa) during shallow decompression stops is generally recommended, though military divers using oxygen rebreathers may operate to greater depths for limited periods, at greater risk. Oxygen toxicity has now become a rare occurrence other than when caused by equipment malfunction and human error. Historically, the U.S. Navy has refined its Navy Diving Manual air and mixed gas tables to reduce oxygen toxicity incidents. Between 1995 and 1999, reports showed 405 surface-supported dives using the helium–oxygen tables; of these, oxygen toxicity symptoms were observed on 6 dives (1.5%). As a result, the U.S. Navy in 2000 modified the schedules and conducted field tests of 150 dives, none of which produced symptoms of oxygen toxicity. Revised tables were published in 2001. The variability in tolerance and other variable factors such as workload have resulted in the U.S. Navy abandoning screening for oxygen tolerance. Of the 6,250 oxygen-tolerance tests performed between 1976 and 1997, only 6 episodes of oxygen toxicity were observed (0.1%). Central nervous system oxygen toxicity among patients undergoing hyperbaric oxygen therapy is rare, and is influenced by a number of factors: individual sensitivity and treatment protocol; and probably therapy indication and equipment used. A study by Welslau in 1996 reported 16 incidents out of a population of 107,264 patients (0.015%), while Hampson and Atik in 2003 found a rate of 0.03%. Yildiz, Ay and Qyrdedi, in a summary of 36,500 patient treatments between 1996 and 2003, reported only 3 oxygen toxicity incidents, giving a rate of 0.008%. A later review of over 80,000 patient treatments revealed an even lower rate: 0.0024%. The reduction in incidence may be partly due to use of a mask (rather than a hood) to deliver oxygen. The overall risk of CNS toxicity may be as high as 1 in 2000 to 3000 treatments, but it varies with the pressure and may be as high as 1 in 200 at higher pressure treatment schedules of 2.8 to 3.0 ATA, or as low as 1 in 10,000 at schedules of 2 ATA or less. Bronchopulmonary dysplasia is among the most common complications of prematurely born infants and its incidence has grown as the survival of extremely premature infants has increased. Nevertheless, the severity has decreased as better management of supplemental oxygen has resulted in the disease now being related mainly to factors other than hyperoxia. In 1997 a summary of studies of neonatal intensive care units in industrialised countries showed that up to 60% of low birth weight babies developed retinopathy of prematurity, which rose to 72% in extremely low birth weight babies, defined as less than 1 kg (2.2 lb) at birth. 
However, severe outcomes are much less frequent: for very low birth weight babies—those less than 1.5 kg (3.3 lb) at birth—the incidence of blindness was found to be no more than 8%.Administration of supplemental oxygen is extensively and effectively used in emergency and intensive care medicine, but the reactive oxygen species caused by excessive oxygenation tend to cause a vicious cycle of tissue injury, characterized by cell damage, cell death, and inflammation, mostly in the lungs, which can exacerbate problems of tissue oxygenation for which the supplemental oxygen was intended as a treatment. Similar problems can occur in oxygen therapy for chronic conditions which involve hypoxia. Careful titration of oxygen supply to minimise the excess to physiological need also reduces pulmonary hyperoxic exposure to the reasonably practicable minimum. The incidence of pulmonary symptoms of oxygen toxicity is about 5%, and some drugs can increase the risk, such as the chemotherapeutic agent bleomycin. History: Central nervous system toxicity was first described by Paul Bert in 1878. He showed that oxygen was toxic to insects, arachnids, myriapods, molluscs, earthworms, fungi, germinating seeds, birds, and other animals. Central nervous system toxicity may be referred to as the "Paul Bert effect".Pulmonary oxygen toxicity was first described by J. Lorrain Smith in 1899 when he noted central nervous system toxicity and discovered in experiments in mice and birds that 0.43 bar (43 kPa) had no effect but 0.75 bar (75 kPa) of oxygen was a pulmonary irritant. Pulmonary toxicity may be referred to as the "Lorrain Smith effect". The first recorded human exposure was undertaken in 1910 by Bornstein when two men breathed oxygen at 2.8 bar (280 kPa) for 30 minutes, while he went on to 48 minutes with no symptoms. In 1912, Bornstein developed cramps in his hands and legs while breathing oxygen at 2.8 bar (280 kPa) for 51 minutes. Smith then went on to show that intermittent exposure to a breathing gas with less oxygen permitted the lungs to recover and delayed the onset of pulmonary toxicity.Albert R. Behnke et al. in 1935 were the first to observe visual field contraction (tunnel vision) on dives between 1.0 bar (100 kPa) and 4.1 bar (410 kPa). During World War II, Donald and Yarbrough et al. performed over 2,000 experiments on oxygen toxicity to support the initial use of closed circuit oxygen rebreathers. Naval divers in the early years of oxygen rebreather diving developed a mythology about a monster called "Oxygen Pete", who lurked in the bottom of the Admiralty Experimental Diving Unit "wet pot" (a water-filled hyperbaric chamber) to catch unwary divers. They called having an oxygen toxicity attack "getting a Pete".In the decade following World War II, Lambertsen et al. made further discoveries on the effects of breathing oxygen under pressure and methods of prevention. Their work on intermittent exposures for extension of oxygen tolerance and on a model for prediction of pulmonary oxygen toxicity based on pulmonary function are key documents in the development of standard operating procedures when breathing increased pressures of oxygen. 
Lambertsen's work showing the effect of carbon dioxide in decreasing time to onset of central nervous system symptoms has influenced work from current exposure guidelines to future breathing apparatus design. Retinopathy of prematurity was not observed before World War II, but with the availability of supplemental oxygen in the decade following, it rapidly became one of the principal causes of infant blindness in developed countries. By 1960 the use of oxygen had become identified as a risk factor and its administration restricted. The resulting fall in retinopathy of prematurity was accompanied by a rise in infant mortality and hypoxia-related complications. Since then, more sophisticated monitoring and diagnosis have established protocols for oxygen use which aim to balance between hypoxic conditions and problems of retinopathy of prematurity. Bronchopulmonary dysplasia was first described by Northway in 1967, who outlined the conditions that would lead to the diagnosis. This was later expanded by Bancalari and in 1988 by Shennan, who suggested the need for supplemental oxygen at 36 weeks could predict long-term outcomes. Nevertheless, Palta et al. in 1998 concluded that radiographic evidence was the most accurate predictor of long-term effects. History: Bitterman et al. in 1986 and 1995 showed that darkness and caffeine would delay the onset of changes to brain electrical activity in rats. In the years since, research on central nervous system toxicity has centred on methods of prevention and safe extension of tolerance. Sensitivity to central nervous system oxygen toxicity has been shown to be affected by factors such as circadian rhythm, drugs, age, and gender. In 1988, Hamilton et al. wrote procedures for the National Oceanic and Atmospheric Administration to establish oxygen exposure limits for habitat operations. Even today, models for the prediction of pulmonary oxygen toxicity do not explain all the results of exposure to high partial pressures of oxygen. Society and culture: Recreational scuba divers commonly breathe nitrox containing up to 40% oxygen, while technical divers use pure oxygen or nitrox containing up to 80% oxygen to accelerate decompression. Divers who breathe oxygen fractions greater than that of air (21%) need to be educated on the dangers of oxygen toxicity and how to manage the risk. To buy nitrox, a diver may be required to show evidence of relevant qualification. Since the late 1990s the recreational use of oxygen has been promoted by oxygen bars, where customers breathe oxygen through a nasal cannula. Claims have been made that this reduces stress, increases energy, and lessens the effects of hangovers and headaches, despite the lack of any scientific evidence to support them. There are also devices on sale that offer "oxygen massage" and "oxygen detoxification" with claims of removing body toxins and reducing body fat. The American Lung Association has stated "there is no evidence that oxygen at the low flow levels used in bars can be dangerous to a normal person's health", but the U.S. Center for Drug Evaluation and Research cautions that people with heart or lung disease need their supplementary oxygen carefully regulated and should not use oxygen bars. Victorian society had a fascination for the rapidly expanding field of science. In "Dr. Ox's Experiment", a short story written by Jules Verne in 1872, the eponymous doctor uses electrolysis of water to separate oxygen and hydrogen. 
He then pumps the pure oxygen throughout the town of Quiquendone, causing the normally tranquil inhabitants and their animals to become aggressive and plants to grow rapidly. An explosion of the hydrogen and oxygen in Dr Ox's factory brings his experiment to an end. Verne summarised his story by explaining that the effects of oxygen described in the tale were his own invention (they are not in any way supported by empirical evidence). There is also a brief episode of oxygen intoxication in his "From the Earth to the Moon". Sources: Clark, James M; Thom, Stephen R (2003). "Oxygen under pressure". In Brubakk, Alf O; Neuman, Tom S (eds.). Bennett and Elliott's physiology and medicine of diving (5th ed.). United States: Saunders. pp. 358–418. ISBN 978-0-7020-2571-6. OCLC 51607923. Sources: Clark, John M; Lambertsen, Christian J (1970). "Pulmonary oxygen tolerance in man and derivation of pulmonary oxygen tolerance curves". IFEM Report No. 1-70. Philadelphia, PA: Environmental Biomedical Stress Data Center, Institute for Environmental Medicine, University of Pennsylvania Medical Center. Archived from the original on 7 October 2008. Retrieved 29 April 2008. Donald, Kenneth W (1947). "Oxygen Poisoning in Man: Part I". British Medical Journal. 1 (4506): 667–72. doi:10.1136/bmj.1.4506.667. PMC 2053251. PMID 20248086. Sources: Donald, Kenneth W (1947). "Oxygen Poisoning in Man: Part II". British Medical Journal. 1 (4507): 712–17. doi:10.1136/bmj.1.4507.712. PMC 2053400. PMID 20248096. Revised version of Donald's articles also available as: Donald, Kenneth W (1992). Oxygen and the diver. UK: Harley Swan, 237 pages. ISBN 1-85421-176-5. OCLC 26894235. Hamilton, Robert W; Thalmann, Edward D (2003). "Decompression practice". In Brubakk, Alf O; Neuman, Tom S (eds.). Bennett and Elliott's physiology and medicine of diving (5th ed.). United States: Saunders. pp. 475–79. ISBN 978-0-7020-2571-6. OCLC 51607923. Lang, Michael A, ed. (2001). DAN nitrox workshop proceedings. Durham, NC: Divers Alert Network, 197 pages. Archived from the original on 16 September 2011. Retrieved 20 September 2008. Regillo, Carl D; Brown, Gary C; Flynn, Harry W (1998). Vitreoretinal Disease: The Essentials. New York: Thieme, 693 pages. ISBN 978-0-86577-761-3. OCLC 39170393. U.S. Navy Supervisor of Diving (2011). U.S. Navy Diving Manual (PDF). SS521-AG-PRO-010 0910-LP-106-0957, revision 6 with Change A entered. U.S. Naval Sea Systems Command. Archived from the original (PDF) on 10 December 2014. Retrieved 29 January 2015.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Twenty-One Card Trick** Twenty-One Card Trick: The Twenty-One Card Trick, also known as the 11th card trick or three column trick, is a simple self-working card trick that uses basic mathematics to reveal the user's selected card. Twenty-One Card Trick: The game uses a selection of 21 cards out of a standard deck. These are shuffled and the player selects one at random. The cards are then dealt out face up in three columns of 7 cards each. The player points to the column containing their card. The cards are picked up and the process is repeated three times, at which point the magician reveals the selected card. Variations: Minor aspects of the presentation are adjustable, for example the cards can be dealt either face-up or face-down. If they are dealt face-down then the spectator must look through each of the piles until finding which one contains the selected card, whereas if they are dealt face-up then an attentive spectator can immediately answer the question of which pile contains the selected card. Some performers deal the cards into face-up rows or columns instead of piles, which saves more time as all cards are partly visible. Variations: When the same method is applied to three piles of nine cards each, it is called the 27 card trick. It is identical in principle. Method: The magician begins by handing the spectator the 21-card packet and asking them to look through it and select any one card to remember. Method: The cards are then dealt into three piles one at a time, like when dealing out hands in a card game. Each time they are dealt out, after the spectator indicates which pile contains the thought of card, the magician places that pile between the other two. After the first time, the card will be one of the ones in position 8-14. When the cards are dealt out the second time, the selection will be the third, fourth, or fifth card in the pile it ends up in. In picking up the piles, the magician places this pile between the other two again. This ensures that the selection will now be one of the ones in position 10-12. Method: The third time the cards are dealt out, the selection will be the fourth card in whichever pile it ends up in. On the third deal, as soon as the spectator indicates which pile contains the selection, the magician knows that it is the fourth, or middle, card in that pile. If the magician gathers up the piles again, as before with the pile containing the selection in the middle, the selection will be the eleventh card in the 21 card packet. Method: If 27 cards are used, the procedure is the same but the selection will be the fourteenth card in the packet. Literature: Professor Hoffmann, Modern Magic ISBN 0-486-23623-4
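The position arithmetic in the Method section can be checked with a short simulation. The sketch below models the packet as the integers 0–20 and assumes the dealing and pick-up convention described above (cards dealt one at a time into three piles, with the indicated pile always gathered between the other two); the helper names and the use of integers for cards are arbitrary.

```python
# Simulation of the Twenty-One Card Trick dealing procedure described above.
# Cards are represented by the integers 0-20; the "selected" card is chosen at random.

import random

def deal_into_piles(packet, n_piles=3):
    """Deal the packet one card at a time into n_piles piles, as in a card game."""
    piles = [[] for _ in range(n_piles)]
    for i, card in enumerate(packet):
        piles[i % n_piles].append(card)
    return piles

def one_round(packet, selected):
    """Deal, find the pile holding the selection, and gather it between the other two."""
    piles = deal_into_piles(packet)
    chosen = next(p for p in piles if selected in p)
    others = [p for p in piles if p is not chosen]
    return others[0] + chosen + others[1]

if __name__ == "__main__":
    packet = list(range(21))
    random.shuffle(packet)
    selected = random.choice(packet)
    for _ in range(3):
        packet = one_round(packet, selected)
    print(packet.index(selected) + 1)  # always prints 11 under this convention
```

Extending the same code to a packet of 27 cards (three piles of nine) reproduces the fourteenth-card result mentioned for the 27 card trick.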
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Intertrochanteric line** Intertrochanteric line: The intertrochanteric line is a line upon the anterior aspect of the proximal end of the femur, extending between the lesser trochanter and the greater trochanter. It is a rough, variable ridge. Structure: The intertrochanteric line marks the boundary between the femoral neck and shaft anteriorly (whereas the intertrochanteric crest marks the same boundary posteriorly). Structure: Attachments The iliofemoral ligament — the largest ligament of the human body — attaches above the line. The lower half, less prominent than the upper half, gives origin to the upper part of the vastus medialis. The distal capsular attachment on the femur follows the shape of the irregular rim between the head and the neck. As a consequence, the capsule of the hip joint attaches in the region of the intertrochanteric line on the anterior side, but a finger away from the intertrochanteric crest on the posterior side of the head. The fibers of the ischiocapsular ligament attach both into the joint capsule and onto the intertrochanteric line. Clinical significance: Intertrochanteric fractures This area of the femur, being an important pillar for weight bearing through the skeletal system, is subject to comparatively high levels of dynamic stress, pathological strain, physiological strain and trauma. This area is prone to fractures due to high velocity trauma in the young and trivial trauma in the elderly. Fractures along this line are called intertrochanteric fractures and are classified as per the pattern of the fracture geometry. Clinical significance: After a fracture this area of bone is notorious for uniting at varying, and sometimes problematic, angles. Therefore, it typically requires early surgical reduction and fixation with early mobilization and weight bearing in order to facilitate enhanced recovery.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ResearcherID** ResearcherID: ResearcherID is an identifying system for scientific authors. The system was introduced in January 2008 by Thomson Reuters Corporation. ResearcherID: This unique identifier aims at solving the problem of author identification and correct attribution of works. In scientific and academic literature it is common to cite name, surname, and initials of the authors of an article. Sometimes, however, there are authors with the same name, with the same initials, or the journal misspells names, resulting in several spellings for the same authors, and different authors with the same spelling. ResearcherID: Researchers can use ResearcherID to claim their published works and link their unique and persistent ResearcherID number to these works for correct attribution. In this way, they can also keep their publication list up to date and online. ResearcherID: The combined use of the Digital Object Identifier with the ResearcherID allows a unique association of authors and research articles. It can be used to link researchers with registered trials or identify colleagues and collaborators in the same field of research. In April 2019, ResearcherID was integrated with Publons, a Clarivate Analytics owned platform, where researchers can track their publications, peer reviewing activity, and journal editing work. With ResearcherID now hosted on Publons, researchers can keep a more comprehensive view of their research output and contributions in one place. This is particularly important for researchers in fields that predominantly use peer-reviewed conference articles (computer science) or in fields that focus on publishing books and chapters in books (humanities and disciplines in the social sciences). ResearcherID: ResearcherID and Publons are also integrated with Web of Science and ORCID, enabling data to be exchanged between these databases. ResearcherID has been criticized for being commercial and proprietary, but also praised as "an initiative addressing the common problem of author misidentification". Overview: With the ongoing globalization of science and technology, growing numbers of researchers have moved into a wide variety of fields of study. The dilemmas they continuously face include not being able to directly link an author with the respective literature, not being up to date with specific topics within the field, and so on; as has been observed, "biomedical researchers do not possess the capacity to automatically distinguish between two researchers who happen to share the same, or similar, names". Therefore, unique identifiers for authors were introduced, which have been developing since the end of the last century. ResearcherID, one of these author identification systems, aims to offer a digital identity to each author and to build closer connections between researchers around the world. First launched in 2008 by the bibliographic database provider Thomson Reuters, ResearcherID helped researchers to create comprehensive profiles, containing research fields, key words, published literature, and connections to other researchers in the same research field. From April 2022, Publons started to move the profiles into the Web of Science so as to avoid data inconsistencies between the two platforms. Development: In 2008, Thomson Reuters started up the ResearcherID system as an addition to the Web of Science (WoS) database, now owned by Clarivate Analytics. Researchers benefited from the system's interconnection of authors and literature.
Each researcher could list personal publications in the profile for others' reference. Scholars were also able to find references by searching for a ResearcherID, the name of an author, or the literature itself. Links under names were created for more direct searches. Meanwhile, creating personal profiles helped others distinguish researchers with the same first and last names, thereby clarifying the attribution of worldwide scientific progress. Later it was recommended that ResearcherID be related to the Digital Object Identifier (DOI), so as to enhance relationships between researchers, creating larger maps connecting authors in the same research field. Though researchers might work in different fields, it became easier to associate authors with key terms and topics. The Web of Knowledge platform was connected to ResearcherID in 2011, compensating for manual mistakes between profiles and literature. With the rapid development of unique identifiers in research, a number of systems now serve the identification process, for example ORCID, Scopus, ResearcherID and ResearchGate. Missing literature or informational mistakes frequently appeared when one researcher maintained several profiles on different platforms. Thus, this combination enhanced the reliability of profiles on each platform and provided a more thorough picture of a particular researcher. In 2012, the Open Researcher and Contributor ID (ORCID) was integrated with ResearcherID to share and verify information in both systems, improving the efficiency of independent retrieval. In 2019, ResearcherID was integrated with Publons, a platform on which researchers in a field appraise and authenticate one another's work, thus highlighting the contribution of particular literature within the field and to global progress on certain subjects. Nowadays, ResearcherID is still actively used by large numbers of authors and researchers. Identifiers: ResearcherID, as a self-registered identifier, is provided whenever a researcher completes registration in the ResearcherID database. The identifier is a combination of letters and numbers, with the last four digits representing the year of registration, for example Z-0000-2022. By searching either the name of the author or the ResearcherID on the Web of Science ResearcherID website, users can find the author's present occupation, his or her publications, keywords of research fields, main topics of published literature and direct links to the information pages of the most cited publications, though full text cannot be uploaded. The ORCID link is also listed on the same page as a connection between the two systems. Identifiers: Registration for a ResearcherID is completed at www.researcherid.com, which is set up on the Web of Knowledge database. Researchers are asked whether or not to create an ORCID record when completing the registration, in order to transfer data from ResearcherID to the ORCID database. ResearcherID accounts can be used to log in to the Web of Science and EndNote. This enables researchers to arrange their own literature in different profiling systems and track their publications at any time. Owing to the integration of ORCID and ResearcherID, the Web of Science Core Collection assigns them to the Author Identifiers index, enabling researchers to access a large number of profiles and publications. Identifiers: Web of Science Core Collection: Web of Science mainly serves as a citation/abstract database, as full texts cannot be uploaded onto the platform.
Users are able to search and analyze publications as well as their citations and references. Choosing the Web of Science Core Collection enables researchers to search literature among the abstract databases. Uses: Having a unique ResearcherID can: Help researchers avoid data inconsistencies after registering on different platforms. The Web of Science synchronizes all the personal information and citation details with other solutions in the Web of Science Group, improving the accuracy of searching and analyzing and creating more distinctive results for authors' use. Help researchers keep up with worldwide trends in scientific research. The profile holds a gallery of the author's personal details, including name, geographic address, research areas, publications, associated contributors, and so on; authors can therefore take more control of their working process by synchronizing with colleagues. Help researchers establish their positions within their institutions. This mainly aims to improve collaboration between individuals and organizations, so as to set more practical and suitable research goals for the future. Help authors and investors build stronger interrelations, which motivates researchers and, to some extent, increases the chances of reaching positive research results through improved research methods and more accurate research equipment, thereby accelerating development in health, science, technology, arts and the humanities. Help students learn how to connect one piece of literature with another in the same discipline: students become used to following the papers a given work cites as well as tracking those which cite it. Help research beginners and students get access to cutting-edge studies without payment, drawing new members into exploring various research areas, building deeper connections with current experts, and discovering more innovative, talented researchers who want to make great changes in multiple disciplines. Integration and Distinction: ResearcherID and ORCID The combination of ResearcherID and ORCID helps information transfer between the two platforms, for example main research areas, published literature, and so on. Through this exchange of information, the chances of researchers' manual mistakes in profiling are reduced. Yet researchers cannot directly edit their profiles in the ResearcherID database; if edits have been made in other profiles, the ORCID platform will automatically update the old information in its database. In addition, ORCID is known for being non-profit. Thus, compared to ORCID, ResearcherID is sometimes judged to be commercial and proprietary, and not completely open to every researcher. Moreover, ResearcherID accepts literature published under the Web of Science Group products, which means that, to some extent, additional steps are needed before a researcher not registered in WoS can add a study to this platform. In comparison, ORCID has a larger group of users because it accepts various sources of publication without filtering in advance. Integration and Distinction: Because ResearcherID is proprietary and ORCID is non-proprietary, ORCID has developed to be more community driven than ResearcherID. Many authors tend not to use ResearcherID in order to avoid the connection between researchers and commercial profit. In particular, journals, books, patents and the like often make registration in ORCID, rather than ResearcherID, compulsory for authors.
In conclusion, ResearcherID plays a more supplementary role among author identifiers, but is more necessary within the Web of Science Group's products. Integration and Distinction: Nevertheless, both ResearcherID and ORCID have varied user populations, and there are benefits to having both. ResearcherID's authors are primarily distributed among the physical sciences, social sciences, arts and humanities. ORCID has its largest group in health science, but owing to its non-profit character it accepts more content types, and thus it also has a sufficient population in other science disciplines. Nevertheless, neither ResearcherID nor ORCID focuses on the mathematics field; instead, the arXiv ID mainly serves the discipline of mathematics. Integration and Distinction: ResearcherID and Scopus Scopus' users spread across most disciplines, including health science and other non-mathematics areas, and there is also a fair number of authors in the fields of science, technology, arts and humanities. Though the Web of Science does not index as many citations as Scopus does, its search results tend to be more accurate than those of Scopus. Yet data inconsistencies still exist in the Web of Science, for example in the spelling of authors' surnames and given names, or authors' names not corresponding to the correct paper. Integration and Distinction: ResearcherID and Google Scholar Google Scholar, like ResearcherID, is also a widely accepted profiling site. However, ResearcherID provides a list of bibliographic information based on authors and publications, while Google Scholar contains full papers, links to multiple access points, authors, and so on. On the other hand, the Web of Science is able to associate Google Scholar with other solutions, for example EndNote. In other words, Google Scholar covers a larger range of research studies, yet includes bibliographic problems, for example author sequence or differing paper titles. ResearcherID has relatively smaller coverage but is more accurate than Google Scholar. Inadequacy: ResearcherID has been shown to have fewer users than other author identifiers. According to an investigation in 2020, there were 172,689 profiles on the ResearcherID platform, fewer than the 657,319 on the Scopus database and the 513,236 on ResearchGate. ResearcherID was highly recommended for use, but was not selected frequently because it was not assigned automatically. On the other hand, the ORCID code was more widely accepted by international journals and publishers than ResearcherID and was in some cases mandatory for publication. The Scopus author ID is another researcher identifier, and it allocates a code directly to any author in the system. It has therefore been suggested that ResearcherID implement automatic registration. Though researchers tend to choose ResearcherID for identification less often, the system can be used to verify author sets, especially when combined with other identifiers. On one hand, ResearcherID can export records in RIS format, which was established specifically for research information systems. The format includes all the essential metadata for a given piece of literature, including its publisher, publication date, book title, and so on. In other words, the transformation consolidates the information on ResearcherID into a more systematic form, helping both scholars and non-scholars reach the information they are looking for. On the other hand, by using ResearcherID on the Publons platform, users can find the exact researcher they are looking for, as well as his or her academic collaborators.
As an interactive environment, ResearcherID makes it easier to reference literature for the research field and for global use. Inadequacy: There are also problems with registration. Since authors complete their registration through self-identification, wrong or missing data is more likely. For example, information on authors' geographic addresses is found to be missing in a number of profiles in the disciplines of social science, arts and humanities. The missing information may slow the research process, as users cannot compare specific authors with other researchers in the same region, state, country, or continent. This also reduces the connection between individual authors and other institutions. Meanwhile, it may mislead external users of the Web of Science: the information can be assigned to different categories and result in polarized judgements of authors and their literature. Beyond the issue of self-identified registration, not all of the citations uploaded to the Web of Science are counted towards the citation metrics, which affects the accuracy and reliability of this bibliographic networking platform. These citation metrics are intended to represent the overall performance of the literature and its influence in the relevant disciplines, and the eventual data and analyses may vary when authors' information is missing or not all papers are included. In addition, there are a number of empty profiles on the Web of Science, for unclear reasons, and these are still counted in statistics. It has been suggested that measures be taken regarding these profiles, so as to improve the quality of the networking sites.
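As a small illustration of the identifier format described in the Identifiers section (letters, a numeric block, and a four-digit registration year, as in Z-0000-2022), the following Java sketch checks a string against that pattern. The regular expression and class name are assumptions of ours based only on the single example quoted above; the actual service may accept identifiers of other shapes.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResearcherIdCheck {

    // Assumed pattern: one or more letters, a hyphen, a block of digits, a hyphen,
    // and a four-digit registration year, as in the example "Z-0000-2022".
    // This is an illustration only, not an official specification.
    private static final Pattern ID_PATTERN =
            Pattern.compile("^([A-Z]+)-(\\d+)-(\\d{4})$");

    static void describe(String id) {
        Matcher m = ID_PATTERN.matcher(id);
        if (m.matches()) {
            System.out.println(id + " matches the assumed format; registration year: " + m.group(3));
        } else {
            System.out.println(id + " does not match the assumed ResearcherID format");
        }
    }

    public static void main(String[] args) {
        describe("Z-0000-2022");   // the example identifier quoted in the article
        describe("AAB-1234-2019"); // a hypothetical identifier, for illustration only
    }
}
```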
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plug (horticulture)** Plug (horticulture): Plugs in horticulture are small-sized seedlings grown in seed trays filled with potting soil. This type of plug is used for commercially raising vegetables and bedding plants. Similarly, plugs may also refer to small sections of lawn grass sod. After being planted, lawn grass may somewhat spread over an adjacent area. Plug (horticulture): Plug plants are young plants raised in small, individual cells, ready to be transplanted into containers or a garden. Raising vegetable and flowering plants professionally in controlled conditions during their important formative period (the first 4–6 weeks) can help to ensure plant health and allow plants to reach their maximum potential during the harvest/blooming period. Establishing a garden using plug plants is often easier than doing so starting from seed. According to the American National Standards, a plug is a cylinder of medium in which a plant is grown. The term is generally used to describe seedlings and rooted cuttings which have been removed from the container but with the medium held intact by the roots. Overview: Planting from plugs reduces the time a crop resides in the ground, and is functional for those with limited space. Plugs can improve yields: a healthy, stocky plant will grow rapidly and symmetrically when planted out, with a potentially greater capacity to withstand pests, disease and drought. Raising some types of seedlings successfully can be difficult, so plug plants can be beneficial for less experienced gardeners. Plug plants are beneficial for gardeners who want to try a new variety or a range of varieties without purchasing numerous packets of seeds and starting the plants from seed. Plug plants are very useful if the sowing window is missed, and plugs can be purchased quickly to replace a crop which has failed. Overview: As a garden develops, interplanting (intercropping) existing crops with plug plants, ideally companion plants, can improve the productivity of the space and so maximise harvests – a sown crop may not be able to compete with established plants. Plug plants are much easier to weed than sown seedlings, and weeding will need to be done less frequently. Overview: Having semi-grown plants simplifies designing a vegetable plot or container. Because the plants have already started growing, the time needed to reach full growth is lessened. Within days of planting, signs of growth are typically visible: leaves will perk up and roots anchor into the soil. Air-pruned plugs are grown in a manner that promotes very rapid growth almost immediately after being transplanted to new soil. Overview: Plugs are sometimes used in hillside plasticulture applications, due to the ease with which they are transplanted. Plant cultivation and growth: Plug plants grow more consistently, as has been noted by the commercial scale vegetable growing industry, and more rapidly; large-scale brassica field crops are planted almost exclusively from soil block plugs in some parts of Europe, a trend which is growing in the UK. This success at the commercial scale is testament to the success of plugs in the ground. Plant cultivation and growth: It is of note that many varieties actively benefit from being transplanted as severing the taproot encourages bushier root growth. Traditionally nearly all heading brassica are sown in a separate seed bed, thinned, and the best ones planted in a prepared bed after about 6–8 weeks.
Many pests want to eat baby brassica; this, in combination with their long growing season, makes planting brassica from plugs a much easier option. Root vegetables: Root vegetables are typically, but not always, sown from seed, rather than transplanted from plugs, where they are to mature and then be thinned. The thinning action is highly beneficial in itself as it provides soil aeration at depth without disturbing adjacent root systems. The initial concentration of seedlings also dilutes damage from pests and provides some food for the gardener or the compost in the form of thinnings. Beetroot, carrots and the root brassica family – swede and turnip – will simply not reach their full potential with any check to early root growth. In addition, these seeds are typically inexpensive, and the seedlings are delicate; hence there is little value to the gardener in buying or growing them as plugs.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Java concurrency** Java concurrency: The Java programming language and the Java virtual machine (JVM) are designed to support concurrent programming. All execution takes place in the context of threads. Objects and resources can be accessed by many separate threads. Each thread has its own path of execution, but can potentially access any object in the program. The programmer must ensure read and write access to objects is properly coordinated (or "synchronized") between threads. Thread synchronization ensures that objects are modified by only one thread at a time and prevents threads from accessing partially updated objects during modification by another thread. The Java language has built-in constructs to support this coordination. Processes and threads: Most implementations of the Java virtual machine run as a single process. In the Java programming language, concurrent programming is primarily concerned with threads (also called lightweight processes). Multiple processes can only be realized with multiple JVMs. Processes and threads: Thread objects Threads share the process' resources, including memory and open files. This makes for efficient, but potentially problematic, communication. Every application has at least one thread called the main thread. The main thread has the ability to create additional threads as Runnable or Callable objects. The Callable interface is similar to Runnable in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception. Each thread can be scheduled on a different CPU core or use time-slicing on a single hardware processor, or time-slicing on many hardware processors. There is no general solution to how Java threads are mapped to native OS threads. Every JVM implementation can do this differently. Processes and threads: Each thread is associated with an instance of the class Thread. Threads can be managed either by directly using the Thread objects, or indirectly by using abstract mechanisms such as Executors or Tasks. Processes and threads: Starting a Thread There are two ways to start a Thread: provide a Runnable object, or subclass Thread; both are sketched in the example at the end of this article. Interrupts An interrupt tells a thread that it should stop what it is doing and do something else. A thread sends an interrupt by invoking interrupt() on the Thread object for the thread to be interrupted. The interrupt mechanism is implemented using an internal boolean flag known as the "interrupted status". Invoking interrupt() sets this flag. By convention, any method that exits by throwing an InterruptedException clears the interrupted status when it does so. However, it's always possible that the interrupted status will immediately be set again, by another thread invoking interrupt(). Processes and threads: Joins The java.lang.Thread#join() method allows one Thread to wait for the completion of another. Exceptions Uncaught exceptions thrown by code will terminate the thread. The main thread prints exceptions to the console, but user-created threads need a handler registered to do so. Memory model: The Java memory model describes how threads in the Java programming language interact through memory. On modern platforms, code is frequently not executed in the order it was written. It is reordered by the compiler, the processor and the memory subsystem to achieve maximum performance.
The Java programming language does not guarantee linearizability, or even sequential consistency, when reading or writing fields of shared objects, and this is to allow for compiler optimizations (such as register allocation, common subexpression elimination, and redundant read elimination) all of which work by reordering memory reads and writes. Memory model: Synchronization Threads communicate primarily by sharing access to fields and the objects that reference fields refer to. This form of communication is extremely efficient, but makes two kinds of errors possible: thread interference and memory consistency errors. The tool needed to prevent these errors is synchronization. Reorderings can come into play in incorrectly synchronized multithreaded programs, where one thread is able to observe the effects of other threads, and may be able to detect that variable accesses become visible to other threads in a different order than executed or specified in the program. Most of the time, one thread doesn't care what the other is doing. But when it does, that's what synchronization is for. To synchronize threads, Java uses monitors, which are a high-level mechanism for allowing only one thread at a time to execute a region of code protected by the monitor. The behavior of monitors is explained in terms of locks; there is a lock associated with each object. Memory model: Synchronization has several aspects. The most well-understood is mutual exclusion—only one thread can hold a monitor at once, so synchronizing on a monitor means that once one thread enters a synchronized block protected by a monitor, no other thread can enter a block protected by that monitor until the first thread exits the synchronized block.But there is more to synchronization than mutual exclusion. Synchronization ensures that memory writes by a thread before or during a synchronized block are made visible in a predictable manner to other threads which synchronize on the same monitor. After we exit a synchronized block, we release the monitor, which has the effect of flushing the cache to main memory, so that writes made by this thread can be visible to other threads. Before we can enter a synchronized block, we acquire the monitor, which has the effect of invalidating the local processor cache so that variables will be reloaded from main memory. We will then be able to see all of the writes made visible by the previous release. Memory model: Reads and writes to fields are linearizable if either the field is volatile, or the field is protected by a unique lock which is acquired by all readers and writers. Memory model: Locks and synchronized blocks A thread can achieve mutual exclusion either by entering a synchronized block or method, which acquires an implicit lock, or by acquiring an explicit lock (such as the ReentrantLock from the java.util.concurrent.locks package). Both approaches have the same implications for memory behavior. If all accesses to a particular field are protected by the same lock, then reads and writes to that field are linearizable (atomic). Memory model: Volatile fields When applied to a field, the Java volatile keyword guarantees that: (In all versions of Java) There is a global ordering on the reads and writes to a volatile variable. This implies that every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value.
(However, there is no guarantee about the relative ordering of volatile reads and writes with regular reads and writes, meaning that it's generally not a useful threading construct.) (In Java 5 or later) Volatile reads and writes establish a happens-before relationship, much like acquiring and releasing a mutex. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement. Volatile fields are linearizable. Reading a volatile field is like acquiring a lock: the working memory is invalidated and the volatile field's current value is reread from memory. Writing a volatile field is like releasing a lock: the volatile field is immediately written back to memory. Memory model: Final fields A field declared to be final cannot be modified once it has been initialized. An object's final fields are initialized in its constructor. As long as the this reference is not released from the constructor before the constructor returns, then the correct value of any final fields will be visible to other threads without synchronization. History: Since JDK 1.2, Java has included a standard set of collection classes, the Java collections framework. Doug Lea, who also participated in the Java collections framework implementation, developed a concurrency package, comprising several concurrency primitives and a large battery of collection-related classes. This work was continued and updated as part of JSR 166, which was chaired by Doug Lea. JDK 5.0 incorporated many additions and clarifications to the Java concurrency model. The concurrency APIs developed by JSR 166 were also included as part of the JDK for the first time. JSR 133 provided support for well-defined atomic operations in a multithreaded/multiprocessor environment. Both the Java SE 6 and Java SE 7 releases introduced updated versions of the JSR 166 APIs as well as several new additional APIs.
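The following self-contained sketch illustrates several of the ideas above: it starts one thread by subclassing Thread and another by passing a Runnable, uses join() to wait for both, and protects a shared counter with synchronized methods so that the two threads cannot interfere with each other's updates. The class names are invented for this example; only standard java.lang APIs are used.

```java
public class ConcurrencyDemo {

    // Shared mutable state; all access goes through synchronized methods,
    // so every read and write is protected by the same implicit lock (this).
    static class Counter {
        private int value = 0;
        synchronized void increment() { value++; }
        synchronized int get() { return value; }
    }

    // Way 1: subclass Thread and override run().
    static class Worker extends Thread {
        private final Counter counter;
        Worker(Counter counter) { this.counter = counter; }
        @Override public void run() {
            for (int i = 0; i < 100_000; i++) counter.increment();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();

        Thread t1 = new Worker(counter);           // subclassing Thread
        Thread t2 = new Thread(() -> {             // Way 2: pass a Runnable
            for (int i = 0; i < 100_000; i++) counter.increment();
        });

        t1.start();
        t2.start();
        t1.join();  // wait for both threads to complete before reading the result
        t2.join();

        System.out.println("Final count: " + counter.get());
    }
}
```

Removing the synchronized keyword from Counter typically makes the final count fall short of 200000 because of lost updates, which is exactly the kind of thread interference the synchronization constructs described above are designed to prevent.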
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Polysaccharide–protein conjugate** Polysaccharide–protein conjugate: Polysaccharide–protein conjugates may have better solubility and stability, reduced immunogenicity, prolonged circulation time, and enhanced targeting ability compared to the native protein. They are promising alternatives to PEG–protein drugs, in which non-biodegradable high molecular weight PEG causes health concerns. Synthetic methods: Reductive amination, the Maillard reaction, EDC/NHS coupling, DMTMM coupling, disulfide bond formation, and click chemistry are common methods to synthesize polysaccharide–protein conjugates. Applications: Polysaccharide–protein conjugates are used in the food industry, in vaccines, and in drug delivery systems.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ecology and Evolutionary Biology** Ecology and Evolutionary Biology: Ecology and evolutionary biology is an interdisciplinary field of study concerning interactions between organisms and their ever-changing environment, including perspectives from both evolutionary biology and ecology. This field of study includes topics such as the way organisms respond and evolve, as well as the relationships among animals, plants, and micro-organisms, when their habitats change. Ecology and evolutionary biology is a broad field of study that covers a wide range of ages and scales, which can also help us to comprehend human impacts on the global ecosystem and find measures to achieve more sustainable development. Examples of current research topics: Birdsong There is a substantial body of acoustic research on birds. Birds learn to sing in specific patterns because birdsong conveys information to select partners, which is a result of evolution. However, this evolution is also affected by ecological factors. Research with recorded birdsong of male white-crowned sparrows from different regions found that the birdsongs from the same location have the same traits, while birdsongs from different locations are more likely to have different song types. Birdsongs from areas with dense vegetation tend to only have slow trilling sounds and low frequencies, while birdsongs from more open areas have fast trilling sounds and higher frequencies. This is probably due to differences in the propagation of sound through vegetation. Low frequencies can be heard from further away when going through dense vegetation than high frequencies. For that reason it would be an advantage for birds that live in dense vegetation to sing at lower frequencies. That way, their songs can still be heard by competitors and potential mates far away. Examples of current research topics: Something similar was found in birds living on a mountain. The birds who lived higher up were singing at higher frequencies. This was probably due to the higher parts of the mountain being colder and therefore having fewer other species living there. Other animals also make sounds with which the birds would have to compete, so when there are fewer species, there are fewer high-frequency sounds to compete with. Examples of current research topics: Snail colour The colour and ornamentation of the snails' shells are almost entirely determined by their genes. One kind of land snail, Cepaea nemoralis, which is very common in Europe, has been studied and found to have a few different colours and differing numbers of dark bands on their shells. In a large citizen science project 'the Evolution Mega-Lab', citizens of many different countries throughout Europe collected snails and counted how many snails of a certain colour/band pattern were present in a certain habitat. Examples of current research topics: Some colours can be seen better by birds, which is one way in which the best camouflaged snails are selected for. This also depends on the habitat in which the snails live. For instance, yellow snails living in the dunes are better camouflaged than brown snails. Another reason that one colour of shell might be better in a certain habitat is because of the temperature. It was found that darker shells absorb more heat, which can be a risk for overheating of the snail in certain habitats like dunes. In those places lighter coloured snails were found more often. Urban evolution: With fast growing cities and high rates of urbanization a whole new kind of environment has emerged.
The urban ecosystem is a place of extremes and makes for fast evolution. Higher rates of phenotypic change have been observed in urban areas compared to natural and nonurban anthropogenic systems. A field of study has emerged regarding urban evolution in which the adaptations of animals and plants to urban environments are studied. Urban evolution: In tropical regions a certain species of lizard, Anolis cristatellus, lives in both urban and natural areas. These lizards climb on tree trunks, fences and the walls of buildings. In urban areas more slippery and smooth surfaces are found than in natural areas. This creates a higher risk of falling and dying. The lizards in cities were found to have adapted to these slippery surfaces, by developing longer limbs and more lamellae under their feet that help them to run safely on these smooth surfaces. One of the differences between urban areas and natural areas is anthropogenic noise, such as traffic noise. The frequencies of these sounds overlap partly with the frequencies of bird songs. In cities, birds started to sing at higher frequencies than they do in natural areas, in order to still be heard by their conspecifics. Their songs were also found to be shorter. This is a way in which the birds adapt to the new urban environment. Urban evolution: An example of urban evolution in plants was found in Crepis sancta. This plant makes seeds with pappus that can travel with the wind, for seed dispersal. In urban environments green patches are very rare and are also often very small and far apart. Due to this, the chances of the seeds landing on asphalt or stone and not being able to sprout are much higher than in open fields. Crepis sancta makes both light seeds with pappus and heavier seeds without pappus. In the city the plants were found to make more heavy seeds than the plants in nonurban areas. This makes sense from an evolutionary perspective since heavy seeds fall very close to the mother-plant, probably in the same green patch, and therefore have a higher chance of sprouting. Urban evolution: Another characteristic of urban areas is light pollution. One of the well known consequences of light pollution is the attraction of insects. Before the presence of human light, the only source of light at night was the moon. Insects fly at a fixed angle to the moon to be able to fly in a straight line. Our light sources, however, are very close by. So if an insect flies at a fixed angle relative to a street light, for instance, it starts flying in circles and eventually ends up circling the street light, which reduces its chances of finding food and a mating partner. Urban moths were found to have a reduced attraction to light sources, which directly benefits their chances of survival and mating, since they waste less time close to a light source.
Degrees in North America: Some North American universities are home to degree programs titled Ecology and Evolutionary Biology, offering integrated studies in the disciplines of ecology and evolutionary biology. The wording is intended as representing the alternative approach from the frequently used pairing of Cell and Molecular Biology, while being more inclusive than the terminology of Botany or Zoology. Recently, due to advances in the fields of genetics and molecular biology, research and education in ecology and evolutionary biology has integrated many molecular techniques.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Subglacial stream** Subglacial stream: Subglacial streams are conduits of glacial meltwater that flow at the base of glaciers and ice caps. Meltwater from the glacial surface travels downward throughout the glacier, forming an englacial drainage system consisting of a network of passages that eventually reach the bedrock below, where they form subglacial streams. Subglacial streams form a system of tunnels and interlinked cavities and conduits, with water flowing under extreme pressures from the ice above; as a result, flow direction is determined by the pressure gradient from the ice and the topography of the bed rather than gravity. Subglacial streams form a dynamic system that is responsive to changing conditions, and the system can change significantly in response to seasonal variation in meltwater and temperature. Water from subglacial streams is routed towards the glacial terminus, where it exits the glacier. Discharge from subglacial streams can have a significant impact on local, and in some cases global, environmental and geological conditions. Sediments, nutrients, and organic matter contained in the meltwater can all influence downstream and marine conditions. Climate change may have a significant impact on subglacial stream systems, increasing the volume of meltwater entering subglacial drainage systems and influencing their hydrology. Formation: Subglacial streams derive their water from two sources: meltwater transported from the top of the glacier and meltwater from the glacial bed. When temperatures are high enough to induce melting on the surface of the glacier, typically during summer, water flows down into the glacier. Surface meltwater flows downward through millimeter-sized channels that join together in a network of tributaries, growing in size until reaching the bedrock. Additionally, some water is transported downward from the surface by moulins (large, vertical shafts up to ten meters wide that range from the surface to a lower elevation, sometimes all the way to the glacial bed). Fractures, crevasses, and cavities between glaciers and valley walls can also provide pathways for water to reach the bed. While surface meltwater can be seasonally dependent, the beds of temperate glaciers are maintained at the pressure melting point (the combination of temperature and pressure at which ice melts). This liquid water at the bed—present in temperate but not polar glaciers—provides a constant input of water to subglacial stream systems. Water from these two sources meets and is concentrated at the bedrock base of the glacier, where pressure from the ice above forces it to move towards the glacial terminus, creating a network of passageways as it works its way out of the glacier. Hydrology: Direction of Streams Water in subglacial streams is subject to large amounts of pressure from the mass of ice above; as a result, the direction of water flow cannot be explained in the same way as for typical surface streams. Subglacial water flow is, to a large extent, determined by pressure gradients created by the weight and movement of the glacier. As a result, instead of following the slope of the bed, streams can flow up and across slopes. This behavior can be described by viewing the pressure inside glaciers in terms of equipotential surfaces; as the water is pushed from areas of high pressure to areas of low pressure, it travels in a direction normal to these surfaces.
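The pressure-gradient control on flow direction described above is commonly made quantitative with a subglacial hydraulic potential, a standard formulation often attributed to Shreve. The symbols below are our own, and the approximation of water pressure by the ice overburden pressure is an assumption added here for illustration, not something stated in this article.

```latex
% Hydraulic potential driving subglacial water flow (a sketch, not from the text):
%   \phi             hydraulic potential
%   \rho_w, \rho_i   densities of water and ice
%   g                gravitational acceleration
%   z_b, z_s         elevations of the glacier bed and the ice surface
%   P_w              subglacial water pressure
\[
  \phi = \rho_w g z_b + P_w,
  \qquad
  P_w \approx \rho_i g \,(z_s - z_b)
\]
% Water is driven down the gradient of \phi, i.e. normal to surfaces of constant
% \phi, which is why subglacial streams can run up or across bed slopes whenever
% the ice-surface term dominates the bed-topography term.
```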
Hydrology: Stream Systems Subglacial stream systems can be placed in two categories based upon the arrangement and type of passages that make up the system: channelized and distributed. Hydrology: Channelized Channelized drainage systems are characterized by water flowing predominantly through tunnels along the bed of the glacier that take meltwater rapidly and directly to the glacial terminus. These tunnels are arranged in a network of tributaries, joining together and growing in size as they near the terminus. Water is fast-moving in these systems, and pressure inside the channels is relatively low compared to pressure in the ice around them. Turbulence in the rapid flow produces heat, which is able to melt the ice walls of the tunnels. While the total water added to the system by this process is insignificant compared to water from the surface and from basal melting, the melting of the channel walls allows the channel to remain open even when the ice pressures surrounding it are much greater than the pressure of the water inside. The constant erosion of the tunnel walls is able to offset the narrowing of the tunnel caused by deformation of the ice. Depending on the water supply and the characteristics of the bed, the tunnels can take different forms, including semicircular tunnels cutting into the ice, broad and low tunnels, and tunnels that cut into the bed rather than the ice. Broad and low tunnels form in channels with variable amounts of meltwater, as melting is concentrated on the tunnel walls rather than the ceiling when the tunnel is not completely full of water. Channels that maintain long-term stability in water flow and location can erode the bedrock over time, resulting in tunnels that cut into the bed rather than the ice above. Hydrology: Distributed Distributed drainage systems can consist of a network of linked cavities, porous flow and canals in the sediment, and a thin film between the ice and bed. Films of water between the ice and bedrock are rarely thicker than tens of micrometers, and form in areas that are isolated from channels and cavities, maintained at the pressure melting point, and above an impermeable bed. Flow in films does not account for a large amount of the total meltwater flux out of the glacier, but may be important in the sliding movement of glaciers. In cases in which glaciers are above porous, unconsolidated sediment, some water can flow through the sediment; like film flow, porous flow does not account for much of the water flux in the system. When the bed is deformable, wide, shallow canals up to 10 cm in width can form in the surface of the sediment, topped by the glacial ice. In glaciers with steep slopes, canal systems are unstable, as they can be easily absorbed by channels above the sediment. Hydrology: As glaciers move over bumps in the bedrock, differences in pressure can separate the ice from the bed behind the bump if the glacier is moving fast enough. This creates cavities between the glacier and the bed, which fill with water. If the water pressure is high enough, the cavity expands, and the water can cause more separation between the ice and bed surrounding the cavity. With sustained water supply, small passages form between cavities, creating a large network of linked cavities which water flows between. Water in linked cavity systems flows, on average, in a direction normal to the equipotential surfaces of pressure in the glacier.
However, the path taken is long and indirect, and at times water can be flowing nearly parallel to the equipotential surfaces. Hydrology: Seasonal Variability The structure of subglacial stream systems changes significantly over time as a result of seasonal changes in the volume and source of meltwater input. During the winter, subglacial stream systems are dominated by distributed streams. As there is very little surface melting during this season, nearly all meltwater is derived from basal melting and the release of stored meltwater. Both of these sources involve small amounts of water released relatively uniformly throughout the bed of the glacier, making them unlikely to form large drainage channels. Some major tunnels remain in the system year-round, and are the main points of discharge during the winter, but the system at large is characterized by distributed drainage. Hydrology: As temperatures rise and surface melting increases water flux to the bed in late spring, the winter stream system is disrupted. Distributed flow channels, lacking the capacity for increased volumes of meltwater, experience a rise in water pressure and are destabilized. High water pressures lead to the formation of larger tunnels—a process known as channelization—that have a greater capacity for meltwater and allow for pressures to fall. This change can happen gradually or can be triggered by events that rapidly increase meltwater flow, such as consecutive days of high melting or a large rainstorm. The now-channelized system grows in extent throughout the summer as meltwater input continues to increase, with new passages forming and growing in size. In autumn, surface melting decreases, and the volume of meltwater is no longer sufficient to maintain the newly formed channels; deformation of the surrounding ice slowly closes channels that do not generate enough frictional melting along their walls to offset the closure. Eventually, a distributed stream system again becomes dominant. Some perennial channels remain throughout the winter season, but the channels formed in spring disappear—when new tunnels form again the next year, they do not form in the same locations as the ones that closed. Impact on Glacial Systems: Submarine Glacial Melt The discharge of subglacial stream systems of marine-terminating glaciers into the ocean has a significant impact on the volume and distribution of glacial melt at the terminus. The discharge of glacial streams into the ocean emerges as plumes that travel up to the ocean surface along the face of the glacier, which can serve as heat sources for glacial melt. Ice melt due to discharge plumes has a significant impact in areas in which discharge rates exceed 100 m³/s; with lesser discharge rates, plume-associated heat is insignificant compared to the effects of ocean mixing. Impact on Glacial Systems: Seasonal variability plays an important role in the way that subglacial streams influence glacial melting. During the summer, subglacial stream output is much greater, resulting in plumes that are larger, faster, and more buoyant than during the winter. In addition to the greater volume of discharge increasing glacial melt, the increased buoyancy of the plume results in more turbulence and, consequently, more heat transfer to the glacier, further increasing melt.
Impact on Glacial Systems: The effect subglacial stream discharge has on glacial melt is also influenced by the type of subglacial drainage system; distributed subglacial streams result in an output of meltwater uniformly across the grounding line (where the glacier transitions from grounded to floating ice), whereas channelized drainage results in individual, large outlets. Distributed discharge results in glacial melt volumes up to five times greater than that of channelized drainage, as individual strong plumes of meltwater are not as capable of inducing widespread melting as a much greater number of smaller outputs. Impact on Glacial Systems: Glacial Motion In temperate glaciers, which are characterized by the presence of liquid water at their base and are able to slide, subglacial streams have a significant impact on glacial movement. The water pressure and friction experienced at the base of a glacier depends in part on whether the subglacial hydrological system is channelized or distributed. Channelized systems are an efficient form of drainage as they are able to rapidly move water out of the glacier, reducing water pressure in the system. By decreasing water pressure underneath the glacier, friction between the glacier ice and the bedrock below increases, slowing the movement of the glacier. Distributed flow systems, contrastingly, are characterized by slow-moving water in small cavities and passages; when water flux into the system increases, such as during periods of high melt, the system is unable to compensate, resulting in large increases in basal water pressure. As a result, friction between the glacier and the bed is reduced, and glacial sliding speed increases. Impact on Glacial Systems: Glacial motion can also cause changes in subglacial stream systems, and there are feedbacks present between the two. As subglacial water pressure increases, the speed of glacial sliding increases. The glacier encounters bumps in the bedrock as it slides: as a result, cavities are created between the ice and the bed. The glacier encounters more bumps due to its higher speed and, since ice moving at a higher speed is less able to maintain connection with the bedrock, faster moving glaciers are more likely to form cavities when passing over bumps. This increases the subglacial space which can be filled with water, decreasing basal water pressure. The interaction between glacial motion and subglacial hydrology creates a negative feedback loop, in which increased water pressure below the glacier increases glacial sliding speed, which in turn decreases pressure and, consequently, sliding speed. Through this mechanism, the effects of speedup events can decay over time. Impact on Glacial Systems: Another control on glacial sliding speed is the process of channelization. Sustained high levels of meltwater input result in a shift from a distributed network of subglacial streams to a more channelized system as larger passages through the ice develop. As larger channels are able to more efficiently remove water from the subglacial system, water pressure decreases, increasing friction between the glacier and the bedrock and decreasing sliding speed. Channelization is the most significant process in terminating speedup events, and is responsible for the slowdown in glacial speed at the end of summer following the speedup commonly observed as meltwater flow increases in spring. 
Material Transport: Nutrients and Organic Matter Subglacial streams carry a significant amount of organic matter and nutrients, originating both from supraglacial meltwater and subglacial processes. Meltwater from supraglacial environments containing microbially-produced dissolved organic carbon, or DOC, flows into glaciers, eventually reaching subglacial stream systems, which carry the organic matter out of the glacier. This source of DOC is supplemented by organic matter produced within subglacial ecosystems, where there are diverse microbial communities. Though the concentration of dissolved organic matter in glacial meltwater is low, the sheer amount of freshwater discharge from glaciers makes glacially-sourced DOC an important source of bioavailable carbon to marine ecosystems. In the Gulf of Alaska alone, glacial runoff provides 0.13 Tg of organic carbon per year, much of which travels through subglacial streams. Material Transport: Subglacial streams also transport various other important nutrients. Geological processes, including the grinding of glaciers on the bedrock below and water-rock interaction, ensure that minerals are continuously fed into the subglacial system. Iron transported by subglacial streams, for example, is mostly sourced from subglacial weathering, and may be responsible for an Fe flux large enough to significantly influence global ocean chemistry over geological timescales. Biological processes also provide nutrients to subglacial streams, with nitrification and denitrification by microbes affecting downstream communities during periods of melt. Material Transport: Sediment Subglacial streams can transport, deposit, and remove sediment from the glacier bed; this process is influenced by water supply and the amount and characteristics of the available sediment. The size of sediment particles, the slope of the subglacial stream’s channel, and the roughness of the bed all contribute to whether sediment is mobilized or deposited. Subglacial flooding events can result in significant erosion and sediment transport, and studies modeling subglacial channels suggest that seasonal meltwater flow alone can erode bedrock and transport sediment as large as boulders. Contrastingly, when water pressure is low, such as at the end of a melt season, sediment is deposited. Material Transport: When sediment supply is high enough, the sediment deposition can form an esker: an elongated ridge of sediment that fills the channel of the subglacial stream in which it forms. These eskers can be temporary, lasting only until increasing water pressure during the next melt season flushes out the sediment, or they could be permanent. The permanent formation of eskers is more common in retreating glaciers and ice sheets, as their termini are thinning, which favors the deposition of sediment. Advancing glaciers and ice sheets exhibit steepening termini, which increases shear stresses and, consequently, water pressure, which favors the flushing of deposited sediment out of stream channels. Climate Change: Anthropogenic climate change is likely to cause significant changes in subglacial stream systems. As glacial melting increases as a result of rising global temperatures, water flux into and discharge from subglacial streams increases as well. Greater water input from surface melting may affect the hydrology of subglacial systems, changing the timing of seasonal variations. 
As a result of climate change-induced increases in meltwater, greater volumes of water are likely to reach the bed earlier in the year. This would cause the transition from winter distributed subglacial drainage to summer channelized streams to occur earlier in the year. Glacial motion could also be affected: since glaciers dominated by channelized systems have lesser sliding speeds, the earlier transition to this system could result in slower moving glaciers. However, short-term fluctuations in meltwater volume and pressure, which may become more intense as runoff increases, could offset this decrease in sliding by causing localized speedups. Increases in the volume of discharge from subglacial streams are likely to increase the melting of marine-terminating glaciers, as submarine melt rates are highly sensitive to the amount of subglacial discharge.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Discourse grammar** Discourse grammar: Discourse Grammar (DG) is a grammatical framework that grew out of the analysis of spoken and written linguistic discourse on the one hand, and of work on parenthetical expressions, including Simon C. Dik's study of extra-clausal constituents, on the other. Initiated by Gunther Kaltenböck, Bernd Heine and Tania Kuteva, the framework is based on the distinction between two organizing principles of grammar where one concerns the structure of sentences and the other the linguistic organization beyond the sentence. Discourse grammar: In accordance with the perspective adopted in this framework, linguistic units such as formulae of social exchange, interjections, discourse markers and other prefabricated expressions, which tend to be assigned a more marginal status in many models of mainstream linguistics, are interpreted as playing an important role in structuring linguistic discourse. Influences: Work on Discourse Grammar (DG) has been inspired by a number of different works, in particular by Simon C. Dik's theory of Functional Grammar according to which linguistic discourse is composed of two different kinds of linguistic material, referred to, respectively, as clausal and extra-clausal constituents. On the other hand, it has benefitted greatly from research on the nature of parenthetical categories and the concept of supplements. Principles: DG is composed of all the linguistic resources that are available for designing texts, irrespective of whether these are spoken or written (or signed) texts. It is viewed both as an activity, a real-time interactional tool, and a knowledge store consisting of a set of conventional linguistic units plus their combinatorial potential. An elementary distinction between two main domains of speech processing is made, referred to as Sentence Grammar and Thetical Grammar. Sentence Grammar is organized in terms of propositional concepts and clauses and their combination. It has been the only, or the main, subject of mainstream theories of linguistics. The concern of Thetical Grammar is with theticals, that is, with linguistic discourse units beyond the sentence, being syntactically, semantically, and typically also prosodically detached from expressions of Sentence Grammar. These units include what is traditionally referred to as parenthetical constructions but are not restricted to them. The main categories of Thetical Grammar are conceptual theticals (including comment clauses, discourse markers, etc.) as well as various other extra-clausal categories such as vocatives, formulae of social exchange, and interjections. While being separate in principle, the two domains interact in multiple ways in shaping linguistic discourse. The main way of interaction is via cooptation, an operation whereby chunks of Sentence Grammar such as clauses, phrases, words, or any other units are deployed for use in Thetical Grammar. Application: Being a relatively young framework, DG has so far found only limited applications. Work has focused mainly on comment clauses, discourse markers, final particles, and insubordination. Furthermore, DG as a descriptive tool has for the most part been restricted to the study of English. Analysis within this framework is now being extended to non-European languages. More detailed research has been carried out already on Akie, a traditional hunter-gatherer language of the Nilotic family spoken in north-central Tanzania.
A grammar of this language based on DG has been published, the use of theticals in the organization of texts has been studied, and institutional frames surfacing from the analysis of Akie texts have been identified using Thetical Grammar as a basis.In another line of research, DG has been extended to the study of language contact. As the work on the discourse in bilingual situations has shown, theticals play an important role both in code-switching and borrowing. Furthermore, there is reason to assume that the distinction between Sentence Grammar and Thetical Grammar may shed new light on the question of how human language or languages evolved.Finally, a considerable part of the research is devoted to the question of whether the distinction between the two domains is reflected in neural activity. As this research suggests, there appears to be a corresponding distinction in brain lateralization, in that Sentence Grammar correlates primarily with left-hemisphere activity whereas Thetical grammar appears to be more strongly associated with right-hemisphere activation. Related work: That discourse organization operates simultaneously in two different dimensions has also been pointed out in a number of other research traditions. Thus, a distinction akin to that between Sentence Grammar and Thetical Grammar is also made in some psycholinguistic studies on comprehension where a contrast between propositional representation and discourse model is made, and in neurolinguistic discourse analysis there is a related distinction between referential and modalizing speech. In other frameworks, specific manifestations of the distinction are highlighted, such as that between microgrammar and macrogrammar, or between an analytic and a holistic mode of processing, or between conceptual and procedural meaning in the theory of Relevance Grammar.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Apoplexy** Apoplexy: Apoplexy (from Ancient Greek ἀποπληξία (apoplexia) 'a striking away') is rupture of an internal organ and the accompanying symptoms. The term formerly referred to what is now called a hemorrhagic stroke, which is the result of a ruptured blood vessel in the brain. Today health care professionals do not use the term, but instead specify the anatomic location of the bleeding, such as cerebral, ovarian or pituitary. Informally or metaphorically, the term apoplexy is associated with being furious, especially as "apoplectic". Historical meaning: From the late 14th to the late 19th century, apoplexy referred to any sudden death that began with a sudden loss of consciousness, especially one in which the victim died within a matter of seconds after losing consciousness. The word apoplexy was sometimes used to refer to the symptom of sudden loss of consciousness immediately preceding death. Ruptured aortic aneurysms, and even heart attacks and strokes, were referred to as apoplexy in the past, because before the advent of modern medical science there was limited ability to differentiate abnormal conditions and diseased states. Although physiology as a medical field dates back at least to the time of Hippocrates, until the late 19th century physicians often had inadequate or inaccurate understandings of many of the human body's normal functions and abnormal presentations. Hence, identifying a specific cause of a symptom or of death often proved difficult or impossible. Hemorrhage: Because the term by itself is now ambiguous, it is often coupled with a descriptive adjective to indicate the site of bleeding. For example, bleeding within the pituitary gland is called pituitary apoplexy, and bleeding within the adrenal glands can be called adrenal apoplexy. Apoplexy also includes hemorrhaging within the gland and accompanying neurological problems such as confusion, headache, and impairment of consciousness.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Retained placenta** Retained placenta: Retained placenta is a condition in which all or part of the placenta or membranes remain in the uterus during the third stage of labour. Retained placenta can be broadly divided into: failed separation of the placenta from the uterine lining placenta separated from the uterine lining but retained within the uterusA retained placenta is commonly a cause of postpartum haemorrhage, both primary and secondary.Retained placenta is generally defined as a placenta that has not undergone placental expulsion within 30 minutes of the baby’s birth where the third stage of labor has been managed actively. Signs and symptoms: Risks of retained placenta include hemorrhage and infection. After the placenta is delivered, the uterus should contract down to close off all the blood vessels inside the uterus. If the placenta only partially separates, the uterus cannot contract properly, so the blood vessels inside will continue to bleed. A retained placenta thereby leads to hemorrhage. Management: Drugs, such as intraumbilical or intravenous oxytocin, are often used in the management of placental retention. It is useful ensuring the bladder is empty. However, ergometrine should not be given as it causes tonic uterine contractions which may delay placental expulsion. Controlled cord traction has been recommended as a second alternative after more than 30 minutes have passed after stimulation of uterine contractions, provided the uterus is contracted. Manual extraction may be required if cord traction also fails, or if heavy ongoing bleeding occurs. There is currently uncertainty about the effectiveness of anaesthesia or analgesia for manual extraction, in terms of pain and the risk of postpartum haemorrhage. Very rarely a curettage is necessary to ensure that no remnants of the placenta remain (in rare conditions with very adherent placenta such as a placenta accreta). Management: However, in birth centers and attended home birth environments, it is common for licensed care providers to wait for the placenta's birth up to 2 hours in some instances. Other animals: Retention of fetal membranes (afterbirth) is observed more frequently in cattle than in other animals. In a normal condition, a cow’s placenta is expelled within a 12-hour period after calving.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spot delivery** Spot delivery: Spot delivery (or spot financing) is a term used in the automobile industry that means delivering a vehicle to a buyer prior to financing on the vehicle being completed. Spot delivery is used by dealerships on the weekend or after bank hours to be able to deliver a vehicle when a final approval cannot be received from a bank. This method of delivery is regulated by many states in the U.S., and is sometimes referred to as a "Yo-Yo sale" or "Yo-Yo Financing."During a spot delivery, many consumers believe that the deal is final when in fact it is not. Signed agreements allow the dealership the right to take the car back or renegotiate the agreement if it cannot obtain financing within a specific amount of time.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Common Electrical I/O** Common Electrical I/O: The Common Electrical I/O (CEI) refers to a series of influential Interoperability Agreements (IAs) that have been published by the Optical Internetworking Forum (OIF). CEI defines the electrical and jitter requirements for 3.125, 6, 11, 25-28, and 56 Gbit/s electrical interfaces. CEI, the Common Electrical I/O: The Common Electrical I/O (CEI) Interoperability Agreement published by the OIF defines the electrical and jitter requirements for 3.125, 6, 11, 25-28, and 56 Gbit/s SerDes interfaces. This CEI specification has defined SerDes interfaces for the industry since 2004, and it has been highly influential. The development of electrical interfaces at the OIF began with SPI-3 in 2000, and the first differential interface was published in 2003. The seventh generation electrical interface, CEI-56G, defines five reaches of 56 Gbit/s interfaces. The OIF completed work on its eighth generation through its CEI-112G project. The OIF has launched its ninth generation with its CEI-224G project. CEI has influenced or has been adopted or adapted in many other serial interface standards by many different standards organizations over its long lifetime. SerDes interfaces have been developed based on CEI for most ASIC and FPGA products. CEI direct predecessors: Throughout the 2000s, the OIF produced an important series of interfaces that influenced the development of multiple generations of devices. Beginning with the donation of the PL-3 interface by PMC-Sierra in 2000, the OIF produced the System Packet Interface (SPI) family of packet interfaces. SPI-3 and SPI-4.2 defined two generations of devices before they were supplanted by the closely related Interlaken standard in the SPI-5 generation in 2006. CEI direct predecessors: The OIF also defined the SerDes Framer Interface (SFI) family of specifications in parallel with SPI. As a part of the SPI-5 and SFI-5 development, a common electrical interface was developed termed SxI-5. SxI-5 abstracted the electrical I/O interface away from the individual SPI and SFI documents. This abstraction laid the groundwork for the highly successful CEI family of Interoperability Agreements and was incorporated in the original release of CEI 1.0 a generation later. Generations of OIF Electrical Interfaces: Two earlier generations in this development path were defined by some of the same individuals at the ATM Forum in 1994 and 1995. These specifications were called UTOPIA Level 1 and 2. These operated at 25 Mbit/s (0.025 Gbit/s) and 50 Mbit/s per wire single ended and were used in OC-3 (155 Mbit/s) applications. PL-3 was a packet extension of the cells carried by those earlier interfaces. Public demonstrations: Compliant implementations to the draft CEI-56G IAs were demonstrated in the OIF booth at the Optical Fiber Conference in 2015, 2016 and 2017.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Metabolic trapping** Metabolic trapping: Metabolic trapping refers to a localization mechanism of synthesized radiocompounds in the human body. It can be defined as the intracellular accumulation of a radioactive tracer based on the relative metabolic activity of the body's tissues. It is a basic principle of the design of radiopharmaceuticals as metabolic probes for functional studies or tumor location.Metabolic trapping is the mechanism underlying the (PET) scan, an effective tool for detecting tumors, as there is a greater uptake of the target molecule by tumor tissue than by normal tissue. Metabolic trapping: In order to use it as a diagnostic tool in medicine, scientists have studied the trapping of radioactive molecules within different tissues throughout the body. In 1978, Gallagher et al. studied glucose tagged with Fluorine-18 (F-18) to see how it metabolized in the tissues of different organs. This group studied how long it took the lungs, liver, kidneys, heart, and brain to metabolize radioactive glucose. They found the molecule distributed uniformly, and then, after two hours, only the heart and the brain had significant levels of radioactivity from the F-18 due to metabolic trapping. This trapping occurred because once the glucose was pulled into the cells, the glucose was phosphorylated to cause the concentration of glucose in the cell to appear lower than it is, which then promotes the transport of more glucose. This phosphorylation of the radioactive glucose caused the metabolic trapping in the heart and the brain. The lungs, liver, and kidneys did not experience metabolic trapping, and the radioactive glucose that was not trapped was excreted in the urine. F-18 radiolabeled glucose did not get collected by the kidneys and cycled back into the system, as it would do for normal glucose. This suggests that the active transporter requires the hydroxyl (-OH) group found on the C-2 position of the sugar, where the F-18 atom was placed. Without the active transport, the radiolabeled glucose that was not trapped was then excreted as waste instead of being phosphorylated in the cell.A 2001 study of metabolic trapping used choline derivatives, which were synthesized using F-18, to label prostate cancer. The experiments were conducted first in mice and then in human patients. Choline (CH) and choline radiolabeled with F-18 (FCH) were both found to primarily migrate to the kidneys and liver in their experiment. This is different from the earlier experiment with glucose due to the difference in mechanism and metabolic need of glucose versus choline in the body. Phosphorylation was again found to be responsible for the trapping of the tracer in the tissues.
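The phosphorylation-trapping mechanism described above can be made concrete with a toy kinetic model. The Python sketch below integrates a simple two-tissue compartment model in which tracer enters tissue at rate K1, washes back out at rate k2, and is irreversibly phosphorylated at rate k3. The rate constants and the plasma input curve are invented purely for illustration and are not taken from the studies described above; with k3 > 0 activity accumulates in the tissue (trapping), while with k3 = 0 it simply washes out, qualitatively mirroring the tissues that did not trap the tracer.

```python
import numpy as np

def simulate_tissue_activity(k1, k2, k3, t_end=120.0, dt=0.01):
    """Toy two-tissue compartment model with irreversible phosphorylation.

    c_free : free (unphosphorylated) tracer in tissue
    c_trap : phosphorylated, trapped tracer
    The plasma input curve is an invented mono-exponential bolus.
    """
    n = int(t_end / dt)
    c_free = 0.0
    c_trap = 0.0
    total = np.empty(n)
    for i in range(n):
        t = i * dt
        c_plasma = np.exp(-0.1 * t)                # illustrative input function
        dc_free = k1 * c_plasma - (k2 + k3) * c_free
        dc_trap = k3 * c_free
        c_free += dc_free * dt                     # simple Euler integration
        c_trap += dc_trap * dt
        total[i] = c_free + c_trap
    return total

trapped = simulate_tissue_activity(k1=0.1, k2=0.1, k3=0.05)   # phosphorylating tissue
washout = simulate_tissue_activity(k1=0.1, k2=0.1, k3=0.0)    # no phosphorylation
print(f"activity at t=120: trapping tissue {trapped[-1]:.3f}, "
      f"non-trapping tissue {washout[-1]:.3f}")
```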
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Response surface methodology** Response surface methodology: In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process. Response surface methodology: Statistical approaches such as RSM can be employed to maximize the production of a special substance by optimization of operational factors. Of late, for formulation optimization, the RSM, using proper design of experiments (DoE), has become extensively used. In contrast to conventional methods, the interaction among process variables can be determined by statistical techniques. Basic approach of response surface methodology: An easy way to estimate a first-degree polynomial model is to use a factorial experiment or a fractional factorial design. This is sufficient to determine which explanatory variables affect the response variable(s) of interest. Once it is suspected that only significant explanatory variables are left, then a more complicated design, such as a central composite design can be implemented to estimate a second-degree polynomial model, which is still only an approximation at best. However, the second-degree model can be used to optimize (maximize, minimize, or attain a specific target for) the response variable(s) of interest. Important RSM properties and features: Orthogonality The property that allows individual effects of the k-factors to be estimated independently without (or with minimal) confounding. Also orthogonality provides minimum variance estimates of the model coefficient so that they are uncorrelated. Rotatability The property of rotating points of the design about the center of the factor space. The moments of the distribution of the design points are constant. Uniformity A third property of CCD designs used to control the number of center points is uniform precision (or Uniformity). Special geometries: Cube Cubic designs are discussed by Kiefer, by Atkinson, Donev, and Tobias and by Hardin and Sloane. Sphere Spherical designs are discussed by Kiefer and by Hardin and Sloane. Simplex geometry and mixture experiments Mixture experiments are discussed in many books on the design of experiments, and in the response-surface methodology textbooks of Box and Draper and of Atkinson, Donev and Tobias. An extensive discussion and survey appears in the advanced textbook by John Cornell. Extensions: Multiple objective functions Some extensions of response surface methodology deal with the multiple response problem. Multiple response variables create difficulty because what is optimal for one response may not be optimal for other responses. Other extensions are used to reduce variability in a single response while targeting a specific value, or attaining a near maximum or minimum while preventing variability in that response from getting too large. Practical concerns: Response surface methodology uses statistical models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. 
In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and of the inadequacies of the model. Practical concerns: Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years. The engineers had not been able to afford to fit a cubic three-level design to estimate a quadratic model, and their biased linear-models estimated the gradient to be zero. Box's design reduced the costs of experimentation so that a quadratic model could be fit, which led to a (long-sought) ascent direction.
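As a concrete illustration of the basic approach described above, the sketch below (Python with NumPy) fits a second-degree polynomial model to responses collected on a small face-centred central composite design in two coded factors and then solves for the stationary point. The "true" response function, the noise level, and the design layout are assumptions made only for this demonstration; they are not taken from the text.

```python
import numpy as np

# Face-centred central composite design in two coded factors (x1, x2):
# a 2^2 factorial, axial points, and a centre point.
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)

def true_response(x1, x2):
    # Invented test function with a maximum near (0.4, -0.3); illustration only.
    return 10 - (x1 - 0.4) ** 2 - 2 * (x2 + 0.3) ** 2

rng = np.random.default_rng(0)
y = true_response(design[:, 0], design[:, 1]) + rng.normal(0, 0.05, len(design))

# Second-degree polynomial model:
#   y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b11, b22, b12 = beta

# Stationary point: solve the gradient equations
#   b1 + 2*b11*x1 + b12*x2 = 0  and  b2 + 2*b22*x2 + b12*x1 = 0
H = np.array([[2 * b11, b12], [b12, 2 * b22]])
stationary = np.linalg.solve(H, -np.array([b1, b2]))
print("estimated stationary point (coded units):", stationary)
```

With these assumptions the estimated stationary point lands close to the true optimum at (0.4, -0.3), which is exactly the kind of "ascent direction" information the second-degree fit is meant to provide.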
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ross Mine Formation** Ross Mine Formation: The Ross Mine Formation is a geologic formation in Texas. It preserves fossils dating back to the Permian period.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**V particle** V particle: In particle physics, V was a generic name for heavy, unstable subatomic particles that decay into a pair of particles, thereby producing a characteristic letter V in a bubble chamber or other particle detector. Such particles were first detected in cosmic ray interactions in the atmosphere in the late 1940s and were first produced using the Cosmotron particle accelerator at Brookhaven National Laboratory in the 1950s. Since all such particles have now been identified and given specific names, for instance kaons or Sigma baryons, this term has fallen into disuse. V particle: V0 is still used on occasion to refer generally to neutral particles that may confuse the B-tagging algorithms in a modern particle detector, as in Section 7 of an ATLAS conference note.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Sherwood number** Sherwood number: The Sherwood number (Sh) (also called the mass transfer Nusselt number) is a dimensionless number used in mass-transfer operations. It represents the ratio of the convective mass transfer rate to the rate of diffusive mass transport, and is named in honor of Thomas Kilgore Sherwood. Sherwood number: It is defined as Sh = (convective mass transfer rate)/(diffusion rate) = hL/D, where L is a characteristic length (m), D is the mass diffusivity (m² s⁻¹), and h is the convective mass transfer film coefficient (m s⁻¹). Using dimensional analysis, it can also be further defined as a function of the Reynolds and Schmidt numbers: Sh = f(Re, Sc). For example, for a single sphere it can be expressed as Sh = Sh0 + C Re^m Sc^(1/3), where Sh0 is the Sherwood number due only to natural convection and not forced convection. Sherwood number: A more specific correlation is the Froessling equation: Sh = 2 + 0.552 Re^(1/2) Sc^(1/3). This form is applicable to molecular diffusion from a single spherical particle. It is particularly valuable in situations where the Reynolds number and Schmidt number are readily available. Since Re and Sc are both dimensionless numbers, the Sherwood number is also dimensionless. Sherwood number: These correlations are the mass transfer analogies to heat transfer correlations of the Nusselt number in terms of the Reynolds number and Prandtl number. For a correlation for a given geometry (e.g. spheres, plates, cylinders, etc.), a heat transfer correlation (often more readily available from literature and experimental work, and easier to determine) for the Nusselt number (Nu) in terms of the Reynolds number (Re) and the Prandtl number (Pr) can be used as a mass transfer correlation by replacing the Prandtl number with the analogous dimensionless number for mass transfer, the Schmidt number, and replacing the Nusselt number with the analogous dimensionless number for mass transfer, the Sherwood number. Sherwood number: As an example, a heat transfer correlation for spheres is given by the Ranz-Marshall correlation: Nu = 2 + 0.6 Re^(1/2) Pr^(1/3), valid for 0 ≤ Re < 200 and 0 ≤ Pr < 250. This correlation can be made into a mass transfer correlation using the above procedure, which yields Sh = 2 + 0.6 Re^(1/2) Sc^(1/3), valid for 0 ≤ Re < 200 and 0 ≤ Sc < 250. This is a very concrete way of demonstrating the analogies between different forms of transport phenomena.
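The correlation lends itself to a short calculation. The sketch below (Python, with illustrative property values that are assumptions rather than figures from the text) evaluates the Froessling correlation for a small sphere and then inverts the definition Sh = hL/D to recover the film coefficient h.

```python
# Minimal sketch: estimating a convective mass transfer coefficient for a
# single sphere from the Froessling correlation Sh = 2 + 0.552 Re^(1/2) Sc^(1/3).
# All numerical property values below are illustrative assumptions.

def froessling_sherwood(re: float, sc: float) -> float:
    """Sherwood number for a single sphere (Froessling correlation)."""
    return 2.0 + 0.552 * re ** 0.5 * sc ** (1.0 / 3.0)

def film_coefficient(sh: float, diffusivity: float, length: float) -> float:
    """Invert Sh = h*L/D to recover the film coefficient h (m/s)."""
    return sh * diffusivity / length

d_p = 1e-3        # sphere diameter, m (assumed characteristic length L)
u = 0.1           # free-stream velocity, m/s (assumed)
nu = 1e-6         # kinematic viscosity of water, m^2/s (typical value)
D_ab = 1e-9       # mass diffusivity, m^2/s (typical liquid-phase value)

re = u * d_p / nu             # Reynolds number
sc = nu / D_ab                # Schmidt number
sh = froessling_sherwood(re, sc)
h = film_coefficient(sh, D_ab, d_p)
print(f"Re = {re:.0f}, Sc = {sc:.0f}, Sh = {sh:.1f}, h = {h:.2e} m/s")
```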
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Total harmonic distortion** Total harmonic distortion: The total harmonic distortion (THD or THDi) is a measurement of the harmonic distortion present in a signal and is defined as the ratio of the sum of the powers of all harmonic components to the power of the fundamental frequency. Distortion factor, a closely related term, is sometimes used as a synonym. In audio systems, lower distortion means that the components in a loudspeaker, amplifier, microphone or other equipment produce a more accurate reproduction of an audio recording. Total harmonic distortion: In radio communications, devices with lower THD tend to produce less unintentional interference with other electronic devices. Since harmonic distortion can potentially widen the frequency spectrum of the output emissions from a device by adding signals at multiples of the input frequency, devices with high THD are less suitable in applications such as spectrum sharing and spectrum sensing. In power systems, lower THD implies lower peak currents, less heating, lower electromagnetic emissions, and less core loss in motors. IEEE Std 519-2014 covers the recommended practice and requirements for harmonic control in electric power systems. Definitions and examples: To understand a system with an input and an output, such as an audio amplifier, we start with an ideal system where the transfer function is linear and time-invariant. When a sinusoidal signal of frequency ω passes through a non-ideal, non-linear device, additional content is added at multiples nω (harmonics) of the original frequency. THD is a measure of that additional signal content not present in the input signal. Definitions and examples: When the main performance criterion is the "purity" of the original sine wave (in other words, the contribution of the original frequency with respect to its harmonics), the measurement is most commonly defined as the ratio of the RMS amplitude of a set of higher harmonic frequencies to the RMS amplitude of the first harmonic, or fundamental, frequency: THD_F = √(V2² + V3² + V4² + ⋯) / V1, where Vn is the RMS value of the nth harmonic voltage and V1 is the RMS value of the fundamental component. Definitions and examples: In practice, THD_F is commonly used in audio distortion specifications (percentage THD); however, THD is a non-standardized specification and the results between manufacturers are not easily comparable. Since individual harmonic amplitudes are measured, it is required that the manufacturer disclose the test signal frequency range, level and gain conditions, and number of measurements taken. It is possible to measure the full 20 Hz–20 kHz range using a sweep (though distortion for a fundamental above 10 kHz is inaudible). Definitions and examples: Measurements for calculating the THD are made at the output of a device under specified conditions. The THD is usually expressed in percent or in dB relative to the fundamental as distortion attenuation. Definitions and examples: A variant definition uses the fundamental plus harmonics as the reference, though its usage is discouraged: THD_R = √(V2² + V3² + V4² + ⋯) / √(V1² + V2² + V3² + ⋯) = THD_F / √(1 + THD_F²). These can be distinguished as THD_F (for "fundamental") and THD_R (for "root mean square"). THD_R cannot exceed 100%. At low distortion levels, the difference between the two calculation methods is negligible. For instance, a signal with THD_F of 10% has a very similar THD_R of 9.95%. However, at higher distortion levels the discrepancy becomes large. For instance, a signal with THD_F of 266% has a THD_R of 94%.
A pure square wave with infinite harmonics has a THD_F of 48.3%, or a THD_R of 43.5%. Some use the term "distortion factor" as a synonym for THD_R, while others use it as a synonym for THD_F. The International Electrotechnical Commission (IEC) also defines another term, total harmonic factor, for the "ratio of the RMS value of the harmonic content of an alternating quantity to the RMS value of the quantity", using a different equation. THD+N: THD+N means total harmonic distortion plus noise. This measurement is much more common and more comparable between devices. It is usually measured by inputting a sine wave, notch filtering the output, and comparing the ratio between the output signal with and without the sine wave: THD+N = √(Σ harmonics² + noise²) / fundamental. Like the THD measurement, this is a ratio of RMS amplitudes, and can be measured as THD_F (bandpassed or calculated fundamental as the denominator) or, more commonly, as THD_R (total distorted signal as the denominator). A meaningful measurement must include the bandwidth of measurement. This measurement includes effects from ground-loop power-line hum, high-frequency interference, intermodulation distortion between these tones and the fundamental, and so on, in addition to harmonic distortion. For psychoacoustic measurements, a weighting curve such as A-weighting or ITU-R BS.468 is applied, which is intended to accentuate what is most audible to the human ear, contributing to a more accurate measurement. A-weighting is only a rough way to estimate the frequency sensitivity of human hearing, as it does not take into account the non-linear behavior of the ear. The loudness model proposed by Zwicker includes these complexities; the model is described in the German standard DIN 45631. For a given input frequency and amplitude, THD+N is reciprocal to SINAD, provided that both measurements are made over the same bandwidth. Measurement: The distortion of a waveform relative to a pure sinewave can be measured either by using a THD analyzer to analyse the output wave into its constituent harmonics and noting the amplitude of each relative to the fundamental, or by cancelling out the fundamental with a notch filter and measuring the remaining signal, which will be the total aggregate harmonic distortion plus noise. Measurement: Given a sinewave generator of very low inherent distortion, it can be used as input to amplification equipment, whose distortion at different frequencies and signal levels can be measured by examining the output waveform. There is electronic equipment both to generate sinewaves and to measure distortion, but a general-purpose digital computer equipped with a sound card can carry out harmonic analysis with suitable software. Different software can be used to generate sinewaves, but the inherent distortion may be too high for measurement of very low-distortion amplifiers. Measurement: Interpretation For many purposes different types of harmonics are not equivalent. For instance, crossover distortion at a given THD is much more audible than clipping distortion at the same THD, since the harmonics produced by crossover distortion are nearly as strong at higher-frequency harmonics, such as 10x to 20x the fundamental, as they are at lower-frequency harmonics like 3x or 5x the fundamental. Harmonics appearing far away in frequency from the fundamental (the desired signal) are not as easily masked by that fundamental. In contrast, at the onset of clipping, harmonics first appear at low-order frequencies and gradually start to occupy higher-frequency harmonics.
A single THD number is therefore inadequate to specify audibility, and must be interpreted with care. Taking THD measurements at different output levels would expose whether the distortion is clipping (which decreases with decreasing level) or crossover (which stays constant with varying output level, and thus is a greater percentage of the sound produced at low volumes). Measurement: THD is a summation of a number of harmonics, equally weighted, even though research performed decades ago indicates that lower-order harmonics are harder to hear at the same level than higher-order ones. In addition, even-order harmonics are said to be generally harder to hear than odd-order ones. A number of formulas that attempt to correlate THD with actual audibility have been published, but none have gained mainstream use. Examples: For many standard signals, the above criterion may be calculated analytically in closed form. For example, a pure square wave has THD_F equal to √(π²/8 − 1) ≈ 0.483 = 48.3%. The sawtooth signal possesses THD_F = √(π²/6 − 1) ≈ 0.803 = 80.3%. The pure symmetrical triangle wave has THD_F = √(π⁴/96 − 1) ≈ 0.121 = 12.1%. For the rectangular pulse train with duty cycle μ (sometimes called the cyclic ratio), THD_F has the form THD_F(μ) = √(π²μ(1−μ)/(2 sin²(πμ)) − 1), 0 < μ < 1, and logically reaches its minimum (≈ 0.483) when the signal becomes symmetrical, μ = 0.5, i.e. the pure square wave. Appropriate filtering of these signals may drastically reduce the resulting THD. For instance, the pure square wave filtered by a second-order Butterworth low-pass filter (with the cutoff frequency set equal to the fundamental frequency) has a THD_F of 5.3%, while the same signal filtered by a fourth-order filter has a THD_F of 0.6%. However, analytic computation of THD_F for complicated waveforms and filters often represents a difficult task, and the resulting expressions may be quite laborious to obtain. For example, the closed-form expression for the THD_F of the sawtooth wave filtered by the first-order Butterworth low-pass filter is simply THD_F = √(π²/3 − π coth(π)) ≈ 0.370 = 37.0%, while that for the same signal filtered by the second-order Butterworth filter is a rather cumbersome combination of cot and coth terms that evaluates to ≈ 0.181 = 18.1%. The closed-form expression for the THD_F of the pulse train filtered by the pth-order Butterworth low-pass filter is more complicated still; it involves the duty cycle μ, 0 < μ < 1, and the Butterworth filter poles z_l = exp(iπ(2l−1)/(2p)), l = 1, 2, …, 2p (see the references for the full expression).
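The closed-form values above are easy to check numerically. The sketch below (Python with NumPy) computes THD_F and THD_R directly from harmonic amplitudes; the square-wave and filtered-square-wave cases reproduce the 48.3%, 43.5% and 5.3% figures quoted above. The filter model (magnitude-only Butterworth response with the cutoff at the fundamental) follows the assumptions stated in the text; the function names are my own.

```python
import numpy as np

def thd_f(harmonic_rms):
    """THD_F = sqrt(V2^2 + V3^2 + ...) / V1 from RMS harmonic amplitudes."""
    v = np.asarray(harmonic_rms, dtype=float)
    return np.sqrt(np.sum(v[1:] ** 2)) / v[0]

def thd_r(harmonic_rms):
    """THD_R uses the whole signal (fundamental plus harmonics) as reference."""
    v = np.asarray(harmonic_rms, dtype=float)
    return np.sqrt(np.sum(v[1:] ** 2)) / np.sqrt(np.sum(v ** 2))

n = np.arange(1, 100001)                        # harmonic orders 1..100000
square = np.where(n % 2 == 1, 1.0 / n, 0.0)     # square wave: odd harmonics ~ 1/n

print(f"square wave: THD_F = {thd_f(square):.1%}, THD_R = {thd_r(square):.1%}")
# -> roughly 48.3% and 43.5%

# Square wave after a 2nd-order Butterworth low-pass filter with the cutoff at
# the fundamental: each harmonic is scaled by |H(n)| = 1 / sqrt(1 + n^(2p)).
p = 2
filtered = square / np.sqrt(1.0 + n.astype(float) ** (2 * p))
print(f"filtered square wave: THD_F = {thd_f(filtered):.1%}")
# -> roughly 5.3%
```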
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cabot rings** Cabot rings: Cabot rings are thin, red-violet staining, threadlike strands in the shape of a loop or figure-8 that are found on rare occasions in red blood cells (erythrocytes). They are believed to be microtubules that are remnants of a mitotic spindle, and their presence indicates an abnormality in the production of red blood cells. Cabot rings are considerably rare findings; when present, they are seen in the cytoplasm of red blood cells, are in most cases caused by defects of erythrocyte production, and are not commonly found in the circulating blood. Cytologic appearance: Cabot rings appear as ring-, figure-8- or loop-shaped structures on microscopy. They stain red or purple with Wright's stain. Associated conditions: Cabot rings have been observed in a handful of cases in patients with pernicious anemia, lead poisoning, and certain other disorders of red blood cell production (erythropoiesis). History: They were first described in 1903 by the American physician Richard Clarke Cabot (1868–1939).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Line–line intersection** Line–line intersection: In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection. Line–line intersection: In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. Line–line intersection: The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line. Formulas: A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see Skew lines § Testing for skewness. Formulas: Given two points on each line First we consider the intersection of two lines L1 and L2 in two-dimensional space, with line L1 being defined by two distinct points (x1, y1) and (x2, y2), and line L2 being defined by two distinct points (x3, y3) and (x4, y4). The intersection P of line L1 and L2 can be defined using determinants. Formulas: Writing |a b; c d| for the 2 × 2 determinant ad − bc, the intersection point is P_x = | |x1 y1; x2 y2| |x1 1; x2 1| ; |x3 y3; x4 y4| |x3 1; x4 1| | / | |x1 1; x2 1| |y1 1; y2 1| ; |x3 1; x4 1| |y3 1; y4 1| | and P_y = | |x1 y1; x2 y2| |y1 1; y2 1| ; |x3 y3; x4 y4| |y3 1; y4 1| | / | |x1 1; x2 1| |y1 1; y2 1| ; |x3 1; x4 1| |y3 1; y4 1| |. The determinants can be written out as: P_x = ((x1y2 − y1x2)(x3 − x4) − (x1 − x2)(x3y4 − y3x4)) / ((x1 − x2)(y3 − y4) − (y1 − y2)(x3 − x4)) and P_y = ((x1y2 − y1x2)(y3 − y4) − (y1 − y2)(x3y4 − y3x4)) / ((x1 − x2)(y3 − y4) − (y1 − y2)(x3 − x4)). When the two lines are parallel or coincident, the denominator is zero. Formulas: Given two points on each line segment The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection in respect to the line segments, we can define lines L1 and L2 in terms of first degree Bézier parameters: L1 = (x1, y1) + t·(x2 − x1, y2 − y1) and L2 = (x3, y3) + u·(x4 − x3, y4 − y3) (where t and u are real numbers). The intersection point of the lines is found with one of the following values of t or u, where t = |x1 − x3 x3 − x4; y1 − y3 y3 − y4| / |x1 − x2 x3 − x4; y1 − y2 y3 − y4| = ((x1 − x3)(y3 − y4) − (y1 − y3)(x3 − x4)) / ((x1 − x2)(y3 − y4) − (y1 − y2)(x3 − x4)) and u = |x1 − x3 x1 − x2; y1 − y3 y1 − y2| / |x1 − x2 x3 − x4; y1 − y2 y3 − y4| = ((x1 − x3)(y1 − y2) − (y1 − y3)(x1 − x2)) / ((x1 − x2)(y3 − y4) − (y1 − y2)(x3 − x4)), with (P_x, P_y) = (x1 + t(x2 − x1), y1 + t(y2 − y1)) or (P_x, P_y) = (x3 + u(x4 − x3), y3 + u(y4 − y3)). There will be an intersection if 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1. The intersection point falls within the first line segment if 0 ≤ t ≤ 1, and it falls within the second line segment if 0 ≤ u ≤ 1. These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point.
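The t/u parameterization above maps directly to code. The following Python sketch is a minimal implementation of the segment-intersection test just described; the function name is my own, and the early return when the denominator is zero treats parallel and coincident segments alike, which is a simplification of the general case.

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection of segments p1-p2 and p3-p4 using the Bezier parameters t and u.

    Returns the intersection point (x, y), or None if the segments are parallel,
    coincident, or do not overlap within 0 <= t <= 1 and 0 <= u <= 1.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4

    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:                      # parallel or coincident lines
        return None

    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom

    if 0 <= t <= 1 and 0 <= u <= 1:     # both parameters inside their segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

# Example: the diagonals of the unit square cross at (0.5, 0.5).
print(segment_intersection((0, 0), (1, 1), (0, 1), (1, 0)))
```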
Formulas: Given two line equations The x and y coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. Formulas: Suppose that two lines have the equations y = ax + c and y = bx + d, where a and b are the slopes (gradients) of the lines and where c and d are the y-intercepts of the lines. At the point where the two lines intersect (if they do), both y coordinates will be the same, hence the following equality: ax + c = bx + d. Formulas: We can rearrange this expression in order to extract the value of x, ax − bx = d − c, and so x = (d − c)/(a − b). To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations, for example, into the first: y = a(d − c)/(a − b) + c. Hence, the point of intersection is P = ((d − c)/(a − b), a(d − c)/(a − b) + c). Note that if a = b then the two lines are parallel. If c ≠ d as well, the lines are different and there is no intersection; otherwise the two lines are identical and intersect at every point. Formulas: Using homogeneous coordinates By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple (x, y, w). The mapping from 3D to 2D coordinates is (x′, y′) = (x/w, y/w). We can convert 2D points to homogeneous coordinates by defining them as (x, y, 1). Formulas: Assume that we want to find the intersection of two infinite lines in 2-dimensional space, defined as a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0. We can represent these two lines in line coordinates as U1 = (a1, b1, c1) and U2 = (a2, b2, c2). The intersection P′ of the two lines is then simply given by P′ = (ap, bp, cp) = U1 × U2 = (b1c2 − b2c1, a2c1 − a1c2, a1b2 − a2b1). If cp = 0, the lines do not intersect. More than two lines: The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the n-line intersection problem are as follows. More than two lines: In two dimensions In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the ith equation (i = 1, …, n) as (a_i1, a_i2)·(x, y)^T = b_i, and stack these equations into matrix form as Aw = b, where the ith row of the n × 2 matrix A is (a_i1, a_i2), w is the 2 × 1 vector (x, y)^T, and the ith element of the column vector b is b_i. If A has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix [A | b] is also 2, there exists a solution of the matrix equation and thus an intersection point of the n lines. The intersection point, if it exists, is given by w = A^g b = (A^T A)^(−1) A^T b, where A^g is the Moore–Penrose generalized inverse of A (which has the form shown because A has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of A is only 1, then if the rank of the augmented matrix is 2 there is no solution, but if its rank is 1 then all of the lines coincide with each other. More than two lines: In three dimensions The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows.
In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form [ai1ai2ai3][xyz]=bi. More than two lines: Thus a set of n lines can be represented by 2n equations in the 3-dimensional coordinate vector w: Aw=b where now A is 2n × 3 and b is 2n × 1. As before there is a unique intersection point if and only if A has full column rank and the augmented matrix [A | b] does not, and the unique intersection if it exists is given by w=(ATA)−1ATb. Nearest points to skew lines: In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense. Nearest points to skew lines: In two dimensions In the two-dimensional case, first, represent line i as a point pi on the line and a unit normal vector n̂i, perpendicular to that line. That is, if x1 and x2 are points on line 1, then let p1 = x1 and let := [0−110]x2−x1‖x2−x1‖ which is the unit vector along the line, rotated by a right angle. Nearest points to skew lines: The distance from a point x to the line (p, n̂) is given by d(x,(p,n^))=|(x−p)⋅n^|=|(x−p)Tn^|=|n^T(x−p)|=(x−p)Tn^n^T(x−p). And so the squared distance from a point x to a line is d(x,(p,n^))2=(x−p)T(n^n^T)(x−p). The sum of squared distances to many lines is the cost function: E(x)=∑i(x−pi)T(n^in^iT)(x−pi). This can be rearranged: E(x)=∑ixTn^in^iTx−xTn^in^iTpi−piTn^in^iTx+piTn^in^iTpi=xT(∑in^in^iT)x−2xT(∑in^in^iTpi)+∑ipiTn^in^iTpi. To find the minimum, we differentiate with respect to x and set the result equal to the zero vector: ∂E(x)∂x=0=2(∑in^in^iT)x−2(∑in^in^iTpi) so (∑in^in^iT)x=∑in^in^iTpi and so x=(∑in^in^iT)−1(∑in^in^iTpi). Nearest points to skew lines: In more than two dimensions While n̂i is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that n̂i n̂iT is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line providing a seminorm on the distance between pi and another point giving the distance to the line. In any number of dimensions, if v̂i is a unit vector along the ith line, then n^in^iT becomes I−v^iv^iT where I is the identity matrix, and so x=(∑iI−v^iv^iT)−1(∑i(I−v^iv^iT)pi). Nearest points to skew lines: General derivation In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin ai and a unit direction vector n̂i. The square of the distance from a point p to one of the lines is given from Pythagoras: di2=‖p−ai‖2−((p−ai)Tn^i)2=(p−ai)T(p−ai)−((p−ai)Tn^i)2 where (p − ai)T n̂i is the projection of p − ai on line i. The sum of distances to the square to all lines is ∑idi2=∑i((p−ai)T(p−ai)−((p−ai)Tn^i)2) To minimize this expression, we differentiate it with respect to p. Nearest points to skew lines: ∑i(2(p−ai)−2((p−ai)Tn^i)n^i)=0 ∑i(p−ai)=∑i(n^in^iT)(p−ai) which results in (∑i(I−n^in^iT))p=∑i(I−n^in^iT)ai where I is the identity matrix. This is a matrix Sp = C, with solution p = S+C, where S+ is the pseudo-inverse of S. Non-Euclidean geometry: In spherical geometry, any two lines intersect.In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line.
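The least-squares construction in the "Nearest points to skew lines" section above translates into a few lines of linear algebra. The sketch below (Python with NumPy) computes the point minimizing the sum of squared distances to a set of lines, each given by an origin and a direction vector; it works in any dimension, and the use of a pseudo-inverse (so that degenerate configurations such as all-parallel lines do not raise an error) is my own implementation choice rather than something prescribed by the text.

```python
import numpy as np

def nearest_point_to_lines(origins, directions):
    """Point minimizing the sum of squared distances to a set of lines.

    origins:    (n, d) array, a point a_i on each line
    directions: (n, d) array, direction of each line (need not be unit length)
    """
    a = np.asarray(origins, dtype=float)
    v = np.asarray(directions, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)   # normalize directions

    dim = a.shape[1]
    S = np.zeros((dim, dim))
    c = np.zeros(dim)
    for a_i, v_i in zip(a, v):
        P = np.eye(dim) - np.outer(v_i, v_i)   # projector onto the complement of v_i
        S += P
        c += P @ a_i
    return np.linalg.pinv(S) @ c               # pseudo-inverse handles degeneracies

# Two skew lines in 3D: the x-axis, and a line parallel to the y-axis at z = 1.
p = nearest_point_to_lines(origins=[[0, 0, 0], [0, 0, 1]],
                           directions=[[1, 0, 0], [0, 1, 0]])
print(p)   # expected: [0, 0, 0.5], midway between the two lines
```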
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hill bomb** Hill bomb: A hill bomb is a maneuver in skateboarding in which a rider rides down a big hill. The trick is noted for its particular danger and, sometimes, grace. History: Thrasher magazine refers to hill bombing as "one of the first thrills ever on a skateboard." Hill bombs are dangerous and should only be attempted by highly skilled skateboarders. Sean Greene, Pablo Ramirez, Frank Gerwer, GX1000, and others have repopularized hill bombing in the mid to late 2010s. 1980s In the 1985 Powell Peralta Skate video Future Primitive, Tommy Guerrero skates down the hills of San Francisco, using the steep landscape of the city in ways previously unseen. In the 1988 skate video Sick Boys, skaters, in particular Julien Stranger, skate down the steep streets of San Francisco. 1990s In Toy Machine's 1998 skate video - Jump Off A Building - Chris Senn's part contains a number of hill bombs. 2000s At the end of Jon Allie's part in the 2005 Zero skateboards video "New Blood," he does a frontside 180 kickflip to hill bomb. In the 2005 DVS skate video Skate More Dennis Busenitz incorporates a number of hill bombs into his part. History: 2010s In 2010, Emerica released the skate video Stay Gold featuring a part by Brandon Westgate that contains a hill bomb down a drainage ditch. In 2011, Magenta skateboards released SF Hill Street Blues filmed by Yoan Taillandier which features many San Francisco hill bombs. In the 2011, Emerica released a video: Brandon Westgate: New Shoe, New Part which contains a number of hill bomb lines filmed in San Francisco. The GX1000 videos are known to contain gnarly hill bombing, including the 2017: Adrenaline Junkie and the 2018 Roll Up and El Camino. In the 2019 Supreme video CANDYLAND - dedicated to Pablo Ramirez and directed by William Strobeck - a number of hill bombs are featured, including ones by Sean Greene, Jeff Carlyle, Rowan Zorilla, Matt Finley, Sean Pablo, Andrew Torralvo, Taylor Nida, and Elissa Steamer. History: San Francisco Due to its hilly nature, San Francisco, California is known to be a particularly good city in which to bomb hills. Dolores Park hill bomb In July in San Francisco, California, hundreds of skateboarders gather on Dolores Street across from Dolores park for an impromptu hill bombing event. The event has become an annual tradition. There have been some injuries and at least one death associated with the event. The city attempted to stop the event from happening by installing Botts dots in 2020. However, skaters returned anyway, in spite of those.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Automation Studio** Automation Studio: Automation Studio is a circuit design, simulation and project documentation software for fluid power systems and electrical projects conceived by Famic Technologies Inc.. It is used for CAD, maintenance, and training purposes. Mainly used by engineers, trainers, and service and maintenance personnel. Automation Studio can be applied in the design, training and troubleshooting of hydraulics, pneumatics, HMI, and electrical control systems.Two versions of the software exist: Automation Studio Professional Automation Studio EducationalThe educational version of Automation Studio is a limited features version used by engineering and technical schools to train students who are future engineers or technicians. The software is designed for schools that teach technical subjects such as industrial technologies, mechatronics, electromechanical technologies, electrical & electronics, automation, and maintenance. Modeling and simulation are used to illustrate theoretical aspects. Libraries: Automation Studio has various symbol libraries. All libraries follow standards such as ISO, IEC, JIC and NEMA. Libraries: Hydraulics Hydraulic Manifold Block Pneumatics Electrical (IEC & NEMA standards) Fluid Power & Electrical Component Sizing Valve Spool Designer OPC communications server Bill of Materials & Report PLC Ladder Logic HMI & Control Panel Digital Electronics Sequential Function Chart (GRAFCET) Electrical Controls Multi-Fluid Simulation Teachware Manufacturer's Catalogue Workflow Manager Block Diagram (Math) Workshop CANBus Communication Interface with Unity 3D System Analysis (FMECA) Libraries features: Automation Studio is used as a design and simulation tool in the fields of hydraulics, pneumatics, electrical and automation. Automation Studio Hydraulics Automation Studio Hydraulics’ functions are used for hydraulic system engineering purposes. Automation Studio Hydraulics includes a specific symbol library and uses modeling techniques such the Bernoulli's law and the gradient method. Automation Studio Hydraulics is the main aspect of Automation Studio: it is used to conceive and to test hydraulic systems while taking into account thermal parameters. It displays inside views of the elements in the schematics. The Automation Studio library includes additional elements such as commands and control devices (PID controller, CAN bus, and servo-direction). Fluid power is one of the central elements in such simulation. Automation Studio Pneumatics Automation Studio Pneumatics is similar to Automation Studio Hydraulics, but the simulation is done for air rather than fluids. This library, like Automation Studio Hydraulics, is used to design and test models. Thus, the simulation elements that are used are not the same as those in the hydraulics library. Automation Studio Electrotechnical The electrotechnical module in Automation Studio is used for design, simulation, validation, documentation and troubleshooting of electrical diagrams. It includes multi-line and one-line representation according to the users' choice. The different aspects of the IEC and NEMA international standards are respected: components’ identification, symbols, ratings, port names, … etc. The electrotechnical module works simultaneously with the fluid power technologies which allows the users to design and simulate complete systems. 
Versions: Automation Studio Professional 1996-2000: 1.0 to 3.0.5.1 (Windows 98, 2000, Me, XP, NT 4.0); 2003-2004: 4.0, 4.1, 5.0, 5.1, 5.2 (Windows 2000, XP, NT 4.0); 2005-2006: 5.3, 5.4 (Windows 2000, 2003, XP); 2007: 5.5 (Windows XP, Vista); 2008: 5.6 (Windows XP, Vista); 2009: 5.7 (Windows XP, Vista); 2011: 6.0 (Windows XP, Windows 7, Windows 8); 2014: 6.1 (Windows 7, Windows 8); 2016: 6.2 (Windows 7, Windows 8, Windows 10, Vista); 2017: 6.3 (Windows 7, Windows 8, Windows 10, Vista); 2019: 6.4 with service release (SR) 1, SR2, SR3(Windows 7, Windows 8, Windows 10, Vista); 2021: 7.0 with SR1 and SR2 (Windows 8.1, Windows 10); 2022: 7.1 with SR1 (win10, win11); Automation Studio Educational 2002-2005: 4.0, 4.1, 5.0, 5.1, 5.2 (Windows 2000, XP, NT 4.0); 2006: 5.3 (Windows 2000, 2003, XP); 2008: 5.6 (Windows XP, Vista); 2009: 5.7 (Windows XP, Vista); 2014: 6.1 (Windows 7, Windows 8); 2016: 6.2 (Windows 7, Windows 8, Windows 10, Vista); 2017: 6.3 (Windows 7, Windows 8, Windows 10, Vista); 2019: 6.4 (Windows 7, Windows 8, Windows 10, Vista); 2021: 7.0 (Windows 8.1, Windows 10); 2022: 7.1 with SR1 (win10, win11);
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radio spectrum scope** Radio spectrum scope: The radio spectrum scope (also radio panoramic receiver, panoramic adapter, pan receiver, pan adapter, panadapter, panoramic radio spectroscope, panoramoscope, panalyzor and band scope) was invented by Marcel Wallace - and measures and shows the magnitude of an input signal versus frequency within one or more radio bands - e.g. shortwave bands. A spectrum scope is normally a lot cheaper than a spectrum analyzer, because the aim is not high quality frequency resolution - nor high quality signal strength measurements. Radio spectrum scope: The spectrum scope use can be to: find radio channels quickly of known and unknown signals when receiving. find radio amateurs activity quickly e.g. with the intent of communicating with them.Modern spectrum scopes, like the Elecraft P3, also plot signal frequencies and amplitudes over time, in a rolling format called a waterfall plot.
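The "waterfall" display mentioned above is essentially a short-time Fourier transform plotted row by row. The sketch below (Python with NumPy) computes such a rolling spectrum for a synthetic two-tone signal; the sample rate, tone frequencies and FFT length are arbitrary illustration values, not parameters of any particular instrument such as the Elecraft P3.

```python
import numpy as np

fs = 48_000                      # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)    # one second of signal
# Synthetic band activity: two carriers plus a little noise.
signal = (np.sin(2 * np.pi * 7_000 * t)
          + 0.3 * np.sin(2 * np.pi * 12_500 * t)
          + 0.01 * np.random.default_rng(0).normal(size=t.size))

nfft = 1024                      # FFT length per waterfall row
rows = []
for start in range(0, signal.size - nfft, nfft):
    frame = signal[start:start + nfft] * np.hanning(nfft)   # windowed frame
    spectrum = np.abs(np.fft.rfft(frame))                    # magnitude spectrum
    rows.append(20 * np.log10(spectrum + 1e-12))             # dB scale

waterfall = np.array(rows)       # each row: magnitude vs frequency at one instant
freqs = np.fft.rfftfreq(nfft, 1 / fs)
peak_bin = waterfall[0].argmax()
print(f"strongest signal near {freqs[peak_bin]:.0f} Hz")     # close to the 7 kHz tone
```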
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Journal of the IEST** Journal of the IEST: The Journal of the IEST is a peer-reviewed scientific journal and the official publication of the Institute of Environmental Sciences and Technology (IEST). It covers research on simulation, testing, modeling, control, and the teaching of the environmental sciences and technologies. The journal was established in 1958 as the Journal of Environmental Engineering. In October 1959, it was renamed Journal of Environmental Sciences and obtained its current title in 1998.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**X-Win32** X-Win32: In computing, X-Win32 is a proprietary implementation of the X Window System for Microsoft Windows, produced by StarNet Communications. It is based on X11R7.4. X- Win32 allows remote display of UNIX windows on Windows machines in a normal window alongside the other Windows applications Version History: X-Win32 was first introduced by StarNet Communications as a product called MicroX in 1991. As the internet became more widely used in the 1990s the name changed to X-Win32. The table below details the origination and transformation of MicroX into X-Win32. A limited set of versions and their release notes are available from the product's website. Features: Standard connection protocols - X-Win32 offers six standard connection protocols: ssh, telnet, rexec, rlogin, rsh, and XDMCP Window modes - Like other X servers for Microsoft Windows, X-Win32 has two window modes, Single and Multiple. Single window mode contains all X windows with one large visible root window. Multiple window mode allows the Microsoft Window Manager to manage the X client windows Copy and paste - X-Win32 incorporates a clipboard manager which allows for dynamic copying and pasting of text from X clients to Windows applications and vice versa. A screen-shot tool saves to a PNG file. Features: OpenGL support - X-Win32 uses the GLX extension which allows for OpenGL Support Related products: X-Win32 Flash is a version of X-Win32 that can be installed and run directly from a USB Flash Drive Discontinued products: X-Win64 was a version for 64-bit Windows, but the extended features in that version can now be found in the current version of X-Win32. X-Win32 LX was a free commercially supported X Server for Microsoft Windows which supported Microsoft Windows Services for UNIX (SFU). Discontinued products: Recon-X was an add-on product for all X server products, including X-Win32 competitors such as Exceed and Reflection X, which added suspend and resume capabilities to running X sessions. Features of Recon-X were incorporated into the LIVE product line LinuxLIVE is a LIVE client for Linux systems MacLIVE is a LIVE client for Mac OS X systems LIVE Console is a LIVE client installed with the LIVE server which allows localhost LIVE connections to be made
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Abortion doping** Abortion doping: Abortion doping is a rumoured practice of purposely inducing pregnancy specifically for athletic performance-enhancing benefits, and then aborting the pregnancy. Rumours and allegations began during international sporting events in the mid-twentieth century, and a number of doctors and scientists have repeated claims about it, but it remains unproven, and is often regarded as a myth. Potential physical benefits: Hormonal and other changes in pregnancy affect physical performance. In the first three months it is known that a woman's body produces a natural surplus of red blood cells, which are well supplied with oxygen-carrying hemoglobin, in order to support the growing fetus. Other potential advantages are obtained from the surge in hormones that pregnancy induces, predominantly progesterone and estrogen, but also testosterone, which could increase muscle strength. Increases in hormones like relaxin, which loosens the hip joints to prepare for childbirth, may have a performance-enhancing effect on joint mobility. Peter Larkins, an official of the Australian Sports Medicine Association, opined that the advantages of abortion doping would be "far outweighed by the drawbacks of morning sickness and fatigue" which are common in early pregnancy.Although female athletes often become pregnant during their careers and immediately before major sports events, and although many athletes have reported increased performance while being pregnant, "abortion doping" refers specifically to deliberate pregnancy with the express aim to increase performance. The stigma and beliefs surrounding abortion doping have been found to have a detrimental effect on female athletes who become pregnant during an active career. Claims: Western media outlets began accusing Soviet countries of abortion doping as early as the 1956 Summer Olympics, and allegations were raised again at the 1964 Summer Olympics. Rumours of abortion doping continued throughout the 1970s and 1980s, predominately aimed at East German athletes.Concerns around pregnancy as a doping method were discussed in an IOC Medical Commission meeting on February 16, 1984, though the idea was dismissed as members did not feel “that such a procedure would be of benefit to the female athletes”.On May 22, 1988, Finnish doctor Risto Erkola reportedly told the Sunday Mirror "It's horrible and immoral. Now that drug testing is routine, pregnancy is becoming the favourite way of getting an edge on competitors". The second sentence of Erkola's comment is frequently cited in discussions, reports and papers on abortion doping. According to the fact-checking website Snopes.com, media reports following this claim were skeptical of it, and "it is not clear if Erkola would have had any first-hand knowledge of Soviet doping practices". In the same Sunday Mirror story, Prof. 
Renate Huch (misspelled "Hoch" by Sunday Mirror) of Geneva, who deals with artificial insemination, said that "the problem has become so widespread now that [we] made a rule never to help a woman athlete if she wants to get pregnant purely and simply to win a race."On the First Permanent World Conference on Anti-Doping in Sport held in June 1988, Prince Alexandre de Merode, the vice-president of the International Olympic Committee (IOC), supported reports that Eastern European athletes were getting artificially inseminated and then aborting two to three months later in an attempt to boost athletic performance on the First Permanent World Conference on Anti-Doping in Sport. Merode said he knew a Swiss doctor who performed the procedure. Dr. Robert Voy of the United States Olympic Committee dismissed such claims as a "ludicrous myth".In the Textbook in Physiology and Pathophysiology (1999), Dr. Paul-Erik Paulev, a Danish professor of physiology at the University of Copenhagen, wrote that "in some countries female athletes have become pregnant for 2-3 months, in order to improve their performance just following an abortion." Dr. Paulev's comment is also frequently cited in discussions on abortion doping; his comments were first made in a self-published document that contains no references for its assertions regarding abortion doping.In November 1994, a person claiming to be Olga Karasyova, who won a gold medal in gymnastics at the 1968 Summer Olympics, gave an interview with German television station RTL Television, in which she said that abortion doping was widespread among Soviet athletes in the 1970s, and that girls as young as 14 were being forced to have sex with their coaches. When contacted by various newspapers for comment, Karasyova said the person who had given the interviews was an imposter. In 1997, Karasyova successfully sued the Russian newspaper "AIDS-Info" for libel after they published references to the 1994 story. Despite her legal victory, the original interviews attributed to her continue to be reported as facts by some third parties.As of 2002, Snopes.com categorises abortion doping as "unproven", concluding that "abortion doping claims, specifically, have their roots in Cold War era rumors, are confirmed only by a single dubious case, are buttressed by speculative science, and are largely amplified in recent years by anti-abortion groups." Snopes accuse anti-abortion groups of selective reporting and using poorly sourced arguments when writing articles about the subject. Feminist Germaine Greer wrote in 2007 that "there is no real evidence" that abortion doping has ever been done, and British health journalist Peta Bea found in 2009 that "evidence that it occurred has never been substantiated". Legality: The practice is not considered illegal by the IOC. Prince Alexandre de Merode has stated the organisation does not "police motherhood". Abortion doping is not on the World Anti-Doping Agency's current list of prohibited substances or methods.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Anterior cruciate ligament** Anterior cruciate ligament: The anterior cruciate ligament (ACL) is one of a pair of cruciate ligaments (the other being the posterior cruciate ligament) in the human knee. The two ligaments are also called "cruciform" ligaments, as they are arranged in a crossed formation. In the quadruped stifle joint (analogous to the knee), based on its anatomical position, it is also referred to as the cranial cruciate ligament. The term cruciate translates to cross. This name is fitting because the ACL crosses the posterior cruciate ligament to form an “X”. It is composed of strong, fibrous material and assists in controlling excessive motion. This is done by limiting mobility of the joint. The anterior cruciate ligament is one of the four main ligaments of the knee, providing 85% of the restraining force to anterior tibial displacement at 30 and 90° of knee flexion. The ACL is the most injured ligament of the four located in the knee. Structure: The ACL originates from deep within the notch of the distal femur. Its proximal fibers fan out along the medial wall of the lateral femoral condyle. The two bundles of the ACL are the anteromedial and the posterolateral, named according to where the bundles insert into the tibial plateau. The tibial plateau is a critical weight-bearing region on the upper extremity of the tibia. The ACL attaches in front of the intercondyloid eminence of the tibia, where it blends with the anterior horn of the medial meniscus. Purpose: The purpose of the ACL is to resist the motions of anterior tibial translation and internal tibial rotation, which is important for rotational stability. This function prevents anterior tibial subluxation of the lateral and medial tibiofemoral joints, which is important for the pivot-shift phenomenon. The ACL has mechanoreceptors that detect changes in direction of movement, position of the knee joint, and changes in acceleration, speed, and tension. A key factor in instability after ACL injuries is having altered neuromuscular function secondary to diminished somatosensory information. For athletes who participate in sports involving cutting, jumping, and rapid deceleration, the knee must be stable in terminal extension (the screw-home mechanism). Clinical significance: Injury An ACL tear is one of the most common knee injuries, with over 100,000 tears occurring annually in the US. Most ACL tears are a result of a non-contact mechanism such as a sudden change in direction that causes the knee to rotate inward. As the knee rotates inward, additional strain is placed on the ACL, since the femur and tibia, which are the two bones that articulate together forming the knee joint, move in opposite directions, causing the ACL to tear. Most athletes require reconstructive surgery on the ACL, in which the torn or ruptured ACL is completely removed and replaced with a piece of tendon or ligament tissue from the patient (autograft) or from a donor (allograft). Conservative treatment has poor outcomes in ACL injury, since the ACL is unable to form a fibrous clot, as it receives most of its nutrients from synovial fluid; this washes away the reparative cells, making the formation of fibrous tissue difficult. The two most common sources for tissue are the patellar ligament and the hamstrings tendon. The patellar ligament is often used, since bone plugs on each end of the graft are extracted, which helps integrate the graft into the bone tunnels during reconstruction.
The surgery is arthroscopic, meaning that a tiny camera is inserted through a small surgical cut. The camera sends video to a large monitor so the surgeon can see any damage to the ligaments. In the event of an autograft, the surgeon makes a larger cut to get the needed tissue. In the event of an allograft, in which material is donated, this is not necessary, since no tissue is taken directly from the patient's own body. The surgeon drills a hole forming the tibial bone tunnel and femoral bone tunnel, allowing for the patient's new ACL graft to be guided through. Once the graft is pulled through the bone tunnels, two screws are placed into the tibial and femoral bone tunnels. Recovery time usually ranges between one and two years, but is sometimes longer, depending on whether the patient chose an autograft or allograft. A week or so after the injury, the athlete is often misled by the fact that he or she is walking normally and not feeling much pain. This is dangerous, as some athletes start resuming activities such as jogging, which, with a wrong move or twist, could damage the bones, since the graft has not yet fully integrated into the bone tunnels. Injured athletes must understand the significance of each step of an ACL injury to avoid complications and ensure a proper recovery. Clinical significance: Nonoperative treatment of the ACL ACL reconstruction is the most common treatment for an ACL tear, but it is not the only treatment available for individuals. Some may find it more beneficial to complete a nonoperative rehabilitation program. Individuals who are going to continue with physical activity that involves cutting and pivoting, and individuals who are no longer participating in those specific activities, are both candidates for the nonoperative route. In comparing operative and nonoperative approaches to ACL tears, few differences were noted between surgical and nonsurgical groups, with no significant differences in knee function or muscle strength reported by the patients. The main goals to achieve during rehabilitation (rehab) of an ACL tear are to regain sufficient functional stability, maximize full muscle strength, and decrease the risk of reinjury. Typically, three phases are involved in nonoperative treatment: the acute phase, the neuromuscular training phase, and the return-to-sport phase. During the acute phase, rehab focuses on the acute symptoms that occur right after the injury and cause impairment. The use of therapeutic exercises and appropriate therapeutic modalities is crucial during this phase to assist in repairing the impairments from the injury. The neuromuscular training phase is used to focus on the patient regaining full strength in both the lower extremity and the core muscles. This phase begins when the patient regains full range of motion, has no effusion, and has adequate lower extremity strength. During this phase, the patient completes advanced balance, proprioception, cardiovascular conditioning, and neuromuscular interventions. In the final, return-to-sport phase, the patient focuses on sport-specific activities and agility. A functional performance brace is suggested to be used during this phase to assist with stability during pivoting and cutting activities. Clinical significance: Operative treatment of the ACL Anterior cruciate ligament surgery is a complex operation that requires expertise in the field of orthopedic and sports medicine.
Many factors should be considered when discussing surgery, including the athlete's level of competition, age, previous knee injury, other injuries sustained, leg alignment, and graft choice. Typically, four graft types are possible: the bone-patellar tendon-bone graft, the semitendinosus and gracilis tendons (quadrupled hamstring tendon), the quadriceps tendon, and an allograft. Although extensive research has been conducted on which grafts are the best, the surgeon typically chooses the type of graft with which he or she is most comfortable. If rehabilitated correctly, the reconstruction should last. In fact, 92.9% of patients are happy with their graft choice. Prehabilitation has become an integral part of the ACL reconstruction process. This means that the patient exercises before getting surgery to maintain factors such as range of motion and strength. Based on a single leg hop test and self-reported assessment, prehab improved function; these effects were sustained 12 weeks postoperatively. Postsurgical rehabilitation is essential in the recovery from the reconstruction. It typically takes a patient 6 to 12 months to return to life as it was prior to the injury. The rehab can be divided into protecting the graft, improving range of motion, decreasing swelling, and regaining muscle control. Each phase has different exercises based on the patient's needs. For example, while the ligament is healing, a patient's joint should not be used for full weight-bearing, but the patient should strengthen the quadriceps and hamstrings by doing quad sets and weight-shifting drills. Phase two would require full weight-bearing and correcting gait patterns, so exercises such as core strengthening and balance exercises would be appropriate. In phase three, the patient begins running, and can do aquatic workouts to help with reducing joint stresses and cardiorespiratory endurance. Phase four includes multiplanar movements, thus enhancing a running program and beginning agility and plyometric drills. Lastly, phase five focuses on sport- or life-specific motions, depending on the patient. A 2010 Los Angeles Times review of two medical studies discussed whether ACL reconstruction was advisable. One study found that children under 14 who had ACL reconstruction fared better after early surgery than those who underwent a delayed surgery. For adults 18 to 35, though, patients who underwent early surgery followed by rehabilitation fared no better than those who had rehabilitative therapy and a later surgery. The first report focused on children and the timing of an ACL reconstruction. ACL injuries in children are a challenge because children have open growth plates in the bottom of the femur, or thigh bone, and on the top of the tibia, or shin. An ACL reconstruction typically crosses the growth plates, posing a theoretical risk of injury to the growth plate, stunting leg growth, or causing the leg to grow at an unusual angle. The second study focused on adults. It found no significant statistical difference in performance and pain outcomes for patients who received early ACL reconstruction versus those who received physical therapy with an option for later surgery. This would suggest that many patients without instability, buckling, or giving way after a course of rehabilitation can be managed nonoperatively, but the study was limited to outcomes after two years and did not involve patients who were serious athletes.
Patients involved in sports requiring significant cutting, pivoting, twisting, or rapid acceleration or deceleration may not be able to participate in these activities without ACL reconstruction. Clinical significance: ACL injuries in women Risk differences between outcomes in men and women can be attributed to a combination of multiple factors, including anatomical, hormonal, genetic, positional, neuromuscular, and environmental factors. The size of the anterior cruciate ligament is often the most reported difference. Studies look at the length, cross-sectional area, and volume of ACLs. Researchers use cadaveric and in vivo studies to examine these factors, and most studies confirm that women have smaller anterior cruciate ligaments. Other factors that could contribute to higher risks of ACL tears in women include patient weight and height, the size and depth of the intercondylar notch, the diameter of the ACL, the magnitude of the tibial slope, the volume of the tibial spines, the convexity of the lateral tibiofemoral articular surfaces, and the concavity of the medial tibial plateau. While anatomical factors are the most discussed, extrinsic factors, including dynamic movement patterns, may be the most important risk factors when it comes to ACL injury. Environmental factors also play a large role. Extrinsic factors are controlled by the individual. These include strength, conditioning, shoes, and motivation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ERCC1** ERCC1: DNA excision repair protein ERCC-1 is a protein that in humans is encoded by the ERCC1 gene. Together with ERCC4, ERCC1 forms the ERCC1-XPF enzyme complex that participates in DNA repair and DNA recombination. Many aspects of these two gene products are described together here because they are partners during DNA repair. The ERCC1-XPF nuclease is an essential activity in the pathway of DNA nucleotide excision repair (NER). The ERCC1-XPF nuclease also functions in pathways to repair double-strand breaks in DNA, and in the repair of “crosslink” damage that harmfully links the two DNA strands. ERCC1: Cells with disabling mutations in ERCC1 are more sensitive than normal to particular DNA damaging agents, including ultraviolet (UV) radiation and to chemicals that cause crosslinking between DNA strands. Genetically engineered mice with disabling mutations in ERCC1 have defects in DNA repair, accompanied by metabolic stress-induced changes in physiology that result in premature aging. Complete deletion of ERCC1 is incompatible with viability of mice, and no human individuals have been found with complete (homozygous) deletion of ERCC1. Rare individuals in the human population harbor inherited mutations that impair the function of ERCC1. When the normal genes are absent, these mutations can lead to human syndromes, including Cockayne syndrome (CS) and COFS. ERCC1: ERCC1 and ERCC4 are the gene names assigned in mammalian genomes, including the human genome (Homo sapiens). Similar genes with similar functions are found in all eukaryotic organisms. Gene: The genomic DNA for ERCC1 was the first human DNA repair gene to be isolated by molecular cloning. The original method was by transfer of fragments of the human genome to ultraviolet light (UV)-sensitive mutant cell lines derived from Chinese hamster ovary cells. Reflecting this cross-species genetic complementation method, the gene was called “Excision repair cross-complementing 1”. Multiple independent complementation groups of Chinese hamster ovary (CHO) cells were isolated, and this gene restored UV resistance to cells of complementation group 1. Gene: The human ERCC1 gene encodes the ERCC1 protein of 297 amino acids with a molecular mass of about 32,500 daltons. Genes similar to ERCC1 with equivalent functions (orthologs) are found in other eukaryotic genomes. Some of the most studied gene orthologs include RAD10 in the budding yeast Saccharomyces cerevisiae, and swi10+ in the fission yeast Schizosaccharomyces pombe. Protein: One ERCC1 molecule and one XPF molecule bind together, forming an ERCC1-XPF heterodimer, which is the active nuclease form of the enzyme. In the ERCC1–XPF heterodimer, ERCC1 mediates DNA– and protein–protein interactions. XPF provides the endonuclease active site and is involved in DNA binding and additional protein–protein interactions. The ERCC4/XPF protein consists of two conserved domains separated by a less conserved region in the middle. The N-terminal region has homology to several conserved domains of DNA helicases belonging to superfamily II, although XPF is not a DNA helicase. The C-terminal region of XPF includes the active site residues for nuclease activity. Most of the ERCC1 protein is related at the sequence level to the C-terminus of the XPF protein, but residues in the nuclease domain are not present. A DNA-binding “helix-hairpin-helix” domain is located at the C-terminus of each protein.
Protein: By primary sequence and protein structural similarity, the ERCC1-XPF nuclease is a member of a broader family of structure-specific DNA nucleases comprising two subunits. Such nucleases include, for example, the MUS81-EME1 nuclease. Structure-specific nuclease: The ERCC1–XPF complex is a structure-specific endonuclease. ERCC1-XPF does not cut DNA that is exclusively single-stranded or double-stranded, but it cleaves the DNA phosphodiester backbone specifically at junctions between double-stranded and single-stranded DNA. It introduces a cut in double-stranded DNA on the 5′ side of such a junction, about two nucleotides away. This structure-specificity was initially demonstrated for RAD10-RAD1, the yeast orthologs of ERCC1 and XPF. The hydrophobic helix–hairpin–helix motifs in the C-terminal regions of ERCC1 and XPF interact to promote dimerization of the two proteins. There is no catalytic activity in the absence of dimerization. Indeed, although the catalytic domain is within XPF and ERCC1 is catalytically inactive, ERCC1 is indispensable for activity of the complex. Structure-specific nuclease: Several models have been proposed for binding of ERCC1–XPF to DNA, based on partial structures of relevant protein fragments at atomic resolution. DNA binding mediated by the helix-hairpin-helix domains of ERCC1 and XPF positions the heterodimer at the junction between double-stranded and single-stranded DNA. Structure-specific nuclease: Nucleotide excision repair During nucleotide excision repair, several protein complexes cooperate to recognize damaged DNA and locally separate the DNA helix for a short distance on either side of the site of a DNA damage. The ERCC1–XPF nuclease incises the damaged DNA strand on the 5′ side of the lesion. During NER, the ERCC1 protein interacts with the XPA protein to coordinate DNA and protein binding. Structure-specific nuclease: DNA double-strand break repair Mammalian cells with mutant ERCC1–XPF are moderately more sensitive than normal cells to agents (such as ionizing radiation) that cause double-stranded breaks in DNA. Particular pathways of both homologous recombination repair and non-homologous end-joining rely on ERCC1-XPF function. The relevant activity of ERCC1–XPF for both types of double-strand break repair is the ability to remove non-homologous 3′ single-stranded tails from DNA ends before rejoining. This activity is needed during a single-strand annealing subpathway of homologous recombination. Trimming of 3′ single-stranded tails is also needed in a mechanistically distinct subpathway of non-homologous end-joining, dependent on the Ku proteins. Homologous integration of DNA, an important technique for genetic manipulation, is dependent on the function of ERCC1-XPF in the host cell. Structure-specific nuclease: DNA interstrand crosslink repair Mammalian cells carrying mutations in ERCC1 or XPF are especially sensitive to agents that cause DNA interstrand crosslinks. Interstrand crosslinks block the progression of DNA replication, and structures at blocked DNA replication forks provide substrates for cleavage by ERCC1-XPF. Incisions may be made on either side of the crosslink on one DNA strand to unhook the crosslink and initiate repair. Alternatively, a double-strand break may be made in the DNA near the ICL, and subsequent homologous recombination repair may involve ERCC1-XPF action. Although not the only nuclease involved, ERCC1–XPF is required for ICL repair during several phases of the cell cycle.
Clinical significance: Cerebro-oculo-facio-skeletal syndrome A few patients with severely disabling ERCC1 mutations that cause cerebro-oculo-facio-skeletal syndrome (COFS) have been reported. COFS syndrome is a rare recessive disorder in which affected individuals undergo rapid neurologic decline and show indications of accelerated aging. A very severe case of such disabling mutations is the F231L mutation in the tandem helix-hairpin-helix domain of ERCC1 at its interface with XPF. This single mutation has been shown to be very important for the stability of the ERCC1-XPF complex. This phenylalanine residue helps ERCC1 accommodate a key phenylalanine residue from XPF (F894), and the F231L mutation disturbs this accommodating function. As a consequence, F894 protrudes out of the interface and the mutant complex dissociates faster than the native one. The life span of patients with such mutations is often around 1–2 years. Clinical significance: Cockayne syndrome One Cockayne syndrome (CS) type II patient designated CS20LO exhibited a homozygous mutation in exon 7 of ERCC1, producing an F231L mutation. Clinical significance: Relevance in chemotherapy Measuring ERCC1 activity may have utility in clinical cancer medicine because one mechanism of resistance to platinum chemotherapy drugs correlates with high ERCC1 activity. Nucleotide excision repair (NER) is the primary DNA repair mechanism that removes the therapeutic platinum-DNA adducts from the tumor DNA. ERCC1 activity levels, being an important part of the NER common final pathway, may serve as a marker of general NER throughput. This has been suggested for patients with gastric, ovarian and bladder cancers. In non-small cell lung carcinoma (NSCLC), surgically removed tumors that receive no further therapy have a better survival if ERCC1-positive than if ERCC1-negative. Thus, ERCC1 positivity is a favorable prognostic marker, referring to how the disease will proceed if not further treated. ERCC1-positive NSCLC tumors do not benefit from adjuvant platinum chemotherapy. However, ERCC1-negative NSCLC tumors, prognostically worse without treatment, derive substantial benefit from adjuvant cisplatin-based chemotherapy. High ERCC1 is thus a negative predictive marker, referring to how the tumor will respond to a specific type of treatment. In colorectal cancer, clinical trials have not demonstrated the predictive ability of ERCC1 in oxaliplatin‐based treatment. Thus, the European Society for Medical Oncology (ESMO) has not recommended ERCC1 testing prior to the use of oxaliplatin in routine practice. ERCC1 genotyping in humans has shown significant polymorphism at codon 118. These polymorphisms may have differential effects on platinum and mitomycin damage. Clinical significance: Deficiency in cancer ERCC1 protein expression is reduced or absent in 84% to 100% of colorectal cancers, and lower expression of ERCC1 has been reported as being associated with unfavorable prognosis in patients undergoing treatment with oxaliplatin. The promoter of ERCC1 is methylated in 38% of gliomas, resulting in reduced mRNA and protein expression. The promoter of ERCC1 is located in the DNA 5 kilobases upstream of the protein-coding region. Frequencies of epigenetic reductions of nine other DNA repair genes have been evaluated in various cancers and range from 2% (OGG1 in papillary thyroid cancer) to 88% and 90% (MGMT in gastric and colon cancers, respectively).
Thus, reduction of protein expression of ERCC1 in 84% to 100% of colon cancers indicates that reduced ERCC1 is one of the most frequent reductions of a DNA repair gene observed in a cancer. Deficiency in ERCC1 protein expression appears to be an early event in colon carcinogenesis, since ERCC1 was found to be deficient in 40% of the crypts within 10 cm on each side of colonic adenocarcinomas (within the early field defects from which the cancers likely arose). Cadmium (Cd) and its compounds are well-known human carcinogens. During Cd-induced malignant transformation, the promoter regions of ERCC1, as well as of hMSH2, XRCC1, and hOGG1, were heavily methylated and both the messenger RNA and proteins of these DNA repair genes were progressively reduced. DNA damage also increased with Cd-induced transformation. Reduction of protein expression of ERCC1 in progression to sporadic cancer is unlikely to be due to mutation. While germ line (familial) mutations in DNA repair genes cause a high risk of cancer (see inherited impairment in DNA repair increases cancer risk), somatic mutations in DNA repair genes, including ERCC1, occur at only low levels in sporadic (non-familial) cancers. Control of ERCC1 protein level occurs at the translational level. In addition to the wild-type sequence, three splice variants of ERCC1 mRNA exist. ERCC1 mRNA is also found to have either the wild-type or three alternative transcription start points. Neither the overall level of mRNA transcription, the splice variation, nor the transcription start point of the mRNA correlates with the protein level of ERCC1. The rate of ERCC1 protein turnover also does not correlate with ERCC1 protein level. Translational control of ERCC1 by a microRNA (miRNA) has been shown during HIV infection. A trans-activation response element (TAR) miRNA, coded for by the HIV virus, down-regulates ERCC1 protein expression. TAR miRNA allows ERCC1 mRNA to be transcribed, but acts at the p-body level to prevent translation of ERCC1 protein. (A p-body is a cytoplasmic granule “processing body” that interacts with miRNAs to repress translation or trigger degradation of target RNAs.) In breast cancer cell lines, almost one third (55/167) of miRNA promoters were targets for aberrant methylation (epigenetic repression). In breast cancers themselves, methylation of let-7a-3/let-7b miRNA in particular was found. This indicates that let-7a-3/let-7b can be epigenetically repressed. Clinical significance: Repression of let-7a can cause repression of ERCC1 expression through an intermediary step involving the HMGA2 gene. The let-7a miRNA normally represses the HMGA2 gene, and in normal adult tissues, almost no HMGA2 protein is present. (See also Let-7 microRNA precursor.) Reduction or absence of let-7a miRNA allows high expression of the HMGA2 protein. HMGA proteins are characterized by three DNA-binding domains, called AT-hooks, and an acidic carboxy-terminal tail. HMGA proteins are chromatin architectural transcription factors that both positively and negatively regulate the transcription of a variety of genes. They do not display direct transcriptional activation capacity, but regulate gene expression by changing local DNA conformation. Regulation is achieved by binding to AT-rich regions in the DNA and/or direct interaction with several transcription factors. HMGA2 targets and modifies the chromatin architecture at the ERCC1 gene, reducing its expression.
Hypermethylation of the promoter for let-7a miRNA reduces its expression, and this allows hyperexpression of HMGA2. Hyperexpression of HMGA2 can then reduce expression of ERCC1. Clinical significance: Thus, there are three mechanisms that may be responsible for the low level of protein expression of ERCC1 in 84% to 100% of sporadic colon cancers. From results in gliomas and in cadmium carcinogenesis, methylation of the ERCC1 promoter may be a factor. One or more miRNAs that repress ERCC1 may be a factor. And epigenetically reduced let-7a miRNA allowing hyperexpression of HMGA2 could also reduce protein expression of ERCC1 in colon cancers. Which epigenetic mechanism occurs most frequently, or whether multiple epigenetic mechanisms reduce ERCC1 protein expression in colon cancers, has not been determined. Accelerated aging: DNA repair-deficient Ercc1 mutant mice show numerous features of accelerated aging and have a limited lifespan. Accelerated aging in the mutant involves various organs. Ercc1 mutant mice are deficient in several DNA repair processes including transcription-coupled DNA repair. This deficiency prevents resumption of RNA synthesis on the template DNA strand after it receives transcription-blocking DNA damage. Such blockages of transcription appear to promote premature aging, particularly in non-proliferating or slowly proliferating organs such as the nervous system, liver and kidney (see DNA damage theory of aging). Accelerated aging: When Ercc1 mutant mice were subjected to dietary restriction, their response closely resembled the beneficial response to dietary restriction of wild-type mice. Dietary restriction extended the lifespan of the Ercc1 mutant mice from 10 to 35 weeks for males and from 13 to 39 weeks for females. It appears that in Ercc1 mutant mice, dietary restriction, while delaying aging, also attenuates accumulation of genome-wide DNA damage and preserves transcriptional output, likely contributing to improved cell viability. Spermatogenesis and oogenesis: Both male and female Ercc1-deficient mice are infertile. The DNA repair function of Ercc1 appears to be required in both male and female germ cells at all stages of their maturation. The testes of Ercc1-deficient mice have an increased level of 8-oxoguanine in their DNA, suggesting that Ercc1 may have a role in removing oxidative DNA damage.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nihon Kohden** Nihon Kohden: Nihon Kohden Corporation (日本光電工業株式会社, Nihon Kōdenkōgyō Kabushiki-gaisha) is a leading Tokyo-based manufacturer, developer and distributor of medical electronic equipment, which includes EEGs, EMG measuring systems, ECGs, patient monitors, invasive and non-invasive ventilators, defibrillators, AEDs and clinical information systems, with subsidiaries in the U.S., Europe and Asia. The company's products are now used in more than 120 countries, and it is the largest supplier of EEG products worldwide. In 1972, Takuo Aoyagi, a researcher at the company, invented and patented the basic principles of pulse oximetry. Two years later he developed the world's first pulse oximeter, the OLV-5100, which has helped improve patient safety during anaesthesia.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bromochlorofluoromethane** Bromochlorofluoromethane: Bromochlorofluoromethane, or fluorochlorobromomethane, is a chemical compound and trihalomethane derivative with the chemical formula CHBrClF. As one of the simplest possible stable chiral compounds, it is useful for fundamental research into this area of chemistry. However, its relative instability to hydrolysis and lack of suitable functional groups made separation of the enantiomers of bromochlorofluoromethane especially challenging, and this was not accomplished until almost a century after it was first synthesised, in March 2005, though it has now been done by a variety of methods. More recent research using bromochlorofluoromethane has focused on its potential use for experimental measurement of parity violation, a major unsolved problem in quantum physics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Botanical prospecting for uranium** Botanical prospecting for uranium: Botanical prospecting for uranium is a method of finding uranium deposits either by observation of plant life growing on the surface, or by geochemical analysis of plant material. Botanical prospecting for uranium: The history of uranium prospecting, especially in the Colorado Plateau of North America, has seen several methods of identifying likely ore body locations. The use of radiation detectors, such as Geiger counters and scintillation counters, is one such method. Another widely used method relies on knowledge of the geologic history of an area, such as locating a geologic formation known to host ore deposits. United States: During the early efforts to locate uranium deposits in the United States, the U.S. Geological Survey conducted studies of prospecting through botanical surveys. These studies examined three methods. Each method begins with the identification of an area of interest. This area is then gridded off, which allows the prospector to map samples to specific locations on the ground. United States: Plant morphology variations The first method, not widely used in the Colorado Plateau, looks for physiologic and morphologic changes in plants growing in or around ore bodies. A survey of plants in the gridded area is conducted. Growth habits and rates are compared with those of known normal plants, and areas with high rates of change in either physiology or morphology indicate likely spots for further prospecting. This method is time consuming, and is not useful in all areas. United States: Deep-rooted plants The second method uses a survey of deep-rooted plants in an area of interest. This works because the plant roots carry uranium to the surface, where it is concentrated in growing areas of the plant. Juniper or saltbush is usually used, as they are known uranium concentrators. Samples of tree branch tips and leaves are taken from each area in the grid. These samples are then sent to a laboratory for analysis. Concentrations of more than 1 part per million (> 1 ppm) of uranium indicate likely areas to investigate further, through drilling or digging. This method provides information about likely ore bodies down to a depth of between 50 and 70 feet, and is generally good in areas where mineralized beds form broad flat benches, so that a grid pattern can be used. United States: Indicator plant species The third method looks for concentrations of indicator plant species in an area of interest. Some uranium ore bodies contain higher concentrations of certain elements, such as selenium, than the surrounding host rock in which they are found. Certain plants that concentrate these elements act as indicator species for likely ore body locations. Mapping these plants provides information about areas in which further prospecting should be done. For example, in areas such as the Colorado Plateau, various species of Astragalus are selenium concentrators (A. pattersoni, A. preussi, A. thompsonae). Other indicator plants for sulfur and calcium, such as Eriogonum inflatum and Oenothera caespitosa, help to identify likely areas also, especially in conjunction with the selenium indicators. Other regions: In areas outside the Colorado Plateau, such as in South Australia or Saskatchewan, Canada, other plants would naturally be used.
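Once laboratory assays come back, the deep-rooted-plant method described above amounts to simple bookkeeping: map each assay onto its grid cell and flag the cells above the roughly 1 ppm cutoff for follow-up drilling or digging. The sketch below is a minimal illustration of that step only; the grid coordinates, assay values, and the flag_anomalies helper are hypothetical and are not drawn from any actual survey.

```python
# Illustrative sketch only: the grid coordinates, assay values, and the
# flag_anomalies helper are hypothetical, not from any real survey.
# It mirrors the bookkeeping described above: map laboratory uranium
# assays (in ppm) onto the sampling grid and flag cells above the
# ~1 ppm cutoff for follow-up drilling or digging.

THRESHOLD_PPM = 1.0  # assumed cutoff, taken from the description above


def flag_anomalies(assays, threshold=THRESHOLD_PPM):
    """Return the grid cells whose uranium assay exceeds the threshold.

    assays: dict mapping (row, col) grid cells to uranium concentration in ppm.
    """
    return {cell: ppm for cell, ppm in assays.items() if ppm > threshold}


if __name__ == "__main__":
    # Hypothetical assay results for a 3 x 3 sampling grid.
    assays = {
        (0, 0): 0.2, (0, 1): 0.4, (0, 2): 1.8,
        (1, 0): 0.3, (1, 1): 2.5, (1, 2): 0.9,
        (2, 0): 0.1, (2, 1): 0.6, (2, 2): 0.5,
    }
    for cell, ppm in sorted(flag_anomalies(assays).items()):
        print(f"grid cell {cell}: {ppm} ppm exceeds threshold; investigate further")
```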
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Obstetrical forceps** Obstetrical forceps: Obstetrical forceps are a medical instrument used in childbirth. Their use can serve as an alternative to the ventouse (vacuum extraction) method. Medical uses: Forceps births, like all assisted births, should only be undertaken to help promote the health of the mother or baby. In general, a forceps birth is likely to be safer for both the mother and baby than the alternatives – either a ventouse birth or a caesarean section – although caveats such as operator skill apply. Advantages of forceps use include avoidance of caesarean section (and the short and long-term complications that accompany this), reduction of delivery time, and general applicability with cephalic presentation (head presentation). Common complications include the possibility of bruising the baby and causing more severe vaginal tears (perineal laceration) than would otherwise be the case (although it is important to recognise that almost all women will sustain some form of tear when delivering their first baby). Severe and rare complications (occurring less frequently than 1 in 200) include nerve damage, Descemet's membrane rupture, skull fractures, and cervical cord injury. Medical uses: Maternal factors for use of forceps include: maternal exhaustion; prolonged second stage of labour; maternal illness such as heart disease, hypertension, glaucoma, aneurysm, or other conditions that make pushing difficult or dangerous; hemorrhaging; and analgesic drug-related inhibition of maternal effort (especially with epidural/spinal anaesthesia). Fetal factors for use of forceps include: non-reassuring fetal heart tracing; fetal distress; and after-coming head in breech delivery. Complications: For the baby: cuts and bruises; increased risk of facial nerve injury (usually temporary); increased risk of clavicle fracture (rare); increased risk of intracranial hemorrhage, sometimes leading to death (4/10,000); increased risk of damage to cranial nerve VI, resulting in strabismus. For the mother: increased risk of perineal lacerations, pelvic organ prolapse, and incontinence; increased risk of injury to vagina and cervix; increased postnatal recovery time and pain; increased difficulty evacuating during recovery time. Structure: Obstetric forceps consist of two branches (blades) that are positioned around the head of the fetus. These branches are defined as left and right depending on the side of the mother's pelvis to which they will be applied. The branches usually, but not always, cross at a midpoint, which is called the articulation. Most forceps have a locking mechanism at the articulation, but a few have a sliding mechanism instead that allows the two branches to slide along each other. Forceps with a fixed lock mechanism are used for deliveries where little or no rotation is required, as when the fetal head is in line with the mother's pelvis. Forceps with a sliding lock mechanism are used for deliveries requiring more rotation. The blade of each forceps branch is the curved portion that is used to grasp the fetal head. The forceps should surround the fetal head firmly, but not tightly. The blade characteristically has two curves, the cephalic and the pelvic curves. The cephalic curve is shaped to conform to the fetal head. The cephalic curve can be rounded or rather elongated depending on the shape of the fetal head. The pelvic curve is shaped to conform to the birth canal and helps direct the force of the traction under the pubic bone.
Forceps used for rotation of the fetal head should have almost no pelvic curve. The handles are connected to the blades by shanks of variable lengths. Forceps with longer shanks are used if rotation is being considered. Structure: Anglo-American types All American forceps are derived from French forceps (long forceps) or English forceps (short forceps). Short forceps are applied to a fetal head that has already descended significantly in the maternal pelvis (i.e., proximal to the vagina). Long forceps are able to reach a fetal head still in the middle or even in the upper part of the maternal pelvis. In present practice, it is uncommon to use forceps to access a fetal head in the upper pelvis. So, short forceps are preferred in the UK and USA. Long forceps are still in use elsewhere. Structure: Simpson forceps (1848) are the most commonly used among the types of forceps and have an elongated cephalic curve. These are used when there is substantial molding, that is, temporary elongation of the fetal head as it moves through the birth canal. Elliot forceps (1860) are similar to Simpson forceps but with an adjustable pin in the end of the handles which can be drawn out as a means of regulating the lateral pressure on the handles when the instrument is positioned for use. They are used most often with women who have had at least one previous vaginal delivery because the muscles and ligaments of the birth canal provide less resistance during second and subsequent deliveries. In these cases the fetal head may thus remain rounder. Kielland forceps (1915, Norwegian) are distinguished by having no angle between the shanks and the blades and a sliding lock. The pelvic curve of the blades is identical to that of all other forceps. The common misperception that there is no pelvic curve has become so entrenched in the obstetric literature that it may never be overcome, but the curve can be demonstrated by holding a blade of Kielland's against any other forceps of one's choice. Kielland forceps are probably the most common forceps used for rotation. The sliding mechanism at the articulation can be helpful in asynclitic births (when the fetal head is tilted to the side), since the head is no longer in line with the birth canal. Because the handles, shanks, and blades are all in the same plane, the forceps can be applied in any position to effect rotation. Because the shanks and handles are not angled, the forceps cannot be applied to a high station as readily as those with the angle, since the shanks impinge on the perineum. Structure: Wrigley's forceps, named after Arthur Joseph Wrigley, are used in low or outlet deliveries (see explanations below), when the maximum diameter is about 2.5 cm (0.98 in) above the vulva. Wrigley's forceps were designed for use by general practitioner obstetricians, having the safety feature of an inability to reach high into the pelvis. Obstetricians now use these forceps most commonly in cesarean section delivery where manual traction is proving difficult. The short length results in a lower chance of uterine rupture. Structure: Piper's forceps have a perineal curve to allow application to the after-coming head in breech delivery. Technique: The cervix must be fully dilated and retracted and the membranes ruptured. The urinary bladder should be empty, perhaps with the use of a catheter. High forceps are never indicated in the modern era. Mid forceps can occasionally be indicated but require operator skill and caution. The station of the head must be at the level of the ischial spines.
The woman is placed on her back, usually with the aid of stirrups or assistants to support her legs. A regional anaesthetic (usually either a spinal, epidural or pudendal block) is used to help the mother remain comfortable during the birth. Ascertaining the precise position of the fetal head is paramount, and though historically this was accomplished by feeling the fetal skull suture lines and fontanelles, in the modern era confirmation with ultrasound is essentially mandatory. At this point, the two blades of the forceps are individually inserted, the left blade first for the commonest occipito-anterior position (or the posterior blade first for a transverse position), and then locked. The position on the baby's head is checked. The fetal head is then rotated to the occiput anterior position if it is not already in that position. An episiotomy may be performed if necessary. The baby is then delivered with gentle (maximum 30 lbf or about 130 newtons) traction in the axis of the pelvis. Technique: Outlet, low, mid or high The accepted clinical standard classification system for forceps deliveries according to station and rotation was developed by the American College of Obstetricians and Gynecologists (ACOG) and consists of: Outlet forceps delivery, where the forceps are applied when the fetal head has reached the perineal floor and its scalp is visible between contractions. This type of assisted delivery is performed only when the fetal head is in a straight forward or backward vertex position or in slight rotation (less than 45 degrees to the right or left) from one of these positions. Technique: Low forceps delivery, when the baby's head is at +2 station or lower. There is no restriction on rotation for this type of delivery. Midforceps delivery, when the baby's head is above +2 station. There must be head engagement before it can be carried out. High forceps delivery is not performed in modern obstetrics practice. It would be a forceps-assisted vaginal delivery performed when the baby's head is not yet engaged. History: The obstetric forceps were invented by the eldest son of the Chamberlen family of surgeons. The Chamberlens were French Huguenots from Normandy who worked in Paris before they migrated to England in 1569 to escape the religious violence in France. William Chamberlen, the patriarch of the family, was most likely a surgeon; he had two sons, both named Pierre, who became maverick surgeons and specialists in midwifery. William and the eldest son practiced in Southampton and then settled in London. The inventor was probably the eldest, Peter Chamberlen the elder, who became obstetrician-surgeon to Queen Henrietta Maria, wife of King Charles I of England and daughter of Henry IV, King of France. He was succeeded by his nephew, Dr. Peter Chamberlen (barber-surgeons were not doctors in the sense of physicians), as royal obstetrician. The success of this dynasty of obstetricians with the Royal family and high nobles was related in part to the use of this "secret" instrument, allowing delivery of a live child in difficult cases. History: In fact, the instrument was kept secret for 150 years by the Chamberlen family, although there is evidence for its presence as far back as 1634. Hugh Chamberlen the elder, grandnephew of Peter the eldest, tried to sell the instrument in Paris in 1670, but the demonstration he performed in front of François Mauriceau, responsible for the Paris Hôtel-Dieu maternity, was a failure which resulted in the death of mother and child.
The secret may have been sold by Hugh Chamberlen to Dutch obstetricians at the start of the 18th century in Amsterdam, but there are doubts about the authenticity of what was actually provided to buyers. History: The forceps were used most notably in difficult childbirths. The forceps could avoid some infant deaths that occurred when previous approaches (involving hooks and other instruments) extracted the baby in parts. In the interest of secrecy, the forceps were carried into the birthing room in a lined box and would only be used once everyone was out of the room and the mother blindfolded. Models derived from the Chamberlen instrument finally appeared gradually in England and Scotland in 1735. About 100 years after the invention of the forceps by Peter Chamberlen Sr., a surgeon by the name of Jan Palfijn presented his obstetric forceps to the Paris Academy of Sciences in 1723. They contained parallel blades and were called the Hands of Palfijn. History: These "hands" were possibly the instruments described and used in Paris by Gregoire father and son, Dussée, and Jacques Mesnard. In 1813, Peter Chamberlen's midwifery tools were discovered at Woodham Mortimer Hall near Maldon (UK) in the attic of the house. The instruments were found along with gloves, old coins and trinkets. The tools discovered also included a pair of forceps that were assumed to have been invented by the father of Peter Chamberlen because of the nature of the design. The Chamberlen family's forceps were based on the idea of separating the two branches of a "sugar clamp" (like those used to remove "stones" from the bladder), which were put in place one after another in the birth canal. This was not possible with the conventional tweezers previously tested. However, they could only succeed in a maternal pelvis of normal dimensions and on fetal heads already well engaged (i.e., well lowered into the maternal pelvis). Abnormalities of the pelvis were much more common in the past than today, which complicated the use of Chamberlen forceps. The absence of pelvic curvature of the branches (vertical curvature to accommodate the anatomical curvature of the maternal sacrum) prevented the blades from reaching the upper part of the pelvis and exerting traction in the natural axis of the pelvic excavation. History: In 1747, the French obstetrician André Levret published Observations sur les causes et accidents de plusieurs accouchements laborieux (Observations on the Causes and Accidents of Several Difficult Deliveries), in which he described his modification of the instrument to follow the curvature of the maternal pelvis, this "pelvic curve" allowing a grip on a fetal head still high in the pelvic excavation, which could assist in more difficult cases. History: This improvement was published in 1751 in England by William Smellie in the book A Treatise on the Theory and Practice of Midwifery. After this fundamental improvement, the forceps would become a common obstetrical instrument for more than two centuries. History: The last improvement of the instrument was added in 1877 by a French obstetrician, Stéphane Tarnier, in "descriptions of two new forceps". This instrument featured a traction system misaligned with the instrument itself, sometimes called the "third curvature of the forceps". This particularly ingenious traction system allowed the forceps to exert traction on the head of the child following the axis of the maternal pelvic excavation, which had never been possible before.
History: Tarnier's idea was to mechanically separate the grasping of the fetal head (between the forceps blades), on which the operator does not intervene after their correct positioning, from a mechanical accessory set on the forceps itself, the "tractor", on which the operator exerts the traction needed to pull the fetal head down along the correct axis of the pelvic excavation. Tarnier forceps (and their multiple derivatives under other names) remained the most widely used system in the world until the development of the cesarean section. History: Forceps had a profound influence on obstetrics, as they allowed for the speedy delivery of the baby in cases of difficult or obstructed labour. Over the course of the 19th century, many practitioners attempted to redesign the forceps, so much so that the Royal College of Obstetricians and Gynaecologists' collection has several hundred examples. In recent decades, however, with the ability to perform a cesarean section relatively safely, and the introduction of the ventouse or vacuum extractor, the use of forceps and training in the technique of their use have sharply declined. History: Historical role in the medicalisation of childbirth The introduction of the obstetrical forceps marked a major advance in the medicalisation of childbirth. Before the 18th century, childbirth was thought of as an event that could be overseen by a female relative. Usually, if a doctor had to get involved that meant something had gone wrong. In this era there were no female doctors. Since male doctors were called in only under extreme circumstances, the act of childbirth was thought to be better known to a midwife or female relative than to a male doctor. Usually the male doctor's job was to save the mother's life if, for example, the baby had become stuck on the way out of the mother. History: Before the obstetrical forceps, this had to be done by cutting the baby out piece by piece. In other cases, if the baby was deemed undeliverable, then the doctor would use a tool called a crochet. This was used to crush the baby's skull, allowing the baby to be pulled out of the mother's womb. In still other cases, a caesarean section (C-section) could be performed, but this would almost always result in the mother's death. "In addition, women who had forceps deliveries had shorter after-childbirth complications than those who had caesarean sections performed." These procedures came with various risks to the mother's health, along with the death of the baby. History: However, with the introduction of the obstetrical forceps, the male doctor had a more important role. In many cases, they could actually save the baby's life if called early enough. Although the use of the forceps in childbirth came with its own set of risks, the positives included a significant decrease in risk to the mother, a decrease in child morbidity, and a decreased risk to the baby. After the forceps were made public around 1720, they gave male doctors a way to assist in and even oversee childbirths. History: Around this time, in large cities such as London and Paris, some men devoted themselves to obstetrical practice. It became stylish among wealthy women of the era to have their childbirth overseen by male midwives. A notable male midwife was William Hunter. He popularised obstetrics. "In 1762, he was appointed as obstetrician to Queen Charlotte."
In addition, with the use of forceps, male doctors established lying-in hospitals to provide safe, somewhat advanced obstetrical care. History: Historical complications Childbirth was not considered a medical practice before the 18th century. It was mostly overseen by a midwife, mother, stepmother, neighbor, or any female relative. Around the 19th and 20th centuries, childbirth was considered dangerous for women. The introduction of obstetrical forceps allowed non-medical professionals, such as the aforementioned individuals, to continue to oversee childbirths. In addition, this gave some of the public more comfort in trusting childbirth oversight to common people. However, the introduction of obstetrical forceps also had a negative effect: because there was no oversight of childbirth by any kind of medical professional, the practice was exposed to unnecessary risks and complications for the fetus and mother. These risks could range from minimal effects to lifetime consequences for both individuals. The baby could develop cuts and bruises on various body parts due to the forcible squeezing of his or her body through the mother's vagina. In addition, there could be bruising on the baby's face if the forceps' handler were to squeeze too tightly. In some extreme cases, this could cause temporary or permanent facial nerve injury. Furthermore, if the forceps' handler were to twist his or her wrist while the grip was on the baby's head, this could twist the baby's neck and damage a cranial nerve, resulting in strabismus. In rare cases, a clavicle fracture to the baby could occur. The introduction of obstetrical forceps also came with complications for the mother during and after childbirth. The use of the forceps gave rise to an increased risk of cuts and lacerations along the vaginal wall. This, in turn, would increase post-operative recovery time and the pain experienced by the mother. In addition, the use of forceps could cause more difficulty evacuating during the recovery time compared with a mother whose delivery did not involve forceps. While some of these risks and complications were very common, in general, many people overlooked them and continued to use forceps.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bioorganic & Medicinal Chemistry Letters** Bioorganic & Medicinal Chemistry Letters: Bioorganic & Medicinal Chemistry Letters is a scientific journal focusing on the results of research on the molecular structure of biological organisms and the interaction of biological targets with chemical agents. It is published by Elsevier, which also publishes Bioorganic & Medicinal Chemistry for longer works.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Encyclopedia of Life Support Systems** Encyclopedia of Life Support Systems: The Encyclopedia of Life Support Systems (EOLSS) is an integrated compendium of twenty-one encyclopedias. The first Earth Summit of 1992, held in Rio de Janeiro, issued a document that is now famous as Agenda 21. This document refers to the Earth's life support systems, considering the whole of our planet as a grand intensive care unit that supports all forms of life (both natural and human-engineered systems). Encyclopedia of Life Support Systems: The Encyclopedia of Life Support Systems (EOLSS) is based on this concept and the above definition of 'life support systems'. It is intended to be a source of knowledge conveniently accessible at a single location, knowledge that could be useful in providing an understanding of global systems and issues, with the potential to render decisions well informed to ensure the ability of our planet to support life. Unlike most encyclopedias, the contents of which are alphabetically arranged, EOLSS has a thematic organization with three levels: the Theme Level presents broad perspectives of major subjects; the Topic Level presents perspectives of the special topics within those subjects; and the Article Level gives in-depth presentations of the subjects in their various aspects. Some themes have a three-level structure and some a two-level structure. The EOLSS web of knowledge is woven over this hierarchical structure through cross-reference links. It can be regarded as an 'encyclopedia of encyclopedias' (now 21 in number), presenting a wide range of major foundation subjects in a process of gradual development, from a broad overview to great detail. Within these twenty-one on-line encyclopedias, there are hundreds of Themes, each of which has been compiled under the editorial supervision of a recognized world expert or a team of experts such as an International Commission specially appointed for the purpose. Each of these 'Honorary Theme Editors' was responsible for selection and appointment of authors to produce the material specified by EOLSS. On average each Theme contains about thirty chapters. It deals in detail with interdisciplinary subjects, but it is also disciplinary, as each major core subject is covered in great depth by world experts. “The EOLSS is different from traditional encyclopedias. It is the result of an unprecedented global effort that has attempted to forge pathways between disciplines in order to address contemporary problems,” said UNESCO Director-General Koïchiro Matsuura. “A source-book of knowledge that links together our concern for peace, progress, and sustainable development, the EOLSS draws sustenance from the ethics, science and culture of peace. At the same time, it is a forward-looking publication, designed as a global guide to professional practice, education, and heightened social awareness of critical life support issues. In particular, the EOLSS presents perspectives from regions and cultures around the world, and seeks to avoid geographic, racial, cultural, political, gender, age, or religious bias.” It is regarded as the largest comprehensive professional publication carrying state-of-the-art, thematically organized subject matter for a wide audience at the university level, with contributions from thousands of experts from over 101 countries. It is an authoritative resource for education, research and policy making in the 21st century.
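The thematic organization described above is essentially a shallow hierarchy (Theme, then optionally Topic, then Article) whose separate branches are tied together by cross-reference links. The following minimal sketch only illustrates that shape; the Node class and every title in it are invented for the example and do not come from EOLSS itself.

```python
# Illustrative sketch only: the Node class and all titles below are
# invented for the example; EOLSS publishes no such data structure.
# It models the Theme -> Topic -> Article hierarchy described above,
# with cross-reference links tying separate branches into one web.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    title: str
    level: str                                   # "theme", "topic", or "article"
    children: List["Node"] = field(default_factory=list)
    cross_refs: List["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        self.children.append(child)
        return child


# A three-level theme and a two-level theme, since the text notes both occur.
water = Node("Hypothetical Water Theme", "theme")
hydrology = water.add(Node("Hypothetical Hydrology Topic", "topic"))
runoff = hydrology.add(Node("Hypothetical Surface Runoff Article", "article"))

energy = Node("Hypothetical Energy Theme", "theme")
solar = energy.add(Node("Hypothetical Solar Power Article", "article"))

# Cross-reference links weave the separate hierarchies together.
runoff.cross_refs.append(solar)


def walk(node: Node, depth: int = 0) -> None:
    """Print the hierarchy, one indented line per node."""
    print("  " * depth + f"[{node.level}] {node.title}")
    for child in node.children:
        walk(child, depth + 1)


walk(water)
walk(energy)
```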
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trailer light converter** Trailer light converter: A trailer light converter is an electrical component used to connect the wiring of a trailer to that of a towing vehicle.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trinity Universe (setting)** Trinity Universe (setting): The Trinity Universe (also called the Æon Continuum or the Æonverse) is the shared science fiction setting created by White Wolf Publishing. Its component game lines include: Adventure!, a pulp-action game set in 1924; Aberrant, a superpowers game set in the near future (from 2008 to 2015); and Trinity, a science fiction game in which players take on the role of psions in the far future (from 2120 to 2122). History: ÆON, later known as Trinity, was a science-fiction game intended to launch a whole new series of games, a trilogy. Rob Hatch's near-future, superheroic Aberrant (1999) had already been in development and was remolded to fit into the "Trinity Universe" setting. By the time the third "Trinity" book was published, it was obvious that the line suffered from insufficient sales, and thus Adventure! (2001), the pulp component of the RPG trilogy, was released as a standalone book. White Wolf's ArtHaus imprint was eventually also put in charge of the "Trinity" line, and it produced d20 Trinity books to test the waters for White Wolf's other universes; however, Aberrant d20 (2004), Adventure d20 (2004) and Trinity d20 (2004) came and went at a time when the d20 market was already weakened.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Square Pegs (Hong Kong TV series)** Square Pegs (Hong Kong TV series): Square Pegs (Traditional Chinese: 戇夫成龍) was a Hong Kong television series broadcast in 2003. The program's title is an abbreviated reference to the English idiomatic phrase "square peg in a round hole." The series was the runaway success of 2003, commanding a peak viewership of 46 points, approximately 3.5 million viewers or roughly half of Hong Kong's population, during the last week of its broadcast, and breaking TVB's ten-year ratings record. It also went on to win four awards for its two lead actors at the TVB 36th Anniversary Awards, and made both Roger Kwok and Jessica Hsuan household names in the territory. Synopsis: Choi Fong (Jessica Hsuan) is the eldest daughter of the Ling family, which includes her oft-absent father, her stepmother and her stepsister Choi Dip (Leila Tong). Like Cinderella, she undertakes all the housework, does the grocery shopping, cooks for the family, and prevents her father's antique collection from falling prey to her stepmother's gambling appetite. Synopsis: One day, Mrs. Ling's vice finally catches up with her, and to pay a particular debt, Choi Dip is consigned to marry the village idiot Ding Seung Wong, or Ah Wong (Roger Kwok), who has about the same intelligence as an eight-year-old child. Unwilling to commit her real daughter to a life of misery, Mrs. Ling arranges a double wedding and switches the brides so that Choi Fong ends up marrying Ah Wong, while Choi Dip marries Bao Gai Zong (Raymond Cho), the scion of the wealthy Bao family. And so begin Choi Fong's merry schemes to escape from her marriage with Ah Wong who, to her consternation, takes an immediate liking to her and clings to her like sticky biscuit dough. Synopsis: After several failed attempts at evading her fate, Choi Fong gradually resigns herself to playing Ah Wong's "lou por jai" or "little wife". One day, a strange girl, Yeung Pui Kwan (Winnie Yeung), arrives in town and claims Ah Wong as her fiancé. Choi Fong soon learns that Ah Wong was actually a bright young man and the real heir of the Bao family, who inexplicably disappeared two years ago, only to reappear with the IQ of an eight-year-old. Synopsis: Hoping to return Ah Wong to his rightful babysitter as soon as possible, Choi Fong agrees to help Pui Kwan get to the root of the mystery. So the girls embark on a campaign to expose the bogus Bao Gai Zong, reinstate Ah Wong as the rightful heir, and help him regain his memory. But just as Ah Wong begins to show signs of recovery, Choi Fong realises to her dismay that she has fallen for him... Awards and nominations: Roger Kwok won his first "Best Actor in a Leading Role" Award for his role as Ding Sheung Wong at the 36th TVB Anniversary Awards in 2003.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**People (Windows)** People (Windows): People is a contact management app and address book included in Microsoft's Windows 8 and Windows 10. It allows a user to organize and link contacts from different email accounts. People has a unique graphical interface, unlike Windows Contacts' File Explorer-based interface, based on the Metro design language that had already been used for Outlook.com and the integrated online People service. In addition to being an address book, it provides a list of recent mail conversations with a selected contact. It also used to be a social media hub, in which users could integrate their social networking accounts (e.g. Twitter), but API changes in both Windows and social media services caused this functionality to break. People works with other Metro-style apps, but it has its own front-end interface and can be opened by end users. Unlike Windows Contacts, it does not currently allow users to import or export .pst files, vCard files, Windows Address Book files, or other files directly. Instead, it gathers contact information from email accounts the user has set up on other services in Windows, such as Mail and Calendar, Skype Preview, or the Xbox app. Changes, additions, and deletions made in the People app will be exported to the corresponding email accounts. Users can select which accounts should display contact info in People. People (Windows): The People app supports Outlook.com People, Google Contacts, iCloud contacts, Yahoo! contacts, and other contact lists that can be imported by logging into an email account. Development: The first version of People was a text-heavy app added to Windows 8 as one of many apps written to run full-screen or snapped as part of Microsoft's Metro design language philosophy. It is one of three apps on Windows that originate from Microsoft Outlook, from which the Mail and Calendar apps also originate. Structurally, the three apps were one, but each had its own user interface. As with many Microsoft apps introduced for Windows 8, many of the features and controls were hidden in the Charms Bar or in a menu at the bottom of the screen triggered by right-clicking, and the app relies on horizontal scrolling through sets of lists. When a user with a Microsoft account added an email account on one computer with Windows 8 People, the account would be automatically added to all other Windows 8 computers the user was logged into. Development: During the initial development of Windows 10, Microsoft deprecated the functionality of the Windows 8 Mail, Calendar, and People apps. It rebuilt the apps with new Windows 10 APIs later in development. While Mail and Calendar were rebuilt as one underlying app, the new People is now a separate app that still interacts with Mail and Calendar. An early concept image for Windows 10's People app shows the hamburger menu and history feature, neither of which was present in the initial release of Windows 10. It also shows three features that have not been released: a what's-new panel, a link to a messaging app, and a section for managing group contacts. It is unknown if these features are still planned. Although People replicates much of the functionality of Windows Contacts, it is not a true replacement, as Contacts still exists and functions properly in the most recent release of Windows 10. Some apps, including Mail and Calendar, formerly used Windows Contacts to manage contacts but switched to using People to manage contacts.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Maximum entropy thermodynamics** Maximum entropy thermodynamics: In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review. Maximum Shannon entropy: Central to the MaxEnt thesis is the principle of maximum entropy. It demands as given some partly specified model and some specified data related to the model. It selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy, $S_I = -\sum_i p_i \ln p_i$. Maximum Shannon entropy: This is known as the Gibbs algorithm, having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function). Maximum Shannon entropy: A direct connection is thus made between the equilibrium thermodynamic entropy $S_{Th}$, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables: $S_{Th}(\text{eqm}) = k_B\, S_I(P, V, T, \ldots)$. Here $k_B$, the Boltzmann constant, has no fundamental physical significance, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constant). Maximum Shannon entropy: However, the MaxEnt school argues that the MaxEnt approach is a general technique of statistical inference, with applications far beyond this. It can therefore also be used to predict a distribution for "trajectories" Γ "over a period of time" by maximising $S_I = -\sum_\Gamma p_\Gamma \ln p_\Gamma$. This "information entropy" does not necessarily have a simple correspondence with thermodynamic entropy. But it can be used to predict features of nonequilibrium thermodynamic systems as they evolve over time. Maximum Shannon entropy: For non-equilibrium scenarios, in an approximation that assumes local thermodynamic equilibrium, the maximum entropy approach yields the Onsager reciprocal relations and the Green–Kubo relations directly. The approach also creates a theoretical framework for the study of some very special cases of far-from-equilibrium scenarios, making the derivation of the entropy production fluctuation theorem straightforward. For non-equilibrium processes, just as for macroscopic descriptions, a general definition of entropy for microscopic statistical mechanical accounts is also lacking. Maximum Shannon entropy: Technical note: For the reasons discussed in the article differential entropy, the simple definition of Shannon entropy ceases to be directly applicable for random variables with continuous probability distribution functions.
Instead the appropriate quantity to maximize is the "relative information entropy", $H_c = -\int p(x)\,\ln\frac{p(x)}{m(x)}\,dx$. Maximum Shannon entropy: $H_c$ is the negative of the Kullback–Leibler divergence, or discrimination information, of m(x) from p(x), where m(x) is a prior invariant measure for the variable(s). The relative entropy $H_c$ is always less than or equal to zero, and can be thought of as (the negative of) the number of bits of uncertainty lost by fixing on p(x) rather than m(x). Unlike the Shannon entropy, the relative entropy $H_c$ has the advantage of remaining finite and well-defined for continuous x, and invariant under 1-to-1 coordinate transformations. The two expressions coincide for discrete probability distributions if one can make the assumption that m(xi) is uniform, i.e. the principle of equal a-priori probability, which underlies statistical thermodynamics. Philosophical implications: Adherents to the MaxEnt viewpoint take a clear position on some of the conceptual/philosophical questions in thermodynamics. This position is sketched below. Philosophical implications: The nature of the probabilities in statistical mechanics. Jaynes (1985, 2003, et passim) discussed the concept of probability. According to the MaxEnt viewpoint, the probabilities in statistical mechanics are determined jointly by two factors: by respectively specified particular models for the underlying state space (e.g. Liouvillian phase space); and by respectively specified particular partial descriptions of the system (the macroscopic description of the system used to constrain the MaxEnt probability assignment). The probabilities are objective in the sense that, given these inputs, a uniquely defined probability distribution will result, the same for every rational investigator, independent of the subjectivity or arbitrary opinion of particular persons. The probabilities are epistemic in the sense that they are defined in terms of specified data and derived from those data by definite and objective rules of inference, the same for every rational investigator. Here the word epistemic, which refers to objective and impersonal scientific knowledge, the same for every rational investigator, is used in the sense that contrasts it with opiniative, which refers to the subjective or arbitrary beliefs of particular persons; this contrast was used by Plato and Aristotle, and remains reliable today. Philosophical implications: Jaynes also used the word 'subjective' in this context because others have used it in this context. He accepted that in a sense, a state of knowledge has a subjective aspect, simply because it refers to thought, which is a mental process. But he emphasized that the principle of maximum entropy refers only to thought which is rational and objective, independent of the personality of the thinker. In general, from a philosophical viewpoint, the words 'subjective' and 'objective' are not contradictory; often an entity has both subjective and objective aspects. Jaynes explicitly rejected the criticism of some writers that, just because one can say that thought has a subjective aspect, thought is automatically non-objective. He explicitly rejected subjectivity as a basis for scientific reasoning, the epistemology of science; he required that scientific reasoning have a fully and strictly objective basis. Nevertheless, critics continue to attack Jaynes, alleging that his ideas are "subjective".
One writer even goes so far as to label Jaynes' approach as "ultrasubjectivist", and to mention "the panic that the term subjectivism created amongst physicists". The probabilities represent both the degree of knowledge and lack of information in the data and the model used in the analyst's macroscopic description of the system, and also what those data say about the nature of the underlying reality. Philosophical implications: The fitness of the probabilities depends on whether the constraints of the specified macroscopic model are a sufficiently accurate and/or complete description of the system to capture all of the experimentally reproducible behavior. This cannot be guaranteed a priori. For this reason MaxEnt proponents also call the method predictive statistical mechanics. The predictions can fail. But if they do, this is informative, because it signals the presence of new constraints needed to capture reproducible behavior in the system, which had not been taken into account. Philosophical implications: Is entropy "real"? The thermodynamic entropy (at equilibrium) is a function of the state variables of the model description. It is therefore as "real" as the other variables in the model description. If the model constraints in the probability assignment are a "good" description, containing all the information needed to predict reproducible experimental results, then that includes all of the results one could predict using the formulae involving entropy from classical thermodynamics. To that extent, the MaxEnt $S_{Th}$ is as "real" as the entropy in classical thermodynamics. Philosophical implications: Of course, in reality there is only one real state of the system. The entropy is not a direct function of that state. It is a function of the real state only through the (subjectively chosen) macroscopic model description. Philosophical implications: Is ergodic theory relevant? The Gibbsian ensemble idealizes the notion of repeating an experiment again and again on different systems, not again and again on the same system. So long-term time averages and the ergodic hypothesis, despite the intense interest in them in the first part of the twentieth century, strictly speaking are not relevant to the probability assignment for the state one might find the system in. Philosophical implications: However, this changes if there is additional knowledge that the system is being prepared in a particular way some time before the measurement. One must then consider whether this gives further information which is still relevant at the time of measurement. The question of how 'rapidly mixing' different properties of the system are then becomes very much of interest. Information about some degrees of freedom of the combined system may become unusable very quickly; information about other properties of the system may go on being relevant for a considerable time. Philosophical implications: If nothing else, the medium and long-run time correlation properties of the system are interesting subjects for experimentation in themselves. Failure to accurately predict them is a good indicator that relevant macroscopically determinable physics may be missing from the model. Philosophical implications: The second law. According to Liouville's theorem for Hamiltonian dynamics, the hyper-volume of a cloud of points in phase space remains constant as the system evolves.
Therefore, the information entropy must also remain constant, if we condition on the original information and then follow each of those microstates forward in time: $\Delta S_I = 0$. However, as time evolves, that initial information we had becomes less directly accessible. Instead of being easily summarizable in the macroscopic description of the system, it increasingly relates to very subtle correlations between the positions and momenta of individual molecules. (Compare to Boltzmann's H-theorem.) Equivalently, it means that the probability distribution for the whole system, in 6N-dimensional phase space, becomes increasingly irregular, spreading out into long thin fingers rather than the initial tightly defined volume of possibilities. Philosophical implications: Classical thermodynamics is built on the assumption that entropy is a state function of the macroscopic variables, i.e., that none of the history of the system matters, so that it can all be ignored. Philosophical implications: The extended, wispy, evolved probability distribution, which still has the initial Shannon entropy $S_{Th}(1)$, should reproduce the expectation values of the observed macroscopic variables at time t2. However it will no longer necessarily be a maximum entropy distribution for that new macroscopic description. On the other hand, the new thermodynamic entropy $S_{Th}(2)$ assuredly does correspond to a maximum entropy distribution, by construction. Therefore, we expect $S_{Th}(2) \geq S_{Th}(1)$. At an abstract level, this result implies that some of the information we originally had about the system has become "no longer useful" at a macroscopic level. At the level of the 6N-dimensional probability distribution, this result represents coarse graining, i.e., information loss by smoothing out very fine-scale detail.
(Note that if we allow ourselves the abilities of Laplace's demon, the consequences of this new information can also be mapped backwards, so our uncertainty about the dynamical state at time t1 is now also reduced from $S_I(1)$ to $S_I(2)$.) Philosophical implications: We know that $S_{Th}(2) > S_I(2)$; but we can now no longer be certain that it is greater than $S_{Th}(1) = S_I(1)$. This then leaves open the possibility for fluctuations in $S_{Th}$. The thermodynamic entropy may go "down" as well as up. A more sophisticated analysis is given by the entropy fluctuation theorem, which can be established as a consequence of the time-dependent MaxEnt picture. Philosophical implications: 3. As just indicated, the MaxEnt inference runs equally well in reverse. So given a particular final state, we can ask, what can we "retrodict" to improve our knowledge about earlier states? However the Second Law argument above also runs in reverse: given macroscopic information at time t2, we should expect it too to become less useful. The two procedures are time-symmetric. But now the information will become less and less useful at earlier and earlier times. (Compare with Loschmidt's paradox.) The MaxEnt inference would predict that the most probable origin of a currently low-entropy state would be as a spontaneous fluctuation from an earlier high-entropy state. But this conflicts with what we know to have happened, namely that entropy has been increasing steadily, even back in the past. Philosophical implications: The MaxEnt proponents' response to this would be that such a systematic failing in the prediction of a MaxEnt inference is a "good" thing. It means that there is clear evidence that some important physical information has been missed in the specification of the problem. If it is correct that the dynamics "are" time-symmetric, it appears that we need to put in by hand a prior probability that initial configurations with a low thermodynamic entropy are more likely than initial configurations with a high thermodynamic entropy. This cannot be explained by the immediate dynamics. Quite possibly, it arises as a reflection of the evident time-asymmetric evolution of the universe on a cosmological scale (see arrow of time). Criticisms: Maximum entropy thermodynamics has attracted important opposition, in part because of the relative paucity of published results from the MaxEnt school, especially with regard to new testable predictions far from equilibrium. The theory has also been criticized on the grounds of internal consistency. For instance, Radu Balescu provides a strong criticism of the MaxEnt school and of Jaynes' work. Balescu states that the theory of Jaynes and coworkers is based on a non-transitive evolution law that produces ambiguous results. Although some difficulties of the theory can be cured, the theory "lacks a solid foundation" and "has not led to any new concrete result". Though the maximum entropy approach is based directly on informational entropy, it is applicable to physics only when there is a clear physical definition of entropy. There is no clear unique general physical definition of entropy for non-equilibrium systems, which are general physical systems considered during a process rather than thermodynamic systems in their own internal states of thermodynamic equilibrium. It follows that the maximum entropy approach will not be applicable to non-equilibrium systems until a clear physical definition of entropy is found.
This problem is related to the fact that heat may be transferred from a hotter to a colder physical system even when local thermodynamic equilibrium does not hold, so that neither system has a well-defined temperature. Classical entropy is defined for a system in its own internal state of thermodynamic equilibrium, which is defined by state variables, with no non-zero fluxes, so that flux variables do not appear as state variables. But for a strongly non-equilibrium system, during a process, the state variables must include non-zero flux variables. Classical physical definitions of entropy do not cover this case, especially when the fluxes are large enough to destroy local thermodynamic equilibrium. In other words, a definition of entropy for non-equilibrium systems in general will need at least to involve specification of the process, including non-zero fluxes, beyond the classical static thermodynamic state variables. The 'entropy' that is maximized needs to be defined suitably for the problem at hand. If an inappropriate 'entropy' is maximized, a wrong result is likely. In principle, maximum entropy thermodynamics does not refer narrowly and only to classical thermodynamic entropy. It is about informational entropy applied to physics, explicitly depending on the data used to formulate the problem at hand. According to Attard, for physical problems analyzed by strongly non-equilibrium thermodynamics, several physically distinct kinds of entropy need to be considered, including what he calls second entropy. Attard writes: "Maximizing the second entropy over the microstates in the given initial macrostate gives the most likely target macrostate." The physically defined second entropy can also be considered from an informational viewpoint.
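To make the Gibbs algorithm described at the start of this article concrete, the following Python sketch finds the maximum-entropy distribution over a handful of discrete energy levels subject to a single expectation-value constraint (a fixed mean energy), solving for the Lagrange multiplier numerically; it also evaluates the relative entropy Hc against a uniform prior m. The energy levels, the target mean, and all names in the code are illustrative assumptions, not data or notation from the article's sources.

```python
# A minimal sketch of the principle of maximum entropy (Gibbs algorithm) for a
# discrete system.  The energy levels and the target mean energy are made-up
# illustrative numbers, not data from any particular physical system.
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical energy levels (arbitrary units)
E_mean = 1.2                          # the "testable information": a fixed expected energy

def mean_energy(beta):
    """Mean energy of the Gibbs distribution p_i proportional to exp(-beta * E_i)."""
    w = np.exp(-beta * E)
    p = w / w.sum()
    return (p * E).sum()

# Solve <E>(beta) = E_mean for the Lagrange multiplier beta.
beta = brentq(lambda b: mean_energy(b) - E_mean, -50.0, 50.0)

p = np.exp(-beta * E)
p /= p.sum()

# Shannon entropy of the MaxEnt distribution (in nats) ...
S = -(p * np.log(p)).sum()

# ... and the relative entropy H_c = -KL(p || m) against a uniform prior m.
m = np.full_like(p, 1.0 / len(p))
H_c = -(p * np.log(p / m)).sum()

print(f"beta = {beta:.4f}, p = {np.round(p, 4)}")
print(f"S_I = {S:.4f} nats, H_c = {H_c:.4f} nats (never positive)")
```

Changing the constraint value and re-solving for beta reproduces the familiar behaviour of the canonical ensemble in miniature: the distribution concentrates on low energy levels for small target means and flattens toward the uniform prior as the constraint approaches the unconstrained average.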
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Saltzer and Schroeder's design principles** Saltzer and Schroeder's design principles: Saltzer and Schroeder's design principles are design principles enumerated by Jerome Saltzer and Michael Schroeder in their 1975 article The Protection of Information in Computer Systems, that from their experience are important for the design of secure software systems. The design principles: Economy of mechanism: Keep the design as simple and small as possible. Fail-safe defaults: Base access decisions on permission rather than exclusion. Complete mediation: Every access to every object must be checked for authority. Open design: The design should not be secret. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users. Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. Work factor: Compare the cost of circumventing the mechanism with the resources of a potential attacker. Compromise recording: It is sometimes suggested that mechanisms that reliably record that a compromise of information has occurred can be used in place of more elaborate mechanisms that completely prevent loss.
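As a small, hypothetical illustration of two of these principles, fail-safe defaults and complete mediation, the Python sketch below routes every access through a single check and denies anything not explicitly permitted; the users, resources, and permission table are invented for the example and are not taken from Saltzer and Schroeder's paper.

```python
# Hypothetical illustration of "fail-safe defaults" and "complete mediation":
# every access passes through one checkpoint, and anything not explicitly
# permitted is denied.  Users, resources, and permissions are made up.
PERMISSIONS = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Fail-safe default: absence of an entry means "deny", not "allow".
    return action in PERMISSIONS.get((user, resource), set())

def access(user: str, resource: str, action: str) -> str:
    # Complete mediation: no code path touches a resource without first
    # passing through is_allowed().
    if not is_allowed(user, resource, action):
        raise PermissionError(f"{user} may not {action} {resource}")
    return f"{user} performed {action} on {resource}"

print(access("bob", "payroll.db", "write"))        # permitted
print(is_allowed("alice", "payroll.db", "write"))  # False: denied by default
```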
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RExcel** RExcel: RExcel is an add-in for Microsoft Excel. It allows access to the statistics package R from within Excel. RExcel: The main features are: data transfer (matrices and data frames) between R and Excel in both directions; running R code from Excel ranges; writing macros calling R to perform calculations; calling R functions from cell formulas, using Excel's auto-update mechanism to trigger recalculation by R; and using Excel as a GUI for R. RExcel works on Microsoft Windows (XP, Vista or 7), with Excel 2003, 2007, 2010, and 2013. RExcel: It uses the statconnDCOM server and, for certain configurations, also the rcom package, to access R from within Excel. The RExcelInstaller package was removed from CRAN due to FOSS license restrictions.[1]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Exchange spring magnet** Exchange spring magnet: An exchange spring magnet is a magnetic material with high coercivity and high saturation properties derived from the exchange interaction between a hard magnetic material and a soft magnetic material, respectively. Coehoorn et al. were the first to observe an actual exchange spring magnet. Exchange spring magnets are cheaper than many magnets containing rare earth/transition metals (RE-TM magnets), as the hard phase of the magnet (which commonly comprises RE-TM material) can be less than 15% of the overall magnet by volume. Principle: First proposed by Kneller and Hawig in 1991, the exchange spring magnet utilizes the epitaxy between hard and soft magnetic materials: the hard material helps retain the soft material's anisotropy, which increases its coercivity. The magnetic hysteresis loop of an exchange spring magnet theoretically takes on a shape resembling that of a summation of its hard and soft magnetic components (as seen in Figure 1), meaning its energy product is higher than those of its components. A magnet's maximum energy product (BH)max, which is roughly proportional to its coercivity (HC) and magnetization saturation (Msat), is used as a metric of its ability to do magnetic work, as (BH)max is twice the magnet's available magnetostatic energy. The exchange spring magnet offers a geometry able to improve upon the previously reported maximum energy products of materials such as rare earth/transition metal complexes; while both materials have sufficiently large HC values and operate at relatively high Curie temperatures, the exchange spring magnet can achieve much higher Msat values than the rare earth/transition metal (RE-TM) complexes. An important component of exchange spring magnets is anisotropy: while exchange spring magnets that are isotropic in bulk still exhibit a greater energy product than many RE-TM magnets, the energy product of their anisotropic form is theorized to be significantly higher. Magnetic energy: Exchange energy. The magnetic moment of a bulk material is the sum of all of its atomic moments. The atomic moments' interactions with each other and with the externally applied field determine the behavior of the magnet. Each atomic magnetic moment tries to orient itself so that the total magnetic energy reaches a minimum. There are generally four types of energy competing with each other to reach equilibrium: each is derived from the exchange coupling effect, magnetic anisotropy, the magnetostatic energy of the magnet, and the magnet's interaction with the external field. Magnetic energy: Exchange coupling is a quantum mechanical effect that keeps adjacent moments aligned with one another. The exchange energy of adjacent moments increases as the angle between the two moments increases: $E_x = A\left(\frac{d\theta}{dx}\right)^2$, where $A = \tfrac{1}{6} n J S^2 \sum_j \Delta r_j^2$ is the exchange constant and $\Delta r_j = r_j - r_i$ is the position vector of neighbor $j$ with respect to site $i$. Typical values of $A$ are on the order of $10^{-11}$ J/m. Anisotropy energy: Magnetic anisotropy energy arises from the crystalline structure of the material. For a simple case, the effect can be modeled by a uniaxial energy distribution. Along an axial direction, called the easy axis, the magnetic moments tend to align. The energy increases if the orientation of a magnetic moment deviates from the easy axis. For uniaxial anisotropy the energy density is $E_a = K \sin^2\theta$, where $K$ is the anisotropy constant and $\theta$ is the angle from the easy axis. Magnetostatic energy: The magnetostatic energy is the energy stored in the field generated by a material's magnetic moments.
The magnet's field reaches its maximum intensity if all the magnetic moments orient in one direction; this is what occurs in a hard magnet. In order to prevent building up the magnetic field, magnetic moments sometimes tend to form loops. That way, the energy stored in the magnetic field can be constrained; this is what occurs in a soft magnet. What determines whether a magnet is hard or soft is the dominant term of its magnetic energy. For hard magnets, the anisotropy constant is relatively large, making the magnetic moments align with the easy axis. The opposite case applies to soft magnets, in which the magnetostatic energy is dominant. Magnetic energy: Another magnetostatic energy arises from interaction with an external field. Magnetic moments naturally try to align with the applied field: $E_{ext} = -MB\sin(\theta)$. Since the magnetostatic energy dominates in the soft magnet, the magnetic moments tend to successfully orient along the external field. Exchange spring magnet: In the exchange spring magnet, the hard phase has high coercivity and the soft phase has high saturation. The hard phase and the soft phase interact through their interface by exchange coupling. Magnetic energy: From left to right in Figure 3, an external field is first applied in an upward direction in order to saturate the magnet. Then, the external field is reversed and starts to demagnetize the magnet. Since the coercivity of the hard phase is relatively high, its moments remain unchanged so as to minimize the anisotropy and exchange energy. The magnetic moments in the soft phase start rotating to align with the applied field. Because of the exchange coupling at the soft/hard interface, the magnetic moments at the soft phase boundary have to align with the adjacent moments in the hard phase. In the regions close to the interface, because of exchange coupling, the chain of magnetic moments acts like a spring. If the external field is increased, more moments in the soft phase rotate downward, and the width of the transition region becomes smaller as the exchange energy density increases. The magnetic moments in the hard phase do not rotate until the external field is high enough that the exchange energy density in the transition region is comparable to the anisotropy energy density in the hard phase. At this point, the rotation of magnetic moments in the soft phase starts to affect the hard phase. As the external field surpasses the hard material's coercivity, the hard magnet becomes fully demagnetized. Magnetic energy: In the previous process, when the magnetic moments in the hard magnet start to rotate, the intensity of the external field is already much higher than the coercivity of the soft phase, but there is still a transition region in the soft phase. If the thickness of the soft phase is less than twice the thickness of the transition region, the soft phase should have a large effective coercivity, smaller than but comparable to the coercivity of the hard phase. Magnetic energy: In a thin soft phase, it is hard for the external field to rotate the magnetic moments, similar to a hard magnet with high saturation magnetization. After applying a high external field to partially demagnetize the magnetic moments in the hard phase and after subsequently removing the external field, the rotated moments in the soft phase can be rotated back by exchange coupling with the hard phase (Figure 5). This phenomenon is shown in the hysteresis loop of an exchange spring magnet (Figure 6).
Magnetic energy: Comparing the exchange spring magnet's hysteresis loop with that of a conventional hard magnet demonstrates that the exchange spring magnet is more likely to recover from an opposing external field. When the external field is removed, the remanent magnetization can recover to a value close to its original. The name "exchange spring magnet" is derived from this reversibility of magnetization. The dimension of the soft phase inside the exchange spring magnet should be kept small enough to retain reversible magnetization. Additionally, the volume fraction of the soft phase needs to be as large as possible in order to achieve a high magnetization saturation. One viable material geometry is to fabricate a magnet by embedding hard particles inside a soft matrix. That way, the soft matrix material occupies the largest volume fraction while being close to the hard particles. The size and spacing of the hard particles are on the scale of nanometers. If the hard magnets are spheres on an fcc space lattice in the soft magnetic phase, the volume fraction of the hard phase can be 9%. Since the total magnetization saturation is summed by volume fraction, it is close to the value of a pure soft phase. Fabrication: The fabrication of an exchange spring magnet requires precise control of the particle-matrix structure at the nanometer scale. Several approaches have been tested, including metallurgical methods, sputtering, and particle self-assembly. Particle self-assembly - 4 nm Fe3O4 nanoparticles and 4 nm Fe58Pt42 nanoparticles dispersed in solution were deposited as compact structures through self-assembly by evaporating the solution. Then, through annealing, a FePt-Fe3Pt nanocomposite magnet was formed. The energy density increased from 117 kJ/m3 for the single-phase Fe58Pt42 to 160 kJ/m3 for the FePt-Fe3Pt nanocomposite. Sputtering - Sm and Co were co-sputtered from elemental targets using a DC magnetron onto a Cr(211) buffer on MgO(110) substrates to create Sm2Co7. An Fe layer was deposited at 300 - 400 °C and capped with Cr. Annealing - Multilayers of Fe and Pt were sputtered from elemental targets onto glass. Varying layer composition and annealing conditions were found to alter the magnetic properties of the final material.
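The competition between the energy terms quoted in the Magnetic energy section can be pictured with a short numerical sketch. The Python below evaluates the exchange, anisotropy, and external-field energy densities for an assumed one-dimensional profile of moment angles, using the expressions as they appear in this article; the material constants, field value, and angle profile are placeholder assumptions rather than parameters of any real hard/soft pair.

```python
# Illustrative only: evaluate the three competing energy densities from the
# article for an assumed 1D profile of magnetic-moment angles theta(x).
# The constants below are placeholder values, not real material parameters.
import numpy as np

A = 1.0e-11      # exchange stiffness, ~1e-11 J/m as quoted in the article
K = 4.0e5        # uniaxial anisotropy constant (J/m^3), assumed
M = 8.0e5        # magnetization (A/m), assumed
B = 0.5          # external field (T), assumed

x = np.linspace(0.0, 20e-9, 201)   # assumed 20 nm transition region
theta = np.pi * x / x[-1]          # angles rotating from 0 to pi across it

dtheta_dx = np.gradient(theta, x)
e_exchange   = A * dtheta_dx**2        # E_x  = A (d(theta)/dx)^2
e_anisotropy = K * np.sin(theta)**2    # E_a  = K sin^2(theta)
e_external   = -M * B * np.sin(theta)  # E_ext = -M B sin(theta), as written in the article

for name, e in [("exchange", e_exchange),
                ("anisotropy", e_anisotropy),
                ("external field", e_external)]:
    print(f"mean {name} energy density: {np.trapz(e, x) / x[-1]:.3e} J/m^3")
```

Narrowing the assumed transition region in this toy profile raises the exchange term relative to the others, which is the qualitative behaviour the article describes when the external field compresses the spring-like transition region.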
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**European Sleep Apnea Database** European Sleep Apnea Database: The European Sleep Apnea Database (ESADA) (also spelled European Sleep Apnoea Database and also referred to as the European Sleep Apnoea Cohort) is a collaboration between European sleep centres as part of the European Cooperation in Science and Technology (COST) Action B 26. The main contractor of the project is the Sahlgrenska Academy at Gothenburg University, Institute of Medicine, Department of Internal Medicine, and the co-ordinator is Jan Hedner, MD, PhD, Professor of Sleep Medicine. The book Clinical Genomics: Practical Applications for Adult Patient Care described ESADA as an example of the kind of initiative which affords an "excellent opportunity" for future collaborative research into genetic aspects of obstructive sleep apnea syndrome (OSAS). Both the European Respiratory Society and the European Sleep Research Society have noted the database resource's impact on cooperative research efforts. History: 2006 – 2010 In 2006 the European Sleep Apnea Database (ESADA) began as an initiative between 27 European sleep study facilities to combine information and compile it into one shared resource. It was formed as part of the European Cooperation in Science and Technology (COST) Action B 26. In addition to financial help from COST, the initiative received assistance from the companies Philips Respironics and ResMed. The database storing the association's resource information is located in Gothenburg, Sweden. The group's goal was twofold: to serve as a reference guide to those researching sleep disorders, and to compile information about how different caregivers treat patients with sleep apnea. A total of 5,103 patients were tracked from March 2007 to August 2009. Data collected on these patients included symptoms experienced, medication, medical history, and sleep data, all entered into an online format for further analysis. Database researchers reported their methodology and results to the American Thoracic Society in 2010, presenting findings on the percentages of metabolic and cardiovascular changes observed in patients with obstructive sleep apnea. The 2010 research resulted from collaboration between 22 study centres across 16 countries in Europe involving 27 researchers. The primary participants who presented to the American Thoracic Society included researchers from: Sahlgrenska University Hospital, Gothenburg, Sweden; Technion – Israel Institute of Technology, Haifa, Israel; National TB & Lung Diseases Research Institute, Warsaw, Poland; CNR Institute of Biomedicine and Molecular, Palermo, Italy; Instituto Auxologico Italiano, Ospedale San Luca, Milan, Italy; and St. Vincent University Hospital, Dublin, Ireland. Their analysis was published in 2010 in the American Journal of Respiratory and Critical Care Medicine. History: 2011 – present In 2011 there were 22 sleep disorder centres in Europe involved in the collaboration. The group published research in 2011 analyzing the percentage of patients with sleep apnea who have obesity. By 2012 the database maintained information on over 12,500 patients in Europe; it also contained DNA samples from 2,600 individuals. ESADA was represented in 2012 at the 21st annual meeting of the European Sleep Research Society in Paris, France, and was one of four European Sleep Research Networks that held a session at the event. Pierre Escourrou and Fadia Jilwan wrote a 2012 article for the European Respiratory Journal after studying data from ESADA involving 8,228 total patients from 23 different facilities.
They analyzed whether polysomnography was a good measure for hypopnea and sleep apnea. Researchers from the department of pulmonary diseases at Turku University Hospital in Turku, Finland, compared variations between sleep centres in the ESADA database and published their findings in the European Respiratory Journal. They looked at the traits of 5,103 patients from 22 centres. They reported on the average age of patients in the database and the prevalence, by region, of performing sleep studies with cardiorespiratory polygraphy. The database added a centre in Hamburg, Germany in 2013, managed by physician Holger Hein. The group's annual meeting in 2013 was held in Edinburgh, United Kingdom, and was run by Renata Riha. By March 2013, there were approximately 13,000 total patients being studied in the program, with about 200 additional patients being added to the database each month. Analysis published by researchers from Italy and Sweden in September 2013 in the European Respiratory Journal examined whether there was a correlation between renal function problems and obstructive sleep apnea. They analyzed data from 17 countries in Europe representing 24 sleep centres and 8,112 total patients. They tested whether patients of different demographics and with other existing health problems had a changed probability of kidney function problems if they concurrently had obstructive sleep apnea. In 2014, researchers released data on 5,294 patients from the database, comparing the prevalence of sleep apnea with increased blood sugar. Their results were published in the European Respiratory Journal. They studied glycated hemoglobin levels in the patients and compared them with the measured severity of sleep apnea, analyzing glycated hemoglobin levels in individuals with less severe sleep apnea and in those with more severe sleep apnea. As of 20 March 2014 the database included information on a total of 15,956 patients. A 2014 article in the European Respiratory Journal drawing from the ESADA analyzed whether lack of adequate oxygen during a night's sleep was an indicator for high blood pressure. Reception: In the 2013 book Clinical Genomics: Practical Applications for Adult Patient Care, ESADA is said to be an example of the kind of initiative which affords an "excellent opportunity" for future collaborative research into genetic aspects of obstructive sleep apnea syndrome (OSAS). Both the European Respiratory Society and the European Sleep Research Society have noted the database resource's impact on cooperative research efforts.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**LTR retrotransposon** LTR retrotransposon: LTR retrotransposons are class I transposable elements characterized by the presence of long terminal repeats (LTRs) directly flanking an internal coding region. As retrotransposons, they mobilize through reverse transcription of their mRNA and integration of the newly created cDNA into another location. Their mechanism of retrotransposition is shared with retroviruses, with the difference that most LTR retrotransposons do not form infectious particles that leave the cells and therefore only replicate inside their genome of origin. Those that do (occasionally) form virus-like particles are classified under Ortervirales. LTR retrotransposon: Their size ranges from a few hundred base pairs to 25 kb, for example the Ogre retrotransposon in the pea genome. In plant genomes, LTR retrotransposons are the major repetitive sequence class, for example constituting more than 75% of the maize genome. LTR retrotransposons make up about 8% of the human genome and approximately 10% of the mouse genome. Structure and propagation: LTR retrotransposons have direct long terminal repeats that range from ~100 bp to over 5 kb in size. LTR retrotransposons are further sub-classified into the Ty1-copia-like (Pseudoviridae), Ty3-like (Metaviridae, formerly referred to as Gypsy-like, a name that is being considered for retirement), and BEL-Pao-like (Belpaoviridae) groups based on both their degree of sequence similarity and the order of encoded gene products. Ty1-copia and Ty3-Metaviridae groups of retrotransposons are commonly found in high copy number (up to a few million copies per haploid nucleus) in animal, fungal, protist, and plant genomes. BEL-Pao-like elements have so far only been found in animals. All functional LTR retrotransposons encode a minimum of two genes, gag and pol, that are sufficient for their replication. Gag encodes a polyprotein with a capsid and a nucleocapsid domain. Gag proteins form virus-like particles in the cytoplasm inside which reverse transcription occurs. The Pol gene produces three proteins: a protease (PR), a reverse transcriptase endowed with RT (reverse transcriptase) and RNase H domains, and an integrase (IN). Typically, LTR retrotransposon mRNAs are produced by the host RNA pol II acting on a promoter located in their 5' LTR. The Gag and Pol genes are encoded in the same mRNA. Depending on the host species, two different strategies can be used to express the two polyproteins: a fusion into a single open reading frame (ORF) that is then cleaved, or the introduction of a frameshift between the two ORFs. Occasional ribosomal frameshifting allows the production of both proteins, while ensuring that much more Gag protein is produced to form virus-like particles. Structure and propagation: Reverse transcription usually initiates at a short sequence located immediately downstream of the 5' LTR and termed the primer binding site (PBS). Specific host tRNAs bind to the PBS and act as primers for reverse transcription, which occurs in a complex and multi-step process, ultimately producing a double-stranded cDNA molecule. The cDNA is finally integrated into a new location, creating short TSDs (target site duplications) and adding a new copy to the host genome. Types: Ty1-copia retrotransposons Ty1-copia retrotransposons are abundant in species ranging from single-cell algae to bryophytes, gymnosperms, and angiosperms.
They encode four protein domains in the following order: protease, integrase, reverse transcriptase, and ribonuclease H. At least two classification systems exist for the subdivision of Ty1-copia retrotransposons into five lineages: Sireviruses/Maximus, Oryco/Ivana, Retrofit/Ale, TORK (subdivided into Angela/Sto, TAR/Fourf, GMR/Tork), and Bianca. Sireviruses/Maximus retrotransposons contain an additional putative envelope gene. This lineage is named for the founder element SIRE1 in the Glycine max genome, and was later described in many species such as Zea mays, Arabidopsis thaliana, Beta vulgaris, and Pinus pinaster. Plant Sireviruses of many sequenced plant genomes are summarized at the MASIVEdb Sirevirus database. Types: Ty3-retrotransposons (formerly gypsy) Ty3-retrotransposons are widely distributed in the plant kingdom, including both gymnosperms and angiosperms. They encode at least four protein domains in the order: protease, reverse transcriptase, ribonuclease H, and integrase. Based on structure, presence/absence of specific protein domains, and conserved protein sequence motifs, they can be subdivided into several lineages: Errantiviruses contain an additional defective envelope ORF with similarities to the retroviral envelope gene. First described as Athila elements in Arabidopsis thaliana, they were later identified in many species, such as Glycine max and Beta vulgaris. Chromoviruses contain an additional chromodomain (chromatin organization modifier domain) at the C-terminus of their integrase protein. They are widespread in plants and fungi, probably having retained these protein domains during the evolution of these two kingdoms. It is thought that the chromodomain directs retrotransposon integration to specific target sites. According to the sequence and structure of the chromodomain, chromoviruses are subdivided into the four clades CRM, Tekay, Reina and Galadriel. Chromoviruses from each clade show distinctive integration patterns, e.g. into centromeres or into the rRNA genes. Ogre elements are gigantic Ty3-retrotransposons reaching lengths of up to 25 kb. Ogre elements were first described in Pisum sativum. Metaviruses describe conventional Ty3-gypsy retrotransposons that do not contain additional domains or ORFs. Types: BEL/pao family The BEL/pao family is found in animals. Types: Endogenous retroviruses (ERV) Although retroviruses are often classified separately, they share many features with LTR retrotransposons. A major difference from Ty1-copia and Ty3-gypsy retrotransposons is that retroviruses have an envelope protein (ENV). A retrovirus can be transformed into an LTR retrotransposon through inactivation or deletion of the domains that enable extracellular mobility. If such a retrovirus infects and subsequently inserts itself into the genome in germ line cells, it may become transmitted vertically and become an endogenous retrovirus. Types: Terminal repeat retrotransposons in miniature (TRIMs) Some LTR retrotransposons lack all of their coding domains. Due to their short size, they are referred to as terminal repeat retrotransposons in miniature (TRIMs). Nevertheless, TRIMs can still be able to retrotranspose, as they may rely on the coding domains of autonomous Ty1-copia or Ty3-gypsy retrotransposons. Among the TRIMs, the Cassandra family plays an exceptional role, as the family is unusually widespread among higher plants. In contrast to all other characterized TRIMs, Cassandra elements harbor a 5S rRNA promoter in their LTR sequence.
Due to their short overall length and the relatively high contribution of the flanking LTRs, TRIMs are prone to re-arrangements by recombination.
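As a rough, purely illustrative sketch of the structural hallmarks described above (identical direct LTRs flanking an internal region, and a short target site duplication created on integration), the toy Python below scans an invented DNA string for a pair of identical direct repeats and reports the flanking duplication; the sequences and lengths are made up, and real LTR annotation tools are far more sophisticated.

```python
# Toy illustration of two structural hallmarks of an LTR retrotransposon
# insertion: identical direct LTRs flanking the internal region, and a short
# target site duplication (TSD) immediately outside them.  All sequences and
# lengths are invented for demonstration.
def find_ltr_pair(seq: str, ltr_len: int):
    """Return (start, end) of a region whose first and last ltr_len bases match."""
    for start in range(len(seq) - 2 * ltr_len):
        ltr = seq[start:start + ltr_len]
        end = seq.find(ltr, start + ltr_len)
        if end != -1:
            return start, end + ltr_len   # span covers both LTR copies
    return None

# Hypothetical insertion: a 5-bp TSD ("ACGTA") flanks LTR + internal region + LTR.
element = "TGTTGGGG" + "CATCATCAT" + "TGTTGGGG"
genome = "AAAACGTA" + element + "ACGTATTTT"

span = find_ltr_pair(genome, ltr_len=8)
if span:
    s, e = span
    print("element:", genome[s:e])
    print("candidate TSD:", genome[s - 5:s], "/", genome[e:e + 5])  # both read ACGTA
```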
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Errorless learning** Errorless learning: Errorless learning was an instructional design introduced by psychologist Charles Ferster in the 1950s as part of his studies on what would make the most effective learning environment. B. F. Skinner was also influential in developing the technique, noting that "...errors are not necessary for learning to occur. Errors are not a function of learning or vice versa nor are they blamed on the learner. Errors are a function of poor analysis of behavior, a poorly designed shaping program, moving too fast from step to step in the program, and the lack of the prerequisite behavior necessary for success in the program." Errorless learning can also be understood at a synaptic level, using the principle of Hebbian learning ("Neurons that fire together wire together"). Errorless learning: Many of Skinner's other students and followers continued to test the idea. In 1963, Herbert Terrace wrote a paper describing an experiment with pigeons which allowed discrimination learning to occur with few or even no responses to the negative stimulus (abbreviated S−). A negative stimulus is a stimulus associated with undesirable consequences (e.g., absence of reinforcement). In discrimination learning, an error is a response to the S−, and according to Terrace errors are not required for successful discrimination performance. Principles: A simple discrimination learning procedure is one in which a subject learns to associate one stimulus, S+ (positive stimulus), with reinforcement (e.g. food) and another, S− (negative stimulus), with extinction (e.g. absence of food). For example, a pigeon can learn to peck a red key (S+) and avoid a green key (S−). Using traditional procedures, a pigeon would be initially trained to peck a red key (S+). When the pigeon was responding consistently to the red key (S+), a green key (S−) would be introduced. At first the pigeon would also respond to the green key (S−), but gradually responses to this key would decrease, because they are not followed by food, so that they occurred only a few times or even never. Principles: Terrace (1963) found that discrimination learning could occur without errors when the training begins early in operant conditioning and visual stimuli (S+ and S−) like colors are used that differ in terms of brightness, duration and wavelength. He used a fading procedure in which the brightness and duration differences between the S+ and the S− were decreased progressively, leaving only the difference in wavelength. In other words, the S+ and S− were initially presented with different brightness and duration: the S+ would appear fully red for 5 s, and the S− would appear dark for 0.5 s. Over successive presentations, the duration and brightness of the S− were gradually increased until the keylight was fully green for 5 s. Principles: Studies of implicit memory and implicit learning from cognitive psychology and cognitive neuropsychology have provided additional theoretical support for errorless learning methods (e.g., Brooks and Baddeley, 1976; Tulving and Schacter, 1990). Implicit memory is known to be poor at eliminating errors, but can be used to compensate when explicit memory function is impaired. In experiments on amnesiac patients, errorless implicit learning was more effective because it reduced the possibility of errors "sticking" in amnesiacs' memories. Effects: The errorless learning procedure is highly effective in reducing the number of responses to the S− during training.
In Terrace's (1963) experiment, subjects trained with the conventional discrimination procedure averaged over 3,000 responses to the S− (errors) during 28 sessions of training, whereas subjects trained with the errorless procedure averaged only 25 such responses in the same number of sessions. Effects: Later, Terrace (1972) claimed not only that the errorless learning procedure improves long-term discrimination performance, but also that: 1) the S− does not become aversive and so does not elicit "aggressive" behaviors, as it often does with conventional training; 2) the S− does not develop inhibitory properties; 3) positive behavioral contrast to the S+ does not occur. In other words, Terrace claimed that the "by-products" of conventional discrimination learning do not occur with the errorless procedure. Limits: However, some evidence suggests that errorless learning may not be as qualitatively different from conventional training as Terrace initially claimed. For example, Rilling (1977) demonstrated in a series of experiments that these "by-products" can occur after errorless learning, but that their effects may not be as large as in the conventional procedure; and Marsh and Johnson (1968) found that subjects given errorless training were very slow to make a discrimination reversal. Applications: Interest from psychologists studying basic research on errorless learning declined after the 1970s. However, errorless learning attracted the interest of researchers in applied psychology, and studies have been conducted with both children (e.g., in educational settings) and adults (e.g. Parkinson's patients). Errorless learning continues to be of practical interest to animal trainers, particularly dog trainers. Errorless learning has been found to be effective in helping memory-impaired people learn more effectively. The reason for the method's effectiveness is that, while those with sufficient memory function can remember mistakes and learn from them, those with memory impairment may not only have difficulty remembering which methods work, but may also strengthen incorrect responses over correct ones, for example via emotional stimuli. See also the reference by Brown to its application in teaching mathematics to undergraduates.
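The stimulus-fading procedure described in the Principles section can be pictured as a simple schedule in which the S− starts brief and dark and is stepped up to the full brightness and duration of the S+. The Python sketch below generates such a schedule; the number of steps and the intermediate values are arbitrary illustrative choices, not Terrace's actual parameters.

```python
# Illustrative fading schedule in the spirit of Terrace (1963): the S- starts
# dark and brief and is brought up, in small steps, to the full brightness and
# duration of the S+.  Step count and values are arbitrary, not the original parameters.
import numpy as np

STEPS = 10
S_PLUS_DURATION = 5.0          # seconds, as described in the article
S_MINUS_START_DURATION = 0.5   # seconds, as described in the article

durations = np.linspace(S_MINUS_START_DURATION, S_PLUS_DURATION, STEPS)
brightness = np.linspace(0.0, 1.0, STEPS)   # 0 = dark key, 1 = full brightness

for i, (d, b) in enumerate(zip(durations, brightness), start=1):
    print(f"trial block {i:2d}: present S- for {d:.2f} s at brightness {b:.2f}")
```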
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Structural complexity theory** Structural complexity theory: In the computational complexity theory of computer science, structural complexity theory, or simply structural complexity, is the study of complexity classes rather than of the computational complexity of individual problems and algorithms. It involves research into both the internal structure of various complexity classes and the relations between different complexity classes. History: The theory emerged as a result of (so far unsuccessful) attempts to resolve the first and still most important question of this kind, the P = NP problem. Most of the research is done based on the assumption that P is not equal to NP and on the more far-reaching conjecture that the polynomial time hierarchy of complexity classes is infinite. Important results: The compression theorem The compression theorem is an important theorem about the complexity of computable functions. The theorem states that there exists no largest complexity class, with computable boundary, which contains all computable functions. Important results: Space hierarchy theorems The space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems. Important results: Time hierarchy theorems The time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n² time but not n time. Valiant–Vazirani theorem The Valiant–Vazirani theorem is a theorem in computational complexity theory. It was proven by Leslie Valiant and Vijay Vazirani in their paper titled NP is as easy as detecting unique solutions, published in 1986. The theorem states that if there is a polynomial time algorithm for Unambiguous-SAT, then NP = RP. The proof is based on the Mulmuley–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science. Sipser–Lautemann theorem The Sipser–Lautemann theorem or Sipser–Gács–Lautemann theorem states that bounded-error probabilistic polynomial (BPP) time is contained in the polynomial time hierarchy, and more specifically in Σ2 ∩ Π2. Savitch's theorem Savitch's theorem, proved by Walter Savitch in 1970, gives a relationship between deterministic and non-deterministic space complexity. It states that for any function f(n) ≥ log(n), NSPACE(f(n)) ⊆ DSPACE((f(n))²). Toda's theorem Toda's theorem is a result that was proven by Seinosuke Toda in his paper "PP is as Hard as the Polynomial-Time Hierarchy" (1991) and was given the 1998 Gödel Prize. The theorem states that the entire polynomial hierarchy PH is contained in P^PP; this implies a closely related statement, that PH is contained in P^#P. Important results: Immerman–Szelepcsényi theorem The Immerman–Szelepcsényi theorem was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n.
The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem. Research topics: Major directions of research in this area include: the study of implications stemming from various unsolved problems about complexity classes; the study of various types of resource-restricted reductions and the corresponding complete languages; and the study of consequences of various restrictions on, and mechanisms of, storage and access to data.
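The padding argument mentioned above can be written out as a short standard sketch, assuming (as usual) that s(n) ≥ log n is space-constructible; the notation L_pad below is introduced only for this illustration:

```latex
\begin{align*}
L_{\mathrm{pad}} &= \{\, x\,\#^{\,2^{s(|x|)}} \mid x \in L \,\}
  && \text{pad each input } x \text{ to length } m \approx 2^{s(|x|)} \\
L \in \mathrm{NSPACE}(s(n)) &\;\Rightarrow\; L_{\mathrm{pad}} \in \mathrm{NSPACE}(\log m) = \mathrm{NL}
  && \text{simulating the } s(|x|)\text{-space machine uses } O(\log m) \text{ space} \\
\mathrm{NL} = \mathrm{co\text{-}NL} &\;\Rightarrow\; \overline{L_{\mathrm{pad}}} \in \mathrm{NL}
  && \text{apply the special case} \\
\overline{L_{\mathrm{pad}}} \in \mathrm{NL} &\;\Rightarrow\; \overline{L} \in \mathrm{NSPACE}(s(n))
  && \text{un-pad: regenerate the padding on the fly with an } O(s(n))\text{-bit counter}
\end{align*}
```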
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Reproductive surgery** Reproductive surgery: Reproductive surgery is surgery in the field of reproductive medicine. It can be used for contraception, e.g. in vasectomy, wherein the vasa deferentia of a male are severed, but is also used extensively in assisted reproductive technology. Reproductive surgery is generally divided into three categories: surgery for infertility, in vitro fertilization, and fertility preservation. A reproductive surgeon is an obstetrician-gynecologist or urologist who specializes in reproductive surgery. Reproductive surgeries will be referred to based on biological sex, and terms such as male and female will be used to denote men and women respectively. Uses: Reproductive surgery aims to address concerns spanning from male and female fertility to gender-affirming care. Uses for reproductive surgery may encompass different abnormalities, dysfunctions, and areas of focus that cannot be treated solely through medication or nonsurgical treatment. Screening measures may be completed to determine the necessity of surgery. For example, intrauterine pathology may be assessed by utilizing techniques such as hysteroscopy to identify complications for reproductive surgical interventions. Assisted reproductive technology (ART) supports enhancement of fertility success through processes such as in vitro fertilization (IVF). Screening and reproductive surgery also have a role in identifying and addressing abnormalities, such as notable cysts, prior to initiating IVF. Surgical sperm retrieval is an alternative means of semen collection when other means are not possible, for example in posthumous sperm retrieval or male infertility. These surgical techniques may also be utilized as a form of permanent contraception referred to as sterilization. A vasectomy or tubal ligation would be examples of this procedure for male and female individuals respectively. Reproductive surgeons can potentially perform a reverse vasectomy to restore male reproductive function following the vasectomy. Individuals may choose to reverse the procedure due to pain experienced after the surgery. People might find themselves wanting to preserve their fertility. Biological material such as sperm or oocytes can be surgically collected and preserved cryogenically. Fertility preservation also provides individuals who are receiving gender-affirming surgeries the option of preserving gametes if having biological children is desired following the procedures and hormonal therapy. Reproductive surgery is also considered for complications such as endometriosis, polycystic ovary syndrome, ectopic pregnancy, and vas deferens obstruction. Trends: History Despite an increase in the overall use of assisted reproductive technology (ART), surgeries on the fallopian tubes and ovaries have decreased, leading to a rise in insecurity in the field of reproductive surgery. Reproductive surgery in women has largely been complementary to other ART methods such as medication, except in tubal infertility, where surgery remains the main treatment. Although reproductive surgery has been most relevant for severe symptoms, there has been a strong interest in greater analysis surrounding this topic of research. Reproductive surgery first began with fertility-sparing surgeries, such as uterine myomectomy, and later expanded to include surgeries for infertility and efforts to improve fertility success rates.
Hysterectomies and myomectomies date back to ancient times, when fascination grew around fertility-sparing surgeries, specifically for young women who were able to conceive but were considered to have suspected ailments. However, the lack of medical knowledge led to high mortality, thereby causing myomectomies to become more uncommon. Over time, various advancements and extensive research allowed for the discovery of minimally invasive myomectomies, which became popular among women who were capable of bearing children. Laparoscopy continues to be a common surgical approach as it is minimally invasive and is thought to be associated with shorter hospital stays and fewer surgical complications. The development of newer technology and surgical techniques allowed for increased success rates in various other surgeries, such as endometriosis and adenomyosis surgeries or adnexal surgeries. Trends: The Future of Reproductive Surgery With respect to the future of reproductive surgery, advances in surgical techniques and equipment are growing in popularity in an effort to increase fertility success rates. For example, vaginal natural orifice transluminal endoscopic surgery (vNOTES) is an innovative approach that has been used for ovarian torsion, tubal ectopic pregnancy, and ovarian cystectomies. This surgical approach is minimally invasive and has emerged in an effort to reduce pain, risks, and the potential for scarring. Another technique that has emerged is radiofrequency ablation (RFA), which has been used for uterine fibroids. It works to necrotize fibroids through laparoscopic and transcervical procedures with two devices, Acessa (Hologic) and Sonata (Gynesonics). However, these two medical devices come with the caveat that fertility may not be preserved in those with uterine leiomyoma. Although not ideal for people who are able to and want to bear children, RFA still serves as a successful alternative technique for reducing the volume of fibroids. A growing interest alongside reproductive surgery is the use of regenerative medicine. Although it has not been studied in its entirety, the use of stem cells to restore damaged endometrium has shown promising improvements. Regenerative medicine has been used for premature ovarian failure and will continue to be studied for in vitro fertilization (IVF). Researchers hope to mitigate and treat future signs of infertility with the use of two specific types of stem cells: induced pluripotent stem cells (iPSCs) and mesenchymal stem cells (MSCs). Risks/Complications: The risks and complications of reproductive surgery depend on patient-specific characteristics and the degree of the surgery itself; however, some common complications of general reproductive surgery are hemorrhage, visceral damage, infection, and blood clotting. In vasectomies, infection and hematomas are the most frequently reported complications of surgery, with the incidence rate of infection being 3-4% and the incidence rate of hematoma ranging from 0 to 29%. Importantly, the surgical technique used for the vasectomy has an impact on the incidence rates of these complications. No-scalpel vasectomy (NSV) is widely recognized due to its low incidence rate of complications. Another common complication of vasectomy is post-vasectomy pain syndrome (PVPS).
PVPS involves chronic pain, persistent or intermittent, in one or both testicles, lasting longer than three months after the procedure. While the pathophysiology of PVPS is unknown, proposed causes include damage to structures of the testis, buildup of pressure from epididymal congestion, and compression of nerves in the testis. The pain in PVPS can manifest in various forms, such as pain and tenderness in the scrotum, pressure or pain after ejaculation, and pain with sex. Incidence rates of PVPS are around 1-14%. In hysterectomies, complications of the procedure include infection, gastrointestinal injury, and venous thromboembolic injury. Similar to vasectomies, one of the most common complications is infection, with the incidence rate being 10.5% for abdominal hysterectomy, 13% for vaginal hysterectomy, and 9% for laparoscopic hysterectomy. Today, one of the most effective forms of ART is in vitro fertilization (IVF). While it is very effective in those experiencing infertility, there are numerous risks of IVF, such as multiple births, premature delivery, and ovarian hyper-stimulation syndrome. Ovarian hyper-stimulation syndrome is a condition that involves enlargement of the ovaries as a result of the injected fertility drugs causing increased permeability of the blood vessels, allowing molecules to pass in and out. It can lead to abdominal pain, soreness, and nausea for those experiencing it. The symptoms and severity of ovarian hyper-stimulation syndrome can be classified into various grades. Grade 1 involves mild discomfort and abdominal distention, and as the grades increase, severity and symptoms also increase. Grade 4 and grade 5 encompass severe ovarian hyper-stimulation syndrome and involve changes in blood volume and viscosity due to the condition. Those who have a history of heightened response to gonadotropins, a history of previous ovarian hyper-stimulation, and/or a history of polycystic ovary syndrome (PCOS) are at increased risk of developing this complication. Contraindications: There are no existing medical guidelines that outline the absolute contraindications to reproductive surgery. However, there are relative contraindications recommended in the current literature. There are several circumstances under which having reproductive surgery is contraindicated. This is because surgery itself may cause extensive tissue damage to the person, the success of the procedure is limited (i.e. the condition is invasive or metastatic), or the surgery's potential risk outweighs the potential benefits. However, each person's situation is different and the possibility of reproductive surgery should be discussed with a healthcare professional. Uterine atony after fetal extraction and pre-existing maternal bleeding disorders have been reported as accepted contraindications for cesarean myomectomies in women. Contraindications to reproductive surgery used for tubal surgery and infertility include women aged 43 and older, tubal disease that surgery cannot treat (i.e., surgery cannot be safely performed without hurting the person, or the patient has multiple medical conditions that reduce the chance of success), bipolar disease, and abnormal semen analysis. Many studies examining surgery for endometriosis excluded women who previously received medical or surgical treatment for endometriosis.
Women with a pre-operative diagnosis of deep endometriosis of the bowel or bladder were also excluded from surgery. For male reproductive surgery for the treatment of varicocele by percutaneous embolization, the current literature considers adolescents, allergies to contrast, men with a bilateral grade 3 varicocele, and men with primary infertility to be relative contraindications to surgery.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Head pressing** Head pressing: Head pressing is a veterinary condition characterized by pressing the head against a wall or pushing the face into a corner for no apparent reason. This condition is seen in dogs, cats, cows, horses, and goats. Head pressing is usually a sign of a neurological disorder, especially of the forebrain (e.g., prosencephalon disease), or of toxicity due to liver damage, such as portosystemic shunt and hepatic encephalopathy. It should be distinguished from bunting, which is a normal behavior found in healthy animals. Possible causes: These include prosencephalon disease, liver shunt, brain tumor, metabolic disorders (e.g., hyponatremia or hypernatremia), stroke, infection of the nervous system (rabies, parasites, or bacterial, viral or fungal infection), head trauma, and liver neurotoxicity. A liver shunt is a congenital or acquired condition that may lead to toxicity and head pressing. Additional symptoms include drooling and slow maturation early in development. Middle-aged and older animals more commonly suffer from liver cirrhosis than younger animals. Possible causes: Viral causes Several viruses that cause encephalitis or meningoencephalitis can lead to the neurological sign of head pressing, such as eastern equine encephalitis and bovine herpesvirus 5.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Chris Van Hoof** Chris Van Hoof: Chris Van Hoof is a full professor at KU Leuven’s Faculty of Engineering Science, Vice-President Connected Health Solutions and Fellow at IMEC, and general manager of OnePlanet Research Center.Van Hoof has published over 600 papers in journals, conference proceedings and has given over 100 invited talks. His research has resulted in five startups, of which four are healthcare-related. Education: In 1992, Van Hoof received his PhD in Electrical Engineering from the University of Leuven in collaboration with IMEC. OnePlanet Research Center: Van Hoof is general manager of OnePlanet Research Center. In February 2019, the Dutch government provided a 65 million euro grant to create the research venture, which is a multidisciplinary collaboration agreement between IMEC, Wageningen University & Research, Radboud University, and Radboud University Medical Center.At OnePlanet, Van Hoof oversees fundamental and applied research and aims to use the latest chip and digital technologies to create novel health and agriculture technology-based solutions. Agrifood and health scientists are developing technologies to help people eat and live healthier, while ensuring a sustainable supply chain. IMEC: At IMEC, Van Hoof oversees all research that goes into wearables, smart sensors and other connected health-related technology, such as ingestible and invisible body monitoring technologies. Van Hoof previously held positions at IMEC as manager and director leading research on sensors, imagers, 3D integration, MEMS, energy harvesting, body area networks, biomedical electronics, and wearable health. Van Hoof’s research at IMEC has been published in leading scientific journals, such as Nature. IMEC: Digital Twin Technology Van Hoof is applying digital twin technology to build a more complete picture of an individual’s lifestyle, behavior and environment, with the aim of preventing and intercepting diseases that currently rely on longitudinal recordings. In January 2019, Van Hoof stated that mental health therapy in particular is based on a trial and error approach, which can stall the recovery process dramatically, and believes it could enable therapists to personalize the treatment of mental health disorders faster and more effectively. IMEC: EEG Headset In 2018, IMEC announced that Van Hoof and his team had developed a prototype of an electroencephalogram (EEG) headset that can measure emotions and cognitive processes in the brain. IMEC: IMEC stated that while traditional EEG technologies have been around for a long time to diagnose medical conditions like epilepsy or sleep disorders, they fall short for promising novel therapeutic applications, such as cognitive skill improvement through sensory stimulation and VR-based cognitive treatments for conditions such as autism and ADHD. IMEC’s wireless EEG headset allows users to comfortably wear the device for hours on end which makes it suitable for therapeutic, learning and gaming applications. IMEC: Health Patch In 2016, Van Hoof and his team at IMEC presented a small-form health patch that was reported to have more functionalities than any other patch available on the market. 
The chip was optimized for low power consumption and was combined with an electrode patch that can stay on the body for long periods of time, including when showering. Van Hoof’s technology was the first patch to combine a variety of sensing capabilities – ranging from an accelerometer (to track a person’s physical activity) to ECG tracking (measuring the heart’s electrical activity) and bioelectrical impedance monitoring (measuring body composition, respiratory activity and the distribution of body fluids).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hyoscyamine/hexamethylenetetramine/phenyl salicylate/methylene blue/benzoic acid** Hyoscyamine/hexamethylenetetramine/phenyl salicylate/methylene blue/benzoic acid: Hyoscyamine/­hexamethylenetetramine/­phenyl salicylate/­methylene blue/­benzoic acid (trade names Methylphen, Prosed DS) is a drug combination. It is not safe or effective for any medical purpose.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Parallel I/O** Parallel I/O: Parallel I/O, in the context of a computer, means the performance of multiple input/output operations at the same time, for instance simultaneous output to storage devices and display devices. It is a fundamental feature of operating systems. One particular instance is parallel writing of data to disk; when file data is spread across multiple disks, for example in a RAID array, one can store multiple parts of the data at the same time, thereby achieving higher write speeds than with a single device. Other ways of providing parallel access to data include the Parallel Virtual File System, Lustre, GFS, etc. Features: Scientific computing Parallel I/O is used for scientific computing and not for databases. Support is broken up into multiple layers, including a high-level I/O library, a middleware layer and a parallel file system. The parallel file system manages the single unified view, maintains the logical space and provides access to data files. Storage A single file may be striped across one or more object storage targets, which increases the bandwidth when accessing the file and the available disk space. Caches are larger in parallel I/O and are shared through distributed memory systems. Breakthroughs Companies have been running parallel I/O on their servers to achieve better results with regard to price and performance. Parallel processing is especially critical for scientific calculations where applications are not only CPU-bound but also I/O-bound.
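As an illustration of the striping idea described above, here is a minimal Python sketch, not tied to any particular parallel file system; the directory names, stripe size and helper functions are invented for the example. It splits a buffer into stripes and writes them to several directories (standing in for separate devices) concurrently:

```python
import concurrent.futures
from pathlib import Path

STRIPE_SIZE = 1024 * 1024  # 1 MiB stripes (illustrative value)

def write_stripe(path: Path, chunk: bytes) -> int:
    """Write one stripe to its own file, simulating one storage device."""
    with open(path, "wb") as f:
        f.write(chunk)
    return len(chunk)

def parallel_write(data: bytes, devices: list) -> int:
    """Split `data` into stripes, assign them round-robin to devices, and write concurrently."""
    stripes = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    written = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(devices)) as pool:
        futures = [
            pool.submit(write_stripe,
                        devices[i % len(devices)] / f"stripe_{i:04d}.bin",
                        chunk)
            for i, chunk in enumerate(stripes)
        ]
        for fut in concurrent.futures.as_completed(futures):
            written += fut.result()
    return written

if __name__ == "__main__":
    dirs = [Path(f"device{i}") for i in range(4)]  # stand-ins for separate disks
    for d in dirs:
        d.mkdir(exist_ok=True)
    total = parallel_write(b"x" * (8 * STRIPE_SIZE), dirs)
    print(f"wrote {total} bytes across {len(dirs)} directories")
```

In a real deployment the role of the separate directories would be played by distinct disks or object storage targets, and the striping would be handled by the parallel file system rather than by application code.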
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Corn whiskey** Corn whiskey: Corn whiskey is an American liquor made principally from corn. Distinct from the stereotypical American moonshine, in which sugar is normally added to the mash, corn whiskey uses a traditional mash process, and is subject to the tax and identity laws for alcohol under federal law. Legal requirements: Corn whiskey is made from a mash of at least 80 percent corn and distilled to a maximum strength of 160 proof (80% alcohol by volume).Unlike other American whiskey styles, corn whiskey is not required to be aged in wood. If aged, it must be in either uncharred or previously-used oak barrels and must be barreled at lower than 125 proof (62.5% abv). In contrast, a whiskey distilled from a mash consisting of at least 80% corn in a charred new oak barrel would be considered bourbon. Aging is usually brief – six months or less – during which time the whiskey absorbs color and flavor from the barrel while the off-flavors and fusel alcohols are reduced. A variant called straight corn whiskey is also produced, in which the whiskey is stored in used or uncharred new oak containers for two years or more. Whiskeys produced in this manner and aged for at least four years can be designated bottled in bond if they meet additional requirements. Availability: Many American whiskey distillers include unaged corn whiskeys in their product lines along with bourbons and other styles. A few large whiskey producers make unaged corn whiskeys but most corn whiskeys are made by smaller distillers located all around the country.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Process calculus** Process calculus: In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS, ACP, and LOTOS. More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus. Essential features: While the variety of existing process calculi is very large (including variants that incorporate stochastic behaviour, timing information, and specializations for studying molecular interactions), there are several features that all process calculi have in common: Representing interactions between independent processes as communication (message-passing), rather than as modification of shared variables. Describing processes and systems using a small collection of primitives, and operators for combining those primitives. Defining algebraic laws for the process operators, which allow process expressions to be manipulated using equational reasoning. Mathematics of processes: To define a process calculus, one starts with a set of names (or channels) whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow: parallel composition of processes specification of which channels to use for sending and receiving data sequentialization of interactions hiding of interaction points recursion or process replication Parallel composition Parallel composition of two processes P and Q , usually written P|Q , is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation in P and Q to proceed simultaneously and independently. But it also allows interaction, that is synchronisation and flow of information from P to Q (or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time. Mathematics of processes: Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi (notably the π-calculus) channels themselves can be sent in messages through (other) channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to be created during the execution of a computation. Mathematics of processes: Communication Interaction can be (but isn't always) a directed flow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g. x(v) ) and an output operator (e.g. x⟨y⟩ ), both of which name an interaction point (here x ) that is used to synchronise with a dual interaction primitive. 
Mathematics of processes: Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. In x⟨y⟩, this data is y. Similarly, if an input expects to receive data, one or more bound variables will act as place-holders to be substituted by data when it arrives. In x(v), v plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi. Mathematics of processes: Sequential composition Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as: first receive some data on x and then send that data on y. Sequential composition can be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the process x(v)⋅P will wait for an input on x. Only when this input has occurred will the process P be activated, with the received data through x substituted for the identifier v. Reduction semantics The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is: x⟨y⟩⋅P|x(v)⋅Q⟶P|Q[y/v] The interpretation of this reduction rule is: the process x⟨y⟩⋅P sends a message, here y, along the channel x. Dually, the process x(v)⋅Q receives that message on channel x. Once the message has been sent, x⟨y⟩⋅P becomes the process P, while x(v)⋅Q becomes the process Q[y/v], which is Q with the place-holder v substituted by y, the data received on x. The class of processes that P is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus. Mathematics of processes: Hiding Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference (i.e. interaction). For the synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial. Hiding operations allow control of the connections made between interaction points when composing agents in parallel. Hiding can be denoted in a variety of ways. For example, in the π-calculus the hiding of a name x in P can be expressed as (νx)P, while in CSP it might be written as P∖{x}. Recursion and replication The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour. Recursion and replication are operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication !P can be understood as abbreviating the parallel composition of a countably infinite number of P processes: !P=P∣!P. Null process Process calculi generally also include a null process (variously denoted as nil, 0, STOP, δ, or some other appropriate symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated. Discrete and continuous process algebra: Process algebra has been studied for discrete time and continuous time (real time or dense time).
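The reduction rule above can be made concrete with a small, self-contained Python sketch of a toy calculus having only output prefixes, input prefixes and binary parallel composition. It illustrates the single communication step x⟨y⟩⋅P | x(v)⋅Q ⟶ P | Q[y/v]; all class and function names are invented for the example, and it is not an implementation of any particular published calculus:

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Nil:                      # the null process 0
    pass

@dataclass
class Output:                   # x<y>.P : send name y on channel x, then behave as P
    chan: str
    msg: str
    cont: "Proc"

@dataclass
class Input:                    # x(v).Q : receive a name on channel x, bind it to v in Q
    chan: str
    var: str
    cont: "Proc"

@dataclass
class Par:                      # P | Q : parallel composition
    left: "Proc"
    right: "Proc"

Proc = Union[Nil, Output, Input, Par]

def substitute(p: Proc, var: str, name: str) -> Proc:
    """Replace free occurrences of `var` by `name` (naive, only respects the input binder)."""
    if isinstance(p, Nil):
        return p
    if isinstance(p, Output):
        return Output(name if p.chan == var else p.chan,
                      name if p.msg == var else p.msg,
                      substitute(p.cont, var, name))
    if isinstance(p, Input):
        chan = name if p.chan == var else p.chan
        if p.var == var:                      # the binder shadows var
            return Input(chan, p.var, p.cont)
        return Input(chan, p.var, substitute(p.cont, var, name))
    return Par(substitute(p.left, var, name), substitute(p.right, var, name))

def step(p: Proc) -> Optional[Proc]:
    """Perform one communication  x<y>.P | x(v).Q  -->  P | Q[y/v]  at the top level, if possible."""
    if isinstance(p, Par):
        l, r = p.left, p.right
        if isinstance(l, Output) and isinstance(r, Input) and l.chan == r.chan:
            return Par(l.cont, substitute(r.cont, r.var, l.msg))
        if isinstance(l, Input) and isinstance(r, Output) and l.chan == r.chan:
            return Par(substitute(l.cont, l.var, r.msg), r.cont)
    return None

# x<y>.0 | x(v).v<z>.0  reduces to  0 | y<z>.0
example = Par(Output("x", "y", Nil()), Input("x", "v", Output("v", "z", Nil())))
print(step(example))
```

A full calculus would also need restriction, replication, and reduction under arbitrary parallel contexts; the sketch deliberately stops at the single top-level communication rule.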
History: In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable function, with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models of sequential computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry. History: Research on process calculi began in earnest with Robin Milner's seminal work on the Calculus of Communicating Systems (CCS) during the period from 1973 to 1980. C.A.R. Hoare's Communicating Sequential Processes (CSP) first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes (ACP), and introduced the term process algebra to describe their work. CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi. Current research: Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent example may be the ambient calculus. This is to be expected as process calculi are an active field of study. Currently research on process calculi focuses on the following problems. Developing new process calculi for better modeling of computational phenomena. Current research: Finding well-behaved subcalculi of a given process calculus. This is valuable because (1) most calculi are fairly wild in the sense that they are rather general and not much can be said about arbitrary processes; and (2) computational applications rarely exhaust the whole of a calculus. Rather they use only processes that are very constrained in form. Constraining the shape of processes is mostly studied by way of type systems. Current research: Logics for processes that allow one to reason about (essentially) arbitrary properties of processes, following the ideas of Hoare logic. Current research: Behavioural theory: what does it mean for two processes to be the same? How can we decide whether two processes are different or not? Can we find representatives for equivalence classes of processes? Generally, processes are considered to be the same if no context, that is other processes running in parallel, can detect a difference. Unfortunately, making this intuition precise is subtle and mostly yields unwieldy characterisations of equality (which in most cases must also be undecidable, as a consequence of the halting problem). Bisimulations are a technical tool that aids reasoning about process equivalences. Current research: Expressivity of calculi. Programming experience shows that certain problems are easier to solve in some languages than in others. 
This phenomenon calls for a more precise characterisation of the expressivity of calculi modeling computation than that afforded by the Church-Turing thesis. One way of doing this is to consider encodings between two formalisms and see what properties encodings can potentially preserve. The more properties can be preserved, the more expressive the target of the encoding is said to be. For process calculi, the celebrated results are that the synchronous π-calculus is more expressive than its asynchronous variant, has the same expressive power as the higher-order π-calculus, but is less than the ambient calculus. Current research: Using process calculus to model biological systems (stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, Brane calculus). It is thought by some that the compositionality offered by process-theoretic tools can help biologists to organise their knowledge more formally. Software implementations: The ideas behind process algebra have given rise to several tools including: CADP Concurrency Workbench mCRL2 toolset Relationship to other models of concurrency: The history monoid is the free object that is generically able to represent the histories of individual communicating processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion. That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid (a formal language is a subset of the set of all possible finite-length strings of an alphabet generated by the Kleene star). Relationship to other models of concurrency: The use of channels for communication is one of the features distinguishing the process calculi from other models of concurrency, such as Petri nets and the actor model (see Actor model and process calculi). One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Novikov ring** Novikov ring: In mathematics, given an additive subgroup Γ⊂R, the Novikov ring Nov(Γ) of Γ is the subring of Z[[Γ]] consisting of formal sums ∑ n_{γ_i} t^{γ_i} such that γ1 > γ2 > ⋯ and γi → −∞. The notion was introduced by Sergei Novikov in the papers that initiated the generalization of Morse theory using a closed one-form instead of a function. The notion is used in quantum cohomology, among others. Novikov ring: The Novikov ring Nov(Γ) is a principal ideal domain. Let S be the subset of Z[Γ] consisting of those with leading term 1. Since the elements of S are unit elements of Nov(Γ), the localization Nov(Γ)[S−1] of Nov(Γ) with respect to S is a subring of Nov(Γ) called the "rational part" of Nov(Γ); it is also a principal ideal domain. Novikov numbers: Given a smooth function f on a smooth manifold M with nondegenerate critical points, the usual Morse theory constructs a free chain complex C∗(f) such that the (integral) rank of Cp is the number of critical points of f of index p (called the Morse number). It computes the (integral) homology of M (cf. Morse homology): H∗(C∗(f)) ≅ H∗(M,Z). In analogy with this, one can define "Novikov numbers". Let X be a connected polyhedron with a base point. Each cohomology class ξ ∈ H1(X,R) may be viewed as a linear functional on the first homology group H1(X,R); when composed with the Hurewicz homomorphism, it can be viewed as a group homomorphism ξ: π = π1(X) → R. By the universal property, this map in turn gives a ring homomorphism ϕξ: Z[π] → Nov = Nov(R), making Nov a module over Z[π]. Since X is a connected polyhedron, a local coefficient system over it corresponds one-to-one to a Z[π]-module. Let Lξ be a local coefficient system corresponding to Nov with module structure given by ϕξ. The homology group Hp(X,Lξ) is a finitely generated module over Nov, which is, by the structure theorem, the direct sum of its free part and its torsion part. The rank of the free part is called the Novikov Betti number and is denoted by bp(ξ). The number of cyclic modules in the torsion part is denoted by qp(ξ). If ξ = 0, Lξ is trivial and bp(0) is the usual Betti number of X. Novikov numbers: The analog of the Morse inequalities holds for Novikov numbers as well (cf. the reference).
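As a concrete illustration of the definition, take Γ = Z ⊂ R (an example written here purely for exposition):

```latex
% Take \Gamma = \mathbb{Z} \subset \mathbb{R}. A typical element of Nov(\mathbb{Z}) is
a = 3t^{2} - t^{0} + 5t^{-1} + 2t^{-3} + \cdots ,
\qquad \gamma_1 > \gamma_2 > \cdots , \quad \gamma_i \to -\infty ,
% whereas t^{1} + t^{2} + t^{3} + \cdots is not allowed, since its exponents increase.
% An element with leading term 1, such as 1 - t^{-1}, is a unit:
(1 - t^{-1})^{-1} = 1 + t^{-1} + t^{-2} + t^{-3} + \cdots \;\in\; \operatorname{Nov}(\mathbb{Z}),
% a legitimate element because its exponents tend to -\infty; this is the sense in which
% the elements of S with leading term 1 are units of Nov(\Gamma).
```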
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lexical entrainment** Lexical entrainment: Lexical entrainment is the phenomenon in conversational linguistics whereby a speaker adopts the reference terms of their interlocutor. In practice, it acts as a mechanism of the cooperative principle in which both parties to the conversation employ lexical entrainment as a progressive system to develop "conceptual pacts" (a working temporary conversational terminology) to ensure maximum clarity of reference in the communication between the parties; this process is necessary to overcome the ambiguity inherent in the multitude of synonyms that exist in language. Lexical entrainment arises by two cooperative mechanisms: embedded corrections – a reference to the object implied by the context of the sentence, but with no explicit reference to the change in terminology; and exposed corrections – an explicit reference to the change in terminology, possibly including a request to assign the referent a common term (e.g., "by 'girl', do you mean 'Jane'?"). Violation of Grice's maxim of quantity: Once lexical entrainment has come to determine the phrasing for a referent, both parties will use that terminology for the referent for the duration, even if it proceeds to violate the Gricean maxim of quantity. For example, if one wants to refer to a brown loafer out of a set of shoes that consists of the loafer, a sneaker, and a high-heeled shoe, they will not use "the shoe" to describe the object, as this phrasing does not unambiguously describe one item in the set under consideration. They will also not call the object "the brown loafer", which would violate Grice's maxim of quantity. The speaker will settle on using the term "the loafer", as it is just informative enough without giving too much information. Another important factor is lexical availability: the ease of conceptualizing a referent in a certain way and then retrieving and producing a label for it. For many objects, the most available labels are basic nouns; for example, the word "dog". Instead of saying "animal" or "husky" for the referent, most subjects will default to "dog". If one is to refer to the husky in a set of objects consisting of a husky, a table, and a poster, people are still most likely to use the word "dog". This is technically a violation of Grice's maxim of quantity, as using the term "animal" would be sufficient. Applications: Lexical entrainment has applications in natural language processing in computers, as well as in human–human interaction. Currently, the adaptability of computers to modify their referencing to the terms of their human interlocutor is limited, so the entrainment adaptation falls to the human operator; this phenomenon is readily demonstrated in Brennan's 1996 experiment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ceftaroline fosamil** Ceftaroline fosamil: Ceftaroline fosamil (INN), brand name Teflaro in the US and Zinforo in Europe, is a cephalosporin antibiotic with anti-MRSA activity. Ceftaroline fosamil is a prodrug of ceftaroline. It is active against methicillin-resistant Staphylococcus aureus (MRSA) and other Gram-positive bacteria. It retains some of the broad-spectrum activity of later-generation cephalosporins against Gram-negative bacteria, but its effectiveness against them is relatively much weaker. It is currently being investigated for community-acquired pneumonia and complicated skin and skin structure infection. Ceftaroline is being developed by Forest Laboratories, under a license from Takeda. Ceftaroline fosamil: Ceftaroline received approval from the U.S. Food and Drug Administration (FDA) for the treatment of community-acquired bacterial pneumonia and acute bacterial skin infections on 29 October 2010. In vitro studies show it has a similar spectrum to ceftobiprole, the only other fifth-generation cephalosporin to date, although no head-to-head clinical trials have been conducted. Ceftaroline and ceftobiprole are placed in an unnamed subclass of cephalosporins by the Clinical and Laboratory Standards Institute (CLSI). It was removed from the World Health Organization's List of Essential Medicines in 2019. Clinical use: Ceftaroline is a novel cephalosporin with activity against methicillin-resistant S. aureus (MRSA); in phase III clinical trials for complicated skin and skin structure infections it showed non-inferior efficacy against MRSA compared to vancomycin and aztreonam. In 2009, ceftaroline had completed phase-III clinical trials for community-acquired pneumonia comparing it against ceftriaxone, with non-inferior results and a similar adverse reaction profile. However, only results from phase-II clinical trials in the treatment of complicated skin and skin structure infections have been published. In September 2009, phase III trial results were reported. Clinical use: On 8 September 2010, the FDA Advisory Committee recommended approval for the treatment of community-acquired bacterial pneumonia and complicated skin and skin structure infections. Clinical use: In October 2010, FDA approval was gained for treatment of community-acquired bacterial pneumonia and acute bacterial skin and skin structure infections, including MRSA. MRSA can develop resistance to ceftaroline through the alteration of penicillin-binding proteins. Amino acid-altering mutations in the ceftaroline-binding pocket of the transpeptidase region of penicillin-binding protein 2a (PBP2a) confer resistance to ceftaroline. Ceftaroline- and methicillin-resistant strains of S. aureus have been identified in Europe and Asia, but have not been identified in the United States. While cephalosporinases (a type of beta-lactamase that inactivates cephalosporins) confer resistance to other cephalosporins, cephalosporinases have not yet been identified as a mechanism of resistance to ceftaroline. Safety: The clinical studies indicated ceftaroline was well tolerated. The overall rate of adverse events was comparable between the two treatment groups (the CANVAS I and CANVAS II trials evaluated ceftaroline monotherapy versus vancomycin plus aztreonam in adult subjects with complicated skin and skin structure infections caused by Gram-positive and Gram-negative bacteria). The overall discontinuation rate for ceftaroline-treated subjects was 2.7%, compared to 3.7% for comparator-treated subjects.
The most common adverse reactions occurring in > 2% of subjects receiving ceftaroline in the pooled phase-III clinical trials were diarrhea, nausea, and rash. Contraindications: Known serious hypersensitivity to ceftaroline or other members of the cephalosporin class; anaphylaxis and anaphylactoid reactions. Warnings and precautions: The warnings and precautions associated with ceftaroline include: Hypersensitivity reactions Serious hypersensitivity (anaphylactic) reactions and serious skin reactions have been reported with beta-lactam antibiotics, including ceftaroline. Exercise caution in people with known hypersensitivity to beta-lactam antibiotics, including ceftaroline. Before therapy with ceftaroline is instituted, careful inquiry about previous hypersensitivity reactions to other cephalosporins, penicillins, or carbapenems should be made. If this product is to be given to penicillin- or other beta-lactam-allergic people, caution should be exercised because cross-sensitivity among beta-lactam antibacterial agents has been clearly established. If an allergic reaction to ceftaroline occurs, the drug should be discontinued. Serious acute hypersensitivity reactions require emergency treatment with epinephrine and other emergency measures, which may include airway management, oxygen, intravenous fluids, antihistamines, corticosteroids, and vasopressors as clinically indicated. Warnings and precautions: Clostridium difficile-associated diarrhea Clostridium difficile-associated diarrhea (CDAD) has been reported for nearly all antibacterial agents, including ceftaroline, and may range in severity from mild diarrhea to fatal colitis. A careful medical history is necessary because CDAD has been reported to occur more than two months after the administration of antibacterial agents. If CDAD is suspected or confirmed, antibacterials not directed against C. difficile should be discontinued, if possible. Warnings and precautions: Development of drug-resistant bacteria Prescribing ceftaroline in the absence of a proven or strongly suspected bacterial infection is unlikely to provide benefit to the patient and increases the risk of the development of drug-resistant bacteria. Warnings and precautions: Direct Coombs test seroconversion In the pooled phase-III CABP trials, 51/520 (9.8%) of subjects treated with ceftaroline, compared to 24/534 (4.5%) of subjects treated with ceftriaxone, seroconverted from a negative to a positive direct Coombs' test result. No clinical adverse reactions representing hemolytic anemia were reported in any treatment group. If anemia develops during or after treatment with ceftaroline, drug-induced hemolytic anemia should be considered. If drug-induced hemolytic anemia is suspected, discontinuation of ceftaroline should be considered and supportive care should be administered to the patient if clinically indicated. Interactions: No clinical drug-drug interaction studies have been conducted with ceftaroline fosamil. In vitro studies in human liver microsomes indicated that neither ceftaroline fosamil nor ceftaroline inhibits the major cytochrome P450 isoenzymes. Therefore, neither ceftaroline fosamil nor ceftaroline is expected to inhibit or induce the clearance of drugs that are metabolized by these metabolic pathways in a clinically relevant manner. Use in specific populations: For pregnant or nursing mothers, ceftaroline fosamil should be used only if the potential benefit outweighs the potential risk to the fetus or child.
Safety and effectiveness in pediatric patients have not been studied. Because elderly people 65 years of age or older are more likely to have decreased renal function and ceftaroline is excreted primarily by the kidney, care should be taken in dose selection in this age group, as in younger people with impaired renal function. Dosage adjustment is required in people with moderately (30 to ≤ 50 mL/min) or severely (< 30 mL/min) impaired renal function. The pharmacokinetics of ceftaroline in people with hepatic impairment have not been established. Side effects: No adverse reaction occurred in greater than 5% of people receiving ceftaroline. The most common adverse reactions occurring in > 2% of people receiving ceftaroline in the pooled phase-III clinical trials were diarrhea, nausea, and rash. Chemistry: Ceftaroline fosamil is used in the form of the acetate. It is a prodrug that is converted to the active metabolite ceftaroline and the inactive metabolite ceftaroline-M1. Initial in vitro and in vivo animal studies referred to ceftaroline fosamil acetate as PPI-0903. Characteristic of cephalosporins, ceftaroline has a bicyclic ring system with a four-membered β-lactam ring fused to a six-membered cephem ring. Ceftaroline is thought to owe its activity against MRSA to its 1,3-thiazole ring.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Facility information model** Facility information model: A facility information model is an information model of an individual facility that is integrated with data and documents about the facility. The facility can be any large facility that is designed, fabricated, constructed and installed, operated, maintained and modified; for example, a complete infrastructural network, a process plant, a building, a highway, a ship or an airplane. Facility information model: The difference from a product model is that a product model is typically a model of a kind of product expressed as a data structure, whereas a facility information model is typically an integration of 1,000–10,000 components with their properties and relations, and 10,000–50,000 documents. A facility information model is intended for users who search for data and documents about the components of the facility and their operation. Facility information model: A Facility Information Model can be an instantiation of a fixed data model or it can be expressed in a flexible modeling language such as Gellish English. A facility information model about a plant, a building, etc. is usually called a Plant Information Model, a Building Information Model, etc. Facility information model: A Facility Information Model can be created according to various modeling methods. For example, the Gellish Modeling Method makes it possible to model it in a system-independent and computer-interpretable way. This means that the model can be imported and managed in any system that is able to read Gellish English expressions. A facility information model is in principle system-independent and only deals with data and document content. Architecture: A Facility Information Model consists of at least the following sections: 1. a Facility Model, which may include processes and activities; 2. a Documents and Data sets section; 3. an electronic Common Dictionary; and possibly also 4. Requirements Models. The Facility Model A Facility Model describes a facility, primarily in a breakdown structure that specifies a decomposition hierarchy of the facility. For example, the facility may be decomposed into sections, each section is decomposed into units and utility systems, which are further decomposed into equipment systems, control loops and sub-systems, which are decomposed into pieces of equipment, building components, etc., as far as required. Architecture: The Facility Model consists partly of the facts (data) that are expressed as relations between the components and their properties and relations to other 'objects'. That data reflects the facility, its operation and its properties. Documents and data sets section Another section of the Facility Information Model consists of documents and data sets in various formats. Each of those documents and data sets is related to the element in the facility model about which the document or data set contains information. Architecture: Electronic Common Dictionary Each facility model component, property and activity, as well as each document and data set, shall be defined. This is normally done by classification. The classes (concepts) that classify the objects are defined in an electronic Common Dictionary. To ensure consistency and communication between systems and other parties, that dictionary is also an integral part of every Facility Information Model. Architecture: Requirements Models The quality of a Facility Information Model is determined by its completeness, consistency, up-to-dateness and accessibility.
To measure that quality, it is necessary to define requirements, preferably in a computer-interpretable way. These requirements and standard specifications have the nature of relations between kinds of things. Implementation: A facility information model can be implemented in various ways. The essence is that the user of a system by which the data and documents are accessed should experience it as one integrated system. Nevertheless, the system may be constructed such that the documents are stored in a simple directory, or such that they are stored in a separate document management system and the data are stored in one or more databases. Implementation: Important kinds of systems in which Facility Information Models will most likely be implemented are document-oriented systems, such as Electronic Document Management Systems (EDMSs), Content Management Systems (CMS systems) or Enterprise Content Management Systems (ECM systems). Other kinds of systems in which facility information models may be implemented are more data-oriented systems, such as Product Data Management (PDM) systems and Product Lifecycle Management (PLM) systems. Examples of facility information model software include QuickBase, Inc., MYBOS, ServiceChannel, AwareManager, OfficeSpace Software, eSSETS, faciliCAD, VAR facility management solutions and SKYSITE.
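To make the decomposition-plus-documents structure described above concrete, here is a minimal Python sketch; the class names, component names and document titles are invented for this illustration and are not taken from any standard or product:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    fmt: str                      # e.g. "pdf", "dwg", "xlsx"

@dataclass
class Component:
    name: str
    classification: str           # concept from the electronic Common Dictionary
    children: list = field(default_factory=list)
    documents: list = field(default_factory=list)

    def find_documents(self, name: str) -> list:
        """Collect documents attached to the component with the given name, searching the hierarchy."""
        hits = []
        if self.name == name:
            hits.extend(self.documents)
        for child in self.children:
            hits.extend(child.find_documents(name))
        return hits

# A tiny decomposition hierarchy: facility -> section -> unit -> equipment
facility = Component("Plant A", "process plant", children=[
    Component("Section 100", "plant section", children=[
        Component("Unit 110", "distillation unit", children=[
            Component("P-110A", "centrifugal pump",
                      documents=[Document("P-110A data sheet", "pdf"),
                                 Document("P-110A maintenance history", "xlsx")]),
        ]),
    ]),
])

print([d.title for d in facility.find_documents("P-110A")])
```

The point of the sketch is only that every document hangs off the element of the breakdown structure it describes, so a user can navigate from a component to its data and documents as one integrated model.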
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Agricultural microbiology** Agricultural microbiology: Agricultural microbiology is a branch of microbiology dealing with plant-associated microbes and plant and animal diseases. It also deals with the microbiology of soil fertility, such as microbial degradation of organic matter and soil nutrient transformations. Soil microorganisms: Importance of soil microorganisms Soil microorganisms are involved in nutrient transformation processes, the decomposition of resistant components of plant and animal tissue, and microbial antagonism. Microorganisms as biofertilizers Biofertilizers are seen as promising, sustainable alternatives to harmful chemical fertilizers due to their ability to increase yield and soil fertility through enhancing crop immunity and development. When applied to the soil, plant, or seed, these biofertilizers colonize the rhizosphere or the interior of the plant root. Once the microbial community is established, these microorganisms can help to solubilize and break down essential nutrients in the environment which would otherwise be unavailable or difficult for the crop to incorporate into biomass. Soil microorganisms: Nitrogen Nitrogen is an essential element needed for the creation of biomass and is usually seen as a limiting nutrient in agricultural systems. Though abundant in the atmosphere, the atmospheric form of nitrogen cannot be utilized by plants and must be transformed into a form that can be taken up directly by the plants; this problem is solved by biological nitrogen fixers. Nitrogen-fixing bacteria, also known as diazotrophs, can be broken down into three groups: free-living (ex. Azotobacter, Anabaena, and Clostridium), symbiotic (ex. Rhizobium and Trichodesmium) and associative symbiotic (ex. Azospirillum). These organisms have the ability to fix atmospheric nitrogen to bioavailable forms that can be taken up by plants and incorporated into biomass. An important nitrogen-fixing symbiosis is that between Rhizobium and leguminous plants. Rhizobium have been shown to contribute upwards of 300 kg N/ha/year in different leguminous plants, and their application to agricultural crops has been shown to increase crop height, seed germination, and nitrogen content within the plant. The utilization of nitrogen-fixing bacteria in agriculture could help reduce the reliance on man-made nitrogen fertilizers that are synthesized via the Haber-Bosch process. Soil microorganisms: Phosphorus Phosphorus can be made available to plants via solubilization or mobilization by bacteria or fungi. Under most soil conditions, phosphorus is the least mobile nutrient in the environment and therefore must be converted to solubilized forms in order to be available for plant uptake. Phosphate solubilization is the process by which organic acids are secreted into the environment; this lowers the pH and dissolves phosphate bonds, leaving the phosphate solubilized. Phosphate-solubilizing bacteria (PSB) (ex. Bacillus subtilis and Bacillus circulans) are responsible for upwards of 50% of microbial phosphate solubilization. In addition to the solubilized phosphate, PSB can also provide trace elements such as iron and zinc, which further enhance plant growth. Fungi (ex. Aspergillus awamori and Penicillium spp.) also perform this process; however, their contribution is less than 1% of all activity. A 2019 study showed that when crops were inoculated with Aspergillus niger, there was a significant increase in fruit size and yield compared with non-inoculated crops; when the crop was co-inoculated with A.
niger and the nitrogen-fixing bacterium Azotobacter, the crop performance was better than with inoculation using only one of the biofertilizers, and better than in crops that were not inoculated at all. Phosphorus mobilization is the process of transferring phosphorus to the root from the soil; this process is carried out via mycorrhiza (ex. arbuscular mycorrhiza). Arbuscular mycorrhiza mobilize phosphate by penetrating the roots and increasing their surface area, which helps move phosphorus into the plant. Phosphate solubilizing and mobilizing microorganisms can contribute upwards of 30–50 kg P2O5/ha which, in turn, has the potential to increase crop yield by 10–20%.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Magnetization reversal by circularly polarized light** Magnetization reversal by circularly polarized light: Discovered only as recently as 2006 by C.D. Stanciu and F. Hansteen and published in Physical Review Letters, this effect is generally called all-optical magnetization reversal. The technique reverses the magnetization of a magnet using only circularly polarized light, with the magnetization direction controlled by the light's helicity. In particular, the direction of the angular momentum of the photons sets the magnetization direction without the need for an external magnetic field. In fact, this process can be seen as similar to magnetization reversal by spin injection (see also spintronics); the only difference is that the angular momentum is supplied by circularly polarized photons instead of by polarized electrons. Magnetization reversal by circularly polarized light: Although the effect has been experimentally demonstrated, the mechanism responsible for this all-optical magnetization reversal is not yet clear and remains a subject of debate. Thus, it is not yet known whether an inverse Einstein–de Haas effect or a stimulated Raman-like coherent optical scattering process is responsible for the switching. However, because it is phenomenologically the inverse of the magneto-optical Faraday effect, magnetization reversal by circularly polarized light is referred to as the inverse Faraday effect. Magnetization reversal by circularly polarized light: Early studies in plasmas, paramagnetic solids, dielectric magnetic materials and ferromagnetic semiconductors demonstrated that excitation of a medium with a circularly polarized laser pulse corresponds to the action of an effective magnetic field. Yet, before the experiments of Stanciu and Hansteen, all-optical controllable magnetization reversal in a stable magnetic state was considered impossible. In quantum field theory and quantum chemistry, the effect whereby the angular momentum associated with the circular motion of the photons induces an angular momentum in the electrons is called the photomagneton. This axial magnetic field, originating in the angular momentum of the photons, has sometimes been referred to in the literature as the field B. Magnetization reversal by circularly polarized light is the fastest known way to reverse magnetization, and therefore to store data: magnetization reversal is induced on the femtosecond time scale, which translates to a potential data storage speed of about 100 TBit/s.
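The quoted figure of about 100 TBit/s follows from simple arithmetic: the maximum write rate is roughly the reciprocal of the time needed to reverse one bit. A minimal sketch of that back-of-the-envelope calculation is shown below; the 10-femtosecond per-bit switching time used here is an illustrative assumption, not a value taken from the text.

```python
# Back-of-the-envelope estimate of the write rate implied by
# femtosecond-scale all-optical magnetization reversal.
# The per-bit switching time is an assumed, illustrative value.

switching_time_s = 10e-15                 # assumed ~10 fs to reverse one bit
write_rate_bit_per_s = 1.0 / switching_time_s

print(f"{write_rate_bit_per_s:.1e} bit/s "
      f"= {write_rate_bit_per_s / 1e12:.0f} Tbit/s")
# Output: 1.0e+14 bit/s = 100 Tbit/s
```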
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Wolf hunting with dogs** Wolf hunting with dogs: Wolf hunting with dogs is a method of wolf hunting which relies on the use of hunting dogs. While any dog used for hunting wolves, especially a hound, may be loosely termed a "wolfhound", several dog breeds have been specifically bred for the purpose, some of which, such as the Irish Wolfhound, have the word in their breed name. Reaction: Accounts as to how wolves react to being attacked by dogs vary, though John James Audubon wrote that young wolves generally show submissive behaviour, while older wolves fight savagely. As wolves are not as fast as smaller canids such as coyotes, they typically run to a low place and wait for the dogs to come over from the top and fight them. Theodore Roosevelt stressed the danger cornered wolves can pose to a pack of dogs in his Hunting the Grisly and Other Sketches: "A wolf is a terrible fighter. He will decimate a pack of hounds by rapid snaps with his giant jaws while suffering little damage himself; nor are the ordinary big dogs, supposed to be fighting dogs, able to tackle him without special training. I have known one wolf to kill a bulldog which had rushed at it with a single snap, while another which had entered the yard of a Montana ranch house slew in quick succession both of the large mastiffs by which it was assailed. The immense agility and ferocity of the wild beast, the terrible snap of his long-toothed jaws, and the admirable training in which he always is, give him a great advantage over fat, small-toothed, smooth-skinned dogs, even though they are nominally supposed to belong to the fighting classes. In the way that bench competitions are arranged nowadays this is but natural, as there is no temptation to produce a worthy class of fighting dog when the rewards are given upon technical points wholly unconnected with the dog's usefulness. A prize-winning mastiff or bulldog may be almost useless for the only purposes for which his kind is ever useful at all. A mastiff, if properly trained and of sufficient size, might possibly be able to meet a young or undersized Texas wolf; but I have never seen a dog of this variety which I would esteem a match single-handed for one of the huge timber wolves of western Montana. Even if the dog was the heavier of the two, his teeth and claws would be very much smaller and weaker and his hide less tough." Reaction: The fighting styles of wolves and dogs differ significantly; while dogs typically limit themselves to attacking the head, neck and shoulder, wolves will attack the extremities of their opponents. Irish Wolfhounds: In Ireland, Irish wolfhounds were bred as far back as 3 BC. After the Cromwellian conquest of Ireland, Oliver Cromwell imposed a ban on the exportation of Irish wolfhounds in order to tackle wolves. France: According to the Encyclopédie, wolf hunting squads in France typically consisted of 25-30 good-sized dogs, usually grey in color with red around the eyes and jowls. The main pack would be supplemented with six or eight large sighthounds and a few dogues. Wolf hunting sighthounds were usually separated into three categories: lévriers d'estric, lévriers compagnons (or lévriers de flanc) and lévriers de tête. It was preferable to have two teams of each kind, with each team consisting of 2-3 dogs. The Encyclopédie specifies that one can never have enough bloodhounds in a wolf hunt, as the wolf is the most challenging quarry for the hounds to track, due to its light tread leaving scant debris, and thus very little scent. 
This was not so serious a problem in winter, when the tracks were easier to detect in the snow. Each bloodhound group would be used alternately throughout the hunt, in order to allow the previous team to recuperate. Because of the wolf's feeble scent, a wolf hunt would have to begin by motivating the bloodhounds with repeated caresses and a recitation in old French: "va outre ribaut hau mon valet; hau lo lo lo lo, velleci, velleci aller mon petit". It was preferable that the area of the hunt contained no stronger-smelling animals which could distract the dogs, or that the dogs themselves were entirely specialised in hunting wolves. Once the scent had been found, the hunters would give a further recitation in order to motivate the dogs: "qu'est-ce là mon valet, hau l'ami après, vellici il dit vrai". The scent was usually found at a crossroad, where the wolf would scratch the earth or leave a scent mark. The two teams of lévriers d'estric would be placed at separate points on the borders of the forest, where the wolf was expected to run. The lévriers compagnons would be concealed on either side of the path, while the lévriers de tête, which were the largest and most aggressive, would initiate the chase once the wolf was sighted. The lévriers de tête would chase the wolf through the path and funnel it toward the other waiting lévrier teams. Once the wolf was apprehended, the dogs would be pulled back, and the hunters would place a wooden stick between the wolf's jaws in order to stop it injuring them or the dogs. The hunt master would then quickly dispatch the wolf by stabbing it between the shoulder blades with a dagger. Russian wolf hunting and the Borzoi: Wolves were hunted in both Czarist and Soviet Russia with borzoi by landowners and Cossacks. Covers were drawn by sending mounted men through a wood with a number of dogs of various breeds, including deerhounds, staghounds and Siberian wolfhounds, as well as smaller greyhounds and foxhounds, as they made more noise than borzoi. A beater, holding up to six dogs by leash, would enter a wooded area where wolves had previously been sighted. Other hunters on horseback would select a place in the open where the wolf or wolves might break. Each hunter held one or two borzois, which would be slipped the moment the wolf took flight. Once the beater sighted a wolf, he would shout "Loup! Loup! Loup!" and slip the dogs. The idea was to trap the wolf between the pursuing dogs and the hunters on horseback outside the wood. The borzois would pursue the wolf along with the horsemen and yapping curs. Once the wolf was caught by the borzois, the foremost rider would dismount and quickly dispatch the wolf with a knife. Occasionally, wolves were captured alive in order to better train borzoi pups. Afghan hunting with Afghan Hounds: The Afghan Royal Family and the Pashtun tribes would hunt wolves using the ancient Afghan Hound, also known as the Tazi. The Afghan Hound has a very thick, long and versatile coat. A pack of wolves would scatter in fear once they were aware of being hunted by the Afghan Hound. The Afghan's coat not only protects the hound from teeth, claws and harsh temperatures, but also strikes fear into large animals such as wolves, because the long hair, combined with high winds, causes the hound to appear extremely large. The Tazi runs at speeds of up to 40 miles per hour. 
Kazakhstani wolf hunting: Unlike Russian wolf hunts with hounds, which usually occur in the summer, when wolves have less protective fur and the terrain is more favourable for the hounds to give chase, Kazakhstani wolf hunts with hounds depend on favourable snow conditions. The hunts take place either in the steppe regions of the country or in semi-deserts. The hunters track wolves on horseback, with their dogs in sleds. Once a wolf is spotted, the dogs are released from the sled and give chase. North America: In North America, wolf hunting with hounds was done in the context of pest control rather than sport. George Armstrong Custer enjoyed wolf coursing with dogs, and favoured large greyhounds and staghounds. Of the latter, he took a pair of large, white, shaggy animals which he would turn loose against wolves in the Black Hills, sacred to the Sioux. In his book Hunting the Grisly and Other Sketches, Theodore Roosevelt wrote that greyhound crossbreeds were a favourite of his, and that exclusively purebred greyhounds were unnecessary, sometimes to the point of uselessness, in a wolf hunt. Some bulldog blood in the dogs was considered helpful, though not essential. Roosevelt wrote that many ranchmen of Colorado, Wyoming, and Montana in the final decade of the 19th century managed to breed greyhound or deerhound packs capable of killing wolves unassisted, if numbering three or more. These greyhounds were usually thirty inches at the shoulder and weighed 90 lbs. These American greyhounds apparently outclassed imported Russian borzois in hunting wolves. Wolf hunting with dogs became a specialised pursuit in the 1920s, with well-trained and pedigreed dogs being used. Several wolfhounds were killed in warden-sponsored Wisconsin Conservation Department wolf hunts in the 1930s. These losses prompted the state to introduce a dog insurance policy in order to reimburse wolf hunters. As of 2013, wolf hunting with dogs is legal in the US only in Wisconsin. Training: Dogs are normally fearful of wolves. Both James Rennie and Theodore Roosevelt wrote of how even dogs which enthusiastically confront bears and large cats will hesitate to approach wolves. According to the Encyclopédie, dogs used in a wolf hunt are typically veteran animals, as younger hunting dogs would be intimidated by the wolf's scent. However, dogs can be taught to overcome their fear if habituated to it at an early age. As pups, Russian wolfhounds are sometimes introduced to captured live wolves, and are trained to grab them behind the ears in order to avoid being injured by the wolf's teeth. A similar practice was recorded in the United States by John James Audubon, who described how wolves caught in a pit trap would be hamstrung and given to a dog pack in order to condition the dogs into losing their fear. Dogs typically do not readily eat wolf curée (entrails). The Encyclopédie specifies that the curée had to be prepared in a special way in order for the dogs to accept it. The carcass would be skinned, gutted and decapitated, with the entrails placed in an oven. After roasting, the entrails would be mixed with breadcrumbs and placed in a cauldron of boiling water. In winter, they would then be mixed with 3-4 lbs of fat, while in summer, two or three bucketloads of milk and flour were added. After soaking, the entrails would be placed on a sheet of cloth and taken to the dogs whilst still warm.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nuclear receptor** Nuclear receptor: In the field of molecular biology, nuclear receptors are a class of proteins responsible for sensing steroids, thyroid hormones, vitamins, and certain other molecules. These intracellular receptors work with other proteins to regulate the expression of specific genes, thereby controlling the development, homeostasis, and metabolism of the organism. Nuclear receptor: Nuclear receptors bind directly to DNA, regulating the expression of adjacent genes; hence these receptors are classified as transcription factors. The regulation of gene expression by nuclear receptors often occurs in the presence of a ligand—a molecule that affects the receptor's behavior. Ligand binding to a nuclear receptor results in a conformational change that activates the receptor. The result is up- or down-regulation of gene expression. Nuclear receptor: A unique property of nuclear receptors that differentiates them from other classes of receptors is their direct control of genomic DNA. Nuclear receptors play key roles in both embryonic development and adult homeostasis. As discussed below, nuclear receptors are classified according to mechanism or homology. Species distribution: Nuclear receptors are specific to metazoans (animals) and are not found in protists, algae, fungi, or plants. Amongst the early-branching animal lineages with sequenced genomes, two have been reported from the sponge Amphimedon queenslandica, two from the comb jelly Mnemiopsis leidyi, four from the placozoan Trichoplax adhaerens, and 17 from the cnidarian Nematostella vectensis. There are 270 nuclear receptors in the roundworm Caenorhabditis elegans alone, 21 in the fruit fly and other insects, and 73 in the zebrafish. Humans, mice, and rats have 48, 49, and 47 nuclear receptors, respectively. Ligands: Ligands that bind to and activate nuclear receptors include lipophilic substances such as endogenous hormones, vitamins A and D, and xenobiotic hormones. Because the expression of a large number of genes is regulated by nuclear receptors, ligands that activate these receptors can have profound effects on the organism. Many of these regulated genes are associated with various diseases, which explains why approximately 13% of U.S. Food and Drug Administration (FDA) approved drugs target nuclear receptors. A number of nuclear receptors, referred to as orphan receptors, have no known (or at least generally agreed upon) endogenous ligands. Some of these receptors, such as FXR, LXR, and PPAR, bind a number of metabolic intermediates such as fatty acids, bile acids and/or sterols with relatively low affinity. These receptors may hence function as metabolic sensors. Other nuclear receptors, such as CAR and PXR, appear to function as xenobiotic sensors, up-regulating the expression of cytochrome P450 enzymes that metabolize these xenobiotics. Structure: Most nuclear receptors have molecular masses between 50,000 and 100,000 daltons. Structure: Nuclear receptors are modular in structure and contain the following domains: (A-B) N-terminal regulatory domain: Contains the activation function 1 (AF-1), whose action is independent of the presence of ligand. The transcriptional activation of AF-1 is normally very weak, but it does synergize with AF-2 in the E-domain (see below) to produce a more robust upregulation of gene expression. The A-B domain is highly variable in sequence between various nuclear receptors. 
Structure: (C) DNA-binding domain (DBD): A highly conserved domain containing two zinc fingers that binds to specific sequences of DNA called hormone response elements (HREs). Recently, a novel zinc finger motif (CHC2) was identified in parasitic flatworm NRs. (D) Hinge region: Thought to be a flexible domain that connects the DBD with the LBD. It influences intracellular trafficking and subcellular distribution with a target peptide sequence. Structure: (E) Ligand binding domain (LBD): Moderately conserved in sequence and highly conserved in structure between the various nuclear receptors. The structure of the LBD is referred to as an alpha-helical sandwich fold, in which three antiparallel alpha helices (the "sandwich filling") are flanked by two alpha helices on one side and three on the other (the "bread"). The ligand binding cavity lies within the interior of the LBD, just below the three antiparallel alpha helices of the sandwich "filling". Along with the DBD, the LBD contributes to the dimerization interface of the receptor and, in addition, binds coactivator and corepressor proteins. The LBD also contains the activation function 2 (AF-2), whose action is dependent on the presence of bound ligand and is controlled by the conformation of helix 12 (H12). Structure: (F) C-terminal domain: Highly variable in sequence between various nuclear receptors. The N-terminal (A/B), DNA-binding (C), and ligand binding (E) domains are independently well folded and structurally stable, while the hinge region (D) and optional C-terminal (F) domains may be conformationally flexible and disordered. The relative orientations of the domains differ considerably among the three known multi-domain crystal structures, two of which bind DR1 (DBDs separated by 1 bp) and one of which binds DR4 (separated by 4 bp). Mechanism of action: Nuclear receptors are multifunctional proteins that transduce signals of their cognate ligands. Nuclear receptors (NRs) may be classified into two broad classes according to their mechanism of action and subcellular distribution in the absence of ligand. Mechanism of action: Small lipophilic substances such as natural hormones diffuse through the cell membrane and bind to nuclear receptors located in the cytosol (type I NR) or nucleus (type II NR) of the cell. Binding causes a conformational change in the receptor which, depending on the class of receptor, triggers a cascade of downstream events that direct the NRs to DNA transcription regulation sites, resulting in up- or down-regulation of gene expression. They generally function as homo- or heterodimers. Two additional classes have also been identified: type III, a variant of type I, and type IV, which bind DNA as monomers. Accordingly, nuclear receptors may be subdivided into the following four mechanistic classes: Type I Ligand binding to type I nuclear receptors in the cytosol results in the dissociation of heat shock proteins, homo-dimerization, translocation (i.e., active transport) from the cytoplasm into the cell nucleus, and binding to specific sequences of DNA known as hormone response elements (HREs). Type I nuclear receptors bind to HREs consisting of two half-sites separated by a variable length of DNA, and the second half-site has a sequence inverted from the first (inverted repeat). 
Type I nuclear receptors include members of subfamily 3, such as the androgen receptor, estrogen receptors, glucocorticoid receptor, and progesterone receptor. It has been noted that some of the NR subfamily 2 nuclear receptors may bind to direct repeat instead of inverted repeat HREs. In addition, some nuclear receptors bind either as monomers or dimers, with only a single DNA-binding domain of the receptor attaching to a single half-site HRE. These nuclear receptors are considered orphan receptors, as their endogenous ligands are still unknown. Mechanism of action: The nuclear receptor/DNA complex then recruits other proteins that transcribe DNA downstream from the HRE into messenger RNA and eventually protein, which causes a change in cell function. Mechanism of action: Type II Type II receptors, in contrast to type I, are retained in the nucleus regardless of the ligand binding status and in addition bind as hetero-dimers (usually with RXR) to DNA. In the absence of ligand, type II nuclear receptors are often complexed with corepressor proteins. Ligand binding to the nuclear receptor causes dissociation of corepressor and recruitment of coactivator proteins. Additional proteins including RNA polymerase are then recruited to the NR/DNA complex that transcribe DNA into messenger RNA. Mechanism of action: Type II nuclear receptors include principally subfamily 1, for example the retinoic acid receptor, retinoid X receptor and thyroid hormone receptor. Type III Type III nuclear receptors (principally NR subfamily 2) are similar to type I receptors in that both classes bind to DNA as homodimers. However, type III nuclear receptors, in contrast to type I, bind to direct repeat instead of inverted repeat HREs. Type IV Type IV nuclear receptors bind either as monomers or dimers, but only a single DNA-binding domain of the receptor binds to a single half-site HRE. Examples of type IV receptors are found in most of the NR subfamilies. Dimerization: Human nuclear receptors are capable of dimerizing with many other nuclear receptors (homotypic dimerization), as has been shown from large-scale Y2H experiments and text-mining efforts of the literature that were focused on specific interactions. Nevertheless, there exists specificity, with members of the same subfamily having very similar NR dimerization partners, and the underlying dimerization network has certain topological features, such as the presence of highly connected hubs (RXR and SHP). Coregulatory proteins: Nuclear receptors bound to hormone response elements recruit a significant number of other proteins (referred to as transcription coregulators) that facilitate or inhibit the transcription of the associated target gene into mRNA. The functions of these coregulators are varied and include chromatin remodeling (making the target gene either more or less accessible to transcription) or a bridging function to stabilize the binding of other coregulatory proteins. Nuclear receptors may bind specifically to a number of coregulator proteins, and thereby influence cellular mechanisms of signal transduction both directly and indirectly. Coregulatory proteins: Coactivators Binding of agonist ligands (see section below) to nuclear receptors induces a conformation of the receptor that preferentially binds coactivator proteins. These proteins often have an intrinsic histone acetyltransferase (HAT) activity, which weakens the association of histones to DNA, and therefore promotes gene transcription. 
Corepressors Binding of antagonist ligands to nuclear receptors, in contrast, induces a conformation of the receptor that preferentially binds corepressor proteins. These proteins, in turn, recruit histone deacetylases (HDACs), which strengthen the association of histones to DNA and therefore repress gene transcription. Agonism vs antagonism: Depending on the receptor involved, the chemical structure of the ligand and the tissue that is being affected, nuclear receptor ligands may display dramatically diverse effects ranging in a spectrum from agonism to antagonism to inverse agonism. Agonism vs antagonism: Agonists The activity of endogenous ligands (such as the hormones estradiol and testosterone) when bound to their cognate nuclear receptors is normally to upregulate gene expression. This stimulation of gene expression by the ligand is referred to as an agonist response. The agonistic effects of endogenous hormones can also be mimicked by certain synthetic ligands, for example, the glucocorticoid receptor anti-inflammatory drug dexamethasone. Agonist ligands work by inducing a conformation of the receptor which favors coactivator binding (see upper half of the figure to the right). Agonism vs antagonism: Antagonists Other synthetic nuclear receptor ligands have no apparent effect on gene transcription in the absence of endogenous ligand. However, they block the effect of agonists through competitive binding to the same binding site in the nuclear receptor. These ligands are referred to as antagonists. An example of an antagonistic nuclear receptor drug is mifepristone, which binds to the glucocorticoid and progesterone receptors and therefore blocks the activity of the endogenous hormones cortisol and progesterone, respectively. Antagonist ligands work by inducing a conformation of the receptor which prevents coactivator binding and promotes corepressor binding (see lower half of the figure to the right). Agonism vs antagonism: Inverse agonists Finally, some nuclear receptors promote a low level of gene transcription in the absence of agonists (also referred to as basal or constitutive activity). Synthetic ligands which reduce this basal level of activity in nuclear receptors are known as inverse agonists. Agonism vs antagonism: Selective receptor modulators A number of drugs that work through nuclear receptors display an agonist response in some tissues and an antagonistic response in other tissues. This behavior may have substantial benefits, since it may allow the desired beneficial therapeutic effects of a drug to be retained while minimizing undesirable side effects. Drugs with this mixed agonist/antagonist profile of action are referred to as selective receptor modulators (SRMs). Examples include Selective Androgen Receptor Modulators (SARMs), Selective Estrogen Receptor Modulators (SERMs) and Selective Progesterone Receptor Modulators (SPRMs). The mechanism of action of SRMs may vary depending on the chemical structure of the ligand and the receptor involved; however, it is thought that many SRMs work by promoting a conformation of the receptor that is closely balanced between agonism and antagonism. In tissues where the concentration of coactivator proteins is higher than that of corepressors, the equilibrium is shifted in the agonist direction. Conversely, in tissues where corepressors dominate, the ligand behaves as an antagonist. 
Alternative mechanisms: Transrepression The most common mechanism of nuclear receptor action involves direct binding of the nuclear receptor to a DNA hormone response element. This mechanism is referred to as transactivation. However, some nuclear receptors have the ability not only to bind directly to DNA, but also to other transcription factors. This binding often results in deactivation of the second transcription factor in a process known as transrepression. One example of a nuclear receptor that is able to transrepress is the glucocorticoid receptor (GR). Furthermore, certain GR ligands known as Selective Glucocorticoid Receptor Agonists (SEGRAs) are able to activate GR in such a way that GR more strongly transrepresses than transactivates. This selectivity increases the separation between the desired anti-inflammatory effects and undesired metabolic side effects of these selective glucocorticoids. Alternative mechanisms: Non-genomic The classical direct effects of nuclear receptors on gene regulation normally take hours before a functional effect is seen in cells, because of the large number of intermediate steps between nuclear receptor activation and changes in protein expression levels. However, it has been observed that many effects of the application of nuclear hormones, such as changes in ion channel activity, occur within minutes, which is inconsistent with the classical mechanism of nuclear receptor action. While the molecular target for these non-genomic effects of nuclear receptors has not been conclusively demonstrated, it has been hypothesized that there are variants of nuclear receptors which are membrane associated instead of being localized in the cytosol or nucleus. Furthermore, these membrane-associated receptors function through alternative signal transduction mechanisms not involving gene regulation. While it has been hypothesized that there are several membrane-associated receptors for nuclear hormones, many of the rapid effects have been shown to require canonical nuclear receptors. However, testing the relative importance of the genomic and nongenomic mechanisms in vivo has been prevented by the absence of specific molecular mechanisms for the nongenomic effects that could be blocked by mutation of the receptor without disrupting its direct effects on gene expression. Alternative mechanisms: A molecular mechanism for non-genomic signaling through the nuclear thyroid hormone receptor TRβ involves phosphatidylinositol 3-kinase (PI3K). This signaling can be blocked by a single tyrosine-to-phenylalanine substitution in TRβ without disrupting direct gene regulation. When mice were created with this single, conservative amino acid substitution in TRβ, synaptic maturation and plasticity in the hippocampus were impaired almost as effectively as by completely blocking thyroid hormone synthesis. This mechanism appears to be conserved in all mammals but not in TRα or any other nuclear receptors. Thus, phosphotyrosine-dependent association of TRβ with PI3K provides a potential mechanism for integrating regulation of development and metabolism by thyroid hormone and receptor tyrosine kinases. In addition, thyroid hormone signaling through PI3K can alter gene expression. Family members: The following is a list of the 48 known human nuclear receptors (and their orthologs in other species), categorized according to sequence homology. The list also includes selected family members that lack human orthologs (NRNC symbol highlighted in yellow). 
Family members: Of the two 0-families, 0A has a family 1-like DBD, and 0B has a highly distinctive LBD. The second DBD of family 7 is probably related to the family 1 DBD. Three probable family-1 NRs from Biomphalaria glabrata possess a DBD along with a family 0B-like LBD. The placement of C. elegans nhr-1 (Q21878) is disputed: although most sources place it as NR1K1, manual annotation at WormBase considers it a member of NR2A. There used to be a group 2D, whose only members were Drosophila HR78/NR1D1 (Q24142) and its orthologues, but it was later merged into group 2C due to high similarity, forming a "group 2C/D". Knockout studies on mice and fruit flies support such a merged group. Evolution: A topic of debate has been the identity of the ancestral nuclear receptor as either a ligand-binding or an orphan receptor. This debate began more than twenty-five years ago, when the first ligands were identified as mammalian steroid and thyroid hormones. Shortly thereafter, the identification of the ecdysone receptor in Drosophila introduced the idea that nuclear receptors were hormonal receptors that bind ligands with nanomolar affinity. At the time, the three known nuclear receptor ligands were steroids, retinoids, and thyroid hormone, and of those three, both steroids and retinoids were products of terpenoid metabolism. Thus, it was postulated that the ancestral receptor would have been liganded by a terpenoid molecule. In 1992, a comparison of the DNA-binding domains of all known nuclear receptors led to the construction of a phylogenetic tree of nuclear receptors that indicated that all nuclear receptors shared a common ancestor. As a result, there was an increased effort to uncover the state of the first nuclear receptor, and by 1997 an alternative hypothesis was suggested: the ancestral nuclear receptor was an orphan receptor and it acquired ligand-binding ability over time. This hypothesis was proposed based on the following arguments: The nuclear receptor sequences that had been identified in the earliest metazoans (cnidarians and Schistosoma) were all members of the COUP-TF, RXR, and FTZ-F1 groups of receptors. Both COUP-TF and FTZ-F1 are orphan receptors, and RXR is only found to bind a ligand in vertebrates. Evolution: While orphan receptors had known arthropod homologs, no orthologs of liganded vertebrate receptors had been identified outside vertebrates, suggesting that orphan receptors are older than liganded receptors. Orphan receptors are found amongst all six subfamilies of nuclear receptors, while ligand-dependent receptors are found amongst three. Thus, since the ligand-dependent receptors were believed to be predominantly members of recent subfamilies, it seemed logical that they gained the ability to bind ligands independently. Evolution: The phylogenetic position of a given nuclear receptor within the tree correlates with its DNA-binding domain and dimerization abilities, but there is no identified relationship between a ligand-dependent nuclear receptor and the chemical nature of its ligand. In addition to this, the evolutionary relationships between ligand-dependent receptors did not make much sense, as closely related receptors of subfamilies bound ligands originating from entirely different biosynthetic pathways (e.g. TRs and RARs). On the other hand, subfamilies that are not evolutionarily related bind similar ligands (RAR and RXR bind all-trans and 9-cis retinoic acid, respectively). 
Evolution: In 1997, it was discovered that nuclear receptors did not exist in static off and on conformations, but that a ligand could alter the equilibrium between the two states. Furthermore, it was found that nuclear receptors could be regulated in a ligand-independent manner, through either phosphorylation or other post-translational modifications. Thus, this provided a mechanism by which an ancestral orphan receptor could be regulated in a ligand-independent manner, and explained why the ligand binding domain was conserved. Over the next 10 years, experiments were conducted to test this hypothesis and counterarguments soon emerged: Nuclear receptors were identified in the newly sequenced genome of the demosponge Amphimedon queenslandica, a member of Porifera, the most ancient metazoan phylum. The A. queenslandica genome contains two nuclear receptors, known as AqNR1 and AqNR2, and both were characterized to bind and be regulated by ligands. Evolution: Homologs for ligand-dependent vertebrate receptors were found outside vertebrates in mollusks and Platyhelminthes. Furthermore, the nuclear receptors found in cnidarians were found to have structural ligands in mammals, which could mirror the ancestral situation. Two putative orphan receptors, HNF4 and USP, were found, via structural and mass spectrometry analysis, to bind fatty acids and phospholipids, respectively. Nuclear receptors and ligands are found to be much less specific than was previously thought. Retinoids can bind mammalian receptors other than RAR and RXR, such as PPAR, RORb, or COUP-TFII. Furthermore, RXR is sensitive to a wide range of molecules including retinoids, fatty acids, and phospholipids. Evolution: Study of steroid receptor evolution revealed that the ancestral steroid receptor could bind a ligand, estradiol. Conversely, the estrogen receptor found in mollusks is constitutively active and did not bind estrogen-related hormones. Thus, this provided an example of how an ancestral ligand-dependent receptor could lose its ability to bind ligands. A combination of this recent evidence, as well as an in-depth study of the physical structure of the nuclear receptor ligand binding domain, has led to the emergence of a new hypothesis regarding the ancestral state of the nuclear receptor. This hypothesis suggests that the ancestral receptor may have acted as a lipid sensor with an ability to bind, albeit rather weakly, several different hydrophobic molecules, such as retinoids, steroids, hemes, and fatty acids. With its ability to interact with a variety of compounds, this receptor, through duplications, would either lose its ability for ligand-dependent activity or specialize into a highly specific receptor for a particular molecule. History: Below is a brief selection of key events in the history of nuclear receptor research. History: 1905 – Ernest Starling coined the word hormone. 1926 – Edward Calvin Kendall and Tadeus Reichstein isolated and determined the structures of cortisone and thyroxine. 1929 – Adolf Butenandt and Edward Adelbert Doisy independently isolated and determined the structure of estrogen. 1958 – Elwood Jensen isolated the estrogen receptor. 1980s – Cloning of the estrogen, glucocorticoid, and thyroid hormone receptors by Pierre Chambon, Ronald Evans, and Björn Vennström, respectively. 2004 – Pierre Chambon, Ronald Evans, and Elwood Jensen were awarded the Albert Lasker Award for Basic Medical Research, an award that frequently precedes a Nobel Prize in Medicine.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Video capture** Video capture: Video capture is the process of converting an analog video signal—such as that produced by a video camera, DVD player, or television tuner—to digital video and sending it to local storage or to external circuitry. The resulting digital data are referred to as a digital video stream, or more often, simply video stream. Depending on the application, a video stream may be recorded as computer files, or sent to a video display, or both. Devices: Special electronic circuitry is required to capture video from analog video sources. At the system level this function is typically performed by a dedicated video capture device. Such devices typically employ integrated circuit video decoders to convert incoming video signals to a standard digital video format, and additional circuitry to convey the resulting digital video to local storage or to circuitry outside the video capture device, or both. Depending on the device, the resulting video stream may be conveyed to external circuitry via a computer bus (e.g., PCI/104 or PCIe) or a communication interface such as USB, Ethernet or WiFi, or stored in mass-storage memory in the device itself (e.g., digital video recorder).
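To make the software side of this concrete, the sketch below reads frames from a capture device that the operating system exposes, records them to a file, and sends them to a display, matching the two destinations described above. It uses OpenCV's VideoCapture/VideoWriter API purely as an illustration; the device index, codec, frame rate, and output filename are assumptions, and the article does not prescribe any particular library or interface.

```python
# Minimal sketch: read frames from a video capture device, record them to a
# file, and show them on screen. Device index 0, the XVID codec, 30 fps and
# the output filename "capture.avi" are illustrative assumptions.
import cv2

cap = cv2.VideoCapture(0)                      # open capture device 0
if not cap.isOpened():
    raise RuntimeError("Could not open video capture device")

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"XVID")
writer = cv2.VideoWriter("capture.avi", fourcc, 30.0, (width, height))

while True:
    ok, frame = cap.read()                     # grab and decode one frame
    if not ok:
        break
    writer.write(frame)                        # record to local storage
    cv2.imshow("capture", frame)               # send to a video display
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press 'q' to stop
        break

cap.release()
writer.release()
cv2.destroyAllWindows()
```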
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded