**Connector (mathematics)** Connector (mathematics): In mathematics, a connector is a map associated with a linear connection, used to express the covariant derivative on a vector bundle in terms of the connection. Definition: Let ∇ be a connection on the tangent bundle TN of a smooth manifold N. For smooth mappings h:M→TN from any smooth manifold M, the connector K:TTN→TN satisfies: ∇h = K∘Th:TM→TN, where Th:TM→TTN is the differential of h.
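In induced local coordinates this can be made concrete. The following is a sketch of standard connection-map material, not stated in the excerpt above; Γ^k_ij are the Christoffel symbols of ∇, and the index convention varies between sources (for a torsion-free connection the order is immaterial):

```latex
% A point of TTN carries induced coordinates (x, u; v, w):
% base point x, vector u \in T_xN, and tangent components (v, w).
% The connector of the linear connection \nabla is then
K(x, u; v, w) \;=\; \bigl( x,\; w^k + \Gamma^k_{ij}(x)\, v^i u^j \bigr).
```

Applying this to $h = X \circ c$ for a curve $c$ and a vector field $X$ recovers the familiar coordinate formula $(\nabla_{\dot c} X)^k = \dot X^k + \Gamma^k_{ij}\,\dot c^i X^j$.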
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Indeloxazine** Indeloxazine: Indeloxazine (INN) (Elen, Noin) is an antidepressant and cerebral activator that was marketed in Japan and South Korea by Yamanouchi Pharmaceutical Co., Ltd for the treatment of psychiatric symptoms associated with cerebrovascular diseases, namely depression resulting from stroke, emotional disturbance, and avolition. It was marketed from 1988 to 1998, when it was removed from the market reportedly for lack of effectiveness. Indeloxazine acts as a serotonin releasing agent, norepinephrine reuptake inhibitor, and NMDA receptor antagonist. It has been found to enhance acetylcholine release in the rat forebrain through activation of the 5-HT4 receptor via its action as a serotonin releasing agent. The drug has been found to possess nootropic, neuroprotective, anticonvulsant, and antidepressant-like effects in animal models.
**Isooncodine** Isooncodine: Isooncodine is an anticholinergic alkaloid. It was first synthesized in 1989 as an isomer of oncodine, an azafluorenone alkaloid derived from Meiogyne monosperma. It was subsequently isolated from the leaves of Polyalthia longifolia.
**AP3B1** AP3B1: AP-3 complex subunit beta-1 is a protein that in humans is encoded by the AP3B1 gene. Function: This gene encodes a protein that may play a role in organelle biogenesis associated with melanosomes, platelet dense granules, and lysosomes. The encoded protein is part of the heterotetrameric AP-3 protein complex which interacts with the scaffolding protein clathrin. Mutations in this gene are associated with Hermansky–Pudlak syndrome type 2. Interactions: AP3B1 has been shown to interact with AP3S2.
**Microsoft acquisition hoax** Microsoft acquisition hoax: The Microsoft acquisition hoax is a bogus 1994 press release suggesting that the information technology company Microsoft had acquired the Roman Catholic Church. It is considered to be the first Internet hoax to reach a mass audience. The hoax is part of a cycle of "Microsoft jokes" in which Microsoft Corporation is portrayed as a wealthy but evil monopoly built on bloated or unreliable desktop software, planned obsolescence of products, corporate takeovers of once-innovative rivals, and litigiousness. While multiple books have been devoted to the subject, the jokes most commonly circulated online as Internet memes. Press release: The hoax consisted of a press release, purportedly from the Associated Press, that circulated around the Internet in 1994. The press release claimed that Microsoft "will acquire the Roman Catholic Church in exchange for an unspecified number of shares of Microsoft common stock," and that the company expects "a lot of growth in the religious market in the next five to ten years... the combined resources of Microsoft and the Catholic Church will allow us to make religion easier and more fun for a broader range of people." Many of the press release's claims were unrealistic, from suggesting that Catholics would soon be able to take Holy Communion through their computer to claiming that conversion to Catholicism was an "upgrade". Despite these warning signs, several readers of the false press release contacted Microsoft to confirm the claims of the hoax, and on December 16, 1994, Microsoft formally debunked the claims.
Aftermath: Follow-up press releases made similarly outrageous claims—for example, one false press release claimed that IBM had acquired the Episcopal Church, and another suggested that the Italian television network RAI had invested in what the release claimed to be "Microsoft Corp.'s planned on-line computer service, the Microsoft Divine Network." An Internet meme "Microsoft Acquires" spawned a series of similarly formatted mock press releases with an assortment of varying acquisition targets, including the government of the United States of America. According to the release, "United States citizens will be able to expect lower taxes, increases in government services, discounts on all Microsoft products and the immediate arrest of all executive officials of Sun Microsystems Inc. and Netscape Corp." One meta-joke claimed that Microsoft ultimately put an end to the jokes by acquiring "Microsoft Acquires". Despite the proliferation of chain emails circulating the Internet both in 1994 and in the present, the Microsoft hoax was considered the first such hoax to reach a mass audience through the Internet.
**Transient hot wire method** Transient hot wire method: The transient hot wire method (THW) is a very popular, accurate and precise technique for measuring the thermal conductivity of gases, liquids, solids, nanofluids and refrigerants over a wide temperature and pressure range. The technique is based on recording the transient temperature rise of a thin vertical metal wire, treated as infinitely long, when a step voltage is applied to it. The wire is immersed in a fluid and can act both as an electrical heating element and as a resistance thermometer. The transient hot wire method has an advantage over other thermal conductivity methods, since the theory is fully developed and no calibration (or at most a single-point calibration) is required. Furthermore, because of the very short measuring time (1 s), no convection is present in the measurements, and the thermal conductivity of the fluid alone is measured with very high accuracy. Transient hot wire method: Most of the transient hot wire sensors used in academia consist of two identical very thin wires that differ only in length. Sensors using a single wire are used both in academia and industry, with the advantage over two-wire sensors of easier handling of the sensor and changing of the wire. An ASTM standard has been published for the measurement of engine coolants using a single-wire transient hot wire method. History: Some 200 years ago, scientists were using a crude version of this method to make the first ever thermal conductivity measurements on gases. 1781 - Joseph Priestley attempts to measure the ability of different gases to conduct heat using the heated wire experiment. 1931 - Sven Pyk and Bertil Stalhane propose the first “transient” hot wire method for the measurement of thermal conductivity of solids and powders. Unlike previous methods, the one devised by Pyk and Stalhane used shorter measurement times due to the transient nature of the measurement. 1971 - J. W. Haarman introduces the electronic Wheatstone bridge that is a common feature of other modern transient methods. 1976 - Healy et al. publish a journal article detailing the theory of the transient hot wire, described by an ideal solution with appropriate corrections to address effects like convection, among others.
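In the fully developed theory mentioned above, the ideal-wire solution predicts that, after the earliest instants, the wire's temperature rise grows linearly in ln t with slope q/(4πλ), where q is the heat input per unit length of wire; the thermal conductivity therefore follows from a straight-line fit. Below is a minimal sketch of that data reduction (illustrative only; the variable names and synthetic data are assumptions, not taken from any particular instrument):

```python
import numpy as np

def thermal_conductivity(t, dT, q):
    """Estimate thermal conductivity (W/m/K) from a THW trace.

    t  : sample times (s), after the early-time transient
    dT : measured temperature rise of the wire (K)
    q  : applied heat per unit wire length (W/m)

    The ideal THW working equation gives dT = q/(4*pi*k) * ln(t) + const,
    so k is recovered from the slope of dT versus ln(t).
    """
    slope, _intercept = np.polyfit(np.log(t), dT, 1)
    return q / (4.0 * np.pi * slope)

# Synthetic check with a water-like conductivity of 0.6 W/m/K:
k_true, q = 0.6, 1.0
t = np.linspace(0.05, 1.0, 200)  # a ~1 s measurement window
dT = q / (4.0 * np.pi * k_true) * np.log(t) + 2.0
print(thermal_conductivity(t, dT, q))  # recovers ~0.6
```

In practice the measured trace would also carry the finite-wire and boundary corrections that Healy et al. describe, applied before this fit.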
**Pod corn** Pod corn: Pod corn or wild maize is a variety of maize (corn). It is not a wild ancestor of maize but rather a mutant that forms leaves around each kernel. Pod corn (tunicata Sturt) is not grown commercially, but it is preserved in some localities. Pod corn forms glumes around each kernel, which is caused by a mutation at the Tunicate locus. Because of its bizarre appearance, pod corn has had a religious significance to certain Native American tribes. The six major types of corn are dent corn, flint corn, pod corn, popcorn, flour corn, and sweet corn.
**Smash product** Smash product: In topology, a branch of mathematics, the smash product of two pointed spaces (i.e. topological spaces with distinguished basepoints) (X, x0) and (Y, y0) is the quotient of the product space X × Y under the identifications (x, y0) ∼ (x0, y) for all x in X and y in Y. The smash product is itself a pointed space, with basepoint being the equivalence class of (x0, y0). The smash product is usually denoted X ∧ Y or X ⨳ Y. The smash product depends on the choice of basepoints (unless both X and Y are homogeneous). Smash product: One can think of X and Y as sitting inside X × Y as the subspaces X × {y0} and {x0} × Y. These subspaces intersect at a single point: (x0, y0), the basepoint of X × Y. So the union of these subspaces can be identified with the wedge sum X∨Y=(X⨿Y)/∼ . In particular, {x0} × Y in X × Y is identified with Y in X∨Y , ditto for X × {y0} and X. In X∨Y , subspaces X and Y intersect in the single point x0∼y0 . The smash product is then the quotient X∧Y=(X×Y)/(X∨Y). Smash product: The smash product shows up in homotopy theory, a branch of algebraic topology. In homotopy theory, one often works with a different category of spaces than the category of all topological spaces. In some of these categories the definition of the smash product must be modified slightly. For example, the smash product of two CW complexes is a CW complex if one uses the product of CW complexes in the definition rather than the product topology. Similar modifications are necessary in other categories. Examples: The smash product of any pointed space X with a 0-sphere (a discrete space with two points) is homeomorphic to X. The smash product of two circles is a quotient of the torus homeomorphic to the 2-sphere. More generally, the smash product of two spheres Sm and Sn is homeomorphic to the sphere Sm+n. 
The smash product of a space X with a circle is homeomorphic to the reduced suspension of X: ΣX≅X∧S1. The k-fold iterated reduced suspension of X is homeomorphic to the smash product of X and a k-sphere: ΣkX≅X∧Sk. In domain theory, the smash product is used to take the product of two domains so that the product is strict on its arguments. As a symmetric monoidal product: For any pointed spaces X, Y, and Z in an appropriate "convenient" category (e.g., that of compactly generated spaces), there are natural (basepoint preserving) homeomorphisms X∧Y≅Y∧X,(X∧Y)∧Z≅X∧(Y∧Z). As a symmetric monoidal product: However, for the naive category of pointed spaces, this fails, as shown by the counterexample X=Y=Q and Z=N found by Dieter Puppe. A proof due to Kathleen Lewis that Puppe's counterexample is indeed a counterexample can be found in the book of Johann Sigurdsson and J. Peter May. These isomorphisms make the appropriate category of pointed spaces into a symmetric monoidal category with the smash product as the monoidal product and the pointed 0-sphere (a two-point discrete space) as the unit object. One can therefore think of the smash product as a kind of tensor product in an appropriate category of pointed spaces. Adjoint relationship: Adjoint functors make the analogy between the tensor product and the smash product more precise. In the category of R-modules over a commutative ring R, the tensor functor (−⊗RA) is left adjoint to the internal Hom functor Hom(A,−), so that Hom(X⊗A,Y)≅Hom(X,Hom(A,Y)). Adjoint relationship: In the category of pointed spaces, the smash product plays the role of the tensor product in this formula: if A and X are compact Hausdorff then we have an adjunction Maps∗(X∧A,Y)≅Maps∗(X,Maps∗(A,Y)) where Maps∗ denotes continuous maps that send basepoint to basepoint, and Maps∗(A,Y) carries the compact-open topology. In particular, taking A to be the unit circle S1, we see that the reduced suspension functor Σ is left adjoint to the loop space functor Ω: Maps∗(ΣX,Y)≅Maps∗(X,ΩY).
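The sphere example can be checked with a short induction using only facts already stated in this article (associativity of the smash product and the suspension identity ΣX ≅ X ∧ S1):

```latex
S^m \wedge S^n
  \;\cong\; \underbrace{S^1 \wedge \cdots \wedge S^1}_{m \text{ factors}} \wedge S^n
  \;\cong\; \Sigma^m S^n
  \;\cong\; S^{m+n},
```

since each reduced suspension of a sphere raises its dimension by one, $\Sigma S^k \cong S^{k+1}$.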
**Roller furling** Roller furling: Roller furling is a method of furling (i.e. reefing) a yacht's staysail by rolling the sail around a stay. Roller furling is typically used for foresails such as jibs or genoas. A mainsail may also be furled by a similar system, whereby the sail is furled within the mast or around a rotating boom (or around a rotating shaft within a boom). Although staysail roller-furling is effective and very common, in-mast or in-boom mainsail furling involves some compromises, and mainsail slab reefing gives a better sail shape. Methods: The idea for a furling jib is usually attributed to Major E du Boulay in England, who invented a device similar to a roller blind for reefing a jib. Major Wykeham-Martin used one of Boulay's rollers and improved the system by incorporating roller bearings in 1907, when the system was patented. The original castings were made by London-based toilet makers Bouldings. Furling systems are commonplace on cruising yachts today. An early version of staysail roller furling was as follows: the leading edge of the sail (luff) to be furled is stiffened in some way, such as by attaching it to a length of plastic pipe or by sewing in a stiffening material such as foam. This stiffened edge serves to spread the force of the furler along the edge of the sail, so that the sail will furl along its full length. This stiffened edge is then attached to the source of energy for furling, which may be a handle that is turned, a spool containing a line that is pulled, or a motor. Murray Scheiner, a sailor and professional rigging designer from Great Neck, New York, modernized the furling jib in the late 1960s. His inspiration came from observing a disabled sailor friend who required several crew members to hoist the jib, preventing him from sailing independently. This invention greatly changed sailing for professionals and leisure sailors alike. The simplest and most common furling systems are for jibs and other headsails.
These generally consist of either a plastic pipe or a specially stiffened jib, and a spool to hold the furling line. The jib is attached to the furler, and the line is wound around the spool. When the line is pulled, the furler turns, rolling up the jib; when the furling line is released, the jibsheet may be used to unfurl the jib. Methods: The other common type of furling system is for the mainsail. The mainsail may be furled into the mast or the boom, with boom furling systems being simpler and more common. The simplest boom furling system consists of a boom that can rotate along its axis, with a latch to lock it in place. Provision must be made to allow the mainsail to wrap around the boom without interfering with the mainsheet, such as end-boom sheeting or a bridle. To furl the mainsail, the boom is unlocked, and then rotated to take up the desired amount of mainsail, and then locked in place. More advanced boom furling systems will wrap the furling mechanism in a slotted cover, so the sail furls inside the cover; this also makes sheeting easier, since the sheet may be attached to the outer portion of the boom. These systems use either a line, a hand-operated crank, or a hydraulic or electric powered furler. Requirements: To be successfully furled, a sail must be flexible enough to wrap around a tight radius. Sail stiffening devices such as battens can be accommodated by roller furling systems when they lie parallel to the mast; otherwise furling must be stopped at the lowest batten or the battens must be removed.The sail must also have a fairly straight edge to lie along the furling roller, and be flat enough to form a neat, compact roll. If the sail meets these conditions, then it may be suitable for use with a roller furling system; if the sail is not, then it may be possible to replace the sail with one of a design more conducive to furling. 
Jibs and genoas, for example, are generally suitable for furling, as they are relatively flat, while a gennaker, with its larger degree of camber, is probably not suitable for furling. In particularly windy areas, a separate smaller jib may sometimes be rigged as a Solent to avoid the disadvantage of a partially furled genoa. Requirements: Boom furling mainsails also must deal with the issue of the leading edge. Most sails attach to the mast with sliders or a rope-supported edge that rides in a track in the mast, and this adds bulk to the leading edge of the sail as it rolls. This is also an issue when unfurling, as the edge of the sail must be fed smoothly into the mast's track. The weight of the sail and furling equipment also increases the boom's mass, which can increase the danger of injury to a crewmember if they are hit by the swinging boom. Mast furling systems avoid the issues of boom furling, but add their own issues. Mast furling systems essentially eliminate the possibility of battens, as vertical battens are not practical. Without battens, the mainsail must be cut with a hollow leech, like the typical jib, which reduces the sail area. Mast furlers also add mass all along the length of the mast, raising the center of mass of the boat, which decreases stability.
**Acetomepregenol** Acetomepregenol: Acetomepregenol (ACM), also known as mepregenol diacetate and sold under the brand name Diamol, is a progestin medication which is used in Russia for the treatment of gynecological conditions and as a method of birth control in combination with an estrogen. It has also been studied in the treatment of threatened abortion. It has been used in veterinary medicine as well. It has been marketed since at least 1981. Pharmacology: Based on its chemical structure, namely the lack of a C3 ketone, it is probable that acetomepregenol is a prodrug of megestrol acetate (the 3-keto analogue). Chemistry: Acetomepregenol, also known as megestrol 3β,17α-diacetate, as well as 3β-dihydro-6-dehydro-6-methyl-17α-hydroxyprogesterone diacetate or as 3β,17α-diacetoxy-6-methylpregna-4,6-dien-20-one, is a synthetic pregnane steroid and a derivative of progesterone and 17α-hydroxyprogesterone. It is very close to megestrol acetate (6-dehydro-6-methyl-17α-acetoxyprogesterone) in structure, except that there is a hydroxyl group with an acetate ester attached at the C3 position instead of a ketone. A closely related medication is cymegesolate (also known as megestrol 3β-cypionate 17α-acetate), which, in contrast, has not been marketed.
**Tad Williams bibliography** Tad Williams bibliography: This is a complete list of works by American science fiction and fantasy writer Tad Williams. Osten Ard: Prequels Brothers of the Wind (2021), a prequel set a millennium before The Dragonbone Chair and previously known under the working title The Shadow of Things to Come. The Splintered Sun (forthcoming fall 2024), a prequel following the adventures of Flann Alderwood and his band of misfit rebels in one of Osten Ard’s oldest and strangest cities, Crannhyr, during the pre-Dragonbone Chair history of Hernystir and Erkynland. Osten Ard: Memory, Sorrow, and Thorn The Dragonbone Chair (1988) Stone of Farewell (1990) To Green Angel Tower (1993) Book 3 was split into two parts for paperback publication (1994): To Green Angel Tower, Part 1 and To Green Angel Tower, Part 2 (United States edition); To Green Angel Tower: Siege and To Green Angel Tower: Storm (United Kingdom edition). Novelette - "The Burning Man" (Legends, 1998) Graphic Novel - The Burning Man Osten Ard (bridge novel): The Heart of What Was Lost (2017) The Last King of Osten Ard: The Witchwood Crown (2017) Empire of Grass (2019) Into the Narrowdark (2022) The Navigator's Children (forthcoming 2023) Otherland: City of Golden Shadow (1996) River of Blue Fire (1998) Mountain of Black Glass (1999) Sea of Silver Light (2001) Novelette - "The Happiest Dead Boy in the World" (Legends II, 2004) Novelette - "The Boy Detective of Oz" (Oz Reimagined: New Tales from the Emerald City and Beyond, 2013) Novella - "The Deathless Prince and the Peach Maiden: An Otherland Novella" (forthcoming September 2023) Shadowmarch: Shadowmarch (2004) Shadowplay (2007) Shadowrise (2010) Shadowheart (2010). Similar to the Memory, Sorrow, and Thorn series, which was initially intended to be a trilogy but became a tetralogy instead, Book 3 became so lengthy that it could not be published in one volume.
Ordinary Farm series: Young adult series, written with Deborah Beale (his wife). The Dragons of Ordinary Farm (2009) The Secrets of Ordinary Farm (2011) The Heirs of Ordinary Farm (working title, forthcoming) Bobby Dollar: Noir fantasy thrillers The Dirty Streets of Heaven (2012) Happy Hour in Hell (2013) Sleeping Late on Judgement Day (2014) God Rest Ye Merry, Gentlepig (2014) Standalone novels: Tailchaser's Song (1985) Child of an Ancient City (1992), written with Nina Kiriki Hoffman Caliban's Hour (hardcover, 1994) The War of the Flowers (2003) Collections: Rite: Short Work (2006) Rite: Short Work (2008) Subterranean Press/Far Territories. Reprint of the limited hardcover edition without the non-fiction pieces; the reprint contains only the short stories. A Stark and Wormy Knight (2011) Beale-Williams Enterprise. An ebook original that contains new pieces of short fiction, edited by Deborah Beale and sold exclusively by Amazon.com. "A Stark and Wormy Knight" (2012) Subterranean Press The Very Best of Tad Williams (2014) Tachyon Publications Short fiction and screenplays: The Very Best of Tad Williams. Forthcoming career retrospective collection, featuring 16 stories and one screenplay. Short fiction and screenplays: Short Fiction. Williams has published many works of short fiction, beginning with “Child of an Ancient City” in Weird Tales, Fall 1988 (expanded to book length in 1992), and continuing through 2013 with “The Boy Detective of Oz: An Otherland Story” in the anthology Oz Reimagined: New Tales from The Emerald City and Beyond from editors John Joseph Adams and Douglas Cohen; “The Old Scale Game” in the anthology Unfettered from editor Shawn Speakman; and Diary Of A Dragon, a limited edition chapbook from Subterranean Press. Williams’s short fiction has been collected in RITE: Short Work (2006), A Stark and Wormy Knight (2012), and The Very Best of Tad Williams (2014).
His short story “The Burning Man” was included in a graphic novel omnibus, The Wood Boy—The Burning Man (with Raymond Feist), from the Dabel Brothers in 2005. Short fiction and screenplays: Screenplays. Two television ideas, both unproduced, are included in RITE: Short Work: two episodes of “THE CLOAK” and “DOGS VERSUS THE WORLD.” The screenplay “BLACK SUNSHINE” is included in A Stark and Wormy Knight. Comics: Williams’s first comic book series was Mirrorworld: Rain, published in 1997. Only two issues appeared: Number 1 (the premier issue, February 1997) and Number 0 (April 1997), before the publisher Tekno Comix went out of business. In 2006 Williams wrote The Next, a six-issue miniseries for DC Comics featuring art by Dietrich Smith (Aquaman, Outsiders) and Walden Wong (Day of Vengeance). In 2007, Tad wrote a one-shot issue for DC Comics’ Helmet of Fate limited series: The Helmet Of Fate: Ibis The Invincible #1 (March 2007), featuring art by Phil Winslade (The Monolith). Tad continued writing for DC with issues 50 through 57 of Aquaman: Sword Of Atlantis, teamed with artists Shawn McManus (The Sandman, Shadowpact) and Walden Wong (The Creeper, The Next). His proposal, Bad Guy Factory, for a series “based on the idea that all those supervillains had to get their training and equipment somewhere” is included in the collection A Stark and Wormy Knight. Maps and illustrations: Williams drew the maps included in his books, and his original illustrations are included in the first world edition of Caliban's Hour.
Maps and illustrations: Tailchaser's World (map) (1986) The Dragonbone Chair (maps) (1988) Stone of Farewell (maps) (1990) To Green Angel Tower (maps) (1993) Caliban's Hour (illustrations, 1994) The Burning Man (map) (1998) Shadowmarch: Eion and Xand (maps) (2004) Southmarch: The Outer Keep (map) (2004) Southmarch: The Inner Keep (map) (2004) The March Kingdoms (map) (2004) March Kingdoms (map) (2007) Shadowplay: Hierosol (map) (2007) Non-fiction: In addition to writing Introductions, Appendices, Synopses, Forewords, Afterwords, and Author’s Notes for his own books and stories, Williams’s non-fiction includes introductions for other books, essays, letters, and toastmaster speeches. Non-fiction: Collected in RITE: Short Work (2006): Why I Write What I Write; Idiot: A Brief History of a Band; 100 Best Horror (The Three Stigmata of Palmer Eldritch); Six Books by Philip K. Dick; Introduction to Michael Moorcock’s Gloriana; Doctor Strangetoast, or, How I Learned to Stop Worrying and Love the Rocket-Shaped Thing. Introduction (Dragon Fantastic) (1992) Letter (Locus #407) (1994) An Appreciation (Elsie Wollheim) (1996) Mike Gilbert: An Appreciation (2000) Introduction (Gormenghast) (2007) Afterword (“The Lamentably Comical Tragedy (or The Laughably Tragic Comedy) of Lixal Laqavee”) (2009) Foreword (Elric: Swords and Roses) (2010) Ubik: An Afterword (2012) Adaptations to other media: Tailchaser’s Song is in development as an animated feature film from Animetropolis. Otherland is in development as an MMORPG. Production is currently relocating to the northern US. On August 8, 2020, Warner Bros. purchased the film rights to the Memory, Sorrow and Thorn book trilogy, and production is currently in the developmental planning stages.
**Corona radiata (embryology)** Corona radiata (embryology): The corona radiata is the innermost layer of the cells of the cumulus oophorus and is directly adjacent to the zona pellucida, the inner protective glycoprotein layer of the ovum. The cumulus oophorus consists of the cells surrounding the corona radiata, lying between the corona radiata and the follicular antrum. Its main purpose in many animals is to supply vital proteins to the cell. It is formed by follicle cells adhering to the oocyte before it leaves the ovarian follicle, and originates from the squamous granulosa cells present at the primordial stage of follicular development. The corona radiata is formed when the granulosa cells enlarge and become cuboidal, which occurs during the transition from the primordial to the primary stage. These cuboidal granulosa cells, also known as the granulosa radiata, form more layers throughout the maturation process, and remain attached to the zona pellucida after the ovulation of the Graafian follicle. For fertilization to occur, sperm cells rely on hyaluronidase (an enzyme found in the acrosome of spermatozoa) to disperse the corona radiata from the zona pellucida of the secondary (ovulated) oocyte, thus permitting entry into the perivitelline space and allowing contact between the sperm cell and the nucleus of the oocyte.
**Hajos–Parrish–Eder–Sauer–Wiechert reaction** Hajos–Parrish–Eder–Sauer–Wiechert reaction: The Hajos–Parrish–Eder–Sauer–Wiechert reaction in organic chemistry is a proline-catalysed asymmetric aldol reaction. The reaction is named after the principal investigators of the two groups who reported it simultaneously: Zoltan Hajos and David Parrish from Hoffmann-La Roche, and Rudolf Wiechert and co-workers from Schering AG. Discovered in the 1970s, the original Hajos-Parrish catalytic procedure – shown in the reaction equation, leading to the optically active bicyclic ketol – paved the way for asymmetric organocatalysis. The Eder-Sauer-Wiechert modification led directly to the optically active enedione, through the loss of water from the bicyclic ketol shown in the figure. It has been used extensively as a tool in the synthesis of steroids and other enantiomerically pure molecules. Hajos–Parrish–Eder–Sauer–Wiechert reaction: In the original reaction shown in the figure above, naturally occurring chiral proline is the chiral catalyst in an aldol reaction. The starting material is an achiral triketone, and just 3% of proline is required to obtain the reaction product, a ketol, in 93% enantiomeric excess. As shown above, Hajos and Parrish worked at ambient temperature in dimethylformamide (DMF) solvent using a catalytic amount (3% molar equiv.) of (S)-(−)-proline, enabling them to isolate the optically active intermediate bicyclic ketol. Thus, they described the first use of proline in a catalytic asymmetric aldol reaction. History: Research on asymmetric enamine catalysis applied to important intermediates in steroid synthesis arose from an increased interest in efficient and convenient steroid total syntheses in the 1960s. In particular, two industrial groups in the early 1970s reported proline-catalyzed intramolecular aldol reactions.
History: In 1971, the Schering group headed by Escher worked under non-biological conditions using (S)-proline (47 mol%) and 1N perchloric acid in acetonitrile at 80 °C. Hence, they could not isolate the Hajos-Parrish intermediate bicyclic ketol but instead obtained the condensation product (S)-7a-methyl-2,3,7,7a-tetrahydro-1H-indene-1,5(6H)-dione through the loss of water. Thirty-seven years later a new group at Schering AG published the continuation of the earlier Schering work. Instead of the aforementioned non-biological conditions, the new group used the Hajos-Parrish catalytic procedure. Thus, they could isolate the optically active 6,5-bicyclic ketol described so far only in the Hajos-Parrish publications. In 1974, Hajos and Parrish published the synthesis of bicyclic ketol intermediates in good yield and enantiomeric excess. They investigated further the exact configuration of the cis-fused 7a-methyl-6,5-bicyclic ketol shown in the reaction scheme above by circular dichroism, and these results were confirmed by a single-crystal X-ray diffraction study. The centrosymmetric crystal of the corresponding racemic ketol, without a heavy-atom label, was obtained by the use of racemic proline. It showed by X-ray diffraction an axial orientation of the angular methyl group and an equatorial orientation of the hydroxyl group in the chair conformer of the six-membered ring. This is in good agreement with the crystal structure of the CD-ring of digitoxigenin. The structures of this ketol and its ethyl homologue are shown as follows: Similar studies of the 7a-ethyl homologue showed that the ethyl bicyclic ketol existed in a cis conformation in which the 7a-ethyl group is equatorially oriented and the hydroxyl group is axially oriented in the chair form of the six-membered ring, as shown above.
The reason for the preference for this conformation could be an enhanced 1,3-diaxial interaction in the other cis conformer between the angular ethyl group and the axial hydrogens at C-4 and C-6 in the six-membered ring. Intermolecular versions: In a 2000 study the Barbas group found that intermolecular aldol additions (those between ketones and aldehydes) are also possible, albeit with the use of considerably more proline. The authors noted the similarity of proline, the aldolase antibodies they had created, and natural aldolase enzymes such as aldolase A, all of which operate through an enamine intermediate. In this reaction the large concentration of acetone (one of the two reactants) suppresses various possible side-reactions: reaction of the ketone with proline to form an oxazolidinone and reaction of the aldehyde with proline to form an azomethine ylide. Intermolecular versions: Notz and List went on to expand the utility of this reaction to the synthesis of 1,2-diols. In the full account of their 2000 Communication, the group revealed that proline together with the thiazolium salt 5,5-dimethyl thiazolidinium-4-carboxylate were found to be the most effective catalysts among a large group of amines, while catalysis with (S)-1-(2-pyrrolidinylmethyl)-pyrrolidine salts formed the basis for the development of diamine organocatalysts that have proven effective in a wide variety of organocatalytic reactions. The asymmetric synthesis of the Wieland-Miescher ketone (1985) is another intramolecular reaction also based on proline, and it was explored by the Barbas group in 2000. In this study the Barbas group demonstrated for the first time that proline can catalyze the cascade Michael-aldol reaction through combined iminium-enamine catalysis.
This work is significant because, despite the 30-year history and industrial application of the Hajos-Parrish reaction, the triketone substrate for this reaction had always been synthesized in a discrete, independent step, demonstrating a fundamental lack of understanding of the chemical mechanism of this reaction. The Barbas group had reported the aldolase-antibody-catalyzed iminium-enamine Robinson annulation in their 1997 study, which marked the beginning of their studies in the area now called organocatalysis. In a report published in 2002, Carlos F. Barbas III said: "Work in the 1970s on proline-catalyzed intramolecular aldol addition reactions by synthetic organic chemists Zoltan G. Hajos and David R. Parrish of the chemical research department at Hoffmann-La Roche, Nutley, N.J., inspired us to look more closely at parallels between small-molecule catalysts and enzymes". In 2002 the MacMillan group was the first to demonstrate the proline-catalyzed aldol reaction between different aldehydes. This reaction is unusual because in general aldehydes will self-condense. Intermolecular versions: The organocatalytic intermolecular aldol reaction is now known as the Barbas-List aldol reaction. Reaction mechanism: Several reaction mechanisms for the triketone reaction have been proposed over the years. Hajos and Parrish proposed the enamine mechanism in their paper.[2] However, their experiment with a stoichiometric amount of labeled water (H218O) supported a carbinolamine mechanism. Therefore, Hajos put forward (1974) a hemiaminal intermediate.[2] The Agami mechanism (1984) has an enamine intermediate with two proline units involved in the transition state (based on experimental reaction kinetics), while according to a mechanism by Houk (2001) a single proline unit suffices, with a cyclic transition state in which the proline carboxyl group is involved in hydrogen bonding. 
Reaction mechanism: The hemiaminal (carbinolamine) put forward by Hajos in 1974 can change to a tautomeric iminium hydroxide intermediate. Enolization of the side-chain methyl ketone by the iminium hydroxide ion would be followed by ring closure to the optically active bicyclic ketol product shown above (see Figure 1) under the influence of the catalytic amount of (S)-(−)-proline. Pengxin Zhou, Long Zhang, Sanzhong Luo, and Jin-Pei Cheng obtained excellent results using the simple chiral primary amine t-Bu-CH(NH2)-CH2-NEt2.TfOH for the synthesis of both the Wieland-Miescher ketone and the Hajos-Parrish ketone as well as their analogues. This supports the iminium mechanism, because it is textbook chemistry that primary amines form imines rather than enamines with carbonyl compounds. Reaction mechanism: The Hajos 1974 carbinolamine mechanism has received unwitting support from a more recent paper by Michael Limbach. The triketone starting material 2-methyl-2-(3-oxobutyl)-1,3-cyclopentanedione gave the expected optically active bicyclic ketol (+)-(3aS,7aS)-3a,4,7,7a-tetrahydro-3a-hydroxy-7a-methyl-1,5(6H)-indanedione with the (S)-(−)-proline catalyst. On the other hand, the stereochemical outcome is reversed, with ee selectivities of up to 83%, by using homologous amino acid catalysts such as (S)-β-homoproline [(pyrrolidine-(2S)-yl)acetic acid]. This apparent anomaly can be explained by a top-side approach of the bulkier beta amino acids to the above triketone starting material of reflective symmetry. The top-side approach results in the formation of an enantiotopic carbinolamine to give the (−)-(3aR,7aR)-3a,4,7,7a-tetrahydro-3a-hydroxy-7a-methyl-1,5(6H)-indanedione bicyclic ketol enantiomer, identical to the one obtained with unnatural (R)-(+)-proline. List in 2010, on the other hand, was perplexed and surprised that Hajos rejected the enamine mechanism, certainly in light of earlier work by Spencer in 1965 on amine-catalysed aldol reactions. 
It is interesting and surprising that Eder, Sauer and Wiechert have not attempted to explain the reaction mechanism.[3] The reaction mechanism proposed by the Barbas group in 2000 for the intermolecular reactions is also based on enamine formation, with the observed stereoselectivity rationalized by the Zimmerman-Traxler model favoring Re-face approach. This is the same mechanism proposed by Barbas for aldolase antibodies reported by the group in 1995: This enamine mechanism also drives the original Hajos-Parrish triketone reaction, but the involvement of two proline molecules in it, as proposed by Agami, is disputed by Barbas based on the lack of nonlinear effects, and this view is supported by later studies of List based on reaction kinetics. The general mechanism is further supported by List by the finding that in a reaction carried out in labeled water (H218O), the oxygen isotope finds its way into the reaction product. The Hajos and Parrish experiment with a stoichiometric amount of labeled water (H218O) supported the carbinolamine mechanism.[2] In the same study [20] the reaction of proline with acetone to form the oxazolidinone (in DMSO) was examined: The equilibrium constant for this reaction is only 0.12, leading List to conclude that the involvement of the oxazolidinone is only parasitic. Reaction mechanism: Blackmond in 2004 also found oxazolidinones as intermediates (by NMR) in a related proline-catalysed α-aminooxylation of propanal with nitrosobenzene: Chiong Teck Wong of the Institute of High Performance Computing, Singapore, studied the similar oxyamination reaction of nitrosobenzene with butanal using a chiral prolinol silyl ether catalyst. His studies strongly suggest that the catalyst generates the enol and forms an enol-catalyst complex. Nitrosobenzene subsequently reacts with the enol-catalyst complex to afford the (S)-N-nitroso aldol product, in agreement with Pauling's scale of electronegativity. 
Sodium borohydride reduction of the primarily formed aldol products gave the corresponding alcohols in good yield and excellent enantioselectivity, with a PN/PO ratio of >99:1, as shown in the Scheme below. Wong suggests that the reaction mechanism of the (S)-Cat-catalyzed N-nitroso aldol reaction between nitrosobenzene and butanal proceeds via an enol intermediate and not via an enamine intermediate. Reaction mechanism: The view of oxazolidinones as a parasitic species is contested by Seebach and Eschenmoser, who in 2007 published an article in which they argue that oxazolidinones in fact play a pivotal role in proline catalysis. Among other experiments, they reacted an oxazolidinone with the activated aldehyde chloral in an aldol addition: In 2008, Barbas in an essay addressed the question of why it took until the year 2000, 30 years after the pioneering work by Hajos and Parrish, before interest in this seemingly simple reaction was rekindled, and why the proline catalysis mechanism appeared to be an enigma for so long. One explanation has to do with different scientific cultures: a proline mechanism in the context of aldolase catalysis, already postulated in 1964 by a biochemist, was ignored by organic chemists. Another part of the explanation was the presumed complexity of aldolase catalysis that dominated chemical thinking for a long time. Finally, research did not expand in this area at Hoffmann-La Roche after the resignation of Hajos in November 1970. Origin of the name of the reaction: The name for this reaction took some time to develop. In 1985, Professor Agami and associates were the first to name the proline-catalyzed Robinson annulation the Hajos-Parrish reaction. In 1986, Professor Henri B. Kagan and Professor Agami still called it the Hajos-Parrish reaction in the abstract of their paper. In 2001 Kagan published a paper entitled "Nonlinear Effects in Asymmetric Catalysis: A Personal Account" in Synlett. 
In this paper he introduced the new name, the Hajos-Parrish-Wiechert reaction. In 2002, Benjamin List added two more names and introduced the term Hajos–Parrish–Eder–Sauer–Wiechert reaction. Scientific papers published as late as 2008 in the field of organocatalysis use either the 1985, 2001, or 2002 names of the reaction. Origin of the name of the reaction: A June 2014 search limited to the years 2009–2014 on Google Scholar returns 44 hits for Hajos-Parrish reaction, 3 for Hajos-Parrish-Wiechert reaction, and 184 for Hajos–Parrish–Eder–Sauer–Wiechert reaction. The term 'Hajos-Parrish ketone' (and similar) remains common, however.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Rune Factory 4** Rune Factory 4: Rune Factory 4 is a role-playing video game developed by Neverland and published by Marvelous AQL for the Nintendo 3DS. It is the sixth game in the Rune Factory series, and the first to be released on the 3DS. It was released in Japan in July 2012, in North America in October 2013, and in PAL regions in December 2014. An enhanced version, titled Rune Factory 4 Special, was released for the Nintendo Switch in Japan in July 2019 and worldwide in February 2020. It was also released for the PlayStation 4, Xbox One and Microsoft Windows in 2021. Gameplay: Features common to previous games in the Rune Factory series, including farming, dungeon exploring, and marriage, return in Rune Factory 4. Crafting is one of the main features in the series, with which all equipment used by the main character is created. From shoes to many types of weapons, crafting materials of various stats into new equipment is the key to character progression, more so than the traditional leveling-up feature that most RPGs rely on. Rune Factory 4 adds the ability to make "Orders". As the prince or princess of Selphia, the player can issue Orders ranging from requesting a town event (such as a harvest festival) to pushing back a storm before it wipes out the crops. Story: The game begins by offering the player two lines of dialogue, and the choice between the two determines their character's gender. It is revealed that the character is traveling by airship to the town of Selphia to meet and deliver a gift to its 'god'. The airship, however, is invaded by rogue soldiers and a fight ensues. During the fight the character is hit on the head, and it is later revealed that they have developed amnesia, as has been the case in all previous Rune Factory installments. The player is thrown out of the airship and lands in the town of Selphia, where they are mistaken for a member of royalty who was supposed to be arriving soon to help run the town. 
Although this is quickly revealed not to be the case, the actual prince, named Arthur, who was due to arrive, is happy to let the player take over his job. From there on, the player is to attract tourists, gain the trust of the villagers in Selphia, and work around the town to unlock features needed to carry on with the slice-of-life aspects of the game. At the same time, the player will find a mysterious force at work in the nearby dungeons that is in need of investigation, with some monsters turning into humans upon their defeat. The player may also date a bachelor or bachelorette, get married, and have one child. There are six bachelors and six bachelorettes, each with their own charming points and back stories, which the player learns through a series of events before marriage. Other villagers, even the player's child, can be equipped with battle gear, and a maximum of two companions can fight alongside the player. Characters from Rune Factory 2 and 3, Barrett and Raven, appear as cameos and can be recruited into the player's party for dungeon exploration. Characters: Rune Factory 4 features many characters residing in Selphia. If playing as the male protagonist, the player can choose to marry one of the following female characters: If playing as the female protagonist, the player can choose to marry one of the following male characters: Development: Producer Yoshifumi Hashimoto said that the main theme is "passionate love, sweet marriage". This led him to greatly expand the types of dating events and their dramatic nature, and to create scenarios where players can go adventuring with their families. This was done to create a world that is not purely combat- or farming-driven, but gives players a choice. Another focus of development was to make farming, though repetitive by nature, a satisfying experience for the player. 
Drawing inspiration from games such as Pikmin, where Captain Olimar would pull Pikmin from the ground with a pop, and DokiDoki Panic, he decided to make the game run at 60 frames per second so that character responses to controller input would be felt immediately. It was announced in January 2013 that publisher Xseed Games would be localizing the game for North American audiences; they had previously localized Rune Factory Frontier for the Wii. On September 12, Xseed Games announced a North American release date of October 1, 2013. Xseed would later release the game in Europe and Australia via the 3DS eShop on December 11, 2014. An enhanced version of the game for the Nintendo Switch, titled Rune Factory 4 Special, was released in Japan on July 25, 2019, in North America on February 25, 2020, and in Europe and Australia on February 28, 2020. This release features a new opening theme, another difficulty option, and uses Live2D technology for the additional Newlywed mode. This version of the game was also released for the PlayStation 4, Xbox One and Microsoft Windows on December 7, 2021. Reception: Japanese sales exceeded 150,000 copies, making it the best-selling game in the Rune Factory series, eclipsing Rune Factory 2, which previously had the top sales. Profits were well above expectations for game publisher Marvelous AQL. The game's success led to an upward revision of profit forecasts by 106.7% for the second financial quarter of 2012.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Spectroscopic notation** Spectroscopic notation: Spectroscopic notation provides a way to specify atomic ionization states, atomic orbitals, and molecular orbitals. Ionization states: Spectroscopists customarily refer to the spectrum arising from a given ionization state of a given element by the element's symbol followed by a Roman numeral. The numeral I is used for spectral lines associated with the neutral element, II for those from the first ionization state, III for those from the second ionization state, and so on. For example, "He I" denotes lines of neutral helium, and "C IV" denotes lines arising from the third ionization state, C3+, of carbon. This notation is used, for example, to retrieve data from the NIST Atomic Spectra Database. Atomic and molecular orbitals: Before atomic orbitals were understood, spectroscopists discovered various distinctive series of spectral lines in atomic spectra, which they identified by letters. These letters were later associated with the azimuthal quantum number, ℓ. The letters "s", "p", "d", and "f" for the first four values of ℓ were chosen to be the first letters of properties of the spectral series observed in alkali metals. Other letters for subsequent values of ℓ were assigned in alphabetical order, omitting the letter "j" because some languages do not distinguish between the letters "i" and "j": This notation is used to specify electron configurations and to create the term symbol for the electron states in a multi-electron atom. When writing a term symbol, the above scheme for a single electron's orbital quantum number is applied to the total orbital angular momentum associated with an electron state. Atomic and molecular orbitals: Molecular spectroscopic notation The spectroscopic notation of molecules uses Greek letters to represent the modulus of the orbital angular momentum along the internuclear axis. The quantum number that represents this angular momentum is Λ. Λ = 0, 1, 2, 3, ... 
Symbols: Σ, Π, Δ, Φ. For Σ states, a superscript + denotes that the wave function is symmetric with respect to reflection in a plane containing the nuclei; a superscript − indicates that it is not. Atomic and molecular orbitals: For homonuclear diatomic molecules, the index g or u denotes the existence of a center of symmetry (or inversion center) and indicates the symmetry of the vibronic wave function with respect to the point-group inversion operation i. Vibronic states that are symmetric with respect to i are denoted g for gerade (German for "even"), and unsymmetric states are denoted u for ungerade (German for "odd"). Quarkonium: For mesons whose constituents are a heavy quark and its own antiquark (quarkonium), the same notation applies as for atomic states. However, uppercase letters are used. Furthermore, the first number is (as in nuclear physics) n = N + 1, where N is the number of nodes in the radial wave function, while in atomic physics n = N + ℓ + 1 is used. Hence, a 1P state in quarkonium corresponds to a 2p state in an atom or positronium.
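The letter and numeral conventions above can be made concrete with a short Python sketch. The helper names here are hypothetical (not from any standard library); the logic follows the rules stated in the text: Roman numeral = charge + 1, letters s, p, d, f and then alphabetical order omitting "j" (and letters already used), and the quarkonium numbering n = N + 1 versus the atomic n = N + ℓ + 1.

```python
# Hypothetical helpers illustrating the spectroscopic conventions above.

ROMAN = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

def spectrum_label(symbol, charge):
    """Spectroscopists' label for an ionization state: the Roman numeral
    is the ion charge plus one (I = neutral)."""
    return f"{symbol} {ROMAN[charge]}"

def orbital_letter(l):
    """Spectroscopic letter for azimuthal quantum number l: s, p, d, f,
    then alphabetical from g, omitting 'j' and letters already used."""
    first = "spdf"
    if l < len(first):
        return first[l]
    letter = "g"
    for _ in range(l - len(first)):
        letter = chr(ord(letter) + 1)
        while letter == "j" or letter in first:
            letter = chr(ord(letter) + 1)
    return letter

def quarkonium_to_atomic_n(n_q, l):
    """Quarkonium uses n = N + 1 (N = radial nodes); atomic physics uses
    n = N + l + 1. Convert the former to the latter."""
    return (n_q - 1) + l + 1

print(spectrum_label("C", 3))         # "C IV", i.e. lines of C3+
print(orbital_letter(7))              # "k" (the "j" slot is skipped)
print(quarkonium_to_atomic_n(1, 1))   # 2: a 1P quarkonium state ~ atomic 2p
```

Running the sketch reproduces the examples in the text: "C IV" for the third ionization state of carbon, and the 1P quarkonium state mapping to an atomic 2p state.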
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Myosin-light-chain phosphatase** Myosin-light-chain phosphatase: Myosin light-chain phosphatase, also called myosin phosphatase (EC 3.1.3.53; systematic name [myosin-light-chain]-phosphate phosphohydrolase), is an enzyme (specifically a serine/threonine-specific protein phosphatase) that dephosphorylates the regulatory light chain of myosin II: [myosin light-chain] phosphate + H2O = [myosin light-chain] + phosphate. This dephosphorylation reaction occurs in smooth muscle tissue and initiates the relaxation process of the muscle cells. Thus, myosin phosphatase undoes the muscle contraction process initiated by myosin light-chain kinase. The enzyme is composed of three subunits: the catalytic region (protein phosphatase 1, or PP1), the myosin-binding subunit (MYPT1), and a third subunit (M20) of unknown function. The catalytic region uses two manganese ions as catalysts to dephosphorylate the light chains on myosin, which causes a conformational change in the myosin and relaxes the muscle. The enzyme is highly conserved and is found in the smooth muscle tissue of all organisms. While it is known that myosin phosphatase is regulated by rho-associated protein kinases, there is current debate about whether other molecules, such as arachidonic acid and cAMP, also regulate the enzyme. Function: Smooth muscle tissue is mostly made of actin and myosin, two proteins that interact to produce muscle contraction and relaxation. Myosin II, also known as conventional myosin, has two heavy chains that consist of the head and tail domains and four light chains (two per head) that bind to the heavy chains in the "neck" region. When the muscle needs to contract, calcium ions flow into the cytosol from the sarcoplasmic reticulum, where they activate calmodulin, which in turn activates myosin light-chain kinase (MLC kinase). MLC kinase phosphorylates the myosin light chain (MLC20) at the Ser-19 residue. 
This phosphorylation causes a conformational change in the myosin, activating crossbridge cycling and causing the muscle to contract. Because myosin undergoes a conformational change, the muscle will stay contracted even if calcium and activated MLC kinase concentrations are brought back to normal levels. The conformational change must be undone to relax the muscle. When myosin phosphatase binds to myosin, it removes the phosphate group. Without the group, the myosin reverts to its original conformation, in which it cannot interact with the actin and hold the muscle tense, so the muscle relaxes. The muscle will remain in this relaxed position until myosin is phosphorylated by MLC kinase and undergoes a conformational change. Structure: Myosin phosphatase is made of three subunits. The catalytic subunit, PP1, is one of the more important Ser/Thr phosphatases in eukaryotic cells, as it plays a role in glycogen metabolism, intracellular transport, protein synthesis, and cell division as well as smooth muscle contraction. Because it is so important to basic cellular functions, and because there are far fewer protein phosphatases than kinases in cells, PP1's structure and function are highly conserved (though the specific isoform used in myosin phosphatase is the δ isoform, PP1δ). PP1 works by using two manganese ions as catalysts for the dephosphorylation (see below). Structure: Surrounding these ions is a Y-shaped cleft with three grooves: a hydrophobic, an acidic, and a C-terminal groove. When PP1 is not bound to any other subunit, it is not particularly specific. However, when it binds to the second subunit of myosin phosphatase, MYPT1 (MW ~130 kDa), this catalytic cleft changes configuration. This results in a dramatic increase in myosin specificity. Thus, it is clear that MYPT1 has great regulatory power over PP1 and myosin phosphatase, even without the presence of other activators or inhibitors. 
Structure: The third subunit, M20 (not to be confused with MLC20, the critical regulatory subunit of myosin), is the smallest and most mysterious subunit. Currently little is known about M20, except that it is not necessary for catalysis, as removing the subunit does not affect turnover or selectivity. While some believe it could have a regulatory function, nothing has been determined yet. Mechanism: The mechanism of removing the phosphate from Ser-19 is very similar to other dephosphorylation reactions in the cell, such as the activation of glycogen synthase. Myosin's regulatory subunit MLC20 binds to both the hydrophobic and acidic grooves of PP1 and to MYPT1, the regulatory site on myosin phosphatase. Once in the proper configuration, both the phosphorylated serine and a free water molecule are stabilized by the hydrogen-bonding residues in the active site, as well as by the positively charged ions (which interact strongly with the negative phosphate group). His-125 (on myosin phosphatase) donates a proton to Ser-19 (on MLC20), and the water molecule attacks the phosphorus atom. After shuffling protons to stabilize (which happens rapidly compared to the attack on phosphorus), the phosphate and alcohol are formed, and both leave the active site. Regulation and Human Health: The regulatory pathways of MLC kinase have been well established, but until the late 1980s it was assumed that myosin phosphatase was not regulated, and that contraction/relaxation was entirely dependent on MLC kinase activity. However, since the 1980s, the inhibiting effect of rho-associated protein kinase has been discovered and thoroughly investigated. RhoA-GTP activates Rho-kinase, which phosphorylates MYPT1 at two major inhibitory sites, Thr-696 and Thr-866. This fully demonstrates the value of MYPT1, not only to increase reaction rate and specificity, but also to greatly slow down the reaction. 
However, when telokin is added, it effectively undoes the effect of Rho-kinase, even though it does not dephosphorylate MYPT1. One other proposed regulatory strategy involves arachidonic acid. When arachidonic acid is added to tensed muscle tissue, the acid decreases the rate of dephosphorylation (and thus relaxation) of myosin. However, it is unclear how arachidonic acid functions as an inhibitor. Two competing theories are that arachidonic acid either acts as a co-messenger in the rho-kinase cascade mentioned above, or binds to the C-terminus of MYPT1. When the regulatory systems of myosin phosphatase begin to fail, there can be major health consequences. Since smooth muscle is found in the respiratory, circulatory, and reproductive systems of humans (as well as other places), if the smooth muscle can no longer relax because of faulty regulation, a wide range of problems, including asthma, hypertension, and erectile dysfunction, can result.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Cookie Run: Kingdom** Cookie Run: Kingdom: Cookie Run: Kingdom is an action role-playing gacha game by Devsisters and the sixth game in the Cookie Run series. Gameplay: Cookie Run: Kingdom is an RPG & city-building battle simulator. The game is mainly played by building the player's Cookie Kingdom and collecting Cookies using the game's gacha to fight in various game modes. Gameplay: At the beginning of the game, players receive a piece of land on which to build structures and amenities and collect resources. To unlock more items and resources, the player must level up the Cookie Castle, production buildings, and the Fountain of Abundance. Furthermore, players will have to build additional structures to use for either trading or upgrading. Players can produce resources for their kingdom with various production buildings. The Cookie Castle can be upgraded when given enough items or when certain tasks are cleared, and this allows the player to expand the space available, level up buildings, and place more of them. The same applies to the offline reward-giving Fountain of Abundance. The player can also expand their land by using materials or fighting enemies in certain areas with their team. Gameplay: There are over 100 playable cookies in the game as of July 2023, used as the main fighting units for the game's various game modes. These cookies can be obtained from the game's gacha, which can be activated by spending Crystals, or specific event tokens for a specific Cookie. Crystals are earned through various methods in the game or can be purchased with real-life money. Cookies are placed in teams of up to five by the player. There are 8 rarities of Cookies, ranging from Common to Ancient, and 8 classes of Cookies that serve different functions in battle. For example, Support Cookies provide a mix of healing, damage, and temporary buffs for allies, and Magic Cookies can de-buff enemies. 
All Cookies have a skill that can be activated, which causes damage, healing, or other effects. These skills have a short cooldown before they can be used again. Gameplay: Cookies can be powered up by various means. General stats, which are HP, CRIT%, Attack, and Defense, are upgraded by leveling up the characters, which is primarily done through EXP Jellies. EXP Jellies are obtained from playing through the game's multiple modes or from Cookie Houses that give EXP Jellies over time. The power of a skill can be increased with skill powders, which are obtained through daily bounties that give a specific type of skill powder. Toppings can be put on a Cookie to improve other stats, such as how long it takes for a skill to cool down or how much a Cookie will be affected by a debuff. Up to five toppings can be equipped, and they have to be upgraded using coins and topping pieces. Some Cookies can be given items called Magic Candies that power up and give new effects to their skill. Magic Candies require a specific type of crystal and a resonant ingredient to upgrade. Cookies are obtained through soulstones or the Cookie Gacha, and collecting a certain amount of soulstones for a Cookie the player already has gives a chance to promote it, up to 5 stars, which greatly increases HP, Attack, and Defense. After a Cookie has reached 5 stars, it can be upgraded further through Ascension, which requires Soulcores (a version of Soulstones unlocked after promoting a Cookie to 5 stars) and Soul Essences. Gameplay: The main story mode, known as "World Exploration," contains many levels that are played in order. Each level is played by using a team of Cookies to attack multiple enemies and, on some levels, making the Cookies jump to collect coins, similar to previous Cookie Run games. Another major game mode is Kingdom Arena, where players can advance their tier by battling other players. 
Over time, other modes are added that function in a similar way but with different approaches. Story: In a world populated with anthropomorphized dessert items (created by witches using cookie batter and Life Powder), the five Ancient Cookies - Pure Vanilla, White Lily, Hollyberry, Dark Cacao, and Golden Cheese - created their own kingdoms and were given Soul Jams, which granted them special powers and immortality. One day, a certain cookie wanted to know about the origin and creation of cookies and tried to contact the witches to seek an answer. Upon learning the horrible truth that cookies were meant to be eaten, the cookie falls into the ultimate dough and is re-baked as Dark Enchantress Cookie, who planned to use Cake Monsters to create a new world order. Pure Vanilla was forced to seal Dark Enchantress Cookie away, and the Ancient Cookies fell into hiding, leaving their kingdoms in disarray. Story: A long time after Dark Enchantress Cookie is sealed away, GingerBrave was baked by a witch and escaped. He later learns that his friends, Wizard Cookie and Strawberry Cookie, have escaped from the witch too. They were found by the Sugar Gnomes and started to rebuild a long-forgotten kingdom and explore Earthbread, the Cookie world. Reception: The game gained massive popularity in the wake of Genshin Impact's anniversary rewards controversy, helped by the similar free-to-play gacha model featured in both games. Cookie Run: Kingdom is ranked 31st in the Free Role-Playing Game category in Thailand. Among free games, it ranked 1st in Japan on the Apple App Store and Google Play, and 3rd in the Free Role-Playing Game category in the United States. It was also ranked 1st in South Korea, 2nd in Taiwan, 3rd in Thailand, and 5th in Hong Kong in the Free Game category on the Apple App Store in January 2021. In South Korea, Taiwan, and Thailand, the game's revenue ranked 1st on the Apple App Store, and in Hong Kong and Singapore, it ranked 3rd in January 2021. 
Cookie Run: Kingdom had 10 million downloads in the first two months after its release and has been downloaded over 150 million times as of June 2021.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Carbon-14** Carbon-14: Carbon-14, C-14, 14C or radiocarbon, is a radioactive isotope of carbon with an atomic nucleus containing 6 protons and 8 neutrons. Its presence in organic materials is the basis of the radiocarbon dating method pioneered by Willard Libby and colleagues (1949) to date archaeological, geological and hydrogeological samples. Carbon-14 was discovered on February 27, 1940, by Martin Kamen and Sam Ruben at the University of California Radiation Laboratory in Berkeley, California. Its existence had been suggested by Franz Kurie in 1934. There are three naturally occurring isotopes of carbon on Earth: carbon-12 (12C), which makes up 99% of all carbon on Earth; carbon-13 (13C), which makes up 1%; and carbon-14 (14C), which occurs in trace amounts, making up about 1 to 1.5 atoms per 10¹² atoms of carbon in the atmosphere. Carbon-12 and carbon-13 are both stable, while carbon-14 is unstable and has a half-life of 5700 ± 30 years. Carbon-14 has a maximum specific activity of 62.4 mCi/mmol (2.31 GBq/mmol), or 164.9 GBq/g. Carbon-14 decays into nitrogen-14 (14N) through beta decay. A gram of carbon containing 1 atom of carbon-14 per 10¹² atoms will emit ~0.2 beta particles per second. The primary natural source of carbon-14 on Earth is cosmic ray action on nitrogen in the atmosphere, and it is therefore a cosmogenic nuclide. However, open-air nuclear testing between 1955 and 1980 contributed to this pool. Carbon-14: The different isotopes of carbon do not differ appreciably in their chemical properties. This resemblance is used in chemical and biological research, in a technique called carbon labeling: carbon-14 atoms can be used to replace nonradioactive carbon, in order to trace chemical and biochemical reactions involving carbon atoms from any given organic compound. 
Radioactive decay and detection: Carbon-14 goes through radioactive beta decay: ¹⁴₆C → ¹⁴₇N + e⁻ + ν̄e + 156.5 keV. By emitting an electron and an electron antineutrino, one of the neutrons in the carbon-14 nucleus decays to a proton, and carbon-14 (half-life of 5,730 ± 40 years) decays into the stable (non-radioactive) isotope nitrogen-14. Radioactive decay and detection: As usual with beta decay, almost all the decay energy is carried away by the beta particle and the neutrino. The emitted beta particles have a maximum energy of about 156 keV, while their weighted mean energy is 49 keV. These are relatively low energies; the maximum distance traveled is estimated to be 22 cm in air and 0.27 mm in body tissue. The fraction of the radiation transmitted through the dead skin layer is estimated to be 0.11. Small amounts of carbon-14 are not easily detected by typical Geiger–Müller (G-M) detectors; it is estimated that G-M detectors will not normally detect contamination of less than about 100,000 disintegrations per minute (0.05 µCi). Liquid scintillation counting was long the preferred method, although more recently accelerator mass spectrometry has become the method of choice; it counts all the carbon-14 atoms in the sample and not just the few that happen to decay during the measurements; it can therefore be used with much smaller samples (as small as individual plant seeds), and gives results much more quickly. The G-M counting efficiency is estimated to be 3%. The half-distance layer in water is 0.05 mm. Radiocarbon dating: Radiocarbon dating is a radiometric dating method that uses 14C to determine the age of carbonaceous materials up to about 60,000 years old. The technique was developed by Willard Libby and his colleagues in 1949 during his tenure as a professor at the University of Chicago. 
Libby estimated that the radioactivity of exchangeable carbon-14 would be about 14 disintegrations per minute (dpm) per gram of pure carbon, and this is still used as the activity of the modern radiocarbon standard. In 1960, Libby was awarded the Nobel Prize in Chemistry for this work. Radiocarbon dating: One of the frequent uses of the technique is to date organic remains from archaeological sites. Plants fix atmospheric carbon during photosynthesis, so the level of 14C in plants and animals when they die approximately equals the level of 14C in the atmosphere at that time. However, it decreases thereafter from radioactive decay, allowing the date of death or fixation to be estimated. The initial 14C level for the calculation can either be estimated, or else directly compared with known year-by-year data from tree-ring data (dendrochronology) up to 10,000 years ago (using overlapping data from live and dead trees in a given area), or else from cave deposits (speleothems), back to about 45,000 years before the present. A calculation or (more accurately) a direct comparison of carbon-14 levels in a sample, with tree-ring or cave-deposit carbon-14 levels of a known age, then gives the wood or animal sample age-since-formation. Radiocarbon is also used to detect disturbance in natural ecosystems; for example, in peatland landscapes, radiocarbon can indicate that carbon which was previously stored in organic soils is being released due to land clearance or climate change. Cosmogenic nuclides are also used as proxy data to characterize cosmic particle and solar activity of the distant past. Origin: Natural production in the atmosphere Carbon-14 is produced in the upper troposphere and the stratosphere by thermal neutrons absorbed by nitrogen atoms. When cosmic rays enter the atmosphere, they undergo various transformations, including the production of neutrons. 
The resulting neutrons (n) participate in the following n-p reaction (p is a proton): ¹⁴₇N + n → ¹⁴₆C + p. The highest rate of carbon-14 production takes place at altitudes of 9 to 15 kilometres (30,000 to 49,000 ft) and at high geomagnetic latitudes. Origin: The rate of 14C production can be modelled, yielding values of 16,400 or 18,800 atoms of 14C per second per square meter of the Earth's surface, which agrees with the global carbon budget that can be used to backtrack, but attempts to measure the production rate directly in situ were not very successful. Production rates vary because of changes to the cosmic ray flux caused by the heliospheric modulation (solar wind and solar magnetic field), and, of great significance, due to variations in the Earth's magnetic field. Changes in the carbon cycle, however, can make such effects difficult to isolate and quantify. Origin: Occasional spikes may occur; for example, there is evidence for an unusually high production rate in AD 774–775, caused by an extreme solar energetic particle event, the strongest such event to have occurred within the last ten millennia. Another "extraordinarily large" 14C increase (2%) has been associated with a 5480 BC event, which is unlikely to be a solar energetic particle event. Carbon-14 may also be produced by lightning, but in amounts that are negligible globally compared to cosmic ray production. Local effects of cloud-ground discharge through sample residues are unclear, but possibly significant. Origin: Other carbon-14 sources Carbon-14 can also be produced by other neutron reactions, including in particular 13C(n,γ)14C and 17O(n,α)14C with thermal neutrons, and 15N(n,d)14C and 16O(n,3He)14C with fast neutrons. The most notable routes for 14C production by thermal neutron irradiation of targets (e.g., in a nuclear reactor) are summarized in the table. Carbon-14 may also be radiogenic (cluster decay of 223Ra, 224Ra, 226Ra). However, this origin is extremely rare. 
Origin: Formation during nuclear tests The above-ground nuclear tests that occurred in several countries between 1955 and 1980 (see nuclear test list) dramatically increased the amount of carbon-14 in the atmosphere and subsequently in the biosphere; after the tests ended, the atmospheric concentration of the isotope began to decrease, as radioactive CO2 was fixed into plant and animal tissue, and dissolved in the oceans. Origin: One side effect of the change in atmospheric carbon-14 is that it has enabled some options (e.g., bomb-pulse dating) for determining the birth year of an individual, in particular from the amount of carbon-14 in tooth enamel or the carbon-14 concentration in the lens of the eye. In 2019, Scientific American reported that carbon-14 from nuclear bomb testing has been found in the bodies of aquatic animals found in one of the most inaccessible regions of the Earth, the Mariana Trench in the Pacific Ocean. Origin: Emissions from nuclear power plants Carbon-14 is produced in coolant at boiling water reactors (BWRs) and pressurized water reactors (PWRs). It is typically released to the atmosphere in the form of carbon dioxide at BWRs, and methane at PWRs. Best practice for nuclear power plant operator management of carbon-14 includes releasing it at night, when plants are not photosynthesizing. Carbon-14 is also generated inside nuclear fuels (some due to transmutation of oxygen in the uranium oxide, but most significantly from transmutation of nitrogen-14 impurities), and if the spent fuel is sent to nuclear reprocessing then the carbon-14 is released, for example as CO2 during PUREX. Occurrence: Dispersion in the environment After production in the upper atmosphere, the carbon-14 atoms react rapidly to form mostly (about 93%) 14CO (carbon monoxide), which subsequently oxidizes at a slower rate to form 14CO2, radioactive carbon dioxide. 
The gas mixes rapidly and becomes evenly distributed throughout the atmosphere (the mixing timescale is on the order of weeks). Carbon dioxide also dissolves in water and thus permeates the oceans, but at a slower rate. The atmospheric half-life for removal of 14CO2 has been estimated to be roughly 12 to 16 years in the northern hemisphere. The transfer between the ocean shallow layer and the large reservoir of bicarbonates in the ocean depths occurs at a limited rate. Occurrence: In 2009 the activity of 14C was 238 Bq per kg carbon of fresh terrestrial biomatter, close to the values before atmospheric nuclear testing (226 Bq/kg C; 1950). Total inventory The inventory of carbon-14 in Earth's biosphere is about 300 megacuries (11 EBq), of which most is in the oceans. Occurrence: The following inventory of carbon-14 has been given: global inventory, ~8500 PBq (about 50 t); atmosphere, 140 PBq (840 kg); terrestrial materials, the balance; from nuclear testing (until 1990), 220 PBq (1.3 t). In fossil fuels: Many human-made chemicals are derived from fossil fuels (such as petroleum or coal) in which 14C is greatly depleted because the age of fossils far exceeds the half-life of 14C. The relative absence of 14CO2 is therefore used to determine the relative contribution (or mixing ratio) of fossil fuel oxidation to the total carbon dioxide in a given region of the Earth's atmosphere. Dating a specific sample of fossilized carbonaceous material is more complicated. Such deposits often contain trace amounts of carbon-14. These amounts can vary significantly between samples, ranging up to 1% of the ratio found in living organisms, a concentration comparable to an apparent age of 40,000 years. 
This may indicate possible contamination by small amounts of bacteria, underground sources of radiation causing the 14N(n,p)14C reaction, direct uranium decay (although reported measured ratios of 14C/U in uranium-bearing ores would imply roughly 1 uranium atom for every two carbon atoms in order to cause the 14C/12C ratio, measured to be on the order of 10^−15), or other unknown secondary sources of carbon-14 production. The presence of carbon-14 in the isotopic signature of a sample of carbonaceous material possibly indicates its contamination by biogenic sources or the decay of radioactive material in surrounding geologic strata. In connection with building the Borexino solar neutrino observatory, petroleum feedstock (for synthesizing the primary scintillant) was obtained with low 14C content. In the Borexino Counting Test Facility, a 14C/12C ratio of 1.94×10^−18 was determined; probable reactions responsible for varied levels of 14C in different petroleum reservoirs, and the lower 14C levels in methane, have been discussed by Bonvicini et al. Occurrence: In the human body Since many sources of human food are ultimately derived from terrestrial plants, the relative concentration of carbon-14 in human bodies is nearly identical to the relative concentration in the atmosphere. The rates of disintegration of potassium-40 and carbon-14 in the normal adult body are comparable (a few thousand nuclei disintegrating per second). The beta decays from external (environmental) radiocarbon contribute approximately 0.01 mSv/year (1 mrem/year) to each person's dose of ionizing radiation. This is small compared to the doses from potassium-40 (0.39 mSv/year) and radon (variable). Occurrence: Carbon-14 can be used as a radioactive tracer in medicine. In the initial variant of the urea breath test, a diagnostic test for Helicobacter pylori, urea labeled with approximately 37 kBq (1.0 μCi) carbon-14 is fed to a patient (i.e., 37,000 decays per second). In the event of a H. 
pylori infection, the bacterial urease enzyme breaks down the urea into ammonia and radioactively labeled carbon dioxide, which can be detected by low-level counting of the patient's breath.
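The dating calculation described earlier in the article can be sketched from first principles. In this sketch, Libby's modern standard of 14 dpm per gram and the 5,700-year half-life are taken from the text; the function name is invented for illustration:

```python
import math

HALF_LIFE_YEARS = 5700.0
MODERN_ACTIVITY_DPM = 14.0  # Libby's modern standard, dpm per gram of carbon

def radiocarbon_age(sample_activity_dpm: float) -> float:
    """Age in years from t = (T_half / ln 2) * ln(A0 / A)."""
    return (HALF_LIFE_YEARS / math.log(2)) * math.log(
        MODERN_ACTIVITY_DPM / sample_activity_dpm
    )

# A sample at half the modern activity is exactly one half-life old:
print(round(radiocarbon_age(7.0)))  # -> 5700
```

In practice the estimated age is then calibrated against tree-ring or speleothem records, as the article notes, because the initial atmospheric 14C level has varied over time.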
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Calendar spread** Calendar spread: In finance, a calendar spread (also called a time spread or horizontal spread) is a spread trade involving the simultaneous purchase of futures or options expiring on a particular date and the sale of the same instrument expiring on another date. These individual purchases, known as the legs of the spread, vary only in expiration date; they are based on the same underlying market and strike price. Calendar spread: The usual case involves the purchase of futures or options expiring in a more distant month (the far leg) and the sale of futures or options in a nearby month (the near leg). Uses: The calendar spread can be used to attempt to take advantage of a difference in the implied volatilities between two different months' options. The trader will ordinarily implement this strategy when the options they are buying have a distinctly lower implied volatility than the options they are writing (selling). In the typical version of this strategy, a rise in the overall implied volatility of a market's options during the trade will tend very strongly to be to the trader's advantage, and a decline in implied volatility will tend strongly to work to the trader's disadvantage. Uses: If the trader instead buys a nearby month's options in some underlying market and sells that same underlying market's further-out options of the same striking price, this is known as a reverse calendar spread. This strategy will tend strongly to benefit from a decline in the overall implied volatility of that market's options over time. The calendar spread is mostly neutral with regard to the price of the underlying. The short calendar spread has net negative theta. Futures pricing: Futures calendar spreads or switches represent simultaneous purchases and sales in different delivery months, and are quoted as the difference in prices. 
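The spread-quote arithmetic can be sketched in a few lines, using the hypothetical gold quotes from the worked example in the text (prices in USD per troy ounce):

```python
# Near and far legs of an August-October gold calendar spread.
august = {"bid": 1601.20, "ask": 1601.30}   # near leg
october = {"bid": 1603.20, "ask": 1603.30}  # far leg

# Buying the August-October spread buys August and sells October, so the
# spread's ask crosses the near leg's ask and the far leg's bid; the
# spread's bid is the mirror image.
spread_bid = round(august["bid"] - october["ask"], 2)
spread_ask = round(august["ask"] - october["bid"], 2)
print(f"August-October: {spread_bid:+.2f} bid / {spread_ask:+.2f} ask")
# -> August-October: -2.10 bid / -1.90 ask
```

The negative quote simply reflects that the near month trades below the far month here; the spread is bought or sold at that price difference rather than at either leg's outright price.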
If gold for August delivery is bid $1601.20 asking $1601.30, and gold for October delivery is bid $1603.20 asking $1603.30, then the calendar spread would be bid -$2.10 asking -$1.90 for August–October. Calendar spreads or switches are most often used in the futures markets to 'roll over' a position for delivery from one month into another month. Trading strategies: Pick expiration months as for a covered call When trading a calendar spread, try to think of this strategy as a covered call. The only difference is that you do not own the underlying stock, but you do own the right to purchase it. By treating this trade like a covered call, it will help you pick expiration months quickly. When selecting the expiration date of the long option, it is wise to go at least two to three months out. This will depend largely on your forecast. However, when selecting the short strike, it is a good practice to always sell the shortest dated option available. These options lose value the fastest, and can be rolled out month-to-month over the life of the trade. Trading strategies: Leg into a calendar spread For traders who own calls or puts against a stock, they can sell an option against this position and "leg" into a calendar spread at any point. For example, if you own calls on a particular stock and it has made a significant move to the upside but has recently leveled out, you can sell a call against this stock if you are neutral over the short term. Traders can use this legging-in strategy to ride out the dips in an upward trending stock. Trading strategies: Manage risk Plan your position size around the max loss of the trade and try to cut losses short when you have determined that the trade no longer falls within the scope of your forecast. What to avoid: Limited upside in the early stages This trade has limited upside when both legs are in play. However, once the short option expires, the remaining long position has unlimited profit potential. 
In the early stages of this trade, it is a neutral trading strategy. If the stock starts to move more than anticipated, the result can be limited gains. What to avoid: Be aware of expiration dates As the expiration date for the short option approaches, action needs to be taken. If the short option expires out of the money, then the contract expires worthless. If the option is in the money, then the trader should consider buying back the option at the market price. After the trader has taken action with the short option, he or she can then decide whether to roll the long option position. What to avoid: Time your entry well The last risk to avoid when trading calendar spreads is an untimely entry. In general, market timing is much less critical when trading spreads, but a trade that is very ill-timed can result in a max loss very quickly. Therefore, it is important to survey the condition of the overall market and to make sure you are trading within the direction of the underlying trend of the stock. Conclusion: In summary, it is important to remember that a long calendar spread is a neutral – and in some instances a directional – trading strategy that is used when a trader expects a gradual or sideways movement in the short term and has more direction bias over the life of the longer-dated option. This trade is constructed by selling a short-dated option and buying a longer-dated option, resulting in a net debit. This spread can be created with either calls or puts, and therefore can be a bullish or bearish strategy. The trader wants to see the short-dated option decay at a faster rate than the longer-dated option. 
Conclusion: When trading this strategy, here are a few key points: it can be traded as either a bullish or bearish strategy; it generates profit as time decays; risk is limited to the net debit; it benefits from an increase in volatility; if assigned, the trader loses the time value left in the position; it provides additional leverage in order to make excess returns; and losses are limited if the stock price moves dramatically.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Radar altimeter** Radar altimeter: A radar altimeter (RA), also called a radio altimeter (RALT), electronic altimeter, reflection altimeter, or low-range radio altimeter (LRRA), measures altitude above the terrain presently beneath an aircraft or spacecraft by timing how long it takes a beam of radio waves to travel to ground, reflect, and return to the craft. This type of altimeter provides the distance between the antenna and the ground directly below it, in contrast to a barometric altimeter which provides the distance above a defined vertical datum, usually mean sea level. Principle: As the name implies, radar (radio detection and ranging) is the underpinning principle of the system. The system transmits radio waves down to the ground and measures the time it takes them to be reflected back up to the aircraft. The altitude above the ground is calculated from the radio waves' travel time and the speed of light. Radar altimeters required a simple system for measuring the time-of-flight that could be displayed using conventional instruments, as opposed to a cathode ray tube normally used on early radar systems. Principle: To do this, the transmitter sends a frequency modulated signal that changes in frequency over time, ramping up and down between two frequency limits, Fmin and Fmax over a given time, T. In the first units, this was accomplished using an LC tank with a tuning capacitor driven by a small electric motor. The output is then mixed with the radio frequency carrier signal and sent out through the transmission antenna. Since the signal takes some time to reach the ground and return, the frequency of the received signal is slightly delayed relative to the signal being sent out at that instant. The difference in these two frequencies can be extracted in a frequency mixer, and because the difference in the two signals is due to the delay reaching the ground and back, the resulting output frequency encodes the altitude. 
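The relationship just described can be sketched numerically. For a linear sweep of bandwidth B over time T, a round-trip delay tau shifts the echo by f_beat = (B / T) * tau, and tau = 2h / c, so h = c * f_beat * T / (2 * B). The sweep parameters below are hypothetical, chosen only so the beat lands in the hundreds-of-cycles range mentioned for early units:

```python
C = 3.0e8            # speed of light, m/s
BANDWIDTH_HZ = 10e6  # hypothetical sweep width B
SWEEP_TIME_S = 0.1   # hypothetical sweep duration T

def beat_frequency(altitude_m: float) -> float:
    """Beat frequency produced by an echo from the given altitude."""
    round_trip_delay = 2.0 * altitude_m / C
    return (BANDWIDTH_HZ / SWEEP_TIME_S) * round_trip_delay

def altitude_from_beat(f_beat_hz: float) -> float:
    """Invert the relation: recover altitude from the measured beat."""
    return C * f_beat_hz * SWEEP_TIME_S / (2.0 * BANDWIDTH_HZ)

f = beat_frequency(300.0)                # echo from 300 m above ground
print(f"{f:.0f} Hz")                     # -> 200 Hz
print(f"{altitude_from_beat(f):.0f} m")  # -> 300 m
```

Because the beat frequency is proportional to altitude, a simple frequency counter driving an analog gauge suffices as the display, which is exactly the design advantage the text describes.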
The output is typically on the order of hundreds of cycles per second, not megacycles, and can easily be displayed on analog instruments. This technique is known as Frequency Modulated Continuous-wave radar. Principle: Radar altimeters normally work in the E band, Ka band, or, for more advanced sea-level measurement, S band. Radar altimeters also provide a reliable and accurate method of measuring height above water, when flying long sea-tracks. These are critical for use when operating to and from oil rigs. The altitude specified by the device is not the indicated altitude of the standard barometric altimeter. A radar altimeter measures absolute altitude: the height Above Ground Level (AGL). Absolute altitude is sometimes referred to as height because it is the height above the underlying terrain. Principle: As of 2010, all commercial radar altimeters use linear frequency modulation - continuous wave (LFM-CW or FM-CW). As of 2010, about 25,000 aircraft in the US have at least one radio altimeter. History: Original concept The underlying concept of the radar altimeter was developed independent of the wider radar field, and originates in a study of long-distance telephony at Bell Labs. During the 1910s, Bell Telephone was struggling with the reflection of signals caused by changes in impedance in telephone lines, typically where equipment connected to the wires. This was especially significant at repeater stations, where poorly matched impedances would reflect large amounts of the signal and made long-distance telephony difficult. Engineers noticed that the reflections appeared to have a "humpy" pattern to them; for any given signal frequency, the problem would only be significant if the devices were located at specific points in the line. This led to the idea of sending a test signal into the line and then changing its frequency until significant echoes were seen. 
This would reveal the approximate distance to the device, allowing it to be identified and fixed. Lloyd Espenschied was working at Bell Labs when he conceived of using this same phenomenon to measure distances in a wire. One of his first developments in this field was a 1919 patent (granted 1924) on the idea of sending a signal into railway tracks and measuring the distance to discontinuities. These could be used to detect broken tracks, or, if the distance was changing more rapidly than the speed of the train, other trains on the same line. History: Appleton's ionosphere measurements During this same period there was a great debate in physics over the nature of radio propagation. Guglielmo Marconi's successful trans-Atlantic transmissions appeared to be impossible. Studies of radio signals demonstrated they travelled in straight lines, at least over long distances, so the broadcast from Cornwall should have disappeared into space instead of being received in Newfoundland. In 1902, Oliver Heaviside in the UK and Arthur Kennelly in the USA independently postulated the existence of an ionized layer in the upper atmosphere that was bouncing the signal back to the ground so it could be received. This became known as the Heaviside layer. While an attractive idea, direct evidence was lacking. In 1924, Edward Appleton and Miles Barnett were able to demonstrate the existence of such a layer in a series of experiments carried out in partnership with the BBC. After scheduled transmissions had ended for the day, a BBC transmitter in Bournemouth sent out a signal that slowly increased in frequency. This was picked up by Appleton's receiver in Oxford, where two signals appeared. One was the direct signal from the station, the groundwave, while the other was received later in time after it travelled to the Heaviside layer and back again, the skywave. The trick was how to accurately measure the distance travelled by the skywave to demonstrate it was actually in the sky. 
This was the purpose of the changing frequency. Since the ground signal travelled a shorter distance, it was more recent and thus closer to the frequency being sent at that instant. The skywave, having to travel a longer distance, was delayed, and was thus the frequency as it was some time ago. By mixing the two in a frequency mixer, a third signal is produced that has its own unique frequency that encodes the difference in the two inputs. Since in this case the difference is due to the longer path, the resulting frequency directly reveals the path length. Although technically more challenging, this was ultimately the same basic technique being used by Bell to measure the distance to the reflectors in the wire. History: Everitt and Newhouse In 1929, William Littell Everitt, a professor at Ohio State University, began considering the use of Appleton's basic technique as the basis for an altimeter system. He assigned the work to two seniors, Russell Conwell Newhouse and M. W. Havel. Their experimental system had more in common with the earlier work at Bell, using changes in frequency to measure the distance to the end of wires. The two used it as the basis for a joint senior thesis in 1929. Everitt disclosed the concept to the US Patent Office, but did not file a patent at that time. He then approached the Daniel Guggenheim Fund for the Promotion of Aeronautics for development funding. Jimmy Doolittle, secretary of the Foundation, approached Vannevar Bush of Bell Labs to pass judgment. Bush was skeptical that the system could be developed at that time, but nevertheless suggested the Foundation fund development of a working model. This allowed Newhouse to build an experimental machine which formed the basis of his 1930 Master's thesis, in partnership with J. D. Corley. The device was taken to Wright Field where it was tested by Albert Francis Hegenberger, a noted expert in aircraft navigation. 
Hegenberger found that the system worked as advertised, but stated that it would have to work at higher frequencies to be practical. History: Espenschied and Newhouse Espenschied had also been considering the use of Appleton's idea for altitude measurement. In 1926 he suggested the idea both as a way to measure altitude as well as a forward-looking system for terrain avoidance and collision detection. However, at that time the frequency of available radio systems, even in what was known as shortwave, was calculated to be fifty times lower than what would be needed for a practical system. Espenschied eventually filed a patent on the idea in 1930. By this time, Newhouse had left Ohio State and taken a position at Bell Labs. Here he met Peter Sandretto, who was also interested in radio navigation topics. Sandretto left Bell in 1932 to become the Superintendent of Communications at United Air Lines (UAL), where he led the development of commercial radio systems. Espenschied's patent was not granted until 1936, and its publication generated intense interest. Around the same time, Bell Labs had been working on new tube designs that were capable of delivering between 5 and 10 watts at up to 500 MHz, perfect for the role. This led Sandretto to contact Bell about the idea, and in 1937 a partnership between Bell Labs and UAL was formed to build a practical version. Led by Newhouse, a team had a working model in testing in early 1938, and Western Electric (Bell's manufacturing division) was already gearing up for a production model. Newhouse also filed several patents on improvements in technique based on this work. History: Commercial introduction The system was publicly announced on 8 and 9 October 1938. During World War II, mass production was taken up by RCA, who produced them under the names ABY-1 and RC-24. 
In the post-war era, many companies took up production and it became a standard instrument on many aircraft as blind landing became commonplace. A paper describing the system was published jointly by Espenschied and Newhouse the next year. The paper explores sources of error and concludes that the worst-case built-in error was on the order of 9%, but this might be as high as 10% when flying over rough terrain like the built-up areas of cities. During early flights of the system, it was noticed that the pattern of the returns as seen on an oscilloscope was distinct for different types of terrain below the aircraft. This opened the possibility of all sorts of other uses for the same technology, including ground-scanning and navigation. However, these concepts were not able to be explored by Bell at the time. History: Use as general purpose radar It had been known since the late 1800s that metal and water made excellent reflectors of radio signals, and there had been a number of attempts to build ship, train and iceberg detectors over the years since that time. Most of these had significant practical limitations, especially the use of low-frequency signals that demanded large antennas in order to provide reasonable performance. The Bell unit, operating at a base frequency of 450 MHz, was among the highest frequency systems of its era. In Canada, the National Research Council began working on an airborne radar system using the altimeter as its basis. This came as a great surprise to British researchers when they visited in October 1940 as part of the Tizard Mission, as the British believed at that time that they were the only ones working on the concept. However, the Canadian design was ultimately abandoned in favour of building the fully developed British ASV Mark II design, which operated at much higher power levels. In France, researchers at IT&T's French division were carrying out similar experiments when the German invasion approached the labs in Paris. 
The labs were deliberately destroyed to prevent the research falling into German hands, but German teams found the antennas in the rubble and demanded an explanation. The IT&T director of research deflected suspicion by showing them the unit on the cover of a magazine and admonishing them for not being up-to-date on the latest navigation techniques. Applications: In civil aviation Radar altimeters are frequently used by commercial aircraft for approach and landing, especially in low-visibility conditions (see instrument flight rules) and automatic landings, allowing the autopilot to know when to begin the flare maneuver. Radar altimeters give data to the autothrottle, which is part of the flight computer. Applications: Radar altimeters generally only give readings up to 2,500 feet (760 m) above ground level (AGL). Frequently, the weather radar can be directed downwards to give a reading from a longer range, up to 60,000 feet (18,000 m) above ground level (AGL). As of 2012, all airliners are equipped with at least two and possibly more radar altimeters, as they are essential to autoland capabilities. (As of 2012, determining height through other methods such as GPS is not permitted by regulations.) Older airliners from the 1960s (such as the British Aircraft Corporation BAC 1-11) and smaller airliners in the sub-50 seat class (such as the ATR 42 and BAe Jetstream series) are equipped with them. Applications: Radar altimeters are an essential part of ground proximity warning systems (GPWS), warning the pilot if the aircraft is flying too low or descending too quickly. However, radar altimeters cannot see terrain directly ahead of the aircraft, only that below it; such functionality requires either knowledge of position and the terrain at that position or a forward-looking terrain radar. 
Radar altimeter antennas have a fairly large main lobe of about 80° so that at bank angles up to about 40°, the radar detects the range from the aircraft to the ground (specifically to the nearest large reflecting object). This is because range is calculated based on the first signal return from each sampling period. It does not detect slant range until beyond about 40° of bank or pitch. This is not an issue for landing as pitch and roll do not normally exceed 20°. Applications: Radio altimeters used in civil aviation operate in the IEEE C-band between 4.2 and 4.4 GHz. In early 2022, potential interference from 5G cell phone towers caused some flight delays and a few flight cancellations in the United States. In military aviation Radar altimeters are also used in military aircraft to fly quite low over the land and the sea to avoid radar detection and targeting by anti-aircraft guns or surface-to-air missiles. A related use of radar altimeter technology is terrain-following radar, which allows fighter bombers to fly at very low altitudes. Applications: The F-111s of the Royal Australian Air Force and the U.S. Air Force have a forward-looking, terrain-following radar (TFR) system connected via digital computer to their automatic pilots. Beneath the nose radome are two separate TFR antennae, each providing individual information to the dual-channel TFR system. In case of a failure in that system, the F-111 has a back-up radar altimeter system, also connected to the automatic pilot. Then, if the F-111 ever dips below the preset minimum altitude (for example, 15 meters) for any reason, its automatic pilot is commanded to put the F-111 into a 2G fly-up (a steep nose-up climb) to avoid crashing into terrain or water. Even in combat, the hazard of a collision is far greater than the danger of being detected by an enemy. Similar systems are used by F/A-18 Super Hornet aircraft operated by Australia and the United States. 
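The beam-geometry effect described earlier (no slant-range error until the bank angle exceeds the roughly 40° half-beamwidth) can be sketched with simple trigonometry. This is only a geometric model of the behaviour the text describes, not any particular unit's algorithm:

```python
import math

def indicated_range(altitude_m: float, bank_deg: float,
                    half_beamwidth_deg: float = 40.0) -> float:
    """Range to the nearest ground point inside the antenna cone.

    While the cone still covers the nadir point, the first return comes
    from directly below; beyond that, it comes from the near edge of the
    beam, at an off-nadir angle of (bank - half_beamwidth).
    """
    off_nadir = max(0.0, abs(bank_deg) - half_beamwidth_deg)
    return altitude_m / math.cos(math.radians(off_nadir))

print(indicated_range(1000.0, 20.0))         # -> 1000.0 (true AGL)
print(round(indicated_range(1000.0, 60.0)))  # -> 1064 (slant range, reads high)
```

At typical landing attitudes (under 20° of pitch and roll) the model returns the true height above ground, matching the text's observation that the effect is irrelevant for landing.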
International regulation: The International Telecommunication Union (ITU) defines radio altimeters as “radionavigation equipment, on board an aircraft or spacecraft, used to determine the height of the aircraft or the spacecraft above the Earth's surface or another surface” in article 1.108 of the ITU Radio Regulations (RR). Radionavigation equipment shall be classified by the radiocommunication service in which it operates permanently or temporarily. The use of radio altimeter equipment is categorised as a safety-of-life service; it must be protected from harmful interference and is an essential part of navigation.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Accordion (GUI)** Accordion (GUI): The accordion is a graphical control element comprising a vertically stacked list of items, such as labels or thumbnails. Each item can be "expanded" or "collapsed" to reveal the content associated with that item. There can be zero expanded items, exactly one, or more than one item expanded at a time, depending on the configuration. The term stems from the musical accordion, in which sections of the bellows can be expanded by pulling outward. A common example of an accordion is the Show/Hide operation of a box region, but extended to have multiple sections in a list. An accordion is similar in purpose to a tabbed interface, a list of items where exactly one item is expanded into a panel (i.e. list items are shortcuts to access separate panels). User definition: Several windows are stacked on each other. All of them are "shaded", so only their captions are visible. If one of them is clicked to make it active, it is "unshaded" or "maximized". The other windows in the accordion are displaced toward the top or bottom edge. Examples: A common example using a GUI accordion is the Show/Hide operation of a box region, but extended to have multiple sections in a list. SlideVerse is an accordion interface providing access to web content. The list view of Google Reader also features this. In an early example, Apple's download page used roll-over accordions in 2008. In this example, captured in the Wayback Machine in the Internet Archive, the left column of the page includes three categories that expand on roll-over: "All Downloads", "Top Apple Downloads", and "Top Downloads".
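The expand/collapse behaviour described above can be modelled in a few lines. This is a sketch with invented names; real GUI toolkits wrap equivalent state in their own widget APIs:

```python
class Accordion:
    """Minimal accordion state: tracks which items are currently expanded."""

    def __init__(self, items, single_expand=True):
        self.items = list(items)
        self.single_expand = single_expand  # tab-like: at most one open panel
        self.expanded = set()               # zero, one, or many items open

    def toggle(self, item):
        if item in self.expanded:
            self.expanded.remove(item)      # collapse an open item
        else:
            if self.single_expand:
                self.expanded.clear()       # close any other open panel first
            self.expanded.add(item)         # expand the clicked item

acc = Accordion(["All Downloads", "Top Apple Downloads", "Top Downloads"])
acc.toggle("All Downloads")
acc.toggle("Top Downloads")
print(sorted(acc.expanded))  # -> ['Top Downloads'] in single-expand mode
```

With `single_expand=False` the same model allows several panels open at once, which is the configuration difference between the tab-like and checkbox-like accordion variants the article mentions.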
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Appendix H** Appendix H: Appendix H is the name of an infamous appendix in the Pentium Processor Family Developer's Manual, Volume 3. This appendix contained references to documentation that was only available under a legally binding NDA. Appendix H: This NDA-covered documentation described various new features introduced in the Pentium processor, notably Virtual Mode Extensions (VME) and 4 MB paging. VME added an additional feature to the existing virtual 8086 mode (which was introduced with the 80386 processor), and included optimized handling and delivery of interrupts to and from virtual machines by reducing the number of traps required. VME should not be confused with the later Intel VT virtualization technology, which aims at full virtualization of the CPU rather than just the 8086 mode. Appendix H: The appendix was referenced by the official chapters in the documentation, provoking irritation among the public, who were not allowed to access the detailed descriptions. This started a movement of observers trying to reverse-engineer the information in various ways. Notably, Robert Collins (writing in Dr. Dobb's Journal) and Christian Ludloff (owner of the sandpile.org website) played a major role in this. From the Pentium Pro onward, the information in Appendix H was moved to the main documentation chapters, making the features publicly documented.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ICD-11** ICD-11: The ICD-11 is the eleventh revision of the International Classification of Diseases (ICD). It replaces the ICD-10 as the global standard for recording health information and causes of death. The ICD is developed and annually updated by the World Health Organization (WHO). Development of the ICD-11 started in 2007 and spanned over a decade of work, involving over 300 specialists from 55 countries divided into 30 work groups, with an additional 10,000 proposals from people all over the world. Following an alpha version in May 2011 and a beta draft in May 2012, a stable version of the ICD-11 was released on 18 June 2018, and officially endorsed by all WHO members during the 72nd World Health Assembly on 25 May 2019. The ICD-11 is a large taxonomy consisting of about 85,000 entities, also called classes or nodes. An entity can be anything that is relevant to health care. It usually represents a disease or a pathogen, but it can also be an isolated symptom or (developmental) anomaly of the body. There are also classes for reasons for contact with health services, social circumstances of the patient, and external causes of injury or death. The ICD-11 is part of the WHO-FIC, a family of medical classifications. The WHO-FIC contains the Foundation Component, which comprises all entities of all classifications endorsed by the WHO. The Foundation is the common core from which all classifications are derived. For example, the ICD-O is a derivative classification optimized for use in oncology. The primary derivative of the Foundation is called the ICD-11 MMS, and it is this system that is commonly referred to as simply "the ICD-11". MMS stands for Mortality and Morbidity Statistics. The ICD-11 is distributed under a Creative Commons BY-ND license. The ICD-11 officially came into effect on 1 January 2022. On 11 February, the WHO claimed that 35 countries were using the ICD-11.
In the United States, an expected implementation year of 2025 has been given, but if a clinical modification is determined to be needed (similar to the ICD-10-CM), ICD-11 implementation might not begin until 2027. The ICD-11 MMS can be viewed online on the WHO's website. Aside from this, the site offers two maintenance platforms: the ICD-11 Maintenance Platform, and the WHO-FIC Foundation Maintenance Platform. Users can submit evidence-based suggestions for the improvement of the WHO-FIC, i.e. the ICD-11, the ICF, and the ICHI. Structure: WHO-FIC The WHO Family of International Classifications (WHO-FIC), also called the WHO Family, is a suite of classifications used to describe various aspects of the health care system in a consistent manner, with a standardised terminology. The abbreviation is variously written with or without a hyphen ("WHO-FIC" or "WHOFIC"). The WHO-FIC consists of four components: the WHO-FIC Foundation, the Reference Classifications, the Derived Classifications, and the Related Classifications. The WHO-FIC Foundation, also called the Foundation Component, represents the entire WHO-FIC universe. It is a collection of over a hundred thousand entities, also called classes or nodes. Entities are anything relevant to health care. They are used to describe diseases, disorders, body parts, bodily functions, reasons for visit, medical procedures, microbes, causes of death, social circumstances of the patient, and much more. The Foundation Component is a multidimensional collection of entities. An entity can have multiple parents and child nodes. For example, pneumonia can be categorized as a lung infection, but also as a bacterial or viral infection (i.e. by site or by etiology). Thus, the node Pneumonia (entity id: 142052508) has two parents: Lung infections (entity id: 915779102) and Certain infectious or parasitic diseases (entity id: 1435254666).
The Pneumonia node in turn has various children, including Bacterial pneumonia (entity id: 1323682030) and Viral pneumonia (entity id: 1024154490). Structure: The Foundation Component is the common core on which all Reference and Derived Classifications are based. The WHO-FIC contains three Reference Classifications: the ICD-11 MMS (see below), the ICF, and the ICHI. Derived Classifications are based on the three Reference Classifications, and are usually tailored for a particular specialty. For example, the ICD-O is a Derived Classification used in oncology. Each node of the Foundation has a unique entity id, which remains the same in all Reference and Derived Classifications, guaranteeing consistency. Related Classifications are complementary, and cover specialty areas not covered elsewhere in the WHO-FIC. For example, the International Classification of Nursing Practice (ICNP) draws on terms from the Foundation Component, but also uses terms specific to nursing not found in the Foundation. A classification can be represented as a tabular list, which is a "flat" hierarchical tree of categories. In this tree, all entities can only have a single parent, and therefore must be mutually exclusive of each other. Such a classification is also called a linearization. Structure: ICD-11 MMS The ICD-11 MMS is the main Reference Classification of the WHO-FIC, and the primary linearization of the Foundation Component. The ICD-11 MMS is commonly referred to as simply "the ICD-11". The "MMS" was added to differentiate the ICD-11 entities in the Foundation from those in the Classification. The ICD-11 MMS does not contain all classes from the Foundation ICD-11, and also adds some classes from the ICF. MMS stands for Mortality and Morbidity Statistics. The abbreviation is variously written with or without a hyphen between 11 and MMS ("ICD-11 MMS" or "ICD-11-MMS"). Structure: The ICD-11 MMS consists of approximately 85,000 entities. Entities can be chapters, blocks or categories.
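The relationship between the multi-parent Foundation and a single-parent linearization can be illustrated with a small sketch. The entity ids below come from the Pneumonia example above; the rule for choosing the primary parent (here simply the first one listed) is an assumption for illustration only, not the WHO's actual linearization logic:

```python
# Sketch: multi-parent Foundation entities and a single-parent linearization.
# Entity ids are the ones given in the article; the primary-parent rule is
# a hypothetical simplification.
foundation_parents = {
    142052508: [915779102, 1435254666],  # Pneumonia -> Lung infections, Infectious diseases
}
names = {
    142052508: "Pneumonia",
    915779102: "Lung infections",
    1435254666: "Certain infectious or parasitic diseases",
}

def linearize(parents):
    """Keep the first listed parent as primary; extra parents yield (child, parent)
    pairs that would appear elsewhere in the tree, analogous to gray nodes."""
    primary, secondary = {}, []
    for entity, ps in parents.items():
        primary[entity] = ps[0]
        secondary.extend((entity, p) for p in ps[1:])
    return primary, secondary

primary, secondary = linearize(foundation_parents)
print(names[primary[142052508]])  # Lung infections
print([(names[c], names[p]) for c, p in secondary])
```

In the real MMS, the secondary placements are exactly the "gray nodes" described in the next paragraphs: the entity is categorized under one parent and merely displayed under the others.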
A chapter is a top level entity of the hierarchy; the MMS contains 28 of them (see Chapters section below). A block is used to group related categories or blocks together. A category can be anything that is relevant to health care. Every category has a unique, alphanumeric code called an ICD-11 code, or just ICD code. Chapters and blocks never have ICD-11 codes, and therefore cannot be diagnosed. An ICD-11 code is not the same as an entity id. Structure: The ICD-11 MMS takes the form of a "flat" hierarchical tree. As aforementioned, the entities in this linearization can only have a single parent, and therefore must be mutually exclusive of each other. To make up for this limitation, the hierarchy of the MMS contains gray nodes. These nodes appear as children in the hierarchy, but actually have a different parent node. They originally belong to a different block or chapter, but are also listed elsewhere because of overlap. For example, Pneumonia (CA40) has two parents in the Foundation: "Lung infections" (site) and "Certain infectious or parasitic diseases" (etiology). In the MMS, Pneumonia is categorized in the "Lung infections", with a gray node in "Certain infectious or parasitic diseases". The same goes for injuries, poisonings, neoplasms, and developmental anomalies, which can occur in almost any part of the body. They each have their own chapters, but their categories also have gray nodes in the chapters of the organs they affect. For instance, the blood cancers, including all forms of leukemia, are in the "Neoplasms" chapter, but they are also displayed as gray nodes in the chapter "Diseases of the blood or blood-forming organs". Structure: The ICD-11 MMS also contains residual categories, or residual nodes. These are the "Other specified" and "Unspecified" categories, miscellaneous classes which can be used to code conditions that do not fit with any of the more specific MMS entities. In the ICD-11 Browser, residual nodes are displayed in a maroon color. 
Residual categories are not in the Foundation, and therefore are the only classes with derivative entity IDs: their IDs are the same as their parent nodes, with "/mms/otherspecified" or "/mms/unspecified" appended at the end. Their ICD codes always end with Y for "Other specified" categories, or Z for "Unspecified" categories (e.g. 1C4Y and 1C4Z). Structure: Health informatics The ICD-11, both the ICD-11 Foundation and the MMS, can be accessed using a multilingual REST API. Documentation on the ICD API and some additional tools for integration into third-party applications can be found at the ICD API home page. The WHO has released a map that can be used to link and convert ICD-10 terms to those of the ICD-11. It can be downloaded from the ICD-11 MMS browser. In 2017, SNOMED International announced plans to release a SNOMED CT to ICD-11 MMS map. The ICD-11 Foundation, and consequently the MMS, are updated annually, similarly to the ICD-10. As of February 2023, six versions of the Foundation and MMS have been released. Chapters: Below is a list of all chapters of the ICD-11 MMS, the primary linearization of the Foundation Component. Unlike the ICD-10 codes, the ICD-11 MMS codes never contain the letters I or O, to prevent confusion with the numbers 1 and 0. Changes: Below is a summary of notable changes in the ICD-11 MMS compared to the ICD-10. Changes: General The ICD-11 MMS features a more flexible coding structure. In the ICD-10, every code starts with a letter, indicating the chapter. This is followed by a two digit number (e.g. P35), creating 99 slots per chapter, excluding subcategories and blocks. This proved enough for most chapters, but four are so voluminous that they span two letters: chapter 1 (A00–B99), chapter 2 (C00.0–D48.9), chapter 19 (S00–T98), and chapter 20 (V01–Y98). In the ICD-11 MMS, there is a single first character for every chapter.
The codes of the first nine chapters begin with the numbers 1 to 9, while the next nineteen chapters start with the letters A to X. The letters I and O are not used, to prevent confusion with the numbers 1 and 0. The chapter character is then followed by a letter, a number, and a fourth character that starts as a number (0–9, e.g. KA80) and may then continue as a letter (A–Z, e.g. KA8A). The WHO opted for a forced number as the third character to prevent the spelling of "undesirable words". In the ICD-10, each entity within a chapter either has a code (e.g. P35) or a code range (e.g. P35–P39). The latter is a block. In the ICD-11 MMS, blocks never have codes, and not every entity necessarily has a code, although each entity does have a unique id. In the ICD-10, the next level of the hierarchy is indicated in the code by a dot and a single number (e.g. P35.2). This is the lowest available level in the ICD-10 hierarchy, causing an artificial limitation of 10 subcategories per code (.0 to .9). In the ICD-11 MMS, this limitation no longer exists: after 0–9, the list may continue with A–Z (e.g. KA62.0 – KA62.A). Then, following the first character after the dot, a second character may be used in the next level of the hierarchy (e.g. KA40.00 – KA40.08). This level is currently the lowest appearing in the MMS. The large amount of unused coding space in the MMS allows for updates to be made without having to change the other categories, ensuring that codes remain stable. The ICD-11 features five new chapters. The third chapter of the ICD-10, "Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism", has been split in two: "Diseases of the blood or blood-forming organs" (chapter 3) and "Diseases of the immune system" (chapter 4).
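The coding scheme described above (chapter character, forced digit, optional subcategory levels) can be summarised as a single regular expression. This is a sketch distilled from the prose in this section, not an official grammar:

```python
import re

# Sketch of a validator for the ICD-11 MMS code shape described above.
# Assumed grammar: a chapter character (1-9 or A-X, never I or O), a letter,
# a forced digit, an alphanumeric fourth character, and an optional one- or
# two-character level after a dot. Letters I and O are excluded throughout.
MMS_CODE = re.compile(
    r"^[1-9A-HJ-NP-X]"            # chapter: 1-9, then A-X skipping I and O
    r"[A-HJ-NP-Z]"                # second character: a letter (no I or O)
    r"[0-9]"                      # third character: forced digit
    r"[0-9A-HJ-NP-Z]"             # fourth character: digit or letter
    r"(\.[0-9A-HJ-NP-Z]{1,2})?$"  # optional subcategory levels after the dot
)

for code in ["CA40", "KA8A", "1C4Y", "KA40.00", "P35", "IA00"]:
    print(code, bool(MMS_CODE.match(code)))
# CA40 True, KA8A True, 1C4Y True, KA40.00 True, P35 False, IA00 False
```

Note how the ICD-10-style code P35 fails (its second character is a digit, not a letter), while codes quoted elsewhere in this article, such as 6A25.1 or QD85, fit the assumed pattern.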
The other new chapters are "Sleep-wake disorders" (chapter 7), "Conditions related to sexual health" (chapter 17, see section), and "Supplementary Chapter Traditional Medicine Conditions - Module I" (chapter 26, see section). Changes: Mental disorders Overview The following mental disorders have been newly added to the ICD-11, but were already included in the American ICD-10-CM adaptation: Binge eating disorder (ICD-11: 6B82; ICD-10-CM: F50.81), Bipolar type II disorder (ICD-11: 6A61; ICD-10-CM: F31.81), Body dysmorphic disorder (ICD-11: 6B21; ICD-10-CM: F45.22), Excoriation disorder (ICD-11: 6B25.1; ICD-10-CM: F42.4), Frotteuristic disorder (ICD-11: 6D34; ICD-10-CM: F65.81), Hoarding disorder (ICD-11: 6B24; ICD-10-CM: F42.3), and Intermittent explosive disorder (ICD-11: 6C73; ICD-10-CM: F63.81). The following mental disorders have been newly added to the ICD-11, and are not in the ICD-10-CM: Avoidant/restrictive food intake disorder (6B83), Body integrity dysphoria (6C21), Catatonia (486722075), Complex post-traumatic stress disorder (6B41), Gaming disorder (6C51), Olfactory reference disorder (6B22), and Prolonged grief disorder (6B42). Other notable changes include: Distinct personality disorders have been collapsed into a single Personality disorder diagnosis, using a dimensional (as opposed to categorical) model; see Personality disorders section. Changes: All subtypes of Schizophrenia (e.g. paranoid, hebephrenic, catatonic) have been removed. Instead, a dimensional model is used with the category Symptomatic manifestations of primary psychotic disorders (6A25), which allows coding for Positive symptoms (6A25.0), Negative symptoms (6A25.1), Depressive symptoms (6A25.2), Manic symptoms (6A25.3), Psychomotor symptoms (6A25.4), and Cognitive symptoms (6A25.5). The group Persistent mood disorders (F34), which consisted of Cyclothymia (F34.0) and Dysthymia (F34.1), has been deleted.
The ICD-10 differentiates between Phobic anxiety disorders (F40), such as Agoraphobia (F40.0), and Other anxiety disorders (F41), such as Generalized anxiety disorder (F41.1). The ICD-11 merges both groups together as Anxiety or fear-related disorders (1336943699). All Pervasive developmental disorders (F84) are merged into one category, Autism spectrum disorder (6A02), except for Rett syndrome, which is moved to the developmental anomalies chapter (LD90.4). Hyperkinetic disorders (F90) has been renamed Attention deficit hyperactivity disorder (6A05), and a distinction in subtypes is made between predominantly inattentive (6A05.0), predominantly hyperactive-impulsive (6A05.1), and combined (6A05.2). Hyperkinetic conduct disorder (F90.1) has been removed. Changes: Acute stress reaction (F43.0) has been moved out of the mental disorders chapter, and placed in the chapter "Factors influencing health status or contact with health services" (QE84). Thus, in the ICD-11, Acute stress reaction is no longer considered a mental disorder. Aside from the updates made for the ICD-11, the WHO has developed an ICD-11 subset of the Clinical descriptions and diagnostic guidelines (CDDG), although it has not yet been published. A book of the same name was released in 1992 for the ICD-10, which was also known as the "Blue Book". It contains expanded definitions and diagnostic criteria for the mental disorders, whereas the ICD-10/-11 mental disorders chapters contain only short summaries. The ICD chapters are meant as a quick reference point, whereas the CDDG is meant for extensive diagnosing by health care professionals. To differentiate the old and the new version, the newest revision is called the ICD-11 CDDG. The WHO described the development of the ICD-11 CDDG as "the most global, multilingual, multidisciplinary and participative revision process ever implemented for a classification of mental disorders", involving nearly 15,000 clinicians from 155 countries.
As of February 2023, the WHO has not made the ICD-11 CDDG publicly available. Changes: Personality disorder The personality disorder (PD) section has been completely revamped. All distinct PDs have been merged into one: Personality disorder (6D10), which can be coded as Mild (6D10.0), Moderate (6D10.1), Severe (6D10.2), or severity unspecified (6D10.Z). There is also an additional category called Personality difficulty (QE50.7), which can be used to describe personality traits that are problematic, but do not rise to the level of a PD. A personality disorder or difficulty can be specified by one or more Prominent personality traits or patterns (6D11). The ICD-11 uses five trait domains: (1) Negative affectivity (6D11.0); (2) Detachment (6D11.1), (3) Dissociality (6D11.2), (4) Disinhibition (6D11.3), and (5) Anankastia (6D11.4). Listed directly underneath is Borderline pattern (6D11.5), a category similar to Borderline personality disorder. This is not a trait in itself, but a particular combination of the five traits at a certain severity. Changes: Described as a clinical equivalent to the Big Five model, the five-trait system addresses several problems of the old category-based system. Of the ten PDs in the ICD-10, two were used with a disproportionately high frequency: Emotionally unstable personality disorder, borderline type (F60.3) and Dissocial (antisocial) personality disorder (F60.2). Many categories overlapped, and individuals with severe disorders often met the requirements for multiple PDs, which Reed et al. (2019) described as "artificial comorbidity". PD was therefore reconceptualized in terms of a general dimension of severity, focusing on five negative personality traits which a person can have to various degrees. There was considerable debate regarding this new dimensional model, with many believing that categorical diagnosing should not be abandoned. In particular, there was disagreement about the status of Borderline personality disorder.
Reed (2018) wrote: "Some research suggests that borderline PD is not an independently valid category, but rather a heterogeneous marker for PD severity. Other researchers view borderline PD as a valid and distinct clinical entity, and claim that 50 years of research support the validity of the category. Many – though by no means all – clinicians appear to be aligned with the latter position. In the absence of more definitive data, there seemed to be little hope of accommodating these opposing views. However, the WHO took seriously the concerns being expressed that access to services for patients with borderline PD, which has increasingly been achieved in some countries based on arguments of treatment efficacy, might be seriously undermined." Thus, the WHO believed the inclusion of a Borderline pattern category to be a "pragmatic compromise". The Alternative DSM-5 Model for Personality Disorders (AMPD) included near the end of the DSM-5 is similar to the PD system of the ICD-11, although much larger and more comprehensive. It was considered for inclusion in the ICD-11, but the WHO decided against it because it was considered "too complicated for implementation in most clinical settings around the world", since an explicit aim of the WHO was to develop a simple and efficient method that could also be used in low-resource settings. Changes: Gaming disorder Gaming disorder (6C51) has been newly added to the ICD-11, and placed in the group "Disorders due to addictive behaviours", alongside Gambling disorder (6C50). The latter was called Pathological gambling (F63.0) in the ICD-10. Aside from Gaming disorder, the ICD-11 also features Hazardous gaming (QE22), an ancillary category that can be used to identify problematic gaming which does not rise to the level of a disorder. Changes: Although a majority of scholars supported the inclusion of Gaming disorder (GD), a significant number did not. Aarseth et al.
(2017) stated that the evidence base which this decision relied upon is of low quality, that the diagnostic criteria of gaming disorder are rooted in substance use and gambling disorder even though they are not the same, that no consensus exists on the definition and assessment of GD, and that a pre-defined category would lock research in a confirmatory approach. Rooij et al. (2017) questioned if what was called "gaming disorder" is in fact a coping strategy for underlying problems, such as depression, social anxiety, or ADHD. They also pointed to a moral panic, fueled by sensational media stories, and stated that the category could be stigmatizing to people who are simply engaging in a very immersive hobby. Bean et al. (2017) wrote that the GD category caters to false stereotypes of gamers as physically unfit and socially awkward, and that most gamers have no problems balancing their expected social roles outside games with those inside. In support of the GD category, Lee et al. (2017) agreed that there were major limitations of the existing research, but that this actually necessitates a standardized set of criteria, which would benefit studies more than self-developed instruments for evaluating problematic gaming. Saunders et al. (2017) argued that gaming addiction should be in the ICD-11 just as much as gambling addiction and substance addiction, citing functional neuroimaging studies which show similar brain regions being activated, and psychological studies which show similar antecedents (risk factors). Király and Demetrovics (2017) did not believe that a GD category would lock research into a confirmatory approach, noting that the ICD is regularly revised and characterized by permanent change. They wrote that moral panic around gamers does indeed exist, but that this is not caused by a formal diagnosis. Rumpf et al. (2018) noted that stigmatization is a risk not specific to GD alone.
They agreed that GD could be a coping strategy for an underlying disorder, but that in this debate, "comorbidity is more often the rule than the exception". For example, a person can have an alcohol dependence due to PTSD. In clinical practice, both disorders need to be diagnosed and treated. Rumpf et al. also warned that the lack of a GD category might jeopardize insurance reimbursement of treatments. The DSM-5 (2013) features a similar category called Internet Gaming Disorder (IGD). However, due to the controversy over its definition and inclusion, it is not included in its main body of mental diagnoses, but in the additional chapter "Conditions for Further Study". Disorders in this chapter are meant to encourage research and are not intended to be officially diagnosed. Changes: Burn-out In May 2019, a number of media outlets incorrectly reported that burn-out was newly added to the ICD-11. In reality, burn-out is also in the ICD-10 (Z73.0), albeit with a short, one-sentence definition only. The ICD-11 features a longer summary, and specifically notes that the category should only be used in an occupational context. Furthermore, it should only be applied when mood disorders (6A60–6A8Z), Disorders specifically associated with stress (6B40–6B4Z), and Anxiety or fear-related disorders (6B00–6B0Z) have been ruled out. Changes: As with the ICD-10, burn-out is not in the mental disorders chapter, but in the chapter "Factors influencing health status or contact with health services", where it is coded QD85. In response to media attention over its inclusion, the WHO emphasized that the ICD-11 does not define burn-out as a mental disorder or a disease, but as an occupational phenomenon that undermines a person's well-being in the workplace. Changes: Sexual health Conditions related to sexual health is a new chapter in the ICD-11. The WHO decided to put the sexual disorders in a separate chapter due to "the outdated mind/body split".
A number of ICD-10 categories, including sex disorders, were based on a Cartesian separation of "organic" (physical) and "non-organic" (mental) conditions. As such, the sexual dysfunctions that were considered non-organic were included in the mental disorders chapter, while those that were considered organic were for the most part listed in the chapter on diseases of the genitourinary system. In the ICD-11, the brain and the body are seen as an integrated whole, with sexual dysfunctions considered to involve an interaction between physical and psychological factors. Thus, the organic/non-organic distinction was abolished. Changes: Sexual dysfunctions Regarding general sexual dysfunction, the ICD-10 has three main categories: Lack or loss of sexual desire (F52.0), Sexual aversion and lack of sexual enjoyment (F52.1), and Failure of genital response (F52.2). The ICD-11 replaces these with two main categories: Hypoactive sexual desire dysfunction (HA00) and Sexual arousal dysfunction (HA01). The latter has two subcategories: Female sexual arousal dysfunction (HA01.0) and Male erectile dysfunction (HA01.1). The difference between Hypoactive sexual desire dysfunction and Sexual arousal dysfunction is that in the former, there is a reduced or absent desire for sexual activity. In the latter, there is insufficient physical and emotional response to sexual activity, even though there still is a desire to engage in satisfying sex. The WHO acknowledged that there is an overlap between desire and arousal, but they are not the same. Management should focus on their distinct features. The ICD-10 contains the categories Vaginismus (N94.2), Nonorganic vaginismus (F52.5), Dyspareunia (N94.1), and Nonorganic dyspareunia (F52.6). As the WHO aimed to steer away from the aforementioned "outdated mind/body split", the organic and nonorganic disorders were merged. Vaginismus has been reclassified as sexual pain-penetration disorder (HA20). Dyspareunia (GA12) has been retained.
A related condition is Vulvodynia, which is in the ICD-9 (625.7), but not in the ICD-10. It has been re-added to the ICD-11 (GA34.02). Sexual dysfunctions and Sexual pain-penetration disorder can be coded alongside a temporal qualifier, "lifelong" or "acquired", and a situational qualifier, "general" or "situational". Furthermore, the ICD-11 offers five aetiological qualifiers, or "Associated with..." categories, to further specify the diagnosis. For example, a woman who experiences sexual problems due to adverse effects of an SSRI antidepressant may be diagnosed with "Female sexual arousal dysfunction, acquired, generalised" (HA01.02) combined with "Associated with use of psychoactive substance or medication" (HA40.2). Changes: Compulsive sexual behaviour disorder Excessive sexual drive (F52.7) from the ICD-10 has been reclassified as Compulsive sexual behaviour disorder (CSBD, 6C72) and listed under Impulse control disorders. The WHO was unwilling to overpathologize sexual behaviour, stating that having a high sexual drive is not necessarily a disorder, so long as these people do not exhibit impaired control over their behavior, significant distress, or impairment in functioning. Kraus et al. (2018) noted that several people self-identify as "sex addicts", but on closer examination do not actually exhibit the clinical characteristics of a sexual disorder, although they may have other mental health problems, such as anxiety or depression. Experiencing shame and guilt about sex is not a reliable indicator of a sex disorder, Kraus et al. stated. There was debate on whether CSBD should be considered a (behavioral) addiction. It has been claimed that neuroimaging shows overlap between compulsive sexual behavior and substance-use disorder through common neurotransmitter systems. Nonetheless, it was ultimately decided to place the disorder in the Impulse control disorders group. Kraus et al.
wrote that, for the ICD-11, "a relatively conservative position has been recommended, recognizing that we do not yet have definitive information on whether the processes involved in the development and maintenance of [CSBD] are equivalent to those observed in substance use disorders, gambling and gaming". Changes: Paraphilic disorders Paraphilic disorders, called Disorders of sexual preference in the ICD-10, have remained in the mental disorders chapter, although they have gray nodes in the sexual health chapter. The ICD-10 categories Fetishism (F65.0) and Fetishistic transvestism (F65.1) were removed because such behaviours, if they do not cause distress or harm, are not considered mental disorders. Frotteuristic disorder (6D34) has been newly added. Changes: Gender incongruence Transgenderism and gender dysphoria are called Gender incongruence in the ICD-11. In the ICD-10, the group Gender identity disorders (F64) consisted of three main categories: Transsexualism (F64.0), Dual-role transvestism (F64.1), and Gender identity disorder of childhood (F64.2). In the ICD-11, Dual-role transvestism was deleted due to a lack of public health or clinical relevance. Transsexualism was renamed Gender incongruence of adolescence or adulthood (HA60), and Gender identity disorder of childhood was renamed Gender incongruence of childhood (HA61). Changes: In the ICD-10, the Gender identity disorders were placed in the mental disorders chapter, following what was customary at the time. Throughout the 20th century, both the ICD and the DSM approached transgender health from a psychopathological position, as transgender identity presents a discrepancy between someone's assigned sex and their gender identity. Since this may cause mental distress, it was consequently considered a mental disorder, with distress or discomfort being a core diagnostic feature.
In the 2000s and 2010s, this notion became increasingly challenged, as the idea of viewing transgender people as having a mental disorder was believed by some to be stigmatizing. It has been suggested that distress and dysfunction among transgender people should be more appropriately viewed as the result of social rejection, discrimination, and violence toward individuals with gender variant appearance and behavior. Studies have shown transgender people to be at higher risk of developing mental health problems than other populations, but that health services aimed at transgender people are often insufficient or nonexistent. Since an official ICD code is usually needed to gain access to and reimbursement for therapy, the WHO found it ill-advised to remove transgender health from the ICD-11 altogether. It was therefore decided to transpose the concept from the mental disorders chapter to the new sexual health chapter. Changes: Antimicrobial resistance and GLASS The group related to coding antimicrobial resistance has been significantly expanded: compare U82-U85 in the ICD-10 to 1882742628 in the ICD-11. Also, the ICD-11 codes are more closely in line with the WHO's Global Antimicrobial Resistance Surveillance System (GLASS). Launched in October 2015, this project aims to track the worldwide resistance of pathogenic microbes (viruses, bacteria, fungi, and protozoa) to medication. Changes: Traditional medicine "Supplementary Chapter Traditional Medicine Conditions - Module I" is an additional chapter in the ICD-11. It consists of concepts that are commonly referred to as Traditional Chinese Medicine (TCM), although the WHO prefers to use the more general and neutral-sounding term Traditional Medicine (TM). Many of the traditional therapies and medicines that originally came from China also have long histories of usage and development in Japan (Kampo), Korea (TKM), and Vietnam (TVM).
Medical procedures that can be labeled as "traditional" continue to be used all over the world, and are an integral part of health services in some countries. A 2008 survey by the WHO found that "[i]n some Asian and African countries, 80% of the population depend on traditional medicine for primary health care". Also, "[i]n many developed countries, 70% to 80% of the population has used some form of alternative or complementary medicine (e.g. acupuncture)".From approximately 2003 to 2007, a group of experts from various countries developed the WHO International Standard Terminologies on Traditional Medicine in the Western Pacific Region, or simply IST. In the following years, based on this nomenclature, the group created the International Classification of Traditional Medicine, or ICTM. As of February 2023, Module I, also called TM1, is the only module of the ICTM to have been released. Morris, Gomes, & Allen (2012) have stated that Module II will cover Ayurveda, that Module III will cover homeopathy, and that Module IV will cover "other TM systems with independent diagnostic conditions in a similar fashion". However, these modules have yet to be made public, and Singh & Rastogi (2018) noted that this "keeps the speculations open for what actually is encompassing under the current domain [of the ICTM]".The decision to include T(C)M in the ICD-11 has been criticized, because it is often alleged to be pseudoscience. Editorials by Nature and Scientific American admitted that some TM techniques and herbs have shown effectiveness or potential, but that others are pointless, or even outright harmful. They wrote that the inclusion of the TM-chapter is at odds with the scientific, evidence-based methods usually employed by the WHO. Both editorials accused the government of China of pushing the WHO to incorporate TCM, a global, billion-dollar market in which China plays a leading role. 
The WHO has stated that the categories of TM1 "do not refer to – or endorse – any form of treatment", and that their inclusion is primarily intended for statistical purposes. The TM1 codes are recommended to be used in conjunction with the Western Medicine concepts of ICD-11 chapters 1-25. Changes: Other changes Other notable changes in the ICD-11 include: Stroke is now classified as a neurological disorder instead of a disease of the circulatory system. Allergies are now coded under diseases of the immune system. In the ICD-10, a distinction was made between Sleep disorders (G47), included in the nervous system diseases chapter, and Nonorganic sleep disorders (F51), included in the mental disorders chapter. In the ICD-11, they are merged and placed into a new chapter called Sleep-wake disorders, since the separation between organic (physical) and non-organic (mental) disorders is considered obsolete. "Supplementary section for functioning assessment" is an additional chapter that provides codes for use in the WHO Disability Assessment Schedule 2.0 (WHODAS 2.0), the Model Disability Survey (MDS), and the ICF.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ABLIM1** ABLIM1: Actin binding LIM protein 1, also known as ABLIM1, is a protein which in humans is encoded by the ABLIM1 gene. Function: This gene encodes a cytoskeletal LIM protein that binds to actin filaments via a domain that is homologous to erythrocyte dematin. LIM domains, found in over 60 proteins, play key roles in the regulation of developmental pathways. LIM domains also function as protein-binding interfaces, mediating specific protein-protein interactions. The protein encoded by this gene could mediate such interactions between actin filaments and cytoplasmic targets. Alternatively spliced transcript variants encoding different isoforms have been identified. Interactions: ABLIM1 has been shown to interact with LDOC1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**ACTA2** ACTA2: ACTA2 (actin alpha 2) is an actin protein with several aliases including alpha-actin, alpha-actin-2, aortic smooth muscle or alpha smooth muscle actin (α-SMA, SMactin, alpha-SM-actin, ASMA). Actins are a family of globular multi-functional proteins that form microfilaments. ACTA2 is one of 6 different actin isoforms and is involved in the contractile apparatus of smooth muscle. ACTA2, like all actins, is highly conserved and found in nearly all mammals. ACTA2: In humans, ACTA2 is encoded by the ACTA2 gene located on 10q22-q24. Mutations in this gene cause a variety of vascular diseases, such as thoracic aortic disease, coronary artery disease, stroke, Moyamoya disease, and multisystemic smooth muscle dysfunction syndrome. ACTA2 (commonly referred to as alpha-smooth muscle actin or α-SMA) is often used as a marker of myofibroblast formation. Studies have shown that ACTA2 is associated with the TGF-β pathway, which enhances the contractile properties of hepatic stellate cells, leading to liver fibrosis and cirrhosis.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Petya and NotPetya** Petya and NotPetya: Petya is a family of encrypting malware that was first discovered in 2016. The malware targets Microsoft Windows–based systems, infecting the master boot record to execute a payload that encrypts a hard drive's file system table and prevents Windows from booting. It subsequently demands that the user make a payment in Bitcoin in order to regain access to the system. Petya and NotPetya: Variants of Petya were first seen in March 2016, which propagated via infected e-mail attachments. In June 2017, a new variant of Petya was used for a global cyberattack, primarily targeting Ukraine. The new variant propagates via the EternalBlue exploit, which is generally believed to have been developed by the U.S. National Security Agency (NSA), and was used earlier in the year by the WannaCry ransomware. Kaspersky Lab referred to this new version as NotPetya to distinguish it from the 2016 variants, due to these differences in operation. It looked like ransomware but, lacking a functioning recovery feature, it was effectively a wiper. The NotPetya attacks have been blamed on the Russian government, specifically the Sandworm hacking group within the GRU Russian military intelligence organization, by security researchers, Google, and several governments. History: Petya was discovered in March 2016; Check Point noted that while it had achieved fewer infections than other ransomware active in early 2016, such as CryptoWall, it contained notable differences in operation that caused it to be "immediately flagged as the next step in ransomware evolution". Another variant of Petya discovered in May 2016 contained a secondary payload used if the malware could not achieve administrator-level access. The name "Petya" is a reference to the 1995 James Bond film GoldenEye, wherein Petya is one of the two Soviet weapon satellites which carry a "Goldeneye"—an atomic bomb detonated in low Earth orbit to produce an electromagnetic pulse. 
A Twitter account that Heise suggested may have belonged to the author of the malware, named "Janus Cybercrime Solutions" after Alec Trevelyan's crime group in GoldenEye, had an avatar with an image of GoldenEye character Boris Grishenko, a Russian hacker and antagonist in the film played by Scottish actor Alan Cumming. On 30 August 2018, a regional court in Nikopol in the Dnipropetrovsk Oblast of Ukraine sentenced an unnamed Ukrainian citizen to one year in prison after he pleaded guilty to having spread a version of Petya online. 2017 cyberattack: On 27 June 2017, a major global cyberattack began (Ukrainian companies were among the first to state they were being attacked), utilizing a new variant of Petya. On that day, Kaspersky Lab reported infections in France, Germany, Italy, Poland, the United Kingdom, and the United States, but said that the majority of infections targeted Russia and Ukraine, where more than 80 companies were initially attacked, including the National Bank of Ukraine. ESET estimated on 28 June 2017 that 80% of all infections were in Ukraine, with Germany second hardest hit with about 9%. Russian president Vladimir Putin's press secretary, Dmitry Peskov, stated that the attack had caused no serious damage in Russia. Experts believed this was a politically motivated attack against Ukraine, since it occurred on the eve of the Ukrainian holiday Constitution Day. Kaspersky dubbed this variant "NotPetya", as it has major differences in its operations in comparison to earlier variants. McAfee engineer Christiaan Beek stated that this variant was designed to spread quickly, and that it had been targeting "complete energy companies, the power grid, bus stations, gas stations, the airport, and banks". It was believed that the software update mechanism of M.E.Doc—a Ukrainian tax preparation program that, according to F-Secure analyst Mikko Hyppönen, "appears to be de facto" among companies doing business in the country—had been compromised to spread the malware. 
Analysis by ESET found that a backdoor had been present in the update system for at least six weeks prior to the attack, describing it as a "thoroughly well-planned and well-executed operation". The developers of M.E.Doc denied that they were entirely responsible for the cyberattack, stating that they too were victims. On 4 July 2017, Ukraine's cybercrime unit seized the company's servers after detecting "new activity" that it believed would result in "uncontrolled proliferation" of malware. Ukraine police advised M.E.Doc users to stop using the software, as it presumed that the backdoor was still present. Analysis of the seized servers showed that software updates had not been applied since 2013, there was evidence of Russian presence, and an employee's account on the servers had been compromised; the head of the unit warned that M.E.Doc could be found criminally responsible for enabling the attack because of its negligence in maintaining the security of its servers. Operation: Petya's payload infects the computer's master boot record (MBR), overwrites the Windows bootloader, and triggers a restart. Upon startup, the payload encrypts the Master File Table of the NTFS file system, and then displays the ransom message demanding a payment made in Bitcoin. Meanwhile, the computer's screen displays text purportedly output by chkdsk, Windows' file system scanner, suggesting that the hard drive's sectors are being repaired. The original payload required the user to grant it administrative privileges; one variant of Petya was bundled with a second payload, Mischa, which activated if Petya failed to install. Mischa is a more conventional ransomware payload that encrypts user documents, as well as executable files, and does not require administrative privileges to execute. The earlier versions of Petya disguised their payload as a PDF file, attached to an e-mail. 
United States Computer Emergency Response Team (US-CERT) and National Cybersecurity and Communications Integration Center (NCCIC) released a Malware Initial Findings Report (MIFR) about Petya on 30 June 2017. The "NotPetya" variant used in the 2017 attack uses EternalBlue, an exploit that takes advantage of a vulnerability in Windows' Server Message Block (SMB) protocol. EternalBlue is generally believed to have been developed by the U.S. National Security Agency (NSA); it was leaked in April 2017 and was also used by WannaCry. The malware harvests passwords (using a tweaked build of the open-source tool Mimikatz) and uses other techniques to spread to other computers on the same network, using those passwords in conjunction with PSExec to run code on other local computers. Additionally, although it still purports to be ransomware, the encryption routine was modified so that the malware could not technically revert its changes. This characteristic, along with other unusual signs in comparison to WannaCry (including the relatively low unlock fee of US$300, and using a single, fixed Bitcoin wallet to collect ransom payments rather than generating a unique ID for each specific infection for tracking purposes), prompted researchers to speculate that this attack was not intended to be a profit-generating venture, but to damage devices quickly, and ride on the media attention WannaCry received by claiming to be ransomware. Mitigation: It was found that it may be possible to stop the encryption process if an infected computer is immediately shut down when the fictitious chkdsk screen appears, and a security analyst proposed that creating read-only files named perfc and/or perfc.dat in the Windows installation directory could prevent the payload of the current strain from executing. The email address listed on the ransom screen was suspended by its provider, Posteo, for being a violation of its terms of use. 
As a result, infected users could not actually send the required payment confirmation to the perpetrator. Additionally, if the computer's filesystem was FAT based, the MFT encryption sequence was skipped, and only the ransomware's message was displayed, allowing data to be recovered trivially.Microsoft had already released patches for supported versions of Windows in March 2017 to address the EternalBlue vulnerability. This was followed by patches for unsupported versions of Windows (such as Windows XP) in May 2017, in the direct wake of WannaCry. Wired believed that "based on the extent of damage Petya has caused so far, though, it appears that many companies have put off patching, despite the clear and potentially devastating threat of a similar ransomware spread." Some enterprises may consider it too disruptive to install updates on certain systems, either due to possible downtime or compatibility concerns, which can be problematic in some environments. Impact: In a report published by Wired, a White House assessment pegged the total damages brought about by NotPetya to more than $10 billion. This assessment was repeated by former Homeland Security advisor Tom Bossert, who at the time of the attack was the most senior cybersecurity focused official in the US government.During the attack initiated on 27 June 2017, the radiation monitoring system at Ukraine's Chernobyl Nuclear Power Plant went offline. Several Ukrainian ministries, banks and metro systems were also affected. It is said to have been the most destructive cyberattack ever.Among those affected elsewhere included British advertising company WPP, Maersk Line, American pharmaceutical company Merck & Co. 
(internationally doing business as MSD), Russian oil company Rosneft (its oil production was unaffected), multinational law firm DLA Piper, French construction company Saint-Gobain and its retail and subsidiary outlets in Estonia, British consumer goods company Reckitt Benckiser, German personal care company Beiersdorf, German logistics company DHL, United States food company Mondelez International, and American hospital operator Heritage Valley Health System. The Cadbury's Chocolate Factory in Hobart, Tasmania, was the first company in Australia to be affected by Petya. On 28 June 2017, JNPT, India's largest container port, had reportedly been affected, with all operations coming to a standstill. Princeton Community Hospital in rural West Virginia scrapped and replaced its entire computer network on its path to recovery. The business interruption to Maersk, the world's largest container ship and supply vessel operator, was estimated at between $200m and $300m in lost revenues. The business impact on FedEx was estimated to be $400m in 2018, according to the company's 2019 annual report. Jens Stoltenberg, NATO Secretary-General, pressed the alliance to strengthen its cyber defenses, saying that a cyberattack could trigger the Article 5 principle of collective defense. Mondelez International's insurance carrier, Zurich American Insurance Company, refused to pay out a claim for cleaning up damage from a NotPetya infection, on the grounds that NotPetya is an "act of war" that is not covered by the policy. Mondelez sued Zurich American for $100 million in 2018; the suit was settled in 2022 with the terms of the settlement remaining confidential. Reaction: Europol said it was aware of and urgently responding to reports of a cyber attack in member states of the European Union. The United States Department of Homeland Security was involved and coordinating with its international and local partners. 
In a letter to the NSA, Democratic Congressman Ted Lieu asked the agency to collaborate more actively with technology companies to notify them of software vulnerabilities and help them prevent future attacks based on malware created by the NSA. Reaction: On 15 February 2018, the Trump administration blamed Russia for the attack and warned that there would be "international consequences". The United Kingdom and the Australian government also issued similar statements.In October 2020 the DOJ named further GRU officers in an indictment. At the same time, the UK government blamed GRU's Sandworm also for attacks on the 2020 Summer Games. Other notable low-level malware: CIH (1998) Stuxnet (2010) WannaCry (2017)
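The perfc kill switch described in the Mitigation section can be sketched as a short script. This is only an illustration of the analyst-proposed workaround for that one strain, not a general defense; the directory is a parameter here (on a real machine it would be C:\Windows) so the idea can be exercised anywhere.

```python
from pathlib import Path
import stat

def create_killswitch(windows_dir: str) -> list[str]:
    """Create read-only 'perfc' and 'perfc.dat' files, as proposed
    against the 2017 NotPetya strain. Returns the paths created."""
    created = []
    for name in ("perfc", "perfc.dat"):
        target = Path(windows_dir) / name
        target.touch(exist_ok=True)
        target.chmod(stat.S_IREAD)  # mark the file read-only
        created.append(str(target))
    return created
```

The payload reportedly checked for these filenames before running, so their mere presence (not their content) was the switch.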
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dell MediaDirect** Dell MediaDirect: Dell MediaDirect is a software application that is published by Dell, Inc. and is pre-installed on the computers they sell. It attempts to provide DVD and CD playback and recent editions include features such as an address book and calendar. It is a custom version of CyberLink PowerCinema developed and licensed to Dell by CyberLink. MediaDirect works in conjunction with the operating system and the Dell QuickSet application. Design controversy: Earlier versions of MediaDirect attracted criticism since they adopt a distinctive combination of BIOS and hard drive layout to bypass the installed OS and boot directly to the media player application using a single button press. The chosen approach causes disk geometry to be deliberately misreported, can prevent the successful backup of hard disks and may trigger catastrophic data loss when MediaDirect is launched. Design controversy: Unless the drive and all pre-existing operating systems are left as originally installed, MediaDirect can trigger a forced repartitioning of the drive whilst attempting to load. This intervention typically causes the loss of all operating systems and data on the device. Removing or disabling the application is challenging because Dell employs Host protected area technology to cloak the location of the partition containing the software, contributing to the misreported disk geometry. Versions: Version 4 Version 4 deletes the dual-boot "fast start" capability and associated disk partition. It is now installed as a standard application. MediaDirect 4 includes optimisations for multi-media playback and is primarily used to support Blu-ray drives. Version 3 Version 3.5 is compatible with Microsoft Windows XP and Vista (it also works on Windows 7 and Windows 8.1). Each version has separate editions that can only be installed on certain computer models. This is achieved by using folder and file names in the installation software that matches the BIOS SystemID. 
XPS M1330 XPS M1530 MXG071 - XPSM1730Version 3.3 is compatible with Microsoft Windows XP and Vista (it also works on Windows 7 and Windows 8.1). Each version has separate editions that can only be installed on certain computer models. This is achieved by using folder and file names in the installation software that matches the BIOS SystemID. MXC061 - Inspiron 640M MM061 - Inspiron 6400/E1505 MP061 - Inspiron 9400/E1705 MXC062 - XPS M1210 MXG061 - XPS M1710 MXP061 - XPS M2010Version 3 has a dual-boot option where the software can utilize a minimal load of the operating system which speeds boot time and simplifies operation. Version 1.1 Released Sep. 1, 2005, is compatible with Microsoft Windows XP and the following Dell systems: Inspiron 6000 XPS/Inspiron XPS Gen 2 Inspiron 9300 XPS/Inspiron M170 Inspiron 1720 More information: Understanding the Dell Media Direct Partition Dell Media Direct Destroys Partitions?, Dell Direct Media Nuked my System Dell MediaDirect 4.0 Frequently Asked Questions (FAQ) How to Reinstall MediaDirect 3.0 or 3.3 How to Install Dell MediaDirect 2.0 Download / How to Install MediaDirect 1.1 CyberLink PowerCinema Linux
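The misreported disk geometry described above stems from the ATA Host Protected Area (HPA) feature MediaDirect uses: the drive is instructed to report a maximum sector address below its native maximum, hiding everything above it. On Linux, `hdparm -N /dev/sda` prints both values; the arithmetic on those two sector counts is a minimal sketch (the input values here are assumed to come from such a tool):

```python
def hpa_hidden_bytes(reported_max: int, native_max: int,
                     sector_size: int = 512) -> int:
    """Bytes hidden by a Host Protected Area, given the reported and
    native maximum sector counts; 0 means no HPA is configured."""
    if native_max <= reported_max:
        return 0
    return (native_max - reported_max) * sector_size
```

A MediaDirect-style cloaked partition shows up as a nonzero gap between the reported and native counts, which is exactly why backup tools that trust the reported geometry miss it.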
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mercury Meltdown** Mercury Meltdown: Mercury Meltdown is a puzzle-platform game for the PlayStation Portable (PSP). It is the sequel to Archer Maclean's Mercury. Like the first game, the goal is to tilt the stage in order to navigate one or more blobs of mercury to the destination. In contrast to the original, Ignition Banbury had more time and experience developing the game and listened to player feedback, allowing the game to be easier and provide players with more freedom to choose levels. The game has new hazards, enemies, and minigames. Mercury Meltdown: The game received a port to the PlayStation 2 (PS2) titled Mercury Meltdown Remix, released a month after the original, with improved graphics, new levels, and controls optimized for the PS2's controller. A second port, for the Wii, titled Mercury Meltdown Revolution was released in 2007, again changing the levels, improving the graphics further over the Remix version, and making use of the Wii's motion controls. Mercury Meltdown: All versions of Mercury Meltdown were well received by critics. The game was praised for being an overall improvement over the original in terms of difficulty and art style. Mercury Meltdown Remix received mixed reviews in regards to the PS2 controls, with some criticism of the removal of multiplayer. Mercury Meltdown Revolution was also criticized for its lack of multiplayer but was praised for its motion controls. Gameplay: Similar to its predecessor, Mercury Meltdown is a puzzle-platform game. The goal is to navigate one or more blobs of mercury to one or more finish posts in the level by tilting the stage using the analog stick of the PSP. Players automatically fail the stage if all of the mercury is lost from the stage, or if the requirements to complete the level can no longer be met. The mercury can be split into multiple blobs by using sharp objects or obstacles. The color of the mercury can be changed using a Paintshop or merging with other blobs of different colors. 
Color mixing is based on the RGB color model. The game is made up of worlds referred to as "Laboratories" that are split into 16 stages represented as test-tubes. Stages have achievements for completing them with 100% of the mercury remaining, obtaining the top score, and obtaining all bonus stars. An additional achievement is granted for obtaining all three in a single stage; they can also be obtained individually via multiple playthroughs. Laboratories are unlocked after accumulating enough mercury from each stage completion. If players do exceptionally well in one particular laboratory, a secret 17th stage in that laboratory is unlocked. Mercury Meltdown introduces the Playground: a circular arena, with most of the items found in stages, to play and test with. Another new mechanic is the ability to change the mercury into three new states: Cold, Hot, and Solid. The Hot state makes the mercury an easily splittable liquid that travels quickly. The Cold state makes the mercury a semi-solid blob that moves slowly and is harder to split. The Solid state turns the mercury into a solid ball that can't be split, allowing it to traverse over rails. Multiplayer is accessible between two PSPs via Ad-Hoc wireless mode or online network infrastructure mode. In multiplayer, players can participate in battle mode, in which they race each other through previously unlocked single-player levels. Bonus stars are replaced with battle pick-ups that can assist players or hinder their opponent. In addition to the main game, Mercury Meltdown introduces five unlockable party games: Rodeo, Race, Metrix, Shove, and Paint. In Rodeo, players tilt the stage to prevent the mercury from falling off. In Race, players race mercury around a track. Metrix is a match-3 puzzle minigame requiring one to make a group of three or more colored blobs that fits inside a pre-defined grid. In Shove, players aim the mercury for the center spot of a target, avoiding hazards; similar to curling. 
In Paint, players move the mercury to paint the tray in their respective colors before the opponent does. These can be unlocked by collecting the bonus stars in the main game. All the party games can be played in single-player and multiplayer. Development and release: Mercury Meltdown was developed by Ignition Banbury (formerly Awesome Studios). Early in production, Archer Maclean, who originally coined the concept of the first game, resigned from Ignition Banbury. His resignation came early enough in development not to hinder the game's production. The first game, Mercury, was released on a tight production schedule to match the launch of the PlayStation Portable, resulting in a lack of refinements. Mercury Meltdown was closer to what the development team originally wanted the first game to be, as they had become more experienced with PSP development. Ignition Banbury chose to use a cel-shaded style with the purpose of differentiating it from its predecessor and to appeal to a wider audience. One of the criticisms of the original game that the developers made note of was the difficulty. Ignition Banbury focused on making it easier and less linear. It was intended to be released in Europe by September 2006 with plans for downloadable content (DLC); however, the game was delayed and no DLC was released. Mercury Meltdown was released in North America on October 3, 2006 and in Europe on October 6, 2006. A limited edition bundle was released with its predecessor, Archer Maclean's Mercury, on October 19, 2010. An iOS version was announced at E3 2011; however, no new information has since been released. A month prior to the release of the PSP version, Ignition Banbury announced a revised version for the PlayStation 2 titled Mercury Meltdown Remix. This version makes use of the DualShock controller's second analog stick and rumble feature, as opposed to the PSP's single analog stick. It added new levels, bringing the total to over 200. 
Mercury Meltdown Remix was released in Europe on November 24, 2006, and in North America on December 4, 2006. The PS2 version was revised once more and ported to the Wii under the title Mercury Meltdown Revolution. Ignition Banbury began development when Nintendo announced the Wii under the code name Revolution, and were inspired by the Wii's motion controls. Ignition Banbury then pitched the concept to Nintendo at E3 2006, which resulted in it being approved by Nintendo. Ignition Banbury produced the game using an unfinished GameCube engine and the tilt sensor mechanics that were intended to be used for the original Mercury game. Ignition Banbury further improved the graphics from Mercury Meltdown Remix, added new levels, and refined the difficulty curve. In addition to utilizing the Wii Remote's tilt control, Ignition Banbury also implemented the option to play Revolution with a Classic Controller and attempted to add GameCube controller support, but this feature did not make the final release. Mercury Meltdown Revolution was released in Europe on June 8, 2007, and in North America on October 17, 2007. Reception: All versions of Mercury Meltdown have been well received by critics. The PSP, PS2, and Wii versions hold aggregated scores of 78, 73, and 77 out of 100 respectively. Both the PSP and Wii versions were featured in 1001 Video Games You Must Play Before You Die. In regards to the original PSP version, critics praised the improvements it made to the difficulty and its new visual style. Eurogamer praised the more consistent difficulty curve and the variety of the level designs. IGN complimented the new cel-shaded design and bright colors, as opposed to the cold-steel design of its predecessor, stating that it brightens the game and makes it more fun. IGN further elaborated that the new cel-shaded design of the mercury blob makes it easier to discern the shape of the blob and when it dissipates. 
GameSpot also praised the stages for being more pleasing to the eye due to their bright and colorful design. PALGN noted that while the environments didn't feel as epic, they did feel more lively. Pocket Gamer stated it was an improvement on its predecessor in nearly every way. Mercury Meltdown Remix received mixed reviews from critics. Both GameZone and PALGN complimented the camera controls for improving the game. GameSpot criticized the new camera controls, stating the sensitivity was too high and allowed more mistakes to be made. Another criticism was the lack of a multiplayer option, which caused the party games to become dull. IGN was also critical, feeling that the PlayStation 2 controller did not feel right. Mercury Meltdown Revolution received a more positive reception, in particular for its motion controls. PALGN complimented how well the motion controls work. Eurogamer felt that the controls were better realized than in other Wii games at the time and were accessible to all players. IGN UK gave it an editor's choice award, although they said that the sound could be better and that the lack of multiplayer was disappointing, especially on the Wii. GamePro stated the motion controls were fresh and a lot of fun, but also criticized the lack of a multiplayer option. Play was very critical of the Wii version due to similar titles in the Wii's library and an art style it found less appealing than that of other titles such as Kororinpa.
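The RGB-based color mixing mentioned in the Gameplay section can be illustrated with a toy model. The exact in-game mixing rule is an assumption here: each blob color is treated as a triple of on/off red, green, and blue channels, and merging two blobs combines their channels additively.

```python
Color = tuple[int, int, int]  # (red, green, blue), each 0 or 1

def mix(a: Color, b: Color) -> Color:
    """Additive mix of two blob colors on the RGB model."""
    return tuple(x | y for x, y in zip(a, b))

RED   = (1, 0, 0)
GREEN = (0, 1, 0)
BLUE  = (0, 0, 1)

# e.g. merging a red blob and a green blob yields yellow (1, 1, 0),
# and red plus blue yields magenta (1, 0, 1).
```

Under this model a blob's color always moves toward white as more primaries are merged in, which is why stages that demand a specific mixed color force the player to keep primary blobs apart until the right moment.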
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tablet-K** Tablet-K: Tablet-K is a kosher certification agency that was under the leadership of Rabbi Rafael Saffra until his death in 2009. Supervision and certification: Tablet-K products are commonly available at Costco, often for dairy and fish products. Many cheeses produced by Cabot Creamery have a Tablet-K hechsher. In 2006, Cabot Creamery expanded its line of kosher products, with some cheeses receiving a Tablet-K certification.The Tablet-K hechsher is generally not regarded as reliable by Orthodox Jews, with cheeses and meats considered especially problematic, but some Modern Orthodox Jews find them to be acceptable.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Horizontal integration** Horizontal integration: Horizontal integration is the process of a company increasing production of goods or services at the same level of the value chain, in the same industry. A company may do this via internal expansion, acquisition or merger. The process can lead to a monopoly if a company captures the vast majority of the market for that product or service. Other benefits include increasing economies of scale, expanding an existing market, or improving product differentiation. Horizontal integration contrasts with vertical integration, where companies integrate multiple stages of production of a small number of production units. Horizontal alliance: Horizontal integration is related to horizontal alliance (also known as horizontal cooperation). However, in the case of a horizontal alliance, the partnering companies set up a contract but remain independent. For example, Raue & Wieland (2015) describe the example of legally independent logistics service providers who cooperate. Such an alliance relates to competition. Aspects: Benefits of horizontal integration to both the firm and society may include economies of scale and economies of scope. For the firm, horizontal integration may provide a strengthened presence in the reference market. This means that with the merger, two firms would then be able to produce more revenue than one firm alone. It may also allow the horizontally integrated firm to engage in monopoly pricing, which is disadvantageous to society as a whole and which may cause regulators to ban or constrain horizontal integration. Strategies around horizontal mergers often relate to revenue production, reducing market entrants or expanding into new markets. The three forms of horizontal integration are mergers, acquisitions and internal expansion. Mergers: Mergers and acquisitions (M&A) refer to the consolidation of companies or assets through various financial transactions, such as mergers, acquisitions, and consolidations. 
M&A activities can be an effective way for companies to expand their operations, diversify their product or service offerings, and increase their market share. These activities can also lead to cost savings, increased efficiencies, and access to new technologies or markets. Mergers involve the combination of two or more companies to form a new entity. This can occur through a stock-for-stock transaction, where shareholders of both companies receive shares in the new entity based on a predetermined exchange ratio. Alternatively, a cash merger can occur, where one company purchases another using cash or other financial instruments. Acquisitions: Acquisitions, on the other hand, involve the purchase of one company by another. This can occur through a friendly acquisition, where the target company agrees to the acquisition and its shareholders receive compensation for their shares. Alternatively, a hostile takeover can occur, where the acquiring company purchases a controlling stake in the target company without its approval. Consolidations refer to the combination of two or more companies to form a single entity without the creation of a new entity. This can occur through the merger of equals, where two companies of equal size and strength combine forces, or through the acquisition of a smaller company by a larger one. M&A activities can have a significant impact on various stakeholders, including shareholders, employees, customers, and suppliers. Shareholders can benefit from increased stock prices and dividends, while employees may face job losses or changes to their employment terms. Customers and suppliers may also be affected by changes to product or service offerings and supplier relationships. Regulatory bodies play an important role in overseeing M&A activities to ensure they do not violate antitrust laws and do not harm competition in the marketplace.
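The stock-for-stock mechanics described above reduce to simple arithmetic. The sketch below uses made-up numbers; the 0.85 exchange ratio and the share count are illustrative assumptions, not drawn from any real deal:

```python
# Stock-for-stock merger arithmetic: target shareholders receive shares in
# the new entity at a predetermined exchange ratio. All figures here are
# hypothetical, for illustration only.
exchange_ratio = 0.85        # new-entity shares issued per target share
target_shares_held = 1_000   # shares held by one target shareholder
new_entity_shares = target_shares_held * exchange_ratio
print(new_entity_shares)     # 850.0
```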
Mergers and acquisitions may also require approval from government agencies or industry regulators. Aspects: Overall, mergers and acquisitions can be an effective strategy for companies to achieve growth and gain a competitive advantage. However, careful consideration of the potential benefits and drawbacks, as well as regulatory compliance, is essential to ensure a successful outcome for all stakeholders involved. Internal Expansion: In addition to mergers and acquisitions, companies can also pursue internal expansion through horizontal integration. This involves expanding their operations and product or service offerings within their existing industry by acquiring or developing new capabilities. Aspects: Horizontal integration can take various forms, including expanding through new product development, expanding geographically, or acquiring competitors or suppliers. This strategy can enable companies to increase their market share and achieve economies of scale by leveraging existing resources and capabilities. Internal expansion through horizontal integration can also involve the integration of different business functions, such as production, marketing, and sales, to streamline operations and increase efficiency. This can result in cost savings and improved profitability. However, there are potential drawbacks to internal expansion through horizontal integration. It can be costly and time-consuming to develop new capabilities or expand into new markets, and there is a risk that these efforts may not be successful. Additionally, companies may face increased competition and regulatory scrutiny as they expand their operations. Overall, internal expansion through horizontal integration can be a viable strategy for companies looking to achieve growth and gain a competitive advantage. However, it requires careful planning, execution, and management to ensure success and mitigate potential risks.
Companies should also consider the potential benefits and drawbacks of this strategy compared to other growth strategies, such as mergers and acquisitions. Media terms: Media critics, such as Robert W. McChesney, have noted that the current trend within the entertainment industry has been toward the increased concentration of media ownership into the hands of a smaller number of transmedia and transnational conglomerates. Media ownership is seen to amass in centers of wealth, where wealthy individuals have the ability to purchase such ventures (e.g., Rupert Murdoch). What emerged were new strategies for content development and distribution designed to increase the "synergy" between the different divisions of the same company. Studios seek content that can move fluidly across media channels. Examples: An example of horizontal integration in the food industry was the Heinz and Kraft Foods merger. On 25 March 2015, Heinz and Kraft merged into one company in a deal valued at $46 billion. Both produce processed food for the consumer market. On 9 December 2013, Sysco agreed to acquire US Foods, but on 24 June 2015 a federal judge ruled against the deal, saying that such a merger would control 75% of the U.S. foodservice industry and would stifle competition. It would have been horizontal integration, as both distribute food to restaurants, healthcare, and educational facilities. Examples: On 16 November 2015, Marriott International announced that it would acquire Starwood for $13.6 billion, creating the world's largest hotel chain. The merger was finalized on 23 September 2016. The AB InBev acquisition of SAB Miller for $107 billion, which completed in 2016, is one of the biggest deals of all time. On 1 November 2017, CenturyLink bought Level 3 Communications for $34 billion and incorporated Level 3 as part of CenturyLink.
Examples: On 14 December 2017, The Walt Disney Company announced a $52.4 billion bid in stock to acquire 21st Century Fox along with the bulk of its assets, including the famed 20th Century Fox film studio, FX Networks, and a 30% stake in Hulu. Both companies produced and distributed films and television series.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Physical change** Physical change: Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but cannot usually be used to separate compounds into chemical elements or simpler compounds. Physical changes occur when objects or substances undergo a change that does not alter their chemical composition. This contrasts with the concept of chemical change, in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general, a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate. Physical change: A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density. Physical change: An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered, which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge. Many physical changes also involve the rearrangement of atoms, most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change. Examples: Heating and cooling: Many elements and some compounds change from solids to liquids and from liquids to gases when heated, and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism: Ferromagnetic materials can become magnetic. The process is reversible and does not affect the chemical composition. Crystallisation: Many elements and compounds form crystals. Some, such as carbon, can form several different forms including diamond, graphite, graphene and fullerenes including buckminsterfullerene. Examples: Crystals in metals have a major effect on the physical properties of the metal, including strength and ductility. Crystal type, shape and size can be altered by physical hammering, rolling and by heat. Mixtures: Mixtures of substances that are not soluble are usually readily separated by physical sieving or settlement. However, mixtures can have different properties from the individual components. One familiar example is the mixture of fine sand with water used to make sandcastles. Neither the sand on its own nor the water on its own will make a sandcastle, but by using physical properties of surface tension, the mixture behaves in a different way. Examples: Solutions: Most solutions of salts and some compounds such as sugars can be separated by evaporation. Others, such as mixtures of volatile liquids like low molecular weight alcohols, can be separated by fractional distillation. Alloys: The mixing of different metal elements is known as alloying. Brass is an alloy of copper and zinc. Separating individual metals from an alloy can be difficult and may require chemical processing – making an alloy is an example of a physical change that cannot readily be undone by physical means. Alloys where mercury is one of the metals can be separated physically by melting the alloy and boiling the mercury off as a vapour.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Stoop (architecture)** Stoop (architecture): In American English, a stoop is a small staircase ending in a platform and leading to the entrance of an apartment building or other building. Etymology: Originally brought to the Hudson Valley of New York by settlers from the Netherlands, the word "stoop" is part of the Dutch vocabulary that has survived there from colonial times until the present. Stoop, "a small porch", comes from Dutch stoep (meaning: step/sidewalk, pronounced the same as English "stoop"); the word is now in general use in the Northeastern United States and is probably spreading. History: New York stoops may have been a simple carry-over from the Dutch practice of constructing elevated buildings. Stoops as a social device: Traditionally, in North American cities, the stoop served an important function as a spot for brief, incidental social encounters. Homemakers, children, and other household members would sit on the stoop outside their home to relax, and greet neighbors passing by. Similarly, while on an errand, one would stop and converse with neighbors sitting on their stoops. Within an urban community, stoop conversations helped to disseminate gossip and reaffirm casual relationships. Similarly, it was the place that children would congregate to play street games such as stoop ball. Urbanites lacking yards often hold stoop sales instead of yard sales. Stoops as a social device: In her pivotal book The Death and Life of Great American Cities, Jane Jacobs includes the stoop as part of her model of the self-regulating urban street. By providing a constant human presence watching the street, institutions such as stoops prevent street crime, without intervention from authority figures. In addition, they motivate better street maintenance and beautification, by giving it social as well as utilitarian value. 
Literature: Jane Jacobs, The Death and Life of Great American Cities, New York: Random House, 1961 Mario Maffi, New York City: An Outsider's Inside View, Columbus: Ohio State University Press, 2004
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Ice pop** Ice pop: An ice pop is a liquid-based frozen snack on a stick. Unlike ice cream or sorbet, which are whipped while freezing to prevent ice crystal formation, an ice pop is "quiescently" frozen—frozen while at rest—and becomes a solid block of ice. The stick is used as a handle to hold it. Without a stick, the frozen product would be a freezie. Ice pop: An ice pop is also referred to as a popsicle in Canada and the United States, paleta in Mexico, the Southwestern United States and parts of Latin America, ice lolly in the United Kingdom (the term ice pop refers to a freezie in the United Kingdom), Ireland and the Commonwealth, lolly ice by most people in Liverpool and some people in Ireland, ice lol as a colloquial form in areas where people say ice lolly, ice drop in the Philippines, ice gola in India, ice candy in the Philippines, India and Japan, ai tim tang or ice cream tang in Thailand (though both words are also colloquially used to refer to an ice cream bar), and kisko in the Caribbean. The term icy pole is often used in Australia, but is a brand name for a specific type, so ice block is also used. History: As far back as 1872, two men, doing business as Ross and Robbins, sold a frozen-fruit confection on a stick, which they called the Hokey-Pokey. Francis William "Frank" Epperson of San Francisco, California, popularized ice pops after patenting the concept of "frozen ice on a stick" in 1923. Epperson claimed to have first created an ice pop in 1905, at the age of 11, when he accidentally left a glass of powdered lemonade soda and water with a mixing stick in it on his porch during a cold night, a story still printed on the back of Popsicle treat boxes. History: Epperson lived in Oakland and worked as a lemonade salesman. In 1922, Epperson, a realtor with Realty Syndicate Company in Oakland, introduced the Popsicle at a fireman's ball.
The product got traction quickly; in 1923, at the age of 29, Epperson received a patent for his "Epsicle" ice pop, and by 1924 had patented all handled, frozen confections or ice lollipops. He officially debuted the Epsicle in seven fruit flavors at Neptune Beach amusement park, marketed as a "frozen lollipop," or a "drink on a stick." A couple of years later, Epperson sold the rights to the invention and the Popsicle brand to the Joe Lowe Company in New York City. Terminology: In the United States and Canada, frozen ice on a stick is generically referred to as a popsicle due to the early popularity of the Popsicle brand, and the word has become a genericized trademark to mean any ice pop, regardless of brand or format. The word is a portmanteau of pop and icicle; the word is genericized to such an extent that there are decades-old derived slang meanings such as "popsicle stand". The term ice pop is also used in the United States. In Ireland the term ice pop is predominantly used. In the United Kingdom the term ice lolly is used to refer to an ice pop, while the term ice pop refers to a freezie (flavoured ice inside a tube). The term chihiro is used as a slang term in the Cayman Islands, partially derived from chill. Different parts of Australia use either ice block or icy pole (which is a brand name), and New Zealand uses ice block. In the Philippines the term ice drop is used, with coconut-flavor ice pops being called ice bukos. India uses the terms ice gola and ice candy. In Japan the term ice candy is used. Terminology: Paleta: After a trip to the United States in the early 1940s, Ignacio Alcázar returned to his home city of Tocumbo, Michoacán, México, bringing the idea to manufacture ice pops or paletas (little sticks) using locally available fresh fruit. He and some family members expanded by opening a shop in Mexico City which became very popular, and he began to franchise Paletería La Michoacana to friends and family from his town.
The popularity of paletas and their association with Tocumbo have raised them to the status of a national Mexican food. Paleta flavors can be divided into two basic categories: milk-based or water-based. The composition of each flavor may vary, but the base is most often fruit. Paleterias usually have dozens of flavors of paleta, including local flavors like horchata, tamarind, mamey and nanche along with other flavors like strawberry, lime, chocolate and mango. Distinctly Mexican ingredients like chili pepper, chamoy, and vanilla are often present in these paletas. Paleterias adapt their flavors to the tastes of the community and local availability of ingredients. Terminology: Paletero: A paletero (roughly equivalent to the English "ice cream man") is a street seller of paletas and other frozen treats, usually from a pushcart labeled with the name of the enterprise that made the paletas (paletería). Today, paleteros are commonly found in American cities with significant Mexican populations. Vending requirements for paleteros vary widely by city. Homemade ice pops: An alternative to store-bought ice pops is making them at home using fruit juice, drinks, or any freezable beverage. A classic method involves using ice cube trays and toothpicks, although various ice pop freezer molds are also available. In the UK, an increasing number of people make alcoholic ice lollies at home by putting alcoholic drinks inside the mould. Buckfast, Kopparberg and Strongbow Dark Fruit ciders are popular choices. Innovations in ice pop creation: In 2018, the UK food-focused design firm Bompas & Parr announced that they had created the world's first 'non-melting' ice pop. The ice pop does melt, but not as fast as other ice pops. This is due to strands of fruit fibers inside the ice pops, which make them thicker than regular ice pops. The thicker the ice pop, the slower it melts.
This design was inspired by the material called pykrete, which was invented by Geoffrey Pyke. World record ice pop: On June 22, 2005, Snapple tried to beat the existing Guinness World Record, a 21-foot (6.4 m) ice pop made in the Netherlands in 1997, by attempting to erect a 25-foot (7.6 m) ice pop in New York City. The 17.5 short tons (15.9 t) of frozen juice that had been brought from Edison, New Jersey, in a freezer truck melted faster than expected, dashing hopes of a new record. Spectators fled to higher ground as firefighters hosed away the melted juice.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Path-ordering** Path-ordering: In theoretical physics, path-ordering is the procedure (or a meta-operator P) that orders a product of operators according to the value of a chosen parameter: P{O1(σ1)O2(σ2)⋯ON(σN)} ≡ Op1(σp1)Op2(σp2)⋯OpN(σpN). Here p is a permutation that orders the parameters by value: p : {1,2,…,N} → {1,2,…,N} such that σp1 ≤ σp2 ≤ ⋯ ≤ σpN. For example: P{O1(4)O2(2)O3(3)O4(1)} = O4(1)O2(2)O3(3)O1(4). Examples: If an operator is not simply expressed as a product, but as a function of another operator, we must first perform a Taylor expansion of this function. This is the case of the Wilson loop, which is defined as a path-ordered exponential to guarantee that the Wilson loop encodes the holonomy of the gauge connection. The parameter σ that determines the ordering is a parameter describing the contour, and because the contour is closed, the Wilson loop must be defined as a trace in order to be gauge-invariant. Time ordering: In quantum field theory it is useful to take the time-ordered product of operators. This operation is denoted by T. (Although T is often called the "time-ordering operator", strictly speaking it is neither an operator on states nor a superoperator on operators.) For two operators A(x) and B(y) that depend on spacetime locations x and y we define: T{A(x)B(y)} := A(x)B(y) if τx > τy, and T{A(x)B(y)} := ±B(y)A(x) if τx < τy. Time ordering: Here τx and τy denote the invariant scalar time-coordinates of the points x and y. Explicitly we have T{A(x)B(y)} := θ(τx−τy)A(x)B(y) ± θ(τy−τx)B(y)A(x), where θ denotes the Heaviside step function and the ± depends on whether the operators are bosonic or fermionic in nature. If bosonic, then the + sign is always chosen; if fermionic, then the sign will depend on the number of operator interchanges necessary to achieve the proper time ordering. Note that the statistical factors do not enter here. Time ordering: Since the operators depend on their location in spacetime (i.e.
not just time) this time-ordering operation is only coordinate independent if operators at spacelike separated points commute. This is why it is necessary to use τ rather than t0, since t0 usually indicates the coordinate-dependent time-like index of the spacetime point. Note that the time-ordering is usually written with the time argument increasing from right to left. Time ordering: In general, for the product of n field operators A1(t1), …, An(tn) the time-ordered product of operators is defined as follows: T{A1(t1)A2(t2)⋯An(tn)} = ∑p θ(tp1 > tp2 > ⋯ > tpn) ε(p) Ap1(tp1)Ap2(tp2)⋯Apn(tpn) = ∑p (∏j=1…n−1 θ(tpj − tpj+1)) ε(p) Ap1(tp1)Ap2(tp2)⋯Apn(tpn), where the sum runs over all permutations p in the symmetric group of degree n, and ε(p) = 1 for bosonic operators, while ε(p) equals the sign of the permutation for fermionic operators. Time ordering: The S-matrix in quantum field theory is an example of a time-ordered product. The S-matrix, transforming the state at t = −∞ to a state at t = +∞, can also be thought of as a kind of "holonomy", analogous to the Wilson loop. We obtain a time-ordered expression for the following reason: We start with this simple formula for the exponential: exp(h) = lim N→∞ (1 + h/N)^N. Time ordering: Now consider the discretized evolution operator S = ⋯(1+h+3)(1+h+2)(1+h+1)(1+h0)(1+h−1)(1+h−2)⋯ where 1+hj is the evolution operator over an infinitesimal time interval [jε, (j+1)ε]. The higher order terms can be neglected in the limit ε → 0. The operator hj is defined by hj = (1/iℏ) ∫jε(j+1)ε dt ∫ d3x H(x→,t). Note that the evolution operators over the "past" time intervals appear on the right side of the product. We see that the formula is analogous to the identity above satisfied by the exponential, and we may write S = T exp((1/iℏ) ∫ dt d3x H(x→,t)). The only subtlety we had to include was the time-ordering operator T, because the factors in the product defining S above were time-ordered, too (and operators do not commute in general) and the operator T ensures that this ordering will be preserved.
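The time-ordered exponential above can also be expanded term by term. As a reminder (standard textbook material, restated here from the definitions above, with H(t) abbreviating the spatial integral of the Hamiltonian density):

```latex
% Dyson series: the time-ordered exponential expanded term by term.
\[
S \;=\; T\exp\!\left(\frac{1}{i\hbar}\int dt\, d^3x\, \mathcal{H}(\vec{x},t)\right)
  \;=\; \sum_{n=0}^{\infty} \frac{1}{n!\,(i\hbar)^n}
        \int dt_1 \cdots \int dt_n\; T\{H(t_1)\cdots H(t_n)\}.
\]
```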
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**COinS** COinS: ContextObjects in Spans (COinS) is a method to embed bibliographic metadata in the HTML code of web pages. This allows bibliographic software to publish machine-readable bibliographic items and client reference management software to retrieve bibliographic metadata. The metadata can also be sent to an OpenURL resolver. This allows, for instance, searching for a copy of a book at a specific library. History: In the late 1990s, OpenURL was created at Ghent University as a framework to provide context-sensitive links. The OpenURL link server implementation called SFX was sold to Ex Libris Group, which marketed it to libraries, shaping the idea of a "link resolver". The OpenURL framework was later standardized as ANSI/NISO Z39.88 in 2004 (revised 2010). A core part of OpenURL was the concept of "ContextObjects" as metadata to describe referenced resources. History: In late 2004, Richard Cameron, the creator of CiteULike, drew attention to the need for a standard way of embedding metadata in HTML pages. In January 2005 Daniel Chudnov suggested the use of OpenURL. Embedding OpenURL ContextObjects in HTML had been proposed before by Herbert Van de Sompel and Oren Beit-Arie and in a working paper by Chudnov and Jeremy Frumkin. Discussion of the latter on the GPS-PCS mailing list resulted in a draft specification for embedding OpenURLs in HTML, which later became COinS. A ContextObject is embedded in an HTML span element. History: The adoption of COinS was pushed by various publications and implementations. The specification was published at OCOinS.info, which includes specific guides to implement COinS for journal articles and books. Summary of the data model: From OpenURL 1.0, COinS borrows one of its serialization formats ("KEV") and some ContextObject metadata formats included in the OpenURL implementation guidelines.
The ContextObject implementation guidelines of COinS include four publication types (article with several subtypes, book, patent, and generic) and a couple of simple fields. However, the guidelines are not a required part of COinS, so the standard does not provide a strict metadata model like Dublin Core or the Bibliographic Ontology. Use in websites: The following websites make use of COinS: Citebase CiteULike Copac HubMed Mendeley Wikipedia Wikivoyage (German branch) WorldCat Server-side applications: Some server-side applications embed COinS, including refbase. Client tools: Client tools which can make use of COinS include: BibDesk Bookends (Mac) Citavi LibX Mendeley ResearchGate Sente (Mac) Zotero
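As a concrete illustration of the data model, the sketch below generates a COinS span for a journal article using the KEV serialization. The metadata values are invented for the example; only the KEV key names (ctx_ver, rft_val_fmt, rft.genre, …) and the Z3988 class follow the conventions described above.

```python
# Minimal sketch: build a COinS span whose title attribute carries an
# OpenURL ContextObject in KEV (key-encoded-value) form. The article
# metadata below is made up for illustration.
from urllib.parse import urlencode
from html import escape

def coins_span(metadata):
    """Return an HTML span element embedding the given ContextObject."""
    kev = urlencode(metadata)              # percent-encode key/value pairs
    return '<span class="Z3988" title="%s"></span>' % escape(kev)

span = coins_span({
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.atitle": "An example article",
    "rft.jtitle": "Journal of Examples",
    "rft.date": "2005",
})
print(span)
```

A COinS-aware client such as Zotero would parse the title attribute back into an OpenURL ContextObject.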
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hilbert curve scheduling** Hilbert curve scheduling: In parallel processing, the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space-filling problem using Hilbert curves, assigning related tasks to locations with higher levels of proximity. Other space-filling curves may also be used in various computing applications for similar purposes. The SLURM job scheduler, which is used on a number of supercomputers, uses a best-fit algorithm based on Hilbert curve scheduling in order to optimize locality of task assignments.
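A minimal sketch of the idea, assuming a two-dimensional n × n grid of compute nodes (the task names and coordinates are made up): each task's node coordinates are mapped to a distance along the Hilbert curve, and tasks are then sorted by that distance, so tasks adjacent in the schedule tend to sit on nearby nodes.

```python
def xy_to_hilbert(n, x, y):
    """Map coordinates (x, y) on an n x n grid (n a power of two)
    to a distance along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so the sub-curves join up correctly.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical tasks pinned to node coordinates on an 8x8 mesh.
tasks = [("a", (7, 0)), ("b", (0, 1)), ("c", (1, 0)), ("d", (0, 0))]
schedule = sorted(tasks, key=lambda t: xy_to_hilbert(8, *t[1]))
print([name for name, _ in schedule])  # ['d', 'b', 'c', 'a']
```

Because the Hilbert curve preserves locality, two tasks with nearby curve distances are guaranteed to be close on the grid, which is the property a scheduler exploits.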
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Great hexacronic icositetrahedron** Great hexacronic icositetrahedron: In geometry, the great hexacronic icositetrahedron is the dual of the great cubicuboctahedron. Its faces are kites. Part of each kite lies inside the solid, hence is invisible in solid models. Proportions: The kites have two angles of approximately 117.200 570 380 16°, one of approximately 94.199 144 429 76°, and one of approximately 31.399 714 809 92°. The dihedral angle is approximately 94.531 580 798 20°. The ratio between the lengths of the long and short edges is 2.70710678118655.
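The four face angles of a kite must sum to 360°; a quick numeric check of the decimals quoted above (two copies of the repeated angle plus the two distinct ones):

```python
# Sanity check: the interior angles of each kite face should sum to 360
# degrees. The decimals are the angle values quoted in the text.
angles = [117.20057038016, 117.20057038016, 94.19914442976, 31.39971480992]
print(abs(sum(angles) - 360.0) < 1e-9)  # True
```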
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Organoantimony chemistry** Organoantimony chemistry: Organoantimony chemistry is the chemistry of compounds containing a carbon to antimony (Sb) chemical bond. Relevant oxidation states are Sb(V) and Sb(III). The toxicity of antimony limits practical application in organic chemistry. Organoantimony(V) chemistry: Antimony compounds of the type R5Sb (stiboranes) can be synthesised from trivalent Sb precursors: Ph3Sb + Cl2 → Ph3SbCl2 Ph3SbCl2 + 2 PhLi → Ph5Sb. Asymmetric compounds can also be obtained through the stibonium ion: R5Sb + X2 → [R4Sb]+[X] [R4Sb]+[X] + R'MgX → R4R'Sb. Just as in the related organobismuth compounds (same group 15), organoantimony(V) compounds form onium compounds and ate complexes. Organoantimony(V) chemistry: Pentaphenylantimony decomposes at 200 °C to triphenylstibine and biphenyl. It forms a trigonal bipyramidal molecular geometry. In the related Me5Sb all methyl protons are equivalent at −100 °C in proton NMR. Compounds of the type R4SbX tend to form dimers. Organoantimony(III) chemistry: Compounds of the type R3Sb (stibines) can be accessed by reaction of antimony trichloride with organolithium or Grignard reagents: SbCl3 + 3 RLi (or RMgCl) → R3Sb. Typical reactions are: R3Sb + Br2 → R3SbBr2 R3Sb + O2 → R3SbO R3Sb + Na + NH3 → R2SbNa R3Sb + B2H6 → R3Sb·BH3. Stibines are weak Lewis acids and therefore ate complexes are not encountered. On the other hand, they have good donor properties and are therefore widely used in coordination chemistry. R3Sb compounds are more air-sensitive than the R5Sb counterparts. Organoantimony(III) chemistry: Antimony metallocenes are known as well: 14 SbI3 + 3 (Cp*Al)4 → 6 [Cp*2Sb]+[AlI4]− + 8 Sb + 6 AlI3. The Cp*–Sb–Cp* angle is 154°. The cyclic compound stibole, a structural analog of pyrrole, has not been isolated, but substituted derivatives known as stiboles are known. Organoantimony(II) chemistry: Distibines have an Sb–Sb single bond and are of some interest as thermochromic materials.
For example, tetramethyldistibine is colorless as a gas, yellow as a liquid, red as a solid just below the melting point of 18.5 °C, and yellow again well below the melting point.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**JWH-018** JWH-018: JWH-018 (1-pentyl-3-(1-naphthoyl)indole, NA-PIMO or AM-678) is an analgesic chemical from the naphthoylindole family that acts as a full agonist at both the CB1 and CB2 cannabinoid receptors, with some selectivity for CB2. It produces effects in animals similar to those of tetrahydrocannabinol (THC), a cannabinoid naturally present in cannabis, leading to its use in synthetic cannabis products that in some countries are sold legally as "incense blends". As a full agonist at both the CB1 and CB2 cannabinoid receptors, this chemical compound is classified as an analgesic medication. The analgesic effects of cannabinoid ligands, mediated by CB1 receptors, are well established in the treatment of neuropathic pain, as well as cancer pain and arthritis. These compounds work by mimicking the body's naturally produced endocannabinoids such as 2-AG and anandamide (AEA), which are biologically active and can exacerbate or inhibit nerve signaling. As the causes of chronic pain states are poorly understood, more research and development must be done before the therapeutic potential of this class of compounds can be realized. History: John W. Huffman, an organic chemist at Clemson University, synthesized a variety of chemical compounds that affect the endocannabinoid system. JWH-018 is one of these compounds, with studies showing an affinity for the cannabinoid (CB1) receptor five times greater than that of THC. Cannabinoid receptors are found in mammalian brain and spleen tissue; however, the structural details of the active sites are currently unknown. On December 15, 2008, it was reported by German pharmaceutical companies that JWH-018 was found as one of the active components in at least three versions of the grey market drug Spice, which has been sold as an incense in a number of countries around the world since 2002.
An analysis of samples acquired four weeks after the German prohibition of JWH-018 took place found that the manufacturers had shortened the alkyl chain by one carbon to circumvent the ban. Pharmacology: JWH-018 is a full agonist of both the CB1 and CB2 cannabinoid receptors, with a reported binding affinity of 9.00 ± 5.00 nM at CB1 and 2.94 ± 2.65 nM at CB2. JWH-018 has an EC50 of 102 nM for human CB1 receptors, and 133 nM for human CB2 receptors. JWH-018 produces bradycardia and hypothermia in rats at doses of 0.3–3 mg/kg, suggesting potent cannabinoid-like activity. Pharmacology: Pharmacokinetics Metabolism of JWH-018 was assessed using Wistar rats that had been administered an ethanolic extract containing JWH-018. Urine was collected for 24 hours, followed by extraction of JWH-018 metabolites using both liquid-liquid extraction and solid-phase extraction. GC-MS was used to separate and identify the extracted compounds. JWH-018 and its N-dealkylated metabolite were only detected in small amounts, with hydroxylated N-dealkylated metabolites comprising the primary signal. The observed mass shift indicates that it is likely that hydroxylation occurs in both the naphthalene and indole portions of the molecule. Human metabolites were similar, although most metabolism took place on the indole ring and pentyl side chain, and the hydroxylated metabolites were extensively conjugated with glucuronide. Usage: At least one case of JWH-018 dependence has been reported by the media. The user consumed JWH-018 daily for eight months. Withdrawal symptoms were more severe than those experienced as a result of cannabis dependence.
JWH-018 has been shown to cause profound changes in CB1 receptor density following administration, causing desensitization to its effects more rapidly than related cannabinoids. On October 15, 2011, Anderson County coroner Greg Shore attributed the death of a South Carolina college basketball player to "drug toxicity and organ failure" caused by JWH-018. A November 2011 email concerning the case was released in December 2011 under the Freedom of Information Act after multiple requests to see the information had been denied. Compared to THC, which is a partial agonist at CB1 receptors, JWH-018 and many synthetic cannabinoids are full agonists. THC has been shown to inhibit GABA receptor neurotransmission in the brain via several pathways. JWH-018 may cause intense anxiety and agitation, and, in rare cases (generally with non-regular JWH users), has been assumed to have been the cause of seizures and convulsions by inhibiting GABA neurotransmission more effectively than THC. Cannabinoid receptor full agonists may present serious dangers to the user when used to excess. Various physical and psychological adverse effects have been reported from JWH-018 use. One study reported psychotic relapses and anxiety symptoms in well-treated patients with mental illness following JWH-018 inhalation. Due to concerns about the potential of JWH-018 and other synthetic cannabinoids to cause psychosis in vulnerable individuals, it has been recommended that people with risk factors for psychotic illnesses (like a past or family history of psychosis) not use these substances. Detection in biological fluids: JWH-018 usage is readily detected in urine using "spice" screening immunoassays from several manufacturers focused on both the parent drug and its omega-hydroxy and carboxyl metabolites. JWH-018 will not be detected by older methods employed for detecting THC and other cannabis terpenoids.
Determination of the parent drug in serum or its metabolites in urine has been accomplished by GC-MS or LC-MS. Serum JWH-018 concentrations are generally in the 1–10 μg/L range during the first few hours after recreational usage. The major urinary metabolite is a compound that is monohydroxylated on the omega minus one carbon atom of the alkyl side chain. A lesser metabolite monohydroxylated on the omega (terminal) position was present in the urine of six users of the drug at concentrations of 6–50 μg/L, primarily as a glucuronide conjugate.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Restraint chair** Restraint chair: A restraint chair is a type of physical restraint that is used to force an individual to remain seated in one place to prevent injury and harm to themselves or others. They are commonly used in prisons for violent inmates and in hospitals for out-of-control patients. However, they have also been used to restrain prisoners at the Guantanamo Bay detention camp during force-feeding. Restraint chair: In the United States, the use of these chairs is controversial because a number of deaths and injuries resulting from prolonged periods of restraint have been reported. There have been numerous cases of financial settlements, as well as personal lawsuits and at least one class action suit. In Australia, the mere use of restraint chairs has sparked opposition. History: Various forms of restraint chair have been used for centuries. The modern, institutional type was introduced into the United States in the late 1990s. Description: A typical, modern restraint chair consists of a sturdy frame, padded seat and padded reclining back, arm rests, a foot rest, and a set of back wheels. Straps secure the individual at the ankles, wrists, shoulders, and waist. Organizations using restraint chairs: Restraint chairs are used in local jails as well as state and federal prisons. They are used by the U.S. Marshals Service and U.S. Immigration and Customs Enforcement, and also in psychiatric hospitals and juvenile detention facilities. Statistics: Numbers vary within the United States and across the world. An example of the number of times the chair was used is as follows: According to Jacksonville Sheriff's Office records, the restraint chair was used 137 times in 2014 and 130 times in 2015. In Gwinnett County, Georgia, during the first half of 2013, 129 inmates were held in a restraint chair. Hazards: A review of deaths at United States county jails revealed that there have been nearly 40 restraint chair-related deaths since the late 1990s. 
Prolonged periods in a restraint chair can cause blood clots. Incidents: There have been numerous incidents associated with the improper use of restraint chairs involving injury, torture, and death. Cheatham County Jail officers were placed on leave after a 2017 video was released showing Jordan Norris being tased while restrained. One officer says on the video, "I'll keep on doing that until I run out of batteries." The victim has since filed a civil rights lawsuit in federal court. San Luis Obispo County was ordered to pay $5 million following the death of Andrew Holland, who died after spending 46 hours in a restraint chair at the San Luis Obispo County Jail. In July 2017, six Oklahoma officers were charged with manslaughter after Anthony Huff, a 58-year-old prisoner, died after spending over 48 hours in a restraint chair without adequate food, water, or medical attention. In March 2009, a Florida man was pepper sprayed multiple times and then placed in a restraint chair in a Lee County jail. After being strapped in, a spit hood was placed over his head. He was then pepper sprayed twice more and left in the chair for a further six hours. He died in the hospital. A video was released of two Georgia officers tasing a mentally ill man while he was restrained in a chair. 22-year-old Matthew Ajibade later died while in police custody. The use of restraint chairs and spit hoods at the Don Dale Youth Detention Centre in the Northern Territory, Australia was part of the reason for the establishment of the Royal Commission into the Protection and Detention of Children in the Northern Territory.
**Supporting line** Supporting line: In geometry, a supporting line L of a curve C in the plane is a line that contains a point of C, but does not separate any two points of C. In other words, C lies completely in one of the two closed half-planes defined by L and has at least one point on L. Properties: There can be many supporting lines for a curve at a given point. When a tangent exists at a given point, it is the unique supporting line at this point if it does not separate the curve. Generalizations: The notion of supporting line is also discussed for planar shapes. In this case a supporting line may be defined as a line which has common points with the boundary of the shape, but not with its interior. The notion of a supporting line to a planar curve or convex shape can be generalized to n dimensions as a supporting hyperplane. Critical support lines: If two bounded connected planar shapes have disjoint convex hulls that are separated by a positive distance, then they necessarily have exactly four common lines of support, the bitangents of the two convex hulls. Two of these lines of support separate the two shapes, and are called critical support lines. Without the assumption of convexity, there may be more or fewer than four lines of support, even if the shapes themselves are disjoint. For instance, if one shape is an annulus that contains the other, then there are no common lines of support, while if each of two shapes consists of a pair of small disks at opposite corners of a square then there may be as many as 16 common lines of support.
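The defining condition (all of C in one closed half-plane of L, with at least one point of C on L) is straightforward to test numerically for a finite point set. A minimal sketch in Python, assuming the line is given in implicit form ax + by + c = 0 (the function name and tolerance are illustrative, not from the source):

```python
def is_supporting_line(a, b, c, points, eps=1e-9):
    """Check whether the line a*x + b*y + c = 0 supports the finite
    point set `points`: every point lies in one closed half-plane
    and at least one point lies on the line (within tolerance eps)."""
    vals = [a * x + b * y + c for x, y in points]
    touches = any(abs(v) <= eps for v in vals)  # at least one point on L
    one_side = all(v >= -eps for v in vals) or all(v <= eps for v in vals)
    return touches and one_side

# Unit square corners: y = 0 is a supporting line, y = 0.5 separates them.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

For the square above, `is_supporting_line(0, 1, 0, square)` is true (the line y = 0 contains two corners and the rest lie above it), while `is_supporting_line(0, 1, -0.5, square)` is false because y = 0.5 separates the corners.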
**Shibor** Shibor: The Shanghai Interbank Offered Rate (or Shibor, 上海银行间同业拆放利率) is a daily reference rate based on the interest rates at which banks offer to lend unsecured funds to other banks in the Shanghai wholesale (or "interbank") money market. There are eight Shibor rates, with maturities ranging from overnight to a year. They are calculated from rates quoted by 18 banks, eliminating the four highest and the four lowest rates, and then averaging the remaining 10.
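The trimmed-mean fixing described above (18 bank quotes, discard the four highest and four lowest, average the remaining ten) can be sketched as follows; this is an illustrative calculation, not an official implementation:

```python
def shibor_fixing(quotes):
    """Compute a Shibor-style fixing: from 18 quoted rates, discard the
    four highest and the four lowest, then average the remaining ten."""
    if len(quotes) != 18:
        raise ValueError("expected quotes from 18 banks")
    trimmed = sorted(quotes)[4:-4]  # drop 4 lowest and 4 highest
    return sum(trimmed) / len(trimmed)

# Example: outlier quotes at 1.0% and 3.0% are discarded entirely,
# so the fixing is driven by the ten central quotes at 2.0%.
quotes = [1.0] * 4 + [2.0] * 10 + [3.0] * 4
```

With the example quotes, `shibor_fixing(quotes)` returns 2.0: the trimming makes the fixing robust to a few extreme submissions.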
**Microbiology of decomposition** Microbiology of decomposition: Microbiology of decomposition is the study of all microorganisms involved in decomposition, the chemical and physical processes during which organic matter is broken down and reduced to its original elements. Decomposition microbiology can be divided into two fields of interest, namely the decomposition of plant materials and the decomposition of cadavers and carcasses. The decomposition of plant materials is commonly studied in order to understand the cycling of carbon within a given environment and to understand the subsequent impacts on soil quality. Plant material decomposition is also often referred to as composting. The decomposition of cadavers and carcasses has become an important field of study within forensic taphonomy. Decomposition microbiology of plant materials: The breakdown of vegetation is highly dependent on oxygen and moisture levels. During decomposition, microorganisms require oxygen for their respiration. If anaerobic conditions dominate the decomposition environment, microbial activity will be slow and thus decomposition will be slow. Appropriate moisture levels are required for microorganisms to proliferate and to actively decompose organic matter. In arid environments, bacteria and fungi dry out and are unable to take part in decomposition. In wet environments, anaerobic conditions will develop and decomposition can also be considerably slowed down. Decomposing microorganisms also require the appropriate plant substrates in order to achieve good levels of decomposition. This usually translates to having appropriate carbon to nitrogen ratios (C:N). The ideal composting carbon-to-nitrogen ratio is thought to be approximately 30:1. As in any microbial process, the decomposition of plant litter by microorganisms will also be dependent on temperature. 
For example, leaves on the ground will not undergo decomposition during the winter months where snow cover occurs, as temperatures are too low to sustain microbial activity. Decomposition microbiology of cadavers and carcasses: The decomposition processes of cadavers and carcasses are studied within the field of forensic taphonomy in order to: aid in the estimation of post-mortem interval (PMI), or time since death; aid in the location of potential clandestine graves. Decomposition microbiology as applied to forensic taphonomy can be divided into two groups of studies: microorganisms from within the body; microorganisms from the decomposition environment. Decomposition microbiology of cadavers and carcasses: Microorganisms in the body When considering cadavers and carcasses, putrefaction is the proliferation of microorganisms within the body following death and also encompasses the breakdown of tissues brought on by the growth of bacteria. The first signs of putrefaction are usually discolorations of the body, which can vary between shades of green, blue, red or black depending on 1) where the color changes are observed and 2) how far along within the decomposition process the observation is made. This phenomenon is known as marbling. Discolorations are the result of bile pigments being released following an enzymatic attack of the liver, gallbladder and pancreas, and of the release of hemoglobin breakdown products. Proliferation of bacteria throughout the body is accompanied by the production of considerable amounts of gases due to their capacity for fermentation. As gases accumulate within the bodily cavities, the body appears to swell as it enters the bloat stage of decomposition. Decomposition microbiology of cadavers and carcasses: As oxygen is present within a body at the beginning of decomposition, aerobic bacteria flourish during the first stages of the process. 
As the microbial population increases, an accumulation of gases changes the environment to anaerobic conditions, which is consequently followed by a shift to anaerobic bacteria. Gastro-intestinal bacteria are thought to be responsible for the majority of the putrefactive processes that occur in cadavers and carcasses. This can be attributed in part to the high concentrations of viable gastro-intestinal organisms and to the metabolic capacities they possess, allowing them to use an array of different nutrient sources. Gastro-intestinal bacteria are also capable of migrating from the gut to any other region of the body by using the lymphatic system and blood vessels. Furthermore, we know that coliform varieties of Staphylococcus are important members of the aerobic putrefactive bacteria and that members of the genus Clostridium make up a large part of the anaerobic putrefactive bacteria. Decomposition microbiology of cadavers and carcasses: Microorganisms outside the body Cadavers and carcasses are usually left to decompose in contact with soil, whether through burial in a grave or if left to decompose on the soil surface. This allows microorganisms in the soil and air to come in contact with the body and to take part in the decomposition process. Soil microorganism communities also undergo changes as a result of decomposition fluids leaching into the environment. Cadavers and carcasses often show signs of fungal growth, suggesting that fungi use the body as a source of nutrients. Decomposition microbiology of cadavers and carcasses: The exact impacts that decomposition may have on surrounding soil microbial communities remain unclear, as some studies have shown increases in microbial biomass following decomposition whereas others have seen decreases. It is likely that the survival of microorganisms throughout the decomposition process is highly dependent on a multitude of environmental factors, including pH, temperature and moisture. 
Decomposition microbiology of cadavers and carcasses: Decomposition fluids and soil microbiology Decomposition fluids entering the soil represent an important influx of organic matter and can also contain a large microbial load of organisms from the body. The area where the majority of the decomposition fluid leaches into the soil is often referred to as a cadaver decomposition island (CDI). It has been observed that decomposition can have a favorable influence on the growth of plants due to increased fertility, a useful indicator when trying to locate clandestine graves. The changes in the concentration of nutrients can have lasting effects that are still seen years after a body or carcass has completely disappeared. The influence that the surge in nutrients can have on the microorganisms and vegetation of a given site is not well understood, but it appears that decomposition has an inhibitory effect during an initial stage before entering a second stage of increased growth. Decomposition microbiology of cadavers and carcasses: Decomposition fungi It is well known that fungi are heterotrophic for carbon compounds and almost all other nutrients they require. They must obtain these through saprophytic or parasitic associations with their hosts, which implicates them in many decomposition processes. Decomposition microbiology of cadavers and carcasses: Two major groups of fungi have been identified as being linked to cadaver decomposition: ammonia fungi and post-putrefactive fungi. Ammonia fungi are broken down into two groups referred to as "early stage fungi" and "late stage fungi." Such a classification is possible due to the successions that are observed between the types of fungi that fruit in or around a burial environment. The progression between the two groups occurs following the release of nitrogenous products from a body in decomposition. 
Early stage fungi are described as being ascomycetes, deuteromycetes and saprophytic basidiomycetes, whereas late stage fungi consist of ectomycorrhizal basidiomycetes. Decomposition microbiology of cadavers and carcasses: Decomposition fungi as PMI estimators Considering that the number of forensic cases in which significant amounts of mycelia are observed is quite high, investigating cadaver-associated mycota may prove valuable to the scientific community, as they have much forensic potential. Decomposition microbiology of cadavers and carcasses: Only one attempt at using fungi as a PMI marker in a forensic case has been published to date. The study reported the presence of two types of fungi (Penicillium and Aspergillus) on a body found in a well in Japan and stated that the PMI could be estimated as approximately ten days based on the known growth cycles of the fungi in question.
**Field sports** Field sports: Field sports are outdoor sports that take place in the wilderness or sparsely populated rural areas, where there are vast areas of uninhabited greenfields. The term specifically refers to activities that mandate sufficiently large open spaces and/or interaction with natural ecosystems, including hiking/canyoning, equestrianism, hawking, archery and shooting, but can also extend to various surface water sports such as river trekking, angling, rowing/paddling, rafting and boating/yachting. Field sports: Field sports are considered nostalgic pastimes, especially among country folk. For example, participants in field sports such as riding and fox hunting in the United Kingdom frequently wear traditional attire (British country clothing) to imitate the landed gentry and aristocracy of the 19th-century English countryside. Types: Hiking, backpacking and camping Cross country/trail running and mountain biking Hillwalking, mountaineering, canyoning and caving Rock climbing, scrambling, rappelling and tree climbing Equestrianism (horse racing, polo, show jumping, dressage, etc.) Falconry Sport hunting (trophy hunting, safari/big game hunting, fowling) Bowhunting Sport fishing (angling, bowfishing, spearfishing, big game fishing) Shooting sport Field shooting (metallic silhouette, long-range, field target, etc.) Clay pigeon Plinking Meat shooting Rook shooting Field archery Rowing/sculling, paddling (canoeing, kayaking, rafting), punting and paddleboarding Environmental issues: Field sports, by definition, involve activities away from typical human settlements, which implies entering natural areas usually devoid of human presence. 
Such encroachments can potentially cause ecological disturbances to wild fauna and flora, including environmental contamination by littered waste (especially non-degradable plastic waste), wildfire risk from campfires and cigarette butts, disruption of groundcovers and topsoil due to trail-making and camping, damage to rocks by anchors used for aid climbing, irresponsible luring and feeding of wild animals, and light and sound pollution that can frequently trigger startle responses and territorial behaviors, leading to animal attacks, nest abandonment, habitat fragmentation and even habitat loss. Environmental issues: Some field sports, especially hunting and fishing, involve the catching and/or killing of wild animals (collectively referred to as "game") for meat, for removing species in conflict with humans (often as volunteered assistance to farmers and landowners), or simply for personal leisure and trophies (i.e. sport hunting or "sporting"). Opponents of such sports consider them controversial, and even immoral, on grounds of animal cruelty (regarding them as blood sports using wildlife), animal welfare (of working animals such as horses and hunting dogs) and environmental protection (concerns over habitat conservation, overexploitation and poaching), especially those involving commercial incentives such as safari big game hunting.
**Vietnamese numerals** Vietnamese numerals: Historically, Vietnamese has two sets of numbers: one is etymologically native Vietnamese; the other uses Sino-Vietnamese vocabulary. In the modern language the native Vietnamese vocabulary is used for both everyday counting and mathematical purposes. The Sino-Vietnamese vocabulary is used only in fixed expressions or in Sino-Vietnamese words, in a similar way that Latin and Greek numerals are used in modern English (e.g., the bi- prefix in bicycle). For numbers up to one million, native Vietnamese terms are most often used, whilst words of mixed Sino-Vietnamese origin and native Vietnamese words are used for units of one million or above. Concept: For non-official purposes prior to the 20th century, Vietnamese had a writing system known as Hán-Nôm. Sino-Vietnamese numbers were written in Chữ Hán and native vocabulary was written in Chữ Nôm. Hence, there are two concurrent systems in Vietnamese nowadays in the romanized script, one for native Vietnamese and one for Sino-Vietnamese. Concept: In the modern Vietnamese writing system, numbers are written as Arabic numerals or in the romanized script Chữ Quốc ngữ (một, hai, ba), each of which formerly had a Chữ Nôm character. Less common for numbers under one million are the numbers of Sino-Vietnamese origin (nhất [1], nhị [2], tam [3]), using Chữ Hán (classical Chinese characters). Chữ Hán and Chữ Nôm have all but become obsolete in the Vietnamese language, the Latin-style reading, writing, and pronunciation of native Vietnamese and Sino-Vietnamese having become widespread instead after France occupied Vietnam. Chữ Hán can still be seen in traditional temples, in traditional literature and on cultural artefacts. The Hán-Nôm Institute resides in Hanoi, Vietnam. Basic figures: The following table is an overview of the basic Vietnamese numeric figures, provided in both the native and Sino-Vietnamese counting systems. 
The form that is highlighted in green is the most widely used for all purposes, whilst the ones highlighted in blue are seen as archaic but may still be in use. There are slight differences between the Hanoi and Saigon dialects of Vietnamese; readings for each are differentiated below. Basic figures: Some other features of Vietnamese numerals include the following: Outside of fixed Sino-Vietnamese expressions, Sino-Vietnamese words are usually used in combination with native Vietnamese words. For instance, "mười triệu" combines native "mười" and Sino-Vietnamese "triệu". Basic figures: Modern Vietnamese separates place values in thousands instead of myriads. For example, "123123123" is recorded in Vietnamese as "một trăm hai mươi ba triệu một trăm hai mươi ba nghìn (ngàn) một trăm hai mươi ba", or '123 million, 123 thousand and 123'. Meanwhile, in Chinese, Japanese and Korean, the same number is rendered as "1億2312萬3123" (1 hundred-million, 2312 ten-thousand and 3123). Basic figures: Sino-Vietnamese numbers are not in frequent use in modern Vietnamese. Sino-Vietnamese numbers such as "vạn/萬" 'ten thousand', "ức/億" 'hundred-thousand' and "triệu/兆" 'million' are used for figures exceeding one thousand, but with the exception of "triệu" are becoming less commonly used. The values of these words increase tenfold with each numeral, 億 being the number for 10⁵, 兆 for 10⁶, et cetera. However, triệu in Vietnamese and 兆 in Modern Chinese now have different values. Other figures: When the number 1 appears after 20 in the unit digit, the pronunciation changes to "mốt". When the number 4 appears after 20 in the unit digit, it is more common to use Sino-Vietnamese "tư/四". When the number 5 appears after 10 in the unit digit, the pronunciation changes to "lăm/𠄻". When "mười" appears after 20, the pronunciation changes to "mươi". 
Ordinal numbers: Vietnamese ordinal numbers are generally preceded by the prefix "thứ-", a Sino-Vietnamese word corresponding to "次-". For the ordinal numbers one and four, the Sino-Vietnamese readings "nhất/一" and "tư/四" are more commonly used; two is occasionally rendered using the Sino-Vietnamese "nhị/二". In all other cases, the native Vietnamese number is used. In formal cases, an ordinal number with the structure "đệ (第) + Sino-Vietnamese number" is used, especially in naming the generations of monarchs, an example being Nữ vương Elizabeth đệ nhị/女王 Elizabeth 第二 (Queen Elizabeth II).
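The unit-digit changes described above (mốt, tư, lăm, mươi) are regular enough to capture in a few lines of code. A minimal sketch for the numbers 0–99 using native Vietnamese readings; the function name is illustrative, and "tư" is applied for 4 after 20 since the article notes it is the more common choice there:

```python
DIGITS = ["không", "một", "hai", "ba", "bốn",
          "năm", "sáu", "bảy", "tám", "chín"]

def native_vietnamese(n):
    """Render 0-99 in native Vietnamese, applying the unit-digit rules:
    1 -> mốt and 4 -> tư after 20, 5 -> lăm after 10, mười -> mươi after 20."""
    if not 0 <= n <= 99:
        raise ValueError("sketch handles 0-99 only")
    if n < 10:
        return DIGITS[n]
    tens, units = divmod(n, 10)
    word = "mười" if tens == 1 else DIGITS[tens] + " mươi"
    if units == 0:
        return word
    if units == 1 and tens >= 2:
        unit_word = "mốt"
    elif units == 4 and tens >= 2:
        unit_word = "tư"
    elif units == 5:
        unit_word = "lăm"
    else:
        unit_word = DIGITS[units]
    return word + " " + unit_word
```

For example, `native_vietnamese(21)` gives "hai mươi mốt" and `native_vietnamese(15)` gives "mười lăm", matching the rules stated above.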
**Sphinx (search engine)** Sphinx (search engine): Sphinx is a full-text search engine that provides text search functionality to client applications. Overview: Sphinx can be used either as a stand-alone server or as a storage engine ("SphinxSE") for the MySQL family of databases. When run as a stand-alone server, Sphinx operates similarly to a DBMS and can communicate with MySQL, MariaDB and PostgreSQL through their native protocols, or with any ODBC-compliant DBMS via ODBC. MariaDB, a fork of MySQL, is distributed with SphinxSE. Overview: SphinxAPI If Sphinx is run as a stand-alone server, it is possible to use SphinxAPI to connect an application to it. Official implementations of the API are available for the PHP, Java, Perl, Ruby and Python languages. Unofficial implementations for other languages, as well as various third-party plugins and modules, are also available. Other data sources can be indexed via pipe in a custom XML format. Overview: SphinxQL The Sphinx search daemon supports the MySQL binary network protocol and can be accessed with the regular MySQL API and/or clients. Sphinx supports a subset of SQL known as SphinxQL. It supports standard querying of all index types with SELECT, modifying RealTime indexes with INSERT, REPLACE, and DELETE, and more. SphinxSE Sphinx can also provide a special storage engine for MariaDB and MySQL databases. This allows MySQL and MariaDB to communicate with Sphinx's searchd to run queries and obtain results. Sphinx indices are treated like regular SQL tables. The SphinxSE storage engine is shipped with MariaDB. Overview: Full-text fields and indexing Sphinx is configured to examine a data set via its Indexer. The Indexer process creates a full-text index (a special data structure that enables quick keyword searches) from the given data/text. Full-text fields are the resulting content that is indexed by Sphinx; they can be (quickly) searched for keywords. 
Fields are named, and you can limit your searches to a single field (e.g. search through "title" only) or a subset of fields (e.g. "title" and "abstract" only). Sphinx's index format generally supports up to 256 fields. Note that the original data is not stored in the Sphinx index but is discarded during the indexing process; Sphinx assumes that you store those contents elsewhere. Overview: Attributes Attributes are additional values associated with each document that can be used to perform additional filtering and sorting during search. Attributes are named, and attribute names are case-insensitive. Attributes are not full-text indexed; they are stored in the index as is. Currently supported attribute types are: unsigned integers (1-bit to 32-bit wide); UNIX timestamps; floating point values (32-bit, IEEE 754 single precision); string ordinals (specially computed integers); strings (since 1.10-beta); JSON (since 2.1.1-beta); MVA, multi-value attributes (variable-length lists of 32-bit unsigned integers). Overview: JSON attributes in Sphinx Sphinx, like classic SQL databases, works with a so-called fixed schema, that is, a set of predefined attribute columns. These work well when most of the stored data actually has values: mapping sparse data to static columns can be cumbersome. Assume for example that you're running a price comparison or an auction site with many different product categories. Some of the attributes, like the price or the vendor, are identical across all goods. But beyond that, for laptops you also need to store the weight, screen size, HDD type, RAM size, etc., and for shovels you probably want to store the color, the handle length, and so on. This is manageable within a single category, but the distinct fields needed for all the goods across all the categories are legion. The JSON field can be used to overcome this. Inside the JSON attribute you don't need a fixed structure. 
You can have various keys which may or may not be present in all documents. When you try to filter on one of these keys, Sphinx will ignore documents that don't have the key in the JSON attribute and will work only with those documents that have it. License: Up until version 3, Sphinx was dual-licensed: either the GNU General Public License version 2, or proprietary licensing available for use cases which are not within the terms of the GNU GPLv2. Since version 3, Sphinx has become proprietary, with a promise to release its source code in the future. Sphinx use examples: Craigslist.org Recruitment.aleph-graymatter.com Tradebit.com vBulletin.com MediaWiki extension Boardreader.com OMBE.com Limundo.com Feature list: Batch and incremental (soft real-time) full-text indexing. Support for non-text attributes (scalars, strings, sets, JSON). Direct indexing of SQL databases. Native support for MySQL, MariaDB, PostgreSQL, MSSQL, plus ODBC connectivity. XML document indexing support. Distributed searching support out-of-the-box. Integration via access APIs. SQL-like syntax support via the MySQL protocol (since 0.9.9). Full-text searching syntax. Database-like result set processing. Relevance ranking utilizing additional factors besides standard BM25. Text processing support for SBCS and UTF-8 encodings, stopwords, indexing of words known not to appear in the database ("hitless"), stemming, word forms, tokenizing exceptions, and "blended characters" (dual-indexing as both a real character and a word separator). Support for UDFs (since 2.0.1). Performance and scalability: Indexing speed of up to 10-15 MB/sec per core and HDD. Searching speed of over 500 queries/sec against a 1,000,000-document/1.2 GB collection using a 2-core desktop system with 2 GB of RAM. The biggest known installation using Sphinx, Boardreader.com, indexes 16 billion documents. The busiest known installation, Craigslist, serves over 300,000,000 queries/day and more than 50 billion page views/month. 
Fork: In 2017, key members of the original Sphinx team created a fork of the project called Manticore. The Manticore team has set itself the following goal: to deliver fast, stable and powerful free software for full-text search. The Manticore team keeps its fork open source, releasing it under the GPLv2 license, in contrast to the original Sphinx search, which closed its source from version 3.
**Combined Online Information System** Combined Online Information System: The Combined Online Information System (COINS) is a database containing HM Treasury's detailed analysis of departmental spending under thousands of category headings. The database contains around 24 million lines of data. The database has codes for more than 1,700 public bodies in the United Kingdom, including central government departments, local authorities, NHS trusts and public corporations. COINS is used by the Office for National Statistics for statistical purposes. The Treasury describes the database as "a web based multi-dimensional database used by HM Treasury to collect financial information". Data from the COINS database is used to prepare the National Accounts. Structure and technical details: The Combined Online Information System or COINS database is one of the biggest datasets in government. COINS uses a database called Camelot. The system is supplied by Descisys. History: COINS replaced three separate systems previously used by the British Government: the Public Expenditure System (PES), the Government Online Data System (GOLD) and the General Expenditure Monitoring System (GEMS). Disclosure: The Treasury turned down requests under the Freedom of Information Act 2000 for data contained in COINS prior to the 2010 general election. After promises during the election campaign to publish the database if elected, the Cameron–Clegg coalition government made all 120 GB of COINS data available in a raw format as of 4 June 2010. The hope is that this will spur third-party organisations to find innovative ways to present this information to the public.
**EFAMRO** EFAMRO: EFAMRO is a federation of national bodies representing the market research profession in Europe. Members: EFAMRO is composed of 16 national bodies: Activities: EFAMRO has three primary roles: To adjudicate on cross-border complaints made against market research organizations through a self-regulatory framework To provide a common voice for national bodies when lobbying at a European or international level To develop and enhance international quality standards for market research (most notably the ISO 20252 quality standard which EFAMRO initiated)EFAMRO co-ordinates these activities with other research bodies globally through its participation in the Global Research Business Network (GRBN), a joint initiative with the Asia Pacific Research Committee (APRC) and the Americas Research Industry Alliance (ARIA). Leadership: EFAMRO is led by an Executive Board overseen by Jan Oostveen (Director General) and Andrew Cannon (President).
**Wide-format printer** Wide-format printer: Wide format printers (large format printers) are generally accepted to be any computer-controlled printing machines (printers) that support a maximum print roll width of between 18 and 100 inches (460 and 2,540 mm). Printers wider than 100 in are considered super-wide or grand format. Wide-format printers are used to print banners, posters, trade show graphics, wallpaper, murals, backlit film (duratrans), vehicle image wraps, electronic circuit schematics, architectural drawings, construction plans, backdrops for theatrical and media sets, and any other large format artwork or signage. Wide-format printers usually employ some variant of inkjet or toner-based technology to produce the printed image, and are more economical than other print methods such as screen printing for most short-run (low quantity) print projects, depending on print size, run length (quantity of prints per single original), and the type of substrate or print medium. Wide-format printers are usually designed for printing onto a roll of print media that feeds incrementally during the print process, rather than onto individual sheets. Technologies: Wide-format printers can be categorized by the type of ink transfer process they employ: Aqueous: Thermal or piezo inkjet printers using an ink known as aqueous or water-based. The term water-based is a generally accepted misnomer: the pigment is held in a non-reactive carrier solution that is sometimes water and other times a substitute liquid, including a soy-based liquid used by Kodak. Aqueous ink generally comes in two varieties, dye and pigment. Dye ink is a high-color, low-UV-resistance variety that offers the widest color gamut. Pigment ink is generally duller in color, requiring more ink to achieve a comparable gamut, but withstands fading from UV rays. Similar in general principle to desktop inkjet printers. Finished prints must be laminated to protect them if they are to be used outdoors. 
Various substrates (media) are available, including canvases, banners, metalized plastic, and cloth. Aqueous technology requires that all materials be properly coated to accept and hold the ink. Technologies: Solvent: This term is used to describe any ink that is not water-based. Piezo inkjet printers whose inks use petroleum or a petroleum by-product, such as an acetone-like carrier liquid. "Eco-solvent" inks usually contain glycol esters or glycol ether esters and are slower drying. The resulting prints are waterproof. Solvent printers may be used to print directly on uncoated vinyl and other media, as well as on rigid substrates such as painted/coated metal, foam board, and PVC. The solvents soften the base material and allow the ink pigments to mechanically latch onto the chemically etched surface; inks from different manufacturers have a different "bite" depending on which solvent carriers they use, which is what makes solvent ink prints more durable than aqueous prints. However, solvent inks give off strong odors or fumes when drying, as the carrier fluid dissipates through heat applied by the printer's platen. There are various levels of solvent ink, ranging from "true or full solvent" through "medium/mild solvent" down to "eco-solvent". The fume and odor levels decrease accordingly, as does the surface etch of the base material. Full to medium/mild solvents require fume extraction to be considered safe in the working environment, while most eco-solvents can be used in an office environment with minimal or tolerable odor levels. Technologies: Dye sublimation: Inks are diffused into special print media to produce continuous-tone prints of photographic quality. UV: Piezo inkjet printers whose inks are UV-curable (they dry when cured with UV light). The resulting prints are waterproof, embossed, and vibrant. Any media material can be used with this technology; polymer-based media are best, and ceramics, glass, metals, and woods are also printed with it.
Pen/plotter: A pen or pens are used to draw on the print substrate. Mainly used for producing CAD drawings. Generally superseded by digital technologies such as Solvent, Aqueous, and UV.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Product activation** Product activation: Product activation is a license validation procedure required by some proprietary software programs. Product activation prevents unlimited free use of copied or replicated software. Unactivated software refuses to fully function until it determines that it is authorized to do so; activation lifts this restriction. An activation can last "forever", or it can have a time limit, requiring a renewal or re-activation for continued use. Implementations: In one form, product activation refers to a method invented by Ric Richardson and patented (U.S. Patent 5,490,216) by Uniloc, where a software application hashes hardware serial numbers and an ID number specific to the product's license (a product key) to generate a unique installation ID. This installation ID is sent to the manufacturer to verify the authenticity of the product key and to ensure that the product key is not being used for multiple installations. Implementations: Alternatively, the software vendor sends the user a unique product serial number. When the user installs the application, it requests that the user enter their product serial number and checks it with the vendor's systems over the Internet. The application obtains the license limits that apply to that user's license, such as a time limit or enabling of product features, from the vendor's system, and optionally also locks the license to the user's system. Once activated, the license continues working on the user's machine with no further communication required with the vendor's systems. Some activation systems also support activation on user systems without Internet connections; a common approach is to exchange encrypted files at an Internet terminal. Implementations: An early example of product activation was in the MS-DOS program D'Bridge Email System written by Chris Irwin, a commercial network system for BBS users and Fidonet.
The program generated a unique serial number and then called the author's BBS via a dialup modem connection. Upon connection, the serial number was validated, and a unique "key" was returned which allowed the program to continue for a trial period. If two D'Bridge systems communicated using the same key, the software deliberately crashed. The software has long since had the entire activation system removed and is now freeware by Nick J. Andre, Ltd. Implementations: Microsoft Microsoft Product Activation was introduced in the Brazilian version of Microsoft Office 97 Small Business Edition and in Microsoft Word 97 sold in the Hungarian market. Microsoft broadened that successful pilot with the release of Microsoft Publisher 98 in the Brazilian market, then rolled out product activation in its flagship Microsoft Office 2000 product. All retail copies sold in Australia, Brazil, China, France, and New Zealand, and some sold in Canada and the United States, required the user to activate the product via the Internet. However, no copies of Office 2000 require activation after April 15, 2003. After its success, the product activation system was extended worldwide and incorporated into Windows XP and Office XP and all subsequent versions of Windows and Office. Despite Microsoft having independently developed its own technology, in April 2009 a jury found Microsoft to have willfully infringed Uniloc's patent. However, in September 2009, US District Judge William Smith "vacated" the jury's verdict and ruled in favour of Microsoft. This ruling was subsequently overturned in 2011. Blocking: Software that has been installed but not activated does not perform its full functions, and/or imposes limits on file size or session time. Some software allows full functionality for a limited "trial" time before requiring activation. Unactivated software typically reminds the user to activate, at program startup or at intervals, and when the imposed size or time limits are reached.
(Some unactivated software has taken disruptive actions such as crashing or vandalism, but this is rare.) Some 'unactivated' products act as a time-limited trial until a product key—a number encoded as a sequence of alphanumeric characters—is purchased and used to activate the software. Some products allow licenses to be transferred from one machine to another using online tools, without having to call technical support to deactivate the copy on the old machine before reactivating it on the new machine. Blocking: Software verifies activation every time it starts up, and sometimes while it is running. Some software even "phones home", checking a central database (across the Internet or by other means) to check whether the specific activation has been revoked. Some software might stop working or reduce functionality if it cannot connect to the central database. Criticisms: Product activation can enforce software license agreement restrictions that may be legally invalid. For example, a company may refuse to reactivate software on an upgraded or new PC, even if the user may have a legal right to use the product under such circumstances. If the company ceases to support a specific product, or goes out of business due to insolvency or consolidation, its purchased product may become unusable or incapable of being (re)installed unless an activation-free copy or a final patch that removes or bypasses activation is released. Criticisms: Product activation where there is no straightforward way to transfer the license to another person to activate on their computer has been widely criticised as making second-hand sales of products, particularly games, very difficult. Some suspect that companies such as EA use product activation to reduce second-hand sales of their games in order to increase sales of new copies.
Criticisms: Because an activation request is usually transferred encrypted or at least obfuscated, the user cannot see or verify whether additional data from his or her machine is transmitted, creating privacy concerns. Malfunction of the activating mechanism can delay users from getting started with newly licensed software. Malfunction of the verification mechanism can cause vital software to suddenly stop working until re-activated or patched. This can happen in response to detected changes to installed hardware, other software, or the operating system.
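The Uniloc-style scheme described under Implementations (hashing hardware serial numbers together with a product key into an installation ID, which the vendor checks against a per-key install limit) can be sketched as follows. This is a minimal illustration, not the patented method: the choice of SHA-256, the names, and the single-install limit are all assumptions.

```python
import hashlib

def installation_id(product_key: str, hardware_serials: list[str]) -> str:
    """Derive a machine-specific installation ID by hashing the product
    key together with hardware serial numbers (illustrative sketch)."""
    digest = hashlib.sha256()
    digest.update(product_key.encode())
    for serial in sorted(hardware_serials):   # sort: order-independent result
        digest.update(serial.encode())
    return digest.hexdigest()[:16]            # short ID sent to the vendor

class ActivationServer:
    """Vendor side: allow each product key on a limited number of machines."""
    def __init__(self, max_installs: int = 1):
        self.max_installs = max_installs
        self.seen: dict[str, set[str]] = {}   # product key -> installation IDs

    def activate(self, product_key: str, install_id: str) -> bool:
        ids = self.seen.setdefault(product_key, set())
        if install_id in ids:                 # re-activation on the same machine
            return True
        if len(ids) >= self.max_installs:     # key already used elsewhere
            return False
        ids.add(install_id)
        return True
```

Because the ID is derived only from the key and the hardware serials, reinstalling on the same machine reproduces the same ID and re-activation succeeds, while a second machine produces a different ID and is refused once the install limit is reached.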
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SLITRK2** SLITRK2: SLIT and NTRK-like protein 2 is a protein that in humans is encoded by the SLITRK2 gene. Function: Members of the SLITRK family, such as SLITRK2, are integral membrane proteins with 2 N-terminal leucine-rich repeat (LRR) domains similar to those of SLIT proteins (see SLIT1; MIM 603742). Most SLITRKs, including SLITRK2, also have C-terminal regions that share homology with neurotrophin receptors (see NTRK1; MIM 191315). SLITRKs are expressed predominantly in neural tissues and have neurite-modulating activity.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Blue sign** Blue sign: A blue sign or blue board is used by inland waterways vessels within the Trans-European Inland Waterway network when performing a special manoeuvre or passing on the starboard side. On navigable waterways, vessels normally pass each other on the port side, so the display of the blue sign and a flashing white light signals the intention to pass on the starboard side. This process is known as blue boarding or, historically, blue flagging. The Code Européen des Voies de la Navigation Intérieure (CEVNI) regulations require upstream vessels operating on the opposite side to display a light-blue sign and a scintillating (flashing) white light. Article 3.03 states that the board must be rectangular and 1 metre × 1 metre for large vessels, or 0.6 metres × 0.6 metres for small vessels. The presence and status of the blue sign is transmitted by the ship's Inland Automatic Identification System (Inland-AIS) transponder to other vessels. The status of the sign is transmitted using two bits of the "regional application flags"/"special manoeuvre field" in the AIS position reports. This must be transmitted every ten seconds.
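As a rough illustration of the two-bit status field mentioned above, the following sketch packs and unpacks a blue-sign status in the low two bits of a flags value. The bit position and the exact status texts are assumptions made for illustration; the value meanings follow the common AIS "special manoeuvre indicator" convention (0 = not available, 1 = not engaged, 2 = engaged).

```python
# Two-bit blue-sign status values; semantics are an illustrative reading
# of the AIS special-manoeuvre convention, not a normative encoding.
BLUE_SIGN_STATUS = {
    0b00: "not available (default)",
    0b01: "not engaged in special manoeuvre (blue sign not set)",
    0b10: "engaged in special manoeuvre (blue sign set)",
    0b11: "reserved / not used",
}

def encode_blue_sign(flags: int, status: int) -> int:
    """Pack a two-bit status into the low bits of a flags field
    (bit position chosen here for illustration only)."""
    assert 0 <= status <= 0b11
    return (flags & ~0b11) | status

def decode_blue_sign(flags: int) -> str:
    """Extract and describe the two-bit blue-sign status."""
    return BLUE_SIGN_STATUS[flags & 0b11]
```

A transponder following this sketch would refresh the encoded field in each position report, matching the ten-second transmission interval described above.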
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Drinking straw** Drinking straw: A drinking straw is a utensil that is intended to carry the contents of a beverage to one's mouth. Straws are commonly made from plastics, but environmental concerns and new regulation have led to a rise in reusable and biodegradable straws. These straws are often made of silicone, cardboard, or metal. A straw is used by placing one end in one's mouth and the other in a beverage. By employing suction, the air pressure in one's mouth drops, causing atmospheric pressure to force the liquid through the straw and into the mouth. Drinking straws can be straight or have an angle-adjustable bellows segment. Drinking straws have historically been intended as a single-use product, and several countries, regions, and municipalities have banned single-use plastic straws to reduce plastic pollution. Additionally, some companies have voluntarily banned or reduced the number of plastic straws distributed from their premises. History: Early examples The first known straws were made by the Sumerians and were used for drinking beer, probably to avoid the solid byproducts of fermentation that sink to the bottom. The oldest drinking straw in existence, found in a Sumerian tomb dated to 3,000 BCE, was a gold tube inlaid with the precious blue stone lapis lazuli. Others claim that metal "sceptres" discovered in Armenia in 1897, dating to the Maykop culture (3700 to 2900 BCE), are the oldest surviving straws. History: Argentines and their neighbors have, for several hundred years, used (for drinking mate) a similar metallic device called a bombilla, which acts as both a straw and a sieve. In the 1800s, the rye grass straw came into fashion because it was cheap and soft, but it had an unfortunate tendency to turn to mush when put in liquid. History: American Marvin C. Stone patented the modern drinking straw, 8 1/2 inches long and made of paper, in 1888, to address the shortcomings of the rye grass straw.
He came upon the idea while drinking a mint julep on a hot day in Washington, D.C.; the taste of the rye grass straw was mixing with the drink and giving it a grassy taste, which he found unsatisfactory. He wound paper around a pencil to make a thin tube, slid out the pencil from one end, and applied glue between the strips. He later refined it by building a machine that would coat the outside of the paper with wax to hold it together, so the glue would not dissolve in bourbon. Early paper straws had a narrow bore, to prevent seeds from clogging them. It was common to use two of them, to reduce the effort needed to take each sip. History: Mass production Plastic straws became widespread following World War II. The materials used in their manufacture were inexpensive, and the types of restaurant fare that they accompanied had become more affordable and popular. In 1930, Otto W. Dieffenbach (Sr.) developed and produced a cellophane drinking straw in Baltimore, MD. His company, known as Glassips Inc., produced straws for restaurants as well as other products. One patent dates to 1954. The senior Mr. Dieffenbach served as chairman until 1972, and the business, by then based in Towson, MD, was sold in 1979. One of the first mass-produced twisted straws was Sip-N-See, invented by Milton Dinhofer, who later came up with the idea and designs for the chimp in the iconic game Barrel of Monkeys. Dinhofer originally patented his straw in the shape of a scissor with two loops on top, but Macy's would not carry the straw unless it had a character on it. They suggested Dinhofer make three straws (eventually patented in 1950): a cowboy, a clown, and an animal, for which he made an elephant. Each of his characters was attached to a looping soft polyethylene straw, and users were to sip from another detachable, small, straight straw of acetate. Rexor Corp. copyrighted the straw the same year, but Macy's decided not to carry them; Dinhofer was told the selling price was too low.
Dinhofer then turned to Woolworth and convinced the chain to let him deliver some to several of their stores near his home. After one weekend of sales, Woolworth's placed an order for all of its stores, and Sip-N-See went national. The straws were sold in individual boxes, and more characters were eventually added. Other buyers began to carry it, too, and it was marketed as an "action drinking toy." Sip-N-See went on to sell approximately six million units, and, a decade later, the s-shape of the arms on the cowboy straw would inspire Dinhofer's monkey design for Barrel of Monkeys. Types: Drinking straws come in many variations and are manufactured using a variety of materials. Types: Plastic The most common form of drinking straw is made of the thermoplastic polymer polypropylene. This plastic is known for its durability, lightness, and ability to be manufactured at a low cost. Other plastic polymers that exhibit these traits include polyethylene (PE) and polyvinyl chloride (PVC). These attributes are what have made the traditional plastic straw ubiquitous in fast food establishments and take-out orders around the world. Additionally, other advantages of plastic straws include their ability to be molded into different shapes and sizes while also being able to withstand a wide range of temperatures without deforming. This is important because straws can be used in both hot and cold beverages, so they must be temperature resistant and thermally insulating. Types: One interesting variation of the plastic straw is the "bendy straw", commonly referred to in the industry as an "articulated straw". This straw has a concertina-type hinge near its top to allow for improved maneuverability of the straw when drinking a beverage, especially from a low angle. The articulated straw was invented by Joseph Friedman in 1937.
He quickly developed the straw after he saw his daughter struggling to use a normal straight straw. Another variation of the plastic straw, the "spoon straw", has a spoon-like tip at the bottom, and is often used with iced slush beverages. "Stir straws" with a relatively short length and quite a narrow bore are often given along with disposable cups for preparing coffee or tea, and serve the primary function of being able to stir in sugar, sweetener, cream, or non-dairy creamer, as well as allowing for sipping a hot beverage. Additionally, boba tea plastic straws with wider openings are commonly used to drink bubble tea, to better accommodate its characteristic tapioca pearls. The tip of these straws is often cut at an angle, creating a point which allows one to use the straw to puncture the plastic cover of bubble tea drinks. Plastic straws can also be embellished, with some forms marketed as "crazy straws" having a number of twists and turns at the top. These straws are often marketed toward young children and can be entertaining for them. The crazy straw was invented by Arthur Philip Gildersleeve and patented in 1936. Types: Reusable Environmental concerns, stemming from the impact plastic waste has had on the ocean, have led to a rise in reusable straws. Reusable straws are primarily being manufactured out of polylactic acid (PLA), silicone, and metal. Polylactic acid and silicone straws are the most similar in texture and feel to their plastic counterparts; however, they fit into the category of biodegradable polymers. These types of straws have some benefits over other more ecologically conscious straws because they are resistant to disintegrating in one's drink and provide adequate insulation for hot and cold drinks. One manufacturer of silicone straws even claims that their straws can be burned into biodegradable ash. Metal and glass straws are other reusable alternatives.
A "vampire straw" is a large metal drinking straw with a pointed tip that allows it to double as a dagger-like weapon. A man was arrested at Boston Logan International Airport after a vampire straw was confiscated from his carry-on luggage. Bamboo straws are making headway into the reusable straw industry with their sustainability, inexpensive cost, and relative ease of cleaning. Types: Single-use Some companies such as Starbucks have moved away from plastic straws. Bamboo straws are sometimes made from the moso bamboo tree (Phyllostachys edulis). Some companies such as McDonald's have switched to paper and paperboard straws. Some innovation companies, such as Drinking-Straw, are trying to introduce alternatives to plastic straws made out of wheat, grass, or reed. Types: Edible Edible straws have been made out of materials like rice, seaweed, rye, and confectioneries (such as candy). Flavor straws are a form of drinking straw with a flavoring included, designed to make drinking milk more pleasant for children. They were first marketed in the United States in 1956 as Flav-R-Straws. Newer variations of the original idea have been resurrected in forms such as Sipahhs and Magic Milk Straws, which contain hundreds of flavored pellets encased within a stiff plastic straw. Environmental impact: Plastic drinking straw production contributes a small amount to petroleum consumption, and the used straws become a small part of global plastic pollution when discarded, most after a single use. Plastic straws are not recyclable and, when not properly disposed of, may continue to pollute various aspects of the environment, including bodies of water and streets, for over 200 years. The image of a plastic straw lodged in the nostril of a sea turtle, filmed by marine biologist Christine Figgener, quickly spread across all forms of media and spurred the elevation of awareness regarding the potential danger of plastic straws for marine life.
The scientist who uploaded the video remarks that it is the emotional pull of the imagery, rather than the significance of the plastic straw itself in the plastic debacle, that garnered such high viewership. Environmental impact: Quantity One anti-straw advocacy group has estimated that about 500 million straws are used daily in the United States alone – an average of 1.6 straws per capita per day. This statistic has been criticized as inaccurate, because it was approximated by Milo Cress, who was nine years old at the time, after surveying straw manufacturers to ask their estimates of the total, which he then averaged. (Further details are unavailable as "being 9, he had not thought to document the process closely.") This figure has been widely cited by major news organizations. Market research firm Freedonia Group estimated the number to be 390 million. Another market research firm, Technomic, estimated the number to be 170 million, although this number excludes some types of straws. Plastic straws amounted to 5–7.5% of all waste collected from beaches during the 2017 International Cleanup Event, conducted by Ocean Conservancy, making them a minor contamination source, yet one considered easy to avoid. In total, they are less than 0.022% of plastic waste emitted to oceans. Environmental impact: Microplastics Microplastic pollution is a concern if plastic waste is improperly dumped. If plastic straws are improperly disposed of, they can be transported via water into soil and other ecosystems, where they break down into smaller, more hazardous pieces than the original plastic straw. Water can break down plastic waste into microplastic and nanoplastic particles. These particles are capable of transmitting harmful substances or can themselves prove dangerous, as they have been shown to negatively affect the surrounding environment.
Environmental impact: Alternatives Alternatives to plastic straws, some reusable, exist, although they are not always readily available or deemed to be of sufficient quality for all users (including, in particular, those with a disability). Paper straws have proliferated as a popular alternative, although they are prone to losing their rigidity when soaked in a beverage, and in some cases are not durable enough for thicker beverages such as milkshakes. Metal straws are more durable, but they are incapable of being bent, can damage teeth or lacerate children (and have even killed adults during falls), and some restaurants have reported them as a target of theft. Some critics have argued that paper and metal alternatives are no more environmentally friendly than plastic, citing the environmental impacts of paper and mining, and that paper straws would likely end up in landfills and not be composted. In August 2019, after deploying paper straws in the United Kingdom, McDonald's stated that its straws could not actually be recycled at present, since their thickness "makes it difficult for them to be processed by our waste solution providers". The chain stated that they went towards energy production, and not to landfills. Polylactic acid (PLA), a biodegradable plastic, requires 69% fewer fossil fuel resources to produce than conventional plastic, but it requires very specific conditions to break down fully. Polyhydroxyalkanoate (PHA), derived from plant oil, is marine biodegradable. In 2021, the manufacturing company Wincup was distributing a PHA product branded as "the Phade straw." As of 2021, several eco-friendly alternative materials have been tried, among them hay straws, bamboo straws, seaweed straws, and straws made from naturally dried fallen coconut leaves. Environmental impact: Greenwashing Not all attempts to be more environmentally friendly are in earnest, though.
In an attempt to artificially boost sales, some groups have been guilty of "greenwashing": falsely marketing their products as a viable environmentally friendly alternative when they are actually just as harmful to the environment, or worse. These marketing tactics draw in well-meaning consumers who believe they are helping the environment (often by paying more for a product), when they are instead encouraging these misleading strategies. To combat this scheme, TerraChoice, an America-based advertising company, crafted a rubric to calculate the amount of greenwashing present in a product. It determined that 95% of the products it surveyed at American and Canadian stores were guilty of at least one act of greenwashing. Plastic straw bans and proposals: In the late 2010s, a movement towards laws banning or otherwise restricting the use of plastic straws and other single-use plastics emerged. Environmental groups have encouraged consumers to object to "forced" inclusion of plastic straws with food service. The movement followed the discovery of plastic particles in oceanic garbage patches and larger plastic waste-reduction efforts that focused on banning plastic bags in some jurisdictions. It has been intensified by viral videos, including one of a plastic straw being removed from a sea turtle's nostril by biologist Nathan J. Robinson, filmed by marine biologist and activist Christine Figgener. Plastic straw bans and proposals: By country Australia A single-use plastic ban was introduced in the state of South Australia in 2020. Fast food chain McDonald's promised to phase out plastic straws throughout Australia by 2020. Brazil On 5 July 2018, the city of Rio de Janeiro became the first state capital of Brazil to forbid the distribution of plastic straws, "forcing restaurants, coffee shops, bars and the like, beach huts and hawkers of the municipality to use and provide to its customers only biodegradable and/or recyclable paper straws individually".
Plastic straw bans and proposals: Canada In May 2018, the Vancouver city council voted in favor of adopting a "Single Use Reduction Strategy", targeting single-use styrofoam containers and plastic straws. The council approved the first phase of the regulations in November 2019, expected to be in place by April 2020, barring the distribution of single-use straws unless requested (with straws on hand required to be bendable for accessibility reasons). Bubble tea shops will be given a one-year exemption. In March 2019, Starbucks announced that it would be debuting strawless lids for cold drinks across Toronto as a part of its global environmental aspirations. In June 2019, in the lead-up to the federal election, Prime Minister Justin Trudeau announced his intent to enact legislation restricting the use of petroleum-based single-use plastics as early as 2021. Plastic straw bans and proposals: European Union In May 2018, the European Union proposed a directive banning a number of single-use plastic items including straws, cotton buds, cutlery, balloon sticks, and drink stirrers, in addition to limiting the use of other single-use plastics and extending producer responsibility. The EU estimated the plan would avoid 3.4 million tons of carbon emissions, save consumers €6.5 billion, and prevent environmental damage that would cost the equivalent of €22 billion by the year 2030. In October 2018, the European Parliament voted to pass the directive with 571 votes for and 53 votes against, and the directive came into effect on July 2, 2021. A specificity of the European rules is that they prohibit all types of straws made of plastic, whether bio-based or compostable. This means that popular straws made of PHA, PBS, or PLA, for example, are prohibited in this territory. It is not always clear whether or not a drinking straw complies with this legislation, so it is recommended that a pyrolysis test be performed to determine its composition.
Plastic straw bans and proposals: Taiwan Single-use plastic straws were banned in government facilities, schools, department stores, shopping malls, and fast food restaurants from 1 July 2019. Plastic straw bans and proposals: United Kingdom The UK government committed up to £4 million to "Plastics innovation: towards zero waste" in the summer of 2017 in an attempt to mitigate the circulation of unnecessary plastic. In this endeavor, eleven projects secured the full amount of government support. These projects each invented new ways to recycle used plastic products and prevent them from reaching landfills. In 2018, Queen Elizabeth II banned all single-use plastic items from her palaces. On 19 April 2018, ahead of Earth Day, a proposal to phase out single-use plastics was announced during the meeting of the Commonwealth Heads of Government. It is estimated that, as of 2018, about 23 million straws are used and discarded daily in the UK. In May 2019, England announced that it would ban single-use plastic straws, stirring sticks, and cotton buds in April 2020: only registered pharmacies will be allowed to sell straws to the public, and restaurants may only offer them at the request of customers. The ban was delayed due to the coronavirus pandemic and came into effect on 1 October 2020. Plastic straw bans and proposals: United States California On 7 November 2017, the city of Santa Cruz, California implemented a ban on all non-recyclable to-go containers, straws, and lids, but allowed 6 months for all businesses to come into compliance before enforcement would occur.
On 1 January 2018, the city of Alameda, California, citing the Santa Cruz effort, implemented an immediate ban on all straws except when requested by a customer, and gave businesses until 1 July 2018, when all straws would be required to be of compostable paper and all other to-go containers to be recyclable. A statewide California law restricting the provision of single-use plastic straws went into effect on 1 January 2019. Under the law, restaurants are only allowed to provide single-use plastic straws upon request. The law applies to sit-down restaurants but exempts fast-food restaurants, delis, coffee shops, and restaurants that do takeout only. The law does not apply to to-go cups and takeaway drinks. A restaurant will receive warnings for its first two violations, then a $25-per-day fine for each subsequent violation, up to a maximum of $300 in a year. In a statement released upon his signing the legislation into law, then-Governor Jerry Brown said, "It is a very small step to make a customer who wants a plastic straw ask for it. And it might make them pause and think again about an alternative. But one thing is clear, we must find ways to reduce and eventually eliminate single-use plastic products." Local regulations have also been passed in Malibu, Davis, and San Luis Obispo, California. Plastic straw bans and proposals: Florida Local regulations have been passed in Miami Beach and Fort Myers, Florida. Maryland A ban on single-use straws has been instituted in Montgomery County, Maryland, going into full effect on December 21, 2021.
Plastic straw bans and proposals: Massachusetts In 2015, Williamstown, Massachusetts banned straws that are not recyclable or compostable as part of its Article 42 polystyrene regulations. In the first half of 2018, three towns in Massachusetts banned petrochemical plastic straws: directly in the case of Provincetown, and as part of broader sustainable food packaging laws in Andover and Brookline. In 2019, Longmeadow, Massachusetts banned plastic straws and polystyrene packaging. Plastic straw bans and proposals: New York A drinking straw ban has been proposed in New York City since May 2018. Under the proposal, businesses are fined if a straw is provided unless requested, fined if no plastic straws are kept available, and fined for other straw-related violations. Washington state The city of Seattle implemented a ban on non-compostable disposable straws on 1 July 2018. Plastic straw bans and proposals: Voluntary conversions After consideration of a ban in the UK, and following a two-month trial of paper straws at a number of outlets there, McDonald's announced in 2018 that it would be switching to paper straws for all locations in the United Kingdom and Ireland, and began testing the switch in U.S. locations in June 2018. A month after the Vancouver ban passed (but before it took effect), Canada's second-largest fast food chain, A&W, announced it would have plastic straws fully phased out by January 2019 in all of its locations. Various independent restaurants have also stopped using plastic straws. Starbucks announced conversion by 2020 to no-straw lids for all cold drinks except frappuccinos, which will be served with straws made from paper or other sustainable materials. Hyatt Hotels announced straws would be provided by request only, starting 1 September 2018. Royal Caribbean plans to offer only paper straws on request by 2019, and IKEA said it would eliminate all single-use plastic items by 2020.
Other conversions include Waitrose, London City Airport, and Burger King UK stores starting September 2018. A few other cruise lines, airlines, beverage companies, and hotels have also made partial or complete reductions, but most companies in those industries had not, as of May 2018. Plastic straw bans and proposals: Opposition to bans Since plastic straws account for only a tiny portion (0.022%) of the plastic waste entering the oceans each year, some pro-environment critics have argued that plastic straw bans are insufficient to address the issue of plastic waste and are mostly symbolic. Full bans on single-use plastic straws have faced opposition from disability rights advocates, who feel that alternative materials are not well-suited for use by those with impaired mobility (caused by conditions such as cerebral palsy and spinal muscular atrophy). Some with neuromuscular disabilities may rely on a plastic straw for its heat resistance and because they are unable to lift a cup. The Americans with Disabilities Act (ADA) has required public places to provide plastic straws in order to ensure that those who need them will be able to access them. In particular, not all people with disabilities are able to wash reusable straws; straws made from inflexible materials cannot be repositioned; paper straws lose their firmness over time when soaked in a beverage; and straws made from hard materials such as metal can cause injuries. Advocates have preferred laws that still allow plastic straws to be offered upon request. The American Legislative Exchange Council (ALEC)—a U.S.
conservative lobbying group against "excessive" regulation—has promoted model state bills which carve fast food and fast casual restaurants out of straw bans (in effect restricting only "sit-down" restaurants), and which preempt municipalities from imposing stricter regulations (with the draft law text stating that such local rules lead to "confusing and varying regulations that could lead to unnecessary increased costs for retail and food establishments to comply with such regulations"). In 2019, the re-election campaign of Republican U.S. president Donald Trump marketed packages of reusable plastic straws branded with Trump's name and colored in the signature red associated with the "Make America Great Again" slogan, as a fundraising stunt. The campaign website promoted them as an alternative to "liberal paper straws". Fiction: In Miguel de Cervantes's novel Don Quixote (1605, 1615), the narrator tells of an innkeeper who, because Don Quixote refuses to remove his makeshift helmet, fashions a drinking straw by hollowing out a reed and pours wine through it, suggesting that Don Quixote was not accustomed to this method of drinking. Nicholson Baker's novel The Mezzanine (1988) includes a detailed discussion of various types of drinking straws experienced by the narrator and their relative merits.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phyllocladane** Phyllocladane: Phyllocladane is a tricyclic diterpane which is generally found in gymnosperm resins. It has the formula C20H34 and a molecular weight of 274.4840. As a biomarker, it can be used to learn about the gymnosperm input into a hydrocarbon deposit, and about the age of the deposit in general. It indicates a terrigenous origin of the source rock. Diterpanes such as phyllocladane are found in source rocks as early as the middle and late Devonian periods, which indicates that any rock containing them must be no older than approximately 360 Ma. Phyllocladane is commonly found in lignite, and like other resinites derived from gymnosperms, is naturally enriched in 13C. This enrichment is a result of the enzymatic pathways used to synthesize the compound. The compound can be identified by GC-MS. A peak of m/z 123 is indicative of tricyclic diterpenoids in general, and phyllocladane in particular is further characterized by strong peaks at m/z 231 and m/z 189. The presence of phyllocladane and its abundance relative to other tricyclic diterpanes can be used to differentiate between various oil fields.
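As a quick illustration of the GC-MS fingerprint described above, the sketch below flags a spectrum as consistent with phyllocladane when all three diagnostic ions are strong. The m/z values come from the text; the spectrum dictionary, the relative-intensity representation, and the 0.2 threshold are hypothetical choices for the example, not analytical practice.

```python
# Illustrative sketch only: check a mass spectrum for the diagnostic ions
# named in the text. The spectrum data and intensity threshold are invented.

DITERPENOID_ION = 123             # m/z peak indicative of tricyclic diterpenoids
PHYLLOCLADANE_IONS = (231, 189)   # strong peaks further characterizing phyllocladane

def consistent_with_phyllocladane(spectrum, threshold=0.2):
    """spectrum maps m/z (int) -> relative intensity in [0, 1]."""
    if spectrum.get(DITERPENOID_ION, 0.0) < threshold:
        return False  # not even diterpenoid-like
    return all(spectrum.get(mz, 0.0) >= threshold for mz in PHYLLOCLADANE_IONS)

# Hypothetical spectrum showing all three diagnostic peaks:
example = {123: 0.90, 189: 0.55, 231: 0.60, 274: 0.10}
print(consistent_with_phyllocladane(example))  # True
```

A spectrum with a strong m/z 123 peak but weak peaks at 231 and 189 would still indicate a tricyclic diterpenoid, just not phyllocladane specifically.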
**Soosõrv opening rule** Soosõrv opening rule: Soosõrv opening rule is a renju opening rule. It was proposed by the Estonian player Ants Soosõrv. Rule details: The sequence of moves implied by the rule is as follows. The first player plays one of the 26 openings. The other player has the right to swap. The white player puts the 4th move anywhere on the board and declares whether there will be 1, 2, 3 or 4 fifth moves offered in the game. The other player has the right to swap. The black player puts as many 5th moves on the board as were declared before. The fifth moves cannot be symmetrical. The white player chooses one of these 5th moves and plays the 6th move. Brief description: This rule gives an average variety of new playable variants in a good number of openings, especially white-oriented ones, but openings that are very strong for black (like 2D, 2I, 4I, 7I, 4D etc.) don't become playable. To solve this problem the Soosõrv-N advancement was proposed and certified by RIF. Advancement: Soosõrv-N opening rule is an advancement of the Soosõrv opening rule. When the white player puts the 4th move and declares the number of fifth moves, the number has to be not less than 1 and not greater than N, instead of the default value of 4 for N in the original Soosõrv opening rule. Depending on the value of N, this rule gives an average to large variety of new playable variants in a growing number of openings. Soosõrv-5 is very close to Taraguchi concerning the number of playable positions. Soosõrv-8 makes all 26 renju openings available. Tournaments played by this rule: Soosõrv opening rule was the official opening rule for the European Renju Championship in 2008 and a couple of minor international tournaments. Soosõrv-N opening rule was certified by the Renju International Federation in 2011 after a proposal from the Russian Renju Association. In 2015, the Soosõrv-8 opening rule was chosen as the opening rule for the Renju World Championship from 2017 onward.
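The declaration step of the rule above can be modeled in a few lines. This is a minimal sketch of only the counting constraint: the (row, col) board representation is hypothetical, and the non-symmetry requirement on the offered 5th moves is deliberately not checked.

```python
# Sketch of the Soosõrv-N declaration constraint (symmetry check omitted).
# Moves are represented as hypothetical (row, col) board points.

def valid_fifth_move_offer(declared, offered_moves, n=4):
    """White's declared count must lie in 1..N (N = 4 reproduces the original
    Soosõrv rule); Black must then offer exactly that many distinct 5th moves."""
    if not (1 <= declared <= n):
        return False
    return len(set(offered_moves)) == declared

print(valid_fifth_move_offer(3, [(7, 7), (7, 8), (8, 7)]))  # True
print(valid_fifth_move_offer(5, [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]))  # False: exceeds N=4
```

With n=8 the second call would succeed, mirroring how Soosõrv-8 opens up declarations the original rule forbids.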
**Dental pharmacology** Dental pharmacology: Dental pharmacology is the study of drugs used to treat conditions of the oral cavity. Some of these drugs include antibiotics, analgesics, anti-inflammatory drugs and anti-periodontitis agents.
**Orbit (dynamics)** Orbit (dynamics): In mathematics, specifically in the study of dynamical systems, an orbit is a collection of points related by the evolution function of the dynamical system. It can be understood as the subset of phase space covered by the trajectory of the dynamical system under a particular set of initial conditions, as the system evolves. As a phase space trajectory is uniquely determined for any given set of phase space coordinates, it is not possible for different orbits to intersect in phase space; therefore the set of all orbits of a dynamical system is a partition of the phase space. Understanding the properties of orbits by using topological methods is one of the objectives of the modern theory of dynamical systems. For discrete-time dynamical systems, the orbits are sequences; for real dynamical systems, the orbits are curves; and for holomorphic dynamical systems, the orbits are Riemann surfaces. Definition: Given a dynamical system (T, M, Φ) with T a group, M a set and Φ the evolution function Φ : U → M, where U ⊂ T × M and Φ(0,x) = x, we define I(x) := {t ∈ T : (t,x) ∈ U}; then the set γx := {Φ(t,x) : t ∈ I(x)} ⊂ M is called the orbit through x. An orbit which consists of a single point is called a constant orbit. A non-constant orbit is called closed or periodic if there exists a t ≠ 0 in I(x) such that Φ(t,x) = x. Real dynamical system Given a real dynamical system (R, M, Φ), I(x) is an open interval in the real numbers, that is I(x) = (tx−, tx+). For any x in M, γx+ := {Φ(t,x) : t ∈ (0, tx+)} is called the positive semi-orbit through x, and γx− := {Φ(t,x) : t ∈ (tx−, 0)} is called the negative semi-orbit through x.
Definition: Discrete time dynamical system For a discrete time dynamical system: the forward orbit of x is the set γx+ := {Φ(t,x) : t ≥ 0}; the backward orbit of x is the set γx− := {Φ(−t,x) : t ≥ 0}; and the orbit of x is the set γx := γx− ∪ γx+, where Φ is an evolution function Φ : X → X, here an iterated function; the set X is the dynamical space; t is the iteration number, a natural number with t ∈ T; and x is the initial state of the system, with x ∈ X. Usually a different notation is used: Φ(t,x) is written as Φt(x), and xt = Φt(x), where x0 is the x in the above notation. Definition: General dynamical system For a general dynamical system, especially in homogeneous dynamics, when one has a "nice" group G acting on a probability space X in a measure-preserving way, an orbit G.x ⊂ X will be called periodic (or equivalently, closed) if the stabilizer StabG(x) is a lattice inside G. In addition, a related term is a bounded orbit, when the set G.x is pre-compact inside X. The classification of orbits can lead to interesting questions with relations to other mathematical areas; for example, the Oppenheim conjecture (proved by Margulis) and the Littlewood conjecture (partially proved by Lindenstrauss) deal with the question of whether every bounded orbit of some natural action on the homogeneous space SL3(R)∖SL3(Z) is indeed periodic; this observation is due to Raghunathan, and in different language due to Cassels and Swinnerton-Dyer. Such questions are intimately related to deep measure-classification theorems. Definition: Notes It is often the case that the evolution function can be understood to compose the elements of a group, in which case the group-theoretic orbits of the group action are the same thing as the dynamical orbits. Examples: The orbit of an equilibrium point is a constant orbit. Stability of orbits: A basic classification of orbits is: constant orbits, or fixed points; periodic orbits; and non-constant, non-periodic orbits. An orbit can fail to be closed in two ways.
It could be an asymptotically periodic orbit if it converges to a periodic orbit. Such orbits are not closed because they never truly repeat, but they become arbitrarily close to a repeating orbit. An orbit can also be chaotic. These orbits come arbitrarily close to the initial point, but fail to ever converge to a periodic orbit. They exhibit sensitive dependence on initial conditions, meaning that small differences in the initial value will cause large differences in future points of the orbit. There are other properties of orbits that allow for different classifications. An orbit can be hyperbolic if nearby points approach or diverge from the orbit exponentially fast.
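The discrete-time definitions above are easy to illustrate concretely: the forward orbit of x is the sequence x, Φ(x), Φ²(x), and so on. The doubling map used below is a standard example map (a choice of this sketch, not taken from the text); exact rational arithmetic avoids floating-point drift so the periodicity is visible.

```python
# Forward orbit of a discrete-time dynamical system, per the definition above:
# gamma_plus(x) = {Phi^t(x) : t >= 0}, with Phi an iterated function on X.
from fractions import Fraction

def forward_orbit(phi, x0, steps):
    """Return the finite initial segment x_0, x_1, ..., x_steps of the orbit."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(phi(orbit[-1]))
    return orbit

def doubling(x):
    return (2 * x) % 1  # the doubling map on the unit interval

# x0 = 1/5 lies on a periodic (closed) orbit: 1/5 -> 2/5 -> 4/5 -> 3/5 -> 1/5
print(forward_orbit(doubling, Fraction(1, 5), 4))
```

An irrational starting point under the same map would instead produce a non-periodic orbit, the chaotic case described above.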
**Gnog** Gnog: GNOG is a 2017 puzzle video game, developed by KO_OP and published by Double Fine Presents for PlayStation 4, iOS, Microsoft Windows, and macOS. Development: GNOG was developed by Montreal-based studio KO_OP. Originally titled "GNAH", the title was changed due to a trademark dispute. The game was showcased at the 2014 E3 "Horizon conference". It was shown at E3 2015 with a playable demo.It was released on the PlayStation 4 on May 2, 2017, and on iOS on November 28 the same year. Later, on July 17, 2018, it became available via Steam on Windows and macOS as well. Reception: GNOG received generally positive reviews from video game critics.
**Harrington–Hollingsworth experiment** Harrington–Hollingsworth experiment: The Harrington–Hollingsworth experiment was an experiment that established the autoimmune nature of the blood disorder immune thrombocytopenic purpura. It was performed in 1950 by the academic staff of Barnes-Jewish Hospital in St. Louis, Missouri. Experiment: The experiment was undertaken in 1950 by William J. Harrington and James W. Hollingsworth, who postulated that in patients with idiopathic thrombocytopenic purpura (ITP), it was a blood factor that caused the destruction of platelets. To test this hypothesis, Harrington received 500 ml of blood from a patient with ITP. Within three hours, his platelets dropped to dangerously low levels and he experienced a seizure. His platelet count remained extremely low for four days, finally returning to normal levels by the fifth day. A bone marrow biopsy from Harrington's sternum demonstrated normal megakaryocytes, the cells necessary for platelet production. Subsequently, the experiment was repeated on all suitable staff members at the Barnes-Jewish Hospital. All subjects developed low platelet counts within three hours, and all recovered after a period of several days. Implications: Schwartz notes that the Harrington–Hollingsworth experiment was a turning point in the understanding of ITP's pathophysiology: The Harrington–Hollingsworth experiment changed the meaning of the "I" in ITP from idiopathic to immune, but "immune" in this case means "autoimmune," because the antibodies bind to and cause the destruction of the patient's own platelets. Implications: The experiment was the first to demonstrate that infusion of an ITP patient's plasma into a normal patient caused a precipitous drop in platelet count. This suggested that low platelet counts (thrombocytopenia) in patients with ITP were caused by a circulating factor found in the blood.
Many studies performed since then have demonstrated that this circulating factor is in fact a collection of immunoglobulins. Implications: Many physician-scientists believe the findings had a major influence on the field of autoimmunity, which was not universally accepted at the time as a mechanism of human disease.
**Von Braun amide degradation** Von Braun amide degradation: The von Braun amide degradation is the chemical reaction of a monosubstituted amide with phosphorus pentachloride or thionyl chloride to give a nitrile and an organohalide. It is named after Julius Jacob von Braun, who first reported the reaction. Reaction mechanism: The secondary amide 1 reacts via its enolized form with phosphorus pentachloride to form the oxonium ion 2. This produces a chloride ion, which deprotonates the oxonium ion to form an imine 3 and hydrogen chloride. These then react with one another to form an amine, with loss of the phosphorus chloride residue. The β-chloroimine 4 is unstable and undergoes internal elimination to form a nitrilium cation 5, which is cleaved by attack of chloride to form a nitrile 6a and a haloalkane 6b.
**Protegrin** Protegrin: Protegrins are small peptides containing 16-18 amino acid residues. Protegrins were first discovered in porcine leukocytes and were found to have antimicrobial activity against bacteria, fungi, and some enveloped viruses. The amino acid composition of protegrins contains six positively charged arginine residues and four cysteine residues. Their secondary structure is classified as cysteine-rich β-sheet antimicrobial peptides (AMPs) that display limited sequence similarity to certain defensins and tachyplesins. In solution, the peptides fold to form an anti-parallel β-strand, with the structure stabilized by two cysteine bridges formed among the four cysteine residues. Recent studies suggest that protegrins can bind to lipopolysaccharide, a property that may help them to insert into the membranes of gram-negative bacteria and permeabilize them. Structure: There are five known porcine protegrins, PG-1 to PG-5. Three were identified biochemically and the rest were deduced from DNA sequences. Structure: The protegrins are synthesized from quadripartite genes as 147 to 149 amino acid precursors with a cathelin-like propiece. The protegrin sequence is similar to certain prodefensins and to the tachyplesins, antibiotic peptides derived from the horseshoe crab. Protegrin-1, which consists of 18 amino acids, six of which are arginine residues, forms two antiparallel β-sheets with a β-turn. Protegrin-2 is missing two carboxy-terminal amino acids, so it is shorter than protegrin-1 and has one less positive charge. Protegrin-3 substitutes a glycine for an arginine at position 4 and also has one less positive charge. Protegrin-4 substitutes a phenylalanine for a valine at position 14, and its sequence differs in the β-turn; this makes protegrin-4 less polar than the others and less positively charged. Protegrin-5 substitutes a proline for an arginine, with one less positive charge.
Mechanism of action: Protegrin-1 induces membrane disruption by forming a pore/channel that leads to cell death. This ability depends on its secondary structure. It forms an oligomeric structure in the membrane that creates a pore. Two ways of self-association of protegrin-1 into a dimeric β-sheet have been suggested: an antiparallel β-sheet with a turn-next-to-tail association, or a parallel β-sheet with a turn-next-to-turn association. The activity can be restored by stabilizing the peptide structure with the two disulfide bonds. The interaction with membranes depends on membrane lipid composition, and the cationic, amphipathic character of protegrin-1 underlies its membrane interaction. The insertion of protegrin-1 into the lipid layer disorders lipid packing, leading to membrane disruption. Antimicrobial activity: The protegrins are highly microbicidal against Candida albicans, Escherichia coli, Listeria monocytogenes, Neisseria gonorrhoeae, and the virions of the human immunodeficiency virus in vitro under conditions which mimic the tonicity of the extracellular milieu. The mechanism of this microbicidal activity is believed to involve membrane disruption, similar to many other antibiotic peptides. Mimetics as antibiotics: Protegrin-1 (PG-1) peptidomimetics developed by Polyphor AG and the University of Zurich are based on the use of the beta hairpin-stabilizing D-Pro-L-Pro template, which promotes the beta hairpin loop structure found in PG-1. Fully synthetic cyclic peptide libraries built on this peptidomimetic template produced compounds that had antimicrobial activity like that of PG-1 but with reduced hemolytic activity on human red blood cells. Iterative rounds of synthesis and optimization led to the pseudomonas-specific clinical candidate murepavadin, which successfully completed phase-II clinical tests in hospital patients with life-threatening Pseudomonas lung infections.
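The composition stated above for protegrin-1 (18 residues, six arginines, four cysteines) can be checked mechanically. The literal sequence string used below is the commonly cited PG-1 sequence; it is an assumption of this sketch, not given in the text.

```python
# Quick composition check against the figures stated in the article.
# The sequence is the commonly cited PG-1 sequence (assumption, not from the text).

PG1 = "RGGRLCYCRRRFCVCVGR"

print(len(PG1))        # 18 residues
print(PG1.count("R"))  # 6 arginines -> six positive charges
print(PG1.count("C"))  # 4 cysteines -> two stabilizing disulfide bridges
```

The same counting applied to a sequence missing two C-terminal residues, or with one arginine substituted, reproduces the "one less positive charge" comparisons made for PG-2 through PG-5.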
**Naomi Leonard** Naomi Leonard: Naomi Ehrich Leonard is the Edwin S. Wilsey Professor of Mechanical and Aerospace Engineering at Princeton University. She is the director of the Princeton Council on Science and Technology and an associated faculty member in the Program in Applied & Computational Mathematics, the Princeton Neuroscience Institute, and the Program in Quantitative and Computational Biology. She is the founding editor of the Annual Review of Control, Robotics, and Autonomous Systems. Life: Leonard graduated from Princeton University with a B.S.E. degree in mechanical engineering in 1985. From 1985 to 1989, she worked in the electric power industry. She graduated from the University of Maryland with an M.S. in 1991 and a Ph.D. in 1994, both in electrical engineering, under the supervision of P. S. Krishnaprasad. She joined Princeton's faculty as an assistant professor of Mechanical and Aerospace Engineering in 1994. Research: Leonard's research is in the area of dynamics and control theory. Her early work involved the development of "energy-shaping" methods of feedback control for single vehicles. It has applications to the control theory of more general mechanical systems. Research: She later expanded her work to the control of multi-agent systems, with an emphasis on collective sensing, decision-making, and motion. Her work includes the study of multi-agent systems in nature and the application of insights from nature to man-made systems. Many of Leonard's projects have involved the control of aquatic vehicles. She operates the underwater robotic tank lab at Princeton. She has worked for a number of years with the Autonomous Ocean Sampling Network. In 2006, she led the Adaptive Sampling and Prediction project, which used 10 underwater vehicles to form an automated and adaptive ocean observing system in Monterey Bay. In developing algorithms for robot control, she integrates physics and fluid mechanics with research about uncertainty and collective decision-making.
She draws upon nature for her models, studying the animal flocking behavior of fish, honeybees, and birds. Her autonomous robotic swarms mimic schools of fish and are used to collect data and explore their marine environment. Awards: 1995 National Science Foundation CAREER Award 2004 MacArthur Fellows Program 2007 IEEE Fellow 2011 ASME Fellow 2012 Fellow of the Society for Industrial and Applied Mathematics 2014 Fellow of the International Federation of Automatic Control
**Pregnanediol** Pregnanediol: Pregnanediol, or 5β-pregnane-3α,20α-diol, is an inactive metabolic product of progesterone. A test can be done to measure the amount of pregnanediol in urine, which offers an indirect way to measure progesterone levels in the body. From the urine of pregnant women attending London clinics, Guy Frederic Marrian isolated a substance that contained two hydroxyl groups and could be converted into a diacetate with acetic anhydride; however, its structure had not yet been clearly established. Almost at the same time, Adolf Butenandt at the Chemical University Laboratory in Göttingen investigated the constituents of pregnancy urine and clarified the structure of the diol. The name pregnanediol, coined by Butenandt, is derived from the Latin praegnans (pregnant), or the English pregnant and pregnancy. This gave rise to the name pregnane for the underlying parent hydrocarbon. In 1936, Venning and Browne demonstrated the presence of pregnanediol, specifically the glucuronide of pregnanediol, in pregnancy urine. Their study extracted pregnanediol from pregnancy urine and revealed that the pregnanediol concentration in urine indicates the amount of progesterone excreted. Since progesterone levels indicate the functionality of a corpus luteum, and pregnanediol represents 40-45% of the progesterone excreted, estimations of pregnanediol reveal the functionality of a corpus luteum. However, pregnanediol concentrations vary with menstrual cycle phases, so it is essential to consider the menstrual cycle phase when examining them. Furthermore, current research has demonstrated that pregnanediol concentration in urine is also a measure of ovarian activity.
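The 40-45% relationship described above makes the back-calculation a one-liner: a measured urinary pregnanediol amount brackets the estimated progesterone excretion. The sketch below illustrates only that arithmetic; the 4.0 mg figure is a hypothetical example value, not data from the text.

```python
# Back-of-envelope sketch: pregnanediol is roughly 40-45% of the progesterone
# excreted, so progesterone = pregnanediol / fraction. Example value is invented.

def progesterone_estimate_mg(pregnanediol_mg, low=0.40, high=0.45):
    """Return (min, max) estimated progesterone excretion in mg."""
    return pregnanediol_mg / high, pregnanediol_mg / low

lo, hi = progesterone_estimate_mg(4.0)  # hypothetical 4.0 mg pregnanediol measured
print(f"{lo:.1f}-{hi:.1f} mg")          # 8.9-10.0 mg
```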
**Distributed.net** Distributed.net: Distributed.net is a volunteer computing effort that is attempting to solve large scale problems using otherwise idle CPU or GPU time. It is governed by Distributed Computing Technologies, Incorporated (DCTI), a non-profit organization under U.S. tax code 501(c)(3). Distributed.net: Distributed.net is working on RC5-72 (breaking RC5 with a 72-bit key). The RC5-72 project is on pace to exhaust the keyspace in just under 47 years, although the project will end whenever the required key is found. RC5 has eight unsolved challenges from RSA Security, although in May 2007 RSA Security announced that they would no longer be providing prize money for a correct key to any of their secret key challenges. As a result, distributed.net decided to sponsor the original prize offer for finding the key. In 2001, distributed.net was estimated to have a throughput of over 30 TFLOPS. As of August 2019, the throughput was estimated to be the same as a Cray XC40, as used in the Lonestar 5 supercomputer, or around 1.25 petaFLOPS. History: A coordinated effort was started in February 1997 by Earle Ady and Christopher G. Stach II of Hotjobs.com and New Media Labs, as an effort to break the RC5-56 portion of the RSA Secret-Key Challenge, a 56-bit encryption algorithm that had a $10,000 USD prize available to anyone who could find the key. Unfortunately, this initial effort had to be suspended as the result of SYN flood attacks by participants upon the server. A new independent effort, named distributed.net, was coordinated by Jeffrey A. Lawson, Adam L. Beberg, and David C. McNett along with several others who would serve on the board and operate infrastructure. By late March 1997, new proxies were released to resume RC5-56 and work began on enhanced clients. A cow head was selected as the icon of the application and the project's mascot. The RC5-56 challenge was solved on October 19, 1997 after 250 days.
The correct key was "0x532B744CC20999" and the plaintext message read "The unknown message is: It's time to move to a longer key length". The RC5-64 challenge was solved on July 14, 2002 after 1,757 days. The correct key was "0x63DE7DC154F4D039" and the plaintext message read "The unknown message is: Some things are better left unread". The searches for OGRs of order 24, 25, 26, 27 and 28 were completed by distributed.net on 13 October 2004, 25 October 2008, 24 February 2009, 19 February 2014, and 23 November 2022 respectively. Client: "DNETC" is the file name of the software application which users run to participate in any active distributed.net project. It is a command line program with an interface to configure it, available for a wide variety of platforms. distributed.net refers to the software application simply as the "client". As of April 2019, volunteers running 32-bit Windows with ATI/AMD Stream-enabled GPUs have contributed the most processing power to the RC5-72 project, and volunteers running 64-bit Linux have contributed the most processing power to the OGR-28 project. Portions of the source code for the client are publicly available, although users are not permitted to distribute modified versions themselves. Distributed.net's RC5-72 project is available on the BOINC client through the Moo! Wrapper. Development of GPU-enabled clients: In recent years, most of the work on the RC5-72 project has been submitted by clients that run on the GPU of modern graphics cards. Although the project had already been underway for almost 6 years when the first GPUs began submitting results, as of May 2023, GPUs represent 86% of all completed work units, and complete more than 93% of all work units each day. Development of GPU-enabled clients: NVIDIA In late 2007, work began on the implementation of new RC5-72 cores designed to run on NVIDIA CUDA-enabled hardware, with the first completed work units reported in November 2008.
On high-end NVIDIA video cards at the time, upwards of 600 million keys/second was observed. For comparison, a 2008-era high-end single CPU working on RC5-72 achieved about 50 million keys/second, making the GPU clients a very significant advancement for RC5-72. As of May 2023, CUDA clients have completed 11% of all work on the RC5-72 project. ATI Similarly, near the end of 2008, work began on the implementation of new RC5-72 cores designed to run on ATI Stream-enabled hardware. Some of the products in the Radeon HD 5000 and 6000 series provided key rates in excess of 1.8 billion keys/second. As of May 2023, Stream clients have completed nearly 28% of all work on the RC5-72 project. Daily production from Stream clients has dropped below 0.5%, as the majority of AMD GPU contributors now use the OpenCL client. OpenCL An OpenCL client entered beta testing in late 2012 and was released in 2013. As of May 2023, OpenCL clients have completed more than 47% of all work on the RC5-72 project. No breakdown of OpenCL production by GPU manufacturer exists, as AMD, NVIDIA, and Intel GPUs all support OpenCL. Timeline of distributed.net projects: Current RSA Lab's 72-bit RC5 Encryption Challenge — In progress, 10.413% complete as of 28 July 2023 (although RSA Labs has discontinued sponsorship). Cryptography RSA Lab's 56-bit RC5 Encryption Challenge — Completed 19 October 1997 (after 250 days and 47% of the key space tested). Timeline of distributed.net projects: RSA Lab's 56-bit DES-II-1 Encryption Challenge — Completed 23 February 1998 (after 39 days). RSA Lab's 56-bit DES-II-2 Encryption Challenge — Ended 15 July 1998 (found independently by the EFF DES cracker after 2.5 days). RSA Lab's 56-bit DES-III Encryption Challenge — Completed 19 January 1999 (after 22.5 hours with the help of the EFF DES cracker). CS-Cipher Challenge — Completed 16 January 2000 (after 60 days and 98% of the key space tested).
Timeline of distributed.net projects: RSA Lab's 64-bit RC5 Encryption Challenge — Completed 14 July 2002 (after 1726 days and 83% of the key space tested). Golomb rulers Optimal Golomb Rulers (OGR-24) — Completed 13 October 2004 (after 1552 days, confirmed predicted best ruler). Optimal Golomb Rulers (OGR-25) — Completed 24 October 2008 (after 3006 days, confirmed predicted best ruler). Optimal Golomb Rulers (OGR-26) — Completed 24 February 2009 (after 121 days, confirmed predicted best ruler). Optimal Golomb Rulers (OGR-27) — Completed 19 February 2014 (after 1822 days, confirmed predicted best ruler). Optimal Golomb Rulers (OGR-28) — Completed 23 November 2022 (after 3199 days, confirmed predicted best ruler).
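The "just under 47 years" pace quoted above is simple arithmetic: remaining keys divided by the aggregate key rate. The sketch below shows that calculation; the 3.2 trillion keys/second figure is a hypothetical sustained rate chosen for illustration, not a number given in the text.

```python
# Rough arithmetic behind "on pace to exhaust the keyspace in just under 47
# years". The aggregate key rate used here is a hypothetical example value.

KEYSPACE = 2 ** 72                     # total number of RC5-72 keys
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(keys_per_second, fraction_done=0.0):
    remaining = KEYSPACE * (1.0 - fraction_done)
    return remaining / keys_per_second / SECONDS_PER_YEAR

print(round(years_to_exhaust(3.2e12), 1))  # 46.8, consistent with "just under 47"
```

Passing fraction_done=0.10413 would shorten the estimate in proportion to the progress figure reported for July 2023.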
**Jackie Chan J-Mat Fitness** Jackie Chan J-Mat Fitness: The Jackie Chan J-Mat Fitness is a mat-based video game that requires a XaviXPORT console to operate. This 2005 game, similar to the later-released Nintendo game Wii Fit, is designed to make players exercise. Players control Jackie Chan in a variety of modes such as reflex mode, running and exercising (which is played in a similar style to Dance Dance Revolution games). Articles: Kotaku - "Jackie Chan Kind of Invented Wii Fit" Siliconera - "Jackie Chan's Take on Wii Fit"
**Shell game** Shell game: The shell game (also known as thimblerig, three shells and a pea, the old army game) is often portrayed as a gambling game, but in reality, when a wager for money is made, it is almost always a confidence trick used to perpetrate fraud. In confidence trick slang, this swindle is referred to as a short-con because it is quick and easy to pull off. The shell game is related to the cups and balls conjuring trick, which is performed purely for entertainment purposes without any purported gambling element. Play: In the shell game, three or more identical containers (which may be cups, shells, bottle caps, or anything else) are placed face-down on a surface. A small ball is placed beneath one of these containers so that it cannot be seen, and they are then shuffled by the operator in plain view. One or more players are invited to bet on which container holds the ball – typically, the operator offers to double the player's stake if they guess right. Where the game is played honestly, the operator can win if he shuffles the containers in a way which the player cannot follow. In practice, however, the shell game is notorious for its use by confidence tricksters who will typically rig the game using sleight of hand to move or hide the ball during play and replace it as required. Fraudulent shell games are also known for the use of psychological tricks to convince potential players of the legitimacy of the game – for example, by using shills or by allowing a player to win a few times before beginning the scam. History: The shell game dates back at least to Ancient Greece. It can be seen in several paintings of the European Middle Ages. Later, walnut shells were used, and today the use of bottle caps or matchboxes is common. The game has also been called "thimblerig" as it could be played using sewing thimbles.
The first recorded use of the term "thimblerig" is in 1826. The swindle became very popular throughout the nineteenth century, and games were often set up in or around traveling fairs. A thimblerig team (comprising operator and confederates) was depicted in William Powell Frith's 1858 painting, The Derby Day. In his 1888 My Autobiography and Reminiscences, Frith also leaves an account of his encounter with a thimble-rig team (operator and accomplices). Fear of jail and the need to find new "flats" (victims) kept these "sharps" (shell men or "operators") traveling from one town to the next, never staying in one place very long. One of the most infamous confidence men of the nineteenth century, Jefferson Randolph Smith, known as Soapy Smith, led organized gangs of shell men throughout the mid-western United States, and later in Alaska. History: Today, the game is still being played for money in many major cities around the world, usually at locations with a high tourist concentration (for example: La Rambla in Barcelona, Gran Via in Madrid, Westminster Bridge in London, Kurfürstendamm in Berlin, Bahnhofsviertel in Frankfurt am Main and public spaces in Paris, Buenos Aires, Benidorm, New York City, Chicago, and Los Angeles). The swindle is classified as a confidence trick game, and is illegal to play for money in most countries. The game also inspired a pricing game on the game show The Price Is Right, in which contestants attempt to win a larger prize by pricing smaller prizes to earn attempts at finding a ball hidden under one of four shells designed to resemble walnut shells. While the ball is not shown during the game, and the host shuffles the shells before the start of the game, contestants can win either by winning all four attempts or by winning enough attempts (using large "chips" to mark the shells) and picking the shell that has the ball.
Shuffling is allowed only before the pricing part of the game begins; once the first small prize is announced, no further shuffling is permitted. Federal game show regulations are designed to ensure that the game can legitimately be won.
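The honest play described above is easy to simulate: with three cups and a shuffle the player cannot follow, a blind guess wins about one time in three, so an operator who pays only double the stake keeps an edge even without sleight of hand. A minimal sketch (the cup count, seed, and round count are illustrative, not from the article):

```python
import random

def play_round(rng):
    """One honest round: ball under one of three cups, fair shuffle,
    blind guess by a player who could not track the shuffle."""
    cups = [True, False, False]   # True marks the cup hiding the ball
    rng.shuffle(cups)             # shuffled in plain view
    return cups[rng.randrange(3)]

rng = random.Random(0)
rounds = 30000
wins = sum(play_round(rng) for _ in range(rounds))
print(wins / rounds)  # close to 1/3
```

At a win rate near 1/3 and a 2x payout, the player's expected return per unit staked is about (1/3)(+1) + (2/3)(−1) = −1/3, which is why even the honest game favours the operator.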
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Proof-carrying code** Proof-carrying code: Proof-carrying code (PCC) is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. The host system can quickly verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety (i.e. preventing issues like buffer overflows). Proof-carrying code: Proof-carrying code was originally described in 1996 by George Necula and Peter Lee. Packet filter example: The original publication on proof-carrying code in 1996 used packet filters as an example: a user-mode application hands a function written in machine code to the kernel that determines whether or not an application is interested in processing a particular network packet. Because the packet filter runs in kernel mode, it could compromise the integrity of the system if it contains malicious code that writes to kernel data structures. Traditional approaches to this problem include interpreting a domain-specific language for packet filtering, inserting checks on each memory access (software fault isolation), and writing the filter in a high-level language which is compiled by the kernel before it is run. These approaches have performance disadvantages for code as frequently run as a packet filter, except for the in-kernel compilation approach, which only compiles the code when it is loaded, not every time it is executed. Packet filter example: With proof-carrying code, the kernel publishes a security policy specifying properties that any packet filter must obey: for example, will not access memory outside of the packet and its scratch memory area. A theorem prover is used to show that the machine code satisfies this policy. 
The steps of this proof are recorded and attached to the machine code which is given to the kernel program loader. The program loader can then rapidly validate the proof, allowing it to thereafter run the machine code without any additional checks. If a malicious party modifies either the machine code or the proof, the resulting proof-carrying code is either invalid or harmless (still satisfies the security policy).
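The loader-side check described above can be illustrated with a toy sketch. This is not Necula and Lee's actual logic-based system: here the hypothetical "proof" is simply a per-instruction claim that each memory access is in bounds, and the loader's job reduces to cheaply re-checking each claim against the policy rather than re-deriving it with a theorem prover.

```python
# Toy analogue of proof-carrying code: the code is a list of
# (opcode, operand) pairs, and the accompanying "proof" is a list of
# per-instruction claims. The security policy is that every LOAD must
# stay inside the packet buffer.
PACKET_SIZE = 64  # policy: only offsets 0..63 may be read

def check(code, proof):
    """Cheaply validate that the proof covers every instruction and
    that each claimed offset satisfies the policy."""
    if len(proof) != len(code):
        return False
    for (op, arg), claim in zip(code, proof):
        if op == "LOAD":
            # the claim must name exactly this offset, and the offset
            # must be within the policy bounds
            if claim != ("in_bounds", arg) or not (0 <= arg < PACKET_SIZE):
                return False
    return True

# a filter that reads bytes 12 and 13 (e.g. an EtherType check)
code  = [("LOAD", 12), ("LOAD", 13), ("RET", 0)]
proof = [("in_bounds", 12), ("in_bounds", 13), None]
assert check(code, proof)

# tampering with the code (an out-of-bounds read) invalidates the proof
bad = [("LOAD", 9999), ("LOAD", 13), ("RET", 0)]
assert not check(bad, proof)
```

Note how modifying either the code or the proof makes validation fail, mirroring the property stated above: the result is either rejected as invalid or still satisfies the policy.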
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Managed private cloud** Managed private cloud: Managed private cloud (also known as "hosted private cloud") refers to a principle in software architecture where a single instance of the software runs on a server, serves a single client organization (tenant), and is managed by a third party. The third-party provider is responsible for providing the hardware for the server and also for preliminary maintenance. This is in contrast to multitenancy, where multiple client organizations share a single server, or an on-premises deployment, where the client organization hosts its software instance. Managed private cloud: Managed private clouds also fall under the larger umbrella of cloud computing. Adoption: The need for private clouds arose because enterprises require a dedicated service and infrastructure for their cloud computing needs, such as for business-critical operations, improved security and better control over their resources. Managed private cloud adoption has been on the rise, as enterprises require a dedicated cloud environment but prefer to avoid the management, maintenance, and future upgrade costs for the associated infrastructure and services. Such operational costs are unavoidable in on-premises private cloud data centers. Advantages and challenges of managed private cloud: A managed private cloud cuts down on upkeep costs by outsourcing infrastructure management and maintenance to the managed cloud provider.
It is easier to integrate an organization's existing software, services, and applications into a dedicated cloud hosting infrastructure, which can be customized to the client's needs, than into a public cloud platform, whose hardware and infrastructure/software platform cannot be individualized for each client. Customers who choose a managed private cloud deployment usually do so out of a desire for an efficient cloud deployment, combined with a need for service customization or integration only available in a single-tenant environment. Advantages and challenges of managed private cloud: The different types of deployments each have key benefits and key drawbacks, with some overlap between these cloud solutions. Advantages and challenges of managed private cloud: Since deployments are done in a single-tenant environment, managed private cloud is usually cost-prohibitive for small and medium-sized businesses. While server upkeep and maintenance, including network management and security, are handled by the service provider, the client is charged for all such services. It is up to the potential client to determine whether a managed private cloud solution aligns with their business objectives and budget. While the service provider maintains the upkeep of servers, network, and platform infrastructure, sensitive data is typically not stored on managed private clouds, as it may leave business-critical information prone to breaches via third-party attacks on the cloud service provider. Advantages and challenges of managed private cloud: Common customizations and integrations include Active Directory single sign-on, learning management systems, and video teleconferencing. Deployment strategies and service providers: Software companies have taken a variety of strategies in the managed private cloud realm. Some software organisations, such as Microsoft, have provided managed private cloud options internally.
Companies that offer an on-premises deployment option by definition enable third-party companies to market managed private cloud solutions. A few managed private cloud service providers are Rackspace and CenturyLink. Adobe Connect: Adobe Connect may be purchased for on-premises deployment, multi-tenant hosted deployment, managed private cloud as ACMS, or managed by the third-party managed private cloud provider ConnectSolutions. Deployment strategies and service providers: Microsoft licenses for Lync, SharePoint and Exchange may be purchased for on-premises deployment, a multi-tenant hosted deployment via Office 365, or managed by third-party cloud hosting from Azaleos, ConnectSolutions and others. Others: Popular web conferencing products like Cisco WebEx, Citrix GoToMeeting and Skype are available via multitenancy, and are not available in a managed private cloud environment.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Days of Memories** Days of Memories: Days of Memories is a series of dating sims from SNK for cell phones, beginning in 2005. SNK released a compilation of the first three games for the Nintendo DS in 2007, with new graphics and an extra viewing mode. Summary: The games are dating sims starring SNK and ADK characters that take place in a parallel world to their own. In each game, the player is given the month of July to start a relationship with one of the girls featured in the game, in order to finish the game with the beginnings of a workable relationship. Games: Days of Memories ~Boku to Kanojo no Atsui Natsu~ (Days of Memories 〜僕と彼女の熱い夏〜) Released on October 17th, 2005. The cast of this game is considered to be fan favorites from their respective debut games. Features - Athena Asamiya, Kasumi Todoh, B. Jenet, King, Mai Shiranui, Yuri Sakazaki, Leona Heidern, Kula Diamond. Male characters - Kyoya Kaido (original) Days of Memories 2 ~Boku no Ichiban Taisetsu na Kimi e~ (Days of Memories 2 〜僕の一番大切な君へ〜) Released on February 1st, 2006. It debuted the first unique Days of Memories character. Features - Hotaru Futaba, Kisarah Westfield, Fiolina "Fio" Germi, Chizuru Kagura, Mature, Blue Mary. Male characters - Kyo Kusanagi, Iori Yagami Exclusive character - Shizuku Misawa. Days of Memories ~Ōedo Ren'ai Emaki~ (Days of Memories 〜大江戸恋愛絵巻〜) Released on May 15th, 2006. It is set during the era of feudal Japan, and is the first game in the series to show where the girls are. Features - Nakoruru, Mina Majikina, Rinka Yoshino, Saya, Mikoto, Shiki, Iroha. Male characters - Haohmaru, Genjuro Kibagami, Ukyo Tachibana, Kyouemon (original) Exclusive characters - Shino, Chiyo. This game features only Samurai Shodown characters, rather than the normal cast of The King of Fighters characters. Days of Memories ~Kare to Watashi no Atsui Natsu~ (Days of Memories 〜彼と私の熱い夏〜) Released on November 1st, 2006. This game is marketed as a dating game for girls, rather than from the normal male perspective.
Features - Kyo Kusanagi, Iori Yagami, K', Ash Crimson, Terry Bogard, Rock Howard, Alba Meira, Ryo Sakazaki. Days of Memories ~Koi wa Good Job!~ (Days of Memories 〜恋はグッジョブ!〜) Released on April 3rd, 2007. This game focuses on characters at work in various jobs, related to their normal game appearances. Features - Kisarah Westfield, King, Kasumi Todoh, Mai Shiranui, Ai, Athena Asamiya. Male characters - Geese Howard, Wolfgang Krauser, Konoe Hideki (original) Exclusive character - Karen Ōkain. All characters except Ai and Karen appeared first in the original two games. Days of Memories Released on June 14th, 2007. Compilation of the first three Days of Memories games for the Nintendo DS. Days of Memories ~Junpaku no Tenshitachi~ (Days of Memories 〜純白の天使たち〜) Released on June 19th, 2007. The character roster is taken from The King of Fighters XI and KOF: Maximum Impact 2. Features - Ninon Beart, Elisabeth Blanctorche, Luise Meyrink, Momoko, Malin, Vanessa, Kaoru Watabe (Athena Asamiya's fan and friend), Alice Garnet Nakata (from the Fatal Fury slot machine; Alice would later appear in The King of Fighters XIV). Side characters - Mignon Beart Male characters - Magaki, Shion Exclusive characters - Ayame Ichitsuka, Tsugumi Ichitsuka. Days of Memories 2 Released on April 24th, 2008. Compilation of the fourth to sixth Days of Memories games for the Nintendo DS. Days of Memories ~Boku to Kanojo to Koto no Koi~ (Days of Memories 〜僕と彼女と古都の恋〜) Released on May 5th, 2008. This is the first game in the series to include characters from The Last Blade series. Features - Athena Asamiya, Leona Heidern, Kula Diamond, Angel, Whip Side characters - Rimururu, Tsunami (from the exclusive Iroha game) Male characters - Kyo Kusanagi, K', Ash Crimson, Haohmaru, Genjuro Kibagami, Setsuna, Kojiroh Sanada Exclusive character - Kamisaki Misato
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Imidazolidinone** Imidazolidinone: Imidazolidinones or imidazolinones are a class of 5-membered ring heterocycles structurally related to imidazole. Imidazolidinones feature an otherwise saturated C3N2 ring bearing a urea or amide functional group at the 2- or 4-position. 2-Imidazolidinones: The 2-imidazolidinones are cyclic derivatives of urea. 1,3-Dimethyl-2-imidazolidinone is a polar solvent and Lewis base. Drugs featuring this ring system include emicerfont, imidapril, and azlocillin. Dimethylol ethylene urea is the reagent used in permanent press clothing. 4-Imidazolidinones: 4-Imidazolidinones can be prepared from phenylalanine in two chemical steps: amidation with methylamine, followed by condensation with acetone. Imidazolidinone catalysts work by forming an iminium ion with the carbonyl groups of α,β-unsaturated aldehydes (enals) and enones in a rapid chemical equilibrium. This iminium activation lowers the substrate's LUMO. Several 4-imidazolidinones have been investigated. Drugs featuring the 4-imidazolidinone ring include hetacillin, NNC 63-0532, spiperone, and spiroxatrine. Imidazolones: Imidazolones (also called imidazolinones) are oxo derivatives of imidazoline (dihydroimidazoles). Examples include imidazol-4-one-5-propionic acid, a product of the catabolism of histidine, and imazaquin, a member of the imidazolinone class of herbicides.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Carbon tetrachloride** Carbon tetrachloride: Carbon tetrachloride, also known by many other names (such as carbon tet for short, and tetrachloromethane, its IUPAC name), is a chemical compound with the chemical formula CCl4. It is a non-flammable, colourless liquid with a "sweet" chloroform-like smell that can be detected at low levels. It was formerly widely used in fire extinguishers, as a precursor to refrigerants and as a cleaning agent, but has since been phased out because of environmental and safety concerns. Exposure to high concentrations of carbon tetrachloride can affect the central nervous system and degenerate the liver and kidneys. Prolonged exposure can be fatal. Carbon tetrachloride: Tradenames include: Carbon-Tet, Katharin (Germany, 1890s), Benzinoform, Carbona and Thawpit in the cleaning industry, Halon-104 in firefighting, Refrigerant-10 in HVACR, and Necatorina and Seretin as a medication. Properties: In the carbon tetrachloride molecule, four chlorine atoms are positioned symmetrically as corners in a tetrahedral configuration joined to a central carbon atom by single covalent bonds. Because of this symmetric geometry, CCl4 is non-polar. Methane gas has the same structure, making carbon tetrachloride a halomethane. As a solvent, it is well suited to dissolving other non-polar compounds such as fats and oils. It can also dissolve iodine. It is volatile, giving off vapors with a smell characteristic of other chlorinated solvents, somewhat similar to the tetrachloroethylene smell reminiscent of dry cleaners' shops. Properties: Solid tetrachloromethane has two polymorphs: crystalline II below −47.5 °C (225.6 K) and crystalline I above −47.5 °C.
At −47.3 °C it has a monoclinic crystal structure with space group C2/c and lattice constants a = 20.3 Å, b = 11.6 Å, c = 19.9 Å (1 Å = 10⁻¹ nm), β = 111°. With a specific gravity greater than 1, carbon tetrachloride will be present as a dense nonaqueous phase liquid if sufficient quantities are spilled in the environment. Reactions: Despite being generally inert, carbon tetrachloride can undergo various reactions. Hydrogen or an acid in the presence of an iron catalyst can reduce carbon tetrachloride to chloroform, dichloromethane, chloromethane and even methane. When its vapours are passed through a red-hot tube, carbon tetrachloride dechlorinates to tetrachloroethylene and hexachloroethane. Carbon tetrachloride, when treated with HF, gives various compounds such as trichlorofluoromethane (R-11), dichlorodifluoromethane (R-12), chlorotrifluoromethane (R-13) and carbon tetrafluoride, with HCl as the by-product: CCl4 + HF → CCl3F + HCl; CCl4 + 2 HF → CCl2F2 + 2 HCl; CCl4 + 3 HF → CClF3 + 3 HCl; CCl4 + 4 HF → CF4 + 4 HCl. This was once one of the main uses of carbon tetrachloride, as R-11 and R-12 were widely used as refrigerants.
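The symmetric tetrahedral geometry described under Properties, and the non-polarity that follows from it, can be checked numerically. This is an illustrative sketch using idealized unit-cube coordinates for the four C–Cl bond directions, not crystallographic data:

```python
import math

# Four alternate vertices of a cube form a regular tetrahedron centred
# on the origin; each vector stands in for one C-Cl bond dipole.
bonds = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

# The vectors cancel on every axis, so the net molecular dipole is zero.
net = tuple(sum(axis) for axis in zip(*bonds))

# Angle between any two bonds: cos(theta) = (a.b)/(|a||b|) = -1/3,
# where |a| = |b| = sqrt(3) for these coordinates.
dot = sum(a * b for a, b in zip(bonds[0], bonds[1]))
angle = math.degrees(math.acos(dot / 3))

print(net, round(angle, 2))  # (0, 0, 0) 109.47
```

The zero vector sum is why CCl4 is non-polar despite its four polar C–Cl bonds, and 109.47° is the familiar tetrahedral bond angle.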
An alcohol solution of potassium hydroxide decomposes it to potassium chloride and potassium carbonate in water: CCl4 + 6 KOH → 4 KCl + K2CO3 + 3 H2O. When a mixture of carbon tetrachloride and carbon dioxide is heated to 350 °C, it gives phosgene: CCl4 + CO2 → 2 COCl2. A similar reaction with carbon monoxide instead gives phosgene and tetrachloroethylene: 2 CCl4 + 2 CO → 2 COCl2 + C2Cl4. Reaction with hydrogen sulfide gives thiophosgene: CCl4 + H2S → CSCl2 + 2 HCl. Reaction with sulfur trioxide gives phosgene and pyrosulfuryl chloride: CCl4 + 2 SO3 → COCl2 + S2O5Cl2. Reaction with phosphoric anhydride gives phosgene and phosphoryl chloride: 3 CCl4 + P2O5 → 3 COCl2 + 2 POCl3. Carbon tetrachloride reacts with dry zinc oxide at 200 °C to yield zinc chloride, phosgene and carbon dioxide: 2 CCl4 + 3 ZnO → 3 ZnCl2 + COCl2 + CO2. History and synthesis: Carbon tetrachloride was originally synthesized in 1820 by Michael Faraday, who named it "protochloride of carbon", by decomposition of hexachloroethane ("perchloride of carbon"), which he synthesized by chlorination of ethylene. The protochloride of carbon had previously been misidentified as tetrachloroethylene, because both were made by the same reaction of hexachloroethane. Later in the 19th century, the name protochloride of carbon was used for tetrachloroethylene, and carbon tetrachloride was called "bichloride of carbon" or "perchloride of carbon". Henri Victor Regnault developed another method to synthesise carbon tetrachloride from chloroform, chloroethane or methanol with excess chlorine in 1839. Kolbe made carbon tetrachloride in 1845 by passing chlorine over carbon disulfide through a porcelain tube. Prior to the 1950s, carbon tetrachloride was manufactured by the chlorination of carbon disulfide at 105 to 130 °C: CS2 + 3 Cl2 → CCl4 + S2Cl2. But now it is mainly produced from methane: CH4 + 4 Cl2 → CCl4 + 4 HCl. The production often utilizes by-products of other chlorination reactions, such as from the syntheses of dichloromethane and chloroform.
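The production routes quoted above (carbon disulfide chlorination, CS2 + 3 Cl2 → CCl4 + S2Cl2, and methane chlorination, CH4 + 4 Cl2 → CCl4 + 4 HCl) can be sanity-checked mechanically by counting atoms on each side. A small sketch, with formulas written as atom-count dicts:

```python
from collections import Counter

def atoms(side):
    """Total atom counts for one side of a reaction, given as a list
    of (formula-as-atom-dict, stoichiometric coefficient) pairs."""
    total = Counter()
    for formula, coeff in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

def balanced(reactants, products):
    # a reaction balances when both sides carry the same multiset of atoms
    return atoms(reactants) == atoms(products)

CS2, Cl2, CCl4 = {"C": 1, "S": 2}, {"Cl": 2}, {"C": 1, "Cl": 4}
S2Cl2, CH4, HCl = {"S": 2, "Cl": 2}, {"C": 1, "H": 4}, {"H": 1, "Cl": 1}

# CS2 + 3 Cl2 -> CCl4 + S2Cl2  (pre-1950s route)
assert balanced([(CS2, 1), (Cl2, 3)], [(CCl4, 1), (S2Cl2, 1)])
# CH4 + 4 Cl2 -> CCl4 + 4 HCl  (modern route)
assert balanced([(CH4, 1), (Cl2, 4)], [(CCl4, 1), (HCl, 4)])
```

The same check extends to any of the other reactions in this section by supplying the corresponding atom-count dicts.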
Higher chlorocarbons are also subjected to this process, named "chlorinolysis": C2Cl6 + Cl2 → 2 CCl4. The production of carbon tetrachloride has steeply declined since the 1980s due to environmental concerns and the decreased demand for CFCs, which were derived from carbon tetrachloride. In 1992, production in the U.S./Europe/Japan was estimated at 720,000 tonnes. History and synthesis: Natural occurrence Carbon tetrachloride has been discovered, along with chloromethane and chloroform, in oceans, marine algae and volcanoes. The natural emissions of carbon tetrachloride are small compared to those from anthropogenic sources; for example, the Momotombo Volcano in Nicaragua emits carbon tetrachloride at a flux of 82 grams per year, while global industrial emissions were at 2 × 10¹⁰ grams per year. Carbon tetrachloride was found in the red algae Asparagopsis taxiformis and Asparagopsis armata. It was detected in Southern California ecosystems, the salt lakes of the Kalmyk Steppe and a common liverwort in Czechia. Safety: At high temperatures in air, it decomposes or burns to produce poisonous phosgene. This was a common problem when carbon tetrachloride was used as a fire extinguisher: deaths due to its conversion to phosgene have been reported. Carbon tetrachloride is a suspected human carcinogen based on sufficient evidence of carcinogenicity from studies in experimental animals. The World Health Organization reports carbon tetrachloride can induce hepatocellular carcinomas (hepatomas) in mice and rats. The doses inducing hepatic tumours are higher than those inducing cell toxicity. The International Agency for Research on Cancer (IARC) classified this compound in Group 2B, "possibly carcinogenic to humans". Carbon tetrachloride is one of the most potent hepatotoxins (toxic to the liver), so much so that it is widely used in scientific research to evaluate hepatoprotective agents.
Exposure to high concentrations of carbon tetrachloride (including vapor) can affect the central nervous system and degenerate the liver and kidneys, and prolonged exposure may lead to coma or death. Chronic exposure to carbon tetrachloride can cause liver and kidney damage and could result in cancer. See safety data sheets. Consumption of alcohol increases the toxic effects of carbon tetrachloride and may cause more severe organ damage, such as acute renal failure, in heavy drinkers. Doses that cause mild toxicity in non-drinkers can be fatal to drinkers. The effects of carbon tetrachloride on human health and the environment were assessed under REACH in 2012 in the context of the substance evaluation by France. In 2008, a study of common cleaning products found the presence of carbon tetrachloride in "very high concentrations" (up to 101 mg/m³) as a result of manufacturers' mixing of surfactants or soap with sodium hypochlorite (bleach). Carbon tetrachloride is also both ozone-depleting and a greenhouse gas. However, since 1992 its atmospheric concentrations have been in decline for the reasons described above. CCl4 has an atmospheric lifetime of 85 years. Uses: In organic chemistry, carbon tetrachloride serves as a source of chlorine in the Appel reaction. Carbon tetrachloride made from heavy chlorine-37 has been used in the detection of neutrinos. Historical uses: Carbon tetrachloride was widely used as a dry cleaning solvent, as a refrigerant, and in lava lamps. In the last case, carbon tetrachloride is a key ingredient that adds weight to the otherwise buoyant wax. Historical uses: One specialty use of carbon tetrachloride was in stamp collecting, to reveal watermarks on postage stamps without damaging them. A small amount of the liquid is placed on the back of a stamp, sitting in a black glass or obsidian tray. The letters or design of the watermark can then be seen clearly.
Today, this is done on lit tables without using carbon tetrachloride. Historical uses: Cleaning Being a good solvent for many materials (such as grease and tar), carbon tetrachloride was widely used as a cleaning fluid for nearly 70 years. It is nonflammable and nonexplosive, and unlike gasoline, which was also used for cleaning at the time, it left no odour on the cleaned material. It was used as a "safe" alternative to gasoline. It was first marketed as Katharin in 1892, and later as Benzinoform. Carbon tetrachloride was the first chlorinated solvent to be used in dry-cleaning, and was used until the 1950s. It was corrosive to the dry-cleaning equipment and caused illness among dry-cleaning operators, and was replaced by trichloroethylene, tetrachloroethylene and methyl chloroform (trichloroethane). Carbon tetrachloride was also used as an alternative to petrol (gasoline) in dry shampoos, from the beginning of 1903 to the 1930s. Several women fainted from its fumes during hair washes in barber shops, so hairdressers often used electric fans to blow the fumes away. In 1909, a baronet's daughter, Helenora Elphinstone-Dalrymple (aged 29), died after having her hair shampooed with carbon tetrachloride. It is assumed that carbon tetrachloride was still used as a dry cleaning solvent in North Korea as of 2006. Historical uses: Medical uses Carbon tetrachloride was briefly used as a volatile inhalation anaesthetic and as an analgesic for intense menstruation pains and headaches in the mid-19th century. Its anaesthetic effects were known as early as 1847 or 1848. It was introduced as a safer alternative to chloroform by Doctor Protheroe Smith in 1864. In December 1865, the Scottish obstetrician who discovered the anaesthetic effects of chloroform on humans, James Young Simpson, experimented with carbon tetrachloride as an anaesthetic. Simpson named the compound "Chlorocarbon" for its similarity to chloroform.
His experiments involved injecting carbon tetrachloride into two women's vaginas. Simpson orally consumed carbon tetrachloride and described it as having "the same effect as swallowing a capsule of chloroform". Because of the greater number of chlorine atoms in its molecule compared to chloroform, carbon tetrachloride has a stronger anaesthetic effect than chloroform, and a smaller amount was required. Its anaesthetic action was likened to ether rather than to the related chloroform. It is less volatile than chloroform, so it was more difficult to apply and needed warm water to evaporate. Its smell has been described as "fruity", quince-like and "more pleasant than chloroform", and it had a "pleasant taste". Carbon tetrachloride for anaesthetic use was made by the chlorination of carbon disulfide. It was used on at least 50 patients, most of whom were women in labour. During anaesthesia, carbon tetrachloride caused violent muscular contractions and negative effects on the heart in some patients, so severe that it had to be substituted with chloroform or ether. Such use was experimental, and the anaesthetic use of carbon tetrachloride never gained popularity due to its potential toxicity. Historical uses: The veterinary doctor Maurice Crowther Hall (1881–1938) discovered in 1921 that carbon tetrachloride, when ingested, was incredibly effective as an anthelminthic in eradicating hookworm. Beginning in 1922, capsules of pure carbon tetrachloride were marketed by Merck under the name Necatorina (variants include Neo-necatorina and Necatorine). Necatorina was used as a medication against parasitic diseases in humans. This medication was most widely used in Latin American countries. Its toxicity was not well understood at the time, and toxic effects were attributed to impurities in the capsules rather than to carbon tetrachloride itself.
Historical uses: Solvent It was once a popular solvent in organic chemistry, but because of its adverse health effects, it is rarely used today. It is sometimes useful as a solvent for infrared spectroscopy, because there are no significant absorption bands above 1600 cm−1. Because carbon tetrachloride does not have any hydrogen atoms, it was historically used in proton NMR spectroscopy. Besides being toxic, however, it has low dissolving power. Its use in NMR spectroscopy has been largely superseded by deuterated solvents (mainly deuterochloroform). The use of carbon tetrachloride in oil determination has been replaced by various other solvents, such as tetrachloroethylene. Because it has no C–H bonds, carbon tetrachloride does not easily undergo free-radical reactions. It is a useful solvent for halogenations either by the elemental halogen or by a halogenation reagent such as N-bromosuccinimide (these conditions are known as Wohl–Ziegler bromination). Historical uses: Fire suppression In 1910, the Pyrene Manufacturing Company of Delaware filed a patent to use carbon tetrachloride to extinguish fires. The liquid was vaporized by the heat of combustion and extinguished flames, an early form of gaseous fire suppression. At the time it was believed the gas simply displaced oxygen in the area near the fire, but later research found that the gas actually inhibits the chemical chain reaction of the combustion process. In 1911, Pyrene patented a small, portable extinguisher that used the chemical. The extinguisher consisted of a brass bottle with an integrated hand-pump that was used to expel a jet of liquid toward the fire. As the container was unpressurized, it could easily be refilled after use. Carbon tetrachloride was suitable for liquid and electrical fires, and the extinguishers were often carried on aircraft or motor vehicles.
However, as early as 1920, there were reports of fatalities caused by the chemical when used to fight a fire in a confined space. In the first half of the 20th century, another common fire extinguisher was a single-use, sealed glass globe known as a "fire grenade", filled with either carbon tetrachloride or salt water. The bulb could be thrown at the base of the flames to quench the fire. The carbon tetrachloride type could also be installed in a spring-loaded wall fixture with a solder-based restraint. When the solder was melted by high heat, the spring would either break the globe or launch it out of the bracket, allowing the extinguishing agent to be automatically dispersed into the fire. A well-known brand of fire grenade was the "Red Comet", which was variously manufactured with other fire-fighting equipment in the Denver, Colorado area by the Red Comet Manufacturing Company from its founding in 1919 until manufacturing operations were closed in the early 1980s. Since carbon tetrachloride freezes at –23 °C, the fire extinguishers would contain only 89–90% carbon tetrachloride, with 10% trichloroethylene (m.p. –85 °C) or chloroform (m.p. –63 °C) to lower the freezing point. The extinguishers with 10% trichloroethylene would also contain 1% carbon disulfide as a stabiliser. Historical uses: Refrigerants Prior to the Montreal Protocol, large quantities of carbon tetrachloride were used to produce the chlorofluorocarbon refrigerants R-11 (trichlorofluoromethane) and R-12 (dichlorodifluoromethane). However, these refrigerants play a role in ozone depletion and have been phased out. Carbon tetrachloride is still used to manufacture less destructive refrigerants. Fumigant Carbon tetrachloride was widely used as a fumigant to kill insect pests in stored grain. It was employed in a mixture known as 80/20, which was 80% carbon tetrachloride and 20% carbon disulfide. The United States Environmental Protection Agency banned its use in 1985.
Society and culture: The French writer René Daumal intoxicated himself by inhaling the carbon tetrachloride he used to kill the beetles he collected, voluntarily plunging himself into intoxications close to comatose states in order to "encounter other worlds". Carbon tetrachloride is listed (along with salicylic acid, toluene, sodium tetraborate, silica gel, methanol, potassium carbonate, ethyl acetate and "BHA") as an ingredient in Peter Parker's (Spider-Man) custom web fluid formula in the book The Wakanda Files: A Technological Exploration of the Avengers and Beyond. Society and culture: Australian YouTuber Tom of Explosions&Fire and Extractions&Ire made a video in 2019 on extracting carbon tetrachloride from an old fire extinguisher, later experimenting with it by mixing it with sodium, and the chemical gained a fan base called "Tet Gang" on social media (especially on Reddit). The channel owner later used carbon tetrachloride themed designs in the channel's merch. Society and culture: In the Ramones song "Carbona Not Glue", released in 1977, the narrator says that huffing the vapours of Carbona, a carbon tetrachloride-based stain remover, was better than huffing glue. The song was later removed from the album because Carbona was a corporate trademark. Famous deaths from carbon tetrachloride poisoning: Evalyn Bostock (1917–1944), British actress, who died from accidentally drinking carbon tetrachloride after mistaking it for her drink while working in a photographic darkroom. Harry Edwards (1887–1952), American director, who died from carbon tetrachloride poisoning shortly after directing his first television production. Zilphia Horton (1910–1952), American musician and activist, who died from accidentally drinking a glass full of carbon tetrachloride-based typewriter cleaning fluid that she mistook for water. Margo Jones (1911–1955), American stage director, who was exposed to the fumes of carbon tetrachloride that was used to clean paint off a carpet.
She died a week later from kidney failure. Jim Beck (1919–1956), American record producer, who died after exposure to carbon tetrachloride fumes while cleaning recording equipment. Tommy Tucker (1933–1982), American blues singer, who died after using carbon tetrachloride in floor refinishing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Similac** Similac: Similac (for "similar to lactation") is a brand of infant formula that was developed by Alfred Bosworth of Tufts University and marketed by Abbott Laboratories. It was first released in the late 1920s, and then reformulated and concentrated in 1951. Today, Similac is sold in 96 countries worldwide. History:
1903 - Harry C. Moores and Stanley M. Ross launch the Moores & Ross Milk Company, which specialized in bottling milk for home delivery.
1925 - Alfred Bosworth creates an infant formula called "Franklin Infant Food", later renamed Similac.
1928 - Company renames itself "M&R Dietetic Laboratories", sells off its regular milk operations to Borden and focuses on infant milk.
1950 - Company introduces "Similac Concentrated Liquid" in the USA, a non-powder infant formula.
1959 - Company launches "Similac with Iron", an iron-fortified infant formula.
1961 - Similac opens a new plant in the Netherlands, its first factory outside of the US.
1962 - Similac begins offering "Similac PM 60/40", for babies with specific medical conditions.
1964 - Company merges with Abbott Laboratories.
1966 - Similac introduces "Isomil", a soy-based formula.
1970 - Similac arrives in Israel.
1994 - Similac launches "NeoCare", a formula tailored to premature babies, later renamed "Similac NeoSure".
1999 - Similac creates the "Similac with Iron Ready to Feed" formula bottle.
2000 - Similac starts offering "Human Milk Fortifier".
2002 - Similac introduces "Similac Advance with Iron", an infant formula with DHA and ARA.
2006 - Similac launches "Similac Organic", a certified USDA organic infant formula.
2011 - Similac launches "Similac Advance Plus", "Similac LeMehadrin" and "Similac Gentle" (lactose-free formula).
2013 - Similac begins offering "Similac Human Milk Fortifier Concentrated Liquid" for preterm babies in NICUs, and launches a formula designed for breastfeeding moms who choose to supplement.
Similac launches "The Baby Journal" app, Diaper Decoder and Ecodu developmental kits. 2014 - Similac promotes "Similac Breastfeeding Supplement" for nursing mothers. 2015 - Similac introduces "Similac Advance NON-GMO", a formula with ingredients that are not genetically engineered; Similac also delivers a "big hit" commercial in which Hilary and Haylie Duff teamed up with Similac "to help raise awareness against mom-on-mom bullying". 2016 - Similac introduces "Go & Grow by Similac Food Mix-Ins", a supplement designed to mix into the food of toddlers. Similac begins offering "Pure Bliss by Similac", a formula starting with fresh milk from grass-fed cows that has no artificial growth hormones or antibiotics. Similac launches "Similac Pro-Advance" and "Similac Pro-Sensitive", formulas containing 2’-FL Human Milk Oligosaccharide. 2022 - By February 2022, Abbott had initiated a voluntary recall of some Similac and Alimentum powdered infant formula (PIF) after finding evidence of Cronobacter sakazakii in some areas of Abbott's Sturgis, Michigan facility, known for manufacturing Similac, the leading PIF brand. In the United States, about 90% of the multibillion-dollar PIF market is controlled by only four companies, including Abbott, and the Sturgis facility is Abbott's largest. Most of Abbott's powdered formula was produced there, mainly under the Similac brand name, representing 40% of the US market. The Office of the Commissioner of the Food and Drug Administration (FDA) published a May 2022 update on the recall of certain Similac, Alimentum and EleCare products as it investigated four cases of hospitalized infants involving Cronobacter sakazakii infection following the infants' consumption of PIF produced at the Sturgis plant. Abbott shut down the Sturgis plant out of an abundance of caution; there is no evidence that the infants' infections were caused by the powdered formula.
The closure of the Sturgis plant for five months exacerbated the 2022 United States infant formula shortage, which peaked in May. As of June 2022, the FDA was unable to prove a causal relationship between the deaths of nine infants who had consumed Abbott's PIF and Abbott products. The plant reopened in June. Product lineup: formulas for premature newborns and infants, for toddlers, and for mothers. Ingredients: each formula contains various ingredients, but most include OptiGRO, a mixture containing DHA, lutein, vitamin E, nucleotides, antioxidants, and prebiotics.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Epimestrol** Epimestrol: Epimestrol (INN, USAN, BAN) (brand names Alene, Stimovul; former developmental code name ORG-817), also known as 3-methoxy-17-epiestriol, is a synthetic, steroidal estrogen and an estrogen ether and prodrug of 17-epiestriol. It has been used as a component of ovulation induction in combination with gonadotropin-releasing hormone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mediation (statistics)** Mediation (statistics): In statistics, a mediation model seeks to identify and explain the mechanism or process that underlies an observed relationship between an independent variable and a dependent variable via the inclusion of a third hypothetical variable, known as a mediator variable (also a mediating variable, intermediary variable, or intervening variable). Rather than a direct causal relationship between the independent variable and the dependent variable, a mediation model proposes that the independent variable influences the mediator variable, which in turn influences the dependent variable. Thus, the mediator variable serves to clarify the nature of the relationship between the independent and dependent variables.Mediation analyses are employed to understand a known relationship by exploring the underlying mechanism or process by which one variable influences another variable through a mediator variable. In particular, mediation analysis can contribute to better understanding the relationship between an independent variable and a dependent variable when these variables do not have an obvious direct connection. Baron and Kenny's (1986) steps for mediation analysis: Baron and Kenny (1986) laid out several requirements that must be met to form a true mediation relationship. They are outlined below using a real-world example. See the diagram above for a visual representation of the overall mediating relationship to be explained. Note: Hayes (2009) critiqued Baron and Kenny's mediation steps approach, and as of 2019, David A. Kenny on his website stated that mediation can exist in the absence of a 'significant' total effect, and therefore step 1 below may not be needed. This situation is sometimes referred to as "inconsistent mediation". 
Later publications by Hayes also questioned the concepts of full or partial mediation and advocated for these terms, along with the classical mediation steps approach outlined below, to be abandoned. Baron and Kenny's (1986) steps for mediation analysis: Step 1: Regress the dependent variable on the independent variable to confirm that the independent variable is a significant predictor of the dependent variable. Independent variable → dependent variable: Y = β10 + β11X + ε1, where β11 is significant. Step 2: Regress the mediator on the independent variable to confirm that the independent variable is a significant predictor of the mediator. If the mediator is not associated with the independent variable, then it could not possibly mediate anything. Independent variable → mediator: Me = β20 + β21X + ε2, where β21 is significant. Step 3: Regress the dependent variable on both the mediator and the independent variable to confirm that (a) the mediator is a significant predictor of the dependent variable, and (b) the strength of the coefficient of the previously significant independent variable in Step 1 is now greatly reduced, if not rendered nonsignificant: Y = β30 + β31X + β32Me + ε3, where β32 is significant and β31 should be smaller in absolute value than the original effect of the independent variable (β11 above). Example: The following example, drawn from Howell (2009), walks through each of Baron and Kenny's requirements to show how a mediation effect is characterized. Steps 1 and 2 use simple regression analysis, whereas step 3 uses multiple regression analysis. How you were parented (i.e., independent variable) predicts how confident you feel about parenting your own children (i.e., dependent variable). How you were parented (i.e., independent variable) predicts your feelings of competence and self-esteem (i.e., mediator).
Baron and Kenny's (1986) steps for mediation analysis: Your feelings of competence and self-esteem (i.e., mediator) predict how confident you feel about parenting your own children (i.e., dependent variable), while controlling for how you were parented (i.e., independent variable). Such findings would support the conclusion that your feelings of competence and self-esteem mediate the relationship between how you were parented and how confident you feel about parenting your own children. If step 1 does not yield a significant result, one may still have grounds to move to step 2: sometimes there is actually a significant relationship between the independent and dependent variables, but because of a small sample size or other extraneous factors there may not be enough power to detect the effect that actually exists. Direct versus indirect effects: In the diagram shown above, the indirect effect is the product of path coefficients "A" and "B". The direct effect is the coefficient " C' ". The direct effect measures the extent to which the dependent variable changes when the independent variable increases by one unit and the mediator variable remains unaltered. In contrast, the indirect effect measures the extent to which the dependent variable changes when the independent variable is held constant and the mediator variable changes by the amount it would have changed had the independent variable increased by one unit. In linear systems, the total effect is equal to the sum of the direct and indirect effects (C' + AB in the model above). In nonlinear models, the total effect is not generally equal to the sum of the direct and indirect effects, but to a modified combination of the two. Full versus partial mediation: A mediator variable can either account for all or some of the observed relationship between two variables.
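As a concrete sketch, Baron and Kenny's three regression steps can be run with ordinary least squares; the simulated data below (a partial-mediation scenario in which X affects Y both directly and through Me) and all coefficient values are illustrative assumptions, not part of the original exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)                        # independent variable
Me = 0.6 * X + rng.normal(size=n)             # mediator, driven by X
Y = 0.4 * Me + 0.1 * X + rng.normal(size=n)   # outcome: direct + indirect paths

def ols(y, *predictors):
    # Least-squares fit; returns [intercept, slope1, slope2, ...].
    Z = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

b11 = ols(Y, X)[1]            # Step 1: total effect of X on Y
b21 = ols(Me, X)[1]           # Step 2: effect of X on the mediator
b31, b32 = ols(Y, X, Me)[1:]  # Step 3: direct effect of X, and Me -> Y
# Partial mediation: |b31| shrinks relative to |b11| once Me is controlled,
# but does not vanish, because a direct X -> Y path remains.
```

Here significance testing is omitted for brevity; in practice one would examine the standard errors of each slope rather than the point estimates alone.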
Full mediation Maximum evidence for mediation, also called full mediation, would occur if inclusion of the mediation variable drops the relationship between the independent variable and dependent variable (see pathway c in diagram above) to zero. Partial mediation Partial mediation maintains that the mediating variable accounts for some, but not all, of the relationship between the independent variable and dependent variable. Partial mediation implies that there is not only a significant relationship between the mediator and the dependent variable, but also some direct relationship between the independent and dependent variable. Full versus partial mediation: In order for either full or partial mediation to be established, the reduction in variance explained by the independent variable must be significant as determined by one of several tests, such as the Sobel test. The effect of an independent variable on the dependent variable can become nonsignificant when the mediator is introduced simply because a trivial amount of variance is explained (i.e., not true mediation). Thus, it is imperative to show a significant reduction in variance explained by the independent variable before asserting either full or partial mediation. It is possible to have statistically significant indirect effects in the absence of a total effect. This can be explained by the presence of several mediating paths that cancel each other out, and become noticeable when one of the cancelling mediators is controlled for. This implies that the terms 'partial' and 'full' mediation should always be interpreted relative to the set of variables that are present in the model. In all cases, the operation of "fixing a variable" must be distinguished from that of "controlling for a variable," which has been inappropriately used in the literature. The former stands for physically fixing, while the latter stands for conditioning on, adjusting for, or adding to the regression model. 
The two notions coincide only when all error terms (not shown in the diagram) are statistically uncorrelated. When errors are correlated, adjustments must be made to neutralize those correlations before embarking on mediation analysis (see Bayesian network). Sobel's test: Sobel's test is performed to determine whether the relationship between the independent variable and the dependent variable has been significantly reduced after inclusion of the mediator variable. In other words, this test assesses whether a mediation effect is significant. It examines the relationship between the independent variable and the dependent variable compared to the relationship between the independent variable and the dependent variable including the mediation factor. The Sobel test is more accurate than the Baron and Kenny steps explained above; however, it does have low statistical power. As such, large sample sizes are required in order to have sufficient power to detect significant effects. This is because the key assumption of Sobel's test is the assumption of normality. Because Sobel's test evaluates a given sample against the normal distribution, small sample sizes and skewness of the sampling distribution can be problematic (see Normal distribution for more details). Thus, the rule of thumb suggested by MacKinnon et al. (2002) is that a sample size of 1000 is required to detect a small effect, a sample size of 100 is sufficient to detect a medium effect, and a sample size of 50 is required to detect a large effect. The equation for the Sobel test statistic is: z = ab / √(b²s_a² + a²s_b²), where a and b are the coefficients of the independent-variable-to-mediator and mediator-to-dependent-variable paths, and s_a and s_b are their standard errors. Preacher–Hayes bootstrap method: The bootstrapping method provides some advantages over Sobel's test, primarily an increase in power. The Preacher and Hayes bootstrapping method is a non-parametric test and does not impose the assumption of normality. Therefore, if the raw data are available, the bootstrap method is recommended.
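The Sobel statistic, z = ab / √(b²s_a² + a²s_b²), is straightforward to compute once the path coefficients and their standard errors are in hand; the numbers below are made-up illustrations, not values from the text.

```python
import math

def sobel_z(a, b, se_a, se_b):
    # z = a*b / sqrt(b^2 * se_a^2 + a^2 * se_b^2)
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical estimates: a = X -> Me path, b = Me -> Y path (given X),
# with their standard errors from the two regressions.
z = sobel_z(a=0.5, b=0.4, se_a=0.1, se_b=0.1)
# |z| > 1.96 would indicate a mediation effect significant at the 0.05 level.
```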
Bootstrapping involves repeatedly and randomly sampling observations with replacement from the data set to compute the desired statistic in each resample. Computing over hundreds or thousands of bootstrap resamples provides an approximation of the sampling distribution of the statistic of interest. The Preacher–Hayes method provides point estimates and confidence intervals by which one can assess the significance or nonsignificance of a mediation effect. Point estimates reveal the mean over the number of bootstrapped samples, and if zero does not fall within the resulting confidence interval, one can confidently conclude that there is a significant mediation effect to report. Significance of mediation: As outlined above, there are a few different options one can choose from to evaluate a mediation model. Bootstrapping is becoming the most popular method of testing mediation because it does not require the normality assumption to be met, and because it can be effectively utilized with smaller sample sizes (N < 25). However, mediation continues to be most frequently determined using the logic of Baron and Kenny or the Sobel test. It is becoming increasingly difficult to publish tests of mediation based purely on the Baron and Kenny method or tests that make distributional assumptions such as the Sobel test. Thus, it is important to consider the available options when choosing which test to conduct. Approaches to mediation: While the concept of mediation as defined within psychology is theoretically appealing, the methods used to study mediation empirically have been challenged by statisticians and epidemiologists and interpreted formally. Experimental-causal-chain design: An experimental-causal-chain design is used when the proposed mediator is experimentally manipulated.
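The percentile-bootstrap logic described above for the indirect effect a·b can be sketched in a few lines; the data-generating process and the 2,000-resample count are illustrative assumptions in the spirit of the Preacher–Hayes method, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=n)
Me = 0.5 * X + rng.normal(size=n)   # true a = 0.5
Y = 0.5 * Me + rng.normal(size=n)   # true b = 0.5, so true indirect effect = 0.25

def indirect(idx):
    # a*b estimated on one resample: a from Me ~ X, b from Y ~ X + Me.
    x, m, y = X[idx], Me[idx], Y[idx]
    a = np.polyfit(x, m, 1)[0]
    Zm = np.column_stack([np.ones(len(idx)), x, m])
    b = np.linalg.lstsq(Zm, y, rcond=None)[0][2]
    return a * b

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
# If zero lies outside [lo, hi], the indirect effect is deemed significant.
```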
Such a design implies that one manipulates some controlled third variable that they have reason to believe could be the underlying mechanism of a given relationship. Measurement-of-mediation design A measurement-of-mediation design can be conceptualized as a statistical approach. Such a design implies that one measures the proposed intervening variable and then uses statistical analyses to establish mediation. This approach does not involve manipulation of the hypothesized mediating variable, but only involves measurement. Criticisms of mediation measurement: Experimental approaches to mediation must be carried out with caution. First, it is important to have strong theoretical support for the exploratory investigation of a potential mediating variable. A criticism of a mediation approach rests on the ability to manipulate and measure a mediating variable. Thus, one must be able to manipulate the proposed mediator in an acceptable and ethical fashion. As such, one must be able to measure the intervening process without interfering with the outcome. The mediator must also be able to establish construct validity of manipulation. One of the most common criticisms of the measurement-of-mediation approach is that it is ultimately a correlational design. Consequently, it is possible that some other third variable, independent from the proposed mediator, could be responsible for the proposed effect. However, researchers have worked hard to provide counter-evidence to this disparagement. Specifically, the following counter-arguments have been put forward: Temporal precedence For example, if the independent variable precedes the dependent variable in time, this would provide evidence suggesting a directional, and potentially causal, link from the independent variable to the dependent variable. 
Criticisms of mediation measurement: Nonspuriousness and/or no confounds: For example, should one identify other third variables and show that they do not alter the relationship between the independent variable and the dependent variable, one would have a stronger argument for the mediation effect. See Other third variables below. Mediation can be an extremely useful and powerful statistical test; however, it must be used properly. It is important that the measures used to assess the mediator and the dependent variable are theoretically distinct and that the independent variable and mediator cannot interact. Should there be an interaction between the independent variable and the mediator, one would have grounds to investigate moderation. Other third variables: Confounding: Another model that is often tested is one in which competing variables in the model are alternative potential mediators or an unmeasured cause of the dependent variable. An additional variable in a causal model may obscure or confound the relationship between the independent and dependent variables. Potential confounders are variables that may have a causal impact on both the independent variable and the dependent variable. They include common sources of measurement error (as discussed above) as well as other influences shared by both the independent and dependent variables. In experimental studies, there is a special concern about aspects of the experimental manipulation or setting that may account for study effects, rather than the motivating theoretical factor. Any of these problems may produce spurious relationships between the independent and dependent variables as measured. Ignoring a confounding variable may bias empirical estimates of the causal effect of the independent variable. Suppression: A suppressor variable increases the predictive validity of another variable when included in a regression equation.
Suppression can occur when a single causal variable is related to an outcome variable through two separate mediator variables, and when one of those mediated effects is positive and one is negative. In such a case, each mediator variable suppresses or conceals the effect that is carried through the other mediator variable. For example, higher intelligence scores (a causal variable, A) may cause an increase in error detection (a mediator variable, B) which in turn may cause a decrease in errors made at work on an assembly line (an outcome variable, X); at the same time, intelligence could also cause an increase in boredom (C), which in turn may cause an increase in errors (X). Thus, in one causal path intelligence decreases errors, and in the other it increases them. When neither mediator is included in the analysis, intelligence appears to have no effect or a weak effect on errors. However, when boredom is controlled intelligence will appear to decrease errors, and when error detection is controlled intelligence will appear to increase errors. If intelligence could be increased while only boredom was held constant, errors would decrease; if intelligence could be increased while holding only error detection constant, errors would increase. Other third variables: In general, the omission of suppressors or confounders will lead to either an underestimation or an overestimation of the effect of A on X, thereby either reducing or artificially inflating the magnitude of a relationship between two variables. Other third variables: Moderators Other important third variables are moderators. Moderators are variables that can make the relationship between two variables either stronger or weaker. Such variables further characterize interactions in regression by affecting the direction and/or strength of the relationship between X and Y. A moderating relationship can be thought of as an interaction. 
It occurs when the relationship between variables A and B depends on the level of C. See moderation for further discussion. Moderated mediation: Mediation and moderation can co-occur in statistical models. It is possible to mediate moderation and moderate mediation. Moderated mediation: Moderated mediation is when the effect of the treatment A on the mediator and/or the partial effect B on the dependent variable depend in turn on levels of another variable (moderator). Essentially, in moderated mediation, mediation is first established, and then one investigates if the mediation effect that describes the relationship between the independent variable and dependent variable is moderated by different levels of another variable (i.e., a moderator). This definition has been outlined by Muller, Judd, and Yzerbyt (2005) and Preacher, Rucker, and Hayes (2007). Moderated mediation: Models of moderated mediation There are five possible models of moderated mediation, as illustrated in the diagrams below. In the first model the independent variable also moderates the relationship between the mediator and the dependent variable. The second possible model of moderated mediation involves a new variable which moderates the relationship between the independent variable and the mediator (the A path). The third model of moderated mediation involves a new moderator variable which moderates the relationship between the mediator and the dependent variable (the B path). Moderated mediation can also occur when one moderating variable affects both the relationship between the independent variable and the mediator (the A path) and the relationship between the mediator and the dependent variable (the B path). 
Moderated mediation: The fifth and final possible model of moderated mediation involves two new moderator variables, one moderating the A path and the other moderating the B path. In addition to the models mentioned above, a new variable can also exist which moderates the relationship between the independent variable and the mediator (the A path) while at the same time moderating the relationship between the independent variable and the dependent variable (the C path). Mediated moderation: Mediated moderation is a variant of both moderation and mediation. This is where there is initially overall moderation and the direct effect of the moderator variable on the outcome is mediated. The main difference between mediated moderation and moderated mediation is that for the former there is initial (overall) moderation and this effect is mediated, while for the latter there is no overall moderation but the effect of either the treatment on the mediator (path A) is moderated or the effect of the mediator on the outcome (path B) is moderated. In order to establish mediated moderation, one must first establish moderation, meaning that the direction and/or the strength of the relationship between the independent and dependent variables (path C) differs depending on the level of a third variable (the moderator variable). Researchers next look for the presence of mediated moderation when they have a theoretical reason to believe that there is a fourth variable that acts as the mechanism or process that causes the relationship between the independent variable and the moderator (path A) or between the moderator and the dependent variable (path C). Example: The following is a published example of mediated moderation in psychological research. Participants were presented with an initial stimulus (a prime) that made them think of morality or made them think of might.
They then participated in the Prisoner's Dilemma Game (PDG), in which participants pretend that they and their partner in crime have been arrested, and they must decide whether to remain loyal to their partner or to compete with their partner and cooperate with the authorities. The researchers found that prosocial individuals were affected by the morality and might primes, whereas proself individuals were not. Thus, social value orientation (proself vs. prosocial) moderated the relationship between the prime (independent variable: morality vs. might) and the behaviour chosen in the PDG (dependent variable: competitive vs. cooperative). Mediated moderation: The researchers next looked for the presence of a mediated moderation effect. Regression analyses revealed that the type of prime (morality vs. might) mediated the moderating relationship of participants’ social value orientation on PDG behaviour. Prosocial participants who experienced the morality prime expected their partner to cooperate with them, so they chose to cooperate themselves. Prosocial participants who experienced the might prime expected their partner to compete with them, which made them more likely to compete with their partner and cooperate with the authorities. In contrast, participants with a pro-self social value orientation always acted competitively. Regression equations for moderated mediation and mediated moderation: Muller, Judd, and Yzerbyt (2005) outline three fundamental models that underlie both moderated mediation and mediated moderation. Mo represents the moderator variable(s), Me represents the mediator variable(s), and εi represents the measurement error of each regression equation. Step 1 Moderation of the relationship between the independent variable (X) and the dependent variable (Y), also called the overall treatment effect (path C in the diagram). 
Y = β40 + β41X + β42Mo + β43XMo + ε4. To establish overall moderation, the β43 regression weight must be significant (first step for establishing mediated moderation). Establishing moderated mediation requires that there be no overall moderation effect, so the β43 regression weight must not be significant. Step 2: Moderation of the relationship between the independent variable and the mediator (path A): Me = β50 + β51X + β52Mo + β53XMo + ε5. If the β53 regression weight is significant, the moderator affects the relationship between the independent variable and the mediator. Step 3: Moderation of both the relationship between the independent and dependent variables (path C) and the relationship between the mediator and the dependent variable (path B): Y = β60 + β61X + β62Mo + β63XMo + β64Me + β65MeMo + ε6. If both β53 in step 2 and β63 in step 3 are significant, the moderator affects the relationship between the independent variable and the mediator (path A). If both β53 in step 2 and β65 in step 3 are significant, the moderator affects the relationship between the mediator and the dependent variable (path B). Either or both of the conditions above may be true. Causal mediation analysis: Fixing versus conditioning: Mediation analysis quantifies the extent to which a variable participates in the transmittance of change from a cause to its effect. It is inherently a causal notion, and hence cannot be defined in statistical terms. Traditionally, however, the bulk of mediation analysis has been conducted within the confines of linear regression, with statistical terminology masking the causal character of the relationships involved. This led to difficulties, biases, and limitations that have been alleviated by modern methods of causal analysis based on causal diagrams and counterfactual logic. The source of these difficulties lies in defining mediation in terms of changes induced by adding a third variable into a regression equation.
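The three moderation regressions of Muller, Judd, and Yzerbyt can be estimated with least squares; the simulated data below (in which only the A path is moderated) and all coefficient values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=n)
Mo = rng.normal(size=n)
Me = 0.5 * X + 0.5 * X * Mo + rng.normal(size=n)  # A path moderated by Mo
Y = 0.5 * Me + rng.normal(size=n)                 # B path not moderated

def coefs(y, *preds):
    # Least-squares coefficients: [intercept, then one slope per predictor].
    Z = np.column_stack([np.ones(n)] + list(preds))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

b43 = coefs(Y, X, Mo, X * Mo)[3]    # Step 1: overall moderation (X*Mo on Y)
b53 = coefs(Me, X, Mo, X * Mo)[3]   # Step 2: moderation of path A (X*Mo on Me)
step3 = coefs(Y, X, Mo, X * Mo, Me, Me * Mo)
b63, b65 = step3[3], step3[5]       # Step 3: residual X*Mo term and Me*Mo term
# Here b53 is clearly nonzero (path A is moderated), b65 is near zero
# (path B is not), and the A-path moderation also shows up overall in b43.
```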
Such statistical changes are epiphenomena which sometimes accompany mediation but, in general, fail to capture the causal relationships that mediation analysis aims to quantify. The basic premise of the causal approach is that it is not always appropriate to "control" for the mediator M when we seek to estimate the direct effect of X on Y (see the Figure above). The classical rationale for "controlling" for M is that, if we succeed in preventing M from changing, then whatever changes we measure in Y are attributable solely to variations in X, and we are then justified in proclaiming the effect observed as the "direct effect of X on Y." Unfortunately, "controlling for M" does not physically prevent M from changing; it merely narrows the analyst's attention to cases of equal M values. Moreover, the language of probability theory does not possess the notation to express the idea of "preventing M from changing" or "physically holding M constant". The only operator probability theory provides is "conditioning", which is what we do when we "control" for M, or add M as a regressor in the equation for Y. The result is that, instead of physically holding M constant (say at M = m) and comparing Y for units under X = 1 to those under X = 0, we allow M to vary but ignore all units except those in which M achieves the value M = m. These two operations are fundamentally different, and yield different results, except in the case of no omitted variables. Improperly conditioning on mediated effects can be a type of bad control. To illustrate, assume that the error terms of M and Y are correlated. Under such conditions, the structural coefficients B and A (between M and Y and between Y and X) can no longer be estimated by regressing Y on X and M. In fact, the regression slopes may both be nonzero even when C is zero. This has two consequences.
First, new strategies must be devised for estimating the structural coefficients A, B and C. Second, the basic definitions of direct and indirect effects must go beyond regression analysis, and should invoke an operation that mimics "fixing M", rather than "conditioning on M". Definitions: Such an operator, denoted do(M = m), was defined in Pearl (1994); it operates by removing the equation of M and replacing it by a constant m. For example, if the basic mediation model consists of the equations: X = f(ε1), M = g(X, ε2), Y = h(X, M, ε3), then after applying the operator do(M = m) the model becomes: X = f(ε1), M = m, Y = h(X, m, ε3), and after applying the operator do(X = x) the model becomes: X = x, M = g(x, ε2), Y = h(x, M, ε3), where the functions f and g, as well as the distributions of the error terms ε1 and ε3, remain unaltered. If we further rename the variables M and Y resulting from do(X = x) as M(x) and Y(x), respectively, we obtain what came to be known as "potential outcomes" or "structural counterfactuals". These new variables provide convenient notation for defining direct and indirect effects. In particular, four types of effects have been defined for the transition from X = 0 to X = 1: (a) Total effect: TE = E[Y(1) − Y(0)]. (b) Controlled direct effect: CDE(m) = E[Y(1, m) − Y(0, m)]. (c) Natural direct effect: NDE = E[Y(1, M(0)) − Y(0, M(0))]. (d) Natural indirect effect: NIE = E[Y(0, M(1)) − Y(0, M(0))]. Here E[·] stands for the expectation taken over the error terms. These effects have the following interpretations: TE measures the expected increase in the outcome Y as X changes from X = 0 to X = 1, while the mediator is allowed to track the change in X as dictated by the function M = g(X, ε2).
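As a concrete illustration, the counterfactual effect definitions can be approximated by Monte-Carlo simulation of an assumed linear model with an X·M interaction; all coefficient values below are illustrative, and the shared error draws make the counterfactuals consistent across interventions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
e2, e3 = rng.normal(size=n), rng.normal(size=n)
b0, b1 = 0.2, 0.5                       # M = b0 + b1*X + e2
c0, c1, c2, c3 = 0.0, 0.3, 0.4, 0.2     # Y = c0 + c1*X + c2*M + c3*X*M + e3

def M(x):
    # Mediator under do(X = x); e2 is shared across interventions.
    return b0 + b1 * x + e2

def Yf(x, m):
    # Outcome under do(X = x, M = m); e3 is shared across interventions.
    return c0 + c1 * x + c2 * m + c3 * x * m + e3

TE = np.mean(Yf(1, M(1)) - Yf(0, M(0)))
NDE = np.mean(Yf(1, M(0)) - Yf(0, M(0)))
NIE = np.mean(Yf(0, M(1)) - Yf(0, M(0)))
# Analytically, for these assumed values: NDE = c1 + b0*c3 = 0.34,
# NIE = b1*c2 = 0.20, TE = NDE + NIE + b1*c3 = 0.64.
```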
Causal mediation analysis: CDE measures the expected increase in the outcome Y as X changes from X = 0 to X = 1, while the mediator is fixed at a pre-specified level M = m uniformly over the entire population. NDE measures the expected increase in Y as X changes from X = 0 to X = 1, while the mediator is set to whatever value it would have obtained under X = 0, i.e., before the change. NIE measures the expected increase in Y when X is held constant at X = 0 and M changes to whatever value it would have attained (for each individual) under X = 1, consistent with the definition NIE = E[Y(0, M(1)) − Y(0, M(0))] above. The difference TE − NDE measures the extent to which mediation is necessary for explaining the effect, while NIE measures the extent to which mediation is sufficient for sustaining it. A controlled version of the indirect effect does not exist because there is no way of disabling the direct effect by fixing a variable to a constant. According to these definitions the total effect can be decomposed as TE = NDE − NIEr, where NIEr stands for the reverse transition, from X = 1 to X = 0; the decomposition becomes additive in linear systems, where reversal of transitions entails sign reversal. The power of these definitions lies in their generality; they are applicable to models with arbitrary nonlinear interactions, arbitrary dependencies among the disturbances, and both continuous and categorical variables. The mediation formula: In linear analysis, all effects are determined by sums of products of structural coefficients, giving, independently of m, NIE = AB. Therefore, all effects are estimable whenever the model is identified. In non-linear systems, more stringent conditions are needed for estimating the direct and indirect effects. For example, if no confounding exists (i.e., ε1, ε2, and ε3 are mutually independent), the following formulas can be derived: TE = E(Y | X = 1) − E(Y | X = 0); CDE(m) = E(Y | X = 1, M = m) − E(Y | X = 0, M = m); NDE = Σm [E(Y | X = 1, M = m) − E(Y | X = 0, M = m)] P(M = m | X = 0); NIE = Σm [P(M = m | X = 1) − P(M = m | X = 0)] E(Y | X = 0, M = m).
Causal mediation analysis: The last two equations are called Mediation Formulas and have become the target of estimation in many studies of mediation. They give distribution-free expressions for direct and indirect effects and demonstrate that, despite the arbitrary nature of the error distributions and the functions f, g, and h, mediated effects can nevertheless be estimated from data using regression. The analyses of moderated mediation and mediating moderators fall out as special cases of causal mediation analysis, and the Mediation Formulas identify how various interaction coefficients contribute to the necessary and sufficient components of mediation.

Example: Assume the model takes the form

X = ε1
M = b0 + b1X + ε2
Y = c0 + c1X + c2M + c3XM + ε3,

where the parameter c3 quantifies the degree to which M modifies the effect of X on Y. Even when all parameters are estimated from data, it is still not obvious which combinations of parameters measure the direct and indirect effects of X on Y, or, more practically, how to assess the fraction of the total effect TE that is explained by mediation and the fraction of TE that is owed to mediation. In linear analysis, the former fraction is captured by the product b1c2/TE, the latter by the difference (TE − c1)/TE, and the two quantities coincide. In the presence of interaction, however, each fraction demands a separate analysis, as dictated by the Mediation Formula, which yields:

NDE = c1 + b0c3
NIE = b1c2
TE = c1 + b0c3 + b1(c2 + c3) = NDE + NIE + b1c3.

Causal mediation analysis: Thus, the fraction of the output response for which mediation would be sufficient is

NIE/TE = b1c2 / (c1 + b0c3 + b1(c2 + c3)),

while the fraction for which mediation would be necessary is

1 − NDE/TE = b1(c2 + c3) / (c1 + b0c3 + b1(c2 + c3)).

Causal mediation analysis: These fractions involve non-obvious combinations of the model's parameters, and can be constructed mechanically with the help of the Mediation Formula.
Significantly, due to interaction, a direct effect can be sustained even when the parameter c1 vanishes and, moreover, a total effect can be sustained even when both the direct and indirect effects vanish. This illustrates that estimating parameters in isolation tells us little about the effect of mediation and, more generally, mediation and moderation are intertwined and cannot be assessed separately.
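To make the arithmetic of the interaction example concrete, the parametric expressions can be evaluated directly. The coefficient values below are hypothetical, chosen only for illustration:

```python
# Hypothetical coefficient values (assumptions, not from the source).
b0, b1 = 0.5, 1.2
c1, c2, c3 = 0.3, 0.8, 0.4

NDE = c1 + b0 * c3                    # natural direct effect
NIE = b1 * c2                         # natural indirect effect
TE = c1 + b0 * c3 + b1 * (c2 + c3)    # total effect

# Decomposition check: TE = NDE + NIE + b1*c3 (the interaction remainder).
assert abs(TE - (NDE + NIE + b1 * c3)) < 1e-12

sufficient = NIE / TE        # fraction for which mediation would be sufficient
necessary = 1 - NDE / TE     # fraction for which mediation would be necessary
```

Note that with c3 ≠ 0 the two fractions differ (here roughly 0.49 versus 0.74), whereas in a purely linear model they would coincide.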
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RCHY1** RCHY1: RING finger and CHY zinc finger domain-containing protein 1 is a protein that in humans is encoded by the RCHY1 gene. Function: The protein encoded by this gene has ubiquitin-protein ligase activity. This protein binds with p53 and promotes the ubiquitin-mediated proteasomal degradation of p53. This gene is oncogenic because loss of p53 function contributes directly to malignant tumor development. Transcription of this gene is regulated by p53. Alternative splicing results in multiple transcript variants encoding different isoforms. Interactions: RCHY1 has been shown to interact with P53 and Androgen receptor.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nandrolone undecanoate** Nandrolone undecanoate: Nandrolone undecanoate (NU), also known as nandrolone undecylate, and sold under the brand names Dynabolon, Dynabolin, and Psychobolan, is an androgen and anabolic steroid medication and a nandrolone ester. It was developed in the 1960s and was previously marketed in France, Germany, Italy, and Monaco, but has since been discontinued and is no longer known to be available. The pharmacokinetics of nandrolone undecanoate alone (Dynabolon) and in combination with other steroid esters (Trophobolene) have been studied and compared.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Katalon Studio** Katalon Studio: Katalon Platform is an automation testing software tool developed by Katalon, Inc. The software is built on top of the open-source automation frameworks Selenium and Appium, with a specialized IDE interface for web, API, mobile, and desktop application testing. Its initial release for internal use was in January 2015. Its first public release was in September 2016. In 2018, the software reached 9% market penetration for UI test automation, according to The State of Testing 2018 Report by SmartBear. Katalon is recognized as a March 2019 and March 2020 Gartner Peer Insights Customers’ Choice for Software Test Automation. Platform: Katalon Platform provides a dual interchangeable interface for creating test cases: a manual view for less technical users and a script view geared toward experienced testers, who can author automated tests with syntax highlighting and intelligent code completion. Katalon Platform follows the Page Object Model pattern. GUI elements on web, mobile, and desktop apps can be captured using the recording utility and stored in the Object Repository, which is accessible and reusable across different test cases. Platform: Test cases can be structured using test suites with environment variables. Test execution can be parameterized and parallelized using profiles. Remote execution in Katalon Platform can be triggered by CI systems via Docker container or the command line interface (CLI). From version 7.4.0, users are able to execute test cases from Selenium projects, in addition to the previously supported migration from TestNG and JUnit projects to Katalon Platform.
Platform: Version 7.8 introduced troubleshooting features intended to reduce debugging effort: Time Capsule, Browser-based Video Recorder, Self-healing, and Test Failure Snapshots. Version 8.4.0 provides native integration with Azure DevOps (ADO), which enables users to map test cases in Azure DevOps to automated test cases in Katalon Platform. Additionally, this integration allows users to automatically send test execution logs and reports from Katalon Platform to test runs in ADO, giving them a clearer picture of the testing process. Other features in this version include desired capabilities reusable across projects, a reported 60% faster load time, and a new product tour. Technologies: The test automation framework provided within Katalon Platform was developed with the keyword-driven approach as the primary test authoring method, with data-driven functionality for test execution. The user interface is a complete integrated development environment (IDE) implemented on the Eclipse rich client platform (RCP). The keyword libraries are a collection of common actions for web, API, and mobile testing. External libraries written in Java can be imported into a project to be used as native functions. The main programming languages used in Katalon Platform are Groovy and Java.
Katalon Platform supports cross-environment test execution based on Selenium and Appium. Supported technologies: Modern web technologies: HTML, HTML5, JavaScript, Ajax, Angular. Windows desktop app platforms: Universal Windows Platform (UWP), Windows Forms (WinForms), Windows Presentation Foundation (WPF), and Classic Windows (Win32). Cross-browser testing: Firefox, Chrome, Microsoft Edge, Internet Explorer (9, 10, 11), Safari, headless browsers. Mobile apps: Android and iOS (native apps and mobile web apps). Web services: RESTful and SOAP. System requirements: Operating systems: Windows 7, Windows 8, Windows 10, macOS 10.11+, Linux (Ubuntu-based). License: Katalon Platform started out as freeware. In October 2019, Katalon introduced a new product set with proprietary licenses in its seventh release. The new products and licenses include Katalon Platform (Free), Katalon Platform Enterprise, and Katalon Runtime Engine, so that teams and projects of varying complexity can have a flexible allocation of budget, licensing, and scalability. Several features that were previously free were moved to the Katalon Platform Enterprise license. Core products: Katalon TestOps: Katalon TestOps is a web-based application that provides visualized test data and execution results through charts, graphs, and reports. Its key features include test management, test planning, and test execution. Katalon TestOps can be integrated with Jira and other CI/CD tools. Katalon TestOps was originally released as Katalon Analytics in November 2017. In October 2019, Katalon officially changed the name to Katalon TestOps. It is currently available in the May 2021 version and is expected to provide DevOps teams with test orchestration. Core products: Katalon Recorder: Katalon Recorder is a browser add-on for recording a user's actions in web applications and generating test scripts. Katalon Recorder supports both Chrome and Firefox.
Katalon Recorder functions in the same way as Katalon Platform's recording utility, but it can also execute test steps and export test scripts in many languages such as C#, Java, and Python. Katalon Recorder 5.4 was released in May 2021. Core products: Katalium: Katalium is a framework that provides a blueprint for test automation projects based on Selenium and TestNG. The framework is built to help users who still need to work with TestNG and Selenium to quickly set up test cases. Katalium Server is a component of the Katalium framework. It is a set of enhancements to improve the user experience with Selenium Grid. Katalium Server can be run as a standalone (single) server in development mode. Core products: Both the Katalium framework and Katalium Server are open-source. Katalon Store: Katalon Store serves as a platform for testers and developers to install add-on products (or ‘plugins’) that add features and optimize test automation strategies in Katalon Platform. Users can install, manage, rate, and write reviews for plugins. In Katalon Store, plugins are made available in three main categories: Integration, Custom Keywords, and Utilities. Katalon Store also allows users to build and submit their own plugins. Integrations: Katalon Platform can be integrated with other software products, including: Software development life cycle (SDLC) management: Jira, TestRail, qTest, and TestLink. CI/CD integration: Jenkins, Bamboo, TeamCity, CircleCI, Azure DevOps, and Travis CI. Team collaboration: Git, Slack, and Microsoft Teams. Execution platform support: Selenium, BrowserStack, SauceLabs, LambdaTest, and Kobiton. Visual testing: Applitools.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phi value analysis** Phi value analysis: Phi value analysis, ϕ analysis, or ϕ-value analysis is an experimental protein engineering technique for studying the structure of the folding transition state of small protein domains that fold in a two-state manner. The structure of the folding transition state is hard to find using methods such as protein NMR or X-ray crystallography because folding transition states are mobile and partly unstructured by definition. In ϕ-value analysis, the folding kinetics and conformational folding stability of the wild-type protein are compared with those of point mutants to find phi values. These measure the mutant residue's energetic contribution to the folding transition state, which reveals the degree of native structure around the mutated residue in the transition state, by accounting for the relative free energies of the unfolded state, the folded state, and the transition state for the wild-type and mutant proteins. Phi value analysis: The protein's residues are mutated one by one to identify residue clusters that are well-ordered in the folded transition state. These residues' interactions can be checked by double-mutant-cycle ϕ analysis, in which the single-site mutants' effects are compared to the double mutants'. Most mutations are conservative and replace the original residue with a smaller one (cavity-creating mutations) like alanine, though tyrosine-to-phenylalanine, isoleucine-to-valine and threonine-to-serine mutants can be used too. Chymotrypsin inhibitor, SH3 domains, WW domain, individual domains of proteins L and G, ubiquitin, and barnase have all been studied by ϕ analysis. Mathematical approach: Phi is defined thus:

ϕ = (ΔG^WT_TS→D − ΔG^MT_TS→D) / (ΔG^WT_N→D − ΔG^MT_N→D),

where ΔG^WT_TS→D is the difference in energy between the wild-type protein's transition and denatured state, ΔG^MT_TS→D is the same energy difference but for the mutant protein, and the ΔG_N→D terms are the differences in energy between the native and denatured state.
The phi value is interpreted as how much the mutation destabilizes the transition state relative to the folded state. Mathematical approach: Though ϕ may have been meant to range from zero to one, negative values can appear. A value of zero suggests the mutation doesn't affect the structure of the folding pathway's rate-limiting transition state, and a value of one suggests the mutation destabilizes the transition state as much as the folded state; values near zero suggest the area around the mutation is relatively unfolded or unstructured in the transition state, and values near one suggest the transition state's local structure near the mutation site is similar to the native state's. Conservative substitutions on the protein's surface often give phi values near one. When ϕ is well between zero and one, it is less informative, as it cannot distinguish between two cases: the transition state itself is partly structured, or there are two protein populations of near-equal size, one mostly unfolded and the other mostly folded. Key assumptions: Phi value analysis assumes Hammond's postulate, which states that energy and chemical structure are correlated. Though the relationship between the folding intermediate and native state's structures may mirror the relationship between their energies when the energy landscape has a well-defined, deep global minimum, free energy destabilizations may not give useful structural information when the energy landscape is flatter or has many local minima. Key assumptions: Phi value analysis assumes the folding pathway isn't significantly altered, though the folding energies may be. As nonconservative mutations may not bear this out, conservative substitutions, though they may give smaller energetic destabilizations which are harder to detect, are preferred.
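The arithmetic behind the definition is simple enough to sketch in code. The function name and the example free-energy values below are hypothetical, chosen only to illustrate the ratio of transition-state to native-state destabilization:

```python
def phi_value(dG_ts_d_wt, dG_ts_d_mut, dG_n_d_wt, dG_n_d_mut):
    """phi = (destabilization of the transition state) /
             (destabilization of the native state),
    both measured relative to the denatured state."""
    ddG_ts = dG_ts_d_wt - dG_ts_d_mut  # change in TS stability upon mutation
    ddG_n = dG_n_d_wt - dG_n_d_mut     # change in native-state stability
    return ddG_ts / ddG_n

# Hypothetical free energies (arbitrary units): the mutation destabilizes the
# transition state nearly as much as the native state, so phi is close to one,
# suggesting native-like local structure in the transition state.
print(phi_value(10.0, 6.0, 12.0, 7.0))  # (10-6)/(12-7) = 0.8
```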
Key assumptions: Restricting ϕ to numbers greater than zero is the same as assuming the mutation increases the stability and lowers the energy of neither the native nor the transition state. In the same vein, it is assumed that interactions that stabilize a folding transition state are like those of the native structure, though some protein folding studies found that stabilizing non-native interactions in a transition state facilitates folding. Example: barnase: Alan Fersht pioneered phi value analysis in his study of the small bacterial protein barnase. Using molecular dynamics simulations, he found that the transition state between folding and unfolding looks like the native state and is the same no matter the reaction direction. Phi varied with the mutation location as some regions gave values near zero and others near one. The distribution of ϕ values throughout the protein's sequence agreed with the simulated transition state except for one helix, which folded semi-independently and made native-like contacts with the rest of the protein only once the transition state had formed fully. Such variation in the folding rate in one protein makes it hard to interpret ϕ values, as the transition state structure must otherwise be compared to folding-unfolding simulations which are computationally expensive. Variants: Other 'kinetic perturbation' techniques for studying the folding transition state have appeared recently. Best known is the psi (ψ) value, which is found by engineering two metal-binding amino acid residues like histidine into a protein and then recording the folding kinetics as a function of metal ion concentration, though Fersht thought this approach difficult. A 'cross-linking' variant of the ϕ-value was used to study segment association in a folding transition state as covalent crosslinks like disulfide bonds were introduced.
ϕ-T value analysis has been used as an extension of ϕ-value analysis to measure the response of mutants as a function of temperature, to separate enthalpic and entropic contributions to the transition state free energy. Limitations: The error in equilibrium stability and aqueous (un)folding rate measurements may be large when values of ϕ for solutions with denaturants must be extrapolated to nearly pure aqueous solutions, or when the stability difference between the native and mutant protein is 'low', or less than 7 kJ/mol. This may cause ϕ to fall beyond the zero-one range. Calculated values of ϕ depend strongly on how many data points are available. A study of 78 mutants of the WW domain with up to four mutations per residue has quantified which types of mutations avoid interference from native state flexibility, solvation, and other effects, and statistical analysis shows that reliable information about transition state perturbation can be obtained from large mutant screens.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Arsinic acid** Arsinic acid: Arsinic acids are organoarsenic compounds with the formula R2AsO2H. They are formally, but not actually, related to arsinic acid, a hypothetical compound of the formula H2AsO2H. Arsinic acids are monoprotic, weak acids. They react with sodium sulfide to give the dithioarsinates R2AsS2Na. Arsinic acids are related to phosphinic acids (R2PO2H). Well known arsinic acids include diphenylarsinic acid and cacodylic acid, R2AsO2H (R = Ph, Me, respectively).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Revengers** Revengers: The Revengers is the name of different fictional teams appearing in American comic books published by Marvel Comics. Publication history: The Revengers is a fictional team of supervillains who were formed to fight A-Next in the MC2 series A-Next. They were created by Tom DeFalco and Ron Frenz. The Revengers is also the name of a comical parody of The Avengers in Earth-665, the setting of Marvel's parody comic Not Brand Echh. In September 2011, an Earth-616 version of the Revengers led by Wonder Man appears. They were created by Brian Michael Bendis. Fictional team history: MC2 After a tragic mission that claimed the lives of several Avengers, including Hank Pym and the Wasp, their children were furious to see A-Next, a team of heroes referred to by people as the "next generation of Avengers". The children of Hank Pym and Wasp used their parents' technology to replicate their powers. The daughter, Hope Pym, duplicated her mother's powers as the Red Queen, while her brother, Henry Pym Jr., copied his father's powers as Big Man. Fictional team history: Red Queen also helped create an energetic villain called Ion Man, whom she sent to kill Mainframe. Although he failed (unknowingly, since Mainframe was later rebuilt), he passed her test. Red Queen soon added more villains to her fold (Spider-Girl's nemesis, Killerwatt, and Wild Thing's homicidal half-brother Sabreclaw). She waited until the members of the Avengers were at their most vulnerable before sending her team, the Revengers, to destroy them. Red Queen herself wanted to personally torture Stinger, as she felt that Stinger dishonored her parents' memories, but the reserve members of A-Next defeated the Revengers and Big Man turned himself in, stopping his crazed sister, as Big Man did not want to be a killer. Fictional team history: However, in Last Planet Standing, the Revengers returned with the aid of Magneta to fight A-Next again. The fight was halted when Galactus came to destroy Earth.
A-Next teamed up with a number of other heroes while the Revengers got away, minus Sabreclaw, who stayed behind to help fight and later joined A-Next to help fill out their roster. Fictional team history: Cancerverse In the Cancerverse reality of Earth-10011, the Revengers were that reality's version of the Avengers who were corrupted by that reality's version of Captain Marvel, who turned them into servants of the Many-Angled Ones. They were allies with the Defenders of the Realm and the Ex-Men. The Revengers were all destroyed when Thanos brought Death to this Deathless reality. Fictional team history: Earth-616 In 2011, another version of the Revengers appears in the Earth-616 universe. Formed by Wonder Man (whose ionic energy leaking problem has caused him to become convinced that the Avengers are not helping the world and that he must stop them), it consists of lesser-known heroes whom he has convinced to help him as this antihero group. During the Revengers' attack on Avengers Mansion, where they fought the New Avengers, Ms. Marvel tries to reason with Wonder Man, which does not work. This team manages to defeat the New Avengers and moves on to attack Stark Tower. Wonder Man has Atlas attack Stark Tower to get their attention after calling a press conference. After failing to reason with Wonder Man, Iron Man trapped him in a stasis container. When the Avengers do not want to fight with so many civilians nearby, Thor teleports the Revengers to Citi Field and all three teams (the main Avengers, the New Avengers, and the Secret Avengers) gang up on the Revengers all at once. With the Revengers imprisoned at the Raft, each member is interrogated with Captain America, Thor, and Iron Man watching the video feed of the interrogation. Beast later visits Wonder Man in his stasis container. Wonder Man insists that he is acting of his own free will and remains steadfast in his claim that the Avengers must disband before more people are hurt.
He also adds something new: his realization that Scarlet Witch created him and that he probably is not even real. Appealing to their friendship, Wonder Man tries to extract a promise from Beast to shut the Avengers down if he comes to realize that Wonder Man is in the right, but a distressed Beast walks away. Various news programs are buzzing about the Avengers' lack of transparency and stonewalling tactics. Some openly speculate that the time for a self-appointed hero team is over and done. In his bubble, Wonder Man smiles and vanishes into a white light. Fictional team history: New Revengers As part of the All-New, All-Different Marvel, the Maker of W.H.I.S.P.E.R. assembles a new incarnation of the Revengers dubbed the New Revengers with plans to have them face the New Avengers. They consist of Asti the All-Seeing, Paibok, Vermin, White Tiger, and alternate versions of Angar the Screamer and Skar. During the Civil War II storyline, the New Revengers gain City's O.M.N.I.T.R.O.C.U.S. form as its latest member. At the time when A.I.M. was facing off against S.H.I.E.L.D., the Maker took advantage of this by sending his New Revengers to attack them. While O.M.N.I.T.R.O.C.U.S. kept Sunspot trapped in his office and had his own defense system attack him, Angela del Toro fought with her aunt Ava Ayala while the other Revengers members attacked the rest of the New Avengers and the staff of Avengers Base Two. Donning a variation of Pepper Potts' Rescue armor, Toni Ho managed to slay the alternate Skar. Also, Mockingbird managed to get free from O.M.N.I.T.R.O.C.U.S.' clutches with the help of Warlock while Ava broke Angela free from the combined influences of the Tiger God and the Hand. The remaining members of the New Revengers faced off against the New Avengers and the rest of the A.I.M. staff. While the remaining members of the New Revengers were defeated, the Maker got away. Roster: MC2 version Ion Man - An ionic-powered villain created by Red Queen.
Ion Man has the power to fly and fire blasts of destructive energy. Killerwatt - Electrically powered enemy of Spider-Girl, Killerwatt's wicked sense of humor is made less funny by his tendency to fire lightning bolts at the people he tells his jokes to. Magneta - Obsessed fan of Magneto who abandoned her dreams of being a hero and uses her own magnetic powers to help the Red Queen fight A-Next. Red Queen - Daughter of Henry Pym and the Wasp, Hope Pym copied her mother's electric sting blasts and insect-wings and donned a sinister version of her mother's Wasp costume. She is the team leader and has a pathological hatred of Stinger. Former members Sabreclaw - Hudson Logan is the son of Wolverine and the half-brother to Wild Thing. His healing factor and deadly claws are made more frightening by his sociopathic nature. He later joined A-Next. Big Man - Brother to Red Queen, Henry Pym Jr. copied his father's growing abilities. Big Man joined the team to watch over his sister. Earth-616 version Wonder Man - Leader Anti-Venom - Anti-Venom joined up with the Revengers because he thought Wonder Man might be right about what he claims about the Avengers. Atlas - Atlas joined up with the Revengers out of anger that his numerous requests to join the Fifty State Initiative were denied. Captain Ultra - Captain Ultra joined up with the Revengers where despite being part of the Fifty State Initiative, he resented being disrespected despite having as much power as an Avenger. Century - Century sided with Wonder Man out of sense of honor to him and recognition of the cycle of life where he previously blamed himself for Wonder Man's previous death and wanted to make amends. Demolition Man - Demolition Man claimed the Grandmaster called him to reclaim the Infinity Gems from the Avengers and that the Avengers have not been returning his calls, leading him to be recruited into the Revengers. This happened while he was suffering from brain damage. 
Devil-Slayer - He joined the Revengers in order to make a reality where the Avengers were held accountable for their actions. Ethan Edwards - He joined the Revengers in order to avenge the defeat of the Skrulls at the end of their Secret Invasion. Goliath V - He joined the Revengers because he still blames Iron Man for his uncle Bill Foster's death at the hands of Ragnarok. New Revengers Maker - Leader White Tiger - Paibok - Asti the All-Seeing - Skar - This version was taken from an as-yet-unidentified reality. Angar the Screamer - This version was taken from an as-yet-unidentified reality. Vermin - O.M.N.I.T.R.O.C.U.S. - In other media: The Revengers appear in the Avengers Assemble episode "Ant-Man Makes It Big" as the main characters of the in-universe film Human Ant and the Revengers, consisting of the Human Ant, Iron Guy, Colonel America, the Bulk, Dark Spider, and the Viking King. A group loosely inspired by the Revengers appears in Thor: Ragnarok, formed and led by Thor and consisting of Valkyrie, Bruce Banner / Hulk, and Loki, with Korg and Miek as associates. This version of the group was formed by Thor to help him escape Sakaar and return to Asgard, where he quickly disbands the group while fighting Hela.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bump mapping** Bump mapping: Bump mapping is a texture mapping technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth one, although the surface of the underlying object is not changed. Bump mapping was introduced by James Blinn in 1978. Normal mapping is the most common variation of bump mapping used. Principles: Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by simulating small displacements of the surface. However, unlike displacement mapping, the surface geometry is not modified. Instead, only the surface normal is modified as if the surface had been displaced. The modified surface normal is then used for lighting calculations (using, for example, the Phong reflection model), giving the appearance of detail instead of a smooth surface. Principles: Bump mapping is much faster and consumes fewer resources for the same level of detail compared to displacement mapping because the geometry remains unchanged. Principles: There are also extensions which modify other surface features in addition to increasing the sense of depth. Parallax mapping and horizon mapping are two such extensions. The primary limitation of bump mapping is that it perturbs only the surface normals without changing the underlying surface itself. Silhouettes and shadows therefore remain unaffected, which is especially noticeable for larger simulated displacements. This limitation can be overcome by techniques including displacement mapping, where bumps are applied to the surface, or by using an isosurface. Principles: Methods: There are two primary methods to perform bump mapping. The first uses a height map for simulating the surface displacement, yielding the modified normal.
This is the method invented by Blinn and is usually what is referred to as bump mapping unless otherwise specified. The steps of this method are summarized as follows. Before a lighting calculation is performed for each visible point (or pixel) on the object's surface: (1) look up the height in the heightmap that corresponds to the position on the surface; (2) calculate the surface normal of the heightmap, typically using the finite difference method; (3) combine the surface normal from step two with the true ("geometric") surface normal so that the combined normal points in a new direction; (4) calculate the interaction of the new "bumpy" surface with lights in the scene using, for example, the Phong reflection model. The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance changes as lights in the scene are moved around. Principles: The other method is to specify a normal map which contains the modified normal for each point on the surface directly. Since the normal is specified directly instead of derived from a height map, this method usually leads to more predictable results. This makes it easier for artists to work with, making it the most common method of bump mapping today. Realtime bump mapping techniques: Realtime 3D graphics programmers often use variations of the technique in order to simulate bump mapping at a lower computational cost. Realtime bump mapping techniques: One typical way was to use a fixed geometry, which allows one to use the heightmap surface normal almost directly. Combined with a precomputed lookup table for the lighting calculations, the method could be implemented with a very simple and fast loop, allowing for a full-screen effect. This method was a common visual effect when bump mapping was first introduced.
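The height-map method can be sketched in a few lines of code. This is a minimal illustration rather than a production shader: the heightmap function, the flat base surface with geometric normal (0, 0, 1), and the simple Lambert diffuse term are all assumptions made for the example.

```python
import math

def height(u, v):
    # Hypothetical heightmap: a smooth, repeating bump pattern (step 1: lookup).
    return 0.1 * math.sin(10 * u) * math.sin(10 * v)

def bumped_normal(u, v, eps=1e-4):
    # Step 2: slopes of the heightmap via central finite differences.
    dh_du = (height(u + eps, v) - height(u - eps, v)) / (2 * eps)
    dh_dv = (height(u, v + eps) - height(u, v - eps)) / (2 * eps)
    # Step 3: combine with the geometric normal (0, 0, 1) of the assumed flat
    # surface by tilting it against the slopes, then renormalize.
    nx, ny, nz = -dh_du, -dh_dv, 1.0
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / norm, ny / norm, nz / norm)

def lambert(normal, light_dir):
    # Step 4: a simple diffuse lighting term using the perturbed normal.
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(dot, 0.0)

n = bumped_normal(0.3, 0.7)
shade = lambert(n, (0.0, 0.0, 1.0))  # overhead light on the perturbed surface
```

Because only the normal is perturbed, the geometry stays flat; varying light_dir makes the shading shift as if the bumps were real, which is the effect the text describes.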
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**SimPy** SimPy: SimPy is a process-based discrete-event simulation framework based on standard Python. It enables users to model active components such as customers, vehicles, or agents as simple Python generator functions. SimPy is released as open source software under the MIT License. The first version was released in December 2002. Its event dispatcher is based on Python's generators and can be used for asynchronous networking or to implement multi-agent systems (with both simulated and real communication). Simulations can be performed "as fast as possible", in real time (wall clock time), or by manually stepping through the events. Though it is theoretically possible to do continuous simulations with SimPy, it has no features to support them. Conversely, SimPy is overkill for simulations with a fixed step size in which processes do not interact with each other or with shared resources; a simple while loop suffices in such cases. SimPy: Additionally, SimPy provides different types of shared resources to simulate congestion points that have limited capacity, such as servers, checkout counters, and tunnels. In version 3.1 and above, SimPy offers monitoring capabilities to assist in collecting statistics about processes and resources. SimPy 3.0 requires Python 3, while SimPy 4.0 requires Python 3.6+. The SimPy distribution contains tutorials, documentation, and examples. Example: The following is a SimPy simulation showing a clock process that prints the current simulation time at each step:
**OpenVRML** OpenVRML: OpenVRML is a free and open-source software project that makes it possible to view three-dimensional objects in the VRML and X3D formats in Internet-based applications. The software was initially developed by Chris Morley; since 2000 the project has been led by Braden McDaniel. OpenVRML provides a GTK+-based plugin to render VRML and X3D worlds in web browsers. Its libraries can be used to add VRML and X3D support to applications. The software is licensed under the terms of the GNU Lesser General Public License (LGPL) and distributed as a GNU-style source package that is portable to most POSIX systems with a C++ compiler. The source distribution also includes project files for building on Microsoft Windows with the freely available Visual C++ Express compiler. Binary (compiled) versions of the software are available within the Linux distributions Fedora and Debian, as well as under FreshPorts for FreeBSD and Fink for Mac OS X. OpenVRML: A number of software applications are designed to generate VRML code; see for instance GNU Octave.
**Twentify** Twentify: Twentify is an app-based market research company that studies consumer behavior, products, communications, brands, and markets. History: The company was founded in February 2014 by İlker İnanç, Çağlar Bozkurt and Tolga Bakkaloğlu. After being founded in Istanbul, Turkey, the company extended its operations to North America, starting with Canada. It has also piloted projects in Mexico, Ukraine, South Africa, Nigeria and Thailand. Twentify currently has operations in Turkey with its Istanbul office, in Canada with its Ottawa, ON office, and in the United States with its New York, NY office. Awards: Winner of the Webrazzi Arena Startup Competition; winner of a TechCrunch Disrupt Startup Alley Wild Card position.
**Phenomenological model** Phenomenological model: A phenomenological model is a scientific model that describes the empirical relationship of phenomena to each other, in a way which is consistent with fundamental theory, but is not directly derived from theory. In other words, a phenomenological model is not derived from first principles. A phenomenological model forgoes any attempt to explain why the variables interact the way they do, and simply attempts to describe the relationship, with the assumption that the relationship extends past the measured values. Regression analysis is sometimes used to create statistical models that serve as phenomenological models. Examples of use: Phenomenological models have been characterized as being completely independent of theories, though many phenomenological models, while failing to be derivable from a theory, incorporate principles and laws associated with theories. The liquid drop model of the atomic nucleus, for instance, portrays the nucleus as a liquid drop and describes it as having several properties (surface tension and charge, among others) originating in different theories (hydrodynamics and electrodynamics, respectively). Certain aspects of these theories—though usually not the complete theory—are then used to determine both the static and dynamical properties of the nucleus.
**Psychology of science** Psychology of science: The psychology of science is a branch of the studies of social science defined most simply as the study of scientific thought or behavior. It is a collection of studies of various topics. Psychological thought about science has been around since the late 19th century; although research on the psychology of science began in 1874, the field has seen a substantial expansion of activity in recent years. The field first gained popularity mostly in the 1960s, with Abraham Maslow publishing an influential text on the subject (Maslow, 1966), but this popularity faded, only re-emerging in the 1980s (e.g., Simonton, 1988). Other studies of science include philosophy of science, history of science, and sociology of science or sociology of scientific knowledge. Psychology of science: The psychology of science applies methods and theory from psychology to the analysis of scientific thought and behavior, each of which is defined both narrowly and broadly. Narrowly defined, "science" refers to thought and behavior of scientists and technologists. More broadly defined, "science" refers to thought and behavior of anyone (past and present) of any age engaged in problem finding and problem solving, scientific theory construction, learning scientific or mathematical concepts, scientific modelling, testing plausible rival hypotheses, or other scientific reasoning. The methods of psychology that are applied to the study of scientific thought and behavior include: psychohistorical, psychobiographical, observational, descriptive, correlational, and experimental techniques (e.g., Gholson et al., 1989; Giere, 1992; Kowlowski, 1996; Magnani et al., 1999; Carruthers et al., 2002; Feist, 2006; Proctor & Capaldi, 2012; Feist & Gorman, 2013). 
Psychology of science: The psychology of science includes research in many subfields of psychology, such as but not limited to neuroscientific, developmental, educational, cognitive, personality, social, and clinical (Feist, 2011). A recent branch of psychology of science investigates attitudes towards science and science skepticism (e.g. Rutjens, Heine et al., 2018; Rutjens, Sutton et al., 2018). Gregory Feist's 2006 book The Psychology of Science and the Origins of the Scientific Mind (Feist, 2006), and the 2013 edited book Handbook of the Psychology of Science (Feist & Gorman, 2013) review and integrate many sub-disciplines of psychology.
**Checkpoint Systems** Checkpoint Systems: Checkpoint Systems is an American company that specializes in loss prevention and merchandise visibility for retail companies. It makes products that allow retailers to check inventory, quicken the replenishment cycle, prevent out-of-stocks and reduce theft. Checkpoint offers Electronic Article Surveillance (EAS) radio frequency solutions for retail, high-theft and loss-prevention solutions, RFID hardware, software, and labeling capabilities. It is currently a division of CCL Industries, which acquired Checkpoint in 2016. History: The Checkpoint system was invented in the United States during the 1960s, when Peter Stern, who was president of the Board of Library Directors in the city of Philadelphia, was deeply concerned about the widespread theft of books from public libraries. He was the leader of a team of researchers working at a privately held converter of paperboard and paper, and believed this was a problem his team could tackle and that a system could be developed to prevent books from being stolen. Thus the CHECKPOINT system was developed and patented. History: 1977–1998 The CHECKPOINT system was based on sheets of non-ferrous metal laminated into flexible tags, which reacted with magnetic metal-detection technology housed in a turnstile. Trim and unobtrusive, these paper tags could be attached to book covers. In 1969, Checkpoint Systems was formed as a wholly-owned subsidiary of Logistics Industries Corporation. Eight years later, on June 30, 1977, Checkpoint was spun off from its parent company and began trading on NASDAQ under the symbol CHECK. By then, the company's CHECKPOINT technology was already being adapted for use in retail. History: Within the next twenty years, Checkpoint Systems implemented RF electronic article surveillance (EAS) across different stores and in October 1993, the company's common stock began trading on the New York Stock Exchange under the symbol CKP. 
Its operations expanded through acquisitions, and in the mid-1990s, after purchasing two European systems manufacturers, Checkpoint established direct access to the European market. Its security systems were marketed to retail customers, including drug store chains, hypermarkets, supermarkets, mass merchandisers, discount stores and electronics retailers, as well as libraries in the United States. History: 1999–2006 In 1999, Checkpoint Systems broadened its product offering with the purchase of METO, a German provider of handheld labeling systems used by food and discount retailers to brand and price mark merchandise. The newly acquired company doubled Checkpoint's revenues and helped to expand relationships with European retailers. Two years later, Checkpoint bought A.W. Printing Inc., a U.S.-based printer of tickets, tags, and labels for apparel retailers and brand owners. This acquisition expanded the company's label printing operations and gave entrée to customers in the soft goods market segment. History: By the mid-2000s, Checkpoint had a source tagging program, facilitated by the company's service bureau business. Check-Net®, a web-based platform, started to provide apparel retailers and brand owners with a repository and logistics service to manage all their retail labeling needs. In 2006, Checkpoint further expanded its source tagging business with new print technology and production capabilities through the purchase of ADS Worldwide, a UK-based supplier of apparel labels, tags, and trim products. History: Acquisitions (2007–present) Over the next five years, several acquisitions enabled Checkpoint to extend its offering of merchandise availability applications for retailers: In 2007, Alpha S3, a U.S.-based provider of security products for protecting high-theft merchandise, and SIDEP, a supplier of EAS systems operating in France and China. In 2008, Asialco Electronics Company. 
A China-based manufacturer of RF-EAS labels to meet growing demand in emerging Asian markets. Also in 2008, OATSystems, an RFID application software company founded by Prasad Putta and Dr. Sanjay Sarma (who co-founded the MIT Auto-ID Lab), enabled Checkpoint to provide tracking and inventory management solutions throughout the supply chain. In 2009, Brilliant Label Manufacturing Ltd., a China-based manufacturer of apparel labels and tags, added more labeling products for the global apparel industry. In 2011, Shore to Shore, a global manufacturer of labels and tags for apparel and footwear, expanded Checkpoint's production capabilities and global reach. In 2016, Checkpoint announced a collaboration with Microsoft Corp. to bring its RFID Merchandise Visibility solutions to the cloud and utilize Microsoft analytical tools.
**Serrate RNA effector molecule homolog** Serrate RNA effector molecule homolog: Serrate RNA effector molecule homolog (SRRT) also known as arsenite-resistance protein 2 (ARS2) is a protein that in humans is encoded by the SRRT gene. The SRRT gene product plays a role in RNA-mediated gene silencing (RNAi) by miRNAs. Independently of its activity on miRNAs, it is necessary and sufficient to promote neural stem cell self-renewal, by directly binding to the SOX2 promoter and positively regulating its transcription. It enables the binding activity of the mRNA cap binding complex and the adaptor activity of certain protein molecules. It can be found in the nucleoplasm and is part of the ribonucleoprotein complex. It is involved in cell cycle progression around the S phase. It does not directly confer arsenite resistance but rather modulates arsenic sensitivity. Diseases associated with SRRT include spondylocostal dysostosis and cerebral arteriopathy.
**Mezzaluna** Mezzaluna: A mezzaluna (Italian: [ˌmɛddzaˈluːna]) is a knife consisting of one or more curved blades with a handle on each end, which is rocked back and forth chopping the ingredients below with each movement. They most commonly have a single blade, but are sometimes seen with two or three blades. It is typically used for mincing herbs or garlic, but it can be used for chopping other things such as cheese or meat. Very large single blade versions are sometimes used for pizza. Common uses in Italy include preparation of a soffritto or a pesto, etc. Name: Mezzaluna means "half moon" in Italian, after the curved shape of the blade, and is the most common name used in the UK. Other names used include herb chopper, hachoir [aʃ.waʁ] (from French) and hokmesser (from Yiddish). Cutting board: Mezzalunas may be found sold with a cutting board that has a shallow indentation in it, marketed as a herb chopper.
**Sun protective clothing** Sun protective clothing: Sun protective clothing is clothing specifically designed for sun protection and is produced from a fabric rated for its level of ultraviolet (UV) protection. A novel weave structure and denier (related to thread count per inch) may produce sun protective properties. In addition, some textiles and fabrics employed in the use of sun protective clothing may be pre-treated with UV-inhibiting ingredients during manufacture to enhance their effectiveness. Sun protective clothing: In addition to special fabrics, sun protective clothing may also adhere to specific design parameters, including styling appropriate to full coverage of the skin most susceptible to UV damage. Long sleeves, ankle-length trousers, knee- to floor-length skirts, knee- to floor-length dresses, and collars are common styles for clothing as a sun protective measure. Sun protective clothing: A number of fabrics and textiles in common use today need no further UV-blocking enhancement based on their inherent fiber structure, density of weave, and dye components, especially darker colors and indigo dyes. Good examples of these fabrics contain full percentages or blends of heavy-weight natural fibers like cotton, linen and hemp or light-weight synthetics such as polyester, nylon, spandex and polypropylene. Natural or synthetic indigo-dyed denim, twill weaves, canvas and satin are also good examples. However, a significant disadvantage is the heat retention caused by heavier-weight and darker-colored fabrics. As sun protective clothing is usually meant to be worn during warm and humid weather, some UV-blocking textiles and clothing may be designed with ventilated weaves, moisture wicking and antibacterial properties to assist in cooling and breathability. 
Sun protective clothing: UPF (ultraviolet protection factor) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to SPF (sun protection factor) ratings for sunscreen. While standard summer fabrics have UPF ~6, sun protective clothing typically has UPF ~30, which means that only 1 out of ~30 units of UV will pass through (~3%). History: Although clothing has been used for protection against solar exposure for thousands of years, modern sun protective clothing was popularized (but not exclusively used) in Australia as an option or adjunct to sunscreen lotions and sunblock creams. Sun protective clothing and UV protective fabrics in Australia now follow a lab-testing procedure regulated by a Commonwealth agency: ARPANSA. This standard was established in 1996 after work by Australian swimwear companies. The British standard was established in 1998 by the National Radiological Protection Board and the British Standards Institute. Using the Australian method as a model, the US standard was formally established in 2001, and now employs a more-stringent testing protocol that includes fabric longevity, abrasion/wear and washability. UPF testing is now widely used on clothing for outdoor activities. History: The original UPF rating system was enhanced in the United States by the American Society for Testing and Materials (ASTM) Committee D13.65, at the behest of the Federal Trade Commission (FTC) and the Consumer Product Safety Commission, to qualify and standardize the emerging sun protective clothing and textile industry. 
When the Food and Drug Administration (FDA) discontinued regulating sun-protective clothing, the Solar Protective Factory (whose CEO chaired the ASTM Committee) took the lead in developing the UPF testing protocols and labeling standards that are presently used in the United States. In 1992, the FDA reviewed clothing that was being marketed with claims of sun protection (SPF, % UV blockage, or skin cancer prevention). Only one brand of sun protective clothing, Solumbra, was cleared under medical device regulations. The FDA initially regulated sun protective clothing as a medical device, but later transferred oversight for general sun protective clothing to the FTC. The UPF rating system may eventually be adopted by interested apparel/textile/fabric manufacturers as a "value added" program for consumer safety and awareness. Before UPF standards were in place (which directly measure a fabric's ability to block UV radiation), clothing was previously rated using SPF standards (which measure how long a person's skin takes to redden). Fabric: Factors that affect the level of sun protection provided by a fabric, in approximate order of importance, include weave, color, weight, stretch, and wetness. The less open or more dense the fabric (weave, weight, stretch), the better the protection. Getting a fabric wet reduces the protection as much as half, except for silk and viscose which can get more protective when wet. Polyester contains a benzene ring that absorbs UV light. In addition, UV absorbers may be added at various points in the manufacturing process to enhance protection levels. In 2003, chemical company BASF embedded nanoparticles of titanium dioxide into a nylon fabric, which can be used for sun protective clothing that maintains its UV protection when wet. There is some indication that washing fabrics in detergents containing fabric brighteners, which absorb UV radiation, might increase their protective capability. 
Studies at the University of Alberta also indicate that darker-colored fabrics offer more protection than lighter-colored fabrics. While there is some correlation between the percentages of visible light and UV that pass through the same fabric, it is not a strong relationship. With new-technology textiles designed for the sole purpose of UV blocking, it is not always possible to judge the UV protection level simply by holding up the fabric and examining how much visible light passes through.

Provide more protection: specially manufactured fabrics; cotton viscose fabrics; black or dark blue denim jeans; wool garments; satin-finished silk of any weight; tightly woven bamboo/Lycra fabric; polyacrylonitrile; 100% polyester; shiny polyester blends; tightly woven fabrics; REPREVE fabric; unbleached cotton (most cotton sold is bleached); bamboo/cotton blend.

Provide less protection: polyester crepe; bleached cotton; viscose knits; undyed/white jeans; worn/old fabric.

UPF rating: A relatively new rating designation for sun protective textiles and clothing is UPF (ultraviolet protection factor), which represents the ratio of sunburn-causing UV measured without and with the protection of the fabric. For example, a fabric rated UPF 30 means that, if 30 units of UV fall on the fabric, only 1 unit will pass through to the skin. A UPF 30 fabric that blocks 29 out of 30 units of UV is therefore blocking 96.7%. Unlike SPF (sun protection factor) measurements that traditionally use human sunburn testing, UPF is measured using a laboratory instrument (spectrophotometer or spectroradiometer) and an artificial light source, and then applying a sunburn weighting curve (erythemal action spectrum) across the relevant UV wavelengths. Theoretically, human SPF testing and instrument UPF testing both generate comparable measurements of a product's ability to protect against sunburn. 
UPF rating: The ASTM publishes a standard for sun protective clothing and swimwear. According to testing by Consumer Reports, UPF 30+ is typical for protective fabrics, while UPF 20 is typical for standard summer fabrics. UPF testing protocol: Developed in 1998 by Committee RA106, the testing standard for sun protective fabrics in the United States is the American Association of Textile Chemists and Colorists (AATCC) Test Method 183. This method is based on the original guidelines established in Australia in 1994. UPF testing protocol: AATCC 183 should be used in conjunction with other related standards including ASTM D 6544 and ASTM D 6603. ASTM D 6544 specifies simulating the life cycle of a fabric so that a UPF test can be done near the end of the fabric's life, when it typically provides the least UV protection. ASTM D 6603 is a consumer format recommended for visible hangtag and care labeling of sun protective clothing and textiles. A manufacturer may publish a test result to a maximum of UPF 50+. UPF testing protocol: Sun protective clothing and textile/fabric manufacturers are currently a self-regulating industry in North America, prescribed by the AATCC and ASTM methods of testing.
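The arithmetic behind UPF ratings described above is simple: the transmitted fraction of sunburn-weighted UV is 1/UPF, so the blocked percentage follows directly. A one-line helper (the name `percent_blocked` is illustrative, not part of any standard) makes this explicit:

```python
def percent_blocked(upf):
    """Percentage of sunburn-weighted UV blocked by a fabric of the given UPF."""
    return (1.0 - 1.0 / upf) * 100.0

# A UPF 30 fabric passes 1 unit in 30, blocking roughly 96.7% of UV.
print(round(percent_blocked(30), 1))  # → 96.7
```

The same formula shows why ratings above 50 add little in practice: UPF 50 already blocks 98.0%.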
**Psychoticism** Psychoticism: Psychoticism is one of the three traits used by the psychologist Hans Eysenck in his P–E–N (psychoticism, extraversion and neuroticism) model of personality. Nature: Psychoticism is conceptually similar to the constraint factor in Tellegen's three-factor model of personality. Psychoticism may be divided into narrower traits such as impulsivity and sensation-seeking. These may in turn be further subdivided into even more specific traits. For example, impulsivity may be divided into narrow impulsivity (unthinking responsivity), risk taking, non-planning, and liveliness. Sensation seeking has also been analysed into a number of separate facets. Nature: Eysenck argued that there might be a correlation between psychoticism and creativity. Critics: Critics have suggested that the trait is too heterogeneous to be taken as a single trait. Costa and McCrae believe that agreeableness and conscientiousness (both of which represent low levels of psychoticism) need to be distinguished in personality models. It has also been suggested that "psychoticism" may be a misnomer and that "psychopathy" or "Impulsive Unsocialized Sensation Seeking" would be better labels. Biological bases: Psychoticism is believed to be associated with levels of dopamine. Other biological correlates of psychoticism include low conditionability and low levels of monoamine oxidase. Beta-hydroxylase, cortisol, and norepinephrine in cerebrospinal fluid also appear relevant to psychoticism level. Eysenck's theoretical basis for the model was the theory of Einheitspsychosen (unitary psychosis) of the nineteenth-century German psychiatrist Heinrich Neumann. More information: Eysenck, H.J. & Eysenck, S.B.G. (1976). Psychoticism as a Dimension of Personality. London: Hodder and Stoughton
**Pistachio ice cream** Pistachio ice cream: Pistachio ice cream or pistachio nut ice cream is an ice cream flavor made with pistachio nuts or flavoring. It is often distinctively green in color. Pistachio is also a flavor of sorbet and gelato. Pistachio ice cream is a layer in spumoni. Pistachio ice cream: At the Bakdash in Damascus, Syria, a pounded ice cream covered with pistachio called Booza is produced. It has an elastic texture made of mastic and sahlab and is famous around the Arab World. Tripoli's Al Mina district is known for its Arabic ice cream including "ashta" with pistachios. It is widely produced, including by Brigham's Ice Cream, Ben & Jerry's, Graeter's and other major brands. According to a poll among adults in the US, 23 percent of respondents stated that they enjoyed pistachio ice cream, while only 4 percent stated that it was their favorite flavor.
**Play-by-post role-playing game** Play-by-post role-playing game: A play-by-post role-playing game (or sim) is an online text-based role-playing game in which players interact with each other and a predefined environment via text. It is a subset of the online role-playing community which caters to both gamers and creative writers. Play-by-post games may be based on other role-playing games, non-game fiction including books, television and movies, or original settings. This activity is closely related to both interactive fiction and collaborative writing. Compared to other roleplaying game formats, this type tends to have the loosest rulesets. History: Play-by-post roleplaying has its origins on the large computer networks and bulletin board systems of major universities in the United States in the 1980s. It drew heavily upon the traditions of fanzines and off-line role-playing games. The introduction of IRC enabled users to engage in real-time chat-based role-playing and resulted in the establishment of open communities. Development of forum hosting software and browser-based chat services such as AOL and Yahoo Chat increased the availability of these mediums and improved their accessibility to the general public. Rules: Unlike other forms of online role-playing games such as MUDs or MMORPGs, the events in play-by-post games are rarely handled by software and instead rely on participants or moderators to make decisions or improvise. Players create their own characters and descriptions of events and their surroundings during play. Results of combat, which may include Player versus player encounters, may be determined by chance through dice rolls or software designed to provide a random result. The results of random chance may need to be provided to the players in order to avoid disputes that may be a result of cheating or favoritism. 
Alternatively a forum may be diceless and rely on cooperation among players to agree on outcomes of events and thus forgo the use of randomisers. Rules: In the latter case, combat and other measures are handled by requiring players to avoid detailing the results of their actions, and thus leave an opening for a response by other involved players. Consider the following possible post from a character named Bob attacking Joe: "Bob lunges forward and punches Joe in the jaw, dropping him to the ground." This post makes the assumption that Joe takes no further action to avoid the attack from Bob and that he will drop as a result. These types of actions are often called "autohits" as they "automatically hit" without allowing for a response by the affected character, and there may be rules against such actions (commonly referred to as the 'no power playing' rule). Alternatively, Bob may be required to write something like the following: "Bob lunges forward and throws a punch at Joe's jaw." This allows Joe to respond to the action without contradicting the post. Rules: Depending on the rules established on the forum, role-playing and story can be pushed forward through moderation by a gamemaster, specific rules (often existing role-playing game systems), or by mutual agreement between players. Rules: Some games allow members of any writing proficiency to join, while others may require members to provide a sample of writing for review before allowing participation. In addition, a minimum word-count for each post may be required in order to encourage more detailed writing. Forums that cater to all levels of role-playing may have specific sections for various difficulty levels. Characters: In general, each player plays and develops his or her own character. Characters may be original creations of the player, or may be based on a character taken from canon if the setting and rules provide this option. 
Each community may have its own rules regarding the process of character creation and either allow characters to be liberally created and used with minimal review, or require characters to undergo a review process in which administrators examine the character application and decide whether to approve or reject the application. Characters: In many cases, characters are regarded as belonging to the players who created them, and others are not allowed to make drastic changes to them without the creator's prior consent (making such changes without consent is referred to as god-modding). In addition to standard characters, games may also incorporate non-player characters (NPCs). Some NPCs have recurring roles, while others appear only briefly to aid in the writing of a scene. The use and control of NPCs varies widely among role-playing games. Setting: Games vary in the degree to which the setting is established; some go as far as to include a virtual "world" to roleplay in, while others allow players to improvise the setting as they progress. Settings may be derived from novels, TV shows or movies (often resulting in collaborative fan-fiction) or may be original and unique to the game. Style: Play-by-post games are frequently written in the third person perspective due to the fact that multiple players must share each scene, each with his or her character as the focus of attention. Common online game terms such as OOC (Out of character) or OOG (Out of Game) are used to differentiate in-character from personal posting. Style: The opening message or post of each scene typically lays down the scenario and describes a scene, or continues from a previously started scene. Threads then become an ongoing story in which players periodically advance the plot by reading the latest reply and then responding with what their character does and how the environment changes in response. These replies are often open-ended so that other players can continue. 
Mediums: Message-board role-playing Internet forums (aka Play-By-Message-Board or PBMB, Forum Role-Playing or simply Forum-Games) are the most common medium for Play-by-Post gaming. Forums may provide features such as online dice rolling, maps, character profiling and game history. Using a forum (as opposed to a live-chat interface) allows players to re-read what they have previously written at a later date, and to read posts made by players in other threads. Many online services provide free game hosting specifically for gamemasters, or provide general forum services that can be used for role-playing purposes (such as Proboards or Invisionfree). Mediums: As an asynchronous collaborative editing tool, forums lack safeguards to prevent two writers from posting simultaneously and contradicting each other. House Rules may require players to take turns sequentially in order to avoid such conflicts, or players may require posts to be edited or deleted to rectify the situation which may result in dispute and intervention from a moderator if one is available. To avoid this, many boards offer guides and tips on roleplay etiquette. Many message boards are listed in roleplay directories, such as TopRPsites to make them easy to find. Mediums: Twitter and Tumblr are also very popular mediums for Roleplaying. Mediums: Play-by-post role-playing Play-by-post role-playing is generally devoted to advancing a single overarching storyline that all board members participate in, rather than many different non-related stories proceeding in separate threads [the latter being known as "multi-genre"]. They vary in organization, but the primary formation includes a full set of rules governing role-playing, out-of-character conduct, combat between players, threads detailing a set storyline (often contributed to by plot-advancing, staff-organized events, or player role-plays), character approval forums, and a full staff with admin(s) and moderators. 
Larger boards set in a single setting are often organized by cutting up the setting into separate forums, each based on locations within the setting. Mediums: Many message board based games, such as NationStates, establish a hierarchy of moderators to manage plot, pacing and continuity. To keep story threads organized the message board is often organized into forums based on geographical location within the game setting. Other message boards, however, may choose to sort their board on genre. Mediums: Play-by-email Play-by-email (PBeM) games are played as other play-by-mail games, using email as the postal medium. Players email their actions to the gamemaster or to each other using a mailing list. Play-by-email games are often slow, since the players must wait for each post before replying, but have the advantage that replies may be tailored to the players, allowing the gamemaster to keep information secret from the other players. Mediums: This should not be confused with the simming style of post or email games. Sims are more collaborative storytelling, where each player tells a portion of the story, usually utilizing other characters in the area as they wish in order to complete their portion of the story. PBeM games more closely resemble table-top role playing games where players react to gamemaster presented scenarios, and characters actions are controlled by individuals. Mediums: Play-by-chat Online Chat Rooms may be used in a similar fashion as forums for role-playing purposes. Unlike forums, posts are displayed to the screen in real-time and thus may increase the pace at which responses are written. Play-by-chat games require users to be present for the duration of a scene, which may last several hours. The game may be supplemented by external character profiles or may rely on users to provide information about their character upon request or upon entering a room. 
Discord has become a major medium for play-by-chat games due to its rapidly increasing popularity and its user-friendly administrative features, which allow users to create private chat rooms (known on the site as "servers") in very little time. Discord servers that list role-play servers have also become popular, and many have thousands of members. Mediums: Real-time interaction between characters in chat rooms is similar to that encountered in MUDs, but lacks automated features of MUDs such as combat resolution and item descriptions. Players in chat rooms are required to describe objects and events through manually written text. Mediums: Play-by-internet Play-by-internet (PBI) refers to fully automated games which take place using server-based software. Play-by-internet games differ from most computerized multiplayer games in that the players do not have to be online at the same time; players can make their moves independently of any other players in the game. The turn-time is usually fixed. A server updates the game after the turn-time has elapsed, evaluating all the players' moves sent to the server. The turn-time duration can be hours, days, weeks or even months. Mediums: Play-by-wiki A play-by-wiki game is played using wiki software instead of a forum. Because players' previous posts are editable and the gamemaster takes responsibility as the overall editor of the story, plot holes can be avoided and writing skills may not be as important for each writer. Wiki space provides not only a means of communication, but also a permanent archive and a designated off-topic discussion area for each page. Players can edit posts freely because records are automatically maintained and changes can be easily undone. Sites such as Wetpaint are commonly used for this. Mediums: Role-playing blog The role-playing blog (RPB) is a game which is played out online using posts within a blog or weblog. 
Unlike message board role-playing, a role-playing blog is generally restricted to one gaming group, and the blog contains static files such as maps, archives, and character sheets specific to that group. RPBs often incorporate mixed elements of message board role-playing, play-by-chat, and play-by-email styles, allowing players to mix and match the style of play that they prefer. Popular blog sites used to host these games are Tumblr and LiveJournal. The style of role-playing on Tumblr often comes in the format of a 'main blog', the headquarters of the game, and multiple character blogs from which each player posts. The main blog often advertises for players through Tumblr's tags. Applicants to the game apply by filling out an application on the main blog, which is also where other administration of the game occurs. The game's creator is often referred to as the "Admin", short for "Administrator", and players may be required to run major plots and game changes by the Admin before proceeding, making the Admin function much as a traditional GM might. Players role-play by reblogging each other's posts and adding paragraphs of interaction from their own character to the end, each of which is called a 'para.' Often seen are text posts with dialogue and an accompanying .gif image expressing the character's mood or intended expression of emotion. Recently, though, .jpg icons have risen in popularity in some games. Mediums: The style of role-playing in LiveJournal-style RPBs is maintained through "community" blogs that connect "character blogs or journals." Character blogs/journals are generally written in a first-person, character-driven context. These character journals are then open to all players of the community to interact with in a first-person style of writing. Interaction on the "community" blog is done mostly in third-person storybook fashion. 
RPBs on the LiveJournal platform are frequently run by an individual referred to as the MOD (moderator). MODs are in charge of creating the community/game setting, character limitations, rules, style of play, frequency of play and the general world view of the game. MODs are also in charge of creating world events to which individually plotted characters can respond. Mediums: Role-playing Google documents Somewhat similar to blogs and wikis, Google documents can have permissions set to allow users to access and modify a document online. This allows multiple users to edit the document at the same time, meaning that others can modify the story online. There is also a revision history, and the document allows commenting on particular words or phrases (or leaving a general comment), as well as a chat bar for that particular document. Since this form of role-playing is relatively new, it is not a common way of role-playing, and it has the drawback that the content is editable by anyone with permission. Player-player combat: In diceless games, where randomisers are not used to determine the outcome of combat, the onus is on players to come to an agreement. Disputes may arise between players in competitive engagements if neither player is able to come to a compromise that is acceptable to both. Players may write their characters in a way that makes them overly powerful or invulnerable, a practice referred to as "power-gaming", "god-modding", or "superhero syndrome". In such cases, a moderator may be required to review the conflict and make a ruling as to what should be accepted as the final result. In some rulesets, the winner of a contest may be a foregone conclusion agreed upon out-of-character, and the battle itself a ceremonial description of each character's prowess. In other systems, the only rule may be that the first character to yield or surrender is the loser of the combat. 
No matter the case, some sort of player consent to wins and losses is required in this type of role-playing game. Player-player combat: For this reason, text-based role-playing games tend to focus slightly more on story and on player-character interactions, negotiations, and relationships than on combat. Combat-focused games tend to flow more smoothly on more rules-based role-playing platforms and venues such as MUDs, D&D-style games, or video games, where character building or playing skill rather than consent determines the winner of a combat; play-by-post games usually focus on exploration, negotiation, or romance storylines in which characters are not usually directly opposing one another.
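The fixed-turn model described under play-by-internet (moves accumulate until the turn-time elapses, then the server resolves them all at once) can be sketched in a few lines of Python. Everything here, from the TurnServer class to the numeric "moves", is a hypothetical illustration rather than any real game's API:

```python
class TurnServer:
    """Hypothetical sketch of simultaneous-turn resolution: players submit
    moves independently, and nothing is applied until the turn-time elapses."""

    def __init__(self, turn_seconds):
        self.turn_seconds = turn_seconds  # fixed turn length (hours, days, ...)
        self.pending = {}                 # player -> move submitted this turn
        self.turn = 0

    def submit(self, player, move):
        # Resubmitting before the deadline simply replaces the earlier move.
        self.pending[player] = move

    def resolve_turn(self, state):
        # Called once the turn-time has elapsed (deadline tracking is elided).
        # Sorting the items makes the resolution order deterministic.
        for player, move in sorted(self.pending.items()):
            state[player] = state.get(player, 0) + move
        self.pending.clear()
        self.turn += 1
        return state

server = TurnServer(turn_seconds=86400)  # one-day turns
server.submit("alice", 3)
server.submit("bob", 5)
print(server.resolve_turn({}))  # {'alice': 3, 'bob': 5}
```

Because all pending moves are applied in one batch, no player gains an advantage from submitting earlier within the turn, which matches the description above of players moving independently on a fixed schedule.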
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hand to hand acrobatics** Hand to hand acrobatics: Hand to hand acrobatics is a type of performance in which an acrobatic base and flyer balance on top of each other in either a gymnastic or acrobatic medium. It combines strength, agility, flexibility, and balance. For it to be considered hand to hand acrobatics, the top performer (flyer) must be making physical contact only with the base's hands, with the flyer's hands keeping them balanced. Positions the top can perform in this style of acrobatics are straddles, handstands, pikes, press to handstand, one arm handstands, planches, flags, and many others. Hand to hand acrobatics can also include dynamic catches and throws that either begin with a throw from a hand to hand position or end in a catch in the hand to hand position. Hand to hand acrobatics: Hand to hand acrobatics has been used in a variety of contexts throughout history. These include circus acrobatics, acrobatic gymnastics, muscle beaches, acroyoga, and strongman competitions. Circus acrobatics: In circus shows such as Le Rêve or Cirque du Soleil: Worlds Away, hand to hand acrobatics have featured as an important part of the show. Often the circus will recruit gymnasts from sports such as acrobatic gymnastics because hand to hand acrobatics is such a big part of that sport. Circus acrobatics: Hand to hand acrobatics also appears within other disciplines in the circus arts. Aerialists, trapeze flyers, and contortionists often use hand to hand acrobatics. There are times when aerialists and contortionists perform alone, but when performing in pairs or groups, simple hand to hand acrobatics is often incorporated. For example, this may involve one person hanging from a hoop or curtain while holding another person in the air below them. 
In the case of trapeze artists, many of the throws and catches are hand to hand grabs, and handstands are often performed between throws not only to show strength but to give the flyer a short amount of time to collect themselves before being thrown across the stage again. Acrobatic gymnastics: According to Chrissy Antoniades, an acrobat who represented Team USA and is now working at The House of Dancing Water in Macau, China, acrobatic gymnastics is not a well-known sport. The routines are performed on the same spring floor gymnasts use for floor exercise competitions. Partner balances, tosses, catches, dance, and tumbling elements are all choreographed and performed to music. Working in groups or pairs, hand to hand acrobatics is especially stressed in the sport. While skills may be performed on other body parts, it is very common to see skills performed in the hand to hand fashion. This is true especially in the case of pairs—two athletes working together in a routine. Acrobatic gymnastics: The first use of acrobatics as a specific sport was in the Soviet Union in the 1930s, and the first acrobatic world championships were held in 1974. At that time many acrobats were not performing the same types of routines one would see today. Instead, groups of four boys would perform only one balance skill, in which all four of them are stacked on top of each other, along with hand to hand acrobatic moves of the competitors' choice. Acroyoga: Acroyoga is a recent practice stemming from acrobatic gymnastics. It features hand to hand acrobatics in a similar way, but routines are not performed as in the sport. Jason Nemer, the creator of acroyoga, made it a combination of hand to hand acrobatics, yoga, fitness, and healing within balance. While hand to hand acrobatics is a small part of acroyoga, it is more common to do many different types of balancing positions. 
College campuses: A small group of college students perform hand to hand acrobatics in their own acrobatic competitions. Currently, only women are allowed to compete in the sport. The sport resembles cheerleading more than it does acrobatics. Teams that compete come from Baylor University, West Liberty University, East Texas Baptist University, Quinnipiac University, University of Oregon, Hawaii Pacific University, Converse College, and Arizona Christian University. Strongman: In strongman culture, handstands are often an impressive feat due to the massive size of the athletes. While their form is far different from that seen in gymnastics, their hand to hand acrobatics resemble acts closer to what one would find in places such as muscle beach. California is a huge breeding ground for those types of athletes. Instructions from strongmen on how to perform handstands vary from what one would find in gymnastics, but they fall under the same category.
**Sea salt** Sea salt: Sea salt is salt that is produced by the evaporation of seawater. It is used as a seasoning in foods, cooking, cosmetics and for preserving food. It is also called bay salt, solar salt, or simply salt. Like mined rock salt, production of sea salt has been dated to prehistoric times. Composition: Commercially available sea salts on the market today vary widely in their chemical composition. Although the principal component is sodium chloride, the remaining portion can range from less than 0.2 to 10% of other salts. These are mostly calcium, potassium, and magnesium salts of chloride and sulfate, with substantially lesser amounts of many trace elements found in natural seawater. Though the composition of commercially available salt may vary, the ionic composition of natural saltwater is relatively constant. Historical production: Sea salt is mentioned in the Vinaya Pitaka, a Buddhist scripture compiled in the mid-5th century BC. The principle of production is evaporation of the water from the sea brine. In warm and dry climates this may be accomplished entirely by using solar energy, but in other climates fuel sources have been used. Modern sea salt production is almost entirely found in Mediterranean and other warm, dry climates. Historical production: Such places are today called salt works, instead of the older English word saltern. An ancient or medieval saltern was established where there was: access to a market for the salt; a gently shelving coast, protected from exposure to the open sea; an inexpensive and easily worked fuel supply, or preferably the sun; and another trade, such as pastoral farming or tanning, which benefited from proximity to the saltern (by producing leather, salted meat, etc.) and provided the saltern with a local market. In this way, salt marsh, pasture (salting), and salt works (saltern) enhanced each other economically. 
This was the pattern during the Roman and medieval periods around The Wash, in eastern England. There, the tide brought the brine, the extensive saltings provided the pasture, the fens and moors provided the peat fuel, and the sun sometimes shone. Historical production: The dilute brine of the sea was largely evaporated by the sun. In Roman areas, this was done using ceramic containers known as briquetage. Workers scraped up the concentrated salt and mud slurry and washed it with clean sea water to settle impurities out of the now concentrated brine. They poured the brine into shallow pans (lightly baked from local marine clay) and set them on fist-sized clay pillars over a peat fire for final evaporation. Then they scraped out the dried salt and sold it. In traditional salt production in the Visayas Islands of the Philippines, salt is made from coconut husks, driftwood, or other plant matter soaked in seawater for at least several months. These are burned into ash, and then seawater is run through the ashes on a filter. The resulting brine is then evaporated in containers. Coconut milk is sometimes added to the brine before evaporation. The practice is endangered due to competition with cheap industrially produced commercial salt. Only two traditions survive to the present day: asín tibuok and túltul (or dúkdok). In the colonial New World, slaves were brought from Africa to rake salt on various islands in the West Indies, the Bahamas and particularly the Turks and Caicos Islands. Historical production: Today, salt labelled "sea salt" in the US might not have actually come from the sea, as long as it meets the FDA's purity requirements. All mined salts were originally sea salts, since they originated from a marine source at some point in the distant past, usually from an evaporating shallow sea. Taste: Some gourmets believe sea salt tastes better and has a better texture than ordinary table salt. 
In applications that retain sea salt's coarser texture, it can provide a different mouthfeel, and may change flavor due to its different rate of dissolution. The mineral content also affects the taste. The colors and variety of flavors are due to local clays and algae found in the waters the salt is harvested from. For example, some boutique salts from Korea and France are pinkish gray and some from India are black. Black and red salts from Hawaii may even have powdered black lava and baked red clay added in. Some sea salt contains sulfates. It may be difficult to distinguish sea salt from other salts, such as pink Himalayan salt, Maras salt from the ancient Inca hot springs, or rock salt (halite). Taste: Black lava salt is a marketing term for sea salt harvested from various places around the world that has been blended and colored with activated charcoal. The salt is used as a decorative condiment to be shown at the table. Health: The nutritional value of sea salt and table salt is about the same, as both are primarily sodium chloride. Table salt is more processed than sea salt to eliminate minerals, and usually contains an additive such as silicon dioxide to prevent clumping. Iodine, an element essential for human health, is present only in small amounts in sea salt. Iodised salt is table salt mixed with a minute amount of various salts of the element iodine. Health: Studies have found some microplastic contamination in sea salt from the US, Europe and China. Sea salt has also been shown to be contaminated by fungi that can cause food spoilage, as well as some that may be mycotoxigenic. In traditional Korean cuisine, jugyeom (죽염, 竹鹽), which means "bamboo salt", is prepared by roasting salt at temperatures between 800 and 2000 °C in a bamboo container plugged with mud at both ends. 
This product absorbs minerals from the bamboo and the mud, and is claimed to increase the anticlastogenic and antimutagenic properties of the fermented soybean paste known in Korea as doenjang. However, these claims are not substantiated by high-quality studies.
**Shift rule** Shift rule: The shift rule is a mathematical rule for sequences and series. Here n and N are natural numbers. For sequences, the rule states that if (an) is a sequence, then it converges if and only if (an+N) converges, and in that case both sequences converge to the same limit. For series, the rule states that the series ∑n=1∞ an converges if and only if ∑n=1∞ an+N converges.
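The series half of the rule follows from the sequence half, since the partial sums of the shifted series differ from those of the original by a fixed finite amount:

```latex
S'_M \;=\; \sum_{n=1}^{M} a_{n+N}
      \;=\; \sum_{n=N+1}^{M+N} a_n
      \;=\; S_{M+N} \,-\, \sum_{n=1}^{N} a_n ,
\qquad \text{where } S_M = \sum_{n=1}^{M} a_n .
```

Since ∑n=1..N an is a fixed constant, the sequence (S'M) converges exactly when (S(M+N)) does, and by the sequence version of the rule this happens exactly when (SM) converges.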
**Schaefer's dichotomy theorem** Schaefer's dichotomy theorem: In computational complexity theory, a branch of computer science, Schaefer's dichotomy theorem states necessary and sufficient conditions under which a finite set S of relations over the Boolean domain yields polynomial-time or NP-complete problems when the relations of S are used to constrain some of the propositional variables. It is called a dichotomy theorem because the complexity of the problem defined by S is either in P or NP-complete, as opposed to one of the classes of intermediate complexity that are known to exist (assuming P ≠ NP) by Ladner's theorem. Special cases of Schaefer's dichotomy theorem include the NP-completeness of SAT (the Boolean satisfiability problem) and its two popular variants, 1-in-3 SAT and not-all-equal 3SAT (often denoted by NAE-3SAT). In fact, for these two variants of SAT, Schaefer's dichotomy theorem shows that their monotone versions (where negations of variables are not allowed) are also NP-complete. Original presentation: Schaefer defines a decision problem that he calls the Generalized Satisfiability problem for S (denoted by SAT(S)), where S={R1,…,Rm} is a finite set of relations over propositional variables. An instance of the problem is an S-formula, i.e. a conjunction of constraints of the form Rj(xi1,…,xin) where Rj∈S and the xij are propositional variables. The problem is to determine whether the given formula is satisfiable, in other words whether the variables can be assigned values such that they satisfy all the constraints as given by the relations from S. Original presentation: Schaefer identifies six classes of sets of Boolean relations for which SAT(S) is in P and proves that all other sets of relations generate an NP-complete problem. 
A finite set of relations S over the Boolean domain defines a polynomial-time computable satisfiability problem if any one of the following conditions holds: all relations which are not constantly false are true when all their arguments are true; all relations which are not constantly false are true when all their arguments are false; all relations are equivalent to a conjunction of binary clauses; all relations are equivalent to a conjunction of Horn clauses; all relations are equivalent to a conjunction of dual-Horn clauses; all relations are equivalent to a conjunction of affine formulae. Otherwise, the problem SAT(S) is NP-complete. Modern presentation: A modern, streamlined presentation of Schaefer's theorem is given in an expository paper by Hubie Chen. In modern terms, the problem SAT(S) is viewed as a constraint satisfaction problem over the Boolean domain. In this area, it is standard to denote the set of relations by Γ and the decision problem defined by Γ as CSP(Γ). Modern presentation: This modern understanding uses algebra, in particular, universal algebra. For Schaefer's dichotomy theorem, the most important concept in universal algebra is that of a polymorphism. An operation f:Dm→D is a polymorphism of a relation R⊆Dk if, for any choice of m tuples (t11,…,t1k),…,(tm1,…,tmk) from R, it holds that the tuple obtained from these m tuples by applying f coordinate-wise, i.e. (f(t11,…,tm1),…,f(t1k,…,tmk)), is in R. That is, an operation f is a polymorphism of R if R is closed under f: applying f to any tuples in R yields another tuple inside R. A set of relations Γ is said to have a polymorphism f if every relation in Γ has f as a polymorphism. This definition allows for the algebraic formulation of Schaefer's dichotomy theorem. Modern presentation: Let Γ be a finite constraint language over the Boolean domain. 
The problem CSP(Γ) is decidable in polynomial time if Γ has one of the following six operations as a polymorphism: the constant unary operation 0; the constant unary operation 1; the binary AND operation ∧; the binary OR operation ∨; the ternary majority operation Majority(x,y,z)=(x∧y)∨(x∧z)∨(y∧z); the ternary minority operation Minority(x,y,z)=x⊕y⊕z. Modern presentation: Otherwise, the problem CSP(Γ) is NP-complete. In this formulation, it is easy to check whether any of the tractability conditions hold. Properties of Polymorphisms: Given a set Γ of relations, there is a surprisingly close connection between its polymorphisms and the computational complexity of CSP(Γ). A relation R is called primitive positive definable, or pp-definable for short, from a set Γ of relations if R(v1, ... , vk) ⇔ ∃x1 ... xm. C holds for some conjunction C of constraints from Γ and equations over the variables {v1,...,vk, x1,...,xm}. For example, if Γ consists of the ternary relation nae(x,y,z) holding if x,y,z are not all equal, and R(x,y,z) is x∨y∨z, then R can be pp-defined by R(x,y,z) ⇔ ∃a. nae(0,x,a) ∧ nae(y,z,¬a); this reduction has been used to prove that NAE-3SAT is NP-complete. The set of all relations which are pp-definable from Γ is denoted by ≪Γ≫. If Γ' ⊆ ≪Γ≫ for some finite constraint sets Γ and Γ', then CSP(Γ') reduces to CSP(Γ). Given a set Γ of relations, Pol(Γ) denotes the set of polymorphisms of Γ. Conversely, if O is a set of operations, then Inv(O) denotes the set of relations having all operations in O as a polymorphism. Pol and Inv together form a Galois connection. Properties of Polymorphisms: For any finite set Γ of relations over a finite domain, ≪Γ≫ = Inv(Pol(Γ)) holds, that is, the set of relations pp-definable from Γ can be derived from the polymorphisms of Γ. Moreover, if Pol(Γ) ⊆ Pol(Γ') for two finite relation sets Γ and Γ', then Γ' ⊆ ≪Γ≫ and CSP(Γ') reduces to CSP(Γ). 
As a consequence, two relation sets having the same polymorphisms lead to the same computational complexity. Generalizations: The analysis was later fine-tuned: CSP(Γ) is either solvable in co-NLOGTIME, L-complete, NL-complete, ⊕L-complete, P-complete or NP-complete, and given Γ, one can decide in polynomial time which of these cases holds. Schaefer's dichotomy theorem was recently generalized to a larger class of relations. Related work: If the problem is to count the number of solutions, which is denoted by #CSP(Γ), then a similar result by Creignou and Hermann holds. Let Γ be a finite constraint language over the Boolean domain. The problem #CSP(Γ) is computable in polynomial time if Γ has a Mal'tsev operation as a polymorphism. Otherwise, the problem #CSP(Γ) is #P-complete. A Mal'tsev operation m is a ternary operation that satisfies m(x,y,y)=m(y,y,x)=x. Related work: An example of a Mal'tsev operation is the Minority operation given in the modern, algebraic formulation of Schaefer's dichotomy theorem above. Thus, when Γ has the Minority operation as a polymorphism, it is not only possible to decide CSP(Γ) in polynomial time, but to compute #CSP(Γ) in polynomial time. There are a total of 4 Mal'tsev operations on Boolean variables, determined by the values of m(T,F,T) and m(F,T,F). An example of a less symmetric one is given by m(x,y,z)=(x∧z)∨(¬y∧(x∨z)). On other domains, such as groups, examples of Mal'tsev operations include x−y+z and xy−1z. Related work: For larger domains, even for a domain of size three, the existence of a Mal'tsev polymorphism for Γ is no longer a sufficient condition for the tractability of #CSP(Γ). However, the absence of a Mal'tsev polymorphism for Γ still implies the #P-hardness of #CSP(Γ).
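The polymorphism test at the heart of the modern formulation is easy to make concrete. The sketch below (illustrative code, with Boolean relations written as sets of 0/1 tuples) checks closure under an operation by brute force. It shows that the binary clause x∨y is preserved by several of the six tractable operations, including Majority, while the 1-in-3 relation is preserved by none of them, matching the NP-completeness of monotone 1-in-3 SAT:

```python
from itertools import product

def is_polymorphism(f, arity, relation):
    """Brute-force check that applying f coordinate-wise to every choice of
    `arity` tuples from the relation yields a tuple still in the relation."""
    k = len(next(iter(relation)))
    for rows in product(relation, repeat=arity):
        image = tuple(f(*(row[i] for row in rows)) for i in range(k))
        if image not in relation:
            return False
    return True

# Schaefer's six tractable cases, as operations on {0, 1}.
six = {
    "const-0":  (1, lambda x: 0),
    "const-1":  (1, lambda x: 1),
    "and":      (2, lambda x, y: x & y),
    "or":       (2, lambda x, y: x | y),
    "majority": (3, lambda x, y, z: (x & y) | (x & z) | (y & z)),
    "minority": (3, lambda x, y, z: x ^ y ^ z),
}

or_clause = {(0, 1), (1, 0), (1, 1)}              # the binary clause x OR y
one_in_three = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}  # exactly one variable true

print([n for n, (m, f) in six.items() if is_polymorphism(f, m, or_clause)])
# ['const-1', 'or', 'majority']
print([n for n, (m, f) in six.items() if is_polymorphism(f, m, one_in_three)])
# []  -- none of the six, so SAT for this relation is NP-complete
```

Since the relations and operations are all finite, this exhaustive check is exactly the "easy to check" tractability test mentioned in the modern presentation.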
**Spin-stabilisation** Spin-stabilisation: In aerospace engineering, spin stabilization is a method of stabilizing a satellite or launch vehicle by means of spin, i.e. rotation about the longitudinal axis. The concept originates from conservation of angular momentum as applied to ballistics, where the spin is commonly obtained by means of rifling. For most satellite applications this approach has been superseded by three-axis stabilization. Use: Spin stabilization is used on rockets and spacecraft where attitude control is required without the requirement for on-board 3-axis propulsion, or mechanisms and sensors for attitude control and pointing. On rockets with a solid-motor upper stage, spin stabilization is used to keep the motor from drifting off course, as such stages do not have their own thrusters. Usually, small rockets are used to spin up the spacecraft and rocket motor, which is then fired to send the craft off. Use: Rockets and spacecraft that use spin stabilization: The Jupiter-C and Minotaur V launch vehicles used spin stabilization; the upper stages on both systems employ spin stabilization to stabilize the system during propulsive maneuvers. The Aryabhata satellite used spin stabilization. The Pioneer 4 spacecraft, the second object sent on a lunar flyby in 1959, maintained its attitude using spin stabilization. The Schiaparelli EDM lander was spun up to 2.5 RPM before being ejected from the ExoMars Trace Gas Orbiter prior to its attempted landing on Mars in October 2016. The Juno spacecraft was spin-stabilized and arrived in Jupiter orbit in 2016. The launches of the Pioneer 10 and Pioneer 11 probes on two Atlas-Centaur vehicles in 1972 and 1973 employed Star 37 rocket motors that were spin-stabilized in order to inject the satellites into the high-energy hyperbolic orbits necessary to achieve solar system escape velocity. Additionally, both probes were spin-stabilized during their flights and rotated at approximately 5 rpm. 
Use: In operation as a third stage, the Star 48 rocket booster sits on top of a spin table, and before it is separated it is spun up to stabilize it during the separation from the previous stage. The Delta II launch vehicle's third stage employed a Star 48 motor; it was spin-stabilized and depended on the second stage for proper orientation prior to stage separation, but was sometimes equipped with a nutation control system to maintain the proper spin axis. It also included a yo-weight system to induce tumbling in the third stage after payload separation to prevent recontact, or a yo-yo de-spin mechanism to slow the rotation before payload release. Despinning can be achieved by various techniques, including yo-yo de-spin. With advancements in attitude control propulsion systems, guidance systems, and the need for satellites to point instruments and communications systems precisely, 3-axis attitude control has become much more common than spin stabilization for systems operating in space.
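The stabilizing role of angular momentum mentioned above can be demonstrated numerically. A well-known consequence of the torque-free Euler equations is that spin about the axis of largest (or smallest) moment of inertia is stable, while spin about the intermediate axis is not. The sketch below uses illustrative inertia values and a simple forward-Euler integrator; it is a demonstration of the principle, not flight-dynamics code:

```python
def euler_rates(w, inertia):
    """Torque-free rigid-body Euler equations in principal body axes."""
    i1, i2, i3 = inertia
    w1, w2, w3 = w
    return ((i2 - i3) / i1 * w2 * w3,
            (i3 - i1) / i2 * w3 * w1,
            (i1 - i2) / i3 * w1 * w2)

def peak_transverse(w0, inertia, dt=1e-4, t_end=1.0):
    """Integrate with forward Euler and return the peak |w1|, i.e. how far
    the small transverse perturbation grows over the run."""
    w = list(w0)
    peak = abs(w[0])
    for _ in range(int(t_end / dt)):
        dw = euler_rates(w, inertia)
        w = [wi + dt * dwi for wi, dwi in zip(w, dw)]
        peak = max(peak, abs(w[0]))
    return peak

inertia = (1.0, 2.0, 3.0)  # illustrative principal moments of inertia

# Spin about the major axis: a 0.01 rad/s perturbation merely oscillates.
stable = peak_transverse((0.01, 0.01, 10.0), inertia)

# Spin about the intermediate axis: the same perturbation grows by orders
# of magnitude (the "tennis racket" instability).
unstable = peak_transverse((0.01, 10.0, 0.01), inertia)

print(stable < 0.05, unstable > 0.5)  # True True
```

This is why a spin-stabilized stage is spun about its longitudinal axis, chosen so that the spin axis is a stable principal axis: perturbations then stay bounded instead of growing into a tumble.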
**Ava (company)** Ava (company): Ava is a medical technology company that developed the Ava bracelet, a wearable device that functions as a fertility tracker. History: The company was founded in Zurich, Switzerland by Lea Von Bidder, Pascal Koenig, Philipp Tholen, and Peter Stein. In September 2015, the company took part in TechCrunch’s Startup Battlefield. In November 2015, Ava raised a $2.6 million funding round led by Swisscom and ZKB. The company began shipping the Ava bracelet to customers in July 2016. The company raised nearly $40 million in funding between 2017 and 2018. Technology: The device is intended to allow wearers to estimate their fertile window by tracking their menstrual cycle and ovulation based on measurements of their skin temperature, heart rate, perfusion, breathing rate, and heart rate variability. Data collected from the bracelet is displayed on an app, so that the wearer can track their fertility or monitor their health during pregnancy. An independent study of the bracelet's validity found that it provided accurate assessments of sleep duration but that its estimates of other data such as heart rate were inaccurate in comparison to other monitoring methods such as actigraphy.
**Lateral thyrohyoid ligament** Lateral thyrohyoid ligament: The lateral thyrohyoid ligament (lateral hyothyroid ligament) is a round elastic cord which forms the posterior border of the thyrohyoid membrane and passes between the tip of the superior cornu of the thyroid cartilage and the extremity of the greater cornu of the hyoid bone. The internal branch of the superior laryngeal nerve typically lies lateral to this ligament. Triticeal cartilage: A small cartilaginous nodule (cartilago triticea), sometimes bony, is frequently found in the lateral thyrohyoid ligament.