**Moissanite** Moissanite: Moissanite is naturally occurring silicon carbide and its various crystalline polymorphs. It has the chemical formula SiC and is a rare mineral, discovered by the French chemist Henri Moissan in 1893. Silicon carbide is useful for commercial and industrial applications due to its hardness, optical properties and thermal conductivity.

Background: The mineral moissanite was discovered by Henri Moissan while examining rock samples from a meteor crater located in Canyon Diablo, Arizona, in 1893. At first, he mistakenly identified the crystals as diamonds, but in 1904 he identified them as silicon carbide. Artificial silicon carbide had been synthesized in the lab by Edward G. Acheson in 1891, just two years before Moissan's discovery. The mineral form of silicon carbide was named in Moissan's honor later in his life.

Geological occurrence: In its natural form, moissanite remains very rare. Until the 1950s, no source for moissanite other than presolar grains in carbonaceous chondrite meteorites had been encountered. Then, in 1958, moissanite was found in the Green River Formation in Wyoming and, the following year, as inclusions in the ultramafic rock kimberlite from a diamond mine in Yakutia in the Russian Far East. Yet the existence of moissanite in nature was questioned as late as 1986 by the American geologist Charles Milton. Discoveries show that it occurs naturally as inclusions in diamonds, xenoliths, and other ultramafic rocks such as lamproite.

Meteorites: Analysis of silicon carbide grains found in the Murchison meteorite has revealed anomalous isotopic ratios of carbon and silicon, indicating an extraterrestrial origin from outside the Solar System; 99% of these silicon carbide grains originate around carbon-rich asymptotic giant branch stars. Silicon carbide is commonly found around these stars, as deduced from their infrared spectra. The discovery of silicon carbide in the Canyon Diablo meteorite and other places was long delayed because carborundum (SiC) contamination had occurred from man-made abrasive tools.

Physical properties: The crystalline structure is held together by strong covalent bonding similar to diamond's, which allows moissanite to withstand pressures up to 52.1 gigapascals. Colors vary widely and are graded in the D-to-K range on the diamond color grading scale.

Sources: All applications of silicon carbide today use synthetic material, as the natural material is very scarce. The idea that a silicon-carbon bond might in fact exist in nature was first proposed by the Swedish chemist Jöns Jacob Berzelius as early as 1824 (Berzelius 1824). In 1891, Edward Goodrich Acheson produced a viable mineral that could substitute for diamond as an abrasive and cutting material. This was possible because moissanite is one of the hardest substances known, with a hardness just below that of diamond and comparable with those of cubic boron nitride and boron. Pure synthetic moissanite can also be made from thermal decomposition of the preceramic polymer poly(methylsilyne), requiring no binding matrix (e.g., cobalt metal powder). Single-crystalline silicon carbide, in certain forms, has been used for the fabrication of high-performance semiconductor devices.
As natural sources of silicon carbide are rare, and only certain atomic arrangements are useful for gemological applications, North Carolina-based Cree Research, Inc., founded in 1987, developed a commercial process for producing large single crystals of silicon carbide. Cree is the world leader in the growth of single-crystal silicon carbide, mostly for electronics use. In 1995, C3 Inc., a company helmed by Charles Eric Hunter, formed Charles & Colvard to market gem-quality moissanite. Charles & Colvard was the first company to produce and sell synthetic moissanite, under U.S. patent US5723391 A, first filed by C3 Inc. in North Carolina.

Applications: Moissanite was introduced to the jewelry market as a diamond alternative in 1998 after Charles & Colvard (formerly known as C3 Inc.) received patents to create and market lab-grown silicon carbide gemstones, becoming the first firm to do so. By 2018, all patents on the original process had expired worldwide. Charles & Colvard currently makes and distributes moissanite jewelry and loose gems under the trademarks Forever One, Forever Brilliant, and Forever Classic. Other manufacturers market silicon carbide gemstones under trademarked names such as Amora. On the Mohs scale of mineral hardness (with diamond as the upper extreme, 10), moissanite is rated as 9.25.

As a diamond alternative: Moissanite has some optical properties exceeding those of diamond. It is marketed as a lower-priced alternative to diamond that does not involve the expensive mining practices used for the extraction of natural diamonds. As some of its properties are quite similar to diamond's, moissanite may be passed off as diamond. Testing equipment based on measuring thermal conductivity, in particular, may give results similar to diamond. In contrast to diamond, moissanite exhibits thermochromism: heating it gradually will cause it to change color temporarily, starting at around 65 °C (150 °F). A more practical test is a measurement of electrical conductivity, which will show higher values for moissanite. Moissanite is also birefringent (light sent through the material splits into separate beams that depend on the source polarization), which can easily be seen, whereas diamond is not.

Because of its hardness, moissanite can be used in high-pressure experiments as a replacement for diamond (see diamond anvil cell). Since large diamonds are usually too expensive to be used as anvils, moissanite is more often used in large-volume experiments. Synthetic moissanite is also interesting for electronic and thermal applications because its thermal conductivity is similar to that of diamond. High-power silicon carbide electronic devices are expected to find use in the design of protection circuits for motors, actuators, and energy storage or pulse-power systems. It also exhibits thermoluminescence, making it useful in radiation dosimetry.
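The distinguishing tests described above amount to a simple decision rule. The sketch below expresses that logic in Python; it is illustrative only, and the function name, inputs, and thresholds are invented for the example, not a gemological standard.

```python
# Toy decision rule for telling moissanite from diamond, based on the bench
# tests described above. Illustrative only; not a calibrated gem tester.

def classify_stone(thermal_test_like_diamond: bool,
                   electrically_conductive: bool,
                   doubly_refractive: bool) -> str:
    """Combine three quick bench tests into a rough verdict."""
    if not thermal_test_like_diamond:
        return "neither diamond nor moissanite (fails thermal test)"
    # Both stones pass a thermal-conductivity tester, so use the
    # discriminating tests: moissanite conducts electricity and is
    # birefringent; diamond is neither.
    if electrically_conductive or doubly_refractive:
        return "likely moissanite"
    return "likely diamond"

print(classify_stone(True, True, True))    # likely moissanite
print(classify_stone(True, False, False))  # likely diamond
```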
**Transformers: Prime Wars Trilogy** Transformers: Prime Wars Trilogy: Transformers: Prime Wars Trilogy is a toyline and transmedia series that is part of the Transformers franchise by Hasbro.

Premise: The main story line is set in an alternate Transformers: Generation 1 continuity.

Premise: Arc 1: "Combiner Wars" Forty years after the Great War between the Autobots and the Decepticons on Earth, the two factions returned to Cybertron, where Optimus Prime defeated Megatron in a final duel, ending their war permanently. A Council of Worlds was forged, consisting of Rodimus Prime, Starscream and the Mistress of Flame, ruling Cybertron and Caminus in an uneasy peace. However, the rise of the Combiners caused destruction and death on Caminus, setting Windblade, the Cityspeaker of the Titans, on a quest for vengeance. Meanwhile, the Combiner Victorion is on her own mission to find the Enigma of Combination.

Premise: Arc 2: "Titans Return" Following the end of the Combiner Wars, the Titans are awakened and Trypticon begins to wreak havoc on Cybertron. To combat Trypticon, Windblade gathers a ragtag team of Transformers to resurrect an "ancient ally". And while some may be forever changed by the events, others may not emerge with their sparks intact.

Premise: Arc 3: "Power of the Primes" Following the death of Optimus Prime, the rest of the Transformers must band together to survive before Megatronus / The Fallen can wipe out their species forever.

Media: Comics: Various comic book issues from The Transformers by IDW Publishing make reference to the Prime Wars Trilogy. Book: In July 2016, the Hasbro Pulse website published a book titled The Power of the Titan Masters to promote the Titans Return toyline.

Media: Web series: In 2016, Machinima, Inc. and Hasbro Studios created an animated web series titled Transformers: Combiner Wars, which aired from August 2 to September 20 on Verizon's go90 streaming platform. Prior to the premiere on go90, a set of four prelude videos was released that detailed some of the events which had transpired in this continuity prior to the start of the series. The second season, titled Transformers: Titans Return, was released in November 2017. The third and final web season, titled Transformers: Power of the Primes, was announced for May 1, 2018. Rooster Teeth streams the complete Prime Wars Trilogy on their website for audiences in the US and internationally.

Future: Hasbro and Machinima announced a new trilogy of web series titled Transformers: War for Cybertron Trilogy, which is not related to the video game of the same name. The first installment, titled Transformers: Siege, was scheduled for 2019. After Machinima closed down in February 2019, Hasbro Studios and Netflix closed a deal for the War for Cybertron Trilogy animated series, produced by Allspark Animation, Rooster Teeth and Polygon Pictures for 2020. F. J. DeSanto returned as showrunner while George Krstic, Gavin Hignight and Brandon M. Easton contributed as writers.

Characters: This is a list of the characters from the Transformers: Prime Wars Trilogy.

Autobots: Optimus Prime (voiced by Jon Bailey (Combiner Wars), Peter Cullen (Titans Return and Power of the Primes)) - The former leader of the Autobots. He is killed by Megatronus/The Fallen, but is revived following Megatron's sacrifice.

Rodimus Prime/Hot Rod/Rodimus Cron (voiced by Ben Pronsky (Combiner Wars), Judd Nelson (Titans Return, Power of the Primes)) - A former member of the Council of Worlds.
He turned back into Hot Rod after he failed as a leader, admitting that he did not see the threat that Starscream planned. He is later corrupted by Overlord using the Matrix of Chaos, becoming Rodimus Cron. As Rodimus Cron, he can sense the presence of any Transformer, hidden or not. Eventually, the Matrix of Chaos is removed from his chest, turning him back into Hot Rod, though physical and mental trauma remained afterward.

Perceptor (voiced by Wil Wheaton) - An Autobot scientist Optimus suggested for a seat on the Council of Worlds.

Combiners: Volcanicus (voiced by Gregg Berger) - Volcanicus is the combined robot form of the five Dinobots - Grimlock (also voiced by Gregg Berger), Snarl (voiced by Mikey Way), Sludge (voiced by Frank Todaro), Slug (voiced by Jamie Iracleanos), and Swoop (voiced by Matthew Patrick). He is killed by Rodimus Cron.

Computron (voiced by Ricky Hayberg (Combiner Wars), Matthew Patrick (Titans Return)) - Computron is the combined robot form of the five Technobots - Scattershot, Strafe, Afterbreaker, Nosecone, and Lightsteed. He is killed by Rodimus Cron.

Titans: Metroplex (voiced by Michael Green (Combiner Wars), Nolan North (Titans Return)) - Metroplex is a Titan stationed on Cybertron who is connected with Cityspeaker Windblade. He is killed by Trypticon.

Fortress Maximus (voiced by Michael Dorn, with grunts and screams by Nolan North) - Fortress Maximus was a decommissioned Titan, who was reawakened to fight Trypticon.

Emissary (voiced by Jason David Frank) - The Autobot Titan Master of Fortress Maximus, who grants additional powers to any linked Titan.

Optimus Primal/Optimal Optimus (voiced by Ron Perlman) - Protector of the Requiem Blaster, and an agent of the gods. He eventually gains the Matrix of Leadership, becoming Optimal Optimus.

Decepticons: Megatron (voiced by Jason Marnocha) - The former leader of the Decepticons. Dies sacrificing himself to allow Optimal Optimus to use the Requiem Blaster to destroy the Matrix of Chaos.

Starscream (voiced by Frank Todaro) - A Seeker who became a member of the Council of Worlds. After being killed by Metroplex, Starscream's spirit causes Trypticon to come back to life. Later, after Megatronus's death and Unicron's defeat, Starscream's spirit reappears behind Optimal Optimus and Optimus Prime, saying that he will be there as well for the new age of peace and prosperity on Cybertron.

Elite Air Resistance Squadron - Seekers who are ordered to attack Trypticon, but are overwhelmed and killed. Its members include Thundercracker, Skywarp, Sunstorm and Hotlink.

Overlord (voiced by Patrick Seitz) - A Decepticon whose robot form is the combined form of a tank and a jet. Defeated by Megatron in the past, Overlord sought revenge and pledged his loyalty to Megatronus. Using the Matrix of Chaos he uncovered from within the remains of Unicron, he corrupted Hot Rod into becoming Rodimus Cron. Killed by Megatron using the Requiem Blaster.

Combiners: Devastator (voiced by Patrick Seitz (Combiner Wars), Rob Gavagan (Titans Return, Power of the Primes)) - Devastator is the combined robot form of the six Constructicons - Scrapper, Long Haul (voiced by Frank Todaro), Scavenger, Mixmaster, Bonecrusher, and Hook. Killed by Rodimus Cron.

Menasor (voiced by Charlie Guzman) - Menasor is the combined robot form of the five Stunticons - Motormaster, Dead End, Breakdown, Drag Strip, and Brake-Neck. Killed by Overlord.
Predaking (voiced by Samoa Joe) - Predaking is the combined robot form of the five Predacons - Razorclaw, Rampage, Headstrong, Divebomb, and Tantrum. He is killed by Megatronus.

Soundblaster

Trypticon (voiced by Frank Todaro) - Decepticon Titan who is awakened by Starscream's ghost. Killed by the Power of the Matrix.

Camiens: Mistress of Flame (voiced by Lana McKissack) - A Camien leader who served as a member of the Council of Worlds. She is killed by Overlord.

Windblade (voiced by Abby Trott) - Cityspeaker from Caminus who became bloodthirsty after Caminus succumbed to the Combiner Wars.

Maxima (voiced by Amy Johnston) - A comrade of Windblade. She is killed by Menasor.

Victorion (voiced by Anna Akana (Combiner Wars), Kari Wahlgren (Titans Return, Power of the Primes)) - Victorion is the combined robot form of the six Torchbearers - Pyra Magna, Stormclash, Skyburst, Dust Up, Jumpstream, and Rust Dust. She is given the Enigma of Combination by Windblade, but is later killed by Rodimus Cron. She shares a name with a robot from Brave Saga 2.

Others: The Thirteen: Solus Prime (voiced by Jaime King) - A member of the Thirteen and the first female Transformer. She is accidentally killed by Megatronus. Appearing as a spirit, she sacrifices herself to kill Megatronus once and for all, falling with him into the Well of Sparks.

Megatronus (voiced by Mark Hamill) - Also known as "The Fallen", Megatronus is a former member of the Thirteen and the first Decepticon. He searches for the Requiem Blaster with help from Overlord and Rodimus Cron. He plans to use the Enigma of Combination to merge the Matrix of Leadership and the Requiem Blaster into a doomsday device that will kill every Transformer in the universe in an attempt to revive Solus Prime. He is finally killed by the spirit of Solus Prime, who falls with him into the Well of Sparks.

Chorus of the Primes (voiced by Tay Zonday) - A council consisting of the deceased Primes who reside in the Primal Basilica. They help Rodimus Prime remove the Matrix of Leadership, turning him back into Hot Rod.
**Embossing (manufacturing)** Embossing (manufacturing): Sheet metal embossing is a stamping process for producing raised or sunken designs or relief in sheet metal. This process can be carried out by means of matched male and female roller dies, or by passing a sheet or strip of metal between rolls of the desired pattern. It is often combined with foil stamping to create a shiny, 3D effect.

Process: The metal sheet embossing operation is commonly accomplished with a combination of heat and pressure on the sheet metal, depending on what type of embossing is required. Theoretically, with any of these procedures, the metal thickness is changed.

Process: Metal sheet is drawn through the male and female roller dies, producing a pattern or design on the metal sheet. Depending on the roller dies used, different patterns can be produced on the metal sheet. The combination of pressure and heat effectively "irons" the metal smooth while raising the level of the image above the substrate. The term "impressing" refers to an image lowered into the surface of a material, as distinct from an image raised out of the surface of a material.

Process: In most pressure embossing machines, the upper roll blocks are stationary, while the bottom roll blocks are movable. The pressure with which the bottom roll is raised is referred to as the tonnage capacity.

Process: Embossing machines are generally sized to give 2 inches (5 cm) of strip clearance on each side of an engraved embossing roll. Many embossing machines are custom-manufactured, so there are no industry-standard widths. It is not uncommon to find embossing machines in operation producing patterns less than 6 inches (15 cm) wide all the way up to machines producing patterns 70 inches (180 cm) wide or more.

Characteristics: The metal embossing manufacturing process has these characteristics: the ability to form ductile metals; use in medium to high production runs; the ability to maintain the same metal thickness before and after embossing; the ability to produce unlimited patterns, depending on the roll dies; and the ability to reproduce the product with no variation.

Commonly used materials: The following materials are suitable for embossing: aluminium (all alloys), aluminium (T1/T2), brass, card stock, cold rolled steel, copper, galvanized steel, high-strength low-alloy steel, hot rolled steel, steel (all alloys), and zinc.
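To make the sizing rule above concrete, here is a minimal sketch. It assumes the rule means the engraved roll face extends 2 inches beyond the strip on each side; the helper name and example widths are invented for illustration.

```python
# Toy sizing helper based on the rule of thumb above: allow about 2 inches
# (5 cm) of strip clearance on each side of the engraved embossing roll.
# Purely illustrative; real machines are custom-built with no standard widths.

CLEARANCE_PER_SIDE_IN = 2.0  # from the rule of thumb stated above

def min_roll_face_width(pattern_width_in: float) -> float:
    """Minimum roll face width (inches) for a given embossed pattern width."""
    return pattern_width_in + 2 * CLEARANCE_PER_SIDE_IN

for pattern in (6, 24, 70):  # spanning the small-to-large range quoted above
    print(f"{pattern} in pattern -> {min_roll_face_width(pattern)} in roll face")
```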
**Mixed oxide** Mixed oxide: In chemistry, a mixed oxide is a somewhat informal name for an oxide that contains cations of more than one chemical element, or cations of a single element in several states of oxidation. The term is usually applied to solid ionic compounds that contain the oxide anion O2− and two or more element cations. Typical examples are ilmenite (FeTiO3), a mixed oxide of iron (Fe2+) and titanium (Ti4+) cations, perovskite and garnet. The cations may be the same element in different ionization states: a notable example is magnetite, Fe3O4, also known as ferrosoferric oxide, which contains the cations Fe2+ ("ferrous" iron) and Fe3+ ("ferric" iron) in a 1:2 ratio. Other notable examples include red lead Pb3O4, the ferrites, and the yttrium aluminum garnet Y3Al5O12, used in lasers. The term is sometimes also applied to compounds of oxygen and two or more other elements, where some or all of the oxygen atoms are covalently bound into oxyanions. In sodium zincate Na2ZnO2, for example, the oxygens are bound to the zinc atoms, forming zincate anions. (On the other hand, strontium titanate SrTiO3, despite its name, contains Ti4+ cations and not the titanate anion TiO32−.) Sometimes the term is applied loosely to solid solutions of metal oxides rather than chemical compounds, or to fine mixtures of two or more oxides.

Mixed oxide: Mixed oxide minerals are plentiful in nature. Synthetic mixed oxides are components of many ceramics with remarkable properties and important advanced technological applications, such as strong magnets, fine optics, lasers, semiconductors, piezoelectrics, superconductors, catalysts, refractories, gas mantles, nuclear fuels, and more. Piezoelectric mixed oxides, in particular, are extensively used in pressure and strain gauges, microphones, ultrasound transducers, micromanipulators, delay lines, etc.
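The magnetite example lends itself to a quick arithmetic check: with the oxidation states given above, one formula unit of Fe3O4 must be electrically neutral. A minimal sketch:

```python
# Charge-balance check for magnetite, Fe3O4, using the oxidation states given
# above: one Fe2+ and two Fe3+ cations per four O2- anions. A neutral formula
# unit must sum to zero charge.

charges = {
    "Fe2+": (+2, 1),  # (ionic charge, count per formula unit)
    "Fe3+": (+3, 2),
    "O2-":  (-2, 4),
}

total = sum(charge * count for charge, count in charges.values())
print(total)  # 0 -> the 1:2 ferrous:ferric ratio balances the four oxide anions
```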
**Myelin regulatory factor** Myelin regulatory factor: Myelin regulatory factor (MyRF), also known as myelin gene regulatory factor (MRF), is a protein that in humans is encoded by the MYRF gene.

Orthologs: Myelin regulatory factor is encoded by the Myrf/GM98 gene in mice and by the MYRF gene in humans. The family of MyRF-like proteins also contains the orthologues pqn-47 from C. elegans and MYRFA from Dictyostelium. All orthologs have a DNA-binding domain with high homology to the Saccharomyces cerevisiae protein Ndt80 (a p53-like transcription factor) and therefore likely act as transcription factors.

Function: MyRF is a transcription factor that promotes the expression of many genes important in the production of myelin. It is therefore of critical importance in the development and maintenance of myelin sheaths. The expression of MYRF is specific to mature, myelinating oligodendrocytes in the CNS, and it has been shown to be critical for the maintenance of myelin by these cells. Following ablation of MYRF, the expression of myelin genes such as PLP1, MBP, MAG and MOG drops rapidly. Therefore, MYRF is a key regulator and likely a direct activator of the expression of these genes.

Animal models: Mice that lose MYRF during adulthood present with a severe demyelination similar to that seen in animal models of multiple sclerosis. This underlines the importance of an active renewal of proteins in the myelin sheath. Further, the activity of MYRF increases during remyelination, suggesting it has a critical role in this process. Animals with repressed Myrf in a proportion of oligodendrocyte precursor cells showed a delayed functional recovery from spinal cord injury. Myrf has also been shown to be significantly downregulated in a mouse model carrying the same mutation in the NPC1 protein that underlies Niemann-Pick type C1 disease, a neurodegenerative process in which dysmyelination is a main pathogenic factor. A disruption of oligodendrocyte formation and myelination may therefore be the root cause of the neurological abnormalities.
**Stacks Project** Stacks Project: The Stacks Project is an open-source collaborative mathematics textbook-writing project with the aim of covering "algebraic stacks and the algebraic geometry needed to define them". As of July 2022, the book consists of 115 chapters (excluding the license and index chapters) spread over 7,500 pages. The maintainer of the project, who reviews and accepts the changes, is Aise Johan de Jong.
**Cenomanian-Turonian boundary event** Cenomanian-Turonian boundary event: The Cenomanian-Turonian boundary event, also known as the Cenomanian-Turonian extinction, the Cenomanian-Turonian oceanic anoxic event (OAE 2), and the Bonarelli event, was one of two anoxic extinction events in the Cretaceous period (the other being the earlier Selli event, or OAE 1a, in the Aptian). The Cenomanian-Turonian oceanic anoxic event is considered to be the most recent truly global oceanic anoxic event in Earth's geologic history. Selby et al. in 2009 concluded that OAE 2 occurred approximately 91.5 ± 8.6 Ma, though estimates published by Leckie et al. (2002) are given as 93–94 Ma. The Cenomanian-Turonian boundary itself was refined in 2012 to 93.9 ± 0.15 Ma. There was a large carbon cycle disturbance during this time period, signified by a large positive carbon isotope excursion. However, apart from the carbon cycle disturbance, there were also large disturbances in the nitrogen, oxygen, phosphorus, sulphur, and iron cycles of the ocean.

Background: The Cenomanian and Turonian stages were first noted by D'Orbigny between 1843 and 1852. The global type section for this boundary is located in the Bridge Creek Limestone Member of the Greenhorn Formation near Pueblo, Colorado, whose beds record a Milankovitch orbital signature. Here, a positive carbon-isotope event is clearly shown, although none of the characteristic, organic-rich black shale is present. It has been estimated that the isotope shift lasted approximately 850,000 years longer than the black shale event, which may be the cause of this anomaly in the Colorado type section. A significantly expanded OAE2 interval from southern Tibet documents a complete, more detailed, finer-scale structure of the positive carbon isotope excursion, containing multiple shorter-term carbon isotope stages amounting to a total duration of 820 ± 25 ka.

The level is also known as the Bonarelli event because of a 1-to-2-metre (3 ft 3 in to 6 ft 7 in) layer of thick, black shale that marks the boundary, first studied by Guido Bonarelli in 1891. It is characterized by interbedded black shales, chert and radiolarian sands, and is estimated to span a 400,000-year interval. Planktonic foraminifera are absent from this Bonarelli level, and the presence of radiolarians in this section indicates relatively high productivity and an availability of nutrients. In the Western Interior Seaway, the Cenomanian-Turonian boundary event is associated with the Benthonic Zone, characterised by a higher density of benthic foraminifera relative to planktonic foraminifera, although the timing of the appearance of the Benthonic Zone is not uniformly synchronous with the onset of the oceanic anoxic event and thus cannot be used to consistently demarcate its beginning.

Timeline: The total duration of OAE2 has been estimated at around 0.82 ± 0.025 Myr [1] or 0.71 ± 0.17 Myr. Biodiversity patterns of planktic foraminifera indicate that the Cenomanian-Turonian extinction occurred in five phases. Phase I, which took place from 313,000 to 55,000 years before the onset of the anoxic event, witnessed a stratified water column and high planktonic foraminiferal diversity, suggesting a stable marine environment.
Phase II, characterised by significant environmental perturbations, lasted from 55,000 years before OAE2 until its onset and witnessed a decline in rotaliporids and heterohelicids, a zenith of schackoinids and hedbergellids, a 'large form eclipse' during which foraminifera exceeding 150 microns disappeared, and the start of a trend of dwarfism among many foraminifera. This phase also saw an enhanced oxygen minimum zone and increased productivity in surface waters. Phase III lasted for 100,000 to 900,000 years, was coincident with the deposition of the Bonarelli Level, and exhibited extensive proliferation of radiolarians, indicative of extremely eutrophic conditions. Phase IV lasted for around 35,000 years and was most notable for the increase in the abundance of hedbergellids and schackoinids, being extremely similar to Phase II, with the main difference being that rotaliporids were absent from Phase IV. Phase V was a recovery interval lasting 118,000 years and marked the end of the 'large form eclipse' that began in Phase II; heterohelicids and hedbergellids remained abundant during this phase, pointing to continued environmental disturbance.

Causes: Climate change: Earth warmed pronouncedly just before the beginning of OAE2. The Cenomanian-Turonian boundary represents one of the hottest intervals of the entire Phanerozoic eon, and it boasted the highest carbon dioxide concentrations of the Cretaceous period. Even before OAE2, during the late Cenomanian, tropical sea surface temperatures (SSTs) were very warm, about 27-29 °C. Mean tropical SSTs during OAE2 have been conservatively estimated to have been at least 30 °C, but may have reached as high as 36 °C. Minimum SSTs in mid-latitude oceans were >20 °C.

One possible cause of this hothouse was sub-oceanic volcanism. During the middle of the Cretaceous period, the rate of crustal production reached a peak, which may have been related to the rifting of the newly formed Atlantic Ocean. It may also have been driven by the widespread melting of hot mantle plumes under the ocean crust, at the base of the lithosphere, which may have resulted in the thickening of the oceanic crust in the Pacific and Indian Oceans. The resulting volcanism would have sent large quantities of carbon dioxide into the atmosphere, leading to an increase in global temperatures. Several independent events related to large igneous provinces (LIPs) occurred around the time of OAE2. A multitude of LIPs were active during OAE2: the Madagascar, Caribbean, Gorgona, Ontong Java, and High Arctic LIPs. Trace metals such as chromium (Cr), scandium (Sc), copper (Cu) and cobalt (Co) have been found at the Cenomanian-Turonian boundary, suggesting that an LIP could have been one of the main underlying causes of the event. The timing of the peak in trace metal concentration coincides with the middle of the anoxic event, suggesting that the effects of the LIPs may have occurred during the event but may not have initiated it. Other studies linked the lead (Pb) isotopes of OAE-2 to the Caribbean-Colombian and the Madagascar LIPs.
An osmium isotope excursion coeval with OAE2 strongly suggests submarine volcanism as its cause; in the Pacific, an unradiogenic osmium spike began about 350 kyr before the onset of OAE2 and terminated around 240 kyr after OAE2's beginning. Osmium isotope data from a highly expanded OAE2 interval in southern Tibet show multiple Os-isotope excursions, with the most pronounced one lagging the onset of OAE2 by ~50 kyr, probably related to a change in ocean connectivity at ~94.5 Ma [2]. Positive neodymium isotope excursions provide additional indications of pervasive volcanism as a cause of OAE2. Enrichments in zinc further bolster the case for extensive hydrothermal volcanism. The absence of geographically widespread mercury (Hg) anomalies resulting from OAE2 has been attributed to the limited dispersal range of this heavy metal by submarine volcanism. A modeling study performed in 2011 confirmed that an LIP could have initiated the event: the model revealed that peak carbon dioxide degassing from a volcanic LIP could have resulted in more than 90 percent global deep-ocean anoxia.

Causes: Plenus Cool Event: Large-scale organic carbon burial acted as a negative feedback loop that partially mitigated the warming effects of volcanic discharge of carbon dioxide, resulting in the Plenus Cool Event during the Metoicoceras geslinianum European ammonite biozone. Global average temperatures fell to around 4 °C lower than they were pre-OAE2, and equatorial SSTs dropped by 2.5–5.5 °C. This cooling event was insufficient to completely stop the rise in global temperatures. The negative feedback was ultimately overridden, as global temperatures continued to climb in step with continued volcanic release of carbon dioxide following the Plenus Cool Event, although this theory has been criticised and the warming after the Plenus Cool Event attributed to decreased silicate weathering instead.

Causes: Ocean acidification: Within the oceans, the emission of SO2, H2S, CO2, and halogens would have increased the acidity of the water, causing the dissolution of carbonate and a further release of carbon dioxide. Evidence of ocean acidification can be gleaned from calcium isotope ratios coeval with the extinction event, as well as from coccolith malformation and dwarfism. Ocean acidification was exacerbated by a positive feedback loop of increased heterotrophic respiration in highly biologically productive waters, elevating seawater concentrations of carbon dioxide and further decreasing pH.

Causes: Anoxia and euxinia: When the volcanic activity declined, this runaway greenhouse effect would likely have been put into reverse. The increased CO2 content of the oceans could have increased organic productivity in the ocean surface waters. The consumption of this newly abundant organic life by aerobic bacteria would produce anoxia and mass extinction. An acceleration of the hydrological cycle induced by warmer global temperatures drove greater fluxes of nutrient runoff into the oceans, fuelling primary productivity. The global environmental disturbance that resulted in these conditions increased atmospheric and oceanic temperatures. Boundary sediments show an enrichment of trace elements and contain elevated δ13C values. The positive δ13C isotope excursion found at the Cenomanian-Turonian boundary is one of the main carbon isotope events of the Mesozoic.
It represents one of the largest disturbances in the global carbon cycle of the past 110 million years. This δ13C excursion indicates a significant increase in the burial rate of organic carbon, pointing to the widespread deposition and preservation of organic carbon-rich sediments and to an ocean depleted of oxygen at the time. Depletion of manganese in sediments corresponding to OAE2 provides additional strong evidence of severe bottom-water oxygen depletion. The resulting elevated levels of carbon burial would account for the black shale deposition in the ocean basins.

Sulphate reduction increased during OAE2, causing euxinia, a type of anoxia defined by sulphate reduction and hydrogen sulphide production, as revealed by negative chromium isotope excursions, a low seawater molybdenum inventory, and molecular biomarkers of green sulphur bacteria.

OAE2 began on the southern margins of the proto-North Atlantic, from where anoxia spread across the rest of the proto-North Atlantic and then into the Western Interior Seaway (WIS) and the epicontinental seas of the Western Tethys. Anoxic waters spread rapidly throughout the WIS due to marine transgression and a powerful cyclonic circulation resulting from an imbalance between precipitation in the north and evaporation in the south. Anoxia was especially intense in the eastern North Sea, evidenced by its very positive δ13C values. Thanks to persistent upwelling, some marine regions, such as the South Atlantic, were able to remain at least intermittently partially oxygenated. Indeed, the redox state of the oceans varied geographically, bathymetrically and temporally during OAE2.

Causes: Milankovitch cycles: It has been hypothesised that the Cenomanian-Turonian boundary event occurred during a period of very low variability in Earth's insolation, theorised to be the result of coincident nodes in all orbital parameters. Barring chaotic perturbations in Earth's and Mars' orbits, the simultaneous occurrence of nodes of orbital eccentricity, axial precession, and obliquity on Earth occurs approximately every 2.45 million years. Numerous other oceanic anoxic events occurred throughout the extremely warm greenhouse conditions of the Middle Cretaceous, and it has been suggested that these Middle Cretaceous ocean anoxic events occurred cyclically in accordance with orbital cycle patterns. The mid-Cenomanian Event (MCE), which occurred in the Rotalipora cushmani planktonic foraminifer biozone, has been argued to be another example supporting this hypothesis of regular oceanic anoxic events governed by Milankovitch cycles. The MCE took place approximately 2.4 million years before the Cenomanian-Turonian oceanic anoxic event, roughly when an anoxic event would be expected given such a cycle. Geochemical evidence from a sediment core in the Tarfaya Basin indicates that the main positive carbon isotope excursion occurred during a prolonged eccentricity minimum, while smaller carbon isotope shifts observed in this core likely reflected variability in obliquity. Ocean Drilling Program Site 1138 in the Kerguelen Plateau yields evidence of a 20,000 to 70,000 year periodicity in changes in sedimentation, suggesting that either obliquity or precession governed the large-scale burial of organic carbon. Within the positive carbon isotope excursion, short-eccentricity-scale carbon isotope variability is documented in a significantly expanded OAE2 interval from southern Tibet.
Causes: Enhanced phosphorus recycling: The phosphorus-retention ability of seafloor sediments declined during OAE2, as revealed by a decline in reactive phosphorus species within OAE2 sediments. The mineralisation of seafloor phosphorus into apatite was inhibited by the significantly lower pH of seawater and much warmer temperatures during the Cenomanian and Turonian compared to the present day, which meant that significantly more phosphorus was recycled back into ocean water after being deposited on the sea floor. This would have intensified a positive feedback loop in which phosphorus is recycled faster into anoxic seawater than into oxygen-rich water, which in turn fertilises the water, causes increased eutrophication, and further depletes the seawater of oxygen. The influx of volcanically erupted and chemically weathered sulphate into the ocean also inhibited phosphorus burial by increasing hydrogen sulphide production, which hinders the burial of phosphorus through sorption to iron oxyhydroxide phases. OAE2 may have occurred during a peak in a 5-6 Myr cycle governing phosphorus availability; at this and other peaks in this oscillation, an increase in chemical weathering would have increased the marine phosphorus inventory and sparked a positive feedback loop of increasing productivity, anoxia, and phosphorus recycling that was only ended by a negative feedback of increased atmospheric oxygenation and wildfire activity that decreased chemical weathering, a feedback which operated on a much longer timescale. Enhanced phosphorus recycling would have resulted in an abundance of nitrogen-fixing bacteria, increasing the availability of yet another limiting nutrient and supercharging primary productivity through nitrogen fixation. The ratio of bioavailable nitrogen to bioavailable phosphorus, which is 16:1 in the present, fell precipitously as the ocean transitioned from being oxic and nitrate-dominated to anoxic and ammonium-dominated. A potent feedback loop of nitrogen fixation, productivity, deoxygenation, nitrogen removal, and phosphorus recycling was created. Bacterial hopanoids indicate that populations of nitrogen-fixing cyanobacteria were high during OAE2, providing a rich supply of nitrates and nitrites. Negative δ15N values reveal the dominance of ammonium through regenerative nutrient loops in the proto-North Atlantic.

Causes: Decreased sulphide oxidation: In the present day, sulphidic waters are generally prevented from spreading throughout the water column by the oxidation of sulphide with nitrate. However, during OAE2 the inventory of seawater nitrate was lower, meaning that chemolithoautotrophic oxidation of sulphides with nitrates was inefficient at preventing the spread of euxinia.

Causes: Sea level rise: A marine transgression in the latest Cenomanian resulted in an increase in average water depth, causing seawater to become less eutrophic in shallow, epicontinental seas. Turnovers in marine biota in such epicontinental seas have been suggested to be driven more by changes in water depth than by anoxia. Sea level rise also contributed to anoxia by transporting terrestrial plant matter from inundated lands seaward, providing an abundant source of sustenance for eutrophicating microorganisms.

Geological effects: Phosphate deposition: A phosphogenic event occurred in the Bohemian Cretaceous Basin during the peak of oceanic anoxia.
Phosphorus liberation in the pore-water environment, several centimetres below the interface between seafloor sediments and the water column, enabled the precipitation of phosphate through biological mediation by microorganisms.

Geological effects: Increase in weathering: Strontium and calcium isotope ratios both indicate that silicate weathering increased over the course of OAE2. Because of its effectiveness as a carbon sink on geologic timescales, the uptick in sequestration of carbon dioxide by the lithosphere may have helped to stabilise global temperatures after they soared, particularly at high latitudes, where the increase in weatherability was very pronounced.

Biotic effects: Changes in oceanic biodiversity and its implications: The event brought about the extinction of the pliosaurs and most ichthyosaurs. Coracoids of Maastrichtian age were once interpreted by some authors as belonging to ichthyosaurs, but these have since been interpreted as plesiosaur elements instead.

Although the cause is still uncertain, the result starved the Earth's oceans of oxygen for nearly half a million years, causing the extinction of approximately 27 percent of marine invertebrates, including certain planktic and benthic foraminifera, mollusks, bivalves, dinoflagellates and calcareous nannofossils. Planktonic foraminifera that dwelt in deeper waters were especially hard hit by the expansion of oxygen minimum zones. The alterations in diversity of marine invertebrate groups such as calcareous nannofossils reflect and characterise oligotrophy and ocean warmth in an environment with short spikes of productivity followed by long periods of low fertility. A study of the Cenomanian-Turonian boundary at Wunstorf, Germany, reveals the uncharacteristic dominance of a calcareous nannofossil group, Watznaueria, during the event. Unlike the Biscutum species, which prefer mesotrophic conditions and were generally dominant before and after the C/T boundary event, Watznaueria species prefer warm, oligotrophic conditions. In the Ohaba-Ponor section in Romania, the presence of Watznaueria barnesae indicates warm conditions, while the abundances of Biscutum constans, Zeugrhabdotus erectus, and Eprolithus floralis peak during cool intervals. Sites in Colorado, England, France, and Sicily show an inverse relationship between atmospheric carbon dioxide levels and the size of calcareous nannoplankton. In Whadi El Ghaib, a site in Sinai, Egypt, the foraminiferal community during OAE2 was low in diversity and dominated by taxa that were extremely tolerant of low-salinity, anoxic water. Radiolarians also suffered heavy losses in OAE2, one of their highest diversity losses of the Cretaceous.

The diversity of trace fossils plummeted sharply at the beginning of the Cenomanian-Turonian boundary event. The recovery interval after the anoxic event's conclusion features an abundance of Planolites and is characterised overall by a high degree of bioturbation.

At the time, there were also peak abundances of the green algal groups Botryococcus and prasinophytes, coincident with pelagic sedimentation. The abundances of these algal groups are strongly related to the increase in both the oxygen deficiency of the water column and the total content of organic carbon. The evidence from these algal groups suggests that there were episodes of halocline stratification of the water column during this time.
A species of freshwater dinocyst, Bosedinia, was also found in rocks dated to the time, suggesting that the oceans had reduced salinity.

Biotic effects: Changes in terrestrial biodiversity: No major change in terrestrial ecosystems is known to have been synchronous with the marine transgression associated with OAE2, although the loss of freshwater floodplain habitat has been speculated to have possibly resulted in the demise of some freshwater taxa. In fossiliferous rocks in southwestern Utah, a local extirpation of some metatherians and brackish-water vertebrates is associated with the later marine regression following OAE2 in the Turonian. Whatever the nature and magnitude of terrestrial extinctions at or near the Cenomanian-Turonian boundary, they were most likely caused mainly by factors other than eustatic sea-level fluctuations. The effect of the ecological crisis on terrestrial plants has been concluded to have been inconsequential, in contrast to extinction events driven by terrestrial large igneous provinces.
**Tvheadend** Tvheadend: TVHeadend, sometimes TVH for short, is a server application that reads video streams from LinuxTV sources and publishes them as internet streams. It supports multiple inputs (for instance, a DVB-T USB tuner stick and a Sat>IP tuner), combining them into a single channel listing. TVH servers are themselves IP signal providers, allowing networks of TVH servers to be combined.

Tvheadend: TVH is typically used to send video to receiver devices such as smart televisions and set-top boxes throughout a household network, but it is also used to forward signals over long-distance links, even between countries. It also includes electronic program guide information (if available) and the ability to record programs like a DVR, including the ability to transcode from MPEG-2 to H.264 and H.265.
**Fungerin** Fungerin: Fungerin is an antifungal alkaloid with the molecular formula C13H18N2O2, produced by Fusarium species.
**Microsoft Notification Protocol** Microsoft Notification Protocol: Microsoft Notification Protocol (MSNP, also known as the Mobile Status Notification Protocol) is an instant messaging protocol developed by Microsoft for use by the Microsoft Messenger service and the instant messaging clients that connect to it, such as Skype since 2014, and the earlier Windows Live Messenger, MSN Messenger, Windows Messenger, and Microsoft Messenger for Mac. Third-party clients such as Pidgin and Trillian can also communicate using the protocol. MSNP was first used in a publicly available product with the first release of MSN Messenger in 1999.

Technical details: Any major change made to the protocol, such as a new command or syntax changes, results in the version number being incremented by one, in the format MSNP#. During October 2003, Microsoft started blocking access to the Messenger service for versions below MSNP8. Starting on September 11, 2007, Microsoft forced most remaining users of MSN Messenger to upgrade to Windows Live Messenger 8.1 due to security considerations.

Version history: MSNP1: MSNP1 has never been public. It is believed to have been used during the early stages of design and development of MSN Messenger 1.

MSNP2: A pre-release version was made available to developers in 1999 in an Internet Draft[1]. However, the production version differed from the published version in a few subtle ways.

MSNP3: Both MSNP2 and MSNP3 were supported by MSN Messenger 2.0. MSNP3 was also supported by the first version of the WebTV (MSN TV) Messenger client released in its Summer 2000 upgrade, and introduces a new command specifically for use by those clients, IMS, which allows a client to allow or block new switchboard sessions (chats) with other users at any point while the user is signed in.

MSNP4 and MSNP5: MSNP3, 4, and 5 were supported by the Messenger servers by July 2000 [2] and used by MSN Messenger 3.0 and 4.0.

MSNP6 and MSNP7: MSNP6 was used by later versions of MSN Messenger 4.x. In 2002, MSN Messenger 5.0 used MSNP7.

MSNP8: MSNP8 introduced a different authentication method, now sending authorization to the secure Microsoft Passport servers and returning a challenge string. It was the minimum version of the protocol accepted by the .NET Messenger Service after Microsoft blocked earlier versions for security reasons. As such, old and obsolete clients are unable to sign in, forcing users to upgrade. Version 5.0 of MSN Messenger and Windows Messenger versions 4.7 through 5.1 are the only known desktop clients that use MSNP8. MSNP8 was also supported by the Messenger clients in later versions of MSN TV starting at 2.8.1, as well as its successor, the MSN TV 2, and was the last version of MSNP to be supported by MSN TV. This version of the protocol supports Windows Messenger-to-Windows Messenger webcam and voice capabilities.

MSNP9: MSNP9 was introduced with MSN Messenger 6, adding support for "D type" (data) messages, which are used for transferring display pictures and custom emoticons between clients, frame-by-frame webcam (rather than a traditional stream like Windows Media Player's WMV format), an improved voice system, and improved NAT traversal for file transfers.

MSNP10: Employed in MSN Messenger 6.1, after Microsoft started blocking earlier versions in October 2003. It was not a big overhaul; the only obvious change was integration with Hotmail address books.
MSNP11: Employed by MSN Messenger 7.0.

MSNP12: Employed by MSN Messenger 7.5.

MSNP13: Employed by Windows Live Messenger 8.0, MSNP13 features many changes. Most notably, contact list synchronization has been removed and clients must instead send a SOAP request to a contacts server, a change also known as "Client goes to ABCH" (where ABCH stands for Address Book Clearing House, the address book service behind all MSN and Windows Live services). The client must then send the contacts data to the server for it to send presence information.

MSNP14: MSNP14 adds Yahoo! Messenger interoperability.

MSNP15: MSNP15 is the protocol version introduced with Windows Live Messenger 8.1 on 2006-09-08. It is based on MSNP14 but uses a different authentication mechanism called RPS (Relying Party Suite). Where TWN "Tweener" authentication is used on protocol versions 14 and below, SSO (Single Sign-On; RPS) authentication is used on protocol versions 15 and above. In addition to a new authentication mechanism, Microsoft also planned to make more of the user's properties roaming; that is, the user's display picture, and in the future personal status messages, would be the same wherever the user signs in. Furthermore, support for user locations was added to the personal status message, although this feature was later removed from the Windows Live Messenger 8.1 client.

MSNP16: MSNP16 is used in a pre-release version of Windows Live Messenger 9.0, leaked in December 2007. It features "Multiple Points of Presence" (MPOP), the ability to sign in at two places at the same time with chats replicated to all of them. The UUX data have been extended to contain Endpoint Data (also MPOP), as well as Signature Sound MSN Object Data.

MSNP17: MSNP17 is identified by Windows Live Messenger servers on messenger.hotmail.com, but unused by any official client released by Microsoft.

MSNP18: MSNP18 is used in Windows Live Messenger 2009 (14.0). Its main new addition is the Groups feature, much like persistent grouped conversations. UUX Data have been extended to include Scene image MSN Object data.

MSNP21: Employed by Windows Live Messenger 2011 (Wave 4) and Windows Live Messenger 2012.

MSNP24: Employed by Skype since early 2014. Also used by Microsoft Teams.
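The versioning scheme described under "Technical details" can be made concrete with a small sketch. Publicly documented MSNP login traces open with a VER command listing the protocol versions a client supports (for example, "VER 1 MSNP18 MSNP17 CVR0"); the parser below is a simplified illustration of picking the newest mutually supported version, not a faithful implementation of the protocol.

```python
# Illustrative sketch of MSNP-style version negotiation. The command layout
# here is a simplification of documented login traces, for illustration only.

def negotiate_version(ver_line: str, server_supported: set[str]) -> str | None:
    parts = ver_line.strip().split()
    # parts[0] is the command ("VER"), parts[1] a transaction ID;
    # the remaining tokens are protocol versions offered by the client.
    offered = [p for p in parts[2:] if p.startswith("MSNP")]
    # A higher MSNP number means a newer protocol revision.
    for token in sorted(offered, key=lambda t: int(t[4:]), reverse=True):
        if token in server_supported:
            return token
    return None  # no common version; the server would reject the client

print(negotiate_version("VER 1 MSNP18 MSNP17 CVR0", {"MSNP18", "MSNP21"}))
# -> MSNP18
```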
**EGS (program)** EGS (program): The EGS (Electron Gamma Shower) computer code system is a general-purpose package for the Monte Carlo simulation of the coupled transport of electrons and photons in an arbitrary geometry, for particles with energies from a few keV up to several hundred GeV. It originated at SLAC, but the National Research Council of Canada and KEK have been involved in its development since the early 1980s.

EGS (program): Development of the original EGS code ended with version EGS4. Since then, two groups have rewritten the code with new physics: EGSnrc, maintained by the Ionizing Radiation Standards Group, Measurement Science and Standards, National Research Council of Canada, and EGS5, maintained by KEK, the Japanese particle physics research facility.

EGSnrc: EGSnrc is a general-purpose software toolkit that can be applied to build Monte Carlo simulations of coupled electron-photon transport, for particle energies ranging from 1 keV to 10 GeV. It is widely used internationally in a variety of radiation-related fields. The EGSnrc implementation improves the accuracy and precision of the charged-particle transport mechanics and the atomic scattering cross-section data. The charged-particle multiple scattering algorithm allows for large step sizes without sacrificing accuracy, a key feature of the toolkit that leads to fast simulation speeds. EGSnrc also includes a C++ class library called egs++ that can be used to model elaborate geometries and particle sources.

EGSnrc: EGSnrc is open source and distributed on GitHub under the GNU Affero General Public License. Users can download EGSnrc for free, submit bug reports, and contribute pull requests on its GitHub page; the documentation for EGSnrc is also available online. EGSnrc is distributed with a wide range of applications that use the radiation transport physics to calculate specific quantities. These codes have been developed by numerous authors over the lifetime of EGSnrc to support the large user community. It is possible to calculate quantities such as absorbed dose, kerma, and particle fluence under complex geometrical conditions. One of the best-known EGSnrc applications is BEAMnrc, which was developed as part of the OMEGA project, a collaboration between the National Research Council of Canada and a research group at the University of Wisconsin–Madison. All types of medical linear accelerators can be modelled using BEAMnrc's component module system.
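To illustrate the Monte Carlo idea that EGS-family codes are built on (though not the EGS API itself), the sketch below samples photon free paths from the exponential attenuation law and estimates transmission through a slab; all names and values are invented for the example.

```python
# Not the EGS API: a minimal, generic illustration of Monte Carlo photon
# transport. Free paths in a homogeneous medium follow an exponential
# distribution with mean 1/mu (mu = linear attenuation coefficient), so the
# fraction of photons crossing a slab uncollided can be estimated by sampling.

import math
import random

def transmitted_fraction(mu_per_cm: float, thickness_cm: float,
                         n_photons: int = 100_000) -> float:
    """Estimate the fraction of photons crossing a slab without interacting."""
    survived = 0
    for _ in range(n_photons):
        # Sample a free path length: s = -ln(U)/mu, with U ~ Uniform(0, 1).
        # Use 1 - random() so the argument to log is never exactly zero.
        s = -math.log(1.0 - random.random()) / mu_per_cm
        if s > thickness_cm:
            survived += 1
    return survived / n_photons

mu, t = 0.2, 5.0  # arbitrary example values (per cm, and cm)
print(transmitted_fraction(mu, t))  # Monte Carlo estimate
print(math.exp(-mu * t))            # analytic answer exp(-mu*t), about 0.368
```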
**IKBKAP** IKBKAP: IKBKAP (inhibitor of kappa light polypeptide gene enhancer in B-cells, kinase complex-associated protein) is a human gene encoding the IKAP protein, which is ubiquitously expressed at varying levels in all tissue types, including brain cells. The IKAP protein is thought to participate as a subunit in the assembly of a six-protein putative human holo-Elongator complex, which allows for transcriptional elongation by RNA polymerase II. Further evidence has implicated the IKAP protein as critical in neuronal development, and suggests that decreased expression of IKAP in certain cell types is the molecular basis for the severe neurodevelopmental disorder familial dysautonomia. Other pathways that have been connected to IKAP protein function in a variety of organisms include tRNA modification, cell motility, and cytosolic stress signalling.

Homologs of the IKBKAP gene have been identified in multiple other eukaryotic model organisms. Notable homologs include Elp1 in yeast, Ikbkap in mice, and D-elp1 in fruit flies. The fruit fly homolog (D-elp1) has RNA-dependent RNA polymerase activity and is involved in RNA interference. The IKBKAP gene is located on the long (q) arm of chromosome 9 at position 31, from base pair 108,709,355 to base pair 108,775,950.

Function and mechanism: Originally, it was proposed that the IKBKAP gene in humans encoded a scaffolding protein (IKAP) for the IκB enzyme kinase (IKK) complex, which is involved in pro-inflammatory cytokine signal transduction in the NF-κB signalling pathway. However, this was subsequently disproven when researchers applied a gel filtration method and could not identify IKK complexes contained in fractions with IKAP, thus dissociating IKAP from a role in the NF-κB signalling pathway.

Later, it was discovered that IKAP functions as a cytoplasmic scaffold protein in the mammalian JNK-signalling pathway, which is activated in response to stress stimuli. In an in vivo experiment, researchers showed direct interaction between IKAP and JNK induced by the application of stressors such as ultraviolet light and TNF-α (a pro-inflammatory cytokine).

IKAP is now also widely acknowledged to have a role in transcriptional elongation in humans. The RNA polymerase II holoenzyme is constituted partly of a multi-subunit histone acetyltransferase element known as the RNA polymerase II elongator complex, of which IKAP is one subunit. The association of the elongator complex with the RNA polymerase II holoenzyme is necessary for subsequent binding to nascent pre-mRNA of certain target genes, and thus their successful transcription. Specifically, within the cell, the depletion of functional elongator complexes due to low IKAP expression has been found to have a profound effect on the transcription of genes involved in cell migration. In yeast, experimental data show the elongator complex functioning in a variety of processes, from exocytosis to tRNA modification. This finding demonstrates that the function of the elongator complex is not conserved among species.

Related conditions: Familial dysautonomia: Familial dysautonomia (also known as "Riley-Day syndrome") is a complex congenital neurodevelopmental disease, characterized by unusually low numbers of neurons in the sensory and autonomic nervous systems. The resulting symptoms of patients include gastrointestinal dysfunction, scoliosis, and pain insensitivity.
This disease is especially prevalent in the Ashkenazi Jewish population, where 1 in 3,600 live births presents familial dysautonomia. By 2001, the genetic cause of familial dysautonomia was localized to a dysfunctional region spanning 177 kb on chromosome 9q31. With the use of blood samples from diagnosed patients, the implicated region was successfully sequenced. The IKBKAP gene, one of the five genes identified in that region, was found to have a single-base mutation in over 99.5% of cases of familial dysautonomia seen.

The single-base mutation, overwhelmingly noted as a transition from cytosine to thymine, is present in the 5' splice donor site of intron 20 in the IKBKAP pre-mRNA. This prevents recruitment of splicing machinery, and thus exon 19 is spliced directly to exon 21 in the final mRNA product; exon 20 is removed from the pre-mRNA along with the introns. The unintentional removal of an exon from the final mRNA product is termed exon skipping (a toy illustration appears at the end of this entry). Therefore, there is a decreased level of functional IKAP protein expression within affected tissue. However, this disorder is tissue-specific. Lymphoblasts, even with the mutation present, may continue to express some functional IKAP protein. In contrast, brain tissue with the single-base mutation in the IKBKAP gene predominantly expresses a truncated, mutant IKAP protein which is nonfunctional. The exact mechanism by which reduced IKAP expression induces the familial dysautonomia phenotype is unclear; still, because IKAP is involved in transcriptional regulation, a variety of mechanisms have been proposed. One such theory suggests that critical genes in the development of wild-type sensory and autonomic neurons are improperly transcribed. An extension of this research suggests that genes involved in cell migration are impaired in the nervous system, creating a foundation for this disorder.

In a small number of reported familial dysautonomia cases, researchers have identified other mutations that cause a change in amino acids (the building blocks of proteins). In these cases, arginine is replaced by proline at position 696 in the IKAP protein's chain of amino acids (also written as Arg696Pro), or proline is replaced by leucine at position 914 (also written as Pro914Leu). These mutations cause the resulting IKAP protein to malfunction. As an autosomal recessive disorder, two mutated alleles of the IKBKAP gene are required for the disorder to manifest. However, despite the predominance of the same single-base mutation as the reputed cause of familial dysautonomia, the severity of the affected phenotype varies within and between families. Kinetin (6-furfurylaminopurine) has been found to have the capacity to repair the splicing defect and increase wild-type IKBKAP mRNA expression in vivo. Further research is still required to assess the suitability of kinetin as a possible future oral treatment.

Model organisms: Model organisms have been used in the study of IKBKAP gene function.

Mouse: A conditional knockout mouse line, called Ikbkaptm1a(KOMP)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists, at the Wellcome Trust Sanger Institute. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-five tests were carried out and two phenotypes were reported.
No homozygous mutant embryos were identified during gestation, and in a separate study, none survived until weaning. The remaining tests were carried out on heterozygous mutant adult mice; no significant abnormalities were observed in these animals. Model organisms: Saccharomyces cerevisiae The homologous protein for IKAP in yeast is Elp1, with 29% identity and 46% similarity detected between the proteins. The yeast Elp1 protein is a subunit of a three-protein RNA polymerase II-associated elongator complex. Drosophila melanogaster The IKBKAP gene homolog in fruit flies is the CG10535 gene, encoding the D-elp1 protein, the largest of the three subunits making up the RNA polymerase II core elongator complex. This subunit was found to have RNA-dependent RNA polymerase activity, through which it can synthesize double-stranded RNA from single-stranded RNA templates.
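The exon-skipping defect described above can be illustrated with a minimal sketch. The exon sequences below are invented placeholders for illustration only, not the real IKBKAP exons or coordinates:

```python
# Toy illustration of exon skipping (hypothetical sequences, not real IKBKAP exons).
# In familial dysautonomia, a C->T transition in the 5' splice donor site of
# intron 20 prevents the spliceosome from recognizing exon 20, so exon 19 is
# joined directly to exon 21 in the mature mRNA.

exons = {19: "ATGGCT", 20: "GAAAAG", 21: "TGCTAA"}  # placeholder sequences

def splice(exon_numbers):
    """Join the listed exons in order to form the mature mRNA (sense strand)."""
    return "".join(exons[n] for n in exon_numbers)

wild_type = splice([19, 20, 21])   # normal splicing retains exon 20
mutant    = splice([19, 21])       # exon 20 skipped along with the introns

print("wild type:", wild_type)
print("mutant   :", mutant)
```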
**Half hitch** Half hitch: The half hitch is a simple overhand knot, where the working end of a line is brought over and under the standing part. Insecure on its own, it is a valuable component of a wide variety of useful and reliable hitches, bends, and knots. The half hitch is tied with one end of a rope which is passed around an object and secured to its own standing part with a single hitch. Half hitch: Securing an additional single hitch to the rope's standing part produces the related knot two half-hitches. Alternatively, a half hitch may be made secure on its own by placing the final crossing opposite to the turn around the working end. This locks the end in place, and holds fast as long as the hitch is loaded by a steady pull. A half hitch in this configuration is sometimes used to tie strings to the bridge of a classical guitar. Half hitch: Another instance where a half hitch stands on its own without additional embellishment is when it is added to a timber hitch to help stabilize a load in the direction of pull. A timber hitch is tied on the far end of the load to bind it securely, and a half hitch is made at the forward end to serve as a guide for the rope. In this instance, the half hitch combined with a timber hitch is known as a killick hitch or kelleg hitch. The knot is attractive to the eye and so is used decoratively for French whipping, which is also known as half hitch whipping.
**Cost-exchange ratio** Cost-exchange ratio: In anti-ballistic missile (ABM) defence, the cost-exchange ratio is the ratio of the incremental cost to the aggressor of getting one additional warhead through the defence screen, divided by the incremental cost to the defender of offsetting the additional missile. For instance, a single new ICBM might require a single new ABM to counter it, and if they both cost the same, the cost-exchange ratio would be 1:1. Cost-exchange ratio: Throughout the Cold War, the cost-exchange ratio was almost always strongly in favor of the offense. Some of this has to do with the fact that an ICBM can be aimed at any target, which the defender cannot know in advance. To shoot that warhead down, the defender has to wait until it appears on radar, which typically happens only a few hundred miles from the target. This means a single defensive missile cannot be used to counter a single warhead; ABMs have to be deployed in a spread-out fashion so that one can respond no matter where the warhead appears. Even if only a single ABM is needed to shoot down a single new missile, a copy of that ABM would need to be added at multiple bases, depending on its range. For short-range weapons like the Sprint, dozens are needed for every new Soviet warhead. Cost-exchange ratio: Through the 1950s and 60s there were intense ongoing debates about the exact figures of the cost-exchange ratio. This ended with the introduction of multiple independently targetable re-entry vehicles, or MIRVs. MIRV allowed a single ICBM to launch multiple warheads, each attacking a different target. Now every new ICBM required dozens of new ABMs to counter it, swinging the cost-exchange ratio so dramatically in favor of the offense that it ended any debate on the topic. Consideration of cost-exchange ratios was influential in persuading the United States and the Soviet Union to sign the ABM Treaty. Cost-exchange ratio: The topic was once again a consideration in the era of the Strategic Defense Initiative, SDI or "Star Wars". In this case the defensive weapons attacked the ICBMs before they released their warheads, reducing the exchange ratio to about one, although at a very high dollar cost. Some weapons, like the Project Excalibur system, completely reversed the ratio by attacking dozens of missiles at once, a single weapon thereby potentially destroying hundreds of warheads. Ultimately these technologies failed to mature, and the system was abandoned with the end of the Cold War.
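The arithmetic behind these swings can be sketched directly. All costs, base counts, and warhead counts below are invented for illustration, not historical figures:

```python
# Illustrative cost-exchange calculation with made-up numbers.
# ratio > 1 favors the defense; ratio < 1 favors the offense.

def cost_exchange_ratio(cost_per_added_warhead, cost_to_offset_it):
    """Incremental cost to the attacker divided by incremental cost to the defender."""
    return cost_per_added_warhead / cost_to_offset_it

# Pre-MIRV: one new ICBM vs. one ABM of equal cost at each of, say, 5 bases.
icbm_cost, abm_cost, bases = 10.0, 10.0, 5
print(cost_exchange_ratio(icbm_cost, abm_cost * bases))            # 0.2 -> offense favored

# Post-MIRV: the same ICBM now carries 10 warheads, each needing its own ABMs.
warheads = 10
print(cost_exchange_ratio(icbm_cost, abm_cost * bases * warheads)) # 0.02 -> far worse
```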
**Tsavorite** Tsavorite: Tsavorite or tsavolite is a variety of the garnet group species grossular, a calcium-aluminium garnet with the formula Ca3Al2Si3O12. Trace amounts of vanadium or chromium provide the green color. Tsavorite: In 1967, British gem prospector and geologist Campbell R. Bridges discovered a deposit of green grossular in the mountains of Simanjiro District of Manyara Region in north-east Tanzania, in a place called Lemshuko, 15 km (9.3 mi) from Komolo, the nearest village. The specimens he found were of very intense color and high transparency. The find interested the gem trade, and attempts were made to export the stones, but the Tanzanian government did not provide permits. Tsavorite: Believing that the deposit was part of a larger geological structure extending possibly into Kenya, Bridges began prospecting in that nation. He was successful a second time in 1971, when he found the mineral variety there, and was granted a permit to mine the deposit. The gemstone was known only to mineral specialists until 1974, when Tiffany and Co launched a marketing campaign which brought broader recognition of the stone. Bridges was murdered in 2009 when a mob attacked him and his son on their property in Tsavo East National Park. It is believed that the attack was connected to a three-year dispute over access to and control of Bridges' gemstone mines. The name tsavorite was proposed by Tiffany and Co president Henry Platt in honor of Tsavo East National Park in Kenya. Apart from its source localities in Kenya and Tanzania, where it was first discovered, it is also found in Toliara (Tuléar) Province, Madagascar. Small deposits of gem grade material have been found in Pakistan and Queen Maud Land, Antarctica. No other occurrences of gem material have yet been discovered. Although rare in gem quality over several carats (1 carat = 200 mg), tsavorite has occasionally been found in larger sizes. In late 2006, a 925-carat (185.0 g) crystal was discovered. It yielded an oval mixed-cut 325 carat (65 g) stone, one of the largest, if not the largest, faceted tsavorites in the world. A crystal that yielded a 120.68-carat (24.136 g) oval mixed-cut gem was also uncovered in early 2006. The Lion of Merelani is a square cushion cut tsavorite that weighs 116.76 carats and has 177 facets. It is on display at the Smithsonian Institution. Tsavorite formed in a Neoproterozoic metamorphic event which involved extensive folding and refolding of rock. This resulted in a wide range of inclusions forming within most tsavorite crystals. These inclusions are strong identifying features of tsavorite.
**Human uses of bats** Human uses of bats: Human uses of bats include economic uses such as bushmeat or in traditional medicine. Bats are also used symbolically in religion, mythology, superstition, and the arts. Perceived medical uses of bats include treating epilepsy in South America, night blindness in China, and rheumatism, asthma, chest pain, and fever in South Asia. Bat meat is consumed in Oceania, Australia, Asia, and Africa, with about 13% of all species hunted for food. Other economic uses of bats include using their teeth as currency on the island of Makira. Human uses of bats: Bats are widely represented in the arts, with inclusion in epic poems, plays, fables, and comic books. Though frequently associated with malevolence in Western art, bats are symbols of happiness in China. Economic uses: Traditional medicine Live bats are sold in Bolivia for purported medicinal uses. Specifically, consuming the bats' blood is believed to treat epilepsy. A 2010 study documented that 3,000 bats were sold per month in markets in four Bolivian cities. Species sold in these markets include Seba's short-tailed bats, mouse-eared bats, and common vampire bats. Bat excrement (guano) is used in traditional Chinese medicine as a treatment for night blindness. The Romans believed that bat blood was an antidote for snake venom. Flying foxes are killed for use in traditional medicine. The Indian flying fox, for example, has many perceived medical uses. Some believe that its fat is a treatment for rheumatism. Tribes in the Attappadi region of India eat the cooked flesh of the Indian flying fox to treat asthma and chest pain. Economic uses: Healers of the Kanda Tribe of Bangladesh use hair from Indian flying foxes to create treatments for "fever with shivering." Meat Bats are consumed for their meat in several regions, including Oceania, Australia, Southeast Asia, China, and West and Central Africa. Bats have been used as a food source for humans for thousands of years. At least 167 species of bats are hunted around the world, or about 13% of all bat species. Economic uses: Materials Indigenous societies in Oceania used parts of flying foxes for functional and ceremonial weapons. In the Solomon Islands, people created barbs out of their bones for use in spears, and still use their dry skins to make kites. In New Caledonia, ceremonial axes made of jade were decorated with braids of flying fox fur. There are modern and historical references to flying fox byproducts used as currency. In New Caledonia, braided flying fox fur was once used as currency. On the island of Makira, which is part of the Solomon Islands, indigenous peoples still hunt flying foxes for their teeth as well as for bushmeat. The canine teeth are strung together on necklaces that are used as currency. Teeth of the insular flying fox are particularly prized, as they are usually large enough to drill holes in. Economic uses: The Makira flying fox is also hunted, despite its smaller teeth. Economic uses: Fertilizer Bat guano is a natural, organic fertilizer used by gardeners and plant enthusiasts across the world. It benefits not only the plants but also the bats, as many gardeners build bat houses to shelter their natural fertilizer suppliers. Bat guano contains many elements that benefit plant growth: carbon, nitrogen, sulfur, and phosphorus.
Because of these natural properties, guano has become popular across the world as a natural and organic fertilizer. Economic uses: Pest control Bats can eat up to 3,000 insects a night and are increasingly used in residential neighborhoods as natural pest control. Especially for communities at high risk of diseases such as Zika, West Nile virus, and St. Louis encephalitis, bats can decrease the population of mosquitoes and other pests naturally, without the use of pesticides. Bats also perform important pest control for farmers, cutting down the numbers of pests that eat and destroy their crops. Farmers in turn are protective of bats and often place bat houses near their crop fields to help attract and house bats for their natural pest control. Symbolic uses: Mythology, religion, and superstition In Mayan mythology, the deity Camazotz was a bat god. "Camazotz" translates to "death bat" or "snatch bat". Though many superstitions related to bats are negative, some are positive. In Ancient Macedonia, people carried amulets made out of bat bones; bats were considered the luckiest of all animals, so their bones were sure to bring good luck. In China, bats are also considered good luck or bringers of happiness, as the Chinese word Fu is a homophone for both "bat" and "happiness". Flying fox wings were depicted on the war shields of the Asmat people of Indonesia; they believed that the wings offered protection to their warriors. The 10th century Geoponica stated that affixing a bat's head to a dovecote would prevent domestic pigeons from straying, and Pliny the Elder's Natural History asserted that carrying a bat three times around a room and then nailing it head-down to a window would magically protect sheep pens. Bats are associated with negative uses or beings in many cultures. Symbolic uses: In Nigeria, for example, bats are thought of as witches; in Ivory Coast, they are believed to be ghosts or spirits. In the Bible's Book of Leviticus, bats are referred to as "birds you are to regard as unclean," and therefore should not be consumed. Symbolic uses: Arts Bats have a long history of inclusion in the arts. The Ancient Greek playwright Aristophanes is believed to have been the first to allude to bats coming from hell, in 414 BC, leading to the popular expression "bat out of hell". The Greek storyteller Aesop used bats as characters in two of his fables, and bats appear twice in the Ancient Greek epic poem the Odyssey. One of the most famous bat-inspired characters is Batman, a superhero who debuted in American comic books in 1939. In more recent times, bats are main characters in the children's book Stellaluna (1993) and the Silverwing series (1997–2007). Symbolic uses: Bats are a popular component of the natural horror genre in films and books. In 1897, author Bram Stoker wrote Dracula; the book and its film adaptations continued a legacy of bats being portrayed as "evil, bloodsucking monsters". Other natural horror films including bats are The Devil Bat (1940), Nightwing (1979), and Bats (1999). In Chinese art, bats are used to symbolize happiness. A popular use of bats in Chinese art is the wufu, a depiction of a tree surrounded by five bats, symbolizing the five happinesses: good luck, health, wealth, longevity, and tranquility. Bats are similarly found on Chinese teacups, on greeting cards, in paintings, and in embroidery. In theatre, bats are featured in the 1874 German operetta Die Fledermaus (The Bat in English).
Die Fledermaus is unusual in Western culture in that its bats are not portrayed as a symbol of malevolence. A 1920 play, The Bat, featured a villain called "the Bat". Symbolic uses: Heraldry and branding Bats are a common element of heraldry, particularly in Spain, France, Switzerland, Ireland, and England. Bats are frequently displayed with their wings outstretched, facing the observer. The use of bats in heraldry was meant to inspire fear in enemies, as well as to symbolize vigilance. The liquor company Bacardi prominently uses bats in its branding, with its main logo featuring a New World fruit bat. Several sports teams use bats in their logos, including Valencia CF (soccer) and the Louisville Bats (Minor League Baseball).
**Nikolsky's sign** Nikolsky's sign: Nikolsky's sign is a clinical dermatological sign, named after Pyotr Nikolsky (1858–1940), a Russian physician who trained and worked in the Russian Empire. The sign is present when slight rubbing of the skin results in exfoliation of the outermost layer. A typical test is to place the eraser of a pencil on the roof of a lesion and spin the pencil in a rolling motion between the thumb and forefinger. If the lesion is opened (i.e., the skin sloughs off), then Nikolsky's sign is present/positive. Nikolsky's sign: Nikolsky's sign is almost always present in Stevens–Johnson syndrome/toxic epidermal necrolysis and in staphylococcal scalded skin syndrome, caused by the exfoliative toxin of Staphylococcus aureus. It is also associated with pemphigus vulgaris and pemphigus foliaceus. It is useful in differentiating between a diagnosis of pemphigus vulgaris or mucous membrane pemphigoid (where the sign is present) and bullous pemphigoid (where it is absent). Nikolsky's sign: The Nikolsky sign is dislodgement of intact superficial epidermis by a shearing force, indicating a plane of cleavage in the skin at the epidermal cell junctions (e.g., desmosomes). The histological picture involves thinner, weaker attachments of the skin lesion itself to the normal skin, resulting in easier dislodgement. The formation of new blisters upon slight pressure (direct Nikolsky) and shearing of the skin due to rubbing (indirect Nikolsky) is a sign of pemphigus vulgaris, albeit not 100% reliable for diagnosis. In addition, another physical examination finding, the Asboe-Hansen sign, may be used to help determine the absence of intercellular connections holding epidermal cells together.
**Miller cycle** Miller cycle: In engineering, the Miller cycle is a thermodynamic cycle used in a type of internal combustion engine. The Miller cycle was patented by Ralph Miller, an American engineer, in U.S. Patent 2,817,322, dated Dec 24, 1957. The engine may be two- or four-stroke and may be run on diesel fuel, gases, or dual fuel. It uses a supercharger to offset the performance loss of the Atkinson cycle. Miller cycle: This type of engine was first used in ships and stationary power-generating plants, and is now used for some railway locomotives such as the GE PowerHaul. It was adapted by Mazda for their KJ-ZEM V6, used in the Millenia sedan, and in their Eunos 800 sedan (Australia) luxury cars. Subaru combined a Miller-cycle flat-4 with a hybrid driveline for their concept "Turbo Parallel Hybrid" car, known as the Subaru B5-TPH. Nissan introduced a small three-cylinder engine with variable intake valve timing that is claimed to operate on the Atkinson cycle at low load (where the lower power density is not a handicap) and on the Miller cycle when under light boost. Overview: A traditional reciprocating internal combustion engine uses four strokes, of which two can be considered high-power: the compression stroke (high power flow from crankshaft to the charge) and the power stroke (high power flow from the combustion gases to the crankshaft). Overview: In the Miller cycle, the intake valve is left open longer than it would be in an Otto-cycle engine. In effect, the compression stroke becomes two discrete stages: the initial portion when the intake valve is open and the final portion when the intake valve is closed. This two-stage compression stroke creates the so-called "fifth" stroke that the Miller cycle introduces. As the piston initially moves upwards in what is traditionally the compression stroke, the charge is partially expelled back out through the still-open intake valve. Typically, this loss of charge air would result in a loss of power. However, in the Miller cycle, this is compensated for by the use of a supercharger. The supercharger typically needs to be of the positive-displacement (Roots or screw) type due to its ability to produce boost at relatively low engine speeds; otherwise, low-speed power will suffer. Alternatively, a turbocharger can be used for greater efficiency, if low-speed operation is not required, or can be supplemented with electric motors. Overview: In the Miller-cycle engine, the piston begins to compress the fuel-air mixture only after the intake valve closes; and the intake valve closes after the piston has traveled a certain distance above its bottom-most position: around 20 to 30% of the total piston travel of this upward stroke. So in the Miller cycle engine, the piston actually compresses the fuel-air mixture only during the latter 70% to 80% of the compression stroke. During the initial part of the compression stroke, the piston pushes part of the fuel-air mixture through the still-open intake valve, and back into the intake manifold. Overview: Charge temperature The charge air is compressed using a supercharger (and cooled by an intercooler) to a pressure higher than that needed for the engine cycle, but filling of the cylinders is reduced by suitable timing of the inlet valve. Thus the expansion of the air and the consequent cooling take place in the cylinders and partially in the inlet.
Reducing the temperature of the air/fuel charge allows the power of a given engine to be increased without major changes such as increasing the cylinder/piston compression ratio. When the temperature is lower at the beginning of the cycle, the air density is increased without a change in pressure (the mechanical limit of the engine is shifted to a higher power). At the same time, the thermal load limit shifts due to the lower mean temperatures of the cycle. This allows ignition timing to be advanced beyond what is normally allowed before the onset of detonation, thus increasing the overall efficiency still further. An additional advantage of the lower final charge temperature is that the emission of NOx in diesel engines is decreased, which is an important design parameter in large diesel engines on board ships and in power plants. Overview: Compression ratio Efficiency is increased by having the same effective compression ratio and a larger expansion ratio. This allows more work to be extracted from the expanding gases, as they are expanded almost to atmospheric pressure. In an ordinary spark ignition engine at the end of the expansion stroke of a wide-open-throttle cycle, the gases are at around five atmospheres when the exhaust valve opens; because the expansion stroke is limited to the length of the compression stroke, work that could still be extracted from these gases is lost. Delaying the closing of the intake valve in the Miller cycle in effect shortens the compression stroke compared to the expansion stroke. This allows the gases to be expanded closer to atmospheric pressure, increasing the efficiency of the cycle. Overview: Supercharger losses The benefits of using positive-displacement superchargers come at a cost due to parasitic load. About 15 to 20% of the power generated by a supercharged engine is usually required to do the work of driving the supercharger, which compresses the intake charge (also known as boost). Overview: Major advantage/drawback The major advantage of the cycle is that the expansion ratio is greater than the compression ratio. By intercooling after the external supercharging, an opportunity exists to reduce NOx emissions for diesel engines, or knock for spark ignition engines. However, multiple tradeoffs in boosting system efficiency and friction (due to the larger displacement) need to be balanced for every application. Summary of the patent: The overview given above may describe a modern version of the Miller cycle, but it differs in some respects from the 1957 patent. The patent describes "a new and improved method of operating a supercharged intercooled engine". The engine may be two-cycle or four-cycle, and the fuel may be diesel, dual fuel, or gas. It is clear from the context that "gas" means gaseous fuel and not gasoline. The pressure-charger shown in the diagrams is a turbocharger, not a positive-displacement supercharger. The engine (whether four-stroke or two-stroke) has a conventional valve or port layout, but an additional "compression control valve" (CCV) is in the cylinder head. A servo mechanism, operated by inlet manifold pressure, controls the lift of the CCV during part of the compression stroke and releases air from the cylinder to the exhaust manifold. The CCV would have maximum lift at full load and minimum lift at no load. The effect is to produce an engine with a variable compression ratio.
As inlet manifold pressure goes up (because of the action of the turbocharger) the effective compression ratio in the cylinder goes down (because of the increased lift of the CCV) and vice versa. This "will insure proper starting and ignition of the fuel at light loads". Atkinson-cycle engine: A similar delayed valve-closing method is used in some modern versions of Atkinson cycle engines, but without the supercharging. These engines are generally found on hybrid electric vehicles, where efficiency is the goal, and the power lost compared to the Miller cycle is made up through the use of electric motors.
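To make the compression/expansion asymmetry discussed above concrete, here is a minimal air-standard sketch. The 25% late-closing fraction (within the 20 to 30% range quoted above), the geometric ratio of 12, and the simple trapped-volume scaling are all illustrative assumptions, not figures from the patent or any production engine:

```python
# Idealized air-standard sketch of the Miller cycle's asymmetry between
# effective compression and expansion. Assumes the intake valve stays open
# for the first 25% of the compression stroke (illustrative value).

gamma = 1.4            # ratio of specific heats for air
geometric_cr = 12.0    # geometric compression (= expansion) ratio, illustrative

late_close_fraction = 0.25   # fraction of the stroke with the intake valve open

# Effective compression starts only after the valve closes, so the trapped
# charge volume is reduced; modeled here by scaling the swept volume.
effective_cr = 1 + (geometric_cr - 1) * (1 - late_close_fraction)

print(f"effective compression ratio: {effective_cr:.2f}")   # ~9.25
print(f"expansion ratio:             {geometric_cr:.2f}")   # 12.00

# Ideal Otto-cycle efficiency evaluated at the expansion ratio, illustrating
# why expanding further than the charge was compressed extracts more work.
eta = 1 - geometric_cr ** (1 - gamma)
print(f"ideal efficiency at expansion ratio: {eta:.3f}")
```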
**Variable-order Bayesian network** Variable-order Bayesian network: Variable-order Bayesian network (VOBN) models provide an important extension of both the Bayesian network models and the variable-order Markov models. VOBN models are used in machine learning in general and have shown great potential in bioinformatics applications. These models extend the widely used position weight matrix (PWM) models, Markov models, and Bayesian network (BN) models. In contrast to the BN models, where each random variable depends on a fixed subset of random variables, in VOBN models these subsets may vary based on the specific realization of observed variables. The observed realizations are often called the context and, hence, VOBN models are also known as context-specific Bayesian networks. The flexibility in the definition of conditioning subsets of variables turns out to be a real advantage in classification and analysis applications, as the statistical dependencies between random variables in a sequence of variables (not necessarily adjacent) may be taken into account efficiently, and in a position-specific and context-specific manner.
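As a minimal sketch of what "context-specific" conditioning means in practice, a VOBN node can be represented as a lookup from variable-length contexts to conditional distributions. All variables, contexts, and probabilities below are invented for illustration, not a real bioinformatics model:

```python
# Minimal sketch of a context-specific (variable-order) conditional
# distribution for one node X: depending on the observed realization of the
# preceding variables, X is conditioned on contexts of different orders.

# Keys are contexts of varying order; values are P(X = 'A' | context).
context_table = {
    ("G",):     0.9,   # first-order context: only the previous symbol matters
    ("A", "C"): 0.2,   # second-order context used only after observing "AC"
    ():         0.5,   # empty context: fallback marginal probability
}

def p_x_is_a(history):
    """Return P(X='A') given the longest matching context suffix of history."""
    for order in range(min(len(history), 2), -1, -1):
        context = tuple(history[len(history) - order:])
        if context in context_table:
            return context_table[context]
    return context_table[()]

print(p_x_is_a(["T", "A", "C"]))  # matches ("A","C") -> 0.2
print(p_x_is_a(["G"]))            # matches ("G",)    -> 0.9
print(p_x_is_a(["T"]))            # falls back to ()  -> 0.5
```

The point of the structure is that the second-order context is stored only where the data support it; elsewhere a shorter context suffices, which is what distinguishes a VOBN from a fixed-order Bayesian network node.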
**Sprinkler fitting** Sprinkler fitting: Sprinkler fitting is an occupation consisting of the installation, testing, inspection, and certification of automatic fire suppression systems in all types of structures. Sprinkler systems installed by sprinkler fitters can include the underground supply as well as integrated overhead piping systems and standpipes. The fire suppression piping may contain water, air (in a dry system), antifreeze, gas or chemicals as in a hood system, or a mixture producing fire retardant foam. Sprinkler fitting: Sprinkler fitters work with a variety of pipe and tubing materials including several types of plastic, copper, steel, cast iron, and ductile iron. Sprinkler fitting: Sprinkler fitters specialize in piping associated with fire sprinkler systems. The piping within these types of systems is required to be installed and maintained in accordance with strict guidelines in order to maintain compliance with the local building code and the fire code. This type of fire protection is considered a part of active fire protection rather than passive fire protection. Local and national standards: In the US, fire protection systems must adhere to the installation standards of NFPA 13, NFPA 13D, NFPA 13R, NFPA 14, and NFPA 25, which are administered, copyrighted, and published by the National Fire Protection Association.
**Biological constraints** Biological constraints: Biological constraints are factors which make populations resistant to evolutionary change. One proposed definition of constraint is "A property of a trait that, although possibly adaptive in the environment in which it originally evolved, acts to place limits on the production of new phenotypic variants." Constraint has played an important role in the development of such ideas as homology and body plans. Types of constraint: Any aspect of an organism that has not changed over a certain period of time could be considered to provide evidence for "constraint" of some sort. To make the concept more useful, it is therefore necessary to divide it into smaller units. First, one can consider the pattern of constraint as evidenced by phylogenetic analysis and the use of phylogenetic comparative methods; this is often termed phylogenetic inertia, or phylogenetic constraint. It refers to the tendency of related taxa to share traits based on phylogeny. Charles Darwin spoke of this concept in his 1859 book "On the Origin of Species" as "Unity of Type", and explained the phenomenon as existing because organisms do not start over from scratch, but have characteristics built upon already existing ones inherited from their ancestors; these inherited characteristics likely limit the amount of evolution seen in new taxa. If one sees particular features of organisms that have not changed over rather long periods of time (many generations), then this could suggest some constraint on their ability to change (evolve). However, it is not clear that mere documentation of lack of change in a particular character is good evidence for constraint in the sense of the character being unable to change. For example, long-term stabilizing selection related to stable environments might cause stasis. It has often been considered more fruitful to consider constraint in its causal sense: what are the causes of lack of change? Stabilizing selection The most common explanation for biological constraint is that stabilizing selection acts on an organism to prevent it changing, for example, so that it can continue to function in a tightly-defined niche. This may be considered a form of external constraint, in the sense that the organism is constrained not by its makeup or genetics, but by its environment. The implication is that if the population were in a new environment, its previously constrained features would potentially begin to evolve. Types of constraint: Functional coupling and physico-chemical constraint Related to the idea of stabilizing selection is that of the requirement that organisms function adequately in their environment. Thus, where stabilizing selection acts because of the particular niche that is occupied, mechanical and physico-chemical constraints act in a more general manner. For example, the acceleration caused by gravity places constraints on the minimum bone density and strength for an animal of a particular size. Similarly, the properties of water mean that tissues must have certain osmotic properties in order to function properly. Types of constraint: Functional coupling takes the idea that organisms are integrated networks of functional interactions (for example, the vertebral column of vertebrates is involved in the muscle, nerve, and vascular systems as well as providing support and flexibility) and therefore cannot be radically altered without causing severe functional disruption.
This may be viewed as one type of trade-off. As Rupert Riedl pointed out, this degree of functional constraint, or burden, generally varies according to position in the organism. Structures literally in the centre of the organism, such as the vertebral column, are often more burdened than those at the periphery, such as hair or toes. Types of constraint: Lack of genetic variation and developmental integration This class of constraint depends on certain types of phenotype not being produced by the genotype (compare stabilizing selection, where there is no constraint on what is produced, but rather on what is naturally selected). For example, for a highly homozygous organism, the degree of observed phenotypic variability in its descendants would be lower than for a heterozygous one. Similarly, developmental systems may be highly canalised, preventing the generation of certain types of variation. Relationships of constraint classes: Although they are separate, the types of constraints discussed are nevertheless related to each other. In particular, stabilizing selection and mechanical and physical constraints might lead through time to developmental integration and canalisation. However, without any clear idea of any of these mechanisms, deducing them from mere patterns of stasis, as inferred from phylogenetic patterns or the fossil record, remains problematic. In addition, the terminology used to describe constraints has led to confusion. Examples: "Variational inaccessibility. Despite mutations, certain character variants are never produced. These variants are therefore developmentally impossible to achieve and are never introduced into a population. This is implied by canalization and has been called both genetic and developmental constraint."
**Comparison of different machine translation approaches** Comparison of different machine translation approaches: Machine translation (MT) algorithms may be classified by their operating principle. MT may be based on a set of linguistic rules, or on large bodies (corpora) of already existing parallel texts. Rule-based methodologies may consist of a direct word-by-word translation, or may operate via a more abstract representation of meaning: a representation either specific to the language pair, or a language-independent interlingua. Corpora-based methodologies rely on machine learning and may follow specific examples taken from the parallel texts, or may calculate statistical probabilities to select a preferred option out of all possible translations. Rule-based and corpus-based machine translation: Rule-based machine translation (RBMT) is generated on the basis of morphological, syntactic, and semantic analysis of both the source and the target languages. Corpus-based machine translation (CBMT) is generated on the basis of analysis of bilingual text corpora. The former belongs to the domain of rationalism and the latter to empiricism. Given large-scale and fine-grained linguistic rules, RBMT systems are capable of producing translations of reasonable quality, but constructing such a system is very time-consuming and labor-intensive, because the linguistic resources need to be hand-crafted; this is frequently referred to as the knowledge acquisition problem. Moreover, it is very difficult to correct the input or add new rules to the system to generate a translation. By contrast, adding more examples to a CBMT system can improve it, since it is based on data, though the accumulation and management of the huge bilingual data corpus can also be costly. Direct, transfer and interlingual machine translation: The direct, transfer-based, and interlingual methods of machine translation all belong to RBMT but differ in the depth of analysis of the source language and the extent to which they attempt to reach a language-independent representation of meaning or intent between the source and target languages. Their dissimilarities can be clearly observed in the Vauquois triangle, which illustrates these levels of analysis. Direct, transfer and interlingual machine translation: Starting with the shallowest level at the bottom, direct transfer is made at the word level. Relying on finding direct correspondences between source language and target language lexical units, direct machine translation (DMT) is a word-by-word translation approach with some simple grammatical adjustments. A DMT system is designed for a specific source and target language pair, and its translation unit is usually a word. At the higher levels, translation is performed on representations of the source sentence's structure and meaning, through syntactic and semantic transfer approaches respectively. Direct, transfer and interlingual machine translation: A transfer-based machine translation system involves three stages. The first stage analyses the source text and converts it into abstract representations; the second stage converts those into equivalent target language-oriented representations; and the third generates the final target text. The representation is specific to each language pair. The transfer strategy can be viewed as "a practical compromise between the efficient use of resources of interlingual systems, and the ease of implementation of direct systems".
Direct, transfer and interlingual machine translation: Finally, at the interlingual level, the notion of transfer is replaced by the interlingua. Interlingual machine translation (IMT) operates over two phases: analyzing the SL text into an abstract, universal, language-independent representation of meaning, i.e. the interlingua (the phase of analysis); and generating this meaning using the lexical units and syntactic constructions of the TL (the phase of synthesis). Theoretically, the higher up the triangle the translation is performed, the smaller the transfer component that must be built for each language pair, at the cost of deeper analysis and synthesis. For example, to translate one SL into N TLs, (1+N) components are needed using an interlingua, compared to N transfer components. But to translate among all the languages, only 2N components are needed by the IMT approach, compared to roughly N² by the TBMT approach, which is a significant reduction. Though no transfer component has to be created for each language pair by adopting the IMT approach, the definition of an interlingua is very difficult, and perhaps even impossible for a wider domain. Statistical and example-based machine translation: Statistical machine translation (SMT) is generated on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. The initial model of SMT, based on Bayes' theorem and proposed by Brown et al., takes the view that every sentence in one language is a possible translation of any sentence in the other, and that the most appropriate translation is the one assigned the highest probability by the system. Example-based machine translation (EBMT) is characterized by its use of a bilingual corpus with parallel texts as its main knowledge source, in which translation by analogy is the main idea. There are four tasks in EBMT: example acquisition, example base and management, example application, and synthesis. Statistical and example-based machine translation: Both belonging to CBMT, sometimes referred to as data-driven MT, EBMT and SMT have several things in common which distinguish them from RBMT. First, they both use a bitext as the fundamental data source. Second, they are both empirical, following the principle of machine learning, rather than rational, following the principle of linguists writing rules. Third, they both can be improved by getting more data. Fourth, new language pairs can be developed just by finding suitable parallel corpus data, where possible. Apart from these similarities, there are also some dissimilarities. SMT essentially uses statistical data such as parameters and probabilities derived from the bitext, for which preprocessing the data is essential; even if the input is in the training data, the same translation is not guaranteed to occur. By contrast, EBMT uses the bitext as its primary data source, for which preprocessing the data is optional; if the input is in the example set, the same translation will occur.
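The component-count comparison above can be checked with simple arithmetic. Using N(N-1) as the exact count of ordered transfer pairs (approximately N² for large N):

```python
# Component counts for covering translation among N languages.
# Interlingua: one analysis and one synthesis module per language -> 2N.
# Transfer: one transfer module per ordered language pair -> N(N-1), roughly N^2.

def interlingua_modules(n: int) -> int:
    return 2 * n

def transfer_modules(n: int) -> int:
    return n * (n - 1)

for n in (3, 10, 24):  # e.g. 24 could stand for the official EU languages
    print(n, interlingua_modules(n), transfer_modules(n))
# 3 -> 6 vs 6; 10 -> 20 vs 90; 24 -> 48 vs 552
```

The crossover is immediate: beyond three languages, the interlingua approach needs strictly fewer components, which is the "significant reduction" the text refers to.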
**Dwarf galaxy problem** Dwarf galaxy problem: The dwarf galaxy problem, also known as the missing satellites problem, arises from a mismatch between observed dwarf galaxy numbers and collisionless numerical cosmological simulations that predict the evolution of the distribution of matter in the universe. In simulations, dark matter clusters hierarchically, in ever-increasing numbers of halo "blobs" as the halos' components become smaller and smaller. However, although there seem to be enough observed normal-sized galaxies to match the simulated distribution of dark matter halos of comparable mass, the number of observed dwarf galaxies is orders of magnitude lower than expected from such simulations. Context: For example, around 38 dwarf galaxies have been observed in the Local Group, and only around 11 orbiting the Milky Way, yet dark matter simulations predict that there should be around 500 dwarf satellites for the Milky Way alone. Prospective resolution: There are two main alternatives which may resolve the dwarf galaxy problem: the smaller-sized clumps of dark matter may be unable to obtain or retain the baryonic matter needed to form stars in the first place; or, after they form, dwarf galaxies may be quickly "eaten" by the larger galaxies that they orbit. Prospective resolution: Baryonic matter too sparse One proposal is that the smaller halos do exist, but that only a few of them end up becoming visible, because they are unable to acquire enough baryonic matter to form a visible dwarf galaxy. In support of this, in 2007 the Keck telescopes observed eight newly-discovered ultra-faint Milky Way dwarf satellites, of which six were around 99.9% dark matter (with a mass-to-light ratio of about 1,000). Prospective resolution: Early demise of young dwarfs The other popular proposed solution is that dwarf galaxies may tend to merge into the galaxies they orbit shortly after star formation, or to be quickly torn apart and tidally stripped by larger galaxies, due to complicated orbital interactions. Tidal stripping may also be part of the problem of detecting dwarf galaxies in the first place: finding dwarf galaxies is an extremely difficult task, since they tend to have low surface brightness and are highly diffuse, so much so that they nearly blend into the background and foreground stars.
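The link between the quoted mass-to-light ratio and the "99.9% dark matter" figure follows from a one-line calculation. The assumed stellar mass-to-light ratio of about 1 in solar units is an illustrative round value for a stellar population, not a number from the study:

```python
# Relating a total mass-to-light ratio to a dark matter fraction.
# Assumes the luminous (stellar) component has M/L ~ 1 in solar units,
# a rough illustrative value; real stellar populations vary.

def dark_matter_fraction(total_mass_to_light, stellar_mass_to_light=1.0):
    """Fraction of total mass not accounted for by the stars."""
    return 1.0 - stellar_mass_to_light / total_mass_to_light

print(dark_matter_fraction(1000))  # 0.999, i.e. ~99.9% dark matter
```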
**Burning mouth syndrome** Burning mouth syndrome: Burning mouth syndrome (BMS) is a burning, tingling or scalding sensation in the mouth, lasting for at least four to six months, with no underlying known dental or medical cause. No related signs of disease are found in the mouth. People with burning mouth syndrome may also have subjective xerostomia (a dry mouth sensation for which no cause, such as reduced salivary flow, can be found), paraesthesia (altered sensation, such as tingling in the mouth), or an altered sense of taste or smell. A burning sensation in the mouth can be a symptom of another disease when local or systemic factors are found to be implicated; this is not considered to be burning mouth syndrome, which is a syndrome of medically unexplained symptoms. The International Association for the Study of Pain defines burning mouth syndrome as "a distinctive nosological entity characterized by unremitting oral burning or similar pain in the absence of detectable mucosal changes" and "burning pain in the tongue or other oral mucous membranes", and the International Headache Society defines it as "an intra-oral burning sensation for which no medical or dental cause can be found". To ensure the correct diagnosis of burning mouth syndrome, Research Diagnostic Criteria (RDC/BMS) have been developed. Insufficient evidence leaves it unclear whether effective treatments exist. Signs and symptoms: By definition, BMS has no signs. Sometimes affected persons will attribute the symptoms to sores in the mouth, but these are in fact normal anatomic structures (e.g. lingual papillae, varices). Symptoms of BMS are variable, but the typical clinical picture can be considered according to the Socrates pain assessment method. If clinical signs are visible, then another explanation for the burning sensation may be present. Erythema (redness) and edema (swelling) of papillae on the tip of the tongue may be a sign that the tongue is being habitually pressed against the teeth. The number and size of filiform papillae may be reduced. If the tongue is very red and smooth, then there is likely a local or systemic cause (e.g. erythematous candidiasis, anemia). Causes: Theories In about 50% of cases of burning mouth sensation no identifiable cause is apparent; these cases are termed (primary) BMS. Several theories of what causes BMS have been proposed, and these are supported by varying degrees of evidence, but none is proven. Causes: As most people with BMS are postmenopausal women, one theory of the cause of BMS is estrogen or progesterone deficit, but a strong statistical correlation has not been demonstrated. Another theory is that BMS is related to autoimmunity, as abnormal antinuclear antibody and rheumatoid factor can be found in the serum of more than 50% of persons with BMS, but these levels may also be seen in elderly people who do not have any of the symptoms of this condition. Whilst salivary flow rates are normal and there are no clinical signs of a dry mouth to explain a complaint of dry mouth, levels of salivary proteins and phosphate may be elevated and salivary pH or buffering capacity may be reduced. Depression and anxiety are strongly associated with BMS. It is not known whether depression is a cause or a result of BMS, as depression may develop in any setting of constant unrelieved irritation, pain, and sleep disturbance.
It is estimated that about 20% of BMS cases involve psychogenic factors, and some consider BMS a psychosomatic illness, caused by cancerophobia, concern about sexually transmitted infections, or hypochondriasis. Chronic low-grade trauma due to parafunctional habits (e.g. rubbing the tongue against the teeth or pressing it against the palate) may be involved. BMS is more common in persons with Parkinson's disease, so it has been suggested that it is a disorder of reduced pain threshold and increased sensitivity. Often people with BMS have unusually raised taste sensitivity, termed hypergeusia ("super tasters"). Dysgeusia (usually a bitter or metallic taste) is present in about 60% of people with BMS, a factor which led to the concept of a defect in sensory peripheral neural mechanisms. Changes in the oral environment, such as changes in the composition of saliva, may induce neuropathy or interruption of nerve transduction. The onset of BMS is often spontaneous, although it may be gradual. There is sometimes a correlation with a major life event or stressful period in life. In women, the onset of BMS is most likely three to twelve years following menopause. Causes: Other causes of an oral burning sensation Several local and systemic factors can give a burning sensation in the mouth without any clinical signs, and therefore may be misdiagnosed as BMS. Some sources state that where there is an identifiable cause for a burning sensation, this can be termed "secondary BMS" to distinguish it from primary BMS. However, the accepted definitions of BMS hold that there are no identifiable causes for BMS, and where there are identifiable causes, the term BMS should not be used. Some causes of a burning mouth sensation may be accompanied by clinical signs in the mouth or elsewhere on the body. For example, burning mouth pain may be a symptom of allergic contact stomatitis. This is a contact sensitivity (type IV hypersensitivity reaction) in the oral tissues to common substances such as sodium lauryl sulfate, cinnamaldehyde or dental materials. However, allergic contact stomatitis is accompanied by visible lesions and gives a positive response with patch testing. Acute (short term) exposure to the allergen (the substance triggering the allergic response) causes non-specific inflammation and possibly mucosal ulceration. Chronic (long term) exposure to the allergen may appear as chronic inflammatory, lichenoid (lesions resembling oral lichen planus), or plasma cell gingivitis, which may be accompanied by glossitis and cheilitis. Apart from BMS itself, a full list of causes of an oral burning sensation is given below: Deficiency of iron, folic acid or various B vitamins (glossitis e.g. due to anemia), or zinc. Neuropathy, e.g. following damage to the chorda tympani nerve. Causes: Hypothyroidism. Medications ("scalded mouth syndrome", unrelated to BMS), such as protease inhibitors and angiotensin-converting-enzyme inhibitors (e.g. captopril). Type 2 diabetes. True xerostomia, caused by hyposalivation, e.g. Sjögren's syndrome. Parafunctional activity, e.g. nocturnal bruxism or a tongue thrusting habit. Restriction of the tongue by poorly constructed dentures. Geographic tongue. Oral candidiasis. Herpetic infection (herpes simplex virus). Fissured tongue. Lichen planus. Allergies and contact sensitivities to foods, metals, and other substances. Hiatal hernia. Human immunodeficiency virus. Multiple myeloma. Diagnosis: BMS is a diagnosis of exclusion, i.e.
all other explanations for the symptoms are ruled out before the diagnosis is made. There are no clinically useful investigations that would help to support a diagnosis of BMS (by definition all tests would have normal results), but blood tests and/or urinalysis may be useful to rule out anemia, deficiency states, hypothyroidism and diabetes. Investigation of a dry mouth symptom may involve sialometry, which objectively determines if there is any reduction of the salivary flow rate (hyposalivation). Oral candidiasis can be tested for with the use of swabs, smears, an oral rinse or saliva samples. It has been suggested that allergy testing (e.g., patch testing) is inappropriate in the absence of a clear history and clinical signs in people with a burning sensation in the mouth. The diagnosis of people with a burning symptom may also involve psychological screening, e.g. with depression questionnaires. The second edition of the International Classification of Headache Disorders lists diagnostic criteria for "Glossodynia and Sore Mouth": A. Pain in the mouth present daily and persisting for most of the day, B. Oral mucosa is of normal appearance, C. Local and systemic diseases have been excluded. Diagnosis: Classification A burning sensation in the mouth may be primary (i.e. burning mouth syndrome) or secondary to systemic or local factors. Other sources refer to a "secondary BMS" with a similar definition, i.e. a burning sensation which is caused by local or systemic factors, or "where oral burning is explained by a clinical abnormality". However, this contradicts the accepted definition of BMS, which specifies that no cause can be identified; "secondary BMS" could therefore be considered a misnomer. BMS is an example of dysesthesia, or a distortion of sensation. Some consider BMS to be a variant of atypical facial pain. More recently, BMS has been described as one of the four recognizable symptom complexes of chronic facial pain, along with atypical facial pain, temporomandibular joint dysfunction and atypical odontalgia. BMS has been subdivided into three general types, with type two being the most common and type three being the least common. Types one and two have unremitting symptoms, whereas type three may show remitting symptoms. Diagnosis: Type 1 - Symptoms not present upon waking, then increasing throughout the day. Type 2 - Symptoms present upon waking and throughout the day. Type 3 - No regular pattern of symptoms. Sometimes terms specific to the tongue (e.g. glossodynia) are reserved for when the burning sensation is located only on the tongue. Treatment: If a cause can be identified for a burning sensation in the mouth, then treatment of this underlying factor is recommended. If symptoms persist despite treatment, a diagnosis of BMS is confirmed. BMS has been traditionally treated by reassurance and with antidepressants, anxiolytics or anticonvulsants. A 2016 Cochrane review of treatment for burning mouth syndrome concluded that strong evidence of an effective treatment was not available; however, a systematic review in 2018 found that the use of antidepressants and alpha-lipoic acid gave promising results. Other treatments which have been used include atypical antipsychotics, histamine receptor antagonists, and dopamine agonists. Supplementation with vitamin complexes and cognitive behavioral therapy may be helpful in the management of burning mouth syndrome.
Prognosis: BMS is benign (importantly, it is not a symptom of oral cancer), but as a cause of chronic pain which is poorly controlled, it can diminish quality of life, and may become a fixation which cannot be ignored, thus interfering with work and other daily activities. Two thirds of people with BMS have a spontaneous partial recovery six to seven years after the initial onset, but in others the condition is permanent. Recovery is often preceded by a change in the character of the symptom from constant to intermittent. No clinical factors predicting recovery have been noted. If there is an identifiable cause for the burning sensation, then psychological dysfunctions such as anxiety and depression often disappear if the symptom is successfully treated. Epidemiology: BMS is fairly uncommon worldwide, affecting up to five individuals per 100,000 general population. People with BMS are more likely to be middle-aged or elderly, and females are three to seven times more likely to have BMS than males. Some report a female-to-male ratio of as much as 33 to 1. BMS is reported in about 10-40% of women seeking medical treatment for menopausal symptoms, and BMS occurs in about 14% of postmenopausal women. Males and younger individuals of both sexes are sometimes affected. Asian and Native American people have a considerably higher risk of BMS. Notable cases: Sheila Chandra, a singer of Indian heritage, retired due to this condition.
**Semitendinosus muscle** Semitendinosus muscle: The semitendinosus is a long superficial muscle in the back of the thigh. It is so named because it has a very long tendon of insertion. It lies posteromedially in the thigh, superficial to the semimembranosus. Structure: The semitendinosus, remarkable for the great length of its tendon of insertion, is situated at the posterior and medial aspect of the thigh. It arises from the lower and medial impression on the upper part of the tuberosity of the ischium, by a tendon common to it and the long head of the biceps femoris; it also arises from an aponeurosis which connects the adjacent surfaces of the two muscles to the extent of about 7.5 cm from their origin. Structure: The muscle is fusiform and ends a little below the middle of the thigh in a long round tendon which lies along the medial side of the popliteal fossa; it then curves around the medial condyle of the tibia and passes over the medial collateral ligament of the knee joint, from which it is separated by a bursa, and is inserted into the upper part of the medial surface of the body of the tibia, nearly as far forward as its anterior crest. Structure: The semitendinosus is more superficial than the semimembranosus (with which it shares very close insertion and attachment points). However, because the semimembranosus is wider and flatter than the semitendinosus, it is still possible to palpate the semimembranosus directly. Structure: At its insertion it gives off from its lower border a prolongation to the deep fascia of the leg and lies behind the tendon of the sartorius, and below that of the gracilis, to which it is united. These three tendons form what is known as the pes anserinus, so named because it looks like the foot of a goose. Structure: Innervation The semitendinosus is innervated by lower motor neurons whose fibers exit the spinal cord at levels L5-S2 and pass through the sacral plexus. From the sacral plexus, these fibers travel down the sciatic nerve. The sciatic nerve branches into the common fibular nerve and the tibial nerve. The tibial nerve innervates the semitendinosus as well as the other hamstring muscles, the semimembranosus and the biceps femoris. Function: The semitendinosus muscle is one of three hamstring muscles that are located at the back of the thigh. The other two are the semimembranosus muscle and the biceps femoris. The semitendinosus muscle lies between the other two. These three muscles work collectively to flex the knee and extend the hip. The muscle also helps to medially rotate the tibia on the femur when the knee is flexed, and to medially rotate the femur when the hip is extended. It counteracts forward bending at the hips as well. Clinical significance: Along with the patellar ligament and quadriceps femoris tendon, semitendinosus/gracilis (STG) tendon autografts have been used commonly and successfully for anterior cruciate ligament reconstruction. A sufficient graft size can typically be obtained using either a semitendinosus/gracilis tendon double-bundle technique, or a quadruple-bundle technique using a single tendon.
**Hatchet ribozyme** Hatchet ribozyme: Background: The hatchet ribozyme is an RNA structure that catalyzes its own cleavage at a specific site. In other words, it is a self-cleaving ribozyme. Hatchet ribozymes were discovered by a bioinformatics strategy as RNAs Associated with Genes Associated with Twister and Hammerhead ribozymes, or RAGATH. Hatchet ribozyme: Subsequent biochemical analysis supported the conclusion of ribozyme function, and determined further characteristics of the chemical reaction catalyzed by the ribozyme. Nucleolytic ribozymes are small RNAs that adopt compact folds capable of site-specific cleavage/ligation reactions. Fourteen unique nucleolytic ribozymes have been identified to date, including the recently discovered twister, pistol, twister-sister, and hatchet ribozymes, which were identified through the application of comparative sequence and structural algorithms. Hatchet ribozyme: The consensus sequence and secondary structure of this class include 13 highly conserved and numerous other modestly conserved nucleotides interspersed among bulges linking four base-paired substructures. A representative hatchet ribozyme requires divalent cations such as Mg2+ to promote RNA strand scission, with a maximum rate constant of ~4/min. As with all other small self-cleaving ribozymes discovered to date, hatchet ribozymes employ a general mechanism for catalysis consisting of a nucleophilic attack of a ribose 2′-oxygen atom on the adjacent phosphorus center. Kinetic characteristics of the reaction demonstrate that members of this ribozyme class have an essential requirement for divalent metal cations and that they have a complex active site which employs multiple catalytic strategies to accelerate RNA cleavage by internal phosphoester transfer. Mechanism: Nucleolytic ribozymes like the hatchet ribozyme adopt an SN2-like mechanism that results in site-specific phosphodiester bond cleavage. An activated 2′-OH of the ribose 5′ to the scissile phosphate adopts an in-line alignment to attack the adjacent to-be-cleaved P-O5′ phosphodiester bond, resulting in the formation of 2′,3′-cyclic phosphate and 5′-OH groups. X-ray crystallographic structural studies on the hammerhead, hairpin, GlmS, hepatitis delta virus (HDV), Varkud satellite, and pistol ribozymes have defined the overall RNA fold, the catalytic pocket arrangement, the in-line alignment, and the key residues that contribute to the cleavage reaction. The cleavage site is located at the 5' end of the hatchet ribozyme's consensus secondary motif. In addition, the removal of the nucleophilic hydroxyl renders the ribozyme inactive, as it is unable to initiate the cleavage reaction. More specifically, if the 2'-OH of the ribose is replaced with a 2'-H (i.e. 2'-deoxyribose), there are no electrons available to perform the nucleophilic attack on the adjacent phosphate group. No phosphoester transfer can then occur, which abolishes the ribozyme's enzymatic cleavage ability. Secondary Structure: In 2019, researchers obtained a 2.1 Å resolution crystal structure of the cleavage product of the hatchet ribozyme. Most hatchet ribozymes, and ribozymes in general, adopt a P0 configuration. P0 is an additional hairpin loop located at the 5' end of the cleavage site, though it does not contribute to catalytic activity or functionality; this is unlike hammerhead ribozymes, which have a short consensus sequence near P1, at the 5' end, that promotes high-speed catalytic activity.
About 90% of the sequence is conserved and similar to other ribozymes in this class. Based on the RNA sequence, the DNA sequence that codes for the hatchet ribozyme is as follows, written 5'→3' (in DNA, uracil is replaced by thymine). Secondary Structure: TTAGCAAGAATGACTATAGTCACTG TTTGTACACCCCGAATAGATTAGAA GCCTAATCATAATCACGTCTGCAAT TTTGGTACA. Due to this sequence construct, self-catalyzed cleavage leaves an 8-nucleotide residue upstream, on the 3' end of the RNA. Tertiary Structure: Each ribozyme may have different motifs and thus different tertiary structures: the tertiary structure of the hatchet ribozyme with the HT-UUCG motif forms through dimerization. The dimer is formed through swapping of the 3' ends of the pairing strands, and it is in equilibrium with the dimeric product of HT-GAAA. The RNA sequence therefore shifts between monomer and dimer configurations. Two molecules of the HT-GAAA ribozyme can form a pseudosymmetric dimer, with both monomers of the ribozyme exhibiting relatively well-defined electron density. The tertiary fold consists of four stem substructures which coaxially stack upon each other, forming the helical and loop structures called P1, P2, P3, and P4 and L1, L2, and L3, respectively. The cleavage site is positioned at the junction of P1 and P2, adjacent to P3 and L2. P1 is composed of three or six base pairs roughly 40% and 60% of the time, respectively, in its natural state, suggesting that length corresponds to catalytic function. There is also a conserved palindromic sequence between bases U70' and A67', which likely triggers the formation of the dimer through Watson-Crick base pair interactions. Tertiary Structure: The tertiary structure also has long-range interactions within itself, based on contacts between its loops. Effect of pH and Mg2+: Ribozyme catalysis experiments were initiated by the addition of MgCl2 and stopped for measurement at each time point by the addition of a stop solution containing urea and EDTA. Effect of pH and Mg2+: When kobs values are measured at pH 7.5 with increasing concentrations of Mg2+, there is a sharp increase in ribozyme function that plateaus as the concentration approaches 10 mM. The steep slope observed at lower Mg2+ concentrations suggests that more than one metal ion is necessary for each RNA to achieve maximal ribozyme activity. Moreover, this suggests that the construct requires higher than normal physiological concentrations of Mg2+ to become completely saturated with Mg2+ as the cofactor. It is possible that native unimolecular constructs, also carrying P0, might achieve saturation at concentrations of Mg2+ that are closer to normal physiological levels. Effect of pH and Mg2+: The effect of pH on the ribozyme rate constant in reactions containing 10 mM Mg2+ was also experimentally measured. Ribozyme activity increases log-linearly with a slope of 1 until kobs plateaus at ~4/min near a pH value of 7.5. Higher pH values have the same catalytic effect, while more acidic pH values begin denaturing the ribozyme and thus reducing catalytic function. Both the pH dependency and the maximum rate constant have interesting implications for the possible catalytic strategies used by this ribozyme class.
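The saturation and pH behaviour described above can be summarised with simple empirical rate laws. The sketch below is illustrative only: the maximal rate (~4/min) comes from the text, but the half-saturation constant, Hill coefficient, and apparent pKa are hypothetical placeholder values, not measurements from the study.

```python
# Empirical rate laws of the shape described above; only K_MAX is from the text.
K_MAX = 4.0    # maximal observed rate constant, per minute (from the text)
K_HALF = 1.0   # Mg2+ concentration at half-maximal activity, in mM (assumed)
N_HILL = 2.0   # Hill coefficient > 1, reflecting "more than one metal ion" (assumed)
PKA = 7.0      # apparent pKa governing the pH-rate profile (assumed)

def kobs_mg(mg_mm: float) -> float:
    """Hill-type saturation of k_obs with Mg2+ concentration (mM)."""
    return K_MAX * mg_mm**N_HILL / (K_HALF**N_HILL + mg_mm**N_HILL)

def kobs_ph(ph: float) -> float:
    """Log-linear rise with slope 1 that plateaus at K_MAX above the pKa."""
    return K_MAX / (1.0 + 10.0**(PKA - ph))

for mg in (0.1, 1.0, 10.0):
    print(f"[Mg2+] = {mg:5.1f} mM -> kobs ~ {kobs_mg(mg):.2f}/min")
for ph in (6.0, 7.5, 9.0):
    print(f"pH = {ph:3.1f}         -> kobs ~ {kobs_ph(ph):.2f}/min")
```

With these placeholder parameters, the Mg2+ curve rises steeply and flattens near 10 mM, and the pH profile plateaus near ~4/min around pH 7.5, matching the qualitative behaviour reported above.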
Effect of pH and Mg2+: The effects of various mono- and divalent metal ions on hatchet ribozyme activity were also tested. The hatchet ribozyme construct remains completely inactive when incubated in the absence of Mg2+ in reactions containing only other monovalent cations at 1 M (Na+, K+, Rb+, Li+, Cs+), 2.5 M (Na+, K+), or 3 M (Li+). In contrast, other divalent metal ions such as Mn2+, Co2+, Zn2+, and Cd2+ support ribozyme function with varying levels of efficiency. Furthermore, two metal ions (Zn2+, Cd2+) function only at low concentrations, and three metal ions (Ba2+, Ni2+, and Cu2+) inhibit activity at 0.5 mM, even when Mg2+ is present. These results indicate that hatchet ribozymes are relatively restrictive in their use of cations to promote catalysis, perhaps indicating that one or more specialized binding sites that accommodate a limited number of divalent cations are present in the RNA structure or perhaps even at the active site. Inhibition by certain divalent metal ions could be due to the displacement of critical Mg2+ ions or to general disruption of RNA folding. Significance/Applications: One standard application is to use flanking self-cleaving ribozymes to generate precisely excised sequences of functional RNA molecules (e.g., shRNA, saiRNA, sgRNA; a toy sketch of this excision logic follows at the end of this section). This is especially useful for in vivo expression of gene editing systems (e.g., CRISPR/Cas sgRNA) and inhibitory systems. Another method is in vivo transcription of siRNA. This design uses multiple self-cleaving ribozymes, all transcribed from the same gene. After cleavage, both parts of the precursor siRNA (siRNA 1 and 2) can form a double strand and act as intended. Lastly, if self-cleaving ribozymes are combined with protein-coding sequences, it is important to note that the self-cleaving mechanism of the ribozymes will modify the mRNA. A 5' ribozyme leaves the downstream pre-mRNA with a new 5' end, preventing the cell from creating a 5' cap. This decreases the stability of the pre-mRNA and prevents it from becoming fully functional mature mRNA. On the other hand, a 3' ribozyme would prevent polyadenylation of the upstream pre-mRNA, again decreasing stability and preventing maturation. Both interfere with translation as well.
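As a toy illustration of the flanking-ribozyme design described above, the following sketch excises a functional payload (for example an sgRNA) from a primary transcript by cutting at the two ribozyme boundaries. All sequences are made-up placeholders, not real ribozyme sequences, and in practice the cleavage sites sit at defined positions within each ribozyme fold rather than exactly at the fusion junctions.

```python
# Toy model of a transcript flanked by self-cleaving ribozymes.
# All sequences are invented placeholders, not real ribozyme sequences.
RIBOZYME_5 = "GGGAAACCC"           # hypothetical 5' ribozyme
PAYLOAD = "ACGUACGUACGUACGUACGU"   # functional RNA to release (e.g. an sgRNA)
RIBOZYME_3 = "UUUCCCGGG"           # hypothetical 3' ribozyme

transcript = RIBOZYME_5 + PAYLOAD + RIBOZYME_3

def excise(rna: str, left: str, right: str) -> str:
    """Cut at both ribozyme boundaries and return the released payload."""
    start = len(left)             # 5' cleavage position
    end = len(rna) - len(right)   # 3' cleavage position
    return rna[start:end]

released = excise(transcript, RIBOZYME_5, RIBOZYME_3)
assert released == PAYLOAD
print(released)  # the precisely excised functional RNA
```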
**Vortex power** Vortex power: Vortex power is a form of hydro power which generates energy by placing obstacles in rivers and oceans to cause the formation of vortices, which can then be tapped as a usable form of energy such as electricity. This method was pioneered by a team at the University of Michigan, who call the technology VIVACE, or Vortex Induced Vibrations Aquatic Clean Energy. Vortex power: The company Vortex Hydro Power has been created to commercialize the technology. The technology has an expected life span of 10–20 years, which could meet life cycle cost targets. Environmental impacts: At present, this technology appears to be nonpolluting and low-maintenance. In addition, it does not have any major impact on wildlife such as fish or other animals. This form of power is still in the developmental research stage and is currently undergoing optimization experiments before it can be implemented.
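The source does not state the underlying fluid dynamics, but the periodic vortex formation that such devices exploit is conventionally described by the Strouhal relation, a standard result for bluff bodies; a vortex-induced-vibration device extracts the most energy when its natural frequency is tuned near this shedding frequency (the "lock-in" condition):

```latex
% Vortex shedding frequency behind a bluff body (standard relation, not from the source):
% St is the Strouhal number (roughly 0.2 for a circular cylinder over a wide
% range of Reynolds numbers), U the flow speed, D the body diameter.
f_{\mathrm{shed}} = \frac{St\, U}{D},
\qquad
\text{lock-in: } f_{\mathrm{shed}} \approx f_{\mathrm{natural}}
```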
**Long-acting beta-adrenoceptor agonist** Long-acting beta-adrenoceptor agonist: Long-acting β adrenoceptor agonists (LABAs, more specifically, long-acting β2 adrenergic receptor agonists) are usually prescribed for moderate-to-severe persistent asthma patients or patients with chronic obstructive pulmonary disease (COPD). They are designed to reduce the need for shorter-acting β2 agonists such as salbutamol (albuterol), as they have a duration of action of approximately 12 hours in comparison with the 4-to-6-hour duration of salbutamol, making them candidates for sparing high doses of corticosteroids or treating nocturnal asthma and providing symptomatic improvement in patients with COPD. With the exception of formoterol, long-acting β2 agonists are not recommended for the treatment of acute asthma exacerbations because of their slower onset of action compared to salbutamol. Their long duration of action is due to the addition of a long, lipophilic side-chain that binds to an exosite on adrenergic receptors. This allows the active portion of the molecule to continuously bind and unbind at β2 receptors in the smooth muscle in the lungs. Medical uses: When combined with inhaled steroids, β adrenoceptor agonists can improve symptoms. In children this benefit is uncertain and they may be potentially harmful. They should not be used without an accompanying steroid due to an increased risk of severe symptoms, including exacerbation in both children and adults. A 2018 meta-analysis was unable to determine whether the increase in serious adverse events reported in the previous meta-analysis on regular salmeterol alone is abolished by the additional use of regular inhaled corticosteroids. Large surveillance studies are ongoing to provide more information. There were no asthma-related deaths and few asthma-related serious adverse events when salmeterol was used with an inhaled steroid. At least with formoterol, an increased risk appears to be present even when steroids are used, and this risk has not been ruled out for salmeterol. Agents: Some of the currently available long-acting β2 adrenoceptor agonists, listed by international nonproprietary name (INN) with trade (brand) names in parentheses, include: arformoterol (Brovana; some consider it to be an ultra-LABA), bambuterol (Bambec, Oxeol), clenbuterol (Dilaterol, Spiropent), formoterol (Foradil, Oxis, Perforomist), salmeterol (Serevent), and protokylol (Ventaire). Ultra-LABAs: Several long-acting β adrenoreceptor agonists have a duration of action of 24 hours, allowing for once-daily dosing. They are considered to be ultra-long-acting β adrenoreceptor agonists (ultra-LABAs) and are now approved. Ultra-LABAs: indacaterol: approved by the European Medicines Agency (EMA) on November 30, 2009, and by the Russian FDA equivalent under the trade name Onbrez Breezhaler. In the United States, it was approved by the Food and Drug Administration (FDA) under the trade name Arcapta Neohaler on July 1, 2011. olodaterol: approved in some European countries and Russia, and by the U.S. Food and Drug Administration (FDA) on July 31, 2014, under the trade name Striverdi Respimat. vilanterol: an ultra-LABA not available by itself but only as a component of combination drugs. With fluticasone furoate it is sold as Breo Ellipta (U.S.) and Relvar Ellipta (EU, RU); the second medication in this combination is the synthetic inhaled corticosteroid fluticasone furoate. This product was approved by the FDA in May 2013 as a once-daily inhaled therapy for the treatment of chronic obstructive pulmonary disease (COPD). With umeclidinium bromide it is sold as Anoro Ellipta.
Umeclidinium bromide is a long-acting muscarinic antagonist. This combination was approved by the FDA on December 18, 2013 for the long-term maintenance treatment of COPD. On March 28, 2014, it was approved in European countries and in Russia under the same trade name. Ultra-LABAs under development include abediterol (codenamed LAS100977) and salmefamol (a salbutamol and para-methoxyamphetamine (PMA) hybrid). Failed agents include carmoterol (formerly TA-2005) and PF-610355, both of whose development has been terminated. Concerns: A meta-analysis study from 2006 (pooled results of 19 trials, 33,826 participants) raised concerns that salmeterol may increase the risk of death in asthmatics, and that the additional risk was not reduced with the adjunctive use of inhaled steroids (e.g., as with the combination product fluticasone/salmeterol). The proposed mechanism is that while LABAs relieve asthma symptoms, they can also promote bronchial inflammation and sensitivity without warning. On February 18, 2011, the FDA issued a safety alert for long-acting β agonists. Following new clinical safety trials, however, the FDA issued updated guidance on December 20, 2017, stating that there is no significant increased risk of serious asthma outcomes with LABAs when used together with inhaled corticosteroids.
**Product-form solution** Product-form solution: In probability theory, a product-form solution is a particularly efficient form of solution for determining some metric of a system with distinct sub-components, where the metric for the collection of components can be written as a product of the metric across the different components. Using capital Pi notation, a product-form solution has the algebraic form $P(x_1, x_2, x_3, \ldots, x_n) = B \prod_{i=1}^{n} P(x_i)$ where B is some normalizing constant. Solutions of this form are of interest as they are computationally inexpensive to evaluate for large values of n. Such solutions in queueing networks are important for finding performance metrics in models of multiprogrammed and time-shared computer systems. Equilibrium distributions: The first product-form solutions were found for equilibrium distributions of Markov chains. Trivially, models composed of two or more independent sub-components exhibit a product-form solution by the definition of independence. Initially the term was used in queueing networks, where the sub-components would be individual queues. For example, Jackson's theorem gives the joint equilibrium distribution of an open queueing network as the product of the equilibrium distributions of the individual queues (a worked sketch follows at the end of this section). After numerous extensions, chiefly the BCMP network, it was thought that local balance was a requirement for a product-form solution. Gelenbe's G-network model was the first to show that this is not the case. Motivated by the need to model biological neurons, which have a point-process-like spiking behaviour, he introduced the precursor of G-networks, calling it the random neural network. By introducing "negative customers" which can destroy or eliminate other customers, he generalised the family of product-form networks. This was then further extended in several steps, first by Gelenbe's "triggers", customers which have the power to move other customers from one queue to another. Another new form of customer that also led to product form was Gelenbe's "batch removal". This was further extended by Erol Gelenbe and Jean-Michel Fourneau with customer types called "resets" which can model the repair of failures: when a queue hits the empty state, representing (for instance) a failure, the queue length can jump back or be "reset" to its steady-state distribution by an arriving reset customer, representing a repair. All these previous types of customers in G-networks can exist in the same network, including with multiple classes, and they all together still result in the product-form solution, taking us far beyond the reversible networks that had been considered before. Product-form solutions are sometimes described as "stations are independent in equilibrium". Product-form solutions also exist in networks of bulk queues. J.M. Harrison and R.J. Williams note that "virtually all of the models that have been successfully analyzed in classical queueing network theory are models having a so-called product-form stationary distribution". More recently, product-form solutions have been published for Markov process algebras (e.g. RCAT in PEPA) and stochastic Petri nets. Martin Feinberg's deficiency zero theorem gives a sufficient condition for chemical reaction networks to exhibit a product-form stationary distribution. The work by Gelenbe also shows that product-form G-networks can be used to model spiking random neural networks, and furthermore that such networks can be used to approximate bounded and continuous real-valued functions.
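To make Jackson's theorem concrete, here is a minimal sketch with made-up arrival rates, service rates, and routing probabilities (none taken from the source). It solves the traffic equations for a two-station open network and evaluates the product-form joint distribution:

```python
import numpy as np

# Hypothetical two-station open Jackson network (all parameters invented).
lam = np.array([1.0, 0.5])   # external Poisson arrival rates
mu = np.array([4.0, 3.0])    # service rates of the M/M/1 stations
R = np.array([[0.0, 0.5],    # R[i, j] = probability a job leaves i for j
              [0.2, 0.0]])

# Traffic equations: gamma_i = lam_i + sum_j gamma_j * R[j, i]
gamma = np.linalg.solve(np.eye(2) - R.T, lam)
rho = gamma / mu             # utilisations; all must be < 1 for stability

def stationary_prob(n):
    """Jackson's theorem: P(n_1, ..., n_k) = prod_i (1 - rho_i) * rho_i**n_i."""
    return float(np.prod((1 - rho) * rho ** np.asarray(n)))

print(rho)                      # per-station utilisations
print(stationary_prob([2, 1]))  # joint equilibrium probability of state (2, 1)
```

The key point is that the joint probability of any state is evaluated with a handful of multiplications, rather than by solving the full multidimensional Markov chain.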
Sojourn time distributions: The term product form has also been used to refer to the sojourn time distribution in a cyclic queueing system, where the time spent by jobs at M nodes is given as the product of the time spent at each node. In 1957 Reich showed the result for two M/M/1 queues in tandem (written out explicitly below), later extending this to n M/M/1 queues in tandem, and it has been shown to apply to overtake-free paths in Jackson networks. Walrand and Varaiya suggest that non-overtaking (where customers cannot overtake other customers by taking a different route through the network) may be a necessary condition for the result to hold. Mitrani offers exact solutions to some simple networks with overtaking, showing that none of these exhibit product-form sojourn time distributions. For closed networks, Chow showed a result to hold for two service nodes, which was later generalised to a cycle of queues and to overtake-free paths in Gordon–Newell networks. Extensions: Approximate product-form solutions are computed assuming independent marginal distributions, which can give a good approximation to the stationary distribution under some conditions. Semi-product-form solutions are solutions where a distribution can be written as a product in which terms have a limited functional dependency on the global state space, which can be approximated. Quasi-product-form solutions are either solutions which are not the product of marginal densities but where the marginal densities describe the distribution in a product-type manner, or approximate forms for transient probability distributions which allow transient moments to be approximated.
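For the tandem case credited to Reich above, the product form can be written explicitly. Assuming Poisson arrivals at rate λ to M M/M/1 queues in series, with service rate μ_i at station i and λ < μ_i for stability, the sojourn times at the stations are independent exponentials, so their joint density factorizes:

```latex
% Joint sojourn-time density for M M/M/1 queues in tandem (Reich's result):
f(t_1, \ldots, t_M) \;=\; \prod_{i=1}^{M} (\mu_i - \lambda)\, e^{-(\mu_i - \lambda)\, t_i},
\qquad t_i \ge 0.
```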
**Sisa (drug)** Sisa (drug): Sisa is a psychoactive drug from Greece. The basic ingredient is methamphetamine, with additives such as battery acid, engine oil, shampoo and salt. It is notably abused by many homeless people in Athens, and causes dangerous side effects such as insomnia, delusions, heart attacks, and violent tendencies. Routes of administration include smoking, snorting, and intravenous injection.
**Dining car** Dining car: A dining car (American English) or a restaurant car (British English), also a diner, is a railroad passenger car that serves meals in the manner of a full-service, sit-down restaurant. Dining car: It is distinct from other railroad food service cars that do not duplicate the full-service restaurant experience, such as buffet cars, cars in which one purchases food from a walk-up counter to be consumed either within the car or elsewhere in the train. Grill cars, in which customers sit on stools at a counter and purchase and consume food cooked on a grill behind the counter, are generally considered to be an "intermediate" type of dining car. History: United States Before dining cars in passenger trains were common in the United States, a rail passenger's option for meal service in transit was to patronize one of the roadhouses often located near the railroad's "water stops". Fare typically consisted of rancid meat, cold beans, and old coffee. Such poor conditions discouraged some from making the journey. History: Most railroads began offering meal service on trains even before the First transcontinental railroad. By the mid-1880s, dedicated dining cars were a normal part of long-distance trains from Chicago to points west, save those of the Santa Fe Railway, which relied on America's first interstate network of restaurants to feed passengers en route. The "Harvey Houses", located strategically along the line, served top-quality meals to railroad patrons during water stops and other planned layovers and were favored over in-transit facilities for all trains operating west of Kansas City. History: As competition among railroads intensified, dining car service was taken to new levels. When the Santa Fe unveiled its new Pleasure Dome lounge cars in 1951, the railroad introduced the travelling public to the Turquoise Room, promoted as "The only private dining room in the world on rails." The room accommodated 12 guests, and could be reserved anytime for private dinner or cocktail parties, or other special functions. The room was often used by celebrities and dignitaries traveling on the Super Chief. History: Edwin Kachel was a steward for more than twenty-five years in the Dining-Car Department of the Great Northern Railway. He said that "on a dining car, three elements can be considered -- the equipment, the employee, then the passenger." In other words, "the whole is constituted by two-thirds of human parts." As cross-country train travel became more commonplace, passengers began to expect high-quality food to be served at the meals on board. The level of meal service on trains in the 1920s and 1930s rivaled that of high-end restaurants and clubs. History: United Kingdom Dining cars were first introduced in England on 1 November 1879 by the Great Northern Railway Company on services between Leeds and London. A Pullman car was attached to the train for the purpose. As of 2018, Great Western Railway is the only UK train company to provide a full dining Pullman service on selected trains to the West Country & Wales. Food: Elegance is one of the main words used to describe the concept of dining on a train. Use of fresh ingredients was encouraged whenever possible. Some of the dishes prepared by chefs were: Braised Duck Cumberland, Hungarian Beef Goulash with Potato Dumplings, Lobster Americaine, Mountain Trout Au Bleu, Curry of Lamb Madras, Scalloped Brussels Sprouts, Pecan and Orange Sticks and Pennepicure Pie, to name a few items. The Christmas menu for the Chicago, Milwaukee & St.
Paul Railway in 1882 listed the following items: Hunter's Soup, Salmon with Hollandaise Sauce, Boned Pheasant in Aspic Jelly, Chicken Salad, Salmis Prairie Chicken, Oyster Patties, Rice Croquette, Roast Beef, English Ribs of Beef, Turkey with Cranberry Sauce, Stuffed Suckling Pig with Applesauce, Antelope Steak with Currant Jelly, potatoes, green peas, tomatoes, sweet potatoes, Mince Pie, Plum Pudding, Cake, Ice Cream, Fruits and coffee. Configuration: In one of the most common dining car configurations, one end of the car contains a galley (with an aisle next to it so that passengers can pass through the car to the rest of the train), and the other end has table or booth seating on either side of a center aisle. Trains with high demand for dining car services sometimes feature "double-unit dining cars" consisting of two adjacent cars functioning to some extent as a single entity, generally with one car containing a galley as well as table or booth seating and the other car containing table or booth seating only. In the dining cars of Amtrak's modern bilevel Superliner trains, booth seating on either side of a center aisle occupies almost the entire upper level, with a galley below; food is sent to the upper level on a dumbwaiter. Dining cars enhance the familiar restaurant experience with the unique visual entertainment of the ever-changing view. While dining cars are less common today than in the past (having been supplemented or in some cases replaced altogether by other types of food-service cars), they still play a significant role in passenger railroading, especially on medium- and long-distance trains. Today, a number of tourist-oriented railroads offer dinner excursions to capitalize on the public's fascination with the dining car experience. The U76/U70 tram line between the German cities of Düsseldorf and Krefeld offers a Bistrowagen (German for "bistro car"), where passengers can order drinks and snacks. That practice comes from the early 20th century, when interurban trams conveyed a dining car. Despite the introduction of modern tram units, four trams still have a Bistrowagen and operate every weekday.
**Niobium–germanium** Niobium–germanium: Niobium–germanium (Nb3Ge) is an intermetallic chemical compound of niobium (Nb) and germanium (Ge). It has the A15 phase structure. It is a superconductor with a critical temperature of 23.2 K. Sputtered films have been reported to have an upper critical field of 37 teslas at 4.2 K. History: Nb3Ge was discovered to be a superconductor in 1973, and for 13 years (until the discovery in 1986 of the cuprate superconductors) it held the record for the highest known critical temperature. It has not been as widely used for superconductive applications as niobium–tin or niobium–titanium. Related alloys: Niobium–germanium–aluminium has an upper critical field of about 10 teslas.
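The two figures quoted above can be related through the standard two-fluid approximation for the temperature dependence of the upper critical field (a textbook estimate, not a value given in the source). Using Tc = 23.2 K and Hc2 = 37 T at 4.2 K:

```latex
% Empirical two-fluid approximation (illustrative extrapolation, not from the source):
H_{c2}(T) \approx H_{c2}(0)\!\left[1 - \left(\tfrac{T}{T_c}\right)^{2}\right]
\;\Longrightarrow\;
H_{c2}(0) \approx \frac{37\ \mathrm{T}}{1 - (4.2/23.2)^{2}} \approx 38\ \mathrm{T}.
```

The correction is small because 4.2 K is well below Tc, so the measured 37 T is already close to the zero-temperature limit.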
**Neoromanticism (music)** Neoromanticism (music): Neoromanticism in music is a return (at any of several points in the nineteenth or twentieth centuries) to the emotional expression associated with nineteenth-century Romanticism. It is part of the wider movement of neo-romanticism. Definitions: Neoromanticism was a term that originated in literary theory in the early 19th century to distinguish later kinds of romanticism from earlier manifestations. In music, it was first used by Richard Wagner in his polemical 1851 article "Oper und Drama", as a disparaging term for the French romanticism of Hector Berlioz and Giacomo Meyerbeer from 1830 onwards, which he regarded as a degenerated form of true romanticism. The word came to be used by historians of ideas to refer to music from 1850 onwards, and to the work of Wagner in particular. The designation "neo" was used to acknowledge the fact that music of the second half of the 19th century remained in a romantic mode in an unromantic age, dominated by positivism, when literature and painting had moved on to realism and impressionism (Dahlhaus 1979, 98–99). Definitions: According to Daniel Albright, In the late twentieth century, the term Neoromanticism came to suggest a music that imitated the high emotional saturation of the music of (for example) Schumann [Romanticism], but in the 1920s it meant a subdued and modest sort of emotionalism, in which the excessive gestures of the Expressionists were boiled down into some solid residue of stable feeling. (Albright 2004, 278–80) Thus, in Albright's view, neoromanticism in the 1920s was not a return to romanticism but, on the contrary, a tempering of an overheated post-romanticism. Notable composers: In this sense, Virgil Thomson proclaimed himself to be the "most easily-labeled practitioner [of Neo-Romanticism] in America" (Thomson 2002, 268): Neo-Romanticism involves rounded melodic material (the neo-Classicists affected angular themes) and the frank expression of personal sentiments. . . . That position is an esthetic one purely, because technically we are eclectic. Our contribution to contemporary esthetics has been to pose the problems of sincerity in a new way. We are not out to impress, and we dislike inflated emotions. The feelings we really have are the only ones we think worthy of expression. . . . Sentiment is our subject and sometimes landscape, but preferably a landscape with figures. (Hoover and Cage 1959, 250; Thomson 2002, 268–69) In the twentieth century, composers such as John Adams, Airat Ichmouratov and Richard Danielpour have been described as neoromantics (Boone 1983; Hill, Carlin, and Hubbs 2005, 64). Notable composers: Since the mid-1970s the term has come to be identified with neoconservative postmodernism, especially in Germany, Austria, and the United States, with composers such as Wolfgang Rihm and George Rochberg. Currently active US-based composers widely described as neoromantic include David Del Tredici and Ellen Taaffe Zwilich (Pasler 2001). Francis Poulenc and Henri Sauguet were French composers considered neoromantic (Thomson 2002, 268), while Virgil Thomson (Thomson 2002, 268), Nicolas Nabokov (Thomson 2002, 268), Howard Hanson (Simmons 2004, 111; Barkan 2001, 149; Thomson 2002, 268; Watanabe and Perone 2001) and Douglas Moore were American composers considered neoromantic (Thomson 2002, 268). Sources: Albright, Daniel. 2004. Modernism and Music: An Anthology of Sources. Chicago: University of Chicago Press. ISBN 0-226-01267-0. Barkan, Elliot Robert (ed.). 2001.
Making It in America: A Sourcebook on Eminent Ethnic Americans. Santa Barbara: ABC-CLIO. ISBN 1-57607-098-0. Boone, Charles. 1983. "Presents from the Past: Modernism and Post-Modernism in Music". The Threepenny Review, no. 15 (Autumn): 29. Dahlhaus, Carl. 1979. "Neo-Romanticism". 19th-Century Music 3, no. 2 (November): 97–105. Hill, Brad, Richard Carlin, and Nadine Hubbs. 2005. Classical. American Popular Music. New York: Infobase Publishing. ISBN 978-0-8160-6976-7. Hoover, Kathleen, and John Cage. 1959. Virgil Thomson: His Life and Music. New York: Thomas Yoseloff. Pasler, Jann. 2001. "Neo-romantic". The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell. London: Macmillan Publishers. Simmons, Walter. 2004. Voices in the Wilderness: Six American Neo-Romantic Composers. Lanham, MD: Scarecrow Press; Oxford: Oxford Publicity Partnership. ISBN 978-0-8108-4884-9 (hardcover); paperback reprint 2006, ISBN 978-0-8108-5728-5. Thomson, Virgil. 2002. Virgil Thomson: A Reader: Selected Writings, 1924–1984, edited by Richard Kostelanetz. New York: Routledge. ISBN 0-415-93795-7. Watanabe, Ruth T., and James Perone. 2001. "Hanson, Howard". The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell. London: Macmillan Publishers.
**Porch (company)** Porch (company): Porch is a website that connects homeowners with local home improvement contractors. The site features advice articles, cost guides and online booking for over 160 common home improvement, maintenance, and repair projects. History: Porch was founded in September 2012 after co-founder Matt Ehrlichman noticed that there was no organized website to find the best home improvement professionals as he was building his home. Porch was launched as an online home improvement network connecting homeowners with qualified professionals. Current listings are in excess of 300,000 active professionals across the U.S. Porch partners with large retailers like Lowe's, Wayfair, and Pottery Barn to provide home services fulfillment for their customers. Porch also offers direct-to-consumer access to more than 160 different home services offerings through Porch.com associates. Focused on assisting customers at every stage of the "home journey" – including moving in, installations, assembly, repairs and ongoing maintenance – the company facilitated over 2 million home-related projects in 2017, generating almost $1 billion in revenue for small business owners and sole proprietors in specialty service areas such as plumbing, roofing, electrical work, carpentry, and more. History: Funding In June 2013, Porch announced a $6.25 million seed round, including investments from Ron Conway of SV Angel, Javier Olivan, and Jeffrey Skoll. In September 2014, Porch reported a $27.6 million Series A round, led by Lowe's. Joe Hanauer, Chairman of Move and former CEO of Coldwell Banker, joined the board of directors. In January 2015, Porch reported a $65 million Series B round, led by Valor Equity Partners. Backers included Lowe's, Founders Fund, Battery Ventures, Panorama Point Partners, Capricorn Investment Group and home improvement expert Ty Pennington. Valor Equity Partners' Antonio Gracias also joined the board of directors. History: Lowe's partnership In April 2014, Porch announced a nationwide partnership with home improvement retailer Lowe's, establishing in-store promotional signage and computer kiosks where customers and sales associates can search Porch's professionals database. Headquarters Relocation In May 2015, Porch relocated its headquarters to the SODO neighborhood in south Seattle, Washington. Acquisition of Fountain Software, Inc. In October 2015, Porch announced the acquisition of Fountain, an online service that connects Internet users with a variety of experts through video chats, texts and annotated photos. Fountain was co-founded by Aaron Patzer, the founder of Mint.com, and Jean Sini. Sini joined Porch; Patzer did not. History: Wayfair partnership In April 2016, Wayfair implemented the Porch Retail Solution nationally, starting in 15 markets, online and in support centers. Through the Porch Retail Solution, Wayfair.com shoppers who need help with tasks such as furniture assembly and installation of products such as lighting and plumbing will be able to purchase those services at checkout and set an appointment with a qualified Porch professional. This work will be backed by the Porch Guarantee. History: Layoffs After quickly growing to 500 employees, Porch began a series of layoffs which resulted in the headcount being reduced to about 250 employees. In addition, many key executives, including the Chief Product Officer, Chief Financial Officer, and Chief Technology Officer, left the company.
History: Renewed Focus and Growth In April 2018, it was announced that Porch had grown to 450 employees, approaching the record number of employees Porch had before the 2015 and 2016 layoffs. Porch is now focused on fast growth in 2018 and beyond, with a strong emphasis on investing in the 100-person EPDA team (engineering, product, design, analytics) and building and expanding strategic and business partnerships. History: Addition to Facebook Marketplace In May 2018, Porch was added as a service provider in the home services category of Facebook Marketplace to help people find service professionals in a convenient social experience. Facebook Marketplace was launched in 2016 and has been growing at a rate of 18 million new listings per month. As of May 2018, 800 million people globally use Facebook Marketplace to buy and sell things each month, including 1 out of 3 Facebook users from the U.S.
**Diffuse leptomeningeal glioneuronal tumor** Diffuse leptomeningeal glioneuronal tumor: Diffuse leptomeningeal glioneuronal tumor (DLGNT) is a rare, primary CNS tumor, classified as a distinct entity in 2016 and described as a diffuse oligodendroglial-like leptomeningeal tumor of children in the 2016 WHO classification of CNS neoplasms. It is typically considered a juvenile tumor but can occur in adults; the average age at diagnosis is five years. It is characterised by wide leptomeningeal spread with male predominance, neurocytoma-like histopathology, oligodendrocyte-like cytopathology, bland appearance, and severe clinical behaviour. In children, plaque-like subarachnoid tumors typically involve the basal cisterns and inter-hemispheric fissures. A common related intraparenchymal lesion is a spinal lesion. However, in certain situations, the superficial parenchyma or Virchow-Robin spaces were affected. Diffuse leptomeningeal glioneuronal tumor: Molecular and genetic investigations frequently show a fusion of KIAA1549 and the serine/threonine protein kinase BRAF gene, as well as deletions of the short arm of chromosome 1 and/or the long arm of chromosome 19. Clinical features: Patients with DLGNT present with a variety of clinical manifestations depending on the involved area of the disease, ranging from numbness and seizures to hydrocephalus symptoms such as irritability, headaches and vomiting. The progression of the disease is slow; however, there have been reports of anaplastic transition. Diagnosis: MRI reveals broad leptomeningeal enhancement and thickening, which is frequently most visible throughout the spine, brainstem, and posterior fossa.
**Gram domain containing 1b** Gram domain containing 1b: GRAM domain containing 1B, also known as GRAMD1B, Aster-B and KIAA1201, is a cholesterol transport protein that is encoded by the GRAMD1B gene. It contains a transmembrane region and two domains of known function: the GRAM domain and a VASt domain. It is anchored to the endoplasmic reticulum. This highly conserved gene is found in a variety of vertebrates and invertebrates. Homologs (Lam/Ltc proteins) are found in yeast. Gene: GRAMD1B, also known as KIAA1201, is located in the human genome at 11q24.1. It is located on the + strand and is flanked by a variety of other genes. It spans 269,347 bases. mRNA: The most verified isoform, isoform 1, contains 21 exons. There are four validated isoform variants of human GRAMD1B. These consist of truncated 5' and 3' regions, resulting in the loss of an exon. One prominent analysis of the mouse gene predicts one form of Gramd1b that is 699 amino acids long. Protein: GRAMD1B is an integral membrane protein that contains several domains, motifs and signals. Protein: Domains There are two confirmed cytoplasmic domains within GRAMD1B. The protein gets its name from the GRAM domain, located approximately 100 amino acids from the start codon. The GRAM domain is commonly found in myotubularin family phosphatases and is predominantly involved in membrane-coupled processes. GRAMD1B also contains the VASt (VAD1 Analog of StAR-related lipid transfer) domain. The VASt domain is predominantly associated with lipid binding domains, such as GRAM. It is most likely to function in binding large hydrophobic ligands and may be specific for sterol. A C-terminal domain in GRAMD1B sits within the lumen of the ER, is predicted to have alpha-helical secondary structure, and is modified by tryptophan C-mannosylation. Protein: Composition Features There are two negative charge clusters, located from amino acids 232-267 and 348-377. The first cluster is not highly conserved, nor is it located in a motif or domain. The second cluster is located directly before the VASt domain and is conserved. There are three repeat sequence regions, all fairly conserved in orthologs. Molecular weight and isoelectric point are conserved in orthologs. Protein: Structure The protein contains four dileucine motifs, three located within or close to the GRAM domain. A predicted leucine zipper pattern extends through a majority of the transmembrane region, though it is not a nuclear protein. A SUMOylation site is located directly after the VASt domain. The protein's secondary structure consists of alpha-helices, beta-strands and coils. Beta-strands are mainly located within the two domains, while the alpha-helices are concentrated near the transmembrane region. Three disulfide bonds are predicted throughout the protein. Protein: Subcellular location GRAMD1B is anchored in the endoplasmic reticulum by a transmembrane domain. Expression: GRAMD1B is expressed in a variety of tissues. It is most highly expressed in gonadal tissue, the adrenal gland, brain and placenta. It shows elevated expression in adrenal tumors and lung tumors. Developmentally, it is most highly expressed during infancy. The EST profile is supported with experimental data from multiple sources. Homology: Orthologs The ortholog space for GRAMD1B spans a large portion of evolutionary time. GRAMD1B can be found in mammals, birds, fish and invertebrates. Homologous proteins (Lam/Ltc) are found in yeast. Paralogs There are four paralogs of GRAMD1B.
The most closely related is GRAMD1A, while the most distant paralog is GRAMD2A/GRAMD2. Phylogeny GRAMD2 diverged earliest in history, while the most recent split is GRAMD1A. The GRAMD1B gene's rate of divergence is significantly faster than that of fibrinogen but is not as high as that of cytochrome c. Function: When the plasma membrane contains high levels of cholesterol, GRAMD1b, as well as GRAMD1a and GRAMD1c, moves to sites of contact between the plasma membrane and the endoplasmic reticulum. GRAMD1 proteins then facilitate the transport of cholesterol into the endoplasmic reticulum. In the case of GRAMD1b, the plasma membrane source of cholesterol is high-density lipoprotein (HDL). The VASt domain is responsible for binding cholesterol, while the GRAM domain determines the location of the protein through sensing of cholesterol and binding partially negatively charged lipids in the plasma membrane, especially phosphatidylserine. GRAMD1b is also implicated in transporting carotenoids within the cell. Function: Protein interactions Several different proteins have been experimentally confirmed or predicted to interact with GRAMD1B. Clinical significance: Mutations and other genetic studies link GRAMD1B to neurodevelopmental disorders, such as intellectual disability and schizophrenia. Loss of GRAMD1b results in reduced cholesterol storage in the adrenal gland and reduced serum corticosterone levels in mice. Reduction of GRAMD1B and GRAMD1C suppresses the onset of a form of non-alcoholic fatty liver disease, non-alcoholic steatohepatitis (NASH), in mice. A study tagging SNPs from chronic lymphocytic leukemia found GRAMD1B to be the second strongest risk allele region. This association is supported through a number of studies. The aberrant tri-methylation of histone H3 lysine 27 induces inflammation and has been shown to increase GRAMD1B levels in colon tumors.
**Carbonyl sulfide** Carbonyl sulfide: Carbonyl sulfide is the chemical compound with the linear formula OCS. It is a colorless flammable gas with an unpleasant odor. It is a linear molecule consisting of a carbonyl group double bonded to a sulfur atom. Carbonyl sulfide can be considered to be intermediate between carbon dioxide and carbon disulfide, both of which are valence isoelectronic with it. Occurrence: Carbonyl sulfide is the most abundant sulfur compound naturally present in the atmosphere, at 0.5±0.05 ppb, because it is emitted from oceans, volcanoes and deep sea vents. As such, it is a significant compound in the global sulfur cycle. Measurements of Antarctic ice cores and of air trapped in snow above glaciers (firn air) have provided a detailed picture of OCS concentrations from 1640 to the present day and allow an understanding of the relative importance of anthropogenic and non-anthropogenic sources of this gas to the atmosphere. Some carbonyl sulfide that is transported into the stratospheric sulfate layer is oxidized to sulfuric acid. Sulfuric acid forms particulates which affect the energy balance due to light scattering. The long atmospheric lifetime of COS makes it the major source of stratospheric sulfate, though sulfur dioxide from volcanic activity can be significant too. Carbonyl sulfide is also removed from the atmosphere by terrestrial vegetation, by enzymes associated with the uptake of carbon dioxide during photosynthesis, and by hydrolysis in ocean waters. Loss processes such as these limit the persistence (or lifetime) of a molecule of COS in the atmosphere to a few years. Occurrence: The largest man-made sources of carbonyl sulfide release include its primary use as a chemical intermediate and as a byproduct of carbon disulfide production; however, it is also released from automobiles and their tire wear, coal-fired power plants, coking ovens, biomass combustion, fish processing, combustion of refuse and plastics, petroleum manufacture, and manufacture of synthetic fibers, starch, and rubber. The average total worldwide release of carbonyl sulfide to the atmosphere has been estimated at about 3 million tons/year, of which less than one third was related to human activity. It is also a significant sulfur-containing impurity in many fuel gases such as synthesis gas, which are produced from sulfur-containing feedstocks. Carbonyl sulfide is present in foodstuffs, such as cheese and prepared vegetables of the cabbage family. Traces of COS are naturally present in grains and seeds in the range of 0.05–0.1 mg·kg−1. Occurrence: Carbonyl sulfide has been observed in the interstellar medium (see also List of molecules in interstellar space), in comet 67P and in the atmosphere of Venus, where, because of the difficulty of producing COS inorganically, it is considered a possible indicator of life. Reactions and applications: Carbonyl sulfide is used as an intermediate in the production of thiocarbamate herbicides. The hydrolysis of carbonyl sulfide is promoted by chromium-based catalysts: COS + H2O → CO2 + H2S. This conversion is catalyzed in solution by carbonic anhydrase enzymes in plants and mammals. Because of this chemistry, the release of carbonyl sulfide from small organic molecules has been identified as a strategy for delivering hydrogen sulfide, which is a gaseous signaling molecule. This compound has been found to catalyze the formation of peptides from amino acids.
This finding is an extension of the Miller–Urey experiment, and it is suggested that carbonyl sulfide played a significant role in the origin of life. In ecosystem science, atmospheric studies of carbonyl sulfide are increasingly being used to describe the rate of photosynthesis. Synthesis: Carbonyl sulfide was first described in 1841, but was apparently mischaracterized as a mixture of carbon dioxide and hydrogen sulfide. Carl von Than first characterized the substance in 1867. It forms when carbon monoxide reacts with molten sulfur: CO + 1/8 S8 → COS. This reaction reverses above 1200 K (930 °C; 1700 °F). A laboratory synthesis entails the reaction of potassium thiocyanate and sulfuric acid. The resulting gas contains significant amounts of byproducts and requires purification. Synthesis: KSCN + 2 H2SO4 + H2O → KHSO4 + NH4HSO4 + COS. Hydrolysis of isothiocyanates in hydrochloric acid solution also affords COS. Toxicity: As of 1994, limited information existed on the acute toxicity of carbonyl sulfide in humans and in animals. High concentrations (above 1000 ppm) can cause sudden collapse, convulsions, and death from respiratory paralysis. Occasional fatalities have been reported, practically without local irritation or olfactory warning. In tests with rats, 50% of animals died when exposed to 1400 ppm of COS for 90 minutes, or to 3000 ppm for 9 minutes. Limited studies with laboratory animals also suggest that continued inhalation of low concentrations (around 50 ppm for up to 12 weeks) does not affect the lungs or the heart. Carbonyl sulfide is a potential alternative fumigant to methyl bromide and phosphine. In some cases, however, residues on the grain result in flavours that are unacceptable to consumers, such as in barley used for brewing.
**Well-bird exam** Well-bird exam: A well-bird exam is a check-up for birds which are assumed to be healthy. These examinations are frequently performed by an avian veterinarian when the bird is first acquired and annually thereafter. The examination: The veterinarian will likely ask the owner about the bird's housing, diet, and activities, then examine the bird's feathers, eyes, ears, and nares for signs of illness. He or she will probably acquire a Gram's stain, and may also clip the bird's wings and toenails if requested. He or she will likely offer advice about caring for the pet. Importance: Birds often hide their illnesses very well, and an avian veterinarian may see symptoms of illness that are not observable to the untrained eye. Periodic examination serves as a baseline for comparison for future reference. If an examined bird ever becomes sick and requires the care of a veterinarian, he or she will already have much data about the bird's weight (one of the earliest indicators of illness in birds), diet, and care.
**Minivan** Minivan: A minivan (sometimes simply called a van) is a car classification for vehicles designed to transport passengers in the rear seating row(s), with reconfigurable seats in two or three rows. The equivalent classification in Europe is MPV (multi-purpose vehicle). In Southeast Asia, the equivalent classification is Asian Utility Vehicle (AUV). Compared with a full-size van, most minivans are based on a passenger car platform and have a lower body. Early models such as the Ford Aerostar and Chevrolet Astro utilized a compact pickup truck platform. Minivans often have a 'one-box' or 'two-box' body configuration, a higher roof, a flat floor, sliding doors for rear passengers, and high H-point seating. The largest size of minivans is also referred to as 'Large MPV' and became popular following the introduction of the 1984 Dodge Caravan and Renault Espace. Typically, these have platforms derived from D-segment passenger cars or compact pickups. Since the 1990s, the smaller compact MPV and mini MPV sizes of minivans have also become popular. Though predecessors to the minivan date back to the 1930s, the contemporary minivan body style was developed concurrently by several companies in the early 1980s, most notably by Chrysler (producer of the Chrysler minivans) and Renault (the Renault Espace), both first sold for model year 1984. Minivans cut into and eventually overshadowed the traditional market of the station wagon, and grew in global popularity and diversity throughout the 1990s. Since the 2000s, their reception has varied in different parts of the world: in North America, for example, they have been largely eclipsed by crossovers and SUVs, while in Asia they are commonly marketed as luxury vehicles. Etymology: The term minivan originated in both North America and the United Kingdom in 1959. In the UK, the Minivan was a small van manufactured by Austin and based on the newly introduced Mini car. In the US, the term was used in order to differentiate the smaller passenger vehicles from full-size vans (such as the Ford E-Series, Dodge Ram Van, and Chevrolet Van), which were then simply called 'vans'. The first known use of the term was in 1959, but not until the 1980s was it commonly used. Characteristics: Chassis In contrast to larger vans, most modern minivans/MPVs use a front-engine, front-wheel drive layout, while some model lines offer all-wheel drive as an option. Alongside the adoption of the form factor introduced by the Chrysler minivans, the configuration allows for less engine intrusion and a lower floor in the passenger compartment. In line with larger full-size vans, unibody construction has been commonly used (the spaceframe design of the Renault Espace and the General Motors APV minivans being exceptions). Characteristics: Minivans/MPVs are produced on either distinct chassis architecture or share platforms with other types of vehicles such as sedans and crossover SUVs. Characteristics: Body style Minivans/MPVs use either a two-box or a one-box body design with A, B, C and D pillars. The cabin may be fitted with two, three, or four rows of seats, with the most common configurations being 2+3+2 or 2+3+3. Compared to other types of passenger vehicles, the body shape of minivans is designed to maximize interior space for both passengers and cargo. This is achieved by lengthening the wheelbase and creating a flatter floor, a taller roof, and a more upright side profile, though not as prominent as the boxier profiles of commercial-oriented vans.
Practicality and comfort for passengers are also enhanced with a larger rear cargo space opening and larger windows. Some minivans/MPVs use sliding doors while others offer conventional forward-hinged doors. Initially a feature of the 1982 Nissan Prairie, a driver-side sliding door was introduced by the 1996 Chrysler minivans; by 2002, all minivans were sold with doors on both sides of the body. Most minivans are configured with a rear liftgate; few minivans have used panel-style rear doors, for example, cargo versions of the Chevrolet Astro, Ford Aerostar, and the Mercedes-Benz V-Class. Characteristics: Interior Most minivans are designed with a reconfigurable interior to carry passengers and their effects. The first examples were designed with removable rear seats that unlatched from the floor for removal and storage (in line with larger vans); however, the design was poorly received by users, as many seats were heavy and hard to remove. In 1995, the Honda Odyssey was introduced with a third-row seat that folded flat into the floor, a feature that was then adopted by many competitors, including Chrysler, which introduced fold-flat third-row and second-row seats in 2005. Characteristics: High-end minivans may include distinguishing features such as captain's chairs or ottoman seats, as opposed to bench seats, for the second row. Predecessors: Prior to the adoption of the minivan term, there was a long history of one-box passenger vehicles roughly approximating the body style, with the 1936 Stout Scarab often cited as the first minivan. The passenger seats in the Scarab were moveable and could be configured for the passengers to sit around a table in the rear of the cabin. Passengers entered and exited the Scarab via a centrally-mounted door. Predecessors: The DKW Schnellaster, manufactured from 1949 until 1962, featured front-wheel drive, a transverse engine, a flat floor and multi-configurable seating, all of which would later become characteristics of minivans. In 1950, the Volkswagen Type 2 adapted a bus-shaped body to the chassis of a small passenger car (the Volkswagen Beetle). When Volkswagen introduced a sliding side door to the Type 2 in 1968, it then had the prominent features that would later come to define a minivan: compact length, three rows of forward-facing seats, station wagon-style top-hinged tailgate/liftgate, sliding side door, and passenger car base. The 1956–1969 Fiat Multipla also had many features in common with modern minivans. The Multipla was based on the chassis of the Fiat 600 and had a rear engine and cab forward layout. The early 1960s saw Ford and Chevrolet introduce "compact" vans for the North American market, the Econoline Club Wagon and Greenbrier respectively. The Ford version was marketed in the Falcon series, the Chevrolet in the Corvair 95 series. The Econoline grew larger in the 1970s, while the Greenbrier was joined by (and later replaced by) the Chevy Van. North America: Minivans developed for the North American market are distinct from most minivans/MPVs marketed in other regions such as Europe and Asia, owing to their larger footprint and larger engine. As of 2020, the average exterior length for minivans in North America was around 200 inches (5.08 m), while many models use V6 engines with more than 270 horsepower (201 kW; 274 PS), mainly to fulfill the towing capacity requirements demanded by North American customers. In 2021, sales of the segment totalled 310,630 units in the U.S. (2.1% of the overall car market), and 33,544 in Canada (2.0% of the overall car market).
As of 2022, the passenger-oriented minivan segment consists of the Toyota Sienna, Chrysler Pacifica, Chrysler Voyager, Honda Odyssey, and Kia Carnival. North America: History 1970s and 1980s In the late 1970s, Chrysler began a development program to design "a small affordable van that looked and handled more like a car." The result of this program was the first American minivans, based on the S platform: the 1984 Plymouth Voyager and Dodge Caravan. The S minivans debuted the minivan design features of front-wheel drive, a flat floor and a sliding door for rear passengers. The term minivan came into use largely through size comparison with full-size vans; at six feet tall or lower, 1980s minivans were intended to fit inside a typical garage door opening. In 1984, The New York Times described minivans as "the hot cars coming out of Detroit," noting that "analysts say the mini-van has created an entirely new market, one that may well overshadow the... station wagon." In response to the popularity of the Voyager/Caravan, General Motors released the badge-engineered twins, the 1985 Chevrolet Astro and GMC Safari, and Ford released the 1986 Ford Aerostar. These vehicles used a traditional rear-wheel drive layout, unlike the Voyager/Caravan. To match the launch of minivans by American manufacturers, Japanese manufacturers introduced the Toyota TownAce, Nissan Vanette, and Mitsubishi Delica to North America in 1984, 1986, and 1987, respectively. These vehicles were marketed with the generic "Van" and "Wagon" names (for cargo and passenger vans, respectively). In 1989, the Mazda MPV was released as the first Japanese-brand minivan developed from the ground up specifically for the North American market. Its larger chassis allowed for the fitment of an optional V6 engine and four-wheel drive. In contrast to the sliding doors of American minivans, a hinged passenger-side door was used. A driver-side door was added for 1996, as Mazda gradually remarketed the model line as an early crossover SUV. North America: By the end of the 1980s, demand for minivans as family vehicles had largely superseded full-size station wagons in the United States. North America: 1990s During the 1990s, the minivan segment underwent several major changes. Many models switched to the front-wheel drive layout used by the Voyager/Caravan minivans. For example, Ford replaced the Aerostar with the front-wheel drive Mercury Villager for 1993 and the Ford Windstar for 1995. The models also increased in size, as a result of the extended-wheelbase ("Grand") versions of the Voyager and Caravan which were launched in 1987. An increase in luxury features and interior equipment was seen in the Eddie Bauer version of the 1988 Ford Aerostar, the 1990 Chrysler Town & Country, and the 1990 Oldsmobile Silhouette. The third-generation Plymouth Voyager, Dodge Caravan, and Chrysler Town & Country – released for the 1996 model year – were available with an additional sliding door on the driver's side. North America: Following the 1990 discontinuation of the Nissan Vanette in the United States, Nissan also ended the sale of the second-generation Nissan Axxess. Nissan reentered the segment by forming a joint venture with Ford to develop and assemble a minivan, which became the Nissan Quest and its Mercury Villager counterpart. North America: Toyota also introduced the Toyota Previa in 1990 to replace the Van/Wagon in North America. It was designed solely as a passenger vehicle, sized to compete with American-market minivans.
For 1998, the Toyota Sienna became the first Japanese-brand minivan assembled in North America, replacing the Toyota Previa in that market. For 1999, Honda introduced a separate version of the Odyssey for North America, with that market receiving a larger vehicle with sliding doors. North America: 2000s and 2010s The highest-selling year for minivans was 2000, when 1.4 million units were sold. However, in the following years, sales of minivans began to decrease. In 2013, sales of the segment reached approximately 500,000, roughly one-third of the 2000 peak. The market share of minivans fell to around 2% in 2019 after a steady decline from 2004, when the segment recorded above 6% of share. It has been suggested that the falling popularity of minivans is due to the increasing popularity of SUVs and crossovers, and their increasingly undesirable image as vehicles for older drivers or the "soccer mom" demographic. From 2000 onward, several minivan manufacturers adopted boxier exterior designs and began offering more advanced equipment, including power doors and/or liftgates, seating that folded flat into the cabin floor, DVD/VCR entertainment systems, in-dash navigation and rear-view cameras (both offered only on higher-end trims), and parking sensors. However, the Quest and Sedona did not echo these design changes until their third and second generations, respectively, while Chrysler introduced fold-flat seating in 2005 (under the trademark "Stow 'n Go"). Mazda's MPV never offered power doors before its discontinuation in 2017. North America: Due to the market decline, North American sales of the Volkswagen Eurovan ceased in 2003. Ford exited the segment in 2006 when the Ford Freestar was canceled, Chrysler discontinued its short-wheelbase minivans in 2007, and General Motors exited the segment in 2009 with the cancellation of the Chevrolet Uplander. However, Volkswagen marketed the Volkswagen Routan (a rebadged Chrysler RT-platform minivan) between 2009 and 2013. In 2010, Ford started importing the commercial-oriented Ford Transit Connect Wagon from Turkey. A similar vehicle, the Mercedes-Benz Metris, entered the North American market in 2016. North America: The Kia Sedona, introduced for the 2002 model year, is notable for being the first minivan from a South Korean manufacturer in the region. For 2007, Kia also introduced the three-row Kia Rondo, a compact MPV prominently marketed as a crossover due to its small size and use of hinged rear doors. Another compact MPV released to the market was the Mazda5 in 2012, a three-row vehicle with rear sliding doors. Mazda claimed the model "does not fit into any traditional (North American) segmentation." The Ford C-Max was released for 2013 as a hybrid electric and battery electric compact MPV with sliding doors, although it did not offer third-row seating in North America. Europe: In Europe, the classification is commonly known as "MPV" or "people carrier" and includes smaller vehicles with two-row seating. Europe: History 1980s The 1984 Renault Espace was the first European minivan developed primarily for passenger use (the Volkswagen Caravelle/Vanagon was a derivative of a commercial van). Development began in the 1970s under the European subsidiaries of Chrysler; the Espace was intended as a successor to the Matra Rancho, which led to its use of front-hinged doors.
While slow-selling at the time of its release, the Espace would go on to become the most successful European-brand minivan. Renault initially intended to market the Espace in North America through American Motors Corporation (AMC), but the 1987 sale of AMC to Chrysler canceled those plans. In the late 1980s, Chrysler and Ford commenced sales of American-designed minivans in Europe (categorized as full-size in the region), selling the Chrysler Voyager and Ford Aerostar. General Motors imported the Oldsmobile Silhouette (branded as the Pontiac Trans Sport), later marketing the American-produced Opel/Vauxhall Sintra. Europe: 1990s In the 1990s, several joint ventures produced long-running minivan designs. In 1994, a badge-engineered series of "Eurovans" produced by Sevel Nord was introduced, marketed by Citroën, Fiat, Lancia, and Peugeot. The Eurovans were produced with two sliding doors; to increase interior space, the gearshift was located on the dashboard and a pedal-type parking brake was adopted. In 1995, Ford of Europe and Volkswagen entered a joint venture, producing the badge-engineered Ford Galaxy, SEAT Alhambra, and Volkswagen Sharan, which featured rear side doors that were front-hinged rather than sliding. Europe: In 1996, Mercedes-Benz introduced the V-Class, available as a standard panel van for cargo (called the Vito) or with passenger accommodations substituted for part or all of the load area (called the V-Class or Viano). In 1998, the Fiat Multipla was released. A two-row, six-seat MPV with a 3+3 seat configuration borrowing its name from an older minivan, it is notable for its highly controversial design. Market reaction to these new full-size MPV models was mixed. Consumers perceived MPVs as large and truck-like despite their footprints being similar to those of large sedans. Arguably, cultural attitudes toward vehicle size and high fuel prices were a factor. During 1996 and 1997, the Western European MPV market expanded from around 210,000 units to 350,000 units annually. However, the growth did not continue as expected, resulting in serious plant overcapacity. Renault set a new "compact MPV" standard with the Renault Scénic in 1996, which became popular. Based on the C-segment Mégane platform, it offered the same multi-use and flexibility aspects as the larger MPVs in a much smaller footprint. Europe: 2000s After the success of the Renault Scénic, other makers developed similar European-focused products, such as the Opel Zafira, which offered three-row seating, the Citroën Xsara Picasso, and others. Asia: Japan In Japan, the classification is known as "minivan" (Japanese: ミニバン, Hepburn: Miniban) and defined by three-row seating capacity. Before the birth of minivans with modern form factors, tall wagon-type vehicles with large seating capacity in Japan were known as light vans. These commonly adopted a mid-engine, cab-over design with rear-wheel drive and a one-box form factor. Examples included the Toyota TownAce, Toyota HiAce, Nissan Vanette, Mitsubishi Delica, and Mazda Bongo. These vehicles were based on commercial vehicles, which created a gap compared to sedans in terms of ride quality and luxury. The Nissan Prairie, released in 1982, has been considered the first Japanese compact minivan. Derived closely from a compact sedan, the Prairie was marketed as a "boxy sedan", configured with sliding doors, folding rear seats, and a lifting rear hatch. The Mitsubishi Chariot adopted nearly the same form factor, instead using wagon-style front-hinged doors.
Asia: In 1990, Toyota introduced the Toyota Estima in Japan, which carried over the mid-engine configuration of the TownAce. Along with its highly rounded exterior, the Estima was distinguished by its nearly panoramic window glass. The Estima was redesigned in 2000, adopting a front-wheel-drive layout, and has been offered with a hybrid powertrain since 2001. In 2002, Toyota introduced the Toyota Alphard, which was developed as a luxury-oriented model. Asia: In 2020, Toyota's luxury division Lexus introduced its first luxury minivan, the Lexus LM, produced with varying degrees of relation to the Toyota Alphard/Vellfire. The LM designation stands for "Luxury Mover". Nissan introduced the Nissan Serena in 1990 and the Nissan Elgrand in 1997. Asia: In 1995, Honda entered the minivan segment by introducing the Honda Odyssey. The Odyssey was designed with front-hinged doors and was derived from the Honda Accord. As a result, it came with advantages such as sedan-like driving dynamics and a lower floor that allowed for easy access. In a design feature that would become widely adopted by other manufacturers, the Odyssey introduced a rear seat that folded flat into the floor (replacing a removable rear seat). The Odyssey evolved as a low-roof, estate-like minivan until 2013, when it adopted a high-roof body with rear sliding doors. Honda has also produced the Honda Stepwgn, a mid-size MPV designed with a higher cabin and narrower width, since 1996, and the Honda Stream, slotted below the Odyssey, since 2002. In 2020, minivans made up 20.8% of total automobile sales in Japan, behind SUVs and compact hatchbacks, making it one of the largest minivan markets in the world. Asia: South Korea In South Korea, both the "minivan" and "MPV" terms are used. The Kia Carnival (also sold as the Kia Sedona) was introduced in 1998 with dual sliding doors. Sharing its configuration with the Honda Odyssey, the Hyundai Trajet was sold from 1999 to 2008. Introduced in 2004, the SsangYong Rodius was the highest-capacity minivan, seating up to 11 passengers. It was discontinued in 2019. Asia: Current minivans marketed in South Korea are the Kia Carnival and Hyundai Staria, along with imported options such as the Toyota Sienna (originally for North America) and later generations of the Honda Odyssey. Asia: China In 1999, Shanghai GM commenced production of the Buick GL8 minivan, derived from a minivan platform designed by GM in the United States. After two generations of production, the GL8 is today the last minivan produced by General Motors or its joint ventures. It remains dominant in the high-end minivan segment in the Chinese market. Sales of minivans in China increased rapidly in 2015 and 2016, when the Chinese government lifted the one-child policy in favor of the two-child policy, which pushed customer preference toward three-row vehicles in anticipation of larger families. In 2016, 2,497,543 minivans were sold in China, a major increase from 2012, which recorded 936,232 sales. However, sales volume has shrunk since, with only 1,082,028 minivans sold in the domestic market in 2021 (4.1% of the total car market), around 720,000 of which were sold by domestic manufacturers. Asia: Indonesia The MPV segment is the most popular passenger car segment in Indonesia, with a market share of 40 percent in 2021. India The category is commonly known as a multi utility vehicle (MUV) or MPV. In fiscal year 2020, sales volume of the segment totalled 283,583 vehicles, or 10.3% of the industry total.
Luxury vehicles: Some manufacturers such as Mercedes-Benz, Toyota, Nissan, Buick, Hyundai, Hongqi, and Lexus have marketed upscale MPVs as luxury vehicles. Luxury MPVs generally have seven seats, though several ultra-luxury MPVs have four, and they generally have more luxury features than regular MPVs. By the 2020s, many manufacturers were adding more luxury features to their MPVs, such as advanced technology and interior comfort amenities. Examples of luxury MPV models include the Mercedes-Benz V-Class, Nissan Elgrand, Wey Gaoshan, Toyota Alphard, Lexus LM, Buick GL8, Hongqi HQ9, Zeekr 009, and the Hyundai Staria Lounge. Size categories: Mini MPV Mini MPV – an abbreviation for Mini Multi-Purpose Vehicle – is a vehicle size class for the smallest size of minivans (MPVs). The mini MPV size class sits below the compact MPV size class, and the vehicles are often built on the platforms of B-segment hatchback models. Size categories: Several minivans based on B-segment platforms have been marketed as 'leisure activity vehicles' in Europe. These include the Fiat Fiorino and Ford Transit Courier. Compact MPV Compact MPV – an abbreviation for Compact Multi-Purpose Vehicle – is a vehicle size class for the middle size of MPVs/minivans. The compact MPV size class sits between the mini MPV and minivan size classes. Size categories: Compact MPVs remain predominantly a European phenomenon, although they are also built and sold in many Latin American and Asian markets. As of 2016, the only compact MPV sold widely in the United States was the Ford C-Max. Related categories: Leisure activity vehicle A leisure activity vehicle (abbreviated LAV), also known as a van-based MPV or, in French, ludospace, is the passenger-oriented version of small commercial vans primarily marketed in Europe. One of the first LAVs was the 1977 Matra Rancho (among the first crossover SUVs and a precursor to the Renault Espace), with European manufacturers expanding the segment in the late 1990s following the introduction of the Citroën Berlingo and Renault Kangoo. Related categories: Leisure activity vehicles are typically derived from supermini or subcompact car platforms, differing from mini MPVs in body design. To maximize interior space, LAVs feature a taller roof, a more upright windshield, and a longer hood/bonnet, with either a liftgate or barn doors to access the boot. Marketed as an alternative to sedan-derived small family cars, LAVs have seating with a lower H-point than MPVs or minivans, offering two (or three) rows of seating. Related categories: Though sharing underpinnings with superminis, subcompacts, and mini MPVs, the use of an extended wheelbase can make leisure activity vehicles longer than the vehicles they are derived from. For example, the Fiat Doblò is one of the longest LAVs with a total length of 4,255 mm (167.5 in), versus the 4,050 mm (159.4 in) of the Opel Meriva (a mini MPV) and the 4,030 mm (158.7 in) of the Peugeot 206 SW (a supermini). Related categories: Asian utility vehicle An Asian utility vehicle (abbreviated AUV) is a term originating from the Philippines to describe basic and affordable vehicles, with either large seating capacity or cargo space, designed to be sold in developing countries.
These vehicles are usually available in a minivan-like wagon body style with a seating capacity of 7 to 16 passengers, and are usually based on a compact pickup truck with a body-on-frame chassis and rear-wheel drive to maximize load capacity and durability while maintaining low manufacturing costs. Until the 2000s, AUVs were popular in Southeast Asia, particularly in Indonesia and the Philippines, as well as in Taiwan and some African markets. The first AUV was the Toyota Tamaraw/Kijang, introduced in the Philippines and Indonesia in 1975 as a pickup truck with an optional rear cabin. In the 1990s, other vehicles such as the Isuzu Panther/Hi-Lander/Crosswind and Mitsubishi Freeca/Adventure/Kuda emerged in the AUV segment. The modern equivalent of the AUV is the Toyota Innova, an MPV and the direct successor to the Kijang, whose first two generations were built with body-on-frame construction. The third generation of the vehicle switched to unibody construction. Related categories: Three-row SUV With the decline of the minivan/MPV category in many regions such as North America and Europe in the mid-2010s, SUVs and crossovers with three rows of seating became popular alternatives. Compared to minivans, three-row SUVs lack sliding doors and generally offer less interior space, owing to the higher priority placed on exterior styling and ground clearance. Further media: Videos "The Fall of the Minivan". CNBC. 18 September 2019. Archived from the original on 11 December 2021. Retrieved 24 November 2020 – via YouTube.
**FreeDOS** FreeDOS: FreeDOS (formerly Free-DOS and PD-DOS) is a free software operating system for IBM PC compatible computers. It aims to provide a complete MS-DOS-compatible environment for running legacy software and supporting embedded systems. FreeDOS can be booted from a floppy disk or USB flash drive. It is designed to run well under virtualization or x86 emulation. Unlike most versions of MS-DOS, FreeDOS is composed of free software, licensed under the terms of the GNU General Public License. However, other packages that form part of the FreeDOS project include non-GPL software considered worthy of preservation, such as 4DOS, which is distributed under a modified MIT License. History: The FreeDOS project began on 29 June 1994, after Microsoft announced it would no longer sell or support MS-DOS. Jim Hall – who at the time was a student – posted a manifesto proposing the development of PD-DOS, a public domain version of DOS. Within a few weeks, other programmers including Pat Villani and Tim Norman joined the project. Between them, a kernel (by Villani), the COMMAND.COM command line interpreter (by Villani and Norman), and core utilities (by Hall) were created by pooling code they had written or found available. For some time, the project was maintained by Morgan "Hannibal" Toal. There were many official pre-release distributions of FreeDOS before the final FreeDOS 1.0 distribution. GNU/DOS, an unofficial distribution of FreeDOS, was discontinued after version 1.0 was released. Blinky the Fish is the mascot of FreeDOS; he was designed by Bas Snabilie. Distribution: FreeDOS 1.1, released on 2 January 2012, is available for download as a CD-ROM image: a limited install disc that contains only the kernel and basic applications, and a full disc that contains many more applications (games, networking, development, etc.); the full disc was not available as of November 2011, but a newer, fuller one accompanied version 1.2. The legacy version 1.0 (2006) consisted of two CDs, one of which was an 8 MB install CD targeted at regular users and the other a larger 49 MB live CD that also held the source code of the project. Distribution: Commercial uses FreeDOS is used by several companies: Dell preloaded FreeDOS on its n-series desktops to reduce their cost. The firm has been criticized for making these machines no cheaper, and harder to buy, than identical systems with Windows. HP provided FreeDOS as an option on its dc5750 desktops, Mini 5101 netbooks, and ProBook laptops. FreeDOS is also used as bootable media for updating the BIOS firmware in HP systems. FreeDOS is included in Steve Gibson's hard drive maintenance and recovery program, SpinRite. Intel's Solid-State Drive Firmware Update Tool loaded the FreeDOS kernel. Non-commercial uses FreeDOS is also used in multiple independent projects: FED-UP is the Floppy Enhanced DivX Universal Player. FUZOMA is a FreeDOS-based distribution that can boot from a floppy disk and converts older computers into educational tools for children. XFDOS is a FreeDOS-based distribution with a graphical user interface, porting Nano-X and FLTK. Compatibility: Hardware FreeDOS requires a PC/XT machine with at least 640 kB of memory. Programs not bundled with FreeDOS often require additional system resources. Compatibility: MS-DOS and Win32 console FreeDOS is mostly compatible with MS-DOS. It supports COM executables, standard DOS executables, and Borland's 16-bit DPMI executables. It is also possible to run 32-bit DPMI executables using DOS extenders.
The operating system has several improvements relative to MS-DOS, mostly involving support for newer standards and technologies that did not exist when Microsoft ended support for MS-DOS, such as internationalization or the Advanced Power Management TSRs. Furthermore, with the use of the HX DOS Extender, many Windows Console applications function properly in FreeDOS, as do some rare GUI programs, like QEMU and Bochs. Compatibility: DOS-based Windows FreeDOS is able to run Microsoft Windows 1.0 and 2.0 releases. Windows 3.x releases, which had support for i386 processors, cannot fully be run in 386 Enhanced Mode, except partially in the experimental FreeDOS kernel 2037. Windows 95, Windows 98, and Windows Me use a stripped-down version of MS-DOS. FreeDOS cannot be used as a replacement because the undocumented interfaces between MS-DOS 7.0–8.0 and Windows "4.xx" are not emulated by FreeDOS; however, it can be installed and used beside these systems using a boot manager program, such as BOOTMGR or METAKERN, included with FreeDOS. Compatibility: Windows NT and ReactOS Windows NT-based operating systems, including Windows 2000, XP, Vista, 7, 8, 8.1, 10, and 11 for desktops, and Windows Server 2003, 2008, and 2008 R2 for servers, do not make use of MS-DOS as a core component of the system. These systems can make use of the FAT file systems used by MS-DOS and earlier versions of Windows; however, they typically use NTFS (New Technology File System) by default for security and other reasons. FreeDOS can co-exist with these systems on a separate partition, or on the same partition on FAT systems. The FreeDOS kernel can be booted by adding it to the Windows 2000 or XP NT Boot Loader configuration file, boot.ini, or the freeldr.ini equivalent for ReactOS (an illustrative entry is sketched after this section). Compatibility: File systems FAT32 is fully supported and is the preferred format for the boot drive. Depending on the BIOS used, up to four Logical Block Addressing (LBA) hard disks of up to 128 GB, or 2 TB, in size are supported. There has been little testing with large disks, and some BIOSes support LBA but produce errors on disks larger than 32 GB; a driver such as OnTrack or EZ-Drive resolves this problem. FreeDOS can also be used with a driver called LFNDOS to enable support for Windows 95-style long file names, but most pre-Windows 95 programs do not support LFNs even with a driver loaded. There is no planned support for NTFS, ext2, or exFAT, but several external third-party drivers are available for that purpose. To access ext2 file systems, LTOOLS, a counterpart to Mtools, can sometimes be used to copy data to and from ext2 file system drives.
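For illustration only, a boot.ini entry for chain-loading FreeDOS from the NT Boot Loader might look like the sketch below. The file name FDOSBOOT.BIN is a hypothetical example standing in for a copy of the FreeDOS partition boot sector, and the ARC path for Windows will vary per machine:

```ini
[boot loader]
timeout=10
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect
C:\FDOSBOOT.BIN="FreeDOS (example entry; FDOSBOOT.BIN is a copy of the FreeDOS boot sector)"
```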
**Kathleen Giacomini** Kathleen Giacomini: Kathleen M. Giacomini is a professor of bioengineering and therapeutic sciences at the University of California, San Francisco. Her work focuses on how genetics affects the efficacy of drugs. She is also the co-director of the UCSF-Stanford Center of Excellence in Regulatory Sciences and Innovation for the department of Bioengineering at the University of California, San Francisco. Giacomini has organized health care conferences in the San Francisco Bay Area. Education: Giacomini earned her doctorate in pharmaceutics from the University at Buffalo. From 1979 to 1981, Giacomini was a post-doctoral fellow in clinical pharmacology at Stanford University. Career: In 1998, Giacomini was named chair of the department of biopharmaceutical sciences at the University of California, San Francisco. In 1999, Giacomini became the first woman honored as Pharmaceutical Scientist of the Year by the International Pharmaceutical Federation. In 2000, Giacomini organized the Pharmacogenomics of Membrane Transporters (PMT) Project at the University of California, San Francisco. In 2005, while serving as vice chair of the Pharmacogenetics Research Network, Giacomini was awarded the Paul Dawson Biotechnology Award by the American Association of Colleges of Pharmacy. In 2010, Giacomini received the Therapeutic Frontiers Lecture Award from the American College of Clinical Pharmacy. The following year she was awarded the Scheele Award for her work on the pharmacogenetics of drug transporters. In 2014, Giacomini became the co-director of the UCSF-Stanford Center of Excellence in Regulatory Sciences and Innovation for the department of Bioengineering. In 2018, Giacomini was awarded the Bill Heller Mentor of the Year Award by the American Foundation for Pharmaceutical Education, as well as the Volwiler Research Award.
**Spaghetti sort** Spaghetti sort: Spaghetti sort is a linear-time, analog algorithm for sorting a sequence of items, introduced by A. K. Dewdney in his Scientific American column. The algorithm sorts a sequence of items in a stable manner, requiring O(n) stack space. It requires a parallel processor. Algorithm: For simplicity, assume we are sorting a list of natural numbers. The sorting method is illustrated using uncooked rods of spaghetti: For each number x in the list, obtain a rod of length x. (One practical way of choosing the unit is to let the largest number m in the list correspond to one full rod of spaghetti. In this case, the full rod equals m spaghetti units. To get a rod of length x, break a rod in two so that one piece is of length x units; discard the other piece.) Once you have all your spaghetti rods, take them loosely in your fist and lower them to the table, so that they all stand upright, resting on the table surface. Now, for each rod, lower your other hand from above until it meets a rod; this one is clearly the longest. Remove this rod and insert it at the front of the (initially empty) output list (or equivalently, place it in the last unused slot of the output array). Repeat until all rods have been removed. Analysis: Preparing the n rods of spaghetti takes linear time. Lowering the rods onto the table takes constant time, O(1). This is possible because the hand, the spaghetti rods, and the table work as a fully parallel computing device. There are then n rods to remove, so, assuming each contact-and-removal operation takes constant time, the worst-case time complexity of the algorithm is O(n).
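A sequential simulation conveys the idea, although it cannot reproduce the analog parallelism. The following minimal Python sketch (assuming numeric items) replaces the O(1) "lower the hand" step with a linear max() scan, so the simulation runs in O(n^2) overall rather than the analog O(n):

```python
def spaghetti_sort(numbers):
    """Sequential simulation of spaghetti sort.

    The physical algorithm finds the tallest remaining rod in O(1)
    with a flat hand; here that step is a linear scan (max), so the
    whole simulation is quadratic rather than the analog linear time.
    """
    rods = list(numbers)           # prepare one "rod" per input number
    output = []
    while rods:
        tallest = max(rods)        # the hand touches the tallest rod first
        rods.remove(tallest)       # remove that rod from the table
        output.insert(0, tallest)  # insert it at the front of the output
    return output

print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```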
**Event monitoring** Event monitoring: In computer science, event monitoring is the process of collecting, analyzing, and signaling event occurrences to subscribers such as operating system processes and active database rules, as well as human operators. These event occurrences may stem from arbitrary sources in both software and hardware, such as operating systems, database management systems, application software, and processors. Event monitoring may use a time series database. Basic concepts: Event monitoring makes use of a logical bus to transport event occurrences from sources to subscribers, where event sources signal event occurrences to all event subscribers and event subscribers receive event occurrences. An event bus can be distributed over a set of physical nodes such as standalone computer systems. Typical examples of event buses are found in graphical systems such as the X Window System and Microsoft Windows, as well as in development tools such as SDT. Basic concepts: Event collection is the process of collecting event occurrences in a filtered event log for analysis. A filtered event log holds logged event occurrences that can be of meaningful use in the future; this implies that event occurrences can be removed from the filtered event log if they are useless in the future. Event log analysis is the process of analyzing the filtered event log to aggregate event occurrences or to decide whether or not an event occurrence should be signalled. Event signalling is the process of signalling event occurrences over the event bus (a minimal code sketch of these concepts appears below). Basic concepts: Something that is monitored is denoted the monitored object; for example, an application, an operating system, a database, hardware, etc. can be monitored objects. A monitored object must be properly conditioned with event sensors to enable event monitoring; that is, an object must be instrumented with event sensors to be a monitored object. Event sensors are sensors that signal event occurrences whenever an event occurs. Whenever something is monitored, the probe effect must be managed. Monitored objects and the probe effect: As discussed by Gait, when an object is monitored, its behavior is changed. In particular, in any concurrent system in which processes can run in parallel, this poses a particular problem. The reason is that whenever sensors are introduced in the system, processes may execute in a different order. This can cause a problem if, for example, we are trying to localize a fault, and by monitoring the system we change its behavior in such a way that the fault may not result in a failure; in essence, the fault can be masked by monitoring the system. The probe effect is the difference in behavior between a monitored object and its un-instrumented counterpart. Monitored objects and the probe effect: According to Schütz, we can avoid, compensate for, or ignore the probe effect. In critical real-time systems, in which timeliness (i.e., the ability of a system to meet time constraints such as deadlines) is significant, avoidance is the only option. If we, for example, instrument a system for testing and then remove the instrumentation before delivery, this invalidates the results of most testing based on the complete system. In less critical real-time systems (e.g., media-based systems), compensation can be acceptable, for example, for performance testing. In non-concurrent systems, ignorance is acceptable, since the behavior with respect to the order of execution is left unchanged.
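To make the basic concepts concrete, here is a minimal Python sketch of an event bus with subscribers and a filtered event log; all names and the "cpu_load" event type are illustrative inventions for this example, not part of any event-monitoring standard:

```python
from collections import defaultdict

class EventBus:
    """Logical event bus: sources signal occurrences; every subscriber
    to an event type receives each occurrence of that type."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def signal(self, event_type, payload):
        # Event signalling: deliver the occurrence to all subscribers.
        for callback in self._subscribers[event_type]:
            callback(event_type, payload)

class FilteredEventLog:
    """Event collection: keep only the occurrences judged meaningful
    for future analysis; the rest are filtered out."""
    def __init__(self, bus, event_type, keep):
        self.entries = []
        self._keep = keep                 # predicate deciding usefulness
        bus.subscribe(event_type, self._collect)

    def _collect(self, event_type, payload):
        if self._keep(payload):
            self.entries.append((event_type, payload))

bus = EventBus()
log = FilteredEventLog(bus, "cpu_load", keep=lambda load: load > 0.9)
bus.signal("cpu_load", 0.95)   # collected into the filtered event log
bus.signal("cpu_load", 0.20)   # discarded by the filter
print(log.entries)             # [('cpu_load', 0.95)]
```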
Event log analysis: Event log analysis is known as event composition in active databases, chronicle recognition in artificial intelligence, and real-time logic evaluation in real-time systems. Essentially, event log analysis is used for pattern matching, filtering of event occurrences, and aggregation of event occurrences into composite event occurrences. Commonly, dynamic programming strategies from algorithms are employed to save the results of previous analyses for future use, since, for example, the same pattern may be matched against the same event occurrences in several consecutive analysis passes. In contrast to general rule processing (employed to assert new facts from other facts, cf. inference engine), which is usually based on backtracking techniques, event log analysis algorithms are commonly greedy; for example, when a composite event is said to have occurred, this fact is never revoked, as it might be in a backtracking-based algorithm. Event log analysis: Several mechanisms have been proposed for event log analysis: finite state automata, Petri nets, procedural approaches (based on either imperative or object-oriented programming languages), a modification of the Boyer–Moore string-search algorithm, and simple temporal networks.
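As an illustration of greedy, automaton-based event composition, the sketch below detects a composite event defined as a fixed sequence of primitive event types; the class name, event names, and pattern are invented for the example. Once the composite is reported, it is never revoked:

```python
class CompositeDetector:
    """Tiny finite-state automaton over a stream of event occurrences.

    Detects a composite event defined as a fixed sequence of primitive
    event types. Matching is greedy: a reported composite occurrence is
    never revoked, and the automaton restarts from its initial state.
    Events that do not advance the automaton are simply ignored."""
    def __init__(self, pattern):
        self.pattern = pattern    # e.g. ("login_failed", "login_failed", "lockout")
        self.state = 0

    def observe(self, event_type):
        if event_type == self.pattern[self.state]:
            self.state += 1
            if self.state == len(self.pattern):
                self.state = 0
                return self.pattern       # composite event occurrence
        return None

detector = CompositeDetector(("login_failed", "login_failed", "lockout"))
for ev in ["login_failed", "boot", "login_failed", "lockout"]:
    composite = detector.observe(ev)
    if composite:
        print("composite event detected:", composite)
```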
**Polycyclic aromatic hydrocarbon** Polycyclic aromatic hydrocarbon: Polycyclic aromatic hydrocarbons (PAHs) are a class of organic compounds composed of multiple aromatic rings. The simplest representatives are naphthalene, having two aromatic rings, and the three-ring compounds anthracene and phenanthrene. PAHs are uncharged, non-polar, and planar. Many are colorless. Many of them are found in coal and in oil deposits, and are also produced by the incomplete combustion of organic matter—for example, in engines and incinerators or when biomass burns in forest fires. Polycyclic aromatic hydrocarbon: Polycyclic aromatic hydrocarbons are discussed as possible starting materials for abiotic syntheses of materials required by the earliest forms of life. Nomenclature and structure: The terms polyaromatic hydrocarbon or polynuclear aromatic hydrocarbon are also used for this concept. By definition, polycyclic aromatic hydrocarbons have multiple rings, precluding benzene from being considered a PAH. Some sources, such as the US EPA and CDC, consider naphthalene to be the simplest PAH. Other authors consider PAHs to start with the tricyclic species phenanthrene and anthracene. Most authors exclude compounds that include heteroatoms in the rings, or that carry substituents. A polyaromatic hydrocarbon may have rings of various sizes, including some that are not aromatic. Those that have only six-membered rings are said to be alternant. The following are examples of PAHs that vary in the number and arrangement of their rings: Examples of polycyclic aromatic hydrocarbons Geometry Most PAHs, like naphthalene, anthracene, and coronene, are planar. This geometry is a consequence of the fact that the σ-bonds that result from the merger of sp2 hybrid orbitals of adjacent carbons lie on the same plane as the carbon atom. Those compounds are achiral, since the plane of the molecule is a symmetry plane. Nomenclature and structure: In rare cases, PAHs are not planar. In some cases, the non-planarity may be forced by the topology of the molecule and the stiffness (in length and angle) of the carbon-carbon bonds. For example, unlike coronene, corannulene adopts a bowl shape in order to reduce bond stress. The two possible configurations, concave and convex, are separated by a relatively low energy barrier (about 11 kcal/mol). In theory, there are 51 structural isomers of coronene that have six fused benzene rings in a cyclic sequence, with two edge carbons shared between successive rings. All of them must be non-planar and have considerably higher bonding energy (computed to be at least 130 kcal/mol) than coronene; and, as of 2002, none of them had been synthesized. Other PAHs that might seem to be planar, considering only the carbon skeleton, may be distorted by repulsion or steric hindrance between the hydrogen atoms in their periphery. Benzo[c]phenanthrene, with four rings fused in a "C" shape, has a slight helical distortion due to repulsion between the closest pair of hydrogen atoms in the two extremal rings. This effect also causes distortion of picene. Adding another benzene ring to form dibenzo[c,g]phenanthrene creates steric hindrance between the two extreme hydrogen atoms. Adding two more rings in the same sense yields heptahelicene, in which the two extreme rings overlap. These non-planar forms are chiral, and their enantiomers can be isolated.
Nomenclature and structure: Benzenoid hydrocarbons The benzenoid hydrocarbons have been defined as condensed polycyclic unsaturated fully-conjugated hydrocarbons whose molecules are essentially planar with all rings six-membered. Full conjugation means that all carbon atoms and carbon-carbon bonds must have the sp2 structure of benzene. This class is largely a subset of the alternant PAHs, but is considered to include unstable or hypothetical compounds like triangulene or heptacene. As of 2012, over 300 benzenoid hydrocarbons had been isolated and characterized. Bonding and aromaticity: The aromaticity varies for PAHs. According to Clar's rule, the resonance structure of a PAH that has the largest number of disjoint aromatic pi sextets—i.e., benzene-like moieties—is the most important for the characterization of the properties of that PAH. Bonding and aromaticity: Benzene-substructure resonance analysis for Clar's rule For example, phenanthrene has two Clar structures: one with just one aromatic sextet (the middle ring), and the other with two (the first and third rings). The latter is therefore the more characteristic of the molecule's electronic nature. Therefore, in this molecule the outer rings have greater aromatic character, whereas the central ring is less aromatic and therefore more reactive. In contrast, in anthracene the resonance structures have one sextet each, which can be at any of the three rings, and the aromaticity spreads out more evenly across the whole molecule. This difference in the number of sextets is reflected in the differing ultraviolet–visible spectra of these two isomers, as a higher number of Clar pi-sextets is associated with a larger HOMO-LUMO gap; the highest-wavelength absorbance of phenanthrene is at 293 nm, while that of anthracene is at 374 nm. Three Clar structures with two sextets each are present in the four-ring chrysene structure: one having sextets in the first and third rings, one in the second and fourth rings, and one in the first and fourth rings. Superposition of these structures reveals that the aromaticity in the outer rings is greater (each has a sextet in two of the three Clar structures) compared to the inner rings (each has a sextet in only one of the three). Properties: Physicochemical PAHs are nonpolar and lipophilic. Larger PAHs are generally insoluble in water, although some smaller PAHs are soluble. The larger members are also poorly soluble in organic solvents and in lipids. The larger members, e.g. perylene, are strongly colored. Redox Polycyclic aromatic compounds characteristically yield radicals and anions upon treatment with alkali metals. The larger PAHs form dianions as well. The redox potential correlates with the size of the PAH. Sources: Natural Fossil carbon Polycyclic aromatic hydrocarbons are primarily found in natural sources such as bitumen. PAHs can also be produced geologically when organic sediments are chemically transformed into fossil fuels such as oil and coal. The rare minerals idrialite, curtisite, and carpathite consist almost entirely of PAHs that originated from such sediments and were extracted, processed, separated, and deposited by very hot fluids. Sources: Natural fires PAHs may result from the incomplete combustion of organic matter in natural wildfires.
Substantially higher outdoor air, soil, and water concentrations of PAHs have been measured in Asia, Africa, and Latin America than in Europe, Australia, the U.S., and Canada. High levels of such PAHs have been detected in the Cretaceous–Tertiary (K–T) boundary, more than 100 times the level in adjacent layers. The spike was attributed to massive fires that consumed about 20% of the terrestrial above-ground biomass in a very short time. Sources: Extraterrestrial PAHs are prevalent in the interstellar medium (ISM) of galaxies in both the nearby and distant Universe and make up a dominant emission mechanism in the mid-infrared wavelength range, containing as much as 10% of the total integrated infrared luminosity of galaxies. PAHs generally trace regions of cold molecular gas, which are optimum environments for the formation of stars. NASA's Spitzer Space Telescope and James Webb Space Telescope include instruments for obtaining both images and spectra of light emitted by PAHs associated with star formation. These images can trace the surface of star-forming clouds in our own galaxy or identify star-forming galaxies in the distant universe. In June 2013, PAHs were detected in the upper atmosphere of Titan, the largest moon of the planet Saturn. Sources: Minor sources Volcanic eruptions may emit PAHs. Certain PAHs such as perylene can also be generated in anaerobic sediments from existing organic material, although it remains undetermined whether abiotic or microbial processes drive their production. Sources: Artificial The dominant sources of PAHs in the environment are thus human activities: wood-burning and combustion of other biofuels such as dung or crop residues contribute more than half of annual global PAH emissions, particularly due to biofuel use in India and China. As of 2004, industrial processes and the extraction and use of fossil fuels made up slightly more than one quarter of global PAH emissions, dominating outputs in industrial countries such as the United States. A year-long sampling campaign in Athens, Greece, found that a third (31%) of PAH urban air pollution was caused by wood-burning, comparable to the shares from diesel and oil (33%) and gasoline (29%). It also found that wood-burning is responsible for nearly half (43%) of the annual PAH cancer risk (carcinogenic potential) compared to the other sources, and that wintertime PAH levels were 7 times higher than in other seasons, especially when atmospheric dispersion is low. Lower-temperature combustion, such as tobacco smoking or wood-burning, tends to generate low-molecular-weight PAHs, whereas high-temperature industrial processes typically generate PAHs with higher molecular weights. Incense is also a source. PAHs are typically found as complex mixtures. Distribution in the environment: Aquatic environments Most PAHs are insoluble in water, which limits their mobility in the environment, although PAHs sorb to fine-grained organic-rich sediments. The aqueous solubility of PAHs decreases approximately logarithmically as molecular mass increases. Two-ringed PAHs, and to a lesser extent three-ringed PAHs, dissolve in water, making them more available for biological uptake and degradation. Further, two- to four-ringed PAHs volatilize sufficiently to appear in the atmosphere predominantly in gaseous form, although the physical state of four-ring PAHs can depend on temperature.
In contrast, compounds with five or more rings have low solubility in water and low volatility; they are therefore predominantly in the solid state, bound to particulate air pollution, soils, or sediments. In the solid state, these compounds are less accessible for biological uptake or degradation, increasing their persistence in the environment. Distribution in the environment: Human exposure Human exposure varies across the globe and depends on factors such as smoking rates, fuel types in cooking, and pollution controls on power plants, industrial processes, and vehicles. Developed countries with stricter air and water pollution controls, cleaner sources of cooking (i.e., gas and electricity vs. coal or biofuels), and prohibitions on public smoking tend to have lower levels of PAH exposure, while developing and undeveloped countries tend to have higher levels. Distribution in the environment: Surgical smoke plumes have been shown to contain PAHs in several independent research studies. Distribution in the environment: Burning solid fuels such as coal and biofuels in the home for cooking and heating is a dominant global source of PAH emissions, which in developing countries leads to high levels of exposure to indoor particulate air pollution containing PAHs, particularly for women and children who spend more time in the home or cooking. In industrial countries, people who smoke tobacco products, or who are exposed to second-hand smoke, are among the most highly exposed groups; tobacco smoke contributes to 90% of indoor PAH levels in the homes of smokers. For the general population in developed countries, the diet is otherwise the dominant source of PAH exposure, particularly from smoking or grilling meat or consuming PAHs deposited on plant foods, especially broad-leafed vegetables, during growth. PAHs are typically present at low concentrations in drinking water. Distribution in the environment: Emissions from vehicles such as cars and trucks can be a substantial outdoor source of PAHs in particulate air pollution. Geographically, major roadways are thus sources of PAHs, which may distribute in the atmosphere or deposit nearby. Catalytic converters are estimated to reduce PAH emissions from gasoline-fired vehicles by 25-fold. People can also be occupationally exposed during work that involves fossil fuels or their derivatives, wood-burning, carbon electrodes, or exposure to diesel exhaust. Industrial activities that can produce and distribute PAHs include aluminum, iron, and steel manufacturing; coal gasification, tar distillation, and shale oil extraction; production of coke, creosote, carbon black, and calcium carbide; road paving and asphalt manufacturing; rubber tire production; manufacturing or use of metalworking fluids; and activity of coal or natural gas power stations. Distribution in the environment: Environmental pollution and degradation PAHs typically disperse from urban and suburban non-point sources through road runoff, sewage, and atmospheric circulation and subsequent deposition of particulate air pollution. Soil and river sediment near industrial sites such as creosote manufacturing facilities can be highly contaminated with PAHs.
Oil spills, creosote, coal mining dust, and other fossil fuel sources can also distribute PAHs in the environment. Two- and three-ringed PAHs can disperse widely while dissolved in water or as gases in the atmosphere, while PAHs with higher molecular weights can disperse locally or regionally, adhered to particulate matter that is suspended in air or water until the particles land or settle out of the water column. PAHs have a strong affinity for organic carbon, and thus highly organic sediments in rivers, lakes, and the ocean can be a substantial sink for PAHs. Algae and some invertebrates such as protozoans, mollusks, and many polychaetes have limited ability to metabolize PAHs and bioaccumulate disproportionate concentrations of PAHs in their tissues; however, PAH metabolism can vary substantially across invertebrate species. Most vertebrates metabolize and excrete PAHs relatively rapidly. Tissue concentrations of PAHs do not increase (biomagnify) from the lowest to the highest levels of food chains. PAHs transform slowly into a wide range of degradation products. Biological degradation by microbes is a dominant form of PAH transformation in the environment. Soil-consuming invertebrates such as earthworms speed PAH degradation, either through direct metabolism or by improving the conditions for microbial transformations. Abiotic degradation in the atmosphere and the top layers of surface waters can produce nitrogenated, halogenated, hydroxylated, and oxygenated PAHs; some of these compounds can be more toxic, water-soluble, and mobile than their parent PAHs. Distribution in the environment: Urban soils The British Geological Survey reported the amount and distribution of PAH compounds, including parent and alkylated forms, in urban soils at 76 locations in Greater London. The study showed that the parent (16 PAH) content ranged from 4 to 67 mg/kg (dry soil weight), with an average PAH concentration of 18 mg/kg (dry soil weight), whereas the total PAH content (33 PAH) ranged from 6 to 88 mg/kg; fluoranthene and pyrene were generally the most abundant PAHs. Benzo[a]pyrene (BaP), the most toxic of the parent PAHs, is widely considered a key marker PAH for environmental assessments; the normal background concentration of BaP in the London urban sites was 6.9 mg/kg (dry soil weight). London soils contained more stable four- to six-ringed PAHs, which were indicative of combustion and pyrolytic sources, such as coal and oil burning and traffic-sourced particulates. However, the overall distribution also suggested that the PAHs in London soils had undergone weathering and been modified by a variety of pre- and post-depositional processes such as volatilization and microbial biodegradation. Distribution in the environment: Peatlands Managed burning of moorland vegetation in the UK has been shown to generate PAHs which become incorporated into the peat surface. Burning of moorland vegetation such as heather initially generates high amounts of two- and three-ringed PAHs relative to four- to six-ringed PAHs in surface sediments; however, this pattern is reversed as the lower-molecular-weight PAHs are attenuated by biotic decay and photodegradation. Evaluation of the PAH distributions using statistical methods such as principal component analysis (PCA) enabled the study to link the source (burnt moorland) to the pathway (suspended stream sediment) and to the depositional sink (reservoir bed).
Distribution in the environment: Rivers, estuarine and coastal sediments Concentrations of PAHs in river and estuarine sediments vary according to a variety of factors, including proximity to municipal and industrial discharge points, wind direction and distance from major urban roadways, as well as the tidal regime, which controls the diluting effect of the generally cleaner marine sediments relative to freshwater discharge. Consequently, the concentrations of pollutants in estuaries tend to decrease toward the river mouth. Understanding sediment-hosted PAHs in estuaries is important for the protection of commercial fisheries (such as mussels) and general environmental habitat conservation, because PAHs can impact the health of suspension- and sediment-feeding organisms. River-estuary surface sediments in the UK tend to have a lower PAH content than sediments buried 10–60 cm from the surface, reflecting lower present-day industrial activity combined with improvements in environmental legislation governing PAHs. Typical PAH concentrations in UK estuaries range from about 19 to 16,163 µg/kg (dry sediment weight) in the River Clyde and 626 to 3,766 µg/kg in the River Mersey. In general, estuarine sediments with a higher natural total organic carbon (TOC) content tend to accumulate PAHs, due to the high sorption capacity of organic matter. A similar correspondence between PAHs and TOC has also been observed in the sediments of tropical mangroves located on the coast of southern China. Human health: Cancer is a primary human health risk of exposure to PAHs. Exposure to PAHs has also been linked with cardiovascular disease and poor fetal development. Cancer PAHs have been linked to skin, lung, bladder, liver, and stomach cancers in well-established animal model studies. Specific compounds classified by various agencies as possible or probable human carcinogens are identified in the section "Regulation and oversight" below. Human health: History Historically, PAHs contributed substantially to our understanding of adverse health effects from exposures to environmental contaminants, including chemical carcinogenesis. In 1775, Percivall Pott, a surgeon at St. Bartholomew's Hospital in London, observed that scrotal cancer was unusually common in chimney sweepers and proposed the cause as occupational exposure to soot. A century later, Richard von Volkmann reported increased skin cancers in workers of the coal tar industry of Germany, and by the early 1900s, increased rates of cancer from exposure to soot and coal tar were widely accepted. In 1915, Yamagiwa and Ichikawa were the first to experimentally produce cancers, specifically of the skin, by topically applying coal tar to rabbit ears. In 1922, Ernest Kennaway determined that the carcinogenic component of coal tar mixtures was an organic compound consisting of only carbon and hydrogen. This component was later linked to a characteristic fluorescent pattern that was similar but not identical to that of benz[a]anthracene, a PAH that was subsequently demonstrated to cause tumors. Cook, Hewett, and Hieger then linked the specific spectroscopic fluorescent profile of benzo[a]pyrene to that of the carcinogenic component of coal tar, the first time that a specific compound from an environmental mixture (coal tar) was demonstrated to be carcinogenic.
Human health: In the 1930s and later, epidemiologists from Japan, the UK, and the US, including Richard Doll, reported greater rates of death from lung cancer following occupational exposure to PAH-rich environments among workers in coke ovens and coal carbonization and gasification processes. Human health: Mechanisms of carcinogenesis The structure of a PAH influences whether and how the individual compound is carcinogenic. Some carcinogenic PAHs are genotoxic and induce mutations that initiate cancer; others are not genotoxic and instead affect cancer promotion or progression. PAHs that affect cancer initiation are typically first chemically modified by enzymes into metabolites that react with DNA, leading to mutations. When the DNA sequence is altered in genes that regulate cell replication, cancer can result. Mutagenic PAHs, such as benzo[a]pyrene, usually have four or more aromatic rings as well as a "bay region", a structural pocket that increases the reactivity of the molecule to the metabolizing enzymes. Mutagenic metabolites of PAHs include diol epoxides, quinones, and radical PAH cations. These metabolites can bind to DNA at specific sites, forming bulky complexes called DNA adducts that can be stable or unstable. Stable adducts may lead to DNA replication errors, while unstable adducts react with the DNA strand, removing a purine base (either adenine or guanine). Such mutations, if they are not repaired, can transform genes encoding normal cell signaling proteins into cancer-causing oncogenes. Quinones can also repeatedly generate reactive oxygen species that may independently damage DNA. Enzymes in the cytochrome family (CYP1A1, CYP1A2, CYP1B1) metabolize PAHs to diol epoxides. PAH exposure can increase production of the cytochrome enzymes, allowing the enzymes to convert PAHs into mutagenic diol epoxides at greater rates. In this pathway, PAH molecules bind to the aryl hydrocarbon receptor (AhR) and activate it as a transcription factor that increases production of the cytochrome enzymes. The activity of these enzymes may at times, conversely, protect against PAH toxicity, which is not yet well understood. Low-molecular-weight PAHs, with two to four aromatic hydrocarbon rings, are more potent as co-carcinogens during the promotional stage of cancer. In this stage, an initiated cell (a cell that has retained a carcinogenic mutation in a key gene related to cell replication) is removed from growth-suppressing signals from its neighboring cells and begins to clonally replicate. Low-molecular-weight PAHs that have bay or bay-like regions can dysregulate gap junction channels, interfering with intercellular communication, and also affect mitogen-activated protein kinases that activate transcription factors involved in cell proliferation. Closure of gap junction protein channels is a normal precursor to cell division. Excessive closure of these channels after exposure to PAHs removes a cell from the normal growth-regulating signals imposed by its local community of cells, thus allowing initiated cancerous cells to replicate. These PAHs do not need to be enzymatically metabolized first. Low-molecular-weight PAHs are prevalent in the environment, thus posing a significant risk to human health during the promotional phases of cancer. Human health: Cardiovascular disease Adult exposure to PAHs has been linked to cardiovascular disease.
PAHs are among the complex suite of contaminants in tobacco smoke and particulate air pollution and may contribute to cardiovascular disease resulting from such exposures. In laboratory experiments, animals exposed to certain PAHs have shown increased development of plaques (atherogenesis) within arteries. Potential mechanisms for the pathogenesis and development of atherosclerotic plaques may be similar to the mechanisms involved in the carcinogenic and mutagenic properties of PAHs. A leading hypothesis is that PAHs may activate the cytochrome enzyme CYP1B1 in vascular smooth muscle cells. This enzyme then metabolically processes the PAHs to quinone metabolites that bind to DNA in reactive adducts that remove purine bases. The resulting mutations may contribute to unregulated growth of vascular smooth muscle cells or to their migration to the inside of the artery, which are steps in plaque formation. These quinone metabolites also generate reactive oxygen species that may alter the activity of genes that affect plaque formation. Oxidative stress following PAH exposure could also result in cardiovascular disease by causing inflammation, which has been recognized as an important factor in the development of atherosclerosis and cardiovascular disease. Biomarkers of exposure to PAHs in humans have been associated with inflammatory biomarkers that are recognized as important predictors of cardiovascular disease, suggesting that oxidative stress resulting from exposure to PAHs may be a mechanism of cardiovascular disease in humans. Human health: Developmental impacts Multiple epidemiological studies of people living in Europe, the United States, and China have linked in utero exposure to PAHs, through air pollution or parental occupational exposure, with poor fetal growth, reduced immune function, and poorer neurological development, including lower IQ. Regulation and oversight: Some governmental bodies, including the European Union as well as NIOSH and the United States Environmental Protection Agency (EPA), regulate concentrations of PAHs in air, water, and soil. The European Commission has restricted concentrations of 8 carcinogenic PAHs in consumer products that contact the skin or mouth. Priority polycyclic aromatic hydrocarbons have been identified by the US EPA, the US Agency for Toxic Substances and Disease Registry (ATSDR), and the European Food Safety Authority (EFSA) due to their carcinogenicity or genotoxicity and/or ability to be monitored; several of these are considered probable or possible human carcinogens by the US EPA, the European Union, and/or the International Agency for Research on Cancer (IARC). Detection and optical properties: A spectral database exists for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. Detection of PAHs in materials is often done using gas chromatography–mass spectrometry or liquid chromatography with ultraviolet-visible or fluorescence spectroscopic methods, or by using rapid-test PAH indicator strips. Structures of PAHs have been analyzed using infrared spectroscopy. PAHs possess very characteristic UV absorbance spectra. These often possess many absorbance bands and are unique for each ring structure. Thus, for a set of isomers, each isomer has a different UV absorbance spectrum than the others. This is particularly useful in the identification of PAHs. Most PAHs are also fluorescent, emitting characteristic wavelengths of light when they are excited (when the molecules absorb light).
The extended pi-electron electronic structures of PAHs lead to these spectra, as well as to certain large PAHs also exhibiting semiconducting and other behaviors. Detection and optical properties: Origins of life PAHs may be abundant in the universe. They seem to have been formed as early as a couple of billion years after the Big Bang, and are associated with new stars and exoplanets. More than 20% of the carbon in the universe may be associated with PAHs. PAHs are considered possible starting material for the earliest forms of life. Detection and optical properties: Light emitted by the Red Rectangle nebula possesses spectral signatures that suggest the presence of anthracene and pyrene. This report was considered controversial. One hypothesis holds that, as nebulae of the same type as the Red Rectangle approach the ends of their lives, convection currents cause carbon and hydrogen in the nebulae's cores to get caught in stellar winds and radiate outward. As they cool, the atoms supposedly bond to each other in various ways and eventually form particles of a million or more atoms. Adolf Witt and his team inferred that PAHs—which may have been vital in the formation of early life on Earth—can only originate in nebulae. Detection and optical properties: PAHs, subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation, and hydroxylation, into more complex organic compounds—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature, which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks." Low-temperature chemical pathways from simple organic compounds to complex PAHs are of interest. Such chemical pathways may help explain the presence of PAHs in the low-temperature atmosphere of Saturn's moon Titan, and may be significant pathways, in terms of the PAH world hypothesis, in producing precursors to biochemicals related to life as we know it.
**Carbonyl cyanide-p-trifluoromethoxyphenylhydrazone** Carbonyl cyanide-p-trifluoromethoxyphenylhydrazone: Carbonyl cyanide-p-trifluoromethoxyphenylhydrazone (FCCP) is an ionophore, i.e., a mobile ion carrier. It is referred to as an uncoupling agent because it disrupts ATP synthesis by transporting hydrogen ions through the mitochondrial membrane before they can be used to provide the energy for oxidative phosphorylation. Chemically, it is a nitrile and a hydrazone. FCCP was first described by Heytler in 1962.
**Netarsudil/latanoprost** Netarsudil/latanoprost: Netarsudil/latanoprost, sold under the brand name Rocklatan among others, is a fixed-dose combination medication used to treat elevated intraocular pressure (IOP) in people with open-angle glaucoma or ocular hypertension. It contains netarsudil mesylate and latanoprost. It is applied as eye drops. The most common side effects include conjunctival hyperaemia (red eye), pain at the site where the medicine was applied, cornea verticillata (deposits in the cornea, the transparent layer in front of the eye that covers the pupil and iris), pruritus (itching of the eye), erythema (reddening) and discomfort in the eye, increased lacrimation (watery eyes), and conjunctival haemorrhage (bleeding in the surface layer of the eye). Netarsudil/latanoprost was approved for medical use in the United States in March 2019, and in the European Union in January 2021. Medical uses: Netarsudil/latanoprost is indicated for the reduction of elevated intraocular pressure (IOP) in adults with primary open-angle glaucoma or ocular hypertension.
**Datar–Mathews method for real option valuation** Datar–Mathews method for real option valuation: The Datar–Mathews Method (DM Method) is a method for real options valuation. The method provides an easy way to determine the real option value of a project simply by using the average of positive outcomes for the project. The method can be understood as an extension of the net present value (NPV) multi-scenario Monte Carlo model with an adjustment for risk aversion and economic decision-making. The method uses information that arises naturally in a standard discounted cash flow (DCF), or NPV, project financial valuation. It was created in 2000 by Vinay Datar, professor at Seattle University, and Scott H. Mathews, Technical Fellow at The Boeing Company. Method: The mathematical equation for the DM Method is shown below. The method captures the real option value by discounting the distribution of operating profits at R, the market risk rate, and discounting the distribution of the discretionary investment at r, the risk-free rate, before the expected payoff is calculated. The option value is then the expected value of the maximum of the difference between the two discounted distributions or zero. Fig. 1. Method: $C_0 = E\left[\max\left(\tilde{S}_T e^{-RT} - \tilde{X}_T e^{-rT},\, 0\right)\right]$. $\tilde{S}_T$ is a random variable representing the future benefits, or operating profits, at time T. The present valuation of $\tilde{S}_T$ uses R, a discount rate consistent with the risk level of $\tilde{S}_T$: $\tilde{S}_0 = \tilde{S}_T e^{-RT}$. R is the required rate of return for participation in the target market, sometimes termed the hurdle rate. $\tilde{X}_T$ is a random variable representing the strike price. The present valuation of $\tilde{X}_T$ uses r, the rate consistent with the risk of investment of $\tilde{X}_T$: $\tilde{X}_0 = \tilde{X}_T e^{-rT}$. In many generalized option applications, the risk-free discount rate is used. However, other discount rates can be considered, such as the corporate bond rate, particularly when the application is an internal corporate product development project. Method: $C_0$ is the real option value for a single-stage project. The option value can be understood as the expected value of the difference of two present value distributions with an economically rational threshold limiting losses on a risk-adjusted basis. This value may also be expressed as a stochastic distribution. The differential discount rate for R and r implicitly allows the DM Method to account for the underlying risk. If R > r, then the option will be risk-averse, typical for both financial and real options. If R < r, then the option will be risk-seeking. If R = r, then this is termed a risk-neutral option, and has parallels with NPV-type analyses with decision-making, such as decision trees. The DM Method gives the same results as the Black–Scholes and the binomial lattice option models, provided the same inputs and discount methods are used. This non-traded real option value therefore is dependent on the risk perception of the evaluator toward a market asset relative to a privately held investment asset. Method: The DM Method is advantageous for use in real option applications because, unlike some other option models, it does not require a value for sigma (a measure of uncertainty) or for S0 (the value of the project today), both of which are difficult to derive for new product development projects; see further under real options valuation.
Finally, the DM Method uses real-world values of any distribution type, avoiding the requirement for conversion to risk-neutral values and the restriction of a lognormal distribution; see further under Monte Carlo methods for option pricing. Method: Extensions of the method for other real option valuations have been developed, such as contract guarantee (put option), Multi-Stage (compound option), Early Launch (American option), and others. Implementation: The DM Method may be implemented using Monte-Carlo simulation, or in a simplified algebraic or other form (see the Range Option below). Implementation: Using simulation, for each sample, the engine draws a random variable from both $\tilde{S}_T$ and $\tilde{X}_T$, calculates their present values, and takes the difference. Fig. 2A. The difference value is compared to zero, the maximum of the two is determined, and the resulting value recorded by the simulation engine. Here, reflecting the optionality inherent in the project, a forecast of a net negative value outcome corresponds to an abandoned project, and has a zero value. Fig. 2B. The resulting values create a payoff distribution representing the economically rational set of plausible, discounted value forecasts of the project at time T0. Implementation: When sufficient payoff values have been recorded, typically a few hundred, then the mean, or expected value, of the payoff distribution is calculated. Fig. 2C. The option value is the expected value, the first moment of all positive NPVs and zeros, of the payoff distribution. A simple interpretation is: real option value = average of [max(operating profit − launch costs, 0)], where operating profit and launch costs are the appropriately discounted range of cash flows to time T0. The option value can also be understood as a distribution ($\tilde{C}_0$) reflecting the uncertainty of the underlying variables.
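As a concrete illustration of the simulation procedure just described, here is a minimal sketch in Python; the time horizon, discount rates, and distribution shapes and parameters are hypothetical placeholders for illustration, not values from the DM literature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: time horizon and the two discount rates
T = 3.0      # years until the launch decision
R = 0.12     # market risk (hurdle) rate for operating profits
r = 0.05     # risk-free rate for launch costs

n = 100_000  # number of simulation draws

# Projected distributions at time T (illustrative shapes/parameters only):
# operating profits ~ lognormal, launch cost ~ triangular, in $M
S_T = rng.lognormal(mean=np.log(40.0), sigma=0.6, size=n)
X_T = rng.triangular(left=15.0, mode=20.0, right=30.0, size=n)

# Discount each draw to time zero at its own rate
S_0 = S_T * np.exp(-R * T)
X_0 = X_T * np.exp(-r * T)

# Payoff per draw: launch only if profitable, otherwise abandon (value 0)
payoff = np.maximum(S_0 - X_0, 0.0)

# The DM option value is the mean (first moment) of the payoff distribution
C_0 = payoff.mean()
print(f"DM option value: {C_0:.2f}")
print(f"Probability of launch: {(payoff > 0).mean():.1%}")
```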
DM Option Variations: Algebraic lognormal form The DM real option can be considered a generalized form for option valuation. Its simulation produces a truncated present value distribution, of which the mean value is interpreted to be the option value. With certain boundary conditions, the DM option can be reformulated algebraically as a conditional expectation of a lognormal distribution, similar to the form and characteristics of a typical financial option, such as the European, single-stage Black-Scholes financial option. This section illustrates the transformation of the DM real option to its algebraic lognormal form and its relationship to the Black-Scholes financial option formula. The process illuminates some of the more technical elements of the option formulation, thereby providing further insight into the underlying concepts. DM Option Variations: The lognormal form of the DM Method remains a simple concept based on the same computation procedures as the simulation form. It is the conditional expectation of the discounted projected future value outcome distribution, $\tilde{S}_T$, less a predetermined purchase cost (strike price or launch cost), $\bar{X}_T$ (modeled in this example as a scalar value), multiplied by the probability of that truncated distribution greater than a threshold, nominally 0. A conditional expectation is the expected value of the truncated distribution (mean of the tail), $M_T$, computed with respect to its conditional probability distribution (Fig. 3). DM Option Variations: The option calculation procedure values the project investment (option purchase), $C_0$, at T0. For the DM option, the time-differentiated discounting (R and r) results in an apparent shift of the projected value outcome distribution, $\tilde{S}$, relative to $\tilde{X}$, or the scalar mean $\bar{X}$ in the example shown in Fig. 4. This relative shift sets up the conditional expectation of the truncated distribution at T0. DM Option Variations: In a lognormal distribution for a project future value outcome, $\tilde{S}_T$, both the mean, $\bar{S}_T$, and the standard deviation, $SD_T$, must be specified. The standard deviation $SD_T$ of the distribution $\tilde{S}_T$ is proportionately discounted along with the distribution: $SD_0 = SD_T e^{-RT}$. The parameters $\sigma$ and $\mu$ of the lognormal distribution at T0 can be derived from the values $SD_0$ and $S_0$ respectively, as: $\sigma^2 = \ln\left[1 + (SD_S)^2\right]$, where $SD_S = SD_0/S_0 = SD_T/S_T$, and $\mu = \ln(S_0) - 0.5\sigma^2$. The conditional expectation of the discounted value outcome is the mean of the tail, $M_T = S_0 \, N\!\left(\tfrac{\mu + \sigma^2 - \ln X_0}{\sigma}\right) \big/ N\!\left(\tfrac{\mu - \ln X_0}{\sigma}\right)$, where $N(\cdot)$ is the cumulative distribution function of the standard normal distribution ($N(0,1)$). The probability of the project being in the money and launched ("exercised") is $N\!\left(\tfrac{\mu - \ln X_0}{\sigma}\right)$. The project investment (option) value is: $C_0 = S_0 \, N\!\left(\tfrac{\mu + \sigma^2 - \ln X_0}{\sigma}\right) - X_0 \, N\!\left(\tfrac{\mu - \ln X_0}{\sigma}\right)$. DM Option Variations: The involved lognormal mathematics can be burdensome and opaque for some business practices within a corporation. However, several simplifications can ease that burden and provide clarity without sacrificing the soundness of the option calculation. One simplification is the employment of the standard normal distribution, also known as the Z-distribution, which has a mean of 0 and a standard deviation of 1. It is common practice to convert a normal distribution to a standard normal and then use the standard normal table to find the value of probabilities. DM Option Variations: Define the standard normal variable: $Z = \tfrac{\ln X_0 - \mu}{\sigma}$. The conditional expectation of the discounted value outcome is: $M_T = S_0 \left[\tfrac{N(\sigma - Z)}{N(-Z)}\right]$. The probability of the project being in the money and launched ("exercised") is: $N\!\left(\tfrac{\mu - \ln X_0}{\sigma}\right) = N(-Z)$. The Datar-Mathews lognormal option value simplifies to: $C_{0,DM} = \left\{S_0\left[\tfrac{N(\sigma - Z)}{N(-Z)}\right] - X_0\right\} N(-Z) = S_0 N(\sigma - Z) - X_0 N(-Z)$. DM Option Variations: Transformation to the Black–Scholes Option The Black–Scholes option formula (as well as the binomial lattice) is a special case of the simulated DM real option. With subtle, but notable, differences, the lognormal form of the DM Option can be algebraically transformed into the Black-Scholes option formula. The real option valuation is based on an approximation of the future value outcome distribution, which may be lognormal, at time T, projected (discounted) to T0. In contrast, the Black-Scholes is based on a lognormal distribution projected from historical asset returns to present time T0. Analysis of these historical trends results in a calculation termed the volatility (finance) factor. For Black-Scholes (BS) the volatility factor is $\sigma_{BS}\sqrt{T}$. The standard deviation $\sigma$ of the lognormal form above is replaced by the volatility factor: substituting $\sigma = \sigma_{BS}\sqrt{T}$, $\mu = \ln S_0 - 0.5\sigma_{BS}^2 T$, and $\ln X_0 = \ln X_T - rT$ gives $\sigma - Z = \tfrac{\ln(S_0/X_T) + (r + 0.5\sigma_{BS}^2)T}{\sigma_{BS}\sqrt{T}} = d_1$ and $-Z = \tfrac{\ln(S_0/X_T) + (r - 0.5\sigma_{BS}^2)T}{\sigma_{BS}\sqrt{T}} = d_2$. The Black-Scholes option value simplifies to its familiar form: $C_{0,BS} = S_0 N(d_1) - X_T e^{-rT} N(d_2)$. The terms $N(d_1)$ and $N(d_2)$ are applied in the calculation of the Black–Scholes formula, and are expressions related to operations on lognormal distributions; see section "Interpretation" under Black–Scholes.
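The closed form above can be checked numerically. The following sketch (illustrative parameter values only) computes the DM lognormal value $S_0 N(\sigma - Z) - X_0 N(-Z)$ and shows that, under the Black–Scholes substitutions $\sigma = \sigma_{BS}\sqrt{T}$ and $X_0 = X_T e^{-rT}$, it reproduces the familiar Black–Scholes call value:

```python
import numpy as np
from scipy.stats import norm

def dm_lognormal(S0, X0, sd0):
    """DM option value for a lognormal outcome with discounted mean S0
    and standard deviation SD0, against a scalar discounted strike X0."""
    sigma2 = np.log(1.0 + (sd0 / S0) ** 2)   # sigma^2 = ln(1 + (SD0/S0)^2)
    sigma = np.sqrt(sigma2)
    mu = np.log(S0) - 0.5 * sigma2           # mu = ln(S0) - 0.5 sigma^2
    Z = (np.log(X0) - mu) / sigma
    return S0 * norm.cdf(sigma - Z) - X0 * norm.cdf(-Z)

def black_scholes_call(S0, XT, r, T, vol):
    d1 = (np.log(S0 / XT) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return S0 * norm.cdf(d1) - XT * np.exp(-r * T) * norm.cdf(d2)

# Illustrative inputs
S0, XT, r, T, vol = 100.0, 110.0, 0.05, 2.0, 0.3

# Express the BS inputs in DM terms: sigma = vol*sqrt(T), X0 = XT*exp(-rT)
sigma = vol * np.sqrt(T)
sd0 = S0 * np.sqrt(np.exp(sigma**2) - 1.0)   # the SD0 that yields that sigma
X0 = XT * np.exp(-r * T)

print(dm_lognormal(S0, X0, sd0))              # DM lognormal value
print(black_scholes_call(S0, XT, r, T, vol))  # identical value
```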
Referring to Fig. 5 and using the lognormal form of the DM Option, it is possible to derive certain insights into the internal operation of an option: $N(\sigma - Z) = N(d_1)$ and $N(-Z) = N(d_2)$. $N(-Z)$, or $N(d_2)$, is a measure of the area of the tail of the distribution (delineated by $X_0$) relative to that of the entire distribution, i.e. the probability of the tail of the distribution at time T0. Fig. 5, Right. The true probability of expiring in-the-money in the real ("physical") world is calculated at time T0, the launch or strike date, and is measured by the area of the tail of the distribution. $N(\sigma - Z)$, or $N(d_1)$, is the value of the option payoff relative to that of the asset: $N(d_1) = [M_T \times N(d_2)]/S_0$, where $M_T$ is the mean of the tail at time T0. DM Option Variations: Data patterns A simplified DM Method computation conforms to the same essential features: it is the conditional expectation of the discounted projected future value outcome distribution, $M_T$, less a discounted cost, $X_0$, multiplied by the probability of exercise, $N(-Z)$. The value of the DM Method option can be understood as $C_0 = (M_T - X_0) \times N(-Z)$. This simplified formulation has strong parallels to an expected value calculation. DM Option Variations: Businesses that collect historical data may be able to leverage the similarity of assumptions across related projects, facilitating the calculation of option values. One resulting simplification is the Uncertainty Ratio, $UR = SD/S$, which can often be modeled as a constant for similar projects. UR is the degree of certainty by which the projected future cash flows can be estimated. UR is invariant of time ($SD_T/S_T = SD_0/S_0$), with values typically between 0.35 and 1.0 for many multi-year business projects. DM Option Variations: Applying this observation as a constant, K, to the above formulas results in a simpler formulation: define $K = \sigma = \sqrt{\ln(1 + UR^2)}$ and $\mu = \ln(S_0) - 0.5K^2$, so that $Z = \tfrac{\ln(X_0/S_0)}{K} + 0.5K$. Z is normally distributed and the values can be accessed in a table of standard normal variables. The resulting real option value can be derived simply on a hand-held calculator once K is determined: $C_0 = S_0 N(\sigma - Z) - X_0 N(-Z) = S_0 N(K - Z) - X_0 N(-Z)$. Assuming the UR is held constant, the relative value of the option can then simply be determined by the ratio of $S_0$ to $X_0$, which is proportional to the probability of the project being in the money and launched (exercised), approximately $0.3 \times (S_0/X_0) - k$ (Fig. 6). A real option valuation is typically applied when the probability of exercise is approximately less than 50% ($S_0/X_0 \lesssim 1$). Businesses need to apply their own data across similar projects to establish relevant parameter values.
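A small sketch of this hand-calculator shortcut; the UR and discounted cash-flow figures below are assumed for illustration:

```python
import numpy as np
from scipy.stats import norm

UR = 0.6                     # assumed uncertainty ratio for this project class
K = np.sqrt(np.log(1.0 + UR**2))

S0, X0 = 25.0, 30.0          # discounted mean benefit and cost ($M, illustrative)
Z = np.log(X0 / S0) / K + 0.5 * K

# C0 = S0 N(K - Z) - X0 N(-Z)
C0 = S0 * norm.cdf(K - Z) - X0 * norm.cdf(-Z)
print(f"K = {K:.3f}, option value = {C0:.2f}")
```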
DM Option Variations: Triangular form (Range Option) Given the difficulty in estimating the lognormal distribution mean and standard deviation of future returns, other distributions are instead more often applied for real options used in business decision making. The sampled distributions may take any form, although the triangular distribution is often used, as is typical for low-data situations, followed by a uniform distribution (continuous) or a beta distribution. This approach is useful for early-stage estimates of project option value when there has not been sufficient time or resources to gather the necessary quantitative information required for a complete cash flow simulation, or in a portfolio of projects when simulation of all the projects is too computationally demanding. Regardless of the distribution chosen, the procedure remains the same for a real option valuation. For a triangular distribution, sometimes referred to as three-point estimation, the mode value corresponds to the "most-likely" scenario, and the two other scenarios, "pessimistic" and "optimistic", represent plausible deviations from the most-likely scenario (often modeled as approximating a two-sided 1-out-of-10 likelihood, or 90% confidence). This range of estimates results in the eponymous name for the option, the DM Range Option. The DM Range Option method is similar to the fuzzy method for real options. The following example (Fig. 7) uses a range of future estimated operating profits of a (pessimistic), b (optimistic) and m (mode or most-likely). DM Option Variations: For T0, discount a, b and m by $e^{-RT}$, and the strike by $X_0 = X_T e^{-rT}$. DM Option Variations: The classic DM Method presumes that the strike price is represented by a random variable (distribution $\tilde{X}_0$), with the option solution derived by simulation. Alternatively, without the burden of performing a simulation, applying the average or mean scalar value of the launch cost distribution, $\bar{X}_0$ (strike price), results in a conservative estimate of the DM Range Option value. If the launch cost is predetermined as a scalar value, then the DM Range Option value calculation is exact. DM Option Variations: The expected value of the truncated triangular distribution (mean of the right tail) is $M_T = \tfrac{2X_0 + b}{3}$. The probability of the project being in the money and launched is the proportional area of the truncated distribution relative to the complete triangular distribution. This partial expectation is computed from the cumulative distribution function (CDF): in the right tail ($m \le x \le b$) the CDF is $F(x) = 1 - \tfrac{(b - x)^2}{(b-a)(b-m)}$, so the probability that the outcome is found at a value greater than or equal to $X_0$ is $P(X \ge X_0) = \tfrac{(b - X_0)^2}{(b-a)(b-m)}$. The DM Range Option value, or project investment, is: $C_0 = (M_T - X_0) \cdot P(X \ge X_0) = (M_T - X_0) \cdot \tfrac{(b - X_0)^2}{(b-a)(b-m)}$. Note: $\mu = 0.5\ln(ab)$, $\sigma = 0.25\ln(b/a)$, and $UR = \sqrt{e^{\sigma^2} - 1}$.
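A sketch of the Range Option arithmetic above, with placeholder three-point estimates; as in the text, the closed form assumes a scalar strike that falls in the right tail (m ≤ X0 ≤ b):

```python
def dm_range_option(a, m, b, X0):
    """DM Range Option with a scalar (mean) launch cost.
    a, m, b: discounted pessimistic / most-likely / optimistic
    operating-profit estimates; X0: discounted launch cost."""
    assert m <= X0 <= b, "formula applies when the strike lies in the right tail"
    M_T = (2.0 * X0 + b) / 3.0                       # mean of the right tail
    p_launch = (b - X0) ** 2 / ((b - a) * (b - m))   # tail probability
    return (M_T - X0) * p_launch

# Illustrative discounted estimates ($M): pessimistic 8, most likely 20,
# optimistic 45, against a discounted launch cost of 25
print(dm_range_option(a=8.0, m=20.0, b=45.0, X0=25.0))
```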
DM Option Variations: Use of a DM Range Option facilitates the application of real option valuation to future project investments. The DM Range Option provides an estimate of valuation that differs marginally from that of the DM Option algebraic lognormal distribution form. However, the projected future value outcome, S, of a project is rarely based on a lognormal distribution derived from historical asset returns, as a financial option is. Rather, the future value outcome, S (as well as the strike price, X, and the standard deviation, SD), is more than likely a three-point estimation based on engineering and marketing parameters. Therefore, the ease of application of the DM Range Option is often justified by its expediency and is sufficient to estimate the conditional value of a future project. Demand Curve Integration: Many early-stage projects find that the dominant unknown values are the first-order range estimates of the major components of operating profits: revenue and manufacturing cost of goods sold (COGS). In turn, the uncertainty about revenue is driven by guesstimates of either market demand price or size. Market price and size can be estimated independently, though coupling them together in a market demand relationship is a better approach. COGS, the total cost of the product quantity to be sold, is the final component, and trends according to an experience or learning curve cost relationship linked to market size. The interplay of these three market elements within a DM Real Options simulation, even with early-stage ranges, can reduce uncertainty for project planning by yielding reasonably narrowed target estimates for product pricing and production size that maximize potential operating profits and improve option value. Demand Curve Integration: A market price demand curve graphs the relationship of price to size, or quantity demanded. The law of demand states there is an inverse relationship between price and quantity demanded; simply, as the price decreases, the product quantity demanded will increase. A second curve, the manufacturing cost graph, models the learning curve effect, illustrating the relationship between the quantity of goods produced and the efficiency gains of that production. Fig. 9. Mathematically, the learning curve takes the form of a power function. A demand curve can be realistically modeled using an inverse lognormal distribution, which convolves the market price distribution estimate with the market size range. A demand curve deftly models highly differentiated markets which, through pricing, distinguish selective product or service characteristics such as quality or grade, functional features, and availability, along with quantity sold. Examples are automobiles, shoes, smart phones, and computers. Airfare markets are highly differentiated, with demand pricing and quantity sold dependent on seasonality, day of week, time of day, routing, sale promotions, and seating or fare class. The airfare demand distribution pattern is well represented with an inverse lognormal distribution, as shown in Fig. 10. Demand Curve Integration: Curves for all the above components, market price, size, and COGS, can be simulated with variability to yield an optimal operating profit input for the real option calculation (Fig. 11). For example, the simulation results represented in Fig. 12 indicate ranges for price and unit quantity that potentially will maximize profitability. Extracted from these first-order range estimates, a selection of the peak (frequency) values identifies a significantly narrowed spread of promising estimates. Knowing these optimal value spreads substantially reduces uncertainty and provides a better, more targeted set of parameters from which to confidently base innovation development plans and option value. Comparison to other methods: The fuzzy pay-off method for real option valuation, created in 2009, provides another accessible approach to real option valuation. Though each uses differing mathematical methods (Fuzzy: fuzzy logic; DM: numerical simulation and geometry), the underlying principle is strikingly similar: the likelihood of a positive payoff. Separately examining the two factors (possibility/probability, and positive payoff) demonstrates this similarity. The possibility function for the fuzzy pay-off is $\tfrac{A(Pos)}{A(Pos) + A(Neg)}$. A simple interpretation is the proportionality ratio of the positive area of the fuzzy NPV over the total area of the fuzzy NPV. The probability of the project payoff for the DM Range Option is proportional to the area (CDF) of the positive distribution relative to the complete distribution. This is computed as $\tfrac{(b - X_0)^2}{(b-a)(b-m)}$. Comparison to other methods: In each, the ratios of the areas compute to the same possibility/probability value. The positive payoff of the fuzzy pay-off is simply the mean of the positive area of the fuzzy NPV, or $E[A_+]$.
Likewise, the positive payoff for the DM Range Option is the mean of the right tail ($M_T$), or $\tfrac{2X_0 + b}{3}$, less the strike price $X_0$. This insight into the mechanics of the two methods illustrates not only their similarity but also their equivalency. Comparison to other methods: In a 2016 article in the Advances in Decision Sciences journal, researchers from the Lappeenranta University of Technology School of Business and Management compared the DM Method to the fuzzy pay-off method for real option valuation and noted that while the valuation results were similar, the fuzzy pay-off one was more robust in some conditions. In some comparative cases, the Datar-Mathews Method has a significant advantage in that it is easier to operate and connects NPV valuation and scenario analysis with the Monte Carlo simulation (or geometry) technique, thus greatly improving intuition in the usage of real options methods in managerial decisions and in explanations to third parties. Through its simulation interface, the Datar-Mathews Method easily accommodates multiple and sometimes correlated cash flow scenarios, including dynamic programming, typical of complex projects, such as aerospace, that are difficult to model using fuzzy sets. DM Method and Prospect Theory: Real options are about objectively valuing innovation opportunities. Vexingly, these opportunities, evanescent and seemingly risky, are often comprehended subjectively. However, both the objective valuation mechanism and the subjective interpretation of results are often misunderstood, leading to investment reluctance and potentially undervaluing opportunities. DM Method and Prospect Theory: The DM real option employs the objective valuation formula $E[\max(\ldots, 0)]$, where 0 is the default threshold at which it is economically rational to terminate (abandon) an opportunity event. If a simulation event (‘draw’) calculates a negative outcome (i.e., $\tilde{S}_T e^{-RT} \le \tilde{X}_T e^{-rT}$: operating profits less than launch costs), then that event outcome should be rationally cut, or terminated, recording a 0 residual. Only net positive economic outcomes ($\tilde{S}_T e^{-RT} > \tilde{X}_T e^{-rT}$) are tallied. This operation leaves the misperception of ‘the odds being stacked’, favoring only positive outcomes and seemingly resulting in an abnormally high valuation. However, the expectation operator E in the formula $E[\max(\ldots, 0)]$ mathematically calculates the correct option value by adjusting these positive outcomes according to their likelihood, i.e., probability of a success (POS). The actual DM formula is $E[\max(\ldots, \text{floor})]$, where the threshold (‘floor’) can assume any value (or alternative formula), including the default 0. Using a threshold other than 0 transforms the formula into a hurdle-weighted option variation. The result is no longer equivalent to the value of a financial option. DM Method and Prospect Theory: Much of the perceived high value of a real option valuation is disproportionately located in the far-right end of the tail of the simulation distribution, an area of low probability but high value outcomes. The option valuation reflects the potential opportunity value if the various outcome assumptions are validated. Targeted, incremental investments can validate these low probability assumptions. If not, the assumptions can be replaced with proven ‘plausible’ elements, and the value recalculated based on new learnings. DM Method and Prospect Theory: The subjective undervaluation of real options can partially be explained by the behavioral sciences.
An innovation investor may perceive the initial investments to be potentially at a loss, particularly if the POS is low. Kahneman and Tversky’s prospect theory holds that losses are perceived to have an impact more than twice that of gains for the same value. The result is that the loss-averse investor will subjectively undervalue the opportunity, and therefore the investment, despite the objective and financially accurate real option valuation. DM Method and Prospect Theory: Regret aversion, another behavioral science observation, occurs when an unfounded decision is made to avoid regretting a future outcome. For example, a regret-averse investor decides to invest in a relatively ‘sure bet’ but smaller payoff opportunity relative to an alternative with a significantly higher but presumably uncertain payoff. The regret aversion phenomenon is closely aligned with uncertainty aversion (certainty bias), where the unknown aspects of the innovation opportunity (i.e., newness, lack of control) are rationalized as a hurdle to further investments. The consequences of loss- and regret-averse decision-making are parsimonious investments and underfunding (‘undervaluing’) of promising early-stage innovation opportunities. DM Method and Prospect Theory: A savvy investor can overcome the perceived mis-valuation of an option price. Loss aversion registers significantly high when the entire option value is interpreted as investment risk. This emotional response fails to consider that the initial early-stage investments are only a fraction of the entire option value, necessarily targeted to validate the most salient assumptions. Similarly, regret aversion should not be misconceived as risk aversion, because the exposure of small early-stage investments is usually not material. Instead, these initial investments carefully probe the opportunity’s core value while providing a sense of control over an otherwise uncertain outcome. Regret is minimized by the realization that the opportunity development can be terminated if the assumption outcomes are not promising. The investment funds expended are prudently applied only to investigate a promising opportunity, and, in return, are enhanced by the acquired knowledge and the ability to make a better decision. DM Method and Prospect Theory: Since individuals are prone to cognitive biases, various intervention strategies are designed to reduce them, including expert review along with awareness of bias and naïve realism. A phenomenon termed “bias blind spot” succinctly describes an individual’s unconscious susceptibility to biases. This fundamental attribution error remains subconsciously hidden by an illusion of self-introspection, i.e., the false belief that we have access to our inner intentions or motivations. Biases may be post hoc rationalized away, but they nonetheless impact decision-making. To counteract biases, it is insufficient simply to be aware of their characteristics; it is necessary also to become educated about one’s own introspection illusion.
**Logical quality** Logical quality: In many philosophies of logic, statements are categorized into different logical qualities based on how they go about saying what they say. Doctrines of logical quality are an attempt to answer the question: "How many qualitatively different ways are there of saying something?" Aristotle answers, two: you can affirm something of something or deny something of something. Since Frege, the normal answer in the West has been only one, assertion, though what is said, the content of the claim, can vary. For Frege, asserting the negation of a claim serves roughly the same role as denying a claim does in Aristotle. Other Western logicians such as Kant and Hegel answer, ultimately, three: you can affirm, deny, or make merely limiting affirmations, which transcend both affirmation and denial. In Indian logic, four logical qualities have been the norm, and Nagarjuna is sometimes interpreted as arguing for five. Aristotle's two logical qualities: In Aristotle's term logic there are two logical qualities: affirmation (kataphasis) and denial (apophasis). The logical quality of a proposition is whether it is affirmative (the predicate is affirmed of the subject) or negative (the predicate is denied of the subject). Thus "every man is a mortal" is affirmative, since "mortal" is affirmed of "man". "No men are immortals" is negative, since "immortal" is denied of "man". Making do with a single logical quality: Logical quality has become much less central to logical theory in the twentieth century. It has become common to use only one logical quality, typically called logical assertion. Much of the work previously done by distinguishing affirmation from denial is typically now done through the theory of negation. Thus, to most contemporary logicians, making a denial is essentially reducible to affirming a negation. Denying that Socrates is ill is the same thing as affirming that it is not the case that Socrates is ill, which is basically affirming that Socrates is not ill. This trend may go back to Frege, although his notation for negation is ambiguous between asserting a negation and denying. Gentzen's notation definitely assimilates denial to assertion of negation, but might not quite have a single logical quality; see below. Third logical qualities: Logicians in the Western traditions have often expressed belief in some other logical quality besides affirmation and denial. Sextus Empiricus, in the 2nd or 3rd century CE, argued for the existence of "nonassertive" statements, which indicate suspension of judgment by refusing to affirm or deny anything. Pseudo-Dionysius the Areopagite, in the 6th century, argued for the existence of "non-privatives", which transcend both affirmation and denial. For example, it is not quite correct to affirm that God is, nor to deny that God moves; rather, one should say that God is beyond-motion, or super-motive, and this is intended not just as a special kind of affirmation or denial, but a third move besides affirmation and denial. For Kant, every judgment takes one of three possible logical qualities: Affirmative, Negative or Infinite. For Kant, if I say “The soul is mortal” I have made an affirmation about the soul; I have said something contentful about it. If I say “The soul is not mortal,” I have made a negative judgment and thus “warded off error”, but I have not said what the soul is instead. If, however, I say “The soul is non-mortal,” I have made an infinite judgment.
For the purposes of “General logic” it is sufficient to see infinite judgments as a sub-variety of affirmative judgments: I have said something of the soul, namely that it is not mortal. But from the standpoint of “Transcendental Logic” it is important to distinguish the infinite from the affirmative. Although I have taken something away from the possibilities of what the soul might be like, I have not thereby said what it is or clarified the concept of the soul; there are still an infinite number of possible ways the soul could be. The content of an infinite judgment is purely limitative of our knowledge rather than ampliative of it. Hegel follows Kant in insisting that, at least transcendentally, affirmation and negation are not enough but require a third logical quality sublating them both. The Indian Tradition: In Indian logic it has long been traditional to claim that there are four kinds of claims. You can affirm that X is so, you can deny that X is so, you can neither-affirm-nor-deny that X is so, or you can both-affirm-and-deny that X is so. Each claim can also take one of four truth-values: true, false, neither-true-nor-false, and both-true-and-false. However, the tradition is clear that the four kinds of statements are distinct from the four values of statements. Nagarjuna is sometimes interpreted as teaching that there is a fifth logical quality besides the four typical of Indian logic, but there are disputing interpretations. More than One Quality Today: Although the distinction between affirmation and denial is rarely supported today, you might try to argue that some other distinctions in the structure of assertion could be thought of as differences of logical quality. One might argue, for instance, that the distinction between sequents with empty and non-empty antecedents amounts to a distinction between logical consequences and logical assertions. Alternately, one might claim that both forms are really just logical assertions in the metalanguage, and are not statements at all in the object language, since the turnstile isn't in the object language. Similarly, you might argue that a modern language which includes both an assertion mechanism and a "retraction" mechanism (such as Diderik Batens' "Adaptive Logics") could be thought of as having two logical qualities, "assertion" and "retraction".
**WNT7A** WNT7A: Protein Wnt-7a is a protein that in humans is encoded by the WNT7A gene. Function: The WNT gene family consists of structurally related genes that encode secreted signaling proteins. These proteins have been implicated in oncogenesis and in several developmental processes, including regulation of cell fate and patterning during embryogenesis. This gene is a member of the WNT gene family. It encodes a protein showing 99% amino acid identity to the mouse Wnt7A protein. This gene not only guides the development of the anterior-posterior axis in the female reproductive tract but also plays a critical role in uterine smooth muscle patterning and the maintenance of adult uterine function. It is also responsive to changes in the levels of sex steroid hormones in the female reproductive tract. Decreased expression of this gene in human uterine leiomyoma is found to be inversely associated with the expression of estrogen receptor alpha.
**Real data type** Real data type: A real data type is a data type used in a computer program to represent an approximation of a real number. Because the real numbers are not countable, computers cannot represent them exactly using a finite amount of information. Most often, a computer will use a rational approximation to a real number. Rational numbers: The most general data type for a rational number stores the numerator and denominator as integers. Fixed-point numbers: A fixed-point data type uses the same denominator for all numbers. The denominator is usually a power of two. For example, in a fixed-point system that uses the denominator 65,536 (2¹⁶), the hexadecimal number 0x12345678 means 0x12345678/65536 or 305419896/65536 or 4660 + 22136/65536 or about 4660.33777. Floating-point numbers: A floating-point data type is a compromise between the flexibility of a general rational number data type and the speed of fixed-point arithmetic. It uses some of the bits in the data type to specify a power of two for the denominator. See IEEE Standard for Floating-Point Arithmetic. Decimal numbers: Similar to a fixed-point or floating-point data type, but with a denominator that is a power of 10 instead of a power of 2.
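A small sketch of the fixed-point convention described above, using the 2¹⁶ denominator from the example (the helper names are illustrative):

```python
# Fixed-point with a 2**16 denominator: store (value * 65536) as an integer.
SCALE = 1 << 16  # 65536

def to_fixed(x: float) -> int:
    """Encode a real value as a fixed-point integer (nearest representable)."""
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    """Decode a fixed-point integer back to a real value."""
    return n / SCALE

raw = 0x12345678                 # a stored fixed-point bit pattern
print(from_fixed(raw))           # 4660.3377685546875 (about 4660.33777)

n = to_fixed(3.25)
print(hex(n), from_fixed(n))     # 0x34000 3.25
```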
**Rocky Mountain Trophy Hunter** Rocky Mountain Trophy Hunter: Rocky Mountain Trophy Hunter is a series of hunting games developed or published by Sunstorm Interactive.
**BCODE** BCODE: A bCODE is an identifier that can be sent to a mobile phone/device and used as a ticket/voucher/identification or other type of token. The bCODE is an SMS message that can be read electronically from the screen of a mobile device. bCODEs can be sent by text message, and as they are just a standard SMS they can be received on over 99% of all devices. BCODE: bCODEs have many uses, such as advertising, loyalty programs, promotions, ticketing and more. History: bCODE was developed by an Australian company from 2003 to 2005. bCODE Technology: A bCODE is a simple SMS text message. The text message is read from the screen of a mobile phone/device and decoded into a unique token ID. This ID can then be used to supply the consumer with their own unique experience.
**Bulbus glandis** Bulbus glandis: The bulbus glandis (also called a bulb or knot) is an erectile tissue structure on the penis of canid mammals. During mating, immediately before ejaculation, the tissues swell up to lock (tie) the male's penis inside the female. The locking is completed by circular muscles just inside the female's vagina, which tighten; this is called "the knot", and it prevents the male from withdrawing. The circular muscles also contract intermittently, which has the effect of stimulating ejaculation of sperm, followed by prostatic fluid, as well as maintaining the swelling of the penis, and therefore the tie, for some time. For domestic dogs the tie may last up to half an hour or more, though usually less. When male canines are sexually excited, the bulbus glandis may swell up inside the penile sheath, even if the dog has been neutered. The bulbus glandis also occurs in the penises of some pinnipeds, including South American fur seals.
**Euclidean relation** Euclidean relation: In mathematics, Euclidean relations are a class of binary relations that formalize "Axiom 1" in Euclid's Elements: "Magnitudes which are equal to the same are equal to each other." Definition: A binary relation R on a set X is Euclidean (sometimes called right Euclidean) if it satisfies the following: for every a, b, c in X, if a is related to b and c, then b is related to c. To write this in predicate logic: ∀a,b,c∈X(aRb∧aRc→bRc). Dually, a relation R on X is left Euclidean if for every a, b, c in X, if b is related to a and c is related to a, then b is related to c: ∀a,b,c∈X(bRa∧cRa→bRc). Properties: Due to the commutativity of ∧ in the definition's antecedent, aRb ∧ aRc even implies bRc ∧ cRb when R is right Euclidean. Similarly, bRa ∧ cRa implies bRc ∧ cRb when R is left Euclidean. The property of being Euclidean is different from transitivity. For example, ≤ is transitive, but not right Euclidean, while xRy defined by 0 ≤ x ≤ y + 1 ≤ 2 is not transitive, but right Euclidean on natural numbers. For symmetric relations, transitivity, right Euclideanness, and left Euclideanness all coincide. However, a non-symmetric relation can also be both transitive and right Euclidean, for example, xRy defined by y=0. A relation that is both right Euclidean and reflexive is also symmetric and therefore an equivalence relation. Similarly, each left Euclidean and reflexive relation is an equivalence. Properties: The range of a right Euclidean relation is always a subset of its domain. The restriction of a right Euclidean relation to its range is always reflexive, and therefore an equivalence. Similarly, the domain of a left Euclidean relation is a subset of its range, and the restriction of a left Euclidean relation to its domain is an equivalence. Therefore, a right Euclidean relation on X that is also right total (respectively a left Euclidean relation on X that is also left total) is an equivalence, since its range (respectively its domain) is X. Properties: A relation R is both left and right Euclidean if, and only if, the domain and the range set of R agree and R is an equivalence relation on that set. A right Euclidean relation is always quasitransitive, as is a left Euclidean relation. A connected right Euclidean relation is always transitive; and so is a connected left Euclidean relation. Properties: If X has at least 3 elements, a connected right Euclidean relation R on X cannot be antisymmetric, and neither can a connected left Euclidean relation on X. On the 2-element set X = { 0, 1 }, e.g. the relation xRy defined by y=1 is connected, right Euclidean, and antisymmetric, and xRy defined by x=1 is connected, left Euclidean, and antisymmetric. Properties: A relation R on a set X is right Euclidean if, and only if, the restriction R′ := R|ran(R) is an equivalence and for each x in X\ran(R), all elements to which x is related under R are equivalent under R′. Similarly, R on X is left Euclidean if, and only if, R′ := R|dom(R) is an equivalence and for each x in X\dom(R), all elements that are related to x under R are equivalent under R′. Properties: A left Euclidean relation is left-unique if, and only if, it is antisymmetric. Similarly, a right Euclidean relation is right-unique if, and only if, it is antisymmetric. A left Euclidean and left-unique relation is vacuously transitive, and so is a right Euclidean and right-unique relation. A left Euclidean relation is left quasi-reflexive. For left-unique relations, the converse also holds.
Dually, each right Euclidean relation is right quasi-reflexive, and each right-unique and right quasi-reflexive relation is right Euclidean.
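The definitions above can be checked directly on finite relations; a minimal sketch, using the non-transitive right Euclidean example from the Properties section (restricted to a small initial segment of the natural numbers):

```python
from itertools import product

def is_right_euclidean(R, X):
    """R: a set of (x, y) pairs over domain X. aRb and aRc must imply bRc."""
    return all((b, c) in R
               for a, b, c in product(X, repeat=3)
               if (a, b) in R and (a, c) in R)

def is_left_euclidean(R, X):
    """bRa and cRa must imply bRc."""
    return all((b, c) in R
               for a, b, c in product(X, repeat=3)
               if (b, a) in R and (c, a) in R)

# xRy defined by 0 <= x <= y + 1 <= 2, on {0, 1, 2, 3, 4}:
X = range(5)
R = {(x, y) for x, y in product(X, repeat=2) if 0 <= x <= y + 1 <= 2}

print(is_right_euclidean(R, X))   # True
print(is_left_euclidean(R, X))    # False
# Not transitive: 2 R 1 and 1 R 0 hold, but 2 R 0 does not.
print((2, 1) in R, (1, 0) in R, (2, 0) in R)   # True True False
```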
**Heroes of Battle** Heroes of Battle: Heroes of Battle is a hardcover supplement to the 3.5 edition of the Dungeons & Dragons role-playing game. Contents: Heroes of Battle is intended for use by Dungeon Masters who want to incorporate large-scale, epic battles into their game. It contains ideas for wartime adventures, new rules for wartime games, and military-oriented feats, prestige classes and non-player characters. Publication history: Heroes of Battle was written by David Noonan, Will McDermott and Stephen Schubert, and published May 2005 by Wizards of the Coast. Cover art is by David Hudnut, with interior art by Wayne England, Doug Kovacs, Chuck Lukacs, Roberto Marchesi, Mark Nelson, Eric Polak, Wayne Reynolds, and Franz Vohwinkel. David Noonan was the in-house designer for the project, so he outlined the vision for the book before he, Will McDermott, and Steve Schubert started writing. Andy Collins joined the process as the design phase was winding down, and led the project development phase during the six weeks after its completion. Reception: The reviewer from Pyramid commented that "Heroes of Battle not only provides both the mechanics and the RPG feel the story demands, but weaves them together into a beautiful tapestry."
**Strategic Explorations of Exoplanets and Disks with Subaru** Strategic Explorations of Exoplanets and Disks with Subaru: Strategic Explorations of Exoplanets and Disks with Subaru (SEEDS) is a multi-year survey that used the Subaru Telescope on Mauna Kea, Hawaii in an effort to directly image extrasolar planets and protoplanetary/debris disks around hundreds of nearby stars. SEEDS is a Japanese-led international project. It consists of some 120 researchers from a number of institutions in Japan, the U.S. and the EU. The survey's headquarters is at the National Astronomical Observatory of Japan (NAOJ), and it is led by Principal Investigator Motohide Tamura. The goals of the survey are to address the following key issues in the study of extrasolar planets and disks: the detection and census of exoplanets in the regions around solar-mass and massive stars; the evolution of protoplanetary disks and debris disks; and the link between exoplanets and circumstellar disks. Observations and Results: The direct imaging survey was carried out with a suite of high-contrast instrumentation at the large Subaru 8.2 m telescope, including a second-generation adaptive optics (AO) system with 188 actuators (AO188) and a dedicated coronagraph instrument called HiCIAO. Observations began in late October 2009 and were completed in early January 2015, having observed roughly 500 nearby stars (including duplicates). The survey was conducted in the H-band (1.65 micron) and, once a planet/companion candidate was detected, it was also observed at other near-infrared wavelengths. SEEDS has reported four candidate planets to date. The first one is GJ 758 b, with a mass around 10–30 Jupiter masses, orbiting around a Sun-like star. The projected distance from the central star to the companion is 29 AU at a distance of around 52 light years. The second discovery was of a very faint planet orbiting a Sun-like star named GJ 504. The projected distance from the central star is 44 AU at a distance of 59 light years. The central star itself is bright, visible to the naked eye (V ∼ 5 mag), but the planet is very dim, 17–20 mag at infrared wavelengths. The planet mass is estimated to be only 3–4.5 Jupiter masses, estimated from its luminosity and age. It is one of the lightest-mass planets ever imaged. The survey also discovered a likely superjovian-mass planet named Kappa Andromedae b, orbiting a young B-type star 2.8 times the mass of the Sun. HD 100546 b was confirmed as a planet with a disk system around a very young star as part of the SEEDS survey. SEEDS has also reported the detection of three brown dwarfs in the Pleiades cluster as part of the Open Cluster category survey, and several stellar or substellar companions around planetary systems detected by radial velocity. SEEDS has detected interesting fine-structures in disks around dozens of young stars. These disks exhibit gaps, spiral arms, rings, and other structures at similar radial distances where the outer planets are imaged. These structures can be considered to be “signposts” of planets.
The results obtained on disks support the need for a new planet formation model. Additional planet and disk discoveries include:
- New high resolution imaging of the AB Aurigae system
- Detection of extended outer regions of the debris ring around HR 4796 A
- New reflected light imaging of the transitional disk gap around LkCa 15
- Explorations for outer massive bodies around the transiting planet system HAT-P-7
- Imaging discovery of the debris disk around HIP 79977
- First infrared images of the inner gap in the 2MASS J16042165-2130284 transitional disk
Observations and Results:
- Direct imaging discovery of a large inner gap in the protoplanetary disk around PDS 70
- Discovery of spiral structures in the transitional disk around SAO 206462
- Discovery of a stellar companion to the extrasolar planet system HAT-P-7
- Near-IR scattered light detection of the spiral-armed transitional disk of the star MWC 758
- Direct imaging of the UX Tau A pre-transitional disk revealing gap structures
- Scattered light imaging of the MWC 80 protoplanetary disk at a historic minimum of the near-IR excess
- High-contrast imaging discovery of architecture in the LkCa 15 transitional disk
- Submillimeter and near-infrared observation of the transitional disk around Sz 91
- First high resolution infrared images of the circumstellar disk around SU Aur revealing tidal-like tails
**Maxwell (microarchitecture)** Maxwell (microarchitecture): Maxwell is the codename for a GPU microarchitecture developed by Nvidia as the successor to the Kepler microarchitecture. The Maxwell architecture was introduced in later models of the GeForce 700 series and is also used in the GeForce 800M series, GeForce 900 series, and Quadro Mxxx series, as well as some Jetson products, all manufactured with TSMC's 28 nm process. The first Maxwell-based products were the GeForce GTX 745 (OEM), GeForce GTX 750, and the GeForce GTX 750 Ti. The GTX 750 and GTX 750 Ti were released on February 18, 2014; all three products use the chip code number GM107. Earlier GeForce 700 series GPUs had used Kepler chips with the code numbers GK1xx. First-generation Maxwell GPUs (code numbers GM10x) are also used in the GeForce 800M series and the Quadro Kxxx series. A second generation of Maxwell-based products was introduced on September 18, 2014 with the GeForce GTX 970 and GeForce GTX 980, followed by the GeForce GTX 960 on January 22, 2015, the GeForce GTX Titan X on March 17, 2015, and the GeForce GTX 980 Ti on June 1, 2015. The final and lowest-spec Maxwell 2.0 card was the GTX 950, released on August 20, 2015. Maxwell (microarchitecture): These GPUs have GM20x chip code numbers. Maxwell introduced an improved Streaming Multiprocessor (SM) design that increased power efficiency, the sixth and seventh generation PureVideo HD, and CUDA Compute Capability 5.2. The architecture is named after James Clerk Maxwell, the founder of the theory of electromagnetic radiation. The Maxwell architecture is used in the system on a chip (SoC) mobile application processor Tegra X1. First generation Maxwell (GM10x): First generation Maxwell GPUs (GM107/GM108) were released as GeForce GTX 745, GTX 750/750 Ti, GTX 850M/860M (GM107) and GeForce 830M/840M (GM108). These new chips introduced few consumer-facing additional features, as Nvidia instead focused more on increasing GPU power efficiency. The L2 cache was increased from 256 KiB on Kepler to 2 MiB on Maxwell, reducing the need for more memory bandwidth. Accordingly, the memory bus was reduced from 192 bit on Kepler (GK106) to 128 bit, reducing die area, cost, and power draw. The "SMX" streaming multiprocessor design from Kepler was also retooled and partitioned, being renamed "SMM" for Maxwell. The structure of the warp scheduler was inherited from Kepler, with the texture units and FP64 CUDA cores still shared, but the layout of most execution units was partitioned so that each warp scheduler in an SMM controls one set of 32 FP32 CUDA cores, one set of 8 load/store units and one set of 8 special function units. This is in contrast to Kepler, where each SMX had 4 schedulers that scheduled to a shared pool of execution units. The latter necessitated an SMX-wide crossbar that used unnecessary power to allow all execution units to be shared. Conversely, Maxwell's more modular design allows for a finer-grained and more efficient allocation of resources, saving power when the workload isn't optimal for shared resources. Nvidia claims a 128 CUDA core SMM has 90% of the performance of a 192 CUDA core SMX, while efficiency increases by a factor of 2. Also, each Graphics Processing Cluster, or GPC, contains up to 4 SMX units in Kepler, and up to 5 SMM units in first generation Maxwell. GM107 also supports CUDA Compute Capability 5.0, compared to 3.5 on GK110/GK208 GPUs and 3.0 on GK10x GPUs. Dynamic Parallelism and HyperQ, two features in GK110/GK208 GPUs, are also supported across the entire Maxwell product line.
Maxwell also provides native shared memory atomic operations for 32-bit integers and native shared memory 32-bit and 64-bit compare-and-swap (CAS), which can be used to implement other atomic functions. First generation Maxwell (GM10x): Nvidia's video encoder, NVENC, was upgraded to be 1.5 to 2 times faster than on Kepler-based GPUs, meaning it can encode video at six to eight times playback speed. Nvidia also claims an eight to ten times performance increase in PureVideo Feature Set E video decoding due to the video decoder cache, paired with increases in memory efficiency. However, H.265 is not supported for full hardware decoding in first generation Maxwell GPUs, which rely on a mix of hardware and software decoding. When decoding video, a new low power state "GC5" is used on Maxwell GPUs to conserve power. Maxwell GPUs were thought to use tile-based rendering, but they actually use tiled caching. Second generation Maxwell (GM20x): Second generation Maxwell GPUs introduced several new technologies: Dynamic Super Resolution, Third Generation Delta Color Compression, Multi-Pixel Programming Sampling, Nvidia VXGI (Real-Time-Voxel-Global Illumination), VR Direct, Multi-Projection Acceleration, and Multi-Frame Sampled Anti-Aliasing (MFAA) (however, support for Coverage-Sampling Anti-Aliasing (CSAA) was removed), plus the Direct3D 12 API at Feature Level 12_1. HDMI 2.0 support was also added. The ROP to memory controller ratio was changed from 8:1 to 16:1. However, some of the ROPs are generally idle in the GTX 970 because there are not enough enabled SMMs to give them work to do, reducing its maximum fill rate. The Polymorph Engine responsible for tessellation was upgraded to version 3.0 in second generation Maxwell GPUs, resulting in improved tessellation performance per unit/clock. Second generation Maxwell (GM20x): Second generation Maxwell also has up to 4 SMM units per GPC, compared to up to 5 SMM units per GPC in first generation Maxwell. GM204 supports CUDA Compute Capability 5.2 (compared to 5.0 on GM107/GM108 GPUs, 3.5 on GK110/GK208 GPUs and 3.0 on GK10x GPUs). GM20x GPUs have an upgraded NVENC which supports HEVC encoding and adds support for H.264 encoding resolutions at 1440p/60FPS & 4K/60FPS (compared to NVENC on first generation Maxwell GM10x GPUs, which only supported H.264 1080p/60FPS encoding). After consumer complaints, Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers. This comes at the cost of dividing the memory bus into high speed and low speed segments that cannot be accessed at the same time for reads, because the L2/ROP unit managing both of the GDDR5 controllers shares the read return channel and the write data bus between the GDDR5 controllers. This makes simultaneous reading from both GDDR5 controllers, or simultaneous writing to both GDDR5 controllers, impossible. This is used in the GeForce GTX 970, which therefore can be described as having 3.5 GB in a high-speed segment on a 224-bit bus and 512 MB in a low-speed segment on a 32-bit bus. The peak speed of such a GPU can still be attained, but the peak speed figure is only reachable if one segment is executing a read operation while the other segment is executing a write operation. Performance: The theoretical single-precision processing power of a Maxwell GPU in FLOPS is computed as 2 (operations per FMA instruction per CUDA core per cycle) × number of CUDA cores × core clock speed (in Hz).
The theoretical double-precision processing power of a Maxwell GPU is 1/32 of the single-precision performance (which has been noted as being very low compared to the previous generation, Kepler); a worked example of this arithmetic appears below. Successor: The successor to Maxwell is codenamed Pascal. The Pascal architecture features higher-bandwidth unified memory and NVLink.
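A worked version of the Performance arithmetic above; the core count and clock used here are illustrative figures for a GeForce GTX 980 (2048 CUDA cores, 1126 MHz base clock):

```python
def maxwell_sp_gflops(cuda_cores: int, clock_mhz: float) -> float:
    # 2 operations per FMA instruction per CUDA core per cycle
    return 2 * cuda_cores * clock_mhz * 1e6 / 1e9

cores, clock = 2048, 1126.0       # e.g. GeForce GTX 980 at base clock
sp = maxwell_sp_gflops(cores, clock)
dp = sp / 32                      # double precision is 1/32 of single on Maxwell
print(f"{sp:.0f} GFLOPS single precision, {dp:.0f} GFLOPS double precision")
# -> roughly 4612 GFLOPS single, 144 GFLOPS double
```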
**Sclerosing lymphangitis** Sclerosing lymphangitis: Sclerosing lymphangitis, also known as lymphangiosclerosis or sclerotic lymphangitis, is a skin condition characterized by a cordlike structure encircling the coronal sulcus of the penis, or running the length of the shaft, that has been attributed to trauma during vigorous sexual play. Nonvenereal sclerosing lymphangitis is a rare penile lesion consisting of a minimally tender, indurated cord involving the coronal sulcus and occasionally adjacent distal penile skin. The condition involves the hardening of a lymph vessel connected to a vein in the penis. It can look like a thick cord and can feel like a hardened, almost calcified or fibrous vein; however, it tends not to share the common blue tint of a vein. It can be felt as a hardened lump or "vein" even when the penis is flaccid, and is even more prominent during an erection. This disorder is fairly common, most often occurs after vigorous sexual activity, and resolves spontaneously. Cause: The etiology of sclerosing lymphangitis is unknown but has been postulated to be secondary to thrombosis of lymphatic vessels. Management: In most cases it tends to go away if given rest and more gentle care, for example by use of lubricants. Even without rest or gentle care, in some cases it will simply disappear after a few weeks on its own. Spontaneous recovery can occur anywhere from a couple of weeks to several months. Although it is commonly recommended that the patient abstain from sexual activity during recovery, there is no evidence that this expedites resolution or that engaging in sexual activity worsens the condition.
**Grey box model** Grey box model: In mathematics, statistics, and computational modelling, a grey box model combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature. Thus, almost all models are grey box models, as opposed to black box models, where no model form is assumed, or white box models, which are purely theoretical. Some models assume a special form, such as a linear regression or a neural network; these have special analysis methods. In particular, linear regression techniques are much more efficient than most non-linear techniques. The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use. Model form: The general case is a non-linear model with a partial theoretical structure and some unknown parts derived from data. Models with differing theoretical structures need to be evaluated individually, possibly using simulated annealing or genetic algorithms. Model form: Within a particular model structure, parameters or variable parameter relations may need to be found. For a particular structure it is arbitrarily assumed that the data consists of sets of feed vectors f, product vectors p, and operating condition vectors c. Typically c will contain values extracted from f, as well as other values. In many cases a model can be converted to a function of the form m(f,p,q), where the vector function m gives the errors between the data p and the model predictions. The vector q gives some variable parameters that are the model's unknown parts. Model form: The parameters q vary with the operating conditions c in a manner to be determined. This relation can be specified as q = Ac, where A is a matrix of unknown coefficients and c, as in linear regression, includes a constant term and possibly transformed values of the original operating conditions to obtain non-linear relations between the original operating conditions and q. It is then a matter of selecting which terms in A are non-zero and assigning their values. The model completion becomes an optimization problem to determine the non-zero values in A that minimize the error terms m(f,p,Ac) over the data. Model completion: Once a selection of non-zero values is made, the remaining coefficients in A can be determined by minimizing m(f,p,Ac) over the data with respect to the non-zero values in A, typically by non-linear least squares. Selection of the non-zero terms can be done by optimization methods such as simulated annealing and evolutionary algorithms. Also, the non-linear least squares can provide accuracy estimates for the elements of A that can be used to determine if they are significantly different from zero, thus providing a method of term selection. It is sometimes possible to calculate values of q for each data set, directly or by non-linear least squares. Then the more efficient linear regression can be used to predict q using c, thus selecting the non-zero values in A and estimating their values. Once the non-zero values are located, non-linear least squares can be used on the original model m(f,p,Ac) to refine these values. A third method is model inversion, which converts the non-linear m(f,p,Ac) into an approximate linear form in the elements of A that can be examined using efficient term selection and evaluation of the linear regression. Consider the simple case of a single q value (q = aᵀc) and an estimate q* of q.
Putting dq = aᵀc − q* gives m(f,p,aᵀc) = m(f,p,q* + dq) ≈ m(f,p,q*) + dq·m′(f,p,q*) = m(f,p,q*) + (aᵀc − q*)·m′(f,p,q*), so that aᵀ now appears linearly with all other terms known, and can thus be analyzed by linear regression techniques. For more than one parameter the method extends in a direct manner. After checking that the model has been improved, this process can be repeated until convergence. This approach has the advantages that it does not need the parameters q to be determinable from an individual data set and that the linear regression is on the original error terms. Model validation: Where sufficient data is available, division of the data into a separate model construction set and one or two evaluation sets is recommended. This can be repeated using multiple selections of the construction set and the resulting models averaged or used to evaluate prediction differences. Model validation: A statistical test such as chi-squared on the residuals is not particularly useful. The chi-squared test requires known standard deviations, which are seldom available, and failed tests give no indication of how to improve the model. There are a range of methods to compare both nested and non-nested models. These include comparison of model predictions with repeated data. Model validation: An attempt to predict the residuals m(f,p,Ac) from the operating conditions c using linear regression will show if the residuals can be predicted. Residuals that cannot be predicted offer little prospect of improving the model using the current operating conditions. Terms that do predict the residuals are prospective terms to incorporate into the model to improve its performance. The model inversion technique above can be used as a method of determining whether a model can be improved. In this case, selection of non-zero terms is not so important and linear prediction can be done using the significant eigenvectors of the regression matrix. The values in A determined in this manner need to be substituted into the non-linear model to assess improvements in the model errors. The absence of a significant improvement indicates the available data is not able to improve the current model form using the defined parameters. Extra parameters can be inserted into the model to make this test more comprehensive.
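To make the completion step concrete, here is a minimal Python sketch under assumed toy definitions (the first-order gain structure p = q·f, the data sizes, and all names are invented for illustration; the article does not prescribe a particular model):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy grey-box setup: theoretical structure p_pred = q * f (a gain model),
# with the unknown parameter q varying with operating conditions as q = A c.
rng = np.random.default_rng(0)
n = 50
f = rng.uniform(1.0, 2.0, n)                      # feed values
c = np.column_stack([np.ones(n),                  # constant term, as in the text
                     rng.uniform(0.0, 1.0, n)])   # one operating condition
A_true = np.array([0.8, 0.5])
p = (c @ A_true) * f + 0.01 * rng.normal(size=n)  # noisy "plant" data

def residuals(A):
    """Error terms m(f, p, Ac): data minus model predictions."""
    q = c @ A                                     # q = Ac, linear in conditions
    return p - q * f

fit = least_squares(residuals, x0=np.zeros(2))    # non-linear least squares
print("estimated A:", fit.x)                      # close to A_true
```

Here the theoretical structure is fixed while the data determine how q depends on c, which is the defining trait of a grey box model.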
**PDE4C** PDE4C: cAMP-specific 3',5'-cyclic phosphodiesterase 4C is an enzyme that in humans is encoded by the PDE4C gene. Tissue localisation: PDE4C is predominantly found in peripheral tissues.
**Alan Mycroft** Alan Mycroft: Alan Mycroft is a professor at the Computer Laboratory, University of Cambridge and a Fellow of Robinson College, Cambridge, where he is also director of studies for computer science. Education: Mycroft read mathematics at Cambridge, then moved to Edinburgh, where he completed his Doctor of Philosophy degree with a thesis on abstract interpretation and optimising transformations for applicative programs, supervised by Rod Burstall and Robin Milner. Research: Mycroft's research interests are in programming languages, software engineering and algorithms. With Arthur Norman, he co-created the Norcroft C compiler. He is also a named trustee of the Raspberry Pi Foundation, a charitable organisation whose single-board computer is intended to stimulate the teaching of basic computer science in schools.
**Hua's lemma** Hua's lemma: In mathematics, Hua's lemma, named for Hua Loo-keng, is an estimate for exponential sums. It states that if P is an integral-valued polynomial of degree k, ε is a positive real number, and f is the exponential sum defined by $f(\alpha) = \sum_{x=1}^{N} e^{2\pi i P(x)\alpha}$, then $$\int_0^1 |f(\alpha)|^{\lambda}\, d\alpha \ll_{P,\varepsilon} N^{\mu(\lambda)},$$ where $(\lambda, \mu(\lambda))$ lies on a polygonal line with vertices $(2^{\nu}, 2^{\nu} - \nu + \varepsilon)$, $\nu = 1, \ldots, k$.
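As a worked special case (a standard consequence, stated here for illustration): taking P(x) = x², so k = 2, the vertex with ν = 2 is (2², 2² − 2 + ε) = (4, 2 + ε), which recovers Hua's classical fourth-moment bound for the quadratic exponential sum:

```latex
% Special case P(x) = x^2 (k = 2): the vertex at \nu = 2 gives
% (\lambda, \mu(\lambda)) = (4, 2 + \varepsilon), i.e.
\int_0^1 \Bigl| \sum_{x=1}^{N} e^{2\pi i \alpha x^2} \Bigr|^{4} \, d\alpha
  \ll_{\varepsilon} N^{2+\varepsilon}.
```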
**Multiomics** Multiomics: Multiomics, multi-omics, integrative omics, "panomics" or "pan-omics" is a biological analysis approach in which the data sets are multiple "omes", such as the genome, proteome, transcriptome, epigenome, metabolome, and microbiome (i.e., a meta-genome and/or meta-transcriptome, depending upon how it is sequenced); in other words, the use of multiple omics technologies to study life in a concerted way. By combining these "omes", scientists can analyze complex biological big data to find novel associations between biological entities, pinpoint relevant biomarkers and build elaborate markers of disease and physiology. In doing so, multiomics integrates diverse omics data to find a coherently matching geno-pheno-envirotype relationship or association. The OmicTools service lists more than 99 software tools related to multiomic data analysis, as well as more than 99 databases on the topic. Multiomics: Systems biology approaches are often based upon the use of panomic analysis data. The American Society of Clinical Oncology (ASCO) defines panomics as referring to "the interaction of all biological functions within a cell and with other body functions, combining data collected by targeted tests ... and global assays (such as genome sequencing) with other patient-specific information." Single-cell multiomics: A branch of the field of multiomics is the analysis of multilevel single-cell data, called single-cell multiomics. This approach offers unprecedented resolution for observing multilevel transitions in health and disease at the single-cell level. An advantage over bulk analysis is the mitigation of confounding factors derived from cell-to-cell variation, allowing the uncovering of heterogeneous tissue architectures. Methods for parallel single-cell genomic and transcriptomic analysis can be based on simultaneous amplification or physical separation of RNA and genomic DNA. They allow insights that cannot be gathered solely from transcriptomic analysis, as RNA data do not contain non-coding genomic regions or information regarding copy-number variation, for example. An extension of this methodology is the integration of single-cell transcriptomes with single-cell methylomes, combining single-cell bisulfite sequencing with single-cell RNA-Seq. Other techniques to query the epigenome, such as single-cell ATAC-Seq and single-cell Hi-C, also exist. Single-cell multiomics: A different, but related, challenge is the integration of proteomic and transcriptomic data. One approach to performing such measurement is to physically separate single-cell lysates in two, processing half for RNA and half for proteins. The protein content of lysates can be measured by proximity extension assays (PEA), for example, which use DNA-barcoded antibodies. A different approach uses a combination of heavy-metal RNA probes and protein antibodies to adapt mass cytometry for multiomic analysis. Multiomics and machine learning: In parallel with the advances in high-throughput biology, machine learning applications to biomedical data analysis are flourishing. The integration of multi-omics data analysis and machine learning has led to the discovery of new biomarkers. For example, one of the methods of the mixOmics project implements a method based on sparse Partial Least Squares regression for the selection of features (putative biomarkers).
A unified and flexible statistical framework for heterogeneous data integration called Regularized Generalized Canonical Correlation Analysis (RGCCA) enables the identification of such putative biomarkers. This framework is implemented and made freely available within the RGCCA R package. Multiomics in health and disease: Multiomics currently holds promise to fill gaps in the understanding of human health and disease, and many researchers are working on ways to generate and analyze disease-related data. The applications range from understanding host-pathogen interactions, infectious diseases, and cancer, to better understanding chronic and complex non-communicable diseases and improving personalized medicine. Multiomics in health and disease: Integrated Human Microbiome Project The second phase of the $170 million Human Microbiome Project focused on integrating patient data with different omic datasets, considering host genetics, clinical information and microbiome composition. Phase one focused on the characterization of communities in different body sites. Phase two focused on the integration of multiomic data from host and microbiome in human diseases. Specifically, the project used multiomics to improve the understanding of the interplay of gut and nasal microbiomes with type 2 diabetes, gut microbiomes and inflammatory bowel disease, and vaginal microbiomes and pre-term birth. Multiomics in health and disease: Systems Immunology The complexity of interactions in the human immune system has prompted the generation of a wealth of immunology-related multi-scale omic data. Multi-omic data analysis has been employed to gather novel insights about the immune response to infectious diseases, such as pediatric chikungunya, as well as non-communicable autoimmune diseases. Integrative omics has also been employed extensively to understand the effectiveness and side effects of vaccines, a field called systems vaccinology. For example, multiomics was essential to uncover the association of changes in plasma metabolites and the immune system transcriptome with the response to vaccination against herpes zoster. List of software for multi-omic analysis: The Bioconductor project curates a variety of R packages aimed at integrating omic data: omicade4, for multiple co-inertia analysis of multi-omic datasets; MultiAssayExperiment, offering a Bioconductor interface for overlapping samples; IMAS, a package focused on using multi-omic data for evaluating alternative splicing; bioCancer, a package for visualization of multiomic cancer data; mixOmics, a suite of multivariate methods for data integration; and MultiDataSet, a package for encapsulating multiple data sets. The RGCCA package implements a versatile framework for data integration; it is freely available on the Comprehensive R Archive Network (CRAN). List of software for multi-omic analysis: The OmicTools database further highlights R packages and other tools for multi-omic data analysis: PaintOmics, a web resource for visualization of multi-omics datasets; SIGMA, a Java program focused on integrated analysis of cancer datasets; iOmicsPASS, a tool in C++ for multiomic-based phenotype prediction; Grimon, an R graphical interface for visualization of multiomic data; and Omics Pipe, a framework in Python for reproducibly automating multiomic data analysis. Multiomic Databases: A major limitation of classical omic studies is the isolation of only one level of biological complexity.
For example, transcriptomic studies may provide information at the transcript level, but many different entities contribute to the biological state of the sample (genomic variants, post-translational modifications, metabolic products, and interacting organisms, among others). With the advent of high-throughput biology, it is becoming increasingly affordable to make multiple measurements, allowing transdomain correlations and inferences (e.g. between RNA and protein levels). These correlations aid the construction of more complete biological networks, filling gaps in our knowledge. Multiomic Databases: Integration of data, however, is not an easy task. To facilitate the process, groups have curated databases and pipelines to systematically explore multiomic data: Multi-Omics Profiling Expression Database (MOPED), integrating diverse animal models; The Pancreatic Expression Database, integrating data related to pancreatic tissue; LinkedOmics, connecting data from TCGA cancer datasets; OASIS, a web-based resource for general cancer studies; BCIP, a platform for breast cancer studies; C/VDdb, connecting data from several cardiovascular disease studies; ZikaVR, a multiomic resource for Zika virus data; Ecomics, a normalized multi-omic database for Escherichia coli data; GourdBase, integrating data from studies with gourd; MODEM, a database for multilevel maize data; SoyKB, a database for multilevel soybean data; and ProteomicsDB, a multi-omics and multi-organism resource for life science research.
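As a minimal sketch of the correlation-based integration idea behind methods such as (R)GCCA, the following Python example applies scikit-learn's plain two-block CCA to synthetic data; the data, shapes, and the use of sklearn rather than the RGCCA R package are all assumptions made for illustration:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Synthetic stand-ins for two omic blocks measured on the same 100 samples:
# a "transcriptome" X and a "proteome" Y sharing one latent factor.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
X = latent @ rng.normal(size=(1, 20)) + 0.5 * rng.normal(size=(100, 20))
Y = latent @ rng.normal(size=(1, 10)) + 0.5 * rng.normal(size=(100, 10))

# Two-block canonical correlation analysis: find paired projections of X and Y
# with maximal correlation; large weights flag candidate (putative) biomarkers.
cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}")  # near 1 here by construction
```

Regularized variants such as RGCCA extend this idea to more than two blocks and to high-dimensional settings where features outnumber samples.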
**P system** P system: For the computer p-System, see UCSD p-System. A P system is a computational model in the field of computer science that performs calculations using a biologically inspired process. P systems are based upon the structure of biological cells, abstracting from the way in which chemicals interact and cross cell membranes. The concept was first introduced in a 1998 report by the computer scientist Gheorghe Păun, whose last name is the origin of the letter P in 'P systems'. Variations on the P system model led to the formation of a branch of research known as 'membrane computing.' Although inspired by biology, the primary research interest in P systems is concerned with their use as a computational model, rather than for biological modeling, although this is also being investigated. Informal description: A P system is defined as a series of membranes containing chemicals (in finite quantities), catalysts and rules which determine possible ways in which chemicals may react with one another to form products. Rules may also cause chemicals to pass through membranes or even cause membranes to dissolve. Informal description: Just as in a biological cell, where a chemical reaction may only take place upon the chance event that the required chemical molecules collide and interact (possibly also with a catalyst), the rules in a P system are applied at random. This causes the computation to proceed in a non-deterministic manner, often resulting in multiple solutions being encountered if the computation is repeated. Informal description: A P system continues until it reaches a state where no further reactions are possible. At this point the result of the computation is all those chemicals that have been passed outside of the outermost membrane, or otherwise those passed into a designated 'result' membrane. Components of a P system: Although many varieties of P system exist, most share the same basic components. Each element has a specific role to play, and each has a founding in the biological cell architecture upon which P systems are based. Components of a P system: The environment The environment is the surroundings of the P system. In the initial state of a P system it contains only the container membrane, and while the environment can never hold rules, it may have objects passed into it during the computation. The objects found within the environment at the end of the computation constitute all or part of its "result." Membranes Membranes are the main "structures" within a P system. A membrane is a discrete unit which can contain a set of objects (symbols/catalysts), a set of rules, and a set of other membranes contained within. The outermost membrane, held within the environment, is often referred to as the 'container membrane' or 'skin membrane'. As implied by their name, membranes are permeable, and symbols resulting from a rule may cross them. A membrane (but not the container membrane) may also "dissolve", in which case its content, except for rules (which are lost), migrates into the membrane in which it was contained. Some P system variants allow a membrane to divide, possess a charge, or have varying permeability by changing membrane thickness. Components of a P system: Symbols Symbols represent chemicals that may react with other chemicals to form some product. In a P system, each type of symbol is typically represented by a different letter. The symbol content of a membrane is therefore represented by a string of letters.
Because the multiplicity of symbols in a region matters, multisets are commonly used to represent the symbol content of a region. Components of a P system: Special-case symbols exist; for example, a lower-case delta (δ) is often used to initiate the dissolving of a membrane, and this will only ever be found in the output of a rule: upon being encountered it invokes a reaction, and is used up in the process. Catalysts Catalysts are similar to their namesakes in chemistry. They are represented and used in the same way as symbols, but are never consumed during a "reaction"; they are simply a requirement for it to occur. Components of a P system: Rules Rules represent a possible chemical reaction within a membrane, causing it to evolve to a new state. A rule has a required set of input objects (symbols or catalysts) that must be present in order for it to be applied. If the required objects are present, it consumes them and produces a set of output objects. A rule may also be specified to have a priority over other rules, in which case less dominant rules will only be applied when it is not possible to apply a more dominant rule (i.e. the required inputs are not present). Components of a P system: There are three (in the basic P system model) distinct ways in which a rule may handle its output objects. Usually, the output objects are passed into the current membrane (the same membrane in which the rule and the inputs reside), known as a here rule. However, there are two modifiers that can be specified on output objects when rules are defined, in and out. The in modifier causes the object to be passed to one of the current membrane's children (travelling inwards relative to the structure of the P system), chosen at random during the computation. The out modifier causes the object to be passed out of the current membrane and into either its parent membrane or a sibling membrane, as specified when the P system is defined. Computation process: A computation works from an initial starting state towards an end state through a number of discrete steps. Each step involves iterating through all membranes in the P system and applying rules, which occurs in both a maximally parallel and non-deterministic manner. Working through step by step, a computation halts when no further evolution can take place (i.e. when no rules can be applied). At this point whatever objects have been passed to the environment, or into a designated 'result' membrane, are counted as the result of the computation. Computation process: Rule application At each step of a computation an object may only be used once, as objects are consumed by rules when applied. The method of applying a rule within a membrane is as follows: assign symbols from the membrane's content to the rule's inputs; if all inputs are satisfied, remove all assigned symbols from the membrane; create output symbols and hold them until all rule assignment, for all membranes, has taken place. Computation process: Add output symbols to targeted membranes. Dissolve membranes as necessary. Outputs are not passed immediately into membranes because this would contravene the maximally parallel nature of rule application; instead they are distributed after all possible rules have been applied. Non-deterministic application The order of rule application is chosen at random. Rule application order can have a significant effect on which rules may be applied at any given time, and on the outcome of a step of execution.
Computation process: Consider a membrane containing only a single "a" symbol, and the two rules a → ab and a → aδ. As both rules rely on an "a" symbol being present, of which there is only one, the first step of computation will allow either the first or second rule to be applied, but not both. The two possible results of this step are very different: The membrane carries over to the next step of the computation with both an "a" symbol and a "b" symbol present, and again one of the two rules is randomly assigned to the "a" symbol. Computation process: The membrane dissolves and a single "a" symbol is passed out to the containing membrane. Computation process: Maximally parallel application This is a property of rule application whereby all possible rule assignments must take place during every step of the computation. In essence this means that the rule a → aa has the effect of doubling the number of "a" symbols in its containing membrane each step, because the rule is applied to every occurrence of an "a" symbol present. As a computational model: Most P system variants are computationally universal. This extends even to include variants that do not use rule priorities, usually a fundamental aspect of P systems. As a model for computation, P systems offer the attractive possibility of solving NP-complete problems in less-than-exponential time. Some P system variants are known to be capable of solving the SAT (boolean satisfiability) problem in linear time and, owing to all NP-complete problems being equivalent, this capability then applies to all such problems. As there is no current method of directly implementing a P system in its own right, their functionality is instead emulated, and therefore solving NP-complete problems in linear time remains theoretical. However, it has also been proven that any deterministic P system may be simulated on a Turing machine in polynomial time. Example computation: The P system described below has three membranes. Because of their hierarchical nature, P systems are often depicted graphically with drawings that resemble Venn diagrams or David Harel's higraphs (see Statechart). Example computation: The outermost membrane, 1, is the container membrane for this P system and contains a single out rule. Membrane 2 contains four here rules, with two in a priority relationship: cc → c will always be applied in preference to c → δ. The delta symbol represents the special "dissolve" symbol. The innermost membrane, 3, contains a set of symbols ("ac") and three rules, of type here. In this initial state no rules outside of membrane 3 are applicable: there are no symbols outside of that membrane. However, during evolution of the system, as objects are passed between membranes, the rules in other membranes will become active. Example computation: Computation Because of the non-deterministic nature of P systems, there are many different paths of computation a single P system is capable of, leading to different results. The following is one possible path of computation for the P system described. Example computation: Step 1 From the initial configuration only membrane 3 has any object content: "ac". "c" is assigned to c → cc; "a" is assigned to a → ab. Step 2 Membrane 3 now contains: "abcc". "a" is assigned to a → bδ; "c" is assigned to c → cc; "c" is assigned to c → cc. Notice the maximally parallel behaviour of rule application leading to the same rule being applied twice during one step.
Example computation: Notice also that the application of the second rule (a → bδ), as opposed to the first (a → ab), is non-deterministic and can be presumed random. The system could just as well have continued applying the first rule (and at the same time doubling the c particles) indefinitely. Membrane 3 now dissolves, as the dissolve symbol (δ) has been encountered, and all object content from this membrane passes into membrane 2. Example computation: Step 3 Membrane 2 now contains: "bbcccc". "b" is assigned to b → d; "b" is assigned to b → d; "cc" is assigned to cc → c; "cc" is assigned to cc → c. Step 4 Membrane 2 now contains: "ddcc". "d" is assigned to d → de; "d" is assigned to d → de; "cc" is assigned to cc → c. Step 5 Membrane 2 now contains: "dedec". "d" is assigned to d → de; "d" is assigned to d → de; "c" is assigned to c → δ. Notice that the priority of cc → c over c → δ has been lifted now that the required inputs for cc → c no longer exist. Membrane 2 now dissolves, and all object content passes to membrane 1. Example computation: Step 6 Membrane 1 now contains: "deedee". "e" is assigned to e → eout; "e" is assigned to e → eout; "e" is assigned to e → eout; "e" is assigned to e → eout. Computation halts Membrane 1 now contains: "dd" and, due to the out rule e → eout, the environment contains: "eeee". At this point the computation halts, as no further assignment of objects to rules is possible. The result of the computation is four "e" symbols. Example computation: The only non-deterministic choices occurred during steps 1 and 2, when choosing where to assign the solitary "a" symbol. Consider the case where "a" is assigned to a → bδ during step 1: upon membrane 3 dissolving, only a single "b" and two "c" objects would exist, leading to the creation of only a single "e" object to eventually be passed out as the computation's result.
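The step semantics just illustrated can be made concrete in code. The following Python sketch is a toy, single-membrane simulator (assumed simplifications: here rules only, no priorities, no membrane hierarchy, and the letter "d" standing in for the dissolve symbol δ); it illustrates maximally parallel, non-deterministic rule application rather than a full P system implementation:

```python
import random
from collections import Counter

def step(contents, rules):
    """One maximally parallel step: keep assigning symbols to randomly chosen
    applicable rules until no rule's inputs can be satisfied, holding outputs
    back until the step ends (as described above)."""
    pool = Counter(contents)      # multiset of symbols currently in the membrane
    produced = Counter()          # outputs are distributed only after the step
    while True:
        applicable = [(lhs, rhs) for lhs, rhs in rules
                      if all(pool[s] >= n for s, n in Counter(lhs).items())]
        if not applicable:
            break
        lhs, rhs = random.choice(applicable)   # non-deterministic choice
        pool -= Counter(lhs)                   # inputs are consumed immediately
        produced += Counter(rhs)
    return pool + produced

# Mirrors the two-rule illustration: a -> ab competes with a -> ad (delta
# written as "d") for a single "a", so exactly one of the rules fires.
print(step("a", [("a", "ab"), ("a", "ad")]))

# Maximal parallelism: a -> aa is applied to every "a", doubling them each step.
m = Counter("a")
for _ in range(3):
    m = step(m, [("a", "aa")])
print(m["a"])   # 8
```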
**ADAM17** ADAM17: A disintegrin and metalloprotease 17 (ADAM17), also called TACE (tumor necrosis factor-α-converting enzyme), is a 70-kDa enzyme that belongs to the ADAM protein family of disintegrins and metalloproteases. Chemical characteristics: ADAM17 is an 824-amino acid polypeptide. Function: ADAM17 is understood to be involved in the processing of tumor necrosis factor alpha (TNF-α) at the surface of the cell, and from within the intracellular membranes of the trans-Golgi network. This process, which is also known as 'shedding', involves the cleavage and release of a soluble ectodomain from membrane-bound pro-proteins (such as pro-TNF-α), and is of known physiological importance. ADAM17 was the first 'sheddase' to be identified, and is also understood to play a role in the release of a diverse variety of membrane-anchored cytokines, cell adhesion molecules, receptors, ligands, and enzymes. Function: Cloning of the TNF-α gene revealed it to encode a 26 kDa type II transmembrane pro-polypeptide that becomes inserted into the cell membrane during its maturation. At the cell surface, pro-TNF-α is biologically active, and is able to induce immune responses via juxtacrine intercellular signaling. However, pro-TNF-α can undergo a proteolytic cleavage at its Ala76-Val77 amide bond, which releases a soluble 17 kDa extracellular domain (ectodomain) from the pro-TNF-α molecule. This soluble ectodomain is the cytokine commonly known as TNF-α, which is of pivotal importance in paracrine signaling. This proteolytic liberation of soluble TNF-α is catalyzed by ADAM17. Function: Recently, ADAM17 was discovered to be a crucial mediator of resistance to radiotherapy. Radiotherapy can induce a dose-dependent increase of furin-mediated cleavage of the ADAM17 proform to active ADAM17, which results in enhanced ADAM17 activity in vitro and in vivo. It was also shown that radiotherapy activates ADAM17 in non-small cell lung cancer, which results in shedding of multiple survival factors, growth factor pathway activation, and radiotherapy-induced treatment resistance. ADAM17 may play a prominent role in the Notch signaling pathway, during the proteolytic release of the Notch intracellular domain (from the Notch1 receptor) that occurs following ligand binding. ADAM17 also regulates the MAP kinase signaling pathway by regulating shedding of the EGFR ligand amphiregulin in the mammary gland, and has a role in the shedding of L-selectin, a cellular adhesion molecule. Interactions: ADAM17 has been shown to interact with DLG1, MAD2L1, and MAPK1. Activation: The localization of ADAM17 is speculated to be an important determinant of shedding activity. TNF-α processing has classically been understood to occur in the trans-Golgi network, and to be closely connected to transport of soluble TNF-α to the cell surface. Shedding is also associated with clustering of ADAM17 with its substrate, membrane-bound TNF, in lipid rafts. The overall process is called substrate presentation and is regulated by cholesterol. Research also suggests that the majority of mature, endogenous ADAM17 may be localized to a perinuclear compartment, with only a small amount of TACE being present on the cell surface. The localization of mature ADAM17 to a perinuclear compartment, therefore, raises the possibility that ADAM17-mediated ectodomain shedding may also occur in the intracellular environment, in contrast with the conventional model.
Activation: Functional ADAM17 has been documented to be ubiquitously expressed in the human colon, with increased activity in the colonic mucosa of patients with ulcerative colitis, a main form of inflammatory bowel disease. Other experiments have also suggested that expression of ADAM17 may be inhibited by ethanol. Clinical significance: ADAM17 may facilitate entry of the SARS‑CoV‑2 virus, possibly by enabling fusion of virus particles with the cytoplasmic membrane. ADAM17 has ACE2 cleavage activity similar to that of TMPRSS2, but by forming soluble ACE2, ADAM17 may actually have the protective effect of blocking circulating SARS‑CoV‑2 virus particles. ADAM17 sheddase activity may also contribute to COVID-19 inflammation by cleavage of TNF-α and the interleukin-6 receptor. Model organisms: Model organisms have been used in the study of ADAM17 function. A conditional knockout mouse line, called Adam17tm1a(EUCOMM)Wtsi, was generated as part of the International Knockout Mouse Consortium program, a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists. Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-eight tests were carried out on mutant mice and two significant abnormalities were observed. Few homozygous mutant embryos were identified during gestation. The remaining tests were carried out on heterozygous mutant adult mice; an increased bone mineral content was observed in these animals using micro-CT.
**Leafcasting** Leafcasting: Leafcasting is a method of strengthening paper so as to preserve it. Leafcasting fills in parts that may be missing from papers, whether by the design of conservators or through age. The process covers an existing sheet of damaged paper with replacement fiber, thus increasing its future usability. It must be performed on a perfectly calibrated machine to avoid damaging the paper, and few institutions around the world have the capability to perform leafcasting treatments. Because so few institutions have the required equipment, leafcasting is not a widely used form of paper strengthening. Computerized leafcasting was first employed in the mid-1980s at the Folger Shakespeare Library.
**Opium** Opium: Opium (or poppy tears, scientific name: Lachryma papaveris) is dried latex obtained from the seed capsules of the opium poppy Papaver somniferum. Approximately 12 percent of opium is made up of the analgesic alkaloid morphine, which is processed chemically to produce heroin and other synthetic opioids for medicinal use and for the illegal drug trade. The latex also contains the closely related opiates codeine and thebaine, and non-analgesic alkaloids such as papaverine and noscapine. The traditional, labor-intensive method of obtaining the latex is to scratch ("score") the immature seed pods (fruits) by hand; the latex leaks out and dries to a sticky yellowish residue that is later scraped off and dehydrated. The word meconium (derived from the Greek for "opium-like", but now used to refer to newborn stools) historically referred to related, weaker preparations made from other parts of the opium poppy or from different species of poppies. The production methods have not significantly changed since ancient times. Through selective breeding of the Papaver somniferum plant, the content of the phenanthrene alkaloids morphine, codeine, and to a lesser extent thebaine has been greatly increased. In modern times, much of the thebaine, which often serves as the raw material for the synthesis of oxycodone, hydrocodone, hydromorphone, and other semisynthetic opiates, is extracted from Papaver orientale or Papaver bracteatum. Opium: For the illegal drug trade, the morphine is extracted from the opium latex, reducing the bulk weight by 88%. It is then converted to heroin, which is almost twice as potent and increases the value by a similar factor; the reduced weight and bulk make it easier to smuggle. History: The Mediterranean region contains the earliest archeological evidence of human use; the oldest known seeds date back to more than 5000 BCE, in the Neolithic age, with purposes such as food, anaesthetics, and ritual. Evidence from ancient Greece indicates that opium was consumed in several ways, including inhalation of vapors, suppositories, medical poultices, and as a combination with hemlock for suicide. Opium is mentioned in the most important medical texts of the ancient and medieval world, including the Ebers Papyrus and the writings of Dioscorides, Galen, and Avicenna. Widespread medical use of unprocessed opium continued through the American Civil War before giving way to morphine and its successors, which could be injected at a precisely controlled dosage. History: Ancient use (pre-500 CE) Opium has been actively collected since approximately 3400 BCE. At least 17 finds of Papaver somniferum from Neolithic settlements have been reported throughout Switzerland, Germany, and Spain, including the placement of large numbers of poppy seed capsules at a burial site (the Cueva de los Murciélagos, or "Bat Cave", in Spain), which has been carbon-14 dated to 4200 BCE. Numerous finds of P. somniferum or P. setigerum from Bronze Age and Iron Age settlements have also been reported. History: The first known cultivation of opium poppies was in Mesopotamia, approximately 3400 BCE, by Sumerians, who called the plant hul gil, the "joy plant". Tablets found at Nippur, a Sumerian spiritual center south of Baghdad, described the collection of poppy juice in the morning and its use in the production of opium.
Cultivation continued in the Middle East by the Assyrians, who also collected poppy juice in the morning after scoring the pods with an iron scoop; they called the juice aratpa-pal, possibly the root of the word Papaver. Opium production continued under the Babylonians and Egyptians. History: Opium was used with poison hemlock to put people quickly and painlessly to death. It was also used in medicine. Spongia somnifera, sponges soaked in opium, were used during surgery. The Egyptians cultivated opium thebaicum in famous poppy fields around 1300 BCE. Opium was traded from Egypt by the Phoenicians and Minoans to destinations around the Mediterranean Sea, including Greece, Carthage, and Europe. By 1100 BCE, opium was cultivated on Cyprus, where surgical-quality knives were used to score the poppy pods, and opium was cultivated, traded, and smoked. Opium was also mentioned after the Persian conquest of Assyria and the Babylonian lands in the 6th century BC. From the earliest finds, opium has appeared to have ritual significance, and anthropologists have speculated that ancient priests may have used the drug as a proof of healing power. In Egypt, the use of opium was generally restricted to priests, magicians, and warriors; its invention is credited to Thoth, and it was said to have been given by Isis to Ra as treatment for a headache. A figure of the Minoan "goddess of the narcotics", wearing a crown of three opium poppies, c. 1300 BCE, was recovered from the Sanctuary of Gazi, Crete, together with a simple smoking apparatus. The Greek gods Hypnos (Sleep), Nyx (Night), and Thanatos (Death) were depicted wreathed in poppies or holding them. Poppies also frequently adorned statues of Apollo, Asclepius, Pluto, Demeter, Aphrodite, Kybele and Isis, symbolizing nocturnal oblivion.
The text lists medicinal effects of opium, such as analgesia, hypnosis, antitussive effects, gastrointestinal effects, cognitive effects, respiratory depression, neuromuscular disturbances, and sexual dysfunction. It also refers to opium's potential as a poison. Avicenna describes several methods of delivery and recommendations for doses of the drug. This classic text was translated into Latin in 1175 and later into many other languages, and remained authoritative until the 19th century. Şerafeddin Sabuncuoğlu used opium in the 15th-century Ottoman Empire to treat migraine headaches, sciatica, and other painful ailments. History: Reintroduction to Western medicine Manuscripts of Pseudo-Apuleius's 5th-century work from the 10th and 11th centuries refer to the use of wild poppy Papaver agreste or Papaver rhoeas (identified as P. silvaticum) instead of P. somniferum for inducing sleep and relieving pain. The use of Paracelsus' laudanum was introduced to Western medicine in 1527, when Philippus Aureolus Theophrastus Bombastus von Hohenheim, better known by the name Paracelsus, claimed (dubiously) to have returned from wanderings in Arabia with a famous sword, within the pommel of which he kept "Stones of Immortality" compounded from opium thebaicum, citrus juice, and "quintessence of gold". The name "Paracelsus" was a pseudonym signifying him the equal or better of Aulus Cornelius Celsus, whose text, which described the use of opium or a similar preparation, had recently been translated and reintroduced to medieval Europe. The Canon of Medicine, the standard medical textbook that Paracelsus burned in a public bonfire three weeks after being appointed professor at the University of Basel, also described the use of opium, though many Latin translations were of poor quality. Laudanum ("worthy of praise") was originally the 16th-century term for a medicine associated with a particular physician that was widely well-regarded, but became standardized as "tincture of opium", a solution of opium in ethanol, which Paracelsus has been credited with developing. During his lifetime, Paracelsus was viewed as an adventurer who challenged the theories and mercenary motives of contemporary medicine with dangerous chemical therapies, but his therapies marked a turning point in Western medicine. In the 1660s, laudanum was recommended for pain, sleeplessness, and diarrhea by Thomas Sydenham, the renowned "father of English medicine" or "English Hippocrates", to whom is attributed the quote, "Among the remedies which it has pleased Almighty God to give to man to relieve his sufferings, none is so universal and so efficacious as opium." Use of opium as a cure-all was reflected in the formulation of mithridatium described in the 1728 Chambers Cyclopedia, which included true opium in the mixture. History: Eventually, laudanum became readily available and extensively used by the 18th century in Europe, especially England. Compared to other chemicals available to 18th-century regular physicians, opium was a benign alternative to arsenic, mercury, or emetics, and it was remarkably successful in alleviating a wide range of ailments. Due to the constipation often produced by the consumption of opium, it was one of the most effective treatments for cholera, dysentery, and diarrhea. As a cough suppressant, opium was used to treat bronchitis, tuberculosis, and other respiratory illnesses. Opium was additionally prescribed for rheumatism and insomnia.
Medical textbooks even recommended its use by people in good health, to "optimize the internal equilibrium of the human body". During the 18th century, opium was found to be a good remedy for nervous disorders. Due to its sedative and tranquilizing properties, it was used to quiet the minds of those with psychosis, to help people who were considered insane, and also to treat patients with insomnia. However, despite its medicinal value in these cases, it was noted that in cases of psychosis it could cause anger or depression, and, due to the drug's euphoric effects, it could cause depressed patients to become more depressed after the effects wore off, because they would get used to being high. The standard medical use of opium persisted well into the 19th century. US president William Henry Harrison was treated with opium in 1841, and in the American Civil War, the Union Army used 175,000 lb (80,000 kg) of opium tincture and powder and about 500,000 opium pills. During this time of popularity, users called opium "God's Own Medicine". One reason for the increase in opiate consumption in the United States during the 19th century was the prescribing and dispensing of legal opiates by physicians and pharmacists to women with "female complaints" (mostly to relieve menstrual pain and hysteria). Because opiates were viewed as more humane than punishment or restraint, they were often used to treat the mentally ill. Between 150,000 and 200,000 opiate addicts lived in the United States in the late 19th century, and between two-thirds and three-quarters of these addicts were women. Opium addiction in the later 19th century received a hereditary definition. Dr. George Beard in 1869 proposed his theory of neurasthenia, a hereditary nervous system deficiency that could predispose an individual to addiction. Neurasthenia was increasingly tied in medical rhetoric to the "nervous exhaustion" suffered by many a white-collar worker in the increasingly hectic and industrialized U.S. life, such workers being the most likely potential clients of physicians. History: Recreational use in Europe, the Middle East and the US (11th to 19th centuries) Soldiers returning home from the Crusades in the 11th to 13th centuries brought opium with them. Opium is said to have been used for recreational purposes from the 14th century onwards in Muslim societies. Ottoman and European testimonies confirm that from the 16th to the 19th centuries Anatolian opium was eaten in Constantinople as much as it was exported to Europe. In 1573, for instance, a Venetian visitor to the Ottoman Empire observed that many of the Turkish natives of Constantinople regularly drank a "certain black water made with opium" that makes them feel good, but to which they become so addicted that, if they try to go without, they will "quickly die". From drinking it, dervishes claimed the drug bestowed on them visionary glimpses of future happiness. Indeed, the Ottoman Empire supplied the West with opium long before China and India. Extensive textual and pictorial sources also show that poppy cultivation and opium consumption were widespread in Safavid Iran and Mughal India. History: England In England, opium fulfilled a "critical" role, as it did in other societies, in addressing multifactorial pain, cough, dysentery, and diarrhea, as argued by Virginia Berridge.
Opium was a medical panacea of the 19th century: "any respectable person" could purchase a range of hashish pastes and (later) morphine with a complimentary injection kit. Thomas De Quincey's Confessions of an English Opium-Eater (1822), one of the first and most famous literary accounts of opium addiction written from the point of view of an addict, details the pleasures and dangers of the drug. In the book, it is not Ottoman nor Chinese addicts about whom he writes, but English opium users: "I question whether any Turk, of all that ever entered the paradise of opium-eaters, can have had half the pleasure I had." De Quincey writes about the great English Romantic poet Samuel Taylor Coleridge (1772–1834), whose "Kubla Khan" is also widely considered to be a poem of the opium experience. Coleridge began using opium in 1791 after developing jaundice and rheumatic fever, and became a full addict after a severe attack of the disease in 1801, requiring 80–100 drops of laudanum daily. History: China Recreational use in China The earliest clear description of the use of opium as a recreational drug in China came from Xu Boling, who wrote in 1483 that opium was "mainly used to aid masculinity, strengthen sperm and regain vigor", and that it "enhances the art of alchemists, sex and court ladies". He also described an expedition sent by the Ming dynasty Chenghua Emperor in 1483 to procure opium for a price "equal to that of gold" in Hainan, Fujian, Zhejiang, Sichuan and Shaanxi, where it is close to the western lands of Xiyu. A century later, Li Shizhen listed standard medical uses of opium in his renowned Compendium of Materia Medica (1578), but also wrote that "lay people use it for the art of sex", in particular the ability to "arrest seminal emission". This association of opium with sex continued in China until the end of the 19th century. History: Opium smoking began as a privilege of the elite and remained a great luxury into the early 19th century. However, by 1861, Wang Tao wrote that opium was used even by rich peasants, and that even a small village without a rice store would have a shop where opium was sold. Recreational use of opium was part of a civilized and mannered ritual, akin to an East Asian tea ceremony, prior to the extensive prohibitions that came later. In places of gathering, often tea shops, or in a person's home, servings of opium were offered as a form of greeting and politeness, often served with tea (in China) and with specific, fine utensils and beautifully carved wooden pipes. The wealthier the smoker, the finer and more expensive the materials used in the ceremony. The image of seedy, underground, destitute smokers was largely generated by anti-opium narratives, and became a more accurate picture of opium use only after the effects of large-scale opium prohibition in the 1880s. History: Prohibitions in China Opium prohibition in China began in 1729, yet was followed by nearly two centuries of increasing opium use. A massive destruction of opium by an emissary of the Chinese Daoguang Emperor, in an attempt to stop opium smuggling by the British, led to the First Opium War (1839–1842), in which Britain defeated China. After 1860, opium use continued to increase with widespread domestic production in China. By 1905, an estimated 25 percent of the male population were regular consumers of the drug. Recreational use of opium elsewhere in the world remained rare into late in the 19th century, as indicated by ambivalent reports of opium usage.
In 1906, 41,000 tons were produced, but because 39,000 tons of that year's opium were consumed in China, overall usage in the rest of the world was much lower. These figures from 1906 have been criticized as overestimates. History: Smoking of opium came on the heels of tobacco smoking and may have been encouraged by a brief ban on the smoking of tobacco by the Ming emperor. The prohibition ended in 1644 with the coming of the Qing dynasty, which encouraged smokers to mix in increasing amounts of opium. In 1705, Wang Shizhen wrote, "nowadays, from nobility and gentlemen down to slaves and women, all are addicted to tobacco." Tobacco in that time was frequently mixed with other herbs (this continues with clove cigarettes to the modern day), and opium was one component in the mixture. Tobacco mixed with opium was called madak (or madat) and became popular throughout China and its seafaring trade partners (such as Taiwan, Java, and the Philippines) in the 17th century. In 1712, Engelbert Kaempfer described addiction to madak: "No commodity throughout the Indies is retailed with greater profit by the Batavians than opium, which [its] users cannot do without, nor can they come by it except it be brought by the ships of the Batavians from Bengal and Coromandel." Fueled in part by the 1729 ban on madak, which at first effectively exempted pure opium as a potentially medicinal product, the smoking of pure opium became more popular in the 18th century. In 1736, the smoking of pure opium was described by Huang Shujing, involving a pipe made from bamboo rimmed with silver, stuffed with palm slices and hair, fed by a clay bowl in which a globule of molten opium was held over the flame of an oil lamp. This elaborate procedure, requiring the maintenance of pots of opium at just the right temperature for a globule to be scooped up with a needle-like skewer for smoking, formed the basis of a craft of "paste-scooping" by which servant girls could become prostitutes as the opportunity arose.
Likewise, in San Francisco, Chinese immigrants were permitted to smoke opium, so long as they refrained from doing so in the presence of whites. Because of the low social status of immigrant workers, contemporary writers and media had little trouble portraying opium dens as seats of vice, white slavery, gambling, knife- and revolver-fights, and a source for drugs causing deadly overdoses, with the potential to addict and corrupt the white population. By 1919, anti-Chinese riots attacked Limehouse, the Chinatown of London. Chinese men were deported for playing keno and sentenced to hard labor for opium possession. Due to this, both the immigrant population and the social use of opium fell into decline. Yet despite lurid literary accounts to the contrary, 19th-century London was not a hotbed of opium smoking. The total lack of photographic evidence of opium smoking in Britain, as opposed to the relative abundance of historical photos depicting opium smoking in North America and France, indicates the infamous Limehouse opium-smoking scene was little more than fantasy on the part of British writers of the day, who were intent on scandalizing their readers while drumming up the threat of the "yellow peril". History: Prohibition and conflict in China A large-scale opium prohibition attempt began in 1729, when the Qing Yongzheng Emperor, disturbed by madak smoking at court and carrying out the government's role of upholding Confucian virtues, officially prohibited the sale of opium, except for a small amount for medicinal purposes. The ban punished sellers and opium den keepers, but not users of the drug. Opium was banned completely in 1799, and this prohibition continued until 1860. History: During the Qing dynasty, China opened itself to foreign trade under the Canton System through the port of Guangzhou (Canton), with traders from the East India Company visiting the port by the 1690s. Due to the growing British demand for Chinese tea and the Chinese Emperor's lack of interest in British commodities other than silver, British traders resorted to trade in opium as a high-value commodity for which China was not self-sufficient. The English traders had been purchasing small amounts of opium from India for trade since Ralph Fitch first visited in the late 16th century. Trade in opium was standardized, with production of balls of raw opium, 1.1–1.6 kg (2.4–3.5 lb), 30% water content, wrapped in poppy leaves and petals, and shipped in chests of 60–65 kg (132–143 lb) (one picul). Chests of opium were sold in auctions in Calcutta with the understanding that the independent purchasers would then smuggle it into China. History: China had a positive balance sheet in trading with the British, which led to a decrease in British silver stocks. The British therefore tried to encourage Chinese opium use to improve their balance, delivering it from Indian provinces under British control. In India, its cultivation, as well as the manufacture and traffic to China, were subject to the British East India Company (BEIC), as a strict monopoly of the British government. There was an extensive and complicated system of BEIC agencies involved in the supervision and management of opium production and distribution in India. Bengal opium was highly prized, commanding twice the price of the domestic Chinese product, which was regarded as inferior in quality. History: Some competition came from the newly independent United States, which began to compete in Guangzhou, selling Turkish opium in the 1820s.
Portuguese traders also brought opium from the independent Malwa states of western India, although by 1820, the British were able to restrict this trade by charging "pass duty" on the opium when it was forced to pass through Bombay to reach an entrepot. History: Despite drastic penalties and continued prohibition of opium until 1860, opium smuggling rose steadily from 200 chests per year under the Yongzheng Emperor to 1,000 under the Qianlong Emperor, 4,000 under the Jiaqing Emperor, and 30,000 under the Daoguang Emperor. The illegal sale of opium became one of the world's most valuable single commodity trades and has been called "the most long continued and systematic international crime of modern times". Opium smuggling provided 15 to 20 percent of the British Empire's revenue and simultaneously caused a scarcity of silver in China. In response to the ever-growing number of Chinese people becoming addicted to opium, the Qing Daoguang Emperor took strong action to halt the smuggling of opium, including the seizure of cargo. In 1839, the Chinese Commissioner Lin Zexu destroyed 20,000 chests of opium in Guangzhou. Given that a chest of opium was worth nearly US$1,000 in 1800, this was a substantial economic loss. Queen Victoria's government, unwilling to replace the cheap opium with costly silver, began the First Opium War in 1840, the British winning Hong Kong and trade concessions in the first of a series of Unequal Treaties. The opium trade incurred intense enmity from the later British Prime Minister William Ewart Gladstone. As a member of Parliament, Gladstone called it "most infamous and atrocious", referring to the opium trade between China and British India in particular. Gladstone was fiercely opposed to both of the Opium Wars Britain waged in China, the First Opium War initiated in 1840 and the Second Opium War initiated in 1857; he denounced British violence against the Chinese and was ardently opposed to the British trade in opium to China. Gladstone lambasted it as "Palmerston's Opium War" and said in May 1840 that he felt "in dread of the judgments of God upon England for our national iniquity towards China". In a famous speech in Parliament against the First Opium War, Gladstone criticized it as "a war more unjust in its origin, a war more calculated in its progress to cover this country with permanent disgrace". His hostility to opium stemmed from the effects the drug had brought upon his sister Helen. Because of the First Opium War brought on by Palmerston, Gladstone was initially reluctant to join the government of Peel before 1841. History: Following China's defeat in the Second Opium War in 1858, China was forced to legalize opium and began massive domestic production. Importation of opium peaked in 1879 at 6,700 tons, and by 1906, China was producing 85 percent of the world's opium, some 35,000 tons, and 27 percent of its adult male population regularly used opium: 13.5 million people consuming 39,000 tons of opium yearly. From 1880 to the beginning of the Communist era, the British attempted to discourage the use of opium in China, but this effectively promoted the use of morphine, heroin, and cocaine, further exacerbating the problem of addiction. History: Scientific evidence of the pernicious nature of opium use was largely undocumented in the 1890s, when Protestant missionaries in China decided to strengthen their opposition to the trade by compiling data which would demonstrate the harm the drug did.
Faced with the problem that many Chinese associated Christianity with opium, partly due to the arrival of early Protestant missionaries on opium clippers, the missionaries agreed at the 1890 Shanghai Missionary Conference to establish the Permanent Committee for the Promotion of Anti-Opium Societies, in an attempt to overcome this problem and to arouse public opinion against the opium trade. The members of the committee were John Glasgow Kerr, MD, American Presbyterian Mission in Guangzhou (Canton); B.C. Atterbury, MD, American Presbyterian Mission in Beijing (Peking); Archdeacon Arthur E. Moule, Church Missionary Society in Shanghai; Henry Whitney, MD, American Board of Commissioners for Foreign Missions in Fuzhou; the Rev. Samuel Clarke, China Inland Mission in Guiyang; the Rev. Arthur Gostick Shorrock, English Baptist Mission in Taiyuan; and the Rev. Griffith John, London Missionary Society in Hankou. These missionaries were generally outraged over the British government's Royal Commission on Opium visiting India but not China. Accordingly, the missionaries first organized the Anti-Opium League in China among their colleagues in every mission station in China. American missionary Hampden Coit DuBose acted as first president. This organization, which had elected national officers and held an annual national meeting, was instrumental in gathering data from every Western-trained medical doctor in China, which William Hector Park compiled as Opinions of Over 100 Physicians on the Use of Opium in China (Shanghai: American Presbyterian Mission Press, 1899). The vast majority of these medical doctors were missionaries; the survey also included doctors who were in private practices, particularly in Shanghai and Hong Kong, as well as Chinese who had been trained in medical schools in Western countries. In England, the home director of the China Inland Mission, Benjamin Broomhall, was an active opponent of the opium trade, writing two books to promote the banning of opium smoking: The Truth about Opium Smoking and The Chinese Opium Smoker. In 1888, Broomhall formed and became secretary of the Christian Union for the Severance of the British Empire with the Opium Traffic and editor of its periodical, National Righteousness. He lobbied the British Parliament to stop the opium trade. He and James Laidlaw Maxwell appealed to the London Missionary Conference of 1888 and the Edinburgh Missionary Conference of 1910 to condemn the continuation of the trade. When Broomhall was dying, his son Marshall read to him from The Times the welcome news that an agreement had been signed ensuring the end of the opium trade within two years. History: Official Chinese resistance to opium was renewed on September 20, 1906, with an anti-opium initiative intended to eliminate the drug problem within 10 years. The program relied on the turning of public sentiment against opium, with mass meetings at which opium paraphernalia were publicly burned, as well as coercive legal action and the granting of police powers to organizations such as the Fujian Anti-Opium Society. Smokers were required to register for licenses for gradually reducing rations of the drug. Action against opium farmers centered upon a highly repressive incarnation of law enforcement in which rural populations had their property destroyed, their land confiscated, and/or were publicly tortured, humiliated, and executed. Addicts sometimes turned to missionaries for treatment for their addiction, though many associated these foreigners with the drug trade.
The program was counted as a substantial success, with a cessation of direct British opium exports to China (but not Hong Kong) and most provinces declared free of opium production. Nonetheless, the success of the program was only temporary, with opium use rapidly increasing during the disorder following the death of Yuan Shikai in 1916. Opium farming also increased, peaking in 1930 when the League of Nations singled China out as the primary source of illicit opium in East and Southeast Asia. Many local powerholders facilitated the trade during this period to finance conflicts over territory and political campaigns. In some areas food crops were eradicated to make way for opium, contributing to famines in Kweichow and Shensi Provinces between 1921 and 1923, and food deficits in other provinces. History: Beginning in 1915, Chinese nationalist groups came to describe the period of military losses and Unequal Treaties as the "Century of National Humiliation", later defined to end with the conclusion of the Chinese Civil War in 1949. In the northern provinces of Ningxia and Suiyuan in China, the Chinese Muslim General Ma Fuxiang both prohibited and engaged in the opium trade. It was hoped that Ma Fuxiang would improve the situation, since Chinese Muslims were well known for opposition to smoking opium. Ma Fuxiang officially prohibited opium and made it illegal in Ningxia, but the Guominjun reversed his policy; by 1933, people from every level of society were abusing the drug, and Ningxia was left in destitution. In 1923, an officer of the Bank of China from Baotou discovered that Ma Fuxiang was assisting the opium trade, which helped finance his military expenses. He earned US$2 million from taxing those sales in 1923. General Ma had been using the bank, a branch of the Government of China's exchequer, to arrange for silver currency to be transported to Baotou to sponsor the trade. The opium trade under the Chinese Communist Party was important to its finances in the 1940s. Peter Vladimirov's diary provided a first-hand account, and Chen Yung-fa provided a detailed historical account of how the opium trade was essential to the economy of Yan'an during this period. Mitsubishi and Mitsui were involved in the opium trade during the Japanese occupation of China. Mao Zedong's government is generally credited with eradicating both consumption and production of opium during the 1950s using unrestrained repression and social reform. Ten million addicts were forced into compulsory treatment, dealers were executed, and opium-producing regions were planted with new crops. Remaining opium production shifted south of the Chinese border into the Golden Triangle region. The remnant opium trade primarily served Southeast Asia, but spread to American soldiers during the Vietnam War; in a study of opiate use in soldiers returning to the United States in 1971, 20 percent of participants were dependent enough to experience withdrawal symptoms. History: Prohibition outside China There were no legal restrictions on the importation or use of opium in the United States until the San Francisco Opium Den Ordinance, which banned dens for public smoking of opium in 1875, a measure fueled by anti-Chinese sentiment and the perception that whites were starting to frequent the dens.
This was followed by an 1891 California law requiring that narcotics carry warning labels and that their sales be recorded in a registry; amendments to the California Pharmacy and Poison Act in 1907 made it a crime to sell opiates without a prescription, and bans on possession of opium or opium pipes were enacted in 1909. At the US federal level, the legal actions taken reflected constitutional restrictions under the enumerated powers doctrine prior to reinterpretation of the commerce clause, which did not allow the federal government to enact arbitrary prohibitions but did permit arbitrary taxation. Beginning in 1883, opium importation was taxed at US$6 to US$300 per pound, until the Opium Exclusion Act of 1909 prohibited the importation of opium altogether. In a similar manner, the Harrison Narcotics Tax Act of 1914, passed in fulfillment of the International Opium Convention of 1912, nominally placed a tax on the distribution of opiates, but served as a de facto prohibition of the drugs. Today, opium is regulated by the Drug Enforcement Administration under the Controlled Substances Act. History: Following passage of a Colonial Australian law in 1895, Queensland's Aboriginals Protection and Restriction of the Sale of Opium Act 1897 addressed opium addiction among Aboriginal people, though it soon became a general vehicle for depriving them of basic rights by administrative regulation. By 1905, all Australian states and territories had passed similar laws prohibiting the sale of opium; smoking and possession were prohibited in 1908. Hardening of Canadian attitudes toward Chinese opium users and fear of a spread of the drug into the white population led to the effective criminalization of opium for nonmedical use in Canada between 1908 and the mid-1920s. In 1909, the International Opium Commission was founded, and by 1914, 34 nations had agreed that the production and importation of opium should be diminished. In 1924, 62 nations participated in a meeting of the Commission. Subsequently, this role passed to the League of Nations, and all signatory nations agreed to prohibit the import, sale, distribution, export, and use of all narcotic drugs, except for medical and scientific purposes. This role was later taken up by the International Narcotics Control Board of the United Nations under Article 23 of the Single Convention on Narcotic Drugs, and subsequently under the Convention on Psychotropic Substances. Opium-producing nations are required to designate a government agency to take physical possession of licit opium crops as soon as possible after harvest and conduct all wholesaling and exporting through that agency. History: Indochina tax From 1897 to 1902, Paul Doumer (later President of France) was Governor-General of French Indochina. Upon his arrival the colonies were losing millions of francs each year. Determined to put them on a paying basis, he levied taxes on various products, opium among them. The Vietnamese, Cambodians, and Laotians who could not or would not pay these taxes lost their houses and land, and often became day laborers. Evidently, resorting to this means of gaining income gave France a vested interest in the continuation of opium use among the population of Indochina. History: Regulation in Britain and the United States Before the 1920s, regulation of opium in Britain was controlled by pharmacists. Pharmacists who were found to have prescribed opium for illegitimate uses and anyone found to have sold opium without proper qualifications would be prosecuted.
With the passing of the Rolleston Act in Britain in 1926, doctors were allowed to prescribe opiates such as morphine and heroin if they believed their patients demonstrated a medical need. Because addiction was viewed as a medical problem rather than an indulgence, doctors were permitted to allow patients to wean themselves off opiates rather than cutting off any opiate use altogether. The passing of the Rolleston Act put the control of opium use in the hands of medical doctors instead of pharmacists. Later in the 20th century, addiction to opiates, especially heroin in young people, continued to rise, and so the sale and prescription of opiates was limited to doctors in treatment centers. If these doctors were found to be prescribing opiates without just cause, then they could lose their license to practice or prescribe drugs. Abuse of opium in the United States began in the late 19th century and was largely associated with Chinese immigrants. During this time the use of opium had little stigma; the drug was used freely until 1882, when a law was passed to confine opium smoking to specific dens. Until the full ban on opium-based products came into effect just after the beginning of the twentieth century, physicians in the US considered opium a miracle drug that could help with many ailments. The ban on such products was therefore more a result of the negative connotations attached to their use and distribution by Chinese immigrants, who were heavily persecuted during this particular period in history. In the early twentieth century, the physician Hamilton Wright worked to decrease the use of opium in the US by submitting the Harrison Act to Congress. This act put taxes and restrictions on the sale and prescription of opium, and sought to stigmatize the opium poppy and its derivatives as "demon drugs" in order to scare people away from them. The act and the "demon drug" stigma led to the criminalization of people who used opium-based products, making the use and possession of opium and any of its derivatives illegal. The restrictions were later redefined by the Federal Controlled Substances Act of 1970. History: 20th-century use Opium production in China and the rest of East Asia was nearly wiped out after WWII; however, sustained covert support by the United States Central Intelligence Agency for the Thai Northern Army and the Chinese Nationalist Kuomintang army invading Burma facilitated production and trafficking of the drug from Southeast Asia for decades, with the region becoming a major source of world supplies. During the Communist era in Eastern Europe, poppy stalks sold in bundles by farmers were processed by users with household chemicals to make kompot ("Polish heroin"), and poppy seeds were used to produce koknar, an opiate. History: Obsolescence Globally, opium has gradually been superseded by a variety of purified, semi-synthetic, and synthetic opioids with progressively stronger effects, and by other general anesthetics. This process began in 1804, when Friedrich Wilhelm Adam Sertürner first isolated morphine from the opium poppy. History: The process continued in 1817, when Sertürner published his results after thirteen years of research and a nearly disastrous trial on himself and three boys. The great advantage of purified morphine was that a patient could be treated with a known dose, whereas with raw plant material, as Gabriel Fallopius once lamented, "if soporifics are weak they do not help; if they are strong they are exceedingly dangerous."
Morphine was the first pharmaceutical isolated from a natural product, and this success encouraged the isolation of other alkaloids: by 1820, isolations of noscapine, strychnine, veratrine, colchicine, caffeine, and quinine were reported. Morphine sales began in 1827, by Heinrich Emanuel Merck of Darmstadt, and helped him expand his family pharmacy into the Merck KGaA pharmaceutical company. Codeine was isolated in 1832 by Pierre Jean Robiquet. The use of diethyl ether and chloroform for general anesthesia began in 1846–1847 and rapidly displaced the use of opiates and tropane alkaloids from Solanaceae due to their relative safety. Heroin, the first semi-synthetic opioid, was first synthesized in 1874, but was not pursued until its rediscovery in 1897 by Felix Hoffmann at the Bayer pharmaceutical company in Elberfeld, Germany. From 1898 to 1910 heroin was marketed as a non-addictive morphine substitute and cough medicine for children. Because the lethal dose of heroin was viewed as a hundred times greater than its effective dose, heroin was advertised as a safer alternative to other opioids. By 1902, sales made up 5 percent of the company's profits, and "heroinism" had attracted media attention. Oxycodone, a thebaine derivative similar to codeine, was introduced by Bayer in 1916 and promoted as a less-addictive analgesic. Preparations of the drug such as oxycodone with paracetamol and extended-release oxycodone remain popular to this day. A range of synthetic opioids such as methadone (1937), pethidine (1939), fentanyl (late 1950s), and derivatives thereof have been introduced, and each is preferred for certain specialized applications. Nonetheless, morphine remains the drug of choice for American combat medics, who carry packs of syrettes containing 16 milligrams each for use on severely wounded soldiers. No drug has been found that can match the painkilling effect of opioids without also duplicating much of their addictive potential. Modern production and use: Opium was prohibited in many countries during the early 20th century, leading to the modern pattern of opium production as a precursor for illegal recreational drugs or tightly regulated, highly taxed, legal prescription drugs. In 1980, 2,000 tons of opium supplied all legal and illegal uses. Worldwide production in 2006 was 6,610 tonnes, about one-fifth the level of production in 1906; since then, opium production has fallen. In 2002, the price for one kilogram of opium was US$300 for the farmer, US$800 for purchasers in Afghanistan, and US$16,000 on the streets of Europe before conversion into heroin. More recently, opium production has increased considerably, surpassing 5,000 tons in 2002 and reaching 8,600 tons in Afghanistan and 840 tons in the Golden Triangle in 2014. Production was expected to increase in 2015 as new, improved seeds were brought into Afghanistan. Afghanistan accounts for the world's largest supply of opium. The World Health Organization has estimated that current production of opium would need to increase fivefold to account for total global medical need. Solar panels in use in Afghanistan have allowed farmers to dig their wells deeper, leading to bumper crops of opium year after year. In a 2023 report, poppy cultivation in southern Afghanistan was reduced by over 80% as a result of Taliban campaigns against opium production, including a 99% reduction of opium cultivation in the Helmand Province.
Modern production and use: Papaver somniferum Opium poppies are popular and attractive garden plants, whose flowers vary greatly in color, size, and form. A modest amount of domestic cultivation in private gardens is not usually subject to legal controls. In part, this tolerance reflects variation in addictive potency. A cultivar for opium production, Papaver somniferum L. elite, contains 91.2 percent morphine, codeine, and thebaine in its latex alkaloids, whereas in the latex of the condiment cultivar "Marianne", these three alkaloids total only 14.0 percent. The remaining alkaloids in the latter cultivar are primarily narcotoline and noscapine. Seed capsules can be dried and used for decorations, but they also contain morphine, codeine, and other alkaloids. These pods can be boiled in water to produce a bitter tea that induces a long-lasting intoxication. If allowed to mature, poppy pods (poppy straw) can be crushed and used to produce lower quantities of morphinans. In poppies subjected to mutagenesis and selection on a mass scale, researchers have been able to use poppy straw to obtain large quantities of oripavine, a precursor to opioids and antagonists such as naltrexone. Although millennia older, the production of poppy head decoctions can be seen as a quick-and-dirty variant of the Kábáy poppy straw process, which since its publication in 1930 has become the major method of obtaining licit opium alkaloids worldwide, as discussed in Morphine. Modern production and use: Poppy seeds are a common and flavorsome topping for breads and cakes. One gram of poppy seeds contains up to 33 micrograms of morphine and 14 micrograms of codeine, and the Substance Abuse and Mental Health Services Administration in the United States formerly mandated that all drug screening laboratories use a standard cutoff of 300 nanograms per milliliter in urine samples. A single poppy seed roll (0.76 grams of seeds) usually did not produce a positive drug test, but a positive result was observed from eating two rolls. A slice of poppy seed cake containing nearly five grams of seeds per slice produced positive results for 24 hours. Such results are viewed as false positive indications of drug use and were the basis of a legal defense. On November 30, 1998, the standard cutoff was increased to 2000 nanograms (two micrograms) per milliliter (a worked intake calculation appears at the end of this article). Confirmation by gas chromatography-mass spectrometry will distinguish among opium and variants including poppy seeds, heroin, and morphine and codeine pharmaceuticals by measuring the morphine:codeine ratio and looking for the presence of noscapine and acetylcodeine, the latter of which is only found in illicitly produced heroin, and heroin metabolites such as 6-monoacetylmorphine. Modern production and use: Harvesting and processing When grown for opium production, the skin of the ripening pods of these poppies is scored by a sharp blade at a time carefully chosen so that rain, wind, and dew cannot spoil the exudation of white, milky latex, usually in the afternoon. Incisions are made while the pods are still raw, with no more than a slight yellow tint, and must be shallow to avoid penetrating hollow inner chambers or loculi while cutting into the lactiferous vessels. In the Indian Subcontinent, Afghanistan, Central Asia, and Iran, the special tool used to make the incisions is called a nushtar or "nishtar" (from Persian, meaning a lancet) and carries three or four blades three millimeters apart, which are scored upward along the pod.
Incisions are made three or four times at intervals of two to three days, and each time the "poppy tears", which dry to a sticky brown resin, are collected the following morning. One acre harvested in this way can produce three to five kilograms of raw opium. In the Soviet Union, pods were typically scored horizontally, and opium was collected three times, or else one or two collections were followed by isolation of opiates from the ripe capsules. Oil poppies, an alternative strain of P. somniferum, were also used for production of opiates from their capsules and stems. A traditional Chinese method of harvesting opium latex involved cutting off the heads and piercing them with a coarse needle, then collecting the dried opium 24 to 48 hours later. Modern production and use: Raw opium may be sold to a merchant or broker on the black market, but it usually does not travel far from the field before it is refined into morphine base, because pungent, jelly-like raw opium is bulkier and harder to smuggle. Crude laboratories in the field are capable of refining opium into morphine base by a simple acid-base extraction. A sticky, brown paste, morphine base is pressed into bricks and sun-dried, and can either be smoked, prepared into other forms, or processed into heroin. Other methods of preparation (besides smoking) include processing into regular opium tincture (tinctura opii), laudanum, paregoric (tinctura opii camphorata), herbal wine (e.g., vinum opii), opium powder (pulvis opii), opium syrup (sirupus opii), and opium extract (extractum opii). Vinum opii is made by combining sugar, white wine, cinnamon, and cloves. Opium syrup is made by combining 97.5 parts sugar syrup with 2.5 parts opium extract. Opium extract (extractum opii), finally, can be made by macerating raw opium with water: 20 parts water are combined with 1 part raw opium which has been boiled for 5 minutes (the latter to ease mixing). Heroin is widely preferred because of increased potency. One study in post-addicts found heroin to be approximately 2.2 times more potent than morphine by weight with a similar duration; at these relative quantities, they could distinguish the drugs subjectively but had no preference. Heroin was also found to be twice as potent as morphine in surgical anesthesia. Morphine is converted into heroin by a simple chemical reaction with acetic anhydride, followed by purification. Especially in Mexican production, opium may be converted directly to "black tar heroin" in a simplified procedure. This form predominates in the U.S. west of the Mississippi. Relative to other preparations of heroin, it has been associated with a dramatically decreased rate of HIV transmission among intravenous drug users (4 percent in Los Angeles vs. 40 percent in New York) due to technical requirements of injection, although it is also associated with greater risk of venous sclerosis and necrotizing fasciitis. Modern production and use: Illegal production Afghanistan is currently the primary producer of the drug. After regularly producing 70 percent of the world's opium, Afghanistan decreased production to 74 tons per year under a ban by the Taliban in 2000, a move which cut production by 94 percent. A year later, after American and British troops invaded Afghanistan, removed the Taliban, and installed the interim government, the land under cultivation leapt back to 285 square miles (740 km²), with Afghanistan supplanting Burma to become the world's largest opium producer once more.
Opium production in that country has increased rapidly since, reaching an all-time high in 2006. According to DEA statistics, Afghanistan's production of oven-dried opium increased to 1,278 tons in 2002, more than doubled by 2003, and nearly doubled again during 2004. In late 2004, the U.S. government estimated that 206,000 hectares were under poppy cultivation, 4.5 percent of the country's total cropland, and produced 4,200 metric tons of opium, 76 percent of the world's supply, yielding 60 percent of Afghanistan's gross domestic product. In 2006, the UN Office on Drugs and Crime estimated production to have risen 59 percent to 165,000 hectares (407,000 acres) in cultivation, yielding 6,100 tons of opium, 82 percent of the world's supply. The value of the resulting heroin was estimated at US$3.5 billion, of which Afghan farmers were estimated to have received US$700 million in revenue. For farmers, the crop can be up to ten times more profitable than wheat. The price of opium is around US$138 per kilo. Opium production has led to rising tensions in Afghan villages. Though direct conflict has yet to occur, the opinions of the new class of young rich men involved in the opium trade are at odds with those of the traditional village leaders. Modern production and use: An increasingly large fraction of opium is processed into morphine base and heroin in drug labs in Afghanistan. Despite an international set of chemical controls designed to restrict availability of acetic anhydride, it enters the country, perhaps through its Central Asian neighbors which do not participate. A counternarcotics law passed in December 2005 requires Afghanistan to develop registries or regulations for tracking, storing, and owning acetic anhydride. Besides Afghanistan, smaller quantities of opium are produced in Pakistan, the Golden Triangle region of Southeast Asia (particularly Burma), Colombia, Guatemala, and Mexico. Modern production and use: Chinese production mainly trades with and profits from North America; in 2002, Chinese producers were seeking to expand into the eastern United States. In the post-9/11 era, trading across borders became difficult, and as new international laws were put into place, the opium trade became more diffuse. Power shifted from remote to high-end smugglers and opium traders, and outsourcing became a major factor in the survival of many smugglers and opium farmers. Modern production and use: Legal production Legal opium production is allowed under the United Nations Single Convention on Narcotic Drugs and other international drug treaties, subject to strict supervision by the law enforcement agencies of individual countries. The leading legal production method is the Robertson-Gregory process, whereby the entire poppy, excluding roots and leaves, is mashed and stewed in dilute acid solutions. The alkaloids are then recovered via acid-base extraction and purified. The exact date of the process's discovery is unknown, but it was described by Wurtz in his Dictionnaire de chimie pure et appliquée, published in 1868. Legal opium production in India is much more traditional. As of 2008, opium was collected by farmers who were licensed to grow 0.1 hectares (0.25 acres) of opium poppies and who, to maintain their licences, needed to sell 56 kilograms of unadulterated raw opium paste. The price of opium paste is fixed by the government according to the quality and quantity tendered. The average is around 1,500 rupees (US$29) per kilogram.
Some additional money is made by drying the poppy heads and collecting poppy seeds, and a small fraction of opium beyond the quota may be consumed locally or diverted to the black market. The opium paste is dried and processed in government opium and alkaloid factories before it is packed into cases of 60 kilograms for export. Purification of chemical constituents is done in India for domestic production, but is typically done abroad by foreign importers. Legal opium importation from India and Turkey is conducted by Mallinckrodt, Noramco, Abbott Laboratories, Purdue Pharma, and Cody Laboratories Inc. in the United States, and legal opium production is conducted by GlaxoSmithKline, Johnson & Johnson, Johnson Matthey, and Mayne in Tasmania, Australia; Sanofi Aventis in France; Shionogi Pharmaceutical in Japan; and MacFarlan Smith in the United Kingdom. The UN treaty requires that every country submit annual reports to the International Narcotics Control Board, stating that year's actual consumption of many classes of controlled drugs as well as opioids and projecting required quantities for the next year. This is to allow trends in consumption to be monitored and production quotas allotted. In 2005, the European Senlis Council began developing a programme intended to solve the problems caused by the large quantity of opium produced illegally in Afghanistan, most of which is converted to heroin and smuggled for sale in Europe and the United States. The proposal is to license Afghan farmers to produce opium for the world pharmaceutical market, and thereby solve another problem, that of chronic underuse of potent analgesics where required within developing nations. Part of the proposal is to overcome the "80–20 rule", which requires the U.S. to purchase 80 percent of its legal opium from India and Turkey, so as to include Afghanistan, by establishing a second-tier system of supply control that complements the current INCB-regulated supply and demand system by providing poppy-based medicines to countries who cannot meet their demand under the current regulations. Senlis arranged a conference in Kabul that brought drug policy experts from around the world to meet with Afghan government officials to discuss internal security, corruption issues, and legal issues within Afghanistan. Modern production and use: In June 2007, the Council launched a "Poppy for Medicines" project that provides a technical blueprint for the implementation of an integrated control system within Afghan village-based poppy for medicine projects: the idea promotes economic diversification by redirecting proceeds from the legal cultivation of poppy and production of poppy-based medicines. There has been criticism of the Senlis report findings by Macfarlan Smith, who argue that though they produce morphine in Europe, they were never asked to contribute to the report. Modern production and use: Cultivation in the UK In late 2006, the British government permitted the pharmaceutical company MacFarlan Smith (a Johnson Matthey company) to cultivate opium poppies in England for medicinal reasons, after Macfarlan Smith's primary source, India, decided to increase the price of export opium latex. The move was well received by British farmers, with a major opium poppy field located in Didcot, England.
The British government has contradicted the Home Office's suggestion that opium cultivation could be legalized in Afghanistan for export to the United Kingdom, helping to lower poverty and internal fighting while helping the NHS to meet the high demand for morphine and heroin. Opium poppy cultivation in the United Kingdom does not need a licence, but a licence is required for those wishing to extract opium for medicinal products. Modern production and use: Consumption In the industrialized world, the United States is the world's biggest consumer of prescription opioids, with Italy being one of the lowest because of tighter regulations on prescribing narcotics for pain relief. Most opium imported into the United States is broken down into its alkaloid constituents, and whether legal or illegal, most current drug use occurs with processed derivatives such as heroin rather than with unrefined opium. Modern production and use: Intravenous injection of opiates is most used: by comparison with injection, "dragon chasing" (heating of heroin on a piece of foil), and madak and "ack ack" (smoking of cigarettes containing tobacco mixed with heroin powder) are only 40 percent and 20 percent efficient, respectively. One study of British heroin addicts found a 12-fold excess mortality ratio (1.8 percent of the group dying per year). Most heroin deaths result not from overdose per se, but from combination with other depressant drugs such as alcohol or benzodiazepines. The smoking of opium does not involve the burning of the material as might be imagined. Rather, the prepared opium is indirectly heated to temperatures at which the active alkaloids, chiefly morphine, are vaporized. In the past, smokers would use a specially designed opium pipe which had a removable knob-like pipe-bowl of fired earthenware attached by a metal fitting to a long, cylindrical stem. A small "pill" of opium about the size of a pea would be placed on the pipe-bowl, which was then heated by holding it over an opium lamp, a special oil lamp with a distinct funnel-like chimney to channel heat into a small area. The smoker would lie on his or her side in order to guide the pipe-bowl and the tiny pill of opium over the stream of heat rising from the chimney of the oil lamp and inhale the vaporized opium fumes as needed. Several pills of opium were smoked at a single session depending on the smoker's tolerance to the drug. The effects could last up to twelve hours. Modern production and use: In Eastern culture, opium is more commonly used in the form of paregoric to treat diarrhea. This is a weaker solution than laudanum, an alcoholic tincture which was prevalently used as a pain medication and sleeping aid. Tincture of opium has been prescribed for, among other things, severe diarrhea. Taken thirty minutes prior to meals, it significantly slows intestinal motility, giving the intestines greater time to absorb fluid in the stool. Modern production and use: Despite the historically negative view of opium as a cause of addiction, the use of morphine and other derivatives isolated from opium in the treatment of chronic pain has been reestablished. If given in controlled doses, modern opiates can be an effective treatment for neuropathic pain and other forms of chronic pain. Chemical and physiological properties: Opium contains two main groups of alkaloids. Phenanthrenes such as morphine, codeine, and thebaine are the main psychoactive constituents. Isoquinolines such as papaverine and noscapine have no significant central nervous system effects.
Morphine is the most prevalent and important alkaloid in opium, constituting 10–16 percent of the total, and is responsible for most of opium's harmful effects, such as lung edema, respiratory difficulties, coma, or cardiac or respiratory collapse. Morphine binds to and activates mu opioid receptors in the brain, spinal cord, stomach, and intestine. Regular use can lead to drug tolerance or physical dependence. Chronic opium addicts in 1906 China or modern-day Iran consume an average of eight grams of opium daily. Chemical and physiological properties: Both analgesia and drug addiction are functions of the mu opioid receptor, the class of opioid receptor first identified as responsive to morphine. Tolerance is associated with the superactivation of the receptor, which may be affected by the degree of endocytosis caused by the opioid administered, and leads to a superactivation of cyclic AMP signaling. Long-term use of morphine in palliative care and the management of chronic pain always entails a risk that the patient develops tolerance or physical dependence. There are many kinds of rehabilitation treatment, including pharmacologically based treatments with naltrexone, methadone, or ibogaine. In 2021, the International Agency for Research on Cancer concluded that opium is a Group 1 (sufficient evidence) human carcinogen, causing cancers of the larynx, lung, and urinary bladder. Slang terms: Some slang terms for opium include: "Big O", "Shanghai Sally", "dope", "hop", "midnight oil", "O.P.", and "tar". "Dope" and "tar" can also refer to heroin. The traditional opium pipe is known as a "dream stick." The term dope entered the English language in the early nineteenth century, originally referring to viscous liquids, particularly sauces or gravy. It has been used to refer to opiates since at least 1888, and this usage arose because opium, when prepared for smoking, is viscous.
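Returning to the poppy-seed testing figures quoted earlier, the per-serving intake implied by those numbers can be checked with a few lines of arithmetic. The sketch below (Python) uses only quantities stated in the text; it deliberately stops at morphine and codeine intake, since urine concentration, which is what the 300 ng/mL and 2000 ng/mL cutoffs apply to, depends on pharmacokinetic factors the text does not specify.

```python
# Back-of-the-envelope check of the poppy-seed figures quoted above.
# All constants come from the text; nothing here models pharmacokinetics.

MORPHINE_UG_PER_G_SEEDS = 33.0  # up to 33 micrograms of morphine per gram of seeds
CODEINE_UG_PER_G_SEEDS = 14.0   # up to 14 micrograms of codeine per gram of seeds

servings = {
    "single poppy seed roll": 0.76,   # grams of seeds, per the text
    "slice of poppy seed cake": 5.0,  # "nearly five grams of seeds per slice"
}

for name, grams in servings.items():
    morphine_ug = grams * MORPHINE_UG_PER_G_SEEDS
    codeine_ug = grams * CODEINE_UG_PER_G_SEEDS
    print(f"{name}: up to {morphine_ug:.0f} ug morphine, {codeine_ug:.0f} ug codeine")
```

On these numbers, a cake slice delivers roughly six times the morphine of a single roll, which is consistent with the report above that larger servings produced positive results under the old 300 ng/mL cutoff while a single roll usually did not.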
**Surkål** Surkål: Surkål ('sour cabbage') is a traditional side dish where the main ingredient is cabbage. It is particularly common in Northern Europe. The cabbage is finely sliced and slowly cooked with caraway and cumin seeds, apple, vinegar, sugar, salt, and butter. Surkål is usually served together with pork. Surkål is not to be mistaken for sauerkraut, as it does not go through a fermentation process.
**Self-paced instruction** Self-paced instruction: Self-paced instruction is any kind of instruction that proceeds based on learner response. The content itself can be curriculum, corporate training, technical tutorials, or any other subject that does not require the immediate response of an instructor. Self-paced instruction is constructed in such a way that the learner proceeds from one topic or segment to the next at their own speed. This type of instruction is becoming increasingly popular as the education world shifts from the classroom to the Internet.
**Neodymium(III) sulfide** Neodymium(III) sulfide: Neodymium(III) sulfide is an inorganic chemical compound with the formula Nd2S3, composed of two neodymium atoms in the +3 oxidation state and three sulfur atoms in the −2 oxidation state. Like other rare earth sulfides, neodymium(III) sulfide is used as a high-performance inorganic pigment. Preparation: Neodymium(III) sulfide can be produced directly by reacting neodymium with sulfur: 2Nd + 3S → Nd2S3. It can also be produced by sulfidizing neodymium oxide with H2S at 1450 °C: Nd2O3 + 3 H2S → Nd2S3 + 3 H2O Properties: Neodymium(III) sulfide is (as the γ-form) a light green solid. The compound comes in three forms: the α-form has an orthorhombic crystal structure, the β-form a tetragonal crystal structure, and the γ-form a cubic crystal structure. At 1650 °C in a vacuum, the γ compound decomposes to form neodymium monosulfide. Neodymium(III) sulfide has a high melting point and many polymorphic forms, which make it difficult to grow. When heated, neodymium sulfide can lose sulfur atoms and can form a range of compositions between Nd2S3 and Nd3S4. Neodymium(III) sulfide is an electrical insulator.
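For reference, the two preparation routes above can be typeset as balanced equations, together with a quick charge-balance check (a minimal LaTeX sketch; plain \mathrm markup is used so no chemistry package is required):

```latex
% Direct synthesis from the elements:
\[ 2\,\mathrm{Nd} + 3\,\mathrm{S} \longrightarrow \mathrm{Nd_2S_3} \]
% Sulfidization of the oxide (carried out at 1450 degrees Celsius):
\[ \mathrm{Nd_2O_3} + 3\,\mathrm{H_2S} \longrightarrow \mathrm{Nd_2S_3} + 3\,\mathrm{H_2O} \]
% Charge balance with Nd in the +3 and S in the -2 oxidation state:
\[ 2(+3) + 3(-2) = 0 \]
```

Both equations balance atom counts on each side, and the zero net charge confirms the oxidation states stated above.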
**Quarto** Quarto: Quarto (abbreviated Qto, 4to or 4º) is the format of a book or pamphlet produced from full sheets printed with eight pages of text, four to a side, then folded twice to produce four leaves. The leaves are then trimmed along the folds to produce eight book pages. Each printed page presents as one-fourth size of the full sheet. Quarto: The earliest known European printed book is a quarto, the Sibyllenbuch, believed to have been printed by Johannes Gutenberg in 1452–53, before the Gutenberg Bible, surviving only as a fragment. Quarto is also used as a general description of size of books that are about 12 inches (30 cm) tall, and as such does not necessarily indicate the actual printing format of the books, which may even be unknown as is the case for many modern books. These terms are discussed in greater detail in book sizes. Quarto as format: A quarto (from Latin quārtō, ablative form of quārtus, fourth) is a book or pamphlet made up of one or more full sheets of paper on which 8 pages of text were printed, which were then folded two times to produce four leaves (the folding arithmetic is illustrated in the short sketch at the end of this article). Each leaf of a quarto book thus represents one fourth the size of the original sheet. Each group of 4 leaves (called a "gathering" or "quire") could be sewn through the central fold to attach it to the other gatherings to form a book. Sometimes, additional leaves would be inserted within another group to form, for example, gatherings of 8 leaves, which similarly would be sewn through the central fold. Generally, quartos have more squarish proportions than folios or octavos. There are variations in how quartos were produced. For example, bibliographers call a book printed as a quarto (four leaves per full sheet) but bound in gatherings of 8 leaves each a "quarto in 8s." The actual size of a quarto book depends on the size of the full sheet of paper on which it was printed. A demy quarto (abbreviated demy 4to) is a chiefly British term referring to a book size of about 11.25 by 8.75 inches (286 by 222 mm), a medium quarto 9 by 11.5 inches (230 by 290 mm), a royal quarto 10 by 12.5 inches (250 by 320 mm), and a small quarto equalled a square octavo, all untrimmed. The earliest surviving books printed by movable type by Gutenberg are quartos, which were printed before the Gutenberg Bible. The earliest known one is a fragment of a medieval poem called the Sibyllenbuch, believed to have been printed by Gutenberg in 1452–53. Quartos were the most common format of books printed in the incunabula period (books printed before 1501). The British Library Incunabula Short Title Catalogue currently lists about 28,100 different editions of surviving books, pamphlets and broadsides (some fragmentary only) printed before 1501, of which about 14,360 are quartos, representing just over half of all works in the catalogue. Quarto as size: Beginning in the mid-nineteenth century, technology permitted the manufacture of large sheets or rolls of paper on which books were printed, many text pages at a time. As a result, it may be impossible to determine the actual format (i.e., number of leaves formed from each sheet fed into a press). The term "quarto" as applied to such books may refer simply to the size, i.e., books that are approximately 10 inches (250 mm) tall by 8 inches (200 mm) wide. Quartos for separate plays and poems: During the Elizabethan era and through the mid-seventeenth century, plays and poems were commonly printed as separate works in quarto format.
Eighteen of Shakespeare's 36 plays included in the First Folio collected edition of 1623 were previously separately printed as quartos, with a single exception that was printed in octavo. For example, Shakespeare's Henry IV, Part 1, the most popular play of the era, was first published as a quarto in 1598, with a second quarto edition in 1599, followed by a number of subsequent quarto editions. Bibliographers have extensively studied these different editions, which they refer to by abbreviations such as Q1, Q2, etc. The texts of some of the Shakespeare quartos are highly inaccurate, full of errors and omissions. Bibliographer Alfred W. Pollard named those editions "bad quartos", and it is speculated that they may have been produced not from manuscript texts, but from actors who had memorized their lines. Quartos for separate plays and poems: Other playwrights in this period also published their plays in quarto editions. Christopher Marlowe's Doctor Faustus, for example, was published as a quarto in 1604 (Q1), with a second quarto edition in 1609. The same is true of poems, Shakespeare's poem Venus and Adonis being first printed as a quarto in 1593 (Q1), with a second quarto edition (Q2) in 1594. Quartos for separate plays and poems: In Spanish culture, a similar concept of separate editions of plays is known as comedia suelta.
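The format arithmetic described above (full sheet, folds, leaves, pages) is mechanical enough to express in a few lines. A minimal sketch in Python follows; the format-to-folds table is the standard one implied by the text, not an exhaustive list of historical formats:

```python
# Folding arithmetic for common book formats: each fold doubles the number
# of leaves cut from one full sheet, and each leaf carries two pages
# (recto and verso).
formats = {
    "folio": 1,   # folded once   -> 2 leaves, 4 pages per sheet
    "quarto": 2,  # folded twice  -> 4 leaves, 8 pages per sheet
    "octavo": 3,  # folded thrice -> 8 leaves, 16 pages per sheet
}

for name, folds in formats.items():
    leaves = 2 ** folds
    pages = 2 * leaves
    print(f"{name}: {folds} fold(s) -> {leaves} leaves, {pages} pages per sheet")
```

For a quarto this gives 2 folds, 4 leaves, and 8 pages, matching the description above; a "quarto in 8s" changes only the binding (gatherings of 8 leaves), not this printing arithmetic.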
**Thermonuclear fusion** Thermonuclear fusion: Nuclear fusion is a reaction in which two or more atomic nuclei, usually deuterium and tritium (isotopes of hydrogen), are combined to form one atomic nucleus and subatomic particles (neutrons or protons). The difference in mass between the reactants and products is manifested as either the release or absorption of energy. This difference in mass arises due to the difference in nuclear binding energy between the atomic nuclei before and after the reaction. Nuclear fusion is the process that powers active or main-sequence stars and other high-magnitude stars, where large amounts of energy are released. Thermonuclear fusion: A nuclear fusion process that produces atomic nuclei lighter than iron-56 or nickel-62 will generally release energy. These elements have a relatively small mass and a relatively large binding energy per nucleon. Fusion of nuclei lighter than these releases energy (an exothermic process), while the fusion of heavier nuclei results in energy retained by the product nucleons, and the resulting reaction is endothermic. The opposite is true for the reverse process, called nuclear fission. Nuclear fusion uses lighter elements, such as hydrogen and helium, which are in general more fusible; while the heavier elements, such as uranium, thorium and plutonium, are more fissionable. The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements heavier than iron. History: In 1921, Arthur Eddington suggested hydrogen–helium fusion could be the primary source of stellar energy. Quantum tunneling was discovered by Friedrich Hund in 1927, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to demonstrate that large amounts of energy could be released by fusing small nuclei. Building on the early experiments in artificial nuclear transmutation by Patrick Blackett, laboratory fusion of hydrogen isotopes was accomplished by Mark Oliphant in 1932. In the remainder of that decade, the theory of the main cycle of nuclear fusion in stars was worked out by Hans Bethe. Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project. Self-sustaining nuclear fusion was first carried out on 1 November 1952, in the Ivy Mike hydrogen (thermonuclear) bomb test. History: While fusion was achieved in the operation of the hydrogen bomb (H-bomb), the reaction must be controlled and sustained in order for it to be a useful energy source. Research into developing controlled fusion inside fusion reactors has been ongoing since the 1930s, but the technology is still in its developmental phase. The US National Ignition Facility, which uses laser-driven inertial confinement fusion, was designed with a goal of break-even fusion; the first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early 2011. On 13 December 2022, the United States Department of Energy announced that on 5 December 2022, they had successfully accomplished break-even fusion, "delivering 2.05 megajoules (MJ) of energy to the target, resulting in 3.15 MJ of fusion energy output." Prior to this breakthrough, controlled fusion reactions had been unable to produce break-even (self-sustaining) fusion. The two most advanced approaches for it are magnetic confinement (toroid designs) and inertial confinement (laser designs).
Workable designs for a toroidal reactor that theoretically will deliver ten times more fusion energy than the amount needed to heat the plasma to the required temperatures are in development (see ITER). The ITER facility is expected to finish its construction phase in 2025. It will start commissioning the reactor that same year and initiate plasma experiments in 2025, but is not expected to begin full deuterium–tritium fusion until 2035. Private companies pursuing the commercialization of nuclear fusion received $2.6 billion in private funding in 2021 alone, going to many notable startups including but not limited to Commonwealth Fusion Systems, Helion Energy Inc., General Fusion, TAE Technologies Inc. and Zap Energy Inc. Process: The release of energy with the fusion of light elements is due to the interplay of two opposing forces: the nuclear force, a manifestation of the strong interaction, which holds protons and neutrons tightly together in the atomic nucleus; and the Coulomb force, which causes positively charged protons in the nucleus to repel each other. Lighter nuclei (nuclei smaller than iron and nickel) are sufficiently small and proton-poor to allow the nuclear force to overcome the Coulomb force. This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up nuclei from lighter nuclei by fusion releases the extra energy from the net attraction of particles. For larger nuclei, however, no energy is released, because the nuclear force is short-range and cannot act across larger nuclei. Process: Fusion powers stars and produces virtually all elements in a process called nucleosynthesis. The Sun is a main-sequence star, and, as such, generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen and makes 616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always accompanies it. For example, in the fusion of two hydrogen nuclei to form helium, 0.645% of the mass is carried away in the form of kinetic energy of an alpha particle or other forms of energy, such as electromagnetic radiation. It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be brought close enough such that the attractive nuclear force is greater than the repulsive Coulomb force. The strong force grows rapidly once the nuclei are close enough, and the fusing nucleons can essentially "fall" into each other and the result is fusion and net energy produced. The fusion of lighter nuclei, which creates a heavier nucleus and often a free neutron or proton, generally releases more energy than it takes to force the nuclei together; this is an exothermic process that can produce self-sustaining reactions. Energy released in most nuclear reactions is much larger than in chemical reactions, because the binding energy that holds a nucleus together is greater than the energy that holds electrons to a nucleus. For example, the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV, less than one-millionth of the 17.6 MeV released in the deuterium–tritium (D–T) reaction described below.
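The figures quoted in this section can be verified with one line of arithmetic each; a worked sketch, using only the numbers given in the text:

```latex
% Solar mass defect: 620 Mt of hydrogen in, 616 Mt of helium out, per second:
\[ \frac{620 - 616}{620} \approx 0.00645 \approx 0.645\% \]
% Hydrogen ionization energy vs. D-T fusion energy:
\[ \frac{13.6\ \mathrm{eV}}{17.6\ \mathrm{MeV}} = \frac{13.6}{17.6 \times 10^{6}} \approx 7.7 \times 10^{-7} < 10^{-6} \]
% NIF target gain from the 2022 result quoted earlier:
\[ Q = \frac{3.15\ \mathrm{MJ}}{2.05\ \mathrm{MJ}} \approx 1.5 > 1 \]
```

The first line reproduces the 0.645% mass-defect figure, the second confirms that the ionization energy is indeed below one-millionth of the D–T yield, and the third shows why the December 2022 shot counted as break-even.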
Fusion reactions have an energy density many times greater than nuclear fission; the reactions produce far greater energy per unit of mass even though individual fission reactions are generally much more energetic than individual fusion ones, which are themselves millions of times more energetic than chemical reactions. Only direct conversion of mass into energy, such as that caused by the annihilatory collision of matter and antimatter, is more energetic per unit of mass than nuclear fusion. (The complete conversion of one gram of matter would release 9×10^13 joules of energy.) In stars: An important fusion process is the stellar nucleosynthesis that powers stars, including the Sun. In the 20th century, it was recognized that the energy released from nuclear fusion reactions accounts for the longevity of stellar heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei. Different reaction chains are involved, depending on the mass of the star (and therefore the pressure and temperature in its core). In stars: Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was unknown; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see metallicity). Eddington's paper reasoned that: The leading theory of stellar energy, the contraction hypothesis, should cause the rotation of a star to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening. In stars: The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy. In stars: Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom (according to the then-prevailing theory of atomic structure which held atomic weight to be the distinguishing property between elements; work by Henry Moseley and Antonius van den Broek would later show that nuclear charge was the distinguishing property and that a helium nucleus, therefore, consisted of two hydrogen nuclei plus additional mass). This suggested that if such a combination could happen, it would release considerable energy as a byproduct. In stars: If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy. (It is now known that most 'ordinary' stars contain far more than 5% hydrogen.) Further elements might also be fused, and other scientists had speculated that stars were the "crucible" in which light elements combined to create heavy elements, but without more accurate measurements of their atomic masses nothing more could be said at the time. All of these speculations were proven correct in the following decades. In stars: The primary source of solar energy, and that of similar-size stars, is the fusion of hydrogen to form helium (the proton–proton chain reaction), which occurs at a solar-core temperature of 14 million kelvin.
The net result is the fusion of four protons into one alpha particle, with the release of two positrons and two neutrinos (which changes two of the protons into neutrons), and energy. In heavier stars, the CNO cycle and other processes are more important. As a star uses up a substantial fraction of its hydrogen, it begins to synthesize heavier elements. The heaviest elements are synthesized by fusion that occurs when a more massive star undergoes a violent supernova at the end of its life, a process known as supernova nucleosynthesis. Requirements: A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances, two naked nuclei repel one another because of the repulsive electrostatic force between their positively charged protons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be overcome by the quantum effect in which nuclei can tunnel through the Coulomb barrier. Requirements: When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbors due to the short range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface-area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum objects. So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the inclusion of quantum mechanics is therefore necessary for proper calculations. Requirements: The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy per nucleon due to the electrostatic force thus increases without limit as the atomic number of the nucleus grows. Requirements: The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei. Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons, corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are ⁶²Ni, ⁵⁸Fe, ⁵⁶Fe, and ⁶⁰Ni. Even though the nickel isotope, ⁶²Ni, is more stable, the iron isotope ⁵⁶Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create ⁶²Ni through the alpha process. Requirements: An exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium, the next heavier element. This is because protons and neutrons are fermions, which according to the Pauli exclusion principle cannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a nucleus can accommodate both a spin-up particle and a spin-down particle.
Helium-4 has an anomalously large binding energy because its nucleus consists of two protons and two neutrons (it is a doubly magic nucleus), so all four of its nucleons can be in the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus is so tightly bound that it is commonly treated as a single quantum mechanical particle in nuclear physics, namely, the alpha particle. Requirements: The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Not until the two nuclei actually come close enough for long enough so the strong nuclear force can take over (by way of tunneling) is the repulsive electrostatic force overcome. Consequently, even when the final energy state is lower, there is a large energy barrier that must first be overcome. It is called the Coulomb barrier. Requirements: The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge. A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its extremely tight binding, is one of the products. Requirements: Using deuterium–tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to remove an electron from hydrogen is 13.6 eV. The (intermediate) result of the fusion is an unstable ⁵He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the remaining ⁴He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier. Requirements: The reaction cross section (σ) is a measure of the probability of a fusion reaction as a function of the relative velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then it is useful to perform an average over the distributions of the product of cross-section and velocity. This average is called the 'reactivity', denoted ⟨σv⟩. The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities: f = n₁n₂⟨σv⟩. Requirements: If a species of nuclei is reacting with a nucleus like itself, such as the D–D reaction, then the product n₁n₂ must be replaced by n²/2. ⟨σv⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10–100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state. Requirements: The significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current advanced technical state.
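As a minimal illustration of the rate formula above, the sketch below (in C) evaluates f = n₁n₂⟨σv⟩ for a two-species fuel and applies the n²/2 substitution for a single-species fuel. The densities and the reactivity value are hypothetical round numbers chosen only for the example, not measured data.

```c
#include <stdio.h>

/* Minimal sketch of the volumetric fusion reaction rate f = n1*n2*<sigma v>.
   All input values below are illustrative placeholders, not measured data. */
int main(void) {
    double n1 = 5.0e13;        /* deuterium density, cm^-3 (assumed) */
    double n2 = 5.0e13;        /* tritium density, cm^-3 (assumed)   */
    double sigma_v = 1.0e-16;  /* reactivity <sigma v>, cm^3/s; order of
                                  magnitude typical of D-T near 10 keV */

    /* Two distinct species: rate per volume is n1*n2*<sigma v>. */
    double f_dt = n1 * n2 * sigma_v;

    /* Single species (e.g. D-D): replace n1*n2 with n^2/2 to avoid
       double-counting identical reactant pairs. */
    double n = n1 + n2;                   /* all nuclei are deuterium now */
    double f_dd = 0.5 * n * n * sigma_v;  /* same <sigma v> reused only for illustration */

    printf("D-T rate: %.3e fusions/cm^3/s\n", f_dt);
    printf("D-D rate (illustrative): %.3e fusions/cm^3/s\n", f_dd);
    return 0;
}
```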
Artificial fusion: Thermonuclear fusion Thermonuclear fusion is the process of atomic nuclei combining or "fusing" using high temperatures to drive them close enough together for this to become possible. Such temperatures cause the matter to become a plasma and, if confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. There are two forms of thermonuclear fusion: uncontrolled, in which the resulting energy is released in an uncontrolled manner, as it is in thermonuclear weapons ("hydrogen bombs") and in most stars; and controlled, where the fusion reactions take place in an environment allowing some or all of the energy released to be harnessed for constructive purposes. Artificial fusion: Temperature is a measure of the average kinetic energy of particles, so heating the material increases the energy of its particles. After reaching sufficient temperature, given by the Lawson criterion, the energy of accidental collisions within the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together. In a deuterium–tritium fusion reaction, for example, the energy necessary to overcome the Coulomb barrier is 0.1 MeV. Converting between energy and temperature shows that the 0.1 MeV barrier would be overcome at a temperature in excess of 1.2 billion kelvin. Artificial fusion: Two effects lower the actual temperature needed. One is the fact that temperature is the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy than 0.1 MeV, while others would be much lower. It is the nuclei in the high-energy tail of the velocity distribution that account for most of the fusion reactions. The other effect is quantum tunnelling. The nuclei do not actually have to have enough energy to overcome the Coulomb barrier completely. If they have nearly enough energy, they can tunnel through the remaining barrier. For these reasons, fuel at lower temperatures will still undergo fusion events, at a lower rate.
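The energy-to-temperature conversion quoted above can be reproduced directly with T = E/k_B; a minimal sketch of the arithmetic:

```c
#include <stdio.h>

/* Check of the energy <-> temperature conversion used above: T = E / k_B. */
int main(void) {
    const double eV_to_J = 1.602176634e-19;  /* J per eV (exact since 2019 SI) */
    const double k_B = 1.380649e-23;         /* Boltzmann constant, J/K (exact) */

    double barrier_eV = 0.1e6;               /* 0.1 MeV Coulomb barrier */
    double T = barrier_eV * eV_to_J / k_B;   /* kelvin */

    printf("0.1 MeV corresponds to about %.2e K\n", T);  /* ~1.16e9 K */
    return 0;
}
```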
Artificial fusion: Thermonuclear fusion is one of the methods being researched in the attempts to produce fusion power. If thermonuclear fusion becomes a viable power source, it would significantly reduce the world's carbon footprint. Artificial fusion: Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or to accelerate two streams of ions towards each other, known as beam–beam fusion. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy on bremsstrahlung radiation and ionization of atoms in the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry, where they are used in measurement equipment for locating and mapping oil reserves. A number of attempts to recirculate the ions that "miss" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma, which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reversed configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies as of 2021. A closely related approach is to merge two FRCs rotating in opposite directions, which is being actively studied by Helion Energy. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p–¹¹B that are too difficult to attempt using conventional approaches. Artificial fusion: Muon-catalyzed fusion Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction has not been achieved because of the high energy required to create muons, their short 2.2 µs mean lifetime, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion. Artificial fusion: Other principles Some other confinement principles have been investigated. Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been studied primarily in the context of making nuclear pulse propulsion and pure fusion bombs feasible. This is nowhere near becoming a practical power source, due to the cost of manufacturing antimatter alone. Artificial fusion: Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated energy levels, the D–D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces. D–T fusion reactions have been observed with a tritiated erbium target. Artificial fusion: Nuclear fusion–fission hybrid (hybrid nuclear power) is a proposed means of generating power by use of a combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to the delays in the realization of pure fusion. Artificial fusion: Project PACER, carried out at Los Alamos National Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system is the only fusion power system that could be demonstrated to work using existing technology. However, it would also require a large, continuous supply of nuclear bombs, making the economics of such a system rather questionable. Artificial fusion: Bubble fusion, also called sonofusion, was a proposed mechanism for achieving fusion via sonic cavitation that rose to prominence in the early 2000s.
Subsequent attempts at replication failed and the principal investigator, Rusi Taleyarkhan, was judged guilty of research misconduct in 2008. Confinement in thermonuclear fusion: The key problem in achieving thermonuclear fusion is how to confine the hot plasma. Due to the high temperature, the plasma cannot be in direct contact with any solid material, so it has to be located in a vacuum. Also, high temperatures imply high pressures. The plasma tends to expand immediately and some force is necessary to act against it. This force can take one of three forms: gravitation in stars, magnetic forces in magnetic confinement fusion reactors, or inertial forces, as the fusion reaction may occur before the plasma starts to expand, so that the plasma's inertia keeps the material together. Confinement in thermonuclear fusion: Gravitational confinement One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity. The mass needed, however, is so great that gravitational confinement is only found in stars—the least massive stars capable of sustained fusion are red dwarfs, while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough, after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon. In the most massive stars (at least 8–11 solar masses), the process is continued until some of their energy is produced by fusing lighter elements to iron. As iron has one of the highest binding energies, reactions producing heavier elements are generally endothermic. Therefore, significant amounts of heavier elements are not formed during stable periods of massive star evolution, but are formed in supernova explosions. Some lighter stars also form these elements in their outer layers over long periods of time, by absorbing neutrons emitted from the fusion processes in their interiors. Confinement in thermonuclear fusion: All of the elements heavier than iron have some potential energy to release, in theory. At the extremely heavy end of element production, these heavier elements can produce energy in the process of being split again back toward the size of iron, in the process of nuclear fission. Nuclear fission thus releases energy that has been stored, sometimes billions of years before, during stellar nucleosynthesis. Confinement in thermonuclear fusion: Magnetic confinement Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre). The fusion fuel can therefore be trapped using a strong magnetic field. A variety of magnetic configurations exist, including the toroidal geometries of tokamaks and stellarators and open-ended mirror confinement systems. Confinement in thermonuclear fusion: Inertial confinement A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion fuel, causing it to simultaneously "implode" and heat to very high pressure and temperature. If the fuel is dense enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before it has dissipated. To achieve these extreme conditions, the initially cold fuel must be explosively compressed. Inertial confinement is used in the hydrogen bomb, where the driver is x-rays created by a fission bomb.
Inertial confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser, ion, or electron beam, or a Z-pinch. Another method is to use conventional high explosive material to compress a fuel to fusion conditions. The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused hemispherical implosions to generate neutrons from D–D reactions. The simplest and most direct method proved to be a predetonated stoichiometric mixture of deuterium and oxygen. The other successful method was using a miniature Voitenko compressor, where a plane diaphragm was driven by the implosion wave into a secondary small spherical cavity that contained pure deuterium gas at one atmosphere. Confinement in thermonuclear fusion: Electrostatic confinement There are also electrostatic confinement fusion devices. These devices confine ions using electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are very low due to competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a Penning trap and the polywell. The technology is relatively immature, however, and many scientific and engineering questions remain. Confinement in thermonuclear fusion: The best-known inertial electrostatic confinement (IEC) approach is the fusor. Starting in 1999, a number of amateurs have achieved fusion using these homemade devices. Other IEC devices include: the Polywell, MIX POPS and Marble concepts. Important reactions: Stellar reaction chains At the temperatures and densities in stellar cores, the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm³), the energy release rate is only 276 μW/cm³—about a quarter of the volumetric rate at which a resting human body generates heat. Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The fusion rate's roughly exponential dependence on temperature (exp(−E/kT)) leads to the need to achieve temperatures in terrestrial reactors 10–100 times higher than in stellar interiors: T ≈ (0.1–1.0)×10⁹ K. Important reactions: Criteria and candidates for terrestrial reactions In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so reactions with larger cross-sections are chosen. Another concern is the production of neutrons, which activate the reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy and tritium breeding. Reactions that release no neutrons are referred to as aneutronic. Important reactions: To be a useful energy source, a fusion reaction must satisfy several criteria. It must: Be exothermic This limits the reactants to the low Z (number of protons) side of the curve of binding energy.
It also makes helium ⁴He the most common product because of its extraordinarily tight binding, although ³He and ³H also show up. Involve low atomic number (Z) nuclei This is because the electrostatic repulsion that must be overcome before the nuclei are close enough to fuse is directly related to the number of protons a nucleus contains – its atomic number. Have two reactants At anything less than stellar densities, three-body collisions are too improbable. In inertial confinement, both stellar densities and temperatures are exceeded to compensate for the shortcomings of the third parameter of the Lawson criterion, ICF's very short confinement time. Have two or more products This allows simultaneous conservation of energy and momentum without relying on the electromagnetic force. Important reactions: Conserve both protons and neutrons The cross sections for the weak interaction are too small. Few reactions meet these criteria. The following are those with the largest cross sections: For reactions with two products, the energy is divided between them in inverse proportion to their masses, as shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in more than one set of products, the branching ratios are given. Important reactions: Some reaction candidates can be eliminated at once. The D–⁶Li reaction has no advantage compared to p⁺–¹¹₅B because it is roughly as difficult to burn but produces substantially more neutrons through ²₁D–²₁D side reactions. There is also a p⁺–⁷₃Li reaction, but the cross section is far too low, except possibly when Tᵢ > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally, there is also a p⁺–⁹₄Be reaction, which is not only difficult to burn, but ⁹₄Be can be easily induced to split into two alpha particles and a neutron. Important reactions: In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium in "dry" fusion bombs and some proposed fusion reactors: ⁶Li + n → T + ⁴He and ⁷Li + n → T + ⁴He + n. The latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954. Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo "Shrimp" had understood the usefulness of ⁶Li in tritium production, but had failed to recognize that ⁷Li fission would greatly increase the yield of the bomb. While ⁷Li has a small neutron cross-section for low neutron energies, it has a higher cross section above 5 MeV. The 15 Mt yield was 150% greater than the predicted 6 Mt and caused unexpected exposure to fallout. Important reactions: To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released, one needs to know something about the nuclear cross section. Any given fusion device has a maximum plasma pressure it can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is chosen so that ⟨σv⟩/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/T² (see Lawson criterion). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.)
This optimum temperature and the value of ⟨σv⟩/T² at that temperature are given for a few of these reactions in the following table. Important reactions: Note that many of the reactions form chains. For instance, a reactor fueled with ³₁T and ³₂He creates some ²₁D, which is then possible to use in the ²₁D–³₂He reaction if the energies are "right". An elegant idea is to combine the reactions (8) and (9). The ³₂He from reaction (8) can react with ⁶₃Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate. Important reactions: Abundance of the nuclear fusion fuels Neutronicity, confinement requirement, and power density Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products Efus, the energy of the charged fusion products Ech, and the atomic number Z of the non-hydrogenic reactant. Important reactions: Specification of the ²₁D–²₁D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the ³₁T and ³₂He products. ³₁T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The ²₁D–³₂He reaction is optimized at a much higher temperature, so the burnup at the optimum ²₁D–²₁D temperature may be low. Therefore, it seems reasonable to assume the ³₁T but not the ³₂He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1): 5 ²₁D → ⁴₂He + 2 n⁰ + ³₂He + p⁺, with Efus = 4.03 + 17.6 + 3.27 = 24.9 MeV and Ech = 4.03 + 3.5 + 0.82 = 8.35 MeV. For calculating the power of a reactor (in which the reaction rate is determined by the D–D step), we count the ²₁D–²₁D fusion energy per D–D reaction as Efus = (4.03 MeV + 17.6 MeV) × 50% + (3.27 MeV) × 50% = 12.5 MeV and the energy in charged particles as Ech = (4.03 MeV + 3.5 MeV) × 50% + (0.82 MeV) × 50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV, so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium). Important reactions: Another unique aspect of the ²₁D–²₁D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate. Important reactions: With this choice, we tabulate parameters for four of the most important reactions. The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is an important indicator of the magnitude of the problems associated with neutrons like radiation damage, biological shielding, remote handling, and safety. For the first two reactions it is calculated as (Efus − Ech)/Efus. For the last two reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions that produce neutrons in a plasma in thermal equilibrium.
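The bookkeeping in this section can be checked mechanically; the sketch below reproduces the 12.5 MeV, 4.2 MeV, and 5.0 MeV figures and the D–D neutronicity (Efus − Ech)/Efus from the branch energies quoted above:

```c
#include <stdio.h>

/* Reproduces the D-D energy bookkeeping from the text: the two branches are
   averaged 50/50, the tritium branch is credited with the follow-on D-T
   energy, and neutronicity is (Efus - Ech)/Efus. */
int main(void) {
    /* Branch (2i): D + D -> T + p, 4.03 MeV; the T then burns via D-T,
       adding 17.6 MeV (3.5 MeV of which is the charged alpha).
       Branch (2ii): D + D -> 3He + n, 3.27 MeV, with 0.82 MeV carried by
       the charged 3He. */
    double Efus = 0.5 * (4.03 + 17.6) + 0.5 * 3.27;  /* ~12.5 MeV per D-D step */
    double Ech  = 0.5 * (4.03 + 3.5)  + 0.5 * 0.82;  /* ~4.2 MeV charged products */

    double per_deuteron = Efus * 2.0 / 5.0;          /* five deuterons per chain */
    double neutronicity = (Efus - Ech) / Efus;

    printf("Efus = %.1f MeV, Ech = %.2f MeV\n", Efus, Ech);
    printf("Energy per deuteron consumed = %.1f MeV\n", per_deuteron);
    printf("Neutronicity = %.2f\n", neutronicity);
    return 0;
}
```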
Important reactions: Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means that the particle density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/(Z + 1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/T². On the other hand, because the ²₁D–²₁D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction. Important reactions: Thus there is a "penalty" of 2/(Z + 1) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode", the "penalty" would not apply.) There is at the same time a "bonus" of a factor 2 for ²₁D–²₁D because each ion can react with any of the other ions, not just a fraction of them. Important reactions: We can now compare these reactions in the following table. Important reactions: The maximum value of ⟨σv⟩/T² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "inverse reactivity" are found by dividing 1.24×10⁻²⁴ by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the ²₁D–³₁T reaction under comparable conditions. The column "Lawson criterion" weights these results with Ech and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the ²₁D–³₁T reaction. The next-to-last column is labeled "power density" and weights the practical reactivity by Efus. The final column indicates how much lower the fusion power density of the other reactions is compared to the ²₁D–³₁T reaction and can be considered a measure of the economic potential.
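The 2/(Z + 1) penalty is simple enough to tabulate directly; a minimal sketch (fuel labels are illustrative, with Z the atomic number of the non-hydrogenic reactant):

```c
#include <stdio.h>

/* Illustrates the 2/(Z+1) pressure "penalty" for non-hydrogenic fuels and
   the factor-2 "bonus" for a single-species fuel such as D-D. */
int main(void) {
    struct { const char *fuel; int Z; } fuels[] = {
        { "D-T   (Z=1)", 1 },   /* hydrogenic: factor 2/(1+1) = 1, no penalty */
        { "D-3He (Z=2)", 2 },
        { "p-6Li (Z=3)", 3 },
        { "p-11B (Z=5)", 5 },
    };
    for (int i = 0; i < 4; i++) {
        double penalty = 2.0 / (fuels[i].Z + 1);
        printf("%-12s penalty factor = %.3f\n", fuels[i].fuel, penalty);
    }
    /* D-D: one reactant species, so the rate instead gains a factor of 2. */
    printf("D-D          bonus factor   = 2.000\n");
    return 0;
}
```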
This ratio is generally maximized at a much higher temperature than that which maximizes the power density (see the previous subsection). The following table shows estimates of the optimum temperature and the power ratio at that temperature for several reactions: The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one, the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However, because the fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products themselves must remain in the plasma until they have given up their energy, and will remain for some time after that in any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too. Important reactions: The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the temperature that maximizes the power density and minimizes the required value of the fusion triple product. This will not change the optimum operating point for ²₁D–³₁T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to ²₁D–³₁T is even lower and the required confinement even more difficult to achieve. For ²₁D–²₁D and ²₁D–³₂He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For ³₂He–³₂He, p⁺–⁶₃Li and p⁺–¹¹₅B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma have been considered but rejected. This limitation does not apply to non-neutral and anisotropic plasmas; however, these have their own challenges to contend with. Mathematical description of cross section: Fusion under classical physics In a classical picture, nuclei can be understood as hard spheres that repel each other through the Coulomb force but fuse once the two spheres come close enough for contact. Estimating the radius of an atomic nucleus as about one femtometer, the energy needed for the fusion of two hydrogen nuclei is: ε_thresh = e²/(4πε₀ r) ≈ 1.4 MeV. This would imply that for the core of the sun, which has a Boltzmann distribution with a temperature of around 1.4 keV, the probability that hydrogen would reach this threshold is about 10⁻²⁹⁰; that is, fusion would essentially never occur. However, fusion in the sun does occur due to quantum mechanics.
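The classical threshold quoted above follows from the Coulomb potential energy of two touching protons; a minimal sketch of the arithmetic:

```c
#include <stdio.h>

/* Classical Coulomb barrier for two protons touching at r ~ 1 fm:
   E = e^2 / (4*pi*eps0*r), converted from joules to MeV. */
int main(void) {
    const double e    = 1.602176634e-19;   /* elementary charge, C */
    const double eps0 = 8.8541878128e-12;  /* vacuum permittivity, F/m */
    const double pi   = 3.14159265358979;
    double r = 1.0e-15;                    /* ~1 femtometer separation */

    double E_joule = e * e / (4.0 * pi * eps0 * r);
    double E_MeV   = E_joule / e / 1.0e6;  /* J -> eV -> MeV */

    printf("Classical barrier: %.2f MeV\n", E_MeV);  /* ~1.4 MeV */
    return 0;
}
```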
Mathematical description of cross section: Parameterization of cross section The probability that fusion occurs is greatly increased compared to the classical picture, thanks to the smearing of the effective radius as the de Broglie wavelength as well as quantum tunneling through the potential barrier. To determine the rate of fusion reactions, the value of most interest is the cross section, which describes the probability that particles will fuse by giving a characteristic area of interaction. An estimation of the fusion cross-sectional area is often broken into three pieces: σ ≈ σ_geometry × T × R, where σ_geometry is the geometric cross section, T is the barrier transparency and R is the reaction characteristics of the reaction. Mathematical description of cross section: σ_geometry is of the order of the square of the de Broglie wavelength, σ_geometry ≈ λ² = (ℏ/(m_r v))² ∝ 1/ε, where m_r is the reduced mass of the system and ε is the center-of-mass energy of the system. T can be approximated by the Gamow transparency, which has the form T ≈ e^(−√(ε_G/ε)), where ε_G = (παZ₁Z₂)² × 2m_r c² is the Gamow factor and comes from estimating the quantum tunneling probability through the potential barrier. Mathematical description of cross section: R contains all the nuclear physics of the specific reaction and takes very different values depending on the nature of the interaction. However, for most reactions, the variation of R(ε) is small compared to the variation from the Gamow factor and so is approximated by a function called the astrophysical S-factor, S(ε), which is weakly varying in energy. Putting these dependencies together, one approximation for the fusion cross section as a function of energy takes the form: σ(ε) ≈ (S(ε)/ε) e^(−√(ε_G/ε)). More detailed forms of the cross-section can be derived through nuclear physics-based models and R-matrix theory. Mathematical description of cross section: Formulas of fusion cross sections The Naval Research Lab's plasma physics formulary gives the total cross section in barns as a function of the energy ε (in keV) of the incident particle towards a target ion at rest, fit by the formula: σ_NRL(ε) = (A₅ + A₂/((A₄ − A₃ε)² + 1)) / (ε(e^(A₁/√ε) − 1)), with coefficient values tabulated for each reaction. Bosch–Hale also reports R-matrix-calculated cross sections fitting observational data with Padé rational approximating coefficients. With energy in units of keV and cross sections in units of millibarns, the factor has the form: S_Bosch–Hale(ε) = (A₁ + ε(A₂ + ε(A₃ + ε(A₄ + εA₅)))) / (1 + ε(B₁ + ε(B₂ + ε(B₃ + εB₄)))), with tabulated coefficient values, where σ_Bosch–Hale(ε) = S_Bosch–Hale(ε) / (ε exp(ε_G/√ε)). Maxwell-averaged nuclear cross sections In fusion systems that are in thermal equilibrium, the particles are in a Maxwell–Boltzmann distribution, meaning the particles have a range of energies centered around the plasma temperature. The sun, magnetically confined plasmas and inertial confinement fusion systems are well modeled to be in thermal equilibrium. In these cases, the value of interest is the fusion cross-section averaged across the Maxwell–Boltzmann distribution. The Naval Research Lab's plasma physics formulary tabulates Maxwell-averaged fusion cross-section reactivities in cm³/s. For energies T ≤ 25 keV the data can be represented by: ⟨σv⟩_DD = 2.33×10⁻¹⁴ T^(−2/3) exp(−18.76 T^(−1/3)) cm³/s and ⟨σv⟩_DT = 3.68×10⁻¹² T^(−2/3) exp(−19.94 T^(−1/3)) cm³/s, with T in units of keV.
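A short sketch of the two low-temperature reactivity fits quoted above (valid, as stated, only up to about 25 keV; the coefficients are those given in the text; compile with -lm):

```c
#include <stdio.h>
#include <math.h>

/* Low-temperature (T <= 25 keV) fits for Maxwell-averaged reactivities, as
   quoted in the text from the NRL plasma physics formulary.
   T in keV, result in cm^3/s. */
static double sigmav_dd(double T) {
    return 2.33e-14 * pow(T, -2.0 / 3.0) * exp(-18.76 * pow(T, -1.0 / 3.0));
}
static double sigmav_dt(double T) {
    return 3.68e-12 * pow(T, -2.0 / 3.0) * exp(-19.94 * pow(T, -1.0 / 3.0));
}

int main(void) {
    double temps[] = { 2.0, 5.0, 10.0, 20.0 };  /* sample temperatures, keV */
    for (int i = 0; i < 4; i++) {
        double T = temps[i];
        printf("T = %5.1f keV: <sv>_DD = %.2e, <sv>_DT = %.2e cm^3/s\n",
               T, sigmav_dd(T), sigmav_dt(T));
    }
    return 0;
}
```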
**Cycloparaphenylene** Cycloparaphenylene: A cycloparaphenylene is a molecule that consists of several benzene rings connected by covalent bonds in the para positions to form a hoop- or necklace-like structure. Its chemical formula is [C6H4]n or C6nH4n. Such a molecule is usually denoted [n]CPP, where n is the number of benzene rings. A cycloparaphenylene can be considered as the smallest possible armchair carbon nanotube, and is a type of carbon nanohoop. Cycloparaphenylenes are challenging targets for chemical synthesis due to the ring strain incurred from forcing benzene rings out of planarity. History: In 1934, V. C. Parekh and P. C. Guha described the first published attempt to synthesize a cycloparaphenylene, specifically [2]CPP. They connected two aromatic rings with a sulfide bridge, and hoped that removal of the latter would yield the desired compound. However, the attempt failed, as the compound would have been far too strained to exist under anything but extreme conditions. History: In 1993, Fritz Vögtle attempted to synthesize the less-strained [6]CPP and [8]CPP by the same approach. He produced a hoop of phenyl rings bridged together by a sulfur atom. However, his attempts to remove the sulfur failed too. They also synthesized a macrocycle that upon dehydrogenation would yield a CPP, but could not perform this final step. In 2000, Chandrasekhar and others concluded, by computational analysis, that [5]CPP and [6]CPP should be significantly different in their aromaticity. However, the synthesis in 2014 of [5]CPP refuted this conclusion. In 2008 the first cycloparaphenylenes were synthesized by Ramesh Jasti during his postdoctoral research in the lab of Carolyn Bertozzi. He used cyclohexa-1,4-dienes, which are closer in oxidation state to the desired phenylene than the cyclohexanes used previously by Vögtle. The first cycloparaphenylenes that were reported and characterized were [9]CPP, [12]CPP, and [18]CPP. In 2009, the Itami group reported the selective synthesis of [12]CPP, and shortly thereafter Yamago synthesized [8]CPP in 2010. The Jasti group then synthesized increasingly smaller CPPs using new methodology that allowed [7]CPP, [6]CPP, and finally [5]CPP to be reported in relatively quick succession. Properties: Structure The normal configuration of each phenylene element would be planar, with the bonds in the para position pointing opposite to each other in a straight line. Therefore, the cycloparaphenylene molecule is strained, and the strain increases as the number of units decreases. The strain energy of [5]CPP was calculated as 117.2 kcal/mol. In spite of the strain, the phenyl rings retain their aromatic character, even in [5]CPP. However, as the size of the CPP decreases, the HOMO-LUMO gap also decreases. This trend is opposite to that observed in linear polyparaphenylenes, where the HOMO-LUMO gap decreases as size increases. This causes a red-shift of the fluorescent emission. Properties: Solid-state packing Cycloparaphenylenes with 7 to 12 rings all adopt a herringbone-like packing in the solid state. A similar but denser structure was observed for [5]CPP, whereas [6]CPP forms columns. This columnar packing structure has been of interest due to a potentially high internal surface area. By partial fluorination, it was found that this packing geometry could be engineered. Synthesis: There are three main methods used for cycloparaphenylene synthesis.
Synthesis: Suzuki Coupling of Curved Oligophenylene Precursors In the initial synthesis, cycloparaphenylenes with n = 9, 12, and 18 were synthesized starting from macrocycles containing 1,4-syn-dimethoxy-2,5-cyclohexadiene units as masked aromatic rings. Lithium–halogen exchange with p-diiodobenzene followed by a two-fold nucleophilic addition reaction with 1,4-benzoquinone yielded a syn-cyclohexadiene moiety. Borylation of this material followed by macrocyclization under Suzuki–Miyaura cross-coupling with an equivalent of the diiodide produced macrocycles in low yields, which could be separated by column chromatography. These macrocycles were then reductively aromatized using sodium naphthalenide to yield [n]cycloparaphenylenes. Since this initial synthesis uses symmetric building blocks, it is challenging to use it to make smaller CPPs. Therefore, instead of benzoquinone, benzoquinone monomethyl ketal was used to allow the use of asymmetric building blocks. This innovation allowed the selective synthesis of [12]CPP to [5]CPP. [5]CPP is synthesized with an intramolecular boronate homocoupling technique that was originally seen as an undesired by-product of Suzuki–Miyaura cross-coupling reactions in the synthesis of [10]CPP. Synthesis: Cycloparaphenylenes now have selective, modular, and high-yielding synthetic pathways. Reductive Elimination of Platinum Macrocycles A quicker route to [8-13]CPPs starts by selectively building [8]CPP and [12]CPP from the reaction of 4,4′-bis(trimethylstannyl)biphenyl and 4,4′′-bis(trimethylstannyl)terphenyl, respectively, with Pt(cod)Cl2 (where cod is 1,5-cyclooctadiene) through square-shaped tetranuclear platinum intermediates. A mixture of [8-13]cycloparaphenylenes can be obtained in good combined yields by mixing biphenyl and terphenyl precursors with the platinum sources. Alkyne Cyclotrimerization A third, less used method, developed in the Tanaka group, uses rhodium-catalyzed alkyne cyclotrimerization for the synthesis of cycloparaphenylenes. Potential applications: Potential applications of cycloparaphenylenes include host–guest chemistry, seeds for carbon nanotube growth, and hybrid nanostructures containing nanohoop-type substituents. A cycloparaphenylene can be seen as a minimal single-walled carbon nanotube of the armchair type. As such, a cycloparaphenylene may be a seed for synthesis of longer nanotubes. Their electronic properties may also be useful. Potential applications: Fullerene binding Cycloparaphenylenes have shown affinity to fullerenes and other carbonaceous molecules, with interactions similar to those in carbon peapods. Potential applications of these structures include nanolasers, single-electron transistors, spin-qubit arrays for quantum computing, nanopipettes, and data storage devices. Specifically, the π-π interactions and the concave interior of the cycloparaphenylenes are expected to bind π-conjugated systems with convex surfaces that can fit inside the ring. Indeed, [10]CPP has been shown to selectively bind a C60 fullerene within its hole, thus producing a "molecular bearing". The fullerene remains in the ring long enough to be observed on the NMR timescale. The fluorescence of [10]CPP is quenched upon complexation with C60, which suggests its potential as a C60 sensor.
In 2018 this affinity was exploited to create CPP-fullerene rotaxanes. It has been observed that such "ball-in-hoop" interactions are stronger for endohedral metallofullerenes, in which a positively charged metal ion is trapped inside a fullerene cage and makes it more electronegative. Specifically, [12]CPP was found to preferentially enclose metallofullerenes instead of "empty" fullerenes, reducing their solubility in toluene, which provides a convenient separation method for the two species. Related compounds: As the synthesis of CPPs has become easier, derivative structures have begun to be synthesized as well. In 2013 the Itami group reported the synthesis of a nanocage made completely of benzene rings. This compound was especially interesting because it could be viewed as a junction of a branched nanotube structure. Other chiral derivatives of cycloparaphenylenes (which may serve as chemical templates for synthesizing chiral nanotubes) have also been characterized. Similar to the original (n,n) cycloparaphenylenes, these chiral nanorings also exhibit unusual optoelectronic properties, with excitation energies growing larger as a function of size; however, the (n+3,n+1) chiral nanoring exhibits larger photoinduced transitions compared to the original (n,n) cycloparaphenylenes, resulting in more readily observable optical properties in spectroscopic experiments. In 2012 the Jasti group reported the synthesis of dimers of [8]CPP linked by arene bridges. This synthesis was followed two years later by the synthesis of a directly connected dimer of [10]CPP from chloro[10]CPP by the Itami group. Related compounds: Donor–acceptor functionalization CPPs are unique in that their donor–acceptor properties can be adjusted with the addition or removal of each phenyl ring. In the all-carbon nanohoop systems a reduction in width corresponds to a higher HOMO and a lower LUMO. Additional donor–acceptor selectivity was observed by the addition of aromatic heterocycles into the larger ring. N-methylaza[n]CPP showed that the lowering of the LUMO could be enhanced by decreasing the ring size, while the HOMO energy level remained the same.
**G.988** G.988: ITU-T Recommendation G.988 defines a management and control interface for optical network units (ONU). It comprises one recommendation: Recommendation ITU-T G.988 specifies the optical network unit (ONU) management and control interface (OMCI) for optical access networks. It specifies the managed entities (MEs) of a protocol-independent management information base (MIB) that models the exchange of information between an optical line termination (OLT) and an ONU. In addition, it covers the ONU management and control channel, protocol and detailed messages. G.988: G.988, ONU management and control interface (OMCI) specification, 2010.
**Cooperstown cocktail** Cooperstown cocktail: The Cooperstown cocktail refers to a panel of four drug probes used in human pharmacokinetic studies to determine the activity of drug metabolising enzymes. The terminology 'cocktail' refers to the fact that the drug probes are given together. The Cooperstown cocktail consists of four drugs that are considered specific substrates for four cytochrome P450 (CYP) isoforms. One of the drugs (caffeine) provides, through its metabolites, substrates for two additional enzymes. Uses: The drugs and the enzymes they probe are as follows: caffeine (probes CYP1A2, N-acetyltransferase 2, xanthine oxidase), midazolam (probes CYP3A), omeprazole (probes CYP2C19) and dextromethorphan (probes CYP2D6). After giving the cocktail, the concentrations of the drugs and their metabolites in plasma (for midazolam and omeprazole) and urine (for caffeine and dextromethorphan) are determined at various times. By analysing these concentrations, it is possible to determine the activity (i.e. the phenotype) of the relevant enzyme. Caffeine can be used as a probe for three different enzymes by measuring several of its urinary metabolites and comparing their relative concentrations. Uses: The 'Cooperstown 5 + 1 cocktail', in addition to the four drug probes mentioned above, incorporates warfarin as well. Warfarin (actually the S-warfarin enantiomer) is a specific probe for CYP2C9. The '+ 1' refers to the vitamin K that is given together with the warfarin to prevent any anticoagulant effect. The Cooperstown cocktail and the Cooperstown 5 + 1 cocktail are powerful tools for investigating the activity of important drug metabolising enzymes. They are used in human drug interaction studies in which the ability of a study drug to inhibit or induce cytochrome P450 enzymes is studied.
**APUD cell** APUD cell: APUD cells (DNES cells) constitute a group of apparently unrelated endocrine cells, which were named by the scientist A.G.E. Pearse, who developed the APUD concept in the 1960s based on the calcitonin-secreting parafollicular C cells of dog thyroid. These cells share the common function of secreting a low-molecular-weight polypeptide hormone. There are several different types, which secrete hormones such as secretin and cholecystokinin, among several others. The name is derived from an acronym, referring to the following: Amine Precursor Uptake – for high uptake of amine precursors, including 5-hydroxytryptophan (5-HTP) and dihydroxyphenylalanine (DOPA). APUD cell: Decarboxylase – for high content of the enzyme amino acid decarboxylase (for conversion of precursors to amines). Cells in the APUD system: the adenohypophysis, neurons of the hypothalamus, chief cells of the parathyroid, adrenal medullary cells, glomus cells in the carotid body, melanocytes of the skin, cells of the pineal gland, and renin-producing cells in the kidney.
**Fractal dimension on networks** Fractal dimension on networks: Fractal analysis is useful in the study of complex networks, present in both natural and artificial systems such as computer systems, brain and social networks, allowing further development of the field in network science. Self-similarity of complex networks: Many real networks have two fundamental properties, the scale-free property and the small-world property. If the degree distribution of the network follows a power law, the network is scale-free; if any two arbitrary nodes in a network can be connected in a very small number of steps, the network is said to be small-world. The small-world properties can be mathematically expressed by the slow increase of the average diameter of the network with the total number of nodes N: ⟨l⟩ ∼ ln N, where l is the shortest distance between two nodes. Equivalently, we obtain: N ∼ e^(⟨l⟩/l0), where l0 is a characteristic length. For a self-similar structure, a power-law relation is expected rather than the exponential relation above. From this fact, it would seem that small-world networks are not self-similar under a length-scale transformation. Self-similarity has been discovered in the solvent-accessible surface areas of proteins. Because proteins form globular folded chains, this discovery has important implications for protein evolution and protein dynamics, as it can be used to establish characteristic dynamic length scales for protein functionality. The methods for calculation of the dimension: Generally we calculate the fractal dimension using either the box counting method or the cluster growing method. The methods for calculation of the dimension: The box counting method Let NB be the number of boxes of linear size lB needed to cover the given network. The fractal dimension dB is then given by NB ∼ lB^(−dB). This means that the average number of vertices ⟨MB(lB)⟩ within a box of size lB scales as ⟨MB(lB)⟩ ∼ lB^(dB). By measuring the distribution of NB for different box sizes or by measuring the distribution of ⟨MB(lB)⟩ for different box sizes, the fractal dimension dB can be obtained by a power-law fit of the distribution. The methods for calculation of the dimension: The cluster growing method One seed node is chosen randomly. If the minimum distance l is given, a cluster of nodes separated by at most l from the seed node can be formed. The procedure is repeated by choosing many seeds until the clusters cover the whole network. Then the dimension df can be calculated by ⟨MC⟩ ∼ l^(df), where ⟨MC⟩ is the average mass of the clusters, defined as the average number of nodes in a cluster. The methods for calculation of the dimension: These methods are difficult to apply to networks since networks are generally not embedded in another space. In order to measure the fractal dimension of networks we add the concept of renormalization. Fractal scaling in scale-free networks: Box-counting and renormalization To investigate self-similarity in networks, we use the box-counting method and renormalization. Fig.(3a) shows this procedure using a network composed of 8 nodes. Fractal scaling in scale-free networks: For each size lB, boxes are chosen randomly (as in the cluster growing method) until the network is covered. A box consists of nodes all separated by a distance of l < lB, that is, every pair of nodes in the box must be separated by a minimal path of at most lB links. Then each box is replaced by a node (renormalization). The renormalized nodes are connected if there is at least one link between the unrenormalized boxes.
This procedure is repeated until the network collapses to one node. Each of these boxes has an effective mass (the number of nodes in it) which can be used as shown above to measure the fractal dimension of the network. In Fig.(3b), renormalization is applied to a WWW network through three steps for lB = 3. Fractal scaling in scale-free networks: Fig.(5) shows the invariance of the degree distribution P(k) under the renormalization performed as a function of the box size on the World Wide Web. The networks are also invariant under multiple renormalizations applied for a fixed box size lB. This invariance suggests that the networks are self-similar on multiple length scales. Skeleton and fractal scaling The fractal properties of the network can be seen in its underlying tree structure. In this view, the network consists of the skeleton and the shortcuts. The skeleton is a special type of spanning tree, formed by the edges having the highest betweenness centralities, and the remaining edges in the network are shortcuts. Fractal scaling in scale-free networks: If the original network is scale-free, then its skeleton also follows a power-law degree distribution, where the degree can be different from the degree of the original network. For the fractal networks following fractal scaling, each skeleton shows fractal scaling similar to that of the original network. The number of boxes to cover the skeleton is almost the same as the number needed to cover the network. Real-world fractal networks: Since fractal networks and their skeletons follow the relation ⟨MB(lB)⟩ ∼ lB^(dB), we can investigate whether a network is fractal and what is the fractal dimension of the network. For example, the WWW, the human brain, the metabolic network, the protein interaction network (PIN) of H. sapiens, and the PIN of S. cerevisiae are considered fractal networks. Furthermore, the fractal dimensions measured are 4.1, 3.7, 3.4, 2.0 and 1.8 for these networks, respectively. On the other hand, the Internet, actor network, and artificial models (for instance, the BA model) do not show the fractal properties. Other definitions for network dimensions: The best definition of dimension for a complex network or graph depends on the application. For example, metric dimension is defined in terms of the resolving set for a graph. Definitions based on the scaling property of the "mass" as defined above with distance, or based on the complex network zeta function have also been studied.
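In practice, dB is usually extracted as the slope of a log-log plot of box counts against box sizes. The sketch below fits NB ∼ lB^(−dB) by least squares; the (lB, NB) pairs are synthetic values generated for dB = 2, purely for illustration.

```c
#include <stdio.h>
#include <math.h>

/* Estimates the box-counting dimension dB from the power law NB ~ lB^(-dB)
   via a least-squares fit of log(NB) against log(lB). Synthetic data. */
int main(void) {
    double lB[] = { 2, 3, 4, 6, 8 };
    double NB[] = { 2500, 1111, 625, 278, 156 };  /* ~ 10^4 / lB^2 */
    int n = 5;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        double x = log(lB[i]), y = log(NB[i]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    printf("Estimated dB = %.3f\n", -slope);  /* minus: NB falls as lB grows */
    return 0;
}
```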
**Bit field** Bit field: A bit field is a data structure that consists of one or more adjacent bits which have been allocated for specific purposes, so that any single bit or group of bits within the structure can be set or inspected. A bit field is most commonly used to represent integral types of known, fixed bit-width, such as single-bit Booleans. Bit field: The meaning of the individual bits within the field is determined by the programmer; for example, the first bit in a bit field (located at the field's base address) is sometimes used to determine the state of a particular attribute associated with the bit field. Within CPUs and other logic devices, collections of bit fields called flags are commonly used to control or to indicate the outcome of particular operations. Processors have a status register that is composed of flags. For example, if the result of an addition cannot be represented in the destination, an arithmetic overflow flag is set. The flags can be used to decide subsequent operations, such as conditional jump instructions. For example, a JE ... (Jump if Equal) instruction in the x86 assembly language will result in a jump if the Z (zero) flag was set by some previous operation. Bit field: A bit field is distinguished from a bit array in that the latter is used to store a large set of bits indexed by integers and is often wider than any integral type supported by the language. Bit fields, on the other hand, typically fit within a machine word, and the denotation of bits is independent of their numerical index. Implementation: Bit fields can be used to reduce memory consumption when a program requires a number of integer variables which always will have low values. For example, in many systems storing an integer value requires two bytes (16 bits) of memory; sometimes the values to be stored actually need only one or two bits. Having a number of these tiny variables share a bit field allows efficient packaging of data in memory. In C and C++, native implementation-defined bit fields can be created using unsigned int, signed int, or (in C99) _Bool. In this case, the programmer can declare a structure for a bit field which labels and determines the width of several subfields. Adjacently declared bit fields of the same type can then be packed by the compiler into a reduced number of words, compared with the memory used if each 'field' were to be declared separately. Implementation: For languages lacking native bit fields, or where the programmer wants control over the resulting bit representation, it is possible to manually manipulate bits within a larger word type. In this case, the programmer can set, test, and change the bits in the field using combinations of masking and bitwise operations. Examples: C programming language A bit field is declared in C and C++ as a struct whose members carry explicit bit widths (a combined sketch appears at the end of this article). The layout of bit fields in a C struct is implementation-defined. For behavior that remains predictable across compilers, it may be preferable to emulate bit fields with a primitive type and bit operators, as also shown in that sketch. Processor status register The status register of a processor is a bit field consisting of several flag bits. Each flag bit describes information about the processor's current state. As an example, the status register of the 6502 processor includes flags such as carry, zero, interrupt-disable, decimal, overflow, and negative. These bits are set by the processor following the result of an operation. Certain bits (such as the Carry, Interrupt-disable, and Decimal flags) may be explicitly controlled using set and clear instructions.
Additionally, branching instructions are defined to alter execution based on the current state of a flag. For instance, after an ADC (Add with Carry) instruction, the BVS (Branch on oVerflow Set) instruction may be used to branch depending on whether the overflow flag was set by the addition. Examples: Extracting bits from flag words A subset of flags in a flag field may be extracted by ANDing with a mask. A large number of languages support the shift operator (<<), where 1 << n aligns a single bit to the nth position. Most also support the use of the AND operator (&) to isolate the value of one or more bits. Suppose the status byte from a device is 0x67 and the 5th flag bit indicates data-ready; the corresponding mask byte is 0x20. ANDing the status byte 0x67 (0110 0111 in binary) with the mask byte 0x20 (0010 0000 in binary) evaluates to 0x20. This means the flag bit is set, i.e., the device has data ready. Had the flag bit not been set, the expression would have evaluated to 0, i.e., there is no data available from the device. To check the nth bit of a variable v, perform either of the equivalent operations: bool nth_is_set = (v & (1 << n)) != 0; or bool nth_is_set = (v >> n) & 1; Changing bits in flag words Writing, reading or toggling bits in flags can be done using only the OR, AND, XOR and NOT operations, all of which can be performed quickly in the processor. To set a bit, OR the status byte with a mask byte; any bits set in either the mask byte or the status byte will be set in the result. To toggle a bit, XOR the status byte with the mask byte. This will set a bit if it is cleared, or clear a bit if it is set.
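As a hedged illustration of the two approaches discussed above, here is a minimal, self-contained C sketch combining a native bit-field struct with a portable mask-and-shift emulation. All type, macro and function names are illustrative inventions for this example, and the native struct's exact layout remains implementation-defined, exactly as the text warns:

```c
#include <stdio.h>
#include <stdbool.h>

/* Native bit fields: the compiler packs the sub-fields into a word,
   but their ordering and padding are implementation-defined. */
struct device_status {
    unsigned int data_ready : 1;  /* single-bit Boolean */
    unsigned int error      : 1;
    unsigned int mode       : 2;  /* two bits: values 0..3 */
};

/* Portable emulation: manipulate the bits of a plain integer. */
#define FLAG_DATA_READY (1u << 5)  /* 0x20, the data-ready bit from the text */

static bool     test_bit(unsigned v, unsigned n)   { return (v & (1u << n)) != 0; }
static unsigned set_bit(unsigned v, unsigned n)    { return v |  (1u << n); }
static unsigned clear_bit(unsigned v, unsigned n)  { return v & ~(1u << n); }
static unsigned toggle_bit(unsigned v, unsigned n) { return v ^  (1u << n); }

int main(void) {
    unsigned status = 0x67;  /* 0110 0111: the example status byte */

    /* ANDing with the mask isolates the data-ready flag. */
    printf("data ready: %s\n", (status & FLAG_DATA_READY) ? "yes" : "no");

    status = clear_bit(status, 5);   /* becomes 0x47 */
    printf("after clear: 0x%02X, bit 5 = %d\n", status, (int)test_bit(status, 5));

    status = toggle_bit(status, 5);  /* back to 0x67 */
    printf("after toggle: 0x%02X\n", status);

    printf("after set of bit 7: 0x%02X\n", set_bit(status, 7));  /* 0xE7 */

    struct device_status s = { .data_ready = 1, .error = 0, .mode = 3 };
    printf("native bit field: data_ready=%u mode=%u\n",
           (unsigned)s.data_ready, (unsigned)s.mode);
    return 0;
}
```

The emulation behaves identically on every conforming compiler, whereas the in-memory layout of struct device_status may differ between compilers and platforms, which is why the text recommends masks and shifts when the exact bit representation matters.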
**Video DownloadHelper** Video DownloadHelper: Video DownloadHelper is an extension for the Firefox and Chrome web browsers. It allows the user to download videos from sites that stream video over HTTP. The extension was developed by Michel Gutierrez. History: As of December 2019, Video DownloadHelper is the third most popular extension for Firefox (after Adblock Plus and uBlock Origin) and the second most popular Mozilla-recommended extension, with 2,848,968 users. In the second quarter of 2015, version 5 of the extension for Firefox was rebased using Mozilla's Add-ons SDK (previous versions used XUL). History: Firefox Quantum ceased support for extensions that use XUL or the Add-ons SDK, so the extension was rebased again using the WebExtensions APIs. As a result of Mozilla's changes, reliance upon the companion application increased. Firefox 57.0 and Video DownloadHelper 7.0.0 were released on the same day (14 November 2017). The most recent release (7.3.7, 26 June 2019) addresses problems that were caused by changes to YouTube. Where aggregation (ADP) or conversion is required, whether by the end user or by the site from which the video is downloaded, an unlicensed companion app produces output that includes a watermark (a QR code). Since 2019, user reviews have complained of slow conversion and unfinished downloads. The software is financed through ads on the developer's website, donations, and associated software sales. Reception: Eric Griffith of PC Magazine named it one of the best Firefox extensions of 2012. Erez Zukerman of PC World rated it 4/5 stars and called it "a valuable tool". TechRadar rated it 5/5 stars and wrote, "Anyone who wants to watch videos, not only online, but also on the train, in the car or on the plane, is very well served with Video DownloadHelper."
**Serotonergic cell groups** Serotonergic cell groups: Serotonergic cell groups refer to collections of neurons in the central nervous system that have been demonstrated by histochemical fluorescence to contain the neurotransmitter serotonin (5-hydroxytryptamine). Since they are for the most part localized to classical brainstem nuclei, particularly the raphe nuclei, they are more often referred to by the names of those nuclei than by the B1-9 nomenclature. These cells appear to be common across most mammals and have two main regions in which they develop; one forms in the mesencephalon and the rostral pons, and the other in the medulla oblongata and the caudal pons. Nine serotonergic cell groups have been identified. B1 cell group: Cell group B1 occupies the midline nucleus raphes pallidus and adjacent structures in the caudal medulla oblongata of the rodent and the primate. B2 cell group: Cell group B2 occupies the midline nucleus raphes obscurus and adjacent structures in the caudal medulla oblongata of the rodent and the primate. B3 cell group: Cell group B3 occupies the midline nucleus raphes magnus and adjacent structures in the caudal medulla oblongata of the rodent and the primate. Its boundary with the serotonergic group B1 is indistinct. B4 cell group: Cell group B4 is located in the floor of the fourth ventricle, in the vicinity of the vestibular nuclei and abducens nucleus in the rat, and in the caudal interstitial nucleus of the medial longitudinal fasciculus of the mouse. A comprehensive study of monoaminergic cell groups in the macaque and the squirrel monkey did not identify a B4 cell group distinct from other groups in the region. B5 cell group: Cell group B5 is located in the midline pontine raphe nucleus and adjacent areas in the rodent and the primate. B6 cell group: Cell group B6 is located in the floor of the fourth ventricle, dorsal to and between the right and left medial longitudinal fasciculus of the pons in the primate and the rodent, and forms the caudal portion of the dorsal raphe nucleus. B7 cell group: Cell group B7 is a group of cells located in the central gray of the pons, the dorsal raphe nucleus and adjacent structures in the primate and the rodent. B8 cell group: Cell group B8 is located in the dorsal part of the median raphe nucleus (superior central nucleus) and adjacent structures of the pontine reticular formation of the rodent and the primate. B9 cell group: Cell group B9 is a group of cells located in the pontine tegmentum, ventral to serotonergic group B8. In the nonhuman primate they are found in the ventral part of the superior central nucleus and adjacent structures. In the rodent they have a more lateral location, within the medial lemniscus of the pons and dorsal and medial to it.
**Ecogrid** Ecogrid: Ecogrid, known as Ecoraster in most of Europe, is a type of plastic, permeable paving grid used in the construction of parking lots, walkways and other outdoor surfaces. Ecogrid is marketed as a green technology because it is designed to reduce harmful stormwater runoff and is made with post-consumer plastic to reduce waste. Ecoraster was trademarked by Purus Plastics in 2008. Ecogrid is made from specially selected plastics that are recycled at the company's Bavarian production centre. It has a locking mechanism that secures one grid to the next. There are many fill types for this kind of grid system, the main ones being grass, gravel, resin-bound stone, resin-bound rubber crumb, and soil.
**Doravirine/lamivudine/tenofovir** Doravirine/lamivudine/tenofovir: Doravirine/lamivudine/tenofovir, sold under the brand name Delstrigo, is a fixed-dose combination antiretroviral medication for the treatment of HIV/AIDS. It contains doravirine, lamivudine, and tenofovir disoproxil. It is taken by mouth. In the United States, it was approved by the Food and Drug Administration (FDA) for the treatment of HIV-1 infection in August 2018.
**N-acetylserotonin O-methyltransferase-like protein** N-acetylserotonin O-methyltransferase-like protein: N-acetylserotonin O-methyltransferase-like protein is an enzyme that in humans is encoded by the ASMTL gene.
**Curve complex** Curve complex: In mathematics, the curve complex is a simplicial complex C(S) associated to a finite-type surface S, which encodes the combinatorics of simple closed curves on S. The curve complex turned out to be a fundamental tool in the study of the geometry of the Teichmüller space, of mapping class groups and of Kleinian groups. It was introduced by W. J. Harvey in 1978. Curve complexes: Definition: Let S be a finite-type connected oriented surface. More specifically, let S = Sg,b,n be a connected oriented surface of genus g ≥ 0 with b ≥ 0 boundary components and n ≥ 0 punctures. The curve complex C(S) is the simplicial complex defined as follows: the vertices are the free homotopy classes of essential (neither homotopically trivial nor peripheral) simple closed curves on S, and if c1,…,cn represent distinct vertices of C(S), they span a simplex if and only if they can be homotoped to be pairwise disjoint. Examples: For surfaces of small complexity (essentially the torus, punctured torus, and four-holed sphere), with the definition above the curve complex has infinitely many connected components. One can give an alternate and more useful definition by joining vertices if the corresponding curves have minimal intersection number. With this alternate definition, the resulting complex is isomorphic to the Farey graph. Geometry of the curve complex: Basic properties: If S is a compact surface of genus g with b boundary components, the dimension of C(S) is equal to ξ(S) = 3g − 3 + b. In what follows, we will assume that ξ(S) ≥ 2. The complex of curves is never locally finite (i.e. every vertex has infinitely many neighbors). A result of Harer asserts that C(S) is in fact homotopy equivalent to a wedge of spheres. Geometry of the curve complex: Intersection numbers and distance on C(S): The combinatorial distance on the 1-skeleton of C(S) is related to the intersection number between simple closed curves on a surface, which is the smallest number of intersections of two curves in the isotopy classes. For example, d(α,β) ≤ 2 log2(i(α,β)) + 2 for any two nondisjoint simple closed curves α, β. One can compare in the other direction, but the results are much more subtle (for example, there is no uniform lower bound even for a given surface) and harder to prove. Geometry of the curve complex: Hyperbolicity: It was proved by Masur and Minsky that the complex of curves is a Gromov hyperbolic space. Later work by various authors gave alternate proofs of this fact and better information on the hyperbolicity. Relation with the mapping class group and Teichmüller space: Action of the mapping class group: The mapping class group of S acts on the complex C(S) in the natural way: it acts on the vertices by ϕ⋅α = ϕ∗α, and this extends to an action on the full complex. This action allows one to prove many interesting properties of the mapping class groups. While the mapping class group itself is not a hyperbolic group, the fact that C(S) is hyperbolic still has implications for its structure and geometry. Relation with the mapping class group and Teichmüller space: Comparison with Teichmüller space: There is a natural map from Teichmüller space to the curve complex, which takes a marked hyperbolic structure to the collection of closed curves realising the smallest possible length (the systole). This map allows one to read off certain geometric properties of the latter; in particular, it explains the empirical fact that while Teichmüller space itself is not hyperbolic, it retains certain features of hyperbolicity.
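For reference, the two quantitative statements above can be collected in display form. This is a LaTeX restatement of formulas already given in the prose; the distance bound is the standard logarithmic bound, reconstructed here from the partially garbled source sentence:

```latex
% Dimension of the curve complex of a compact surface S of genus g
% with b boundary components:
\dim \mathcal{C}(S) = \xi(S) = 3g - 3 + b

% Distance on the 1-skeleton versus geometric intersection number,
% for nondisjoint essential simple closed curves \alpha, \beta:
d_{\mathcal{C}(S)}(\alpha, \beta) \le 2 \log_2\bigl(i(\alpha,\beta)\bigr) + 2
```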
Applications to 3-dimensional topology: Heegaard splittings: A simplex in C(S) determines a "filling" of S to a handlebody. Choosing two simplices in C(S) thus determines a Heegaard splitting of a three-manifold, with the additional data of a Heegaard diagram (a maximal system of disjoint simple closed curves bounding disks for each of the two handlebodies). Some properties of Heegaard splittings can be read very efficiently off the relative positions of the simplices: the splitting is reducible if and only if it has a diagram represented by simplices which have a common vertex; the splitting is weakly reducible if and only if it has a diagram represented by simplices which are linked by an edge. In general, the minimal distance between simplices representing diagrams for the splitting can give information on the topology and geometry (in the sense of the geometrisation conjecture) of the manifold, and vice versa. A guiding principle is that the minimal distance of a Heegaard splitting is a measure of the complexity of the manifold. Applications to 3-dimensional topology: Kleinian groups: As a special case of the philosophy of the previous paragraph, the geometry of the curve complex is an important tool to link combinatorial and geometric properties of hyperbolic 3-manifolds, and hence it is a useful tool in the study of Kleinian groups. For example, it has been used in the proof of the ending lamination conjecture. Random manifolds: A possible model for random 3-manifolds is to take random Heegaard splittings. The proof that this model is hyperbolic almost surely (in a certain sense) uses the geometry of the complex of curves.
**Yoke** Yoke: A yoke is a wooden beam sometimes used between a pair of oxen or other animals to enable them to pull together on a load when working in pairs, as oxen usually do; some yokes are fitted to individual animals. There are several types of yoke, used in different cultures, and for different types of oxen. A pair of oxen may be called a yoke of oxen, and yoke is also a verb, as in "to yoke a pair of oxen". Other animals that may be yoked include horses, mules, donkeys, and water buffalo. Etymology: The word "yoke" is believed to derive from Proto-Indo-European *yugóm (yoke), from root *yewg- (join, unite), and is thus cognate with yoga. This root has descendants in almost all known Indo-European languages including German Joch, Latin iugum, Ancient Greek ζυγόν (zygon), Persian یوغ (yuğ), Sanskrit युग (yugá), Hittite 𒄿𒌑𒃷 (iúkan), Old Church Slavonic иго (igo), Lithuanian jungas, Old Irish cuing, and Armenian լուծ (luts), all meaning "yoke". Neck or bow yoke: A bow yoke is a shaped wooden crosspiece bound to the necks of a pair of oxen (or occasionally to horses). It is held on the animals' necks by an oxbow, from which it gets its name. The oxbow is usually U-shaped and also transmits force from the animals' shoulders. A swivel between the animals, beneath the centre of the yoke, attaches to the pole of a vehicle or to chains (traces) used to drag a load. Neck or bow yoke: Bow yokes are traditional in Europe, and in the United States, Australia and Africa. Head yoke: A head yoke fits onto the head of the oxen. It usually fits behind the horns, and has carved-out sections into which the horns fit; it may be a single beam attached to both oxen, or each ox may have a separate short beam. The yoke is then strapped to the horns of the oxen with yoke straps. Some types fit instead onto the front of the head, again strapped to the horns, and ox pads are then used for cushioning the forehead of the ox (see picture). A tug pole is held to the bottom of the yoke using yoke irons and chains. The tug pole can either be a short pole with a chain attached for hauling, or a long pole with a hook on the end that has no chain at all. Sometimes the pole is attached to a wagon and the oxen are simply backed over this pole, the pole is then raised between them and a backing bolt is dropped into the chains on the yoke irons in order to haul the wagon. Head yoke: Head yokes are used in southern Europe, much of South America and in Canada. Withers yoke: A withers yoke is a yoke that fits just in front of the withers, or the shoulder blades, of the oxen. The yoke is held in position by straps, either alone or with a pair of wooden staves on either side of the ox's withers; the pull is however from the yoke itself, not from the staves. Withers yokes particularly suit zebu cattle, which have high humps on their withers. Withers yoke: Withers yokes are widely used in Africa and India, where zebu cattle are common. Comparison: Although all three yoke types are effective, each has its advantages and disadvantages. As noted above, withers yokes suit zebu cattle, and head yokes can of course only be used for animals with suitable horns. Head yokes need to be re-shaped frequently to fit the animals' horns as they grow; unlike other types, a single-beam head yoke fixes the heads of the oxen apart, helping them to stand quietly without fighting. 
A single-beam head yoke may offer better braking ability on downhill grades and appears to be preferred in rugged mountainous areas such as Switzerland, Spain and parts of Italy. Bow yokes need to be the correct size for the animal, and new ones are often made as an animal grows, but they need no adjustment in use. Whichever type is used, various lengths of yoke may be required for different agricultural implements or to adjust to different crop-row spacings. Single yoke: A yoke may be used with a single animal. Oxen are normally worked in pairs, but water buffalo in Asian countries are commonly used singly, with the aid of a bow-shaped withers yoke. Use of single bow or withers yokes on oxen is documented from North America, China, Zimbabwe, Tanzania and Switzerland, and several designs of single head or forehead yoke are recorded in Germany. Symbolism: The yoke has connotations of subservience and toiling; in some ancient cultures it was traditional to force a vanquished enemy to pass beneath a symbolic yoke of spears or swords. The yoke may be a metaphor for something oppressive or burdensome, such as feudalism, capitalism, imperialism, corvée, tribute, or conscription, as in the expressions the "Norman Yoke" (in England), the "Tatar Yoke" (in Russia), or the "Turkish Yoke" (in the Balkans). Symbolism: The metaphor can also refer to the state of being linked or chained together by contract or marriage, similar to a pair of oxen. This sense is also the source of the word yoga, as linking with the divine. The yoke is frequently used metaphorically in the Bible, first in Genesis regarding Esau. In the 20th century, the yoke and arrows became a political symbol of the Falange political movement in Spain.
**Euparal** Euparal: Euparal is a synthetic microscopy mountant originally formulated in 1904 by Professor G. Gilson, the professor of Zoology at Louvain University, Louvain, Belgium. It has been manufactured by several companies, but is now exclusively manufactured by ASCO Laboratories, Manchester, England. Euparal: Euparal is used extensively in the mounting of entomological and histological specimens, and has gained much favour as a microscopy mountant due to its low refractive index of 1.483. Microscopic objects, such as cells, are stained with carmine or other stains, and slides are passed through dehydration grades and finally mounted in a drop of Euparal. Euparal is a mixture of camsal (itself a mixture of camphor and salol), sandarac, eucalyptol, and paraldehyde, and has a lower refractive index than Canada balsam.
**Pain** Pain: Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage." Pain motivates organisms to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. Most pain resolves once the noxious stimulus is removed and the body has healed, but it may persist despite removal of the stimulus and apparent healing of the body. Sometimes pain arises in the absence of any detectable stimulus, damage or disease. Pain is the most common reason for physician consultation in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's quality of life and general functioning. People in pain experience impaired concentration, working memory, mental flexibility, problem solving and information processing speed, and are more likely to experience irritability, depression and anxiety. Simple pain medications are useful in 20% to 70% of cases. Psychological factors such as social support, cognitive behavioral therapy, excitement, or distraction can affect pain's intensity or unpleasantness. Etymology: First attested in English in 1297, the word peyn comes from the Old French peine, in turn from Latin poena meaning "punishment, penalty" (also meaning "torment, hardship, suffering" in Late Latin) and that from Greek ποινή (poine), generally meaning "price paid, penalty, punishment". Classification: The International Association for the Study of Pain recommends using specific features to describe a patient's pain: region of the body involved (e.g. abdomen, lower limbs), system whose dysfunction may be causing the pain (e.g. nervous, gastrointestinal), duration and pattern of occurrence, intensity, and cause. Chronic versus acute: Pain is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has healed, but some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer and idiopathic pain, may persist for years. Pain that lasts a long time is called "chronic" or "persistent", and pain that resolves quickly is called "acute". Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval of time between onset and resolution, the two most commonly used markers being 3 months and 6 months since the onset of pain, though some theorists and researchers have placed the transition from acute to chronic pain at 12 months.: 93  Others apply "acute" to pain that lasts less than 30 days, "chronic" to pain of more than six months' duration, and "subacute" to pain that lasts from one to six months. A popular alternative definition of "chronic pain", involving no arbitrarily fixed duration, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as "cancer-related" or "benign". Allodynia: Allodynia is pain experienced in response to a normally painless stimulus. It has no biological function and is classified by stimuli into dynamic mechanical, punctate and static. Phantom: Phantom pain is pain felt in a part of the body that has been amputated, or from which the brain no longer receives signals.
It is a type of neuropathic pain. The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower limb amputees is 54%. One study found that eight days after amputation, 72% of patients had phantom limb pain, and six months later, 67% reported it. Some amputees experience continuous pain that varies in intensity or quality; others experience several bouts of pain per day, or it may reoccur less often. It is often described as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body may become sensitized, so that touching them evokes pain in the phantom limb. Phantom limb pain may accompany urination or defecation.: 61–69  Local anesthetic injections into the nerves or sensitive areas of the stump may relieve pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections of hypertonic saline into the soft tissue between vertebrae produce local pain that radiates into the phantom limb for ten minutes or so and may be followed by hours, weeks or even longer of partial or total relief from phantom pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted onto the spinal cord, all produce relief in some patients.: 61–69  Mirror box therapy produces the illusion of movement and touch in a phantom limb, which in turn may cause a reduction in pain. Paraplegia, the loss of sensation and voluntary motor control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage, visceral pain evoked by a filling bladder or bowel, or, in five to ten per cent of paraplegics, phantom body pain in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely provides lasting relief.: 61–69  Breakthrough: Breakthrough pain is transitory pain that comes on suddenly and is not alleviated by the patient's regular pain management. It is common in cancer patients, who often have background pain that is generally well-controlled by medications but who also sometimes experience bouts of severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of opioids, including fentanyl. Asymbolia and insensitivity: The ability to experience pain is essential for protection from injury, and recognition of the presence of injury. Episodic analgesia may occur under special circumstances, such as in the excitement of sport or war: a soldier on the battlefield may feel no pain for many hours from a traumatic amputation or other severe injury. Although unpleasantness is an essential part of the IASP definition of pain, it is possible to induce a state described as intense pain devoid of unpleasantness in some patients, with morphine injection or psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation of pain but suffer little, or not at all.
Indifference to pain can also rarely be present from birth; these people have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus. Insensitivity to pain may also result from abnormalities in the nervous system. This is usually the result of acquired damage to the nerves, such as spinal cord injury, diabetes mellitus (diabetic neuropathy), or leprosy in countries where that disease is prevalent. These individuals are at risk of tissue damage and infection due to undiscovered injuries. People with diabetes-related nerve damage, for instance, sustain poorly-healing foot ulcers as a result of decreased sensation. A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known as "congenital insensitivity to pain". Children with this condition incur carelessly-repeated damage to their tongues, eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy. Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies (which include familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the SCN9A gene, which codes for a sodium channel (Nav1.7) necessary in conducting pain nerve stimuli. Functional effects: Experimental subjects challenged by acute pain and patients in chronic pain experience impairments in attention control, working memory, mental flexibility, problem solving, and information processing speed. Acute and chronic pain are also associated with increased depression, anxiety, fear, and anger. As one summary puts it: "If I have matters right, the consequences of pain will include direct physical distress, unemployment, financial difficulties, marital disharmony, and difficulties in concentration and attention…" On subsequent negative emotion: Although pain is considered to be aversive and unpleasant and is therefore usually avoided, a meta-analysis, which summarized and evaluated numerous studies from various psychological disciplines, found a reduction in negative affect. Across studies, participants subjected to acute physical pain in the laboratory subsequently reported feeling better than those in non-painful control conditions, a finding also reflected in physiological parameters. A potential mechanism to explain this effect is provided by the opponent-process theory. Theory: Historical: Before the relatively recent discovery of neurons and their role in pain, various different bodily functions were proposed to account for pain. There were several competing early theories of pain among the ancient Greeks: Hippocrates, for example, believed that it was due to an imbalance in vital fluids. In the 11th century, Avicenna theorized that there were a number of feeling senses, including touch, pain and titillation. In 1644, René Descartes theorized that pain was a disturbance that passed along nerve fibers until the disturbance reached the brain. Descartes' work, along with Avicenna's, prefigured the 19th-century development of specificity theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch and other senses".
Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which conceived of pain not as a unique sensory modality, but as an emotional state produced by stronger-than-normal stimuli such as intense light, pressure or temperature. By the mid-1890s, specificity was backed mostly by physiologists and physicians, and the intensive theory was mostly backed by psychologists. However, after a series of clinical observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse, and by century's end, most textbooks on physiology and psychology were presenting pain specificity as fact. Modern: Some sensory fibers do not differentiate between noxious and non-noxious stimuli, while others, nociceptors, respond only to noxious, high-intensity stimuli. At the peripheral end of the nociceptor, noxious stimuli generate currents that, above a given threshold, send signals along the nerve fiber to the spinal cord. The "specificity" (whether it responds to thermal, chemical or mechanical features of its environment) of a nociceptor is determined by which ion channels it expresses at its peripheral end. Dozens of different types of nociceptor ion channels have so far been identified, and their exact functions are still being determined. The pain signal travels from the periphery to the spinal cord along A-delta and C fibers. Because the A-delta fiber is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the A-delta fibers is described as sharp and is felt first. This is followed by a duller pain, often described as burning, carried by the C fibers. These A-delta and C fibers enter the spinal cord via Lissauer's tract and connect with spinal cord nerve fibers in the central gelatinous substance of the spinal cord. These spinal cord fibers then cross the cord via the anterior white commissure and ascend in the spinothalamic tract. Before reaching the brain, the spinothalamic tract splits into the lateral, neospinothalamic tract and the medial, paleospinothalamic tract. The neospinothalamic tract carries the fast, sharp A-delta signal to the ventral posterolateral nucleus of the thalamus. The paleospinothalamic tract carries the slow, dull, C-fiber pain signal. Some of the paleospinothalamic fibers peel off in the brain stem, connecting with the reticular formation or midbrain periaqueductal gray, and the remainder terminate in the intralaminar nuclei of the thalamus. Pain-related activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among other things, the affective/motivational element, the unpleasantness of pain), and pain that is distinctly located also activates primary and secondary somatosensory cortex. Spinal cord fibers dedicated to carrying A-delta fiber pain signals, and others that carry both A-delta and C fiber pain signals to the thalamus, have been identified.
Other spinal cord fibers, known as wide dynamic range neurons, respond to A-delta and C fibers, but also to the much larger, more heavily myelinated A-beta fibers that carry touch, pressure and vibration signals. Ronald Melzack and Patrick Wall introduced their gate control theory in the 1965 Science article "Pain Mechanisms: A New Theory". The authors proposed that the thin C and A-delta (pain) and large-diameter A-beta (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord, and that A-beta fiber signals acting on inhibitory cells in the dorsal horn can reduce the intensity of pain signals sent to the brain. Three dimensions of pain: In 1968 Ronald Melzack and Kenneth Casey described chronic pain in terms of its three dimensions: "sensory-discriminative" (sense of the intensity, location, quality and duration of the pain), "affective-motivational" (unpleasantness and urge to escape the unpleasantness), and "cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion). They theorized that pain intensity (the sensory-discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities can influence perceived intensity and unpleasantness. Cognitive activities may affect both sensory and affective experience, or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both the sensory-discriminative and affective-motivational dimensions of pain, while suggestion and placebos may modulate only the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed. (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435) Evolutionary and behavioral role: Pain is part of the body's defense system, producing a reflexive retraction from the painful stimulus, and tendencies to protect the affected body part while it heals and to avoid that harmful situation in the future. It is an important part of animal life, vital to healthy survival. People with congenital insensitivity to pain have reduced life expectancy. In The Greatest Show on Earth: The Evidence for Evolution, biologist Richard Dawkins addresses the question of why pain should have the quality of being painful. He describes the alternative as a mental raising of a "red flag". To explain why that red flag might be insufficient, Dawkins argues that drives must compete with one another within living beings. The most "fit" creature would be the one whose pains are well balanced. Those pains which mean certain death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the relative importance of that risk to our ancestors. This resemblance will not be perfect, however, because natural selection can be a poor designer. This may have maladaptive results such as supernormal stimuli. Pain, however, does not only wave a "red flag" within living beings but may also act as a warning sign and a call for help to other living beings.
Especially in humans, who readily helped each other in case of sickness or injury throughout their evolutionary history, pain might have been shaped by natural selection to be a credible and convincing signal of need for relief, help, and care. Idiopathic pain (pain that persists after the trauma or pathology has healed, or that arises without any apparent cause) may be an exception to the idea that pain is helpful to survival, although some psychodynamic psychologists argue that such pain is psychogenic, enlisted as a protective distraction to keep dangerous emotions unconscious. Thresholds: In pain science, thresholds are measured by gradually increasing the intensity of a stimulus in a procedure called quantitative sensory testing, which involves such stimuli as electric current, thermal (heat or cold), mechanical (pressure, touch, vibration), ischemic, or chemical stimuli applied to the subject to evoke a response. The "pain perception threshold" is the point at which the subject begins to feel pain, and the "pain threshold intensity" is the stimulus intensity at which the stimulus begins to hurt. The "pain tolerance threshold" is reached when the subject acts to stop the pain. Assessment: A person's self-report is the most reliable measure of pain. Some health care professionals may underestimate pain severity. A definition of pain widely employed in nursing, emphasizing its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968: "Pain is whatever the experiencing person says it is, existing whenever he says it does". To assess intensity, the patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all, and 10 the worst pain they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire, indicating which words best describe their pain. Visual analogue scale: The visual analogue scale is a common, reproducible tool in the assessment of pain and pain relief. The scale is a continuous line anchored by verbal descriptors, one for each extreme of pain, where a higher score indicates greater pain intensity. It is usually 10 cm in length, with no intermediate descriptors, so as to avoid the clustering of scores around a preferred numeric value. When applied as a pain descriptor, these anchors are often 'no pain' and 'worst imaginable pain'. Cut-offs for pain classification have been recommended as no pain (0–4 mm), mild pain (5–44 mm), moderate pain (45–74 mm) and severe pain (75–100 mm). Multidimensional pain inventory: The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess the psychosocial state of a person with chronic pain. Combining the MPI characterization of the person with their IASP five-category pain profile is recommended for deriving the most useful case description. Assessment in non-verbal people: Non-verbal people cannot use words to tell others that they are experiencing pain. However, they may be able to communicate through other means, such as blinking, pointing, or nodding. With a non-communicative person, observation becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing and guarding (trying to protect part of the body from being bumped or touched) indicate pain, as do an increase or decrease in vocalizations, changes in routine behavior patterns and mental status changes.
Patients experiencing pain may exhibit withdrawn social behavior and possibly experience a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline, such as moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators. In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia, an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further assessment is necessary. Changes in behavior may be noticed by caregivers who are familiar with the person's normal behavior. Infants do feel pain, but lack the language needed to report it, and so communicate distress by crying. A non-verbal pain assessment should be conducted involving the parents, who will notice changes in the infant which may not be obvious to the health care provider. Pre-term babies are more sensitive to painful stimuli than those carried to full term. Another approach, when pain is suspected, is to give the person treatment for pain and then watch to see whether the suspected indicators of pain subside. Other reporting barriers: The way in which one experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age. An aging adult may not respond to pain in the same way that a younger person might. Their ability to recognize pain may be blunted by illness or the use of medication. Depression may also keep older adults from reporting they are in pain. Decline in self-care may also indicate the older adult is experiencing pain. They may be reluctant to report pain because they do not want to be perceived as weak, or may feel it is impolite or shameful to complain, or they may feel the pain is a form of deserved punishment. Cultural barriers may also affect the likelihood of reporting pain. Patients may feel that certain treatments go against their religious beliefs. They may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction, and avoid pain treatment so as not to be prescribed potentially addicting drugs. Many Asians do not want to lose respect in society by admitting they are in pain and need help, believing the pain should be borne in silence, while people from other cultures feel they should report pain immediately to receive immediate relief. Gender can also be a perceived factor in reporting pain. Gender differences can be the result of social and cultural expectations, with women expected to be more emotional and show pain, and men more stoic. As a result, female pain is often stigmatized, leading to less urgent treatment of women based on social expectations of their ability to accurately report it. This leads to extended emergency room wait times for women and frequent dismissal of their ability to accurately report pain. Diagnostic aid: Pain is a symptom of many medical conditions. Knowing the time of onset, location, intensity, pattern of occurrence (continuous, intermittent, etc.), exacerbating and relieving factors, and quality (burning, sharp, etc.) of the pain will help the examining physician to accurately diagnose the problem. For example, chest pain described as extreme heaviness may indicate myocardial infarction, while chest pain described as tearing may indicate aortic dissection.
Physiological measurement: Functional magnetic resonance imaging brain scanning has been used to measure pain, and correlates well with self-reported pain. Mechanisms: Nociceptive pain is caused by stimulation of sensory nerve fibers that respond to stimuli approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing, shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors respond to more than one of these modalities and are consequently designated polymodal. Nociceptive pain may also be classed according to the site of origin and divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures (e.g., the heart, liver and intestines) are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. Deep somatic pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly-localized pain. Examples include sprains and broken bones. Superficial somatic pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns. Neuropathic: Neuropathic pain is caused by damage or disease affecting any part of the nervous system involved in bodily feelings (the somatosensory system). Neuropathic pain may be divided into peripheral, central, or mixed (peripheral and central) neuropathic pain. Peripheral neuropathic pain is often described as "burning", "tingling", "electrical", "stabbing", or "pins and needles". Bumping the "funny bone" elicits acute peripheral neuropathic pain. Some manifestations of neuropathic pain include traumatic neuropathy, tic douloureux, painful diabetic neuropathy, and postherpetic neuralgia. Nociplastic: Nociplastic pain is pain characterized by altered nociception, without evidence of actual or threatened tissue damage and without disease or damage in the somatosensory system. Psychogenic: Psychogenic pain, also called psychalgia or somatoform pain, is pain caused, increased or prolonged by mental, emotional or behavioral factors. Headache, back pain and stomach pain are sometimes diagnosed as psychogenic. Those affected are often stigmatized, because both medical professionals and the general public tend to think that pain from a psychological source is not "real". However, specialists consider that it is no less actual or hurtful than pain from any other source. People with long-term pain frequently display psychological disturbance, with elevated scores on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points in the other direction, to chronic pain causing neuroticism.
When long-term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic pain patients, also shows improvement once pain has resolved.: 31–32  Management: Pain can be treated through a variety of methods. The most appropriate method depends upon the situation. Management of chronic pain can be difficult and may require the coordinated efforts of a pain management team, which typically includes medical practitioners, clinical pharmacists, clinical psychologists, physiotherapists, occupational therapists, physician assistants, and nurse practitioners. Inadequate treatment of pain is widespread throughout surgical wards, intensive care units, and accident and emergency departments, in general practice, in the management of all forms of chronic pain including cancer pain, and in end-of-life care. This neglect extends to all ages, from newborns to the medically frail elderly. In the US, African and Hispanic Americans are more likely than others to suffer unnecessarily while in the care of a physician, and women's pain is more likely to be undertreated than men's. The International Association for the Study of Pain advocates that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in its own right, and that pain medicine should have the full status of a medical specialty. It is a specialty only in China and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology, physiatry, neurology, palliative medicine and psychiatry. In 2011, Human Rights Watch warned that tens of millions of people worldwide are still denied access to inexpensive medications for severe pain. Medication: Acute pain is usually managed with medications such as analgesics and anesthetics. Caffeine, when added to pain medications such as ibuprofen, may provide some additional benefit. Ketamine can be used instead of opioids for short-term pain. Pain medications can cause paradoxical side effects, such as opioid-induced hyperalgesia (severe generalized pain caused by long-term opioid use). Sugar (sucrose), when taken by mouth, reduces pain in newborn babies undergoing some medical procedures (lancing of the heel, venipuncture, and intramuscular injections). Sugar does not remove pain from circumcision, and it is unknown if sugar reduces pain for other procedures. Sugar did not affect pain-related electrical activity in the brains of newborns one second after the heel lance procedure. Sweet liquid by mouth moderately reduces the rate and duration of crying caused by immunization injection in children between one and twelve months of age. Psychological: Individuals with more social support experience less cancer pain, take less pain medication, report less labor pain and are less likely to use epidural anesthesia during childbirth, or suffer from chest pain after coronary artery bypass surgery. Suggestion can significantly affect pain intensity. About 35% of people report marked relief after receiving a saline injection they believed to be morphine. This placebo effect is more pronounced in people who are prone to anxiety, and so anxiety reduction may account for some of the effect, but it does not account for all of it.
Placebos are more effective for intense pain than mild pain, and they produce progressively weaker effects with repeated administration.: 26–28  It is possible for many with chronic pain to become so absorbed in an activity or entertainment that the pain is no longer felt, or is greatly diminished.: 22–23  A number of meta-analyses have found clinical hypnosis to be effective in controlling pain associated with diagnostic and surgical procedures in both adults and children, as well as pain associated with cancer and childbirth. A 2007 review of 13 studies found evidence for the efficacy of hypnosis in the reduction of chronic pain under some conditions, though the number of patients enrolled in the studies was low, raising issues related to the statistical power to detect group differences, and most lacked credible controls for placebo or expectation. The authors concluded that "although the findings provide support for the general applicability of hypnosis in the treatment of chronic pain, considerably more research will be needed to fully determine the effects of hypnosis for different chronic-pain conditions." Alternative medicine: An analysis of the 13 highest-quality studies of pain treatment with acupuncture, published in January 2009, concluded there was little difference in the effect of real, faked and no acupuncture. However, more recent reviews have found some benefit. Additionally, there is tentative evidence for a few herbal medicines. There has been some interest in the relationship between vitamin D and pain, but the evidence so far from controlled trials for such a relationship, other than in osteomalacia, is inconclusive. For chronic (long-term) lower back pain, spinal manipulation produces tiny, clinically insignificant, short-term improvements in pain and function, compared with sham therapy and other interventions. Spinal manipulation produces the same outcome as other treatments, such as general practitioner care, pain-relief drugs, physical therapy, and exercise, for acute (short-term) lower back pain. Epidemiology: Pain is the main reason for visiting an emergency department in more than 50% of cases, and is present in 30% of family practice visits. Several epidemiological studies have reported widely varying prevalence rates for chronic pain, ranging from 12 to 80% of the population. It becomes more common as people approach death. A study of 4,703 patients found that 26% had pain in the last two years of life, increasing to 46% in the last month. A survey of 6,636 children (0–18 years of age) found that, of the 5,424 respondents, 54% had experienced pain in the preceding three months. A quarter reported having experienced recurrent or continuous pain for three months or more, and a third of these reported frequent and intense pain. The intensity of chronic pain was higher for girls, and girls' reports of chronic pain increased markedly between ages 12 and 14. Society and culture: Physical pain is a universal experience, and a strong motivator of human and animal behavior. As such, physical pain is used politically in relation to various issues such as pain management policy, drug control, animal rights or animal welfare, torture, and pain compliance.
The deliberate infliction of pain and the medical management of pain are both important aspects of biopower, a concept that encompasses the "set of mechanisms through which the basic biological features of the human species became the object of a political strategy". In various contexts, the deliberate infliction of pain in the form of corporal punishment is used as retribution for an offence, for the purpose of disciplining or reforming a wrongdoer, or to deter attitudes or behaviour deemed unacceptable. In Western societies, the intentional infliction of severe pain (torture) was principally used to extract confession prior to its abolition in the latter part of the 19th century. Torture as a means to punish the citizen has been reserved for offences posing a severe threat to the social fabric (for example, treason). The administration of torture on bodies othered by the cultural narrative, those observed as not 'full members of society',: 101–121  met a resurgence in the 20th century, possibly due to heightened warfare.: 101–121  Many cultures use painful ritual practices as a catalyst for psychological transformation. The use of pain to transition to a 'cleansed and purified' state is seen in Catholic self-flagellation practices, or personal catharsis in neo-primitive body suspension experiences. Beliefs about pain play an important role in sporting cultures. Pain may be viewed positively, exemplified by the 'no pain, no gain' attitude, with pain seen as an essential part of training. Sporting culture tends to normalise experiences of pain and injury and celebrate athletes who 'play hurt'. Pain has psychological, social, and physical dimensions, and is greatly influenced by cultural factors. Non-humans: René Descartes argued that animals lack consciousness and therefore do not experience pain and suffering in the way that humans do. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, wrote that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. The ability of invertebrate species of animals, such as insects, to feel pain and suffering is unclear. Specialists believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, may also. Whether an animal is in pain cannot be known directly, but it can be inferred through physical and behavioral reactions, such as paw withdrawal from various noxious mechanical stimuli in rodents. While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain simply because they lack any pain receptors, nerves, or a brain, and, by extension, any consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants, such as the Venus flytrap or touch-me-not, are known for their "obvious sensory abilities". Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding plants' abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since they lack any nervous system. The primary reason for this is that, unlike the members of the animal kingdom, whose evolutionary successes and failures are shaped by suffering, the evolution of plants is shaped simply by life and death.
**Paper cutter** Paper cutter: A paper cutter, also known as a paper guillotine or simply a guillotine, is a tool often found in offices and classrooms. It is designed to make straight cuts through single sheets or large stacks of paper at once. History: Paper cutters were developed and patented in 1844 by French inventor Guillaume Massiquot. Later, Milton Bradley patented his own version of the paper cutter in 1879. Since the middle of the 19th century, considerable improvements to the paper cutter have been made by Fomm and Krause of Germany, Furnival in England, and Oswego and Seybold in the United States. Description: Paper cutters vary in size, usually from approximately 30 centimeters (11.8 inches) in length on each side for office work to 841 millimetres (33.1 in), the length of a sheet of A1 paper. The surface will typically have a grid either painted or inscribed on it and may have a ruler across the top. At the very least, it must have a flat edge against which the user may line up the paper at right angles before passing it under the blade. It is typically relatively heavy so that it will remain steady while in use. Description: On the right-hand edge is a long, curved steel blade, often referred to as a knife, attached to the base at one corner. Larger versions have a strong compression coil spring as part of the attachment mechanism that pulls the knife against the stationary edge as the knife is drawn down to cut the paper. The other end of the knife unit is a handle. The stationary right edge of the base is also steel, with an exposed, finely-ground edge. When the knife is pulled down to cut paper, the action resembles that of a pair of scissors, only instead of two knives moving against each other, one is stationary. The combination of a blade mounted to a steady base produces clean and straight cuts, the likes of which would otherwise require a ruler and razor blade to achieve on a single page. Paper cutters are also used for cutting thin sheet metal, cardboard, and plastic. The steel blade on a paper cutter provides long-term durability and can be resharpened. Description: A variant design uses a wheel-shaped blade mounted on a sliding shuttle attached to a rail. This type of paper cutter is known as a rotary paper cutter. Advantages of this design include the ability to make wavy cuts or perforations, or simply to score the paper without cutting, merely by substituting various types of circular blades. With a rotary cutter, it is also almost impossible for users to cut themselves, except while changing the blade. This makes it safer for home use. Higher-end versions of rotary paper cutters are used for precision paper cutting and are popular for trimming photographs. An even simpler design uses double-edged blades which do not rotate but cut like a penknife. While cheaper, this design is not preferable for serious work due to its tendency to tear paper and its poor performance with thick media. Safety: Most modern paper cutters come equipped with a finger guard to prevent users from accidentally cutting themselves or severing a digit while using the apparatus. However, injuries are still possible if the device is not used with proper care or attention. Industrial paper cutters: In the modern paper industry, larger machines are used to cut large stacks of paper, cardboard, or similar material. Such machines operate in a manner similar to a guillotine.
Commercial versions are motorized and automated, and include clamping mechanisms to prevent shifting of the material during the cutting process. In addition to simple straight paper cutters, vinyl cutters can cut shapes or stencils out of paper, card, vinyl, or thin plastic sheets. Such cutters require vector files and cutting software to manage the cutter. Using small blades, the machine can cut shapes out of the material.
**Cat skin disorders** Cat skin disorders: Cat skin disorders are among the most common health problems in cats. Skin disorders in cats have many causes, and many of the common skin disorders that afflict people have a counterpart in cats. The condition of a cat's skin and coat can also be an important indicator of its general health. Skin disorders of cats vary from acute, self-limiting problems to chronic or long-lasting problems requiring life-long treatment. Cat skin disorders may be grouped into categories according to their causes. Types of disorders: Immune-mediated skin disorders Skin disease may result from deficiencies in immune system function. In cats, the most common cause of immune deficiency is infection with retroviruses, FIV or FeLV, and cats with these chronic infections are subject to repeated bouts of skin infection and abscesses. This category also includes hypersensitivity disorders and eosinophilic skin diseases such as atopic dermatitis, miliary dermatitis and feline eosinophilic granuloma, and skin diseases caused by autoimmunity, such as pemphigus and discoid lupus. Types of disorders: Infectious skin diseases An important infectious skin disease of cats is ringworm, or dermatophytosis. Other cat skin infections include parasitic diseases like mange and lice infestations. Other ectoparasites, including fleas and ticks, are not considered directly contagious but are acquired from an environment where other infested hosts have established the parasite's life cycle. Another common skin infection is cat bite abscess. A mixture of bacteria introduced by a bite wound causes infections in pockets under the skin, and affected cats often show depression and fever. Hereditary and developmental skin diseases Some diseases are inherent abnormalities of skin structure or function. These include skin fragility syndrome (Ehlers-Danlos), hereditary hypotrichosis and congenital or hereditary alopecia. Types of disorders: Cutaneous manifestations of internal diseases Some systemic diseases can become symptomatic as a skin disorder. In cats, this includes one of the most devastating cat skin disorders, feline acquired skin fragility syndrome. The pathogenesis of this rare syndrome is unknown. It is most commonly associated with conditions such as iatrogenic or naturally occurring hypercortisolism, diabetes mellitus, or extensive use of progestational compounds. Nutrition related disorders: Nutrition-related disorders can arise if the cat's food intake decreases, interactions between ingredients or nutrients occur, or mistakes are made during food formulation or manufacturing. Degradation of some nutrients can occur during storage. Nutrition-related skin disorders can result in excesses or deficiencies in the production of sebum and in keratinization, the toughening of the outer layer of the skin. This can result in dandruff, erythema, hair loss, greasy skin, and diminished hair growth. Nutrition related disorders: Minerals Zinc is important for the skin's function, as it is involved in the production of DNA and RNA, and is therefore important for cells that divide rapidly. A deficiency in zinc mainly results in skin disorders in adult cats, but also causes growth abnormalities. The skin of a cat deficient in zinc will likely show erythema and hair loss; the cat may have crusty, scaly skin on its limbs or tail, and its coat becomes dull. Similarly, copper can affect the coat health of cats; deficiencies will cause fading of coat color and weakened skin, leading to lesions.
Nutrition related disorders: Protein The hair of a cat is made mainly of protein, and cats need about 25–30% protein in their diets, much higher than a dog's requirement. A deficiency in protein usually happens when kittens are fed dog food or when low-protein diets are fed improperly. If a cat has a protein deficiency, the cat will lose weight. The coat condition will be poor, with dull, thinning, weak, and patchy hair. To remedy this, a diet with adequate amounts of protein must be fed. Nutrition related disorders: Essential fatty acids Cats must have both linoleic acid and arachidonic acid in their diet, due to their low production of the δ-6 desaturase enzyme. A deficiency in these fatty acids can occur if the fats in the cat's food are oxidized and become rancid from improper storage. A cat may be deficient for many months before clinical signs appear in the skin, after which the skin will become scaly and greasy while the coat becomes dull. To treat health concerns caused by a deficiency of fatty acids, the ratio of n-3 to n-6 fatty acids must be corrected and supplemented. Nutrition related disorders: Vitamin A Cats cannot synthesize vitamin A from plant beta-carotene, and therefore must be supplemented with retinol from meat. A deficiency in vitamin A will result in a poor coat, hair loss, and scaly, thickened skin. However, an excess of vitamin A, called hypervitaminosis A, can result from overfeeding cod liver oil and large amounts of liver. Signs of hypervitaminosis A are overly sensitive skin and neck pain, causing the cat to be unwilling to groom itself, resulting in a poor coat. Supplementing a deficient cat with retinol and feeding a balanced diet to a cat with hypervitaminosis A will treat the underlying nutritional disorder. Nutrition related disorders: Vitamin B The cat must have a dietary supply of niacin, as cats cannot convert tryptophan into niacin; diets high in corn and low in protein can therefore result in skin lesions and scaly, dry, greasy skin with hair loss. A deficiency of the B vitamin biotin causes hair loss around the eyes and face. A lack of B vitamins can be corrected by supplementing with a vitamin B complex and brewer's yeast.
**Goodell's sign** Goodell's sign: In medicine, Goodell's sign is an indication of pregnancy. It is a significant softening of the vaginal portion of the cervix from increased vascularization. This vascularization is a result of hypertrophy and engorgement of the vessels below the growing uterus. This sign occurs at approximately six weeks' gestation. The sign is named after William Goodell (1829–1894).
**Toby Walsh** Toby Walsh: Toby Walsh is Chief Scientist at UNSW.ai, the AI Institute of UNSW Sydney. He is a Laureate Fellow and professor of artificial intelligence in the UNSW School of Computer Science and Engineering at the University of New South Wales and Data61 (formerly NICTA). He has served as Scientific Director of NICTA, Australia's centre of excellence for ICT research. He is noted for his work in artificial intelligence, especially in the areas of social choice, constraint programming and propositional satisfiability. He has served on the Executive Council of the Association for the Advancement of Artificial Intelligence. He received an M.A. degree in theoretical physics and mathematics from the University of Cambridge and an M.Sc. and Ph.D. degree in artificial intelligence from the University of Edinburgh. He has held research positions in Australia, England, Ireland, Italy, France, Germany, Scotland, and Sweden. He has been Editor-in-Chief of the Journal of Artificial Intelligence Research and of AI Communications. He has chaired several conferences in the area of artificial intelligence, including the International Joint Conference on Artificial Intelligence. He is Editor of the Handbook of Constraint Programming and of the Handbook of Satisfiability. He proposed the idea of Turing red flag laws, which would require any AI system to identify itself as a computer program to prevent human confusion. In 2015, he helped release an open letter calling for a ban on offensive autonomous weapons that attracted over 20,000 signatures. He later gave a talk at TEDxBerlin on this topic. In 2017, he organized an open letter calling for a ban signed by over 100 founders of AI and robotics companies. Also in 2017, he organized a letter to the Prime Minister of Australia, calling for Australia to negotiate towards a ban, signed by over one hundred researchers from Australia working on artificial intelligence. In 2022, he was one of 121 prominent Australians banned from travelling to Russia indefinitely for his outspoken criticism of the use of AI by the Russian military. Toby Walsh: In 2018, he chaired the Expert Working Group of the Australian Council of Learned Academies (ACOLA) preparing a Horizon Scanning Report on the "Deployment of Artificial Intelligence and what it presents for Australia" at the request of Australia's Chief Scientist, Dr Alan Finkel, and on behalf of the Commonwealth Science Council. Additionally, he was interviewed on ABC Comedy by Tom Ballard, discussing the "robot revolution". He is the author of three books on artificial intelligence for a general audience: "It's Alive!: Artificial Intelligence from the Logic Piano to Killer Robots", which looks at the history and present of AI; "2062: The World that AI Made", which looks at the potential impact AI will have on our society; and "Machines Behaving Badly: the Morality of AI", which looks at the ethical challenges of AI. All three books are published by Black Inc. The books are available in ten different languages: Chinese, English, German, Korean, Polish, Romanian, Russian, Taiwanese, Turkish and Vietnamese. Honors and awards: In 2020, he was elected a Fellow of the ACM and a Fellow of the American Association for the Advancement of Science.
Honors and awards: In 2018, he was runner-up in the Arms Control Association's annual Person(s) of the Year Award. In 2016, he was elected a Fellow of the Australian Academy of Science, won the NSW Premier's Prize for Excellence in Engineering and ICT, and was made Scientia Professor at UNSW. In 2015, the Association for Constraint Programming presented him with their Research Excellence Award, which identifies and honours the most influential people in the field. Honors and awards: In 2014, he won a Humboldt Prize. In 2008, he was elected a Fellow of the Association for the Advancement of Artificial Intelligence for "significant and sustained contributions to automated deduction and constraint programming, and for extraordinary service to the AI community". In 2003, he was elected a Fellow of the European Association for Artificial Intelligence in recognition of "significant, sustained contributions to the field of artificial intelligence".
**Creative work** Creative work: A creative work is a manifestation of creative effort including fine artwork (sculpture, paintings, drawing, sketching, performance art), dance, writing (literature), filmmaking, and composition. Legal definitions: Creative works require a creative mindset and are not typically rendered in an arbitrary fashion, although some works demonstrate a degree of arbitrariness, such that it is improbable that two people would independently create the same work. At its base, creative work involves two main steps – having an idea, and then turning that idea into a substantive form or process. The creative process can involve one or more individuals. Typically, the creative process yields something with aesthetic value that is identified as a creative expression, which in turn generally evokes in observers a recognition of creativity. Legal definitions: The term is frequently used in the context of copyright. United Kingdom: For the purpose of section 221(2)(c) of the Income Tax (Trading and Other Income) Act 2005, the expression "creative works" means: (a) literary, dramatic, musical or artistic works, or (b) designs, created by the taxpayer personally or, if the qualifying trade, profession or vocation is carried on in partnership, by one or more of the partners personally.
**Sheave** Sheave: A pulley is a wheel on an axle or shaft that is designed to support movement and change of direction of a taut cable or belt, or transfer of power between the shaft and the cable or belt. In the case of a pulley supported by a frame or shell that does not transfer power to a shaft, but is used to guide the cable or exert a force, the supporting shell is called a block, and the pulley may be called a sheave or pulley wheel. Sheave: A pulley may have a groove or grooves between flanges around its circumference to locate the cable or belt. The drive element of a pulley system can be a rope, cable, belt, or chain. Sheave: The earliest evidence of pulleys dates back to Ancient Egypt in the Twelfth Dynasty (1991–1802 BC) and Mesopotamia in the early 2nd millennium BC. In Roman Egypt, Hero of Alexandria (c. 10–70 AD) identified the pulley as one of six simple machines used to lift weights. Pulleys are assembled to form a block and tackle in order to provide mechanical advantage to apply large forces. Pulleys are also assembled as part of belt and chain drives in order to transmit power from one rotating shaft to another. Plutarch's Parallel Lives recounts a scene where Archimedes proved the effectiveness of compound pulleys and the block-and-tackle system by using one to pull a fully laden ship towards him as if it were gliding through water. Block and tackle: A block is a set of pulleys (wheels) assembled so that each pulley rotates independently of every other pulley. Two blocks with a rope attached to one of the blocks and threaded through the two sets of pulleys form a block and tackle. A block and tackle is assembled so one block is attached to a fixed mounting point and the other is attached to the moving load. The ideal mechanical advantage of the block and tackle is equal to the number of sections of the rope that support the moving block. Block and tackle: For the common block and tackle assemblies, the ideal mechanical advantage is as follows: gun tackle, 2; luff tackle, 3; double tackle, 4; gyn tackle, 5; threefold purchase, 6. Rope and pulley systems: A rope and pulley system—that is, a block and tackle—is characterised by the use of a single continuous rope to transmit a tension force around one or more pulleys to lift or move a load—the rope may be a light line or a strong cable. This system is included in the list of simple machines identified by Renaissance scientists. If the rope and pulley system does not dissipate or store energy, then its mechanical advantage is the number of parts of the rope that act on the load. This can be shown as follows. Rope and pulley systems: Consider the set of pulleys that form the moving block and the parts of the rope that support this block. If there are p of these parts of the rope supporting the load W, then a force balance on the moving block shows that the tension in each of the parts of the rope must be W/p. This means the input force on the rope is T = W/p. Thus, the block and tackle reduces the input force by the factor p. Rope and pulley systems: Method of operation The simplest theory of operation for a pulley system assumes that the pulleys and lines are weightless and that there is no energy loss due to friction. It is also assumed that the lines do not stretch. In equilibrium, the forces on the moving block must sum to zero. In addition, the tension in the rope must be the same for each of its parts.
This means that the two parts of the rope supporting the moving block must each support half the load. Rope and pulley systems: There are different types of pulley systems: Fixed: A fixed pulley has an axle mounted in bearings attached to a supporting structure. A fixed pulley changes the direction of the force on a rope or belt that moves along its circumference. Mechanical advantage is gained by combining a fixed pulley with a movable pulley or another fixed pulley of a different diameter. Rope and pulley systems: Movable: A movable pulley has an axle in a movable block. A single movable pulley is supported by two parts of the same rope and has a mechanical advantage of two. Compound: A combination of fixed and movable pulleys forms a block and tackle. A block and tackle can have several pulleys mounted on the fixed and moving axles, further increasing the mechanical advantage. Rope and pulley systems: The mechanical advantage of the gun tackle can be increased by interchanging the fixed and moving blocks, so the rope is attached to the moving block and the rope is pulled in the direction of the lifted load. In this case the block and tackle is said to be "rove to advantage": three rope parts now support the load W, which means the tension in the rope is W/3. Thus, the mechanical advantage is three. Rope and pulley systems: By adding a pulley to the fixed block of a gun tackle, the direction of the pulling force is reversed, though the mechanical advantage remains the same. This is an example of the luff tackle. Rope and pulley systems: Free body diagrams The mechanical advantage of a pulley system can be analysed using free body diagrams which balance the tension force in the rope with the force of gravity on the load. In an ideal system, the massless and frictionless pulleys do not dissipate energy and allow for a change of direction of a rope that does not stretch or wear. In this case, a force balance on a free body that includes the load, W, and n supporting sections of a rope with tension T, yields nT − W = 0. Rope and pulley systems: The ratio of the load to the input tension force is the mechanical advantage MA of the pulley system: MA = W/T = n. Thus, the mechanical advantage of the system is equal to the number of sections of rope supporting the load. Belt and pulley systems: A belt and pulley system is characterized by two or more pulleys in common to a belt. This allows for mechanical power, torque, and speed to be transmitted across axles. If the pulleys are of differing diameters, a mechanical advantage is realized. Belt and pulley systems: A belt drive is analogous to a chain drive; however, a belt sheave may be smooth (devoid of discrete interlocking members as would be found on a chain sprocket, spur gear, or timing belt) so that the mechanical advantage is approximately given by the ratio of the pitch diameters of the sheaves only, not fixed exactly by a ratio of teeth as with gears and sprockets. Belt and pulley systems: In the case of a drum-style pulley, without a groove or flanges, the pulley often is slightly convex to keep the flat belt centered. It is sometimes referred to as a crowned pulley. Though once widely used on factory line shafts, this type of pulley is still found driving the rotating brush in upright vacuum cleaners, in belt sanders and bandsaws. Agricultural tractors built up to the early 1950s generally had a belt pulley for a flat belt (which is what Belt Pulley magazine was named after).
It has been replaced by other mechanisms with more flexibility in methods of use, such as power take-off and hydraulics. Belt and pulley systems: Just as the diameters of gears (and, correspondingly, their number of teeth) determine a gear ratio, and thus the speed increases or reductions and the mechanical advantage that they can deliver, the diameters of pulleys determine those same factors. Cone pulleys and step pulleys (which operate on the same principle, although the names tend to be applied to flat belt versions and V-belt versions, respectively) are a way to provide multiple drive ratios in a belt-and-pulley system that can be shifted as needed, just as a transmission provides this function with a gear train that can be shifted. V-belt step pulleys are the most common way that drill presses deliver a range of spindle speeds. Belt and pulley systems: With belts and pulleys, friction is one of the most important forces. Some uses for belts and pulleys involve awkward angles (leading to bad belt tracking and possibly slipping the belt off the pulley) or low belt-tension environments, causing unnecessary slippage of the belt and hence extra wear. To solve this, pulleys are sometimes lagged. Lagging is the application of a coating, cover, or wearing surface, often with a textured pattern, to a pulley shell. Lagging is often applied in order to extend the life of the shell by providing a replaceable wearing surface, or to improve the friction between the belt and the pulley. Notably, drive pulleys are often rubber-lagged (coated with a rubber friction layer) for exactly this reason.
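The ideal relations above fit in a few lines of code. The following Python sketch is illustrative only (the function names and example numbers are assumptions, not from any library or standard): it computes the rope tension and mechanical advantage of an ideal block and tackle, and the output speed of an ideal belt drive from the pitch diameters.

```python
# Ideal (massless, frictionless) pulley relations from the text.

def block_and_tackle(load_w: float, rope_parts: int) -> tuple:
    """n rope parts support the moving block: n*T - W = 0, so T = W/n,
    and the mechanical advantage is MA = W/T = n."""
    tension = load_w / rope_parts
    return tension, load_w / tension

def belt_output_rpm(drive_diameter: float, driven_diameter: float,
                    drive_rpm: float) -> float:
    """Ideal belt drive: output speed scales by the ratio of pitch
    diameters (no slip, no stretch)."""
    return drive_rpm * drive_diameter / driven_diameter

# A gun tackle rove to advantage has three rope parts supporting the load:
print(block_and_tackle(600.0, 3))             # (200.0, 3.0): tension 200 N, MA 3
# A 100 mm pulley at 1750 rpm driving a 250 mm pulley:
print(belt_output_rpm(100.0, 250.0, 1750.0))  # 700.0 rpm
```

Real tackles and drives fall short of these ideal figures because of friction in the sheaves and belt slip, which is exactly why lagging and proper belt tension matter in practice.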
**Deligne–Mumford stack** Deligne–Mumford stack: In algebraic geometry, a Deligne–Mumford stack is a stack F such that the diagonal morphism F → F × F is representable, and there is a scheme U together with an étale, surjective morphism U → F (called an atlas). Pierre Deligne and David Mumford introduced this notion in 1969 when they proved that moduli spaces of stable curves of fixed arithmetic genus are proper smooth Deligne–Mumford stacks. If "étale" is weakened to "smooth", then such a stack is called an algebraic stack (also called an Artin stack, after Michael Artin). An algebraic space is Deligne–Mumford. A key fact about a Deligne–Mumford stack F is that any X in F(B), where B is quasi-compact, has only finitely many automorphisms. A Deligne–Mumford stack admits a presentation by a groupoid; see groupoid scheme. Examples: Affine stacks Deligne–Mumford stacks are typically constructed by taking the stack quotient of some variety where the stabilizers are finite groups. For example, consider the action of the cyclic group C_n = ⟨a ∣ a^n = 1⟩ on C² in which the generator acts through a primitive n-th root of unity ζ_n, for instance a·(x, y) = (ζ_n x, ζ_n y). Then the stack quotient [C²/C_n] is an affine smooth Deligne–Mumford stack with a non-trivial stabilizer at the origin. If we wish to think about this as a category fibered in groupoids over (Sch/C)_fppf, then given a scheme S → C the fiber category over S is the groupoid of principal C_n-bundles P → S together with C_n-equivariant maps P → C². Note that we could be slightly more general if we consider the group action on schemes over Spec(Z[ζ_n]). Weighted projective line Non-affine examples come up when taking the stack quotient for weighted projective spaces/varieties. For example, the space P(2,3) is constructed by the stack quotient [(C² − {0})/C*], where the C*-action is given by λ·(x, y) = (λ²x, λ³y). Notice that since this quotient is not by a finite group, we have to look for points with stabilizers and their respective stabilizer groups. A point (x, y) satisfies (x, y) = (λ²x, λ³y) for some λ ≠ 1 if and only if x = 0 and λ³ = 1, or y = 0 and λ² = 1, so the only non-trivial stabilizers are the finite groups μ₃ and μ₂, respectively, showing that all stabilizers are finite and hence the stack is Deligne–Mumford. Examples: Stacky curves give further examples. Non-example One simple non-example of a Deligne–Mumford stack is [pt/C*], since this has an infinite stabilizer. Stacks of this form are examples of Artin stacks.
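For readability, here is the stabilizer computation for P(2,3) restated in display form (LaTeX; this is only a transcription of the calculation already given above):

```latex
% Stabilizers of the C^* action  \lambda.(x,y) = (\lambda^2 x, \lambda^3 y)
% on C^2 - {0}: solve  (\lambda^2 x, \lambda^3 y) = (x, y).
\[
(\lambda^{2}x,\ \lambda^{3}y) = (x,\ y)
\quad\Longleftrightarrow\quad
\begin{cases}
\lambda^{3} = 1 & \text{if } x = 0 \quad (\text{stabilizer } \mu_{3}),\\
\lambda^{2} = 1 & \text{if } y = 0 \quad (\text{stabilizer } \mu_{2}),\\
\lambda = 1 & \text{otherwise.}
\end{cases}
\]
```

Every stabilizer here is finite, which is precisely the condition that fails for the non-example [pt/C*], where the stabilizer is all of C*.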
**Bowl-out** Bowl-out: A bowl-out (sometimes termed a bowl-off) was used as a tiebreaker in various forms of limited overs cricket to decide a match that would otherwise end in a tie. Five bowlers from each side deliver one or two balls each at an unguarded wicket (three stumps). If each team has hit the same number of wickets after the first five bowlers per side, the bowling continues and is decided by sudden death. Bowl-out: The bowl-out is no longer used as a tie breaker in ICC matches or domestic professional leagues, as batting has no effect on the result of the otherwise tied game. It has been replaced by the Super Over. History: First match decided by bowl-out A bowl-out was first used in the NatWest Trophy in June 1991 in a match between Derbyshire and Minor County side Hertfordshire at Bishop's Stortford. Derbyshire bowled first and Steve Goldsmith managed one hit from his two deliveries. Ole Mortensen, Alan Warner, Frank Griffiths and Simon Base all missed with both of theirs. Hertfordshire's first bowler, Andy Needham, hit with his first ball but missed with his second. John Carr missed with both of his, but Bill Merry struck middle with his second attempt to win the match. History: International cricket The International Cricket Council (ICC) introduced the bowl-out should scores be tied in the semifinals and final of the 2006 ICC Champions Trophy or the 2007 Cricket World Cup, although it was not required to be used in either tournament. At the ICC Annual Conference 2008 it was decided that the bowl-out should be replaced by a one-over "eliminator" (also called a "Super Over") in the 2008 ICC Champions Trophy (postponed to 2009) and the 2009 ICC World Twenty20. Twenty20: International T20 Up until the introduction of the "Super Over" in international Twenty20 cricket, if a match ended with the scores level (either because both teams reached the same score after 20 overs, or the second team scored exactly the par score under the Duckworth–Lewis method), the tie was broken with a bowl-out. The first international bowl-out in a Twenty20 match took place on 16 February 2006, when New Zealand beat West Indies 3-0 in Auckland. A bowl-out was also used on 14 September 2007 when India beat Pakistan 3-0 during the 2007 ICC World Twenty20 in Durban, South Africa. Twenty20: Domestic T20 A bowl-out was first used to decide a domestic Twenty20 match when Surrey beat Warwickshire in July 2005. In the 2009 Twenty20 Cup, Somerset beat Lancashire 5-1 to reach the semi-final stage. One-day: In some forms of domestic one-day cricket competition, a bowl-out is used to decide the result when the match is tied or rained out: for example, the quarterfinal of the Minor Counties Cricket Association Knockout Trophy in 2004, when Northumberland beat Cambridgeshire 4-2.
**Hallucinogenic mushroom** Hallucinogenic mushroom: Hallucinogenic mushrooms are mushrooms that have hallucinogenic effects on humans. They include the psychoactive Amanita mushrooms and the psilocybin mushrooms.
**Brightness temperature** Brightness temperature: Brightness temperature or radiance temperature is a measure of the intensity of electromagnetic energy coming from a source. In particular, it is the temperature at which a black body would have to be in order to duplicate the observed intensity of a grey body object at a frequency ν. This concept is used in radio astronomy, planetary science, materials science and climatology. The brightness temperature provides "a more physically recognizable way to describe intensity." When the electromagnetic radiation observed is thermal radiation emitted by an object simply by virtue of its temperature, then the actual temperature of the object will always be equal to or higher than the brightness temperature. The actual temperature will be higher than the brightness temperature if the emissivity of the object is less than 1. Brightness temperature: For radiation emitted by a non-thermal source such as a pulsar, synchrotron, maser, or a laser, the brightness temperature may be far higher than the actual temperature of the source. In this case, the brightness temperature is simply a measure of the intensity of the radiation as it would be measured at the origin of that radiation. In some applications, the brightness temperature of a surface is determined by an optical measurement, for example using a pyrometer, with the intention of determining the real temperature. As detailed below, the real temperature of a surface can in some cases be calculated by dividing the brightness temperature by the emissivity of the surface. Since the emissivity is a value between 0 and 1, the real temperature will be greater than or equal to the brightness temperature. At high frequencies (short wavelengths) and low temperatures, the conversion must proceed through Planck's law. Brightness temperature: The brightness temperature is not a temperature as ordinarily understood. It characterizes radiation, and depending on the mechanism of radiation can differ considerably from the physical temperature of a radiating body (though it is theoretically possible to construct a device that a radiation source of a given brightness temperature will heat to an actual temperature equal to that brightness temperature). Nonthermal sources can have very high brightness temperatures. In pulsars the brightness temperature can reach 10^30 K. For the radiation of a helium–neon laser with a power of 1 mW, a frequency spread Δf = 1 GHz, an output aperture of 1 mm², and a beam dispersion half-angle of 0.56 mrad, the brightness temperature would be 1.5×10^10 K. For a black body, Planck's law gives: I_ν = (2hν³/c²) · 1/(e^(hν/kT) − 1), where I_ν (the intensity or brightness) is the amount of energy emitted per unit surface area per unit time per unit solid angle in the frequency range between ν and ν + dν; T is the temperature of the black body; h is Planck's constant; ν is frequency; c is the speed of light; and k is the Boltzmann constant. Brightness temperature: For a grey body the spectral radiance is a portion of the black body radiance, determined by the emissivity ε. That makes the reciprocal of the brightness temperature: 1/T_b = (k/hν) · ln[1 + (e^(hν/kT) − 1)/ε]. At low frequency and high temperatures, when hν ≪ kT, we can use the Rayleigh–Jeans law: I_ν = 2ν²kT/c², so that the brightness temperature can be simply written as: T_b = εT. In general, the brightness temperature is a function of ν, and only in the case of blackbody radiation is it the same at all frequencies.
The brightness temperature can be used to calculate the spectral index of a body, in the case of non-thermal radiation. Calculating by frequency: The brightness temperature of a source with known spectral radiance I_ν can be expressed as: T_b = (hν/k) · ln⁻¹[1 + 2hν³/(I_ν c²)]. When hν ≪ kT we can use the Rayleigh–Jeans law: T_b = I_ν c²/(2kν²). For narrowband radiation with very low relative spectral linewidth Δν ≪ ν and known radiance I, we can calculate the brightness temperature as: T_b = I c²/(2kν²Δν). Calculating by wavelength: The spectral radiance of black-body radiation is expressed by wavelength as: I_λ = (2hc²/λ⁵) · 1/(e^(hc/(kTλ)) − 1). So, the brightness temperature can be calculated as: T_b = (hc/(kλ)) · ln⁻¹[1 + 2hc²/(I_λ λ⁵)]. For long-wave radiation, hc/λ ≪ kT, the brightness temperature is: T_b = I_λ λ⁴/(2kc). For almost monochromatic radiation, the brightness temperature can be expressed in terms of the radiance I and the coherence length L_c: T_b = (π I λ² L_c)/(4k ln 2). In oceanography: In oceanography, the microwave brightness temperature, as measured by satellites looking at the ocean surface, depends on salinity as well as on the temperature of the water.
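The frequency-domain formulas above are straightforward to evaluate numerically. Here is a minimal Python sketch (constants in SI units; the function names and the test radiance are illustrative assumptions, not from any library):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def t_b_exact(i_nu: float, nu: float) -> float:
    """Exact inversion of Planck's law:
    T_b = (h*nu/k) / ln(1 + 2*h*nu**3 / (I_nu * c**2))."""
    return (H * nu / K) / math.log1p(2.0 * H * nu**3 / (i_nu * C**2))

def t_b_rayleigh_jeans(i_nu: float, nu: float) -> float:
    """Rayleigh-Jeans form, valid when h*nu << k*T:
    T_b = I_nu * c**2 / (2*k*nu**2)."""
    return i_nu * C**2 / (2.0 * K * nu**2)

nu = 1.4e9      # Hz (radio regime)
i_nu = 4.0e-19  # W m^-2 Hz^-1 sr^-1 (illustrative test value)
print(t_b_exact(i_nu, nu))           # ~664 K
print(t_b_rayleigh_jeans(i_nu, nu))  # ~664 K
```

At 1.4 GHz, hν/k is only about 0.07 K, so the exact inversion and the Rayleigh–Jeans form agree to within a fraction of a kelvin; at infrared and optical frequencies the full Planck inversion is required.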
**Snub disphenoid** Snub disphenoid: In geometry, the snub disphenoid, Siamese dodecahedron, triangular dodecahedron, trigonal dodecahedron, or dodecadeltahedron is a convex polyhedron with twelve equilateral triangles as its faces. It is not a regular polyhedron because some vertices have four faces and others have five. It is a dodecahedron, one of the eight deltahedra (convex polyhedra with equilateral triangle faces), and is the 84th Johnson solid (non-uniform convex polyhedra with regular faces). It can be thought of as a square antiprism where both squares are replaced with two equilateral triangles. Snub disphenoid: The snub disphenoid is also the vertex figure of the isogonal 13-5 step prism, a polychoron constructed from a 13-13 duoprism by selecting a vertex on a tridecagon, then selecting the 5th vertex on the next tridecagon, doing so until reaching the original tridecagon. It cannot be made uniform, however, because the snub disphenoid has no circumscribed sphere. History and naming: This shape was called a Siamese dodecahedron in the paper by Hans Freudenthal and B. L. van der Waerden (1947) which first described the set of eight convex deltahedra. The dodecadeltahedron name was given to the same shape by Bernal (1964), referring to the fact that it is a 12-sided deltahedron. There are other simplicial dodecahedra, such as the hexagonal bipyramid, but this is the only one that can be realized with equilateral faces. Bernal was interested in the shapes of holes left in irregular close-packed arrangements of spheres, so he used a restrictive definition of deltahedra, in which a deltahedron is a convex polyhedron with triangular faces that can be formed by the centers of a collection of congruent spheres, whose tangencies represent polyhedron edges, and such that there is no room to pack another sphere inside the cage created by this system of spheres. This restrictive definition disallows the triangular bipyramid (as forming two tetrahedral holes rather than a single hole), pentagonal bipyramid (because the spheres for its apexes interpenetrate, so it cannot occur in sphere packings), and icosahedron (because it has interior room for another sphere). Bernal writes that the snub disphenoid is "a very common coordination for the calcium ion in crystallography". In coordination geometry, it is usually known as the trigonal dodecahedron or simply as the dodecahedron. History and naming: The snub disphenoid name comes from Norman Johnson's 1966 classification of the Johnson solids, convex polyhedra all of whose faces are regular. It exists first in a series of polyhedra with axial symmetry, so also can be given the name digonal gyrobianticupola. Properties: The snub disphenoid is 4-connected, meaning that it takes the removal of four vertices to disconnect the remaining vertices. It is one of only four 4-connected simplicial well-covered polyhedra, meaning that all of the maximal independent sets of its vertices have the same size. 
The other three polyhedra with this property are the regular octahedron, the pentagonal bipyramid, and an irregular polyhedron with 12 vertices and 20 triangular faces. The snub disphenoid has the same symmetries as a tetragonal disphenoid: it has an axis of 180° rotational symmetry through the midpoints of its two opposite edges, two perpendicular planes of reflection symmetry through this axis, and four additional symmetry operations given by a reflection perpendicular to the axis followed by a quarter-turn and possibly another reflection parallel to the axis. That is, it has D2d antiprismatic symmetry, a symmetry group of order 8. Properties: Spheres centered at the vertices of the snub disphenoid form a cluster that, according to numerical experiments, has the minimum possible Lennard-Jones potential among all eight-sphere clusters. Up to symmetries and parallel translation, the snub disphenoid has five types of simple (non-self-crossing) closed geodesics. These are paths on the surface of the polyhedron that avoid the vertices and locally look like a shortest path: they follow straight line segments across each face of the polyhedron that they intersect, and when they cross an edge of the polyhedron they make complementary angles on the two incident faces to the edge. Intuitively, one could stretch a rubber band around the polyhedron along this path and it would stay in place: there is no way to locally change the path and make it shorter. For example, one type of geodesic crosses the two opposite edges of the snub disphenoid at their midpoints (where the symmetry axis exits the polyhedron) at an angle of π/3. A second type of geodesic passes near the intersection of the snub disphenoid with the plane that perpendicularly bisects the symmetry axis (the equator of the polyhedron), crossing the edges of eight triangles at angles that alternate between π/2 and π/6. Shifting a geodesic on the surface of the polyhedron by a small amount (small enough that the shift does not cause it to cross any vertices) preserves the property of being a geodesic and preserves its length, so both of these examples have shifted versions of the same type that are less symmetrically placed. The lengths of the five simple closed geodesics on a snub disphenoid with unit-length edges are 2√3 ≈ 3.464 (for the equatorial geodesic), √13 ≈ 3.606, 4 (for the geodesic through the midpoints of opposite edges), 2√7 ≈ 5.292, and √19 ≈ 4.359. Except for the tetrahedron, which has infinitely many types of simple closed geodesics, the snub disphenoid has the most types of geodesics of any deltahedron. Construction: The snub disphenoid is constructed, as its name suggests, as the snub polyhedron formed from a tetragonal disphenoid, a lower symmetry form of a regular tetrahedron. The snub operation produces a single cyclic band of triangles separating two opposite edges and their adjacent triangles. The snub antiprisms are analogous in having a single cyclic band of triangles, but in the snub antiprisms these bands separate two opposite faces and their adjacent triangles rather than two opposite edges. The snub disphenoid can also be constructed from the square antiprism by replacing the two square faces by pairs of equilateral triangles. However, it is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids. A physical model of the snub disphenoid can be formed by folding a net formed by 12 equilateral triangles (a 12-iamond).
An alternative net suggested by John Montroll has fewer concave vertices on its boundary, making it more convenient for origami construction. Cartesian coordinates: Let q ≈ 0.16902 be the positive real root of the cubic polynomial 2x³ + 11x² + 4x − 1. Furthermore, let r = √q ≈ 0.41112, s = √((1 − q)/(2q)) ≈ 1.56786, and t = 2rs ≈ 1.28917. The eight vertices of the snub disphenoid may then be given the Cartesian coordinates (±t, r, 0), (0, −r, ±t), (±1, −s, 0), (0, s, ±1). Because this construction involves the solution to a cubic equation, the snub disphenoid cannot be constructed with a compass and straightedge, unlike the other seven deltahedra. With these coordinates, it is possible to calculate the volume of a snub disphenoid with edge length a as ξa³, where ξ ≈ 0.85949 is the positive root of the polynomial 5832x⁶ − 1377x⁴ − 2160x² − 4. The exact value of ξ can be written in closed form using cube roots of complex numbers (an expression involving the integers 155249, 28848, and 237), but it is unwieldy. Related polyhedra: Another construction of the snub disphenoid is as a digonal gyrobianticupola. It has the same topology and symmetry, but without equilateral triangles. It has 4 vertices in a square on a center plane as two anticupolae attached with rotational symmetry. Its dual has right-angled pentagons and can self-tessellate space.
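The coordinate construction above is easy to verify numerically. The following sketch assumes numpy and scipy are available; the variable names q, r, s, t follow the text, and with these coordinates every edge has length 2, so the hull volume divided by 2³ should reproduce ξ ≈ 0.85949:

```python
import numpy as np
from scipy.spatial import ConvexHull

# q is the positive real root of 2x^3 + 11x^2 + 4x - 1 (about 0.16902).
roots = np.roots([2.0, 11.0, 4.0, -1.0])
q = float(next(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0))

r = np.sqrt(q)                  # ~0.41112
s = np.sqrt((1 - q) / (2 * q))  # ~1.56786
t = 2 * r * s                   # ~1.28917

vertices = np.array([
    ( t,  r, 0), (-t,  r, 0), (0, -r,  t), (0, -r, -t),
    ( 1, -s, 0), (-1, -s, 0), (0,  s,  1), (0,  s, -1),
])

hull = ConvexHull(vertices)
edge = 2.0  # every edge of this realization has length 2
print(len(hull.simplices))    # 12 triangular faces
print(hull.volume / edge**3)  # ~0.85949, the volume coefficient xi
```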
**Mitochondrial myopathy** Mitochondrial myopathy: Mitochondrial myopathies are types of myopathies associated with mitochondrial disease. On biopsy, the muscle tissue of patients with these diseases usually demonstrates "ragged red" muscle fibers. These ragged-red fibers contain mild accumulations of glycogen and neutral lipids, and may show an increased reactivity for succinate dehydrogenase and a decreased reactivity for cytochrome c oxidase. Inheritance was believed to be maternal (non-Mendelian, extranuclear). It is now known that certain nuclear DNA deletions can also cause mitochondrial myopathy, such as deletion of the OPA1 gene. There are several subcategories of mitochondrial myopathies. Signs and symptoms: Signs and symptoms for each of the main causes include the following. Mitochondrial encephalomyopathy, lactic acidosis, and stroke-like syndrome (MELAS): varying degrees of cognitive impairment and dementia, lactic acidosis, strokes, transient ischemic attacks, hearing loss, and weight loss. Myoclonic epilepsy and ragged-red fibers (MERRF): progressive myoclonic epilepsy; clumps of diseased mitochondria that accumulate in muscle fibers and appear as "ragged-red fibers" when muscle is stained with modified Gömöri trichrome stain; short stature. Kearns–Sayre syndrome (KSS): external ophthalmoplegia, cardiac conduction defects, and sensorineural hearing loss. Chronic progressive external ophthalmoplegia (CPEO): progressive ophthalmoparesis and symptomatic overlap with other mitochondrial myopathies. Cause: Mitochondrial myopathy literally means mitochondrial muscle weakness, that is, muscle weakness caused by mitochondrial dysfunction. The mitochondrion is the powerhouse of the cell. Every muscle cell has mitochondria, and if a muscle cell's mitochondria cannot supply enough energy for the cell to function or perform its duties, problems occur. The cause may be genetic, such as a variation within the POLG (polymerase gamma) gene, which causes mitochondrial DNA (mtDNA) to become damaged and lose function. Diagnosis: Muscle biopsy: ragged red fibers in Gömöri trichrome stain. Treatment: Although no cure currently exists, there is hope for treatment of this class of hereditary diseases, and clinical trials continue.
**SERPIN A12** SERPIN A12: Serpin A12 (OL-64, Vaspin, Visceral adipose-specific serpin, Ser A12) is a glycoprotein that is a clade A member of the serine protease inhibitor (serpin) family. In humans, Serpin A12 is encoded by the SERPINA12 gene. First discovered in 2005, Serpin A12 was highly expressed in white adipose tissue of Otsuka Long Evans Tokushima Fatty rats at the same time that the rats' obesity and plasma insulin levels reached a peak, at around 30 weeks old. Eventually, it was found to be expressed in visceral and subcutaneous adipose tissue of obese humans, leading the protein to be linked with obesity, glucose metabolism, and insulin resistance. Function: Serpin A12 is a protease inhibitor with an approximate weight of 47 kDa and is a member of the adipokine family of cytokines secreted by adipose tissue. Members of this family regulate a number of cellular processes, such as inflammation mediation and insulin resistance. The protein is made up of 414 amino acids, and its main function is modulating the insulin-inhibiting protease KLK7, mainly in adipose tissue. Among other functions, Serpin A12 performs insulin-sensitizing actions. Serpin A12 treatment of obese and insulin-resistant mice has been shown to decrease the expression of insulin resistance genes in white adipose tissue as well as to improve glucose tolerance. Serpin A12 also increases bone density, which helps prevent osteoporosis. It does so by regulating osteoblasts, assisting in their mineralization of the bone matrix, thus balancing bone formation with bone resorption. History: In 2005, Serpin A12 was identified in rats, mice, and humans. One study confirmed that subjects with insulin resistance had higher levels of Serpin A12 than others; thus, those humans (or animals) who were diabetic had more Serpin A12 than those who were not. The explanation for this lies in the fact that type 2 diabetes is related to inflammation processes, which coincides with the anti-inflammatory effect of Serpin A12. Mode of Action: Serpin A12 is secreted by visceral adipose tissue. Some of its roles include activation of GLUT4 and STAT3, and increasing acetylcholine and nitric oxide levels. It also inhibits NF-κB and decreases the production of cysteine-rich protein, HOMA-IR, low-density lipoprotein C, leptin, and others. Mode of Action: The function of insulin is to allow the movement of glucose into cells, and for this it binds to the insulin receptor's tyrosine kinase, causing, first, the phosphorylation of tyrosine and, then, the activation of the insulin receptor substrate. The insulin receptor substrate, in turn, activates protein kinase B by stimulating the PI3K protein, and eventually glucose transporters reach the cell surface so glucose can enter the cell. If the activation of this pathway is inhibited, glucose will not be able to enter the cell. The NF-κB protein is responsible for regulating inflammation in adipose tissue, so the activation of this protein leads to inflammation, which leads to insulin resistance, since the phosphorylation of tyrosine is interrupted. Serpin A12 inhibits the activation of the protein NF-κB, and thus insulin resistance is decreased. Structure: Serpin A12 is encoded by the SERPINA12 gene at locus 14q32.13 on chromosome 14. Serpin A12 is made up of 414 amino acids, and its molecular weight is approximately 47 kDa. Domain Serpin A12 has a single protein domain consisting of 7 to 9 alpha helices and 3 beta sheets (A, B, C). Serpin group: Serpins are a large group of proteins with similar structures.
"Serpin" is derived from "serine protease inhibitors", which denotes the group's main characteristic, the inhibition of protease enzymes. More than 1000 serpins have been identified among humans, plants, bacteria, parasites, and some viruses. The first proteins from this group that were studied were antithrombin and antitrypsin, which are blood proteins. Scientists discovered that the two share a large number of amino acid sequences that were also common to ovalbumin. So, they thought they may be faced with a new family of proteins. Although the main role of serpins is the inhibition of proteases, they can also perform other functions, such as storage and transport, as well as the regulation of blood pressure. Modifications: Serpin A12 has three possible glycosylation sites located at asparagine residues. These can be post-translational modifications that may change the protein's properties. Although the protein can undergo these different glycosylation processes, they only diminish heparin affinity. There is no significant effect on KLK7 activity or the protein's thermal stability. Related diseases: When obesity and insulinemia are present, Serpin A12 is found in very high concentrations. However, these levels decrease with the worsening of diabetes and weight loss.Administration of Serpin A12 to obese subjects has been seen to improve glucose tolerance and insulin sensitivity, and it also alters the expression of genes associated with insulin resistance. It is thought that the expression of Serpin A12 could be a mechanism for the compensation of insulin sensitivity and glucose metabolism, as occurs in problems such as obesity or type 2 diabetes. Despite everything, several studies have shown that not all obese, diabetic, or glucose intolerant patients have detectable Serpin A12 levels.