**I. M. Sechenov Institute of Evolutionary Physiology and Biochemistry** I. M. Sechenov Institute of Evolutionary Physiology and Biochemistry: The I. M. Sechenov Institute of Evolutionary Physiology and Biochemistry (IEPHB) is a facility in Saint Petersburg, Russia, dedicated to research in the fields of biochemistry and evolutionary physiology. History: The Institute was founded as a research group in October 1950 by Leon Orbeli, a physiologist and longtime collaborator of Ivan Pavlov. Initially, Orbeli's research group included eight people. It subsequently expanded and was transformed into the Laboratory of Evolutionary Physiology of the USSR Academy of Sciences, whose main objects of study were the functions of the nervous system in animals and man during ontogenesis, and the effects of ionizing radiation on animals. History: In 1956, the Laboratory became the Institute of Evolutionary Physiology of the Academy of Sciences, with Orbeli serving as its first Director. The new Institute was named after Ivan Sechenov. By the end of 1957, the Institute numbered nine laboratories, one of them transferred from the former P.F. Lesgaft Institute for Natural Sciences. History: After Orbeli's death in 1958, the Institute was headed by his collaborator Professor Alexander Ginetsinsky. From June 1960 to March 1975, the Institute was guided by Eugenie Kreps: a former pupil of Ivan Pavlov and collaborator of Orbeli's, Kreps is known for his fundamental studies in the field of comparative physiology and biochemistry of the nervous system. Kreps promoted research in evolutionary biochemistry, and accordingly, in 1964, the Institute adopted its current name, I. M. Sechenov Institute of Evolutionary Physiology and Biochemistry. From 1975 to 1981 the Institute was headed by Vladimir Govyrin, and from 1981 to 2004 by Vladimir Svidersky. Journal of Evolutionary Biochemistry and Physiology: The Institute publishes the Journal of Evolutionary Biochemistry and Physiology (ISSN 0022-0930), which is abstracted in Chemical Abstracts. The journal is also available online by subscription (online ISSN 1608-3202); contents and abstracts are available online in PDF format.
**Hopp–Woods scale** Hopp–Woods scale: The Hopp–Woods hydrophilicity scale of amino acids is a method of ranking the amino acids in a protein according to their water solubility, in order to search for surface locations on proteins, especially locations that tend to form strong interactions with other macromolecules such as proteins, DNA, and RNA. Given the amino acid sequence of any protein, likely interaction sites can be identified by taking the moving average of six amino acid hydrophilicity values along the polypeptide chain and looking for local peaks in the data plot. Hopp–Woods scale: In papers subsequent to their initial publication of the method, Hopp and Woods demonstrated that the data plots, or hydrophilicity profiles, contained much information about protein folding, and that the hydrophobic valleys of the profiles corresponded to internal structures of proteins such as beta-strands and alpha-helices. Furthermore, long hydrophobic valleys were shown to correspond quite closely to the membrane-spanning helices identified by the later-published Kyte and Doolittle hydropathic plotting method.
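As a concrete illustration of the profile computation described above, the sketch below averages per-residue hydrophilicity over a six-residue sliding window and reports local maxima as candidate interaction sites. The scale values in the dictionary are the Hopp–Woods assignments as commonly tabulated; they should be checked against the original 1981 paper before serious use, and the helper names and the demo sequence are my own.

```python
# Sketch: Hopp-Woods hydrophilicity profile with a 6-residue moving average.
# Scale values as commonly tabulated for Hopp & Woods (1981); verify before use.
HOPP_WOODS = {
    'R': 3.0, 'D': 3.0, 'E': 3.0, 'K': 3.0, 'S': 0.3, 'N': 0.2, 'Q': 0.2,
    'G': 0.0, 'P': 0.0, 'T': -0.4, 'A': -0.5, 'H': -0.5, 'C': -1.0,
    'M': -1.3, 'V': -1.5, 'I': -1.8, 'L': -1.8, 'Y': -2.3, 'F': -2.5, 'W': -3.4,
}

def hydrophilicity_profile(seq, window=6):
    """Moving average of Hopp-Woods values; index i covers seq[i:i+window]."""
    vals = [HOPP_WOODS[aa] for aa in seq]
    return [sum(vals[i:i + window]) / window for i in range(len(vals) - window + 1)]

def local_peaks(profile):
    """Indices where the averaged hydrophilicity is a local maximum."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i - 1] < profile[i] >= profile[i + 1]]

if __name__ == "__main__":
    seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # arbitrary demo sequence
    prof = hydrophilicity_profile(seq)
    print([round(p, 2) for p in prof])
    print("candidate interaction-site windows:", local_peaks(prof))
```

Peaks in the averaged profile correspond to the hydrophilic surface stretches the text describes, and sustained valleys to the hydrophobic interior or membrane-spanning segments.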
**Peter Kohl (scientist)** Peter Kohl (scientist): Peter Kohl FAHA FHRS FTPS FIUPS is a scientist specializing in integrative cardiac research. He studies heterocellular electrophysiological interactions in cardiac tissue, myocardial structure-function relationships using 'wet' and 'dry' lab models, and mechano-electrical autoregulation of the heart. Education: Kohl studied medicine and biophysics in Moscow before completing his doctorate and his residency in physiology at the Humboldt University in Berlin. Supported by a scholarship from the Boehringer-Ingelheim Foundation, he went as a post-doctoral researcher to the chair of Prof. Denis Noble, Department of Physiology at the University of Oxford, where, using a combination of experimental and theoretical models, he explored cardiac mechanobiology and heterocellular interactions. Career: Supported by personal fellowships from the UK Royal Society and the British Heart Foundation, he founded the Cardiac Mechano-Electric Feedback Lab at Oxford. Work from this time ranged from the mechanistic explanation of the Bainbridge effect (a mechanically induced increase in heart rate) in isolated pacemaker cells stretched during patch clamp measurements with carbon fibres, and the description of a stretch-induced increase in calcium release from the sarcoplasmic reticulum as a mechanism contributing to the Frank–Starling law, to the exploration of direct electrical coupling of cardiac fibroblasts and muscle cells. After two decades of research and teaching at Oxford, Kohl was appointed Inaugural Chair in Cardiac Biophysics and Systems Biology at Imperial College London. Work during this time, funded by the ERC Advanced Grant CardioNECT, focused on the development and use of novel optogenetic and fluorometric techniques, resulting in the first functional demonstration of heterocellular electrical cell coupling in native heart tissue. After five years in London, Kohl was recruited to Freiburg University in 2016 as the founding director of the Institute for Experimental Cardiovascular Medicine (IEKM). The English-language IEKM is structured with flat hierarchies and a broad interdisciplinary profile. About 40% of its staff are from outside Germany, with scientific backgrounds in physiology, pharmacology, medicine, biology, physics, engineering and mathematics. The institute has grown from 6 to almost 60 staff and students in just a few years, established a novel biobank concept (in which functional data collected on live human tissue are an integral part of the biobank), and is committed to teaching in small-group formats such as the new one-year international MSc in Medical Sciences - Cardiovascular Research, with an annual intake of no more than 6 pre-PhD students. Honours: Kohl is a visiting professor at the University of Oxford and Imperial College London. He served as co-founding director (with Peter Coveney, University College London) of the Virtual Physiological Human Network of Excellence (VPH NoE), and he is the Speaker of the German national collaborative research centre SFB1425 'Make Better Scars'. From 2018 to 2020, Kohl was joint Editor-in-Chief (with Denis Noble and Tom Blundell) of Progress in Biophysics and Molecular Biology, and from 2022 to 2023 he was Editor-in-Chief of The Journal of Physiology.
**Cuban sunset** Cuban sunset: A Cuban sunset is a cocktail made from rum and a mixture of lemonade, lime soda, guava nectar, and grenadine syrup. Cuban sunset: The drink originated in either Havana or Varadero as a variety of a traditional Cuban guava-based drink. In Cuba, the drink is commonly served (along with either a Cubata or a Mojito) as a pre-dinner drink. The Cuban variety of the cocktail commonly uses extra guava nectar in place of grenadine syrup, and normally contains Havana Club rum. Outside of Cuba, many recipes call for the use of Bacardi White Rum.
**Benzaldehyde dehydrogenase (NAD+)** Benzaldehyde dehydrogenase (NAD+): In enzymology, a benzaldehyde dehydrogenase (NAD+) (EC 1.2.1.28) is an enzyme that catalyzes the chemical reaction: benzaldehyde + NAD+ + H2O ⇌ benzoate + NADH + 2 H+. The three substrates of this enzyme are benzaldehyde, NAD+, and H2O, and its three products are benzoate, NADH, and H+. Benzaldehyde dehydrogenase (NAD+): This enzyme belongs to the family of oxidoreductases, specifically those acting on the aldehyde or oxo group of the donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is benzaldehyde:NAD+ oxidoreductase. Other names in common use include benzaldehyde (NAD+) dehydrogenase and benzaldehyde dehydrogenase (NAD+). This enzyme participates in benzoate degradation via hydroxylation, and in toluene and xylene degradation.
**CRACR2A** CRACR2A: Calcium release activated channel regulator 2A is a protein that in humans is encoded by the CRACR2A gene.
**Dashboard of Sustainability** Dashboard of Sustainability: The Dashboard of Sustainability is a free-of-charge, non-commercial software package configured to convey the complex relationships among economic, social, and environmental issues. Dashboard of Sustainability: The software is designed to help developing countries achieve the Millennium Development Goals and work towards sustainable development. The software package was developed by members of the Consultative Group on Sustainable Development Indicators (CGSDI), and has been applied to a number of indicator sets, inter alia the Millennium Development Goals indicators and the United Nations Commission on Sustainable Development indicators. Dashboard of Sustainability: In 2002, Dashboard of Sustainability researchers Jochen Jesinghaus and Peter Hardi presented the Dashboard of Sustainability at the Johannesburg Summit and the 2002 World Social Forum in Porto Alegre. It was also included in the resources for the OECD World Forum on Key Indicators. In January 2006, the Millennium Project utilized the Dashboard of Sustainability to conclude in their "State of the Future" report that global prospects for improving the overall health, wealth, and sustainability of humanity are improving, but slowly. In February 2006, it was proposed that the Dashboard of Sustainability be utilized to combine and represent two or more of the following five frameworks presently used for developing sustainability indicators: domain-based, goal-based, issue-based, sectoral, and causal frameworks. Known applications (external links): Translating a spreadsheet into a dashboard is relatively straightforward (see The Manual), and numerous indicator sets have been translated into the dashboard format. While many of them are not publicly available, the following applications have been put online by their authors.

Applications with global scope:
- Millennium Development Goals Indicators Dashboard
- Sustainable Development Indicators Dashboard (UN CSD set)
- UNESCO/SCOPE Policy brief on Sustainable Development
- Maternal and Neonatal Program Effort index (MNPI)

Applications with national scope:
- Australia: National Land & Water Resources Audit, Sydney Regional Innovation Dashboard
- Azores regional dashboard
- Brazil: National multiannual plan (Plano Plurianual, PPA), Rural sustainability, Lages, Mato Grosso
- Greece regional dashboard
- India/West Bengal: Monitoring of Public Health Progress
- Italy: Sicily waste management, agriculture indicators, Bienno, Bologna's Ecological Footprint, Ecosistema Urbano, Padua, Liguria, Regional wellbeing indices, Varese PTCP
- Estonia: Estonian National Strategy on Sustainable Development and Estonian regional dashboards
- Switzerland Regional Dashboard
- Tanzania Districts Dashboard
- United States: Sustainable Development in the United States: An Experimental Set of Indicators
**Gribenes** Gribenes: Gribenes or grieven (Yiddish: גריבענעס, [ˈɡrɪbənəs], "cracklings"; Hebrew: גלדי שומן) is a dish consisting of crisp chicken or goose skin cracklings with fried onions. Etymology: The word gribenes is related to the German Griebe (plural Grieben) meaning 'piece of fat, crackling' (from the Old High German griobo via the Middle High German griebe), where Griebenschmalz is lard from which the cracklings have not been removed. History: A favored food in the past among Ashkenazi Jews, gribenes appears in Jewish stories and parables, for example in the work of the Hebrew poet Chaim Nachman Bialik. As with other cracklings, gribenes are a byproduct of rendering animal fat to produce cooking fat, in this case kosher schmaltz. Gribenes can be used as an ingredient in dishes like kasha varnishkes, fleishig kugel and gehakte leber. Gribenes is often associated with the Jewish holidays Hanukkah and Rosh Hashanah. Traditionally, gribenes were served with potato kugel or latkes during Hanukkah. It is also associated with Passover, as large amounts of schmaltz, with its resulting gribenes, were traditionally used in Passover recipes. Uses: Gribenes can be eaten as a snack on rye or pumpernickel bread with salt, or used in recipes such as chopped liver. It is often served as a side dish with pastrami on rye or hot dogs, and is eaten as a midnight snack or appetizer. In Louisiana, Jews add gribenes to jambalaya in place of (treyf) shrimp. It was served to children on challah bread as a treat. It can also be served in a GLT, a modified version of a BLT sandwich that replaces bacon with gribenes.
**MDMB-CHMICA** MDMB-CHMICA: MDMB-CHMICA is an indole-based synthetic cannabinoid that is a potent agonist of the CB1 receptor and has been sold online as a designer drug. While MDMB-CHMICA was initially sold under the name "MMB-CHMINACA", the compound corresponding to this code name (i.e. the isopropyl instead of tert-butyl analogue of MDMB-CHMINACA) was identified on the designer drug market in 2015 as AMB-CHMINACA. Chemistry: Several commercial samples of MDMB-CHMICA were found to exclusively contain the (S)-enantiomer, based on vibrational and electronic circular dichroism spectroscopy and X-ray crystallography. An (S)-configuration for the tert-leucinate group is unsurprising, since MDMB-CHMICA is likely synthesized from the abundant and inexpensive "L" form of the appropriate tert-leucinate reactant. Pharmacology: MDMB-CHMICA acts as a highly potent full agonist of the CB1 receptor, with an efficacy of 94% and an EC50 value of 0.14 nM, approximately 8 times lower than the EC50 of JWH-018 (1.13 nM) and twofold lower than that of AB-CHMINACA (0.27 nM). Metabolism: MDMB-CHMICA's main metabolic reactions comprise mono-hydroxylations and hydrolysis of the carboxylic ester function. In total, 31 metabolites could be identified in vivo. Side effects: Seventy-one serious adverse events, including 42 acute intoxications and 29 deaths (Germany (5), Hungary (3), Poland (1), Sweden (9), United Kingdom (10), Norway (1)) that occurred in nine European countries between 2014 and 2016, have been associated with MDMB-CHMICA. Side effects such as unconsciousness or coma, hyperemesis, nausea, seizures, convulsions, tachycardia, bradycardia, mydriasis, syncope, spontaneous urination and defecation, shortness of breath, somnolence, respiratory acidosis, metabolic acidosis, collapse, lower-limb paralysis, chest pain, aggression and severe disturbance of behaviour were reported. Legal status: In the United States, MDMB-CHMICA is a Schedule I controlled substance. MDMB-CHMICA is also illegal in Austria, Canada, China, Croatia, Denmark, Estonia, Finland, Germany, Greece, Hungary, Latvia, Lithuania, Louisiana, Luxembourg, Norway, Portugal, Turkey, the UK, Sweden and Switzerland. In August 2016 the European Commission proposed a ban on MDMB-CHMICA across the European Union. On 27 February 2017 the Commission adopted an implementing act banning MDMB-CHMICA, requiring Member States to take the necessary measures to subject it to control measures and criminal penalties no later than 4 March 2018. Seizures: Over 3600 MDMB-CHMICA seizures between 2014 and 2016 in 19 member states of the European Union have been reported to the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA), including a 40 kg seizure in Luxembourg in December 2014.
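To make the potency comparison above concrete, the sketch below evaluates a simple Hill-type concentration-response curve for the three EC50 values quoted in the text. Assuming a Hill coefficient of 1 is my simplification, not a value from the source, and the 0.5 nM test concentration is purely illustrative.

```python
# Sketch: fractional CB1 response from a Hill-type curve, E = Emax * C / (C + EC50).
# EC50 values (nM) are those quoted in the text; Hill coefficient of 1 is assumed.
EC50_NM = {"MDMB-CHMICA": 0.14, "AB-CHMINACA": 0.27, "JWH-018": 1.13}

def fractional_response(conc_nm, ec50_nm, emax=1.0):
    """Fraction of maximal effect at a given agonist concentration."""
    return emax * conc_nm / (conc_nm + ec50_nm)

for name, ec50 in EC50_NM.items():
    # Response at a common illustrative concentration of 0.5 nM.
    print(f"{name}: {fractional_response(0.5, ec50):.0%} of maximal response at 0.5 nM")
```

At the same concentration, the lower EC50 of MDMB-CHMICA translates into a substantially larger fractional response than JWH-018, which is what its higher potency means in practice.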
**Turtle graphics** Turtle graphics: In computer graphics, turtle graphics are vector graphics using a relative cursor (the "turtle") upon a Cartesian plane (x and y axes). Turtle graphics is a key feature of the Logo programming language. Overview: The turtle has three attributes: a location, an orientation (or direction), and a pen. The pen, too, has attributes: color, width, and on/off state (also called down and up). Overview: The turtle moves with commands that are relative to its own position, such as "move forward 10 spaces" and "turn left 90 degrees". The pen carried by the turtle can also be controlled, by enabling it, setting its color, or setting its width. A student could understand (and predict and reason about) the turtle's motion by imagining what they would do if they were the turtle. Seymour Papert called this "body syntonic" reasoning. Overview: A full turtle graphics system requires control flow, procedures, and recursion: many turtle drawing programs fall short. From these building blocks one can build more complex shapes like squares, triangles, circles and other composite figures. The idea of turtle graphics is, for example, useful in a Lindenmayer system for generating fractals. Turtle geometry is also sometimes used in graphics environments as an alternative to a strictly coordinate-addressed graphics system. History: Turtle graphics are often associated with the Logo programming language. Seymour Papert added support for turtle graphics to Logo in the late 1960s to support his version of the turtle robot, a simple robot controlled from the user's workstation and designed to carry out the drawing functions assigned to it using a small retractable pen set into or attached to the robot's body. Turtle geometry works somewhat differently from (x,y)-addressed Cartesian geometry, being primarily vector-based (i.e. relative direction and distance from a starting point) in comparison to coordinate-addressed systems such as bitmaps or raster graphics. As a practical matter, the use of turtle geometry instead of a more traditional model mimics the actual movement logic of the turtle robot. The turtle is traditionally and most often represented pictorially either as a triangle or a turtle icon (though it can be represented by any icon). History: Today, the Python programming language's standard library includes a Turtle graphics module. Like its Logo predecessor, the Python implementation of turtle allows programmers to control one or more turtles in a two-dimensional space. Since standard Python syntax, control flow, and data structures can be used alongside the turtle module, turtle has become a popular way for programmers learning Python to familiarize themselves with the basics of the language. Extension to three dimensions: The ideas behind turtle graphics can be extended to include three-dimensional space, using one of several different coordinate models. A common setup is Cartesian-rotational, as with the original 2D turtle: an additional "up" vector (normal vector) is defined to choose the plane in which the turtle's 2D "forward" vector rotates; the "up" vector itself also rotates around the "forward" vector. In effect, the turtle has two different heading angles, one within the plane and the other determining the plane's angle. Usually, changing the plane's angle does not move the turtle, in line with the traditional setup.
Extension to three dimensions: Verhoeff (2010) implements the two-vector approach; a roll command is used to rotate the "up" vector around the "forward" vector. The article proceeds to develop an algebraic theory to prove geometric properties from syntactic properties of the underlying turtle programs. One of the insights is that a dive command is really a shorthand for a turn-roll-turn sequence. Cheloniidae Turtle Graphics is a 3D turtle library for Java. It has a bank command (same as roll) and a pitch command (same as dive) in its "Rotational Cartesian Turtle". Other coordinate models, including non-Euclidean geometry, are allowed but not included.
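The relative-command style described above is easy to see in Python's standard-library turtle module, for example drawing a square from nothing but forward/left commands:

```python
import turtle

t = turtle.Turtle()
t.pensize(2)            # pen attributes: width, color, up/down state
t.pencolor("blue")
for _ in range(4):      # a square built from purely relative commands
    t.forward(100)      # "move forward 100 units"
    t.left(90)          # "turn left 90 degrees"
turtle.done()
```

The two-vector 3D scheme can also be sketched in a few lines. The Turtle3D class below is a minimal illustration of the forward/up-vector model with names of my own choosing; it is not the API of Verhoeff's system or of Cheloniidae.

```python
import math

def _rotate(v, k, theta):
    """Rodrigues' formula: rotate v about unit axis k by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    dot = sum(a * b for a, b in zip(k, v))
    cx = (k[1]*v[2] - k[2]*v[1], k[2]*v[0] - k[0]*v[2], k[0]*v[1] - k[1]*v[0])
    return tuple(v[i]*c + cx[i]*s + k[i]*dot*(1 - c) for i in range(3))

class Turtle3D:
    """Minimal two-vector 3D turtle: a 'forward' heading plus an 'up' normal."""
    def __init__(self):
        self.pos = (0.0, 0.0, 0.0)
        self.fwd = (1.0, 0.0, 0.0)
        self.up = (0.0, 0.0, 1.0)
        self.path = [self.pos]

    def forward(self, d):
        self.pos = tuple(p + d * f for p, f in zip(self.pos, self.fwd))
        self.path.append(self.pos)

    def turn(self, deg):   # yaw within the current plane: rotate fwd about up
        self.fwd = _rotate(self.fwd, self.up, math.radians(deg))

    def roll(self, deg):   # bank: rotate up about fwd; the turtle does not move
        self.up = _rotate(self.up, self.fwd, math.radians(deg))

    def dive(self, deg):   # pitch: rotate both vectors about the side axis
        side = (self.fwd[1]*self.up[2] - self.fwd[2]*self.up[1],
                self.fwd[2]*self.up[0] - self.fwd[0]*self.up[2],
                self.fwd[0]*self.up[1] - self.fwd[1]*self.up[0])
        self.fwd = _rotate(self.fwd, side, math.radians(deg))
        self.up = _rotate(self.up, side, math.radians(deg))
```

A quick check in the same spirit as the insight quoted above: in this particular implementation, dive(θ) can be reproduced by the sequence roll(90), turn(θ), roll(-90), illustrating how pitch decomposes into the other two rotations (the exact sequence depends on sign and axis conventions).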
**Snub triheptagonal tiling** Snub triheptagonal tiling: In geometry, the order-3 snub heptagonal tiling is a semiregular tiling of the hyperbolic plane. There are four triangles and one heptagon on each vertex. It has a Schläfli symbol of sr{7,3}. The snub tetraheptagonal tiling is another related hyperbolic tiling, with Schläfli symbol sr{7,4}. Images: Drawn in chiral pairs, with edges missing between black triangles. Dual tiling: The dual tiling is called an order-7-3 floret pentagonal tiling, and is related to the floret pentagonal tiling. Related polyhedra and tilings: This semiregular tiling is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and corresponding Coxeter–Dynkin diagram. These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n=6 and in the hyperbolic plane for any higher n. The series can be considered to begin with n=2, with one set of faces degenerated into digons. Related polyhedra and tilings: From a Wythoff construction, there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling. Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
**Andreas Zimmer** Andreas Zimmer: Andreas Zimmer is Professor of Neurobiology and Director of the Institute for Molecular Psychiatry at the University of Bonn; he was previously a professor at the University of Bielefeld and a researcher at the National Institute of Mental Health. He is perhaps best known for his work in the field of cannabinoid research; his most cited paper on this subject, in the Proceedings of the National Academy of Sciences, has been cited 323 times. He has also done major work on the genetics of nervous system function; his most widely cited paper on this subject, in Nature Genetics, has been cited 262 times. He has produced more than 121 research articles, as indexed in Web of Science.
**Cakewalk (carnival game)** Cakewalk (carnival game): Cakewalk (or cake-walk) is a game played at carnivals, funfairs, and fundraising events. It is similar to a raffle and musical chairs. Background: Tickets are sold to participants, and a path of numbered squares is laid out on a rug, with one square per ticket sold. The participants walk around the path in time to music, which plays for a duration and then stops. A number is drawn at random and called out, and the person standing on that number wins a cake as a prize (hence the name). Background: During the 1930s, the English poet John Betjeman described St Giles' Fair in Oxford as follows: It is about the biggest fair in England. The whole of St Giles' … is thick with freak shows, roundabouts, cake-walks, the whip, and the witching waves.
**The Vital Question** The Vital Question: The Vital Question is a book by the English biochemist Nick Lane about the way the origin and evolution of life on Earth were constrained by the provision of energy. The Vital Question: The book was well received by critics; The New York Times, for example, found it "seductive and often convincing", though the reviewer considered much of it speculative beyond the evidence provided. The Guardian wrote that the book presented hard evidence and tightly interlocking theory on a question once thought inaccessible to science, the origin of life. New Scientist found the book's arguments powerful and persuasive, with many testable ideas; that it was not easy to read was compensated by the "incredible, epic story" that it told. The Telegraph wrote that the book succeeded brilliantly as science writing, expanding the reader's horizons with a gripping narrative. Context: Early theories of the origin of life included spontaneous generation from non-living matter and panspermia, the arrival of life on Earth from other bodies in space. The question of how life originated became urgent when Charles Darwin's 1859 On the Origin of Species became widely accepted by biologists. The evolution of new species by splitting off from older ones implied that all life forms were derived from a few such forms, perhaps only one, as Darwin had suggested at the end of his book. Darwin suggested that life could have originated in some "warm little pond" containing a suitable mixture of chemical compounds. The question has continued to be debated into the 21st century. Nick Lane is a biochemist at University College London; he researches "evolutionary biochemistry and bioenergetics, focusing on the origin of life and the evolution of complex cells." He has become known as a science writer, having written four books about evolutionary biochemistry. Book: Synopsis: In the book, Lane discusses what he considers to be a major gap in biology: why life operates the way that it does, and how it began. In his view as a biochemist, the core question is about energy, as all cells handle energy in the same way, relying on a steep electrochemical gradient across the very small thickness of a cell membrane to power all the chemical reactions of life. The electrical energy is transformed into forms that the cell can use by a chain of energy-handling structures, including ancient proteins such as cytochromes, ion channels, and the enzyme ATP synthase, all built into the membrane. Once evolved, this chain has been conserved by all living things, showing that it is vital to life. He argues that such an electrochemical gradient could not have arisen in ordinary conditions, such as the open ocean or Darwin's "warm little pond". He argues instead (following Günter Wächtershäuser) that life began in deep-sea hydrothermal vents, as these contain chemicals that effectively store energy that cells could use, as long as the cells provided a membrane to generate the needed gradient by maintaining different concentrations of chemicals on either side. Once cells similar to bacteria (the first prokaryotes, cells without a nucleus) had emerged, he writes, they stayed like that for two and a half billion years.
Then, just once, cells jumped in complexity and size, acquiring a nucleus and other organelles, and complex behavioural features including sex, which he notes have become universal in complex (eukaryotic) life forms including plants, animals, and fungi. The book is illustrated with 37 figures taken by permission from a wide variety of research sources. They include a timeline, photographs, cladograms, electron flow diagrams and diagrams of the life cycle of cells and their chromosomes. Book: Publication history: The book was first published by Profile Books in 2015. The British edition was subtitled with the question of the title, "Why is Life the Way it is?", whereas the American edition was subtitled with the explanation "Energy, Evolution, and the Origins of Complex Life". A paperback edition came out in 2016. The book has been translated into at least seven languages: Chinese, German, Japanese, Korean, Polish, Spanish, and Turkish. Reception: Tim Requarth, reviewing The Vital Question for The New York Times, finds the book "seductive and often convincing, though speculation far outpaces evidence in many of the book's passages. But perhaps for a biological theory of everything, that's to be expected, even welcomed." Peter Forbes, reviewing The Vital Question in The Guardian, noted that the origin of life was once thought to be "safely consigned to wistful armchair musing", but that in the past 20 years new research in genomics, geology, biochemistry and molecular biology has transformed thinking in the field. "Here is the book that presents all this hard evidence and tightly interlocking theory to a wider audience," writes Forbes. Michael LePage, reviewing the book in New Scientist, writes that the fact that complex cells only evolved once is "very peculiar when you think about it", but it is just one of many large mysteries that Lane addresses, including aging and death, sex, and speciation. LePage finds Lane's arguments "powerful and persuasive", with many testable ideas. The book is not, he writes, the easiest to read, but "it does tell an incredible, epic story", from the dawn of life to the present day. Reception: Caspar Henderson, in his book review in The Telegraph, writes that Lane's book "succeeds brilliantly" as good science writing can, expanding the reader's horizons "in ways not previously imagined." Lane explains why the counterintuitive idea "that cross-membrane proton gradients power all living cells" is no mere technical detail: per gram, he notes, the power is 10,000 times denser than the sun's, and it is conserved across every form of life, telling us something about how life began and how it was constrained to evolve. Henderson recommends the book as amazing and gripping, criticising only the publisher for the "pedestrian" quality of the design and printing. The founder of Microsoft, Bill Gates, reviewed the book under the heading "This Biology Book Blew Me Away". It moved him to read two of Lane's other books, and to bring Lane to New York to interview him. Gates noted that "As much as I loved The Vital Question, it's not for everyone. Some of the explanations are pretty technical. But this is a technical subject, and I doubt anyone else will make it much easier to understand without sacrificing crucial details." Lane won the Michael Faraday Prize in 2016 for "excellence in communicating science to UK audiences". Sources: Lane, Nick (2016). The Vital Question: Why is Life the Way it is? (paperback ed.). Profile Books. ISBN 978-1-781-25037-2.
**DDX3X** DDX3X: ATP-dependent RNA helicase DDX3X is an enzyme that in humans is encoded by the DDX3X gene. Function: DEAD box proteins, characterized by the conserved motif Asp-Glu-Ala-Asp (DEAD), are putative RNA helicases. They are implicated in a number of cellular processes involving alteration of RNA secondary structure, such as translation initiation, nuclear and mitochondrial splicing, and ribosome and spliceosome assembly. Based on their distribution patterns, some members of this family are believed to be involved in embryogenesis, spermatogenesis, and cellular growth and division. This gene encodes a DEAD box protein, which interacts specifically with hepatitis C virus core protein, resulting in a change in intracellular location. This gene has a homolog located in the nonrecombining region of the Y chromosome. The protein sequence is 91% identical between this gene and the Y-linked homolog. Sub-cellular trafficking: DDX3X performs its functions in the cell nucleus and cytoplasm, exiting the nucleus via the exportin-1/CRM1 nuclear export pathway. It was initially reported that the DDX3X helicase domain was necessary for this interaction, while the canonical features of the trafficking pathway, namely the presence of a nuclear export signal (NES) on DDX3X and Ran-GTP binding to exportin-1, were dispensable. DDX3X binding to, and trafficking by, exportin-1 has since been shown not to require the DDX3X helicase domain and to be explicitly NES- and Ran-GTP-dependent. Role in cancer: DDX3X is involved in many different types of cancer. For example, it is abnormally expressed in breast epithelial cancer cells, in which its expression is activated by HIF1A during hypoxia. Increased expression of DDX3X by HIF1A in hypoxia is initiated by the direct binding of HIF1A to the HIF1A response element, as verified with chromatin immunoprecipitation and luciferase reporter assays. Since the expression of DDX3X is affected by the activity of HIF1A, the co-localization of these proteins has also been demonstrated in MDA-MB-231 xenograft tumor samples. In HeLa cells, DDX3X is reported to control cell cycle progression through Cyclin E1. More specifically, DDX3X was shown to bind directly to the 5′ UTR of Cyclin E1, thereby facilitating the translation of the protein. Increased protein levels of Cyclin E1 were demonstrated to mediate the transition into S phase. Melanoma survival, migration and proliferation are affected by DDX3X activity. Melanoma cells with low DDX3X expression exhibit a high migratory capacity, a low proliferation rate and reduced vemurafenib sensitivity, while cells with high DDX3X expression are drug-sensitive, more proliferative and less migratory. These phenotypes can be explained by the translational effects on the melanoma transcription factor MITF. The 5′ UTR of the MITF mRNA contains a complex RNA regulon (IRES) that is bound and activated by DDX3X. Activation of the IRES leads to translation of the MITF mRNA. Mice injected with melanoma cells with a deleted IRES display more aggressive tumor progression, including increased lung metastasis. Interestingly, DDX3X in melanoma is affected by vemurafenib via a mechanism that has not yet been discovered; it is unknown how DDX3X is downregulated in the presence of vemurafenib. However, reduced levels of DDX3X during drug treatment explain the development of drug-resistant cells, which are frequently detected with low MITF expression. Clinical significance: Mutations of the DDX3X gene are associated with medulloblastoma.
In melanoma, low expression of the gene is linked to poor distant metastasis-free survival. In addition, the mRNA level of DDX3X is lower in matched post-relapse melanoma biopsies from patients receiving vemurafenib and in progressing tumors. Mutations of the DDX3X gene also cause DDX3X syndrome, which affects predominantly females and presents with developmental delay or disability, autism, ADHD, and low muscle tone.
**Associative interference** Associative interference: Associative interference is a cognitive theory established on the concept of associative learning, which suggests that the brain links related elements. When one element is stimulated, its associates can also be activated. The best-known study demonstrating the credibility of this concept was Pavlov's experiment in 1927, which was later developed into the learning procedure known as classical conditioning. However, whilst classical conditioning and associative learning both explore how the brain utilizes this cognitive association to benefit us, studies have also shown how the brain can mistakenly associate related but incorrect elements, and this is known as associative interference. A simple example arises when one is asked a series of multiplication questions. A study conducted in 1985 showed that over 90% of the mistakes subjects made were actually answers to other questions with a common multiplicand. That is, having practised questions such as 4 x 6 = 24 and 3 x 8 = 24 made subjects likely to answer related questions incorrectly (e.g. responding 24 to 8 x 4) due to associative interference. Associative interference was widely investigated, and researchers realized there were different types of interference, namely retroactive interference, which investigates how new memories disrupt the retrieval of old memories, and proactive interference, which investigates how old memories disrupt the retrieval of new memories. These two were subsequently known as the interference theory. Associative interference: Therefore, associative interference is a fundamental theory which the interference theory draws upon. The essential difference between these two is time. Both retroactive and proactive interference are concerned with when the interfering elements, or memories, were obtained. However, associative interference does not encompass time, as shown by the previous example: the chronological acquisition of the four times table in relation to the three times table is irrelevant to why subjects made an error, highlighting the difference between the two. History: Interference has been a topic of interest in the experimental literature for over 100 years, with early studies dating back to the 1890s. Hugo Münsterberg was among the first to study this concept, by recording the effects of altering some of his daily routines, such as dipping his pen in ink and taking a watch out of his pocket. He concluded that when a stimulus (what's the time?) is associated with a response (taking out the pocket watch), both the stimulus and the response are equally likely to trigger automatic retrieval of the other when encountered. That is, taking out a pocket watch (response) would often trigger the action of checking the time (stimulus) even if the intended action was something different, such as taking out the watch and placing it on the table. History: Georg E. Müller and Friedrich Schumann later developed a study in 1894 researching stimuli recall. By making subjects learn a series of words composed of nonsense syllables, it was found that if an association formed between syllables 1 and 2, it was harder for subjects to associate syllable 1 with syllable 3. The phenomenon was subsequently termed the law of associative inhibition. This paved the way for future studies, as many psychologists utilized a similar experimental procedure of preparing a series of stimuli for subjects to recognize and/or recall in order to investigate the effects of interference.
History: Over the course of the 19th and 20th centuries, various psychologists entered the field of interference study and designed their experiments with unique stimuli. John A. Bergström and many others began working with card sorting, while Linus W. Kline used stimuli designed to be more recognizable for subjects, such as states, capitals and book titles. In the end, it was concluded that as long as the subject's brain had formed associations with the experimental stimuli, interference could occur regardless of what form or shape the stimuli took. Impact on memory: Effect on recall: Many studies have concluded that associative interference reduces a subject's ability to recall. In 1925, Erwin A. Esper presented subjects with a series of random shapes in different colours. In total there were 4 different shapes and 4 colours, creating 16 possible shape-colour combinations. Each combination was then assigned a nonsensical name which followed the same rule: the first half of the name would be a syllable corresponding to its colour, whilst the second half would be a syllable corresponding to its shape. For example, the colour red would correspond to the syllable "nas", whilst the first shape would correspond to the syllable "lin", creating the name "naslin" for that specific combination. These combinations were presented alongside their names for subjects to remember throughout the length of the study, and subjects were subsequently tested by being asked to recall the name of each randomly presented combination. The results of this study found that in cases where subjects made a mistake, a common pattern emerged. Using the previous colour-shape combination as an example, when it was presented one participant's response was "nasden". This response seemed to be an amalgamation of "naslin", the correct answer, and the word "nasdeg", which interestingly was the name assigned to another colour-shape combination with a different shape, but also in red. This occurrence, whereby the stimulus and its similar associate form a variant, was seen for other combinations as well and indicates the presence of associative interference. Impact on memory: Similar trends can be seen in other studies. Alison M. Dyne and her peers designed a similar experiment; however, instead of colours and shapes, word pairs were used to investigate whether overlapping words from different pairs would result in associative interference during recall. Participants were introduced to a study list with word pairs randomly sampled from a word pool, on which they were subsequently tested. The pairs that were chosen consisted of a mix of ones which had unique words (where neither word appeared in any other pair on the list) and ones which had overlapping words (where one or both words appeared in other pairs on the list). Participants were then given one half of a word pair and asked to recall and write down the corresponding word for that specific pair. As hypothesised, associative interference hindered many of the participants' recall ability, as they were much more likely to make a mistake in recalling overlapping word pairs. Impact on memory: Effect on recognition: Whilst associative interference has been shown to reduce recall performance, its effects on recognition remain inconsistent. In the previous study, participants were also tested for recognition, by providing a similar test list with interference-inducing word pairs and non-interference word pairs.
However, this time certain pairs from both categories were rearranged, and participants were asked to circle the pairs they thought remained the same. Surprisingly, there was no significant result indicating associative interference, as both the miss rate and the accuracy rate increased under interference conditions, in contrast to the recall experiment. Michael F. Verde conducted a similar study in 2004 to further investigate how recognition is affected by interference. In his experimental method, participants were also presented with word pairs, this time in the form "the person is in the location". Participants were tested in a similar manner to the previous study, being asked whether the tested pair had been rearranged or not. After noting the lack of any significant results in Dyne and her peers' study, Verde then introduced a new factor, the concept of familiarity. Despite the results of the recognition test not showing signs of interference, he predicted that interference conditions would increase participants' familiarity with the word pairs. As such, a second experiment was conducted whose method was almost identical to the first. However, this time, instead of presenting participants with some word pairs which were rearranged, word pairs containing words not present in the study list were added to the test list. Participants were then asked whether the tested word pairs contained novel words which they had not seen before, to investigate how familiarity affects recognition under interference and non-interference conditions. This resulted in an increase in recognition rate under interference conditions compared to the first experiment, suggesting that familiarity contributes to recognition performance under the influence of associative interference. Impact on learning: Kevin Darby and Vladimir Sloutsky's study of interference effects on memory development has shown that associative interference can have significant implications for learning as a result of its effects on memory. In their study, two experiments were outlined to test the ways in which interference impacts learning. The first was conducted on a sample of preschool-aged participants. The stimuli presented were word pairs from 4 different categories: animals (e.g. turtle), vehicles (e.g. train), clothing (e.g. boot) and furniture (e.g. lamp), presented in visual form. No pair contained words from the same category, and these image pairs were shown to participants alongside two familiar characters, Winnie the Pooh and Mickey Mouse. These two characters served as responses to test interference, as each word pair would correspond to one of the characters. Similar to prior interference studies, word pairs were either unique or contained an overlapping word from another pair. These overlapping pairs were used to facilitate interference conditions. Impact on learning: After an initial phase of teaching participants what each word pair's assigned response was, they then underwent a testing phase and were questioned on the proper response for each pair. The results of this experiment revealed that the young participants showed signs of interference. However, these results were expected, and align with conclusions from previous studies. The unique factor in Darby and Sloutsky's experiment was that it was repeated, this time with adult participants.
Although both younger and older participants displayed signs of interference, directly comparing the two age groups makes it possible to analyse the level of impact interference has with regard to age. These subsequent experiments with adult participants concluded that certain types of interference affected children much more than adults. It can therefore be concluded that interference has a much bigger impact on learning during childhood than it does later in life. Although this difference is not so substantial that adults can learn more easily than children, who, as multiple studies show, benefit from greater neuroplasticity, it is important to recognise interference as a hindrance to learning in younger individuals. In particular, education systems such as schools may need to reassess certain topics and subjects if they are more difficult for students to grasp, as interference may be inhibiting students' ability to understand and process the information in order to learn and memorise it. Proactive interference and retroactive interference summary: Context: The concepts of proactive interference and retroactive interference were first introduced by studies of verbal behavior focusing on the use of paired-associate learning, in which a stimulus is paired with an idea or object that elicits a response. In order to observe interference using paired-associate stimuli, the target association should be similar, to some extent, to the interfering association; otherwise interference would not occur. For example, if subjects are asked to memorize word pairs (e.g., donkey-tree and dog-tree), interference will occur when two pairs share a common associate (in this example, tree). A study using paired-associate tasks by Wickens, Born, and Allen (1963) showed that if target material and interfering material decrease in similarity, a decrease in proactive interference will follow. Proactive interference is the interfering of older memories with the retrieval of newer memories. Compared with retroactive interference, it is less common and less problematic. Proactive interference is likely to happen when memories are learned in similar contexts. An example is when motor abilities from previously learned skills interfere with new abilities in another skill being learned: if someone learned how to drive a stick shift first, and has driven a stick shift for a long time, it will be harder for them to drive an automatic car. Proactive interference is also associated with poorer list discrimination, which occurs when participants are asked to judge whether an item has appeared on a previously learned list. If the items or pairs to be learned are conceptually related to one another, then proactive interference has a greater effect. Delos Wickens discovered that the build-up of proactive interference is released when there is a change in the category of items being learned, leading to increased processing in working memory. Presenting new skills later in practice can considerably reduce proactive interference, which is desirable if participants are to have the best opportunity to encode fresh new memories into long-term memory. Retroactive interference is the interference of newer memories with the retrieval of older memories: the learning of new memories contributes to the forgetting of previously learned memories. For example, retroactive interference would occur for an individual who, having previously learned Spanish, then learns a list of Italian vocabulary words.
Learning the Italian words would make it more difficult to remember the Spanish words. The term retroactive interference was first introduced by Müller and colleagues; they demonstrated that if the retention interval (the amount of time between stimulus presentation and recall) was filled with tasks and material, interference would be caused with the previously learned items. Retroactive interference may have larger effects than proactive interference because it involves not only competition between previously learned material and new material, but also unlearning. Proactive interference and retroactive interference summary: Paired-associate learning: One strategy used to understand how people encode and retrieve memory associations is called paired-associate learning. In a typical study using paired-associate learning, subjects would be presented with pairs of unrelated words (cat, phone) and then memory for those word pairs would be tested. Results from Rohwer's (1966) studies on paired-associate learning indicate that subjects have better recall when words are associated with a specific context than without such context. For example, subjects would perform better in a recall task if they had learned the sentence "The COW chased the BALL" than if they simply tried to remember the words cow and ball, which seems to support the claim that elaborative rehearsal can improve memory. Other research by Bobrow and Bower (1969) indicated that subject-generated sentences were better recalled than experimenter-generated sentences, suggesting that self-generated sentences improve word pair recall. Bower (1969) also suggested that if experimenters try to control subjects' spontaneous elaboration in the control group by telling them to repeat the words over and over (without any elaborative rehearsal), recall is negatively affected.
**Variable-frequency drive** Variable-frequency drive: A variable-frequency drive (VFD; also adjustable-frequency drive, adjustable-speed drive, variable-speed drive, AC drive, micro drive, inverter drive, or simply drive) is a type of AC motor drive (a system incorporating a motor) that controls speed and torque by varying the frequency of the input electricity. Depending on its topology, it controls the associated voltage or current variation. VFDs are used in applications ranging from small appliances to large compressors. Systems using VFDs can be more efficient than those using throttling control, such as systems with pumps and damper control for fans. Since the 1980s, power electronics technology has reduced VFD cost and size and has improved performance through advances in semiconductor switching devices, drive topologies, simulation and control techniques, and control hardware and software. Variable-frequency drive: VFDs include low- and medium-voltage AC-AC and DC-AC topologies. History: The pulse-width modulation (PWM) variable-frequency drive project started in the 1960s at Strömberg in Finland. Martti Harmoinen is regarded as the inventor of this technology. Strömberg managed to sell the idea of the PWM drive to the Helsinki metro in 1973, and in 1982 the first PWM drives, SAMI10, were operational. System description and operation: A variable-frequency drive is a device used in a drive system consisting of the following three main sub-systems: AC motor, main drive controller assembly, and drive/operator interface. AC motor: The AC electric motor used in a VFD system is usually a three-phase induction motor. Some types of single-phase motors or synchronous motors can be advantageous in some situations, but generally three-phase induction motors are preferred as the most economical. Motors that are designed for fixed-speed operation are often used. The elevated voltage stresses imposed on induction motors that are supplied by VFDs require that such motors be designed for definite-purpose inverter-fed duty in accordance with such requirements as Part 31 of NEMA Standard MG-1. Controller: The VFD controller is a solid-state power electronics conversion system consisting of three distinct sub-systems: a rectifier bridge converter, a direct current (DC) link, and an inverter. Voltage-source inverter (VSI) drives (see 'Generic topologies' sub-section below) are by far the most common type of drives. Most drives are AC-AC drives in that they convert AC line input to AC inverter output. However, in some applications such as common DC bus or solar applications, drives are configured as DC-AC drives. The most basic rectifier converter for the VSI drive is configured as a three-phase, six-pulse, full-wave diode bridge. In a VSI drive, the DC link consists of a capacitor which smooths out the converter's DC output ripple and provides a stiff input to the inverter. This filtered DC voltage is converted to quasi-sinusoidal AC voltage output using the inverter's active switching elements. VSI drives provide higher power factor and lower harmonic distortion than phase-controlled current-source inverter (CSI) and load-commutated inverter (LCI) drives (see 'Generic topologies' sub-section below). The drive controller can also be configured as a phase converter, having single-phase converter input and three-phase inverter output. Controller advances have exploited dramatic increases in the voltage and current ratings and switching frequency of solid-state power devices over the past six decades.
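As a rough sanity check on the rectifier/DC-link stage described above: the ideal average output of a three-phase six-pulse diode bridge is the textbook relation V_dc = (3·√2/π)·V_LL, about 1.35 times the line-to-line RMS voltage. The snippet below applies that relation (ignoring loading, ripple, and bus-capacitor effects, which is my simplification) to a 460 V supply.

```python
import math

def ideal_six_pulse_dc_bus(v_ll_rms):
    """Average DC output of an ideal three-phase six-pulse diode bridge.

    V_dc = (3 * sqrt(2) / pi) * V_LL  ~  1.35 * V_LL  (no load, no diode drop).
    """
    return 3 * math.sqrt(2) / math.pi * v_ll_rms

print(f"{ideal_six_pulse_dc_bus(460):.0f} V")  # ~621 V DC bus for a 460 V line
```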
Introduced in 1983, the insulated-gate bipolar transistor (IGBT) has in the past two decades come to dominate VFDs as an inverter switching device. In variable-torque applications suited for Volts-per-Hertz (V/Hz) drive control, AC motor characteristics require that the voltage magnitude of the inverter's output to the motor be adjusted to match the required load torque in a linear V/Hz relationship. For example, for 460 V, 60 Hz motors, this linear V/Hz relationship is 460/60 = 7.67 V/Hz. While suitable in wide-ranging applications, V/Hz control is sub-optimal in high-performance applications involving low speed or demanding, dynamic speed regulation, positioning, and reversing load requirements. Some V/Hz control drives can also operate in quadratic V/Hz mode or can even be programmed to suit special multi-point V/Hz paths. The two other drive control platforms, vector control and direct torque control (DTC), adjust the motor voltage magnitude, angle from reference, and frequency so as to precisely control the motor's magnetic flux and mechanical torque. System description and operation: Although space vector pulse-width modulation (SVPWM) is becoming increasingly popular, sinusoidal PWM (SPWM) is the most straightforward method used to vary drives' motor voltage (or current) and frequency. With SPWM control, quasi-sinusoidal, variable-pulse-width output is constructed from intersections of a saw-toothed carrier signal with a modulating sinusoidal signal which is variable in operating frequency as well as in voltage (or current). Operation of motors above rated nameplate speed (base speed) is possible, but is limited to conditions that do not require more power than the nameplate rating of the motor. This is sometimes called "field weakening" and, for AC motors, means operating at less than rated V/Hz and above rated nameplate speed. Permanent magnet synchronous motors have a quite limited field-weakening speed range due to their constant magnet flux linkage. Wound-rotor synchronous motors and induction motors have a much wider speed range. For example, a 100 HP, 460 V, 60 Hz, 1775 RPM (4-pole) induction motor supplied with 460 V, 75 Hz (6.13 V/Hz) would be limited to 60/75 = 80% torque at 125% speed (2218.75 RPM), i.e. 100% power. At higher speeds, the induction motor torque has to be limited further due to the lowering of the breakaway torque of the motor. Thus, rated power can typically be produced only up to 130-150% of the rated nameplate speed. Wound-rotor synchronous motors can be run at even higher speeds; in rolling mill drives, 200-300% of the base speed is often used. The mechanical strength of the rotor limits the maximum speed of the motor. System description and operation: An embedded microprocessor governs the overall operation of the VFD controller. Basic programming of the microprocessor is provided as user-inaccessible firmware. User programming of display, variable, and function block parameters is provided to control, protect, and monitor the VFD, motor, and driven equipment. The basic drive controller can be configured to selectively include optional power components and accessories:
- Connected upstream of the converter: circuit breaker or fuses, isolation contactor, EMC filter, line reactor, passive filter
- Connected to the DC link: braking chopper, braking resistor
- Connected downstream of the inverter: output reactor, sine wave filter, dV/dt filter
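The V/Hz law and the carrier-comparison idea above are simple enough to sketch. Below is a minimal, idealized Python illustration of my own construction, not any vendor's firmware: the commanded frequency sets the voltage via the 7.67 V/Hz ratio from the text, and one inverter leg's switching state is derived by comparing a sinusoidal reference against a triangular carrier.

```python
import math

V_PER_HZ = 460 / 60          # linear V/Hz ratio for a 460 V, 60 Hz motor (~7.67)

def vhz_command(freq_hz, v_max=460.0):
    """Voltage magnitude for a commanded frequency under linear V/Hz control."""
    return min(v_max, V_PER_HZ * freq_hz)

def spwm_leg_state(t, freq_hz, carrier_hz=2000.0):
    """One inverter-leg gate state: compare sine reference with triangle carrier."""
    m = vhz_command(freq_hz) / 460.0                  # modulation index, 0..1
    ref = m * math.sin(2 * math.pi * freq_hz * t)     # sinusoidal reference
    phase = (t * carrier_hz) % 1.0
    carrier = 4 * abs(phase - 0.5) - 1                # triangle carrier in -1..1
    return 1 if ref > carrier else 0                  # upper switch on/off

print(f"{vhz_command(30):.0f} V at 30 Hz")            # 230 V at half speed
pulses = [spwm_leg_state(n / 20000, 30) for n in range(40)]
print("".join(str(p) for p in pulses))                # quasi-sinusoidal pulse train
```

Averaged over each carrier period, the pulse train reproduces the sinusoidal reference, which is how the inverter synthesizes a variable-voltage, variable-frequency output from a fixed DC bus.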
System description and operation: Operator interface The operator interface provides a means for an operator to start and stop the motor and adjust the operating speed. The VFD may also be controlled by a programmable logic controller through Modbus or another similar interface. Additional operator control functions might include reversing and switching between manual speed adjustment and automatic control from an external process control signal. The operator interface often includes an alphanumeric display or indication lights and meters to provide information about the operation of the drive. An operator interface keypad and display unit is often provided on the front of the VFD controller as shown in the photograph above. The keypad display can often be cable-connected and mounted a short distance from the VFD controller. Most are also provided with input and output (I/O) terminals for connecting push buttons, switches, and other operator interface devices or control signals. A serial communications port is also often available to allow the VFD to be configured, adjusted, monitored, and controlled using a computer. System description and operation: Speed control There are two main ways to control the speed of a VFD: networked or hardwired. Networked control involves transmitting the intended speed over a communication protocol such as Modbus, Modbus/TCP, or EtherNet/IP, or via a keypad using Display Serial Interface, while hardwired control involves a purely electrical means of communication. Typical means of hardwired communication are 4–20 mA current loops, 0–10 V DC signals, or the internal 24 V DC power supply with a potentiometer. Speed can also be controlled remotely and locally. Remote control instructs the VFD to ignore speed commands from the keypad, while local control instructs the VFD to ignore external control and abide only by the keypad. Programming a VFD: Depending on the model, a VFD's operating parameters can be programmed via dedicated programming software, an internal keypad, an external keypad, or an SD card. VFDs will often block most programming changes while running. Typical parameters that need to be set include motor nameplate information, speed reference source, on/off control source, and braking control. It is also common for VFDs to provide debugging information, such as fault codes and the states of the input signals. Starting and software behavior: Most VFDs allow auto-starting to be enabled, which will drive the output to a designated frequency after a power cycle, after a fault has been cleared, or after the emergency stop signal has been restored (generally emergency stops are active-low logic). One popular way to control a VFD is to enable auto-start and place L1, L2, and L3 into a contactor. Powering on the contactor thus turns on the drive and has it output at a designated speed. Depending on the sophistication of the drive, multiple auto-starting behaviors can be configured, e.g. the drive auto-starts on power-up but does not auto-start after an emergency stop is cleared until a reset has been cycled.
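Hardwired speed references are simple linear scalings. The sketch below maps a 4–20 mA loop to an output frequency; the function name and the 0–60 Hz span are illustrative assumptions, and drives perform this scaling internally.

```python
def ma_to_setpoint(i_ma: float, f_min: float = 0.0, f_max: float = 60.0) -> float:
    """Map a hardwired 4-20 mA speed reference to an output frequency.

    4 mA maps to f_min and 20 mA to f_max; the input is clamped so a
    broken wire (reading under 4 mA) commands minimum speed rather
    than a negative frequency.
    """
    i_ma = max(4.0, min(20.0, i_ma))
    return f_min + (i_ma - 4.0) / 16.0 * (f_max - f_min)

print(ma_to_setpoint(12.0))  # mid-scale signal -> 30.0 Hz
```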
Starting and software behavior: Drive operation Referring to the accompanying chart, drive applications can be categorized as single-quadrant, two-quadrant, or four-quadrant; the chart's four quadrants are defined as follows: Quadrant I – driving or motoring, forward accelerating quadrant with positive speed and torque; Quadrant II – generating or braking, forward braking-decelerating quadrant with positive speed and negative torque; Quadrant III – driving or motoring, reverse accelerating quadrant with negative speed and torque; Quadrant IV – generating or braking, reverse braking-decelerating quadrant with negative speed and positive torque. Most applications involve single-quadrant loads operating in quadrant I, such as variable-torque loads (e.g. centrifugal pumps or fans) and certain constant-torque loads (e.g. extruders). Starting and software behavior: Certain applications involve two-quadrant loads operating in quadrants I and II, where the speed is positive but the torque changes polarity, as in the case of a fan decelerating faster than natural mechanical losses. Some sources define two-quadrant drives as loads operating in quadrants I and III, where the speed and torque have the same (positive or negative) polarity in both directions. Starting and software behavior: Certain high-performance applications involve four-quadrant loads (quadrants I to IV), where the speed and torque can be in any direction, such as in hoists, elevators, and hilly conveyors. Regeneration can occur in the drive's DC link bus only when the inverter voltage is smaller in magnitude than the motor back-EMF and the inverter voltage and back-EMF have the same polarity. In starting a motor, a VFD initially applies a low frequency and voltage, thus avoiding the high inrush current associated with direct-on-line starting. After the start of the VFD, the applied frequency and voltage are increased at a controlled rate, or ramped up, to accelerate the load. This starting method typically allows a motor to develop 150% of its rated torque while the VFD draws less than 50% of its rated current from the mains in the low-speed range. A VFD can be adjusted to produce a steady 150% starting torque from standstill right up to full speed. However, motor cooling deteriorates as speed decreases and can result in overheating, such that prolonged low-speed operation with significant torque is not usually possible without separately motorized fan ventilation. Starting and software behavior: With a VFD, the stopping sequence is just the opposite of the starting sequence. The frequency and voltage applied to the motor are ramped down at a controlled rate. When the frequency approaches zero, the motor is shut off. A small amount of braking torque is available to help decelerate the load a little faster than it would stop if the motor were simply switched off and allowed to coast. Additional braking torque can be obtained by adding a braking circuit (a resistor controlled by a transistor) to dissipate the braking energy. With a four-quadrant rectifier (active front-end), the VFD is able to brake the load by applying a reverse torque and injecting the energy back into the AC line. Benefits: Energy savings Many fixed-speed motor load applications that are supplied directly from AC line power can save energy when they are operated at variable speed by means of a VFD. Such energy cost savings are especially pronounced in variable-torque centrifugal fan and pump applications, where the load's torque and power vary with the square and cube, respectively, of the speed, as the sketch below illustrates.
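This is a minimal sketch of the affinity laws for a centrifugal load; the 80% speed point is an illustrative value, chosen to complement the 63% example in the text that follows.

```python
def centrifugal_load_power(speed_fraction: float) -> float:
    """Affinity laws for centrifugal fans and pumps: torque varies
    with the square of speed and power with the cube. Returns load
    power as a fraction of full-speed power."""
    return speed_fraction ** 3

print(round(centrifugal_load_power(0.80), 3))  # 0.512 -> ~51% power at 80% speed
print(round(centrifugal_load_power(0.63), 3))  # 0.25  -> the 25% figure cited below
```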
This change gives a large power reduction compared to fixed-speed operation for a relatively small reduction in speed. For example, at 63% speed a motor load consumes only 25% of its full-speed power. This reduction is in accordance with the affinity laws that define the relationship between various centrifugal load variables. Benefits: In the United States, an estimated 60–65% of electrical energy is used to supply motors, 75% of which are variable-torque fan, pump, and compressor loads. Eighteen percent of the energy used in the 40 million motors in the U.S. could be saved by efficient energy improvement technologies such as VFDs. Only about 3% of the total installed base of AC motors is provided with AC drives. However, it is estimated that drive technology is adopted in as many as 30–40% of all newly installed motors. An energy consumption breakdown of the global population of AC motor installations is as shown in the following table: Control performance AC drives are used to bring about process and quality improvements in industrial and commercial applications' acceleration, flow, monitoring, pressure, speed, temperature, tension, and torque. Fixed-speed loads subject the motor to a high starting torque and to current surges that are up to eight times the full-load current. AC drives instead gradually ramp the motor up to operating speed to lessen mechanical and electrical stress, reducing maintenance and repair costs, and extending the life of the motor and the driven equipment. Benefits: Variable-speed drives can also run a motor in specialized patterns to further minimize mechanical and electrical stress. For example, an S-curve pattern can be applied to a conveyor application for smoother deceleration and acceleration control, which reduces the backlash that can occur when a conveyor is accelerating or decelerating. Performance factors tending to favor the use of DC drives over AC drives include such requirements as continuous operation at low speed, four-quadrant operation with regeneration, frequent acceleration and deceleration routines, and the need for the motor to be protected for a hazardous area. The following table compares AC and DC drives according to certain key parameters. VFD types and ratings: Generic topologies AC drives can be classified according to the following generic topologies: Voltage-source inverter (VSI) drive topologies (see image): In a VSI drive, the DC output of the diode-bridge converter stores energy in the capacitor bus to supply stiff voltage input to the inverter. The vast majority of drives are VSI type with PWM voltage output. Current-source inverter (CSI) drive topologies (see image): In a CSI drive, the DC output of the SCR-bridge converter stores energy in a series-inductor connection to supply stiff current input to the inverter. CSI drives can be operated with either PWM or six-step waveform output. VFD types and ratings: Six-step inverter drive topologies (see image): Now largely obsolete, six-step drives can be either VSI or CSI type and are also referred to as variable-voltage inverter drives, pulse-amplitude modulation (PAM) drives, square-wave drives, or DC chopper inverter drives. In a six-step drive, the DC output of the SCR-bridge converter is smoothed via a capacitor bus and series-reactor connection to supply, via a Darlington pair or IGBT inverter, quasi-sinusoidal, six-step voltage or current input to an induction motor.
VFD types and ratings: Load-commutated inverter (LCI) drive topologies: In an LCI drive (a special CSI case), the DC output of the SCR-bridge converter stores energy via a DC link inductor circuit to supply stiff quasi-sinusoidal six-step current output to a second SCR bridge operating as an inverter and an over-excited synchronous machine. Low-cost SCR-thyristor-based LCI-fed synchronous motor drives are often used in high-power, low-dynamic-performance fan, pump, and compressor applications rated up to 100 MW. VFD types and ratings: Cycloconverter or matrix converter (MC) topologies (see image): Cycloconverters and MCs are AC-AC converters that have no intermediate DC link for energy storage. A cycloconverter operates as a three-phase current source via three anti-parallel-connected SCR bridges in six-pulse configuration, each cycloconverter phase acting selectively to convert fixed-line-frequency AC voltage to an alternating voltage at a variable load frequency. MC drives are IGBT-based. VFD types and ratings: Doubly fed slip recovery system topologies: A doubly fed slip recovery system feeds rectified slip power to a smoothing reactor to supply power to the AC supply network via an inverter, the speed of the motor being controlled by adjusting the DC current. Control platforms Most drives use one or more of the following control platforms: PWM V/Hz scalar control; PWM field-oriented control (FOC) or vector control; direct torque control (DTC). Load torque and power characteristics Variable-frequency drives are also categorized by the following load torque and power characteristics: variable torque, such as in centrifugal fan, pump, and blower applications; constant torque, such as in conveyor and positive-displacement pump applications; and constant power, such as in machine tool and traction applications. VFD types and ratings: Available power ratings VFDs are available with voltage and current ratings covering a wide range of single-phase and multi-phase AC motors. Low-voltage (LV) drives are designed to operate at output voltages equal to or less than 690 V. While motor-application LV drives are available in ratings of up to the order of 5 or 6 MW, economic considerations typically favor medium-voltage (MV) drives at much lower power ratings. Different MV drive topologies (see Table 2) are configured in accordance with the voltage/current-combination ratings used in different drive controllers' switching devices, such that any given voltage rating is greater than or equal to one of the following standard nominal motor voltage ratings: generally either 2.3/4.16 kV (60 Hz) or 3.3/6.6 kV (50 Hz), with one thyristor manufacturer rated for up to 12 kV switching. In some applications a step-up transformer is placed between an LV drive and an MV motor load. MV drives are typically rated for motor applications greater than about 375 to 750 kW (503 to 1,006 hp). MV drives have historically required considerably more application design effort than LV drive applications. The power rating of MV drives can reach 100 MW (130,000 hp), a range of different drive topologies being involved for different rating, performance, power quality, and reliability requirements. VFD types and ratings: Drives by machines and detailed topologies It is lastly useful to relate VFDs in terms of the following two classifications: in terms of various AC machines, as shown in Table 1 below; and in terms of various detailed AC-AC converter topologies, shown in Tables 2 and 3 below.
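The three load-characteristic categories above imply different torque demands at reduced speed, which is why drive sizing depends on load type. The following sketch expresses the per-unit relationships; the category names come from the text, while the function itself is illustrative.

```python
def load_torque(speed_fraction: float, load_type: str) -> float:
    """Per-unit torque demand versus per-unit speed for the three
    load categories: variable torque (~ n^2, centrifugal fans and
    pumps), constant torque (conveyors, positive-displacement
    pumps), and constant power (torque ~ 1/n, machine tools)."""
    if load_type == "variable":
        return speed_fraction ** 2
    if load_type == "constant":
        return 1.0
    if load_type == "power":
        return 1.0 / speed_fraction
    raise ValueError(f"unknown load type: {load_type}")

for kind in ("variable", "constant", "power"):
    print(kind, round(load_torque(0.5, kind), 2))  # 0.25, 1.0, 2.0
```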
Application considerations: AC line harmonics While harmonics in the PWM output can easily be filtered by carrier-frequency-related filter inductance to supply near-sinusoidal currents to the motor load, the VFD's diode-bridge rectifier converts AC line voltage to DC voltage output by superimposing non-linear half-phase current pulses, thus creating harmonic current distortion, and hence voltage distortion, of the AC line input. When the VFD loads are relatively small in comparison to the large, stiff power system available from the electric power company, the effects of VFD harmonic distortion of the AC grid can often be within acceptable limits. Furthermore, in low-voltage networks, harmonics caused by single-phase equipment such as computers and TVs are partially cancelled by three-phase diode bridge harmonics because their 5th and 7th harmonics are in counterphase. However, when the proportion of VFD and other non-linear load compared to total load, or of non-linear load compared to the stiffness of the AC power supply, or both, is relatively large, the load can have a negative impact on the AC power waveform available to other power company customers on the same grid. Application considerations: When the power company's voltage becomes distorted due to harmonics, losses in other loads such as normal fixed-speed AC motors are increased. This condition may lead to overheating and shorter operating life. Also, substation transformers and compensation capacitors are affected negatively. In particular, capacitors can cause resonance conditions that can unacceptably magnify harmonic levels. To limit the voltage distortion, owners of VFD loads may be required to install filtering equipment to reduce harmonic distortion below acceptable limits. Alternatively, the utility may adopt a solution by installing filtering equipment of its own at substations affected by the large amount of VFD equipment being used. In high-power installations, harmonic distortion can be reduced by supplying multi-pulse rectifier-bridge VFDs from transformers with multiple phase-shifted windings. It is also possible to replace the standard diode-bridge rectifier with a bi-directional IGBT switching-device bridge mirroring the standard inverter, which uses IGBT switching-device output to the motor. Such rectifiers are referred to by various designations, including active infeed converter (AIC), active rectifier, IGBT supply unit (ISU), active front end (AFE), or four-quadrant operation. With PWM control and a suitable input reactor, an AFE's AC line current waveform can be nearly sinusoidal. An AFE inherently regenerates energy in four-quadrant mode from the DC side to the AC grid. Thus, no braking resistor is needed, and the efficiency of the drive is improved if the drive is frequently required to brake the motor. Application considerations: Two other harmonic mitigation techniques exploit the use of passive or active filters connected to a common bus with at least one VFD branch load on the bus. Passive filters involve the design of one or more low-pass LC filter traps, each trap being tuned as required to a harmonic frequency (5th, 7th, 11th, 13th, ..., kq ± 1, where k is an integer and q is the pulse number of the converter).
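The kq ± 1 rule above pins down exactly which harmonic orders a converter injects, and hence which frequencies passive traps are tuned to. A minimal sketch:

```python
def characteristic_harmonics(pulse_number: int = 6, k_max: int = 4) -> list[int]:
    """Harmonic orders kq +/- 1 produced by a q-pulse converter.

    For the common six-pulse diode bridge this gives the 5th, 7th,
    11th, 13th, 17th, 19th, ... orders that LC filter traps target;
    a 12-pulse front end cancels the 5th and 7th entirely.
    """
    orders = []
    for k in range(1, k_max + 1):
        orders += [k * pulse_number - 1, k * pulse_number + 1]
    return orders

print(characteristic_harmonics())       # [5, 7, 11, 13, 17, 19, 23, 25]
print(characteristic_harmonics(12, 2))  # [11, 13, 23, 25]
```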
It is very common practice for power companies or their customers to impose harmonic distortion limits based on IEC or IEEE standards. For example, IEEE Standard 519 limits at the customer's connection point call for the maximum individual frequency voltage harmonic to be no more than 3% of the fundamental, and for the voltage total harmonic distortion (THD) to be no more than 5% for a general AC power supply system. Application considerations: Switching frequency foldback Some drives use a default switching frequency setting of 4 kHz. Reducing the drive's switching frequency (the carrier frequency) reduces the heat generated by the IGBTs. A carrier frequency of at least ten times the desired output frequency is used to establish the PWM switching intervals. A carrier frequency in the range of 2,000 to 16,000 Hz is common for LV (low-voltage, under 600 V AC) VFDs. A higher carrier frequency produces a better sine wave approximation but incurs higher switching losses in the IGBT, decreasing the overall power conversion efficiency. Application considerations: Noise smoothing Some drives have a noise smoothing feature that can be turned on to introduce a random variation to the switching frequency. This distributes the acoustic noise over a range of frequencies to lower the peak noise intensity. Application considerations: Long-lead effects The carrier-frequency pulsed output voltage of a PWM VFD causes rapid rise times in these pulses, the transmission line effects of which must be considered. Since the transmission-line impedances of the cable and motor are different, pulses tend to reflect back from the motor terminals into the cable. The resulting reflections can produce overvoltages equal to twice the DC bus voltage, or up to 3.1 times the rated line voltage for long cable runs, putting high stress on the cable and motor windings and causing eventual insulation failure. Insulation standards for three-phase motors rated 230 V or less adequately protect against such long-lead overvoltages. On 460 V or 575 V systems and inverters with 3rd-generation 0.1-microsecond-rise-time IGBTs, the maximum recommended cable distance between VFD and motor is about 50 m or 150 feet. For emerging SiC-MOSFET-powered drives, significant overvoltages have been observed at cable lengths as short as 3 meters. Solutions to overvoltages caused by long lead lengths include minimizing cable length, lowering the carrier frequency, installing dV/dt filters, using inverter-duty-rated motors (rated 600 V to withstand pulse trains with rise times less than or equal to 0.1 microsecond and 1,600 V peak magnitude), and installing LCR low-pass sine wave filters. Selection of the optimum PWM carrier frequency for AC drives involves balancing noise, heat, motor insulation stress, common-mode voltage-induced motor bearing current damage, smooth motor operation, and other factors. Further harmonic attenuation can be obtained by using an LCR low-pass sine wave filter or dV/dt filter.
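The sensitivity of reflected-wave overvoltage to switching-edge rise time can be estimated with a common transmission-line rule of thumb: terminal voltage can fully double roughly when the one-way travel time of the pulse along the cable exceeds half its rise time. The sketch below assumes a wave propagation speed of about half the speed of light in motor cable, an illustrative figure that varies with cable construction.

```python
def critical_cable_length(rise_time_s: float,
                          wave_speed_m_per_s: float = 1.5e8) -> float:
    """Rule-of-thumb cable length beyond which a reflected PWM pulse
    can fully double at the motor terminals: l_crit ~ v * t_rise / 2."""
    return wave_speed_m_per_s * rise_time_s / 2.0

print(critical_cable_length(0.1e-6))  # 7.5 m for a 0.1 us IGBT edge
print(critical_cable_length(20e-9))   # 1.5 m for a faster SiC edge
```

This is consistent with the observation above that SiC-based drives show significant overvoltage at cable runs of only a few meters.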
Application considerations: Motor bearing currents Carrier frequencies above 5 kHz are likely to cause bearing damage unless protective measures are taken. PWM drives are inherently associated with high-frequency common-mode voltages and currents which may cause trouble with motor bearings. When these high-frequency voltages find a path to earth through a bearing, transfer of metal or electrical discharge machining (EDM) sparking occurs between the bearing's balls and the bearing's race. Over time, EDM-based sparking causes erosion in the bearing race that can be seen as a fluting pattern. In large motors, the stray capacitance of the windings provides paths for high-frequency currents that pass through the motor shaft ends, leading to a circulating type of bearing current. Poor grounding of motor stators can lead to shaft-to-ground bearing currents. Small motors with poorly grounded driven equipment are susceptible to high-frequency bearing currents. Prevention of high-frequency bearing current damage uses three approaches: good cabling and grounding practices, interruption of bearing currents, and filtering or damping of common-mode currents, for example through soft magnetic cores, i.e. common-mode chokes sometimes marketed as "inductive absorbers". Good cabling and grounding practices can include the use of shielded, symmetrical-geometry power cable to supply the motor, the installation of shaft grounding brushes, and conductive bearing grease. Bearing currents can be interrupted by installation of insulated bearings and specially designed electrostatic-shielded induction motors. Filtering and damping of high-frequency bearing currents can be done by inserting soft magnetic cores over the three phases, giving a high-frequency impedance against the common-mode or motor bearing currents. Another approach is to use, instead of standard 2-level inverter drives, either 3-level inverter drives or matrix converters. Since inverter-fed motor cables' high-frequency current spikes can interfere with other cabling in facilities, such inverter-fed motor cables should not only be of shielded, symmetrical-geometry design but should also be routed at least 50 cm away from signal cables. Application considerations: Dynamic braking Torque generated by the drive causes the induction motor to run at synchronous speed less the slip. If the load drives the motor faster than synchronous speed, the motor acts as a generator, converting mechanical power back to electrical power. This power is returned to the drive's DC link element (capacitor or reactor). A DC-link-connected electronic power switch, or braking DC chopper, controls dissipation of this power as heat in a set of resistors. Cooling fans may be used to prevent resistor overheating. Dynamic braking wastes braking energy by transforming it to heat. By contrast, regenerative drives recover braking energy by injecting this energy into the AC line. The capital cost of regenerative drives is, however, relatively high. Application considerations: Regenerative drives Regenerative AC drives have the capacity to recover the braking energy of a load moving faster than the designated motor speed (an overhauling load) and return it to the power system. Application considerations: Cycloconverter, Scherbius, matrix, CSI, and LCI drives inherently allow return of energy from the load to the line, while voltage-source inverters require an additional converter to return energy to the supply. Regeneration is useful in VFDs only where the value of the recovered energy is large compared to the extra cost of a regenerative system, and if the system requires frequent braking and starting. Regenerative VFDs are widely used where speed control of overhauling loads is required. Some examples: conveyor belt drives for manufacturing, which stop every few minutes (while stopped, parts are assembled correctly; once that is done, the belt moves on). Application considerations: A crane, where the hoist motor stops and reverses frequently, and braking is required to slow the load during lowering. Plug-in and hybrid electric vehicles of all types (see image and Hybrid Synergy Drive).
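Sizing the braking chopper and resistor mentioned above comes down to the kinetic energy that must be dissipated during the ramp-down. A minimal sketch, assuming a purely inertial load and neglecting motor and drive losses (which in practice reduce the resistor duty); the 2 kg·m², 1800 RPM, and 5 s figures are illustrative only.

```python
import math

def braking_resistor_power(inertia_kg_m2: float, rpm_start: float,
                           rpm_end: float, decel_time_s: float) -> float:
    """Average power a braking chopper must dump during deceleration.

    Rotational kinetic energy is E = 1/2 * J * w^2; the energy
    released between the two speeds, spread over the ramp time,
    approximates the mean power into the braking resistor.
    """
    w1 = rpm_start * 2.0 * math.pi / 60.0
    w2 = rpm_end * 2.0 * math.pi / 60.0
    energy_j = 0.5 * inertia_kg_m2 * (w1 ** 2 - w2 ** 2)
    return energy_j / decel_time_s

# Ramping a 2 kg*m^2 load from 1800 RPM to rest in 5 s: ~7.1 kW average.
print(round(braking_resistor_power(2.0, 1800.0, 0.0, 5.0)))
```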
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Jury rigging** Jury rigging: In maritime transport terms, and most commonly in sailing, jury-rigged is an adjective, a noun, and a verb. It can describe the act of making temporary, makeshift running repairs with only the tools and materials on board, and the results of such repairs. The origin of jury-rigged and jury-rigging lies in such efforts done on boats and ships, characteristically sail-powered to begin with. Jury-rigging can be applied to any part of a ship, be it its superstructure (hull, decks), propulsion systems (mast, sails, rigging, engine, transmission, propeller), or controls (helm, rudder, centreboard, daggerboards, rigging). Jury rigging: Similarly, after a dismasting, a replacement mast, often referred to as a jury mast (and, if necessary, yard), would be fashioned and stayed to allow a watercraft to resume making way. Etymology: The phrase 'jury-rigged' has been in use since at least 1788. The adjectival use of 'jury', in the sense of makeshift or temporary, has been said to date from at least 1616, when, according to the 1933 edition of the Oxford English Dictionary, it appeared in John Smith's A Description of New England. It appeared in Smith's more extensive The General History of Virginia, New-England, and the Summer Isles, published in 1624. Two theories about the origin of this usage of 'jury-rig' are: a corruption of jury mast, i.e., a mast for the day, a temporary mast, being a spare used when the mast has been carried away, from French jour, 'a day'; Etymology: or from the Latin adjutare, 'to aid', via Old French ajurie, 'help' or 'relief'. Rigging: Depending on its size and purpose, a sail-powered boat may carry a limited amount of repair materials, from which some form of jury rig can be fashioned. Additionally, anything salvageable, such as a spar or spinnaker pole, could be adapted to carrying a form of makeshift sail. Rigging: Ships typically carried a selection of spare parts, e.g., items such as topmasts. However, due to their much larger size, at up to 1 metre (3 ft 3 in) in diameter, the lower masts were too large to carry as spares. Example jury-rig configurations include: a spare topmast; the main boom of a brig; replacing the foremast with the mizzenmast (mentioned in W. Brady's The Kedge Anchor (1852)); or the bowsprit set upright and tied to the stump of the original mast. The jury mast knot may provide anchor points for securing makeshift stays and shrouds to support a jury mast, although there is differing evidence of the knot's actual historical use. Jury rigs are not limited to boats designed for sail propulsion. Any form of watercraft found without power can be adapted to carry jury sail as necessary. In addition, other essential components of a boat or ship, such as a rudder or tiller, can be said to be 'jury-rigged' when a repair is improvised out of materials at hand. Similar phrases: The compound word 'jerry-built', a similar but distinct term referring to things 'built unsubstantially of bad materials', has a separate origin from jury-rigged. The exact etymology is unknown, but it is probably linked to earlier pejorative uses of the word 'jerry', attested as early as 1721, and may have been influenced by 'jury-rigged'. Similar phrases: The American terms 'Afro engineering' (short for African engineering) and 'nigger-rigging' describe a fix that is temporary, done quickly, technically improper, or made without attention to or care for detail.
It can also describe shoddy, second-rate workmanship, with whatever materials happen to be available. 'Nigger-rigging' originated in the 1950s United States; the term was euphemized as 'afro engineering' in the 1970s and later again as 'ghetto rigging'. The terms have been used in the U.S. auto mechanic industry to describe quick makeshift repairs. These phrases have largely fallen out of common usage due to their racist, pejorative nature, but are occasionally used within the African-American community. Similar phrases: To 'MacGyver' (or MacGyverize) something is to rig up something in a hurry using materials at hand, from the title character of the American television show of the same name, who specialized in such improvisation stunts. In New Zealand, having a 'Number 8 wire' mentality means to have the ability to make or repair something using any materials at hand, such as standard farm fencing wire.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Corey–House synthesis** Corey–House synthesis: The Corey–House synthesis (also called the Corey–Posner–Whitesides–House reaction and other permutations) is an organic reaction that involves the reaction of a lithium diorganylcuprate (R2CuLi) with an organic halide or pseudohalide (R′−X) to form a new alkane, as well as an ill-defined organocopper species and lithium (pseudo)halide as byproducts. Corey–House synthesis: R2CuLi + R′−X → R−R′ + "RCu" + LiX. In principle, a carbanion equivalent such as an organolithium or Grignard reagent can react directly (without copper) with an alkyl halide in a nucleophilic substitution reaction to form a new carbon–carbon bond. However, aside from the use of metal acetylides as nucleophiles, such a process rarely works well in practice due to metal–halogen exchange and/or the formation of large amounts of reduction or elimination side-products. As a solution to this problem, the Corey–House reaction constitutes a general and high-yielding method for the joining of two alkyl groups or an alkyl group and an aryl group. Scope: The scope of the Corey–House synthesis is exceptionally broad, and a range of lithium diorganylcuprates (R2CuLi, R = 1°, 2°, or 3° alkyl, aryl, or alkenyl) and organyl (pseudo)halides (RX, R = methyl, benzylic, allylic, 1°, or cyclic 2° alkyl, aryl, or alkenyl and X = Br, I, OTs, or OTf; X = Cl is marginal) will undergo coupling as the nucleophilic and electrophilic coupling partners, respectively. The reaction usually takes place at room temperature or below in an ethereal solvent. Due to the wide range of applicable coupling partners, functional group tolerance, and operational simplicity, the Corey–House synthesis is a powerful and practical tool for the synthesis of complex organic molecules. However, as limitations, hindered (2° or 3°) alkyl halides are generally unsuccessful or low-yielding substrates for the Corey–House synthesis. Furthermore, alkynylcuprates are generally inert under the usual coupling conditions. The forging of aryl–aryl bonds is also inefficient and much more effectively achieved using palladium catalysis. Reaction process and mechanism: The Corey–House synthesis is preceded by two preliminary steps to prepare the requisite Gilman reagent from an alkyl halide. In the first step, the alkyl halide is treated with lithium metal in dry ether to prepare an alkyllithium reagent, RLi. The starting alkyl halide for the lithiation step can be a primary, secondary, or tertiary alkyl chloride, bromide, or iodide: R–X + 2 Li° → RLi + Li+X−. In the second step, a lithium dialkylcuprate, also known as a Gilman reagent (named after Henry Gilman of Iowa State University), is prepared from the alkyllithium by treatment with copper(I) iodide (CuI) in a transmetalation reaction: 2 RLi + CuI → Li+[R–Cu–R]− + LiI. If the use of alkyllithium reagents is precluded by functional group incompatibility, transmetalation from other metals (e.g., Mg, Zn, Al, B) may be considered as an alternative for the preparation of the organocopper reagent. The Corey–House synthesis proper is the reaction between the organocopper reagent, usually a lithium dialkylcuprate as prepared above, and a second alkyl (pseudo)halide or an aryl iodide.
This results in the formation of a C–C bond between the two organic fragments: Li+[R–Cu–R]− + R′–X → R–R′ + "RCu" + LiX. From the stoichiometry, it is apparent that one equivalent of the R group is wasted as an ill-characterized alkylcopper species (likely polymeric; usually converted to RH upon aqueous workup) in the most common form of the Corey–House synthesis. To avoid this for cases where R is a precious or complex fragment, a reagent (R)(RU)CuM, where RU is an untransferable dummy ligand (e.g., RU = cyano, alkynyl, 2-thienyl, etc.), can be prepared and used instead. Reaction process and mechanism: It is important to note that when R and R′ are different, only the cross product R–R′ is obtained; R–R or R′–R′ are not formed in significant amounts. The Corey–House reaction is therefore an example of a cross-coupling reaction. The Corey–House synthesis is, in fact, one of the earliest transition-metal-mediated (or catalyzed, see below) cross-coupling reactions to be discovered. In the case of alkyl bromides and tosylates, inversion of configuration is observed when a configurationally pure alkyl electrophile is used. The reaction is believed to proceed via an SN2-like mechanism to give a copper(III) species, which undergoes reductive elimination to give the coupling product. When alkyl iodides are used, scrambling of configuration is observed, and cyclization products are observed to form for alkyl iodides with an olefin tether, both of which are indicative of the involvement of radicals. Reaction process and mechanism: It is important to note that for this reaction to work successfully, the alkyl (pseudo)halide coupling partner should be methyl, benzylic, allylic, 1° alkyl, or 2° cycloalkyl. In most cases, 3° and acyclic 2° electrophiles give unsatisfactory results. (However, see below for recent modifications that allow 2° electrophiles to be used successfully.) On the other hand, sterically hindered organocopper reagents, including 3° and other branched alkyl reagents, are generally tolerated. However, aryl bromides, iodides, and sulfonates, which do not ordinarily undergo nucleophilic substitution in the absence of a transition metal, can be used successfully as coupling partners. Catalytic version: In 1971, Jay Kochi reported that Grignard reagents and alkyl bromides could be coupled using a catalytic amount of lithium tetrachlorocuprate(II), a process that was extended to alkyl tosylates by Schlosser and Fouquet. In the catalytic process, the Grignard reagent undergoes transmetalation with the copper salt or complex to generate an organocuprate as a catalytic intermediate, which then reacts with the (pseudo)halide electrophile to form the coupling product, release the copper, and complete the catalytic cycle. Under recently discovered conditions, using TMEDA as the ligand for copper and lithium methoxide as a base additive, it is now possible to couple 1°, 2°, and 3° Grignard reagents with 1° and 2° alkyl bromides and tosylates in high yields with nearly exclusive stereoinversion. Even β-branched 2° alkyl tosylates react to give the coupling product in moderate yield, greatly expanding the scope of the catalytic Corey–House synthesis (Kochi–Schlosser coupling). Background: While the coupling of organocopper compounds and allyl bromide was reported as early as 1936 by Henry Gilman (Iowa State University), this reaction was fully developed by four organic chemists (two at Harvard and two at MIT): E. J. Corey (Harvard University), research advisor of Gary Posner; Gary H. Posner (Johns Hopkins University), at the time a student at Harvard University; George M. Whitesides (Massachusetts Institute of Technology; later Harvard University), junior colleague of Herbert House; and Herbert O. House (Massachusetts Institute of Technology; later Georgia Institute of Technology).
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paper soccer** Paper soccer: Paper soccer (or paper hockey) is an abstract strategy game played on a square grid representing a soccer or hockey field. Two players take turns extending a line representing the position of a ball until it reaches one of the grid's two goal spaces. A traditional paper-and-pencil game, it is commonly played in schools and can be found in some magazines. Many computer implementations of the game also exist. Despite the game's simple rules, paper soccer has various expanded strategies and tactics. General rules: The game's pitch is drawn as a rectangle on a grid, with small extended rectangles in the center of each of the two shortest sides to represent the two goals. The grid can be of any size, although both sides should be an even number of squares to allow a center point for kickoff. The goal areas are typically 2 × 1 squares in size. General rules: A "ball" is marked as a dot in the center of the pitch. Players alternately move the ball to a new point by drawing a line from its current position to a new one. Each move must be to a point orthogonally or diagonally adjacent. The ball cannot be moved beyond the boundary of the pitch, nor along a line that has already been drawn (including the boundary of the pitch). General rules: If the ball is moved to a point that already has one or more lines connected to it, including the perimeter of the pitch, the ball "bounces" and the player immediately takes another turn. The player's move ends only when the ball reaches a point with no existing lines. General rules: The winner is the player who places the ball in their opponent's goal. A player also wins if their opponent scores an own goal. If the ball reaches a point from which it cannot be moved (such as a corner of the pitch), this is regarded as a draw, or a loss for the player unable to move, depending on the rules being played. In some versions of the game, own goals and moves that leave the ball unable to be moved are illegal. General rules: Variants and modifications The players can vary the pitch's shape in several ways by altering the pitch's dimensions or the placement of the goals. In some variants, one or more special lines may be added to the pitch. This modification is found in computer applications, and expanded variants of this modification are used by PDE Football. In some variants, gameplay can be continued from the pitch's central point after a goal is scored or after a play is blocked. The player who did not score moves first from the central point. The game is finished when a move from the central point is not possible, and the player who scored the most goals wins. General rules: Another variant allows a player to make a "bounce" during any turn if the line of that bounce would cross an existing diagonal line. Another variant is Texas Soccer, which allows no "X" marks to be made, as the ball can no longer cross through a diagonal line. Strategy: Since the ball's movement cannot overlap previously drawn lines, a player can block their opponent's potential moves by filling strategic points on the board and limiting the mobility of their opponent. Another strategy is based on the use of "bounces" for streamlining mobility, which can greatly aid in scoring goals or blocking access to the player's own goal area.
In versions of the game where a player who cannot move loses, a player can focus on moving the ball to a position where their opponent cannot move it further. In some early computerized versions of this game, the AI player tends to choose the shortest path to the player's goal, ignoring bounces and other strategies. That strategy is not universal: computer versions of Paper Soccer use graphs and pathfinding (as do other games such as Quoridor). Computer programs still do not play this game at a competitive top level. Game notation: Each move in the game may be recorded as a string of one or more digits from 0 to 7, each digit representing the direction of a move (with 0 corresponding to 'north', 1 to 'north-east', 2 to 'east', and so on) across a single grid square. Multiple digits are used to record bounce moves. This notation has been used by the PlayOK.com service. Similar games: There are two similar games that imitate paper soccer. Their mechanisms also depend on drawing lines to describe a ball's movement on a field. Similar games: Paper soccer (Russian variant) This paper-and-pencil game also involves drawing lines to adjacent points on a grid and is distinguished by a larger play area and by players extending the line through three points each turn. The line can change direction at each point but must not touch any existing line (there are no bounce moves). If a player becomes trapped and unable to draw this line, their turn is forfeited and the opponent gets a "penalty kick". This special move is a straight line in any of the eight compass directions, extending six points, and is the only occasion on which other lines may be crossed. If the penalty lands on an already-occupied point, or there are fewer than three unoccupied points available to move to, an additional penalty kick is earned. After six such penalties, if there are insufficient valid moves, the other player starts with a penalty kick. Some versions with very large grids have players extend the line through four points per turn, with penalty kicks across 13 points. Similar games: The strategy involves blocking an opponent's potential moves, outmaneuvering the opponent at the edges of the field, and setting traps for penalty kicks in range of the opponent's goal. If a trap is not executed successfully, the opponent may be able to reverse it in an extreme move. The game was popular in many parts of the former Soviet Union. Several Russian magazines describe the game, and there are a number of computer versions as well. Similar games: xrSoccer Published in 2005, xrSoccer is a computer game by eXtreme Results International Inc. It may be adapted to paper-and-pencil form for two players. It has common features with the previously described versions of paper soccer, but the gameplay generally differs. Similar games: The pitch is a 14 × 20 grid (13 × 19 points) with goal gates on the shorter sides. Starting from the center point, the line is extended through three points per turn. The line can touch occupied points and cross existing lines (there are no bounce moves) but cannot run along an existing line. If a player has no place to extend the line, the computer repositions the ball on the nearest unblocked point. This can be favourable when executed properly, as the ball is usually moved depending on the last direction of the line before it became blocked, and can be used to maneuver closer to the opponent's goal. After a goal is scored, the ball is reset to the center point and the game continues until all possible goals are scored.
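The digit notation above maps directly onto unit steps on the grid, which makes recorded games easy to replay programmatically. A minimal sketch; the coordinate convention (x east, y north) and the function name are illustrative assumptions.

```python
# Direction digits per the notation above: 0 = north, then clockwise.
DIRECTIONS = {
    0: (0, 1),   1: (1, 1),   2: (1, 0),   3: (1, -1),
    4: (0, -1),  5: (-1, -1), 6: (-1, 0),  7: (-1, 1),
}

def trace_move(start: tuple[int, int], move: str) -> list[tuple[int, int]]:
    """Trace one recorded move (a string of digits; several digits
    mean the ball bounced) and return every point the ball visits."""
    x, y = start
    path = [(x, y)]
    for digit in move:
        dx, dy = DIRECTIONS[int(digit)]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# A bounce move "01": north, then north-east, from the kickoff point.
print(trace_move((0, 0), "01"))  # [(0, 0), (0, 1), (1, 2)]
```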
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Song bells** Song bells: Song bells are a musical instrument in the keyboard percussion family. They are a mallet percussion instrument in the metallophone family that is essentially a cross between the vibraphone, glockenspiel, and celesta. They have bars made of aluminum. They sound one octave below the glockenspiel, or one octave above concert pitch, and generally have a range of 2½ octaves. Song bells have been made by various makers at different times but were first introduced by J. C. Deagan, Inc. in 1918 and manufactured by the company until 1924.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**2,2'-Dipyrromethene** 2,2'-Dipyrromethene: 2,2'-Dipyrromethene, often called just dipyrromethene or dipyrrin, is a chemical compound with formula C9H8N2 whose skeleton can be described as two pyrrole rings (C4N) connected by a methine bridge (=CH–) through their nitrogen-adjacent (position-2) carbons, with the remaining bonds satisfied by hydrogen atoms. It is an unstable compound that is readily attacked by nucleophilic compounds above −40 °C. 2,2'-Dipyrromethene and its more stable and easily prepared derivatives—formally obtained by replacing one or more hydrogen atoms by other functional groups—are important precursors for the family of BODIPY fluorescent dyes. The derivatives include salts of the dipyrrinato anion C9H7N2− and of the cation C9H9N2+. Preparation: 2,2'-Dipyrromethene and its derivatives can be obtained from suitable pyrrole derivatives by several methods. The unsubstituted compound can be prepared by oxidation of 2,2'-dipyrromethane with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) at −78 °C in dry dichloromethane solution. An alternative synthesis that avoids the oxidation step is the condensation of 2-formylpyrrole and pyrrole catalyzed by trifluoroacetic acid (TFA), followed by deprotonation with N,N-diisopropylethylamine. More generally, one starts with a pyrrole with suitable substituents at positions 3, 4, or 5 (but not 2). Condensation of two such molecules at their 2-positions with a bridging compound gives the corresponding 2,2'-dipyrromethane. The condensation may use, for example, the Knorr pyrrole synthesis, with an aromatic aldehyde in the presence of TFA. The dipyrromethane core is then oxidized to dipyrromethene using a quinone oxidant such as DDQ or p-chloranil. Preparation: Alternatively, one may use an activated carboxylic acid derivative, usually an acyl chloride. As another possibility, one may condense a substituted pyrrole with a 2-acylpyrrole; this route allows the synthesis of unsymmetrical dipyrromethenes. Reactions: Dipyrrin is unstable above −40 °C. However, it acts as a base, and its chloride [C9H9N2+][Cl−] is sufficiently stable in solution. The so-called BODIPY dyes can be obtained by reacting 2,2'-dipyrromethene or its derivatives with boron trifluoride–diethyl ether complex (BF3·(C2H5)2O) in the presence of triethylamine or 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU). Dipyrrin and its derivatives form coordination complexes with transition metals. For example, the derivative anion 5-phenyldipyrrinato (pdp) forms the neutral iron(III) complex Fe(pdp)3 (dark green monoclinic crystals, soluble in benzene, giving an orange solution in dichloromethane), where the Fe3+ ion is coordinated to six nitrogen atoms of the dipyrrin cores in distorted octahedral geometry. A similar cobalt(III) complex has also been reported, as well as a copper(II) complex, Cu(pdp)2.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Mannosyl-3-phosphoglycerate synthase** Mannosyl-3-phosphoglycerate synthase: In enzymology, a mannosyl-3-phosphoglycerate synthase (EC 2.4.1.217) is an enzyme that catalyzes the chemical reaction GDP-mannose + 3-phospho-D-glycerate ⇌ GDP + 2-(alpha-D-mannosyl)-3-phosphoglycerateThus, the two substrates of this enzyme are GDP-mannose and 3-phospho-D-glycerate, whereas its two products are GDP and 2-(alpha-D-mannosyl)-3-phosphoglycerate. This enzyme belongs to the family of glycosyltransferases, specifically the hexosyltransferases. The systematic name of this enzyme class is GDP-mannose:3-phosphoglycerate 3-alpha-D-mannosyltransferase. This enzyme is also called MPG synthase.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Winter sports in Slovakia** Winter sports in Slovakia: Ski and winter sports in Slovakia are very prominent and popular, given the mountainous topography of the region and the fact that much of the country is covered by snow for a long part of the year. Downhill Skiing: In spite of its small area, which is mostly hilly, Slovakia is densely interwoven with a network of ski-tows and ski-lifts. More than 1000 ski-tows and 40 ski-lifts are in operation, and their number is growing. From the top terminal stations there are hundreds of downhill pistes, from the very gentle, for beginners, to the most demanding, which comply with the international criteria for master competitions. Downhill Skiing: At present, approximately 50 Slovak ski resorts make snow. Snowboarding: Snowboarding is the fastest-developing winter sport in the world. Slovakia is no exception – the number of snowboarders has been increasing every year, partly due to the type of terrain available in ski resorts. Well-equipped board parks at the larger resorts offer excellent jumps, ramps, and rail-slides. This is especially true of Jasna, which is the largest resort in Central Europe and very boarder-friendly. Ski-Alpinism: Ski-alpinism is a combination of cross-country skiing, mountaineering, and alpine skiing. Its fans seek new opportunities for testing their own stamina and strength in the extreme conditions of winter alpine nature. The Slovak mountains provide very good conditions for pursuing this sport, and ski-alpinism fans recommend Jasenská dolina in the Veľká Fatra Mts., the area of Chopok and Ďumbier in the Low Tatras, Malá Studená dolina, Veľká Studená dolina, and Skalnatá dolina in the High Tatras, Roháče and Žiarska dolina in the Western Tatras, and the well-known Vrátna dolina in the Malá Fatra Mts. Cross-country skiing: Bigger ski resorts offer well-kept cross-country trails, but skier-made trails can be found by virtually any village or tourist centre. Slovakia is rather small, so there is not much "cross country". Ski jumping: Ski jumping is not a popular sport in Slovakia. However, some jumping hills exist.[1]
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Apical dominance** Apical dominance: In botany, apical dominance is the phenomenon whereby the main, central stem of the plant is dominant over (i.e., grows more strongly than) other side stems; on a branch, the main stem of the branch is further dominant over its own side twigs. Plant physiology describes apical dominance as the control exerted by the terminal bud (and shoot apex) over the outgrowth of lateral buds. Overview: Apical dominance occurs when the shoot apex inhibits the growth of lateral buds so that the plant may grow vertically. It is important for the plant to devote energy to growing upward so that it can get more light to undergo photosynthesis. If the plant utilizes available energy for growing upward, it may be able to outcompete other individuals in the vicinity. Plants that were capable of outcompeting neighboring plants likely had higher fitness. Apical dominance is therefore most likely adaptive. Overview: Typically, the end of a shoot contains an apical bud, which is the location where shoot growth occurs. The apical bud produces a plant hormone, auxin (IAA), that inhibits growth of the lateral buds further down the stem towards the axillary bud. Auxin is predominantly produced in the growing shoot apex and is transported throughout the plant via the phloem, diffusing into lateral buds and preventing their elongation. That auxin likely regulates apical dominance was first discovered in 1934. When the apical bud is removed, the lowered IAA concentration allows the lateral buds to grow and produce new shoots, which compete to become the lead growth. Apex removal: Plant physiologists have identified four different stages the plant goes through after the apex is removed (Stages I–IV). The four stages are referred to as lateral bud formation, "imposition of inhibition" (apical dominance), initiation of lateral bud outgrowth following decapitation, and elongation and development of the lateral bud into a branch. These stages can also be defined by the hormones that regulate the process, as follows: Stage I, cytokinin is promoted, causing the lateral bud to form, since cytokinin plays a role in cell division; Stage II, auxin is promoted, resulting in apical dominance ("imposition of inhibition"); Stage III, cytokinin is released, resulting in outward growth of the lateral bud; and Stage IV, auxin is decreased and gibberellic acid is promoted, which results in cell division, enabling the bud or branch to continue outward growth. More simply stated, lateral bud formation is inhibited by the shoot apical meristem (SAM). The lateral bud primordium (from which the lateral bud develops) is located below the SAM. The shoot tip rising from the SAM inhibits the growth of the lateral bud by repressing auxin. When the shoot is cut off, the lateral bud begins to lengthen, which is mediated by a release of cytokinin. Once apical dominance has been lifted from the plant, elongation and lateral growth are promoted, and the lateral buds grow into new branches. When lateral bud formation prevents the plant from growing upward, it is undergoing lateral dominance. Often, lateral dominance can be triggered by decapitating the SAM or artificially decreasing the concentration of auxin in plant tissues. Applications: When the apical bud is removed, the lowered IAA concentration allows the lateral buds to grow and produce new shoots, which compete to become the lead growth.
Pruning techniques such as coppicing and pollarding make use of this natural response to curtail direct plant growth and produce a desired shape, size, and/or productivity level for the plant. The principle of apical dominance is manipulated for espalier creation, hedge building, or artistic sculptures called topiary. If the SAM is removed, it stimulates growth in the lateral direction. By careful pruning, it is possible to create remarkable designs or patterns. Applications: Some fruit trees have strong apical dominance, and young trees can become "leggy", with poor side limb development. Apical dominance can be reduced in this case, or in cases where limbs are broken off by accident, by cutting off the auxin flow above side buds that one wishes to stimulate. This is often done by orchardists for young trees. Occasionally, strong apical dominance is advantageous, as in the "Ballerina" apple trees. These trees are intended to be grown in small gardens, and their strong apical dominance combined with a dwarfing rootstock gives a compact narrow tree with very short fruiting side branches.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bingo card** Bingo card: Bingo cards are playing cards designed to facilitate the game of Bingo in its various forms around the world. History: In the early 1500s the people of Italy began to play a game called "Lo Gioco del Lotto d'Italia," which literally means "The game of lotto of Italy." The game operated very much like a modern lottery, as players placed bets on the chances of certain numbers being drawn. By the 1700s, a version of Lo Gioco del Lotto d'Italia was played in France, where paper cards were first used to keep track of numbers drawn by a caller. Before the advent of printing machines, numbers on bingo cards were either painted by hand or stamped using rubber stamps onto thick cardboard. Cards were reusable, meaning players used tokens to mark called numbers. The number of unique cards was limited, as randomization had to occur by hand. Before the advent of online Bingo, cards were printed on card stock and, increasingly, disposable paper. While cardboard and paper cards are still in use, Bingo halls are turning more to "flimsies" (also called "throwaways") — cards inexpensively printed on very thin paper to overcome increasing cost — and to electronic Bingo cards to overcome the difficulty of randomization. Types of cards: There are two types of Bingo cards. One is a 5x5 grid meant for 75-ball Bingo, which is largely played in the U.S. The other uses a 9x3 grid for U.K.-style "Housie" or 90-ball Bingo. 75-ball bingo cards Players use cards that feature five columns of five squares each, with every square containing a number (except the middle square, which is designated a "FREE" space). The columns are labeled "B" (numbers 1–15), "I" (numbers 16–30), "N" (numbers 31–45), "G" (numbers 46–60), and "O" (numbers 61–75). Randomization A popular Bingo myth claims that U.S. Bingo innovator Edwin S. Lowe contracted Columbia University professor Carl Leffler to create 6,000 random and unique Bingo cards. The effort is purported to have driven Leffler insane. Manual random permutation is an onerous and time-consuming task that limited the number of Bingo cards available for play for centuries. Types of cards: The calculation of random permutations is a matter of statistics, principally relying on the use of factorial calculations. In its simplest sense, the number of unique "B" columns assumes that all 15 numbers are available for the first row, that only 14 of the numbers are available for the second row (one having been consumed for the first row), and that only 13, 12, and 11 numbers are available for the third, fourth, and fifth rows, respectively. Thus, the number of unique "B" (and "I", "G", and "O", respectively) columns is (15*14*13*12*11) = 360,360. The combinations of the "N" column differ due to the use of the free space. Therefore, it has only (15*14*13*12) = 32,760 unique combinations. The product of the five columns (360,360^4 * 32,760) describes the total number of unique playing cards. That number is 552,446,474,061,128,648,601,600,000, simplified as 5.52×10^26 or 552 septillion. Types of cards: Printing a complete set of Bingo cards is impossible for all practical purposes. If one trillion cards could be printed each second, a printer would require more than seventeen million years to print just one set. However, while the number combination of each card is unique, the number of winning cards is not. If a winning game using e.g. row #3 requires the number set B10, I16, G59, and O69, there are 333,105,095,983,435,776 (333 quadrillion) winning cards.
Therefore, calculation of the number of Bingo cards is more practical from the point of view of calculating the number of unique winning cards. Types of cards: For example, in a simple one-pattern game of Bingo the winner may be the first player to complete row #3. Because the "N" column contains a free space, the maximum number of cards that guarantee a unique winner is 15×15×15×15 = 50,625. Because the players need only focus on row #3, the remaining numbers in rows #1, #2, #4, and #5 are statistically insignificant for purposes of game play and can be selected in any manner, as long as no number is duplicated on any card. Types of cards: Perhaps the most common pattern set, known as "Straight-line Bingo", is completing any of the five rows, columns, or either of the main diagonals. In this case the possibility of multiple winning cards is unavoidable, because any one of twelve patterns on every card can win the game. But not all 552 septillion cards need to be in play. Any given set of numbers in a column (e.g., 15, 3, 14, 5, 12 in the "B" column) can be arranged in any of 5! = 120 different ways (4! = 24 ways for the "N" column, which has only four numbered cells). These arrangements are all statistically redundant. Therefore, the total number of cards can be reduced by a factor of (5!)⁴ × 4! = 4,976,640,000 for a total unique winning card set of 111,007,923,832,370,565, or 111 quadrillion. (Still impossibly enormous, but our eager printer described above would only need 1.29 days to complete the task.) The challenge of a multiple-pattern game is selecting a winner when a tie is possible. The solution is to declare the player who shouts "Bingo!" first the winner. However, it is more practical and manageable to use card sets that avoid multiple-pattern games. The single-pattern row #3 has already been mentioned, but its limited card set causes problems for the emerging online Bingo culture. Larger patterns, e.g. a diamond pattern consisting of cell positions B3, I2 and I4, N1 and N5, G2 and G4, and O3, are often used by online Bingo games to permit large numbers of players while ensuring only one player can win. (A unique winner is further desirable for online play, where network delays and other communication interference can unfairly affect multiple winning cards. The winner would be determined by the first person to click the "Bingo!" button, emulating the shout of "Bingo!" during a live game.) In this case the number of unique winning cards is calculated as 15² × (15×14)³ / 2³ = 260,465,625 (260 million). The division by two for each of the "I", "N", and "G" columns is necessary to once again remove redundant number combinations, such as [31,#,#,#,45] and [45,#,#,#,31] in the "N" column. Types of cards: 90-ball bingo cards In UK bingo, or Housie, cards are usually called "tickets." The cards contain three rows and nine columns. Each row contains five numbers and four blank spaces randomly distributed along the row. Numbers are apportioned by column (1–9, 10–19, 20–29, 30–39, 40–49, 50–59, 60–69, 70–79 and 80–90). Other types of card Break Open
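The column-permutation arithmetic above is easy to verify mechanically. A small Python sketch (function and variable names are illustrative, not part of any Bingo standard) that recomputes the card counts quoted in this section:

```python
from math import factorial, perm

# Ordered choices of 5 of the 15 numbers available per column (B, I, G, O),
# and 4 of 15 for the "N" column, whose centre cell is the free space.
full_column = perm(15, 5)        # 15*14*13*12*11 = 360,360
n_column = perm(15, 4)           # 15*14*13*12   = 32,760

total_cards = full_column**4 * n_column
print(total_cards)               # 552,446,474,061,128,648,601,600,000 ≈ 5.52e26

# Within a column the same 5 numbers can appear in 5! orders (4! for "N"),
# so for straight-line play the distinct-card count shrinks by this factor.
redundancy = factorial(5)**4 * factorial(4)
print(redundancy)                          # 4,976,640,000
print(total_cards // redundancy)           # 111,007,923,832,370,565

# Diamond pattern: one cell each in B and O, two interchangeable cells in
# each of I, N and G (hence the division by 2 per such column).
print(15**2 * (15 * 14)**3 // 2**3)        # 260,465,625
```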
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**P57 (glycoside)** P57 (glycoside): P57 is an oxypregnane steroidal glycoside isolated from the African cactiform Hoodia gordonii. P57 is hypothesized to be the chemical constituent from this plant mainly responsible for the putative appetite suppressant activity of Hoodia extracts.In a rat study at Brown Medical School, intracerebroventricular injections of the purified P57 demonstrated that the compound has a likely central nervous system (CNS) mechanism of action like that of neuroactive steroids. P57 (glycoside): The studies demonstrated that the compound increases the content of ATP by 50-150% in hypothalamic neurons. In addition, third ventricle administration of P57 reduced subsequent 24-hour food intake by 40-60%.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Thymine-DNA glycosylase** Thymine-DNA glycosylase: G/T mismatch-specific thymine DNA glycosylase is an enzyme that in humans is encoded by the TDG gene. Several bacterial proteins have strong sequence homology with this protein. Function: The protein encoded by this gene belongs to the TDG/mug DNA glycosylase family. Thymine-DNA glycosylase (TDG) removes thymine moieties from G/T mismatches by hydrolyzing the carbon-nitrogen bond between the sugar-phosphate backbone of DNA and the mispaired thymine. With lower activity, this enzyme also removes thymine from C/T and T/T mispairings. TDG can also remove uracil and 5-bromouracil from mispairings with guanine. TDG knockout mouse models showed no increase in mispairing frequency suggesting that other enzymes, like the functional homologue MBD4, may provide functional redundancy. This gene may have a pseudogene in the p arm of chromosome 12.Additionally, in 2011, the human thymine DNA glycosylase (hTDG) was reported to efficiently excise 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC), the key oxidation products of 5-methylcytosine in genomic DNA. Later on, the crystal structure of the hTDG catalytic domain in complex with duplex DNA containing 5caC was published, which supports the role of TDG in mammalian 5-methylcytosine demethylation. Interactions: Thymine-DNA glycosylase has been shown to interact with: CREB-binding protein, Estrogen receptor alpha, Promyelocytic leukemia protein, SUMO3, and Small ubiquitin-related modifier 1.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Gel conference** Gel conference: Gel (Good Experience Live) is a conference focused on the concept of a "good experience" in all contexts – business, art, society, technology, and life. The conference has been held annually in New York City since 2003, and the first European counterpart, euroGel 2006, took place in Copenhagen on 1 September 2006. Each conference has been hosted by Gel's founder, Mark Hurst. Past speakers have included Salman Khan, Gabriel Weinberg, Bob Mankoff, and Rhett and Link.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Band matrix** Band matrix: In mathematics, particularly matrix theory, a band matrix or banded matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side. Band matrix: Bandwidth Formally, consider an n×n matrix A = (a_{i,j}). If all matrix elements are zero outside a diagonally bordered band whose range is determined by constants k₁ and k₂, that is, if a_{i,j} = 0 whenever j < i − k₁ or j > i + k₂ (with k₁, k₂ ≥ 0), then the quantities k₁ and k₂ are called the lower bandwidth and upper bandwidth, respectively. The bandwidth of the matrix is the maximum of k₁ and k₂; in other words, it is the number k such that a_{i,j} = 0 if |i − j| > k. Examples: A band matrix with k₁ = k₂ = 0 is a diagonal matrix. A band matrix with k₁ = k₂ = 1 is a tridiagonal matrix. For k₁ = k₂ = 2 one has a pentadiagonal matrix, and so on. Triangular matrices: for k₁ = 0, k₂ = n−1, one obtains the definition of an upper triangular matrix; similarly, for k₁ = n−1, k₂ = 0 one obtains a lower triangular matrix. Upper and lower Hessenberg matrices. Toeplitz matrices when bandwidth is limited. Block diagonal matrices. Shift matrices and shear matrices. Matrices in Jordan normal form. A skyline matrix, also called "variable band matrix" – a generalization of band matrix. The inverses of Lehmer matrices are constant tridiagonal matrices, and are thus band matrices. Applications: In numerical analysis, matrices from finite element or finite difference problems are often banded. Such matrices can be viewed as descriptions of the coupling between the problem variables; the banded property corresponds to the fact that variables are not coupled over arbitrarily large distances. Such matrices can be further divided – for instance, banded matrices exist where every element in the band is nonzero. These often arise when discretising one-dimensional problems. Problems in higher dimensions also lead to banded matrices, in which case the band itself also tends to be sparse. For instance, a partial differential equation on a square domain (using central differences) will yield a matrix with a bandwidth equal to the square root of the matrix dimension, but inside the band only 5 diagonals are nonzero. Unfortunately, applying Gaussian elimination (or equivalently an LU decomposition) to such a matrix results in the band being filled in by many non-zero elements. Band storage: Band matrices are usually stored by storing the diagonals in the band; the rest is implicitly zero. For example, a tridiagonal matrix has bandwidth 1. The 6-by-6 matrix

[ a11 a12  0   0   0   0  ]
[ a21 a22 a23  0   0   0  ]
[  0  a32 a33 a34  0   0  ]
[  0   0  a43 a44 a45  0  ]
[  0   0   0  a54 a55 a56 ]
[  0   0   0   0  a65 a66 ]

is stored as the 6-by-3 matrix

[  0  a11 a12 ]
[ a21 a22 a23 ]
[ a32 a33 a34 ]
[ a43 a44 a45 ]
[ a54 a55 a56 ]
[ a65 a66  0  ].

A further saving is possible when the matrix is symmetric. For example, consider a symmetric 6-by-6 matrix with an upper bandwidth of 2:

[ a11 a12 a13  0   0   0  ]
[ a12 a22 a23 a24  0   0  ]
[ a13 a23 a33 a34 a35  0  ]
[  0  a24 a34 a44 a45 a46 ]
[  0   0  a35 a45 a55 a56 ]
[  0   0   0  a46 a56 a66 ]

This matrix is stored as the 6-by-3 matrix:

[ a11 a12 a13 ]
[ a22 a23 a24 ]
[ a33 a34 a35 ]
[ a44 a45 a46 ]
[ a55 a56  0  ]
[ a66  0   0  ].

Band form of sparse matrices: From a computational point of view, working with band matrices is always preferable to working with similarly dimensioned square matrices. A band matrix can be likened in complexity to a rectangular matrix whose row dimension is equal to the bandwidth of the band matrix. Thus the work involved in performing operations such as multiplication falls significantly, often leading to huge savings in terms of calculation time and complexity.
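As an illustration of the band-storage layout described above, a minimal Python sketch (the helper name is illustrative) that packs the diagonals of a banded matrix into an n×(k₁+k₂+1) array:

```python
import numpy as np

def to_band_storage(A, k1=1, k2=1):
    """Pack the diagonals of a banded n×n matrix A into an n×(k1+k2+1) array.

    Row i of the result holds A[i, i-k1 .. i+k2]; entries that fall outside
    the matrix are left as zero padding (as in the 6×3 example above).
    """
    n = A.shape[0]
    B = np.zeros((n, k1 + k2 + 1))
    for i in range(n):
        for d in range(-k1, k2 + 1):
            j = i + d
            if 0 <= j < n:
                B[i, d + k1] = A[i, j]
    return B

# A 6×6 tridiagonal matrix with recognisable entries (11, 12, 21, ...).
A = np.zeros((6, 6))
for i in range(6):
    for j in range(6):
        if abs(i - j) <= 1:
            A[i, j] = 10 * (i + 1) + (j + 1)

print(to_band_storage(A))
# first row starts with the zero pad: [ 0. 11. 12.]; last row ends with one: [65. 66.  0.]
```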
Band form of sparse matrices: As sparse matrices lend themselves to more efficient computation than dense matrices, as well as to more efficient utilization of computer storage, there has been much research focused on finding ways to minimise the bandwidth (or directly minimise the fill-in) by applying permutations to the matrix, or other such equivalence or similarity transformations. The Cuthill–McKee algorithm can be used to reduce the bandwidth of a sparse symmetric matrix. There are, however, matrices for which the reverse Cuthill–McKee algorithm performs better. There are many other methods in use. Band form of sparse matrices: The problem of finding a representation of a matrix with minimal bandwidth by means of permutations of rows and columns is NP-hard.
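SciPy ships an implementation of the reverse Cuthill–McKee ordering mentioned above; a small usage sketch (the example matrix is arbitrary, and the permuted bandwidth is merely expected, not guaranteed, to shrink):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# A small symmetric sparse matrix with scattered off-diagonal entries.
A = csr_matrix(np.array([
    [4, 0, 0, 1, 0],
    [0, 4, 1, 0, 1],
    [0, 1, 4, 0, 0],
    [1, 0, 0, 4, 1],
    [0, 1, 0, 1, 4],
]))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]  # symmetrically permute rows and columns

def bandwidth(M):
    r, c = M.nonzero()
    return int(np.max(np.abs(r - c)))

print(bandwidth(A), bandwidth(B))  # the permuted matrix should be no wider
```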
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Binkp** Binkp: Binkp is a protocol for transferring FidoNet or WWIVnet mail over reliable connections. It is typically used to deliver mail over the internet instead of over point-to-point connections between modems. Application: Historically, FidoNet traffic was transferred mainly over serial (RS-232) modem connections which might not have an error correction layer. Dial-up-oriented protocols for transferring FidoNet traffic, like EMSI or ZMODEM, therefore had to implement error recovery. When the members of FidoNet started to use TCP/IP to transfer FidoNet traffic, this error-recovery overhead became unnecessary. Assuming that the connection is reliable makes it possible to eliminate error-checking and unnecessary synchronization steps, achieving both ease of implementation and improved performance. The major advantage of binkp over EMSI and ZMODEM is achieved on connections with large delays and low bandwidth. Application: IANA (Internet Assigned Numbers Authority) has registered the port number 24554 for binkp when used over TCP/IP connections. History: In 1996, Dima Maloff released the first draft of the protocol specification and the first mailer, binkd, that supported the new protocol. In 1997, the Argus mailer began to support the binkp protocol. In 1999, Dima Maloff, Nick Soveiko and Maxim Masiutin submitted the protocol specification to the Fidonet Technical Standards Committee (FTSC), which published the document as Fidonet Standards Proposal FSP-1011. In 2005, the FTSC assigned Fidonet Technical Standard (FTS) status to the binkp protocol, and split the specification into four separate documents: Binkp/1.0 Protocol specification (FTS-1026), Binkp/1.0 optional protocol extension CRAM (FTS-1027), Binkp protocol extension Non-reliable Mode (FTS-1028), and Binkp optional protocol extension Dataframe Compression (FTS-1029). On October 2, 2015, WWIV BBS's WWIVnet implemented a binkp backbone for its network.
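For a flavor of the protocol's simplicity: FTS-1026 defines a binkp session as a stream of frames, each prefixed by a two-byte big-endian header whose high bit marks a command frame (versus a data frame) and whose low 15 bits give the payload length; in a command frame, the first payload byte is the command number. A minimal parsing sketch under that reading of the spec (illustrative, not a complete implementation):

```python
import io
import struct

# Command numbers as listed in FTS-1026 (binkp/1.0).
COMMANDS = {0: "M_NUL", 1: "M_ADR", 2: "M_PWD", 3: "M_FILE", 4: "M_OK",
            5: "M_ERR", 6: "M_BSY", 7: "M_EOB", 8: "M_GOT", 9: "M_GET",
            10: "M_SKIP"}

def read_frame(stream):
    """Read one binkp frame; returns (is_command, command_name_or_None, payload)."""
    header = stream.read(2)
    if len(header) < 2:
        raise EOFError("connection closed mid-frame")
    (word,) = struct.unpack(">H", header)  # big-endian 16-bit frame header
    is_command = bool(word & 0x8000)       # high bit: command vs data frame
    size = word & 0x7FFF                   # low 15 bits: data field length
    data = stream.read(size)
    if is_command:
        # First data byte is the command number, the rest its argument string.
        return True, COMMANDS.get(data[0], "?"), data[1:]
    return False, None, data

# e.g. an M_NUL frame carrying a short informational string:
frame = struct.pack(">H", 0x8000 | 4) + bytes([0]) + b"OPT"
print(read_frame(io.BytesIO(frame)))  # (True, 'M_NUL', b'OPT')
```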
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Bhargava factorial** Bhargava factorial: In mathematics, Bhargava's factorial function, or simply Bhargava factorial, is a certain generalization of the factorial function developed by the Fields Medal–winning mathematician Manjul Bhargava as part of his thesis at Harvard University in 1996. The Bhargava factorial has the property that many number-theoretic results involving the ordinary factorials remain true even when the factorials are replaced by the Bhargava factorials. Using an arbitrary infinite subset S of the set Z of integers, Bhargava associated a positive integer with every positive integer k, which he denoted by k!_S, with the property that if one takes S = Z itself, then the integer associated with k, that is k!_Z, would turn out to be the ordinary factorial of k. Motivation for the generalization: The factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example, 5! = 5×4×3×2×1 = 120. By convention, the value of 0! is defined as 1. This classical factorial function appears prominently in many theorems in number theory. The following are a few of these theorems. Motivation for the generalization: For any positive integers m and n, (m + n)! is a multiple of m! n!. Let f(x) be a primitive integer polynomial, that is, a polynomial in which the coefficients are integers and are relatively prime to each other. If the degree of f(x) is k then the greatest common divisor of the set of values of f(x) for integer values of x is a divisor of k!. Let a0, a1, a2, … , an be any n + 1 integers. Then the product of their pairwise differences is a multiple of 0! 1! … n!. Motivation for the generalization: Let Z be the set of integers and n any integer. Then the number of polynomial functions from the ring of integers Z to the quotient ring Z/nZ is given by the product n/gcd(n, 0!) × n/gcd(n, 1!) × … × n/gcd(n, (n−1)!). Bhargava posed to himself the following problem and obtained an affirmative answer: In the above theorems, can one replace the set of integers by some other set S (a subset of Z, or a subset of some ring) and define a function depending on S which assigns a value to each non-negative integer k, denoted by k!_S, such that the statements obtained from the theorems given earlier by replacing k! by k!_S remain true? The generalisation: Let S be an arbitrary infinite subset of the set Z of integers. Choose a prime number p. Construct an ordered sequence {a0, a1, a2, … } of numbers chosen from S as follows (such a sequence is called a p-ordering of S): a0 is any arbitrary element of S. a1 is any arbitrary element of S such that the highest power of p that divides a1 − a0 is minimum. a2 is any arbitrary element of S such that the highest power of p that divides (a2 − a0)(a2 − a1) is minimum. a3 is any arbitrary element of S such that the highest power of p that divides (a3 − a0)(a3 − a1)(a3 − a2) is minimum. The generalisation: … and so on. Construct a p-ordering of S for each prime number p. (For a given prime number p, the p-ordering of S is not unique.) For each non-negative integer k, let vk(S, p) be the highest power of p that divides (ak − a0)(ak − a1)(ak − a2) … (ak − ak−1). The sequence {v0(S, p), v1(S, p), v2(S, p), v3(S, p), … } is called the associated p-sequence of S. This is independent of any particular choice of p-ordering of S. (We assume that v0(S, p) = 1 always.) The factorial of the integer k, associated with the infinite set S, is defined as k!_S = ∏p vk(S, p), where the product is taken over all prime numbers p.
Example: Factorials using set of prime numbers: Let S be the set of all prime numbers P = {2, 3, 5, 7, 11, … }. Choose p = 2 and form a p-ordering of P. Choose a0 = 19 arbitrarily from P. Example: Factorials using set of prime numbers: To choose a1: The highest power of p that divides 2 − a0 = −17 is 2⁰ = 1. Also, for any a ≠ 2 in P, a − a0 is divisible by 2. Hence, the highest power of p that divides (a1 − a0) is minimum when a1 = 2 and the minimum power is 1. Thus a1 is chosen as 2 and v1(P, 2) = 1. To choose a2: It can be seen that for each element a in P, the product x = (a − a0)(a − a1) = (a − 19)(a − 2) is divisible by 2. Also, when a = 5, x is divisible by 2 and it is not divisible by any higher power of 2. So, a2 may be chosen as 5. We have v2(P, 2) = 2. To choose a3: It can be seen that for each element a in P, the product x = (a − a0)(a − a1)(a − a2) = (a − 19)(a − 2)(a − 5) is divisible by 2³ = 8. Also, when a = 17, x is divisible by 8 and it is not divisible by any higher power of 2. Choose a3 = 17. Also we have v3(P, 2) = 8. To choose a4: It can be seen that for each element a in P, the product x = (a − a0)(a − a1)(a − a2)(a − a3) = (a − 19)(a − 2)(a − 5)(a − 17) is divisible by 2⁴ = 16. Also, when a = 23, x is divisible by 16 and it is not divisible by any higher power of 2. Choose a4 = 23. Also we have v4(P, 2) = 16. To choose a5: It can be seen that for each element a in P, the product x = (a − a0)(a − a1)(a − a2)(a − a3)(a − a4) = (a − 19)(a − 2)(a − 5)(a − 17)(a − 23) is divisible by 2⁷ = 128. Also, when a = 31, x is divisible by 128 and it is not divisible by any higher power of 2. Choose a5 = 31. Also we have v5(P, 2) = 128. The process is continued. Thus a 2-ordering of P is {19, 2, 5, 17, 23, 31, … } and the associated 2-sequence is {1, 1, 2, 8, 16, 128, … }, assuming that v0(P, 2) = 1. For p = 3, one possible p-ordering of P is the sequence {2, 3, 7, 5, 13, 17, 19, … } and the associated p-sequence of P is {1, 1, 1, 3, 3, 9, … }. For p = 5, one possible p-ordering of P is the sequence {2, 3, 5, 19, 11, 7, 13, … } and the associated p-sequence is {1, 1, 1, 1, 1, 5, …}. It can be shown that for p ≥ 7, the first few elements of the associated p-sequences are {1, 1, 1, 1, 1, 1, … }. Example: Factorials using set of prime numbers: The first few factorials associated with the set of prime numbers are obtained as follows (sequence A053657 in the OEIS). Example: Factorials using the set of natural numbers: Let S be the set of natural numbers Z. For p = 2, the associated p-sequence is {1, 1, 2, 2, 8, 8, 16, 16, 128, 128, 256, 256, … }. For p = 3, the associated p-sequence is {1, 1, 1, 3, 3, 3, 9, 9, 9, 27, 27, 27, 81, 81, 81, … }. For p = 5, the associated p-sequence is {1, 1, 1, 1, 1, 5, 5, 5, 5, 5, 25, 25, 25, 25, 25, … }. For p = 7, the associated p-sequence is {1, 1, 1, 1, 1, 1, 1, 7, 7, 7, 7, 7, 7, 7, … }. … and so on. Thus the first few factorials using the natural numbers are 0!_Z = 1×1×1×1×1×… = 1. 1!_Z = 1×1×1×1×1×… = 1. 2!_Z = 2×1×1×1×1×… = 2. 3!_Z = 2×3×1×1×1×… = 6. 4!_Z = 8×3×1×1×1×… = 24. 5!_Z = 8×3×5×1×1×… = 120. 6!_Z = 16×9×5×1×1×… = 720. Examples: Some general expressions: The following table contains the general expressions for k!_S for some special cases of S.
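The greedy construction above can be checked mechanically. A minimal sketch, assuming S may be truncated to a finite list and that only small primes contribute for the k shown (both hold for the natural-number example, since vk(Z, p) = 1 for primes p > k); all names are illustrative:

```python
def v_p(n, p):
    """p-adic valuation of a nonzero integer n."""
    n, e = abs(n), 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def p_sequence(S, p, kmax):
    """Greedy p-ordering of the finite list S; returns exponents e_k, so v_k = p**e_k."""
    order, exps = [S[0]], [0]          # v_0 = 1 by convention
    pool = set(S) - {S[0]}
    for _ in range(kmax):
        # Pick the element minimising the power of p dividing prod(a - a_j).
        best = min(pool, key=lambda a: sum(v_p(a - b, p) for b in order))
        exps.append(sum(v_p(best - b, p) for b in order))
        order.append(best)
        pool.remove(best)
    return exps

def bhargava_factorial(k, S, primes):
    f = 1
    for p in primes:
        f *= p ** p_sequence(S, p, k)[k]
    return f

S = list(range(40))        # truncation of the natural-number example S = Z
primes = [2, 3, 5, 7]      # sufficient for k <= 6 in this example
print([bhargava_factorial(k, S, primes) for k in range(7)])
# expected: [1, 1, 2, 6, 24, 120, 720] — the ordinary factorials
```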
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Antipasto** Antipasto: Antipasto (plural antipasti) is the traditional first course of a formal Italian meal. Usually made of bite-size portions and served on a platter from which everyone serves themselves, the purpose of antipasti is to stimulate the appetite. Typical ingredients of a traditional antipasto include cured meats, olives, peperoncini, mushrooms, anchovies, artichoke hearts, various cheeses (such as provolone or mozzarella), pickled meats, and vegetables in oil or vinegar. Antipasto: The contents of an antipasto vary greatly according to regional cuisine. Different preparations of saltwater fish and traditional southern cured meats (like soppressata or 'nduja) are popular in the south of Italy, whereas in northern Italy it is common to serve different kinds of cured meats and mushrooms and, especially near lakes, preparations of freshwater fish. The cheeses included also vary significantly between regions and backgrounds, and include hard and soft cheeses. Antipasto: Many compare antipasto to hors d'oeuvre, but antipasto is served at the table and signifies the official beginning of the Italian meal. It may also be referred to as a starter, or an appetizer.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Perfluorodecyltrichlorosilane** Perfluorodecyltrichlorosilane: Perfluorodecyltrichlorosilane, also known as FDTS, is a colorless liquid chemical with molecular formula C10H4Cl3F17Si. FDTS molecules form self-assembled monolayers. They form covalent silicon–oxygen bonds to free hydroxyl (–OH) groups, such as those on the surfaces of glass, ceramics, or silica. Perfluorodecyltrichlorosilane: Due to its heavily fluorinated tail group, a FDTS monolayer reduces surface energy. Deposition of a FDTS monolayer is achieved by a relatively simple process, also known as molecular vapor deposition (MVD). It is usually deposited from a vapor phase, at room to near-room temperatures (50 °C), and is thus compatible with most substrates. The process is usually carried out in a vacuum chamber and assisted by the presence of water vapor. Treated surfaces have water-repellent and friction-reducing properties. Perfluorodecyltrichlorosilane: For this reason, a FDTS monolayer is often applied to movable microparts of microelectromechanical systems (MEMS). A FDTS monolayer reduces surface energy and prevents sticking, so it is used to coat micro- and nano-features on stamps for nanoimprint lithography, which is becoming a method of choice for making electronics, organic photodiodes, microfluidics and other devices. Reduced surface energy is helpful for reducing ejection force and for demolding of polymer parts in injection molding, and FDTS coatings have been applied to some metallic injection molds and mold inserts.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**P-Cresol** P-Cresol: para-Cresol, also 4-methylphenol, is an organic compound with the formula CH3C6H4(OH). It is a colourless solid that is a widely used intermediate in the production of other chemicals. It is a derivative of phenol and is an isomer of o-cresol and m-cresol. Production: Together with many other compounds, p-cresol is conventionally extracted from coal tar, the volatilized material obtained in the roasting of coal to produce coke. This residue contains a few percent by weight of phenol and cresols. Industrially, p-cresol is currently prepared mainly by a two-step route beginning with the sulfonation of toluene: CH3C6H5 + H2SO4 → CH3C6H4SO3H + H2O. Basic hydrolysis of the sulfonate salt gives the sodium salt of the cresol: CH3C6H4SO3H + 2 NaOH → CH3C6H4OH + Na2SO3 + H2O. Other methods for the production of p-cresol include chlorination of toluene followed by hydrolysis. In the cymene–cresol process, toluene is alkylated with propene to give p-cymene, which can be oxidatively dealkylated in a manner similar to the cumene process. Applications: p-Cresol is consumed mainly in the production of antioxidants, such as butylated hydroxytoluene (BHT). The monoalkylated derivatives undergo coupling to give an extensive family of diphenol antioxidants. These antioxidants are valued because they are relatively low in toxicity and nonstaining. Natural occurrences: In humans: p-Cresol is produced by bacterial fermentation of protein in the human large intestine. It is excreted in feces and urine, and is a component of human sweat that attracts female mosquitoes. p-Cresol is a constituent of tobacco smoke. Natural occurrences: In other species: p-Cresol is a major component of pig odor. Examination of temporal gland secretions showed the presence of phenol and p-cresol during musth in male elephants. It is one of the very few compounds known to attract the orchid bee Euglossa cyanura and has been used to capture and study the species. p-Cresol is a component found in horse urine during estrus that can elicit the Flehmen response.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Biostrophin** Biostrophin: Biostrophin is a drug which may serve as a vehicle for gene therapy in the treatment of Duchenne and Becker muscular dystrophy. As mutations in the gene which codes for the protein dystrophin are the underlying defect responsible for both disorders, biostrophin will deliver a genetically engineered, functional copy of the gene at the molecular level to affected muscle cells. Dosage, as well as a viable means for systemic release of the drug in patients, is currently being investigated with the use of both canine and primate animal models. Biostrophin is being manufactured by Asklepios BioPharmaceuticals, Inc., with funding provided by the Muscular Dystrophy Association.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Acetivibrio thermocellus** Acetivibrio thermocellus: Acetivibrio thermocellus is an anaerobic, thermophilic bacterium. A. thermocellus has garnered research interest due to its cellulolytic and ethanologenic abilities, being capable of directly converting a cellulosic substrate into ethanol by consolidated bioprocessing. This makes it useful in converting biomass into a usable energy source. The degradation of the cellulose is carried out in the bacterium by a large extracellular cellulase system called a cellulosome, which contains nearly 20 catalytic subunits. The cellulase system of the bacterium significantly differs from fungal cellulases due to its high activity on crystalline cellulose, being able to completely solubilize crystalline sources of cellulose, such as cotton. However, there are some shortfalls in applying the organism to practical applications due to it having low ethanol yield, at least partially due to branched fermentation pathways that produce acetate, formate, and lactate along with ethanol. There is also evidence of inhibition due to the presence of hydrogen and due to agitation. Some recent research has been directed to optimizing the ethanol-producing metabolic pathway in hopes of creating more efficient biomass conversion.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Dark current (physics)** Dark current (physics): In physics and in electronic engineering, dark current is the relatively small electric current that flows through photosensitive devices such as a photomultiplier tube, photodiode, or charge-coupled device even when no photons enter the device; it consists of the charges generated in the detector when no outside radiation is entering the detector. It is referred to as reverse bias leakage current in non-optical devices and is present in all diodes. Physically, dark current is due to the random generation of electrons and holes within the depletion region of the device. Dark current (physics): The charge generation rate is related to specific crystallographic defects within the depletion region. Dark-current spectroscopy can be used to determine the defects present by monitoring the peaks in the dark current histogram's evolution with temperature. Dark current is one of the main sources of noise in image sensors such as charge-coupled devices. The pattern of different dark currents can result in a fixed-pattern noise; dark frame subtraction can remove an estimate of the mean fixed pattern, but there still remains a temporal noise, because the dark current itself has a shot noise. This dark current is the same phenomenon studied in p–n junction physics.
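As a hedged aside (not stated in the source): because dark current is carried by discrete charge carriers, its temporal fluctuation follows the standard Schottky shot-noise formula, so the RMS noise current over a measurement bandwidth $\Delta f$ is

$\sigma_i = \sqrt{2 q I_{\text{dark}} \, \Delta f},$

where $q$ is the elementary charge. This is why cooling a sensor, which reduces $I_{\text{dark}}$, also reduces the temporal noise that remains after dark frame subtraction.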
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Plant sources of anti-cancer agents** Plant sources of anti-cancer agents: Plant sources of anti-cancer agents are plants, the derivatives of which have been shown to be usable for the treatment or prevention of cancer in humans. Background: In the 1950s, scientists began systematically examining natural organisms as a source of useful anti-cancer substances. It has recently been argued that "the use of natural products has been the single most successful strategy in the discovery of novel medicines". Plants need to defend themselves from attack by micro-organisms, in particular fungi, and they do this by producing anti-fungal chemicals that are toxic to fungi. Because fungal and human cells are similar at a biochemical level, it is often the case that chemical compounds intended for plant defence have an inhibitory effect on human cells, including human cancer cells. Those plant chemicals that are selectively more toxic to cancer cells than normal cells have been discovered in screening programs and developed as chemotherapy drugs. Research and development process: Some plants that indicate potential as anticancer agents in laboratory-based in vitro research – for example, Typhonium flagelliforme and Murraya koenigii – are currently being studied. There can be many years between promising laboratory work and the availability of an effective anti-cancer drug: Monroe Eliot Wall discovered anti-cancer properties in Camptotheca in 1958, but it was not until 1996 – after further research and rounds of clinical trials – that topotecan, a synthetic derivative of a chemical in the plant, was approved for use by the US Food and Drug Administration. Plants: Camptotheca acuminata: The cancer treatment drug topotecan is a synthetic chemical compound similar in chemical structure to camptothecin, which is found in extracts of Camptotheca (happy tree). Catharanthus roseus: Vinca alkaloids were originally manufactured by extracting them from Catharanthus (Madagascar periwinkle). Podophyllum spp.: Two chemotherapy drugs, etoposide and teniposide, are synthetic chemical compounds similar in chemical structure to the toxin podophyllotoxin, which is found in Podophyllum peltatum (May apple). Taxus brevifolia: Chemicals extracted from clippings of Taxus brevifolia (Pacific yew) have been used as the basis for two chemotherapy drugs, docetaxel and paclitaxel. Euphorbia peplus: Contains ingenol mebutate (Picato), which is used to treat skin cancer. Maytenus ovatus: Trastuzumab emtansine (Kadcyla) is an antibody conjugated to a synthetic derivative of the cytotoxic principle of the Ethiopian plant Maytenus ovatus. It is used to treat breast cancer. Mappia foetida: Some research has shown that it has effective anticancer properties against breast cancer [1].
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Kruskal–Szekeres coordinates** Kruskal–Szekeres coordinates: In general relativity, Kruskal–Szekeres coordinates, named after Martin Kruskal and George Szekeres, are a coordinate system for the Schwarzschild geometry for a black hole. These coordinates have the advantage that they cover the entire spacetime manifold of the maximally extended Schwarzschild solution and are well-behaved everywhere outside the physical singularity. There is no misleading coordinate singularity at the horizon. Kruskal–Szekeres coordinates: The Kruskal–Szekeres coordinates also apply to space-time around a spherical object, but in that case do not give a description of space-time inside the radius of the object. Space-time in a region where a star is collapsing into a black hole is approximated by the Kruskal–Szekeres coordinates (or by the Schwarzschild coordinates). The surface of the star remains outside the event horizon in the Schwarzschild coordinates, but crosses it in the Kruskal–Szekeres coordinates. (In any "black hole" which we observe, we see it at a time when its matter has not yet finished collapsing, so it is not really a black hole yet.) Similarly, objects falling into a black hole remain outside the event horizon in Schwarzschild coordinates, but cross it in Kruskal–Szekeres coordinates. Definition: Kruskal–Szekeres coordinates on a black hole geometry are defined, from the Schwarzschild coordinates $(t, r, \theta, \phi)$, by replacing t and r by a new timelike coordinate T and a new spacelike coordinate X:

$T = \left(\frac{r}{2GM} - 1\right)^{1/2} e^{r/4GM} \sinh\!\left(\frac{t}{4GM}\right), \qquad X = \left(\frac{r}{2GM} - 1\right)^{1/2} e^{r/4GM} \cosh\!\left(\frac{t}{4GM}\right)$

for the exterior region $r > 2GM$ outside the event horizon, and

$T = \left(1 - \frac{r}{2GM}\right)^{1/2} e^{r/4GM} \cosh\!\left(\frac{t}{4GM}\right), \qquad X = \left(1 - \frac{r}{2GM}\right)^{1/2} e^{r/4GM} \sinh\!\left(\frac{t}{4GM}\right)$

for the interior region $0 < r < 2GM$. Here GM is the gravitational constant multiplied by the Schwarzschild mass parameter, and this article is using units where c = 1. Definition: It follows that on the union of the exterior region, the event horizon and the interior region the Schwarzschild radial coordinate r (not to be confused with the Schwarzschild radius $r_s = 2GM$) is determined in terms of Kruskal–Szekeres coordinates as the (unique) solution of the equation

$T^2 - X^2 = \left(1 - \frac{r}{2GM}\right) e^{r/2GM}, \qquad T^2 - X^2 < 1.$

Using the Lambert W function the solution is written as

$r = 2GM \left(1 + W_0\!\left(\frac{X^2 - T^2}{e}\right)\right).$

Moreover, one sees immediately that in the region external to the black hole ($T^2 - X^2 < 0$, $X > 0$)

$t = 4GM \,\operatorname{artanh}(T/X),$

whereas in the region internal to the black hole ($0 < T^2 - X^2 < 1$, $T > 0$)

$t = 4GM \,\operatorname{artanh}(X/T).$

In these new coordinates the metric of the Schwarzschild black hole manifold is given by

$ds^2 = \frac{32\,G^3 M^3}{r}\, e^{-r/2GM} \left(-dT^2 + dX^2\right) + r^2 \, g_\Omega,$

written using the (− + + +) metric signature convention and where the angular component of the metric (the Riemannian metric of the 2-sphere) is

$g_\Omega = d\theta^2 + \sin^2\theta \, d\phi^2.$

Expressing the metric in this form shows clearly that radial null geodesics, i.e. those with constant $\Omega = \Omega(\theta, \phi)$, are parallel to one of the lines $T = \pm X$. In the Schwarzschild coordinates, the Schwarzschild radius $r_s = 2GM$ is the radial coordinate of the event horizon. In the Kruskal–Szekeres coordinates the event horizon is given by $T^2 - X^2 = 0$. Note that the metric is perfectly well defined and non-singular at the event horizon. The curvature singularity is located at $T^2 - X^2 = 1$. The maximally extended Schwarzschild solution: The transformation between Schwarzschild coordinates and Kruskal–Szekeres coordinates defined for r > 2GM and $-\infty < t < \infty$ can be extended, as an analytic function, at least to the first singularity which occurs at $T^2 - X^2 = 1$. Thus the above metric is a solution of Einstein's equations throughout this region.
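As a quick numerical sanity check of the transformation (a sketch, not from the source; units with G = c = 1, so the horizon sits at r = 2GM = 2):

```python
import math

def kruskal_from_schwarzschild(t, r, GM=1.0):
    """Map Schwarzschild (t, r) to Kruskal–Szekeres (T, X), with c = 1."""
    if r > 2 * GM:                      # exterior region I
        f = math.sqrt(r / (2 * GM) - 1) * math.exp(r / (4 * GM))
        return (f * math.sinh(t / (4 * GM)), f * math.cosh(t / (4 * GM)))
    elif 0 < r < 2 * GM:                # interior region II
        f = math.sqrt(1 - r / (2 * GM)) * math.exp(r / (4 * GM))
        return (f * math.cosh(t / (4 * GM)), f * math.sinh(t / (4 * GM)))
    raise ValueError("r must be positive and off the horizon")

# The invariant T^2 - X^2 should equal (1 - r/2GM) e^{r/2GM} in both regions.
for t, r in [(0.5, 3.0), (0.5, 1.0)]:
    T, X = kruskal_from_schwarzschild(t, r)
    print(T**2 - X**2, (1 - r / 2) * math.exp(r / 2))
```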
The allowed values are $-\infty < X < \infty$ and $-\infty < T^2 - X^2 < 1$. Note that this extension assumes that the solution is analytic everywhere. The maximally extended Schwarzschild solution: In the maximally extended solution there are actually two singularities at r = 0, one for positive T and one for negative T. The negative T singularity is the time-reversed black hole, sometimes dubbed a "white hole". Particles can escape from a white hole but they can never return. The maximally extended Schwarzschild geometry can be divided into 4 regions each of which can be covered by a suitable set of Schwarzschild coordinates. The Kruskal–Szekeres coordinates, on the other hand, cover the entire spacetime manifold. The four regions are separated by event horizons. The transformation given above between Schwarzschild and Kruskal–Szekeres coordinates applies only in regions I and II (if we take the square root as positive). A similar transformation can be written down in the other two regions. The Schwarzschild time coordinate t is given by $\tanh\!\left(\frac{t}{4GM}\right) = T/X$ (in I and III) and $\tanh\!\left(\frac{t}{4GM}\right) = X/T$ (in II and IV). In each region it runs from $-\infty$ to $+\infty$ with the infinities at the event horizons. The maximally extended Schwarzschild solution: Based on the requirement that the quantum process of Hawking radiation is unitary, 't Hooft proposed that the regions I and III, and II and IV, are just mathematical artefacts coming from choosing branches for roots rather than parallel universes, and that the equivalence relation $(T, X, \Omega) \sim (-T, -X, -\Omega)$ should be imposed, where $-\Omega$ is the antipode of $\Omega$ on the 2-sphere. If we think of regions III and IV as having spherical coordinates but with a negative choice for the square root to compute $r$, then we just correspondingly use opposite points on the sphere to denote the same point in space, so e.g. $(t_{(I)}, r_{(I)}, \Omega_{(I)}) = (t, r, \Omega) \sim (t_{(III)}, r_{(III)}, \Omega_{(III)}) = (t, -r, -\Omega)$. The maximally extended Schwarzschild solution: This means that $r_{(I)}\Omega_{(I)} = r_{(III)}\Omega_{(III)} = r\,\Omega$. Since this is a free action by the group $\mathbb{Z}/2\mathbb{Z}$ preserving the metric, this gives a well-defined Lorentzian manifold (everywhere except at the singularity). It identifies the limit $t_{(II)} = -\infty$ of the interior region II, corresponding to the coordinate line segment $T = -X$, $T > 0$, $X < 0$, with the limit $t_{(I)} = -\infty$ of the exterior region I, corresponding to $T = -X$, $T < 0$, $X > 0$. The identification does mean that whereas each pair $(T, X) \sim (-T, -X) \neq (0, 0)$ corresponds to a sphere, the point $(T, X) = (0, 0)$ (corresponding to the event horizon $r = 2GM$ in the Schwarzschild picture) corresponds not to a sphere but to the projective plane $\mathbb{RP}^2 = S^2/\pm$ instead, and the topology of the underlying manifold is no longer $\mathbb{R}^4 - \text{line} = \mathbb{R}^2 \times S^2$. The manifold is no longer simply connected, because a loop (involving superluminal portions) going from a point in space-time back to itself but at the opposite Kruskal–Szekeres coordinates cannot be reduced to a null loop. Qualitative features of the Kruskal–Szekeres diagram: Kruskal–Szekeres coordinates have a number of useful features which make them helpful for building intuitions about the Schwarzschild spacetime. Chief among these is the fact that all radial light-like geodesics (the world lines of light rays moving in a radial direction) look like straight lines at a 45-degree angle when drawn in a Kruskal–Szekeres diagram (this can be derived from the metric equation given above, which guarantees that if $dX = \pm dT$ then the proper time $ds = 0$). All timelike world lines of slower-than-light objects will at every point have a slope closer to the vertical time axis (the T coordinate) than 45 degrees.
So, a light cone drawn in a Kruskal–Szekeres diagram will look just the same as a light cone in a Minkowski diagram in special relativity. Qualitative features of the Kruskal–Szekeres diagram: The event horizons bounding the black hole and white hole interior regions are also a pair of straight lines at 45 degrees, reflecting the fact that a light ray emitted at the horizon in a radial direction (aimed outward in the case of the black hole, inward in the case of the white hole) would remain on the horizon forever. Thus the two black hole horizons coincide with the boundaries of the future light cone of an event at the center of the diagram (at T = X = 0), while the two white hole horizons coincide with the boundaries of the past light cone of this same event. Any event inside the black hole interior region will have a future light cone that remains in this region (such that any world line within the event's future light cone will eventually hit the black hole singularity, which appears as a hyperbola bounded by the two black hole horizons), and any event inside the white hole interior region will have a past light cone that remains in this region (such that any world line within this past light cone must have originated in the white hole singularity, a hyperbola bounded by the two white hole horizons). Note that although the horizon looks as though it is an outward expanding cone, the area of this surface, $4\pi r_s^2 = 16\pi (GM)^2$, is just a constant; i.e., these coordinates can be deceptive if care is not exercised. Qualitative features of the Kruskal–Szekeres diagram: It may be instructive to consider what curves of constant Schwarzschild coordinate would look like when plotted on a Kruskal–Szekeres diagram. It turns out that curves of constant r-coordinate in Schwarzschild coordinates always look like hyperbolas bounded by a pair of event horizons at 45 degrees, while lines of constant t-coordinate in Schwarzschild coordinates always look like straight lines at various angles passing through the center of the diagram. The black hole event horizon bordering exterior region I would coincide with a Schwarzschild t-coordinate of $+\infty$, while the white hole event horizon bordering this region would coincide with a Schwarzschild t-coordinate of $-\infty$, reflecting the fact that in Schwarzschild coordinates an infalling particle takes an infinite coordinate time to reach the horizon (i.e. the particle's distance from the horizon approaches zero as the Schwarzschild t-coordinate approaches infinity), and a particle traveling up away from the horizon must have crossed it an infinite coordinate time in the past. This is just an artifact of how Schwarzschild coordinates are defined; a free-falling particle will only take a finite proper time (time as measured by its own clock) to pass between an outside observer and an event horizon, and if the particle's world line is drawn in the Kruskal–Szekeres diagram this will also only take a finite coordinate time in Kruskal–Szekeres coordinates. Qualitative features of the Kruskal–Szekeres diagram: The Schwarzschild coordinate system can only cover a single exterior region and a single interior region, such as regions I and II in the Kruskal–Szekeres diagram. The Kruskal–Szekeres coordinate system, on the other hand, can cover a "maximally extended" spacetime which includes the region covered by Schwarzschild coordinates.
Here, "maximally extended" refers to the idea that the spacetime should not have any "edges": any geodesic path can be extended arbitrarily far in either direction unless it runs into a gravitational singularity. Technically, this means that a maximally extended spacetime is either "geodesically complete" (meaning any geodesic can be extended to arbitrarily large positive or negative values of its 'affine parameter', which in the case of a timelike geodesic could just be the proper time), or if any geodesics are incomplete, it can only be because they end at a singularity. In order to satisfy this requirement, it was found that in addition to the black hole interior region (region II) which particles enter when they fall through the event horizon from the exterior (region I), there has to be a separate white hole interior region (region IV) which allows us to extend the trajectories of particles which an outside observer sees rising up away from the event horizon, along with a separate exterior region (region III) which allows us to extend some possible particle trajectories in the two interior regions. There are actually multiple possible ways to extend the exterior Schwarzschild solution into a maximally extended spacetime, but the Kruskal–Szekeres extension is unique in that it is a maximal, analytic, simply connected vacuum solution in which all maximally extended geodesics are either complete or else the curvature scalar diverges along them in finite affine time. Lightcone variant: In the literature, the Kruskal–Szekeres coordinates sometimes also appear in their lightcone variant: U=T−X V=T+X, in which the metric is given by 32 G3M3re−r/2GM(dUdV)+r2dΩ2, and r is defined implicitly by the equation UV=(1−r2GM)er/2GM. These lightcone coordinates have the useful feature that outgoing null geodesics are given by constant , while ingoing null geodesics are given by constant . Furthermore, the (future and past) event horizon(s) are given by the equation UV=0 , and curvature singularity is given by the equation UV=1 The lightcone coordinates derive closely from Eddington–Finkelstein coordinates.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**RRAGC** RRAGC: Ras-related GTP binding C, also known as RRAGC, is a protein which in humans is encoded by the RRAGC gene.RRAGC is a monomeric guanine nucleotide-binding protein, or G protein. By binding GTP or GDP, small G proteins act as molecular switches in numerous cell processes and signaling pathways. Interactions: RRAGC has been shown to interact with RRAGA.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Phosphogypsum** Phosphogypsum: Phosphogypsum (PG) is the calcium sulfate hydrate formed as a by-product of the production of fertilizer from phosphate rock. It is mainly composed of gypsum (CaSO4·2H2O). Although gypsum is a widely used material in the construction industry, phosphogypsum is usually not used, but is stored indefinitely because of its weak radioactivity caused by the presence of naturally occurring uranium (U) and thorium (Th), and their daughter isotopes radium (Ra), radon (Rn) and polonium (Po). On the other hand, it includes several valuable components—calcium sulphates and elements such as silicon, iron, titanium, magnesium, aluminum, and manganese. However, the long-range storage of phosphogypsum is controversial. About five tons of phosphogypsum are generated per ton of phosphoric acid production. Annually, the estimated generation of phosphogypsum worldwide is 100 to 280 million metric tons. Production and properties: Phosphogypsum is a by-product from the production of phosphoric acid by treating phosphate ore (apatite) with sulfuric acid according to the following reaction: Ca5(PO4)3X + 5 H2SO4 + 10 H2O → 3 H3PO4 + 5 (CaSO4 · 2 H2O) + HX, where X may include OH, F, Cl, or Br. Phosphogypsum is radioactive due to the presence of naturally occurring uranium (5–10 ppm) and thorium, and their daughter nuclides radium, radon, polonium, etc. Marine-deposited phosphate typically has a higher level of radioactivity than igneous phosphate deposits, because uranium is present in seawater at about 3 ppb (roughly 85 ppb of total dissolved solids). Uranium is concentrated during the formation of evaporite deposits as dissolved solids precipitate in order of solubility, with easily dissolved materials such as sodium chloride remaining in solution longer than less soluble materials like uranium or sulfates. Other components of phosphogypsum include silica (5–10%), fluoride (F, ~1%), phosphorus (P, ~0.5%), iron (Fe, ~0.1%), aluminum (Al, ~0.1%), barium (Ba, 50 ppm), lead (Pb, ~5 ppm), chromium (Cr, ~3 ppm), selenium (Se, ~1 ppm), and cadmium (Cd, ~0.3 ppm). About 90% of the Po and Ra from the raw ore is retained in phosphogypsum. Thus it can be considered technologically enhanced naturally occurring radioactive material (TENORM). Use: Various applications have been proposed for using phosphogypsum, including as material for artificial reefs and oyster beds, cover for landfills, road pavement, roof tiles, and soil conditioner. According to Taylor (2009), "up to 15% of world PG production is used to make building materials, as a soil amendment and as a set controller in the manufacture of Portland cement". The rest remains in stacks. Use: In the United States: The United States Environmental Protection Agency (EPA) banned most applications of phosphogypsum having a 226Ra concentration of greater than 10 picocuries/gram (0.4 Bq/g) in 1990. As a result, phosphogypsum which exceeds this limit is stored in large stacks, since extracting such low concentrations of radium is either not possible or not economical with current technology, for either the use of the gypsum or the radium. Given the traditional definition of the Curie via the specific activity of 226Ra, this limit is equivalent to 0.01 milligrams (0.00015 gr) of radium per metric ton, or a concentration of 10 parts per trillion. (See § Gyp stacks below.)
EPA approved the use of phosphogypsum for road construction during the Trump Administration in 2020, saying that the approval came at the request of The Fertilizer Institute, which advocates for the fertilizer industry. Environmentalists opposed the decision, saying that using the radioactive material in this way can pose health risks. In 2021, the EPA withdrew the rule authorizing the use of phosphogypsum in road construction. The state of Florida has approximately 80% of the world's phosphogypsum production capacity. In May 2023, the Florida legislature passed a bill requiring the Florida Department of Transportation to study the use of phosphogypsum in road construction, including demonstration projects, though this would require federal approval. The law, which requires the department to complete a study and make a recommendation by April 1, 2024, was signed into law by Governor Ron DeSantis on June 29, 2023. Use: In China: China's phosphate fertilizer production exceeded that of the US in 2005, and with it came the problem of excess phosphogypsum. By 2018, inappropriate storage had become a major problem in the Yangtze River watershed, with phosphorus accounting for 56% of all breaches of water quality standards. Phosphorus, which still remains in phosphogypsum, can lead to eutrophication of bodies of water and hence algal blooms or even anoxic events ("dead zones") in the lower layers of a body of water. The total amount of phosphogypsum in storage by 2020 exceeds 600 Mt, with 75 Mt produced each year. The construction industry is the number one user of phosphogypsum in 2020, with 10.5 Mt used as concrete set retarder and 3.5 Mt used in drywall. It is also used as a chemical feedstock for producing sulfates, and as a soil conditioner similar to regular gypsum. The total consumption in 2020 was 31 Mt, much lower than the rate of accumulation. There has been a significant push to expand the use of phosphogypsum on the national level since 2016, it being part of two consecutive five-year plans. Phosphogypsum may require pre-processing to remove contaminants before use. Phosphorus (P) significantly retards curing and reduces the strength of the material, an important concern in construction. Fluorine (F) may accumulate in crops. Although Chinese phosphogypsum generally contains fewer toxic heavy metals and radioactive elements, some sources nevertheless exceed acceptable radioactivity limits for building material, or produce crops with unacceptable amounts of arsenic (As), lead (Pb), cadmium (Cd), or mercury (Hg). Barriers to further use include the cost of heavy metal removal and considerable variation among sources of phosphogypsum. Pollution and cleanup: Phosphogypsum may pollute the environment by its phosphorus content causing eutrophication, by its toxic heavy metal content, and by its radioactivity. PG releases radon, which can accumulate indoors if used as a construction material. Open-air stores also release radon at a level potentially hazardous to workers. Radon is a noble gas that is heavier than air and thus tends to accumulate in poorly ventilated underground spaces like mines or cellars. Naturally occurring radon is considered the second most common cause of lung cancer after smoking. More substantial, however, is the leaching of the contents of phosphogypsum into the water table and consequently soil, exacerbated by the fact that PG is often transported as a slurry.
Accumulation of water inside gyp stacks can lead to weakening of the stack structure, a cause of several alarms in the United States. The main approach to reducing PG pollution is to act before it leaches into the environment. This can mean recycling purified materials from PG in a variety of applications (see above) or converting it into a more stable form for storage. Cement paste backfill converts hazardous mining waste, such as PG, into a cement paste, and then uses the paste to fill in voids created by mining the rocks. Bioremediation may be used to clean up already contaminated water and soil. Microbes can remove heavy metals and radioactive material, degrade any organic pollutants within, and reduce the sulfate content. With suitable soil amendments and additives, PG can also support the growth of hardy plants, hopefully preventing further erosion. Pollution and cleanup: Gyp stacks Because phosphogypsum reuse is often uneconomical due to impurities, mining companies commonly dump the waste into man-made hills called "phosphogypsum stacks" or into waste ponds near the mine. Waste ponds are open-air reservoirs that contain a variety of different types of industrial and agricultural waste; the United States has at least 70 phosphogypsum stacks (from phosphate mines used for fertilizer production). A leaking phosphogypsum waste pond in Florida nearly collapsed in 2021, and would have done so had waste not been allowed to flow into Tampa Bay; the incident highlights the dangers and near-disasters associated with wastewater ponds throughout the country. Central Florida has a large quantity of phosphate deposits, particularly in the Bone Valley region. The marine-deposited phosphate ore from central Florida is weakly radioactive, and as such, the phosphogypsum by-product (in which the radionuclides are somewhat concentrated) is too radioactive to be used for most applications. As a result, there are about a billion tons of phosphogypsum stacked in 25 stacks in Florida (22 are in central Florida) and about 30 million additional tons are generated each year.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Paleosalinity** Paleosalinity: Paleosalinity (or palaeosalinity) is the salinity of the global ocean or of an ocean basin at a point in geological history. Importance: From Bjerrum plots, it is found that a decrease in the salinity of an aqueous fluid will act to increase the value of the carbon dioxide–carbonate system equilibrium constants (pK*). This means that the relative proportion of carbonate with respect to carbon dioxide is higher in more saline fluids, e.g. seawater, than in fresher waters. Of crucial importance for paleoclimatology is the observation that an increase in salinity will thus reduce the solubility of carbon dioxide in the oceans. Since there is thought to have been a 120 m depression in sea level at the last glacial maximum due to the extensive formation of ice sheets (which are solely freshwater), this represents a significant fractionation towards saltier seas during glacial periods. Correspondingly, this will cause a net outgassing of carbon dioxide into the atmosphere because of its reduced solubility, acting to increase atmospheric carbon dioxide by 6.5 ppm. This is thought to partly offset the net decrease of 80–100 ppm observed during glacial periods. Importance: Stratification In addition, it is thought that extensive salinity stratification can lead to a reduction in the meridional overturning circulation (MOC) through the slowing of thermohaline circulation. Increased stratification means that there is effectively a barrier to subduction of parcels of water; isopycnals effectively do not outcrop at the surface and are parallel to the surface. The ocean, in this case, can be described as "less ventilated", and this has been implicated in the slowing down of the MOC. Measuring paleosalinity: There may exist proxies for salinity, but to date the main way that salinity has been measured has been by directly measuring chlorinity in pore fluids. Adkins et al. (2002) used pore fluid chlorinity in ODP cores, with the paleo-depth estimated from nearby coral horizons. Chlorinity was measured rather than pure salinity because the major ions in seawater are not constant with depth in the sediment column; for example, sulfate reduction and cation–clay interactions can change overall salinity, whereas chlorinity is not heavily affected. Paleosalinity during the Last Glacial Maximum: Adkins' study found that global salinity increased with a global sea level drop of 120 m. Analyzing ¹⁸O data, they also found that deep waters were within error of the freezing point, with oceanic waters exhibiting a greater degree of homogeneity in temperatures. In contrast, variations in salinity were much greater than they are today. Modern-day salinities are all within 0.5 psu of the global average salinity of 34.7 psu, whereas salinities during the last glacial maximum (LGM) ranged from 35.8 psu in the North Atlantic to 37.1 in the Southern Ocean. Paleosalinity during the Last Glacial Maximum: There are some notable differences in the hydrography at the LGM and present day. Today the North Atlantic Deep Water (NADW) is observed to be more saline than Antarctic Bottom Water (AABW), whereas at the last glacial maximum it was observed that the AABW was in fact more saline; a complete reversal. Today the NADW is more salty because of the Gulf Stream; this could thus indicate a reduction of flow through the Florida Straits due to lowered sea level. Paleosalinity during the Last Glacial Maximum: Another observation is that the Southern Ocean was vastly more salty at the LGM than today.
This is particularly intriguing given the assumed importance of the Southern Ocean in oceanic dynamical regulation of ice ages. The extreme value of 37.1 psu is assumed to be a consequence of an increased degree of sea ice formation and export. This would account for the increased salinity, but would also account for the lack of oxygen isotopic fractionation; brine rejection without oxygen isotopic fractionation is thought to be highly characteristic of sea ice formation. Paleosalinity during the Last Glacial Maximum: The increased role of salinity The presence of waters near the freezing point alters the balance of the relative effects of contrasts in salinity and temperature on sea water density. This is described by the equation

$\frac{\Delta\rho}{\rho} = \beta\,\Delta S - \alpha\,\Delta T,$

where $\alpha$ is the thermal expansion coefficient and $\beta$ is the haline contraction coefficient. In particular, the ratio $\beta/\alpha$ is crucial. Using the observed temperatures and salinities, $\beta/\alpha$ is about 10 in the modern ocean, whilst at the LGM it is estimated to have been closer to 25. (Equivalently, a salinity contrast of 1 psu at the LGM changed density about as much as a temperature contrast of 25 K, versus roughly 10 K today.) The modern thermohaline circulation is thus more controlled by density contrasts due to thermal differences, whereas during the LGM the oceans were more than twice as sensitive to differences in salinity rather than temperature. In this way, the thermohaline circulation can be considered to have been less "thermo" and more "haline".
**System prevalence** System prevalence: System prevalence is a simple software architectural pattern that combines system images (snapshots) and transaction journaling to provide speed, performance scalability, transparent persistence and transparent live mirroring of computer system state. In a prevalent system, state is kept in memory in native format, all transactions are journaled, and system images are regularly saved to disk. System images and transaction journals can be stored in a language-specific serialization format for speed or in XML format for cross-language portability. The first use of the term, and the first generic, publicly available implementation of a system prevalence layer, was Prevayler, written for Java by Klaus Wuestefeld in 2001. Advantages: Simply keeping system state in RAM in its normal, natural, language-specific format is orders of magnitude faster and more programmer-friendly than the multiple conversions needed when it is stored and retrieved from a DBMS. As an example, Martin Fowler describes "The LMAX Architecture", with a transaction-journal and system-image (snapshot) based business system at its core, which can process 6 million transactions per second on a single thread. Requirement: A prevalent system needs enough memory to hold its entire state in RAM (the "prevalent hypothesis"). Prevalence advocates claim this constraint is continuously alleviated by decreasing RAM prices, and by the fact that many business databases are already small enough to fit in memory. Programmers need skill in working with business state natively in RAM, rather than using explicit API calls for storage and queries for retrieval. The system's events must be capturable for journaling, as sketched below.
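A minimal sketch of the pattern follows, in Python for brevity. It illustrates only the journal-then-apply discipline and periodic snapshots; the class names are hypothetical and this is not Prevayler's API.

```python
# Sketch of the system-prevalence pattern: all business state lives in RAM,
# every mutating command is appended to a journal *before* it is applied,
# and a full system image (snapshot) is written periodically. Recovery
# (not shown) = load the latest snapshot, then replay the journal.
import pickle

class PrevalentSystem:
    def __init__(self, state, journal_path="journal.log"):
        self.state = state                        # native in-memory state
        self.journal = open(journal_path, "ab")   # append-only journal

    def execute(self, command):
        pickle.dump(command, self.journal)        # journal first...
        self.journal.flush()
        return command.apply(self.state)          # ...then mutate state

    def snapshot(self, path="snapshot.bin"):
        with open(path, "wb") as f:               # full system image
            pickle.dump(self.state, f)

class Deposit:
    def __init__(self, account, amount):
        self.account, self.amount = account, amount
    def apply(self, state):
        state[self.account] = state.get(self.account, 0) + self.amount
        return state[self.account]

bank = PrevalentSystem({})
print(bank.execute(Deposit("alice", 100)))  # 100, journaled and applied
bank.snapshot()                             # periodic system image
```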
**Omake** Omake: Omake (御負け, usually written おまけ) means "extra" in Japanese. Its primary meaning is general and widespread. It is used as an anime and manga term to mean an "extra or bonus". In the United States and the United Kingdom, the term is most often used in a narrow sense by anime fans to describe special features on DVD releases: deleted scenes, interviews with the actors, "the making of" documentary clips, outtakes, amusing bloopers, and so forth. However, this use of the term actually predates the DVD medium by several years. For at least the past fifty years in Japan, omake such as small character figurines and toys have been given away with soft drinks and candy, and sometimes the omake is more desired than the product being sold. Omake: In English, the term is often used with this meaning, although it generally only applies to features included with anime, tokusatsu, and occasionally manga. It is thus generally limited to use amongst fans of Japanese pop culture (sometimes called otaku); like many loan words from Japanese, omake is both the singular and plural form. Description: Omake often include comedy sketches in which the characters behave out of character, break the fourth wall, or subtly address opinions of the fandom known to the writers. Sometimes scenes from the TV show or OVA are humorously re-dubbed. One example, included on the Video Girl Ai DVD, replays scenes from the OVA series with new voice-acting in a rural accent. Other times, the same actors voice a new script that is more sexually suggestive, often ludicrously so. Omake can also consist of non-canonical, and often comedic, crossover clips that sometimes occur at the end of episodes of two shows airing concurrently from the same studio, such as recent Kamen Rider and Super Sentai programs. Description: For anime, these are often presented in super deformed style, as manga omake often are. For example, the anime OVA Gunbuster features super deformed characters trying to explain what the writers know to be mostly pseudo-science, or talking about their relationships with each other in a way they do not in the series itself. In the anime series Reborn!, one of the characters, Haru Miura, interviews each of the characters of the anime in chibi form, and the characters' answers to the questions are often something they would never say in the anime or the manga. For live-action programs, although not animated, the expressions and sound effects used for comedic purposes can often be inspired by the omake found in the animated mediums. Description: The term "omake" is also used in video games; the Sega game Shenmue II for the Dreamcast had a hidden folder on the game disc labelled "Omake", found by placing the disc into a computer, containing exclusive wallpapers and concept art. Description: Another example of an omake in popular culture is related to Square's Final Fantasy IX. The secret "Blackjack" minigame, available after completion of the game, is accessed by means of a button combination. The Final Fantasy "PlayOnline" site has a secrets section for Final Fantasy IX, which requires passwords given in the official Piggyback guide to enter. The password needed to reveal the button combination for the Blackjack minigame is E-OMAKE. The minigame itself is an omake.
Description: In some fiction writing communities based on forum sites, the term "omake" refers to derivative stories posted in a story thread, usually by users other than the author of the thread, which as a general rule are non-canonical by default. Members of these communities occasionally refer to having written or posted an omake with the term "omaked". Omake occasionally appear in fanfiction about anime or manga, after the story itself, usually as a humorous "alternative ending". For example, at the end of each episode of Dance in the Vampire Bund there is a 20–30 second chibi skit called "Dance with the Vampire Maids".
**Radar mile** Radar mile: A radar mile (or radar nautical mile) is an auxiliary constant for converting a (delay) time to the corresponding scale distance on the radar display. Radar timing is usually expressed in microseconds. To relate radar timing to distances traveled by radar energy, note that radiated energy from a radar set travels at approximately 984 feet per microsecond. With the knowledge that a nautical mile is approximately 6,080 feet, the approximate time required for radar energy to travel one nautical mile is 6,080 / 984 ≈ 6.18 µs. A pulse-type radar set transmits a short burst of electromagnetic energy. The target range is determined by measuring the elapsed time while the pulse travels to and returns from the target. Because two-way travel is involved, a total time of 2 × 6.18 ≈ 12.35 microseconds per nautical mile will elapse between the start of the pulse from the antenna and its return to the antenna from a target at a range of one nautical mile; this round-trip time is the radar mile. In equation form: range (nautical miles) = elapsed time (µs) / 12.35.
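The arithmetic can be checked with a few lines of code; this is a minimal sketch using the approximate constants from the text above (984 ft/µs and 6,080 ft per nautical mile).

```python
# Radar-mile arithmetic: time for a radar pulse to travel out one nautical
# mile and back, and the delay-to-range conversion it implies.
FEET_PER_NMI = 6080.0   # approximate feet in one nautical mile
FEET_PER_US = 984.0     # radar energy travels ~984 feet per microsecond

one_way_us = FEET_PER_NMI / FEET_PER_US  # ~6.18 us per nautical mile
radar_mile_us = 2 * one_way_us           # ~12.36 us round trip (quoted as 12.35)

def range_nmi(echo_delay_us):
    """Target range in nautical miles from a measured echo delay."""
    return echo_delay_us / radar_mile_us

print(round(radar_mile_us, 2))    # 12.36
print(round(range_nmi(61.8), 2))  # a ~61.8 us echo -> ~5 nautical miles
```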
**Thiosulfoxide** Thiosulfoxide: A thiosulfoxide or thiothionyl compound is a chemical compound containing a sulfur-to-sulfur double bond, with the formula (R−)(R'−)S=S, where R and R' represent any group (typically fluorine, chlorine, alkoxy, alkyl, aryl or other organyl residues). The thiosulfoxide group adopts a trigonal pyramidal molecular shape and coordination geometry; its point group is Cs. A 1982 review concluded that there was as yet no definitive evidence for the existence of stable thiosulfoxides, which can be attributed to the double bond rule, which states that elements of period 3 and beyond do not form multiple bonds. The related sulfoxides of the type (R−)(R'−)S=O are very common. Many compounds containing a sulfur–sulfur double bond have been reported in the past, although only a few verified classes of genuinely stable compounds closely related to thiosulfoxides exist. Sulfur–sulfur double bonds can be stabilized with electron-withdrawing groups in so-called thionosulfites of the type (R−O−)(R'−O−)S=S. These compounds can be prepared by reaction of diols with disulfur dichloride. Sulfur halides such as disulfur dichloride, Cl−S−S−Cl, can convert to the branched isomer thiothionyl chloride, Cl2S=S; disulfur difluoride exists as an equilibrium mixture with thiothionyl fluoride, F2S=S, which is thermodynamically more stable. These disulfide isomerizations are occasionally studied in silico. N-(Thiosulfinyl)amines of the type R−N=S=S are another group of stable compounds containing a S=S bond. The first such compound was prepared in 1974 by reaction of the nitroso compound N,N-dimethyl-p-nitrosoaniline with tetraphosphorus decasulfide. Heating this compound to 200 °C extrudes sulfur and forms the corresponding azo compound. Disulfur monoxide, S=S=O, is stable at 20 °C for several days. Occasionally thiosulfates are depicted as having a S=S unit, but the sulfur–sulfur bond in thiosulfate is in fact a single bond.
**Autodesk Vault** Autodesk Vault: Autodesk Vault is a data management tool integrated with Autodesk Inventor Series, Autodesk Inventor Professional, AutoCAD Mechanical, AutoCAD Electrical, Autodesk Revit and Civil 3D products. It helps design teams track work in progress and maintain version control in multi-user environments. It allows them to organize and reuse designs by consolidating product information and reducing the need to re-create designs from scratch. Users can store and search both CAD data (such as Autodesk Inventor, DWG, and DWF files) and non-CAD documents (such as Microsoft Word and Microsoft Excel files). Overview: The Vault environment functions as a client-server application, with the central SQL database and Autodesk Data Management Server (ADMS) applications installed on a Windows-based server and client access granted via clients such as the thick client (Vault Explorer) and application integrations. ADMS acts as the middleware that handles client transactions with the SQL database. Vault Explorer functions as the client application and is intended to run alongside the companion CAD software. The Vault Explorer UI (user interface) is intended to have an appearance similar to Microsoft Outlook and can display the Vault folder structure, file metadata in the form of a grid, and a preview pane for more detailed information. Overview: Autodesk Vault is a file versioning system that "records" the progression of all edits a file has undergone. All files and their associated metadata are indexed in the SQL-based data management system and are searchable from the Vault client interface. Other information about each file includes version history, "Uses" (a list of children), "Where Used" (a list of all parents), as well as a lightweight viewable in the form of an Autodesk Design Web Format (DWF) file, which is automatically published upon check-in. When a user intends to edit a file, the file is checked out and edits are made. When the user is satisfied with the changes, the file is checked in and the new file version, containing the changes, becomes available to other users in the workgroup. In-process file changes (file saves) are hidden from other users until the changes are checked in (a minimal sketch of this model appears at the end of this article). As files are edited, renamed and moved in the folder structure, the Vault database automatically updates any file references in related files. Overview: Vault is intended to be the core data management strategy for Autodesk's design products and therefore has add-ins for many of Autodesk's design solutions. ADMS also serves as the host for Autodesk Inventor's Content Center (standard parts library) when central hosting is desired. The Autodesk Vault Family of Products: The Autodesk Vault product family is a stack of products, each offering incremental functionality over the previous product. While the base "Autodesk Vault" is included with many Autodesk design applications, additional functionality is available based on the needs of the organization. The following products are part of the Autodesk Vault family: Vault – work-in-process data management; Vault Workgroup – customisation, revision management and security; Vault Professional – professional-level capabilities such as ERP integration, item master and more. Note: For the 2011 release, Autodesk Vault Manufacturing was renamed to Autodesk Vault Professional. This was also formerly known as Autodesk Productstream in prior releases.
The family was subsequently simplified for the 2014 release with the retirement of Autodesk Vault Collaboration. Functionality Matrix: Legend: V – Vault (base); VW – Vault Workgroup; VP – Vault Professional. History: Autodesk Vault was initially known as truEVault, part of an acquisition from a company called truEInnovations, Inc., based in Eagan, Minnesota. truEInnovations was started by two entrepreneurs, Brian Roepke and Dean Brisson, in 1999. The company was founded on the basis of bringing a more affordable tool for managing engineering data to the market. After the asset acquisition of truEInnovations by Autodesk in 2003, Autodesk began to further the integration of the product into the manufacturing product line, starting with Autodesk Inventor. Supported Applications: As of the 2015 release, the following applications are supported by Autodesk Vault. * These integrations are only available for Vault Workgroup and above, not the base Vault.
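The check-out/check-in versioning model described above can be illustrated with a brief sketch. This models only the general pattern (pessimistic locking, with in-process edits hidden until check-in); the class and method names are hypothetical and are not Vault's actual API.

```python
# Sketch of check-out/check-in versioning: one editor at a time, saves stay
# private, and check-in publishes a new version to the whole workgroup.
class VaultedFile:
    def __init__(self, name, content=""):
        self.name = name
        self.versions = [content]     # full history; index + 1 = version number
        self.checked_out_by = None
        self._working_copy = None

    def check_out(self, user):
        if self.checked_out_by is not None:
            raise RuntimeError(f"{self.name} is already checked out")
        self.checked_out_by = user
        self._working_copy = self.versions[-1]
        return self._working_copy

    def save(self, content):
        self._working_copy = content  # in-process save, hidden from others

    def check_in(self, user):
        if user != self.checked_out_by:
            raise RuntimeError("only the editing user can check in")
        self.versions.append(self._working_copy)  # new version, visible to all
        self.checked_out_by, self._working_copy = None, None
        return len(self.versions)     # new version number

f = VaultedFile("bracket.ipt", "rev A geometry")
f.check_out("alice")
f.save("rev B geometry")
print(f.check_in("alice"))  # -> 2
```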
**Radiation Research** Radiation Research: Radiation Research, the official journal of the Radiation Research Society, is a monthly peer-reviewed scientific journal covering research into the areas of biology, chemistry, medicine and physics, including epidemiology and translational research at academic institutions, private research institutes, research hospitals and government agencies. The editorial content of Radiation Research is devoted to every aspect of scientific research into radiation. The goal of the journal is to provide researchers with the latest information in all areas of radiation science. The current editor-in-chief is Marc Mendonca (Indiana University School of Medicine). According to the Journal Citation Reports, the journal has an impact factor of 2.539 and a 5-year impact factor of 2.775. This journal had a supplement titled Radiation Research Supplement, which appeared in 8 volumes between 1959 and 1985. Past Editors-in-Chief: Titus C. Evans, Vol. 1–50; Oddvar F. Nygaard, Vol. 51–79; Daniel Billen, Vol. 80–113; R. J. Michael Fry, Vol. 114–147; John F. Ward, Vol. 148–154; Sara Rockwell, Vol. 155–174.
**Stay apparatus** Stay apparatus: The stay apparatus is an arrangement of muscles, tendons and ligaments that work together so that an animal can remain standing with virtually no muscular effort. It is best known as the mechanism by which horses can enter a light sleep while still standing up. The effect is that an animal can distribute its weight on three limbs while resting a fourth in a flexed, non-weight-bearing position. The animal can periodically shift its weight to rest a different leg, and thus all limbs can be individually rested, reducing overall wear and tear. The relatively slim legs of certain large mammals such as horses and cows would be subject to dangerous levels of fatigue if not for the stay apparatus. The lower part of the stay apparatus consists of the suspensory apparatus, which is the same in both front and hind legs, while the upper portion of the stay apparatus differs between the fore and hind limbs. In the front legs, the stay apparatus engages when the animal's muscles relax. The upper portion of the stay apparatus in the forelimbs includes the major attachment, extensor and flexor muscles and tendons. In essence, the accessory check ligaments act as tension bands; they stabilize the carpus (called the "knee" in horses), fetlock and bones of the foot. In the upper portion, the shoulder and elbow joints have several musculo-tendinous structures that keep these joints in passive extension. In the hind limbs, the major muscles, ligaments and tendons work with the reciprocal joints of the hock and stifle, which form a reciprocal apparatus forcing the hock and stifle to flex and extend in unison. The medial patellar ligament "locks" the patella ("kneecap") in place, and this prevents flexion in both the stifle and the hock. At the stifle joint, a "hook" structure on the inside bottom end of the femur cups the patella and the medial patellar ligament, preventing the leg from bending. Cattle have a stay apparatus which allows them to rest individual limbs, but cattle generally do not sleep standing up. Stay apparatus: Anatomical structures important in the stay apparatus include: The suspensory apparatus, including the superficial and deep digital flexor tendons along with the proximal and distal check ligaments. The distal sesamoidean ligaments run from the sesamoid bones to the two pastern bones. Biceps brachii: originates from the caudal side of the scapula and inserts into the radial tuberosity. Flexes the elbow, and is the part of the stay apparatus that keeps the elbow and shoulder from bending. Stay apparatus: Triceps brachii: has three heads, which originate and insert in separate places: from the caudal side of the scapula into the lateral and caudal side of the olecranon, from the humerus into the lateral side of the olecranon, and from the medial side of the humerus into the medial and cranial side of the olecranon. The triceps brachii is the most important extensor of the elbow and an important part of the stay apparatus, keeping the elbow fixed. Stay apparatus: Extensor carpi radialis: originates from the humerus, continues distally along the dorsal side of the radius, and inserts on the metacarpal tuberosity. Flexes the elbow, extends the carpus. Also used in the stay apparatus to fix the carpus.
Stay apparatus: The patellar tendon and patellar ligaments. The most common of the ancient, now-extinct wild horse species in North America, Dinohippus, had a distinctive passive stay apparatus that helped it conserve energy while standing for long periods. Dinohippus was the first horse to show a rudimentary form of this characteristic, and its existence provided additional evidence of the close relationship between Dinohippus and the modern Equus.
**Pipette** Pipette: A pipette (sometimes spelled as pipet) is a laboratory tool commonly used in chemistry, biology and medicine to transport a measured volume of liquid, often as a media dispenser. Pipettes come in several designs for various purposes, with differing levels of accuracy and precision, from single-piece glass pipettes to more complex adjustable or electronic pipettes. Many pipette types work by creating a partial vacuum above the liquid-holding chamber and selectively releasing this vacuum to draw up and dispense liquid. Measurement accuracy varies greatly depending on the instrument. History: The first simple pipettes were made in glass, such as Pasteur pipettes. Large pipettes continue to be made in glass; others are made in squeezable plastic for situations where an exact volume is not required. The first micropipette was patented in 1957 by Dr Heinrich Schnitger (Marburg, Germany). The founder of the company Eppendorf, Dr. Heinrich Netheler, inherited the rights and started the commercial production of micropipettes in 1961. The adjustable micropipette is a Wisconsin invention developed through interactions among several people, primarily inventor Warren Gilson and Henry Lardy, a professor of biochemistry at the University of Wisconsin–Madison. Nomenclature: Although specific names exist for each type of pipette, in practice any type can be referred to as a "pipette". Pipettes that dispense between 1 and 1000 μl are sometimes distinguished as micropipettes. The terms "pipette" and "pipet" are used interchangeably despite minor historical differences in their usage. Common pipettes: Air displacement micropipettes Air displacement micropipettes are a type of adjustable micropipette that deliver a measured volume of liquid, typically between about 0.1 µl and 1,000 µl (1 ml) depending on size. These pipettes require disposable tips that come in contact with the fluid; the four standard sizes of micropipettes correspond to four different disposable tip colors. These pipettes operate by piston-driven air displacement. A vacuum is generated by the vertical travel of a metal or ceramic piston within an airtight sleeve. As the piston moves upward, driven by the depression of the plunger, a vacuum is created in the space left vacant by the piston. The liquid around the tip moves into this vacuum (along with the air in the tip) and can then be transported and released as necessary. These pipettes are capable of being very precise and accurate. However, since they rely on air displacement, they are subject to inaccuracies caused by the changing environment, particularly temperature, and by user technique. For these reasons, this equipment must be carefully maintained and calibrated, and users must be trained to exercise correct and consistent technique. Common pipettes: The micropipette was invented and patented in 1957 by Dr. Heinrich Schnitger in Marburg, Germany. Afterwards, the co-founder of the biotechnology company Eppendorf, Dr. Heinrich Netheler, inherited the rights and initiated the global and general use of micropipettes in labs.
In 1972, the adjustable micropipette was invented at the University of Wisconsin-Madison by several people, primarily Warren Gilson and Henry Lardy. Air displacement pipettes differ in several respects: adjustable or fixed volume; volume handled; single-channel, multi-channel or repeater design; conical or cylindrical tips; standard or locking mechanism; manual or electronic operation; and manufacturer. Irrespective of brand or expense of pipette, every micropipette manufacturer recommends checking the calibration at least every six months if the pipette is used regularly. Companies in the drug or food industries are required to calibrate their pipettes quarterly (every three months); schools conducting chemistry classes may calibrate annually; and those in forensics and research, where a great deal of testing is commonplace, perform monthly calibrations. Common pipettes: Electronic pipette To minimize the possible development of musculoskeletal disorders due to repetitive pipetting, electronic pipettes commonly replace the mechanical version. Common pipettes: Positive displacement pipette These are similar to air displacement pipettes, but are less commonly used; they are employed to avoid contamination and for volatile or viscous substances at small volumes, such as DNA. The major difference is that the disposable tip is a plastic microsyringe, composed of a capillary and a piston (movable inner part) which directly displaces the liquid. Common pipettes: Volumetric pipettes Volumetric (bulb) pipettes allow the user to measure a volume of solution extremely precisely (to four significant figures). These pipettes have a large bulb with a long narrow portion above, with a single graduation mark, as each is calibrated for a single volume (like a volumetric flask). Typical volumes are 20, 50, and 100 mL. Volumetric pipettes are commonly used to make laboratory solutions from a base stock as well as to prepare solutions for titration. Common pipettes: Graduated pipettes Graduated pipettes are a type of macropipette consisting of a long tube with a series of graduations, as on a graduated cylinder or burette, to indicate different calibrated volumes. They also require a source of vacuum; in the early days of chemistry and biology, the mouth was used. The safety regulations of the time included the statement: "Never pipette by mouth KCN, NH3, strong acids, bases and mercury salts". Some pipettes were manufactured with two bubbles between the mouthpiece and the solution level line, to protect the chemist from accidentally swallowing the solution. Common pipettes: Pasteur pipette Pasteur pipettes are plastic or glass pipettes used to transfer small amounts of liquids, but are not graduated or calibrated for any particular volume. The bulb is separate from the pipette body. Pasteur pipettes are also called teat pipettes, droppers, eye droppers and chemical droppers. Transfer pipettes Transfer pipettes, also known as Beral pipettes, are similar to Pasteur pipettes but are made from a single piece of plastic, and their bulb can serve as the liquid-holding chamber. Specialized pipettes: Pipetting syringe Pipetting syringes are hand-held devices that combine the functions of volumetric (bulb) pipettes, graduated pipettes, and burettes. They are calibrated to ISO volumetric A-grade standards. A glass or plastic pipette tube is used with a thumb-operated piston and PTFE seal, which slides within the pipette in a positive displacement operation.
Such a device can be used on a wide variety of fluids (aqueous, viscous, and volatile fluids; hydrocarbons; essential oils; and mixtures) in volumes between 0.5 mL and 25 mL. This arrangement provides improvements in precision, handling safety, reliability, economy, and versatility. No disposable tips or pipetting aids are needed with the pipetting syringe. Specialized pipettes: Van Slyke pipette A graduated pipette commonly used in medical technology with serologic pipettes for volumetric analysis. Invented by Donald Dexter Van Slyke. Ostwald–Folin pipette A special pipette used in measuring viscous fluids such as whole blood. Common in medical technology laboratory setups together with other pipettes. Invented by Friedrich Wilhelm Ostwald, a Baltic German chemist, and later refined by Otto Folin, an American chemist. Glass micropipette These are used to physically interact with microscopic samples, such as in the procedures of microinjection and patch clamping. Most micropipettes are made of borosilicate, aluminosilicate or quartz, with many types and sizes of glass tubing available. Each of these compositions has unique properties which determine suitable applications. Glass micropipettes are fabricated in a micropipette puller and are typically used in a micromanipulator. Specialized pipettes: Microfluidic pipette A recent introduction into the micropipette field integrates the versatility of microfluidics into a freely positionable pipette platform. At the tip of the device a localized flow zone is created, allowing constant control of the nanoliter environment directly in front of the pipette. The pipettes are made from polydimethylsiloxane (PDMS), which is formed using reactive injection molding. Interfacing these pipettes using pneumatics enables multiple solutions to be loaded and switched on demand, with solution exchange times of 100 ms. Specialized pipettes: It was invented by Alar Ainla of the Biophysical Technology Lab at Chalmers University of Technology in Sweden. Extremely low volume pipettes A zeptoliter pipette has been developed at Brookhaven National Laboratory. The pipette is made of a carbon shell, within which is an alloy of gold–germanium. The pipette was used to study how crystallization takes place. Specialized pipettes: Pipette aids A variety of devices have been developed for safer, easier, and more efficient pipetting. For example, a motorized pipette controller can aid liquid aspiration or dispensing using volumetric pipettes or graduated pipettes; a tablet can interact in real time with the pipette and guide a user through a protocol; and a pipette station can help to control the pipette tip immersion depth and improve ergonomics. Specialized pipettes: Robots Pipette robots are capable of manipulating pipettes as humans would. Calibration: Pipette recalibration is an important consideration in laboratories using these devices. It is the act of determining the accuracy of a measuring device by comparison with NIST-traceable reference standards. Pipette calibration is essential to ensure that the instrument is working according to expectations and as per the defined regimes or work protocols. Pipette calibration is considered a complex affair because it includes many elements of the calibration procedure and several calibration protocol options, as well as many makes and models of pipettes to consider. Posture and injuries: Proper pipetting posture is the most important element in establishing good ergonomic work practices.
During repetitive tasks such as pipetting, maintaining body positions that provide a maximum of strength with the least amount of muscular stress is important to minimize the risk of injury. A number of common pipetting techniques have been identified as potentially hazardous due to biomechanical stress factors. Recommendations for corrective pipetting actions, made by various US governmental agencies and ergonomics experts, are presented below. Posture and injuries: Winged elbow pipetting Technique: elevated, "winged elbow". The average human arm weighs approximately 6% of the total body weight. Holding a pipette with the elbow extended (winged elbow) in a static position places the weight of the arm onto the neck and shoulder muscles and reduces blood flow, thereby causing stress and fatigue. Muscle strength is also substantially reduced as arm flexion is increased. Posture and injuries: Corrective action: Position elbows as close to the body as possible, with arms and wrists extended in straight, neutral positions (handshake posture). Keep work items within easy reach to limit extension and elevation of the arm. Arm/hand elevation should not exceed 12 inches from the worksurface. Over-rotated arm pipetting Technique: over-rotated forearm and wrist. Rotation of the forearm in a supinated position (palm up) and/or wrist flexion increases the fluid pressure in the carpal tunnel. This increased pressure can result in compression of soft tissues like nerves, tendons and blood vessels, causing numbness in the thumb and fingers. Posture and injuries: Corrective action: A forearm rotation angle near 45° pronation (palm down) should be maintained to minimize carpal tunnel pressure during repetitive activity. Clenched fist pipetting Technique: tight grip (clenched fist). Hand fatigue results from continuous contact between a hard object and sensitive tissues. This occurs when a firm grip is needed to hold a pipette, such as when jamming on a tip, and results in diminished hand strength. Posture and injuries: Corrective action: Use pipettes with hooks or other attributes that allow a relaxed grip and/or alleviate the need to constantly grip the pipette. This will reduce tension in the arm, wrist and hand. Thumb plunger pipetting Technique: concentrated area of force (contact stress between a hard object and sensitive tissues). Some devices have plungers and buttons with limited surface areas, requiring a great deal of force to be expended by the thumb or other finger in a concentrated area. Posture and injuries: Corrective action: Use pipettes with large contoured or rounded plungers and buttons. This will disperse the pressure used to operate the pipette across the entire surface of the thumb or finger, reducing contact pressure to acceptable levels. Incorrect posture can have a strong impact on available strength. Elevated arm pipetting Technique: elevated arm. Muscle strength is substantially reduced when arm flexion is increased. Posture and injuries: Corrective action: Keep work items within easy reach to limit extension and elevation of the arm. Arm/hand elevation should also not exceed 12 inches from the worksurface. Elbow strength pipetting Technique: elbow flexion or abduction. Arm strength diminishes as elbow posture deviates from a 90° position.
Posture and injuries: Corrective action: Keep forearm and hand elevation within 12 inches of the worksurface, which will allow the elbow to remain near a 90° position. Unlike pipetting with traditional axial pipettes, ergonomic pipetting can improve posture and prevent common pipetting injuries such as carpal tunnel syndrome, tendinitis and other musculoskeletal disorders. To be "ergonomically correct", significant changes to traditional pipetting postures are essential: minimizing forearm and wrist rotations, keeping a low arm and elbow height, and relaxing the shoulders and upper arms. Pipette stand: Typically, pipettes are stored vertically on holders called pipette stands. In the case of electronic pipettes, such stands can recharge their batteries, and the most advanced pipette stands can directly control electronic pipettes. Alternatives: An alternative technology, especially for transferring small volumes (in the microlitre and nanolitre range), is acoustic droplet ejection.
**Truncated 24-cells** Truncated 24-cells: In geometry, a truncated 24-cell is a uniform 4-polytope (4-dimensional uniform polytope) formed as the truncation of the regular 24-cell. There are two degrees of truncation, including a bitruncation. Truncated 24-cell: The truncated 24-cell or truncated icositetrachoron is a uniform 4-dimensional polytope (or uniform 4-polytope), which is bounded by 48 cells: 24 cubes and 24 truncated octahedra. Each vertex joins three truncated octahedra and one cube, in an equilateral triangular pyramid vertex figure. Construction The truncated 24-cell can be constructed from polytopes with three symmetry groups: F4 [3,4,3]: a truncation of the 24-cell. B4 [3,3,4]: a cantitruncation of the 16-cell, with two families of truncated octahedral cells. D4 [31,1,1]: an omnitruncation of the demitesseract, with three families of truncated octahedral cells. Zonotope It is also a zonotope: it can be formed as the Minkowski sum of the six line segments connecting opposite pairs among the twelve permutations of the vector (+1,−1,0,0). Truncated 24-cell: Cartesian coordinates The Cartesian coordinates of the vertices of a truncated 24-cell having edge length √2 are all coordinate permutations and sign combinations of: (0,1,2,3) [4!×2³ = 192 vertices]. The dual configuration has coordinates at all coordinate permutations and signs of (1,1,1,5) [4×2⁴ = 64 vertices], (1,3,3,3) [4×2⁴ = 64 vertices], (2,2,2,4) [4×2⁴ = 64 vertices]. Structure The 24 cubical cells are joined via their square faces to the truncated octahedra, and the 24 truncated octahedra are joined to each other via their hexagonal faces. Truncated 24-cell: Projections The parallel projection of the truncated 24-cell into 3-dimensional space, truncated octahedron first, has the following layout: The projection envelope is a truncated cuboctahedron. Two of the truncated octahedra project onto a truncated octahedron lying in the center of the envelope. Six cuboidal volumes join the square faces of this central truncated octahedron to the centers of the octagonal faces of the great rhombicuboctahedron. These are the images of 12 of the cubical cells, a pair of cells to each image. The 12 square faces of the great rhombicuboctahedron are the images of the remaining 12 cubes. The 6 octagonal faces of the great rhombicuboctahedron are the images of 6 of the truncated octahedra. The 8 (non-uniform) truncated octahedral volumes lying between the hexagonal faces of the projection envelope and the central truncated octahedron are the images of the remaining 16 truncated octahedra, a pair of cells to each image. Images Related polytopes The convex hull of the truncated 24-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 480 cells: 48 cubes, 144 square antiprisms, 288 tetrahedra (as tetragonal disphenoids), and 384 vertices. Its vertex figure is a hexakis triangular cupola. Vertex figure Bitruncated 24-cell: The bitruncated 24-cell, 48-cell, or tetracontoctachoron is a 4-dimensional uniform polytope (or uniform 4-polytope) derived from the 24-cell. E. L. Elte identified it in 1912 as a semiregular polytope. It is constructed by bitruncating the 24-cell (truncating at halfway to the depth which would yield the dual 24-cell). Being a uniform 4-polytope, it is vertex-transitive. In addition, it is cell-transitive, consisting of 48 truncated cubes, and also edge-transitive, with 3 truncated cube cells per edge and with one triangle and two octagons around each edge.
The 48 cells of the bitruncated 24-cell correspond with the 24 cells and 24 vertices of the 24-cell. As such, the centers of the 48 cells form the root system of type F4. Its vertex figure is a tetragonal disphenoid, a tetrahedron with 2 opposite edges of length 1 and all 4 lateral edges of length √(2+√2). Alternative names Bitruncated 24-cell (Norman W. Johnson) 48-cell, as a cell-transitive 4-polytope Bitruncated icositetrachoron Bitruncated polyoctahedron Tetracontaoctachoron (Cont) (Jonathan Bowers) Structure The truncated cubes are joined to each other via their octagonal faces in anti-orientation; i.e., two adjoining truncated cubes are rotated 45 degrees relative to each other so that no two triangular faces share an edge. The sequence of truncated cubes joined to each other via opposite octagonal faces forms a cycle of 8. Each truncated cube belongs to 3 such cycles. On the other hand, the sequence of truncated cubes joined to each other via opposite triangular faces forms a cycle of 6. Each truncated cube belongs to 4 such cycles. Bitruncated 24-cell: Seen in a configuration matrix, all incidence counts between elements are shown. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order by a subgroup order obtained by removing one mirror at a time. Edges exist at 4 symmetry positions. Squares exist at 3 positions, hexagons at 2 positions, and octagons at one. Finally, the 4 types of cells exist centered on the 4 corners of the fundamental simplex. Bitruncated 24-cell: Coordinates The Cartesian coordinates of a bitruncated 24-cell having edge length 2 are all permutations of coordinates and signs of: (0, 2+√2, 2+√2, 2+2√2) (1, 1+√2, 1+√2, 3+2√2) Projections Projection to 2 dimensions Projection to 3 dimensions Related regular skew polyhedron The regular skew polyhedron, {8,4|3}, exists in 4-space with 4 octagonal faces around each vertex, in a zig-zagging nonplanar vertex figure. These octagonal faces can be seen on the bitruncated 24-cell, using all 576 edges and 288 vertices. The 192 triangular faces of the bitruncated 24-cell can be seen as removed. The dual regular skew polyhedron, {4,8|3}, is similarly related to the square faces of the runcinated 24-cell. Bitruncated 24-cell: Disphenoidal 288-cell The disphenoidal 288-cell is the dual of the bitruncated 24-cell. It is a 4-dimensional polytope (or polychoron) derived from the 24-cell. It is constructed by doubling and rotating the 24-cell, then constructing the convex hull. Being the dual of a uniform polychoron, it is cell-transitive, consisting of 288 congruent tetragonal disphenoids. In addition, it is vertex-transitive under the group Aut(F4). Images Geometry The vertices of the 288-cell are precisely the 24 Hurwitz unit quaternions with norm squared 1, united with the 24 vertices of the dual 24-cell with norm squared 2, projected to the unit 3-sphere. These 48 vertices correspond to the binary octahedral group 2O or <2,3,4>, order 48. Bitruncated 24-cell: Thus, the 288-cell is the only non-regular 4-polytope which is the convex hull of a quaternionic group, disregarding the infinitely many dicyclic (same as binary dihedral) groups; the regular ones are the 24-cell (≘ 2T or <2,3,3>, order 24) and the 600-cell (≘ 2I or <2,3,5>, order 120). (The 16-cell corresponds to the binary dihedral group 2D2 or <2,2,2>, order 16.) The inscribed 3-sphere has radius 1/2+√2/4 ≈ 0.853553 and touches the 288-cell at the centers of the 288 tetrahedra, which are the vertices of the dual bitruncated 24-cell.
Bitruncated 24-cell: The vertices can be coloured in 2 colours, say red and yellow, with the 24 Hurwitz units in red and the 24 duals in yellow, the yellow 24-cell being congruent to the red one. Thus the product of 2 equally coloured quaternions is red and the product of 2 in mixed colours is yellow. Bitruncated 24-cell: Placing a fixed red vertex at the north pole (1,0,0,0), there are 6 yellow vertices in the next deeper "latitude" at (√2/2,x,y,z), followed by 8 red vertices in the latitude at (1/2,x,y,z). The complete coordinates are given as linear combinations of the quaternionic units 1, i, j, k, which at the same time can be taken as the elements of the group 2O. The next deeper latitude is the equator hyperplane intersecting the 3-sphere in a 2-sphere, which is populated by 6 red and 12 yellow vertices. Bitruncated 24-cell: Layer 2 is a 2-sphere circumscribing a regular octahedron whose edges have length 1. A tetrahedron with its vertex at the north pole has 1 of these edges as its long edge, whose 2 vertices are connected by short edges to the north pole. Another long edge runs from the north pole into layer 1, and 2 short edges run from there into layer 2. Bitruncated 24-cell: There are 192 long edges with length 1 connecting equal colours and 144 short edges with length √(2−√2) ≈ 0.765367 connecting mixed colours. 192×2/48 = 8 long and 144×2/48 = 6 short edges, together 14 edges, meet at any vertex. The 576 faces are isosceles with 1 long and 2 short edges, all congruent. The angles at the base are arccos(√(4+√8)/4) ≈ 49.210°. 576×3/48 = 36 faces meet at a vertex, 576×1/192 = 3 at a long edge, and 576×2/144 = 8 at a short one. Bitruncated 24-cell: The 288 cells are tetrahedra with 4 short edges and 2 antipodal and perpendicular long edges, one of which connects 2 red and the other 2 yellow vertices. All the cells are congruent. 288×4/48 = 24 cells meet at a vertex. 288×2/192 = 3 cells meet at a long edge, 288×4/144 = 8 at a short one. 288×4/576 = 2 cells meet at a triangle. Related polytopes: B4 family of uniform polytopes: F4 family of uniform polytopes:
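The vertex counts quoted above are easy to verify numerically; the following is an illustrative check in Python, generating all distinct signed coordinate permutations.

```python
# Count the vertices of the truncated 24-cell and its dual configuration
# from the coordinate recipes given above.
from itertools import permutations, product

def signed_permutations(base):
    """All distinct coordinate permutations and sign combinations of base."""
    verts = set()
    for perm in permutations(base):
        for signs in product((1, -1), repeat=4):
            verts.add(tuple(s * c for s, c in zip(signs, perm)))
    return verts

# (0,1,2,3): 4! permutations x 2^3 sign choices = 192 distinct vertices
# (2^3, not 2^4, because flipping the sign of the zero coordinate is a no-op).
print(len(signed_permutations((0, 1, 2, 3))))          # 192

# Dual configuration: each base point gives 4 x 2^4 = 64 distinct vertices.
for base in [(1, 1, 1, 5), (1, 3, 3, 3), (2, 2, 2, 4)]:
    print(len(signed_permutations(base)))              # 64 each
```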
**PEX10** PEX10: Peroxisome biogenesis factor 10 is a protein that in humans is encoded by the PEX10 gene. Alternative splicing results in two transcript variants encoding different isoforms. Function: Peroxisome biogenesis factor 10 is involved in import of peroxisomal matrix proteins. This protein localizes to the peroxisomal membrane. Clinical significance: Mutations in this gene result in phenotypes within the Zellweger spectrum of peroxisomal biogenesis disorders, ranging from neonatal adrenoleukodystrophy to Zellweger syndrome. Interactions: PEX10 has been shown to interact with PEX12 and PEX19.
**WebFinger** WebFinger: WebFinger is a protocol specified by the Internet Engineering Task Force (IETF) that allows for the discovery of information about people and things identified by a URI. Information about a person might be discovered via an acct: URI, for example, which is a URI that looks like an email address. WebFinger: WebFinger is specified as the discovery protocol for OpenID Connect, a protocol that allows one to more easily log in to various sites on the Internet. The WebFinger protocol is used by federated software, such as GNU social, Diaspora, or Mastodon, to discover users on federated nodes and pods, as well as by the remoteStorage protocol. As a historical note, the name "WebFinger" is derived from the old ARPANET Finger protocol, but WebFinger is a very different protocol, designed for HTTP. The protocol payload is represented in JSON format. Example: A basic lookup with a profile page and business card consists of a client request to a server's /.well-known/webfinger endpoint and a JSON server response. Usage on Mastodon: On Mastodon, any federated server can look up users by sending a request to the WebFinger endpoint of another server, for example for the user @Mastodon@mastodon.social, as sketched below.
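As a concrete illustration, the sketch below performs such a lookup against the standard /.well-known/webfinger endpoint defined in RFC 7033. The exact link relations returned vary by server, so treat the printed fields as typical rather than guaranteed.

```python
# WebFinger lookup (RFC 7033): fetch the JSON resource descriptor (JRD)
# for an account URI from a host's /.well-known/webfinger endpoint.
import json
import urllib.parse
import urllib.request

def webfinger(resource, host):
    """Resolve a resource URI, e.g. acct:Mastodon@mastodon.social."""
    query = urllib.parse.urlencode({"resource": resource})
    url = f"https://{host}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

jrd = webfinger("acct:Mastodon@mastodon.social", "mastodon.social")
print(jrd["subject"])                     # "acct:Mastodon@mastodon.social"
for link in jrd.get("links", []):
    print(link["rel"], link.get("href"))  # profile page, ActivityPub actor, ...
```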
**Conformal map projection** Conformal map projection: In cartography, a conformal map projection is one in which every angle between two curves that cross each other on Earth (a sphere or an ellipsoid) is preserved in the image of the projection; that is, the projection is a conformal map in the mathematical sense. For example, if two roads cross each other at a 39° angle, their images on a map with a conformal projection cross at a 39° angle. Properties: A conformal projection can be defined as one that is locally conformal at every point on the map, albeit possibly with singular points where conformality fails. Thus, every small figure is nearly similar to its image on the map. The projection preserves the ratio of two lengths in the small domain. All of the projection's Tissot's indicatrices are circles. Conformal projections preserve only small figures; large figures are distorted even by conformal projections. In a conformal projection, any small figure is similar to its image, but the ratio of similarity (scale) varies by location, which explains the distortion of the conformal projection. Properties: In a conformal projection, parallels and meridians cross rectangularly on the map; the converse is not necessarily true. Counterexamples are the equirectangular and equal-area cylindrical projections (in normal aspect). These projections expand meridian-wise and parallel-wise by different ratios respectively. Thus, parallels and meridians cross rectangularly on the map, but these projections do not preserve other angles; i.e., these projections are not conformal. Properties: As proven by Leonhard Euler in 1775, a conformal map projection cannot be equal-area, nor can an equal-area map projection be conformal. This is also a consequence of Carl Gauss's 1827 Theorema Egregium [Remarkable Theorem]. List of conformal projections: Mercator projection (conformal cylindrical projection) Mercator projection of normal aspect (every rhumb line is drawn as a straight line on the map) Transverse Mercator projection Gauss–Krüger coordinate system (preserves lengths on the central meridian on an ellipsoid) Oblique Mercator projection Space-oblique Mercator projection (a modification of the oblique Mercator projection for satellite orbits, accounting for the earth's rotation while remaining nearly conformal) Lambert conformal conic projection Oblique conformal conic projection (sometimes used for elongated regions, such as the continents of the Americas or the Japanese archipelago) Stereographic projection (conformal azimuthal projection; every circle on the earth is drawn as a circle or a straight line on the map) Miller Oblated Stereographic projection (a modified stereographic projection for the continents of Africa and Europe) GS50 projection (made from a stereographic projection adjusted by a polynomial in complex numbers) Littrow projection (conformal retro-azimuthal projection) Lagrange projection (a polyconic projection, and a composition of a Lambert conformal conic projection and a Möbius transformation) August epicycloidal projection (a composition of the Lagrange projection of the sphere in a circle with a polynomial of degree 3 in complex numbers) Applications of elliptic functions: Peirce quincuncial projection (projects the earth into a square, conformally except at four singular points)
Lee conformal projection of the world in a tetrahedron Applications: Large scale Many large-scale maps use conformal projections because figures in large-scale maps can be regarded as small enough. The figures on the maps are nearly similar to their physical counterparts. A non-conformal projection can be used in a limited domain such that the projection is locally conformal. Gluing many maps together restores roundness. To make a new sheet from many maps or to change the center, the body must be re-projected. Seamless online maps can be very large Mercator projections, so that any place can become the map's center while the map remains conformal. However, it is difficult to compare lengths or areas of two far-off figures using such a projection. The Universal Transverse Mercator coordinate system and the Lambert system in France are projections that support the trade-off between seamlessness and scale variability. Small scale Maps reflecting directions, such as a nautical chart or an aeronautical chart, are projected by conformal projections. Maps treating values whose gradients are important, such as a weather map with atmospheric pressure, are also projected by conformal projections. Small-scale maps have large scale variations in a conformal projection, so recent world maps use other projections. Historically, many world maps were drawn with conformal projections, such as Mercator maps or hemisphere maps by stereographic projection. Conformal maps containing large regions vary in scale by location, so it is difficult to compare lengths or areas. However, some techniques require that a length of 1 degree on a meridian = 111 km = 60 nautical miles. In non-conformal maps such techniques are not available, because the scale at a given point varies with direction. In Mercator or stereographic projections, scales vary by latitude, so bar scales for several latitudes are often appended. In complex projections, such as those of oblique aspect, contour charts of scale factors are sometimes appended.
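As a minimal numerical illustration of these scale variations, the sketch below computes the normal-aspect Mercator projection of the unit sphere and its latitude-dependent scale factor; the equal east–west and north–south scale (sec φ) at each point is exactly what makes the projection conformal.

```python
# Mercator on the unit sphere: x = lambda, y = ln(tan(pi/4 + phi/2)).
# The local scale factor is sec(phi) in *both* directions, so small shapes
# keep their angles (conformality) while areas grow with latitude.
import math

def mercator(lon_deg, lat_deg):
    lam, phi = math.radians(lon_deg), math.radians(lat_deg)
    return lam, math.log(math.tan(math.pi / 4 + phi / 2))

def scale_factor(lat_deg):
    return 1.0 / math.cos(math.radians(lat_deg))  # sec(phi), E-W and N-S alike

for lat in (0, 30, 60, 80):
    x, y = mercator(0, lat)
    print(lat, round(y, 3), round(scale_factor(lat), 3))
# Scale: 1.0 at the equator, 2.0 at 60 degrees, ~5.76 at 80 degrees --
# hence the bar scales "by latitude" mentioned above.
```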
**Nintendo Zone** Nintendo Zone: Nintendo Zone was a download service and an extension of the DS Download Station. Users could access content, third-party data, and other services from a hotspot or download station. The service had demos of upcoming and currently available games and could have location-specific content. When the service debuted, users could also connect to the Nintendo Wi-Fi Connection and DSi Shop. The Nintendo Zone Viewer application allowed the Nintendo DSi and 3DS to detect and use the Nintendo Zone service. This application has been discontinued worldwide, but all other Nintendo Zone functionality remains. History: In collaboration with the restaurant chain McDonald's, the service originated in the Kantō, Chūkyō and Kansai regions of Japan. Over 1,000 DS Download Stations in Japan were planned to be converted into Nintendo Zones to enable SpotPass communications. Nintendo Zone content was available at over 29,000 locations in the United States. The service launched in Europe on April 25, 2012 with approximately 25,000 locations. Nintendo announced in July 2013 that the service would receive StreetPass enhancements. The StreetPass Relay Points system was introduced as part of a firmware update to Nintendo 3DS consoles in August 2013. When a 3DS owner visited a Nintendo Zone location, his or her StreetPass data would be stored there and then transferred when another owner with the same games visited. The viewer would always remain on, even when out of range of a Nintendo Zone. On December 8, 2011, a 3DS update began that allowed users to access new Nintendo Zones through a variety of new hotspots. A press release showed that Boingo Wireless teamed up with Nintendo of America to allow users automatic access to the zone within 42 Boingo-serviced airports in North America, offering a new range of encounters and features at no additional cost. History: In December 2013, a new feature was added in celebration of National StreetPass Weekend. This feature combined all Nintendo Zones within North America into one and allowed users who came across a Nintendo Zone to StreetPass and exchange data with other 3DS users from all around the continent, as opposed to only those 3DS users who had passed by that specific zone. Through this feature, users were able to StreetPass a maximum of 6 users at a time from other parts of North America. This feature helped raise awareness of Nintendo Zone and what it could offer to 3DS users. It encouraged 3DS users to access a nearby zone in order to meet users from other parts of the continent and to gather more StreetPass relay points. Through this feature, many users were able to exchange information and gameplay items with other users. It also encouraged 3DS users who owned the same game to initiate item exchanges that each user could take away once the event was over. Locations: North American Nintendo 3DS users were able to access the Nintendo Zone at the following places: Best Buy, Home Depot, and CrossIron Mills in Canada; and AT&T retail stores and McDonald's in the USA. Users could find nearby Nintendo Zones by searching for their city or postal code on the Nintendo website. DS Download Station: The DS Download Station was an in-store demo service launched by Nintendo in early 2006. As the name states, these were stations that could be used to download game demos and trailers to a Nintendo DS.
The download station consisted of a standard retail DS hidden inside a sealed box with a special DS Download Station cartridge inserted in it. The cartridge acted as a server from which customers could download new game demos or videos. When Nintendo released a new demo cartridge to retailers, they simply loaded the cartridge into the DS locked in the sealed box. A DS Download Station could distribute only one game at a time, but could send demos to up to fifteen DS systems simultaneously. Games could be downloaded by navigating to DS Download Play on the Nintendo DS's main menu and browsing for a DS Download Station in range. Players could choose from a wide range of games that refreshed every quarter of the year. The first game demos released were Tetris DS, Brain Age: Train Your Brain in Minutes a Day!, Mario Kart DS and more. From there, the station would load a simple menu and loader application to facilitate loading the demo of the player's choice. Demos remained on the DS until the power was turned off. The US and European versions of the DS Download Station were completely different from the Japanese version. The Japanese version used 3 PCs, each connected to an Internet connection. The difference in design was due to most retail locations in the US at the time not having an available Internet connection; therefore, a self-contained solution was necessary. There were nineteen different volumes of DS Download Station, with each volume differing in content between North America, Europe, and Japan. The DS Download Station has long since been discontinued, with all the display DS units resold in the normal retail market.
**Objective correlative** Objective correlative: In literary criticism, an objective correlative is a group of things or events which systematically represent emotions. Theory: The theory of the objective correlative as it relates to literature was largely developed through the writings of the poet and literary critic T. S. Eliot, who is associated with the literary group called the New Critics. Helping define the objective correlative, Eliot's essay "Hamlet and His Problems", republished in his book The Sacred Wood: Essays on Poetry and Criticism, discusses his view of Shakespeare's incomplete development of Hamlet's emotions in the play Hamlet. Eliot uses Lady Macbeth's state of mind as an example of the successful objective correlative ("The artistic 'inevitability' lies in this complete adequacy of the external to the emotion...."), in contrast to Hamlet. According to Eliot, the feelings of Hamlet are not sufficiently supported by the story and the other characters surrounding him. The objective correlative's purpose is to express the character's emotions by showing rather than describing feelings, as discussed earlier by Plato and referred to by Peter Barry in his book Beginning Theory: An Introduction to Literary and Cultural Theory as "...perhaps little more than the ancient distinction (first made by Plato) between mimesis and diegesis..." (28). According to Formalist critics, this action of creating an emotion through external factors and evidence linked together, thus forming an objective correlative, should produce an author's detachment from the depicted character and unite the emotion of the literary work. The "occasion" of Eugenio Montale is a further form of correlative. Theory: The works of Eliot were translated into Italian by Montale, who earned the 1975 Nobel Prize in Literature. Origin of terminology: The term was coined by the American painter and poet Washington Allston (1779–1843), and was introduced by T. S. Eliot, rather casually, into his essay "Hamlet and His Problems" (1919); its subsequent vogue in literary criticism, Eliot said, astonished him. Origin of terminology: In "Hamlet and His Problems", Eliot used the term exclusively to refer to his claimed artistic mechanism whereby emotion is evoked in the audience: The only way of expressing emotion in the form of art is by finding an "objective correlative"; in other words, a set of objects, a situation, a chain of events which shall be the formula of that particular emotion; such that when the external facts, which must terminate in sensory experience, are given, the emotion is immediately evoked. Origin of terminology: It seems to be in deference to this principle that Eliot famously described the play Hamlet as "most certainly an artistic failure": Eliot felt that Hamlet's strong emotions "exceeded the facts" of the play, which is to say they were not supported by an "objective correlative". He acknowledged that such a circumstance is "something which every person of sensibility has known," but felt that in trying to represent it dramatically, "Shakespeare tackled a problem which proved too much for him". Criticisms: One possible criticism of Eliot's theory includes his assumption that an author's intentions concerning expression will be understood in one way only. This point is stated by Balachandra Rajan, as quoted in David A.
Goldfarb's "New Reference Works in Literary Theory" with these words: "Eliot argues that there is a verbal formula for any given state of emotion which, when found and used, will evoke that state and no other." Examples: A famous haiku by Yosa Buson, entitled "The Piercing Chill I Feel", illustrates the use of the objective correlative within poetry: "The piercing chill I feel: / my dead wife's comb, in our bedroom, / under my heel..." Examples: In the Clint Eastwood movie Jersey Boys, songwriter Bob Gaudio of The 4 Seasons is asked who the girl is in his song "Cry for Me". He makes reference to T.S. Eliot's concept of the objective correlative, answering that the subject is every girl, or any girl. In invoking this concept, the songwriter grants himself the literary license to step outside the scope of his personal experience, to conjecture about the emotions and responses inherent in the situation, and to present a third-party perspective in a first-person voice.
**Distearoylphosphatidylcholine** Distearoylphosphatidylcholine: Distearoylphosphatidylcholine (DSPC) is a phosphatidylcholine, a kind of phospholipid. It is a natural constituent of cell membranes; for example, soybean phosphatidylcholines are mostly various 18-carbon phosphatidylcholines (including a minority of saturated DSPC), and their hydrogenation yields about 85% DSPC. It can be used to prepare the lipid nanoparticles used in mRNA vaccines; in particular, it forms part of the drug delivery system for the Moderna and Pfizer COVID-19 vaccines.
**Combined spinal and epidural anaesthesia** Combined spinal and epidural anaesthesia: Combined spinal and epidural anaesthesia is a regional anaesthetic technique which combines the benefits of both spinal anaesthesia and epidural anaesthesia and analgesia. The spinal component gives a rapid onset of a predictable block. The indwelling epidural catheter gives the ability to provide long-lasting analgesia and to titrate the dose given to the desired effect. Indications: This technique also allows for better postoperative pain relief. The epidural catheter may be left in place for up to 72 hours if required. Indications: In labouring women, the onset of analgesia is more rapid with combined spinal and epidural anaesthesia than with epidural analgesia. Combined spinal and epidural anaesthesia in labour was formerly thought to enable women to mobilise for longer compared with epidural analgesia, but this is not supported by a recent Cochrane review. In the UK, the National Institute for Health and Care Excellence (September 2007) recommends combined spinal and epidural anaesthesia for women who require rapid onset of analgesia in labour. It further recommends the use of bupivacaine and fentanyl to establish the block. Insertion technique: Combined spinal-epidural anaesthesia is a highly specialised technique which should only be administered by a properly trained anaesthetic practitioner working with full aseptic technique. The needle-through-needle technique involves the introduction of a Tuohy needle (epidural needle) into the epidural space. The standard technique of loss of resistance to injection may be employed. A long, fine spinal needle (25G) is then introduced via the lumen of the epidural needle and through the dura mater, into the subarachnoid space. A small pop is felt as the dura is punctured, and the correct position is confirmed when cerebrospinal fluid can be seen dripping from the spinal needle. A small dose of local anaesthetic (e.g. bupivacaine) is then instilled. An opioid such as fentanyl may also be given if desired. The spinal needle is then withdrawn and the epidural catheter inserted in the standard manner. Alternatively, a two-level approach may be undertaken. The epidural space is first located in the standard manner. Then, at another level, a standard spinal is performed. Finally, the epidural catheter is threaded through the Tuohy needle. Maintenance technique: When the epidural catheter has been inserted, the techniques of maintenance of block are very similar to those of epidural anaesthesia. The intensity of the block may be adjusted as desired. Large doses of local anaesthetic can produce sufficient anaesthesia for surgery. Alternatively, smaller doses can provide analgesia, e.g. in the postoperative period. Equipment: A standard epidural pack may be used with a standard spinal needle. However, a spinal needle of standard length (90 mm) may be insufficient to reach the subarachnoid space through the Tuohy needle. An extra-long needle (e.g., 120 mm) may be required. Alternatively, several manufacturers produce packs containing both a spinal and an epidural needle which are slightly modified to fit together. Complications: Combined spinal and epidural anaesthesia in labouring women is associated with more pruritus if fentanyl (25 μg) is given intrathecally than with low-dose epidural analgesia.
However, no difference has been found in the incidence of post-dural-puncture headache, requirement for epidural blood patch, or maternal hypotension. It is unknown whether infections are more likely to occur during combined spinal and epidural anaesthesia than with spinal or epidural techniques alone. Post-dural-puncture headache has a similar incidence rate (0.8 to 2.5%) to that of the conventional epidural.
**Chrysanthemum bonsai** Chrysanthemum bonsai: Chrysanthemum bonsai (Japanese: 菊の盆栽, romanized: Kiku no bonsai, lit. 'Chrysanthemum tray planting') is a Japanese art form using cultivation techniques to produce, in containers, chrysanthemum flowers that mimic the shape and scale of full-size trees, called bonsai. Cultivation and care: Bonsai cultivation and care requires techniques and tools that are specialized to support the growth and maintenance of the flowers in small containers. There are several cultivated varieties of chrysanthemum that can be trained into many of the traditional bonsai styles associated with woody-trunked trees and shrubs. But since chrysanthemums rarely grow old enough to develop wood, deadwood bonsai techniques may also be used. Chrysanthemums are perennials, and while it is possible to keep a chrysanthemum bonsai alive for a number of years (old wood), it is more likely that the bonsai will be 'finished' after all the blooms have faded. The chrysanthemum bonsai artist must complete all design work in fewer than ten months. Most chrysanthemum bonsai artists in the northern latitudes of the United States start the training of their bonsai in April and are finished by the middle of September. Traditionally in Japan, chrysanthemum exhibitions showcase the different bonsai forms; these take place in autumn, around October and November. Styles: Various bonsai styles exist, such as the cascade style, the clinging-to-a-rock style, and the forest style.
**JWH-185** JWH-185: JWH-185 is a synthetic cannabinoid receptor ligand from the naphthoylindole family. It is the carbonyl-reduced derivative of the related compound JWH-081. The binding affinity of JWH-185 for the CB1 receptor is reported as Ki = 17 ± 3 nM. In the United States, all CB1 receptor agonists of the 3-(1-naphthylmethane)indole class such as JWH-185 are Schedule I Controlled Substances.
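For a sense of what that affinity means thermodynamically, the reported Ki can be converted to a standard binding free energy with the textbook relation ΔG° = RT ln Ki (a back-of-envelope conversion, not a figure from the source): ΔG° = (8.314 J mol⁻¹ K⁻¹)(298 K) × ln(1.7 × 10⁻⁸) ≈ −44 kJ/mol, i.e. roughly −10.6 kcal/mol of binding free energy at the CB1 receptor.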
**TTUSB** TTUSB: The Numark TTUSB is a belt-driven turntable with a USB audio interface. This allows the user to transfer music from a record onto a computer, from which it can then be burnt onto an audio CD. Introduced in December 2005, the TTUSB was the first turntable of its kind to have been released to the consumer market. A near-identical model called the iTTUSB was also manufactured under the Ion Audio brand name. Product features: anti-skating control; 33-1/3 and 45 RPM playback speeds; ±10% adjustable pitch control; RCA line outputs; USB output; 1/8" stereo minijack input; moving magnet phono cartridge.
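Capturing audio over the USB interface needs no vendor software: assuming the turntable enumerates as an ordinary USB audio input device, any generic recording tool can use it. A minimal sketch with the third-party Python libraries sounddevice and soundfile (the device choice, duration, and filename are illustrative assumptions, not details from the source):

import sounddevice as sd
import soundfile as sf

SECONDS = 30   # illustrative capture length; set to the length of the track
RATE = 44100   # CD-quality sample rate, ready for burning to an audio CD

# Record stereo audio from the default input device (assumed here to be
# the turntable's USB interface), then write a 16-bit WAV file.
audio = sd.rec(int(SECONDS * RATE), samplerate=RATE, channels=2)
sd.wait()      # block until the capture finishes
sf.write("vinyl_capture.wav", audio, RATE, subtype="PCM_16")

Running this while the record plays produces a WAV file that can then be burnt to CD, matching the workflow described above.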
**FtsZ** FtsZ: FtsZ is a protein encoded by the ftsZ gene that assembles into a ring at the future site of bacterial cell division (also called the Z ring). FtsZ is a prokaryotic homologue of the eukaryotic protein tubulin. The initials FtsZ mean "Filamenting temperature-sensitive mutant Z." The hypothesis was that cell division mutants of E. coli would grow as filaments due to the inability of the daughter cells to separate from one another. FtsZ is found in almost all bacteria, many archaea, all chloroplasts and some mitochondria, where it is essential for cell division. FtsZ assembles the cytoskeletal scaffold of the Z ring that, along with additional proteins, constricts to divide the cell in two. History: In the 1960s scientists screened for temperature-sensitive mutations that blocked cell division at 42 °C. The mutant cells divided normally at 30 °C, but failed to divide at 42 °C. Continued growth without division produced long filamentous cells (Filamenting temperature sensitive). Several such mutants were discovered and mapped to a locus originally named ftsA, which could have been one or more genes. In 1980 Lutkenhaus and Donachie showed that several of these mutations mapped to one gene, ftsA, but one well-characterized mutant, PAT84, originally discovered by Hirota et al., mapped to a separate, adjacent gene. They named this cell division gene ftsZ. In 1991 Bi and Lutkenhaus used immunogold electron microscopy to show that FtsZ localized to the invaginating septum at midcell. Subsequently, the Losick and Margolin groups used immunofluorescence microscopy and GFP fusions to show that FtsZ assembled Z rings early in the cell cycle, well before the septum began to constrict. Other division proteins then assemble onto the Z ring, and constriction occurs in the last part of the cell cycle. History: In 1992–93, three labs independently discovered that FtsZ was related to eukaryotic tubulin, the protein subunit that assembles into microtubules. This was the first discovery that bacteria have homologs of eukaryotic cytoskeletal proteins. Later work showed that FtsZ was present in, and essential for, cell division in almost all bacteria and in many but not all archaea. History: Mitochondria and chloroplasts are eukaryotic organelles that originated as bacterial endosymbionts, so there was much interest in whether they use FtsZ for division. Chloroplast FtsZ was first discovered by Osteryoung, and it is now known that all chloroplasts use FtsZ for division. Mitochondrial FtsZ was discovered by Beech in an alga; FtsZ is used for mitochondrial division in some eukaryotes, while others have replaced it with a dynamin-based machinery. History: In 2014, scientists identified two FtsZ homologs in archaea, FtsZ1 and FtsZ2. Function: During cell division, FtsZ is the first protein to move to the division site, and is essential for recruiting other proteins that produce a new cell wall (septum) between the dividing cells. FtsZ's role in cell division is analogous to that of actin in eukaryotic cell division, but, unlike the actin-myosin ring in eukaryotes, FtsZ has no known motor protein associated with it. Cell wall synthesis may push the cell membrane from outside, providing the force for cytokinesis. Supporting this, in E. coli the rate of division is affected by mutations in cell wall synthesis.
Alternatively, FtsZ may pull the membrane from the inside, based on the demonstration by Osawa (2009) of the protein's contractile force on liposomes with no other proteins present. Erickson (2009) proposed an explanation for the evolutionary mystery of how the roles of tubulin-like and actin-like proteins in cell division became reversed. The use of the FtsZ ring in dividing chloroplasts and some mitochondria further establishes their prokaryotic ancestry. L-form bacteria that lack a cell wall do not require FtsZ for division, which implies that bacteria may have retained components of an ancestral mode of cell division. Much is known about the dynamic polymerization activities of tubulin and microtubules, but little is known about these activities in FtsZ. While it is known that single-stranded tubulin protofilaments form into 13-stranded microtubules, the multistranded structure of the FtsZ-containing Z-ring is not known; it is only speculated that the structure consists of overlapping protofilaments. Nevertheless, recent work with purified FtsZ on supported lipid bilayers, as well as imaging of FtsZ in living bacterial cells, revealed that FtsZ protofilaments have polarity and move in one direction by treadmilling (see also below). Function: Recently, proteins similar to tubulin and FtsZ have been discovered in large plasmids found in Bacillus species. They are believed to function as components of segrosomes, which are multiprotein complexes that partition chromosomes/plasmids in bacteria. The plasmid homologs of tubulin/FtsZ seem to have conserved the ability to polymerize into filaments. Function: The contractile ring (the "Z ring") FtsZ has the ability to bind GTP and also exhibits a GTPase domain that allows it to hydrolyze GTP to GDP and a phosphate group. In vivo, FtsZ forms filaments with a repeating arrangement of subunits, all arranged head-to-tail. These filaments form a ring around the longitudinal midpoint, or septum, of the cell. This ring is called the Z-ring. Function: The GTP-hydrolyzing activity of the protein is not essential to the formation of filaments or cell division. Mutants defective in GTPase activity often still divide, but sometimes form twisted and disordered septa. It is unclear whether FtsZ actually provides the physical force that results in division or serves as a scaffold for other proteins to execute division. Function: There are two models for how FtsZ might generate a constriction force. One model is based on the observation that FtsZ protofilaments can be straight or curved; the transition from straight to curved is suggested to generate a bending force on the membrane. Another model is based on sliding protofilaments. Computer models and in vivo measurements suggest that single FtsZ filaments cannot sustain a length of more than about 30 subunits. In this model, the FtsZ scission force comes from the relative lateral movement of subunits: lines of FtsZ would line up in parallel and pull on each other, creating a "cord" of many strings that tightens itself. Function: In other models, FtsZ does not provide the contractile force but provides the cell a spatial scaffold for other proteins to execute the division of the cell. This is akin to construction workers erecting a temporary structure to access hard-to-reach places of a building. The temporary structure allows unfettered access and ensures that the workers can reach all places.
If the temporary structure is not correctly built, the workers will not be able to reach certain places, and the building will be deficient. Function: The scaffold theory is supported by evidence that the formation of the ring and its localization to the membrane require the concerted action of a number of accessory proteins. ZipA or the actin homologue FtsA permits initial FtsZ localization to the membrane. Following localization to the membrane, division proteins of the Fts family are recruited for ring assembly. Many of these proteins direct the synthesis of the new division septum at midcell (FtsI, FtsW) or regulate the activity of this synthesis (FtsQ, FtsL, FtsB, FtsN). Function: Recent super-resolution imaging in several species supports a dynamic scaffold model, in which small clusters of FtsZ protofilaments or protofilament bundles move unidirectionally around the ring's circumference by treadmilling, anchored to the membrane by FtsA and other FtsZ-specific membrane tethers. The speed of treadmilling depends on the rate of GTP hydrolysis within the FtsZ protofilaments, but in Escherichia coli, synthesis of the division septum remains the rate-limiting step for cytokinesis. The treadmilling action of FtsZ is required for proper synthesis of the division septum by septal peptidoglycan synthesis enzymes, suggesting that these enzymes can track the growing ends of the filaments. Function: Septal localization and intracellular signaling The formation of the Z-ring closely coincides with cellular processes associated with replication. Z-ring formation coincides with the termination of genome replication in E. coli and 70% of chromosomal replication in B. subtilis. The timing of Z-ring formation suggests the possibility of a spatial or temporal signal that permits the formation of FtsZ filaments. In Escherichia coli, at least two negative regulators of FtsZ assembly form a bipolar gradient, such that the concentration of active FtsZ required for FtsZ assembly is highest at mid-cell between the two segregating chromosomes, and lowest at the poles and over the chromosomes. This type of regulation seems to occur in other species such as Bacillus subtilis and Caulobacter crescentus. However, other species including Streptococcus pneumoniae and Myxococcus xanthus seem to use positive regulators that stimulate FtsZ assembly at mid-cell. Function: Communicating distress FtsZ polymerization is also linked to stressors like DNA damage. DNA damage induces a variety of proteins to be manufactured, one of them called SulA. SulA prevents the polymerization and GTPase activity of FtsZ. SulA accomplishes this task by binding to self-recognizing FtsZ sites. By sequestering FtsZ, the cell can directly link DNA damage to the inhibition of cell division. Function: Preventing DNA damage Like SulA, there are other mechanisms that prevent cell division that would result in disrupted genetic information being sent to daughter cells. So far, two proteins have been identified in E. coli and B. subtilis that prevent division over the nucleoid region: Noc and SlmA. Noc gene knockouts result in cells that divide without respect to the nucleoid region, resulting in its asymmetrical partitioning between the daughter cells. The mechanism is not well understood, but is thought to involve sequestration of FtsZ, preventing polymerization over the nucleoid region.
The mechanism used by SlmA to inhibit FtsZ polymerization over the nucleoid is better understood, and involves two separate steps: one domain of SlmA binds to an FtsZ polymer, then a separate domain of SlmA severs the polymer. A similar mechanism is thought to be used by MinC, another inhibitor of FtsZ polymerization involved in positioning of the FtsZ ring. Clinical significance: The number of multidrug-resistant bacterial strains is currently increasing; thus, the determination of drug targets for the development of novel antimicrobial drugs is urgently needed. The potential role of FtsZ in the blockage of cell division, together with its high degree of conservation across bacterial species, makes FtsZ a highly attractive target for developing novel antibiotics. Researchers have been working on synthetic molecules and natural products as inhibitors of FtsZ. The spontaneous self-assembly of FtsZ can also be used in nanotechnology to fabricate metal nanowires.
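The treadmilling behaviour described above, where subunits add at a filament's plus end and leave from its minus end so that the structure translocates while no individual subunit ever moves, can be illustrated with a toy simulation (a conceptual sketch with made-up rates, not a model from the FtsZ literature):

import random

# Toy treadmilling filament on a 1-D lattice: subunits join at the plus
# end with probability P_ON and leave the minus end with probability
# P_OFF per time step. Length stays roughly constant while the filament
# as a whole advances; the turnover rate sets the speed, loosely
# analogous to GTP-hydrolysis-limited treadmilling in FtsZ.
P_ON, P_OFF = 0.8, 0.8
minus_end, plus_end = 0, 30    # filament initially spans 30 subunit sites

random.seed(1)
for step in range(1, 201):
    if random.random() < P_ON:
        plus_end += 1          # growth at the plus end
    if random.random() < P_OFF and plus_end - minus_end > 1:
        minus_end += 1         # shrinkage at the minus end
    if step % 50 == 0:
        print(f"step {step}: length {plus_end - minus_end}, "
              f"centre {(plus_end + minus_end) / 2:.1f}")

Over 200 steps the printed centre drifts steadily in one direction while the length fluctuates around its starting value, which is the defining signature of treadmilling.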
**Niobium(V) chloride** Niobium(V) chloride: Niobium(V) chloride, also known as niobium pentachloride, is a yellow crystalline solid. It hydrolyzes in air, and samples are often contaminated with small amounts of NbOCl3. It is often used as a precursor to other compounds of niobium. NbCl5 may be purified by sublimation. Structure and properties: Niobium(V) chloride forms chloro-bridged dimers in the solid state. Each niobium centre is six-coordinate, but the octahedral coordination is significantly distorted. The equatorial niobium–chlorine bond lengths are 225 pm (terminal) and 256 pm (bridging), whilst the axial niobium–chlorine bonds are 229.2 pm and are deflected inwards to form an angle of 83.7° with the equatorial plane of the molecule. The Nb–Cl–Nb angle at the bridge is 101.3°. The Nb–Nb distance is 398.8 pm, too long for any metal–metal interaction. NbBr5, TaCl5 and TaBr5 are isostructural with NbCl5, but NbI5 and TaI5 have different structures. Preparation: Industrially, niobium pentachloride is obtained by direct chlorination of niobium metal at 300 to 350 °C: 2 Nb + 5 Cl2 → 2 NbCl5. In the laboratory, niobium pentachloride is often prepared from Nb2O5, the main challenge being incomplete reaction to give NbOCl3. The conversion can be effected with thionyl chloride: Nb2O5 + 5 SOCl2 → 2 NbCl5 + 5 SO2. It can also be prepared by chlorination of niobium pentoxide in the presence of carbon at 300 °C. Uses: Niobium(V) chloride is the main precursor to the alkoxides of niobium, which find uses in sol-gel processing. It is also the precursor to many other Nb-containing reagents, including most organoniobium compounds. In organic synthesis, NbCl5 is a rather specialized Lewis acid for activating alkenes in the carbonyl-ene reaction and the Diels-Alder reaction. Niobium chloride can also generate N-acyliminium compounds from certain pyrrolidines, which are substrates for nucleophiles such as allyltrimethylsilane, indole, or the silyl enol ether of benzophenone.
**Molecular knot** Molecular knot: In chemistry, a molecular knot is a mechanically interlocked molecular architecture that is analogous to a macroscopic knot. Naturally-forming molecular knots are found in organic molecules like DNA, RNA, and proteins. It is not certain that naturally occurring knots are evolutionarily advantageous to nucleic acids or proteins, though knotting is thought to play a role in the structure, stability, and function of knotted biological molecules. The mechanism by which knots naturally form in molecules, and the mechanism by which a molecule is stabilized or improved by knotting, is ambiguous. The study of molecular knots involves the formation and applications of both naturally occurring and chemically synthesized molecular knots. Applying chemical topology and knot theory to molecular knots allows biologists to better understand the structures and synthesis of knotted organic molecules. The term knotane was coined by Vögtle et al. in 2000 to describe molecular knots by analogy with rotaxanes and catenanes, which are other mechanically interlocked molecular architectures. The term has not been broadly adopted by chemists and has not been adopted by IUPAC. Naturally occurring molecular knots: Organic molecules containing knots may fall into the categories of slipknots or pseudo-knots. They are not considered mathematical knots because they are not a closed curve, but rather a knot that exists within an otherwise linear chain, with termini at each end. Knotted proteins are thought to form molecular knots during their tertiary structure folding process, and knotted nucleic acids generally form molecular knots during genomic replication and transcription, though the details of the knotting mechanism remain disputed and ambiguous. Molecular simulations are fundamental to the research on molecular knotting mechanisms. Naturally occurring molecular knots: Knotted DNA was first found by Liu et al. in 1981, in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has also been found to form knots. Naturally knotted RNA has not yet been reported. A number of proteins containing naturally occurring molecular knots have been identified. The knot types found to be naturally occurring in proteins are the +31, −31, 41, −52, and +61 knots, as identified in the KnotProt database of known knotted proteins. Chemically synthesized molecular knots: Several synthetic molecular knots have been reported. Knot types that have been successfully synthesized in molecules are the 31, 41, 51 and 819 knots. Though the −52 and +61 knots have been found to occur naturally in knotted molecules, they have not been successfully synthesized. Small-molecule composite knots have also not yet been synthesized. Artificial DNA, RNA, and protein knots have been successfully synthesized. DNA is a particularly useful model for synthetic knot synthesis, as the structure naturally forms interlocked structures and can be easily manipulated to control precisely the raveling necessary to form knots. Molecular knots are often synthesized with the help of crucial metal ion ligands. History: The first researcher to suggest the existence of a molecular knot in a protein was Jane Richardson in 1977, who reported that carbonic anhydrase B (CAB) exhibited apparent knotting during her survey of various proteins' topological behavior. However, the researcher generally credited with the discovery of the first knotted protein is Marc L.
Mansfield in 1994, as he was the first to specifically investigate the occurrence of knots in proteins and confirm the existence of the trefoil knot in CAB. Knotted DNA was first found by Liu et al. in 1981, in single-stranded, circular, bacterial DNA, though double-stranded circular DNA has also been found to form knots. In 1989, Sauvage and coworkers reported the first synthetic knotted molecule: a trefoil synthesized via a double-helix complex with the aid of Cu+ ions. Vögtle et al. were the first to describe molecular knots as knotanes, in 2000. Also in 2000, William Taylor created an alternative computational method to analyze protein knotting that set the termini at a fixed point far enough away from the knotted component of the molecule that the knot type could be well-defined. In this study, Taylor discovered a deep 41 knot in a protein, confirming the existence of deeply knotted proteins. History: In 2007, Eric Yeates reported the identification of a molecular slipknot: a molecule that contains knotted subchains even though its backbone chain as a whole is unknotted and so does not contain completely knotted structures easily detectable by computational models. Mathematically, slipknots are difficult to analyze because they are not recognized in the examination of the complete structure. History: A pentafoil knot prepared using dynamic covalent chemistry was synthesized by Ayme et al. in 2012, which at the time was the most complex non-DNA molecular knot prepared to date. Later, in 2016, a fully organic pentafoil knot was also reported, including the very first use of a molecular knot to allosterically regulate catalysis. In January 2017, an 819 knot was synthesized by David Leigh's group, making it the most complex molecular knot synthesized to date. An important development in knot theory is allowing for intra-chain contacts within an entangled molecular chain. Circuit topology has recently emerged as a topology framework that formalises the arrangement of contacts as well as chain crossings in a folded linear chain. As a complementary approach, Colin Adams et al. developed a singular knot theory that is applicable to folded linear chains with intramolecular interactions. Applications: Many synthetic molecular knots have a distinct globular shape and dimensions that make them potential building blocks in nanotechnology.
**Farid Melgani** Farid Melgani: Farid Melgani is an engineer at the University of Trento, Italy. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for his contributions to image analysis in remote sensing.
**Bromocyclopentane** Bromocyclopentane: Bromocyclopentane is an alkyl halide with the chemical formula C5H9Br. It is a colorless to light yellow liquid at standard temperature and pressure. Bromocyclopentane is a building block used in the synthesis of filaminast. It also reacts with magnesium turnings in dry tetrahydrofuran to form the cyclopentyl Grignard reagent (C5H9Br + Mg → C5H9MgBr), a key precursor in the synthesis of ketamine.
**CeVIO** CeVIO: CeVIO is the collective name of a range of computer software projects, including Vision (digital signage) and Creative Studio (audio creation software). CeVIO was made to assist in the creation of user-generated content. It works via a text-to-speech method. Overview: The audio creation software provides speech and song synthesis; Speech and Song are the program's main features. The Speech portion offers a large dictionary of words from which Sato Sasara, Suzuki Tsudumi, and Takahashi speak with accurate Japanese pronunciation, although the option to manually edit pronunciations exists as well. The Speech portion was created with the help of the HTS method, which is well known in the VOCALOID fanbase for underpinning the online synthesizers Sinsy, Open J-Talk, Renoid Player, and many more. The Speech portion offers different types of voices for each character. Overview: CeVIO Creative Studio's speech intonation can be controlled with three parameters: cheery, angry, and sad. Other things can be controlled as well, such as the volume and speed of consonants and vowels. Overview: The software was initially released as "CeVIO Creative Studio FREE" with Sato Sasara as the only voice. One was free to create tracks, insert lyrics, and add breaths to the end of notes, though added breaths would get caught up in her automatically set ones. Anything else required external software, which did little to smooth the choppiness of her vowel transitions. After the release of "CeVIO Creative Studio S" on 14 November 2014, the FREE version was replaced by a one-month free trial of the full version; the free demo version was no longer available from November 19, 2014. Overview: In the full version, more options for fine-tuning became available, such as fine-tuning of amplitude timing, which allows the choppiness to be edited. In addition to pitch, pitch-bends can now also be adjusted, along with vibrato, vibrato timing, volume and dynamics. A gender factor is also available, which makes the voice sound more or less mature. The option to import MIDI and .xml files is still present. The file extension also changed from the free version's ".ccs" to ".csv". Products: CeVIO Project Sato Sasara (さとうささら) (Free, CCS, AI, VS), a female vocal for CeVIO Free, CeVIO Creative Studio, CeVIO AI and VoiSona capable of speech and singing. Suzuki Tsudumi (すずきつづみ) (CCS, AI), a female vocal for CeVIO Creative Studio and CeVIO AI capable of speech and singing. Takahashi (タカハシ) (CCS), a male vocal for CeVIO Creative Studio capable only of speech. He has an upcoming Talk voicebank for CeVIO AI. Products: 1st PLACE ONE (オネ) (CCS, AI) is a female vocal for CeVIO Creative Studio and CeVIO AI capable of speech and singing. She is the second vocal in the "- ARIA ON THE PLANETES -" project, the first being the Vocaloid IA. She was released on January 27, 2015, with a speaking voicebank only. A singing voicebank was later released on May 22, 2015. Products: IA (イア) (CCS, AI) is a female vocal for CeVIO Creative Studio and CeVIO AI capable of speech and singing, originally released for VOCALOID3 in the "- ARIA ON THE PLANETES -" project. A talking CeVIO vocal was received in March 2017, named "IA TALK -ARIA ON THE PLANETES". On June 29, 2018, IA English C was released, with a Power and a Natural bank. It was also confirmed that a talking English IA for CeVIO was in development.
Products: Akasaki Minato (赤咲湊) (CCS) is a male vocal for CeVIO Creative Studio only capable of singing; he is the first member of the "Color Voice Series", a series of singing-only voicebanks by XING Inc. He is illustrated as a 25-year-old male representing the color red. He was released alongside Midorizaki Kasumi on February 19, 2015. The vocal is a counterpart to Kizaki Airi. Products: Midorizaki Kasumi (緑咲香澄) (CCS) is a female vocal for CeVIO Creative Studio only capable of singing; she is the second member of the series. She is illustrated as a 27-year-old female representing the color green. She was released alongside Akasaki Minato on February 19, 2015. The vocal is a counterpart to Shirosaki Yuudai. Ginsaki Yamato (銀咲大和) (CCS) is a male vocal for CeVIO Creative Studio only capable of singing; he is the third member of the series. He is illustrated as a 50-year-old male representing the color silver. He was released alongside Kinzaki Koharu on March 19, 2015. The vocal is a counterpart to Kinzaki Koharu. Kinzaki Koharu (金咲小春) (CCS) is a female vocal for CeVIO Creative Studio only capable of singing; she is the fourth member of the series. She is illustrated as a 52-year-old female representing the color gold. She was released alongside Ginsaki Yamato on March 19, 2015. The vocal is a counterpart to Ginsaki Yamato. Shirosaki Yuudai (白咲優大) (CCS) is a male vocal for CeVIO Creative Studio only capable of singing; he is the fifth member of the series. He is illustrated as a 20-year-old male representing the color white. He was released alongside Kizaki Airi on April 23, 2015. The vocal is a counterpart to Midorizaki Kasumi. Kizaki Airi (黄咲愛里) (CCS) is a female vocal for CeVIO Creative Studio only capable of singing; she is the sixth member of the series. She is illustrated as an 18-year-old female representing the color yellow. She was released alongside Shirosaki Yuudai on April 23, 2015. The vocal is a counterpart to Akasaki Minato. Teichiku Records HAL-O-ROID (ハルオロイド・ミナミ) (CCS) is a male vocal for CeVIO Creative Studio only capable of singing; he is a free vocal for the software with the voice of the deceased Enka singer Haruo Minami. KAMITSUBAKI STUDIO KAFU (可不) (AI) is a female vocal for CeVIO AI only capable of singing; she is the first member in a series of singing voicebanks by Kamitsubaki Studio. She is a female AI voicebank described as a "musical isotope" of singer and VTuber KAF. KAFU was released on July 7, 2021. SEKAI (星界) (AI) is a female vocal for CeVIO AI only capable of singing; she is the second member of the series. She is a female AI voicebank described as a "musical isotope" of virtual singer Isekaijoucho. SEKAI was released on April 29, 2022. RIME (裏命) (AI) is a female vocal for CeVIO AI only capable of singing; she is the third member of the series. She is a female AI voicebank described as a "musical isotope" of virtual singer RIM. RIME was released on October 25, 2022. COKO (狐子) (AI) is a female vocal for CeVIO AI only capable of singing; she is the fourth member of the series. She is a female AI voicebank described as a "musical isotope" of virtual singer KOKO. COKO was released on January 25, 2023. HARU (羽累) (AI) is a female vocal for CeVIO AI only capable of singing; she will be the fifth member of the series. She is a female AI voicebank described as a "musical isotope" of virtual rapper/singer Harusaruhi. HARU is scheduled to be released on August 12, 2023.
Kizuna AI #kzn (AI, VS) is a female vocal for CeVIO AI and VoiSona only capable of singing; she is an AI vocal based on VTuber Kizuna AI. It was announced on February 25, 2022. It had 24-hour limited pre-sales on several dates, but the official release date is unannounced. Products: Bushiroad POPY (AI) is a female vocal for CeVIO AI only capable of singing; she is one of the two voicebanks created as part of a collaboration between the BanG Dream! franchise and CeVIO. POPY is based on the character Kasumi Toyama, vocalist of the BanG Dream! band Poppin'Party. POPY is an AI vocal, with data recorded from previous Poppin'Party songs. Her voice is provided by Kasumi's voice actress Aimi. POPY was released on December 21, 2022. Products: ROSE (AI) is a female vocal for CeVIO AI only capable of singing; she is the second vocal created for the BanG Dream! x CeVIO project. She is based on the character Yukina Minato, vocalist of Roselia, with AI data recorded from previous Roselia songs. Her voice is provided by Yukina's voice actress Aina Aiba. ROSE was released on December 21, 2022. Products: SSS Tohoku Kiritan (東北きりたん) (AI) is a female vocal for CeVIO AI only capable of singing. Tohoku Itako (東北イタコ) (AI) is a female vocal for CeVIO AI only capable of singing. Tohoku Zunko (東北ずん子) (AI) is a female vocal for CeVIO AI only capable of singing. VOCALOMAKETS Yuzuki Yukari (結月ゆかり) (AI, VS) is a female vocal for CeVIO AI and VoiSona only capable of singing. TOKYO6 ENTERTAINMENT Koharu Rikka (小春六花) (AI) is a female vocal for CeVIO AI only capable of speech. Natsuki Karin (夏色花梨) (AI) is a female vocal for CeVIO AI only capable of speech. Hanakuma Chihuyu (花隈千冬) (AI) is a female vocal for CeVIO AI only capable of speech. INCS toenter Ci flower (AI) is a female vocal for CeVIO AI only capable of singing. AH-Software Tsurumaki Maki (弦巻マキ) (AI) is a female vocal for CeVIO AI only capable of speech. ZAN-SHIN ROSA (ロサ) (AI) is a female vocal for CeVIO AI only capable of speech. U-Stella FEE-Chan (CCD-0500, フィーちゃん) (AI) is a female vocal for CeVIO AI only capable of speech. She was developed by U-Stella in collaboration with Techno-Speech. UNI-Chan (CCD-0001, ユニちゃん) (AI) is an upcoming female vocal for CeVIO AI. She is to be developed by U-Stella in collaboration with Techno-Speech. Gasoline Alley Futaba Minato (双葉湊音) (AI) is a female vocal for CeVIO AI only capable of singing; she is a young girl's singing voicebank by Gasoline Alley. Her voice is provided by Sachika Misawa. She was released on December 2, 2022. candy cream algorithm Kanato Mell (奏兎める) (AI) is a female vocal for CeVIO AI only capable of singing. Sony Music Entertainment Japan ANИA (ANNA) (AI) is an upcoming vocal for CeVIO AI only capable of singing; she is developed by Sony Music Entertainment Japan in collaboration with Techno-Speech. Bandai Namco Entertainment Reml (Remuru, りむる) (AI) is an upcoming female vocal for CeVIO AI only capable of singing. Techno Speech Chis-A (知声) (CP, VS) is a female vocal for CeVIO Pro and VoiSona only capable of singing. Kirune (機流音) (VS) is a male vocal for VoiSona only capable of singing. Aisuu (あいす) (VS) is a female vocal for VoiSona only capable of singing. MYK-IV (VS) is a male vocal for VoiSona only capable of singing. Unreleased ALYS (CCS): ALYS is developed by VoxWave. After failing to produce a VOCALOID voicebank for ALYS, VoxWave decided to try to produce a French and a Japanese vocal for CeVIO Creative Studio.
For unknown reasons the vocals were never produced for CCS, but were instead later released for Plogue's Alter/Ego. Products: Chinese Female Vocal (CCS, AI): the Chinese Female Vocal is produced by Techno-Speech. In 2018 an article was published by Techno-Speech showcasing Japanese, English and Chinese samples of their then-upcoming AI technology compared to their technology at the time. The Japanese voice shown was Sato Sasara, the English voice was IA, and the Chinese voice was a never-before-seen new vocal. There have been no updates on her since the article was published. Products: Luo Tianyi (洛天依) (CCS): Luo Tianyi is developed by Shanghai Henian. Her name was found in CeVIO Creative Studio's files, where she was listed as a female Chinese Talk bank. The details of her cancellation are unknown. Sara (CCS): Sara is developed by XING Inc. Her name was found in CeVIO Creative Studio's files, where she was listed as a female English Song bank. The details of her cancellation are unknown. Products: Carter (CCS): Carter is developed by XING Inc. His name was found in CeVIO Creative Studio's files, where he was listed as a male English Song bank. The details of his cancellation are unknown. Yuri (CCS) is a female vocal for CeVIO Creative Studio capable only of singing, used in the "Copy & Loid" game. Miroid (CCS) is a female vocal for CeVIO Creative Studio capable only of singing, used in the "Copy & Loid" game. Virtual Wakataisho (CP). 2023 Female Talk Voice (AI): on June 2, 2023, Techno-Speech published an article about the development of their new vocoder. There were various short demonstrations of the vocoder within the article, one of which was an unknown female talk voice. There are many other vocals found in CeVIO Creative Studio's and CeVIO AI's files that don't have a name and/or have little to no information known about them. Miscellaneous Yoko (謡子) (CCS) is a female vocal for CeVIO Creative Studio inside a DAW called Kawai Score Maker. Yokun (谣君) (CCS) is a male vocal for CeVIO Creative Studio inside a DAW called Kawai Score Maker. Reception: In 2013 it won the Microsoft Innovation Award 2013. It also won an award at the CEDEC Awards 2013 event, after receiving 300,000 downloads.
**Flow-restricted, oxygen-powered ventilation device** Flow-restricted, oxygen-powered ventilation device: A flow-restricted, oxygen-powered ventilation device (FROPVD), also referred to as a manually triggered ventilation device (MTV), is used to assist ventilation in apneic or hypoventilating patients, although these devices can also be used to provide supplemental oxygen to breathing patients. It can be used on patients with spontaneous breaths, as there is a valve that opens automatically on inspiration. When ventilating a patient with a FROPVD, you must ensure an adequate, constant oxygen supply is available; once the oxygen source is depleted, the device can no longer be used because it is driven entirely by the oxygen source. The FROPVD delivers 100% oxygen at a peak flow rate of up to 40 liters per minute. To use the device, manually trigger it until chest rise is noted, then release and wait five seconds before repeating. The device must have a pressure relief valve that opens at 60 cm of water pressure to avoid over-ventilation and trauma to the lungs. The FROPVD is contraindicated in adult patients with potential chest trauma and in all children. (Note: with an apneic patient, the best results will be achieved using the two-person bag-valve-mask technique.) Proper training and considerable practice are required to use FROPVD devices correctly. The main components of flow-restricted, oxygen-powered ventilation devices include: an inspiratory pressure safety release valve; a trigger or lever positioned to allow both hands to remain on the mask to provide an airtight seal while supporting and tilting the patient's head; a peak flow rate of 100% oxygen at up to 40 L/min; and an audible alarm that sounds when the relief valve pressure is exceeded.
**DarkTek Sourcebook** DarkTek Sourcebook: DarkTek Sourcebook is a supplement published by Game Designers' Workshop in 1991 for the near-future horror role-playing game Dark Conspiracy. Contents: DarkTek Sourcebook, written by Charles E. Gannon, with cover art by John Zeleznik, describes new items for a Dark Conspiracy campaign, including the biologic weapons used by the Dark Minions, the constructs used by the ETs, and the advanced technology used by humans. Reception: In the September 1992 edition of Dragon (Issue #185), Allen Varney thought that this book "shows a shivery imagination that conveys the game’s flavor better than the rulebook did." Varney concluded with a thumbs up, saying, "Put an Obedience Bug (page 16) in your referee’s ear and compel him to get this book."
**Mystery Diagnosis** Mystery Diagnosis: Mystery Diagnosis is a television docudrama series that aired on OWN: Oprah Winfrey Network. Each episode focuses on two or more individuals who have struggled with obscure medical ailments, and their quest for a diagnosis. The program details the patients' and doctors' difficulty in pinpointing a diagnosis, often due to nonspecific symptoms, masquerading syndromes, the rarity of the condition or disease, or the patient's case being an unusual manifestation of said condition or disease. Mystery Diagnosis: The series debuted on Discovery Health Channel in 2005, and was continued when the Oprah Winfrey Network replaced Discovery Health on January 1, 2011. The last season premiered January 5, 2011. Description: Each episode tells the stories of two patients who experienced difficult-to-diagnose medical conditions. Each segment generally begins with a short description of the patient's life before they fell ill (or, in the case of a young child, the parents' life before the child was born). The symptoms that the person experienced are described from their onset, usually becoming progressively worse; the progression is often re-enacted by actors while the original patient narrates. The show chronicles the patient's visits from doctor to doctor, where they may receive misdiagnoses or be told that the doctors have found nothing wrong. After continuing to experience symptoms for an extended period of time, the person discovers a doctor who is able to solve their case. The doctor reviews the patient's medical records, notices a symptom that his or her colleagues overlooked, performs tests, and finally reaches the correct diagnosis and gives the proper treatment. This is followed by a brief explanation of why the disorder was so difficult to diagnose, and a description of what the person's life is like today. Usually, the patient is still alive. Some have died after the episode was taped or aired, and only one has died before the diagnosis (though his afflicted brother survived). The series has no regular cast except for its narrators, David Guion (2005–2009) and David Scott (2009–2011), who describe the patients' lives and the destruction their illnesses bring. The patients, along with their friends and family, help to narrate their stories. Description: While the majority of the conditions examined in the series are unusual or rare conditions (such as cryoglobulinemia) or genetic disorders, well-known conditions such as epilepsy, myasthenia gravis, alpha 1-antitrypsin deficiency, heart disease, Crohn's disease, pulmonary hypertension, Lyme disease, endocarditis and cancer have featured on the show. A significant number of episodes revolve around autoimmune disorders, ranging from pyoderma gangrenosum to paraneoplastic cerebellar degeneration. Other activities: In 2009, Mystery Diagnosis was named the program partner in organizing Rare Disease Day, an observance intended to raise awareness of rare diseases among the general public and policy-makers. Mystery Diagnosis worked with the United States coordinator, the National Organization for Rare Disorders, to organize events across the country for observing Rare Disease Day at the end of February. All episodes formerly premiered on the Discovery Health channel, The Learning Channel (TLC), and sometimes the Discovery Channel. As of January 2011, new episodes aired on OWN. The show later re-aired on Discovery Life. The show is not currently on Discovery+, the streaming service offered by Discovery.
**Latitudinal gradients in species diversity** Latitudinal gradients in species diversity: Species richness, or biodiversity, increases from the poles to the tropics for a wide variety of terrestrial and marine organisms, a pattern often referred to as the latitudinal diversity gradient. The latitudinal diversity gradient is one of the most widely recognized patterns in ecology. It has been observed to varying degrees in Earth's past. A parallel trend has been found with elevation (the elevational diversity gradient), though this is less well studied. Explaining the latitudinal diversity gradient has been called one of the great contemporary challenges of biogeography and macroecology (Willig et al. 2003, Pimm and Brown 2004, Cardillo et al. 2005). The question "What determines patterns of species diversity?" was among the 25 key research themes for the future identified in the 125th Anniversary issue of Science (July 2005). There is a lack of consensus among ecologists about the mechanisms underlying the pattern, and many hypotheses have been proposed and debated. A recent review noted that among the many conundrums associated with the latitudinal diversity gradient (or latitudinal biodiversity gradient), the causal relationship between rates of molecular evolution and speciation has yet to be demonstrated. Latitudinal gradients in species diversity: Understanding the global distribution of biodiversity is one of the most significant objectives for ecologists and biogeographers. Beyond purely scientific goals and satisfying curiosity, this understanding is essential for applied issues of major concern to humankind, such as the spread of invasive species, the control of diseases and their vectors, and the likely effects of global climate change on the maintenance of biodiversity (Gaston 2000). Tropical areas play prominent roles in the understanding of the distribution of biodiversity, as their rates of habitat degradation and biodiversity loss are exceptionally high. Patterns in the past: The latitudinal diversity gradient is a noticeable pattern among modern organisms that has been described qualitatively and quantitatively. It has been studied at various taxonomic levels, through different time periods and across many geographic regions (Crame 2001). The latitudinal diversity gradient has been observed to varying degrees in Earth's past, possibly due to differences in climate during various phases of Earth's history. Some studies indicate that the gradient was strong, particularly among marine taxa, while other studies of terrestrial taxa indicate it had little effect on the distribution of animals. Hypotheses for pattern: Although many of the hypotheses exploring the latitudinal diversity gradient are closely related and interdependent, most of the major hypotheses can be split into three general categories. Spatial/area hypotheses: There are five major hypotheses that depend solely on the spatial and areal characteristics of the tropics. Hypotheses for pattern: Mid-domain effect Using computer simulations, Colwell and Hurtt (1994) and Willig and Lyons (1998) first pointed out that if species' latitudinal ranges were randomly shuffled within the geometric constraints of a bounded biogeographical domain (e.g. the continents of the New World, for terrestrial species), species' ranges would tend to overlap more toward the center of the domain than towards its limits, forcing a mid-domain peak in species richness.
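This range-shuffling null model is simple to reproduce. A minimal sketch (illustrative parameters; a one-dimensional domain rescaled to [0, 1] stands in for the latitudinal extent):

import random

# Mid-domain-effect null model: scatter species ranges at random inside
# a bounded 1-D domain and count how many ranges overlap each latitude
# bin. Richness peaks near the middle of the domain purely by geometry.
N_SPECIES, N_BINS = 1000, 20
random.seed(42)

richness = [0] * N_BINS
for _ in range(N_SPECIES):
    size = random.random()                        # random range size
    mid = random.uniform(size / 2, 1 - size / 2)  # keep range inside the domain
    lo, hi = mid - size / 2, mid + size / 2
    for b in range(N_BINS):
        centre = (b + 0.5) / N_BINS
        if lo <= centre <= hi:
            richness[b] += 1

for b, r in enumerate(richness):                  # crude text histogram
    print(f"{(b + 0.5) / N_BINS:.3f}  {'#' * (r // 25)}")

Running it shows richness climbing from the domain edges to a peak at the center, even though no environmental gradient exists anywhere in the model.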
Colwell and Lees (2000) called this stochastic phenomenon the mid-domain effect (MDE), presented several alternative analytical formulations for one-dimensional MDE (expanded by Connolly 2005), and suggested the hypothesis that MDE might contribute to the latitudinal gradient in species richness, together with other explanatory factors considered here, including climatic and historical ones. Because "pure" mid-domain models attempt to exclude any direct environmental or evolutionary influences on species richness, they have been claimed to be null models (Colwell et al. 2004, 2005). On this view, if latitudinal gradients of species richness were determined solely by MDE, observed richness patterns at the biogeographic level would not be distinguishable from patterns produced by random placement of observed ranges (Colwell and Lees 2000). Others object that MDE models so far fail to exclude the role of the environment at the population level and in setting domain boundaries, and therefore cannot be considered null models (Hawkins and Diniz-Filho 2002; Hawkins et al. 2005; Zapata et al. 2003, 2005). Mid-domain effects have proven controversial (e.g. Jetz and Rahbek 2001, Koleff and Gaston 2001, Lees and Colwell 2007, Romdal et al. 2005, Rahbek et al. 2007, Storch et al. 2006; Bokma and Monkkonen 2001, Diniz-Filho et al. 2002, Hawkins and Diniz-Filho 2002, Kerr et al. 2006, Currie and Kerr 2007). While some studies have found evidence of a potential role for MDE in latitudinal gradients of species richness, particularly for wide-ranging species (e.g. Jetz and Rahbek 2001, Koleff and Gaston 2001, Lees and Colwell 2007, Romdal et al. 2005, Rahbek et al. 2007, Storch et al. 2006; Dunn et al. 2007), others report little correspondence between predicted and observed latitudinal diversity patterns (Bokma and Monkkonen 2001, Currie and Kerr 2007, Diniz-Filho et al. 2002, Hawkins and Diniz-Filho 2002, Kerr et al. 2006). Hypotheses for pattern: Geographical area hypothesis Another spatial hypothesis is the geographical area hypothesis (Terborgh 1973). It asserts that the tropics are the largest biome and that large tropical areas can support more species. More area in the tropics allows species to have larger ranges and consequently larger population sizes. Thus, species with larger ranges are likely to have lower extinction rates (Rosenzweig 2003). Additionally, species with larger ranges may be more likely to undergo allopatric speciation, which would increase rates of speciation (Rosenzweig 2003). The combination of lower extinction rates and high rates of speciation leads to the high levels of species richness in the tropics. Hypotheses for pattern: A critique of the geographical area hypothesis is that even if the tropics are the most extensive of the biomes, successive biomes north of the tropics all have about the same area. Thus, if the geographical area hypothesis is correct, these regions should all have approximately the same species richness, which is not true, as evidenced by the fact that polar regions contain fewer species than temperate regions (Gaston and Blackburn 2000). To explain this, Rosenzweig (1992) suggested that if species with partly tropical distributions were excluded, the richness gradient north of the tropics should disappear.
Blackburn and Gaston (1997) tested the effect of removing tropical species on latitudinal patterns in avian species richness in the New World and found there is indeed a relationship between the land area and the species richness of a biome once predominantly tropical species are excluded. Perhaps a more serious flaw in this hypothesis is that some biogeographers suggest that the terrestrial tropics are not, in fact, the largest biome, and thus this hypothesis is not a valid explanation for the latitudinal species diversity gradient (Rohde 1997, Hawkins and Porter 2001). In any event, it would be difficult to defend the tropics as a "biome" rather than the geographically diverse and disjunct regions that they truly include. Hypotheses for pattern: The effect of area on biodiversity patterns has been shown to be scale-dependent, having the strongest effect among species with small geographical ranges compared to those species with large ranges, which are affected more by other factors such as the mid-domain effect and/or temperature. Hypotheses for pattern: Species-energy hypothesis The species-energy hypothesis suggests that the amount of available energy sets limits to the richness of the system. Thus, increased solar energy (with an abundance of water) at low latitudes causes increased net primary productivity (or photosynthesis). This hypothesis proposes that the higher the net primary productivity, the more individuals can be supported, and the more species there will be in an area. Put another way, this hypothesis suggests that extinction rates are reduced towards the equator as a result of the higher populations sustainable on the greater amount of available energy in the tropics. Lower extinction rates lead to more species in the tropics. Hypotheses for pattern: One critique of this hypothesis has been that increased species richness over broad spatial scales is not necessarily linked to an increased number of individuals, which in turn is not necessarily related to increased productivity. Additionally, the observed changes in the number of individuals in an area with latitude or productivity are either too small (or in the wrong direction) to account for the observed changes in species richness. The potential mechanisms underlying the species-energy hypothesis, their unique predictions and their empirical support have been assessed in a major review by Currie et al. (2004). The effect of energy has been supported by several studies in terrestrial and marine taxa. Hypotheses for pattern: Climate harshness hypothesis Another climate-related hypothesis is the climate harshness hypothesis, which states that the latitudinal diversity gradient may exist simply because fewer species can physiologically tolerate conditions at higher latitudes than at low latitudes, because higher latitudes are often colder and drier than tropical latitudes. Currie et al. (2004) found fault with this hypothesis by stating that, although it is clear that climatic tolerance can limit species distributions, species are often absent from areas whose climate they can tolerate. Hypotheses for pattern: Climate stability hypothesis Similarly to the climate harshness hypothesis, climate stability is suggested to be the reason for the latitudinal diversity gradient.
The mechanism for this hypothesis is that while a fluctuating environment may increase the extinction rate or preclude specialization, a constant environment can allow species to specialize on predictable resources, allowing them to have narrower niches and facilitating speciation. Because temperate regions are more variable both seasonally and over geological timescales (discussed in more detail below), they are expected to have lower species diversity than the tropics. Hypotheses for pattern: Critiques of this hypothesis point out that there are many exceptions to the assumption that climate stability means higher species diversity. For example, low species diversity is known to occur often in stable environments such as tropical mountaintops. Additionally, many habitats with high species diversity do experience seasonal climates, including many tropical regions that have highly seasonal rainfall (Brown and Lomolino 1998). Hypotheses for pattern: Historical/Evolutionary hypotheses There are four main hypotheses that are related to historical and evolutionary explanations for the increase of species diversity towards the equator. Hypotheses for pattern: The historical perturbation hypothesis The historical perturbation hypothesis proposes that the low species richness of higher latitudes is a consequence of an insufficient time period available for species to colonize or recolonize areas because of historical perturbations such as glaciation (Brown and Lomolino 1998, Gaston and Blackburn 2000). This hypothesis suggests that diversity in the temperate regions has not yet reached equilibrium and that the number of species in temperate areas will continue to increase until saturated (Clarke and Crame 2003). However, in the marine environment, where there is also a latitudinal diversity gradient, there is no evidence of a latitudinal gradient in perturbation. Hypotheses for pattern: The evolutionary speed hypothesis The evolutionary speed hypothesis argues that higher evolutionary rates, due to shorter generation times in the tropics, have caused higher speciation rates and thus increased diversity at low latitudes. Higher evolutionary rates in the tropics have been attributed to higher ambient temperatures, higher mutation rates, shorter generation times and/or faster physiological processes, and increased selection pressure from other species that are themselves evolving. Faster rates of microevolution in warm climates (i.e. low latitudes and altitudes) have been shown for plants, mammals, birds, fish and amphibians. Bumblebee species inhabiting lower, warmer elevations have faster rates of both nuclear and mitochondrial genome-wide evolution. On the expectation that faster rates of microevolution result in faster rates of speciation, these results suggest that faster evolutionary rates in warm climates have a strong influence on the latitudinal diversity gradient. However, recent evidence from marine fish and flowering plants has shown that rates of speciation actually decrease from the poles towards the equator at a global scale. Understanding whether extinction rates vary with latitude will also be important for determining whether this hypothesis is supported.
Hypotheses for pattern: The hypothesis of effective evolutionary time The hypothesis of effective evolutionary time assumes that diversity is determined by the evolutionary time under which ecosystems have existed under relatively unchanged conditions, and by evolutionary speed directly determined by effects of environmental energy (temperature) on mutation rates, generation times, and speed of selection. It differs from most other hypotheses in not postulating an upper limit to species richness set by various abiotic and biotic factors; i.e., it is a nonequilibrium hypothesis assuming a largely non-saturated niche space. It does accept that many other factors may play a role in causing latitudinal gradients in species richness as well. The hypothesis is supported by much recent evidence, in particular the studies of Allen et al. and Wright et al. Hypotheses for pattern: The integrated evolutionary speed hypothesis The integrated evolutionary speed hypothesis argues that species diversity increases due to faster rates of genetic evolution and speciation at lower latitudes, where ecosystem productivity is generally greater. It differs from the effective evolutionary time hypothesis by recognizing that species richness generally increases with increasing ecosystem productivity and declines where high environmental energy (temperature) causes water deficits. It also proposes that evolutionary rate increases with population size, abiotic environmental heterogeneity, environmental change, and via positive feedback with biotic heterogeneity. There is considerable support for faster rates of genetic evolution in warmer environments, some support for a slower rate among plant species where water availability is limited, and some support for a slower rate among bird species with small population sizes. Many aspects of the hypothesis, however, remain untested. Hypotheses for pattern: Biotic hypotheses Biotic hypotheses claim that ecological species interactions such as competition, predation, mutualism, and parasitism are stronger in the tropics and that these interactions promote species coexistence and specialization, leading to greater speciation in the tropics. These hypotheses are problematic because they cannot be the ultimate cause of the latitudinal diversity gradient: they fail to explain why species interactions might be stronger in the tropics. One such hypothesis holds that the greater intensity of predation and more specialized predators in the tropics have contributed to the increase of diversity there (Pianka 1966). This intense predation could reduce the importance of competition (see competitive exclusion), permit greater niche overlap, and promote higher richness of prey. Some recent large-scale experiments suggest predation may indeed be more intense in the tropics, although this cannot be the ultimate cause of high tropical diversity because it fails to explain what gives rise to the richness of the predators in the tropics. Notably, the largest test of whether biotic interactions are strongest in the tropics, which focused on predation exerted by large fish predators in the world's open oceans, found predation to peak at mid-latitudes. Moreover, this test revealed a negative association between predation intensity and species richness, contradicting the idea that strong predation near the equator drives or maintains high diversity. Other studies have failed to observe consistent changes in ecological interactions with latitude altogether (Lambers et al.
2002), suggesting that the intensity of species interactions is not correlated with the change in species richness with latitude. Overall, these results highlight the need for more studies on the importance of species interactions in driving global patterns of diversity. Synthesis and conclusions: There are many other hypotheses related to the latitudinal diversity gradient, but the above hypotheses are a good overview of the major ones still cited today. It is important to note that many of these hypotheses are similar to and dependent on one another. For example, the evolutionary hypotheses are closely dependent on the historical climate characteristics of the tropics. Synthesis and conclusions: The generality of the latitudinal diversity gradient An extensive meta-analysis of nearly 600 latitudinal gradients from the published literature tested the generality of the latitudinal diversity gradient across different organismal, habitat and regional characteristics. The results showed that the latitudinal gradient occurs in marine, terrestrial, and freshwater ecosystems, in both hemispheres. The gradient is steeper and more pronounced in richer taxa (i.e. taxa with more species), in larger organisms, in marine and terrestrial versus freshwater ecosystems, and at regional versus local scales. The gradient steepness (the amount of change in species richness with latitude) is not influenced by dispersal, animal physiology (homeothermic or ectothermic), trophic level, hemisphere, or the latitudinal range of study. The study could not directly falsify or support any of the above hypotheses; however, the results do suggest that a combination of energy/climate and area processes likely contributes to the latitudinal species gradient. Notable exceptions to the trend include the Ichneumonidae, shorebirds, penguins, and freshwater zooplankton. Also, in terrestrial ecosystems soil bacterial diversity peaks in temperate climatic zones, and has been linked to carbon inputs and the microscale distribution of aqueous habitats. Synthesis and conclusions: Data robustness One of the main assumptions about latitudinal diversity gradients and patterns in species richness is that the underlying data (i.e., the lists of species at specific locations) are complete. However, this assumption is not met in most cases. For instance, diversity patterns for blood parasites of birds suggest higher diversity in tropical regions, but the data may be skewed by undersampling in rich faunal areas such as Southeast Asia and South America. For marine fishes, which are among the most studied taxonomic groups, current lists of species are considerably incomplete for most of the world's oceans. At a 3° (about 350 km) spatial resolution, less than 1.8% of the world's oceans have more than 80% of their fish fauna currently described. Synthesis and conclusions: Conclusion The fundamental macroecological question on which the latitudinal diversity gradient depends is "What causes patterns in species richness?". Species richness ultimately depends on whatever proximate factors are found to affect the processes of speciation, extinction, immigration, and emigration. While some ecologists continue to search for the ultimate primary mechanism that causes the latitudinal richness gradient, many ecologists suggest instead that this ecological pattern is likely to be generated by several contributory mechanisms (Gaston and Blackburn 2000, Willig et al. 2003, Rahbek et al. 2007).
For now, the debate over the cause of the latitudinal diversity gradient will continue until a groundbreaking study provides conclusive evidence, or there is general consensus that multiple factors contribute to the pattern.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Skinny Client Control Protocol** Skinny Client Control Protocol: The Skinny Client Control Protocol (SCCP) is a proprietary network terminal control protocol originally developed by Selsius Systems, which was acquired by Cisco Systems in 1998. Skinny Client Control Protocol: SCCP is a lightweight IP-based protocol for session signaling with Cisco Unified Communications Manager, formerly named CallManager. The protocol architecture is similar to the media gateway control protocol architecture, in that it decomposes the function of media conversion in telecommunication for transmission via an Internet Protocol network into a relatively low-intelligence customer-premises equipment (CPE) device and a call agent implementation that controls the CPE via signaling commands. The call agent product is Cisco CallManager, which also acts as a signaling proxy for call events initiated over other common protocols such as H.323 and the Session Initiation Protocol (SIP) for voice over IP, or ISDN for the public switched telephone network. Protocol components: An SCCP client uses TCP/IP to communicate with one or more Call Manager applications in a cluster. It uses the Real-time Transport Protocol (RTP) over UDP transport for the bearer traffic (the real-time audio stream) with other Skinny clients or an H.323 terminal. SCCP is a stimulus-based protocol and is designed as a communications protocol for hardware endpoints and other embedded systems with significant CPU and memory constraints. Protocol components: Some Cisco analog media gateways, such as the VG248 gateway, register and communicate with Cisco Unified Communications Manager using SCCP. Origin: Cisco acquired SCCP technology when it acquired Selsius Corporation in 1998. For this reason the protocol is also referred to in Cisco documentation as the Selsius Skinny Station Protocol. Another remnant of the origin of the Cisco IP phones is the default device name format for Cisco phones registered with CallManager: SEP, as in Selsius Ethernet Phone, followed by the MAC address. Cisco has also marketed a Skinny-based softphone called Cisco IP Communicator. Client examples: Examples of SCCP client devices include the Cisco 7900 series of IP phones, the Cisco IP Communicator softphone, and the 802.11b Wireless IP Phone 7920, along with the Cisco Unity voicemail server. Other implementations: Other companies, such as Symbol Technologies, SocketIP, and Digium, have implemented the protocol in VoIP terminals and IP phones, media gateway controllers, and softswitches. An open source implementation of a call agent is available in the Asterisk and FreeSWITCH systems. IPBlue provides a soft phone that emulates a Cisco 7960 telephone. Twinlights Software distributes a soft phone implementation for Android-based devices. The Cisco Unified Application Environment, the product acquired by Cisco when they purchased Metreos, supports using SCCP to emulate Cisco 7960 phones, allowing applications to access all Cisco line-side features.
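To make the client-to-call-agent signaling described above concrete, the following sketch assembles and sends one SCCP message over TCP. It is a minimal illustration only: the little-endian length/reserved/message-ID header layout, the KeepAlive message ID (0x0000), and the well-known TCP port 2000 are assumptions drawn from open-source implementations such as Asterisk's chan_skinny, not from official Cisco documentation.

```python
import socket
import struct

KEEPALIVE_MESSAGE = 0x0000   # assumed KeepAlive ID, per open-source dissectors
SCCP_PORT = 2000             # assumed well-known SCCP ("cisco-sccp") TCP port

def sccp_frame(message_id: int, payload: bytes = b"") -> bytes:
    # Assumed header: little-endian 4-byte length (message ID + payload),
    # 4-byte reserved field, 4-byte message ID, then the message body.
    return struct.pack("<III", 4 + len(payload), 0, message_id) + payload

def send_keepalive(call_manager_host: str) -> None:
    # Open a TCP session to the call agent and send one KeepAlive frame;
    # a CallManager would normally answer with a KeepAliveAck frame.
    with socket.create_connection((call_manager_host, SCCP_PORT), timeout=5) as s:
        s.sendall(sccp_frame(KEEPALIVE_MESSAGE))
        print(s.recv(12).hex())   # raw bytes of the (expected) acknowledgment
```

Real endpoints exchange a longer registration dialogue before keepalives, but the fixed 12-byte header framing shown here is the part that makes the protocol cheap to parse on memory-constrained hardware.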
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Resource Kit** Resource Kit: Resource Kit is a term used by Microsoft for a set of software resources and documentation released for their software products, but which is not part of that product. Resource kits offer supplementary resources such as technical guidance, compatibility and troubleshooting information, management, support, maintenance and deployment guides, and a variety of useful administrative utilities, which are available separately. Overview: The most common form of the Resource Kits is a large book or box set of books accompanied by CD-ROM(s), both of which have been supplemented in some cases, such as the Resource Kits for Windows NT Server versions 3.51 and 4.0 and Windows 2000 Server. Overview: The text of the Resource Kit books is also available with versions of the Microsoft Developer Network (MSDN) CD-ROMs, and a large subset to complete set of the tools included in the kits can be downloaded from the Microsoft web site. The tools range from extra commands for the command line, to general interest programmes like 3D Paint, to network-related and performance monitoring tools, to interpreters for programming languages like Perl, Rexx, and others, to interoperability tools like Windows versions of some Unix commands and shells. Overview: The Resource Kits, especially in the case of the Windows NT-2000 stream of operating systems, also include third-party software like various versions of Crystal Reports and PowerDesk. Typically, Microsoft releases resource kits after every major version of Microsoft Windows, Microsoft Office or another major product. Resource kits have also been released for Internet Explorer, BackOffice and other software. Overview: Those seeking Windows-Unix interoperability in various forms can also use an unrelated software product, Windows Services for UNIX, which contains such items as the Interix C and Korn shells, ActiveState's ActivePerl, and many other POSIX-compliant tools and additions to the operating system. This package is sometimes mistaken for a Resource Kit for Unix. The Microsoft Office resource kits are also relevant to the versions of these office suites for the Macintosh. Overview: The Resource Kit tools mainly help administrators streamline management tasks such as troubleshooting operating system issues, configuring networking and security features, managing Active Directory, and automating application deployment. The resource kits are also geared towards "power users" and contain other tools such as extra commands for the Windows batch/shell environment, programming aids, database tools, and miscellaneous tools. Interpreters for programming languages such as Perl, Rexx, KiXtart, awk and a version of the Unix Korn shell are available with many of the operating system Resource Kits, including those for both the Windows 95-98 and Windows NT-2000 streams of operating systems. Windows Resource Kits: The Windows Resource Kit was introduced with Windows 3.0 in 1991 and has since been released for every Windows version, except for Windows Me, Windows CE and Windows 98 Second Edition. A Resource Kit for MS-DOS 6.22 was released in 1992. Resource Kits were also not produced for Microsoft's two non-Windows operating systems, OS/2 (prior to version 3.0) and Xenix, mainly because they were not actively promoted after 1991.
With the Windows NT-2000 stream of operating systems, separate kits were released for the Workstation (or Professional) and Server versions; the Server documentation is a box set of four to a dozen or so books in each case, whereas a single large book accompanies the Workstation kits, as it does the Windows 3.11 and Windows 95 to Windows 98 Resource Kits. Windows Resource Kits: Windows 9x family The Windows 95 to Windows 98 Resource Kit documentation and tools were available free of charge, and a Resource Kit Sampler was included on the respective Windows installation CD-ROM discs. Resource Kit tools can generally be downloaded from the Microsoft Download Center free of charge, while the technical guidance and information is released in the form of Microsoft Press books. The CD-ROM discs accompanying the books contain electronic versions of the books and include the Resource Kit tools and utilities, some of which may be exclusive. Windows Resource Kits: Windows NT family The Windows NT 4.0 Resource Kits (Workstation and Server) contained a particularly large number of tools and utilities as well as third-party software. The command-line tools included in these kits are considered by many Windows NT shell programmers to be essential to getting the full use of the facility. Windows Resource Kits: In the past, Microsoft used to release supplements for some Resource Kits, which offered revised and new tools and resources. Microsoft released two supplements for the Windows NT 3.51 Server Resource Kit, four supplements for the Windows NT 4.0 Server Resource Kit and one supplement for the Windows 2000 Server Resource Kit. Some of these utilities (such as robocopy and takeown) later shipped as part of Windows XP and Windows Vista. Others were included in later Resource Kits. Older Resource Kits are no longer available from Microsoft but can in most cases be ordered from booksellers. Windows Resource Kits: The Windows 2000 Resource Kit also contains over 300 utilities. For Windows XP and Windows Server 2003, over 120 tools and utilities have been updated. The Windows disc for Windows 2000, Windows XP and later operating systems also includes a set of tools known as Windows Support Tools. Many of the support tools are also included in the Resource Kit, some being updated versions of past Resource Kit tools. The Microsoft web site has downloads of Windows 2000/XP era tools which are in addition to those in the standard kits, or updated versions of the ones shipping in the Resource Kits. Windows XP Professional Resource Kit, Third Edition was released after Windows XP Service Pack 2. All of the Windows Server 2003 Resource Kit Tools are available for download free of charge. There have been no native 64-bit resource kit tools produced, and existing 32-bit resource kit tools are not supported on x64 platforms. The text of all Resource Kit books is included in the MSDN Library CD/DVD-ROM sets. Full implementations of MSDN contain all of the Resource Kits in text or HTML format, as well as full documentation for Microsoft Office, Internet Explorer, and BackOffice, and for all of the operating systems covered. Windows Resource Kits: In 2007, Microsoft released the Windows Vista Resource Kit. In 2008, the Windows Server 2008 Resource Kits were released and the Windows Vista Resource Kit, Second Edition was updated for Service Pack 1. The Windows Vista Resource Kit ships with several sample VBScripts and a few PowerShell scripts.
Microsoft has also released Resource Kits for Group Policy, Windows security, Active Directory, Terminal Services and Internet Information Services 7. Windows Resource Kits: The Windows 7 Resource Kit was released on September 14, 2009. Microsoft has announced that new unsupported Resource Kit tools will not be provided for current and future operating systems. Other resource kits: The Office Resource Kit and tools are included on the respective Office CD/DVD and/or separately; the tools are also available for download from the Microsoft web site. Microsoft has also released Resource Kits for Internet Explorer, Windows Media, Internet Information Services, BackOffice and several server products such as SharePoint and Microsoft Exchange Server. The PowerShell team has released a Resource Kit PowerShell Pack, a collection of PowerShell modules that adds over 700 scripts to those already present in Windows 7.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Barrett's esophagus** Barrett's esophagus: Barrett's esophagus is a condition in which there is an abnormal (metaplastic) change in the mucosal cells lining the lower portion of the esophagus, from stratified squamous epithelium to simple columnar epithelium with interspersed goblet cells that are normally present only in the small intestine and large intestine. This change is considered to be a premalignant condition because it is associated with a high incidence of further transition to esophageal adenocarcinoma, an often-deadly cancer. The main cause of Barrett's esophagus is thought to be an adaptation to chronic acid exposure from reflux esophagitis. Barrett's esophagus is diagnosed by endoscopy: observing the characteristic appearance of this condition by direct inspection of the lower esophagus, followed by microscopic examination of tissue from the affected area obtained by biopsy. The cells of Barrett's esophagus are classified into four categories: nondysplastic, low-grade dysplasia, high-grade dysplasia, and frank carcinoma. High-grade dysplasia and early stages of adenocarcinoma may be treated by endoscopic resection or radiofrequency ablation. Later stages of adenocarcinoma may be treated with surgical resection or palliation. Those with nondysplastic or low-grade dysplasia are managed by annual observation with endoscopy, or treatment with radiofrequency ablation. In high-grade dysplasia, the risk of developing cancer may be 10% per patient-year or greater. The incidence of esophageal adenocarcinoma has increased substantially in the Western world in recent years. The condition is found in 5–15% of patients who seek medical care for heartburn (gastroesophageal reflux disease, or GERD), although a large subgroup of patients with Barrett's esophagus are asymptomatic. The condition is named after surgeon Norman Barrett (1903–1979) even though the condition was originally described by Philip Rowland Allison in 1946. Signs and symptoms: The change from normal to premalignant cells that indicates Barrett's esophagus does not cause any particular symptoms. Barrett's esophagus is, however, associated with the following symptoms: frequent and longstanding heartburn; trouble swallowing (dysphagia); vomiting blood (hematemesis); pain under the sternum where the esophagus meets the stomach; and pain when swallowing (odynophagia), which can lead to unintentional weight loss. The risk of developing Barrett's esophagus is increased by central obesity (vs. peripheral obesity). The exact mechanism is unclear. The difference in distribution of fat among men (more central) and women (more peripheral) may explain the increased risk in males. Pathophysiology: Barrett's esophagus occurs due to chronic inflammation. The principal cause of chronic inflammation is gastroesophageal reflux disease, GERD (UK: GORD). In this disease, acidic stomach contents, bile, and small-intestinal and pancreatic contents damage the cells of the lower esophagus. In turn, this confers an advantage on cells more resistant to these noxious stimuli, in particular HOXA13-expressing stem cells, which are characterised by distal (intestinal) characteristics and outcompete the normal squamous cells. This mechanism also explains the selection of HER2/neu (also called ERBB2)-overexpressing (lineage-addicted) cancer cells during the process of carcinogenesis, and the efficacy of targeted therapy against the Her-2 receptor with trastuzumab (Herceptin) in the treatment of adenocarcinomas at the gastroesophageal junction.
Pathophysiology: Researchers are unable to predict which people with heartburn will develop Barrett's esophagus. While no relationship exists between the severity of heartburn and the development of Barrett's esophagus, a relationship does exist between chronic heartburn and the development of Barrett's esophagus. Sometimes, people with Barrett's esophagus have no heartburn symptoms at all. Pathophysiology: Some anecdotal evidence indicates those with the eating disorder bulimia are more likely to develop Barrett's esophagus because bulimia can cause severe acid reflux, and because purging also floods the esophagus with acid. However, a link between bulimia and Barrett's esophagus remains unproven. During episodes of reflux, bile acids enter the esophagus, and this may be an important factor in carcinogenesis. Individuals with GERD and BE are exposed to high concentrations of deoxycholic acid, which has cytotoxic effects and can cause DNA damage. Diagnosis: Both macroscopic (from endoscopy) and microscopic positive findings are required to make a diagnosis. Barrett's esophagus is marked by the presence of columnar epithelia in the lower esophagus, replacing the normal squamous cell epithelium—an example of metaplasia. The secretory columnar epithelium may be more able to withstand the erosive action of the gastric secretions; however, this metaplasia confers an increased risk of adenocarcinoma. Diagnosis: Screening Screening endoscopy is recommended among males over the age of 60 who have reflux symptoms that are of long duration and not controllable with treatment. Among those not expected to live more than five years, screening is not recommended. The Seattle protocol is commonly used in endoscopy to obtain endoscopic biopsies for screening, taken every 1 to 2 cm from the gastroesophageal junction. Diagnosis: Since the COVID-19 pandemic, the NHS in Scotland has been using a swallowable sponge (Cytosponge) in hospitals to collect cell samples for diagnosis. Preliminary studies have shown this diagnostic test to be a useful tool for screening people with heartburn symptoms and for improving diagnosis. Diagnosis: Intestinal metaplasia The presence of goblet cells, called intestinal metaplasia, is necessary to make a diagnosis of Barrett's esophagus. This frequently occurs in the presence of other metaplastic columnar cells, but only the presence of goblet cells is diagnostic. The metaplasia is grossly visible through a gastroscope, but biopsy specimens must be examined under a microscope to determine whether cells are gastric or colonic in nature. Colonic metaplasia is usually identified by finding goblet cells in the epithelium and is necessary for the true diagnosis. Many histologic mimics of Barrett's esophagus are known (i.e. goblet cells occurring in the transitional epithelium of normal esophageal submucosal gland ducts, and "pseudogoblet cells" in which abundant foveolar [gastric] type mucin simulates the acid mucin of true goblet cells). Assessment of the relationship to submucosal glands and transitional-type epithelium, with examination of multiple levels through the tissue, may allow the pathologist to reliably distinguish between goblet cells of submucosal gland ducts and true Barrett's esophagus (specialized columnar metaplasia). The histochemical stain Alcian blue pH 2.5 is also frequently used to distinguish true intestinal-type mucins from their histologic mimics.
Recently, immunohistochemical analysis with antibodies to CDX-2 (specific for mid and hindgut intestinal derivation) has also been used to identify true intestinal-type metaplastic cells. The protein AGR2 is elevated in Barrett's esophagus and can be used as a biomarker for distinguishing Barrett epithelium from normal esophageal epithelium. The presence of intestinal metaplasia in Barrett's esophagus represents a marker for the progression of metaplasia towards dysplasia and eventually adenocarcinoma. This factor, combined with two distinct patterns of immunohistochemical expression of p53, Her2 and p16, points to two different genetic pathways that likely progress to dysplasia in Barrett's esophagus. Intestinal metaplastic cells can also be positive for CK7+/CK20−. Diagnosis: Epithelial dysplasia After the initial diagnosis of Barrett's esophagus is rendered, affected persons undergo annual surveillance to detect changes that indicate higher risk of progression to cancer: development of epithelial dysplasia (or "intraepithelial neoplasia"). Diagnosis: Among all metaplastic lesions, around 8% were associated with dysplasia. In particular, a recent study demonstrated that dysplastic lesions were located mainly in the posterior wall of the esophagus. Considerable variability is seen in assessment for dysplasia among pathologists. Recently, gastroenterology and GI pathology societies have recommended that any diagnosis of high-grade dysplasia in Barrett be confirmed by at least two fellowship-trained GI pathologists prior to definitive treatment for patients. For more accuracy and reproducibility, it is also recommended to follow international classification systems, such as the "Vienna classification" of gastrointestinal epithelial neoplasia (2000). Management: Many people with Barrett's esophagus do not have dysplasia. Medical societies recommend that if a patient has Barrett's esophagus, and if the past two endoscopy and biopsy examinations have confirmed the absence of dysplasia, then the patient should not have another endoscopy within three years. Endoscopic surveillance of people with Barrett's esophagus is often recommended, although little direct evidence supports this practice. Treatment options for high-grade dysplasia include surgical removal of the esophagus (esophagectomy) or endoscopic treatments such as endoscopic mucosal resection or ablation (destruction). The risk of malignancy is highest in the United States in Caucasian men over fifty years of age with more than five years of symptoms. Current recommendations include routine endoscopy and biopsy (looking for dysplastic changes). Although in the past physicians have taken a watchful waiting approach, newly published research supports consideration of intervention for Barrett's esophagus. Balloon-based radiofrequency ablation, invented by Ganz, Stern, and Zelickson in 1999, is a newer treatment modality for Barrett's esophagus and dysplasia, and has been the subject of numerous published clinical trials. The findings demonstrate radiofrequency ablation is at least 90% effective in completely clearing Barrett's esophagus and dysplasia, with durability of up to five years and a favorable safety profile. Anti-reflux surgery has not been proven to prevent esophageal cancer. However, indications are that proton pump inhibitors are effective in limiting the progression of esophageal cancer. Laser treatment is used in severe dysplasia, while overt malignancy may require surgery, radiation therapy, or systemic chemotherapy.
A recent five-year randomized controlled trial has shown that photodynamic therapy using Photofrin is statistically more effective in eliminating dysplastic growth areas than sole use of a proton pump inhibitor. There is presently no reliable way to determine which patients with Barrett's esophagus will go on to develop esophageal cancer, although a recent study found the detection of three different genetic abnormalities was associated with as much as a 79% chance of developing cancer in six years. Endoscopic mucosal resection has also been evaluated as a management technique. Additionally, an operation known as a Nissen fundoplication can reduce the reflux of acid from the stomach into the esophagus. In a variety of studies, nonsteroidal anti-inflammatory drugs (NSAIDs) such as low-dose aspirin (75–300 mg/day) have shown evidence of preventing esophageal cancer in people with Barrett's esophagus. Prognosis: Barrett's esophagus is a premalignant condition, not a malignant one. Its malignant sequela, esophagogastric junctional adenocarcinoma, has a mortality rate of over 85%. The risk of developing esophageal adenocarcinoma in people who have Barrett's esophagus has been estimated to be 6–7 per 1000 person-years, but a cohort study of 11,028 patients from Denmark published in 2011 showed an incidence of only 1.2 per 1000 person-years (5.1 per 1000 person-years in patients with dysplasia, 1.0 per 1000 person-years in patients without dysplasia). The relative risk of esophageal adenocarcinoma is about ten times higher in those with Barrett's esophagus than in the general population. Most patients with esophageal carcinoma survive less than one year. Epidemiology: The incidence in the United States among Caucasian men is eight times the rate among Caucasian women and five times the rate among African American men. Overall, the male to female ratio of Barrett's esophagus is 10:1. Several studies have estimated the prevalence of Barrett's esophagus in the general population to be 1.3% to 1.6% in two European populations (Italian and Swedish), and 3.6% in a Korean population. History: The condition is named after Australian thoracic surgeon Norman Barrett (1903–1979), who in 1950 argued that "ulcers are found below the squamocolumnar junction ... represent gastric ulcers within 'a pouch of stomach ... drawn up by scar tissue into the mediastinum' ... representing an example of a 'congenital short esophagus'". In contrast, Philip Rowland Allison and Alan Johnstone argued that the condition related to the "esophagus lined with gastric mucous membrane and not intra-thoracic stomach as Barrett mistakenly believed." Philip Allison, cardiothoracic surgeon and Chair of Surgery at the University of Oxford, suggested "calling the chronic peptic ulcer crater of the esophagus a 'Barrett's ulcer'", but added this name did not imply agreement with "Barrett's description of an esophagus lined with gastric mucous membrane as stomach." Bani-Hani KE and Bani-Hani KR argue that the terminology and definition of Barrett's esophagus are surrounded by extraordinary confusion, unlike most other medical conditions, and that "[t]he use of the eponym 'Barrett's' to describe [the condition] is not justified from a historical point of view".
Bani-Hani KE and Bani-Hani KR investigated the historical aspects of the condition and found they could establish "how little Norman Barrett had contributed to the core concept of this condition in comparison to the contributions of other investigators, particularly the contribution of Philip Allison". A further association with adenocarcinoma was made in 1975.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lithium hypochlorite** Lithium hypochlorite: Lithium hypochlorite is the colorless, crystalline lithium salt of hypochlorous acid, with the chemical formula LiClO. It is used as a disinfectant for pools and as a reagent in some chemical reactions. Safety: Doses of 500 mg/kg cause clinical signs and significant mortality in rats. The use of chlorine-based disinfectants in domestic water, although widespread, has led to some controversy due to the formation of small quantities of harmful byproducts such as chloroform. Studies showed no uptake of lithium by users of pools treated with lithium hypochlorite.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Collicular artery** Collicular artery: The collicular artery or quadrigeminal artery arises from the posterior cerebral artery. This small artery supplies portions of the midbrain, especially the superior colliculus, inferior colliculus, and tectum. Structure: The collicular artery originates from the P1 segment of the posterior cerebral artery near the side of the interpeduncular fossa. It arises just distal to the bifurcation of the basilar artery. It runs posteriorly along the cerebral peduncle, passing the crural and ambient cisterns. It then gives off branches to supply the quadrigeminal plate and the adjacent structures in the midbrain. The origin of this artery is proximal to the origin of the medial and lateral posterior choroidal branches of the posterior cerebral artery. The main collicular artery also gives rise to an accessory collicular artery. Structure: Branches Anterior branches: Anteromedial branches are rare but are sometimes observed to contribute to the lateral rami of the intermediate pedicle of the interpeduncular fossa. Anterolateral branches are abundant, branching from both the main and accessory collicular arteries; they exist only on the lower part of the cerebral crus. Lateral branches: These branches are found near the lateral part of the cerebral crus, arising from both the main and accessory collicular arteries. Posterior branches: These branches originate from the terminal branch of the collicular artery. Function: This small artery supplies the superior colliculus, inferior colliculus, and tectum of the midbrain.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Trace Elliot** Trace Elliot: Trace Elliot is a United Kingdom-based bass amplification manufacturer, with a sub-brand, Trace Acoustic, for acoustic instruments. History: In 1979, a music shop in Romford, Essex, UK, called Soundwave was building and hiring out PA systems to local musicians. It soon became apparent that some of this equipment was not being used simply as PA; instead, it was being used by bass players, who for so long had to put up with under-powered amplification that was often merely a guitar amplifier with a modified tone circuit. History: The owner of Soundwave, Fred Friedlein, and his staff, which included Alan Morgan (sales) and Stuart Watson (design engineer), realised the potential market and developed a range of products that incorporated MOSFET output stages driving large cabinets, including 15” drivers, and also the world's first bass-dedicated 4 x 10” cabinet, now an industry standard for all bass amp lines. History: Several features made this product unique. First, the GP11 pre-amp featured 11 very broad graphic EQ bands that overlapped each other, enabling massive amounts of frequency cut or boost when adjacent bands were boosted or cut. Secondly, the frequency bands were spaced closer together towards the bass end, giving bass guitarists more scope to alter their sound than any amp had previously allowed. Added to this were MOSFET power amps of 250 or 500 watts and the option of bi-amplified systems, in which bass and upper frequencies are filtered before being separately amplified and fed to dedicated high-frequency and low-frequency speaker cabinets. Trace Elliot, as the brand came to be called, gained a reputation; rumour has it that early users were John Paul Jones of Led Zeppelin, Andy Rourke of The Smiths and Brian Helicopter of punk band The Shapes. Mark King of Level 42 was also an early adopter of the brand. The company, now dedicated to manufacturing, moved to new premises in Witham, Essex, in 1985 to satisfy the growing demand. History: In late 1986, Stuart Watson, technical director and designer of the Trace Elliot range up to the Mark 5 series, left the company. That same year Fred Friedlein (then sole owner of Trace Elliot) employed the services of freelance electronics designer Clive Button. In 1986, Mark Gooday was appointed MD and given 24% of the company by Friedlein in thanks for the growth and production changes made by Gooday. History: In 1989, Trace Elliot introduced the Trace Acoustic range of acoustic amplifiers, whose features were developed by Friedlein, Gooday, Clive Roberts and Clive Button. The company moved again from its base in Witham, this time to Maldon, Essex. History: In 1992, the company was bought by Kaman, which had previously handled the brand's US distribution. The reason for the sale was the need for growth and the importance of the US market: Kaman staff would service a brand but would not grow brands unless they owned them. This arrangement was suggested to Friedlein by Gooday (to whom Friedlein had offered the full company at a very low price). The sale to Kaman meant Friedlein could retire and Gooday could see the brand grow with Kaman.
History: Kaman downsized their music division in 1997 and sold the company to a trio of Trace Elliot directors, who took ownership of a brand with nearly 200 staff on a 110,000-square-foot (10,000 m2) site; they focused on exploiting the North American market, and in 1998 sold the company to the Gibson Guitar Corporation. In January 2002, the factory was closed and all the staff were laid off. Gibson moved the production of a few particular products they wanted to continue with to various locations in the United States. History: In April 2005 it was announced that Peavey Electronics had acquired the North American distribution rights to the Trace Elliot brand. Notable products, past and present:
- GP11 pre-amplifier: very collectable unit, combined with various power amp models produced in the 1980s.
- 1110 Combo: a combination amplifier/speaker unit comprising a GP11 pre-amplifier, V5 MOSFET amplifier and 4 x 10” bass cabinet.
- 1048H: successor to the world's first dedicated 4 x 10” bass cabinet.
- BLX-80: a compact 80 watt bass combo with an innovative back-of-cabinet mounted 10" speaker and a full-featured GP7 pre-amp section. The name was derived from the phrase "the dog's bollocks", which was used to describe the combo during development.
- AH1000-12: fully featured bass head with 12-band EQ, Valve Drive, dual-band compression and many other features.
- Trace Acoustic range: numerous models for amplifying acoustic instruments.
- GP12SMX Bass Preamp: 12-band EQ bass pre-amp; the basis for the preamp in all the SMX series.
- V-Type V6: 300 W all-valve head, used by many Britpop bands in the '90s.
- V-Type V8: 400 W all-valve head, with overdrive and compression on board.
- Velocette: 1990s-era 15 W valve-powered guitar combos; several variants, basis for the Gibson Goldtone range.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Monodomain model** Monodomain model: The monodomain model is a reduction of the bidomain model of the electrical propagation in myocardial tissue. The reduction comes from assuming that the intra- and extracellular domains have equal anisotropy ratios. Although not as physiologically accurate as the bidomain model, it is still adequate in some cases, and has reduced complexity. Formulation: Letting $T$ be the domain of the model, the monodomain model can be formulated as

$$\frac{\lambda}{1+\lambda}\,\nabla\cdot(\Sigma_i \nabla v) = \chi\left(C_m \frac{\partial v}{\partial t} + I_{ion}\right) \quad \text{in } T,$$

where $\Sigma_i$ is the intracellular conductivity tensor, $v$ is the transmembrane potential, $I_{ion}$ is the transmembrane ionic current per unit area, $C_m$ is the membrane capacitance per unit area, $\lambda$ is the intra- to extracellular conductivity ratio, and $\chi$ is the membrane surface area per unit volume (of tissue). Formulation: Derivation The monodomain model can be easily derived from the bidomain model, which can be written as

$$\nabla\cdot(\Sigma_i \nabla v) + \nabla\cdot(\Sigma_i \nabla v_e) = \chi\left(C_m \frac{\partial v}{\partial t} + I_{ion}\right),$$
$$\nabla\cdot(\Sigma_i \nabla v) + \nabla\cdot\big((\Sigma_i + \Sigma_e) \nabla v_e\big) = 0.$$

Assuming equal anisotropy ratios, i.e. $\Sigma_e = \lambda\Sigma_i$, the second equation can be written as

$$\nabla\cdot(\Sigma_i \nabla v_e) = -\frac{1}{1+\lambda}\,\nabla\cdot(\Sigma_i \nabla v).$$

Then, inserting this into the first bidomain equation gives the unique equation of the monodomain model

$$\frac{\lambda}{1+\lambda}\,\nabla\cdot(\Sigma_i \nabla v) = \chi\left(C_m \frac{\partial v}{\partial t} + I_{ion}\right).$$

Boundary conditions: Differently from the bidomain model, the monodomain model is usually equipped with an isolated boundary condition, meaning that it is assumed that no current can flow from or to the domain (usually the heart). Mathematically, this is done by imposing a zero transmembrane potential flux, i.e.

$$\mathbf{n}\cdot(\Sigma_i \nabla v) = 0 \quad \text{on } \partial T,$$

where $\mathbf{n}$ is the unit outward normal of the domain and $\partial T$ is the domain boundary.
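As a concrete illustration of how the monodomain equation is solved in practice, the sketch below advances a one-dimensional discretisation with an explicit finite-difference step and the zero-flux boundaries described above. It is a toy example only: the cubic ionic current, all parameter values, and the grid are illustrative assumptions, not part of the model's definition.

```python
import numpy as np

# Explicit 1-D step of lambda/(1+lambda) * d/dx(sigma_i dv/dx)
#   = chi * (C_m dv/dt + I_ion), with zero-flux (isolated) boundaries.
L, N = 1.0, 200          # domain length (cm) and number of grid points
dx = L / (N - 1)
dt = 1e-3                # time step (ms); satisfies the explicit stability bound
sigma_i = 1.7            # intracellular conductivity (illustrative, mS/cm)
lam = 1.0                # intra- to extracellular conductivity ratio
chi = 1400.0             # membrane surface area per unit volume (1/cm)
C_m = 1.0                # membrane capacitance per unit area (uF/cm^2)

v = np.zeros(N)
v[:10] = 1.0             # depolarize the left end as a stimulus

def i_ion(v):
    # toy cubic ionic current (bistable, FitzHugh-Nagumo-like) -- an assumption
    return v * (v - 0.1) * (v - 1.0)

for _ in range(20_000):  # integrate for 20 ms
    lap = np.zeros(N)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    # zero-flux boundaries via reflected ghost nodes: n . (sigma_i grad v) = 0
    lap[0] = 2 * (v[1] - v[0]) / dx**2
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    dvdt = (lam / (1 + lam) * sigma_i * lap / chi - i_ion(v)) / C_m
    v = v + dt * dvdt

print(v[::20])           # the depolarized region has spread to the right
```

The factor $\lambda/(1+\lambda)$ simply rescales the effective conductivity, which is why monodomain solvers cost roughly half as much as bidomain solvers: only one potential field is advanced.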
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Lumacaftor/ivacaftor** Lumacaftor/ivacaftor: Lumacaftor/ivacaftor, sold under the brand name Orkambi among others, is a combination of lumacaftor and ivacaftor used to treat people with cystic fibrosis who have two copies of the F508del mutation. It is unclear if it is useful in cystic fibrosis due to other causes. It is taken by mouth. Common side effects include shortness of breath, nausea, diarrhea, feeling tired, hearing problems, and rash. Severe side effects may include liver problems and cataracts. Ivacaftor increases the activity of the CFTR protein, while lumacaftor improves folding of the CFTR protein. It was approved for medical use in the United States in 2015, and in Canada in 2016. In the United States it costs more than US$22,000 a month as of 2018. While its use was not recommended in the United Kingdom as of 2018, pricing was agreed upon in 2019 and it was expected to be covered there by November of that year. Medical use: The combination of lumacaftor/ivacaftor is used to treat people with cystic fibrosis who have two copies of the F508del mutation in the cystic fibrosis transmembrane conductance regulator (CFTR), the defective protein that causes the disease. This genetic abnormality is present in about half of cystic fibrosis cases in Canada. Its use was not recommended for anyone with cystic fibrosis in the United Kingdom as of 2018. While the medication resulted in improvement in the amount of air a person can breathe out in one second, the improvement seen did not reach a clinically important amount. The medication also does not appear to change a person's quality of life or the number of times a year a person has a worsening of lung function. Effects on life expectancy are unclear. Side effects: Some people taking the combination drug had elevated transaminases; the combination drug should be used with caution in people with advanced liver disease, and liver function should be measured for the first three months for all people starting the combination drug. Some people starting the combination experience respiratory discomfort, and some children taking the combination drug developed cataracts. Lumacaftor/ivacaftor may interfere with hormonal contraceptives. Dosage of the combination drug should be reduced if the person is taking a drug that inhibits CYP3A, and inducers of CYP3A should not be used concomitantly. Mechanism of action: F508del is a mutation that causes the CFTR protein to misfold, and cells destroy such proteins soon after they are made; lumacaftor acts as a chaperone during protein folding and increases the number of CFTR proteins that are trafficked to the cell surface. Ivacaftor is a potentiator of CFTR already at the cell surface, increasing the probability that the defective channel will be open and allow chloride ions to pass through the channel pore. The two drugs have synergistic effects. Physical properties: Each of lumacaftor and ivacaftor is a white to off-white powder that is practically insoluble in water. The combination drug is a single pill containing 200 mg of lumacaftor and 125 mg of ivacaftor. History: Lumacaftor/ivacaftor was approved by the FDA in July 2015 under breakthrough therapy status and under a priority review. Previously approved for adults and pre-teens, it was approved on August 7, 2018, for children aged 2–5.
Society and culture: As of March 2016 the combination drug cost $259,000 a year in the United States. In Denmark, it was estimated in August 2015 that if the drug were introduced, the cost would amount to 2 million Danish kroner (approximately 270,000 euros) each year per person. The Dutch Minister of Health announced in October 2017 that the drug would not be admitted to the public health insurance package, making it impossible to have treatment with the drug covered by Dutch health insurance. The minister stated that the price for the drug, negotiated to 170,000 euros per patient per year, is "unacceptably high in relation to the relatively modest effect, as determined by the (Dutch) Healthcare Institute". Approximately 750 patients were affected by this decision. Society and culture: On 25 October, the Dutch Minister of Health announced that an agreement had been brokered with Vertex Pharmaceuticals, the company that manufactures the drug, resulting in admittance to the Dutch public health insurance package. Part of the agreement is that the result of the negotiation about the price of the treatment will not be disclosed. Protracted discussions within the United Kingdom were brought to a conclusion in September and October 2019, as NHS Scotland and NHS England each struck deals with Vertex. This followed discussions in which Vertex had wanted £105,000 per patient for Orkambi. The drug was not patented in Argentina, so it can be made by other companies. Buyers' clubs in the UK have been buying the generic version from the Argentinian company Gador.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Command history** Command history: Command history is a feature in many operating system shells, computer algebra programs, and other software that allows the user to recall, edit and rerun previous commands. Command line history was added to Unix in Bill Joy's C shell of 1978; Joy took inspiration from an earlier implementation in Interlisp. It quickly became popular because it made the C shell fast and easy to use. History has since become a standard feature in other shells, including ksh, Bash and Microsoft's cmd.exe. History addressed two important scenarios: executing the same command or a short sequence of commands over and over (for example, a developer frequently compiling and running a program), and correcting mistakes or rerunning a command with only a small modification. In Joy's original C shell, the user could refer to a previous command by typing an exclamation mark, !, followed by additional characters to specify a particular command, only certain words, or to edit it in some way before pasting it back into the command line. For example: !! meant the entire previous command; !$ meant just the last word of the previous command; !abc meant the command that started with abc. The usual implementation today combines history with command-line editing: the cursor keys are used to navigate up and down through the history list and left or right to any place on the line, where the user can simply type a desired change. Some implementations are instead menu-based: the user presses a certain function key which displays a menu of recent commands, from which the user can select one by typing its number. Command history: Some implementations, such as Bash, support recording the command history to a file (see the history command).
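The same recall-and-persist pattern that shells use can be embedded in any interactive program. The sketch below is an illustration using Python's standard readline module (available on Unix-like systems); the history file name ~/.demo_history and the length cap are arbitrary choices for this example, not a convention of any particular shell.

```python
import atexit
import os
import readline

# Persistent command history for a toy prompt, mirroring what Bash does
# with its history file.
histfile = os.path.expanduser("~/.demo_history")  # arbitrary example path

try:
    readline.read_history_file(histfile)   # recall lines from earlier sessions
except FileNotFoundError:
    pass                                    # first run: no history yet

readline.set_history_length(1000)           # cap the list, as shells do
atexit.register(readline.write_history_file, histfile)

# Every line typed at the prompt is appended to the in-memory history;
# the up/down arrow keys navigate it, and it is written back on exit.
while True:
    line = input("demo> ")
    if line == "exit":
        break
    print(line)
```

Because readline hooks into input(), the arrow-key navigation and line editing described above come for free once the module is imported.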
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Food column** Food column: A food column is a type of newspaper column dealing with food. It may be focused on recipes, health trends, or improving efficiency. It is generally geared towards gourmets or "foodies". Since 1994, food writers have also written columns and blogs on the web. Kate Heyhoe's Internet column first appeared on the electronic Gourmet Guide in December 1994 and became the centerpiece of its own website, The Global Gourmet, in 1996, making her one of the longest continuously running food bloggers/columnists on the web. Food columnists in the English-speaking world: Some food columnists of note include: Julia Child, Craig Claiborne, John T. Edge, Kate Heyhoe, Judith Huxley, Christopher Kimball, Sheila Lukins, Wolfgang Puck (Wolfgang Puck's Kitchen), Sylvia Schur and Ruth Ellen Church.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**General-purpose macro processor** General-purpose macro processor: A general-purpose macro processor or general purpose preprocessor is a macro processor that is not tied to or integrated with a particular language or piece of software. A macro processor is a program that copies a stream of text from one place to another, making a systematic set of replacements as it does so. Macro processors are often embedded in other programs, such as assemblers and compilers. Sometimes they are standalone programs that can be used to process any kind of text. Macro processors have been used for language expansion (defining new language constructs that can be expressed in terms of existing language components), for systematic text replacements that require decision making, and for text reformatting (e.g. conditional extraction of material from an HTML file).
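To make the "systematic set of replacements" concrete, here is a deliberately small sketch of a general-purpose macro processor. It is an illustration only: the define/expand interface, the word-boundary matching rule, and the rescan limit are assumptions for this example, far simpler than real processors such as m4.

```python
import re

# A toy macro processor: copy text while replacing defined macro names,
# rescanning until the output stabilizes (so macros may expand to macros).
macros: dict[str, str] = {}

def define(name: str, body: str) -> None:
    macros[name] = body

def expand(text: str, max_passes: int = 10) -> str:
    for _ in range(max_passes):          # bound rescans to avoid infinite loops
        out = re.sub(
            r"\b\w+\b",                  # candidate macro names: whole words
            lambda m: macros.get(m.group(0), m.group(0)),
            text,
        )
        if out == text:                  # fixed point reached: nothing expanded
            return out
        text = out
    return text

define("LANG", "HTML")
define("GREETING", "Hello from a LANG file")
print(expand("GREETING!"))               # -> Hello from a HTML file!
```

The rescan loop is what lets a macro's body itself contain macro names, which is the mechanism behind "language expansion": new constructs are defined in terms of existing ones and rewritten away before the host tool ever sees them.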
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**33344-33434 tiling** 33344-33434 tiling: In the geometry of the Euclidean plane, a 33344-33434 tiling is either of two of the 20 2-uniform tilings of the Euclidean plane by regular polygons. Both contain regular triangle and square faces, arranged in two vertex configurations: 3.3.3.4.4 and 3.3.4.3.4. The first has triangles in groups of 3 and squares in groups of 1 and 2; it has 4 types of faces and 5 types of edges. The second has triangles in groups of 4 and squares in groups of 2; it has 3 types of faces and 6 types of edges. Geometry: Its two vertex configurations are shared with two 1-uniform tilings: Circle Packings: These 2-uniform tilings can be used as circle packings. Circle Packings: In the first 2-uniform tiling (whose dual resembles a key-lock pattern): cyan circles are in contact with 5 other circles (3 cyan, 2 pink), corresponding to the V33.42 planigon, and pink circles are also in contact with 5 other circles (4 cyan, 1 pink), corresponding to the V32.4.3.4 planigon. It is homeomorphic to the ambo operation on the tiling, with the cyan and pink gap polygons corresponding to the cyan and pink circles (mini-vertex configuration polygons; one-dimensional duals to the respective planigons). Both images coincide. Circle Packings: In the second 2-uniform tiling (whose dual resembles jagged streams of water): cyan circles are in contact with 5 other circles (2 cyan, 3 pink), corresponding to the V33.42 planigon, and pink circles are also in contact with 5 other circles (3 cyan, 2 pink), corresponding to the V32.4.3.4 planigon. It is homeomorphic to the ambo operation on the tiling, with the cyan and pink gap polygons corresponding to the cyan and pink circles (mini-vertex configuration polygons; one-dimensional duals to the respective planigons). Both images coincide. Circle Packings: Dual tilings The dual tilings have right triangle and kite faces, defined by the face configurations V3.3.3.4.4 and V3.3.4.3.4, and can be seen as combining the prismatic pentagonal tiling and the Cairo pentagonal tiling.
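The reason both vertex configurations can occur in an edge-to-edge tiling by regular polygons follows from a simple angle count: the interior angles of the polygons meeting at each vertex must sum to a full turn. Each configuration uses three equilateral triangles (60° each) and two squares (90° each), differing only in their cyclic order around the vertex:

```latex
% Angle check for the two vertex configurations
\begin{align*}
3.3.3.4.4 &: \quad 60^\circ + 60^\circ + 60^\circ + 90^\circ + 90^\circ = 360^\circ\\
3.3.4.3.4 &: \quad 60^\circ + 60^\circ + 90^\circ + 60^\circ + 90^\circ = 360^\circ
\end{align*}
```

The angle condition is necessary but not sufficient; the 2-uniform property additionally requires that the two vertex types fit together consistently across the whole plane.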
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Screen test** Screen test: A screen test is a method of determining the suitability of an actor or actress for performing on film or in a particular role. The performer is generally given a scene, or selected lines and actions, and instructed to perform in front of a camera to see if they are suitable. The developed film is later evaluated by the relevant production personnel, such as the casting director and the director. The actor may be asked to bring a prepared monologue or, alternatively, may be given a script to read at sight ("cold reading"). In some cases, the actor may be asked to perform a scene in which another performer reads the lines of another character. Types: Screen tests can also be used to judge the suitability of costume, make-up and other details, but these are generally called costume or make-up tests. Different types of actors can be given different tasks in each individual test. For example, a lead for a musical theater-type movie could be requested to sing a popular song or learn a dance routine. Screen tests are routinely used in films and commercials. They are also used for short films. Types: International actors such as Bruce Lee were given screen tests to demonstrate that they were sufficiently articulate in the relevant language. In Lee's case, for the role of Kato in The Green Hornet, he was asked to converse about Chinese culture in English to judge his grasp of the language, then to demonstrate some martial arts moves to show off his physical skills.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Broken Picture Telephone** Broken Picture Telephone: Broken Picture Telephone, sometimes abbreviated BPT, was a collaborative multiplayer online drawing and writing game invented in 2007, based on the pen-and-paper game Telephone Pictionary. Gameplay: Like the children's game called broken telephone or simply telephone, also known as Chinese whispers, Broken Picture Telephone relies on the breakdown of communication for entertainment value. Broken Picture Telephone's gameplay involves a series of 11 or more rounds per game, in which each player can participate in only one round. The first and last rounds always require a text contribution; written-contribution rounds alternate with drawing-contribution rounds. Whichever player is randomly selected to play round two creates a drawing based on the text provided in round one; the next randomly selected player writes a description of the drawing from round two; the round four player draws whatever the round three player described; and so on. Gameplay: For writing rounds, there is a character limit constraining possible submissions. For drawing rounds, the tools provided are rudimentary, consisting of eleven colors and a few brush sizes in the 2009 edition of the game. Each player has a maximum of ten minutes to submit their description or drawing. Games persist on the BrokenPictureTelephone.com site until finished, so that players can join a game hours or even days after it was begun. Until each game is concluded by the submission of its final text round, the full sequence of rounds is not visible to any site visitor, and when playing a round, players can see only the round immediately preceding their own. In order to deter inappropriate user behavior, players must register using a valid email address. Games with mature content are flagged as such by users (either the player who added the mature content, or any other user who views the game), and users can opt not to be shown any games with content flagged as mature. History: Broken Picture Telephone was created by American indie developer Alishah Novin in 2007. After Jay Is Games published a review of the game in June of that year, the influx of new players temporarily overwhelmed the BrokenPictureTelephone.com servers, even though the game had been migrated to new servers in anticipation of such an increase in site visitors. Problems with server load continued, along with some bugs in the game's code and issues with malicious users trolling games; Novin took the game offline in 2010. Only a message saying that development was continuing to address the problems with the game's functionality remained accessible on the website. An Android app version of the game was released 13 October 2012, with the first bugfix release, numbered 1.01, following on 16 October. The browser version of the game remained defunct for several years until it was relaunched in 2013. Reception: Broken Picture Telephone was rated #62 in PC Magazine's list of the Top 100 Web Sites of 2009, and #5 in Jay Is Games's top ten games of 2009 in the Simple Idea category. Gamezebo praised the way its gameplay "tends to rapidly degenerate into hilarious misunderstandings" and called it "maybe one of the greatest online games ever." Appszoom magazine called the Android release "insanely-addicting", and Jay Is Games noted that site visitors "can spend a lot of time just browsing through the archives of completed games and laughing at the results."
Academic analysis has identified BPT with the New Games movement, due to its goal being "a shared fun experience, rather than one team winning and one losing." Numerous gaming-review sites lamented the 2010 shuttering of the game. Similar games: Online multiplayer games with similar mechanisms of play, some of which were first released during the period when BPT was not available, include Broken Phone, DoodleOrDie, Drawception, DrawGuess and Teledraw. Similar games: Gartic Phone. While Onrizon Social Games' Gartic.io website launched in 2017, the company found massive success upon launching Gartic Phone in December 2020. Like BPT and similar games, it combines elements of Pictionary and Broken Telephone. In Gartic Phone, players sketch a word or phrase and pass it to the next player, who must guess what the drawing represents. The game continues in a loop until the final player compares the last sentence with the original starting sentence. It offers various game modes, including Normal, Knock-Off, Animation, and Crowd, and several new modes were introduced in early 2023. Gartic Phone is accessible through its website and supports multiple languages, and it runs on PCs, tablets, and smartphones with an internet connection. Similar to Jackbox Games' party video games, Gartic Phone's "Crowd" mode allows for audience voting via invite links. Since Gartic Phone's success, Onrizon has developed the spin-off games Gartic.TV and Gartic Show! In 2023, Gartic Phone was released as a free game on the Discord social platform.
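The alternating round structure described in the Gameplay section above maps naturally onto a small data model. The following is a minimal Python sketch of a BPT-style game chain; the class and method names are hypothetical illustrations, not the actual BrokenPictureTelephone.com implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Round:
    kind: str        # "text" or "drawing"
    author: str
    content: str     # description text, or a reference to a drawing

@dataclass
class Game:
    rounds: List[Round] = field(default_factory=list)

    def next_kind(self) -> str:
        # Round one is always text; kinds then alternate, so rounds at
        # even indices (0, 2, ...) are text and odd indices are drawings.
        return "text" if len(self.rounds) % 2 == 0 else "drawing"

    def visible_round(self) -> Optional[Round]:
        # A player sees only the round immediately preceding their own.
        return self.rounds[-1] if self.rounds else None

    def submit(self, author: str, content: str) -> None:
        # Each player may contribute only one round per game.
        if any(r.author == author for r in self.rounds):
            raise ValueError(f"{author} has already played in this game")
        self.rounds.append(Round(self.next_kind(), author, content))

    def finished(self) -> bool:
        # A game concludes with a final text round after 11 or more rounds.
        return len(self.rounds) >= 11 and self.rounds[-1].kind == "text"
```

Note how the even/odd alternation automatically makes both the first and last rounds text contributions whenever the total round count is odd (such as the minimum of 11).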
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Brewster Color** Brewster Color: Brewster Color was an early subtractive color film process. A two color process was invented by Percy Douglas Brewster in 1913, based on the earlier work of William Friese-Greene. It attempted to compensate for previous methods' problems with contrast. Brewster introduced a three color process in 1935, in an unsuccessful attempt to compete with Technicolor. Two color process: In his first patent application, filed February 11, 1913, American inventor Percy Douglas Brewster described a new color film process: The exposure is made through a ray filter, preferably light yellow in color and adapted to cut off all the violet and ultra-violet rays of light. The green and blue light with the addition of some yellow, after passing through the ray filter, acts upon the panchromatic emulsion on the front of the film, while the red and orange light with some yellow passes through the film and acts upon the panchromatic emulsion on the back of the film. The color that the transparent emulsion is stained prevents the passage of a substantial amount of blue and green light through the film to act upon the panchromatic film on the back. Two color process: Over the next eight years, Brewster filed a series of further patents pertaining to photographic film, film development, color cinematography, and various improvements to the process. In 1917, a patent for a method of "Coloring or Dyeing Photographic Images" was issued to Hoyt Miller, chief chemist of the Brewster Color Film Corporation, and assigned to the corporation. Two color process: Use in motion pictures. Brewster's process was used for the first color animated cartoon, 1920's The Debut of Thomas Cat. However, the production company, Bray Pictures, deemed the process too expensive and did not employ it again. As other color processes became available, Brewster Color continued to be preferred by some filmmakers due to its relatively low cost and greater availability for small production runs. It began to fall out of use in the late 1920s, in favor of the Prizma process. In April 1944, a syndicate was formed to purchase the rights to the Brewster Color process and use it to produce films at studios in New York and Washington, D.C. Stanley Neal, a member of the syndicate and owner of its laboratory, was mainly known for the production of industrial films and advertising shorts. Three color process: In 1935, Brewster introduced a three color process which added yellow tinting. Though demonstration films received praise from members of the Royal Photographic Society for their "remarkable steadiness" and "extraordinarily good reds", the method failed to meet with commercial success. Brewster v. Technicolor: Brewster filed a lawsuit against Technicolor, Inc. and Technicolor Motion Picture Corporation on April 1, 1941. It sought $100,000 in damages and an injunction, stating that they had infringed on patents for a "method and apparatus for color cinematography." On October 7, 1941, the judge overruled defense objections to some of the plaintiff's interrogatories. This procedural decision has been cited in some subsequent cases as "2 F.R.D. 186, 51 U.S.P.Q. 319". No further public filings were made by Brewster, suggesting that the case may have been settled out of court.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Tween (software)** Tween (software): Tween is a Twitter client for Microsoft Windows, written in Visual Basic .NET. It was one of the most popular Twitter clients in Japan, and it was open-source. Summary: Originally, Tween was developed for heavy users, and for this reason it offers a wide range of functions. It has a web mode and an API mode, and users can switch between the two. Web mode allows users to avoid the Twitter API rate limit. History: The first beta version was released at Hatena Diary on November 21, 2007. Originally, web mode was the only way to retrieve data; API mode was implemented later. Version 0.7.7.7 was released on October 2, 2009, with API mode set as the default. Versions 1.2.0.0 and later became proprietary software.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Information technology in India** Information technology in India: The information technology industry in India comprises information technology services and business process outsourcing. The share of the IT-BPM sector in the GDP of India was 7.4% in FY 2022. The IT and BPM industries' revenue is estimated at US$245 billion in FY 2023. The domestic revenue of the IT industry is estimated at $51 billion, and export revenue is estimated at $194 billion in FY 2023. The IT–BPM sector overall employs 5.4 million people as of March 2023. In December 2022, Union Minister of State for Electronics and IT Rajeev Chandrasekhar, in a written reply to a question in the Rajya Sabha, stated that IT units registered with the state-run Software Technology Parks of India (STPI) and Special Economic Zones had exported software worth Rs 11.59 lakh crore in 2021–22. History: The Electronics Committee, also known as the "Bhabha Committee", created a 10-year (1966–1975) plan that laid the foundation for India's IT services industry. The industry was born in Mumbai in 1967 with the establishment of Tata Consultancy Services, which in 1977 partnered with Burroughs, beginning India's export of IT services. The first software export zone, SEEPZ – the precursor to the modern-day IT park – was established in Mumbai in 1973. More than 80 percent of the country's software exports came from SEEPZ in the 1980s. History: Within 90 days of its establishment, the Task Force produced an extensive background report on the state of technology in India and an IT Action Plan with 108 recommendations. The Task Force could act quickly because it built upon the experience and frustrations of state governments, central government agencies, universities, and the software industry. Much of what it proposed was also consistent with the thinking and recommendations of international bodies like the World Trade Organization (WTO), International Telecommunication Union (ITU), and World Bank. In addition, the Task Force incorporated the experiences of Singapore and other nations that implemented similar programs. It was less a task of invention than of sparking action on a consensus that had already evolved within the networking community and government. History: Regulated VSAT links became visible in 1994. Desai (2006) describes the steps taken to relax regulations on linking in 1991: In 1991 the Department of Electronics broke this impasse, creating a corporation called Software Technology Parks of India (STPI) that, being owned by the government, could provide VSAT communications without breaching its monopoly. STPI set up software technology parks in different cities, each of which provided satellite links to be used by firms; the local link was a wireless radio link. In 1993 the government began to allow individual companies their own dedicated links, which allowed work done in India to be transmitted abroad directly. Indian firms soon convinced their American customers that a satellite link was as reliable as a team of programmers working in the clients' office. History: A joint EU-India group of scholars was formed on 23 November 2001 to further promote joint research and development. On 25 June 2002, India and the European Union agreed to bilateral cooperation in the field of science and technology. Since 2017, India has held Associate Member State status at CERN, while a joint India-EU Software Education and Development Center will be located in Bangalore.
History: In recent years there has been a boom in startups in India across all industries, but especially in the information technology sector. This boom is partly due to various startup schemes such as the Startup India scheme and T-Hub. Schemes like these provide resources to support the creation of new startups, in the hope of stimulating the economy and putting India at the forefront of innovation across all sectors. While the scheme has supported and incubated many companies and helped them succeed, there has been a lack of active support for Scheduled Tribes (STs) and Scheduled Castes (SCs) in its action plans. This reflects a trend across the information technology sector as a whole, with marginalized communities having a harder time breaking into this booming industry. Indian IT revenues: In the contemporary world economy, India is the largest exporter of IT. The contribution of the IT sector to India's GDP rose from 1.2% in 1998 to 10% in 2019. Exports dominate the Indian IT industry and constitute about 79% of the industry's total revenue. However, the domestic market is also significant, with robust revenue growth. The industry's share of total Indian exports (merchandise plus services) increased from less than 4% in FY1998 to about 25% in FY2012. The technologically-inclined services sector in India accounted for 40% of the country's GDP and 30% of export earnings as of 2006, while employing only 25% of its workforce, according to Sharma (2006). According to Gartner, the "Top Five Indian IT Services Providers" are Tata Consultancy Services, Infosys, Wipro, Tech Mahindra, and HCL Technologies. The IT and BPM industry's revenue was estimated at US$194 billion in FY 2021, an increase of 2.3% year on year. The domestic revenue of the IT industry was estimated at US$45 billion and export revenue at US$150 billion in FY 2021. The IT industry employed almost 2.8 million employees in FY 2021. The IT–BPM sector overall employs 5.4 million people as of March 2023. In 2022, companies within the sector faced significant employee attrition and intense competition in hiring. Indian IT revenue grew at its fastest rate in a decade, to $227 billion, in the COVID-19 pandemic-hit FY22. NASSCOM, in its Strategic Review, predicted that the IT industry can achieve the ambitious target of US$350 billion in revenue by FY26, growing at a rate of 11–14 per cent. State-wise revenue in IT exports: Below is the state-wise list of revenue in IT exports as of FY2023. Largest Indian IT companies based on market capitalisation: Top IT services companies in India in 2022 by market capitalization. In September 2021, TCS recorded a market capitalisation of US$200 billion, making it the first Indian IT company to do so. On 24 August 2021, Infosys became the fourth Indian company to reach $100 billion in market capitalization. Largest Indian IT companies in India based on revenue: Top IT services companies in India in 2022 by revenue. Major information technology hubs: Bangalore. Bangalore is a global technology hub and is India's biggest tech hub. As of fiscal 2016–17, Bangalore accounted for 38% of total IT exports from India, worth $45 billion, employing 10 lakh (1 million) people directly and 30 lakh (3 million) indirectly. The city is known as the "Silicon Valley of India". Bangalore is also known as the "startup capital of India"; the city is home to 44 percent of all Indian unicorn startup companies as of 2020.
Major information technology hubs: Hyderabad. Hyderabad – known for the HITEC City or Cyberabad – is India's second largest information technology exporter and a major global IT hub, and the largest bioinformatics hub in India. Hyderabad has emerged as the second largest city in the country for software exports, pipping competitors Chennai and Pune. Chennai. As of 2018, Chennai is India's third-largest exporter of information technology (IT) and business process outsourcing (BPO) services, after Bangalore and Hyderabad. TIDEL Park in Chennai was billed as Asia's largest IT park when it was built. Major information technology hubs: Kolkata. Greater Kolkata is the biggest IT hub of East India. Most of the IT parks and offices are located at New Town and Bidhannagar. The Salt Lake Electronics Complex in Bidhannagar (Salt Lake Sector V) is India's first fully integrated electronics complex. As of 2020, the IT sector employs more than 200,000 people directly. Total exports from the IT sector were estimated at ₹25,918 crore in 2021–22. In 2022, Kolkata generated 20,000 direct IT jobs in just six months, an all-time high for the IT industry in East India. Major information technology hubs: Pune. The Rajiv Gandhi Infotech Park in Hinjawadi is a ₹60,000 crore (US$8.9 billion) project by the Maharashtra Industrial Development Corporation (MIDC). The IT park encompasses an area of about 2,800 acres (11 km2) and is home to over 800 IT companies of all sizes. Delhi NCR. Delhi NCR is one of the major IT hubs in India. Cities in the NCR, such as Gurgaon and Noida, host many companies that serve local and global markets. Controversies: The Indian IT-BPM industry has the highest employee attrition rate. In recent years, the industry has seen a surge in resignations at all levels. As a global outsourcing hub, the Indian IT industry benefits from a lower cost of living and the consequent cheaper labor. In the last decade, most of the IT companies have developed indigenous R&D and innovation capabilities to develop home-grown IT products. As the IT–BPM sector evolves, many are concerned that artificial intelligence (AI) will drive significant automation and destroy jobs in the coming years. In recent years, many IT workers have used forged experience certificates to gain entry into the Indian IT industry. These fake documents are provided by consultancies mainly operating out of Hyderabad and Bangalore. Candidates frequently use proxy interviews to clear hiring rounds, but the majority of such phoney candidates are rejected during the interview round. A 2017 study of technical support scams published at the NDSS Symposium found that, of the tech support scams in which the IPs involved could be geolocated, 85% could be traced to locations in India. Indian call centres are infamous for defrauding customers from the US and Europe. Kolkata, Bangalore, Hyderabad, and Mumbai are the main operating locations for these fraud call centres.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Corduroy** Corduroy: Corduroy is a textile with a distinctively raised "cord" or wale texture. Modern corduroy is most commonly composed of tufted cords, sometimes exhibiting a channel (bare to the base fabric) between them. Both velvet and corduroy derive from fustian fabric. Corduroy looks as if it is made from multiple cords laid parallel to each other. Etymology: The word corduroy is from cord and duroy, a coarse woollen cloth made in England in the 18th century. The popular belief that the name derives from the French corde du roi ("the king's cord") is probably a false etymology: the origin of duroy is not attested, and even if it did mean du roi, it would not follow that the full phrase derives from a French corde du roi. Variations: Corduroy is made by weaving extra sets of fibre into the base fabric to form vertical ridges called wales. The wales are built so that clear lines can be seen when they are cut into pile. Variations: Corduroy is considered a durable cloth and is found in the construction of trousers, jackets, and shirts. The width of the wales varies between fabric styles and is specified by wale count—the number of wales per inch. A wale is a column of loops running lengthwise, corresponding to the warp of woven fabric. The lower the number, the thicker the wales' width (e.g., 4-wale is much thicker than 11-wale). Wale count per inch can vary from 1.5 to 21, although the traditional standard is usually between 10 and 12. Wide wale is more commonly used in trousers and furniture upholstery (primarily couches); medium, narrow, and fine wale fabrics are usually found in garments worn above the waist. The primary types of corduroy are:

- Standard wale, at 11 wales/inch, available in many colours
- Pincord (also called pinwale or needlecord), the finest cord, with a count at the upper end of the spectrum (above 16)
- Pigment-dyed/printed corduroy, where the fabric is coloured or printed with pigment dyes. The dye is applied to the surface; then the garment is cut and sewn. When washed during the final manufacturing phase, the pigment dye washes out in an irregular way, creating a vintage look. Because of these subtle colour variations, no two garments of pigment-dyed corduroy are exactly alike, and their colour becomes softer with each washing.

Corduroy is traditionally used in making British country clothing, even though its origin lies in items worn by townspeople in industrial areas. Although it has existed for a long time and has been used in Europe since the 18th century, only in the 20th century did it become global, notably expanding in popularity during the 1970s. Other names: Other names are often used for corduroy. Alternative names include corded velveteen, elephant cord (the thick-stripes version), pin cord, Manchester cloth and cords. In continental Europe, corduroy is known as "Cord", "rib cord" or "rib velvet"; in parts of Europe such as Germany, the Czech Republic, Slovakia, the Netherlands and Belgium it used to be known simply as "Manchester", which remains the current name for corduroy in Swedish. In Portugal, corduroy is associated with a completely different type of fabric, "bombazine", and is referred to as such. In Greece and Cyprus corduroy trousers are known as kotlé pants. In Iran they are referred to as "Makhmal Kebrity" (velvet matchstick) or just "kebrity" (matchstick) pants, as the width of a cord resembles that of a matchstick.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Papoose board** Papoose board: In the medical field, a papoose board is a temporary medical stabilization board used to limit a patient's freedom of movement, to decrease the risk of injury while allowing safe completion of treatment. The term papoose board refers to a brand name. Papoose board: It is most commonly used during dental work, venipuncture, and other medical procedures. It is also sometimes used during medical emergencies to keep an individual from moving when total sedation is not possible. It is usually used on patients as a means of temporarily and safely limiting movement and is generally more effective than holding the person down. It is mostly used on young patients and patients with special needs. Papoose board: A papoose board is a cushioned board with fabric Velcro straps that can be used to help limit a patient's movement and hold them steady during the medical procedure. Sometimes oral, IV or gas sedation such as nitrous oxide will be used to calm the patient prior to or during use. Using a papoose board to temporarily and safely limit movement is often preferable to medical sedation, which presents serious potential risks, including death. As a result, restraint is preferred by some parents as an alternative to sedation, behavior management/anxiety reduction techniques, better pain management or a low-risk anxiolytic such as nitrous oxide. Informed consent from a parent or guardian is usually required before a papoose board can be used. If assent from the child is required, then in most cases the papoose board would be prohibited, as it is unlikely that a child would agree to restraint and not struggle. In some countries, such as the U.K., the papoose board is banned and considered a serious breach of ethics. Use of papoose boards in dentistry: The American Academy of Pediatric Dentistry approves of partial or complete stabilization of the patient in cases when it is necessary to protect the patient, practitioner, staff, or parent from injury while providing dental care. As of 2004, 85 percent of dental programs across the U.S. taught protective stabilization as an acceptable behavioral management practice. In 2004, The Colorado Springs Gazette reported that the dental chain Small Smiles Dental Centers had used papoose boards almost 7,000 times in one 18-month period, according to Colorado state records. Michael and Edward DeRose, two of the owners of Small Smiles, said that they used papoose boards so that they could do dental work on larger numbers of children more rapidly. Small Smiles dentists from other states learned the papoose board method in Colorado and began practicing it in other states. As a result, a committee appointed by the Colorado Board of Dental Examiners established a new Colorado state law forbidding the use of papoose boards on children unless a dentist has exhausted other possibilities for controlling a child's behavior; if the dentist uses a papoose board, he or she must document in the patient's record why it was used. Controversies: In some countries, the papoose board is banned and considered a serious breach of ethical practice. Although the papoose board is discussed as a behavior management technique, it is simply a restraint technique, and an ethically questionable one, since it prevents behavior from occurring that could instead be managed with recognized behavioral and anxiety-reduction techniques.
Origins: The papoose board takes its name from the wood-and-leather devices used by many Native American tribes to swaddle their infants and children. Such boards, also known as cradleboards, are still in use in many places.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**GSDMB** GSDMB: Gasdermin B is a protein that in humans is encoded by the GSDMB gene. Function: This gene encodes a member of the gasdermin-domain containing protein family. Other gasdermin-family genes are implicated in the regulation of apoptosis in epithelial cells, and are linked to cancer. Multiple transcript variants encoding different isoforms have been found for this gene. Additional variants have been described, but they are candidates for nonsense-mediated mRNA decay (NMD) and are unlikely to be protein-coding.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Quantum critical point** Quantum critical point: A quantum critical point is a point in the phase diagram of a material where a continuous phase transition takes place at absolute zero. A quantum critical point is typically achieved by continuous suppression of a nonzero-temperature phase transition to zero temperature by the application of pressure or field, or through doping. Conventional phase transitions occur at nonzero temperature when the growth of random thermal fluctuations leads to a change in the physical state of a system. Condensed matter physics research over the past few decades has revealed a new class of phase transitions, called quantum phase transitions, which take place at absolute zero. In the absence of the thermal fluctuations which trigger conventional phase transitions, quantum phase transitions are driven by the zero-point quantum fluctuations associated with Heisenberg's uncertainty principle. Overview: Within the class of phase transitions, there are two main categories: at a first-order phase transition, the properties shift discontinuously, as in the melting of a solid, whereas at a second-order phase transition, the state of the system changes in a continuous fashion. Second-order phase transitions are marked by the growth of fluctuations on ever-longer length scales. These fluctuations are called "critical fluctuations". At the critical point where a second-order transition occurs, the critical fluctuations are scale invariant and extend over the entire system. At a nonzero-temperature phase transition, the fluctuations that develop at a critical point are governed by classical physics, because the characteristic energy of quantum fluctuations is always smaller than the characteristic Boltzmann thermal energy $k_B T$ (this criterion is sketched at the end of this entry). At a quantum critical point, the critical fluctuations are quantum mechanical in nature, exhibiting scale invariance in both space and time. Unlike classical critical points, where the critical fluctuations are limited to a narrow region around the phase transition, the influence of a quantum critical point is felt over a wide range of temperatures above the quantum critical point, so the effect of quantum criticality is felt without ever reaching absolute zero. Quantum criticality was first observed in ferroelectrics, in which the ferroelectric transition temperature is suppressed to zero. Overview: A wide variety of metallic ferromagnets and antiferromagnets have been observed to develop quantum critical behavior when their magnetic transition temperature is driven to zero through the application of pressure, chemical doping or magnetic fields. In these cases, the properties of the metal are radically transformed by the critical fluctuations, departing qualitatively from standard Fermi liquid behavior to form a metallic state sometimes called a non-Fermi liquid or a "strange metal". There is particular interest in these unusual metallic states, which are believed to exhibit a marked preponderance towards the development of superconductivity. Quantum critical fluctuations have also been shown to drive the formation of exotic magnetic phases in the vicinity of quantum critical points. Quantum critical endpoints: Quantum critical points arise when a susceptibility diverges at zero temperature. There are a number of materials (such as CeNi2Ge2) where this occurs serendipitously. More frequently a material has to be tuned to a quantum critical point.
Most commonly this is done by taking a system with a second-order phase transition which occurs at nonzero temperature and tuning it—for example by applying pressure or magnetic field, or by changing its chemical composition. CePd2Si2 is such an example, where the antiferromagnetic transition which occurs at about 10 K at ambient pressure can be tuned to zero temperature by applying a pressure of 28,000 atmospheres. Less commonly, a first-order transition can be made quantum critical. First-order transitions do not normally show critical fluctuations, as the material moves discontinuously from one phase into another. However, if the first-order phase transition does not involve a change of symmetry, then the phase diagram can contain a critical endpoint where the first-order phase transition terminates. Such an endpoint has a divergent susceptibility. The transition between the liquid and gas phases is an example of a first-order transition without a change of symmetry, and its critical endpoint is characterized by critical fluctuations known as critical opalescence. Quantum critical endpoints: A quantum critical endpoint arises when a nonzero-temperature critical point is tuned to zero temperature. One of the best-studied examples occurs in the layered ruthenate metal Sr3Ru2O7 in a magnetic field. This material shows metamagnetism, with a low-temperature first-order metamagnetic transition at which the magnetization jumps when a magnetic field is applied parallel to the layers. The first-order jump terminates in a critical endpoint at about 1 kelvin. By switching the direction of the magnetic field so that it points almost perpendicular to the layers, the critical endpoint is tuned to zero temperature at a field of about 8 teslas. The resulting quantum critical fluctuations dominate the physical properties of this material at nonzero temperatures and away from the critical field: the resistivity shows a non-Fermi liquid response, the effective mass of the electrons grows, and the magnetothermal expansion of the material is modified, all in response to the quantum critical fluctuations.
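The classical-versus-quantum distinction drawn above can be made quantitative. The following LaTeX fragment is a sketch of the standard criterion, implied but not written out in the text above, comparing the energy of a fluctuation mode of characteristic frequency $\omega$ to the thermal energy:

```latex
% A fluctuation mode of characteristic frequency \omega behaves
% classically when the thermal energy dominates, and quantum
% mechanically when it does not:
\[
  \hbar\omega \ll k_B T \quad \text{(classical critical fluctuations)}
\]
\[
  \hbar\omega \gtrsim k_B T \quad \text{(quantum critical fluctuations)}
\]
% Near a conventional critical point the characteristic frequency
% softens (\omega \to 0, "critical slowing down"), so at any nonzero
% temperature the classical condition is eventually satisfied; only
% as T \to 0 do quantum fluctuations remain dominant.
```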
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Outdoor retailer** Outdoor retailer: An outdoor retailer or outdoor store is a retail business selling apparel and general merchandise for outdoor activities. The stores may cater for a range of activities, including camping, hunting, fishing, hiking, trekking, mountaineering, skiing, snowboarding, cycling, mountain biking, kayaking, rafting and water sports. They may carry a range of associated equipment, such as hiking boots, climbing harnesses, snowboards, kayaks, mountain bikes, paddleboards, climbing shoes, and tents. History: In 2017, the US Outdoor Retailer trade show moved out of Utah over the state's plan to remove the national monument designations for Bears Ears and Grand Staircase–Escalante. During late 2020 and early 2021, some outdoor retailers experienced a boom from the COVID-19 pandemic, with demand increasing for items like personal watercraft, bicycles, running shoes, hiking shoes, and walking shoes. In 2022, research in the United States found consumers were planning to spend less at outdoor retailers due to rising costs of living and other prices. In March 2022, the US Outdoor Retailer trade show announced a move back to Utah beginning in January 2023, despite the state's stance on national monuments. Several major retailers, such as Patagonia, REI, and The North Face, threatened to boycott the event. By market: United States. Prominent outdoor retailers in the United States include Dick's Sporting Goods, Eddie Bauer, Backcountry.com, Outdoor Voices, REI, Patagonia, Marmot, Moosejaw, Sierra, The North Face and L.L.Bean.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Augmented reality-based testing** Augmented reality-based testing: Augmented reality-based testing (ARBT) is a test method that combines augmented reality and software testing to enhance testing by inserting an additional dimension into the tester's field of view. For example, a tester wearing a head-mounted display (HMD) or augmented reality contact lenses that place images of both the physical world and registered virtual graphical objects over the user's view of the world can see virtual labels on areas of a system that clarify test operating instructions while performing tests on a complex system. In 2009, as a spin-off of augmented reality for maintenance and repair (ARMAR), Alexander Andelkovic coined the term 'augmented reality-based testing', introducing the idea of using augmented reality together with software testing. Overview: Test environments are becoming more complex, which places greater demands on test engineers' knowledge, testing skill, and efficiency. The virtual environment is a powerful, largely unexplored dimension: much information that is available today but impractical to use, because of the overhead in time needed to gather and present it, can be used instantly with ARBT. Application: ARBT can be of help in the following test situations:

- Support: Assembling and disassembling a test object can be taught, and practice scenarios can be run through to learn how to fix fault scenarios that may occur.
- Guidance: The risk of misunderstanding complex test procedures can be minimized by virtually describing test steps in front of the tester on the actual test object.
- Educational: Background information about the test scenario can be shown, with earlier bugs pointed out on the test object and reminders to avoid repeating previous mistakes made during testing of the selected test area.
- Training: Junior testers can learn complex test scenarios with less supervision. Test steps are pointed out along with information about pass criteria, and the junior tester can practice before the functionality is finished and do some regression testing.
- Informational: The tester can point at a physical object and get detailed, up-to-date technical data and other information needed to perform the selected test task.
- Inspire: Testers performing exploratory testing who need inspiration about areas to explore can get instant information about earlier exploratory test sessions gathered through session-based testing.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Testosterone glucuronide** Testosterone glucuronide: Testosterone glucuronide is an endogenous, naturally occurring steroid and minor urinary metabolite of testosterone.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Woodin cardinal** Woodin cardinal: In set theory, a Woodin cardinal (named for W. Hugh Woodin) is a cardinal number $\lambda$ such that for all functions $f:\lambda\to\lambda$ there exists a cardinal $\kappa<\lambda$ with $\{f(\beta)\mid\beta<\kappa\}\subseteq\kappa$ and an elementary embedding $j:V\to M$ from the von Neumann universe $V$ into a transitive inner model $M$ with critical point $\kappa$ and $V_{j(f)(\kappa)}\subseteq M$. An equivalent definition is this: $\lambda$ is Woodin if and only if $\lambda$ is strongly inaccessible and for all $A\subseteq V_\lambda$ there exists a $\lambda_A<\lambda$ which is ${<}\lambda$-$A$-strong. Woodin cardinal: $\lambda_A$ being ${<}\lambda$-$A$-strong means that for all ordinals $\alpha<\lambda$, there exists an elementary embedding $j:V\to M$ with critical point $\lambda_A$, $j(\lambda_A)>\alpha$, $V_\alpha\subseteq M$ and $j(A)\cap V_\alpha=A\cap V_\alpha$. (See also strong cardinal.) A Woodin cardinal is preceded by a stationary set of measurable cardinals, and thus it is a Mahlo cardinal. However, the first Woodin cardinal is not even weakly compact. Consequences: Woodin cardinals are important in descriptive set theory. By a result of Martin and Steel, the existence of infinitely many Woodin cardinals implies projective determinacy, which in turn implies that every projective set is Lebesgue measurable, has the Baire property (differs from an open set by a meager set, that is, a set which is a countable union of nowhere dense sets), and the perfect set property (is either countable or contains a perfect subset). Consequences: The consistency of the existence of Woodin cardinals can be proved using determinacy hypotheses. Working in ZF+AD+DC one can prove that $\Theta_0$ is Woodin in the class of hereditarily ordinal-definable sets. $\Theta_0$ is the first ordinal onto which the continuum cannot be mapped by an ordinal-definable surjection (see $\Theta$ (set theory)). Consequences: Mitchell and Steel showed that, assuming a Woodin cardinal exists, there is an inner model containing a Woodin cardinal in which there is a $\Delta^1_4$-well-ordering of the reals, $\Diamond$ holds, and the generalized continuum hypothesis holds. Shelah proved that if the existence of a Woodin cardinal is consistent, then it is consistent that the nonstationary ideal on $\omega_1$ is $\aleph_2$-saturated. Woodin also proved the equiconsistency of the existence of infinitely many Woodin cardinals and the existence of an $\aleph_1$-dense ideal over $\aleph_1$. Hyper-Woodin cardinals: A cardinal $\kappa$ is called hyper-Woodin if there exists a normal measure $U$ on $\kappa$ such that for every set $S$, the set $\{\lambda<\kappa \mid \lambda \text{ is } {<}\kappa\text{-}S\text{-strong}\}$ is in $U$. Here $\lambda$ is ${<}\kappa$-$S$-strong if and only if for each $\delta<\kappa$ there is a transitive class $N$ and an elementary embedding $j:V\to N$ with $\operatorname{crit}(j)=\lambda$, $j(\lambda)\geq\delta$, and $j(S)\cap H_\delta=S\cap H_\delta$. The name alludes to the classical result that a cardinal is Woodin if and only if for every set $S$, the set $\{\lambda<\kappa \mid \lambda \text{ is } {<}\kappa\text{-}S\text{-strong}\}$ is a stationary set. Hyper-Woodin cardinals: The measure $U$ will contain the set of all Shelah cardinals below $\kappa$. Weakly hyper-Woodin cardinals: A cardinal $\kappa$ is called weakly hyper-Woodin if for every set $S$ there exists a normal measure $U$ on $\kappa$ such that the set $\{\lambda<\kappa \mid \lambda \text{ is } {<}\kappa\text{-}S\text{-strong}\}$ is in $U$; as above, $\lambda$ is ${<}\kappa$-$S$-strong if and only if for each $\delta<\kappa$ there is a transitive class $N$ and an elementary embedding $j:V\to N$ with $\operatorname{crit}(j)=\lambda$, $j(\lambda)\geq\delta$, and $j(S)\cap H_\delta=S\cap H_\delta$. Weakly hyper-Woodin cardinals: The name alludes to the classical result that a cardinal is Woodin if for every set $S$, the set $\{\lambda<\kappa \mid \lambda \text{ is } {<}\kappa\text{-}S\text{-strong}\}$ is stationary. The difference between hyper-Woodin cardinals and weakly hyper-Woodin cardinals is that for hyper-Woodin cardinals the choice of $U$ does not depend on the choice of the set $S$.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Nipple confusion** Nipple confusion: Nipple confusion is the tendency of an infant to adapt unsuccessfully between breast-feeding and bottle-feeding, which can surface when the infant is put back onto breast-feeding. Nipple confusion can turn into nipple refusal, in which the infant refuses both the bottle and the breast. Preventing nipple confusion requires avoiding bottles and pacifiers for the first few weeks after birth. An infant that is used to feeding at the breast and is switched to a bottle cannot use the same technique it uses to latch onto the breast. An infant who gets used to the nipple on a bottle and its fast-flowing milk can have trouble making the transition back. Nipple confusion, or nipple preference, may occur when an infant switches from the breast to an artificial feeding method before a proper breastfeeding routine is established. Young infants who are exposed to artificial teats or bottle nipples might find switching back and forth from bottle to breast tricky, as the feeding mechanism differs between breast and bottle. An infant learns to feed on different nipples differently. Causes: How an infant feeds from the breast differs from how it feeds from a bottle. A breastfed infant regulates the suction required for the flow of milk from the breast, using small pauses to breathe and to swallow. A bottle-fed infant, by contrast, does not have to create suction, as the bottle provides a continuous flow. When switched back to the breast, the infant is suddenly confronted with the loss of the continuous flow it had adapted to. Bottle-feeding requires no serious effort, whereas breastfeeding demands the use of at least 40 muscles in the infant's face. This can make it difficult for the infant to latch efficiently and breastfeed well after being fed from the bottle. Prevention: If the parent does not wait for the infant to perfect its breastfeeding skill, there is a risk the infant might give up breastfeeding sooner than preferred. While some infants easily go back and forth from bottle to breast, not all infants find this constant transitioning easy. However, infants are born with strong instincts to breastfeed; with patience and practice, the infant can be soothed into good feeding habits. Since there is no way to predict whether an infant might face nipple confusion, the use of a bottle or pacifier should be delayed at least until the infant is four weeks old. This allows the infant to get used to breastfeeding at an early stage. Breastfeeding is advocated for the first two to three weeks. It is important that the infant is latching on well and that the breast milk supply is well established. In case giving supplements to the infant is medically necessary, they can be given in ways that do not involve artificial nipples. Nipple confusion can result in suboptimal nutrition for the infant, and the use of artificial nipples is discouraged by the World Health Organization. The American Academy of Pediatrics recommends the use of pacifiers to prevent sudden infant death syndrome. This, however, conflicts with the recommendation of the World Health Organization to discourage the use of artificial nipples, because they may cause nipple confusion and then inadequate nutrition. "Un-confusing" the infant: To get the infant rehabituated, what is recommended is breastfeeding only when the infant is calm, not switching the infant back to the breast when it is extremely hungry, and more skin-to-skin contact during breast-feeding to help reacquaint it with the breast.
In some special instances, a nipple shield can be used to lure the infant back to breastfeeding. When switching to a bottle, a slow-flow nipple is recommended so that the infant has time to adapt to the new technique of feeding. A bottle system that imitates the natural breastfeeding motions of the infant makes the transition from bottle to breast easier. A parent can provide instant gratification to the infant by making it easier to feed from the breast: expressing some milk manually or with a pump before the feeding starts makes the process of breastfeeding a little less hard. Parents facing difficulties can consult a lactation consultant or seek advice from their pediatrician.
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Zone melting** Zone melting: Zone melting (or zone refining, or the floating-zone method or technique) is a group of similar methods of purifying crystals, in which a narrow region of a crystal is melted and this molten zone is moved along the crystal. The molten region melts impure solid at its forward edge and leaves a wake of purer material solidified behind it as it moves through the ingot. The impurities concentrate in the melt and are moved to one end of the ingot. Zone refining was invented by John Desmond Bernal and further developed by William G. Pfann at Bell Labs as a method to prepare high-purity materials, mainly semiconductors, for manufacturing transistors. Its first commercial use was in germanium, refined to one atom of impurity per ten billion, but the process can be extended to virtually any solute–solvent system having an appreciable concentration difference between solid and liquid phases at equilibrium. This process is also known as the float zone process, particularly in semiconductor materials processing. Process details: The principle is that the segregation coefficient k (the ratio of an impurity's concentration in the solid phase to that in the liquid phase) is usually less than one. Therefore, at the solid/liquid boundary, the impurity atoms will diffuse to the liquid region. Thus, by passing a crystal boule through a thin section of furnace very slowly, such that only a small region of the boule is molten at any time, the impurities will be segregated at the end of the crystal. Because of the lack of impurities in the leftover regions which solidify, the boule can grow as a perfect single crystal if a seed crystal is placed at the base to initiate a chosen direction of crystal growth. When high purity is required, such as in the semiconductor industry, the impure end of the boule is cut off and the refining is repeated. In zone refining, solutes are segregated at one end of the ingot in order to purify the remainder, or to concentrate the impurities. In zone leveling, the objective is to distribute solute evenly throughout the purified material, which may be sought in the form of a single crystal. For example, in the preparation of a transistor or diode semiconductor, an ingot of germanium is first purified by zone refining. Then a small amount of antimony is placed in the molten zone, which is passed through the pure germanium. With the proper choice of rate of heating and other variables, the antimony can be spread evenly through the germanium. This technique is also used for the preparation of silicon for use in integrated circuits ("chips"). Process details: Heaters. A variety of heaters can be used for zone melting, their most important characteristic being the ability to form short molten zones that move slowly and uniformly through the ingot. Induction coils, ring-wound resistance heaters, and gas flames are common methods. Another method is to pass an electric current directly through the ingot while it is in a magnetic field, with the resulting magnetomotive force carefully set to be just equal to the weight of the melt in order to hold the liquid suspended. Optical heaters using high-powered halogen or xenon lamps are used extensively in research facilities, particularly for the production of insulators, but their use in industry is limited by the relatively low power of the lamps, which limits the size of crystals produced by this method.
Zone melting can be done as a batch process, or it can be done continuously, with fresh impure material being continually added at one end and purer material being removed from the other, with impure zone melt being removed at whatever rate is dictated by the impurity of the feed stock. Indirect-heating floating zone methods use an induction-heated tungsten ring to heat the ingot radiatively, and are useful when the ingot is a high-resistivity semiconductor on which classical induction heating is ineffective. Process details: Mathematical expression of impurity concentration. When the liquid zone moves by a distance $dx$, the number of impurities in the liquid changes: impurities are incorporated from the solid melting at the leading edge and removed by the solid freezing at the trailing edge. Process details: The symbols are:

- $k_O$: segregation coefficient
- $L$: zone length
- $C_O$: initial uniform impurity concentration of the solidified rod
- $C_L$: concentration of impurities in the liquid melt per unit length
- $I$: number of impurities in the liquid
- $I_O$: number of impurities in the zone when first formed at the bottom
- $C_S$: concentration of impurities in the solid rod

During a movement $dx$ of the molten zone, the number of impurities in the liquid changes according to

$$dI = (C_O - k_O C_L)\,dx, \qquad C_L = I/L.$$

Separating variables and integrating,

$$\int_0^x dx' = \int_{I_O}^{I} \frac{dI'}{C_O - k_O I'/L}, \qquad I_O = C_O L.$$

With $C_S = k_O I/L$, this gives the single-pass concentration profile

$$C_S(x) = C_O\left(1 - (1 - k_O)\,e^{-k_O x/L}\right)$$

(a numerical sketch of this profile follows this entry). Applications: Solar cells. In solar cells, float zone processing is particularly useful because the single-crystal silicon grown has desirable properties. The bulk charge carrier lifetime in float-zone silicon is the highest among the various manufacturing processes: float-zone carrier lifetimes are around 1000 microseconds, compared to 20–200 microseconds with the Czochralski method and 1–30 microseconds with cast polycrystalline silicon. A longer bulk lifetime increases the efficiency of solar cells significantly. Applications: High-resistivity devices. The process is used for the production of float-zone silicon-based high-power semiconductor devices. Related processes: Zone remelting. Another related process is zone remelting, in which two solutes are distributed through a pure metal. This is important in the manufacture of semiconductors, where two solutes of opposite conductivity type are used. For example, in germanium, pentavalent elements of group V such as antimony and arsenic produce negative (n-type) conduction, and trivalent elements of group III such as aluminium and boron produce positive (p-type) conduction. By melting a portion of such an ingot and slowly refreezing it, solutes in the molten region become distributed to form the desired n-p and p-n junctions.
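The single-pass profile $C_S(x)$ derived above is easy to evaluate numerically. Below is a minimal Python sketch; the parameter values are illustrative, not data for any particular material:

```python
import math

def zone_refined_concentration(x, c0, k, L):
    """Impurity concentration C_S(x) left in the solid after a single
    pass of the molten zone (the profile derived above); valid away
    from the final zone length of the ingot, where the last liquid
    freezes without fresh feed material."""
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / L))

# Illustrative values: segregation coefficient k = 0.1, zone length
# L = 1 cm, initial concentration normalized to C_O = 1.
for x_cm in [0.0, 1.0, 2.0, 5.0, 10.0]:
    c = zone_refined_concentration(x_cm, c0=1.0, k=0.1, L=1.0)
    print(f"x = {x_cm:4.1f} cm  ->  C_S/C_O = {c:.3f}")
```

At x = 0 the expression reduces to $C_S = k_O C_O$, the expected concentration of the first solid to freeze, and it approaches $C_O$ far from the starting end, which is why multiple passes are used when high purity is required.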
kaggle.com/datasets/mbanaei/all-paraphs-parsed-expanded
**Hunting oscillation** Hunting oscillation: Hunting oscillation is a self-oscillation, usually unwanted, about an equilibrium. The expression came into use in the 19th century and describes how a system "hunts" for equilibrium. The expression is used to describe phenomena in such diverse fields as electronics, aviation, biology, and railway engineering. Railway wheelsets: A classical hunting oscillation is a swaying motion of a railway vehicle (often called truck hunting or bogie hunting) caused by the coning action on which the directional stability of an adhesion railway depends. It arises from the interaction of adhesion forces and inertial forces. At low speed, adhesion dominates but, as the speed increases, the adhesion forces and inertial forces become comparable in magnitude and the oscillation begins at a critical speed. Above this speed, the motion can be violent, damaging track and wheels and potentially causing derailment. The problem does not occur on systems with a differential because the action depends on both wheels of a wheelset rotating at the same angular rate; however, differentials tend to be rare, and conventional trains have their wheels fixed to the axles in pairs instead. Some trains, like the Talgo 350, have no differential, yet they are mostly not affected by hunting oscillation, as most of their wheels rotate independently of one another. The wheels of the power car, however, can be affected by hunting oscillation, because they are fixed to the axles in pairs, as in conventional bogies. Wheels with less coning, and bogies with independent wheels that are not fixed to an axle in pairs, are cheaper than a suitable differential for the bogies of a train. The problem was first noticed towards the end of the 19th century, when train speeds became high enough to encounter it. Serious efforts to counteract it got underway in the 1930s, giving rise to lengthened trucks and the side-damping swing hanger truck. In the development of the Japanese Shinkansen, less-conical wheels and other design changes were used to extend truck design speeds above 225 km/h (140 mph). Advances in wheel and truck design based on research and development efforts in Europe and Japan have extended the speeds of steel wheel systems well beyond those attained by the original Shinkansen, while the advantage of backwards compatibility keeps such technology dominant over alternatives such as the hovertrain and maglev systems. The speed record for steel-wheeled trains is held by the French TGV, at 574.9 km/h (357 mph). Railway wheelsets: Kinematic analysis. While a qualitative description provides some understanding of the phenomenon, deeper understanding inevitably requires a mathematical analysis of the vehicle dynamics. Even then, the results may be only approximate. Railway wheelsets: A kinematic description deals with the geometry of motion, without reference to the forces causing it, so the analysis begins with a description of the geometry of a wheel set running on a straight track. Since Newton's second law relates forces to the acceleration of bodies, the forces acting may then be derived from the kinematics by calculating the accelerations of the components. However, if these forces change the kinematic description (as they do in this case), then the results may only be approximately correct.
Railway wheelsets: Assumptions and non-mathematical description. This kinematic description makes a number of simplifying assumptions since it neglects forces. For one, it assumes that the rolling resistance is zero. A wheelset (not attached to a train or truck) is given a push forward on a straight and level track. The wheelset starts coasting and never slows down, since there are no forces (except downward forces on the wheelset to make it adhere to the track and not slip). If initially the wheelset is centered on the railroad track, then the effective diameters of each wheel are the same and the wheelset rolls down the track in a perfectly straight line forever. But if the wheelset is a little off-center, so that the effective diameters (or radii) are different, then the wheelset starts to move in a curve of radius R (depending on these wheelset radii, etc.; to be derived later on). The problem is to use kinematic reasoning to find the trajectory of the wheelset, or more precisely, the trajectory of the center of the wheelset projected vertically on the roadbed in the center of the track. This is a trajectory on the plane of the level earth's surface, plotted on an x-y graph where x is the distance along the railroad and y is the "tracking error", the deviation of the center of the wheelset from the straight line of the railway running down the center of the track (midway between the two rails). Railway wheelsets: To illustrate that a wheelset trajectory follows a curved path, one may place a nail or screw on a flat table top and give it a push. It will roll in a circular curve because the nail or screw is like a wheelset with extremely different diameter wheels: the head is analogous to a large-diameter wheel and the pointed end is like a small-diameter wheel. While the nail or screw will turn around in a full circle (and more), the railroad wheelset behaves differently because, as soon as it starts to turn in a curve, the effective diameters change in such a way as to decrease the curvature of the path. Note that "radius" and "curvature" refer to the curvature of the trajectory of the wheelset and not the curvature of the railway, since this is perfectly straight track. As the wheelset rolls on, the curvature decreases until the wheels reach the point where their effective diameters are equal and the path is no longer curving. But the trajectory has a slope at this point (it is a straight line which crosses diagonally over the centerline of the track), so that it overshoots the centerline of the track and the effective diameters reverse (the formerly smaller-diameter wheel becomes the larger diameter and conversely). This results in the wheelset moving in a curve in the opposite direction. Again it overshoots the centerline, and this phenomenon continues indefinitely with the wheelset oscillating from side to side. Note that the wheel flange never makes contact with the rail. In this model, the rails are assumed to always contact the wheel tread along the same line on the rail head, which amounts to assuming that the rails are knife-edged and make contact with the wheel tread only along a line (of zero width). Railway wheelsets: Mathematical analysis. The train stays on the track by virtue of the conical shape of the wheel treads. If a wheelset is displaced to one side by an amount y (the tracking error), the radius of the tread in contact with the rail on one side is reduced, while on the other side it is increased.
The angular velocity is the same for both wheels (they are coupled via a rigid axle), so the larger diameter tread speeds up, while the smaller slows down. The wheel set steers around a centre of curvature defined by the intersection of the generator of a cone passing through the points of contact with the wheels on the rails and the axis of the wheel set. Applying similar triangles, we have for the turn radius:

$$\frac{1}{R} = \frac{2ky}{rd},$$

where $d$ is the track gauge, $r$ the wheel radius when running straight and $k$ is the tread taper (which is the slope of the tread in the horizontal direction perpendicular to the track). Railway wheelsets: The path of the wheel set relative to the straight track is defined by a function $y(x)$, where $x$ is the progress along the track. This is sometimes called the tracking error. Provided the direction of motion remains more or less parallel to the rails, the curvature of the path may be related to the second derivative of $y$ with respect to distance along the track as approximately

$$\frac{d^2y}{dx^2} = -\frac{1}{R}.$$

It follows that the trajectory along the track is governed by the equation

$$\frac{d^2y}{dx^2} = -\left(\frac{2k}{rd}\right) y.$$

This is a simple harmonic motion having wavelength

$$\lambda = 2\pi\sqrt{\frac{rd}{2k}}$$

(Klingel's formula). This kinematic analysis implies that trains sway from side to side all the time. In fact, this oscillation is damped out below a critical speed and the ride is correspondingly more comfortable. The kinematic result ignores the forces causing the motion. These may be analyzed using the concept of creep (non-linear) but are somewhat difficult to quantify simply, as they arise from the elastic distortion of the wheel and rail at the regions of contact. These are the subject of frictional contact mechanics; an early presentation that includes these effects in hunting motion analysis was presented by Carter. See Knothe for a historical overview. Railway wheelsets: If the motion is substantially parallel with the rails, with amplitude $y_0$ so that $y = y_0 \sin(2\pi x/\lambda)$, the angular displacement of the wheel set ($\theta$) is given by

$$\theta = \frac{dy}{dx}.$$

Hence:

$$\theta = \frac{2\pi y_0}{\lambda}\cos\!\left(\frac{2\pi x}{\lambda}\right).$$

The angular deflection also follows a simple harmonic motion, which lags behind the side to side motion by a quarter of a cycle. In many systems which are characterised by harmonic motion involving two different states (in this case the axle yaw deflection and the lateral displacement), the quarter cycle lag between the two motions endows the system with the ability to extract energy from the forward motion. This effect is observed in "flutter" of aircraft wings and "shimmy" of road vehicles, as well as hunting of railway vehicles. The kinematic solution derived above describes the motion at the critical speed. Railway wheelsets: In practice, below the critical speed, the lag between the two motions is less than a quarter cycle so that the motion is damped out but, above the critical speed, the lag is greater than a quarter cycle so that the motion is amplified. Railway wheelsets: In order to estimate the inertial forces, it is necessary to express the distance derivatives as time derivatives. This is done using the speed of the vehicle $U$, which is assumed constant:

$$\frac{d}{dt} = U\frac{d}{dx}.$$

The angular acceleration of the axle in yaw is:

$$\frac{d^2\theta}{dt^2} = U^2\frac{d^2\theta}{dx^2}.$$

The inertial moment (ignoring gyroscopic effects) is:

$$C\frac{d^2\theta}{dt^2} = Fd,$$

where $F$ is the force acting along the rails and $C$ is the moment of inertia of the wheel set. Railway wheelsets: The maximum frictional force between the wheel and rail is given by:

$$F = \frac{\mu W}{2},$$

where $W$ is the axle load and $\mu$ is the coefficient of friction.
Railway wheelsets: If the motion is substantially parallel with the rails, the angular displacement of the wheelset θ is given by

\[ \theta = \frac{dy}{dx} \]

Hence, writing \( y = y_{0}\sin(2\pi x/\lambda) \),

\[ \theta = \frac{2\pi y_{0}}{\lambda}\cos\!\left(\frac{2\pi x}{\lambda}\right) \]

The angular deflection also follows a simple harmonic motion, which lags behind the side-to-side motion by a quarter of a cycle. In many systems characterized by harmonic motion involving two different states (in this case the axle yaw deflection and the lateral displacement), the quarter-cycle lag between the two motions endows the system with the ability to extract energy from the forward motion. The same effect is observed in "flutter" of aircraft wings and "shimmy" of road vehicles, as well as in hunting of railway vehicles. The kinematic solution derived above describes the motion at the critical speed.

Railway wheelsets: In practice, below the critical speed the lag between the two motions is less than a quarter cycle, so the motion is damped out; above the critical speed the lag is greater than a quarter cycle, so the motion is amplified.

Railway wheelsets: In order to estimate the inertial forces, it is necessary to express the distance derivatives as time derivatives. This is done using the speed of the vehicle U, which is assumed constant:

\[ \frac{d}{dt} = U\frac{d}{dx} \]

The angular acceleration of the axle in yaw is

\[ \frac{d^{2}\theta}{dt^{2}} = U^{2}\frac{d^{2}\theta}{dx^{2}} = -U^{2}\left(\frac{2\pi}{\lambda}\right)^{2}\theta \]

The inertial moment (ignoring gyroscopic effects) is

\[ C\frac{d^{2}\theta}{dt^{2}} = F\,d \]

where F is the force acting along the rails and C is the moment of inertia of the wheelset in yaw.

Railway wheelsets: The maximum frictional force between wheel and rail is given by

\[ F = \frac{\mu W}{2} \]

where W is the axle load and μ is the coefficient of friction. Gross slipping will occur at a combination of speed and axle deflection satisfying

\[ C\,U^{2}\left(\frac{2\pi}{\lambda}\right)^{2}\theta_{0} = \frac{\mu W d}{2} \]

where \( \theta_{0} = 2\pi y_{0}/\lambda \) is the yaw amplitude. This expression yields a significant overestimate of the critical speed, but it does illustrate the physical reason why hunting occurs: the inertial forces become comparable with the adhesion forces above a certain speed. Limiting friction is a poor representation of the adhesion force in this case.

Railway wheelsets: The actual adhesion forces arise from the distortion of the tread and rail in the region of contact. There is no gross slippage, just elastic distortion and some local slipping (creep slippage). During normal operation these forces are well within the limiting friction constraint. A complete analysis takes these forces into account, using rolling contact mechanics theories. However, the kinematic analysis assumed that there was no slippage at all at the wheel–rail contact. It is now clear that there is some creep slippage, which makes the calculated sinusoidal trajectory of the wheelset (per Klingel's formula) not exactly correct.

Railway wheelsets: Energy balance

In order to obtain an estimate of the critical speed, we use the fact that the condition for which this kinematic solution is valid corresponds to the case of no net energy exchange with the surroundings, so by considering the kinetic and potential energy of the system we should be able to derive the critical speed.

Railway wheelsets: Let

\[ \omega = \frac{d\theta}{dt} \]

Using the operator

\[ \frac{d}{dt} = \omega\frac{d}{d\theta} \]

the angular acceleration equation may be expressed in terms of the angular velocity in yaw, ω:

\[ \omega\frac{d\omega}{d\theta} = -U^{2}\left(\frac{2\pi}{\lambda}\right)^{2}\theta \]

Integrating:

\[ \omega^{2} = U^{2}\left(\frac{2\pi}{\lambda}\right)^{2}\left(\theta_{0}^{2}-\theta^{2}\right) \]

so the kinetic energy due to rotation in yaw is

\[ \tfrac{1}{2}C\omega^{2} = \tfrac{1}{2}C\,U^{2}\left(\frac{2\pi}{\lambda}\right)^{2}\left(\theta_{0}^{2}-\theta^{2}\right) \]

When the axle yaws, the points of contact move outwards on the treads, so that the height of the axle is lowered. The distance between the support points increases to

\[ d\sec\theta \approx d\left(1+\frac{\theta^{2}}{2}\right) \]

(to second order of small quantities).

Railway wheelsets: The displacement of each support point out from the center of its tread is

\[ \frac{d\,\theta^{2}}{4} \]

so, with tread taper k, the axle falls by

\[ \frac{k\,d\,\theta^{2}}{4} \]

The work done by lowering the axle load is therefore

\[ P = \frac{W\,k\,d\,\theta_{0}^{2}}{4} \]

This is energy lost from the system, so in order for the motion to continue, an equal amount of energy must be extracted from the forward motion of the wheelset. The outer wheel velocity is given by

\[ V = U\left(1+\frac{ky}{r}\right) \]

Its kinetic energy is

\[ \tfrac{1}{4}mU^{2}\left(1+\frac{ky}{r}\right)^{2} \]

and for the inner wheel it is

\[ \tfrac{1}{4}mU^{2}\left(1-\frac{ky}{r}\right)^{2} \]

where m is the mass of both wheels. The increase in kinetic energy is

\[ \Delta E = \frac{mU^{2}k^{2}y^{2}}{2r^{2}} \]

The motion will continue at constant amplitude as long as the energy extracted from the forward motion, which manifests itself as increased kinetic energy of the wheelset at zero yaw, is equal to the potential energy lost by the lowering of the axle at maximum yaw.

Railway wheelsets: Now, from the kinematics,

\[ \theta_{0} = \frac{2\pi y_{0}}{\lambda} \]

but

\[ \left(\frac{\lambda}{2\pi}\right)^{2} = \frac{d\,r}{2k} \]

so that \( y_{0}^{2} = \dfrac{d\,r}{2k}\,\theta_{0}^{2} \). The translational kinetic energy gained at maximum displacement (where y = y₀ and the yaw is zero) is

\[ \Delta E = \frac{mU^{2}k^{2}y_{0}^{2}}{2r^{2}} = \frac{mU^{2}k\,d\,\theta_{0}^{2}}{4r} \]

The total kinetic energy gained at zero yaw is

\[ E = \tfrac{1}{2}C\,U^{2}\left(\frac{2\pi}{\lambda}\right)^{2}\theta_{0}^{2} + \frac{mU^{2}k\,d\,\theta_{0}^{2}}{4r} = \frac{k\,U^{2}\theta_{0}^{2}}{4r}\left(\frac{4C}{d}+m\,d\right) \]

using \( (2\pi/\lambda)^{2} = 2k/(d\,r) \). The critical speed is found from the energy balance:

\[ \frac{k\,U^{2}\theta_{0}^{2}}{4r}\left(\frac{4C}{d}+m\,d\right) = \frac{W\,k\,d\,\theta_{0}^{2}}{4} \]

Hence the critical speed is given by

\[ U^{2} = \frac{W\,d^{2}\,r}{4C+m\,d^{2}} \]

This is independent of the wheel taper, but depends on the ratio of the axle load to the wheelset mass. If the treads were truly conical in shape, the critical speed would be independent of the taper. In practice, wear on the wheel causes the taper to vary across the tread width, so that the value of taper used to determine the potential energy differs from that used to calculate the kinetic energy. Denoting the former as a, the critical speed becomes

\[ U^{2} = \frac{a}{k}\cdot\frac{W\,d^{2}\,r}{4C+m\,d^{2}} \]

where a is now a shape factor determined by the wheel wear. This result is derived in Wickens (1965) from an analysis of the system dynamics using standard control engineering methods.
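The critical-speed estimate is equally easy to evaluate. All parameter values in this sketch are illustrative assumptions (not data from this article); the low result for a bare wheelset is consistent with the next section's point that suspension constraints are what raise the critical speed to practical values.

```python
import math

# Critical speed of a free wheelset from the energy balance above:
#   U^2 = (a/k) * W * d^2 * r / (4*C + m*d^2)
# Assumed illustrative parameters (not from the article):
g = 9.81
d, r, k = 1.5, 0.46, 0.05   # track gauge (m), wheel radius (m), tread taper
m = 1200.0                  # wheelset mass, kg
W = 10000.0 * g             # axle load for a 10 t axle, N
C = 700.0                   # wheelset yaw moment of inertia, kg m^2

U2 = W * d**2 * r / (4 * C + m * d**2)
print(f"unworn treads (a = k): critical speed ~ {math.sqrt(U2):.1f} m/s")

# Wear makes the potential-energy taper a differ from the kinematic taper k.
a = 0.03
print(f"with wear factor a/k = {a / k:.2f}:     ~ {math.sqrt((a / k) * U2):.1f} m/s")
```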
Railway wheelsets: Limitation of simplified analysis

The motion of a wheelset is much more complicated than this analysis would indicate. There are additional restraining forces applied by the vehicle suspension and, at high speed, the wheelset will generate additional gyroscopic torques, which modify the estimate of the critical speed. Conventionally, a railway vehicle has stable motion at low speeds; as it reaches high speeds, the motion becomes unstable. Nonlinear analysis of rail vehicle system dynamics aims at an analytical investigation of bifurcation, nonlinear lateral stability and the hunting behavior of rail vehicles on tangent track; one study describes the use of the Bogoliubov method for this analysis. Such studies mostly focus on two main matters: treating the vehicle body as a fixed support, and the influence of nonlinear elements on the calculation of the hunting speed. A real railway vehicle has many more degrees of freedom and, consequently, may have more than one critical speed; it is by no means certain that the lowest is dictated by the wheelset motion. However, the analysis is instructive because it shows why hunting occurs. As the speed increases, the inertial forces become comparable with the adhesion forces. That is why the critical speed depends on the ratio of the axle load (which determines the adhesion force) to the wheelset mass (which determines the inertial forces).

Railway wheelsets: Alternatively, below a certain speed the energy extracted from the forward motion is insufficient to replace the energy lost by the lowering of the axles, and the motion damps out; above this speed, the energy extracted is greater than the loss in potential energy and the amplitude builds up.

Railway wheelsets: The potential energy at maximum axle yaw may be increased by including an elastic constraint on the yaw motion of the axle, so that there is a contribution arising from spring tension (the sketch at the end of this section shows how such a spring enters the simplified energy balance). Arranging wheels in bogies to increase the constraint on the yaw motion of wheelsets, and applying elastic constraints to the bogie, also raises the critical speed. Introducing elastic forces into the equation permits suspension designs that are limited only by the onset of gross slippage, rather than by classical hunting. The penalty to be paid for the virtual elimination of hunting is a straight track, with an attendant right-of-way problem and incompatibility with legacy infrastructure.

Railway wheelsets: Hunting is a dynamic problem which can be solved, in principle at least, by active feedback control, which may be adapted to the quality of the track. However, the introduction of active control raises reliability and safety issues. Shortly after the onset of hunting, gross slippage occurs and the wheel flanges impact on the rails, potentially causing damage to both.

Railway wheelsets: Road–rail vehicles

Many road–rail vehicles feature independent axles and suspension systems on each rail wheel. When this is combined with the presence of road wheels on the rail, it becomes difficult to use the formulae above. Historically, road–rail vehicles have had their front wheels set slightly toe-in, which has been found to minimize hunting while the vehicle is driven on rail.
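To close, here is how the elastic yaw constraint mentioned above can be folded into the simplified energy balance. This is an illustrative extension under the same assumptions as the analysis above, not an expression from this article: a yaw spring of stiffness k_yaw is assumed to store \( \tfrac{1}{2}k_{yaw}\theta_{0}^{2} \) of potential energy at maximum yaw, and all numerical values are assumed.

```python
import math

# Illustrative extension (not from the article): a yaw spring of stiffness
# k_yaw (N*m/rad) adds (1/2)*k_yaw*theta0^2 of potential energy at maximum
# yaw, so the energy balance above generalizes to
#   U^2 = (W*a*d + 2*k_yaw) * d * r / (k * (4*C + m*d^2))
g = 9.81
d, r, k, a = 1.5, 0.46, 0.05, 0.05    # geometry and (unworn) tapers, assumed
m, W, C = 1200.0, 10000.0 * g, 700.0  # wheelset mass, axle load, yaw inertia

def critical_speed(k_yaw):
    """Critical speed (m/s) with an elastic yaw constraint of stiffness k_yaw."""
    return math.sqrt((W * a * d + 2 * k_yaw) * d * r / (k * (4 * C + m * d**2)))

for k_yaw in (0.0, 1e5, 1e6):         # assumed yaw stiffnesses, N*m/rad
    print(f"k_yaw = {k_yaw:9.0f} N*m/rad -> U_crit ~ {critical_speed(k_yaw):5.1f} m/s")
```

In this sketch even a modest yaw stiffness raises the estimate markedly, which illustrates why mounting wheelsets in elastically constrained bogies raises the critical speed.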